Dataset columns and feature types (the records below list each row's values in this column order, separated by `|`):

| Column | Type |
|---|---|
| paperhash | string |
| s2_corpus_id | string |
| arxiv_id | string |
| title | string |
| abstract | string |
| authors | sequence |
| summary | string |
| field_of_study | sequence |
| venue | string |
| publication_date | string |
| n_references | int32 |
| n_citations | int32 |
| n_influential_citations | int32 |
| introduction | string |
| background | string |
| methodology | string |
| experiments_results | string |
| conclusion | string |
| full_text | string |
| decision | bool |
| decision_text | string |
| reviews | sequence |
| comments | sequence |
| references | sequence |
| hypothesis | string |
| month_since_publication | int32 |
| avg_citations_per_month | float32 |
| mean_score | float32 |
| mean_confidence | float32 |
| mean_novelty | float32 |
| mean_correctness | float32 |
| mean_clarity | float32 |
| mean_impact | float32 |
| mean_reproducibility | float32 |
| openreview_submission_id | string |
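For illustration, here is a minimal sketch of how a record with this schema could be loaded and inspected with the Hugging Face `datasets` library. The repository id is a placeholder (the dataset's actual path is not shown on this page), and the nested access pattern assumes the `reviews` field is organized as the struct of parallel lists rendered in the records below.

```python
from datasets import load_dataset  # Hugging Face `datasets` library

# Placeholder repository id -- substitute the dataset's actual path.
ds = load_dataset("org-name/openreview-peer-reviews", split="train")

row = ds[0]
print(row["title"])          # paper title
print(row["decision"])       # acceptance decision (bool)
print(row["decision_text"])  # meta-review / decision comment

# `reviews` is rendered as a struct of parallel lists
# (review_id, review, score, confidence, ...).
scores = row["reviews"]["score"]
print(sum(scores) / len(scores))  # appears to match `mean_score` for the records shown

# Derived citation rate; consistent with the first record (39 / 84 ~= 0.4643).
if row["month_since_publication"]:
    print(row["n_citations"] / row["month_since_publication"])
```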
kim|memorization_precedes_generation_learning_unsupervised_gans_with_memory_networks|ICLR_cc_2018_Conference | 1803.01500v2 | Memorization Precedes Generation: Learning Unsupervised GANs with Memory Networks | We propose an approach to address two issues that commonly occur during training of unsupervised GANs. First, since GANs use only a continuous latent distribution to embed multiple classes or clusters of data, they often do not correctly handle the structural discontinuity between disparate classes in a latent space. Second, discriminators of GANs easily forget about past generated samples by generators, incurring instability during adversarial training. We argue that these two infamous problems of unsupervised GAN training can be largely alleviated by a learnable memory network to which both generators and discriminators can access. Generators can effectively learn representation of training samples to understand underlying cluster distributions of data, which ease the structure discontinuity problem. At the same time, discriminators can better memorize clusters of previously generated samples, which mitigate the forgetting problem. We propose a novel end-to-end GAN model named memoryGAN, which involves a memory network that is unsupervisedly trainable and integrable to many existing GAN models. With evaluations on multiple datasets such as Fashion-MNIST, CelebA, CIFAR10, and Chairs, we show that our model is probabilistically interpretable, and generates realistic image samples of high visual fidelity. The memoryGAN also achieves the state-of-the-art inception scores over unsupervised GAN models on the CIFAR10 dataset, without any optimization tricks and weaker divergences. | {
"name": [
"youngjin kim",
"minjung kim",
"gunhee kim"
],
"affiliation": [
{
"laboratory": "",
"institution": "Seoul National University",
"location": "{'settlement': 'Seoul', 'country': 'Korea'}"
},
{
"laboratory": "",
"institution": "Seoul National University",
"location": "{'settlement': 'Seoul', 'country': 'Korea'}"
},
{
"laboratory": "",
"institution": "Seoul National University",
"location": "{'settlement': 'Seoul', 'country': 'Korea'}"
}
]
} | null | [
"Computer Science",
"Mathematics"
] | International Conference on Learning Representations | 2018-02-15 | 46 | 39 | null | null | null | null | null | null | null | true |
I am going to recommend acceptance of this paper despite being worried about the issues raised by reviewer 1. In particular,
1: the best possible inception score would be obtained by copying the training dataset
2: the highest visual quality samples would be obtained by copying the training dataset
3: perturbations (in the hidden space of a convnet) of training data might not be perturbations in l2, and so one might not find a close nearest neighbor with an l2 search
4: it has been demonstrated in other works that perturbations of convnet features of training data (e.g. trained as auto-encoders) can make convincing "new samples"; or more generally, paths between nearby samples in the hidden space of a convnet can be convincing new samples.
These together suggest the possibility that the method presented is not necessarily doing a great job as a generative model or as a density model (it may be, we just can't tell...), but it is doing a good job at hacking the metrics (inception score, visual quality). This is not an issue with only this paper, and I do not want to punish the authors of this paper for the failings of the field; but this work, especially because of its explicit use of training examples in the memory, nicely exposes the deficiencies in our community's methodology for evaluating GANs and other generative models.
| {
"review_id": [
"SyzkuzYxG",
"S1ck4rYxM",
"Bko3dzDlG"
],
"review": [
{
"title": "title: An interesting idea with clear demonstration",
"paper_summary": null,
"main_review": "main_review: MemoryGAN is proposed to handle structural discontinuity (avoid unrealistic samples) for the generator, and the forgetting behavior of the discriminator. The idea to incorporate memory mechanism into GAN is interesting, and the authors make nice interpretation why this needed, and clearly demonstrate which component helps (including the connections to previous methods). \n\nMy major concerns:\n\nFigure 1 is questionable in demonstrating the advantage of proposed MemoryGAN. My understanding is that four z's used in DCGAN and MemoryGAN are \"randomly sampled\" and fixed, interpolation is done in latent space, and propagate to x to show the samples. Take MNIST for example, It can be seen that the DCGAN has to (1) transit among digits in different classes, while MemoryGAN only (2) transit among digits in the same class. Task 1 is significantly harder than task 2, it is not surprise that DCGAN generate unrealistic images. A better experiment is to fix four digits from different class at first, find their corresponding latent codes, do interpolation, and propagate back to sample space to visualize results. If the proposed technique can truly handle structural discontinuity, it will \"jump\" over the sample manifold from one class to another, and thus avoid unrealistic samples. Also, the current illustration also indicates that the generated samples by MemoryGAN is not diverse.\n\nIt seems the memory mechanism can bring major computational overhead, is it possible to provide the comparison on running time?\n\nTo what degree the MemoryGAN can handle structural discontinuity? It can be seen from Table 2 that larger improvement is observed when tested on a more diverse dataset. For example, the improvement gap from MNIST to CIFAR is larger. If the MemoryGAN can truly deal with structural discontinuity, the results on generating a wide range of different images for ImageNet may endow the paper with higher impact.\n\nThe authors should consider to make their code reproducible and public. \n\n\nMinor comments:\n\nIn Section 4.3, Please fix \"Results in 2\" as \"Results in Table 2\".\n\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Review from AnonReviewer3",
"paper_summary": null,
"main_review": "main_review: [Overview]\n\nIn this paper, the authors proposed a novel model called MemoryGAN, which integrates memory network with GAN. As claimed by the authors, MemoryGAN is aimed at addressing two problems of GAN training: 1) difficult to model the structural discontinuity between disparate classes in the latent space; 2) catastrophic forgetting problem during the training of discriminator about the past synthesized samples by the generator. It exploits the life-long memory network and adapts it to GAN. It consists of two parts, discriminative memory network (DMN) and Memory Conditional Generative Network (MCGN). DMN is used for discriminating input samples by integrating the memory learnt in the memory network, and MCGN is used for generating images based on random vector and the sampled memory from the memory network. In the experiments, the authors evaluated memoryGAN on three datasets, CIFAR-10, affine-MNIST and Fashion-MNIST, and demonstrated the superiority to previous models. Through ablation study, the authors further showed the effects of separate components in memoryGAN. \n\n[Strengths]\n\n1. This paper is well-written. All modules in the proposed model and the experiments were explained clearly. I enjoyed much to read the paper.\n\n2. The paper presents a novel method called MemoryGAN for GAN training. To address the two infamous problems mentioned in the paper, the authors proposed to integrate a memory network into GAN. Through memory network, MemoryGAN can explicitly learn the data distribution of real images and fake images. I think this is a very promising and meaningful extension to the original GAN. \n\n3. With MemoryGAN, the authors achieved best Inception Score on CIFAR-10. By ablation study, the authors demonstrated each part of the model helps to improve the final performance.\n\n[Comments]\n\nMy comments are mainly about the experiment part:\n\n1. In Table 2, the authors show the Inception Score of images generated by DCGAN at the last row. On CIFAR-10, it is ~5.35. As the authors mentioned, removing EM, MCGCN and Memory will result in a conventional DCGAN. However, as far as I know, DCGAN could achieve > 6.5 Inception Score in general. I am wondering what makes such a big difference between the reported numbers in this paper and other papers?\n\n2. In the experiments, the authors set N = 16,384, and M = 512, and z is with dimension 16. I did not understand why the memory size is such large. Take CIFAR-10 as the example, its training set contains 50k images. Using such a large memory size, each memory slot will merely count for several samples. Is a large memory size necessary to make MemoryGAN work? If not, the authors should also show ablated study on the effect of different memory size; If it is true, please explain why is that. Also, the authors should mention the training time compared with DCGAN. Updating memory with such a large size seems very time-consuming.\n\n3. Still on the memory size in this model. I am curious about the results if the size is decreased to the same or comparable number of image categories in the training set. As the author claimed, if the memory network could learn to cluster training data into different category, we should be able to see some interesting results by sampling the keys and generate categoric images.\n\n4. The paper should be compared with InfoGAN (Chen et al. 2016), and the authors should explain the differences between two models in the related work. 
Similar to MemoryGAN, InfoGAN also did not need any data annotations, but could learn the latent code flexibly.\n\n[Summary]\n\nThis paper proposed a new model called MemoryGAN for image generation. It combined memory network with GAN, and achieved state-of-art performance on CIFAR-10. The arguments that MemoryGAN could solve the two infamous problem make sense. As I mentioned above, I did not understand why the authors used such large memory size. More explanations and experiments should be conducted to justify this setting. Overall, I think MemoryGAN opened a new direction of GAN and worth to further explore.\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Comments on the probabilistic interpretation, writing and the generalization ability",
"paper_summary": null,
"main_review": "main_review: In summary, the paper introduces a memory module to the GANs to address two existing problems: (1) no discrete latent structures and (2) the forgetting problem. The memory provides extra information for both the generation and the discrimination, compared with vanilla GANs. Based on my knowledge, the idea is novel and the Inception Score results are excellent. However, there are several major comments should be addressed, detailed as follows:\n\n1. The probabilistic interpretation seems not correct.\n\nAccording to Eqn (1), the authors define the likelihood of a sample x given a slot index c as p(x|c=i) = N(q; K_i, sigma^2), where q is the normalized output of a network mu given x. It seems that this is not a well defined probability distribution because the Gaussian distribution is defined over the whole space while the support of q is restricted within a simplex due to the normalization. Then, the integral over x should be not equal to 1 and hence all of the probabilistic interpretation including the equations in the Section 3. and results in the Section 4.1. are not reliable. I'm not sure whether there is anything misunderstood because the writing of the Section 3 is not so clear. \n\n2. The writing of the Section 3 should be improved.\n\nCurrently, the Section 3 is not easy to follow for me due to the following reasons. First, there lacks a coherent description of the notations. For instance, what's the difference between x and x', used in Section 3.1.1 and 3.1.2 respectively? According to the paper, both denote a sample. Second, the setting is somewhat unclear. For example, it is not natural to discuss the posterior without the clear definition of the likelihood in Eqn (1). Third, a lot of details and comparison with other methods should be moved to other parts and the summary of the each part should be stated explicitly and clearly before going into details.\n\n3. Does the large memory hurt the generalization ability of the GANs?\n\nFirst of all, I notice that the random noise is much lower dimensional than the memory, e.g. 2 v.s. 256 on affine-MNIST. Does such large memory hurt the generalization ability of GANs? I suspect that most of the information are stored in the memory and only small change of the training data is allowed. I found that the samples in Figure 1 and Figure 5 are very similar and the interpolation only shows a very small local subspace near by a training data, which cannot show the generalization ability. Also note that the high Inception Score cannot show the generalization ability as well because memorizing the training data will obtain the highest score. I know it's hard to evaluate a GAN model but I think the authors can at least show the nearest neighbors in the training dataset and the training data that maximizes the activation of the corresponding memory slot together with the generated samples to see the difference.\n\nBesides, personally speaking, Figure 1 is not so fair because a MemoryGAN only shows a very small local subspace near by a training data while the vanilla GAN shows a large subspace, making the quality of the generation different. The MemoryGAN also has failure samples in the whole latent space as shown in Figure 4.\n\nOverall, I think this paper is interesting but currently it does not reach the acceptance threshold.\n\nI change the rating to 6 based on the revised version, in which most of the issues are addressed.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.6666666865348816,
0.5555555820465088
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Change rating to 6",
"Reply for Reviewer 2",
"We'd like to appreciate Reviewer 1",
"Reply for Reviewer 3",
"Reply for Reviewer 1"
],
"comment": [
"Thanks for the detailed rebuttal. I'm glad to see most of the issues are addressed in the revision and I'd like to change the rating to 6.",
"We thank Reviewer 2 for positive and constructive reviews. Below, we respond each comment in details. Please see blue fonts in the newly uploaded draft to check how our paper is updated.\n\n1. Fig.1.\nInitially, we fixed the discrete latent variable c for MemoryGAN, because it is a memory index, and thus it is meaningless to interpolate over c. However, we follow Reviewer’s rationale and update Fig.1 in the new draft. Please check it.\nIn new Fig.1.(b,d), we first randomly sample both (z,c) shown at the four corners in blue boxes. We then generate 64 images by interpolating both (z,c). However, since the interpolation over c is meaningless, we take key values K_c of the four randomly sampled c’s, and then perform interpolation over their K_c’s. Then, for each interpolated K_c’, we find the memory slot c = argmax p(c|K_c’), i.e. the memory index whose posterior is the highest with respect to K_c’.\nAs shown in Fig.1.(b,d), different classes are shown at the four corners, and other samples gradually change, but no structural discontinuity occurs. We hope the modified Fig.1 delivers the merits of MemoryGAN more intuitively.\n\n2. Computation overhead.\nAs we replied to Reviewer 2, we measure the training time per epoch for MemoryGAN (4,128K parameters) and DCGAN (2,522K parameters), which are 135 sec and 124 sec, respectively.\nIt means MemoryGAN is only 8.9% slower than DCGAN for training, even with a scalable memory module. At test time, since only generator is used, there is no time difference between MemoryGAN and DCGAN. \n\n3. ImageNet experiments.\nWe observed that the memory module significantly helps improve the performance when using highly diverse datasets. For example, inception scores are higher for CIFAR10 than for FashionMNIST. Thus, as Reviewer 2 suggested, we can easily expect that the our MemoryGAN works better for the ImageNet dataset. We did not test with ImageNet, mainly because of too long training time (more than two weeks by our estimation). However, we will do it as a future work.\n\n4. Source code and typos.\nWe plan to make public the source code. \nThank you for correct typos!\n",
"We'd like to appreciate Reviewer 1 again for the constructive comments, which are greatly helpful to make our paper better. We are very glad that our rebuttal clarifies Reviewer 1's concerns.",
"We thank Reviewer 3 for positive and constructive reviews. Below, we respond each comment in details. Please see blue fonts in the newly uploaded draft to check how our paper is updated.\n\n1. DCGAN inception scores.\nThanks for a correction. As R3 pointed out, the DCGAN inception score of the original paper is 6.54+-0.67. The value 5.35 that we reported previously was the score of “MemoryGAN without memory”, which is identical to the DCGAN in terms of model structure. That was the reason why we named it as DCGAN. However, the “MemoryGAN without memory” had different details from the DCGAN, including the ELU activation (instead of ReLU and Leaky ReLU) and layer-normalization (instead of batch normalization). To resolve the confusion, we change the values of Table 1 to 6.54+-0.67 (the numbers reported in the original DCGAN paper).\n\n2. The memory size of MemoryGAN.\nIn our experiments, we set the memory size based on the performance on the validation set. The memory is used to represent not only positive samples but also possible fake samples. Thus, the memory size is rather large (n=16384), for the CIFAR10 dataset whose size is 50,000. That is, the more diverse the dataset is, the larger memory size is required to represent both variability. When we used a half-size memory (n=8192), the inception score for CIFAR10 decreased from 8.04 to 6.71. \nAs Reviewer 3 suggested, we test with decreasing the memory size to n=16, which is similar to the number of classes, on the Fashion-MNIST and CIFAR10 datasets. We obtain the inception score 6.14 for Fashion-MNIST with n=16, which is slightly lower than the reported score 6.39 with n=4096. On the other hand, for CIFAR10, the inception score significantly decreases from 8.04 with n=16384 to 3.06 with n=16. These results indicate that intra-class variability of Fashion-MNIST is small, while that of CIFAR10 is very high.\n\n3. Training/Test time.\nThe training time per epoch for MemoryGAN (4,128K parameters) and DCGAN (2,522K parameters) are 135 sec and 124 sec, respectively. It means MemoryGAN is only 8.9% slower than DCGAN for training, even with a scalable memory module. At test time, since only the generator is used, there is no time difference between MemoryGAN and DCGAN.\n\n4. Comparison with InfoGAN.\nThere are two key differences between InfoGAN and MemoryGAN. First, InfoGAN implicitly learns the latent cluster information of data into model parameters, while MemoryGAN explicitly maintains the information about the whole training set using a life-long memory network. Thus, MemoryGAN keeps track of current cluster information stably and flexibly without suffering from forgetting old samples. Second, MemoryGAN explicitly offers various distributions like prior distribution p(c), conditional likelihood p(x|c) and marginal likelihood p(x), unlike InfoGAN. Such interpretability is useful for designing or training the models.\n",
"We thank Reviewer 1 for positive and constructive reviews. Below, we respond each comment in details. Please see blue fonts in the newly uploaded draft to check how our paper is updated.\n\n1. The probabilistic interpretation.\nThanks for pointing out the unclearness of our formulation. First of all, the normalizing constant does not affect the model formulation, because it is a common denominator in the posterior. However, as Reviewer 1 pointed out, we use distributions on a unit sphere, and thus they should be Von Mises-Fisher (vMF) distributions with a concentration constant k=1, instead of Gaussian distributions. Without changing any fundamentals of MemoryGAN, we change the Gaussian mixtures to Von Mises-Fisher Mixtures in the draft. We appreciate Reviewer 1 for the correction.\n\n2. Writing improvement of Section 3.\n(1) The difference between x and x' in Section 3.1.1-3.1.2.\nWe used x to denote samples for updating discriminator parameters, and x’ for updating the memory module. Since every training sample goes through these two updating operations, there is no need to use both, and we unify them to x.\n(2) Discuss the posterior without the clear definition of the likelihood in Eq.(1). \nThe likelihood for Eq.(1) is identical to that of the standard vMF mixture model. Thus, we omitted it and directly introduced the posterior equation. We will clarify them.\n(3) Overall organization \nWe will re-organize the draft so that key ideas in each part are explicitly summarized before the details.\n\n3. Generalization ability.\nAs Reviewer 1 suggested, we add an additional result to the Figure 5, where for each sample produced by MemoryGAN (in the left-most column), the seven nearest images in the training set are shown in the following columns. Apparently, our MemoryGAN generates novel images rather than merely memorizing and retrieving the images in the training set.\nThe memory is used to represent not only positive samples but also possible fake samples. Thus, the memory size is rather large (n=16384), for the CIFAR10 dataset. That is, the more diverse the dataset is, the larger memory size is required to represent both variability. In our experiments, we set the memory size based on the performance on the validation set. \n\n4. Fig.1.\nInitially, we fixed the discrete latent variable c for MemoryGAN, because it is a memory index, and thus it is meaningless to interpolate over c. However, we follow Reviewer’s rationale and update Fig.1 in the new draft. Please check it.\nIn new Fig.1.(b,d), we first randomly sample both (z,c) shown at the four corners in blue boxes. We then generate 64 images by interpolating both (z,c). However, since the interpolation over c is meaningless, we take key values K_c of the four randomly sampled c’s, and then perform interpolation over their K_c’s. Then, for each interpolated K_c’, we find the memory slot c = argmax p(c|K_c’), i.e. the memory index whose posterior is the highest with respect to K_c’.\nAs shown in Fig.1.(b,d), different classes are shown at the four corners, and other samples gradually change, but no structural discontinuity occurs. We hope the modified Fig.1 delivers the merits of MemoryGAN more intuitively.\n\n5. Failure cases of Fig.4.\nAs Reviewer 1 pointed out, MemoryGAN also has failure samples in the whole latent space as shown in Figure 4. Since our approach is completely unsupervised, sometimes a single memory slot may include similar images from different classes. It causes failure cases. 
Nevertheless, significant proportion of memory slots of MemoryGAN contain similar shaped single class, which leads much better performance than existing unsupervised GAN models. \n"
]
} | {
"paperhash": [
"gulrajani|improved_training_of_wasserstein_gans",
"berthelot|began:_boundary_equilibrium_generative_adversarial_networks",
"antipov|face_aging_with_conditional_generative_adversarial_networks",
"dumoulin|adversarially_learned_inference",
"zhang|colorful_image_colorization",
"kingma|adam:_a_method_for_stochastic_optimization",
"liu|deep_learning_face_attributes_in_the_wild",
"bahdanau|neural_machine_translation_by_jointly_learning_to_align_and_translate",
"xiao|fashion-mnist:_a_novel_image_dataset_for_benchmarking_machine_learning_algorithms",
"kemker|measuring_catastrophic_forgetting_in_neural_networks",
"lu|attribute-guided_face_generation_using_conditional_cyclegan",
"lu|conditional_cyclegan_for_attribute_guided_face_image_generation",
"mroueh|fisher_gan",
"zhu|unpaired_image-to-image_translation_using_cycle-consistent_adversarial_networks",
"dash|tac-gan_-_text_conditioned_auxiliary_classifier_generative_adversarial_network",
"kim|learning_to_discover_cross-domain_relations_with_generative_adversarial_networks",
"kaiser|learning_to_remember_rare_events",
"mroueh|mcgan:_mean_and_covariance_feature_matching_gan",
"dai|calibrating_energy-based_generative_adversarial_networks",
"arjovsky|wasserstein_gan",
"zhang|image_de-raining_using_a_conditional_generative_adversarial_network",
"arjovsky|towards_principled_methods_for_training_generative_adversarial_networks",
"shrivastava|learning_from_simulated_and_unsupervised_images_through_adversarial_training",
"zhang|stackgan:_text_to_photo-realistic_image_synthesis_with_stacked_generative_adversarial_networks",
"isola|image-to-image_translation_with_conditional_adversarial_networks",
"ledig|photo-realistic_single_image_super-resolution_using_a_generative_adversarial_network",
"salimans|improved_techniques_for_training_gans",
"nowozin|f-gan:_training_generative_neural_samplers_using_variational_divergence_minimization",
"santoro|one-shot_learning_with_memory-augmented_neural_networks",
"reed|generative_adversarial_text_to_image_synthesis",
"reed|learning_deep_representations_of_fine-grained_visual_descriptions",
"yan|attribute2image:_conditional_image_generation_from_visual_attributes",
"aubry|seeing_3d_chairs:_exemplar_part-based_2d-3d_alignment_using_a_large_dataset_of_cad_models"
],
"title": [
"Improved Training of Wasserstein GANs",
"BEGAN: Boundary Equilibrium Generative Adversarial Networks",
"FACE AGING WITH CONDITIONAL GENERATIVE ADVERSARIAL NETWORKS",
"",
"Colorful Image Colorization",
"Published as a conference paper at ICLR 2015 ADAM: A METHOD FOR STOCHASTIC OPTIMIZATION",
"Deep Learning Face Attributes in the Wild *",
"NEURAL MACHINE TRANSLATION BY JOINTLY LEARNING TO ALIGN AND TRANSLATE",
"Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms",
"Measuring Catastrophic Forgetting in Neural Networks",
"Attribute-Guided Face Generation Using Conditional CycleGAN",
"TAC-GAN -Text Conditioned Auxiliary Classifier Generative Adversarial Network",
"LSUN: Construction of a Large-Scale Image Dataset using Deep Learning with Humans in the Loop",
"Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks",
"Image De-raining Using a Conditional Generative Adversarial Network",
"Learning to Discover Cross-Domain Relations with Generative Adversarial Networks",
"Image-to-Image Translation with Conditional Adversarial Networks",
"Under review as a conference paper at ICLR 2017 SEMI-SUPERVISED LEARNING WITH CONTEXT-CONDITIONAL GENERATIVE ADVERSARIAL NETWORKS",
"Under review as a conference paper at ICLR 2017 UNSUPERVISED CROSS-DOMAIN IMAGE GENERATION",
"Iterative Bregman Projections for Regularized Transportation Problems",
"AMORTISED MAP INFERENCE FOR IMAGE SUPER-RESOLUTION",
"",
"Learning from Simulated and Unsupervised Images through Adversarial Training",
"StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks",
"Image-to-Image Translation with Conditional Adversarial Networks",
"Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network",
"Improved Techniques for Training GANs",
"f -GAN: Training Generative Neural Samplers using Variational Divergence Minimization",
"End-To-End Memory Networks",
"Generative Adversarial Text to Image Synthesis",
"Neural Turing Machines",
"Attribute2Image: Conditional Image Generation from Visual Attributes",
"Seeing 3D chairs: exemplar part-based 2D-3D alignment using a large dataset of CAD models"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"ishaan gulrajani",
"faruk ahmed",
"martin arjovsky",
"vincent dumoulin",
"aaron courville"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david berthelot",
"thomas schumm",
"luke metz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"grigory antipov",
"moez baccouche",
"jean-luc dugelay"
],
"affiliation": [
{
"laboratory": "",
"institution": "Orange Labs",
"location": "{'addrLine': '4 rue Clos Courtel', 'postCode': '35512', 'settlement': 'Cesson-Sévigné', 'country': 'France'}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"vincent dumoulin",
"ishmael belghazi",
"ben poole",
"olivier mastropietro",
"alex lamb",
"martin arjovsky",
"aaron courville"
],
"affiliation": [
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "Neural Dynamics and Computation Lab",
"institution": "",
"location": "{'settlement': 'Stanford'}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
}
]
},
{
"name": [
"richard zhang",
"phillip isola",
"alexei a efros"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Berkeley'}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Berkeley'}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Berkeley'}"
}
]
},
{
"name": [
"diederik p kingma",
"jimmy lei ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ziwei liu",
"ping luo",
"xiaogang wang",
"xiaoou tang",
"hong kong"
],
"affiliation": [
{
"laboratory": "",
"institution": "The Chinese University",
"location": "{}"
},
{
"laboratory": "",
"institution": "The Chinese University",
"location": "{}"
},
{
"laboratory": "",
"institution": "The Chinese University of Hong Kong",
"location": "{}"
},
{
"laboratory": "",
"institution": "The Chinese University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"dzmitry bahdanau",
"kyunghyun cho",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"han xiao",
"kashif rasul",
"roland vollgraf"
],
"affiliation": [
{
"laboratory": "Zalando Research Mühlenstraße 25",
"institution": "",
"location": "{'postCode': '10243', 'settlement': 'Berlin'}"
},
{
"laboratory": "Zalando Research Mühlenstraße 25",
"institution": "",
"location": "{'postCode': '10243', 'settlement': 'Berlin'}"
},
{
"laboratory": "Zalando Research Mühlenstraße 25",
"institution": "",
"location": "{'postCode': '10243', 'settlement': 'Berlin'}"
}
]
},
{
"name": [
"ronald kemker",
"marc mcclure",
"angelina abitino",
"tyler hayes",
"christopher kanan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yongyi lu",
"yu-wing tai",
"chi-keung tang"
],
"affiliation": [
{
"laboratory": "",
"institution": "The Hong Kong University of Science and Technology",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "The Hong Kong University of Science and Technology",
"location": "{}"
}
]
},
{
"name": [
"ayushman dash",
"john gamboa",
"sheraz ahmed",
"marcus liwicki",
"muhammad zeshan afzal"
],
"affiliation": [
{
"laboratory": "",
"institution": "MindGarage -University of Kaiserslautern",
"location": "{'country': 'Germany'}"
},
{
"laboratory": "",
"institution": "MindGarage -University of Kaiserslautern",
"location": "{'country': 'Germany'}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yu ari fisher",
" seff",
"thomas funkhouser",
"jianxiong xiao"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Princeton University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Princeton University",
"location": "{}"
}
]
},
{
"name": [
"jun-yan zhu",
"taesung park",
"phillip isola",
"alexei a efros",
"summer winter",
"van gogh",
"cezanne monet",
"ukiyo-e monet photos"
],
"affiliation": [
{
"laboratory": "Berkeley AI Research (BAIR) laboratory",
"institution": "",
"location": "{'settlement': 'Berkeley', 'region': 'UC'}"
},
{
"laboratory": "Berkeley AI Research (BAIR) laboratory",
"institution": "",
"location": "{'settlement': 'Berkeley', 'region': 'UC'}"
},
{
"laboratory": "Berkeley AI Research (BAIR) laboratory",
"institution": "",
"location": "{'settlement': 'Berkeley', 'region': 'UC'}"
},
{
"laboratory": "Berkeley AI Research (BAIR) laboratory",
"institution": "",
"location": "{'settlement': 'Berkeley', 'region': 'UC'}"
},
{
"laboratory": "Berkeley AI Research (BAIR) laboratory",
"institution": "",
"location": "{'settlement': 'Berkeley', 'region': 'UC'}"
},
{
"laboratory": "Berkeley AI Research (BAIR) laboratory",
"institution": "",
"location": "{'settlement': 'Berkeley', 'region': 'UC'}"
},
{
"laboratory": "Berkeley AI Research (BAIR) laboratory",
"institution": "",
"location": "{'settlement': 'Berkeley', 'region': 'UC'}"
},
{
"laboratory": "Berkeley AI Research (BAIR) laboratory",
"institution": "",
"location": "{'settlement': 'Berkeley', 'region': 'UC'}"
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [
"taeksoo kim",
"moonsu cha",
"hyunsoo kim",
"jung kwon lee",
"jiwon kim"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"phillip isola",
"jun-yan zhu",
"tinghui zhou",
"alexei a efros"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"remi denton",
"sam gross",
"rob fergus"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yaniv taigman",
"adam polyak",
"lior wolf"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jean-david benamou",
"guillaume carlier",
"marco cuturi",
"luca nenna",
"gabriel peyré"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Université Paris-Dauphine",
"location": "{'settlement': 'Ceremade'}"
},
{
"laboratory": "",
"institution": "Kyoto University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Université Paris-Dauphine",
"location": "{'settlement': 'Ceremade'}"
}
]
},
{
"name": [
"casper kaae sønderby",
"jose caballero",
"lucas theis",
"wenzhe shi",
"ferenc huszár"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chris donahue",
"julian mcauley",
"miller puckette"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ashish shrivastava",
"tomas pfister",
"oncel tuzel",
"josh susskind",
"wenda wang",
"russ webb"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"han zhang",
"tao xu",
"hongsheng li",
"shaoting zhang",
"xiaogang wang",
"xiaolei huang",
"dimitris metaxas"
],
"affiliation": [
{
"laboratory": "",
"institution": "Rutgers University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "The Chinese University of Hong Kong",
"location": "{}"
},
{
"laboratory": "",
"institution": "Baidu Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "The Chinese University of Hong Kong",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Rutgers University",
"location": "{}"
}
]
},
{
"name": [
"phillip isola",
"jun-yan zhu",
"tinghui zhou",
"alexei a efros"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"christian ledig",
"lucas theis",
"ferenc huszár",
"jose caballero",
"andrew cunningham",
"alejandro acosta",
"andrew aitken",
"alykhan tejani",
"johannes totz",
"zehan wang",
"wenzhe shi twitter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tim salimans",
"ian goodfellow",
"wojciech zaremba",
"vicki cheung",
"alec radford",
"xi chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sebastian nowozin",
"botond cseke",
"ryota tomioka"
],
"affiliation": [
{
"laboratory": "Machine Intelligence and Perception Group Microsoft Research Cambridge",
"institution": "",
"location": "{'country': 'UK'}"
},
{
"laboratory": "Machine Intelligence and Perception Group Microsoft Research Cambridge",
"institution": "",
"location": "{'country': 'UK'}"
},
{
"laboratory": "Machine Intelligence and Perception Group Microsoft Research Cambridge",
"institution": "",
"location": "{'country': 'UK'}"
}
]
},
{
"name": [
"sainbayar sukhbaatar",
"arthur szlam",
"jason weston",
"rob fergus"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"scott reed",
"zeynep akata",
"xinchen yan",
"lajanugen logeswaran reedscot",
" akata",
" llajan",
"bernt schiele",
"honglak lee"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'settlement': 'Ann Arbor', 'region': 'MI', 'country': 'USA'}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'settlement': 'Ann Arbor', 'region': 'MI', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'settlement': 'Ann Arbor', 'region': 'MI', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'settlement': 'Ann Arbor', 'region': 'MI', 'country': 'USA'}"
}
]
},
{
"name": [
"alex graves",
"greg wayne",
"google deepmind",
" london"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xinchen yan",
"jimei yang",
"kihyuk sohn",
"honglak lee"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'settlement': 'Ann Arbor', 'country': 'USA'}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "NEC Labs",
"location": "{'settlement': 'Cupertino', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'settlement': 'Ann Arbor', 'country': 'USA'}"
}
]
},
{
"name": [
"mathieu aubry",
"daniel maturana",
"alexei a efros",
"bryan c russell",
"josef sivic"
],
"affiliation": [
{
"laboratory": "",
"institution": "INRIA",
"location": "{}"
},
{
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Intel Labs",
"location": "{}"
},
{
"laboratory": "",
"institution": "INRIA",
"location": "{}"
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | 0.464286 | 0.592593 | 0.75 | null | null | null | null | null | rkO3uTkAZ |
|
higgins|scan_learning_hierarchical_compositional_visual_concepts|ICLR_cc_2018_Conference | 1707.03389v3 | SCAN: Learning Hierarchical Compositional Visual Concepts | The seemingly infinite diversity of the natural world arises from a relatively small set of coherent rules, such as the laws of physics or chemistry. We conjecture that these rules give rise to regularities that can be discovered through primarily unsupervised experiences and represented as abstract concepts. If such representations are compositional and hierarchical, they can be recombined into an exponentially large set of new concepts. This paper describes SCAN (Symbol-Concept Association Network), a new framework for learning such abstractions in the visual domain. SCAN learns concepts through fast symbol association, grounding them in disentangled visual primitives that are discovered in an unsupervised manner. Unlike state of the art multimodal generative model baselines, our approach requires very few pairings between symbols and images and makes no assumptions about the form of symbol representations. Once trained, SCAN is capable of multimodal bi-directional inference, generating a diverse set of image samples from symbolic descriptions and vice versa. It also allows for traversal and manipulation of the implicit hierarchy of visual concepts through symbolic instructions and learnt logical recombination operations. Such manipulations enable SCAN to break away from its training data distribution and imagine novel visual concepts through symbolically instructed recombination of previously learnt concepts. | {
"name": [
"irina higgins",
"nicolas sonnerat",
"loic matthey",
"arka pal",
"christopher p burgess",
"matko bošnjak",
"murray shanahan",
"matthew botvinick",
"demis hassabis",
"alexander lerchner deepmind",
" london"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
} | null | [
"Computer Science",
"Mathematics"
] | arXiv.org | 2017-07-11 | 41 | 51 | null | null | null | null | null | null | null | true | This paper initially received borderline reviews. The main concern raised by all reviewers was a limited experimental evaluation (synthetic only). In rebuttal, the authors provided new results on the CelebA dataset, which turned the first reviewer positive. The AC agrees there is merit to this approach, and generally appreciates the idea of compositional concept learning. | {
"review_id": [
"rkzoyZW-M",
"H1vrGEM-G",
"BkeoFCjgG"
],
"review": [
{
"title": "title: interesting idea, but limited experimental evaluation",
"paper_summary": null,
"main_review": "main_review: This paper introduces a VAE-based model for translating between images and text. The main way that their model differs from other multimodal methods is that their latent representation is well-suited to applying symbolic operations, such as AND and IGNORE, to the text. This gives them a more expressive language for sampling images from text.\n\nPros:\n- The paper is well written, and it provides useful visualizations and implementation details in the appendix.\n\n- The idea of learning compositional representations inside of a VAE framework is very appealing.\n\n- They provide a modular way of learning recombination operations.\n\nCons:\n- The experimental evaluation is limited. They test their model only on a simple, artificial dataset. It would also be helpful to see a more extensive evaluation of the model's ability to learn logical recombination operators, since this is their main contribution.\n\n- The approach relies on first learning a pretrained visual VAE model, but it is unclear how robust this is. Should we expect visual VAEs to learn features that map closely to the visual concepts that appear in the text? What happens if the visual model doesn't learn such a representation? This again could be addressed with experiments on more challenging datasets.\n\n- The paper should explain the differences and trade offs between other multimodal VAE models (such as their baselines, JMVAE and TrELBO) more clearly. It should also clarify differences between the SCAN_U baseline and SCAN in the main text.\n\n- The paper suggests that using the forward KL-divergence is important, but this does not seem to be tested with experiments.\n\n- The three operators (AND, IN COMMON, and IGNORE) can easily be implemented as simple transformations of a (binary) bag-of-words representation. What about more complex operations, such as OR, which seemingly cannot be encoded this way?\n\nOverall, I am borderline on this paper, due to the limited experimental evaluation, but lean slightly towards acceptance.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Good paper, but some pieces are missing",
"paper_summary": null,
"main_review": "main_review: Summary\n---\nThis paper proposes a new model called SCAN (Symbol-Concept Association Network) for hierarchical concept learning. It trains one VAE on images then another one on symbols and aligns their latent spaces. This allows for symbol2image and image2symbol inference. But it also allows for generalization to new concepts composed from existing concepts using logical operators. Experiments show that SCAN generates images which correspond to provided concept labels and span the space of concepts which match these labels.\n\nThe model starts with a beta-VAE trained on images (x) from the relevant domain (in this case, simple scenes generated from DeepMind Lab which vary across a few known dimensions). This is complemented by the SCAN model, which is a beta-VAE trained to reconstruct symbols (y; k-hot encoded concepts like {red, suitcase}) with a slightly modified objective. SCAN optimizes the ELBO plus a KL term which pushes the latent distribution of the y VAE toward the latent distribution of the x (image) VAE. This aligns the latent representations so now a symbol can be encoded into a latent distribution z and decoded as an image.\n\nOne nice property of the learned latent representation is that more specific concepts have more specific latent representations. Consider latent distributions z1 and z2 for a more general symbol {red} and a more specific symbol {red, suitcase}. Fewer dimensions of z2 have high variance than dimensions of z1. For example, the latent space could encode red and suitcase in two dimensions (as binary attributes). z1 would have high variance on all dimensions but the one which encodes red and z2 would have high variance on all dimensions but red and suitcase. In the reported experiments some of the dimensions do seem to be interpretable attributes (figure 5 right).\n\nSCAN also pays particular attention to hierarchical concepts. Another very simple model (1d convolution layer) is learned to mimic logical operators. Normally a SCAN encoder takes {red} as input and the decoder reconstructs {red}. Now another model is trained that takes \"{red} AND {suitcase}\" as input and reconstructs {red, suitcase}. The two input concepts {red} and {suitcase} are each encoded by a pre-trained SCAN encoder and then those two distributions are combined into one by a simple 1d convolution module trained to implement the AND operator (or IGNORE/IN COMMON). This allows images of concepts like {small, red, suitcase} to be generated even if small red suitcases are not in the training data.\n\nExperiments provide some basic verification and analysis of the method:\n1) Qualitatively, concept samples are correct and diverse, generating images with all configurations of attributes not specified by the input concept.\n2) As SCAN sees more diverse examples of a concept (e.g. suitcases of all colors instead of just red ones) it starts to generate more diverse image samples of that concept.\n3) SCAN samples/representations are more accurate (generate images of the right concept) and more diverse (far from a uniform prior in a KL sense) than JMVAE and TELBO baselines.\n4) SCAN is also compared to SCAN_U, which uses an image beta-VAE that learned an entangled (Unstructured) representation. 
SCAN_U performed worse than SCAN\nand baselines.\n5) Concepts expressed as logical combinations of other concepts generalize well for both the SCAN representation and the baseline representations.\n\n\nStrengths\n---\n\nThe idea of concept learning considered here is novel and satisfying. It imposing logical, hierarchical structure on latent representations in a general way. This suggests opportunities for inserting prior information and adds interpretability to the latent space.\n\n\nWeaknesses\n---\n\nI think this paper is missing some important evaluation.\n\nRole/Nature of Disentangled Features not Clear (major):\n\n* Disentangled features seem to be very important for SCAN to work well (SCAN vs SCAN_U). It seems that the only difference between the unstructured (entangled) and the structured (disentangled) visual VAE is the color space of the input (RGB vs HSV). If so, this should be stated more clearly in the main paper. What role did beta-VAE (tuning beta) as opposed to plain VAE play in learning disentangled features?\n\n* What color space was used for the JMVAE and TELBO baselines? Training these with HSV seems especially important for establishing a good comparison, but it would be good to report results for HSV and RGB for all models.\n\n* How specific is the HSV trick to this domain? Would it matter for natural images?\n\n* How would a latent representation learned via supervision perform? (Maybe explicitly align dimensions of z to red/suitcase/small with supervision through some mechanism. c.f. \"Discovering Hidden Factors of Variation in Deep Networks\" by Cheung et al.)\n\nEvaluation of sample complexity (major):\n\n* One of the main benefits of SCAN is that it works with less training data. There should be a more systematic evaluation of this claim. In particular, I would like to see a Number of Examples vs Performance (Accuracy/Diversity) plot for both SCAN and the baselines.\n\nMinor questions/comments/concerns:\n\n* What do the logical operators learn that the hand-specified versions do not?\n\n* Does training SCAN with the structure provided by the logical operators lead to improved performance?\n\n* There seems to be a mistake in figure 5 unless I interpreted it incorrectly. The right side doesn't match the left side. During the middle stage of training object hues vary on the left, but floor color becomes less specific on the right. Shouldn't object color become less specific?\n\n\nPrelimary Evaluation\n---\n\nThis clear and well written paper describes an interesting and novel way of learning a model of hierarchical concepts. It's missing some evaluation that would help establish the sample complexity benefit more precisely (a claimed contribution) and add important details about unsupervised disentangled representations. I would happy to increase my rating if these are addressed.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A neural network that learns visual concepts and basic operators over them.",
"paper_summary": null,
"main_review": "main_review: This paper proposed a novel neural net architecture that learns object concepts by combining a beta-VAE and SCAN. The SCAN is actually another beta-VAE with an additional term that minimizes the KL between the distribution of its latent representation and the first beta-VAE’s latent distribution. The authors also explored how this structure could be further expanded to incorporate another neural net that learns operators (and, in common, ignore), and demonstrated that the proposed system is able to generate accurate and diverse scenes given the visual descriptions.\n\nIn general, I think this paper is interesting. It’s studying an important problem with a newly proposed neural net structure. The experimental results are good and the model is compared with very recent baselines.\n\nI am, however, still lukewarm on this submission for its limited technical innovation and over-simplified experimental setup.\n\nThis paper does have technical innovations: the SCAN architecture and the way they learn “recombination operators” are newly proposed. However, there are in essence very straightforward extensions of VAE and beta-VAE (this is based on the fact that beta-VAE itself is a simple modification of VAE and the effect was discussed in a number of concurrent papers).\n\nThis would still be fine, as many small modifications of neural net architecture turn out to reveal fundamental insights that push the field forward. This is, however, not the case in this paper (at least not in the current manuscript) due to its over-simplified experiments. The authors are using images as input, but the images are all synthetic, and further, they are all synthesized to have highly regular structure. This suggests the network is likely to overfit the data and learn a straightforward mapping from input to the code. It’s unclear how well the system is able to generalize to real-world scenarios. Note that even datasets like MNIST has much higher complexity than the dataset used in this paper (though the dataset in this paper is more colorful).\n\nI agree that the proposed method performs better that its recent competitors. However, many of those methods like TripleELBO are not explicitly designed for these ‘recombination operators’. In contrast, they seem to perform well on real datasets. I would strongly suggest the authors perform additional experiments on standard benchmarks for a fair comparison.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.6666666865348816,
0.4444444477558136
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Extra experiments with CelebA demonstrate that SCAN significantly outperforms JMVAE and TrELBO",
"We have added sample complexity evaluation that demonstrates that SCAN training is more stable than JMVAE and TrELBO, our experiments with CelebA in RGB space clarify the role of colour space in learning disentangled representations",
"Good response",
"We have added extra experiments with CelebA, SCAN_U and SCAN with reverse KL and show that SCAN still significantly outperforms all baselines",
"post-rebuttal comments",
"Response to post-rebuttal comments"
],
"comment": [
"Dear Reviewer,\n\nThank you for your feedback. We have added an additional section describing the comparison of our approach to JMVAE and TrELBO on CelebA. Unlike the similar TrELBO experiments, we did minimal pre-processing of the dataset (only cropping to 64x64) and trained the models on the noisy attribute labels out of the box. As you may be aware, CelebA attributes are notoriously unreliable - many are subjective, refer to aspects of the images that get cropped away or are plain wrong. Our experiments demonstrate that SCAN significantly outperforms both baselines (but TrELBO in particular) and discovers a subset of attributes that refer to something meaningful based on the visual examples present in the dataset, while ignoring the uninformative attributes. SCAN is then able to traverse the individual directions of variation it has discovered and imagine both positive and negative examples of the attribute. This is unlike the baselines, which can only imagine positive examples after being trained on positive examples. \n\nWe hope that our experiments address your concerns about the technical innovation of our approach, since we demonstrate that currently SCAN is the only model that is able to learn compositional hierarchical visual concepts on real visual datasets.\n\nHappy holidays!\n",
"Dear Reviewer,\n\nThank you for your feedback. Please find the responses to your points below:\n\nRole/Nature of Disentangled Features not Clear (major):\n\n* Disentangled features seem to be very important for SCAN to work well (SCAN vs SCAN_U). It seems that the only difference between the unstructured (entangled) and the structured (disentangled) visual VAE is the color space of the input (RGB vs HSV). If so, this should be stated more clearly in the main paper. What role did beta-VAE (tuning beta) as opposed to plain VAE play in learning disentangled features?\n\nThe statement about the “only difference” is not quite right. While an HSV colour space helps beta-VAE disentangle the particular DeepMind Lab dataset we used, the conversion from RGB to HSV is not sufficient for disentangling. As shown in our additional SCAN_U experiments in Table 1, it is still important to use a carefully tuned beta-VAE rather than a plain VAE to get good enough disentanglement for SCAN to work. Furthermore, we have added additional experiments with CelebA where we learn disentangled visual representations with a beta-VAE in RGB space. A plain VAE is unable to learn such disentangled representations, as was shown in Higgins et al, 2017.\n\n\n\n\n\n* What color space was used for the JMVAE and TELBO baselines? Training these with HSV seems especially important for establishing a good comparison, but it would be good to report results for HSV and RGB for all models.\n\nAll baselines are trained in HSV space when using the DeepMind Lab dataset in our paper. We have now added additional experiments on CelebA, where all models are now trained using the RGB colour space.\n\n\n\n\n\n* How specific is the HSV trick to this domain? Would it matter for natural images?\n\nThe HSV trick was useful for the DeepMind Lab dataset, but it is not necessary for all datasets as demonstrated in the new CelebA experiments.\n\n\n\n\n\n* How would a latent representation learned via supervision perform? (Maybe explicitly align dimensions of z to red/suitcase/small with supervision through some mechanism. c.f. \"Discovering Hidden Factors of Variation in Deep Networks\" by Cheung et al.)\n\nA latent representation learnt via supervision would also work, as long as the latent distribution is from the location/scale distributional family. Hence, the work by Cheung et al or DC-IGN by Kulkarni et al would both be suitable for grounding SCAN. We concentrated on the unsupervised beta-VAE, since we wanted to minimise human intervention and bias.\n\n\n\n\n\nEvaluation of sample complexity (major):\n\n* One of the main benefits of SCAN is that it works with less training data. There should be a more systematic evaluation of this claim. In particular, I would like to see a Number of Examples vs Performance (Accuracy/Diversity) plot for both SCAN and the baselines.\n\nWe have added a plot with this information in the supplementary materials.\n\n\n\n\n\nMinor questions/comments/concerns:\n\n* What do the logical operators learn that the hand-specified versions do not?\n\nIn general we find that the learnt operators have better accuracy and diversity, achieving 0.79 (learnt) vs 0.54 (hand crafted) accuracy (higher is better) and 1.05 (learnt) vs 2.03 (hand crafted) diversity (lower is better) scores. 
We have added a corresponding comment in the paper.\n\n\n\n\n\n* Does training SCAN with the structure provided by the logical operators lead to improved performance?\n\nWe find that the logical operators do improve the diversity of samples since the training of the logical operators relies on the visual grounding that is exactly the same as SCAN uses. For example, we can recover the diversity of SCAN_R samples by training its recombination operators with a forward KL. We have added a note about this to the paper.\n\n\n\n\n* There seems to be a mistake in figure 5 unless I interpreted it incorrectly. The right side doesn't match the left side. During the middle stage of training object hues vary on the left, but floor color becomes less specific on the right. Shouldn't object color become less specific?\n\nThank you for pointing it out. We have fixed it.\n\n\nHappy holidays!",
"Thanks for the response! It nicely addressed my concerns, so I increased my rating.",
"Dear Reviewer,\n\nThank you for your feedback. Please find the responses to your points below:\n\n\n- The experimental evaluation is limited. They test their model only on a simple, artificial dataset. It would also be helpful to see a more extensive evaluation of the model's ability to learn logical recombination operators, since this is their main contribution.\n\nWe have now added an additional section demonstrating that SCAN significantly outperforms both JMVAE and TrELBO on CelebA - a significantly more challenging and realistic dataset.\n\n\n\n\n- The approach relies on first learning a pretrained visual VAE model, but it is unclear how robust this is. Should we expect visual VAEs to learn features that map closely to the visual concepts that appear in the text? What happens if the visual model doesn't learn such a representation? This again could be addressed with experiments on more challenging datasets.\n\nSCAN does indeed rely on learning disentangled visual representations as defined in Bengio (2013) and Higgins et al (2017). The performance of SCAN drops as the quality of disentanglement drops, as demonstrated by the additional SCAN_U baselines we have added to Table 1. It has, however, been shown that beta-VAE is able to learn disentangled representation on more challenging datasets (Higgins et al, 2017a, b), and we have shown that SCAN can significantly outperform both JMVAE and TrELBO on CelebA in the additional section we have added at the end of the paper. When training SCAN on CelebA, we show that SCAN is able to ignore symbolic (text) attributes that do not refer to anything meaningful in the image space, and ground the remaining attributes in whatever dictionary of visual primitives it has access to (not all of which map directly to the symbolic attributes). For example, the “attractiveness” attribute is subjective and has no direct mapping to a particular visual primitive, yet SCAN learns that in the CelebA dataset it tends to refer to young females. \n\n\n\n\n\n- The paper should explain the differences and trade offs between other multimodal VAE models (such as their baselines, JMVAE and TrELBO) more clearly. It should also clarify differences between the SCAN_U baseline and SCAN in the main text.\n\nWe have added the explanations in text. In summary, TrELBO tends to learn a flat and unstructured conceptual latent space, that results in very poor diversity of their samples. JMVAE, on the other hand, comes close to our approach in the limit where the text labels provide enough supervision to help disentangle the joint latent space q(z|x,y). In that case, the joint posterior q(z|x,y) and the symbolic posterior q(z|y) of JMVAE become equivalent to the visual posterior q(z|x) and symbolic posterior q(z|y) of SCAN, since both use forward KL to ground q(z|y). Hence, the biggest differences between our approach and JMVAE are: 1) we are able to learn disentangled visual primitives in an unsupervised manner while JMVAE relies on good structured labels to supervise this process; 2) we use a staged optimisation process, where we first learn vision, then concepts, while JMVAE performs joint optimisation. In practice we find that JMVAE training is more sensitive to architectural and hyperparameter choices and hence most of the time performs worse than SCAN.\n\nSCAN_U is a version of SCAN that grounds concepts in an unstructured visual latent space. 
We have now added extra experiments to show how the performance of SCAN drops as the level of visual disentanglement in SCAN_U is decreased. \n\n\n\n\n- The paper suggests that using the forward KL-divergence is important, but this does not seem to be tested with experiments.\n\nWe have added the additional baseline with reverse KL (SCAN_R) to Table 1 and showed that it has really bad diversity as predicted by our reasoning.\n\n\n\n\n- The three operators (AND, IN COMMON, and IGNORE) can easily be implemented as simple transformations of a (binary) bag-of-words representation. What about more complex operations, such as OR, which seemingly cannot be encoded this way?\n\nIn this work, we focus on operators that can be used to traverse the implicit hierarchy of concepts, and since OR is not one of such operators, it is outside the scope of the current paper. We agree that it is interesting to implement and study additional, more complex operations, which we leave for future work.\n\nHappy holidays!",
"I appreciate the authors' effort along the direction. The additional experiments strengthened the paper, but I feel it still needs more work. \n\nThe technical innovation of the paper is to learn 'recombination operators'. As I said in the original review, methodologically the innovation is quite straightforward, but it can make a good paper if well evaluated. The additional experiments on celebA, however, are not evaluating the 'recombination operators'. It is basically suggesting beta-VAE can learn smooth interpolations (or extrapolations) given a certain attribute. This is nice, but connects better to the original beta-VAE paper than to this paper.\n\nIn general, this paper has great potentials but will benefit from another cycle. Would that be possible to really learn recombination operators on real images? If SCAN can learn concepts like pale-skin (and) big lips, or attractive (ignore) arched eyebrows, the paper will be much stronger.",
"Dear Reviewer,\n\nThank you for taking the time to comment on the updated version of our paper. You suggest that you do not find our additional experiments convincing enough because we do not train recombination operators on the celebA dataset. However, in our understanding your original review did not ask for these experiments. It suggested that we do a fair comparison with the JMVAE and TrELBO baselines on a real dataset, followed by a remark that the baselines were not explicitly designed for recombination operators. In our understanding it implied that the only fair comparison was to compare the abstract concept learning step across the original models. Furthermore, it is unfortunate that your request for the additional experiments with the recombination operators has arrived at this stage. While we cannot update our manuscript before the decision deadline, we would be happy to run the additional experiments for the camera ready version of the paper. \n\nYour original review had reservations about the technical novelty of our approach, which you stated in itself was not a problem as long as we could demonstrate that our approach outperforms the current state of the art methods on realistic datasets. We believe that our new experiments on CelebA demonstrate exactly that. \n\nIn your current comment you suggest that our additional CelebA experiments only demonstrate that beta-VAE can learn smooth interpolations and extrapolations of certain attributes. However, we believe that our additional experiments demonstrate that SCAN can learn new meaningful abstractions that are grounded in the basic visual factors discovered by beta-VAE, but which beta-VAE alone could not have, and in fact did not discover. \n\nIn addition, please note that unlike the CelebA experiments in the TrELBO paper, we did not remove mislabeled attributes from the training set, which consequently made the training task significantly harder for all models. The fact that SCAN was able to work well in such a setting is a further demonstration of the robustness of our approach. \n\nIn summary, we believe that we have demonstrated the usefulness and the power of our approach over the recent state of the art baseline methods on an important problem of learning hierarchical compositional visual concepts. Our approach may seem -- at first glance -- like a “straightforward” modification to existing VAE variants, but it is the only one that is currently able to discover meaningful compositional visual abstractions on realistic datasets. \n"
]
} | {
"paperhash": [
"tenenbaum|building_machines_that_learn_and_think_like_people",
"higgins|darla:_improving_zero-shot_transfer_in_reinforcement_learning",
"vedantam|generative_models_of_visually_grounded_imagination",
"suzuki|joint_multimodal_learning_with_deep_generative_models",
"jaderberg|reinforcement_learning_with_unsupervised_auxiliary_tasks",
"higgins|beta-vae:_learning_basic_visual_concepts_with_a_constrained_variational_framework",
"wang|deep_variational_canonical_correlation_analysis",
"pu|variational_autoencoder_for_deep_learning_of_images,_labels_and_captions",
"garnelo|towards_deep_symbolic_reinforcement_learning",
"oord|wavenet:_a_generative_model_for_raw_audio",
"oord|conditional_image_generation_with_pixelcnn_decoders",
"pathak|context_encoders:_feature_learning_by_inpainting",
"abadi|tensorflow:_large-scale_machine_learning_on_heterogeneous_distributed_systems",
"pandey|variational_methods_for_conditional_multimodal_deep_learning",
"mnih|asynchronous_methods_for_deep_reinforcement_learning",
"silver|mastering_the_game_of_go_with_deep_neural_networks_and_tree_search",
"lake|human-level_concept_learning_through_probabilistic_program_induction",
"he|deep_residual_learning_for_image_recognition",
"sohn|learning_structured_output_representation_using_deep_conditional_generative_models",
"yan|attribute2image:_conditional_image_generation_from_visual_attributes",
"mnih|human-level_control_through_deep_reinforcement_learning",
"gregor|draw:_a_recurrent_neural_network_for_image_generation",
"kingma|adam:_a_method_for_stochastic_optimization",
"liu|deep_learning_face_attributes_in_the_wild",
"szegedy|going_deeper_with_convolutions",
"kingma|semi-supervised_learning_with_deep_generative_models",
"firer-blaess|wikipedia",
"rezende|stochastic_backpropagation_and_approximate_inference_in_deep_generative_models",
"mikolov|distributed_representations_of_words_and_phrases_and_their_compositionality",
"srivastava|multimodal_learning_with_deep_boltzmann_machines",
"bengio|representation_learning:_a_review_and_new_perspectives",
"vincent|stacked_denoising_autoencoders:_learning_useful_representations_in_a_deep_network_with_a_local_denoising_criterion",
"baillargeon|infants'_physical_world",
"tenenbaum|bayesian_modeling_of_human_concept_learning",
"|dsprites:_disentanglement_testing_sprites_dataset",
"|generative_adversarial_text-to-image",
"|auto-encoding_variational_bayes._iclr",
"smith|sources_of_uncertainty_in_intuitive_physics",
"baillargeon|young_infants_'_reasoning_about_the_physical_and_spatial_properties_of_a_hidden_object",
"spelke|principles_of_object_perception",
"|cognition_and_categorization,_chapter_principles_of_categorization"
],
"title": [
"Building Machines that Learn and Think Like People",
"DARLA: Improving Zero-Shot Transfer in Reinforcement Learning",
"Generative Models of Visually Grounded Imagination",
"Joint Multimodal Learning with Deep Generative Models",
"Reinforcement Learning with Unsupervised Auxiliary Tasks",
"beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework",
"Deep Variational Canonical Correlation Analysis",
"Variational Autoencoder for Deep Learning of Images, Labels and Captions",
"Towards Deep Symbolic Reinforcement Learning",
"WaveNet: A Generative Model for Raw Audio",
"Conditional Image Generation with PixelCNN Decoders",
"Context Encoders: Feature Learning by Inpainting",
"TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"Variational methods for conditional multimodal deep learning",
"Asynchronous Methods for Deep Reinforcement Learning",
"Mastering the game of Go with deep neural networks and tree search",
"Human-level concept learning through probabilistic program induction",
"Deep Residual Learning for Image Recognition",
"Learning Structured Output Representation using Deep Conditional Generative Models",
"Attribute2Image: Conditional Image Generation from Visual Attributes",
"Human-level control through deep reinforcement learning",
"DRAW: A Recurrent Neural Network For Image Generation",
"Adam: A Method for Stochastic Optimization",
"Deep Learning Face Attributes in the Wild",
"Going deeper with convolutions",
"Semi-supervised Learning with Deep Generative Models",
"Wikipedia",
"Stochastic Backpropagation and Approximate Inference in Deep Generative Models",
"Distributed Representations of Words and Phrases and their Compositionality",
"Multimodal learning with deep Boltzmann machines",
"Representation Learning: A Review and New Perspectives",
"Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion",
"Infants' Physical World",
"Bayesian Modeling of Human Concept Learning",
"dsprites: Disentanglement testing sprites dataset",
"Generative adversarial text-to-image",
"Auto-encoding variational bayes. ICLR",
"Sources of uncertainty in intuitive physics",
"Young Infants ' Reasoning about the Physical and Spatial Properties of a Hidden Object",
"Principles of Object Perception",
"Cognition and Categorization, chapter Principles of Categorization"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"J. Tenenbaum"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"I. Higgins",
"Arka Pal",
"Andrei A. Rusu",
"L. Matthey",
"Christopher P. Burgess",
"A. Pritzel",
"M. Botvinick",
"C. Blundell",
"Alexander Lerchner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ramakrishna Vedantam",
"Ian S. Fischer",
"Jonathan Huang",
"K. Murphy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Masahiro Suzuki",
"Kotaro Nakayama",
"Y. Matsuo"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Max Jaderberg",
"Volodymyr Mnih",
"Wojciech M. Czarnecki",
"T. Schaul",
"Joel Z. Leibo",
"David Silver",
"K. Kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"I. Higgins",
"L. Matthey",
"Arka Pal",
"Christopher P. Burgess",
"Xavier Glorot",
"M. Botvinick",
"S. Mohamed",
"Alexander Lerchner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Weiran Wang",
"Honglak Lee",
"Karen Livescu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yunchen Pu",
"Zhe Gan",
"Ricardo Henao",
"Xin Yuan",
"Chunyuan Li",
"Andrew Stevens",
"L. Carin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Garnelo",
"Kai Arulkumaran",
"M. Shanahan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Aäron van den Oord",
"S. Dieleman",
"H. Zen",
"K. Simonyan",
"O. Vinyals",
"Alex Graves",
"Nal Kalchbrenner",
"A. Senior",
"K. Kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Aäron van den Oord",
"Nal Kalchbrenner",
"L. Espeholt",
"K. Kavukcuoglu",
"O. Vinyals",
"Alex Graves"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Deepak Pathak",
"Philipp Krähenbühl",
"Jeff Donahue",
"Trevor Darrell",
"Alexei A. Efros"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Martín Abadi",
"Ashish Agarwal",
"P. Barham",
"E. Brevdo",
"Z. Chen",
"C. Citro",
"G. Corrado",
"Andy Davis",
"J. Dean",
"M. Devin",
"Sanjay Ghemawat",
"I. Goodfellow",
"A. Harp",
"G. Irving",
"M. Isard",
"Yangqing Jia",
"R. Józefowicz",
"Lukasz Kaiser",
"M. Kudlur",
"J. Levenberg",
"Dandelion Mané",
"R. Monga",
"Sherry Moore",
"D. Murray",
"C. Olah",
"M. Schuster",
"Jonathon Shlens",
"Benoit Steiner",
"I. Sutskever",
"Kunal Talwar",
"P. Tucker",
"Vincent Vanhoucke",
"Vijay Vasudevan",
"F. Viégas",
"O. Vinyals",
"P. Warden",
"M. Wattenberg",
"M. Wicke",
"Yuan Yu",
"Xiaoqiang Zheng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Gaurav Pandey",
"Ambedkar Dukkipati"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Volodymyr Mnih",
"Adrià Puigdomènech Badia",
"Mehdi Mirza",
"Alex Graves",
"T. Lillicrap",
"Tim Harley",
"David Silver",
"K. Kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"David Silver",
"Aja Huang",
"Chris J. Maddison",
"A. Guez",
"L. Sifre",
"George van den Driessche",
"Julian Schrittwieser",
"Ioannis Antonoglou",
"Vedavyas Panneershelvam",
"Marc Lanctot",
"S. Dieleman",
"Dominik Grewe",
"John Nham",
"Nal Kalchbrenner",
"I. Sutskever",
"T. Lillicrap",
"M. Leach",
"K. Kavukcuoglu",
"T. Graepel",
"D. Hassabis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. Lake",
"R. Salakhutdinov",
"J. Tenenbaum"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kaiming He",
"X. Zhang",
"Shaoqing Ren",
"Jian Sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kihyuk Sohn",
"Honglak Lee",
"Xinchen Yan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Xinchen Yan",
"Jimei Yang",
"Kihyuk Sohn",
"Honglak Lee"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Volodymyr Mnih",
"K. Kavukcuoglu",
"David Silver",
"Andrei A. Rusu",
"J. Veness",
"Marc G. Bellemare",
"Alex Graves",
"Martin A. Riedmiller",
"A. Fidjeland",
"Georg Ostrovski",
"Stig Petersen",
"Charlie Beattie",
"Amir Sadik",
"Ioannis Antonoglou",
"Helen King",
"D. Kumaran",
"Daan Wierstra",
"S. Legg",
"D. Hassabis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Karol Gregor",
"Ivo Danihelka",
"Alex Graves",
"Danilo Jimenez Rezende",
"Daan Wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Diederik P. Kingma",
"Jimmy Ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ziwei Liu",
"Ping Luo",
"Xiaogang Wang",
"Xiaoou Tang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Christian Szegedy",
"Wei Liu",
"Yangqing Jia",
"P. Sermanet",
"Scott E. Reed",
"Dragomir Anguelov",
"D. Erhan",
"Vincent Vanhoucke",
"Andrew Rabinovich"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Diederik P. Kingma",
"S. Mohamed",
"Danilo Jimenez Rezende",
"M. Welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sylvain Firer-Blaess",
"C. Fuchs"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Danilo Jimenez Rezende",
"S. Mohamed",
"Daan Wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Tomas Mikolov",
"I. Sutskever",
"Kai Chen",
"G. Corrado",
"J. Dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nitish Srivastava",
"R. Salakhutdinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yoshua Bengio",
"Aaron C. Courville",
"Pascal Vincent"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Pascal Vincent",
"H. Larochelle",
"Isabelle Lajoie",
"Yoshua Bengio",
"Pierre-Antoine Manzagol"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Baillargeon"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Tenenbaum"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [],
"affiliation": []
},
{
"name": [],
"affiliation": []
},
{
"name": [
"Kevin A. Smith",
"E. Vul"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Renée Baillargeon",
"A. Brown",
"J. Deloache",
"Jerry Dejong",
"Julia Devos",
"Marcia Graber",
"G. Gustafson",
"S. Hanko",
"E. Heffley",
"Oskar Richter",
"Tom Kessler",
"Stephanie Hanko-Summers",
"Anna Szado"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"E. Spelke"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
}
],
"arxiv_id": [
"1604.00289v3",
"1707.08475",
"1705.10762v8",
"1611.01891v1",
"1611.05397v1",
"",
"1610.03454v3",
"1609.08976v1",
"1609.05518v2",
"1609.03499v2",
"1606.05328v2",
"1604.07379v2",
"1603.04467",
"1603.01801v2",
"1602.01783v2",
"",
"",
"1512.03385v1",
"",
"1512.00570v2",
"",
"1502.04623v2",
"1412.6980v9",
"1411.7766v3",
"1409.4842v1",
"1406.5298",
"",
"1401.4082v3",
"1310.4546v1",
"",
"1206.5538v3",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
[
"background"
],
[
"background",
"methodology"
],
[
"background"
],
[
"background",
"methodology"
],
[
"background"
],
[
"background",
"methodology"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"methodology"
],
[
"methodology"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[
"methodology"
],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
]
],
"isInfluential": [
true,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | null | 91 | 0.56044 | 0.555556 | 0.75 | null | null | null | null | null | rkN2Il-RZ |
|
sugiyama|biasvariance_decomposition_for_boltzmann_machines|ICLR_cc_2018_Conference | Bias-Variance Decomposition for Boltzmann Machines | We achieve bias-variance decomposition for Boltzmann machines using an information geometric formulation. Our decomposition leads to an interesting phenomenon that the variance does not necessarily increase when more parameters are included in Boltzmann machines, while the bias always decreases. Our result gives a theoretical evidence of the generalization ability of deep learning architectures because it provides the possibility of increasing the representation power with avoiding the variance inflation. | {
"name": [],
"affiliation": []
} | We achieve bias-variance decomposition for Boltzmann machines using an information geometric formulation. | [
"Boltzmann machine",
"bias-variance decomposition",
"information geometry"
] | null | 2018-02-15 22:29:30 | 27 | null | null | null | null | null | null | null | null | false | This paper presents a bias/variance decomposition for Boltzmann machines using the generalized Pythagorean Theorem from information geometry. The main conclusion is that counterintuitively, the variance may decrease as the model is made larger. There are probably some interesting ideas here, but there isn't a clear take-away message, and it's not clear how far this goes beyond previous work on estimation of exponential families (which is a well-studied topic).
Some of the reviewers caught mathematical errors in the original draft; the revised version fixed these, but did so partly by removing a substantial part of the paper about hidden variables. The analysis, then, is limited to fully observed Boltzmann machines, which have less practical interest to the field of deep learning.
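As context for the reviews below: the decomposition at issue can be sketched as follows (a minimal sketch in illustrative notation, not taken from the paper). Writing $P^{*}$ for the true distribution, $P_{B}$ for its closest point (the m-projection) in the fully observed Boltzmann machine model $B$, and $\hat{P}_{B}$ for the maximum-likelihood estimate fitted to a finite sample, the generalized Pythagorean theorem of information geometry gives

$$
D_{\mathrm{KL}}\bigl(P^{*} \,\|\, \hat{P}_{B}\bigr)
\;=\;
\underbrace{D_{\mathrm{KL}}\bigl(P^{*} \,\|\, P_{B}\bigr)}_{\text{bias term}}
\;+\;
\underbrace{D_{\mathrm{KL}}\bigl(P_{B} \,\|\, \hat{P}_{B}\bigr)}_{\text{variance term}},
$$

since $P_{B}$ and $\hat{P}_{B}$ both lie in the e-flat model manifold. As the reviews describe it, the paper then works with expectations of the squared divergences and lower-bounds the variance term via the Cramér–Rao bound, which underlies the claim that the variance need not grow as more parameters are added.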
| {
"review_id": [
"rkv5HU5gG",
"Sy_JLe5lz",
"HknVXoHez"
],
"review": [
{
"title": "title: Bias-Variance Decomposition for Boltzmann Machines Review",
"paper_summary": null,
"main_review": "main_review: Summary: The goal of this paper is to analyze the effectiveness and generalizability of deep learning. This authors present a theoretical analysis of bias-variance decomposition for hierarchical graphical models, specifically Boltzmann Machines (BM). The analysis follows a geometric formulation of hierarchical probability distributions. The authors describe a general log-linear model and other variations of it such as the standard BM, arbitrary-order BM and Restricted BM to motivate their approach. \n\nThe authors first define the bias-variance decomposition of KL divergence using Pythagorean theorem followed by applying Cramer-Rao bound and show that the variance decreases when adding more parameters in the model. \n\nPositives:\n-The paper is clearly written and the analysis is helpful to show the effect of adding more parameters on the variance and bias in a general architecture (the Boltzmann Machines)\n-The authors did a good job covering general probabilistic models and progression of models starting with the log-linear model.\n-The authors provided an example to illustrate the theory, by showing that the variance decreases with the increase of model parameters.\n\nQuestions:\n-How does this analysis apply to other deep learning architectures such as Convolutional Neural Networks?\n-How does this analysis apply to other frameworks such as variational auto-encoders and generative adversarial networks?",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: This paper uses an information geometric view on hierarchical models to discuss bias a variance in Boltzmann machines, presenting interesting conclusions, whereby some care seems to be needed in the derivations and discussion. ",
"paper_summary": null,
"main_review": "main_review: This paper uses an information geometric view on hierarchical models to discuss a bias - variance decomposition in Boltzmann machines, presenting interesting conclusions, whereby some more care appears to be needed for making these claims. \n\nThe paper arrives at the main conclusion that it is possible to reduce both the bias and the variance in a hierarchical model. The discussion is not specific to deep learning nor to Boltzmann machines, but actually addresses hierarchical exponential family models. The methods pertaining hierarchical models are interesting and presented in a clear way. My concern are the following points: \n\nThe main theorem presents only a lower bound, meaning that it provides no guarantee that the variance can indeed be reduced. \n\nThe paper seems to ignore that a model with hidden variables may be singular, in which case the Fisher metric is not positive definite and the Cramer Rao bound has no meaning. This interferes with the claims and derivations made in the paper in the case of models with hidden variables. The problem seems to lie in the fact that the presented derivations assume that an optimal distribution in the data manifold is given (see Theorem 1 and proof), effectively making this a discussion about a fully observed hierarchical model. In particular, it is not further specified how to obtain θˆB(s) in page 6 before (13). \n\nAlso, in page 5 the paper states that ``it is known that the EM-algorithm can obtain the global optimum of Equation (12) (Amari, 2016, Section 8.1.3)''. However, what is shown in that reference is only that: (Theorem 8.2., Amari, 2016) ``The KL-divergence decreases monotonically by repeating the E-step and the M-step. Hence, the algorithm converges to an equilibrium.'' A model with hidden variables can have several global and local optimisers (see, e.g. https://arxiv.org/abs/1709.05276). The critical points of the EM algorithm can have a non trivial structure, as has been observed in the case of non negative rank matrix varieties (see, e.g., https://arxiv.org/pdf/1312.5634.pdf). \n\nOTHER\n\nIn page 3, ``S_\\beta is e-flat and S_\\alpha ... '', should this not be the other way around? (See also page 5 last paragraph of Section 2.) Please also indicate the precise location in the provided reference. \n\nAll pages up to page 5 are introduction. Section 2.3. as presented is very vague and does not add much to the discussion. \n\nIn page 7, please explain E ψ(θˆ )^2 −ψ(θ∗ )^2=0 \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: The paper presents a interesting analysis revealing the usefulness of analysing the generation ability of ML models based on insights from information geometry. However, a lower bound on the KL-divergence is less informative in practice. ",
"paper_summary": null,
"main_review": "main_review: Summary of the paper:\nThe paper derives a lower bound on the expected squared KL-divergence between a true distribution and the sample based maximum likelihood estimate (MLE) of that distribution modelled by an Boltzmann machine (BM) based on methods from information geometry. This KL-divergence is first split into the squared KL-divergence between the true distribution and MLE of that distribution, and the expected squared KL-divergence between the MLE of the true distribution and the sample based MLE (in a similar spirit to splitting the excess error into approximation and estimation error in statistical learning theory). The letter is than lower bounded (leading to a lower bound on the overall KL-divergence) by a term which does not necessarily increase if the number of model parameters is increased. \n\n\nPros:\n- Using insights from information geometry opens up a very interesting and (to my knowledge) new approach for analysing the generalisation ability of ML models.\n- I am not an expert on information geometry and I did not find the time to follow all the steps of the proof in detail, but the analysis seems to be correct.\n\nCons:\n- The fact that the lower bound does not necessary increase with a growing number of parameters does not guarantee that the same holds true for the KL-divergence (in this sense an upper bound would be more informative). Therefore, it is not clear how much of insights the theoretical analysis gives for practitioners (it could be nice to analyse the tightness of the bound for toy models).\n- Another drawback reading the practical impact is, that the theorem bounds the expected squared KL-divergence between a true distribution and the sample based MLE, while training minimises the divergence between the empirical distribution and the model distribution ( i.e. the sample based MLE in the optimal case), and the theorem does not show the dependency on the letter. \n\nI found some parts difficulty to understand and clarity could be improved e.g. by\n- explaining why minimising KL(\\hat P, P_B) is equivalent to minimising the KL-divergence between the empirical distribution and the Gibbs distribution \\Phi.\n- explaining in which sense the formula on page 4 is equivalent to “the learning equation of Boltzmann machines”.\n- explaining what is the MLE of the true distribution (I assume the closest distribution in the set of distributions that can be modelled by the BM).\n\nMinor comments:\n- page 1: and DBMs….(Hinton et al., 2006) : The paper describes deep belief networks (DBNs) not DBMs \n- \\theta is used to describe the function in eq. (2) as well as the BM parameters in Section 2.2 \n- page 5: “nodes H is” -> “nodes H are” \n\n\n\nREVISION:\nThanks to the reviewers for replying to my comments and making the changes. I think they improved the paper. On the other hand the other reviewers raised valid questions, that led to my decision to not change the overall rating of the paper.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.6666666865348816,
0.4444444477558136,
0.4444444477558136
],
"confidence": [
1,
1,
0.25
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Thank you for the review",
"Thank you for the review",
"Thank you for the review"
],
"comment": [
"Thank you for your valuable comments. We have revised our manuscript according to reviewers’ comments and corrected mistakes. In particular, the bound provided in Theorem 1 has been revised and the example provided after Theorem 1 has been updated. Moreover, we have newly added empirical evaluation of our theoretical result.\nAnalyzing the relationship between our model and such neural network models suggested in you comments, in particular probabilistic models of variational auto-encoders and generative adversarial networks, is not in the scope of this paper but our exciting future topic.",
"Thank you for your valuable comments. We have revised our manuscript according to reviewers’ comments and corrected mistakes. In particular, the bound provided in Theorem 1 has been revised and the example provided after Theorem 1 has been updated. Moreover, we have newly added empirical evaluation of our theoretical result. In the following, we answer each question.\n\n> REVIEWER 2: The main theorem presents only a lower bound, meaning that it provides no guarantee that the variance can indeed be reduced. \n\n> ANSWER: We have additionally conducted empirical evaluation of our theoretical lower bound in the revised version. Please check Section 4 and Figure 2. We confirm that our lower bound is quite tight in practice when the sample size N becomes large and the variance reduction actually happens.\n\n> REVIEWER 2: The paper seems to ignore that a model with hidden variables may be singular.\n\n> ANSWER: Thank you very much for pointing this out. You are right and our theoretical results cannot be directly applied to models with hidden variables. Thus we have removed models with hidden variables from our paper and newly added discussion about this issue in the last paragraph in Section 2.3. Please note that our main theoretical contribution is still fully valid.\n\n> REVIEWER 2: A model with hidden variables can have several global and local optimisers. In particular, it is not further specified how to obtain θˆB(s) in page 6 before (13).\n\n> ANSWER: Thank you very much for pointing this out. You are right and we have revised our text (now in Appendix as we have removed the section of models with hidden variables).\n\n> REVIEWER 2: In page 3, ``S_\\beta is e-flat and S_\\alpha ... '', should this not be the other way around? (See also page 5 last paragraph of Section 2.) Please also indicate the precise location in the provided reference. \n\n> ANSWER: You are right. This should be the other way around. We have corrected this in the revised version and also clarified the location of the reference (Appendix, after Eq.(17)).\n\n> REVIEWER 2: All pages up to page 5 are introduction. Section 2.3. as presented is very vague and does not add much to the discussion. \n\n> ANSWER: Thank you for pointing this out. We have revised and extended Section 2.3. Although Section 2.1 is preliminary, the other parts of Section 2 are not introduction but necessary discussion to formulate the family of Boltzmann machines as the log-linear model.\n\n> REVIEWER 2: In page 7, please explain E ψ(θˆ )^2 −ψ(θ∗ )^2=0 \n\n> ANSWER: Thank you for pointing this out. This was wrong and now corrected. This is indeed irreducible error as the Fisher information vanishes. Please check the revised Theorem 1 and its proof.\n",
"Thank you for your valuable comments. We have revised our manuscript according to reviewers’ comments and corrected mistakes. In particular, the bound provided in Theorem 1 has been revised and the example provided Theorem 1 has been updated. Moreover, we have newly added empirical evaluation of our theoretical result. In the following, we answer each question.\n\n> REVIEWER 1: it is not clear how much of insights the theoretical analysis gives for practitioners (it could be nice to analyse the tightness of the bound for toy models).\n\n> ANSWER: We have additionally conducted empirical evaluation of the tightness of our theoretical lower bound in the revised version. Please check Section 4 and Figure 2. We confirm that our lower bound is quite tight in practice.\n\n> REVIEWER 1: the theorem bounds the expected squared KL-divergence between a true distribution and the sample based MLE, while training minimises the divergence between the empirical distribution and the model distribution, and the theorem does not show the dependency on the letter. \n\n> ANSWER: The KL-divergence between the empirical distribution and the model distribution in each training monotonically decreases if we include more parameters (see Equation (15)). But overfitting surely occurs if we include too many parameters and this is our motivation of performing bias-variance decomposition to analyze the generalizability of BMs. We have added this discussion in the first paragraph in P.5.\n\n> REVIEWER 1: explaining why minimising KL(\\hat P, P_B) is equivalent to minimising the KL-divergence between the empirical distribution and the Gibbs distribution \\Phi.\n\n> ANSWER: This is because \\hat P is the empirical distribution and P_B coincides with the Gibbs distribution \\Phi.\n\n> REVIEWER 1: explaining in which sense the formula on page 4 is equivalent to “the learning equation of Boltzmann machines”.\n\n> ANSWER: This is because \\hat{\\eta}(x) and \\eta_B(x) coincide with the expectation for the outcome x with respect to the empirical distribution obtained from data and the model distribution represented by the Boltzmann Machine B, respectively. We have revised the text to clarify this point.\n\n> REVIEWER 1: explaining what is the MLE of the true distribution (I assume the closest distribution in the set of distributions that can be modelled by the BM).\n\n> ANSWER: You are right. The MLE of the true distribution is the closest distribution in the set of distributions that can be modelled by the BM in terms of the KL divergence. We have revised the text to clarify this point.\n\n> REVIEWER 1: page 1: and DBMs….(Hinton et al., 2006) : The paper describes deep belief networks (DBNs) not DBMs \n\n> ANSWER: We have removed this citation and replaced with [Goodfellow et al. (2016, Chapter 20)].\n\n> REVIEWER 1: \\theta is used to describe the function in eq. (2) as well as the BM parameters in Section 2.2 \n\n> ANSWER: We have changed the symbol in Eq.(2) for consistency.\n\n> REVIEWER 1: page 5: “nodes H is” -> “nodes H are” \n\n> ANSWER: We have corrected this.\n"
]
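The responses above refer to the "learning equation of Boltzmann machines" and to the equivalence between minimizing KL(\hat{P}, P_B) and maximum likelihood. For reference, a minimal LaTeX sketch of the moment-matching condition being described, assuming a generic log-linear parameterization; the symbols theta, phi, and Z are illustrative and may not match the paper's notation.

```latex
% Hedged sketch: KL minimization = MLE = moment matching for a log-linear model,
% assuming p_\theta(x) = \exp(\theta^\top \phi(x)) / Z(\theta) with partition function Z.
\begin{align*}
  \min_\theta \mathrm{KL}\big(\hat{P} \,\|\, p_\theta\big)
    &\;\equiv\; \max_\theta \sum_x \hat{P}(x)\,\log p_\theta(x),
    \qquad p_\theta(x) = \frac{\exp\!\big(\theta^\top \phi(x)\big)}{Z(\theta)}, \\
  \nabla_\theta \sum_x \hat{P}(x)\,\log p_\theta(x) = 0
    &\;\Longleftrightarrow\;
    \mathbb{E}_{\hat{P}}\big[\phi(x)\big] \;=\; \mathbb{E}_{p_\theta}\big[\phi(x)\big].
\end{align*}
```

In this form, the empirical and model expectations on the two sides of the last line play the roles of \hat{\eta}(x) and \eta_B(x) mentioned in the responses.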
} | {
"paperhash": [
"ackley|a_learning_algorithm_for_boltzmann_machines",
"amari|information_geometry_on_hierarchy_of_probability_distributions",
"amari|information_geometry_and_its_applications",
"amari|information_geometry_of_boltzmann_machines",
"dinh|sharp_minima_can_generalize_for_deep_nets",
"faber|a_closer_look_at_the_bias-variance_trade-off_in_multivariate_calibration",
"gierz|continuous_lattices_and_domains",
"goodfellow|deep_learning",
"hinton|training_products_of_experts_by_minimizing_contrastive_divergence",
"ito|encyclopedic_dictionary_of_mathematics",
"kawaguchi|generalization_in_deep_learning",
"keskar|on_large-batch_training_for_deep_learning:_generalization_gap_and_sharp_minima",
"|representational_power_of_restricted_boltzmann_machines_and_deep_belief_networks",
"lecun|deep_learning",
"min|interpretable_sparse_high-order_boltzmann_machines",
"nakahara|information-geometric_measure_for_neural_spikes",
"nakahara|a_comparison_of_descriptive_models_of_a_single_spike_train_by_information-geometric_measure",
"neyshabur|exploring_generalization_in_deep_learning",
"salakhutdinov|deep_boltzmann_machines",
"salakhutdinov|an_efficient_learning_procedure_for_deep_boltzmann_machines",
"sejnowski|higher-order_boltzmann_machines",
"smolensky|information_processing_in_dynamical_systems:_foundations_of_harmony_theory",
"sugiyama|information_decomposition_on_structured_space",
"sugiyama|tensor_balancing_on_statistical_manifold",
"watanabe|almost_all_learning_machines_are_singular",
"wu|towards_understanding_generalization_of_deep_learning:_perspective_of_loss_landscapes",
"yamazaki|singularities_in_complete_bipartite_graph-type_boltzmann_machines_and_upper_bounds_of_stochastic_complexities"
],
"title": [
"A learning algorithm for Boltzmann machines",
"Information geometry on hierarchy of probability distributions",
"Information Geometry and Its Applications",
"Information geometry of Boltzmann machines",
"Sharp minima can generalize for deep nets",
"A closer look at the bias-variance trade-off in multivariate calibration",
"Continuous Lattices and Domains",
"Deep Learning",
"Training products of experts by minimizing contrastive divergence",
"Encyclopedic Dictionary of Mathematics",
"Generalization in deep learning",
"On large-batch training for deep learning: Generalization gap and sharp minima",
"Representational power of restricted Boltzmann machines and deep belief networks",
"Deep learning",
"Interpretable sparse high-order Boltzmann machines",
"Information-geometric measure for neural spikes",
"A comparison of descriptive models of a single spike train by information-geometric measure",
"Exploring generalization in deep learning",
"Deep Boltzmann machines",
"An efficient learning procedure for deep Boltzmann machines",
"Higher-order Boltzmann machines",
"Information processing in dynamical systems: Foundations of harmony theory",
"Information decomposition on structured space",
"Tensor balancing on statistical manifold",
"Almost all learning machines are singular",
"Towards understanding generalization of deep learning: Perspective of loss landscapes",
"Singularities in complete bipartite graph-type Boltzmann machines and upper bounds of stochastic complexities"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"d h ackley",
"g e hinton",
"t j sejnowski"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"s amari"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"s amari"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"s amari",
"k kurata",
"h nagaoka"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"l dinh",
"r pascanu",
"s bengio",
"y bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"n m faber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"g gierz",
"k h hofmann",
"k keimel",
"j d lawson",
"m mislove",
"d s scott"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"i goodfellow",
"y bengio",
"a courville"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"g e hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"k ito"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"k kawaguchi",
"l p kaelbling",
"y bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"n s keskar",
"d mudigere",
"j nocedal",
"m smelyanskiy",
"p t p tang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"n ",
"le roux",
"y bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"y lecun",
"y bengio",
"g hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"m r min",
"x ning",
"c cheng",
"m gerstein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"h nakahara",
"s amari"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"h nakahara",
"s amari",
"b j richmond"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"b neyshabur",
"s bhojanapalli",
"d mcallester",
"n srebro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"r salakhutdinov",
"g e hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"r salakhutdinov",
"g e hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"t j sejnowski"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"p smolensky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"m sugiyama",
"h nakahara",
"k tsuda"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"m sugiyama",
"h nakahara",
"k tsuda"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"s watanabe"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"l wu",
"z zhu",
"w e "
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"k yamazaki",
"s watanabe"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"1703.04933v2",
"",
"",
"1807.07987v2",
"",
"",
"1710.05468v9",
"",
"",
"1807.07987v2",
"",
"",
"",
"1706.08947v2",
"",
"",
"",
"",
"1601.05533v2",
"1702.08142v3",
"",
"1706.10239v2",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.518519 | 0.75 | null | null | null | null | null | rkMt1bWAZ |
||
krishnan|neumann_optimizer_a_practical_optimization_algorithm_for_deep_neural_networks|ICLR_cc_2018_Conference | 1712.03298v1 | Neumann Optimizer: A Practical Optimization Algorithm for Deep Neural Networks | Progress in deep learning is slowed by the days or weeks it takes to train large models. The natural solution of using more hardware is limited by diminishing returns, and leads to inefficient use of additional resources. In this paper, we present a large batch, stochastic optimization algorithm that is both faster than widely used algorithms for fixed amounts of computation, and also scales up substantially better as more computational resources become available. Our algorithm implicitly computes the inverse Hessian of each mini-batch to produce descent directions; we do so without either an explicit approximation to the Hessian or Hessian-vector products. We demonstrate the effectiveness of our algorithm by successfully training large ImageNet models (InceptionV3, ResnetV1-50, ResnetV1-101 and InceptionResnetV2) with mini-batch sizes of up to 32000 with no loss in validation error relative to current baselines, and no increase in the total number of steps. At smaller mini-batch sizes, our optimizer improves the validation error in these models by 0.8-0.9\%. Alternatively, we can trade off this accuracy to reduce the number of training steps needed by roughly 10-30\%. Our work is practical and easily usable by others -- only one hyperparameter (learning rate) needs tuning, and furthermore, the algorithm is as computationally cheap as the commonly used Adam optimizer. | {
"name": [
"shankar krishnan",
"ying xiao",
"rif a saurous"
],
"affiliation": [
{
"laboratory": "",
"institution": "Machine Perception",
"location": "{'addrLine': '1600 Amphitheatre Parkway Mountain View', 'postCode': '94043', 'region': 'CA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Machine Perception",
"location": "{'addrLine': '1600 Amphitheatre Parkway Mountain View', 'postCode': '94043', 'region': 'CA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Machine Perception",
"location": "{'addrLine': '1600 Amphitheatre Parkway Mountain View', 'postCode': '94043', 'region': 'CA', 'country': 'USA'}"
}
]
} | null | [
"Computer Science",
"Mathematics"
] | International Conference on Learning Representations | 2017-12-08 | 43 | 18 | null | null | null | null | null | null | null | true | Pros:
+ Clearly written paper.
+ Easily implemented algorithm that appears to have excellent scaling properties and can even improve on validation error in some cases.
+ Thorough evaluation against the state of the art.
Cons:
- No theoretical guarantees for the algorithm.
This paper belongs in ICLR if there is enough space.
| {
"review_id": [
"Hy5t5WIxG",
"ByPYAMtgG",
"BkEmOSvef"
],
"review": [
{
"title": "title: See below.",
"paper_summary": null,
"main_review": "main_review: \nThis paper presents a new 2nd-order algorithm that implicitly uses curvature information, and it shows the intuition behind the approximation schemes in the algorithms and also validates the heuristics in various experiments. The method involves using Neumann Series and Richardson iteration to avoid Hessian-vector product in second order method for NN. In the actual performance, the paper presents both practical efficiency and better generalization error in different deep neural networks for image classification tasks, and the authors also show differences according to different settings, e.g., Batch Size, Regularization. The numerical examples are relatively clear and easy to figure out details.\n\n1. While the paper presents the algorithm as an optimization algorithm, although it gets better learning performance, it would be interesting to see how well it is as an optimizer. For example, one simple experiment would be showing how it works for convex problems, e.g., logistic regression. Realistic DNN systems are very complex, and evaluating the method in a simple setting would help a lot in determining what if anything is novel about the method.\n\n2. Also, for deep learning problems, it would be more convincing to see how different initialization can affect the performances. \n\n3. Although the authors present their algorithm as a second order method at beginning, the final algorithm is kind of like a complex momentum SGD with limited memory. Rather than simply throwing out a new method with a new name, it would be helpful to understand what the steps of this method are implicitly doing. Please explain more about this.\n\n4. It said that the algorithm is hyperparameter free except for learning rate. However, it is hard to see why there is no need to tune other hyperparameters, e.g., Cubic Regularizer, Repulsive Regularizer. The effect/sensitivity of hyperparameters for second order methods are quite different than hyperparameters for first order methods, and it is of interest to know how hyperparameters for implicit second order methods perform.\n\n5. For Section 4.2, the well know benefit by using large batch size to train models is that it could reduce training time and epochs. However, from Table 3, there is no such phenomenon. Please explain.\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: [UPDATED] Interesting idea, nice experiments, good motivation but lacks theoretical understanding",
"paper_summary": null,
"main_review": "main_review: The paper proposes a new algorithm, where they claim to use Hessian implicitly and are using a motivation from power-series. In general, I like the paper.\n\nTo me, Algorithm 1 looks like some kind of proximal-point type algorithm. Algorithm 2 is more heuristic approach, with a couple of parameters to tune it. Given the fact that there is convergence analysis or similar theoretical results, I would expect to have much more numerical experiments. E.g. there is no results of Algorithm 1. I know it serves as a motivation, but it would be nice to see how it works.\n\nOtherwise, the paper is clearly written.\nThe topic is important, but I am a bit afraid of significance. One thing what I do not understand is, that why they did not compare with Adam? (they mention Adam algorithm soo many times, that it should be compared to).\n\nI am also not sure, how sensitive the results are for different datasets? Algorithm 2 really needs so many parameters (not just learning rate). How \\alpha, \\beta, \\gamma, \\mu, \\eta, K influence the speed? how sensitive is the algorithm for different choices of those parameters?\n\n\n ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Neumann Optimizer: A Practical Optimization Algorithm for Deep Neural Networks",
"paper_summary": null,
"main_review": "main_review: Summary: \nThe paper proposes Neumman optimizer, which makes some adjustments to the idealized Neumman algorithm to improve performance and stability in training. The paper also provides the effectiveness of the algorithm by training ImageNet models (Inception-V3, Resnet-50, Resnet-101, and Inception-Resnet-V2). \n \nComments:\nI really appreciate the author(s) by providing experiments using real models on the ImageNet dataset. The algorithm seems to be easily used in practice. \n\nI do not have many comments for this paper since it focuses only in practical view without theory guarantee rigorously. \n\nAs you mention in the paper that the algorithm uses the same amount of computation and memory as Adam optimizer, but could you please provide the reason why you only compare Neumann Optimizer with Baseline RMSProp but not with Adam? As we know, Adam is currently very well-known algorithm to train DNN. Do you think it would be interesting if you could compare the efficiency of Neumann optimizer with Adam? I understand that you are trying to improve the existing results with their optimizer, but this paper also introduces new algorithm. \n\nThe question is that, with the given architectures and dataset, what algorithm should people consider to use between Neumann optimizer and Adam? Why should people use Neumann optimizer but not Adam, which is already very well-known? If Neumann optimizer can surpass Adam on ImageNet, I think your algorithm will be widely used after being published. \n \nMinor comments:\nPage 3, in eq. (3): missing “-“ sign\nPage 3, in eq. (6): missing “transpose” on \\nabla \\hat{f}\nPage 4, first equation: O(|| \\eta*mu_t ||^2)\nPage 5, in eq. (9): m_{k-1}\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.5555555820465088,
0.5555555820465088
],
"confidence": [
0.5,
0.75,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Details on our experiments",
"Neumann Optimizer",
"Combined response to AnonReviewer3's comments",
"Changes made to the updated paper",
"Combined response to AnonReviewer1's comments",
"Overall response to reviewer's comments",
"Combined response to AnonReviewer2's comments",
"Combined response to Boris Ginsburg's comments"
],
"comment": [
"On the Resnet-V1, each P100 had 32 examples, so 32000 corresponds to 1000 GPUs. This is updated in Table 3 now.\n\nTo your question about small batches -- we ran the algorithm in asynchronous mode ( mini-batches of 32, with 50 workers doing separate mini-batches); the final output was quite a bit worse in terms of test accuracy (76.8% instead of 79.2%). It’s not clear whether it’s the Neumann algorithm with batch size 32 or the async that causes the degradation though. So at the least algorithm doesn’t blow up with small batches, but we haven’t explored this setting enough to say anything conclusive.\n",
"Thank you for adding more experiments. \n\nIn my opinion, it is hard to judge your paper since you do not have any theoretical guarantee rigorously. But it seems that your algorithm is promising. Therefore, I increased the rating score for giving it a chance to be published. I hope the practitioners will try to use it and see if there is any drawback. \n",
"Thank you AnonReviewer3 for your thoughts and comments: we address your comments below and hope to clear up one misconception (caused by poor labelling of Table 3):\n\n1. We have added an experiment in Appendix B to show the results on a synthetic logistic regression problem. We compared the Neumann optimizer with SGD, Adam and a Newton algorithm for varying batch sizes. Our method outperforms SGD and Adam consistently, and while Newton’s method descends to a better loss, it comes at a steep per-step cost. We believe there are other large batch methods like Nesterov and SVRG that might get to lower losses than our method. However, none of these algorithms perform well on training a deep neural net. \n\n2. We've included an Appendix D with a new experiment illustrating that different initializations and trajectories of optimization all give the same quality model output (for the Inception V3 model).\n\n3. We're not quite sure what the reviewer is looking for here: it seems that Section 2 gives a derivation of the method: the method is implicitly inverting the Hessian (which is convexified after regularization) of a mini-batch. Our algorithm crucially differs from standard momentum in that gradient evaluation occurs at a different point from the current iterate (in Algorithm 1), and we are not applying an exponential decay (a standard momentum update would blow up if you did this).\n\n4. We agree that it is of interest to further study the sensitivity to hyperparameters. The results that we have hold for ImageNet, but also for CIFAR-10 and CIFAR-100 with no change in hyperparameters, so we think that the results are likely to carry over to most modern CNN architectures on image datasets -- the hyperparameter choice will likely work out of the box (much like the beta_1, beta_2 and epsilon parameters in Adam). We agree that there appears to be quite a few hyperparameters, but \\alpha and \\beta are regularization coefficients, so they have to be roughly scaled to the loss; \\gamma is a moving average coefficient and never needs to be changed; \\mu is dependent only on time, not the model; finally, training is quite insensitive to K (as mentioned in Section 3.2). Thus, the only hyperparameter that needs to be specified is the learning rate \\eta, and that does determine the speed of optimization.\n\n5. The epochs listed in Table 3 are total epochs (i.e., total sum of all samples seen by all workers), so using twice as many workers is in fact twice as fast (we've updated the table to clarify this). We're a little concerned that we were not clear on the significance of the experimental results: our algorithm scales up to a batch size of 32000 (beating state-of-the-art for large batch training), and we obtain linear speedups across this regime i.e., we can run 500 workers, in 1/10th the time that it takes the usual 50 worker baseline. We think of this as the major contribution of our work.\n",
"We have made the following changes in the new version of the paper that we uploaded. \n\nWe have added some new experiments \n\n(1) Comparison to Adam in Figure 1,\n(2) Multiple Initializations in Appendix D, and \n(3) A Stochastic Convex Problem in Appendix B, along with small edits suggested by reviewers.\n",
"Thank you AnonReviewer1 for your feedback and comments.\n\nWe ran a new set of experiments comparing Adam, RMSprop and Neumann (Figure 1). Adam achieves similar (or worse) results to the RMSprop baselines: in comparison to our Neumann optimizer, the training is slower, the output model is lower quality, and the optimizer scales poorly. When training with Adam, we observed instability with default parameters (especially, epsilon). We changed it to 0.01 and 1.0 and have two runs which show dramatically different results. Our initial reason for not including comparisons to Adam was that we wanted to use standard models and training parameters (i.e., the Inception and Resnet papers use RMSprop).\n\nWe hope that practitioners will consider Neumann over Adam for the following reasons:\n- Significantly higher quality output models when training using few GPUs.\n- Ability to scale up to vastly more GPUs/TPUs, and overall decreased training time.\n\nWe’ve incorporated your minor comments -- thanks again!\n",
"We thank all the reviewers for their feedback. Before we address individual comments, we would like to mention some key themes in this paper that seem to have been lost mainly due to our presentation in the experiments section. \n\n(1) Training deep nets fast (in wall time/parameter updates) without affecting validation performance is important. Previous attempts to scale up, using large batch size and parallelization, hit limits which we avoid. For example, using 500 workers computing gradients in parallel, we can train Resnet-V1-50 to 76.5% accuracy in a little less than 2 hours. In contrast, in Goyal et al. “Accurate, large minibatch SGD: Training Imagenet in 1 hour.”, the maximum batch size was 8000 (equivalent to 250 workers), and in You et al. “Scaling SGD batch size to 32k for imagenet training”, there is a substantial 0.4-0.7% degradation in final model performance.\n\n(2) Our method actually achieves better validation performance (~1% better) compared to the published best performance on image models in multiple architectures.\n",
"Thank you AnonReviewer2 for your comments. Here are our responses:\n\nWe have added a number of new experiments, including (1) Solving a stochastic convex optimization problem (where the Neumann optimizer is far better than SGD or Adam), (2) Comparisons with Adam on Inception-V3 (see below) and (3) Multiple runs of the Neumann algorithm on Inception-V3 showing that the previous experiments are reproducible.\n\nTo the comment about running Algorithm 1: we’ve run it on stochastic convex problems before, where it performs much better than either SGD or Adam. On deep neural nets, our earlier experience with similar “two-loop” algorithms (i.e., freeze the mini-batch, and perform substantial inner-loop computation) lead us to the conclusion that Algorithm 1 would most likely not perform very well at training deep neural nets. The main difficulty is that the inner loop iterations “overfit” to the mini-batch. As you mentioned, this is meant to purely motivational for Algorithm 2. \n\nAdam achieves similar (or worse) results to the RMSprop baselines (Figure 1): in comparison to our Neumann optimizer, the training is slower, the output model is lower quality, and the optimizer scales poorly. When training with Adam, we observed instability with default parameters (especially, epsilon). We changed it to 0.01 and 1.0 and have two runs which show dramatically different results. Our initial reason for not including comparisons to Adam was that we wanted to use standard models and training parameters (i.e., the Inception and Resnet papers use RMSprop).\n\nWe think that the significance in our paper lies in the strong experimental results:\n1. Significantly improved accuracy in output models (using a small number of workers) over published baselines -- i.e., just switching over to our optimizer will increase accuracy by 0.8-0.9%.\n2. Excellent scaling behaviour (even using a very large number of workers).\nFor example, our experimental results for (2) are strictly stronger than those in the literature for large batch training.\n\nThe results that we have hold for ImageNet, but also for CIFAR-10 and CIFAR-100 with no change in hyperparameters, so we think that the results are likely to carry over to most modern CNN architectures on image datasets -- the hyperparameter choice will likely work out of the box (much like the beta_1, beta_2 and epsilon parameters in Adam). We agree that there appears to be quite a few hyperparameters, but \\alpha and \\beta are regularization coefficients, so they have to be roughly scaled to the loss; \\gamma is a moving average coefficient and never needs to be changed; \\mu is dependent only on time, not the model; finally, training is quite insensitive to K (as mentioned in Section 3.2). Thus, the only hyperparameter that needs to be specified is the learning rate \\eta, and that does determine the speed of optimization.\n",
"Thanks for your interest and detailed feedback, Boris. We’ve incorporated most of your feedback, and hope to answer some of your questions below:\n\n1. We’ve added the small calculation for this in Section 3.1.\t\n\n2. A couple of things are going on here:\ni) We allow the mini-batch to vary in algorithm 2. This is a pretty significant change (we like to think of solving a stochastic bootstrap style subproblem instead of deterministic ones).\nii) We change the notation to offset the w_t (so that w_t in Algorithm 2 actually correspond to w_t + \\mu m_t in Algorithm 1). This is a pure notational change, and has no effect on the iteration -- we could also have done the same thing for Algorithm 1 (i.e., we could have unrolled m_t as the sum of gradients).\n\n3. It seems somewhat insensitive to period of the resets, but the resets are necessary, especially at the start.\n\n4. The coefficients we have for m_{t-1} and d_t aren’t a convex combination, and additionally, we subtract an extra \\eta d_t from the update in Line 11 (this subtraction is correct, and somewhat surprising...you accidentally identified it as a typo below). It’s somewhat difficult to reinterpret as momentum.\n\n5. We have not tried on large models without batch normalization. Since most convolutional architectures include batch norm, we had not thought to have experiments along this axis.\n6. By default, all the models we used include weight decay -- so Figure 5 (the ablation experiments) should give you an idea of what happens if you use weight decay and not cubic + repulsive.\n\nTypos:\nWe have all except (5) and (6) -- thank you! (5) and (6) are actually correct -- it definitely looks a little strange, but what we’ve done is to keep track of offset variables w_t + \\mu m_t.\n"
]
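The abstract, reviews, and responses above describe the optimizer as implicitly applying the inverse of a (regularized) mini-batch Hessian through a Neumann series / Richardson iteration, without forming the Hessian or explicit Hessian-vector products. The following is a minimal, self-contained numpy sketch of the underlying series identity only, not the paper's Algorithm 2; the function name and the toy matrix are illustrative. For positive-definite H and 0 < eta <= 1/||H||, H^{-1} v = eta * sum_k (I - eta*H)^k v.

```python
import numpy as np

def neumann_inverse_hessian_vector(H, v, eta, num_terms=500):
    """Approximate H^{-1} v with the truncated Neumann series
    eta * sum_{k>=0} (I - eta*H)^k v, which converges when the
    spectral radius of (I - eta*H) is strictly less than 1."""
    total = np.zeros_like(v)
    term = v.copy()
    for _ in range(num_terms):
        total += term                    # accumulate (I - eta*H)^k v
        term = term - eta * (H @ term)   # advance to the next power of (I - eta*H)
    return eta * total

# Tiny sanity check on a synthetic positive-definite "Hessian".
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A @ A.T + 5.0 * np.eye(5)        # well-conditioned SPD matrix
v = rng.standard_normal(5)
eta = 1.0 / np.linalg.norm(H, 2)     # step size that guarantees convergence
approx = neumann_inverse_hessian_vector(H, v, eta)
print(np.max(np.abs(approx - np.linalg.solve(H, v))))   # should be close to 0
```

Note that this sketch still multiplies H by vectors; per the responses above, the reported algorithm goes one step further and realizes that product implicitly by evaluating gradients at displaced iterates.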
} | {
"paperhash": [
"you|imagenet_training_in_minutes",
"you|scaling_sgd_batch_size_to_32k_for_imagenet_training",
"goyal|accurate,_large_minibatch_sgd:_training_imagenet_in_1_hour",
"wang|tacotron:_towards_end-to-end_speech_synthesis",
"chaudhari|entropy-sgd:_biasing_gradient_descent_into_wide_valleys",
"sagun|eigenvalues_of_the_hessian_in_deep_learning:_singularity_and_beyond",
"keskar|on_large-batch_training_for_deep_learning:_generalization_gap_and_sharp_minima",
"bottou|optimization_methods_for_large-scale_machine_learning",
"he|identity_mappings_in_deep_residual_networks",
"grosse|a_kronecker-factored_approximate_fisher_matrix_for_convolution_layers",
"he|deep_residual_learning_for_image_recognition",
"szegedy|rethinking_the_inception_architecture_for_computer_vision",
"martens|optimizing_neural_networks_with_kronecker-factored_approximate_curvature",
"kingma|adam:_a_method_for_stochastic_optimization",
"dauphin|identifying_and_attacking_the_saddle_point_problem_in_high-dimensional_non-convex_optimization",
"zeiler|adadelta:_an_adaptive_learning_rate_method",
"pearlmutter|fast_exact_multiplication_by_the_hessian",
"robbins|a_stochastic_approximation_method",
"you|imagenet_training_in_24_minutes",
"lecun|deep_learning",
"you|large_batch_training_of_convolutional_networks",
"wang|stochastic_quasi-newton_methods_for_nonconvex_stochastic_optimization",
"szegedy|inception-v4,_inception-resnet_and_the_impact_of_residual_connections_on_learning",
"agarwal|second_order_stochastic_optimization_in_linear_time",
"keskar|adaqn:_an_adaptive_quasi-newton_algorithm_for_training_rnns",
"mokhtari|global_convergence_of_online_limited_memory_bfgs",
"defazio|saga:_a_fast_incremental_gradient_method_with_support_for_non-strongly_convex_composite_objectives",
"mokhtari|res:_regularized_stochastic_bfgs_algorithm",
"byrd|a_stochastic_quasi-newton_method_for_large-scale_optimization",
"sohl-dickstein|fast_large-scale_optimization_by_unifying_stochastic_gradient_and_quasi-newton_methods",
"shalev-shwartz|stochastic_dual_coordinate_ascent_methods_for_regularized_loss",
"vinyals|krylov_subspace_descent_for_deep_learning",
"kahan|numerical_linear_algebra",
"varga|matrix_iterative_analysis",
"martens|training_deep_and_recurrent_networks_with_hessian-free_optimization"
],
"title": [
"ImageNet Training in Minutes",
"LARGE BATCH TRAINING OF CONVOLUTIONAL NET-WORKS",
"Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour",
"TACOTRON: TOWARDS END-TO-END SPEECH SYN-THESIS",
"ENTROPY-SGD: BIASING GRADIENT DESCENT INTO WIDE VALLEYS",
"EIGENVALUES OF THE HESSIAN IN DEEP LEARNING: SINGULARITY AND BEYOND",
"ON LARGE-BATCH TRAINING FOR DEEP LEARNING: GENERALIZATION GAP AND SHARP MINIMA",
"Optimization Methods for Large-Scale Machine Learning",
"Identity Mappings in Deep Residual Networks",
"A Kronecker-factored approximate Fisher matrix for convolution layers",
"Deep Residual Learning for Image Recognition",
"Rethinking the Inception Architecture for Computer Vision",
"",
"Published as a conference paper at ICLR 2015 ADAM: A METHOD FOR STOCHASTIC OPTIMIZATION",
"Identifying and attacking the saddle point problem in high-dimensional non-convex optimization",
"ADADELTA: AN ADAPTIVE LEARNING RATE METHOD",
"Fast Exact Multiplication by the Hessian",
"Institute of Mathematical Statistics is collaborating with JSTOR to digitize, preserve, and extend access to The Annals of Mathematical Statistics",
"ImageNet Training in Minutes",
"Deep Learning",
"LARGE BATCH TRAINING OF CONVOLUTIONAL NET-WORKS",
"STOCHASTIC QUASI-NEWTON METHODS FOR NONCONVEX STOCHASTIC OPTIMIZATION",
"Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning",
"Second-Order Stochastic Optimization for Machine Learning in Linear Time",
"adaQN: An Adaptive Quasi-Newton Algorithm for Training RNNs",
"Global Convergence of Online Limited Memory BFGS",
"Complexity Analysis of the Lasso Regularization Path",
"RES: Regularized Stochastic BFGS Algorithm",
"A Stochastic Quasi-Newton Method for Large-Scale Optimization",
"Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods",
"Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization",
"Krylov Subspace Descent for Deep Learning",
"NUMERICAL LINEAR ALGEBRA",
"",
"Training Deep and Recurrent Networks with Hessian-Free Optimization"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"yang you",
"zhao zhang",
"cho-jui hsieh",
"james demmel",
"kurt keutzer",
"u c berkeley",
"u c davis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yang you",
"igor gitman",
"boris ginsburg"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"priya goyal",
"piotr dollár",
"ross girshick",
"pieter noordhuis",
"lukasz wesolowski",
"aapo kyrola",
"andrew tulloch yangqing",
"jia kaiming",
"he facebook"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yuxuan wang",
"r j skerry-ryan",
"daisy stanton",
"yonghui wu",
"ron j weiss",
"navdeep jaitly",
"zongheng yang",
"ying xiao",
"zhifeng chen",
"samy bengio",
"quoc le",
"yannis agiomyrgiannakis",
"rob clark",
"rif a saurous",
"decoder rnn"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
            }
]
},
{
"name": [
"pratik chaudhari",
"anna choromanska",
"stefano soatto",
"yann lecun",
"carlo baldassi",
"christian borgs",
"jennifer chayes",
"levent sagun",
"riccardo zecchina"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Los Angeles'}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Los Angeles'}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Microsoft Research New England",
"location": "{'settlement': 'Cambridge'}"
},
{
"laboratory": "",
"institution": "Microsoft Research New England",
"location": "{'settlement': 'Cambridge'}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"levent sagun",
"léon bottou",
"yann lecun"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
}
]
},
{
"name": [
"nitish shirish keskar",
"dheevatsa mudigere",
"jorge nocedal",
"mikhail smelyanskiy",
"ping tak",
"peter tang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"léon bottou",
"frank e curtis",
"jorge nocedal"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Lehigh University",
"location": "{'settlement': 'Bethlehem', 'region': 'PA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Northwestern University",
"location": "{'settlement': 'Evanston', 'region': 'IL', 'country': 'USA'}"
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
}
]
},
{
"name": [
"roger grosse",
"james martens"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Toronto Toronto",
"location": "{'region': 'ON', 'country': 'Canada'}"
},
{
"laboratory": "",
"institution": "University of Toronto Toronto",
"location": "{'region': 'ON', 'country': 'Canada'}"
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
}
]
},
{
"name": [
"christian szegedy",
"zbigniew wojna"
],
"affiliation": [
{
"laboratory": "",
"institution": "University College London",
"location": "{}"
},
{
"laboratory": "",
"institution": "University College London",
"location": "{}"
}
]
},
{
"name": [
"jasper snoek",
"hugo larochelle",
"ryan p adams"
],
"affiliation": [
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
}
]
},
{
"name": [
"diederik p kingma",
"jimmy lei ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yann n dauphin",
"razvan pascanu",
"caglar gulcehre",
"kyunghyun cho",
"surya ganguli",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "CIFAR Fellow",
"location": "{}"
}
]
},
{
"name": [
"matthew d zeiler"
],
"affiliation": [
{
"laboratory": "",
"institution": "Google Inc",
"location": "{'country': 'USA'}"
}
]
},
{
"name": [
"barak a pearlmutter"
],
"affiliation": [
{
"laboratory": "",
"institution": "Siemens Corporate",
"location": "{'addrLine': '755 College Road East Princeton', 'postCode': '08540', 'region': 'NJ'}"
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [
"yang you",
"zhao zhang",
"cho-jui hsieh",
"james demmel",
"kurt keutzer",
"u c berkeley",
"u c davis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nicholas g polson",
"vadim o sokolov"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Chicago",
"location": "{}"
},
{
"laboratory": "",
"institution": "George Mason University",
"location": "{}"
}
]
},
{
"name": [
"yang you",
"igor gitman",
"boris ginsburg"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xiao wang",
"shiqian ma",
"donald goldfarb",
"wei liu"
],
"affiliation": [
{
"laboratory": "",
"institution": "Chinese Academy of Sciences",
"location": "{'country': 'China'}"
},
{
"laboratory": "",
"institution": "The Chinese University of Hong Kong",
"location": "{'settlement': 'Shatin, Hong Kong', 'region': 'N. T', 'country': 'China'}"
},
{
"laboratory": "",
"institution": "Columbia University",
"location": "{'settlement': 'New York', 'region': 'NY', 'country': 'USA'}"
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"christian szegedy",
"sergey ioffe",
"vincent vanhoucke",
"alexander a alemi"
],
"affiliation": [
{
"laboratory": "",
"institution": "Google Inc",
"location": "{'addrLine': '1600 Amphitheatre Parkway Mountain View', 'region': 'CA'}"
},
{
"laboratory": "",
"institution": "Google Inc",
"location": "{'addrLine': '1600 Amphitheatre Parkway Mountain View', 'region': 'CA'}"
},
{
"laboratory": "",
"institution": "Google Inc",
"location": "{'addrLine': '1600 Amphitheatre Parkway Mountain View', 'region': 'CA'}"
},
{
"laboratory": "",
"institution": "Google Inc",
"location": "{'addrLine': '1600 Amphitheatre Parkway Mountain View', 'region': 'CA'}"
}
]
},
{
"name": [
"naman agarwal",
"brian bullins",
"elad hazan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nitish shirish keskar",
"albert s berahas"
],
"affiliation": [
{
"laboratory": "",
"institution": "Northwestern University",
"location": "{'settlement': 'Evanston', 'region': 'IL', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Northwestern University Evanston",
"location": "{'region': 'IL', 'country': 'USA'}"
}
]
},
{
"name": [
"aryan mokhtari",
"alejandro ribeiro"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Pennsylvania Philadelphia",
"location": "{'postCode': '19104', 'region': 'PA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of Pennsylvania Philadelphia",
"location": "{'postCode': '19104', 'region': 'PA', 'country': 'USA'}"
}
]
},
{
"name": [
"julien mairal"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Berkeley'}"
}
]
},
{
"name": [
"aryan mokhtari",
"alejandro ribeiro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"r h byrd",
"s l hansen",
"jorge nocedal",
"y singer"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Colorado",
"location": "{'settlement': 'Boulder', 'region': 'CO', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Northwestern University",
"location": "{'settlement': 'Evanston', 'region': 'IL', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Northwestern University",
"location": "{'settlement': 'Evanston', 'region': 'IL', 'country': 'USA'}"
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jascha sohl-dickstein",
"ben poole"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"shai shalev-shwartz",
"tong zhang"
],
"affiliation": [
{
"laboratory": "",
"institution": "Hebrew University",
"location": "{'settlement': 'Jerusalem', 'country': 'Israel'}"
},
{
"laboratory": "",
"institution": "Rutgers University",
"location": "{'region': 'NJ', 'country': 'USA'}"
}
]
},
{
"name": [
"oriol vinyals",
"daniel povey"
],
"affiliation": [
{
"laboratory": "",
"institution": "U. C",
"location": "{'postCode': '94704', 'settlement': 'Berkeley Berkeley', 'region': 'CA'}"
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"w kahan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"richard s varga",
"springer heidelberg",
"dordrecht london",
"new york"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"james martens",
"ilya sutskever"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{}"
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 86 | 0.209302 | 0.555556 | 0.583333 | null | null | null | null | null | rkLyJl-0- |
|
fu|learning_robust_rewards_with_adverserial_inverse_reinforcement_learning|ICLR_cc_2018_Conference | 263870625 | null | Learning Robust Rewards with Adverserial Inverse Reinforcement Learning | Reinforcement learning provides a powerful and general framework for decision making and control, but its application in practice is often hindered by the need for extensive feature and reward engineering. Deep reinforcement learning methods can remove the need for explicit engineering of policy or value features, but still require a manually specified reward function. Inverse reinforcement learning holds the promise of automatic reward acquisition, but has proven exceptionally difficult to apply to large, high-dimensional problems with unknown dynamics. In this work, we propose AIRL, a practical and scalable inverse reinforcement learning algorithm based on an adversarial reward learning formulation that is competitive with direct imitation learning algorithms. Additionally, we show that AIRL is able to recover portable reward functions that are robust to changes in dynamics, enabling us to learn policies even under significant variation in the environment seen during training. | {
"name": [
"justin fu",
"katie luo",
"sergey levine"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of California",
"location": "{'postCode': '94720', 'settlement': 'Berkeley Berkeley', 'region': 'CA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'postCode': '94720', 'settlement': 'Berkeley Berkeley', 'region': 'CA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'postCode': '94720', 'settlement': 'Berkeley Berkeley', 'region': 'CA', 'country': 'USA'}"
}
]
} | We propose an adversarial inverse reinforcement learning algorithm capable of learning reward functions which can transfer to new, unseen environments. | [
"inverse reinforcement learning",
"deep reinforcement learning"
] | null | 2018-02-15 22:29:31 | 25 | 188 | 53 | null | null | null | null | null | null | true | The AIRL is presented as a scalable inverse reinforcement learning algorithm. A key idea is to produce "disentangled rewards", which are invariant to changing dynamics; this is done by having the rewards depend only on the current state. There are some similarities with GAIL and the authors argue that this is effectively a concrete implementation of GAN-GCL that actually works. The results look promising to me and the portability aspect is neat and useful!
In general, the reviewers found this paper and its results interesting and I think the rebuttal addressed many of the concerns. I am happy that the reproducibility report is positive which helped me put this otherwise potentially borderline paper into the 'accept' bucket. | {
"review_id": [
"ryyF8NyZM",
"ryZzenclz",
"Hyn6kL_xG"
],
"review": [
{
"title": "title: A variante of the GAN-GCL for Inverse RL (IRL) is presented and evaluated. The difference with the original algorithm is the fact that the sampling happens at the level of stat-actions instead of full trajectories, to reduce its variance. Empirical results clearly show the advantage of this method.",
"paper_summary": null,
"main_review": "main_review: This paper revisits the generative adversarial network guided cost learning (GAN-GCL) algorithm presented last year. The authors argue learning rewards from sampled trajectories has a high variance. Instead, they propose to learn a generative model wherein actions are sampled as a function of states. The same energy model is used for sampling actions: the probability of an action is proportional to the exponential of its reward. To avoid overfitting the expert's demonstrations (by mimicking the actions directly instead of learning a reward that can be generalized to different dynamics), the authors propose to learn rewards that depend only on states, and not on actions. Also, the proposed reward function includes a shaping term, in order to cover all possible transformations of the reward function that could have been behind the expert's actions. The authors argue formally that this is necessary to disentangle the reward function from the dynamics. Th paper also demonstrates this argument empirically (e.g. Figure 1).\n\nThis paper is well-written and technically sound. The empirical evaluations seem to be supporting the main claims of the paper. The paper lacks a little bit in novelty since it is basically a variante of GAN-GCL, but it makes it up with the inclusion of a shaping term in the rewards and with the related formal arguments. The empirical evaluations could also be strengthened with experiments in higher-dimensional systems (like video games). \n\n\"Under maximum entropy IRL, we assume the demonstrations are drawn from an optimal policy p(\\tau) \\propto exp(r(tau))\" This is not an assumption, it's the form of the solution we get by maximizing the entropy (for regularization).\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Using the deterministic-MDP formulation of MaxEnt IRL is a concern",
"paper_summary": null,
"main_review": "main_review: SUMMARY:\nThis paper considers the Inverse Reinforcement Learning (IRL) problem, and particularly suggests a method that obtains a reward function that is robust to the change of dynamics of the MDP.\n\nIt starts from formulating the problem within the MaxEnt IRL framework of Ziebart et al. (2008). The challenge of MaxEnt IRL is the computation of a partition function. Guided Cost Learning (GCL) of Finn et al. (2016b) is an approximation of MaxEnt IRL that uses an adaptive importance sampler to estimate the partition function. This can be shown to be a form of GAN, obtained by using a specific discriminator [Finn et al. (2016a)].\n\nIf the discriminator directly works with trajectories tau, the result would be GAN-GCL. But this leads to high variance estimates, so the paper suggests using a single state-action formulation, in which the discriminator f_theta(s,a) is a function of (s,a) instead of the trajectory. The optimal solution of this discriminator is to have f(s,a) = A(s,a) — the advantage function.\nThe paper, however, argues that the advantage function is “entangled” with the dynamics, and this is undesirable. So it modified the discriminator to learn a function that is a combination of two terms, one only depends on state-action and the other depends on state, and has the form of shaped reward transformation.\n\n\nEVALUATION:\n\nThis is an interesting paper with good empirical results. As I am not very familiar with the work of Finn et al. (2016a) and Finn et al. (2016b), I have not verified the detail of derivations of this new paper very closely. That being said, I have some comments and questions:\n\n\n* The MaxEnt IRL formulation of this work, which assumes that p_theta(tau) is proportional to exp( r_theta (tau) ), comes from\n[Ziebart et al., 2008] and assumes a deterministic dynamics. Ziebart’s PhD dissertation [Ziebart, 2010] or the following paper show that the formulation is different for stochastic dynamics:\n\nZiebart, Bagnell, Dey, “The Principle of Maximum Causal Entropy for Estimating Interacting Processes,” IEEE Trans. on IT, 2013.\n\nIs it still a reasonable thing to develop based on this earlier, an inaccurate, formulation?\n\n\n* I am not convinced about the argument of Appendix C that shows that AIRL recovers reward up to constants.\nIt is suggested that since the only items on both sides of the equation on top of p. 13 depend on s’ are h* and V, they should be equal.\nThis would be true if s’ could be chosen arbitrararily. But s’ would be uniquely determined by s for a deterministic dynamics. In that case, this conclusion is not obvious anymore.\n\nConsider the state space to be integers 0, 1, 2, 3, … .\nSuppose the dynamics is that whenever we are at state s (which is an integer), at the next time step the state decreases toward 1, that is s’ = phi(s,a) = s - 1; unless s = 0, which we just stay at s’ = s = 0. This is independent of actions.\nAlso define r(s) = 1/s for s>=1 and r(0) = 0.\nSuppose the discount factor is gamma = 1 (note that in Appendix B.1, the undiscounted case is studied, so I assume gamma = 1 is acceptable).\n\nWith this choices, the value function V(s) = 1/s + 1/(s-1) + … + 1/1 = H_s, i.e., the Harmonic function.\nThe advantage function is zero. 
So we can choose g*(s) = 0, and h*(s) = h*(s’) = 1.\nThis is in contrast to the conclusion that h*(s’) = V(s’) + c, which would be H_s + c, and g*(s) = r(s) = 1/s.\n(In fact, nothing is special about this choice of reward and dynamics.)\n\nAm I missing something obvious here?\n\nAlso please discuss how ergodicity leads to the conclusion that spaces of s’ and s are identical. What does “space of s” mean? Do you mean the support of s? Please make the argument more rigorous.\n\n\n* Please make the argument of Section 5.1 more rigorous.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Learns environment-independent rewards; reasonable next step in adversarial IRL",
"paper_summary": null,
"main_review": "main_review: The paper provides an approach to learning reward functions in high-dimensional domains, showing that it performs comparably to other recent approaches to this problem in the imitation-learning setting. It also argues that a key property to learning generalizable reward functions is for them to depend on state, but not state-action or state-action-state. It uses this property to produce \"disentangled rewards\", demonstrating that they transfer well to the same task under different transition dynamics.\n\nThe need for \"state-only\" rewards is a useful insight and is covered fairly well in the paper. The need for an \"adversarial\" approach is not justified as fully, but perhaps is a consequence of recent work. The experiments are thorough, although the connection to the motivation in the abstract (wanting to avoid reward engineering) is weak.\n\nDetailed feedback:\n\n\"deployed in at test-time on environments\" -> \"deployed at test time in environments\"?\n\n\"which can effectively recover disentangle the goals\" -> \"which can effectively disentangle the goals\"?\n\n\"it allows for sub-optimality in demonstrations, and removes ambiguity between demonstrations and the expert policy\": I am not certain what is being described here and it doesn't appear to come up again in the paper. Perhaps remove it?\n\n\"r high-dimensional (Finn et al., 2016b) Wulfmeier\" -> \"r high-dimensional (Finn et al., 2016b). Wulfmeier\".\n\n\"also consider learning cost function with\" -> \"also consider learning cost functions with\"?\n\n\"o learn nonlinear cost function have\" -> \"o learn nonlinear cost functions have\".\n\n\" are not robust the environment changes\" -> \" are not robust to environment changes\"?\n\n\"We present a short proof sketch\": It is unclear to me what is being proven here. Please state the theorem.\n\n\"In the method presented in Section 4, we cannot learn a state-only reward function\": I'm not seeing that. Or, maybe I'm confused between rewards depending on s vs. s,a vs. s,a,s'. Again, an explicit theorem statement might remove some confusion here.\n\n\"AIRLperforms\" -> \"AIRL performs\".\n\nFigure 2: The blue and green colors look very similar to me. I'd recommend reordering the legend to match the order of the lines (random on the bottom) to make it easier to interpret.\n\n\"must reach to goal\" -> \"must reach the goal\"?\n\n\"pointmass\" -> \"point mass\". (Multiple times.)\n\nAmin, Jiang, and Singh's work on efficiently learning a transferable reward function seems relevant here. (Although, it might not be published yet: https://arxiv.org/pdf/1705.05427.pdf.)\n\nPerhaps the final experiment should have included state-only runs. I'm guessing that they didn't work out too well, but it would still be good to know how they compare.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.6666666865348816,
0.5555555820465088,
0.5555555820465088
],
"confidence": [
0.5,
0.25,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to AnonReviewer1",
"After rebuttal",
"Response to AnonReviewer3",
"The partition function is learned as part of f(\\tau)",
"No performance guarantees, but may work well in practice depending on the environment",
"Code is partially released (will fully release after acceptance)",
"Response to AnonReviewer2"
],
"comment": [
"Thank you for the thoughtful feedback. We’ve incorporated the suggestions to the best of our ability, and clarified portions of the paper, as described below.\n\n> \"Under maximum entropy IRL, we assume the demonstrations are drawn from an optimal policy p(\\tau) \\propto exp(r(tau))\" This is not an assumption, it's the form of the solution we get by maximizing the entropy (for regularization).\n\nWe’ve modified Section 3 to remove this ambiguity (note that we’ve also modified the section to use the causal entropy framework as requested by another reviewer). This statement was referring to the fact that we are assuming the expert is drawing samples from the distribution p(tau), not the fact that p(tau) \\propto exp(r(tau)).\n\n> \"The paper lacks a little bit in novelty since it is basically a variant of GAN-GCL, but it makes it up with the inclusion of a shaping term in the rewards and with the related formal arguments.\"\n\nIn regard to GAN-GCL, we would note that, although the method draws heavily on the theory in this workshop paper, it is unpublished and does not describe an implementation of any actual algorithm -- the GAN-GCL paper simply describes a theoretical connection between GANs and IRL. Our implementation of the algorithm that is closest to the one suggested by the theory in the GAN-GCL workshop paper does not perform very well in practice (Section 7.3).\n",
"Thank you for your reply and the revision of the paper. I briefly gone through the revised paper. My concerns have been addressed (but I should say that I have not verified the math closely).",
"Thank you for the detailed feedback. We have included all of the typo corrections and clarifications, as well as included state-only runs in the imitation learning experiments (Section 7.3). As detailed below, we believe that we have addressed all of the issues raised in your review, but we would appreciate any further feedback you might offer.\n\n> The need for an \"adversarial\" approach is not justified as fully, but perhaps is a consequence of recent work.\n\nAdversarial approaches are an inherent consequence of using sampling-based methods for training energy-based models, and we’ve edited Section 2, paragraph 2 to make this more clear. There is in fact no other (known) choice for doing this: any method that does maxent IRL and generates samples (rather than assuming known dynamics) must be adversarial in nature, as shown by Finn16a. Traditional methods like tabular MaxEnt IRL [Ziebart 08] have an adversarial nature as they must alternate between an inner-loop RL problem (the sampler) and updating the reward function (the discriminator).\n\n> Although the connection to the motivation in the abstract (wanting to avoid reward engineering) is weak.\n\nWe’ve slightly modified the paragraph before section 7.1 to make this connection more clear. We use environments where a reward function is available for the purpose of easily collecting demonstrations (otherwise we would need to resort to motion capture or teleoperation). However the experimental setup after demo collection is exactly the same as one would encounter while using IRL when a ground truth reward is not available.\n\n> Amin, Jiang, and Singh's work on efficiently learning a transferable reward function seems relevant here. (Although, it might not be published yet: https://arxiv.org/pdf/1705.05427.pdf.)\n\nAmin, Jian & Singh’s work is indeed relevant and we have also included it in the related work section.\n\n> Perhaps the final experiment should have included state-only runs. I'm guessing that they didn't work out too well, but it would still be good to know how they compare.\n\nWe’ve included these in the experiments. State-only runs perform slightly worse as expected, since the true reward has torque penalty terms which depend on the action, and cannot be captured by the model. However the performance isn’t so bad that the agent fails to solve the task.\n",
"Section 3.1 of Finn 2016 (http://arxiv.org/abs/1611.03852) is incorrect in regard to learning the partition function on the bias of the last sigmoid layer. We can't uniquely separate the bias term from the rest of the function approximator. For example, the cost function approximator c_\\theta(tau) could incorporate the log Z term and we could set the learned bias term to 0. Thus, there is no point in explicitly adding a separate learned bias term to capture the partition function as in Finn16 - we simply learn a function f(\\tau) which implicitly learns the partition function, although we cannot extract it.",
"If the ground truth reward depends on both states and actions, the algorithm cannot represent the true reward and thus the performance of the policy will not match that of the experts (we have included new experiments in Section 7.3 for this case). The results will likely depend on the task - in our experiments the performance was not much worse than the experts, but the only action-dependent term in the reward for OpenAI Gym locomotion tasks is a control penalty for actions with large magnitude.\n\nHowever, we also argue that no IRL algorithm which operates over arbitrary reward function classes will be able to recover ground truth rewards in this case, since we cannot avoid reward shaping (section 5). In order to remove shaping, we need to manually restrict the class of reward functions such that shaping is not possible. An alternative approach is to adopt a multi-task IRL paradigm to generalize across different dynamics.\n\nThe state definition for most OpenAI Gym locomotion tasks (including the ones used in this paper) contains velocities - thus we can still represent the ground truth reward. ",
"Hi Jin, Max, and Sam\n\nWe won't release the full code until after acceptance, but we have already released a publicly available implementation of the baselines + the \"non-robust\" version of AIRL. This should be a very good starting point for reproducibility. If you can provide an email address, we can send you a link (so as to not break anonymity on OpenReview).",
"Thank you for the constructive feedback. We’ve incorporated your comments and clarified certain points of the paper below. Please let us know if there are other additional issues which need clarification.\n\n> The MaxEnt IRL formulation of this work, which assumes that p_theta(tau) is proportional to exp( r_theta (tau) ), comes from\n[Ziebart et al., 2008] and assumes a deterministic dynamics. Ziebart’s PhD dissertation [Ziebart, 2010] or the following paper show that the formulation is different for stochastic dynamics.\nIs it still a reasonable thing to develop based on this earlier, an inaccurate, formulation?\n\nWe have updated the background (section 3) and appendix (section A) to use the maximum causal entropy framework rather than the earlier maximum entropy framework of [Ziebart 08]. Our algorithm requires no changes since the causal entropy framework more accurately describes what we were doing in the first place (our old derivations were valid in the deterministic case, where MaxEnt and MaxCausalEnt are identical, but in the stochastic case, our approach in fact matches MaxCausalEnt).\n\n> * I am not convinced about the argument of Appendix C that shows that AIRL recovers reward up to constants.\nAlso please discuss how ergodicity leads to the conclusion that spaces of s’ and s are identical. What does “space of s” mean? Do you mean the support of s? Please make the argument more rigorous.\n* Please make the argument of Section 5.1 more rigorous.\n\nWe’ve provided more formal proofs for Section 5 and the appendix. In order to fix the statements, we’ve changed the condition on the dynamics - a major component is that it requires that each state be reachable from >1 other state within one step. Ergodicity is neither a sufficient nor necessary condition on the dynamics, but special cases such as an ergodic MDP with self-transitions at each state satisfies the new condition (though the minimum necessary conditions are less restrictive).\n"
]
} | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 84 | 2.238095 | 0.592593 | 0.5 | null | null | null | null | null | rkHywl-A- |
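The decision and reviews in the record above center on AIRL's "disentangled" reward parameterization, f(s, a, s') = g(s) + γ·h(s') − h(s), where the learned reward g depends only on the state and h acts as a potential-based shaping term inside a GAN-style discriminator D = exp(f) / (exp(f) + π(a|s)). The sketch below is only a minimal illustration of that parameterization under those assumptions; the function and variable names are hypothetical and it is not the authors' released implementation.

```python
# Minimal, hypothetical sketch of the state-only reward plus shaping parameterization
# discussed in the AIRL record above; names and values are illustrative only.
import numpy as np

def airl_logit(g, h, log_pi, s, a, s_next, gamma=0.99):
    """Discriminator logit: f(s, a, s') - log pi(a|s), with f = g(s) + gamma*h(s') - h(s)."""
    f = g(s) + gamma * h(s_next) - h(s)  # state-only reward g plus potential-based shaping h
    return f - log_pi(s, a)

def airl_discriminator(g, h, log_pi, s, a, s_next, gamma=0.99):
    """D(s, a, s') = sigmoid(logit), equivalently exp(f) / (exp(f) + pi(a|s))."""
    return 1.0 / (1.0 + np.exp(-airl_logit(g, h, log_pi, s, a, s_next, gamma)))

# Toy usage: tabular g and h over 5 states, uniform policy over 2 actions.
g_table, h_table = np.zeros(5), np.zeros(5)
g = lambda s: g_table[s]
h = lambda s: h_table[s]
log_pi = lambda s, a: np.log(0.5)
print(airl_discriminator(g, h, log_pi, s=2, a=0, s_next=3))  # ~0.667 with zero-initialized g, h
```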
gruslys|the_reactor_a_fast_and_sampleefficient_actorcritic_agent_for_reinforcement_learning|ICLR_cc_2018_Conference | 67203472 | null | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contribute to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training. | {
"name": [
"ūnas audr",
" gruslys",
"will dabney",
"mohammad gheshlaghi azar",
"marc g bellemare",
"rémi munos"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
        }
]
} | Reactor combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN while giving better run-time performance than A3C. | [
"reinforcement learning",
"policy gradient",
"distributional reinforcement learning",
"distributed computing"
] | null | 2018-02-15 22:29:29 | 35 | 93 | 4 | null | null | null | null | null | null | true | This paper presents a nice set of results on a new RL algorithm. The main downside is the limitation to the Atari domain, but otherwise the ablation studies are nice and the results are strong. | {
"review_id": [
"rksMwz9xG",
"r1MU1AtlG",
"SJRs56Ylz"
],
"review": [
{
"title": "title: Nice integration of recent deep RL advances with decent empirical results",
"paper_summary": null,
"main_review": "main_review: This paper presents a new reinforcement learning architecture called Reactor by combining various improvements in\ndeep reinforcement learning algorithms and architectures into a single model. The main contributions of the paper\nare to achieve a better bias-variance trade-off in policy gradient updates, multi-step off-policy updates with\ndistributional RL, and prioritized experience replay for transition sequences. The different modules are integrated\nwell and the empirical results are very promising. The experiments (though limited to Atari) are well carried out and\nthe evaluation is performed on both sample efficiency and training time.\n\nPros:\n1. Nice integration of several recent improvements in deep RL, along with a few novel tricks to improve training.\n2. The empirical results on 57 Atari games are impressive, in terms of final scores as well as real-time training speed.\n\nCons:\n1. Reactor is still less sample-efficient than Rainbow, with significantly lower scores after 200M frames. While the\nreactor trains much faster, it does use more parallel compute, so the comparison with Rainbow on wall clock time is\n not entirely fair. Would a distributed version of Rainbow perform better in this respect?\n2. Empirical comparisons are restricted to the Atari domain. The conclusions of the paper will be much stronger if\nresults are also shown on other environments like Mujoco/Vizdoom/Deepmind Lab.\n3. Since the paper introduces a few new ideas like prioritized sequence replay, it would help if a more detailed analysis\n was performed on the impact of these individual schemes, even if in a model simpler than the Reactor. For instance, one could investigate the impact of prioritized sequence replay in models like multi-step DQN or recurrent DQN. This will help us understand the impact of each of these ideas in a more comprehensive fashion.\n\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: interesting work with several contributions and large experiments with some but not all recent approaches",
"paper_summary": null,
"main_review": "main_review: This paper proposes a novel reinforcement learning algorithm containing several contributions made by the authors: 1) a policy gradient algorithm that uses value function estimates to improve the policy gradient, 2) a distributed multi-step off-policy algorithm to estimate the value function, 3) an experience replay buffer mechanism that can handle sequences and (4) a distributed architecture, where threads are dedicated to either learning or interracting with the environment. Most contributions consist in improvements to handle multi-step trajectories instead of single step transitions. The resulting algorithm is evaluated on the ATARI domain and shown to outperform other similar algorithms, both in terms of score and training time. Ablation studies are also performed to study the interest of the 4 contributions. \n\nI find the paper interesting. It is also well written and reasonably clear. The experiments are large, although I was disappointed that PPO was not included in the evaluation, as this algorithm also trains much faster than other algorithms.\n\nquality\n+ several contributions\n+ impressive experiments\n\nclarity\n- I found the replay buffer not as clear as the other parts of the paper.\n. run time comparison: source of the code for the baseline methods?\n+ ablation study showing the merits of the different contributions\n- Methods not clearly labeled. For example, what is the difference between Reactor and Reactor 500M?\n\noriginality\n+ 4 contributions\n\nsignificance\n+ important problem, very active area of research\n+ comparison to very recent algorithms\n- but no PPO in the evaluation",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A promising \"Rainbow-like\" combination of deep RL techniques, unfortunately with no proper comparison to Rainbow",
"paper_summary": null,
"main_review": "main_review: This paper proposes a novel reinforcement learning algorithm (« The Reactor ») based on the combination of several improvements to DQN: a distributional version of Retrace, a policy gradient update rule called beta-LOO aiming at variance reduction, a variant of prioritized experience replay for sequences, and a parallel training architecture. Experiments on Atari games show a significant improvement over prioritized dueling networks in particular, and competitive performance compared to Rainbow, at a fraction of the training time.\n\nThere are definitely several interesting and meaningful contributions in this submission, and I like the motivations behind them. They are not groundbreaking (essentially extending existing techniques) but are still very relevant to current RL research.\n\nUnfortunately I also see it as a step back in terms of comparison to other algorithms. The recent Rainbow paper finally established a long overdue clear benchmark on Atari. We have seen with the « Deep Reinforcement Learning that Matters » paper how important (and difficult) it is to properly compare algorithms on deep RL problems. I assume that this submission was mostly written before Rainbow came out, and that comparisons to Rainbow were hastily added just before the ICLR deadline: this would explain why they are quite limited, but in my opinion it remains a major issue, which is the main reason why I am advocating for rejection.\n\nMore precisely, focusing on the comparison to Rainbow which is the main competitor here, my concerns are the following:\n- There is almost no discussion on the differences between Reactor and Rainbow (actually the paper lacks a « related work » section). In particular Rainbow also uses a version of distributional multi-step, which as far as I can tell may not be as well motivated (from a mathematical point of view) as the one in this submission (since it does not correct for the « off-policyness » of the replay data), but still seems to work well on Atari.\n- Rainbow is not distributed. This was a deliberate choice by its authors to focus on algorithmic comparisons. However, it seems to me that it could benefit from a parallel training scheme like Reactor’s. I believe a comparison between Reactor and Rainbow needs to either have them both parallelized or none of them (especially for a comparison on time efficiency like in Fig. 2)\n- Rainbow uses the traditional feedforward DQN architecture while Reactor uses a recurrent network. It is not clear to which extent this has an impact on the results.\n- Rainbow was stopped at 200M steps, at which point it seems to be overall superior to Reactor at 200M steps. The results as presented here emphasize the superiority of Reactor at 500M steps, but a proper comparison would require Rainbow results at 500M steps as well.\n\nIn addition, although I found most of the paper to be clear enough, some parts were confusing to me, in particular:\n- « multi-step distributional Bellman operator » in 3.2: not clear exactly what the target distribution is. 
If I understand correctly this is the same as the Rainbow extension, but this link is not mentioned.\n- 3.4.1 (network architecture): a simple diagram in the appendix would make it much easier to understand (Table 3 is still hard to read because it is not clear which layers are connected together)\n- 3.3 (prioritized sequence replay): again a visual illustration of the partitioning scheme would in my opinion help clarify the approach\n\nA few minor points to conclude:\n- In eq. 6, 7 and the rest of this section, A does not depend (directly) on theta so it should probably be removed to avoid confusion. Note also that using the letter A may not be best since A is used to denote an action in 3.1.\n- In 3.1: « Let us assume that for the chosen action A we have access to an estimate R(A) of Qπ(A) » => « unbiased estimate »\n- In last equation of p.5 it is not clear what q_i^n is\n- There is a lambda missing on p.6 in the equation showing that alphas are non-negative on average, just before the min\n- In the equation above eq. 12 there is a sum over « i=1 »\n- That same equation ends with some h_z_i that are not defined\n- In Fig. 2 (left) for Reactor we see one worker using large batches and another one using many threads. This is confusing.\n- 3.3 mentions sequences of length 32 but 3.4 says length 33.\n- 3.3 says tree operations are in O(n ln(n)) but it should be O(ln(n))\n- At very end of 3.3 it is not clear what « total variation » is.\n- In 3.4 please specify the frequency at which the learner thread downloads shared parameters and uploads updates\n- Caption of Fig. 3 talks about « changing the number of workers » for the left plot while it is in the right plot\n- The explanation on what the variants of Reactor (ND and 500M) mean comes after results are shown in Fig. 2.\n- Section 4 starts with Fig. 3 without explaining what the task is, how performance is measured, etc. It also claims that Distributional Retrace helps while this is not the case in Fig. 3 (I realize it is explained afterwards, but it is confusing when reading the sentence « We can also see... »). Finally it says priorization is the most important component while the beta-LOO ablation seems to perform just the same.\n- Footnote 3 should say it is 200M observations except for Reactor 500M\n- End of 4.1: « The algorithms that we compare Reactor against are » => missing ACER, A3C and Rainbow\n- There are two references for « Sample efficient actor-critic with experience replay »\n- I do not see the added benefit of the Elo computation. It seems to convey essentially the same information as average rank.\n\nAnd a few typos:\n- Just above 2.1.3: « increasing » => increasingly\n- In 3.1: « where V is a baseline that depend » => depends\n- p.7: « hight » => high, and « to all other sequences » => of all other sequences\n- Double parentheses in Bellemare citation at beginning of section 4\n- Several typos in appendix (too many to list)\n\nNote: I did not have time to carefully read Appendix 6.3 (contextual priority tree)\n\nEdit after revision: bumped score from 5 to 7 because (1) authors did many improvements to the paper, and (2) their explanations shed light on some of my concerns",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.6666666865348816,
0.6666666865348816,
0.6666666865348816
],
"confidence": [
0.75,
0.25,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Final thoughts",
"Authors response to AnonReviewer1",
"Author response to AnonReviewer2",
"Authors' response to AnonReviewer3",
"Final thoughts",
"New revision."
],
"comment": [
"Thanks!\n\nI can definitely imagine it was hard to make a proper comparison to Rainbow within such a short timeframe. I still think such a comparison would be quite valuable, to better evaluate the impact of their respective unique components. I'm afraid we are back to a situation where it's not clear what works best -- I guess that's the curse of the Atari benchmark.\n\nI appreciate the many improvements to the paper (though I lack time to look at them thoroughly), in particular the Appendix section on the comparisons with Rainbow. I admit I had read your paper as a DQN extension, while it makes more sense to see it as an A3C extension. I'll change my score to acceptance.\n\nNB: I disagree with the statement that \"In the human-starts evaluation Reactor significantly outperforms Rainbow at 200M steps\". It has slightly higher median normalized score, but lower Elo score. I don't think we can draw a solid conclusion from this (like claiming that \"Reactor generalizes better to these unseen starting states\").\n\nAlso if you can fix this typo in a final version, it looks like you added a \"i=1\" in eq. 12's sum, but forgot its upper bound.",
"Thank you very much for your review and recognising novelty of our contributions.\n\n>> I found the replay buffer not as clear as the other parts of the paper.\n\nWe will do our best to clarify the description, most likely in the appendix given space limitations.\n\n>> Methods not clearly labeled. For example, what is the difference between Reactor and Reactor 500M?\n\nWe will clarify the labels. `Reactor 500M` denotes the performance of Reactor at 500 million training steps. \n\n>> but no PPO in the evaluation\n\nThe PPO paper did not present results at 200M frames but at 40M frames, and their results seem to be weaker than ACER on 40M frames: ACER was better than PPO on 28/49 games tested. For the purpose of comparison to other algorithms, we chose to evaluate all algorithms at (at least) 200M frames, and Reactor is much better than ACER on 200M frames. Unfortunately, we don’t know how PPO perform at 200M frames, so a direct comparison is impossible.\n",
"We were happy to see that the reviewer recognised the novelty both the introduced ideas (prioritization, distributional Retrace and the beta-LOO policy gradient algorithm) and integration of the ideas into a single agent architecture.\n\n>> Reactor is still less sample-efficient than Rainbow, with significantly lower scores after 200M frames\n\nThis is not correct. In the human-starts evaluation Reactor significantly outperforms Rainbow at 200M steps. In the no-op-starts evaluation Rainbow significantly outperforms Reactor at 200M steps. Both Reactor and Rainbow were trained with 30 random no-op-starts. Their evaluation with 30 random human starts shows how well each algorithm generalizes to new initial conditions. We would argue that the issues of generalization here are similar to those seen between training and testing error in supervised learning. We thus show that Reactor generalizes better to these unseen starting states.\n\n>> While the Reactor trains much faster, it does use more parallel compute, so the comparison with Rainbow on wall clock time is not entirely fair.\n\nThe reviewer is right in the sense that Reactor executes more floating point operations per second, but it trains much shorter in wall time resulting in an overall similar number of computations executed. We make no claim that Reactor uses overall less computational operations to train an agent. Nevertheless, we believe that having a fast algorithm in terms of wall time is important because of the potential to shorten experimentation time. The measure is still informative, as one may choose Reactor over Rainbow when multiple CPU machines are available (as opposed to a single GPU machine).\n\n>> Empirical comparisons are restricted to the Atari domain.\n\nWe focused on Atari domain to facilitate the comparison to the prior work.\n\n>> Since the paper introduces a few new ideas like prioritized sequence replay, it would help if a more detailed analysis was performed on the impact of these individual schemes\n\nThe paper already contains the ablation study comparing relative importances of individual components. Since the number of novel contributions is large (beta-LOO, distributional retrace, prioritized sequence replay), it is difficult to explore all possible configurations of the components.\n",
" Thank you very much for your helpful review.\n\n>> There is almost no discussion on the differences between Reactor and Rainbow\n>> I assume that this submission was mostly written before Rainbow came out, and that comparisons to Rainbow were hastily added just before the ICLR deadline\n\nAdmittedly, the comparisons with Rainbow were less detailed than we would have liked. Please note that Rainbow was put on Arxiv only three weeks before the ICLR submission deadline. However we have already included experimental comparisons with Rainbow, both in the form of presenting the learning curves and final evaluations. We will add a more in-depth comparison with Rainbow and discussion of related work in the appendix.\n\n>> I believe a comparison between Reactor and Rainbow needs to either have them both parallelized or none of them.\n\nRainbow works on GPUs, Reactor works on CPUs. A single GPU is not equivalent to a single CPU. Parallelizing Rainbow is out of the scope of this work. First, because this was not the focus of our work. Second, because it would be a non-trivial task potentially worth publication on its own. More generally, the same parallelization argument would also apply to comparisons between A3C and DQN.\n\n>> Rainbow uses the traditional feedforward DQN architecture while Reactor uses a recurrent network. It is not clear to which extent this has an impact on the results.\n\n\nThere are many differences between Rainbow and Reactor: 1) LSTM vs frame stacking, 2) actor-critic vs value-based algorithm 3) beta-LOO vs Q-learning, 4) Retrace vs n-step learning, 5) sequence prioritization vs transition prioritization, 6) entropy bonus vs noisy networks. Reactor is not an incremental improvement of Rainbow and is a completely different algorithm. This makes it impractical to compare on a component-by-component basis. For the most important contributions we performed an ablation study within Reactor’s framework, but naturally we can not ablate every architectural choice that we have made.\n\n>> Rainbow was stopped at 200M steps, at which point it seems to be overall superior to Reactor at 200M steps.\n\nThis is not correct. In the human-starts evaluation Reactor significantly outperforms Rainbow at 200M steps. In the no-op-starts evaluation Rainbow significantly outperforms Reactor at 200M steps. Both Reactor and Rainbow were trained with 30 random no-op-starts. Their evaluation with 30 random human starts shows how well each algorithm generalizes to new initial conditions. We would argue that the issues of generalization here are similar to those seen between training and testing error in supervised learning. We thus show that Reactor generalizes better to these unseen starting states.\n\n>> (network architecture): a simple diagram in the appendix would make it much easier to understand\n\nWe will add the diagram to the supplementary material.\n\n>> again a visual illustration of the partitioning scheme would in my opinion help clarify the approach\n\nWe will add an illustration to the supplementary material. We will also correct all other typos mentioned in the review. Thank you for taking note of them.\n",
"Thanks to the authors for their response. As I mentioned in the initial review, I think the method is definitely promising and provides improvements. My comments were more on claims like \"Reactor significantly outperforms Rainbow\" which is not evident from the results in the paper (a point also noted by Reviewer 3). These claims could be made more specific, with appropriate caveats, or additional experiments could be performed to help substantiate the claims better. ",
"We have just added a new revision addressing the reviewer comments, which we much appreciate."
]
} | {
"paperhash": [
"hessel|rainbow:_combining_improvements_in_deep_reinforcement_learning",
"schulman|proximal_policy_optimization_algorithms",
"bellemare|a_distributional_perspective_on_reinforcement_learning",
"fortunato|noisy_networks_for_exploration",
"vezhnevets|feudal_networks_for_hierarchical_reinforcement_learning",
"anschel|averaged-dqn:_variance_reduction_and_stabilization_for_deep_reinforcement_learning",
"he|learning_to_play_in_a_day:_faster_deep_reinforcement_learning_by_optimality_tightening",
"jaderberg|reinforcement_learning_with_unsupervised_auxiliary_tasks",
"zhao|deep_reinforcement_learning_with_experience_replay_based_on_sarsa",
"mitliagkas|asynchrony_begets_momentum,_with_an_application_to_deep_learning",
"mnih|asynchronous_methods_for_deep_reinforcement_learning",
"wang|dueling_network_architectures_for_deep_reinforcement_learning",
"schaul|prioritized_experience_replay",
"hasselt|deep_reinforcement_learning_with_double_q-learning",
"lillicrap|continuous_control_with_deep_reinforcement_learning",
"li|toward_minimax_off-policy_value_estimation",
"schulman|trust_region_policy_optimization",
"kingma|adam:_a_method_for_stochastic_optimization",
"bellemare|the_arcade_learning_environment:_an_evaluation_platform_for_general_agents",
"precup|off-policy_temporal_difference_learning_with_function_approximation",
"sutton|policy_gradient_methods_for_reinforcement_learning_with_function_approximation",
"hochreiter|long_short-term_memory"
],
"title": [
"Rainbow: Combining Improvements in Deep Reinforcement Learning",
"Proximal Policy Optimization Algorithms",
"A Distributional Perspective on Reinforcement Learning",
"Noisy Networks for Exploration",
"FeUdal Networks for Hierarchical Reinforcement Learning",
"Averaged-DQN: Variance Reduction and Stabilization for Deep Reinforcement Learning",
"Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening",
"Reinforcement Learning with Unsupervised Auxiliary Tasks",
"Deep reinforcement learning with experience replay based on SARSA",
"Asynchrony begets momentum, with an application to deep learning",
"Asynchronous Methods for Deep Reinforcement Learning",
"Dueling Network Architectures for Deep Reinforcement Learning",
"Prioritized Experience Replay",
"Deep Reinforcement Learning with Double Q-Learning",
"Continuous control with deep reinforcement learning",
"Toward Minimax Off-policy Value Estimation",
"Trust Region Policy Optimization",
"Adam: A Method for Stochastic Optimization",
"The Arcade Learning Environment: An Evaluation Platform for General Agents",
"Off-Policy Temporal Difference Learning with Function Approximation",
"Policy Gradient Methods for Reinforcement Learning with Function Approximation",
"Long Short-Term Memory"
],
"abstract": [
"\n \n The deep reinforcement learning community has made several independent improvements to the DQN algorithm. However, it is unclear which of these extensions are complementary and can be fruitfully combined. This paper examines six extensions to the DQN algorithm and empirically studies their combination. Our experiments show that the combination provides state-of-the-art performance on the Atari 2600 benchmark, both in terms of data efficiency and final performance. We also provide results from a detailed ablation study that shows the contribution of each component to overall performance.\n \n",
"We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a \"surrogate\" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.",
"In this paper we argue for the fundamental importance of the value distribution: the distribution of the random return received by a reinforcement learning agent. This is in contrast to the common approach to reinforcement learning which models the expectation of this return, or value. Although there is an established body of literature studying the value distribution, thus far it has always been used for a specific purpose such as implementing risk-aware behaviour. We begin with theoretical results in both the policy evaluation and control settings, exposing a significant distributional instability in the latter. We then use the distributional perspective to design a new algorithm which applies Bellman's equation to the learning of approximate value distributions. We evaluate our algorithm using the suite of games from the Arcade Learning Environment. We obtain both state-of-the-art results and anecdotal evidence demonstrating the importance of the value distribution in approximate reinforcement learning. Finally, we combine theoretical and empirical evidence to highlight the ways in which the value distribution impacts learning in the approximate setting.",
"We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent's policy can be used to aid efficient exploration. The parameters of the noise are learned with gradient descent along with the remaining network weights. NoisyNet is straightforward to implement and adds little computational overhead. We find that replacing the conventional exploration heuristics for A3C, DQN and dueling agents (entropy reward and $\\epsilon$-greedy respectively) with NoisyNet yields substantially higher scores for a wide range of Atari games, in some cases advancing the agent from sub to super-human performance.",
"We introduce FeUdal Networks (FuNs): a novel architecture for hierarchical reinforcement learning. Our approach is inspired by the feudal reinforcement learning proposal of Dayan and Hinton, and gains power and efficacy by decoupling end-to-end learning across multiple levels -- allowing it to utilise different resolutions of time. Our framework employs a Manager module and a Worker module. The Manager operates at a lower temporal resolution and sets abstract goals which are conveyed to and enacted by the Worker. The Worker generates primitive actions at every tick of the environment. The decoupled structure of FuN conveys several benefits -- in addition to facilitating very long timescale credit assignment it also encourages the emergence of sub-policies associated with different goals set by the Manager. These properties allow FuN to dramatically outperform a strong baseline agent on tasks that involve long-term credit assignment or memorisation. We demonstrate the performance of our proposed system on a range of tasks from the ATARI suite and also from a 3D DeepMind Lab environment.",
"Instability and variability of Deep Reinforcement Learning (DRL) algorithms tend to adversely affect their performance. Averaged-DQN is a simple extension to the DQN algorithm, based on averaging previously learned Q-values estimates, which leads to a more stable training procedure and improved performance by reducing approximation error variance in the target values. To understand the effect of the algorithm, we examine the source of value function estimation errors and provide an analytical comparison within a simplified model. We further present experiments on the Arcade Learning Environment benchmark that demonstrate significantly improved stability and performance due to the proposed extension.",
"We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel technique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improvements in both training time and accuracy.",
"Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880\\% expert human performance, and a challenging suite of first-person, three-dimensional \\emph{Labyrinth} tasks leading to a mean speedup in learning of 10$\\times$ and averaging 87\\% expert human performance on Labyrinth.",
"SARSA, as one kind of on-policy reinforcement learning methods, is integrated with deep learning to solve the video games control problems in this paper. We use deep convolutional neural network to estimate the state-action value, and SARSA learning to update it. Besides, experience replay is introduced to make the training process suitable to scalable machine learning problems. In this way, a new deep reinforcement learning method, called deep SARSA is proposed to solve complicated control problems such as imitating human to play video games. From the experiments results, we can conclude that the deep SARSA learning shows better performances in some aspects than deep Q learning.",
"Asynchronous methods are widely used in deep learning, but have limited theoretical justification when applied to non-convex problems. We show that running stochastic gradient descent (SGD) in an asynchronous manner can be viewed as adding a momentum-like term to the SGD iteration. Our result does not assume convexity of the objective function, so it is applicable to deep learning systems. We observe that a standard queuing model of asynchrony results in a form of momentum that is commonly used by deep learning practitioners. This forges a link between queuing theory and asynchrony in deep learning systems, which could be useful for systems builders. For convolutional neural networks, we experimentally validate that the degree of asynchrony directly correlates with the momentum, confirming our main result. An important implication is that tuning the momentum parameter is important when considering different levels of asynchrony. Furthermore, our theory suggests ways of counter-acting the adverse effects of asynchrony. We see that a simple mechanism like using negative algorithmic momentum can be beneficial under high asynchrony. Since asynchronous methods have better hardware efficiency, this result may shed light on when asynchronous execution is more efficient for deep learning systems.",
"We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.",
"In recent years there have been many successes of using deep representations in reinforcement learning. Still, many of these applications use conventional architectures, such as convolutional networks, LSTMs, or auto-encoders. In this paper, we present a new neural network architecture for model-free reinforcement learning. Our dueling network represents two separate estimators: one for the state value function and one for the state-dependent action advantage function. The main benefit of this factoring is to generalize learning across actions without imposing any change to the underlying reinforcement learning algorithm. Our results show that this architecture leads to better policy evaluation in the presence of many similar-valued actions. Moreover, the dueling architecture enables our RL agent to outperform the state-of-the-art on the Atari 2600 domain.",
"Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.",
"\n \n The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.\n \n",
"We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.",
"This paper studies the off-policy evaluation problem, where one aims to estimate the value of a target policy based on a sample of observations collected by another policy. We first consider the single-state, or multi-armed bandit case, establish a finite-time minimax risk lower bound, and analyze the risk of three standard estimators. For the so-called regression estimator, we show that while it is asymptotically optimal, for small sample sizes it may perform suboptimally compared to an ideal oracle up to a multiplicative factor that depends on the number of actions. We also show that the other two popular estimators can be arbitrarily worse than the optimal, even in the limit of infinitely many data points. The performance of the estimators are studied in synthetic and real problems; illustrating the methods strengths and weaknesses. We also discuss the implications of these results for off-policy evaluation problems in contextual bandits and fixed-horizon Markov decision processes.",
"In this article, we describe a method for optimizing control policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified scheme, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.",
"We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.",
"In this article we introduce the Arcade Learning Environment (ALE): both a challenge problem and a platform and methodology for evaluating the development of general, domain-independent AI technology. ALE provides an interface to hundreds of Atari 2600 game environments, each one different, interesting, and designed to be a challenge for human players. ALE presents significant research challenges for reinforcement learning, model learning, model-based planning, imitation learning, transfer learning, and intrinsic motivation. Most importantly, it provides a rigorous testbed for evaluating and comparing approaches to these problems. We illustrate the promise of ALE by developing and benchmarking domain-independent agents designed using well-established AI techniques for both reinforcement learning and planning. In doing so, we also propose an evaluation methodology made possible by ALE, reporting empirical results on over 55 different games. All of the software, including the benchmark agents, is publicly available.",
"We introduce the first algorithm for off-policy temporal-difference learning that is stable with linear function approximation. Off-policy learning is of interest because it forms the basis for popular reinforcement learning methods such as Q-learning, which has been known to diverge with linear function approximation, and because it is critical to the practical utility of multi-scale, multi-goal, learning frameworks such as options, HAMs, and MAXQ. Our new algorithm combines TD(λ) over state–action pairs with importance sampling ideas from our previous work. We prove that, given training under any -soft policy, the algorithm converges w.p.1 to a close approximation (as in Tsitsiklis and Van Roy, 1997; Tadic, 2001) to the action-value function for an arbitrary target policy. Variations of the algorithm designed to reduce variance introduce additional bias but are also guaranteed convergent. We also illustrate our method empirically on a small policy evaluation problem. Our current results are limited to episodic tasks with episodes of bounded length. 1Although Q-learning remains the most popular of all reinforcement learning algorithms, it has been known since about 1996 that it is unsound with linear function approximation (see Gordon, 1995; Bertsekas and Tsitsiklis, 1996). The most telling counterexample, due to Baird (1995) is a seven-state Markov decision process with linearly independent feature vectors, for which an exact solution exists, yet This is a re-typeset version of an article published in the Proceedings of the 18th International Conference on Machine Learning (2001). It differs from the original in line and page breaks, is crisper for electronic viewing, and has this funny footnote, but otherwise it is identical to the published article. for which the approximate values found by Q-learning diverge to infinity. This problem prompted the development of residual gradient methods (Baird, 1995), which are stable but much slower than Q-learning, and fitted value iteration (Gordon, 1995, 1999), which is also stable but limited to restricted, weaker-than-linear function approximators. Of course, Q-learning has been used with linear function approximation since its invention (Watkins, 1989), often with good results, but the soundness of this approach is no longer an open question. There exist non-pathological Markov decision processes for which it diverges; it is absolutely unsound in this sense. A sensible response is to turn to some of the other reinforcement learning methods, such as Sarsa, that are also efficient and for which soundness remains a possibility. An important distinction here is between methods that must follow the policy they are learning about, called on-policy methods, and those that can learn from behavior generated by a different policy, called off-policy methods. Q-learning is an off-policy method in that it learns the optimal policy even when actions are selected according to a more exploratory or even random policy. Q-learning requires only that all actions be tried in all states, whereas on-policy methods like Sarsa require that they be selected with specific probabilities. Although the off-policy capability of Q-learning is appealing, it is also the source of at least part of its instability problems. For example, in one version of Baird’s counterexample, the TD(λ) algorithm, which underlies both Qlearning and Sarsa, is applied with linear function approximation to learn the action-value function Q for a given policy π. 
Operating in an on-policy mode, updating state– action pairs according to the same distribution they would be experienced under π, this method is stable and convergent near the best possible solution (Tsitsiklis and Van Roy, 1997; Tadic, 2001). However, if state-action pairs are updated according to a different distribution, say that generated by following the greedy policy, then the estimated values again diverge to infinity. This and related counterexamples suggest that at least some of the reason for the instability of Q-learning is that it is an off-policy method; they also make it clear that this part of the problem can be studied in a purely policy-evaluation context. Despite these problems, there remains substantial reason for interest in off-policy learning methods. Several researchers have argued for an ambitious extension of reinforcement learning ideas into modular, multi-scale, and hierarchical architectures (Sutton, Precup & Singh, 1999; Parr, 1998; Parr & Russell, 1998; Dietterich, 2000). These architectures rely on off-policy learning to learn about multiple subgoals and multiple ways of behaving from the singular stream of experience. For these approaches to be feasible, some efficient way of combining off-policy learning and function approximation must be found. Because the problems with current off-policy methods become apparent in a policy evaluation setting, it is there that we focus in this paper. In previous work we considered multi-step off-policy policy evaluation in the tabular case. In this paper we introduce the first off-policy policy evaluation method consistent with linear function approximation. Our mathematical development focuses on the episodic case, and in fact on a single episode. Given a starting state and action, we show that the expected offpolicy update under our algorithm is the same as the expected on-policy update under conventional TD(λ). This, together with some variance conditions, allows us to prove convergence and bounds on the error in the asymptotic approximation identical to those obtained by Tsitsiklis and Van Roy (1997; Bertsekas and Tsitsiklis, 1996). 1. Notation and Main Result We consider the standard episodic reinforcement learning framework (see, e.g., Sutton & Barto, 1998) in which a learning agent interacts with a Markov decision process (MDP). Our notation focuses on a single episode of T time steps, s0, a0, r1, s1, a1, r2, . . . , rT , sT , with states st ∈ S, actions at ∈ A, and rewards rt ∈ <. We take the initial state and action, s0 and a0, to be given arbitrarily. Given a state and action, st and at, the next reward, rt+1, is a random variable with mean rt st and the next state, st+1, is chosen with probabilities pt stst+1 . The final state is a special terminal state that may not occur on any preceding time step. Given a state, st, 0 < t < T , the action at is selected according to probability π(st, at) or b(st, at) depending on whether policy π or policy b is in force. We always use π to denote the target policy, the policy that we are learning about. In the on-policy case, π is also used to generate the actions of the episode. In the off-policy case, the actions are instead generated by b, which we call the behavior policy. In either case, we seek an approximation to the action-value function Q : S ×A 7→ < for the target policy π: Q(s, a) = Eπ { rt+1 + · · ·+ γrT | st = s, at = a } , where 0 ≤ γ ≤ 1 is a discount-rate parameter. 
We consider approximations that are linear in a set of feature vectors {φsa}, s ∈ S, a ∈ A: Q(s, a) ≈ θφsa = n ∑",
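The algorithm above is importance-sampled TD(λ) over state–action pairs; as a deliberately simplified illustration only (λ = 0, state values instead of action values, and without the corrections that give the paper its convergence guarantee — the naive form shown here is exactly the kind of update that can diverge off-policy), a per-decision importance-weighted linear TD update looks like this (all names are illustrative):

```python
import numpy as np

def naive_off_policy_linear_td0(episode, phi, pi, b, theta, alpha=0.01, gamma=0.99):
    """episode: iterable of (s, a, r, s_next, done) transitions generated by behavior policy b.
    phi(s) returns a feature vector; pi(s, a) and b(s, a) return action probabilities."""
    for s, a, r, s_next, done in episode:
        rho = pi(s, a) / b(s, a)                      # per-decision importance ratio
        v = theta @ phi(s)
        v_next = 0.0 if done else theta @ phi(s_next)
        delta = r + gamma * v_next - v                # TD error
        theta = theta + alpha * rho * delta * phi(s)  # importance-weighted semi-gradient step
    return theta
```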
"Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy.",
"Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O. 1. Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
],
"authors": [
{
"name": [
"Matteo Hessel",
"Joseph Modayil",
"H. V. Hasselt",
"T. Schaul",
"Georg Ostrovski",
"Will Dabney",
"Dan Horgan",
"Bilal Piot",
"M. G. Azar",
"David Silver"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"John Schulman",
"Filip Wolski",
"Prafulla Dhariwal",
"Alec Radford",
"Oleg Klimov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Marc G. Bellemare",
"Will Dabney",
"R. Munos"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Meire Fortunato",
"M. G. Azar",
"Bilal Piot",
"Jacob Menick",
"Ian Osband",
"Alex Graves",
"Vlad Mnih",
"R. Munos",
"D. Hassabis",
"O. Pietquin",
"C. Blundell",
"S. Legg"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Vezhnevets",
"Simon Osindero",
"T. Schaul",
"N. Heess",
"Max Jaderberg",
"David Silver",
"K. Kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Oron Anschel",
"Nir Baram",
"N. Shimkin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Frank S. He",
"Yang Liu",
"A. Schwing",
"Jian Peng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Max Jaderberg",
"Volodymyr Mnih",
"Wojciech M. Czarnecki",
"T. Schaul",
"Joel Z. Leibo",
"David Silver",
"K. Kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Dongbin Zhao",
"Haitao Wang",
"Kun Shao",
"Yuanheng Zhu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ioannis Mitliagkas",
"Ce Zhang",
"Stefan Hadjis",
"C. Ré"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Volodymyr Mnih",
"Adrià Puigdomènech Badia",
"Mehdi Mirza",
"Alex Graves",
"T. Lillicrap",
"Tim Harley",
"David Silver",
"K. Kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ziyun Wang",
"T. Schaul",
"Matteo Hessel",
"H. V. Hasselt",
"Marc Lanctot",
"Nando de Freitas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"T. Schaul",
"John Quan",
"Ioannis Antonoglou",
"David Silver"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"H. V. Hasselt",
"A. Guez",
"David Silver"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"T. Lillicrap",
"Jonathan J. Hunt",
"A. Pritzel",
"N. Heess",
"Tom Erez",
"Yuval Tassa",
"David Silver",
"D. Wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Lihong Li",
"R. Munos",
"Csaba Szepesvari"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"John Schulman",
"S. Levine",
"P. Abbeel",
"Michael I. Jordan",
"Philipp Moritz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Diederik P. Kingma",
"Jimmy Ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Marc G. Bellemare",
"Yavar Naddaf",
"J. Veness",
"Michael Bowling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Doina Precup",
"R. Sutton",
"S. Dasgupta"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Sutton",
"David A. McAllester",
"Satinder Singh",
"Y. Mansour"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sepp Hochreiter",
"J. Schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1710.02298",
"1707.06347",
"1707.06887",
"1706.10295",
"1703.01161",
"1611.01929",
"1611.01606",
"1611.05397",
null,
"1605.09774",
"1602.01783",
"1511.06581",
"1511.05952",
"1509.06461",
"1509.02971",
null,
"1502.05477",
"1412.6980",
"1207.4708",
null,
null,
null
],
"s2_corpus_id": [
"19135734",
"28695052",
"966543",
"5176587",
"6656096",
"2215254",
"10713737",
"14717992",
"351692",
"6668563",
"6875312",
"5389801",
"13022595",
"6208256",
"16326763",
"2210187",
"16046818",
"6628106",
"1552061",
"7784749",
"1211821",
"1915014"
],
"intents": [
[],
[],
[],
[],
[],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"methodology"
],
[
"background",
"methodology"
],
[],
[
"background"
],
[],
[],
[],
[
"background"
],
[],
[
"methodology"
],
[
"methodology"
],
[
"background"
],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
true,
false,
true,
false,
false,
false,
false,
false,
true,
true,
false,
false
]
} | null | 84 | 1.107143 | 0.666667 | 0.583333 | null | null | null | null | null | rkHVZWZAZ |
hallam|compact_neural_networks_based_on_the_multiscale_entanglement_renormalization_ansatz|ICLR_cc_2018_Conference | 1711.03357v3 | Compact Neural Networks based on the Multiscale Entanglement Renormalization Ansatz | The goal of this paper is to demonstrate a method for tensorizing neural networks based upon an efficient way of approximating scale invariant quantum states, the Multi-scale Entanglement Renormalization Ansatz (MERA). We employ MERA as a replacement for linear layers in a neural network and test this implementation on the CIFAR-10 dataset. The proposed method outperforms factorization using tensor trains, providing greater compression for the same level of accuracy and greater accuracy for the same level of compression. We demonstrate MERA-layers with 3900 times fewer parameters and a reduction in accuracy of less than 1% compared to the equivalent fully connected layers.
| {
"name": [
"andrew hallam",
"edward grant",
"vid stojevic",
"simone severini",
"andrew g green"
],
"affiliation": [
{
"laboratory": "",
"institution": "University College London",
"location": "{}"
},
{
"laboratory": "",
"institution": "University College London",
"location": "{}"
},
{
"laboratory": "",
"institution": "University College London",
"location": "{}"
},
{
"laboratory": "",
"institution": "University College London",
"location": "{}"
},
{
"laboratory": "",
"institution": "University College London",
"location": "{}"
}
]
} | null | [
"Computer Science",
"Physics"
] | British Machine Vision Conference | 2017-11-09 | 26 | null | null | null | null | null | null | null | null | false | This paper proposes a tree-structured tensor factorisation method for parameter reduction. The reviewers felt the paper was somewhat interesting, but agreed that more detail was needed in the method description, and that the experiments were on the whole uninformative. This seems like a promising research direction which needs more empirical work, but is not ready for publication as is. | {
"review_id": [
"SyR6NUcxz",
"HJfixHulz",
"B1TCDcOxG"
],
"review": [
{
"title": "title: Not very rigorously written, results are mediocre",
"paper_summary": null,
"main_review": "main_review: The authors study compressing feed forward layers using low rank tensor decompositions. For instance a feed forward layer of 4096 x 4096 would first be reshaped into a rank-12 tensor with each index having dimension 2, and then a tensor decomposition would be applied to reduce the number of parameters. \n\nPrevious work used tensor trains which decompose the tensor as a chain. Here the authors explore a tree like decomposition. The authors only describe their model using pictures and do not provide any rigorous description of how their decomposition works.\n\nThe results are mediocre. While the author's approach does seem to reduce the feed forward net parameters by 30% compared to the tensor train decomposition for similar accuracy, the total number of parameters for both MERA (authors' approach) and Tensor Train is similar since in this regime the CNN parameters dominate (and the authors' approach does not work to compress those).\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
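To make the reshaping step described in the review above concrete, a small numpy sketch (the numbers follow the reviewer's 4096 x 4096 example; the actual MERA/tensor-train factorization and contraction are not reproduced here):

```python
import numpy as np

x = np.random.randn(4096)              # activation vector entering the fully connected layer
x_tensor = x.reshape((2,) * 12)        # rank-12 tensor, each index of dimension 2 (2**12 = 4096)

W = np.random.randn(4096, 4096)        # dense weight matrix: 4096 * 4096 = 16,777,216 parameters
W_tensor = W.reshape((2,) * 24)        # viewed as a higher-order tensor over 12 input + 12 output indices

# A tensor-train, tree, or MERA layer replaces W_tensor by a network of small cores
# contracted against x_tensor, cutting the parameter count by several orders of magnitude.
```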
{
"title": "title: Interesting method but the paper needs to be rewritten",
"paper_summary": null,
"main_review": "main_review: In the paper the authors suggest to use MERA tensorization technique for compressing neural networks. MERA itseld in a known framework in QM but not in ML. Although the idea seems to be fruitful and interesting I find the paper quite unclear. The most important part is section 2 which presents the methodology used. However there no equations or formal descriptions of what is MERA and how it works. Only figures which are difficult to understand. It is almost impossible to reproduce the results based on such iformal description of tensorization method. The authors should be more careful and provide more details when describing the algorithm. There was enough room for making the algorithm more clear. This is my main point for critisism.\n\nAnother issue is related with practical usefulness. MERA allows to get better compression than TT keeping the same accuracy. But the authors do compress only one layer. In this case the total compression of DNN is almost tha same so why do we need yet another tensorization technique? I think the authors should try tenzorizing several layers and explore whether they can do any better than TT compression. Currently I would say the results are comparable but not definitely better.\n\nUPDATE: The revised version seems to be a bit more clear. Now the reader unfamiliar with MERA (with some efforts) can understand how the methods works. Although my second concern remains. Up to now it looks just yet another tensorization method with only slight advantages over TT framework. Tensorization of conv.layers could improve the paper a lot. I increased the score to 5 for making the presentation of MERA more readable.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting idea but more experimental validation required",
"paper_summary": null,
"main_review": "main_review: The paper presents a new parameterization of linear maps for use in neural networks, based on the Multiscale Entanglement Renormalization Ansatz (MERA). The basic idea is to use a hierarchical factorization of the linear map, that greatly reduces the number of parameters while still allowing for relatively complex interactions between variables to be modelled. A limited number of experiments on CIFAR10 suggests that the method may work a bit better than related factorizations.\n\nThe paper contains interesting new ideas and is generally well written. However, a few things are not fully explained, and the experiments are too limited to be convincing.\n\n\nExposition\nOn a first reading, it is initially unclear why we are talking about higher order tensors at all. Usually, fully connected layers are written as matrix-vector multiplications. It is only on the bottom of page 3 that it is explained that we will reshape the input to a rank-k (k=12) tensor before applying the MERA factored map. It would be helpful to state this sooner. It would also be nice to state that (in the absense of any factorization of the weight tensor) a linear contraction of such a high-rank tensor is no less general than a matrix-vector multiplication.\n\nMost ML researchers will not know Haar measure. It would be more reader friendly to say something like \"uniform distribution over orthogonal matrices (i.e. Haar measure)\" or something like that. Explaining how to sample orthogonal matrices / tensors (e.g. by SVD) would be helpful as well.\n\nThe article does not explain what \"disentanglers\" are. It is very important to explain this, because it will not be generally known by the machine learning audience, and is the main thing that distinguishes this work form earlier tree-based factorizations.\n\nOn page 5 it is explained that the computational complexity of the proposed method is N^{log_2 D}. For D=2, this is better than a fully connected layer. Although this theoretical speedup may not currently have been realized, it perhaps could be achieved by a custom GPU kernel. It would be nice to highlight this potential benefit in the introduction.\n\n\nTheoretical motivation\nAlthough I find the theoretical motivation for the method somewhat compelling, some questions remain that the authors may want to address. For one thing, the paper talks about exploiting \"hierarchical / multiscale structure\", but this does not refer to the spatial multi-scale structure that is naturally present in images. Instead, the dimensions of a hidden activation vector are arbitrarily ordered, partitioned into pairs, and reshaped into a (2, 2, ..., 2) shape tensor. The pairing of dimensions determines the kinds of interactions the MERA layer can express. Although the earlier layers could learn to produce a representation that can be effectively analyzed by the MERA layer, one is left to wonder if the method could be made to exploit the spatial multi-scale structure that we know is actually present in image data.\n\nAnother point is that although from a classical statistics perspective it would seem that reducing the number of parameters should be generally beneficial, it has been observed many times that in deep learning, highly overparameterized models are easier to optimize and do not necessarily overfit. 
Thus at this point it is not clear whether starting with a highly constrained parameterization would allow us to obtain state of the art accuracy levels, or whether it is better to start with an overparameterized model and gradually constrain it or perform a post-training compression step.\n\n\nExperiments\nIn the introduction it is claimed that the method of Liu et al. cannot capture correlations on different length scales because it lacks disentanglers. Although this may be theoretically correct, the paper does not experimentally verify that the proposed factorization with disentanglers outperforms a similar approach without disentanglers. In my opinion this is a critical omission, because the addition of disentanglers seems to be the main or perhaps only difference to previous work.\n\nThe experiments show that MERA can drastically reduce the number of parameters of fully connected layers with only a modest drop in accuracy, for a particular ConvNet trained on CIFAR10. Unfortunately this ConvNet is far from state of the art, so it is not clear if the method would also work for better architectures. Furthermore, training deep nets can be tricky, and so the poor performance makes it impossible to tell if the baseline is (unintentionally) crippled.\n\nComparing MERA-2 to TT-3 or MERA-3 to TT-5 (which have an approximately equal number of parameters), the difference in accuracy appears to be less than 1 percentage point. Since only a handful of specific MERA / TT architectures were compared on a single dataset, it is not at all clear that we can expect MERA to outperform TT in many situations. In fact, it is not even clear that the small difference observed is stable under random retraining.\n\n\nSummary\nAn interesting paper with novel theoretical ideas, but insufficient experimental validation. Some expository issues need to be fixed.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.4444444477558136,
0.4444444477558136
],
"confidence": [
0.75,
0.5,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to AnonReviewer3",
"Response to AnonReviewer1",
"Response to AnonReviewer2"
],
"comment": [
"\nThank you for your helpful comments. \n\nWe have adapted the manuscript to include a more comprehensive description of those principles that may not be familiar to a machine learning audience and a more formal description of the MERA layers. We hope that you find the revised version to be more clear.\n\nIn the MERA and MPO experiments we compress the two penultimate layers of the network. We have amended the paper to make this more clear. \n",
"Thank you for your helpful comments. \n\nWe have revised the manuscript to include a more comprehensive description of the MERA decomposition. We hope that you find this sufficient. \n\nWe have now considered the regime in which the convolutional parameters make up a relatively small number of the total number of parameters in the network. In this ablated network the MERA layer also outperforms the tensor train network. ",
"\nThank you for your considered response. \n\nWe have adapted the manuscript to give a more comprehensive description of the architecture and those principles that may not be familiar to a machine learning audience, including tensor notation, disentanglers and sampling random orthogonal matrices/tensors. We hope that this is now more clear. \n\nThe role of the disentanglers, in terms of performance, has not been directly examined. One difficulty is that removing the disentanglers also reduces the number of model parameters thus biasing the comparison. We do not believe this would be a fair comparison. In future work we are planning to more thoroughly examine the role of disentanglers in various architectures. \n\nWhether compression is best achieved by factorizing the weight tensors or constraining or distilling larger models during or after training is an interesting question and we don’t make this comparison. However, using factorization initially would seem to allow for models with more capacity using the same number of parameters and the two approaches are not always mutually exclusive\n\nRegarding your comment about the reshaping of the activation vector from the final convolutional layer. We agree that this is a somewhat arbitrary choice that is also apparent in other tensorization methods. This issue could be avoided by constructing the entire network from tensor components, which we plan to examine in future work. \n\nTo compare the MERA and TT factorization methods we used a very simple architecture and basic data augmentation to as best possible isolate the effects of factorization from other design choices. It would indeed be very interesting to test these methods in a more complicated model. \n\nThank you again for a very helpful and detailed response. \n"
]
} | {
"paperhash": [
"liu|machine_learning_by_unitary_tensor_network_of_hierarchical_tree_structure",
"levine|deep_learning_and_quantum_entanglement:_fundamental_connections_with_implications_to_network_design",
"garipov|ultimate_tensorization:_compressing_convolutional_and_fc_layers_alike",
"cichocki|tensor_networks_for_dimensionality_reduction_and_large-scale_optimization:_part_1_low-rank_tensor_decompositions",
"lin|why_does_deep_and_cheap_learning_work_so_well?",
"stoudenmire|supervised_learning_with_quantum-inspired_tensor_networks",
"novikov|tensorizing_neural_networks",
"hinton|distilling_the_knowledge_in_a_neural_network",
"ioffe|batch_normalization:_accelerating_deep_network_training_by_reducing_internal_covariate_shift",
"he|delving_deep_into_rectifiers:_surpassing_human-level_performance_on_imagenet_classification",
"kingma|adam:_a_method_for_stochastic_optimization",
"mehta|an_exact_mapping_between_the_variational_renormalization_group_and_deep_learning",
"saxe|exact_solutions_to_the_nonlinear_dynamics_of_learning_in_deep_linear_neural_networks",
"orús|a_practical_introduction_to_tensor_networks:_matrix_product_states_and_projected_entangled_pair_states",
"oseledets|tensor-train_decomposition",
"schollwoeck|the_density-matrix_renormalization_group_in_the_age_of_matrix_product_states",
"eisert|colloquium:_area_laws_for_the_entanglement_entropy",
"vidal|entanglement_renormalization:_an_introduction",
"chan|an_introduction_to_the_density_matrix_renormalization_group_ansatz_in_quantum_chemistry",
"vidal|class_of_quantum_many-body_states_that_can_be_efficiently_simulated.",
"mezzadri|how_to_generate_random_matrices_from_the_classical_compact_groups",
"verstraete|matrix_product_states_represent_ground_states_faithfully",
"feynman|simulating_physics_with_computers",
"stewart|the_efficient_generation_of_random_orthogonal_matrices_with_an_application_to_condition_estimators",
"liu|2_3_o_ct_2_01_8_machine_learning_by_two-dimensional_hierarchical_tensor_networks_:_a_quantum_information_theoretic_perspective_on_deep_architectures",
"cichocki|low-rank_tensor_networks_for_dimensionality_reduction_and_large-scale_optimization_problems:_perspectives_and_challenges_part_1",
"srivastava|dropout:_a_simple_way_to_prevent_neural_networks_from_overfitting",
"maas|rectifier_nonlinearities_improve_neural_network_acoustic_models"
],
"title": [
"Machine learning by unitary tensor network of hierarchical tree structure",
"Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design",
"Ultimate tensorization: compressing convolutional and FC layers alike",
"Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 1 Low-Rank Tensor Decompositions",
"Why Does Deep and Cheap Learning Work So Well?",
"Supervised Learning with Quantum-Inspired Tensor Networks",
"Tensorizing Neural Networks",
"Distilling the Knowledge in a Neural Network",
"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"Adam: A Method for Stochastic Optimization",
"An exact mapping between the Variational Renormalization Group and Deep Learning",
"Exact solutions to the nonlinear dynamics of learning in deep linear neural networks",
"A Practical Introduction to Tensor Networks: Matrix Product States and Projected Entangled Pair States",
"Tensor-Train Decomposition",
"The density-matrix renormalization group in the age of matrix product states",
"Colloquium: Area laws for the entanglement entropy",
"Entanglement Renormalization: An Introduction",
"An Introduction to the Density Matrix Renormalization Group Ansatz in Quantum Chemistry",
"Class of quantum many-body states that can be efficiently simulated.",
"How to generate random matrices from the classical compact groups",
"Matrix product states represent ground states faithfully",
"Simulating physics with computers",
"The Efficient Generation of Random Orthogonal Matrices with an Application to Condition Estimators",
"2 3 O ct 2 01 8 Machine Learning by Two-Dimensional Hierarchical Tensor Networks : A Quantum Information Theoretic Perspective on Deep Architectures",
"Low-Rank Tensor Networks for Dimensionality Reduction and Large-Scale Optimization Problems: Perspectives and Challenges PART 1",
"Dropout: a simple way to prevent neural networks from overfitting",
"Rectifier Nonlinearities Improve Neural Network Acoustic Models"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"Ding Liu",
"Shi-Ju Ran",
"P. Wittek",
"C. Peng",
"R. Garc'ia",
"G. Su",
"M. Lewenstein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yoav Levine",
"David Yakira",
"Nadav Cohen",
"A. Shashua"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"T. Garipov",
"D. Podoprikhin",
"Alexander Novikov",
"D. Vetrov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Cichocki",
"Namgil Lee",
"I. Oseledets",
"A. Phan",
"Qibin Zhao",
"D. Mandic"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Henry W. Lin",
"Max Tegmark"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"E. Stoudenmire",
"D. Schwab"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Alexander Novikov",
"D. Podoprikhin",
"A. Osokin",
"D. Vetrov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Geoffrey E. Hinton",
"O. Vinyals",
"J. Dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sergey Ioffe",
"Christian Szegedy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kaiming He",
"X. Zhang",
"Shaoqing Ren",
"Jian Sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Diederik P. Kingma",
"Jimmy Ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Pankaj Mehta",
"D. Schwab"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Andrew M. Saxe",
"James L. McClelland",
"S. Ganguli"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Orús"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"I. Oseledets"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"U. Schollwoeck"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Eisert",
"M. Cramer",
"M. Plenio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. Vidal"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. Chan",
"Jonathan J. Dorando",
"Debashree Ghosh",
"J. Hachmann",
"Eric Neuscamman",
"Haitao Wang",
"Takeshi Yanai"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. Vidal"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"F. Mezzadri"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"F. Verstraete",
"J. Cirac"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Feynman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. Stewart"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ding Liu",
"Shi-Ju Ran",
"P. Wittek",
"C. Peng",
"Raúl Blázquez García",
"G. Su",
"M. Lewenstein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Cichocki",
"Namgil Lee",
"I. Oseledets",
"A. Phan",
"Qibin Zhao",
"D. Mandic"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nitish Srivastava",
"Geoffrey E. Hinton",
"A. Krizhevsky",
"I. Sutskever",
"R. Salakhutdinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Andrew L. Maas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1710.04833v4",
"1704.01552v2",
"1611.03214v1",
"1609.00893",
"1608.08225v4",
"1605.05775",
"1509.06569v2",
"1503.02531v1",
"1502.03167v3",
"1502.01852",
"1412.6980v9",
"1410.3831v1",
"1312.6120v3",
"1306.2164v3",
"",
"1008.3477",
"",
"0912.1651v2",
"0711.1398v1",
"quant-ph/0610099",
"math-ph/0609050v2",
"cond-mat/0505140v6",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
[],
[
"background"
],
[],
[],
[
"background",
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[
"background"
],
[],
[
"methodology"
],
[
"background"
],
[
"background",
"methodology"
],
[
"methodology"
],
[],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background",
"methodology"
],
[],
[
"background",
"methodology"
],
[
"methodology"
],
[],
[
"background"
],
[
"methodology"
],
[
"methodology"
],
[],
[],
[
"methodology"
]
],
"isInfluential": [
false,
true,
false,
false,
false,
true,
true,
true,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true
]
} | null | 87 | null | 0.407407 | 0.666667 | null | null | null | null | null | rkGZuJb0b |
|
huang|parametric_adversarial_divergences_are_good_task_losses_for_generative_modeling|ICLR_cc_2018_Conference | Parametric Adversarial Divergences are Good Task Losses for Generative Modeling | Generative modeling of high dimensional data like images is a notoriously difficult and ill-defined problem. In particular, how to evaluate a learned generative model is unclear.
In this paper, we argue that *adversarial learning*, pioneered with generative adversarial networks (GANs), provides an interesting framework to implicitly define more meaningful task losses for unsupervised tasks, such as for generating "visually realistic" images. By relating GANs and structured prediction under the framework of statistical decision theory, we put into light links between recent advances in structured prediction theory and the choice of the divergence in GANs. We argue that the insights about the notions of "hard" and "easy" to learn losses can be analogously extended to adversarial divergences. We also discuss the attractive properties of parametric adversarial divergences for generative modeling, and perform experiments to show the importance of choosing a divergence that reflects the final task. | {
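Schematically (in assumed notation, not quoted from the paper), a parametric adversarial divergence restricts the critic in a variational formulation to a parametric family Φ of neural networks:

```latex
\mathrm{Div}_{\Phi}(p \,\|\, q_{\theta})
  \;=\; \sup_{\phi\in\Phi}\;
        \mathbb{E}_{x\sim p}\big[f_1(\phi,x)\big]
        \;-\; \mathbb{E}_{x\sim q_{\theta}}\big[f_2(\phi,x)\big],
```

where particular choices of f_1, f_2 and of constraints on Φ recover the original GAN, f-GAN, and Wasserstein GAN objectives; the "nonparametric" counterparts take the supremum over a much larger function class.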
"name": [],
"affiliation": []
} | Parametric adversarial divergences implicitly define more meaningful task losses for generative modeling, we make parallels with structured prediction to study the properties of these divergences and their ability to encode the task of interest. | [
"parametric",
"adversarial",
"divergence",
"generative",
"modeling",
"gan",
"neural",
"network",
"task",
"loss",
"structured",
"prediction"
] | null | 2018-02-15 22:29:24 | 48 | null | null | null | null | null | null | null | null | false | Pros:
- The paper proposes interesting new ideas on evaluating generative models.
- Paper provides hints at interesting links between structural prediction and adversarial learning.
- Authors propose a new dataset called Thin-8 to demonstrate the new ideas and argue that it is useful in general to study generative models.
- The paper is well written and the authors have made a good attempt to update the paper after reviewer comments.
Cons:
- The proposed ideas are high level and the paper lack deeper analysis.
- The demonstrations that parametric divergences perform better than non-parametric divergences are interesting, but the reviewers think that the practical importance of the results is weak in comparison to previous works.
With this analysis, the committee recommends this paper for workshop. | {
"review_id": [
"SJvLO1WZf",
"S1owSiOeM",
"H1EMeWfgz"
],
"review": [
{
"title": "title: Limited novelty for GAN and adversarial divergences literature",
"paper_summary": null,
"main_review": "main_review: This paper takes some steps in the direction of understanding adversarial learning/GAN and relating GANs and structured prediction under statistical decision theory framework. \n\nOne of the main contribution of the paper is to study/analyze parametric adversarial divergences and link it with structured losses. Although, I see a value in the idea considered in the paper, it is not clear to me how much novelty does this work bring on top of the following two papers:\n\n1) S. Liu. Approximation and convergence properties of generative adversarial learning. In NIPS, 2017.\n2) S. Arora. Generalization and equilibrium in generative adversarial nets (GANs). In ICML, 2017.\n\nMost of their theoretical results seems to be already existing in literature (Liu, Arora, Arjovsky) in some form of other and it is claimed that this paper put these result in perspective in an attempt to provide a more principled view of the nature and usefulness of adversarial divergences, in comparison to traditional divergences.\n\nHowever, it seems to me that the paper is limited both in theoretical novelty and practical usefulness of these results. Especially, I could not see any novel contribution for GAN literature or adversarial divergences. \n\nI would suggests authors to clearly specify novelties and contrast their work with\n1) GAN literature: ([2] Arora's results) \n2) Adversarial divergences literature: ([1] Liu)\n\nAlso, provide more experiments to support several claims (without any rigorous theoretical justifications) made in the paper.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A very general formulation without real theoretical or empirical support",
"paper_summary": null,
"main_review": "main_review: This paper introduces a family of \"parametric adversarial divergences\" and argue that they have advantages over other divergences in generative modelling, specially for structured outputs. \n\nThere's clear value in having good inductive biases (e.g. expressed in the form of the discriminator architecture) when defining divergences for practical applications. However, I think that the paper would be much more valuable if its focus shifted from presenting a new notion of divergence to deep-diving into the effect of inductive biases and presenting more specific results (theoretical and / or empirical) in structured prediction or other problems. In its current form the paper doesn't seem particularly strong for either the divergence or GAN literatures. Some reasons below:\n\n* There are no specific results on properties of the divergences, or axioms that justify them. I think that presenting a very all-encompassing formulation without a strong foundation does not add value. \n* There's abundant literature on f-divergences which show that there's a 1-1 relationship between divergences and optimal (Bayes) risks of classification problems (e.g. Reid at al. Information, Divergence and Risk for Binary Experiments in JMLR and Garcia-Garcia et al. Divergences and Risks for Multiclass Experiments in COLT). This disproves the point that the authors make that it's not possible to encode information about the final task in the divergence. If the loss for the task is proper, then it's well known how to construct a divergence which coincides with the optimal risk.\n* The divergences presented in this work are different from the above since the risk is minimised over a parametric class instead of over the whole set of integrable functions. However, practical estimators of f-divergences also reduce the optimization space (e.g. unit ball in a RKHS as in Nguyen et al. Estimating Divergence Functionals and the\nLikelihood Ratio by Convex Risk Minimization or Ruderman et al. Tighter Variational Representations of f-Divergences via Restriction to Probability Measures). So, given the lack of strong foundation for the formulation, \"parametric adversarial divergences\" feel more like estimators of other divergences than a relevant new family.\n* There are many estimators for f-divergences (like the ones cited above and many others based e.g. on nearest-neighbors) that are sample-based and thus correspond to the \"implicit\" case that the authors discuss. They don't necessarily need to use the dual form. So table 1 and the first part of Section 3.1 are not accurate.\n* The experiments are few and too specific, specially given that the paper presents a very general framework. The first experiment just shows that Wasserstein GANs don't perform well in an specific dataset and use that to validate a point about those GANs not being good for high dimensions due to their sample complexity. That feels like confirmation bias and also does not really say anything about the parametric adversarial GANs, which are the focus of the paper.\n\nIn summary, I like the authors idea to explore the restriction of the function class of dual representations to produce useful-in-practice divergences, but the paper feels a bit middle of the road. The theory is not strong and the experiments don't necessary support the intuitive claims made in the paper.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting way to think about GANs",
"paper_summary": null,
"main_review": "main_review: This paper is in some sense a \"position paper,\" giving a framework for thinking about the loss functions implicitly used by the generator of GAN-type models. It advocates thinking about the loss in a way similar to how it is considered in structured prediction. It also proposes that approximating the dual formulation of various divergences with functions from a parametric class, as is typically done in GAN-type setups, is not only more tractable (computationally and in sample complexity) than the full nonparametric estimation, but also gives a better actual loss.\n\nOverall, I like the argument here, and think that it is a useful framework for thinking about these things. My main concern is that the practical contribution on top of Liu et al. (2017) might be somewhat limited.\n\nA few small points:\n\n- f-divergences can actually be nonparametrically estimated purely from samples, e.g. with the k-nearest neighbor estimator of https://arxiv.org/abs/1411.2045, or (for certain f-divergences) the kernel density based estimator of https://arxiv.org/abs/1402.2966. These are unlikely to lead to a practical learning algorithm, but could be mentioned in Table 1.\n\n- The discussion of MMD in the end of section 3.1 is a little off. MMD is fundamentally defined by the kernel choice; Dziugaite et al. (2015) only demonstrated that the Gaussian RBF kernel is a poor choice for MNIST modeling, while the samples of Li et al. (2015) simply by using a mixture of Gaussian kernels were much better. No reasonable fixed kernel is likely to yield good results on a harder image modeling problem, but that is a slightly different message than the one this paragraph conveys.\n\n- It would be interesting to replicate the analysis of Danihelka et al. (2017) on the Thin-8 dataset. This might help clarify which of the undesirable effects observed in the VAE model here are due to likelihood, and which due to other aspects of VAEs (like the use of the lower bound).",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.3333333432674408,
0.5555555820465088
],
"confidence": [
0.5,
0.75,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Difference with Liu et al. (2017) and Arora et al. (2017)",
"part 2",
"Answer to AnonReviewer2.",
"Answer to AnonReviewer3",
"part 2",
"Thin-8 Dataset",
"part 2",
"Thin-8 and Visual Hyperplane will be released.",
"Answer to AnonReviewer1"
],
"comment": [
"We are posting this comment since two reviewers have asked us to clarify the difference between our work and:\n\n(1) S. Liu et al. Approximation and convergence properties of generative adversarial learning. In NIPS, 2017.\n(2) S. Arora et al. Generalization and equilibrium in generative adversarial nets (GANs). In ICML, 2017.\n\nThe following is an extract of our updated related work section.\n\nArora et al. (2017) argue that analyzing GANs with a nonparametric (optimal discriminator) view does not really make sense, because the usual nonparametric divergences considered have bad sample complexity. They also prove sample complexities for parametric divergences. Liu et al. (2017) prove under some conditions that globally minimizing a neural divergence is equivalent to matching all moments that can be represented within the discriminator family. They unify parametric divergences with nonparametric divergences and introduce the notion of strong and weak divergence. However, both those works do not attempt to study the meaning and practical properties of parametric divergences. In our work, we start by introducing the notion of final task, and then discuss why parametric divergences can be good task losses with respect to usual final tasks. We also perform experiments to determine properties of some parametric divergences, such as invariance, ability to enforce constraints and properties of interest, as well as the difference with their nonparametric counterparts. Finally, we unify structured prediction and generative modeling, which could give a new perspective to the community.",
"\nR: \"The discussion of MMD in the end of section 3.1 is a little off. MMD is fundamentally defined by the kernel choice; Dziugaite et al. (2015) only demonstrated that the Gaussian RBF kernel is a poor choice for MNIST modeling, while the samples of Li et al. (2015) simply by using a mixture of Gaussian kernels were much better. No reasonable fixed kernel is likely to yield good results on a harder image modeling problem, but that is a slightly different message than the one this paragraph conveys.\"\n\nA: We updated section 3.1 to make it clear that MMD is fundamentally dependent on the choice of kernels. In particular we emphasize that the fact that MMD does not perform well for generative modeling is because generic kernels are used. We actually provide a more complete discussion of the choice of kernels in Section 3.2 \"Ability to Integrate Desirable Properties for the Final Task\", where we discuss the possibility to learn the kernel based on data instead of hand-defining it. That discussion was motivated by the possibility of integrating more knowledge about the final task into the kernel.\n\nR: \"It would be interesting to replicate the analysis of Danihelka et al. (2017) on the Thin-8 dataset. This might help clarify which of the undesirable effects observed in the VAE model here are due to likelihood, and which due to other aspects of VAEs (like the use of the lower bound).\"\n\nA: Thank you for this interesting experiment idea. Indeed RealNVP is attractive as a generative model for comparing maximum likelihood and parametric divergences because the likelihood can be evaluated explicitly. However, we think that such an experiment is currently out of the scope of the paper because it is quite non-trivial for the following reasons.\n\nOne reason is that there are no obvious extensions of RealNVP to convolutional architectures, which are arguably the best architecture to deal efficiently with high-resolution images. However, there are other generators with also feature explicit likelihood. Probably one of the best known architectures with explicit likelihood for image generation is the PixelCNN (Van den Oord et al. (2016), https://arxiv.org/pdf/1601.06759.pdf). Training them using maximum-likelihood is not a problem because teacher-forcing is used, which allows to parallelize the process. However training them using a discriminator requires generating samples, which is extremely slow, because images have to be generated pixel after pixel.\n\n\nWe hope we have addressed the reviewer's concerns and we thank the reviewer again for their constructive review.",
"We thank the reviewer for taking the time to review our paper, and for evaluating our paper as a position paper - which is indeed what we intended our paper to be.\n\nConcerning the difference with the work of Liu et al. (2017), we refer the reviewer to Shuang Liu's comment \"Mathematical View vs. Philosophical View\", as well as our comment \"Difference with Liu et al. (2017) and Arora et al. (2017)\". We have updated our related work section to better contrast our work with those works (see the revised version).\n\nThe bottom line is that while Liu et al. (2017) concentrate more on the mathematical properties of parametric adversarial divergences, they do not attempt to study the meaning and practical properties of parametric divergences. In our paper, we start by introducing the notion of final task, which is our true goal, but is often difficult to formalize and hard to learn from directly. We then give arguments why parametric divergences can be good approximations/surrogates for the final task at hand. To do that, we review results from the literature, establish links with structured prediction theory, and perform a series of preliminary experiments to better understand parametric divergences by attempting to answer the following questions. How are they affected by various factors: discriminator family, transformations of the dataset? How important is the sample complexity? How good are they at dealing with challenging datasets such as high-dimensional data, or data with abstract structure and constraints?\n\nAs you have noted, we are not claiming that we have a complete theory of parametric divergences. Rather, we are proposing new ways to think of parametric divergences, and more generally of the (final) task of generative modeling.\n\nWe now answer the reviewer's questions:\n\nR: \"f-divergences can actually be nonparametrically estimated purely from samples, e.g. with the k-nearest neighbor estimator of https://arxiv.org/abs/1411.2045, or (for certain f-divergences) the kernel density based estimator of https://arxiv.org/abs/1402.2966. These are unlikely to lead to a practical learning algorithm, but could be mentioned in Table 1.\"\n\nA: Thank you for pointing out that there is a rich literature on estimating f-divergences from samples. We have updated section 3.1 to include some of those techniques. However, one should note that those techniques all make additional (implicit or explicit) assumptions on the densities. We updated the table caption and Section 3.1 to reflect that.",
"We thank the reviewer for taking the time to review our paper.\n\nWe now answer the reviewer’s comments and questions.\n\nR: “Most of their theoretical results seems to be already existing in literature (Liu, Arora, Arjovsky) in some form of other and it is claimed that this paper put these result in perspective in an attempt to provide a more principled view of the nature and usefulness of adversarial divergences, in comparison to traditional divergences.”\n\nConcerning the difference with the work of Liu et al. (2017), we refer the reviewer to Shuang Liu's comment \"Mathematical View vs. Philosophical View\", as well as our comment \"Difference with Liu et al. (2017) and Arora et al. (2017)\". Concerning the difference with the work of Arora et al. (2017), we also refer the reviewer to our comment \"Difference with Liu et al. (2017) and Arora et al. (2017)\".\nWe have updated our related work section to better contrast our work with those works.\n\nThe bottom line is that those works focus on specific mathematical properties of parametric divergences. Arora et al. (2017) focus on statistical efficiency of parametric divergences. Liu et al. (2017) focus on topological properties of adversarial divergences and the mathematical interpretation of minimizing neural divergences (in a nutshell: matching moments).\n\nHowever, neither of those works attempts to study the meaning and practical properties of parametric divergences. In our paper, we start by introducing the notion of final task, which is our true goal, but is often difficult to formalize and hard to learn from directly. We then give arguments why parametric divergences can be good approximations/surrogates for the final task at hand. To do that, we review results from the literature, establish links with structured prediction theory, and perform a series of preliminary experiments to better understand parametric divergences by attempting to answer the following questions. How are they affected by various factors: discriminator family, transformations of the dataset? How important is the sample complexity? How good are they at dealing with challenging datasets such as high-dimensional data, or data with abstract structure and constraints?\n\nR: “However, it seems to me that the paper is limited both in theoretical novelty and practical usefulness of these results. Especially, I could not see any novel contribution for GAN literature or adversarial divergences.”\n\nA: Here are some potential contributions to the adversarial divergence literature:\n- it is often believed in the GAN literature that weaker losses (in the topological sense) are easier to learn than stronger losses. There has indeed been work in the adversarial divergence literature on the relative strength and convergence properties of adversarial divergences. However, to the best of our knowledge, there is no rigorous theory that explains why weaker losses are easier to learn. By relating adversarial divergences used in generative modeling with the task losses used in structured prediction, we put into perspective some theoretical results from structured prediction theory that actually show and quantify how the strength of the objective affects the ease of learning the model. 
Because those results are consistent with the intuition that weaker divergences are easier to learn, they give additional reasons to think that this intuition is correct.\n\nWe take this opportunity to emphasize that it is highly non-trivial to derive a rigorous theory on quantifying which divergences are better for learning. Unlike structured prediction, where the task loss is also used for evaluating the learned model, there is no one good way of evaluating generative models yet. Because a rigorous theory should study the influence of minimizing a divergence on minimizing the evaluation metric, any theory that is derived on divergences can only be as meaningful as the evaluation metric considered.\n",
"R: \" The divergences presented in this work are different from the above since the risk is minimised over a parametric class instead of over the whole set of integrable functions. However, practical estimators of f-divergences also reduce the optimization space (e.g. unit ball in a RKHS as in Nguyen et al. Estimating Divergence Functionals and the\nLikelihood Ratio by Convex Risk Minimization or Ruderman et al. Tighter Variational Representations of f-Divergences via Restriction to Probability Measures). So, given the lack of strong foundation for the formulation, \"parametric adversarial divergences\" feel more like estimators of other divergences than a relevant new family.\"\n\nA: Whether parametric divergences are a new family or simply estimators is more of an opinion. However, our opinion is that parametric divergences are a new family because they have very different sample complexities than their nonparametric counterparts, and because they will only match the moments that the discriminator family can represent.\n\nR: \"There are many estimators for f-divergences (like the ones cited above and many others based e.g. on nearest-neighbors) that are sample-based and thus correspond to the \"implicit\" case that the authors discuss. They don't necessarily need to use the dual form. So table 1 and the first part of Section 3.1 are not accurate.\"\n\nA: This is true, thanks for pointing it out. However, all these methods make additional assumptions about the densities, some of which are conceptually similar to smoothing the density, which makes them different from the true f-divergence. We updated Section 3.1 to reflect that.\n\nR: The experiments are few and too specific, specially given that the paper presents a very general framework. The first experiment just shows that Wasserstein GANs don't perform well in an specific dataset and use that to validate a point about those GANs not being good for high dimensions due to their sample complexity. That feels like confirmation bias and also does not really say anything about the parametric adversarial GANs, which are the focus of the paper.\n\nA: For the first experiment [Sample Complexity], it is well known that models trained with parametric divergences have no trouble generating MNIST and CIFAR. See for instance the DCGAN paper (https://pdfs.semanticscholar.org/3575/6f711a97166df11202ebe46820a36704ae77.pdf) and the WGAN-GP paper (https://arxiv.org/pdf/1704.00028.pdf).\nOn the contrary, using the true Wasserstein yields bad results on CIFAR. Our point is to raise awareness that parametric Wasserstein is NOT nonparametric Wasserstein, by showing that the resulting samples are much worse.\nThe second experiment [Robustness to Transformations] focuses on understanding actual properties of parametric divergences, by seeing how robust they are to simple transformations such as rotations and additive noise.\nThe third and fourth experiment compare parametric divergences with nonparametric divergences by taking a popular parametric divergence: the parametric Wasserstein and comparing with the most popular nonparametric divergence: the KL. Using them to train exactly the same generator architectures, we see that KL fails as the resolution goes from 32x32 to 512x512, while the parametric Wasserstein yields results with comparable quality to the training set. 
Similarly, on the task of generating sequences of 5 digits that sum to 25, we see that the parametric Wasserstein is better at enforcing the constraint than the KL.\n\nTo sum up, our experiments show the difference between parametric and nonparametric divergences, study invariance properties of parametric divergences, and compare how well parametric and nonparametric divergences can deal with high-dimensionality and enforcing constraints.\n\nIt's true that we do not have a strong theory. But as we stated in the beginning, it's very challenging to prove that parametric divergences are a good proxy for human perception, when mathematically defining human perception is itself challenging. So the best we can do is to study the properties of the parametric divergences.\n\nWe hope we have addressed the reviewer's concerns and thank the reviewer again for their time.",
"We have released the Thin-8 dataset (1585 samples, 16 people, 512x512 resolution).\n\nPlease find it here: https://gabrielhuang.github.io/code/",
"Now, we give some of our potential contributions to the GAN literature, and more generally to the generative modeling literature:\n- we give further experimental evidence that parametric divergences can be better than maximum-likelihood for modeling structured data. We consider two tasks: modeling high-dimensional 512x512 data lying on a low-dimensional manifold (Thin-8 dataset), and modeling data with high-level abstract structure/constraints (visual hyperplane task). On both those tasks, we show that to train the same generator, minimizing a WGAN-GP parametric divergence yields better samples than optimizing objectives related to maximum likelihood (VAE evidence lower-bound).\n- in the GAN literature, parametric divergences are commonly referred to as \"lower bounds\" or \"variational lower bounds\" of their corresponding nonparametric divergences (see for instance the f-GAN paper by Nowozin et al. (2017), https://arxiv.org/pdf/1606.00709.pdf). We think that the terminology is misleading, and we show in this paper that parametric divergences are not to be thought merely as a lower-bound of the corresponding nonparametric divergences. First, statistics-wise, parametric divergences have been shown to have very different sample complexities than nonparametric divergences. Moreover, if the final goal is generative modeling, parametric divergences can be more meaningful objectives; they have been shown to only match the moments that the discriminator family is able to represent, which in image generation seems to be enough to generate visually appealing samples. On the contrary, most nonparametric divergences are strict and enforce matching all moments, which is unnecessarily constraining, and might actually make the objective harder to learn. Finally, we illustrated experimentally that those differences do matter, by showing that using an objective derived from the true (nonparametric) Wasserstein yields worse results than using a parametric Wasserstein in high dimensions.\n- to the best of our knowledge, we have not found extensive studies in the GAN literature of the behavior of parametric divergences with respect to transformations of the distributions. This is important because in GANs, those divergences are minimized using gradient descent. Thus a divergence suitable for generative modeling should vary smoothly with respect to sensible transformations of the dataset (such as deformations, for images) in order to provide a meaningful learning signal to the generator. Therefore, we carry out preliminary experiments to assess the invariance properties of some parametric divergences to simple transformations. One should note that although such simple transformations are not completely representative of the ones induced by a GAN during the course of learning, it is not obvious how to design more complex transformations, such as ones that depart from the data manifold (other than noise, or image blurring).\n\nEven if we are not yet capable of deriving a rigorous theory, we do believe that parametric divergences are strong candidates to consider in generative modeling, both as learning objectives and as evaluation metrics. As pointed out in Colin Raffel’s comment, our paper is laying some of the groundwork for designing more meaningful and practical objectives in generative modeling. 
We hope that our work helps other researchers get a better perspective on generative modeling, and acts as a reminder to always keep the final task, which is our true goal, in mind.\n\n\nWe hope we have addressed the reviewer's concerns and we thank the reviewer again for taking the time to review our paper.",
"Thank you for your comment, Ilya.\n\nWe will release the Thin-8 dataset as well as the Visual Hyperplane (MNIST digits summing to 25), as soon as our submission is de-anonymized, along with the data-augmentation code (elastic deformations for Thin-8).\n\nPlease note that the Visual Hyperplane dataset is generated on-the-fly from MNIST: every time a sample is requested, a combination of 5 symbolic digits is sampled uniformly from all possible combinations that sum to 25, then a corresponding image is sampled from MNIST for each symbolic digit. Finally, the 5 images are concatenated.",
"We thank the reviewer for their long and thorough review.\n\nBefore we start addressing the reviewer's concerns, we would like to make it clear that we are a position paper. We are not claiming to introduce a new family of divergences. Rather, we are giving the name of \"parametric adversarial divergence\" to the divergences which have been used recently in GANs, and attempting to better understand why they are good candidates for generative modeling.\n\nWe now answer the reviewer's points:\n\nR: \"There are no specific results on properties of the divergences, or axioms that justify them. I think that presenting a very all-encompassing formulation without a strong foundation does not add value.\"\n\nA: It's actually very hard to obtain theoretical results for our work. What we claim is that parametric divergences can be a good approximation of our final task, which in the case of generation, is to generate realistic and diverse samples. It is not something that can be easily evaluated or proved: it is notoriously difficult to mathematically define a perceptual loss, so it's not obvious how to prove rigorously that parametric divergences approximate the perceptual loss well, other than by looking at samples, or using meaningful but debatable proxies such as inception score.\n\nR: \"There's abundant literature on f-divergences which show that there's a 1-1 relationship between divergences and optimal (Bayes) risks of classification problems (e.g. Reid at al. Information, Divergence and Risk for Binary Experiments in JMLR and Garcia-Garcia et al. Divergences and Risks for Multiclass Experiments in COLT). This disproves the point that the authors make that it's not possible to encode information about the final task in the divergence. If the loss for the task is proper, then it's well known how to construct a divergence which coincides with the optimal risk.\"\n\nA: What you are referring to is the equivalence between computing a divergence and solving a classification problem. This is seen in GANs as the discriminator is solving a classification problem with the appropriate loss between two distributions p and q, the loss of which corresponds to the divergence between p and q. In fact, by choosing the appropriate losses one can recover any f-divergence and any IPM (it corresponds to choosing the Delta in equation 1 of our paper).\nHowever the binary loss here is very different from what we call task loss or final loss. The final loss is what we actually care about (images that respect perspective, that are not blurry, made of full objects). Instead the loss you are referring to is a loss that defines the binary classification problem between p and q. We updated the paper to include your references. Originally we were based on the work of Sriperumbudur et al 2012. Thank you for helping us complete the references.\n\n"
]
} | {
"paperhash": [
"arora|generalization_and_equilibrium_in_generative_adversarial_nets_(gans)",
"belanger|structured_prediction_energy_networks",
"bellemare|the_cramer_distance_as_a_solution_to_biased_wasserstein_gradients",
"benamou|iterative_bregman_projections_for_regularized_transportation_problems",
"cuturi|sinkhorn_distances:_lightspeed_computation_of_optimal_transport",
"dziugaite|training_generative_neural_networks_via_maximum_mean_discrepancy_optimization",
"goodfellow|generative_adversarial_nets",
"gretton|a_kernel_method_for_the_two-sample-problem",
"gulrajani|improved_training_of_wasserstein_gans",
"lamb|professor_forcing:_a_new_algorithm_for_training_recurrent_networks",
"li|mmd_gan:_towards_deeper_understanding_of_moment_matching_network",
"li|generative_moment_matching_networks",
"liu|approximation_and_convergence_properties_of_generative_adversarial_learning",
"mohamed|learning_in_implicit_generative_models",
"oord|pixel_recurrent_neural_networks",
"radford|unsupervised_representation_learning_with_deep_convolutional_generative_adversarial_networks",
"salimans|improved_techniques_for_training_gans",
"theis|a_note_on_the_evaluation_of_generative_models",
"bousquet|from_optimal_transport_to_generative_modeling:_the_vegan_cookbook",
"cortes|structured_prediction_theory_based_on_factor_graph_complexity",
"moon|multivariate_f-divergence_estimation_with_confidence",
"mroueh|mcgan:_mean_and_covariance_feature_matching_gan",
"nguyen|estimating_divergence_functionals_and_the_likelihood_ratio_by_convex_risk_minimization",
"sashank|on_the_decreasing_power_of_kernel_and_distance_based_nonparametric_hypothesis_tests_in_high_dimensions",
"mark|information,_divergence_and_risk_for_binary_experiments",
"roth|stabilizing_training_of_generative_adversarial_networks_through_regularization",
"ruderman|tighter_variational_representations_of_f-divergences_via_restriction_to_probability_measures"
],
"title": [
"Learning to Discover Cross-Domain Relations with Generative Adversarial Networks",
"Structured Prediction Energy Networks",
"The Cramer Distance as a Solution to Biased Wasserstein Gradients",
"BACKPROPAGATION THROUGH THE VOID: OPTIMIZING CONTROL VARIATES FOR BLACK-BOX GRADIENT ESTIMATION",
"Approximate Bayesian Computation scheme for parameter inference and model selection in dynamical systems",
"Adaptation Algorithm and Theory Based on Generalized Discrepancy",
"Generative Adversarial Nets",
"A Kernel Method for the Two-Sample Problem",
"Improved Training of Wasserstein GANs",
"Professor Forcing: A New Algorithm for Training Recurrent Networks",
"Domain Adaptation: Learning Bounds and Algorithms",
"New Analysis and Algorithm for Learning with Drifting Distributions",
"Characteristic Kernels and RKHS Embedding of Measures",
"Learning in Implicit Generative Models",
"Pixel Recurrent Neural Networks Aäron van den Oord",
"Under review as a conference paper at ICLR 2016 UNSUPERVISED REPRESENTATION LEARNING WITH DEEP CONVOLUTIONAL GENERATIVE ADVERSARIAL NETWORKS",
"Improved Techniques for Training GANs",
"A NOTE ON THE EVALUATION OF GENERATIVE MODELS",
"From optimal transport to generative modeling: the VEGAN cookbook",
"Structured Prediction Theory Based on Factor Graph Complexity",
"Multivariate f -Divergence Estimation With Confidence",
"Under review as a conference paper at ICLR 2017 SEMI-SUPERVISED LEARNING WITH CONTEXT-CONDITIONAL GENERATIVE ADVERSARIAL NETWORKS",
"Estimating divergence functionals and the likelihood ratio by convex risk minimization",
"On the Decreasing Power of Kernel and Distance based Nonparametric Hypothesis Tests in High Dimensions",
"Information, Divergence and Risk for Binary Experiments",
"Stabilizing Training of Generative Adversarial Networks through Regularization",
"Tighter Variational Representations of f -Divergences via Restriction to Probability Measures"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"taeksoo kim",
"moonsu cha",
"hyunsoo kim",
"jung kwon lee",
"jiwon kim"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david belanger",
"andrew mccallum"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Massachusetts Amherst",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Massachusetts Amherst",
"location": "{}"
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [
"will grathwohl",
"dami choi",
"yuhuai wu",
"geoffrey roeder",
"david duvenaud"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Toronto and Vector Institute",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto and Vector Institute",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto and Vector Institute",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto and Vector Institute",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto and Vector Institute",
"location": "{}"
}
]
},
{
"name": [
"tina toni",
"david welch",
"natalja strelkowa",
"andreas ipsen",
"michael p h stumpf"
],
"affiliation": [
{
"laboratory": "",
"institution": "Imperial College London",
"location": "{'country': 'UK'}"
},
{
"laboratory": "",
"institution": "Imperial College London",
"location": "{'country': 'UK *'}"
},
{
"laboratory": "",
"institution": "Imperial College London",
"location": "{'country': 'UK'}"
},
{
"laboratory": "",
"institution": "Imperial College London",
"location": "{'country': 'UK'}"
},
{
"laboratory": "",
"institution": "Imperial College London",
"location": "{'country': 'UK'}"
}
]
},
{
"name": [
"corinna cortes",
"mehryar mohri",
"andres mu ñoz medina"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ian j goodfellow",
"jean pouget-abadie",
"mehdi mirza",
"bing xu",
"david warde-farley",
"sherjil ozair",
"aaron courville",
"yoshua bengio",
" delhi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Université de Montréal from Ecole Polytechnique",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "Sherjil Ozair is visiting",
"institution": "Université de Montréal from Indian Institute of Technology",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"arthur gretton",
"karsten m borgwardt",
"malte j rasch",
"bernhard schölkopf",
"alexander smola"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ishaan gulrajani",
"faruk ahmed",
"martin arjovsky",
"vincent dumoulin",
"aaron courville"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex lamb",
"anirudh goyal",
"ying zhang",
"saizheng zhang",
"aaron courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yishay mansour",
"mehryar mohri",
"afshin rostamizadeh"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mehryar mohri",
"andres muñoz medina"
],
"affiliation": [
{
"laboratory": "",
"institution": "Courant Institute of Mathematical Sciences",
"location": "{'settlement': 'New York', 'region': 'NY'}"
},
{
"laboratory": "",
"institution": "Courant Institute of Mathematical Sciences",
"location": "{'settlement': 'New York', 'region': 'NY'}"
}
]
},
{
"name": [
"bharath k sriperumbudur",
"gert r g lanckriet"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"shakir mohamed",
"balaji lakshminarayanan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nal kalchbrenner",
"google deepmind"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alec radford",
"luke metz",
"soumith chintala"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tim salimans",
"ian goodfellow",
"wojciech zaremba",
"vicki cheung",
"alec radford",
"xi chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"lucas theis",
"matthias bethge"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"olivier bousquet",
"sylvain gelly",
"ilya tolstikhin",
"carl-johann simon-gabriel",
"bernhard schölkopf",
"google brain"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"corinna cortes",
"vitaly kuznetsov",
"mehryar mohri",
"scott yang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Courant Institute",
"location": "{'postCode': '10012', 'settlement': 'New York', 'region': 'NY'}"
}
]
},
{
"name": [
"kevin r moon",
"alfred o hero"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'settlement': 'Ann Arbor', 'region': 'MI'}"
},
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'settlement': 'Ann Arbor', 'region': 'MI'}"
}
]
},
{
"name": [
"remi denton",
"sam gross",
"rob fergus"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xuanlong nguyen",
"martin j wainwright",
"michael i jordan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sashank j reddi",
"aaditya ramdas",
"barnabas poczos",
"aarti singh",
"larry wasserman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mark d reid",
"robert c williamson"
],
"affiliation": [
{
"laboratory": "",
"institution": "Australian National University",
"location": "{'postCode': 'ACT 0200', 'settlement': 'Canberra', 'country': 'Australia'}"
},
{
"laboratory": "NICTA Canberra ACT 0200",
"institution": "Australian National University",
"location": "{'country': 'Australia'}"
}
]
},
{
"name": [
"kevin roth",
"aurelien lucchi",
"sebastian nowozin",
"thomas hofmann"
],
"affiliation": [
{
"laboratory": "",
"institution": "ETH Zürich",
"location": "{}"
},
{
"laboratory": "",
"institution": "ETH Zürich",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research Cambridge",
"location": "{'country': 'UK'}"
},
{
"laboratory": "",
"institution": "ETH Zürich",
"location": "{}"
}
]
},
{
"name": [
"avraham ruderman",
"mark d reid",
"james petterson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Australian National University and NICTA",
"location": "{'settlement': 'Canberra', 'country': 'Australia'}"
},
{
"laboratory": "",
"institution": "NICTA",
"location": "{'settlement': 'Canberra', 'country': 'Australia'}"
}
]
}
],
"arxiv_id": [
"",
"",
"1705.10743v1",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.407407 | 0.583333 | null | null | null | null | null | rkEtzzWAb |
||
tsuzuku|variancebased_gradient_compression_for_efficient_distributed_deep_learning|ICLR_cc_2018_Conference | Variance-based Gradient Compression for Efficient Distributed Deep Learning | Due to the substantial computational cost, training state-of-the-art deep neural networks for large-scale datasets often requires distributed training using multiple computation workers. However, by nature, workers need to frequently communicate gradients, causing severe bottlenecks, especially on lower bandwidth connections. A few methods have been proposed to compress gradients for efficient communication, but they either suffer from a low compression ratio or significantly harm the resulting model accuracy, particularly when applied to convolutional neural networks. To address these issues, we propose a method to reduce the communication overhead of distributed deep learning. Our key observation is that gradient updates can be delayed until an unambiguous (high amplitude, low variance) gradient has been calculated. We also present an efficient algorithm to compute the variance and prove that it can be obtained with negligible additional cost. We experimentally show that our method can achieve a very high compression ratio while maintaining the resulting model accuracy. We also analyze the efficiency using computation and communication cost models and provide evidence that this method enables distributed deep learning for many scenarios with commodity environments. | {
"name": [],
"affiliation": []
} | A new algorithm to reduce the communication overhead of distributed deep learning by distinguishing ‘unambiguous’ gradients. | [
"distributed deep learning",
"gradient compression",
"collective communication",
"data parallel distributed sgd",
"image classification"
] | null | 2018-02-15 22:29:31 | 18 | null | null | null | null | null | null | null | null | false | The reviewers find the gradient compression approach novel and interesting, but they find the empirical evaluation not fully satisfactory. Some aspects of the paper have improved with the feedback from the reviewers, but because of the domain of the paper, experimental evaluation is very important. I recommend improving the experiments by incorporating the reviewers' comments. | {
"review_id": [
"rkZd9y9xz",
"ByqfOWqlM",
"B1O_32YeM"
],
"review": [
{
"title": "title: Ok but not good enough",
"paper_summary": null,
"main_review": "main_review: The paper proposes a novel way of compressing gradient updates for distributed SGD, in order to speed up overall execution. While the technique is novel as far as I know (eq. (1) in particular), many details in the paper are poorly explained (I am unable to understand) and experimental results do not demonstrate that the problem targeted is actually alleviated.\n\nMore detailed remarks:\n1: Motivating with ImageNet taking over a week to train seems misplaced when we have papers claiming to train ImageNet in 1 hour, 24 mins, 15 mins...\n4.1: Lemma 4.1 seems like you want B > 1, or clarify definition of V_B.\n4.2: This section is not fully comprehensible to me.\n- It seems you are confusingly overloading the term gradient and words derived (also in other parts or the paper). What is \"maximum value of gradients in a matrix\"? Make sure to use something else, when talking about individual elements of a vector (which is constructed as an average of gradients), etc.\n- Rounding: do you use deterministic or random rounding? Do you then again store the inaccuracy?\n- I don't understand definition of d. It seems you subtract logarithm of a gradient from a scalar.\n- In total, I really don't know what is the object that actually gets communicated, and consequently when you remark that this can be combined with QSGD and the more below it, I don't understand it. This section has to be thoroughly explained, perhaps with some illustrative examples.\n4.3: allgatherv remark: does that mean that this approach would not scale well to higher number of workers?\n4.4: Remarks about quantization and mantissa manipulation are not clear to me again, or what is the point in doing so. Possible because the problems above.\n5: I think this section is not too useful unless you can accompany it with actual efficient implementation and contrast the practical performance. \n6: Given that I don't understand how you compress the information being communicated, it is hard to believe the utility of the method. The objective was to speed up training time because communication is bottleneck. If you provide 12,000x compression, is it any more practically useful than providing 120x compression? What would be the difference in runtime? Such questions are never discussed. Further, if in the implementation you discuss masking mantissa, I have serious concern about whether the compression protocol is feasible to implement efficiently, without writing some extremely low-level code. I think the soundness of work addressing this particular problem is damaged if not implemented properly (compared to other kinds of works in current ML related research). Therefore I highly recommend including proper time comparison with a baseline in the future.\nFurther, I don't understand 2 things about the Tables. a) how do you combine the proposed method with Momentum in SGD? This is not discussed as far as I can see. b) What is \"QSGD, 2bit\" If I remember QSGD protocol correctly, there's no natural mapping of 2bit to its parameters.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Simple yet efficient new algorithm for gradient compression with good performance.",
"paper_summary": null,
"main_review": "main_review: The authors propose a new gradient compression method for efficient distributed training of neural networks. The authors propose a novel way of measuring ambiguity based on the variance of the gradients. In the experiment, the proposed method shows no or slight degradation of accuracy with big savings in communication cost. The proposed method can easily be combined with other existing method, i.e., Storm (2015), based on the absolute value of the gradient and shows further efficiency. \n\nThe paper is well written: clear and easy to understand. The proposed method is simple yet powerful. Particularly, I found it interesting to re-evaluate the variance with (virtually) increasing larger batch size. The performance shown in the experiments is also impressive. \n\nI found it would have also been interesting and helpful to define and show a new metric that incorporates both accuracy and compression rate into a single metric, e.g., how much accuracy is lost (or gained) per compression rate relatively to the baseline of no compression. With this metric, the comparison would be easier and more intuitive. \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: The idea to adopt approximated variances of gradients to reduce communication cost seems to be interesting. However, there also exist several major issues in the paper.",
"paper_summary": null,
"main_review": "main_review: This paper proposes a variance-based gradient compression method to reduce the communication overhead of distributed deep learning. Experiments on real datasets are used for evaluation. \n\nThe idea to adopt approximated variances of gradients to reduce communication cost seems to be interesting. However, there also exist several major issues in the paper.\n\nFirstly, the authors propose to combine two components to reduce communication cost, one being variance-based gradient compression and the other being quantization and parameter encoding. But the contributions of these two components are not separately analyzed or empirically verified. \n\nSecondly, the experimental results are unconvincing. The accuracy of Momentum SGD for ‘Strom, \\tau=0.01’ on CIFAR-10 is only 10.6%. Obviously, the learning procedure is not convergent. It is highly possible that the authors do not choose a good hyper-parameter. Furthermore, the proposed method (not the hybrid) is not necessarily better than Strom except for the case of Momentum SGD on CIFAR-10. Please note that the case of Momentum SGD on CIFAR-10 may have a problematic experimental setting for Strom. In addition, it is weird that the experiment on ImageNet does not adopt the same setting as that on CIFAR-10 to evaluate both Adam and Momentum SGD. \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.6666666865348816,
0.5555555820465088
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Thank you for your response.",
"Thank you for your review.",
"Thank you for your review. (2)",
"Eratta",
"Thank you for your review.",
"Thank you for your review.",
"Thanks for update"
],
"comment": [
"First of all, thank you for reading our response and giving us an additional comment. To support our arguments, we estimate actual speedup by variance-based compression using micro-benchmarks of communication over slow interconnection.\nFirst, we measured computation and communication time for training ResNet50 on 16 nodes using Infiniband. It took 302.72 ms for computation and 69.95 ms for communication for each iteration in average. We note that ResNet50 contains about 102 MB of parameters.\nNext, we measured communication time of allreduce without compression and that of allgatherv with compression. We used 16 t2.micro instances of AWS Its point-to-point bandwidth was about 100MB/s. We used OSU micro-benchmarks (http://mvapich.cse.ohio-state.edu/benchmarks/) for the measurements. Summary of the result is the following:\n--\nallreduce\ncompression | Avg elapsed time (ms)\n1 | 9,572.95\n--\nallgatherv\ncompression | Avg elapsed time (ms)\n10 | 3,440.70\n100 | 314.17\n1,000 | 30.09\n10,000 | 4.26\n--\nWith this result, we can see that communication takes longer time compared to usual computation time even with 100x compression. Thus, we can say that even with only 16 nodes, compression ratio over a hundred is desirable to achieve high scalability. In use cases with more nodes, communication will take longer and thousands of times of compression will help. We hope this addresses your concern.",
"Thanks for the review. We're glad to hear that you found our technique to be novel. We've amended our paper in light of your review. We hope this helps explain the details and demonstrate how our technique alleviates the problem of transmitting gradients between nodes.\n\nSection 1\n> Motivating with ImageNet taking over a week to train seems misplaced when we have papers claiming to train ImageNet in 1 hour, 24 mins, 15 mins...\nThanks for the comment. We have amended our paper with the following:\n‘’’For example, it takes over a week to train ResNet-50 on the ImageNet dataset if using a single GPU. … For example, when using 1000BASE-T Ethernet, communication takes at least ten times longer than forward and backward computation for ResNet-50, making multiple nodes impractical. High performance interconnections such as InfiniBand and Omni-Path are an order of magnitude more expensive than commodity interconnections, which limits research and development of deep learning using large-scale datasets to a small number of researchers.’’’\n\n> 4.1: Lemma 4.1 seems like you want B > 1, or clarify definition of V_B.\nCorrect. We have amended our paper with the following:\n‘’’Lemma 4.1\n A sufficient condition that a vector -g is a descent direction is\n \\|g - \\nabla f(x)\\|_2^2 < \\|g\\|_2^2.\nWe are interested in the case of g = \\nabla f_B(x), the gradient vector of the loss function over B.\nBy the weak law of large numbers, when B > 1, the left-hand side with g = \\nabla f_B(x) can be estimated as follows.’’’\nNote, \\nabla_B f(x) in lemma 4.1 of our first paper was replaced with a symbol g.\n\n> 4.2: This section is not fully comprehensible to me.\n> - It seems you are confusingly overloading the term gradient and words derived (also in other parts or the paper). What is \"maximum value of gradients in a matrix\"? Make sure to use something else, when talking about individual elements of a vector (which is constructed as an average of gradients), etc.\nYou’re right. We amended the paper to replace gradient with ‘gradient element’ when we refer elements of gradient vectors.\n‘’’Our quantization except for the sign bit is as follows. For a weight matrix W_k (or a weight tensor in CNN), there is a group of gradient elements corresponding to the matrix. Let M_k be the maximum absolute value in the group.’’’\n\n> - Rounding: do you use deterministic or random rounding? Do you then again store the inaccuracy?\nGood questions. We amended our paper as follows:\nSec 4.2’’’... We do not adopt stochastic rounding like QSGD nor accumulate rounding error g_i - g'_i for the next batch because this simple rounding does not harm accuracy empirically.’’’\n\n> - I don't understand definition of d. It seems you subtract logarithm of a gradient from a scalar.\nd is a difference of two scalars.\n> In total, I really don't know what is the object that actually gets communicated, and consequently when you remark that this can be combined with QSGD and the more below it, I don't understand it. This section has to be thoroughly explained, perhaps with some illustrative examples.\nWe hope our clarification between ‘gradient’ and ‘gradient element’ made the definition of d clearer. We amended our paper as follows:\nSec 4.2‘’’... After deciding which gradient elements to send, each worker sends pairs of a value of a gradient element and its parameter index …’’’\nSec 4.2’’’... 
Because the variance-based sparsification method described in subsection 4.1 is orthogonal to the quantization shown above, we can reduce communication cost further using sparsity promoting quantization methods such as QSGD instead.’’’\n\n> 4.3: allgatherv remark: does that mean that this approach would not scale well to higher number of workers?\nIt does scale well. We amended our paper with the following:\nSec 4.3’’’Thanks to the high compression ratio possible with this algorithm in combination with other compression methods, even large numbers of workers can be supported.’’’\n\n> 4.4: Remarks about quantization and mantissa manipulation are not clear to me again, or what is the point in doing so. Possible because the problems above.\nTo make a point of the mantissa operations clear, we amended our paper with the following:\n‘’’The quantization of parameters described in subsection 4.2 can also be efficiently implemented with the standard binary floating point representation using only binary operations and integer arithmetic as follows.’’’",
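For reference, a short derivation, not part of the original response, of why the condition quoted in Lemma 4.1 above makes -g a descent direction (assuming the true gradient is nonzero):

```latex
\|g - \nabla f(x)\|_2^2 < \|g\|_2^2
\;\Longleftrightarrow\;
\|g\|_2^2 - 2\langle g, \nabla f(x)\rangle + \|\nabla f(x)\|_2^2 < \|g\|_2^2
\;\Longleftrightarrow\;
\langle g, \nabla f(x)\rangle > \tfrac{1}{2}\,\|\nabla f(x)\|_2^2 > 0,
```

so the directional derivative of f along -g, namely \langle \nabla f(x), -g \rangle, is strictly negative.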
"> 5: I think this section is not too useful unless you can accompany it with actual efficient implementation and contrast the practical performance. \nYes, we would like to be able to do this comparison. We amended the paper to include this at the beginning of Section 5 -- Performance Analysis:\n‘’’Because common deep learning libraries do not currently support access to gradients of each sample, it is difficult to contrast practical performance of an efficient implementation in the commonly used software environment.In light of this, we estimate speedup of each iteration by gradient compression with a performance model of communication and computation.’’’\n\n> If you provide 12,000x compression, is it any more practically useful than providing 120x compression?\nYes, we believe it can be practically useful, depending on the underlying computation infrastructure. With existing compression methods, computation with a large number of nodes essentially requires high bandwidth connections like InfiniBand. Much higher levels of compression make it possible to consider large numbers of nodes even with commodity-level bandwidth connections. We amended our paper with the following:\nSec 6.1’’’The hybrid algorithm's compression ratio is several orders higher than existing compression methods with a low reduction in accuracy. This indicates the algorithm can make computation with a large number of nodes feasible on commodity level infrastructure that would have previously required high-end interconnections.’’’\nSec 6.2’’’In this example as well as the previous CIFAR10 example, Variance-based Gradient Compression shows a significantly higher compression ratio, with comparable accuracy. While in this case, Strom's method's accuracy was comparable with no compression, given the significant accuracy degradation with Strom's method on CIFAR10, it appears Variance-based Gradient Compression provides a more robust solution.’’’\n\n> Further, if in the implementation you discuss masking mantissa, I have serious concern about whether the compression protocol is feasible to implement efficiently, without writing some extremely low-level code. \nYes, low-level code would be required to use our method. It is also true for other existing methods.\n\n> Therefore I highly recommend including proper time comparison with a baseline in the future. Once parameter variance is provided within one of the standard calculation libraries of primitives for neural deep neural networks, this time comparison can be done.\n\n> a) how do you combine the proposed method with Momentum in SGD? This is not discussed as far as I can see. \nWe amended our paper with the following:\nSec. 4.1 ‘’’... In the combination with optimization methods like Momentum SGD, gradient elements not sent are assumed to be equal to zero.’’’\n\n> b) What is \"QSGD, 2bit\" If I remember QSGD protocol correctly, there's no natural mapping of 2bit to its parameters.\nThank you for your comment. We misunderstood meaning ‘bit” used in experiment section of original QSGD paper. We asked the authors at NIPS, and we reran experiments. We amended our paper as follows:\nSec 6.1’’’We used two's complement in implementation of QSGD and \"bit\" represents the number of bits used to represent each element of gradients. \"d\" represents a bucket size.’’’",
"We found that we used mistakenly smaller \\zeta for our algorithm than the value specified in our paper, and thus we reran experiments and updated experimental results.\nWe also found inconsistency of our setting for QSGD with its original paper, and we corrected the experimental results.",
"Thank you for your review and helpful suggestion.\nWe tried to make a new single metric, however, we are not sure how to combine accuracy and compression ratio as they are not directly comparable.\nTo make a comparison between methods more intuitive, we added scatter plots of accuracy and compression ratio in Appendix C.",
"Thank you for your review. We are glad to hear that you found our algorithm interesting.\n\n> But the contributions of these two components are not separately analyzed or empirically verified. \nThank you for your comment. The main contribution is intended to be the variance-based gradient compression, with the quantization provided as a way to fit both values of gradient elements and its index in 32-bit while not rounding many elements to zero. We amended our paper with the following:\nSec 4.2’’’To allow for comparison with other compression methods, we propose a basic quantization process. …’’’\n\n> The accuracy of Momentum SGD for ‘Strom, \\tau=0.01’ on CIFAR-10 is only 10.6%. Obviously, the learning procedure is not convergent. It is highly possible that the authors do not choose a good hyper-parameter.\nThank you for your comment. We amended our paper with the following:\nSec.6.1’’’We note that we observed unstable behaviors with other thresholds around 0.01.’’’\nAppendix D’’’The code is available in examples of Chainer on GitHub.’’’\n\n> Furthermore, the proposed method (not the hybrid) is not necessarily better than Strom except for the case of Momentum SGD on CIFAR-10. Please note that the case of Momentum SGD on CIFAR-10 may have a problematic experimental setting for Strom.\nThank you for your comment. We amended our paper with the following:\nSec. 6.1’’’We also would like to mention the difficulty of hyperparameter tuning in Strom's method. … On the other hand, our algorithm is free from such problem. Moreover, when we know good threshold for Strom's algorithm, we can just combine ours to get further compression.’’’\n\n> In addition, it is weird that the experiment on ImageNet does not adopt the same setting as that on CIFAR-10 to evaluate both Adam and Momentum SGD.\nThank you for your comment. We amended our paper with the following:\nSec 6.2 ‘’’We also evaluated algorithms with replacing MomentumSGD and its learning rate scheduling to Adam with its default hyperparameter.’’’",
"After seeing your response, and reviews of other reviewers, my opinion is still that this is an interesting work, but more needs to be done to publish it.\n\nIn particular, you propose something that you show is an interesting thing to do, but you do not demonstrate that this is actually a useful thing to do. This is very important difference for the specific problem you try to address. Comments such as \"yes, we believe it can be practically useful\" are in my opinion deeply insufficient, and the belief should be explicitly captured in experimental results. This is what I would suggest to focus on in a revision."
]
} | {
"paperhash": [
"aji|sparse_communication_for_distributed_gradient_descent",
"alistarh|communication-efficient_stochastic_gradient_descent,_with_applications_to_neural_networks",
"ba|adam:_a_method_for_stochastic_optimization",
"de|automated_inference_with_adaptive_batches",
"dryden|communication_quantization_for_data-parallel_training_of_deep_neural_networks",
"goyal|accurate,_large_minibatch_sgd:_training_imagenet_in_1_hour",
"he|deep_residual_learning_for_image_recognition",
"krizhevsky|learning_multiple_layers_of_features_from_tiny_images",
"russakovsky|imagenet_large_scale_visual_recognition_challenge",
"seide|1-bit_stochastic_gradient_descent_and_application_to_data-parallel_distributed_training_of_speech_dnns",
"simonyan|very_deep_convolutional_networks_for_large-scale_image_recognition",
"strom|scalable_distributed_dnn_training_using_commodity_gpu_cloud_computing",
"sutskever|on_the_importance_of_initialization_and_momentum_in_deep_learning",
"szegedy|going_deeper_with_convolutions",
"thakur|optimization_of_collective_communication_operations_in_mpich",
"tokui|chainer:_a_next-generation_open_source_framework_for_deep_learning",
"träff|a_simple,_pipelined_algorithm_for_large,_irregular_all-gather_problems",
"wen|terngrad:_ternary_gradients_to_reduce_communication_in_distributed_deep_learning"
],
"title": [
"Sparse communication for distributed gradient descent",
"Communication-efficient stochastic gradient descent, with applications to neural networks",
"Adam: A method for stochastic optimization",
"Automated inference with adaptive batches",
"Communication quantization for data-parallel training of deep neural networks",
"Accurate, large minibatch SGD: training imagenet in 1 hour",
"Deep residual learning for image recognition",
"Learning multiple layers of features from tiny images",
"Imagenet large scale visual recognition challenge",
"1-bit stochastic gradient descent and application to data-parallel distributed training of speech DNNs",
"Very deep convolutional networks for large-scale image recognition",
"Scalable distributed DNN training using commodity GPU cloud computing",
"On the importance of initialization and momentum in deep learning",
"Going deeper with convolutions",
"Optimization of collective communication operations in MPICH",
"Chainer: a next-generation open source framework for deep learning",
"A simple, pipelined algorithm for large, irregular all-gather problems",
"TernGrad: Ternary gradients to reduce communication in distributed deep learning"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"a f aji",
"k heafield"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"d alistarh",
"d grubic",
"j li",
"r tomioka",
"m vojnovic"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"j ba",
"d kingma"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"s de",
"a yadav",
"d jacobs",
"t goldstein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"n dryden",
"s a jacobs",
"t moon",
"b van essen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"p goyal",
"p dollár",
"r b girshick",
"p noordhuis",
"l wesolowski",
"a kyrola",
"a tulloch",
"y jia",
"k he"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"k he",
"x zhang",
"s ren",
"j sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"a krizhevsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"o russakovsky",
"j deng",
"h su",
"j krause",
"s satheesh",
"s ma",
"z huang",
"a karpathy",
"a khosla",
"m bernstein",
"a c berg",
"l fei-fei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"f seide",
"h fu",
"j droppo",
"g li",
"d yu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"k simonyan",
"a zisserman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"n strom"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"i sutskever",
"j martens",
"g dahl",
"g hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"c szegedy",
"w liu",
"y jia",
"p sermanet",
"s reed",
"d anguelov",
"d erhan",
"v vanhoucke",
"a rabinovich"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"r thakur",
"r rabenseifner",
"w gropp"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"s tokui",
"k oono",
"s hido",
"j clayton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"j l träff",
"a ripke",
"c siebert",
"p balaji",
"r thakur",
"w gropp"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"w wen",
"c xu",
"f yan",
"c wu",
"y wang",
"y chen",
"h li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1704.05021v2",
"",
"1412.6980v9",
"",
"",
"1706.02677v2",
"1512.03385v1",
"",
"1409.0575v3",
"",
"",
"",
"",
"1409.4842v1",
"",
"",
"",
"1705.07878v6"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.518519 | 0.75 | null | null | null | null | null | rkEfPeZRb |
||
strauss|ensemble_methods_as_a_defense_to_adversarial_perturbations_against_deep_neural_networks|ICLR_cc_2018_Conference | 1709.03423v2 | Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks | Deep learning has become the state of the art approach in many machine learning problems such as classification. It has recently been shown that deep learning is highly vulnerable to adversarial perturbations. Taking the camera systems of self-driving cars as an example, small adversarial perturbations can cause the system to make errors in important tasks, such as classifying traffic signs or detecting pedestrians. Hence, in order to use deep learning without safety concerns a proper defense strategy is required. We propose to use ensemble methods as a defense strategy against adversarial perturbations. We find that an attack leading one model to misclassify does not imply the same for other networks performing the same task. This makes ensemble methods an attractive defense strategy against adversarial attacks. We empirically show for the MNIST and the CIFAR-10 data sets that ensemble methods not only improve the accuracy of neural networks on test data but also increase their robustness against adversarial perturbations. | {
"name": [],
"affiliation": []
} | null | [
"Mathematics",
"Computer Science"
] | arXiv.org | 2017-09-11 | 26 | null | null | null | null | null | null | null | null | false | The paper empirically evaluates the effectiveness of ensembles of deep networks against adversarial examples. The paper adds little to the existing literature in this area: an detailed study on "ensemble adversarial training" already exists, and the experimental evaluation in this paper is limited to MNIST and CIFAR (results on those datasets do not necessarily transfer very well to much higher-dimensional datasets such as ImageNet). Moreover, the reviewers identify several shortcomings in the experimental setup of the paper. | {
"review_id": [
"r1k7g8Oxz",
"By2d_sBef",
"B1fsb6bWz"
],
"review": [
{
"title": "title: A solid and effective idea, but a limited analysis. ",
"paper_summary": null,
"main_review": "main_review: Summary: This paper proposes to use ensembling as an adversarial defense mechanism. The defense is evaluated on MNIST and CIFAR10 ans shows reasonable performance against FGSM and BIM.\n\nClarity: The paper is clearly written and easy to follow. \n\nOriginality: Building an ensemble of models is a well-studied strategy that was shown long ago to improve generalization. As far as I know, this paper is however the first to empirically study the robustness of ensembles against adversarial examples. \n\nQuality: While this paper contributes to show that ensembling works reasonably well against adversarial examples, I find the contribution limited in general.\n- The method is not compared against other adversarial defenses. \n- The results illustrate that adding Gaussian noise on the training data clearly outperforms the other considered ensembling strategies. However, the authors do not go beyond this observation and do not appear to try to understand why it is the case. \n- Similarly, the Bagging strategy is shown to perform reasonably well (although it appears as a weaker strategy than Gaussian noise) but no further analysis is carried out. For instance, it is known that the reduction of variance is maximal in an ensemble when its constituents are maximally decorrelated. It would be worth studying more systematically if this correlation (or 'diversity') has an effect on the robustness against adversarial examples. \n- I didn't understand the motivation behind considering two distinct gradient estimators. Why deriving the exact gradient of an ensemble is more complicated?\n\nPros: \n- Simple and effective strategy.\n- Clearly written paper. \nCons:\n- Not compared against other defenses.\n- Limited analysis of the results. \n- Ensembling neural networks is very costly in terms of training. This should be considered.\n\nOverall, this paper presents an interesting and promising direction of research. However, I find the current analysis (empirically and/or theoretically) to be too limited to constitutes a solid enough piece of work. For this reason, I do not recommend this paper for acceptance. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A simple technique missing comparison with related ones.",
"paper_summary": null,
"main_review": "main_review: This paper describes the use of ensemble methods to improve the robustness of neural networks to adversarial examples. Adversarial examples are images that have been slightly modified (e.g. by adding some small perturbation) so that the neural network will predict a wrong class label.\n\nEnsemble methods have been used by the machine learning community since long time ago to provide more robust and accurate predictions.\n\nIn this paper the authors explore their use to increase the robustness of neural networks to adversarial examples.\n\nDifferent ensembles of 10 neural networks are considered. These include techniques such as bagging or injecting noise in the \ntraining data. \n\nThe results obtained show that ensemble methods can sometimes significantly improve the robustness against adversarial examples. However,\nthe performance of the ensemble is also highly deteriorated by these examples, although not as much as the one of a single neural network.\n\nThe paper is clearly written.\n\nI think that this is an interesting paper for the deep learning community showing the benefits of ensemble methods against adversarial\nexamples. My main concern with this paper is the lack of comparison with alternate techniques to increase the robustness against adversarial examples. The authors should have compared with the methods described in:\n\n(Goodfellow et al., 2014; Papernot et al., 2016c), \n(Papernot et al., 2016d) \n(Gu & Rigazio, 2014)\n\nFurthermore, the ensemble approach has the main disadvantage of increasing the prediction time by a lot. For example, with 10 elements in the ensemble, predictions are 10 times more expensive.\n------------------------------\nI have read the updated version of the paper. I think the authors have done a good job comparing with related techniques. Therefore, I have slightly increased my score.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Simple but effective idea, light in the presentation and experimentations ",
"paper_summary": null,
"main_review": "main_review: In this manuscript, the authors empirically investigated the robustness of some different deep neural networks ensembles to two types of attacks, namely FGSM and BIM, on two popular datasets, MNIST and CIFAR10. The authors concluded that the ensembles are more accurate on both clean and adversaries samples than a single deep neural network. Therefore, the ensembles are more robust in terms of the ability to correctly classify the adversary attacks.\n\nAs the authors stated, an attack that is designed to fool one network does not necessarily fool the other networks in the same way. This is likely why ensembles appear more robust than single deep learners. However, robustness of ensembles to the white-box attacks that are generated from the ensemble is still low for FGS. Generally speaking, although FGS attacks generated from one network can fool less the whole ensembles, generating FGS adversaries from a given ensemble is still able to effectively fool it. Therefore, if the attacker has access to the ensemble or even know the classification system based on that ensemble, then the ensemble-based system is still vulnerable to the attacks generated specifically from it. Simple ensemble methods are not likely to confer significant robustness gains against adversaries.\n\nIn contrast to FGS results, surprisingly BIM-Grad1 is able to fool more the ensemble than BIM-Grad2. Therefore, it seems that if the attacker makes BIM adversaries from only a single classifier, then she can simply and yet effectively mislead the whole ensemble. In comparison to BIM-Grad2, BIM-Grad1 results show that BIM attacks from one network (BIM-Grad1) can more successfully fool the other different networks in the ensembles in a similar way! BIM-Grad2 is not that much able to fool the ensemble-based system even this attack generated from the ensemble (white-box attacks). In order to confirm the robustness of the ensembles to BIM attacks, the authors can do more experiments by generating BIM-Grad2 attacks with higher number of iterations.\n\nIndeed, the low number of iterations might cause the lower rate of success for generating adversaries by BIM-Grad2. In fact, BIM adversaries from the ensembles might require more number of iterations to effectively fool the majority of the members in the ensembles. Therefore, increasing the number of iterations can increase the successful rate of generating BIM-Average Grad2 adversaries. Note that in this case, it is recommended to compare the amount of distortion (perturbation) with different number of iterations in order to indicate the effectiveness of the ensembles to white-box BIM attacks.\n\nDespite to averaging the output probabilities to compute the ensemble final prediction, the authors generated the adversaries from the ensemble by computing the sum of the gradients of the classifiers loss. A proper approach would have been to average of these gradients. The fact the sum is not divided by the number of members (i.e., sum of gradients instead of average of gradients) is increasing the step size of the adversarial method proportionally to the ensemble size, raising questions on the validity of the comparison with the single-model adversarial generation.\n\nOverall, I found the paper as having several methodological flaws in the experimental part, and rather light in terms of novel ideas. As noticed in the introduction, the idea of using ensemble for enhancing robustness as already been proposed. 
Making a paper only to restate it, is too light for acceptation. Moreover, experimental setup using a lot of space for comparing results on standard datasets (i.e., MNIST and CIFAR10), even with long presentation of these datasets. Several issues are raised in the current experiments and require adjustments. Experiments should also be more elaborated to make the case stronger, following at least some of indications provided. \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.6666666865348816,
0.3333333432674408
],
"confidence": [
0.5,
0.5,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Acknowledgements revision, December 18",
"Thank you very much for your constructive feedback",
"We greatly appreciate your insightful feedback",
"We highly appreciate your feedback"
],
"comment": [
"We would like to acknowledge that we uploaded a revised version of our paper on December 18. Those changes where motivated by the comments of the reviewers and can be summarized in:\n\n1. We added a subsection and a table to compare our method against other popular methods (Adversarial Training and Defensive Distillation).\n2. We added a paragraph describing the advantages as well as the disadvantages of using ensemble methods as defense method.\n3. We reformulated a few sentenses and fixed a few typos.\n\nWe described all this changes in our responses to the reviewers on December 18. ",
"Thank you very much for your constructive feedback and your valuable comments. \n\nWe first like to respond to your last comment about originality of our work: To the best of our knowledge, this is the first paper to empirically evaluate the robustness of ensembles against adversarial attacks. The first paper we cited in this context was about how to build ensembles of specialist defenses (classifiers that classify on a subset of classes only and then are joined to be able to predict all classes. The focus is rather on how to build these specialist classifiers.) and the second paper showed an attack on how to break such specialist defenses. However, we did not find any paper that considered general ensemble methods as defense mechanism and analyzed what kind of ensembles are more robust.\n\nWe agree with you in terms that by attacking with FGSM in combination with Adv. 2 and BIM in combination with Adv. 1 one obtains the strongest attack against ensembles, which we also wrote in the experimental part of our paper. \n\nTo your comment that the accuracy on attacked ensembles is relatively low we would like to highlight that all images were scaled to the unit interval [0,1]. Hence, for example the BIM attack on MNIST could make a maximum distortion of 20% at each pixel and in CIFAR-10 case up to 2%. We added a comparison of our method with other defense methods (defensive distillation and adversarial training) on the same kind of attacks to show the effectiveness of ensembles.\n\nIn respect to running BIM-Grad. 2 attacks with more iterations: You are correct that by increasing the number of iterations one can get somewhat better attacks (however this comes at a significant increase of computational cost). Nevertheless, we had to fix the parameters for our evaluations. Note that in the BIM attack the values are clipped to be in an epsilon neighborhood of the true image. This might be why running the attacks for more iterations has no major effect.\n\nYou mentioned that in Grad. 2 the average of the gradients might be better than the sum of the gradients. Here, we like to point out that in both the FGSM and the BIM attack one always computes the sign function of the gradients and sign(\\sum(gradients)) = sign(\\average(gradients)). However, we agree with you that the average is the correct gradient for our weighting system (even though it results in the very same FGSM and BIM attacks). Hence, we changed our manuscript accordingly.\n\nTo your final comment about the experiment part: we added a comparison of our method with “defensive distillation” (Papernot et al., 2016d) and “adversarial training” (Goodfellow et al., 2014; Papernot et al., 2016c) to make our case stronger.\n",
"We greatly appreciate your insightful feedback. We would like to respond to your comments concerning quality:\n\n1.\tWe added a comparison with other defense methods, specifically with “adversarial training” (Goodfellow et al., 2014; Papernot et al., 2016c) and with “defensive distillation” (Papernot et al., 2016d).\n\n2.\tIt is true that adding Gaussian noise produced the best defense strategy, however this came at a cost of a reduction in accuracy on (unperturbed) test data of about 7% in the Cifar-10 case. That is why we considered Bagging as the better method: it might be a little worse on adversarial perturbed data but better than the Gaussian noise case on unperturbed test data. It is our believe that in real applications unperturbed data is the standard case and adversarial attacked data is a special event. Hence, loosing accuracy on test data can be quite problematic. \n\n3.\tThanks for your idea of evaluating the effect of diversity of the classifiers on the defensive performance of the ensembles. We think that this is worth looking into. But we believe, this would go beyond the scope of our manuscript.\n\n4.\tThe objective of using Grad. 1 was to study the transferability of an attack of one classifier to all classifiers in the ensemble. As we mentioned in the paper, Grad. 2 represents the correct gradient to attack an ensemble.\n\n5.\tWe agree with you that computing Grad. 1 is no more complicated than computing Grad. 2. Hence, we changed the corresponding sentences accordingly.\n\nYou are correct about the increased computational costs when using ensembles. We therefore added a new paragraph were we highlight the advantages of using ensembles as well as the disadvantages (like an increase of computational costs and memory requirements). The advantage section includes especially the increase in accuracy on unperturbated test data while still performing well against adversaries.\n",
"We highly appreciate your feedback.\n\nIn respect to your concerns about the comparability with other methods: We added a section where we compare ensembles with “adversarial learning” (Goodfellow et al., 2014; Papernot et al., 2016c) and with “defensive distillation” (Papernot et al., 2016d). We hope that this resolves your concerns about comparability. Note, we did not compare with (Gu & Rigazio, 2014), due to the non-trivial parameter choices required by this method (particularly the choice of the network architecture).\n\nWe agree with you that the increased computational time when using ensembles should be mentioned in the paper. Hence, we added a new paragraph to the manuscript about the advantages and disadvantages of using ensembles including topics like prediction time, memory requirements, but also higher accuracy on unperturbed test data (we found this is one of the main advantages of ensembles over other defense methods).\n"
]
} | {
"paperhash": [
"he|adversarial_example_defenses:_ensembles_of_weak_defenses_are_not_strong",
"tramèr|ensemble_adversarial_training:_attacks_and_defenses",
"feinman|detecting_adversarial_samples_from_artifacts",
"abbasi|robustness_to_adversarial_examples_through_an_ensemble_of_specialists",
"metzen|on_detecting_adversarial_perturbations",
"goodfellow|cleverhans_v0.1:_an_adversarial_machine_learning_library",
"papernot|technical_report_on_the_cleverhans_v2.1.0_adversarial_examples_library",
"carlini|towards_evaluating_the_robustness_of_neural_networks",
"kurakin|adversarial_examples_in_the_physical_world",
"papernot|practical_black-box_attacks_against_machine_learning",
"papernot|practical_black-box_attacks_against_deep_learning_systems_using_adversarial_examples",
"papernot|the_limitations_of_deep_learning_in_adversarial_settings",
"moosavi-dezfooli|deepfool:_a_simple_and_accurate_method_to_fool_deep_neural_networks",
"papernot|distillation_as_a_defense_to_adversarial_perturbations_against_deep_neural_networks",
"goodfellow|explaining_and_harnessing_adversarial_examples",
"graham|fractional_max-pooling",
"gu|towards_deep_neural_network_architectures_robust_to_adversarial_examples",
"szegedy|intriguing_properties_of_neural_networks",
"krizhevsky|imagenet_classification_with_deep_convolutional_neural_networks",
"dahl|context-dependent_pre-trained_deep_neural_networks_for_large-vocabulary_speech_recognition",
"sermanet|traffic_sign_recognition_with_multi-scale_convolutional_networks",
"kussul|neural_network_with_ensembles",
"dietterich|multiple_classifier_systems",
"breiman|bagging_predictors",
"kolen|backpropagation_is_sensitive_to_initial_conditions",
"hinton|top_downloads_in_ieee_xplore_[reader's_choice]",
"hao|deep_learning",
"krizhevsky|learning_multiple_layers_of_features_from_tiny_images",
"koren|the_bellkor_solution_to_the_netflix_grand_prize",
"lecun|gradient-based_learning_applied_to_document_recognition",
"|iv)_the_last_method_is_to_add_some_small_gaussian_noise_to_the_training_data_so_that_all_classifiers_are_trained_on_a_similar_but_different_training_set",
"|ii)_the_second_method_is_to_train_multiple_classifiers_with_different_but_similar_network_architectures_to_ensure_obtaining_a_set_of_even_more_diverse_classifiers"
],
"title": [
"Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong",
"Ensemble Adversarial Training: Attacks and Defenses",
"Detecting Adversarial Samples from Artifacts",
"Robustness to Adversarial Examples through an Ensemble of Specialists",
"On Detecting Adversarial Perturbations",
"Cleverhans V0.1: an Adversarial Machine Learning Library",
"Technical Report on the CleverHans v2.1.0 Adversarial Examples Library",
"Towards Evaluating the Robustness of Neural Networks",
"Adversarial examples in the physical world",
"Practical Black-Box Attacks against Machine Learning",
"Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples",
"The Limitations of Deep Learning in Adversarial Settings",
"DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks",
"Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks",
"Explaining and Harnessing Adversarial Examples",
"Fractional Max-Pooling",
"Towards Deep Neural Network Architectures Robust to Adversarial Examples",
"Intriguing properties of neural networks",
"ImageNet classification with deep convolutional neural networks",
"Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition",
"Traffic sign recognition with multi-scale Convolutional Networks",
"Neural network with ensembles",
"Multiple Classifier Systems",
"Bagging Predictors",
"Backpropagation is Sensitive to Initial Conditions",
"Top Downloads in IEEE Xplore [Reader's Choice]",
"Deep Learning",
"Learning Multiple Layers of Features from Tiny Images",
"The BellKor Solution to the Netflix Grand Prize",
"Gradient-based learning applied to document recognition",
"iv) The last method is to add some small Gaussian noise to the training data so that all classifiers are trained on a similar but different training set",
"ii) The second method is to train multiple classifiers with different but similar network architectures to ensure obtaining a set of even more diverse classifiers"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"Warren He",
"James Wei",
"Xinyun Chen",
"Nicholas Carlini",
"D. Song"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Florian Tramèr",
"Alexey Kurakin",
"Nicolas Papernot",
"D. Boneh",
"P. Mcdaniel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Reuben Feinman",
"Ryan R. Curtin",
"S. Shintre",
"Andrew B. Gardner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Mahdieh Abbasi",
"Christian Gagné"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. H. Metzen",
"Tim Genewein",
"Volker Fischer",
"Bastian Bischoff"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"I. Goodfellow",
"Nicolas Papernot",
"P. Mcdaniel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicolas Papernot",
"Fartash Faghri",
"Nicholas Carlini",
"I. Goodfellow",
"Reuben Feinman",
"Alexey Kurakin",
"Cihang Xie",
"Yash Sharma",
"Tom B. Brown",
"Aurko Roy",
"Alexander Matyasko",
"Vahid Behzadan",
"Karen Hambardzumyan",
"Zhishuai Zhang",
"Yi-Lin Juang",
"Zhi Li",
"Ryan Sheatsley",
"Abhibhav Garg",
"J. Uesato",
"W. Gierke",
"Yinpeng Dong",
"David Berthelot",
"P. Hendricks",
"Jonas Rauber",
"Rujun Long",
"P. Mcdaniel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicholas Carlini",
"D. Wagner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Alexey Kurakin",
"I. Goodfellow",
"Samy Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicolas Papernot",
"P. Mcdaniel",
"I. Goodfellow",
"S. Jha",
"Z. B. Celik",
"A. Swami"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicolas Papernot",
"P. Mcdaniel",
"I. Goodfellow",
"S. Jha",
"Z. B. Celik",
"A. Swami"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicolas Papernot",
"P. Mcdaniel",
"S. Jha",
"Matt Fredrikson",
"Z. B. Celik",
"A. Swami"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Seyed-Mohsen Moosavi-Dezfooli",
"Alhussein Fawzi",
"P. Frossard"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicolas Papernot",
"P. Mcdaniel",
"Xi Wu",
"S. Jha",
"A. Swami"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"I. Goodfellow",
"Jonathon Shlens",
"Christian Szegedy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Benjamin Graham"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Gu",
"Luca Rigazio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Christian Szegedy",
"Wojciech Zaremba",
"I. Sutskever",
"Joan Bruna",
"D. Erhan",
"I. Goodfellow",
"R. Fergus"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Krizhevsky",
"I. Sutskever",
"Geoffrey E. Hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"George E. Dahl",
"Dong Yu",
"L. Deng",
"A. Acero"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Sermanet",
"Yann LeCun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"E. Kussul",
"O. Makeyev",
"T. Baidyk",
"D. Reyes"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Thomas G. Dietterich"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"L. Breiman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Kolen",
"J. Pollack"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Geoffrey E. Hinton",
"L. Deng",
"Dong Yu",
"George E. Dahl",
"Abdel-rahman Mohamed",
"N. Jaitly",
"A. Senior",
"Vincent Vanhoucke",
"Patrick Nguyen",
"Tara N. Sainath",
"Brian Kingsbury"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Xing Hao",
"Guigang Zhang",
"Shang Ma"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Krizhevsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Y. Koren"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yann LeCun",
"L. Bottou",
"Yoshua Bengio",
"P. Haffner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [],
"affiliation": []
}
],
"arxiv_id": [
"1706.04701v1",
"1705.07204v5",
"1703.00410v3",
"1702.06856v3",
"1702.04267v2",
"",
"1610.00768v6",
"1608.04644v2",
"1607.02533v4",
"1602.02697",
"",
"1511.07528v1",
"1511.04599v3",
"1511.04508v2",
"1412.6572v3",
"1412.6071",
"1412.5068v4",
"1312.6199v4",
"",
"",
"",
"",
"",
"",
"",
"",
"1807.07987v2",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
[
"methodology"
],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"background"
],
[],
[],
[],
[],
[],
[
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[
"background",
"methodology"
],
[],
[],
[
"background",
"methodology"
],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[],
[],
[
"background"
],
[],
[],
[
"methodology"
],
[
"methodology"
],
[],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | null | 89 | null | 0.481481 | 0.583333 | null | null | null | null | null | rkA1f3NpZ |
|
thangarasa|leap_learning_embeddings_for_adaptive_pace|ICLR_cc_2018_Conference | LEAP: Learning Embeddings for Adaptive Pace | Determining the optimal order in which data examples are presented to Deep Neural Networks during training is a non-trivial problem. However, choosing a non-trivial scheduling method may drastically improve convergence. In this paper, we propose a Self-Paced Learning (SPL)-fused Deep Metric Learning (DML) framework, which we call Learning Embeddings for Adaptive Pace (LEAP). Our method parameterizes mini-batches dynamically based on the \textit{easiness} and \textit{true diverseness} of the sample within a salient feature representation space. In LEAP, we train an \textit{embedding} Convolutional Neural Network (CNN) to learn an expressive representation space by adaptive density discrimination using the Magnet Loss. The \textit{student} CNN classifier dynamically selects samples to form a mini-batch based on the \textit{easiness} from cross-entropy losses and \textit{true diverseness} of examples from the representation space sculpted by the \textit{embedding} CNN. We evaluate LEAP using deep CNN architectures for the task of supervised image classification on MNIST, FashionMNIST, CIFAR-10, CIFAR-100, and SVHN. We show that the LEAP framework converges faster with respect to the number of mini-batch updates required to achieve a comparable or better test performance on each of the datasets. | {
"name": [],
"affiliation": []
} | LEAP combines the strength of adaptive sampling with that of mini-batch online learning and adaptive representation learning to formulate a representative self-paced strategy in an end-to-end DNN training protocol. | [
"deep metric learning",
"self-paced learning",
"representation learning",
"cnn"
] | null | 2018-02-15 22:29:22 | 35 | null | null | null | null | null | null | null | null | false | Although paper has been improved with new quantitative results and additional clarity, the reviewers agree though that larger-scale experiments would better highlight the utility of the method. There are some concerns with computational cost, despite the fact that the two networks are trained asynchronously. A baseline against a single, asynchronously trained network (multiple GPUs) would help strengthen this point. Some reviewers expressed concerns with novelty. | {
"review_id": [
"ry9RWezWM",
"S1p86uteG",
"Byjs3NyZz"
],
"review": [
{
"title": "title: Review",
"paper_summary": null,
"main_review": "main_review: The authors purpose a method for creating mini batches for a student network by using a second learned representation space to dynamically selecting examples by their 'easiness and true diverseness'. The framework is detailed and results on MNIST, cifar10 and fashion-MNIST are presented. The work presented is novel but there are some notable omissions: \n - there are no specific numbers presented to back up the improvement claims; graphs are presented but not specific numeric results\n- there is limited discussion of the computational cost of the framework presented \n- there is no comparison to a baseline in which the additional learning cycles used for learning the embedding are used for training the student model.\n- only small data sets are evaluated. This is unfortunate because if there are to be large gains from this approach, it seems that they are more likely to be found in the domain of large scale problems, than toy data sets like mnist. \n\n**edit\nIn light of the changes made, and in particular the performance gains achieved on CIFAR-100, i have increased my ratting from a 4 to a 6",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Review ",
"paper_summary": null,
"main_review": "main_review: (Summary)\nThis paper is about learning a representation with curriculum learning style minibatch selection in an end-to-end framework. The authors experiment the classification accuracy on MNIST, FashionMNIST, and CIFAR-10 datasets.\n\n(Pros)\nThe references to the deep metric learning methods seem up to date and nicely summarizes the recent literatures.\n\n(Cons)\n1. The method lacks algorithmic novelty and the exposition of the method severely inhibits the reader from understand the proposed idea. Essentially, the method is described in section 3. First of all, it's not clear what the actual loss the authors are trying to minimize. Also, \\min_v E(\\theta, v; \\lambda, \\gamma) is incorrect. It looks to me like it should be E \\ell (...) where \\ell is the loss function. \n\n2. The experiments show almost no discernable practical gains over 'random' baseline which is the baseline for random minibatch selection.\n\n(Assessment)\nClear rejection. The method is poorly written, severely lacks algorithmic novelty, and the proposed approach shows no empirical gains over random mini batch sampling.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: The authors propose a method that uses an embedding network trained with magnet loss for adaptively sampling and feeding the student network that is being trained for the actual task",
"paper_summary": null,
"main_review": "main_review: While the idea is novel and I do agree that I have not seen other works along these lines there are a few things that are missing and hinder this paper significantly.\n\n1. There are no quantitative numbers in terms of accuracy improvements, overhead in computation in having two networks.\n2. The experiments are still at the toy level, the authors can tackle more challenging datasets where sampling goes from easy to hard examples like birdsnap. MNIST, FashionMNIST and CIFAR-10 are all small datasets where the true utility of sampling is not realized. Authors should be motivated to run the large scale experiments.\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.2222222238779068,
0.3333333432674408
],
"confidence": [
0.5,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to all reviewers (Part 1)",
"Response to all reviewers (Part 2)"
],
"comment": [
"We thank all of the reviewers for their careful review of our paper, and for the valuable comments and constructive criticism that ensued. We performed a major revision to the paper to take all of them into account, and in the process, we believe the paper has improved significantly. These are detailed below:\n\nR1 Methodology clarification\n\nWe made significant updates to the methodology in Section 3. In Section 3.1, we provide a detailed training algorithm for the embedding CNN which uses the Magnet loss to form a representation space consisting of $K$ clusters for $C$ classes by adaptive density discrimination. This results in a training set $D$ partitioned into learned representation space, $D_K^c$, while maintaining \tintra-class variation and inter-class similarity. The details of the objective function for the LEAP framework are added in Section 3.2, which is given by:\n\n\\min_{\\theta, \\mathcal{W}} \\mathbb{E}(\\theta, \\mathcal{W}; \\lambda, \\gamma) = \\sum_{i=1}^{n}w_i\\mathcal{L}(y_i, f(x_i,\\theta)) - \\lambda \\sum_{i=1}^{n}w_i - \\gamma\\|\\mathcal{W}\\|_{2,1}, \\ \\text{s.t} \\ \\mathcal{W} \\in [0,1]^{n}\n\nIn LEAP, we assume that a dataset contain $N$ samples, $\\mathcal{D} = \\{\\mathbf{x}_n\\}_{n=1}^{N}$, is grouped into $K$ clusters for each class $c$ through the Magnet loss to get: $\\{\\mathcal{D}^{k}\\}_{k=1}^K$, where $\\mathcal{D}^{k}$ corresponds to the $k^{th}$ cluster, $n_k$ is the number of samples in each cluster and $\\sum_{k=1}^{K}n_k = N$. A weight vector is $\\mathcal{W}^{k} = (\\mathcal{W}_1^k,\\ldots,\\mathcal{W}_{n_k}^k)^T$, where each $\\mathcal{W}_{n_k}^k$ is assigned a weight $[0,1]^{n_k}$ for each sample in cluster $k$ for $K$ clusters. \n\nThe easiness and true diverseness terms are given by $\\lambda$ and $\\gamma$. We use the negative $l_1$-norm: $-\\|\\mathcal{W}\\|_1$ to select easy samples over hard samples. The negative $l_2$-norm is used to disperse non-zero elements of the weights $\\mathcal{W}$ across a large number of clusters so that we can get a diverse set of training samples. \n\nIn addition, we give specific details on the LEAP algorithm (Section 3.2) for training the student CNN, where we indicate how the embedding CNN and student CNN are used in conjunction. In this subsection, we also present the self-paced sample selection strategy, which specifies how the training samples are selected based on the “easiness” and “true diverseness” according to the student CNN model, such that we solve $\\min_{\\mathcal{W}}\\mathbb{E}(\\theta, \\mathcal{W}; \\lambda, \\gamma)$. If the cross-entropy loss, $\\mathcal{L}(y_i^{k}, f(x_i^{k},\\theta))$, is less than $(\\lambda + \\gamma\\frac{1}{\\sqrt{i}+\\sqrt{i-1}})$, then we assign a weight $\\mathcal{W}_i^{k} = 1$, otherwise $\\mathcal{W}_i^{k} = 0$. $i$ is the training instance’s rank w.r.t. its cross-entropy loss value within its cluster. The instance with a smaller loss than the assigned threshold will be selected during training. Therefore, the new $\\mathcal{W}$ becomes equal to $\\min_{\\mathcal{W}}\\mathbb{E}(\\theta, \\mathcal{W}; \\lambda, \\gamma)$. Next, we update the learning pace for $\\lambda$ and $\\gamma$.\n",
"\n\nR1, R2, R4 Quantitative results to backup improvement claims\n\nA table with a summary of the experimental results is provided in Section 5. Please refer to the latest revision for the updated Table 1. Here, we present the test accuracy (%) results across all datasets including: MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100, and SVHN for the following sampling methods: Learning Embeddings for Adaptive Pace (LEAP), Self-Paced Learning with Diversity (SPLD), and Random. The test accuracy results of MNIST, Fashion-MNIST, and CIFAR-10 are averaged over 5 runs. The results for CIFAR-100 and SVHN are averaged over 4 runs. The results show that there is a noticeable increase in test performance across all datasets with the LEAP dynamic sampling strategy, especially for the CIFAR-100 dataset.\n\nR2, R4 Computational cost of this framework\n\nWe agree that training two complex CNN architectures (i.e. VGG-16, ResNet-18, etc.) would raise concerns for overhead in computation. However, we would like to clarify that the embedding CNN and student CNN are asynchronously trained in parallel by using multiprocessing to share data between processes in a local environment using arrays and values. The idea is to have an embedding CNN that is adaptively sculpting a representation space, while the student CNN is being trained. The student CNN leverages the $K$ cluster representations constructed by the embedding CNN, to select samples based on the “easiness” from each of the $K$ clusters for each class, $c$ in $C$ classes. This way we are ensuring that the samples that the student model considers “easy” also maintains diversity, which is important for constructing mini-batches iteratively. Therefore, the extra training cost of the embedding CNN can be mitigated by having it train in parallel to the actual classification model. This setup is more apparent in Section 3, which contains more specific and updated details of the methodology for both the embedding CNN and student CNN. \n\nR1, R2, R4 Experiments on complex datasets\n\nWe conducted experiments on two additional datasets, SVHN and CIFAR-100 which is considered a more fine-grained visual recognition dataset. We used a WideResNet for the student CNN and VGG-16 for the embedding CNN to train on CIFAR-100 using LEAP. The specific training scheme used for CIFAR-100 is detailed in Section 4.4. The CIFAR-100 experiments revealed that we achieve a noticeable gain in performance when using the LEAP framework with a test accuracy of 79.17% \\pm 0.24%. The LEAP framework outperforms the baselines, SPLD and Random, by 4.50% and 3.72%, respectively. Effectively, we saw that on a more challenging fine-grained classification task, the LEAP framework performs really well. While we agree with the reviewers that the true utility of our framework can be realized in large-scale problems (i.e. BirdSnap, ImageNet, etc.), we have yet to perform those experiments.\n\nThe MNIST experiments were mainly performed to show that the LEAP framework can be employed end-to-end for a simple supervised classification task. Then, we extended this to Fashion-MNIST which is considered a direct drop-in replacement for MNIST. Fashion-MNIST served to be another small classification dataset that can be used to test and verify the feasibility of our approach, which also served to be successful. 
CIFAR-10 experiments showed that we can learn a representation space with $K$ clusters for each class in the dataset, by extracting features from RGB images and computing the Magnet loss with the embedding CNN. Then, we showed that we can use this learned representation space to adaptively sample “easy” training instances diversely from $K$ clusters for each classified class."
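The asynchronous setup described in this response (two processes sharing cluster information through multiprocessing arrays and values) can be sketched as below. This is a minimal illustration under the assumption that both training loops are reduced to placeholders; the shared-array size, sleep intervals, and function names are hypothetical, and the real LEAP code may be organized differently.

```python
import multiprocessing as mp
import time
import numpy as np

def embedding_loop(shared_state, stop_flag):
    """Stand-in for the embedding-CNN process: it would periodically refresh the
    cluster representations; here it just writes fresh values to shared memory."""
    while not stop_flag.value:
        update = np.random.rand(len(shared_state))       # placeholder "cluster summary"
        with shared_state.get_lock():
            shared_state[:] = list(update)               # publish the latest summary
        time.sleep(0.01)

def student_loop(shared_state, stop_flag, steps=50):
    """Stand-in for the student-CNN process: it would read the latest cluster info
    and build easy-and-diverse mini-batches from it before each training step."""
    for _ in range(steps):
        with shared_state.get_lock():
            snapshot = list(shared_state)                # read the current summary
        # ... use `snapshot` to pick samples per cluster and run one training step ...
        time.sleep(0.01)
    stop_flag.value = 1                                  # tell the embedding process to stop

if __name__ == "__main__":
    cluster_summary = mp.Array("d", 8)                   # shared doubles (illustrative size)
    stop = mp.Value("i", 0)
    workers = [mp.Process(target=embedding_loop, args=(cluster_summary, stop)),
               mp.Process(target=student_loop, args=(cluster_summary, stop))]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```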
]
} | {
"paperhash": [
"arthur|k-means++:_the_advantages_of_careful_seeding",
"bell|learning_visual_similarity_for_product_design_with_convolutional_neural_networks",
"bengio|curriculum_learning",
"choy|universal_correspondence_network",
"elman|learning_and_development_in_neural_networks:_the_importance_of_starting_small",
"fathi|semantic_instance_segmentation_via_deep_metric_learning",
"frome|devise:_a_deep_visual-semantic_embedding_model",
"he|deep_residual_learning_for_image_recognition",
"horn|the_inaturalist_challenge",
"hsieh|collaborative_metric_learning",
"huang|self-paced_model_learning_for_robust_visual_tracking",
"jiang|easy_samples_first:_selfpaced_reranking_for_zero-example_multimedia_search",
"jiang|self-paced_learning_with_diversity",
"kiapour|where_to_buy_it:_matching_street_clothing_photos_in_online_shops",
"kumar|self-paced_learning_for_latent_variable_models",
"lapedriza|are_all_training_examples_equally_valuable",
"lecun|gradient-based_learning_applied_to_document_recognition",
"jae|learning_the_easy_things_first:_self-paced_visual_category_discovery",
"li|self-paced_convolutional_neural_network_for_computer_aided_detection_in_medical_imaging_analysis",
"loshchilov|online_batch_selection_for_faster_training_of_neural_networks",
"netzer|reading_digits_in_natural_images_with_unsupervised_feature_learning",
"rippel|metric_learning_with_adaptive_density_discrimination",
"sangineto|self_paced_deep_learning_for_weakly_supervised_object_detection",
"schroff|facenet:_a_unified_embedding_for_face_recognition_and_clustering",
"simonyan|very_deep_convolutional_networks_for_large-scale_image_recognition",
"sohn|improved_deep_metric_learning_with_multi-class_n-pair_loss_objective",
"song|deep_metric_learning_via_facility_location",
"song|deep_metric_learning_via_lifted_structured_feature_embedding",
"steven|self-paced_learning_for_long-term_tracking",
"tang|self-paced_dictionary_learning_for_image_classification",
"ustinova|learning_deep_embeddings_with_histogram_loss",
"maaten|visualizing_high-dimensional_data_using_t-sne",
"wang|deep_metric_learning_with_angular_loss",
"zagoruyko|wide_residual_networks",
"zhang|embedding_label_structures_for_fine-grained_feature_representation"
],
"title": [
"K-means++: The advantages of careful seeding",
"Learning visual similarity for product design with convolutional neural networks",
"Curriculum learning",
"Universal correspondence network",
"Learning and development in neural networks: The importance of starting small",
"Semantic instance segmentation via deep metric learning",
"Devise: A deep visual-semantic embedding model",
"Deep residual learning for image recognition",
"The inaturalist challenge",
"Collaborative metric learning",
"Self-paced model learning for robust visual tracking",
"Easy samples first: Selfpaced reranking for zero-example multimedia search",
"Self-paced learning with diversity",
"Where to buy it: Matching street clothing photos in online shops",
"Self-paced learning for latent variable models",
"Are all training examples equally valuable",
"Gradient-based learning applied to document recognition",
"Learning the easy things first: Self-paced visual category discovery",
"Self-paced convolutional neural network for computer aided detection in medical imaging analysis",
"Online batch selection for faster training of neural networks",
"Reading digits in natural images with unsupervised feature learning",
"Metric learning with adaptive density discrimination",
"Self paced deep learning for weakly supervised object detection",
"Facenet: A unified embedding for face recognition and clustering",
"Very deep convolutional networks for large-scale image recognition",
"Improved deep metric learning with multi-class n-pair loss objective",
"Deep metric learning via facility location",
"Deep metric learning via lifted structured feature embedding",
"Self-paced learning for long-term tracking",
"Self-paced dictionary learning for image classification",
"Learning deep embeddings with histogram loss",
"Visualizing high-dimensional data using t-sne",
"Deep metric learning with angular loss",
"Wide residual networks",
"Embedding label structures for fine-grained feature representation"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"david arthur",
"sergei vassilvitskii"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sean bell",
"kavita bala"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoshua bengio",
"jérôme louradour",
"ronan collobert",
"jason weston"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"junyoung christopher b choy",
"silvio gwak",
"manmohan savarese",
" chandraker"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jeffrey l elman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alireza fathi",
"zbigniew wojna",
"vivek rathod",
"peng wang",
"hyun oh song",
"sergio guadarrama",
"kevin p murphy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"andrea frome",
"greg corrado",
"jonathon shlens",
"samy bengio",
"jeffrey dean",
"marcaurelio ranzato",
"tomas mikolov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"grant van horn",
"oisin mac aodha",
"yang sup song",
"alexander shepard",
"hartwig adam",
"pietro perona",
"serge j belongie"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"cheng-kang hsieh",
"longqi yang",
"yin cui",
"tsung-yi lin",
"serge belongie",
"deborah estrin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"wenhui huang",
"jason gu",
"xin ma",
"yibin li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"lu jiang",
"deyu meng",
"teruko mitamura",
"alexander g hauptmann"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"lu jiang",
"deyu meng",
"shoou-i yu",
"zhenzhong lan",
"shiguang shan",
"alexander hauptmann"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"m hadi kiapour",
"xufeng han",
"svetlana lazebnik",
"alexander c berg",
"tamara l berg"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"m pawan kumar",
"benjamin packer",
"daphne koller"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"agata lapedriza",
"hamed pirsiavash",
"zoya bylinskii",
"antonio torralba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yann lecun",
"lon bottou",
"yoshua bengio",
"patrick haffner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yong jae",
"lee ",
"k grauman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xiang li",
"aoxiao zhong",
"ming lin",
"ning guo",
"mu sun",
"arkadiusz sitek",
"jieping ye",
"james thrall",
"quanzheng li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ilya loshchilov",
"frank hutter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yuval netzer",
"tao wang",
"adam coates",
"alessandro bissacco",
"bo wu",
"andrew y ng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"oren rippel",
"manohar paluri",
"piotr dollár",
"lubomir d bourdev"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"enver sangineto",
"moin nabi",
"dubravko culibrk",
"nicu sebe"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"florian schroff",
"dmitry kalenichenko",
"james philbin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"karen simonyan",
"andrew zisserman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kihyuk sohn"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"hyun oh song",
"stefanie jegelka",
"vivek rathod",
"kevin murphy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"hyun oh song",
"yu xiang",
"stefanie jegelka",
"silvio savarese"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"james steven",
"supancic iii",
"deva ramanan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ye tang",
"yu-bin yang",
"yang gao"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"evgeniya ustinova",
"victor s lempitsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"l j p van der maaten",
"g e hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jian wang",
"feng zhou",
"shilei wen",
"xiao liu",
"yuanqing lin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sergey zagoruyko",
"nikos komodakis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xiaofan zhang",
"feng zhou",
"yuanqing lin",
"shaoting zhang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"",
"1606.03558v3",
"",
"1703.10277v1",
"",
"1512.03385v1",
"",
"",
"",
"",
"",
"",
"",
"1311.6510v1",
"",
"",
"",
"1511.06343v4",
"",
"1511.05939v2",
"1605.07651v3",
"1503.03832v3",
"",
"",
"1612.01213v2",
"1511.06452v1",
"",
"",
"1611.00822v1",
"",
"1708.01682v1",
"1605.07146v4",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.37037 | 0.666667 | null | null | null | null | null | rk9kKMZ0- |
||
bai|convolutional_sequence_modeling_revisited|ICLR_cc_2018_Conference | Convolutional Sequence Modeling Revisited | This paper revisits the problem of sequence modeling using convolutional
architectures. Although both convolutional and recurrent architectures have a
long history in sequence prediction, the current "default" mindset in much of
the deep learning community is that generic sequence modeling is best handled
using recurrent networks. The goal of this paper is to question this assumption.
Specifically, we consider a simple generic temporal convolution network (TCN),
which adopts features from modern ConvNet architectures such as dilations and
residual connections. We show that on a variety of sequence modeling tasks,
including many frequently used as benchmarks for evaluating recurrent networks,
the TCN outperforms baseline RNN methods (LSTMs, GRUs, and vanilla RNNs) and
sometimes even highly specialized approaches. We further show that the
potential "infinite memory" advantage that RNNs have over TCNs is largely
absent in practice: TCNs indeed exhibit longer effective history sizes than their
recurrent counterparts. As a whole, we argue that it may be time to (re)consider
ConvNets as the default "go to" architecture for sequence modeling. | {
"name": [],
"affiliation": []
} | We argue that convolutional networks should be considered the default starting point for sequence modeling tasks. | [
"Temporal Convolutional Network",
"Sequence Modeling",
"Deep Learning"
] | null | 2018-02-15 22:29:35 | 54 | null | null | null | null | null | null | null | null | false | meta score: 5
This paper gives a thorough experimental comparison of convolutional vs recurrent networks for a variety of sequence modelling tasks. The experimentation is thorough, but the main point of the paper, that convolutional networks are unjustly ignored for sequence modelling, is overstated as there are several areas where convolutional networks are well explored.
Pros:
clear and well-written
thorough set of experiments
Cons:
original contribution is not strong
it is not as radical to consider convolutional networks for sequence modeling as the authors seem to suggest
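The abstract above describes a TCN built from dilated convolutions and residual connections. A minimal sketch of one such residual block, assuming PyTorch and causal (left-padded) convolutions, is given below; the kernel size, dropout value, and the omission of weight normalization are illustrative choices that may differ from the architecture actually evaluated in the paper.

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1-D convolution that only looks at past timesteps: pad symmetrically,
    then trim the extra outputs on the right so output t sees inputs <= t."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size,
                              padding=self.pad, dilation=dilation)

    def forward(self, x):                       # x: (batch, channels, time)
        out = self.conv(x)
        return out[:, :, :-self.pad] if self.pad else out

class TemporalBlock(nn.Module):
    """Residual block with two dilated causal convolutions; a 1x1 convolution
    matches channel counts on the skip path when needed."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1, dropout=0.2):
        super().__init__()
        self.net = nn.Sequential(
            CausalConv1d(in_ch, out_ch, kernel_size, dilation), nn.ReLU(), nn.Dropout(dropout),
            CausalConv1d(out_ch, out_ch, kernel_size, dilation), nn.ReLU(), nn.Dropout(dropout),
        )
        self.downsample = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return torch.relu(self.net(x) + self.downsample(x))

# Hypothetical usage: stack blocks with exponentially growing dilations.
layers = [TemporalBlock(1 if i == 0 else 32, 32, dilation=2 ** i) for i in range(4)]
tcn = nn.Sequential(*layers)
y = tcn(torch.randn(8, 1, 128))    # (batch=8, channels=32, length=128) output
```

Stacking blocks with exponentially increasing dilations is what gives such a network a long effective history with relatively few layers.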
| {
"review_id": [
"SkdHpQDez",
"HkUwN_Ylf",
"HkTNLM5gM"
],
"review": [
{
"title": "title: The authors benchmark a general-purpose convolutional architecture on several sequence modeling tasks across a variety of domains. The results will be of broad use to the community, although some of the claims in the paper could do with more justification.",
"paper_summary": null,
"main_review": "main_review: In this paper, the authors argue for the use of convolutional architectures as a general purpose tool for sequence modeling. They start by proposing a generic temporal convolution sequence model which leverages recent advances in the field, discuss the respective advantages of convolutional and recurrent networks, and benchmark their architecture on a number of different tasks.\n\nThe paper is clearly written and easy to follow, does a good job of presenting both the advantages and disadvantages of the proposed method, and convincingly makes the point that convolutional architectures should at least be considered for any sequence modeling task; they are indeed still often overlooked, in spite of some strong performances in language modeling and translation in recent works.\n\nThe only part which is slightly less convincing is the section about effective memory size. While it is true that learning longer term dependencies can be difficult in standard RNN architectures, it is interesting to notice that the SoTA results presented in appendix B.3 for language modeling on larger data sets are architectures which focus on remedying this difficulty (cache model and hierarchical LSTM). It would also be interesting to see how TCN works on word prediction tasks which are devised explicitly to test for longer memory, such as Lambada (1) or Children Books Test (2).\n\nAs a minor point, adding a measure of complexity in terms of number of operations could be a useful hardware-independent indication of the computational cost of the architecture.\n\nPros:\n- Clearly written, well executed paper\n- Makes a strong point for the use of convolutional architecture for sequences\n- Provides useful benchmarks for the community\n\nCons:\n- The claims on effective memory size need more context and justification\n\n1: The LAMBADA dataset: Word prediction requiring a broad discourse context, Paperno et al. 2016\n2: The Goldilocks principle: reading children's books with explicit memory representation, Hill et al. 2016",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Nice results but not much novelty and I don't think that the views are as contrarian as the paper claims.",
"paper_summary": null,
"main_review": "main_review: The authors claim that convolutional networks should be considered as possible replacements of recurrent neural networks as the default choice for solving sequential modelling problems. The paper describes an architecture similar to wavenet with residual connections. Empirical results are presented on a large number of tasks where the convolutional network often outperforms modern recurrent baselines or reaches similar performance.\n\nThe biggest strength of the paper is the large number of tasks on which the models are evaluated. The experiments seem sound and the information in both the paper and the appendix seem to allow for replication. That said, I don’t think that all the tasks are very relevant for comparing convolutional and recurrent architectures. While the time windows that RNNs can deal with are infinite in principle, it is common knowledge that the effective length of the dependencies RNNs can model is quite limited in practise. Many of the artificial task like the adding problem and sequential MNIST have been designed to highlight this weakness of RNNs. I don’t find it very surprising that these tasks are easy to solve with a feedforward architecture with a large enough context window. The more impressive results are in my opinion those on the language modelling tasks where one would indeed expect RNNs to be more suitable for capturing dependencies that require stack-like memory functionality. \n\nWhile the related work is quite comprehensive, it downplays the popularity of convolutional architectures throughout history a bit. Especially in speech recognition, RNNs have only recently started to gain popularity while deep feedforward networks applied to overlapping time windows (i.e., 1D convolutions) have been the state-of-the-art for years. Of course the recent successes of dilated convolutions are likely to change the landscape in this application domain yet again.\n\nThe paper is well-structured and written. If anything, it is perhaps a little bit wordy at times but I prefer that over obscurity due to brevity.\n\nThe ideas in the paper are not novel and neither do the authors claim that they are. Unfortunately, I also think that the impact of the work is also somewhat limited due to the enormous success of the wavenet architecture. I do think that the results on the real-world tasks are valuable and worthy of publication. However, I feel that the authors exaggerate the extent to which researchers in this field still consider RNNs superior models for sequences. \n\n+ Many experiments and tasks.\n+ Well-written and clear.\n+ Good results\n- Somewhat exaggerated claims about the extent to which RNNs are still being considered more suitable sequence models\n than dilated convolutions. Especially in light of the success of Wavenet.\n- Not much novelty/originality.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Convolutional networks are good for solving sequence tasks",
"paper_summary": null,
"main_review": "main_review: This paper argues that convolutional networks should be the default\napproach for sequence modeling.\n\nThe paper is nicely done and rather easy to understand. Nevertheless, I find\nit difficult to assess its significance. In order to support the original hypothesis,\nI think that a much larger and more diverse set of experiments should have\nbeen considered. As pointed out by another reviewer please add https://arxiv.org/abs/1703.04691\nto your references.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
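The reviews above discuss the effective memory (history) size of TCNs versus RNNs. As a rough worked example, the history visible to one output of a stack of dilated causal convolutions can be computed as follows, assuming dilations that double at each level and two convolutions per level; the paper's exact configuration may differ.

```python
def tcn_receptive_field(kernel_size, num_levels, convs_per_level=2):
    """Number of past timesteps visible to the last output of a stack of dilated
    causal convolutions with dilation 2**i at level i (an upper bound on the
    usable history, not a measure of what the trained model actually exploits)."""
    field = 1
    for i in range(num_levels):
        field += convs_per_level * (kernel_size - 1) * (2 ** i)
    return field

# e.g. kernel size 3, 8 levels, two convolutions per residual block:
print(tcn_receptive_field(3, 8))   # 1 + 2*2*(2**8 - 1) = 1021 timesteps
```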
"score": [
0.7777777910232544,
0.4444444477558136,
0.3333333432674408
],
"confidence": [
0.75,
0.75,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to AnonReviewer2",
"Revision posted",
"Response to Comment",
"Response to AnonReviewer3",
"Response to Comment",
"Response to Comment",
"Response to AnonReviewer1",
"Response to Comment",
"Response to Comment"
],
"comment": [
"Thanks for your note, though we honestly found it a bit surprising. The entire point of our paper _is_ to evaluate the improved TCN performance over a large and diverse set of experiments, and on this point it is by far the single _most diverse_ study of CNN vs. RNN performance that we are aware of. And while many of the particular benchmarks are indeed \"small-sized\" in and of themselves, they are standard benchmarks for evaluating the performance of recurrent networks (see appendix A for some references to papers that used these benchmark tests); and we include experiments on domains such as Wikitext-103, which is certainly not a small dataset.\n\nRegarding arXiv:1703.04691, see our comments in the response to the discussant who originally brought this up.\n",
"We thank the reviewers and other discussants for their comments. In order to address points discussed in OpenReview reviews, comments, and our responses, we have updated our paper. The key changes are as follows:\n\n1. We’ve added content to the Related Work section. This content elaborates on the relationship to prior work (e.g., non-dilated gated ConvNets, convolutional models for sequence to sequence prediction, etc.), in accordance with our responses to OpenReview reviews and comments. As highlighted in the revision, the TCN model we focus on avoids much of the specialized machinery present in prior work and is evaluated on an extremely diverse set of tasks rather than a specific domain or application.\n\n2. We have added experiments on the LAMBADA dataset, as suggested by Reviewer 3, which in fact show very strong performance for the TCN models. LAMBADA is an especially challenging task where each data sample consists of a long context segment (4.6 sentences on average) and a target sentence, the last word of which needs to be predicted. In this setting, a human can perfectly predict the last word when given the context, but most of the existing models (e.g., LSTM, vanilla RNN) fail to do so. As shown in Table 1 of Section 4 in the revision, without much tuning (due to limited rebuttal time), TCN can achieve a perplexity of < 1300 on LAMBADA, substantially outperforming LSTMs (~4000 ppl) and vanilla RNNs (~15000 ppl), as listed in prior works. This is a strong result that suggests that TCNs are able to recall from a much larger context than recurrent networks, and thus may be more suitable for tasks where long dependencies are required.\n\n3. The appendix now includes a new section that compares the baseline TCN to a TCN that uses a gating mechanism. This mainly serves as a comparison point to the Dauphin et al. paper, which one reviewer pointed out was not sufficiently addressed in our original draft. Our experiments show that a gating mechanism can indeed be useful on certain language modeling tasks, but such benefits may not generalize well to other tasks (e.g., polyphonic music and other benchmark tasks). Thus, while we do absolutely agree with the relevance of the Dauphin et al. paper, and stress this more in the update, we also feel that much the same considerations apply here as to e.g., the WaveNet paper, where the focus of the previous work was really on a single domain, whereas our paper stresses the generality of convolutional sequence models.\n\n4. The revision includes the latest results on certain large experiments (e.g., Wikitext-103). Specifically, as mentioned in our responses, the TCN achieves a perplexity of 45.2 on this dataset (the only change from our original result was simple optimizing the model for longer), compared to an LSTM that achieves 48.4 perplexity.\n",
"Thank you for your comment. We refer you to the version of our work on arXiv [1], where we have provided a table with updated results for both baselines and the TCNs. \n\nNote that for the PTB perplexity you mentioned, to reach a level of 58.3 you still need a lot more (advanced) recurrent optimizations and regularizations [2], which is orthogonal to what we try to accomplish here. For an LSTM with standard regularizations and no more than 13M parameters, we got 78.93 perplexity, which is consistent with prior works and open source codes. For other tasks, for example on LAMBADA, the prior results can be seen at [3], with an LSTM perplexity at 5357. Graves et al. also tested on this dataset and Wikitext-103 [4], getting similar result as ours. \n\nAlso note that we explicitly emphasize that state-of-the-art architectures do attain much lower errors than the generic TCN _and_ LSTM architectures we consider, which we highlight in the appendix.\n\n[1] https://arxiv.org/abs/1803.01271\n[2] https://github.com/salesforce/awd-lstm-lm\n[3] https://arxiv.org/pdf/1606.06031\n[4] https://arxiv.org/pdf/1612.04426",
"Thank you very much for this review. We agree on most points, except in the ultimate conclusions and assessment of the current \"default\" mindset of temporal modeling in RNNs.\n\nFirst, we agree that speech data in particular (or perhaps audio data more broadly), is indeed one instance where CNNs do appear to have a historical edge over recurrent models, and we can emphasize this in the background section. Indeed, as you mention, the success of WaveNet has certainly made clear the power of CNNs in this application domain.\n\nThe question, then, is to what extent the community already feels that the success of WaveNet in the speech setting is sufficient to \"standardize\" the use of CNNs across all sequence prediction tasks. And our genuine impression here is that these ideas have yet to permeate the mindset of the community for generic sequence prediction. Numerous resources (e.g., Goodfellow et al.'s deep learning book, with its chapter \"Sequence Modeling: Recurrent and Recursive Nets\", plus virtually all current papers on recurrent networks), still highlight LSTMs and other similar architectures as the \"standard\" for sequence modeling. The precise goal of our work is to highlight the fact that WaveNet-like architectures (though substantially simplified too, as we describe below) can indeed work well across the many other settings we consider. And we feel that this is an important point to make empirically, even if the results or conclusion may seem \"unsurprising\" to people who are very familiar with CNN architectures.\n\nThe second point, also, is that the architecture we consider is indeed simpler than WaveNet in many respects: e.g. no gated activation but just ReLUs (which, as we highlighted in our response to a previous reviewer, we will include more experimentation on in a forthcoming update), no context stacks, etc; and residual units and dilation structure that more directly mirror the corresponding \"standard\" architectures in convolutional image networks. Thus, a practitioner wishing to apply WaveNet-style architectures to some new sequence prediction task may be unclear about which elements of the architecture are really necessary, and we attempt to distill this as much as possible in our current paper.\n\nOverall, therefore, we agree that the significance of our current work is largely making the empirical point that TCN architectures are not just for audio, but really for any sequence modeling problem. But we do feel that this is an important point to make and thoroughly substantiate, even given the success of WaveNet.\n",
"Thanks for your note. We will certainly update the paper to include this arXiv report. However, we also believe that the precise conclusions of this report are somewhat orthogonal as it applies an architecture virtually identical to WaveNet to one particular time series prediction task; thus, from an architectural standpoint, we think that the WaveNet paper is the more relevant prior work, which of course we do cite and discuss. In contrast, the goal of our current work is to highlight a simpler architecture and empirically study it across a wide range of sequence modeling tasks. But as mentioned, we're happy to include the reference and explain this connection.\n",
"Thanks for your comment. However, we do strongly disagree that this should be the main lesson from the Melis et al. paper, and it's really orthogonal to the main point we are trying to make. The takeaway from the Melis et al. paper should absolutely not be that \"ordinary LSTMs get 60 perplexity on PTB\", but rather that extensive hyperparameter tuning _can_ improve LSTM results on any given task, effectively overfitting to the test set. And therefore, it's difficult to confirm whether much of the follow-on work in LSTM models (the subset of works that look at only a few relatively small datasets) is really improving the underlying model or just essentially working by tuning hyperparameters. The Melis et al. paper exactly points to the need to evaluate on a wider and more diverse set of benchmarks, where extensive hyperparameter optimization cannot simply \"overfit\" the data. The authors are quite explicit on this point, even mentioning in one of their review rebuttals: \"The main criticism seems to center on evaluating models on datasets that are too small which increases evaluation variance, and the results are thus not trustworthy. That is a very good summary of the main message of the paper!\"\n\nWe also feel such large-scale hyperparameter search for _one_ dataset is not particularly informative (and most importantly, not representative of what practitioners would typically encounter). As the authors did not release the set of hyperparameters they obtained, we reproduced the LSTM result using the best hyperparameter set that we found in prior works and open source codes (without advanced regularizations). Moreover, given the message we want to convey in our paper, we believe it is much more important to evaluate a model's performance across tasks and datasets, instead of doing extensive hyperparameter search(es) on a single dataset. The results for both the LSTM and TCN models use minimal tuning, and thus we feel are a good illustration of initial \"expected\" performance.",
"Thank you very much for the review, we agree with virtually all your points. As per your suggestion, we are currently integrating experiments on the LAMBADA dataset into the paper, and will post a revision with these results shortly.\n",
"Thanks for the note. We believe this note is addressing the same points as the note above (with a few additional follow-on points), so we refer to our comment above.",
"Thanks very much for your note. We absolutely agree with your general comments about the related work. We respond to two different points here, because in our mind there are two different categories in the papers you mention.\n\nFirst, the Kalchbrenner et al., and Gehring et al., papers both relate to convolutional sequence to sequence models. While we absolutely agree that this work is related to our topic, we made the explicit choice not to consider seq2seq models in this paper. The rationale for us is that these models differ in substantial ways from \"pure\" temporal convolutional models. Since the input to the model is the entire input sentence (captured by non-causal convolutions), and only the autoregressive output network needs to follow causal generation, the task itself is quite different from pure temporal sequence modeling, even if it may be an extension. Specifically, the two-stage encoder/decoder architecture (first to encode the entire input sentence, then to autoregressively generate the translation) of typical seq2seq models seems so fundamental to these approaches that we felt it was substantially more specialized than the generic temporal modeling problem.\n\nHowever, we also of course concede that the work is related, especially given the machine translation community's departure from pure recurrent networks to convolutional (or even pure attention-based) models. Thus we will edit the paper to cite these works and address these points (we'll be posting a revised version within a week or so).\n\nSecond, there is the work of Dauphin et al., which more directly relates to a language modeling task. And while we _do_ cite this work, we believe your point combined with the point in the comment below is more that we don't devote sufficient attention to this previous work. We agree that the relationship is not clarified enough in the paper and are currently revising to fix this, but let us briefly mention here the connections and how we see this relationship.\n\nFirst, we should mention that while we did include the 48.9 PPL figure on one GPU, running the TCN model for more epochs (still on one GPU) actually achieves a PPL of 45.2, which isn't far off from Dauphin’s 44.9. (Note that we use a network approximately half the size of Dauphin et al.’s, and little tuning.) We'll naturally update the paper on this point. Second, the main technical contribution of the paper of Dauphin et al. is the combination of (non-dilated) convolutional networks with a gating mechanism. We experimented quite extensively with this gating mechanism combined with our generic TCN architecture, but didn’t see significant overall performance improvements due to the gating mechanism. We can include these results in an appendix. Indeed, a main characteristic of our work is simply the claim that the generic TCN architecture (which is quite simple in nature, as we highlight) is _sufficient_ to achieve most of the benefits proposed by more complex convolutional architectures, without the need for attention, gating mechanisms, and other architectural elaborations. We believe that the comparison to the Dauphin et al. work actually supports this conclusion, and we will update the paper accordingly (we will post a follow-up note here once the paper has been updated).\n"
]
} | {
"paperhash": [
"chung|empirical_evaluation_of_gated_recurrent_neural_networks_on_sequence_modeling",
"gehring|convolutional_sequence_to_sequence_learning",
"greff|lstm:_a_search_space_odyssey",
"he|deep_residual_learning_for_image_recognition",
"lecun|gradient-based_learning_applied_to_document_recognition",
"long|fully_convolutional_networks_for_semantic_segmentation",
"merity|pointer_sentinel_mixture_models",
"merity|regularizing_and_optimizing_lstm_language_models",
"oord|wavenet:_a_generative_model_for_raw_audio",
"oord|conditional_image_generation_with_pixelcnn_decoders",
"pascanu|on_the_difficulty_of_training_recurrent_neural_networks",
"salimans|weight_normalization:_a_simple_reparameterization_to_accelerate_training_of_deep_neural_networks",
"wu|on_multiplicative_integration_with_recurrent_neural_networks",
"yu|multi-scale_context_aggregation_by_dilated_convolutions",
"arjovsky|unitary_evolution_recurrent_neural_networks",
"borovykh|conditional_time_series_forecasting_with_convolutional_neural_networks",
"chang|dilated_recurrent_neural_networks",
"chung|hierarchical_multiscale_recurrent_neural_networks",
"cooijmans|recurrent_batch_normalization",
"dauphin|language_modeling_with_gated_convolutional_networks",
"grave|improving_neural_language_models_with_a_continuous_cache",
"jing|tunable_efficient_unitary_neural_networks_(eunn)_and_their_application_to_rnns",
"kalchbrenner|neural_machine_translation_in_linear_time",
"koutnik|a_clockwork_rnn",
"krueger|regularizing_rnns_by_stabilizing_activations",
"krueger|zoneout:_regularizing_rnns_by_randomly_preserving_hidden_activations",
"le|a_simple_way_to_initialize_recurrent_networks_of_rectified_linear_units",
"paperno|the_lambada_dataset:_word_prediction_requiring_a_broad_discourse_context",
"press|using_the_output_embedding_to_improve_language_models",
"subakan|diagonal_rnns_in_symbolic_music_modeling",
"zhang|architectural_complexity_measures_of_recurrent_neural_networks"
],
"title": [
"Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling",
"Convolutional Sequence to Sequence Learning",
"LSTM: A Search Space Odyssey",
"Deep Residual Learning for Image Recognition",
"Gradient-based learning applied to document recognition",
"Fully Convolutional Networks for Semantic Segmentation",
"Pointer Sentinel Mixture Models",
"Regularizing and Optimizing LSTM Language Models",
"WAVENET: A GENERATIVE MODEL FOR RAW AUDIO Aäron van den Oord",
"Conditional Image Generation with PixelCNN Decoders Aäron van den Oord",
"On the difficulty of training Recurrent Neural Networks",
"Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks",
"On Multiplicative Integration with Recurrent Neural Networks",
"MULTI-SCALE CONTEXT AGGREGATION BY DILATED CONVOLUTIONS",
"Enabling the Adoption of Processing-in-Memory: Challenges, Mechanisms, Future Research Directions",
"Conditional time series forecasting with convolutional neural networks",
"Rough paths, Signatures and the modelling of functions on streams",
"Published as a conference paper at ICLR 2017 HIERARCHICAL MULTISCALE RECURRENT NEURAL NETWORKS",
"RECURRENT BATCH NORMALIZATION",
"Language Modeling with Gated Convolutional Networks",
"Under review as a conference paper at ICLR 2017 IMPROVING NEURAL LANGUAGE MODELS WITH A CONTINUOUS CACHE",
"On the difficulty of training Recurrent Neural Networks",
"Neural Machine Translation in Linear Time",
"A Clockwork RNN",
"REGULARIZING RNNS BY STABILIZING ACTIVATIONS",
"Under review as a conference paper at ICLR 2017 ZONEOUT: REGULARIZING RNNS BY RANDOMLY PRESERVING HIDDEN ACTIVATIONS",
"A Simple Way to Initialize Recurrent Networks of Rectified Linear Units",
"The LAMBADA dataset: Word prediction requiring a broad discourse context *",
"Using the Output Embedding to Improve Language Models",
"DIAGONAL RNNS IN SYMBOLIC MUSIC MODELING",
"Architectural Complexity Measures of Recurrent Neural Networks"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"junyoung chung",
"caglar gulcehre",
"kyunghyun cho",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal CIFAR Senior Fellow",
"location": "{}"
}
]
},
{
"name": [
"jonas gehring",
"michael auli",
"david grangier",
"denis yarats",
"yann n dauphin facebook",
"a i research"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"klaus greff",
"rupesh k srivastava",
"jan koutník",
"bas r steunebrink",
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
}
]
},
{
"name": [
"yann lecun",
"léon bottou",
"yoshua bengio",
"patrick haffner",
"yoshua bottou",
"patrick bengio",
" haffner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jonathan long",
"evan shelhamer",
"trevor darrell",
"u c berkeley"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"stephen merity",
"james bradbury",
"richard socher"
],
"affiliation": [
{
"laboratory": "",
"institution": "MetaMind -A Salesforce Company",
"location": "{'settlement': 'Palo Alto', 'region': 'CA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "MetaMind -A Salesforce Company",
"location": "{'settlement': 'Palo Alto', 'region': 'CA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "MetaMind -A Salesforce Company",
"location": "{'settlement': 'Palo Alto', 'region': 'CA', 'country': 'USA'}"
}
]
},
{
"name": [
"stephen merity",
"nitish shirish",
"richard socher"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sander dieleman",
"heiga zen",
"karen simonyan",
"nal kalchbrenner",
"andrew senior",
"koray kavukcuoglu",
"google deepmind",
" london"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"google deepmind",
"nal kalchbrenner",
"lasse espeholt",
"alex graves",
"koray kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"razvan pascanu",
"tomas mikolov",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": "",
"institution": "Universite de Montreal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Brno University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Universite de Montreal",
"location": "{}"
}
]
},
{
"name": [
"tim salimans",
"diederik p kingma"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yuhuai wu",
"saizheng zhang",
"ying zhang",
"yoshua bengio",
"ruslan salakhutdinov"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{}"
},
{
"laboratory": "MILA",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "MILA",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "MILA",
"institution": "Université de Montréal",
"location": "{}"
}
]
},
{
"name": [
"yu fisher",
"vladlen koltun"
],
"affiliation": [
{
"laboratory": "",
"institution": "Princeton University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Intel Labs",
"location": "{}"
}
]
},
{
"name": [
"saugata ghose",
"kevin hsieh",
"amirali boroumand",
"rachata ausavarungnirun",
"onur mutlu",
" core",
"access memory",
" comp",
"memory access"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"anastasia borovykh",
"sander bohte",
"cornelis w oosterlee"
],
"affiliation": [
{
"laboratory": "",
"institution": "Università di Bologna",
"location": "{'settlement': 'Bologna', 'country': 'Italy'}"
},
{
"laboratory": "",
"institution": "Centrum Wiskunde & Informatica",
"location": "{'settlement': 'Amsterdam', 'country': 'The Netherlands'}"
},
{
"laboratory": "",
"institution": "Centrum Wiskunde & Informatica",
"location": "{'settlement': 'Amsterdam', 'country': 'The Netherlands'}"
}
]
},
{
"name": [
"terry lyons",
"kelly wy- att",
"justin sharp",
"horatio boedihardjo",
"hao ni",
"danyu yang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"junyoung chung",
"sungjin ahn",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
}
]
},
{
"name": [
"tim cooijmans",
"nicolas ballas",
"césar laurent",
"çaglar gülçehre",
"aaron courville"
],
"affiliation": [
{
"laboratory": "",
"institution": "MILA -Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "MILA -Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "MILA -Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "MILA -Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "MILA -Université de Montréal",
"location": "{}"
}
]
},
{
"name": [
"yann n dauphin",
"angela fan",
"michael auli",
"david grangier"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"edouard grave",
"armand joulin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"razvan pascanu",
"tomas mikolov",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": "",
"institution": "Universite de Montreal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Brno University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Universite de Montreal",
"location": "{}"
}
]
},
{
"name": [
"nal kalchbrenner",
"lasse espeholt",
"karen simonyan",
"aäron van den oord",
"alex graves",
"koray kavukcuoglu"
],
"affiliation": [
{
"laboratory": "",
"institution": "Google Deepmind",
"location": "{'settlement': 'London', 'country': 'UK'}"
},
{
"laboratory": "",
"institution": "Google Deepmind",
"location": "{'settlement': 'London', 'country': 'UK'}"
},
{
"laboratory": "",
"institution": "Google Deepmind",
"location": "{'settlement': 'London', 'country': 'UK'}"
},
{
"laboratory": "",
"institution": "Google Deepmind",
"location": "{'settlement': 'London', 'country': 'UK'}"
},
{
"laboratory": "",
"institution": "Google Deepmind",
"location": "{'settlement': 'London', 'country': 'UK'}"
},
{
"laboratory": "",
"institution": "Google Deepmind",
"location": "{'settlement': 'London', 'country': 'UK'}"
}
]
},
{
"name": [
"jan koutník",
"klaus greff",
"faustino gomez"
],
"affiliation": [
{
"laboratory": "",
"institution": "Manno-Lugano",
"location": "{'postCode': 'CH-6928', 'country': 'Switzerland'}"
},
{
"laboratory": "",
"institution": "Manno-Lugano",
"location": "{'postCode': 'CH-6928', 'country': 'Switzerland'}"
},
{
"laboratory": "",
"institution": "Manno-Lugano",
"location": "{'postCode': 'CH-6928', 'country': 'Switzerland'}"
}
]
},
{
"name": [
"david krueger",
"roland memisevic"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Montreal Montreal",
"location": "{'postCode': 'H3T 1J4', 'region': 'QC', 'country': 'Canada'}"
},
{
"laboratory": "",
"institution": "University of Montreal Montreal",
"location": "{'postCode': 'H3T 1J4', 'region': 'QC', 'country': 'Canada'}"
}
]
},
{
"name": [
"david krueger",
"tegan maharaj",
"jános kramár",
"mohammad pezeshki",
"nicolas ballas",
"nan rosemary ke",
"anirudh goyal",
"yoshua bengio",
"aaron courville",
"christopher pal"
],
"affiliation": [
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "École Polytechnique de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "École Polytechnique de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "École Polytechnique de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "École Polytechnique de Montréal",
"location": "{}"
}
]
},
{
"name": [
"quoc v le",
"navdeep jaitly",
"geoffrey e hinton google"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"denis paperno",
"germán kruszewski",
"angeliki lazaridou",
"ngoc quan",
" pham",
"raffaella bernardi",
"sandro pezzelle",
"marco baroni",
"gemma boleda",
"raquel fernández"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "University of Amsterdam",
"location": "{}"
}
]
},
{
"name": [
"ofir press",
"lior wolf"
],
"affiliation": [
{
"laboratory": "",
"institution": "Tel-Aviv University",
"location": "{'country': 'Israel'}"
},
{
"laboratory": "",
"institution": "Tel-Aviv University",
"location": "{'country': 'Israel'}"
}
]
},
{
"name": [
"y cem",
"paris smaragdis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"saizheng zhang",
"yuhuai wu",
"tong che",
"zhouhan lin",
"roland memisevic",
"ruslan salakhutdinov",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"",
"1708.02182v1",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.518519 | 0.666667 | null | null | null | null | null | rk8wKk-R- |
||
zambrano|gating_out_sensory_noise_in_a_spikebased_long_shortterm_memory_network|ICLR_cc_2018_Conference | Gating out sensory noise in a spike-based Long Short-Term Memory network | Spiking neural networks are being investigated both as biologically plausible models of neural computation and also as a potentially more efficient type of neural network. While convolutional spiking neural networks have been demonstrated to achieve near state-of-the-art performance, only one solution has been proposed to convert gated recurrent neural networks, so far.
Recurrent neural networks in the form of networks of gating memory cells have been central in state-of-the-art solutions in problem domains that involve sequence recognition or generation. Here, we design an analog gated LSTM cell where its neurons can be substituted for efficient stochastic spiking neurons. These adaptive spiking neurons implement an adaptive form of sigma-delta coding to convert internally computed analog activation values to spike-trains. For such neurons, we approximate the effective activation function, which resembles a sigmoid. We show how analog neurons with such activation functions can be used to create an analog LSTM cell; networks of these cells can then be trained with standard backpropagation. We train these LSTM networks on a noisy and noiseless version of the original sequence prediction task from Hochreiter & Schmidhuber (1997), and also on a noisy and noiseless version of a classical working memory reinforcement learning task, the T-Maze. Substituting the analog neurons for corresponding adaptive spiking neurons, we then show that almost all resulting spiking neural network equivalents correctly compute the original tasks. | {
"name": [],
"affiliation": []
} | We demonstrate a gated recurrent asynchronous spiking neural network that corresponds to an LSTM unit. | [
"spiking neural networks",
"LSTM",
"recurrent neural networks"
] | null | 2018-02-15 22:29:35 | 21 | null | null | null | null | null | null | null | null | false | The reviewers agreed that the paper was somewhat preliminary in terms of the exposition and empirical work. They all find the underlying problem quite interesting and challenging (i.e. spiking recurrent networks). However, the manuscript failed to motivate the approach. In particular, everyone agrees that spiking networks are very interesting, but it's unclear what problem the presented work is solving. The authors need to be more clear about their motivation and then close the loop with empirical validation that their approach is solving the motivating problem (i.e. do we learn something about biological plausibility, are spiking networks better than traditional LSTMs at modeling a particular kind of data, or are they more efficiently implemented on hardware?). Motivating the work with one of these followed by convincing experiments would make this a much stronger paper.
Pros:
- Tackles an interesting and challenging problem at the intersection of neuroscience and ML
- A novel method for creating a spiking LSTM
Cons:
- The motivation is not entirely clear
- The empirical analysis is too simple and does not demonstrate the advantages of this approach
- The paper seems unfocused and could use rewriting
| {
"review_id": [
"SyWwzQceM",
"BkeiHSFxz",
"BkQ6S3QZz"
],
"review": [
{
"title": "title: A spiking implementation of LSTMs",
"paper_summary": null,
"main_review": "main_review: The authors propose a first implementation of spiking LSTMs. This is an interesting and open problem. However, the present work somewhat incomplete, and requires further experiments and clarifications.\n\nPros:\n1. To my best knowledge, this is the first mapping of LSTMs to spiking networks\n2. The authors tackle an interesting and challenging problem.\n\nCons:\n1. In the abstract the authors mention that another approach has been taken, but is never stated what’s the problem that this new one is trying to address. Also, H&S 1997 tested several tasks, which is the one that the authors are referring to?\n2. Figure 1 is not very easy to read. The authors can spell out the labels of the axis (e.g. S could be input, S)\n3. Why are output and forget gates not considered here?\n4. A major point in mapping LSTMs to spiking networks is its biological plausibility. However, the authors do not seem to explore this. Of particular interest is its relationship to a recent proposal of a cortical implementation of LSTMs (Cortical microcircuits as gated-RNNs, NIPS 2017).\n5. The text should be improved, for example in the abstract: “that almost all resulting spiking neural network equivalents correctly..”, please rephrase.\n6. Current LSTMs are applied in much more challenging problems than the original ones. It would be important to test one of this, perhaps the relatively simple pixel-by-pixel MNIST task. If this is not feasible, please comment.\n\nMinor comments:\n1. Change in the abstract “can be substituted for” > “can be substituted by”\n2. A new body of research aims at using backprop in spiking RNNs (e.g. Friedemann and Ganguli 2017). The present work gets around this by training the analog version instead. It would be of interesting to discuss how to train spiking-LSTMs as this is an important topic for future research. \n3. As the main promise of using spiking nets (instead of rate) is their potential efficiency in neuromorphic systems, it would be interesting to contrast in the text the two options for LSTMs, and give some more quantitative analyses on the gain of spiking-LSTM versus rate-LSTMs in terms of efficiency.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: An interesting idea, but it seems that the main claims of are not sufficiently well proven",
"paper_summary": null,
"main_review": "main_review: First the authors suggest an adaptive analog neuron (AAN) model which can be trained by back-propagation and then mapped to an Adaptive Spiking Neuron (ASN). Second, the authors suggest a network module called Adaptive Analog LSTM Cell (AA-LSTM) which contains input cells, input gates, constant error carousels (CEC) and output cells. Jointly with the AA-LSTM, the authors describe a spiking model (AS-LSTM) that is meant to reproduce its transfer function. It is shown quantitatively that the transfer functions of isolated AAN and AA-LSTM units are well approximated by their spiking counterparts. Two sets of experiments are reported, a sequence prediction task taken from the original LSTM paper and a T-maze task solved with reward based learning.\n\nIn general, the paper presents an interesting idea. However, it seems that the main claims of the introduction are not sufficiently well proven later. Also, I believe that the tasks are rather simple and therefore it is not demonstrated that the approach performs well on practically relevant tasks.\n\nOn general level, it should be clarified whether the model is meant to reproduce features of biology or whether the model is meant to be efficient. If the model is meant to reproduce biology, some features of the model are problematic. In particular, that the CEC is modeled with an infinitely long integration time constant of the input current. This would produce infinitely long EPSPs. However, I think there is a chance that minor changes of the model could still work while being more realistic. For example, I would find it more convincing to put the CEC into the adaptation time constants by using a large tau_gamma or tau_eta.\n\nIf the model is meant to provide efficient spiking neural networks, I find the tasks too simple and too artificial. This is particularly true in comparison to the speech recognition tasks VAD and TIMIT which were already solved in Esser et al. with spiking and efficient feedforward networks. \n\nThe authors say in the introduction that they target to model recurrent neural networks. This is an important open question. The usage of the CEC is an interesting idea toward this goal.\nHowever, beside the presence of CEC I do not see any recurrence in the used networks. This seems in contradiction with what is implicitly claimed in the introduction, title and abstract. There are only input-output neuron connections in the sequence prediction task, and a single hidden layer for the T-maze (which does not seem to be recurrently connected). This is problematic as the authors mention that their goal is to reproduce the functionality of LSTMs with spiking neurons for which the network recurrence is an important feature. \n\n\nRegarding more low-level comments:\n\n- The authors used a truncated version of RTRL to train LSTMs and standard back-propagation for single neurons. I wonder why two different algorithms were used, as, in principle, they compute the same gradient\neither forward or backward.\nIs there a reason for this? Did the truncated RTRL bring any\nadditional benefit compared to the exact backpropagation already\nimplemented in automatic differentiation software?\n\n- The sigma-delta neuron model seems quite ad-hoc and incompatible\nwith most simulators and dedicated hardware. 
I wonder whether the\nAS-LSTM model would still be valid if the ASN model is replaced with a\nstandard SRM model for instance.\n\n- The authors claim in the introduction that they made an analytical conversion from discrete to continuous time. I did not find this in the main text.\n\n- The axes in Figure 1 are not defined (what is Delta S?) and the\ncaption does not match. \"Average output signal [...] as a function of its incoming PSC I\" output signal is not defined, and S is presented in the graph, but not I.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A method for converting a trained analog LSTM-type network to a spiking version.",
"paper_summary": null,
"main_review": "main_review: Here the authors propose a variant of an analog LSTM and then further propose a mechanism by which to convert it to a spiking network, in what a computational neuroscientist would call a 'mean-field' approach. The result is a network that communicates using only spikes. In general I think that the problem of training or even creating spiking networks from analog networks is interesting and worthy of attention from the ML community. However, this manuscript feels very early and I believe needs further focus and work before it will have impact in the community. \n\nI can see three directions in which this work could be improved to provide wider interest:\n1. Neurophysiological realism - It appears the authors are not interested in this direction given the focus of the manuscript ( other than mentioning the brain as motivation).\n\n2. ML interest - From a pure ML point of view some interesting questions relate to training / computations / representations / performance. However, in the manuscript the tasks trained are exceedingly simple and unconvincing from either a representations or performance perspective. Since the main novelty of the manuscript is the 'spikification' algorithm, little is learned about how spiking networks function, or how spiking networks might represent data or implement computations. \n\n3. Hardware considerations - There is no analysis of what has been made more efficient, more sped-up, how to meaningfully implement the algorithm, etc., etc. A focus in this direction could find an applied audience.\n\nAs a minor comment, the paper could stand to be improved in terms of exposition. In particular, the paper relies on ideas from other papers and the assumption is largely made that the reader is familiar with them, although the paper is self-contained.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.4444444477558136,
0.3333333432674408
],
"confidence": [
0.5,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [],
"comment": []
} | {
"paperhash": [
"attwell|an_energy_budget_for_signaling_in_the_grey_matter_of_the_brain",
"bakker|reinforcement_learning_with_long_short-term_memory",
"bohte|efficient_spike-coding_with_multiplicative_adaptation_in_a_spike_response_model",
"cho|learning_phrase_representations_using_rnn_encoder-decoder_for_statistical_machine_translation",
"chung|empirical_evaluation_of_gated_recurrent_neural_networks_on_sequence_modeling",
"denève|efficient_codes_and_balanced_networks",
"diehl|fast-classifying,_high-accuracy_spiking_deep_networks_through_weight_and_threshold_balancing",
"gers|learning_precise_timing_with_lstm_recurrent_networks",
"greff|lstm:_a_search_space_odyssey",
"harmon|multi-player_residual_advantage_learning_with_general_function_approximation",
"hochreiter|long_short-term_memory",
"hunsberger|spiking_deep_networks_with_lif_neurons",
"neil|learning_to_be_efficient:_algorithms_for_training_low-latency,_low-compute_deep_spiking_neural_networks",
"o'connor|sigma_delta_quantized_networks",
"o'connor|real-time_classification_and_sensor_fusion_with_a_spiking_deep_belief_network",
"rombouts|neurally_plausible_reinforcement_learning_of_working_memory_tasks",
"shrestha|a_spike-based_long_short-term_memory_on_a_neurosynaptic_processor",
"yoon|lif_and_simplified_srm_neurons_encode_signals_into_spikes_via_a_form_of_asynchronous_pulse_sigma-delta_modulation",
"zambrano|fast_and_efficient_asynchronous_neural_computation_with_adapting_spiking_neural_networks",
"zambrano|efficient_computation_in_adaptive_artificial_spiking_neural_networks",
"zaremba|an_empirical_exploration_of_recurrent_network_architectures"
],
"title": [
"An energy budget for signaling in the grey matter of the brain",
"Reinforcement Learning with Long Short-Term Memory",
"Efficient Spike-Coding with Multiplicative Adaptation in a Spike Response Model",
"Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"Empirical evaluation of gated recurrent neural networks on sequence modeling",
"Efficient codes and balanced networks",
"Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing",
"Learning precise timing with lstm recurrent networks",
"Lstm: A search space odyssey",
"Multi-player residual advantage learning with general function approximation",
"Long short-term memory",
"Spiking deep networks with lif neurons",
"Learning to be efficient: Algorithms for training low-latency, low-compute deep spiking neural networks",
"Sigma delta quantized networks",
"Real-time classification and sensor fusion with a spiking deep belief network",
"Neurally plausible reinforcement learning of working memory tasks",
"A spike-based long short-term memory on a neurosynaptic processor",
"LIF and Simplified SRM Neurons Encode Signals Into Spikes via a Form of Asynchronous Pulse Sigma-Delta Modulation",
"Fast and efficient asynchronous neural computation with adapting spiking neural networks",
"Efficient computation in adaptive artificial spiking neural networks",
"An empirical exploration of recurrent network architectures"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"d attwell",
"s b laughlin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" bakker"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"s m bohte"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kyunghyun cho",
"bart van merriënboer",
"caglar gulcehre",
"dzmitry bahdanau",
"fethi bougares",
"holger schwenk",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"junyoung chung",
"caglar gulcehre",
"kyunghyun cho",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sophie denève",
"christian k machens"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"p u diehl",
"d neil",
"j binas",
"m cook",
"s-c liu",
"m pfeiffer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nicol n felix a gers",
"jürgen schraudolph",
" schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"klaus greff",
"k rupesh",
"jan srivastava",
" koutník",
"jürgen bas r steunebrink",
" schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
        }
]
},
{
"name": [
"m harmon",
"l c baird",
"iii "
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
        }
]
},
{
"name": [
"s hochreiter",
"j schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"eric hunsberger",
"chris eliasmith"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"d neil",
"m pfeiffer",
"s-c liu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"p o'connor",
"m welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"p o'connor",
"d neil",
"s-c liu",
"t delbruck",
"m pfeiffer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"j o rombouts",
"s m bohte",
"p r roelfsema"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"amar shrestha",
"khadeer ahmed",
"yanzhi wang",
"adam t david p widemann",
"brian c moody",
"qinru van essen",
" qiu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"y c yoon"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"d zambrano",
"s m bohte"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"davide zambrano",
"roeland nusselder",
"h steven scholte",
"sander bohte"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"wojciech zaremba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"",
"arXiv:1406.1078",
"1412.3555v1",
"",
"",
"",
"1503.04069v2",
"",
"",
"1510.08829v1",
"",
"1611.02024v2",
"",
"",
"",
"",
"1609.02053v1",
"1710.04838v1",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.407407 | 0.666667 | null | null | null | null | null | rk8R_JWRW |
||
loshchilov|fixing_weight_decay_regularization_in_adam|ICLR_cc_2018_Conference | Fixing Weight Decay Regularization in Adam | We note that common implementations of adaptive gradient algorithms, such as Adam, limit the potential benefit of weight decay regularization, because the weights do not decay multiplicatively (as would be expected for standard weight decay) but by an additive constant factor.
We propose a simple way to resolve this issue by decoupling weight decay and the optimization steps taken w.r.t. the loss function. We provide empirical evidence that our proposed modification (i)
decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam, and (ii) substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter).
We also demonstrate that longer optimization runs require smaller weight decay values for optimal results and introduce a normalized variant of weight decay to reduce this dependence. Finally, we propose a version of Adam with warm restarts (AdamWR) that has strong anytime performance while achieving state-of-the-art results on CIFAR-10 and ImageNet32x32.
Our source code will become available after the review process. | {
"name": [],
"affiliation": []
} | Fixing weight decay regularization in adaptive gradient methods such as Adam | [
"Adam",
"Adaptive Gradient Methods",
"weight decay",
"L2 regularization"
] | null | 2018-02-15 22:29:44 | 21 | null | null | null | null | null | null | null | null | false | This paper generated quite a bit of controversy among reviewers. The main claim of the paper is that Adam and related optimizers are broken because their "weight decay" regularization is not actually weight decay. It proposes to modify Adam to decay all weights the same regardless of the gradient variances.
Calling Adam's weight decay mechanism a mistake seems very far-fetched to me. Neural net optimization researchers are well aware of the connection between weight decay and L2 regularization and the fact that they don't correspond in preconditioned methods. L2 regularization is basically the only justification I have heard for weight decay, and despite rejecting this interpretation, the paper does not provide an alternative justification.
Decoupling the optimization from the cost function is a well-established principle. This abstraction barrier is not completely clean (e.g. gradient noise has well-known regularization effects), and the experiments of this paper perhaps provide evidence that the choices may be coupled in this case. This is an interesting finding, and probably worth following up on. However, the paper seems to sweep the "decoupling optimization and cost" issue under the carpet and take for granted that the decay rate is what should be held fixed. All three reviewers found the presentation to be misleading, and I would agree with them. While there may be an interesting contribution here, I cannot endorse the paper as-is.
| {
"review_id": [
"rJvLpvdez",
"SJs7uYYeM",
"HJX7HvOez"
],
"review": [
{
"title": "title: Review",
"paper_summary": null,
"main_review": "main_review: The paper presents an alternative way to implement weight decay in Adam. Empirical results are shown to support this idea.\n\nThe idea presented in the paper is interesting, but I have some concerns about it.\n\nFirst, the authors argue that the weight decay should be implemented in a way different from the minimization of a L2 regularization. This seems a very weird statement to me. In fact, it easy to see that what the authors propose is to minimize two different objective functions in SGDW and AdamW! I am not even sure how I should interpret what they propose. The fact is that SGD and Adam are optimization algorithms, so we cannot just change the update rule in the same way in both algorithms and expect them to behave in the same way just because the added terms have the same shape!\n\nSecond, the equation (5) that re-normalize the weight decay parameter as been obtained on one dataset, as the author admit, and tested only on another one. I am not sure this is enough to be considered as a scientific proof.\n\nAlso, the empirical experiments seem to use the cosine annealing of the learning rate. This means that the only thing the authors proved is that their proposed change yields better results when used with a particular setting of the cosine annealing. What happens in the other cases?\n\nTo summarize, I think the idea is interesting but the paper might not be ready to be presented in a scientific conference.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Important work supported with experiments",
"paper_summary": null,
"main_review": "main_review: At the heart of the paper, there is a single idea: to decouple the weight decay from the number of steps taken by the optimization process (the paragraph at the end of page 2 is the key to the paper). This is an important and largely overlooked area of implementation and most off-the-shelf optimization algorithms, unfortunately, miss this point, too. I think that the proposed implementation should be taken seriously, especially in conjunction with the discussion that has been carried out with the work of Wilson et al., 2017 (https://arxiv.org/abs/1705.08292).\n\nThe introduction does a decent job explaining why it is necessary to pay attention to the norm of the weights as the training progresses within its scope. However, I would like to add a couple more points to the discussion: \n- \"Optimal weight decay is a function (among other things) of the total number of epochs / batch passes.\" in principle, it is a function of weight updates. Clearly, it depends on the way the decay process is scheduled. However, there is a bad habit in DL where time is scaled by the number of epochs rather than the number of weight updates which sometimes lead to misleading plots (for instance, when comparing two algorithms with different batch sizes).\n- Another ICLR 2018 submission has an interesting take on the norm of the weights and the algorithm (https://openreview.net/forum?id=HkmaTz-0W¬eId=HkmaTz-0W). Figure 3 shows the histograms of SGD/ADAM with and without WD (the *un-fixed* version), and it clearly shows how the landscape appear misleadingly different when one doesn't pay attention to the weight distribution in visualizations. \n- In figure 2, it appears that the training process has three phases, an initial decay, a steady progress, and a final decay that is more pronounced in AdamW. This final decay also correlates with the better test error of the proposed method. This third part also seems to correspond to the difference between Adam and AdamW through the way they branch out after following similar curves. One wonders what causes this branching and whether the key the desired effects are observed at the bottom of the landscape.\n- The paper concludes with \"Advani & Saxe (2017) analytically showed that in the limited data regime of deep networks the presence of eigenvalues that are zero forms a frozen subspace in which no learning occurs and thus smaller (e.g., zero) initial weight norms should be used to achieve best generalization results.\" Related to this there is another ICLR 2018 submission (https://openreview.net/forum?id=rJrTwxbCb), figure 1 shows that the eigenvalues of the Hessian of the loss have zero forms at the bottom of the landscape, not at the beginning. Back to the previous point, maybe that discussion should focus on the second and third phases of the training, not the beginning. \n- Finally, it would also be interesting to discuss the relation of the behavior of the weights at the last parts of the training and its connection to pruning. \n\nI'm aware that one can easily go beyond the scope of the paper by adding more material. Therefore, it is not completely reasonable to expect all such possible discussions to take place at once. The paper as it stands is reasonably self-contained and to the point. Just a minor last point that is irrelevant to the content of the work: The slash punctuation mark that is used to indicate 'or' should be used without spaces as in 'epochs/batch'.\n\nEdit: Thanks very much for the updates and refinements. 
I stand by my original score and would like to indicate my support for this style of empirical work in scientific conferences.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Novel investigation and insight about weight decay in SGD variants",
"paper_summary": null,
"main_review": "main_review: This paper investigates weight decay issues lied in the SGD variants, especially Adam. Current implementations of adaptive gradient algorithms implicitly contain a crucial flaw, by which \bweight decay in these methods does not correspond to L2 regularization. To fix this issue, this paper proposes the decoupling method between weight decay and the gradient-based update.\n\nOverall, this paper is well-written and contain sufficient references to note the overview of recent adaptive gradient-based methods for DNN. In addition, this paper investigates the crucial issue in the recent adaptive gradient methods and find the problem in weight decay. This is an interesting finding. And the proposed method to fix this issue is simple and reasonable. Their experimental results to validate the effectiveness of their proposed method are well-organized. In particular, the investigation on hyperparameter spaces shows the strong advantage of the proposed methods.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.6666666865348816,
0.7777777910232544
],
"confidence": [
0.75,
0.75,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Re: Review",
"Re: Agreeing with AnonReviewer3 ",
"Re: Some more explanations",
"Re: Comparisons with SGD and SGDR needed",
"Re: divergence in scores",
"Re: Thanks for the reply (1/2)",
"Re: I confirm my low score",
"Re: divergence in scores",
"Rebuttal",
"Re: I agree with AnonReviewer3",
"Re: Clarification (2/2)",
"Re: ICLR 2018 Conference Acceptance Decision ",
"+ corrected sentence, + extended Figure 1",
"Re: Thanks for the reply (2/2)",
"Re: Clarification (1/2)",
"Re: + corrected sentence, + extended Figure 1",
"Re: Important work supported with experiments"
],
"comment": [
"“the equation (5) that re-normalize the weight decay parameter as been obtained on one dataset, as the author admit, and tested only on another one.”\nWhile we don’t have evidence that the sqrt scaling we propose is optimal, we believe that *some* scaling should be considered when the total number of batch passes changes (due to the change of the total number of epochs or/and batch size). It is not (computationally) straightforward to investigate the optimal scaling because it is coupled with other hyperparameters. We note, however, that our focused study on CIFAR-10 and ImageNet32x32 represents the first attempt in this direction, and that it at least demonstrates that in these two cases, sqrt scaling is much better than the previous default (no scaling). \n\n“Also, the empirical experiments seem to use the cosine annealing of the learning rate. This means that the only thing the authors proved is that their proposed change yields better results when used with a particular setting of the cosine annealing. What happens in the other cases?”\n\nWe note that we experimented with and presented a set of results/figures with different settings of cosine annealing (varying its initial learning rate). As discussed in Section 2, the separability effect provided by the proposed decoupling does not rely on cosine annealing. In response to the reviewer’s comment, we have now also included SuppFigure 5 (for the moment, unfortunately, it is not of the greatest possible resolution due to its high computational cost) which shows the results without cosine annealing. We included the following text in section 5.3.\n\n\"We investigated whether the use of much longer runs (1800 epochs) of the original Adam with L2 regularization makes the use of cosine annealing unnecessary. The results of Adam without cosine annealing (i.e., with fixed learning rate) for a 4 by 4 logarithmic grid of hyperparameter settings are given in SuppFigure 5 in the supplementary material. Even after taking into account the low resolution of the grid, the results appear to be at best comparable to the ones obtained with AdamW with 18 times less epochs and a smaller network (see SuppFigure 2). These results are not very surprising given Figure 1 (which demonstrates the effectiveness of AdamW) and SuppFigure 2 (which demonstrates the necessity to use some learning rate schedule such as cosine annealing).\"\n\n\nWe agree that the impact of weight decay on the objective function should be mentioned. We included the following text in our discussion section.\n\n\"In this paper, we argue that the popular interpretation that weight decay = L2 regularization is not precise. Instead, the difference between the two leads to the following important consequences. Two algorithms as different as SGD and Adam will exhibit different effective rates of weight decay even if the same regularization coefficient is used to include L2 regularization in the objective function. Moreover, two algorithms as different as SGDW and AdamW will optimize two effectively different objective functions even if the same weight decay factor is used. Our findings suggest that the original Adam algorithm with L2 regularization affects effective rates of weight decay in a way that precludes effective regularization, and that effective regularization is achievable by decoupling the weight decay.\"\n",
"We note that AnonReviewer3 said\n\"In fact, it easy to see that what the authors propose is to minimize two different objective functions in SGDW and AdamW!\" \nUnfortunately, the word \"effective\" (or implicit, or similar a word) was missing. Obviously the explicit loss-based objective functions present to SGDW and AdamW are not different. However, the effective objectives are different due to weight decay because if converted to an explicit form and per batch pass, they would be different. \n\nYour comment says that \n\"it is completely misleading to claim that different algorithms optimize different objective functions (unless explicitly modified which is not the case here)\"\nWhile you referred it to us, we note that you say that it is misleading to claim that AnonReviewer3 claimed, i.e., that two different algorithms (SGDW and AdamW) minimize two different objective functions given that they are not explicitly modified.\n\nFollowing the original comment of AnonReviewer3, we demonstrated our example with SGD-A and SGD-B to show that one does not have to go as far as for regularization to see that the *effective* objective function can be *interpreted to be modified*. We showed that for SGD-A and SGD-B. ",
"SGD is not invariant to rescaling of the objective function if the learning rate is not rescaled accordingly. Therefore, the use of a different learning can be viewed as optimization of a different *effective* objective function. As we showed in our original comment, this interpretation is mathematically correct. Already for SGD, the rescaling can lead the algorithm to different basins of attractions if a multimodal function is present (even for the same noise seed used). \nOf course, introducing an additive term or weight decay is likely to lead to a more drastic change of the search landscape. However, our point was to show that your comment about the two different effective objectives is not a special situation of SGDW and AdamW.",
"“For practical purposes, I would like to know whether its worth attempting to use SGDW or SGDWR rather than standard SGD.” \n\nSGDW is worth using if you consider that the search space of hyperparameters of SGDW shown in Figure 1 is easier to search that the one of SGD shown in the same figure. We consider this to be the case due to the more separable nature of that space as described in the paper. Another reason to prefer SGDW to SGD is the proposed normalized weight decay that allows you to simplify the search for the weight decay factor suitable to different computational budgets. Please compare the first two rows of SuppFigure 3: the normalized weight decay factor of 1/20 is suitable for 25 and 400 epochs, in contrast to the raw weight decay factor whose optimal value changes by a factor of about 4.\nAs you can see in Figure 1, despite the fact that it is easier to tune SGDW than SGD, the best validation errors that can be obtained by both algorithms are comparable. Therefore, we only claim that SGDW “simplifies the problem of hyperparameter tuning in SGD” and did not run SGD for Figure 3 which would match the results of SGDW (similarly to Figure 1), i.e., reproduce 2.86% of Gastaldi. However, due to a request made to us earlier, we have included an additional line for the ImageNet32x32 experiment (see Figure 3 right): results for original Adam (with cosine annealing). Similarly to the results on CIFAR-10 (see Figure 3 left), the best results of Adam (out of a set of weight decay factors) were substantially worse than the ones of AdamW. \n\n“I also note that Figure 3 suggests that Adam variants seems always inferior to comparison vanilla SGD methods, which also leads to the question of why bother \"fixing\" Adam if SGD variants are better and simpler choices?”\n\nPlease note that Figure 3 shows that the proposed “fixed” Adam drastically reduces the gap between SGD on CIFAR-10 and performs equally well (no longer inferior) on ImageNet32x32. As mentioned at the end of our introduction, our motivation was to contribute to the goal that “practitioners do not need to switch between Adam and SGD anymore, which in turn should help to reduce the common issue of selecting dataset/task-specific training algorithms and their hyperparameters”. “Fixing” Adam for the considered image classification datasets where its gap with SGD is significant might be a good indication of progress towards achieving the above-mentioned goal. \n\n“was w_t constant between restarts and set according to equation (5), and if so what w_norm was used? In this case, what value of \\alpha_t was used?” \nw_norm is the normalized weight decay hyperparameter, set to the value indicated in the plots (e.g., as 0.025). It is used to derive w_t according to equation (5). Since all inputs of equation (5) are constant between restarts in our setup, w_t is constant as well. Please note that if batch size would change during the run (e.g., increase), then w_t would change as well. \n\nalpha_t is constant and corresponds to the initial learning rate, then it is multiplied by the schedule multiplier eta_t which includes cosine annealing and restarts\n",
"There was lots of discussion with some confusion, but fortunately in the end we seemed to converge. We believe that at the crux of this all was that AnonReviewer3 takes objection to us calling AdamW an optimizer, rather than a combination of an optimizer and a regularization method. Now that we understand what the reviewer’s concern is, we fully agree that this is a fair comment. \n\nWe are very sympathetic towards AnonReviewer3’s wish to cleanly isolate the regularization from the optimization. That is of course very desirable since it makes everything easier to think about and understand. \nThe problem is that as soon as you include the regularizer as part of the objective function, adaptive gradient methods will (with a finite budget) de-emphasize the regularizer for those weights that have large gradients. Our paper exposes this problem and that this leads to poor performance of L2 reg in combination with adaptive gradient methods, while the method that coined the term ‘weight decay’ (and is equivalent to L2 reg for non-adaptive gradient methods when the weight decay factor is rescaled according to learning rate) still continues to work well when using adaptive gradients. \n\nWe are more than happy to call AdamW a combination of an optimizer and a regularization method. We hope that this clears up the confusion and that we will be given the chance to expose to the community that L2 reg != weight decay when using adaptive gradients, and to thereby clear up a misunderstanding about one of the central parts underlying most DL applications. \n\nWe thank the reviewers for the helpful feedback and for all their work!",
"We thank the reviewer for the reply just posted. In that reply, the reviewer clarifies that he/she was not satisfied with our rebuttal to their first point (and thus lowered the score from 6 to 4), so we will now reply in more detail to this point:\n\n\"First, the authors argue that the weight decay should be implemented in a way different from the minimization of a L2 regularization. This seems a very weird statement to me. I am not even sure how I should interpret what they propose.\"\n\nWe believe there might be a misunderstanding: we don't propose weight decay. Weight decay was proposed at NIPS 1988 by Hanson & Pratt as described in our Section 2. It is a well-known regularization method, and there is the misconception in the DL community that it is identical to L2 regularization, but as we show this equality does not hold for adaptive gradient algorithms. We also show that when using the original formulation of weight decay rather than L2 regularization, Adam generalizes much better. This is in line with the intuition we give on L2 regularization not penalizing weights with large gradients enough (see Equation 4 and the text below it). To us, the original formulation of weight decay is intuitive, as it simply moves weights closer to zero like most regularization methods. It is identical to L2 regularization in the standard case (no adaptive gradients), and -- as we show -- it also continues to work for adaptive gradient algorithms, where we and others have shown L2 to not lead to good generalization.\n\n\"In fact, it easy to see that what the authors propose is to minimize two different objective functions in SGDW and AdamW! The fact is that SGD and Adam are optimization algorithms, so we cannot just change the update rule in the same way in both algorithms and expect them to behave in the same way just because the added terms have the same shape!\"\n\nWe already replied to this in part in our original rebuttal, including new text in our discussion section. We just realized that the reviewer's comment might also be meant as criticizing that we use the same type of weight decay for both SGD and Adam. If that was the intended interpretation, we would like to point out that we simply use the original weight decay by Hanson and Pratt (1988) in both cases, and it works in both cases. Importantly, this is not a bakeoff Adam vs. SGD or AdamW vs. SGDW; it's about studying which regularization method (L2 reg / weight decay) continues to work for adaptive gradient methods.\n\nFurther, about the reviewer's point that SGDW and AdamW optimize different objective functions, we would like to offer the following thoughts:\nConsider the following two different versions of SGD: \ni) SGD-A: SGD with some learning rate A optimizes f(x) \nii) SGD-B: SGD with some learning rate B optimizes f(x) \nWe then note that SGD-B can be viewed as SGD-A optimizing the different objective function g(x)=f(x) * B / A.\nThus, one can interpret a change of the learning rate (leading to a different algorithm that is still an instantiation of the SGD family) as a modification of the effective objective function. Therefore, the possible interpretation of SGDW and AdamW as of two optimizers dealing with two different objectives (the reviewer's main criticism) *is not a special situation*. Instead, the same interpretation is possible even for SGD with different hyperparameter settings. 
The same interpretation is also possible when comparing the original SGD and Adam (a more complex g(x) should be considered) and other algorithms. We agree that this is an interesting perspective and nontrivial, but it should not be held against one particular approach. \n\n",
"We thank you for your criticism. Our replies to your comments are given in the forum. \n\nWe note that some of your comments criticize claims that we do not make (we mentioned them in our replies in the forum). \n\nOur disagreement might be (among other things) be due to your view that weight decay is \n\"just an L2 regularization added to the objective function\" as mentioned in your comment. \nThe core point of our paper is to show that this view is imprecise. \n\nWe try to follow Richard P. Feynman's view that\n\n\"Science is a way of trying not to fool yourself.\nThe first principle is that you must not fool yourself, and you\nare the easiest person to fool. So you have to be very careful\nabout that. After you've not fooled yourself, it's easy not to\nfool other[ scientist]s. You just have to be honest in a\nconventional way after that.\"\nand\n\"It doesn't matter how beautiful your theory is, \nit doesn't matter how smart you are. \nIf it doesn't agree with experiment, it's wrong.\"\n\nOur paper attempts to clarify the commonly accepted misinterpretation (according to your comment, \nyou share this view with \"Later, it was clear that it was just an L2 regularization added to the objective function.\") that L2 = weight decay and thus reduces the effect of \"fooling\" of the DL community. \nWe find it rewarding to clarify our point case by case in the forum but quite hard to go against \n\"severe misunderstanding present in the paper of the issues related to optimization, regularization and optimization algorithms.\"\n\nFor instance, \n\"a lot of confusion about a number of different things.\" \nwere introduced after you first mentioned \n\"In fact, it easy to see that what the authors propose is to minimize two different objective functions in SGDW and AdamW!\" \nand then \n\"Claiming that SGD and Adam optimize different objective functions is a severe misunderstanding of how these algorithms work.\"\nwhile \ni) missing the crucial term \"effective\", \nii) missing the context of our simple demonstration that the *effective objective function* can be interpreted to be different even for SGD with different learning rates (see SGD-A and SGD-B) and thus also for SGD and Adam under some nontrivial transformation. The point of this interpretation was to show that the change of the effective objective function *is not a special situation* of SGDW and AdamW as you seemed to suggested. In fact, the impact of the learning rate on the effective objective function *can be* more drastic as the effect of the weight decay factor. This is not necessarily something simple to digest but it does not make it wrong.",
"An important disagreement centers around our newly introduced text in the conclusion. \nIt starts with \"In this paper, we argue\". \nWe see that the reviewers find this text to be wrong. \nWe think that the text is correct and tried to provide our arguments in the forum.\nHowever, the feedback suggests that the text is confusing and/or wrong and therefore we \nwill be glad to remove it and perform other corrections if required.",
"We thank all reviewers for their positive evaluation and their valuable comments. We've uploaded a revision to address the issues raised and replied to reviewers and anonymous comments individually in the OpenReview forum. \nWe are glad that the reviewers agree that our work is novel, simple and might provide useful insights. We agree that some of our experimental findings need to be explored on a wider range of datasets and tasks. Nevertheless, we hope that our paper provides a useful bit of information to better understand regularization of deep neural networks. \n\nThank you again for your reviews!\n",
"Please consider our reply to Area Chair.",
"\"Claiming that SGD and Adam optimize different objective functions is a severe misunderstanding of how these algorithms work.\"\n\nPlease have a closer look at our example of SGD-A and SGD-B.\nYou previously mentioned \"SGDW and AdamW have two different objective functions.\" \nTherefore, in our previous reply, we explained and showed that even for SGD (see SGD-A and SGD-B), the change of the learning rate can be *interpreted* as a rescaling of the objective function. While this rescaling does not change the location of the minima, it does change iterates of the algorithm and thus solutions to be found by the algorithm (if the same budget is used and the algorithm is not invariant). This is mathematically valid already for SGD. Then, we mentioned that one can find a more complex transformation of g(x) so that Adam optimizing f(x) can be viewed as SGD optimizing g(x)!=f(x). Such g(x) is not trivial compared to the case of SGD-A / SGD-B, where it is just a rescaling of the objective function. Therefore our interpretation is valid. \n",
"We are disappointed about this decision, especially seeing that our average score (6.33) lies in the acceptance region.\nNevertheless, we have taken the feedback seriously and improved the paper substantially in the meantime; see https://arxiv.org/pdf/1711.05101.pdf\n\nJust for the record (e.g., for any potential future reviewers), we would like to clear up a few points:\n\n> Calling Adam's weight decay mechanism a mistake seems very far-fetched to me.\n\nWe agree that this would be far-fetched. Please note that we never said that this is a mistake in Adam; we said that the common *implementation of L2 regularization/weight decay* does not correspond to the original proposal of weight decay regularization for the case of adaptive gradient algorithms, such as Adam. This is not a criticism of Adam, but of the way weight decay regularization is implemented in common deep learning libraries.\n\n> Neural net optimization researchers are well aware of the connection between weight decay and L2 regularization and the fact that they don't correspond in preconditioned methods. \n\nThis statement does not correspond with our experience. 10/10 researchers we've asked (from several universities) did not know about this. We note that after hearing about a simple-to-prove fact it is tempting to convince oneself that one always knew this fact. If there exists a prior reference pointing out the inequivalence then we would love to hear about it; but if no such a reference exists we stand by our claims.\n\n> All three reviewers found the presentation to be misleading, and I would agree with them. \n\nWe refer to the original reviews below: two of the reviewers were very positive, with scores of 7 and 8.\n\n> L2 regularization is basically the only justification I have heard for weight decay, and despite rejecting this interpretation, the paper does not provide an alternative justification. Decoupling the optimization from the cost function is a well-established principle. This abstraction barrier is not completely clean (e.g. gradient noise has well-known regularization effects), and the experiments of this paper perhaps provide evidence that the choices may be coupled in this case. This is an interesting finding, and probably worth following up on. However, the paper seems to sweep the \"decoupling optimization and cost\" issue under the carpet and take for granted that the decay rate is what should be held fixed.\n\nWe agree that our ICLR submission did not study this in enough detail, and this is the key part of the paper we have improved in the meantime.\nSpecifically, in our new version (available at https://arxiv.org/pdf/1711.05101.pdf ), we've added a new Section 3 to formally contrast L2 regularization and weight decay, and to provide intuition about what the latter means for adaptive gradient algorithms. In particular, to relate weight decay to which objective function is being optimized, for the special case of a fixed diagonal preconditioner matrix M = [diag(\\vec{s})^{-1}], we derived an equivalence of weight decay to a new version of L2 regularization that takes the norm of weights scaled by the preconditioner. 
From the angle of which objective function is being optimized, it is thus possible to understand weight decay as follows:\n- if all gradients are typically of equal size, weight decay is the same as L2 regularization\n- if some weights typically have larger gradients they get penalized more heavily (proportionally to their typical gradients).\nThis encodes a preference in the objective function not only for solutions with small weights, but particularly with small weights for those weights that tend to have larger gradients than others. One interpretation of why this may work well is that it may drive the search towards flatter minima that generalize better than sharper minima.",
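The contrast between coupled L2 regularization and decoupled weight decay described in this reply can be made concrete with a minimal NumPy sketch of a single Adam-style update (an illustration under our own simplifications, with no learning-rate schedule or schedule multiplier; this is not the authors' reference implementation):

import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8,
              l2=0.0, weight_decay=0.0):
    """One update step; t is the 1-based iteration count."""
    # Coupled L2 regularization: the penalty gradient l2 * w is added to the
    # raw gradient and therefore passes through Adam's per-parameter
    # preconditioning below (the common library implementation of "weight decay").
    g = g + l2 * w
    m = beta1 * m + (1 - beta1) * g        # first-moment estimate
    v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
    m_hat = m / (1 - beta1 ** t)           # bias corrections
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    # Decoupled weight decay (the AdamW-style one-line change): applied
    # directly to the weights and hence not rescaled by sqrt(v_hat).
    w = w - lr * weight_decay * w
    return w, m, v

With l2 > 0 and weight_decay = 0, each weight's penalty is divided by its own sqrt(v_hat), so weights with historically large gradients are regularized less; with l2 = 0 and weight_decay > 0, all weights are decayed at the same relative rate, which is the behaviour of the original weight decay proposal.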
"Thanks for the note! \n\nEven if the shape of the hyperparameter space would drastically change outside of the current range, the claim would be correct for SGD because the already presented results alone make it impossible to first fix the learning rate LR to any value from the range and then expect that the best weight decay found for that LR value would be nearly-optimal for all other possible values of LR. However, we agree that the example given in the sentence is unfortunate because it asks the reader to extrapolate instead of dealing with the data that is shown. It is confusing and we will correct that with a better example whose results are shown in Figure 1: when LR=0.5, optimal weight decay factor is 1/8 *0.001 but it is not optimal for all other settings of LR. \n\nRegarding the values outside of the current range, it seems very unlikely that better results for LR>0.2 exist given the isolines shown in Figure 1 (note the elliptic shape and that the top results for LR=0.2 are worse than for LR=0.1) and that none of the papers with ResNets on CIFAR-10 (with standard settings of batch size, etc.) we are aware of use LR>0.2. In fact, since momentum-SGD is a standard baseline, its hyperparameters for ResNets on CIFAR-10 have been heavily tuned by researchers so that LR often lies in [0.05, 0.1] that matches the best region of momentum-SGD shown in Figure 1.\n\nThank you for helping to avoid possible confusions: we will correct the sentence and extend Figure 1 of momentum-SGD by an additional column with LR=0.4 and even larger LR if necessary.",
"Since the reviewer refers to the recent call to rigor in the DL community, we would like to directly quote from Ali Rahimi's test-of-time award talk at NIPS (https://www.youtube.com/watch?v=Qi1Yry33TQE&feature=youtu.be), since we strongly believe that our work is very aligned with the message of this talk. At time 18 minutes, 43 seconds in the link above, Ali said the following:\n\n\"Think about in the past year, the experiments you ran where you were going for a position on a leaderboard and trying out different techniques to see if they could give you an edge in performance. And now, think about, in the past year, the experiments you ran where you were chasing down an explanation for a strange phenomenon you had observed; where you were trying to find the root cause for something weird. We do a lot of the former kind of experiments; we could use a lot more of the latter. Simple experiments, simple theorems are the building blocks that let us understand more complicated systems.\"\n\nOur work is precisely an instantiation of this latter kind of work that has become so very sparse in the DL community.\nStarting from the strange phenomenon that Adam does not generalize as well as SGD (observed by us and others), we got to the bottom of a misunderstanding that is held in the entire deep learning community: L2 regularization and weight decay are NOT the same for adaptive gradient methods. Yet, they are being equated by the entire community, to the extent that weight decay has become the standard name for L2 regularization (see, e.g., the book \"Deep Learning\", http://www.deeplearningbook.org/, by Goodfellow, Bengio and Courville, section \"5.2.2 Regularization\" or section \"7.1.1 L^2 regularization\"). Adam had over 4000 citations in 2017 alone; don't you think the people using it and other adaptive gradients methods would like to know that when they write they're using weight decay, they're actually not using weight decay due to this inequality between weight decay and L2 regularization?\nAli's last sentence in the above was: \"Simple experiments, simple theorems are the building blocks that let us understand more complicated systems.\" Our work shows a simple 1-line modification of Adam to get weight decay back, and shows simple experiments that very clearly demonstrate the substantial effect of this change. It really doesn't get much simpler than this, and it fixes a common misunderstanding about a core method (L2 regularization / weight decay) the entire community uses in almost every practical system. We strongly believe that the deep learning community should learn about these results. As Ali Rahimi might say, this is one step towards fixing the misunderstandings in the current alchemy.\n(Just in case the reviewer [unlike Ali Rahimi in his talk] feels that rigor equates theorems: we could also trivially prove a theorem \"L2 regularization and weight decay are not the same for adaptive gradient methods\", simply by counterexample.) \n\nWe thank the reviewer for the feedback and hope that based on these explanations he/she agrees that our work is very aligned with the recent call for more rigor and more work that helps us understand DL (and fix misunderstandings) by means of simple observations and simple, clear experiments.",
> Weight decay">
">> Weight decay was proposed as a heuristic, without a clear understanding of what it was doing. Later, it was clear that it was just an L2 regularization added to the objective function. \n\nThis statement needs to be corrected. The original paper discusses the bias term in the corresponding section and the authors were clear about it. \nYour view that weight decay is \"just an L2 regularization added to the objective function\" is different from the view that our paper proposes. This might be at the core of our disagreement. \n\n>> Hence, what you propose are just two different regularizers, one in SGDW and one in AdamW. Nothing wrong with proposing regularizers, but confusing regularizers (that change where the minima are) and optimization algorithms is a severe problem in a paper.\n\nWe do not *propose regularizers* (the regularizer in question is *weight decay*); we implement weight decay in SGD and Adam (as SGDW and AdamW) in *its canonical way* and not as L2 regularization. \nPlease do not reinterpret our paper to correct it with a counter-argument.\n\n>> - \"one can interpret a change of the learning rate (leading to a different algorithm that is still an instantiation of the SGD family) as a modification of the effective objective function.\"\n>> Unfortunately, this is also wrong. The rescaling of the function by a constant of course does not change the location of the minima of the function.\n\nWe claim:\n\"one can interpret a change of the learning rate (leading to a different algorithm that is still an instantiation of the SGD family) as a modification of the effective objective function.\"\n\nAnonReviewer3 claims:\nThe rescaling of the function by a constant of course does not change the location of the minima of the function.\n\nNote #1: \nOur claim does not contradict what you say here, i.e., that the change of the objective function does not change the location of the minima. Thus the use of \"Unfortunately, this is also wrong\" does not seem appropriate.\n\nNote #2:\nThe rescaling of the objective function does not change the isolines of the objective function and thus does not change the location of the minima of the function. However, it is incorrect to suppose that the rescaling does not change the iterates of the optimization algorithm (here and below, we say this even if the same random seed is used) if that algorithm is not invariant to rescaling of the objective function. SGD does not possess this invariance property. Therefore, rescaling of the objective function without rescaling the learning rate *does change the iterates* and thus does affect where the algorithm will end up in the decision space after a given number of iterations. If the problem at hand is unimodal, then after a fixed number of iterations the algorithm will be at a different distance from the global optimum. If the problem at hand is multimodal, then the algorithm can potentially end up in different basins of attraction, which may have different generalization capabilities. \n\nConclusion:\nAs we showed in our example of SGD-A/SGD-B, \n\"one can interpret a change of the learning rate (leading to a different algorithm that is still an instantiation of the SGD family) as a modification of the effective objective function.\" \nis mathematically correct.\nThe reviewer's comment made it appear that we claim that the rescaling of the objective function does change the location of local minima. Nowhere in the text of the paper or in the forum here do we claim that. 
However, rescaling of the objective function affects the optimization process and it cannot be neglected if the algorithm at hand is not invariant to the rescaling transformation. Thus, it seems inappropriate to correct us with \"Unfortunately, this is also wrong.\" Please do not reinterpret our paper to correct it with a counter-argument. \n\n>> \"The same interpretation is also possible when comparing the original SGD and Adam\" is also false: any optimization algorithm (should!) guarantee convergence to a (local) minimum of the function it is optimizing. \n\nWe do not find where we say that there should be no guarantees. Please do not reinterpret our paper to beat it with a counter-argument. Regarding the possibility of the interpretation, see the second part of this reply.\n",
"Following your suggestion, we extended Figure 1 to show the results for much larger weight decay factors. The results confirmed our expectations that the original figures included the basin of optimal hyperparameter settings of the considered experimental setup. You rightly pointed out that a sentence describing Figure 1 was confusing; we have fixed the sentence to provide a more illustrative example. ",
"We agree that \"number of epochs/batch passes\" should be changed to \"number of batch passes/weight updates\" and fixed this (see Section 1). We also included the following text in Section 3:\n\n\"We note a recent relevant observation of \\cite{li2017visualizing} who demonstrated that a smaller batch size (for the same total number of epochs) leads to the shrinking effect of weight decay being more pronounced. Here, we propose to address that effect with normalized weight decay.\"\n\nFollowing the insight that you provided, we included the following text in our discussion section.\n\n\"The results shown in Figure 2 suggest that Adam and AdamW follow very similar curves most of the time until the third phase of the run where AdamW starts to branch out to outperform Adam. As pointed out by an anonymous reviewer, it would be interesting to investigate what causes this branching and whether the desired effects are observed at the bottom of the landscape. One could investigate this using the approach of \\cite{im2016empirical} to switch from Adam to AdamW at a given epoch index. Since it is quite possible that the effect of regularization is not that pronounced in the early stages of training, one could think of designing a version of Adam which exploits this by being fast in the early stages and well-regularized in the late stages of training. The latter might be achieved with a custom schedule of the weight decay factor.\"\n"
]
} | {
"paperhash": [
"chrabaszcz|a_downsampled_variant_of_imagenet_as_an_alternative_to_the_cifar_datasets",
"gastaldi|shake-shake_regularization",
"huang|densely_connected_convolutional_networks",
"keskar|on_large-batch_training_for_deep_learning:_generalization_gap_and_sharp_minima",
"li|visualizing_the_loss_landscape_of_neural_nets",
"loshchilov|sgdr:_stochastic_gradient_descent_with_warm_restarts",
"radford|unsupervised_representation_learning_with_deep_convolutional_generative_adversarial_networks",
"madhu|high-dimensional_dynamics_of_generalization_error_in_neural_networks",
"dinh|sharp_minima_can_generalize_for_deep_nets"
],
"title": [
"Deep Learning with Low Precision by Half-wave Gaussian Quantization",
"Under review as a conference paper at ICLR 2017 SQUEEZENET: ALEXNET-LEVEL ACCURACY WITH 50X FEWER PARAMETERS AND <0.5MB MODEL SIZE",
"Densely Connected Convolutional Networks",
"ON LARGE-BATCH TRAINING FOR DEEP LEARNING: GENERALIZATION GAP AND SHARP MINIMA",
"Visualizing the Loss Landscape of Neural Nets",
"SGDR: STOCHASTIC GRADIENT DESCENT WITH WARM RESTARTS",
"Under review as a conference paper at ICLR 2016 UNSUPERVISED REPRESENTATION LEARNING WITH DEEP CONVOLUTIONAL GENERATIVE ADVERSARIAL NETWORKS",
"High-dimensional dynamics of generalization error in neural networks",
"Sharp Minima Can Generalize For Deep Nets"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"zhaowei cai",
"u c san"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"forrest n iandola",
"song han",
"matthew w moskewicz",
"khalid ashraf",
"william j dally",
"kurt keutzer",
"u c berkeley"
],
"affiliation": [
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
}
]
},
{
"name": [
"gao huang",
"zhuang liu",
"laurens van der maaten",
"kilian q weinberger"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nitish shirish keskar",
"dheevatsa mudigere",
"jorge nocedal",
"mikhail smelyanskiy",
"ping tak",
"peter tang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [
"ilya loshchilov",
"frank hutter"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Freiburg",
"location": "{'settlement': 'Freiburg', 'country': 'Germany'}"
},
{
"laboratory": "",
"institution": "University of Freiburg",
"location": "{'settlement': 'Freiburg', 'country': 'Germany'}"
}
]
},
{
"name": [
"alec radford",
"luke metz",
"soumith chintala"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"madhu s advani",
"andrew m saxe",
"early stopping"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"laurent dinh",
"razvan pascanu",
"samy bengio",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"1712.09913v3",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.592593 | 0.666667 | null | null | null | null | null | rk6qdGgCZ |
||
wen|learning_intrinsic_sparse_structures_within_long_shortterm_memory|ICLR_cc_2018_Conference | 36483539 | 1709.05027 | Learning Intrinsic Sparse Structures within Long Short-Term Memory | Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs. Independently reducing the sizes of basic structures can result in inconsistent dimensions among them, and consequently, end up with invalid LSTM units. To overcome the problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS will simultaneously decrease the sizes of all basic structures by one and thereby always maintain the dimension consistency. By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves a 10.59x speedup without losing any perplexity on a language modeling task on the Penn TreeBank dataset. It is also successfully evaluated through a compact model with only 2.69M weights for machine Question Answering on the SQuAD dataset. Our approach is successfully extended to non-LSTM RNNs, such as Recurrent Highway Networks (RHNs). Our source code is available. | {
"name": [
"wei wen",
"yiran chen",
"hai li",
"yuxiong he",
"samyam rajbhandari",
"minjia zhang",
"wenhan wang",
"fang liu",
"bin hu",
"§ business"
],
"affiliation": [
{
"laboratory": "",
"institution": "Duke University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Duke University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Duke University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Duke University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Duke University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Duke University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Duke University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Duke University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Duke University",
"location": "{}"
}
]
} | null | [
"Sparsity",
"Model Compression",
"Acceleration",
"LSTMs",
"Recurrent Neural Networks",
"Structural Learning"
] | null | 2018-02-15 22:29:52 | 38 | 137 | 29 | null | null | null | null | null | null | true | The reviewers really liked this paper. This paper presents a tweak to the LSTM cell that introduces sparsity, thus reducing the number of parameters in the model.
The authors show that their sparse models match the performance of the non-sparse baselines. While the results come from vanilla implementations of standard models rather than from state-of-the-art ones, this is still of interest to the community. | {
"review_id": [
"r14eWGtez",
"SJ15MyGeG",
"B15mUMdef"
],
"review": [
{
"title": "title: Review",
"paper_summary": null,
"main_review": "main_review: The authors propose a technique to compress LSTMs in RNNs by using a group Lasso regularizer which results in structured sparsity, by eliminating individual hidden layer inputs at a particular layer. The authors present experiments on unidirectional and bidirectional LSTM models which demonstrate the effectiveness of this method. The proposed techniques are evaluated on two models: a fairly large LSTM with ~66.0M parameters, as well as a more compact LSTM with ~2.7M parameters, which can be sped up significantly through compression.\nOverall this is a clearly written paper that is easy to follow, with experiments that are well motivated. To the best of my knowledge most previous papers in the area of RNN compression focus on pruning or compression of the node outputs/connections, but do not focus as much on reducing the computation/parameters within an RNN cell. I only have a few minor comments/suggestions which are listed below:\n\n1. It is interesting that the model structure where the number of parameters is reduced to the number of ISSs chosen from the proposed procedure does not attain the same performance as when training with a larger number of nodes, with the group lasso regularizer. It would be interesting to conduct experiments for a range of \\lambda values: i.e., to allow for different degrees of compression, and then examine whether the model trained from scratch with the “optimal” structure achieves performance closer to the ISS-based strategy, for example, for smaller amounts of compression, this might be the case?\n\n2. In the experiment, the authors use a weaker dropout when training with ISS. Could the authors also report performance for the baseline model if trained with the same dropout (but without the group LASSO regularizer)?\n\n3. The colors in the figures: especially the blue vs. green contrast is really hard to see. It might be nicer to use lighter colors, which are more distinct.\n\n4. The authors mention that the thresholding operation to zero-out weights based on the hyperparameter \\tau is applied “after each iteration”. What is an iteration in this context? An epoch, a few mini-batch updates, per mini-batch? Could the authors please clarify.\n\n5. Clarification about the hyperparameter \\tau used for sparsification: Is \\tau determined purely based on the converged weight values in the model when trained without the group LASSO constraint? It would be interesting to plot a histogram of weight values in the baseline model, and perhaps also after the group LASSO regularized training.\n\n6. Is the same value of \\lambda used for all groups in the model? It would be interesting to consider the effect of using stronger sparsification in the earlier layers, for example.\n\n7. Section 4.2: Please explain what the exact match (EM) and F1 metrics used to measure performance of the BIDAF model are, in the text. \n\nMinor Typographical/Grammatical errors:\n- Sec 1: “... in LSTMs meanwhile maintains the dimension consistency.” → “... in LSTMs while maintaining the dimension consistency.”\n- Sec 1: “... is public available” → “is publically available”\n- Sec 2: Please rephrase: “After learning those structures, compact LSTM units remain original structural schematic but have the sizes reduced.”\n- Sec 4.1: “The exactly same training scheme of the baseline ...” → “The same training scheme as the baseline ...”",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Nice work on RNN compression ",
"paper_summary": null,
"main_review": "main_review: Quality: \nThe motivation and experimentation is sound.\n\nOriginality:\nThis work is a natural follow up on previous work that used group lasso for CNNs, namely learning sparse RNNs with group-lasso. Not very original, but nevertheless important.\n\nClarity:\nThe fact that the method is using a group-lasso regularization is hidden in the intro section and only fully mentioned in section 3.2 I would mention that clearly in the abstract.\n\nSignificance:\nLeaning small models is important and previous sparse RNN work (Narang, 2017) did not do it in a structured way, which may lead to slower inference step time. So this is an investigation of interest for the community.\n\nMinor comments:\n- One main claim in the paper is that group lasso is better than removing individual weights, yet not experimental evidence is provided for that.\n- The authors found that their method beats \"direct design\". This is somewhat unintuitive, yet no explanation is provided. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: the text is verbose but method is simple",
"paper_summary": null,
"main_review": "main_review: The paper spends lots of (repeated) texts on motivating and explaining ISS. But the algorithm is simple, using group lasso to find components that are can retained to preserve the performance. Thus the novelty is limited.\n\nThe experiments results are good.\n\nSec 3.1 should be made more concise. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.6666666865348816,
0.6666666865348816,
0.5555555820465088
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to reviews",
"Thanks for clarifications and RHN baseline. ",
"State-of-the-art experiments on ptb added",
"A better ptb baseline -- the RHN model -- is added",
"Response to reviews",
"Updating to the state-of-the-art",
"Response to reviews"
],
"comment": [
"Thanks for reviewing.\n\nWe have made sec 3.1 as concise as possible. We have moved some to the Appendix A. \n\nThe key novelty/contribution of the paper is to identify the structure inside RNNs (including LSTMs and RHNs) that shall be considered as a group (\"ISS\") to most effectively explore sparsity. Once the group is identified, using group lasso becomes intuitive. That is why we describe ISS, the structure of the group, in details and illustrate the intuitions and analysis behind it. We clarified this in the revision.",
"Increasing my score as this strengthens the work.",
"Hi Aaron,\n\nWe have added state-of-the-art model for ptb to show the generalizability of our approach. In limited time, we select Recurrent Highway Networks for fast evaluation since it is open source here https://github.com/julian121266/RecurrentHighwayNetworks. You may refer to Table 1 in the paper (https://arxiv.org/pdf/1607.03474.pdf) for the state-of-the-art models on ptb. \"Variational RHN + WT\" is the one we used as the baseline. \nOur results are covered in Table 2 in our paper. In a nutshell, our approach can reduce the RHN width from 830 to 517 without losing perplexity.\n\nThanks.",
"To follow up the concerns on the ptb dataset.\n\nWe have added state-of-the-art model for ptb to show the generalizability of our approach. We select Recurrent Highway Networks. Please refer to Table 1 in the paper (https://arxiv.org/pdf/1607.03474.pdf) for the state-of-the-art models on ptb. \"Variational RHN + WT\" is the one we used as the baseline. \nOur results are covered in Table 2 in our paper. In a nutshell, our approach can reduce the RHN width (the number of units per layer) from 830 to 517 without losing perplexity.",
"Thanks for reviewing.\n\n1. ISS approach can learn an “optimal” structure whose accuracy is better than the same “optimal” model but trained without using group Lasso.\nFor example, in Table 1, the first model learned by ISS approach has “optimal” structure with hidden sizes of (373, 315), and its perplexity is better than the same model (in the last row) but trained without using group Lasso regularization.\n\n2. With the same dropout (keep ratio 0.6), the baseline model overfits, and, with early stop, the best validation perplexity is 97.73 which is worse than the original 82.57.\n\n3. Changed green to yellow.\n\n4. Per mini-batch\n\n5. Yes, \\tau is determined purely based on the trained model without group LASSO regularization. No training is needed to select it.\nThanks for sharing this thought. Histogram is added in Appendix C. Instead of plotting the histogram of all weights, we plot the histogram of vector lengths of each “ISS weight groups”. We suppose it is more interesting because group Lasso essentially squeezes the length of each vector. The plot shows that the histogram is shifted to zeros by group Lasso regularization.\n\n6. Yes, to reduce the number of hyper-parameters, an identical \\lambda is used for all groups.\nWe tried to linearly scale the strength of regularization on each group by the vector length of the “ISS group weight” as used by Alvarez et al. 2016, however, it didn’t help to improve sparsity in our experiments.\n\n7. We now add the reference of the definition of EM and F1 (Rajpurkar et al. 2016) into the paper:\n“Exact match. This metric measures the percentage of predictions that match any one of the ground truth answers exactly.”\n“(Macro-averaged) F1 score. This metric measures the average overlap between the prediction and ground truth answer. We treat the prediction and ground truth as bags of tokens, and compute their F1. We take the maximum F1 over all of the ground truth answers for a given question, and then average over all of the questions.”\n\n8. Corrected. Thanks for so many useful details.\n\nPaper are revised based on the comments.",
"Thanks for accepting our paper!!! We've taken comments from the reviewers and advanced ptb models to the state-of-the-art during the rebuttal. We will manage to advance the results on SQuAD and more. Thanks for reviewing.",
"Thanks for reviewing.\n\nTo clarity: \nWe have mentioned group Lasso in the abstract. However, please note that, any structured sparsity optimization can be integrated into ISS, like group connection pruning based on the norm of the group (as used by Hao Li et. al. 2017 ).\n\nTo minor comments:\n- The speedup vs sparsity is added in Fig. 1, to quantitatively justify the gain of structured sparsity over non-structured sparsity.\n- In our context, \"direct design\" refers to using the same network architecture but with smaller hidden sizes. The comparison is in Table 1.\n- We are working on a better ptb baseline -- the RHN model (https://arxiv.org/abs/1607.03474), to solve the concerns on ptb dataset. The training takes time, but we will post our results as soon as the experiments are done. However, our results on SQuAD may reflect that the approach works in general.\n\nThanks!"
]
} | {
"paperhash": [
"rasley|hyperdrive:_exploring_hyperparameters_with_pop_scheduling",
"philipp|nonparametric_neural_networks",
"yoon|combined_group_and_exclusive_sparsity_for_deep_neural_networks",
"louizos|bayesian_compression_for_deep_learning",
"narang|exploring_sparsity_in_recurrent_neural_networks",
"wen|coordinating_filters_for_faster_deep_neural_networks",
"wu|a_compact_dnn:_approaching_googlenet-level_accuracy_of_classification_and_domain_adaptation",
"bradbury|quasi-recurrent_neural_networks",
"seo|bidirectional_attention_flow_for_machine_comprehension",
"molchanov|pruning_convolutional_neural_networks_for_resource_efficient_inference",
"zoph|neural_architecture_search_with_reinforcement_learning",
"álvarez|learning_the_number_of_neurons_in_deep_networks",
"li|pruning_filters_for_efficient_convnets",
"guo|dynamic_network_surgery_for_efficient_dnns",
"wen|learning_structured_sparsity_in_deep_neural_networks",
"park|faster_cnns_with_direct_sparse_convolutions_and_guided_pruning",
"zilly|recurrent_highway_networks",
"cortes|adanet:_adaptive_structural_learning_of_artificial_neural_networks",
"rajpurkar|squad:_100,000+_questions_for_machine_comprehension_of_text",
"lu|learning_compact_recurrent_neural_networks",
"prabhavalkar|on_the_compression_of_recurrent_neural_networks_with_an_application_to_lvcsr_acoustic_modeling_for_embedded_speech_recognition",
"he|deep_residual_learning_for_image_recognition",
"amodei|deep_speech_2_:_end-to-end_speech_recognition_in_english_and_mandarin",
"han|deep_compression:_compressing_deep_neural_network_with_pruning,_trained_quantization_and_huffman_coding",
"lebedev|fast_convnets_using_group-wise_brain_damage",
"han|learning_both_weights_and_connections_for_efficient_neural_network",
"liu|sparse_convolutional_neural_networks",
"hinton|distilling_the_knowledge_in_a_neural_network",
"szegedy|going_deeper_with_convolutions",
"zaremba|recurrent_neural_network_regularization",
"cho|learning_phrase_representations_using_rnn_encoder–decoder_for_statistical_machine_translation",
"jaderberg|speeding_up_convolutional_neural_networks_with_low_rank_expansions",
"denil|predicting_parameters_in_deep_learning",
"yuan|model_selection_and_estimation_in_regression_with_grouped_variables",
"schuster|bidirectional_recurrent_neural_networks",
"hochreiter|long_short-term_memory",
"marcus|building_a_large_annotated_corpus_of_english:_the_penn_treebank",
"|adanet:_adaptive_structural_learning_of_artificial_neural_networks"
],
"title": [
"HyperDrive: exploring hyperparameters with POP scheduling",
"Nonparametric Neural Networks",
"Combined Group and Exclusive Sparsity for Deep Neural Networks",
"Bayesian Compression for Deep Learning",
"Exploring Sparsity in Recurrent Neural Networks",
"Coordinating Filters for Faster Deep Neural Networks",
"A Compact DNN: Approaching GoogLeNet-Level Accuracy of Classification and Domain Adaptation",
"Quasi-Recurrent Neural Networks",
"Bidirectional Attention Flow for Machine Comprehension",
"Pruning Convolutional Neural Networks for Resource Efficient Inference",
"Neural Architecture Search with Reinforcement Learning",
"Learning the Number of Neurons in Deep Networks",
"Pruning Filters for Efficient ConvNets",
"Dynamic Network Surgery for Efficient DNNs",
"Learning Structured Sparsity in Deep Neural Networks",
"Faster CNNs with Direct Sparse Convolutions and Guided Pruning",
"Recurrent Highway Networks",
"AdaNet: Adaptive Structural Learning of Artificial Neural Networks",
"SQuAD: 100,000+ Questions for Machine Comprehension of Text",
"Learning compact recurrent neural networks",
"On the compression of recurrent neural networks with an application to LVCSR acoustic modeling for embedded speech recognition",
"Deep Residual Learning for Image Recognition",
"Deep Speech 2 : End-to-End Speech Recognition in English and Mandarin",
"Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding",
"Fast ConvNets Using Group-Wise Brain Damage",
"Learning both Weights and Connections for Efficient Neural Network",
"Sparse Convolutional Neural Networks",
"Distilling the Knowledge in a Neural Network",
"Going deeper with convolutions",
"Recurrent Neural Network Regularization",
"Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation",
"Speeding up Convolutional Neural Networks with Low Rank Expansions",
"Predicting Parameters in Deep Learning",
"Model selection and estimation in regression with grouped variables",
"Bidirectional recurrent neural networks",
"Long Short-Term Memory",
"Building a Large Annotated Corpus of English: The Penn Treebank",
"AdaNet: Adaptive Structural Learning of Artificial Neural Networks"
],
"abstract": [
"The quality of machine learning (ML) and deep learning (DL) models are very sensitive to many different adjustable parameters that are set before training even begins, commonly called hyperparameters. Efficient hyperparameter exploration is of great importance to practitioners in order to find high-quality models with affordable time and cost. This is however a challenging process due to a huge search space, expensive training runtime, sparsity of good configurations, and scarcity of time and resources. We develop a scheduling algorithm POP that quickly identifies among promising, opportunistic and poor configurations of hyperparameters. It infuses probabilistic model-based classification with dynamic scheduling and early termination to jointly optimize quality and cost. We also build a comprehensive hyperparameter exploration infrastructure, HyperDrive, to support existing and future scheduling algorithms for a wide range of usage scenarios across different ML/DL frameworks and learning domains. We evaluate POP and HyperDrive using complex and deep models. The results show that we speedup the training process by up to 6.7x compared with basic approaches like random/grid search and up to 2.1x compared with state-of-the-art approaches while achieving similar model quality compared with prior work.",
"Automatically determining the optimal size of a neural network for a given task without prior information currently requires an expensive global search and training many networks from scratch. In this paper, we address the problem of automatically finding a good network size during a single training cycle. We introduce *nonparametric neural networks*, a non-probabilistic framework for conducting optimization over all possible network sizes and prove its soundness when network growth is limited via an L_p penalty. We train networks under this framework by continuously adding new units while eliminating redundant units via an L_2 penalty. We employ a novel optimization algorithm, which we term *adaptive radial-angular gradient descent* or *AdaRad*, and obtain promising results.",
"The number of parameters in a deep neural network is usually very large, which helps with its learning capacity but also hinders its scalability and practicality due to memory/time inefficiency and overfitting. To resolve this issue, we propose a sparsity regularization method that exploits both positive and negative correlations among the features to enforce the network to be sparse, and at the same time remove any redundancies among the features to fully utilize the capacity of the network. Specifically, we propose to use an exclusive sparsity regularization based on (1, 2)-norm, which promotes competition for features between different weights, thus enforcing them to fit to disjoint sets of features. We further combine the exclusive sparsity with the group sparsity based on (2, 1)-norm, to promote both sharing and competition for features in training of a deep neural network. We validate our method on multiple public datasets, and the results show that our method can obtain more compact and efficient networks while also improving the performance over the base networks with full weights, as opposed to existing sparsity regularizations that often obtain efficiency at the expense of prediction accuracy.",
"Compression and computational efficiency in deep learning have become a problem of great significance. In this work, we argue that the most principled and effective way to attack this problem is by adopting a Bayesian point of view, where through sparsity inducing priors we prune large parts of the network. We introduce two novelties in this paper: 1) we use hierarchical priors to prune nodes instead of individual weights, and 2) we use the posterior uncertainties to determine the optimal fixed point precision to encode the weights. Both factors significantly contribute to achieving the state of the art in terms of compression rates, while still staying competitive with methods designed to optimize for speed or energy efficiency.",
"Recurrent Neural Networks (RNN) are widely used to solve a variety of problems and as the quantity of data and the amount of available compute have increased, so have model sizes. The number of parameters in recent state-of-the-art networks makes them hard to deploy, especially on mobile phones and embedded devices. The challenge is due to both the size of the model and the time it takes to evaluate it. In order to deploy these RNNs efficiently, we propose a technique to reduce the parameters of a network by pruning weights during the initial training of the network. At the end of training, the parameters of the network are sparse while accuracy is still close to the original dense neural network. The network size is reduced by 8x and the time required to train the model remains constant. Additionally, we can prune a larger dense network to achieve better than baseline performance while still reducing the total number of parameters significantly. Pruning RNNs reduces the size of the model and can also help achieve significant inference time speed-up using sparse matrix multiply. Benchmarks show that using our technique model size can be reduced by 90% and speed-up is around 2x to 7x.",
"Very large-scale Deep Neural Networks (DNNs) have achieved remarkable successes in a large variety of computer vision tasks. However, the high computation intensity of DNNs makes it challenging to deploy these models on resource-limited systems. Some studies used low-rank approaches that approximate the filters by low-rank basis to accelerate the testing. Those works directly decomposed the pre-trained DNNs by Low-Rank Approximations (LRA). How to train DNNs toward lower-rank space for more efficient DNNs, however, remains as an open area. To solve the issue, in this work, we propose Force Regularization, which uses attractive forces to enforce filters so as to coordinate more weight information into lower-rank space1. We mathematically and empirically verify that after applying our technique, standard LRA methods can reconstruct filters using much lower basis and thus result in faster DNNs. The effectiveness of our approach is comprehensively evaluated in ResNets, AlexNet, and GoogLeNet. In AlexNet, for example, Force Regularization gains 2× speedup on modern GPU without accuracy loss and 4:05× speedup on CPU by paying small accuracy degradation. Moreover, Force Regularization better initializes the low-rank DNNs such that the fine-tuning can converge faster toward higher accuracy. The obtained lower-rank DNNs can be further sparsified, proving that Force Regularization can be integrated with state-of-the-art sparsity-based acceleration methods.",
"Recently, DNN model compression based on network architecture design, e.g., SqueezeNet, attracted a lot attention. No accuracy drop on image classification is observed on these extremely compact networks, compared to well-known models. An emerging question, however, is whether these model compression techniques hurt DNNs learning ability other than classifying images on a single dataset. Our preliminary experiment shows that these compression methods could degrade domain adaptation (DA) ability, though the classification performance is preserved. Therefore, we propose a new compact network architecture and unsupervised DA method in this paper. The DNN is built on a new basic module Conv-M which provides more diverse feature extractors without significantly increasing parameters. The unified framework of our DA method will simultaneously learn invariance across domains, reduce divergence of feature representations, and adapt label prediction. Our DNN has 4.1M parameters, which is only 6.7% of AlexNet or 59% of GoogLeNet. Experiments show that our DNN obtains GoogLeNet-level accuracy both on classification and DA, and our DA method slightly outperforms previous competitive ones. Put all together, our DA strategy based on our DNN achieves state-of-the-art on sixteen of total eighteen DA tasks on popular Office-31 and Office-Caltech datasets.",
"Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.",
"Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typically these methods use attention to focus on a small portion of the context and summarize it with a fixed-size vector, couple attentions temporally, and/or often form a uni-directional attention. In this paper we introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage hierarchical process that represents the context at different levels of granularity and uses bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test.",
"We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation - a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102) relaying only on the first order gradient information. We also show that pruning can lead to more than 10x theoretical (5x practical) reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach.",
"Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.",
"Nowadays, the number of layers and of neurons in each layer of a deep network are typically set manually. While very deep and wide networks have proven effective in general, they come at a high memory and computation cost, thus making them impractical for constrained platforms. These networks, however, are known to have many redundant parameters, and could thus, in principle, be replaced by more compact architectures. In this paper, we introduce an approach to automatically determining the number of neurons in each layer of a deep network during learning. To this end, we propose to make use of a group sparsity regularizer on the parameters of the network, where each group is defined to act on a single neuron. Starting from an overcomplete network, we show that our approach can reduce the number of parameters by up to 80\\% while retaining or even improving the network accuracy.",
"The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks.",
"Deep learning has become a ubiquitous technology to improve machine intelligence. However, most of the existing deep models are structurally very complex, making them difficult to be deployed on the mobile platforms with limited computational power. In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning. Unlike the previous methods which accomplish this task in a greedy way, we properly incorporate connection splicing into the whole process to avoid incorrect pruning and make it as a continual network maintenance. The effectiveness of our method is proved with experiments. Without any accuracy loss, our method can efficiently compress the number of parameters in LeNet-5 and AlexNet by a factor of $\\bm{108}\\times$ and $\\bm{17.7}\\times$ respectively, proving that it outperforms the recent pruning method by considerable margins. Code and some models are available at this https URL.",
"High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNNs evaluation. Experimental results show that SSL achieves on average 5.1x and 3.1x speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improve the accuracy from 91.25% to 92.60%, which is still slightly higher than that of original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by around ~1%. Open source code is in this https URL",
"Phenomenally successful in practical inference problems, convolutional neural networks (CNN) are widely deployed in mobile devices, data centers, and even supercomputers. The number of parameters needed in CNNs, however, are often large and undesirable. Consequently, various methods have been developed to prune a CNN once it is trained. Nevertheless, the resulting CNNs offer limited benefits. While pruning the fully connected layers reduces a CNN's size considerably, it does not improve inference speed noticeably as the compute heavy parts lie in convolutions. Pruning CNNs in a way that increase inference speed often imposes specific sparsity structures, thus limiting the achievable sparsity levels. \nWe present a method to realize simultaneously size economy and speed improvement while pruning CNNs. Paramount to our success is an efficient general sparse-with-dense matrix multiplication implementation that is applicable to convolution of feature maps with kernels of arbitrary sparsity patterns. Complementing this, we developed a performance model that predicts sweet spots of sparsity levels for different layers and on different computer architectures. Together, these two allow us to demonstrate 3.1--7.3$\\times$ convolution speedups over dense convolution in AlexNet, on Intel Atom, Xeon, and Xeon Phi processors, spanning the spectrum from mobile devices to supercomputers. We also open source our project at this https URL.",
"Many sequential processing tasks require complex nonlinear transition functions from one step to the next. However, recurrent neural networks with 'deep' transition functions remain difficult to train, even when using Long Short-Term Memory (LSTM) networks. We introduce a novel theoretical analysis of recurrent networks based on Gersgorin's circle theorem that illuminates several modeling and optimization issues and improves our understanding of the LSTM cell. Based on this analysis we propose Recurrent Highway Networks, which extend the LSTM architecture to allow step-to-step transition depths larger than one. Several language modeling experiments demonstrate that the proposed architecture results in powerful and efficient models. On the Penn Treebank corpus, solely increasing the transition depth from 1 to 10 improves word-level perplexity from 90.6 to 65.4 using the same number of parameters. On the larger Wikipedia datasets for character prediction (text8 and enwik8), RHNs outperform all previous results and achieve an entropy of 1.27 bits per character.",
"We present new algorithms for adaptively learning artificial neural networks. Our algorithms (AdaNet) adaptively learn both the structure of the network and its weights. They are based on a solid theoretical analysis, including data-dependent generalization guarantees that we prove and discuss in detail. We report the results of large-scale experiments with one of our algorithms on several binary classification tasks extracted from the CIFAR-10 dataset. The results demonstrate that our algorithm can automatically learn network structures with very competitive performance accuracies when compared with those achieved for neural networks found by standard approaches.",
"We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. \nThe dataset is freely available at this https URL",
"Recurrent neural networks (RNNs), including long short-term memory (LSTM) RNNs, have produced state-of-the-art results on a variety of speech recognition tasks. However, these models are often too large in size for deployment on mobile devices with memory and latency constraints. In this work, we study mechanisms for learning compact RNNs and LSTMs via low-rank factorizations and parameter sharing schemes. Our goal is to investigate redundancies in recurrent architectures where compression can be admitted without losing performance. A hybrid strategy of using structured matrices in the bottom layers and shared low-rank factors on the top layers is found to be particularly effective, reducing the parameters of a standard LSTM by 75%, at a small cost of 0.3% increase in WER, on a 2,000-hr English Voice Search task.",
"We study the problem of compressing recurrent neural networks (RNNs). In particular, we focus on the compression of RNN acoustic models, which are motivated by the goal of building compact and accurate speech recognition systems which can be run efficiently on mobile devices. In this work, we present a technique for general recurrent model compression that jointly compresses both recurrent and non-recurrent inter-layer weight matrices. We find that the proposed technique allows us to reduce the size of our Long Short-Term Memory (LSTM) acoustic model to a third of its original size with negligible loss in accuracy.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech-two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end learning allows us to handle a diverse variety of speech including noisy environments, accents and different languages. Key to our approach is our application of HPC techniques, enabling experiments that previously took weeks to now run in days. This allows us to iterate more quickly to identify superior architectures and algorithms. As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets. Finally, using a technique called Batch Dispatch with GPUs in the data center, we show that our system can be inexpensively deployed in an online setting, delivering low latency when serving users at scale.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.",
"We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speedup convolutional layers in ConvNets. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion. After such pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to speedup. We investigate different ways to add group-wise prunning to the learning process, and show that severalfold speedups of convolutional layers can be attained using group-sparsity regularizers. Our approach can adjust the shapes of the receptive fields in the convolutional layers, and even prune excessive feature maps from ConvNets, all in data-driven way.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.",
"Deep neural networks have achieved remarkable performance in both image classification and object detection problems, at the cost of a large number of parameters and computational complexity. In this work, we show how to reduce the redundancy in these parameters using a sparse decomposition. Maximum sparsity is obtained by exploiting both inter-channel and intra-channel redundancy, with a fine-tuning step that minimize the recognition loss caused by maximizing sparsity. This procedure zeros out more than 90% of parameters, with a drop of accuracy that is less than 1% on the ILSVRC2012 dataset. We also propose an efficient sparse matrix multiplication algorithm on CPU for Sparse Convolutional Neural Networks (SCNN) models. Our CPU implementation demonstrates much higher efficiency than the off-the-shelf sparse matrix libraries, with a significant speedup realized over the original dense network. In addition, we apply the SCNN model to the object detection problem, in conjunction with a cascade model and sparse fully connected layers, to achieve significant speedups.",
"A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.",
"We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation.",
"In this paper, we propose a novel neural network model called RNN Encoder‐ Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixedlength vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder‐Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.",
"The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5× speedup with no loss in accuracy, and 4.5× speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.",
"We demonstrate that there is significant redundancy in the parameterization of several deep learning models. Given only a few weight values for each feature it is possible to accurately predict the remaining values. Moreover, we show that not only can the parameter values be predicted, but many of them need not be learned at all. We train several different architectures by learning only a small number of weights and predicting the rest. In the best case we are able to predict more than 95% of the weights of a network without any drop in accuracy.",
"Summary. We consider the problem of selecting grouped variables (factors) for accurate prediction in regression. Such a problem arises naturally in many practical situations with the multifactor analysis‐of‐variance problem as the most important and well‐known example. Instead of selecting factors by stepwise backward elimination, we focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non‐negative garrotte for factor selection. The lasso, the LARS algorithm and the non‐negative garrotte are recently proposed regression methods that can be used to select individual variables. We study and propose efficient algorithms for the extensions of these methods for factor selection and show that these extensions give superior performance to the traditional stepwise backward elimination method in factor selection problems. We study the similarities and the differences between these methods. Simulations and real examples are used to illustrate the methods.",
"In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported.",
"Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O. 1. Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.",
"Abstract : As a result of this grant, the researchers have now published oil CDROM a corpus of over 4 million words of running text annotated with part-of- speech (POS) tags, with over 3 million words of that material assigned skeletal grammatical structure. This material now includes a fully hand-parsed version of the classic Brown corpus. About one half of the papers at the ACL Workshop on Using Large Text Corpora this past summer were based on the materials generated by this grant.",
"There have been several major lines of research on the theoretical understanding of neural networks. The first one deals with understanding the properties of the objective function used when training neural networks (Choromanska et al., 2014; Sagun et al., 2014; Zhang et al., 2015; Livni et al., 2014; Kawaguchi, 2016). The second involves studying the black-box optimization algorithms that are often used for training these networks (Hardt et al., 2015; Lian et al., 2015). The third analyzes the statistical and generalization properties of neural networks (Bartlett, 1998; Zhang et al., 2016; Neyshabur et al., 2015; Sun et al., 2016). The fourth adopts a generative point of view assuming that the data actually comes from a particular network, which it shows how to recover (Arora et al., 2014; 2015). The fifth investigates the expressive ability of neural networks, analyzing what types of mappings they can learn (Cohen et al., 2015; Eldan & Shamir, 2015; Telgarsky, 2016; Daniely et al., 2016). This paper is most closely related to the work on statistical and generalization properties of neural networks. However, instead of analyzing the problem of learning with a fixed architecture, we study a more general task of learning both architecture and model parameters simultaneously. On the other hand, the insights that we gain by studying this more general setting can also be directly applied to the setting with a fixed architecture."
],
"authors": [
{
"name": [
"Jeff Rasley",
"Yuxiong He",
"Feng Yan",
"Olatunji Ruwase",
"Rodrigo Fonseca"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"George Philipp",
"J. Carbonell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jaehong Yoon",
"Sung Ju Hwang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Christos Louizos",
"Karen Ullrich",
"M. Welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sharan Narang",
"G. Diamos",
"Shubho Sengupta",
"Erich Elsen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"W. Wen",
"Cong Xu",
"Chunpeng Wu",
"Yandan Wang",
"Yiran Chen",
"Hai Helen Li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Chunpeng Wu",
"W. Wen",
"Tariq Afzal",
"Yongmei Zhang",
"Yiran Chen",
"Hai Helen Li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"James Bradbury",
"Stephen Merity",
"Caiming Xiong",
"R. Socher"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Minjoon Seo",
"Aniruddha Kembhavi",
"Ali Farhadi",
"Hannaneh Hajishirzi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Pavlo Molchanov",
"Stephen Tyree",
"Tero Karras",
"Timo Aila",
"J. Kautz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Barret Zoph",
"Quoc V. Le"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Álvarez",
"M. Salzmann"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Hao Li",
"Asim Kadav",
"Igor Durdanovic",
"H. Samet",
"H. Graf"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yiwen Guo",
"Anbang Yao",
"Yurong Chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"W. Wen",
"Chunpeng Wu",
"Yandan Wang",
"Yiran Chen",
"Hai Helen Li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jongsoo Park",
"Sheng R. Li",
"W. Wen",
"P. T. P. Tang",
"Hai Helen Li",
"Yiran Chen",
"P. Dubey"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Zilly",
"R. Srivastava",
"J. Koutník",
"J. Schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Corinna Cortes",
"X. Gonzalvo",
"Vitaly Kuznetsov",
"M. Mohri",
"Scott Yang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Pranav Rajpurkar",
"Jian Zhang",
"Konstantin Lopyrev",
"Percy Liang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Zhiyun Lu",
"Vikas Sindhwani",
"Tara N. Sainath"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Rohit Prabhavalkar",
"O. Alsharif",
"A. Bruguier",
"Ian McGraw"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kaiming He",
"X. Zhang",
"Shaoqing Ren",
"Jian Sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Dario Amodei",
"S. Ananthanarayanan",
"Rishita Anubhai",
"Jin Bai",
"Eric Battenberg",
"Carl Case",
"J. Casper",
"Bryan Catanzaro",
"Jingdong Chen",
"Mike Chrzanowski",
"Adam Coates",
"G. Diamos",
"Erich Elsen",
"Jesse Engel",
"Linxi (Jim) Fan",
"Christopher Fougner",
"Awni Y. Hannun",
"Billy Jun",
"T. Han",
"P. LeGresley",
"Xiangang Li",
"Libby Lin",
"Sharan Narang",
"A. Ng",
"Sherjil Ozair",
"R. Prenger",
"Sheng Qian",
"Jonathan Raiman",
"S. Satheesh",
"David Seetapun",
"Shubho Sengupta",
"Anuroop Sriram",
"Chong-Jun Wang",
"Yi Wang",
"Zhiqian Wang",
"Bo Xiao",
"Yan Xie",
"Dani Yogatama",
"J. Zhan",
"Zhenyao Zhu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Song Han",
"Huizi Mao",
"W. Dally"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"V. Lebedev",
"V. Lempitsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Song Han",
"Jeff Pool",
"J. Tran",
"W. Dally"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Baoyuan Liu",
"Min Wang",
"H. Foroosh",
"M. Tappen",
"M. Pensky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Geoffrey E. Hinton",
"O. Vinyals",
"J. Dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Christian Szegedy",
"Wei Liu",
"Yangqing Jia",
"P. Sermanet",
"Scott E. Reed",
"Dragomir Anguelov",
"D. Erhan",
"Vincent Vanhoucke",
"Andrew Rabinovich"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Wojciech Zaremba",
"I. Sutskever",
"O. Vinyals"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kyunghyun Cho",
"B. V. Merrienboer",
"Çaglar Gülçehre",
"Dzmitry Bahdanau",
"Fethi Bougares",
"Holger Schwenk",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Max Jaderberg",
"A. Vedaldi",
"Andrew Zisserman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Misha Denil",
"B. Shakibi",
"Laurent Dinh",
"Marc'Aurelio Ranzato",
"Nando de Freitas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Yuan",
"Yi Lin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Schuster",
"K. Paliwal"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sepp Hochreiter",
"J. Schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Mitchell P. Marcus",
"Beatrice Santorini",
"Mary Ann Marcinkiewicz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
}
],
"arxiv_id": [
null,
"1712.05440",
null,
"1705.08665",
"1704.05119",
"1703.09746",
"1703.04071",
"1611.01576",
"1611.01603",
null,
"1611.01578",
"1611.06321",
"1608.08710",
"1608.04493",
"1608.03665",
"1608.01409",
"1607.03474",
"1607.01097",
"1606.05250",
"1604.02594",
"1603.08042",
"1512.03385",
"1512.02595",
"1510.00149",
"1506.02515",
"1506.02626",
null,
"1503.02531",
"1409.4842",
"1409.2329",
"1406.1078",
"1405.3866",
"1306.0543",
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"5064168",
"631115",
"33731452",
"9328854",
"10135357",
"6863796",
"7506173",
"51559",
"8535316",
"17240902",
"12713052",
"10356927",
"14089312",
"744803",
"2056019",
"23294944",
"1101453",
"11206737",
"11816014",
"1468875",
"7946440",
"206594692",
"11590585",
"2134321",
"7204133",
"2238772",
"1617104",
"7200347",
"206592484",
"17719760",
"5590763",
"17864746",
"1639981",
"6162124",
"18375389",
"1915014",
"252796",
"265038919"
],
"intents": [
[
"methodology"
],
[
"methodology",
"background"
],
[
"methodology"
],
[
"background"
],
[
"methodology",
"background"
],
[
"background"
],
[],
[],
[
"methodology",
"background"
],
[
"background"
],
[
"background"
],
[
"methodology",
"background"
],
[
"methodology"
],
[],
[
"methodology",
"background"
],
[],
[
"background"
],
[
"methodology",
"background"
],
[
"methodology",
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"methodology"
],
[],
[
"background"
],
[],
[
"background"
],
[
"methodology",
"result"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"methodology"
],
[
"methodology",
"background"
]
],
"isInfluential": [
false,
false,
false,
false,
true,
false,
false,
false,
true,
false,
false,
false,
true,
false,
true,
false,
true,
true,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
true,
false,
false,
false,
true,
false
]
} | null | 84 | 1.630952 | 0.62963 | 0.75 | null | null | null | null | null | rk6cfpRjZ |
cubuk|intriguing_properties_of_adversarial_examples|ICLR_cc_2018_Conference | 1711.02846v1 | Intriguing Properties of Adversarial Examples | It is becoming increasingly clear that many machine learning classifiers are vulnerable to adversarial examples. In attempting to explain the origin of adversarial examples, previous studies have typically focused on the fact that neural networks operate on high dimensional data, they overfit, or they are too linear. Here we show that distributions of logit differences have a universal functional form. This functional form is independent of architecture, dataset, and training protocol; nor does it change during training. This leads to adversarial error having a universal scaling, as a power-law, with respect to the size of the adversarial perturbation. We show that this universality holds for a broad range of datasets (MNIST, CIFAR10, ImageNet, and random data), models (including state-of-the-art deep networks, linear models, adversarially trained networks, and networks trained on randomly shuffled labels), and attacks (FGSM, step l.l., PGD). Motivated by these results, we study the effects of reducing prediction entropy on adversarial robustness. Finally, we study the effect of network architectures on adversarial sensitivity. To do this, we use neural architecture search with reinforcement learning to find adversarially robust architectures on CIFAR10. Our resulting architecture is more robust to white \emph{and} black box attacks compared to previous attempts.
| {
"name": [],
"affiliation": []
} | null | [
"Mathematics",
"Computer Science"
] | International Conference on Learning Representations | 2017-11-08 | 27 | null | null | null | null | null | null | null | null | false | I am somewhat of two minds about the paper. The authors show empirically that adversarial perturbation error follows a power law and look for a possible explanation. The tie-in with generalization is not clear to me and makes me wonder how to evaluate the significance of the finding of the power-law distribution. On the other hand, the authors present an interesting analysis, show that the finding holds in all the cases they explored, and also find that architecture search can be used to find neural networks that are more resilient to adversarial attacks (the last shouldn't be surprising if that was indeed the training criterion).
All in all, I think that while the paper needs a further iteration prior to publication, it already contains interesting bits that could spur very interesting discussion at the Workshop.
(Side note: There's a reference missing on page 4, first paragraph) | {
"review_id": [
"Bkm7cMvgf",
"B1yz5XhgM",
"B17JC8dlf"
],
"review": [
{
"title": "title: Their explanation seems to be done by non-strict argument, and their proposed methods do not seem related to their discovery so much.",
"paper_summary": null,
"main_review": "main_review: This paper insists that adversarial error for small adversarial perturbation follows power low as a function of the perturbation size, and explains the cause by the logit-difference distributions using mean-field theory.\nThen, the authors propose two methods for improving adversarial robustness (entropy regularization and NAS with reinforcement learning).\n\n[strong points]\n* Based on experimental results over a broad range of datasets, deep network models and their attacks.\n* Discovery of the fact that adversarial error follows a power low as a function of the perturbation size epsilon for small epsilon.\n* They found entropy regularization improves adversarial robustness.\n* Their neural architecture search (NAS) with reinforcement learning found robust deep networks.\n\n[weak points]\n* Unclear derivation of Eq. (9). (What expansion is used in Eq. (21)?)\n* Non-strict argument using mean-field theory.\n* Unclear connection between their discovered universality and their proposals (entropy regularization and NAS with reinforcement learning).",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting experiments, wrong conclusions.",
"paper_summary": null,
"main_review": "main_review: This work presents an empirical study aiming at improving the understanding of the vulnerability of neural networks to adversarial examples. Paraphrasing the authors, the main observation of the study is that the vulnerability is due to an inherent uncertainty that neural networks have about their predictions ( the difference between the logits). This is consistent across architectures, datasets. Further, the authors note that \"the universality is not a result of the specific content of these datasets nor the ability of the model to generalize.\"\n\nWhile this empirical study contains valuable information, its above conclusions are factually wrong. It can be theoretically proven at least using two routes. They are also in contradiction with other empirical observations consistent across several previous studies. \n\n1-Constructive counter-argument: Consider a neural network that always outputs a constant prediction. It (1) is by definition independent of any dataset (2) generalizes perfectly (3) has zero adversarial error, hence contradicting the central statement of the paper. \n\n2- Analysis-based counter-argument: Consider a neural network with one hidden layer and two classes. It is easy to show that the difference between the scores (logits) of the two classes is linear in the operator norm of the hidden weight matrix and linear in the L2-norm of the last weight vector. Therefore, the robustness of the model indeed depends on its capability to generalize because the latter is essentially governed by the geometric margin of the linear separator and the spectral norm of the weight matrix (see [1,2,3]). QED.\n\n3- Further, the lack of calibration of neural networks and its causes are well known. Among other things, it is due to the use of building blocks (such as batch-norm [4]), regularization (e.g., weight decay) or the use of softmax+cross-entropy during training. While this is convenient for optimization reasons, it indeed hurts the calibration. The authors should try to train a neural network with a large margin criteria and see if the same phenomenon still holds when they measure the geometric margin. Another alternative is to use a temperature with the softmax[4]. Therefore, the observations of the empirical study cannot be generalized to neural networks and should be explicitly restricted to neural networks using softmax with cross-entropy as criteria. \n\nI believe the conclusions of this study are misleading, hence I recommend to reject the paper. \n\n\n[1] Spectrally Normalized Margin-bounds Margin bounds for neural networks (Bartlett et al., 2017)\n[2] Parseval Networks: Improving Robustness to Adversarial Examples (Cisse et al., 2017) \n[3] Formal Guarantees on the Robustness of a classifier against adversarial examples (Hein et al., 2017)\n[4] On the Calibration of Modern Neural Networks (Guo et al., 2017)",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Review",
"paper_summary": null,
"main_review": "main_review: Very intriguing paper and results to say the least. I like the way it is written, and the neat interpretations that the authors give of what is going on (instead of assuming that readers will see the same). There is a well presented story of experiments to follow which gives us insight into the problem. \n\nInteresting insight into defensive distillation and the effects of uncertainty in neural networks.\n\nQuality/Clarity: well written and was easy for me to read\nOriginality: Brings both new ideas and unexpected experimental results.\nSignificance: Creates more questions than it answers, which imo is a positive as this topic definitely deserves more research.\n\nRemarks:\n- Maybe re-render Figure 3 at a higher resolution?\n- The caption of Figure 5 doesn't match the labels in the figure's legend, and also has a weird wording, making it unclear what (a) and (b) refer to.\n- In section 4 you say you test your models with FGSM accuracy, but in Figure 7 you report stepll and PGD accuracy, could you also plot the same curves for FGSM?\n- In Figure 4, I'm not sure I understand the right-tail of the distributions. Does it mean that when Delta_ij is very large, epsilon can be very small and still cause an adversarial pertubation? If so does it mean that overconfidence in the extreme is also bad?\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.2222222238779068,
0.7777777910232544
],
"confidence": [
0.25,
0.75,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Typos corrected and figures added",
"continuation",
"Adversarial training did include fractional epsilon, and unit L2-norm attacks lead to larger pixel changes. ",
"We clarified the expansion in Eq. 21, and would be happy to address any other step if needed.",
"summary of our previous rebuttals",
"Experiments suggested by reviewer corroborate our original conclusions, we reworded ambiguous sentences in the abstract "
],
"comment": [
"We thank the reviewer for a careful reading of the manuscript and the helpful suggestions. \nWe are delighted that the reviewer thinks this topic deserves more research; we certainly agree! \n\nWe implemented the suggestions by the reviewer, as detailed below:\n- Re-rendered Fig. 3 at a higher resolution. We noticed that Fig. 3 may look pixelated on certain web browsers, but rendered correctly on all pdf viewers we have tried. \n- Corrected the typos and clarified the caption of Fig. 5. We appreciate the reviewer noticing this.\n- Experiment 1 networks were trained with stepll and Experiment 2 networks were trained with PGD. However, we did use FGSM accuracy on the validation set to choose the architectures. For this reason, we followed the reviewer's suggestion and plotted the same curves for FGSM attack in Figure 15. \n- Fig. 4 presents histograms where both axes are shown in log scale. The right-tail of the distributions signify that there are not many samples with as large \\Delta_{ij} values. \n\n",
"Responses to specific points: \n1- This thought experiment actually agrees with our paper: a neural network that always outputs a constant has no uncertainty about its predictions, and thus has zero adversarial error. Furthermore, our theory assumed uncorrelated logits, but we empirically show that the power-law tails are robust to the the amount of correlation present in the logits of the commonly used neural networks. In the thought experiment suggested by the reviewer, the logits are maximally correlated. It is for this reason that our theory may not apply. Finally, we note that in this example the input-logit Jacobian is zero. In this case, our mathematical framework correctly predicts that \\hat\\epsilon\\to\\infty and so no amount of adversarial perturbation will change the predicted class. \n\n2- As mentioned above, we agree that models that generalize better have higher adversarial robustness. As can be seen in Fig. 1a and 1b, the models with best generalization (NASNet, Inception-ResNet v2, Inception v4) are also adversarially most robust, especially for small epsilon values. This is why we performed Neural Architecture Search to find adversarially robust architectures: although the qualitative form of the adversarial error is a power-law with similar exponents, the quantitative robustness can be improved via adversarial training, architecture engineering, and regularization. We have used all three of these techniques to increase the adversarial robustness in our study. \n\n3- The lack of calibration of neural networks and its causes may be well known, but our contribution is to point out that the functional form of the logit differences is universal across datasets and models, and unchanged after training. \n\nWe added two new figures to the appendix, Fig. 12 and Fig. 13, which show that the reported universality is not restricted to neural networks using softmax with cross-entropy as loss. In Fig. 12, we trained a fully-connected network on MNIST with hinge-loss (as suggested by the reviewer). We attacked this network both by differentiating the hinge-loss and a cross-entropy loss (attacks with cross-entropy loss are more successful, as also observed in the submission “Certified Defenses against Adversarial Examples”). We show that both of these attacks lead to the same universal behavior, both for adversarial error and for logit differences. We repeat the same experiment using an L2-norm loss for training, and reach the same results as the experiments in the original submission. \n\nIn short, our original submission is in agreement with the reviewer’s perspective; and newly performed experiments as suggested by the reviewer obey the universality that is presented by our paper. ",
"Thanks for the positive comment and the interesting question. Kurakin et al. did use non-integer values of epsilon during training. As mentioned in their paper, epsilon was sampled from a truncated normal defined in [0,16]. \n\nRegarding test-time: as we show in Fig 2c for MNIST, attacks with unit L2 norm have the same power-law form and exponent as FGSM, but allow for much larger change in each pixel value. For ImageNet, unit L2 norm attack has the same power-law form and exponent up to an epsilon of 70; this means that one pixel could change by as large as 70 due to adversarial distortion and still be in the power-law region. We will include an additional plot about this in the next version of our submission. ",
"We thank the reviewer for a careful reading of the manuscript and the helpful feedback. We are glad that the reviewer found several strong points about our paper. Below we respond point by point to the criticism:\n\n1) In Eq. (21) we expand both F(r+\\Delta_{1j}) and P(r+\\Delta_{1j}) to lowest order in \\Delta_{1j}, using regular Taylor expansion. We have added another step to the derivation to clarify. \n\n2) While mean field theory is an approximate framework, it has a long history of effective use across a wide range of fields studying complex behavior including machine learning. For example, there are papers that approach neural networks from a mean field perspective dating back to at least 1989 [1]. Here, at each step of our calculation we evaluate the validity of the mean field approximation in Fig. 3 and Fig. 4. If there is a specific point in the approximation that the reviewer objects to, we would be happy to address it further.\n\n3) Our proposed entropy regularization is directly related to the finding that logit difference distribution is universal. Since the adversarial error has a universal form due to the universal behavior of logit difference distribution, we tried to increase the logit differences to make our models more robust. As we show in Fig. 6, the entropy regularizer does increase the logit differences, as expected. Due to the increased logit differences, models that were trained with and without adversarial training are more robust to adversarial examples, as shown in Fig. 5 (MNIST) and Fig. 11 (CIFAR10). \n\nAs mentioned in the paper, although the functional form of the adversarial error is universal, better models are quantitatively more robust to adversarial examples (e.g. Figs 1a and 1b). Given this, we wanted to study whether architecture can be engineered to improve adversarial accuracy. As mentioned in our submission, recent papers have found that larger models are more robust, but left unanswered whether models that generalize better are less susceptible to adversarial examples [2,3]. Using NAS, we show that models that generalize better are more robust, however model size does not seem to correlate strongly with adversarial sensitivity. Our findings together present a unified analysis of a model’s sensitivity to adversarial examples: commonalities among datasets, cause of the commonalities, and dependence on architecture. \n\n[1] Peterson, Carsten. \"A mean field theory learning algorithm for neural networks.\" Complex systems 1 (1987): 995-1019.\n[2] Kurakin, Alexey, Ian Goodfellow, and Samy Bengio. \"Adversarial machine learning at scale.\" arXiv preprint arXiv:1611.01236 (2016).\n[3] Madry, Aleksander, et al. \"Towards deep learning models resistant to adversarial attacks.\" arXiv preprint arXiv:1706.06083 (2017).",
"We thank the reviewers for their reviews. We would like to summarize our responses to individual reviewers. Our work shows two fundamental (and surprising) commonalities across datasets and models: logit differences and adversarial error have the same functional form across all models tested on MNIST, CIFAR10, and ImageNet. We show that these commonalities even hold for random data, and we theoretically derive the origin and its consequences under a mean-field approximation. Based on our observations we propose a counter-intuitive regularization term, entropy penalty, to reduce adversarial sensitivity. Since our results imply that better models are more robust, we use neural architecture search (NAS) to find a model that is adversarially more robust than previously available models. We can move the part on NAS to appendix if the reviewers see fit. In summary, our paper makes important contributions on three fronts: empirical findings, theoretical explanations of these findings, and practical results on adversarial robustness.\n\nMain criticism by AnonReviewer3 is based on a miscommunication. As explained below, our results agree with this reviewer: models that generalize better tend to be more robust. Furthermore, we implemented the experiments proposed by this reviewer, including a thought experiment. We show that the results of all of these experiments support our conclusions. Thanks to the suggestions by this reviewer, we have changed the wording to make our point more clear.\n\nAnonReviewer1 is concerned with the mean field approximation we employed, which we disagree with. Mean-field approximation has been used for more than a century to model complex systems, and its strengths as well as shortcomings are well understood[1,2,3,4,5]. We evaluated the validity of our approximations at every step. We are happy to discuss any particular step of the derivation, however we believe that the reviewer’s general criticism of mean-field approximation is not specific enough to warrant the rejection of the paper or for us to address the concern. The step of the derivation that the reviewer found unclear (Eq. 21) was just a Taylor expansion to smallest order, which we clarified in our revision.\n\nOverall, the reviewer reports do not have concrete disagreements with our results, and the reviewers found our experiments to be interesting over a broad range of datasets, models, and attacks. We have supported our arguments with concrete empirical evidence. In light of these, we hope that the AnonReviewer1 and AnonReviewer3 reconsider their scores. \n\n[1] Weiss, Pierre. \"L'hypothèse du champ moléculaire et la propriété ferromagnétique.\" J. phys. theor. appl. 6.1 (1907): 661-690.\n[2] Peterson, Carsten. \"A mean field theory learning algorithm for neural networks.\" Complex systems 1 (1987): 995-1019.\n[3] Kardar, Mehran. Statistical physics of fields. Cambridge University Press, 2007.\n[4] Poole, Ben, et al. \"Exponential expressivity in deep neural networks through transient chaos.\" Advances in neural information processing systems. 2016.\n[5] Schoenholz, Samuel S., et al. \"Deep Information Propagation.\" ICLR. 2017.\n",
"We would first like to thank the reviewer for their careful reading of our manuscript and thoughtful comments. We are glad that the reviewer believes our study contains valuable information and interesting experiments. Meanwhile, we would like to address the concerns raised.\n\nSummary: We are confident that our results and conclusions are not at odds with the perspective of the referee. We believe the main issue stems from some ambiguous language in the original text that we have now corrected. We have also implemented the additional experiments proposed by the referee and have found them to corroborate our original conclusions. \n\nDetails: We believe that there is some confusion regarding what was meant by our statement “the universality is not a result of the specific content of these datasets nor the ability of the model to generalize.\" We are not proposing that the susceptibility of a neural network to adversarial examples is independent of its ability to generalize. Instead, we are saying that the functional form of the adversarial error as a function of epsilon does not depend on generalization (i.e. that it should scale like A * \\epsilon regardless of the network’s ability to generalize, as shown by our experiments on randomly sampled logits and MNIST with randomly-shuffled labels). In fact, we agree with the referee that the constant, A, will depend on the spectral norm of the Jacobian (and hence the readout weight matrix) and on the network’s ability to generalize. We copy some excerpts from the original submission to corroborate this below. However, the reviewer’s concerns allowed us to realize that our original phrasing was ambiguous. We have therefore reworded our conclusions to be clearer by replacing the problematic statement with: “Here we show that distributions of logit differences have a universal functional form. This functional form is independent of architecture, dataset, and training protocol; nor does it change during training.” We have also removed the sentence “Here we argue that the origin of adversarial examples is primarily due to an inherent uncertainty that neural networks have about their predictions.”\n\nExcerpts from the original text showing agreement with the referee:\n\n“We observe that although the qualitative form of logit differences and adversarial error is universal, it can be quantitatively improved with entropy regularization and better network architectures.” \n“...vanilla NASNet-A (best clean accuracy in our study) has a lower adversarial error than adversarially trained [models]…”\nIn eq. 8 we find that the threshold for an adversarial error is proportional to J^TJ. This is clearly proportional to the spectral norm of the Jacobian.\n"
]
} | {
"paperhash": [
"zoph|learning_transferable_architectures_for_scalable_image_recognition",
"madry|towards_deep_learning_models_resistant_to_adversarial_attacks",
"xu|feature_squeezing:_detecting_adversarial_examples_in_deep_neural_networks",
"zhang|understanding_deep_learning_requires_rethinking_generalization",
"kurakin|adversarial_machine_learning_at_scale",
"kurakin|adversarial_examples_in_the_physical_world",
"szegedy|inception-v4,_inception-resnet_and_the_impact_of_residual_connections_on_learning",
"he|deep_residual_learning_for_image_recognition",
"szegedy|rethinking_the_inception_architecture_for_computer_vision",
"hinton|distilling_the_knowledge_in_a_neural_network",
"goodfellow|explaining_and_harnessing_adversarial_examples",
"szegedy|going_deeper_with_convolutions",
"russakovsky|imagenet_large_scale_visual_recognition_challenge",
"szegedy|intriguing_properties_of_neural_networks",
"cissé|parseval_networks:_improving_robustness_to_adversarial_examples",
"miyato|virtual_adversarial_training:_a_regularization_method_for_supervised_and_semi-supervised_learning",
"nayebi|biologically_inspired_protection_of_deep_networks_from_adversarial_attacks",
"pereyra|regularizing_neural_networks_by_penalizing_confident_output_distributions",
"zoph|neural_architecture_search_with_reinforcement_learning",
"schoenholz|deep_information_propagation",
"poole|exponential_expressivity_in_deep_neural_networks_through_transient_chaos",
"papernot|practical_black-box_attacks_against_machine_learning",
"papernot|distillation_as_a_defense_to_adversarial_perturbations_against_deep_neural_networks",
"schoenholz|a_structural_approach_to_relaxation_in_glassy_liquids",
"cubuk|identifying_structural_flow_defects_in_disordered_solids_using_machine-learning_methods."
],
"title": [
"Learning Transferable Architectures for Scalable Image Recognition",
"Towards Deep Learning Models Resistant to Adversarial Attacks",
"Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images",
"UNDERSTANDING DEEP LEARNING REQUIRES RE-THINKING GENERALIZATION",
"ADVERSARIAL MACHINE LEARNING AT SCALE",
"Workshop track -ICLR 2017 ADVERSARIAL EXAMPLES IN THE PHYSICAL WORLD",
"Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning",
"Deep Residual Learning for Image Recognition",
"Rethinking the Inception Architecture for Computer Vision",
"Distilling the Knowledge in a Neural Network",
"EXPLAINING AND HARNESSING ADVERSARIAL EXAMPLES",
"Going deeper with convolutions",
"ImageNet Large Scale Visual Recognition Challenge",
"Intriguing properties of neural networks Christian Szegedy",
"Parseval Networks: Improving Robustness to Adversarial Examples",
"Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning",
"Biologically inspired protection of deep networks from adversarial attacks",
"Under review as a conference paper at ICLR 2017 REGULARIZING NEURAL NETWORKS BY PENALIZING CONFIDENT OUTPUT DISTRIBUTIONS",
"Under review as a conference paper at ICLR 2017 NEURAL ARCHITECTURE SEARCH WITH REINFORCEMENT LEARNING",
"DEEP INFORMATION PROPAGATION",
"Exponential expressivity in deep neural networks through transient chaos",
"Practical Black-Box Attacks against Machine Learning",
"Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks",
"A structural approach to relaxation in glassy liquids",
"Identifying structural flow defects in disordered solids using machine learning methods"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"barret zoph",
"google brain",
"vijay vasudevan",
"quoc v le google brain"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"aleksander m ądry",
"aleksandar makelov",
"ludwig schmidt",
"dimitris tsipras",
"adrian vladu"
],
"affiliation": [
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology Cambridge",
"location": "{'postCode': '02139', 'region': 'MA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology Cambridge",
"location": "{'postCode': '02139', 'region': 'MA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology Cambridge",
"location": "{'postCode': '02139', 'region': 'MA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology Cambridge",
"location": "{'postCode': '02139', 'region': 'MA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology Cambridge",
"location": "{'postCode': '02139', 'region': 'MA', 'country': 'USA'}"
}
]
},
{
"name": [
"anh nguyen",
"jason yosinski",
"jeff clune"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Wyoming",
"location": "{}"
},
{
"laboratory": "",
"institution": "Cornell University",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Wyoming",
"location": "{}"
}
]
},
{
"name": [
"chiyuan zhang",
"samy bengio",
"moritz hardt",
"benjamin recht",
"oriol vinyals",
"google deepmind"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alexey kurakin",
"ian j goodfellow",
"samy bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alexey kurakin",
"ian j goodfellow",
"samy bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"christian szegedy",
"sergey ioffe",
"vincent vanhoucke",
"alexander a alemi"
],
"affiliation": [
{
"laboratory": "",
"institution": "Google Inc",
"location": "{'addrLine': '1600 Amphitheatre Parkway Mountain View', 'region': 'CA'}"
},
{
"laboratory": "",
"institution": "Google Inc",
"location": "{'addrLine': '1600 Amphitheatre Parkway Mountain View', 'region': 'CA'}"
},
{
"laboratory": "",
"institution": "Google Inc",
"location": "{'addrLine': '1600 Amphitheatre Parkway Mountain View', 'region': 'CA'}"
},
{
"laboratory": "",
"institution": "Google Inc",
"location": "{'addrLine': '1600 Amphitheatre Parkway Mountain View', 'region': 'CA'}"
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
}
]
},
{
"name": [
"christian szegedy",
"zbigniew wojna"
],
"affiliation": [
{
"laboratory": "",
"institution": "University College London",
"location": "{}"
},
{
"laboratory": "",
"institution": "University College London",
"location": "{}"
}
]
},
{
"name": [
"geoffrey hinton",
"oriol vinyals",
"jeff dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ian j goodfellow",
"jonathon shlens",
"christian szegedy"
],
"affiliation": [
{
"laboratory": "",
"institution": "Google Inc",
"location": "{'settlement': 'Mountain View', 'region': 'CA'}"
},
{
"laboratory": "",
"institution": "Google Inc",
"location": "{'settlement': 'Mountain View', 'region': 'CA'}"
},
{
"laboratory": "",
"institution": "Google Inc",
"location": "{'settlement': 'Mountain View', 'region': 'CA'}"
}
]
},
{
"name": [
"christian szegedy",
"wei liu",
"inc google",
"scott reed",
"anguelov dragomir",
"vincent vanhoucke",
"andrew rabinovich"
],
"affiliation": [
{
"laboratory": "",
"institution": "Google Inc",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of North Carolina",
"location": "{'settlement': 'Chapel Hill Yangqing Jia'}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Google Inc",
"location": "{}"
},
{
"laboratory": "",
"institution": "Google Inc",
"location": "{}"
}
]
},
{
"name": [
"olga russakovsky",
"jia deng",
"hao su",
"jonathan krause",
"sanjeev satheesh",
"sean ma",
"zhiheng huang",
"andrej karpathy",
"aditya khosla",
"michael bernstein",
"alexander c berg",
"li fei-fei",
"jim mutch",
"sharat chikkerur",
"hristo paskov",
"ruslan salakhutdinov",
"stan bileschi",
"hueihan jhuang",
"ibm research †",
"georgia tech",
"lexing xie",
"hua ouyang",
"apostol natsev",
"tatsuya harada",
"hideki nakayama",
"yoshitaka ushiku",
"yuya yamashita",
"jun imura",
"yasuo kuniyoshi",
"georges quénot",
"yuanqing lin",
"fengjun lv",
"shenghuo zhu",
"ming yang",
"timothee cour",
"kai yu",
"liangliang cao",
"zhen li",
"min-hsuan tsai",
"xiao zhou",
"thomas huang",
"tong zhang",
"cai-zhi zhu",
"shiníchi satoh",
"jorge sanchez",
"florent perronnin",
"thomas mensink",
"asako kanezaki",
"sho inaba",
"hiroshi muraoka",
"yasuo kuniyoshi nii"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Xerox Research Centre Europe",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Xerox Research Centre Europe",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"inc wojciech google",
" zaremba",
"ilya sutskever",
"joan bruna",
"ian goodfellow",
"rob fergus"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Google Inc",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University Dumitru Erhan Google Inc",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Montreal",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University Facebook Inc",
"location": "{}"
}
]
},
{
"name": [
"moustapha cisse",
"piotr bojanowski",
"edouard grave",
"yann dauphin",
"nicolas usunier"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"takeru miyato",
"shin-ichi maeda",
"masanori koyama",
"shin ishii"
],
"affiliation": [
{
"laboratory": "",
"institution": "Networks, Inc",
"location": "{'settlement': 'Tokyo', 'country': 'Japan'}"
},
{
"laboratory": "",
"institution": "Networks, Inc",
"location": "{'settlement': 'Tokyo', 'country': 'Japan'}"
},
{
"laboratory": "",
"institution": "Kyoto University",
"location": "{'settlement': 'Kyoto', 'country': 'Japan'}"
},
{
"laboratory": "",
"institution": "Kyoto University",
"location": "{'settlement': 'Kyoto', 'country': 'Japan'}"
}
]
},
{
"name": [
"aran nayebi",
"surya ganguli"
],
"affiliation": [
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
}
]
},
{
"name": [
"gabriel pereyra",
"george tucker",
"jan chorowski",
"łukasz kaiser",
"geoffrey hinton"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Toronto & Google Brain",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto & Google Brain",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto & Google Brain",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto & Google Brain",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto & Google Brain",
"location": "{}"
}
]
},
{
"name": [
"barret zoph",
"quoc v le google brain"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"samuel s schoenholz",
"google brain",
"justin gilmer",
"surya ganguli",
"jascha sohl-dickstein"
],
"affiliation": [
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
}
]
},
{
"name": [
"ben poole",
"subhaneil lahiri",
"maithra raghu",
"surya ganguli"
],
"affiliation": [
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Google Brain and Cornell University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
}
]
},
{
"name": [
"nicolas papernot",
"patrick mcdaniel",
"ian goodfellow",
"somesh jha",
"z berkay celik",
"ananthram swami"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nicolas papernot",
"patrick mcdaniel",
"xi wu",
"jha § somesh",
"ananthram swami"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Wisconsin",
"location": "{'settlement': 'Madison'}"
},
{
"laboratory": "",
"institution": "University of Wisconsin",
"location": "{'settlement': 'Madison'}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "United States Army Research Laboratory",
"institution": "",
"location": "{'settlement': 'Adelphi', 'region': 'Maryland'}"
}
]
},
{
"name": [
"s s schoenholz",
"e d cubuk",
"d m sussman",
"e kaxiras",
"a j liu"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Pennsylvania",
"location": "{'postCode': '19104', 'settlement': 'Philadelphia', 'region': 'Pennsylvania', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{'postCode': '02138', 'settlement': 'Cambridge', 'region': 'Massachusetts', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of Pennsylvania",
"location": "{'postCode': '19104', 'settlement': 'Philadelphia', 'region': 'Pennsylvania', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{'postCode': '02138', 'settlement': 'Cambridge', 'region': 'Massachusetts', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of Pennsylvania",
"location": "{'postCode': '19104', 'settlement': 'Philadelphia', 'region': 'Pennsylvania', 'country': 'USA'}"
}
]
},
{
"name": [
"e d cubuk",
"s s schoenholz",
"j m rieser",
"b d malone",
"j rottler",
"d j durian",
"e kaxiras",
"a j liu"
],
"affiliation": [
{
"laboratory": "",
"institution": "Harvard University",
"location": "{'postCode': '02138', 'settlement': 'Cambridge', 'region': 'Massachusetts', 'country': 'USA'}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "University of Pennsylvania",
"location": "{'postCode': '19104', 'settlement': 'Philadelphia', 'region': 'Pennsylvania', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{'postCode': '02138', 'settlement': 'Cambridge', 'region': 'Massachusetts', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of British Columbia",
"location": "{'postCode': 'V6T1Z4', 'settlement': 'Vancouver', 'region': 'BC', 'country': 'Canada'}"
},
{
"laboratory": "",
"institution": "University of Pennsylvania",
"location": "{'postCode': '19104', 'settlement': 'Philadelphia', 'region': 'Pennsylvania', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{'postCode': '02138', 'settlement': 'Cambridge', 'region': 'Massachusetts', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of Pennsylvania",
"location": "{'postCode': '19104', 'settlement': 'Philadelphia', 'region': 'Pennsylvania', 'country': 'USA'}"
}
]
}
],
"arxiv_id": [
"",
"1706.06083v4",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 87 | null | 0.481481 | 0.5 | null | null | null | null | null | rk6H0ZbRb |
|
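Illustrative note on the rebuttal above: the authors argue that, for small perturbations, adversarial error should scale roughly like A * epsilon, with a constant A that depends on the model (for example, on the spectral norm of the Jacobian). The following sketch is a hypothetical, self-contained NumPy example, not the paper's code: it measures such an epsilon sweep with a one-step fast-gradient-sign perturbation on a toy linear softmax classifier, and the data, classifier, labels, and epsilon grid are all made up for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Made-up data and a fixed linear softmax classifier: 20 features, 3 classes.
X = rng.normal(size=(1000, 20))
W = rng.normal(size=(20, 3))
y = np.argmax(X @ W + 0.5 * rng.normal(size=(1000, 3)), axis=1)  # noisy labels

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def error_rate(inputs):
    return float(np.mean(np.argmax(inputs @ W, axis=1) != y))

def fgsm(inputs, eps):
    # One-step fast-gradient-sign perturbation of the cross-entropy loss w.r.t. the input.
    p = softmax(inputs @ W)               # predicted class probabilities
    p[np.arange(len(y)), y] -= 1.0        # dLoss/dlogits = p - onehot(y)
    grad_x = p @ W.T                      # chain rule back to the input
    return inputs + eps * np.sign(grad_x)

for eps in [0.0, 0.01, 0.02, 0.05, 0.1, 0.2]:
    print(f"eps={eps:<4}  adversarial error = {error_rate(fgsm(X, eps)):.3f}")

On a sweep like this the measured error typically rises roughly linearly for small epsilon before saturating, which is the qualitative behaviour the response refers to; the rebuttal describes the analogous measurement on trained networks for MNIST, CIFAR10, and ImageNet.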
liao|graph_partition_neural_networks_for_semisupervised_classification|ICLR_cc_2018_Conference | Graph Partition Neural Networks for Semi-Supervised Classification | We present graph partition neural networks (GPNN), an extension of graph neural networks (GNNs) able to handle extremely large graphs. GPNNs alternate between locally propagating information between nodes in small subgraphs and globally propagating information between the subgraphs. To efficiently partition graphs, we experiment with spectral partitioning and also propose a modified multi-seed flood fill for fast processing of large scale graphs. We extensively test our model on a variety of semi-supervised node classification tasks. Experimental results indicate that GPNNs are either superior or comparable to state-of-the-art methods on a wide variety of datasets for graph-based semi-supervised classification. We also show that GPNNs can achieve similar performance as standard GNNs with fewer propagation steps. | {
"name": [],
"affiliation": []
} | null | [] | null | 2018-02-15 22:29:39 | 36 | null | null | null | null | null | null | null | null | false | This paper was perceived as being well written, but the technical contribution was seen as being incremental and somewhat heuristic in nature. Some important prior work was not discussed and more extensive experimentation was recommended.
However, the proposed approach of partitioning the graph into subgraphs and using a schedule that alternates between intra-partition and inter-partition operations has some merit.
The AC recommends inviting this paper to the Workshop Track. | {
"review_id": [
"r1Q8qCdgf",
"Hkk48Xg-f",
"HkaZrhuez"
],
"review": [
{
"title": "title: Partitioning for better message passing - maybe?",
"paper_summary": null,
"main_review": "main_review: The authors investigate different message passing schedules for GNN learning. Their proposed approach is to partition the graph into disjoint subregions, pass many messages on the sub regions and pass fewer messages between regions (an approach that is already considered in related literature, e.g., the BP literature), with the goal of minimizing the number of messages that need to be passed to convey information between all pairs of nodes in the network. Experimentally, the proposed approach seems to perform comparably to existing methods (or slightly worse on average in some settings). The paper is well-written and easy to read. My primary concern is with novelty. Many similar ideas have been floating around in a variety of different message-passing communities. With no theoretical reason to prefer the proposed approach, it seems like it may be of limited interest to the community if speed is its only benefit (see detailed comments below).\n\nSpecific comments:\n\n1) \"When information from any one node has reached all other nodes in the graph for the first time, this problem is considered as solved.\"\n\nPerhaps it is my misunderstanding of the way in which GNNs work, but isn't the objective actually to reach a set of fixed point equations. If so, then simply propagating information from one side of the graph may not be sufficient.\n\n2) The experimental results in Section 4.4 are almost impossible to interpret. Perhaps it is better to plot number of edges updated versus accuracy? This at least would put them on equal footing. In addition, the experiments that use randomness should be repeated and plotted on average (just in case you happened to pick a bad schedule).\n\n3) More generally, why not consider random schedules (i.e., just pick a random edge, update, repeat) or random partitions? I'm not certain that a fixed set will perform best independent of the types of updates being considered, and random schedules, like the fully synchronous case for an important baseline (especially if update speed is all you care about).\n\nTypos:\n\n-pg. 6, \"Thm. 2\" -> \"Table 2\"",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Incremental improvement on graph neural networks with heuristic graph partitioning",
"paper_summary": null,
"main_review": "main_review: Since existing GNNs are not computational efficient when dealing with large graphs, the key engineering contributions of the proposed method, GPNN, are a partitioning and the associated scheduling components. \n\nThe paper is well written and easy to follow. However, related literature for message passing part is inadequate. \n\nI have two concerns. The primary one is that the method is incremental and rather heuristic. For example, in Section 2.2, Graph Partition part, the authors propose to \"first randomly sample the initial seed nodes biased towards nodes which are labeled and have a large out-degree\", they do not give any reasons for the preference of that kind of nodes. \n\nThe second one is that of the experimental evaluation. GPNN is on par with other methods on small graphs such as citation networks, performs comparably to other methods, and only clearly outperforms on distantly-supervised entity extraction dataset. Thus, it is not clear if GPNN is more effective than others in general. As for experiments on DIEL dataset, the authors didn't compare to GCN due to the simple reason that GCN ran out of memory. However, vanilla GCN could be trivially partitioned and propagating just as shown in this paper. I think such experiment is crucial, without which I cannot assess this method properly.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Extends the GNN framework to handle large graphs by running async updates on subgraphs derived by using graph partitioning algorithms. Results demonstrated on semi-supervised task. ",
"paper_summary": null,
"main_review": "main_review: Graph Neural Networks are methods using NNs to deal with graph data (each data point has some features, and there is some known connectivity structure among nodes) for problems such as semi-supervised classification. They can also be viewed as an abstraction and generalizations of RNNs to arbitrary graphs. As such they assume each unit has inputs from other nodes, as well as from some stored representation of a state and upon receiving all its information and executing a computation on the values of these inputs and its internal state, it can update the state as well as propagate information to neighbouring nodes. \n\nThis paper deals with the question of computing over very large input graphs where learning becomes computationally problematic (eg hard to use GPUs, optimization gets difficult due to gradient issues, etc). The proposed solution is to partition the graph into sub graphs, and use a schedule alternating between performing intra and inter graph partitions operations. To achieve that two things need to be determined - how to partition the graph, and which schedules to choose. The authors experiment with existing and somewhat modified solutions for each of these problems and present results that show that for large graphs, these methods are indeed effective and achieve state-of-the-art/improved results over existing methos. \n\nThe main critique is that this feels more of an engineering solution to running such GNNs on large graphs than a research innovations. The proposed algorithms are straight forward and/or utilize existing algorithms, and introduce many hyper parameters and ad-hoc decisions (the scheduling to choose for instance). In addition, they do not satisfy any theoretical framework, or proposed in the context of a theoretical framework than has guarantees of mathematical properties that are desirable. As such it is likely of use for practitioners but not a major research contribution. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.5555555820465088,
0.5555555820465088
],
"confidence": [
0.5,
0.5,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response:",
"Response:",
"Response:"
],
"comment": [
"We thank the reviewer for the valuable comments. We did large-scale experiments with GCN as suggested. \n\nQ1: The method is incremental.\nA1: We agree that our contribution is an extension of earlier work. However, given the rapidly increasing interest in graph neural networks and their variants (cf. some recent references below), we believe studying methods to make them computationally effective is very valuable for the community. As GNNs operate on graphs that are often very different from common probabilistic graphical models (PGMs), the impact of different schedules in the two areas may be very different. For example, spanning tree based schedules are known to be very effective for PGMs. However, many graphs require a very large number of spanning trees to achieve satisfactory performance, which in turn seems to cause optimization problems (cf. the experimental results with minimal spanning trees in Sect. 4.5).\n\nLi, Y., Tarlow, D., Brockschmidt, M. and Zemel, R., 2016. Gated graph sequence neural networks. ICLR. \n\nQi, X., Liao, R., Jia, J., Fidler, S. and Urtasun, R., 2017. 3d graph neural networks for rgbd semantic segmentation. ICCV.\n\nLi, R., Tapaswi, M., Liao, R., Jia, J., Urtasun, R. and Fidler, S., 2017. Situation Recognition with Graph Neural Networks. ICCV.\n\nGarcia, V. and Bruna, J., 2017. Few-Shot Learning with Graph Neural Networks. arXiv preprint arXiv:1711.04043.\n\nBruna, J. and Li, X., 2017. Community Detection with Graph Neural Networks. arXiv preprint arXiv:1705.08415.\n\nNowak, A., Villar, S., Bandeira, A. and Bruna, J. A Note on Learning Algorithms for Quadratic Assignment with Graph Neural Networks. arXiv preprint arXiv:1706.07450.\n\nQ2: Add literature of message passing.\nA2: We will add relevant work in PGMs. We plan to discuss the two papers below in an updated submission, but would be happy to incorporate more papers.\n\nSontag, D. and Jaakkola, T., 2009, April. Tree block coordinate descent for MAP in graphical models. AISTATS.\n\nKomodakis, N., Paragios, N. and Tziritas, G., 2011. MRF energy minimization and beyond via dual decomposition. IEEE PAMI.\n\nQ3: Preference of nodes with high-degree.\nA3: We prefer the high-degree nodes as the seeds because in other graphs tasks (e.g., influence maximization in social networks), high-degree heuristics are shown to be a simple and yet strong baseline (cf. paper below). We will update the paper to make this reasoning clearer.\n\nKempe, D., Kleinberg, J. and Tardos, É., 2003, August. Maximizing the spread of influence through a social network. In ACM SIGKDD.\n\nQ4: GCN on DIEL.\nA4: We ran a set of experiments of GCN on the DIEL dataset. This required a significant amount of engineering effort in the implementation. First, we use sparse operations in many places and reduce the feature dimension by introducing a learnable linear layer such that the model to fit into 128GB CPU memory. We then implemented a partition based schedule for GCN. In particular, we first get the partition using the proposed multi-seed flood fill method. Then we construct two graph laplacian matrices for the disconnected clusters and the cut, denoting as L_cluster and L_cut. In original GCN, a layer is expressed as ReLU( L * X * W ) where L, X and W are graph laplacian, node states and weight parameters respectively. In the partition based GCN, it is ReLU( L_cut * L_cluster * X * W ). 
We tuned hyperparameters and the results are summarized as below.\n---------------------------------------------------------------------------\nMethod | GCN | GCN + Partition | GGNN | GPNN |\n---------------------------------------------------------------------------\nAvg. Recall | 48.14 | 48.47 | 51.15 | 52.11 | \n---------------------------------------------------------------------------\n\nWe observe that (1) both GCN and its partition variant are worse than that of GGNN and our GPNN; (2) partition based GCN has a marginal improvement over the vanilla one.\n\nOne reason why GCN performs poorly is that it requires more layers to reach similar performance of GNN since a k-layer GCN will propagate messages k-hops away whereas GNN has the advantage of propagating more even within one layer. Directly adding more layers is infeasible here as it can not fit into memory. We tried to reduce the feature dimension in order to add more layers which leads to a new issue that the features may not be discriminative enough. We hypothesize that if we could add more layers to GCN without reducing the feature dimension too much, GCN will perform similarly. However, it requires more memory and/or intensive optimization of the code which we left as future work. \n\nThe marginal gain of partition based GCN is understandable as the model just splits one linear transform L into L_cut and L_cluster without enhancing the model capacity significantly. Note that the sparsities of L * X and L_cut * L_cluster * X are different.\n\nFinally, our code based on Tensorflow will be released soon.",
"We thank the reviewer for the valuable comments. Given the increasing popularity of graph neural networks, e.g., see recent references in A1 of Anonymous Reviewer 4, we believe it is still valuable to share our studies of graph partitioning and message-passing schedules with the ICLR community. As our results show, these approaches will be very important as the community starts considering larger graphs than those currently being investigated in the graph network literature.",
"We thank the reviewer for bringing up random schedules. We added the experiment as per suggestion.\n\nQ1: Reach a set of fixed point equations.\nA1: The original GNN paper (Scarselli et al. 2009) indeed requires that the state update function is a contraction map (and by Banach’s theorem thus has a fixed point). However, recent gated GNN adaptations (e.g. Li et al. 2015) drop this requirement and instead just fix a number of propagation steps as a hyperparameter; training and testing is then very similar to the RNN setting. We also follow the latter setting since (1) for a general nonlinear dynamic system, no guarantee can be made regarding whether fixed points can be reached; and (2) the learning algorithm, i.e., back-propagation through time (BPTT) would be significantly more time consuming as fixed-point convergence typically requires very many propagation steps, which is impractical for very large graphs. In the paper, we use a synthetic broadcasting problem to study the difference in efficiency of various message passing schedules in an idealized setting. As you observe, propagation across the whole graph may often not suffice to solve all tasks, but is a simple way to study if long-range dependencies between different vertices can be modeled at all.\n\nQ2: Experimental results in section 4.4.\nA2: We assume the reviewer has a typo here in an sense that you actually refer to section 4.5. \nThanks for your suggestion of plotting number of edges updated versus accuracy. We will replot in the final version. To clarify, in Fig. 2 (c), assuming graph G(V, E) is singly connected, then the “# edges per propagation step” of MST, Sequential, Synchronous and Partition are |V|-1, |E|, |E| and |E|. We also attach the average results of 10 runs with different random seeds on Cora as below. \n-----------------------------------------------------------------------------------\n| Prop Step | 1 | 3 | 5 | \n-----------------------------------------------------------------------------------\n| MST | 59.94 +- 0.89 | 71.83 +- 0.96 | 77.1 +- 0.72 |\n-----------------------------------------------------------------------------------\n| Sequential | 73.04 +- 1.93 | 77.55 +- 0.65 | 74.89 +- 1.26 |\n-----------------------------------------------------------------------------------\n| Synchronous | 67.36 +- 1.44 | 80.15 +- 0.80 | 80.06 +- 0.98 |\n-----------------------------------------------------------------------------------\n| Partition | 68.1 +- 1.98 | 80.27 +- 0.78 | 80.12 +- 0.93 |\n-----------------------------------------------------------------------------------\nWe will plot the mean curve with error bar and improve the writing in the final version.\n\nQ3: Random and Synchronous Schedules\nA3: To clarify, we did compare with a fully synchronous schedule which is the one adopted by the GGNN model. Also, speed is not the only benefit, as with partition based schedules, memory is saved which enables us to apply the model to large-scale graph problems. \n\nDeveloping schedules that depend on the type of updates is a very interesting and promising direction. We will explore it in the future. On the other side, our schedule is not fixed in a sense that the partition depends the structure of input graph. \n\nWe did an experiment on random schedules. In particular, for k-step propagation, we randomly sample 1/k proportion of edges from the whole edge set without replacement and use them for propagation. 
We summarize the results (10 runs) on the Cora dataset in the table below,\n--------------------------------------------------------\n| K | 2 | 3 | 5 | 10 | \n--------------------------------------------------------\n| Avg Acc | 76.03 | 74.71 | 72.09 | 69.99 | \n--------------------------------------------------------\n| Std Acc | 1.55 | 1.31 | 1.81 | 2.26 | \n--------------------------------------------------------\nFrom the results, we can see that the best average accuracy (K = 2) is 76.03 which is still lower than both synchronous and our partition based schedule. Note that this result roughly matches the one with spanning trees. The reason might be that random schedules typically need more propagation steps to spread information throughout the graph. However, more propagation steps of GNNs may lead to issues in learning with BPTT. Additional results on other datasets will be included in the final version. \n"
]
} | {
"paperhash": [
"bruna|spectral_networks_and_locally_connected_networks_on_graphs",
"duvenaud|convolutional_networks_on_graphs_for_learning_molecular_fingerprints",
"elidan|residual_belief_propagation:_informed_scheduling_for_asynchronous_message_passing",
"gilmer|neural_message_passing_for_quantum_chemistry",
"grover|node2vec:_scalable_feature_learning_for_networks",
"kingma|adam:_a_method_for_stochastic_optimization",
"li|gated_graph_sequence_neural_networks",
"perozzi|deepwalk:_online_learning_of_social_representations",
"scarselli|the_graph_neural_network_model",
"schlichtkrull|modeling_relational_data_with_graph_convolutional_networks",
"sen|collective_classification_in_network_data",
"sukhbaatar|learning_multiagent_communication_with_backpropagation",
"sutton|improved_dynamic_schedules_for_belief_propagation",
"von|a_tutorial_on_spectral_clustering",
"yang|revisiting_semi-supervised_learning_with_graph_embeddings",
"li|situation_recognition_with_graph_neural_networks",
"marino|the_more_you_know:_using_knowledge_graphs_for_image_classification"
],
"title": [
"Spectral Networks and Deep Locally Connected Networks on Graphs",
"Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"Residual Belief Propagation: Informed Scheduling for Asynchronous Message Passing",
"Neural Message Passing for Quantum Chemistry",
"node2vec: Scalable Feature Learning for Networks",
"Published as a conference paper at ICLR 2015 ADAM: A METHOD FOR STOCHASTIC OPTIMIZATION",
"GATED GRAPH SEQUENCE NEURAL NETWORKS",
"DeepWalk: Online Learning of Social Representations",
"The graph neural network model The graph neural network model",
"Graph Convolutional Matrix Completion Rianne van den Berg",
"Collective Classification in Network Data",
"Learning Multiagent Communication with Backpropagation",
"Improved Dynamic Schedules for Belief Propagation",
"GraphSAINT: GRAPH SAMPLING BASED INDUCTIVE LEARNING METHOD",
"Revisiting Semi-Supervised Learning with Graph Embeddings",
"Situation Recognition with Graph Neural Networks",
"The More You Know: Using Knowledge Graphs for Image Classification"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"joan bruna",
"wojciech zaremba",
"arthur szlam",
"yann lecun"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "The City College of New York",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
}
]
},
{
"name": [
"david duvenaud",
"dougal maclaurin",
"jorge aguilera-iparraguirre",
"rafael gómez-bombarelli",
"timothy hirzel",
"alán aspuru-guzik",
"ryan p adams"
],
"affiliation": [
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
}
]
},
{
"name": [
"gal elidan",
"ian mcgraw",
"daphne koller"
],
"affiliation": [
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
}
]
},
{
"name": [
"justin gilmer",
"samuel s schoenholz",
"patrick f riley",
"oriol vinyals",
"george e dahl"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"aditya grover",
"jure leskovec"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik p kingma",
"jimmy lei ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yujia li",
"richard zemel",
"marc brockschmidt",
"daniel tarlow"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"bryan perozzi",
"steven skiena"
],
"affiliation": [
{
"laboratory": "",
"institution": "Stony Brook University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stony Brook University",
"location": "{}"
}
]
},
{
"name": [
"franco ; scarselli",
"marco ; gori",
"markus hagenbuchner",
"gabriele monfardini",
"ah chung tsoi",
"; chung"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"thomas n kipf",
"max welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"prithviraj sen",
"galileo namata",
"mustafa bilgic",
"lise getoor",
"brian gallagher",
"tina eliassi-rad"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sainbayar sukhbaatar",
"arthur szlam",
"rob fergus"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"charles sutton",
"andrew mccallum"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Massachusetts",
"location": "{'postCode': '01003', 'settlement': 'Amherst', 'region': 'MA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of Massachusetts",
"location": "{'postCode': '01003', 'settlement': 'Amherst', 'region': 'MA', 'country': 'USA'}"
}
]
},
{
"name": [
"hanqing zeng",
"hongkuan zhou",
"ajitesh srivastava",
"rajgopal kannan",
"viktor prasanna"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"zhilin yang",
"william w cohen",
"ruslan salakhutdinov"
],
"affiliation": [
{
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": "{}"
}
]
},
{
"name": [
"ruiyu li",
"makarand tapaswi",
"renjie liao",
"jiaya jia",
"raquel urtasun",
"sanja fidler",
" kong"
],
"affiliation": [
{
"laboratory": "",
"institution": "The Chinese University of Hong",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{}"
},
{
"laboratory": "",
"institution": "The Chinese University of Hong",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kenneth marino",
"ruslan salakhutdinov",
"abhinav gupta"
],
"affiliation": [
{
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": "{'addrLine': '5000 Forbes Ave', 'postCode': '15213', 'settlement': 'Pittsburgh', 'region': 'PA'}"
},
{
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": "{'addrLine': '5000 Forbes Ave', 'postCode': '15213', 'settlement': 'Pittsburgh', 'region': 'PA'}"
},
{
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": "{'addrLine': '5000 Forbes Ave', 'postCode': '15213', 'settlement': 'Pittsburgh', 'region': 'PA'}"
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.518519 | 0.5 | null | null | null | null | null | rk4Fz2e0b |
||
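Illustrative note on the GPNN rebuttal above: a partition-based GCN layer is described there as ReLU(L_cut * L_cluster * X * W), where L_cluster keeps only intra-partition edges and L_cut only edges crossing the cut, compared with the vanilla layer ReLU(L * X * W). The sketch below is a hypothetical dense NumPy rendering of that factorization, not the authors' TensorFlow implementation; the toy graph, the two-way partition, the self-loop handling, and the symmetric normalization are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

n, d_in, d_out = 8, 5, 4
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)                      # undirected toy adjacency
np.fill_diagonal(A, 1.0)                    # self-loops, as in GCN
part = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # hypothetical 2-way partition assignment

same = part[:, None] == part[None, :]
A_cluster = A * same                        # intra-partition edges (plus self-loops)
A_cut = A * ~same                           # edges crossing the cut
np.fill_diagonal(A_cut, 1.0)                # assumption: keep each node's own state in the cut factor

def normalize(adj):
    # Symmetric GCN normalization D^{-1/2} A D^{-1/2}.
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg, 1.0) ** -0.5
    return adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

L_full, L_cluster, L_cut = normalize(A), normalize(A_cluster), normalize(A_cut)

X = rng.normal(size=(n, d_in))              # node features
W = rng.normal(size=(d_in, d_out))          # layer weights

H_vanilla = np.maximum(L_full @ X @ W, 0.0)                   # ReLU(L X W)
H_partition = np.maximum(L_cut @ (L_cluster @ X) @ W, 0.0)    # ReLU(L_cut L_cluster X W)
print(H_vanilla.shape, H_partition.shape)

As the rebuttal notes, this only splits one linear operator into two sparser factors without adding model capacity, which is consistent with the marginal accuracy difference they report; the sparsity patterns of L * X and L_cut * L_cluster * X differ.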
babaeizadeh|stochastic_variational_video_prediction|ICLR_cc_2018_Conference | 1710.11252v2 | Stochastic Variational Video Prediction | Predicting the future in real-world settings, particularly from raw sensory observations such as images, is exceptionally challenging. Real-world events can be stochastic and unpredictable, and the high dimensionality and complexity of natural images requires the predictive model to build an intricate understanding of the natural world. Many existing methods tackle this problem by making simplifying assumptions about the environment. One common assumption is that the outcome is deterministic and there is only one plausible future. This can lead to low-quality predictions in real-world settings with stochastic dynamics. In this paper, we develop a stochastic variational video prediction (SV2P) method that predicts a different possible future for each sample of its latent variables. To the best of our knowledge, our model is the first to provide effective stochastic multi-frame prediction for real-world video. We demonstrate the capability of the proposed method in predicting detailed future frames of videos on multiple real-world datasets, both action-free and action-conditioned. We find that our proposed method produces substantially improved video predictions when compared to the same model without stochasticity, and to other stochastic video prediction methods. Our SV2P implementation will be open sourced upon publication. | {
"name": [
"mohammad babaeizadeh",
"chelsea finn",
"dumitru erhan",
"roy campbell",
"sergey levine"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Illinois at Urbana",
"location": "{'settlement': 'Champaign'}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Berkeley'}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "University of Illinois at Urbana",
"location": "{'settlement': 'Champaign'}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Berkeley'}"
}
]
} | null | [
"Computer Science"
] | International Conference on Learning Representations | 2017-10-30 | 41 | 488 | null | null | null | null | null | null | null | true | Not quite enough for an oral but a very solid poster. | {
"review_id": [
"r17bOI8yG",
"S1riI7OxM",
"S1gH28vgM"
],
"review": [
{
"title": "title: Suggest an accept but it requires further revision",
"paper_summary": null,
"main_review": "main_review: Quality: above threshold\nClarity: above threshold, but experiment details are missing.\nOriginality: slightly above threshold.\nSignificance: above threshold\n\nPros:\n\nThis paper proposes a stochastic variational video prediction model. It can be used for prediction in optionally available external action cases. The inference network is a convolution net and the generative network is using a previously structure with minor modification. The result shows its ability to sample future frames and outperforms with methods in qualitative and quantitive metrics.\n\nCons:\n\n1. It is a nice idea and it seems to perform well in practice, but are there careful experiments justifying the 3-stage training scheme? For example, compared with other schemes like alternating between 3 stages, dynamically soft weighting terms. \n\n2. It is briefly mentioned in the context, but has there any attempt towards incorporating previous frames context for z, instead of sampling from prior? This piece seems much important in the scenarios which this paper covers.\n\n3. No details about training (training data size, batches, optimization) are provided in the relevant section, which greatly reduces the reproducibility and understanding of the proposed method. For example, it is not clear whether the model can generative samples that are not previously seen in the training set. It is strongly suggested training details be provided. \n\n4. Minor, If I understand correctly, in equation in the last paragraph above 3.1, z instead of z_t \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Convincing demonstration of stochastic video predictions on real data",
"paper_summary": null,
"main_review": "main_review: The submission presents a method or video prediction from single (or multiple) frames, which is capable of producing stochastic predictions by means of training a variational encoder-decoder model. Stochastic video prediction is a (still) somewhat under-researched direction, due to its inherent difficulty.\n\nThe method can take on several variants: time-invariant [latent variable] vs. time-variant, or action-conditioned vs unconditioned. The generative part of the method is mostly borrowed from Finn et al. (2016). Figure 1 clearly motivates the problem. The method itself is fairly clearly described in Section 3; in particular, it is clear why conditioning on all frames during training is helpful. As a small remark, however, it remains unclear what the action vector a_t is comprised of, also in the experiments.\n\nThe experimental results are good-looking, especially when looking at the provided web site images. \nThe main goal of the quantitative comparison results (Section 5.2) is to determine whether the true future is among the generated futures. While this is important, a question that remains un-discussed is whether all generated stochastic samples are from realistic futures. The employed metrics (best PSNR/SSIM among multiple samples) can only capture the former, and are also pixel-based, not perceptual.\n\nThe quantitative comparisons are mostly convincing, but Figure 6 needs some further clarification. It is mentioned in the text that \"time-varying latent sampling is more stable beyond the time horizon used during training\". While true for Figure 6b), this statement is contradicted by both Figure 6a) and 6c), and Figure 6d) seems to be missing the time-invariant version completely (or it overlaps exactly, which would also need explanation). As such, I'm not completely clear on whether the time variant or invariant version is the stronger performer.\n\nThe qualitative comparisons (Section 5.3) are difficult to assess in the printed material, or even on-screen. The animated images on the web site provide a much better impression of the true capabilities, and I find them convincing.\n\nThe experiments only compare to Reed et al. (2017)/Kalchbrenner et al. (2017), with Finn at al. (2016) as a non-stochastic baseline, but no comparisons to, e.g., Vondrick et al. (2016) are given. Stochastic prediction with generative adversarial networks is a bit dismissed in Section 2 with a mention of the mode-collapse problem.\n\nOverall the submission makes a significant enough contribution by demonstrating a (mostly) working stochastic prediction method on real data.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Review",
"paper_summary": null,
"main_review": "main_review: 1) Summary\nThis paper proposed a new method for predicting multiple future frames in videos. A new formulation is proposed where the frames’ inherent noise is modeled separate from the uncertainty of the future. This separation allows for directly modeling the stochasticity in the sequence through a random variable z ~ p(z) where the posterior q(z | past and future frames) is approximated by a neural network, and as a result, sampling of a random future is possible through sampling from the prior p(z) during testing. The random variable z can be modeled in a time-variant and time-invariant way. Additionally, this paper proposes a training procedure to prevent their method from ignoring the stochastic phenomena modeled by z. In the experimental section, the authors highlight the advantages of their method in 1) a synthetic dataset of shapes meant to clearly show the stochasticity in the prediction, 2) two robotic arm datasets for video prediction given and not given actions, and 3) A challenging human action dataset in which they perform future prediction only given previous frames.\n\n\n\n2) Pros:\n+ Novel/Sound future frame prediction formulation and training for modeling the stochasticity of future prediction.\n+ Experiments on the synthetic shapes and robotic arm datasets highlight the proposed method’s power of multiple future frame prediction possible.\n+ Good analysis on the number of samples improving the chance of outputting the correct future, the modeling power of the posterior for reconstructing the future, and a wide variety of qualitative examples.\n+ Work is significant for the problem of modeling the stochastic nature of future frame prediction in videos.\n\n\n\n\n3) Cons:\nApproximate posterior in non-synthetic datasets:\nThe variable z seems to not be modeling the future very well. In the robot arm qualitative experiments, the robot motion is well modeled, however, the background is not. Given that for the approximate posterior computation the entire sequence is given (e.g. reconstruction is performed), I would expect the background motion to also be modeled well. This issue is more evident in the Human 3.6M experiments, as it seems to output blurriness regardless of the true future being observed. This problem may mean the method is failing to model a large variety of objects and clearly works for the robotic arm because a very similar large shape (e.g. robot arm) is seen in the training data. Do you have any comments on this?\n\n\n\nFinn et al 2016 PNSR performance on Human 3.6M:\nIs the same exact data, pre-processing, training, and architecture being utilized? In her paper, the PSNR for the first timestep on Human 3.6M is about 41 (maybe 42?) while in this paper it is 38.\n\n\n\nAdditional evaluation on Human 3.6M:\nPSNR is not a good evaluation metric for frame prediction as it is biased towards blurriness, and also SSIM does not give us an objective evaluation in the sense of semantic quality of predicted frames. It would be good if the authors present additional quantitative evaluation to show that the predicted frames contain useful semantic information [1, 2, 3, 4]. 
For example, evaluating the predicted frames for the Human 3.6M dataset to see if the human is still detectable in the image or if the expected action is being predicted could be useful to verify that the predicted frames contain the expected meaningful information compared to the baselines.\n\n\n\nAdditional comments:\nAre all 15 actions being used for the Human 3.6M experiments? If so, the fact of the time-invariant model performs better than the time-variant one may not be the consistent action being performed (last sentence of 5.2). The motion performed by the actors in each action highly overlaps (talking on the phone action may go from sitting to walking a little to sitting again, and so on). Unless actions such as walking and discussion were only used, it is unlikely the time-invariant z is performing better because of consistent action. Do you have any comments on this?\n\n\n\n4) Conclusion\nThis paper proposes an interesting novel approach for predicting multiple futures in videos, however, the results are not fully convincing in all datasets. If the authors can provide additional quantitative evaluation besides PSNR and SSIM (e.g. evaluation on semantic quality), and also address the comments above, the current score will improve.\n\n\n\nReferences:\n[1] Emily Denton and Vighnesh Birodkar. Unsupervised Learning of Disentangled Representations from Video. In NIPS, 2017.\n[2] Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, and Honglak Lee. Learning to generate long-term future via hierarchical prediction. In ICML, 2017.\n[3] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv preprint arXiv:1710.10196, 2017.\n[4] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved Techniques for Training GANs. In NIPS, 2017.\n\n\nRevised review:\nGiven the authors' thorough answers to my concerns, I have decided to change my score. I would like to thank the authors for a very nice paper that will definitely help the community towards developing better video prediction algorithms that can now predict multiple futures.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.6666666865348816,
0.6666666865348816,
0.6666666865348816
],
"confidence": [
0.75,
0.75,
1
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Updated paper + addressing comments.",
"Updated paper + addressing comments.",
"Thank you for the questions.",
"Addressing comment.",
"Updated paper + addressing comments.",
"re: prior work"
],
"comment": [
"Thank you for your insightful comments and suggestions. We have addressed most of your concerns. Please see our responses below and let us know if you have any further comments on the paper. Thanks!\n\n- \"Additional evaluation on Human 3.6M: PSNR is not a good evaluation metric for frame prediction\"\n\nThank you for this suggestion. We’ve updated the paper (please look at Figure 7 and 6th paragraph of 5.3) to address your comment. In order to investigate the quality difference between SV2P predicted frames and “Finn et al (2016)”, we performed a new experiment in which we used the open sourced version of the object detector from “Huang et al. (2016)”:\nhttps://github.com/tensorflow/models/blob/master/research/object_detection/models/ssd_mobilenet_v1_feature_extractor.py\nto detect the humans inside the predicted frames. We used the confidence of this detection as an additional metric to evaluate the difference between different methods. The results of this comparison which shows higher quality for SV2P can be found in (newly added) Figure 7. \n\n\n- \"Are all 15 actions being used for the Human 3.6M experiments?\"\n\nWe’ve updated the 2nd bullet point in 5.1 to clear this up in the paper. Yes, we are using all the actions. In regard to changing actions: since the videos are relatively short (about 20 frames), there aren't any videos where the actor changes the behavior in the middle. That said, the identity of the behavior is not the only source of stochasticity, since even within a single action (e.g., walking), the actor might choose to walk at different speeds and in different directions.\n\n\n- \"I would expect the background motion to also be modeled well.”\n\nWe've added a discussion of this in Section 5.3 (paragraph 4). Note that the approximate posterior over z is still trained with the ELBO, which means that it must compress the information in future events. Perfect reconstruction of high-quality images from posterior distributions over latent states is an open problem, and the results in our experiments compare favorably to those typically observed even in single-image VAEs (e.g. see Xue et al. (2016))\n\n\n- \"Finn et al 2016 PNSR performance on Human 3.6M: In her paper, the PSNR for the first timestep on Human 3.6M is about 41 (maybe 42?) while in this paper it is 38\"\n\nFor “Finn et al. (2016)”, we used the open-source version of the code here:\nhttps://github.com/tensorflow/models/tree/master/research/video_prediction\nwhich is a reimplementation of the models used in the Finn et al. ‘16 paper. We are not exactly sure where the discrepancy is coming from. However, we would like to point out that whatever issue resulted in slightly slower PSNR for the deterministic model would have affected our model as well, since we used the same code for the base model. Hence, the comparison is still valid.\n",
"Thank you for your great comments. comments and constructive criticism. We updated the paper to address all of your comments. Please let us know if you have any more suggestions or comments. Thanks!\n\n\n- “No details about training (training data size, batches, optimization) are provided in the relevant section” \n\nThank you for your great comment. To further investigate the effect of our proposed training method, we conducted more experiments by alternating between different steps of suggested method. The updated Figure 4c reflects the results of this experiments. As it can be seen in this graph, the suggested steps help with both stability and convergence of the model.\n\nWe also provided details of the training method in Appendix A to address your comment regarding using soft-terms as well as reproducibility. We will also release the code after acceptance. \n\n\n- “It is briefly mentioned in the context, but has there any attempt towards incorporating previous frames context for z, instead of sampling from prior? This piece seems much important in the scenarios which this paper covers.“\n\nThis is one of the future work directions mentioned in the conclusion section. We’ve expanded this discussion in the conclusion a bit to address this better.\n\n\n- “Minor, If I understand correctly, in equation in the last paragraph above 3.1, z instead of z_t”\n\nThank you for the detailed comment. We fixed the typo.\n",
"\nFor comparison with VPN, we did NOT train any model. Instead, the authors of Reed et al. 2017 provided their trained model which we used for evaluation.\n\nIn terms of numbers, the model from Reed et al. 2017 has 119,538,432 while our model has 8,378,497. Hopefully this helps to get a better understanding of the generalizations.",
"Thank you for the great comment. We address your old version (since it had some great questions) as well as the updated version. Please let us know if we missed a question and/or if you have more questions/comments.\n\n- Why does it help to sample latents multiple times if the inference procedure is identical at all time steps? Is it simply because you get extra bits of stochasticity?\n\nThank you for the great question. Please note that we are not claiming the time-invariant latent predicts higher quality images. The claim is that it is more stable beyond training time horizon (look at Figure 6b). It is best if we answer this questions intuitively with an example. Think about a simple shape which moves randomly and changes its direction in each time step (e.g. brownian motion). A time-invariant latent should encode the info about *all* of the time steps and the generative model should learn how to decode all of this information, step by step, and therefore it runs out of *information* after all the time-steps which causes the collapse after the training time horizon. However, a time-variant latent only includes information about the *current* time frames and stays stable after any time horizon. However, this pushes the complexity to posterior approximation since it should *find* a distribution aligned with what is happening at training time. That is why the result are not that different (other plots of Figure 6). This can be improved by conditioning the prior or posterior on input (which we are currently working on) or other techniques such as backproping through the best out of multiple samples (e.g. look at Fragkiadaki et al. (2017)).\n\n- The plot in figure 4a shows the KL loss going to 0. This seems odd to me, because the KL term in a VAE usually roughly corresponds with the diversity of the samples. If it's close to 0, then the information passing through the posterior is close to 0, isn't it?\n\nIt does not go to 0 but converges to a small number, usually 3 to 5 (please note that y-axis has a very large scale). Intuitively, the key is to keep the divergence small enough so sampling from prior at test time still makes sense, and big enough so there is enough information to train the generative network. In our experiments we found this magical number to be around 3 to 5. \n\n- In the third phase, where you increase the KL, do you just increase the KL to 1? Or, like in Higgins et al (2016), do you tune the multiplicative constant for the KL term? Did you try any (informal) experiments with other types of pre-training. What did/did not seem to work well?\n\nPlease check (the newly added) Appendix. In the current setting we do not increase KL to 1 but increase it to 1e-3. Regarding the training we also included more information in the updated Figure 4. We tried variations of the proposed training mechanism to see how it affects the training. Besides that (informally) we tried different approaches for KL annealing. Since the explained training mechanism is practical enough, we stopped exploring more. \n\n- In your case, I don't think the latent will capture the type of objects in the scene. Otherwise, the latents could \"conflict\" with the context. \n\nGreat question! We indeed observed the conflict that you mentioned while we were developing the model. e.g. a green circle was being morphed into a red triangle! However, there are is a key remark which prevents this from happening in the current architecture and it’s the reconstruction loss. 
Intuitively, at training time, the posterior should encode the information into a distribution from which *any* sample results in a correct answer that minimizes the loss. Therefore, first, it avoids adding any unnecessary information which is already accessible during generation (e.g. the shape and color). Second, it encodes all the required info for a correct prediction (e.g. movement). Therefore, as you mentioned, the latent values include only the *movement* info and not the context. This contains more information compared to random bits though. \n\n- In the inference network (top), could you please be more specific about how you transform the Tx64x64x3 tensor into a 32x32x32 tensor (combining the dimensions across time)? Thanks!\n\nYes, we combine the time dimension and use stride-2 downsampling to get to 32x32.\n",
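The KL-weight scheduling discussed in this exchange (keeping the KL term small but non-zero, and raising its weight to roughly 1e-3 after a warm-up phase) can be illustrated with a toy sketch. The 1e-3 ceiling follows the response above; the warm-up length, the linear ramp, and the diagonal-Gaussian KL form are assumptions made only for illustration.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, exp(logvar)) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def kl_weight(step, warmup_steps=20000, max_beta=1e-3):
    """Toy schedule: keep beta at 0 during warm-up, then ramp it up linearly to max_beta."""
    if step < warmup_steps:
        return 0.0
    return min(max_beta, max_beta * (step - warmup_steps) / warmup_steps)

def elbo_loss(recon_error, mu, logvar, step):
    """Reconstruction term plus the annealed KL penalty."""
    return recon_error + kl_weight(step) * gaussian_kl(mu, logvar)

mu, logvar = np.zeros(8), np.zeros(8)
print(elbo_loss(recon_error=1.25, mu=mu, logvar=logvar, step=50000))
```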
"Thank you for insightful comments and constructive criticism. We updated the paper to address all of your comments. Please let us know if you have any more suggestions or comments. Thanks!\n\n- \"As a small remark, however, it remains unclear what the action vector a_t is comprised of, also in the experiments.\"\n\nThank you for the good point. We’ve updated the paper to clarify what the actions are for each dataset. Please check 5.1 for more clarification. \n\n\n- \"a question that remains un-discussed is whether all generated stochastic samples are from realistic futures\"\n\nWe’ve updated the paper to clarify this issue. Please look at the 2nd paragraph of Section 4 for updates. In short, as we observed empirically from the predicted videos, the output videos are within the realistic possibilities. However, in some cases, the predicted frames are not realistic and are averaging more than one future (e.g. first random sample in Figure 1-C).\n\n\n- \"The employed metrics (best PSNR/SSIM among multiple samples) can only capture the former, and are also pixel-based, not perceptual.\"\n\nThank you for the great comment. We’ve updated the paper (please look at Figure 7 and 6th paragraph of 5.3) to address your comment. In order to investigate the quality difference between SV2P predicted frames and “Finn et al (2016)”, we performed a new experiment in which we used the open sourced version of the object detector from “Huang et al. (2016)”:\nhttps://github.com/tensorflow/models/blob/master/research/object_detection/models/ssd_mobilenet_v1_feature_extractor.py\nto detect the humans inside the predicted frames. We used the confidence of this detection as an additional metric to evaluate the difference between different methods. The results of this comparison which shows higher quality for SV2P can be found in (newly added) Figure 7. \n\n\n- \"time-varying latent sampling is more stable beyond the time horizon used during training\". While true for Figure 6b), this statement is contradicted by both Figure 6a) and 6c).\"\n\nThank you for the great question. We’ve updated the paper (last two paragraphs of 5.2) to include your observation. Please note that our original claim was that the time-variant latent seems to be more “stable” beyond the time horizon used during training (which is highly evident in Figure 6b). And we are NOT claiming that time-variant latent generates “higher quality” results. However, we agree that this stability is not always the case as it is more evident in late frames of Figure 6a.\n\n\n",
"We apologize for missing that highly-relevant reference. We will include a reference in the next revision.\n\nNote that Walker et al. '16 does not predict video frames; thus, we cannot compare to the approach. Unlike Walker et al. '16, our work does not require optical flow supervision nor an optical flow solver, which tend to not work consistently on real videos (as optical flow is an open research problem [1,2,3]). Our method only uses raw videos. Furthermore, we show that a CVAE trained from scratch does not work consistently, and propose a pre-training scheme which, in our experiments, consistently finds a good solution.\n\n[1] https://arxiv.org/abs/1705.01352\n[2] https://arxiv.org/abs/1612.01925\n[3] https://arxiv.org/abs/1604.01827"
]
} | {
"paperhash": [
"ebert|self-supervised_visual_planning_with_temporal_skip_connections",
"tulyakov|mocogan:_decomposing_motion_and_content_for_video_generation",
"chen|video_imagination_from_a_single_image_with_transformation_generation",
"denton|unsupervised_learning_of_disentangled_representations_from_video",
"chiappa|recurrent_environment_simulators",
"reed|parallel_multiscale_autoregressive_density_estimation",
"liu|video_frame_synthesis_using_deep_voxel_flow",
"goodfellow|nips_2016_tutorial:_generative_adversarial_networks",
"kalchbrenner|video_pixel_networks",
"finn|deep_visual_foresight_for_planning_robot_motion",
"krishnan|structured_inference_networks_for_nonlinear_state_space_models",
"vondrick|generating_videos_with_scene_dynamics",
"xue|visual_dynamics:_probabilistic_future_frame_synthesis_via_cross_convolutional_networks",
"walker|an_uncertain_future:_forecasting_from_static_images_using_variational_autoencoders",
"jia|dynamic_filter_networks",
"gao|linear_dynamical_neural_population_models_through_nonlinear_embeddings",
"lotter|deep_predictive_coding_networks_for_video_prediction_and_unsupervised_learning",
"finn|unsupervised_learning_for_physical_interaction_through_video_prediction",
"johnson|composing_graphical_models_with_neural_networks_for_structured_representations_and_fast_inference",
"bowman|generating_sentences_from_a_continuous_space",
"mathieu|deep_multi-scale_video_prediction_beyond_mean_square_error",
"oh|action-conditional_video_prediction_using_deep_networks_in_atari_games",
"shi|convolutional_lstm_network:_a_machine_learning_approach_for_precipitation_nowcasting",
"srivastava|unsupervised_learning_of_video_representations_using_lstms",
"ranzato|video_(language)_modeling:_a_baseline_for_generative_models_of_natural_videos",
"kingma|auto-encoding_variational_bayes",
"li|video_generation_from_text",
"vondrick|generating_the_future_with_adversarial_transformers",
"fragkiadaki|motion_prediction_under_multimodality_with_conditional_stochastic_networks",
"vondrick|anticipating_the_future_by_watching_unlabeled_video",
"bubić|prediction,_cognition_and_the_brain"
],
"title": [
"Self-Supervised Visual Planning with Temporal Skip Connections",
"MoCoGAN: Decomposing Motion and Content for Video Generation",
"Video Imagination from a Single Image with Transformation Generation",
"Unsupervised Learning of Disentangled Representations from Video",
"RECURRENT ENVIRONMENT SIMULATORS",
"Parallel Multiscale Autoregressive Density Estimation",
"Video Frame Synthesis using Deep Voxel Flow",
"NIPS 2016 Tutorial: Generative Adversarial Networks",
"Video Pixel Networks",
"Published as a conference paper at ICLR 2015 ADAM: A METHOD FOR STOCHASTIC OPTIMIZATION",
"Structured Inference Networks for Nonlinear State Space Models",
"Generating Videos with Scene Dynamics",
"Visual Dynamics: Probabilistic Future Frame Synthesis via Cross Convolutional Networks",
"An Uncertain Future: Forecasting from Static Images using Variational Autoencoders",
"Dynamic Filter Networks",
"Linear dynamical neural population models through nonlinear embeddings",
"DEEP PREDICTIVE CODING NETWORKS FOR VIDEO PREDICTION AND UNSUPERVISED LEARNING",
"Conditional Generative Adversarial Nets",
"Composing graphical models with neural networks for structured representations and fast inference",
"Generating Sentences from a Continuous Space",
"DEEP MULTI-SCALE VIDEO PREDICTION BEYOND MEAN SQUARE ERROR",
"Action-Conditional Video Prediction using Deep Networks in Atari Games",
"Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting",
"Unsupervised Learning of Video Representations using LSTMs",
"VIDEO (LANGUAGE) MODELING: A BASELINE FOR GENERATIVE MODELS OF NATURAL VIDEOS",
"Auto-Encoding Variational Bayes",
"Video Generation From Text",
"Generating the Future with Adversarial Transformers",
"Motion Prediction Under Multimodality with Conditional Stochastic Networks",
"Anticipating Visual Representations from Unlabeled Video",
"Prediction, cognition and the brain"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"frederik ebert",
"chelsea finn",
"alex x lee",
"sergey levine"
],
"affiliation": [
{
"laboratory": "",
"institution": "UC Berkeley",
"location": "{'country': 'United States'}"
},
{
"laboratory": "",
"institution": "UC Berkeley",
"location": "{'country': 'United States'}"
},
{
"laboratory": "",
"institution": "UC Berkeley",
"location": "{'country': 'United States'}"
},
{
"laboratory": "",
"institution": "UC Berkeley",
"location": "{'country': 'United States'}"
}
]
},
{
"name": [
"sergey tulyakov",
"ming-yu liu",
"xiaodong yang",
"jan kautz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"baoyang chen",
"wenmin wang",
"jinzhuo wang",
"xiongtao chen"
],
"affiliation": [
{
"laboratory": "",
"institution": "Peking University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Peking University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Peking University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Peking University",
"location": "{}"
}
]
},
{
"name": [
"emily denton",
"vighnesh birodkar"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
}
]
},
{
"name": [
"silvia chiappa",
"sébastien racaniere",
"daan wierstra",
"mohamed shakir",
" deepmind",
" london"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"scott reed",
"aäron van den oord",
"nal kalchbrenner",
"sergio gómez colmenarejo",
"ziyu wang",
"dan belov",
"nando de freitas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ziwei liu",
"raymond a yeh",
"xiaoou tang",
"yiming liu",
"aseem agarwala"
],
"affiliation": [
{
"laboratory": "",
"institution": "The Chinese University of Hong Kong",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": "{}"
},
{
"laboratory": "",
"institution": "The Chinese University of Hong Kong",
"location": "{}"
},
{
"laboratory": "",
"institution": "Pony.AI Inc",
"location": "{}"
},
{
"laboratory": "",
"institution": "Google Inc",
"location": "{}"
}
]
},
{
"name": [
"ian goodfellow",
"generative modeling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nal kalchbrenner",
"aäron van den oord",
"karen simonyan",
"ivo danihelka",
"oriol vinyals",
"alex graves",
"koray kavukcuoglu",
"google deepmind",
" london"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik p kingma",
"jimmy lei ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rahul g krishnan",
"uri shalit",
"david sontag"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
}
]
},
{
"name": [
"carl vondrick",
"hamed pirsiavash",
"antonio torralba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tianfan xue",
"jiajun wu",
"katherine l bouman",
"william t freeman"
],
"affiliation": [
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology",
"location": "{}"
},
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology",
"location": "{}"
},
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology",
"location": "{}"
},
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology",
"location": "{}"
}
]
},
{
"name": [
"jacob walker",
"carl doersch",
"abhinav gupta",
"martial hebert"
],
"affiliation": [
{
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": "{}"
}
]
},
{
"name": [
"bert de brabandere",
"xu jia",
"tinne tuytelaars",
"luc van gool"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yuanjun gao",
"evan archer",
"liam paninski",
"john p cunningham"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"william lotter",
"gabriel kreiman",
"david cox"
],
"affiliation": [
{
"laboratory": "",
"institution": "Harvard University Cambridge",
"location": "{'postCode': '02215', 'region': 'MA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Harvard University Cambridge",
"location": "{'postCode': '02215', 'region': 'MA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Harvard University Cambridge",
"location": "{'postCode': '02215', 'region': 'MA', 'country': 'USA'}"
}
]
},
{
"name": [
"mehdi mirza",
"simon osindero"
],
"affiliation": [
{
"laboratory": "",
"institution": "Université de Montréal Montréal",
"location": "{'postCode': 'H3C 3J7', 'region': 'QC'}"
},
{
"laboratory": "",
"institution": "Flickr / Yahoo Inc",
"location": "{'postCode': '94103', 'settlement': 'San Francisco', 'region': 'CA'}"
}
]
},
{
"name": [
"matthew james johnson",
"david duvenaud",
"alexander b wiltschko",
"sandeep r datta",
"ryan p adams"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"samuel r bowman",
"luke vilnis",
"oriol vinyals",
"andrew m dai",
"rafal jozefowicz",
"bengio samy",
" google brain"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"michael mathieu",
"camille couprie",
"yann lecun"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
}
]
},
{
"name": [
"junhyuk oh",
"xiaoxiao guo",
"honglak lee",
"richard lewis",
"satinder singh"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'postCode': '48109', 'settlement': 'Ann Arbor', 'region': 'MI', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'postCode': '48109', 'settlement': 'Ann Arbor', 'region': 'MI', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'postCode': '48109', 'settlement': 'Ann Arbor', 'region': 'MI', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'postCode': '48109', 'settlement': 'Ann Arbor', 'region': 'MI', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'postCode': '48109', 'settlement': 'Ann Arbor', 'region': 'MI', 'country': 'USA'}"
}
]
},
{
"name": [
"xingjian shi",
"zhourong chen",
"hao wang",
"dit-yan yeung",
"wai-kin wong",
"wang-chun woo"
],
"affiliation": [
{
"laboratory": "",
"institution": "Hong Kong University of Science and Technology",
"location": "{}"
},
{
"laboratory": "",
"institution": "Hong Kong University of Science and Technology",
"location": "{}"
},
{
"laboratory": "",
"institution": "Hong Kong University of Science and Technology",
"location": "{}"
},
{
"laboratory": "",
"institution": "Hong Kong University of Science and Technology",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nitish srivastava",
"ruslan salakhutdinov"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{'addrLine': '6 Kings College Road', 'postCode': 'M5S 3G4', 'settlement': 'Toronto', 'region': 'ON', 'country': 'CANADA'}"
},
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{'addrLine': '6 Kings College Road', 'postCode': 'M5S 3G4', 'settlement': 'Toronto', 'region': 'ON', 'country': 'CANADA'}"
}
]
},
{
"name": [
"marc ' aurelio ranzato",
"arthur szlam",
"joan bruna",
"michael mathieu",
"ronan collobert",
"sumit chopra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik p kingma",
"max welling"
],
"affiliation": [
{
"laboratory": "Machine Learning Group",
"institution": "Universiteit van Amsterdam",
"location": "{}"
},
{
"laboratory": "Machine Learning Group",
"institution": "Universiteit van Amsterdam",
"location": "{}"
}
]
},
{
"name": [
"yitong li",
"martin renqiang",
"dinghan shen",
"david carlson",
"lawrence carin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "NEC Labs America Princeton",
"institution": "",
"location": "{'postCode': '08540', 'region': 'NJ'}"
},
{
"laboratory": "NEC Labs America Princeton",
"institution": "",
"location": "{'postCode': '08540', 'region': 'NJ'}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"carl vondrick",
"antonio torralba"
],
"affiliation": [
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology",
"location": "{}"
},
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology",
"location": "{}"
}
]
},
{
"name": [
"katerina fragkiadaki",
"jonathan huang",
"susanna ricco"
],
"affiliation": [
{
"laboratory": "",
"institution": "Carnegie Mellon University † Google Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Carnegie Mellon University † Google Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Carnegie Mellon University † Google Research",
"location": "{}"
}
]
},
{
"name": [
"carl vondrick",
"hamed pirsiavash",
"antonio torralba"
],
"affiliation": [
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology †University of Maryland",
"location": "{'settlement': 'Baltimore County'}"
},
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology †University of Maryland",
"location": "{'settlement': 'Baltimore County'}"
},
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology †University of Maryland",
"location": "{'settlement': 'Baltimore County'}"
}
]
},
{
"name": [
"andreja bubic",
"d yves von cramon",
"ricarda i schubotz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 88 | 5.545455 | 0.666667 | 0.833333 | null | null | null | null | null | rk49Mg-CW |
|
hu|topicbased_question_generation|ICLR_cc_2018_Conference | Topic-Based Question Generation | Asking questions is an important ability for a chatbot. This paper focuses on question generation. Although there are existing works on question generation based on a piece of descriptive text, it remains to be a very challenging problem. In the paper, we propose a new question generation problem, which also requires the input of a target topic in addition to a piece of descriptive text. The key reason for proposing the new problem is that in practical applications, we found that useful questions need to be targeted toward some relevant topics. One almost never asks a random question in a conversation. Due to the fact that given a descriptive text, it is often possible to ask many types of questions, generating a question without knowing what it is about is of limited use. To solve the problem, we propose a novel neural network that is able to generate topic-specific questions. One major advantage of this model is that it can be trained directly using a question-answering corpus without requiring any additional annotations like annotating topics in the questions or answers. Experimental results show that our model outperforms the state-of-the-art baseline. | {
"name": [],
"affiliation": []
} | We propose a neural network that is able to generate topic-specific questions. | [] | null | 2018-02-15 22:29:38 | 22 | null | null | null | null | null | null | null | null | false | The pros and cons of the paper under consideration can be summarized below:
Pros:
* Reviewers thought the underlying model is interesting and intuitive
* Main contributions are clear
Cons:
* There is confusion between keywords and topics, which is leading to a somewhat confused explanation and lack of clear comparison with previous work. Because of this, it is hard to tell whether the proposed approach is clearly better than the state of the art.
* Typos and grammatical errors are numerous
As the authors noted, the concerns about the small dataset are not necessarily warranted, but I would encourage the authors to measure the statistical significance of differences in results, which would help alleviate these concerns.
An additional comment: it might be worth noting the connections to query-based or aspect-based summarization, which also have a similar goal of performing generation based on specific aspects of the content.
Overall, the quality of the paper as-is seems to be somewhat below the standards of ICLR (although perhaps on the borderline), but the idea itself is novel and results are good. I am not recommending it for acceptance to the main conference, but it may be an appropriate contribution for the workshop track. | {
"review_id": [
"rk27E6PlG",
"HyrNUBYlz",
"B1chwjFlz"
],
"review": [
{
"title": "title: Motivation, experiments, and evaluation are flawed",
"paper_summary": null,
"main_review": "main_review: This paper presents a neural network-based approach to generate topic-specific questions with the motivation that topical questions are more meaningful in practical applications like real-world conversations. Experiments and evaluation have been conducted on the AQAD corpus to show the effectiveness of the approach.\n\nAlthough the main contributions are clear, the paper contains numerous typos, grammatical errors, incomplete sentences, and a lot of discrepancies between text, notations, and figures making it ambiguous and difficult to follow. \n\nAuthors claim to generate topic-specific questions, however, the dataset choice, experiments, and examples show that the generated questions are essentially keyword/key phrase-based. This is also apparent in Section 4.1 where authors present some observation without any supporting proof or empirical evidence. Moreover, the example in Figure 1 shows a conversation, but, typically, in an ongoing multi-round conversation people do not tend to repeat the keywords or key phrases or named entities, and topic shifts might occur at any time. \n\nOverall, a misconception about topic vs. keywords might have led the authors to claim that their work is the first to generate topic-specific questions whereas this has been studied before by Chali & Hasan (2015) in a non-neural setting. \"Topic\" in general has a broader meaning, I would suggest authors to see this to get an idea about what topic entails to in a conversational setting: https://developer.amazon.com/alexaprize/contest-rules . I think the proposed work is mostly related to: 1) \"Towards Natural Question-Guided Search\" by Kotov and Zhai (2010), and 2) \"K2Q: Generating Natural Language Questions from Keywords with User Refinements\" by Zheng et al. (2011), and other recent factoid question generation papers where questions are generated from a given fact (e.g. \"Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus\" by Serban et al. (2016)).\n\nIt is not clear how the question types are extracted from the given sentences. Please provide details. Which keywords are employed to accomplish this? Also, please explain the absence of the \"why\" type question. \n\nFigure 3 and the associated descriptions are very hard to follow. Please draw the figure by matching it with the descriptions. Where are the bi-LSTMs in the figure? What are ac_t and em_t? \n\nMy major concern is with the experiments and evaluation. The dataset essentially contains questions about product reviews and does not match authors motivation/observation about real-world conversations. Moreover, evaluation has been conducted on a very small test set (just about 1% of the selected corpus), making the results unconvincing. More details are necessary about how exactly Kim's and Liu's models are used to get question types and topics. \n\nHuman evaluation results per category would have been more useful. How did you combine the scores of the human evaluation categories? Also, automatic evaluation and human evaluation results do not correlate well. Please explain.\n\n\n\n\n\n\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Review from AnonReviewer1",
"paper_summary": null,
"main_review": "main_review: This paper proposed a topic-based question generation method, which requires the input of target topic in addition to the descriptive text. In the proposed method, the authors first extract the topic based on the similarity of the target question token and answer token using word embedding. Then, the author proposed a topic-specific question generation model by encoding the extracted topic using LSTM and a pre-decode technique that the second decoding is conditioned on the hidden representation of the first decoding result. The authors performed the experiment on AQAD dataset, and show their performance achieve state-of-the-art result when using automatically generated topic, and perform better when using the ground truth topic. \n\n[Strenghts]\n\nThis paper introduced a topic-based question generation model, which generate question conditioned on the topic and question type. The authors proposed heuristic method to extract the topic and question type without further annotation. The proposed model can generate question with respect to different topic and pre-decode seems a useful trick. \n\n[Weaknesses]\n\nThis paper proposed an interesting and intuitive question generation model. However, there are several weaknesses existed:\n\n1: It's true that given a descriptive text, it is often possible to ask many types of questions. But it also leads to different answers. In this paper, the authors treat the descriptive text as answers, is this motivation still true if the question generation is conditioned on answers, not descriptive text? Table 4 shows some examples, given the sentence, even conditioned on different topics, the generated question is similar. \n\n2: In terms of the experiment, the authors use AQAD to evaluate proposed method. When the ground truth topic is provided, it's not fair to compare with the previous method, since knowing the similar word present in the answer will have great benefits to question generation. \n\nIf we only consider the automatically generated topic, the performance of the proposed model is similar to the previous method (Du et al). Without the pre-decode technique, the performance is even worse. \n\n3: In section 4.2, the authors claim this is the theoretical explanation of the generalization capability of the proposed model (also appear in topic effect analysis). It is true that the proposed method may have better compositionality, but I didn't see any **theoretical** explantation about this. \n\n4: The automatically extracted topic can be very noisy, but the paper didn't mention any of the extracted topics on AQAD dataset. \n\n[Summary]\n\na topic-based question generation method, which requires the input of target topic in addition to the descriptive text. However, as I pointed out above, there are several weaknesses in the paper. Taking all these into account, I think this paper still needs more works to make it solid and comprehensive before being accepted.\nAdd Comment",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting paper on a question generation approach focusing on some topics and question type with promising experimental results. ",
"paper_summary": null,
"main_review": "main_review: The authors propose a scheme to generate questions based on some answer sentences, topics and question types. Topics are extracted from questions using similar words in question-answer pairs. It is similar to what we find in some Q&A systems (like lexical answer types in Watson). A sequence classifier is also used to tag the presence of topic words. Question types correspond mostly to salient questions words. LSTMs are used to encode the various inputs and generate the questions. \n\nThe paper is well written and easy to follow. I would expect more explanations why sentence classification and labeling results presented in Table 2 are so low. \n\nExperimental results on question generation are convincing and clearly indicate that the approach is effective to generate relevant and well-structured short questions. \n\nThe main weakness of the paper is the selected set of question types that seems to be a fuzzy combination of answer types and question types (for ex. yes/no). Some questions type can be highly ambiguous; for instance “What” might lead to a definition, a quantity, some named entities... Hence I suggest you revise your qt set. \n\nI would also suggest, for your next experiments, that you try to generate questions leading to answers with list of values. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.2222222238779068,
0.3333333432674408,
0.7777777910232544
],
"confidence": [
1,
0.75,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Thank you for your review and comments",
"Thank you for your review and comments",
"Thank you for your helpful feedback!"
],
"comment": [
"1. We are very sorry about the typos, grammatical errors, etc. We will fix them in the final version. And we will fix the incomplete Figure 3 in the new version.\n\n2. Thank you for pointing out the \"topic\" problem. The terms topic and keyword are fairly ambiguous. We can use the keyword in the new version. Our work is quite different from the paper that you mentioned. Our is not generating questions totally based on given keywords. Our motivation is based on the fact that several questions can be asked based on a given sentence. Hence, we want to generate questions about the given \"subject\" or \"theme\" conditioned on the given descriptive text. \n\n3. We simply use keywords such as \"what\",\"how\", etc. to extract question types from the questions in the training data set. We will add details in the new version. The absence of the “why” type is a mistake, we missed it. In that case, all the \"why\" data is divided into the \"other\" type. Especially, the experimental results for other types won't be changed. And our conclusion still holds.\n\n4. It is impossible for people to ask questions about some things but don't mention them in the question. For example: **question** \"are the ==tips== interchangeable ?\", **answer**: \"the ==tips== are one piece metallic construction solid glued in place .\". So our observation/motivation is still true. As for the size of the test set, we believe that thousands of samples are enough to test the model since those samples are randomly sampled from a big dataset. For example, the NIST test set for machine translation consists of thousands samples too and its training set can be 2 million sentence pairs (we can see that in lots of machine translation researches). But we will employ a larger dataset in the next experiment. We will add more details about Kim's and Liu's models in the revised version. \n\n5. For \"How did you combine the scores of the human evaluation categories?\", that is a problem. We also found it is hard to combine them, so we ask the participants to give one score according to the naturalness modality from an overall view. For \"why automatic evaluation and human evaluation results do not correlate well,\" in general, e.g. \"+how tall is+ the =lamp= itself ?\", regarding different ways to formulate questions, it needs many words (marked by ++) to express , while regarding to what subject/topic (marked by ==) to ask questions, it needs just one or two. Since BLEU favours longer references while humans judge based on overall expressions, that's the reason for different performances of automatic BLEU evaluation and human judgments.",
"1. We believe our motivation still true even if the question generation is conditioned on answers. That is because: (1) quite many answers can’t be classified properly even by people. Please see the example for reviewer 1. (2) If there is a good correspondence among all the questions and answers, the accuracy of the sentence classification and labeling will not be so low (see Table 2 in the paper). (3) Actually , daily conversation can also be regarded as a kind of inquiry and answer. (4) Here, we given an example to explain that our data set still match our motivation/observation : **question**: \"are the =tips= interchangeable ?\", **answer**: \"the =tips= are one piece metallic construction solid glued in place .\". On the other hand, we believe the generated questions conditioned on different topics on Table 4 are not similar. The first question in the last three rows is about \"the manufacturing date\", and the second question is asking \"the origin of the bottle\", while the third one is about the \"manufacturer\". Of course, it is right to say they are similar if we talk about the similarity at a higher level since they are all about \"manufacture\". And this is because they have the same context \"bottle says 'made in usa'.\"\n\n2. Yes, we know and that is also the reason why we split the Table 1 into 2 parts. Let us only consider the automatically generated topic. It is true that our performance is even worse without the pre-decode technique (but we have pre-decode technique). (1) We give the reason for that in the paper. It is because we would like to build a system to generate controlled questions but the poor sentence classification and labeling accuracy lead our system to generate wrong sentences. (2) As there is no existing method that can perform our proposed new task, we compare with the conventional question generation (our model is not designed for that purpose). (3) The inconsistency of training and testing puts our model at a disadvantage. To achieve the proposed goal, we have to use the extracted ground truth to train, but to have a fair comparison with the conventional question generation method we then must test our model using auto-generated topics and question types.\n\n3. Thank you for pointing out this problem. We will correct that. What we have given is not a strict mathematical proof, it is a brief explanation. As assumed it is hard to give a mathematical proof in neural networks. \n\n4. We used several methods to tackle the noise problem. (1) We removed the stop words using NLTK(a natural language toolkit). (2) We employed the Bagging approach in the extraction process. (3) We proposed the pre-decode technique. At the same time we allow the topic to correspond to the empty value. According to our statistics, about 32.6% of the sentences in the training set failed to extract the topic. We would rather have it corresponding to the empty value than to introduce noise. Here, we given some examples: \n(1) **answer** : \"yes it is . it is a great product . it works with all devices except samsung 7 inch tablet . it is not the keyboard it is samsung . good luck.\" **question** : \"is this keyboard compatable with the acer a500 tablet ?\" **topics** : \"tablet keyboard\"\n(2) **answer** : \"yes . it is comfortable even with my glasses on .\" **question** : \"are these comfortable if you wear glasses ? do they hurt your ears from physical contact ?\" **topics** : \"comfortable glasses\" \n(3) **answer** : \"it does . 
but it is not as good as smart phone apps available today .\" **question** : \"this equipment translates from spanish to portuguese ?\" **topics** : \"\" (empty)\n",
"We are also surprised about the poor accuracy of sentence classification and labeling. Based on observation and analysis of the result. We noticed that the difficulty lies in the fact that multiple questions can be asked based on one given sentence. It is also based on this fact that we proposed to ask questions targeted toward some relevant indicators. \n\nFor example, for a given answer \"it's sort of got a cardboard feel to it , but it feels very sturdy nonetheless.\" , the question might be \"how does it feel?\", \"what does it feel like?\" or \"is it sturdy enough?\" but the ground truth question is \"what material is it made out of ?\". So, it is quite challenging to generate questions consistent with the given ground truth based only on the given answer.\n\nWe will revise the qt set. Thanks. There are actual ambiguities existing in some question types. Since we want to propose a scheme to generate controlled questions in an unsupervised manner, so there is little information can help us to identify the question types in detail. But the \"answer types\" you mentioned greatly inspired us. We can do that by considering the answers type. Thanks.\n\nThanks for experiment suggestion. It's an interesting idea. We will do that in the next experiment. We think an special memory mechanism can be designed to do that."
]
} | {
"paperhash": [
"bahdanau|neural_machine_translation_by_jointly_learning_to_align_and_translate",
"chen|microsoft_coco_captions:_data_collection_and_evaluation_server",
"du|learning_to_ask:_neural_question_generation_for_reading_comprehension",
"kim|convolutional_neural_networks_for_sentence_classification",
"diederik|adam:_a_method_for_stochastic_optimization",
"mishkin|all_you_need_is_a_good_init",
"sutskever|sequence_to_sequence_learning_with_neural_networks",
"cogswell|reducing_overfitting_in_deep_networks_by_decorrelating_representations",
"liu|empower_sequence_labeling_with_task-aware_neural_language_model",
"mostafazadeh|generating_natural_questions_about_an_image",
"wan|modeling_ambiguity,_subjectivity,_and_diverging_viewpoints_in_opinion_question_answering_systems",
"zhou|neural_system_combination_for_machine_translation"
],
"title": [
"NEURAL MACHINE TRANSLATION BY JOINTLY LEARNING TO ALIGN AND TRANSLATE",
"AN ALGORITHM FOR THE PRINCIPAL COMPONENT ANALYSIS OF LARGE DATA SETS",
"Learning to Ask: Neural Question Generation for Reading Comprehension",
"Convolutional Neural Networks for Sentence Classification",
"Published as a conference paper at ICLR 2015 ADAM: A METHOD FOR STOCHASTIC OPTIMIZATION",
"ALL YOU NEED IS A GOOD INIT",
"Sequence to Sequence Learning with Neural Networks",
"Barlow Twins: Self-Supervised Learning via Redundancy Reduction",
"Empower Sequence Labeling with Task-Aware Neural Language Model",
"Generating Natural Questions About an Image",
"Modeling Ambiguity, Subjectivity, and Diverging Viewpoints in Opinion Question Answering Systems",
"Neural System Combination for Machine Translation"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"dzmitry bahdanau",
"kyunghyun cho",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nathan halko",
"mark tygert"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Colorado at Boulder",
"location": "{'postCode': '526 UCB', 'settlement': 'Boulder', 'region': 'CO'}"
},
{
"laboratory": "",
"institution": "Tel Aviv University",
"location": "{'addrLine': 'Ramat Aviv'}"
}
]
},
{
"name": [
"xinya du",
"junru shao",
"claire cardie"
],
"affiliation": [
{
"laboratory": "",
"institution": "Cornell University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Cornell University",
"location": "{}"
}
]
},
{
"name": [
"yoon kim"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
}
]
},
{
"name": [
"diederik p kingma",
"jimmy lei ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"dmytro mishkin",
"jiri matas"
],
"affiliation": [
{
"laboratory": "",
"institution": "Czech Technical University",
"location": "{'country': 'Prague Czech Republic'}"
},
{
"laboratory": "",
"institution": "Czech Technical University",
"location": "{'country': 'Prague Czech Republic'}"
}
]
},
{
"name": [
"ilya sutskever",
"oriol vinyals",
"quoc v le"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jure zbontar",
"li jing",
"ishan misra",
"yann lecun",
"stéphane deny"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"liyuan liu",
"jingbo shang",
"xiang ren",
"frank f xu",
"huan gui",
"jian peng",
"jiawei han"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": "{}"
}
]
},
{
"name": [
"nasrin mostafazadeh",
"ishan misra",
"jacob devlin",
"margaret mitchell",
"xiaodong he",
"lucy vanderwende"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Rochester",
"location": "{}"
},
{
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mengting wan",
"julian mcauley"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'San Diego'}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'San Diego'}"
}
]
},
{
"name": [
"long zhou",
"wenpeng hu",
"jiajun zhang",
"chengqing zong"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Chinese Academy of Sciences",
"location": "{'settlement': 'Beijing', 'country': 'China'}"
},
{
"laboratory": "",
"institution": "University of Chinese Academy of Sciences",
"location": "{'settlement': 'Beijing', 'country': 'China'}"
},
{
"laboratory": "",
"institution": "University of Chinese Academy of Sciences",
"location": "{'settlement': 'Beijing', 'country': 'China'}"
},
{
"laboratory": "",
"institution": "University of Chinese Academy of Sciences",
"location": "{'settlement': 'Beijing', 'country': 'China'}"
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.444444 | 0.75 | null | null | null | null | null | rk3pnae0b |
||
richemond|diffusing_policies_towards_wasserstein_policy_gradient_flows|ICLR_cc_2018_Conference | Diffusing Policies : Towards Wasserstein Policy Gradient Flows | Policy gradients methods often achieve better performance when the change in policy is limited to a small Kullback-Leibler divergence. We derive policy gradients where the change in policy is limited to a small Wasserstein distance (or trust region). This is done in the discrete and continuous multi-armed bandit settings with entropy regularisation. We show that in the small steps limit with respect to the Wasserstein distance $W_2$, policy dynamics are governed by the heat equation, following the Jordan-Kinderlehrer-Otto result. This means that policies undergo diffusion and advection, concentrating near actions with high reward. This helps elucidate the nature of convergence in the probability matching setup, and provides justification for empirical practices such as Gaussian policy priors and additive gradient noise. | {
"name": [],
"affiliation": []
} | Linking Wasserstein-trust region entropic policy gradients, and the heat equation. | [
"Optimal transport",
"policy gradients",
"entropy regularization",
"reinforcement learning",
"heat equation",
"Wasserstein",
"JKO",
"gradient flows"
] | null | 2018-02-15 22:29:49 | 28 | null | null | null | null | null | null | null | null | false | Dear authors,
The authors all agreed that this was an interesting topic but that the novelty, either theoretical or empirical, was lacking. Thus, the paper cannot be accepted to ICLR in its current state, but I encourage the authors to make the recommended updates and to push their idea further. | {
"review_id": [
"rk0iJ0FgM",
"Syy9DXtef",
"Sywhphuez"
],
"review": [
{
"title": "title: Important topic but the work is a presentation of known material",
"paper_summary": null,
"main_review": "main_review: The main object of the paper is the (entropy regularized) policy updates. Policy iterations are viewed as a gradient flow in the small timestep limit. Using this, (and following Jordan et al. (1998)) the desired PDE (Equation 21) is obtained. The rest of the paper discusses the implications of Equation 21 including but not limited to what happens when the time derivative of the policy is zero, and the link to noisy gradients.\n\nEven though the topic is interesting and would be of interest to the community, the paper mainly presents known results and provides an interpretation from the point of view of policy dynamics. I fail to see the significance nor the novelty in this work (esp. in light of Jordan et al. (1998) and Peyre (2015)).\n\nThat said, I believe that exposing such connections will prove to be useful, and I encourage the authors to push the area forward. In particular, it would be useful to see demonstrations of the idea, and experimental justifications even in the form of references would be a welcome addition to the literature.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Diffusing Policies : Towards Wasserstein Policy Gradient Flows",
"paper_summary": null,
"main_review": "main_review: In this paper the authors studied policy gradient with change of policies limited by a trust region of Wasserstein distance in the multi-armed bandit setting. They show that in the small steps limit, the policy dynamics are governed by the heat equation (Fokker-Planck equation). This theoretical result helps us understand both the convergence property and the probability matching property in policy gradient using concepts in diffusion and advection from the heat equation. To the best of my knowledge, this line of research was dated back to the paper by Jordan et al in 1998, where they showed that the continuous control policy transport follows the Fokker-Planck equation. In general I found this line of research very interesting as it connects the convergence of proximal policy optimization to optimal transport, and I appreciate seeing recent developments on this line of work. \n\nIn terms of theoretical contributions, I see that this paper contains some novel ideas in connecting gradient flow with Wasserstein distance regularization to the Fokker-Planck equation. Furthermore its interpretation on the Brownian diffusion processes justifies the link between entropy-regularization and noisy gradients (with isotropic Gaussian noise regularization for exploration). I also think this paper is well-written and mathematically sound. While I understand the knowledge of this paper based on standard knowledge in PDE of diffusion processes and Ito calculus, I am not experienced enough in this field to judge whether these contributions are significant enough for a standalone contribution, as the problem setting is limited to multi-armed bandits.\n\nMy major critic to this paper is its practical value. Besides the proposed Sinkhorn-Knopp based algorithm in the Appendix that finds the optimal policy as fixed point of (44), I am unsure how these results lead to more effective policy gradient algorithms (with lower variance in gradient estimators, or with quasi-monotonic performance improvement etc.). There are also no experiments in this paper (for example to compare the standard policy gradient algorithm with the one that solves the Fokker-Planck equation) to demonstrate the effectiveness of the theoretical findings.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting insights on policy gradient flows but the novel contributions are unclear",
"paper_summary": null,
"main_review": "main_review: The paper ‘Diffusing policies: Towards Wasserstein policy gradient flows’ explores \nthe connections between reinforcement learning and the theory of quadratic optimal transport (i.e.\nusing the Wasserstein_2 as a regularizer of an iterative problem that converges toward\nan optimal policy). Following a classical result from Jordan-Kinderlehrer-Otto, they show that \nthe policy dynamics are governed by the heat equation, that translates in an advection-diffusion \nscheme. This allows to draw insights on the convergence of empirical practices in the field.\n\nThe paper is clear and well-written, and provides a comprehensive survey of known results in the \nfield of Optimal Transport. The insights on why empirical strategies such as additive gradient noise\nare very interesting and helps in understanding why they work in practical settings. That being said, \nmost of the results presented in the paper are already known (e.g. from the book of Samtambrogio or the work \nof G. Peyré on entropic Wasserstein gradient flows) and it is not exactly clear what are the original\ncontributions of the paper. The fact that the objective is to learn policies\nhas little to no impact on the derivations of calculus. It clearly suggests that the entropy \nregularized Wasserstein_2 distance should be used in numerical experiments but this point is not \nsupported by experimental results. Their direct applications is rapidly ruled out by highlighting the \ncomputational complexity of solving such gradient flows but in the light of recent papers (see \nthe work of Genevay https://arxiv.org/abs/1706.00292 or another paper submitted to ICLR on large scale optimal transport \nhttps://openreview.net/forum?id=B1zlp1bRW) numerical applications should be tractable. For these reasons \nI feel that the paper would clearly be more interesting for the practitioners (and maybe to some extent \nfor the audience of ICLR) if numerical applications of the presented theory were discussed or sketched \nin classical reinforcement learning settings. \n\nMinor comments:\n - in Equation (10) why is there a ‘d’ in front of the coupling \\gamma ? \n - in Section 4.5, please provide references for why numerical estimators of gradient of Wasserstein distances\nare biased. \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.4444444477558136,
0.3333333432674408
],
"confidence": [
0.75,
0.5,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"RE : Interesting insights",
"RE : practical value",
"Thank you for your review and comments."
],
"comment": [
"We thank the reviewer for their interest and for their comments on clarity and style.\n\nWe do agree the paper would benefit from practical results ; we feel there is value from a theoretical standpoint in exposing the connections with proximal mappings and gradient flow PDEs to the RL community, as we hope the general method of equating proximal regularizer, gradient flow PDE, and related stochastic process will become more widespread.\n\nWe are also thankful for your referencing of https://arxiv.org/abs/1706.00292 and https://openreview.net/forum?id=B1zlp1bRW, both of which we were unaware of as of time of writing this paper, obviously. We are indeed hopeful to remediate the lack of empirical results due to both tractability of large-scale optimal transport, and of compatibility of function approximation methods with Fokker-Planck diffusion. We will endeavour to include insights from these papers in further work. \n\nFinally, the d_\\gamma in equation (10) is a notation artifact made to link with the d_\\gamma in equation (9), but it probably is cleaner to correct and omit it. Regarding biased sample gradients of the Wasserstein distance, we do provide our article's fifth reference - a key recent paper that has highlighted this issue is Bellemare et al.'s https://arxiv.org/abs/1705.10743 ; we will clarify that we are referring to sample gradients bias here.",
"Thank you very much for your insights and comments, as well as encouraging words on soundness and writing style. We are in agreement that the paper would benefit both from a theoretical standpoint if we could extend the results to the n-step returns setting, and from a practical perspective if we could an exhibit a numerically tractable algorithm using the Wasserstein policy iteration. While theoretical difficulties have arisen in combining neural-network based function approximation with the Fokker-Planck PDE, we do share this reviewer's concern and urgency on that point, and are currently undergoing work on this in a tabular setting.",
"Indeed the calculations of sections 3 are found in the major work of Jordan et al. (1998) ; however, it is to our knowledge the first time that the entropy-regularized policy gradient functional is examined in a Wasserstein trust region context (which explains why no references were given for empirical work) in the reinforcement learning context. We do respectfully agree with the reviewer that adding empirical results is the most urgent line of further work. \n\nWe do state clearly that 'Our contribution largely consists in highlighting the connection between the functional of reinforcement learning and these mathematical methods inspired by statistical thermodynamics, in particular\nthe Jordan-Kinderlehrer-Otto result.' in the discussion. However, and as was stated by another reviewer ('Furthermore its interpretation on the Brownian diffusion processes justifies the link between entropy-regularization and noisy gradients (with isotropic Gaussian noise regularization for exploration)', we believe that the SDE interpretation is new and gives theoretical and intuitive grounding to such articles as https://arxiv.org/abs/1706.10295 and https://arxiv.org/pdf/1707.06887.pdf. Similarly the diffusive nature of convergence to the energy-based policies of Sabes and Jordan was not previously known to us; and we hope the method we have used opens up several new possibilities of continuous relaxations of trust-region RL settings via SDEs and PDEs."
]
} | {
"paperhash": [
"amari|information_geometry_and_its_applications",
"ambrosio|gradient_flows:_in_metric_spaces_and_in_the_space_of_probability_measures",
"bellemare|a_distributional_perspective_on_reinforcement_learning",
"bellemare|the_cramer_distance_as_a_solution_to_biased_wasserstein_gradients",
"bousquet|from_optimal_transport_to_generative_modeling:_the_vegan_cookbook",
"bregman|the_relaxation_method_of_finding_the_common_point_of_convex_sets_and_its_application_to_the_solution_of_problems_in_convex_programming",
"carlier|convergence_of_entropic_schemes_for_optimal_transport_and_gradient_flows",
"chaudhari|deep_relaxation:_partial_differential_equations_for_optimizing_deep_neural_networks",
"cominetti|asymptotic_analysis_of_the_exponential_penalty_trajectory_in_linear_programming",
"cuturi|sinkhorn_distances:_lightspeed_computation_of_optimal_transport",
"frogner|learning_with_a_wasserstein_loss",
"genevay|gan_and_vae_from_an_optimal_transport_point_of_view",
"gozlan|transport_inequalities._a_survey",
"jordan|the_variational_formulation_of_the_fokker-planck_equation",
"kantorovich|on_the_transfer_of_masses_(in_russian)",
"léonard|a_survey_of_the_schrodinger_problem_and_some_of_its_connections_with_optimal_transport",
"mnih|human-level_control_through_deep_reinforcement_learning",
"mnih|asynchronous_methods_for_deep_reinforcement_learning",
"nesterov|interior-point_polynomial_algorithms_in_convex_programming",
"peyré|entropic_wasserstein_gradient_flows",
"revuz|grundlehren_der_mathematischen_wissenschaften",
"sabes|reinforcement_learning_by_probability_matching",
"santambrogio|optimal_transport_for_applied_mathematicians_:_calculus_of_variations,_pdes_and_modeling._progress_in_nonlinear_differential_equations_and_their_applications",
"schrodinger|uber_die_umkehrung_der_naturgesetze",
"schulman|high-dimensional_continuous_control_using_generalized_advantage_estimation",
"sinkhorn|concerning_nonnegative_matrices_and_doubly_stochastic_matrices",
"villani|optimal_transport_:_old_and_new._grundlehren_der_mathematischen_wissenschaften",
"williams|function_optimization_using_connectionist_reinforcement_learning_algorithms"
],
"title": [
"Information Geometry and Its Applications",
"Gradient flows: in metric spaces and in the space of probability measures",
"A distributional perspective on reinforcement learning",
"The cramer distance as a solution to biased wasserstein gradients",
"From optimal transport to generative modeling: the vegan cookbook",
"The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming",
"Convergence of entropic schemes for optimal transport and gradient flows",
"Deep relaxation: partial differential equations for optimizing deep neural networks",
"Asymptotic analysis of the exponential penalty trajectory in linear programming",
"Sinkhorn distances: Lightspeed computation of optimal transport",
"Learning with a wasserstein loss",
"Gan and vae from an optimal transport point of view",
"Transport inequalities. a survey",
"The variational formulation of the Fokker-Planck equation",
"On the transfer of masses (in russian)",
"A survey of the Schrodinger problem and some of its connections with optimal transport",
"Human-level control through deep reinforcement learning",
"Asynchronous methods for deep reinforcement learning",
"Interior-point polynomial algorithms in convex programming",
"Entropic wasserstein gradient flows",
"Grundlehren der mathematischen Wissenschaften",
"Reinforcement learning by probability matching",
"Optimal Transport for Applied Mathematicians : Calculus of Variations, PDEs and Modeling. Progress in Nonlinear Differential Equations and Their Applications",
"Uber die umkehrung der naturgesetze",
"High-dimensional continuous control using generalized advantage estimation",
"Concerning nonnegative matrices and doubly stochastic matrices",
"Optimal Transport : Old and New. Grundlehren der mathematischen Wissenschaften",
"Function optimization using connectionist reinforcement learning algorithms"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"s amari"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"l ambrosio",
"n gigli",
"g savaré"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"m g bellemare",
"w dabney",
"r munos"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"m g bellemare",
"i danihelka",
"w dabney",
"s mohamed",
"b lakshminarayanan",
"s hoyer",
"r munos"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"o bousquet",
"s gelly",
"i tolstikhin",
"c j simon-gabriel",
"b scholkopf"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"l m bregman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"g carlier",
"v duval",
"g peyre",
"b schmitzer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"p chaudhari",
"a oberman",
"s osher",
"s soatto",
"g carlier"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"r cominetti",
"j san martin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"m cuturi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"c frogner",
"c zhang",
"h mobahi",
"m araya-polo",
"t poggio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"a genevay",
"g peyré",
"m cuturi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"n gozlan",
"c léonard"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"r jordan",
"d kinderlehrer",
"f otto"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"l kantorovich"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"c léonard"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"v mnih",
"k kavukcuoglu",
"d silver",
"a a rusu",
"j veness",
"m g bellemare",
"a graves",
"m riedmiller",
"a k fidjeland",
"g ostrovski"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"v mnih",
"a puigdomenech",
"m badia",
"a mirza",
"t graves",
"t lillicrap",
"d harley",
"k silver",
" kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"y nesterov",
"a nemirovskii",
"y ye"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"g peyré"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"d revuz",
"m yor"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"p sabes",
"m jordan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"f santambrogio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"e schrodinger"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"j schulman",
"s levine",
"p moritz",
"m i jordan",
"p abbeel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"r sinkhorn",
"p knopp"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"c villani"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"r j williams",
"j peng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"1707.06887v1",
"1705.10743v1",
"1705.07642v1",
"",
"1512.02783v2",
"1704.04932v2",
"",
"",
"1506.05439v3",
"1706.01807v1",
"1003.3852v1",
"",
"",
"1308.0215v1",
"",
"1602.01783v2",
"",
"1502.06216v4",
"",
"",
"",
"",
"arXiv:1502.05477",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.37037 | 0.583333 | null | null | null | null | null | rk3mjYRp- |
||
quan|policy_gradient_for_multidimensional_action_spaces_action_sampling_and_entropy_bonus|ICLR_cc_2018_Conference | Policy Gradient For Multidimensional Action Spaces: Action Sampling and Entropy Bonus | In recent years deep reinforcement learning has been shown to be adept at solving sequential decision processes with high-dimensional state spaces such as in the Atari games. Many reinforcement learning problems, however, involve high-dimensional discrete action spaces as well as high-dimensional state spaces. In this paper, we develop a novel policy gradient methodology for the case of large multidimensional discrete action spaces. We propose two approaches for creating parameterized policies: LSTM parameterization and a Modified MDP (MMDP) giving rise to Feed-Forward Network (FFN) parameterization. Both of these approaches provide expressive models to which backpropagation can be applied for training. We then consider entropy bonus, which is typically added to the reward function to enhance exploration. In the case of high-dimensional action spaces, calculating the entropy and the gradient of the entropy requires enumerating all the actions in the action space and running forward and backpropagation for each action, which may be computationally infeasible. We develop several novel unbiased estimators for the entropy bonus and its gradient. Finally, we test our algorithms on two environments: a multi-hunter multi-rabbit grid game and a multi-agent multi-arm bandit problem. | {
"name": [],
"affiliation": []
} | policy parameterizations and unbiased policy entropy estimators for MDP with large multidimensional discrete action space | [
"deep reinforcement learning",
"policy gradient",
"multidimensional action space",
"entropy bonus",
"entropy regularization",
"discrete action space"
] | null | 2018-02-15 22:29:40 | 23 | null | null | null | null | null | null | null | null | false | The paper has some interesting ideas around auto-regressive policies and estimating their entropy for exploration. The use of autoregressive policies in RL is not particularly novel, and the estimate of entropy for such models is straightforward. Finally, the experiments focus on very simple tasks. | {
"review_id": [
"ry9X12Fgz",
"HJs1WiFlM",
"Bk4yQ1Alz"
],
"review": [
{
"title": "title: Simple autoregressive model for action spaces, but missing some baselines",
"paper_summary": null,
"main_review": "main_review: The authors present two autoregressive models for sampling action probabilities from a factorized discrete action space. On a multi-agent gridworld task and a multi-agent multi-armed bandit task, the proposed method seems to benefit from their lower-variance entropy estimator for exploration bonus. A few key citations were missing - notably the LSTM model they propose is a clear instance of an autoregressive density estimator, as in PixelCNN, WaveNet and other recently popular deep architectures. In that context, this work can be viewed as applying deep autoregressive density estimators to policy gradient methods. At least one of those papers ought to be cited. It also seems like a simple, obvious baseline is missing from their experiments - simply independently outputting D independent softmaxes from the policy network. Without that baseline it's not clear that any actual benefit is gained by modeling the joint distribution between actions, especially since the optimal policy for an MDP is provably deterministic anyway. The method could even be made to capture dependencies between different actions by adding a latent probabilistic layer in the middle of the policy network, inducing marginal dependencies between different actions. A direct comparison against one of the related methods in the discussion section would help better contextualize the paper as well. A final point on clarity of presentation - in keeping with the convention in the field, the readability of the tables could be improved by putting the top-performing models in bold, and Table 2 should almost certainly be replaced by a boxplot.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Review",
"paper_summary": null,
"main_review": "main_review: In this paper, the authors suggest introducing dependencies between actions in RL settings with multi-dimensional action spaces by way of two mechanisms (using an RNN and making partial action specification as part of the state); they then introduce entropy pseudo-rewards whose maximization corresponding to joint entropy maximization.\n\nIn general, the multidimensional action methods either seem incremental or non novel to me. The combined use of the chain rule and RNNs (LSTM or not) to induce correlations in multi-dimensional outputs is well know (sequence-to-sequence networks, pixelRNN, etc.) and the extension to RL presents no difficulties, if it is not already known. Note very related work in https://arxiv.org/pdf/1607.07086.pdf and https://www.media.mit.edu/projects/improving-rnn-sequence-generation-with-rl/overview/ .\n\nAs for the MMDP technique, I believe it is folklore (it can for instance be found as a problem in a problem set - http://stellar.mit.edu/S/course/2/sp04/2.997/courseMaterial/topics/topic2/readings/problemset4/problemset4.pdf). Note that both approaches could be combined; the first idea is essentially a policy method, the second, a value method. The second method could be used to provide stronger, partial action-conditional baselines (or even critics) to the first method.\n\nThe entropy derivation are more interesting - and the smoothed entropy technique is as far as I know, novel. The experiments are well done, though on simple toy environments.\n\nMinor:\n- In section 3.2, one should in principle tweak the discount factor of the modified MDP to recover behavior identical to the original one with large action space. This should be noted (alternatively, the discount between non-environment transitions should be set to 1).\n\n- From the description at the end of 3.2, and figure 1.b, it seems actions fed to the MMDP feed-forward network are not one-\nhot; I thought this was pretty surprising as it would almost certainly affect performance? Note also that the collection of feed-forward network which collectively output the joint vector can be thought of as an RNN with non-learned state transition.\n\n- Since the function optimized can be written as an expectation of reward+pseudo-reward, the proof of theorem 4 can be simplified by using generic score-function optimization arguments (see Stochastic Computation Graphs, Schulman et al).\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: The paper deals with the problem of large-multi-dimensional action space in RL. It proposes an auto-regressive model to represent the policy, in which the value of action at each dimension will be represented as a function of state and the \"previous\" dimensions. I think the idea is very interesting and useful but it is already explored in the context of Deep RL before. So It is not entirely novel contrary to the authors claim.",
"paper_summary": null,
"main_review": "main_review: Clarity and quality:\n\nThe paper is well written and the ideas are motivated clearly both in writing and with block diagram panels. Also the fact that the paper considers different variants of the idea adds to the quality of the paper. May main concern is with the quality of results which is limited to some toy/synthetic problems. Also the comparison with the previous work is missing.The paper would benefit from a more in depth numerical analysis of this approach both by applying it to more challenging/standard domains such as Mujoco and also by comparing the results with prior approaches such as A3C, DDPG and TRPO.\n\nOriginality, novelty and Significance:\n\nThe paper claims that the approach is novel in the context of policy gradient and Deep RL. I am not sure this is entirely the case since there is a recent work from Google Brain (https://arxiv.org/pdf/1705.05035.pdf ) which consider almost the identical idea with the same variation in the context of DQN and policy gradient (they call their policy gradient approach Prob SDQN). The Brain paper also makes a much more convincing case with their numerical analysis, applied to more challenging domains such as control suite. The paper under review should acknowledge this prior work and discuss the similarities and the differences. Also since the main idea and the algorithms are quiet similar to the Brain paper I believe the novelty of this work is at best marginal.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.5555555820465088,
0.4444444477558136
],
"confidence": [
0.75,
0.5,
1
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"General reply to all reviewers and the list of changes in the revised paper (4th Jan 2018)",
"Thank you for your review!",
"Thank you for your review!",
"Thank you for your review!"
],
"comment": [
"We would like to thank all three reviewers for your detailed reviews and useful insights. Your comments have led to a greatly improved revised paper, without substantially changing the content of the original paper. \n\nOne consensus among the reviewers seems to be that although the material on entropy estimates is novel and interesting, the material on autoregressive models (MMDP and LSTM) is less novel since this material was already known in the folklore or presented in recent papers. You also requested that in addition to the autoregressive models, we examine other baseline policies.\n\nResponding to these concerns, we have re-organized the material to place more emphasis on the entropy estimates and less emphasis on the autoregressive policy models. In doing so, we have cited earlier work on using MMDP and LSTM for policy gradient, including the recent Google Brain paper. We also made a major effort to generate results for two baseline policies: (1) a single feed-forward network with multiple heads; and (2) CommNet. For both of these baselines, we examined our various entropy estimates. \n\nPlease note that with the exception of new experimental results, there is no new material presented in our revised paper. However, the paper has undergone a major reorganization. The list of changes is below:\n\n- The title of the paper is changed to reflect the new emphasis on the entropy estimates.\n\n- The abstract and introduction are rewritten to focus on the entropy estimates and removed the claim that the autoregressive policies are novel.\n\n- The entropy estimates section is moved to before the policy parameterization section.\n\n- The smoothed mode estimator is moved from the appendix to the entropy estimates section (now subsection 3.3).\n\n- In the policy parameterization section, a new subsection 4.1 is added to explain the new baseline policy (1) above.\n\n- The MMDP subsection is shortened to present the minimal explanation and the details are moved to Appendix D. The MMDP subsection acknowledges prior work by the Google Brain paper.\n\n- The LSTM subsection acknowledges relevant prior work.\n\n- In the experimental results section, we introduce CommNet and added results for baseline policies (1), (2) for the hunter-rabbit environment and added results for baseline policy (1) for multi-arm bandits.\n\n- The hunter-rabbit result analysis is rewritten to place more emphasis on the role of the entropy estimates across different models for the policy.\n\n- In Table 1, best performing models are bolded and horizontal lines are added to improve readability.\n\n- Table 2 is turned into a boxplot and is now Figure 2.\n\n- In Related Work section, we first discussed relevant work with regards to the entropy estimates before the policy parameterizations.\n\n- The Conclusion is shortened so the paper stays within the recommended 8-page length.\n\n- The hyperparameters for the two baseline policies are added to Appendix A.\n\n- At the end of the proof of theorem 4 in Appendix C, we note that the theorem can also be proved by material introduced by Stochastic Computational Graph of Schulman et al and provided a reference.\n\n- Appendix D is added to explain the details of MMDP. 
We noted that the discount between non-environment transitions should be set to 1 to match the original MDP and that we tried representing the actions fed to the Modified MDP feed-forward network as one-hot vectors.\n\n- Appendix E is added to explain the state representation for baseline policy (2).\n\n- Other minor changes to improve readability.\n",
"We have revised our paper to acknowledge the Google Brain paper in the formulation of the MMDP and the LSTM policy parameterizations (Sections 4 and 6). We have also acknowledged other relevant previous work on autoregressive models for policy gradient.\n\nOur paper differs from the Google Brain paper with regards to exploration strategies. Whereas the Brain paper injects noise into the action space to encourage exploration, the focus of our paper is to develop novel unbiased estimates for the entropy bonus and its gradient. \n\nWe put great effort into trying to apply our approach to the Mujoco domain. However, we faced technical challenges and thus could not complete it in time. For example, the OpenAI Mujoco interface, which uses Mujoco 1.3.1, is incompatible with our workstations, which are Macs with NVMe disks. For more info on the issue, please have a look at the links below:\n\nhttps://github.com/openai/mujoco-py/issues/36\nhttp://www.mujoco.org/forum/index.php?threads/error-could-not-open-disk.3441/\n\nWe also had issues compiling Mujoco and its dependencies on our HPC, such as the Mesa 3D Graphics Library. Although we were not able to run experiments in the more complex Mujoco environments, we believe that the simplicity of the environments used in our paper help to highlight critical issues related to entropy bonus. \n\nThank you for your pointer to A3C, DDPG and TRPO. Our entropy estimators are orthogonal to these approaches and thus they potentially can be combined with them. We may explore the benefits of our entropy estimates for these approaches in future work.\n",
"Thank you for your helpful pointers to the relevant LSTM and MMDP literature. In light of your review, we have rewritten our paper to focus on the novel entropy estimates and properly acknowledge relevant previous works. We believe this new emphasis has led to a substantially improved paper. \n\nAs you requested, in the revised paper, we noted that for Modified MDP, the discount between non-environment transitions should be set to 1 to match the original MDP (which is what we did in our experiments). \n\nAs you requested, we also tried representing the actions fed to the Modified MDP feed-forward network as one-hot vectors. We noted in Appendix D that the one-hot vectors did not bring substantial improvement. \n\nFinally, we took a close look at the Schulman et al paper for proving our result for the smoothed gradient entropy estimator. Although with this approach the \"proof\" would just be a couple lines, to justify using the proof rigorously would require a lengthy explanation on how to fit our model into the model of Schulman et al. We have, however, indicated that the result could be alternatively proven using Theorem 1 of Schulman et al and provided a reference.",
"In our revision, we acknowledge that the LSTM policy parameterization is not entirely new and can, in fact, be seen as an adaption of auto-regressive techniques in supervised sequence modeling to reinforcement learning (Sections 4.3 and 6). We have reorganized our paper to focus on the novel entropy estimates.\n\nAs you requested, we added experimental results for the baseline for which the policy is a FFN with multiple heads. We refer to this as Independent Sampling (IS). We ran experiments for IS with and without (estimates of) the entropy bonus for both the rabbit and bandit environments. \n\nYou also suggested that we compare our results with one of the approaches in the literature. For this, we choose the paper “Learning Multiagent Communication with Backpropagation”. Please find the results in Section 5. As you requested, we also put the top-performing models in bold and turned Table 2 into a boxplot (Table 2 is now Figure 2).\n"
]
} | {
"paperhash": [
"mnih|playing_atari_with_deep_reinforcement_learning",
"mnih|asynchronous_methods_for_deep_reinforcement_learning",
"sukhbaatar|learning_multiagent_communication_with_backpropagation",
"sutskever|sequence_to_sequence_learning_with_neural_networks",
"vinyals|starcraft_ii:_a_new_challenge_for_reinforcement_learning",
"bello|neural_optimizer_search_with_reinforcement_learning",
"wojciech|sobolev_training_for_neural_networks",
"dulac-arnold|deep_reinforcement_learning_in_large_discrete_action_spaces",
"lin|stardata:_a_starcraft_ai_research_dataset",
"brendan|combining_policy_gradient_and_q-learning",
"parisotto|actor-mimic:_deep_multitask_and_transfer_reinforcement_learning"
],
"title": [
"Playing Atari with Deep Reinforcement Learning",
"Asynchronous Methods for Deep Reinforcement Learning",
"Learning Multiagent Communication with Backpropagation",
"Sequence to Sequence Learning with Neural Networks",
"StarCraft II: A New Challenge for Reinforcement Learning",
"Neural Optimizer Search with Reinforcement Learning",
"Sobolev Training for Neural Networks",
"Deep Reinforcement Learning in Large Discrete Action Spaces",
"STARDATA: A StarCraft AI Research Dataset",
"",
"ACTOR-MIMIC DEEP MULTITASK AND TRANSFER REINFORCEMENT LEARNING"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"volodymyr mnih",
"koray kavukcuoglu",
"david silver",
"alex graves",
"ioannis antonoglou",
"daan wierstra",
"martin riedmiller"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"volodymyr mnih",
"mehdi mirza",
"alex graves",
"tim harley",
"timothy p lillicrap",
"david silver",
"google deepmind"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Montreal",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Montreal",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Montreal",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Montreal",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Montreal",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Montreal",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Montreal",
"location": "{}"
}
]
},
{
"name": [
"sainbayar sukhbaatar",
"arthur szlam",
"rob fergus"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ilya sutskever",
"oriol vinyals",
"quoc v le"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"oriol vinyals",
"timo ewalds",
"sergey bartunov",
"petko georgiev",
"alexander sasha vezhnevets",
"michelle yeo",
"alireza makhzani",
"heinrich k üttler",
"john agapiou",
"julian schrittwieser",
"john quan",
"stephen gaffney",
"stig petersen",
"karen simonyan",
"tom schaul",
"hado van hasselt",
"david silver",
"timothy lillicrap",
"kevin calderone",
"paul keet",
"anthony brunasso",
"david lawrence",
"anders ekermo",
"jacob repp",
"rodney tsing blizzard"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"irwan bello",
"barret zoph",
"vijay vasudevan",
"quoc v le"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"wojciech marian czarnecki",
"simon osindero",
"max jaderberg",
"grzegorz swirszcz",
"razvan pascanu deepmind",
" london"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"gabriel dulac-arnold",
"richard evans",
"hado van hasselt",
"peter sunehag",
"timothy lillicrap",
"jonathan hunt",
"timothy mann",
"theophane weber",
"thomas degris",
"ben coppin",
"google deepmind"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"zeming lin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"brendan o'donoghue",
"rémi munos",
"koray kavukcuoglu",
"volodymyr mnih"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"emilio parisotto",
"jimmy ba",
"ruslan salakhutdinov"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{'settlement': 'Toronto', 'region': 'Ontario', 'country': 'Canada'}"
},
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{'settlement': 'Toronto', 'region': 'Ontario', 'country': 'Canada'}"
},
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{'settlement': 'Toronto', 'region': 'Ontario', 'country': 'Canada'}"
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.481481 | 0.75 | null | null | null | null | null | rk3b2qxCW |
||
wu|endtoend_abnormality_detection_in_medical_imaging|ICLR_cc_2018_Conference | End-to-End Abnormality Detection in Medical Imaging | Deep neural networks (DNN) have shown promising performance in computer vision. In medical imaging, encouraging results have been achieved with deep learning for applications such as segmentation, lesion detection and classification. Nearly all of the deep learning based image analysis methods work on reconstructed images, which are obtained from original acquisitions via solving inverse problems (reconstruction). The reconstruction algorithms are designed for human observers, but not necessarily optimized for DNNs which can often observe features that are incomprehensible for human eyes. Hence, it is desirable to train the DNNs directly from the original data which lie in a different domain with the images. In this paper, we proposed an end-to-end DNN for abnormality detection in medical imaging. To align the acquisition with the annotations made by radiologists in the image domain, a DNN was built as the unrolled version of iterative reconstruction algorithms to map the acquisitions to images, and followed by a 3D convolutional neural network (CNN) to detect the abnormality in the reconstructed images. The two networks were trained jointly in order to optimize the entire DNN for the detection task from the original acquisitions. The DNN was implemented for lung nodule detection in low-dose chest computed tomography (CT), where a numerical simulation was done to generate acquisitions from 1,018 chest CT images with radiologists' annotations. The proposed end-to-end DNN demonstrated better sensitivity and accuracy for the task compared to a two-step approach, in which the reconstruction and detection DNNs were trained separately. A significant reduction of false positive rate on suspicious lesions were observed, which is crucial for the known over-diagnosis in low-dose lung CT imaging. The images reconstructed by the proposed end-to-end network also presented enhanced details in the region of interest. | {
"name": [],
"affiliation": []
} | Detection of lung nodule starting from projection data rather than images. | [
"End-to-End training",
"deep neural networks",
"medical imaging",
"image reconstruction"
] | null | 2018-02-15 22:29:48 | 37 | null | null | null | null | null | null | null | null | false | Authors present an evaluation of end-to-end training connecting reconstruction network with detection network for lung nodules.
Pros:
- Optimizing a mapping jointly with the task may preserve more information that is relevant to the task.
Cons:
- Reconstruction network is not "needed" to generate an image -- other algorithms exist for reconstructing images from raw data. Therefore, adding the reconstruction network serves to essentially add more parameters to the neural network. As a baseline, authors should compare to a detection-only framework with a comparable number of parameters to the end-to-end system. Since this is not provided, the true benefit of end-to-end training cannot be assessed.
- Performance improvement presented is negligible
- Novelty is not clear / significant | {
"review_id": [
"SkoQMHqlG",
"S1gaKDqlM",
"Byyu-H4-f"
],
"review": [
{
"title": "title: A well written paper showing the promise of DNNs for solving tough inverse imaging problems. The contributions seem incremental, not properly enunciated, or appropriately validated.",
"paper_summary": null,
"main_review": "main_review: The paper proposes a DNN for patch-based lung nodule detection, directly from the CT projection data. The two-component network, comprising of the reconstruction network and the nodule detection network, is trained end-to-end. The trained network was validated on a simulated dataset of 1018\tlow-dose chest CT images. It is shown that end-to-end training produces better results compared to a two-step approach, where the reconstruction DNN was trained first and the detection DNN was trained on the reconstructed images. \n\nPros\n\nIt is a well written paper on a very important problem. It shows the promise of DNNs for solving difficult inverse problems of great importance. It shows encouraging results as well. \n\nCons\n\nThe contributions seem incremental, not properly enunciated, or appropriately validated.\n\nThe putative contributions of the paper can be \n(a) Directly solving the target problem from raw sensory data without first solving an inversion problem\n(b) (Directly) solving the lung nodule detection problem using a DNN. \n(c) A novel reconstruction DNN as a component of the above pipeline.\n(d) A novel detection network as a component of the above pipeline.\n\nLet's take them one by one:\n\n(a) As pointed out by authors, this is in the line of work being done in speech recognition, self-driving cars, OCR etc. and is a good motivation for the work but not a contribution. It's application to this problem can require significant innovation which is not the case as components have been explored before and there is no particular innovation involved in using them together in a pipeline either.\n\n(c) As also pointed by the authors, there are many previous approaches - Adler & Oktem (2017), Hammernik et al (2017) etc. among others. Another notable reference (not cited) is Jin et al. \"Deep Convolutional Neural Network for Inverse Problems in Imaging.\" arXiv preprint arXiv:1611.03679 (2016). These last two (and perhaps others) train DNNs to learn unrolled iterative methods to reconstruct the CT image. The approach proposed in the paper is not compared by them (and perhaps others), neither at a conceptual level nor experimentally. So, this clearly is not the main contribution of the paper.\n\n(d) Similarly, there is nothing particularly novel about the detection network nor the way it is used. \n\nThis brings us to (b). The proposed approach to solve this problem may indeed by novel (I am not an expert in this application area.), but considering that there is a considerable body of work on this problem, the paper provides not comparative evaluation of the proposed approach to published ones in the literature. It just provides an internal comparison of end-to-end training vis-a-vis two step training. \n\nTo summarize, the contributions seem incremental, not properly enunciated, or appropriately validated.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Practical work in a useful direction in medical image analysis",
"paper_summary": null,
"main_review": "main_review: This paper proposes to jointly model computed tomography reconstruction and lesion detection in the lung, training the mapping from raw sinogram to detection outputs in an end-to-end manner. In practice, such a mapping is computed separately, without regard to the task for wich the data is to be used. Because such a mapping loses information, optimizing such a mapping jointly with the task should preserve more information that is relevant to the task. Thus, using raw medical image data should be useful for lesion detection in CT as well as most other medical image analysis tasks.\n\n\nStyle considerations:\n\nThe work is adequately motivated and the writing is generally clear. However, some phrases are awkward and unclear and there are occasional minor grammar errors. It would be useful to ask a native English speaker to polish these up, if possible. Also, there are numerous typos that could nonetheless be easily remedied with some final proofreading. Generally, the work is well articulated with sound structure but needs polish.\n\nA few other minor style points to address:\n- \"g\" is used throughout the paper for two different networks and also to define gradients - if would be more clear if you would choose other letters.\n- S3.3, p. 7 : reusing term \"iteration\"; clarify\n- fig 10: label the columns in the figure, not in the description\n- fig 11: label the columns in the figure with iterations\n- fig 8 not referenced in text\n\n\nQuestions:\n\n1. Before fine-tuning, were the reconstruction and detection networks trained end-to-end (with both L2 loss and cross-entropy loss) or were they trained separately and then joined during fine-tuning?\n(If it is the former and not the latter, please make that more clear in the text. I expect that it was indeed the former; in case that it was not, I would expect fully end-to-end training in the revision.)\n\n2. Please confirm: during the fine-tuning phase of training, did you use only the cross-entropy loss and not the L2 loss?\n\n3a. From equation 3 to equation 4 (on an iteration of reconstruction), the network g() was dropped. It appears to replace the diagonal of a Hessian (of R) which is probably a conditioning term. Have you tried training a g() network? Please discuss the ramifications of removing this term.\n\n3b. Have you tracked the condition number of the Jacobian of f() across iterations? This should be like tracking the condition number of the Hessian of R(x).\n\n4. Please discuss: is it better to replace operations on R() with neural networks rather than to replace R()? Why?\n\n5. On page 5, you write \"masks for lung regions were pre-calculated\". Were these masks manual segmentations or created with an automated method?\n\n6. Why was detection only targetted on \"non-small nodules\"? Have you tried detecting small nodules?\n\n7. On page 11, you state: \"The tissues in lung had much better contrast in the end-to-end network compared to that in the two-step network\". I don't see evidence to support that claim. Could you demonstrate that?\n\n8. On page 12, relating to figure 11, you state:\n\n\"Whereas both methods kept similar structural component, the end-to-end method had more focus on the edges and tissues inside lung compared to the two-step method. As observed in figure 11(b), the structures of the lung tissue were much more clearer in the end-to-end networks. 
This observation indicated that sharper edge and structures were of more importance for the detection network than the noise level in the reconstructed images, which is in accordance with human perceptions when radiologists perform the same task.\"\n\nHowever, while these claims appear intuitive and such results may be expected, they are not backed up by figure 11. Looking at the feature map samples in this figure, I could not identify whether they came from different populations. I do not see the evidence for \"more focus on the edges and tissues inside lung\" for the end-to-end method in fig 11. It is also not obvious whether indeed \"the structures of the lung tissue were much more clearer\" for the end-to-end method, in fig 11. Can you clarify the evidence in support of these claims? \n\n\nOther points to address:\n\n1. Please report statistical significance for your results (eg. in fig 5b, in the text, etc.). Also, please include confidence intervals in table 2.\n\n2. Although cross-entropy values, detection metrics were not (except for the ROC curve with false positives and false negatives). Please compute: accuracy, precision, and recall to more clearly evaluate detection performance.\n\n3a. \"Abnormality detection\" implies the detection of anything that is unusual in the data. The method you present targets a very specific abnormality (lesions). I would suggest changing \"abnormality detection\" to \"lesion detection\".\n\n3b. The title should also be updated accordingly. Considering also that the presented work is on a single task (lesion detection) and a single medical imaging modality (CT), the current title appears overly broad. I would suggest changing it from \"End-to-End Abnormality Detection in Medical Imaging\" -- possibly to something like \"End-to-End Computed Tomography for Lesion Detection\".\n\n\nConclusion:\n\nThe motivation of this work is valid and deserves attention. The implementation details for modeling reconstruction are also valuable. It is interesting to see improvement in lesion detection when training end-to-end from raw sinogram data. However, while lung lesion detection is the only task on which the utility of this method is evaluated, detection improvement appears modest. This work would benefit from additional experimental results or improved analysis and discussion.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting direction, but let's check the details",
"paper_summary": null,
"main_review": "main_review: The authors present an end to end training of a CNN architecture that combines CT image signal processing and image analysis. This is an interesting paper. Time will tell whether a disease specific signal processing will be the future of medical image analysis, but - to the best of my knowledge - this is one of the first attempts to do this in CT image analysis, a field that is of significance both to researchers dealing with image reconstruction (denoising, etc.) and image analysis (lesion detection). As such I would be positive about the topic of the paper and the overall innovation it promises both in image acquisition and image processing, although I would share the technical concerns pointed out by Reviewer2, and the authors would need good answers to them before this study would be ready to be presented. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.4444444477558136,
0.5555555820465088
],
"confidence": [
0.75,
0.75,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [],
"comment": []
} | {
"paperhash": [
"abadi|tensorflow:_large-scale_machine_learning_on_heterogeneous_distributed_systems",
"armato|the_lung_image_database_consortium_(lidc)_and_image_database_resource_initiative_(idri):_a_completed_reference_database_of_lung_nodules_on_ct_scans",
"samuel|data_from_lidc-idri._the_cancer_imaging_archive",
"bojarski|end_to_end_learning_for_self-driving_cars",
"chen|prior_image_constrained_compressed_sensing_(piccs):_a_method_to_accurately_reconstruct_dynamic_ct_images_from_highly_undersampled_projection_data_sets",
"chen|learned_experts'_assessment-based_reconstruction_network",
"ciompi|automatic_classification_of_pulmonary_peri-fissural_nodules_in_computed_tomography_using_an_ensemble_of_2d_views_and_a_convolutional_neural_network_out-of-the-box",
"clark|the_cancer_imaging_archive_(tcia):_maintaining_and_operating_a_public_information_repository",
"de|2nd_place_solution_for_the_2017_national_datasicence_bowl",
"deans|the_radon_transform_and_some_of_its_applications",
"erdogan|monotonic_algorithms_for_transmission_tomography",
"gong|iterative_pet_image_reconstruction_using_convolutional_neural_network_representation",
"graves|towards_end-to-end_speech_recognition_with_recurrent_neural_networks",
"greenspan|guest_editorial_deep_learning_in_medical_imaging:_overview_and_future_promise_of_an_exciting_new_technique",
"gregor|learning_fast_approximations_of_sparse_coding",
"gurcan|lung_nodule_detection_on_thoracic_computed_tomography_images:_preliminary_evaluation_of_a_computer-aided_diagnosis_system",
"hammernik|learning_a_variational_network_for_reconstruction_of_accelerated_mri_data",
"iizuka|let_there_be_color!:_joint_end-to-end_learning_of_global_and_local_image_priors_for_automatic_image_colorization_with_simultaneous_classification",
"kalra|strategies_for_ct_radiation_dose_optimization",
"kim|low-dose_ct_reconstruction_using_spatially_encoded_nonlocal_penalty",
"kingma|adam:_a_method_for_stochastic_optimization",
"li|self-paced_convolutional_neural_network_for_computer_aided_detection_in_medical_imaging_analysis",
"lustig|compressed_sensing_mri",
"macmahon|guidelines_for_management_of_small_pulmonary_nodules_detected_on_ct_scans:_a_statement_from_the_fleischner_society",
"noo|single-slice_rebinning_method_for_helical_conebeam_ct",
"patz|overdiagnosis_in_lowdose_computed_tomography_screening_for_lung_cancer",
"pickhardt|abdominal_ct_with_model-based_iterative_reconstruction_(mbir):_initial_results_of_a_prospective_trial_comparing_ultralow-dose_with_standard-dose_imaging",
"schlemper|a_deep_cascade_of_convolutional_neural_networks_for_dynamic_mr_image_reconstruction",
"arindra|pulmonary_nodule_detection_in_ct_images:_false_positive_reduction_using_multi-view_convolutional_networks",
"emil|image_reconstruction_in_circular_cone-beam_computed_tomography_by_constrained,_total-variation_minimization",
"sun|national_lung_screening_trial_research_team_et_al._reduced_lung-cancer_mortality_with_low-dose_computed_tomographic_screening",
"ginneken|off-the-shelf_convolutional_neural_network_features_for_pulmonary_nodule_detection_in_computed_tomography_scans",
"wang|penalized_weighted_least-squares_approach_to_sinogram_noise_reduction_and_image_reconstruction_for_low-dose_x-ray_computed_tomography",
"wang|end-to-end_scene_text_recognition",
"wu|iterative_low-dose_ct_reconstruction_with_priors_trained_by_artificial_neural_network",
"wu|a_cascaded_convolutional_nerual_network_for_x-ray_low-dose_ct_image_denoising",
"ying|classification_of_exacerbation_frequency_in_the_copdgene_cohort_using_deep_learning_with_deep_belief_networks"
],
"title": [
"Tensorflow: Large-scale machine learning on heterogeneous distributed systems",
"The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans",
"Data from LIDC-IDRI. the cancer imaging archive",
"End to end learning for self-driving cars",
"Prior image constrained compressed sensing (PICCS): a method to accurately reconstruct dynamic ct images from highly undersampled projection data sets",
"Learned experts' assessment-based reconstruction network",
"Automatic classification of pulmonary peri-fissural nodules in computed tomography using an ensemble of 2D views and a convolutional neural network out-of-the-box",
"The cancer imaging archive (TCIA): maintaining and operating a public information repository",
"2nd place solution for the 2017 national datasicence bowl",
"The Radon transform and some of its applications",
"Monotonic algorithms for transmission tomography",
"Iterative PET image reconstruction using convolutional neural network representation",
"Towards end-to-end speech recognition with recurrent neural networks",
"Guest editorial deep learning in medical imaging: Overview and future promise of an exciting new technique",
"Learning fast approximations of sparse coding",
"Lung nodule detection on thoracic computed tomography images: Preliminary evaluation of a computer-aided diagnosis system",
"Learning a variational network for reconstruction of accelerated MRI data",
"Let there be color!: joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification",
"Strategies for CT radiation dose optimization",
"Low-dose CT reconstruction using spatially encoded nonlocal penalty",
"Adam: A method for stochastic optimization",
"Self-paced convolutional neural network for computer aided detection in medical imaging analysis",
"Compressed sensing mri",
"Guidelines for management of small pulmonary nodules detected on CT scans: a statement from the fleischner society",
"Single-slice rebinning method for helical conebeam CT",
"Overdiagnosis in lowdose computed tomography screening for lung cancer",
"Abdominal CT with model-based iterative reconstruction (MBIR): initial results of a prospective trial comparing ultralow-dose with standard-dose imaging",
"A deep cascade of convolutional neural networks for dynamic MR image reconstruction",
"Pulmonary nodule detection in CT images: false positive reduction using multi-view convolutional networks",
"Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization",
"National Lung Screening Trial Research Team et al. Reduced lung-cancer mortality with low-dose computed tomographic screening",
"Off-the-shelf convolutional neural network features for pulmonary nodule detection in computed tomography scans",
"Penalized weighted least-squares approach to sinogram noise reduction and image reconstruction for low-dose X-ray computed tomography",
"End-to-end scene text recognition",
"Iterative low-dose CT reconstruction with priors trained by artificial neural network",
"A cascaded convolutional nerual network for x-ray low-dose CT image denoising",
"Classification of exacerbation frequency in the copdgene cohort using deep learning with deep belief networks"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"martín abadi",
"ashish agarwal",
"paul barham",
"eugene brevdo",
"zhifeng chen",
"craig citro",
"greg s corrado",
"andy davis",
"jeffrey dean",
"matthieu devin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"geoffrey samuel g armato",
"luc mclennan",
" bidaut",
"f michael",
"charles r mcnitt-gray",
"anthony p meyer",
"binsheng reeves",
"denise r zhao",
"claudia i aberle",
"eric a henschke",
" hoffman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"g samuel",
"iii armato",
"geoffrey mclennan",
"luc bidaut",
"f michael",
"charles r mcnitt-gray",
"anthony p meyer",
"laurence p reeves",
" clarke"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mariusz bojarski",
"davide del testa",
"daniel dworakowski",
"bernhard firner",
"beat flepp",
"prasoon goyal",
"lawrence d jackel",
"mathew monfort",
"urs muller",
"jiakai zhang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"guang-hong chen",
"jie tang",
"shuai leng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yi hu chen",
"weihua zhang",
"huaiqiaing zhang",
"peixi sun",
"kun liao",
"jiliu he",
"ge zhou",
" wang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"francesco ciompi",
"bartjan de hoop",
"sarah j van riel",
"kaman chung",
"ernst th scholten",
"matthijs oudkerk",
"pim a de",
"jong ",
"mathias prokop",
"bram van ginneken"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kenneth clark",
"bruce vendt",
"kirk smith",
"john freymann",
"justin kirby",
"paul koppel",
"stephen moore",
"stanley phillips",
"david maffitt",
"michael pringle"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"julian de",
"wit ",
"daniel hammack"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" stanley r deans"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"hakan erdogan",
"jeffrey a fessler"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kuang gong",
"jiahui guan",
"kyungsang kim",
"xuezhu zhang",
"georges el fakhri",
"jinyi qi",
"quanzheng li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex graves",
"navdeep jaitly"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"bram hayit greenspan",
"ronald m van ginneken",
" summers"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"karol gregor",
"yann lecun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"berkman metin n gurcan",
"nicholas sahiner",
"heang-ping petrick",
"ella a chan",
"philip n kazerooni",
"lubomir cascade",
" hadjiiski"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kerstin hammernik",
"teresa klatzer",
"erich kobler",
" michael p recht",
"thomas daniel k sodickson",
"florian pock",
" knoll"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"satoshi iizuka",
"edgar simo-serra",
"hiroshi ishikawa"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"michael m mannudeep k kalra",
"thomas l maher",
"leena m toth",
"michael a hamberg",
"jo-anne blake",
"sanjay shepard",
" saini"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kyungsang kim",
"georges el fakhri",
"quanzheng li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik kingma",
"jimmy ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xiang li",
"aoxiao zhong",
"ming lin",
"ning guo",
"mu sun",
"arkadiusz sitek",
"jieping ye",
"james thrall",
"quanzheng li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"michael lustig",
"juan m david l donoho",
"john m santos",
" pauly"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" heber macmahon",
"gordon john hm austin",
"christian j gamsu",
"james r herold",
"david p jett",
"edward f naidich",
"stephen j patz",
" swensen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"frédéric noo",
"michel defrise",
"rolf clackdoyle"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"paul edward f patz",
"constantine pinsky",
" gatsonis",
"d jorean",
"barnett s sicks",
"martin c kramer",
"caroline tammemägi",
"william c chiles",
"denise r black",
" aberle"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"meghan g perry j pickhardt",
"david h lubner",
"jie kim",
"julie a tang",
"alejandro muñoz ruma",
"guang-hong del rio",
" chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jo schlemper",
"jose caballero",
"joseph v hajnal",
"anthony price",
"daniel rueckert"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"arnaud arindra",
"adiyoso setio",
"francesco ciompi",
"geert litjens",
"paul gerke",
"colin jacobs",
"sarah j van riel",
"mathilde ",
"marie winkler wille",
"matiullah naqibullah",
"clara i sánchez",
"bram van ginneken"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"y emil",
"xiaochuan sidky",
" pan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jian sun",
"huibin li",
"zongben xu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" bram van ginneken",
"a a arnaud",
"colin setio",
"francesco jacobs",
" ciompi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jing wang",
"tianfang li",
"hongbing lu",
"zhengrong liang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kai wang",
"boris babenko",
"serge belongie"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"dufan wu",
"kyungsang kim",
"georges el fakhri",
"quanzheng li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"dufan wu",
"kyungsang kim",
"georges el fakhri",
"quanzheng li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jun ying",
"joyita dutta",
"ning guo",
"chenhui hu",
"dan zhou",
"arkadiusz sitek",
"quanzheng li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"arXiv:1603.04467",
"",
"",
"arXiv:1604.07316",
"",
"arXiv:1707.09636",
"",
"",
"",
"",
"",
"1710.03344v1",
"",
"",
"",
"",
"1704.00447v1",
"",
"",
"",
"1412.6980v9",
"",
"",
"",
"",
"",
"",
"1704.02422v2",
"",
"",
"",
"",
"",
"",
"",
"arXiv:1705.04267",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.444444 | 0.666667 | null | null | null | null | null | rk1FQA0pW |
||
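The reviews in the record above repeatedly contrast two-step training (fit a reconstruction network to an L2 loss, then train a detector on the reconstructed images) with end-to-end training, where the detection cross-entropy also updates the reconstruction stage. The toy Python sketch below is a hypothetical illustration of only that mechanism: the linear projector `A`, the synthetic "sinograms" and labels, the sizes, and the names `W_rec`, `w_det`, and `train` are all invented and bear no relation to the reviewed paper's actual architecture or data.

```python
# Hypothetical toy sketch -- all sizes, data, and weights below are invented.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_meas, n = 16, 12, 512
A = rng.normal(size=(n_meas, n_pix))                   # toy forward projector
x_true = rng.normal(size=(n, n_pix))                   # toy "images"
y = (x_true[:, :4].sum(axis=1) > 0).astype(float)      # toy "lesion" labels
s = x_true @ A.T + 0.1 * rng.normal(size=(n, n_meas))  # noisy toy "sinograms"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(end_to_end, steps=3000, lr=0.1):
    W_rec = 0.01 * rng.normal(size=(n_pix, n_meas))    # reconstruction stage
    w_det = 0.01 * rng.normal(size=n_pix)              # detection head
    if not end_to_end:
        # two-step baseline: fit the reconstruction stage to an L2 objective
        # first, then freeze it and train only the detector on its outputs.
        W_rec = np.linalg.lstsq(s, x_true, rcond=None)[0].T
    for _ in range(steps):
        x_hat = s @ W_rec.T                            # reconstructed images
        g = (sigmoid(x_hat @ w_det) - y) / n           # d(cross-entropy)/d(logits)
        w_det -= lr * (x_hat.T @ g)
        if end_to_end:
            # the detection loss gradient also reaches the reconstruction weights
            W_rec -= lr * np.outer(w_det, g @ s)
    acc = ((sigmoid(s @ W_rec.T @ w_det) > 0.5) == y).mean()
    return acc

print("two-step  :", train(end_to_end=False))
print("end-to-end:", train(end_to_end=True))
```

Under this toy setup the two variants reach similar accuracy; the sketch is only meant to make the gradient-flow difference concrete, not to reproduce or predict the improvement reported in the reviewed paper.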
hausman|learning_an_embedding_space_for_transferable_robot_skills|ICLR_cc_2018_Conference | 65039738 | null | Learning an Embedding Space for Transferable Robot Skills | We present a method for reinforcement learning of closely related skills that are parameterized via a skill embedding space. We learn such skills by taking advantage of latent variables and exploiting a connection between reinforcement learning and variational inference. The main contribution of our work is an entropy-regularized policy gradient formulation for hierarchical policies, and an associated, data-efficient and robust off-policy gradient algorithm based on stochastic value gradients. We demonstrate the effectiveness of our method on several simulated robotic manipulation tasks. We find that our method allows for discovery of multiple solutions and is capable of learning the minimum number of distinct skills that are necessary to solve a given set of tasks. In addition, our results indicate that the hereby proposed technique can interpolate and/or sequence previously learned skills in order to accomplish more complex tasks, even in the presence of sparse rewards.
| {
"name": [
"karol hausman",
"jost tobias springenberg",
"ziyu wang",
"nicolas heess",
"martin riedmiller"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Southern California",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Southern California",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Southern California",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Southern California",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Southern California",
"location": "{}"
}
]
} | null | [
"Deep Reinforcement Learning",
"Variational Inference",
"Control",
"Robotics"
] | null | 2018-02-15 22:29:17 | 47 | 316 | 10 | null | null | null | null | null | null | true | This is a paper introducing a hierarchical RL method which incorporates the learning of a latent space, which enables the sharing of learned skills.
The reviewers unanimously rate this as a good paper. They suggest that it can be further improved by demonstrating the effectiveness through more experiments, especially since this is a rather generic framework. To some extent, the authors have addressed this concern in the rebuttal.
| {
"review_id": [
"r1aiqauxf",
"HkEQMXAxz",
"Hk9a7-qlG"
],
"review": [
{
"title": "title: I find the method to be theoretically interesting and valuable to the learning community. However, the experiments are not entirely convincing.",
"paper_summary": null,
"main_review": "main_review: In this paper, (previous states, action) pairs and task ids are embedded into the same latent space with the goal of generalizing and sharing across skill variations. Once the embedding space is learned, policies can be modified by passing in sampled or learned embeddings.\n\nNovelty and Significance: To my knowledge, using a variational approach to embedding robot skills is novel. Significantly, the embedding is learned from off-policy trajectories, indicating feasibility on a real-world setting. The manipulation experiments show nice results on non-trivial tasks. However, no comparisons are shown against prior work in multitask or transfer learning. Additionally, the tasks used to train the embedding space were tailored exactly to the target task, making it unclear that this method will work generally.\n\nQuestions:\n- I am not sure how to interpret Figure 3. Do you use Bernoulli in the experiments?\n- How many task IDs are used for each experiment? 2?\n- Are the manipulation experiments learned with the off-policy variant?\n- Figure 4b needs the colors to be labeled. Video clips of the samples would be a plus.\n- (Major) For the experiments, only exactly the useful set of tasks is used to train the embedding. What happens if a single latent space is learned from all the tasks, and Spring-wall, L-wall, and Rail-push are each learned from the same embedding. \n\nI find the method to be theoretically interesting and valuable to the learning community. However, the experiments are not entirely convincing.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: This is an interesting deep reinforcement learning paper that introduces a new principled framework for learning versatile skills. This is a good paper.",
"paper_summary": null,
"main_review": "main_review: The paper presents a new approach for hierarchical reinforcement learning which aims at learning a versatile set of skills. The paper uses a variational bound for entropy regularized RL to learn a versatile latent space which represents the skill to execute. The variational bound is used to diversify the learned skills as well as to make the skills identifyable from their state trajectories. The algorithm is tested on a simple point mass task and on simulated robot manipulation tasks.\n\nThis is a very intersting paper which is also very well written. I like the presented approach of learning the skill embeddings using the variational lower bound. It represents one of the most principled approches for hierarchical RL. \n\nPros: \n- Interesting new approach for hiearchical reinforcement learning that focuses on skill versatility\n- The variational lower bound is one of the most principled formulations for hierarchical RL that I have seen so far\n- The results are convincing\n\nCons:\n- More comparisons against other DRL algorithms such as TRPO and PPO would be useful\n\nSummary: This is an interesting deep reinforcement learning paper that introduces a new principled framework for learning versatile skills. This is a good paper.\n\nMore comments:\n- There are several papers that focus on learning versatile skills in the context of movement primitive libraries, see [1],[2],[3]. These papers should be discussed.\n\n[1] Daniel, C.; Neumann, G.; Kroemer, O.; Peters, J. (2016). Hierarchical Relative Entropy Policy Search, Journal of Machine Learning Research (JMLR),\n[2] End, F.; Akrour, R.; Peters, J.; Neumann, G. (2017). Layered Direct Policy Search for Learning Hierarchical Skills, Proceedings of the International Conference on Robotics and Automation (ICRA).\n[3] Gabriel, A.; Akrour, R.; Peters, J.; Neumann, G. (2017). Empowered Skills, Proceedings of the International Conference on Robotics and Automation (ICRA).\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting work at the intersection of reinforcement learning and variational inference",
"paper_summary": null,
"main_review": "main_review: The submission tackles an important problem of learning and transferring multiple motor skills. The approach relies on using an embedding space defined by latent variables and entropy-regularized policy gradient / variational inference formulation that encourages diversity and identifiability in latent space.\n\nThe exposition is clear and the method is well-motivated. I see no issues with the mathematical correctness of the claims made in the paper. The experimental results are both instructive of how the algorithm operates (in the particle example), and contain impressive robotic results. I appreciated the experiments that investigated cases where true number of tasks and the parameter T differ, showing that the approach is robust to choice of T.\n\nThe submission focuses particularly on discrete tasks and learning to sequence discrete tasks (as training requires a one-hot task ID input). I would like a bit of discussion on whether parameterized skills (that have continuous space of target location, or environment parameters, for example) can be supported in the current formulation, and what would be necessary if not.\n\nOverall, I believe this is in interesting piece of work at a fruitful intersection of reinforcement learning and variational inference, and I believe would be of interest to ICLR community.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.6666666865348816,
0.6666666865348816,
0.6666666865348816
],
"confidence": [
0.75,
1,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to Reviewer 2",
"Response to Reviewer 3",
"Response to Reviewer 1",
"Manuscript updated",
"Revision of Review",
"Manuscript updated"
],
"comment": [
"We thank the reviewer for their comments and suggestions.\n\nOur method does indeed support parameterized skills as suggested by the reviewer. For instance, the low-level policy could receive an embedding conditioned on a continuous target location instead of the task ID (given a suitable embedding space). It is also not limited to the multi-task setting, i.e., the number of tasks T used for training can be set to 1 (as explored in the point-mass experiments). We will add this to the discussion to the paper.",
"We are grateful for the insightful comments and suggestions.\n\nPlease find the answers to the inline questions below, we will clarify all of these points in the final version of the paper.\n- I am not sure how to interpret Figure 3. Do you use Bernoulli in the experiments? \n- A Bernoulli distribution is only used for for Figure 3 to demonstrate that our method can work with other distributions.\n\n- How many task IDs are used for each experiment? 2?\n- Yes, T was set to 2 for the manipulation experiments.\n\n- Are the manipulation experiments learned with the off-policy variant?\n- That is correct. All experiments were performed in an off-policy setting. This decision was made due to the higher sample-efficiency of the off-policy methods.\n\n- Figure 4b needs the colors to be labeled. Video clips of the samples would be a plus.\n- We will add the labels and address this problem in the final version of the paper\n\nRegarding the last question on training the embedding space on all of the tasks; we are currently working on this experiment and are planning to include it in the final version of the paper. It is worth noting that the multi-task RL training can be challenging (especially with poorly scaled rewards) and it maintains as an open problem that is beyond the scope of this work. Our method presents a solution to a problem of finding an embedding space that enables re-using, interpolating and sequencing previously learned skills, with the assumption that the RL agent was able to learn them in the first place. However, we strongly believe that the off-policy setup presented in this work has much more flexibility that its on-policy equivalents as to how to address the multi-task RL problem.\n",
"We very much appreciate the reviewer’s comments and suggestions. \n\nRegarding the comparison to other on-policy methods such as TRPO or PPO, we would like to emphasize that the presented approach is mostly independent of the underlying RL learning algorithm. In fact, it will be easier to implement our approach in the on-policy setup. The off-policy setup with experience replay that we are considering requires additional care due to the embedding variable which we also maintain in the replay buffer. In Section 5, we present all the modifications necessary to running our method in the more data-efficient off-policy setup, which we believe is crucial to running it on the real robots in the future.\n\nWe would also like to thank the reviewer for pointing out the additional references - we will be very happy to include them. While some of the high-level ideas are related, there are differences both in the formulation and the algorithmic framework. An important aspect of our work is that we show how to apply entropy-regularized RL with latent variables when working with neural networks and in an off-policy setting, avoiding both the burden of using a limited number of hand-crafted features and allowing for data-efficient learning.\n",
"We would you like to notify the reviewer that the pdf has been updated with the requested changes including the new experiment with the embedding pre-trained on all 6 tasks.",
"I would really like to see an experiment where an embedding space is trained on a wider variety of tasks rather than just what is needed to generalize to the target task. However, I find that this paper is a valuable contribution to ICLR, and I think that it should be accepted.\n\nAs ICLR allows the authors to upload a new pdf, I do not understand why the author response only said they would make changes in the final version (especially for things like labeling a figure). ",
"Dear reviewers, \nWe would like to let you know that we have updated the manuscript with the changes requested in your reviews. Thank you again for your feedback. "
]
} | {
"paperhash": [],
"title": [],
"abstract": [],
"authors": [],
"arxiv_id": [],
"s2_corpus_id": [],
"intents": [],
"isInfluential": []
} | null | 84 | 3.761905 | 0.666667 | 0.833333 | null | null | null | null | null | rk07ZXZRb |
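The reviews and author responses in the record above describe a per-task distribution over a latent skill embedding z, a low-level policy conditioned on the state together with a sampled z, and an inference network q(z | state) whose log-probability serves as an identifiability bonus in the entropy-regularized objective. The minimal Python sketch below only wires these pieces together for a single step; the dimensions and all weight matrices (`emb_mu`, `emb_logstd`, `W1`, `W2`, `Wq`) are invented and randomly initialized, it is not the authors' implementation, and it performs no training.

```python
# Hypothetical minimal sketch -- dimensions, weights, and names are invented.
import numpy as np

rng = np.random.default_rng(0)
n_tasks, z_dim, s_dim, a_dim, hidden = 2, 4, 8, 3, 32

emb_mu = 0.1 * rng.normal(size=(n_tasks, z_dim))      # per-task embedding means
emb_logstd = np.full((n_tasks, z_dim), -1.0)          # per-task embedding log-stds
W1 = 0.1 * rng.normal(size=(hidden, s_dim + z_dim))   # policy MLP, first layer
W2 = 0.1 * rng.normal(size=(a_dim, hidden))           # policy MLP, output layer
Wq = 0.1 * rng.normal(size=(z_dim, s_dim))            # inference net q(z | state)

def sample_skill(task_id):
    """Reparameterized sample z ~ p(z | task_id) from the per-task Gaussian."""
    eps = rng.normal(size=z_dim)
    return emb_mu[task_id] + np.exp(emb_logstd[task_id]) * eps

def policy(state, z):
    """Low-level policy conditioned on both the state and the skill embedding."""
    h = np.tanh(W1 @ np.concatenate([state, z]))
    return np.tanh(W2 @ h)                            # action in [-1, 1]^a_dim

def log_q(z, state, logstd=-1.0):
    """log q(z | state) under a diagonal-Gaussian inference network."""
    mu = Wq @ state
    return float(np.sum(-0.5 * ((z - mu) / np.exp(logstd)) ** 2
                        - logstd - 0.5 * np.log(2.0 * np.pi)))

state = rng.normal(size=s_dim)
z = sample_skill(task_id=0)   # in this sketch z is held fixed for the episode
action = policy(state, z)
bonus = log_q(z, state)       # identifiability bonus added to the task reward
print(action, bonus)
```

In the formulation discussed above, z would typically be sampled once and held fixed while a skill is executed, and the bonus log q(z | state) would be added to the task reward during training so that distinct embeddings remain distinguishable from the trajectories they generate.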
madry|towards_deep_learning_models_resistant_to_adversarial_attacks|ICLR_cc_2018_Conference | 1706.06083v4 | Towards Deep Learning Models Resistant to Adversarial Attacks | Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against a well-defined class of adversaries. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest robustness against a first-order adversary as a natural security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models. | {
"name": [
"aleksander m ądry",
"aleksandar makelov",
"ludwig schmidt",
"dimitris tsipras",
"adrian vladu"
],
"affiliation": [
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology Cambridge",
"location": "{'postCode': '02139', 'region': 'MA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology Cambridge",
"location": "{'postCode': '02139', 'region': 'MA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology Cambridge",
"location": "{'postCode': '02139', 'region': 'MA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology Cambridge",
"location": "{'postCode': '02139', 'region': 'MA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology Cambridge",
"location": "{'postCode': '02139', 'region': 'MA', 'country': 'USA'}"
}
]
} | null | [
"Computer Science",
"Mathematics"
] | International Conference on Learning Representations | 2017-06-19 | 0 | 9,408 | null | null | null | null | null | null | null | true | This paper presents new results on adversarial training, using the framework of robust optimization. Its minimax nature allows for principled methods of both training and attacking neural networks.
The reviewers were generally positive about its contributions, despite some concerns about 'overclaiming'. The AC recommends acceptance, and encourages the authors to also relate this work with the concurrent ICLR submission (https://openreview.net/forum?id=Hk6kPgZA-) which addresses the problem using a similar approach. | {
"review_id": [
"rkO53U_ez",
"SyRt7SoxG",
"Hy0j8ecgz"
],
"review": [
{
"title": "title: Review",
"paper_summary": null,
"main_review": "main_review: This paper proposes to look at making neural networks resistant to adversarial loss through the framework of saddle-point problems. They show that, on MNIST, a PGD adversary fits this framework and allows the authors to train very robust models. They also show encouraging results for robust CIFAR-10 models, but with still much room for improvement. Finally, they suggest that PGD is an optimal first order adversary, and leads to optimal robustness against any first order attack.\n\nThis paper is well written, brings new ideas and perfoms interesting experiments, but its claims are somewhat bothering me, considering that e.g. your CIFAR-10 results are somewhat underwhelming. All you've really proven is that PGD on MNIST seems to be the ultimate adversary. You contrast this to the fact that the optimization is non-convex, but we know for a fact that MNIST is fairly simple in that regime; iirc a linear classifier gets something like 91% accuracy on MNIST. So my guess is that the optimization problem on MNIST is in fact pretty convex and mostly respects the assumptions of Danskin's theorem, but not so much for CIFAR-10 (maybe even less so for e.g. ImageNet, which is what Kurakin et al. seem to find).\n\nConsidering your CIFAR-10 results, I don't think anyone should \"suggest that secure neural networks are within reach\", because 1) there is still room for improvement 2) it's a safe bet that someone will always just come up with a better attack than whatever defense we have now. It has been this way in many disciplines (crypto, security) for centuries, I don't see why deep learning should be exempt. Simply saying \"we believe that our robust models are significant progress on the defense side\" was enough, because afaik you did improve on CIFAR-10's SOTA; don't overclaim. \nYou make these kinds of claims in a few other places in this paper, please be careful with that.\n\nThe contributions in your appendix are interesting. \nAppendix A somewhat confirms one of the postulates in Goodfellow et al. (2014): \"The direction of perturbation, rather than the specific point in space, matters most. Space is not full of pockets of adversarial examples that finely tile the reals like the rational numbers\".\nAppendix B and C are not extremely novel in my mind, but definitely add more evidence. \nAppendix E is quite nice since it gives an insight into what actually makes the model resistant to adversarial examples.\n\n\nRemarks:\n- The update for PGD should be using \\nabla_{x_t} L(\\theta,x_t,y), (rather than only \\nabla_x)?\n- In table 2, attacking a with 20-step PGD is doing better than 7-step. When you say \"other hyperparameter choices didn’t offer a significant decrease in accuracy\", does that include the number of steps? If not why stop there? What happens for more steps? (or is it too computationally intensive?)\n- You only seem to consider adversarial examples created from your dataset + adv. noise. What about rubbish class examples? (e.g. rgb noise)\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: The unreasonable effectiveness of gradient descent",
"paper_summary": null,
"main_review": "main_review: This paper consolidates and builds on recent work on adversarial examples and adversarial training for image classification. Its contributions:\n\n - Making the connection between adversarial training and robust optimization more explicit.\n\n - Empirical evidence that:\n * Projected gradient descent (PGD) (as proposed by Kurakin et al. (2016)) reasonably approximates the optimal attack against deep convolutional neural networks\n * PGD finds better adversarial examples, and training with it yields more robust models, compared to FGSM \n\n - Additional empirical analysis:\n * Comparison of weights in robust and non-robust MNIST classifiers\n * Vulnerability of L_infty-robust models to to L_2-bounded attacks\n\nThe evidence that PGD consistently finds good examples is fairly compelling -- when initialized from 10,000 random points near the example to be disguised, it usually finds examples of similar quality. The remaining variance that's present in those distributions shouldn't hurt learning much, as long as a significant fraction of the adversarial examples are close enough to optimal.\n\nGiven the consistent effectiveness of PGD, using PGD for adversarial training should yield models that are reliably robust (for a specific definition of robustness, such as bounded L_infinity norm). This is an improvement over purely heuristic approaches, which are often less robust than claimed.\n\nThe comparison to R+FGSM is interesting, and could be extended in a few small ways. What would R+FGSM look like with 10,000 restarts? The distribution should be much broader, which would further demonstrate how PGD works better on these models. Also, when generating adversarial examples for testing, how well would R+FGSM work if you took the best of 2,000 random restarts? This would match the number of gradient computations required by PGD with 100 steps and 20 restarts. Again, I expect that PGD would be better, but this would make that point clearer. I think this analysis would make the paper stronger, but I don't think it's required for acceptance, especially since R+FGSM itself is such a recent development.\n\nOne thing not discussed is the high computational cost: performing a 40-step optimization of each training example will be ~40 times slower than standard stochastic gradient descent. I suspect this is the reason why there are results on MNIST and CIFAR, but not ImageNet. It would be very helpful to add some discussion of this.\n\nThe title seems unnecessarily vague, since many papers have been written with the same goal -- make deep learning models resistant to adversarial attacks. (This comment does not affect my opinion about whether or not the paper should be accepted, and is merely a suggestion for the authors.)\n\nAlso, much of the paper's content is in the appendices. This reads like a journal article where the references were put in the middle. I don't know if that's fixable, given conference constraints.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting experimental results, but insufficient to support some of the strong claims made in the paper",
"paper_summary": null,
"main_review": "main_review: - The authors investigate a minimax formulation of deep network learning to increase their robustness, using projected gradient descent as the main adversary. The idea of formulating the threat model as the inner maximization problem is an old one. Many previous works on dealing with uncertain inputs in classification apply this minimax approach using robust optimization, e.g.: \n\nhttps://www2.eecs.berkeley.edu/Pubs/TechRpts/2003/CSD-03-1279.pdf\nhttp://www.jmlr.org/papers/volume13/ben-tal12a/ben-tal12a.pdf\n\nIn the case of convex uncertainty sets, many of these problems can be solved efficiently to a global minimum. Generalization bounds on the adversarial losses can also be proved. Generalizing this approach to non-convex neural network learning makes sense, even when it is hard to obtain any theoretical guarantees. \n\n- The main novelty is the use of projected gradient descent (PGD) as the adversary. From the experiments it seems training with PGD is very robust against a set of adversaries including fast gradient sign method (FGSM), and the method proposed in Carlini & Wagner (CW). Although the empirical results are promising, in my opinion they are not sufficient to support the bold claim that PGD is a 'universal' first order adversary (on p2, in the contribution list) and provides broad security guarantee (in the abstract). For example, other adversarial example generation methods such as DeepFool and Jacobian-based Saliency Map approach are missing from the comparison. Also it is not robust to generalize from two datasets MNIST and CIFAR alone. \n\n- Another potential issue with using projected gradient descent as adversary is the quality of the adversarial example generated. The authors show empirically that PGD finds adversarial examples with very similar loss values on multiple runs. But this does not exclude the possibility that PGD with different step sizes or line search procedure, or the use of randomization strategies such as annealing, can find better adversarial examples under the same threat model. This could make the robustness of the network rather dependent on the specific implementation of PGD for the inner maximization problem. \n\n- In Tables 3, 4, and 5 in the appendix, in most cases models trained with PGD are more robust than models trained with FGSM as adversary, modulo the phenomenon of label leakage when using FGSM as attack. However in the bottom right corner of Table 4, FGSM training seems to be more robust than PGD training against black box PGD attacks. This raises the question on whether PGD is truly 'universal' and provides broad security guarantees, once we add more first order attacks methods to the mix. \n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.6666666865348816,
0.6666666865348816
],
"confidence": [
0.5,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
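The reviews above lean heavily on projected gradient descent (PGD) attacks with random restarts inside an L_infinity ball (e.g., the 10,000-restart experiment and the 40-step MNIST budget). For readers unfamiliar with the procedure, here is a minimal, illustrative sketch in PyTorch; the function name, default parameters, and the assumption of a classifier `model` over inputs in [0, 1] are ours, not code from the paper or the reviews.

```python
# Illustrative L_infinity PGD attack; a sketch, not the authors' implementation.
# Assumes `model` is a PyTorch classifier mapping images in [0, 1] to logits.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=0.3, step_size=0.01, num_steps=40, random_start=True):
    """Gradient ascent on the loss, projected back into the L_inf ball around x."""
    x_adv = x.clone().detach()
    if random_start:
        # Start from a random point in the epsilon-ball, as in the restart experiments.
        x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()                        # ascent step
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)  # project to the ball
            x_adv = x_adv.clamp(0.0, 1.0)                                  # keep valid pixel range
    return x_adv.detach()
```

Running this several times from fresh random starts and keeping the perturbation with the highest loss corresponds to the multi-restart evaluations discussed in the first review.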
"title": [
"Re: Certified Defenses for Data Poisoning Attacks",
"Review Response",
"Review Response",
"Re: Lyu et al. (2015)",
"Review Response"
],
"comment": [
"We thank the reviewer for inquiring about the novelty of our work. As we point out in our paper (see Page 3), the min-max formulation itself is not new. In fact, problem formulations of this form have been studied for multiple decades (c.f. the work of Abraham Wald). Moreover, there is a rich literature concerning min-max problems in robust optimization. Claiming a min-max formulation as new would ignore a significant body of prior work.\n\nAs we point out in the introduction, our main contribution is *how* we employ the min-max formulation to study adversarially robust machine learning. To the best of our knowledge, our paper is the first detailed study of the min-max formulation for robust neural networks. From a scientific point of view, the question is not only whether adversarial robustness can be described with a min-max formulation, but whether such a formulation actually matches the computational reality we face in the practice of deep learning.\n\nOne concrete contribution is our thorough experiments exploring the adversarial loss landscape. Combined with Danskin’s theorem, they give evidence to the theory that adversarial training is indeed a principled way to solve the aforementioned min-max problem. Furthermore, our paper conducted the first public attack challenge to ascertain the robustness of a proposed defense. The challenge showed that our MNIST model is the first deep network that could not easily be broken with a new attack (subject to l_infinity constraints).\n\nFinally, we would also like to point out that our paper appeared publicly nearly concurrently with the NIPS paper mentioned by the reviewer. Moreover, there are several important differences compared to this paper. For instance, their focus is on robustness to corrupt training data, while our paper is about robustly classifying new unseen examples. Overall, we believe that reducing the comparison with the cited NIPS paper to the min-max formulation is an oversimplification of both their work and our work.\n",
"We thank the reviewer for the feedback.\n \nWe agree with the reviewer that min-max approach for robust classification have been studied before. As we mention in our paper (right after equation 2.1), such formulations go back at least to the work of Abraham Wald in the 1940s (e.g., see https://www.jstor.org/stable/1969022). What we view as the main contribution of our paper lies, however, not in introducing a new problem formulation but in studying if such formulation can inform training methods that lead to reliably robust deep learning models in practice.\n \nWe do not claim that training with a PGD adversary is the main novelty of our paper - prior work has already employed a variety of iterative first-order methods. Instead, our goal is to argue that training with PGD is a principled approach to adversarial robustness and to give both theoretical and empirical evidence for this view. (See the connection via Danskin’s theorem, and our loss function explorations in Appendix A.) Moreover, we demonstrate that adversarial training with PGD - when done properly - leads to state-of-the-art robustness on two canonical datasets. In contrast to much other work in this area, we have also validated the robustness of our models via a public challenge in which our model underwent (unsuccessful) attacks by other research groups.\n \nRegarding PGD being a \"universal\" first-order adversary and the \"broad security guarantee\" claim: first, we would like to note that on Page 2 of our paper, we state that we provide evidence for this view, not that this view is necessarily correct. Still, we believe that from the point of view of first order methods, our evaluation approach is comprehensive. Moreover, it is also worth noting that there has been increasing evidence for this view since we first published our paper: (i) No researcher has been able to break our released models. (ii) Follow-up work has used verification tools to test the PGD approaches and found that the adversarial examples found by iterative first order methods are almost as good as the adversarial examples found with a computationally expensive exhaustive search. Hence we believe that at least in the context of L_infinity robustness, viewing PGD as a universal first-order adversary has merit.\n \nRegarding JSMA and DeepFool: JSMA is an attack that is designed to perturb as few pixels as possible (often by a large value). Restricting this attack to the space of attacks we consider (0.3 distance from original in L_infinity norm) leads to an attack that is very slow and, as far as we could tell, less potent than PGD. Deepfool is an attack that has been designed with an aim of computing minimum norm perturbations. For the regime we are studying, the only difference between DeepFool and the CW attack is the choice of target class to use at each step. We didn’t feel that testing against this variation was necessary (given the length of our paper). Again we want to emphasize that we invited the community to attempt attacks against our published model and we didn’t receive any attacks that significantly lowered the performance of our model. Nevertheless we will perform the suggested experiments and add them to the final version.\n \nRegarding the point about PGD step size and variations: While one needs to tune PGD to a certain degree, we found that the method is robust to reasonable changes in the choice of PGD parameters. Training against different PGD variants also leads to robust networks. 
\n \nRegarding Table 4: We emphasize that Table 4 contains results for transfer attacks, which add an additional complication due to the mismatch between the source model used to construct the attack, and the target model that we would like to attack. We do observe that training with FGSM offers more robustness against *transfer* attacks constructed using networks trained with PGD. There is an important caveat however. The larger robustness is due to the difference in the two models, not because FGSM produces inherently more robust models. We view this effect as an artifact of the transferability phenomenon rather than a fundamental shortcoming of PGD-based adversarial training. When we consider the minimum across all columns in each row, the PGD-trained target model offers significantly more robustness than the FGSM-trained model (64.2% vs. 0.0%).\n",
"We thank the reviewer for the positive feedback.\n \nRegarding R+FGSM: We evaluated our robust networks against R+FGSM with multiple restarts and got the following results.\n- MNIST. PGD-40: 93.2%, R+FGSM x40: 92.2%, R+FGSM x2000: 90.51%.\n- CIFAR10 (non-wide). PGD-10: 43.02%, R+FGSM x10: 50.17%, R+FGSM x2000: 48.66%.\nThese experiments suggest that for evaluation purposes R+FGSM is qualitatively similar to PGD (at least for adversarially trained networks). Still if one attempts to adversarially *train* using R+FGSM, the resulting classifier overfits to the R+FGSM perturbations and while achieving high training and test accuracy against R+FGSM, it is completely vulnerable to PGD.\n \nWe also created loss histograms to compare PGD and R+FGSM with the results plotted at https://ibb.co/gcJuxG (final loss value frequency over 10,000 random restarts for 5 random examples). We observe that R+FGSM exhibits a similar concentration phenomenon to that observed for PGD in Appendix A. We will include these experiments in the final paper version. We agree that further investigating the difference between PGD and R+FGSM is worth exploring in subsequent research.\n \nRegarding the computational cost of robust optimization: It is indeed true that training against a PGD adversary increases the training time by a factor that is roughly equal to the number of PGD steps. This is a drawback of this method that we hope will be addressed in future research. But, at this point, getting sufficient robustness indeed leads to a running time overhead. We will add a brief discussion of this in the final version of the paper.\n \nRegarding title choice: Our intention was to convey that there exist robustness baselines that current techniques can achieve. Still, we agree that the title might be too vague and we will revisit our choice.",
"We thank the reviewer for bringing the work of Lyu et al. to our attention. We will cite it and discuss it in the \"Related Work\" section of our updated paper. As we already point out in our paper, both the general min-max framework, as well as its application to the problem of adversarial examples, are not new. Min-max formulations have been used extensively in the context of robust optimization and statistics, going back at least to the work of Abraham Wald in the 1930s and 40s. In the context of adversarial examples, we already cite the work of Shaham et al. (https://arxiv.org/abs/1511.05432) and Huang et al. (https://arxiv.org/abs/1511.03034), which consider a similar min-max formulation and appeared on arXiv nearly concurrently with the work of Lyu et al.\n\nTo clarify our contributions: the min-max formulation is part of the approach and *not* claimed as a contribution (see our introduction and the reply to \"Certified Defenses for Data Poisoning Attacks\" above). Instead, one of our main contributions is to study the loss landscape of the saddle point problem, *without replacing the loss by its first-order approximation*. It is known that solving the saddle point problem with a first-order approximation of the loss (see Figure 6 of Appendix B in our paper) produces networks that are vulnerable to more sophisticated (multi-step) attacks.",
"We thanks the reviewer for providing feedback.\n \nRegarding \"secure networks are within reach\" claim: We definitely agree that there is (large) room for improvement on CIFAR10. Our claim, however, comes from the fact that (to the best of our knowledge) the classifiers we trained were the first ones to robustly classify any non-trivial fraction of the test set. We indeed believe (and provide experimental evidence for it) that no attack will significantly decrease the accuracy of our classifier. We view our results as a baseline that shows that classification robustness is indeed achievable (against a well-defined class of adversaries at least). We agree that our claims ended up sounding too strong though. We will update our paper to tone them down.\n \nRegarding CIFAR10 results: The reviewer points out that the optimization landscape for MNIST is much simpler than that of CIFAR10 and that this would explain the difference in the performance of the resulting classifiers. We want to point out, however, that our performance on CIFAR10 is due to poor generalization and not the difficulty of the training problem itself. As can be seen in Figure 1b, we are able to train a perfectly robust classifier with 100% adversarial accuracy on the training set. This shows that the optimization landscape of the problem is still tractable with a PGD adversary.\n \nRegarding CIFAR10 attack parameters: We didn’t explore additional parameters due to the computational constraints at the time. Overall we have observed that the number of PGD steps does not change the resulting accuracy by more than a few percent. For instance, if we retrain the (non-wide version of the) CIFAR10 network with a 5-step PGD adversary we get the following accuracies when testing against PGD: 5 steps -> 45.00%, 10 steps -> 43.02%, 20 steps -> 42.65%, 100 steps -> 42.21%.\n \nRegarding rubbish class examples: We agree that rubbish class examples are an important class to consider. However it is unclear how to rigorously define them. As we discuss in our paper (3rd paragraph of page 3), providing any kind of robustness guarantees requires a precise definitions of the allowed adversarial perturbations.\n"
]
} | {
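The responses above repeatedly appeal to the saddle-point view of adversarial training and to Danskin's theorem to justify taking gradient steps at inner maximizers. In standard notation, the robust optimization objective being discussed has the form below; this is a paraphrase for reference, not a quotation of the paper's equation 2.1.

```latex
\min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}}
  \Big[ \max_{\|\delta\|_{\infty} \leq \epsilon} L\big(\theta,\, x + \delta,\, y\big) \Big]
```

Adversarial training approximates the inner maximization with an attack such as PGD and then takes an outer gradient step on the model parameters at the resulting perturbed input; a k-step inner attack is also the source of the roughly k-fold training overhead acknowledged in the response on computational cost.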
"paperhash": [
"li|second-order_adversarial_attack_and_certifiable_robustness",
"schott|towards_the_first_adversarially_robust_neural_network_model_on_mnist",
"brendel|decision-based_adversarial_attacks:_reliable_attacks_against_black-box_machine_learning_models",
"carlini|ground-truth_adversarial_examples",
"he|adversarial_example_defenses:_ensembles_of_weak_defenses_are_not_strong",
"carlini|adversarial_examples_are_not_easily_detected:_bypassing_ten_detection_methods",
"tramèr|ensemble_adversarial_training:_attacks_and_defenses",
"tramèr|the_space_of_transferable_adversarial_examples",
"xu|feature_squeezing:_detecting_adversarial_examples_in_deep_neural_networks",
"rozsa|towards_robust_deep_neural_networks_with_bang",
"torkamani|robust_large_margin_approaches_for_machine_learning_in_adversarial_settings",
"kurakin|adversarial_machine_learning_at_scale",
"sharif|accessorize_to_a_crime:_real_and_stealthy_attacks_on_state-of-the-art_face_recognition",
"carlini|towards_evaluating_the_robustness_of_neural_networks",
"papernot|on_the_effectiveness_of_defensive_distillation",
"carlini|defensive_distillation_is_not_robust_to_adversarial_examples",
"sokolić|robust_large_margin_deep_neural_networks",
"papernot|transferability_in_machine_learning:_from_phenomena_to_black-box_attacks_using_adversarial_samples",
"he|deep_residual_learning_for_image_recognition",
"köbis|on_robust_optimization",
"papernot|the_limitations_of_deep_learning_in_adversarial_settings",
"moosavi-dezfooli|deepfool:_a_simple_and_accurate_method_to_fool_deep_neural_networks",
"papernot|distillation_as_a_defense_to_adversarial_perturbations_against_deep_neural_networks",
"lyu|a_unified_gradient_regularization_family_for_adversarial_examples",
"huang|learning_with_a_strong_adversary",
"fawzi|analysis_of_classifiers’_robustness_to_adversarial_perturbations",
"he|delving_deep_into_rectifiers:_surpassing_human-level_performance_on_imagenet_classification",
"goodfellow|explaining_and_harnessing_adversarial_examples",
"gu|towards_deep_neural_network_architectures_robust_to_adversarial_examples",
"nguyen|deep_neural_networks_are_easily_fooled:_high_confidence_predictions_for_unrecognizable_images",
"szegedy|intriguing_properties_of_neural_networks",
"biggio|evasion_attacks_against_machine_learning_at_test_time",
"krizhevsky|imagenet_classification_with_deep_convolutional_neural_networks",
"collobert|a_unified_architecture_for_natural_language_processing:_deep_neural_networks_with_multitask_learning",
"globerson|nightmare_at_test_time:_robust_learning_by_feature_deletion",
"dalvi|adversarial_classification",
"wald|statistical_decision_functions",
"wald|statistical_decision_functions_which_minimize_the_maximum_risk",
"wald|contributions_to_the_theory_of_statistical_estimation_and_testing_hypotheses",
"he|adversarial_example_defense:_ensembles_of_weak_defenses_are_not_strong",
"|a_lot_of_recent_literature_on_adversarial_training_discusses_the_phenomenon_of_transferability_goodfellow_et_al",
"shaham|understanding_adversarial_training:_increasing_local_stability_of_neural_nets_through_robust_optimization",
"|building_on_the_above_insights",
"|tensor_flow_models_repository"
],
"title": [
"Second-Order Adversarial Attack and Certifiable Robustness",
"Towards the first adversarially robust neural network model on MNIST",
"Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models",
"Ground-Truth Adversarial Examples",
"Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong",
"Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods",
"Ensemble Adversarial Training: Attacks and Defenses",
"The Space of Transferable Adversarial Examples",
"Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks",
"Towards Robust Deep Neural Networks with BANG",
"Robust Large Margin Approaches for Machine Learning in Adversarial Settings",
"Adversarial Machine Learning at Scale",
"Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition",
"Towards Evaluating the Robustness of Neural Networks",
"On the Effectiveness of Defensive Distillation",
"Defensive Distillation is Not Robust to Adversarial Examples",
"Robust Large Margin Deep Neural Networks",
"Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples",
"Deep Residual Learning for Image Recognition",
"On Robust Optimization",
"The Limitations of Deep Learning in Adversarial Settings",
"DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks",
"Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks",
"A Unified Gradient Regularization Family for Adversarial Examples",
"Learning with a Strong Adversary",
"Analysis of classifiers’ robustness to adversarial perturbations",
"Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"Explaining and Harnessing Adversarial Examples",
"Towards Deep Neural Network Architectures Robust to Adversarial Examples",
"Deep neural networks are easily fooled: High confidence predictions for unrecognizable images",
"Intriguing properties of neural networks",
"Evasion Attacks against Machine Learning at Test Time",
"ImageNet classification with deep convolutional neural networks",
"A unified architecture for natural language processing: deep neural networks with multitask learning",
"Nightmare at test time: robust learning by feature deletion",
"Adversarial classification",
"Statistical Decision Functions",
"Statistical Decision Functions Which Minimize the Maximum Risk",
"Contributions to the Theory of Statistical Estimation and Testing Hypotheses",
"Adversarial Example Defense: Ensembles of Weak Defenses are not Strong",
"A lot of recent literature on adversarial training discusses the phenomenon of transferability Goodfellow et al",
"Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization",
"Building on the above insights",
"Tensor flow models repository"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"Bai Li",
"Changyou Chen",
"Wenlin Wang",
"L. Carin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Lukas Schott",
"Jonas Rauber",
"M. Bethge",
"Wieland Brendel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Wieland Brendel",
"Jonas Rauber",
"M. Bethge"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicholas Carlini",
"Guy Katz",
"Clark W. Barrett",
"D. Dill"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Warren He",
"James Wei",
"Xinyun Chen",
"Nicholas Carlini",
"D. Song"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicholas Carlini",
"D. Wagner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Florian Tramèr",
"Alexey Kurakin",
"Nicolas Papernot",
"D. Boneh",
"P. Mcdaniel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Florian Tramèr",
"Nicolas Papernot",
"I. Goodfellow",
"D. Boneh",
"P. Mcdaniel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Weilin Xu",
"David Evans",
"Yanjun Qi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Andras Rozsa",
"Manuel Günther",
"T. Boult"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"MohamadAli Torkamani"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Alexey Kurakin",
"I. Goodfellow",
"Samy Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Mahmood Sharif",
"Sruti Bhagavatula",
"Lujo Bauer",
"M. Reiter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicholas Carlini",
"D. Wagner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicolas Papernot",
"P. Mcdaniel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicholas Carlini",
"D. Wagner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jure Sokolić",
"R. Giryes",
"G. Sapiro",
"M. Rodrigues"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicolas Papernot",
"P. Mcdaniel",
"I. Goodfellow"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kaiming He",
"X. Zhang",
"Shaoqing Ren",
"Jian Sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"E. Köbis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicolas Papernot",
"P. Mcdaniel",
"S. Jha",
"Matt Fredrikson",
"Z. B. Celik",
"A. Swami"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Seyed-Mohsen Moosavi-Dezfooli",
"Alhussein Fawzi",
"P. Frossard"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicolas Papernot",
"P. Mcdaniel",
"Xi Wu",
"S. Jha",
"A. Swami"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Chunchuan Lyu",
"Kaizhu Huang",
"Hai-Ning Liang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ruitong Huang",
"Bing Xu",
"Dale Schuurmans",
"Csaba Szepesvari"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Alhussein Fawzi",
"Omar Fawzi",
"P. Frossard"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kaiming He",
"X. Zhang",
"Shaoqing Ren",
"Jian Sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"I. Goodfellow",
"Jonathon Shlens",
"Christian Szegedy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Gu",
"Luca Rigazio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Anh Totti Nguyen",
"J. Yosinski",
"J. Clune"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Christian Szegedy",
"Wojciech Zaremba",
"I. Sutskever",
"Joan Bruna",
"D. Erhan",
"I. Goodfellow",
"R. Fergus"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. Biggio",
"Igino Corona",
"Davide Maiorca",
"B. Nelson",
"Nedim Srndic",
"P. Laskov",
"G. Giacinto",
"F. Roli"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Krizhevsky",
"I. Sutskever",
"Geoffrey E. Hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Collobert",
"J. Weston"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Globerson",
"S. Roweis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nilesh N. Dalvi",
"Pedro M. Domingos",
"Mausam",
"Sumit K. Sanghai",
"D. Verma"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Wald"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Wald"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Wald"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Warren He",
"James Wei",
"Xinyun Chen",
"Nicholas Carlini",
"D. Song"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [
"Uri Shaham",
"Yutaro Yamada",
"S. Negahban"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [],
"affiliation": []
}
],
"arxiv_id": [
"",
"1805.09190v3",
"1712.04248",
"1709.10207",
"1706.04701v1",
"1705.07263v2",
"1705.07204v5",
"1704.03453v2",
"1704.01155v2",
"1612.00138v3",
"",
"1611.01236v2",
"",
"1608.04644v2",
"1607.05113v1",
"1607.04311v1",
"1605.08254v3",
"1605.07277",
"1512.03385v1",
"",
"1511.07528v1",
"1511.04599v3",
"1511.04508v2",
"1511.06385v1",
"1511.03034v6",
"1502.02590",
"1502.01852",
"1412.6572v3",
"1412.5068v4",
"1412.1897v4",
"1312.6199v4",
"1708.06131v1",
"",
"",
"",
"",
"",
"",
"",
"1706.04701v1",
"",
"1511.05432v3",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
[
"background"
],
[
"background"
],
[
"background",
"methodology"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"result",
"background",
"methodology"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[
"methodology"
],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background",
"methodology"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"background"
],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
true,
true,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
true,
true,
true,
false,
false,
false,
false,
false
]
} | null | 92 | 102.260872 | 0.62963 | 0.666667 | null | null | null | null | null | rJzIBfZAb |
|
ghosh|divideandconquer_reinforcement_learning|ICLR_cc_2018_Conference | 997870 | 1711.09874 | Divide-and-Conquer Reinforcement Learning | Standard model-free deep reinforcement learning (RL) algorithms sample a new initial state for each trial, allowing them to optimize policies that can perform well even in highly stochastic environments. However, problems that exhibit considerable initial state variation typically produce high-variance gradient estimates for model-free RL, making direct policy or value function optimization challenging. In this paper, we develop a novel algorithm that instead partitions the initial state space into "slices", and optimizes an ensemble of policies, each on a different slice. The ensemble is gradually unified into a single policy that can succeed on the whole state space. This approach, which we term divide-and-conquer RL, is able to solve complex tasks where conventional deep RL methods are ineffective. Our results show that divide-and-conquer RL greatly outperforms conventional policy gradient methods on challenging grasping, manipulation, and locomotion tasks, and exceeds the performance of a variety of prior methods. Videos of policies learned by our algorithm can be viewed at https://sites.google.com/view/dnc-rl/
| {
"name": [
"dibya ghosh",
"avi singh",
"aravind rajeswaran",
"vikash kumar",
"sergey levine"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of California Berkeley",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of California Berkeley",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Washington",
"location": "{'settlement': 'Seattle'}"
},
{
"laboratory": "",
"institution": "University of Washington",
"location": "{'settlement': 'Seattle'}"
},
{
"laboratory": "",
"institution": "University of California Berkeley",
"location": "{}"
}
]
} | null | [
"deep reinforcement learning",
"reinforcement learning",
"policy gradients",
"model-free"
] | null | 2018-02-15 22:29:26 | 22 | 118 | 6 | null | null | null | null | null | null | true | This paper proposes a specific architecture for training an ensemble of separate policies on a family of easier tasks with the goal of obtaining a single policy that can perform well on a harder task. There are significant similarities to the recently published Distral algorithm, but I am convinced that this work offers a meaningful contribution beyond that work. Moreover, the authors performed a thorough comparison between their method and Distral and found that DnC performs better. | {
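The abstract and meta-review above describe training an ensemble of per-slice policies and gradually unifying them into a single central policy. As a rough, generic illustration of what such a unification (distillation) step can look like, the sketch below fits one central Gaussian policy to state-action pairs gathered from the local policies by maximum likelihood; the network sizes, synthetic data, and Gaussian parameterization are assumptions for the example, not the authors' code.

```python
# Generic distillation sketch: fit a central Gaussian policy to (state, action)
# pairs collected from per-slice local policies. All data here is synthetic.
import torch
import torch.nn as nn

state_dim, action_dim = 10, 4
central = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, action_dim))
log_std = nn.Parameter(torch.zeros(action_dim))
opt = torch.optim.Adam(list(central.parameters()) + [log_std], lr=1e-3)

# Placeholder for trajectories that the local policies would produce on their slices.
states = torch.randn(256, state_dim)
actions = torch.randn(256, action_dim)

for _ in range(100):
    mean = central(states)
    dist = torch.distributions.Normal(mean, log_std.exp())
    loss = -dist.log_prob(actions).sum(dim=-1).mean()   # maximum-likelihood distillation
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the algorithm discussed below, a supervised step of this kind alternates with policy-gradient updates of the local policies (the reviews note the distillation happens episodically every R iterations), which is what distinguishes it from plain behavioral cloning.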
"review_id": [
"rycTQSqgG",
"r1A2hMtgz",
"HJNRVMqez"
],
"review": [
{
"title": "title: Interesting approach, but seems like a fairly incremental advance on previous work",
"paper_summary": null,
"main_review": "main_review: This paper presents a method for learning a global policy over multiple different MDPs (referred to as different \"contexts\", each MDP having the same dynamics and reward, but different initial state). The basic idea is to learn a separate policy for each context, but regularized in a manner that keeps all of them relatively close to each other, and then learn a single centralized policy that merges the multiple policies via supervised learning. The method is evaluated on several continuous state and action control tasks, and shows improvement over existing and similar approaches, notably the Distral algorithm.\n\nI believe there are some interesting ideas presented in this paper, but in its current form I think that the delta over past work (particularly Distral) is ultimately too small to warrant publication at ICLR. The authors should correct me if I'm wrong, but it seems as though the algorithm presented here is virtually identical to Distral except that:\n1) The KL divergence term regularizes all policies together in a pairwise manner.\n2) The distillation step happens episodically every R steps rather than in a pure SGD manner.\n3) The authors possibly use a TRPO type objective for the standard policy gradient term, rather than REINFORCE-like approach as in Distral (this one point wasn't completely clear, as the authors mention that a \"centralized DnC\" is equivalent to Distral, so they may already be adapting it to the TRPO objective? some clarity on this point would be helpful).\nThus, despite better performance of the method over Distral, this doesn't necessarily seem like a substantially new algorithmic development. And given how sensitive RL tasks are to hyperparameter selection, there needs to be some very substantial treatment of how the regularization parameters are chosen here (both for DnC and for the Distral and centralized DnC variants). Otherwise, it honestly seems that the differences between the competing methods could be artifacts of the choice of regularization (the alpha parameter will affect just how tightly coupled the control policies actually are).\n\nIn addition to this point, the formulation of the problem setting in many cases was also somewhat unclear. In particular, the notion of the contextual MDP is not very clear from the presentation. The authors define a contextual MDP setting where in addition to the initial state there is an observed context to the MDP that can affect the initial state distribution (but not the transitions or reward). It's entirely unclear to me why this additional formulation is needed, and ultimately just seems to confuse the nature of the tasks here which is much more clearly presented just as transfer learning between identical MDPs with different state distributions; and the terminology also conflicts with the (much more complex) setting of contextual decision processes (see: https://arxiv.org/abs/1610.09512). It doesn't seem, for instance, that the final policy is context dependent (rather, it has to \"infer\" the context from whatever the initial state is, so effectively doesn't take the context into account at all). Part of the reasoning seems to be to make the work seem more distinct from Distral than it really is, but I don't see why \"transfer learning\" and the presented contextual MDP are really all that different.\n\nFinally, the experimental results need to be described in substantially more detail. 
The choice of regularization parameters, the precise nature of the context in each setting, and the precise design of the experiments are all extremely opaque in the current presentation. Since the methodology here is so similar to previous approaches, much more emphasis is required to better understand the (improved) empirical results in this setting.\n\nIn summary, while I do think the core ideas of this paper are interesting: whether it's better to regularize policies to a single central policy as in Distral or whether it's better to use joint regularization, whether we need two different timescales for distillation versus policy training, and what policy optimization method works best, as it is right now the algorithmic choices in the paper seem rather ad-hoc compared to Distral, and need substantially more empirical evidence.\n\nMinor comments:\n• There are several missing words/grammatical errors throughout the manuscript, e.g. on page 2 \"gradient information can better estimated\".",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Good paper, pushing the limits of RL to harder tasks.",
"paper_summary": null,
"main_review": "main_review: This paper presents a reinforcement learning method for learning complex tasks by dividing the state space into slices, learning local policies within each slice, while ensuring that they don't deviate too far from each other, while simultaneously learning a central policy that works across the entire state space in the process. The most closely related works to this one are Guided Policy Search (GPS) and \"Distral\", and the authors compare and contrast their work with the prior work suitably.\n\nThe paper is written well, has good insights, is technically sound, and has all the relevant references. The authors show through several experiments that the divide and conquer (DnC) technique can solve more complex tasks than can be solved with conventional policy gradient methods (TRPO is used as the baseline). The paper and included experiments are a valuable contribution to the community interested in solving harder and harder tasks using reinforcement learning.\n\nFor completeness, it would be great to include one more algorithm in the evaluation: an ablation of DnC which does not involve a central policy at all. If the local policies are trained to convergence, (and the context omega is provided by an oracle), how well does this mixture of local policies perform? This result would be instructive to see for each of the tasks.\n\nThe partitioning of each task must currently be designed by hand. It would be interesting (in future work) to explore how the partitioning could perhaps be discovered automatically.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Good submission, although could use more evaluation",
"paper_summary": null,
"main_review": "main_review: The submission tackles an important problem of learning highly varied skills. The approach relies on dividing the task space into subareas (defined by task context vectors) over which individual policies are trained, but are still required to operate well on tasks outside their context.\n\nThe exposition is clear and the method is well-motivated. I see no issues with the mathematical correctness of the claims made in the paper. The experimental results show a convincing benefit over TRPO and Distral on a number of manipulation and locomotion tasks. I would like to have seen more discussion of the computational costs and scaling of the method over TRPO or Distral, as the pairwise KL divergence terms grow quadratically in the number of contexts. \n\nWhile the method is well-motivated, the division of tasks into subareas seems arbitrarily chosen. It would be very useful for readers to see performance of the algorithm under other task decompositions to alleviate the worries that the algorithm is not sensitive to the decomposition choice.\n\nI would also like to see more discussion of curriculum learning, which also aims at tackling a similar problem of reducing complexity in early stages of training by choosing on simper tasks and progressing to more complex. Would such progressive tasks decompositions work better in your framework? Does your framework remove the need for curriculum learning?\n\nOverall, I believe this is in interesting piece of work and I believe would be of interest to ICLR community.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.6666666865348816,
0.6666666865348816
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
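One review above asks about the computational cost of the pairwise KL divergence terms, which grow quadratically with the number of contexts. As a concrete but purely illustrative picture, the sketch below computes the standard closed-form KL between diagonal Gaussian action distributions over all ordered pairs of local policies; whether the paper weights, symmetrizes, or subsamples these terms is not assumed here, and the shapes are made up for the example.

```python
# Illustrative pairwise KL penalty between diagonal Gaussian policies.
# means[i], log_stds[i]: outputs of local policy i on a shared batch of states.
import torch

def diag_gauss_kl(mu_p, log_std_p, mu_q, log_std_q):
    """KL( N(mu_p, std_p^2) || N(mu_q, std_q^2) ), summed over action dimensions."""
    var_p, var_q = (2 * log_std_p).exp(), (2 * log_std_q).exp()
    kl = log_std_q - log_std_p + (var_p + (mu_p - mu_q) ** 2) / (2 * var_q) - 0.5
    return kl.sum(dim=-1)

def pairwise_kl_penalty(means, log_stds):
    """Average KL over all ordered pairs of the k local policies (O(k^2) terms)."""
    k = len(means)
    total, count = 0.0, 0
    for i in range(k):
        for j in range(k):
            if i != j:
                total = total + diag_gauss_kl(means[i], log_stds[i], means[j], log_stds[j]).mean()
                count += 1
    return total / max(count, 1)

# Toy usage: 4 contexts, a batch of 32 states, 6-dimensional actions.
means = [torch.randn(32, 6) for _ in range(4)]
log_stds = [torch.zeros(32, 6) for _ in range(4)]
penalty = pairwise_kl_penalty(means, log_stds)
```

The double loop makes the quadratic scaling explicit; as the author response notes, environment sampling rather than this penalty tends to dominate the cost in practice.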
"title": [
"Response to Reviewer 2",
"Response to Reviewer 3: Part 1",
"Response to Reviewer 1",
"Response to Reviewer 3: Part 2"
],
"comment": [
"Thank you for your very valuable comments. We address your questions below.\n\nIn regard to the choice of partitions: to address any potential concern regarding the partitions, we added additional experiments in Appendix D where the partitions are determined automatically, rather than being hand-specified. It is true that some care must be taken to get reasonable partitions, although our experiments suggest that even a simple K-means method can produce good results automatically. In Appendix D, we evaluate DnC on contexts generated by a K-means clustering procedure on the initial state distribution for the Picking task, which performs comparably to our manually designed contexts, indicating that performance of DnC is not particular to our choice of decomposition. We intend to extend this procedure to all the tasks for the final version. We further believe that it’s possible to find more sophisticated automatic methods to generate the decompositions, which would make for interesting future work. \n\n\nRegarding the complexity of the pairwise KL divergence, we have updated the paper to include a discussion of the computational cost in the fourth paragraph of Section 4.2. Empirically we find that the quadratic penalty is not a bottleneck for the problems we hope to address with DnC, since sampling the environment is by far the most computationally demanding operation.\n\nIn regard to the relationship with curriculum learning, we have now added some remarks at the end of the first paragraph of Section 2. Investigating the use of progressive decompositions with our method is an interesting direction for future work!\n",
"Thank you for your valuable suggestions!\n\nWe have included specific experiment details in Appendix A. In particular, we ran an extensive penalty hyperparameter sweep for DnC, centralized DnC, and Distral on each task to select the appropriate parameter for each method. Since the initial version, we have also updated the experiments by conducting a finer hyperparameter sweep and by running experiments with 5 random seeds instead of 3. We have updated the paper with the results obtained from these searches (Figure 1,Table 1). We thus contend that the difference between the performance of the various methods is not contingent on the exact choice of hyperparameters, and is indeed a result of the algorithmic differences. If the reviewer has any other suggestions for how to address this concern, we would be happy to incorporate them. We have also included more comprehensive task information, which detail precisely what the contexts are in each task, in Appendix B. We have updated the paper to distinguish our use of the word “context” from contextual MDPs in Section 3. We also clarify in Section 5 that our analysis ports Distral to the TRPO objective. While the original Distral paper uses soft Q-learning, we adapt the algorithm to TRPO, since empirically TRPO exhibits better performance on high-dimensional continuous control tasks. If the reviewer has further recommendations, we would be happy to address these as well.\n",
"Thank you for your very valuable feedback! \n\nWe have modified the paper to include comparisons between DnC and two different oracle-based ensembles of local policies in Appendix C. The first ablation of DnC never distills the policies together, training the local policies to convergence. This ablation performs poorly compared to DnC in most tasks: we hypothesize that the distillation step allows the local policies to escape the local minima that policy gradient methods generally suffer from. Similar observations have been noted in Mordatch et al. [1], where trajectory optimization without distillation to a central neural network underperforms. The other ablation runs DnC, but returns the final local ensemble instead of the final global policy. We observe that this final local ensemble with oracle context performs only marginally better than the final global policy in most tasks, indicating that there is little loss in performance during the distillation process. For both of these variants, the central policy, which must operate successfully for a wide range of contexts, generalizes better to contexts that are slightly different than the training distribution. Considering that training and testing conditions will almost always differ slightly in practice, even if one has oracle access to the context, it might be beneficial to use the central policy due to its better generalization capability.\n\nAutomatic ways to perform the partitioning is indeed an interesting future direction! As a step in this direction, we have updated the paper with a simple automated partitioning scheme in Appendix D. Partitions are automatically generated via a K-means clustering procedure on the initial state distribution to generate contexts, and find that DnC performs well in this case as well. We hope to pursue more elaborate partitioning schemes in future work.\n\n[1] Mordatch et al, Interactive Control of Diverse Complex Characters with Neural Networks, NIPS 2015\n",
"\nWe now address concerns regarding the differences between our method and Distral [1]. DnC and Distral not only have completely different motivations, but the technical differences between the two algorithms are substantial as well. It is worth noting that our experiments (with hyperparameter searches and multiple random seeds) over five varied tasks in the locomotion and manipulation settings clearly illustrate that the Distral method as described in prior work does not solve the challenging tasks in our evaluation, while our approach does. This extensive comparative evaluation already establishes a clear contribution over the prior work, as noted by the other two reviewers.\n\nThere are also significant conceptual differences. Distral considers a transfer learning setting, while the goal in our work is to obtain a single policy that succeeds on a single challenging task with stochastic structure. While both algorithms could be applied to both settings, we feel this conceptual difference is very important. Whereas our method is concerned with the performance of the central policy on the full state space, the Distral paper evaluates performance of the local policies on their respective domains.\n\nFurthermore, Distral does not propose nor analyze the potential to solve challenging continuous control tasks with stochastic initial state distributions. The observation that decomposing the initial state distribution in this way leads to drastically improved performance is not at all obvious, and is a key insight of our work. We believe that this contribution will be highly relevant to researchers interested in solving complex continuous control tasks, and this contribution is not present in the Distral paper. In the updated paper, we also describe how to automate the process of generating these decompositions, and present results in Appendix D. We find that DnC with this automated partitioning performs comparably to the manual partitions outlined in the paper, without the need for any manual specification of partitions.\n\nBoth DnC and Distral maintain the core idea that optimizing local or instance-specific policies can simplify many tasks. This idea is not new, and is popular in the RL community after works related to guided policy search [2]. In fact, ideas of the same flavor are present even in older works like target propagation [3] where an optimization method generates targets for a supervised learning network. From a bird’s-eye perspective, all these methods exploit the same principle, but a closer look at the technical details unveil significant differences.\n\nFor example, GPS observes that adding a regularization term to stay close to the central network helps with distillation and overall convergence. Distral rediscovers the exact KL regularization and supervised distillation procedure as GPS, albeit with neural networks as local policies. However, Distral’s key innovation comes from carefully choosing the algorithms for local training and applying the method to challenging visual transfer learning scenarios, something that the basic guided policy search algorithm does not do. In the same way, we propose a modified method with pairwise KL regularization terms and a varied distillation schedule, and apply it to challenging stochastic initial state continuous control tasks, as compared to the discrete control setup in Distral. Furthermore, while Distral uses soft Q-learning for their discrete action tasks, we use TRPO due to its stable performance in continuous control tasks. 
From this perspective, we believe that the difference between our method and Distral is comparable to, if not greater than, the difference between Distral and GPS. \n\nIn motivation, technical detail, and empirical performance, DnC varies significantly from Distral. Thus, we believe that the proposed method, DnC, is quite different from previous methods, a sentiment that is shared by the other two reviews as well.\n\n\n[1] Teh et al., Distral, NIPS 2017\n[2] Levine et al., Guided Policy Search, ICML 2013\n[3] see references of Lee et al., Difference Target Propagation, ECML PKDD 2015\n"
]
} | {
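The responses above mention an automated variant (their Appendix D) in which contexts are produced by K-means clustering over the initial state distribution. A minimal sketch of that kind of partitioning, using made-up initial-state samples and scikit-learn's standard KMeans API, might look like the following; it only illustrates the mechanic of routing sampled initial states to context indices and does not reproduce the paper's setup.

```python
# Sketch: partition sampled initial states into k contexts with K-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
initial_states = rng.normal(size=(5000, 12))   # placeholder samples from the initial state distribution

k = 4                                          # number of contexts / local policies
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(initial_states)

# During training, each sampled initial state would be routed to the local policy
# whose cluster it falls into; the central policy is later distilled from all of them.
context_ids = kmeans.predict(initial_states)
for c in range(k):
    print(f"context {c}: {int((context_ids == c).sum())} initial states")
```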
"paperhash": [
"nair|overcoming_exploration_in_reinforcement_learning_with_demonstrations",
"rajeswaran|learning_complex_dexterous_manipulation_with_deep_reinforcement_learning_and_demonstrations",
"teh|distral:_robust_multitask_reinforcement_learning",
"heess|emergence_of_locomotion_behaviours_in_rich_environments",
"andrychowicz|hindsight_experience_replay",
"popov|data-efficient_deep_reinforcement_learning_for_dexterous_manipulation",
"rajeswaran|towards_generalization_and_simplicity_in_continuous_control",
"brockman|openai_gym",
"kumar|optimal_control_with_learned_local_models:_application_to_dexterous_manipulation",
"mordatch|interactive_control_of_diverse_complex_characters_with_neural_networks",
"levine|end-to-end_training_of_deep_visuomotor_policies",
"schulman|trust_region_policy_optimization",
"mordatch|combining_the_benefits_of_function_approximation_and_trajectory_optimization",
"mnih|playing_atari_with_deep_reinforcement_learning",
"levine|guided_policy_search",
"todorov|mujoco:_a_physics_engine_for_model-based_control",
"kober|learning_throwing_and_catching_skills",
"kakade|a_natural_policy_gradient"
],
"title": [
"Overcoming Exploration in Reinforcement Learning with Demonstrations",
"Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations",
"Distral: Robust multitask reinforcement learning",
"Emergence of Locomotion Behaviours in Rich Environments",
"Hindsight Experience Replay",
"Data-efficient Deep Reinforcement Learning for Dexterous Manipulation",
"Towards Generalization and Simplicity in Continuous Control",
"OpenAI Gym",
"Optimal control with learned local models: Application to dexterous manipulation",
"Interactive Control of Diverse Complex Characters with Neural Networks",
"End-to-End Training of Deep Visuomotor Policies",
"Trust Region Policy Optimization",
"Combining the benefits of function approximation and trajectory optimization",
"Playing Atari with Deep Reinforcement Learning",
"Guided Policy Search",
"MuJoCo: A physics engine for model-based control",
"Learning throwing and catching skills",
"A Natural Policy Gradient"
],
"abstract": [
"Exploration in environments with sparse rewards has been a persistent problem in reinforcement learning (RL). Many tasks are natural to specify with a sparse reward, and manually shaping a reward function can result in suboptimal performance. However, finding a non-zero reward is exponentially more difficult with increasing task horizon or action dimensionality. This puts many real-world tasks out of practical reach of RL methods. In this work, we use demonstrations to overcome the exploration problem and successfully learn to perform long-horizon, multi-step robotics tasks with continuous control such as stacking blocks with a robot arm. Our method, which builds on top of Deep Deterministic Policy Gradients and Hindsight Experience Replay, provides an order of magnitude of speedup over RL on simulated robotics tasks. It is simple to implement and makes only the additional assumption that we can collect a small set of demonstrations. Furthermore, our method is able to solve tasks not solvable by either RL or behavior cloning alone, and often ends up outperforming the demonstrator policy.",
"Dexterous multi-fingered hands are extremely versatile and provide a generic way to perform multiple tasks in human-centric environments. However, effectively controlling them remains challenging due to their high dimensionality and large number of potential contacts. Deep reinforcement learning (DRL) provides a model-agnostic approach to control complex dynamical systems, but has not been shown to scale to high-dimensional dexterous manipulation. Furthermore, deployment of DRL on physical systems remains challenging due to sample inefficiency. Thus, the success of DRL in robotics has thus far been limited to simpler manipulators and tasks. In this work, we show that model-free DRL with natural policy gradients can effectively scale up to complex manipulation tasks with a high-dimensional 24-DoF hand, and solve them from scratch in simulated experiments. Furthermore, with the use of a small number of human demonstrations, the sample complexity can be significantly reduced, and enable learning within the equivalent of a few hours of robot experience. We demonstrate successful policies for multiple complex tasks: object relocation, in-hand manipulation, tool use, and door opening.",
"Most deep reinforcement learning algorithms are data inefficient in complex and rich environments, limiting their applicability to many scenarios. One direction for improving data efficiency is multitask learning with shared neural network parameters, where efficiency may be improved through transfer across related tasks. In practice, however, this is not usually observed, because gradients from different tasks can interfere negatively, making learning unstable and sometimes even less data efficient. Another issue is the different reward schemes between tasks, which can easily lead to one task dominating the learning of a shared model. We propose a new approach for joint training of multiple tasks, which we refer to as Distral (Distill & transfer learning). Instead of sharing parameters between the different workers, we propose to share a \"distilled\" policy that captures common behaviour across tasks. Each worker is trained to solve its own task while constrained to stay close to the shared policy, while the shared policy is trained by distillation to be the centroid of all task policies. Both aspects of the learning process are derived by optimizing a joint objective function. We show that our approach supports efficient transfer on complex 3D environments, outperforming several related methods. Moreover, the proposed learning process is more robust and more stable---attributes that are critical in deep reinforcement learning.",
"The reinforcement learning paradigm allows, in principle, for complex behaviours to be learned directly from simple reward signals. In practice, however, it is common to carefully hand-design the reward function to encourage a particular solution, or to derive it from demonstration data. In this paper explore how a rich environment can help to promote the learning of complex behavior. Specifically, we train agents in diverse environmental contexts, and find that this encourages the emergence of robust behaviours that perform well across a suite of tasks. We demonstrate this principle for locomotion -- behaviours that are known for their sensitivity to the choice of reward. We train several simulated bodies on a diverse set of challenging terrains and obstacles, using a simple reward function based on forward progress. Using a novel scalable variant of policy gradient reinforcement learning, our agents learn to run, jump, crouch and turn as required by the environment without explicit reward-based guidance. A visual depiction of highlights of the learned behavior can be viewed following this https URL .",
"Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. \nWe demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.",
"Deep learning and reinforcement learning methods have recently been used to solve a variety of problems in continuous control domains. An obvious application of these techniques is dexterous manipulation tasks in robotics which are difficult to solve using traditional control theory or hand-engineered approaches. One example of such a task is to grasp an object and precisely stack it on another. Solving this difficult and practically relevant problem in the real world is an important long-term goal for the field of robotics. Here we take a step towards this goal by examining the problem in simulation and providing models and techniques aimed at solving it. We introduce two extensions to the Deep Deterministic Policy Gradient algorithm (DDPG), a model-free Q-learning based method, which make it significantly more data-efficient and scalable. Our results show that by making extensive use of off-policy data and replay, it is possible to find control policies that robustly grasp objects and stack them. Further, our results hint that it may soon be feasible to train successful stacking policies by collecting interactions on real robots.",
"This work shows that policies with simple linear and RBF parameterizations can be trained to solve a variety of continuous control tasks, including the OpenAI gym benchmarks. The performance of these trained policies are competitive with state of the art results, obtained with more elaborate parameterizations such as fully connected neural networks. Furthermore, existing training and testing scenarios are shown to be very limited and prone to over-fitting, thus giving rise to only trajectory-centric policies. Training with a diverse initial state distribution is shown to produce more global policies with better generalization. This allows for interactive control scenarios where the system recovers from large on-line perturbations; as shown in the supplementary video.",
"OpenAI Gym is a toolkit for reinforcement learning research. It includes a growing collection of benchmark problems that expose a common interface, and a website where people can share their results and compare the performance of algorithms. This whitepaper discusses the components of OpenAI Gym and the design decisions that went into the software.",
"We describe a method for learning dexterous manipulation skills with a pneumatically-actuated tendon-driven 24-DoF hand. The method combines iteratively refitted time-varying linear models with trajectory optimization, and can be seen as an instance of model-based reinforcement learning or as adaptive optimal control. Its appeal lies in the ability to handle challenging problems with surprisingly little data. We show that we can achieve sample-efficient learning of tasks that involve intermittent contact dynamics and under-actuation. Furthermore, we can control the hand directly at the level of the pneumatic valves, without the use of a prior model that describes the relationship between valve commands and joint torques. We compare results from learning in simulation and on the physical system. Even though the learned policies are local, they are able to control the system in the face of substantial variability in initial state.",
"We present a method for training recurrent neural networks to act as near-optimal feedback controllers. It is able to generate stable and realistic behaviors for a range of dynamical systems and tasks - swimming, flying, biped and quadruped walking with different body morphologies. It does not require motion capture or task-specific features or state machines. The controller is a neural network, having a large number of feed-forward units that learn elaborate state-action mappings, and a small number of recurrent units that implement memory states beyond the physical system state. The action generated by the network is defined as velocity. Thus the network is not learning a control policy, but rather the dynamics under an implicit policy. Essential features of the method include interleaving supervised learning with trajectory optimization, injecting noise during training, training for unexpected changes in the task specification, and using the trajectory optimizer to obtain optimal feedback gains in addition to optimal actions.",
"Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a partially observed guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.",
"In this article, we describe a method for optimizing control policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified scheme, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.",
"Neural networks have recently solved many hard problems in Machine Learning, but their impact in control remains limited. Trajectory optimization has recently solved many hard problems in robotic control, but using it online remains challenging. Here we leverage the high-fidelity solutions obtained by trajectory optimization to speed up the training of neural network controllers. The two learning problems are coupled using the Alternating Direction Method of Multipliers (ADMM). This coupling enables the trajectory optimizer to act as a teacher, gradually guiding the network towards better solutions. We develop a new trajectory optimizer based on inverse contact dynamics, and provide not only the trajectories but also the feedback gains as training data to the network. The method is illustrated on rolling, reaching, swimming and walking tasks.",
"We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.",
"Direct policy search can effectively scale to high-dimensional systems, but complex policies with hundreds of parameters often present a challenge for such methods, requiring numerous samples and often falling into poor local optima. We present a guided policy search algorithm that uses trajectory optimization to direct policy learning and avoid poor local optima. We show how differential dynamic programming can be used to generate suitable guiding samples, and describe a regularized importance sampled policy optimization that incorporates these samples into the policy search. We evaluate the method by learning neural network controllers for planar swimming, hopping, and walking, as well as simulated 3D humanoid running.",
"We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D homanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available.",
"In this video, we present approaches for learning throwing and catching skills. We first show how a hitting skill (i.e., table tennis) can be learned using a combination of imitation and reinforcement learning. This hitting skill is subsequently generalized to a catching skill. Secondly, we show how a robot can adapt a throwing skill to new targets. Finally, we demonstrate that a BioRob and a Barrett WAM can play catch together using the previously acquired skills.",
"We provide a natural gradient method that represents the steepest descent direction based on the underlying structure of the parameter space. Although gradient methods cannot make large changes in the values of the parameters, we show that the natural gradient is moving toward choosing a greedy optimal action rather than just a better action. These greedy optimal actions are those that would be chosen under one improvement step of policy iteration with approximate, compatible value functions, as defined by Sutton et al. [9]. We then show drastic performance improvements in simple MDPs and in the more challenging MDP of Tetris."
],
"authors": [
{
"name": [
"Ashvin Nair",
"Bob McGrew",
"Marcin Andrychowicz",
"Wojciech Zaremba",
"P. Abbeel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Rajeswaran",
"Vikash Kumar",
"Abhishek Gupta",
"John Schulman",
"E. Todorov",
"S. Levine"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Y. Teh",
"V. Bapst",
"Wojciech M. Czarnecki",
"John Quan",
"J. Kirkpatrick",
"R. Hadsell",
"N. Heess",
"Razvan Pascanu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"N. Heess",
"TB Dhruva",
"S. Sriram",
"Jay Lemmon",
"J. Merel",
"Greg Wayne",
"Yuval Tassa",
"Tom Erez",
"Ziyun Wang",
"S. Eslami",
"Martin A. Riedmiller",
"David Silver"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Marcin Andrychowicz",
"Dwight Crow",
"Alex Ray",
"Jonas Schneider",
"Rachel Fong",
"P. Welinder",
"Bob McGrew",
"Joshua Tobin",
"P. Abbeel",
"Wojciech Zaremba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"I. Popov",
"N. Heess",
"T. Lillicrap",
"Roland Hafner",
"Gabriel Barth-Maron",
"Matej Vecerík",
"Thomas Lampe",
"Yuval Tassa",
"Tom Erez",
"Martin A. Riedmiller"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Rajeswaran",
"Kendall Lowrey",
"E. Todorov",
"S. Kakade"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Greg Brockman",
"Vicki Cheung",
"Ludwig Pettersson",
"Jonas Schneider",
"John Schulman",
"Jie Tang",
"Wojciech Zaremba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Vikash Kumar",
"E. Todorov",
"S. Levine"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Igor Mordatch",
"Kendall Lowrey",
"Galen Andrew",
"Zoran Popovic",
"E. Todorov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Levine",
"Chelsea Finn",
"Trevor Darrell",
"P. Abbeel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"John Schulman",
"S. Levine",
"P. Abbeel",
"Michael I. Jordan",
"Philipp Moritz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Igor Mordatch",
"E. Todorov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Volodymyr Mnih",
"K. Kavukcuoglu",
"David Silver",
"Alex Graves",
"Ioannis Antonoglou",
"D. Wierstra",
"Martin A. Riedmiller"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Levine",
"V. Koltun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"E. Todorov",
"Tom Erez",
"Yuval Tassa"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jens Kober",
"Katharina Muelling",
"Jan Peters"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Kakade"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1709.10089",
"1709.10087",
"1707.04175",
"1707.02286",
"1707.01495",
"1704.03073",
"1703.02660",
"1606.01540",
null,
null,
"1504.00702",
"1502.05477",
null,
"1312.5602",
null,
null,
null,
null
],
"s2_corpus_id": [
"3543784",
"4780901",
"31009408",
"30099687",
"3532908",
"13268930",
"4042234",
"16099293",
"7586242",
"8360813",
"7242892",
"16046818",
"5564734",
"15238391",
"13971447",
"5230692",
"6467599",
"14540458"
],
"intents": [
[
"background"
],
[],
[
"methodology",
"background"
],
[
"methodology",
"background"
],
[
"methodology",
"background"
],
[
"background"
],
[],
[
"background"
],
[
"methodology",
"background"
],
[],
[
"methodology",
"background"
],
[
"methodology"
],
[],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"methodology",
"background"
],
[
"methodology"
]
],
"isInfluential": [
true,
false,
true,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | null | 84 | 1.404762 | 0.555556 | 0.75 | null | null | null | null | null | rJwelMbR- |
logeswaran|an_efficient_framework_for_learning_sentence_representations|ICLR_cc_2018_Conference | 1803.02893v1 | An efficient framework for learning sentence representations | In this work we propose a simple and efficient framework for learning sentence representations from unlabelled data. Drawing inspiration from the distributional hypothesis and recent work on learning sentence representations, we reformulate the problem of predicting the context in which a sentence appears as a classification problem. Given a sentence and the context in which it appears, a classifier distinguishes context sentences from other contrastive sentences based on their vector representations. This allows us to efficiently learn different types of encoding functions, and we show that the model learns high-quality sentence representations. We demonstrate that our sentence representations outperform state-of-the-art unsupervised and supervised representation learning methods on several downstream NLP tasks that involve understanding sentence semantics while achieving an order of magnitude speedup in training time. | {
"name": [
"lajanugen logeswaran",
"honglak lee"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'settlement': 'Ann Arbor', 'region': 'MI', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'settlement': 'Ann Arbor', 'region': 'MI', 'country': 'USA'}"
}
]
} | null | [
"Computer Science"
] | International Conference on Learning Representations | 2018-02-15 | 49 | 490 | null | null | null | null | null | null | null | true | Though the approach is not terribly novel, it is quite effective (as confirmed on a wide range of evaluation tasks). The approach is simple and likely to be useful in applications. The paper is well written.
+ simple and efficient
+ high quality evaluation
+ strong results
- novelty is somewhat limited
| {
"review_id": [
"rJMoj-jxf",
"SJNPXFyeM",
"ByVL483xf"
],
"review": [
{
"title": "title: An elegant and simple alternative to existing methods, but empirical advantages are unclear ",
"paper_summary": null,
"main_review": "main_review: [REVISION]\n\nThank you for your clarification. I appreciate the effort and think it has improved the paper. I have updated my score accordingly\n\n====== \n\nThis paper proposes a new objective for learning SkipThought-style sentence representations from corpora of ordered sentences. The algorithm is much faster than SkipThoughts as it swaps the word-level decoder for a contrastive classification loss. \n\nComments:\n\nSince one of the key advantages of this method is the speed, I was surprised there was not a more formal comparison of the speed of training different models. For instance, it would be more convincing if two otherwise identical encoders were trained on the same machine on the books corpus with the proposed objective and the skipthoughts decoding objective, and the representations compared after X hours of training. The reported 2 weeks required to train Skipthoughts comes from the paper, but things might be faster now with more up-to-date deep learning libraries etc. If this was what was in fact done, then it's probably just a case of presenting the comparison in a more formal way. I would also lose the sentence \"we are able to train many models in the time it takes to train most unsupervised\" (see next point for reasons why this is questionable).\n\nIt would have been interesting to apply this method with BOW encoders, which should be even faster than RNN-based encoders reported in this paper. The faster BOW models tend to give better performance on cosine-similarity evaluations ( quantifying the nearest-neighbour analysis that the authors use in this paper). Indeed, it would be interesting (although of course not definitive) to see comparison of the proposed algorithm (with BOW and RNN encoders) on cosine sentence similarity evaluations. \n\nThe proposed novelty is simple and intuitive, which I think is a strength of the method. However, a simple idea makes overlap with other proposed approaches more likely, and I'd like the author to check through the public comments to ensure that all previous related ideas are noted in this paper. \n\nI think the authors could do more to emphasise what the point is of trying to learn sentence embeddings. An idea of the eventual applications of these embeddings would make it easier to determine, for instance, whether the supervised ensembling method applied here would be applicable in practice. Moreover, many papers have emphasised the limitations of the evaluations used in this paper (although they are still commonly used) so it would be good to acknowledge that it's hard to draw too many conclusions from such numbers. That said, the numbers are comparable Skipthoughts, so it's clear that this method learns representations of comparable quality. \n\nThe justification for the proposed algorithm is clear in terms of efficiency, but I don't think it's immediately clear from a semantic / linguistic point of view. The statement \"The meaning of a sentence is the property that creates bonds....\" seems to have been cooked up to justify the algorithm, not vice versa. I would cut all of that speculation out and focus on empirically verifiable advantages. \n\nThe section of image embeddings comes completely out of the blue and is very hard to interpret. I'm still not sure I understand this evaluation (short of looking up the Kiros et al. paper), or how the proposed model is applied to a multi-modal task.\n\nThere is much scope to add more structured analysis of the type hinted by the nearest neighbours section. 
Cherry picked lists don't tell the reader much, but statistics or more general linguistic trends can be found in these neighbours and aggregated, that could be very interesting. \n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Great results, with minor concerns",
"paper_summary": null,
"main_review": "main_review: ==Update==\n\nI appreciate the response, and continue to recommend acceptance. The evaluation metric used in this paper (SentEval) represents an important open problem in NLP—learning reusable sentence representations—and one of the problems in NLP best suited to presentation at IC*LR*. Because of this, I'm willing to excuse the fact that the paper is only moderately novel, in light of the impressive reported results.\n\nWhile I would appreciate a direct (same codebase, same data) comparison with some outside baselines, this paper meets or exceeds the standards for rigor that were established by previous published work in the area, and the existing results are sufficient to support some substantial conclusions.\n\n==========\n\nThis paper proposes an alternative formulation of Kiros's SkipThought objective for training general-purpose sentence encoder RNNs on unlabeled data. This formulation replaces the decoder in that model with a second encoder, and yields substantial improvements to both speed and model performance (as measured on downstream transfer tasks). The resulting model is, for the first time, reasonably competitive even with models that are trained end-to-end on labeled data for the downstream tasks (despite the requirement, imposed by the evaluation procedure, that only the top layer classifier be trained for the downstream tasks here), and is also competitive with models trained on large labeled datasets like SNLI. The idea is reasonable, the topic is important, and the results are quite strong. I recommend acceptance, with some caveats that I hope can be addressed.\n\nConcerns:\n\nA nearly identical idea to the core idea of this paper was proposed in an arXiv paper this spring, as a commenter below pointed out. That work has been out for long enough that I'd urge you to cite it, but it was not published and it reports results that are far less impressive than yours, so that omission isn't a major problem.\n\nI'd like to see more discussion of how you performed your evaluation on the downstream tasks. Did you use the SentEval tool from Conneau et al., as several related recent papers have? If not, does your evaluation procedure differ from theirs or Kiros's in any meaningful way?\n\nI'm also a bit uncomfortable that the paper doesn't directly compare with any baselines that use the exact same codebase, word representations, hyperparameter tuning procedure, etc.. I would be more comfortable with the results if, for example, the authors compared a low-dimensional version of their model with a low-dimensional version of SkipThought, trained in the *exact* same way, or if they implemented the core of their model within the SkipThought codebase and showed strong results there.\n\nMinor points:\n\nThe headers in Table 1 don't make it all that clear which additions (vectors, UMBC) are cumulative with what other additions. This should be an easy fix. \n\nThe use of the check-mark as an output in Figure 1 doesn't make much sense, since the task is not binary classification.\n\n\"Instead of training a model to reconstruct the surface form of the input sentence or its neighbors, our formulation attempts to focus on the semantic aspects of sentences. 
The meaning of a sentence is the property that creates bonds between a sequence of sentences and makes it logically flow.\" – It's hard to pin down exactly what this means, but it sounds like you're making an empirical claim here: semantic information is more important than non-semantic sources of variation (syntactic/lexical/morphological factors) in predicting the flow of a text. Provide some evidence for this, or cut it.\n\nYou make a similar claim later in the same section: \"In figure 1(a) however, the reconstruction loss forces the model to predict local structural information about target sentences that may be irrelevant to its meaning (e.g., is governed by grammar rules).\" This is a testable prediction: Are purely grammatical (non-semantic) variations in sentence form helpful for your task? I'd suspect that they are, at least in some cases, as they might give you clues as to style, dialect, or framing choices that the author made when writing that specific passage.\n\n\"Our best BookCorpus model (MC-QT) trains in just under 11hrs, compared to skip-thought model’s training time of 2 weeks.\" – If you say this, you need to offer evidence that your model is faster. If you don't use the same hardware and low-level software (i.e., CuDNN), this comparison tells us nearly nothing. The small-scale replication of SkipThought described above should address this issue, if performed.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Intuitive model for sentence representations with good performance",
"paper_summary": null,
"main_review": "main_review: This paper proposes a framework for unsupervised learning of sentence representations by maximizing a model of the probability of true context sentences relative to random candidate sentences. Unique aspects of this skip-gram style model include separate target- and context-sentence encoders, as well as a dot-product similarity measure between representations. A battery of experiments indicate that the learned representations have comparable or better performance compared to other, more computationally-intensive models.\n\nWhile the main constituent ideas of this paper are not entirely novel, I think the specific combination of tools has not been explored previously. As such, the novelty of this paper rests in the specific modeling choices and the significance hinges on the good empirical results. For this reason, I believe it is important that additional details regarding the specific architecture and training details be included in the paper. For example, how many layers is the GRU? What type of parameter initialization is used? Releasing source code would help answer these and other questions, but including more details in the paper itself would also be welcome.\n\nRegarding the empirical results, the method does appear to achieve good performance, especially given the compute time. However, the balance between performance and computational complexity is not investigated, and I think such an analysis would add significant value to the paper. For example, I see at least three ways in which performance could be improved at the expense of additional computation: 1) increasing the candidate pool size 2) increasing the corpus size and 3) increasing the embedding size / increasing the encoder capacity. Does the good performance/efficiency reported in the paper depend on achieving a sweet spot among those three hyperparameters?\n\nOverall, the novelty of this paper is fairly low and there is still substantial room for improvement in some of the analysis. On the other hand, I think this paper proposes an intuitive model and demonstrates good performance. I am on the fence, but ultimately I vote to accept this paper for publication.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.7777777910232544,
0.5555555820465088
],
"confidence": [
1,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Rebuttal",
"Siamese CBOW",
"Related paper",
"Objective function",
"STS14 evaluation"
],
"comment": [
"We thank the reviewers for the helpful comments.\n\nR1, R3: Skip-thoughts training time\nWe agree that training the model could be faster with current hardware and software libraries. A more recent implementation of the skip-thoughts model was released by Google early this year [1]. This implementation mentions that the model takes 9 days to train on a GTX 1080 GPU. Training our proposed models on a GTX 1080 takes 11 hours. Both implementations are based on Tensorflow. Our experiment used cuda 8.0 and cuDNN 6.0 libraries. This also agrees with the numbers in the paper which were based on experiments using GTX TITAN X.\n\nR1, R3: Training speed comparison\nWe performed a comparison on the training efficiency of lower-dimensional versions of our model and the skip-thoughts model. The same encoder architecture was trained in identical conditions using our objective and the skip-thoughts objectives and models were evaluated on downstream tasks after a given number of hours. Experimental results are reported in section C of the appendix. The training efficiency of our model compared to the skip-thoughts model is clear from these experiments.\n\nR1: BoW encoders, sentence similarity evaluations\nWe train BoW encoders using our training objective and evaluate them on textual similarity tasks. Experiments and results are discussed in section B of the appendix. Our RNN-based encoder performs strongly against prior sequence models. Our BoW encoder performs comparably to (or slightly better than) popular BoW representations as well.\n\nR2: Balance between performance and computational complexity\n1) Increasing the candidate pool size - We found that RNN encoders are less sensitive to increasing the candidate pool size. Sentences appearing in the context of a given query sentence are natural candidates for the contrastive sentences since they are more likely to be related to the query sentence, and hence make the prediction problem challenging. We observed marginal performance improvements as we added more random choices to the candidate pool.\n2) Increasing corpus size - We have experiments in the paper with increased corpus size. We considered the UMBC corpus (which is about 3 times the size of BookCorpus) and show that augmenting the BookCorpus dataset enables us to obtain monotonic improvements on the downstream tasks.\n3) Increasing embedding size - We have included experiments on varying the embedding size in section D of the supplementary material. We are able to train bigger and better models at the expense of more training time. The smaller models can be trained more efficiently while still being competitive or better than state-of-the-art higher-dimensional models. \nWe also plan to release pre-trained models for different representation sizes so that other researchers/practitioners can use the appropriate size depending on the downstream task and the amount of labelled data available.\n\nR1: Point of learning sentence representations\nIn the vision community it has become common practice to use CNN features (e.g., AlexNet, VGGNet, ResNet, etc.) pre-trained from the large-scale imagenet database for a variety of downstream tasks (e.g., the image-caption experiment in our paper uses pre-trained CNN features as the image embedding). Our overarching goal is to learn analogous high-quality sentence representations in the text domain. The representations can be used as feature vectors for downstream tasks, as we do in the paper. 
The encoders can also be used for parameter initialization and fine-tuned on data relevant to a particular application. In this respect, we believe that exploring scalable unsupervised learning algorithms for learning ‘universal’ text representations is an important research problem.\n\nR1: Image-caption retrieval experiments\nWe have updated the description of the image-caption retrieval experiments. We hope the description is more clear now and provides better motivation for the task.\n\nR1: Nearest neighbors\nAs we discuss in the paper, the query sentences used for the nearest neighbor experiment were chosen randomly and not cherry picked. We hope the cosine similarity experiments quantify the nearest neighbor analysis.\n\nR2: We have added more details about the architecture and training to the paper (sec 4.3).\n\nWe will release the source code upon publication.\n\nR3: Evaluation\nThe evaluation on downstream tasks was performed using the evaluation scripts from Kiros et al. since most of the unsupervised methods we compare against were published either before (Kiros et al., Hill et al.) or about the same time (Gan et al.) the SentEval tool was released.\n\nR1, R2, R3:\nWe have updated the paper to reflect your comments and concerns. Modifications are highlighted in blue (Omissions not shown). We have added relevant citations pointed out by reviewers and public comments.\n\nReferences\n[1] https://github.com/tensorflow/models/tree/master/research/skip_thoughts",
"Sure, Thanks. \n\nIn this paper a conceptually similar task of identifying context sentences from candidate sentences based on their bag-of-words representations is considered. Our approach is more general than this work in the following ways\n* Our formulation considers more general scoring functions/classifiers. We found inner products to work best. Using cosine distance as is done in this work led to inferior representations. Cosine distance implicitly requires sentence representations to both lie on the unit ball and be similar (in terms of inner product) to context sentences, which can be a strong constraint. The inner products scoring function only requires the latter. \n* This work uses the same set of parameters to encode both input and context sentences, while we consider using different sets of parameters. This helped learn better representations. We briefly discuss this choice in section 3.\n* Our formulation also allows the use of more general encoder architectures.\n\nAlso, we discuss more recent bag-of-words methods in the paper. \n",
"Thank you for your comment. We will include the paper in a revised version. Please see our response to the previous comment regarding the same paper. ",
"Thank you for your comments. We will include these literature in a revised version of the paper. \n\nDespite similarities in the objective functions, we would like to point out the following key distinctions.\n\nJernite et al. propose to use paragraph level coherence as a learning signal. The following related task is considered in their paper. Given the first three sentences of a paragraph, they choose the next sentence from five candidate sentences later in the paragraph (Paragraphs of length at least 8 are considered). \nOur objective differs from theirs in the following aspects.\n* This work exploits paragraph level coherence signals for learning, while our work derives motivation from the distributional hypothesis. We don’t restrict ourselves to paragraphs in the data as is done in this work. \n* We consider a large number of candidate sentence choices when predicting a context sentence. This is a discriminative approximation to the generation objective (viewing generation as choosing a sentence from all possible sentences)\n* We use a single input sentence and predict the context sentences surrounding it. Using larger input contexts did not yield any significant empirical benefits.\nOur objective further learns richer representations compared to this work, as evidenced by empirical results. \n\nThe local coherence model of Li & Hovy is a feed-forward network which examines a window of sentence embeddings and classifies them as coherent/incoherent (binary classification). We have some discussion about this objective in the paper (section 3). We point out the following key differences between our objective and theirs. \n* Instead of discriminating context windows as plausible/implausible, we encourage observed contexts (in the data) to be more plausible than contrastive (implausible) ones and formulate it as a multi-class classification problem. We experimentally found that this relaxed constraint helps learn better representations.\n* We use a simple scoring function (inner products) in our objective. When using a parameterized classifier, the model has a tendency to learn poor sentence representations and compensate for it using a strong classifier. This is undesirable since the classifier is discarded and only the sentence encoders are used for feature extraction.\n\nHence, Li & Hovy’s objective is better suited for local coherence modeling than it is for learning sentence representations.\n",
"Thank you for your comment.\n\nWe have included an evaluation of our models on the STS14 task in Appendix C of the supplementary material. \n\nWe evaluate RNN-based and Bag-of-words encoders trained using our objective on this task. Our RNN-based encoder performs strongly compared to previous sequence encoders. Bag-of-words models are known to perform strongly in this task as they are better able to encode word identity information. Our BoW variation performs comparably (or slightly better) than prior BoW based models such as FastSent and Siamese CBOW."
]
} | {
"paperhash": [
"guu|generating_sentences_by_editing_prototypes",
"wieting|learning_paraphrastic_sentence_embeddings_from_back-translated_bitext",
"pathak|curiosity-driven_exploration_by_self-supervised_prediction",
"conneau|supervised_learning_of_universal_sentence_representations_from_natural_language_inference_data",
"wieting|revisiting_recurrent_networks_for_paraphrastic_sentence_embeddings",
"arora|a_simple_but_tough-to-beat_baseline_for_sentence_embeddings",
"jernite|discourse-based_objectives_for_fast_unsupervised_sentence_representation_learning",
"gan|learning_generic_sentence_representations_using_convolutional_neural_networks",
"gan|unsupervised_learning_of_sentence_representations_using_convolutional_neural_networks",
"adi|fine-grained_analysis_of_sentence_embeddings_using_auxiliary_prediction_tasks",
"kenter|siamese_cbow:_optimizing_word_embeddings_for_sentence_representations",
"pathak|context_encoders:_feature_learning_by_inpainting",
"noroozi|unsupervised_learning_of_visual_representations_by_solving_jigsaw_puzzles",
"hill|learning_distributed_representations_of_sentences_from_unlabelled_data",
"larsen|autoencoding_beyond_pixels_using_a_learned_similarity_metric",
"wieting|towards_universal_paraphrastic_sentence_embeddings",
"vendrov|order-embeddings_of_images_and_language",
"bowman|generating_sentences_from_a_continuous_space",
"sennrich|neural_machine_translation_of_rare_words_with_subword_units",
"kiros|skip-thought_vectors",
"klein|associating_neural_word_embeddings_with_deep_image_representations_using_fisher_vectors",
"doersch|unsupervised_visual_representation_learning_by_context_prediction",
"zhao|self-adaptive_hierarchical_sentence_model",
"tai|improved_semantic_representations_from_tree-structured_long_short-term_memory_networks",
"chung|gated_feedback_recurrent_neural_networks",
"mao|deep_captioning_with_multimodal_recurrent_neural_networks_(m-rnn)",
"karpathy|deep_visual-semantic_alignments_for_generating_image_descriptions",
"jean|on_using_very_large_target_vocabulary_for_neural_machine_translation",
"pennington|glove:_global_vectors_for_word_representation",
"li|a_model_of_coherence_based_on_distributed_sentence_representation",
"kim|convolutional_neural_networks_for_sentence_classification",
"agirre|semeval-2014_task_10:_multilingual_semantic_textual_similarity",
"le|distributed_representations_of_sentences_and_documents",
"marelli|a_sick_cure_for_the_evaluation_of_compositional_distributional_semantic_models",
"lin|microsoft_coco:_common_objects_in_context",
"hermann|multilingual_distributed_representations_without_word_alignment",
"mikolov|distributed_representations_of_words_and_phrases_and_their_compositionality",
"ji|discriminative_improvements_to_distributional_sentence_similarity",
"socher|recursive_deep_models_for_semantic_compositionality_over_a_sentiment_treebank",
"han|umbc_ebiquity-core:_semantic_textual_similarity_systems",
"mikolov|linguistic_regularities_in_continuous_space_word_representations",
"mikolov|efficient_estimation_of_word_representations_in_vector_space",
"socher|dynamic_pooling_and_unfolding_recursive_autoencoders_for_paraphrase_detection",
"mnih|a_scalable_hierarchical_distributed_language_model",
"pang|seeing_stars:_exploiting_class_relationships_for_sentiment_categorization_with_respect_to_rating_scales",
"dolan|unsupervised_construction_of_large_paraphrase_corpora:_exploiting_massively_parallel_news_sources",
"hu|mining_and_summarizing_customer_reviews",
"pang|a_sentimental_education:_sentiment_analysis_using_subjectivity_summarization_based_on_minimum_cuts",
"voorhees|overview_of_the_trec_2003_question_answering_track",
"wiebe|annotating_expressions_of_opinions_and_emotions_in_language"
],
"title": [
"Generating Sentences by Editing Prototypes",
"Learning Paraphrastic Sentence Embeddings from Back-Translated Bitext",
"Curiosity-Driven Exploration by Self-Supervised Prediction",
"Supervised Learning of Universal Sentence Representations from Natural Language Inference Data",
"Revisiting Recurrent Networks for Paraphrastic Sentence Embeddings",
"A Simple but Tough-to-Beat Baseline for Sentence Embeddings",
"Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning",
"Learning Generic Sentence Representations Using Convolutional Neural Networks",
"Unsupervised Learning of Sentence Representations using Convolutional Neural Networks",
"Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks",
"Siamese CBOW: Optimizing Word Embeddings for Sentence Representations",
"Context Encoders: Feature Learning by Inpainting",
"Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles",
"Learning Distributed Representations of Sentences from Unlabelled Data",
"Autoencoding beyond pixels using a learned similarity metric",
"Towards Universal Paraphrastic Sentence Embeddings",
"Order-Embeddings of Images and Language",
"Generating Sentences from a Continuous Space",
"Neural Machine Translation of Rare Words with Subword Units",
"Skip-Thought Vectors",
"Associating neural word embeddings with deep image representations using Fisher Vectors",
"Unsupervised Visual Representation Learning by Context Prediction",
"Self-Adaptive Hierarchical Sentence Model",
"Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks",
"Gated Feedback Recurrent Neural Networks",
"Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN)",
"Deep visual-semantic alignments for generating image descriptions",
"On Using Very Large Target Vocabulary for Neural Machine Translation",
"GloVe: Global Vectors for Word Representation",
"A Model of Coherence Based on Distributed Sentence Representation",
"Convolutional Neural Networks for Sentence Classification",
"SemEval-2014 Task 10: Multilingual Semantic Textual Similarity",
"Distributed Representations of Sentences and Documents",
"A SICK cure for the evaluation of compositional distributional semantic models",
"Microsoft COCO: Common Objects in Context",
"Multilingual Distributed Representations without Word Alignment",
"Distributed Representations of Words and Phrases and their Compositionality",
"Discriminative Improvements to Distributional Sentence Similarity",
"Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank",
"UMBC_EBIQUITY-CORE: Semantic Textual Similarity Systems",
"Linguistic Regularities in Continuous Space Word Representations",
"Efficient Estimation of Word Representations in Vector Space",
"Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection",
"A Scalable Hierarchical Distributed Language Model",
"Seeing Stars: Exploiting Class Relationships for Sentiment Categorization with Respect to Rating Scales",
"Unsupervised Construction of Large Paraphrase Corpora: Exploiting Massively Parallel News Sources",
"Mining and summarizing customer reviews",
"A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts",
"Overview of the TREC 2003 Question Answering Track",
"Annotating Expressions of Opinions and Emotions in Language"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"Kelvin Guu",
"Tatsunori B. Hashimoto",
"Yonatan Oren",
"Percy Liang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Wieting",
"Jonathan Mallinson",
"Kevin Gimpel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Deepak Pathak",
"Pulkit Agrawal",
"Alexei A. Efros",
"Trevor Darrell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Alexis Conneau",
"Douwe Kiela",
"Holger Schwenk",
"Loïc Barrault",
"Antoine Bordes"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Wieting",
"Kevin Gimpel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sanjeev Arora",
"Yingyu Liang",
"Tengyu Ma"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yacine Jernite",
"Samuel R. Bowman",
"D. Sontag"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Zhe Gan",
"Yunchen Pu",
"Ricardo Henao",
"Chunyuan Li",
"Xiaodong He",
"L. Carin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Zhe Gan",
"Yunchen Pu",
"Ricardo Henao",
"Chunyuan Li",
"Xiaodong He",
"L. Carin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yossi Adi",
"Einat Kermany",
"Yonatan Belinkov",
"Ofer Lavi",
"Yoav Goldberg"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Tom Kenter",
"Alexey Borisov",
"M. de Rijke"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Deepak Pathak",
"Philipp Krähenbühl",
"Jeff Donahue",
"Trevor Darrell",
"Alexei A. Efros"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Noroozi",
"P. Favaro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Felix Hill",
"Kyunghyun Cho",
"A. Korhonen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Anders Boesen Lindbo Larsen",
"Søren Kaae Sønderby",
"H. Larochelle",
"O. Winther"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Wieting",
"Mohit Bansal",
"Kevin Gimpel",
"Karen Livescu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ivan Vendrov",
"Ryan Kiros",
"S. Fidler",
"R. Urtasun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Samuel R. Bowman",
"L. Vilnis",
"O. Vinyals",
"Andrew M. Dai",
"R. Józefowicz",
"Samy Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Rico Sennrich",
"B. Haddow",
"Alexandra Birch"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ryan Kiros",
"Yukun Zhu",
"R. Salakhutdinov",
"R. Zemel",
"R. Urtasun",
"A. Torralba",
"S. Fidler"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Benjamin Klein",
"Guy Lev",
"Gil Sadeh",
"Lior Wolf"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Carl Doersch",
"A. Gupta",
"Alexei A. Efros"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"H. Zhao",
"Zhengdong Lu",
"P. Poupart"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kai Sheng Tai",
"R. Socher",
"Christopher D. Manning"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Junyoung Chung",
"Çaglar Gülçehre",
"Kyunghyun Cho",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Junhua Mao",
"W. Xu",
"Yi Yang",
"Jiang Wang",
"A. Yuille"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Karpathy",
"Li Fei-Fei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sébastien Jean",
"Kyunghyun Cho",
"R. Memisevic",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jeffrey Pennington",
"R. Socher",
"Christopher D. Manning"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jiwei Li",
"E. Hovy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yoon Kim"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Eneko Agirre",
"Carmen Banea",
"Claire Cardie",
"Daniel Matthew Cer",
"Mona T. Diab",
"Aitor Gonzalez-Agirre",
"Weiwei Guo",
"Rada Mihalcea",
"German Rigau",
"Janyce Wiebe"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Quoc V. Le",
"Tomas Mikolov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Marelli",
"S. Menini",
"Marco Baroni",
"L. Bentivogli",
"R. Bernardi",
"Roberto Zamparelli"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Tsung-Yi Lin",
"M. Maire",
"Serge J. Belongie",
"James Hays",
"P. Perona",
"Deva Ramanan",
"Piotr Dollár",
"C. L. Zitnick"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"K. Hermann",
"Phil Blunsom"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Tomas Mikolov",
"I. Sutskever",
"Kai Chen",
"G. Corrado",
"J. Dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yangfeng Ji",
"Jacob Eisenstein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Socher",
"Alex Perelygin",
"Jean Wu",
"Jason Chuang",
"Christopher D. Manning",
"A. Ng",
"Christopher Potts"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Lushan Han",
"Abhay Lokesh Kashyap",
"Timothy W. Finin",
"J. Mayfield",
"Jonathan Weese"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Tomas Mikolov",
"Wen-tau Yih",
"G. Zweig"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Tomas Mikolov",
"Kai Chen",
"G. Corrado",
"J. Dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Socher",
"E. Huang",
"Jeffrey Pennington",
"A. Ng",
"Christopher D. Manning"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Mnih",
"Geoffrey E. Hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. Pang",
"Lillian Lee"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"W. Dolan",
"Chris Quirk",
"Chris Brockett"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Minqing Hu",
"Bing Liu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. Pang",
"Lillian Lee"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"E. Voorhees"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Janyce Wiebe",
"Theresa Wilson",
"Claire Cardie"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1709.08878v2",
"1706.01847",
"1705.05363",
"1705.02364v5",
"1705.00364v1",
"",
"1705.00557",
"1611.07897v2",
"",
"1608.04207",
"1606.04640v1",
"1604.07379v2",
"1603.09246v3",
"1602.03483v1",
"1512.09300v2",
"1511.08198v3",
"1511.06361",
"1511.06349v4",
"1508.07909v5",
"1506.06726",
"",
"1505.05192v3",
"1504.05070",
"1503.00075",
"1502.02367v4",
"1412.6632v5",
"1412.2306",
"1412.2007v2",
"",
"",
"1408.5882v2",
"",
"1405.4053v2",
"",
"1405.0312v3",
"1312.6173v4",
"1310.4546v1",
"",
"",
"",
"",
"1301.3781v3",
"",
"",
"cs/0506075v1",
"",
"",
"cs/0409058v1",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
[],
[],
[
"background"
],
[
"background",
"methodology"
],
[
"background"
],
[
"background",
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[
"background",
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"methodology"
],
[
"background",
"methodology"
],
[
"background"
],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"background",
"methodology"
],
[
"methodology"
],
[
"background",
"methodology"
],
[],
[
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[],
[
"methodology"
],
[
"methodology"
],
[],
[
"methodology"
],
[
"background"
],
[
"background",
"methodology"
],
[],
[
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"methodology"
],
[],
[
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"methodology"
],
[],
[
"methodology"
],
[],
[],
[]
],
"isInfluential": [
false,
false,
false,
true,
false,
true,
false,
true,
true,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true
]
} | null | 84 | 5.833333 | 0.62963 | 0.833333 | null | null | null | null | null | rJvJXZb0W |
|
triastcyn|generating_differentially_private_datasets_using_gans|ICLR_cc_2018_Conference | Generating Differentially Private Datasets Using GANs | In this paper, we present a technique for generating artificial datasets that retain statistical properties of the real data while providing differential privacy guarantees with respect to this data. We include a Gaussian noise layer in the discriminator of a generative adversarial network to make the output and the gradients differentially private with respect to the training data, and then use the generator component to synthesise privacy-preserving artificial dataset. Our experiments show that under a reasonably small privacy budget we are able to generate data of high quality and successfully train machine learning models on this artificial data. | {
"name": [],
"affiliation": []
} | Train GANs with differential privacy to generate artificial privacy-preserving datasets. | [
"generative adversarial networks",
"differential privacy",
"synthetic data"
] | null | 2018-02-15 22:29:29 | 32 | null | null | null | null | null | null | null | null | false | This paper presents an interesting idea: employ GANs in a manner that guarantees the generation of differentially private data.
The reviewers liked the motivation but identified various issues. Also, the authors themselves discovered some problems in their formulation; on behalf of the community, thanks for letting the readers know.
The discovered issues will need to be reviewed in a future submission. | {
"review_id": [
"H1Ae8Z5eM",
"ByaqVKoxz",
"B1p11ROxz"
],
"review": [
{
"title": "title: May need more details for privacy analysis",
"paper_summary": null,
"main_review": "main_review: The paper proposes a technique for differentially privately generating synthetic data using GAN, and experimentally showed that their method achieves both high utility and good privacy.\nThe idea of building a differentially private GAN and generating differentially private synthetic data is very interesting. However, my main concern is the privacy aspect of the technique, as it is not explained clearly enough in the paper. There is also room for improvement in the presentation and clarity of the paper.\n\nMore details:\n- About the differential privacy aspect:\n The author didn't provide detailed privacy analysis of the Gaussian noise layer, and I don't find the values of the sensitivity (C = 1) provided in the answer to a public comment easy to see. Also, the paper mentioned that the batch size is 32 and the author mentioned in the comment that the std of the Gaussian noise is 0.7, and the number of epoch is 50 or 150. I think these values would lead to epsilon much larger than 8 (as in Table 1). However, in Section 5.2, it is said that \"Privacy bounds were evaluated using the moments accountant and the privacy amplification theorem (Abadi et al., 2016), and therefore, are data-dependent and are tighter than using normal composition theorems.\" I don't see clearly why privacy amplification is needed here, and why using moments accountant and privacy amplification can lead to data-dependent privacy loss.\n In general, I don't find the privacy analysis of this paper clear and detailed enough to convince me about the correctness of the privacy results. However, I am very happy to change my opinion if there are convincing details in the rebuttal.\n\n- About the presentation:\n As a paper proposing a differentially private algorithm, detailed and formal analysis of the privacy guarantees is essential to convince the readers. For example, I think it would be much better if there is a formal theorem showing the sensitivity of the Gaussian noise layer. And it would be better to restate (in Appendix 7.4) not only the definition of moments accountant, but the composition and tail bound, as well as the moments accountant for the Gaussian mechanism, since they are all used in the privacy analysis of this paper.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Ok, but not good enough",
"paper_summary": null,
"main_review": "main_review: This paper considers the problem of generating differentially private datasets using GANs. To the best of my knowledge this is the first paper to study differential privacy for GANs.\n\nThe paper is fairly well-written but has several major weaknesses:\n-- Privacy parameter eps = 8 used in the experiments implies that the likelihood of any event can change by e^8 which is roughly 3000, which is an unacceptably high privacy loss. Moreover, even for this high privacy loss the accuracy on the SVHN dataset seems to drop a lot (92% down to 83%) when proposed mechanism is used.\n-- I didn't find a formal proof of the privacy guarantee in the paper. The authors say that the privacy guarantee is based on the moments accountant method, but I couldn't find the proof anywhere. The method itself is introduced in Section 7.4 but isn't used for the proof. Thus the paper seems to be incomplete.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: I like the paper because of its central idea, and the importance of the problem. However, I am not confident about the technical novelty in the algorithmic components.",
"paper_summary": null,
"main_review": "main_review: Summary: The paper addresses the problem of non-interactive differentially private mechanism via adversarial networks. Non-interactive mechanisms have been one of the most sought-after approaches in differentially private algorithm design. The reason is that once a differentially private data set is released, it can be used in any way to answer queries / perform learning tasks without worrying about the privacy budget. However, designing effective non-interactive mechanisms are notoriously hard because of strong computational lower bounds. In that respect, the problem addressed in this paper is extremely important, and the approach of using an adversarial network for the task is very natural (yet novel).\n\nThe main idea in the paper is to set up a usual adversarial framework with the generator and the discriminator, where the discriminator has access to the raw data. The information (in the form of gradients) is passed from the discriminator on to the generator via a differentially private channel (using Gaussian mechanism).\n\nPositive aspects of the paper: One main positive aspect of the paper is that it comes up with a very simple yet effective approach for a non-interactive mechanism for differential privacy. Another positive aspect of the paper is that it is very well-written and is easy to follow.\n\nQuestions: I have a few questions about the paper.\n\n1. The technical novelty of the paper is not that high. Given the main idea of using a GAN, the algorithms and the experiments are fairly straightforward. I may be missing something. I believe the paper can be strengthened by placing more emphasis on the technical content.\n\n2. I am mildly concerned about the effectiveness of the algorithm in the high dimensional setting. The norm of i.i.d. Gaussian noise scales roughly as \\sqrt{dimensions}, which may be too much to tolerate in most settings.\n\n3. I was wondering if there is a way to incorporate assumptions about sparsity in the original data set, to handle curse of dimensionality.\n\n4. I am not sure about the novelty of Theorem 2. Isn't it just post-processing property of differential privacy?",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.3333333432674408,
0.5555555820465088
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Privacy parameters",
"Dimension of the data",
"Dimensionality problem"
],
"comment": [
"Thank you for your question. We use the following parameter values in our experiments:\n1). C = 1, in all of the experiments.\n2). Number of training epochs for GAN is 150 for SVHN and 50 for MNIST. Note that we also use unrolling of the discriminator for 4 steps (reduced to 3 steps after 120 epochs) in generator updates to avoid mode collapse.\n3). Standard deviation of the Gaussian noise is generally set to 0.7. On SVHN, it is increased to 0.8 after 120 epochs to meet a tighter privacy bound.",
"Thank you for your questions.\nTo answer the first question, adding noise in each iteration is not a problem, as it does not introduce bias over time. The dimensionality of data would not be an issue either, because noise is added in an embedding space (and not in the original feature space) for each dimension independently, making the method agnostic to the dimensionality of original data. As reported in the paper, we have done experiments with the SVHN dataset, which has 3072-dimensional input vectors.\nMoving on to your second question. Thank you for drawing our attention to this paper. We were not aware of it and missed it when studying the related work.\nWhile the main ideas are indeed similar, there is a major difference in our method: the way of preserving privacy in GAN training. On the initial stages of our work, we explored the possibility of using differentially private SGD, but we found that achieving reasonable privacy bounds requires adding too much noise to gradients and makes GAN training much harder than it already is. The aforementioned paper confirms our findings by showing that the noise quickly overpowers the gradient (Fig. 1(e)) and that using the GAN after the final epoch is not sufficient for obtaining realistic data (Fig. 2(a)). Instead, we propose adding noise in the forward pass, which improves convergence properties and generated data quality.\nThis difference leads to a number of advantages. Most importantly, our technique does not require additional procedures for picking specific generator epochs or modifying optimisation methods. Moreover, it can be implemented by simply adding a noise layer to the discriminator, and we formally show that this is sufficient for achieving differential privacy.",
"\nDear readers,\n\nWe have discovered a problem related to dimensionality that invalidates privacy guarantees stated in the paper. We are currently working on solving the issue."
]
} | {
"paperhash": [
"abadi|deep_learning_with_differential_privacy",
"bindschaedler|plausible_deniability_for_privacy-preserving_data_synthesis",
"dwork|33rd_international_colloquium_on_automata,_languages_and_programming,_part_ii_(icalp_2006)",
"dwork|differential_privacy:_a_survey_of_results",
"dwork|differential_privacy_and_robust_statistics",
"dwork|our_data,_ourselves:_privacy_via_distributed_noise_generation",
"dwork|boosting_and_differential_privacy",
"dwork|the_algorithmic_foundations_of_differential_privacy",
"fredrikson|model_inversion_attacks_that_exploit_confidence_information_and_basic_countermeasures",
"goodfellow|generative_adversarial_nets",
"gregor|draw:_a_recurrent_neural_network_for_image_generation",
"hamm|learning_privately_from_multiparty_data",
"he|delving_deep_into_rectifiers:_surpassing_human-level_performance_on_imagenet_classification",
"kairouz|the_composition_theorem_for_differential_privacy",
"kingma|adam:_a_method_for_stochastic_optimization",
"diederik|auto-encoding_variational_bayes",
"lecun|gradient-based_learning_applied_to_document_recognition",
"li|t-closeness:_privacy_beyond_kanonymity_and_l-diversity",
"machanavajjhala|johannes_gehrke,_and_muthuramakrishnan_venkitasubramaniam._l-diversity:_privacy_beyond_k-anonymity",
"metz|unrolled_generative_adversarial_networks",
"netzer|reading_digits_in_natural_images_with_unsupervised_feature_learning",
"odena|semi-supervised_learning_with_generative_adversarial_networks",
"oord|pixel_recurrent_neural_networks",
"papernot|semisupervised_knowledge_transfer_for_deep_learning_from_private_training_data",
"radford|unsupervised_representation_learning_with_deep_convolutional_generative_adversarial_networks",
"rezende|stochastic_backpropagation_and_approximate_inference_in_deep_generative_models",
"salimans|improved_techniques_for_training_gans",
"shokri|privacy-preserving_deep_learning",
"silver|mastering_the_game_of_go_with_deep_neural_networks_and_tree_search",
"sweeney|a_model_for_protecting_privacy",
"wu|google's_neural_machine_translation_system:_bridging_the_gap_between_human_and_machine_translation",
"zhao|energy-based_generative_adversarial_network"
],
"title": [
"Deep learning with differential privacy",
"Plausible deniability for privacy-preserving data synthesis",
"33rd International Colloquium on Automata, Languages and Programming, part II (ICALP 2006)",
"Differential privacy: A survey of results",
"Differential privacy and robust statistics",
"Our data, ourselves: Privacy via distributed noise generation",
"Boosting and differential privacy",
"The algorithmic foundations of differential privacy",
"Model inversion attacks that exploit confidence information and basic countermeasures",
"Generative adversarial nets",
"Draw: A recurrent neural network for image generation",
"Learning privately from multiparty data",
"Delving deep into rectifiers: Surpassing human-level performance on imagenet classification",
"The composition theorem for differential privacy",
"Adam: A method for stochastic optimization",
"Auto-encoding variational bayes",
"Gradient-based learning applied to document recognition",
"t-closeness: Privacy beyond kanonymity and l-diversity",
"Johannes Gehrke, and Muthuramakrishnan Venkitasubramaniam. l-diversity: Privacy beyond k-anonymity",
"Unrolled generative adversarial networks",
"Reading digits in natural images with unsupervised feature learning",
"Semi-supervised learning with generative adversarial networks",
"Pixel recurrent neural networks",
"Semisupervised knowledge transfer for deep learning from private training data",
"Unsupervised representation learning with deep convolutional generative adversarial networks",
"Stochastic backpropagation and approximate inference in deep generative models",
"Improved techniques for training gans",
"Privacy-preserving deep learning",
"Mastering the game of go with deep neural networks and tree search",
"A model for protecting privacy",
"Google's neural machine translation system: Bridging the gap between human and machine translation",
"Energy-based generative adversarial network"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"martín abadi",
"andy chu",
"ian goodfellow",
"h brendan mcmahan",
"ilya mironov",
"kunal talwar",
"li zhang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"vincent bindschaedler",
"reza shokri",
"carl a gunter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"cynthia dwork"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"cynthia dwork"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"cynthia dwork",
"jing lei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"cynthia dwork",
"krishnaram kenthapadi",
"frank mcsherry",
"ilya mironov",
"moni naor"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"cynthia dwork",
"guy n rothblum",
"salil vadhan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"cynthia dwork",
"aaron roth"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"matt fredrikson",
"somesh jha",
"thomas ristenpart"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ian goodfellow",
"jean pouget-abadie",
"mehdi mirza",
"bing xu",
"david warde-farley",
"sherjil ozair",
"aaron courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"karol gregor",
"ivo danihelka",
"alex graves",
"danilo rezende",
"daan wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jihun hamm",
"yingjun cao",
"mikhail belkin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"peter kairouz",
"sewoong oh",
"pramod viswanath"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik kingma",
"jimmy ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"p diederik",
"max kingma",
" welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yann lecun",
"léon bottou",
"yoshua bengio",
"patrick haffner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ninghui li",
"tiancheng li",
"suresh venkatasubramanian"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ashwin machanavajjhala",
"daniel kifer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"luke metz",
"ben poole",
"david pfau",
"jascha sohl-dickstein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yuval netzer",
"tao wang",
"adam coates",
"alessandro bissacco",
"bo wu",
"andrew y ng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"augustus odena"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"aaron van den oord",
"nal kalchbrenner",
"koray kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nicolas papernot",
"martín abadi",
"úlfar erlingsson",
"ian goodfellow",
"kunal talwar"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alec radford",
"luke metz",
"soumith chintala"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"danilo jimenez rezende",
"shakir mohamed",
"daan wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tim salimans",
"ian goodfellow",
"wojciech zaremba",
"vicki cheung",
"alec radford",
"xi chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"reza shokri",
"vitaly shmatikov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david silver",
"aja huang",
"chris j maddison",
"arthur guez",
"laurent sifre",
"george van den",
"julian driessche",
"ioannis schrittwieser",
"veda antonoglou",
"marc panneershelvam",
" lanctot"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"latanya sweeney",
" k-anonymity"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
      }
]
},
{
"name": [
"yonghui wu",
"mike schuster",
"zhifeng chen",
"v quoc",
"mohammad le",
"wolfgang norouzi",
"maxim macherey",
"yuan krikun",
"qin cao",
"klaus gao",
" macherey"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"junbo zhao",
"michael mathieu",
"yann lecun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1607.00133v2",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"1502.04623v2",
"1602.03552v1",
"",
"1311.0776v4",
"1412.6980v9",
"",
"",
"",
"",
"1611.02163v4",
"",
"arXiv:1606.01583",
"1601.06759v3",
"arXiv:1610.05755",
"1511.06434v2",
"1401.4082v3",
"1606.03498v1",
"",
"",
"",
"1609.08144v2",
"arXiv:1609.03126"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.444444 | 0.75 | null | null | null | null | null | rJv4XWZA- |
||
wang|trl_discriminative_hints_for_scalable_reverse_curriculum_learning|ICLR_cc_2018_Conference | TRL: Discriminative Hints for Scalable Reverse Curriculum Learning | Deep reinforcement learning algorithms have proven successful in a variety of domains. However, tasks with sparse rewards remain challenging when the state space is large. Goal-oriented tasks are among the most typical problems in this domain, where a reward can only be received when the final goal is accomplished. In this work, we propose a potential solution to such problems with the introduction of an experience-based tendency reward mechanism, which provides the agent with additional hints based on a discriminative learning on past experiences during an automated reverse curriculum. This mechanism not only provides dense additional learning signals on what states lead to success, but also allows the agent to retain only this tendency reward instead of the whole histories of experience during multi-phase curriculum learning. We extensively study the advantages of our method on the standard sparse reward domains like Maze and Super Mario Bros and show that our method performs more efficiently and robustly than prior approaches in tasks with long time horizons and large state space. In addition, we demonstrate that using an optional keyframe scheme with very small quantity of key states, our approach can solve difficult robot manipulation challenges directly from perception and sparse rewards. | {
"name": [],
"affiliation": []
} | We propose Tendency RL to efficiently solve goal-oriented tasks with large state space using automated curriculum learning and discriminative shaping reward, which has the potential to tackle robot manipulation tasks with perception. | [
"deep learning",
"deep reinforcement learning",
"robotics",
"perception"
] | null | 2018-02-15 22:29:26 | 25 | null | null | null | null | null | null | null | null | false | The paper proposes an extension to the reverse curriculum RL approach which uses a discriminator to label states as being on a goal trajectory or off the goal trajectory. The paper is well-written, with good empirical results on a number of task domains. However, the method relies on a number of assumptions on the ability of the agent to reset itself and the environment which are unrealistic and limiting, and beg the question as to why use the given method at all if this capability is assumed to exist. Overall, the method lacks significance and quality, and the motivation is not clear enough.
| {
"review_id": [
"r1Kg9atxz",
"BkFL6KCxf",
"B129GzFxf"
],
"review": [
{
"title": "title: Interesting idea, but approach seems limited. ",
"paper_summary": null,
"main_review": "main_review: The authors extend the approach proposed in the \"Reverse Curriculum Learning for Reinforcement Learning\" paper by adding a discriminator that gives a bonus reward to a state based on how likely it thinks the current policy is to reach the goal from said state. The discriminator is a potentially interesting mechanism to approximate multi-step backups in sparse-reward environments. \n\nThe approach of this paper seems severely severely limited by the assumptions made by the authors, mainly assuming a deterministic environment, known goal states and the ability to sample anywhere in the state space. Some of these assumptions may be reasonable in domains such as robotics, but they seem very restrictive in the domains like the games considered in the paper.\n\n\nAdditional Comments:\n\n-The authors demonstrate some benefits of using Tendency rewards, but made little attempt to explain why it leads to accelerated learning. Results are pure performance results.\n\n-The authors should probably structure the tendency reward as potential based instead of using the Gaussian kernel hack they introduce in section 4.2\n\n- Presentation: There are several mistakes and formatting issues in References\n\n- Assumption 2 transformations -> transitions?\n\n-Need to add assumption 3: advance knowledge of goal state\n\n- the use of gamma as a scale factor in equation 2 is confusion, it was already introduced as the discount factor ( which is default notation in RL). It also isn't clear what the notation r_f denotes (is it the same as r^f in appendix?).\n\n-It is nice to see that the authors compare their method with alternative approaches. Unfortunately, the proposed method does not seem to offer many benefits. \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: interesting direction, questions about the proposed approach",
"paper_summary": null,
"main_review": "main_review: The authors present a new method for doing reverse curriculum training for reinforcement learning tasks with deterministic dynamics, a desired goal state at which reward is received, and the ability to teleport to any state. This covers a number of important cases of interest, including all simulated domains, and a number of robotics applications. The training proceeds in phases, where in each phase the initial starting set of states is expanded. The initial set of states used is close to the desired state goal. Each phase is initiated when 80% of the states in the current phase can reach the goal. Once the initial set of start states overlaps with the desired initial set of states for the task, training can terminate. During the training in a single phase, the algorithm uses a shaping reward (the tendency) which is based on a binary classifier that predicts if it will be possible to reach the goal from this state. This reward is combined in a hybrid reward signal. The authors suggest the use of a small number of checkpoints to guide the backwards state expansion to improve the search efficiency. Results are presented on several domains: maze, Super Mario, and Mujoco domains. \n\nThe topic of doing more sample efficient training is important and interesting, and the subset of settings the authors consider is still a good set. \n\nThe paper was clearly written though some details were relegated to the appendix which would’ve been useful to see in the main text.\n\nI’m not yet convinced about this method for the desired setting in terms of significance and quality.\n\nAn alternative to using tendency shaping reward would be (during phase expansion) make the new “goal” states any of the states in the previous phase of initial states P_{i} that did reach the goal. This should greatly reduce the decision making horizon needed in each phase. Since the domain is deterministic, as soon as one can reach one of those states, we have a path to the goal. If we care about the number of steps to reach the goal (vs finding any path), then each of the states in P_{i} for which a successful path can be achieved to the goal can also be labeled by the cost / number of time steps to reach the goal. This should decompose the problem into a series of smaller problems. Perhaps I’m missing something-- could the authors please address this suggestion and/or explain why this wouldn’t be beneficial?\n\nThe authors currently use checkpoints to help guide the search towards the true task desired set of initial states. If those are lacking, it seems like the generation of the new P_{i+1} could be biased towards that desired set of states. One approach could be to randomly roll out from the start state and then bias P_{i+1} towards any states close to states along such trajectories. In general one could imagine a situation in which one both does forward learning/planning from the task start state and backwards learning from the goal state to obtain a significant benefit, similar to ideas that have been used in robot motion planning.\n\nWhy learn from pixels for the robot domains considered? Here it would be nice to compare to some robotics approaches. With the action space of the robot and motion planning, it seems like this problem could be tackled using existing techniques. It is interesting to have a method that can be used with pixels, but in cases where there are other approaches, it would be useful to compare to them.\n\nSmall point\nD.2 Why not compare to GAIL instead? \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Review: Concerns with regard to clarity and real-world applicability",
"paper_summary": null,
"main_review": "main_review: This paper proposes a new method for reverse curriculum generation by gradually reseting the environment in phases and classifying states that tend to lead to success. It additionally proposes a mechanism for learning from human-provided \"key states\".\n\nThe ideas in this paper are quite nice, but the paper has significant issues with regard to clarity and applicability to real-world problems:\nFirst, it is unclear is the proposed method requires access only high-dimensional observations (e.g. images) during training or if it additionally requires low-dimensional states (e.g. sufficient information to reset the environment). In most compelling problems settings where a low-dimensional representation that sufficiently explains the current state of the world is available during training, then it is also likely that one can write down a nicely shaped reward function using that state information during training, in which case, it makes sense to use such a reward function. This paper seems to require access to low-dimensional states, and specifically considers the sparse-reward setting, which seems contrived.\nSecond, the paper states that the assumption \"when resetting, the agent can be reset to any state\" can be satisfied in problems such as real-world robotic manipulation. This is not correct. If the robot could autonomously reset to any state, then we would have largely solved robotic manipulation. Further, it is not always realistic to assume access to low-dimensional state information during training on a real robotic system (e.g. knowing the poses of all of the objects in the world).\nThird, the experiments section lacks crucial information needed to understand the experiments. What is the state, observation, and action space for each problem setting? What is the reward function for each problem setting? What reinforcement learning algorithm is used in combination with the curriculum and tendency rewards? Are the states and actions continuous or discrete? Without this information, it is difficult to judge the merit of the experimental setting.\nFourth, the proposed method seems to lack motivation, making the proposed scheme seem a bit ad hoc. Could each of the components be motivated further through more discussion and/or ablative studies?\nFinally, the main text of the paper is substantially longer than the recommended page limit. It should be shortened by making the writing more concise.\n\nBeyond my feedback on clarity and significance, here are further pieces of feedback with regard to the technical content, experiments, and related work:\nI'm wondering -- can the reward shaping in Equation 2 be made to satisfy the property of not affecting the final policy? (see Ng et al. '09) If so, such a reward shaping would make the method even more appealing.\nHow do the experiments in section 5.4 compare to prior methods and ablations? Without such a comparison, it is impossible to judge the performance of the proposed method and the level of difficulty of these tasks. At the very least, the paper should compare the performance of the proposed method to the performance a random policy.\n\nThe paper is missing some highly relevant references. First, how does the proposed method compare to hindsight experience replay? [1] Second, learning from keyframes (rather than demonstrations) has been explored in the past [1]. It would be preferable to use the standard terminology of \"keyframe\".\n\n[1] Andrychowicz et al. Hindsight Experience Replay. 
2017\n[2] Akgun et al. Keyframe-based Learning from Demonstration. 2012\n\nIn summary, I think this paper has a number of promising ideas and experimental results, but given the significant issues in clarity and significance to real world problems, I don't think that the current version of this paper is suitable for publication in ICLR.\n\nMore minor feedback on clarity and correctness:\n- Abstract: \"Deep RL algorithms have proven successful in a vast variety of domains\" -- This is an overstatement.\n- The introduction should be more clear with regard to the assumptions. In particular, it would be helpful to see discussion of requiring human-provided keyframes. As is, it is unclear what is meant by \"checkpoint scheme\", which is not commonly used terminology.\n- \"This kind of spare reward, goal-oriented tasks are considered the most difficult challenges\" -- This is also an overstatement. Long-horizon tasks and high-dimensional observations are also very difficult. Also, the sentence is not grammatically correct.\n- \"That is, environment\" -> \"That is, the environment\"\n- In the last paragraph of the intro, it would be helpful to more clearly state what the experiments can accomplish. Can they handle raw pixel inputs?\n- \"diverse domains\" -> \"diverse simulated domains\"\n- \"a robotic grasping task\" -> \"a simulated robotic grasping task\"\n- There are a number of issues and errors in citations, e.g. missing the year, including the first name, incorrect reference\n- Assumption 1: \\mathcal{P} has not yet been defined.\n- The last two paragraphs of section 3.2 are very difficult to understand without reading the method yet\n- \"conventional RL solver tend\" -> \"conventional RL tend\", also should mention sparse reward in this sentence.\n- Algorithm 1 and Figure 1 are not referenced in the text anywhere, and should be\n- The text in Figure 1 and Figure 3 is extremely small\n- The text in Figure 3 is extremely small\n\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.4444444477558136,
0.3333333432674408
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to AnonReviewer2",
"Rebuttal",
"Response to rebuttal",
"Rebuttal"
],
"comment": [
"1-Thanks for mentioning that. Actually, this alternative has been carefully considered, and we decided not to use it mainly because this method largely impairs the agent's ability to find new policies. We tested this idea with an experiment setup similar to the one in Appendix E.3 (Fig 10), and found that if we change the goal state to any of the successful states from the previous phase, the agent is highly likely to lose the capability of finding a new shortcut (the fifth graph in Fig 10). The reason is that TRL's reward function is hybrid (tendency + final goal), where the final goal reward is meant to guarantee the agent's motivation for finding new policies. That’s why keeping the final goal state constant in training of each phase makes sense.\n\n2-Thanks for your suggestion. Based on some experiments on this idea, we find that in small state space tasks (e.g. the Maze) this approach can lead to similar performance compared to keyframe scheme (\"checkpoint\" is renamed \"keyframe\"), but it might be impractical in large state space multistage tasks such as “Pick and Place”. Since the shaping of tendency reward hasn’t covered the area close to the start state, exploration beginning from the start state might be biased as well, and the complexity of generating P_{i+1} can be very high. As a matter of fact, several keyframes can already solve this problem well in these domains.\n\n3-We learn from raw pixel perceptions based on the assessment that it is a more general form of environment information and contains more details of the environment than low-dimensional data. Classic approaches, due to hand-designed detectors and grasp policies, cannot be easily generalized to new objects or varying background scenes. Additionally, images are less expensive to acquire and are more practical than precise sensor information. Taking robotic grasping and picking as an example, the location and shape of the object are hard to acquire and define (we cannot mount sensors everywhere), we will have to rely on perceptions (image or video).\n\n4-TRL does NOT fall in the track of imitation learning. The optional keyframes are only used in large-scale experiments like grasping from perception, not in simpler ones like Mario and Maze. By our design, TRL works without any expert policies. The keyframe scheme only helps to shrink search space and does not influence the learned policy. Our experiments show that the agent does not necessarily follow the keyframes (Appendix E.3 Fig 10).",
"1-As is claimed in the paper, our assumption follows [Carlos et al 2017]. For deterministic environments, we found it not necessary since we can change the discriminator to the probability of success between 0-1 and TRL can then handle stochastic as well. We have revised the claim. For the sample-anywhere assumption, in fact, we don’t need to reach everywhere but only start states in the current phase which it has reached during the generation process. We can record those states through low dimensional data (angles of joint etc) easily. In games, actually we find it’s easier than robotics to reset to any state given access to the corresponding API from developers. Given that many game developers are interested in training AI agent automatically for their games, such APIs are usually not hard to acquire.\n\n2-As is explained in the Introduction, we have pointed out the reason why the method in Reverse Curriculum paper is lack of efficiency (Close to the end of the 2nd paragraph). Then we show that with the help of tendency rewards, our model can get rid of the unnecessary time-consuming reviewing process where the agent switches start states between old and new ones to avoid forgetting old policies (End of the 3rd & middle of the 4th paragraph). To prove our idea, we make a comparison in Experiment 5.1 (Fig 3), which shows our advantage in efficiency compared to Reverse Curriculum algorithm. TRL’s main advantage over reverse curriculum is that it no longer requires keeping all starting sets. \n\n3-Thanks for mentioning the potential based reward shaping. However, if we define the shaped reward as r = T(St’) - T(St), although this approach can avoid repeated rewards, it still suffer from the reward sparsity problem, since T(St’_positive) - T(St_positive) and T(St’_negative)- T(St_negative) remain 0 at most time and won’t help the agent learn to tackle these tasks.\n\n4-Thanks! We have added this assumption. This assumption is also listed in [Carlos et al 2017].\n\n5-This gamma is only used for weight balance for two rewards. We are sorry to use a confusing notation. Another notation $\\lambda$ has been used to address the confusion.\n\n6-We explained in Experiment 5.3 that the reward function used in PBRS is well hand-engineered by us. We tried more than 10 different reward functions shaped from demonstration and keep adjusting them to let PBRS solve this task. In our experiments, only 2 of all the reward functions we tried can let PBRS work, the others are not shown on the Fig 6. This approach costs much human elaboration and different maps in the Maze need different reward function. Moreover, in most robotic domains where the reward function cannot be easily shaped by hands, human elaboration will increase to an unpractical level. TRL is able to solve this problem with negligible human elaboration with merely several labeled keyframes (\"checkpoint\" is renamed \"keyframe\"). We also proved TRL’s robustness to keyframes with different quality and scale in Appendix E.3 & E.4 (Fig 10, Fig 11). Although the training efficiency of TRL and PBRS may seem similar in the figure, the human elaboration behind the performance is quite different.",
"> 1a. We respectfully remind that full sentence in our paper is “When resetting, the agent can start from any state s ∈ Pi.” We don’t assume that the agent can reset to any state\n\nThank you for the clarification. However, assuming resets even in Pi is not practical in many robotic manipulation problems, e.g. any problem involving free moving objects such as pushing or pick and place (e.g. when the robot must learn to also move the object back to where it started). \n\n\n> 1b. TRL does need these low-dimensional data to restore visited states during the generation of new phases and doesn’t require these data for real training… Since these low-dimensional data is easy to acquire…\n\nI agree that joint angle and end-effector information is easy to acquire. But in practice, *full* low-dimensional state information is not easy to acquire (i.e. positions of free moving objects) and if you assume access to it during some parts of training, then you might as well use it for all parts of training. For example, imagine you wanted to apply this method to a robot learning pushing an object (a fairly simple task). You would need to put some sort of tracker on the object to get its low-dimensional state. If you need to put a tracker on the object, then you might as well use the tracker during training too.\n\n\nThank you for running the additional experiments. I think that they improve the paper.",
"1-We respectfully remind that full sentence in our paper is “When resetting, the agent can start from any state s ∈ Pi.” We don’t assume that the agent can reset to any state. Actually, we only assume that it can reset to a certain state in each phase where it has reached before. Thanks for mentioning the access to low-dimensional states. TRL does need these low-dimensional data to restore visited states during the generation of new phases and doesn’t require these data for real training. During each generation process, the newly sampled states will be stored in the form of low-dimensional states such as the angle of joints and velocity of motors. Since these low-dimensional data is easy to acquire and only used for resetting the agent, we just summarized it as “a way of adding new states to the new phase”. It seems that there is no need for special emphasize.\n\n2-As is mentioned in the last paragraph of Introduction: “The major contribution of this work is that we present a reliable tendency reinforcement learning method that is capable of training agents to solve large state space tasks with only final reward. ” This is our reward setting and is just the definition of goal-oriented tasks. And the detail of experiments is also shown in Appendix C, where we explain all of the settings. The RL used in all of our experiments is A3C and our action control is discrete.\n\n3-There are three components: (a) Phase administrator (b) Tendency reward (c) Keyframes (\"checkpoint\" is renamed \"keyframe\")\nWe ran rough ablation studies with three different settings of difficulties: \n(i) small state space with only final reward (10*10 Maze with observation 10*10): None of the three components are needed since a traditional RL method can tackle it. \n(ii) medium state space with only final reward (40*40 Maze with observation 9*9, Mario Bros): We can solve it by only using (b) with around 53000 training steps(40*40 Maze). We can also accelerate learning by combine (b) and (a), which will take around 35000 training steps. \n(iii) large state space with only final reward (100*100 Maze with observation 9*9, robotic manipulation from perception(grasping, pick and place)): We use (a), (b)and (c) to solve these problems. If we only use (a) and (b), the generation of each phase might be biased and will fail in multistage tasks. Then we include (c) and test the influence of keyframes with different quality and scale (Appendix E.3 E.4 Fig 10 11). We do not find clear relationship between the number of keyframes and the efficiency of training, but keyframes can indeed help TRL learn well (33000 iterations in Grasping, 99000 iterations in Conveyance challenge).\n\n4-We ran some tests based on [Ng et al 1999] and found that if we structure the tendency reward as potential based, the efficiency will largely decrease. We tested it in 40*40 Maze with observation 9*9. Since the tendency reward then be defined as rT(St’) - T(St), the hybrid reward is still very sparse and the agent takes more than 50000 iterations to complete 60% of the whole task (our method takes around 35000 steps to complete the whole one).\n\n5-Goal-oriented tasks are among the most difficult challenges in RL and traditional methods (e.g. TRPO, AC, PPO) alone are not capable of tackling them. The most recent approach to tackle it is based on intrinsic motivation. We made an experiment comparing TRL with curiosity-driven RL in Appendix E.1.2 (Table 2) and showed TRL’s advantages. 
Other methods mainly focus on tackling this problem with demonstrations, which we also compare TRL with in Experiment 5.3 (Fig 6). The result shows that we only need a small number of keyframes to achieve better results compared to them without much human elaboration or well hand-engineered reward function.\n\n6-Thanks. We have incorporated these two works in discussion."
]
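The rebuttals above contrast TRL's hybrid reward (a dense tendency term plus the sparse final-goal reward, balanced by the $\lambda$ mentioned in point 5) with a potential-based alternative r = T(St') - T(St). The sketch below is only an illustration of that contrast under stated assumptions: T, goal_reached, and lam are hypothetical stand-ins, not the authors' implementation.

```python
# Illustrative sketch only, not the authors' code. T is a hypothetical
# tendency estimator mapping a state to a score in [0, 1]; goal_reached and
# lam (the balancing weight called lambda in the rebuttal) are also stand-ins.

def hybrid_tendency_reward(s_next, T, goal_reached, lam=0.1):
    """Dense tendency shaping plus the sparse final-goal reward."""
    tendency = lam * T(s_next)                    # nonzero on most visited states
    final = 1.0 if goal_reached(s_next) else 0.0  # sparse final-goal signal
    return tendency + final

def potential_based_reward(s, s_next, T, gamma=0.99):
    """Potential-based shaping in the sense of Ng et al. (1999).

    The rebuttal argues this variant stays nearly as sparse as the raw
    reward, because T(s') - T(s) is close to zero on most transitions.
    """
    return gamma * T(s_next) - T(s)
```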
} | {
"paperhash": [
"abbeel|apprenticeship_learning_via_inverse_reinforcement_learning",
"akgun|keyframe-based_learning_from_demonstration",
"dwight|hindsight_experience_replay",
"bellemare|unifying_count-based_exploration_and_intrinsic_motivation",
"brys|reinforcement_learning_from_demonstration_through_shaping",
"florensa|stochastic_neural_networks_for_hierarchical_reinforcement_learning",
"florensa|reverse_curriculum_generation_for_reinforcement_learning",
"han|learning_compound_multi-step_controllers_under_unknown_dynamics",
"harutyunyan|shaping_mario_with_human_advice",
"houthooft|vime:_variational_information_maximizing_exploration",
"kakade|approximately_optimal_approximate_reinforcement_learning",
"karpathy|curriculum_learning_for_motor_skills",
"tejas|hierarchical_deep_reinforcement_learning:_integrating_temporal_abstraction_and_intrinsic_motivation",
"mnih|alex_graves,_ioannis_antonoglou,_daan_wierstra,_and_martin_riedmiller._playing_atari_with_deep_reinforcement_learning",
"mnih|human-level_control_through_deep_reinforcement_learning",
"mnih|asynchronous_methods_for_deep_reinforcement_learning",
"mohamed|variational_information_maximisation_for_intrinsically_motivated_reinforcement_learning",
"andrew|algorithms_for_inverse_reinforcement_learning",
"pathak|curiosity-driven_exploration_by_self-supervised_prediction",
"pinto|robust_adversarial_reinforcement_learning",
"popov|data-efficient_deep_reinforcement_learning_for_dexterous_manipulation",
"schulman|trust_region_policy_optimization",
"stadie|incentivizing_exploration_in_reinforcement_learning_with_deep_predictive_models",
"sukhbaatar|intrinsic_motivation_and_automatic_curricula_via_asymmetric_self-play",
"however|with_additional_keyframes,_this_limitation_could_be_avoided_even_with_irreversible_processes_and_absorbing_sets._the_mechanism_is_described_as_follows:_when_we_sample_states_nearby_a_keyframe_that_is_not_in_any_absorbing_set,_the_sampled_states_might_happen_to_belong_to_some_absorbing_set._although_phase_extensions_are_constrained_within_the_absorbing_sets,_the_generated_phase_might_also_cover_some_states_sampled_nearby_a_keyframe"
],
"title": [
"Apprenticeship learning via inverse reinforcement learning",
"Keyframe-based learning from demonstration",
"Hindsight experience replay",
"Unifying count-based exploration and intrinsic motivation",
"Reinforcement learning from demonstration through shaping",
"Stochastic neural networks for hierarchical reinforcement learning",
"Reverse curriculum generation for reinforcement learning",
"Learning compound multi-step controllers under unknown dynamics",
"Shaping mario with human advice",
"Vime: Variational information maximizing exploration",
"Approximately optimal approximate reinforcement learning",
"Curriculum learning for motor skills",
"Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation",
"Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning",
"Human-level control through deep reinforcement learning",
"Asynchronous methods for deep reinforcement learning",
"Variational information maximisation for intrinsically motivated reinforcement learning",
"Algorithms for inverse reinforcement learning",
"Curiosity-driven exploration by self-supervised prediction",
"Robust adversarial reinforcement learning",
"Data-efficient deep reinforcement learning for dexterous manipulation",
"Trust region policy optimization",
"Incentivizing exploration in reinforcement learning with deep predictive models",
"Intrinsic motivation and automatic curricula via asymmetric self-play",
"with additional keyframes, this limitation could be avoided even with irreversible processes and absorbing sets. The mechanism is described as follows: When we sample states nearby a keyframe that is not in any absorbing set, the sampled states might happen to belong to some absorbing set. Although phase extensions are constrained within the absorbing sets, the generated phase might also cover some states sampled nearby a keyframe"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"pieter abbeel",
"andrew y ng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"maya baris akgun",
"karl cakmak",
"andrea l jiang",
" thomaz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"crow dwight",
"ray andrychowicz",
"marcin "
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"marc bellemare",
"sriram srinivasan",
"georg ostrovski",
"tom schaul",
"david saxton",
"remi munos"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tim brys",
"anna harutyunyan",
"halit bener suay",
"sonia chernova",
"matthew e taylor",
"ann nowé"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"carlos florensa",
"yan duan",
"pieter abbeel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"carlos florensa",
"david held",
"markus wulfmeier",
"pieter abbeel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"weiqiao han",
"sergey levine",
"pieter abbeel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"anna harutyunyan",
"tim brys",
"peter vrancx",
"ann nowé"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rein houthooft",
"xi chen",
"yan duan",
"john schulman",
"filip de turck",
"pieter abbeel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sham kakade",
"john langford"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"andrej karpathy",
" van panne",
" de",
" michiel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"d tejas",
"karthik kulkarni",
"ardavan narasimhan",
"josh saeedi",
" tenenbaum"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"volodymyr mnih",
"koray kavukcuoglu",
"david silver"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"volodymyr mnih",
"koray kavukcuoglu",
"david silver",
"andrei a rusu",
"joel veness",
"marc g bellemare",
"alex graves",
"martin riedmiller",
"andreas k fidjeland",
"georg ostrovski"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"volodymyr mnih",
"adria puigdomenech badia",
"mehdi mirza",
"alex graves",
"timothy lillicrap",
"tim harley",
"david silver",
"koray kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"shakir mohamed",
"danilo jimenez",
"rezende "
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"y andrew",
"stuart j ng",
" russell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"deepak pathak",
"pulkit agrawal",
"alexei a efros",
"trevor darrell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"lerrel pinto",
"james davidson",
"rahul sukthankar",
"abhinav gupta"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ivaylo popov",
"nicolas heess",
"timothy lillicrap",
"roland hafner",
"gabriel barth-maron",
"matej vecerik",
"thomas lampe",
"yuval tassa",
"tom erez",
"martin riedmiller"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"john schulman",
"sergey levine",
"pieter abbeel",
"michael jordan",
"philipp moritz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"bradly c stadie",
"sergey levine",
"pieter abbeel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sainbayar sukhbaatar",
"ilya kostrikov",
"arthur szlam",
"rob fergus"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" however"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"1707.01495v3",
"",
"",
"1704.03012v1",
"1707.05300v3",
"",
"",
"1605.09674v4",
"",
"",
"1604.06057v2",
"arXiv:1312.5602",
"",
"1602.01783v2",
"1509.08731v1",
"",
"",
"1703.02702v1",
"arXiv:1704.03073",
"1502.05477v5",
"1507.00814v3",
"arXiv:1703.05407",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.37037 | 0.75 | null | null | null | null | null | rJssAZ-0- |
||
sagun|empirical_analysis_of_the_hessian_of_overparametrized_neural_networks|ICLR_cc_2018_Conference | Empirical Analysis of the Hessian of Over-Parametrized Neural Networks | We study the properties of common loss surfaces through their Hessian matrix. In particular, in the context of deep learning, we empirically show that the spectrum of the Hessian is composed of two parts: (1) the bulk centered near zero, (2) and outliers away from the bulk. We present numerical evidence and mathematical justifications to the following conjectures laid out by Sagun et. al. (2016): Fixing data, increasing the number of parameters merely scales the bulk of the spectrum; fixing the dimension and changing the data (for instance adding more clusters or making the data less separable) only affects the outliers. We believe that our observations have striking implications for non-convex optimization in high dimensions. First, the *flatness* of such landscapes (which can be measured by the singularity of the Hessian) implies that classical notions of basins of attraction may be quite misleading. And that the discussion of wide/narrow basins may be in need of a new perspective around over-parametrization and redundancy that are able to create *large* connected components at the bottom of the landscape. Second, the dependence of a small number of large eigenvalues to the data distribution can be linked to the spectrum of the covariance matrix of gradients of model outputs. With this in mind, we may reevaluate the connections within the data-architecture-algorithm framework of a model, hoping that it would shed light on the geometry of high-dimensional and non-convex spaces in modern applications. In particular, we present a case that links the two observations: small and large batch gradient descent appear to converge to different basins of attraction but we show that they are in fact connected through their flat region and so belong to the same basin. | {
"name": [],
"affiliation": []
} | The loss surface is *very* degenerate, and there are no barriers between large batch and small batch solutions. | [
"Deep Learning",
"Over-parametrization",
"Hessian",
"Eigenvalues",
"Flat minima",
"Large batch Small batch"
] | null | 2018-02-15 22:29:31 | 27 | null | null | null | null | null | null | null | null | false | Pros:
+ Builds in important ways on the work of Sagun et al., 2016.
Cons:
- The reviewers were very concerned that the assumption in the paper that the second term of Equation (6) is negligible was insufficiently supported, and this concern remained after the discussion and the revision.
- The paper needs to be more precise in its language about the Hessian, particularly in distinguishing between ill conditioning and degeneracy.
- The reviewers did not find the experiment very convincing because it relied on initializing the small-batch optimization from the end point of the large-batch optimization. Again, this concern remained following the discussion and revision.
The area chair agrees with the authors' comments in their OpenReview post of 08 Jan. 2018 "A remark on relative evaluation," and has discounted the reviewers' comments about the relative novelty of the work. It is important not to penalize authors for submitting their papers to conferences with an open review process, especially when that process is still being refined.
However, even discounting the remarks about novelty, there are key issues in the paper that need to be addressed to strengthen it (the 3 "cons" above), so this paper does not quite meet the threshold for ICLR Conference acceptance.
However, because it raises really interesting questions and is likely to provoke useful discussions in the community, it might be a good workshop track paper.
| {
"review_id": [
"ryIbx22yz",
"rJT6jEcgz",
"HkeyY0M-z"
],
"review": [
{
"title": "title: A worthwhile set of experiments, but not entirely convincing, and confusingly presented.",
"paper_summary": null,
"main_review": "main_review: The authors perform a set of experiments in which they inspect the Hessian matrix of the loss of a neural network, and observe that most of the eigenvalues are very close to zero. This is a potentially important observation, and the experiments were well worth performing, but I don't find them fully convincing (partly because I was confused by the presentation).\n\nThey perform four sets of experiments:\n\n1) In section 3.1, they show on simulated data that for data drawn from k clusters, there are roughly k significant eigenvalues in the Hessian of the solution.\n\n2) In section 3.2, they show on MNIST that the solution contains few large eigenvalues, and also that there are negative eigenvalues.\n\n3) In section 3.3, they show (again on MNIST) that at their respective solutions, large batch and small batch methods find solutions with similar numbers of large eigenvalues, but that for the large batch method the magnitudes are larger.\n\n4) In section 4.1, they train (on CIFAR10) using a large batch method, and then transition to a small batch method, and argue that the second solution appears to be better than the first, but that they are a part of the same basin (since linearly while interpolating between them they don't run into any barriers).\n\nI'm not fully convinced by the second and third experiments, partly because I didn't fully understand the plots (more on this below), but also because it isn't clear to me what we should expect from the spectrum of a Hessian, so I don't know whether the observed specra have fewer large eigenvalues, or more large eigenvalues, then would be \"natural\". In other words, there isn't a *baseline*.\n\nFor the fourth experiment, it's unsurprising that the small batch method winds up in a different location in the same basin as the large batch method, since it was initialized to the large batch method's solution (and it doesn't appear to me, in figure 9, that the small batch solution is significantly different).\n\nSection 2.1 is said to contain an argument that the second term of equation 5 can be ignored, but only says that if \\ell' and \\nabla^2 of f are uncorrelated, then it can be ignored. I don't see any reason that these two quantities should be correlated, but this is not an argument that they are uncorrelated. Also, it isn't clear to me where this approximation was used--everywhere? In section 3.2, it sounds as if the exact Hessian is used, and at the end of this section the authors say that figure 6 demonstrates that the effect of this second term is small, but I don't see why this is, and it isn't explained.\n\nMy main complaint is that I had a great deal of difficulty interpreting the plots: it often wasn't clear to me what exactly was being plotted, and most of the language describing them was frustratingly vague. For example, figure 6 is captioned \"left edge of the spectrum, eigenvalues are scaled by their ratio\". The text explains that \"left edge of the spectrum\" means \"small but negative eigenvalues\" (this would be better in the caption), but what are the ratios? Ratio of what to what? 
I think it would greatly enhance clarity if every plot caption described exactly, and unambiguously, what quantities were plotted on the horizontal and vertical axes.\n\nSome minor notes:\n\nThere are a number of places where \"it's\" is used, where it should be \"its\".\n\nIn the introduction, the definition of \\mathcal{L}' is slightly confusing, since it's an expectation, but the use of \"'\" makes one expect a derivative. Perhaps use \\hat{\\mathcal{L}} for the empirical loss, and \\mathcal{L} for the expected one?\n\nOn the bottom of page 4, \"if \\ell' and \\nabla f are not correlated\": I think the \\nabla should be \\nabla^2.\n\nIt's \"principal components\", not \"principle components\".",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: The assumption for the numerical analysis is not sound and more experiments need to be conducted",
"paper_summary": null,
"main_review": "main_review: This paper studies the spectrum of the Hessian matrix for neural networks. To explain the observation that the spectrum of Hessian is composed of a bulk of eigenvalues centered near zero and several outliers away from the bulk, it applies the generalized Gauss-Newton decomposition on the Hessian matrix and argues that the Hessian can be approximated by the average of N rank-1 matrices. It also studies the effects on the spectrum from the model size, input data distribution and the algorithm empirically. Finally, this paper revisits the issue that if SGD solutions with different batch sizes converge to the same basin. \n\nPros:\n1. The spectra of the Hessians with different model sizes, input data distributions and algorithms are empirically studied, which provides some insights into the behavior of over-parameterized neural networks. \n2. A decomposition of the Hessian is introduced to explain the degeneracy of the Hessian. Although no mathematical justification for the key approximation Eq. (6) is provided, the experiments in Sec. 3 and Sec. 4 seem to suggest the analysis and support the approximation. \n\nCons:\n1. The paper's contributions seem to be marginal. Many arguments in the paper have been first brought out in Sagun et. al.(2016) and Keskar et. al.(2016): the degeneracy of the Hessian, the bulk and outlier decomposition of the Hessian matrix and the flatness of loss surface at basins. The authors failed to show the significance of their results. For example, what further insights do the results in Sec. 3 provide to the community compared with Sagun et. al.(2016) and Keskar et. al.(2016).\n\n2. More mathematical justification is needed. For example, in the derivation of Eq (6), why can we assume l'(f) and the gradient of f to be uncorrelated? How does this lead to the vanishing of the second term in the decomposition? \n\n3. More experiments are needed to support the arguments. For example, Sec. 4.1 shows that the solutions of SB SGD and LB SGD fall into the same basin, which is opposed to the results of Keskar et. al. (2016). However, this conclusion is not convincing. First, this result is drawn from one dataset. Second, the solution of SB SGD is initialized from the solution of LB SGD. As claimed in Keskar et. al. (2016), the solution of LB SGD may already get trapped at some bad minimum and it is not certain if SB SGD can escape from that. If it can't, then SB and LB can still be in the same basin as per the setting in this paper. So I'd like to suggest the author compare SB and LB when random initializations are conducted for both algorithms.\n\n4. In general, this paper is easy to read. However, it is not well organized. In the introduction, the authors spent several paragraphs for line search and expensive computation of GD and the Hessian, which I don't think are very related to the main purpose of this paper. Besides, the connection between the analysis and the experimental results is very weak and should be better established. \n\nMinor points:\n1. Language is awkward in section 3.1 and 3.2: 'contains contains', 'more smaller than', 'very close zero'...\n2. More experimental details need to be included, such as the parameters used in training and generating the synthetic dataset.\n3. The author needs to provide an explanation for the disagreement between Figure (10) and the result of Keskar et. al.(2016). What's the key difference in experimental settings?\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A thought provoking tentative claim but exposition needs a lot of work.",
"paper_summary": null,
"main_review": "main_review: This paper has at its core an interesting, novel, tentative claim, backed up by simple experiments, that small batch gradient descent and large batch gradient descent may converge to points in the same basin of attraction, contrary to the discussion (but not the actual experimental results) of Keskar et al. (2016). In general, there is a pressing need for insight into the qualitative behavior of gradient-based optimization and this area is of immense interest to many machine learning practitioners. Unfortunately the interesting tentative insights are surrounded by many unsubstantiated and only tangentially related theoretical discussions. Overall the paper has the appearance of lacking a sharp focus. This is a shame since I found the core of the paper very interesting and thought provoking.\n\nMajor comments:\n\nWhile the paper has some interesting tentative experimental insights, the relationship between theory and experiment is complicated. The theoretical claims are vague and wide ranging, and are not all individually well supported or even tested by the experiments. Rather than including lots of small potential insights which the authors have had about what may be going on during gradient-based optimization, I'd prefer to see a paper with much tighter focus with a small number of theoretical claims well supported by experiments (it's fine if the experiments are simplistic as here; that's still interesting).\n\nA large amount of the paper hinges on being able to ignore the second term in (6), and this fact is referred to many times, but the theoretical and experimental justification for this claim is very thin.\n\nThe authors mention overparameterization repeatedly, and it's in the title, but they never define it. It also doesn't appear to take center stage in their experimental investigations (if it is in fact critical to the experiments then it should be made clearer how).\n\nThroughout this paper there is not a clear distinction between eigenvalues being zero and eigenvalues being close to zero, or similarly between the Hessian being singular and ill-conditioned. This distinction is particularly important in the theoretical discussion.\n\nIt would be helpful to be clearer about the differences between this work and that presented in Sagun et al. (2016).\n\nMinor comments:\n\nThe assumption that the target y is real is at odds with many regression problems and practically all classification. It might be worth generalizing the discussion to multidimensional targets.\n\nIt would be good to have some citations to support the claim that often \"the number of parameters M is comparable to the number of examples N (if not much larger)\". With 1-dimensional targets as considered here, that sounds like a recipe for extreme overfitting and poor generalization. Generically based on counting constraints and free parameters one would expect to be able to fit exactly any dataset of N output values using a model with M free parameters. (With P-dimensional targets the relevant comparison would be M vs N P rather than M vs N).\n\nAt the end of intro to section 1, \"loss is non-degenerate\" should be \"Hessian of the loss is non-degenerate\"? Also, didn't the paper cited assume at least one negative eigenvalue at any saddle point, rather than non-degeneracy?\n\nIn section 1.1, it would be helpful to explain the precise sense in which \"overparameterized\" is being used. 
Hopefully it is in the sense that there are more parameters than needed for good performance at the true global minimum (the additional parameters helping with the process of *finding* a good minimum rather than its existence) or in the sense that M -> infinity for N \"equal to\" infinity. If it is in the sense that M >> N then I'm not sure of the relevance to practical machine learning.\n\nIt would be helpful to use a log scale for the plot in Figure 1. The claim that the Hessian is ill-conditioned depends on the condition number, which is impossible to estimate from the plot.\n\nThe fact that \"wide basins, as opposed to narrow ones, generalize better\" is not a new claim of the Keskar et al. paper. I'd argue it's well-known and part of the classical explanation of why maximum likelihood methods overfit and Bayesian ones don't. See for example MacKay, Information Theory Inference and Learning Algorithms.\n\n\"It turns out that the Hessian is degenerate at any given point\" makes it sound like the result is a theoretical one. As I understand it, the experimental investigation in Sagun et al. (2016) just shows that the Hessian may often be ill-conditioned. As above, more clarity is also needed about whether it is literally degenerate or just approximately so, in which case ill-conditioned is probably a more appropriate word. Ill-conditioned is also more appropriate than singular in \"slightly singular but extremely so\".\n\nHow much data was used for the simple experiments in Figure 1? Infinite data? What data was used?\n\nIt would be helpful to spell out the intuition in \"Intuitively, this kind of singularity...\".\n\nI don't think the decomposition (5) is required to \"explain why having more parameters than samples results in degenerate Hessian matrices\". Generically one would expect that with 1-dimensional targets, N datapoints and N + Q parameters, there would be a Q-dimensional submanifold of parameter space on which the loss would be zero. Of course there would be a few conditions needed to make this into a precise statement, but no need for assuming the second term is negligible.\n\nIs the conventional decomposition of the loss into l o f used for the generalized Gauss Newton that f is a function only of the input to the neural net and the model parameters, but not the target? I could be wrong, but that was always my interpretation.\n\nIt's not clear whether the phrase \"bottom of the landscape\" used several times in the paper refers to the neighborhood of local minima or of global minima.\n\nWhat is the justification for assuming l'(f(w)) and grad f(w) are not correlated? That seems unlikely to be true in general! Also spell out why this implies the second term can be ignored. I'm a bit skeptical of the claim in general. It's easy to come up with counterexamples. For example take l to be the identity (say f has a relu applied to it to ensure everything is well formed).\n\n\"Immediately, this implies that there are at least M - N trivial eigenvalues of the Hessian\". Make it clear that trivial here means approximately not exactly zero (in which case a good word would be \"small\"); this follows since the second term in (5) is only approximately zero. In fact it should be possible to prove there are M - N values which are exactly zero, but that doesn't follow from the argument presented. 
As above I'd argue this analysis is somewhat beside the point since N should be greater than M in practice to prevent severe overfitting.\n\nIn section 3.1, \"trivial eigenvalues\" should be \"non-trivial eigenvalues\".\n\nWhat's the relevance of using PCA on the data in Figure 2 when it comes to analyzing training neural nets? Also, is there any reason 2 classes breaks the trend?\n\nWhat size of data was used for the experiments to plot figure 2 and figure 3? Infinite?\n\nIt's not completely clear what the takeaway is from Figure 3. I presume this is supporting the point that the eigenvalues of the Hessian at convergence consist of a bulk and outliers. The could be stated explicitly. Is there any significance to the fact that the number of clusters is equal to the number of outliers? Is this supporting some broader claim of the paper?\n\nFigure 4, 5, 6 would benefit from being log plots, and make the claim that the bulk has the same shape independent of data much stronger.\n\nThe x-axis in Figure 5 is not \"ordered counts of eigenvalues\" but \"index of eigenvalues\", and in Figure 6 is not \"ratios of eigenvalues\" but ratio of the index. In the caption for Figure 6, \"scaled by their ratio\" is not clear.\n\nI don't follow why Figure 6 confirms that \"the effects of the ignored term in the decomposition is small\" for negative eigenvalues.\n\nIn section 3.3, when saying the variances of the steps are different but the means are similar, it may interesting to note that the variance is often the dominant term and much greater in magnitude than the mean when doing SGD (at least that's what I've experienced).\n\nWhat's the meaning of \"elbow at similar levels\"? What's the significance?\n\nIn section 4 it is claimed that overparameterization is what \"leads to flatness at the bottom of the landscape which is easy to optimize\". The bulk-outlier view suggests that adding extra parameters may just add extra dimensions to the flat region, but why is optimizing 100 values in a flat 100-dimensional space easier than optimizing 10 values in a flat 10-dimensional space?\n\nIn section 4.1, \"fair comparison\" is misleading since it depends on perspective. If one cares about compute time then certainly measuring steps rather than epochs would not be a fair comparison!\n\nWhat's the relevance of the fact that random initial points in high-dimensional spaces are almost always nearly orthogonal (N.B. the \"nearly\" should be added)? This seems to be assuming something about the mapping from initial point to basin of attraction.\n\nWhat's the meaning of \"extending away from either end points appear to be confirming the sharpness of [the] LB solution\"? Is this shown somewhere?\n\nIt would be helpful to highlight the key difference to Keskar et al. (which I believe is initializing SB training from LB point rather than from scratch). I presume the claim is that Keskar et al. only found their \"inverted camel hump\" linear interpolation results due to the random initialization, and that this would also often be observed for, say, two random LB-from-scratch trainings (which may randomly fall into different basins of attraction). If this is the intended point then it would be good to make this explicit.\n\nIn \"the first terms starts to dominate\", to dominate what? The gradient, or the second term in (5)? 
If the latter, what is the relevance of this?\n\nWhy \"even\" in \"Even when the weight space has large flat regions\"?\n\nIn the last paragraph of section 4.1, it might be worth spelling out that (as I understand it) the idea is that the small batch method finds itself in a poor region to begin with, since the average loss over an SB-noise-sized neighborhood of the LB point is actually not very good, and so there is a non-zero gradient through flat space to a place where the average loss over an SB-noise-sized neighborhood is good.\n\nIn section 5, \"we see that even large batch methods are able to get to the level where small batch methods go\" seems strange. Isn't this of training set loss? Isn't the \"level\" people care about the test set loss?\n\nIn appendix A, the meaning of consecutive in \"largest consecutive gap\" and \"largest consecutive ratio\" was not clear to me.\n\nAppendix B is only referred to in a footnote. What is its significance for the main theme of the paper? I'd suggest either making it more prominent or putting it in a separate paper.\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
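Reviews 1 and 3 above both discuss the interpolation experiment in Section 4.1 of the paper (checking for a loss barrier on the straight line between the large-batch and small-batch solutions). A minimal sketch of that diagnostic is given below; loss, w_lb, and w_sb are hypothetical placeholders for the model's training loss and the two flattened parameter vectors, not artifacts from the paper.

```python
import numpy as np

# Minimal sketch (not from the paper) of the interpolation diagnostic the
# reviews discuss: evaluate the training loss along the straight line between
# the large-batch solution w_lb and the small-batch solution w_sb.
def losses_along_segment(w_lb, w_sb, loss, num_points=25):
    alphas = np.linspace(0.0, 1.0, num_points)
    values = [loss((1.0 - a) * w_lb + a * w_sb) for a in alphas]
    return alphas, np.array(values)

# A pronounced bump between the two endpoints would indicate a barrier
# (different basins); the paper reports no such bump when the small-batch run
# is initialized from the large-batch solution.
```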
"score": [
0.4444444477558136,
0.3333333432674408,
0.4444444477558136
],
"confidence": [
0.25,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to the points raised",
"Brief response to the comments",
"Reply",
"A joint response to the common themes for the reviews",
"Response to specific points ",
"Exact vs approximate Hessian"
],
"comment": [
"Thank you very much for the helpful review, please note that some of the major themes are addressed in the general comment above. \n\n- In the decomposition, multi-target case can be covered by\n$\\ell(s_y, y) = -s_y + \\log\\sum_{y'}\\exp{s_{y'}}$. It is indeed the case that the output independent of the target would be a conventional way to go, to do that, we will expand the decomposition to cover vector-valued outputs, too.\n\n- The strict saddle property in Lee et. al. assumes isolated (therefore non-degenerate) critical point.\n \n- Log scale plot for Figure 1 doesn't produce a meaningful plot, however, it might be worthwhile to note that 95% of the eigenvalues for the final point are within the band of [-10^(-4), -10^(-4)]\n\n- For Figure 1, 2, and 3 a thousand samples are generated from Gaussian clusters. This point is also addressed in Section 3.1. Also, the takeaway of section 3.1 and 3.2 is the relation between the outliers and data (and not the size of the model).\n\n- By the bottom of the landscape, we mean loss values near zero (but not at zero). To be more precise, for a non-negative function, f, we mean an element from the set {w:f(w)<epsilon}. To the best of our knowledge, the values of the global minimum, and/or the local minima are unknown in the case of deep learning loss functions. \n\n- Regarding the correlation in the second term, that's right, a more plausible argument would be the perfect classifier that has zero gradients on each of the examples.\n\n- In most cases, M>N without 'severe' overfitting, for example, for CIFAR-10 N=50K and M is usually several million.\n\n- PCA was a way to assess the complexity of the data and show its relation to the eigenvalues. But we decided to remove it since a notion of complexity of the data in this context should take the architecture into account. We added a remark on this in the text, as well. \n\n- In our experience, relative values of the variance and the mean of the gradients in SGD depends on the phase of the training. We will look into this in more detail. \n\n- By a 'fair comparison', we mean a fair comparison of what algorithm finds what kind of solution, assuming one is interested in the behavior of the algorithm itself. Otherwise, the real-life computational challenges depend on the hardware, too. For instance, one could increase the batch-size up to the saturation of the GPU and not lose time on it. Therefore, scaling the time axis with the number of epochs can be misleading in a broader context.\n\n- If one is to select random points on the sphere, the selected points become more and more orthogonal as the dimension of the sphere increases. We have experiments that show that this orthogonality is preserved for the trained points, too, if one starts from orthogonal initial points. This is not surprising given the geometry of high dimensional spaces. But we can follow up on this in another work. \n\n- \"In the last paragraph of section 4.1, it might be worth spelling out that (as I understand it) the idea is that the small batch method finds itself in a poor region to begin with, since the average loss over an SB-noise-sized neighborhood of the LB point is actually not very good, and so there is a non-zero gradient through flat space to a place where the average loss over an SB-noise-sized neighborhood is good.\" This is a great point, but we are curious about the following: Is it the size of the noise, or the shape of it? We believe this should be investigated further in a separate context. 
\n\n- \"In section 5, \"we see that even large batch methods are able to get to the level where small batch methods go\" seems strange. Isn't this of training set loss? Isn't the \"level\" people care about the test set loss?\" Right, we meant 'the same basin'.\n\n- By largest consecutive gap, we mean the largest element of the set of consecutive gaps of eigenvalues when they are ordered on the real line. And similarly with the largest consecutive ratio. They are just ways of finding a separator in the spectrum. Some which seem to work better than the others but such a separator should depend on the notion of the complexity of the dataset, as well. Also, we added a note to explain the relevance of Appx B. The theorem there is a tool that maps the eigenvalues of the population matrix to the sample covariance matrix but it is only valid for independent data. We also provide an example where it can work and fail at the end of the appendix.",
"Thank you very much for the comments. We have addressed some of the main concerns above in a general statement. Please consider that response in your re-evaluation, as well. We fixed the minor points and added more details to the experiments. We also changed the structure of the paper to emphasize our contribution and make it clearer. \n\nTo be more precise, we have a simple perspective that can also be interpreted as a warning sign when one is interested in questions related to the geometry of the bottom of the landscape. We have insights derived from our simple experiments and a demonstration how common ways of visualizations can be misleading. As pointed out, we improved our presentation in this regard. \n\nRegarding point 3: we present two solutions that are qualitatively different and they show signs of being in different basins (sharp/narrow) but they are in the same basin. We also point out that the barriers between solutions can appear depending on internal symmetries of the system, and our experiment addresses this issue as well. Please refer to the general comment above and the updated text for further details. ",
"Thanks for attempting to address my concerns. But the responses to Point 2 and Point 3 are not still convincing to me. In particular, the soundness of the assumption for the mathematical justification is still not addressed and the experimental setting comparing SB against LB is not well designed. Considering the overall novelty and the contribution of this paper, I keep my rating.",
"We thank all three reviewers for their time to evaluate our work, here we craft a response that we believe should address some of the points commonly raised by all three reviewers. We have edited the paper to enhance the message we are trying to convey, and we hope it is more expressive in its new state. \n\nThe focus of our work: The landscape at the bottom is flatter than the picture depicted in many recent papers (some of which are other fellow ICLR submissions e.g. https://openreview.net/forum?id=rJma2bZCW). Therefore we should revise our notions of 'basin' in a way that will address this feature. \n\nOur work is phenomenological, and it addresses the shortcomings of certain ways of picturing the landscape, and it calls for a change. To this end:\n(1) We demonstrate the local geometry at the bottom of the landscape and its intricate relations with the data, model, and algorithm.\n(2) Then we show how the space of solutions can be vastly connected if one avoids rather simple pitfalls.\n\nGeneral remarks:\n\n- Our work is an enhancement over Sagun et. al. (2016) in the following ways: (1) We present more experiments of the spectrum of the Hessian in various different setups, as well as a possible explanation. Therefore we solidify the claims in a more robust way. (2) Based on the key insight from the previous part, we present an experiment where two qualitatively different solutions are connected, thereby challenging some of the recent work by pointing out the fact that certain ways of visualization techniques can be misleading. (3) Finally, to the best of our knowledge, Sagun et. al. (2016) hasn't been published anywhere besides the ArXiv. We believe that our contribution is the necessary addition that would build on top of that work. This can also be seen by the reviews of that work has got: https://openreview.net/forum?id=B186cP9gx \n\n- Our experiments don't rely on the decomposition. The decomposition is a tool to analyze the results and make predictions to be tested experimentally. All experiments are standalone. We edited the text to better reflect this fact. We also added more details on the experimental procedures.\n\n- Certain notions such as the data complexity and over-parametrization are vague since making them more precise would require the details of the architecture, as well. Our focus is on the flattened weight vector, therefore, for now, it would be enough to consider cases where M>>N. However, future work will take a more detailed look into this.\n",
"Thank you very much for your time to review our paper. We have added a joint response that should cover most of the issues addressed, and we updated the pdf file for our work. Here are some specific comments:\n\nThe phenomena of a large number of eigenvalues being small (e.g. for Figure 1, 95% of the eigenvalues for the final point are within the band of [-10^(-4), -10^(-4)]) is a geometrical feature of the landscape that may change our way of visualizing the landscape. For instance, if one is to consider a random polynomial of degree 3 or more, and in a large number of variables, at a local minimum, the histogram of the eigenvalues of the loss function will be a shifted semi-circle distribution which is drastically different. Or in another context, if one is interested in the sample covariance function as in a perfect solution that ignores the second term, and if M < N and the data are iid then the spectrum would have a Marcenko Pastur part (see Appendix for more details). However, what we observe here doesn't fit into such provable cases, and to the best of our knowledge, there is no mathematically sound theoretical argument that would provide us with an explanation for the case at hand. Therefore, the scope of our work is to stick to the experiments and gain insight into what may actually be happening at the bottom of the loss landscape. \n\nThank you also for pointing out the improvements, we have edited the text to reflect on increasing the clarity of our experiments, and exposition. We hope that our message is better conveyed this way. ",
"Thank you very much for your question. The experiments are the exact Hessian calculation, therefore, it reflects the existing negative eigenvalues. Of course, they can only come from the second term of the decomposition. We modified the text to reflect this."
]
} | {
"paperhash": [
"auffinger|random_matrices_and_complexity_of_spin_glasses",
"baik|phase_transition_of_the_largest_eigenvalue_for_nonnull_complex_sample_covariance_matrices",
"andrew|energy_landscapes_for_machine_learning",
"bloemendal|on_the_principal_components_of_sample_covariance_matrices",
"bottou|stochastic_gradient_learning_in_neural_networks",
"bottou|large-scale_machine_learning_with_stochastic_gradient_descent",
"bourrely|parallelization_of_a_neural_network_learning_algorithm_on_a_hypercube._hypercube_and_distributed_computers",
"chaudhari|entropy-sgd:_biasing_gradient_descent_into_wide_valleys",
"dinh|sharp_minima_can_generalize_for_deep_nets",
"daniel|topology_and_geometry_of_deep_rectified_network_optimization_landscapes",
"goyal|yangqing_jia,_and_kaiming_he._accurate,_large_minibatch_sgd:_training_imagenet_in_1_hour",
"hochreiter|flat_minima",
"jastrzębski|three_factors_influencing_minima_in_sgd",
"keskar|on_large-batch_training_for_deep_learning:_generalization_gap_and_sharp_minima",
"lecun|efficient_backprop._lecture_notes_in_computer_science",
"lee|gradient_descent_converges_to_minimizers",
"vladimir|distribution_of_eigenvalues_for_some_sets_of_random_matrices",
"martens|deep_learning_via_hessian-free_optimization",
"mei|the_landscape_of_empirical_risk_for_non-convex_losses",
"nocedal|numerical_optimization",
"panageas|gradient_descent_only_converges_to_minimizers",
"barak|fast_exact_multiplication_by_the_hessian",
"sagun|explorations_on_high_dimensional_landscapes",
"sagun|singularity_of_the_hessian_in_deep_learning",
"schaul|no_more_pesky_learning_rates",
"shwartz|opening_the_black_box_of_deep_neural_networks_via_information",
"zhang|understanding_deep_learning_requires_rethinking_generalization"
],
"title": [
"Random matrices and complexity of spin glasses",
"Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices",
"Energy landscapes for machine learning",
"On the principal components of sample covariance matrices",
"Stochastic gradient learning in neural networks",
"Large-scale machine learning with stochastic gradient descent",
"Parallelization of a neural network learning algorithm on a hypercube. Hypercube and distributed computers",
"Entropy-sgd: Biasing gradient descent into wide valleys",
"Sharp minima can generalize for deep nets",
"Topology and geometry of deep rectified network optimization landscapes",
"Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour",
"Flat minima",
"Three factors influencing minima in sgd",
"On large-batch training for deep learning: Generalization gap and sharp minima",
"Efficient backprop. Lecture notes in computer science",
"Gradient descent converges to minimizers",
"Distribution of eigenvalues for some sets of random matrices",
"Deep learning via hessian-free optimization",
"The landscape of empirical risk for non-convex losses",
"Numerical optimization",
"Gradient descent only converges to minimizers",
"Fast exact multiplication by the hessian",
"Explorations on high dimensional landscapes",
"Singularity of the hessian in deep learning",
"No more pesky learning rates",
"Opening the black box of deep neural networks via information",
"Understanding deep learning requires rethinking generalization"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"antonio auffinger",
"gérard ben arous",
"jiří černỳ"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jinho baik",
"gérard ben arous",
"sandrine péché"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"j andrew",
"ritankar ballard",
"stefano das",
"dhagash martiniani",
"levent mehta",
"jacob d sagun",
"david j stevenson",
" wales"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex bloemendal",
"antti knowles",
"horng-tzer yau",
"jun yin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"léon bottou"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"léon bottou"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" bourrely"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pratik chaudhari",
"anna choromanska",
"stefano soatto",
"yann lecun",
"carlo baldassi",
"christian borgs",
"jennifer chayes",
"levent sagun",
"riccardo zecchina"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"laurent dinh",
"razvan pascanu",
"samy bengio",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"c daniel",
"freeman ",
"joan bruna"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"priya goyal",
"piotr dollár",
"ross girshick",
"pieter noordhuis",
"lukasz wesolowski",
"aapo kyrola",
"andrew tulloch"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sepp hochreiter",
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"stanisław jastrzębski",
"zachary kenton",
"devansh arpit",
"nicolas ballas",
"asja fischer",
"yoshua bengio",
"amos storkey"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nitish shirish keskar",
"dheevatsa mudigere",
"jorge nocedal",
"mikhail smelyanskiy",
"ping tak",
"peter tang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yann lecun",
"léon bottou",
"k-r orr",
" müller"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jason d lee",
"max simchowitz",
"benjamin michael i jordan",
" recht"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"a vladimir",
"leonid andreevich marčenko",
" pastur"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"james martens"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"song mei",
"yu bai",
"andrea montanari"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jorge nocedal",
"stephen j wright"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ioannis panageas",
"georgios piliouras"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"a barak",
" pearlmutter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"v ugur levent sagun",
"gérard güney",
"yann ben arous",
" lecun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"levent sagun",
"léon bottou",
"yann lecun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tom schaul",
"sixin zhang",
"yann lecun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ravid shwartz",
"-ziv ",
"naftali tishby"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chiyuan zhang",
"samy bengio",
"moritz hardt",
"benjamin recht",
"oriol vinyals"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1003.1129v2",
"",
"",
"1404.0788v3",
"",
"",
"",
"1611.01838v5",
"1703.04933v2",
"arXiv:1611.01540",
"arXiv:1706.02677",
"",
"1711.04623v3",
"arXiv:1609.04836",
"",
"1602.04915v2",
"",
"",
"arXiv:1607.06534",
"",
"arXiv:1605.00405",
"",
"1412.6615v4",
"arXiv:1611.07476",
"1206.1106v2",
"1703.00810v3",
"1611.03530v2"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.407407 | 0.583333 | null | null | null | null | null | rJrTwxbCb |
||
rossetto|lung_tumor_location_and_identification_with_alexnet_and_a_custom_cnn|ICLR_cc_2018_Conference | Lung Tumor Location and Identification with AlexNet and a Custom CNN | Lung cancer is the leading cause of cancer deaths in the world and early detection is a crucial part of increasing patient survival. Deep learning techniques provide us with a method of automated analysis of patient scans. In this work, we compare AlexNet, a multi-layered and highly flexible architecture, with a custom CNN to determine if lung nodules with patient scans are benign or cancerous. We have found our CNN architecture to be highly accurate (99.79%) and fast while maintaining low False Positive and False Negative rates (< 0.01% and 0.15% respectively). This is important as high false positive rates are a serious issue with lung cancer diagnosis. We have found that AlexNet is not well suited to the problem of nodule identification, though it is a good baseline comparison because of its flexibility. | {
"name": [],
"affiliation": []
} | null | [] | null | 2018-02-15 22:29:26 | 9 | null | null | null | null | null | null | null | null | false | Pros:
- Addresses an important medical imaging application
- Uses an open dataset
Cons:
- Authors do not cite the original article describing the challenge from which they use their data: https://arxiv.org/pdf/1612.08012.pdf, or the website for the corresponding challenge: https://luna16.grand-challenge.org/results/
- Authors either 1) do not follow the evaluation protocol set forth by the challenge, making it impossible to compare to other methods published on this dataset, or 2) incorrectly describe their use of that public dataset.
- Compares only to AlexNet architecture, and not to any of the other multiple methods published on this dataset (see: https://arxiv.org/pdf/1612.08012.pdf).
- Too much space is spent explaining well-understood evaluation functions.
- As reviewers point out, no motivation for new architecture is given.
| {
"review_id": [
"SkOp9W5gf",
"HkQQ3IQxf",
"B1dApr_lf"
],
"review": [
{
"title": "title: difficult to read, need more details",
"paper_summary": null,
"main_review": "main_review: The paper compares AlexNet and a custom CNN in predicting malignant lung nodules, and shows that the proposed CNN achieves significantly lower false positives and false negative rates.\n\nMajor comments\n\n- I did not fully understand the motivation of the custom CNN over AlexNet. \n\n- Some more description of the dataset will be helpful. Do the 888 scans belong to different patients, or same patient can be scanned at different times? What is the dimensionality of each CT scan?\n\n- Are the authors predicting the location of the malignant nodule, or are they classifying if the image has a malignant nodule? How do the authors compute a true positive? What threshold is used?\n\n- What is 'Luna subsets'? What is 'unsmoothed and smoothed image'?\n\nMinor comments\n\n- The paper is difficult to read, and contains a lot of spelling and grammatical errors.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: a trivial comparison of 2 CNN models for lung cancer detection on CT scans",
"paper_summary": null,
"main_review": "main_review: This paper compares 2 CNN architectures (Alexnet and a VGG variant) for the task of classifying images of lung cancer from CT scans. The comparison is trivial and does not go in depth to explain why one architecture works better than the other. Also, no effort is made to explain the data beyond some superficial description. No example of input data is given (what does an actual input look like). The authors mention \"the RCNN object detector\" in step 18, that presumably does post-processing after the CNN. But there is no explanation of that module anywhere. Instead the authors spend most of the paper listing in wordy details the architecture of their VGG variant. Also, a full page is devoted to detailed explanation of what precision-recall and Matthews Correlation Coefficient is! Overall, the paper does not provide any insight beyond: i tried this, i tried that and this works better than that; a strong reject.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Paper with interesting ideas, but far from meeting ICLR level.",
"paper_summary": null,
"main_review": "main_review: The authors compare a standard DL machine (AlexNet) with a custom CNN-based solution in the well known tasks of classifying lung tumours into benign or cancerous in the Luna CT scan dataset, concluding that the proposed novel solution performs better.\nThe paper is interesting, but it has a number of issues that prevents it from being accepted for the ICLR conference.\n\nFirst, the scope of the paper, in its present form, is very limited: the idea of comparing the novel solution just with AlexNet is not adding much to the present landscape of methods to tackle this problem.\nMoreover, although the task is very well known and in the last few year gave rise to a steady flow of solutions and was also the topic of a famous Kaggle competition, no discussion about that can be found in the manuscript.\nThe novel solution is very briefly sketched, and some of the tricks in its architecture are not properly justified: moreover, the performance improvement w.r.t . to AlexNet is hardly supporting the claim.\nExperimental setup consists of just a single training/test split, thus no confidence intervals on the results can be defined to show the stability of the solution.\nThe whole sections 2.3 and 2.4 include only standard material unnecessary to mention given the target venue, and the references are limited and incomplete.\nThis given, I rate this manuscript as not suitable for ICLR 2018.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.2222222238779068,
0.1111111119389534,
0.2222222238779068
],
"confidence": [
0.75,
1,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [],
"comment": []
} | {
"paperhash": [
"|data_from_lidc-idri._data_from_lidc-idri",
"christopher|pattern_recognition",
"krizhevsky|imagenet_classification_with_deep_convolutional_neural_networks",
"virginia|screening_for_lung_cancer:_us_preventive_services_task_force_recommendation_statement",
"pereira|brain_tumor_segmentation_using_convolutional_neural_networks_in_mri_images",
"martin|evaluation:_from_precision,_recall_and_f-measure_to_roc,_informedness,_markedness_and_correlation",
"rivera|establishing_the_diagnosis_of_lung_cancer:_diagnosis_and_management_of_lung_cancer:_american_college_of_chest_physicians_evidence-based_clinical_practice_guidelines",
"arindra|validation,_comparison,_and_combination_of_algorithms_for_automatic_detection_of_pulmonary_nodules_in_computed_tomography_images:_the_luna16_challenge",
"simonyan|very_deep_convolutional_networks_for_large-scale_image_recognition"
],
"title": [
"Data from lidc-idri. Data From LIDC-IDRI",
"Pattern recognition",
"Imagenet classification with deep convolutional neural networks",
"Screening for lung cancer: Us preventive services task force recommendation statement",
"Brain tumor segmentation using convolutional neural networks in mri images",
"Evaluation: from precision, recall and f-measure to roc, informedness, markedness and correlation",
"Establishing the diagnosis of lung cancer: Diagnosis and management of lung cancer: American college of chest physicians evidence-based clinical practice guidelines",
"Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: the luna16 challenge",
"Very deep convolutional networks for large-scale image recognition"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [],
"affiliation": []
},
{
"name": [
"m christopher",
" bishop"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex krizhevsky",
"ilya sutskever",
"geoffrey e hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"a virginia",
" moyer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sérgio pereira",
"adriano pinto",
"victor alves",
"carlos a silva"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david martin",
"powers "
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"patricia rivera",
"atul c mehta",
"momen m wahidi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"arnaud arindra",
"adiyoso setio",
"alberto traverso",
"thomas de bel",
"moira sn berens",
"cas van den bogaard",
"piergiorgio cerello",
"hao chen",
"qi dou",
"maria evelina fantacci",
"bram geurts"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"karen simonyan",
"andrew zisserman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"2010.16061v1",
"",
"1612.08012v4",
"arXiv:1409.1556"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.185185 | 0.833333 | null | null | null | null | null | rJr4kfWCb |
||
addad|clipping_free_attacks_against_neural_networks|ICLR_cc_2018_Conference | Clipping Free Attacks Against Neural Networks | In recent years, a remarkable breakthrough has been made in the AI domain thanks to artificial deep neural networks, which have achieved great success in many machine learning tasks in computer vision, natural language processing, speech recognition, malware detection and so on. However, they are highly vulnerable to easily crafted adversarial examples. Many investigations have pointed out this fact, and different approaches have been proposed to generate attacks while adding a limited perturbation to the original data. The most robust known method so far is the so-called C&W attack [1]. Nonetheless, a countermeasure known as feature squeezing coupled with ensemble defense showed that most of these attacks can be destroyed [6]. In this paper, we present a new method we call Centered Initial Attack (CIA) whose advantage is twofold: first, it ensures by construction that the maximum perturbation is smaller than a threshold fixed beforehand, without the clipping process that degrades the quality of attacks. Second, it is robust against recently introduced defenses such as feature squeezing, JPEG encoding, and even a voting ensemble of defenses. While its application is not limited to images, we illustrate this using five of the current best classifiers on the ImageNet dataset, among which two are adversarially retrained on purpose to be robust against attacks. With a fixed maximum perturbation of only 1.5% on any pixel, around 80% of attacks (targeted) fool the voting ensemble defense, and nearly 100% when the perturbation is only 6%. While this shows how difficult it is to defend against CIA attacks, the last section of the paper gives some guidelines to limit their impact. | {
"name": [],
"affiliation": []
} | In this paper, a new method we call Centered Initial Attack (CIA) is provided. It ensures by construction that the maximum perturbation is smaller than a threshold fixed beforehand, without the clipping process. | [
"Adversarial examples",
"Neural Networks",
"Clipping"
] | null | 2018-02-15 22:29:47 | 23 | null | null | null | null | null | null | null | null | false | The reviewers have various reservations.
While the paper has interesting suggestions, it is slightly incremental and the results are not sufficiently compared to other techniques.
We note that one reviewer revised his opinion | {
"review_id": [
"ryU7ZMsgf",
"BkrIo4ixG",
"B1fZIQcxM"
],
"review": [
{
"title": "title: Interesting reparametrization, but too little experimental support",
"paper_summary": null,
"main_review": "main_review: This paper presents a reparametrization of the perturbation applied to features in adversarial examples based attacks. It tests this attack variation on against Inception-family classifiers on ImageNet. It shows some experimental robustness to JPEG encoding defense.\n\nSpecifically about the method: Instead of perturbating a feature x_i by delta_i, as in other attacks, with delta_i in range [-Delta_i, Delta_i], they propose to perturbate x_i^*, which is recentered in the domain of x_i through a heuristic ((x_i ± Delta_i + domain boundary that would be clipped)/2), and have a similar heuristic for computing a Delta_i^*. Instead of perturbating x_i^* directly by delta_i, they compute the perturbed x by x_i^* + Delta_i^* * g(r_i), so they follow the gradient of loss to misclassify w.r.t. r (instead of delta). \n\n+/-:\n+ The presentation of the method is clear.\n+ ImageNet is a good dataset to benchmark on.\n- (!) The (ensemble) white-box attack is effective but the results are not compared to anything else, e.g. it could be compared to (vanilla) FGSM nor C&W.\n- The other attack demonstrated is actually a grey-box attack, as 4 out of the 5 classifiers are known, they are attacking the 5th, but in particular all the 5 classifiers are Inception-family models.\n- The experimental section is a bit sloppy at times (e.g. enumerating more than what is actually done, starting at 3.1.1.).\n- The results on their JPEG approximation scheme seem too explorative (early in their development) to be properly compared.\n\nI think that the paper need some more work, in particular to make more convincing experiments that the benefit lies in CIA (baselines comparison), and that it really is robust across these defenses shown in the paper.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Incremental but interesting results for adversarial examples",
"paper_summary": null,
"main_review": "main_review: In this paper the authors present a new method for generating adversarial examples by constraining the perturbations to fall in a bounded region. Further, experimentally, they demonstrate that learning the perturbations to balance errors against multiple classifiers can overcome many common defenses used against adversarial examples.\n\nPros:\n- Simple, easy to apply technique\n- Positive results in a wide variety of settings.\n\nCons:\n- Writing is a bit awkward at points.\n- Approach seems fairly incremental.\n\nOverall, the results are interesting but the technique seems relatively incremental.\n\nDetails:\n\n\"To find the center of domain definition...\" paragraph should probably go after the cases are described. Confusing as to what is being referred to where it currently is written.\n\nTable 1: please use periods not commas (as in Table 2), e.g. 96.1 not 96,1\n\ninexistent --> non-existent\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Clipping Free Attacks Against Neural Networks ",
"paper_summary": null,
"main_review": "main_review: The paper is not anonymized. In page 2, the first line, the authors revealed [15] is a self-citation and [15] is not anonumized in the reference list.\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.4444444477558136,
0.2222222238779068
],
"confidence": [
0.5,
0.25,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Our response to AnonReviewer3",
"Practical idea, but incremental",
"Our response to AnonReviewer2",
"Our response to AnonReviewer1"
],
"comment": [
"We honestly do not understand your evaluation since sentence “Evolutionary algorithms are also used by authors in [15] to find adversarial examples ...” is not a self citation. Believe us that we are not Nguyen et al and we are not related to them at all. You will see it clearly when the names well be unveiled. \nSo, please take the time to reconsider your evaluation and give us a fair review as our paper is the result of several months of work.",
"Since I could not modify the previous review, I show the evaluation results in the comment. \n`Evaluation:\n5: Marginally below acceptance threshold\n3: The reviewer is fairly confident that the evaluation is correct\n\nThis paper introduces a novel method to generate adversarial examples so that the resulting examples lie inside the specified region by construction without clipping. \n\nStrength:\n-Experimental results show that the proposed adversarial examples successfully fool multiple recent defense methods, such as feature squeezing, ensemble defenses, and JPEG encoding.\n-Tested with ImageNet \n\nWeakness:\n-The results are not compared with known attacks experimentally\n\nI think additional experiments are needed to demonstrate that the proposed adversarial example is superior to other adversarial examples. Since no experimental comparison is not conducted, I could not how much the proposed adversarial examples works robustly across multiple defense methods compared to other adversarial examples.\n\n\n\n",
"Thanks a lot for taking the time to read the paper and provide us with you review.\nWe do not claim in the paper that CIA attacks are the most robust ones as we did not indeed give any comparison to other methods. We however show that they are an answer to some issues met in literature. First, avoid the clipping that degrades the quality of attacks. We give a comparison to C&W. (Figure 1). Second, we show that CIA attacks are effective against recent published defenses : ensembling (by the way, at least three papers submitted to ICLR2018 claim this defense to be effective), smoothing and JPG encoding. After the paper submission, we continued our experiments and made comparison to C&W and FGSM. They show a non negligible improvement in attacks success using CIA approach. If adding the results would change your review to an acceptance, we would like to do it.\nAbout the grey-box attack, you are definitely right . However, the purpose of this section 3.3 was to show that ensembling can be considered as a defense as it limits the attacks but not totally effective. Using another classifier would likely show that the transferability is even more limited. This would only reinforce our claim about the lack of transferability which was already tackled in the previous sections.\nFinally, the English of the paper can be improved. We will do it in the revised version.",
"First, thank for taking the time to read the paper and provide this review.\nThe approach is not really incremental as we do not go from an easy case then harden it at each new experiment. In the Tabl2 we show the non transferabity of attacks when making targeted attacks which is against what is often claimed in literature. Table 3 gives the results of attacking several classifiers at once. This shows that ensembling is not always effective as often claimed (by the way, at least three papers submitted to ICLR2018 claim it!). Table 4 provides results of attacking another defense, i.e. spatial smoothing. Table 5 shows that smoothed attacks are not necessarily successful if defense does not use smoothing. Once again, this reveals that the idea behind smoothing as an efficient defense is not true. Table 6 gives the results of attacking a defense with and without smoothing at the same time. Table 7 presents a combination of ensembling and smoothing defense. We could have given this last table directly at the beginning but we think honestly that this would make the paper more difficult to read. \nThe paper is obviously not written in a perfect English and this can be improved. We will do it in the revised version. But overall we think that we bring some interesting results to the community:\n- Avoid clipping to perform more robust attacks\n- Perform effective attacks against strong defenses like ensembling and smoothing.\n- Make partial crafting without affecting the whole content of input data (images in Figure 3)\n- Finally, CIA attacks can be applied beyond images.\nWe hope this answer will give you satisfaction and change your review to an acceptance."
]
} | {
"paperhash": [
"carlini|towards_evaluating_the_robustness_of_neural_networks",
"goodfellow|explaining_and_harnessing_adversarial_examples",
"moosavi-dezfooli|deepfool:_a_simple_and_accurate_method_to_fool_deep_neural_networks",
"papernot|the_limitations_of_deep_learning_in_adversarial_settings",
"szegedy|intriguing_properties_of_neural_networks",
"xu|feature_squeezing:_detecting_adversarial_examples_in_deep_neural_networks",
"xu|feature_squeezing_mitigates_and_detects_carlini/wagner_adversarial_examples",
"carlini|defensive_distillation_is_not_robust_to_adversarial_examples",
"papernot|towards_the_science_of_security_and_privacy_in_machine_learning",
"gu|towards_deep_neural_network_architectures_robust_to_adversarial_examples",
"feinman|detecting_adversarial_samples_from_artifacts",
"grosse|on_the_(statistical)_detection_of_adversarial_examples",
"metzen|on_detecting_adversarial_perturbations",
"kurakin|adversarial_examples_in_the_physical_world",
"nguyen|deep_neural_networks_are_easily_fooled:_high_confidence_predictions_for_unrecognizable_images",
"ingma|a_method_for_stochastic_optimization",
"bojarski|end_to_end_learning_for_self-driving_cars",
"|katherine_works_to_self-driving_cars,_smartphones,_and_drones",
"papernot|transferability_in_machine_learning:_from_phenomena_to_black-box_attacks_using_adversarial_samples",
"grosse|adversarial_perturbations_against_deep_neural_networks_for_malware_classification",
"shukla|real_life_applications_of_soft_computing,_book",
"li|adversarial_examples_detection_in_deep_networks_with_convolutional_filter_statistics",
"hendrycks|early_methods_for_detecting_adversarial_images"
],
"title": [
"Towards Evaluating the Robustness of Neural Networks",
"Explaining and Harnessing Adversarial Examples",
"DeepFool: a simple and accurate method to fool deep neural networks",
"The Limitations of Deep Learning in Adversarial Settings",
"Intriguing Properties of Neural Networks",
"Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks",
"Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples",
"Defensive Distillation is not Robust to Adversarial Examples",
"Towards the Science of Security and Privacy in Machine Learning",
"Towards Deep Neural Network Architectures Robust to Adversarial Examples",
"Detecting Adversarial Samples from Artifacts",
"On the (Statistical) Detection of Adversarial Examples",
"On Detecting Adversarial Perturbations",
"Adversarial Examples in the Physical World",
"Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images",
"A method for stochastic optimization",
"End to End Learning for Self-Driving Cars",
"Katherine works to Self-Driving Cars, Smartphones, and Drones",
"Transferability in machine learning: from phenomena to black-box attacks using adversarial samples",
"Adversarial perturbations against deep neural networks for malware classification",
"Real Life Applications of Soft Computing, book",
"Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics",
"Early Methods for Detecting Adversarial Images"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"nicholas carlini",
"david wagner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ian j goodfellow",
"jonathon shlens",
"christian szegedy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"seyed-mohsen moosavi-dezfooli",
"alhussein fawzi",
"pascal frossard"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nicolas papernot",
"patrick mcdaniel",
"somesh jha",
"matt fredrikson",
"z berkay celik",
"ananthram swami"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"christian szegedy",
"wojciech zaremba",
"ilya sutskever",
"joan bruna"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"weilin xu",
"david evans",
"yanjun qi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"weilin xu",
"david evans",
"yanjun qi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nicholas carlini",
"david wagner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nicolas papernot",
"patrick mcdaniel",
"arunesh sinha",
"michael wellman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"shixiang gu",
"luca rigazio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"reuben feinman",
"saurabh ryan r curtin",
"andrew b shintre",
" gardner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kathrin grosse",
"praveen manoharan",
"nicolas papernot",
"michael backes",
"patrick mcdaniel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jan hendrik metzen",
"tim genewein",
"volker fischer",
"bastian bischoff"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alexey kurakin",
"ian goodfellow",
"samy bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"anh nguyen",
"jason yosinski",
"jeff clune"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"d ingma",
"j and b a",
" adam"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mariusz bojarski",
"davide del testa",
"daniel dworakowski",
"bernhard firner",
"beat flepp",
"prasoon goyal",
"lawrence d jackel",
"mathew monfort",
"urs muller",
"jiakai zhang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [
"nicolas papernot",
"patrick mcdaniel",
"ian goodfellow"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"k grosse",
"n papernot",
"p manohoran",
" backes",
"p daniel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" shukla"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xin li",
"fuxin li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"dan hendrycks",
"kevin gimpel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1608.04644v2",
"1412.6572v3",
"1511.04599v3",
"1511.07528v1",
"1312.6199v4",
"1704.01155v2",
"1705.10686v1",
"1607.04311v1",
"1611.03814v1",
"1412.5068v4",
"1703.00410v3",
"1702.06280v2",
"1702.04267v2",
"1607.02533v4",
"1412.1897v4",
"arXiv:1412.6980",
"arXiv:1604.07316",
"",
"arXiv:1605.07277",
"1606.04435v2",
"",
"1612.07767v2",
"1608.00530v2"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.333333 | 0.416667 | null | null | null | null | null | rJqfKPJ0Z |
||
zhou|hybridnet_a_hybrid_neural_architecture_to_speedup_autoregressive_models|ICLR_cc_2018_Conference | HybridNet: A Hybrid Neural Architecture to Speed-up Autoregressive Models | This paper introduces HybridNet, a hybrid neural network to speed up autoregressive models for raw audio waveform generation. As an example, we propose a hybrid model that combines an autoregressive network named WaveNet and a conventional LSTM model to address speech synthesis. Instead of generating one sample per time-step, the proposed HybridNet generates multiple samples per time-step by exploiting the long-term memory utilization property of LSTMs. In the evaluation, when applied to text-to-speech, HybridNet yields state-of-the-art performance. HybridNet achieves a 3.83 subjective 5-scale mean opinion score on US English, largely outperforming the same-size WaveNet in terms of naturalness and providing a 2x speed-up at inference. | {
"name": [],
"affiliation": []
} | It is a hybrid neural architecture to speed up autoregressive models. | [
"neural architecture",
"inference time reduction",
"hybrid model"
] | null | 2018-02-15 22:29:32 | 16 | null | null | null | null | null | null | null | null | false | The paper presents a hybrid architecture which combines WaveNet and LSTM for speeding-up raw audio generation. The novelty of the method is limited, as it’s a simple combination of existing techniques. The practical impact of the approach is rather questionable since the generated audio has significantly lower MOS scores than the state-of-the-art WaveNet model. | {
"review_id": [
"ryOLIn5lf",
"r16uKJ5gG",
"ByDRVIuZG"
],
"review": [
{
"title": "title: Right name. Low innovation. Samples please!",
"paper_summary": null,
"main_review": "main_review: This paper presents HybridNet, a neural speech (and other audio) synthesis system (vocoder) that combines the popular and effective WaveNet model with an LSTM with the goal of offering a model with faster inference-time audio generation.\n\nSummary: The proposed model, HybridNet is a fairly straightforward variation of WaveNet and thus the paper offers a relatively low novelty. There is also a lack of detail regarding the human judgement experiments that make the significance of the results difficult to interpret. \n\nLow novelty of approach / impact assessment:\nThe proposed model is based closely on WaveNet, an existing state-of-the-art vocoder model. The proposal here is to extend WaveNet to include an LSTM that will generate samples between WaveNet samples -- thus allowing WaveNet to sample at a lower sample frequency. WaveNet is known for being relatively slow at test-time generation time, thus allowing it to run at a lower sample frequency should decrease generation time. The introduction of a local LSTM is perhaps not a sufficiently significant innovation. \n\nAnother issue that lowers the assessment of the likely impact of this paper is that there are already a number of alternative mechanism to deal with the sampling speed of WaveNet. In particular, the cited method of Ramachandran et al (2017) uses caching and other tricks to achieve a speed up of 21 times over WaveNet (compared to the 2-4 times speed up of the proposed method). The authors suggest that these are orthogonal strategies that can be combined, but the combination is not attempted in this paper. There are also other methods such as sampleRNN (Mehri et al. 2017) that are faster than WaveNet at inference time. The authors do not compare to this model.\n\nInappropriate evaluation:\nWhile the model is motivated by the need to reduce the generation of WaveNet sampling, the evaluation is largely based on the quality of the sampling rather than the speed of sampling. The results are roughly calibrated to demonstrate that HybridNet produces higher quality samples when (roughly) adjusted for sampling time. The more appropriate basis of comparison is to compare sample time as a function of sample quality. \n\nExperiments:\nFew details are provided regarding the human judgment experiments with Mechanical Turkers. As a result it is difficulty to assess the appropriateness of the evaluation and therefore the significance of the findings. I would also be much more comfortable with this quality assessment if I was able to hear the samples for myself and compare the quality of the WaveNet samples with HybridNet samples. I will also like to compare the WaveNet samples generated by the authors' implementation with the WaveNet samples posted by van den Oord et al (2017). \n\n\nMinor comments / questions:\n\nHow, specifically, is validation error defined in the experiments? \n\nThere are a few language glitches distributed throughout the paper. \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Good results but lacking details about design decisions",
"paper_summary": null,
"main_review": "main_review: TL;DR of paper: for sequential prediction, in order to scale up the model size without increasing inference time, use a model that predicts multiple timesteps at once. In this case, use an LSTM on top of a Wavenet for audio synthesis, where the LSTM predicts N steps for every Wavenet forward pass. The main result is being able to train bigger models, by increasing Wavenet depth, without increasing inference time.\n\nThe idea is simple and intuitive. I'm interested in seeing how well this approach can generalize to other sequential prediction domains. I suspect that it's easier in the waveform case because neighboring samples are highly correlated. I am surprised by how much an improvement \n\nHowever, there are a number of important design decisions that are glossed over in the paper. Here are a few that I am wondering about:\n* How well do other multi-step decoders do? For example, another natural choice is using transposed convolutions to upsample multiple timesteps. Fully connected layers? How does changing the number of LSTM layers affect performance?\n* Why does the Wavenet output a single timestep? Why not just have the multi-step decoder output all the timesteps?\n* How much of a boost does the separate training give over joint training? If you used the idea suggested in the previous point, you wouldn't need this separate training scheme.\n* How does performance vary over changing the number of steps the multi-step decoder outputs?\n\nThe paper also reads like it was hastily written, so please go back and fix the rough edges.\n\nRight now, the paper feels too coupled to the existing Deep Voice 2 system. As a research paper, it is lacking important ablations. I'll be happy to increase my score if more experiments and results are provided.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Right choice of problem. Introduces significant independence assumptions.",
"paper_summary": null,
"main_review": "main_review: By generating multiple samples at once with the LSTM, the model is introducing some independence assumptions between samples that are from neighbouring windows and are not conditionally independent given the context produced by Wavenet. This reduces significantly the generality of the proposed technique.\n\nPros:\n- Attempting to solve the important problem of speeding up autoregressive generation.\n- Clarity of the write-up is OK, although it could use some polishing in some parts.\n- The work is in the right direction, but the paucity of results and lack of thoroughness reduces somewhat the work's overall significance.\n\nCons:\n- The proposed technique is not particularly novel and it is not clear whether the technique can be used to get speed-ups beyond 2x - something that is important for real-world deployment of Wavenet.\n- The amount of innovation is on the low side, as it involves mostly just fairly minor architectural changes.\n- The absolute results are not that great (MOS ~3.8 is not close to the SOTA of 4.4 - 4.5)\n\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.5555555820465088,
0.3333333432674408
],
"confidence": [
1,
1,
1
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [],
"comment": []
} | {
"paperhash": [
"sercan|deep_voice:_real-time_neural_text-to-speech",
"sercan|deep_voice_2:_multi-speaker_neural_text-to-speech",
"cho|learning_phrase_representations_using_rnn_encoder-decoder_for_statistical_machine_translation",
"fan|tts_synthesis_with_bidirectional_lstm_based_recurrent_neural_networks",
"graves|generating_sequences_with_recurrent_neural_networks",
"graves|hallucination_with_recurrent_neural_networks",
"hochreiter|long_short-term_memory",
"mehri|samplernn:_an_unconditional_end-to-end_neural_audio_generation_model",
"ramachandran|fast_generation_for_convolutional_autoregressive_models",
"sotelo|char2wav:_end-to-end_speech_synthesis",
"sutskever|oriol_vinyals,_and_quoc_v._le._sequence_to_sequence_learning_with_neural_networks",
"taigman|voice_synthesis_for_in-the-wild_speakers_via_a_phonological_loop",
"oord|wavenet:_a_generative_model_for_raw_audio",
"wang|tacotron:_towards_end-to-end_speech_synthesis",
"yu|multi-scale_context_aggregation_by_dilated_convolutions",
"zen|unidirectional_long_short-term_memory_recurrent_neural_network_with_recurrent_output_layer_for_low-latency_speech_synthesis"
],
"title": [
"Deep voice: Real-time neural text-to-speech",
"Deep voice 2: Multi-speaker neural text-to-speech",
"Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"TTS synthesis with bidirectional LSTM based recurrent neural networks",
"Generating sequences with recurrent neural networks",
"Hallucination with recurrent neural networks",
"Long short-term memory",
"Samplernn: An unconditional end-to-end neural audio generation model",
"Fast generation for convolutional autoregressive models",
"Char2wav: End-to-end speech synthesis",
"Oriol Vinyals, and Quoc v. Le. Sequence to sequence learning with neural networks",
"Voice synthesis for in-the-wild speakers via a phonological loop",
"Wavenet: A generative model for raw audio",
"Tacotron: Towards end-to-end speech synthesis",
"Multi-scale context aggregation by dilated convolutions",
"Unidirectional long short-term memory recurrent neural network with recurrent output layer for low-latency speech synthesis"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"ö sercan",
"mike arık",
"adam chrzanowski",
"gregory coates",
"andrew diamos",
"yongguo gibiansky",
"xian kang",
"john li",
"andrew miller",
"jonathan ng",
"shubho raiman",
"mohammad sengupta",
" shoeybi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ö sercan",
"gregory arık",
"andrew diamos",
"john gibiansky",
"kainan miller",
"wei peng",
"jonathan ping",
"yanqi raiman",
" zhou"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kyunghyun cho",
"bart van merriënboer",
"caglar gulcehre",
"dzmitry bahdanau",
"fethi bougares",
"holger schwenk",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yuchen fan",
"feng-long yao qian",
"frank k xie",
" soong"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex graves"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex graves"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sepp hochreiter",
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"soroush mehri",
"kundan kumar",
"ishaan gulrajani",
"rithesh kumar",
"shubham jain",
"jose sotelo",
"aaron c courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"prajit ramachandran",
"tom le paine",
"pooya khorrami",
"mohammad babaeizadeh",
"shiyu chang",
"yang zhang",
"mark a hasegawa-johnson",
"roy h campbell",
"thomas s huang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jose sotelo",
"soroush mehri",
"kundan kumar",
"joao felipe santos",
"kyle kastner",
"aaron courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ilya sutskever"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yaniv taigman",
"lior wolf",
"adam polyak",
"eliya nachmani"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"aaron van den oord",
"sander dieleman",
"heiga zen",
"karen simonyan",
"oriol vinyals",
"alex graves",
"nal kalchbrenner",
"andrew senior",
"koray kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yuxuan wang",
"r j skerry-ryan",
"daisy stanton",
"yonghui wu",
"ron j weiss",
"navdeep jaitly",
"zongheng yang",
"ying xiao",
"zhifeng chen",
"samy bengio",
"quoc le",
"yannis agiomyrgiannakis",
"rob clark",
"rif a saurous"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"fisher yu",
"vladlen koltun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"heiga zen",
"hasim sak"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"1308.0850v5",
"",
"",
"",
"1704.06001v1",
"",
"",
"arXiv:1707.06588",
"1609.03499v2",
"arXiv:1703.10135",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.407407 | 1 | null | null | null | null | null | rJoXrxZAZ |
||
jastrzbski|three_factors_influencing_minima_in_sgd|ICLR_cc_2018_Conference | 1711.04623v3 | Three factors influencing minima in SGD | We study the statistical properties of the endpoint of stochastic gradient descent (SGD). We approximate SGD as a stochastic differential equation (SDE) and consider its Boltzmann–Gibbs equilibrium distribution under the assumption of isotropic variance in loss gradients. Through this analysis, we find that three factors – learning rate, batch size and the variance of the loss gradients – control the trade-off between the depth and width of the minima found by SGD, with wider minima favoured by a higher ratio of learning rate to batch size. In the equilibrium distribution only the ratio of learning rate to batch size appears, implying that it is invariant under a simultaneous rescaling of each by the same amount.
We experimentally show how learning rate and batch size affect SGD from two perspectives: the endpoint of SGD and the dynamics that lead up to it. For the endpoint, the experiments suggest the endpoint of SGD is similar under simultaneous rescaling of batch size and learning rate, and also that a higher ratio leads to flatter minima; both findings are consistent with our theoretical analysis. We note experimentally that the dynamics also seem to be similar under the same rescaling of learning rate and batch size, which we explore by showing that one can exchange batch size and learning rate in a cyclical learning rate schedule. Next, we illustrate how noise affects memorization, showing that high noise levels lead to better generalization. Finally, we find experimentally that the similarity under simultaneous rescaling of learning rate and batch size breaks down if the learning rate gets too large or the batch size gets too small. | {
"name": [],
"affiliation": []
} | null | [
"Computer Science",
"Mathematics"
] | arXiv.org | 2017-11-13 | 33 | null | null | null | null | null | null | null | null | false | Dear authors,
The reviewers agreed that the theoretical part lacked novelty and that the paper should focus on its experimental part, which at the moment is not strong enough to warrant publication.
Regarding the theoretical part, here are the main concerns:
- Even though it is used in previous works, the continuous-time approximation of stochastic gradient descent overlooks its practical behaviour, especially since a good rule of thumb is to use as large a stepsize as possible (without reaching divergence), as for instance mentioned in The Marginal Value of Adaptive Gradient Methods in Machine Learning by Wilson et al. (this continuous-time setup is sketched after this list).
- The isotropic approximation is very strong and I don't know settings where this would hold. Since it seems central to your statements, I wonder what can be deduced from the obtained results.
- I do not think the Gaussian assumption is unreasonable and I am fine with it. Though there are clearly cases where this will not be true, it will probably be OK most of the time.
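As a minimal sketch of the setup these concerns refer to (an illustration, not the paper's exact statement): assume the gradient noise has constant isotropic covariance σ²I, write η for the learning rate, S for the batch size and L(θ) for the training loss, and match one SGD step to a time increment Δt = η. The SDE approximation and its Gibbs equilibrium density are then, up to normalization,
\[
d\theta_t = -\nabla L(\theta_t)\,dt + \sqrt{\frac{\eta\,\sigma^2}{S}}\,dW_t,
\qquad
\rho_{\mathrm{eq}}(\theta) \;\propto\; \exp\!\left(-\frac{2S}{\eta\,\sigma^2}\,L(\theta)\right),
\]
so only the ratio η/S (together with σ²) enters the equilibrium density. This is also where the isotropy assumption does its work: with a general anisotropic covariance the stationary density generally no longer has this simple closed form.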
I encourage the authors to focus on the experimental part in a resubmission. | {
"review_id": [
"H19fnlceG",
"ByBJy2Oef",
"BkC-HgcxG"
],
"review": [
{
"title": "title: Theory not particularly novel, experiments okay.",
"paper_summary": null,
"main_review": "main_review: The authors study SGD as a stochastic differential equation and use the Fokker planck equation from statistical physics to derive the stationary distribution under standard assumptions. Under a (somewhat strong) local convexity assumption, they derive the probability of arriving at a local minimum, in terms of the batchsize, learning rate and determinant of the hessian.\n\nThe theory in section 3 is described clearly, although it is largely known. The use of the Fokker Planck equation for stationary distributions of stochastic SDEs has seen wide use in the machine learning literature over the last few years, and this paper does not add any novel insights to that. For example, the proof of Theorem 1 in Appendix C is boilerplate. Also, though it may be relatively new to the deep learning/ML community, I don't see the need to derive the F-P equation in Appendix A.\n\nTheorem 2 uses a fairly strong locally convex assumption, and uses a straightforward taylor expansion at a local minimum. It should be noted that the proof in Appendix D assumes that the covariance of the noise is constant in some interval around the minimum; I think this is again a strong assumption and should be included in the statement of Theorem 2.\n\nThere are some detailed experiments showing the effect of the learning rate and batchsize on the noise and therefore performance of SGD, but the only real insight that the authors provide is that the ratio of learning rate to batchsize controls the noise, as opposed to the that of l.r. to sqrt(batchsize). I wish this were analyzed in more detail.\n\nOverall I think the paper is borderline; the lack of real novelty makes it marginally below threshold in my view.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A stability analysis of local optima in constant-rate SGD",
"paper_summary": null,
"main_review": "main_review: The paper investigates how the learning rate and mini-batch size in SGD impacts the optima that the SGD algorithm finds.\nEmpirically, the authors argue that it was observed that larger learning rates converge to minima which are more wide,\nand that smaller learning rates more often lead to convergence to minima which are narrower, i.e. where the Hessian has large Eigenvalues. In this paper, the authors derive an analytical theory that aims at explaining this phenomenon.\n\nPoint of departure is an analytical theory proposed by Mandt et al., where SGD is analyzed in a continuous-time stochastic\nformalism. In more detail, a stochastic differential equation is derived which mimicks the behavior of SGD. The advantage of\nthis theory is that under specific assumptions, analytic stationary distributions can be derived. While Mandt et al. focused\non the vicinity of a local optima, the authors of the present paper assumed white diagonal gradient noise, which allows to\nderive an analytic, *global* stationary distribution (this is similar as in Langevin dynamics).\n\nThen, the authors focus again on individual local optima and \"integrate out\" the stationary distribution around a local optimum, using again a Gaussian assumption. As a result, the authors obtain un-normalized probabilities of getting trapped in a given local optimum. This un-normalized probability depends on the strength of the value of the loss function in the vicinity of the optimum, the gradient noise, and the width of the optima. In the end, these un-normalized probabilities are taken as\nprobabilities that the SGD algorithm will be trapped around the given optimum in finite time.\n\n\nOverall assessment:\nI find the analytical results of the paper very original and interesting. The experimental part has some weaknesses. The paper could be drastically improved when focusing on the experimental part.\n\nDetailed comments:\n\nRegarding the analytical part, I think this is all very nice and original. However, I have some comments/requests:\n\n1. Since the authors focus around Gaussian regions around the local minima, perhaps the diagonal white noise assumption could be weakened. This is again the multivariate Ornstein-Uhlenbeck setup examined in Mandt et al., and probably possesses an analytical solution for the un-normalized probabilities (even if the noise is multivariate Gaussian). Would the authors to consider generalizing the proof for the camera-ready version perhaps?\n\n2. It would be nice to sketch the proof of theorem 2 in the main paper, rather than to just refer to the appendix. In my opinion, the theorem results from a beautiful and instructive calculation that should provide the reader with some intuition.\n\n3. Would the authors comment on the underlying theoretical assumptions a bit more? In particular, the stationary distribution predicted by the Ornstein-Uhlenbeck formalism is never reached in practice. When using SGD in practice, one is in the initial mode-seeking phase. So, why is it a reasonable assumption to still use results obtained from the stationary (equilibrated) distribution which is never reached?\n\n\nRegarding the experiments: here I see a few problems. First, the writing style drops in quality. Second, figures 2 and 3 are cryptic. Why do the authors focus on two manually selected optima? In which sense is this statistically significant? How often were the experiments repeated? The figures are furthermore hard to read. 
I would recommend overhauling the entire experiments section.\n\nDetails:\n\n- Typo in Figure 2: ”with different with different”.\n- “the endpoint of SGD with a learning rate schedule η → η/a, for some a > 0, and a constant batch size S, should be the same\n as the endpoint of SGD with a constant learning rate and a batch size schedule S → aS.” This is clearly wrong as there are many local minima, and running the algorithm twice results in different local optima. Maybe add something that this is only true on average, like “the characteristics of these minima ... should be the same”.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting paper, however not convincing theoretical results",
"paper_summary": null,
"main_review": "main_review: In this paper, the authors present an analysis of SGD within an SDE framework. The ideas and the presented results are interesting and are clearly of interest to the deep learning community. The paper is well-written overall.\n\nHowever, the paper has important problems. \n\n1) The analysis is widely based on the recent paper by Mandt et al. While being an interesting work on its own, the assumptions made in that paper are very strict and not very realistic. For instance, the assumption that the stochastic gradient noise being Gaussian is very restrictive and trying to justify it just by the usual CLT is not convincing especially when the parameter space is extremely large, the setting that is considered in the paper.\n\n2) There is a mistake in the proof Theorem 1. Even with the assumption that the gradient of sigma is bounded, eq 20 cannot be justified and the equality can only be \"approximately equal to\". The result will only hold if sigma does not depend on theta. However, letting sigma depend on theta is the only difference from Mandt et al. On the other hand, with constant sigma the result is very trivial and can be found in any text book on SDEs (showing the Gibbs distribution). Therefore, presenting it as a new result is misleading. \n\n3) Even if the sigma is taken constant and theorem 1 is corrected, I don't think theorem 2 is conclusive. Theorem 2 basically assumes that the distribution is locally a proper Gaussian (it is stated as locally convex, however it is taken as quadratic) and the result just boils down to computing some probability under a Gaussian distribution, which is still quite trivial. Apart from this assumption not being very realistic, the result does not justify the claims on \"the probability of ending in a certain minimum\" -- which is on the other hand a vague statement. First of all \"ending in\" a certain area depends on many different factors, such as the structure of the distribution, the initial point, the distance between the modes etc. Also it is not very surprising that the inverse image of a wider Gaussian density is larger than of a pointy one. This again does not justify the claims. For instance consider a GMM with two components, where the means of the individual components are close to each other, but one component having a very large variance and a smaller weight, and the other one having a lower variance and higher weight. With authors' claim, the algorithm should spend more time on the wider one, however it is evident that this will not be the case. \n\n4) There is a conceptual mistake that the authors assume that SGD will attain the exact stationary distribution even when the SDE is simulated by the fixed step-size Euler integrator. As soon as one uses eta>0 the algorithm will never attain the stationary distribution of the continuous-time process, but will attain a stationary distribution that is close to the ideal one (of course with several smoothness, growth assumptions). The error between the ideal distribution and the empirical distribution will be usually O(eta) depending on the assumption and therefore changing eta will result in a different distribution than the ideal one. With this in mind the stationary distributions for (eta/S) and (2eta/2S) will be clearly different. \n\n\nThe experiments are very interesting and I do not underestimate their value. 
However, the current analysis unfortunately does not properly explain the rather strong claims of the authors, which is supposed to be the main contribution of this paper. \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.5555555820465088,
0.2222222238779068
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Explanation of changes from original submission",
"Response to AnonReviewer2",
"Response to AnonReviewer3 (Part III of III)",
"Thanks for Reproducing",
"Response to AnonReviewer3 (Part II of III)",
"Response to AnonReviewer1",
"Response to AnonReviewer3 (Part I of III)"
],
"comment": [
"We added many clarifying changes, including discussions, better plots, and some improvements to experiments (e.g. larger grid in “Breaking point” section). This increased submission size by 2 pages, but we believe it was necessary to address all reviewer’s points. We would be grateful for feedback if some clarifications are too explicit, or if we should reduce the size of submission to previous size. Easiest way of reducing size would be moving some of the enlarged and expanded figures to Appendix.\n\nChanges:\nWe revised abstract to reflect better our novelty and main contribution\nWe added paragraph in Related work on Fokker-Planck equation\nWe improved figures as suggested by reviewers, e.g. Figure 4 and Figure 6 are enlarged.\nWe renamed section to 3 from “Theoretical results” to “Insights from Fokker-Planck” and 3.2 from “Main results” to “Three factors influencing equilibrium distribution”, to reflect the novel main finding of this section.\nWe reworded significantly section 3, mostly in response to reviewer 2:\nWe added many clarifications, e.g. in opening of section 3 we say “We make the assumption of isotropic covariance (...)”, or we added whole paragraph discussing Theorem 1 in 3.2. At the end of section 3.1 we add a clarifying remark on how we differ to Mandt et al. at the end of section 3.1. and a reference to Li et al. justifying the approximation of SGD by an SDE.\nAdded discussion sections after each theorem, which talk about the assumptions and interpretations of the results. \nWe made changes to theory\nWe fixed assumption of Theorem 1 as suggested by reviewer 2 to have a constant sigma.\nWe clarified that we assume equilibrium distribution in solution of Fokker-Planck rather than just the stationary distribution.\nIn 4.1 we rerun MLP experiment on a 4-layer network with Batch Normalization that is closer to assumptions made in Theorem 1, and we moved the 20 layer network without Batch Normalization experiments to appendix. Correlations remain qualitatively similar between the two experiments.\nIn 4.2 (“eta/S determines learning dynamics of SGD”) we added clarifying paragraph discussing that theory predicts “invariance” of endpoint of SGD, while dynamics “invariance” is an additional experimental result\nIn 4.3 (“impact of SGD on memorization”) we added minor clarifications\nIn 4.4 (“Breaking point of the theory in practice”) we significantly improved experiment by running larger grid, and improving plots\nWe added Section 5 “Discussion” which in 4 paragraphs summarizes results.\nDue to large space taken by all of the above changes we moved 4.5 (“Cyclical batch and learning rate schedule”) to Appendix, and referred to it in 4.2, which also discussed cyclical batch size and learning rate schedules.\nWe also included some changes in 4.5 (now in Appendix). We changed tracking ratio of hessian and loss, to tracking hessian. We rerun larger grid search and included table comparing performance of discussed schedules (CLR, CBS and constant).",
"We thank the reviewer for their interesting comments and observations and for their enthusiasm for our results.\n\nAnalytical Part \n\nResponse to point 1, whether we can generalize to non-diagonal white noise. \n\nIn short, we believe generalization beyond the isotropic case is nontrivial, and we leave for future work. To clarify, in Mandt et al. they assume globally the Ornstein-Uhlenbeck (quadratic loss) setup (i.e they only consider one minimum), whereas for Theorem 2 we assume a series of minima, and then approximate the integral using the second order Taylor series locally near each minima, but not globally. (For Theorem 1 there is no restriction on the loss at all). The proof of Theorem 1 only strictly holds if the gradient noise is isotropic - in the non-isotropic case, the Fokker Planck equation will be a complicated partial differential equation which doesn’t have a closed form analytic stationary solution in general. Instead one would need numerical solutions of the PDE or further simplifying assumptions for an analytic solution. The solution may also depend on the path in parameter space through which the process evolves, unless further assumptions are made.\n\nResponse to point 2, whether the proof of theorem 2 can appear in the main paper.\n\nWe have decided to keep the proof of theorem 2 in the appendix, in response to AnonReviewer3 who suggested this proof is fairly standard, and also to keep the paper easy to read on a first pass without too much mathematical detail. \n\nResponse to point 3, that the equilibrium distribution of the SDE is not reached in practice. \n\nWe agree that the stationary distribution is not precisely arrived at in practice, but it can be approached to a good approximation if enough epochs have passed. On the other hand, we are not necessarily interested in exactly reaching the equilibrium distribution, we are more interested in sampling from the equilibrium distribution, which can happen in a fewer number of epochs than it takes for the probability distribution to approach it.\n\nExperimental Part\n\nIn figures 2 and 3 we show a qualitative result, common in the literature, e.g. Fig. 3 of https://arxiv.org/pdf/1609.04836.pdf, which expresses intuitively the consistency of the theory with experiment. They are just a one-dimensional slice through parameter space and so should be treated with a pinch of salt. In the original submission there were five plots each of which shows the consistency of our experiments with our theory pictorially. To show we are not manually selecting minima that fit our claims, we have run some more interpolation experiments to validate this. In the new version we have added more seeds to show the robustness with respect to the model random initialization of the result, see Appendix F in the revised version.\n\nWe have improved the quality of the figures to make them easier to read. \n\nOn detail 1, the typo, we have fixed this in the new version.\n\nOn detail 2, that the endpoints will not be the same, we agree with the reviewer here and thank them for the suggestion to clarify the phrasing in this way and have edited accordingly to read instead that the characteristics of these minima should be the same, not the actual minima. This is similar to the fix done for AnonReviewer3 on the vagueness of the phrase “the probability of ending in a certain minimum”.",
"4. In response to point 4, on the error between SGD and the SDE stationary solution. It is standard to approximate SGD with an SDE in the machine learning literature and use the stationary distribution as an approximation of the learnt distribution. We are aware that SGD will not exactly attain the SDE stationary distribution. Instead, we recognise the breakdown of the theory, and have a whole section devoted to it: In the new version this is Section 4.5 “Breaking of the theory in practice”, where we see this error for larger eta, as the reviewer states. We highlight other limitations in Appendix F. We also specifically mention the approximation holds only to first order in eta in the final paragraph of Appendix B. In the new version we give a more detailed experiment of the breakdown of the theory in figure 7. We hope this addresses the concern that there is a conceptual mistake - we are aware that SGD will not attain the exact stationary distribution for eta>0 and this is reflected in our paper.",
"First of all we would like to thank authors for reproducing our results, we are very happy to see interest in our work! We will do our best to further investigate the report soon. There are some issues that need to be clarified (e.g. x axis of memorization experiment is different than in our paper), we contacted authors of reproduction via e-mail to clarify.\n\nIn the meantime, let us clarify cyclical batch size. In our submission we plot batch size and learning rate over time for both schedules in the Appendix. \nWe also do mention in text it is just replacing any relative change in learning rate with batch size change, for instance if learning rate is increased by factor of 5 (e.g. 0.1 to 0.5), we replace it with reduction of batch size by factor of 5 (e.g. 100 to 25). Adding to your report results of CBS, especially discrete one, would be very interesting. We will clarify it further in text.",
"2. Response to paragraph 2, first point, that there is a mistake in Theorem 1:\n\nWe agree there is a mathematical mistake in allowing sigma to vary with theta. We address this by changing our assumption so that sigma is constant. This modification does not affect our end results as the equilibrium distribution will then be the standard Gibbs distribution. Though the Gibbs distribution appears in the SDE literature, this exact expression has not appeared before in the machine learning literature to the best of our knowledge, explicitly showing the dependence of the loss, learning rate, batch size and sigma.\n\nResponse to paragraph 2, second point, that letting sigma depend on theta is the only difference to Mandt et al.: \n\nWe agree with the reviewer that taking a constant sigma is the same as Assumption 2 of Mandt et al. However, this is not the only difference between our paper and Mandt et al.\nAs stated on the first point of our introduction to this rebuttal, we do not assume Mandt et al. Assumption 4, which is the assumption that the iterates throughout are constrained to be in a region in which the loss surface is quadratic. Instead we allow iterates to be drawn from any region of parameter space for a general loss function. For Theorem 2 we decompose the whole loss surface into different basins of attraction of different sizes. For each of these different basins of attraction we use a second order Taylor expansion to evaluate the integral for the result of Theorem 2, allowing us to define the sizes of these basins and to compare between these basins. There is no comparison of different basins in Mandt et al. and this comparison is critical for the important observations about which minima SGD ends up in. To summarize, letting sigma be constant is mathematically necessary, but this was not the only difference between us and Mandt et al., instead our key difference is that we don’t restrict to a single basin with a quadratic loss, instead we consider many basins. \n\n3. Response to paragraph 3, even if Theorem 1 is corrected, the reviewer thinks Theorem 2 is not conclusive. Let us address each point in turn:\n\nAbout the concern that Theorem 2 is quite trivial. \nWe disagree that the result is trivial - it is indeed a simple calculation, but it is not obvious a priori that it will be the determinant of the Hessian that will appear in the prefactor, nor that the ratio of learning rate to batch size will control the weight given to width (from the Hessian prefactor) over depth. \n\nAbout \"the probability of ending in a certain minimum\" is vague.\nWe agree that the concept of the size of minima SGD finishes in is indeed vague unless it is sufficiently well defined. To discuss this we propose an approximation of each minima region by the quadratic bowl at that minima, and the size of the minima by the effective posterior mass of the corresponding Gaussian distribution. This Laplace approximate mass has been used before in the context of computing the evidence in Bayesian methods: it is indeed approximate, but it is sufficiently well defined, and captures enough for us to be able to discuss the critical issue of sizes of minima regions. For example to calculate Bayes’ factors e.g. in Kass and Raferty https://www.stat.washington.edu/raftery/Research/PDF/kass1995.pdf or for Bayesian model comparison in Mackay https://pdfs.semanticscholar.org/e5c6/a695a4455a526ec8955dcc0fa2d6810089e9.pdf. 
We have revised the phrase to read instead “the probability of ending in a minimum characterized by a certain loss value and Hessian determinant”.\n\nWith regards to the dependence of the endpoint on the initial point we point out that the stationary distribution theoretically doesn’t depend on the initialization, and this will be approximately true in practice if the algorithm is run for long enough. \n\nWe would like to clarify the dependence on the distance between the modes. We point out in appendix D that we assume the modes are separated by a large enough distance that the tails of the Gaussian approximation do not contribute significantly. This means that the example the reviewer gives of a GMM with close means is not valid for our situation since we assume the modes are separated enough in the derivation in appendix D. We would be happy to promote details of this assumption to the main text for clarity. \n\nTo summarize, our claim is not that the algorithm will spend more time on the wider minima, instead our claim is that the ratio of learning rate to batch size controls the tradeoff between width and depth, so whether it spends more time at a wider minima depends on the value of this ratio. Finally, we would like to emphasize that our empirical results confirm that a higher ratio of learning rate to batch size leads to a wider region being sampled. \n\n(continued...)",
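A minimal sketch of the Laplace approximation invoked above, assuming well-separated minima, with L_a and H_a the loss and Hessian at minimum a, d the number of parameters, and D = ησ²/(2S) the diffusion constant of the isotropic model (notation introduced here only for illustration):
\[
\int_{\text{basin }a} e^{-L(\theta)/D}\,d\theta \;\approx\; e^{-L_a/D}\,(2\pi D)^{d/2}\,\det(H_a)^{-1/2},
\]
so the relative mass assigned to basin a trades off depth, through e^{-L_a/D}, against width, through det(H_a), and the ratio η/S enters only via D.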
"We thank the reviewer for the insightful comments and interesting questions they pose. We think the paper will be stronger through the clarifications that ensue. \n\n1)\n\nResponse to the comments about the Fokker-Planck equation and the novelty of our results. \nWe would like to clarify where our novelty arises. In our paper the main new result is that the learning rate over batch size ratio controls the tradeoff between the width (i.e. sharpness) and depth of the region in which SGD ends. In Theorem 1 and the surrounding text, we clarify the relationship between the batch size and learning rate and the effect that this has on the resulting equilibrium distribution. Though theorem 1 is a standard Gibbs-Boltzmann result, it is valuable to express it as a function of batch size and learning rate: this has not previously been emphasised in the literature, and it is this relationship that provides the insight into how SGD performs. To the best of our knowledge we have not seen the exact statement of Theorem 1 in the literature before. We do agree that a Gibbs distribution and its derivation has appeared before in the machine learning literature. For example, in the online setting we are aware of the results of equation (24) of Heskes and Kappen (http://www.sciencedirect.com/science/article/pii/S0924650908700382), but the relation here does not give the temperature of the Gibbs distribution in terms of the learning rate, batch size and gradient covariance. So we believe the result is novel in the machine learning context for minibatch stochastic gradient descent. We have adjusted the presentation of Theorem 1 to reflect this. We have also renamed the section from ‘Theoretical Results’ to ‘Insights from Fokker-Planck’ to reflect more clearly that our novelty is the insights gained rather than the derivation of new mathematical results.\n\n2)\n\nResponse to the comment on Theorem 2 that we assume the gradient covariance is constant in some region. We agree that the assumption should be included in the statement of Theorem 2, and further would like to revise Theorem 2 to be such that the covariance of the noise is constant and proportional to the identity everywhere. This stronger assumption corrects a mathematical mistake pointed out by AnonReviewer3 - nonetheless, this stronger assumption is sufficient for our requirements, to obtain an analytic solution for the stationary distribution. In the revised version this assumption appears in the statement of theorem 2.\n\n3)\n\nResponse to the comment “the only real insight that the authors provide is that the ratio of learning rate to batchsize controls the noise, as opposed to the that of l.r. to sqrt(batchsize)”. \n\nWe disagree that the ratio of learning rate to batch size controlling noise being the only real insight. We agree this is a core contribution of our work. However, more than interpreting this ratio as just the noise, we also investigate how this ratio affects the geometry sampled by SGD, the learning dynamics, the memorization and generalization. \n\nWe verify in the paper that when keeping the ratio of learning rate to batch size the same, we terminate in a region with similar properties, hessian, loss and performance. We did not focus on other scaling strategies such as square root as they did not appear in our theoretical analysis and investigations of them have appeared previously, e.g. in Hoffer et al., as referenced in Section 4.2. We would be happy to include further experiments on square root scaling if the reviewer suggests. 
",
"We thank the reviewer for their interest in our paper, and the detailed review. We will address each points of the review in turn, and supplement responses with experiments where possible. Before that, we wanted to stress here two crucial points about our submission: \n\nWe would like to restate our main claim. This is that learning rate over batch size, along with noise in gradients, controls the stationary distribution from which SGD “samples” a solution. This claim (especially the importance of the ratio of learning rate to batch size) has not been made before. We discuss in the theory section, and in the experiment section how these findings are reflected in practice. Please refer to the rebuttal of AnonReviewer1 for more details about this point.\n\nSecond, we believe our paper is different from Mandt et al.. Our goal is comparing the different relative probabilities of ending in different “minima regions”, characterized by a loss value and hessian determinant. In particular we differ in Assumption 4 of Mandt et al. where in their whole analysis they restrict attention only to a region within a quadratic bowl, whereas we allow for a general loss function with multiple minima regions. In contrast, the goal of Mandt et al. is to show that under certain assumptions, SGD can be seen as sampling from a quadratic posterior (see for instance Fig.4 in Mandt et al), whereas we view SGD as sampling a solution from a stationary distribution that is not just a quadratic. For theorem 2, which talks about the probability of ending in a minima with certain characteristics, we use a Laplace approximation to evaluate an integral, which uses the second order Taylor expansion of the loss locally around a given minima - but this is not the same as the assumption in Mandt et al. which is that the loss is globally approximated by a quadratic. We have made changes to the paper at the end of Section 3.1 and in Appendix D to emphasize this.\n\nWe have clarified the aforementioned points in the revised paper.\n\nDetailed responses:\n1. Response to point 1, that assuming the batch gradient converges via the CLT to a Gaussian distribution. \n\nIt is a common assumption that the stochastic gradient noise can be modelled as Gaussian, for instance in the paper by Li et al. ‘15, https://arxiv.org/pdf/1511.06251.pdf the stochastic differential equation that we use has been proven to approximate SGD in the weak sense. More precisely, the use of the central limit theorem is appropriate in this case: the minibatch samples are randomized draws from a fixed distribution with finite variance: the distribution over the randomly ordered full dataset. Typical minibatch sizes are large by any CLT standard. The data exchangeability ensure that there is a shared variance, C, for all data points, and hence by the CLT the average over the minibatch will have variance C/S for a batch size S. We have produced a plot for the MLP model on the FMNIST dataset used in section 4.1 of the submitted paper which shows samples of gradients in randomly chosen directions for different batch sizes, which appear to follow a Gaussian distribution already for a typical batch size of 64. Here is a sample from 10 random directions at initialization https://anonfile.com/52v9m2d0b6/grid.pdf, and at best validation point https://anonfile.com/6cvcmbd4b8/grid_after_training.pdf.\n\n"
]
} | {
"paperhash": [
"saxe|on_the_information_bottleneck_theory_of_deep_learning",
"xiao|fashion-mnist:_a_novel_image_dataset_for_benchmarking_machine_learning_algorithms",
"arpit|a_closer_look_at_memorization_in_deep_networks",
"sagun|empirical_analysis_of_the_hessian_of_over-parametrized_neural_networks",
"dinh|sharp_minima_can_generalize_for_deep_nets",
"zhang|understanding_deep_learning_requires_rethinking_generalization",
"keskar|on_large-batch_training_for_deep_learning:_generalization_gap_and_sharp_minima",
"he|deep_residual_learning_for_image_recognition",
"goodfellow|qualitatively_characterizing_neural_network_optimization_problems",
"simonyan|very_deep_convolutional_networks_for_large-scale_image_recognition",
"mackay|a_practical_bayesian_framework_for_backpropagation_networks",
"zhang|energy–entropy_competition_and_the_effectiveness_of_stochastic_gradient_descent_in_machine_learning",
"poggio|theory_of_deep_learning_iii:_explaining_the_non-overfitting_puzzle",
"chaudhari|stochastic_gradient_descent_performs_variational_inference,_converges_to_limit_cycles_for_deep_networks",
"smith|a_bayesian_perspective_on_generalization_and_stochastic_gradient_descent",
"wu|towards_understanding_generalization_of_deep_learning:_perspective_of_loss_landscapes",
"goyal|accurate,_large_minibatch_sgd:_training_imagenet_in_1_hour",
"li|batch_size_matters:_a_diffusion_approximation_framework_on_nonconvex_stochastic_gradient_descent",
"hoffer|train_longer,_generalize_better:_closing_the_generalization_gap_in_large_batch_training_of_neural_networks",
"chaudhari|deep_relaxation:_partial_differential_equations_for_optimizing_deep_neural_networks",
"mandt|stochastic_gradient_descent_as_approximate_bayesian_inference",
"shwartz-ziv|opening_the_black_box_of_deep_neural_networks_via_information",
"li|stochastic_modified_equations_and_adaptive_stochastic_gradient_algorithms",
"shang|covariance-controlled_adaptive_langevin_thermostat_for_large-scale_bayesian_sampling",
"smith|cyclical_learning_rates_for_training_neural_networks",
"vollmer|(non-)_asymptotic_properties_of_stochastic_gradient_langevin_dynamics",
"krizhevsky|one_weird_trick_for_parallelizing_convolutional_neural_networks",
"chen|stochastic_gradient_hamiltonian_monte_carlo",
"heskes|on-line_learning_processes_in_artificial_neural_networks"
],
"title": [
"On the Information Bottleneck Theory of Deep Learning",
"Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms",
"A Closer Look at Memorization in Deep Networks",
"EMPIRICAL ANALYSIS OF THE HESSIAN OF OVER-PARAMETRIZED NEURAL NETWORKS",
"Sharp Minima Can Generalize For Deep Nets",
"UNDERSTANDING DEEP LEARNING REQUIRES RE-THINKING GENERALIZATION",
"ON LARGE-BATCH TRAINING FOR DEEP LEARNING: GENERALIZATION GAP AND SHARP MINIMA",
"Deep Residual Learning for Image Recognition",
"QUALITATIVELY CHARACTERIZING NEURAL NETWORK OPTIMIZATION PROBLEMS",
"VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION",
"A Practical Bayesian Framework for Backpropagation Networks",
"Energy-entropy competition and the effectiveness of stochastic gradient descent in machine learning",
"Theory of Deep Learning III: explaining the non-overfitting puzzle by",
"STOCHASTIC GRADIENT DESCENT PERFORMS VARIATIONAL INFERENCE, CONVERGES TO LIMIT CYCLES FOR DEEP NETWORKS",
"A BAYESIAN PERSPECTIVE ON GENERALIZATION AND STOCHASTIC GRADIENT DESCENT",
"Towards Understanding Generalization of Deep Learning: Perspective of Loss Landscapes",
"Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour",
"On the diffusion approximation of nonconvex stochastic gradient descent",
"Train longer, generalize better: closing the generalization gap in large batch training of neural networks",
"DEEP RELAXATION: PARTIAL DIFFERENTIAL EQUATIONS FOR OPTIMIZING DEEP NEURAL NETWORKS",
"Stochastic Gradient Descent as Approximate Bayesian Inference",
"Toward Designing Intelligent PDEs for Computer Vision: An Optimal Control Approach",
"Stochastic modified equations and adaptive stochastic gradient algorithms",
"Covariance-Controlled Adaptive Langevin Thermostat for Large-Scale Bayesian Sampling",
"Cyclical Learning Rates for Training Neural Networks",
"Exploration of the (Non-)asymptotic Bias and Variance of Stochastic Gradient Langevin Dynamics",
"One weird trick for parallelizing convolutional neural networks",
"Stochastic Gradient Hamiltonian Monte Carlo",
""
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"andrew m saxe",
"yamini bansal",
"joel dapello",
"madhu advani",
"artemy kolchinsky",
"brendan d tracey",
"david d cox"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"han xiao",
"kashif rasul",
"roland vollgraf"
],
"affiliation": [
{
"laboratory": "Zalando Research Mühlenstraße 25",
"institution": "",
"location": "{'postCode': '10243', 'settlement': 'Berlin'}"
},
{
"laboratory": "Zalando Research Mühlenstraße 25",
"institution": "",
"location": "{'postCode': '10243', 'settlement': 'Berlin'}"
},
{
"laboratory": "Zalando Research Mühlenstraße 25",
"institution": "",
"location": "{'postCode': '10243', 'settlement': 'Berlin'}"
}
]
},
{
"name": [
"devansh arpit",
"stanisław jastrzębski",
"nicolas ballas",
"david krueger",
"emmanuel bengio",
"maxinder s kanwal",
"tegan maharaj",
"asja fischer",
"aaron courville",
"yoshua bengio",
"simon lacoste-julien"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "McGill University",
"location": "{'country': 'Canada'}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Berkeley', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Montréal Institute for Learning Algo-rithms",
"location": "{'country': 'Canada'}"
},
{
"laboratory": "",
"institution": "University of Bonn",
"location": "{'settlement': 'Bonn', 'country': 'Germany'}"
},
{
"laboratory": "",
"institution": "Montréal Institute for Learning Algo-rithms",
"location": "{'country': 'Canada'}"
},
{
"laboratory": "",
"institution": "Montréal Institute for Learning Algo-rithms",
"location": "{'country': 'Canada'}"
},
{
"laboratory": "",
"institution": "Montréal Institute for Learning Algo-rithms",
"location": "{'country': 'Canada'}"
}
]
},
{
"name": [
"levent sagun",
"utku evci",
"v ugur güney",
"yann dauphin",
"léon bottou"
],
"affiliation": [
{
"laboratory": "",
"institution": "Université Paris Saclay",
"location": "{'country': 'CEA'}"
},
{
"laboratory": "",
"institution": "NYU",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"laurent dinh",
"razvan pascanu",
"samy bengio",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chiyuan zhang",
"samy bengio",
"moritz hardt",
"benjamin recht",
"oriol vinyals",
"google deepmind"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nitish shirish keskar",
"dheevatsa mudigere",
"jorge nocedal",
"mikhail smelyanskiy",
"ping tak",
"peter tang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
}
]
},
{
"name": [
"ian j goodfellow",
"oriol vinyals",
"andrew m saxe"
],
"affiliation": [
{
"laboratory": "",
"institution": "Google Inc",
"location": "{'settlement': 'Mountain View', 'region': 'CA'}"
},
{
"laboratory": "",
"institution": "Google Inc",
"location": "{'settlement': 'Mountain View', 'region': 'CA'}"
},
{
"laboratory": "",
"institution": "Google Inc",
"location": "{'settlement': 'Mountain View', 'region': 'CA'}"
}
]
},
{
"name": [
"karen simonyan",
"andrew zisserman"
],
"affiliation": [
{
"laboratory": "Visual Geometry Group",
"institution": "University of Oxford",
"location": "{}"
},
{
"laboratory": "Visual Geometry Group",
"institution": "University of Oxford",
"location": "{}"
}
]
},
{
"name": [
"david j c mackay'"
],
"affiliation": [
{
"laboratory": "",
"institution": "California lnstitute of Technology",
"location": "{'postCode': '139-74, 91125', 'settlement': 'Pasadena', 'region': 'C A', 'country': 'USA'}"
}
]
},
{
"name": [
"yao zhang",
"andrew m saxe",
"madhu s advani",
"alpha a lee"
],
"affiliation": [
{
"laboratory": "Cavendish Laboratory",
"institution": "University of Cambridge",
"location": "{'postCode': 'CB3 0HE', 'settlement': 'Cambridge', 'country': 'United Kingdom'}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{'postCode': '02138', 'settlement': 'Cambridge', 'region': 'MA', 'country': 'United States of America'}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{'postCode': '02138', 'settlement': 'Cambridge', 'region': 'MA', 'country': 'United States of America'}"
},
{
"laboratory": "Cavendish Laboratory",
"institution": "University of Cambridge",
"location": "{'postCode': 'CB3 0HE', 'settlement': 'Cambridge', 'country': 'United Kingdom'}"
}
]
},
{
"name": [
"t poggio",
"k kawaguchi",
"q liao",
"b miranda",
"l rosasco",
"x boix",
"j hidary",
"h mhaskar"
],
"affiliation": [
{
"laboratory": "MIT † † Alphabet (Google)",
"institution": "Claremont Graduate University",
"location": "{}"
},
{
"laboratory": "MIT † † Alphabet (Google)",
"institution": "Claremont Graduate University",
"location": "{}"
},
{
"laboratory": "MIT † † Alphabet (Google)",
"institution": "Claremont Graduate University",
"location": "{}"
},
{
"laboratory": "MIT † † Alphabet (Google)",
"institution": "Claremont Graduate University",
"location": "{}"
},
{
"laboratory": "MIT † † Alphabet (Google)",
"institution": "Claremont Graduate University",
"location": "{}"
},
{
"laboratory": "MIT † † Alphabet (Google)",
"institution": "Claremont Graduate University",
"location": "{}"
},
{
"laboratory": "MIT † † Alphabet (Google)",
"institution": "Claremont Graduate University",
"location": "{}"
},
{
"laboratory": "MIT † † Alphabet (Google)",
"institution": "Claremont Graduate University",
"location": "{}"
}
]
},
{
"name": [
"pratik chaudhari",
"stefano soatto"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Los Angeles'}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Los Angeles'}"
}
]
},
{
"name": [
"samuel l smith",
"v quoc",
" le google brain"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"lei wu",
"zhanxing zhu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"priya goyal",
"piotr dollár",
"ross girshick",
"pieter noordhuis",
"lukasz wesolowski",
"aapo kyrola",
"andrew tulloch yangqing",
"jia kaiming",
"he facebook"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"wenqing hu",
"chris junchi li",
"lei li",
"jian-guo liu"
],
"affiliation": [
{
"laboratory": "",
"institution": "Missouri University of Science and Technology (formerly University of Missouri",
"location": "{'addrLine': 'Rolla)'}"
},
{
"laboratory": "",
"institution": "Princeton University",
"location": "{'postCode': '08544', 'settlement': 'Princeton', 'region': 'NJ', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Duke University",
"location": "{'postCode': '27708', 'settlement': 'Durham', 'region': 'NC', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Duke University",
"location": "{'postCode': '27708', 'settlement': 'Durham', 'region': 'NC', 'country': 'USA'}"
}
]
},
{
"name": [
"elad hoffer",
"itay hubara",
"daniel soudry"
],
"affiliation": [
{
"laboratory": "",
"institution": "Technion -Israel Institute of Technology",
"location": "{'settlement': 'Haifa', 'country': 'Israel'}"
},
{
"laboratory": "",
"institution": "Technion -Israel Institute of Technology",
"location": "{'settlement': 'Haifa', 'country': 'Israel'}"
},
{
"laboratory": "",
"institution": "Technion -Israel Institute of Technology",
"location": "{'settlement': 'Haifa', 'country': 'Israel'}"
}
]
},
{
"name": [
"pratik chaudhari",
"adam oberman",
"stanley osher",
"stefano soatto",
"guillaume carlier"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Los Angeles'}"
},
{
"laboratory": "",
"institution": "McGill University",
"location": "{'settlement': 'Montreal'}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'addrLine': 'Los Angeles. 4 CEREMADE'}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Los Angeles'}"
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"stephan mandt",
"matthew d hoffman",
"david m blei"
],
"affiliation": [
{
"laboratory": "",
"institution": "Columbia University",
"location": "{'postCode': '10025', 'settlement': 'New York', 'region': 'NY', 'country': 'USA'}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Columbia University",
"location": "{'postCode': '10025', 'settlement': 'New York', 'region': 'NY', 'country': 'USA'}"
}
]
},
{
"name": [
"risheng liu",
"zhouchen lin",
"wei zhang",
"kewei tang",
"zhixun su"
],
"affiliation": [
{
"laboratory": "",
"institution": "Dalian University of Technology",
"location": "{'settlement': 'Dalian', 'country': 'China'}"
},
{
"laboratory": "",
"institution": "Microsoft Research Asia",
"location": "{'settlement': 'Beijing', 'country': 'China'}"
},
{
"laboratory": "",
"institution": "The Chinese University of Hong Kong",
"location": "{'country': 'China'}"
},
{
"laboratory": "",
"institution": "Dalian University of Technology",
"location": "{'settlement': 'Dalian', 'country': 'China'}"
},
{
"laboratory": "",
"institution": "Dalian University of Technology",
"location": "{'settlement': 'Dalian', 'country': 'China'}"
}
]
},
{
"name": [
"qianxiao li",
"cheng tai",
"weinan e ‡2"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Beijing Institute of Big Data Research",
"location": "{'settlement': 'Beijing', 'country': 'China'}"
},
{
"laboratory": "",
"institution": "Princeton University",
"location": "{'country': 'USA'}"
}
]
},
{
"name": [
"xiaocheng shang",
"zhanxing zhu",
"benedict leimkuhler",
"amos j storkey"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Edinburgh",
"location": "{'postCode': 'EH9 3FD', 'country': 'UK'}"
},
{
"laboratory": "",
"institution": "University of Edinburgh",
"location": "{'postCode': 'EH9 3FD', 'country': 'UK'}"
},
{
"laboratory": "",
"institution": "University of Edinburgh",
"location": "{'postCode': 'EH9 3FD', 'country': 'UK'}"
},
{
"laboratory": "",
"institution": "University of Edinburgh",
"location": "{'postCode': 'EH9 3FD', 'country': 'UK'}"
}
]
},
{
"name": [
"leslie n smith"
],
"affiliation": [
{
"laboratory": "",
"institution": "U.S. Naval Research Laboratory",
"location": "{'addrLine': '4555 Overlook Ave', 'postCode': '5514', 'settlement': 'Code'}"
}
]
},
{
"name": [
"sebastian j vollmer",
"konstantinos c zygalakis",
"yee whye teh"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Oxford",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Southampton",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Oxford",
"location": "{}"
}
]
},
{
"name": [
"alex krizhevsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tianqi chen",
"emily b fox",
"carlos guestrin"
],
"affiliation": [
{
"laboratory": "MODE Lab",
"institution": "University of Washington",
"location": "{'settlement': 'Seattle', 'region': 'WA'}"
},
{
"laboratory": "MODE Lab",
"institution": "University of Washington",
"location": "{'settlement': 'Seattle', 'region': 'WA'}"
},
{
"laboratory": "MODE Lab",
"institution": "University of Washington",
"location": "{'settlement': 'Seattle', 'region': 'WA'}"
}
]
},
{
"name": [
"tom m heskes",
"bert kappen"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Nijmegen",
"location": "{'postCode': '6525 EZ', 'settlement': 'Geert Grooteplein 21, Nijmegen', 'country': 'The Netherlands'}"
},
{
"laboratory": "",
"institution": "University of Nijmegen",
"location": "{'postCode': '6525 EZ', 'settlement': 'Geert Grooteplein 21, Nijmegen', 'country': 'The Netherlands'}"
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 87 | null | 0.407407 | 0.75 | null | null | null | null | null | rJma2bZCW |
|
macua|learning_parametric_closedloop_policies_for_markov_potential_games|ICLR_cc_2018_Conference | 3652072 | 1802.00899 | Learning Parametric Closed-Loop Policies for Markov Potential Games | Multiagent systems where the agents interact among themselves and with an stochastic environment can be formalized as stochastic games. We study a subclass of these games, named Markov potential games (MPGs), that appear often in economic and engineering applications when the agents share some common resource. We consider MPGs with continuous state-action variables, coupled constraints and nonconvex rewards. Previous analysis followed a variational approach that is only valid for very simple cases (convex rewards, invertible dynamics, and no coupled constraints); or considered deterministic dynamics and provided open-loop (OL) analysis, studying strategies that consist in predefined action sequences, which are not optimal for stochastic environments. We present a closed-loop (CL) analysis for MPGs and consider parametric policies that depend on the current state and where agents adapt to stochastic transitions. We provide easily verifiable, sufficient and necessary conditions for a stochastic game to be an MPG, even for complex parametric functions (e.g., deep neural networks); and show that a closed-loop Nash equilibrium (NE) can be found (or at least approximated) by solving a related optimal control problem (OCP). This is useful since solving an OCP---which is a single-objective problem---is usually much simpler than solving the original set of coupled OCPs that form the game---which is a multiobjective control problem. This is a considerable improvement over the previously standard approach for the CL analysis of MPGs, which gives no approximate solution if no NE belongs to the chosen parametric family, and which is practical only for simple parametric forms. We illustrate the theoretical contributions with an example by applying our approach to a noncooperative communications engineering game. We then solve the game with a deep reinforcement learning algorithm that learns policies that closely approximates an exact variational NE of the game. | {
"name": [
"sergio valcarcel macua",
"javier zazo",
"santiago zazo"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
} | We present general closed loop analysis for Markov potential games and show that deep reinforcement learning can be used for learning approximate closed-loop Nash equilibrium. | [
"Stochastic games",
"potential games",
"closed loop",
"reinforcement learning",
"multiagent systems"
] | null | 2018-02-15 22:29:24 | 26 | 44 | 8 | null | null | null | null | null | null | true | The paper considers Markov potential games (MPGs), in which the agents share some common resource. It studies MPGs with continuous state-action variables, coupled constraints, and nonconvex rewards, which is novel. The reviews are all positive and point out the paper's novel contributions. | {
"review_id": [
"BkVvEP5gM",
"BJZ6A-clG",
"BJLGKD8Mz"
],
"review": [
{
"title": "title: ICLR may not be the right venue; Technical questions: Unclear how to deal with stochastic dynamics, etc.",
"paper_summary": null,
"main_review": "main_review: Summary:\nThis paper studies multi-agent sequential decision making problems that belong to the class of games called Markov Potential Games (MPG). It considers finding the optimal policy within a parametric space of policies, which can be represented by a function approximator such as a DNN.\nA main contribution of this work is that it shows that for MPG, instead of solving a multi-objective optimization problem (Eq. 8), which is difficult, it is sufficient to solve a scalar-valued optimization problem (Eq. 16). Theorem 1 shows that under certain conditions on the reward function, the game is MPG. It also shows how one might find the potential function J, which is used in the single objective optimization problem.\nFinding J can be computationally expensive in general. So the paper provides some properties that lead to finding J easier. For example, obtaining J is easy if we have a cooperative game (Corollary 1) or the reward can be decomposed/decoupled in a certain way (Theorem 2).\n\n\nEvaluation:\n\nThis is a well-written paper that studies an important problem, but I don’t think ICLR is the right venue for it. There is not much about (representation) learning in this work. The use of TRPO as an RL algorithm in the Experiment does not play a critical role in this work either. Aside this general comment, I have several other more specific comments.\n\n\n- There is a significant literature on the use of RL for multi-agent systems. The paper does not do a good job comparing and positioning with respect to them. For example, refer to the following recent paper and references therein:\n\nPerolat, Strub, et al., “Learning Nash Equilibrium for General-Sum Markov Games from Batch Data,” AISTATS, 2017.\n\n\n- If I understand correctly, the policies are considered to be functions from the state of the system to a continuous action. So it is a function, and not a probability distribution. This means that the space of considered policies correspond to the space of pure strategies. We know that for some games, the Nash equilibrium is a mixed strategy. Isn’t this a big limitation of this approach?\n\n\n- I am unclear how this approach can handle stochastic dynamics. For example, the optimization (P1) depends on the realization of (theta_i)_i. But this is not available. The dependence is not only in the objective, but also in the constraints, which makes things more difficult.\n\nI understand that in the experiments the authors used two models (either the average of random realization, or solving a different optimization for each realization), but none of them is an appropriate solution for a stochastic system.\n\n\n- How large is the MPG class? Is there any structural result that positions them compared to other Markov Games? For example, is the class of zero-sum games an example of MPG?\n\n\n- There is a comment close to the end of Section 5 that when there is no prior knowledge of the dynamics and the reward, one can use the proposed approach to learn PCL-NE by using any DRL.\nThis is questionable because if the reward is not known, the conditions of Theorems 1 or 2 cannot be verifies, so it is not possible to use (P1) instead of (G2).\n\n\n- What comments can you make about the computational complexity? 
It seems that depending on the dynamics, the optimization problem P1 can be non-convex, hence computationally difficult to solve.\n\n\n- How is the work related to the following paper?\nMacua, Zazo, Zazo, “Learning in Constrained Stochastic Dynamic Potential Games,” ICASSP, 2016\n\n======\nI updated the score based on the authors' rebuttal.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting work on Markov potential games, from the viewpoint of someone without any prior knowledge on the topic",
"paper_summary": null,
"main_review": "main_review: This manuscript considers a subclass of stochastic games named Markov potential games. It provides some assumptions that guarantee that a game is a Markov potential game and leads to some nice properties to solve the problem to approximately a Nash equilibrium. It is claimed that the work extends the state of the art by analysing the closed-loop version in a different manner, firstly constraining policies to a parametric family and then deriving conditions for that, instead of the other way around. As someone with no knowledge in the topic, I find the paper interesting to read, but I have not followed any proofs. The experimental setup is quite limited, even though I believe that the intention of the authors is to provide some theoretical ideas rather than applying them. Minor point: there are a few sentences with small errors, this could be improved.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: review",
"paper_summary": null,
"main_review": "main_review: While it is not very surprising that in a potential game it is easy to find Nash equilibria (compare to normal form static games, in which local maxima of the potential are pure Nash equilibria), the idea of approaching these stochastic games from this direction is novel and potentially (no pun intended) fruitful. The paper is well written, the motivation is clear, and some of the ideas are non-trivial. However, the connection to learning representations is a little tenuous. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.5555555820465088,
0.6666666865348816
],
"confidence": [
0.5,
0,
0.25
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"We appreciate the careful reading and the detailed feedback. We believe to have addressed all concerns, including the motivation of the paper as a relevant application of learning representation. We will be glad to address any further concern.",
"We appreciate the feedback from the reviewer. We wish to emphasize the importance of providing a rigorous and effective method for finding closed-loop solutions to a relevant class of games. This application of learning representation advances the state of the art in multiagent systems.",
"New version",
"We thank the reviewer for the good feedback. Since our key idea is to rely on expressive parametric policies, we believe this work presents a relevant application of learning representation for multiagent systems."
],
"comment": [
"We believe that ICLR is a propper venue. Our key contribution is to show that closed-loop NE (CL-NE) can be approximated with parametric policies. However, the applicability of this result is limited by the accuracy of the approximation. Approximations that depend on hand-coded features usually require domain knowledge and have to be re-designed for every game; while learned features that can express complex policies can alleviate these problems. Thus, we see this work as a relevant application of learned representations to multiagent systems that extend previous works, which only studied cooperative games, or assumed discrete state-action with no coupled constraints.\n\nAlthough the focus of our literature review is potential games, we know no previous method for approximating CL-NE for any class of Markov games with continuous variables and coupled constraints. There are open-loop (OL) analysis of some games with continuous variables, like monotone games. Also, (Perolat et al. 2017) studied state-dependent policies but assumed finite state-action sets, which are less common in engineering.\n\nConsidering deterministic policies is not a limitation of our setting for two reasons: 1) Prop. 1 shows that under mild conditions, there exists a deterministic policy that achieves the optimal value of P1 and that is also an NE of G2. 2) We do not claim that our method will find all possible NE of G2, but just the one that is also solution to P1. There may be many (possible mixed strategies) solutions to G2, but we propose a method to find one of them.\n\nThe reviewer has concerns about handling stochastic dynamics. We remark that the notation for objective and dynamix is standard in the literature. On the other hand, we agree that we should clearify that the optimal value of the OCP is the one that maximizes the expected return, for which the constraints are satisfied almost surely.\n\nRegarding the models used in the experiment, we remark that these two models are only for estimating the benchmark solution. The proposed DRL solution tackles the problem without taking into account any of these models. However, we are happy to change the way of computing the benchmark solution, and any further feedback on this direction will be much appreciated.\n\nThe reviewer asks how large is the MPG class, and if zero sum games are an example of MPG. MPGs appear often in engineering and economics applications, where multiple agents have to share some resource. We have studied MPGs with \"exact potentiality\" condition, that includes cooperative and congestion games. There is a larger family of games that satisfy the \"weighted potentiality\" condition, where an agent’s change in reward due to its unilateral strategy deviation is equal to the change in the potential function but scaled by a positive weight. It is easy to show that weighted potential games (WPGs) and exact potential games can be made equivalent by scaling the reward functions [1, Lemma 2.1]. Thus, equivalent results to those presented here should be equally available for WPGs. A zero sum game is a WPG with weights 1 and -1, but we believe our KKT approach still holds in this case.\n\nThe reviewer argues that it is not possible to learn PCL-NE with no prior knowledge of the environment, since Theorems 1 or 2 cannot be verified. We have to distinguish designer from DRL agents. 
Our claim is that we can use the proposed approach to find a PCL-NE by using any DRL agent that has no prior knowledge of the dynamics and/or the reward, given that the game is MPG. We do not claim that the agents are able to validate the Theorems. This situation is similar to previous works that assumed knowledge that the game is cooperative, or for most of the single agent reinforcement learning literature that assumes that the environment is an MDP without requiring the agents to verify it.\n\nThe reviewer suggests that since the rewards are nonconvex, the computational complexity of P1 can be high. We disagree in part. Under Assumptions 1-4, having a discount factor smaller than one makes the Bellman operator monotone, independent on the convexity of the rewards. On the other hand, training a DRL algorithm implies finding local optima of nonconvex problems; but we remark that this is independent on the convexity of the agents' rewards.\n\nThere are a number of notable differences with (Macua, Zazo, Zazo, 2016). The main one is that although such work had the intuition that MPGs could be solved with RL methods, it only included an OL analysis; actually, it only extended previous OL analysis to the stochastic case. That is the reason why it didn't consider state-dependent policy and their Corollary 1 missed the disjoint state condition. Since such OL analysis is not satisfactory for stochastic dynamics, the current paper bridges this gap. We believe that this is an important piece in the potential games literature.\n\n[1] Lã et al. Potential game theory: applications in radio resource allocation. Springer, 2016",
"We appreciate the feedback from the reviewer. We just wish to emphasize the importance of providing an analysis and effective method for finding closed-loop (CL) solutions for a relevant class of games that appear often in engineering and economics, and that includes cooperative and congestion games. Up to the best of our knowledge, this is the first time that this kind of solutions are rigorously provided for any class of Markov games with continuous variables and/or coupled constraints that appear often in engineering applications.\n\nMoreover, we remark that since our solution relies on parametric policies, being able to learn features is key for the applicability of the method. In summary, we believe this paper provides a useful application of representation learning for multiagent systems, which extends previous approaches, which only considered cooperative games or assumed finite state-action sets.\n\nWe acknowledge that the experimental setup is limited. But as the reviewer suggests, our intention with the example in Appendixes A-B and with the numerical experiment in Sec. 5 is to illustrate how to apply the proposed framework to economic and engineering problems.",
"We have addressed the comments from the reviewers. In addition, we have strengthened the form of Theorem 2.",
"We also expected that finding closed-loop Nash equilibria in MPG should be doable. However, we remark that the closed-loop analysis is much more slippery than the open-loop analysis, since the agents have to take into account not only all possible trajectories over the state-action space (as in the open-loop case), but also all possible deviations from that trajectories at every step. The situation is even more involved since we consider coupled constraints (i.e., we are considering the stochastic infinite-horizon extension of a relevant class of generalized Nash equilibrium problems like those studied in [1]). Up to the best of our knowledge this is the first work that provides a rigorous analysis and an effective method for learning approximate closed-loop Nash equilibrium in continuous MPG (actually in any class of games with continuous state-action variables).\n\nThe reviewer comments that the connection of the current work with learning representations is a little tenuous. Although the main focus of the paper is the theoretical analysis of Markov potential games (MPGs), we believe that this connection is indeed stronger than it might seem. Our key idea is to rely on parametric policies, whose applicability for real problems depends on the expressiveness of the parametric family. If the optimal policy is a complicated mapping from states to actions, we require sophisticated parametric approximations that are able to approximate such mapping. Parametric approximations that depend on hand-coded features usually require expert domain knowledge, can be time consuming (especially for multiagent problems), and have to be re-designed for every problem at hand; while learned features that can express complex closed-loop policies are able to alleviate these problems, hence, crucial to the usefulness of our method. In summary (as responded to AnonReviewer2), we see the current setting as a relevant application of learned representations that extend previous multiagent applications, which only studied cooperative games, or assumed discrete state-action, and never with coupled constraints. In addition, we remark that our analysis allows to reformulate the game in a centralized manner that could inspire the extension of advanced DRL techniques like [2, 3], which were previously only valid for cooperative games.\n\n[1] F. Facchinei and C. Kanzow. \"Generalized Nash equilibrium problems.\" 4OR: A Quarterly Journal of Operations Research 5.3 (2007): 173-210.\n\n[2] J. Foerster et al. \"Counterfactual Multi-Agent Policy Gradients.\" arXiv preprint arXiv:1705.08926 (2017).\n\n[3] P. Sunehag et al. \"Value-Decomposition Networks For Cooperative Multi-Agent Learning.\" arXiv preprint arXiv:1706.05296 (2017)."
]
} | {
"paperhash": [
"levhari|the_great_fish_war:_an_example_using_a_dynamic_cournot-nash_solution",
"sunehag|value-decomposition_networks_for_cooperative_multi-agent_learning",
"foerster|counterfactual_multi-agent_policy_gradients",
"zazo|dynamic_potential_games_with_constraints:_fundamentals_and_applications_in_communications",
"pérolat|learning_nash_equilibrium_for_general-sum_markov_games_from_batch_data",
"macua|learning_in_constrained_stochastic_dynamic_potential_games",
"mnih|asynchronous_methods_for_deep_reinforcement_learning",
"heess|learning_continuous_control_policies_by_stochastic_value_gradients",
"lillicrap|continuous_control_with_deep_reinforcement_learning",
"schulman|high-dimensional_continuous_control_using_generalized_advantage_estimation",
"prasad|two-timescale_algorithms_for_learning_nash_equilibria_in_general-sum_stochastic_games",
"bertsekas|dynamic_programming_and_optimal_control,_two_volume_set",
"sage|optimum_systems_control",
"kydland|noncooperative_and_dominant_player_solutions_in_discrete_dynamic_games",
"konda|actor-critic_algorithms",
"apostol|multi-variable_calculus_and_linear_algebra,_with_applications_to_differential_equations_and_probability"
],
"title": [
"The great fish war: an example using a dynamic Cournot-Nash solution",
"Value-Decomposition Networks For Cooperative Multi-Agent Learning",
"Counterfactual Multi-Agent Policy Gradients",
"Dynamic Potential Games With Constraints: Fundamentals and Applications in Communications",
"Learning Nash Equilibrium for General-Sum Markov Games from Batch Data",
"Learning in constrained stochastic dynamic potential games",
"Asynchronous Methods for Deep Reinforcement Learning",
"Learning Continuous Control Policies by Stochastic Value Gradients",
"Continuous control with deep reinforcement learning",
"High-Dimensional Continuous Control Using Generalized Advantage Estimation",
"Two-Timescale Algorithms for Learning Nash Equilibria in General-Sum Stochastic Games",
"Dynamic Programming and Optimal Control, Two Volume Set",
"Optimum systems control",
"Noncooperative and Dominant Player Solutions in Discrete Dynamic Games",
"Actor-Critic Algorithms",
"Multi-variable calculus and linear algebra, with applications to differential equations and probability"
],
"abstract": [
"In recent years there have been numerous international conflicts about fishing rights. These conflicts are wider in scope than those captured by the model presented in this paper. Yet the model sheds lights on the economic implications of these conflicts as well as on the implications of other duopolistic situations in which the decisions of the participants affect the evolution of an underlying population of interest. Our model has two basic features: the underlying population changes as a result of the actions of both participants, and each participant takes account of the other's actions. This strategic aspect is studied, for an example, by using the concept of a Cournot-Nash equilibrium in which each participant's reaction depends on the stock of fish and not on previous behavior. Thus, the model is a discrete-time analog of a differential game. The paper examines the dynamic and steady-state properties of the fish population that results from the participants' interactions.",
"We study the problem of cooperative multi-agent reinforcement learning with a single joint reward signal. This class of learning problems is difficult because of the often large combined action and observation spaces. In the fully centralized and decentralized approaches, we find the problem of spurious rewards and a phenomenon we call the \"lazy agent\" problem, which arises due to partial observability. We address these problems by training individual agents with a novel value decomposition network architecture, which learns to decompose the team value function into agent-wise value functions. We perform an experimental evaluation across a range of partially-observable multi-agent domains and show that learning such value-decompositions leads to superior results, in particular when combined with weight sharing, role information and information channels.",
"\n \n Many real-world problems, such as network packet routing and the coordination of autonomous vehicles, are naturally modelled as cooperative multi-agent systems. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents' policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent's action, while keeping the other agents' actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actor-critic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state.\n \n",
"In a noncooperative dynamic game, multiple agents operating in a changing environment aim to optimize their utilities over an infinite time horizon. Time-varying environments allow to model more realistic scenarios (e.g., mobile devices equipped with batteries, wireless communications over a fading channel, etc.). However, solving a dynamic game is a difficult task that requires dealing with multiple coupled optimal control problems. We focus our analysis on a class of problems, named dynamic potential games, whose solution can be found through a single multivariate optimal control problem. Our analysis generalizes previous studies by considering that the set of environment's states and the set of players' actions are constrained, as it is required for many applications. We also show that the theoretical results are the natural extension of the analysis for static potential games. We apply the analysis and provide numerical methods to solve four example problems, with different features each: i) energy demand control in a smart-grid network; ii) network flow optimization in which the relays have bounded link capacity and limited battery life; iii) uplink multiple access communication with users that have to optimize the use of their batteries; and iv) two optimal scheduling games with time-varying channels.",
"This paper addresses the problem of learning a Nash equilibrium in $\\gamma$-discounted multiplayer general-sum Markov Games (MG). A key component of this model is the possibility for the players to either collaborate or team apart to increase their rewards. Building an artificial player for general-sum MGs implies to learn more complex strategies which are impossible to obtain by using techniques developed for two-player zero-sum MGs. In this paper, we introduce a new definition of $\\epsilon$-Nash equilibrium in MGs which grasps the strategy's quality for multiplayer games. We prove that minimizing the norm of two Bellman-like residuals implies the convergence to such an $\\epsilon$-Nash equilibrium. Then, we show that minimizing an empirical estimate of the $L_p$ norm of these Bellman-like residuals allows learning for general-sum games within the batch setting. Finally, we introduce a neural network architecture named NashNetwork that successfully learns a Nash equilibrium in a generic multiplayer general-sum turn-based MG.",
"We extend earlier works on continuous potential games to the most general case: stochastic time varying environment, stochastic rewards, non-reduced form and constrained state-action sets. We provide conditions for a Markov Nash equilibrium (MNE) of the game to be equivalent to the solution of a single control problem. Then, we address the problem of learning this MNE when the reward and state transition models are unknown. We follow a reinforcement learning approach and extend previous algorithms for working with constrained state-action subsets of real vector spaces. As an application example, we simulate a network flow optimization model, in which the relays have batteries that deplete with a random factor. The results obtained with the proposed framework are close to optimal.",
"We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.",
"We present a unified framework for learning continuous control policies using backpropagation. It supports stochastic control by treating stochasticity in the Bellman equation as a deterministic function of exogenous noise. The product is a spectrum of general policy gradient algorithms that range from model-free methods with value functions to model-based methods without value functions. We use learned models but only require observations from the environment in- stead of observations from model-predicted trajectories, minimizing the impact of compounded model errors. We apply these algorithms first to a toy stochastic control problem and then to several physics-based control problems in simulation. One of these variants, SVG(1), shows the effectiveness of learning models, value functions, and policies simultaneously in continuous domains.",
"We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.",
"Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(lambda). We address the second challenge by using trust region optimization procedure for both the policy and the value function, which are represented by neural networks. \nOur approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.",
"We consider the problem of finding stationary Nash equilibria (NE) in a finite discounted general-sum stochastic game. We first generalize a non-linear optimization problem from [9] to a general N-player game setting. Next, we break down the optimization problem into simpler sub-problems that ensure there is no Bellman error for a given state and an agent. We then provide a characterization of solution points of these sub-problems that correspond to Nash equilibria of the underlying game and for this purpose, we derive a set of necessary and sufficient SG-SP (Stochastic Game - Sub-Problem) conditions. Using these conditions, we develop two provably convergent algorithms. The first algorithm - OFF-SGSP - is centralized and model-based, i.e., it assumes complete information of the game. The second algorithm - ON-SGSP - is an online model-free algorithm. We establish that both algorithms converge, in self-play, to the equilibria of a certain ordinary differential equation (ODE), whose stable limit points coincide with stationary NE of the underlying general-sum stochastic game. On a single state non-generic game [12] as well as on a synthetic two-player game setup with 810,000 states, we establish that ON-SGSP consistently outperforms NashQ [16] and FFQ [21] algorithms.",
"The leading and most up-to-date textbook on the far-ranging algorithmic methododogy of Dynamic Programming, which can be used for optimal control, Markovian decision problems, planning and sequential decision making under uncertainty, and discrete/combinatorial optimization. The treatment focuses on basic unifying themes, and conceptual foundations. It illustrates the versatility, power, and generality of the method with many examples and applications from engineering, operations research, and other fields. It also addresses extensively the practical application of the methodology, possibly through the use of approximations, and provides an extensive treatment of the far-reaching methodology of Neuro-Dynamic Programming/Reinforcement Learning.",
"Optimum systems control , Optimum systems control , مرکز فناوری اطلاعات و اطلاع رسانی کشاورزی",
"Noncooperative and Dominant Player Solutions in Discrete Dynamic Games Author(s): Finn Kydland Source: International Economic Review, Vol. 16, No. 2 (Jun., 1975), pp. 321-335 Published by: Wiley for the Economics Department of the University of Pennsylvania and Institute of Social and Economic Research -Osaka University Stable URL: http://www.jstor.org/stable/2525814 . Accessed: 07/04/2014 14:17",
"Many complex decision making problems like scheduling in manufacturing systems, portfolio management in finance, admission control in communication networks etc., with clear and precise objectives, can be formulated as stochastic dynamic programming problems in which the objective of decision making is to maximize a single “overall” reward. In these formulations, finding an optimal decision policy involves computing a certain “value function” which assigns to each state the optimal reward one would obtain if the system was started from that state. This function then naturally prescribes the optimal policy, which is to take decisions that drive the system to states with maximum value. \nFor many practical problems, the computation of the exact value function is intractable, analytically and numerically, due to the enormous size of the state space. Therefore one has to resort to one of the following approximation methods to find a good sub-optimal policy: (1) Approximate the value function. (2) Restrict the search for a good policy to a smaller family of policies. \nIn this thesis, we propose and study actor-critic algorithms which combine the above two approaches with simulation to find the best policy among a parameterized class of policies. Actor-critic algorithms have two learning units: an actor and a critic. An actor is a decision maker with a tunable parameter. A critic is a function approximator. The critic tries to approximate the value function of the policy used by the actor, and the actor in turn tries to improve its policy based on the current approximation provided by the critic. Furthermore, the critic evolves on a faster time-scale than the actor. \nWe propose several variants of actor-critic algorithms. In all the variants, the critic uses Temporal Difference (TD) learning with linear function approximation. Some of the variants are inspired by a new geometric interpretation of the formula for the gradient of the overall reward with respect to the actor parameters. This interpretation suggests a natural set of basis functions for the critic, determined by the family of policies parameterized by the actor's parameters. We concentrate on the average expected reward criterion but we also show how the algorithms can be modified for other objective criteria. We prove convergence of the algorithms for problems with general (finite, countable, or continuous) state and decision spaces. \nTo compute the rate of convergence (ROC) of our algorithms, we develop a general theory of the ROC of two-time-scale algorithms and we apply it to study our algorithms. In the process, we study the ROC of TD learning and compare it with related methods such as Least Squares TD (LSTD). We study the effect of the basis functions used for linear function approximation on the ROC of TD. We also show that the ROC of actor-critic algorithms does not depend on the actual basis functions used in the critic but depends only on the subspace spanned by them and study this dependence. \nFinally, we compare the performance of our algorithms with other algorithms that optimize over a parameterized family of policies. We show that when only the “natural” basis functions are used for the critic, the rate of convergence of the actor critic algorithms is the same as that of certain stochastic gradient descent algorithms. However, with appropriate additional basis functions for the critic, we show that our algorithms outperform the existing ones in terms of ROC. (Copies available exclusively from MIT Libraries, Rm. 
14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)",
"Linear Analysis. Linear Spaces. Linear Transformations and Matrices. Determinants. Eigenvalues and Eigenvectors. Eigenvalues of Operators Acting on Euclidean Spaces. Linear Differential Equations. Systems of Differential Equations. Nonlinear Analysis. Differential Calculus of Scalar and Vector Fields. Applications of the Differential Calculus. Line Integrals. Special Topics. Set Functions and Elementary Probability. Calculus of Probabilities. Introduction to Numerical Analysis."
],
"authors": [
{
"name": [
"D. Levhari",
"Leonard J. Mirman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Sunehag",
"Guy Lever",
"A. Gruslys",
"Wojciech M. Czarnecki",
"V. Zambaldi",
"Max Jaderberg",
"Marc Lanctot",
"Nicolas Sonnerat",
"Joel Z. Leibo",
"K. Tuyls",
"T. Graepel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jakob N. Foerster",
"Gregory Farquhar",
"Triantafyllos Afouras",
"Nantas Nardelli",
"Shimon Whiteson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Zazo",
"Sergio Valcarcel Macua",
"M. S. Fernández",
"Javier Zazo"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Pérolat",
"Florian Strub",
"Bilal Piot",
"O. Pietquin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sergio Valcarcel Macua",
"S. Zazo",
"Javier Zazo"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Volodymyr Mnih",
"Adrià Puigdomènech Badia",
"Mehdi Mirza",
"Alex Graves",
"T. Lillicrap",
"Tim Harley",
"David Silver",
"K. Kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"N. Heess",
"Greg Wayne",
"David Silver",
"T. Lillicrap",
"Tom Erez",
"Yuval Tassa"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"T. Lillicrap",
"Jonathan J. Hunt",
"A. Pritzel",
"N. Heess",
"Tom Erez",
"Yuval Tassa",
"David Silver",
"Daan Wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"John Schulman",
"Philipp Moritz",
"S. Levine",
"Michael I. Jordan",
"P. Abbeel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"H. Prasad",
"P. L. A.",
"S. Bhatnagar"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Bertsekas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. P. Sage",
"C. C. White",
"G. Siouris"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"F. Kydland"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Vijay R. Konda",
"J. Tsitsiklis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"T. Apostol"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
"1706.05296",
"1705.08926",
null,
"1606.08718",
null,
"1602.01783",
"1510.09142",
"1509.02971",
"1506.02438",
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"153601656",
"25026734",
"19141434",
"15960041",
"2661899",
"8650566",
"6875312",
"53604",
"16326763",
"3075448",
"1236677",
"10566652",
"5313469",
"58938741",
"207779694",
"122413454"
],
"intents": [
[
"methodology"
],
[
"background"
],
[
"background"
],
[
"methodology",
"background"
],
[
"background"
],
[
"methodology",
"background"
],
[
"methodology"
],
[
"methodology",
"background"
],
[
"methodology",
"background"
],
[
"methodology",
"background"
],
[
"background"
],
[],
[
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"background"
]
],
"isInfluential": [
false,
false,
false,
true,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
true
]
} | null | 84 | 0.52381 | 0.592593 | 0.25 | null | null | null | null | null | rJm7VfZA- |
xiao|improving_the_universality_and_learnability_of_neural_programmerinterpreters_with_combinator_abstraction|ICLR_cc_2018_Conference | 3612479 | 1802.02696 | Improving the Universality and Learnability of Neural Programmer-Interpreters with Combinator Abstraction | To overcome the limitations of Neural Programmer-Interpreters (NPI) in its universality and learnability, we propose the incorporation of combinator abstraction into neural programing and a new NPI architecture to support this abstraction, which we call Combinatory Neural Programmer-Interpreter (CNPI). Combinator abstraction dramatically reduces the number and complexity of programs that need to be interpreted by the core controller of CNPI, while still allowing the CNPI to represent and interpret arbitrary complex programs by the collaboration of the core with the other components. We propose a small set of four combinators to capture the most pervasive programming patterns. Due to the finiteness and simplicity of this combinator set and the offloading of some burden of interpretation from the core, we are able construct a CNPI that is universal with respect to the set of all combinatorizable programs, which is adequate for solving most algorithmic tasks. Moreover, besides supervised training on execution traces, CNPI can be trained by policy gradient reinforcement learning with appropriately designed curricula. | {
"name": [
"da xiao",
"jo-yu liao",
"xingyuan yuan"
],
"affiliation": [
{
"laboratory": "",
"institution": "Beijing University of Posts and Telecommunications",
"location": "{'country': 'China'}"
},
{
"laboratory": "",
"institution": "ColorfulClouds Technology Co",
"location": "{'settlement': 'Ltd, Beijing', 'country': 'China'}"
},
{
"laboratory": "",
"institution": "ColorfulClouds Technology Co",
"location": "{'settlement': 'Ltd, Beijing', 'country': 'China'}"
}
]
} | null | [
"neural programming",
"Neural Programmer-Interpreter"
] | null | 2018-02-15 22:29:36 | 13 | 14 | 0 | null | null | null | null | null | null | true | This paper presents a functional extension to NPI, allowing the learning of simpler, more expressive programs.
Although the conference does not put explicit bounds on the length of papers, the authors pushed their luck with their initial submission (a body of 14 pages). It is clear, from the discussion and the reviews, however, that the authors have sought to substantially reduce the length of their paper while improving its clarity.
Reviewers found the method and experiments interesting, and two out of three heartily recommend it for acceptance to ICLR. I am forced to discount the score of the third reviewer, which does not align with the content of their review. I had discussed the issue of length with them, and am disappointed that they chose not to adjust their score to reflect their assessment of the paper, but rather their displeasure at the length of the paper (which, as stated above, does push the boundary a little).
Overall, I recommend accepting this paper, but warn the authors that this is a generous decision, heavily motivated by my appreciation for the work, and that they should be careful not to try such stunts in future conferences in order to preserve the fairness of the submission process. | {
"review_id": [
"rkRojIHxz",
"ByvgbYFeG",
"BkswAkLlG"
],
"review": [
{
"title": "title: The paper clearly breaks the submission guidelines. The paper is far too long, 14 pages (+refs and appendix, in total 19 pages), while the page limit is 8 pages (+refs and appendix). Therefore, the paper should be rejected. ",
"paper_summary": null,
"main_review": "main_review: The paper is interesting to read and gives valuable insights. \n\nHowever, the paper clearly breaks the submission guidelines. The paper is far too long, 14 pages (+refs and appendix, in total 19 pages), while the page limit is 8 pages (+refs and appendix). Therefore, the paper should be rejected. I can not foresee how the authors should be able to squeeze to content into 8 pages. The paper is more suitable for a journal, where page limit is less of an issue.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Good Paper",
"paper_summary": null,
"main_review": "main_review: The authors propose a variant of the neural programmer-interpreter that can support so called combinators for composing an d structuring computations. In a sense, programs in this variant are at a higher level than those in the original neural programmer-interpreter. The distinguishing aspect of the neural programmer-interpreter is that it learns a generic core (which in the variant of the paper corresponds to an interpreter of the programming language) and programs for concrete tasks simultaneously. Increasing the expressivity of the language with combinators has a danger of making the training of core very difficult. The authors avoids this pitfall by carefully re-designing the deterministic part of the core. For instance, they separate out the evaluation of the detector from the LSTM used for the core. Also, they use a fixed routine for parsing the applier instruction. The authors describe two ways of training their variant of the neural programmer-interpreter. The first is similar to the existing methods, and trains the variant using traces. The second is different and trains the variant using just input-output pairs but under carefully designed curriculum. The authors experimentally show that their approach leads to a more stable core of the neural programmer-interpreter that is close to being universal, in the sense that the core knows how to interpret commands.\n\nI found the new architecture of the neural programmer-interpreter very interesting. It is carefully crafted so as to support expressive combinators without making the learning more difficult. I can't quite judge how strong their experimental evaluations are, but I think that learning a neural programmer-interpreter from just input-output pairs using RL techniques is new and worth being pursued further. I am generally positive about accepting this paper to ICLR'18.\n\nI have three complaints, though. First, the paper uses 14 pages well over 8 pages, the recommended limit. Second, it has many typos. Third, the authors claim universality of the approach. When I read this claim, I expected a theorem initially but later I realized that the claim was mostly about informal understanding and got disappointed slightly. I hope that the authors consider these complaints when they revise the paper.\n\n* abstract, p1: is is universal -> is universal\n* p2: may still intractable to provable -> may still be intractable to prove\n* p2: import abstraction -> important abstraction\n* p2: a_(t+1)are -> a_(t+1) are\n* p2: Algorithm 1 The -> Algorithm 1. The\n* Algorithm1, p3: f_lstm(c,p,h) -> f_lstm(s,p,h)\n* p3: learn to interpreting -> learn to interpret\n* p3: it it common -> it is common\n* p3: The two program share -> The two programs share\n* p3: that server as -> that serve as\n* p3: be interpret by -> be interpreted by\n* p3: (le 9 in our -> (<= 9 in our\n* Figure 1, p4: the type of linrec is wrong.\n* p6: f_d et -> f_det\n* p8: it+1 -> i_(t+1)\n* p8: detector. the -> detector. The\n* p9: As I mentioned, I suggest you to make clear that the claim about universality is mostly based on intuition, not on theorem.\n* p9: to to -> to\n* p10: the the set -> the set\n* p11: What are DETs?",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: promising use of functional programming ideas in neural program induction; model description needs clarification",
"paper_summary": null,
"main_review": "main_review: Quality\nThe paper is very interesting and clearly motivated. The idea of importing concepts from functional programming into neural programming looks very promising, helping to address a bit the somewhat naive approach taken so far in the deep learning community towards program induction. However, I found the model description difficult to fully understand and have significant unresolved questions - especially *why* exactly the model should be expected to have better universality compared to NPI and RNPI, given than applier memory is unbounded just like NPI/RNPI program memories are unbounded.\n\nClarity\nThe paper does a good job of summarizing NPI and motivating the universality property of the core module. \n\nI had a lot of questions while reading:\n\nWhat is the purpose of detectors? It is not clear what is being detected. From the context it seems to be encoding observations from the environment, which can vary according to the task and change during program execution. The detector memory is also confusing. In the original NPI, it is assumed that the caller knows which encoder is needed for each program. In CNPI, is this part learned or more general in some way?\n\nAppliers - is it the case that *every* program apart from the four combinators must be written as an applier? For example ADD1, BSTEP, BUBBLESORT, etc all must be implemented as an applier, and programs that cannot be implemented as appliers are not expressible by CNPI?\n\nMemory - combinator memory looks like a 4-way softmax over the four combinators, right? The previous NPI program memory is analogous then to the applier memory.\n\nEqn 3 - binarizing the detector output introduces a non-differentiable operation. How is the detector then trained e.g. from execution traces? Later I see that there is a notion of a “correct condition” for the detector to regress on, which makes me confused again about what exactly the output of a detector means.\n\nComputing the next subprogram - since the size of applier memory is unbounded, the core still needs to be aware of an unlimited number of subprograms. I must be missing something here - how does the proposed model therefore achieve better universality than the original NPI and RNPI models?\n\nAnalysis - for the claim of perfect generalization, I think this will not generally hold true for perceptual inputs. Will the proposed model only be useful in discrete domains for algorithmic tasks, or could it be more broadly applicable, e.g. to robotics tasks?\n\nOriginality\nThis methods proposed in this paper are quite novel and start to bridge an important gap between neural program induction and functional programming, by importing the concept of combinator abstraction into NPI.\n\nSignificance\nThe paper will be significant to people interested in NPI-related models and neural program induction generally, but on the other hand, there is currently not yet a “killer application” to this line of work. \n\nThe experiments appear to show significant new capabilities of CNPI compared to NPI and RNPI in terms of better generalization and universality, as well as being trainable by reinforcement learning.\n\nPros\n- Learns new programs without catastrophic forgetting in the NPI core, in particular where previous NPI models fail.\n- Detector training is decoupled from core and memory training, so that perfect generalization does not have to be re-verified after learning new behaviors.\n\nCons\n- So far lacking useful applications in the real world. 
Could the techniques in this paper help in robotics extensions to NPI? (see e.g. https://arxiv.org/abs/1710.01813)\n- Adds a significant amount of further structure into the NPI framework, which could potentially make broader applications more complex to implement. Do the proposed modifications reduce generality in any way?\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.2222222238779068,
0.6666666865348816,
0.6666666865348816
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to Reviewer 2, Part 1",
"Response to Reviewer 3, Part 1",
"Revision 2017-12-16: Summary of Changes",
"The paper presents some very interesting ideas, but the paper is still lengthy and is more suitable for a journal.",
"Response to Reviewer 3, Part 3",
"Response to Reviewer 2, Part 2",
"Thanks very much for your interest in our paper",
"Response to Reviewer 1",
"Response to Reviewer 3, Part 2"
],
"comment": [
"Thanks for your very constructive feedback. We have uploaded a revision to incorporate your suggestions. We will try to answer your questions and concerns one by one below.\n> First, the paper uses 14 pages well over 8 pages, the recommended limit.\nRe: We have shortened the paper from 14 to 12 pages (+refs and appendix, from 19 to 17 pages) while preserving most of the important contents by:\n1) abbreviating the description of NPI and removing NPI's inference algorithm (Algorithm 1 in the original version) in Section 2.1;\n2) rewriting some paragraphs (especially those in Section 5) to make them more succinct;\n3) substantial re-typesetting (e.g. placing some figures and tables side by side, which is also common practice in other submissions). In order to present the somewhat intricate idea as clear as possible, we use in this paper quite a few figures and tables. The bad typesetting of them in the original version made the manuscript unnecessarily long.\nWe'd like to mention that the 12-page revision is in fact shorter than a number of other submissions, e.g.:\n1. Modular Continual Learning in a Unified Visual Environment (https://openreview.net/forum?id=rkPLzgZAZ), 14 pages\n2. Towards Synthesizing Complex Programs From Input-Output Examples (https://openreview.net/forum?id=Skp1ESxRZ), 16 pages\n3. Sobolev GAN (https://openreview.net/forum?id=SJA7xfb0b), 15 pages\n4. N2N learning: Network to Network Compression via Policy Gradient Reinforcement Learning (https://openreview.net/forum?id=B1hcZZ-AW), 13 pages\n\n> Second, it has many typos.\nRe: We have corrected these and some other typos in the revision. We apologize for the carelessness leading to so many typos and thank you very much for the effort of pointing them out.\n* Figure 1, p4: the type of linrec is wrong.\nDo you mean that linrec has fewer arguments than shown in Figure 2? The pseudo-code in Figure 1 is only for illustration purpose. We deliberately use a simpler version of linrec to make its connection with ADD and BSTEP more apparent.\n* p11: What are DETs?\nDETs stand for detectors. The abbreviation is defined in paragraph 1 of Section 3.1.",
"Thanks for your very constructive feedback. We have uploaded a revision to incorporate your suggestions. We will try to answer your questions and concerns one by one below.\n> especially *why* exactly the model should be expected to have better universality compared to NPI and RNPI, given than applier memory is unbounded just like NPI/RNPI program memories are unbounded. Also related to:\n> Computing the next subprogram - since the size of applier memory is unbounded, the core still needs to be aware of an unlimited number of subprograms. I must be missing something here - how does the proposed model therefore achieve better universality than the original NPI and RNPI models?\nRe: Applier memory is indeed unbounded. However, the core is in fact *not* aware of any actual applier programs. Let's take the BSTEP program in Figure 4 as an example (also see Figure 3 (c) and line 5-7 and 15-16 of Algorithm 1 in the revision). At the first execution step, the core does not directly call 'COMPSWAP' as the next subprogram. It calls 'a1'. Then the actual subprogram COMPSWAP's ID is looked up in the frame, which is constructed on the fly by the BSTEP applier when calling linrec. The _Parse function in Algorithm 1 and Lemma 1 in Appendix C guarantee that the frame will be filled with correct values.\nIn CNPI, the core is only responsible for interpreting combinators and is only aware of formal callable arguments. We offload the responsibility of interpreting appliers from the core to a parser. The two key facts are: 1) the execution of all appliers follows exactly the same pattern: call a combinator with a detector arguments and a fixed number of callable arguments and then return, and 2) the parser itself is a *fixed* program (see function _Parse in Algorithm 1) with no learning taking place at all. As a result, the parser can correctly interpret *any* applier with appropriately set program embeddings (according to equation (1)) regardless of how many applier programs are already stored in the program memory. We propose and prove a lemma (Lemma 1 in Appendix C) on the interpretation of appliers in the revision.\nThe distinguishing feature of CNPI that enables this separation of responsibility and that eventually provides the universality of CNPI is the dynamic binding of formal detectors and callable arguments to actual programs. We have rewritten the first half of Section 4 to explicitly propose a theorem and a proposition on the universality of CNPI and added Appendix C in the revision to prove the theorem. Please see the last part of our reply to Review 3's comments for more details.\n\n> Appliers - is it the case that *every* program apart from the four combinators must be written as an applier? For example ADD1, BSTEP, BUBBLESORT, etc all must be implemented as an applier, and programs that cannot be implemented as appliers are not expressible by CNPI?\nRe: Yes. Actually we have proposed a \"combinatory programing language\" for CNPI where programs are composed by iteratively defining appliers from the bottom up. We give a formal definition of combinatory programs in Appendix C in the revision. We propose a proposition in Section 4 stating that any recursive program is combinatorizable, i.e., can be converted to a combinatory equivalent.\nThis proposition shows that the set of all combinatory programs is adequate for solving most algorithmic tasks, considering that most, if not all, algorithmic tasks have a recursive solution. 
Instead of giving a formal proof of it, we propose a concrete algorithm for combinatorizing any program set expressing an recursive algorithm in Appendix B. Although we believe that the proposition is true (effectively, it says that the combinatory programming language is Turing-complete), we think that a formal proof of it would be too tedious to be included in this paper. The intuition behind is that during the execution of a combinatory programs, combinators and appliers call each other to form an alternating call sequence until reaching a ACT. Arbitrarily complex program structures can be expressed in this way (see the last paragraph of Section 3.1 and Figure 3 (c) and (d)). We'd like to point out that the circle formed by the mutual invocation of combinators and appliers is a very fundamental construct in the interpretation of functional program languages. It can be seen as a \"neural equivalent\" of the eval-apply circle that lies at the heart of a LISP evaluator. The book \"Structure and Interpretation of Computer Programs (2nd edition)\" has a good discussion on this (Section 4.1.1, Figure 4.1: The eval-apply cycle exposes the essence of a computer language. https://mitpress.mit.edu/sicp/full-text/book/book-Z-H-26.html#%_sec_4.1.1). The expressive power (Turing-completeness) of functional programming languages like LISP has been well recognized. Anyway, we admit that this is a weakness regarding the theoretical rigor of this paper, which could be improved by future work.",
"In response to reviewers' comments, we have made the following changes in the revision:\n1. We have shortened the paper from 14 to 12 pages (+refs and appendix, from 19 to 17 pages) while preserving most of the important contents by: \n1) abbreviating the description of NPI and removing NPI's inference algorithm (Algorithm 1 in the original version) in Section 2.1; \n2) rewriting some paragraphs (especially those in Section 5) to make them more succinct; \n3) substantial re-typesetting (e.g. placing some figures and tables side by side, which is also common practice in other submissions).\n\n2. We have added a theorem and a proposition on the universality of CNPI in Section 4, along with a proof of the theorem in Appendix C.\n\n3. We have added a discussion section to discussion one potential application of CNPI besides solving algorithmic tasks, namely natural language understanding as inferring and executing programs.\n\n4. We have corrected a number of typos.",
"I have now also read the revised version of the paper, i.e., the 12-page version. \n\nThe paper is very interesting to read, however slightly hard to digest when you are not familiar with NPI. \n\nThe paper presents a clear contribution in addition to previous work, i.e., identifying and proposing a set of four combinators that improves the universality of neural programmer-interpreters. An algorithmic framework is presented for inference and example executions are provided. \n\nThe analysis and evaluation of the approach are appropriate, both from a theoretical and experimental point of view. However, the execution examples seem very small and it is hard to predict the generality and scalability of the approach (at least for me). How does the technique scale for large problems / applications?\n\nThe paper is very well written and clearly highlights its contribution. However, in my opinion the paper breaks the submission guidelines. The paper is too long, 12 pages (+refs and appendix, in total 17 pages), while the page limit is 8 pages (+refs and appendix). The paper is more suitable for a journal, where page limit is less of an issue.\n\nI have generally a problem with papers that are far too long. The page limits are there for a reason, e.g., all papers should be given an equal amount of space to express the ideas and evaluate them. Although the page limit (8 pages) is a recommendation at this conference, this is the first time I see a paper that breaks / stretches the limit so significantly. I think many of the papers submitted could have been of higher quality, have better evaluations, etc. if they also had stretched the page limits by 50%. I think all papers should be judged based on the same restrictions/limitation, scope, etc. \n",
 Analysis - for the claim of perfect generalization">
        "> Analysis - for the claim of perfect generalization, I think this will not generally hold true for perceptual inputs. Will the proposed model only be useful in discrete domains for algorithmic tasks, or could it be more broadly applicable, e.g. to robotics tasks? Also related to:\n> So far lacking useful applications in the real world. Could the techniques in this paper help in robotics extensions to NPI?\nRe: We are not very familiar with robotics, but CNPI does have the potential capability of augmenting intelligent agents trained by RL to follow instructions and do tasks, which may have applications in the robotics domain. We discuss this below. Though in this paper we only demonstrate the capability of CNPI in the algorithm domain, we believe that the proposed approach is quite general and can potentially be applied to other domains. One such domain is the recent work of treating natural language understanding as inferring and executing programs, applied to semantic parsing for QA (e.g., Andreas et al. (2016), Liang et al. (2017)) and training agents by RL to follow instructions and generalize (e.g., Oh et al. (2017), Andreas et al. (2017), Denil et al. (2017)). Besides normal nouns and verbs, natural language contains ``higher-order'' words such as ``then'', ``if'' and ``until'', which play the critical role of controlling the ``execution'' of other verbs, substantially enhancing the expressive power of the language. Very recently, Anonymous (2018) shows empirically that the prevalent sequence-to-sequence models struggle at mapping instructions containing these words (e.g., ``twice'') to correct action sequence with good generalization. On the other hand, these words can readily be represented as combinators (e.g., def twice(a): a(); a()). By adding these words to the vocabulary and equipping the agent with CNPI-like components to interpret them as combinators, it would be possible to construct agents that display more complex and structured behavior following succinct instructions, and that generalize better due to the raised level of abstraction. We leave this for future work. We have replaced the conclusion section with a discussion section in the revision to discuss the potential applications of CNPI.\n--------\nJacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural networks for question answering. In NAACL, 2016.\nChen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. In Annual Meeting of the Association for Computational Linguistics (ACL), 2017.\nJunhyuk Oh, Singh Satinder, Lee Honglak, and Kholi Pushmeet. Zero-shot task generalization with multi-task deep reinforcement learning. In International Conference on Machine Learning (ICML), 2017.\nJacob Andreas, Dan Klein, and Sergey Levine. Modular multitask reinforcement learning with policy sketches. In International Conference on Machine Learning (ICML), 2017.\nMisha Denil, Sergio Gómez Colmenarejo, Serkan Cabi, David Saxton, and Nando de Freitas. Programmable agents. arXiv preprint arXiv:1706.06383, 2017.\n\nWe hope that these replies and the revision resolve your questions. Any additional questions and suggestions are welcome and we will try our best to make things as clear as possible.",
 Third, the authors claim universality of the approach.">
        "> Third, the authors claim universality of the approach. When I read this claim, I expected a theorem initially but later I realized that the claim was mostly about informal understanding and got disappointed slightly.\nRe: The original version did lack a theorem, which is a major drawback regarding the completeness of the paper. In the revision we state the universality of CNPI with the following theorem and proposition in Section 4 (In fact we already mentioned the proposition in the original version. In the revision we propose it more explicitly):\n--------\nTheorem 1. If 1) the core along with the program embeddings of the set of four combinators and the built-in combinator _mapself are trained and verified before being fixed, and 2) the detectors for a new task are trained and verified, then CNPI can 1) interpret the combinatory programs of the new task correctly with perfect generalization (i.e. with any input complexity) by adding appliers to the program memory, and 2) maintain correct interpretation of already learned programs.\nProposition 1. Any recursive program is combinatorizable, i.e., can be converted to a combinatory equivalent.\n--------\nTheorem 1 states that CNPI is universal with respect to the set of all combinatorizable programs and that appliers can be continually added to the program memory to solve new tasks. Proposition 1 shows that this set of programs is adequate for solving most algorithmic tasks, considering that most, if not all, algorithmic tasks have a recursive solution. We give an induction proof of Theorem 1 in Appendix C, which is newly added in the revision. The proof is in fact quite straightforward. The distinguishing feature of CNPI that enables this proof is the dynamic binding of formal detectors and callable arguments to actual programs, which makes verification of combinator's execution (by the core) and verification of their invocation (by appliers) independent of each other. In contrast, it is impossible to conduct such a proof with NPI and RNPI which lack this feature.\nFor Proposition 1, instead of giving a formal proof, we propose a concrete algorithm for combinatorizing any program set expressing a recursive algorithm in Appendix B. Although we believe that Proposition 1 is true (effectively, it says that the combinatory programming language is Turing-complete), we think that a formal proof of it would be too tedious to be included in this paper. The intuition behind it is that during the execution of a combinatory program, combinators and appliers call each other to form an alternating call sequence until reaching an ACT. Arbitrarily complex program structures can be expressed in this way (see the last paragraph of Section 3.1 and Figure 3 (c) and (d)). We'd like to point out that the circle formed by the mutual invocation of combinators and appliers is a very fundamental construct in the interpretation of functional programming languages. It can be seen as a \"neural equivalent\" of the eval-apply circle that lies at the heart of a LISP evaluator. The book \"Structure and Interpretation of Computer Programs (2nd edition)\" has a good discussion on this (Section 4.1.1, Figure 4.1: The eval-apply cycle exposes the essence of a computer language. https://mitpress.mit.edu/sicp/full-text/book/book-Z-H-26.html#%_sec_4.1.1). The expressive power (Turing-completeness) of functional programming languages like LISP has been well recognized. Anyway, we admit that this is a weakness regarding the theoretical rigor of this paper, which could be improved by future work.\nWe have rewritten the first half of Section 4 on the universality of CNPI to state our claims more clearly. We have also replaced the conclusion section with a discussion section in the revision to discuss the potential applications of CNPI.\n\nThanks again for the reviewing effort and any additional comments and suggestions are welcome.",
"We have shortened the paper from 14 to 12 pages. We also added a discussion Section in the revision to discuss the potential applications of CNPI in other \"more real\" domains. Any additional questions or feedback are welcome.",
"Thanks very much for your comments. The original version is indeed too long. We have uploaded a revision. We have shortened the paper from 14 to 12 pages (+refs and appendix, from 19 to 17 pages) while preserving most of the important contents by:\n1) abbreviating the description of NPI and removing NPI's inference algorithm (Algorithm 1 in the original version) in Section 2.1;\n2) rewriting some paragraphs (especially those in Section 5) to make them more succinct;\n3) substantial re-typesetting (e.g. placing some figures and tables side by side, which is also common practice in other submissions). In order to present the somewhat intricate idea as clear as possible, we use in this paper quite a few figures and tables. The bad typesetting of them in the original version made the manuscript unnecessarily long.\nConsidering the dense contents of the paper this is the best that we can do. \n\nWe'd like to mention that the 12-page revision is in fact shorter than a number of other submissions, e.g.:\n1. Modular Continual Learning in a Unified Visual Environment (https://openreview.net/forum?id=rkPLzgZAZ), 14 pages\n2. Towards Synthesizing Complex Programs From Input-Output Examples (https://openreview.net/forum?id=Skp1ESxRZ), 16 pages\n3. Sobolev GAN (https://openreview.net/forum?id=SJA7xfb0b), 15 pages\n4. N2N learning: Network to Network Compression via Policy Gradient Reinforcement Learning (https://openreview.net/forum?id=B1hcZZ-AW), 13 pages",
 What is the purpose of detectors? It is not clear what is being detected.">
        "> What is the purpose of detectors? It is not clear what is being detected. From the context it seems to be encoding observations from the environment, which can vary according to the task and change during program execution. The detector memory is also confusing. In the original NPI, it is assumed that the caller knows which encoder is needed for each program. In CNPI, is this part learned or more general in some way? Also related to:\n> Eqn 3 - binarizing the detector output introduces a non-differentiable operation. How is the detector then trained e.g. from execution traces? Later I see that there is a notion of a “correct condition” for the detector to regress on, which makes me confused again about what exactly the output of a detector means.\nRe: Your understanding of detectors is basically correct. As described in Section 3.1, the detector, as a lightweight and more \"specialized\" version of the encoder in NPI, detects some condition (e.g. a pointer P2 reaching the end of array) in the environment and provides signals for the combinator to condition its execution. It outputs 0 if the condition is satisfied, otherwise 1. As for the confusion about detector memory, you mentioned \"In the original NPI, it is assumed that the caller knows which encoder is needed for each program.\" This description is not entirely precise. In the original NPI paper, encoders are constructed and used on a per task basis, rather than per program. All programs of a task use the same encoder, which is predetermined. Once a particular task has been given, the single shared encoder integrates tightly with the core and effectively becomes part of the monolithic model, not subject to any dynamic selection by the core. So no such thing as an encoder memory is needed in NPI. On the other hand, in CNPI the detector needed by the next combinator is not determined until an applier is interpreted. This detector is then loaded from the detector memory and \"attached\" to the core. Different programs for the same task may use different detectors (e.g. COMPSWAP and BSTEP for bubble sort task in Figure 3). This architecture is more flexible and promotes the reusability of detectors *across* tasks. For example, a detector detecting pointer P2 reaching the end of array can be used in both grade-school addition task by ADD program and bubble sort task by BSTEP program (see Figure 1). This level of reusability is not easy to achieve in NPI. Reusable detectors can be continually added to the detector memory, just as appliers are added to the program memory, during the lifelong learning process of CNPI to enhance its capability.\n\n> Memory - combinator memory looks like a 4-way softmax over the four combinators, right? The previous NPI program memory is analogous then to the applier memory.\nRe: Whether to use two separate memories for combinators and appliers respectively or to use a single program memory is an implementation issue. While both approaches are feasible, in our current implementation we choose to use a single program memory to store both combinators and appliers and use a flag in the program embeddings to differentiate the two types. Considering this, Figure 1 is a little bit misleading. Anyway, this figure is only for illustration purposes.\n\n> Adds a significant amount of further structure into the NPI framework, which could potentially make broader applications more complex to implement. Do the proposed modifications reduce generality in any way?\nRe: CNPI does add a few essential structures (mainly the frames and the detector memory) into the NPI framework and make the model more complex. But as both the analytical and the experimental results show, the proposed modifications significantly *increase* universality and generality for algorithmic tasks. The limitation is perhaps that we do not discuss how to deal with perceptual input, for which the binary output of detectors may not be sufficient. More detector types may be needed to extend CNPI to support perceptual inputs, but the proposed detector memory architecture and the dynamic selection of detectors can still be used as a basis for such extensions. Overall, we believe that the gains are worth the added complexity."
]
} | {
"paperhash": [
"lake|generalization_without_systematicity:_on_the_compositional_skills_of_sequence-to-sequence_recurrent_networks",
"lake|still_not_systematic_after_all_these_years:_on_the_compositional_skills_of_sequence-to-sequence_recurrent_networks",
"oh|zero-shot_task_generalization_with_multi-task_deep_reinforcement_learning",
"cai|making_neural_programming_architectures_generalize_via_recursion",
"andreas|modular_multitask_reinforcement_learning_with_policy_sketches",
"kaiser|neural_gpus_learn_algorithms",
"reed|neural_programmer-interpreters",
"kurach|neural_random_access_machines",
"neelakantan|neural_programmer:_inducing_latent_programs_with_gradient_descent",
"graves|neural_turing_machines",
"sutskever|sequence_to_sequence_learning_with_neural_networks",
"hochreiter|long_short-term_memory"
],
"title": [
"Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks",
"Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks",
"Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning",
"Making Neural Programming Architectures Generalize via Recursion",
"Modular Multitask Reinforcement Learning with Policy Sketches",
"Neural GPUs Learn Algorithms",
"Neural Programmer-Interpreters",
"Neural Random Access Machines",
"Neural Programmer: Inducing Latent Programs with Gradient Descent",
"Neural Turing Machines",
"Sequence to Sequence Learning with Neural Networks",
"Long Short-Term Memory"
],
"abstract": [
"Humans can understand and produce new utterances effortlessly, thanks to their compositional skills. Once a person learns the meaning of a new verb \"dax,\" he or she can immediately understand the meaning of \"dax twice\" or \"sing and dax.\" In this paper, we introduce the SCAN domain, consisting of a set of simple compositional navigation commands paired with the corresponding action sequences. We then test the zero-shot generalization capabilities of a variety of recurrent neural networks (RNNs) trained on SCAN with sequence-to-sequence methods. We find that RNNs can make successful zero-shot generalizations when the differences between training and test commands are small, so that they can apply \"mix-and-match\" strategies to solve the task. However, when generalization requires systematic compositional skills (as in the \"dax\" example above), RNNs fail spectacularly. We conclude with a proof-of-concept experiment in neural machine translation, suggesting that lack of systematicity might be partially responsible for neural networks' notorious training data thirst.",
"Humans can understand and produce new utterances effortlessly, thanks to their systematic compositional skills. Once a person learns the meaning of a new verb \"dax,\" he or she can immediately understand the meaning of \"dax twice\" or \"sing and dax.\" In this paper, we introduce the SCAN domain, consisting of a set of simple compositional navigation commands paired with the corresponding action sequences. We then test the zero-shot generalization capabilities of a variety of recurrent neural networks (RNNs) trained on SCAN with sequence-to-sequence methods. We find that RNNs can generalize well when the differences between training and test commands are small, so that they can apply \"mix-and-match\" strategies to solve the task. However, when generalization requires systematic compositional skills (as in the \"dax\" example above), RNNs fail spectacularly. We conclude with a proof-of-concept experiment in neural machine translation, supporting the conjecture that lack of systematicity is an important factor explaining why neural networks need very large training sets.",
"As a step towards developing zero-shot task generalization capabilities in reinforcement learning (RL), we introduce a new RL problem where the agent should learn to execute sequences of instructions after learning useful skills that solve subtasks. In this problem, we consider two types of generalizations: to previously unseen instructions and to longer sequences of instructions. For generalization over unseen instructions, we propose a new objective which encourages learning correspondences between similar subtasks by making analogies. For generalization over sequential instructions, we present a hierarchical architecture where a meta controller learns to use the acquired skills for executing the instructions. To deal with delayed reward, we propose a new neural architecture in the meta controller that learns when to update the subtask, which makes learning more efficient. Experimental results on a stochastic 3D domain show that the proposed ideas are crucial for generalization to longer instructions as well as unseen instructions.",
"Empirically, neural networks that attempt to learn programs from data have exhibited poor generalizability. Moreover, it has traditionally been difficult to reason about the behavior of these models beyond a certain level of input complexity. In order to address these issues, we propose augmenting neural architectures with a key abstraction: recursion. As an application, we implement recursion in the Neural Programmer-Interpreter framework on four tasks: grade-school addition, bubble sort, topological sort, and quicksort. We demonstrate superior generalizability and interpretability with small amounts of training data. Recursion divides the problem into smaller pieces and drastically reduces the domain of each neural network component, making it tractable to prove guarantees about the overall system's behavior. Our experience suggests that in order for neural architectures to robustly learn program semantics, it is necessary to incorporate a concept like recursion.",
"We describe a framework for multitask deep reinforcement learning guided by policy sketches. Sketches annotate tasks with sequences of named subtasks, providing information about high-level structural relationships among tasks but not how to implement them—specifically not providing the detailed guidance used by much previous work on learning policy abstractions for RL (e.g. intermediate rewards, subtask completion signals, or intrinsic motivations). To learn from sketches, we present a model that associates every subtask with a modular subpolicy, and jointly maximizes reward over full task-specific policies by tying parameters across shared subpolicies. Optimization is accomplished via a decoupled actor-critic training objective that facilitates learning common behaviors from multiple dissimilar reward functions. We evaluate the effectiveness of our approach in three environments featuring both discrete and continuous control, and with sparse rewards that can be obtained only after completing a number of high-level sub-goals. Experiments show that using our approach to learn policies guided by sketches gives better performance than existing techniques for learning task-specific or shared policies, while naturally inducing a library of interpretable primitive behaviors that can be recombined to rapidly adapt to new tasks.",
"Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are are hard to train due to their large depth when unfolded. \nWe present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. \nAn essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with upto 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. \nTo achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.",
"We propose the neural programmer-interpreter (NPI): a recurrent and compositional neural network that learns to represent and execute programs. NPI has three learnable components: a task-agnostic recurrent core, a persistent key-value program memory, and domain-specific encoders that enable a single NPI to operate in multiple perceptually diverse environments with distinct affordances. By learning to compose lower-level programs to express higher-level programs, NPI reduces sample complexity and increases generalization ability compared to sequence-to-sequence LSTMs. The program memory allows efficient learning of additional tasks by building on existing programs. NPI can also harness the environment (e.g. a scratch pad with read-write pointers) to cache intermediate results of computation, lessening the long-term memory burden on recurrent hidden units. In this work we train the NPI with fully-supervised execution traces; each program has example sequences of calls to the immediate subprograms conditioned on the input. Rather than training on a huge number of relatively weak labels, NPI learns from a small number of rich examples. We demonstrate the capability of our model to learn several types of compositional programs: addition, sorting, and canonicalizing 3D models. Furthermore, a single NPI learns to execute these programs and all 21 associated subprograms.",
"Abstract: In this paper, we propose and investigate a new neural network architecture called Neural Random Access Machine. It can manipulate and dereference pointers to an external variable-size random-access memory. The model is trained from pure input-output examples using backpropagation. \nWe evaluate the new model on a number of simple algorithmic tasks whose solutions require pointer manipulation and dereferencing. Our results show that the proposed model can learn to solve algorithmic tasks of such type and is capable of operating on simple data structures like linked-lists and binary trees. For easier tasks, the learned solutions generalize to sequences of arbitrary length. Moreover, memory access during inference can be done in a constant time under some assumptions.",
"Deep neural networks have achieved impressive supervised classification performance in many tasks including image recognition, speech recognition, and sequence to sequence learning. However, this success has not been translated to applications like question answering that may involve complex arithmetic and logic reasoning. A major limitation of these models is in their inability to learn even simple arithmetic and logic operations. For example, it has been shown that neural networks fail to learn to add two binary numbers reliably. In this work, we propose Neural Programmer, an end-to-end differentiable neural network augmented with a small set of basic arithmetic and logic operations. Neural Programmer can call these augmented operations over several steps, thereby inducing compositional programs that are more complex than the built-in operations. The model learns from a weak supervision signal which is the result of execution of the correct program, hence it does not require expensive annotation of the correct program itself. The decisions of what operations to call, and what data segments to apply to are inferred by Neural Programmer. Such decisions, during training, are done in a differentiable fashion so that the entire network can be trained jointly by gradient descent. We find that training the model is difficult, but it can be greatly improved by adding random noise to the gradient. On a fairly complex synthetic table-comprehension dataset, traditional recurrent networks and attentional models perform poorly while Neural Programmer typically obtains nearly perfect accuracy.",
"We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-toend, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.",
"Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O. 1. Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
],
"authors": [
{
"name": [
"B. Lake",
"Marco Baroni"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. Lake",
"Marco Baroni"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Junhyuk Oh",
"Satinder Singh",
"Honglak Lee",
"Pushmeet Kohli"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jonathon Cai",
"Richard Shin",
"D. Song"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jacob Andreas",
"D. Klein",
"S. Levine"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Lukasz Kaiser",
"I. Sutskever"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Scott E. Reed",
"Nando de Freitas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Karol Kurach",
"Marcin Andrychowicz",
"I. Sutskever"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Arvind Neelakantan",
"Quoc V. Le",
"I. Sutskever"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Alex Graves",
"Greg Wayne",
"Ivo Danihelka"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"I. Sutskever",
"O. Vinyals",
"Quoc V. Le"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sepp Hochreiter",
"J. Schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
"1711.00350",
"1706.05064",
"1704.06611",
"1611.01796",
"1511.08228",
"1511.06279",
"1511.06392",
"1511.04834",
"1410.5401",
"1409.3215",
null
],
"s2_corpus_id": [
"46761158",
"406912",
"11974467",
"1844940",
"14711954",
"2009318",
"7034786",
"1174466",
"6715185",
"15299054",
"7961699",
"1915014"
],
"intents": [
[
"background"
],
[
"background"
],
[
"background"
],
[
"methodology",
"background"
],
[
"methodology",
"background"
],
[
"background"
],
[],
[
"background"
],
[
"background"
],
[
"methodology",
"background"
],
[],
[
"background"
]
],
"isInfluential": [
false,
false,
false,
true,
true,
false,
false,
false,
true,
false,
false,
false
]
} | null | 84 | 0.166667 | 0.518519 | 0.75 | null | null | null | null | null | rJlMAAeC- |
fox|parametrized_hierarchical_procedures_for_neural_programming|ICLR_cc_2018_Conference | 3508182 | null | Parametrized Hierarchical Procedures for Neural Programming | Neural programs are highly accurate and structured policies that perform algorithmic tasks by controlling the behavior of a computation mechanism. Despite the potential to increase the interpretability and the compositionality of the behavior of artificial agents, it remains difficult to learn from demonstrations neural networks that represent computer programs. The main challenges that set algorithmic domains apart from other imitation learning domains are the need for high accuracy, the involvement of specific structures of data, and the extremely limited observability. To address these challenges, we propose to model programs as Parametrized Hierarchical Procedures (PHPs). A PHP is a sequence of conditional operations, using a program counter along with the observation to select between taking an elementary action, invoking another PHP as a sub-procedure, and returning to the caller. We develop an algorithm for training PHPs from a set of supervisor demonstrations, only some of which are annotated with the internal call structure, and apply it to efficient level-wise training of multi-level PHPs. We show in two benchmarks, NanoCraft and long-hand addition, that PHPs can learn neural programs more accurately from smaller amounts of both annotated and unannotated demonstrations. | {
"name": [
"roy fox",
"richard shin",
"sanjay krishnan",
"ken goldberg",
"dawn song",
"ion stoica"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Berkeley'}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Berkeley'}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Berkeley'}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Berkeley'}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Berkeley'}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Berkeley'}"
}
]
} | We introduce the PHP model for hierarchical representation of neural programs, and an algorithm for learning PHPs from a mixture of strong and weak supervision. | [
"Neural programming",
"Hierarchical Control"
] | null | 2018-02-15 22:29:19 | 38 | 30 | 3 | null | null | null | null | null | null | true | This paper is somewhat incremental on recent prior work in a hot area; it has some weaknesses but does move the needle somewhat on these problems. | {
"review_id": [
"SylxFWcgG",
"SkiyHjDlf",
"HyXxfzsxM"
],
"review": [
{
"title": "title: NPI with less supervision",
"paper_summary": null,
"main_review": "main_review: I thank the authors for their updates and clarifications. I stand by my original review and score. I think their method and their evaluation has some major weaknesses, but I think that it still provides a good baseline to force work in this space towards tasks which can not be solved by simpler models like this. So while I'm not super excited about the paper I think it is above the accept threshold.\n--------------------------------------------------------------------------\nThis paper extends an existing thread of neural computation research focused on learning resuable subprocedures (or options in RL-speak). Instead of simply input and output examples, as in most of the work in neural computation, they follow in the vein of the Neural Programmer-Interpreter (Reed and de Freitas, 2016) and Li et. al., 2017, where the supervision contains the full sequence of elementary actions in the domain for all samples, and some samples also contain the hierarchy of subprocedure calls.\n\nThe main focus of their work is learning from fewer fully annotated samples than prior work. They introduce two new ideas in order to enable this:\n1. They limit the memory state of each level in the program heirarchy to simply a counter indicating the number of elementary actions/subprocedure calls taken so far (rather than a full RNN embedded hidden/cell state as in prior work). They also limit the subprocedures such that they do not accept any arguments.\n2. By considering this very limited set of possible hidden states, they can compute the gradients using a dynamic program that seems to be more accurate than the approximate dynamic program used in Li et. al., 2017. \n\nThe main limitation of the work is this extremely limited memory state, and the lack of arguments. Without arguments, everything that parameterizes the subprocedures must be in the visible world state. In both of their domains, this is true, but this places a significant limitation on the algorithms which can be modeled with this technique. Furthermore, the limited memory state means that the only way a subprocedure can remember anything about the current observation is to call a different subprocedure. Again, their two evalation tasks fit into this paradigm, but this places very significant limitations on the set of applicable domains. I would have like to see more discussion on how constraining these limitations would be in practice. For example, it seems it would be impossible for this architecture to perform the Nanocraft task if the parameters of the task (width, height, etc.) were only provided in the first observation, rather than every observation. \n\nNone-the-less I think this work is an important step in our understanding of the learning dynamics for neural programs. In particular, while the RNN hidden state memory used by the prior work enables the learning of more complicted programs *in theory*, this has not been shown in practice. So, it's possible that all the prior work is doing is learning to approixmate a much simpler architecture of this form. Specifically, I think this work can act as a great base-line by forcing future work to focus on domains which cannot be easily solved by a simpler architecture of this form. This limitation will also force the community to think about which tasks require a more complicated form of memory, and which can be solved with a very simple memory of this form.\n\n\nI also have the following additional concerns about the paper:\n\n1. 
I found the current explanation of the algorithm to be very difficult to understand. It's extremely difficult to understand the core method without reading the appendix, and even with the appendix I found the explanation of the level-by-level decomposition to be too terse.\n\n2. It's not clear how their gradient approximation compares to the technique used by Li et. al. They obviously get better results on the addition and Nanocraft domains, but I would have liked a more clear explanation and/or some experiments providing insights into what enables these improvements (or at least an admission by the authors that they don't really understand what enabled the performance improvements).\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: a good paper",
"paper_summary": null,
"main_review": "main_review: In the paper titled \"Parameterized Hierarchical Procedures for Neural Programming\", the authors proposed \"Parametrized Hierarchical Procedure (PHP)\", which is a representation of a hierarchical procedure by differentiable parametrization. Each PHP is represented with two multi-layer perceptrons with ReLU activation, one for its operation statement and one for its termination statement. With two benchmark tasks (NanoCraft and long-hand addition), the authors demonstrated that PHPs are able to learn neural programs accurately from smaller amounts of strong/weak supervision. \n\nOverall the paper is well-written with clear logic and accurate narratives. The methodology within the paper appears to be reasonable to me. Because this is not my research area, I cannot judge its technical contribution. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: .",
"paper_summary": null,
"main_review": "main_review: Summary of paper: The goal of this paper is to be able to construct programs given data consisting of program input and program output pairs. Previous works by Reed & Freitas (2016) (using the paper's references) and Cai et al. (2017) used fully supervised trace data. Li et al. (2017) used a mixture of fully supervised and weakly supervised trace data. The supervision helps with discovering the hierarchical structure in the program which helps generalization to other program inputs. The method is heavily based on the \"Discovery of Deep Options\" (DDO) algorithm by Fox et al. (2017).\n\n---\n\nQuality: The experiments are chosen to compare the method that the paper is proposing directly with the method from Li et al. (2017).\nClarity: The connection between learning a POMDP policy and program induction could be made more explicit. In particular, section 3 describes the problem statement but in terms of learning a POMDP policy. The only sentence with some connection to learning programs is the first one.\nOriginality: This line of work is very recent (as far as I know), with Li et al. (2017) being the other work tackling program learning from a mixture of supervised and weakly supervised program trace data.\nSignificance: The problem that the paper is solving is significant. The paper makes good progress in demonstrating this on toy tasks.\n\n---\n\nSome questions/comments:\n- Is the Expectation-Gradient trick also known as the reinforce/score function trick?\n- This paper could benefit from being rewritten so that it is in one language instead of mixing POMDP language used by Fox et al. (2017) and program learning language. It is not exactly clear, for example, how are memory states m_t and states s_t related to the program traces.\n- It would be nice if the experiments in Figure 2 could compare PHP and NPL on exactly the same total number of demonstrations.\n\n---\n\nSummary: The problem under consideration is important and experiments suggest good progress. However, the clarity of the paper could be made better by making the connection between POMDPs and program learning more explicit or if the algorithm was introduced with one language.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.5555555820465088,
0.5555555820465088
],
"confidence": [
0.5,
0,
0.25
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Changes from original submission",
"Re: a good paper",
"Re:{^5} NPI with less supervision",
"Re: NPI with less supervision",
"Re:{^3} NPI with less supervision",
"Re:{^4} NPI with less supervision",
"Re: Re: NPI with less supervision",
"Re: your review"
],
"comment": [
"This is an overview of the main changes we introduced in revisions of the paper:\n\n- In Sections 1 and 2.1, we removed the somewhat irrelevant discussion of the challenges of general RNNs.\n- We reorganized Table 1.\n- Based on feedback from Reviewer #3, in Section 3 we clarified the connection between the POMDP formulation and program learning.\n- Based on feedback from Reviewers #2 and #3, in Section 4.1.1 we clarified the connection between the call stack and the agent memory, and noted basic properties of the PHP model complexity.\n- Based on feedback from Reviewer #2, in Section 4.2.2 (formerly 4.2) we clarified the EG algorithm; in Section 4.2.3 (formerly 4.2.1) we clarified the level-wise training algorithm and updated the notation of layer indexes for consistency.\n- Based on feedback from Reviewer #2, in Section 5.1 (Results) we addressed the causes of PHP gains; we removed the discussion of weak supervision which is repeated in Section 6.\n- We removed Figure 3, which was somewhat redundant with Figure 1.\n- Based on feedback from Reviewer #2, in Section 6 we addressed limitations of the PHP model and the need for more complicated benchmarks in this field.\n- We made multiple minor clarifications and style improvements.",
"Thank you for your time and for your assessment. We are very excited about these results and are making updates to improve the paper.",
"We agree that the evidence for (1) does not disambiguate the two causes well enough to support (1) with high confidence, and updated the paper to make this even clearer.\n\nWe would also like to clarify the contributions of the paper:\n1) We introduce the PHP model, which is simpler to optimize than the NPI model. That is, for a given dataset, the PHP model induces an optimization landscape in which a good solution is easier to find.\n2) We propose an EG algorithm that computes exact gradients in this optimization landscape, allowing efficient optimization.\n3) We show empirically that our model and algorithm outperform baseline models and algorithms.\n\nThis work does not show that the approximate gradients of NPL are worse than exact gradients in optimizing the NPI model (with weak supervision), although this may well be the case when the NPL execution-path grouping loses useful path information. This work also does not show that the exact gradients of the full-batch EG algorithm are always better than approximate gradients; in fact, using SGD via minibatch EG may well be better (see e.g. [Keskar et al. 2017]). Finally, this work does not fully tease apart whether the gains are due to the PHP model inducing simpler optimization landscapes, or due to the EG algorithm utilizing them better, or both, although there is evidence that the PHP model is easier to optimize.\n\nThese are all exciting research questions, and we thank the reviewer again for raising them. We believe that the simple and interpretable PHP model, the useful EG method, and the compelling empirical results presented here would be valuable to the community as a stepping stone towards such future research.\n\n\n[Keskar et al. 2017] On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, ICLR 2017",
"Thank you for this valuable and detailed feedback.\n\nYou are correct in pointing out that PHPs impose a constraining memory structure, and we added to Sections 1 and 6 notes on their limitations. In principle, any finite memory structure can be implemented with sufficiently many PHPs, by having a distinct procedure for each memory state. Specifically in NanoCraft, PHPs can remember task parameters by calling a distinct sub-procedure for each building location and size. This lacks generalization, which was also not shown for NanoCraft by Li et al. (2017). We expect the generalization achieved by limiting the number of procedures to be further enhanced by allowing them to depend on a program counter.\n\nThis paper thus makes an important first step towards neural programming with structural constraints that are both useful as an inductive bias that improves sample complexity, and computationally tractable. We agree that more expressive structures will be needed as the field moves beyond the current simple benchmarks, which we hope this work promotes. We agree that passing arguments to hierarchical procedures is an important extension to explore in future work.\n\nWe clarified in Section 4.2 and in the Appendix the explanations of the algorithm and of the level-wise training procedure. Specifically, in Section 4.2 we elaborated on the structure of the full likelihood P(zeta, xi | theta) as a product of the relevant PHP operations, and how this leads to the given gradient expression; and clarified the expression for sampling from the posterior P(zeta | xi, theta) in level-wise training.\n\nWe added in Section 2 a short comparison of our method to that of Li et al. (2017). The main difference is that their method computes approximate gradients by averaging selectively over computation paths, whereas our method computes exact gradients using dynamic programming, enabled by having small discrete latent variables in each time step.",
"Thank you for making this excellent point.\n\nOur experiments indicate that the gains of PHP are due to both (1) the ability to compute exact gradients for weakly supervised demonstrations (via the EG algorithm), and (2) the PHP model being easier to optimize than the NPI model. We added this observation to Section 5.1.\n\nAs evidence for (2), consider the case of strongly supervised demonstrations, where NPL coincides with NPI and takes exact gradients. As shown in Figure 2 (blue curves at the 64 mark), with 64 strongly supervised demonstrations in the NanoCraft domain, the accuracy is 1.0 for PHP; 0.724 for NPL/NPI. In this case, PHP has lower sample complexity with comparable optimization algorithms, suggesting that this domain is solvable with a PHP model of lower complexity than the NPI model. We note, however, that Li et al. (2017) used batch size 1, whereas we used full batches and made no attempt to optimize the batch size.\n\nAs evidence for (1), consider the case where 48 of the 64 demonstrations are weakly supervised (Figure 2, blue curves at the 16 mark). Here the success rate is 0.969 for PHP; 0.502 for NPL. Compared to the strongly supervised case above, this 70% increase in the gain of PHP over NPL is likely due to the exact gradients used to optimize the PHP model, in contrast to the approximate gradients of NPL.\n\nWe are excited to present these results as they suggest a number of new research question, such as the effect of optimizing PHP with stochastic gradients, and we thank the reviewer for inspiring this direction for future research.",
"I agree with your evidence for point (2). However I don't see how your evidence for point (1) disambiguates between the two causes. Couldn't it just as well be the case that the 70% increase of PHP over NPL is due to the fact that PHP is using a simpler model that is easier to optimize?\n",
"Thanks for your response. The clarifications to Section 4.2 make the level-wise training algorithm more clear.\n\nThe additional information in section 2 makes it clear how the gradient compuation differs, but it does not clarify where the gains come from. Specifically, from the current results, it's not clear whether the gains come from (1) the ability to compute exact gradients rather than the approximate gradient computation used by Li et. al, or (2) the simpler PHP model is just easier to optimize in general, so it will work better regardless of the technique used for gradient computation.\n",
"Thank you for these constructive comments.\n\nWe added to Section 3 clarification of the connection between the POMDP formulation and program learning. In particular, the state s_t of the POMDP models the configuration of the computer (e.g., the tapes and heads of a Turing Machine, or the RAM of a register machine), whereas the memory m_t of the agent models the internal state of the machine itself (e.g. the state of a Turing Machine's Finite State Machine, or the registers of a register machine).\n\nThe Expectation–Gradient method is somewhat similar to but distinct from the REINFORCE trick, which uses the so-called “log-gradient” identity \\nabla_\\theta{p_\\theta(x)} = p(x) \\nabla_\\theta{\\log p(x)} to compute \\nabla_\\theta{E_p[f(x)]}. In fact, we use that same identity twice to compute \\nabla_\\theta{\\log P(\\xi | \\theta)}: once to express the gradient of log P(xi | theta) using the gradient of P(xi | theta); then after introducing the sum over zeta, we use the identity again in the other direction to express this using the gradient of log P(zeta, xi | theta).\n\nWe added to Section 5.1 clarification that we did use the same total number of demonstrations for PHP as was used for NPL. The results for 64 demonstrations are shown in Figure 2, and the results for PHP with 128 and 256 demonstrations were essentially the same as with 64, and were omitted for figure clarity."
]
} | {
"paperhash": [
"krishnan|swirl:_a_sequential_windowed_inverse_reinforcement_learning_algorithm_for_robot_tasks_with_delayed_rewards",
"krishnan|transition_state_clustering:_unsupervised_surgical_trajectory_segmentation_for_robot_learning",
"krishnan|ddco:_discovery_of_deep_continuous_options_for_robot_learning_from_demonstrations",
"cai|making_neural_programming_architectures_generalize_via_recursion",
"fox|multi-level_discovery_of_deep_options",
"devlin|robustfill:_neural_program_learning_under_noisy_i/o",
"sharma|learning_to_repeat:_fine_grained_action_repetition_for_deep_reinforcement_learning",
"andreas|modular_multitask_reinforcement_learning_with_policy_sketches",
"florensa|stochastic_neural_networks_for_hierarchical_reinforcement_learning",
"li|neural_program_lattices",
"balog|deepcoder:_learning_to_write_programs",
"parisotto|neuro-symbolic_program_synthesis",
"heess|learning_and_transfer_of_modulated_locomotor_controllers",
"fox|principled_option_learning_in_markov_decision_processes",
"bacon|the_option-critic_architecture",
"srinivas|option_discovery_in_hierarchical_reinforcement_learning_using_spatio-temporal_clustering",
"jonsson|hierarchical_linearly-solvable_markov_decision_problems",
"andreas|learning_to_compose_neural_networks_for_question_answering",
"kaiser|neural_gpus_learn_algorithms",
"reed|neural_programmer-interpreters",
"neelakantan|neural_programmer:_inducing_latent_programs_with_gradient_descent",
"genewein|bounded_rationality,_abstraction,_and_hierarchical_decision-making:_an_information-theoretic_optimality_principle",
"hamidi|active_imitation_learning_of_hierarchical_policies",
"sukhbaatar|end-to-end_memory_networks",
"joulin|inferring_algorithmic_patterns_with_stack-augmented_recurrent_nets",
"graves|neural_turing_machines",
"daniel|hierarchical_relative_entropy_policy_search",
"konidaris|robot_learning_from_demonstration_by_constructing_skill_trees",
"konidaris|skill_discovery_in_continuous_reinforcement_learning_domains_using_skill_chaining",
"bonet|deterministic_pomdps_revisited",
"simsek|using_relative_novelty_to_identify_useful_temporal_abstractions_in_reinforcement_learning",
"salakhutdinov|optimization_with_em_and_expectation-conjugate-gradient",
"bui|policy_recognition_in_the_abstract_hidden_markov_model",
"mcgovern|automatic_discovery_of_subgoals_in_reinforcement_learning_using_diverse_density",
"|automated_discovery_of_options_in_reinforcement_learning",
"thrun|finding_structure_in_reinforcement_learning"
],
"title": [
"SWIRL: A sequential windowed inverse reinforcement learning algorithm for robot tasks with delayed rewards",
"Transition state clustering: Unsupervised surgical trajectory segmentation for robot learning",
"DDCO: Discovery of Deep Continuous Options for Robot Learning from Demonstrations",
"Making Neural Programming Architectures Generalize via Recursion",
"Multi-Level Discovery of Deep Options",
"RobustFill: Neural Program Learning under Noisy I/O",
"Learning to Repeat: Fine Grained Action Repetition for Deep Reinforcement Learning",
"Modular Multitask Reinforcement Learning with Policy Sketches",
"Stochastic Neural Networks for Hierarchical Reinforcement Learning",
"Neural Program Lattices",
"DeepCoder: Learning to Write Programs",
"Neuro-Symbolic Program Synthesis",
"Learning and Transfer of Modulated Locomotor Controllers",
"Principled Option Learning in Markov Decision Processes",
"The Option-Critic Architecture",
"Option Discovery in Hierarchical Reinforcement Learning using Spatio-Temporal Clustering",
"Hierarchical Linearly-Solvable Markov Decision Problems",
"Learning to Compose Neural Networks for Question Answering",
"Neural GPUs Learn Algorithms",
"Neural Programmer-Interpreters",
"Neural Programmer: Inducing Latent Programs with Gradient Descent",
"Bounded Rationality, Abstraction, and Hierarchical Decision-Making: An Information-Theoretic Optimality Principle",
"Active Imitation Learning of Hierarchical Policies",
"End-To-End Memory Networks",
"Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets",
"Neural Turing Machines",
"Hierarchical Relative Entropy Policy Search",
"Robot learning from demonstration by constructing skill trees",
"Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining",
"Deterministic POMDPs Revisited",
"Using relative novelty to identify useful temporal abstractions in reinforcement learning",
"Optimization with EM and Expectation-Conjugate-Gradient",
"Policy Recognition in the Abstract Hidden Markov Model",
"Automatic Discovery of Subgoals in Reinforcement Learning using Diverse Density",
"AUTOMATED DISCOVERY OF OPTIONS IN REINFORCEMENT LEARNING",
"Finding Structure in Reinforcement Learning"
],
"abstract": [
"We present sequential windowed inverse reinforcement learning (SWIRL), a policy search algorithm that is a hybrid of exploration and demonstration paradigms for robot learning. We apply unsupervised learning to a small number of initial expert demonstrations to structure future autonomous exploration. SWIRL approximates a long time horizon task as a sequence of local reward functions and subtask transition conditions. Over this approximation, SWIRL applies Q-learning to compute a policy that maximizes rewards. Experiments suggest that SWIRL requires significantly fewer rollouts than pure reinforcement learning and fewer expert demonstrations than behavioral cloning to learn a policy. We evaluate SWIRL in two simulated control tasks, parallel parking and a two-link pendulum. On the parallel parking task, SWIRL achieves the maximum reward on the task with 85% fewer rollouts than Q-learning, and one-eight of demonstrations needed by behavioral cloning. We also consider physical experiments on surgical tensioning and cutting deformable sheets using a da Vinci surgical robot. On the deformable tensioning task, SWIRL achieves a 36% relative improvement in reward compared with a baseline of behavioral cloning with segmentation.",
"Demonstration trajectories collected from a supervisor in teleoperation are widely used for robot learning, and temporally segmenting the trajectories into shorter, less-variable segments can improve the efficiency and reliability of learning algorithms. Trajectory segmentation algorithms can be sensitive to noise, spurious motions, and temporal variation. We present a new unsupervised segmentation algorithm, transition state clustering (TSC), which leverages repeated demonstrations of a task by clustering segment endpoints across demonstrations. TSC complements any motion-based segmentation algorithm by identifying candidate transitions, clustering them by kinematic similarity, and then correlating the kinematic clusters with available sensory and temporal features. TSC uses a hierarchical Dirichlet process Gaussian mixture model to avoid selecting the number of segments a priori. We present simulated results to suggest that TSC significantly reduces the number of false-positive segments in dynamical systems observed with noise as compared with seven probabilistic and non-probabilistic segmentation algorithms. We additionally compare algorithms that use piecewise linear segment models, and find that TSC recovers segments of a generated piecewise linear trajectory with greater accuracy in the presence of process and observation noise. At the maximum noise level, TSC recovers the ground truth 49% more accurately than alternatives. Furthermore, TSC runs 100× faster than the next most accurate alternative autoregressive models, which require expensive Markov chain Monte Carlo (MCMC)-based inference. We also evaluated TSC on 67 recordings of surgical needle passing and suturing. We supplemented the kinematic recordings with manually annotated visual features that denote grasp and penetration conditions. On this dataset, TSC finds 83% of needle passing transitions and 73% of the suturing transitions annotated by human experts.",
"An option is a short-term skill consisting of a control policy for a specified region of the state space, and a termination condition recognizing leaving that region. In prior work, we proposed an algorithm called Deep Discovery of Options (DDO) to discover options to accelerate reinforcement learning in Atari games. This paper studies an extension to robot imitation learning, called Discovery of Deep Continuous Options (DDCO), where low-level continuous control skills parametrized by deep neural networks are learned from demonstrations. We extend DDO with: (1) a hybrid categorical-continuous distribution model to parametrize high-level policies that can invoke discrete options as well continuous control actions, and (2) a cross-validation method that relaxes DDO's requirement that users specify the number of options to be discovered. We evaluate DDCO in simulation of a 3-link robot in the vertical plane pushing a block with friction and gravity, and in two physical experiments on the da Vinci surgical robot, needle insertion where a needle is grasped and inserted into a silicone tissue phantom, and needle bin picking where needles and pins are grasped from a pile and categorized into bins. In the 3-link arm simulation, results suggest that DDCO can take 3x fewer demonstrations to achieve the same reward compared to a baseline imitation learning approach. In the needle insertion task, DDCO was successful 8/10 times compared to the next most accurate imitation learning baseline 6/10. In the surgical bin picking task, the learned policy successfully grasps a single object in 66 out of 99 attempted grasps, and in all but one case successfully recovered from failed grasps by retrying a second time.",
"Empirically, neural networks that attempt to learn programs from data have exhibited poor generalizability. Moreover, it has traditionally been difficult to reason about the behavior of these models beyond a certain level of input complexity. In order to address these issues, we propose augmenting neural architectures with a key abstraction: recursion. As an application, we implement recursion in the Neural Programmer-Interpreter framework on four tasks: grade-school addition, bubble sort, topological sort, and quicksort. We demonstrate superior generalizability and interpretability with small amounts of training data. Recursion divides the problem into smaller pieces and drastically reduces the domain of each neural network component, making it tractable to prove guarantees about the overall system's behavior. Our experience suggests that in order for neural architectures to robustly learn program semantics, it is necessary to incorporate a concept like recursion.",
"Augmenting an agent's control with useful higher-level behaviors called options can greatly reduce the sample complexity of reinforcement learning, but manually designing options is infeasible in high-dimensional and abstract state spaces. While recent work has proposed several techniques for automated option discovery, they do not scale to multi-level hierarchies and to expressive representations such as deep networks. We present Discovery of Deep Options (DDO), a policy-gradient algorithm that discovers parametrized options from a set of demonstration trajectories, and can be used recursively to discover additional levels of the hierarchy. The scalability of our approach to multi-level hierarchies stems from the decoupling of low-level option discovery from high-level meta-control policy learning, facilitated by under-parametrization of the high level. We demonstrate that using the discovered options to augment the action space of Deep Q-Network agents can accelerate learning by guiding exploration in tasks where random actions are unlikely to reach valuable states. We show that DDO is effective in adding options that accelerate learning in 4 out of 5 Atari RAM environments chosen in our experiments. We also show that DDO can discover structure in robot-assisted surgical videos and kinematics that match expert annotation with 72% accuracy.",
"The problem of automatically generating a computer program from some specification has been studied since the early days of AI. Recently, two competing approaches for automatic program learning have received significant attention: (1) neural program synthesis, where a neural network is conditioned on input/output (I/O) examples and learns to generate a program, and (2) neural program induction, where a neural network generates new outputs directly using a latent program representation. Here, for the first time, we directly compare both approaches on a large-scale, real-world learning task and we additionally contrast to rule-based program synthesis, which uses hand-crafted semantics to guide the program generation. Our neural models use a modified attention RNN to allow encoding of variable-sized sets of I/O pairs, which achieve 92% accuracy on a real-world test set, compared to the 34% accuracy of the previous best neural synthesis approach. The synthesis model also outperforms a comparable induction model on this task, but we more importantly demonstrate that the strength of each approach is highly dependent on the evaluation metric and end-user application. Finally, we show that we can train our neural models to remain very robust to the type of noise expected in real-world data (e.g., typos), while a highly-engineered rule-based system fails entirely.",
"Reinforcement Learning algorithms can learn complex behavioral patterns for sequential decision making tasks wherein an agent interacts with an environment and acquires feedback in the form of rewards sampled from it. Traditionally, such algorithms make decisions, i.e., select actions to execute, at every single time step of the agent-environment interactions. In this paper, we propose a novel framework, Fine Grained Action Repetition (FiGAR), which enables the agent to decide the action as well as the time scale of repeating it. FiGAR can be used for improving any Deep Reinforcement Learning algorithm which maintains an explicit policy estimate by enabling temporal abstractions in the action space. We empirically demonstrate the efficacy of our framework by showing performance improvements on top of three policy search algorithms in different domains: Asynchronous Advantage Actor Critic in the Atari 2600 domain, Trust Region Policy Optimization in Mujoco domain and Deep Deterministic Policy Gradients in the TORCS car racing domain.",
"We describe a framework for multitask deep reinforcement learning guided by policy sketches. Sketches annotate tasks with sequences of named subtasks, providing information about high-level structural relationships among tasks but not how to implement them—specifically not providing the detailed guidance used by much previous work on learning policy abstractions for RL (e.g. intermediate rewards, subtask completion signals, or intrinsic motivations). To learn from sketches, we present a model that associates every subtask with a modular subpolicy, and jointly maximizes reward over full task-specific policies by tying parameters across shared subpolicies. Optimization is accomplished via a decoupled actor-critic training objective that facilitates learning common behaviors from multiple dissimilar reward functions. We evaluate the effectiveness of our approach in three environments featuring both discrete and continuous control, and with sparse rewards that can be obtained only after completing a number of high-level sub-goals. Experiments show that using our approach to learn policies guided by sketches gives better performance than existing techniques for learning task-specific or shared policies, while naturally inducing a library of interpretable primitive behaviors that can be recombined to rapidly adapt to new tasks.",
"Deep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skill is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments show that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks.",
"We propose the Neural Program Lattice (NPL), a neural network that learns to perform complex tasks by composing low-level programs to express high-level programs. Our starting point is the recent work on Neural Programmer-Interpreters (NPI), which can only learn from strong supervision that contains the whole hierarchy of low-level and high-level programs. NPLs remove this limitation by providing the ability to learn from weak supervision consisting only of sequences of low-level operations. We demonstrate the capability of NPL to learn to perform long-hand addition and arrange blocks in a grid-world environment. Experiments show that it performs on par with NPI while using weak supervision in place of most of the strong supervision, thus indicating its ability to infer the high-level program structure from examples containing only the low-level operations.",
"We develop a first line of attack for solving programming competition-style problems from input-output examples using deep learning. The approach is to train a neural network to predict properties of the program that generated the outputs from the inputs. We use the neural network’s predictions to augment search techniques from the programming languages community, including enumerative search and an SMT-based solver. Empirically, we show that our approach leads to an order of magnitude speedup over the strong non-augmented baselines and a Recurrent Neural Network approach, and that we are able to solve problems of difficulty comparable to the simplest problems on programming competition websites.",
"Recent years have seen the proposal of a number of neural architectures for the problem of Program Induction. Given a set of input-output examples, these architectures are able to learn mappings that generalize to new test inputs. While achieving impressive results, these approaches have a number of important limitations: (a) they are computationally expensive and hard to train, (b) a model has to be trained for each task (program) separately, and (c) it is hard to interpret or verify the correctness of the learnt mapping (as it is defined by a neural network). In this paper, we propose a novel technique, Neuro-Symbolic Program Synthesis, to overcome the above-mentioned problems. Once trained, our approach can automatically construct computer programs in a domain-specific language that are consistent with a set of input-output examples provided at test time. Our method is based on two novel neural modules. The first module, called the cross correlation I/O network, given a set of input-output examples, produces a continuous representation of the set of I/O examples. The second module, the Recursive-Reverse-Recursive Neural Network (R3NN), given the continuous representation of the examples, synthesizes a program by incrementally expanding partial programs. We demonstrate the effectiveness of our approach by applying it to the rich and complex domain of regular expression based string transformations. Experiments show that the R3NN model is not only able to construct programs from new input-output examples, but it is also able to construct new programs for tasks that it had never observed before during training.",
"We study a novel architecture and training procedure for locomotion tasks. A high-frequency, low-level \"spinal\" network with access to proprioceptive sensors learns sensorimotor primitives by training on simple tasks. This pre-trained module is fixed and connected to a low-frequency, high-level \"cortical\" network, with access to all sensors, which drives behavior by modulating the inputs to the spinal network. Where a monolithic end-to-end architecture fails completely, learning with a pre-trained spinal module succeeds at multiple high-level tasks, and enables the effective exploration required to learn from sparse rewards. We test our proposed architecture on three simulated bodies: a 16-dimensional swimming snake, a 20-dimensional quadruped, and a 54-dimensional humanoid. Our results are illustrated in the accompanying video at this https URL",
"It is well known that options can make planning more efficient, among their many benefits. Thus far, algorithms for autonomously discovering a set of useful options were heuristic. Naturally, a principled way of finding a set of useful options may be more promising and insightful. In this paper we suggest a mathematical characterization of good sets of options using tools from information theory. This characterization enables us to find conditions for a set of options to be optimal and an algorithm that outputs a useful set of options and illustrate the proposed algorithm in simulation.",
"\n \n Temporal abstraction is key to scaling up learning and planning in reinforcement learning. While planning with temporally extended actions is well understood, creating such abstractions autonomously from data has remained challenging.We tackle this problem in the framework of options [Sutton,Precup and Singh, 1999; Precup, 2000]. We derive policy gradient theorems for options and propose a new option-critic architecture capable of learning both the internal policies and the termination conditions of options, in tandem with the policy over options, and without the need to provide any additional rewards or subgoals. Experimental results in both discrete and continuous environments showcase the flexibility and efficiency of the framework.\n \n",
"This paper introduces an automated skill acquisition framework in reinforcement learning which involves identifying a hierarchical description of the given task in terms of abstract states and extended actions between abstract states. Identifying such structures present in the task provides ways to simplify and speed up reinforcement learning algorithms. These structures also help to generalize such algorithms over multiple tasks without relearning policies from scratch. We use ideas from dynamical systems to find metastable regions in the state space and associate them with abstract states. The spectral clustering algorithm PCCA+ is used to identify suitable abstractions aligned to the underlying structure. Skills are defined in terms of the sequence of actions that lead to transitions between such abstract states. The connectivity information from PCCA+ is used to generate these skills or options. These skills are independent of the learning task and can be efficiently reused across a variety of tasks defined over the same model. This approach works well even without the exact model of the environment by using sample trajectories to construct an approximate estimate. We also present our approach to scaling the skill acquisition framework to complex tasks with large state spaces for which we perform state aggregation using the representation learned from an action conditional video prediction network and use the skill acquisition framework on the aggregated state space.",
"\n \n We present a hierarchical reinforcement learning framework that formulates each task in the hierarchy as a special type of Markov decision process for which the Bellman equation is linear and has analytical solution. Problems of this type, called linearly-solvable MDPs (LMDPs) have interesting properties that can be exploited in a hierarchical setting, such as efficient learning of the optimal value function or task compositionality. The proposed hierarchical approach can also be seen as a novel alternative to solving LMDPs with large state spaces. We derive a hierarchical version of the so-called Z-learning algorithm that learns different tasks simultaneously and show empirically that it significantly outperforms the state-of-the-art learning methods in two classical HRL domains: the taxi domain and an autonomous guided vehicle task.\n \n",
"We describe a question answering model that applies to both images and structured knowledge bases. The model uses natural language strings to automatically assemble neural networks from a collection of composable modules. Parameters for these modules are learned jointly with network-assembly parameters via reinforcement learning, with only (world, question, answer) triples as supervision. Our approach, which we term a dynamic neural model network, achieves state-of-the-art results on benchmark datasets in both visual and structured domains.",
"Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are are hard to train due to their large depth when unfolded. \nWe present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. \nAn essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with upto 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. \nTo achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.",
"We propose the neural programmer-interpreter (NPI): a recurrent and compositional neural network that learns to represent and execute programs. NPI has three learnable components: a task-agnostic recurrent core, a persistent key-value program memory, and domain-specific encoders that enable a single NPI to operate in multiple perceptually diverse environments with distinct affordances. By learning to compose lower-level programs to express higher-level programs, NPI reduces sample complexity and increases generalization ability compared to sequence-to-sequence LSTMs. The program memory allows efficient learning of additional tasks by building on existing programs. NPI can also harness the environment (e.g. a scratch pad with read-write pointers) to cache intermediate results of computation, lessening the long-term memory burden on recurrent hidden units. In this work we train the NPI with fully-supervised execution traces; each program has example sequences of calls to the immediate subprograms conditioned on the input. Rather than training on a huge number of relatively weak labels, NPI learns from a small number of rich examples. We demonstrate the capability of our model to learn several types of compositional programs: addition, sorting, and canonicalizing 3D models. Furthermore, a single NPI learns to execute these programs and all 21 associated subprograms.",
"Deep neural networks have achieved impressive supervised classification performance in many tasks including image recognition, speech recognition, and sequence to sequence learning. However, this success has not been translated to applications like question answering that may involve complex arithmetic and logic reasoning. A major limitation of these models is in their inability to learn even simple arithmetic and logic operations. For example, it has been shown that neural networks fail to learn to add two binary numbers reliably. In this work, we propose Neural Programmer, an end-to-end differentiable neural network augmented with a small set of basic arithmetic and logic operations. Neural Programmer can call these augmented operations over several steps, thereby inducing compositional programs that are more complex than the built-in operations. The model learns from a weak supervision signal which is the result of execution of the correct program, hence it does not require expensive annotation of the correct program itself. The decisions of what operations to call, and what data segments to apply to are inferred by Neural Programmer. Such decisions, during training, are done in a differentiable fashion so that the entire network can be trained jointly by gradient descent. We find that training the model is difficult, but it can be greatly improved by adding random noise to the gradient. On a fairly complex synthetic table-comprehension dataset, traditional recurrent networks and attentional models perform poorly while Neural Programmer typically obtains nearly perfect accuracy.",
"Abstraction and hierarchical information-processing are hallmarks of human and animal intelligence underlying the unrivaled flexibility of behavior in biological systems. Achieving such a flexibility in artificial systems is challenging, even with more and more computational power. Here we investigate the hypothesis that abstraction and hierarchical information-processing might in fact be the consequence of limitations in information-processing power. In particular, we study an information-theoretic framework of bounded rational decision-making that trades off utility maximization against information-processing costs. We apply the basic principle of this framework to perception-action systems with multiple information-processing nodes and derive bounded optimal solutions. We show how the formation of abstractions and decision-making hierarchies depends on information-processing costs. We illustrate the theoretical ideas with example simulations and conclude by formalizing a mathematically unifying optimization principle that could potentially be extended to more complex systems.",
"In this paper, we study the problem of imitation learning of hierarchical policies from demonstrations. The main difficulty in learning hierarchical policies by imitation is that the high level intention structure of the policy, which is often critical for understanding the demonstration, is unobserved. We formulate this problem as active learning of Probabilistic State-Dependent Grammars (PSDGs) from demonstrations. Given a set of expert demonstrations, our approach learns a hierarchical policy by actively selecting demonstrations and using queries to explicate their intentional structure at selected points. Our contributions include a new algorithm for imitation learning of hierarchical policies and principled heuristics for the selection of demonstrations and queries. Experimental results in five different domains exhibit successful learning using fewer queries than a variety of alternatives.",
"We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network [23] but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch [2] to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering [22] and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.",
"Despite the recent achievements in machine learning, we are still very far from achieving real artificial intelligence. In this paper, we discuss the limitations of standard deep learning approaches and show that some of these limitations can be overcome by learning how to grow the complexity of a model in a structured way. Specifically, we study the simplest sequence prediction problems that are beyond the scope of what is learnable with standard recurrent networks, algorithmically generated sequences which can only be learned by models which have the capacity to count and to memorize sequences. We show that some basic algorithms can be learned from sequential data using a recurrent network associated with a trainable memory.",
"We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-toend, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.",
"Many reinforcement learning (RL) tasks, especially in robotics, consist of multiple sub-tasks that are strongly structured. Such task structures can be exploited by incorporating hierarchical policies that consist of gating networks and sub-policies. However, this concept has only been partially explored for real world settings and complete methods, derived from first principles, are needed. Real world settings are challenging due to large and continuous state-action spaces that are prohibitive for exhaustive sampling methods. We define the problem of learning sub-policies in continuous state action spaces as finding a hierarchical policy that is composed of a high-level gating policy to select the low-level sub-policies for execution by the agent. In order to efficiently share experience with all sub-policies, also called inter-policy learning, we treat these sub-policies as latent variables which allows for distribution of the update information between the sub-policies. We present three different variants of our algorithm, designed to be suitable for a wide variety of real world robot learning tasks and evaluate our algorithms in two real robot learning scenarios as well as several simulations and comparisons.",
"We describe CST, an online algorithm for constructing skill trees from demonstration trajectories. CST segments a demonstration trajectory into a chain of component skills, where each skill has a goal and is assigned a suitable abstraction from an abstraction library. These properties permit skills to be improved efficiently using a policy learning algorithm. Chains from multiple demonstration trajectories are merged into a skill tree. We show that CST can be used to acquire skills from human demonstration in a dynamic continuous domain, and from both expert demonstration and learned control sequences on the uBot-5 mobile manipulator.",
"We introduce a skill discovery method for reinforcement learning in continuous domains that constructs chains of skills leading to an end-of-task reward. We demonstrate experimentally that it creates appropriate skills and achieves performance benefits in a challenging continuous domain.",
"We study a subclass of POMDPs, called Deterministic POMDPs, that is characterized by deterministic actions and observations. These models do not provide the same generality of POMDPs yet they capture a number of interesting and challenging problems, and permit more efficient algorithms. Indeed, some of the recent work in planning is built around such assumptions mainly by the quest of amenable models more expressive than the classical deterministic models. We provide results about the fundamental properties of Deterministic POMDPs, their relation with AND/OR search problems and algorithms, and their computational complexity.",
"We present a new method for automatically creating useful temporal abstractions in reinforcement learning. We argue that states that allow the agent to transition to a different region of the state space are useful subgoals, and propose a method for identifying them using the concept of relative novelty. When such a state is identified, a temporally-extended activity (e.g., an option) is generated that takes the agent efficiently to this state. We illustrate the utility of the method in a number of tasks.",
"We show a close relationship between the Expectation - Maximization (EM) algorithm and direct optimization algorithms such as gradient-based methods for parameter learning. We identify analytic conditions under which EM exhibits Newton-like behavior, and conditions under which it possesses poor, first-order convergence. Based on this analysis, we propose two novel algorithms for maximum likelihood estimation of latent variable models, and report empirical results showing that, as predicted by theory, the proposed new algorithms can substantially outperform standard EM in terms of speed of convergence in certain cases.",
"In this paper, we present a method for recognising an agent's behaviour in dynamic, noisy, uncertain domains, and across multiple levels of abstraction. We term this problem on-line plan recognition under uncertainty and view it generally as probabilistic inference on the stochastic process representing the execution of the agent's plan. Our contributions in this paper are twofold. In terms of probabilistic inference, we introduce the Abstract Hidden Markov Model (AHMM), a novel type of stochastic processes, provide its dynamic Bayesian network (DBN) structure and analyse the properties of this network. We then describe an application of the Rao-Blackwellised Particle Filter to the AHMM which allows us to construct an efficient, hybrid inference method for this model. In terms of plan recognition, we propose a novel plan recognition framework based on the AHMM as the plan execution model. The Rao-Blackwellised hybrid inference for AHMM can take advantage of the independence properties inherent in a model of plan execution, leading to an algorithm for online probabilistic plan recognition that scales well with the number of levels in the plan hierarchy. This illustrates that while stochastic models for plan execution can be complex, they exhibit special structures which, if exploited, can lead to efficient plan recognition algorithms. We demonstrate the usefulness of the AHMM framework via a behaviour recognition system in a complex spatial environment using distributed video surveillance data.",
"This paper presents a method by which a reinforcement learning agent can automatically discover certain types of subgoals online. By creating useful new subgoals while learning, the agent is able to accelerate learning on the current task and to transfer its expertise to other, related tasks through the reuse of its ability to attain subgoals. The agent discovers subgoals based on commonalities across multiple paths to a solution. We cast the task of finding these commonalities as a multiple-instance learning problem and use the concept of diverse density to find solutions. We illustrate this approach using several gridworld tasks.",
"AI planning benefits greatly from the use of temporally-extended or macroactions. Macro-actions allow for faster and more efficient planning as well as the reuse of knowledge from previous solutions. In recent years, a significant amount of research has been devoted to incorporating macro-actions in learned controllers, particularly in the context of Reinforcement Learning. One general approach is the use of options (temporally-extended actions) in Reinforcement Learning [22]. While the properties of options are well understood, it is not clear how to find new options automatically. In this thesis we propose two new algorithms for discovering options and compare them to one algorithm from the literature. We also contribute a new algorithm for learning with options which improves on the performance of two widely used learning algorithms. Extensive experiments are used to demonstrate the effectiveness of the proposed algorithms.",
"Reinforcement learning addresses the problem of learning to select actions in order to maximize one's performance in unknown environments. To scale reinforcement learning to complex real-world tasks, such as typically studied in AI, one must ultimately be able to discover the structure in the world, in order to abstract away the myriad of details and to operate in more tractable problem spaces. \n \nThis paper presents the SKILLS algorithm. SKILLS discovers skills, which are partially defined action policies that arise in the context of multiple, related tasks. Skills collapse whole action sequences into single operators. They are learned by minimizing the compactness of action policies, using a description length argument on their representation. Empirical results in simple grid navigation tasks illustrate the successful discovery of structure in reinforcement learning."
],
"authors": [
{
"name": [
"S. Krishnan",
"Animesh Garg",
"Richard Liaw",
"Brijen Thananjeyan",
"Lauren Miller",
"Florian T. Pokorny",
"Ken Goldberg"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Krishnan",
"Animesh Garg",
"S. Patil",
"Colin S. Lea",
"Gregory Hager",
"P. Abbeel",
"Ken Goldberg"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Krishnan",
"Roy Fox",
"Ion Stoica",
"Ken Goldberg"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jonathon Cai",
"Richard Shin",
"D. Song"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Roy Fox",
"S. Krishnan",
"Ion Stoica",
"Ken Goldberg"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jacob Devlin",
"Jonathan Uesato",
"Surya Bhupatiraju",
"Rishabh Singh",
"Abdel-rahman Mohamed",
"Pushmeet Kohli"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sahil Sharma",
"A. Lakshminarayanan",
"Balaraman Ravindran"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jacob Andreas",
"D. Klein",
"S. Levine"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Carlos Florensa",
"Yan Duan",
"P. Abbeel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Chengtao Li",
"Daniel Tarlow",
"Alexander L. Gaunt",
"Marc Brockschmidt",
"Nate Kushman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Matej Balog",
"Alexander L. Gaunt",
"Marc Brockschmidt",
"Sebastian Nowozin",
"Daniel Tarlow"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Emilio Parisotto",
"Abdel-rahman Mohamed",
"Rishabh Singh",
"Lihong Li",
"Dengyong Zhou",
"Pushmeet Kohli"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"N. Heess",
"Greg Wayne",
"Yuval Tassa",
"T. Lillicrap",
"Martin A. Riedmiller",
"David Silver"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Roy Fox",
"Michal Moshkovitz",
"Naftali Tishby"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Pierre-Luc Bacon",
"J. Harb",
"Doina Precup"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Srinivas",
"Ramnandan Krishnamurthy",
"Peeyush Kumar",
"Balaraman Ravindran"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Anders Jonsson",
"V. Gómez"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jacob Andreas",
"Marcus Rohrbach",
"Trevor Darrell",
"D. Klein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Lukasz Kaiser",
"I. Sutskever"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Scott E. Reed",
"Nando de Freitas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Arvind Neelakantan",
"Quoc V. Le",
"I. Sutskever"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Tim Genewein",
"Felix Leibfried",
"Jordi Grau-Moya",
"Daniel A. Braun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Hamidi",
"Prasad Tadepalli",
"R. Goetschalckx",
"Alan Fern"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sainbayar Sukhbaatar",
"Arthur Szlam",
"J. Weston",
"R. Fergus"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Armand Joulin",
"Tomas Mikolov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Alex Graves",
"Greg Wayne",
"Ivo Danihelka"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Christian Daniel",
"G. Neumann",
"Jan Peters"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. Konidaris",
"S. Kuindersma",
"R. Grupen",
"A. Barto"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. Konidaris",
"A. Barto"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Blai Bonet"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Özgür Simsek",
"A. Barto"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Salakhutdinov",
"S. Roweis",
"Zoubin Ghahramani"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"H. Bui",
"S. Venkatesh",
"G. West"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. McGovern",
"A. Barto"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [
"S. Thrun",
"Anton Schwartz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
null,
"1710.05421",
"1704.06611",
"1703.08294",
"1703.07469",
"1702.06054",
"1611.01796",
"1704.03012",
null,
"1611.01989",
"1611.01855",
"1610.05182",
"1609.05524",
"1609.05140",
"1605.05359",
"1603.03267",
"1601.01705",
"1511.08228",
"1511.06279",
"1511.04834",
null,
null,
null,
"1503.01007",
"1410.5401",
null,
null,
null,
"1205.2659",
null,
null,
"1106.0672",
null,
null,
null
],
"s2_corpus_id": [
"52905591",
"3983555",
"11787854",
"1844940",
"9387066",
"6933074",
"260497091",
"14711954",
"7774489",
"34816748",
"2906360",
"15904815",
"9692454",
"877206",
"6627476",
"18901011",
"9045501",
"3130692",
"2009318",
"7034786",
"6715185",
"17148170",
"3051736",
"1399322",
"172783",
"15299054",
"11711975",
"2433807",
"2122274",
"1934251",
"18328568",
"8703651",
"263862416",
"1223826",
"9293260",
"9569834"
],
"intents": [
[
"background"
],
[
"background"
],
[
"methodology",
"background"
],
[
"methodology",
"background"
],
[
"methodology",
"background"
],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"methodology",
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[],
[
"background"
],
[],
[
"background"
],
[
"methodology",
"background"
],
[],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[
"methodology"
],
[
"methodology",
"background"
],
[
"background"
],
[],
[
"background"
]
],
"isInfluential": [
false,
false,
false,
true,
true,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | null | 84 | 0.357143 | 0.555556 | 0.25 | null | null | null | null | null | rJl63fZRb |
wang|evidence_aggregation_for_answer_reranking_in_opendomain_question_answering|ICLR_cc_2018_Conference | 13764176 | 1711.05116 | Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering | Recently, it has become a popular approach to answer open-domain questions by first searching for question-related passages and then applying reading comprehension models to extract answers. Existing works usually extract answers from single passages independently, and thus do not fully make use of the multiple retrieved passages, especially for questions that require several pieces of evidence, which may appear in different passages, to be answered. The above observations raise the problem of evidence aggregation from multiple passages. In this paper, we treat this problem as answer re-ranking. Specifically, based on the answer candidates generated by an existing state-of-the-art QA model, we propose two different re-ranking methods, strength-based and coverage-based re-rankers, which make use of the evidence aggregated from different passages to help entail the ground-truth answer for the question. Our model achieves state-of-the-art results on three public open-domain QA datasets, Quasar-T, SearchQA and the open-domain version of TriviaQA, with about 8\% improvement on the former two datasets. | {
"name": [
"shuohang wang",
"jing jiang",
"wei zhang",
"xiaoxiao guo",
"shiyu chang",
"zhiguo wang",
"tim klinger",
"gerald tesauro",
"murray campbell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "AI Foundations -Learning",
"institution": "IBM Research",
"location": "{'region': 'AI'}"
},
{
"laboratory": "AI Foundations -Learning",
"institution": "IBM Research",
"location": "{'region': 'AI'}"
},
{
"laboratory": "AI Foundations -Learning",
"institution": "IBM Research",
"location": "{'region': 'AI'}"
},
{
"laboratory": "AI Foundations -Learning",
"institution": "IBM Research",
"location": "{'region': 'AI'}"
},
{
"laboratory": "AI Foundations -Learning",
"institution": "IBM Research",
"location": "{'region': 'AI'}"
},
{
"laboratory": "AI Foundations -Learning",
"institution": "IBM Research",
"location": "{'region': 'AI'}"
},
{
"laboratory": "AI Foundations -Learning",
"institution": "IBM Research",
"location": "{'region': 'AI'}"
}
]
} | We propose a method that makes use of information from multiple passages for open-domain QA. | [
"Question Answering",
"Deep Learning"
] | null | 2018-02-15 22:29:26 | 36 | 158 | 31 | null | null | null | null | null | null | true | The pros and cons of this paper, as cited by the reviewers, can be summarized as follows:
Pros:
* Solid experimental results against strong baselines on a task of great interest
* Method presented is appropriate for the task
* Paper is presented relatively clearly, especially after revision
Cons:
* The paper is somewhat incremental. The basic idea of aggregating across multiple examples was presented in Kadlec et al. 2016, but the methodology here is different.
| {
"review_id": [
"rJZQa3YgG",
"S1OAdY3eG",
"H1pRH5def"
],
"review": [
{
"title": "title: Very good contribution on multi-sentences answer reranking with significant experimental results.",
"paper_summary": null,
"main_review": "main_review: The authors propose an approach where they aggregate, for each candidate answer, text from supporting passages. They make use of two ranking components. A strength-based re-ranker captures how often a candidate answer would be selected while a coverage-based re-ranker aims to estimate the coverage of the question by the supporting passages. Potential answers are extracted using a machine comprehension model. A bi-LSTM model is used to estimate the coverage of the question. A weighted combination of the outputs of both components generates the final ranking (using softmax). \nThis article is really well written and clearly describes the proposed scheme. Their experiments clearly indicate that the combination of the two re-ranking components outperforms raw machine comprehension approaches. The paper also provides an interesting analysis of various design issues. Finally they situate the contribution with respect to some related work pertaining to open domain QA. This paper seems to me like an interesting and significant contribution.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Incremental idea but with solid results",
"paper_summary": null,
"main_review": "main_review: Traditional open-domain QA systems typically have two steps: passage retrieval and aggregating answers extracted from the retrieved passages. This paper essentially follows the same paradigm, but leverages the state-of-the-art reading comprehension models for answer extraction, and develops the neural network models for the aggregating component. Although the idea seems incremental, the experimental results do seem solid. The paper is generally easy to follow, but in several places the presentation can be further improved.\n\nDetailed comments/questions:\n 1. In Sec. 2.2, the justification for adding H^{aq} and \\bar{H}^{aq} is to downweigh the impact of stop word matching. I feel this is a somewhat indirect and less effective design, if avoiding stop words is really the reason. A standard preprocessing step may be better.\n 2. In Sec. 2.3, it seems that the final score is just the sum of three individual normalized scores. It's not truly a \"weighted\" combination, where the weights are typically assumed to be tuned.\n 3. Figure 3: Connecting the dots in the two subfigures on the right does not make sense. Bar charts should be used instead.\n 4. The end of Sec. 4.2: I feel it's a bad example, as the passage does not really support the answer. The fact that \"Sesame Street\" got picked is probably just because it's more famous.\n 5. It'd be interesting to see how traditional IR answer aggregation methods perform, such as simple classifiers or heuristics by word matching (or weighted by TFIDF) and counting. This will demonstrates the true advantages of leveraging modern NN models.\n\nPros:\n 1. Updating a traditional open-domain QA approach with neural models\n 2. Experiments demonstrate solid positive results\n\nCons:\n 1. The idea seems incremental\n 2. Presentation could be improved\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: This is a neural-based approach for improving QA systems by aggregating answers from multiple passages.",
"paper_summary": null,
"main_review": "main_review: The paper is clear, although there are many English mistakes (that should be corrected).\nThe proposed method aggregates answers from multiple passages in the context of QA. The new method is motivated well and departs from prior work. Experiments on three datasets show the proposed method to be notably better than several baselines (although two of the baselines, GA and BiDAF, appear tremendously weak). The analysis of the results is interesting and largely convincing, although a more dedicated error analysis or discussion of the limitation of the proposed approach would be welcome.\n\nMinor point: in the description of Quasar-T, the IR model is described as lucene index. An index is not an IR model. Lucene is an IR system that implements various IR models. The terminology should be corrected here. \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.7777777910232544,
0.5555555820465088,
0.5555555820465088
],
"confidence": [
0.5,
0.75,
0.25
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to ICLR 2018 Conference Paper758 AnonReviewer2",
"Response to ICLR 2018 Conference Paper758 AnonReviewer3",
"Response to ICLR 2018 Conference Paper758 AnonReviewer1"
],
"comment": [
"Thank you for your kind review. We have improved the presentation and added new discussions which we hope will further improve. ",
"Thank you for your feedback and thorough review. We have revised the paper to address the issues you raised and fixed the presentation issues.\n\nABOUT THE NOVELTY: \n\nAlthough traditional QA systems also have the answer re-ranking component, this paper focuses on a novel problem of ``text evidence aggregation'': Here the problem is essentially modeling the relationship between the question and multiple passages (i.e., text evidence), where different passages could enhance or complement each other. For example, the proposed neural re-ranker models the complementary scenario, i.e., whether the union of different passages could cover different facts in a question, thus the attention-based model is a natural fit.\n\nIn contrast, previous answer re-ranking research did not address the above problem: (1) traditional QA systems like (Ferrucci et al., 2010) used similar passage retrieval approach with answer candidates added to the queries. However they usually consider each passage individually for extracting features of answers, whereas we utilize the information of union/co-occurrence of multiple passages by composing them with neural networks. (2) KB-QA systems (Bast and Haussmann, 2015; Yih et al., 2015; Xu et al., 2016) sometimes use text evidence to help answer re-ranking, where the features are also extracted on the pair of a question and a single-passage but ignored the union information among multiple passages.\n\nWe have added the above discussion to our paper (Page 11).\n\nRESPONSE TO THE DETAILED QUESTIONS:\n\nQ1: In Sec. 2.2, the justification for adding H^{aq} and \\bar{H}^{aq} is to downweigh the impact of stop word matching. I feel this is a somewhat indirect and less effective design, if avoiding stop words is really the reason. A standard preprocessing step may be better.\n \nA1: We follow the model design in (Wang and Jiang 2017). The reason for adding H^{aq} and \\bar{H}^{aq} is not only to downweigh the stop word matching, but also to take into consideration the semantic information at each position. Therefore, the sentence-level matching model (Eq. (5) in the next paragraph) could potentially learn to distinguish the effects of the element-wise comparison vectors with the original lexical information. We’ve clarified this on Page 5.\n \nQ2: In Sec. 2.3, it seems that the final score is just the sum of three individual normalized scores. It's not truly a \"weighted\" combination, where the weights are typically assumed to be tuned.\n\nA2: We did tune the assigned weights for the three types of normalized scores on the dev set. The tuned version gives some improvement on dev and results in slightly better test scores, compared to simply summing up the three scores.\n \nQ3: Figure 3: Connecting the dots in the two subfigures on the right does not make sense. Bar charts should be used instead.\n \nA3: We have changed the subfigures to bar charts in the updated version.\n \nQ4: The end of Sec. 4.2: I feel it's a bad example, as the passage does not really support the answer. The fact that \"Sesame Street\" got picked is probably just because it's more famous.\n \nA4: We agree that the passages in Table 6 do not provide full evidence to the question (unlike the example in Figure 1b where the passages fully support all the facts in the question). 
However, the “Sesame Street” got picked not because it is more famous, but because it has supporting evidence in the form of the \"award-winning\" and \"children's TV show\" facts, while the candidate \"Great Dane\" only covers \"1969\".\n\nWe selected this example in order to show another common case of realistic problems in Open-Domain QA, where the question is complex and the top-K retrieved passages cannot provide full evidence. In this case, our model is able to select the candidate with evidence covering more facts in the question (i.e. the candidate that is more likely to be approximately correct).\n\n \nQ5: It'd be interesting to see how traditional IR answer aggregation methods perform, such as simple classifiers or heuristics by word matching (or weighted by TFIDF) and counting. This will demonstrate the true advantages of leveraging modern NN models.\n \nA5: Thank you for the valuable advice! We’ve added a baseline method with BM25 value to rerank the answers based on the aggregated passages, together with the analysis about it in the current version. In summary, the BM25 model improved the F1 scores but sometimes caused a decrease in the EM scores. This is mainly for two reasons: (1) BM25 relies on bag-of-word representation, so context information is not taken into consideration. Also it does not model the phrase similarities. (2) shorter answers are preferred by BM25. For example when answer candidate A is a subsequence of B, then according to our way of collecting pseudo passages, the pseudo passage of A is always a superset of the pseudo passage of B. Therefore F1 scores are often improved while EM declines.\n",
"Thank you for your valuable comments! We corrected the grammar and spelling issues and revised the Lucene description on Page 6.\n\nWe provided additional discussion in the conclusion section. Our analysis shows that the instances which were incorrectly predicted require complex reasoning and sometimes commonsense knowledge to get right. We believe that further improvement in these areas has the potential to greatly improve performance in these difficult multi-passage reasoning scenarios. \n\nAbout baselines:\nThe two baselines, GA and BiDAF, came from the dataset papers. Besides these two, we also compared with the R^3 baseline. This method is from the recent work (Wang et al, 2017), which improves previous state-of-the-art neural-based open-domain QA system (Chen et al., 2017) on 4 out of 5 public datasets. As a result, we believe that this baseline reflects the state-of-the-art, thus our experimental comparison is reasonable.\n"
]
} | {
"paperhash": [
"wang|r3:_reinforced_reader-ranker_for_open-domain_question_answering",
"dhingra|quasar:_datasets_for_question_answering_by_search_and_reading",
"tan|s-net:_from_answer_extraction_to_answer_generation_for_machine_reading_comprehension",
"lei|deriving_neural_architectures_from_sequence_and_graph_kernels",
"buck|ask_the_right_questions:_active_question_reformulation_with_reinforcement_learning",
"joshi|triviaqa:_a_large_scale_distantly_supervised_challenge_dataset_for_reading_comprehension",
"dunn|searchqa:_a_new_q&a_dataset_augmented_with_context_from_a_search_engine",
"yu|improved_neural_relation_detection_for_knowledge_base_question_answering",
"chen|reading_wikipedia_to_answer_open-domain_questions",
"seo|bidirectional_attention_flow_for_machine_comprehension",
"wang|machine_comprehension_using_match-lstm_and_answer_pointer",
"cui|attention-over-attention_neural_networks_for_reading_comprehension",
"rajpurkar|squad:_100,000+_questions_for_machine_comprehension_of_text",
"trischler|natural_language_comprehension_with_the_epireader",
"parikh|a_decomposable_attention_model_for_natural_language_inference",
"dhingra|gated-attention_readers_for_text_comprehension",
"kadlec|text_understanding_with_the_attention_sum_reader_network",
"xu|question_answering_on_freebase_via_relation_extraction_and_textual_evidence",
"dyer|recurrent_neural_network_grammars",
"wang|learning_natural_language_inference_with_lstm",
"bast|more_accurate_question_answering_on_freebase",
"yih|semantic_parsing_via_staged_query_graph_generation:_question_answering_with_knowledge_base",
"hermann|teaching_machines_to_read_and_comprehend",
"bordes|large-scale_simple_question_answering_with_memory_networks",
"kingma|adam:_a_method_for_stochastic_optimization",
"pennington|glove:_global_vectors_for_word_representation",
"berant|semantic_parsing_on_freebase_from_question-answer_pairs",
"ferrucci|building_watson:_an_overview_of_the_deepqa_project",
"robertson|the_probabilistic_relevance_framework:_bm25_and_beyond",
"huang|forest_reranking:_discriminative_parsing_with_non-local_features",
"kwok|scaling_question_answering_to_the_web",
"collins|discriminative_reranking_for_natural_language_parsing",
"hochreiter|long_short-term_memory",
"green|baseball:_an_automatic_question-answerer",
"|under_review_as_a_conference_paper_at_iclr_2020_completely_free-form_sampling_in_image_space_:_spatial_transformers",
"shen|discriminative_reranking_for_machine_translation",
"voorhees|the_trec-8_question_answering_track_report"
],
"title": [
"R3: Reinforced Reader-Ranker for Open-Domain Question Answering",
"Quasar: Datasets for Question Answering by Search and Reading",
"S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension",
"Deriving Neural Architectures from Sequence and Graph Kernels",
"Ask the Right Questions: Active Question Reformulation with Reinforcement Learning",
"TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension",
"SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine",
"Improved Neural Relation Detection for Knowledge Base Question Answering",
"Reading Wikipedia to Answer Open-Domain Questions",
"Bidirectional Attention Flow for Machine Comprehension",
"Machine Comprehension Using Match-LSTM and Answer Pointer",
"Attention-over-Attention Neural Networks for Reading Comprehension",
"SQuAD: 100,000+ Questions for Machine Comprehension of Text",
"Natural Language Comprehension with the EpiReader",
"A Decomposable Attention Model for Natural Language Inference",
"Gated-Attention Readers for Text Comprehension",
"Text Understanding with the Attention Sum Reader Network",
"Question Answering on Freebase via Relation Extraction and Textual Evidence",
"Recurrent Neural Network Grammars",
"Learning Natural Language Inference with LSTM",
"More Accurate Question Answering on Freebase",
"Semantic Parsing via Staged Query Graph Generation: Question Answering with Knowledge Base",
"Teaching Machines to Read and Comprehend",
"Large-scale Simple Question Answering with Memory Networks",
"Adam: A Method for Stochastic Optimization",
"GloVe: Global Vectors for Word Representation",
"Semantic Parsing on Freebase from Question-Answer Pairs",
"Building Watson: An Overview of the DeepQA Project",
"The Probabilistic Relevance Framework: BM25 and Beyond",
"Forest Reranking: Discriminative Parsing with Non-Local Features",
"Scaling question answering to the Web",
"Discriminative Reranking for Natural Language Parsing",
"Long Short-Term Memory",
"Baseball: an automatic question-answerer",
"Under review as a conference paper at ICLR 2020 completely free-form sampling in image space : Spatial Transformers",
"Discriminative Reranking for Machine Translation",
"The TREC-8 Question Answering Track Report"
],
"abstract": [
"In recent years researchers have achieved considerable success applying neural network methods to question answering (QA). These approaches have achieved state of the art results in simplified closed-domain settings such as the SQuAD (Rajpurkar et al., 2016) dataset, which provides a pre-selected passage, from which the answer to a given question may be extracted. More recently, researchers have begun to tackle open-domain QA, in which the model is given a question and access to a large corpus (e.g., wikipedia) instead of a pre-selected passage (Chen et al., 2017a). This setting is more complex as it requires large-scale search for relevant passages by an information retrieval component, combined with a reading comprehension model that \"reads\" the passages to generate an answer to the question. Performance in this setting lags considerably behind closed-domain performance. In this paper, we present a novel open-domain QA system called Reinforced Ranker-Reader $(R^3)$, based on two algorithmic innovations. First, we propose a new pipeline for open-domain QA with a Ranker component, which learns to rank retrieved passages in terms of likelihood of generating the ground-truth answer to a given question. Second, we propose a novel method that jointly trains the Ranker along with an answer-generation Reader model, based on reinforcement learning. We report extensive experimental results showing that our method significantly improves on the state of the art for multiple open-domain QA datasets.",
"We present two new large-scale datasets aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. The Quasar-S dataset consists of 37000 cloze-style (fill-in-the-gap) queries constructed from definitions of software entity tags on the popular website Stack Overflow. The posts and comments on the website serve as the background corpus for answering the cloze questions. The Quasar-T dataset consists of 43000 open-domain trivia questions and their answers obtained from various internet sources. ClueWeb09 serves as the background corpus for extracting these answers. We pose these datasets as a challenge for two related subtasks of factoid Question Answering: (1) searching for relevant pieces of text that include the correct answer to a query, and (2) reading the retrieved text to answer the query. We also describe a retrieval system for extracting relevant sentences and documents from the corpus given a query, and include these in the release for researchers wishing to only focus on (2). We evaluate several baselines on both datasets, ranging from simple heuristics to powerful neural models, and show that these lag behind human performance by 16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at this https URL .",
"In this paper, we present a novel approach to machine reading comprehension for the MS-MARCO dataset. Unlike the SQuAD dataset that aims to answer a question with exact text spans in a passage, the MS-MARCO dataset defines the task as answering a question from multiple passages and the words in the answer are not necessary in the passages. We therefore develop an extraction-then-synthesis framework to synthesize answers from extraction results. Specifically, the answer extraction model is first employed to predict the most important sub-spans from the passage as evidence, and the answer synthesis model takes the evidence as additional features along with the question and passage to further elaborate the final answers. We build the answer extraction model with state-of-the-art neural networks for single passage reading comprehension, and propose an additional task of passage ranking to help answer extraction in multiple passages. The answer synthesis model is based on the sequence-to-sequence neural networks with extracted evidences as features. Experiments show that our extraction-then-synthesis method outperforms state-of-the-art methods.",
"The design of neural architectures for structured objects is typically guided by experimental insights rather than a formal process. In this work, we appeal to kernels over combinatorial structures, such as sequences and graphs, to derive appropriate neural operations. We introduce a class of deep recurrent neural operations and formally characterize their associated kernel spaces. Our recurrent modules compare the input to virtual reference objects (cf. filters in CNN) via the kernels. Similar to traditional neural operations, these reference objects are parameterized and directly optimized in end-to-end training. We empirically evaluate the proposed class of neural architectures on standard applications such as language modeling and molecular graph regression, achieving state-of-the-art results across these applications.",
"We frame Question Answering (QA) as a Reinforcement Learning task, an approach that we call Active Question Answering. We propose an agent that sits between the user and a black box QA system and learns to reformulate questions to elicit the best possible answers. The agent probes the system with, potentially many, natural language reformulations of an initial question and aggregates the returned evidence to yield the best answer. The reformulation system is trained end-to-end to maximize answer quality using policy gradient. We evaluate on SearchQA, a dataset of complex questions extracted from Jeopardy!. The agent outperforms a state-of-the-art base model, playing the role of the environment, and other benchmarks. We also analyze the language that the agent has learned while interacting with the question answering system. We find that successful question reformulations look quite different from natural language paraphrases. The agent is able to discover non-trivial reformulation strategies that resemble classic information retrieval techniques such as term re-weighting (tf-idf) and stemming.",
"We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study.",
"We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.",
"Relation detection is a core component of many NLP applications including Knowledge Base Question Answering (KBQA). In this paper, we propose a hierarchical recurrent neural network enhanced by residual learning which detects KB relations given an input question. Our method uses deep residual bidirectional LSTMs to compare questions and relation names via different levels of abstraction. Additionally, we propose a simple KBQA system that integrates entity linking and our proposed relation detector to make the two components enhance each other. Our experimental results show that our approach not only achieves outstanding relation detection performance, but more importantly, it helps our KBQA system achieve state-of-the-art accuracy for both single-relation (SimpleQuestions) and multi-relation (WebQSP) QA benchmarks.",
"This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article. This task of machine reading at scale combines the challenges of document retrieval (finding the relevant articles) with that of machine comprehension of text (identifying the answer spans from those articles). Our approach combines a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA datasets indicate that (1) both modules are highly competitive with respect to existing counterparts and (2) multitask learning using distant supervision on their combination is an effective complete system on this challenging task.",
"Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typically these methods use attention to focus on a small portion of the context and summarize it with a fixed-size vector, couple attentions temporally, and/or often form a uni-directional attention. In this paper we introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage hierarchical process that represents the context at different levels of granularity and uses bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test.",
"Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.",
"Cloze-style reading comprehension is a representative problem in mining relationship between document and query. In this paper, we present a simple but novel model called attention-over-attention reader for better solving cloze-style reading comprehension task. The proposed model aims to place another attention mechanism over the document-level attention and induces “attended attention” for final answer predictions. One advantage of our model is that it is simpler than related works while giving excellent performance. In addition to the primary model, we also propose an N-best re-ranking strategy to double check the validity of the candidates and further improve the performance. Experimental results show that the proposed methods significantly outperform various state-of-the-art systems by a large margin in public datasets, such as CNN and Children’s Book Test.",
"We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. \nThe dataset is freely available at this https URL",
"We present the EpiReader, a novel model for machine comprehension of text. Machine comprehension of unstructured, real-world text is a major research goal for natural language processing. Current tests of machine comprehension pose questions whose answers can be inferred from some supporting text, and evaluate a model's response to the questions. The EpiReader is an end-to-end neural model comprising two components: the first component proposes a small set of candidate answers after comparing a question to its supporting text, and the second component formulates hypotheses using the proposed candidates and the question, then reranks the hypotheses based on their estimated concordance with the supporting text. We present experiments demonstrating that the EpiReader sets a new state-of-the-art on the CNN and Children's Book Test machine comprehension benchmarks, outperforming previous neural models by a significant margin.",
"We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.",
"In this paper we study the problem of answering cloze-style questions over documents. Our model, the Gated-Attention (GA) Reader, integrates a multi-hop architecture with a novel attention mechanism, which is based on multiplicative interactions between the query embedding and the intermediate states of a recurrent neural network document reader. This enables the reader to build query-specific representations of tokens in the document for accurate answer selection. The GA Reader obtains state-of-the-art results on three benchmarks for this task–the CNN & Daily Mail news stories and the Who Did What dataset. The effectiveness of multiplicative interaction is demonstrated by an ablation study, and by comparing to alternative compositional operators for implementing the gated-attention.",
"Several large cloze-style context-question-answer datasets have been introduced recently: the CNN and Daily Mail news data and the Children's Book Test. Thanks to the size of these datasets, the associated text comprehension task is well suited for deep-learning techniques that currently seem to outperform all alternative approaches. We present a new, simple model that uses attention to directly pick the answer from the context as opposed to computing the answer using a blended representation of words in the document as is usual in similar models. This makes the model particularly suitable for question-answering problems where the answer is a single word from the document. Ensemble of our models sets new state of the art on all evaluated datasets.",
"Existing knowledge-based question answering systems often rely on small annotated training data. While shallow methods like relation extraction are robust to data scarcity, they are less expressive than the deep meaning representation methods like semantic parsing, thereby failing at answering questions involving multiple constraints. Here we alleviate this problem by empowering a relation extraction method with additional evidence from Wikipedia. We first present a neural network based relation extractor to retrieve the candidate answers from Freebase, and then infer over Wikipedia to validate these answers. Experiments on the WebQuestions question answering dataset show that our method achieves an F_1 of 53.3%, a substantial improvement over the state-of-the-art.",
"We introduce recurrent neural network grammars, probabilistic models of sentences with explicit phrase structure. We explain efficient inference procedures that allow application to both parsing and language modeling. Experiments show that they provide better parsing in English than any single previously published supervised generative model and better language modeling than state-of-the-art sequential RNNs in English and Chinese.",
"Natural language inference (NLI) is a fundamentally important task in natural language processing that has many applications. The recently released Stanford Natural Language Inference (SNLI) corpus has made it possible to develop and evaluate learning-centered methods such as deep neural networks for natural language inference (NLI). In this paper, we propose a special long short-term memory (LSTM) architecture for NLI. Our model builds on top of a recently proposed neural attention model for NLI but is based on a significantly different idea. Instead of deriving sentence embeddings for the premise and the hypothesis to be used for classification, our solution uses a match-LSTM to perform word-by-word matching of the hypothesis with the premise. This LSTM is able to place more emphasis on important word-level matching results. In particular, we observe that this LSTM remembers important mismatches that are critical for predicting the contradiction or the neutral relationship label. On the SNLI corpus, our model achieves an accuracy of 86.1%, outperforming the state of the art.",
"Real-world factoid or list questions often have a simple structure, yet are hard to match to facts in a given knowledge base due to high representational and linguistic variability. For example, to answer \"who is the ceo of apple\" on Freebase requires a match to an abstract \"leadership\" entity with three relations \"role\", \"organization\" and \"person\", and two other entities \"apple inc\" and \"managing director\". Recent years have seen a surge of research activity on learning-based solutions for this method. We further advance the state of the art by adopting learning-to-rank methodology and by fully addressing the inherent entity recognition problem, which was neglected in recent works. We evaluate our system, called Aqqu, on two standard benchmarks, Free917 and WebQuestions, improving the previous best result for each benchmark considerably. These two benchmarks exhibit quite different challenges, and many of the existing approaches were evaluated (and work well) only for one of them. We also consider efficiency aspects and take care that all questions can be answered interactively (that is, within a second). Materials for full reproducibility are available on our website: http://ad.informatik.uni-freiburg.de/publications.",
"We propose a novel semantic parsing framework for question answering using a knowledge base. We define a query graph that resembles subgraphs of the knowledge base and can be directly mapped to a logical form. Semantic parsing is reduced to query graph generation, formulated as a staged search problem. Unlike traditional approaches, our method leverages the knowledge base in an early stage to prune the search space and thus simplifies the semantic matching problem. By applying an advanced entity linking system and a deep convolutional neural network model that matches questions and predicate sequences, our system outperforms previous methods substantially, and achieves an F1 measure of 52.5% on the WEBQUESTIONS dataset.",
"Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.",
"Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions. This paper studies the impact of multitask and transfer learning for simple question answering; a setting for which the reasoning required to answer is quite easy, as long as one can retrieve the correct evidence given a question, which can be difficult in large-scale conditions. To this end, we introduce a new dataset of 100k questions that we use in conjunction with existing benchmarks. We conduct our study within the framework of Memory Networks (Weston et al., 2015) because this perspective allows us to eventually scale up to more complex reasoning, and show that Memory Networks can be successfully trained to achieve excellent performance.",
"We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.",
"Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.",
"In this paper, we train a semantic parser that scales up to Freebase. Instead of relying on annotated logical forms, which is especially expensive to obtain at large scale, we learn from question-answer pairs. The main challenge in this setting is narrowing down the huge number of possible logical predicates for a given question. We tackle this problem in two ways: First, we build a coarse mapping from phrases to predicates using a knowledge base and a large text corpus. Second, we use a bridging operation to generate additional predicates based on neighboring predicates. On the dataset of Cai and Yates (2013), despite not having annotated logical forms, our system outperforms their state-of-the-art parser. Additionally, we collected a more realistic and challenging dataset of question-answer pairs and improves over a natural baseline.",
"IBM Research undertook a challenge to build a computer system that could compete at the human champion level in real time on the American TV Quiz show, Jeopardy! The extent of the challenge includes fielding a real-time automatic contestant on the show, not merely a laboratory exercise. The Jeopardy! Challenge helped us address requirements that led to the design of the DeepQA architecture and the implementation of Watson. After 3 years of intense research and development by a core team of about 20 researches, Watson is performing at human expert-levels in terms of precision, confidence and speed at the Jeopardy! Quiz show. Our results strongly suggest that DeepQA is an effective and extensible architecture that may be used as a foundation for combining, deploying, evaluating and advancing a wide range of algorithmic techniques to rapidly advance the field of QA.",
"The Probabilistic Relevance Framework (PRF) is a formal framework for document retrieval, grounded in work done in the 1970—1980s, which led to the development of one of the most successful text-retrieval algorithms, BM25. In recent years, research in the PRF has yielded new retrieval models capable of taking into account document meta-data (especially structure and link-graph information). Again, this has led to one of the most successful Web-search and corporate-search algorithms, BM25F. This work presents the PRF from a conceptual point of view, describing the probabilistic modelling assumptions behind the framework and the different ranking algorithms that result from its application: the binary independence model, relevance feedback models, BM25 and BM25F. It also discusses the relation between the PRF and other statistical models for IR, and covers some related topics, such as the use of non-textual features, and parameter optimisation for models with free parameters.",
"Conventional n-best reranking techniques often suffer from the limited scope of the nbest list, which rules out many potentially good alternatives. We instead propose forest reranking, a method that reranks a packed forest of exponentially many parses. Since exact inference is intractable with non-local features, we present an approximate algorithm inspired by forest rescoring that makes discriminative training practical over the whole Treebank. Our final result, an F-score of 91.7, outperforms both 50-best and 100-best reranking baselines, and is better than any previously reported systems trained on the Treebank.",
"The wealth of information on the web makes it an attractive resource for seeking quick answers to simple, factual questions such as "e;who was the first American in space?"e; or "e;what is the second tallest mountain in the world?"e; Yet today's most advanced web search services (e.g., Google and AskJeeves) make it surprisingly tedious to locate answers to such questions. In this paper, we extend question-answering techniques, first studied in the information retrieval literature, to the web and experimentally evaluate their performance.First we introduce Mulder, which we believe to be the first general-purpose, fully-automated question-answering system available on the web. Second, we describe Mulder's architecture, which relies on multiple search-engine queries, natural-language parsing, and a novel voting procedure to yield reliable answers coupled with high recall. Finally, we compare Mulder's performance to that of Google and AskJeeves on questions drawn from the TREC-8 question answering track. We find that Mulder's recall is more than a factor of three higher than that of AskJeeves. In addition, we find that Google requires 6.6 times as much user effort to achieve the same level of recall as Mulder.",
"This article considers approaches which rerank the output of an existing probabilistic parser. The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence. The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). We apply the boosting method to parsing the Wall Street Journal treebank. The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. The new model achieved 89.75 F-measure, a 13 relative decrease in F-measure error over the baseline model's score of 88.2. The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. We argue that the method is an appealing alternative-in terms of both simplicity and efficiency-to work on feature selection methods within log-linear (maximum-entropy) models. Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation.",
"Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O. 1. Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.",
"<u>Baseball</u> is a computer program that answers questions phrased in ordinary English about stored data. The program reads the question from punched cards. After the words and idioms are looked up in a dictionary, the phrase structure and other syntactic facts are determined for a content analysis, which lists attribute-value pairs specifying the information given and the information requested. The requested information is then extracted from the data matching the specifications, and any necessary processing is done. Finally, the answer is printed. The program's present context is baseball games; it answers such questions as \"Where did each team play on July 7?\"",
"Convolutional networks are not aware of an object’s geometric variations, which leads to inefficient utilization of model and data capacity. To overcome this issue, recent works on deformation modeling seek to spatially reconfigure the data towards a common arrangement such that semantic recognition suffers less from deformation. This is typically done by augmenting static operators with learned free-form sampling grids in the image space, dynamically tuned to the data and task for adapting the receptive field. Yet adapting the receptive field does not quite reach the actual goal – what really matters to the network is the effective receptive field (ERF), which reflects how much each pixel contributes. It is thus natural to design other approaches to adapt the ERF directly during runtime. In this work, we instantiate one possible solution as Deformable Kernels (DKs), a family of novel and generic convolutional operators for handling object deformations by directly adapting the ERF while leaving the receptive field untouched. At the heart of our method is the ability to resample the original kernel space towards recovering the deformation of objects. This approach is justified with theoretical insights that the ERF is strictly determined by data sampling locations and kernel values. We implement DKs as generic drop-in replacements of rigid kernels and conduct a series of empirical studies whose results conform with our theories. Over several tasks and standard base models, our approach compares favorably against prior works that adapt during runtime. In addition, further experiments suggest a working mechanism orthogonal and complementary to previous works.",
"This paper describes the application of discriminative reranking techniques to the problem of machine translation. For each sentence in the source language, we obtain from a baseline statistical machine translation system, a ranked best list of candidate translations in the target language. We introduce two novel perceptroninspired reranking algorithms that improve on the quality of machine translation over the baseline system based on evaluation using the BLEU metric. We provide experimental results on the NIST 2003 Chinese-English large data track evaluation. We also provide theoretical analysis of our algorithms and experiments that verify that our algorithms provide state-of-theart performance in machine translation.",
"The TREC-8 Question Answering track was the \ffirst large-scale evaluation of domain-independent question answering systems. This paper summarizes the results of the track by giving a brief overview of the different approaches taken to solve the problem. The most accurate systems found a correct response for more than 2/3 of the questions. Relatively simple bag-of-words approaches were adequate for \ffinding answers when responses could be as long as a paragraph (250 bytes), but more sophisticated processing was necessary for more direct responses (50 bytes).\r\n\r\nThe TREC-8 Question Answering track was an initial e\u000bort to bring the bene\fts of large-scale evaluation to bear on a question answering (QA) task. The goal in the QA task is to retrieve small snippets of text that contain the actual answer to a question rather than the document lists traditionally returned by text retrieval systems. The assumption is that users would usually prefer to be given the answer rather than \fand the answer themselves in a document.\r\n\r\nThis paper summarizes the retrieval results of the track; a companion paper (\\The TREC-8 Question Answering Track Evaluation\") gives details about how the evaluation was implemented. By necessity, a track report can give only an overview of the different approaches used in the track. Readers are urged to consult the participants' papers elsewhere in the Proceedings for details regarding a particular approach."
],
"authors": [
{
"name": [
"Shuohang Wang",
"Mo Yu",
"Xiaoxiao Guo",
"Zhiguo Wang",
"Tim Klinger",
"Wei Zhang",
"Shiyu Chang",
"G. Tesauro",
"Bowen Zhou",
"Jing Jiang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Bhuwan Dhingra",
"Kathryn Mazaitis",
"William W. Cohen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Chuanqi Tan",
"Furu Wei",
"Nan Yang",
"Weifeng Lv",
"M. Zhou"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Tao Lei",
"Wengong Jin",
"R. Barzilay",
"T. Jaakkola"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"C. Buck",
"Jannis Bulian",
"Massimiliano Ciaramita",
"Andrea Gesmundo",
"N. Houlsby",
"Wojciech Gajewski",
"Wei Wang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Mandar Joshi",
"Eunsol Choi",
"Daniel S. Weld",
"Luke Zettlemoyer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Matthew Dunn",
"Levent Sagun",
"Mike Higgins",
"V. U. Güney",
"Volkan Cirik",
"Kyunghyun Cho"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Mo Yu",
"Wenpeng Yin",
"K. Hasan",
"C. D. Santos",
"Bing Xiang",
"Bowen Zhou"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Danqi Chen",
"Adam Fisch",
"J. Weston",
"Antoine Bordes"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Minjoon Seo",
"Aniruddha Kembhavi",
"Ali Farhadi",
"Hannaneh Hajishirzi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Shuohang Wang",
"Jing Jiang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yiming Cui",
"Z. Chen",
"Si Wei",
"Shijin Wang",
"Ting Liu",
"Guoping Hu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Pranav Rajpurkar",
"Jian Zhang",
"Konstantin Lopyrev",
"Percy Liang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Adam Trischler",
"Zheng Ye",
"Xingdi Yuan",
"Philip Bachman",
"Alessandro Sordoni",
"Kaheer Suleman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ankur P. Parikh",
"Oscar Täckström",
"Dipanjan Das",
"Jakob Uszkoreit"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Bhuwan Dhingra",
"Hanxiao Liu",
"Zhilin Yang",
"William W. Cohen",
"R. Salakhutdinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Rudolf Kadlec",
"Martin Schmid",
"Ondrej Bajgar",
"Jan Kleindienst"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kun Xu",
"Siva Reddy",
"Yansong Feng",
"Songfang Huang",
"Dongyan Zhao"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Chris Dyer",
"A. Kuncoro",
"Miguel Ballesteros",
"Noah A. Smith"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Shuohang Wang",
"Jing Jiang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Hannah Bast",
"Elmar Haussmann"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Wen-tau Yih",
"Ming-Wei Chang",
"Xiaodong He",
"Jianfeng Gao"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Karl Moritz Hermann",
"Tomás Kociský",
"Edward Grefenstette",
"L. Espeholt",
"W. Kay",
"Mustafa Suleyman",
"Phil Blunsom"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Antoine Bordes",
"Nicolas Usunier",
"S. Chopra",
"J. Weston"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Diederik P. Kingma",
"Jimmy Ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jeffrey Pennington",
"R. Socher",
"Christopher D. Manning"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jonathan Berant",
"A. Chou",
"Roy Frostig",
"Percy Liang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Ferrucci",
"E. Brown",
"Jennifer Chu-Carroll",
"James Fan",
"David Gondek",
"Aditya Kalyanpur",
"Adam Lally",
"J. William Murdock",
"Eric Nyberg",
"J. Prager",
"Nico Schlaefer",
"Chris Welty"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Robertson",
"H. Zaragoza"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Liang Huang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Cody C. T. Kwok",
"Oren Etzioni",
"Daniel S. Weld"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Collins",
"Terry Koo"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sepp Hochreiter",
"J. Schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. Green",
"A. K. Wolf",
"C. Chomsky",
"Kenneth Laughery"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [
"Libin Shen",
"Anoop Sarkar",
"F. Och"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"E. Voorhees"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1709.00023",
"1707.03904",
"1706.04815",
"1705.09037",
"1705.07830",
"1705.03551",
"1704.05179",
"1704.06194",
"1704.00051",
"1611.01603",
"1608.07905",
"1607.04423",
"1606.05250",
"1606.02270",
"1606.01933",
"1606.01549",
"1603.01547",
"1603.00957",
"1602.07776",
"1512.08849",
null,
null,
"1506.03340",
"1506.02075",
"1412.6980",
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"21928029",
"2417413",
"33257417",
"12975952",
"3700344",
"26501419",
"11606382",
"7752968",
"3618568",
"8535316",
"5592690",
"9205021",
"11816014",
"711424",
"8495258",
"6529193",
"11022639",
"139787",
"1949831",
"11004224",
"495573",
"18309765",
"6203757",
"9605730",
"6628106",
"1957433",
"6401679",
"1831060",
"207178704",
"1131864",
"5456456",
"405878",
"1915014",
"5867164",
"260423622",
"750809",
"16944215"
],
"intents": [
[
"methodology",
"background"
],
[
"methodology",
"background"
],
[
"methodology"
],
[
"background"
],
[
"methodology"
],
[],
[
"methodology",
"background"
],
[
"background"
],
[],
[
"methodology",
"background"
],
[
"methodology"
],
[
"background"
],
[
"background"
],
[
"methodology"
],
[
"background"
],
[],
[],
[
"background"
],
[
"methodology"
],
[],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"background"
],
[],
[],
[
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"background"
],
[],
[
"methodology"
],
[]
],
"isInfluential": [
true,
true,
false,
false,
false,
false,
true,
false,
false,
true,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false
]
} | null | 84 | 1.880952 | 0.62963 | 0.5 | null | null | null | null | null | rJl3yM-Ab |
gao|adversarial_policy_gradient_for_alternating_markov_games|ICLR_cc_2018_Conference | Adversarial Policy Gradient for Alternating Markov Games | Policy gradient reinforcement learning has been applied to two-player alternate-turn zero-sum games, e.g., in AlphaGo, self-play REINFORCE was used to improve the neural net model after supervised learning. In this paper, we emphasize that two-player zero-sum games with alternating turns, which have been previously formulated as Alternating Markov Games (AMGs), are different from standard MDP because of their two-agent nature. We exploit the difference in associated Bellman equations, which leads to different policy iteration algorithms. As policy gradient method is a kind of generalized policy iteration, we show how these differences in policy iteration are reflected in policy gradient for AMGs. We formulate an adversarial policy gradient and discuss potential possibilities for developing better policy gradient methods other than self-play REINFORCE. The core idea is to estimate the minimum rather than the mean for the “critic”. Experimental results on the game of Hex show the modified Monte Carlo policy gradient methods are able to learn better pure neural net policies than the REINFORCE variants. To apply learned neural weights to multiple board sizes Hex, we describe a board-size independent neural net architecture. We show that when combined with search, using a single neural net model, the resulting program consistently beats MoHex 2.0, the state-of-the-art computer Hex player, on board sizes from 9×9 to 13×13. | {
"name": [
"chao gao",
"ryan hayward"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
} | null | [] | null | 2018-02-15 22:29:48 | 57 | null | null | null | null | null | null | null | null | false | The reviewers agree that the paper is below threshold for acceptance in the main track (one with very low confidence), but they favor submitting the paper to the workshop track.
The paper considers policy gradient methods for two-player zero-sum Alternating Markov games. The authors propose an adversarial policy gradient (fairly obviously), wherein the critic estimates the minimum rather than the mean reward. They also report promising empirical results in the game of Hex, with varying board sizes. I found the paper to be well-written and easy to read, possibly due to revisions during the rebuttal discussions.
The reviewers consider the contribution to be small, mainly because the key algorithmic insights were already published decades ago. Reintroducing them is a service to the community, but the novelty is limited. Other critiques mentioned that results in Hex alone provide only a limited understanding of the algorithm's behavior in general Alternating Markov games. The lack of comparison with modern methods such as AlphaGo Zero was also mentioned as a limitation.
Bottom line: The paper provides a small but useful contribution to the community, as described above, and the committee recommends it for workshop.
| {
"review_id": [
"rJFql_Nxz",
"SkNEyzcxG",
"ByzeYntef"
],
"review": [
{
"title": "title: n/a",
"paper_summary": null,
"main_review": "main_review: This paper is outside of my area of expertise, so I'll just provide a light review:\n\n- the idea of assuming that the opponent will take the worst possible action is reasonable in widely used in classic search, so making value functions follow this intuition seems sensible,\n- but somehow I wonder if this is really novel? Isn't there a whole body of literature on fictitious self-play, including need RL variants (e.g. Heinrich&Silver, 2016) that approaches things in a similar way?\n- the results on Hex have some signal, but I don’t know how to calibrate them w.r.t. The state of the art on that game? A 40% win rate seems low, what do other published papers based on RL or search achieve?\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A nice but somewhat minimal paper addressing caveats of existing adversarial RL attempts",
"paper_summary": null,
"main_review": "main_review: The paper makes the simple but important observation that (deep) reinforcement learning in alternating Markov games requires a min-max formulation of the Bellman equation as well as careful attention to the way in which one alternates solving for both players' policies in a policy iteration setting.\n\nWhile some of the core algorithmic insights regarding Algorithms 3 & 4 in the paper stem from previous work (Condon, 1990; Hoffman & Karp, 1966), I was not actually aware of these previous results until I reviewed this paper.\n\nA nice corollary of Algorithms 3 & 4 is that they make for a straightforward adaptation of policy gradient algorithms since when optimizing one policy, the other is fixed to the greedy policy.\n\nIn general, it would be nice to have the algorithms specified as formal algorithms as opposed to text-based outlines. I found myself reading and re-reading descriptions to make sure I understood what math was being implied by the descriptions.\n\nSection 6\n\n> Hex is simpler than Go in the sense that perfect play can \n> often be achieved whenever virtual connections are found \n> by H-Search\n\nIt is not clear here what virtual connections are, what H-Search is, and how these imply perfect play, if perfect play as previously discussed is unknown.\n\nOverall, the results on Hex for AMCPG-A and AMCPG-B vs. standard REINFORCE variants currently used are very encouraging. That said, empirically it is always a question of whether these results are specific to Hex. Because this paper is not proposing the best Hex player (i.e., the winning rate against Wolve never exceeds 0.5), I think it is quite reasonable to request the authors to compare AMCPG-A and AMCPG-B to standard REINFORCE variants on other games (they do not need to be as difficult as Hex).\n\nFinally, assuming that the results do generalize to other games, I am left wondering about the significance of the contribution. On one hand, the authors have introduced me to literature I was not aware of, but on the other hand, their actual novel contribution is a rather straightforward adaptation of ideas in the literature to policy gradients (that could be formalized in a more technically precise way) with an evaluation on a single type of game. This is a useful contribution no doubt, but I am concerned with whether it meets the significance level that I am used to with accepted ICLR papers in previous years.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A bit verbose on existing methods + notations and low on experiments",
"paper_summary": null,
"main_review": "main_review: This paper introduces a variation over existing policy gradient methods for two players zero sum games, in which instead of using the outcome of a single policy network rollout as the return, they use the minimum outcome among a few rollouts either from the original position or where the first action from that position is selected uniformly among the top k policy outputs. \n\nThe proposed method supposedly provides slightly stronger targets, due to the extra lookahead / rollouts. Experiments show that this provides faster progress per iteration on the game of Hex against a fixed third party opponent.\n\nThere is no comparison against state of the art methods like AlphaGo Zero which uses MCTS root move distribution and MCTS rollouts outcome to train policy and value network, even though the author do cite this work. There is also no comparison with Hexit which also trains policy net on MCTS move distribution, and was also applied to Hex.\n\nThe actual proposed method is actually a one liner change, which could be introduced much sooner in the paper to save the reader some time. While the idea is interesting, the paper felt quite verbose on introducing notations and related work, and a bit lacking on actual change that is being proposed and the experiment to back it up.\n\nFor example, was it really necessary to introduce state transition probabilities p(s’, a, s) when all the experiments are done in the deterministic game of Hex ?\n\nAlso the experiment seems not fully fair to the reinforce baseline. My understand is that the proposed method is much more costly due to extra rollouts that are needed. It would be interesting to see the same learning curves as in Figure 2, but the x axis would be some computational budget (total number of network forward, or wall clock time). It is conceivable that the vanilla reinforce would do just as well as the proposed method if the plots were aligned this way. It would also be good to know the asymptotic behavior.\n\nSo even though the idea is interesting, it seems that much stronger methods AlphaGo Zero / Hexit are now available, and the experimental section is a bit weak. I would recommend to accept for a workshop paper but not sure about the main track.\n\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.4444444477558136,
0.4444444477558136
],
"confidence": [
0.25,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"computation cost",
"a new version has been updated",
"New results after combing neural net with search have been added to the paper",
"Response",
"Comparison to ExIt has been added",
"previous discussion about AMGs was \"neglected\" "
],
"comment": [
"Thanks for your comments. \n\nThe reviewer is concerned about the computation cost for each training. In our experiments, training is very fast, took only a few hours on 9x9 and 11x11 Hex. Since all training/evaluation are conducted on the same computer with a single GTX 1080 GPU. We briefly list the detailed training time for each method here: \n \n9x9 Hex: total time usage for 400 iterations training: \nAMCPG-A: k=1: about 1 h 40 m, k=3: about 2.5 h, k=6: 4h 10 minutes, k=9: about 6h\nAMCPG-B: similar as above\nREINFORCE-B: similar as above\nREINFORCE-A: 1 hour 15 minutes\nREINFORCE-V: 1 hour 20 minutes\n\n11x11 Hex: \nAMCPG-A: k=1: about 3h15 minutes, k=3: about 5h, k=6: about 9h, k=9: about 12h\nAMCPG-B: similar as above\nREINFORCE-B: similar as above,\nREINFORCE-A: 2.5 hours\nREINFORCE-V: 2.5 hours\n\nIn fact, most of our time was not spending on training the neural net, but for evaluation the neural net model by playing against Wolve, as Wolve's search is orders of magnitude slower than a pure neural net player. \n\nWe note that, even though a pure neural net self-play training might not be able to provide state-of-the-art playing, such methods have its own merits. For example, due to their fast speed, the first version of Alphago uses such a method for generating data to train a value net which is useful in search. \n\nOn other hand hand, even though search+NN self-play might also be able to learn a neural net policy that itself can play strongly. It is likely that such a neural net could be further improved by policy gradient. \n",
"\nWe thank the reviewer's comments about the 'state-of-the-art' in Hex. We have updated our paper, in which we show that after combining our neural net model with search, better results than MoHex 2.0 are observed.\n\nWe summarize the changes in below:\n\n\n1. we show that with our boardsize independent (as there is no fully connected layers) neural net architecture, a single trained model trained on 9x9 can generalize to other board sizes. When combined with search, the new program consistently defeats MoHex 2.0 on 9x9 to 13x13 (with same number of simulations and with same computation time ).\n\n2. we also compared our results with ExIt. We show that both MoHex 2.0 and our new program achieve better winrates against MoHex 2011 than ExIt, though ExIt was only concentrated on 9x9. \n\n3. we show that minimum rollout return slightly improves Monte carlo tree search\n\n\n4. typos and grammatical errors are corrected. \n\nHowever, due to various constraint, we did not apply our methods to other games, though it would be interesting to do so. Hex is the game we are most familiar. But on the other hand, we stress that, just as REINFORCE, we did not have any special modifications when apply the ACMPG variants to Hex. \n",
"Thanks for the reviewing. \n\nThe reviewer mentioned fictitious self-play (Heinrich&Silver, 2016), but it is primary for imperfect-information games.\n\nWe focus on classic perfect information two-player zero-sum game played in alternate turns. \n\nAdditionally, the reviewer was concerned about the state-of-art in Hex. In the revised paper, we haven shown that after combining our neural net with search, the state-of-art in Hex is improved. Moreover, we used a single neural net model, with consistent improvement on multiple board sizes. \n",
"Overall, I like the paper (it makes a simple but important point) and the authors have addressed most of my concerns.\n\nThat said, the one major issue that remains with the paper is that I would like to see evaluations in a larger variety of domains -- I feel like I'm overfitting my understanding of the ideas in the paper to the game of Hex. For this reason, I feel that my current review score is appropriate. As another reviewer points out, this paper would be great for a workshop if it is not accepted to the main track.\n",
"Thanks for your comment. \n\nIn the revised paper, we have added our neural net model to search, the resulting program is stronger than MoHex 2.0 on board sizes 9x9 to 13x13. We have also included a comparison with ExIt. It appears that ExIt might not as strong as MoHex 2.0 (the ExIt paper was comparing their player with MoHex 2011). Another advantage of our new player is that it is able to play on multiple board size with only one trained model, while ExIt is limited on 9x9. \n\nDetained responses are in below. \n\nThe methods ExIt (I assume you mean ExIt by saying HexIt) and AlphaGo Zero are similar. They work well but one problem is the computation cost is very high. For example, when applied to chess and shogi, it is mentioned that 5000 TPUs were used for MCTS self-play data generation. \n\nFor ExIt, by the time our paper is submitted, only first version is available on arxiv, though we are aware their work has been accepted in NIPS 2017. The newest version can be found from this URL.\n https://arxiv.org/abs/1705.08439\n\nThey did all experiments on 9x9 Hex. In the first version on arxiv, their player is a search+NN player not pure neural net. \nOn other hand hand, even if the learned neural net policy itself is strong by following MCTS, it is likely the playing strength of this pure neural net can be improved by doing a policy gradient on it, though after such a policy gradient, the policy might not good for Monte-carlo tree search any more (as shown by first Alphago paper). \n\nIn the newest version, they compared their policy_value net + MCTS player with MoHex 2011, however, there is MoHex 2.0, which is much stronger than MoHex 2011. \n\nExIt only conducts experiments on 9x9 Hex. It is not very clear how much time could be used to produce significant results on larger board size, such as 11x11, presumably, this is not a easy task with only one GPU computer. We note that even ExIt was specially applied only to this board size, MoHex 2.0 and our new program both seem to be able achieve better playing results than ExIt. \n\nOur AMCPG-A or AMCPG-B follows traditional “light self-play”. No tree was built. To estimate the “minimum” critic, extra roll-outs are conducted. But it is very much due to the Monte-Calro nature of the method, and that is why we mention an actor-critic style might be more efficient. Our methods work essentially similar as traditional policy gradient, that's why we only compared with REINFORCE variants. \n\nWe argue that it could be unfair to say that our better results compared to classic REINFORCE is merely due to extra roll-outs. One can see that in REINFORCE-B, extra roll-outs are also conducted the same way as AMCPG-A and AMCPG-B. Their extra computation costs due to extra roll-out are the same. However, the results in Figure 2 suggests that REINFORCE-B has similar performance as REINFORCE-A and REINFORCE-V. ",
"Thank you for your comments. \n\nYes, the key insights behind this paper is much from the literature, i.e., (Condon, 1990; Hoffman & Karp, 1966; Littman 1996). But, as the reviewer has pointed out, perhaps it is because of the difference in terminology, those classic works were much \"unknown\" for many researchers.\n\nIn this paper, we brought those again to the community, one goal is to stimulate more thorough thinking about the difference between two-player alternate-turn games and single agent MDPs. It is apparent that two-player alternate-turn zero-sum games are more \"challenging\" in many aspects. A more careful examination about the fundamental differences between AMGs and MDPs will perhaps help people develop more effective/efficient RL methods specifically for this domain. \n\nWe only did our experiments on the game of Hex, primarily because this is the game we are most familiar. But it should be noted that we didn't conduct any game specific modifications when applying those AMCPG variants to this specific game, just as REINFORCE. \n\nIt is true that doing more games would be more convincing; however, due to various constraint (i.e., hardware constraint, knowledge about other games), we did not manage to have an attempt in this direction while writing this paper. \n\nAs for advancing the state-of-art, the state-of-the-art for Hex are still search based methods. In the first version we submitted, we did not attempt to advance the state-of-art, since we were concentrated on introducing new fast and better policy gradient methods. \n\nHowever, after receiving the reviewers' comments about state-of-art, we proceed to combine our neural net with search, and the resulting program is indeed be able to surpass MoHex 2.0. \n\nMost notably, we use a single model for multiple board sizes, the new program consistently defeats MoHex 2.0 on every board size. This is much due to the architecture we introduced, where we deliberately removed fully connected layers, so that the learned parameter weights can generalize to multiple board sizes.\n\nSince expert data is often difficult to obtain or generate, while generating expert data on smaller board is usually much easier and cheaper than larger board sizes, our result provides an encouraging direction for more efficient learning on games which has similar characteristics as Hex (e.g., other connection games). \n\nWe have also investigated “minimum return” in Monte-carlo tree search, experimental results show that incorporating “minimum playout” also improved MCTS. \n\nFuture work direction is using value net in pure neural net training as well as use it to replace the playout in MCTS. However, different from previous work, we argue that a “min” operator might be able to lead better results in alternating markov games. \n\nWe have included a psude-code for Algo.1, Algo.2 and Algo.3 in the appendix, which provides a more formal discription about each procedure. Also, explanation about Virtual Connections and H-Search have been added in the revised paper. \n\n"
]
} | {
"paperhash": [
"abadi|tensorflow:_large-scale_machine_learning_on_heterogeneous_distributed_systems",
"vadim|a_hierarchical_approach_to_computer_hex",
"anthony|thinking_fast_and_slow_with_deep_learning_and_tree_search",
"arneson|monte_carlo_tree_search_in_hex",
"bellman|a_markovian_decision_process",
"bertsekas|neuro-dynamic_programming",
"capen|competitive_bidding_in_high-risk_situations",
"clark|training_deep_convolutional_neural_networks_to_play_go",
"condon|on_algorithms_for_simple_stochastic_games",
"condon|the_complexity_of_stochastic_games",
"coulom|efficient_selectivity_and_backup_operators_in_monte-carlo_tree_search",
"fox|taming_the_noise_in_reinforcement_learning_via_soft_updates",
"gao|move_prediction_using_deep_convolutional_neural_networks_in_hex",
"goodfellow|generative_adversarial_nets",
"gu|q-prop:_sample-efficient_policy_gradient_with_an_off-policy_critic",
"hayward|mohex_wins_hex_tournament",
"henderson|playing_and_solving_the_game_of_hex",
"hoffman|on_nonterminating_stochastic_games",
"ronald|dynamic_programming_and_markov_processes",
"huang|monte-carlo_simulation_balancing_in_practice",
"huang|mohex_2.0:_a_pattern-based_mcts_hex_player",
"sham|a_natural_policy_gradient",
"kingma|adam:_a_method_for_stochastic_optimization",
"kober|reinforcement_learning_in_robotics:_a_survey",
"lecun|deep_learning",
"lillicrap|continuous_control_with_deep_reinforcement_learning",
"lin|self-improving_reactive_agents_based_on_reinforcement_learning,_planning_and_teaching",
"littman|markov_games_as_a_framework_for_multi-agent_reinforcement_learning",
"lederman|algorithms_for_sequential_decision_making",
"maddison|move_evaluation_in_go_using_deep_convolutional_neural_networks",
"mnih|human-level_control_through_deep_reinforcement_learning",
"mnih|asynchronous_methods_for_deep_reinforcement_learning",
"nash|some_games_and_machines_for_playing_them",
"pawlewicz|scalable_parallel_dfpn_search",
"pawlewicz|stronger_virtual_connections_in_hex",
"peters|policy_gradient_methods",
"pinto|robust_adversarial_reinforcement_learning",
"gavin|on-line_q-learning_using_connectionist_systems",
"schaul|prioritized_experience_replay",
"schulman|trust_region_policy_optimization",
"schulman|proximal_policy_optimization_algorithms",
"shannon|computers_and_automata",
"lloyd|stochastic_games",
"silver|monte-carlo_simulation_balancing",
"silver|deterministic_policy_gradient_algorithms",
"silver|mastering_the_game_of_go_with_deep_neural_networks_and_tree_search",
"silver|mastering_chess_and_shogi_by_self-play_with_a_general_reinforcement_learning_algorithm",
"silver|george_van_den_driessche,_thore_graepel,_and_demis_hassabis",
"james|the_optimizers_curse:_skepticism_and_postdecision_surprise_in_decision_analysis",
"richard|reinforcement_learning:_an_introduction",
"sutton|policy_gradient_methods_for_reinforcement_learning_with_function_approximation",
"tesauro|temporal_difference_learning_and_td-gammon",
"tian|better_computer_go_player_with_neural_network_and_long-term_prediction",
"wang|dueling_network_architectures_for_deep_reinforcement_learning",
"wang|sample_efficient_actor-critic_with_experience_replay",
"christopher|q-learning",
"ronald|simple_statistical_gradient-following_algorithms_for_connectionist_reinforcement_learning"
],
"title": [
"Tensorflow: Large-scale machine learning on heterogeneous distributed systems",
"A hierarchical approach to computer Hex",
"Thinking fast and slow with deep learning and tree search",
"Monte carlo tree search in Hex",
"A Markovian decision process",
"Neuro-Dynamic Programming",
"Competitive bidding in high-risk situations",
"Training deep convolutional neural networks to play Go",
"On algorithms for Simple Stochastic Games",
"The complexity of stochastic games",
"Efficient selectivity and backup operators in Monte-Carlo tree search",
"Taming the noise in reinforcement learning via soft updates",
"Move prediction using deep convolutional neural networks in Hex",
"Generative adversarial nets",
"Q-prop: Sample-efficient policy gradient with an off-policy critic",
"Mohex wins hex tournament",
"Playing and solving the game of Hex",
"On nonterminating stochastic games",
"Dynamic programming and Markov processes",
"Monte-carlo simulation balancing in practice",
"Mohex 2.0: a pattern-based MCTS Hex player",
"A natural policy gradient",
"Adam: A method for stochastic optimization",
"Reinforcement learning in robotics: A survey",
"Deep learning",
"Continuous control with deep reinforcement learning",
"Self-improving reactive agents based on reinforcement learning, planning and teaching",
"Markov games as a framework for multi-agent reinforcement learning",
"Algorithms for sequential decision making",
"Move evaluation in Go using deep convolutional neural networks",
"Human-level control through deep reinforcement learning",
"Asynchronous methods for deep reinforcement learning",
"Some games and machines for playing them",
"Scalable parallel DFPN search",
"Stronger virtual connections in Hex",
"Policy gradient methods",
"Robust adversarial reinforcement learning",
"On-line Q-learning using connectionist systems",
"Prioritized experience replay",
"Trust region policy optimization",
"Proximal policy optimization algorithms",
"Computers and automata",
"Stochastic games",
"Monte-carlo simulation balancing",
"Deterministic policy gradient algorithms",
"Mastering the game of Go with deep neural networks and tree search",
"Mastering chess and shogi by self-play with a general reinforcement learning algorithm",
"George van den Driessche, Thore Graepel, and Demis Hassabis",
"The optimizers curse: Skepticism and postdecision surprise in decision analysis",
"Reinforcement learning: An introduction",
"Policy gradient methods for reinforcement learning with function approximation",
"Temporal difference learning and TD-Gammon",
"Better computer Go player with neural network and long-term prediction",
"Dueling network architectures for deep reinforcement learning",
"Sample efficient actor-critic with experience replay",
"Q-learning",
"Simple statistical gradient-following algorithms for connectionist reinforcement learning"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"martín abadi",
"ashish agarwal",
"paul barham",
"eugene brevdo",
"zhifeng chen",
"craig citro",
"greg s corrado",
"andy davis",
"jeffrey dean",
"matthieu devin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"v vadim",
" anshelevich"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"thomas anthony",
"zheng tian",
"david barber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ryan b broderick arneson",
"philip hayward",
" henderson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"richard bellman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"dimitri p bertsekas",
"john n tsitsiklis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"robert v edward c capen",
"william m clapp",
" campbell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"christopher clark",
"amos storkey"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"anne condon"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"anne condon"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rémi coulom"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"roy fox",
"ari pakman",
"naftali tishby"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chao gao",
"ryan b hayward",
"martin müller"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ian goodfellow",
"jean pouget-abadie",
"mehdi mirza",
"bing xu",
"david warde-farley",
"sherjil ozair",
"aaron courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"shixiang gu",
"timothy lillicrap",
"zoubin ghahramani",
"richard e turner",
"sergey levine"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"broderic hayward",
" arneson",
" henderson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"philip henderson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alan j hoffman",
"richard m karp"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"a ronald",
" howard"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"shih-chieh huang",
"rémi coulom",
"shun-shii lin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"shih-chieh huang",
"broderick arneson",
"ryan b hayward",
"martin müller",
"jakub pawlewicz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"m sham",
" kakade"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik kingma",
"jimmy ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jens kober",
"andrew bagnell",
"jan peters"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yann lecun",
"yoshua bengio",
"geoffrey hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jonathan j timothy p lillicrap",
"alexander hunt",
"nicolas pritzel",
"tom heess",
"yuval erez",
"david tassa",
"daan silver",
" wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"long-h lin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" michael l littman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"michael lederman",
"littman "
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"aja chris j maddison",
"ilya huang",
"david sutskever",
" silver"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"volodymyr mnih",
"koray kavukcuoglu",
"david silver",
"andrei a rusu",
"joel veness",
"marc g bellemare",
"alex graves",
"martin riedmiller",
"andreas k fidjeland",
"georg ostrovski"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"volodymyr mnih",
"adria puigdomenech badia",
"mehdi mirza",
"alex graves",
"timothy lillicrap",
"tim harley",
"david silver",
"koray kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"john f nash"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jakub pawlewicz",
"ryan b hayward"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jakub pawlewicz",
"ryan hayward",
"philip henderson",
"broderick arneson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jan peters",
"j andrew",
"bagnell "
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"lerrel pinto",
"james davidson",
"rahul sukthankar",
"abhinav gupta"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"a gavin",
"mahesan rummery",
" niranjan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tom schaul",
"john quan",
"ioannis antonoglou",
"david silver"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"john schulman",
"sergey levine",
"pieter abbeel",
"michael jordan",
"philipp moritz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"john schulman",
"filip wolski",
"prafulla dhariwal",
"alec radford",
"oleg klimov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"claude e shannon"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"s lloyd",
" shapley"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david silver",
"gerald tesauro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david silver",
"guy lever",
"nicolas heess",
"thomas degris",
"daan wierstra",
"martin riedmiller"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david silver",
"aja huang",
"chris j maddison",
"arthur guez",
"laurent sifre",
"george van den",
"julian driessche",
"ioannis schrittwieser",
"veda antonoglou",
"marc panneershelvam",
" lanctot"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david silver",
"thomas hubert",
"julian schrittwieser",
"ioannis antonoglou",
"matthew lai",
"arthur guez",
"marc lanctot",
"laurent sifre",
"dharshan kumaran",
"thore graepel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david silver",
"julian schrittwieser",
"karen simonyan",
"ioannis antonoglou",
"aja huang",
"arthur guez",
"thomas hubert",
"lucas baker",
"matthew lai",
"adrian bolton",
"yutian chen",
"timothy lillicrap",
"fan hui",
"laurent sifre"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"e james",
"robert l smith",
" winkler"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"s richard",
"andrew g sutton",
" barto"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david a richard s sutton",
" mcallester",
"p satinder",
"yishay singh",
" mansour"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"gerald tesauro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yuandong tian",
"yan zhu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ziyu wang",
"tom schaul",
"matteo hessel",
"hado van hasselt",
"marc lanctot",
"nando de freitas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ziyu wang",
"victor bapst",
"nicolas heess",
"volodymyr mnih",
"remi munos",
"koray kavukcuoglu",
"nando de freitas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jch christopher",
"peter watkins",
" dayan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"williams ronald"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"arXiv:1603.04467",
"",
"1705.08439v4",
"",
"",
"",
"",
"",
"",
"",
"",
"1512.08562v4",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"1412.6980v9",
"",
"1807.07987v2",
"1509.02971v6",
"",
"",
"",
"1412.6564v2",
"",
"1602.01783v2",
"",
"",
"",
"",
"1703.02702v1",
"",
"1511.05952v4",
"1502.05477v5",
"1707.06347v2",
"",
"",
"",
"",
"",
"arXiv:1712.01815",
"",
"",
"",
"",
"",
"",
"1511.06581v3",
"",
"1805.04874v3",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.444444 | 0.583333 | null | null | null | null | null | rJk51gJRb |
||
li|towards_binaryvalued_gates_for_robust_lstm_training|ICLR_cc_2018_Conference | Towards Binary-Valued Gates for Robust LSTM Training | Long Short-Term Memory (LSTM) is one of the most widely used recurrent structures in sequence modeling. Its goal is to use gates to control the information flow (e.g., whether to skip some information/transformation or not) in the recurrent computations, although its practical implementation based on soft gates only partially achieves this goal and is easy to overfit. In this paper, we propose a new way for LSTM training, which pushes the values of the gates towards 0 or 1. By doing so, we can (1) better control the information flow: the gates are mostly open or closed, instead of in a middle state; and (2) avoid overfitting to certain extent: the gates operate at their flat regions, which is shown to correspond to better generalization ability. However, learning towards discrete values of the gates is generally difficult. To tackle this challenge, we leverage the recently developed Gumbel-Softmax trick from the field of variational methods, and make the model trainable with standard backpropagation. Experimental results on language modeling and machine translation show that (1) the values of the gates generated by our method are more reasonable and intuitively interpretable, and (2) our proposed method generalizes better and achieves better accuracy on test sets in all tasks. Moreover, the learnt models are not sensitive to low-precision approximation and low-rank approximation of the gate parameters due to the flat loss surface. | {
"name": [],
"affiliation": []
} | We propose a new algorithm for LSTM training by learning towards binary-valued gates which we show has many nice properties. | [
"recurrent neural network",
"LSTM",
"long-short term memory network",
"machine translation",
"generalization"
] | null | 2018-02-15 22:29:48 | 45 | null | null | null | null | null | null | null | null | false | This paper proposes training binary-values LSTMs for NLP using the Gumbel-softmax reparameterization. The motivation is that this will generalize better, and this is demonstrated in a couple of instances.
However, it's not clear how cherry-picked the examples are, since the training loss wasn't reported for most experiments. And, if the motivation is better generalization, it's not clear why we would use this particular setup. | {
"review_id": [
"Syo-smqgf",
"S15OPlugz",
"HyA3jBqgG"
],
"review": [
{
"title": "title: Interesting, but not impressive",
"paper_summary": null,
"main_review": "main_review: The paper argues for pushing the input and forget gate’s output toward 0 or 1, i.e., the LSTM tends to reside in flat region of surface loss, which is likely to generalize well. To achieve that, the sigmoid function in the original LSTM is replaced by a function G that is continuous and differentiable with respect to the parameters (by applying the Gumbel-Softmax trick). As a result, the model is still differentiable while the output gate is approximately binarized. \n\nPros:\n-\tThe paper is clearly written\n-\tThe method is new and somehow theoretically guaranteed by the proof of the Proposition 1\n-\tThe experiments are clearly explained with detailed configurations\n-\tThe performance of the method in the model compression task is promising \n\nCons:\n-\tThe “simple deduction” which states that pushing the gate values toward 0 or 1 correspond to the region of the overall loss surface may need more theoretical analysis\n-\tIt is confusing whether the output of the gate is sampled based on or computed directly by the function G \n-\tThe experiments lack many recent baselines on the same dataset (Penn Treebank: Melis et al. (2017) – On the State of the Art of Evaluation in Neural Language Models; WMT: Ashish et.al. (2017) – Attention Is All You Need) \n-\tThe experiment’s result is only slightly better than the baseline’s\n-\tTo be more persuasive, the author should include in the baselines other method that can “binerize” the gate values such as the one sharpening the sigmoid function. \n\n\nIn short, this work is worth a read. Although the experimental results are not quite persuasive, the method is nice and promising. \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: The technical novelty is limited and experiments do not show much benefits of the proposed model.",
"paper_summary": null,
"main_review": "main_review: This paper aims to push the LSTM gates to be binary. To achieve this, the paper proposes to employ the recent Gumbel-Softmax trick to obtain end-to-end trainable categorical distribution (taking 0 or 1 value). The resulted G2-LSTM is applied for language model and machine translation in the experiments. \n\nThe novelty of this paper is limited. Just directly apply the Gumbel-Softmax trick. \n\nThe motivation is not explained clearly and convincingly. Why need to pursue binary gates? According to the paper, it may give better generalization performance. But there is no theoretical or experimental evidence provided by this paper to support this argument. \n\nThe results of the new G2-LSTM are not significantly better than baselines in the experiments.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Review",
"paper_summary": null,
"main_review": "main_review: This paper propose a new \"gate\" function for LSTM to enable the values of the gates towards 0 or 1. The motivation behind is a flat region of the loss surface is likely to generalize well. It shows the experimental results are comparable or better than vanilla LSTM and much more robust to low-precision approximation and low-rank approximation.\n\nIn section 3.2, the paper claimed using a smaller temperature cannot guarantee the outputs to be close to the boundary. Is there any experimental evidence to show it's not working? It also claimed pushing output gate to 0/1 will drop the performance. It actually quite interesting because there are bunch of paper claimed output gate is not important for language modeling, e.g. https://openreview.net/pdf?id=HJOQ7MgAW . \n\nIn the sensitive analysis, what if apply rounding / low-rank for all the parameters? \n\nHow was this approach compare to binarynet https://arxiv.org/abs/1602.02830 ? Applying the same idea, but only for forget gate/ input gate. Also, can we apply this idea to the binarynet? \n\nOverall, I think it's an interesting paper but I feel it should compare with some simple baseline to binarized the gate function. \n\nUpdates: Thanks a lot for all the clarification. It do improve the paper quality but I'm still thinking it's higher than \"6\" but lower than \"7\". To me, improve ppl from \"52.8\" to \"52.1\" isn't very significant. For WMT, it improve on DE->EN but not for EN->DE (although it improve both for the author's own baseline). So I'm not fully convinced this approach could improve the generalization. But I feel this work can have many other applications such as \"binarynet\". ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.3333333432674408,
0.5555555820465088
],
"confidence": [
0.75,
0.75,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Revision to the paper (to all reviewers) : updates on experimental results",
"We respectfully disagree with the comment about the novelty/experimental results of the paper. ",
"Thanks for the relevant comments. Here are the responses to the questions.",
"Thanks for the relevant comments. We have improved PTB results according to the suggestions."
],
"comment": [
"Thanks all reviewers for their valuable comments, we updated a new version of the paper by including the following results:\n\n1. We make discussion about the sharpening sigmoid method proposed by the reviewers, and add the algorithm as one of the baselines in the experiments. The experimental results still show that our proposed method achieves the best performance in all tasks.\n\n2. We update the experimental results on language modelling task which achieves the best performance (52.1) as far as we know without using any hyperparameter search method.\n",
"\n[Regarding the experiment]\n\nWe are afraid that the reviewer makes a wrong judgement to the performance results, our model is much better than the baseline on two tasks. \n\nFor machine translation, we achieved the SOTA performance on German->English task and the improvement is significate (+ about 1 point) in the field of translation, not to mention that our model is much better than some other submissions https://openreview.net/forum?id=HktJec1RZ. \n\nFor language model, by leveraging several tricks in literature, we significantly improve the performance from 77.4 to 52.1 (the best number as far as we know). This number is achieved without using any hyperparameter search method, we reported the detail in the paper. \n\n[Regarding the motivation]\n\nWe have discussed in section 2.1 that there are a bunch of work empirically and theoretically studying the relationship between flat loss surface and generalization, not to mention that there are some continuous study and verification in ICLR 2018 submissions, e.g., https://openreview.net/forum?id=HkmaTz-0W . Thus our method is well motivated: by pushing the softmax operator towards its flat region will lead to better generalization. \n\n[Regarding the novelty of the paper]\n\nWe are regretful to see the reviewer claims that there is little novelty in the paper. First, we are the first to apply Gumbel-softmax trick for robust training of LSTM by pushing the value of the gate to the boundary. We empirically show that our method achieves better accuracy even achieves the SOTA performance in some tasks. Second, we show that by different low-precision/low-rank compressions, our model is even still comparable to the baseline models before compressions. \n",
"[Regarding the small temperature experiment]\n\nThanks for figure this out. First, we want to point out that theoretically it doesn’t help: Simply consider function f_{W,b}(x) =sigmoid((Wx+b)/tau), where tau is the temperature, it is computationally equivalent to f_{W’,b’}(x) =sigmoid(W’x+b’) by setting W’=W/tau and b’ = b/tau. Then using a small temperature is equivalent to rescale the initial parameter as well as gradient to a larger range. Usually, setting an initial point in a larger range with a larger learning rate will harm the optimization process.\n\nWe also did a set of experiments and updated the paper to show it doesn’t help in practice.\n\n[Regarding the binary net]\n\nDespite the different between the model structure (gate-based LSTM v.s. CNN), the main difference is that we regularize the output of the activation of the gates to binary value only, but not to regularize the weights. One should notice that the accuracy of Binary Net is usually much worse than the baseline model. However, we show that (1) Our models generalize well among different tasks. (2) The accuracy of the models after low-rank/low-precision compression using our method is competitive to (or even better than) the baseline. Besides, our techniques can also be applied to binarynet training.\n\n[Regarding apply rounding / low-rank for all the parameters]\n\nWe will do the experiment but as our proposed method is focusing on LSTM unit. We are not sure whether the performance will drop a lot when we apply rounding/low-rank to embedding and attention.",
"\n[Regarding the computation of function G]\n\nDuring training, the output of the gate is computed directly by function G, while the function G contains some random noise U.\n\n[Regarding the sharpened sigmoid function experiment]\n\nThanks for figure this out. First, we want to point out that theoretically it doesn’t help: Simply consider function f_{W,b}(x) =sigmoid((Wx+b)/tau), where tau is the temperature, it is computationally equivalent to f_{W’,b’}(x) =sigmoid(W’x+b’) by setting W’=W/tau and b’ = b/tau. Then using a small temperature is equivalent to rescale the initial parameter as well as gradient to a larger range. Usually, setting an initial point in a larger range with a larger learning rate will harm the optimization process.\n\nWe also did a set of experiments and updated the paper to show it doesn’t help in practice.\n\n[Regarding the significance of experimental results]\n\nFor machine translation, we achieved the SOTA performance on German->English task and the improvement is significate (+ about 1 point) in the field of translation, not to mention that our model is much better than some other submissions https://openreview.net/forum?id=HktJec1RZ. For English->German task, we noticed that “Attention is all you need” is the state of the art but it is not LSTM-based; thus we didn’t list that result in the paper.\n\nFor language model, thanks for the reference, we have studied the papers. By leveraging several tricks in literature, we significantly improve the performance from 77.4 to 52.1 (the best number as far as we know) without using any hyperparameter search method, we reported the detail in the paper. \n"
]
} | {
"paperhash": [
"bahdanau|neural_machine_translation_by_jointly_learning_to_align_and_translate",
"bahdanau|an_actor-critic_algorithm_for_sequence_prediction",
"britz|massive_exploration_of_neural_machine_translation_architectures",
"cettolo|report_on_the_11th_iwslt_evaluation_campaign,_iwslt_2014",
"chaudhari|entropy-sgd:_biasing_gradient_descent_into_wide_valleys",
"gal|a_theoretically_grounded_application_of_dropout_in_recurrent_neural_networks",
"gehring|convolutional_sequence_to_sequence_learning",
"gers|learning_to_forget:_continual_prediction_with_lstm",
"grave|improving_neural_language_models_with_a_continuous_cache",
"haussler|mutual_information,_metric_entropy_and_cumulative_relative_entropy_risk",
"hochreiter|the_vanishing_gradient_problem_during_learning_recurrent_neural_nets_and_problem_solutions",
"hochreiter|sepp_hochreiter_and_jürgen_schmidhuber._long_short-term_memory",
"huang|toward_neural_phrase-based_machine_translation",
"inan|tying_word_vectors_and_word_classifiers:_a_loss_framework_for_language_modeling",
"jang|categorical_reparameterization_with_gumbel-softmax",
"jean|on_using_very_large_target_vocabulary_for_neural_machine_translation",
"jozefowicz|exploring_the_limits_of_language_modeling",
"keskar|on_large-batch_training_for_deep_learning:_generalization_gap_and_sharp_minima",
"kim|character-aware_neural_language_models",
"krueger|regularizing_rnns_by_randomly_preserving_hidden_activations",
"matt|gans_for_sequences_of_discrete_elements_with_the_gumbel-softmax_distribution",
"luong|effective_approaches_to_attentionbased_neural_machine_translation",
"maddison|the_concrete_distribution:_a_continuous_relaxation_of_discrete_random_variables",
"melis|on_the_state_of_the_art_of_evaluation_in_neural_language_models",
"merity|pointer_sentinel_mixture_models",
"merity|regularizing_and_optimizing_lstm_language_models",
"papineni|bleu:_a_method_for_automatic_evaluation_of_machine_translation",
"boris|acceleration_of_stochastic_approximation_by_averaging",
"marc|sequence_level_training_with_recurrent_neural_networks",
"semeniuta|recurrent_dropout_without_memory_loss",
"sennrich|neural_machine_translation_of_rare_words_with_subword_units",
"shen|minimum_risk_training_for_neural_machine_translation",
"subramanian|adversarial_generation_of_natural_language",
"villegas|learning_to_generate_long-term_future_via_hierarchical_prediction",
"vinyals|show_and_tell:_a_neural_image_caption_generator",
"wan|regularization_of_neural_networks_using_dropconnect",
"wiseman|sequence-to-sequence_learning_as_beam-search_optimization",
"wiseman|sequence-to-sequence_learning_as_beam-search_optimization",
"wu|google's_neural_machine_translation_system:_bridging_the_gap_between_human_and_machine_translation",
"xingjian|convolutional_lstm_network:_a_machine_learning_approach_for_precipitation_nowcasting",
"xu|rich_zemel,_and_yoshua_bengio._show,_attend_and_tell:_neural_image_caption_generation_with_visual_attention",
"zaremba|recurrent_neural_network_regularization",
"matthew|adadelta:_an_adaptive_learning_rate_method",
"zhang|highway_long_short-term_memory_rnns_for_distant_speech_recognition",
"zoph|neural_architecture_search_with_reinforcement_learning"
],
"title": [
"Neural machine translation by jointly learning to align and translate",
"An actor-critic algorithm for sequence prediction",
"Massive exploration of neural machine translation architectures",
"Report on the 11th iwslt evaluation campaign, iwslt 2014",
"Entropy-sgd: Biasing gradient descent into wide valleys",
"A theoretically grounded application of dropout in recurrent neural networks",
"Convolutional sequence to sequence learning",
"Learning to forget: Continual prediction with lstm",
"Improving neural language models with a continuous cache",
"Mutual information, metric entropy and cumulative relative entropy risk",
"The vanishing gradient problem during learning recurrent neural nets and problem solutions",
"Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory",
"Toward neural phrase-based machine translation",
"Tying word vectors and word classifiers: A loss framework for language modeling",
"Categorical reparameterization with gumbel-softmax",
"On using very large target vocabulary for neural machine translation",
"Exploring the limits of language modeling",
"On large-batch training for deep learning: Generalization gap and sharp minima",
"Character-aware neural language models",
"Regularizing rnns by randomly preserving hidden activations",
"Gans for sequences of discrete elements with the gumbel-softmax distribution",
"Effective approaches to attentionbased neural machine translation",
"The concrete distribution: A continuous relaxation of discrete random variables",
"On the state of the art of evaluation in neural language models",
"Pointer sentinel mixture models",
"Regularizing and optimizing lstm language models",
"Bleu: a method for automatic evaluation of machine translation",
"Acceleration of stochastic approximation by averaging",
"Sequence level training with recurrent neural networks",
"Recurrent dropout without memory loss",
"Neural machine translation of rare words with subword units",
"Minimum risk training for neural machine translation",
"Adversarial generation of natural language",
"Learning to generate long-term future via hierarchical prediction",
"Show and tell: A neural image caption generator",
"Regularization of neural networks using dropconnect",
"Sequence-to-sequence learning as beam-search optimization",
"Sequence-to-sequence learning as beam-search optimization",
"Google's neural machine translation system: Bridging the gap between human and machine translation",
"Convolutional lstm network: A machine learning approach for precipitation nowcasting",
"Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention",
"Recurrent neural network regularization",
"Adadelta: an adaptive learning rate method",
"Highway long short-term memory rnns for distant speech recognition",
"Neural architecture search with reinforcement learning"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"dzmitry bahdanau",
"kyunghyun cho",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"dzmitry bahdanau",
"philemon brakel",
"kelvin xu",
"anirudh goyal",
"ryan lowe",
"joelle pineau",
"aaron courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"denny britz",
"anna goldie",
"thang luong",
"quoc le"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mauro cettolo",
"jan niehues",
"sebastian stüker",
"luisa bentivogli",
"marcello federico"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pratik chaudhari",
"anna choromanska",
"stefano soatto",
"yann lecun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yarin gal",
"zoubin ghahramani"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jonas gehring",
"michael auli",
"david grangier",
"denis yarats",
"yann n dauphin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jürgen felix a gers",
"fred schmidhuber",
" cummins"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"edouard grave",
"armand joulin",
"nicolas usunier"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david haussler",
"manfred opper"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sepp hochreiter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sepp hochreiter",
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"po-sen huang",
"chong wang",
"dengyong zhou",
"li deng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"hakan inan",
"khashayar khosravi",
"richard socher"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"eric jang",
"shixiang gu",
"ben poole"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sébastien jean",
"kyunghyun cho",
"roland memisevic",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rafal jozefowicz",
"oriol vinyals",
"mike schuster",
"noam shazeer",
"yonghui wu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nitish shirish keskar",
"dheevatsa mudigere",
"jorge nocedal",
"mikhail smelyanskiy",
"ping tak",
"peter tang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoon kim",
"yacine jernite",
"david sontag",
"alexander m rush"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david krueger",
"tegan maharaj",
"janos kramar",
"mohammad pezeshki",
"nicolas ballas",
"nan rosemary ke",
"anirudh goyal",
"yoshua bengio",
"aaron courville",
"christopher pal",
" zoneout"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"j matt",
"josé kusner",
" miguel hernández-lobato"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"minh-thang luong",
"hieu pham",
"christopher d manning"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"andriy chris j maddison",
"yee whye mnih",
" teh"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"gábor melis",
"chris dyer",
"phil blunsom"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"stephen merity",
"caiming xiong",
"james bradbury",
"richard socher"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"stephen merity",
"nitish shirish keskar",
"richard socher"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kishore papineni",
"salim roukos",
"todd ward",
"wei-jing zhu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"t boris",
"anatoli b polyak",
" juditsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"aurelio marc",
"sumit ranzato",
"michael chopra",
"wojciech auli",
" zaremba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"stanislau semeniuta",
"aliaksei severyn",
"erhardt barth"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rico sennrich",
"barry haddow",
"alexandra birch"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"shiqi shen",
"yong cheng",
"zhongjun he",
"wei he",
"hua wu",
"maosong sun",
"yang liu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sandeep subramanian",
"sai rajeswar",
"francis dutil",
"christopher pal",
"aaron courville"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ruben villegas",
"jimei yang",
"yuliang zou",
"sungryull sohn",
"xunyu lin",
"honglak lee"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"oriol vinyals",
"alexander toshev",
"samy bengio",
"dumitru erhan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"li wan",
"matthew zeiler",
"sixin zhang",
"yann le cun",
"rob fergus"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sam wiseman",
"alexander m rush"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sam wiseman",
"alexander m rush"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yonghui wu",
"mike schuster",
"zhifeng chen",
"v quoc",
"mohammad le",
"wolfgang norouzi",
"maxim macherey",
"yuan krikun",
"qin cao",
"klaus gao",
" macherey"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"zhourong shi xingjian",
"hao chen",
"dit-yan wang",
"wai-kin yeung",
"wang-chun wong",
" woo"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kelvin xu",
"jimmy ba",
"ryan kiros",
"kyunghyun cho",
"aaron courville",
"ruslan salakhudinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"wojciech zaremba",
"ilya sutskever",
"oriol vinyals"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"d matthew",
" zeiler"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yu zhang",
"guoguo chen",
"dong yu",
"kaisheng yaco",
"sanjeev khudanpur",
"james glass"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"barret zoph",
"v quoc",
" le"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1409.0473v7",
"arXiv:1607.07086",
"1703.03906v2",
"",
"1611.01838v5",
"1512.05287v5",
"1705.03122v3",
"",
"1612.04426v1",
"",
"",
"",
"",
"1611.01462v3",
"arXiv:1611.01144",
"1412.2007v2",
"1602.02410v2",
"arXiv:1609.04836",
"",
"",
"arXiv:1611.04051",
"arXiv:1508.04025",
"1611.00712v3",
"1707.05589v2",
"1609.07843v1",
"1708.02182v1",
"",
"",
"1511.06732v7",
"1603.05118v2",
"1508.07909v5",
"1512.02433v3",
"1705.10929v1",
"arXiv:1704.05831",
"1411.4555v2",
"",
"",
"arXiv:1606.02960",
"1609.08144v2",
"1506.04214v2",
"",
"1409.2329v5",
"1212.5701v1",
"",
"1611.01578v2"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.481481 | 0.666667 | null | null | null | null | null | rJiaRbk0- |
||
quint|interpretable_classification_via_supervised_variational_autoencoders_and_differentiable_decision_trees|ICLR_cc_2018_Conference | Interpretable Classification via Supervised Variational Autoencoders and Differentiable Decision Trees | As deep learning-based classifiers are increasingly adopted in real-world applications, the importance of understanding how a particular label is chosen grows. Single decision trees are an example of a simple, interpretable classifier, but are unsuitable for use with complex, high-dimensional data. On the other hand, the variational autoencoder (VAE) is designed to learn a factored, low-dimensional representation of data, but typically encodes high-likelihood data in an intrinsically non-separable way. We introduce the differentiable decision tree (DDT) as a modular component of deep networks and a simple, differentiable loss function that allows for end-to-end optimization of a deep network to compress high-dimensional data for classification by a single decision tree. We also explore the power of labeled data in a supervised VAE (SVAE) with a Gaussian mixture prior, which leverages label information to produce a high-quality generative model with improved bounds on log-likelihood. We combine the SVAE with the DDT to get our classifier+VAE (C+VAE), which is competitive in both classification error and log-likelihood, despite optimizing both simultaneously and using a very simple encoder/decoder architecture. | {
"name": [],
"affiliation": []
} | We combine differentiable decision trees with supervised variational autoencoders to enhance interpretability of classification. | [
"interpretable classification",
"decision trees",
"deep learning",
"variational autoencoder"
] | null | 2018-02-15 22:29:38 | 16 | null | null | null | null | null | null | null | null | false | The paper proposes a new model called differential decision tree which captures the benefits of decision trees and VAEs. They evaluate the method only on the MNIST dataset. The reviewers thus rightly complain that the evaluation is thus insufficient and one also questions its technical novelty. | {
"review_id": [
"SJCjWdiJG",
"HktiHfugG",
"H1v-LprxG"
],
"review": [
{
"title": "title: Review: Interesting hybrid model, but weak experiments (MNIST only)",
"paper_summary": null,
"main_review": "main_review: \nSummary\n\nThis paper proposes a hybrid model (C+VAE)---a variational autoencoder (VAE) composed with a differentiable decision tree (DDT)---and an accompanying training scheme. Firstly, the prior is specified as a mixture distribution with one component per class (SVAE). During training, the ELBO’s KL term uses the component that corresponds to the known label. Secondly, the DDT’s leaves are parametrized with the encoder distribution q(z|x), and thus gradient information flows back through the DDT into the posterior approximations in order to make them more discriminative. Lastly, the VAE and DDT are trained together by alternating optimization of each component (plus a ridge penalty on the decoder means). Experiments are performed on MNIST, demonstrating tree classification performance, (supervised) neg. log likelihood performance, and latent space interpretability via the DDT. \n\n\nEvaluation\n\nPros: Giving the VAE discriminative capabilities is an interesting line of research, and this paper provides another take on tree-based VAEs, which are challenging to define given the discrete nature of the former and continuous nature of the latter. Thus, I applaud the authors for combining the two in a way that admits efficient training. Moreover, I like the qualitative experiment (Figure 2) in which the tree is used to vary a latent dimension to change the digit’s class. I can see this being used for dataset augmentation or adversarial example generation, for instance.\n\nCons: An indefensible flaw in the work is that the model is evaluated on only MNIST. As there is no strong theory in the paper, this limited experimental evaluation is reason enough for rejection. Yet, moreover, the negative log likelihood comparison (Table 2) is not an informative comparison, as it speaks only to the power of adding supervision. Lastly, I do not think the interpretability provided by the decision tree is as great as the authors seem to claim. Decision trees provide rich and interpretable structure only when each input feature has clear semantics. However, in this case, the latent space is being used as input to the tree. As the decision tree, then, is merely learning hard, class-based partitioning rules for the latent space, I do not see how the tree is representing anything especially revealing. Taking Figure 2 as an example (which I do like the end result of), I could generate similar results with a black-box classifier by using gradients to perturb the latent ‘4’ mean into a latent ‘7’ mean (a la DeepDream). I could then identify the influential dimension(s) by taking the largest absolute values in the gradient vector. Maybe there is another use case in which a decision tree is superior; I’m just saying Section 4.3 doesn’t convince me to the extent that was promised earlier in the paper (and by the title).\n\nComment: It's easier to make a latent variable model interpretable when the latent variables are given clear semantics in the model definition, in my opinion. Otherwise, the semantics of the latent space become too entangled. Could you, somehow, force the tree to encode an identifiable attribute at each node, which would then force that attribute to be encoded in a certain dimension of latent space? \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interpretable classifier via deep learning is an important topic, but the work in this paper is not a substantial contribution.",
"paper_summary": null,
"main_review": "main_review: This paper addresses a method of building an interpretable model for classification, where two key ingredients are (1) supervised variational autoencoder and (2) differentiable decision tree. Recently one important line of research is to build interpretable models which have more modeling capacity while maintaining interpretability, over existing models such as linear models or decision trees. In this sense, the current work is timely research. A few contributions are claimed in this paper: (1) differentiable decision tree which allows for gradient-based optimization; (2) supervised VAE where class-specific Gaussian prior is used for the probabilistic decoder in the VAE; (3) combination of these two models. Regarding the differentiable decision tree, I am not an expert in decision tree. However, I understand that there have been various work on probabilistic decision tree, Bayesian decision tree, and Mondrian tree. More literature survey might be needed to pin-point what's new and what's common with previous work. Regarding the supervised VAE, the term \"supervised VAE\" is misleading. To me, the current model is nothing but VAE with class-specific Gaussian prior. (3) Regarding the combination of supervised VAE and DDT, it would be much better to show us a graphical illustration of the model to improve the readability. I see the encoder is common for both the decoder and DDT. However, it is not clear how DDT is coupled with the encoder. It seems that DDT takes the output of the encoder as input but the output of DDT is not coupled with VAE. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Some comments on the evaluation and results",
"paper_summary": null,
"main_review": "main_review: The paper tries to build an interpretable and accurate classifier via stacking a supervised VAE (SVAE) and a differentiable decision tree (DTT). The problem is important and interesting. The authors list the contributions of each part but it seems that only the final contribution, i.e. analysis of the interpretability, is interesting and should be further extended and emphasized. Here with the detailed comments.\n\n1. I think Table 2 does not make sense at all. This is not only because the authors use the label information but also because the authors compare different quantities. The the previous methods evaluate log p(x) while the proposed method evaluates log p(x, y) which should be much lower as the proposed method potentially trains a separated model for each class of the x for evaluation.\n\n2. The generation results of the SVAE shown in Figure 7 in Appendix A seem strange as the diversity of the samples is much less than those from the vanilla VAEs. Could the authors explain this mode collapse phenomenon? \n\n3. The results in Table 1 are not interesting. It is most useful to interpret the state-of-the-art classifier while the results of the proposed methods are far from the state-of-the-art even on such simple MNIST dataset.\n\n4. The most interesting results of this paper are shown in Figure 1. However, I think the results on the interpretability should be further extended. Several questions are as follows: \n\nWhy other dimensions are not so interpretable, compared with 21?\n\nCan we also interpret a VAE given labels by varying each dimension of the latent variables without jointly training a DTT? I personally think some of the dimensions of the latent variables of the vanilla VAEs can also be interpreted via interpolation in each dimension. \n\nCan these results be generalized to other datasets, consisting of natural images? \n\nOverall, this paper is below the acceptance threshold.\n ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.2222222238779068,
0.4444444477558136,
0.3333333432674408
],
"confidence": [
1,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Correction of calculations of log-likelihood results",
"Thanks to the reviewers for the helpful comments "
],
"comment": [
"We recently discovered a numerical error of calculation of KL-divergence, which impacted final calculation of log-likelihood of our models SVAE and C+VAE. Our updated bounds for log-likelihood are -102.77 for SVAE and -110.12 for C+VAE. (Classification results were unchanged.)\n\nIn the new version we plan to upload soon, we also updated the discussion to reflect that, while our models no longer greatly improve over more complex, state-of-the-art models in terms of log-likelihood, SVAE still improves over an unmodified VAE (which uses the same encoder-decoder pair that we use), and C+VAE is comparable to an unmodified VAE when simultaneously optimizing for both classification and generative performance.\n",
"We greatly appreciate the detailed feedback from the reviewers, and will look into refocusing our paper on the interpretability aspects.\n\nWe updated the pdf to fix the bug mentioned in our earlier comment, but made no other changes at this time, pending the refocusing described above. \n"
]
} | {
"paperhash": [
"breiman|classification_and_regression_trees",
"burda|importance_weighted_autoencoders",
"dilokthanakul|deep_unsupervised_clustering_with_gaussian_mixture_variational_autoencoders",
"kingma|auto-encoding_variational_bayes",
"diederik|adam:_a_method_for_stochastic_optimization",
"diederik|semi-supervised_learning_with_deep_generative_models",
"diederik|improving_variational_autoencoders_with_inverse_autoregressive_flow",
"kontschieder|deep_neural_decision_forests",
"kullback|on_information_and_sufficiency",
"pedregosa|scikit-learn:_machine_learning_in_python",
"|c4.5:_programs_for_machine_learning",
"rasmus|semi-supervised_learning_with_ladder_networks",
"jimenez|variational_inference_with_normalizing_flows",
"rezende|stochastic_back-propagation_and_variational_inference_in_deep_latent_gaussian_models",
"salimans|markov_chain_monte_carlo_and_variational_inference:_bridging_the_gap",
"zoran|learning_deep_nearest_neighbor_representations_using_differentiable_boundary_trees"
],
"title": [
"Classification and regression trees",
"Importance weighted autoencoders",
"Deep unsupervised clustering with gaussian mixture variational autoencoders",
"Auto-encoding variational bayes",
"Adam: A method for stochastic optimization",
"Semi-supervised learning with deep generative models",
"Improving variational autoencoders with inverse autoregressive flow",
"Deep neural decision forests",
"On information and sufficiency",
"Scikit-learn: Machine learning in Python",
"C4.5: Programs for machine learning",
"Semi-supervised learning with ladder networks",
"Variational inference with normalizing flows",
"Stochastic back-propagation and variational inference in deep latent gaussian models",
"Markov chain monte carlo and variational inference: Bridging the gap",
"Learning deep nearest neighbor representations using differentiable boundary trees"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"leo breiman",
"jerome friedman",
"charles j stone",
"richard a olshen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yuri burda",
"roger b grosse",
"ruslan salakhutdinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nat dilokthanakul",
"pedro a m mediano",
"marta garnelo",
"c h matthew",
"hugh lee",
"kai salimbeni",
"murray arulkumaran",
" shanahan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"d p kingma",
" welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"p diederik",
"jimmy kingma",
" ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"p diederik",
"danilo j kingma",
"shakir rezende",
"max mohamed",
" welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"p diederik",
"tim kingma",
"rafal salimans",
"xi józefowicz",
"ilya chen",
"max sutskever",
" welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"peter kontschieder",
"madalina fiterau",
"antonio criminisi",
"samuel rota bulò"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"s kullback",
"r a leibler"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"f pedregosa",
"g varoquaux",
"a gramfort",
"v michel",
"b thirion",
"o grisel",
"m blondel",
"p prettenhofer",
"r weiss",
"v dubourg",
"j vanderplas",
"a passos",
"d cournapeau",
"m brucher",
"m perrot",
"e duchesnay"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"j ",
"ross quinlan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"antti rasmus",
"mathias berglund",
"mikko honkala",
"harri valpola",
"tapani raiko"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"danilo jimenez",
"rezende ",
"shakir mohamed"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"danilo jimenez rezende",
"shakir mohamed",
"daan wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tim salimans",
"p diederik",
"max kingma",
" welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"balaji daniel zoran",
"charles lakshminarayanan",
" blundell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"1509.00519v4",
"1611.02648v2",
"",
"1412.6980v9",
"",
"",
"",
"",
"1201.0490v4",
"",
"",
"1505.05770v6",
"",
"1410.6460v4",
"1702.08833v1"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.333333 | 0.833333 | null | null | null | null | null | rJhR_pxCZ |
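The author comment in this record attributes the revised log-likelihood bounds to a numerical error in the KL-divergence calculation, and the first review characterizes the SVAE as a VAE with a class-specific Gaussian prior. For readers who want to sanity-check that term, the following is a minimal, hedged sketch of the closed-form KL between a diagonal Gaussian posterior and a class-specific unit-variance Gaussian prior. It is an illustration only, not the authors' code: the function name, the diagonal/unit-variance parameterization, and the `mu_c` prior means are assumptions.

```python
import numpy as np

def kl_diag_gaussian_to_class_prior(mu, log_var, mu_c):
    """KL( N(mu, diag(exp(log_var))) || N(mu_c, I) ), summed over latent dimensions.

    mu, log_var : (batch, d) posterior parameters produced by the encoder
    mu_c        : (batch, d) class-specific prior means (unit prior variance assumed)
    """
    var = np.exp(log_var)
    # Standard closed form: 0.5 * sum( var + (mu - mu_c)^2 - 1 - log_var )
    return 0.5 * np.sum(var + (mu - mu_c) ** 2 - 1.0 - log_var, axis=1)

# Tiny usage example with made-up numbers.
rng = np.random.default_rng(0)
mu = rng.normal(size=(4, 8))
log_var = rng.normal(scale=0.1, size=(4, 8))
mu_c = np.zeros((4, 8))  # with a zero-mean prior this reduces to the vanilla VAE KL term
print(kl_diag_gaussian_to_class_prior(mu, log_var, mu_c))
```

With `mu_c = 0` this reduces to the usual VAE KL term, which makes it easy to compare a class-specific prior against an unmodified VAE on the same encoder output.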
||
thekumparampil|attentionbased_graph_neural_network_for_semisupervised_learning|ICLR_cc_2018_Conference | Attention-based Graph Neural Network for Semi-supervised Learning | Recently popularized graph neural networks achieve the state-of-the-art accuracy on a number of standard benchmark datasets for graph-based semi-supervised learning, improving significantly over existing approaches. These architectures alternate between a propagation layer that aggregates the hidden states of the local neighborhood and a fully-connected layer. Perhaps surprisingly, we show that a linear model, that removes all the intermediate fully-connected layers, is still able to achieve a performance comparable to the state-of-the-art models. This significantly reduces the number of parameters, which is critical for semi-supervised learning where number of labeled examples are small. This in turn allows a room for designing more innovative propagation layers. Based on this insight, we propose a novel graph neural network that removes all the intermediate fully-connected layers, and replaces the propagation layers with attention mechanisms that respect the structure of the graph. The attention mechanism allows us to learn a dynamic and adaptive local summary of the neighborhood to achieve more accurate predictions. In a number of experiments on benchmark citation networks datasets, we demonstrate that our approach outperforms competing methods. By examining the attention weights among neighbors, we show that our model provides some interesting insights on how neighbors influence each other. | {
"name": [],
"affiliation": []
} | We propose a novel attention-based interpretable Graph Neural Network architecture which outperforms the current state-of-the-art Graph Neural Networks in standard benchmark datasets | [
"Graph Neural Network",
"Attention",
"Semi-supervised Learning"
] | null | 2018-02-15 22:29:21 | 48 | null | null | null | null | null | null | null | null | false | A version of GCNs of Kipf and Welling is introduced with (1) no non-linearity; (2) a basic form of (softmax) attention over neighbors where the attention scores are computed as the cosine of endpoints' representations (scaled with a single learned scalar). There is a moderate improvement on Citeseer, Cora, Pubmed.
Since the use of gates with GCNs / graph neural networks is becoming increasingly common (starting perhaps with the GGSNNs of Li et al., ICLR 2016) and using attention in graph neural networks is also not new (see reviews and comments for references), the novelty is very limited. In order to make the submission more convincing, the authors could: (1) present results on harder datasets; (2) carefully evaluate against other forms of attention (i.e. previous work).
As it stands, though it is interesting to see that such a simple model performs well on the three datasets, I do not see it as an ICLR paper.
Pros:
-- a simple model, achieves results close / on par with state of the art
Cons:
-- limited originality
-- results on harder datasets and/or an evaluation against other forms of attention (i.e. previous work) are needed
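For concreteness, a minimal sketch of the attention rule summarized in this meta-review — scores given by the cosine of the endpoints' representations, scaled by a single learned scalar and passed through a softmax over each node's neighbors — might look as follows. This is not the authors' implementation; the self-loop handling, variable names, and NumPy formulation are assumptions made purely for illustration.

```python
import numpy as np

def attention_propagation(H, A, beta):
    """One propagation step: P[i, j] = softmax_j( beta * cos(H[i], H[j]) ) over j in N(i).

    H    : (n, d) node representations
    A    : (n, n) binary adjacency matrix (assumed to already include self-loops)
    beta : scalar, the single learned scale/temperature parameter of this layer
    """
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)
    cos = Hn @ Hn.T                                 # pairwise cosine similarities
    scores = np.where(A > 0, beta * cos, -np.inf)   # restrict attention to graph neighbors
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    P = np.exp(scores)
    P /= P.sum(axis=1, keepdims=True)               # row-wise softmax over each neighborhood
    return P @ H                                    # attention-weighted neighborhood average

# Toy usage on a 4-node path graph with self-loops.
A = np.eye(4) + np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
H = np.random.default_rng(0).normal(size=(4, 3))
print(attention_propagation(H, A, beta=1.0))
```

Note that although the cosine term is symmetric, the row-wise softmax is taken over each node's own neighborhood, so the resulting attention matrix need not be symmetric.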
| {
"review_id": [
"rJmKbdIgM",
"S1Z9bmyZf",
"HJvS2zhgz"
],
"review": [
{
"title": "title: idea would be reasonable and constains interesting insight",
"paper_summary": null,
"main_review": "main_review: The paper proposes graph-based neural network in which weights from neighboring nodes are adaptively determined. The paper shows importance of propagation layer while showing the non-linear layer does not have significant effect. Further the proposed method also provides class relation based on the edge-wise relevance.\n\nThe paper is easy to follow and the idea would be reasonable. \n\nImportance of the propagation layer than the non-linear layer is interesting, and I think it is worth showing.\n\nVariance of results of AGNN is comparable or even smaller than GLN. This is a bit surprising because AGNN would be more complicated computation than GLN. Is there any good explanation of this low variance of AGNN?\n\nInterpretation of Figure 2 is not clear. All colored nodes except for the thick circle are labeled node? I couldn't judge those predictions are appropriate or not.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: few questions",
"paper_summary": null,
"main_review": "main_review: SUMMARY.\n\nThe paper presents an extension of graph convolutional networks.\nGraph convolutional networks are able to model nodes in a graph taking into consideration the structure of the graph.\nThe authors propose two extensions of GCNs, they first remove intermediate non-linearities from the GCN computation, and then they add an attention mechanism in the aggregation layer, in order to weight the contribution of neighboring nodes in the creation of the new node representation.\nInterestingly, the proposed linear model obtains results that are on-par with the state-of-the-art model, and the linear model with attention outperforms the state-of-the-art models on several standard benchmarks.\n\n\n----------\n\nOVERALL JUDGMENT\nThe paper is, for the most part, clear, although some improvement on the presentation would be good (see below).\nAn important issue the authors should address is the notation consistency, the indexes i and j are used for defining nodes and labels, please use another index for labels.\nIt is very interesting that stripping standard GCN out of nonlinearities gives pretty much the same results, I would appreciate if the authors could give some insights of why this is the case.\nIt seems to me that an important experiment is missing here, have the authors tried to apply the attention model with the standard GCN?\nI like the idea of using a very minimal attention mechanism. The similarity function used for the attention (cosine) is symmetric, this means that if two nodes are connected in both directions, they will be equally important for each other. But intuitively this is not true in general. It would be interesting if the authors could elaborate a bit more on the choice of the similarity function.\n\n\n----------\n\nDETAILED COMMENTS\nPage 2. I do not understand the point of so many details on Graph Laplacian Regularization.\nPage 2. The use of the term 'skip-grams' is somewhat odd, it is not clear what the authors mean with that.\nPage 3. 'the natural random walk' ???\nBottom of page 4. When the authors introduce the attention based network also introduce the input/embedding layer, I believe there is a better place to do so instead of that together with the most important contribution of the paper.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting paper with nice findings. The originality is however relatively limited in a field where many recent papers have been proposed, and the experiments need to be completed.",
"paper_summary": null,
"main_review": "main_review: The paper proposes a semi supervised learning algorithm for graph node classification. The Algorithm is inspired from Graph Neural Networks and more precisely graph convolutional NNs recently proposed by ref (Kipf et al 2016)) in the paper. These NNs alternate 2 types of layers: non linear projection and diffusion, the latter incorporates the graph relational information by constraining neighbor nodes to have close representations according to some “graph metrics”. The authors propose a model with simplified projection layers and more sophisticated diffusion ones, incorporating a simple attention mechanism. Experiments are performed on citation textual datasets. Comparisons with published results on the same datasets are presented.\n\nThe paper is clear and develops interesting ideas relevant to semi-supervised graph node classification. One finding is that simple models perform as well as more complex ones in this setting where labeled data is scarce. Another one is the importance of integrating relational information for classifying nodes when it is available. The attention mechanism itself is extremely simple, and learns one parameter per diffusion layers. One parameter weights correlations between node embeddings in a diffusion layer. I understand that you tried more complex attention mechanisms, but the one finally selected is barely an attention mechanism and rather a simple “importance” weight. This is not a criticism, but this makes the title somewhat misleading. The experiments show that the proposed model is state of the art for graph node classification. The performance is on par with some other recent models according to table 2. The other tests are also interesting, but the comparison could have been extended to other models e.g. GCN.\nYou advocate the role of the diffusion layers, and in the experiments you stack 3 to 4 such layers. It would be interesting to have indications on the compromise performance/ number of diffusion layers and on the evolution of these performances when adding such layers.\nThe bibliography on semi-supervised learning in graphs for classification is light and should be enhanced.\nOverall this is an interesting paper with nice findings. The originality is however relatively limited in a field where many recent papers have been proposed, and the experiments need to be completed.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.5555555820465088,
0.6666666865348816
],
"confidence": [
0.25,
0.5,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Thank you for pointing out the missing reference.",
"Thank you for the insightful comment.",
"Changes made to the paper",
"Response to AnonReviewer2: Improved Figure 2 for qualitative analysis of Attention",
"Response to AnonReviewer4",
"Response to AnonReviewer3: Compared AGNN with different number of layers and added experiments with GCN"
],
"comment": [
"Thank you for your interest in our paper and bringing the VAIN model to our attention.\n\nWe see that VAIN uses attention between multiple-agents in a system. We were not aware of this line of literature when we submitted the paper. We will cite this line of work in our final version. Below is a comparison between VAIN and Attention-based-Graph Neural Network.\n\nMain similarities and differences between VAIN and Attention-based Graph Neural Network (AGNN) are as follows:\n1. Experimental results: AGNN is tested on semi-supervised classification of nodes on a graph where as VAIN is tested on prediction of future state in multi-agent systems.\n\n2. Side information: AGNN is graph processing neural network, but VAIN model does not take graph as an input as initially proposed (although it could). In VAIN, it is assumed that every agent can possibly interact with every other agent in the system. Where as in AGNN we have a known graph in which real world first order interaction between two nodes are represented as edges and attention is computed only for these first order interactions. VAIN clubs all the higher order (long range) interactions into a single attention mechanism, where as AGNN computes higher order interactions through multiple hops of first order attention mechanism.\n",
"We agree that graph classification is another exciting application where graph neural networks are making breakthroughs. There are several key differences in the dataset (from citation networks) and we have not tried the idea of linear architecture for the molecular dataset yet. For example, the edges have attributes. There are straight forward ways to incorporate such information into GNNs, but we have not pursued this direction yet. I do agree the experiments you suggested will both (a) clarify what the gain is in non-linear activation; and (b) give insights on how different datasets (and applications) might require different architectures. \n\nFor the linear model, we did not tune the hyper parameters and the same hyper parameters are used as your (Kipf and Welling) original GCN. We made a small change in the stopping criteria to take the best model in validation error out of all epochs. We did not see any significant change when we use the same stopping criteria as GCN. We will make this explicit during the revision process. Overall, there was no hyperparameter tuning for the linear model, and all the numbers should provide fair comparisons.\n\nThank you for the references, we will surely include and discuss all the great prior work you pointed out. \n\n",
"We thank the reviewers and the other commenters for helping us improve our work and its presentation. Taking the reviews and comments to heart we have made several changes which, we believe, greatly improve our paper. We added comparison of performance of AGNN with different number of propagation layers in Appendix C. In the Appendix D, we added experimental results of GCN on both random splits and cross-validation settings. Further, we have expanded the bibliography in the Sections 2 and 4.1. As per the reviews we have also made changes to notations and presentation style in Sections 3 and 4. In Section 5.2 and Appendix A, we corrected the order of class names. We improved the caption and marked training set nodes in Figures 2, 4, 5 and 6. Finally, we made some minor changes in the text.",
"Thank you for your time, review and valuable comments.\n\n1. Regarding the similar variance of results of AGNN and GLN: In Table 2 of the original version we don’t report the variance or standard-deviation of accuracies of the trials, but we report (as mentioned in paragraph 1 on page 3 of original version) standard-error which defined as standard-deviation/square-root(number of trials) (https://en.wikipedia.org/wiki/Standard_error). That being said, when the training data is fixed (as is the case for Table 2), the variance of GLN is smaller than that of AGNN as predicted by the reviewer, as the only source of randomness is the initialization of the neural network weights. On the other hand, when the training data is chosen randomly (As is the case for Tables 3 and 4), there are two sources of randomness and the variance of GLN and AGNN are harder to predict and compare. We could not predict how different choices of the training data affects the accuracy, and it can happen that GLN has larger variance than AGNN.\n\n2. Regarding Figure 2.: We apologize for the lack of clarity in its caption. The thick nodes are from the test set whose labels are not known to the model at training time. For clarification, we have now added `*’ (asterisk) to mark nodes from the training set whose labels were revealed to the model during training (e.g. Figure 4). Coincidentally none of the neighborhood in Figure 2 have any nodes from the training set.\n",
"We are thankful for your review and insightful comments.\n\n1. Confusing notation is corrected: In the revised version $c$ indexes a label.\n\n2. Why GLN works: For semi-supervised learning, we believe that the primary gain of using graph neural network comes from the “Averaging” effect. Similar to denoising pixels in images, by averaging neighbors features, we get a denoised version of current nodes’ features. This gives significant gain over those estimations without denoising (such as Mulit-Layer Perceptron in Table 2). This, we believe, is why GLN is already achieving the state-of-the-art performance. The focus of this paper is how to get the next remaining gain, which we achieve by proposing asymmetric averaging using “attention”. So far, we did not see any noticeable gain in non-linear activation for semi-supervised learning. However, we believe such non-linearity can be important for other applications, such as graph classification tasks on molecular networks. \n\n3. Attention in GCN: GCN with attention did not give gain over our AGNN architecture, which is somewhat expected as GCN and GLN have comparable performances, within the error margin of each other. Note that from the architecture complexity perspective AGNN is simpler than GCN with attention, meaning that AGNN might have a better chance explaining the data.\n\n4. Symmetric attention: Even though the scaled cosine similarity would be symmetric between two connected nodes $i$ and $j$, the attention value itself can be different due to the fact that softmax computations are calculated on different neighborhoods: $N(i)$ and $N(j)$ respectively.\nBut we agree that attention mechanism has an element of symmetry and this might be alleviated by using more complex attention mechanism. As the reviewer pointed out, we chose the simple attention mechanism here; we tried various attention mechanisms with varying degrees of complexity, and found the simple attention mechanism to give the best performance. Training complex attention is challenging, and we would like to explore more complex ones in our future work.\n\nResponse to detailed comments:\n\n1. Details on Graph Laplacian Regularization: We added details about Laplacian regularization for completeness of discussion of previous work and because Laplacian regularizations closely related to the propagations layers used in almost all Graph Neural Network papers.\n2. ‘Skip-grams’: We added some clarification on the use of ‘skip-grams’ in the revised version.\n3. ‘Natural random walk’ on a graph is random walk where one move from a node to one of its neighbors selected with uniform probability. We have clarified this in the revised version.\n4. Presentation of the Attention-based Graph Neural Network: Thanks for pointing this out. We have made some changes to the presentation style.\n",
"Thank you for reviewing our paper and pointing out missed experiments and inconsistencies.\n\n1. Attention mechanism: It is true as the reviewer pointed out that our attention mechanism is very simple. We settled on this choice after training/testing several attention mechanisms, most of which are more complex than the one we propose. The proposed simple attention mechanism gave the best performance, among those we tried. We believe this is due to the fact that complex attention mechanisms are harder to train as there are more parameters to learn.\n\n2. GCN on other training sets: The reason we do not report GCN performance in tables 2 and 3 is that we made it our rule not to run other researcher’s algorithms ourselves, at the fear of not doing justice in the hyperparameters we need to choose. However, given the interest in the numerical comparisons, as the reviewer pointed out, in the revised version, we run these experiments and reported the performance of GCN in the appendix D (as it might give the wrong impression that those results are performed by the authors of GCN, if we put it in the table in the main text).\n\n3. Choice of number of diffusion layers: Thanks for pointing this out. We have added a table in the appendix C which contains testing accuracies of AGNN model with different number of diffusion layers.\n\n4. Regarding bibliography: We have expanded the bibliography on semi-supervised learning using graphs. Please see the section 2 in the revised manuscript.\n"
]
} | {
"paperhash": [
"abadi|tensorflow:_large-scale_machine_learning_on_heterogeneous_distributed_systems",
"atwood|diffusion-convolutional_neural_networks",
"bahdanau|neural_machine_translation_by_jointly_learning_to_align_and_translate",
"berg|graph_convolutional_matrix_completion",
"bronstein|geometric_deep_learning:_going_beyond_euclidean_data",
"bruna|spectral_networks_and_locally_connected_networks_on_graphs",
"dai|discriminative_embeddings_of_latent_variable_models_for_structured_data",
"defferrard|convolutional_neural_networks_on_graphs_with_fast_localized_spectral_filtering",
"duvenaud|convolutional_networks_on_graphs_for_learning_molecular_fingerprints",
"gilmer|neural_message_passing_for_quantum_chemistry",
"graves|neural_turing_machines",
"grover|node2vec:_scalable_feature_learning_for_networks",
"hamilton|inductive_representation_learning_on_large_graphs",
"kipf|semi-supervised_classification_with_graph_convolutional_networks",
"li|gated_graph_sequence_neural_networks",
"monti|geometric_deep_learning_on_graphs_and_manifolds_using_mixture_model_cnns",
"niepert|learning_convolutional_neural_networks_for_graphs",
"perozzi|deepwalk:_online_learning_of_social_representations",
"scarselli|the_graph_neural_network_model",
"schlichtkrull|modeling_relational_data_with_graph_convolutional_networks",
"sukhbaatar|learning_multiagent_communication_with_backpropagation",
"tang|line:_large-scale_information_network_embedding",
"yang|revisiting_semi-supervised_learning_with_graph_embeddings",
"buchnik|bootstrapped_graph_diffusions:_exposing_the_power_of_nonlinearity",
"dai|learning_combinatorial_optimization_algorithms_over_graphs",
"faerman|lasagne:_locality_and_structure_aware_graph_node_embedding",
"le|distributed_representations_of_sentences_and_documents"
],
"title": [
"",
"Diffusion-Convolutional Neural Networks",
"NEURAL MACHINE TRANSLATION BY JOINTLY LEARNING TO ALIGN AND TRANSLATE",
"Active Learning for Node Classification in Assortative and Disassortative Networks",
"Geometric deep learning: going beyond Euclidean data",
"Spectral Networks and Deep Locally Connected Networks on Graphs",
"Discriminative Embeddings of Latent Variable Models for Structured Data",
"Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering",
"Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"Neural Message Passing for Quantum Chemistry",
"Neural Turing Machines",
"node2vec: Scalable Feature Learning for Networks",
"Inductive Representation Learning on Large Graphs",
"SEMI-SUPERVISED CLASSIFICATION WITH GRAPH CONVOLUTIONAL NETWORKS",
"GATED GRAPH SEQUENCE NEURAL NETWORKS",
"Geometric deep learning on graphs and manifolds using mixture model CNNs",
"LAZY RANDOM WALKS AND OPTIMAL TRANSPORT ON GRAPHS",
"DeepWalk: Online Learning of Social Representations",
"The graph neural network model The graph neural network model",
"Graph Convolutional Matrix Completion Rianne van den Berg",
"Learning Multiagent Communication with Backpropagation",
"LINE: Large-scale Information Network Embedding",
"Revisiting Semi-Supervised Learning with Graph Embeddings",
"Bootstrapped Graph Di usions: Exposing the Power of Nonlinearity",
"Adaptive Submodularity: Theory and Applications in Active Learning and Stochastic Optimization",
"LASAGNE: Locality And Structure Aware Graph Node Embedding",
"Distributed Representations of Sentences and Documents"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"martín abadi",
"ashish agarwal",
"paul barham",
"eugene brevdo",
"zhifeng chen",
"craig citro",
"greg s corrado",
"andy davis",
"jeffrey dean",
"matthieu devin",
"sanjay ghemawat",
"ian goodfellow",
"andrew harp",
"geoffrey irving",
"michael isard",
"yangqing jia",
"rafal jozefowicz",
"lukasz kaiser",
"manjunath kudlur",
"josh levenberg",
"dan mané",
"rajat monga",
"sherry moore",
"derek murray",
"chris olah",
"mike schuster",
"jonathon shlens",
"benoit steiner",
"ilya sutskever",
"kunal talwar",
"paul tucker",
"vincent vanhoucke",
"vijay vasudevan",
"fernanda viégas",
"oriol vinyals",
"pete warden",
"martin wattenberg",
"martin wicke",
"yuan yu",
"xiaoqiang zheng",
"google research"
],
"affiliation": [
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
}
]
},
{
"name": [
"james atwood",
"don towsley"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Massachusetts Amherst",
"location": "{'postCode': '01003', 'region': 'MA'}"
},
{
"laboratory": "",
"institution": "University of Massachusetts Amherst",
"location": "{'postCode': '01003', 'region': 'MA'}"
}
]
},
{
"name": [
"dzmitry bahdanau",
"kyunghyun cho",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"cristopher moore",
"xiaoran yan",
"yaojia zhu",
"jean-baptiste rouquier",
"terran lane"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"michael m bronstein",
"joan bruna",
"yann lecun",
"arthur szlam",
"pierre vandergheynst"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"joan bruna",
"wojciech zaremba",
"arthur szlam",
"yann lecun"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "The City College of New York",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
}
]
},
{
"name": [
"hanjun dai",
"bo dai",
"le song"
],
"affiliation": [
{
"laboratory": "",
"institution": "Georgia Institute of Technology",
"location": "{}"
},
{
"laboratory": "",
"institution": "Georgia Institute of Technology",
"location": "{}"
},
{
"laboratory": "",
"institution": "Georgia Institute of Technology",
"location": "{}"
}
]
},
{
"name": [
"michaël defferrard",
"xavier bresson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david duvenaud",
"dougal maclaurin",
"jorge aguilera-iparraguirre",
"rafael gómez-bombarelli",
"timothy hirzel",
"alán aspuru-guzik",
"ryan p adams"
],
"affiliation": [
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
}
]
},
{
"name": [
"justin gilmer",
"samuel s schoenholz",
"patrick f riley",
"oriol vinyals",
"george e dahl"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex graves",
"greg wayne",
"google deepmind",
" london"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"aditya grover",
"jure leskovec"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"william l hamilton",
"rex ying",
"jure leskovec"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"thomas n kipf",
"max welling"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Amsterdam",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Amsterdam Canadian Institute for Advanced Research (CIFAR)",
"location": "{}"
}
]
},
{
"name": [
"yujia li",
"richard zemel",
"marc brockschmidt",
"daniel tarlow"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"federico monti",
"davide boscaini",
"jonathan masci",
"emanuele rodolà",
"jan svoboda",
"michael m bronstein",
"usi lugano"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Tel Aviv University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"christian léonard"
],
"affiliation": [
{
"laboratory": "",
"institution": "Université Paris Ouest. Bât. G",
"location": "{'addrLine': '200 av. de la République', 'postCode': '92001', 'settlement': 'Nanterre', 'country': 'France'}"
}
]
},
{
"name": [
"bryan perozzi",
"steven skiena"
],
"affiliation": [
{
"laboratory": "",
"institution": "Stony Brook University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stony Brook University",
"location": "{}"
}
]
},
{
"name": [
"franco ; scarselli",
"marco ; gori",
"markus hagenbuchner",
"gabriele monfardini",
"ah chung tsoi",
"; chung"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"thomas n kipf",
"max welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sainbayar sukhbaatar",
"arthur szlam",
"rob fergus"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jian tang",
" meng qu",
"mingzhe wang",
"ming zhang",
"jun yan",
"qiaozhu mei"
],
"affiliation": [
{
"laboratory": "",
"institution": "Microsoft Research Asia",
"location": "{}"
},
{
"laboratory": "",
"institution": "Peking University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Peking University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Peking University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research Asia",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{}"
}
]
},
{
"name": [
"zhilin yang",
"william w cohen",
"ruslan salakhutdinov"
],
"affiliation": [
{
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": "{}"
}
]
},
{
"name": [
"eliav buchnik",
"edith cohen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"daniel golovin",
"andreas krause",
"a s maximization",
"a s min sum cover"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"evgeniy faerman",
"felix borutta",
"kimon fountoulakis",
"michael w mahoney"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of California at Berkeley",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of California at Berkeley",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of California at Berkeley",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of California at Berkeley",
"location": "{}"
}
]
},
{
"name": [
"quoc le",
"tomas mikolov"
],
"affiliation": [
{
"laboratory": "",
"institution": "Google Inc",
"location": "{'addrLine': '1600 Amphitheatre Parkway', 'postCode': '94043', 'settlement': 'Mountain View', 'region': 'CA'}"
},
{
"laboratory": "",
"institution": "Google Inc",
"location": "{'addrLine': '1600 Amphitheatre Parkway', 'postCode': '94043', 'settlement': 'Mountain View', 'region': 'CA'}"
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.592593 | 0.5 | null | null | null | null | null | rJg4YGWRb |
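As a companion to the record above, which argues that removing the intermediate non-linearities from a GCN-style model (the linear GLN baseline) still performs close to the state of the art, here is a hedged sketch of such a linear model: renormalized-adjacency propagation applied for a fixed number of hops, followed by a single linear map and a softmax over classes. The renormalization Â = D^{-1/2}(A + I)D^{-1/2}, the two-hop default, and all names are assumptions made for illustration, not the paper's exact architecture.

```python
import numpy as np

def renormalized_adjacency(A):
    """Compute A_hat = D^{-1/2} (A + I) D^{-1/2}, a GCN-style propagation matrix."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def linear_graph_model(X, A, W, hops=2):
    """Class probabilities from A_hat^hops X W: propagation with no intermediate non-linearity."""
    A_hat = renormalized_adjacency(A)
    H = X
    for _ in range(hops):
        H = A_hat @ H          # each hop averages (denoises) features over the neighborhood
    logits = H @ W             # the single learned dense map
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)   # row-wise softmax over classes

# Toy usage: 5 nodes, 4 input features, 3 classes.
rng = np.random.default_rng(0)
A = (rng.random((5, 5)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T                 # symmetric adjacency, self-loops added above
X = rng.normal(size=(5, 4))
W = rng.normal(size=(4, 3))
print(linear_graph_model(X, A, W))
```

The only trainable parameters here are in W, which is consistent with the record's argument that dropping intermediate fully-connected layers sharply reduces parameter count in the semi-supervised setting.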
||
zhang|fewshot_learning_with_simplex|ICLR_cc_2018_Conference | Few-Shot Learning with Simplex | Deep learning has made remarkable achievement in many fields. However, learning the parameters of neural networks usually demands a large amount of labeled data. The algorithms of deep learning, therefore, encounter difficulties when applied to supervised learning where only little data are available. This specific task is called few-shot learning. To address it, we propose a novel algorithm for few-shot learning using discrete geometry, in the sense that the samples in a class are modeled as a reduced simplex. The volume of the simplex is used for the measurement of class scatter. During testing, combined with the test sample and the points in the class, a new simplex is formed. Then the similarity between the test sample and the class can be quantized with the ratio of volumes of the new simplex to the original class simplex. Moreover, we present an approach to constructing simplices using local regions of feature maps yielded by convolutional neural networks. Experiments on Omniglot and miniImageNet verify the effectiveness of our simplex algorithm on few-shot learning. | {
"name": [],
"affiliation": []
} | A simplex-based geometric method is proposed to cope with few-shot learning problems. | [
"One-shot learning",
"few-shot learning",
"deep learning",
"simplex"
] | null | 2018-02-15 22:29:21 | 27 | null | null | null | null | null | null | null | null | false | Reviewers largely acknowledge the novelty of the paper in proposing the use of simplex volume for measuring class scatter in few-shot learning. However there are concerns on missing comparisons with relevant baseline methods, both earlier published work as well as other simpler variants without using the volume (k-NN etc). | {
"review_id": [
"ryht5AYlM",
"r1_gtOolG",
"H1_Nsp3gz"
],
"review": [
{
"title": "title: Interesting approach but missing baselines and state of the art claims wrong",
"paper_summary": null,
"main_review": "main_review: This paper proposes a geometric based approach to solving the problem of one-shot and few-shot learning. The basic idea is to use the feature vectors of a particular class to construct a simplex. (I am assuming the dimensions of the vectors are selected so as to exactly construct a simplex? It is not clearly written in the paper). The volume of the simplex is then taken to be a measure of class scatter, and classification happens by assigning the test feature vector to the nearest simplex, where the distances are normalized by the volume of the simplex. \n\nWhile the approach makes sense, I am not convinced that this geometric method plays an important role in increasing the performance on one-shot/few-shot tasks. In particular, one could try simpler approaches like k-NN where the distances to the cluster centers are also normalized by the variance within the clusters. I would suspect that this method is not superior to this simpler baseline. \n\nThe other issue I have with this paper is misleading claims about being state of the art on Omniglot. In particular see Kaiser et al (ICLR 2017), where on 5-way-1-shot an accuracy of 98.4% is reached compared to 94.6% in this paper, and on 5-way-5-shot an accuracy of 99.6% is reached compared to 99.1% in this work. The paper also misses evaluations on various other data sets such as GNMT etc., on which Kaiser et al evaluated their approach.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting technique but more complicated and lower performance than recent approaches",
"paper_summary": null,
"main_review": "main_review: This work proposes a method for few-shot classification that treats a set of embedded points belonging to a class as a simplex. Classification for an unlabeled test point is performed by selecting the class whose augmented simplex has the smallest volume relative to the original class simplex.\n\nStrengths\n- The use of simplices for representing classes in few-shot learning is novel.\n\nWeaknesses\n- A number of recent related few-shot learning approaches are missing from the related work.\n- In light of missing baselines, the proposed method does not perform better than recent few-shot approaches.\n\nI am not an expert on simplices, but the derivations in the paper appear to be correct, with two exceptions: (a) equation 6 appears to be the ratio of C^2(Y U t) / C(Y), (b) equation 6 appears to be missing a minus sign.\n\nThe writing of the paper is relatively clear, however there are several important issues:\n- Background on metric learning is missing.\n- The training loss is not described (i.e. how is the volume ratio from equation 7 converted into a probability distribution over classes?).\n- The local feature representation in Section 3 is unclear and should be explained in more detail.\n- Related work is missing recent few-shot learning approaches, including MAML [1], Prototypical Networks [2], and TCML [3].\n\nThe proposed method is a metric learning approach but it has some additional restrictions relative to other such techniques for few-shot learning such as Matching Networks or Prototypical Networks. One is that the computation of volume ratio involves matrix inversion of P and Q. Another is that the method is not defined when the number of points exceeds the dimensionality of the embedding space. These are not likely to be an issue for few-shot learning, but should be noted as interest in methods that scale gracefully from the few-shot to ordinary classification increases.\n\nRegarding Omniglot results, 20-way 1-shot/5-shot experiments are widely reported in related work but missing from the paper.\n\nWhen the results are viewed in light of missing baselines, such as Prototypical Networks (68.2% on 5-way 5-shot miniImagenet), the proposed method is more complicated and performs significantly worse.\n\nOverall, the proposed approach is interesting but there are significant issues with both background/related work and performance relative to missing baselines.\n\n[1] Finn, Chelsea, Pieter Abbeel, and Sergey Levine. \"Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.\" ICML 2017.\n[2] Snell, Jake, Kevin Swersky, and Richard S. Zemel. \"Prototypical Networks for Few-shot Learning.\" NIPS 2017.\n[3] Mishra, Nikhil, et al. \"Meta-Learning with Temporal Convolutions.\" arXiv preprint arXiv:1707.03141 (2017).\n\nEDIT: I have read the author's response. The background and related work issues are largely fixed in the latest revision of the paper. Thanks also to the authors for clarifying that training proceeds according to the minimization of cross-entropy loss, rather than a loss based on the simplex. In this case, the novelty of the proposed method then lies in the test-time procedure for making a classification decision when a few-shot episode is encountered. Thus the novelty is relatively low in my opinion. 
From an experimental perspective, I believe that a comparison of the proposed approach to other test-time classification decision rules is warranted to demonstrate that the simplex rule is better than simpler alternatives (for example, fitting a Gaussian distribution to the support examples of each few-shot class and then assigning a test example to the class with highest posterior probability). My rating remains unchanged.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: No Title",
"paper_summary": null,
"main_review": "main_review: This paper proposes an approach for few-shot classification based on a geometric idea. The basic assumption is that a query instance will be closest to the polytope corresponding to the correct class than to other classes, where they consider polytopes formed by selecting samples from each class as vertices. As a distance metric, authors consider the variation of the volume of each class-polytope when a query instance is added to the corresponding class. Given that there is not a method to calculate the volume of a general convex polytope, they approximate the polytope by the corresponding simplex convex (convex polytope with the condition of n = d). Fortunately, in the case of the simplex, there is close form solution to obtain the volume.\n\nIn general, the paper is well presented and, as far as I know, the proposed idea is novel and sound. Experimentation is correct. Results indicate that the proposed method is able to outperform related state-of-the-art techniques, achieving a reasonable improvement, approx. 1-3% depending of the dataset. \n\nAs a drawback, for each query instance, the method needs to estimate the distance to the simplex of each class, therefore it does not scale well with the number of classes. Authors should comment about this issue, in particular, about the computational complexity of the proposed method. Also, in the cases presented in the paper, the selection of the training instances used to calculate the simplex is straight-forward, however, in a more general case, this could be a relevant problem. It will be good to comment about this issue.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.3333333432674408,
0.5555555820465088
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Thanks for your comment",
"Thanks for your comment",
"Thanks for your comment"
],
"comment": [
"Our algorithm works well for few-shot learning case where sizes of matrices for computing determinants are generally up to 5x5. The computation complexity is actually low. However, the cubic complexity caused by determinant will significantly slow down our algorithm if there are much more samples in each class. For such case, our algorithm cannot scale well with the number of classes. But for few-shot learning, this is not an issue. \n\nActually, using the bordering method of matrix inversion, we can further expand $P$ with $Q$ in formula (7). Theoretically, we only need to compute the inverse of $Q$ one time for each class during testing. ",
"According to our experiments on miniImageNet and Omniglot, our algorithm overall outperforms Matching Network (NIPS 2016) and Meta Learner (ICLR 2017) on these two benchmarks, even though our algorithm is based on features produced by a simple CNN and performs no refinement on new classes. This indicates that the simplex idea shows the superiority compared with some of new but more complicated algorithms for few-shot learning. \n\nThe algorithm proposed by Kaiser et al (ICLR 2017) indeed outperforms our algorithm on Omniglot. We will make the claim proper in the revision.\nWe followed the experimental protocol presented by Ravi & Larochelle (ICLR 2017). So we did not perform experiments on GNMT. The two benchmarks we used are the datasets that are most frequently used by other researchers for few-shot learning. ",
"We will cite the papers you listed in the revision. They are excellent works. \n\nBut first, I want to make it clear that we did not use simplex volume as the loss to train networks. As Fig. 4 illustrates, we employed a simple 4-block CNN supervised by softmax. For new classes, we only applied the feature maps produced by the last convolution layer, as shown in Fig. 3. Namely, we did not use any re-training, fine-tuning, or other manipulations that refine the CNNs for new classes. Even though, our simplex algorithm overall outperforms Matching Network (NIPS 2016) and Meta Learner (ICLR 2017) on two benchmarks. \n\nFor Prototypical Networks (PN), our simplex idea can also be applicable according to the formulation of this elegant algorithm, e.g. replacing the distance metric in PN with our simplex metric. Our algorithm can also be combined with Meta-learning algorithms such as MAML and TCML, to improve performance, because they are developed from the different level of learning models for few-shot learning. \n\nAbout the typos, the squares are missed in equation (3) and a minus sign in equation (6). These will be revised. Thank you for pointing out these typos.\nBesides, the local feature representation in Section 3 will be presented in more detail. "
]
} | {
"paperhash": [
"cayley|a_bayesian_approach_to_unsupervised_one-shot_learning_of_object_categories",
"fei-fei|one-shot_learning_of_object_categories",
"finn|model-agnostic_meta-learning_for_fast_adaptation_of_deep_networks",
"he|delving_deep_into_rectifiers:_surpassing_human-level_performance_on_imagenet_classification",
"he|deep_residual_learning_for_image_recognition",
"kaiser|learning_to_remember_rare_events",
"koch|siamese_neural_networks_for_one-shot_image_recognition",
"krizhevsky|human-level_concept_learning_through_probabilistic_program_induction",
"lecun|deep_learning",
"lin|transfer_of_view-manifold_learning_to_similarity_perception_of_novel_objects",
"chandra|on_the_generalised_distance_in_statistics",
"mishra|meta-learning_with_temporal_convolutions",
"qiao|few-shot_image_recognition_by_predicting_parameters_from_activations",
"ravi|optimization_as_a_model_for_few-shot_learning",
"roweis|nonlinear_dimensionality_reduction_by_locally_linear_embedding",
"russakovsky|imagenet_large_scale_visual_recognition_challenge",
"santoro|oneshot_learning_with_memory-augmented_neural_networks",
"santoro|a_simple_neural_network_module_for_relational_reasoning",
"simonyan|very_deep_convolutional_networks_for_large-scale_image_recognition",
"snell|prototypical_networks_for_few-shot_learning",
"szegedy|going_deeper_with_convolutions",
"szegedy|rethinking_the_inception_architecture_for_computer_vision",
"tenenbaum|a_global_geometric_framework_for_nonlinear_dimensionality_reduction",
"tenenbaum|how_to_grow_a_mind:_statistics,_structure,_and_abstraction",
"vinyals|matching_networks_for_one_shot_learning",
"weisstein|crc_concise_encyclopedia_of_mathematics",
"wright|classification_via_minimum_incremental_coding_length_(micl)"
],
"title": [
"A Bayesian approach to unsupervised one-shot learning of object categories",
"One-shot learning of object categories",
"Model-agnostic meta-learning for fast adaptation of deep networks",
"Delving deep into rectifiers: Surpassing human-level performance on imagenet classification",
"Deep residual learning for image recognition",
"Learning to remember rare events",
"Siamese neural networks for one-shot image recognition",
"Human-level concept learning through probabilistic program induction",
"Deep learning",
"Transfer of view-manifold learning to similarity perception of novel objects",
"On the generalised distance in statistics",
"Meta-learning with temporal convolutions",
"Few-shot image recognition by predicting parameters from activations",
"Optimization as a model for few-shot learning",
"Nonlinear dimensionality reduction by locally linear embedding",
"ImageNet large scale visual recognition challenge",
"Oneshot learning with memory-augmented neural networks",
"A simple neural network module for relational reasoning",
"Very deep convolutional networks for large-scale image recognition",
"Prototypical networks for few-shot learning",
"Going deeper with convolutions",
"Rethinking the inception architecture for computer vision",
"A global geometric framework for nonlinear dimensionality reduction",
"How to grow a mind: Statistics, structure, and abstraction",
"Matching networks for one shot learning",
"CRC concise encyclopedia of mathematics",
"Classification via minimum incremental coding length (MICL)"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"arthur cayley"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"li fei-fei",
"rob fergus",
"pietro perona"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chelsea finn",
"pieter abbeel",
"sergey levine"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"lukasz kaiser",
"ofir nachum",
"aurko roy",
"samy bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"gregory koch",
"richard zemel",
"ruslan salakhutdinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex krizhevsky",
"ilya sutskever",
"geoffrey e hinton ; brenden",
"m lake",
"ruslan salakhutdinov",
"joshua b tenenbaum"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yann lecun",
"yoshua bengio",
"geoffrey hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xingyu lin",
"hao wang",
"zhihao li",
"yimeng zhang",
"alan l yuille",
"tai sing",
"lee "
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
                }
]
},
{
"name": [
"prasanta chandra",
"mahalanobis "
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
                }
]
},
{
"name": [
"nikhil mishra",
"mostafa rohaninejad",
"xi chen",
"pieter abbeel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"siyuan qiao",
"chenxi liu",
"wei shen",
"alan l yuille"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sachin ravi",
"hugo larochelle"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sam t roweis",
"lawrence k saul"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"olga russakovsky",
"jia deng",
"hao su",
"jonathan krause",
"sanjeev satheesh",
"sean ma",
"zhiheng huang",
"andrej karpathy",
"aditya khosla",
"michael bernstein",
"alexander c berg",
"li fei-fei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"adam santoro",
"sergey bartunov",
"matthew botvinick",
"daan wierstra",
"timothy p lillicrap"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"adam santoro",
"david raposo",
"g t david",
"mateusz barrett",
"razvan malinowski",
"peter pascanu",
"timothy battaglia",
" lillicrap"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
                }
]
},
{
"name": [
"karen simonyan",
"andrew zisserman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jake snell",
"kevin swersky",
"richard s zemel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"christian szegedy",
"wei liu",
"yangqing jia",
"pierre sermanet",
"scott e reed",
"dragomir anguelov",
"dumitru erhan",
"vincent vanhoucke",
"andrew rabinovich"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"christian szegedy",
"vincent vanhoucke",
"sergey ioffe",
"jonathon shlens",
"zbigniew wojna"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"joshua b tenenbaum",
"vin de silva",
"john c langford"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"joshua b tenenbaum",
"charles kemp",
"thomas l griffiths",
"noah d goodman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"oriol vinyals",
"charles blundell",
"timothy p lillicrap",
"koray kavukcuoglu",
"daan wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"eric w weisstein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"john wright",
"yangyu tao",
"zhouchen lin",
"yi ma",
"heung-yeung shum"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"1512.03385v1",
"1703.03129v1",
"",
"",
"1807.07987v2",
"",
"",
"",
"",
"",
"",
"1409.0575v3",
"",
"1706.01427v1",
"",
"",
"1409.4842v1",
"1512.00567v3",
"",
"",
"1606.04080v2",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.407407 | 0.75 | null | null | null | null | null | rJfHoM-C- |
||
rodríguez|a_painless_attention_mechanism_for_convolutional_neural_networks|ICLR_cc_2018_Conference | A Painless Attention Mechanism for Convolutional Neural Networks | We propose a novel attention mechanism to enhance Convolutional Neural Networks for fine-grained recognition. The proposed mechanism reuses CNN feature activations to find the most informative parts of the image at different depths with the help of gating mechanisms and without part annotations. Thus, it can be used to augment any layer of a CNN to extract low- and high-level local information to be more discriminative.
Differently from other approaches, the mechanism we propose needs just a single pass through the input and can be trained end-to-end with SGD. As a consequence, the proposed mechanism is modular, architecture-independent, easy to implement, and faster than iterative approaches.
Experiments show that, when augmented with our approach, Wide Residual Networks systematically achieve superior performance on each of five different fine-grained recognition datasets: the Adience age and gender recognition benchmark, Caltech-UCSD Birds-200-2011, Stanford Dogs, Stanford Cars, and UEC Food-100, obtaining competitive and state-of-the-art scores. | {
"name": [],
"affiliation": []
} | We enhance CNNs with a novel attention mechanism for fine-grained recognition. Superior performance is obtained on 5 datasets. | [
"computer vision",
"deep learning",
"convolutional neural networks",
"attention"
] | null | 2018-02-15 22:29:27 | 36 | null | null | null | null | null | null | null | null | false | This paper received borderline reviews. Initially, all reviewers raised a number of concerns (clarity, small improvements, etc). Even after some back and forth discussion, concerns remain, and it's clear that while the idea has potential, another round of reviewing is needed before a decision can be reached. This would be a major revision in a journal. Unfortunately, that is not possible in a conference setting and we must recommend rejection. We recommend the authors to use the feedback to make the manuscript stronger and submit to a future venue. | {
"review_id": [
"ry2OdYCeM",
"rkzOQxcgM",
"Sky96rolf"
],
"review": [
{
"title": "title: Review for A Painless Attention Mechanism for Convolutional Neural Networks ",
"paper_summary": null,
"main_review": "main_review: Paper presents an interesting attention mechanism for fine-grained image classification. Introduction states that the method is simple and easy to understand. However, the presentation of the method is bit harder to follow. It is not clear to me if the attention modules are applied over all pooling layers. How they are combined? \n\nWhy use cross -correlation as the regulariser? Why not much stronger constraint such as orthogonality over elements of M in equation 1? What is the impact of this regularisation?\n\nWhy use soft-max in equation 1? One may use a Sigmoid as well? Is it better to use soft-max?\n\nEquation 9 is not entirely clear to me. Undefined notations.\n\nIn Table 2, why stop from AD= 2 and AW=2? What is the performance of AD=1, AW=1 with G? Why not perform this experiment over all 5 datasets? Is this performances, dataset specific?\n\nThe method is compared against 5 datasets. Obtained results are quite good.\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A limited evaluation and a limited contribution",
"paper_summary": null,
"main_review": "main_review: The manuscript describes a novel attentional mechanism applied to fine-grained recognition. \n\nOn the positive side, the approach seems to consistently improve the recognition accuracy of the baseline (a wide residual net). The approach is also consistently tested on the main fine-grained recognition datasets (the Adience age and gender recognition benchmark, Caltech-UCSD Birds-200-2011, Stanford Dogs, Stanford Cars, and UEC Food-100).\n\nOn the negative side, the paper could be better written and motivated.\n\nFirst, some claimed are made about how the proposed approach \"enhances most of the desirable properties from previous approaches” (see pp 1-2) but these claims are never backed up. More generally since the paper focuses on attention, other attentional approaches should be used as benchmarks beyond the WRN baseline. If the authors want to claim that the proposed approach is \"more robust to deformation and clutter” then they should design an experiment that shows that this is the case. \n\nBeyond, the approach seems a little ad hoc. No real rationale is provided for the different mechanisms including the gating etc and certainly no experimental validation is provided to demonstrate the need for these mechanisms. More generally, it is not clear from reading the paper specifically what computational limitation of the CNN is being solved by the proposed attentional mechanism. \n\nSome of the masks shown in Fig 3 seem rather suspicious and prompt this referee to think that the networks are seriously overfitting to the data. For instance, why would attending to a right ear help in gender recognition? \n\nThe proposed extension adds several hyperparameters (for instance the number K of attention heads). Apologies if I missed it but I am not clear how this was optimized for the experiments reported. In general, the paper could be clearer. For instance, it is not clear from either the text or Fig 2 how H goes from XxYxK for the attention head o XxYxN for the output head.\n\nAs a final point, I would say that while some of the criticisms could be addressed in a revision, the improvements seem relatively modest. Given that the focus of the paper is already limited to fine-grained recognition, it seems that the paper would be better suited for a computer vision conference.\n\n\nMinor point: \n\n\"we incorporate the advantages of visual and biological attention mechanisms” not sure this statement makes much sense. Seems like visual and biological are distinct attributes but visual attention can be biological (or not, I guess) and it is not clear how biological the proposed approach is. Certainly no attempt is made by the authors to connect to biology.\n\n\"top-down feed-forward attention mechanism” -> it should be just feed-forward attention. Not clear what \"top-down feed-forward” attention could be...",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Improvement gain is small",
"paper_summary": null,
"main_review": "main_review: This paper proposes a feed-forward attention mechanism for fine-grained image classification. It is modular and can be added to any convolutional layer, the attention model uses CNN feature activations to find the most informative parts then combine with the original feature map for the final prediction. Experiments show that wide residual net together with this new attention mechanism achieve slightly better performance on several fine-grained image classification tasks.\n\nStrength of this work:\n1) It is end-to-end trainable and doesn't require multiple stages, prediction can be done in single feedforward pass.\n2) Easy to train and doesn't increase the model size a lot.\n\nWeakness:\n1) Both attention depth and attention width are small. The choice of which layer to add this module is unclear to me. \n2) No analysis on using the extra regularization loss actually helps.\n3) My main concern is the improvement gain is very small. In Table3, the gain of using the gate module is only 0.1%. It argues that this attention module can be added to any layer but experiments show only 1 layer and 1 attention map already achieve most of the improvement. From Table 4 to Table 7, WRNA compared to WRN only improve ~1% on average. \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.4444444477558136,
0.4444444477558136
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Re: AnonReviewer1",
"Re: AnonReviewer2",
"Article Update",
"Re: Relevant paper with spatial regularization and attention",
"Re: AnonReviewer3",
"Re: AnonReviewer3",
"Re: AnonReviewer3",
"Re: Thanks but not enough...",
"Thanks but not enough...",
"Paper v2.0"
],
"comment": [
"Thank you for your comments,\n\n>> However, the presentation of the method is bit harder to follow. It is not clear to me if the attention modules are applied over all pooling layers.\n\nAny layer of the network can be augmented with the attention mechanism. We chose to use the augmentation after each pooling layer in order to reduce even further the computational cost. We have clarified this point at the end of section 3.3, when Table 2 is introduced, and in the second paragraph of section 4.\n\n>> How they are combined? \n\nAs it can be seen in Fig 1, 2a, and 2b, a 1x1 convolution is applied to the output of the layer we want to augment, producing an attentional heatmap. This heatmap is then element-wise multiplied with a copy of the layer output, and the result is used to predict the class probabilities and a confidence score. This process is applied to an arbitrary number N of layers, producing N class probability vectors, and N confidence scores. Then all the class predictions are weighted by the confidence scores (softmax normalized so that they add-up to 1) and averaged (using Eq 9). This is the final combined prediction of the network. This overall explanation is now placed in the “Overview” section before section 3.1.\n\n>> Why use cross -correlation as the regulariser? Why not much stronger constraint such as orthogonality over elements of M in equation 1? \n\nPlease note that the 2-norm operation requires to square all the elements of the matrix, thus the minimum norm is achieved when the inner product of all the different pairs of masks is 0 (orthogonal). Thus, orthogonality is constrained by regularizing the 2-norm of a matrix. This is now clarified after Eq 3.\n\n>> What is the impact of this regularisation?\n\nIn order to address questions R1.1, etc.. we have added experiments on deformable mnist, showing the importance of each module. In figure 4d it can be seen that the regularized model performs better than the unregularized counterpart.\n\n>> Why use soft-max in equation 1? One may use a Sigmoid as well? Is it better to use soft-max?\n\nWe use softmax because it constrains the network to choose only one region in the image, thus forcing it to learn which is the most discriminative region. Using sigmoids attains the risk of just learning to predict 1s for every region, or all zeros. Note that multiple regions can still be identified by using multiple attention heads. This explanation has been included in section 3.1.\n\n>> Equation 9 is not entirely clear to me. Undefined notations.\n\n“output” is the predicted vector of class probabilities, “g_net” is the confidence score for the original output of the network “output_net” (without attention). This information has been appended after equation 9.\n\n>> In Table 2, why stop from AD= 2 and AW=2? What is the performance of AD=1, AW=1 with G? Why not perform this experiment over all 5 datasets? \n\nWe had to constrain the number of experiments to a limited amount of time and resources, which makes it difficult to brute-force all hyperparameter combinations with all datasets. We hope that this question is now clarified with the experiments on deformable-mnist (Section 4.1 in the new version of the paper).\n\n>> Is this performances, dataset specific?\n\nNo, generally increasing AD and AW results in better performance in all datasets.\n",
"Thanks for the feedback,\n\n>> 1) Both attention depth and attention width are small. \n\nAlthough higher AD and AW do result in an increment of accuracy, we considered that 2 was enough to demonstrate that the proposed mechanism enhances the baseline models at negligible computational cost. In order to address this concern, we have included experiments on deformable mnist where it can be seen that the performance increases with higher AW, and AD (Figure 4b and 4c in the new version of the paper). \n\n>> The choice of which layer to add this module is unclear to me. \n\nPlease note that the same placing problem is present in most of the well-known CNN layers such as Dropout, Local-contrast normalization, Spatial Transformers, etc. \nHowever, as we answer to R1’s first question, we have established a systematic methodology which consists in adding the attention mechanism after each subsampling layer of the WRN in order to obtain features of different levels at the smallest possible computational cost. This information is now included at the end of section 3.3, when Table 2 is introduced, and in the second paragraph of section 4. \n\n>> 2) No analysis on using the extra regularization loss actually helps.\n\nThe analysis has been included in section 4.1, where experiments with Cluttered Translated MNIST show that regularization adds an extra performance increment.\n\n>> 3) My main concern is the improvement gain is very small. In Table3, the gain of using the gate module is only 0.1%. It argues that this attention module can be added to any layer but experiments show only 1 layer and 1 attention map already achieve most of the improvement. \n\nWe hope that the new experiments on Cluttered Translated MNIST in section 4.1 help to clarify this point. Also, as it can be seen in Figure 4d, gates are crucial when AD and AW grow. \n\n>> From Table 4 to Table 7, WRNA compared to WRN only improve ~1% on average. \n\nPlease note that 1% is a remarkable amount given that, for instance, other relevant papers such as Spatial Transformer Networks only reported an improvement of 0.8% on CUB200-2011 with respect to their own baseline (see table 3 in their work). Moreover, in the case of residual attention networks [B], the reported improvement on Cifar100 is 0.05%.\n\n[A] Jaderberg, M., Simonyan, K., & Zisserman, A. (2015). Spatial transformer networks. In Advances in Neural Information Processing Systems (pp. 2017-2025).\n[B] Wang, F., Jiang, M., Qian, C., Yang, S., Li, C., Zhang, H., ... & Tang, X. (2017). Residual Attention Network for Image Classification. CVPR2017.\n",
"A new revised version of this paper has been published at ECCV2018.\n\nRodríguez, P., Gonfaus, J. M., Cucurull, G., Roca, F. X., & Gonzàlez, J. (2018, September). Attend and Rectify: A Gated Attention Mechanism for Fine-Grained Recovery. In European Conference on Computer Vision (pp. 357-372). Springer, Cham.\n\nhttp://openaccess.thecvf.com/content_ECCV_2018/html/Pau_Rodriguez_Lopez_Attend_and_Rectify_ECCV_2018_paper.html\n\n",
"Thank you for the interest in our paper. We have included this explanation in the “Related Work” section, as a specialized solution for multilabel classification, where instead of learning universal modules, a ResNet is modified to improve its multilabel classification by enhancing the predictions with the learned most relevant regions.\n\nDifferently from this ICLR, in [1], instead of designing a general mechanism like the one proposed in our submission, the authors design an specialized attention mechanism for multilabel classification and test it on MSCOCO, NUS-WIDE, and WIDER. Namely, they use the features in “res4b22 relu” in order to extract attention scores for each label through three convolutional layers. To avoid attending to labels not present for the input being processed, these attention maps are multiplied by “confidence maps”, which are learned to be 1 if the label is present, and 0 if not. The attentional predictions are average with the network predictions. Differently, we want to incorporate fine detail at different levels of abstraction to the final prediction, thus, we propose a general “Attention Module”, that can be applied at many levels to any network, to enhance the final prediction weighted by the relevance of each prediction (for instance, details in the texture might help do distinguish between two birds which are similar at abstract level).\n\nChanges will be visible in the final version of the paper.",
">> For instance, why would attending to a right ear help in gender recognition? \n\nIn the Adience dataset, most women wear earrings, so the network might have learned to look at ears whenever possible.\n\n>> The proposed extension adds several hyperparameters (for instance the number K of attention heads). Apologies if I missed it but I am not clear how this was optimized for the experiments reported. In general, the paper could be clearer. For instance, it is not clear from either the text or Fig 2 how H goes from XxYxK for the attention head o XxYxN for the output head.\n\nIn Figure 2a, Z (of size XxYxN) is convolved with K XxYx1 masks. When these are multiplied by Z again, we obtain K XxYxN feature maps (broadcasting). Figure 2b depicts the output process for 1 of the K XxYxN feature maps. We have updated Figure2 to include all this information.\n\n>> As a final point, I would say that while some of the criticisms could be addressed in a revision, the improvements seem relatively modest. \n\nGiven that we use a strong baseline such a WRN, we do not think that the improvements are modest. Please note that other relevant papers such as Spatial Transformer Networks [A] only reported an improvement of 0.8% on CUB200-2011 with respect to their own baseline (see table 3 in their work), and residual attention networks report 0.05% improvement on Cifar100 [B]. Most importantly, the improvement is consistently obtained across datasets, and in 3 datatsets we outperform current state of the art.\n\n[A] Jaderberg, M., Simonyan, K., & Zisserman, A. (2015). Spatial transformer networks. In Advances in Neural Information Processing Systems (pp. 2017-2025).\n[B] Wang, F., Jiang, M., Qian, C., Yang, S., Li, C., Zhang, H., ... & Tang, X. (2017). Residual Attention Network for Image Classification. CVPR2017.\n\n>> Given that the focus of the paper is already limited to fine-grained recognition, it seems that the paper would be better suited for a computer vision conference.\n\nThis work helps to build better feature representations applied to Computer Vision, which it is clearly inside the scope of this conference, from the website (http://www.iclr.cc/): “The performance of machine learning methods is heavily dependent on the choice of data representation (or features) on which they are applied…. Applications in vision, audio, speech, natural language processing, robotics, neuroscience, or any other field...”\n\n>>Minor point: \"we incorporate the advantages of visual and biological attention mechanisms” not sure this statement makes much sense. Seems like visual and biological are distinct attributes but visual attention can be biological (or not, I guess) and it is not clear how biological the proposed approach is. Certainly no attempt is made by the authors to connect to biology.\n\nThe way the proposed mechanism relates to biological attention is similar to the relationship between artificial and real neural networks, this is similarly done in [A]. Thus we have corrected the statement for “we incorporate the advantages inspired by visual and biological attention mechanisms, as stated in [A]”\n\n[A] Ba, J., Mnih, V., & Kavukcuoglu, K. (2014). Multiple object recognition with visual attention. ICLR2015.\n\n>> \"top-down feed-forward attention mechanism” -> it should be just feed-forward attention. Not clear what \"top-down feed-forward” attention could be…\n\nIn the literature, bottom-up attention is referred to the process of finding the most relevant regions of the image at the feature level, i.e. 
regions that are salient from their surroundings, while top-down attention refers to a high level process which finds the most relevant part of an input taking into account global information [A] (in a CNN top-down usually means to choose the regions to attend at the output instead of directly doing it at the feature level, which is the case for [B,C].)\n\n[A] Connor, Charles E., Howard E. Egeth, and Steven Yantis. \"Visual attention: bottom-up versus top-down.\" Current Biology14.19 (2004): R850-R852.\n[B] Oliva, A., Torralba, A., Castelhano, M. S., & Henderson, J. M. (2003, September). Top-down control of visual attention in object detection. In Image processing, 2003. icip 2003. proceedings. 2003 international conference on (Vol. 1, pp. I-253). IEEE.\n[C] Rodríguez, P., Cucurull, G., Gonfaus, J. M., Roca, F. X., & Gonzalez, J. (2017). Age and gender recognition in the wild with deep attention. Pattern Recognition, 72, 563-571.\n",
">> If the authors want to claim that the proposed approach is \"more robust to deformation and clutter” then they should design an experiment that shows that this is the case. \n\nIn the new introduced experiments on Cluttered Translated MNIST (Section 4.1 in the new version of the paper), we confirm that indeed the proposed method is more robust than the baseline.\n\n>> Beyond, the approach seems a little ad hoc. No real rationale is provided for the different mechanisms including the gating etc and certainly no experimental validation is provided to demonstrate the need for these mechanisms. \n\nIn section 3, the rationale for the different mechanisms is mentioned in each of the subsections. For instance, gates regulate the relative importance of the predictions of each attention head. This is important when AW is high and the current input has a few informative regions. In this case, just one attention head would be enough and thus, heads focusing in other regions can be dampened by the gates. This explanation is now added in section 3.4 and the conclusion.\nIn addition, we have included experiments on cluttered MNIST showing that gates are critical to obtaining good performances with high AW (see Figure 4d in the new version of the paper).\n\n>> More generally, it is not clear from reading the paper specifically what computational limitation of the CNN is being solved by the proposed attentional mechanism. \n\nThe proposed paper addresses the same problem as other attentional methods in the literature [A,B,C, ...], i.e. it enhances the model to find the most informative parts of the image and to discard irrelevant information. This is especially relevant for fine-grained recognition, where some details are more informative than other salient features of the image.\n\n[A] Ba, J., Mnih, V., & Kavukcuoglu, K. (2014). Multiple object recognition with visual attention. ICLR2015.\n[B] Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., ... & Bengio, Y. (2015, June). Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning (pp. 2048-2057).\n[C] Jaderberg, M., Simonyan, K., & Zisserman, A. (2015). Spatial transformer networks. In Advances in Neural Information Processing Systems (pp. 2017-2025).\n\n>> Some of the masks shown in Fig 3 seem rather suspicious and prompt this referee to think that the networks are seriously overfitting to the data.\n\nThe train loss vs validation loss difference does not suggest that the proposed model suffers from greater overfitting than the original architecture. Moreover, inspired by this comment, we have designed a test on cluttered MNIST showing that the attention augmented model generalizes better on the test set when increasing the number of distractors (unseen during training), see Figure 4e in the new version of the paper. We hypothesize that attention prevents the model from memorizing uninformative parts of the image, which could be associated with noise. Section 4.1 and the conclusion now reflect this new finding. \n",
"We thank the reviewer for the feedback,\n\n>> First, some claimed are made about how the proposed approach \"enhances most of the desirable properties from previous approaches” (see pp 1-2) but these claims are never backed up. \n\nWith this sentence we tried to convey that the proposed model accumulates the best of the following properties in the literature: (i) it works in a single pass because it uses a single feed-forward CNN, differently from recurrent and two-step models, (ii) it is trained with SGD instead of RL, thus it presents faster convergence and it does not require sampling, (iii) it can be used to augment any architecture, as we show for WRNs, and (iv) it is simple to implement (instead of creating a whole new network architecture, we just add the attention heads and the attention outputs to an already existing one, eq 9). In order to better back up these properties, we have added a table in the introduction (Table 2) comparing the different architectures in the literature with ours, showing that ours accumulates the best of them.\n\n>> More generally since the paper focuses on attention, other attentional approaches should be used as benchmarks beyond the WRN baseline. \n\nPlease note that we include other attentional approaches for fine-grained recognition in all tables: \n * Table 3 -> FAM [A]; \n * Table 4 -> RA-CNN [B], STN [C], B-CNN [D], PD [E], FCAN [F]; \n * Table 5 -> DVAN [G], FCAN [F], B-CNN [C], RA-CNN [B]; \n * Table 6 -> DVAN [G], FCAN [F], RA-CNN [B]. \n\nMoreover, most of these approaches propose singular architectures that have been especially engineered for solving their respective recognition tasks, while the purpose of our approach is to demonstrate that our proposed mechanism works on general purpose architectures.\n\n[A] Rodríguez, P., Cucurull, G., Gonfaus, J. M., Roca, F. X., & Gonzalez, J. (2017). Age and gender recognition in the wild with deep attention. Pattern Recognition, 72, 563-571.\n[B] Fu, J., Zheng, H., & Mei, T. (2017, July). Look closer to see better: recurrent attention convolutional neural network for fine-grained image recognition. In Conf. on Computer Vision and Pattern Recognition.\n[C] Jaderberg, M., Simonyan, K., & Zisserman, A. (2015). Spatial transformer networks. In Advances in Neural Information Processing Systems (pp. 2017-2025).\n[D] Lin, T. Y., RoyChowdhury, A., & Maji, S. (2015). Bilinear cnn models for fine-grained visual recognition. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1449-1457).\n[E] Zhang, N., Donahue, J., Girshick, R., & Darrell, T. (2014, September). Part-based R-CNNs for fine-grained category detection. In European conference on computer vision (pp. 834-849). Springer, Cham.\n[F] Liu, X., Xia, T., Wang, J., & Lin, Y. (2016). Fully convolutional attention localization networks: Efficient attention localization for fine-grained recognition. arXiv preprint arXiv:1603.06765.\n[G] Zhao, B., Wu, X., Feng, J., Peng, Q., & Yan, S. (2016). Diversified visual attention networks for fine-grained object classification. arXiv preprint arXiv:1606.08572.",
"Thank you for the thorough review. We think the comments help to keep a high standards on this conference, and our paper has greately improved the quality thanks to them.\n\n>> Unfortunately, I do not think this rebuttal addresses my main complaint. I understand that the benchmarks include systems that use attentional mechanisms. My main issue is that the paper is about attention but different attentional mechanisms are never compared on a level play-field (i.e., using the same architectures, optimizers, etc etc). There is no way from the benchmarks to properly assess how much of the improvement is actually driven by the proposed attentional mechanism as opposed to anything else. \n\nWe understand the concern, this is exactly why we worked hard during the review period to find the time to include a comparison between STNs and the proposed attention mechanism under the same exact settings (same base architecture, learning algorithm, hyperparameters, training steps, etc.) showing that ours generalizes much better. Moreover, through all the manuscript, we emphasize that our approach is simpler and faster than other competing approaches.\n\n>> This is all the more problematic given how small the improvements are.\n\nIn the second point of the responses to AnonReviewer3 (3/3) (https://openreview.net/forum?id=rJe7FW-Cb¬eId=BJZ7a4ING¬eId=HkabPkOQM), we explain that the improvement is not so modest given the current context. In our case, improvement is comparable to that found in STN, for example.\n\n>> There is no way from the benchmarks to properly assess how much of the improvement is actually driven by the proposed attentional mechanism as opposed to anything else. \n\nWe think it is clear that plugging the proposed mechanism into a state-of-the-art CNN results in an improvement. In fact, in every table we show how the augmented models are always better.\n\n>> I would also add that with all that said the proposed mechanism remains relatively incremental with respect to related work (work properly cited) and that it seems to be better suited for a more specialized conference. \n\nWe still think our work helps indeed to build better representations, and it could be of inspiration for future work in any other field.",
"Unfortunately, I do not think this rebuttal addresses my main complaint. I understand that the benchmarks include systems that use attentional mechanisms. My main issue is that the paper is about attention but different attentional mechanisms are never compared on a level play-field (i.e., using the same architectures, optimizers, etc etc). There is no way from the benchmarks to properly assess how much of the improvement is actually driven by the proposed attentional mechanism as opposed to anything else. This is all the more problematic given how small the improvements are. I would also add that with all that said the proposed mechanism remains relatively incremental with respect to related work (work properly cited) and that it seems to be better suited for a more specialized conference. ",
"We thank all the reviewers for their highly valuable feedback. We have addressed all comments one by one, and we have accordingly updated the manuscript. Changes appear in blue.\nList of changes:\n * A new table (Table 2) in the introduction has been added to clarify the advantages of our proposal with respect to the literature.\n * An overview section has been added to section 3 (Section 3.1) to summarize and clarify how the different submodules fit together.\n * Undefined notations have been clarified in equation 9.\n * Ablation experiments on cluttered translated MNIST have been introduced in section 4.1.\n * Textual clarifications addressing comments from the reviewers.\n\nThanks to these improvements resulting from the review process, the manuscript has substantially clarified the contribution of our work, has improved the technical quality and has also enhanced the experimental quality. We trust these improvements make it even more appealing for publication at this conference.\n"
]
} | {
"paperhash": [
"robert|cognitive_psychology_and_its_implications",
"chen|deep-based_ingredient_recognition_for_cooking_recipe_retrieval",
"desimone|neural_mechanisms_of_selective_visual_attention",
"eidinger|age_and_gender_estimation_of_unfiltered_faces",
"fu|look_closer_to_see_better:_recurrent_attention_convolutional_neural_network_for_fine-grained_image_recognition",
"hassannejad|food_image_recognition_using_very_deep_convolutional_networks",
"he|deep_residual_learning_for_image_recognition",
"hochreiter|long_short-term_memory",
"jaderberg|spatial_transformer_networks",
"khosla|novel_dataset_for_fine-grained_image_categorization:_stanford_dogs",
"krause|3d_object_representations_for_fine-grained_categorization",
"lecun|gradient-based_learning_applied_to_document_recognition",
"levi|age_and_gender_classification_using_convolutional_neural_networks",
"lin|bilinear_cnn_models_for_fine-grained_visual_recognition",
"lin|bilinear_convolutional_neural_networks_for_fine-grained_visual_recognition",
"liu|fully_convolutional_attention_localization_networks:_efficient_attention_localization_for_fine-grained_recognition",
"matsuda|recognition_of_multiple-food_images_by_detecting_candidate_regions",
"mnih|recurrent_models_of_visual_attention",
"ozbulak|how_transferable_are_cnn-based_features_for_age_and_gender_classification?",
"rodríguez|age_and_gender_recognition_in_the_wild_with_deep_attention",
"rothe|deep_expectation_of_real_and_apparent_age_from_a_single_image_without_facial_landmarks",
"russakovsky|the_imagenet_large_scale_visual_recognition_challenge",
"sermanet|attention_for_fine-grained_categorization",
"simonyan|very_deep_convolutional_networks_for_large-scale_image_recognition",
"uijlings|selective_search_for_object_recognition",
"kastner|mechanisms_of_visual_attention_in_the_human_cortex",
"vaswani|attention_is_all_you_need",
"wah|the_caltech-ucsd_birds-200-2011_dataset",
"wang|residual_attention_network_for_image_classification",
"xiao|the_application_of_two-level_attention_models_in_deep_convolutional_neural_network_for_fine-grained_image_classification",
"yanai|food_image_recognition_using_deep_convolutional_network_with_pre-training_and_fine-tuning",
"zagoruyko|wide_residual_networks",
"zhang|part-based_r-cnns_for_fine-grained_category_detection",
"zhang|picking_deep_filter_responses_for_fine-grained_image_recognition",
"zhao|a_survey_on_deep_learning-based_fine-grained_object_classification_and_semantic_segmentation",
"zhao|diversified_visual_attention_networks_for_fine-grained_object_classification"
],
"title": [
"Cognitive psychology and its implications",
"Deep-based ingredient recognition for cooking recipe retrieval",
"Neural mechanisms of selective visual attention",
"Age and gender estimation of unfiltered faces",
"Look closer to see better: recurrent attention convolutional neural network for fine-grained image recognition",
"Food image recognition using very deep convolutional networks",
"Deep residual learning for image recognition",
"Long short-term memory",
"Spatial transformer networks",
"Novel dataset for fine-grained image categorization: Stanford dogs",
"3d object representations for fine-grained categorization",
"Gradient-based learning applied to document recognition",
"Age and gender classification using convolutional neural networks",
"Bilinear cnn models for fine-grained visual recognition",
"Bilinear convolutional neural networks for fine-grained visual recognition",
"Fully convolutional attention localization networks: Efficient attention localization for fine-grained recognition",
"Recognition of multiple-food images by detecting candidate regions",
"Recurrent models of visual attention",
"How transferable are cnn-based features for age and gender classification?",
"Age and gender recognition in the wild with deep attention",
"Deep expectation of real and apparent age from a single image without facial landmarks",
"The imagenet large scale visual recognition challenge",
"Attention for fine-grained categorization",
"Very deep convolutional networks for large-scale image recognition",
"Selective search for object recognition",
"Mechanisms of visual attention in the human cortex",
"Attention is all you need",
"The Caltech-UCSD Birds-200-2011 Dataset",
"Residual attention network for image classification",
"The application of two-level attention models in deep convolutional neural network for fine-grained image classification",
"Food image recognition using deep convolutional network with pre-training and fine-tuning",
"Wide residual networks",
"Part-based r-cnns for fine-grained category detection",
"Picking deep filter responses for fine-grained image recognition",
"A survey on deep learning-based fine-grained object classification and semantic segmentation",
"Diversified visual attention networks for fine-grained object classification"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"john robert",
"anderson "
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
                }
]
},
{
"name": [
"jingjing chen",
"chong-wah ngo"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"robert desimone",
"john duncan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"eran eidinger",
"roee enbar",
"tal hassner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jianlong fu",
"heliang zheng",
"tao mei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"hamid hassannejad",
"guido matrella",
"paolo ciampolini",
"ilaria de munari",
"monica mordonini",
"stefano cagnoni"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sepp hochreiter",
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"max jaderberg",
"karen simonyan",
"andrew zisserman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"aditya khosla",
"nityananda jayadevaprakash",
"bangpeng yao",
"fei-fei li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jonathan krause",
"michael stark",
"jia deng",
"li fei-fei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yann lecun",
"léon bottou",
"yoshua bengio",
"patrick haffner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"gil levi",
"tal hassner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tsung-yu lin",
"aruni roychowdhury",
"subhransu maji"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tsung-yu lin",
"aruni roychowdhury",
"subhransu maji"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xiao liu",
"tian xia",
"jiang wang",
"yuanqing lin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"y matsuda",
"h hoashi",
"k yanai"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"volodymyr mnih",
"nicolas heess",
"alex graves"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"gokhan ozbulak",
"yusuf aytar",
"hazim kemal ekenel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"guillem pau rodríguez",
" cucurull",
"m josep",
"f gonfaus",
"jordi xavier roca",
" gonzàlez"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"radu rasmus rothe",
"luc timofte",
" van gool"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"olga russakovsky",
"jia deng",
"jonathan krause",
"alex berg",
"li fei-fei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pierre sermanet",
"andrea frome",
"esteban real"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"karen simonyan",
"andrew zisserman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"a uijlings",
"theo van de sande",
"m gevers",
" smeulders"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sabine kastner",
"ungerleider ",
"leslie g "
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ashish vaswani",
"noam shazeer",
"niki parmar",
"jakob uszkoreit",
"llion jones",
"aidan n gomez",
"lukasz kaiser",
"illia polosukhin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"c wah",
"s branson",
"p welinder",
"p perona",
"s belongie"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"fei wang",
"mengqing jiang",
"chen qian",
"shuo yang",
"cheng li",
"honggang zhang",
"xiaogang wang",
"xiaoou tang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tianjun xiao",
"yichong xu",
"kuiyuan yang",
"jiaxing zhang",
"yuxin peng",
"zheng zhang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"keiji yanai",
"yoshiyuki kawano"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sergey zagoruyko",
"nikos komodakis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ning zhang",
"jeff donahue",
"ross girshick",
"trevor darrell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xiaopeng zhang",
"hongkai xiong",
"wengang zhou",
"weiyao lin",
"qi tian"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"bo zhao",
"jiashi feng",
"xiao wu",
"shuicheng yan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"bo zhao",
"xiao wu",
"jiashi feng",
"qiang peng",
"shuicheng yan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"1512.03385v1",
"",
"1506.02025v3",
"",
"",
"",
"",
"",
"",
"arXiv:1603.06765",
"",
"1406.6247v1",
"",
"",
"",
"",
"",
"arXiv:1409.1556",
"",
"",
"1706.03762v7",
"",
"1704.06904v1",
"",
"",
"1605.07146v4",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.481481 | 0.75 | null | null | null | null | null | rJe7FW-Cb |
||
liu|on_the_generalization_effects_of_densenet_model_structures|ICLR_cc_2018_Conference | On the Generalization Effects of DenseNet Model Structures | Modern neural network architectures take advantage of increasingly deep layers and various advances in their structure to achieve better performance. While traditional explicit regularization techniques like dropout, weight decay, and data augmentation are still being used in these new models, little has been studied about the regularization and generalization effects of these new structures.
Besides being deeper than their predecessors, could newer architectures like ResNet and DenseNet also benefit from their structures' implicit regularization properties?
In this work, we investigate the effect of skip connections on a network's generalization properties. Through experiments, we show that certain architectural structures contribute to a network's generalization abilities. Specifically, we study the effect that low-level features have on generalization performance when they are introduced to deeper layers in DenseNet, ResNet, as well as other networks with 'skip connections'. We show that these low-level representations do help with generalization in multiple settings when both the quality and the quantity of training data are decreased. | {
"name": [],
"affiliation": []
} | Our paper analyses the tremendous representational power of networks, especially those with 'skip connections', which may be used as a method for better generalization. | [
"Skip connection",
"generalization",
"gegularization",
"deep network",
"representation."
] | null | 2018-02-15 22:29:31 | 1 | null | null | null | null | null | null | null | null | false | The paper appears unfinished in many ways: the experiments are preliminary, the paper completely ignored a large body of prior work on the subject, and the presentation needs substantial improvements. The authors did not provide a rebuttal.
I encourage the authors to refrain from submitting unfinished papers such as this one in the future, as it unnecessarily increases the load on a review system that is already strained. | {
"review_id": [
"r1bLTRKxG",
"r1OmdAYxz",
"H1YnyaKgG"
],
"review": [
{
"title": "title: An analysis paper with only 4 citations.",
"paper_summary": null,
"main_review": "main_review: The paper studies the effect of different network structures (plain CNN, ResNet and DenseNet). This is an interesting line of research to pursue, however, it gives an impression that a large amount of recent work in this direction has not been considered by the authors. The paper contains ONLY 4 references. \n\nSome references that might be useful to consider in the paper:\n- K. Greff et. al. Highway and Residual Networks learn Unrolled Iterative Estimation.\n- C. Zang et. al. UNDERSTANDING DEEP LEARNING REQUIRES RETHINKING GENERALIZATION\n- Q. Liao el. al. Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex\n- A. Veit et. al. Residual Networks Behave Like Ensembles of Relatively Shallow Networks\n- K. He at. Al Identity Mappings in Deep Residual Networks\n\nThe writing and the structure of the paper could be significantly improved. From the paper, it is difficult to understand the contributions. From the ones listed in Section 1, it seems that most of the contributions were shown in the original ResNet and DenseNet papers. Given, questionable contribution and a lack of relevant citations, it is difficult to recommend for acceptance of the paper. \n\nOther issues:\nSection 2: “Skip connection …. overcome the overfitting”, could the authors comment on this a bit more or point to relevant citation? \nSection 2: “We increase the number of skip connections from 0 to 28”, it is not clear to me how this is done.\nSection 3.1.1 “deep Linear model”, what the authors mean with this? Multiple layers without a nonlinearity? Is it the same as Cascade Net?\nSection 3.2 From the data description, it is not clear how the training data was obtained. Could the authors provide more details on this?\nSection 3.2 “…, only 3 of them are chosen to be displayed…”, how the selection process was done?\nSection 3.2 “Instead of showing every layer’s output we exhibit the 3th, 5th, 7th, 9th, 11th, 13th and the final layer’s output”, according to the description in Fig. 7 we should be able to see 7 columns, this description does not correspond to Fig. 7.\nSection 4 “This paper investigates how skip connections works in vision tasks…” I do not find experiments with vision datasets in the paper. In order to claim this, I would encourage the authors to run tests on a CV benchmark dataset (e. g. ImageNet)\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Analysis paper on the generalization effect of skip connections, motivation and contributions are not clear, references are very limited",
"paper_summary": null,
"main_review": "main_review: This paper analyzes the role of skip connections with respect to generalization in recent architectures such as ResNets or DenseNets. The authors perform an analysis of the performance of ResNets and DenseNets under data scarcity constraints and noisy training samples. They also run some experiments assessing the importance of the number of skip connections in such networks.\n\nThe presentation of the paper could be significantly improved. The motivation is difficult to grasp and the contributions do not seem compelling.\n\nMy main concern is about the contribution of the paper. The hypothesis that skip connections ease the training and improve the generalization has already been highlighted in the ResNet and DenseNet paper, see e.g. [a].\n\n[a] https://arxiv.org/pdf/1603.05027.pdf\n\nMoreover, the literature review is very limited. Although there is a vast existing literature on ResNets, DenseNets and, more generally, skip connections, the paper only references 4 papers. Many relevant papers could be referenced in the introduction as examples of successes in computer vision tasks, identity mapping initialization, recent interpretations of ResNets/DensetNets, etc.\n\nThe title suggests that the analysis is performed on DenseNet architectures, but experiments focus on comparing both ResNets and DenseNets to sequential convolutional networks and assessing the importance of skip connections.\n\nIn section 3.1. (1st paragraph) proposes adding noise to groundtruth labels; however, in section 3.1.2,. it would seem that noise is added by changing the input images (by setting some pixel channels to 0). Could the authors clarify that? Wouldn’t the noise added to the groundtruth act as a regularizer?\n\nIn section 4, the paper claims to investigate the role of skip connections in vision tasks. However, experiments are performed on MNIST, CIFAR100, a curve fitting problem and a presumably synthetic 2D classification problem. Performing the analysis on computer vision datasets such as ImageNet would be more compelling to back the statement in section 4.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: analysing skip connections",
"paper_summary": null,
"main_review": "main_review: The ms analyses a number of simulations how skip connections effect the generalization of different network architectures. The experiments are somewhat interesting but they appear rather preliminary. To indeed show the claims made, error bars in the graphs would be necessary as well will more careful and more generic analysis. In addition clear hypotheses should be stated. \nThe fact that some behaviour is seen in MNIST or CIFAR in the simulations does not permit conclusion for other data sets. Typically extensive teacher student simulations are required to validly make points. Also formally the paper is not in good shape. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.2222222238779068,
0.2222222238779068,
0.1111111119389534
],
"confidence": [
0.75,
0.75,
1
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [],
"comment": []
} | {
"paperhash": [
"he|deep_residual_learning_for_image_recognition"
],
"title": [
"Deep Residual Learning for Image Recognition"
],
"abstract": [
""
],
"authors": [
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
}
]
}
],
"arxiv_id": [
""
],
"s2_corpus_id": [
""
],
"intents": [
null
],
"isInfluential": [
null
]
} | null | 84 | null | 0.185185 | 0.833333 | null | null | null | null | null | rJbs5gbRW |
||
binkowski|autoregressive_convolutional_neural_networks_for_asynchronous_time_series|ICLR_cc_2018_Conference | 1703.04122v4 | Autoregressive Convolutional Neural Networks for Asynchronous Time Series | We propose Significance-Offset Convolutional Neural Network, a deep convolutional network architecture for regression of multivariate asynchronous time series. The model is inspired by standard autoregressive (AR) models and gating mechanisms used in recurrent neural networks. It involves an AR-like weighting system, where the final predictor is obtained as a weighted sum of adjusted regressors, while the weights are data-dependent functions learnt through a convolutional network. The architecture was designed for applications on asynchronous time series and is evaluated on such datasets: a hedge fund proprietary dataset of over 2 million quotes for a credit derivative index, an artificially generated noisy autoregressive series and household electricity consumption dataset. The proposed architecture achieves promising results as compared to convolutional and recurrent neural networks. The code for the numerical experiments and the architecture implementation will be shared online to make the research reproducible. | {
"name": [],
"affiliation": []
} | null | [
"Mathematics",
"Computer Science"
] | International Conference on Machine Learning | 2017-03-12 | 41 | null | null | null | null | null | null | null | null | false | The reviewers feel that the novelties in the model are not significant. Furthermore, they suggest that empirical results could be improved by
1: analyses showing how the significance network functions and directly measuring its impact
2: More reproducible experiments. In particular, this is really an applications paper, and the experiments on the main application are not reproducible because the data is proprietary.
3: baselines that make assumptions more in line with the authors' problem setup | {
"review_id": [
"Hkl6sRTyf",
"H1JQgkAJG",
"BkaINb9xz"
],
"review": [
{
"title": "title: Missing lots of related work",
"paper_summary": null,
"main_review": "main_review: To begin with, the authors seem to be missing some recent developments in the field of deep learning which are closely related to the proposed approach; e.g.:\n\nSotirios P. Chatzis, “Recurrent Latent Variable Conditional Heteroscedasticity,” Proc. 42nd IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE ICASSP), pp. 2711-2715, March 2017.\n\nIn addition, the authors claim that Gaussian process-based models are not appropriate for handling asynchronous data, since the assumed Gaussianity is inappropriate for financial datasets, which often follow fat-tailed distributions. However, they seem to be unaware of several developments in the field, where mixtures of Gaussian processes are postulated, so as to allow for capturing long tails in the data distribution; for instance:\n\nEmmanouil A. Platanios and Sotirios P. Chatzis, “Gaussian Process-Mixture Conditional Heteroscedasticity,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 5, pp. 888-900, May 2014.\n\nHence, the provided experimental comparisons are essentially performed against non-rivals of the method. It is more than easy to understand that a method not designed for modeling observations with the specific characteristics of financial data should definitely perform worse than a method designed to cope with such artifacts. That is why the sole purpose of a (convincing) experimental evaluation regime should be to compare between methods that are designed with the same data properties in mind. The paper does not satisfy this requirement.\n\nTurning to the method itself, the derivations are clear and straightforward; the method could have been motivated a somewhat better, though.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: There are some novel ideas in this paper, but it's not entirely clear if the ideas are all that useful. The experiments are okay, but don't really highlight the usefulness of the individual proposed ideas very much.",
"paper_summary": null,
"main_review": "main_review: The author proposed:\n1. A data augmentation technique for asynchronous time series data.\n2. A convolutional 'Significance' weighting neural network that assigns normalised weights to the outputs of a fully-connected autoregressive 'Offset' neural network, such that the output is a weighted average of the 'Offset' neural net.\n3. An 'auxiliary' loss function.\n\nThe experiments showed that:\n1. The proposed method beat VAR/CNN/ResNet/LSTM 2 synthetic asynchronous data sets, 1 real electricity meter data set and 1 real financial bid/ask data set. It's not immediately clear how hyper-parameters for the benchmark models were chosen.\n2. The author observed from the experiments that the depth of the offset network has negligible effect, and concluded that the 'Significance' network has crucial impact. (I don't see how this conclusion can be made.)\n3. The proposed auxiliary loss is not useful.\n4. The proposed architecture is more robust to noise in the synthetic data set compared to the benchmarks, and together with LSTM, are least prone to overfitting.\n\nPros\n- Proposed a useful way of augmenting asynchronous multivariate time series for fitting autoregressive models\n- The convolutional Significance/weighting networks appears to reduce test errors (not entirely clear)\n\nCons\n- The novelties aren't very well-justified. The 'Significance' network was described as critical to the performance, but there is no experimental result to show the sensitivity of the model's performance with respect to the architecture of the 'Significance' network. At the very least, I'd like to see what happens if the weighting was forced to be uniform while keeping the 'Offset' network and loss unchanged.\n- It's entirely unclear how the train and test data was split. This may be quite important in the case of the financial data set.\n- It's also unclear if model training was done on a rolling basis, which is common for time series forecasting.\n- The auxiliary loss function does not appear to be very helpful, but was described as a key component in the paper.\n\nQuality: The quality of the paper was okay. More details of the experiments should be included in the main text to help interpret the significance of the experimental results. The experiment also did not really probe the significance of the 'Significance' network even though it's claimed to be important.\nClarity: Above average. \nOriginality: Mediocre. Nothing really shines. Weighted average-type architecture has been proposed many times in neural networks (e.g., attention mechanisms). \nSignificance: Low. It's unclear how useful the architecture really is.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: The paper has potential but needs more empirical validation to demonstrate general relevance",
"paper_summary": null,
"main_review": "main_review: The authors propose an extension to CNN using an autoregressive weighting for asynchronous time series applications. The method is applied to a proprietary dataset as well as a couple UCI problems and a synthetic dataset, showing improved performance over baselines in the asynchronous setting.\n\nThis paper is mostly an applications paper. The method itself seems like a fairly simple extension for a particular application, although perhaps the authors have not clearly highlighted details of methodological innovation. I liked that the method was motivated to solve a real problem, and that it does seem to do so well compared to reasonable baselines. However, as an an applications paper, the bread of experiments are a little bit lacking -- with only that one potentially interesting dataset, which happens to proprietary. Given the fairly empirical nature of the paper in general, it feels like a strong argument should be made, which includes experiments, that this work will be generally significant and impactful. \n\nThe writing of the paper is a bit loose with comments like:\n“Besides these and claims of secretive hedge funds (it can be marketing surfing on the deep learning hype), no promising results or innovative architectures were publicly published so far, to the best of our knowledge.”\n\nParts of the also appear rush written, with some sentences half finished:\n“\"ues of x might be heterogenous, hence On the other hand, significance network provides data-dependent weights for all regressors and sums them up in autoregressive manner.””\n\nAs a minor comment, the statement\n“however, due to assumed Gaussianity they are inappropriate for financial datasets, which often follow fat-tailed distributions (Cont, 2001).”\nIs a bit too broad. It depends where the Gaussianity appears. If the likelihood is non-Gaussian, then it often doesn’t matter if there are latent Gaussian variables.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.4444444477558136,
0.4444444477558136
],
"confidence": [
1,
0.5,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to the reviewers"
],
"comment": [
"First of all, we would like to thank all reviewers for their insightful comments.\n\nAs the ratings quite consistently put the paper below the acceptance threshold, the authors decided not to modify the submission, but instead to continue the work and possibly re-submit the paper in the future. We agree that the obtained results lack comparison with other significant models as well as the proposed model without certain components (e.g. significance network). New experiments will be carried out to reinforce the results."
]
} | {
"paperhash": [
"borovykh|conditional_time_series_forecasting_with_convolutional_neural_networks",
"dauphin|language_modeling_with_gated_convolutional_networks",
"neil|phased_lstm:_accelerating_recurrent_network_training_for_long_or_event-based_sequences",
"grave|efficient_softmax_approximation_for_gpus",
"oord|wavenet:_a_generative_model_for_raw_audio",
"oord|conditional_image_generation_with_pixelcnn_decoders",
"he|deep_residual_learning_for_image_recognition",
"srivastava|highway_networks",
"ioffe|batch_normalization:_accelerating_deep_network_training_by_reducing_internal_covariate_shift",
"kingma|adam:_a_method_for_stochastic_optimization",
"chung|empirical_evaluation_of_gated_recurrent_neural_networks_on_sequence_modeling",
"schmidhuber|deep_learning_in_neural_networks:_an_overview",
"wilson|copula_processes",
"lecun|gradient-based_learning_applied_to_document_recognition",
"kumar|ecommercegan_:_a_generative_adversarial_network_for_e-commerce",
"gamboa|deep_learning_for_time-series_analysis",
"bun|cleaning_large_correlation_matrices:_tools_from_random_matrix_theory",
"li|a_scalable_end-to-end_gaussian_process_adapter_for_irregularly_sampled_time_series_classification",
"weissenborn|mufuru:_the_multi-function_recurrent_unit",
"heaton|deep_learning_in_finance",
"józefowicz|exploring_the_limits_of_language_modeling",
"mathieu|deep_multi-scale_video_prediction_beyond_mean_square_error",
"cho|describing_multimedia_content_using_attention-based_encoder-decoder_networks",
"sims|macroeconomics_and_reality"
],
"title": [
"Conditional time series forecasting with convolutional neural networks",
"Language Modeling with Gated Convolutional Networks",
"Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences",
"Efficient softmax approximation for GPUs",
"WAVENET: A GENERATIVE MODEL FOR RAW AUDIO Aäron van den Oord",
"Conditional Image Generation with PixelCNN Decoders Aäron van den Oord",
"Deep Residual Learning for Image Recognition",
"Highway Networks",
"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"Published as a conference paper at ICLR 2015 ADAM: A METHOD FOR STOCHASTIC OPTIMIZATION",
"Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling",
"Deep Learning in Neural Networks: An Overview",
"Copula Processes",
"Gradient-based learning applied to document recognition",
"eCommerceGAN : A Generative Adversarial Network for E-commerce",
"Deep Learning for Time-Series Analysis",
"Cleaning large Correlation Matrices: tools from Random Matrix Theory",
"A scalable end-to-end Gaussian process adapter for irregularly sampled time series classification",
"MuFuRU: The Multi-Function Recurrent Unit",
"Deep Learning in Finance",
"Exploring the Limits of Language Modeling",
"DEEP MULTI-SCALE VIDEO PREDICTION BEYOND MEAN SQUARE ERROR",
"Describing Multimedia Content using Attention-based Encoder-Decoder Networks",
""
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"anastasia borovykh",
"sander bohte",
"cornelis w oosterlee"
],
"affiliation": [
{
"laboratory": "",
"institution": "Università di Bologna",
"location": "{'settlement': 'Bologna', 'country': 'Italy'}"
},
{
"laboratory": "",
"institution": "Centrum Wiskunde & Informatica",
"location": "{'settlement': 'Amsterdam', 'country': 'The Netherlands'}"
},
{
"laboratory": "",
"institution": "Centrum Wiskunde & Informatica",
"location": "{'settlement': 'Amsterdam', 'country': 'The Netherlands'}"
}
]
},
{
"name": [
"yann n dauphin",
"angela fan",
"michael auli",
"david grangier"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"daniel neil",
"michael pfeiffer",
"shih-chii liu"
],
"affiliation": [
{
"laboratory": "",
"institution": "ETH Zurich Zurich",
"location": "{'country': 'Switzerland'}"
},
{
"laboratory": "",
"institution": "ETH Zurich Zurich",
"location": "{'country': 'Switzerland'}"
},
{
"laboratory": "",
"institution": "ETH Zurich Zurich",
"location": "{'country': 'Switzerland'}"
}
]
},
{
"name": [
"édouard grave",
"armand joulin",
"moustapha cissé",
"david grangier",
"hervé jégou"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sander dieleman",
"heiga zen",
"karen simonyan",
"nal kalchbrenner",
"andrew senior",
"koray kavukcuoglu",
"google deepmind",
" london"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"google deepmind",
"nal kalchbrenner",
"lasse espeholt",
"alex graves",
"koray kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
}
]
},
{
"name": [
"rupesh kumar srivastava",
"klaus greff"
],
"affiliation": [
{
"laboratory": "The Swiss AI Lab IDSIA Istituto Dalle Molle di Studi sull'Intelligenza Artificiale Università della Svizzera italiana (USI) Scuola universitaria professionale della Svizzera italiana (SUPSI)",
"institution": "",
"location": "{'addrLine': 'Galleria 2', 'postCode': '6928', 'settlement': 'Manno-Lugano', 'country': 'Switzerland'}"
},
{
"laboratory": "The Swiss AI Lab IDSIA Istituto Dalle Molle di Studi sull'Intelligenza Artificiale Università della Svizzera italiana (USI) Scuola universitaria professionale della Svizzera italiana (SUPSI)",
"institution": "",
"location": "{'addrLine': 'Galleria 2', 'postCode': '6928', 'settlement': 'Manno-Lugano', 'country': 'Switzerland'}"
}
]
},
{
"name": [
"sergey ioffe"
],
"affiliation": [
{
"laboratory": "",
"institution": "Christian Szegedy Google Inc",
"location": "{}"
}
]
},
{
"name": [
"diederik p kingma",
"jimmy lei ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"junyoung chung",
"caglar gulcehre",
"kyunghyun cho",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal CIFAR Senior Fellow",
"location": "{}"
}
]
},
{
"name": [
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": "",
"institution": "SUPSI",
"location": "{'addrLine': 'Galleria 2', 'postCode': '6928', 'settlement': 'Manno-Lugano', 'country': 'Switzerland'}"
}
]
},
{
"name": [
"andrew gordon",
"wilson zoubin ghahramani"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Cambridge Cambridge",
"location": "{'postCode': 'CB2 1PZ', 'country': 'UK'}"
},
{
"laboratory": "",
"institution": "University of Cambridge Cambridge",
"location": "{'postCode': 'CB2 1PZ', 'country': 'UK'}"
}
]
},
{
"name": [
"yann lecun",
"léon bottou",
"yoshua bengio",
"patrick haffner",
"yoshua bottou",
"patrick bengio",
" haffner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ashutosh kumar",
"arijit biswas",
"subhajit sanyal"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"john gamboa"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Kaiserslautern",
"location": "{'settlement': 'Kaiserslautern', 'country': 'Germany'}"
}
]
},
{
"name": [
"joël bun",
"jean-philippe bouchaud",
"marc potters"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"steven cheng",
"-xian li",
"benjamin marlin"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Massachusetts Amherst Amherst",
"location": "{'postCode': '01003', 'region': 'MA'}"
},
{
"laboratory": "",
"institution": "University of Massachusetts Amherst Amherst",
"location": "{'postCode': '01003', 'region': 'MA'}"
},
{
"laboratory": "",
"institution": "University of Massachusetts Amherst Amherst",
"location": "{'postCode': '01003', 'region': 'MA'}"
}
]
},
{
"name": [
"dirk weissenborn",
"tim rocktäschel"
],
"affiliation": [
{
"laboratory": "Language Technology Lab",
"institution": "DFKI",
"location": "{'addrLine': 'Alt-Moabit 91c', 'settlement': 'Berlin', 'country': 'Germany'}"
},
{
"laboratory": "",
"institution": "University College",
"location": "{'addrLine': 'London Gower Street', 'settlement': 'London', 'country': 'UK'}"
}
]
},
{
"name": [
"j b heaton",
"n g polson",
"j h witte"
],
"affiliation": [
{
"laboratory": "",
"institution": "Conjecture LLC",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Chicago",
"location": "{}"
},
{
"laboratory": "",
"institution": "University College London, and Conjecture LLC",
"location": "{}"
}
]
},
{
"name": [
"rafal jozefowicz",
"mike schuster",
"noam shazeer",
"yonghui wu",
"google brain"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"michael mathieu",
"camille couprie",
"yann lecun"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
}
]
},
{
"name": [
"kyunghyun cho",
"aaron courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"christopher a sims",
"finn kydland"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Minnesota Minneapolis",
"location": "{'postCode': '55455', 'region': 'Minnesota'}"
},
{
"laboratory": "",
"institution": "University of Minnesota Minneapolis",
"location": "{'postCode': '55455', 'region': 'Minnesota'}"
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 95 | null | 0.407407 | 0.75 | null | null | null | null | null | rJaE2alRW |
|
shen|learning_to_generate_filters_for_convolutional_neural_networks|ICLR_cc_2018_Conference | 1812.01894v1 | Learning to Generate Filters for Convolutional Neural Networks | Conventionally, convolutional neural networks (CNNs) process different images with the same set of filters. However, the variations in images pose a challenge to this approach. In this paper, we propose to generate sample-specific filters for convolutional layers in the forward pass. Since the filters are generated on-the-fly, the model becomes more flexible and can better fit the training data compared to traditional CNNs. In order to obtain sample-specific features, we extract the intermediate feature maps from an autoencoder. As filters are usually high-dimensional, we propose to learn a set of coefficients instead of a set of filters. These coefficients are used to linearly combine the base filters from a filter repository to generate the final filters for a CNN. The proposed method is evaluated on the MNIST, MTFL and CIFAR10 datasets. Experimental results demonstrate that the classification accuracy of the baseline model can be improved by using the proposed filter generation method. | {
"name": [],
"affiliation": []
} | null | [
"Computer Science"
] | arXiv.org | 2018-02-15 | 15 | null | null | null | null | null | null | null | null | false | The paper proposes a method for learning convolutional networks with dynamic input-conditioned filters. There are several prior work along this idea, but there is no comparison agaist them. Overall, experimental results are not convincing enough. | {
"review_id": [
"Syy4M8qxf",
"HygXOMDxf",
"BJFxOpcez"
],
"review": [
{
"title": "title: Interesting neural network architecture; experiments can be stronger",
"paper_summary": null,
"main_review": "main_review: This paper proposes a two-pathway neural network architecture. One pathway is an autoencoder that extracts image features from different layers. The other pathway consists of convolutional layers to solve a supervised task. The kernels of these convolutional layers are generated dynamically based on the autoencoder features of the corresponding layers. Directly mapping the autoencoder features to the convolutional kernels requires a very large matrix multiplication. As a workaround, the proposed method learns a dictionary of base kernels and maps the features to the coefficients on the dictionary. \n\nThe proposed method is an interesting way of combining an unsupervised learning objective and a supervised one. \n\nWhile the idea is interesting, the experiments are a bit weak. \nFor MNIST (Table 1), only epoch 1 and epoch 20 results are reported. However, the results of a converged model (train for more epochs) are more meaningful. \nFor Cifar-10 (Figure 4b), the final accuracy is less than 90%, which is several percentages lower than the state-of-the-art method.\nFor MTFL, I am not sure how significant the final results are. It seems a more commonly used recent protocol is to train on MTFL and test on AFLW. \nIn general, the experiments are under controlled settings and are encouraging. However, promising results for comparing with the state-of-the-art methods are necessary for showing the practical importance of the proposed method. \n\nA minor point: it is a bit unnatural to call the proposed method “baseline” ... \n\nIf the model is trained in an end-to-end manner. It will be helpful to perform ablative studies on how critical the reconstruction loss is (Note that the two pathway can be possibly trained using a single supervised objective function). \n\nIt will be interesting to see if the proposed model is useful for semi-supervised learning. \n\nA paper that may be related regarding dynamic filters:\nImage Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction\n\nSome paper that may be related regarding combine supervised and unsupervised learning:\nStacked What-Where Auto-encoders\nSemi-Supervised Learning with Ladder Networks\nAugmenting Supervised Neural Networks with Unsupervised Objectives for Large-Scale Image Classification\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Reject - little novelty and weak experiments",
"paper_summary": null,
"main_review": "main_review: The authors propose an approach to dynamically generating filters in a CNN based on the input image. The filters are generated as linear combinations of a basis set of filters, based on features extracted by an auto-encoder. The authors test the approach on recognition tasks on three datasets: MNIST, MTFL (facial landmarks) and CIFAR10, and show a small improvement over baselines without dynamic filters.\n\nPros:\n1) I have not seen this exact approach proposed before.\n2) There method is evaluated on three datasets and two tasks: classification and facial landmark detection.\n\nCons:\n1) The authors are not the first to propose dynamically generating filters, and they clearly mention that the work of De Brabandere et al. is closely related. Yet, there is no comparison to other methods for dynamic weight generation. \n2) Related to that, there is no ablation study, so it is unclear if the authors’ contributions are useful. I appreciate the analysis in Tables 1 and 2, but this is not sufficient. Why the need for the autoencoder - why can’t the whole network be trained end-to-end on the goal task? Why generate filters as linear combination - is this just for computational reasons, or also accuracy? This should be analyzed empirically.\n3) The experiments are somewhat substandard:\n- On MNIST the authors use a tiny poorly-performance network, and it is no surprise that one can beat it with a bigger dynamic filter network.\n- The MTFL experiments look most convincing (although this might be because I am not familiar with SoTA on the dataset), but still there is no control for the number of parameters, and the performance improvements are not huge\n- On CIFAR10 - there is a marginal improvement in performance, which, as the authors admit, can also be reached by using a deeper model. The baseline models are far from SoTA - the authors should look at more modern architecture such as AllCNN (not particularly new or good, but very simple), ResNet, wide ResNet, DenseNet, etc.\n\nAs a comment, I don’t think classification is a good task for showcasing such an architecture - classification is already working extremely well. Many other tasks - for instance, detection, tracking, few-shot learning - seem much more promising.\n\nTo conclude, the authors propose a new approach to learning convolutional networks with dynamic input-conditioned filters. Unfortunately, the authors fail to demonstrate the value of the proposed method. I therefore recommend rejection.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Review",
"paper_summary": null,
"main_review": "main_review: This paper explores learning dynamic filters for CNNs. The filters are generated by using the features of an autoencoder on the input image, and linearly combining a set of base filters for each layer. This addresses an interesting problem which has been looked at a lot before, but with some small new parts. There is a lot of prior work in this area that should be cited in the area of dynamic filters and steerable filters. There are also parallels to ladder networks that should be highlighted. \n\nThe results indicate improvement over baselines, however baselines are not strong baselines. \nA key question is what happens when this method is combined with VGG11 which the authors train as a baseline? \nWhat is the effect of the reconstruction loss? Can it be removed? There should be some ablation study here.\nFigure 5 is unclear what is being displayed, there are no labels.\n\nOverall I would advise the authors to address these questions and suggest this as a paper suitable for a workshop submission.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.3333333432674408,
0.3333333432674408
],
"confidence": [
0.75,
0.75,
1
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Reference"
],
"comment": [
"Thank you for your comment. I just read your paper. It is very interesting. I will cite your work in the next version."
]
} | {
"paperhash": [
"baek|deep_convolutional_decision_jungle_for_image_classification",
"ha|hypernetworks",
"bertinetto|learning_feed-forward_one-shot_learners",
"jia|dynamic_filter_networks",
"ioannou|decision_forests,_convolutional_networks_and_the_models_in-between",
"kontschieder|deep_neural_decision_forests",
"xiong|conditional_convolutional_neural_network_for_modality-aware_face_recognition",
"ronneberger|u-net:_convolutional_networks_for_biomedical_image_segmentation",
"yi|learning_face_representation_from_scratch",
"zhang|facial_landmark_detection_by_deep_multi-task_learning",
"simonyan|very_deep_convolutional_networks_for_large-scale_image_recognition",
"burgos-artizzu|robust_face_landmark_estimation_under_occlusion",
"schmidhuber|learning_to_control_fast-weight_memories:_an_alternative_to_dynamic_recurrent_networks",
"krizhevsky|learning_multiple_layers_of_features_from_tiny_images",
"maaten|visualizing_data_using_t-sne",
"lecun|gradient-based_learning_applied_to_document_recognition",
"|the_networks_used_in_the_cifar10_experiment_are_shown_from_table_15_to_table_20._a.2_i_mage"
],
"title": [
"Deep Convolutional Decision Jungle for Image Classification",
"HyperNetworks",
"Learning feed-forward one-shot learners",
"Dynamic Filter Networks",
"Decision Forests, Convolutional Networks and the Models in-Between",
"Deep Neural Decision Forests",
"Conditional Convolutional Neural Network for Modality-Aware Face Recognition",
"U-Net: Convolutional Networks for Biomedical Image Segmentation",
"Learning Face Representation from Scratch",
"Facial Landmark Detection by Deep Multi-task Learning",
"Very Deep Convolutional Networks for Large-Scale Image Recognition",
"Robust Face Landmark Estimation under Occlusion",
"Learning to Control Fast-Weight Memories: An Alternative to Dynamic Recurrent Networks",
"Learning Multiple Layers of Features from Tiny Images",
"Visualizing Data using t-SNE",
"Gradient-based learning applied to document recognition",
"The networks used in the CIFAR10 experiment are shown from Table 15 to Table 20. A.2 I MAGE"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"Seungryul Baek",
"K. Kim",
"Tae-Kyun Kim"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"David Ha",
"Andrew M. Dai",
"Quoc V. Le"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Luca Bertinetto",
"João F. Henriques",
"Jack Valmadre",
"Philip H. S. Torr",
"A. Vedaldi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Xu Jia",
"Bert De Brabandere",
"T. Tuytelaars",
"L. Gool"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yani Andrew Ioannou",
"D. Robertson",
"Darko Zikic",
"P. Kontschieder",
"J. Shotton",
"Matthew Brown",
"A. Criminisi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Kontschieder",
"M. Fiterau",
"A. Criminisi",
"S. R. Bulò"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Chao Xiong",
"Xiaowei Zhao",
"Danhang Tang",
"J. Karlekar",
"Shuicheng Yan",
"Tae-Kyun Kim"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"O. Ronneberger",
"P. Fischer",
"T. Brox"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Dong Yi",
"Zhen Lei",
"Shengcai Liao",
"S. Li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Zhanpeng Zhang",
"Ping Luo",
"Chen Change Loy",
"Xiaoou Tang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"K. Simonyan",
"Andrew Zisserman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"X. Burgos-Artizzu",
"P. Perona",
"Piotr Dollár"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Krizhevsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"L. Maaten",
"Geoffrey E. Hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yann LeCun",
"L. Bottou",
"Yoshua Bengio",
"P. Haffner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
}
],
"arxiv_id": [
"1706.02003",
"1609.09106",
"1606.05233",
"1605.09673",
"1603.01250",
"",
"",
"1505.04597",
"1411.7923",
"",
"1409.1556",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
[
"background",
"methodology"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"background"
],
[
"methodology"
],
[],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"methodology"
],
[
"methodology"
],
[]
],
"isInfluential": [
false,
false,
true,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
true,
false,
true,
false
]
} | null | 84 | null | 0.37037 | 0.833333 | null | null | null | null | null | rJa90ceAb |
|
bosselut|simulating_action_dynamics_with_neural_process_networks|ICLR_cc_2018_Conference | 1711.05313v2 | Simulating Action Dynamics with Neural Process Networks | Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated. In this work, we introduce Neural Process Networks to understand procedural text through (neural) simulation of action dynamics. Our model complements existing memory architectures with dynamic entity tracking by explicitly modeling actions as state transformers. The model updates the states of the entities by executing learned action operators. Empirical results demonstrate that our proposed model can reason about the unstated causal effects of actions, allowing it to provide more accurate contextual information for understanding and generating procedural text, all while offering more interpretable internal representations than existing alternatives. | {
"name": [
"antoine bosselut",
"omer levy",
"ari holtzman",
"corin ennis",
"dieter fox",
"yejin choi"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Washington",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Washington",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Washington",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Washington -Bothell",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Washington",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Washington",
"location": "{}"
}
]
} | null | [
"Computer Science"
] | International Conference on Learning Representations | 2017-11-01 | 31 | 110 | null | null | null | null | null | null | null | true | This submission proposes a novel extension of existing recurrent networks that focuses on capturing long-term dependencies via tracking entities/their states, and tests it on a new task. There is a concern that the proposed approach is heavily engineered toward the proposed task and may not be applicable to other tasks, which I fully agree with. I however find the proposed approach and the authors' justification to be thorough enough, and for now, recommend it to be accepted. | {
"review_id": [
"rJQcSB5gG",
"r1Hu15Kxz",
"SJUEXlDxf"
],
"review": [
{
"title": "title: Interesting model to incorporate domain-specific knowledge in procedural language understanding",
"paper_summary": null,
"main_review": "main_review: The paper studies procedural language, which can be very useful in applications such as robotics or online customer support. The system is designed to model knowledge of the procedural task using actions and their effect on entities. The proposed solution incorporates a structured representation of domain-specific knowledge that appears to improve performance in two evaluated tasks: tracking entities as the procedure evolves, and generating sentences to complete a procedure. The method is interesting and presents a good amount of evidence that it works, compared to relevant baseline solutions. \n\nThe proposed tasks of tracking entities and generating sentences are also interesting given the procedural context, and the authors introduce a new dataset with dense annotations for evaluating this task. Learning happens in a weakly supervised manner, which is very interesting too, indicating that the model introduces the right bias to produce better results.\n\nThe manual selection and curation of entities for the domain are reasonable assumptions, but may also limit the applicability or generality from the learning perspective. This selection may also explain part of the better performance, as the right bias is not just in the model, but in the construction of the \"ontologies\" to make it work.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: good paper",
"paper_summary": null,
"main_review": "main_review: SUMMARY.\n\nThe paper presents a novel approach to procedural language understanding.\nThe proposed model reads food recipes and updates the representation of the entities mentioned in the text in order to reflect the physical changes of the entities in the recipe.\nThe authors also propose a manually annotated dataset where each passage of a recipe is annotated with entities, actions performed over the entities, and the change in state of the entities after the action.\nThe authors tested their model on the proposed dataset and compared it with several baselines.\n\n\n----------\n\nOVERALL JUDGMENT\nThe paper is very well written and easy to read.\nI enjoyed reading this paper, I found the proposed architecture very well thought for the proposed task.\nI would have liked to see a little bit more of analysis on the results, it would be interesting to see what are the cases the model struggles the most.\n\nI am wondering how the model would perform without intermediate losses i.e., entity selection loss and action selection loss.\nIt would also be interesting to see the impact of the amount of 'intermediate' supervision on the state change prediction.\n\nThe setup for generation is a bit unclear to me.\nThe authors mentioned to encode entity vectors with a biGRU, do the authors encode it in order of appearance in the text? would not it be better to encode the entities with some structure-agnostic model like Deep Sets?\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Worried about the generality of the model, the qualitative analysis, as well as a fair comparison to Recurrent Entity Networks and non-neural baselines",
"paper_summary": null,
"main_review": "main_review: Summary\n\nThis paper presents Neural Process Networks, an architecture for capturing procedural knowledge stated in texts that makes use of a differentiable memory, a sentence and word attention mechanism, as well as learning action representations and their effect on entity representations. The architecture is tested for tracking entities in recipes, as well as generating the natural language description for the next step in a recipe. It is compared against a suit of baselines, such as GRUs, Recurrent Entity Networks, Seq2Seq and the Neural Checklist Model. While I liked the overall paper, I am worried about the generality of the model, the qualitative analysis, as well as a fair comparison to Recurrent Entity Networks and non-neural baselines.\n\nStrengths\n\nI believe the authors made a good effort in comparing against existing neural baselines (Recurrent Entity Networks, Neural Checklist Model) *for their task*. That said, it is unclear to me how generally applicable the method is and whether the comparison against Recurrent Entity Networks is fair (see Weaknesses).\nI like the ablation study.\n\nWeaknesses\n\nWhile I find the Neural Process Networks architecture interesting and I acknowledge that it outperforms Recurrent Entity Networks for the presented tasks, after reading the paper it is not clear to me how generally applicable the architecture is. Some design choices seem rather tailored to the task at hand (manual collection of actions MTurk annotation in section 3.1) and I am wondering where else the authors see their method being applied given that the architecture relies on all entities and actions being known in advance. My understanding is that the architecture could be applied to bAbI and CBT (the two tasks used in the Recurrent Entity Networks paper). If that is the case, a fair comparison to Recurrent Entity Networks would have been to test against Recurrent Entity Networks on these tasks too. If they the architecture cannot be applied in these tasks, the authors should explain why.\nI am not convinced by the qualitative analysis. Table 2 tells me that even for the best model the entity selection performance is rather unreliable (only 55.39% F1), yet all examples shown in Table 3 look really good, missing only the two entities oil (1) and sprinkles (3). This suggests that these examples were cherry-picked and I would like to see examples that are sampled randomly from the dev set. I have a similar concern regarding the generation task. First, it is not mentioned where the examples in Table 6 are taken from – is it the train, dev or test set? Second, the overall BLEU score seems quite low even for the best model, yet the examples in Table 6 look really good. In my opinion, a good qualitative analysis should also discuss failure cases. Since the BLEU score is so low here, you might also want to compare perplexity of the models.\nThe qualitative analysis in Table 5 is not convincing either. In Appendix A.1 it is mentioned that word embeddings are initialized from word2vec trained on the training set. My suspicion is that one would get the clustering in Table 4 already from those pretrained vectors, maybe even when pretrained on the Google news corpus. Hence, it is not clear what propagating gradients through the Neural Process Networks into the action embeddings adds, or put differently, why does it have to be a differentiable architecture when an NLP pipeline might be enough? 
This could easily be tested by another ablation where action embeddings are pretrained using word2vec and then fixed during training of the Neural Process Network. Moreover, in 3.3 it is mentioned that even the Action Selection is pretrained, which makes me wonder what is actually trained jointly in the architecture and what is not.\nI think the difficulty of the task at hand needs to be discussed at some point, ideally early in the paper. Until examples on page 7 are shown, I did not have a sense for why a neural architecture is chosen. For example, in 2.3 it is mentioned that for \"wash and cut\" the two functions fwash and fcut need to be selected. For this example, this seems trivial as the functions have the same name (and you could even have a function per name!). As far as I understand, the point of the action selector is to only have a fixed number of learned actions and multiple words (cut, slice etc.) should select the same action fcut. Otherwise (if there is little language ambiguity) I would not see the need for a complex neural architecture. Related to that, a non-neural baseline for the entity selection task that in my opinion definitely needs to be added is extracting entities using a pretrained NER system and returning all of them as the selection.\np2 Footnote 1: So if I understand this correctly, this work builds upon a dataset of over 65k recipes from Kiddon et al. (2016), but only for 875 of those detailed annotations were created?\n\nMinor Comments\n\np1: The statement \"most natural language understanding algorithms do not have the capacity …\" should be backed by reference.\np2: \"context representation ht\" – I would directly mention that this is a sentence encoding.\np3: 2.4: I have the impression what you are describing here is known in the literature as entity linking.\np3 Eq.3: Isn't c3*0 always a vector of zeros?\np4 Eq.6: W4 is an order-3 tensor, correct?\np4 Eq.8: What is YC and WC here and what are their dimensions? I am confused by the softmax, as my understanding (from reading the paragraph on the Action Selection Loss on p.5) was that the expression in the softmax here is a scalar (as it is done for every possible action), so this should be a sigmoid to allow for multiple actions to attain a probability of 1?\np5: \"See Appendix for details\" -> \"see Appendix C for details\"\np5 3.3: Could you elaborate on the heuristic for extracting verb mentions? Is only one verb mention per sentence extracted?\np5: \"trained to minimize cross-entropy loss\" -> \"trained to minimize the cross-entropy loss\"\np5 3.3: What is the global loss?\np6: \"been read (§2.5.\" -> \"been read (§2.5).\"\np6: \"We encode these vectors using a bidirectional GRU\" – I think you composing a fixed-dimensional vector from the entity vectors? What's eI?\np7: For which statement is (Kim et al. 2016) the reference? Surely, they did not invent the Hadamard product.\np8: \"Our model, in contrast\" use\" -> \"Our model, in contrast, uses\".\np8 Related Work: I think it is important to mention that existing architectures such as Memory Netwroks could, in principle, learn to track entities and devote part of their parameters to learn the effect of actions. What Neural Process Networks are providing is a strong inductive bias for tracking entities and learning the effect of actions that is useful for the task considered in this paper. 
As mentioned in the weaknesses, this might however come at the price of a less general model, which should be discussed.\n\n# Update after the rebuttal\nThanks for the clarifications and updating the paper. I am increasing my score by two points and expect to see the ablations as well as the NER baseline mentioned in the rebuttal in the next revision of the paper. Furthermore, I encourage the authors to include the analysis of pretrained word2vec embeddings vs the embeddings learned by this architecture into the paper. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.7777777910232544,
0.8888888955116272,
0.5555555820465088
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Re: Reviewer #3 part 1",
"Re: Reviewer #3 part 3",
"Re: Reviewer #3 part 2",
"Re: Reviewer #2",
"Re: Reviewer #1"
],
"comment": [
"Because our response was longer than 5000 characters, we separate our response into multiple parts to break the writing at natural breaks in the response.\n\n--- Motivation with Respect to Related Work ---\nWe thank the reviewer for the question that prompts us to better clarify the key differences between previous approaches based on datasets such as bAbI, and the task proposed in our study. The motivation of our work is to probe a research direction where we make use of naturally existing text with no gold labels, and investigate the role of the modular architecture and intermediate loss functions (with distant supervision) for learning latent action dynamics. In sum, the key contributions of our work are (1) to introduce a new task and dataset (including detailed annotations for evaluation) that bring up unique challenges that previous datasets did not cover, and (2) to propose a new model that is better suited for this new challenge of reasoning about action dynamics. \n\nAs such, our newly introduced task actually complements work on densely labeled datasets such as bAbI. The bAbI dataset is synthetically constructed such that training labels cover the full spectrum of the semantics the model is expected to learn (i.e., # of training instances is extremely high for the # of words/concepts involved). When the training set provides sufficient inductive signals, it is possible to train an end-to-end model to extract the complex relations needed to do well on the task, and Recurrent Entity Networks are one of the best models architected. In our task, because the dataset alone does not provide sufficient inductive signals (only 875 recipes are densely labeled for evaluation), we investigate methods to provide better inductive biases using intermediate losses guided by distant supervision. We view both types of research directions --- integrating inductive biases into datasets (bAbI) vs. models (NPN) --- as important to pursue. They are complementary to each other, and our work focuses on the latter that has been relatively less explored in the existing literature.\n\nCBT is a cloze task based on children’s stories. While CBT is based on real natural language text like ours, the nature of the task differs much from ours in that answering the cloze task often requires remembering the surface patterns associated with each entity throughout the story excerpt (as has been also suggested by the Window Encoding used by prior approaches on this task). In contrast, our task focuses on the unspoken causal effects of actions, rather than explicitly mentioned descriptions about entities. \n\nGiven the key differences between our task and others, it seems beyond the scope to require our model to outperform on all other tasks with different modeling requirements. That said, we are happy to include a detailed and insightful narrative about these differences in our revision, along with side by side performance comparisons. At this time, our conjecture is that updating entity states only through action application is likely to be too restrictive for CBT and bAbI tasks where remembering surface patterns without corresponding actions is crucial. However, our neural process networks can be easily extended to directly connect the sentence encoding to the simulation module (a minor change to one equation), in order to allow for updating entity representations even when there are no explicit actions associated.\n\n--- Qualitative Examples ---\nThe examples in all tables were taken from the development set. 
We chose these examples to provide interesting case studies on some of the patterns that the model is able to learn by reading text and simulating the underlying world. We agree with the reviewer that an analysis of failure cases should have been included in the original submission and have updated the paper to include examples of similarly interesting cases the model misses. We intend to expand the model’s capabilities to capture these in future work.\n",
"--- Minor Comments ---\nWe appreciate the reviewer’s thought-provoking questions about the impact of our model. We’ve updated the paper to extend the qualitative evaluation with additional examples and to clarify where our approach differentiates with the goals of general-purpose memory models such as Recurrent Entity Networks. We thank the reviewer for pointing out additional baselines and ablations to run to show the importance of the components of the model and will update the paper to incorporate them as we get the results. Finally, we appreciate the reviewer’s comments pointing out minor corrections to be made in the paper, and have incorporated them in the revised version.\n\nBelow, we address minor comments made by the reviewer that were not addressed in the paragraphs above.\n\np3: 2.4: I have the impression what you are describing here is known in the literature as entity linking.\n\nAssuming the reviewer is referring to the recurrent attention paragraph, we think coreference resolution would be a more accurate analogue to the task being handled as the goal of the recurrent attention mechanism is to tie connections between entity changes in the text without the use of an external KB. However, coreference tasks are defined only over explicitly mentioned entities in the text, while our task requires reasoning about implicit mentions as well, e.g., “Add water to the pot. Boil for 30 minutes” (where the implicit argument of Boil is water).\n\np3 Eq.3: Isn't c3*0 always a vector of zeros?\n\nYes, we included this option in the choice distribution as an easy short-circuit for the model to choose to include no entities in a particular step. \n\np4 Eq.6: W4 is an order-3 tensor, correct?\n\nYes, W_4 is a bilinear projection tensor between the action embedding and the entity embedding. We’ve clarified this in the new version of the paper\n\np4 Eq.8: What is YC and WC here and what are their dimensions? I am confused by the softmax, as my understanding (from reading the paragraph on the Action Selection Loss on p.5) was that the expression in the softmax here is a scalar (as it is done for every possible action), so this should be a sigmoid to allow for multiple actions to attain a probability of 1?\n\nThe softmax here predicts the end state for each state change in the lexicon. Each state change is predicted individually, so Y_c corresponds to the end state being predicted for an individual state change. W_c corresponds to a projection for each individual state change. Each state predictor is a separate multi-class classifier that predicts the end state of the entity from the output of the action applicator. These predictors are trained using the State Change loss in section 5. Actions are selected by the sigmoid in Equation 1.\n\np6: \"We encode these vectors using a bidirectional GRU\" – I think you composing a fixed-dimensional vector from the entity vectors? What's eI?\n\neI is the concatenation of final time step hidden states from encoding the entity state vectors in both directions using a bidirectional GRU.\n\np7: For which statement is (Kim et al. 2016) the reference? Surely, they did not invent the Hadamard product.\n\nKim et al. used the Hadamard product to jointly project two input representation in multimodal learning. We used their citation as a motivation for our decision to jointly project signal from the entity state vectors and word context representation. We’ve removed the citation to get rid of this ambiguity.\n",
"\n--- Learning Causality-aware Action Embeddings ---\nWe apologize for the confusion about the action selector pretraining. What we meant in this case is that the MLP used to select action embeddings is pretrained. The action embeddings themselves are learned jointly with the entity selector, the simulation module, state change predictors and sentence encoder. \n\nWe included Table 5 to show our action embeddings model semantic similarity between real-world actions. While word2vec embeddings would, no doubt, capture lexical semantics between these actions, the neural process network learns commonalities between actions that aren’t as extractable with a word2vec model. Looking at the action embeddings for ``bake”, ``boil”, and ``pour”, for example, we list the cosine similarities between pairs below:\n\nSkipgram:\nboil - bake → 0.329\nboil - pour → 0.548\n\nNPN:\nboil - bake → 0.687\nboil - pour → -0.119\n\nThe NPN learns action embedding representations based on the state changes those actions induce as opposed to the local context windows around the mentions of the action in text, thereby encoding different semantics in the learned representation. While we did not use pretrained skipgram embeddings to initialize the action embeddings in our work, it is possible that including them when training our model might even lead to better results on our task as the action embeddings could encode elements of both lexical (word2vec) and frame (NPN) semantics. Conversely, we would argue that using only pretrained action embeddings from a word2vec model with no additional training would cause the bilinear matrix from the simulation module to have to learn the simulation mapping functions on its own, which would make the model less expressive. We will include both additional ablations in our final paper.\n\nFor the moment, the action selector learns from distant supervision (string matching in each sentence is used to extract verb mention(s) as labels), but the model is designed to generalize beyond this signal. For example, in the sentence “Boil the water in the pot”, the model is designed to be able to select a composite action that includes an action such as f_put because boiling water involves moving the location of water to the pot. For the moment, we initialize a single action embedding for each verb in the lexicon and let the model learn to map sentences to a mask over these action embeddings. We agree with the reviewer, however, that it would be an interesting investigation to make the action embeddings ``implicit,” letting the model learn to select combinations of elementary actions. This approach is one of our current avenues of future work and could have the effect of generalizing the model similarly to the un-tied version of the REN. \n\nThe reviewer makes a good point about including an NER baseline in the evaluation and we will include it in the final paper. We don’t anticipate the performance being much stronger than the GRU baseline, however, since current NER systems can only identify entities that are directly mentioned in the text, thereby missing elided, coreferent and composite mentions.\n",
"\n--- Architectures with modular prior knowledge representations --- \nIt is correct that our method assumes that predefined sets of entities, actions, and their causal effects are given before initializing the simulation environment. One motivation behind this design choice is to investigate more explicit and modular representations of the world (i.e., entities, actions, and their causal effects), abstracting away from specific words that appeared in the input text. We postulated that this modular architecture would better support integration of prior knowledge about actions and their causal effects, which can be viewed as part of the common sense knowledge people start with that biases how they read and interpret text. We agree with the reviewer that an interesting future research direction would be fully automatic acquisition of ontological knowledge, which we felt was beyond the scope of this paper. \n\n--- Mostly automatic acquisition of prior knowledge --- \nWe also wonder whether there might have been slight confusion about how we acquire the predefined sets of entities, actions, and their causal effects. Importantly, for the training set, we acquire entities and actions automatically from the training corpus. We manually annotated entities and actions only for the purpose of evaluation, but do not use them during training. It is correct that we manually curate the handful of dimensions of action causality, however, primarily because there does not seem to be an easy way to acquire them automatically.\n",
"We thank the reviewer for their positive feedback. We share the same excitement about the potential for knowledge-guided architectures that simulate world dynamics.\n\nWe’ve edited the paper to show more analysis examples. We’d originally shown examples that presented interesting case studies on the model’s capabilities. We’ve now added other interesting cases that the model fails to handle, but that future simulators would need to capture to correctly model the domain.\n\n--- Intermediate Losses for learning with Distant Supervision ---\nThanks for the question about the impact of the intermediate modular loss as that was one of the key investigation points of our work: whether a neural network trained with a single loss (with distant supervision) could learn the internal dynamics of the task, or whether adding additional losses as guides (with additional distant supervision) would promote the architected inductive biases. This investigation point is a direct consequence of the fact that we do not assume a manually constructed dataset that provides sufficient annotated labels that support directly learning implied action dynamics. Instead, we make use of naturally existing data as is, and investigate the role of the modular architecture and distantly supervised intermediate losses for learning latent structure. \n\nTo provide more detailed comments about the intermediate loss: without the entity selection and the action selection loss, the model would not learn the necessary bias to use the correct actions and entities in predicting the final states. Pretraining the action selector was also especially useful as it allowed the model to use the correct action embeddings when predicting the state changes that were happening in each step. This allowed errors in predicting the final states to be backpropagated to the correct action embeddings from the start. \n\nWe also think it’s an interesting question to see how many examples the model must see during training to learn to select entities and simulate state changes. We thought about including experiments that randomly dropped a percentage of the training set labels and will add these ablations in the final paper. \n\n--- Generation Modeling Variations ---\nWe appreciate the reviewer’s suggestion for using deep sets to encode the state vectors and agree that it seems like a better modeling fit at an intuitive level. While we did not try deep sets as an encoding method, in our pilot study, we explored several attention mechanisms over both the context words and the entity state vectors, and we found that the simple sequential encoding leads to the best performance, a conclusion that had also been found in prior work (Kiddon et al. 2016). We will look into deep sets as an encoding mechanism and report it in the final paper if helpful.\n"
]
} | {
"paperhash": [
"ji|dynamic_entity_representations_in_neural_language_models",
"chiappa|recurrent_environment_simulators",
"yang|reference-aware_language_models",
"henaff|tracking_the_world_state_with_recurrent_entity_networks",
"neelakantan|learning_a_natural_language_interface_with_neural_programmer",
"kiddon|globally_coherent_text_generation_with_neural_checklist_models",
"liu|jointly_learning_grounded_task_structures_from_language_instruction_and_visual_demonstration",
"gao|physical_causality_of_action_verbs_in_grounded_language_understanding",
"seo|query-reduction_networks_for_question_answering",
"miller|key-value_memory_networks_for_directly_reading_documents",
"tu|modeling_coverage_for_neural_machine_translation",
"neelakantan|neural_programmer:_inducing_latent_programs_with_gradient_descent",
"hill|the_goldilocks_principle:_reading_children's_books_with_explicit_memory_representations",
"kiddon|mise_en_place:_unsupervised_interpretation_of_instructional_recipes",
"oh|action-conditional_video_prediction_using_deep_networks_in_atari_games",
"sukhbaatar|end-to-end_memory_networks",
"wahlstrom|from_pixels_to_torques:_policy_learning_with_deep_dynamical_models",
"kingma|adam:_a_method_for_stochastic_optimization",
"weston|memory_networks",
"sutskever|sequence_to_sequence_learning_with_neural_networks",
"bahdanau|neural_machine_translation_by_jointly_learning_to_align_and_translate",
"cho|learning_phrase_representations_using_rnn_encoder–decoder_for_statistical_machine_translation",
"mikolov|distributed_representations_of_words_and_phrases_and_their_compositionality",
"mori|flow_graph_corpus_from_recipe_texts",
"mikolov|efficient_estimation_of_word_representations_in_vector_space",
"si|unsupervised_learning_of_event_and-or_grammar_and_semantics_from_video",
"|daan_wierstra,_and_shakir_mohamed._recurrent_environment_simulators._corr",
"|neural_checklist_we_use_a_pretained_model",
"|both_encoders_and_the_decoder_are_single_layer._the_learning_rate_is_0.0003_initially_and_is_halved_every_5_epochs._the_model_is_trained_with_the_adam_optimizer",
"|hadamard_product_for_low-rank_bilinear_pooling",
"srivastava|dropout:_a_simple_way_to_prevent_neural_networks_from_overfitting",
"mori|a_machine_learning_approach_to_recipe_text_processing"
],
"title": [
"Dynamic Entity Representations in Neural Language Models",
"Recurrent Environment Simulators",
"Reference-Aware Language Models",
"Tracking the World State with Recurrent Entity Networks",
"Learning a Natural Language Interface with Neural Programmer",
"Globally Coherent Text Generation with Neural Checklist Models",
"Jointly Learning Grounded Task Structures from Language Instruction and Visual Demonstration",
"Physical Causality of Action Verbs in Grounded Language Understanding",
"Query-Reduction Networks for Question Answering",
"Key-Value Memory Networks for Directly Reading Documents",
"Modeling Coverage for Neural Machine Translation",
"Neural Programmer: Inducing Latent Programs with Gradient Descent",
"The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations",
"Mise en Place: Unsupervised Interpretation of Instructional Recipes",
"Action-Conditional Video Prediction using Deep Networks in Atari Games",
"End-To-End Memory Networks",
"From Pixels to Torques: Policy Learning with Deep Dynamical Models",
"Adam: A Method for Stochastic Optimization",
"Memory Networks",
"Sequence to Sequence Learning with Neural Networks",
"Neural Machine Translation by Jointly Learning to Align and Translate",
"Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation",
"Distributed Representations of Words and Phrases and their Compositionality",
"Flow Graph Corpus from Recipe Texts",
"Efficient Estimation of Word Representations in Vector Space",
"Unsupervised learning of event AND-OR grammar and semantics from video",
"Daan Wierstra, and Shakir Mohamed. Recurrent environment simulators. CoRR",
"Neural Checklist We use a pretained model",
"Both encoders and the decoder are single layer. The learning rate is 0.0003 initially and is halved every 5 epochs. The model is trained with the Adam optimizer",
"Hadamard product for low-rank bilinear pooling",
"Dropout: a simple way to prevent neural networks from overfitting",
"A Machine Learning Approach to Recipe Text Processing"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"Yangfeng Ji",
"Chenhao Tan",
"Sebastian Martschat",
"Yejin Choi",
"Noah A. Smith"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Chiappa",
"S. Racanière",
"Daan Wierstra",
"S. Mohamed"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Zichao Yang",
"Phil Blunsom",
"Chris Dyer",
"Wang Ling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Mikael Henaff",
"J. Weston",
"Arthur Szlam",
"Antoine Bordes",
"Yann LeCun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Arvind Neelakantan",
"Quoc V. Le",
"Martín Abadi",
"A. McCallum",
"Dario Amodei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Chloé Kiddon",
"Luke Zettlemoyer",
"Yejin Choi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Changsong Liu",
"Shaohua Yang",
"S. Saba-Sadiya",
"N. Shukla",
"Yunzhong He",
"Song-Chun Zhu",
"J. Chai"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Qiaozi Gao",
"Malcolm Doering",
"Shaohua Yang",
"J. Chai"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Minjoon Seo",
"Sewon Min",
"Ali Farhadi",
"Hannaneh Hajishirzi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Alexander H. Miller",
"Adam Fisch",
"Jesse Dodge",
"Amir-Hossein Karimi",
"Antoine Bordes",
"J. Weston"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Zhaopeng Tu",
"Zhengdong Lu",
"Yang Liu",
"Xiaohua Liu",
"Hang Li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Arvind Neelakantan",
"Quoc V. Le",
"I. Sutskever"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Felix Hill",
"Antoine Bordes",
"S. Chopra",
"J. Weston"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Chloé Kiddon",
"Ganesa Thandavam Ponnuraj",
"Luke Zettlemoyer",
"Yejin Choi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Junhyuk Oh",
"Xiaoxiao Guo",
"Honglak Lee",
"Richard L. Lewis",
"Satinder Singh"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sainbayar Sukhbaatar",
"Arthur Szlam",
"J. Weston",
"R. Fergus"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Niklas Wahlstrom",
"Thomas Bo Schön",
"M. Deisenroth"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Diederik P. Kingma",
"Jimmy Ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Weston",
"S. Chopra",
"Antoine Bordes"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"I. Sutskever",
"O. Vinyals",
"Quoc V. Le"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Dzmitry Bahdanau",
"Kyunghyun Cho",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kyunghyun Cho",
"B. V. Merrienboer",
"Çaglar Gülçehre",
"Dzmitry Bahdanau",
"Fethi Bougares",
"Holger Schwenk",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Tomas Mikolov",
"I. Sutskever",
"Kai Chen",
"G. Corrado",
"J. Dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Shinsuke Mori",
"Hirokuni Maeta",
"Yoko Yamakata",
"Tetsuro Sasada"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Tomas Mikolov",
"Kai Chen",
"G. Corrado",
"J. Dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Zhangzhang Si",
"Mingtao Pei",
"Benjamin Z. Yao",
"Song-Chun Zhu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [],
"affiliation": []
},
{
"name": [],
"affiliation": []
},
{
"name": [],
"affiliation": []
},
{
"name": [
"Nitish Srivastava",
"Geoffrey E. Hinton",
"A. Krizhevsky",
"I. Sutskever",
"R. Salakhutdinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Shinsuke Mori",
"Tetsuro Sasada",
"Yoko Yamakata",
"Koichiro Yoshino"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1708.00781v1",
"1704.02254v2",
"1611.01628",
"1612.03969v3",
"1611.08945v4",
"",
"",
"",
"",
"1606.03126v2",
"1601.04811v6",
"1511.04834v3",
"1511.02301v4",
"",
"1507.08750",
"1503.08895v5",
"1502.02251v3",
"1412.6980v9",
"1410.3916v11",
"1409.3215v3",
"1409.0473v7",
"1406.1078",
"1310.4546v1",
"",
"1301.3781v3",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
[
"background"
],
[
"background"
],
[],
[
"background",
"methodology"
],
[
"methodology"
],
[
"background",
"methodology"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"methodology"
],
[
"methodology"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"background",
"methodology"
],
[
"background",
"methodology"
],
[],
[],
[
"background"
],
[],
[
"background"
],
[],
[],
[],
[
"background",
"methodology"
],
[],
[
"background"
]
],
"isInfluential": [
false,
false,
false,
true,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | null | 87 | 1.264368 | 0.740741 | 0.75 | null | null | null | null | null | rJYFzMZC- |
|
velikovi|graph_attention_networks|ICLR_cc_2018_Conference | 1710.10903v3 | Graph Attention Networks | We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of computationally intensive matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training). | {
"name": [
"petar veličković",
"guillem cucurull",
"arantxa casanova",
"adriana romero",
"pietro liò",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
} | null | [
"Computer Science",
"Mathematics"
] | International Conference on Learning Representations | 2017-10-30 | 44 | 14,605 | null | null | null | null | null | null | null | true | The authors appear to have largely addressed the concerns of the reviewers and commenters regarding related work and experiments. The results are strong, and this will likely be a useful contribution to the graph neural network literature. | {
"review_id": [
"ryFW0bhlM",
"S1vzCb-bz",
"BJth1UKlf"
],
"review": [
{
"title": "title: Good basic idea with several weaknesses in the technical exposition and the experiments",
"paper_summary": null,
"main_review": "main_review: This is a paper about learning vector representations for the nodes of a graph. These embeddings can be used in downstream tasks the most common of which is node classification.\n\nSeveral existing approaches have been proposed in recent years. The authors provide a fair and almost comprehensive discussion of state of the art approaches. There are a couple of exception that have already been mentioned in a comment from Thomas Kipf and Michael Bronstein. A more precise discussion of the differences between existing approaches (especially MoNets) should be a crucial addition to the paper. You provide such a comparison in your answer to Michael's comment. To me, the comparison makes sense but it also shows that the ideas presented here are less novel than they might initially seem. The proposed method introduces two forms of (simple) attention. Nothing groundbreaking here but still interesting enough and well explained. It might also be a good idea to compare your method to something like LLE (locally linear embedding). LLE also learns a weight for each of neighbors of a node and computes the embedding as a weighted average of the neighbor embeddings according to these weights. Your approach is different since it is learned end-to-end (not in two separate steps) and because it is applicable to arbitrary graphs (not just graphs where every node has exactly k neighbors as in LLE). Still, something to relate to. \n\nPlease take a look at the comment by Fabian Jansen. I think he is on to something. It seems that the attention weight (from i to j) in the end is only a normalization operation that doesn't take the embedding of node i into account. \n\nThere are two issues with the experiments.\n\nFirst, you don't report results on Pubmed because your method didn't scale. Considering that Pubmed has less than 20,000 nodes this shows a clear weakness of your approach. You write (in an answer to a comment) that it *should* be parallelizable but somehow you didn't make it work. We have to, however, evaluate the approach on what it is able to do at the moment. Having a complexity that is quadratic in the number of nodes is terrible and one of the major reasons learning with graphs has moved from kernels to neural approaches. While it is great that you acknowledge this openly as a weakness, it is currently not possible to claim that your method scales to even moderately sized graphs. \n\nSecond, the experimental set-up on the Cora and Citeseer data sets should be properly randomized. As Thomas pointed out, for graph data the variance can be quite high. For some split the method might perform really well and less well for others. In your answer titled \"Requested clarifications\" to a different comment you provide numbers randomized over 10 runs. Did you randomize the parameter initialization only or also the the train/val/test splits? If you did the latter, this seems reasonable. In Kipf et al.'s GCN paper this is what was done (not over 100 splits as some other commenter claimed. The average over 100 runs pertained to the ICA method only.) ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Well written paper, lack of novelty",
"paper_summary": null,
"main_review": "main_review: This paper has proposed a new method for classifying nodes of a graph. Their method can be used in both semi-supervised scenarios where the label of some of the nodes of the same graph as the graph in training is missing (Transductive) and in the scenario that the test is on a completely new graph (Inductive).\nEach layer of the network consists of feature representations for all of the nodes in the Graph. A linear transformation is applied to all the features in one layer and the output of the layer is the weighted sum of the transformed neighbours (including the node). The attention logit between node i and its neighbour k is calculated by a one layer fully connected network on top of the concatenation of the transformed representation of node i and transformed representation of the neighbour k. They also can incorporate the multi-head attention mechanism and average/concatenate the output of each head.\n\nOriginality:\nAuthors improve upon GraphSAGE by replacing the aggregate and sampling function at each layer with an attention mechanism. However, the significance of the attention mechanism has not been studied in the experiments. For example by reporting the results when attention is turned off (1/|N_i| for every node) and only a 0-1 mask for neighbours is used. They have compared with GraphSAGE only on PPI dataset. I would change my rating if they show that the 33% gain is mainly due to the attention in compare to other hyper-parameters. [The experiments are now more informative. Thanks]\nAlso, in page 4 authors claim that GraphSAGE is limited because it samples a neighbourhood of each node and doesn't aggregate over all the neighbours in order to keep its computational footprint consistent. However, the current implementation of the proposed method is computationally equal to using all the vertices in GraphSAGE.\n\nPros:\n- Interesting combination of attention and local graph representation learning. \n- Well written paper. It conveys the idea clearly.\n- State-of-the-art results on three datasets.\n\nCons:\n- When comparing with spectral methods it would be better to mention that the depth of embedding propagation in this method is upper-bounded by the depth of the network. Therefore, limiting its adaptability to broader class of graph datasets. \n- Explaining how attention relates to previous body of work in embedding propagation and when it would be more powerful.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Very interesting work, but the graph structure is not fully exploited",
"paper_summary": null,
"main_review": "main_review: The paper introduces a neural network architecture to operate on graph-structured\ndata named Graph Attention Networks.\nKey components are an attention layer and the possibility to learn how to\nweight different nodes in the neighborhood without requiring spectral decompositions\nwhich are costly to be computed.\n\nI found the paper clearly written and very well presented. I want to thank\nthe author for actively participating in the discussions and in clarifying already\nmany of the details that I was missing.\n\nAs also reported in the comments by T. Kipf I found the lack of comparison to previous\nworks on attention and on constructions of NN for graph data are missing.\nIn particular MoNet seems a more general framework, using features to compute node\nsimilarity is another way to specify the \"coordinate system\" for convolution.\nI would argue that in many cases the graph is given and that one would have\nto exploit its structure rather than the simple first order neighbors structure.\n\nI feel, in fact, that the paper deals mainly with \"localized metric-learning\" rather than\nusing the information in the graph itself. There is no\nexplicit usage of the graph beyond the selection of the local neighborhood.\nIn many ways when I first read it I though it would be a modified version of\nmemory networks (which have not been cited). Sec. 2.1 is basically describing\na way to learn a matrix W so that the attention layer produces the weights to be\nused for convolution, or the relative coordinate system, which is to me a\nmemory network like construction, where the memory is given by the neighborhood.\n\nI find the idea to use the multi-head attention very interesting, but one should\nconsider the increase in number of parameters in the experimental section.\n\nI agree that the proposed method is computationally efficient but the authors\nshould keep in mind that parallelizing across all edges involves lot of redundant\ncopies (e.g. in a distributed system) as the neighborhoods highly overlap, at\nleast for interesting graphs.\n\nThe advantage with respect to methods that try to use LSTM in this domain\nin a naive manner is clear, however the similarity function (attention) in this\nwork could be interpreted as the variable dictating the visit ordering.\n\nThe authors seem to emphasize the use of GPU as the best way to scale their work\nbut I tend to think that when nodes have varying degrees they would be highly\nunused. Main reason why they are widely used now is due to the structure in the\nrepresentation of convolutional operations.\nAlso in case of sparse data GPUs are not the best alternative.\n\nExperiments are very well described and performed, however as explained earlier\nsome comparisons are needed.\nAn interesting experiment could be to use the attention weights as adjacency\nmatrix for GCN.\n\nOverall I liked the paper and the presentation, I think it is a simple yet\neffective way of dealing with graph structure data. However, I think that in\nmany interesting cases the graph structure is relevant and cannot be used\njust to get the neighboring nodes (e.g. in social network analysis).",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.5555555820465088,
0.6666666865348816
],
"confidence": [
0.75,
0.75,
1
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Further experimental results, and clarification of complexity",
"Summary of revisions made to the paper in the discussion period",
"Thank you!",
"Reply to AnonReviewer3",
"Further baseline experimental results",
"Requested clarifications",
"Reply to AnonReviewer4",
"Relationship to MoNets",
"Relationship to VAIN",
"Utilised splits, and a comment on ELU",
"Reply to AnonReviewer2"
],
"comment": [
"Thank you for your comments and queries on the complexity and experimental setup!\n\nWe fully agree that 100-run performance would give the fairest comparison to the baselines. Accordingly, results after 100 runs of our model largely follow the trend of the 10-run result:\nCora: 83.0 +- 0.7 (maximum 84.3%)\nCiteseer: 72.6 +- 0.8 (maximum 74.2%)\n\nTo highlight: we have used a 1-run result (without any additional runs), rather than the best result, in the original writeup as submitted. Our N-run results already showed it is possible to achieve better single-run results than 83.3% and 74.0%, respectively. \n\nOur particular choice of attentional mechanism, a, is explicitly written out in Equation (3), clarified by the text immediately preceding this Equation, and illustrated by Figure 1 (left). It may be expressed as:\n\na(x, y) = a^T[x||y]\nwhere a is a learnable weight vector, || is concatenation, and ^T is transposition.\n\nThat is, it corresponds to a simple, linear, single-layer MLP with a single output neuron, acting on the concatenated features of the two nodes to compute the attention coefficient---largely similar to the original attention mechanism of Bahdanau et al.\n\nTheoretically, our model needs to compute the attentional coefficients e_{i, j} only across the edges of the graph, i.e. O(|E|) computations of a single-layer MLP overall, which are independent, and thus can be parallelised. This is on par with other baseline techniques (such as GCNs or Chebyshev Nets). Taking into account that we need to perform a matrix multiplication on each node's features (to transform the feature space from F to F' features), we may express the overall computational complexity of a single attention head's computations as O(|V| x F x F' + |E| x F'), where F is the input feature count, and F' the output feature count---keeping in mind that many of these computations are trivially parallelisable on a GPU.\n\nThe P(N, 2) or C(N, 2) values mentioned in your comment would correspond to a dense graph (E ~ V^2), where O(V^2) complexity is unavoidable regardless of which graph technique is selected.\n\nUnfortunately, even though the softmax computation on every node should be trivially parallelisable, we were unable to make advantage of our tensor manipulation framework to achieve this parallelisation, while retaining a favourable storage complexity (as its softmax function is optimised for same-sized vectors). This implied that we had to reuse the technique from the self-attention paper of Vaswani et al., wherein attention is computed over all pairs of nodes, with a bias value of -inf inserted into non-connected pairs of nodes. This required a storage complexity of O(V^2), and caused OOM errors on our GPU when the Pubmed dataset was provided---which is the reason for the lack of results on Pubmed. \n\nWe were, however, able to run our model on the PPI dataset, which has 3x the number of nodes and 18x the number of edges of Pubmed. We were able to do this as PPI is split into 24 disjoint graphs (with test graphs entirely unseen), allowing us to effectively batch the softmax operation. This should still demonstrate evidence of the fact our model is capable of retaining competitive performance when scaling up to larger graphs (and, perhaps more critically, that it is capable of inductive, as well as transductive, generalisation).\n\nWe thank you once again for your comments, and will be sure to include some aspects of the above discussion in a revised version of the paper!",
"We hope that the revisions we have made to the paper have properly addressed the comments of the reviewers as well as other researchers on our work - and that its overall contribution, quality and clarity is now significantly improved! We would like to thank everyone once again for their thoughtful comments on our paper.\n\nWe provide a summary of the changes made to the paper:\n\n* We have been able to implement a sparse version of the GAT layer, allowing us to execute the model on the Pubmed benchmark. We make this clear across the document, wherever we enumerated the datasets under study.\n\n* In Section 1, we have added appropriate references to relevant related work: MoNet, VAIN, neighbourhood attention, locally linear embedding (LLE) and memory networks.\n\n* In response to Fabian Jansen’s comment (and as reiterated by one of the reviewers), we have now inserted a LeakyReLU nonlinearity to our attention mechanism---representing a minimal change from the previous mechanism’s properties, while no longer having spurious weights. Section 2.1 details this change (within Equation 3 and text immediately preceding it, Figure 1, and its caption).\n\n* In Section 2.2, we no longer mention the storage limitation of our model, as we have been successful in addressing it (by implementing a sparse GAT layer). Instead, we mention the limitation of the current sparse matrix multiplication operation with respect to batching. \n\n* In Section 2.2, we incorporate many of the useful comments (from reviewers and other researchers) about the characteristics of our model: time complexity (especially, comparing it to our primary spectral baselines), the effects of multi-head attention on the parameter count, clarifying the computational/performance tradeoffs compared to GraphSAGE, detailing the relationship between GAT and MoNet, an informal assessment of the suitability of GPUs for such computations, and comments about the model’s effective “receptive field” size around each node and the computational redundancies of the model.\n\n* In Section 3.2., we state that we now compare our model with the results reported by the MoNet paper as well. \n\n* In Section 3.3., we corrected two typos (dropout p = 0.6, rather than 0.5; also, our early stopping strategy took into account both the loss and the accuracy), and noted the slight differences in architecture we used for the Pubmed dataset. We also make explicit the inductive learning experiment of a GAT model with attention turned off (as recommended by one of the reviewers).\n\n* In Section 3.4., we now report the new results of all models considered, after 100 runs for transductive tasks (for fairly comparing against the work of Kipf and Welling), and 10 runs for inductive tasks. We also provide the best 100-run transductive results we were able to obtain with a GCN model computing 64 features (with ReLU or ELU activation), and the best inductive result we were able to obtain with GraphSAGE (by only changing its architecture, and not the sampling strategy), as well as the 10-run result of the aforementioned inductive GAT model with attention turned off (as a comparison to a GCN-like model computing the same number of features). These results are now all enumerated in Tables 2 and 3, and are discussed appropriately in the main text body. The tables’ captions have been expanded to make the result presentation more clear as well.\n",
"Thank you very much for spotting this! We have now updated our method to make advantage of all the weights (by applying a simple nonlinearity to the output before normalisation), and will be sure to acknowledge you in the final version of our paper!",
"Thank you very much for your detailed review! Please refer to our global comment above for a list of all revisions we have applied to the paper---we are hopeful that they have addressed your comments appropriately.\n\nFabian has indeed correctly identified that half of our attention weights were spurious. We have now rectified this by applying a simple nonlinearity (the LeakyReLU) prior to normalising, and anticipated that its application would provide better performance to the model on the PPI dataset (which has a large number of training nodes). Indeed, we noticed no discernible change on Cora and Citeseer, but an increase in F1-score on PPI (now at 0.973 +- 0.002 after 10 runs; previously, as given in our reply to one of the comments below, it was 0.952 +- 0.006). The new results may be found in Tables 2 and 3.\n\nIn the meantime, we have been successful at leveraging TensorFlow’s sparse_softmax operation, and produced a sparsified version of the GAT layer. We are happy to provide results on Pubmed, and they are now given in the revised version of the paper (see Table 2 for a summary). We were able to match state-of-the-art level performance of MoNet and GCN (at 79.0 +- 0.3% after 100 runs). Similarly to the MoNet paper authors, we had to revise the GAT architecture slightly to accommodate Pubmed’s extremely small training set size (of 60 examples), and this is clearly remarked in our experimental setup (Section 3.3).\n\nFinally, quoting directly from the work of Kipf and Welling:\n\n“We trained and tested our model on the same dataset splits as in Yang et al. (2016) and report mean accuracy of 100 runs with random weight initializations.”\n\nThis implies that the splits were not randomised in the result reported by the GCN paper (specifically, the one used to compare with other baseline approaches), but only the model initialisation---and this is exactly what we do as well. We, in fact, use exactly the code provided by Thomas Kipf at https://github.com/tkipf/gcn/blob/master/gcn/utils.py#L24 to load the dataset splits.\n\nWe have added all the required references to MoNet and LLE (and many other pieces of related work) in the revised version (Section 1, paragraphs 6 and 9; also Section 2.2, bullet point 5) - thank you for pointing out LLE to us, which is an interesting and relevant piece of related work!\n\nWe thank you once again for your review, which has definitely helped make our paper’s contributions stronger!\n",
"First of all, thank you very much for your thorough comment and thoughts on the experimental setup! \n\nWe directly quoted back the baseline results originally reported, under the assumption that appropriate hyperparameter optimisation had already been performed on them. However, we have now performed further experiments on the baseline techniques, in line with some of your recommendations, and the results of this study still point to an outperformance by GAT models. We focused on the experiments that were easily runnable without significantly modifying the codebases at https://github.com/tkipf/gcn and https://github.com/williamleif/GraphSAGE. Our findings can be summarised as follows, and will be highlighted in an updated version of the paper:\n\nCora/Citeseer: We have trained the GCN and Chebyshev (K = 2 and K = 3) models, with a hidden size of 64, with ReLU and ELU activations, for 10 runs each. Note that we did not need to add an additional input linear layer (as suggested by the comment), given that the code at https://github.com/tkipf/gcn/blob/master/gcn/layers.py#L176 already does this.\n\nThe best-performing models achieved the following mean +- std results:\n\nCora: 81.5 +- 0.7 (Cheby2 ReLU)\nCiteseer: 71.0 +- 0.3 (Cheby3 ELU and GCN ReLU)\n\nThese results are still outperformed by both our model's single-run performance (as in our paper) and 10-run performance (as in our reply to a previous comment below).\n\nPPI: Firstly, we would like to note that our model actually considers three-hop neighbourhoods (rather than four), and that the GraphSAGE models feature skip connections---in fact, our model only has one skip connection in total whereas GraphSAGE has a skip connection for every aggregation layer (the concat operation in Line 5 of Algorithm 1 in https://arxiv.org/abs/1706.02216). The authors of GraphSAGE have, in fact, highlighted that this skip connection was critical to their performance gains.\n\nIn line with this, we have tested a wide range of larger GraphSAGE models with three aggregation layers, with both ReLU and ELU activations, spanning feature counts up to 1024. Specially, for the third layer we focused on feature counts of 121 and 726, as our GAT model’s final aggregation layer also acts as a classification layer, computing 6 * 121 features which are then pointwise-averaged. Some of these combinations resulted in OOM errors, with the best performing one being a GraphSAGE-LSTM model computing [512, 512, 726] features, with 128 features being used for aggregating neighbourhoods, using the ELU activation. This approach achieved a micro-F1 score of 0.648 on PPI. We have found it beneficial to let the model train for more epochs compared to the original authors' work, and were able to reach a maximal test micro-F1 score of 0.768 after doing so.\n\nThis is still outperformed by a significant margin by both our single-model result (reported in the paper) and our 10-run result (reported in a reply to a previous comment below).\n\nFinally, as pointed out by the comment, we report that, for a pre-trained GAT model on Cora, the mean attentional coefficients in the hidden layer (across all eight attention heads) are 0.275 for the self-edge and 0.185 for the neighbourhood edges.",
"Thank you very much for your comment - we acknowledge that this detail about our experimental setup was not sufficiently clear in the submitted version and are more than happy to address it appropriately in a subsequent revision.\n\nWe have picked the best hyperparameter configuration considering the validation score on both Cora and PPI, and then reused the Cora architectural hyperparameters on Citeseer. Once the hyperparameters were in place, the early-stopped models were then evaluated on the test set once, and the obtained results are the ones reported in the paper.\n\nWe agree that reporting the averaged model performance would be useful, and we will do this in an updated version of the paper. The results after 10 runs of the same model with different random seeds are (with highlighted standard deviations):\n\nCora: 83.0 +- 0.6 (with a maximum of 83.9%)\nCiteseer: 72.7 +- 0.7 (with a maximum of 74.2%)\nPPI: 0.952 +- 0.006 (with a maximum of 0.966) \n\nThese correspond to state-of-the-art results across all three datasets.",
"We would like to thank you for the comprehensive review! Please refer to our global comment above for a list of all revisions we have applied to the paper---we are hopeful that they have addressed your comments appropriately.\n\nPrimarily, thank you for suggesting the constant-attention experiment (with 1/|Ni| coefficients)! This not only directly evaluates the significance of the attention mechanism on the inductive task, but allows for a comparison with a GCN-like inductive structure. We have successfully shown a benefit of using attention:\n\nThe Const-GAT model achieved 0.934 +- 0.006 micro-F1;\nThe GAT model achieved 0.973 +- 0.002 micro-F1.\n\nWhich demonstrates a clear positive effect of using an attention mechanism (given that all other architectural and training properties are kept fixed across the two models). These results are clearly communicated in our revised paper now (Section 3.3 introduces the experiment in the “Inductive learning” paragraph, while the results are outlined in Table 3 and discussed in Section 3.4, paragraph 4).\n\nOur intention was not to imply that our method is computationally more efficient than GraphSAGE---only that GraphSAGE’s design decisions (sampling subsets of neighbourhoods) have potentially limiting effects on its predictive power. We have rewrote bullet point 4 in Section 2.2, to hopefully communicate this better.\n\nLastly, we make explicit that the depth of our propagation is upper-bounded by network depth in Section 2.2, paragraph 2. We remark that GCN-like models suffer from the same issue, and that skip connections (or similar constructs) may be readily used to effectively increase the depth to desirable levels. The primary benefit of leveraging attention, as opposed to prior approaches to graph-structured feature aggregation, is being able to (implicitly) assign different importances to different neighbours, while simultaneously generalising to a wide range of degree distributions---these differences are stated in our paper in various locations (e.g. Section 1, paragraph 8; Section 2.2, bullet point 2).\n\nWe thank you once again for your review, which has definitely helped make our paper’s contributions stronger!\n",
"Thank you very much for your comment, and pointing us to this work! MoNets are definitely a highly relevant piece of related work to ours, and therefore they will receive appropriate treatment and a citation in the subsequent revision of our paper.\n\nWe find that our work can indeed be reformulated as a particular case of the MoNet framework. Namely, setting the pseudo-coordinate function to be \n\nu(x, y) = f(x) || f(y)\n(where f(x) represent (potentially MLP-transformed) features of node x, and || is concatenation) \n\nand the weight function to be \n\nw_j(u) = softmax(MLP(u))\n(with the softmax performed over the entire neighbourhood of a node)\n\nwould make the patch operator similar to ours. \n\nThis could be interpreted as a way of integrating the ideas of self-attentional interfaces (such as the work of Vaswani et al.: https://arxiv.org/abs/1706.03762 ) into the patch-operator framework presented by MoNet. Specially, and in comparison to the previously specified MoNet frameworks, our model uses node features for similarity computations, rather than the node's structural properties (such as their degrees in the graph). This, in combination with using a multilayer perceptron for computing the attention coefficients, allows the network more freedom in the way it chooses to express similarities between different nodes in the graph, irrespective of the local topological properties. The addition of the softmax function ensures that these coefficients will be well-behaved (and potentially probabilistically interpretable).\n\nLastly, our work also features a few stabilising additions to the attention model (to better cope with the smaller training set sizes), such as applying dropout on the computed attention coefficients, exposing the network to a stochastically sampled neighbourhood on every iteration. Such regularisation techniques might be harder to interpret or justify when structural properties are used as pseudo-coordinates, as stochastically dropping neighbours changes e.g. the node degrees.\n\nTo avoid any potential confusion for other readers of this discussion, we would like to also highlight that the arXiv link for MoNets that we referred to is: https://arxiv.org/pdf/1611.08402.pdf ",
"Thank you for the positive feedback, as well as bringing your paper to our attention! We have found it to be very interesting related work, and will be sure to cite it in a subsequent version of our paper (most likely alongside our existing citation of the work of Santoro et al.: https://arxiv.org/abs/1706.01427 ). We highlight a few comparisons between our approaches that are worth mentioning below.\n\nWe compute attention coefficients using an edge-wise mechanism, rather than a node-wise mechanism followed by an edge-wise distance metric. This is suitable for a graph setting (with neighbourhoods specified by the graph structure), because we can only evaluate this mechanism across the edges that are in the graph (easing the computational load). In a multi-agent setting (as the one explored by your paper), there may not be an immediately-obvious such structure, and this is why one has to resort to specifying interactions across all pairs of agents (at least initially, before the kind of pruning by way of k-NN could be performed). As we focus on making per-node predictions in graphs, we also found it useful for a node to attend over its own features, which your proposed model explicitly disallows. Our work also features a few stabilising additions to the attention model (to better cope with the smaller training set sizes), such as multi-head attention and dropout on the computed attention coefficients.",
"Thank you for the kind feedback, the plethora of useful related work, and the queries!\n\nWe have already noted the relationship of our work to MoNets and VAIN (as given in our replies to the authors below). The work on Neighbourhood attention is also relevant, and will also be cited appropriately alongside the related work by Santoro et al. (which we already cited in the original version). Also, the improved neighbourhood attention might hold interesting future work avenues (such as introducing an edge-wise 'message passing' network whose outputs one can attend over).\n\nWe have utilised exactly the same training/validation/testing splits for Cora and Citeseer as the ones used in Kipf & Welling. This information should be already highlighted in the description of our experimental setup. In fact, for extracting the dataset we use exactly the code provided at: https://github.com/tkipf/gcn/blob/master/gcn/utils.py\n\nWe have found early on in our experiments that the properties of the ELU function are convenient for simplifying the optimisation process of our method - reducing the amount of effort invested in our hyperparameter search.",
"First of all, thank you very much for your thorough review, and for the variety of useful pointers within it! Please refer to our global comment above for a list of all revisions we have applied to the paper---we are hopeful that they have addressed your comments appropriately.\n\nWe have now added all the references to attention-like constructions (such as MoNet and neighbourhood attention) to our related work, as well as memory networks (see Section 1, paragraphs 6 and 9; also Section 2.2, bullet point 5). We fully agree with your comments about the increase in parameter count with multi-head attention, computational redundancy, and comparative advantages of GPUs in this domain, and have explicitly added them as remarks to our model’s analysis (in Section 2.2, bullet point 1 and paragraph 2). \n\nWhile we agree that the graph structure is given in many interesting cases, in our approach we specifically sought to produce an operator explicitly capable of solving inductive problems (which appear often, e.g., in the biomedical domain, where the method needs to be able to generalise to new structures). A potential way of reconciling this when a graph structure is provided is to combine GAT-like and spectral layers in the same architecture.\n\nFurther experiments (as discussed by us in all previous comments) have also been performed and are now explicitly listed in the paper’s Results section (please see Tables 2 and 3 for a summary). We have also attempted to use the GAT coefficients as the aggregation matrix for GCNs (both in an averaged and multi-head manner)---but found that there were no clear performance changes compared to using the Laplacian.\n\nWe thank you once again for your review, which has definitely helped make our paper’s contributions stronger!"
]
} | {
"paperhash": [
"zitnik|predicting_multicellular_function_through_multi-layer_tissue_networks",
"denil|programmable_agents",
"hoshen|vain:_attentional_multi-agent_predictive_modeling",
"vaswani|attention_is_all_you_need",
"hamilton|inductive_representation_learning_on_large_graphs",
"santoro|a_simple_neural_network_module_for_relational_reasoning",
"duan|one-shot_imitation_learning",
"lin|a_structured_self-attentive_sentence_embedding",
"jégou|the_one_hundred_layers_tiramisu:_fully_convolutional_densenets_for_semantic_segmentation",
"monti|geometric_deep_learning_on_graphs_and_manifolds_using_mixture_model_cnns",
"gehring|a_convolutional_encoder_model_for_neural_machine_translation",
"kipf|semi-supervised_classification_with_graph_convolutional_networks",
"defferrard|convolutional_neural_networks_on_graphs_with_fast_localized_spectral_filtering",
"niepert|learning_convolutional_neural_networks_for_graphs",
"yang|revisiting_semi-supervised_learning_with_graph_embeddings",
"cheng|long_short-term_memory-networks_for_machine_reading",
"he|deep_residual_learning_for_image_recognition",
"clevert|fast_and_accurate_deep_network_learning_by_exponential_linear_units_(elus)",
"li|gated_graph_sequence_neural_networks",
"atwood|diffusion-convolutional_neural_networks",
"duvenaud|convolutional_networks_on_graphs_for_learning_molecular_fingerprints",
"henaff|deep_convolutional_networks_on_graph-structured_data",
"kingma|adam:_a_method_for_stochastic_optimization",
"weston|memory_networks",
"bahdanau|neural_machine_translation_by_jointly_learning_to_align_and_translate",
"cho|learning_phrase_representations_using_rnn_encoder–decoder_for_statistical_machine_translation",
"perozzi|deepwalk:_online_learning_of_social_representations",
"bruna|spectral_networks_and_locally_connected_networks_on_graphs",
"glorot|understanding_the_difficulty_of_training_deep_feedforward_neural_networks",
"sen|collective_classification_in_network_data",
"weston|deep_learning_via_semi-supervised_embedding",
"belkin|manifold_regularization:_a_geometric_framework_for_learning_from_labeled_and_unlabeled_examples",
"gori|a_new_model_for_learning_in_graph_domains",
"subramanian|from_the_cover:_gene_set_enrichment_analysis:_a_knowledge-based_approach_for_interpreting_genome-wide_expression_profiles",
"zhu|combining_active_learning_and_semi-supervised_learning_using_gaussian_fields_and_harmonic_functions",
"lu|link-based_classification",
"roweis|nonlinear_dimensionality_reduction_by_locally_linear_embedding.",
"frasconi|a_general_framework_for_adaptive_processing_of_data_structures",
"hochreiter|long_short-term_memory",
"sperduti|supervised_neural_networks_for_the_classification_of_structures",
"|tensorflow:_large-scale_machine_learning_on_heterogeneous_systems",
"srivastava|dropout:_a_simple_way_to_prevent_neural_networks_from_overfitting",
"scarselli|the_graph_neural_network_model",
"sen|collective_classi!cation_in_network_data",
"maaten|visualizing_data_using_t-sne",
"|tensorflow:_large-scale_machine_learning_on_heterogeneous_systems"
],
"title": [
"Predicting multicellular function through multi-layer tissue networks",
"Programmable Agents",
"VAIN: Attentional Multi-agent Predictive Modeling",
"Attention is All you Need",
"Inductive Representation Learning on Large Graphs",
"A simple neural network module for relational reasoning",
"One-Shot Imitation Learning",
"A Structured Self-attentive Sentence Embedding",
"The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation",
"Geometric Deep Learning on Graphs and Manifolds Using Mixture Model CNNs",
"A Convolutional Encoder Model for Neural Machine Translation",
"Semi-Supervised Classification with Graph Convolutional Networks",
"Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering",
"Learning Convolutional Neural Networks for Graphs",
"Revisiting Semi-Supervised Learning with Graph Embeddings",
"Long Short-Term Memory-Networks for Machine Reading",
"Deep Residual Learning for Image Recognition",
"Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)",
"Gated Graph Sequence Neural Networks",
"Diffusion-Convolutional Neural Networks",
"Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"Deep Convolutional Networks on Graph-Structured Data",
"Adam: A Method for Stochastic Optimization",
"Memory Networks",
"Neural Machine Translation by Jointly Learning to Align and Translate",
"Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation",
"DeepWalk: online learning of social representations",
"Spectral Networks and Locally Connected Networks on Graphs",
"Understanding the difficulty of training deep feedforward neural networks",
"Collective Classification in Network Data",
"Deep learning via semi-supervised embedding",
"Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples",
"A new model for learning in graph domains",
"From the Cover: Gene set enrichment analysis: A knowledge-based approach for interpreting genome-wide expression profiles",
"Combining active learning and semi-supervised learning using Gaussian fields and harmonic functions",
"Link-Based Classification",
"Nonlinear dimensionality reduction by locally linear embedding.",
"A general framework for adaptive processing of data structures",
"Long Short-Term Memory",
"Supervised neural networks for the classification of structures",
"TensorFlow: Large-scale machine learning on heterogeneous systems",
"Dropout: a simple way to prevent neural networks from overfitting",
"The Graph Neural Network Model",
"Collective Classi!cation in Network Data",
"Visualizing Data using t-SNE",
"TensorFlow: Large-scale machine learning on heterogeneous systems"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"M. Zitnik",
"J. Leskovec"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Misha Denil",
"Sergio Gomez Colmenarejo",
"Serkan Cabi",
"D. Saxton",
"Nando de Freitas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yedid Hoshen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ashish Vaswani",
"Noam M. Shazeer",
"Niki Parmar",
"Jakob Uszkoreit",
"Llion Jones",
"Aidan N. Gomez",
"Lukasz Kaiser",
"Illia Polosukhin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"William L. Hamilton",
"Z. Ying",
"J. Leskovec"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Adam Santoro",
"David Raposo",
"D. Barrett",
"Mateusz Malinowski",
"Razvan Pascanu",
"P. Battaglia",
"T. Lillicrap"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yan Duan",
"Marcin Andrychowicz",
"Bradly C. Stadie",
"Jonathan Ho",
"Jonas Schneider",
"I. Sutskever",
"P. Abbeel",
"Wojciech Zaremba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Zhouhan Lin",
"Minwei Feng",
"C. D. Santos",
"Mo Yu",
"Bing Xiang",
"Bowen Zhou",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Jégou",
"M. Drozdzal",
"David Vázquez",
"Adriana Romero",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Federico Monti",
"D. Boscaini",
"Jonathan Masci",
"E. Rodolà",
"Jan Svoboda",
"M. Bronstein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jonas Gehring",
"Michael Auli",
"David Grangier",
"Yann Dauphin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Thomas Kipf",
"M. Welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Defferrard",
"X. Bresson",
"P. Vandergheynst"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Mathias Niepert",
"Mohamed Ahmed",
"Konstantin Kutzkov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Zhilin Yang",
"William W. Cohen",
"R. Salakhutdinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jianpeng Cheng",
"Li Dong",
"Mirella Lapata"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kaiming He",
"X. Zhang",
"Shaoqing Ren",
"Jian Sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Djork-Arné Clevert",
"Thomas Unterthiner",
"Sepp Hochreiter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yujia Li",
"Daniel Tarlow",
"Marc Brockschmidt",
"R. Zemel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"James Atwood",
"D. Towsley"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Duvenaud",
"D. Maclaurin",
"J. Aguilera-Iparraguirre",
"Rafael Gómez-Bombarelli",
"Timothy D. Hirzel",
"Alán Aspuru-Guzik",
"Ryan P. Adams"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Mikael Henaff",
"Joan Bruna",
"Yann LeCun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Diederik P. Kingma",
"Jimmy Ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Weston",
"S. Chopra",
"Antoine Bordes"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Dzmitry Bahdanau",
"Kyunghyun Cho",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kyunghyun Cho",
"B. V. Merrienboer",
"Çaglar Gülçehre",
"Dzmitry Bahdanau",
"Fethi Bougares",
"Holger Schwenk",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Bryan Perozzi",
"Rami Al-Rfou",
"S. Skiena"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Joan Bruna",
"Wojciech Zaremba",
"Arthur Szlam",
"Yann LeCun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Xavier Glorot",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Prithviraj Sen",
"Galileo Namata",
"M. Bilgic",
"L. Getoor",
"Brian Gallagher",
"Tina Eliassi-Rad"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Weston",
"F. Ratle",
"H. Mobahi",
"R. Collobert"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Belkin",
"P. Niyogi",
"Vikas Sindhwani"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Gori",
"G. Monfardini",
"F. Scarselli"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Subramanian",
"P. Tamayo",
"V. Mootha",
"Sayan Mukherjee",
"B. Ebert",
"Michael A. Gillette",
"A. Paulovich",
"S. Pomeroy",
"T. Golub",
"E. Lander",
"J. Mesirov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Xiaojin Zhu",
"Zoubin Ghahramani",
"J. Lafferty"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Qing Lu",
"L. Getoor"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Roweis",
"L. Saul"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Frasconi",
"M. Gori",
"A. Sperduti"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sepp Hochreiter",
"J. Schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Sperduti",
"A. Starita"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [
"Nitish Srivastava",
"Geoffrey E. Hinton",
"A. Krizhevsky",
"I. Sutskever",
"R. Salakhutdinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"F. Scarselli",
"M. Gori",
"A. Tsoi",
"M. Hagenbuchner",
"G. Monfardini"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Prithviraj Sen",
"Galileo Namata",
"M. Bilgic",
"L. Getoor",
"Brian Gallagher",
"Tina Eliassi-Rad"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"L. Maaten",
"Geoffrey E. Hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
}
],
"arxiv_id": [
"1707.04638",
"1706.06383v1",
"1706.06122",
"1706.03762v7",
"1706.02216v4",
"1706.01427v1",
"1703.07326",
"1703.03130",
"1611.09326v3",
"1611.08402v3",
"1611.02344v3",
"1609.02907",
"1606.09375v3",
"1605.05273v4",
"1603.08861",
"1601.06733",
"1512.03385v1",
"1511.07289v5",
"1511.05493v4",
"1511.02136",
"1509.09292v2",
"1506.05163",
"1412.6980v9",
"1410.3916v11",
"1409.0473v7",
"1406.1078",
"1403.6652v2",
"1312.6203v3",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
[
"methodology"
],
[
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"methodology",
"background"
],
[
"result",
"methodology",
"background"
],
[
"methodology",
"background"
],
[
"result",
"methodology"
],
[
"result",
"methodology"
],
[
"methodology"
],
[],
[
"background"
],
[
"methodology",
"background"
],
[],
[
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"methodology",
"background"
],
[
"methodology",
"background"
],
[],
[
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"methodology",
"background"
],
[
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"methodology"
],
[],
[],
[],
[
"background"
],
[
"methodology"
],
[
"methodology"
],
[]
],
"isInfluential": [
false,
false,
false,
false,
true,
false,
false,
false,
false,
true,
false,
true,
true,
false,
false,
false,
true,
false,
false,
false,
true,
false,
true,
false,
true,
false,
false,
false,
true,
true,
true,
true,
false,
false,
true,
true,
false,
false,
true,
false,
false,
false,
false,
true,
false,
false
]
} | null | 88 | 165.965912 | 0.555556 | 0.833333 | null | null | null | null | null | rJXMpikCZ |
|
yazici|autoregressive_generative_adversarial_networks|ICLR_cc_2018_Conference | Autoregressive Generative Adversarial Networks | Generative Adversarial Networks (GANs) learn a generative model by playing an adversarial game between a generator and an auxiliary discriminator, which classifies data samples vs. generated ones. However, it does not explicitly model feature co-occurrences in samples. In this paper, we propose a novel Autoregressive Generative Adversarial Network (ARGAN), that models the latent distribution of data using an autoregressive model, rather than relying on binary classification of samples into data/generated categories. In this way, feature co-occurrences in samples can be more efficiently captured. Our model was evaluated on two widely used datasets: CIFAR-10 and STL-10. Its performance is competitive with respect to other GAN models both quantitatively and qualitatively. | {
"name": [],
"affiliation": []
} | null | [
"Generative Adversarial Networks",
"Latent Space Modeling"
] | null | 2018-02-15 22:29:40 | 25 | null | null | null | null | null | null | null | null | false | The reviewers (all experts in this area) appreciated the novelty of the idea, though they felt that the experimental results (samples and Inception scores) did not provide convincing evidence of the value of this method over already established techniques. The authors responded to the concerns but were not able to address the issue of evaluation due to time constraints. The idea is likely sound, but the evaluation does not meet the bar; it may make a good contribution as a workshop paper. | {
"review_id": [
"S1klbTulM",
"S13bO3cez",
"BkuDb6tgf"
],
"review": [
{
"title": "title: interesting gan architecture, evaluations limited",
"paper_summary": null,
"main_review": "main_review: This paper proposes a new GAN model whereby the discriminator (rather than being a binary classifier) consists of an encoding network followed by an autoregressive model on the encoded features. The discriminator is trained to maximize the probability of the true data and minimize the probability of the generated samples. The authors also propose a version that combines this autoregressive discriminator with a patchGAN discriminator. The authors train this model on cifar10 and stl10 and show reasonable generations and inception scores, comparing the latter with existing approaches. \n\nPros: This discriminator architecture is well motivated, intuitive and novel. the samples are good (though not better than existing approaches as far as I can tell). The paper is also well written and easy to read.\n\nCons: As is commonly the case with GAN models, it is difficult to assess the advantage of this approach over exiting techniques. The samples generated form this model look fine, but not better than existing samples. The inception scores are ok, but seem to be outperformed by other models (though this shouldn't necessarily be taken as a critique of the approach presented here as inception scores are an approximation to what we care about and we should not be trying to tune models for better inception scores). \n\nDetailed comments:\n- In terms of experiments, I think think paper is missing the following: (1) An additional dataset -- cifar and stl10 are very similar, a face dataset for example would be good to see and is commonly used in GAN papers. (2) the authors claim their method is stable, so it would be good to see quantitative results backing this claim, i.e. sweeps over hyper-parameters / encoding/generator architectures with evaluations for different settings. \n- the idea of having some form of recurrent (either over channels of spatially) processing in the discriminator seems more general that the specific proposal given here. Could the authors say a bit more about what they think the effects of adding recurrence in the discriminator vs optimizing the likelihood of the features under the autoregressive model?\n\nUltimately, the approach is interesting but there is not enough empirical evaluations.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting idea. Insufficient empirical support.",
"paper_summary": null,
"main_review": "main_review: This paper proposes an alternative GAN formulation that replaces the standard binary classification task in the discriminator with a autoregressive model that attempts to capture discriminative feature dependencies on the true data samples.\n\nSummary assessment:\nThe paper presents a novel perspective on GANs and an interesting conjecture regarding the failure of GANs to capture global consistency. However the experiments do not directly support this conjecture. In addition, both qualitative and quantitative results to not provide significant evidence of the value of this technique over and above the establish methods in the literature.\n\nThe central motivation of the method proposed in the paper, is a conjecture that the lack of global consistency in GAN-generated samples is due to the binary classification formulation of the discriminator. While this is an interesting conjecture, I am somewhat unconvinced that this is indeed the cause of the problem. First, I would argue that other high-performing auto-regressive models such as PixelRNN and PixelCNN also seem to lack global consistency. This observation would seem to violate this conjecture. More importantly, the paper does not show any direct empirical evidence in support of this conjecture. \n\nThe authors make a very interesting observation in their description of the proposed approach. In discussing an initial variant of the model (Eqns. (5) and (6) and text immediately below), the authors state that attempting to maximize the negative log likelihood of the auto-regressive modelling the generated samples results in unstable training. I would like to see more discussion of this point as it could have some bearing on the difficulty of GAN to model sequential data in general. Does the failure occurs because the auto-regressive discriminator is able to \"overfit\" the generated samples?\n\nAs a result of the observed failure of the formulation given in Eqns. (5) and (6), the authors propose an alternative formulation that explicitly removes the negative likelihood maximization for generated samples. As a result the only objective for the auto-regressive model is an attempt to maximize the log-likelihood of the true data. The authors suggest that this should be sufficient to provide a reliable training signal for the generator. It would be useful if the authors showed a representation of these features (perhaps via T-SNE) for both true data and generated samples. \n\nEmpirical results:\nThe authors experiments show samples (qualitative comparison) and inception scores (quantitative comparison) for 3 variants of the proposed model and compare these to methods in the literature. The comparisons show the proposed model preforms well, but does not exceed the performance of many of the existing methods in the literature.\n\nAlso, I fail to observe significantly more global consistency for these samples compared to samples of other SOTA GAN models in the literature. Again, there is no attempt made by the authors to make this direct comparison of global consistency either qualitatively or quantitatively. \n\nMinor comment:\nI did not see where PARGAN was defined. Was this the combination of the Auto-regressive GAN with Patch-GAN?\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Autoregression in feature space",
"paper_summary": null,
"main_review": "main_review: This work attempts to improve the global consistency of samples generated by generative adversarial networks by replacing the discriminator with an autoregressive model in an encoded feature space. The log likelihood of the classification model is then replaced with the log likelihood of the feature space autoregressive model. It's not clear what can be said with respect to the convergence properties of this class of models, and this is not discussed.\n\nThe method is quite similar in spirit to Denoising Feature Matching of Warde-Farley & Bengio (2017), as both estimate a density model in feature space -- this method via a constrained autoregressive model and DFM via an estimator of the score function, although DFM was used in conjunction with the standard criterion whereas this method replaces it. This is certainly worth mentioning and discussing. In particular the section in Warde-Farley & Bengio regarding the feature space transformation of the data density seems quite relevant in this work.\n\nUnfortunately the only quantitative measurements reporter are Inception scores, which is known to be a poor measure (and the scores presented are not particularly high, either); Frechet Inception distance or log likelihood estimates via AIS on some dataset would be more convincing. On the plus side, the authors report an average over Inception scores for multiple runs. On the other hand, it sounds as though the stopping criterion was still qualitative.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.4444444477558136,
0.2222222238779068
],
"confidence": [
0.75,
1,
1
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to AnonReviewer1",
"Response to AnonReviewer3",
"Response to AnonReviewer2 "
],
"comment": [
"Thanks for your review and feedback. PixelRNN and PixelCNN are pixel prediction methods. Since adjacent pixels in images are highly correlated and pixel values can be captured by local image statistics, their model might be using most of their capacity to model local information instead of global. However in our case, autoregressive modeling is in latent space and adjacent features values are less correlated than adjacent pixel values. Also high level abstract representation are globally related such as co-occurrence of different object parts in a scene. As a result, PixelRNN's and PixelCNN's lack of global consistency is not necessarily about autoregressive modeling but more about their pixel level modeling.\n\nMaximizing Eqns. (5) by updating \"R\" parameters are unstable because the second term in the equation is unbounded. We mentioned this in paper but did not go into detail. When the second term is unbounded gradient of it gets way bigger than the first term's gradient. As a result optimization cares mostly about the second term which means it decreases probability of generated feature while avoiding to increase probability of real features. We see this phenomenon exactly in our experiment. After some iterations the error of the first term starts to increase because the gradients cares about the second term. We tried a simple method to overcome this by using margin loss, as in EBGAN, in the second term which makes the second term bounded. However this trick did not provide better results than by simple discarding the second term from R's objective. We do not think it has something to do with sequential data but being unbounded.\n\n\"The authors suggest that this should be sufficient to provide a reliable training signal for the generator\". Empirical evidence, both qualitative and quantitative, shows that this is a reliable training signal for the generator. Even though, the auto-regressor is not adversarial the encoder is adversarial which satisfies the distinguishability of real and fake samples. Our intuition is that as the auto-regressor fits on only real features it discovers feature co-occurrence statistics in real data distribution. Repelling from fake data distribution is not necessary since it is already satisfied by the encoder. As mentioned previously, margin loss in the second term is not better than discarding it totally.\n\nFor Empirical results: We see that we emphasized global inconsistency a lot in the paper however it is just an observation about what is lacking in the current GAN models and why it might be happening. Generic theme of our model is learning a generative model by using feature co-occurrence statistics in real data distribution which is not found in generative distribution. Our C-ARGAN and S-ARGAN can learn both spatial layout and feature layout. Even though our model is not better than other GAN models, its competitive on DCGAN architecture and can be further improved with more advanced autoregressive modeling as we mentioned in the conclusion section.\n\nSorry for PARGAN confusion. It is simply summation of the Auto-regressive GAN with Patch-GAN without any hyperparameters. It will be included into the revision.",
"Thanks for your review and feedback. (1) We have included CelebA dataset at 64x64 resolution with SW-ARGAN objective. (2) We did not claim stability over various architectures or hyperparameters. For setting that we mention in the paper, the method works well without mode collapse or training instability. Also our model is fairly simple and does not use any trick (like in improvedGAN) to improve performance. For your (3) point, our autoregressive modeling can be also modeled with a CNN similar to PixelCNN, so it is not a specific proposal about recurrent modeling. One possible benefit of using autoregression instead of recurrent discriminator is that it takes more bits of information from the objective (similar to EBGAN) instead of single score from the last time step (real/fake score).",
"Thanks for your review and feedback. We will include Denoising Feature Matching to our paper and make a clear comparison. Even though Denoising Feature Matching uses density estimation in the latent space there are major differences which makes learning dynamic of our model totally different then theirs. (i) As you mentioned their method is complementary to GAN objective while our method can be learned standalone. (ii) More importantly their discriminator (encoder + classifier) are trained as in original GAN objective which means that features learned from the data distribution are based on classifier's feedback not on density model's. This crucial difference make both works different than one another. (iii) In our model feature co-occurrences is modeled explicitly. (iv) Motivation for both works are totally different.\n\nUnfortunately, we could not include a second score (FID) into the revision due to time limitations.\n"
]
} | {
"paperhash": [
"berthelot|began:_boundary_equilibrium_generative_adversarial_networks",
"coates|an_analysis_of_single-layer_networks_in_unsupervised_feature_learning",
"dai|figure_8:_generation_for_celeba_dataset_with_64x64_resolution_sw-argan",
"nguyen|dual_discriminator_generative_adversarial_nets",
"dumoulin|adversarially_learned_inference",
"goodfellow|generative_adversarial_nets",
"gulrajani|improved_training_of_wasserstein_gans",
"hoang|multi-generator_generative_adversarial_nets",
"huszár|how_(not)_to_train_your_generative_model:_scheduled_sampling",
"ioffe|batch_normalization:_accelerating_deep_network_training_by_reducing_internal_covariate_shift",
"isola|image-to-image_translation_with_conditional_adversarial_networks",
"krizhevsky|cifar-10_(canadian_institute_for_advanced_research",
"li|precomputed_real-time_texture_synthesis_with_markovian_generative_adversarial_networks",
"liu|deep_learning_face_attributes_in_the_wild",
"mao|multi-class_generative_adversarial_networks_with_the_l2_loss_function",
"mehri|samplernn:_an_unconditional_end-to-end_neural_audio_generation_model",
"radford|unsupervised_representation_learning_with_deep_convolutional_generative_adversarial_networks",
"russakovsky|imagenet_large_scale_visual_recognition_challenge",
"salimans|improved_techniques_for_training_gans",
"schuster|bidirectional_recurrent_neural_networks",
"theis|generative_image_modeling_using_spatial_lstms",
"theis|a_note_on_the_evaluation_of_generative_models",
"oord|pixel_recurrent_neural_networks",
"warde|improving_generative_adversarial_networks_with_denoising_feature_matching",
"junbo|energy-based_generative_adversarial_network"
],
"title": [
"BEGAN: boundary equilibrium generative adversarial networks",
"An analysis of single-layer networks in unsupervised feature learning",
"Figure 8: Generation for CelebA dataset with 64x64 resolution SW-ARGAN",
"Dual Discriminator Generative Adversarial Nets",
"Adversarially Learned Inference",
"Generative adversarial nets",
"Improved Training of Wasserstein GANs",
"Multi-Generator Generative Adversarial Nets",
"How (not) to Train your Generative Model: Scheduled Sampling",
"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"Image-to-Image Translation with Conditional Adversarial Networks",
"Cifar-10 (canadian institute for advanced research",
"Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks",
"Deep learning face attributes in the wild",
"Multi-class generative adversarial networks with the L2 loss function",
"Samplernn: An unconditional end-to-end neural audio generation model",
"Unsupervised representation learning with deep convolutional generative adversarial networks",
"ImageNet Large Scale Visual Recognition Challenge",
"Improved techniques for training gans",
"Bidirectional recurrent neural networks",
"Generative Image Modeling Using Spatial LSTMs",
"A note on the evaluation of generative models",
"Pixel recurrent neural networks",
"Improving generative adversarial networks with denoising feature matching",
"Energy-based generative adversarial network"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"david berthelot",
"tom schumm",
"luke metz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"a coates",
"h lee",
"a y ng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"z dai",
"a almahairi",
"p bachman",
"e hovy",
"a courville"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"t dinh nguyen",
"t le",
"h vu",
"d phung"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"v dumoulin",
"i belghazi",
"b poole",
"o mastropietro",
"a lamb",
"m arjovsky",
"a courville"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ian goodfellow",
"jean pouget-abadie",
"mehdi mirza",
"bing xu",
"david warde-farley",
"sherjil ozair",
"aaron courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"i gulrajani",
"f ahmed",
"m arjovsky",
"v dumoulin",
"a courville"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"q hoang",
"t dinh nguyen",
"t le",
"d phung"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"f huszár"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"s ioffe",
"c szegedy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"p isola",
"j.-y zhu",
"t zhou",
"a a efros"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex krizhevsky",
"vinod nair",
"geoffrey hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"c li",
"m wand"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ziwei liu",
"ping luo",
"xiaogang wang",
"xiaoou tang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xudong mao",
"qing li",
"haoran xie",
"y k raymond",
"zhen lau",
" wang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"soroush mehri",
"kundan kumar",
"ishaan gulrajani",
"rithesh kumar",
"shubham jain",
"jose sotelo",
"aaron courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alec radford",
"luke metz",
"soumith chintala"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"olga russakovsky",
"jia deng",
"hao su",
"jonathan krause",
"sanjeev satheesh",
"sean ma",
"zhiheng huang",
"andrej karpathy",
"aditya khosla",
"michael bernstein",
"alexander c berg",
"li fei-fei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tim salimans",
"ian j goodfellow",
"wojciech zaremba",
"vicki cheung",
"alec radford",
"xi chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"m schuster",
"k k paliwal"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"l theis",
"m bethge"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"l theis",
"a van den oord",
"m bethge"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"aäron van den oord",
"nal kalchbrenner",
"koray kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david warde",
"-farley ",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jake junbo",
"michaël zhao",
"yann mathieu",
" lecun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1703.10717v4",
"",
"",
"1709.03831v1",
"1606.00704v3",
"",
"1704.00028v3",
"1708.02556v4",
"",
"1502.03167v3",
"",
"",
"",
"1411.7766v3",
"",
"",
"1511.06434v2",
"1409.0575v3",
"1606.03498v1",
"",
"1506.03478v2",
"1511.01844v3",
"1601.06759v3",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.37037 | 0.916667 | null | null | null | null | null | rJWrK9lAb |
||
morerio|minimalentropy_correlation_alignment_for_unsupervised_deep_domain_adaptation|ICLR_cc_2018_Conference | 3522489 | 1711.10288 | Minimal-Entropy Correlation Alignment for Unsupervised Deep Domain Adaptation | In this work, we face the problem of unsupervised domain adaptation with a novel deep learning approach which leverages our finding that entropy minimization is induced by the optimal alignment of second order statistics between source and target domains. We formally demonstrate this hypothesis and, aiming at achieving an optimal alignment in practical cases, we adopt a more principled strategy which, differently from the current Euclidean approaches, deploys alignment along geodesics. Our pipeline can be implemented by adding to the standard classification loss (on the labeled source domain), a source-to-target regularizer that is weighted in an unsupervised and data-driven fashion. We provide extensive experiments to assess the superiority of our framework on standard domain and modality adaptation benchmarks. | {
"name": [
"pietro morerio",
"jacopo cavazza",
"vittorio murino"
],
"affiliation": [
{
"laboratory": "Pattern Analysis and Computer Vision (PAVIS)",
"institution": "Istituto Italiano di Tecnologia -Genova",
"location": "{'country': 'Italy'}"
},
{
"laboratory": "Pattern Analysis and Computer Vision (PAVIS)",
"institution": "Istituto Italiano di Tecnologia -Genova",
"location": "{'country': 'Italy'}"
},
{
"laboratory": "Pattern Analysis and Computer Vision (PAVIS)",
"institution": "Istituto Italiano di Tecnologia -Genova",
"location": "{'country': 'Italy'}"
}
]
} | A new unsupervised deep domain adaptation technique which efficiently unifies correlation alignment and entropy minimization | [
"unsupervised domain adaptation",
"entropy minimization",
"image classification",
"deep transfer learning"
] | null | 2018-02-15 22:29:39 | 32 | 147 | 13 | null | null | null | null | null | null | true | This paper presents a nice approach to domain adaptation that improves empirically upon previous work, while also simplifying tuning and learning.
| {
"review_id": [
"r15hYW5gM",
"SkjcYkCgf",
"BkiyM2dgG"
],
"review": [
{
"title": "title: New correlation alignment based domain adaptation method which results in minimal target entropy",
"paper_summary": null,
"main_review": "main_review: Summary:\nThis paper proposes minimal-entropy correlation alignment, an unsupervised domain adaptation algorithm which links together two prior class of methods: entropy minimization and correlation alignment. Interesting new idea. Make a simple change in the distance function and now can perform adaptation which aligns with minimal entropy on target domain and thus can allow for removal of hyperparameter (or automatic validation of correct one).\n\nStrengths\n- The paper is clearly written and effectively makes a simple claim that geodesic distance minimization is better aligned to final performance than euclidean distance minimization between source and target. \n- Figures 1 and 2 (right side) are particularly useful for fast understanding of the concept and main result.\n\n\nQuestions/Concerns:\n- Can entropy minimization on target be used with other methods for DA param tuning? Does it require that the model was trained to minimize the geodesic correlation distance between source and target?\n- It would be helpful to have a longer discussion on the connection with Geodesic flow kernel [1] and other unsupervised manifold based alignment methods [2]. Is this proposed approach an extension of this prior work to the case of non-fixed representations in the same way that Deep CORAL generalized CORAL?\n- Why does performance suffer compared to TRIPLE on the SYN->SVHN task? Is there some benefit to the TRIPLE method which may be combined with the MECA approach?\n\n\t\t\t\t\t\n[1] Boqing Gong, Yuan Shi, Fei Sha, and Kristen Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, 2012.\n\t\t\t\t\t\n[2] Raghuraman Gopalan and Ruonan Li. Domain adaptation for object recognition: An unsupervised approach. In ICCV, 2011. \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: This paper proposes a principled connection between correlation alignment and entropy minimization to achieve a more robust domain adaptation. The authors show the connection between the two approaches within a unified framework. The experimental results support the claims in the paper, and show the benefits over state-of-the-art methods such as DeepCoral. ",
"paper_summary": null,
"main_review": "main_review: The authors propose a novel deep learning approach which leverages on our finding that entropy minimization\nis induced by the optimal alignment of second order statistics between source and target domains. Instead of relying on Euclidean distances when performing the alignment, the authors use geodesic distances which preserve the geometry of the manifolds. Among others, the authors also propose a handy way to cross-validate the model parameters on target data using the entropy criterion. The experimental validation is performed on benchmark datasets for image classification. Comparisons with the state-of-the-art approaches show that the proposed marginally improves the results. The paper is well written and easy to understand.\n\nAs a main difference from DeepCORAL method, this approach relies on the use of geodesic distances when doing the alignment of the distribution statistics, which turns out to be beneficial for improving the network performance on the target tasks. While I don't see this as substantial contribution to the field, I think that using the notion of geodesic distance in this context is novel. The experiments show the benefit over the Euclidean distance when applied to the datasets used in the paper. \n\nA lot of emphasis in the paper is put on the methodology part. The experiments could have been done more extensively, by also providing some visual examples of the aligned distributions and image features. This would allow the readers to further understand why the proposed alignment approach performs better than e.g. Deep Coral.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Need further exploration for the use of entropy to select free parameters; geodesic correlation alignment is a reasonable improvement",
"paper_summary": null,
"main_review": "main_review: This paper improves the correlation alignment approach to domain adaptation from two aspects. One is to replace the Euclidean distance by the geodesic Log-Euclidean distance between two covariance matrices. The other is to automatically select the balancing cost by the entropy on the target domain. Experiments are conducted from SVHN to MNIST and from SYN MNIST to SVHN. Additional experiments on cross-modality recognition are reported from RGB to depth.\n\nStrengths:\n+ It is a sensible idea to improve the Euclidean distance by the geodesic Log-Euclidean distance to better explore the manifold structure of the PSD matrices. \n+ It is also interesting to choose the balancing cost using the entropy on the target. However, this point is worth further exploring (please see below for more detailed comments).\n+ The experiments show that the geodesic correlation alignment outperforms the original alignment method. \n\nWeaknesses: \n- It is certainly interesting to have a scheme to automatically choose the hyper-parameters in unsupervised domain adaptation, and the entropy over the target seems like a reasonable choice. This point is worth further exploring for the following reasons. \n1. The theoretical result is not convincing given it relies on many unrealistic assumptions, such as the null performance degradation under perfect correlation alignment, the Dirac’s delta function as the predictions over the target, etc.\n2. The theorem actually does not favor the correlation alignment over the geodesic alignment. It does not explain that, in Figure 2, the entropy is able to find the best balancing cost \\lamba for geodesic alignment but not for the Euclidean alignment.\n3. The entropy alignment seems an interesting criterion to explore in general. Could it be used to find fairly good hyper-parameters for the other methods? Could it be used to determine the other hyper-parameters (e..g, learning rate, early stopping) for the geodesic alignment? \n4. If one leaves a subset of the target domain out and use its labels for validation, how different would the selected balancing cost \\lambda differ from that by the entropy? \n\n- The cross-modality setup (from RGB to depth) is often not considered as domain adaptation. It would be better to replace it by another benchmark dataset. The Office-31 dataset is still a good benchmark to compare different methods and for the study in Section 5.1, though it is not necessary to reach state-of-the-art results on this dataset because, as the authors noted, it is almost saturated. \n\nQuestion:\n- I am not sure how the gradients were computed after the eigendecomposition in equation (8).\n\n\nI like the idea of automatically choosing free parameters using the entropy over the target domain. However, instead of justifying this point by the theorem that relies on many assumptions, it is better to further test it using experiments (e.g., on Office31 and for other adaptation methods). The geodesic correlation alignment is a reasonable improvement over the Euclidean alignment.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.6666666865348816,
0.7777777910232544,
0.5555555820465088
],
"confidence": [
1,
0.75,
1
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to AnonReviewer2",
"Response to AnonReviewer1 - part 2",
"Response to AnonReviewer1 - part 1",
"Response to AnonReviewer3",
"Comments after reading the rebuttal",
"Github code"
],
"comment": [
"We are thankful for the provided comments and we will respond (A) to each query (Q) in detail.\n\n\nQ 1 - (a) Can entropy minimization on target be used with other methods for DA param tuning? (b) Does it require that the model was trained to minimize the geodesic correlation distance between source and target? \n\nA 1 - (a) Let us point out that we are not minimizing entropy on the target as a regularizing training loss, as previous works did (Tzeng et al. 2015, Haeusser et al. 2017 or Carlucci et al. 2017). For the latter methods, entropy cannot be used as a criterion for parameter tuning, since it is one of the quantities explicitly optimized in the problem. Differently, we obtain the minimum of the entropy as a consequence of an optimal correlation alignment. Such criterion could possibly be used for other methods aiming at source-target distribution alignment. (b) Alignment does not *explicitly* require a geodesic distance. However, since the former must be optimal, it cannot be attained with an Euclidean distance, which is the reason why we propose the log-Euclidean one.\n\n\nQ 2. - It would be helpful to have a longer discussion on the connection with Geodesic flow kernel [1] and other unsupervised manifold based alignment methods [2]. Is this proposed approach an extension of this prior work to the case of non-fixed representations in the same way that Deep CORAL generalized CORAL? \n[1] Boqing Gong, Yuan Shi, Fei Sha, and Kristen Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, 2012.\n[2] Raghuraman Gopalan,, Ruonan Li and Rama Chellappa. Domain adaptation for object recognition: An unsupervised approach. In ICCV, 2011. \n\nA -2 The works [1,2] are kernelized approaches which, by either using Principal Components Analysis [1] or Partial Least Squares [2], a sequence of intermediate embeddings is generated as a smooth transition from the source to the target domain. In [1], such sequence is implicitly computed by means of a kernel function which is subsequently used for classification. In [2], after the source data are projected on hand-crafted intermediate subspaces, classification is performed. \nIn [1] and [2], the necessity for engineering intermediate embeddings is motivated by the need for adapting the fixed input representation so that the domain shift can be solved. As a way to do it, [1] and [2] follow the geodesics on the data manifold. \nIn a very same way, our proposed approach, MECA, follows the geodesics on the manifold (of second order statistics), but, differently, this step is finalized to better guide the feature learning stage. \nFor all these reasons, MECA and [1,2] can be seen as different manners of exploiting geodesic alignment for the sake of domain adaptation.\n\n\nQ 3. - Why does performance suffer compared to TRIPLE on the SYN->SVHN task? Is there some benefit to the TRIPLE method which may be combined with the MECA approach? \n\nA 3 - As we argued in the paper, the performance on SYN to SVHN task is due to the the visual similarity between source and target domain whose relative data distributions are already quite aligned. Also note that TRIPLE already performs better than direct training on the target domain. This could be interpreted as a cue for TRIPLE to perform implicit data augmentation on the source synthetic data (and, indeed, the same could be done in MECA, trying to boost its performance by means of data augmentation). 
However, when more realistic datasets are used as source, such procedure becomes more difficult to be accomplished and that’s why, on all the other benchmarks, TRIPLE is inferior to MECA in terms of performance.",
"W 5. - The cross-modality setup (from RGB to depth) is often not considered as domain adaptation. It would be better to replace it by another benchmark dataset. The Office-31 dataset is still a good benchmark to compare different methods and for the study in Section 5.1, though it is not necessary to reach state-of-the-art results on this dataset because, as the authors noted, it is almost saturated. \n\nIn domain adaptation, the equivalence between domain and dataset is not automatic and some works have been operating in the direction of discovering domains as a subpart of a dataset (e.g., Gong et al. Reshaping Visual Datasets for Domain Adaptation - NIPS 2013). In this respect, the NYU dataset can be used to quantify adaptation across different sensor modalities within the same dataset.\nThe NYU experiment we carried out was also considered in the following recent domain adaptation works: Tzeng et al. “Adversarial Discriminative Domain Adaptation ICCV 2017” and Volpi et al. “Adversarial Feature Augmentation for Unsupervised Domain Adaptation” ArXiv 2017. We believe such experiment adds a considerable value to our work and we would like to maintain it.\nIn any case, after the reviewer’s suggestion, we are now running the Office-31 experiments. Preliminary results on the Amazon->Webcam split are in line with those already in the paper and coherent with the ones published in Sun & Saenko, 2016: Baseline (no adapt) 58.1%, Deep-Coral +5.9%, MECA +8.7% (Note that we use a VGG as a baseline architecture, while Sun & Saenko, 2016 use AlexNet).\n---\n\nQ 1. - I am not sure how the gradients were computed after the eigendecomposition in equation (8). \n\nAs a common practice, we let the software library to automatically compute the gradients along the computation graph, given the fact that the additive regularizer that we wrote is nothing but a differentiable composition of elementary functions such as logarithms and square exponentiation. Although it’s possible to explicitly write down gradients with formulas, such explicit formalism is not of particular interest and we decided to remove such calculations from the paper in order to reduce verbosity.\n",
"We thank the reviewer for having read our work with great detail and for the valuable suggestions. We will address all quoted weaknesses (W) and questions (Q) separately.\n\n\nW 1. - The theoretical result is not convincing given it relies on many unrealistic assumptions, such as the null performance degradation under perfect correlation alignment, the Dirac’s delta function as the predictions over the target, etc. \n \nIn Theorem 1, by assuming the optimal correlation alignment, we can prove that entropy is minimized (which, ancillary, implies the Dirac’s delta function for the predictions). Under a theoretical standpoint, the strong assumption is balanced by the significant claim we have proved. In practical terms, the reviewers is right in observing that the optimal alignment is not granted for free, and this justifies the choice of a more sound metric for correlation alignment. That’s why we proposed the log-Euclidean distance to make the alignment closer to the optimal one.\n--\n\nW 2. - The theorem actually does not favor the correlation alignment over the geodesic alignment. It does not explain that, in Figure 2, the entropy is able to find the best balancing cost \\lamba for geodesic alignment but not for the Euclidean alignment. \n\nAs we showed in Figure 2, in the case of geodesic alignment, entropy minimization always correlate with the optimal performance on the target domain. Since the same does not always happen when an Euclidean metric is used, this is an evidence that Euclidean alignment is not able to achieve an optimal correlation alignment which, in comparison, is better achieved through our geodesic approach. \n--\n\nW 3. - The entropy alignment seems an interesting criterion to explore in general. Could it be used to find fairly good hyper-parameters for the other methods? Could it be used to determine the other hyper-parameters (e.g., learning rate, early stopping) for the geodesic alignment?\n\nIt does make sense to fine tune the \\lambda by using target entropy since, ultimately, a low entropy on the target is a proxy for a confident classifier whose predictions are peaky. In other words, since \\lambda regulates the effect of the correlation alignment, it also balances the capability of a classifier trained on the source to perform well on the target. Since in our pipeline \\lambda is the only parameter related to domain adaptation, we deem our choice quite natural. In fact, other free parameters (learning rate, early stopping) are not related to adaptation, but to the regular training of the deep neural network, which can be actually determined by using source data only - as we did in our experiments.\n--\n\nW 4. - If one leaves a subset of the target domain out and use its labels for validation, how different would the selected balancing cost \\lambda differ from that by the entropy?\n\nThe availability of a few labeled samples from the target domain would cast the problem into semi-supervised domain adaptation. Instead, our work faces the more challenging unsupervised scenario. \nIndeed, we propose an unsupervised method which lead to the same results of using labelled target samples for validation. This is shown in the top-right of Figure 2: the blue curve accounts for the best target performance, which is computed by means of target test labels - thus not accessible during training. Differently, the red curve can be computed at training time since the entropy criterion is fully unsupervised. 
\nFigure 2 shows that the proposed criterion is effectively able to select the \\lambda which corresponds to the best target performance that one could achieve if one was allowed to use target label. Notice that the same does not happen for Deep CORAL (bottom-right) - and the reported results for that competitor were done by direct validation on the target.\n--",
"We are thankful for the detailed reading and careful evaluation of our work. \n\n\nBy following the proposed suggestion, we added to the Appendix some t-SNE visualizations in which we compare our baseline network with no adaptation against Deep CORAL and MECA on the SVHN to MNIST benchmark. As the we observed, Deep CORAL and MECA achieve a better separation among classes - confirming the quantitative results of Table 1. \n\nMoreover, when looking at the degree of confusion between source and target domain achieved within each digit’s class, we can qualitatively show that MECA is better in “shuffling” source and target data than Deep CORAL, in which the two are close but much more separated. This can be read as an additional, qualitative evidence of the superiority of the proposed geodesic over the Euclidean alignment. \n\nThese considerations and further remarks have been discussed in the revised paper (appendix). ",
"The rebuttal addresses most of my questions. Here are two more cents. \n\nThe theorem still does not favor the correlation alignment over the geodesic alignment. What Figure 2 shows is an empirical observation but the theorem itself does not lead to the result.\n\nI still do not think the cross-modality setup is appropriate for studying domain adaptation. That would result in disparate supports to the distributions of the two domains. In general, it is hard to adapt between two such \"domains\" though the additional pairwise relation between the data points of the two \"domains\" could help. Moreover, there has been a rich literature on multi-modality data. It is not a good idea to term it with a new name and meanwhile ignore the existing works on multi-modalities. \n\n",
"https://github.com/pmorerio/minimal-entropy-correlation-alignment"
]
} | {
"paperhash": [
"häusser|associative_domain_adaptation",
"carlucci|autodial:_automatic_domain_alignment_layers",
"saito|asymmetric_tri-training_for_unsupervised_domain_adaptation",
"tzeng|adversarial_discriminative_domain_adaptation",
"taigman|unsupervised_cross-domain_image_generation",
"bousmalis|domain_separation_networks",
"zhang|efficient_temporal_sequence_comparison_and_classification_using_gram_matrix_embeddings_on_a_riemannian_manifold",
"minh|approximate_log-hilbert-schmidt_distances_between_covariance_operators_for_image_classification",
"cavazza|kernelized_covariance_for_action_recognition",
"li|revisiting_batch_normalization_for_practical_domain_adaptation",
"kan|bi-shifting_auto-encoder_for_unsupervised_domain_adaptation",
"sun|return_of_frustratingly_easy_domain_adaptation",
"ioffe|batch_normalization:_accelerating_deep_network_training_by_reducing_internal_covariate_shift",
"minh|log-hilbert-schmidt_metric_between_positive_definite_operators_on_hilbert_spaces",
"ganin|unsupervised_domain_adaptation_by_backpropagation",
"fernando|unsupervised_visual_domain_adaptation_using_subspace_alignment",
"shekhar|generalized_domain-adaptive_dictionaries",
"gong|geodesic_flow_kernel_for_unsupervised_domain_adaptation",
"gopalan|domain_adaptation_for_object_recognition:_an_unsupervised_approach",
"glorot|domain_adaptation_for_large-scale_sentiment_classification:_a_deep_learning_approach",
"torralba|unbiased_look_at_dataset_bias",
"arsigny|geometric_means_in_a_novel_vector_space_structure_on_symmetric_positive-definite_matrices",
"grandvalet|semi-supervised_learning_by_entropy_minimization",
"lee|pseudo-label_:_the_simple_and_efficient_semi-supervised_learning_method_for_deep_neural_networks",
"netzer|reading_digits_in_natural_images_with_unsupervised_feature_learning"
],
"title": [
"Associative Domain Adaptation",
"AutoDIAL: Automatic Domain Alignment Layers",
"Asymmetric Tri-training for Unsupervised Domain Adaptation",
"Adversarial Discriminative Domain Adaptation",
"Unsupervised Cross-Domain Image Generation",
"Domain Separation Networks",
"Efficient Temporal Sequence Comparison and Classification Using Gram Matrix Embeddings on a Riemannian Manifold",
"Approximate Log-Hilbert-Schmidt Distances between Covariance Operators for Image Classification",
"Kernelized covariance for action recognition",
"Revisiting Batch Normalization For Practical Domain Adaptation",
"Bi-Shifting Auto-Encoder for Unsupervised Domain Adaptation",
"Return of Frustratingly Easy Domain Adaptation",
"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"Log-Hilbert-Schmidt metric between positive definite operators on Hilbert spaces",
"Unsupervised Domain Adaptation by Backpropagation",
"Unsupervised Visual Domain Adaptation Using Subspace Alignment",
"Generalized Domain-Adaptive Dictionaries",
"Geodesic flow kernel for unsupervised domain adaptation",
"Domain adaptation for object recognition: An unsupervised approach",
"Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach",
"Unbiased look at dataset bias",
"Geometric Means in a Novel Vector Space Structure on Symmetric Positive-Definite Matrices",
"Semi-supervised Learning by Entropy Minimization",
"Pseudo-Label : The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks",
"Reading Digits in Natural Images with Unsupervised Feature Learning"
],
"abstract": [
"We propose associative domain adaptation, a novel technique for end-to-end domain adaptation with neural networks, the task of inferring class labels for an unlabeled target domain based on the statistical properties of a labeled source domain. Our training scheme follows the paradigm that in order to effectively derive class labels for the target domain, a network should produce statistically domain invariant embeddings, while minimizing the classification error on the labeled source domain. We accomplish this by reinforcing associations between source and target data directly in embedding space. Our method can easily be added to any existing classification network with no structural and almost no computational overhead. We demonstrate the effectiveness of our approach on various benchmarks and achieve state-of-the-art results across the board with a generic convolutional neural network architecture not specifically tuned to the respective tasks. Finally, we show that the proposed association loss produces embeddings that are more effective for domain adaptation compared to methods employing maximum mean discrepancy as a similarity measure in embedding space.",
"Classifiers trained on given databases perform poorly when tested on data acquired in different settings. This is explained in domain adaptation through a shift among distributions of the source and target domains. Attempts to align them have traditionally resulted in works reducing the domain shift by introducing appropriate loss terms, measuring the discrepancies between source and target distributions, in the objective function. Here we take a different route, proposing to align the learned representations by embedding in any given network specific Domain Alignment Layers, designed to match the source and target feature distributions to a reference one. Opposite to previous works which define a priori in which layers adaptation should be performed, our method is able to automatically learn the degree of feature alignment required at different levels of the deep network. Thorough experiments on different public benchmarks, in the unsupervised setting, confirm the power of our approach.",
"It is important to apply models trained on a large number of labeled samples to different domains because collecting many labeled samples in various domains is expensive. To learn discriminative representations for the target domain, we assume that artificially labeling the target samples can result in a good representation. Tri-training leverages three classifiers equally to provide pseudo-labels to unlabeled samples; however, the method does not assume labeling samples generated from a different domain. In this paper, we propose the use of an asymmetric tri-training method for unsupervised domain adaptation, where we assign pseudo-labels to unlabeled samples and train the neural networks as if they are true labels. In our work, we use three networks asymmetrically, and by asymmetric, we mean that two networks are used to label unlabeled target samples, and one network is trained by the pseudo-labeled samples to obtain target-discriminative representations. Our proposed method was shown to achieve a state-of-the-art performance on the benchmark digit recognition datasets for domain adaptation.",
"Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.",
"We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given function f, which accepts inputs in either domains, would remain unchanged. Other than the function f, the training data is unsupervised and consist of a set of samples from each domain. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity.",
"The cost of large scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach circumventing this cost is training models on synthetic data where annotations are provided automatically. Despite their appeal, such models often fail to generalize from synthetic to real images, necessitating domain adaptation algorithms to manipulate these models before they can be successfully applied. Existing approaches focus either on mapping representations from one domain to the other, or on learning to extract features that are invariant to the domain from which they were extracted. However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain. We hypothesize that explicitly modeling what is unique to each domain can improve a model's ability to extract domain-invariant features. Inspired by work on private-shared component analysis, we explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains. Our model is trained to not only perform the task we care about in the source domain, but also to use the partitioned representation to reconstruct the images from both domains. Our novel architecture results in a model that outperforms the state-of-the-art on a range of unsupervised domain adaptation scenarios and additionally produces visualizations of the private and shared representations enabling interpretation of the domain adaptation process.",
"In this paper we propose a new framework to compare and classify temporal sequences. The proposed approach captures the underlying dynamics of the data while avoiding expensive estimation procedures, making it suitable to process large numbers of sequences. The main idea is to first embed the sequences into a Riemannian manifold by using positive definite regularized Gram matrices of their Hankelets. The advantages of the this approach are: 1) it allows for using non-Euclidean similarity functions on the Positive Definite matrix manifold, which capture better the underlying geometry than directly comparing the sequences or their Hankel matrices, and 2) Gram matrices inherit desirable properties from the underlying Hankel matrices: their rank measure the complexity of the underlying dynamics, and the order and coefficients of the associated regressive models are invariant to affine transformations and varying initial conditions. The benefits of this approach are illustrated with extensive experiments in 3D action recognition using 3D joints sequences. In spite of its simplicity, the performance of this approach is competitive or better than using state-of-art approaches for this problem. Further, these results hold across a variety of metrics, supporting the idea that the improvement stems from the embedding itself, rather than from using one of these metrics.",
"This paper presents a novel framework for visual object recognition using infinite-dimensional covariance operators of input features, in the paradigm of kernel methods on infinite-dimensional Riemannian manifolds. Our formulation provides a rich representation of image features by exploiting their non-linear correlations, using the power of kernel methods and Riemannian geometry. Theoretically, we provide an approximate formulation for the Log-Hilbert-Schmidt distance between covariance operators that is efficient to compute and scalable to large datasets. Empirically, we apply our framework to the task of image classification on eight different, challenging datasets. In almost all cases, the results obtained outperform other state of the art methods, demonstrating the competitiveness and potential of our framework.",
"In this paper we aim at increasing the descriptive power of the covariance matrix, limited in capturing linear mutual dependencies between variables only. We present a rigorous and principled mathematical pipeline to recover the kernel trick for computing the covariance matrix, enhancing it to model more complex, non-linear relationships conveyed by the raw data. To this end, we propose Kernelized-COV, which generalizes the original covariance representation without compromising the efficiency of the computation. In the experiments, we validate the proposed framework against many previous approaches in the literature, scoring on par or superior with respect to the state of the art on benchmark datasets for 3D action recognition.",
"Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase, that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. Recent study (Tommasi et al. 2015) shows that a DNN has strong dependency towards the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN) to increase the generalization ability of a DNN. By modulating the statistics in all Batch Normalization layers across the network, our approach achieves deep adaptation effect for domain adaptation tasks. In contrary to other deep learning domain adaptation methods, our method does not require additional components, and is parameter-free. It archives state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary with other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance.",
"In many real-world applications, the domain of model learning (referred as source domain) is usually inconsistent with or even different from the domain of testing (referred as target domain), which makes the learnt model degenerate in target domain, i.e., the test domain. To alleviate the discrepancy between source and target domains, we propose a domain adaptation method, named as Bi-shifting Auto-Encoder network (BAE). The proposed BAE attempts to shift source domain samples to target domain, and also shift the target domain samples to source domain. The non-linear transformation of BAE ensures the feasibility of shifting between domains, and the distribution consistency between the shifted domain and the desirable domain is constrained by sparse reconstruction between them. As a result, the shifted source domain is supervised and follows similar distribution as target domain. Therefore, any supervised method can be applied on the shifted source domain to train a classifier for classification in target domain. The proposed method is evaluated on three domain adaptation scenarios of face recognition, i.e., domain adaptation across view angle, ethnicity, and imaging sensor, and the promising results demonstrate that our proposed BAE can shift samples between domains and thus effectively deal with the domain discrepancy.",
"\n \n Unlike human learning, machine learning often fails to handle changes between training (source) and test (target) input distributions. Such domain shifts, common in practical scenarios, severely damage the performance of conventional machine learning methods. Supervised domain adaptation methods have been proposed for the case when the target data have labels, including some that perform very well despite being ``frustratingly easy'' to implement. However, in practice, the target domain is often unlabeled, requiring unsupervised adaptation. We propose a simple, effective, and efficient method for unsupervised domain adaptation called CORrelation ALignment (CORAL). CORAL minimizes domain shift by aligning the second-order statistics of source and target distributions, without requiring any target labels. Even though it is extraordinarily simple--it can be implemented in four lines of Matlab code--CORAL performs remarkably well in extensive evaluations on standard benchmark datasets.\n \n",
"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.",
"This paper introduces a novel mathematical and computational framework, namely Log-Hilbert-Schmidt metric between positive definite operators on a Hilbert space. This is a generalization of the Log-Euclidean metric on the Riemannian manifold of positive definite matrices to the infinite-dimensional setting. The general framework is applied in particular to compute distances between co-variance operators on a Reproducing Kernel Hilbert Space (RKHS), for which we obtain explicit formulas via the corresponding Gram matrices. Empirically, we apply our formulation to the task of multi-category image classification, where each image is represented by an infinite-dimensional RKHS covariance operator. On several challenging datasets, our method significantly outperforms approaches based on covariance matrices computed directly on the original input features, including those using the Log-Euclidean metric, Stein and Jeffreys divergences, achieving new state of the art results.",
"Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). \nAs the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. \nOverall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.",
"In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyper parameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods.",
"Data-driven dictionaries have produced state-of-the-art results in various classification tasks. However, when the target data has a different distribution than the source data, the learned sparse representation may not be optimal. In this paper, we investigate if it is possible to optimally represent both source and target by a common dictionary. Specifically, we describe a technique which jointly learns projections of data in the two domains, and a latent dictionary which can succinctly represent both the domains in the projected low-dimensional space. An efficient optimization technique is presented, which can be easily kernelized and extended to multiple domains. The algorithm is modified to learn a common discriminative dictionary, which can be further used for classification. The proposed approach does not require any explicit correspondence between the source and target domains, and shows good results even when there are only a few labels available in the target domain. Various recognition experiments show that the method performs on par or better than competitive state-of-the-art methods.",
"In real-world applications of visual recognition, many factors - such as pose, illumination, or image quality - can cause a significant mismatch between the source domain on which classifiers are trained and the target domain to which those classifiers are applied. As such, the classifiers often perform poorly on the target domain. Domain adaptation techniques aim to correct the mismatch. Existing approaches have concentrated on learning feature representations that are invariant across domains, and they often do not directly exploit low-dimensional structures that are intrinsic to many vision datasets. In this paper, we propose a new kernel-based method that takes advantage of such structures. Our geodesic flow kernel models domain shift by integrating an infinite number of subspaces that characterize changes in geometric and statistical properties from the source to the target domain. Our approach is computationally advantageous, automatically inferring important algorithmic parameters without requiring extensive cross-validation or labeled data from either domain. We also introduce a metric that reliably measures the adaptability between a pair of source and target domains. For a given target domain and several source domains, the metric can be used to automatically select the optimal source domain to adapt and avoid less desirable ones. Empirical studies on standard datasets demonstrate the advantages of our approach over competing methods.",
"Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.",
"The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, hereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains.",
"Datasets are an integral part of contemporary object recognition research. They have been the chief reason for the considerable progress in the field, not just as source of large amounts of training data, but also as means of measuring and comparing performance of competing algorithms. At the same time, datasets have often been blamed for narrowing the focus of object recognition research, reducing it to a single benchmark performance number. Indeed, some datasets, that started out as data capture efforts aimed at representing the visual world, have become closed worlds unto themselves (e.g. the Corel world, the Caltech-101 world, the PASCAL VOC world). With the focus on beating the latest benchmark numbers on the latest dataset, have we perhaps lost sight of the original purpose? The goal of this paper is to take stock of the current state of recognition datasets. We present a comparison study using a set of popular datasets, evaluated based on a number of criteria including: relative data bias, cross-dataset generalization, effects of closed-world assumption, and sample value. The experimental results, some rather surprising, suggest directions that can improve dataset collection as well as algorithm evaluation protocols. But more broadly, the hope is to stimulate discussion in the community regarding this very important, but largely neglected issue.",
"In this work we present a new generalization of the geometric mean of positive numbers on symmetric positive‐definite matrices, called Log‐Euclidean. The approach is based on two novel algebraic structures on symmetric positive‐definite matrices: first, a lie group structure which is compatible with the usual algebraic properties of this matrix space; second, a new scalar multiplication that smoothly extends the Lie group structure into a vector space structure. From bi‐invariant metrics on the Lie group structure, we define the Log‐Euclidean mean from a Riemannian point of view. This notion coincides with the usual Euclidean mean associated with the novel vector space structure. Furthermore, this means corresponds to an arithmetic mean in the domain of matrix logarithms. We detail the invariance properties of this novel geometric mean and compare it to the recently introduced affine‐invariant mean. The two means have the same determinant and are equal in a number of cases, yet they are not identical in g...",
"We consider the semi-supervised learning problem, where a decision rule is to be learned from labeled and unlabeled data. In this framework, we motivate minimum entropy regularization, which enables to incorporate unlabeled data in the standard supervised learning. Our approach includes other approaches to the semi-supervised problem as particular or limiting cases. A series of experiments illustrates that the proposed solution benefits from unlabeled data. The method challenges mixture models when the data are sampled from the distribution class spanned by the generative model. The performances are definitely in favor of minimum entropy regularization when generative models are misspecified, and the weighting of unlabeled data provides robustness to the violation of the \"cluster assumption\". Finally, we also illustrate that the method can also be far superior to manifold learning in high dimension spaces.",
"We propose the simple and efficient method of semi-supervised learning for deep neural networks. Basically, the proposed network is trained in a supervised fashion with labeled and unlabeled data simultaneously. For unlabeled data, Pseudo-Label s, just picking up the class which has the maximum network output, are used as if they were true labels. Without any unsupervised pre-training method, this simple method with dropout shows the state-of-the-art performance.",
"Detecting and reading text from natural images is a hard computer vision task that is central to a variety of emerging applications. Related problems like document character recognition have been widely studied by computer vision and machine learning researchers and are virtually solved for practical applications like reading handwritten digits. Reliably recognizing characters in more complex scenes like photographs, however, is far more difficult: the best existing methods lag well behind human performance on the same tasks. In this paper we attack the problem of recognizing digits in a real application using unsupervised feature learning methods: reading house numbers from street level photos. To this end, we introduce a new benchmark dataset for research use containing over 600,000 labeled digits cropped from Street View images. We then demonstrate the difficulty of recognizing these digits when the problem is approached with hand-designed features. Finally, we employ variants of two recently proposed unsupervised feature learning methods and find that they are convincingly superior on our benchmarks."
],
"authors": [
{
"name": [
"Philip Häusser",
"Thomas Frerix",
"A. Mordvintsev",
"D. Cremers"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Fabio Maria Carlucci",
"L. Porzi",
"B. Caputo",
"E. Ricci",
"S. R. Bulò"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kuniaki Saito",
"Y. Ushiku",
"T. Harada"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Eric Tzeng",
"Judy Hoffman",
"Kate Saenko",
"Trevor Darrell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yaniv Taigman",
"Adam Polyak",
"Lior Wolf"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Konstantinos Bousmalis",
"George Trigeorgis",
"N. Silberman",
"Dilip Krishnan",
"D. Erhan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Xikang Zhang",
"Yin Wang",
"Mengran Gou",
"M. Sznaier",
"O. Camps"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"H. Q. Minh",
"Marco San-Biagio",
"Loris Bazzani",
"Vittorio Murino"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jacopo Cavazza",
"Andrea Zunino",
"Marco San-Biagio",
"Vittorio Murino"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yanghao Li",
"Naiyan Wang",
"Jianping Shi",
"Jiaying Liu",
"Xiaodi Hou"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Meina Kan",
"S. Shan",
"Xilin Chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Baochen Sun",
"Jiashi Feng",
"Kate Saenko"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sergey Ioffe",
"Christian Szegedy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"H. Q. Minh",
"Marco San-Biagio",
"Vittorio Murino"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yaroslav Ganin",
"V. Lempitsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Basura Fernando",
"Amaury Habrard",
"M. Sebban",
"T. Tuytelaars"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sumit Shekhar",
"Vishal M. Patel",
"H. Nguyen",
"R. Chellappa"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Boqing Gong",
"Yuan Shi",
"Fei Sha",
"K. Grauman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Raghuraman Gopalan",
"Ruonan Li",
"R. Chellappa"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Xavier Glorot",
"Antoine Bordes",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Torralba",
"Alexei A. Efros"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Vincent Arsigny",
"P. Fillard",
"X. Pennec",
"N. Ayache"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yves Grandvalet",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Dong-Hyun Lee"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yuval Netzer",
"Tao Wang",
"Adam Coates",
"A. Bissacco",
"Bo Wu",
"A. Ng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1708.00938",
"1704.08082",
"1702.08400",
"1702.05464",
"1611.02200",
"1608.06019",
null,
null,
"1604.06582",
"1603.04779",
null,
"1511.05547",
"1502.03167",
null,
"1409.7495",
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"20186541",
"4377230",
"12570770",
"4357800",
"10756563",
"2127515",
"4663260",
"15254812",
"6726756",
"5069968",
"11590103",
"16439870",
"5808102",
"2448883",
"6755881",
"9440223",
"7955708",
"6742009",
"10337178",
"18235792",
"2777306",
"11507055",
"7890982",
"18507866",
"16852518"
],
"intents": [
[
"background",
"methodology"
],
[
"background",
"methodology"
],
[
"background",
"methodology"
],
[
"background",
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[
"background"
],
[],
[
"background"
],
[
"methodology"
],
[
"background",
"methodology"
],
[],
[
"methodology"
],
[
"background"
],
[
"background",
"methodology"
],
[
"background",
"methodology"
],
[
"background",
"methodology"
],
[
"background",
"methodology"
],
[
"background",
"methodology"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background",
"methodology"
],
[
"background",
"methodology"
],
[
"background",
"methodology"
]
],
"isInfluential": [
false,
true,
true,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | null | 84 | 1.75 | 0.666667 | 0.916667 | null | null | null | null | null | rJWechg0Z |
yoo|dense_recurrent_neural_network_with_attention_gate|ICLR_cc_2018_Conference | Dense Recurrent Neural Network with Attention Gate | We propose the dense RNN, which has full connections from each hidden state directly to multiple preceding hidden states of all layers. As the density of the connections increases, the number of paths through which the gradient flows can be increased. This increases the magnitude of the gradients, which helps to prevent the vanishing gradient problem in time. Larger gradients, however, can also cause the exploding gradient problem. To balance the trade-off between the two problems, we propose an attention gate, which controls the amount of gradient flow. We describe the relation between the attention gate and the gradient flows by approximation. The experiment on language modeling using the Penn Treebank corpus shows that dense connections with the attention gate improve the model’s performance. | {
"name": [],
"affiliation": []
} | Dense RNN that has full connections from each hidden state directly to multiple preceding hidden states of all layers. | [
"recurrent neural network",
"language modeling",
"dense connection"
] | null | 2018-02-15 22:29:28 | 23 | null | null | null | null | null | null | null | null | false | meta score: 4
This paper concerns a variant of previous RNN architectures using temporal skip connections, with experimentation on the PTB language modelling task.
The reviewers all recommend that the paper is not ready for publication and thus should be rejected from ICLR. The novelty of the paper and its relation to the state-of-the-art is not clear. The experimental validation is weak.
Pros:
- possibly interesting idea
Cons:
- weak experimental validation
- weak connection to the state of the art
- precise original contribution w.r.t. the state of the art is not clear
| {
"review_id": [
"SkLFMZ9gG",
"SJLNIX9lG",
"Hk37PGqlz"
],
"review": [
{
"title": "title: Review",
"paper_summary": null,
"main_review": "main_review: This paper proposes a new type of RNN architectures called Dense RNNs. The authors combine several different RNN architectures and claim that their RNN can model long-term dependencies better, can learn multiscale representation of the sequential data, and can sidestep the exploding or vanishing gradients problem by using parametrized gating units.\n\nUnfortunately, this paper is hard to read, it is difficult to understand the intention of the authors. The authors make several claims without any supportive reference or experimental evidence. Both intuitive and theoretical justifications of the proposed architecture are not so convincing. The experiment is only done on PTB dataset, and the reported numbers are not that promising either. \n\nThis paper tries to combine three different features from previous works, and unfortunately, it is not so well conducted.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Not exciting",
"paper_summary": null,
"main_review": "main_review: The authors propose an RNN that combines temporal shortcut connections from [Soltani & Jang, 2016] and Gated Recurrent Attention [Chung, 2014]. However, their justification about the novelty and efficacy of the model is not well demonstrated in the paper. The experiment part is modest with only one small dataset Penn Tree Bank is used. The results are not significant enough and no comparisons with models in [Soltani & Jang, 2016] and [Chung, 2014] are provided in the paper to show the effectiveness of the proposed combination. To conclude, this paper is an incremental work with limited contributions.\n\nSome writing issues:\n1. Lack of support in arguments,\n2. Lack of referencing to previous works. For example, the sentence “By selecting the same dropout mask for feedforward, recurrent connections, respectively, the dropout can apply to the RNN, which is called a variational dropout” mentions “variational dropout” with no citing. Or “NARX-RNN and HO-RNN increase the complexity by increasing recurrent depth. Gated feedback RNN has the fully connection between two consecutive timesteps” also mentions a lot of models without any references at all.\n3. Some related papers are not cited, e.g., Hierarchical Multiscale Recurrent Neural Networks [Chung, 2016]\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Dense RNNs with Attention Gate",
"paper_summary": null,
"main_review": "main_review: Summary: \n\nThis paper proposes a fully connected dense RNN architecture that has connections to every layer and the preceding connections of each layer. The connections are also gated by using a simple gating mechanism. The authors very briefly discusses about the effect of these on the dynamics of the learning. They report results on PTB character-level language modelling task.\n\n\nQuestions:\nWhat is the computational complexity of this approach compared to a vanilla RNN architecture?\nWhat is the implications of these skip connections in terms of memory consumption during BPTT?\nDid you use gradient clipping and have you used any specific type of initialization for the parameters?\nHow would this approach would compare against the Clockwork RNNs which has a block-diagonal weight matrices? [1]\nHow would dense-RNNs compare against to the MANNs [2]?\nHow would you implement this model efficiently?\n\nPros:\nInteresting idea.\nCons:\nLack of experiments and empirical results supporting the arguments.\nHand-wavy theory.\nLack of references to the relevant literature. \n\nGeneral Comments:\nIn general the paper is relatively well written despite having some minor typos. The idea is interesting, however the experiments in this paper is seriously lacking. The only results presented in this paper is on PTB. The results are quite behind the SOTA and PTB is a really tiny, toyish language modeling task. The theory is very hand-wavy, the connections to the previous attempts to come up with related properties of the recurrent models should be cited. The Figure 2 is very related to the Gersgorin circle theorem in [3]. The discussion about the skip-connections is very related to the results in [2]. \n\nOverall, I think this paper is rushed and not ready for the publication.\n\n[1] Koutnik, J., Greff, K., Gomez, F., & Schmidhuber, J. (2014, January). A clockwork rnn. In International Conference on Machine Learning (pp. 1863-1871).\n[2] Gulcehre, Caglar, Sarath Chandar, and Yoshua Bengio. \"Memory Augmented Neural Networks with Wormhole Connections.\" arXiv preprint arXiv:1701.08718 (2017).\n[3] Zilly, Julian Georg, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. \"Recurrent highway networks.\" arXiv preprint arXiv:1607.03474 (2016).\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.1111111119389534,
0.3333333432674408,
0.3333333432674408
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to AnonReviewer3",
"Response to AnonReviewer1"
],
"comment": [
" We thanks the reviewers for their work. And your review was very helpful for me. \n\nLack of support in arguments\n\nI added the reference papers belows.\n- Learning long-term dependencies in narx recurrent neural networks (NARX-RNN)\n- Higher order recurrent neural networks (HO-RNN) \n- Hierarchical multiscale recurrent neural networks\n- Memory augmented neural networks with wormhole connections (MANN)\n- A clockwork rnn\n\n",
" We thanks the reviewers for their work. And your review was very helpful for me. \nI answered your questions and based on the aswers, I updated my paper. \n\nQ. What is the computational complexity of this approach compared to a vanilla RNN architecture? \n\nA. The number of parameters in dense rnn is feedforward depth^2 * recurrent depth * hidden size^2\nThe number of parameters in vanilla rnn is hidden size^2. \n\nDoubling the hidden size and doubling feedforward depth have same effect in terms of the number of parameters. And doubling recurrent depth is more efficient than doubling of hidden size with same factor. \n\nQ. What is the implications of these skip connections in terms of memory consumption during BPTT? \n\nA. If there is no skip connections, the gradients have to flow with stopping by every hidden states, it makes the parameters being vanished or exploded. The skip connections make the gradients pass the less number of hidden states, it alleviates the vanishing gradient or exploding gradient problems. \n\nQ. Did you use gradient clipping and have you used any specific type of initialization for the parameters? \n\nA. We used gradient clipping with the value 5. We used stochastic gradient optimizer with sceduling the learning rate.\n\nQ. How would this approach compare against the Clockwork RNNs which has a block-diagonal weight matrices?\n\nA. In clockwork RNN, the hidden states are divided into multiple sub-modules, which act with different periods to capture multiple timescales. In dense RNN, all previous states within recurrent depth affect current hidden state every time step. The periods underlying the sequences are automatically selected using the attention gate in dense RNN. In summary, clockwork RNN pre-defines the frequency to capture from the sequence and dense RNN learns the frequency using the attention gate. \n\nQ. [1] How would dense-RNNs compare against to the MANNs [2]? \n\nA. All previous states within the recurrent depth don't always affect the next state. Thus, MANN uses the memory to remember previous states and retrieve some of previous states if necessary. This is similar concept. However, the MANN has only connections between same layers. \n\nQ. How would you implement this model efficiently? \n\nA. In equation (12), there are many weight multiplication. As the number of weight multiplication increases, slower the calculation speed is. \n\nIn theoretical analysis, we analyzed using Gersgorin circle theorem similar to the paper \"Recurrent Highway Network\". \n"
]
} | {
"paperhash": [
"bahdanau|neural_machine_translation_by_jointly_learning_to_align_and_translate",
"chung|empirical_evaluation_of_gated_recurrent_neural_networks_on_sequence_modeling",
"chung|hierarchical_multiscale_recurrent_neural_networks",
"el|hierarchical_recurrent_neural_networks_for_long-term_dependencies",
"gal|a_theoretically_grounded_application_of_dropout_in_recurrent_neural_networks",
"aranovich|uber_die_abgrenzung_der_eigenwerte_einer_matrix",
"graves|speech_recognition_with_deep_recurrent_neural_networks",
"gulcehre|memory_augmented_neural_networks_with_wormhole_connections",
"he|deep_residual_learning_for_image_recognition",
"hermans|training_and_analysing_deep_recurrent_neural_networks",
"hochreiter|the_vanishing_gradient_problem_during_learning_recurrent_neural_nets_and_problem_solutions",
"hochreiter|long_short-term_memory",
"huang|densely_connected_convolutional_networks",
"inan|tying_word_vectors_and_word_classifiers:_a_loss_framework_for_language_modeling",
"koutnik|a_clockwork_rnn",
"lin|learning_long-term_dependencies_in_narx_recurrent_neural_networks",
"mikolov|distributed_representations_of_words_and_phrases_and_their_compositionality",
"moon|rnndrop:_a_novel_dropout_for_rnns_in_asr",
"pascanu|on_the_difficulty_of_training_recurrent_neural_networks",
"schmidhuber|learning_complex,_extended_sequences_using_the_principle_of_history_compression",
"soltani|higher_order_recurrent_neural_networks",
"zaremba|recurrent_neural_network_regularization",
"zhang|architectural_complexity_measures_of_recurrent_neural_networks"
],
"title": [
"Neural machine translation by jointly learning to align and translate",
"Empirical evaluation of gated recurrent neural networks on sequence modeling",
"Hierarchical multiscale recurrent neural networks",
"Hierarchical recurrent neural networks for long-term dependencies",
"A theoretically grounded application of dropout in recurrent neural networks",
"Uber die abgrenzung der eigenwerte einer matrix",
"Speech recognition with deep recurrent neural networks",
"Memory augmented neural networks with wormhole connections",
"Deep residual learning for image recognition",
"Training and analysing deep recurrent neural networks",
"The vanishing gradient problem during learning recurrent neural nets and problem solutions",
"Long short-term memory",
"Densely connected convolutional networks",
"Tying word vectors and word classifiers: A loss framework for language modeling",
"A clockwork rnn",
"Learning long-term dependencies in narx recurrent neural networks",
"Distributed representations of words and phrases and their compositionality",
"Rnndrop: A novel dropout for rnns in asr",
"On the difficulty of training recurrent neural networks",
"Learning complex, extended sequences using the principle of history compression",
"Higher order recurrent neural networks",
"Recurrent neural network regularization",
"Architectural complexity measures of recurrent neural networks"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"dzmitry bahdanau",
"kyunghyun cho",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"junyoung chung",
"caglar gulcehre",
"kyunghyun cho",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"junyoung chung",
"sungjin ahn",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"salah el",
"hihi ",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yarin gal",
"zoubin ghahramani"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"semyon aranovich",
"geršhgorin "
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex graves",
"abdel-rahman mohamed",
"geoffrey hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"caglar gulcehre",
"sarath chandar",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"michiel hermans",
"benjamin schrauwen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sepp hochreiter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sepp hochreiter",
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"gao huang",
"zhuang liu",
"kilian q weinberger",
"laurens van der maaten"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"hakan inan",
"khashayar khosravi",
"richard socher"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jan koutnik",
"klaus greff",
"faustino gomez",
"juergen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tsungnan lin",
"bill g horne",
"peter tino",
"c lee giles"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tomas mikolov",
"ilya sutskever",
"kai chen",
"greg s corrado",
"jeff dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"taesup moon",
"heeyoul choi",
"hoshik lee",
"inchul song"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"razvan pascanu",
"tomas mikolov",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rohollah soltani",
"hui jiang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"wojciech zaremba",
"ilya sutskever",
"oriol vinyals"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"saizheng zhang",
"yuhuai wu",
"tong che",
"zhouhan lin",
"roland memisevic",
"ruslan r salakhutdinov",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1409.0473v7",
"1412.3555v1",
"1609.01704v7",
"",
"1512.05287v5",
"",
"1303.5778v1",
"1701.08718v1",
"1512.03385v1",
"",
"",
"",
"1608.06993v5",
"1611.01462v3",
"1402.3511v1",
"",
"1310.4546v1",
"",
"1211.5063v2",
"",
"1605.00064v1",
"1409.2329v5",
"1602.08210v3"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.259259 | 0.75 | null | null | null | null | null | rJVruWZRW |
||
song|pixeldefend_leveraging_generative_models_to_understand_and_defend_against_adversarial_examples|ICLR_cc_2018_Conference | 1710.10766v3 | PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples | Adversarial perturbations of normal images are usually imperceptible to humans, but they can seriously confuse state-of-the-art machine learning models. What makes them so special in the eyes of image classifiers? In this paper, we show empirically that adversarial examples mainly lie in the low probability regions of the training distribution, regardless of attack types and targeted models. Using statistical hypothesis testing, we find that modern neural density models are surprisingly good at detecting imperceptible image perturbations. Based on this discovery, we devised PixelDefend, a new approach that purifies a maliciously perturbed image by moving it back towards the distribution seen in the training data. The purified image is then run through an unmodified classifier, making our method agnostic to both the classifier and the attacking method. As a result, PixelDefend can be used to protect already deployed models and be combined with other model-specific defenses. Experiments show that our method greatly improves resilience across a wide variety of state-of-the-art attacking methods, increasing accuracy on the strongest attack from 63% to 84% for Fashion MNIST and from 32% to 70% for CIFAR-10. | {
"name": [
"yang song",
"taesup kim",
"sebastian nowozin",
"stefano ermon",
"nate kushman"
],
"affiliation": [
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
}
]
} | null | [
"Computer Science",
"Mathematics"
] | International Conference on Learning Representations | 2017-10-30 | 45 | 695 | null | null | null | null | null | null | null | true | The paper studies the use of PixelCNN density models for the detection of adversarial images, which tend to lie in low-probability parts of image space. The work is novel, relevant to the ICLR community, and appears to be technically sound.
A downside of the paper is its limited empirical evaluation: there is evidence suggesting that defenses against adversarial examples that work well on MNIST/CIFAR do not necessarily transfer well to much higher-dimensional datasets, for instance, ImageNet. The paper would, therefore, benefit from empirical evaluations of the defense on a dataset like ImageNet.
"review_id": [
"rJbiu3lbM",
"rJ4_WfuxM",
"HJ9WQx6JG"
],
"review": [
{
"title": "title: interesting experimental results, but no definitive argument",
"paper_summary": null,
"main_review": "main_review: The authors propose to use a generative model of images to detect and defend against adverarial examples. White-box attacks against standard models for image recognition (Resnet and VGG) are considered, and a generative model (a PixelCNN) is trained on the same data as the classifiers. The authors first show that adversarial examples created by the white-box attacks correspond to low likelihood region (according to the pixelCNN), which first gives a classification rule for detecting adversarial examples.\n\nThen, to turn the genrative model into a defensive algorithm, the authors propose to preprocess test images by approximately maximizing the likelihood under similar constraints as the attacker of images, to \"project\" adversarial examples back to high-density regions (as estimated by the generative model). As a heuristic method, the authors propose to greedily maximize the likelihood of the incoming images pixel-by-pixel, which is possible because of the specific form of the PixelCNN likelihood in the context of l-infty attacks. An \"adaptive\" version of the algorithm, in which the preprocessing is used only when the likelihood of an example is below a certain threshold, is also proposed.\n\nExperiments are carried out on Fashion MNIST and CIFAR-10. At a high level, the message is that projecting the image into a high density region is sufficient to correct for a significant portions of the mistakes made on adversarial examples. The main result is that this approach based on generative models seems to work even on against the strongest attacks.\n\nOverall, the idea proposed in the paper, using a generative model to detect and filter out spurious patterns that can appear in adversarial examples, is rather intuitive. The experimental result that adversarial examples can somehow be corrected by a generative model is also interesting. The design choice of PixelCNN, which allows for a greedy optimization seems reasonable in that setting.\n\nWhereas the paper is an interesting step forward, the paper still doesn't provide definitive arguments in favor of using such approaches in practice. There is a significant loss in accuracy on clean examples (2% on CIFAR-10 for a resnet), and more generally against weaker opponents such as the fast gradient sign. Thus, in reality, the experiments show that the pipeline generative model + classifier is robust against the strongest white box methods for this classifier, but on the other hand these methods do not transfer well to new models. This somewhat weakens the result, since robustness against these methods that do not transfer well is achieved by changing the model. \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A convincing new way to defend image models against adversarial examples.",
"paper_summary": null,
"main_review": "main_review: The paper describes the creative application of a density estimation model to clean up adversarial examples before applying and image model (for classification, in this setup). The basic idea is that the image is first moved back to the probable region of images before applying the classifier. For images, the successful PiexlCNN model is used as a density estimator and is applied to clean up the image before the classification is attempted.\n\nThe proposed method is very intuitive, but might be expensive if a naive implementation of PixelCNN is used for the cleaning. The approach is novel. It is useful that the density estimator model does not have to rely on the labels. Also, it might even be trained on a different dataset potentially.\n\nThe con is that the proposed methodology still does not solve the problem of adversarial examples completely.\n\nMinor nitpick: In section 2.1, it is suggested that DeepFool was the first optimization based attack to minimize the perturbation wrt the original image. In fact the much earler (2013) \"Intriguing Propoerties ... \" paper relied on the same formulation (minimizing perturbation under several constraints: changed detection and pixel intensities are being in the given range).",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Review of PixelDefend",
"paper_summary": null,
"main_review": "main_review: \nI read the rebuttal and thank the authors for the thoughtful responses and revisions. The updated Figure 2 and Section 4.4. addresses my primary concerns. Upwardly revising my review.\n\n====================\n\nThe authors describe a method for detecting adversarial examples by measuring the likelihood in terms of a generative model of an image. Furthermore, the authors prescribe a method for cleaning or 'santizing' an adversarial image through employing a generative model. The authors demonstrate some success in restoring images that have been adversarially perturbed with this technique.\n\nThe idea of using a generative model (PixelCNN) to assess whether a given image has been adversarially perturbed is a very interesting and understandable finding that may contribute quite nicely to the adversarial literature. One limitation of this method, however, is our ability to build successful generative models for high resolution images. However, I would be curious to know if the authors tried their method on high resolution images, regardless?\n\nMajor comments:\n1) Cross validation. Figure 2a is quite interesting and compelling. It is not clear from the figure if the 'clean' (nor the other data for that matter) is from the *training* or *testing* data for the PixelCNN model. I would *hope* that this is from the *testing* data indicating that these are the likelihood on unseen images?\n\nThat said, it would be interesting to see the *training* data on this plot as well to see if there are any systematic shifts that might make the distribution of adversarial examples less discernible.\n\n2) Adversary to PixelCNN. It is not clear why a PixelCNN may not be adversarially attacked, nor if such a model would be able to guard against an adversarial attack. I am not sure how well viable of strategy this may be but it is worth understanding or addressing to determine how viable this method for guarding actually is.\n\n3) Restorative effects of PixelDefend. I would like to see individual examples of (a) adversarial perturbation for a given image and (b) PixelDefend perturbation for that adversarial image. In particular, I would like to see how close (a) is the negative of (b). This would give me more confidence that this techniques is successfully guarding against the original attack.\n\nI am willing to adjust my rating upward if the authors are able to address some of the points above in a substantive manner. \n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.6666666865348816,
0.6666666865348816,
0.6666666865348816
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to AnonReviewer3",
"Response to AnonReviewer1",
"Response to AnonReviewer2"
],
"comment": [
"Thank you for your review! Here is our answer to the following concern:\n\nQ: In reality, the experiments show that the pipeline generative model + classifier is robust against the strongest white box methods for this classifier, but on the other hand these (stronger attacking) methods do not transfer well to new models. This somewhat weakens the result, since robustness against these methods that do not transfer well is achieved by changing the model. \nA: Our assumption is that in real world circumstance we seldom have the option to change our classification model in response to the adversarial attack being used. We tend to think that the underlying classification model is generally going to be fixed and reasonably hard to change (i.e., in deployed autonomous driving cars), where the adversary can easily test out the system to decide which attack to use against it. Therefore it is important to defend against the strongest available attack.\n",
"Thank you for your review! We would like to address each of your concerns point-by-point:\n\nQ: Generative models of high resolution images.\nA: We agree that testing PixelDefend on high resolution images is an important direction for future work. Although we haven’t tried PixelDefend on higher resolution images than CIFAR-10, we are hopeful that the PixelCNN can capture the approximate distributional properties to distinguish adversarial image even at resolutions where generating convincing samples becomes difficult. Experiments in the paper already provide one such piece of evidence: The samples given by PixelCNN on CIFAR-10 are already bad as judged by humans (see Figure. 9), while the samples for Fashion-MNIST (see Figure. 8) are almost indistinguishable from the training dataset. However, PixelDefend on CIFAR-10 is just as effective as PixelDefend on Fashion-MNIST.\n\nQ: Training or testing on Figure. 2a.\nA: These are indeed likelihoods of *testing* data on unseen images. We have revised the figure to add likelihoods of *training* data as well.\n\nQ: Adversary to PixelDefend\nA: We agree that the above arguments are not definitive and consider theoretical justifications to be an important direction for future research. In Section 4.4, we gave a discussion and provided some empirical results for an attack on PixelDefend. The arguments can be briefly summarized as:\n * A naive attack of the PixelDefend purification process requires back-propagating thousands of repeated PixelCNN computations. This can lead to gradient vanishing problems, as validated by our experiments.\n * Maximizing the PixelCNN density with gradient-based methods is very difficult (as shown in Figure. 5). Therefore such methods are not very amenable to generating adversarial images to fool a PixelCNN via gradient-based techniques.\n * The PixelCNN is trained independent of labels. Therefore, the perturbation direction that leads to higher probability images has a smaller correlation with the perturbation direction that results in misclassification. This arguably makes attacking more difficult.\n\nQ: Restorative effects of PixelDefend.\nA: The goal of PixelDefend is not to undo the adversarial perturbations, but simply to avoid the problems they cause on the underlying classifier by pushing the image towards the nearest high probability mode of the distribution. These changes may not in general undo the adversarial changes, but (as our results show) will push the images towards the classification region for the original underlying class.\n",
"Thanks for pointing out our mistake of quoting DeepFool as the first optimization based attack to minimize the perturbation w.r.t. the original image. We have corrected it in the revised version."
]
} | {
"paperhash": [
"xiao|fashion-mnist:_a_novel_image_dataset_for_benchmarking_machine_learning_algorithms",
"madry|towards_deep_learning_models_resistant_to_adversarial_attacks",
"xu|feature_squeezing_mitigates_and_detects_carlini/wagner_adversarial_examples",
"carlini|adversarial_examples_are_not_easily_detected:_bypassing_ten_detection_methods",
"cissé|parseval_networks:_improving_robustness_to_adversarial_examples",
"tramèr|the_space_of_transferable_adversarial_examples",
"xu|feature_squeezing:_detecting_adversarial_examples_in_deep_neural_networks",
"nayebi|biologically_inspired_protection_of_deep_networks_from_adversarial_attacks",
"li|dropout_inference_in_bayesian_neural_networks_with_alpha-divergences",
"feinman|detecting_adversarial_samples_from_artifacts",
"grosse|on_the_(statistical)_detection_of_adversarial_examples",
"ramachandran|fast_generation_for_convolutional_autoregressive_models",
"metzen|on_detecting_adversarial_perturbations",
"salimans|pixelcnn++:_improving_the_pixelcnn_with_discretized_logistic_mixture_likelihood_and_other_modifications",
"liu|delving_into_transferable_adversarial_examples_and_black-box_attacks",
"kurakin|adversarial_machine_learning_at_scale",
"carlini|towards_evaluating_the_robustness_of_neural_networks",
"amodei|concrete_problems_in_ai_safety",
"oord|conditional_image_generation_with_pixelcnn_decoders",
"papernot|transferability_in_machine_learning:_from_phenomena_to_black-box_attacks_using_adversarial_samples",
"oord|pixel_recurrent_neural_networks",
"he|deep_residual_learning_for_image_recognition",
"papernot|the_limitations_of_deep_learning_in_adversarial_settings",
"moosavi-dezfooli|deepfool:_a_simple_and_accurate_method_to_fool_deep_neural_networks",
"papernot|distillation_as_a_defense_to_adversarial_perturbations_against_deep_neural_networks",
"he|delving_deep_into_rectifiers:_surpassing_human-level_performance_on_imagenet_classification",
"goodfellow|explaining_and_harnessing_adversarial_examples",
"gu|towards_deep_neural_network_architectures_robust_to_adversarial_examples",
"sutskever|sequence_to_sequence_learning_with_neural_networks",
"simonyan|very_deep_convolutional_networks_for_large-scale_image_recognition",
"sasaki|clustering_via_mode_seeking_by_direct_estimation_of_the_gradient_of_a_log-density",
"szegedy|intriguing_properties_of_neural_networks",
"vincent|extracting_and_composing_robust_features_with_denoising_autoencoders",
"hein|manifold_denoising",
"shimodaira|improving_predictive_inference_under_covariate_shift_by_weighting_the_log-likelihood_function",
"zhu|algorithm_778:_l-bfgs-b:_fortran_subroutines_for_large-scale_bound-constrained_optimization",
"byrd|a_limited_memory_algorithm_for_bound_constrained_optimization",
"efron|an_introduction_to_the_bootstrap",
"bengio|learning_long-term_dependencies_with_gradient_descent_is_difficult",
"|f._related_work_most_recent_work_on_detecting_adversarial_examples_focuses_on_adding_an_outlier_class_detection_module_to_the_classifier,_such_as_grosse_et_al",
"|basic_iterative_method_(bim)_kurakin_et_al._(2016)_tested_a_simple_variant_of_the_fast_gradient_sign_method_by_applying_it_multiple_times_with_a_smaller_step_size",
"|11_adversarial_perturbations_of_deep_neural_networks._perturbations,_optimization,_and_statistics",
"netzer|reading_digits_in_natural_images_with_unsupervised_feature_learning",
"krizhevsky|learning_multiple_layers_of_features_from_tiny_images",
"|an_introduction_to_the_bootstrap",
"lecun|gradient-based_learning_applied_to_document_recognition",
"|cifar-10_(canadian_institute_for_advanced_research",
"|figure_8:_true_and_generated_images_from_fashion_mnist._the_upper_part_shows_true_images_sampled_from_the_dataset_while_the_bottom_shows_generated_images_from_pixelcnn"
],
"title": [
"Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms",
"Towards Deep Learning Models Resistant to Adversarial Attacks",
"Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples",
"Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods",
"Parseval Networks: Improving Robustness to Adversarial Examples",
"The Space of Transferable Adversarial Examples",
"Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks",
"Biologically inspired protection of deep networks from adversarial attacks",
"Dropout Inference in Bayesian Neural Networks with Alpha-divergences",
"Detecting Adversarial Samples from Artifacts",
"On the (Statistical) Detection of Adversarial Examples",
"Fast Generation for Convolutional Autoregressive Models",
"On Detecting Adversarial Perturbations",
"PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications",
"Delving into Transferable Adversarial Examples and Black-box Attacks",
"Adversarial Machine Learning at Scale",
"Towards Evaluating the Robustness of Neural Networks",
"Concrete Problems in AI Safety",
"Conditional Image Generation with PixelCNN Decoders",
"Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples",
"Pixel Recurrent Neural Networks",
"Deep Residual Learning for Image Recognition",
"The Limitations of Deep Learning in Adversarial Settings",
"DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks",
"Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks",
"Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"Explaining and Harnessing Adversarial Examples",
"Towards Deep Neural Network Architectures Robust to Adversarial Examples",
"Sequence to Sequence Learning with Neural Networks",
"Very Deep Convolutional Networks for Large-Scale Image Recognition",
"Clustering via Mode Seeking by Direct Estimation of the Gradient of a Log-Density",
"Intriguing properties of neural networks",
"Extracting and composing robust features with denoising autoencoders",
"Manifold Denoising",
"Improving predictive inference under covariate shift by weighting the log-likelihood function",
"Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization",
"A Limited Memory Algorithm for Bound Constrained Optimization",
"An Introduction to the Bootstrap",
"Learning long-term dependencies with gradient descent is difficult",
"F. Related work Most recent work on detecting adversarial examples focuses on adding an outlier class detection module to the classifier, such as Grosse et al",
"Basic iterative method (BIM) Kurakin et al. (2016) tested a simple variant of the fast gradient sign method by applying it multiple times with a smaller step size",
"11 adversarial perturbations of deep neural networks. Perturbations, Optimization, and Statistics",
"Reading Digits in Natural Images with Unsupervised Feature Learning",
"Learning Multiple Layers of Features from Tiny Images",
"An Introduction to the Bootstrap",
"Gradient-based learning applied to document recognition",
"Cifar-10 (canadian institute for advanced research",
"Figure 8: True and generated images from Fashion MNIST. The upper part shows true images sampled from the dataset while the bottom shows generated images from PixelCNN"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"Han Xiao",
"Kashif Rasul",
"Roland Vollgraf"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Madry",
"Aleksandar Makelov",
"Ludwig Schmidt",
"Dimitris Tsipras",
"Adrian Vladu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Weilin Xu",
"David Evans",
"Yanjun Qi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicholas Carlini",
"D. Wagner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Moustapha Cissé",
"Piotr Bojanowski",
"Edouard Grave",
"Yann Dauphin",
"Nicolas Usunier"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Florian Tramèr",
"Nicolas Papernot",
"I. Goodfellow",
"D. Boneh",
"P. Mcdaniel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Weilin Xu",
"David Evans",
"Yanjun Qi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Aran Nayebi",
"S. Ganguli"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yingzhen Li",
"Y. Gal"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Reuben Feinman",
"Ryan R. Curtin",
"S. Shintre",
"Andrew B. Gardner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kathrin Grosse",
"Praveen Manoharan",
"Nicolas Papernot",
"M. Backes",
"P. Mcdaniel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Prajit Ramachandran",
"T. Paine",
"Pooya Khorrami",
"M. Babaeizadeh",
"Shiyu Chang",
"Yang Zhang",
"Mark Hasegawa-Johnson",
"R. Campbell",
"Thomas S. Huang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. H. Metzen",
"Tim Genewein",
"Volker Fischer",
"Bastian Bischoff"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Tim Salimans",
"A. Karpathy",
"Xi Chen",
"Diederik P. Kingma"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yanpei Liu",
"Xinyun Chen",
"Chang Liu",
"D. Song"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Alexey Kurakin",
"I. Goodfellow",
"Samy Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicholas Carlini",
"D. Wagner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Dario Amodei",
"C. Olah",
"J. Steinhardt",
"P. Christiano",
"John Schulman",
"Dandelion Mané"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Aäron van den Oord",
"Nal Kalchbrenner",
"L. Espeholt",
"K. Kavukcuoglu",
"O. Vinyals",
"Alex Graves"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicolas Papernot",
"P. Mcdaniel",
"I. Goodfellow"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Aäron van den Oord",
"Nal Kalchbrenner",
"K. Kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kaiming He",
"X. Zhang",
"Shaoqing Ren",
"Jian Sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicolas Papernot",
"P. Mcdaniel",
"S. Jha",
"Matt Fredrikson",
"Z. B. Celik",
"A. Swami"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Seyed-Mohsen Moosavi-Dezfooli",
"Alhussein Fawzi",
"P. Frossard"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicolas Papernot",
"P. Mcdaniel",
"Xi Wu",
"S. Jha",
"A. Swami"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kaiming He",
"X. Zhang",
"Shaoqing Ren",
"Jian Sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"I. Goodfellow",
"Jonathon Shlens",
"Christian Szegedy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Gu",
"Luca Rigazio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"I. Sutskever",
"O. Vinyals",
"Quoc V. Le"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"K. Simonyan",
"Andrew Zisserman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Hiroaki Sasaki",
"Aapo Hyvärinen",
"Masashi Sugiyama"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Christian Szegedy",
"Wojciech Zaremba",
"I. Sutskever",
"Joan Bruna",
"D. Erhan",
"I. Goodfellow",
"R. Fergus"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Pascal Vincent",
"H. Larochelle",
"Yoshua Bengio",
"Pierre-Antoine Manzagol"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Matthias Hein",
"Markus Maier"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Hidetoshi Shimodaira"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"C. Zhu",
"R. Byrd",
"P. Lu",
"J. Nocedal"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Byrd",
"P. Lu",
"J. Nocedal",
"C. Zhu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. Efron",
"R. Tibshirani"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yoshua Bengio",
"P. Simard",
"P. Frasconi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [],
"affiliation": []
},
{
"name": [],
"affiliation": []
},
{
"name": [
"Yuval Netzer",
"Tao Wang",
"Adam Coates",
"A. Bissacco",
"Bo Wu",
"A. Ng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Krizhevsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [
"Yann LeCun",
"L. Bottou",
"Yoshua Bengio",
"P. Haffner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [],
"affiliation": []
}
],
"arxiv_id": [
"1708.07747v2",
"1706.06083v4",
"1705.10686v1",
"1705.07263v2",
"1704.08847v2",
"1704.03453v2",
"1704.01155v2",
"1703.09202v1",
"1703.02914",
"1703.00410v3",
"1702.06280v2",
"1704.06001v1",
"1702.04267v2",
"1701.05517v1",
"1611.02770",
"1611.01236v2",
"1608.04644v2",
"1606.06565v2",
"1606.05328v2",
"1605.07277",
"1601.06759v3",
"1512.03385v1",
"1511.07528v1",
"1511.04599v3",
"1511.04508v2",
"1502.01852",
"1412.6572v3",
"1412.5068v4",
"1409.3215v3",
"1409.1556",
"1404.5028",
"1312.6199v4",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
[
"methodology"
],
[
"methodology",
"result"
],
[],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background",
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"background"
],
[],
[
"background"
],
[
"methodology",
"result"
],
[
"background"
],
[
"background"
],
[],
[
"background",
"methodology"
],
[],
[
"methodology"
],
[
"background",
"methodology"
],
[],
[
"background",
"methodology"
],
[
"background"
],
[
"background",
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"background",
"methodology"
],
[],
[
"background"
],
[],
[],
[],
[],
[
"methodology"
],
[
"methodology"
],
[],
[],
[]
],
"isInfluential": [
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false
]
} | null | 88 | 7.897727 | 0.666667 | 0.75 | null | null | null | null | null | rJUYGxbCW |
|
amit|lifelong_learning_by_adjusting_priors|ICLR_cc_2018_Conference | Lifelong Learning by Adjusting Priors | In representational lifelong learning an agent aims to continually learn to solve novel tasks while updating its representation in light of previous tasks. Under the assumption that future tasks are related to previous tasks, representations should be learned in such a way that they capture the common structure across learned tasks, while allowing the learner sufficient flexibility to adapt to novel aspects of a new task. We develop a framework for lifelong learning in deep neural networks that is based on generalization bounds, developed within the PAC-Bayes framework. Learning takes place through the construction of a distribution over networks based on the tasks seen so far, and its utilization for learning a new task. Thus, prior knowledge is incorporated through setting a history-dependent prior for novel tasks. We develop a gradient-based algorithm implementing these ideas, based on minimizing an objective function motivated by generalization bounds, and demonstrate its effectiveness through numerical examples. | {
"name": [],
"affiliation": []
} | We develop a lifelong learning approach to transfer learning based on PAC-Bayes theory, whereby priors are adjusted as new tasks are encountered thereby facilitating the learning of novel tasks. | [
"Lifelong learning",
"Transfer learning",
"PAC-Bayes theory"
] | null | 2018-02-15 22:29:36 | 35 | null | null | null | null | null | null | null | null | false | The author's revisions addressed clarity issues and some experimental issues (e.g., including MAML results in the comparison). The work takes an original path to an important problem (transfer learning, essentially). There is a question of significance, and this is due to the fact that the empirical comparisons are still very limited. The task is an artificial one derived from MNIST. I would call this "toy" as well. On this toy task, the approach isn't that much different from MAML, which is not in and of itself a problem, but it would be interesting to have a less superficial discussion of the differences.
The authors mention that they didn't have time for a larger empirical study. I think one is necessary in this case because the work is proposing a new learning algorithm/framework, and the question of its potential impact/significance is an empirical one. | {
"review_id": [
"HyY2Sogef",
"HyJlQlQWf",
"H1qBI-5gG"
],
"review": [
{
"title": "title: Well written paper, with very week experiments.",
"paper_summary": null,
"main_review": "main_review: The author extends existing PAC-Bayes bounds to multi-task learning, to allow the prior to be adapted across different tasks. Inspired by the variational bayes literature, a probabilistic neural network is used to minimize the bound. Results are evaluated on a toy dataset and a synthetically modified version of MNIST. \n\nWhile this paper is well written and addresses an important topic, there are a few points to be discussed:\n\n* Experimental results are really week. The toy experiment only compares the mean of two gaussians. Also, on the synthetic MNIST experiments, no comparison is done with any external algorithms. Neural Statistician, Model-Agnostic Meta-Learning and matching networks all provide decent results on such setup. While it is tolerated to have minimal experiments in a theoretical papers, the theory only extends Pentina & Lampert (2014). Also, similar algorithms can be obtain through variational-bayes evidence lower bound. \n\n* The bound appears to be sub-optimal. A bound where the KL term vanishes by 1/n would probably be tighter. I went in appendix to try to see how the proof could be adapted but it’s definitively not as well written as the rest of the paper. I’m not against putting proofs in appendix but only when it helps clarity. In this case it did not.\n\n* The paper is really about multi-task learning. Lifelong learning implies some continual learning and addressing the catastrophic forgetting issues. I would recommend against overuse of the lifelong learning term.\n\nMinors:\n* Define KLD\n* Section 5.1 : “to toy”\n* Section 5.1.1: “the the”\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting risk bound but empirical evaluation is not convincing",
"paper_summary": null,
"main_review": "main_review: The paper considers multi-task setting of machine learning. The first contribution of the paper is a novel PAC-Bayesian risk bound. This risk bound serves as an objective function for multi-task machine learning. A second contribution is an algorithm, called LAP, for minimizing a simplified version of this objective function. LAP algorithm uses several training tasks to learn a prior distribution P over hypothesis space. This prior distribution P is then used to find a posterior distribution Q that minimizes the same objective function over the test task. The third contribution is an empirical evaluation of LAP over toy dataset of two clusters and over MNIST.\n\nWhile the paper has the title of \"life-long learning\", the authors admit that all experiments are in multi-task setting, where\nthe training is done over all tasks simultaneously. The novel risk bound and LAP algorithm can definitely be applied to life-long setting, where training tasks are available sequentially. But since there is no empirical evaluation in this setting, I suggest to adjust the title of the paper. \n \nThe novel risk bound of the paper is an extension of the bound from [Pentina & Lampert, ICML 2014]. The extension seems to be quite significant. Unlike the bound of [Pentina & Lampert, ICML 2014], the new bound allows to re-use many different PAC-Bayesian complexity terms that were published previously. \n\nI liked risk bound and optimization sections of the paper. But I was less convinced by the empirical experiments. Since \nthe paper improves the risk bound of [Pentina & Lampert, ICML 2014], I expected to see an empirical comparison of LAP and optimization algorithm from the latter paper. To make such comparison fair, both optimization algorithms should use the same base algorithm, e.g. ridge regression, as in [Pentina & Lampert, ICML 2014]. Also I suggest to use the datasets from the latter paper. \n\nThe experiment with multi-task learning over MNIST dataset looks interesting, but it is still a toy experiment. This experiment will be more convincing with more sophisticated datasets (CIFAR-10, ImageNet) and architectures (e.g. Inception-V4, ResNet). \n\nMinor remarks:\nSection 6, line 4: \"Combing\" -> \"Combining\"\nPage 14, first equation: There should be \"=\" before the second expectation.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting algorithm based on a theoretical study, but the main theorem might contain a flaw",
"paper_summary": null,
"main_review": "main_review: I personally warmly welcome any theoretically grounded methods to perform deep learning. I read the paper with interest, but I have two concerns about the main theoretical result (Theorem 1, lifelong learning PAC-Bayes bound).\n* Firstly, the bound is valid for a [0,1]-valued loss, which does not comply with the losses used in the experiments (Euclidean distance and cross-entropy). This is not a big issue, as I accept that the authors are mainly interested in the learning strategy promoted by the bound. However, this should clearly appear in the theorem statement.\n* Secondly, and more importantly, I doubt that the uaw of the meta-posterior as a distribution over priors for each task is valid. In Proposition 1 (the classical single-task PAC-Bayes bound), the bound is valid with probability 1-delta for one specific choice of prior P, and this choice must be independent of the learning sample S. However, it appears that the bound should be valid uniformly for all P in order to be used in Theorem 1 proof (see Equation 18). From a learning point of view, it seems counterintuitive that the prior used in the KL term to learn from a task relies on the training samples (i.e., the same training samples are used to learn the meta-posterior over priors, and the task specific posterior). \n\nA note about the experiments:\nI am slightly disappointed that the authors compared their algorithm solely with methods learning from fewer tasks. I would like to see the results obtained by another method using five tasks. A simple idea would be to learn a network independently for each of the five tasks, and consider as a meta-prior an isotropic Gaussian distribution centered on the mean of the five learned weight vectors.\n\nTypos and minor comments:\n- Equation 1: \\ell is never explicitly defined.\n- Equation 4: Please explicitly define m in this context (size of the learning sample drawn from tau).\n- Page 4, before Equation 5: A dot is missing between Q and \"This\".\n- Page 7, line 3: Missing parentheses around equation number 12.\n- Section 5.1.1, line 5: \"The hypothesis class is a the set of...\"\n- Equation 17: Q_1, ... Q_n are irrelevant.\n\n=== UPDATE ===\nI increased my score after author's rebuttal. See my other post.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.5555555820465088,
0.5555555820465088
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Dear Authors...",
"Authors response",
"We thank the reviewer for the helpful comments, and address the specific points below.",
"We thank the reviewer for the helpful comment.",
"Response to Area Chair",
"Authors response",
"We thank the reviewer for the helpful comments, and address the specific points below.",
"We thank the reviewer for the helpful comments, and address the specific points below.",
"An improved paper"
],
"comment": [
"There are comments by AnonReviewer1 that require your immediate attention and may materially impact your article's acceptance. Please respond as soon as possible.\n\nNote that OpenReview seems to not be sending email announcements for messages not marked Everyone, so please use that designation.",
"We appreciate your thorough review which greatly contributes to our work.\nRegarding the name 'lifelong', we followed the definition of Pentina & Lampert.\nHowever, we agree that the name might be misleading and we will change it to 'meta-learning' in future submissions.",
"* The toy example (section 5) was meant only for visualization of the setup. \nIn the revised version we separated it from the experimental results section.\n\nIn the experimental results part (section 6) we added a comparison to the recently introduced Model-Agnostic Meta-Learning (MAML, Finn, Abbeel, and Levine. \"Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.\" arXiv preprint arXiv:1703.03400 (2017).) .\n\nWe also addressed the comparison to variational methods which maximize the evidence lower bound (section 6). Actually, such methods can be seen as minimizing a bound on the generalization error, but with a complexity terms of the KLD between posterior and prior, which is less tight than the bounds in the paper. We compared the results of such an objective ( which is referred to as LAP-KLD in section 6) and showed that it performed much worse.\n\n* We rewrote the proof - hopefully it is clearer. (see section 3.2 for overview and 8.1 for the full proof).\nIn section 8.2 we also added a bound in which the the KL term can vanish at a rate of 1/m (number of samples) if the empirical error is low. For the number of tasks, n, we preferred to keep the 1/n for simplicity and because this term is less important for the LAP algorithm.\n\n* We added a discussion in the introduction (section 1 - first paragraph) about the distinction from continual learning and from multi-task learning. We hope this clarifies our choice of paper title. There is a clear difference from multi-task learning, since the goal in our work is to acquire knowledge (prior) that, when transferred to new tasks, facilitates learning with low generalization error, rather than using multiple tasks collaboratively to aid each task in the given set of tasks.\n\n",
"Indeed there was a mistake in the constant multiplying the KL divergence of the hyper-distributions.\nUsing the corrected code the results of both methods (varitional and PAC-Bayes) are quite similar (0.9% for LAP-M, 0.75% for LAP-S and 0.85% for LAP-VB).\n",
"We thank the area chair for the helpful comment.\nIndeed there was a problem with the constant, please see our response to AnonReviewer1.\n \nP.S.\nSince the submission of the revised paper we added more experiments that demonstrate the meta-learning performance in varied task-environments and with different number of training-tasks.\nWe would be happy to include those in a revised version if possible.",
"The authors addressed most of my concerns. I will upgrade my score. The only remaining issue is evaluation with more sophisticated datasets and architectures. ",
"* We added a comment about the bounded loss issue (see end of section 2.2). Indeed, this is not a big issue since - theoretically, we can claim to bound a truncated version of the loss, and empirically the losses are almost always smaller than one.\n\n* Thank you for pointing out the delicate issue about our main Theorem. We have rewritten the proofs using a different technique, which clarifies the points made by the reviewer and, in fact, leads to improved bounds (see section 3.2 for overview and 8.1 for full proof). \nIn the new formulation, each task bound holds for all hyper-posteriors and all posteriors, so it is valid to optimize both using the same samples.\nNote that our new theorem deviates significantly in both proof technique and behavior from that in Pentina and Lampert’s work. \n\n* In section 6, we added the experiment you suggested and several other methods which use all the training tasks, including: \n1. Using the bound from Pentina and Lampert’s work as a learning objective, \n2. Using an objective derived from variational methods and hierarchical generative models and 3. A recent method - Model-Agnostic Meta-Learning (MAML, Finn, Abbeel, and Levine. \"Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.\" arXiv preprint arXiv:1703.03400 (2017).) . \n\n",
"* We added a discussion in the introduction about the distinction from multi-task learning (section 1 - first paragraph). \nThere is a clear difference from multi-task, since in lifelong learning the goal is to acquire knowledge (prior) that when transferred to new tasks facilitates good learning.\nWhile we call this transfer setup “lifelong learning ” (as in Pentina and Lampert’s work), it can also be called “learning-to-learn”. But ‘multi-task learning’ is inappropriate because of the different goals and outcome of learning (a prior for learning tasks vs. solutions to given tasks).\n\n* We added an experimental comparison to a learning objective which is based on Pentina and Lampert’s main theorem. As can be seen in section 6, this bound leads to far worse empirical results. We believe that using our theorem leads to better performance since it is a tighter bound.\n\n* Due to technical difficulties and lack of time we cannot provide a high quality multiple data-set evaluation at this time.\nHowever, we did add a comparison to competitive recent approach - Model-Agnostic Meta-Learning (MAML, Finn, Abbeel, and Levine. \"Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.\" arXiv preprint arXiv:1703.03400 (2017).) (see section 6). \n",
"The authors performed a substantial amount of work to address reviewer comments, both from a theoretical and empirical perspective. The submitted revision turns out to be an improved paper, and I raised my score from 5 to 6.\nIn particular, the new PAC-Bayes theorem is much more interesting. \n\nNote that it took me a while to get convinced of the validity of the new proof; I was confused by the fact that the hyper-posterior $\\mathcal Q$ relies on the samples S_1, ..., S_i, ..., S_n, whereas this is never explicitly said in the proof of Section 8.1 (see Equation 18). But it turns out that the result is not affected by this. I think this should be made clearer for the readers benefit.\n\nHowever, the latter point made me realize that the learning algorithm promoted by the theoretical result needs to learn from all tasks simultaneously (it is indeed what is performed in the paper). Considering this, I agree with the two other reviewers that the term \"lifelong learning\" should not be used here, as there is no continuous learning involved. Personally, I consider this framework as a variant of transfer learning, where one observes multiple tasks before learning a target one. That being said, I conceive that this \"overuse\" of the buzzword \"lifelong learning\" has been present in several works lately.\n"
]
} | {
"paperhash": [
"alquier|regret_bounds_for_lifelong_learning",
"andrychowicz|learning_to_learn_by_gradient_descent_by_gradient_descent",
"audibert|pac-bayesian_aggregation_and_multi-armed_bandits",
"baxter|a_model_of_inductive_bias_learning",
"blundell|weight_uncertainty_in_neural_network",
"chaudhari|entropy-sgd:_biasing_gradient_descent_into_wide_valleys",
"clevert|fast_and_accurate_deep_network_learning_by_exponential_linear_units_(elus)",
"devroye|a_probabilistic_thalamiceory_of_pattern_recognition",
"gintare|computing_nonvacuous_generalization_bounds_for_deep_(stochastic)_neural_networks_with_many_more_parameters_than_training_data",
"edwards|towards_a_neural_statistician",
"finn|model-agnostic_meta-learning_for_fast_adaptation_of_deep_networks",
"germain|pac-bayesian_theory_meets_bayesian_inference",
"graves|practical_variational_inference_for_neural_networks",
"keskar|on_large-batch_training_for_deep_learning:_generalization_gap_and_sharp_minima",
"kingma|adam:_a_method_for_stochastic_optimization",
"diederik|auto-encoding_variational_bayes",
"kingma|variational_dropout_and_the_local_reparameterization_trick",
"kirkpatrick|overcoming_catastrophic_forgetting_in_neural_networks",
"lecun|guy_lever,_franc_¸ois_laviolette,_and_john_shawe-taylor._tighter_pac-bayes_bounds_through_distribution-dependent_priors",
"maurer|a_note_on_the_pac_bayesian_theorem",
"maurer|the_benefit_of_multitask_representation_learning",
"mcallester|a_pac-bayesian_tutorial_with_a_dropout_bound",
"david|pac-bayesian_model_averaging",
"tom|the_need_for_biases_in_learning_generalizations",
"neyshabur|a_pacbayesian_approach_to_spectrally-normalized_margin_bounds_for_neural_networks",
"pentina|a_pac-bayesian_bound_for_lifelong_learning",
"pentina|lifelong_learning_with_non-iid_tasks",
"ravi|optimization_as_a_model_for_few-shot_learning",
"razavian|cnn_features_offthe-shelf:_an_astounding_baseline_for_recognition",
"rezende|stochastic_backpropagation_and_approximate_inference_in_deep_generative_models",
"seeger|pac-bayesian_generalisation_error_bounds_for_gaussian_process_classification",
"seldin|pac-bayesian_inequalities_for_martingales",
"shalev|understanding_machine_learning:_from_theory_to_algorithms",
"teh|distral:_robust_multitask_reinforcement_learning",
"ilya|pac-bayes-empirical-bernstein_inequality"
],
"title": [
"Regret bounds for lifelong learning",
"Learning to learn by gradient descent by gradient descent",
"PAC-Bayesian aggregation and multi-armed bandits",
"A model of inductive bias learning",
"Weight uncertainty in neural network",
"Entropy-sgd: Biasing gradient descent into wide valleys",
"Fast and accurate deep network learning by exponential linear units (elus)",
"A Probabilistic Thalamiceory of Pattern Recognition",
"Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data",
"Towards a neural statistician",
"Model-agnostic meta-learning for fast adaptation of deep networks",
"Pac-bayesian theory meets bayesian inference",
"Practical variational inference for neural networks",
"On large-batch training for deep learning: Generalization gap and sharp minima",
"Adam: A method for stochastic optimization",
"Auto-encoding variational bayes",
"Variational dropout and the local reparameterization trick",
"Overcoming catastrophic forgetting in neural networks",
"Guy Lever, Franc ¸ois Laviolette, and John Shawe-Taylor. Tighter pac-bayes bounds through distribution-dependent priors",
"A note on the pac bayesian theorem",
"The benefit of multitask representation learning",
"A pac-bayesian tutorial with a dropout bound",
"Pac-bayesian model averaging",
"The need for biases in learning generalizations",
"A pacbayesian approach to spectrally-normalized margin bounds for neural networks",
"A pac-bayesian bound for lifelong learning",
"Lifelong learning with non-iid tasks",
"Optimization as a model for few-shot learning",
"Cnn features offthe-shelf: an astounding baseline for recognition",
"Stochastic backpropagation and approximate inference in deep generative models",
"Pac-bayesian generalisation error bounds for gaussian process classification",
"Pac-bayesian inequalities for martingales",
"Understanding machine learning: From theory to algorithms",
"Distral: Robust multitask reinforcement learning",
"Pac-bayes-empirical-bernstein inequality"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"pierre alquier",
"massimiliano pontil"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"marcin andrychowicz",
"misha denil",
"sergio gomez",
"matthew w hoffman",
"david pfau",
"tom schaul",
"nando de freitas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jean-yves audibert"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jonathan baxter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"charles blundell",
"julien cornebise",
"koray kavukcuoglu",
"daan wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pratik chaudhari",
"anna choromanska",
"stefano soatto",
"yann lecun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"djork-arné clevert",
"thomas unterthiner",
"sepp hochreiter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" devroye",
"g gyoörfi",
" lugosi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"karolina gintare",
"daniel m dziugaite",
" roy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"harrison edwards",
"amos storkey"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chelsea finn",
"pieter abbeel",
"sergey levine"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pascal germain",
"francis bach",
"alexandre lacoste",
"simon lacoste-julien"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex graves"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nitish shirish keskar",
"dheevatsa mudigere",
"jorge nocedal",
"mikhail smelyanskiy",
"ping tak",
"peter tang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik kingma",
"jimmy ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"p diederik",
"max kingma",
" welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tim diederik p kingma",
"max salimans",
" welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"james kirkpatrick",
"razvan pascanu",
"neil rabinowitz",
"joel veness",
"guillaume desjardins",
"andrei a rusu",
"kieran milan",
"john quan",
"tiago ramalho",
"agnieszka grabska-barwinska"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yann lecun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"andreas maurer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"andreas maurer",
"massimiliano pontil",
"bernardino romera-paredes"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david mcallester"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"a david",
" mcallester"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"m tom",
" mitchell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"srinadh behnam neyshabur",
"david bhojanapalli",
"nathan mcallester",
" srebro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"anastasia pentina",
"christoph h lampert"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"anastasia pentina",
"christoph h lampert"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sachin ravi",
"hugo larochelle"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ali sharif razavian",
"hossein azizpour",
"josephine sullivan",
"stefan carlsson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"danilo jimenez rezende",
"shakir mohamed",
"daan wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"matthias seeger"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yevgeny seldin",
"nicolo franc ¸ois laviolette",
"john cesa-bianchi",
"peter shawe-taylor",
" auer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"shai shalev",
"-shwartz ",
"shai ben-david"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yee whye teh",
"victor bapst",
"wojciech ",
"marian czarnecki",
"john quan",
"james kirkpatrick",
"raia hadsell",
"nicolas heess",
"razvan pascanu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"o ilya",
"yevgeny tolstikhin",
" seldin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1610.08628v1",
"1606.04474v2",
"",
"1106.0245v1",
"1505.05424v2",
"1611.01838v5",
"1511.07289v5",
"",
"1703.11008v2",
"1606.02185v2",
"arXiv:1703.03400",
"",
"",
"arXiv:1609.04836",
"1412.6980v9",
"arXiv:1312.6114",
"1506.02557v2",
"1612.00796v2",
"",
"cs/0411099v1",
"1505.06279v2",
"arXiv:1307.2118",
"",
"",
"arXiv:1707.09564",
"",
"",
"",
"",
"1401.4082v3",
"",
"",
"",
"1707.04175v1",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.555556 | 0.75 | null | null | null | null | null | rJUBryZ0W |
||
kidambi|on_the_insufficiency_of_existing_momentum_schemes_for_stochastic_optimization|ICLR_cc_2018_Conference | 1803.05591v2 | On the insufficiency of existing momentum schemes for Stochastic Optimization | Momentum based stochastic gradient methods such as heavy ball (HB) and Nesterov's accelerated gradient descent (NAG) method are widely used in practice for training deep networks and other supervised learning models, as they often provide significant improvements over stochastic gradient descent (SGD). Rigorously speaking, fast gradient methods have provable improvements over gradient descent only for the deterministic case, where the gradients are exact. In the stochastic case, the popular explanations for their wide applicability is that when these fast gradient methods are applied in the stochastic case, they partially mimic their exact gradient counterparts, resulting in some practical gain. This work provides a counterpoint to this belief by proving that there exist simple problem instances where these methods cannot outperform SGD despite the best setting of its parameters. These negative problem instances are, in an informal sense, generic; they do not look like carefully constructed pathological instances. These results suggest (along with empirical evidence) that HB or NAG's practical performance gains are a by-product of minibatching.
Furthermore, this work provides a viable (and provable) alternative, which, on the same set of problem instances, significantly improves over HB, NAG, and SGD's performance. This algorithm, referred to as Accelerated Stochastic Gradient Descent (ASGD), is a simple to implement stochastic algorithm, based on a relatively less popular variant of Nesterov's Acceleration. Extensive empirical results in this paper show that ASGD has performance gains over HB, NAG, and SGD. The code for implementing the ASGD Algorithm can be found at https://github.com/rahulkidambi/AccSGD.
| {
"name": [
"rahul kidambi",
"praneeth netrapalli",
"prateek jain",
"sham m kakade"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Washington",
"location": "{'settlement': 'Seattle'}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{'country': 'India'}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{'country': 'India'}"
},
{
"laboratory": "",
"institution": "University of Washington",
"location": "{'settlement': 'Seattle'}"
}
]
} | null | [
"Computer Science",
"Mathematics"
] | Information Theory and Applications Workshop | 2018-02-15 | 47 | 103 | null | null | null | null | null | null | null | true | The reviewers unanimously recommended that this paper be accepted, as it contains an important theoretical result that there are problems for which heavy-ball momentum cannot outperform SGD. The theory is backed up by solid experimental results, and the writing is clear. While the reviewers were originally concerned that the paper was missing a discussion of some related algorithms (ASVRG and ASDCA), these concerns were addressed during the discussion.
| {
"review_id": [
"Sy3aR8wxz",
"Sy2Sc4CWz",
"Sk0uMIqef"
],
"review": [
{
"title": "title: Nice idea, Like the paper",
"paper_summary": null,
"main_review": "main_review: I like the idea of the paper. Momentum and accelerations are proved to be very useful both in deterministic and stochastic optimization. It is natural that it is understood better in the deterministic case. However, this comes quite naturally, as deterministic case is a bit easier ;) Indeed, just recently people start looking an accelerating in stochastic formulations. There is already accelerated SVRG, Jain et al 2017, or even Richtarik et al (arXiv: 1706.01108, arXiv:1710.10737).\n\nI would somehow split the contributions into two parts:\n1) Theoretical contribution: Proposition 3 (+ proofs in appendix)\n2) Experimental comparison.\n\nI like the experimental part (it is written clearly, and all experiments are described in a lot of detail).\n\nI really like the Proposition 3 as this is the most important contribution of the paper. (Indeed, Algorithms 1 and 2 are for reference and Algorithm 3 was basically described in Jain, right?). \n\nSignificance: I think that this paper is important because it shows that the classical HB method cannot achieve acceleration in a stochastic regime.\n\nClarity: I was easy to read the paper and understand it.\n\nFew minor comments:\n1. Page 1, Paragraph 1: It is not known only for smooth problems, it is also true for simple non-smooth (see e.g. https://link.springer.com/article/10.1007/s10107-012-0629-5)\n2. In abstract : Line 6 - not completely true, there is accelerated SVRG method, i.e. the gradient is not exact there, also see Recht (https://arxiv.org/pdf/1701.03863.pdf) or Richtarik et al (arXiv: 1706.01108, arXiv:1710.10737) for some examples where acceleration can be proved when you do not have an exact gradient.\n3. Page 2, block \"4\" missing \".\" in \"SGD We validate\"....\n4. Section 2. I think you are missing 1/2 in the definition of the function. Otherwise, you would have a constant \"2\" in the Hessian, i.e. H= 2 E[xx^T]. So please define the function as f_i(w) = 1/2 (y - <w,x_i>)^2. The same applies to Section 3.\n5. Page 6, last line, .... was downloaded from \"pre\". I know it is a link, but when printed, it looks weird. \n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Accept",
"paper_summary": null,
"main_review": "main_review: I only got access to the paper after the review deadline; and did not have a chance to read it until now. Hence the lateness and brevity.\n\nThe paper is reasonably well written, and tackles an important problem. I did not check the mathematics. \n\nBesides the missing literature mentioned by other reviewers (all directly relevant to the current paper), the authors should also comment on the availability of accelerated methods inn the finite sum / ERM setting. There, the questions this paper is asking are resolved, and properly modified stochastic methods exist which offer acceleration over SGD (and not through minibatching). This paper does not comment on these developments. Look at accelerated SDCA (APPROX, ASDCA), accelerated SVRG (Katyusha) and so on.\n\nProvided these changes are made, I am happy to suggest acceptance.\n\n\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Good paper, accept",
"paper_summary": null,
"main_review": "main_review: I wonder how the ASGD compares to other optimization schemes applicable to DL, like Entropy-SGD, which is yet another algorithm that provably improves over SGD. This question is also valid when it comes to other optimization schemes that are designed for deep learning problems. For instance, Entropy-SGD and Path-SGD should be mentioned and compared with. As a consequence, the literature analysis is insufficient. \n\nAuthors provided necessary clarifications. I am raising my score.\n\n\n\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.6666666865348816,
0.7777777910232544,
0.6666666865348816
],
"confidence": [
0.75,
1,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Rebuttal",
"re: The parameters of momentum methods",
"Rebuttal",
"re: The parameters of momentum methods",
"list of changes made to the manuscript",
"re: The parameters of momentum methods",
"Rebuttal"
],
"comment": [
"Thanks a lot for insightful comments. We have updated the paper taking into account several of your comments. We will make more updates according to your suggestions. \n\n\nPaper organization: we will try to better organize the paper to highlight the contributions. \nProposition 3's importance: yes, your assessment is spot on.\n\nMinor comment 1,2: Thanks for pointing the minor mistake, we have updated the corresponding lines. Papers such as Accelerated SVRG, Recht et al. are offline stochastic accelerated methods. The paper of Richtarik (arXiv:1706.01108) deals with solving consistent linear systems in the offline setting; (arXiv:1710.10737) is certainly relevant and we will add more detailed comparison with this line of work. \nMinor comment 3, 5: thanks for pointing out the typos. They are fixed. \nMinor comment 4: Actually, the problem is a discrete problem where one observes one hot vectors in 2-dimensions, each of the vectors can occur with probability 1/2. So this is the reason why the Hessian does not carry an added factor of 2.\n\n\n",
"Part 1 of 2:\n[1] Thanks for clarifying your precise question. Answer: ASGD is indeed a generalization of NAG (i.e., there is some setting of parameters of ASGD which recovers NAG with fixed step size and momentum parameters) but ASGD does not correspond to NAG with time-varying step size/momentum. Below, we clarify the precise difference between NAG and ASGD (both with fixed parameters) and intuitively why ASGD might perform better than NAG.\n\nASGD\n----\nStart at x_0=y_0=v_0 (say).\nFor t = 1,2,...,T Repeat:\n v_t = (1-alpha) * (y_{t-1} - gamma * g(y_{t-1})) + alpha * v_{t-1}; /* gradient descent with long step \"gamma\" and exponential decaying average of past gradients*/\n y_t = beta * (y_{t-1} - delta * g(y_{t-1})) + (1-beta) * v_t;\t/* gradient descent with short step \"delta\" and exponential decaying average of past gradients*/\nend\n\nwhere g(y_{t-1}) is the gradient at y_{t-1} and \"alpha\", \"gamma\" and \"delta\" are the parameters of ASGD and \"beta = c/(c+1-alpha)\" (0<c<1 is arbitrary; we choose c = 0.7). Of these parameters, \"delta\" corresponds to the step size in standard terminology. This algorithm performs an exponentially decaying average of past gradients with long step \"gamma\" and short step \"delta\". The long step \"gamma\" is of size \"1/mu\" and short step \"delta\" is of size \"1/L\", where \"mu\" and \"L\" are strong convexity and smoothness of the problem respectively.\n\nIf we set \"alpha = theta*(1+c)/(c+theta)\" and \"gamma = (delta/(1-alpha)^2) * c * (1+c*theta)/(c+theta)\", then ASGD written above is exactly NAG with step size \"delta\" and momentum \"theta\". To see this, we note that the variables \"y_t\" and \"x_t = y_{t-1} - delta * g(y_{t-1})\" from ASGD above satisfy\n\n x_t = y_{t-1} - delta * g(y_{t-1});\n y_t = (1+theta) * x_t - theta * x_{t-1};\n\nwhich correspond to NAG updates. In this setting, the decay factor \"(1-alpha) ~ \\sqrt(delta/gamma)\" since for 0<theta<1, c * (1+c*theta)/(c+theta) is some reasonable constant between 0 and 2. For \"delta = 1/L\" and \"gamma = 1/mu\", we have \"(1-alpha) ~ \\sqrt(mu/L)\".\n\nThe difference in ASGD however, is that the decay factor \"1-alpha\" can be much smaller than \"\\sqrt(delta/gamma)\" since it is an independent parameter. In fact, there are some bad problems, where \"1-alpha\" needs to be smaller than \"delta/gamma\" (otherwise the algorithm might diverge) and for this choice we do not see acceleration (in these cases, note that \"1-alpha ~ mu/L\"). On the other hand, for good problems, \"1-alpha\" can be chosen to be closer to \"\\sqrt(delta/gamma)\" and for this choice we do see acceleration. See Jain et al. 2017 (https://arxiv.org/abs/1704.08227) for examples of such good problems (where acceleration is possible in the stochastic world) and bad problems (where acceleration is not possible in the stochastic world).\n\nThis view also suggests a plausible explanation for why time varying momentum parameter is perhaps necessary to get good performance for NAG (on several problems with stochastic gradients). For NAG to be stable and not diverge on some problems, we might require \"1-alpha ~ delta/gamma\" while NAG by design enforces \"1-alpha ~\\sqrt(delta/gamma)\". This means that \"delta/gamma ~ \\sqrt(delta/gamma)\" or equivalently \"\\sqrt(delta/gamma) ~ 1\". This implies that \"theta\" is away from 1 (cannot use large momentum). 
ASGD overcomes this issue by decoupling the short step-long step ratio (\"delta/gamma\") from the decay factor and using appropriate decay factors to ensure convergence of the algorithm.\n\nTo summarize the discussion, ASGD is indeed a generalization of NAG by decoupling the decay factor for average gradients from short step-long step ratio. However, the decay factor/short step/long step do not change with time. In our view, this seems to fix NAG by making it convergent with larger \"long steps\" as compared to vanilla NAG.\n\nIt would be very interesting to further try varying the parameters of ASGD for the neural net experiments and verify if it indeed improves the performance of ASGD. For the theoretical example mentioned in the paper, this is not required.",
"Thanks for your comments. \n\nWe have cited Entropy SGD and Path SGD papers and discuss the differences in Section 6 (related works). However, both the methods are complementary to our method. \n\nEntropy SGD adds a local strong convexity term to the objective function to improve generalization. However, currently we do not understand convergence rates or generalization performance of the technique rigorously, even for convex problems. The paper proposes to use SGD to optimize the altered objective function and mentions that one can use SGD+momentum as well (below algorithm box on page 6). Naturally, one can use the ASGD method as well to optimize the proposed objective function in the paper. \n\nPath SGD uses a modified SGD like update to ensure invariance to the scale of the data. Here again, the main goal is orthogonal to our work and one can easily use ASGD method in the same framework. \n",
"Part 2 of 2:\n[2] There appears to be some discrepancy in the usage of realizable and agnostic cases. We will clarify these issues more precisely:\n\nLet b = w.a + eps be underlying data generation model, with w.a = sum_i w_i a_i;\n\n\"Noiseless case\" -- In this case, eps = 0 always, we have the consistent linear system case since there exists w* such that for all (a,b), b = a.w*; This is the case considered in this paper. However, this does not mean that we have standard gradient descent since we only get gradient information from a single sample. This setting carries what is known as \"multiplicative\" noise owing to sampling gradients (i.e., from sampling 'a' and 'b') instead of computing a full gradient. \n\n\"Realizable case\" -- In this case, eps is a zero mean random variable and independent of a. Example: sample epsilon from a zero mean gaussian with standard deviation sigma. \n\n\"Non-realizable/agnostic case\" -- In this case, eps shares correlations with a.\n\nLet us now give some background on the error of SGD type algorithms. The error of any SGD type algorithm can be written as a sum of two parts: \"bias\" representing dependence of the error on the starting point w_0, and \"variance\" due to the noise \"eps\". If we run SGD or similar methods with a fixed step size and consider the last point, the \"bias\" error decays geometrically as exp(-n) but the \"variance\" error does not decay with n (here 'n' is the number of samples or SGD steps). Polyak averaging fixes this issue and decays the \"variance\" error at the right 1/n rate. However, if we average the iterates right from the start, the rate of decay of \"bias\" becomes 1/n^2, which is sub-optimal compared to exp(-n) from before -- this is the motivation to tail-average (i.e., average only the last several iterates). This gets the best of both worlds in the sense that we get a geometric exp(-n) decay on the \"bias\" term and the optimal 1/n decay of the \"variance\" term. However, it is important to note that there is a problem dependent constant that determines the exp(-n) rate of \"bias\" decay. More concretely, for (tail-averaged or non-averaged) SGD the rate of \"bias\" decay is exp(-n/\\kappa) where \\kappa is the condition number. In this paper, we show that the same rate exp(-n/\\kappa) is tight for both (non-averaged) HB and NAG (theoretically for HB and empirically for both). The rate of \"bias\" decay of Polyak-averaged or tail-averaged HB/NAG can only be worse (averaging never helps the \"bias\" term). Jain et al. 2017 shows that tail-averaged ASGD gets \"bias\" decay rate exp(-n/\\sqrt{\\kappa \\tilde{\\kappa}}), which is always better than that of SGD/HB/NAG since \\tilde{\\kappa} \\leq \\kappa. Furthermore, they also show that tail-averaged ASGD decays the \"variance\" error at the right 1/n rate. This means that tail-averaged ASGD improves upon the \"bias\" decay rate as compared to SGD/HB/NAG while achieving the same (optimal upto absolute numerical constants i.e., not problem dependent) decay rate on the \"variance\" term as (Polyak-averaged) SGD.\n\nSo to summarize, ASGD improves upon the \"bias\" decay rate of SGD/HB/NAG. Polyak averaging or tail-averaging is a complementary technique and improves the \"variance\" decay rate. 
For instance, tail-averaging can be used on top of ASGD and this is better than Polyak-averaged or tail-averaged SGD.\n\nSince the improvement of ASGD over SGD/HB/NAG is in the \"bias\" term, we tried to illustrate this using an example where the \"variance\" term is equal to zero. This is the reason we consider the noiseless or consistent linear system case. While we illustrate our results in this scenario, the claims of superiority of ASGD over SGD/HB/NAG carry over to the realizable case as well (i.e., eps is a zero mean, independent random variable) due to the reasoning in the above two paragraphs.\n\nReferences: For a precise understanding of the behavior of SGD for least squares with realizable/agnostic noise with or without Polyak averaging, refer to \"Parallelizing Stochastic Approximation Through Mini-Batching and Tail-Averaging\" (https://arxiv.org/abs/1610.03774). Behavior of ASGD for least squares and Polyak averaging of the final few iterates can be precisely understood from \"Accelerating Stochastic Gradient Descent\" (https://arxiv.org/abs/1704.08227).",
"We group the list of changes made to the manuscript based on suggestions of reviewers:\n\nAnonReviewer 3:\n- Added a paragraph on accelerated and fast methods for finite sums and their implications in the deep learning context. (in related work)\n\nAnonReviewer 2:\n- Included reference on Acceleration for simple non-smooth problems. (in page 1)\n- Included reference on Accelerated SVRG and other suggested references. (in related work)\n- Fixed citations for pytorch/download links and fixed typos.\n\nAnonReviewer 1:\n- Added a paragraph on entropic sgd and path normalized sgd and their complimentary nature compared to this work's message (in related work section).\n\nOther changes:\n- In the related work: background about Stochastic Heavy Ball, adding references addressing reviewer feedback.\n- Removed statement on generalization/batch size. (page 2)\n- Fixed minor typos. (page 3)\n- Added comment about NAG lower bound conjecture. (page 4, below proposition 3)",
"Thank you for your interest and questions:\n\n[1] Regarding decaying momentum: In smooth convex optimization, Nesterov's scheme is employed with time-varying momentum. In the smooth+strongly convex case, any accelerated method (including Nesterov's method) is implemented with a constant momentum term (see for example, Bubeck 2015). So, in a sense, for the (strongly convex) consistent linear system case described in the paper, a constant momentum term is in accordance with the prescription from convex optimization.\n\nFor the neural net examples, we used typical strategies of a constant momentum term, as employed in common packages (like tensorflow/pytorch), since these appear to be the most widely used in practice. Moreover, there are several parameters to tune and perform a grid search on, so a scheme for varying momentum just adds to making these grid searches longer. We note that time-varying momentum schemes can also be added to the proposed ASGD method and these comparisons can be made.\n\nFinally, in the case that you'd think that the ASGD method as described in the paper varies the rate of decay of average gradients over iterations, we would like to clarify that this is not the case and that ASGD also retains constant learning rate/momentum parameters across iterations.\n\n[2] For the bounds with Polyak averaging: Note that averaging the iterates of any stochastic gradient method provides gains only when there is additive noise. In the noiseless case, as you mention, the iterates of SGD converge linearly to the minimizer. Averaging the iterates is strictly worse than the final iterate for the noiseless case and leads to sub-linear convergence of the iterates towards the minimizer. So by no means, averaging iterates of SGD/NAG/HB can improve over ASGD (or even unaveraged versions of SGD/NAG/HB for that matter). For the behavior of Polyak-averaged SGD, refer to Jain et al. 2016 ''Parallelizing stochastic approximation through mini-batching and tail-averaging'' - Figure 2a (red and green curves represent averaged and unaveraged SGD respectively) and Theorem 2 for theoretical bounds (by setting $\\Sigma=0$ for the consistent linear system case). \n",
"Thanks for the references, we have included them in the paper and added a paragraph in Section 6 providing detailed comparison and key differences that we summarize below: \n \nASDCA, Katyusha, accelerated SVRG: these methods are \"offline\" stochastic algorithms that is they require multiple passes over the data and require multiple rounds of full gradient computation (over the entire training data). In contrast, ASGD is a single pass algorithm and requires gradient computation only a single data point at a time step. In the context of deep learning, this is a critical difference, as computing gradient over entire training data can be extremely slow. See Frostig, Ge, Kakade, Sidford ``Competing with the ERM in a single pass\" (https://arxiv.org/pdf/1412.6606.pdf) for a more detailed discussion on online vs offline stochastic methods. \n\nMoreover, the rate of convergence of the ASDCA depend on \\sqrt{\\kappa n} while the method studied in this paper has \\sqrt{\\kappa \\tilde{kappa}} dependence where \\tilde{kappa} can be much smaller than n. \n\n\n\n\n\n\n\n\n\n"
]
} | {
"paperhash": [
"zhang|yellowfin_and_the_art_of_momentum_tuning",
"chaudhari|entropy-sgd:_biasing_gradient_descent_into_wide_valleys",
"keskar|on_large-batch_training_for_deep_learning:_generalization_gap_and_sharp_minima",
"allen-zhu|katyusha:_the_first_direct_acceleration_of_stochastic_gradient_methods",
"he|identity_mappings_in_deep_residual_networks",
"he|deep_residual_learning_for_image_recognition",
"martens|optimizing_neural_networks_with_kronecker-factored_approximate_curvature",
"kingma|adam:_a_method_for_stochastic_optimization",
"defazio|saga:_a_fast_incremental_gradient_method_with_support_for_non-strongly_convex_composite_objectives",
"d’aspremont|smooth_optimization_with_approximate_gradient",
"robbins|a_stochastic_approximation_method",
"loizou|linearly_convergent_stochastic_heavy_ball_method_for_minimizing_generalization_error",
"reddi|a_generic_approach_for_escaping_saddle_points",
"jain|accelerating_stochastic_gradient_descent",
"jain|parallelizing_stochastic_approximation_through_mini-batching_and_tail-averaging",
"dieuleveut|harder,_better,_faster,_stronger_convergence_rates_for_least-squares_regression",
"defazio|a_simple_practical_accelerated_method_for_finite_sums",
"frostig|un-regularizing:_approximate_proximal_point_and_faster_stochastic_algorithms_for_empirical_risk_minimization",
"neyshabur|path-sgd:_path-normalized_optimization_in_deep_neural_networks",
"lin|a_universal_catalyst_for_first-order_optimization",
"frostig|competing_with_the_empirical_risk_minimizer_in_a_single_pass",
"shalev-shwartz|accelerated_proximal_stochastic_dual_coordinate_ascent_for_regularized_loss_minimization",
"shalev-shwartz|stochastic_dual_coordinate_ascent_methods_for_regularized_loss",
"nesterov|efficiency_of_coordinate_descent_methods_on_huge-scale_optimization_problems",
"roux|a_stochastic_gradient_method_with_an_exponential_convergence_rate_for_strongly-convex_optimization_with_finite_training_sets",
"rakhlin|making_gradient_descent_optimal_for_strongly_convex_stochastic_optimization"
],
"title": [
"YellowFin and the Art of Momentum Tuning",
"ENTROPY-SGD: BIASING GRADIENT DESCENT INTO WIDE VALLEYS",
"ON LARGE-BATCH TRAINING FOR DEEP LEARNING: GENERALIZATION GAP AND SHARP MINIMA",
"Katyusha: The First Direct Acceleration of Stochastic Gradient Methods",
"Identity Mappings in Deep Residual Networks",
"Deep Residual Learning for Image Recognition",
"",
"Published as a conference paper at ICLR 2015 ADAM: A METHOD FOR STOCHASTIC OPTIMIZATION",
"Complexity Analysis of the Lasso Regularization Path",
"OpenAI Gym",
"Institute of Mathematical Statistics is collaborating with JSTOR to digitize, preserve, and extend access to The Annals of Mathematical Statistics",
"Linearly Convergent Stochastic Heavy Ball Method for Minimizing Generalization Error",
"A Generic Approach for Escaping Saddle points",
"Accelerating Stochastic Gradient Descent For Least Squares Regression *",
"Parallelizing Stochastic Gradient Descent for Least Squares Regression: mini-batching, averaging, and model misspecification *",
"Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression",
"A Simple Practical Accelerated Method for Finite Sums",
"Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization",
"Path-SGD: Path-Normalized Optimization in Deep Neural Networks",
"A Universal Catalyst for First-Order Optimization",
"Competing with the Empirical Risk Minimizer in a Single Pass",
"Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization",
"Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization",
"CORE DISCUSSION PAPER 2010_2 Efficiency of coordinate descent methods on huge-scale optimization problems",
"A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets",
"Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [],
"affiliation": []
},
{
"name": [
"pratik chaudhari",
"anna choromanska",
"stefano soatto",
"yann lecun",
"carlo baldassi",
"christian borgs",
"jennifer chayes",
"levent sagun",
"riccardo zecchina"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Los Angeles'}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Los Angeles'}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Microsoft Research New England",
"location": "{'settlement': 'Cambridge'}"
},
{
"laboratory": "",
"institution": "Microsoft Research New England",
"location": "{'settlement': 'Cambridge'}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nitish shirish keskar",
"dheevatsa mudigere",
"jorge nocedal",
"mikhail smelyanskiy",
"ping tak",
"peter tang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
}
]
},
{
"name": [
"jasper snoek",
"hugo larochelle",
"ryan p adams"
],
"affiliation": [
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
}
]
},
{
"name": [
"diederik p kingma",
"jimmy lei ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"julien mairal"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Berkeley'}"
}
]
},
{
"name": [
"greg brockman",
"vicki cheung",
"ludwig pettersson",
"jonas schneider",
"john schulman",
"jie tang",
"wojciech zaremba openai"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [
"nicolas loizou",
"peter richtárik"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sashank j reddi",
"manzil zaheer",
"suvrit sra",
"barnabás póczos",
"francis bach",
"ruslan salakhutdinov",
"alexander j smola"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"prateek jain",
"sham m kakade",
"rahul kidambi",
"praneeth netrapalli",
"aaron sidford"
],
"affiliation": [
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{'settlement': 'Bangalore', 'country': 'India'}"
},
{
"laboratory": "",
"institution": "University of Washington",
"location": "{'settlement': 'Seattle', 'region': 'WA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of Washington",
"location": "{'settlement': 'Seattle', 'region': 'WA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{'settlement': 'Bangalore', 'country': 'India'}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{'settlement': 'Palo Alto', 'region': 'CA', 'country': 'USA'}"
}
]
},
{
"name": [
"prateek jain",
"sham m kakade",
"rahul kidambi",
"praneeth netrapalli",
"aaron sidford"
],
"affiliation": [
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{'settlement': 'Bangalore', 'country': 'India'}"
},
{
"laboratory": "",
"institution": "University of Washington",
"location": "{'settlement': 'Seattle', 'region': 'WA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of Washington",
"location": "{'settlement': 'Seattle', 'region': 'WA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{'settlement': 'Bangalore', 'country': 'India'}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{'settlement': 'Palo Alto', 'region': 'CA', 'country': 'USA'}"
}
]
},
{
"name": [
"aymeric dieuleveut",
"nicolas flammarion",
"francis bach"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"aaron defazio ambiata",
"sydney australia"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"roy frostig",
"rong ge",
"sham m kakade",
"aaron sidford"
],
"affiliation": [
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "Microsoft Research",
"institution": "",
"location": "{'country': 'New England'}"
},
{
"laboratory": "Microsoft Research",
"institution": "",
"location": "{'country': 'New England'}"
},
{
"laboratory": "",
"institution": "MIT",
"location": "{}"
}
]
},
{
"name": [
"behnam neyshabur",
"ruslan salakhutdinov",
"nathan srebro"
],
"affiliation": [
{
"laboratory": "",
"institution": "Toyota Technological Institute at Chicago",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{}"
},
{
"laboratory": "",
"institution": "Toyota Technological Institute at Chicago",
"location": "{}"
}
]
},
{
"name": [
"hongzhou lin",
"julien mairal",
"zaid harchaoui",
" nyu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"roy frostig",
"rong ge",
"sham m kakade",
"aaron sidford"
],
"affiliation": [
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"shai shalev-shwartz",
"tong zhang"
],
"affiliation": [
{
"laboratory": "",
"institution": "The Hebrew University",
"location": "{'settlement': 'Jerusalem', 'country': 'Israel'}"
},
{
"laboratory": "",
"institution": "Rutgers University",
"location": "{'region': 'NJ', 'country': 'USA'}"
}
]
},
{
"name": [
"shai shalev-shwartz",
"tong zhang"
],
"affiliation": [
{
"laboratory": "",
"institution": "Hebrew University",
"location": "{'settlement': 'Jerusalem', 'country': 'Israel'}"
},
{
"laboratory": "",
"institution": "Rutgers University",
"location": "{'region': 'NJ', 'country': 'USA'}"
}
]
},
{
"name": [
"yu nesterov"
],
"affiliation": [
{
"laboratory": "",
"institution": "Université catholique de Louvain, CORE and INMA",
"location": "{'postCode': 'B-1348', 'settlement': 'Louvain-la-Neuve', 'country': 'Belgium'}"
}
]
},
{
"name": [
"nicolas le roux",
"mark schmidt",
"francis bach"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alexander rakhlin",
"ohad shamir"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1706.03471v2",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | 1.22619 | 0.703704 | 0.75 | null | null | null | null | null | rJTutzbA- |
|
neal|learning_generative_models_with_locally_disentangled_latent_factors|ICLR_cc_2018_Conference | Learning Generative Models with Locally Disentangled Latent Factors | One of the most successful techniques in generative models has been decomposing a complicated generation task into a series of simpler generation tasks. For example, generating an image at a low resolution and then learning to refine that into a high resolution image often improves results substantially. Here we explore a novel strategy for decomposing generation for complicated objects in which we first generate latent variables which describe a subset of the observed variables, and then map from these latent variables to the observed space. We show that this allows us to achieve decoupled training of complicated generative models and present both theoretical and experimental results supporting the benefit of such an approach. | {
"name": [],
"affiliation": []
} | Decompose the task of learning a generative model into learning disentangled latent factors for subsets of the data and then learning the joint over those latent factors. | [
"Generative Models",
"Hierarchical Models",
"Latent Variable Models"
] | null | 2018-02-15 22:29:41 | 21 | null | null | null | null | null | null | null | null | false | Reviewers recognize the proposed method of hierarchical extension to ALI to be potentially novel and interesting but have expressed strong concerns on the experiments section. The paper also needs to have comparisons with relevant hierarchical generative model baselines. Not suitable for publication in its current form. | {
"review_id": [
"HypMNiy-G",
"H13up9Klf",
"ByKf-8jlG"
],
"review": [
{
"title": "title: A hierarchical extension of ALI. Not well-prepared paper",
"paper_summary": null,
"main_review": "main_review: Training GAN in a hierarchical optimization schedule shows promising performance recently (e.g. Zhao et al., 2016). However, these works utilize the prior knowledge of the data (e.g. image) and it's hard to generalize it to other data types (e.g. text). The paper aims to learn these hierarchies directly instead of designing by human. However, several parts are missing and not well-explained. Also, many claims in paper are not proved properly by theory results or empirical results. \n\n(1) It is not clear to me how to train the proposed algorithm. My understanding is train a simple ALI, then using the learned latent as the input and train the new layer. Do the authors use a separate training ? or a joint training algorithms. The authors should provide a more clear and rigorous objective function. It would be even better to have a pseudo code. \n\n(2) In abstract, the authors claim the theoretical results are provided. I am not sure whether it is sec 3.2 The claims is not clear and limited. For example, what's the theory statement of [Johnsone 200; Baik 2005]. What is the error measure used in the paper? For different error, the matrix concentration bound might be different. Also, the union bound discussed in sec 3.2 is also problematic. Lats, for using simple standard GAN to learn mixture of Gaussian, the rigorous theory result doesn't seem easy (e.g. [1]) The author should strive for this results if they want to claim any theory guarantee. \n\n(3) The experiments part is not complete. The experiment settings are not described clearly. Therefore, it is hard to justify whether the proposed algorithm is really useful based on Fig 3. Also, the authors claims it is applicable to text data in Section 1, this part is missing in the experiment. Also, the idea of \"local\" disentangled LV is not well justified to be useful.\n\n[1] On the limitations of first order approximation in GAN dynamics, ICLR 2018 under review\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: This work is incomplete and does not worth publishing with its current quality.",
"paper_summary": null,
"main_review": "main_review: This paper proposed a method called Locally Disentangled Factors for hierarchical latent variable generative model, which can be seen as a hierarchical variant of Adversarially Learned Inference (Dumoulin el atl. 2017). The idea seems to be a valid variant, however, the quality of the paper is not good. The introduction and related works sections read well, but the rest of the paper has not been written well. More specifically, the content in section 3 and experiment section is messy. Also the experiments have not been conducted thoroughly, and the results and the interpretation of the results are not complete.\n\nIntroduction:\nAlthough in introduction the author discussed a lot of works on hierarchical latent variable model and some motivating examples, after reading it the reviewer has absolutely no idea what the paper is about (except hierarchical latent variable model), what is the motivation, what is the general idea, what is the contribution of the paper. Only after carefully reading the detailed implementation in section 3.1 and section 5, did I realize that what the authors are actually doing is to use N variables to model N different parts of the observation, and one higher level variable to model the N variables. The paper should really more precisly state what the idea is throughout the paper, instead of causing confusion and ambiguity.\n\nSection 3:\n1. The concepts of \"disentanglment\" and \"local connectivity\" are really unnecessary and confusing. First, the whole paper and experiments has nothing to do with \"local connectivity\". Even though you might have the intention to propose the idea, you didn't show any support for the idea. Second, what you actually did is to use top level variable to generate N latent variables. That could hardly called \"disentanglement\". The mean field factorization in (Kingma & Welling 2013) is on the inference side (Q not P), and as found out in literature, it could not achieve disentanglement.\n\n2. In section 3.2, I understand that you want to say the hierarchical model may require less data sample. But, here you are really off-topic. It would be much better if you can relate to the proposed method, and state how it may require less data.\n\n3. Section 3.1 is more important, and is really major part of your method. Therefore, it need more extensive discussion and emphasis.\n\nExperiment:\nThis section is really bad.\n1. Since in the introduction and related works, there are already so many hierarchical latent variable model listed, the baseline methods should really not just vanilla GAN, but hierarchical latent variable models, such as the Hierachical VAE, Variational Ladder Autoencoder in (Zhao et al. 2017), ALI (not hierarchical, but should be a baseline) in (Dumoulin et al. 2017), etc.\n\n2. Since currently there is still no standard way to evaluate the quality of image generation, by giving only inception score, we can really not judge whether it is good or not. You need to give more metrics, or generation examples, recontruction examples, and so on. And equally importantly, compare and discuss about the results. Not just leave it there.\n\n3. For section 5.2, similar problems as above exist. Baseline methods might be insufficient. The paper only shows several examples, and the reviewer cannot draw any conclusion about it. Nor does the paper discuss any of the results.\n\n4. Section 5.3, second to the last line, typo: \"This is shown in 7\". Also this result is not available. \n\n5. 
More importantly, some experiments should be conducted to explicitly show the validity of the proposed hierarchical latent model idea. Show that it exists and works by some experiment explicitly.\n\nAnother suggestion the review would like to make is that, instead of proposing the general framework in section 2, it would be better to propose the hierarchical model in the context of section 3.1. That is, instead of saying z_0 -> z_1 ->... ->x, what the paper and experiment is really about is z_0 -> z_{1,1}, z_{1,2} ... z_{1,N} -> x_{1}, x_{2},...,x_{N}, where z_{1,1...N} are distinct variables. How section 2 is related to the learning of this might be concatenating these N distinct variables into one (if that's what you mean). Talking about the joint distribution and inference process in this way might more align with your idea. Also, the paper actually only deals with 2 level. It seems to me that it's meaningless to generalize to n levels in section 2, since you do not have any support of it.\n\nIn conclusion, the reviewer thinks that this work is incomplete and does not worth publishing with its current quality.\n==============================================================\nThe reviewer read the response from the authors. However, I do not think the authors resolved the issues I mentioned. And I am still not convinced by the quality of the paper. I would say the idea is not bad, but the paper is still not well-prepared. So I do not change my decision.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Hierarchical (2-level) latent variable model based on ALI from [Dumoulin et al, ICLR'16]",
"paper_summary": null,
"main_review": "main_review: The paper investigates the potential of hierarchical latent variable models for generating images and image sequences. The paper relies on the ALI model from [Dumoulin et al, ICLR'16] as the main building block. The main innovation in the paper is to propose to train several ALI models stacked on top of each other to create a hierarchical representation of the data. The proposed hierarchical model is trained in stages. First stage is an original ALI model as in [Dumoulin et al]. Each subsequent stage is constructed by using the Z variables from the previous stage as the target data to be generated.\n\nThe paper constructs models for generatation of images and image sequences. The model for images is a 2-level ALI. The first level is similar to PatchGAN from [1] but is trained as an ALI model. The second layer is another ALI that is trained to generate latent variables from the first layer. \n\n[1] Isola et al. Image-to-Image Translation with Conditional Adversarial Networks, CVPR'17 \n\nIn the the model for image sequences the hierarchy is somewhat different. The top layer is directly generating images and not patches as in the image-generating model.\n\nSummary: I think this paper presents a direct and somewhat straightforward extension of ALI. Therefore the novelty is limited. I think the paper would be stronger if it (1) demonstrated improvements when compared to ALI and (2) showed advantages of hierarchical training on other datasets, not just the somewhat simple datasets like CIFAR and Pacman. \n\nOther comments / questions: \n\n- baseline should probably be 1-level ALI from [Dumoulin et al.]. I believe in the moment the baseline is a standard GAN.\n\n- I think the paper would be stronger if it directly reproduced the experiments from [Dumoulin et al.] and showed how hierarchy compares to standard ALI without hierarchy. \n\n- the reference Isola et al. [1] should ideally be cited since the model for image genration is similar to PatchGAN in [1]\n\n- Why is the video model in this paper not directly extending the image model? Is it due to limitation of the implementation or direclty extending the iamge model didn't work? \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.2222222238779068,
0.3333333432674408,
0.5555555820465088
],
"confidence": [
0.75,
0.75,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response",
"Response",
"Response"
],
"comment": [
"\"(1) It is not clear to me how to train the proposed algorithm. My understanding is train a simple ALI, then using the learned latent as the input and train the new layer. Do the authors use a separate training ? or a joint training algorithms. The authors should provide a more clear and rigorous objective function. It would be even better to have a pseudo code. \"\n\nOur method uses a separate decoupled training objective which trains the higher level module after the lower level has finished training. We agree that having pseudocode could make this clearer. \n\n\"(3) The experiments part is not complete. The experiment settings are not described clearly. Therefore, it is hard to justify whether the proposed algorithm is really useful based on Fig 3. \"\n\nThe main goal of our experiments is to show that exploiting the decoupling from locally disentangled factors can allow for faster training and higher capacity models. Our inception scores on MNIST provide some evidence for the former and our video generation results provide some evidence for the latter. \n\n\"Also, the idea of \"local\" disentangled LV is not well justified to be useful.\"\n\nIf the data generating process actually uses locally disentangled factors, then I think the benefit is fairly apparent, in that the complexity of the learning task is greatly simplified. Whether this actually occurs in practice is an interesting open question. ",
"Thank you for your interesting response. \n\n\"1. The concepts of \"disentanglment\" and \"local connectivity\" are really unnecessary and confusing. First, the whole paper and experiments has nothing to do with \"local connectivity\". Even though you might have the intention to propose the idea, you didn't show any support for the idea. Second, what you actually did is to use top level variable to generate N latent variables. That could hardly called \"disentanglement\". The mean field factorization in (Kingma & Welling 2013) is on the inference side (Q not P), and as found out in literature, it could not achieve disentanglement.\"\n\nSo ultimately our goal was to learn a local representation for a part of the example which simplifies its structure as much as possible while having a 1:1 mapping with raw data for that part of the example. One can imagine specific types of data for which this should be possible. I think that if the disentanglement isn't perfect, it just lowers the potential benefit of our model, but it could still help. \n\n\"2. Since currently there is still no standard way to evaluate the quality of image generation, by giving only inception score, we can really not judge whether it is good or not. You need to give more metrics, or generation examples, recontruction examples, and so on. And equally importantly, compare and discuss about the results. Not just leave it there.\"\n\nI think that Inception scores, perhaps along with FID, are reasonable to use. However we agree that we definitely need to have a stronger baseline. \n\nHowever I do think that showing faster convergence here is a compelling result. ",
"\"- baseline should probably be 1-level ALI from [Dumoulin et al.]. I believe in the moment the baseline is a standard GAN.\"\n\nThis is a fair point, although ALI did not dramatically outperform the standard GAN in terms of generation quality, for example, in terms of inception score. \n\n\"- the reference Isola et al. [1] should ideally be cited since the model for image genration is similar to PatchGAN in [1]\"\n\nThat's a fair point. PatchGAN is different from our approach, but would serve as a reasonable baseline. \n\n\"Summary: I think this paper presents a direct and somewhat straightforward extension of ALI. Therefore the novelty is limited.\"\n\nI don't agree with this. Learning generative models which learn joints over larger and more complex objects is an important direction. For example, learning a joint distribution over a complete day of video or audio data. With standard approaches, this quickly becomes computationally intractable. Only a few approaches have been proposed to deal with this issue. To our knowledge, synthetic gradients and UORO are the most prominent. The Locally Disentangled Factors approach, while still in its infancy, could be an important method in this area. "
]
} | {
"paperhash": [
"dumoulin|adversarially_learned_inference",
"ioffe|batch_normalization:_accelerating_deep_network_training_by_reducing_internal_covariate_shift",
"jaderberg|decoupled_neural_interfaces_using_synthetic_gradients",
"lecun|gradient-based_learning_applied_to_document_recognition",
"salimans|improved_techniques_for_training_gans",
"matthew|visualizing_and_understanding_convolutional_networks",
"denton|deep_generative_image_models_using_a_laplacian_pyramid_of_adversarial_networks",
"karaletsos|adversarial_message_passing_for_graphical_models",
"lin|why_does_deep_and_cheap_learning_work_so_well",
"nguyen|synthesizing_the_preferred_inputs_for_neurons_in_neural_networks_via_deep_generator_networks",
"nguyen|plug_&_play_generative_networks:_conditional_iterative_generation_of_images_in_latent_space",
"nowozin|f-gan:_training_generative_neural_samplers_using_variational_divergence_minimization",
"reed|parallel_multiscale_autoregressive_density_estimation",
"roth|stabilizing_training_of_generative_adversarial_networks_through_regularization",
"zhao|learning_hierarchical_features_from_generative_models"
],
"title": [
"",
"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"Decoupled Neural Interfaces using Synthetic Gradients",
"Gradient-based learning applied to document recognition",
"Improved Techniques for Training GANs",
"Visualizing and Understanding Convolutional Networks",
"Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks",
"Adversarial Message Passing For Graphical Models",
"Why does deep and cheap learning work so well? *",
"Synthesizing the preferred inputs for neurons in neural networks via deep generator networks",
"Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space",
"f -GAN: Training Generative Neural Samplers using Variational Divergence Minimization",
"Parallel Multiscale Autoregressive Density Estimation",
"Stabilizing Training of Generative Adversarial Networks through Regularization",
"Learning Hierarchical Features from Generative Models"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"vincent dumoulin",
"ishmael belghazi",
"ben poole",
"olivier mastropietro",
"alex lamb",
"martin arjovsky",
"aaron courville"
],
"affiliation": [
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "Neural Dynamics and Computation Lab",
"institution": "",
"location": "{'settlement': 'Stanford'}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
}
]
},
{
"name": [
"sergey ioffe"
],
"affiliation": [
{
"laboratory": "",
"institution": "Christian Szegedy Google Inc",
"location": "{}"
}
]
},
{
"name": [
"max jaderberg",
"wojciech marian czarnecki",
"simon osindero",
"oriol vinyals",
"alex graves",
"david silver",
"koray kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yann lecun",
"léon bottou",
"yoshua bengio",
"patrick haffner",
"yoshua bottou",
"patrick bengio",
" haffner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tim salimans",
"ian goodfellow",
"wojciech zaremba",
"vicki cheung",
"alec radford",
"xi chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"matthew d zeiler",
"rob fergus"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
}
]
},
{
"name": [
"emily denton",
"soumith chintala",
"arthur szlam",
"rob fergus"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"theofanis karaletsos"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"henry w lin",
"max tegmark",
"david rolnick"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"anh nguyen",
"jason yosinski",
"thomas brox",
"jeff clune"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"anh nguyen",
"jeff clune",
"yoshua bengio",
"alexey dosovitskiy",
"jason yosinski"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Wyoming",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Wyoming",
"location": "{}"
},
{
"laboratory": "",
"institution": "Montreal Institute for Learning Algorithms",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Freiburg",
"location": "{}"
},
{
"laboratory": "",
"institution": "Uber AI Labs",
"location": "{}"
}
]
},
{
"name": [
"sebastian nowozin",
"botond cseke",
"ryota tomioka"
],
"affiliation": [
{
"laboratory": "Machine Intelligence and Perception Group Microsoft Research Cambridge",
"institution": "",
"location": "{'country': 'UK'}"
},
{
"laboratory": "Machine Intelligence and Perception Group Microsoft Research Cambridge",
"institution": "",
"location": "{'country': 'UK'}"
},
{
"laboratory": "Machine Intelligence and Perception Group Microsoft Research Cambridge",
"institution": "",
"location": "{'country': 'UK'}"
}
]
},
{
"name": [
"scott reed",
"aäron van den oord",
"nal kalchbrenner",
"sergio gómez colmenarejo",
"ziyu wang",
"dan belov",
"nando de freitas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kevin roth",
"aurelien lucchi",
"sebastian nowozin",
"thomas hofmann"
],
"affiliation": [
{
"laboratory": "",
"institution": "ETH Zürich",
"location": "{}"
},
{
"laboratory": "",
"institution": "ETH Zürich",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research Cambridge",
"location": "{'country': 'UK'}"
},
{
"laboratory": "",
"institution": "ETH Zürich",
"location": "{}"
}
]
},
{
"name": [
"shengjia zhao",
"jiaming song",
"stefano ermon"
],
"affiliation": [
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.37037 | 0.666667 | null | null | null | null | null | rJTGkKxAZ |
||
wang|learning_priors_for_adversarial_autoencoders|ICLR_cc_2018_Conference | 1909.04443v1 | Learning Priors for Adversarial Autoencoders | Most deep latent factor models choose simple priors for simplicity, tractability or not knowing what prior to use. Recent studies show that the choice of the prior may have a profound effect on the expressiveness of the model, especially when its generative network has limited capacity. In this paper, we propose to learn a proper prior from data for adversarial autoencoders (AAEs). We introduce the notion of code generators to transform manually selected simple priors into ones that can better characterize the data distribution. Experimental results show that the proposed model can generate better image quality and learn better disentangled representations than AAEs in both supervised and unsupervised settings. Lastly, we present its ability to do cross-domain translation in a text-to-image synthesis task. | {
"name": [],
"affiliation": []
} | null | [
"Computer Science",
"Mathematics"
] | Asia-Pacific Signal and Information Processing Association Annual Summit and Conference | 2018-11-01 | 10 | null | null | null | null | null | null | null | null | false | The paper proposes learning the prior for AAEs by training a code-generator that is seeded by the standard Gaussian distribution and whose output is taken as the prior. The code generator is trained by minimizing the GAN loss b/w the distribution coming out of the decoder and the real image distribution. The paper also modifies the AAE by replacing the L2 loss in pixel domain with "learned similarity metric" loss inspired by the earlier work (Larsen et al., 2015).
The contribution of the paper is specific to AAE, which makes its scope narrow. Even there, the benefits of learning the prior with the proposed method are not clear. The experiments make two claims: (i) improved image generation over AAE, and (ii) improved "disentanglement".
Towards (i), the paper compares images generated by AAE with those generated by their model. However, it is not clear whether the improved generation quality is due to the use of a decoder loss on the learned similarity metric (Larsen et al., 2015), due to the use of a GAN loss in the image space (i.e., just having a GAN loss over the decoder's output without a code generator), or due to learning the prior, which is the main contribution of the paper. This has also been hinted at by AnonReviewer1. Hence, it is not clear whether the sharper generated images are really due to the learned prior.
Towards (ii), the paper uses an InfoGAN-inspired objective to generate class-conditional images. It shows the class-conditional generated images for AAE and for the proposed method. Here, AAE is also trained with the "learned similarity metric" and augmented with a similar InfoGAN-type objective, so the only difference is in the prior. The authors say the performance of both models is similar on MNIST and SVHN, but that on CIFAR their model with the learned prior generates images that better match the conditioned-upon labels. However, this claim is also subjective/qualitative, and even if true, it is not clear whether it is due to the learned prior or to the extra GAN discriminator loss in the image space -- in other words, how do the results look for AAE plus a discriminator in the image space, just like in the proposed model but without a code generator?
The t-SNE plots for the learned prior are also shown, but only when the InfoGAN loss is added. The same plots are not shown for AAE with the added InfoGAN loss, so it is difficult to judge the benefits of learning the code generator as proposed.
Overall, I feel the scope of the paper is narrow and the benefits of learning the prior using the method proposed in the paper are not clearly established by the reported experiments. I am hesitant to recommend acceptance to the main conference in its current form.
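To make the setup under discussion concrete, the following is a minimal, heavily simplified sketch of the architecture the decision describes: a code generator that warps Gaussian noise into a learned prior, an AAE-style code discriminator, and an image-space GAN discriminator on decoded prior samples. This is an illustration only, not the authors' implementation; the layer sizes, MLP modules, and function names are assumptions, and the paper additionally uses a learned similarity metric for reconstruction, which is omitted here.

```python
# Illustrative sketch (not the authors' code) of an AAE whose prior is the
# pushforward of N(0, I) through a learnable "code generator".
import torch
import torch.nn as nn

x_dim, noise_dim, latent_dim = 784, 64, 64  # assumed sizes for flattened images and codes

code_generator = nn.Sequential(              # warps simple Gaussian noise into a learned prior code
    nn.Linear(noise_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
encoder = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, x_dim), nn.Tanh())
code_disc = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 1))   # D_c: prior vs. posterior codes
image_disc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, 1))       # D_I: real vs. decoded prior samples

bce = nn.BCEWithLogitsLoss()

def one_step_losses(x):
    """Losses for one batch of flattened images x (reconstruction term omitted for brevity)."""
    n = x.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)
    z_post = encoder(x)                                      # posterior code q(z|x)
    z_prior = code_generator(torch.randn(n, noise_dim))      # sample from the learned prior
    # Regularization phase (standard AAE): push posterior codes toward the learned prior.
    d_c_loss = bce(code_disc(z_prior.detach()), ones) + bce(code_disc(z_post.detach()), zeros)
    # Prior-improvement phase: decoded prior samples must fool an image-space discriminator,
    # i.e., a GAN loss between the decoder's output distribution and the real image distribution.
    x_fake = decoder(z_prior)
    g_loss = bce(image_disc(x_fake), ones)                   # would update code_generator and decoder
    d_i_loss = bce(image_disc(x), ones) + bce(image_disc(x_fake.detach()), zeros)
    return d_c_loss, g_loss, d_i_loss
```

In an actual training loop, d_c_loss and d_i_loss would update the two discriminators, while g_loss (together with the omitted reconstruction term) would update the encoder, decoder, and code generator; the question raised above is essentially whether the observed gains come from the image-space GAN loss and similarity metric or from the learned prior itself.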
| {
"review_id": [
"ByzPtktlM",
"ByjrTO5ef",
"BkD44d9gM"
],
"review": [
{
"title": "title: Interesting idea, but more thorough analysis is needed.",
"paper_summary": null,
"main_review": "main_review: This paper proposes an interesting idea--to learn a flexible prior from data by maximizing data likelihood.\n\nIt seems that in the prior improvement stage, what you do is training a GAN with CG+dec as the generator while D_I as the discriminator (since you also update dec at the prior improvement stage). So it can also be regarded as GAN trained with an additional enc and D_c, and additional objective. In my opinion, this may explain why your model can generate sharper images.\n\nThe experiments do demonstrate the power of their model compared to AAE. However, only the qualitative analysis may not persuade me and more thorough analysis is needed.\n\n1. About the latent space for z. The motivation in AAE is to impose aggregated posterior regularization $D(q(z),p(z))$ where $p(z)$ is chosen as a simple one, e.g., Gaussian. I'm curious how the geometry of the latent space will be, when the code generator is introduced. Maybe some visualization like t-sne will be helpful.\n2. Any quantitative analysis? Doing a likelihood analysis like that in the AAE paper will be very informative. \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A simple idea to improve adversarial autoencoders by learning priors",
"paper_summary": null,
"main_review": "main_review: Recently some interesting work on a role of prior in deep generative models has been presented. The choice of prior may have an impact on the expressiveness of the model [Hoffman and Johnson, 2016]. A few existing work presents methods for learning priors from data for variational autoencoders [Goyal et al., 2017][Tomczak and Welling, 2017]. The work, \"VAE with a VampPrior,\" [Tomczak and Welling, 2017] is missing in references.\n\nThe current work focuses on adversarial autoencoder (AAE) and introduces a code generator network to transform a simple prior into one that together with the generator can better fit the data distribution. Adversarial loss is used to train the code generator network, allowing the output of the network could be any distribution. I think the method is quite simple but interesting approach to improve AAEs without hurting the reconstruction. The paper is well written and is easy to read. The method is well described. However, what is missing in this paper is an analysis of learned priors, which help us to better understand its behavior. \n\nThe model is evaluated qualitatively only. What about quantitative evaluation? \n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Improving AAE by warping the Gaussian prior using deep networks",
"paper_summary": null,
"main_review": "main_review: This paper propose a simple extension of the adversarial auto-encoders for (conditional) image generation. The general idea is that instead of using Gaussian prior, the propose algorithm uses a \"code generator\" network to warp the gaussian distribution, such that the internal prior of the latent encoding space is more expressive and complicated. \n\nPros:\n- The proposed idea is simple and easy to implement\n- The results show improvement in terms of visual quality\n\nCons:\n- I agree that the proposed prior should better capture the data distribution. However, incorporating a generic prior over the latent space plays a vital role as regularisation, this helps avoid model collapse. Adding a complicated code generation network brings too much flexibility for the prior part. This makes the prior and posterior learnable, which makes it easier to fool the regularisation discriminator (think about the latent code and prior code collapsed to two different points). As a result, this weakens the regularisation over the latent encoder space. \n- The above mentioned could be verified through qualitative results. As shown in Fig. 5. I believe this is a result due to the fact that the adversarial loss in the regularisation phase does not a significant influence there. \n- I have some doubts over why AAE works so poorly when the latent dimension is 2000. How to make sure it's not a problem of implementation or the model wasn't trapped into a bad local optima / saddle points. Could you justify this?\n- Contributions; this paper propose an improvement over a existing model. However, neither the idea/insights it brought can be applied onto other generative models, nor the improvement bring a significant improvement over the-state-of-the-arts. I am wondering what the community will learn from this paper, or what the author would like to claim as significant contributions. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.5555555820465088,
0.4444444477558136
],
"confidence": [
0.5,
0.75,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to Reviewer 1",
"Response to Reviewer 2",
"Response to Reviewer 3",
"Response to Reviewer 3",
"Changes we've done in the revised manuscript"
],
"comment": [
"Dear Reviewer 1,\n\n\"This paper proposes an interesting idea--to learn a flexible prior from data by maximizing data likelihood. It seems that in the prior improvement stage, what you do is training a GAN with CG+dec as the generator while D_I as the discriminator (since you also update dec at the prior improvement stage). So it can also be regarded as GAN trained with an additional enc and D_c, and additional objective. In my opinion, this may explain why your model can generate sharper images. \nThe experiments do demonstrate the power of their model compared to AAE. However, only the qualitative analysis may not persuade me and more thorough analysis is needed. \"\n\nThanks for your suggestions. We have provided more analysis results including comparison of inception scores and visualization of learned code space in the revised manuscript. \n\n\"1. About the latent space for z. The motivation in AAE is to impose aggregated posterior regularization $D(q(z),p(z))$ where $p(z)$ is chosen as a simple one, e.g., Gaussian. I'm curious how the geometry of the latent space will be, when the code generator is introduced. Maybe some visualization like t-sne will be helpful. \n\n2. Any quantitative analysis? Doing a likelihood analysis like that in the AAE paper will be very informative. \"\n\nThanks for your suggestion. For quantitative evaluation, we have compared the inception score of the proposed method with other generative models in Table I. We also have visualized the learned priors with t-SNE in Figs. 9 and 12 for the supervised and unsupervised learning tasks. The text in Section 4.2.1 and Section 4.2.2 have been modified accordingly to include the discussions (see the last paragraphs in these sections). \n\nIn addition, since receiving the review comments, we have improved our model in several significant ways, including \n1) Introducing a pair of more capable encoder and decoder with ResNets. (See appendix for the implementation details) \n2) Employing a learned similarity metric in place of the default squared error in data space to improve the convergence of the decoder. (See Section 3 Learning The Prior for the reasons) \n3) Introducing the variational technique in InfoGAN for training the decoder and code generator when it is necessary to generate images conditionally on an input variable, as in our supervised and unsupervised learning tasks. (See Section 3 Learning The Prior for the reasons) With these changes, our model can now produce much better images without incurring obvious mode collapse.\n\nWe have re-written extensively the entire manuscript, presenting more experimental results and analyses as requested. \n\n",
"Dear Reviewer2, \n\n\"Recently some interesting work on a role of prior in deep generative models has been presented. The choice of prior may have an impact on the expressiveness of the model [Hoffman and Johnson, 2016]. A few existing work presents methods for learning priors from data for variational autoencoders [Goyal et al., 2017][Tomczak and Welling, 2017]. The work, \"VAE with a VampPrior,\" [Tomczak and Welling, 2017] is missing in references. \"\n\nThanks for your suggestion. We have cited this work in Introduction and provided a description in Related Work. \n\n\"The current work focuses on adversarial autoencoder (AAE) and introduces a code generator network to transform a simple prior into one that together with the generator can better fit the data distribution. Adversarial loss is used to train the code generator network, allowing the output of the network could be any distribution. I think the method is quite simple but interesting approach to improve AAEs without hurting the reconstruction. The paper is well written and is easy to read. The method is well described. However, what is missing in this paper is an analysis of learned priors, which help us to better understand its behavior. The model is evaluated qualitatively only. What about quantitative evaluation? \"\n\nThanks for your suggestion. For quantitative evaluation, we have compared the inception score of the proposed method with other generative models in Table I. We also have visualized the learned priors with t-SNE in Figs. 9 and 12 for the supervised and unsupervised learning tasks. The text in Section 4.2.1 and Section 4.2.2 have been modified accordingly to include the discussions (see the last paragraphs in these sections).\n\nIn addition, since receiving the review comments, we have improved our model in several significant ways, including \n1) Introducing a pair of more capable encoder and decoder with ResNets. (See appendix for the implementation details) \n2) Employing a learned similarity metric in place of the default squared error in data space to improve the convergence of the decoder. (See Section 3 Learning The Prior for the reasons) \n3) Introducing the variational technique in InfoGAN for training the decoder and code generator when it is necessary to generate images conditionally on an input variable, as in our supervised and unsupervised learning tasks. (See Section 3 Learning The Prior for the reasons) With these changes, our model can now produce much better images without incurring obvious mode collapse.\n\nWe have re-written extensively the entire manuscript, presenting more experimental results and analyses as requested. \n",
"Dear Reviewer 3,\n\n\"- Contributions; this paper propose an improvement over a existing model. However, neither the idea/insights it brought can be applied onto other generative models, nor the improvement bring a significant improvement over the-state-of-the-arts. I am wondering what the community will learn from this paper, or what the author would like to claim as significant contributions. \"\n\nThanks for your comments. \n\nWith the changes we have made so far, we believe our contributions include\n1)\tWe replace the simple prior with a learned prior by training the code generator to output latent variables that will minimize an adversarial loss in data space.\n2)\tWe employ a learned similarity metric (Larsen et al., 2015) in place of the default squared error in data space for training the autoencoder.\n3)\tWe maximize the mutual information between part of the code generator input and the decoder output for supervised and unsupervised training using a variational technique introduced in InfoGAN (Chen et al., 2016).\n\nExtensive experiments confirm its effectiveness of generating better quality images and learning better disentangled representations than AAE in both supervised and unsupervised settings, particularly on complicated datasets. In addition, to the best of our knowledge, this is one of the first few works that attempt to introduce a learned prior for AAE.\n\nWe have re-written extensively the entire manuscript, presenting more experimental results and analyses as requested. ",
"Dear Reviewer 3,\n\n\"This paper propose a simple extension of the adversarial auto-encoders for (conditional) image generation. The general idea is that instead of using Gaussian prior, the propose algorithm uses a \"code generator\" network to warp the gaussian distribution, such that the internal prior of the latent encoding space is more expressive and complicated. \n\nPros: \n- The proposed idea is simple and easy to implement \n- The results show improvement in terms of visual quality \n\nCons: \n- I agree that the proposed prior should better capture the data distribution. However, incorporating a generic prior over the latent space plays a vital role as regularisation, this helps avoid model collapse. \nAdding a complicated code generation network brings too much flexibility for the prior part. This makes the prior and posterior learnable, which makes it easier to fool the regularisation discriminator (think about the latent code and prior code collapsed to two different points). As a result, this weakens the regularisation over the latent encoder space. \n- The above mentioned could be verified through qualitative results. As shown in Fig. 5. I believe this is a result due to the fact that the adversarial loss in the regularisation phase does not a significant influence there. \"\n\nThanks for your comments. I agree that generic priors may help avoid mode collapse. However, it also risks overly regularizing the model, consequently decreasing its expressiveness. \n\nThis work, like few other similar attempts for VAE, aims to learn a prior through a code generation network so that the resulting model can better explain the data distribution. Unlike the prior works, which are mostly based on maximizing the data log-likelihood, ours tries to learn the prior by minimizing an adversarial loss in data space. \n\nSince receiving the review comments, we have improved our model in several significant ways, including\n1)\tIntroducing a pair of more capable encoder and decoder with ResNets. (See appendix for the implementation details)\n2)\tEmploying a learned similarity metric in place of the default squared error in data space to improve the convergence of the decoder. (See Section 3 Learning The prior for the reasons)\n3)\tIntroducing the variational technique in InfoGAN for training the decoder and code generator when it is necessary to generate images conditionally on an input variable, as in our supervised and unsupervised learning tasks. (See Section 3 Learning The Prior for the reasons)\n\nWith these changes, our model can now produce much better images without incurring obvious mode collapse. Furthermore, as shown in our visualization of latent code space in supervised and unsupervised tasks (see Figs 9 and 12), the code generator does exert a regularization effect while producing better images. \n\n\"- I have some doubts over why AAE works so poorly when the latent dimension is 2000. How to make sure it's not a problem of implementation or the model wasn't trapped into a bad local optima / saddle points. Could you justify this?\"\n\nThanks for pointing out this. We have implemented a pair of more capable encoder and decoder with ResNets. AAE now performs reasonably well (see Figs. 5 and 6). But, still when the latent dimension is increased to 100-D or 2000-D, the simple Gaussian prior may overly regularize the model. Imagine that the latent codes generated by the encoder may occupy only a tiny portion of the high dimensional code space specified by the prior. 
In this case, the limited training data can hardly ensure that every random sample drawn from the prior would produce a good decoded image.\n",
"Dear Thanh Tung Hoang,\n\n\"AAE with code generator can produce much better images but suffer from mode collapse. It seems that the improvement in the image quality is due to the fact that the network has remembered some of the input. In other words, the mode collapse problem makes generated images look better. I would love to see the result without mode collapse problem. For example, you could try Wasserstein GAN which suffer less from mode collapse problem. I am also interested in the learned prior distribution. If you could provide some analysis on the learned prior then your paper could be much better.\"\n\nSince receiving the review comments, we have improved our model in several significant ways, including\n1)\tIntroducing a pair of more capable encoder and decoder with ResNets. (See appendix for the implementation details)\n2)\tEmploying a learned similarity metric in place of the default squared error in data space to improve the convergence of the decoder. (See Section 3 Learning Priors for the reasons)\n3)\tIntroducing the variational technique in InfoGAN for training the decoder and code generator when it is necessary to generate images conditionally on an input variable, as in our supervised and unsupervised learning tasks. (See Section 3 Learning Priors for the reasons)\n\nWith these changes, our model can now produce much better images without incurring obvious mode collapse.\n\nWe have re-written extensively the entire manuscript, presenting more experimental results and analyses. "
]
} | {
"paperhash": [
"zhang|unifying_inter-region_autocorrelation_and_intra-region_structures_for_spatial_embedding_via_collective_adversarial_learning",
"li|domain_generalization_with_adversarial_feature_learning",
"tomczak|vae_with_a_vampprior",
"berthelot|began:_boundary_equilibrium_generative_adversarial_networks",
"gulrajani|improved_training_of_wasserstein_gans",
"goyal|nonparametric_variational_auto-encoders_for_hierarchical_representation_learning",
"mao|least_squares_generative_adversarial_networks",
"dilokthanakul|deep_unsupervised_clustering_with_gaussian_mixture_variational_autoencoders",
"wu|learning_a_probabilistic_latent_space_of_object_shapes_via_3d_generative-adversarial_modeling",
"chen|infogan:_interpretable_representation_learning_by_information_maximizing_generative_adversarial_nets",
"salimans|improved_techniques_for_training_gans",
"johnson|perceptual_losses_for_real-time_style_transfer_and_super-resolution",
"larsen|autoencoding_beyond_pixels_using_a_learned_similarity_metric",
"radford|unsupervised_representation_learning_with_deep_convolutional_generative_adversarial_networks",
"makhzani|adversarial_autoencoders",
"burda|importance_weighted_autoencoders",
"goodfellow|generative_adversarial_networks",
"kingma|auto-encoding_variational_bayes",
"nilsback|automated_flower_classification_over_a_large_number_of_classes",
"|elbo_surgery:_yet_another_way_to_carve_up_the_variational_evidence_lower_bound"
],
"title": [
"Unifying Inter-region Autocorrelation and Intra-region Structures for Spatial Embedding via Collective Adversarial Learning",
"Domain Generalization with Adversarial Feature Learning",
"VAE with a VampPrior",
"BEGAN: Boundary Equilibrium Generative Adversarial Networks",
"Improved Training of Wasserstein GANs",
"Nonparametric Variational Auto-Encoders for Hierarchical Representation Learning",
"Least Squares Generative Adversarial Networks",
"Deep Unsupervised Clustering with Gaussian Mixture Variational Autoencoders",
"Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling",
"InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets",
"Improved Techniques for Training GANs",
"Perceptual Losses for Real-Time Style Transfer and Super-Resolution",
"Autoencoding beyond pixels using a learned similarity metric",
"Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks",
"Adversarial Autoencoders",
"Importance Weighted Autoencoders",
"Generative adversarial networks",
"Auto-Encoding Variational Bayes",
"Automated Flower Classification over a Large Number of Classes",
"Elbo surgery: yet another way to carve up the variational evidence lower bound"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"Yunchao Zhang",
"Yanjie Fu",
"Pengyang Wang",
"Xiaolin Li",
"Yu Zheng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Haoliang Li",
"Sinno Jialin Pan",
"Shiqi Wang",
"A. Kot"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jakub M. Tomczak",
"M. Welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"David Berthelot",
"Tom Schumm",
"Luke Metz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ishaan Gulrajani",
"Faruk Ahmed",
"Martín Arjovsky",
"Vincent Dumoulin",
"Aaron C. Courville"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Prasoon Goyal",
"Zhiting Hu",
"Xiaodan Liang",
"Chenyu Wang",
"E. Xing",
"C. Mellon"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Xudong Mao",
"Qing Li",
"Haoran Xie",
"Raymond Y. K. Lau",
"Zhen Wang",
"Stephen Paul Smolley"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nat Dilokthanakul",
"Pedro A. M. Mediano",
"M. Garnelo",
"M. J. Lee",
"Hugh Salimbeni",
"Kai Arulkumaran",
"M. Shanahan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jiajun Wu",
"Chengkai Zhang",
"Tianfan Xue",
"Bill Freeman",
"J. Tenenbaum"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Xi Chen",
"Yan Duan",
"Rein Houthooft",
"John Schulman",
"I. Sutskever",
"P. Abbeel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Tim Salimans",
"I. Goodfellow",
"Wojciech Zaremba",
"Vicki Cheung",
"Alec Radford",
"Xi Chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Justin Johnson",
"Alexandre Alahi",
"Li Fei-Fei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Anders Boesen Lindbo Larsen",
"Søren Kaae Sønderby",
"H. Larochelle",
"O. Winther"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Alec Radford",
"Luke Metz",
"Soumith Chintala"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Alireza Makhzani",
"Jonathon Shlens",
"N. Jaitly",
"I. Goodfellow"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yuri Burda",
"R. Grosse",
"R. Salakhutdinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"I. Goodfellow",
"Jean Pouget-Abadie",
"Mehdi Mirza",
"Bing Xu",
"David Warde-Farley",
"Sherjil Ozair",
"Aaron C. Courville",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Diederik P. Kingma",
"M. Welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Nilsback",
"Andrew Zisserman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
}
],
"arxiv_id": [
"",
"",
"1705.07120v5",
"1703.10717v4",
"1704.00028v3",
"1703.07027",
"1611.04076v3",
"1611.02648v2",
"1610.07584",
"1606.03657v1",
"1606.03498v1",
"1603.08155",
"1512.09300v2",
"1511.06434v2",
"1511.05644v2",
"1509.00519v4",
"1406.2661v1",
"1312.6114",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
[],
[],
[
"methodology",
"background"
],
[],
[],
[
"methodology"
],
[],
[
"background"
],
[
"background"
],
[
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[
"methodology",
"background"
],
[],
[
"methodology",
"background"
],
[
"background"
],
[
"methodology"
],
[
"methodology",
"background"
],
[],
[
"methodology",
"background"
]
],
"isInfluential": [
false,
false,
true,
false,
false,
true,
false,
false,
false,
true,
false,
false,
true,
false,
true,
false,
true,
true,
false,
false
]
} | null | 75 | null | 0.518519 | 0.583333 | null | null | null | null | null | rJSr0GZR- |
|
donnat|spectral_graph_wavelets_for_structural_role_similarity_in_networks|ICLR_cc_2018_Conference | Spectral Graph Wavelets for Structural Role Similarity in Networks | Nodes residing in different parts of a graph can have similar structural roles within their local network topology. The identification of such roles provides key insight into the organization of networks and can also be used to inform machine learning on graphs. However, learning structural representations of nodes is a challenging unsupervised-learning task, which typically involves manually specifying and tailoring topological features for each node. Here we develop GraphWave, a method that represents each node’s local network neighborhood via a low-dimensional embedding by leveraging spectral graph wavelet diffusion patterns. We prove that nodes with similar local network neighborhoods will have similar GraphWave embeddings even though these nodes may reside in very different parts of the network. Our method scales linearly with the number of edges and does not require any hand-tailoring of topological features. We evaluate performance on both synthetic and real-world datasets, obtaining improvements of up to 71% over state-of-the-art baselines. | {
"name": [],
"affiliation": []
} | We develop a method for learning structural signatures in networks based on the diffusion of spectral graph wavelets. | [
"Graphs",
"Structural Similarities",
"Spectral Graph Wavelets",
"Graph Signal Processing",
"Unsupervised Learning"
] | null | 2018-02-15 22:29:33 | 31 | null | null | null | null | null | null | null | null | false | The reviewers present strong concerns about the lack of novelty in the paper. Further there are strong concerns about how the experiments are conducted. I recommend the authors to carefully go through the reviews. | {
"review_id": [
"rk9wKvM-z",
"SJqU9kAWf",
"BJnpN5cgf"
],
"review": [
{
"title": "title: The method is based on a principled derivation, but have not compared with other state-of-the-art",
"paper_summary": null,
"main_review": "main_review: The paper derived a way to compare nodes in graph based on wavelet analysis of graph laplacian. The method is correct but it is not clear whether the method can match the performance of state-of-the-art methods such as graph convolution neural network of Duvenaud et al. and Structure2Vec of Dai et al. in large scale datasets. \n1. Convolutional Networks on Graphs for Learning Molecular Fingerprints. D Duvenaud et al., NIPS 2015. \n2. Discriminative embeddings of latent variable models for structured data. Dai et al. ICML 2016.\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: The method is ad-hoc, there is no genuine experimental baseline, the mathematical derivations make the paper look more sophisticated but do not provide actual insight.",
"paper_summary": null,
"main_review": "main_review: The paper proposes a method for quantifying the similarity between the local neighborhoods of nodes in a graph/network.\n\nThere are many ways in which such a distance/similarity metric between nodes could be defined. For example, once could look at the induced subgraph G_i formed by the k-neighborhood of node i, and the induced subgraph G_j of the k-neighborhood of node j, and define the similarity as k(G_i,G_j) where k is any established graph kernel. Moreover, the task is unsupervised, which makes it hard to compare the performance of different methods. Most of the experiments in the paper seem a bit contrived.\n\nRegarding the algorithm, the question is: “sure, but why this way?”. The authors take the heat kernel matrix on the graph, treat each column as a probability distribution, compute its characteristic function, and define a distance between characteristics functions. This seems pretty arbitrary and heuristic. I also find it confusing that they refer to the heat kernel as wavelets. The spectral graph wavelets of Hammond et al is a beautiful construction, but, as far as I remember, it is explicitly emphasized that the wavelet generating function g must be continuous and satisfy g(0)=0. By setting g(\\lambda)=e^{-s \\lambda}, the authors just recover the diffusion/heat kernel of the graph. That’s not a wavelet. Why call this a “spectral graph wavelet” approach then? The heat kernel is much simpler. I find this misleading.\n\nI also feel that the mathematical results in the paper have little depth. Diffusion is an inherently local process. It is natural then that the diffusion matrix can be approximated by a polynomial in the Laplacian (in fact, it is sufficient to look at the power series of the matrix exponential). It is not surprising that the diffusion function captures some local properties of the graph (there are papers by Reid Andersen/ Fan Chung/ Kevin Lang, as well as by Mahoney, I believe on localized PCA in graphs following similar ideas). Again, there are many ways that this could be done. The particular way it is done in the paper is heuristic and not supported by either math or strong experiments.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: This paper uses spectral graph wavelet diffusion patterns of a node’s local neighborhood to embed the node in a low-dimensional space. In this low-dimensional space, nodes with similar local neighborhoods are close to each other even if they are far in graph distance. The methodology of the paper is fine but the experiments section is weak and the paper misses clear connections to sociology.",
"paper_summary": null,
"main_review": "main_review: The term \"structural equivalence\" is used incorrectly in the paper. From sociology, two nodes with the same position are in an equivalence relation. An equivalence, Q, is any relation that satisfies these three conditions:\n - Transitivity: (a,b), (b,c) ∈ Q ⇒ (a,c) ∈Q\n - Symmetry: (a, b) ∈ Q if and only if (b, a) ∈Q\n - Reflexivity: (a, a) ∈Q\n\nThere are three deterministic equivalences: structural, automorphic, and regular.\n\nFrom Lorrain & White (1971), two nodes u and v are structurally equivalent if they have the same relationships to all other nodes. Exact structural equivalence is rare in real-world networks.\n\nFrom Borgatti, et al. (1992) and Sparrow (1993), two nodes u and v are automorphically equivalent if all the nodes can be relabeled to form an isomorphic graph with the labels of u and v interchanged.\n\nFrom Everett & Borgatti (1992), two nodes u and v are regularly equivalent if they are equally related to equivalent others.\n\nParts of this statement are false: \"A notable example of such approaches is RolX (Henderson et al., 2012), which aims to recover a soft-clustering of nodes into a predetermined number of K distinct roles using recursive feature extraction (Henderson et al., 2011).\" RolX (as described in KDD 2012 paper) uses MDL to automatically determine the number of roles.\n\nAs indicated above, this statement is also false: \"We note that RolX requires the number of desired structural classes as input, ...\".\n\nThe paper does not discuss how the free parameter d (which represents the number of evenly spaced sample points) is chosen. \n\nThis statement is misleading: \"In particular, a small perturbation of the graph yields small perturbations of the eigenvalues.\" What is considered a small perturbation? One can delete an edge (seemingly a small perturbation) and change the eigenvalues of the Laplacian dramatically -- e.g., deleting an edge that increases the number of connected components.\n\nThe barbell graph experiment seemed contrived. Why would one except such a graph to have 8 classes? Why not 3? One for cliques, one for the chain, and one for connectors of the clique to the chain.\n\nIn Section 4.2, how many roles were selected for RolX?\n\nThe paper states: \"Furthermore, nodes from different graphs can be embedded into the same space and their structural roles can be compared across different graphs.\" Experiments were not conducted to see how the competing approaches such as RolX compare with GraphWave on transfer learning tasks.\n\nGilpin et al (KDD 2013) extended RolX to incorporate sparsity and diversity constraints on the role space and showed that their approach is superior to RolX on measuring distances. This is applicable to experiments in Figure 4.\n\nI strongly recommend running experiments that test the predictive power of the roles found by GraphWave.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.2222222238779068,
0.4444444477558136
],
"confidence": [
0.75,
0.75,
1
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"RE: Response to the reviewer's comments",
"Response to the reviewer's comments: complementary state-of-the art ",
"Response to the reviewer's comments: clarifications, additional references and predictive power"
],
"comment": [
"We thank the reviewer for his or her comments. We address them below. \n\n#1: “There are many ways in which such a distance could be defined [...].”\n\nWhile we agree with the reviewer that there are many sensible definitions of node similarity, we would like to note that our goal here was broader than plain similarity search. In particular, we aimed to define a structural signature, i.e., an embedding for each node, which requires O(N) memory, instead of O(N^2) for the kernel-based pairwise comparisons suggested by the reviewer. We note that learning embeddings for graphs is a very common problem in machine learning (Henderson et al., 2012; Grover et al., 2016; Ribeiro et al., 2017)\n\n#2: \"Moreover, the task is unsupervised, which makes it hard to compare the performance of different methods. Most of the experiments in the paper seem a bit contrived.\"\n\nWe respectfully disagree with this characterization of our unsupervised task. Specifically, we developed multiple synthetic experiments and two real-world case studies to quantitatively compare GraphWave with two state-of-the-art approaches for solving the same unsupervised problem (struc2vec and RolX). Our experiments built upon those from these recent papers (e.g., the Barbell graph was a direct adaptation of an experiment in Ribeiro et al., 2017). In addition, we developed experiments that evaluated GraphWave in a variety of more complex settings (see Sections 4.1 and 4.2 in the paper). Overall, we believe that these experiments sufficiently demonstrate the benefits of GraphWave. However, we would appreciate any additional feedback on specific examples of how to further improve our experiments.\n\n#3: Concerns about spectral graph wavelet transform (SGWT) definition.\n\nThe reviewer is correct in stating that the SGWT requires g(0)=0. However in Section 4.2 of their paper, Hammond et al., 2010 introduces a \"second class of waveforms,\" which they call \"spectral graph scaling functions.\" As Hammond et al., 2010 states, these waveforms are \"analogous to the lowpass residual scaling functions from classical wavelet analysis.[...] They will be determined by a single real valued function h : R+ → R, which acts as a lowpass filter, and satisfies h(0) > 0 and h(x) → 0 as x → ∞. \" As we mention in Section 2.1, the heat kernel in GraphWave is a function of this class, and as such, it falls under Hammond et al.’s general SGWT framework. Because our work directly builds on Hammond et al.’s definition, we use the term spectral graph wavelet, rather than “heat kernel”, even though either term would be appropriate. \n\n#4: ''sure but why this way?'' [...] Diffusion is a local process [...] There are many ways in which this could be done\". \n\nWe agree with the reviewer in that GraphWave relies on an inherently local diffusion process. However, comparing diffusions across nodes in the graph to recover structural similarities is a tricky problem. Without an a-priori-known one-to-one mapping between neighborhoods, we are not aware of a computationally tractable method for comparing diffusions localized in different parts of the graph. For this reason, we suggested considering these diffusions as distributions, thus making the signature permutation-invariant to the labeling of the nodes.\n\nWe thank the reviewer for taking the time to read our response, and we hope that he or she will consider our arguments and help us improve our methodology.\n",
"We thank the reviewer for pointers to these papers, which we have carefully reviewed. However, we would like to explicitly point out that the methods developed in the mentioned papers have a different goal to GraphWave’s. In particular, both Molecular Fingerprints and Structure2Vec are solving a “graph-level embedding” problem, which converts an entire graph into a single low-dimensional vector. In contrast, GraphWave is solving the “node-level embedding” problem, where it generates a low-dimensional vector for each node based on node’s structural role in the graph -- which is why we had not originally intended comparing GraphWave to these methods. \n\nHowever, in certain settings, both Molecular Fingerprints and Structure2Vec can be compared with GraphWave. Specifically, these two methods yield node embeddings as a by-product of their algorithm, but only in supervised settings across multiple graphs where the graphs have “ground truth” labels. Following the reviewer’s suggestion, we developed an additional experiment to compare GraphWave with these methods in a specific supervised setting (see APPENDIX below). We note that GraphWave is much more general, and can yield node embeddings on a single graph, or across multiple unlabeled graphs, something that Molecular Fingerprint and Structure2Vec are unable to do. As shown in the experiments (APPENDIX, results), GraphWave outperformed Molecular Fingerprints by 37% in homogeneity score, 11% in completeness, and 890% in silhouette score. Additionally, GraphWave outperformed Structure2Vec by 4% in homogeneity, though Structure2Vec had a 7% higher completeness and 54% higher silhouette score. \n\nOverall, GraphWave outperforms the state-of-the-art in unsupervised settings (see the experiments in the paper) and yields very strong performance in supervised settings, even when compared against supervised methods (as shown in the APPENDIX here). \n\nTo address these comments by the reviewer, we will add both references to the related work section.\n\n------------------------------------------------\nAPPENDIX: Additional experiments.\n\n*Experimental setup.* \nThe goal of these experiments is to assess the predictive power of embeddings. That is, we analyze how well we can recognize structural similarity of nodes across different graphs. Note that the setup is a slight adaptation of the experiments in the paper. This was required in order to work across multiple graphs --- which was necessary to evaluate Molecular Fingerprints and Structure2Vec methods --- rather than within a single graph. \n\nIn particular, we generate 200 graphs, with ground truth labels corresponding to the true structural classes of each node. Each graph was generated as follows: \nWe generate its basis (a cycle, as in Figure 3A) of different (random) length.\nWe plant a random number of different shapes (houses, fans or stars, as shown in Figure 3A) on this cycle. Our experiment is set up so that with 60% probability, the graph only comprises one type of shape repeated multiple times (20% house, 20% fan, 20% stars), and with 40% chance, the graph comprises all of these shapes in varied numbers.\nWe have fixed a priori the scale in GraphWave to s=3. We trained Neural Fingerprints and Structure2Vec by providing each graph with a label (1: house, 2: fan, 3: star, 4: varied). We note that in this setting, the graph labels highly correlated with the structural roles of the nodes. 
This gives the supervised methods (Molecular Fingerprints and Structure2Vec) an advantage over the unsupervised GraphWave approach. This is necessary because without these labels, the supervised methods cannot be applied.\n\nWe run each algorithm, then fit k-means on the embeddings of the first 150 graphs to try to recover the 15 different structural roles of this experiments. We evaluate the performance of the clustering on the remaining 50 graphs in the test set. \n\n*Results.* \nResults are shown in the following table.\n\nMethod\t\t\t\t\t | Homogeneity | Completeness | Silhouette\n-----------------------------------------------------------------------------------------------------------------------------\nRolX (Henderson et al., 2012)\t\t\t 0.688\t\t 0.352\t\t 0.466\nGLRD (Gilpin et al., 2013)\t\t\t\t 0.329\t\t 0.175\t\t 0.101\nStructure2Vec (Dai et al., 2016)\t\t\t 0.825\t\t 0.811\t\t 0.890\nMolecular Fingerprints (Duvenaud et al., 2015) 0.626\t 0.681\t\t 0.065\nGraphWave (this paper)\t\t\t\t 0.860\t\t 0.756\t\t 0.579\n",
"\nWe thank the reviewer for the detailed comments and questions regarding our submission. Here, we try to clarify some details and address the reviewer’s concerns:\n\n#1: The term \"structural equivalence\" is used incorrectly in the paper. \n\nWe emphasize that we are using the same definition of structural equivalence as Lorrain and White, 1971. However, perfect structural equivalence, as the reviewer points out, is extremely rare in real-world networks. Therefore, instead of looking for nodes with exact equivalence, we instead recover a low-dimensional embedding, or a structural signature, to find structurally similar nodes. We note that this notion of structural similarity is a commonly-used term in network science (Airoldi et al., 2008; Hoff et al., 2008; Newman, 2011; Henderson et al., 2012; Grover et al., 2016; Ribeiro et at., 2017; etc).\n\n#2: RolX uses MDL to automatically determine the number of roles.\n\nWhile RolX algorithm requires a pre-determined number of clusters, the reviewer is correct in mentioning that the RolX authors do include a method of automatically selecting this number using MDL. We thank the reviewer for this comment and will update the manuscript to reflect this fact. We note however that in our experiments, as we point out in Section 4 of our paper, we used RolX as an oracle estimator (providing it with the “correct” number of classes, the best-case scenario for RolX). \n\n#3: The paper does not discuss how the parameter d is chosen. \n\nWe set d=100 in all experiments. This parameter corresponds to the number of sampling points along the characteristic parametric curves (example is shown in Figure 3C). We have not put any special effort to tune this parameter.\n\n#4: What is considered a small perturbation? One can delete an edge (seemingly a small perturbation) and change the eigenvalues of the Laplacian dramatically.\n\nAs correctly highlighted by the reviewer, a small perturbation cannot be defined simply through the Hamming distance between the original and perturbed adjacency matrices. In our paper, we use definition of a small perturbation as defined in Spielman, Spectral Graph Theory (Chapter 16), 2011. That is, a small perturbation of the k-hop neighborhood corresponds to a set of edge additions/deletions that have a small impact on the graph Laplacian L. As proved by Spielman and studied by Milanese et al., 2010, in this setting, the perturbation induced on the eigenspectrum and the eigenvectors of L is small. Thus, the difference between the original Laplacian L and the perturbed Laplacian \\tilde{L} is small as well (sup ||L^k -\\tilde{L}^k|| <eps). We thank the reviewer for pointing out that this definition was unclear, and we will make sure to clarify it in the revised paper.\n\n#5: Why would one expect the barbell graph to have 8 classes? Why not 3? One for cliques, one for the chain, and one for connectors of the clique to the chain.\n\nUsing the definition of structural equivalence as defined by Lorrain and White, 1970, the barbell graph has exactly 8 structurally equivalent classes (one corresponding to the nodes in the cliques, and the seven others comprising the nodes in the chain at a given distance level to the cliques). We note that GraphWave can recover all 8 classes, whereas RolX is only able to discover 3 classes, indicating that GraphWave can recover fine grain structural information.\n\n#6: In Section 4.2, how many roles were selected for RolX?\n\nPlease see our answer to Comment #2 above. 
We used RolX as an oracle estimator, providing it with the correct number of classes, the best-case scenario for RolX.\n\n#7: Experiments were not conducted on transfer learning tasks.\n\nWhile we mention transfer learning as a potential application of this work, we had originally left the formal analysis of such methods to future work, as we discussed in the conclusion. However, due to the reviewer’s comments we ran a new transfer learning experiment (see the APPENDIX in our response to Reviewer 1). These results show that GraphWave outperforms several state of the art methods for the transfer learning task.\n\n#8: We did not compare with Gilpin et al., 2013.\n\nWe thank the reviewer for pointing out the reference to Gilpin et al., 2013, and their method, GLRD. We were not aware of it and will add it to the related work. While the code for the method was not published online by the authors, we implemented a simplified version of their method ourselves (incorporating sparsity constraints). In additional experiments described in our response to Reviewer 1 (APPENDIX), GraphWave outperformed GLRD by 260% in homogeneity, 430% in completeness, and by 500% in silhouette score. \n"
]
} | {
"paperhash": [
"aubry|the_wave_kernel_signature:_a_quantum_mechanical_approach_to_shape_analysis",
"cardillo|emergence_of_network_features_from_multiplexity",
"chung|the_heat_kernel_as_the_pagerank_of_a_graph",
"coburn|weyl's_theorem_for_nonnormal_operators",
"coifman|diffusion_maps",
"defferrard|convolutional_neural_networks_on_graphs_with_fast_localized_spectral_filtering",
"garcia-duran|learning_graph_representations_with_embedding_propagation",
"grover|node2vec:_scalable_feature_learning_for_networks",
"hamilton|inductive_representation_learning_on_large_graphs",
"hamilton|representation_learning_on_graphs:_methods_and_applications",
"hammond|wavelets_on_graphs_via_spectral_graph_theory",
"henderson|it's_who_you_know:_graph_mining_using_recursive_structural_features",
"henderson|rolx:_structural_role_extraction_&_mining_in_large_graphs",
"jin|axiomatic_ranking_of_network_role_similarity",
"jin|scalable_and_axiomatic_ranking_of_network_role_similarity",
"kipf|semi-supervised_classification_with_graph_convolutional_networks",
"klimt|introducing_the_enron_corpus",
"kondor|diffusion_kernels_on_graphs_and_other_discrete_input_spaces",
"lukacs|characteristic_functions",
"monti|geometric_deep_learning_on_graphs_and_manifolds_using_mixture_model_cnns",
"ovsjanikov|one_point_isometric_matching_with_the_heat_kernel",
"perozzi|deepwalk:_online_learning_of_social_representations",
"ribeiro|struc2vec:_learning_node_representations_from_structural_identity",
"rosenberg|v-measure:_a_conditional_entropy-based_external_cluster_evaluation_measure",
"rustamov|wavelets_on_graphs_via_deep_learning",
"shuman|chebyshev_polynomial_approximation_for_distributed_signal_processing",
"shuman|the_emerging_field_of_signal_processing_on_graphs:_extending_high-dimensional_data_analysis_to_networks_and_other_irregular_domains",
"shuman|vertex-frequency_analysis_on_graphs",
"sun|a_concise_and_provably_informative_multi-scale_signature_based_on_heat_diffusion",
"tremblay|graph_wavelets_for_multiscale_community_mining",
"yang|revisiting_semi-supervised_learning_with_graph_embeddings"
],
"title": [
"The wave kernel signature: A quantum mechanical approach to shape analysis",
"Emergence of network features from multiplexity",
"The heat kernel as the PageRank of a graph",
"Weyl's theorem for nonnormal operators",
"Diffusion maps",
"Convolutional neural networks on graphs with fast localized spectral filtering",
"Learning graph representations with embedding propagation",
"node2vec: Scalable feature learning for networks",
"Inductive representation learning on large graphs",
"Representation learning on graphs: Methods and applications",
"Wavelets on graphs via spectral graph theory",
"It's who you know: graph mining using recursive structural features",
"RolX: structural role extraction & mining in large graphs",
"Axiomatic ranking of network role similarity",
"Scalable and axiomatic ranking of network role similarity",
"Semi-supervised classification with graph convolutional networks",
"Introducing the Enron corpus",
"Diffusion kernels on graphs and other discrete input spaces",
"Characteristic functions",
"Geometric deep learning on graphs and manifolds using mixture model cnns",
"One point isometric matching with the heat kernel",
"DeepWalk: online learning of social representations",
"struc2vec: Learning node representations from structural identity",
"V-measure: a conditional entropy-based external cluster evaluation measure",
"Wavelets on graphs via deep learning",
"Chebyshev polynomial approximation for distributed signal processing",
"The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains",
"Vertex-frequency analysis on graphs",
"A concise and provably informative multi-scale signature based on heat diffusion",
"Graph wavelets for multiscale community mining",
"Revisiting semi-supervised learning with graph embeddings"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"m aubry",
"u schlickewei",
"d cremers"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"a cardillo",
"j gómez-gardenes",
"m zanin",
"m romance",
"d papo",
"f del",
"s pozo",
" boccaletti"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"f chung"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"l coburn"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"r coifman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"m defferrard",
"x bresson",
"p vandergheynst"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"a garcia-duran",
"m niepert"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"a grover",
"j leskovec"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"w hamilton",
"r ying",
"j leskovec"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"w hamilton",
"r ying",
"j leskovec"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"d hammond",
"p vandergheynst",
"r gribonval"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"k henderson",
"b gallagher",
"l li",
"l akoglu",
"t eliassi-rad",
"h tong",
"c faloutsos"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"k henderson",
"b gallagher",
"t eliassi-rad",
"h tong",
"s basu",
"l akoglu",
"d koutra",
"c faloutsos",
"l li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"r jin",
"v lee",
"h hong"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"r jin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"t kipf",
"m welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"b klimt",
"y yang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"r kondor",
"j lafferty"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"e lukacs"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"f monti",
"d boscaini",
"j masci",
"e rodolà",
"j svoboda",
"m bronstein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"m ovsjanikov",
"q mérigot",
"f mémoli",
"l guibas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"b perozzi",
"r al-rfou",
"s skiena"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"l ribeiro",
"p saverese",
"d figueiredo"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"a rosenberg",
"j hirschberg"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"r rustamov",
"l guibas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"d shuman",
"p vandergheynst",
"p frossard"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"d shuman",
"s narang",
"p frossard",
"a ortega",
"p vandergheynst"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"d shuman",
"b ricaud",
"p vandergheynst"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"j sun",
"m ovsjanikov",
"l guibas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"n tremblay"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"z yang",
"w cohen",
"r salakhutdinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"1212.2153v2",
"",
"",
"",
"1606.09375v3",
"",
"1607.00653v1",
"1706.02216v4",
"1709.05584v3",
"0912.3848v1",
"",
"",
"1102.3937v2",
"",
"",
"",
"",
"",
"1611.08402v3",
"",
"1403.6652v2",
"1704.03165v3",
"",
"",
"1105.1891v2",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.37037 | 0.833333 | null | null | null | null | null | rJR2ylbRb |
||
tallec|unbiased_online_recurrent_optimization|ICLR_cc_2018_Conference | 1702.05043v3 | Unbiased Online Recurrent Optimization | The novel \emph{Unbiased Online Recurrent Optimization} (UORO) algorithm allows for online learning of general recurrent computational graphs such as recurrent network models. It works in a streaming fashion and avoids backtracking through past activations and inputs. UORO is computationally as costly as \emph{Truncated Backpropagation Through Time} (truncated BPTT), a widespread algorithm for online learning of recurrent networks \cite{jaeger2002tutorial}. UORO is a modification of \emph{NoBackTrack} \cite{DBLP:journals/corr/OllivierC15} that bypasses the need for model sparsity and makes implementation easy in current deep learning frameworks, even for complex models. Like NoBackTrack, UORO provides unbiased gradient estimates; unbiasedness is the core hypothesis in stochastic gradient descent theory, without which convergence to a local optimum is not guaranteed. On the contrary, truncated BPTT does not provide this property, leading to possible divergence. On synthetic tasks where truncated BPTT is shown to diverge, UORO converges. For instance, when a parameter has a positive short-term but negative long-term influence, truncated BPTT diverges unless the truncation span is very significantly longer than the intrinsic temporal range of the interactions, while UORO performs well thanks to the unbiasedness of its gradients.
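As a companion to the abstract above, the following is a minimal, illustrative numpy sketch of the rank-one forward-mode update at the heart of UORO, written for a vanilla RNN in which only the recurrent matrix W is learned. The network, dimensions, toy loss, and epsilon constants are assumptions for illustration, not the authors' implementation; see the paper for the general algorithm and the exact normalization.

```python
# Minimal illustrative sketch of a UORO-style rank-one update for a vanilla RNN
# s_{t+1} = tanh(W s_t + U x_{t+1}), learning only W. The rank-one pair
# (s_tilde, W_tilde) keeps E[s_tilde (x) W_tilde] an unbiased estimate of ds/dW.
import numpy as np

rng = np.random.default_rng(0)
n, m, eps = 16, 4, 1e-7
W = rng.standard_normal((n, n)) * 0.1        # learned recurrent weights
U = rng.standard_normal((n, m)) * 0.1        # fixed input weights (not learned here)
w_out = rng.standard_normal(n) * 0.1         # fixed readout for a scalar toy loss
lr = 1e-2

s = np.zeros(n)
s_tilde = np.zeros(n)                        # state-shaped factor of the approximation
W_tilde = np.zeros((n, n))                   # parameter-shaped factor

for t in range(1000):
    x = rng.standard_normal(m)
    target = 0.0                             # toy target (assumed)
    s_new = np.tanh(W @ s + U @ x)
    D = 1.0 - s_new ** 2                     # derivative of tanh at the new state

    # Jacobian-vector product (dF/ds) s_tilde and vector-Jacobian product nu^T dF/dW.
    a = D * (W @ s_tilde)
    nu = rng.choice([-1.0, 1.0], size=n)     # random signs
    g_nu = np.outer(nu * D, s)               # equals (dF/dW)^T nu, reshaped like W

    rho0 = np.sqrt(np.linalg.norm(W_tilde) / (np.linalg.norm(a) + eps)) + eps
    rho1 = np.sqrt(np.linalg.norm(g_nu) / (np.linalg.norm(nu) + eps)) + eps
    s_tilde = rho0 * a + rho1 * nu           # unbiased for any positive rho0, rho1
    W_tilde = W_tilde / rho0 + g_nu / rho1

    # Unbiased estimate of d(loss)/dW through the recurrent state, then one SGD step.
    y = w_out @ s_new
    dl_ds = (y - target) * w_out
    grad_W = (dl_ds @ s_tilde) * W_tilde
    W -= lr * grad_W
    s = s_new
```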
| {
"name": [
"corentin tallec",
"yann ollivier"
],
"affiliation": [
{
"laboratory": "Laboratoire de Recherche en Informatique",
"institution": "Université Paris Sud Gif-sur-Yvette",
"location": "{'postCode': '91190', 'country': 'France'}"
},
{
"laboratory": "Laboratoire de Recherche en Informatique",
"institution": "Université Paris Sud Gif-sur-Yvette",
"location": "{'postCode': '91190', 'country': 'France'}"
}
]
} | null | [
"Computer Science"
] | International Conference on Learning Representations | 2017-02-16 | 16 | 85 | null | null | null | null | null | null | null | true | The reviewers agree that the proposed method is theoretically interesting, but disagree on whether it has been properly experimentally validated. My view is that the theoretical contribution is interesting enough to warrant inclusion in the conference, and so I will err on the side of accepting. | {
"review_id": [
"r11STCqxG",
"B1Ud2yqgz",
"B1QPFb5eG"
],
"review": [
{
"title": "title: Clever trick for making general memory-efficient online unbiased RNN learning possible",
"paper_summary": null,
"main_review": "main_review: This paper presents a generic unbiased low-rank stochastic approximation to full rank matrices that makes it possible to do online RNN training without the O(n^3) overhead of real-time recurrent learning (RTRL). This is an important and long-sought-after goal of connectionist learning and this paper presents a clear and concise description of why their method is a natural way of achieving that goal, along with experiments on classic toy RNN tasks with medium-range time dependencies for which other low-memory-overhead RNN training heuristics fail. My only major complaint with the paper is that it does not extend the method to large-scale problems on real data, for instance work from the last decade on sequence generation, speech recognition or any of the other RNN success stories that have led to their wide adoption (eg Graves 2013, Sutskever, Martens and Hinton 2011 or Graves, Mohamed and Hinton 2013). However, if the paper does achieve what it claims to achieve, I am sure that many people will soon try out UORO to see if the results are in any way comparable.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Review",
"paper_summary": null,
"main_review": "main_review: The authors introduce a novel approach to online learning of the parameters of recurrent neural networks from long sequences that overcomes the limitation of truncated backpropagation through time (BPTT) of providing biased gradient estimates.\n\nThe idea is to use a forward computation of the gradient as in Williams and Zipser (1989) with an unbiased approximation of Delta s_t/Delta theta to reduce the memory and computational cost.\n\nThe proposed approach, called UORO, is tested on a few artificial datasets.\n\nThe approach is interesting and could potentially be very useful. However, the paper lacks in providing a substantial experimental evaluation and comparison with other methods.\nRather than with truncated BPTT with smaller truncation than required, which is easy to outperform, I would have expected a comparison with some of the other methods mentioned in the Related Work Section, such as NBT, ESNs, Decoupled Neural Interfaces, etc. Also the evaluation should be extended to other challenging tasks. \n\nI have increased the score to 6 based on the comments and revisions from the authors.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Very interesting paper that approaches online training of RNNs in a principled way, although more experiments would make it more convincing",
"paper_summary": null,
"main_review": "main_review: Post-rebuttal update:\nI am happy with the rebuttal and therefore I will keep the score of 7.\n\nThis is a very interesting paper. Training RNN's in an online fashion (with no backpropagation through time) is one of those problems which are not well explored in the research community. And I think, this paper approaches this problem in a very principled manner. The authors proposes to use forward approach for the calculation of the gradients. The author proposes to modify RTRL by maintaining a rank one approximation of jacobian matrix (derivative of state w.r.t parameters) which was done in NoBackTrack Paper. The way I think this paper is different from NoBackTrack Paper is that this version can be implemented in a black box fashion and hence easy to implement using current DL libraries like Pytorch. \n\nPros.\n\n- Its an interesting paper, very easy to follow, and with proper literature survey.\n\nCons:\n\n- The results are quite preliminary. I'll note that this is a very difficult problem.\n- \"The proof of UORO’s convergence to a local optimum is soon to be published Masse & Ollivier (To appear).\" I think, paper violates the anonymity. So, I'd encourage the authors to remove this. \n\nSome Points: \n\n- I find the argument of stochastic gradient descent wrong (I could be wrong though). RNN's follow the markov property (wrt hidden states from previous time step and the current input) so from time step t to t+1, if you change the parameters, the hidden state at time t (and all the time steps before) would carry stale information unless until you're using something like eligibility traces from RL literature. I also don't know how to overcome this issue. \n\n- I'd be worried about the variance in the estimate of rank one approximation. All the experiments carried out by the authors are small scale (hidden size = 64). I'm curious if authors tried experimenting with larger networks, I'd guess it wont perform well due to the high variance in the approximation. I'd like to see an experiment with hidden size = 128/256/512/1024. My intuition is that because of high variance it would be difficult to train this network, but I could be wrong. I'm curious what the authors had to say about this. \n\n- If the variance of the approximation is indeed high, can we use something to control the dynamics of the network which can result in less variance. Have authors thought about this ? \n\n- I'd also like to see experiments on copying task/adding task (as these are standard experiments which are done for analysis of long term dependencies) \n\n- I'd also like to see what effect the length of sequence has on the approximation. As small errors in approximation on each step can compound giving rise to chaotic dynamics. (small change in input => large change in output)\n\n- I'd also like to know how using UORO changes the optimization as compared to Back-propagation through time in the sense, does the two approaches would reach same local minimum ? or is there a possibility that the former can reach \"less\" number of potential local minimas as compared to BPTT. \n\n\nI'm tempted to give high score for this paper( Score - 7) , as it is unexplored direction in our research community, and I think this paper makes a very useful contribution to tackle this problem in a very principled way. But I'd like some more experiments to be done (which I have mentioned above), failing to do those experiments, I'd be forced to reduce the score (to score - 5) ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.7777777910232544,
0.5555555820465088,
0.6666666865348816
],
"confidence": [
1,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Paper revision",
"Answer to reviewer 1",
"Answer to reviewer 3",
"Answer to reviewer 2"
],
"comment": [
"The paper has been revised following the reviewers' advice. Two sections focusing on the evolution of the variance of the gradient approximation, both with respect to the length of the input sequence and to the size of the network have been added, along with corresponding experiments.",
"Thank you for your comments and suggestions.\n\n1/ Regarding comparison to other online methods such as NoBackTrack and Echo State Networks. For plain, fully connected RNNs, NoBackTrack and UORO turn out to be mathematically identical (though implemented quite differently), so they will perform the same. On the contrary, for LSTMs, NoBackTrack is extremely difficult to implement (to our knowledge, it has never been done); this was one of the motivations for UORO, but it makes the comparison difficult.\n\nFor Echo State Networks: ESNs amount in great part to not learning the internal weights, only the output weights (together with a carefully tuned initialization). As much as we are aware, they are not known to fare particularly well on the kind of task we consider, but we may have missed relevant references.\n\n2/ We have included a few more tasks and tests, although this remains relatively small-scale.\n",
"Thank you for your insights, questions and suggestions. We have tried to attend your concerns in the revised version of the paper. \n\n1/ As you pointed out, the results are indeed preliminary. As pointed out in the answer to Reviewer 2, it is difficult to obtain results competitive with BPTT on large scale benchmarks given the additional constraints on UORO (namely, no storage of past data, and good convergence properties, which is not the case of truncated backpropagation if dependencies exceed its truncation range).\n\n2/ About the variance of UORO for large networks: We have added an experiment to test this. The variance of UORO does increase with netowrk size (probably sublinearly), and larger networks will require smaller learning rates.\n\n3/ About the effect of length on the quality of the approximation: We have added an experiment to test the evolution of UORO variance when time increases along the input sequence. The variance of UORO does not explode over time, and is stationary. A key point of UORO is the whole renormalization process (variable rho), designed precisely for this. An independent, theoretical proof for the similar case of NoBackTrack is in (Masse 2017). Thus UORO is applicable to unbounded sequences (notably, in the experiments, datasets are fed as a single sequence, containing 10^6-10^7 characters). \n\n4/ About the stochastic gradient descent argument: indeed one has to be careful. If UORO is used to process a number of finite training sequences, and gradient steps are performed at the end of each sequence only, then this is a fully standard SGD argument: UORO computes, in a streaming fashion, an unbiased estimate of the same gradient as BPTT for each training sequence. However, if the gradients steps are performed at every time step, as we do here, then you are right that an additional argument is needed. The difference between applying gradients at each step and applying gradients only at the end of each sequence is at *second order* in the learning rate: if the learning rate is small, applying gradients at each time does not change the computations too much, and the SGD argument applies up to second-order terms. This is fully formalized in (Masse 2017). If moreover, only one infinite training sequence is provided, then an additional assumption of ergodicity (decay of correlations) is needed. But in any case unbiasedness is the central property.\n\n5/ About the optima reached by UORO vs BPTT: in the limit of very small learning rates, UORO, RTRL, and BPTT with increasing truncation lengths will all produce the same limit trajectories. The theory from (Masse 2017) proves local convergence to the *same* set of local optima for RTRL and UORO (if starting close enough to the local optimum). On the other hand, for large learning rates, we are not aware of theoretical results for any recurrent algorithm.\n\n6/ Regarding reference (Masse 2017): this reference is now publically\navailable and we provide a link in the bibliography. We were indeed aware of Masse's work a bit before it was put online, but that still covers many people, so we do not believe this breaks anonymity. Our paper is disjoint from (Masse 2017), as can be directly checked by comparing the texts.\n",
"Thank you for the constructive feedback. At the moment, we haven't\nsucceeded in scaling UORO up to the state of the art with results competitive with backpropagation on large scale benchmarks. This may be due to the additional constraints borne by UORO, namely, both\nmemorylessness and unbiasedness at all time scales. Such datasets (notably next-character or next-word predictions) contain difficult short-term dependencies: Truncated BPTT with relatively small truncation is expected to learn those dependencies better than an algorithm like UORO, which must consider all time ranges at once.\n"
]
} | {
"paperhash": [
"massé|autour_de_l'usage_des_gradients_en_apprentissage_statistique",
"jaderberg|decoupled_neural_interfaces_using_synthetic_gradients",
"gruslys|memory-efficient_backpropagation_through_time",
"kingma|adam:_a_method_for_stochastic_optimization",
"duchi|adaptive_subgradient_methods_for_online_learning_and_stochastic_optimization",
"jaeger|2007_special_issue:_optimization_and_applications_of_echo_state_networks_with_leaky-_integrator_neurons",
"steil|backpropagation-decorrelation:_online_recurrent_learning_with_o(n)_complexity",
"maass|real-time_computing_without_stable_states:_a_new_framework_for_neural_computation_based_on_perturbations",
"movellan|a_monte_carlo_em_approach_for_partially_observable_diffusion_processes:_theory_and_applications_to_neural_networks",
"gers|long_short-term_memory_learns_context_free_and_context_sensitive_languages",
"hochreiter|long_short-term_memory",
"pearlmutter|gradient_calculations_for_dynamic_recurrent_neural_networks:_a_survey",
"simard|tangent_prop_-_a_formalism_for_specifying_selected_invariances_in_an_adaptive_network",
"williams|a_learning_algorithm_for_continually_running_fully_recurrent_neural_networks",
"ollivier|1_the_nobacktrack_algorithm_1_._1_the_rank-one_trick_:_an_expectation-preserving_reduction",
"lecun|efficient_backprop",
"jaeger|a_tutorial_on_training_recurrent_neural_networks_,_covering_bppt_,_rtrl_,_ekf_and_the_\"_echo_state_network_\"_approach_-_semantic_scholar"
],
"title": [
"Autour De L'Usage des gradients en apprentissage statistique",
"Decoupled Neural Interfaces using Synthetic Gradients",
"Memory-Efficient Backpropagation Through Time",
"Adam: A Method for Stochastic Optimization",
"Adaptive Subgradient Methods for Online Learning and Stochastic Optimization",
"2007 Special Issue: Optimization and applications of echo state networks with leaky- integrator neurons",
"Backpropagation-decorrelation: online recurrent learning with O(N) complexity",
"Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations",
"A Monte Carlo EM Approach for Partially Observable Diffusion Processes: Theory and Applications to Neural Networks",
"Long Short-Term Memory Learns Context Free and Context Sensitive Languages",
"Long Short-Term Memory",
"Gradient calculations for dynamic recurrent neural networks: a survey",
"Tangent Prop - A Formalism for Specifying Selected Invariances in an Adaptive Network",
"A Learning Algorithm for Continually Running Fully Recurrent Neural Networks",
"1 The NoBackTrack algorithm 1 . 1 The rank-one trick : an expectation-preserving reduction",
"Efficient BackProp",
"A tutorial on training recurrent neural networks , covering BPPT , RTRL , EKF and the \" echo state network \" approach - Semantic Scholar"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"Pierre-Yves Massé"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Max Jaderberg",
"Wojciech M. Czarnecki",
"Simon Osindero",
"O. Vinyals",
"Alex Graves",
"David Silver",
"K. Kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Gruslys",
"R. Munos",
"Ivo Danihelka",
"Marc Lanctot",
"Alex Graves"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Diederik P. Kingma",
"Jimmy Ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"John C. Duchi",
"Elad Hazan",
"Y. Singer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"H. Jaeger",
"M. Lukoševičius",
"D. Popovici",
"U. Siewert"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jochen J. Steil"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"W. Maass",
"T. Natschläger",
"H. Markram"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Movellan",
"Paul Mineiro",
"Ruth J. Williams"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Felix Alexander Gers"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sepp Hochreiter",
"J. Schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Barak A. Pearlmutter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Simard",
"B. Victorri",
"Yann LeCun",
"J. Denker"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ronald J. Williams",
"D. Zipser"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Y. Ollivier",
"Corentin Tallec",
"G. Charpiat"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yann LeCun",
"L. Bottou",
"G. Orr",
"K. Müller"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"H. Jaeger"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"1608.05343v2",
"1606.03401",
"1412.6980v9",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
[
"background"
],
[],
[
"background"
],
[],
[
"methodology"
],
[],
[],
[],
[
"background"
],
[],
[],
[
"background",
"methodology"
],
[],
[],
[],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | null | 96 | 0.885417 | 0.666667 | 0.833333 | null | null | null | null | null | rJQDjk-0b |
|
wen|flipout_efficient_pseudoindependent_weight_perturbations_on_minibatches|ICLR_cc_2018_Conference | 3861760 | 1803.04386 | Flipout: Efficient Pseudo-Independent Weight Perturbations on Mini-Batches | Stochastic neural net weights are used in a variety of contexts, including regularization, Bayesian neural nets, exploration in reinforcement learning, and evolution strategies. Unfortunately, due to the large number of weights, all the examples in a mini-batch typically share the same weight perturbation, thereby limiting the variance reduction effect of large mini-batches. We introduce flipout, an efficient method for decorrelating the gradients within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example. Empirically, flipout achieves the ideal linear variance reduction for fully connected networks, convolutional networks, and RNNs. We find significant speedups in training neural networks with multiplicative Gaussian perturbations. We show that flipout is effective at regularizing LSTMs, and outperforms previous methods. Flipout also enables us to vectorize evolution strategies: in our experiments, a single GPU with flipout can handle the same throughput as at least 40 CPU cores using existing methods, equivalent to a factor-of-4 cost reduction on Amazon Web Services. | {
"name": [
"yeming wen",
"paul vicol",
"jimmy ba",
"dustin tran",
"roger grosse"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Toronto Vector Institute",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto Vector Institute",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto Vector Institute",
"location": "{}"
},
{
"laboratory": "",
"institution": "Columbia University",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto Vector Institute",
"location": "{}"
}
]
} | We introduce flipout, an efficient method for decorrelating the gradients computed by stochastic neural net weights within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example. | [
"weight perturbation",
"reparameterization gradient",
"gradient variance reduction",
"evolution strategies",
"LSTM",
"regularization",
"optimization"
] | null | 2018-02-15 22:29:20 | 31 | 293 | 36 | null | null | null | null | null | null | true | Thank you for submitting your paper to ICLR. The idea is simple, but easy to implement and effective. The paper examines the performance fairly thoroughly across a number of different scenarios, showing that the method consistently reduces variance. How this translates into final performance is complex of course, but faster convergence is demonstrated and the revised experiments in Table 2 show that it can lead to improvements in accuracy. | {
"review_id": [
"rknUpWqgz",
"rkLiPl9xz",
"Hkh0HMjgM"
],
"review": [
{
"title": "title: Flipout is an important contribution for weight-perturbation algorithms",
"paper_summary": null,
"main_review": "main_review: Typical weight perturbation algorithms (as used for e.g. Regularization, Bayesian NN, Evolution\nStrategies) suffer from a high variance of the gradient estimates. This is caused\nby sharing a weight perturbation by all training examples in a minibatch. More specifically\nsharing perturbed weights over samples in a minibtach induces correlations between gradients of each sample, which can\nnot be resolved by standard averaging. The paper introduces a simple idea, flipout, to\nperturb the weights quasi-independently within a minibatch: a base perturbation (shared\nby all sample in a minibatch) is multiplied by a random rank-one sign matrix (different\nfor every sample). Due to its special structure it is possible to vectorize this\nper-sample-operation such that only matrix-matrix products (as in the default forward\npropagation) are involved. The incurred computational cost is roughly twice as much\nas a standard forward propagation path. The paper also proves that this approach\nreduces the variance of the gradient estimates (and in practice, flipout should\nobtain the ideal variance reduction). In a set of experiments it is demonstrated\nthat a significant reduction in gradient variance is achieved, resulting\nin speedups for training time. Additionally, it is demonstrated that\nflipout allows evolution strategies utilizing GPUs.\n\nOverall this is a very nice paper. It clearly lays out the problem, describes\none solution to it and shows both theoretically as well as empirically\nthat the proposed solution is a feasable one. Given the increasing importance\nof Bayesian NN and Evolution Strategies, flipout is an important contribution.\n\nQuality: Overall very well written. Relevant literature is covered and an important\nproblem of current research in ML is tackled.\n\nClarity: Ideas/Reasons are clearly presented.\n\nSignificance: The presented work is highly significant for practical applicability\nof Bayesian NN and Evolution Strategies.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
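To illustrate the mechanism summarized in the review above, here is a short numpy sketch of the flipout estimator for a single fully connected layer with a Gaussian weight perturbation. Shapes, scales, and variable names are assumptions for this example, not the authors' implementation.

```python
# Illustrative numpy sketch of flipout for one fully connected layer: each
# example n implicitly sees its own perturbation delta_W * outer(r_n, s_n),
# yet only two matrix products are needed. Shapes and scales are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, d_in, d_out = 32, 128, 64                         # batch and layer sizes (assumed)
X = rng.standard_normal((N, d_in))                   # mini-batch of inputs
W = rng.standard_normal((d_out, d_in)) * 0.1         # mean weights
delta_W = rng.standard_normal((d_out, d_in)) * 0.05  # one shared perturbation sample

# Per-example random sign vectors (the rank-one sign matrix is outer(r_n, s_n)).
S = rng.choice([-1.0, 1.0], size=(N, d_in))
R = rng.choice([-1.0, 1.0], size=(N, d_out))

# Shared perturbation: every example sees the same delta_W (correlated gradients).
Y_shared = X @ (W + delta_W).T

# Flipout: y_n = W x_n + (delta_W (x_n * s_n)) * r_n, vectorized over the batch.
Y_flipout = X @ W.T + ((X * S) @ delta_W.T) * R

# The effective per-example perturbation delta_W * outer(r_n, s_n) has the same
# marginal distribution as delta_W (symmetric), but is decorrelated across
# examples, which restores the ~1/N variance reduction of mini-batching.
print(Y_shared.shape, Y_flipout.shape)
```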
{
"title": "title: The paper presents a strategy to retain variance reduction while dropping weights rather than activations. It is an important idea that needs more work.",
"paper_summary": null,
"main_review": "main_review: The paper is well written. The proposal is explained clearly. \nAlthough the technical contribution of this work is relevant for network learning, several key aspects are yet to be addressed thoroughly, particularly the experiments. \n\nWill there be any values of alpha, beta and gamma where eq(8) and eq(9) are equivalent. In other words, will it be the case that SharedPerturbation(alpha, beta, gamma, N) = Flipout(alpha1, beta1, gamma1, N1) for some choices of alpha, alpha1, beta, beta1, ...? This needs to be analyzed very thoroughly because some experiments seem to imply that Flip and NoFlip are giving same performance (Fig 2(b)). \nIt seems like small batch with shared perturbation should be similar to large batch with flipout? \nWill alpha and gamma depend on the depth of the network? Can we say anything about which networks are better? \nIt is clear that the perturbations E1 and E2 are to be uniform +/-1. Are there any benefits for choosing non-uniform sampling, and does the computational overhead of sampling them depend on the network depth/size. \n\nThe experiments seem to be inconclusive. \nFirstly, how would the proposed strategy work on standard vision problems including learning imagenet and cifar datasets (such experiments would put the proposal into perspective compared to dropout and residual net type procedures) ?\nSecondly, without confidence intervals (or significance tests of any kind), it is difficult to evaluate the goodness of Flipout vs. baselines, specifically in Figures 2(b,d). \nThirdly, it is known that small batch sizes give better performance guarantees than large ones, and so, what does Figure 1 really imply? (Needs more explanation here, relating back to description of alpha, beta and gamma; see above). \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A very pleasant article, but whose actual impact should be made clearer",
"paper_summary": null,
"main_review": "main_review: In this article, the authors offer a way to decrease the variance of the gradient estimation in the training of neural networks.\nThey start in the Introduction and Section 2 by explaining the multiple uses of random connection weights in deep learning and how the computational cost often restricts their use to a single randomly sampled set of weights per minibatch, which results to higher-variance gradient estimatos than could be achieved otherwise. In Section 3 the authors offer to get the benefits of multiple weights without most of the cost, when the distribution of the weights is symmetric and fully factorized, by multiplying sampled-once random perturbations of the weights by a rank-1 random sign matrix. This efficient mechanism is only twice as costly as a single random perturbation, and the authors show how to efficiently parallelize it on GPUs, thereby also allowing GPU-ization of evolution strategies (something so far difficult toachieve). Of note, they provide a theoretical analysis in Section 3.2, proving the actual variance reduction of their efficient pseudo-sampling scheme. In Section 4 they provide quite varied empirical analysis: they confirm their theoretical results on four architectures; they show its use it to regularise on language models; they apply it on large minibatch settings where high variance is a main problem; and on evolution strategies.\n\nWhile it is a rather simple idea which could be summarised much earlier in the single equation (3), I really like the thoroughness and the clarity of the exposure of the idea. Too many papers in our community skimp on details and on formalism, and it is a delight to see things exposed so clearly -- even accompanied with a proof.\n\nHowever, the painful part: while I am convinced by the idea and love its detailed exposure, and the gradient variance reduction is made very clear, the experimental impact in terms of accuracy (or perplexity) is, sadly, not very convincing. Nowhere in the text did I find a clear rationale of why it is beneficial to reduce the variance of the gradient. The numerical results in Table 1 and Table 2 also do not show a clear improvement: Flipout does not provide the best accuracy. The gain in wall clock could be a factor, but would need to be measured on the figures more clearly. And the validation errors in Figure 2 for Evolution strategies seem to be worse than backprop.The main text itself also only claims performance “comparable to the other methods”. The only visible gain is on the lower part Figure 2.a on a ConvNet.\n\nThis makes me wonder if the authors could do a better job of putting forward the actual advantages of their methods on the end-results: could wall clock measure be put more forward, to justify the extra work? This would, in my mind, strongly improve the case for publication of this article.\n\n\nA few improvement suggestions:\n* Could put earlier more emphasis of superiority to Local Reparameterization Trick in terms of architecture, not wait until Section 2.2 and section 4.1\n*Should also put more emphasis on limitations, not wait until 3.1.\n* Proposition 1 is quite straightforward, not sure it deserves a proposition, but it’s elegant to put it forward.\n* Footnote 1 on re-using the matrices is indeed practical, but also somewhat surprising in terms of bias risks. 
Could it be explained in more depth, maybe by the random permutations of the minibatches making the bias non-systematic and cancelling out?\n* Theorem 1: For readability, could merge the expectations over the joint distribution as E_{x, \\hat \\delta W}, rather than separate expectations with the conditional distributions.\n* Theorem 1: could the authors provide a clearer intuitive explanation of the \\beta term alone, not only as part of \\alpha + \\beta, especially as it plays such a key role, being the only one that does not disappear? And how do they explain their empirical observation that \\beta is close to 0? Any intuition on that?\n* Experiments: I salute the authors for providing all the details in an exhaustive manner in the Appendix. Very commendable.\n* Experiments: I like the empirical verification of the theory. Very neat to see.\n\nMinor typo:\n* page 2 last paragraph, “evolution strategies” is plural but the verbs are used in singular (“is black box”, “It doesn’t”, “generates”)\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.7777777910232544,
0.5555555820465088,
0.5555555820465088
],
"confidence": [
0.5,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Updated paper",
"Thank you!",
"Response to Reviewer 2",
"Flipout offers optimization benefits for large batch sizes",
"Details of our approach",
"Reviewer's Answer to Authors' Response"
],
"comment": [
"We thank the reviewers for their helpful comments.\n\nWe updated the regularization experiments in Section 4.2, including the results in Table 2. We show that flipout applied to DropConnect outperforms all the other methods we investigated.\n\nWe also added Appendix E.4, in which we show that for large-batch LSTM training: 1) using DropConnect with flipout achieves significant variance reduction compared to using a shared DropConnect mask for all examples; and 2) DropConnect with flipout converges faster than DropConnect with shared masks, showcasing the optimization benefits of using flipout.",
"Thank you for your positive comments and for recognizing the work!",
"Thank you for your comment.\n\nVariance reduction is a central issue in stochastic optimization, and countless papers have tried to address it. To summarize, lower variance enables faster convergence and hence improves the sample efficiency. We gave one reference above, but there are many more that we did not mention (a few more examples are [1-12]). Furthermore, the variance of the stochastic gradients is arguably the most serious problem facing policy gradient methods such as evolution strategies, and some fundamental algorithms like REINFORCE are essentially variance reduction methods. So hopefully the importance of variance reduction is clear.\n\nWe demonstrated consistently large variance reduction effects, and showed that in at least some cases, this leads to more efficient training (see Figure 2.a). In addition: 1) we show in Table 2 that flipout applied to DropConnect outperforms all the other methods by a significant margin (73.02 test perplexity, compared to 75.31 for the next-best method); and 2) we show in Appendix E.4 that flipout applied to DropConnect converges faster than DropConnect with shared masks (which is currently the SOTA method).\n\nRegarding when our method helps: most straightforwardly, it helps when SGD is suffering from high estimation variance. This turned out to be the case for some of the BNNs we experimented with, as well as for ES (which is notoriously high-variance). As we’ve pointed out, estimation variance will become a more serious bottleneck as hardware trends favor increasing batch sizes. These factors give simple rules of thumb for when flipout will be useful.\n\n[1] Andrew C. Miller et al. Reducing reparameterization gradient variance. In NIPS, 2017.\n[2] Alberto Bietti and Julien Mairal. Stochastic optimization with variance reduction for infinite datasets with finite sum structure. In NIPS, 2017.\n[3] Sashank J. Reddi et al. Stochastic variance reduction for nonconvex optimization. In ICML, 2016.\n[4] Soham De, Gavin Taylor, and Tom Goldstein. Variance reduction for distributed stochastic gradient descent. arXiv preprint arXiv:1512.01708, 2015.\n[5] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. Saga: A fast incremental gradient method with support for non-strongly convex composite objectives. In NIPS, 2014.\n[6] Aaron Defazio et al. Finito: A faster, permutable incremental gradient method for big data problems. In ICML, 2014.\n[7] Reza Harikandeh et al. Stop wasting my gradients: Practical SVRG. In NIPS, 2015.\n[8] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, 2013.\n[9] Sashank J. Reddi et al. On variance reduction in stochastic gradient descent and its asynchronous variants. In NIPS, 2015.\n[10] Nicolas L Roux, Mark Schmidt, and Francis R Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In NIPS, 2012.\n[11] Chong Wang, Xi Chen, Alex J Smola, and Eric P Xing. Variance reduction for stochastic gradient optimization. In NIPS, 2013.\n[12] Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.",
"Thank you for your careful and insightful feedback.\n\n-> Q: Why is it beneficial to reduce the variance of the gradient, flipout doesn’t provide the best accuracy, wall-clock time advantage?\n\nThe importance of variance reduction in SGD is well-established. Variance reduction is the whole reason for using batch sizes larger than 1, and some well-known works [1] have found that with careful implementation, the variance-reducing effect of large batches translates into a linear optimization speedup. Whether this relationship holds in a particular case depends on a whole host of configuration details which are orthogonal to our paper; e.g. the aforementioned paper had to choose careful initializations and learning rate schedules in order to achieve it. Still, we can confidently say there is no hope for unlocking the optimization benefits of large batches unless one uses some scheme (such as flipout) that enables variance reduction.\n\nNote that we do in fact observe significant optimization benefits (flipout converges 3X faster in terms of iterations), as shown in Figure 2(a). Additionally, although flipout is 2X more computationally expensive in theory, it can be implemented more efficiently in practice. For example, we can send each of the two matmul calls to a separate TPU chip, so they are done in parallel. Communication will add overhead but it shouldn’t be much, as it is only two chips communicating and the messages are matrices of size [batch_size, hidden_size] rather than the set of full weights.\n\nWith respect to the advantages offered by flipout for training LSTMs, we have conducted new experiments to compare regularization methods, and have updated Section 4.2 and the results in Table 2. We found that using flipout to implement DropConnect for recurrent regularization yields strong results, and significantly outperforms the other methods in both validation and test perplexity. For our original word-level LSTM experiments, we used the setup of [2], with a fixed learning schedule that decays the learning rate by a factor of 1.2 each epoch starting after epoch 6. In our new experiments, we decay the learning rate by a factor of 4 based on the nonmonotonic criterion introduced in [3]; the perplexities of all methods except the unregularized LSTM are reduced compared to the previous experiments. Using flipout to implement DropConnect allows us to use a different DropConnect mask per example in a batch efficiently (compared to [3], which shares the weights between all examples).\n\nWe also added Appendix E.4, which shows that using flipout with DropConnect yields significant variance reduction and faster training compared to using a shared DropConnect mask for all examples (as is done in [3]).\n\n-> Q: ES seems to be worse than backprop.\n\nWe’re not advocating for ES to replace backprop. The main comparison in this section is between NaiveES and FlipES; we show that FlipES behaves like NaiveES, but is more efficient due to parallelism. Our reason for including the backprop comparison is to show that this is an interesting regime to investigate. One might have thought that ES would hopelessly underperform backprop (since the latter uses gradients), but in fact FlipES turns out to be competitive. The reason this result is interesting is that unlike backprop, ES can also be applied to non-differentiable models.\n\n-> Q: Footnote 1 with bias risk.\n\nThe trick in Footnote 1 does not introduce any bias. 
Proposition 1 implies that the gradients are unbiased for any distribution over E which is independent of Delta W. This applies in particular to deterministic E (which is trivially independent), so E can be fixed throughout training. Such a scheme may not achieve the full variance reduction, but it is at least unbiased. Note that we do not use this trick in our experiments.\n\n-> Q: Why is it close to 0 in practice, intuitive explanation of beta term?\n\nBeta is the estimation variance when E is marginalized out. We’d expect this term to be much smaller than the full variance because it’s marginalizing over a symmetric perturbation distribution, so the perturbations in opposite directions should cancel. The finding that it was so close to zero was a pleasant surprise.\n\nWe also thank the reviewer for the suggestions for improvement. We will revise the final version to take them into account.\n\n\n[1] Goyal, Priya, et al. \"Accurate, large minibatch SGD: Training ImageNet in 1 hour.\" arXiv preprint arXiv:1706.02677 (2017)\n[2] Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent\nneural networks. In Advances in Neural Information Processing Systems (NIPS), pp. 1019–1027, (2016).\n[3] Merity, Stephen, Keskar, Nitish S., and Socher, Richard. \"Regularizing and optimizing LSTM language models.\" arXiv preprint arXiv:1708.02182 (2017).",
"Thank you for your careful and insightful feedback.\n\n--> Q: Will there be any values of alpha, beta and gamma where eq(8) and eq(9) are equivalent? Will alpha and gamma depend on the depth of the network? Can we say anything about which networks are better?\n\nMathematically, eqns. (8) and (9) are equivalent if alpha = 0, and they are nearly identical if beta or gamma dominates. However, we did not observe any examples of either case in any of our experiments. (In fact, beta was indistinguishable from 0 in all of our experiments.) Based on Figure 1, the values seem fairly consistent between very different architectures.\n\n--> Q: FlipES doesn’t outperform NaiveES in Figure 2.\n\nHere, NaiveES refers to fully independent perturbations, rather than a single shared perturbation. Hence, it is an upper bound on how well FlipES should perform, and indeed FlipES achieves this with a much faster wall-clock time (see Fig. 5 in the appendix, cpuES corresponds to noFlip). For clarity, we will rename NaiveES to be IdealES in the final version.\n\n--> Q: Shared perturbation with small batch should be similar to large batch with flipout?\n\nThe reason to train with large batches is to take advantage of parallelism. (If we only cared about the number of arithmetic operations, we’d all use batch size 1.) The size of this benefit depends on the hardware, and hardware trends (more GPU cores, TPUs) strongly favor increasing batch sizes. Currently, one may sometimes be able to use small batches to compensate for the inefficiency of shared perturbations, but this is a band-aid which won’t remain competitive much longer.\n\n--> Q: Can we use non-uniform E1, E2 and does the computational overhead of sampling depend on the network depth?\n\nYes, Proposition 1 certainly allows for non-uniform E1 and E2, although the advantage of this is unclear. In principle, sampling E1 and E2 ought not to be very expensive compared to the matrix multiplications. However, the overhead can be significant if the framework implements it inefficiently; in this case, one can use the trick in Footnote 1.\n\n--> Q: Will the proposed strategy work on standard vision problems including ImageNet and CIFAR?\n\nOur experiments include CIFAR-10, and we see no reason why flipout shouldn’t work on ImageNet. Weight perturbations are not currently widely used in vision tasks, but if that changes, flipout ought to be directly applicable. Our experiments focus on Bayesian neural nets and ES, which inherently require weight perturbations.\n\nAdditionally, it was shown that DropConnect (which is a special case of weight perturbation, as we show in Sec. 2.1) regularizes LSTM-based word language models and achieves SOTA on several tasks [1]. Flipout can be directly applied to it, and we show in Appendix E.4 that flipout reduces the stochastic gradient variance compared to [1].\n\n--> Q: Small batch sizes give better performance, what does Fig. 1 imply?\n\nWe’re not sure what you mean by this. Due to the variance reduction effects of large batches, one typically uses as large a batch as will fit on the GPU, and sometimes resorts to distributed training in order to use even larger batches. (A batch size of 1 is optimal if you only count the number of iterations, but this isn’t a realistic model, even on a single CPU.)\n\n[1] Merity, Stephen, Keskar, Nitish S., and Socher, Richard. \"Regularizing and optimizing LSTM language models.\" arXiv preprint arXiv:1708.02182 (2017).",
"I thank the authors for their response. I am disappointed that their revised paper does not provide any further explanation of why reducing the variance of SGD matters. The explanation from the authors in their response is that:\n* \"it is well established\": citing only one paper is not convincing, and even if it were so clearly well-established, your duty to your readers is to make the article's motivation as self-contained as possible.\n* \"that's why we use minibatches larger than 1\" : there's a world of diminishing returns between \"larger than 1\", which is indeed extreme, and \"1000+\"\n* \"it is an orthogonal problem\": I beg to differ. Knowing the setup in which your method can help is very much aligned with developing the method.\n\nAs such, after revision, it is with regret that I maintain my rating of \"6: Marginally above acceptance threshold\"."
]
} | {
"paperhash": [
"reddi|on_the_convergence_of_adam_and_beyond",
"merity|regularizing_and_optimizing_lstm_language_models",
"fortunato|noisy_networks_for_exploration",
"plappert|parameter_space_noise_for_exploration",
"louizos|bayesian_compression_for_deep_learning",
"miller|reducing_reparameterization_gradient_variance",
"jouppi|in-datacenter_performance_analysis_of_a_tensor_processing_unit",
"salimans|evolution_strategies_as_a_scalable_alternative_to_reinforcement_learning",
"ha|hypernetworks",
"krueger|zoneout:_regularizing_rnns_by_randomly_preserving_hidden_activations",
"cooijmans|recurrent_batch_normalization",
"semeniuta|recurrent_dropout_without_memory_loss",
"gal|a_theoretically_grounded_application_of_dropout_in_recurrent_neural_networks",
"kingma|variational_dropout_and_the_local_reparameterization_trick",
"blundell|weight_uncertainty_in_neural_networks",
"ioffe|batch_normalization:_accelerating_deep_network_training_by_reducing_internal_covariate_shift",
"zaremba|recurrent_neural_network_regularization",
"simonyan|very_deep_convolutional_networks_for_large-scale_image_recognition",
"le|fastfood:_approximate_kernel_expansions_in_loglinear_time",
"mnih|neural_variational_inference_and_learning_in_belief_networks",
"ranganath|black_box_variational_inference",
"wan|regularization_of_neural_networks_using_dropconnect",
"graves|practical_variational_inference_for_neural_networks",
"schmidhuber|training_recurrent_networks_by_evolino",
"hinton|keeping_the_neural_networks_simple_by_minimizing_the_description_length_of_the_weights",
"marcus|building_a_large_annotated_corpus_of_english:_the_penn_treebank",
"lopez|auto-encoding_variational_bayes",
"srivastava|dropout:_a_simple_way_to_prevent_neural_networks_from_overfitting",
"krizhevsky|learning_multiple_layers_of_features_from_tiny_images",
"lecun|gradient-based_learning_applied_to_document_recognition"
],
"title": [
"On the Convergence of Adam and Beyond",
"Regularizing and Optimizing LSTM Language Models",
"Noisy Networks for Exploration",
"Parameter Space Noise for Exploration",
"Bayesian Compression for Deep Learning",
"Reducing Reparameterization Gradient Variance",
"In-datacenter performance analysis of a tensor processing unit",
"Evolution Strategies as a Scalable Alternative to Reinforcement Learning",
"HyperNetworks",
"Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations",
"Recurrent Batch Normalization",
"Recurrent Dropout without Memory Loss",
"A Theoretically Grounded Application of Dropout in Recurrent Neural Networks",
"Variational Dropout and the Local Reparameterization Trick",
"Weight Uncertainty in Neural Networks",
"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"Recurrent Neural Network Regularization",
"Very Deep Convolutional Networks for Large-Scale Image Recognition",
"Fastfood: Approximate Kernel Expansions in Loglinear Time",
"Neural Variational Inference and Learning in Belief Networks",
"Black Box Variational Inference",
"Regularization of Neural Networks using DropConnect",
"Practical Variational Inference for Neural Networks",
"Training Recurrent Networks by Evolino",
"Keeping the neural networks simple by minimizing the description length of the weights",
"Building a Large Annotated Corpus of English: The Penn Treebank",
"AUTO-ENCODING VARIATIONAL BAYES",
"Dropout: a simple way to prevent neural networks from overfitting",
"Learning Multiple Layers of Features from Tiny Images",
"Gradient-based learning applied to document recognition"
],
"abstract": [
"Several recently proposed stochastic optimization methods that have been successfully used in training deep networks such as RMSProp, Adam, Adadelta, Nadam are based on using gradient updates scaled by square roots of exponential moving averages of squared past gradients. In many applications, e.g. learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings). We show that one cause for such failures is the exponential moving average used in the algorithms. We provide an explicit example of a simple convex optimization setting where Adam does not converge to the optimal solution, and describe the precise problems with the previous analysis of Adam algorithm. Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with `long-term memory' of past gradients, and propose new variants of the Adam algorithm which not only fix the convergence issues but often also lead to improved empirical performance.",
"Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on hidden-to-hidden weights as a form of recurrent regularization. Further, we introduce NT-ASGD, a variant of the averaged stochastic gradient method, wherein the averaging trigger is determined using a non-monotonic condition as opposed to being tuned by the user. Using these and other regularization strategies, we achieve state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.",
"We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent's policy can be used to aid efficient exploration. The parameters of the noise are learned with gradient descent along with the remaining network weights. NoisyNet is straightforward to implement and adds little computational overhead. We find that replacing the conventional exploration heuristics for A3C, DQN and dueling agents (entropy reward and $\\epsilon$-greedy respectively) with NoisyNet yields substantially higher scores for a wide range of Atari games, in some cases advancing the agent from sub to super-human performance.",
"Deep reinforcement learning (RL) methods generally engage in exploratory behavior through noise injection in the action space. An alternative is to add noise directly to the agent's parameters, which can lead to more consistent exploration and a richer set of behaviors. Methods such as evolutionary strategies use parameter perturbations, but discard all temporal structure in the process and require significantly more samples. Combining parameter noise with traditional RL methods allows to combine the best of both worlds. We demonstrate that both off- and on-policy methods benefit from this approach through experimental comparison of DQN, DDPG, and TRPO on high-dimensional discrete action environments as well as continuous control tasks. Our results show that RL with parameter noise learns more efficiently than traditional RL with action space noise and evolutionary strategies individually.",
"Compression and computational efficiency in deep learning have become a problem of great significance. In this work, we argue that the most principled and effective way to attack this problem is by adopting a Bayesian point of view, where through sparsity inducing priors we prune large parts of the network. We introduce two novelties in this paper: 1) we use hierarchical priors to prune nodes instead of individual weights, and 2) we use the posterior uncertainties to determine the optimal fixed point precision to encode the weights. Both factors significantly contribute to achieving the state of the art in terms of compression rates, while still staying competitive with methods designed to optimize for speed or energy efficiency.",
"Optimization with noisy gradients has become ubiquitous in statistics and machine learning. Reparameterization gradients, or gradient estimates computed via the ``reparameterization trick,'' represent a class of noisy gradients often used in Monte Carlo variational inference (MCVI). However, when these gradient estimators are too noisy, the optimization procedure can be slow or fail to converge. One way to reduce noise is to generate more samples for the gradient estimate, but this can be computationally expensive. Instead, we view the noisy gradient as a random variable, and form an inexpensive approximation of the generating procedure for the gradient sample. This approximation has high correlation with the noisy gradient by construction, making it a useful control variate for variance reduction. We demonstrate our approach on a non-conjugate hierarchical model and a Bayesian neural net where our method attained orders of magnitude (20-2{,}000$\\times$) reduction in gradient variance resulting in faster and more stable optimization.",
"Many architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. This paper evaluates a custom ASIC-called a Tensor Processing Unit (TPU)-deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of our datacenters' NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X–30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X–80X higher. Moreover, using the GPU's GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU.",
"We explore the use of Evolution Strategies (ES), a class of black box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients. Experiments on MuJoCo and Atari show that ES is a viable solution strategy that scales extremely well with the number of CPUs available: By using a novel communication strategy based on common random numbers, our ES implementation only needs to communicate scalars, making it possible to scale to over a thousand parallel workers. This allows us to solve 3D humanoid walking in 10 minutes and obtain competitive results on most Atari games after one hour of training. In addition, we highlight several advantages of ES as a black box optimization technique: it is invariant to action frequency and delayed rewards, tolerant of extremely long horizons, and does not need temporal discounting or value function approximation.",
"This work explores hypernetworks: an approach of using one network, also known as a hypernetwork, to generate the weights for another network. We apply hypernetworks to generate adaptive weights for recurrent networks. In this case, hypernetworks can be viewed as a relaxed form of weight-sharing across layers. In our implementation, hypernetworks are are trained jointly with the main network in an end-to-end fashion. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks.",
"We propose zoneout, a novel method for regularizing RNNs. At each timestep, zoneout stochastically forces some hidden units to maintain their previous values. Like dropout, zoneout uses random noise to train a pseudo-ensemble, improving generalization. But by preserving instead of dropping hidden units, gradient information and state information are more readily propagated through time, as in feedforward stochastic depth networks. We perform an empirical investigation of various RNN regularizers, and find that zoneout gives significant performance improvements across tasks. We achieve competitive results with relatively simple models in character- and word-level language modelling on the Penn Treebank and Text8 datasets, and combining with recurrent batch normalization yields state-of-the-art results on permuted sequential MNIST.",
"We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.",
"This paper presents a novel approach to recurrent neural network (RNN) regularization. Differently from the widely adopted dropout method, which is applied to forward connections of feedforward architectures or RNNs, we propose to drop neurons directly in recurrent connections in a way that does not cause loss of long-term memory. Our approach is as easy to implement and apply as the regular feed-forward dropout and we demonstrate its effectiveness for the most effective modern recurrent network – Long Short-Term Memory network. Our experiments on three NLP benchmarks show consistent improvements even when combined with conventional feed-forward dropout.",
"Recurrent neural networks (RNNs) stand at the forefront of many recent developments in deep learning. Yet a major difficulty with these models is their tendency to overfit, with dropout shown to fail when applied to recurrent layers. Recent results at the intersection of Bayesian modelling and deep learning offer a Bayesian interpretation of common deep learning techniques such as dropout. This grounding of dropout in approximate Bayesian inference suggests an extension of the theoretical results, offering insights into the use of dropout with RNN models. We apply this new variational inference based dropout technique in LSTM and GRU models, assessing it on language modelling and sentiment analysis tasks. The new approach outperforms existing techniques, and to the best of our knowledge improves on the single model state-of-the-art in language modelling with the Penn Treebank (73.4 test perplexity). This extends our arsenal of variational tools in deep learning.",
"We investigate a local reparameterizaton technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization translates uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such parameterizations can be trivially parallelized and have variance that is inversely proportional to the mini-batch size, generally leading to much faster convergence. Additionally, we explore a connection with dropout: Gaussian dropout objectives correspond to SGVB with local reparameterization, a scale-invariant prior and proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models. The method is demonstrated through several experiments.",
"We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning.",
"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.",
"We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"Despite their successes, what makes kernel methods difficult to use in many large scale problems is the fact that storing and computing the decision function is typically expensive, especially at prediction time. In this paper, we overcome this difficulty by proposing Fastfood, an approximation that accelerates such computation significantly. Key to Fastfood is the observation that Hadamard matrices, when combined with diagonal Gaussian matrices, exhibit properties similar to dense Gaussian random matrices. Yet unlike the latter, Hadamard and diagonal matrices are inexpensive to multiply and store. These two matrices can be used in lieu of Gaussian matrices in Random Kitchen Sinks proposed by Rahimi and Recht (2009) and thereby speeding up the computation for a large range of kernel functions. Specifically, Fastfood requires O(n log d) time and O(n) storage to compute n non-linear basis functions in d dimensions, a significant improvement from O(nd) computation and storage, without sacrificing accuracy. \nOur method applies to any translation invariant and any dot-product kernel, such as the popular RBF kernels and polynomial kernels. We prove that the approximation is unbiased and has low variance. Experiments show that we achieve similar accuracy to full kernel expansions and Random Kitchen Sinks while being 100x faster and using 1000x less memory. These improvements, especially in terms of memory usage, make kernel methods more practical for applications that have large training sets and/or require real-time prediction.",
"Highly expressive directed latent variable models, such as sigmoid belief networks, are difficult to train on large datasets because exact inference in them is intractable and none of the approximate inference methods that have been applied to them scale well. We propose a fast non-iterative approximate inference method that uses a feedforward network to implement efficient exact sampling from the variational posterior. The model and this inference network are trained jointly by maximizing a variational lower bound on the log-likelihood. Although the naive estimator of the inference network gradient is too high-variance to be useful, we make it practical by applying several straightforward model-independent variance reduction techniques. Applying our approach to training sigmoid belief networks and deep autoregressive networks, we show that it outperforms the wake-sleep algorithm on MNIST and achieves state-of-the-art results on the Reuters RCV1 document dataset.",
"Variational inference has become a widely used method to approximate posteriors in complex latent variables models. However, deriving a variational inference algorithm generally requires significant model-specific analysis, and these efforts can hinder and deter us from quickly developing and exploring a variety of models for a problem at hand. In this paper, we present a \"black box\" variational inference algorithm, one that can be quickly applied to many models with little additional derivation. Our method is based on a stochastic optimization of the variational objective where the noisy gradient is computed from Monte Carlo samples from the variational distribution. We develop a number of methods to reduce the variance of the gradient, always maintaining the criterion that we want to avoid difficult model-based derivations. We evaluate our method against the corresponding black box sampling based methods. We find that our method reaches better predictive likelihoods much faster than sampling methods. Finally, we demonstrate that Black Box Variational Inference lets us easily explore a wide space of models by quickly constructing and evaluating several models of longitudinal healthcare data.",
"We introduce DropConnect, a generalization of Dropout (Hinton et al., 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models.",
"Variational methods have been previously explored as a tractable approximation to Bayesian inference for neural networks. However the approaches proposed so far have only been applicable to a few simple network architectures. This paper introduces an easy-to-implement stochastic variational method (or equivalently, minimum description length loss function) that can be applied to most neural networks. Along the way it revisits several common regularisers from a variational perspective. It also provides a simple pruning heuristic that can both drastically reduce the number of network weights and lead to improved generalisation. Experimental results are provided for a hierarchical multidimensional recurrent neural network applied to the TIMIT speech corpus.",
"In recent years, gradient-based LSTM recurrent neural networks (RNNs) solved many previously RNN-unlearnable tasks. Sometimes, however, gradient information is of little use for training RNNs, due to numerous local minima. For such cases, we present a novel method: EVOlution of systems with LINear Outputs (Evolino). Evolino evolves weights to the nonlinear, hidden nodes of RNNs while computing optimal linear mappings from hidden state to output, using methods such as pseudo-inverse-based linear regression. If we instead use quadratic programming to maximize the margin, we obtain the first evolutionary recurrent support vector machines. We show that Evolino-based LSTM can solve tasks that Echo State nets (Jaeger, 2004a) cannot and achieves higher accuracy in certain continuous function generation tasks than conventional gradient descent RNNs, including gradient-based LSTM.",
"Supervised neural networks generalize well if there is much less information in the weights than there is in the output vectors of the training cases. So during learning, it is important to keep the weights simple by penalizing the amount of information they contain. The amount of information in a weight can be controlled by adding Gaussian noise and the noise level can be adapted during learning to optimize the trade-o(cid:11) between the expected squared error of the network and the amount of information in the weights. We describe a method of computing the derivatives of the expected squared error and of the amount of information in the noisy weights in a network that contains a layer of non-linear hidden units. Provided the output units are linear, the exact derivatives can be computed e(cid:14)ciently without time-consuming Monte Carlo simulations. The idea of minimizing the amount of information that is required to communicate the weights of a neural network leads to a number of interesting schemes for encoding the weights.",
"Abstract : As a result of this grant, the researchers have now published oil CDROM a corpus of over 4 million words of running text annotated with part-of- speech (POS) tags, with over 3 million words of that material assigned skeletal grammatical structure. This material now includes a fully hand-parsed version of the classic Brown corpus. About one half of the papers at the ACL Workshop on Using Large Text Corpora this past summer were based on the materials generated by this grant.",
"To make decisions based on a model fit by Auto-Encoding Variational Bayes (AEVB), practitioners typically use importance sampling to estimate a functional of the posterior distribution. The variational distribution found by AEVB serves as the proposal distribution for importance sampling. However, this proposal distribution may give unreliable (high variance) importance sampling estimates, thus leading to poor decisions. We explore how changing the objective function for learning the variational distribution, while continuing to learn the generative model based on the ELBO, affects the quality of downstream decisions. For a particular model, we characterize the error of importance sampling as a function of posterior variance and show that proposal distributions learned with evidence upper bounds are better. Motivated by these theoretical results, we propose a novel variant of the VAE. In addition to experimenting with MNIST, we present a full-fledged application of the proposed method to single-cell RNA sequencing. In this challenging instance of multiple hypothesis testing, the proposed method surpasses the current state of the art.",
"Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.",
"Groups at MIT and NYU have collected a dataset of millions of tiny colour images from the web. It is, in principle, an excellent dataset for unsupervised training of deep generative models, but previous researchers who have tried this have found it dicult to learn a good set of lters from the images. We show how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex. Using a novel parallelization algorithm to distribute the work among multiple machines connected on a network, we show how training such a model can be done in reasonable time. A second problematic aspect of the tiny images dataset is that there are no reliable class labels which makes it hard to use for object recognition experiments. We created two sets of reliable labels. The CIFAR-10 set has 6000 examples of each of 10 classes and the CIFAR-100 set has 600 examples of each of 100 non-overlapping classes. Using these labels, we show that object recognition is signicantly improved by pre-training a layer of features on a large set of unlabeled tiny images.",
"Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day."
],
"authors": [
{
"name": [
"Sashank J. Reddi",
"Satyen Kale",
"Surinder Kumar"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Stephen Merity",
"N. Keskar",
"R. Socher"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Meire Fortunato",
"M. G. Azar",
"Bilal Piot",
"Jacob Menick",
"Ian Osband",
"Alex Graves",
"Vlad Mnih",
"R. Munos",
"D. Hassabis",
"O. Pietquin",
"C. Blundell",
"S. Legg"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Matthias Plappert",
"Rein Houthooft",
"Prafulla Dhariwal",
"Szymon Sidor",
"Richard Y. Chen",
"Xi Chen",
"T. Asfour",
"P. Abbeel",
"Marcin Andrychowicz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Christos Louizos",
"Karen Ullrich",
"M. Welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Andrew C. Miller",
"N. Foti",
"A. D'Amour",
"Ryan P. Adams"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"N. Jouppi",
"C. Young",
"Nishant Patil",
"David Patterson",
"Gaurav Agrawal",
"Raminder Bajwa",
"Sarah Bates",
"Suresh Bhatia",
"Nan Boden",
"Al Borchers",
"Rick Boyle",
"Pierre-luc Cantin",
"Clifford Chao",
"Chris Clark",
"Jeremy Coriell",
"Mike Daley",
"Matt Dau",
"Jeffrey Dean",
"Ben Gelb",
"Taraneh Ghaemmaghami",
"Rajendra Gottipati",
"William Gulland",
"Robert Hagmann",
"C. Richard Ho",
"Doug Hogberg",
"John Hu",
"R. Hundt",
"Dan Hurt",
"Julian Ibarz",
"A. Jaffey",
"Alek Jaworski",
"Alexander Kaplan",
"Harshit Khaitan",
"Daniel Killebrew",
"Andy Koch",
"Naveen Kumar",
"Steve Lacy",
"James Laudon",
"James Law",
"Diemthu Le",
"Chris Leary",
"Zhuyuan Liu",
"Kyle Lucke",
"Alan Lundin",
"Gordon MacKean",
"Adriana Maggiore",
"Maire Mahony",
"Kieran Miller",
"R. Nagarajan",
"Ravi Narayanaswami",
"Ray Ni",
"Kathy Nix",
"Thomas Norrie",
"Mark Omernick",
"Narayana Penukonda",
"Andy Phelps",
"Jonathan Ross",
"Matt Ross",
"Amir Salek",
"Emad Samadiani",
"Chris Severn",
"Gregory Sizikov",
"Matthew Snelham",
"Jed Souter",
"Dan Steinberg",
"Andy Swing",
"Mercedes Tan",
"Gregory Thorson",
"Bo Tian",
"Horia Toma",
"Erick Tuttle",
"Vijay Vasudevan",
"Richard Walter",
"Walter Wang",
"Eric Wilcox",
"Doe Hyun Yoon"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Tim Salimans",
"Jonathan Ho",
"Xi Chen",
"I. Sutskever"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"David Ha",
"Andrew M. Dai",
"Quoc V. Le"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"David Krueger",
"Tegan Maharaj",
"J'anos Kram'ar",
"M. Pezeshki",
"Nicolas Ballas",
"Nan Rosemary Ke",
"Anirudh Goyal",
"Yoshua Bengio",
"H. Larochelle",
"Aaron C. Courville",
"C. Pal"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Tim Cooijmans",
"Nicolas Ballas",
"César Laurent",
"Aaron C. Courville"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Stanislau Semeniuta",
"Aliaksei Severyn",
"E. Barth"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Y. Gal",
"Zoubin Ghahramani"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Diederik P. Kingma",
"Tim Salimans",
"M. Welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"C. Blundell",
"Julien Cornebise",
"K. Kavukcuoglu",
"D. Wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sergey Ioffe",
"Christian Szegedy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Wojciech Zaremba",
"I. Sutskever",
"O. Vinyals"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"K. Simonyan",
"Andrew Zisserman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Quoc V. Le",
"Tamás Sarlós",
"Alex Smola"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Mnih",
"Karol Gregor"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Ranganath",
"S. Gerrish",
"D. Blei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Li Wan",
"Matthew D. Zeiler",
"Sixin Zhang",
"Yann LeCun",
"R. Fergus"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Alex Graves"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Schmidhuber",
"D. Wierstra",
"M. Gagliolo",
"Faustino J. Gomez"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Geoffrey E. Hinton",
"D. Camp"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Mitchell P. Marcus",
"Beatrice Santorini",
"Mary Ann Marcinkiewicz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Romain Lopez",
"Pierre Boyeau",
"N. Yosef",
"Michael I. Jordan",
"J. Regier"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nitish Srivastava",
"Geoffrey E. Hinton",
"A. Krizhevsky",
"I. Sutskever",
"R. Salakhutdinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Krizhevsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yann LeCun",
"L. Bottou",
"Yoshua Bengio",
"P. Haffner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1904.09237",
"1708.02182",
"1706.10295",
"1706.01905",
"1705.08665",
"1705.07880",
"1704.04760",
"1703.03864",
"1609.09106",
"1606.01305",
"1603.09025",
"1603.05118",
"1512.05287",
"1506.02557",
"1505.05424",
"1502.03167",
"1409.2329",
"1409.1556",
"1408.3060",
"1402.0030",
"1401.0118",
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"3455897",
"212756",
"5176587",
"2971655",
"9328854",
"39723350",
"4202768",
"11410889",
"208981547",
"12200521",
"1107124",
"1707814",
"15953218",
"46343823",
"1806222",
"5808102",
"17719760",
"14124313",
"15634106",
"1981188",
"1580089",
"2936324",
"14885866",
"11745761",
"9346534",
"252796",
"211146177",
"6844431",
"18268744",
"14542261"
],
"intents": [
[
"methodology"
],
[
"methodology"
],
[
"background",
"methodology"
],
[
"background",
"methodology"
],
[
"background"
],
[
"background"
],
[
"methodology"
],
[
"background",
"methodology"
],
[
"result"
],
[
"background",
"methodology"
],
[
"result"
],
[
"methodology"
],
[
"methodology"
],
[
"background",
"methodology"
],
[
"background",
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[
"background"
],
[
"background"
],
[
"methodology"
],
[
"background",
"methodology"
],
[
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"background",
"methodology"
],
[
"methodology"
],
[],
[
"background",
"methodology"
]
],
"isInfluential": [
true,
true,
false,
false,
false,
false,
false,
true,
false,
true,
false,
true,
true,
true,
true,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false
]
} | null | 84 | 3.488095 | 0.62963 | 0.666667 | null | null | null | null | null | rJNpifWAb |
zhang|bayesian_time_series_forecasting_with_change_point_and_anomaly_detection|ICLR_cc_2018_Conference | Bayesian Time Series Forecasting with Change Point and Anomaly Detection | Time series forecasting plays a crucial role in marketing, finance and many other quantitative fields. A large amount of methodologies has been developed on this topic, including ARIMA, Holt–Winters, etc. However, their performance is easily undermined by the existence of change points and anomaly points, two structures commonly observed in real data, but rarely considered in the aforementioned methods. In this paper, we propose a novel state space time series model, with the capability to capture the structure of change points and anomaly points, as well as trend and seasonality. To infer all the hidden variables, we develop a Bayesian framework, which is able to obtain distributions and forecasting intervals for time series forecasting, with provable theoretical properties. For implementation, an iterative algorithm with Markov chain Monte Carlo (MCMC), Kalman filter and Kalman smoothing is proposed. In both synthetic data and real data applications, our methodology yields a better performance in time series forecasting compared with existing methods, along with more accurate change point detection and anomaly detection. | {
"name": [],
"affiliation": []
} | We propose a novel state space time series model with the capability to capture the structure of change points and anomaly points, so that it has a better forecasting performance when there exist change points and anomalies in the time series. | [
"Time Series Forecasting",
"Change Point Detection",
"Anomaly Detection",
"State Space Model",
"Bayesian"
] | null | 2018-02-15 22:29:30 | 31 | null | null | null | null | null | null | null | null | false | Thank you for submitting you paper to ICLR. The consensus from the reviewers is that this is not quite ready for publication. There is also concern about whether ICLR, with its focus on representational learning, is the right venue for this work.
One of the reviewers initially submitted an incorrect review, but this mistake has now been rectified. Apologies that this was not done sooner in order to allow you to address their concerns. | {
"review_id": [
"HJhn9OtxG",
"HJL1pxqeG",
"HJ0Hc82gM"
],
"review": [
{
"title": "title: Solid methodology; its practical performance requires further investigation",
"paper_summary": null,
"main_review": "main_review: The paper introduces a Bayesian model for timeseries with anomaly and change points besides regular trend and seasonality. It develops algorithms for inference and forecasting. The performance is evaluated and compared against state-of-the-art methods on three data sets: 1) synthetic data obtained from the generative Bayesian model itself; 2) well-log data; 3) internet traffic data.\n\nOn the methodological side, this appears to be a solid and significant contribution, although I am not sure how well it is aligned with the scope of ICLR. The introduced model is elegant; the algorithms for inference are non-trivial.\n\nFrom a practical perspective, one cannot expect this contribution to be ground-breaking, since there has been more than 40 years of work on time series forecasting, change point and anomaly detection. In some situations the methodology proposed here will work better than previous approaches (particularly in the situation where the data comes from the Bayesian model itself - in that case, there clearly is no better approach), in other cases (which the paper might have put less emphasis on), previous approaches will work better. To position this kind of work, I think it is important the authors discuss the limitations of their approach. Some guidelines on when or when not to use it would be valuable. Clearly, these days one cannot introduce methodology in this area and expect it to outperform existing methods under all circumstances (and hence practitioners to always choose it over any other existing method).\n\nWhat is surprising is that relatively simple approaches like ETS or STL work pretty much equally well (in some cases even better in terms of MSE) than the proposed approach, while more recent approaches - like BSTS - dramatically fail. It would be good if the authors could comment on why this might be the case.\n\nSummary:\n+ Methodology appears to be a significant, solid contribution.\n- Experiments are not conclusive as to when or when not to choose this approach over existing methods\n- writing needs to be improved (large number of grammatical errors and typos, e.g. 'Mehtods')",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Well-written paper, but lack of novelty",
"paper_summary": null,
"main_review": "main_review: Minor comments:\n- page 3. “The observation equation and transition equations together (i.e., Equation (1,2,3)) together define “ - one “together” should be removed\n- page 4. “From Figure 2, the joint distribution (i.e., the likelihood function ” - there should be additional bracket\n- page 7. “We can further integral out αn “ -> integrate out\n\nMajor comments:\nThe paper is well-written. The paper considers structural time-series model with seasonal component and stochastic trend, which allow for change-points and structural breaks.\n\nSuch type of parametric models are widely considered in econometric literature, see e.g.\n[1] Jalles, João Tovar, Structural Time Series Models and the Kalman Filter: A Concise Review (June 19, 2009). FEUNL Working Paper No. 541. Available at SSRN: https://ssrn.com/abstract=1496864 or http://dx.doi.org/10.2139/ssrn.1496864 \n[2] Jacques J. F. Commandeur, Siem Jan Koopman, Marius Ooms. Statistical Software for State Space Methods // May 2011, Volume 41, Issue 1.\n[3] Scott, Steven L. and Varian, Hal R., Predicting the Present with Bayesian Structural Time Series (June 28, 2013). Available at SSRN: https://ssrn.com/abstract=2304426 or http://dx.doi.org/10.2139/ssrn.2304426 \n[4] Phillip G. Gould, Anne B. Koehler, J. Keith Ord, Ralph D. Snyder, Rob J. Hyndman, Farshid Vahid-Araghi, Forecasting time series with multiple seasonal patterns, In European Journal of Operational Research, Volume 191, Issue 1, 2008, Pages 207-222, ISSN 0377-2217, https://doi.org/10.1016/j.ejor.2007.08.024.\n[5] A.C. Harvey, S. Peters. Estimation Procedures for structural time series models // Journal of Forecasting, Vol. 9, 89-108, 1990\n[6] A. Harvey, S.J. Koopman, J. Penzer. Messy Time Series: A Unified approach // Advances in Econometrics, Vol. 13, pp. 103-143.\n\nThey also use Kalman filter and MCMC-based approaches to sample posterior to estimate hidden components.\n\nThere are also non-parametric approaches to extraction of components from quasi-periodic time-series, see e.g.\n[7] Artemov A., Burnaev E. Detecting Performance Degradation of Software-Intensive Systems in the Presence of Trends and Long-Range Dependence // 16th International Conference on Data Mining Workshops (ICDMW), IEEE Conference Publications, pp. 29 - 36, 2016. DOI: 10.1109/ICDMW.2016.0013\n[8] Alexey Artemov, Evgeny Burnaev and Andrey Lokot. Nonparametric Decomposition of Quasi-periodic Time Series for Change-point Detection // Proc. SPIE 9875, Eighth International Conference on Machine Vision, 987520 (December 8, 2015); 5 P. doi:10.1117/12.2228370;http://dx.doi.org/10.1117/12.2228370\n\nIn some of these papers models of structural brakes and change-points are also considered, see e.g. \n- page 118 in [6]\n- papers [7, 8]\n\nThere were also Bayesian approaches for change-point detection, which are similar to the model of change-point, proposed in the considered paper, e.g.\n[9] Ryan Prescott Adams, David J.C. MacKay. Bayesian Online Changepoint Detection // https://arxiv.org/abs/0710.3742\n[10] Ryan Turner, Yunus Saatçi, and Carl Edward Rasmussen. Adaptive sequential Bayesian change point detection. In Zaïd Harchaoui, editor, NIPS Workshop on Temporal Segmentation, Whistler, BC, Canada, December 2009.\n\nThus,\n- the paper does not provide comparison with relevant econometric literature on parametric structural time-series models,\n- the paper does not provide comparison with relevant advanced change-point detection methods e.g. [7,8,9,10]. 
The comparison is provided only with very simple methods,\n- the proposed model itself looks very similar to what can be found across econometric literature,\n- the datasets used for comparison are very scarce. There are datasets for anomaly detection in time-series data, which should be used for extensive comparison, e.g. Numenta Anomaly Detection Benchmark.\n\nTherefore, although the paper is well-written, \n- it lacks novelty,\n- its topic does not perfectly fit topics of interest for ICLR,\nSo, I do not recommend this paper to be published.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Review for “BAYESIAN TIME SERIES FORECASTING WITH CHANGE POINT AND ANOMALY DETECTION”",
"paper_summary": null,
"main_review": "main_review: \n\nSummary:\n\nThis paper develops a state space time series forecasting model in the Bayesian framework, jointly detects anomaly and change points. Integrated with an iterative MCMC method, the authors develop an efficient algorithm and use both synthetic and real data set to demonstrate that their algorithms outperform many other state-of-art algorithms. \n\nMajor comments:\nIn the beginning of section 3, the authors assume that all the terms that characterize the change-points and anomaly points are normally distributed with mean zero and different variance. However, in classic formulation for change-point or anomaly detection, usually there is also a mean shift other than the variance change. For example, we might assume $r_t \\sim N(\\theta, \\sigma_r^2)$ for some $\\theta>0$ to demonstrate the positive mean shift. I believe that this kind of mean shift is more efficient to model the structure of change-point. \n\nMy main concern is with the novelty. The work does not seem to be very novel.\n\nMinor comments:\n\n1. In the end of the page 2, the last panel is the residual, not the spikes. \n\n2. In page 12, the caption of figure 5 should be (left) and (right), not (top) and (bottom).",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.3333333432674408,
0.4444444477558136
],
"confidence": [
0.5,
1,
1
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Further Illustration of Methodology",
"\"Related Work\" Section Added and Contributions Highlighted",
"WRONG review (this is possibly a review for another paper, not for ours)"
],
"comment": [
"Dear Reviewer,\n\nThank you for reviewing our paper and thank you for appreciating our work. We have made changes following your suggestions. Please see below for our response point by point. Thank you.\n\n\nThe paper introduces a Bayesian model for timeseries with anomaly and change points besides regular trend and seasonality. It develops algorithms for inference and forecasting. The performance is evaluated and compared against state-of-the-art methods on three data sets: 1) synthetic data obtained from the generative Bayesian model itself; 2) well-log data; 3) internet traffic data.\n\n>> Thanks for the appreciation of our work. \n\nFrom a practical perspective, one cannot expect this contribution to be ground-breaking, since there has been more than 40 years of work on time series forecasting, change point and anomaly detection. In some situations the methodology proposed here will work better than previous approaches (particularly in the situation where the data comes from the Bayesian model itself - in that case, there clearly is no better approach), in other cases (which the paper might have put less emphasis on), previous approaches will work better. To position this kind of work, I think it is important the authors discuss the limitations of their approach. \n\n>> As most (if not all) of the time series works, our method cannot work in every case. For example, when the time series does not have clear decomposition structure as modeled in Eqs.(1-3), the model may not correctly recover the hidden components and correspondingly perform forecasting.\n\nSome guidelines on when or when not to use it would be valuable. Clearly, these days one cannot introduce methodology in this area and expect it to outperform existing methods under all circumstances (and hence practitioners to always choose it over any other existing method).\n\nWhat is surprising is that relatively simple approaches like ETS or STL work pretty much equally well (in some cases even better in terms of MSE) than the proposed approach, while more recent approaches - like BSTS - dramatically fail. It would be good if the authors could comment on why this might be the case.\n\n>> BSTS fails in some cases due to the mismatch between model assumptions and actual data distribution and generation process. Usually more complicated a model is, more likely it will fail when the data structure does not satisfy its underlying assumptions. In those cases, simple approaches may achieve better performance, which is not surprising. Nevertheless, our proposed method obtains the best result.\n\nSummary:\n+ Methodology appears to be a significant, solid contribution.\n- Experiments are not conclusive as to when or when not to choose this approach over existing methods\n- writing needs to be improved (large number of grammatical errors and typos, e.g. 'Methods')\n\n>> We will incorporate the discussions regarding model strength, application conditions, and the limitations in final version. \n>> We already fix the typos in updated version. \n",
"Dear Reviewer,\n\nThank you for your comments. We have addressed them accordingly. Please see below for our response point by point.\n\n\n\nMinor comments:\n- page 3. “The observation equation and transition equations together (i.e., Equation (1,2,3)) together define “ - one “together” should be removed\n- page 4. “From Figure 2, the joint distribution (i.e., the likelihood function ” - there should be additional bracket\n- page 7. “We can further integral out αn “ -> integrate out\n\n>> Thanks, we corrected the typos. \n\nThus,\n- the paper does not provide comparison with relevant econometric literature on parametric structural time-series models,\n\n>> Econometric literature of refs [1, 2, 3, 4, 5] do not properly consider and process the changing point and anomalies although they perform time-series forecasting.\n\n- the paper does not provide comparison with relevant advanced change-point detection methods e.g. [7,8,9,10]. The comparison is provided only with very simple methods,\n\n>> We compared state-of-the-art Bayesian Structural Time Series (BSTS), Prophet R package by Taylor & Letham (2017), , Exponential Smoothing State Space Model (ETS). The results are shown in Tables 2-6. The idea of [9--10] are quite similar to them. \n\n- the proposed model itself looks very similar to what can be found across econometric literature,\n\n>> The econometric literature ignores the proper treatment of changing point and anomalies. Bayesian modeling part is also different regarding estimation of posterior for hidden components given different prior distributions.\n>>We add the “related work section” to illustrate the differences between our work and the existing works.\n\n- the datasets, used for comparison, are very scarce. There are datasets for anomaly detection in time-series data, which should be used for extensive comparison, e.g. Numenta Anomaly Detection Benchmark.\n\n>> The experimental study demonstrates that our method outperforms the other methods as well on other benchmarks for anomaly detection. Our ultimate goal is time series forecasting conditional on structure changes. It might not be that meaningful to compare in Numenta Anomaly Detection Benchmark since Anomaly Detection is kind of secondary endpoint.\n\nTherefore, also the paper is well-written, \n- it lacks novelty,\n- its topic does not perfectly fit topics of interest for ICLR,\n\n>> There are three goals of our work: (1) time series forecasting; (2) change point detection; (3) anomalies detection; these three goals are jointly put in one unified framework by modeling using state-space bayesian modeling. The change point and anomalies are detected for better forecasting giving time series (structure) input. Due to the strong description power of bayesian state-space model, the results of model prediction and abnormal and change points detection are mutually improved. Compared to the existing bayesian modeling, our work is novel by sampling posterior to estimate hidden components given the individual Bernoulli prior of changing point and anomalies.\n>> The paper is related to structure input representation and state-space modeling, which in fact is relevant to ICLR audience. We also highlighted the novelty of the work in “contribution of this work” on Page 2 in the updated version. \n",
"Dear Reviewer,\n\nThank you for your time and effort on reviewing papers. Unfortunately it seems like you uploaded a WRONG review. This is possibly a review for some other paper titled \"Deformation of Bregman divergence and its application\", not for ours."
]
} | {
"paperhash": [
"adams|bayesian_online_changepoint_detection",
"lokot|nonparametric_decomposition_of_quasi-periodic_time_series_for_change-point_detection",
"artemov|detecting_performance_degradation_of_software-intensive_systems_in_the_presence_of_trends_and_long-range_dependence",
"barry|a_bayesian_analysis_for_change_point_problems",
"box|time_series_analysis:_forecasting_and_control",
"brodersen|inferring_causal_impact_using_bayesian_structural_time-series_models",
"chen|joint_estimation_of_model_parameters_and_outlier_effects_in_time_series",
"cleveland|stl:_a_seasonal-trend_decomposition_procedure_based_on_loess",
"cochrane|time_series_for_macroeconomics_and_finance",
"commandeur|statistical_software_for_state_space_methods",
"durbin|time_series_analysis_by_state_space_methods",
"fearnhead|on-line_inference_for_hidden_markov_models_via_particle_filters",
"phillip|forecasting_time_series_with_multiple_seasonal_patterns",
"hangos|analysis_and_control_of_nonlinear_process_systems",
"harvey|estimation_procedures_for_structural_time_series_models",
"harvey|messy_time_series:_a_unified_approach",
"keith|time_series_modelling_of_water_resources_and_environmental_systems",
"charles|forecasting_seasonals_and_trends_by_exponentially_weighted_moving_averages",
"hyndman|forecasting_with_exponential_smoothing:_the_state_space_approach",
"jalles|structural_time_series_models_and_the_kalman_filter:_a_concise_review",
"jk|numerical_bayesian_methods_applied_to_signal_processing",
"killick|changepoint:_an_r_package_for_changepoint_analysis",
"netflix|time_series_anomaly_detection",
"osogami|bidirectional_learning_for_time-series_models_with_hidden_units",
"steven|predicting_the_present_with_bayesian_structural_time_series",
"souhaib|coherent_probabilistic_forecasts_for_hierarchical_time_series",
"aberer|hybrid_neural_networks_for_learning_the_trend_in_time_series",
"turner|adaptive_sequential_bayesian_change_point_detection",
"twitter|anomalydetection:_anomaly_detection_with_r",
"winters|forecasting_sales_by_exponentially_weighted_moving_averages",
"peter|time_series_forecasting_using_a_hybrid_arima_and_neural_network_model"
],
"title": [
"Bayesian online changepoint detection",
"Nonparametric decomposition of quasi-periodic time series for change-point detection",
"Detecting performance degradation of software-intensive systems in the presence of trends and long-range dependence",
"A bayesian analysis for change point problems",
"Time series analysis: forecasting and control",
"Inferring causal impact using bayesian structural time-series models",
"Joint estimation of model parameters and outlier effects in time series",
"Stl: A seasonal-trend decomposition procedure based on loess",
"Time series for macroeconomics and finance",
"Statistical software for state space methods",
"Time series analysis by state space methods",
"On-line inference for hidden markov models via particle filters",
"Forecasting time series with multiple seasonal patterns",
"Analysis and Control of Nonlinear Process Systems",
"Estimation procedures for structural time series models",
"Messy time series: A unified approach",
"Time series modelling of water resources and environmental systems",
"Forecasting seasonals and trends by exponentially weighted moving averages",
"Forecasting with exponential smoothing: the state space approach",
"Structural time series models and the kalman filter: a concise review",
"Numerical bayesian methods applied to signal processing",
"changepoint: An r package for changepoint analysis",
"Time series anomaly detection",
"Bidirectional learning for time-series models with hidden units",
"Predicting the present with bayesian structural time series",
"Coherent probabilistic forecasts for hierarchical time series",
"Hybrid neural networks for learning the trend in time series",
"Adaptive sequential Bayesian change point detection",
"Anomalydetection: Anomaly detection with r",
"Forecasting sales by exponentially weighted moving averages",
"Time series forecasting using a hybrid arima and neural network model"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"ryan p adams",
"david j c mackay"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"andrey lokot",
"alexey artemov",
"evgeny burnaev"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alexey artemov",
"evgeny burnaev"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"daniel barry",
"john a hartigan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" george ep box",
"m gwilym",
"gregory c jenkins",
"greta m reinsel",
" ljung"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"fabian kay h brodersen",
"jim gallusser",
"nicolas koehler",
"steven l remy",
" scott"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chung chen",
"lon-mu liu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"william s robert b cleveland",
"irma cleveland",
" terpenning"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" john h cochrane"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jacques commandeur",
"siem koopman",
"marius ooms"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"james durbin",
"jan siem",
" koopman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"paul fearnhead",
"peter clifford"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"g phillip",
"anne b gould",
"j keith koehler",
"ralph d ord",
"rob j snyder",
"farshid hyndman",
" vahid-araghi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"katalin hangos",
"jzsef bokor",
"g szederknyi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"a c harvey",
"s peters"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"andrew harvey",
"siem jan koopman",
" penzer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"w keith",
"ian hipel",
" mcleod"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"c charles",
" holt"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rob hyndman",
"anne b koehler",
"j keith ord",
"ralph d snyder"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"joao jalles"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"o r jk",
"f wj"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rebecca killick",
"idris eckley"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" netflix",
" rad"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"takayuki osogami",
"hiroshi kajino",
"taro sekiyama"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"l steven",
"hal r scott",
" varian"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ben souhaib",
"james w taieb",
"rob j taylor",
" hyndman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"karl aberer",
"tao lin",
"tian guo"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ryan turner",
"yunus saatci",
"carl edward rasmussen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" twitter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" peter r winters"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"zhang peter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"0710.3742v1",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.444444 | 0.833333 | null | null | null | null | null | rJLTTe-0W |
||
connor|transfer_learning_on_manifolds_via_learned_transport_operators|ICLR_cc_2018_Conference | Transfer Learning on Manifolds via Learned Transport Operators | Within-class variation in a high-dimensional dataset can be modeled as being on a low-dimensional manifold due to the constraints of the physical processes producing that variation (e.g., translation, illumination, etc.). We desire a method for learning a representation of the manifolds induced by identity-preserving transformations that can be used to increase robustness, reduce the training burden, and encourage interpretability in machine learning tasks. In particular, what is needed is a representation of the transformation manifold that can robustly capture the shape of the manifold from the input data, generate new points on the manifold, and extend transformations outside of the training domain without significantly increasing the error. Previous work has proposed algorithms to efficiently learn analytic operators (called transport operators) that define the process of transporting one data point on a manifold to another. The main contribution of this paper is to define two transfer learning methods that use this generative manifold representation to learn natural transformations and incorporate them into new data. The first method uses this representation in a novel randomized approach to transfer learning that employs the learned generative model to map out unseen regions of the data space. These results are shown through demonstrations of transfer learning in a data augmentation task for few-shot image classification. The second method use of transport operators for injecting specific transformations into new data examples which allows for realistic image animation and informed data augmentation. These results are shown on stylized constructions using the classic swiss roll data structure and in demonstrations of transfer learning in a data augmentation task for few-shot image classification. We also propose the use of transport operators for injecting transformations into new data examples which allows for realistic image animation. | {
"name": [],
"affiliation": []
} | Learning transport operators on manifolds forms a valuable representation for doing tasks like transfer learning. | [
"manifold learning",
"transfer learning"
] | null | 2018-02-15 22:29:18 | 37 | null | null | null | null | null | null | null | null | false | Learning identity-preserving transformations from unlabeled data is definitely an important and useful direction. However the paper does not have convincing experiments to establish the effectiveness of the proposed method on real datasets which is a crucial limitation in my view, given that the paper is largely based on an earlier published work by Culpepper and Olshausen (2009). | {
"review_id": [
"rypjqWBlf",
"ryjTQZ9xz",
"HklkGPeeG"
],
"review": [
{
"title": "title: Learning transport operators",
"paper_summary": null,
"main_review": "main_review: This paper propose to learn manifold transport operators via a dictionary learning framework that alternatively optimize a dictionary of transformations and coefficients defining the transformation between random pairs of data points. Experiments on the swiss roll and synthetic rotated images on USPS digits show that the proposed method could learn useful transformations on the data manifold.\n\nHowever, the experiments in the paper is weak. As the paper mentioned, manifold learning algorithms tend to be quite sensitive to the quality of data, usually requiring dense data at each local neighborhood to successfully learn the manifold well. However, this paper, claiming to be learn more rubust representations, lacks solid supporting experiments. The swiss roll is a very simple synthetic dataset. The USPS is also simple, and the manifold learning is performed on synthetic (rotated) USPS digits with only 1 manifold dimension. I would recommend testing the proposed algorithm on more complicated datasets (e.g. Imagenet or even CIFAR images) to see how well it performs in practice, in order to provide stronger empirical supports for the proposed method. At the current state, I don't think it is good for publishing at ICLR.\n\n=========================\nPost-rebuttal comments\n\nThanks for the updates of the paper and added experiments. I think the paper has improved over the previous version and I have updated my score.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting paper on learning transport operators; but somewhat confusingly written with unclear novelty. ",
"paper_summary": null,
"main_review": "main_review: Summary:\n\nThe paper considers the framework of manifold transport operator learning of Culpepper and Olshausen (2009), and interpret it as obtaining a MAP estimate under a probabilistic generative model. Motivated by this interpretation, the authors propose a new similarity metric between data points, which leads to a new manifold embedding method. This also leads the authors to propose a new transfer learning mechanism that can lead to improvements in classification accuracy.\nSome representative simulation results are included to demonstrate the efficacy of the proposed methods.\n\nMain comments:\n\nThis direction is interesting. But unfortunately the paper is confusingly written and several points are never made clear. The conveyed impression is that the proposed methods are mainly incremental additions to the framework of Culpepper and Olshausen.\n\nIt would be far more helpful if the authors would have clearly described the following in more detail:\n- The new manifold embedding algorithm in Section 2 -- a proper explanation of the similarity measure, what the role of the MSE is in this algorithm, how to choose the parameters gamma and zeta etc.\n- Why the authors claim that this method is more robust than other classical manifold learning methods. There certainly seems to be some robustness improvement over Isomap -- but this is a somewhat weak strawman since Isomap is notoriously prone to improper neighborhood selection.\n- Why the transport operator viewpoint is an improvement over other out-of-sample approaches in manifold learning.\n- Why the data augmentation using learned transport operators would be more beneficial than augmentation using other mechanisms (manual rotations, other generative models).\n\netc.\n\n\nOther comments/questions:\n\n- Bit confused about the experiment for Figure 1. Why set gamma = 0? Also, you seem to be fixing the number of dictionary elements to two (suggesting an ell-0 constraint), but also impose an ell-1 constraint. Why both?\n- From what distribution are the random coefficients governing the transport operators drawn (uniform? gaussian?) how to choose the anchor points?\n- The experiment in USPS digits is somewhat confusing. Rotations are easy to generate, so the \"true rotation\" curve is probably the easiest to implement and also the best performing -- so why go through the transport operator training process at all? In any case, I would be careful to not draw too many conclusions from a single experiment on MNIST.\n\n================\n\nPost-rebuttal comments:\n\nThanks for the response. Still not convinced, unfortunately. I would go back to the classification example: it is unclear what the benefits of the transport operator viewpoint is over simply augmenting the dataset using rotations (or \"true rotations\" as you call them), or translations, or some other well-known parametric family. Even for the faces dataset, it seems that the transformations to model \"happiness\" or \"sadness\" are fairly simple to model and one does not need to solve a complicated sparse regression problem to guess the basis elements. Consider fleshing this angle out a bit more in detail with some more compelling evidence (perhaps test on a bigger/more complex dataset?). \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Fun idea, limited experiments, unclear intuition of proposed dynamics",
"paper_summary": null,
"main_review": "main_review: Overview:\nThe paper aim to model non-linear, intrinsically low-dimensional structure, in data by estimating \"transport operators\" that predict how points move along the manifold. This is an old idea, and the stated contribution of the paper is:\n\"The main contribution of this paper is to show that the manifold representation learned in the transport operators is valuable both as a probabilistic model to improve general machine learning tasks as well as for performing transfer learning in classification tasks.\" \nThe paper provide nice illustrative experiments arguing why transport operators may be a useful modeling tool, but does not go beyond illustrative experiments.\nWhile I follow the intuitions behind transport operators I am doubtful if they will generalize beyond very simple manifold structures (see detailed comments below).\n\nQuality:\nThe paper is well-written and fairly easy to follow. In particular, I appreciate that the authors make no attempt to overclaim contributions. From a methodology point-of-view, the paper has limited novelty (transport operators, and learning thereof has been studied elsewhere), but there are some technical insights (likelihood model, use in data augmentation). Since the provided experiments are mostly illustrations, I would argue that the significance of the paper is limited. I'd say that to really convince a broader audience that this old idea is worth revisiting, the work must go beyond illustrations and apply to a real data problem.\n\nDetailed Comments and Questions:\n*) Equation 1 of the paper describe the key dynamics of the applied transport operators. Basically, the paper assume that the underlying data manifold is locally governed by a linear differential equation. This is a very suitable assumption, e.g., for the swiss roll data set, but it is unclear to this reader why it is a suitable assumption beyond such toy data. I would very much appreciate a detailed discussion of when this is a suitable modeling choice, and when it is not. My intuition is that this is mostly a suitable model when the data manifold appears due to simple transformations (e.g. rotations) of data. This is also exactly the type of data considered in the paper.\n*) In Eq. 3, should it be \"expm\" instead of \"exp\" ?\n*) The first two paragraphs of Sec. 2 are background material, whereas paragraph 3 and beyond describe material that is key to the paper. I would recommend introducing a \\subsection (or something like it) to make this more clear.\n*) The idea of working with transformations of data rather than the actual data is the corner-stone of Ulf Grenander's renowned \"Pattern Theory\". A citation to this seminal work would be appropriate.\n*) In the first paragraph of the introduction links are drawn to the neuroscience literature; it would be appropriate to cite a suitable publication.\n\nPros(+) & Cons(-):\n+ Well-written.\n+ Good illustrative experiments.\n- Real-life experiments are lacking.\n- Limited methodology contribution.\n- The assumed dynamics might be too simplistic (at least a discussing of this is missing).\n\nFor the AC:\nThe submitted paper acknowledges several grants (including grant numbers), which can directly be tied to the authors identity. This may be a violation of the double blind review policy. 
I did not use this information to determine the authors' identity, though, so this review is still double blind.\n\nPost-rebuttal comments:\nThe paper has improved with the incorporated revisions, but my main concerns remain. I find the Swiss Roll / rotated-USPS examples to be too contrived as the dynamics are exactly tailored to the linear ODE assumption. These are examples where the model assumptions are perfect. What is unclear is how the model behaves when the linear ODE assumption is not-quite-correct-but-also-not-totally-incorrect, i.e. how the model behaves in real life. I didn't get that from the newly added experiment. So, I'll keep my rating as is. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.3333333432674408,
0.3333333432674408
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [],
"comment": []
} | {
"paperhash": [
"bengio|representation_learning:_a_review_and_new_perspectives",
"lecun|gradient-based_learning_applied_to_document_recognition",
"wang|generative_image_modeling_using_style_and_structure_adversarial_networks"
],
"title": [
"Representation Learning: A Review and New Perspectives",
"Gradient-based learning applied to document recognition",
"Generative Image Modeling using Style and Structure Adversarial Networks"
],
"abstract": [
"",
"",
""
],
"authors": [
{
"name": [
"yoshua bengio",
"aaron courville",
"pascal vincent"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yann lecun",
"léon bottou",
"yoshua bengio",
"patrick haffner",
"yoshua bottou",
"patrick bengio",
" haffner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xiaolong wang",
"abhinav gupta"
],
"affiliation": [
{
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": "{}"
}
]
}
],
"arxiv_id": [
"",
"",
""
],
"s2_corpus_id": [
"",
"",
""
],
"intents": [
null,
null,
null
],
"isInfluential": [
null,
null,
null
]
} | null | 84 | null | 0.37037 | 0.75 | null | null | null | null | null | rJL6pz-CZ |
||
xie|largescale_cloze_test_dataset_designed_by_teachers|ICLR_cc_2018_Conference | Large-scale Cloze Test Dataset Designed by Teachers | Cloze test is widely adopted in language exams to evaluate students' language proficiency. In this paper, we propose the first large-scale human-designed cloze test dataset CLOTH in which the questions were used in middle-school and high-school language exams. With the missing blanks carefully created by teachers and candidate choices purposely designed to be confusing, CLOTH requires a deeper language understanding and a wider attention span than previous automatically generated cloze datasets. We show humans outperform dedicated designed baseline models by a significant margin, even when the model is trained on sufficiently large external data. We investigate the source of the performance gap, trace model deficiencies to some distinct properties of CLOTH, and identify the limited ability of comprehending a long-term context to be the key bottleneck. In addition, we find that human-designed data leads to a larger gap between the model's performance and human performance when compared to automatically generated data. | {
"name": [],
"affiliation": []
} | A cloze test dataset designed by teachers to assess language proficiency | [
"dataset",
"human-designed",
"language understanding"
] | null | 2018-02-15 22:29:34 | 31 | null | null | null | null | null | null | null | null | false | Meta score: 4
The paper presents a manually-constructed cloze-style fill-in-the-missing-word dataset, with baseline language modelling experiments that aim to show that this dataset is difficult for machines relative to human performance. The dataset is interesting, but the fact that the experiments are confined to baseline language models is a significant limitation.
Pros:
- interesting dataset
- clear and well-written
- attempt to move the field forward in an important area
Cons:
- limited experimentation
- language modelling approaches not appropriate baseline
| {
"review_id": [
"SJU2A_Yyf",
"BynNGX9eG",
"BJAUOGclz"
],
"review": [
{
"title": "title: Promising dataset; needs better experiments to analyze",
"paper_summary": null,
"main_review": "main_review: This paper presents a new dataset for cloze style question-answering. The paper starts with a very valid premise that many of the automatically generated cloze datasets for testing reading comprehension suffer from many shortcomings. The paper collects data from a novel source: reading comprehension data for English exams in China. The authors collect data for middle school and high school exams and clean it to obtain passages and corresponding questions and candidate answers for each question.\n\nThe rest of the paper is about analyzing this data and performance of various models on this dataset. \n\n1) The authors divide the questions into various types based on the type of reasoning needed to answer the question, noticeably short-term reasoning and long-term reasoning. \n2) The authors then show that human performance on this dataset is much higher than the performance of LSTM-based and language model-based baselines; this is in contrast to existing cloze style datasets where neural models achieve close to human performance. \n3) The authors hypothesize that this is partially explained by the fact that neural models do not make use of long-distance information. The authors verify their claim by running human eval where they show annotators only 1 sentence near the empty slot and find that the human performance is basically matched by a language model trained on 1 billion words. This part is very cool.\n4) The authors then hypothesize that human-generated data provides more information. They even train an informativeness prediction network to (re-)weight randomly generated examples which can then be used to train a reading comprehension model.\n\nPros of this work:\n1) This work contributes a nice dataset that addresses a real problem faced by automatically generated datasets.\n2) The breakdown of characteristics of questions is quite nice as well.\n3) The paper is clear, well-written, and is easy to read.\n\nCons:\n1) Overall, some of the claims made by the paper are not fully supported by the experiments. E.g., the paper claims that neural approaches are much worse than humans on CLOTH data -- however, they do not use state-of-the-art neural reading comprehension techniques but only a standard LSTM baseline. It might be the case that the best available neural techniques are still much worse than humans on CLOTH data, but that remains to be seen. \n2) Informativeness prediction: The authors claim that the human-generated data provides more information than automatically/randomly generated data by showing that the models trained on the former achieve better performance than the latter on test data generated by humans. The claim here is problematic for two reasons:\n a) The notion of \"informativeness\" is not clearly defined. What does it mean here exactly?\n b) The claim does not seem fully justified by the experiments -- the results could just as well be explained by distributional mismatch without appealing to the amount of information per se. The authors should show comparisons when evaluating on randomly generated data.\n\nOverall, this paper contributes a useful dataset; the analysis can be improved in some places.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: This is an interesting dataset but the baselines are not very compelling.",
"paper_summary": null,
"main_review": "main_review: This paper collects a cloze-style fill-in-the-missing-word dataset constructed manually by English teachers to test English proficiency. Experiments are given which are claimed to show that this dataset is difficult for machines relative to human performance. The dataset seems interesting but I find the empirical evaluations unconvincing. The models used to evaluate machine difficulty are basic language models. The problems are multiple choice with at most four choices per question. This allows multiple choice reading comprehension architectures to be used. A window of words around the blank could be used as the \"question\". A simple reading comprehension baseline is to encode the question (a window around the blank) and use the question vector to compute an attention over the passage. One can then compute a question-specific representation of the passage and score each candidate answer by the inner product of the question-specific sentence representation and the vector representation of the candidate answer. See \"A thorough examination of the CNN/Daily Mail reading comprehension task\" by Chen, Bolton and Manning.\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Not convinced",
"paper_summary": null,
"main_review": "main_review: 1) this paper introduces a new cloze dataset, \"CLOTH\", which is designed by teachers. The authors claim that this cloze dataset is a more challenging dataset since CLOTH requires a deeper language understanding and wider attention span. I think this dataset is useful for demonstrating the robustness of current RC models. However, I still have the following questions which lead me to reject this paper.\n\n2) I have the questions as follows:\ni) The major flaw of this paper is about the baselines in experiments. I don't think the language model is a robust baseline for this paper. When a wider span is used for selecting answers, the attention-based model should be a reasonable baseline instead of pure LM. \nii) the author also should provide the error rates for each kind of questions (grammar questions or long-term reasoning). \niii) the author claim that this CLOTH dataset requires wider span for getting the correct answer, however, there are only 22.4 of the entire data need long-term reasoning. More importantly, there are 26.5% questions are about grammar. These problems can be easily solved by LM. \niv) I would not consider 16% percent of accuracy is a \"significant margin\" between human and pure LM-based methods. LM-based methods should not be considered as RC model.\nv) what kind accuracy is improved if you use 1-billion corpus trained LM? Are these improvements mostly in grammar? I did not see why larger training corpus for LM could help a lot about reasoning since reasoning is only related to question document.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.6666666865348816,
0.3333333432674408,
0.3333333432674408
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response",
"Attention Baselines",
"Response",
"Response"
],
"comment": [
"Thank you for your valuable review!\n1. Please see our comment about the attention baseline in the top thread. \n2. Indeed, the statement about informativeness is not rigorous. With further experiments, we find that the results should be explained by a distributional mismatch instead of informativeness. Specifically, when the training set contains both the human-designed data and automatically generated data, the accuracy on automatically generated data increases if we have a higher proportion of automatically generated data in the training set. Please see Table 7 for more details. We restructured Section 4 and removed the informativeness section. \n3. However, we believe human-designed data is a much better test bed for general cloze test with the following reasons: Human-designed data is different from automatically generated data since it leads to a larger gap between the model’s performance and the human performance. The model's performance and human's performance on the human-designed data are 0.484 and 0.860 respectively, leading to a gap of 0.376. The performance gap on the automatically-generated data is at most 0.185 since the model's performance reaches 0.815. Similarly, on Children’s Book Test where the questions are generated, the human performance is between 0.708 to 0.828 on four categories and the language model can nearly achieve human performance on the preposition and verb categories. Hence human-designed data is a good test base because of the larger gap between performances of the model and the human, although the distributional mismatch problem makes it difficult to be the best training source for out-of-domain cloze test such as automatically generated cloze test.\n",
"Since all three reviewers suggested employing stronger baselines, specifically attention models, we will first clarify here:\n\n1. We tested machine comprehension models (with attention) when we started working on the task but found that they do not significantly outperform the LSTM baseline. Specifically, the Stanford Attentive Reader achieves an accuracy of 0.487 on CLOTH while an LSTM based method has an accuracy of 0.484. We also implemented position-aware attention model [Zhang et al. 2017] to enable the model to use the distance information. It achieves an accuracy of 0.485. We have updated these results in the paper. \n2. In fact, LSTM based language model is capable of modeling statistical regularities of language. Hill et al. 2015 show language models outperform memory networks and nearly achieves human performance on the verbs or prepositions questions of Children’s Book Test. A concurrent work also shows that language model is very good at modeling complex language regularities when trained on a large amount of data, although they use the LM to extract features instead of directly using it for prediction (Please see ICLR submission “Deep contextualized word representations” ). Specifically, by replacing word vectors with hidden representations of LM, they achieve state-of-the-art results on six language tasks including textual entailment, question answering, semantic role labeling, coreference resolution, named entity extraction, sentiment analysis. Reasoning also benefits from LM features, e.g., the F1 on reading comprehension (SQuAD) improves from 81.1 to 85.3.\n3. We hypothesize the attention models’ unexpected performance is due to the difficulty to learn to comprehend longer contexts when the majority of the training data only requires understanding short-term information. Specifically, there are 23.2% of questions that require a long-term context. Note that although the cloze test was previously introduced for evaluating reasoning abilities in the machine comprehension task, CLOTH does NOT focus on reasoning. We mentioned the difference in the related work section: “Our dataset focuses on evaluating language proficiency including knowledge in vocabulary, reasoning and grammar while the focus of reading comprehension is reasoning.” We have updated the paper to emphasize this point in the introduction. \n\nReference:\nZhang, Y., Zhong, V., Chen, D., Angeli, G., & Manning, C. D. (2017). Position-aware Attention and Supervised Data Improve Slot Filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (pp. 35-45).\nHill, F., Bordes, A., Chopra, S., & Weston, J. (2015). The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations. arXiv preprint arXiv:1511.02301.\n",
"Thank you for your valuable review!\ni) Please see our comment about the attention baseline in the top thread. \nii) The error rates for each kind of questions are added in Figure 1. \niii) The questions in CLOTH dataset require a wider span when compared to automatically generated questions. We added more comparisons about human-designed data and automatically generated data in Section 4.1. \niv) The margin 15.3% results from training on a large external dataset. Specifically, the 1-billion-word dataset is more than 40 times larger than our dataset. However, in practice, it requires too many computational resources to train models on such a large dataset. Hence, it is valuable to compare models that do not use external data. When we do not use external data, the margin between the best model and the human performance is 27.7%, which is still a large margin.\nv) Accuracies on all categories are improved if we train the LM on the 1-billion-word corpus. It shows that a large amount of data is necessary to learn complex language regularities. Please see Figure 1 for more details. \n",
"Thank you for your valuable review! Please see our comment about the attention baseline in the top thread. "
]
} | {
"paperhash": [
"anonymous|deep_contextualized_word_representations",
"bahdanau|neural_machine_translation_by_jointly_learning_to_align_and_translate",
"chelba|one_billion_word_benchmark_for_measuring_progress_in_statistical_language_modeling",
"chen|a_thorough_examination_of_the_cnn/daily_mail_reading_comprehension_task",
"correia|automatic_generation_of_cloze_question_distractors",
"correia|automatic_generation_of_cloze_question_stems",
"dhingra|gated-attention_readers_for_text_comprehension",
"fotos|the_cloze_test_as_an_integrative_measure_of_efl_proficiency:_a_substitute_for_essays_on_college_entrance_examinations?",
"hermann|teaching_machines_to_read_and_comprehend",
"hill|the_goldilocks_principle:_reading_children's_books_with_explicit_memory_representations",
"hochreiter|long_short-term_memory",
"jonz|cloze_item_types_and_second_language_comprehension",
"joshi|triviaqa:_a_large_scale_distantly_supervised_challenge_dataset_for_reading_comprehension",
"jozefowicz|exploring_the_limits_of_language_modeling",
"kingma|adam:_a_method_for_stochastic_optimization",
"lai|race:_large-scale_reading_comprehension_dataset_from_examinations",
"nguyen|ms_marco:_a_human_generated_machine_reading_comprehension_dataset",
"onishi|who_did_what:_a_large-scale_person-centered_cloze_dataset",
"peñas|overview_of_clef_qa_entrance_exams_task_2014",
"pennington|glove:_global_vectors_for_word_representation",
"rajpurkar|squad:_100,000+_questions_for_machine_comprehension_of_text",
"rodrigo|overview_of_clef_qa_entrance_exams_task_2015",
"sachs|how_to_construct_a_cloze_test:_lessons_from_testing_measurement_theory_models",
"seo|bidirectional_attention_flow_for_machine_comprehension",
"shibuki|overview_of_the_ntcir-11_qa-lab_task",
"skory|predicting_cloze_task_quality_for_vocabulary_training",
"wilson|cloze_procedure:_a_new_tool_for_measuring_readability",
"tremblay|proficiency_assessment_standards_in_second_language_acquisition_research",
"trischler|newsqa:_a_machine_comprehension_dataset",
"zhang|positionaware_attention_and_supervised_data_improve_slot_filling",
"zweig|the_microsoft_research_sentence_completion_challenge"
],
"title": [
"Deep contextualized word representations",
"Neural machine translation by jointly learning to align and translate",
"One billion word benchmark for measuring progress in statistical language modeling",
"A thorough examination of the cnn/daily mail reading comprehension task",
"Automatic generation of cloze question distractors",
"Automatic generation of cloze question stems",
"Gated-attention readers for text comprehension",
"The cloze test as an integrative measure of efl proficiency: A substitute for essays on college entrance examinations?",
"Teaching machines to read and comprehend",
"The goldilocks principle: Reading children's books with explicit memory representations",
"Long short-term memory",
"Cloze item types and second language comprehension",
"Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension",
"Exploring the limits of language modeling",
"Adam: A method for stochastic optimization",
"Race: Large-scale reading comprehension dataset from examinations",
"Ms marco: A human generated machine reading comprehension dataset",
"Who did what: A large-scale person-centered cloze dataset",
"Overview of clef qa entrance exams task 2014",
"Glove: Global vectors for word representation",
"Squad: 100,000+ questions for machine comprehension of text",
"Overview of clef qa entrance exams task 2015",
"How to construct a cloze test: Lessons from testing measurement theory models",
"Bidirectional attention flow for machine comprehension",
"Overview of the ntcir-11 qa-lab task",
"Predicting cloze task quality for vocabulary training",
"cloze procedure: a new tool for measuring readability",
"Proficiency assessment standards in second language acquisition research",
"Newsqa: A machine comprehension dataset",
"Positionaware attention and supervised data improve slot filling",
"The microsoft research sentence completion challenge"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
" anonymous"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"dzmitry bahdanau",
"kyunghyun cho",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ciprian chelba",
"tomas mikolov",
"mike schuster",
"qi ge",
"thorsten brants",
"phillipp koehn",
"tony robinson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"danqi chen",
"jason bolton",
"christopher d manning"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rui correia",
"jorge baptista",
"nuno mamede",
"isabel trancoso",
"maxine eskenazi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rui correia",
"jorge baptista",
"maxine eskenazi",
"nuno j mamede"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"bhuwan dhingra",
"hanxiao liu",
"william w cohen",
"ruslan salakhutdinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" sandra s fotos"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"karl moritz hermann",
"tomas kocisky",
"edward grefenstette",
"lasse espeholt",
"will kay",
"mustafa suleyman",
"phil blunsom"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"felix hill",
"antoine bordes",
"sumit chopra",
"jason weston"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sepp hochreiter",
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jon jonz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mandar joshi",
"eunsol choi",
"daniel s weld",
"luke zettlemoyer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rafal jozefowicz",
"oriol vinyals",
"mike schuster",
"noam shazeer",
"yonghui wu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik kingma",
"jimmy ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"guokun lai",
"qizhe xie",
"hanxiao liu",
"yiming yang",
"eduard hovy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tri nguyen",
"mir rosenberg",
"xia song",
"jianfeng gao",
"saurabh tiwary",
"rangan majumder",
"li deng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"takeshi onishi",
"hai wang",
"mohit bansal",
"kevin gimpel",
"david mcallester"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"anselmo peñas",
"yusuke miyao",
"álvaro rodrigo",
"eduard h hovy",
"noriko kando"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jeffrey pennington",
"richard socher",
"christopher manning"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pranav rajpurkar",
"jian zhang",
"konstantin lopyrev",
"percy liang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"álvaro rodrigo",
"anselmo peñas",
"yusuke miyao",
"eduard h hovy",
"noriko kando"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" sachs",
" tung",
" lam"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"minjoon seo",
"aniruddha kembhavi",
"ali farhadi",
"hannaneh hajishirzi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"hideyuki shibuki",
"kotaro sakamoto",
"yoshinobu kano",
"teruko mitamura",
"madoka ishioroshi",
"di kelly y itakura",
"tatsunori wang",
"noriko mori",
" kando"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"adam skory",
"maxine eskenazi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"l wilson",
" taylor"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"annie tremblay"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"adam trischler",
"tong wang",
"xingdi yuan",
"justin harris",
"alessandro sordoni",
"philip bachman",
"kaheer suleman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yuhao zhang",
"victor zhong",
"danqi chen",
"gabor angeli",
"christopher d manning"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"geoffrey zweig",
"j c christopher",
" burges"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1802.05365v2",
"1409.0473v7",
"1312.3005v3",
"1606.02858v2",
"",
"",
"arXiv:1606.01549",
"",
"1506.03340v3",
"1511.02301v4",
"",
"",
"1705.03551v2",
"1602.02410v2",
"1412.6980v9",
"",
"1611.09268v3",
"arXiv:1608.05457",
"",
"",
"1606.05250v3",
"",
"",
"1611.01603v6",
"",
"",
"",
"",
"1611.09830v3",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.444444 | 0.75 | null | null | null | null | null | rJJzTyWCZ |
||
smith|an_inferencebased_policy_gradient_method_for_learning_options|ICLR_cc_2018_Conference | An inference-based policy gradient method for learning options | In the pursuit of increasingly intelligent learning systems, abstraction plays a vital role in enabling sophisticated decisions to be made in complex environments. The options framework provides formalism for such abstraction over sequences of decisions. However most models require that options be given a priori, presumably specified by hand, which is neither efficient, nor scalable. Indeed, it is preferable to learn options directly from interaction with the environment. Despite several efforts, this remains a difficult problem: many approaches require access to a model of the environmental dynamics, and inferred options are often not interpretable, which limits our ability to explain the system behavior for verification or debugging purposes. In this work we develop a novel policy gradient method for the automatic learning of policies with options. This algorithm uses inference methods to simultaneously improve all of the options available to an agent, and thus can be employed in an off-policy manner, without observing option labels. Experimental results show that the options learned can be interpreted. Further, we find that the method presented here is more sample efficient than existing methods, leading to faster and more stable learning of policies with options. | {
"name": [],
"affiliation": []
} | We develop a novel policy gradient method for the automatic learning of policies with options using a differentiable inference step. | [
"reinforcement learning",
"hierarchy",
"options",
"inference"
] | null | 2018-02-15 22:29:16 | 27 | null | null | null | null | null | null | null | null | false | The reviewers are unanimous that this is an interesting paper, but that ultimately the empirical results are not sufficiently promising to warrant the added complexity. | {
"review_id": [
"ByV--6Xbz",
"B1LVBmqgf",
"B1F3lO2lG"
],
"review": [
{
"title": "title: Interesting, but not impactful",
"paper_summary": null,
"main_review": "main_review: This paper proposes what is essentially an off-policy method for learning options in complex continuous problems. The idea is to use policy gradient style algorithms to update a suite of options using relatively \n\nOn the positive side, I like the core idea of this paper. The idea of updating multiple options at once is a good one. I think the authors should definitely continue to investigate this line of work. I also appreciated that the authors took the time to try and visualize what was learned. The paper is generally well-written and easy to read.\n\nOn the negative side: ultimately, the algorithm doesn't seem to work all that well. Empirically, the method doesn't seem to perform substantially better than other algorithms, although there seems to be some slight advantage. A clearly missing comparison would be something like TRPO or DDPG.\n\nFigure 1 was helpful in understanding marginalization and the forward algorithm. Thanks.\n\nWas there really only 4 options that were learned? How would this scale to more?\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: This paper proposes a new algorithm for discovering options, but the benefits of the algorithm are not clear empirically.",
"paper_summary": null,
"main_review": "main_review: This paper treats option discovery as being analogous to discovering useful latent variables. The proposed formulation assumes there is a policy over options, which invokes an option’s policy to select actions at each timestep until the option’s termination function is activated. A contribution of this paper is to learn all possible options that might have caused an observed trajectory, and to update parameters for all these pertinent option-policies with backprop. The proposed method, IOPG, is compared to A3C and the option-critic (OC) on four continuous control tasks in Mujoco, and IOPG has the best performance on one of the four domains.\n\nThe primary weakness of this paper is the absence of performance or conceptual improvements in exchange for the additional complexity of using options. The only domain where IOPG outperforms both A3C and OC is the Walker2D-v1 domain, and the reported performance on that domain (~800) is far below the performance of other methods (shown on OpenAI’s Gym site or in the PPO paper). Also, there is not much analysis on what kind of options are learned with this approach, beyond noting that the options seem clustered on tSNE plots. Given the close match between the A3C agent and the IOPG agent on the other three domains, I expect that the system is mostly relying on the base A3C components with limited contributions from the extensions introduced in the network for options. \n\nThe clarity of the paper’s contributions could be improved. The contribution of options might be made more clearly in smaller domains or in more detailed experiments. How is the termination beta provided from the network? How frequently did the policy over options switch between them? How was the number of options selected, and what happens when the number of possible options is varied from 1 to 4 or beyond 4? To what extent was there overlap in the learned policies to realize the proposed algorithmic benefit of learning multiple option-policies from the same transitions? The results in this paper do not provide strong support for using the proposed method.\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: The paper is well written and presents a good extension of infernece based option discovery. However, the results are not convincing and there is a crucial issue in the assumptions of the algorithm. ",
"paper_summary": null,
"main_review": "main_review: The paper presents a new policy gradient technique for learning options. The option index is treated as latent variable and, in order to compute the policy gradient, the option distribution for the current sample is computed by using a forward pass. Hence, a single sample can be used to update all options and not just the option that has been used for this sample.\n \nThe idea of the paper is good but the novelty is limited. As noted by the authors, the idea of using inference for option discovery has already been presented in Daniel2016. Note that the option discovery process is Daniel2016 is not limited to linear sub-policies, only the policy update strategy is. So the main contribution is to use a new policy update strategy, i.e., policy gradients, for inference based option discovery. Thats fine but should be stated more clearly in the paper. The paper is also written very well and the topic is relevant for the ICLR conference. \n\nHowever, the paper has two main problems:\n- The results are not convincing. In most domains, the performance is similar to the A3C algorithm (which does not use inference based option discovery), so the impact of this paper seems limited. \n\n- One of the main assumptions of the algorithm is wrong. The assumption is that rewards from the past are not correlated with actions in the future conditioned on the state s_t (otherwise we would always have a correlation) ,which is needed to use the policy gradient theorem. The assumption is only true for MDPs. However, using the option index as latent variable yields a PoMDP. There, this assumption does not hold any more. Example: Reward at time step t-1 depends on the action, which again depends on the option o_t-1. Action at time step t depends on o_t. Hence, there is a strong correlation between reward r_t-1 and action a_t+1 as o_t and o_t+1 are strongly correlated. o_t is not a conditional variable of the policy as it is not part of the state, thats why this assumption does not work any more.\n\nSummary: The paper is well written and presents a good extension of inference based option discovery. However, the results are not convincing and there is a crucial issue in the assumptions of the algorithm. \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.2222222238779068,
0.2222222238779068
],
"confidence": [
0.75,
0.75,
1
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Updated Version",
"Response",
"Performance of IOPG",
"Response",
"Response"
],
"comment": [
"We have uploaded an updated version of the paper, which addresses some of the concerns the reviewers had, as well as providing additional information on the nature of the options learned.",
"We thank the reviewer for their time and insight. Individual points are addressed below.\n\n\n> Empirically, the method doesn't seem to perform substantially better than other algorithms, although there seems to be some slight advantage. A clearly missing comparison would be something like TRPO or DDPG.\n\nWe do not expect IOPG to outperform A3C significantly in a single task, but expect benefits in interpretability and transferability. Please see our official comment for our more detailed response to this. While TRPO has been shown to outperform A3C in certain situations, we feel that the policy update strategy is largely independent of the option learning method presented here. That is, it should not be too difficult to write an algorithm that uses trust region updates with option learning. We compare to A3C so that the value of our contribution in isolation is more clear. We could also add a comparison to an inferred-option extension of a more powerful policy search algorithm such as PPO, TRPO, or DDPG.\n\n\n> Was there really only 4 options that were learned? How would this scale to more?\n\nThe number of options learned is prespecified as a hyperparameter, as is the case in several option learning methods. The computational complexity is quadratic in the number of options, with linear memory complexity. We will add an experiment comparing the number of options in the next version of the paper.\n",
"We would like to thank the reviewers for their insightful comments. Here, we focus on the issue that all three reviewers raised: that A3C does as well as IOPG in most environments.\n\nIt is our opinion that A3C ought to perform roughly as well as IOPG. The optimization performed is nearly identical between the two algorithms, where IOPG is parameterized in a particular manner such that options can be learned. We developed IOPG as a data-efficient method to optimize several options simultaneously. We present it in a general form, without any sort of regularization on the structure of the options. Even without such regularization, the options learned by IOPG express some worthwhile characteristics, which several existing option learning algorithms cannot produce: namely temporal extension, and spatial separation. Without additional problem-specific regularization on the structure of those options, there is no reason to expect performance improvements in the single-task setting.\n\nThis said, we feel that the extra structure learned by IOPG yields several benefits. Options can be useful for the interpretation of agent behaviours, as our t-SNE experiments (Fig. 3) show. Further there is strong evidence to suggest that learned options can be useful for transfer learning (OptionGAN: Henderson et al. 2018, Option-Critic: Bacon et al. 2017, Subgoal Discovery: McGovern and Barto 2001). We feel that these benefits make IOPG a worthwhile algorithm, especially since it comes at no cost to data efficiency, variance, or asymptotic learning compared to A3C. We are currently working on experiments that better quantify such upsides.",
"We thank the reviewer for taking the time to evaluate our paper. Individual points are addressed below.\n\n\n> The assumption [that rewards are independent of future actions, conditioned on the current state] is only true for MDPs. However, using the option index as latent variable yields a PoMDP. There, this assumption does not hold any more.\n\nUnder the standard set of assumptions this would be correct. As shown in the line before Eqn. 3, the conditional assumption that we make is slightly different. It is true that a_k and r_j are not independent in general. However, they are conditionally independent given s_k and s_j, and a_j. We are conditioning on all of the observed states and observed actions since the start of the trajectory. Since the reward only depends on these observed variables, no information is passed to future actions.\n\n\n> As noted by the authors, the idea of using inference for option discovery has already been presented in Daniel2016. Note that the option discovery process is Daniel2016 is not limited to linear sub-policies, only the policy update strategy is. So the main contribution is to use a new policy update strategy, i.e., policy gradients, for inference based option discovery\n\nWe agree that the graphical model employed here is the same as that used in Daniel2016. However, the option inference step is not the same, since they employ the use of backward information, while we only require forwards information. This means that our algorithm can be employed online, while the one presented in Daniel2016 can only be applied in the episodic case, where updates are made only after the episode is terminated.\n\n\n> The results are not convincing. In most domains, the performance is similar to the A3C algorithm (which does not use inference based option discovery), so the impact of this paper seems limited.\n\nWe do not expect IOPG to outperform A3C significantly in a single task, but expect benefits in interpretability and transferability. Please see our official comment for our more detailed response to this. ",
"Thank you to the reviewer for their insightful comments. Individual points are addressed below.\n\n\n> The only domain where IOPG outperforms both A3C and OC is the Walker2D-v1 domain, and the reported performance on that domain (~800) is far below the performance of other methods (shown on OpenAI’s Gym site or in the PPO paper).\n\nWe do not expect IOPG to outperform A3C significantly in a single task, but expect benefits in interpretability and transferability. Please see our comment for our more detailed response to this. We would like to add here that we view the option learning strategy contributed here to be largely independent to the method used for policy optimization. This is to say that it should be easy to write a PPO-IOPG algorithm with the benefits of both. We compare to A3C so that the value of our contribution in isolation is more clear. We could also add a comparison to an inferred-option extension to PPO.\n\n\n> How is the termination beta provided from the network? \n\nWe apologize for forgetting to include this. Termination is sampled for the currently active option from a linear sigmoid layer on top of the policy network, as an additional head. We will clarify this in the updated version of the paper.\n\n\n> How frequently did the policy over options switch between them? \n\nWe will add this information to the appendix in the next version of the paper.\n\n\n> How was the number of options selected, and what happens when the number of possible options is varied from 1 to 4 or beyond 4?\n\nMost existing option learning methods require specification of the number of options as a hyperparameter. In general this is optimized according to the task at hand. Here, however, we did no optimization over this parameter, but we'll be happy to add an experiment to the next version of the paper.\n\n\n> To what extent was there overlap in the learned policies to realize the proposed algorithmic benefit of learning multiple option-policies from the same transitions?\n\nAt the start of learning, the policies tend to overlap highly due to random initialization. Because of this, early training benefits from the simultaneous update, as all options are implicated in every action. As training progresses, the t-SNE experiments demonstrate that is little overlap between final policies. Each policy appears to be active in a different region of state space. This is likely due to the fact that the most likely option is most updated, rather than a single option being updated improperly in the event of an unlikely action.\n"
]
} | {
"paperhash": [
"andre|state_abstraction_for_programmable_reinforcement_learning_agents",
"bacon|the_option-critic_architecture",
"bengio|representation_learning:_a_review_and_new_perspectives",
"daniel|probabilistic_inference_for_determining_options_in_reinforcement_learning",
"fox|multi-level_discovery_of_deep_options",
"konidaris|skill_discovery_in_continuous_reinforcement_learning_domains_using_skill_chaining",
"kfir|unified_inter_and_intra_options_learning_using_policy_gradient_methods",
"maaten|visualizing_data_using_t-sne",
"marlos|a_laplacian_framework_for_option_discovery_in_reinforcement_learning",
"mankowitz|adaptive_skills_adaptive_partitions_(asap)",
"mcgovern|automatic_discovery_of_subgoals_in_reinforcement_learning_using_diverse_density",
"menache|q-cut-dynamic_discovery_of_sub-goals_in_reinforcement_learning",
"mnih|asynchronous_methods_for_deep_reinforcement_learning",
"mousavi|deep_reinforcement_learning:_an_overview",
"niekum|clustering_via_dirichlet_process_mixture_models_for_portable_skill_discovery",
"precup|temporal_abstraction_in_reinforcement_learning",
"provost|self-organizing_distinctive_state_abstraction_using_options",
"schulman|proximal_policy_optimization_algorithms",
"silver|compositional_planning_using_optimal_option_models",
"özgür|skill_characterization_based_on_betweenness",
"sutton|between_mdps_and_semi-mdps:_a_framework_for_temporal_abstraction_in_reinforcement_learning",
"sutton|policy_gradient_methods_for_reinforcement_learning_with_function_approximation",
"tieleman|lecture_6.5_-rmsprop:_divide_the_gradient_by_a_running_average_of_its_recent_magnitude._coursera:_neural_networks_for_machine_learning",
"todorov|mujoco:_a_physics_engine_for_model-based_control",
"vezhnevets|strategic_attentive_writer_for_learning_macro-actions",
"vezhnevets|feudal_networks_for_hierarchical_reinforcement_learning",
"ronald|simple_statistical_gradient-following_algorithms_for_connectionist_reinforcement_learning"
],
"title": [
"State abstraction for programmable reinforcement learning agents",
"The option-critic architecture",
"Representation learning: A review and new perspectives",
"Probabilistic inference for determining options in reinforcement learning",
"Multi-level discovery of deep options",
"Skill discovery in continuous reinforcement learning domains using skill chaining",
"Unified inter and intra options learning using policy gradient methods",
"Visualizing data using t-sne",
"A Laplacian Framework for Option Discovery in Reinforcement Learning",
"Adaptive skills adaptive partitions (ASAP)",
"Automatic discovery of subgoals in reinforcement learning using diverse density",
"Q-cut-dynamic discovery of sub-goals in reinforcement learning",
"Asynchronous methods for deep reinforcement learning",
"Deep Reinforcement Learning: An Overview",
"Clustering via Dirichlet process mixture models for portable skill discovery",
"Temporal abstraction in reinforcement learning",
"Self-organizing distinctive state abstraction using options",
"Proximal policy optimization algorithms",
"Compositional planning using optimal option models",
"Skill characterization based on betweenness",
"Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning",
"Policy gradient methods for reinforcement learning with function approximation",
"Lecture 6.5 -rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning",
"Mujoco: A physics engine for model-based control",
"Strategic attentive writer for learning macro-actions",
"Feudal networks for hierarchical reinforcement learning",
"Simple statistical gradient-following algorithms for connectionist reinforcement learning"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"david andre",
"stuart russell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pierre-luc bacon",
"jean harb",
"doina precup"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoshua bengio",
"aaron c courville",
"pascal vincent"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"christian daniel",
"herke van hoof",
"jan peters",
"gerhard neumann"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"roy fox",
"sanjay krishnan",
"ion stoica",
"ken goldberg"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"george konidaris",
"andrew g barto"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"y kfir",
"nahum levy",
" shimkin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"laurens van der maaten",
"geoffrey hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"c marlos",
"marc g machado",
"michael h bellemare",
" bowling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"timothy a daniel j mankowitz",
"shie mann",
" mannor"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"amy mcgovern",
"andrew g barto"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ishai menache",
"shie mannor",
"nahum shimkin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"volodymyr mnih",
"adria puigdomenech badia",
"mehdi mirza",
"alex graves",
"timothy lillicrap",
"tim harley",
"david silver",
"koray kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"seyed sajad mousavi",
"michael schukat",
"enda howley"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"scott niekum",
"andrew g barto"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"doina precup"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jefferson provost",
"benjamin kuipers",
"risto miikkulainen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"john schulman",
"filip wolski",
"prafulla dhariwal",
"alec radford",
"oleg klimov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"d silver",
" ciosek"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"s özgür",
"andrew g barto"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"doina richard s sutton",
"satinder precup",
" singh"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david a richard s sutton",
" mcallester",
"p satinder",
"yishay singh",
" mansour"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tijmen tieleman",
"geoffrey hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"emanuel todorov",
"tom erez",
"yuval tassa"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alexander vezhnevets",
"volodymyr mnih",
"john agapiou",
"simon osindero",
"alex graves",
"oriol vinyals",
"koray kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alexander vezhnevets",
"simon osindero",
"tom schaul",
"nicolas heess",
"max jaderberg",
"david silver",
"koray kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"williams ronald"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"1206.5538v3",
"",
"",
"",
"",
"",
"1703.00956v2",
"1602.03351v2",
"",
"",
"1602.01783v2",
"1806.08894v1",
"",
"",
"",
"1707.06347v2",
"1206.6473v1",
"",
"",
"",
"",
"",
"",
"1703.01161v2",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.259259 | 0.833333 | null | null | null | null | null | rJIgf7bAZ |
||
peysakhovich|maintaining_cooperation_in_complex_social_dilemmas_using_deep_reinforcement_learning|ICLR_cc_2018_Conference | 1707.01068v4 | Maintaining cooperation in complex social dilemmas using deep reinforcement learning | Social dilemmas are situations where individuals face a temptation to increase their payoffs at a cost to total welfare. Building artificially intelligent agents that achieve good outcomes in these situations is important because many real world interactions include a tension between selfish interests and the welfare of others. We show how to modify modern reinforcement learning methods to construct agents that act in ways that are simple to understand, nice (begin by cooperating), provokable (try to avoid being exploited), and forgiving (try to return to mutual cooperation). We show both theoretically and experimentally that such agents can maintain cooperation in Markov social dilemmas. Our construction does not require training methods beyond a modification of self-play, thus if an environment is such that good strategies can be constructed in the zero-sum case (eg. Atari) then we can construct agents that solve social dilemmas in this environment. | {
"name": [],
"affiliation": []
} | null | [
"Computer Science"
] | arXiv.org | 2017-07-04 | 55 | null | null | null | null | null | null | null | null | false | The reviewers found numerous issues in the paper, including unclear problem definitions, lack of motivation, no support for desiderata, clarity issues, points in discussion appearing to be technically incorrect, restrictive setting, sloppy definitions, and uninteresting experiments. Unfortunately, little note of positive aspects was mentioned. The authors wrote substantial rebuttals, including an extended exchange with Reviewer2, but this had no effect in terms of score changes. Given the current state of the paper, the committee feels the paper falls short of acceptance in its current form. | {
"review_id": [
"rkhkuoNgf",
"Bk_1Ws3xf",
"B1_TQ-clG"
],
"review": [
{
"title": "title: Issues with clarity and technical statements",
"paper_summary": null,
"main_review": "main_review: This paper addresses multiagent learning problems in which there is a social dilemma: settings where there are no 'cooperative polices' that form an equilibrium. The paper proposes a way of dealing with these problems via amTFT, a variation of the well-known tit-for-that strategy, and presents some empirical results.\n\nMy main problem with this paper is clarity and I am afraid that not everything might be technically correct. Let me just list my main concerns in the below.\n\nThe definition of social dilemma, is unclear:\n\"A social dilemma is a game where there are no cooperative policies which form equilibria. In other\nwords, if one player commits to play a cooperative policy at every state, there is a way for their\npartner to exploit them and earn higher rewards at their expense.\"\ndoes this mean to say \"there are no cooperative *Markov* policies\" ? It seems to me that the paper precisely intents to show that by resorting to history-dependent policies (such as both using amTFT), there is a cooperative equilibrium. \n\nI don't understand:\n\"Note that in a social dilemma there may be policies which achieve the payoffs of cooperative policies because they cooperate on the trajectory of play and prevent exploitation by threatening non-cooperation on states which are never reached by the trajectory. If such policies exist, we call the social dilemma solvable.\"\nis this now talking about non-Markov policies? If not, there seems to be a contradiction?\n\nThe work focuses on TFT-like policies, motivated by \n\"if one can commit to them, create incentives for a partner to behave cooperatively\"\nhowever it seems that, as made clear below definition 4, we can only create such incentives for sufficiently powerful agents, that remember and learn from their failures to cooperate in the past?\n\nWhy is the method called \"approximate Markov\"? As soon as one introduces history dependence, the Markov property stops to hold?\n\nOn page 4, I have problems following the text due to inconsistent use of notation: subscripts and superscripts seem random, it is not clear which symbols denote strategy profiles (rather than individual strategies), there seems mix-ups between 'i' and '1' / '2', there is sudden use of \\hat{}, and other undefined symbols (Q_CC?).\n\nFor all practical purposes, it seems that the made assumptions imply uniqueness of the cooperative joint strategy. I fully appreciate that the coordination question is difficult and important, so if the proposed method is not compatible with dealing with that important question, that strikes me as a large drawback.\n\nI have problems understanding how it is possible to guarantee \"If they start in a D phase, they eventually return to a C phase.\" without making more assumptions on the domain. The clear example being the typical 'heaven or hell' type of problems: what if after one defect, we are trapped in the 'hell' state where no cooperation is even possible? \n\n\"If policies converge with this training then πˆ is a Markov equilibrium (up to function approximation).\" There are two problems here:\n1) A problem is that very typically things will not converge... E.g., \nWunder, Michael, Michael L. Littman, and Monica Babes. \"Classes of multiagent q-learning dynamics with epsilon-greedy exploration.\" Proceedings of the 27th International Conference on Machine Learning (ICML-10). 
2010.\n2) \"Up to function approximation\" could be arbitrary large?\n\n\nAnother significant problem seems to be with this statement:\n\"while in the cooperative reward schedule the standard RL convergence guarantees apply. The latter is because cooperative training is equivalent to one super-agent controlling both players and trying to optimize for a single scalar reward.\" The training of individual learners is quite different from \"joint action learners\" [Claus & Boutilier 98], and this in turn is different from a 'super-agent' which would also control the exploration. In absence of the super-agent, I believe that the only guarantee is that one will, in the limit, converge to a Nash equilibrum, which might be arbitrary far from the optimal joint policy. And this only holds for the tabular case. See the discussion in \nA concise introduction to multiagent systems and distributed artificial intelligence. N Vlassis. Synthesis Lectures on Artificial Intelligence and Machine Learning 1 (1), 1-71\n\nAlso, the approach used in the experiments \"Cooperative (self play with both agents receiving sum of rewards) training for both games\", would be insufficient for many settings where a cooperative joint policy would be asymmetric.\n\nThe entire approach hinges on using rollouts (the commented lines in Algo. 1). However, it is completely not clear to me how this works. The one paragraph is insufficient to get across these crucial parts of the proposed approach.\n\nIt is not clear why the tables in Figure 1 are not symmetric; this strikes me as extremely problematic. It is not clear what the colors encode either.\n\nIt also seems that \"grim\" is better against all, except against amTFT, why should we not use that? In general, the explanation of this closely related paper by De Cote & Littman (which was published at UAI'08), is insufficient. It is not quite clear to me what the proposed approach offers over the previous method.\n\n\n\n\n\n\n\n\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: The paper proposes an RL algorithm that achieves good outcomes in social dilemmas. No evidence of how innovative w.r.t. other approaches is the approach and is not well presented. ",
"paper_summary": null,
"main_review": "main_review: About the first point, it does not present a clear problem definition. The paper continues stating what it should do (e.g. \"our agents only live once at at test time and must maintain cooperation by behaving intelligently within the confines of a single game rather than threats across games.\") without any support for these desiderata. It then continues explaining how to achieve these desiderata, but at this point it is impossible to follow a coherent argument without understanding why are the authors making these strong assumptions about the problem they are trying to solve, and why. Without this problem description and a good motivation, it is impossible to assess why such desiderata (which look awkward to me) are important. The paper continues defining some joint behavior (e.g. cooperative policies), but then construct arguments for individual policy deviations, including elements like \\pi_A and \\Pi_2^{A_k} that, as you see, A is used sometimes as subindex and sometimes as supperindex. Could not follow this part, as such elements lack definition. D_k is also not defined. \n\nExperiments are uninteresting and show same results as many other RL algorithms that have been proposed in the past. No comparison with such other approaches is presented, nor even recognized. The paper should include a related work section that explain such similar approaches and their difference with this approach. The paper should continue the experimental section making explicit comparisons with such related work.\n\n**Detailed suggestions**\n- On page 2 you say \"This methodology cannot be directly applied to our problem\" without first defining what the problem is.\n- When authors talk about the agent, it is unclear what agent they refer to\n- \\delta undefined\n- You say selfish reward schedule each agent i treats the other agent just as a part of their environment. However, you need to make some assumption about its behavior (e.g. adversarial, cooperative, etc.) and this disregarded. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A fun paper on learning to cooperate in RL using game theory",
"paper_summary": null,
"main_review": "main_review: This paper studies learning to play two-player general-sum games with state (Markov games). The idea is to learn to cooperate (think prisoner's dilemma) but in more complex domains. Generally, in repeated prisoner's dilemma, one can punish one's opponent for noncooperation. In this paper, they design an apporach to learn to cooperate in a more complex game, like a hybrid pong meets prisoner's dilemma game. This is fun but I did not find it particularly surprising from a game-theoretic or from a deep learning point of view. \n\nFrom a game-theoretic point of view, the paper begins with somewhat sloppy definitions followed by a theorem that is not very surprising. It is basically a straightforward generalization of the idea of punishing, which is common in \"folk theorems\" from game theory, to give a particular equilibrium for cooperating in Markov games. Many Markov games do not have a cooperative equilibrium, so this paper restricts attention to those that do. Even in games where there is a cooperative solution that maximizes the total welfare, it is not clear why players would choose to do so. When the game is symmetric, this might be \"the natural\" solution but in general it is far from clear why all players would want to maximize the total payoff. \n\nThe paper follows with some fun experiments implementing these new game theory notions. Unfortunately, since the game theory was not particularly well-motivated, I did not find the overall story compelling. It is perhaps interesting that one can make deep learning learn to cooperate, but one could have illustrated the game theory equally well with other techniques.\n\nIn contrast, the paper \"Coco-Q: Learning in Stochastic Games with Side Payments\" by Sodomka et. al. is an example where they took a well-motivated game theoretic cooperative solution concept and explored how to implement that with reinforcement learning. I would think that generalizing such solution concepts to stochastic games and/or deep learning might be more interesting.\n\nIt should also be noted that I was asked to review another ICLR submission entitled \"CONSEQUENTIALIST CONDITIONAL COOPERATION IN\nSOCIAL DILEMMAS WITH IMPERFECT INFORMATION\n\" which amazingly introduced the same \"Pong Player’s Dilemma\" game as in this paper. \n\nNotice the following suspiciously similar paragraphs from the two papers:\n\nFrom \"MAINTAINING COOPERATION IN COMPLEX SOCIAL DILEMMAS USING DEEP REINFORCEMENT LEARNING\":\nWe also look at an environment where strategies must be learned from raw pixels. We use the method\nof Tampuu et al. (2017) to alter the reward structure of Atari Pong so that whenever an agent scores a\npoint they receive a reward of 1 and the other player receives −2. We refer to this game as the Pong\nPlayer’s Dilemma (PPD). In the PPD the only (jointly) winning move is not to play. However, a fully\ncooperative agent can be exploited by a defector.\n\nFrom \"CONSEQUENTIALIST CONDITIONAL COOPERATION IN SOCIAL DILEMMAS WITH IMPERFECT INFORMATION\":\nTo demonstrate this we follow the method of Tampuu et al. (2017) to construct a version of Atari Pong \nwhich makes the game into a social dilemma. In what we call the Pong Player’s Dilemma (PPD) when an agent \nscores they gain a reward of 1 but the partner receives a reward of −2. Thus, in the PPD the only (jointly) winning\nmove is not to play, but selfish agents are again tempted to defect and try to score points even though\nthis decreases total social reward. 
We see that CCC is a successful, robust, and simple strategy in this\ngame.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.2222222238779068,
0.3333333432674408
],
"confidence": [
0.75,
1,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Replies (to replies)",
"Reply",
"Reply Part 3",
"rollouts procedure",
"Replies (to replies) 2",
"Reply Part 1",
"\"up to function approximation\"",
"defininition of social dilemma",
"reaction",
"reaction",
"Reply Part 1",
"Reply Part 2",
"reaction",
"Reply Part 2",
"\"Could you clarify which question you feel was not addressed? \""
],
"comment": [
"We thank the reviewer for the constructive discussion. We believe this has fundamentally improved the clarity of the paper. Our replies to the referee's concerns are in-line below.\n \n>>> “Sorry, I cannot understand this sentence. I am not sure what is meant with \"after every state\". I also am not certain how cooperation along the path of play is influenced by what actions are taken off the path off play. The path of play is the path of play because those alternatives paths will lead to lower payoffs (independent of whether those are 'cooperative' on 'noncooperative' actions), right?”\n \nYou’re right, we could have made this sentence clearer. We mean that a social dilemma is a game where there is no equilibrium between two ***strategies*** that cooperate in every possible state (i.e. unconditional cooperators). E.g. in the repeated PD (where the state is, eg. the history of play) always cooperating can be exploited by always defecting.\n\nHowever, there may exist strategies (such as grim trigger) that don't unconditionally cooperate (eg. Grim Trigger defects if you defect). However, if you look at the realized trajectory of play against each other, they always cooperate along the path. This is what we mean by “maintaining cooperation”.\n \nWe argue this is the key property of a social dilemma: **always cooperating** leaves one open to being cheated. We solve the social dilemma by constructing a policy which maintains cooperation on the path by off-path threats. The key difficulty is how to detect that a defection has been made (value rather than action space) and how to compute the proper “threat” (use rollouts).\n \n>>> “Alright, but I do think this is a very severe assumption, and the paper ought to be very up front about it. As is, the paper claims to MAINTAIN COOPERATION IN COMPLEX SOCIAL DILEMMAS, but truth is that it does not seem to do this for many settings, such as deciding to which movie (romance or comedy) we should go to?”\n \nWe agree that our method only attacks one aspect of sociality: creating the incentives to not cheat. It does not fix the coordination problem. We tried to be quite clear on this in the paper.\n\nHowever, while we agree the coordination problem is very important (and exchangeability is a crucial assumption) we also would like to point out that it’s not as strong as it looks. Exchangeability mostly affects games where we require coordination within a single timestep (eg. the 2 action, 2 player, matrix, Romance or Comedy game). \n \nFor example, Pong requires coordination about where on the screen to bounce the ball back and forth. However, strategies of the form “if the other player hits it to any vertical location, hit it gently back in a straight line from that location” form exchangeable cooperative strategies (this is because it takes more than 1 timestep for the ball to get across the screen).\n \nWe note that in other literatures (eg. evolutionary biology, behavioral economics) cooperation refers specifically only to the instance of social dilemmas, not to the coordination problem. See eg. the well known review by Nowak *Science* 2006 which defines cooperation as: “A cooperator is someone who pays a cost, c, for another individual to receive a benefit, b. A defector has no cost and does not deal out benefits.”\n ",
"We thank R1 for their thorough comments. We have made several changes to the presentation of the paper to address them.\n \n**>>> “The paper follows with some fun experiments implementing these new game theory notions. Unfortunately, since the game theory was not particularly well-motivated, I did not find the overall story compelling…” **\n \nResponse: \nThis may stem from our lack of clarity in our problem definition (see reply to R3 above). The point here is not to “get RL to cooperate.” Rather, we are interested in expanding the ideas proposed by Axelrod (1984) to Markov games. \n\nPlease see our reply to R3 above discussing why our work is related to but also quite different from what is done in other work on cooperative games. \n \n>> Similar paragraphs in 2 papers\nWe are also the authors of the other paper. \n\nCould the reviewer please clarify the issue here? Is it that the game is re-used without attribution or is it that we use similar text to describe it? \n\nWe are happy to make it clear that this (the amTFT paper) is the first one to use the PPD as an environment and the CCC paper uses it for robustness checks.\n\nThe important differences here are as follows:\n \namTFT (this paper) is a strategy that can only be implemented in Markov perfectly observed games. The CCC paper looks at IMPERFECTLY observed games where amTFT cannot be used. Note that there are other major differences in the guarantees between strategies (eg. CCC only has infinite time limit guarantees).\n \nSince any MDP can be trivially written into a POMDP it follows that the CCC strategy introduced in the other paper can also be used whenever amTFT can be used. \n\nDoes this mean that amTFT is completely dominated by CCC? The answer is no, in the CCC paper the PPD is used as an example to show that the CCC algorithm works well in some places (standard PPD) and not others (risky PPD). \n \n \n****Other Comments****\n>>> Even in games where there is a cooperative solution that maximizes the total welfare, it is not clear why players would choose to do so. When the game is symmetric, this might be \"the natural\" solution but in general it is far from clear why all players would want to maximize the total payoff.\n\nWe agree with the reviewer on this point. One can view this as a discussion about whether a particular equilibrium is a focal point or not. It is well known that in symmetric games (including bargaining and coordination games) that people view the symmetric sum of payoffs to be a natural focal point while in asymmetric versions of the problem they do not (see eg. the chapter on bargaining in Kagel & Roth Handbook of Experimental Economics or more recent work on inequality in public goods games eg. Hauser, Kraft-Todd, Rand, Nowak & Norton 2016).\n \nFiguring out which kinds of payoff distributions are “reasonable” focal points, especially for playing with humans, is an important direction for future research but beyond the scope of this paper (the question is not even settled in behavioral science as there are many debates on whether people are averse to inequality itself, unequal treatment or perhaps something else).\n\nNote that amTFT can be adapted to any focal point that can be expressed in terms of payoffs (for example, pure inequity aversion can be expressed as U_1(payoff1, payoff2) = payoff1 - A*|payoff1-payoff2|, see eg. Charness & Rabin (2002) for a generic utility function that can express many social goals). 
\n\nThe way one can adapt amTFT to these focal points is to train the D policies as we do in the paper, but now train the C policies using this modified reward at each time step (and use the amTFT switching rule at test time). A full discussion of when particular focal points can be implemented in particular games is beyond the scope of the paper.\n\nWe have made this point clear in the both the introduction and conclusion of the paper.\n",
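To illustrate the last point, the pure inequity-aversion focal point mentioned above, U_1(payoff1, payoff2) = payoff1 - A*|payoff1 - payoff2|, amounts to a per-step reward transformation applied before training the C policies. The sketch below is our own minimal rendering of that idea; the function names and the choice A = 0.5 are illustrative assumptions, not the authors' implementation.

```python
def inequity_averse_reward(own_payoff, partner_payoff, aversion=0.5):
    """Pure inequity aversion as stated above: U_1 = payoff1 - A * |payoff1 - payoff2|.
    The value of `aversion` (A) is a hypothetical choice; the reply leaves the focal point open."""
    return own_payoff - aversion * abs(own_payoff - partner_payoff)

def reshape_rewards(trajectory, aversion=0.5):
    """Apply the transformation at every time step of a (r1, r2) reward trajectory,
    which is how the reply suggests the C policies could be trained toward this focal point."""
    return [
        (inequity_averse_reward(r1, r2, aversion), inequity_averse_reward(r2, r1, aversion))
        for r1, r2 in trajectory
    ]

# Example: an unequal step (10, 0) is worth only 5 to the advantaged player under A = 0.5.
print(reshape_rewards([(10, 0), (3, 3)]))  # [(5.0, -5.0), (3.0, 3.0)]
```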
"**>> It also seems that \"grim\" is better against all, except against amTFT, why should we not use that? In general, the explanation of this closely related paper by De Cote & Littman (which was published at UAI'08), is insufficient. It is not quite clear to me what the proposed approach offers over the previous method. **\n\nThe main result of the experiments is that Grim, in practice, behaves almost always like the defect policy (due to its reliance on “wrong action” rather than “wrong value”), and thus gives the same rewards to both players as defect.\n\nWhat is wrong with playing a policy of pure defection? Indeed it does achieve high payoffs against either pure cooperation or pure defection. One problem is that it incentivizes the partner to defect rather than cooperate, which can be seen in the table. A second problem is that it fails to realize gains of cooperation with conditional cooperators. This can't be shown directly in the table (as we cannot enumerate all possible conditionally cooperative strategies), but as a necessary condition it even fails to cooperate with itself (or amTFT). \n\nAll of this can be extracted from the current figure, but we agree that it needs to be highlighted better, and have added an additional table that measures these desiderata explicitly. We thank the reviewer for pointing this out. \n\n\nIn addition, we note that amTFT is different from De Cote & Littman in 4 ways\n\n * amTFT is usable within a single game rather than across multiple iterations of the same game \n * amTFT uses self-play and deep RL rather than the tabular computation in De Cote & Littman thus can be applied to more complex games\n * amTFT returns to cooperation following a defection rather than applying a Grim Trigger strategy which stops cooperating after the wrong action is taken\n * The biggest difference: amTFT uses value space as the trigger as opposed to action space. As we see in our experiments this is important in Markov games where there are multiple value-equivalent cooperating strategies that differ on actions (eg. move left then move down vs. move down then move left in Coins). \n\n \nWe have edited the text to make these clearer.",
"I feel like we could iterate on this quite a bit further, but in the end I feel that all this should have been clear in the submitted paper which I assessed. This rebuttal phase is for clarifying minor misconceptions. For performing iterations of in-depth analysis of what is really going on and clarifying the message, I usually get my name on a paper.",
"\n>>> “The rollouts procedure seems very complex, and frankly, I cannot understand much of it. (why this procedure? why is it unbiased?) I think this might actually be an important contribution, but clearly it needs much better treatment.”\n \nWe apologize for any lack of clarity, we are happy to add more of this discussion to the appendix. \n \nLet pi1, pi2, be policies for players 1 and 2 respectively, and let s be an initial state. The job of the rollouts is to compute V1(s, pi1, pi2) which is the expected sum of discounted rewards for player 1 for starting in s and both behaving according to pi1, pi2. \n\nRollouts are just the straightforward application of Monte Carlo to compute this expectation, by repeatedly sampling trajectories (starting from s, behaving according to pi1, pi2) and computing their expected discounted sum of rewards. If we do this K times and then average those batches and if K is large, we will approximate V1(s, pi1, pi2) arbitrarily well. \n\nOur only additional approximation is to compute the discounted reward for finite-length trajectories. Given that there is discounting the potential bias shrinks exponentially (after t periods the bias is at most delta^t / (1-delta) * r_max) where r_max is the biggest possible reward from one period. Therefore, this MC estimate converges to the correct value in the limit of large batch size and trajectory length.\n \nThe way this is used in the paper is: let a be an action that can be taken today by player 1. We want to find the advantage of the one shot deviation taken by choosing a today and p1 forever after tomorrow relative to just playing pi1 all the time. Call d1 this modified deviation policy.\n \nWe can approximate V1(s, d1, pi2) the same way as above and subtract it from V1(s, pi1, pi2) to get the advantage.\n \n>>> “If you agree that there are many problems with function approximation, I don't understand why the formulation \"is a Markov equilibrium (up to function approximation).\" is not adapted. It just seems quite misleading... as in general it simply will not be an equilibrium.”\n \nWe meant “Up to function approximation” in the sense of “will converge to the equilibrium in the limit of low function approximation error”. Perhaps we should be more precise in our language here. \n \nWe point out that all RL with function approximation has this same limitation (if your function approximator is bad, it won’t work), so it is the assumption in all function-approximation RL work. These methods have nevertheless been quite successful.\n \nWe do this because for the computation of the D phase length we need to compute the payoffs lost to the other player from the D phase. \n \nIn order to do this, we bound this number using the payoffs of joint defection. \n \nIn order for this bound to work, we need that the other player can’t exceed their (Pi_D, Pi_D) payoffs by choosing some other clever policy to follow during the D phase (ie. that PiD, PiD is a finite time equilibrium). \n \nThis isn’t as strong as it seems. In practice, we don’t necessarily need (Pi_D, Pi_D) to be an equilibrium, we just need it to be difficult for the other player to find the better strategy. \n \nIf the other agent has the same computational capacity as our agent, we can be relatively sure that if we couldn’t find a much better response they probably won’t be able to either. This is because Pi_D was computed with self play, so if there was an obvious better response we wouldn’t have stopped at our current one during training. 
However, this notion is difficult to formalize. \n \nWe are happy to add this discussion to the paper.\n \n>>> “This side-steps my question: it seems that the presented experiments somehow had variance that was so large that they were not representative? Clearly this is important to clear up: if the paper reported non-significant results that is a reason to further question the other results too.”\n \nCould you clarify which question you feel was not addressed? Assuming you are referring to “why the tables in Figure 1 are not symmetric”, the answer as stated in our reply is: The tables in Figure 1 show the payoff of the row player against the column player – thus there is no reason to expect them to be symmetric (eg. the box for (D,C) corresponds to the payoff that D gets when C is their partner, which is not the same payoff that C gets when D is their partner).\n \nIn addition, we can check that there is no variance issue by computing the standard errors of the mean in our tournament payoffs. They are on the order of ~1 point in Coins where score differences between strategies are on the order of 40 points. The PPD results are similarly extremely statistically significant. We are happy to add the standard errors to the figures in the appendix. \n \n",
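For completeness, the geometric-series step behind the "bias shrinks exponentially" claim can be written out. Assuming per-period rewards are bounded by r_max in absolute value and the discount rate is δ < 1, truncating a rollout after t periods omits at most

```latex
\Big|\,\mathbb{E}\Big[\textstyle\sum_{s=t}^{\infty} \delta^{s} r_{s}\Big]\Big|
\;\le\; \sum_{s=t}^{\infty} \delta^{s} r_{\max}
\;=\; \frac{\delta^{t}}{1-\delta}\, r_{\max}
```

which matches the bound quoted in the reply and vanishes as the trajectory length t grows.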
"We thank the reviewer for their comments. We believe that many of the reviewer's issues are actually addressed in the paper already, though we were unclear in our presentation. \n\nWe have made major revisions to the motivating text and the presentation of the main results in the newly uploaded version.\n \n>>> The reviewer argues there is a lack of clear problem definition\nWe apologize if the problem definition is unclear, we have edited the text to be clearer. In addition, we have reformulated some of the results presentations to more clearly align with our problem defintion.\n\nThe goal of the paper is to begin with a question discussed by *The Evolution of Cooperation *(Axelrod (1984)): suppose that we are going to enter into a repeated Prisoner's Dilemma with an unknown partner, how should we behave?\n\nAxelrod (and much follow up work) comes up with strategies which seek to work well against mixed populations where some individuals are cooperators, some are pure defectors but most are conditional cooperators (often this is justified by the idea that this is a good approximation of the distribution of people). This literature seeks to construct strategies (eg. Tit-for-Tat, or Win-Stay-Lose-Shift/Pavlov) which\n1) cooperate with cooperators\n2) aren't exploited by defectors\n3) incentivize conditional cooperators to cooperate\n4) are simple to explain\n\nThese are fine desiderata for what it means to \"solve\" a social dilemma. However, a weakness of this literature is that it mostly works with simple 2 player repeated Prisoner's Dilemma games.\n\nOur goal is to expand the Axelrod ideas from the repeated PD case (where there are 2 actions that are clearly labeled) to some perfect information Markov game G which is not repeated (we only play G once), has a social dilemma structure, and is too complex to be solved in a tabular format (so requires deep RL).\n\nOur question is related to, but actually quite different from, the literatures on:\n\n* The folk theorem in game theory (Fudenberg & Maskin 1986, Fudenberg, Levine & Maskin 1996) – this literature asks “given a repeated game G, does an efficient equilibrium exist?”\n* The work on “computational folk theorem” (De Cote & Littman 2008, Littman & Stone 2005) – this literature asks: “can I compute the efficient equilibrium strategies in a repeated game or Markov game?”\n* Alternative solution concepts (eg. Sodomka et al. 2013) – this literature asks: “can we define solution concepts beyond Nash and under what conditions will learning converge to them?”\n* The learning in (Markov) games literature (Fudenberg & Levine 1998, Sandholm & Crites 1996, Leibo et. al. 2017) – this literature asks: “which equilibrium will learners converge to as a function of game parameters/learning rules?”\n* The shaping in learning in games literature (Babes et al 2008) – this literature asks: “if I can change the reward functions of agents, can I guide them to a good equilibrium?”\n* Friend-or-Foe learning (Littman 2001) – this paper asks “what kind of learning rule should I use in positive sum games?” This is quite related to our work though again requires multiple plays of G with the same partner rather than self play training and then a SINGLE play of G.\n* How do humans behave in these kinds of situations? (eg. Rand et al. 
2012, Kleinman-Weiner 2016)\n\nAgain, our situation is that we have access to the game and we can do whatever we want at training time, but at test time we play G once and we want to achieve good performance in the Axelrod sense: sometimes we face pure cooperators, sometimes pure defectors, but mostly we face conditional cooperators. As we can see, this question is related to but not the same as the literatures above (though they all provide valuable tools and context).\n\nNote also that we are not looking for equilibria in the game, in the PD tit-for-tat is not an equilibrium strategy (the best response is to always cooperate), however it is a very good commitment strategy if we seek to design an agent.\n\nWe can see from the reviews that the relationship between our work and prior work was unclear from the text, we have edited the introduction and main text significantly to address these comments.\n",
"\"We point out that all RL with function approximation has this same limitation (if your function approximator is bad, it won’t work), \"\n\n-> yes, but we don't say that that \"converges to the optimal value function up to function approximation\". This is precisely why I think the statement is somewhat misleading.\n\n\n\"In practice, we don’t necessarily need (Pi_D, Pi_D) to be an equilibrium, we just need it to be difficult for the other player to find the better strategy. \"\n\n-> I understand the sentiment, but defining 'difficult' here is a bit tricky. In any case \"equilibrium\" means \"impossible\" not \"difficult\".",
"I feel like things are becoming more convoluted as we go along. Surely, agents \n\n\"remain on the equilibrium path because of what they anticipate would happen if\nthey were to deviate\" -- Binmore (1992)\n\nBut this is a statement that holds for general games, I don't see how this helps define a \"social dilemma\"? Are you not just trying to say \"always cooperating is not an equilibrium\" ?\n\n\nAnd if this is the case, the response to the second \"bullet\" (>>>) is a bit confusing. Where I had interpreted social dilemmas as a broad class including non-exchangeable problems, the response now seems to say that (in other literatures) \"social dilemmas\" exclude the coordination problem. However, it is not excluded by your own definition, only by the additional assumption, if I understand correctly.\n\nIn any case, all this of course is a matter of definitions. Bottom line is that the results in this paper are only for a much narrower class than what I had imagined when reading the title.",
"If you agree that there are many problems with function approximation, I don't understand why the formulation \"is a Markov equilibrium (up to function approximation).\" is not adapted. It just seems quite misleading... as in general it simply will not be an equilibrium.\n\nThe rollouts procedure seems very complex, and frankly, I cannot understand much of it. (why this procedure? why is it unbiased?) I think this might actually be an important contribution, but clearly it needs much better treatment.\n\n\"From the comments given by the review team we see [...]\"\n\nThis side-steps my question: it seems that the presented experiments somehow had variance that was so large that they were not representative? Clearly this is important to clear up: if the paper reported non-significant results that is a reason to further question the other results too.",
"\"Thus we define a social dilemma to be one where cooperation after EVERY STATE is impossible in equilibrium – that is, if there is cooperation along the path of play it must be because OFF THE PATH of play cooperation stops.\"\n\nSorry, I cannot understand this sentence. I am not sure what is meant with \"after every state\". I also am not certain how cooperation along the path of play is influenced by what actions are taken off the path off play. The path of play is the path of play because those alternatives paths will lead to lower payoffs (independent of whether those are 'cooperative' on 'noncooperative' actions), right?\n\n\"The main assumption used is the exchangeability assumption that all strategies form an equivalence class in that any two pairs of cooperative strategies (C1, C1), (C2, C2) are compatible with each other in the sense that (C1, C2) generates the same stream of payoffs. [...] there is no good zero-shot solution to those issues in the literature as well.\"\n\nAlright, but I do think this is a very severe assumption, and the paper ought to be very up front about it. As is, the paper claims to MAINTAIN COOPERATION IN COMPLEX SOCIAL DILEMMAS, but truth is that it does not seem to do this for many settings, such as deciding to which movie (romance or comedy) we should go to?",
"We thank the reviewer for pointing out several important issues. We believe these are mostly issues of clarity in exposition/notation. We have edited the text to address these issues.\n\n>> The definition of social dilemma, is unclear: \"A social dilemma is a game where there are no cooperative policies which form equilibria…. does this mean to say \"there are no cooperative *Markov* policies\" ? **\n \nThe referee is correct, this should mean that there are no Markov policies. Note that when we refer to cooperative policies we specifically refer to ones which cooperate at ALL states. \n\nThus we define a social dilemma to be one where cooperation after EVERY STATE is impossible in equilibrium – that is, if there is cooperation along the path of play it must be because OFF THE PATH of play cooperation stops.\n \nThis is identical to the logic in the standard repeated Prisoner’s Dilemma where policies which always cooperate are not equilibria, rather, in order to maintain cooperation along the path of play there must be defection off the path of play (eg. Grim Trigger). \n \nWe have edited the text to make this point clearer.\n \n**>> Why is the method called \"approximate Markov\"? As soon as one introduces history dependence, the Markov property stops to hold? **\n \nWe call the method approximate Markov because we use function approximation (approximate) and because amTFT only uses Markov policies from the original game (only using the augmented memory to switch between them).\n \nWe have made this clearer in the paper.\n \n**>> On page 4, I have problems following the text due to inconsistent use of notation: subscripts and superscripts seem random, it is not clear which symbols denote strategy profiles (rather than individual strategies), there seems mix-ups between 'i' and '1' / '2', there is sudden use of \\hat{}, and other undefined symbols (Q_CC?).**\n\nWe apologize if the mixup between sub/superscripts caused any confusion, we have fixed these typos. In addition, we now clarify the hat/no hat notation - as in statistics we use the no hat symbol to refer to a \"real\" policy whereas \\hat{} objects refer to approximations (eg. the output of the deep RL training).\n \nWe note that the Q function is introduced in Definition 2 but the notation Q_CC is introduced in section 4 (“we call the converged policies under the selfish reward schedule πˆiD and the associated Q function approximations QˆiDD.”). We apologize for this confusion and will edit Defintion 2 notation to match the Section 4 notation.\n \n** **\n**>> For all practical purposes, it seems that the made assumptions imply uniqueness of the cooperative joint strategy. I fully appreciate that the coordination question is difficult and important, so if the proposed method is not compatible with dealing with that important question, that strikes me as a large drawback. **\n** **\nThe main assumption used is the exchangeability assumption that all strategies form an equivalence class in that any two pairs of cooperative strategies (C1, C1), (C2, C2) are compatible with each other in the sense that (C1, C2) generates the same stream of payoffs. \n\nThis is much weaker than a uniqueness assumption. Indeed, working in value space is one of the innovations of am TFT. \n\nAs an example of this, consider the Pong Player’s Dilemma. The exchangeability assumption it allows both players to do whatever they want as long as they “softly” hit the ball over to the other player in some way. 
\n\nFor example, our partner can move the paddle around however they like while the ball is in flight, and, importantly, it allows for a partner (eg. a human) who hits the ball slightly too fast sometimes (but not so fast that our agent can't get to it). In both of these situations a strategy like the Grim Trigger (a direct application of De Cote & Littman 2008) will assume the partner is not cooperating and defect.\n\nWe agree that there are situations where this assumption fails (for example if we need to make simultaneous decisions that may or may not be compatible with one another as in, eg. a coordination games), but there is no good zero-shot solution to those issues in the literature as well.",
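Because exchangeability is stated in terms of payoff streams rather than actions, it can at least be probed empirically. The sketch below is a rough illustration under our own assumptions: `simulate(policy_a, policy_b, episodes)` is a hypothetical helper returning average per-player returns in the training environment, and `tolerance` absorbs Monte Carlo noise.

```python
def approximately_exchangeable(c1, c2, simulate, episodes=100, tolerance=1.0):
    """Empirical probe of the exchangeability assumption discussed above:
    the mixed pair (C1, C2) should generate (approximately) the same payoff stream
    as the matched pair (C1, C1). `simulate` is a hypothetical helper returning
    average (player1, player2) returns over `episodes` plays."""
    baseline = simulate(c1, c1, episodes)
    mixed = simulate(c1, c2, episodes)
    return all(abs(b - m) <= tolerance for b, m in zip(baseline, mixed))
```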
">>> The reviewer would like to see more baselines\nGiven the discussion above, we argue that there are 2 potential baselines to amTFT. Both of these are already studied in the paper:\n\n**Baseline 1: Apply standard self play at training time, save that strategy, use it at play time **\nWe find that self-play finds the defect policies and thus while it can exploit pure cooperators and not be exploited by pure defectors it isn’t able to realize the gains of cooperation when the partner is a conditional cooperator (eg. amTFT).\n\n**Baseline 2: De Cote & Littman (2008). **\nNote that the De Cote & Littman algorithm works ACROSS multiple iterations of a repeated Markov game (playing a Markov game G multiple times) rather than WITHIN a single game (which is what our agent faces). However, we can amend the DeCote & Littman algorithm as follows: compute a cooperative (C) policy (De Cote & Littman actually compute the equitable policies, but in our games they are identical), compute the defect policy, if our partner chooses an ACTION that is inconsistent with the C policy, use the D policy forever after.\n\nWe show that this approach does not work well because working in ACTION space is not very robust to function approximation or any existence of multiple ways to cooperate.\n\namTFT uses a very similar rule but works in value space rather than action space which makes it robust to multiple policies that have the same or similar values. We see in our experiments that this is an important property.\n\nIf the reviewer has other baselines in mind that we have missed, we are happy to compare our approach to them.\n\n****Other Responses****\n>> The paper continues defining some joint behavior (e.g. cooperative policies), but then construct arguments for individual policy deviations, including elements like \\pi_A and \\Pi_2^{A_k} that, as you see, A is used sometimes as subindex and sometimes as supperindex. Could not follow this part, as such elements lack definition. D_k is also not defined.\n\nWe apologize for the flipping of indices, we thought that we had caught all of the sub/super flips but some managed to get away from us. We have fixed many of the flips.\n\nWe do note that D_k is defined on page 4: “we first introduce the notation of a compound policy πXk Z which is a policy that behaves according to X for k turns and then Z afterwards.”\n\n>>> Experiments are uninteresting and show same results as many other RL algorithms that have been proposed in the past. No comparison with such other approaches is presented, nor even recognized.\n\nWe discuss above why we believe that our work does indeed consider, discuss, properly cite, and compare to prior work on this problem. \n\nIf there is work that the reviewer believes we have left out, we are happy to discuss it in the paper.\n\n>> “\\delta undefined”\nDelta is defined in definition 2: “We assume agents discount the future with rate *δ *which we subsume into the value function.”\n\n>> You say selfish reward schedule each agent i treats the other agent just as a part of their environment. However, you need to make some assumption about its behavior (e.g. adversarial, cooperative, etc.) and this disregarded.\n\nWe apologize if this is unclear. The “selfish reward” schedule is simply standard self-play where each agent treats the other agent as stationary (this is exactly the assumption made in other learning rules eg. fictitious play). 
While this assumption is incorrect in finite time it is correct in the limit if agents converge to a Nash equilibrium. We are not trying to study this assumption (it is beyond the scope of this paper), rather we use it because it is what is done in standard self-play/standard learning in games (see eg. Fudenberg & Levine 1998 for more discussion).",
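Regarding the D_k notation quoted above, the compound policy πX^kZ (behave according to X for k turns and then Z afterwards) can be written as a small stateful wrapper. The class name is ours, and we are assuming D_k corresponds to X = D, Z = C, i.e. a k-turn D phase followed by a return to cooperation, consistent with the amTFT description elsewhere in this thread; this is a sketch of the notation, not the authors' code.

```python
class CompoundPolicy:
    """pi_X^k_Z: behave according to policy X for the first k turns, then according to Z."""

    def __init__(self, policy_x, policy_z, k):
        self.policy_x, self.policy_z, self.k = policy_x, policy_z, k
        self.turn = 0

    def act(self, state):
        policy = self.policy_x if self.turn < self.k else self.policy_z
        self.turn += 1
        return policy(state)

# D_k would then be CompoundPolicy(defect_policy, cooperate_policy, k) under our assumption.
```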
"I think these clarifications are helpful, but I don't think that they are made sufficiently clear in the updated paper. (I had a really hard time decyphering these statements). I would advise to actually make the \" value space as opposed to action space\" the primary hypothesis of a revised paper, since this seems to get to the core of the novelty. ",
">> I have problems understanding how it is possible to guarantee \"If they start in a D phase, they eventually return to a C phase.\" without making more assumptions on the domain. The clear example being the typical 'heaven or hell' type of problems: what if after one defect, we are trapped in the 'hell' state where no cooperation is even possible? **\n\nThe referee is correct that there exist domains where a single deviation by a player can simply never be made up in the D phase. Note that Theorem 1 specifically rules out this scenario, it says that for *any* state, the gain in value of cooperating from it forever (vs. defecting forever) is bigger than any one-shot deviation possible in the game. Thus, any debit a partner earns can eventually be made up by playing D for only k periods and then playing C forever.\n\nThis doesn't mean we rule out all types of “heaven or hell” scenarios. For example: suppose that cooperation earns a payoff of 10 every turn unless someone has defected once, in which case it earns 5, defection earns the defector 100 points and causes the other to lose 200 and mutual defection is worth -101. However, in this case, after a defection by a partner amTFT will return to cooperation after 1 turn of mutual defection but payoffs will be permanently lower.\n \n\n>>>R3 discusses many issues with convergence guarantees of the deep RL methods.**\nWe agree that a weakness of any deep RL approach is that it is often hard to make statements about convergence guarantees / issues with function approximation. \n\nOne way to see whether a amTFT is exploitable is to directly train an RL agent to try to exploit the amTFT agent. We see that in Coins learners fail to learn to exploit (we had issues doing this in the PPD due to instability of training Atari policies wiht low discount rates). Nevertheless the Coins result gives us confidence that this at least works empirically in some simple environments. Importantly, this method also gives us a possible way to stress test amTFT in any practical application.\n \nWe are happy to add this discussion to the main text as a direction for future results.\n \n>> The entire approach hinges on using rollouts (the commented lines in Algo. 1). However, it is completely not clear to me how this works. The one paragraph is insufficient to get across these crucial parts of the proposed approach. \n\nThe rollouts work as follows:\n\n1) The amTFT agent has policy pairs (C,C), (D,D) saved from training\n2) At time t, suppose the partner takes a' when the amTFT agent expected a (according to C(s)). \n3) The amTFT agent simulates 2B replicas of the game for M turns. \n4) In B of the replicates their partner starts with a’ and continues with C - “true path”\n5) In B of the replicates their partner starts with a and continues with C - “counterfactual path”\n6) The amTFT agent takes the difference in the average total reward to the partner from the two paths and uses that as the per period debit\n\nIn the limit of large M and B this is an unbiased estimator of the partner's Q function.\n\nThere is also the option to append the continuation value V(s) to the end of the rollout, we elide it. Note that in games where an action today can only affect payoffs up to M periods from now it suffices to use rollouts of length M and elide the continuation value\n \nWe have changed the text to make this clearer.\n \n>> It is not clear why the tables in Figure 1 are not symmetric; this strikes me as extremely problematic. It is not clear what the colors encode either. 
\nThe tables in Figure 1 show the payoff of the row player against the column player – thus there is no reason to expect them to be symmetric (eg. the box for (D,C) corresponds to the payoff that D gets when C is their partner, which is not the same payoff that C gets when D is their partner).\n\nFrom the comments given by the review team we see that those figures were not the best way to present our main results. Rather, we have specifically measured the exploitability of a strategy as well as whether it incentivizes cooperation from a partner and have added those numbers as our main results.\n",
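The six-step rollout procedure above maps fairly directly onto code. The sketch below is illustrative only: `simulate_from(state, partner_first_action, policy_pair, horizon)` is a hypothetical helper that plays the given first action for the partner, then follows the (C, C) policy pair for `horizon` (= M) turns, and returns the partner's total discounted reward; `num_replicas` plays the role of B, and the default values are our own.

```python
def per_period_debit(state, observed_action, expected_action,
                     cooperative_pair, simulate_from, num_replicas=20, horizon=50):
    """Steps 3-6 of the rollout procedure described above.

    `simulate_from` is a hypothetical simulator hook (see lead-in). In the limit of many
    replicas and a long horizon, the difference below approximates the partner's gain
    from the observed deviation relative to the counterfactual cooperative action."""
    true_path = sum(
        simulate_from(state, observed_action, cooperative_pair, horizon)
        for _ in range(num_replicas)
    ) / num_replicas
    counterfactual_path = sum(
        simulate_from(state, expected_action, cooperative_pair, horizon)
        for _ in range(num_replicas)
    ) / num_replicas
    return true_path - counterfactual_path
```

Averaging B paired rollouts and differencing the true and counterfactual branches is what makes the debit an unbiased estimate (up to truncation) of the partner's one-shot gain, as argued in the replies above.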
"My bad, I had not seen that sentence, I had started reading after the line break. This now makes sense."
]
} | {
"paperhash": [
"chatterjee|stochastic_games",
"silver|mastering_the_game_of_go_without_human_knowledge",
"foerster|learning_with_opponent-learning_awareness",
"peysakhovich|prosocial_learning_agents_solve_generalized_stag_hunts_better_than_selfish_ones",
"pérolat|a_multi-agent_reinforcement_learning_model_of_common-pool_resource_appropriation",
"lowe|multi-agent_actor-critic_for_mixed_cooperative-competitive_environments",
"lewis|deal_or_no_deal?_end-to-end_learning_of_negotiation_dialogues",
"evtimova|emergent_communication_in_a_multi-modal,_multi-step_referential_game",
"evtimova|emergent_language_in_a_multi-modal,_multi-step_referential_game",
"shirado|locally_noisy_autonomous_agents_improve_global_human_coordination_in_network_experiments",
"das|learning_cooperative_visual_dialog_agents_with_deep_reinforcement_learning",
"crandall|cooperating_with_machines",
"arechar|‘i’m_just_a_soul_whose_intentions_are_good’:_the_role_of_communication_in_noisy_repeated_games",
"foerster|stabilising_experience_replay_for_deep_multi-agent_reinforcement_learning",
"havrylov|emergence_of_language_with_multi-agent_games:_learning_to_communicate_with_sequences_of_symbols",
"leibo|multi-agent_reinforcement_learning_in_sequential_social_dilemmas",
"jorge|learning_to_play_guess_who?_and_inventing_a_grounded_language_as_a_consequence",
"lazaridou|multi-agent_cooperation_and_the_emergence_of_(natural)_language",
"denil|learning_to_perform_physics_experiments_via_deep_reinforcement_learning",
"wu|training_agent_for_first-person_shooter_game_with_actor-critic_curriculum_learning",
"usunier|episodic_exploration_for_deep_deterministic_policies:_an_application_to_starcraft_micromanagement_tasks",
"foerster|learning_to_communicate_with_deep_multi-agent_reinforcement_learning",
"kempka|vizdoom:_a_doom-based_ai_research_platform_for_visual_reinforcement_learning",
"lerer|learning_physical_intuition_of_block_towers_by_example",
"heinrich|deep_reinforcement_learning_from_self-play_in_imperfect-information_games",
"mnih|asynchronous_methods_for_deep_reinforcement_learning",
"silver|mastering_the_game_of_go_with_deep_neural_networks_and_tree_search",
"tampuu|multiagent_cooperation_and_competition_with_deep_reinforcement_learning",
"brown|hierarchical_abstraction,_distributed_equilibrium_computation,_and_post-processing,_with_application_to_a_champion_no-limit_texas_hold'em_agent",
"mnih|human-level_control_through_deep_reinforcement_learning",
"kraft-todd|promoting_cooperation_in_the_field",
"ouss|when_punishment_doesn't_pay:_'cold_glow'_and_decisions_to_punish",
"hoffman|cooperate_without_looking:_why_we_care_what_people_think_and_not_just_what_they_do",
"peysakhovich|habits_of_virtue:_creating_norms_of_cooperation_and_defection_in_the_laboratory",
"kingma|adam:_a_method_for_stochastic_optimization",
"peysakhovich|humans_display_a_‘cooperative_phenotype’_that_is_domain_general_and_temporally_stable",
"|cooperating_with_the_future",
"rand|social_heuristics_shape_intuitive_cooperation",
"ontañón|a_survey_of_real-time_strategy_game_ai_research_and_competition_in_starcraft",
"fudenberg|recency,_records_and_recaps:_learning_and_non-equilibrium_behavior_in_a_simple_decision_problem",
"yoeli|powering_up_with_indirect_reciprocity_in_a_large-scale_field_experiment",
"bó|the_evolution_of_cooperation_in_infinitely_repeated_games:_experimental_evidence",
"fudenberg|slow_to_anger_and_fast_to_forgive:_cooperation_in_an_uncertain_world",
"baker|action_understanding_as_inverse_planning",
"tomasello|why_we_cooperate",
"riedmiller|reinforcement_learning_for_robot_soccer",
"cote|a_polynomial-time_nash_equilibrium_algorithm_for_repeated_stochastic_games",
"babes-vroman|social_reward_shaping_in_the_prisoner's_dilemma",
"imhof|tit-for-tat_or_win-stay,_lose-shift?",
"shoham|if_multi-agent_learning_is_the_answer,_what_is_the_question?",
"bicchieri|the_grammar_of_society:_the_nature_and_dynamics_of_social_norms",
"bó|cooperation_under_the_shadow_of_the_future:_experimental_evidence_from_infinitely_repeated_games",
"abbeel|apprenticeship_learning_via_inverse_reinforcement_learning",
"conitzer|awesome:_a_general_multiagent_learning_algorithm_that_converges_in_self-play_and_learns_a_best_response_against_stationary_opponents",
"littman|a_polynomial-time_nash_equilibrium_algorithm_for_repeated_games",
"macy|learning_dynamics_in_social_dilemmas",
"littman|friend-or-foe_q-learning_in_general-sum_games",
"fischbacher|are_people_conditionally_cooperative?_evidence_from_a_public_goods_experiment",
"ng|algorithms_for_inverse_reinforcement_learning",
"fehr|fairness_and_retaliation:_the_economics_of_reciprocity",
"erev|predicting_how_people_play_games:_reinforcement_learning_in_experimental_games_with_unique,_mixed_strategy_equilibria",
"fehr|a_theory_of_fairness,_competition_and_cooperation",
"dutta|a_folk_theorem_for_stochastic_games",
"tesauro|temporal_difference_learning_and_td-gammon",
"nowak|a_strategy_of_win-stay,_lose-shift_that_outperforms_tit-for-tat_in_the_prisoner's_dilemma_game",
"andreoni|impure_altruism_and_donations_to_public_goods:_a_theory_of_warm-glow_giving*",
"fudenberg|the_folk_theorem_in_repeated_games_with_discounting_or_with_incomplete_information",
"selten|end_behavior_in_sequences_of_finite_prisoner's_dilemma_supergames",
"may|the_evolution_of_cooperation",
"rapoport|the_strategy_of_conflict.",
"neumann|zur_theorie_der_gesellschaftsspiele",
"kleiman-weiner|coordinate_to_cooperate_or_compete:_abstract_goals_and_joint_intentions_in_social_interaction",
"|what_makes_a_price_fair?_an_experimental_study_of_market_experience_and_endogenous_fairness_norms",
"papadimitriou|algorithmic_game_theory:_the_complexity_of_finding_nash_equilibria",
"|bargaining_and_market_behavior_in_jerusalem,_ljubljana,_pittsburgh,_and_tokyo:_an_experimental_study",
"axelrod|evolutionary_dynamics",
"williams|simple_statistical_gradient-following_algorithms_for_connectionist_reinforcement_learning",
"fudenberg|the_theory_of_learning_in_games",
"sandholm|multiagent_reinforcement_learning_in_the_iterated_prisoner's_dilemma.",
"|iterative_solution_of_games_by_fictitious_play",
"|we_train_with_a_learning_rate_of_0.001,_continuation_probability_.998_(i.e._games_last_on_average_500_steps),_discount_rate_0.98,_and_a_batch_size_of_32"
],
"title": [
"Stochastic games",
"Mastering the game of Go without human knowledge",
"Learning with Opponent-Learning Awareness",
"Prosocial learning agents solve generalized Stag Hunts better than selfish ones",
"A multi-agent reinforcement learning model of common-pool resource appropriation",
"Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments",
"Deal or No Deal? End-to-End Learning of Negotiation Dialogues",
"Emergent Communication in a Multi-Modal, Multi-Step Referential Game",
"Emergent Language in a Multi-Modal, Multi-Step Referential Game",
"Locally noisy autonomous agents improve global human coordination in network experiments",
"Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning",
"Cooperating with machines",
"‘I’m Just a Soul Whose Intentions Are Good’: The Role of Communication in Noisy Repeated Games",
"Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning",
"Emergence of Language with Multi-agent Games: Learning to Communicate with Sequences of Symbols",
"Multi-agent Reinforcement Learning in Sequential Social Dilemmas",
"Learning to Play Guess Who? and Inventing a Grounded Language as a Consequence",
"Multi-Agent Cooperation and the Emergence of (Natural) Language",
"Learning to Perform Physics Experiments via Deep Reinforcement Learning",
"Training Agent for First-Person Shooter Game with Actor-Critic Curriculum Learning",
"Episodic Exploration for Deep Deterministic Policies: An Application to StarCraft Micromanagement Tasks",
"Learning to Communicate with Deep Multi-Agent Reinforcement Learning",
"ViZDoom: A Doom-based AI research platform for visual reinforcement learning",
"Learning Physical Intuition of Block Towers by Example",
"Deep Reinforcement Learning from Self-Play in Imperfect-Information Games",
"Asynchronous Methods for Deep Reinforcement Learning",
"Mastering the game of Go with deep neural networks and tree search",
"Multiagent cooperation and competition with deep reinforcement learning",
"Hierarchical Abstraction, Distributed Equilibrium Computation, and Post-Processing, with Application to a Champion No-Limit Texas Hold'em Agent",
"Human-level control through deep reinforcement learning",
"Promoting cooperation in the field",
"When Punishment Doesn't Pay: 'Cold Glow' and Decisions to Punish",
"Cooperate without looking: Why we care what people think and not just what they do",
"Habits of Virtue: Creating Norms of Cooperation and Defection in the Laboratory",
"Adam: A Method for Stochastic Optimization",
"Humans display a ‘cooperative phenotype’ that is domain general and temporally stable",
"Cooperating with the future",
"Social heuristics shape intuitive cooperation",
"A Survey of Real-Time Strategy Game AI Research and Competition in StarCraft",
"Recency, records and recaps: learning and non-equilibrium behavior in a simple decision problem",
"Powering up with indirect reciprocity in a large-scale field experiment",
"The Evolution of Cooperation in Infinitely Repeated Games: Experimental Evidence",
"Slow to Anger and Fast to Forgive: Cooperation in an Uncertain World",
"Action understanding as inverse planning",
"Why We Cooperate",
"Reinforcement learning for robot soccer",
"A Polynomial-time Nash Equilibrium Algorithm for Repeated Stochastic Games",
"Social reward shaping in the prisoner's dilemma",
"Tit-for-tat or win-stay, lose-shift?",
"If multi-agent learning is the answer, what is the question?",
"The grammar of society: the nature and dynamics of social norms",
"Cooperation under the Shadow of the Future: Experimental Evidence from Infinitely Repeated Games",
"Apprenticeship learning via inverse reinforcement learning",
"AWESOME: A general multiagent learning algorithm that converges in self-play and learns a best response against stationary opponents",
"A polynomial-time nash equilibrium algorithm for repeated games",
"Learning dynamics in social dilemmas",
"Friend-or-Foe Q-learning in General-Sum Games",
"Are People Conditionally Cooperative? Evidence from a Public Goods Experiment",
"Algorithms for Inverse Reinforcement Learning",
"Fairness and Retaliation: The Economics of Reciprocity",
"Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria",
"A Theory of Fairness, Competition and Cooperation",
"A Folk Theorem for Stochastic Games",
"Temporal difference learning and TD-Gammon",
"A strategy of win-stay, lose-shift that outperforms tit-for-tat in the Prisoner's Dilemma game",
"IMPURE ALTRUISM AND DONATIONS TO PUBLIC GOODS: A THEORY OF WARM-GLOW GIVING*",
"The Folk Theorem in Repeated Games with Discounting or with Incomplete Information",
"End behavior in sequences of finite prisoner's dilemma supergames",
"The evolution of cooperation",
"The Strategy of Conflict.",
"Zur Theorie der Gesellschaftsspiele",
"Coordinate to cooperate or compete: Abstract goals and joint intentions in social interaction",
"What makes a price fair? an experimental study of market experience and endogenous fairness norms",
"Algorithmic Game Theory: The Complexity of Finding Nash Equilibria",
"Bargaining and Market Behavior in Jerusalem, Ljubljana, Pittsburgh, and Tokyo: An Experimental Study",
"Evolutionary Dynamics",
"Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning",
"The Theory of Learning in Games",
"Multiagent reinforcement learning in the Iterated Prisoner's Dilemma.",
"Iterative solution of games by fictitious play",
"We train with a learning rate of 0.001, continuation probability .998 (i.e. games last on average 500 steps), discount rate 0.98, and a batch size of 32"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"K. Chatterjee"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"David Silver",
"Julian Schrittwieser",
"K. Simonyan",
"Ioannis Antonoglou",
"Aja Huang",
"A. Guez",
"T. Hubert",
"Lucas baker",
"Matthew Lai",
"Adrian Bolton",
"Yutian Chen",
"T. Lillicrap",
"Fan Hui",
"L. Sifre",
"George van den Driessche",
"T. Graepel",
"D. Hassabis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jakob N. Foerster",
"Richard Y. Chen",
"Maruan Al-Shedivat",
"Shimon Whiteson",
"P. Abbeel",
"Igor Mordatch"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Peysakhovich",
"Adam Lerer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Pérolat",
"Joel Z. Leibo",
"V. Zambaldi",
"Charlie Beattie",
"K. Tuyls",
"T. Graepel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ryan Lowe",
"Yi Wu",
"Aviv Tamar",
"J. Harb",
"P. Abbeel",
"Igor Mordatch"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Lewis",
"Denis Yarats",
"Yann Dauphin",
"Devi Parikh",
"Dhruv Batra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Katrina Evtimova",
"Andrew Drozdov",
"Douwe Kiela",
"Kyunghyun Cho"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Katrina Evtimova",
"Andrew Drozdov",
"Douwe Kiela",
"Kyunghyun Cho"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"H. Shirado",
"N. Christakis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Abhishek Das",
"Satwik Kottur",
"J. Moura",
"Stefan Lee",
"Dhruv Batra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Crandall",
"Mayada Oudah",
"Tennom Chenlinangjia",
"Fatimah Ishowo-Oloko",
"Sherief Abdallah",
"Jean‐François Bonnefon",
"Manuel Cebrian",
"A. Shariff",
"M. Goodrich",
"Iyad Rahwan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Arechar",
"Anna Dreber",
"D. Fudenberg",
"David G. Rand"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jakob N. Foerster",
"Nantas Nardelli",
"Gregory Farquhar",
"Triantafyllos Afouras",
"Philip H. S. Torr",
"Pushmeet Kohli",
"Shimon Whiteson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Serhii Havrylov",
"Ivan Titov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Joel Z. Leibo",
"V. Zambaldi",
"Marc Lanctot",
"J. Marecki",
"T. Graepel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Emilio Jorge",
"Mikael Kågebäck",
"E. Gustavsson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Angeliki Lazaridou",
"A. Peysakhovich",
"Marco Baroni"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Misha Denil",
"Pulkit Agrawal",
"Tejas D. Kulkarni",
"Tom Erez",
"P. Battaglia",
"Nando de Freitas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yuxin Wu",
"Yuandong Tian"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nicolas Usunier",
"Gabriel Synnaeve",
"Zeming Lin",
"Soumith Chintala"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jakob N. Foerster",
"Yannis Assael",
"Nando de Freitas",
"Shimon Whiteson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Michal Kempka",
"Marek Wydmuch",
"Grzegorz Runc",
"Jakub Toczek",
"Wojciech Jaśkowski"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Adam Lerer",
"Sam Gross",
"R. Fergus"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Johannes Heinrich",
"David Silver"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Volodymyr Mnih",
"Adrià Puigdomènech Badia",
"Mehdi Mirza",
"Alex Graves",
"T. Lillicrap",
"Tim Harley",
"David Silver",
"K. Kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"David Silver",
"Aja Huang",
"Chris J. Maddison",
"A. Guez",
"L. Sifre",
"George van den Driessche",
"Julian Schrittwieser",
"Ioannis Antonoglou",
"Vedavyas Panneershelvam",
"Marc Lanctot",
"S. Dieleman",
"Dominik Grewe",
"John Nham",
"Nal Kalchbrenner",
"I. Sutskever",
"T. Lillicrap",
"M. Leach",
"K. Kavukcuoglu",
"T. Graepel",
"D. Hassabis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ardi Tampuu",
"Tambet Matiisen",
"Dorian Kodelja",
"Ilya Kuzovkin",
"Kristjan Korjus",
"Juhan Aru",
"Jaan Aru",
"Raul Vicente"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Noam Brown",
"Sam Ganzfried",
"T. Sandholm"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Volodymyr Mnih",
"K. Kavukcuoglu",
"David Silver",
"Andrei A. Rusu",
"J. Veness",
"Marc G. Bellemare",
"Alex Graves",
"Martin A. Riedmiller",
"A. Fidjeland",
"Georg Ostrovski",
"Stig Petersen",
"Charlie Beattie",
"Amir Sadik",
"Ioannis Antonoglou",
"Helen King",
"D. Kumaran",
"Daan Wierstra",
"S. Legg",
"D. Hassabis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Gordon T. Kraft-Todd",
"Erez Yoeli",
"Syon P. Bhanot",
"David G. Rand"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Aurelie Ouss",
"A. Peysakhovich"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Moshe Hoffman",
"Erez Yoeli",
"M. Nowak"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Peysakhovich",
"David G. Rand"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Diederik P. Kingma",
"Jimmy Ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Peysakhovich",
"M. Nowak",
"David G. Rand"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [
"David G. Rand",
"A. Peysakhovich",
"Gordon T. Kraft-Todd",
"George E. Newman",
"Owen Wurzbacher",
"M. Nowak",
"Joshua D. Greene"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Santiago Ontañón",
"Gabriel Synnaeve",
"Alberto Uriarte",
"Florian Richoux",
"David Churchill",
"M. Preuss"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Fudenberg",
"A. Peysakhovich"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Erez Yoeli",
"Moshe Hoffman",
"David G. Rand",
"M. Nowak"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Bó",
"Guillaume Fréchette"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Fudenberg",
"David G. Rand",
"Anna Dreber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Chris L. Baker",
"R. Saxe",
"J. Tenenbaum"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Tomasello"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Martin A. Riedmiller",
"T. Gabel",
"Roland Hafner",
"S. Lange"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"E. M. D. Cote",
"M. Littman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Monica Babes-Vroman",
"E. M. D. Cote",
"M. Littman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"L. Imhof",
"D. Fudenberg",
"M. Nowak"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Y. Shoham",
"Rob Powers",
"Trond Grenager"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"C. Bicchieri"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Bó"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Abbeel",
"A. Ng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Vincent Conitzer",
"T. Sandholm"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Littman",
"P. Stone"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Macy",
"A. Flache"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Littman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"U. Fischbacher",
"S. Gächter",
"E. Fehr"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Andrew Y. Ng",
"Stuart Russell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"E. Fehr",
"S. Gächter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ido Erev",
"A. Roth"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"E. Fehr"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Dutta"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. Tesauro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Nowak",
"K. Sigmund"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Andreoni"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Fudenberg",
"E. Maskin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Selten",
"Rolf Stoecker"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Robert M. May"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Rapoport",
"T. Schelling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Neumann"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Max Kleiman-Weiner",
"Mark K. Ho",
"Joseph L. Austerweil",
"M. Littman",
"J. Tenenbaum"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [
"C. Papadimitriou"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [
"Robert Axelrod"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ronald J. Williams"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Fudenberg",
"D. Levine"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"T. Sandholm",
"Robert H. Crites"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [],
"affiliation": []
}
],
"arxiv_id": [
"",
"",
"1709.04326",
"1709.02865v2",
"1707.06600",
"1706.02275",
"1706.05125",
"",
"1705.10369",
"",
"1703.06585v2",
"1703.06207v5",
"",
"1702.08887",
"1705.11192",
"1702.03037",
"1611.03218v4",
"1612.07182",
"1611.01843v3",
"",
"1609.02993v3",
"1605.06676",
"1605.02097",
"1603.01312v1",
"1603.01121",
"1602.01783v2",
"",
"1511.08779v1",
"",
"",
"",
"",
"",
"",
"1412.6980v9",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"1206.3277",
"",
"",
"",
"",
"",
"",
"cs/0307002",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
[
"methodology"
],
[],
[],
[],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[],
[
"background"
],
[
"methodology",
"background"
],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"background"
],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"methodology"
],
[],
[],
[
"background"
],
[],
[
"background"
],
[
"methodology",
"background"
],
[
"background"
],
[],
[],
[
"background"
],
[
"methodology"
],
[],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[],
[],
[
"methodology",
"background"
],
[],
[],
[
"background"
],
[
"background"
],
[],
[],
[],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | null | 91 | null | 0.296296 | 0.833333 | null | null | null | null | null | rJIN_4lA- |
|
xiao|novel_and_effective_parallel_mixgenerator_generative_adversarial_networks|ICLR_cc_2018_Conference | NOVEL AND EFFECTIVE PARALLEL MIX-GENERATOR GENERATIVE ADVERSARIAL NETWORKS | In this paper, we propose a mix-generator generative adversarial networks (PGAN) model that works in parallel by mixing multiple disjoint generators to approximate a complex real distribution. In our model, we propose an adjustment component that collects all the generated data points from the generators, learns the boundary between each pair of generators, and provides error to separate the support of each of the generated distributions. To overcome the instability in a multiplayer game, a shrinkage adjustment component method is introduced to gradually reduce the boundary between generators during the training procedure. To address the linearly growing training time problem in a multiple generators model, we propose a method to train the generators in parallel. This means that our work can be scaled up to large parallel computation frameworks. We present an efficient loss function for the discriminator, an effective adjustment component, and a suitable generator. We also show how to introduce the decay factor to stabilize the training procedure. We have performed extensive experiments on synthetic datasets, MNIST, and CIFAR-10. These experiments reveal that the error provided by the adjustment component could successfully separate the generated distributions and each of the generators can stably learn a part of the real distribution even if only a few modes are contained in the real distribution. | {
"name": [],
"affiliation": []
} | multi generator to capture Pdata, solve the competition and one-beat-all problem | [
"neural networks",
"generative adversarial networks",
"parallel"
] | null | 2018-02-15 22:29:26 | 24 | null | null | null | null | null | null | null | null | false | The paper aims to address the mode collapse issue in GANs by training multiple generators and forcing them to be diverse.
Reviewers agree that the proposed solution is not novel and has disadvantages such as increased parameters due to multiple generator models. The authors do not provide convincing arguments as to why the proposed approach should work well. The experiments presented also fail to demonstrate this. The results are limited to smaller MNIST and CIFAR10 datasets. Comparisons with approaches that directly address the mode collapse problem are missing. | {
"review_id": [
"HyqGENDgz",
"ByyDCx9xf",
"BkQh8t5gz"
],
"review": [
{
"title": "title: Interesting idea to train parallel generators, but not ready for publication",
"paper_summary": null,
"main_review": "main_review: Overall, the writing is very confusing at points and needs some attention to make the paper clearer. I’m not entirely sure the authors understand the material particularly well, as I found some of the arguments and narrative confusing or just incorrect. I don’t really see any significant contribution here except “we had this idea for this model, and it works”. There’s no interesting questions being asked about missing modes (and no answers through good experimentation), no insight that might contribute to our understanding of the problem, and no comparison to other models. My guess is this submission was rushed (and perhaps they were just looking for feedback). I like the idea, don’t get me wrong: a model that is trainable across multiple GPUs and that distributes generative work is pretty cool, and I want to see this work succeed (after a *lot* more work). But the paper really lacks what I’d consider good science, and I don’t see it publishable without significant improvement.\n\nPersonally I think you should change the angle from missing modes to parallel training. I don’t see any strong guarantees that the model will do what you say it will, especially as beta goes to zero.\n\nDetailed comments\n\nP1\n“, that explicitly approximate data distribution, the approximation of GAN is implicit”\nThe wording of this is pretty strange: by “implicit”, we mean that we only have *samples* from the distribution(s) of interest, but what does it mean for an approximation to be “implicit”?\n\nFrom the intro, it doesn’t sound like the approach is meant for the “mode collapse” problem, but for dealing with missing modes. These are different types of failures for GANs, and while there are many theories for why these happen, to my knowledge there’s no such consensus that these issues are the same.\nFor instance, what is keeping each of the generators from collapsing onto a single value? We often see the model collapse on several different values: why couldn’t each of your generators do this?\n\nP2: No, it is incorrect that the KL is what is causing mode collapse, and I think actually you mean “missing modes”. Arjovsky et al addresses the mode collapse problem, which is just another word for a type of instability in GANs. But this isn’t because of “vanishing gradients”, as the “proxy loss” (which you call “heuristic loss”, this isn’t a common term, fyi), which is what GANs are trained on in practice don’t vanish, but show some other sorts of instabilities (Arjovsky 2016). That said, other GAN variants without regularization also show collapse *and* missing modes, such as LSGAN and all the f-GAN variants (even the auto encoder variants).\n\nYou should also probably cite Che et al 2016 as another model that addressed missing modes. Also, what about ALI, BiGAN, and ALiCE? These also address missing modes (at least they claim to).\n\nI don’t understand why you’re comparing f-GAN and WGAN convergences: they are addressing different things with GANs: one shows insight into what exactly traditional GANs are doing (solving a dual problem of minimizing an f-divergence) versus addressing stability through using an IPM (though also a dual formulation of the wasserstein). f-GANs ensure neither stability nor non-vanishing gradients.\n\nP3: I like the breakdown of how the memory is organized.\nThis is for multi-GPU, correct? 
This needs to be explicitly stated.\n\nP6:\nThere’s a sign error in proof 1 (both in the definition of the reverse KL and when the loss is written out).\nAlso, the gradient w.r.t. theta magically appears in the second half.\nThis is a pretty round-about way to arrive at that you’re minimizing the reverse KL: I’m pretty sure this can be shown by formulating the second term in f-gan (the one where you sample from the generator), that is f*(T), where f* is the convex conjugate of f = -log\n\nMixture of Gaussians: common *missing modes* experiment.\n\nSo my general comments about the experiments\nYou need to compare to other models that address missing modes. Overall, many people have shown success with experiments similar to your simple mixture of Gaussians experiments, so in order to show something significant here, you will need to have a more challenging experiments and show a comparison to other models.\nThe real-world experiments are fairly unconvincing, as you only show MNIST and CIFAR-10 (and MNIST doesn’t look very good). Overall, the good inception scores aren’t too surprising given the model has several generators for each mode, but I think we need to see a demonstration on better datasets.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Avoiding mode collapse in GANs through combination of multiple weak generators.",
"paper_summary": null,
"main_review": "main_review: Summary:\nThis paper proposes parallel GANs (PGANs). This is a new architecture which composes the generator based on a mixture of weak generators with the main intended purpose that each unique generator may suffer mode collapse, but as long as each generator collapses to a distinct mode, the combination of generators will cover the whole image distribution. The paper proposes a number of technical details to 1) ensure that each sub generator offers distinct information (adjustment component, C) and 2) to efficiently train the generators in parallel while accumulating information to update both the discriminator and the adjustment component. \nResults are shown on a synthetic dataset of gaussian mixtures, demonstrating that the model does indeed find all modes within the data, and on two small real image datasets: MNIST and CIFAR-10. Overall the parallel generator model results in ~x2 speedup in training time compared with a single complex generator model.\n\nStrengths:\nMode collapse in GANs is a timely and unsolved problem. While most work aims to construct auxiliary loss function to prevent this collapse, this paper instead chooses to accept the collapse and instead encourage multiple models which collapse to unique modes. Though this does present a new problem in chooses the number of modes to estimate within a data source, the paper also presents a solution to systematically combine redundant modes over time, making the model more robust to the choice of number of generators overall. \n\nWeaknesses:\nOrganization - The paper is quite difficult to read. Some concepts are presented out of order. For example, the notion of an adjustment component is very natural but not introduced until after it is mentioned a few times. Similarly, G_{-k} is mentioned many times but not clearly defined. I would suggest to the authors to reorder the subsections in the method part to first outline the main idea: (parallel generators to capture different parts of overall distribution), mention the need to prevent redundancy between the generators (C), and mention some technical overhead in determining how to process all generated images by D. All of this may be discussed within the context of Fig 1. Also Fig 1a-b may be combined and may aid in explanation. \n\nExperiments - Comparison is limited to single generator models. Many other generator approaches exist beyond a single generator/discriminator GAN. In particular, different loss functions for training the generator (LS-GAN etc). Missing some relevant details like why use HogWild or what it is. \n\nMinimal understanding - I would like to know what exactly each generator contributes in the real world datasets. Can you show some generations from each mode? Is there a human perceivable difference?\n\nFigure 4: why does the inception score for the single generator models vary with the #generators?\n\nLast paragraph before 4.2.1: Please clarify this sentence - “we designed a relatively strong discriminator with a high learning rate, since the gradient vanish problem is not observed in reverse KL GAN.” \n\nTypo: last line page 7: “we the use” → “we use the”",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Promising direction, but needs more work",
"paper_summary": null,
"main_review": "main_review: The paper proposes to use multiple generators to fix mode collapse issue. The multiple generators are trained to be diverse. Each generator uses the reverse KL loss so that it models a single mode. One disadvantage is that it increases the number of networks (and hence the number of parameters). \n\nThe paper needs some additional experiments to convincingly demonstrate the usefulness of the proposed method. Experiments on a challenging dataset with large number of classes (e.g. ImageNet as done by AC-GAN paper) would better illustrate the power of the method.\n\nAC-GAN paper:\nConditional Image Synthesis with Auxiliary Classifier GANs\nhttps://arxiv.org/pdf/1610.09585.pdf\n\nThe paper lacks clarity in some places and could use another round of editing/polishing.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.2222222238779068,
0.5555555820465088,
0.4444444477558136
],
"confidence": [
1,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Re: Generative Adversarial Parallelization"
],
"comment": [
"Hello, thanks for your comment. In GAP, multiple discriminators are trained and the swap operator will reduce the coupling between a generator and discriminator since a tight pair could lead to mode collapse problem. However, in our proposed method, only one global discriminator is used and each generator is trained to capture different modes of the data distribution. The extra component C will penalize those generators that collapse to the same mode. Another simple understanding of our proposed method is that each generator tries to capture the data distribution while keeps a distance with any other generators. So that the search space will be partitioned into k separate parts(k is the number of generator) and each generator will capture a certain part. \nSo our method is different from GAP, where GAP use swap operator to bring different adversaries to each generator, while in our method, we partition the space using extra component C, and each generator will capture a certain part of the data distribution. "
]
} | {
"paperhash": [
"arjovsky|towards_principled_methods_for_training_generative_adversarial_networks",
"arora|generalization_and_equilibrium_in_generative_adversarial_nets_(gans)",
"berthelot|began:_boundary_equilibrium_generative_adversarial_networks",
"durugkar|generative_multi-adversarial_networks",
"freund|experiments_with_a_new_boosting_algorithm",
"goodfellow|nips_2016_tutorial:_generative_adversarial_networks",
"goodfellow|generative_adversarial_nets",
"grover|boosted_generative_models",
"hoang|multi-generator_generative_adversarial_nets",
"kingma|adam:_a_method_for_stochastic_optimization",
"diederik|auto-encoding_variational_bayes",
"lecun|mnist_handwritten_digit_database",
"metz|unrolled_generative_adversarial_networks",
"nguyen|dual_discriminator_generative_adversarial_nets",
"nowozin|f-gan:_training_generative_neural_samplers_using_variational_divergence_minimization",
"radford|unsupervised_representation_learning_with_deep_convolutional_generative_adversarial_networks",
"edward|the_infinite_gaussian_mixture_model",
"recht|hogwild:_a_lock-free_approach_to_parallelizing_stochastic_gradient_descent",
"salakhutdinov|restricted_boltzmann_machines_for_collaborative_filtering",
"salimans|improved_techniques_for_training_gans",
"srivastava|veegan:_reducing_mode_collapse_in_gans_using_implicit_variational_learning",
"theis|aäron_van_den_oord,_and_matthias_bethge._a_note_on_the_evaluation_of_generative_models",
"tolstikhin|adagan:_boosting_generative_models",
"zhao|energy-based_generative_adversarial_network"
],
"title": [
"Towards principled methods for training generative adversarial networks",
"Generalization and equilibrium in generative adversarial nets (gans)",
"Began: Boundary equilibrium generative adversarial networks",
"Generative multi-adversarial networks",
"Experiments with a new boosting algorithm",
"Nips 2016 tutorial: Generative adversarial networks",
"Generative adversarial nets",
"Boosted generative models",
"Multi-generator generative adversarial nets",
"Adam: A method for stochastic optimization",
"Auto-encoding variational bayes",
"MNIST handwritten digit database",
"Unrolled generative adversarial networks",
"Dual discriminator generative adversarial nets",
"f-gan: Training generative neural samplers using variational divergence minimization",
"Unsupervised representation learning with deep convolutional generative adversarial networks",
"The infinite gaussian mixture model",
"Hogwild: A lock-free approach to parallelizing stochastic gradient descent",
"Restricted boltzmann machines for collaborative filtering",
"Improved techniques for training gans",
"Veegan: Reducing mode collapse in gans using implicit variational learning",
"Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models",
"Adagan: Boosting generative models",
"Energy-based generative adversarial network"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"martin arjovsky",
"léon bottou"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sanjeev arora",
"rong ge",
"yingyu liang",
"tengyu ma",
"yi zhang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david berthelot",
"tom schumm",
"luke metz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ishan durugkar",
"ian gemp",
"sridhar mahadevan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoav freund",
"robert e schapire"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ian goodfellow"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ian goodfellow",
"jean pouget-abadie",
"mehdi mirza",
"bing xu",
"david warde-farley",
"sherjil ozair",
"aaron courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"aditya grover",
"stefano ermon"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"quan hoang",
"tu dinh nguyen",
"trung le",
"dinh q phung"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik kingma",
"jimmy ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"p diederik",
"max kingma",
" welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yann lecun",
"corinna cortes"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"luke metz",
"ben poole",
"david pfau",
"jascha sohl-dickstein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"trung tu dinh nguyen",
"hung le",
"dinh vu",
" phung"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sebastian nowozin",
"botond cseke",
"ryota tomioka"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alec radford",
"luke metz",
"soumith chintala"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"carl edward",
"rasmussen "
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"benjamin recht",
"christopher re",
"stephen wright",
"feng niu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ruslan salakhutdinov",
"andriy mnih",
"geoffrey hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tim salimans",
"ian goodfellow",
"wojciech zaremba",
"vicki cheung",
"alec radford",
"xi chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"akash srivastava",
"lazar valkov",
"chris russell",
"michael gutmann",
"charles sutton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"lucas theis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ilya tolstikhin",
"sylvain gelly",
"olivier bousquet",
"carl-johann simon-gabriel",
"bernhard schölkopf"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"junbo zhao",
"michael mathieu",
"yann lecun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1701.04862v1",
"1703.00573v5",
"1703.10717v4",
"arXiv:1611.01673",
"",
"1701.00160v4",
"",
"1702.08484v2",
"1708.02556v4",
"1412.6980v9",
"arXiv:1312.6114",
"",
"1611.02163v4",
"1709.03831v1",
"1606.00709v1",
"1511.06434v2",
"",
"",
"",
"1606.03498v1",
"1705.07761v3",
"arXiv:1511.01844",
"1701.02386v2",
"arXiv:1609.03126"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.407407 | 0.833333 | null | null | null | null | null | rJHcpW-CW |
||
evtimova|emergent_communication_in_a_multimodal_multistep_referential_game|ICLR_cc_2018_Conference | 3706001 | null | Emergent Communication in a Multi-Modal, Multi-Step Referential Game | Inspired by previous work on emergent communication in referential games, we propose a novel multi-modal, multi-step referential game, where the sender and receiver have access to distinct modalities of an object, and their information exchange is bidirectional and of arbitrary duration. The multi-modal multi-step setting allows agents to develop an internal communication significantly closer to natural language, in that they share a single set of messages, and that the length of the conversation may vary according to the difficulty of the task. We examine these properties empirically using a dataset consisting of images and textual descriptions of mammals, where the agents are tasked with identifying the correct object. Our experiments indicate that a robust and efficient communication protocol emerges, where gradual information exchange informs better predictions and higher communication bandwidth improves generalization. | {
"name": [
"katrina evtimova",
"andrew drozdov",
"douwe kiela",
"kyunghyun cho"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
}
]
} | null | [
"emergent communication",
"multi-agent systems",
"multi-modal"
] | null | 2018-02-15 22:29:38 | 31 | 97 | 6 | null | null | null | null | null | null | true | An interesting paper, generally well-written. Though it would be nice to see that the methods and observations generalize to other datasets, it is probably too much to ask as datasets with required properties do not seem to exist. There is a clear consensus to accept the paper.
+ an interesting extension of previous work on emergent communications (e.g., referential games)
+ well written paper
| {
"review_id": [
"rJhUvu5gf",
"SkN953tgG",
"BJ8ZFxKgM"
],
"review": [
{
"title": "title: Interesting take on representation learning",
"paper_summary": null,
"main_review": "main_review: The setup in the paper for learning representations is different to many other approaches in the area, using to agents that communicate over descriptions of objects using different modalities. The experimental setup is interesting in that it allows comparing approaches in learning an effective representation. The paper does mention the agents will be available, but leaves open wether the dataset will be also available. For reproducibility and comparisons, this availability would be essential. \n\nI like that the paper gives a bit of context, but presentation of results could be clearer, and I am missing some more explicit information on training and results (eg how long / how many training examples, how many testing, classification rates, etc).\nThe paper says is the training procedure is described in Appendix A, but as far as I see that contains the table of notations. \n\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting paper that extends basic referential game",
"paper_summary": null,
"main_review": "main_review: The paper proposes a new multi-modal, multi-step reference game, where the sender has access to visual data and the receiver has access to textual messages, and also the conversation can be terminated by the receiver when proper. \n\nLater, the paper describes their idea and extension in details and reports comprehensive experiment results of a number of hypotheses. The research questions seems straightforward, but it is good to see those experiments review some interesting points. One thing I am bit concerned is that the results are based on a single dataset. Do we have other datasets that can be used?\n\nThe authors also lay out further several research directions. Overall, I think this paper is easy to read and good. \n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Review",
"paper_summary": null,
"main_review": "main_review: --------------\nSummary and Evaluation:\n--------------\n\nThe paper presents a nice set of experiments on language emergence in a mutli-modal, multi-step setting. The multi-modal reference game provides an interesting setting for communication, with agents learning to map descriptions to images. The receiving agent's direct control over dialog length is also novel and allows for the interesting analysis presented in later sections. \n\nOverall I think this is an interesting and well-designed work; however, some details are missing that I think would make for a stronger submission (see weaknesses).\n\n\n--------------\nStrengths:\n--------------\n- Generally well-written with the Results and Analysis section appearing especially thought-out and nicely presented.\n\n- The proposed reference game provides a number of novel contributions -- giving the agents control over dialog length, providing both agents with the same vocabulary without constraints on how each uses it (implicit through pretraining or explicit in the structure/loss), and introducing an asymmetric multi-modal context for the dialog.\n\n- The analysis is extensive and well-grounded in the three key hypothesis presented at the beginning of Section 6.\n\n--------------\nWeaknesses:\n--------------\n\n- There is room to improve the clarity of Sections 3 and 4 and I encourage the authors to revisit these sections. Some specific suggestions that might help:\n\t\t- numbering all display style equations\n\t\t- when describing the recurrent receiver, explain the case where it terminates (s^t=1) first such that P(o_r=1) is defined prior to being used in the message generation equation. \n\n- I did not see an argument in support of the accuracy@K metric. Why is putting the ground truth in the top 10% the appropriate metric in this setting? Is it to enable comparison between the in-domain, out-domain, and transfer settings?\n\n- Unless I missed something, the transfer test set results only comes up once in the context of attention methods and are not mentioned elsewhere. Why is this? It seems appropriate to include in Figure 5 if no where else in the analysis.\n\n- Do the authors have a sense for how sensitive these results are to different runs of the training process?\n\n- I did not understand this line from Section 5.1: \"and discarding any image with a category beyond the 398-th most frequent one, as classified by a pretrained ImageNet classifier'\"\n\n- It is not specified (or I missed it) whether the F1 scores from the separate classifier are from training or test set evaluations.\n\n- I would have liked to see analysis on the training process such as a plot of reward (or baseline adjusted reward) over training iterations. \n\n- I encourage authors to see the EMNLP 2017 paper \"Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog\" which also perform multi-round dialogs between two agents. Like this work, the authors also proposed removing memory from one of the agents as a means to avoid learning degenerate 'non-dialog' protocols.\n\n- Very minor point: the use of fixed-length, non-sequence style utterances is somewhat disappointing given the other steps made in the paper to make the reference game more 'human like' such as early termination, shared vocabularies, and unconstrained utterance types. 
I understand however that this is left as future work.\n\n\n--------------\nCuriosities:\n--------------\n- I think the analysis is Figure 3 b,c is interesting and wonder if something similar can be computed over all examples. One option would be to plot accuracy@k for different utterance indexes -- essentially forcing the model to make a prediction after each round of dialog (or simply repeating its prediction if the model has chosen to stop). \n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.6666666865348816,
0.6666666865348816,
0.6666666865348816
],
"confidence": [
0.5,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to Reviewer",
"Response to Reviewer",
"Response to Reviewer"
],
"comment": [
"We would like to thank the reviewer for their thoughtful compliments and criticism. In particular, the detailed list of areas for improvement have lead us to run additional experiments and make edits in the text that we believe have strengthened our work.\n\nLet us address your concerns and questions below.\n\n> analysis on the training process\n\nWe’ve updated Figure 6 in the paper to display Accuracy@1 in addition to Accuracy@6. We hope this metric plotted over each epoch gives a useful overview of the training process and some insight in how the model’s performance changes over time. \n\n> Do the authors have a sense for how sensitive these results are to different runs of the training process?\n\nWe ran six experiments with different random seeds and reported the mean and variance on their loss and accuracy in Appendix B, but would be open to include these values in the main text if this seems useful.\n\n> the transfer test set\n\nThere was not much to be gleaned from the transfer set besides the effect of the attention mechanism. We’re more explicit about saying so in Section 6.\n\n> the accuracy@K metric\n\nWe use this metric since many mammal classes are quite similar to each other, and we don't want to overpenalize predicting similar classes such as kangaroo and wallaby. As suggested by the reviewer, this metric also enables comparison between the in-domain, out-domain, and transfer test sets.\n\n> the F1 scores from the separate classifier are from training or test set evaluations\n\nThe plot in Figure 2a and its associated F1 scores are derived from the in-domain test set. \n\n> discarding any image with a category beyond the 398-th most frequent one\n\nWhen we build our dataset, we discard images that are not likely to be an animal, as determined by a pre-trained classifier.\n\n> numbering all display style equations\n\nWe appreciate the reviewer’s suggestion to add equation numbers, but believe that since we have so many equations, it is alright to only number the equations that we reference explicitly in the text. \n\n> when describing the recurrent receiver, explain the case where it terminates (s^t=1) first such that P(o_r=1) is defined prior to being used in the message generation equation\n\nThe first message of the receiver is learned as a separate parameter in all cases and we’ve mentioned this in the “Recurrent Receiver” portion of Section 3.\n\n> the analysis is Figure 3 b,c\n\nFor Figure 3b and 3c, we show only the top-4 predicted classes because the probabilities given to the other classes are negligible in comparison. The observation that we made regarding this figure (that as the conversation progresses, similar but incorrect categories receive smaller probabilities than the correct one) held for all other categories, but we limited to these two classes as we felt this sufficiently conveyed the idea.\n\n> I encourage authors to see the EMNLP 2017 paper \"Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog\" which also perform multi-round dialogs between two agents. Like this work, the authors also proposed removing memory from one of the agents as a means to avoid learning degenerate 'non-dialog' protocols.\n\nAnd\n\n> Very minor point: the use of fixed-length, non-sequence style utterances is somewhat disappointing given the other steps made in the paper to make the reference game more 'human like' such as early termination, shared vocabularies, and unconstrained utterance types. 
I understand however that this is left as future work.\n\nThere are some matters that we will leave for future work. Kottur et al. explain how limiting memory can force consistency over different steps in a dialog. This can be a useful property, but our work was primarily concerned with the distribution over messages and the model’s prediction confidence. It’s a natural progression to investigate the meaning of these messages as a follow-up work, and to attempt models that encode meaning not only in individual words, but also the latent structure in sequences of words.\n",
"We’d like to thank the reviewer for their thoughtful feedback. In response to the following comment:\n\n> One thing I am bit concerned is that the results are based on a single dataset.\n\nA distinguishing property of our dataset is that, in addition to images, each class has an informative textual description, and there is a natural hierarchy of properties shared between classes. As there wasn’t a similar dataset already available, we had to collect the data ourselves. In section 5.2 of the de-anonymized version of the paper, we’ll include a link to our codebase which contains instructions to build such a dataset.\n",
"Thank you for your thoughtful comments.\n\n> The paper does mention the agents will be available, but leaves open whether the dataset will be also available.\n\nYou bring up a great point that in order to reproduce our results it would be necessary to have access to a similar dataset. In addition, even with written details of the implementation, it can be difficult to reproduce experiments. For these reasons, we’ve prepared to release the code and instructions on how to build the dataset, and will include a link in the de-anonymized version of the paper. We allude to this in section 5.2 under Code.\n\n> missing some more explicit information on training and results\n\nAnd \n\n> The paper says the training procedure is described in Appendix A\n\nWe also thank the reviewer for pointing out the typo in relation to Appendix A. In terms of training and results, a plot of the classification accuracy by epoch is shown in the updated Figure 6. We added the following details in sections 5.1 and 5.2 that should clear up confusion about the training procedure:\n\nThe number of images per class in the out-of-domain test set is 100 images per class (for 10 classes in total).\nWe use early stopping with a maximum 500 training epochs.\nWe train on a single GPU (Nvidia Titan X Pascal), and a single experiment takes roughly 8 hours for 500 epochs.\n\nIs the addition of these details sufficient? \n"
]
} | {
"paperhash": [
"tenenbaum|building_machines_that_learn_and_think_like_people",
"andreas|translating_neuralese",
"das|learning_cooperative_visual_dialog_agents_with_deep_reinforcement_learning",
"mordatch|emergence_of_grounded_compositional_language_in_multi-agent_populations",
"strub|end-to-end_optimization_of_goal-driven_and_visually_grounded_dialogue_systems",
"havrylov|emergence_of_language_with_multi-agent_games:_learning_to_communicate_with_sequences_of_symbols",
"vries|guesswhat?!_visual_object_discovery_through_multi-modal_dialogue",
"jorge|learning_to_play_guess_who?_and_inventing_a_grounded_language_as_a_consequence",
"lazaridou|multi-agent_cooperation_and_the_emergence_of_(natural)_language",
"kiela|virtual_embodiment:_a_scalable_long-term_strategy_for_artificial_intelligence_research",
"gauthier|a_paradigm_for_situated_and_goal-driven_language_learning",
"sukhbaatar|learning_multiagent_communication_with_backpropagation",
"andreas|reasoning_about_pragmatics_with_neural_listeners_and_speakers",
"foerster|learning_to_communicate_to_solve_riddles_with_deep_distributed_recurrent_q-networks",
"he|deep_residual_learning_for_image_recognition",
"xu|show,_attend_and_tell:_neural_image_caption_generation_with_visual_attention",
"pennington|glove:_global_vectors_for_word_representation",
"bahdanau|neural_machine_translation_by_jointly_learning_to_align_and_translate",
"cho|learning_phrase_representations_using_rnn_encoder–decoder_for_statistical_machine_translation",
"mnih|neural_variational_inference_and_learning_in_belief_networks",
"krizhevsky|imagenet_classification_with_deep_convolutional_neural_networks",
"steels|the_grounded_naming_game",
"deng|imagenet:_a_large-scale_hierarchical_image_database",
"larochelle|zero-data_learning_of_new_tasks",
"miller|wordnet:_a_lexical_database_for_english",
"cohen|searching_large_hypothesis_spaces_by_asking_questions"
],
"title": [
"Building Machines that Learn and Think Like People",
"Translating Neuralese",
"Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning",
"Emergence of Grounded Compositional Language in Multi-Agent Populations",
"End-to-end optimization of goal-driven and visually grounded dialogue systems",
"Emergence of Language with Multi-agent Games: Learning to Communicate with Sequences of Symbols",
"GuessWhat?! Visual Object Discovery through Multi-modal Dialogue",
"Learning to Play Guess Who? and Inventing a Grounded Language as a Consequence",
"Multi-Agent Cooperation and the Emergence of (Natural) Language",
"Virtual Embodiment: A Scalable Long-Term Strategy for Artificial Intelligence Research",
"A Paradigm for Situated and Goal-Driven Language Learning",
"Learning Multiagent Communication with Backpropagation",
"Reasoning about Pragmatics with Neural Listeners and Speakers",
"Learning to Communicate to Solve Riddles with Deep Distributed Recurrent Q-Networks",
"Deep Residual Learning for Image Recognition",
"Show, Attend and Tell: Neural Image Caption Generation with Visual Attention",
"GloVe: Global Vectors for Word Representation",
"Neural Machine Translation by Jointly Learning to Align and Translate",
"Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation",
"Neural Variational Inference and Learning in Belief Networks",
"ImageNet classification with deep convolutional neural networks",
"The Grounded Naming Game",
"ImageNet: A large-scale hierarchical image database",
"Zero-data Learning of New Tasks",
"WordNet: A Lexical Database for English",
"Searching large hypothesis spaces by asking questions"
],
"abstract": [
"Recent successes in artificial intelligence and machine learning have been largely driven by methods for sophisticated pattern recognition, including deep neural networks and other data-intensive methods. But human intelligence is more than just pattern recognition. And no machine system yet built has anything like the flexible, general-purpose commonsense grasp of the world that we can see in even a one-year-old human infant. I will consider how we might capture the basic learning and thinking abilities humans possess from early childhood, as one route to building more human-like forms of machine learning and thinking. At the heart of human common sense is our ability to model the physical and social environment around us: to explain and understand what we see, to imagine things we could see but haven't yet, to solve problems and plan actions to make these things real, and to build new models as we learn more about the world. I will focus on our recent work reverse-engineering these capacities using methods from probabilistic programming, program induction and program synthesis, which together with deep learning methods and video game simulation engines, provide a toolkit for the joint enterprise of modeling human intelligence and making AI systems smarter in more human-like ways.",
"Several approaches have recently been proposed for learning decentralized deep multiagent policies that coordinate via a differentiable communication channel. While these policies are effective for many tasks, interpretation of their induced communication strategies has remained a challenge. Here we propose to interpret agents’ messages by translating them. Unlike in typical machine translation problems, we have no parallel data to learn from. Instead we develop a translation model based on the insight that agent messages and natural language strings mean the same thing if they induce the same belief about the world in a listener. We present theoretical guarantees and empirical evidence that our approach preserves both the semantics and pragmatics of messages by ensuring that players communicating through a translation layer do not suffer a substantial loss in reward relative to players with a common language.",
"We introduce the first goal-driven training for visual question answering and dialog agents. Specifically, we pose a cooperative ‘image guessing’ game between two agents – Q-BOT and A-BOT– who communicate in natural language dialog so that Q-BOT can select an unseen image from a lineup of images. We use deep reinforcement learning (RL) to learn the policies of these agents end-to-end – from pixels to multi-agent multi-round dialog to game reward.,,We demonstrate two experimental results.,,First, as a ‘sanity check’ demonstration of pure RL (from scratch), we show results on a synthetic world, where the agents communicate in ungrounded vocabularies, i.e., symbols with no pre-specified meanings (X, Y, Z). We find that two bots invent their own communication protocol and start using certain symbols to ask/answer about certain visual attributes (shape/color/style). Thus, we demonstrate the emergence of grounded language and communication among ‘visual’ dialog agents with no human supervision.,,Second, we conduct large-scale real-image experiments on the VisDial dataset [5], where we pretrain on dialog data with supervised learning (SL) and show that the RL finetuned agents significantly outperform supervised pretraining. Interestingly, the RL Q-BOT learns to ask questions that A-BOT is good at, ultimately resulting in more informative dialog and a better team.",
"\n \n By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.\n \n",
"End-to-end design of dialogue systems has recently become a popular research topic thanks to powerful tools such as encoder-decoder architectures for sequence-to-sequence learning. Yet, most current approaches cast human-machine dialogue management as a supervised learning problem, aiming at predicting the next utterance of a participant given the full history of the dialogue. This vision is too simplistic to render the intrinsic planning problem inherent to dialogue as well as its grounded nature , making the context of a dialogue larger than the sole history. This is why only chitchat and question answering tasks have been addressed so far using end-to-end architectures. In this paper, we introduce a Deep Reinforcement Learning method to optimize visually grounded task-oriented dialogues , based on the policy gradient algorithm. This approach is tested on a dataset of 120k dialogues collected through Mechanical Turk and provides encouraging results at solving both the problem of generating natural dialogues and the task of discovering a specific object in a complex picture.",
"Learning to communicate through interaction, rather than relying on explicit supervision, is often considered a prerequisite for developing a general AI. We study a setting where two agents engage in playing a referential game and, from scratch, develop a communication protocol necessary to succeed in this game. Unlike previous work, we require that messages they exchange, both at train and test time, are in the form of a language (i.e. sequences of discrete symbols). We compare a reinforcement learning approach and one using a differentiable relaxation (straight-through Gumbel-softmax estimator) and observe that the latter is much faster to converge and it results in more effective protocols. Interestingly, we also observe that the protocol we induce by optimizing the communication success exhibits a degree of compositionality and variability (i.e. the same information can be phrased in different ways), both properties characteristic of natural languages. As the ultimate goal is to ensure that communication is accomplished in natural language, we also perform experiments where we inject prior information about natural language into our model and study properties of the resulting protocol.",
"We introduce GuessWhat?!, a two-player guessing game as a testbed for research on the interplay of computer vision and dialogue systems. The goal of the game is to locate an unknown object in a rich image scene by asking a sequence of questions. Higher-level image understanding, like spatial reasoning and language grounding, is required to solve the proposed task. Our key contribution is the collection of a large-scale dataset consisting of 150K human-played games with a total of 800K visual question-answer pairs on 66K images. We explain our design decisions in collecting the dataset and introduce the oracle and questioner tasks that are associated with the two players of the game. We prototyped deep learning models to establish initial baselines of the introduced tasks.",
"Learning your first language is an incredible feat and not easily duplicated. Doing this using nothing but a few pictureless books, a corpus, would likely be impossible even for humans. As an alternative we propose to use situated interactions between agents as a driving force for communication, and the framework of Deep Recurrent \nQ-Networks (DRQN) for learning a common language grounded in the provided environment. We task the agents with interactive image search in the form of the game Guess Who?. The images from the game provide a non trivial environment for the agents to discuss and a natural grounding for the concepts they decide to encode in their communication. Our experiments show that it is possible to learn this task using DRQN and even more importantly that the words the agents use correspond to physical attributes present in the images that make up the agents environment.",
"The current mainstream approach to train natural language systems is to expose them to large amounts of text. This passive learning is problematic if we are interested in developing interactive machines, such as conversational agents. We propose a framework for language learning that relies on multi-agent communication. We study this learning in the context of referential games. In these games, a sender and a receiver see a pair of images. The sender is told one of them is the target and is allowed to send a message from a fixed, arbitrary vocabulary to the receiver. The receiver must rely on this message to identify the target. Thus, the agents develop their own language interactively out of the need to communicate. We show that two networks with simple configurations are able to learn to coordinate in the referential game. We further explore how to make changes to the game environment to cause the \"word meanings\" induced in the game to better reflect intuitive semantic properties of the images. In addition, we present a simple strategy for grounding the agents' code into natural language. Both of these are necessary steps towards developing machines that are able to communicate with humans productively.",
"Meaning has been called the \"holy grail\" of a variety of scientific disciplines, ranging from linguistics to philosophy, psychology and the neurosciences. The field of Artifical Intelligence (AI) is very much a part of that list: the development of sophisticated natural language semantics is a sine qua non for achieving a level of intelligence comparable to humans. Embodiment theories in cognitive science hold that human semantic representation depends on sensori-motor experience; the abundant evidence that human meaning representation is grounded in the perception of physical reality leads to the conclusion that meaning must depend on a fusion of multiple (perceptual) modalities. Despite this, AI research in general, and its subdisciplines such as computational linguistics and computer vision in particular, have focused primarily on tasks that involve a single modality. Here, we propose virtual embodiment as an alternative, long-term strategy for AI research that is multi-modal in nature and that allows for the kind of scalability required to develop the field coherently and incrementally, in an ethically responsible fashion.",
"A distinguishing property of human intelligence is the ability to flexibly use language in order to communicate complex ideas with other humans in a variety of contexts. Research in natural language dialogue should focus on designing communicative agents which can integrate themselves into these contexts and productively collaborate with humans. In this abstract, we propose a general situated language learning paradigm which is designed to bring about robust language agents able to cooperate productively with humans.",
"Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNet, that uses continuous communication for fully cooperative tasks. The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines. In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.",
"We present a model for pragmatically describing scenes, in which contrastive behavior results from a combination of inference-driven pragmatics and learned semantics. Like previous learned approaches to language generation, our model uses a simple feature-driven architecture (here a pair of neural \"listener\" and \"speaker\" models) to ground language in the world. Like inference-driven approaches to pragmatics, our model actively reasons about listener behavior when selecting utterances. For training, our approach requires only ordinary captions, annotated _without_ demonstration of the pragmatic behavior the model ultimately exhibits. In human evaluations on a referring expression game, our approach succeeds 81% of the time, compared to a 69% success rate using existing techniques.",
"We propose deep distributed recurrent Q-networks (DDRQN), which enable teams of agents to learn to solve communication-based coordination tasks. In these tasks, the agents are not given any pre-designed communication protocol. Therefore, in order to successfully communicate, they must first automatically develop and agree upon their own communication protocol. We present empirical results on two multi-agent learning problems based on well-known riddles, demonstrating that DDRQN can successfully solve such tasks and discover elegant communication protocols to do so. To our knowledge, this is the first time deep reinforcement learning has succeeded in learning communication protocols. In addition, we present ablation experiments that confirm that each of the main components of the DDRQN architecture are critical to its success.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr9k, Flickr30k and MS COCO.",
"Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"In this paper, we propose a novel neural network model called RNN Encoder‐ Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixedlength vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder‐Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.",
"Highly expressive directed latent variable models, such as sigmoid belief networks, are difficult to train on large datasets because exact inference in them is intractable and none of the approximate inference methods that have been applied to them scale well. We propose a fast non-iterative approximate inference method that uses a feedforward network to implement efficient exact sampling from the variational posterior. The model and this inference network are trained jointly by maximizing a variational lower bound on the log-likelihood. Although the naive estimator of the inference network gradient is too high-variance to be useful, we make it practical by applying several straightforward model-independent variance reduction techniques. Applying our approach to training sigmoid belief networks and deep autoregressive networks, we show that it outperforms the wake-sleep algorithm on MNIST and achieves state-of-the-art results on the Reuters RCV1 document dataset.",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.",
"This research was carried out at the AI Lab of the University of Brussels (VUB) and the Sony Computer Science Laboratory in Paris.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"We introduce the problem of zero-data learning, where a model must generalize to classes or tasks for which no training data are available and only a description of the classes or tasks are provided. Zero-data learning is useful for problems where the set of classes to distinguish or tasks to solve is very large and is not entirely covered by the training data. The main contributions of this work lie in the presentation of a general formalization of zero-data learning, in an experimental analysis of its properties and in empirical evidence showing that generalization is possible and significant in this context. The experimental work of this paper addresses two classification problems of character recognition and a multitask ranking problem in the context of drug discovery. Finally, we conclude by discussing how this new framework could lead to a novel perspective on how to extend machine learning towards AI, where an agent can be given a specification for a learning problem before attempting to solve it (with very few or even zero examples).",
"Because meaningful sentences are composed of meaningful words, any system that hopes to process natural languages as people do must have information about words and their meanings. This information is traditionally provided through dictionaries, and machine-readable dictionaries are now widely available. But dictionary entries evolved for the convenience of human readers, not for machines. WordNet1 provides a more effective combination of traditional lexicographic information and modern computing. WordNet is an online lexical database designed for use under program control. English nouns, verbs, adjectives, and adverbs are organized into sets of synonyms, each representing a lexicalized concept. Semantic relations link the synonym sets [4].",
"One way people deal with uncertainty is by asking questions. A showcase of this ability is the classic 20 questions game where a player asks questions in search of a secret object. Previous studies using variants of this task have found that people are effective question-askers according to normative Bayesian metrics such as expected information gain. However, so far, the studies amenable to mathematical modeling have used only small sets of possible hypotheses that were provided explicitly to participants, far from the unbounded hypothesis spaces people often grapple with. Here, we study how people evaluate the quality of questions in an unrestricted 20 Questions task. We present a Bayesian model that utilizes a large data set of object-question pairs and expected information gain to select questions. This model provides good predictions regarding people’s preferences and outperforms simpler alternatives."
],
"authors": [
{
"name": [
"J. Tenenbaum"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jacob Andreas",
"A. Dragan",
"D. Klein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Abhishek Das",
"Satwik Kottur",
"J. Moura",
"Stefan Lee",
"Dhruv Batra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Igor Mordatch",
"P. Abbeel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Florian Strub",
"H. D. Vries",
"Jérémie Mary",
"Bilal Piot",
"Aaron C. Courville",
"O. Pietquin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Serhii Havrylov",
"Ivan Titov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"H. D. Vries",
"Florian Strub",
"A. Chandar",
"O. Pietquin",
"H. Larochelle",
"Aaron C. Courville"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Emilio Jorge",
"Mikael Kågebäck",
"E. Gustavsson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Angeliki Lazaridou",
"A. Peysakhovich",
"Marco Baroni"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Douwe Kiela",
"L. Bulat",
"A. Vero",
"S. Clark"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jon Gauthier",
"Igor Mordatch"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sainbayar Sukhbaatar",
"Arthur Szlam",
"R. Fergus"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jacob Andreas",
"D. Klein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jakob N. Foerster",
"Yannis Assael",
"Nando de Freitas",
"Shimon Whiteson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kaiming He",
"X. Zhang",
"Shaoqing Ren",
"Jian Sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ke Xu",
"Jimmy Ba",
"Ryan Kiros",
"Kyunghyun Cho",
"Aaron C. Courville",
"R. Salakhutdinov",
"R. Zemel",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jeffrey Pennington",
"R. Socher",
"Christopher D. Manning"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Dzmitry Bahdanau",
"Kyunghyun Cho",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kyunghyun Cho",
"B. V. Merrienboer",
"Çaglar Gülçehre",
"Dzmitry Bahdanau",
"Fethi Bougares",
"Holger Schwenk",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Mnih",
"Karol Gregor"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Krizhevsky",
"I. Sutskever",
"Geoffrey E. Hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"L. Steels",
"Martin Loetzsch"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jia Deng",
"Wei Dong",
"R. Socher",
"Li-Jia Li",
"K. Li",
"Li Fei-Fei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"H. Larochelle",
"D. Erhan",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. Miller"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Alexander Cohen",
"B. Lake"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
"1704.06960",
"1703.06585",
"1703.04908",
"1703.05423",
"1705.11192",
"1611.08481",
"1611.03218",
"1612.07182",
"1610.07432",
"1610.03585",
"1605.07736",
"1604.00562",
"1602.02672",
"1512.03385",
"1502.03044",
null,
"1409.0473",
"1406.1078",
"1402.0030",
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"260496023",
"291100",
"1448723",
"13548281",
"8143148",
"24100971",
"36417",
"11357932",
"14911774",
"6274593",
"18797791",
"6925519",
"337390",
"17473440",
"206594692",
"1055111",
"1957433",
"11212020",
"5590763",
"1981188",
"195908774",
"10482330",
"57246310",
"7249642",
"1671874",
"9608226"
],
"intents": [
[
"background"
],
[
"methodology"
],
[
"background"
],
[],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"methodology",
"background"
],
[],
[
"background"
],
[
"background"
],
[],
[
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[],
[],
[
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"background"
]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false
]
} | null | 84 | 1.154762 | 0.666667 | 0.666667 | null | null | null | null | null | rJGZq6g0- |
yang|deep_mean_field_theory_layerwise_variance_and_width_variation_as_methods_to_control_gradient_explosion|ICLR_cc_2018_Conference | Deep Mean Field Theory: Layerwise Variance and Width Variation as Methods to Control Gradient Explosion | A recent line of work has studied the statistical properties of neural networks to great success from a {\it mean field theory} perspective, making and verifying very precise predictions of neural network behavior and test time performance.
In this paper, we build upon these works to explore two methods for taming the behaviors of random residual networks (with only fully connected layers and no batchnorm).
The first method is {\it width variation (WV)}, i.e. varying the widths of layers as a function of depth.
We show that width decay reduces gradient explosion without affecting the mean forward dynamics of the random network.
The second method is {\it variance variation (VV)}, i.e. changing the initialization variances of weights and biases over depth.
We show VV, used appropriately, can reduce gradient explosion of tanh and ReLU resnets from $\exp(\Theta(\sqrt L))$ and $\exp(\Theta(L))$ respectively to constant $\Theta(1)$.
A complete phase-diagram is derived for how variance decay affects different dynamics, such as those of gradient and activation norms.
In particular, we show the existence of many phase transitions where these dynamics switch between exponential, polynomial, logarithmic, and even constant behaviors.
Using the obtained mean field theory, we are able to track surprisingly well how VV at initialization time affects training and test time performance on MNIST after a set number of epochs: the level sets of test/train set accuracies coincide with the level sets of the expectations of certain gradient norms or of metric expressivity (as defined in \cite{yang_meanfield_2017}), a measure of expansion in a random neural network.
Based on insights from past works in deep mean field theory and information geometry, we also provide a new perspective on the gradient explosion/vanishing problems: they lead to ill-conditioning of the Fisher information matrix, causing optimization troubles. | {
"name": [],
"affiliation": []
} | By setting the width or the initialization variance of each layer differently, we can actually subdue gradient explosion problems in residual networks (with fully connected layers and no batchnorm). A mathematical theory is developed that not only tells you how to do it, but also surprisingly is able to predict, after you apply such tricks, how fast your network trains to achieve a certain test set performance. This is some black magic stuff, and it's called "Deep Mean Field Theory." | [
"mean field",
"dynamics",
"residual network",
"variance variation",
"width variation",
"initialization"
] | null | 2018-02-15 22:29:23 | 26 | null | null | null | null | null | null | null | null | false | All the reviewers agree that this is an interesting paper but have concerns about readability and presentation. There is also concern that many results are speculative and not concretely tested. I recommend the authors to carefully investigate their claims with stronger experiments and submit it to another venue. I recommend presenting at ICLR workshop to obtain further feedback. | {
"review_id": [
"rkDLp95lG",
"SJTc3MAgf",
"Bk8iCb0Wz"
],
"review": [
{
"title": "title: Nice addition to the mean-field-theory subfield",
"paper_summary": null,
"main_review": "main_review: This paper further develops the research program using mean field theory to predict generalization performance of deep neural networks. As with all recent mean-field papers, the main query here is to what extent the assumptions (Axioms 1+2, which basically define the asymptotic parameters of interest to be the quantities defined in Sec. 2.; and also the fully connected residual structure of the network) apply in practice. This is answered using the same empirical standard as in [Yang and Schoenholz, Schoenholz et al.], i.e. showing that the dynamics of initialization predict generalization behavior on MNIST according to theory.\n\nAs with the earlier papers in this recent program, the paper is notation-heavy but generally written well, though there is some overreliance on the readers' knowledge of previous work, for instance in presenting the evidence as above. Try as I might, I cannot find a detailed explanation of the color scale for the important Fig. 4. A small notation issue: the current Hebrew letter for the gradient quantity does not go with the other Greek letters and is typographically poor choice because of underlining, etc.). Also, several of the citations should be fixed to reflect peer-reviewed publication of Arxiv papers. I was not able to review all the proofs, but what I checked was sound. Finally, the techniques of WV and VV would be more applicable if it were not for the very tenuous relationship between gradient explosion and performance, which should be mentioned more than the one time it appears in the paper.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Difficult to follow for someone not familiar with all the terminologies used in deep nets. ",
"paper_summary": null,
"main_review": "main_review: The authors study mean field theory for deep neural nets. \n\nTo the best of my knowledge we do not have a good understanding of mean field theory for neural networks and this paper and some references therein are starting to address some of it. \n\nHowever, my concern about the paper is in readability. I am very familiar with the literature on mean field theory but less so on deep nets. I found it difficult to follow many parts because the authors assume that the reader will have the knowledge of all the terminology in the paper, which there is a lot of. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Missing literature, figures lack clarity, but conceptually interesting",
"paper_summary": null,
"main_review": "main_review: Mean field theory is an approach to analysing complex systems where correlations between highly dependent random variables are ignored, thus making the problem analytically tractable. It is hoped that analytical insights gained in this idealised setting might translate back to the original (and far messier) problem. The authors use a mean field theory approach to study how varying certain network hyperparameters with depth can effect gradient and activation statistics. A correlation between the behaviour of these statistics and training performance on MNIST is noted.\n\nAs someone asked to conduct an 'emergency' review of this paper, I would have greatly appreciated the authors making more of an effort to present their results clearly. Some general comments in this regard:\n\nClarity issues:\n- the authors appear to have ignored the ICLR style guidelines\n- the references are all written in green, making them difficult to read\n- figures are either missing color maps or make poor choice of colors\n- the figure captions are difficult to understand in isolation from the main text\n- the authors themselves appear to muddle their 'zigs' and 'zags' (first line of discussion)\n\nNow to get to the actual content of the paper. The authors do not properly place their work in context. Mean field theory has been studied in the context of neural networks at least since the 80's. Entire books have been written on the statistical mechanics of neural networks. It seems wrong that the authors only cite papers on this matter going back to 2016.\n\nWith that said, the main thrust of the paper is very interesting. The authors derive recurrence relations for mean activations and gradients. They show how scaling layer width and initialisation variance with depth can better control the propagation of these means. The results of their calculations appear to match their random network simulations, and this part of the work seems strong.\n\nWhat is not clear is what effect we should expect these quantities to have on learning? The authors claim there is a tradeoff between expressivity and exploding gradients. This seems quite speculative since it is not clear to me what effect either of these things will have on training. For one, how expressive does a model need to be to correctly classify MNIST? And are exploding gradients necessarily a bad thing? Provided they do not reach infinity, can we not just choose a smaller learning rate?\n\nI'm open to reevaluating the review if the issues of clarity and missing literature review are fixed.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.6666666865348816,
0.4444444477558136,
0.4444444477558136
],
"confidence": [
0.5,
0,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Changelog",
"Response (cont)",
"Thank you for your review.",
"Thank you for taking the time to read and review our paper.",
"Thank you for your review.",
"Response (cont. 2)"
],
"comment": [
"We have updated our paper as follows:\n1.\tWe added a new section that elucidates the gradient explosion/vanishing problem from an information geometry perspective. We reason that this problem manifests in the exponential ill-conditioning of the Fisher information matrix, so that (stochastic) gradient descent approximates the natural gradient poorly.\n2.\tWe added experiments on applying VV to tanh resnets. We find that variance decay improves performance of tanh resnets. In particular, the optimal decay cannot be too small nor too large, but rather must balance trainability and expressivity.\n3.\tWe added a background section summarizing the recent line of work that we are building on and discuss how our work relates to them.\n4.\tWe added a section overviewing our techniques and main results in intuitive terms. In particular, we devote a significant chunk to discussing the trainability vs expressivity tradeoff.\n5.\tWe devoted significant space in the introduction to discuss prior works in mean field theory and recent trends.\n6.\tWe swapped out the Hebrew letters for better alternatives; for example, Hebrew daleth is now chi. We also bolded all mean field quantities to improve readability.\n7.\tWe added a notation glossary to improve readability.\n8.\tWe improved colors and presentations of the plots, especially the heatmaps and overlaid contours. We also added color bars.\n9.\tWe moved the detailed discussion on the VV dynamics in the original manuscript to the appendix, and only sketch the key points in enough detail in the main text for the experiments to make sense to the reader.\n10.\tWe moved discussion of mean field assumption to the appendix, as they might be confusing to the first time reader.\n11.\tSimilarly we moved definition of the integral operators V and W, along with the table of dynamical equations we derive in this paper, to the appendix, to decrease notation baggage. Most of the main text can be understood without examining these details.\n12.\tWe rewrote figure captions to be self-contained.\n13.\tWe fixed various ICLR style guideline issues.\n14.\tWe turned off colored links.\n15.\tWe fixed various typos and grammatical mistakes.\n\n",
 The authors claim there is a tradeoff">
"> The authors claim there is a tradeoff between expressivity and exploding gradients. This seems quite speculative since it is not clear to me what effect either of these things will have on training. For one, how expressive does a model need to be to correctly classify MNIST?\n\nWe want to first make the following clarification: We are only claiming there is an effect on relative performance, i.e. we can say that one initialization achieves weakly better results (in particular, weakly better learning curves) than another initialization. We are NOT saying that that by initializing a certain way, you can solve MNIST or imagenet. We admit that we have not been sufficiently clear in the paper, and have stressed this point from the get-go in the updated version.\n\nGradient explosion/vanishing is one of the most famous obstacles to training deep neural networks; see Bengio et al. (1994) and Pascanu et al. (2013), for example. The former noted that much of the difficulty of training RNNs arise from such gradient problems. In fact, in that paper already, the notion of expressivity vs trainability has arised: it is easy for an RNN to suffer from gradient explosion/vanishing problems when it tries to learn long time dependencies (striving to be expressive).\n\nThe form of the claim specific to our case originates in Yang and Schoenholz (2017). There the authors made the observation that the optimal initialization scheme for tanh resnets makes an optimal tradeoff between expressivity and trainability: if the initialization variances are too big, then the random network will suffer from gradient explosion with high probability; if they are too small, then the random network will be approximately constant (i.e. has low metric expressivity) with high probability. Metric expressivity of a random network is the expectation of ||f(x) - f(x’)||^2, where f is the random net and x and x’ are two different input vectors. It measures how much the network expands the input space, on average. Intuitively, a larger metric expressivity means that it is easier to tell apart two vectors from their neural network embeddings via a linear separator.\nThis claim is strongly corroborated by their experiments with tanh and ReLU resnets.\n\nIn our paper, we see this tradeoff determining the outcome of experiments in all but one case (ReLU resnet in the zag phase). We discuss this tradeoff at length in our revised paper, but we provide a summary below in case the reviewer does not have time to look at it.\n\nWe confirm this behavior in tanh resnets when decaying their initialization variances with depth: When there is no decay, gradient explosion bottlenecks the test set accuracy after training; when we impose strong decay, gradient dynamics is mollified but then metric expressivity (essentially the average distance between the images of two different input vectors), being strongly constrained, caps the performance.\nIndeed, we can predict test set accuracy by level curves of the magnitude of gradient explosion in the region of small variance decay, while we can do the same with level curves of metric expressivity when in the region of large decay. The performance peaks at the intersection of these two regions. Please see our experimental section in VV for more details.\n\nWith ReLU resnets, there are two phases of behavior when we apply VV. In one (the zig phase), we start applying variance decay to some parameters (w and b).
We see what is very similar to Yang and Schoenholz's observation, that decaying the variance prevents training failure from numerical overflow, but decaying it further reduces test time accuracy by reducing metric expressivity. This is consistent with the tradeoff: Our ReLU resnets in this zig phase have fairly tame gradient explosion (polynomial with low degree) while the metric expressivity is growing superpolynomially with depth, so the latter naturally dominates the effect on performance. \n\nIn the other (zag) phase, which continues from the zig phase, we start decaying variances of other parameters. Here we observe a seeming counterexample to this tradeoff: weight gradient explosion worsens and expressivity decreases but the test set accuracy increases! In this phase, both metric expressivity and gradient explosion have polynomial dynamics with low degrees. So plausibly, a new factor begins to dominate the effect on performance that we do not know about yet.\n",
"We appreciate you answering the emergency call to review our paper.\nOur responses are as follows.\n\n> Clarity issues:\n> - the authors appear to have ignored the ICLR style guidelines\nIn the new version, we have done the following:\nAbstract merged into 1 paragraph.\nChanged table title to be lower case except first word and pronoun.\nWe have put parentheses around tail citations.\nPlease let us know if you found more violations of the style guideline.\n\n> - the references are all written in green, making them difficult to read\nWe thought that they actually improve readability, but based on your suggestion we have turned off colored links.\n\n> - figures are either missing color maps or make poor choice of colors\nThank you for pointing this out. We have added color bars and improved color choices, especially in the heatmaps and their contour overlays.\n\n> - the figure captions are difficult to understand in isolation from the main text\nIn response to your feedback, we have made figure captions much more self-contained.\n\n> - the authors themselves appear to muddle their 'zigs' and 'zags' (first line of discussion)\nThanks for pointing out this error. It has been fixed.\n\n> Now to get to the actual content of the paper. The authors do not properly place their work in context. Mean field theory has been studied in the context of neural networks at least since the 80's. Entire books have been written on the statistical mechanics of neural networks. It seems wrong that the authors only cite papers on this matter going back to 2016.\n\nWe apologize for this omission. In the new version, a significant chunk of the introduction is used for surveying previous works on mean field theory of neural networks.\n\n\n",
"We respond to your comments as follows.\n\n> As with the earlier papers in this recent program, the paper is notation-heavy but generally written well, though there is some overreliance on the readers' knowledge of previous work, for instance in presenting the evidence as above. \n\nThank you for your kind review. We agree that this overreliance has lead to poor presentation of our results. We have significantly rewritten our main text, devoting much space to summarizing the previous work and context, while toning down the heaviness of notation and technicality in favor of more intuitive discussion. See the changelog for a full list of changesl\n\n> Try as I might, I cannot find a detailed explanation of the color scale for the important Fig. 4. \nThank you for pointing this out. We have added color bars to our heatmaps.\n\n> A small notation issue: the current Hebrew letter for the gradient quantity does not go with the other Greek letters and is typographically poor choice because of underlining, etc.). \nWe have changed the Hebrew daleth to the Greek letter Chi, and bolded all mean field quantities to make them more readable. We have also compiled a symbol glossary to ameliorate the notation heaviness of our paper.\n\n> Also, several of the citations should be fixed to reflect peer-reviewed publication of Arxiv papers.\nThank you for pointing out the error. We have updated the citations accordingly.\n\n> I was not able to review all the proofs, but what I checked was sound. \n\n> Finally, the techniques of WV and VV would be more applicable if it were not for the very tenuous relationship between gradient explosion and performance, which should be mentioned more than the one time it appears in the paper.\n\nIt is true that, as Yang and Schoenholz observed in their NIPS 2017 paper, ReLU resnets are not bottlenecked by trainability but rather by (metric) expressivity. This is what we find in the zig phase of ReLU resnet VV, where metric expressivity predicts performance. However, VV does indeed decrease the activation explosion of ReLU resnets to prevent forward computation from overflowing.\n\nIn the updated version of our paper, we have included our experiments on applying VV to tanh resnets, and there variance decay does improve performance by reducing gradient explosion. This is apparent in our figure 3 (in the new version), which shows that the optimal variance decay is larger for larger depth L. Again, this is expected based on Yang and Schoenholz's observation that tanh resnets are bottlenecked by trainability when variances are too large.\n\nLet us know if you are satisfied with our responses.",
"We have revamped the presentation of the paper, improving its presentation and addressing your concerns in readability. We hope you can give it another read.",
"\n> And are exploding gradients necessarily a bad thing? \n\nThis is a great question. “Conventional wisdom” (starting from Bengio et al. (1994)) posits that they are always bad for training a deep net, and Pascanu et al. hypothesized that the reason is the ill-conditioning of the Hessian.\n\n\nIn the updated version of our paper, we show this hypothesis is true if we replace “Hessian” with “Fisher information matrix” (which is the Hessian for KL divergence). See our new section 2 for details. Thus we do expect concrete optimization obstacles when there is gradient explosion/vanishing.\n\n\nIn the context of random networks, this is supported experimentally by recent works by Schoenholz et al. (2017) and Yang and Schoenholz (2017), where optimal initializations are those that avoid gradient explosion (without losing too much expressivity). This is also supported by our new experiments on applying VV to tanh resnets, where imposing stronger variance decay improves performance (until the point where metric expressivity drops too much).\n\n\nBut our ReLU experiments also show that mysteriously, in the zag regime of VV for ReLU resnets, larger weight gradients correlate with better performance, and we do not know how to explain it any other way. Thus your question reflects exactly one point raised by our work: are there in fact scenarios where greater gradient explosion can actually cause better performance? We hope to answer this in the future.\n\n\n> Provided they do not reach infinity, can we not just choose a smaller learning rate?\n\n\nIn fact a “smaller learning rate” was essentially what Pascanu et al. proposed --- gradient clipping --- and remains one of the most popular ways to deal with gradient explosion when they occur. However, as discussed in our new section 2, gradient explosion causes optimization difficulties in the way of ill-conditioned Fisher information. In the case when we are actually minimizing the KL divergence so that Fisher information is in fact its Hessian, this ill-conditioning presents an obstruction to first order optimization methods, regardless of learning rate. Please see our text for details. We want to stress that gradient explosion is not simply a matter of gradient magnitude too big, but rather an issue where the first few layers of a deep network gets \"more error signals\" in the form of gradients than the last few. Multiplying every gradient term by the same learning rate does not change this circumstance. This \"information propagation\" perspective is in fact the theme of Schoenholz et al. (2017).\n\nWe do agree however that more research is needed to decipher the cross effect of learning rate and initialization. Work is currently underway."
]
} | {
"paperhash": [
"david|a_learning_algorithm_for_boltzmann_machines*",
"amari|natural_gradient_works_efficiently_in_learning",
"shun|information_geometry_and_its_applications",
"bertschinger|at_the_edge_of_chaos:_real-time_computations_and_self-organized_criticality_in_recurrent_neural_networks",
"choromanska|open_problem:_the_landscape_of_the_loss_surfaces_of_multilayer_networks",
"daniely|toward_deeper_understanding_of_neural_networks:_the_power_of_initialization_and_a_dual_view_on_expressivity",
"desjardins|xavier_glorot_and_yoshua_bengio._understanding_the_difficulty_of_training_deep_feedforward_neural_networks",
"grosse|a_kronecker-factored_approximate_fisher_matrix_for_convolution_layers",
"he|deep_residual_learning_for_image_recognition",
"he|identity_mappings_in_deep_residual_networks",
"lillicrap|random_synaptic_feedback_weights_support_error_backpropagation_for_deep_learning",
"martens|optimizing_neural_networks_with_kronecker-factored_approximate_curvature",
"montfar|on_the_number_of_linear_regions_of_deep_neural_networks",
"radford|bayesian_learning_for_neural_networks",
"iu|introductory_lectures_on_convex_optimization:_a_basic_course",
"pascanu|revisiting_natural_gradient_for_deep_networks",
"pennington|geometry_of_neural_network_loss_surfaces_via_random_matrix_theory",
"pennington|nonlinear_random_matrix_theory_for_deep_learning",
"guyon|advances_in_neural_information_processing_systems_30",
"pennington|resurrecting_the_sigmoid_in_deep_learning_through_dynamical_isometry:_theory_and_practice",
"luxburg|exponential_expressivity_in_deep_neural_networks_through_transient_chaos",
"saul|mean_field_theory_for_sigmoid_belief_networks",
"samuel|deep_information_propagation",
"sompolinsky|chaos_in_random_neural_networks",
"wu|scalable_trust-region_method_for_deep_reinforcement_learning_using_kronecker-factored_approximation",
"yang|meanfield_residual_network:_on_the_edge_of_chaos"
],
"title": [
"A Learning Algorithm for Boltzmann Machines*",
"Natural Gradient Works Efficiently in Learning",
"Information geometry and its applications",
"At the Edge of Chaos: Real-time Computations and Self-Organized Criticality in Recurrent Neural Networks",
"Open Problem: The landscape of the loss surfaces of multilayer networks",
"Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity",
"Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks",
"A Kronecker-factored approximate Fisher matrix for convolution layers",
"Deep Residual Learning for Image Recognition",
"Identity mappings in deep residual networks",
"Random synaptic feedback weights support error backpropagation for deep learning",
"Optimizing Neural Networks with Kronecker-factored Approximate Curvature",
"On the Number of Linear Regions of Deep Neural Networks",
"Bayesian learning for neural networks",
"Introductory lectures on convex optimization: a basic course",
"Revisiting Natural Gradient for Deep Networks",
"Geometry of Neural Network Loss Surfaces via Random Matrix Theory",
"Nonlinear random matrix theory for deep learning",
"Advances in Neural Information Processing Systems 30",
"Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice",
"Exponential expressivity in deep neural networks through transient chaos",
"Mean field theory for sigmoid belief networks",
"Deep Information Propagation",
"Chaos in Random Neural Networks",
"Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation",
"Meanfield Residual Network: On the Edge of Chaos"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"h david",
"geoffrey e ackley",
"terrence j hinton",
" sejnowski"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"shun-ichi amari"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"' shun",
" amari"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nils bertschinger",
"thomas natschlger",
"robert a legenstein",
"lawrence k saul"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"anna choromanska",
"yann lecun",
"grard ben",
"arous "
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"amit daniely",
"roy frostig",
"yoram singer",
"; m sugiyama",
"u v luxburg",
"i guyon",
"r garnett"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"guillaume desjardins",
"karen simonyan",
"razvan pascanu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"roger grosse",
"james martens"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"timothy p lillicrap",
"daniel cownden",
"douglas b tweed",
"colin j akerman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"james martens",
"roger grosse"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"guido montfar",
"razvan pascanu",
"kyunghyun cho",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"m radford",
" neal"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" e iu",
" nesterov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"razvan pascanu",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jeffrey pennington",
"yasaman bahri"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jeffrey pennington",
"pratik worah"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"in i guyon",
"u v luxburg",
"s bengio",
"h wallach",
"r fergus",
"s vishwanathan",
"r garnett"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jeffrey pennington",
"samuel schoenholz",
"surya ganguli"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"v luxburg",
"s bengio",
"h wallach",
"r fergus",
"s vishwanathan",
"r "
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tommi lawrence k saul",
"michael i jaakkola",
" jordan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"s samuel",
"justin schoenholz",
"surya gilmer",
"jascha ganguli",
" sohl-dickstein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"h sompolinsky",
"a crisanti",
"h j sommers"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yuhuai wu",
"elman mansimov",
"shun liao",
"roger grosse",
"jimmy ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"greg yang",
"samuel s schoenholz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"1602.05897v2",
"",
"",
"1512.03385v1",
"1603.05027v3",
"",
"arXiv:1503.05671",
"1402.1869v2",
"",
"",
"1301.3584v7",
"",
"",
"",
"1711.04735v1",
"1606.05340v2",
"cs/9603102v1",
"1611.01232v2",
"",
"arXiv:1708.05144",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.518519 | 0.333333 | null | null | null | null | null | rJGY8GbR- |
||
wang|model_distillation_with_knowledge_transfer_from_face_classification_to_alignment_and_verification|ICLR_cc_2018_Conference | 1709.02929v2 | Model Distillation with Knowledge Transfer from Face Classification to Alignment and Verification | Knowledge distillation is a potential solution for model compression. The idea is to make a small student network imitate the target of a large teacher network, then the student network can be competitive to the teacher one. Most previous studies focus on model distillation in the classification task, where they propose different architectures and initializations for the student network. However, only the classification task is not enough, and other related tasks such as regression and retrieval are barely considered. To solve the problem, in this paper, we take face recognition as a breaking point and propose model distillation with knowledge transfer from face classification to alignment and verification. By selecting appropriate initializations and targets in the knowledge transfer, the distillation can be easier in non-classification tasks. Experiments on the CelebA and CASIA-WebFace datasets demonstrate that the student network can be competitive to the teacher one in alignment and verification, and even surpasses the teacher network under specific compression rates. In addition, to achieve stronger knowledge transfer, we also use a common initialization trick to improve the distillation performance of classification. Evaluations on the CASIA-Webface and large-scale MS-Celeb-1M datasets show the effectiveness of this simple trick. | {
"name": [],
"affiliation": []
} | null | [
"Computer Science"
] | 2017-09-09 | 31 | null | null | null | null | null | null | null | null | false | The authors propose a distillation-based approach that is applied to transfer knowledge from a classification network to non-classification tasks (face alignment and verification). The writing is very imprecise - for instance repeatedly referring to a 'simple trick' rather than actually defining the procedure - and the method is described in very task-specific ways that make it hard to understand how or whether it would generalize to other problems. | {
"review_id": [
"rJR-8EAgG",
"SkX-5ijlG",
"B1736j_gz"
],
"review": [
{
"title": "title: manuscript needs significant revisions",
"paper_summary": null,
"main_review": "main_review: Summary:\nThe manuscript presents experiments on distilling knowledge from a face classification model to student models for face alignment and verification. By selecting a good initialization strategy and guidelines for selecting appropriate targets for non-classification tasks, the authors achieve improved performance, compared to networks trained from scratch or with different initialization strategies.\n\nReview:\nThe paper seems to be written in a rush. \nI am not sure about the degree of novelty, as pretraining with domain-related data instead of general-purpose ImageNet data has been done before, Liu et al. (2014), for example pretrain a CNN on face classification to be used for emotion recognition. Admitted, knowledge transfer from classification to regression and retrieval tasks is not very common yet, except via pretraining on ImageNet, followed by fine-tuning on the target task.\nMy main concern is with the presentation of the paper. It is very hard to follow! Two reasons are that it has too many grammatical mistakes and that very often a “simple trick” or a “common trick” is mentioned instead of using a descriptive name for the method used.\n\nHere are a few points that might help improving the work:\n1) Many kind of empty phrases are repeated all over the paper, e.g. the reader is teased with mention of a “simple trick” or a “common trick”. I don’t think the phrase “breaking point”, that is repeated a couple of times, is correctly used (see https://www.merriam-webster.com/dictionary/breaking%20point for a defininition).\n2) Section 4.1 does not explain the initialization but just describes motivation and notation.\n3) Clarity of the approach: Using the case of alignment as an example, do you first pretrain both the teacher and student on classification, then finetune the teacher on alignment before the distillation step? \n4) Table 1 mentions Fitnets, but cites Ba & Caruana (2014) instead of Romero et al. (2015)\n5) The “experimental trick” you mention for setting alpha and beta, seems to be just validation, comparing different settings and picking the one yielding the highest improvements. On what partition of the data are you doing this hyperparameter selection?\n6) The details of the architectures are missing, e.g. exactly what changes do you make to the architecture, when you change the task from classification to alignment or verification? What exactly is the “hidden layer” in that architecture?\n7) Minor: Usually there is a space before parentheses (many citations don’t have one)\n\nIn its current form, I cannot recommend the manuscript for acceptance. I get the impression that the experimental work might be of decent quality, but the manuscript fails to convey important details of the method, of the experimental setup and in the interpretation of the results. The overall quality of the write-up has to be significantly improved.\n\nReferences:\nLiu, Mengyi, Ruiping Wang, Shaoxin Li, Shiguang Shan, Zhiwu Huang, and Xilin Chen. \"Combining multiple kernel methods on riemannian manifold for emotion recognition in the wild.\" In Proceedings of the 16th International Conference on Multimodal Interaction, pp. 494-501. ACM, 2014.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: This paper proposed to transfer the classifier from the model for face classification to the task of alignment and verification. The problem setting is interesting and valuable, however, the contribution is not clearly demonstrated. ",
"paper_summary": null,
"main_review": "main_review: This paper proposed to transfer the classifier from the model for face classification to the task of alignment and verification. The problem setting is interesting and valuable, however, the contribution is not clearly demonstrated. \n\nSpecifically, it proposed to utilize the teacher model from classification to other tasks, and proposed a unified objective function to model the transferability as shown in Equation (5). The two terms in (5), (7) and (9) are used to transfer the knowledge from the teacher model. It maybe possible to claim that the different terms may play different roles for different tasks. However, there should be some general guidelines for choosing these different terms for regularization, rather than just make the claim purely based on the final results. In table 4 and table 5, the results seem to be not so consistent for using the distillation loss. The author mentioned that it is due to the weak teacher model. However, the teacher model just differs in performance with around 3% in accuracy. How could we define the “good” or “bad” of a teacher model for model distillation/transfer?\n\nBesides, it seems that the improvement comes largely from the trick of initialization as mentioned in Section 3.2. Hence, it is still not clear which parts contribute to the final performance improvements. It could be better if the authors can report the results from each of the components together. \n\n The authors just try the parameter (\\alpha, \\beta) to be (0,0), (1,0), (0,1) and (1,1). I think the range for both values could be any positive real value, and how about the performance for other sets of combinations, like (0.5, 0.5)?",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Limited scope and contribution",
"paper_summary": null,
"main_review": "main_review: The paper proposes knowledge distillation on two very specific non-classification tasks. I find the scope of the paper is quite limited and the approach seems hard to generalize to other tasks. There is also very limited technical contribution. I think the paper might be a better fit in conferences on faces such as FG.\n\nPros:\n1. The application of knowledge distillation in face alignment is interesting. \n\nCons:\n1. The writing of the paper can be significantly improved. The technical description is unclear.\n2. The method has two parameters \\alpha and \\beta, and Section 4.2.3. mentions the key is to measure the relevance of tasks. It seems to me defining the relevance between tasks is quite empirical and often confusing. How are they actually selected in the experiments? Sometimes alpha=0, beta=0 works the best which means the added terms are useless?\n3. The paper works on a very limited scope of face alignment. How does the proposed method generalize to other tasks?",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.2222222238779068,
0.4444444477558136,
0.2222222238779068
],
"confidence": [
0.75,
1,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [],
"comment": []
} | {
"paperhash": [
"rastegari|xnor-net:_imagenet_classification_using_binary_convolutional_neural_networks",
"he|deep_residual_learning_for_image_recognition",
"szegedy|rethinking_the_inception_architecture_for_computer_vision",
"han|deep_compression:_compressing_deep_neural_network_with_pruning,_trained_quantization_and_huffman_coding",
"han|learning_both_weights_and_connections_for_efficient_neural_network",
"ren|faster_r-cnn:_towards_real-time_object_detection_with_region_proposal_networks",
"schroff|facenet:_a_unified_embedding_for_face_recognition_and_clustering",
"hinton|distilling_the_knowledge_in_a_neural_network",
"ioffe|batch_normalization:_accelerating_deep_network_training_by_reducing_internal_covariate_shift",
"romero|fitnets:_hints_for_thin_deep_nets",
"gong|compressing_deep_convolutional_networks_using_vector_quantization",
"liu|deep_learning_face_attributes_in_the_wild",
"szegedy|going_deeper_with_convolutions",
"simonyan|very_deep_convolutional_networks_for_large-scale_image_recognition",
"denton|exploiting_linear_structure_within_convolutional_networks_for_efficient_evaluation",
"ba|do_deep_nets_really_need_to_be_deep?",
"guo|ms-celeb-1m:_a_dataset_and_benchmark_for_large-scale_face_recognition",
"luo|face_model_compression_by_distilling_knowledge_from_neurons"
],
"title": [
"",
"Deep Residual Learning for Image Recognition",
"Rethinking the Inception Architecture for Computer Vision",
"DEEP COMPRESSION: COMPRESSING DEEP NEURAL NETWORKS WITH PRUNING, TRAINED QUANTIZATION AND HUFFMAN CODING",
"Learning both Weights and Connections for Efficient Neural Networks",
"Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks",
"FaceNet: A Unified Embedding for Face Recognition and Clustering",
"Distilling the Knowledge in a Neural Network",
"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"FITNETS: HINTS FOR THIN DEEP NETS",
"Under review as a conference paper at ICLR 2015 COMPRESSING DEEP CONVOLUTIONAL NETWORKS USING VECTOR QUANTIZATION",
"Deep Learning Face Attributes in the Wild *",
"Going deeper with convolutions",
"VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION",
"Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation",
"Do Deep Nets Really Need to be Deep? *** Draft for NIPS 2014 (not camera ready copy) ***",
"MS-Celeb-1M: A Dataset and Benchmark for Large-Scale Face Recognition",
"Face Model Compression by Distilling Knowledge from Neurons"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"mohammad rastegari",
"vicente ordonez",
"joseph redmon",
"ali farhadi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
}
]
},
{
"name": [
"christian szegedy",
"zbigniew wojna"
],
"affiliation": [
{
"laboratory": "",
"institution": "University College London",
"location": "{}"
},
{
"laboratory": "",
"institution": "University College London",
"location": "{}"
}
]
},
{
"name": [
"song han",
"huizi mao",
"william j dally",
"train connectivity"
],
"affiliation": [
{
"laboratory": "",
"institution": "Stanford University",
"location": "{'postCode': '94305', 'settlement': 'Stanford', 'region': 'CA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Tsinghua University",
"location": "{'postCode': '100084', 'settlement': 'Beijing', 'country': 'China'}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{'postCode': '94305', 'settlement': 'Stanford', 'region': 'CA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "NVIDIA",
"location": "{'postCode': '95050', 'settlement': 'Santa Clara', 'region': 'CA', 'country': 'USA'}"
}
]
},
{
"name": [
"song han",
"jeff pool",
"john tran",
"william j dally"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"shaoqing ren",
"kaiming he",
"ross girshick",
"jian sun",
"• r girshick"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"florian schroff",
"inc google",
" dmitry kalenichenko",
"james philbin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"geoffrey hinton",
"oriol vinyals",
"jeff dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sergey ioffe"
],
"affiliation": [
{
"laboratory": "",
"institution": "Christian Szegedy Google Inc",
"location": "{}"
}
]
},
{
"name": [
"adriana romero",
"nicolas ballas",
"samira ebrahimi kahou",
"antoine chassang",
"carlo gatta",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": "",
"institution": "Universitat de Barcelona",
"location": "{'settlement': 'Barcelona', 'country': 'Spain'}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{'settlement': 'Montréal', 'region': 'Québec', 'country': 'Canada'}"
},
{
"laboratory": "",
"institution": "École Polytechnique de Montréal",
"location": "{'settlement': 'Montréal', 'region': 'Québec', 'country': 'Canada'}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{'settlement': 'Montréal', 'region': 'Québec', 'country': 'Canada'}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{'settlement': 'Montréal', 'region': 'Québec', 'country': 'Canada'}"
}
]
},
{
"name": [
"yunchao gong",
"liu liu",
"ming yang",
"lubomir bourdev"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ziwei liu",
"ping luo",
"xiaogang wang",
"xiaoou tang",
"hong kong"
],
"affiliation": [
{
"laboratory": "",
"institution": "The Chinese University",
"location": "{}"
},
{
"laboratory": "",
"institution": "The Chinese University",
"location": "{}"
},
{
"laboratory": "",
"institution": "The Chinese University of Hong Kong",
"location": "{}"
},
{
"laboratory": "",
"institution": "The Chinese University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"christian szegedy",
"wei liu",
"inc google",
"scott reed",
"anguelov dragomir",
"vincent vanhoucke",
"andrew rabinovich"
],
"affiliation": [
{
"laboratory": "",
"institution": "Google Inc",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of North Carolina",
"location": "{'settlement': 'Chapel Hill Yangqing Jia'}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Google Inc",
"location": "{}"
},
{
"laboratory": "",
"institution": "Google Inc",
"location": "{}"
}
]
},
{
"name": [
"karen simonyan",
"andrew zisserman"
],
"affiliation": [
{
"laboratory": "Visual Geometry Group",
"institution": "University of Oxford",
"location": "{}"
},
{
"laboratory": "Visual Geometry Group",
"institution": "University of Oxford",
"location": "{}"
}
]
},
{
"name": [
"remi denton",
"wojciech zaremba",
"joan bruna",
"yann lecun",
"rob fergus"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
}
]
},
{
"name": [
"jimmy lei",
" ba",
"rich caruana"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
}
]
},
{
"name": [
"yandong guo",
"lei zhang",
"yuxiao hu",
"xiaodong he",
"jianfeng gao"
],
"affiliation": [
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{'settlement': 'Redmond', 'region': 'WA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{'settlement': 'Redmond', 'region': 'WA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{'settlement': 'Redmond', 'region': 'WA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{'settlement': 'Redmond', 'region': 'WA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{'settlement': 'Redmond', 'region': 'WA', 'country': 'USA'}"
}
]
},
{
"name": [
"ping luo",
"zhenyao zhu",
"ziwei liu",
"xiaogang wang",
"xiaoou tang",
"hong kong"
],
"affiliation": [
{
"laboratory": "",
"institution": "The Chinese University",
"location": "{}"
},
{
"laboratory": "",
"institution": "The Chinese University",
"location": "{}"
},
{
"laboratory": "",
"institution": "The Chinese University",
"location": "{}"
},
{
"laboratory": "",
"institution": "The Chinese University of Hong",
"location": "{'settlement': 'Kong'}"
},
{
"laboratory": "",
"institution": "The Chinese University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 89 | null | 0.296296 | 0.833333 | null | null | null | null | null | rJFOptp6Z |
||
sedoc|neural_tree_transducers_for_tree_to_tree_learning|ICLR_cc_2018_Conference | Neural Tree Transducers for Tree to Tree Learning | We introduce a novel approach to tree-to-tree learning, the neural tree transducer (NTT), a top-down depth first context-sensitive tree decoder, which is paired with recursive neural encoders. Our method works purely on tree-to-tree manipulations rather than sequence-to-tree or tree-to-sequence and is able to encode and decode multiple depth trees. We compare our method to sequence-to-sequence models applied to serializations of the trees and show that our method outperforms previous methods for tree-to-tree transduction. | {
"name": [],
"affiliation": []
} | null | [
"deep learning",
"tree transduction"
] | null | 2018-02-15 22:29:20 | 33 | null | null | null | null | null | null | false | The proposed neural tree transduction framework is basically a combination of tree encoding and tree decoding. The tree encoding component is simply reused from previous work (TreeLSTM), whereas the decoding component is somewhat different from the previous work. The key problems (acknowledged also by at least 2 reviewers):
Pros:
-- generating trees is an under-explored direction (note that it is more general than parsing, as nodes may not directly correspond to input symbols)
Cons:
-- no comparison with previous tree-decoding work
-- only artificial experiments
-- the paper is hard to read (confusing) / mathematical notation and terminology is confusing and seems sometimes inaccurate (see R3)
| {
"review_id": [
"B1ueBCKeM",
"B1ISgaRez",
"B1BFRS7ZM"
],
"review": [
{
"title": "title: Very limited contribution",
"paper_summary": null,
"main_review": "main_review: The paper introduces a neural tree decoder architecture for binary trees that conditions the next node prediction on \nrepresentations of its ascendants (encoded with an LSTM recurrent net) and left sibling subtree (encoded with a binary LSTM recursive net) for right sibling nodes. \nTo perform tree to tree transduction the input tree is encoded as a vector with a Tree LSTM; correspondences between input and output subtrees are not modelled directly (using e.g. attention) as is done in traditional tree transducers. \nWhile the term context-sensitive should be used with caution, I do accept the claim here, although the notation used does not make the exposition clear. \nExperimental results show that the architecture performs better at synthetic tree transduction tasks (relabeling, reordering, deletion) than sequence-to-sequence baselines. \n\nWhile neural approches to tree-to-tree transduction is an understudied problem, the contributions of this paper are very narrow and it is not shown that the proposed approach will generalize to more expressive models or real-world applications of tree-to-tree transduction. \nExisting neural tree decoders, such as Dong and Lapata or Alvarex-Melis and Jaakkola, could be combined with tree LSTM encoders without any technical innovations and could possibly do as well as the proposed model for the transduction tasks tested - no experiments are performed with existing tree-based decoder architectures. \n\nSpecific comments per section:\n\n1. Unclear what is meant be \"equivalent\" in first paragraph. \n2. The model does not assign an explicit probability to the tree structure - rather it seems to rely on the distinction between terminal and non-terimal symbols and the restriction to binary trees to know when closing brackets are implied - this is not made clear, and a general model should not have this restriction, as there are many cases where we want to generate non-binary trees.\nThe production rule notation used is incorrect and confusing, mixing sets with non-terminals and terminal symbols: \nA better notation for the rules in 2.1.1 would be something like S -> P | v | \\epsilon; P -> Q R | Q u | u Q | u w, where P, Q, R \\in O and u, w \\in v.\n2.1.2. Splitting production rules as ->_left, ->_right is not standard notation. Rather introduce intermediate non-terminals in the grammar:\nO -> O_L O_R; O_L -> a | Q, O_R -> b | Q. \n2.1.3 The context-sensitively here arise when conditioning on the entire left sibling subtree (not just the top non-terimal).\nThe rules should have a format such as O -> O_L O_R; O_L -> a | Q; \\alpha O_R -> \\alpha a | \\alpha Q, where \\alpha is an entire subtree rooted at O_L.\n2.1.4 Should be g(x|.) = exp( ), the softmax function includes the normalization which is done in the equation below. \n\n3. Note that is is possible to restrict the decoder to produce tree structures while keeping a sequential neural architecture. For some tasks sequential decoders do actually produce mostly well-formed trees, given enough training data. \nRNNG encodes completed subtrees recursively, and the stack LSTM encodes the entire partially-produced tree, so it does produce and condition on trees not just sequences. The model in this paper is not more expressive than RNNG, it just encodes somewhat different structural biases, which might or might not be suited for real tasks. \n\n4. In the examples given, the same set of symbols are used as both terminals and non-terminals. 
How is the tree structure then predicted by the decoder?\nDetails about the training setup are missing: How is the training data generated, what is the size of the trees during training (compared to testing)?\n4.2 The steep drop in performance between depth 5 and 6 indicates that model is very sensitive to its memorization capacity and might not be generalizing over the given training data.\nFor real tree-to-tree applications involving these operations, there is good reason to believe that some kind of attention mechanism will be needed over the input tree during decoding. \n\nReference should generally be to published proceedings rather than to arxiv where available - e.g. Aharoni and Goldberg, Dong and Lapata, Erguchi et al, Rush et al. For Graehl and Knight there is a published journal paper in Computational Linguistics.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Would benefit from a better evaluation setup.",
"paper_summary": null,
"main_review": "main_review: The authors propose to tackle the tree transduction learning problem using recursive NN architectures: the prediction of a node label is conditioned on the ancestors sequence and the nodes in the left sibling subtree (in a serialized order)\nPros:\n- they identify the issue of locality as important (sequential serialization distorts locality) and they move the architecture closer to the tree structure of the problem\n- the architecture proposed moves the bar forward in the tree processing field\nCons: \n- there is still a serialization step (depth first) that can potentially create sharp dips to null probabilities for marginal changes in the conditioning sequence (the issue is not addressed or commented by the authors) \n- the experimental setup lacks a perturbation test: rather than a copy task, it would be of greater interest to assess the capacity to recover from noise in the labels (as the noise magnitude increases)\n- a clearer and more articulated comparison of the pros/cons w.r.t. competitive architectures would improve the quality of the work: what are the properties (depth, vocabulary size, complexity of the underlying generative process, etc) that are best dealt with by the proposed approach? \n- it is not clear if the is the vocabulary size in their model needs to increase exponentially with the tree depth: a crucial vocabulary size vs performance experiment is missing\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: There may be some interesting ideas here, but I think in many places the mathematical description is very confusing and/or flawed.",
"paper_summary": null,
"main_review": "main_review: There may be some interesting ideas here, but I think in many places the mathematical\ndescription is very confusing and/or flawed. To give some examples:\n\n* Just before section 2.1.1, P(T) = \\prod_{p \\in Path(T)} ... : it's not clear \nat all clear that this defines a valid distribution over trees. There is an\nimplicit order over the paths in Path(T) that is simply not defined (otherwise\nhow for x^p could we decide which symbols x^1 ... x^{p-1} to condition\nupon?)\n\n* \"We can write S -> O | v | \\epsilon...\" with S, O and v defined as sets.\nThis is certainly non-standard notation, more explanation is needed.\n\n* \"The observation is generated by the sequence of left most \nproduction rules\". This appears to be related to the idea of left-most\nderivations in context-free grammars. But no discussion is given, and\nthe writing is again vague/imprecise.\n\n* \"Although the above grammar is not, in general, context free\" - I'm not\nsure what is being referred to here. Are the authors referring to the underlying grammar,\nor the lack of independence assumptions in the model? The grammar\nis clearly context-free; the lack of independence assumptions is a separate\nissue.\n\n* \"In a probabilistic context-free grammar (PCFG), all production rules are\nindependent\": this is not an accurate statement, it's not clear what is meant\nby production rules being independent. More accurate would be to say that\nthe choice of rule is conditionally independent of all other information \nearlier in the derivation, once the non-terminal being expanded is\nconditioned upon.\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.2222222238779068,
0.6666666865348816,
0.1111111119389534
],
"confidence": [
0.75,
0.75,
1
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [],
"comment": []
} | {
"paperhash": [
"aharoni|towards_string-to-tree_neural_machine_translation",
"alur|streaming_tree_transducers._automata,_languages,_and_programming",
"alvarez|tree-structured_decoding_with_doubly_recurrent_neural_networks",
"bahdanau|neural_machine_translation_by_jointly_learning_to_align_and_translate",
"bille|a_survey_on_tree_edit_distance_and_related_problems",
"bowman|tree-structured_composition_in_neural_networks_without_tree-structured_architectures",
"chen|improved_neural_machine_translation_with_a_syntax-aware_encoder_and_decoder",
"comon|tree_automata_techniques_and_applications",
"brooke|a_tree-to-tree_model_for_statistical_machine_translation",
"dong|language_to_logical_form_with_neural_attention",
"dyer|recurrent_neural_network_grammars",
"engelfriet|bottom-up_and_top-down_tree_transformations-a_comparison",
"eriguchi|character-based_decoding_in_tree-to-sequence_attention-based_neural_machine_translation",
"eriguchi|tree-to-sequence_attentional_neural_machine_translation",
"frasconi|a_general_framework_for_adaptive_processing_of_data_structures",
"felix|lstm_recurrent_networks_learn_simple_context-free_and_context-sensitive_languages",
"graehl|training_tree_transducers",
"grefenstette|learning_to_transduce_with_unbounded_memory",
"henry|critical_behavior_from_deep_dynamics:_a_hidden_dimension_in_natural_language",
"linzen|assessing_the_ability_of_lstms_to_learn_syntax-sensitive_dependencies",
"luong|effective_approaches_to_attention-based_neural_machine_translation",
"munkhdalai|neural_tree_indexers_for_text_understanding",
"razmara|application_of_tree_transducers_in_statistical_machine_translation",
"rush|a_neural_attention_model_for_abstractive_sentence_summarization",
"hava|on_the_computational_power_of_neural_nets",
"socher|parsing_natural_scenes_and_natural_language_with_recursive_neural_networks",
"sukhbaatar|end-to-end_memory_networks",
"sutskever|sequence_to_sequence_learning_with_neural_networks",
"tai|the_tree-to-tree_correction_problem",
"vinyals|grammar_as_a_foreign_language",
"vinyals|show_and_tell:_a_neural_image_caption_generator",
"wang|chinese_syntactic_reordering_for_statistical_machine_translation",
"zhang|top-down_tree_long_short-term_memory_networks"
],
"title": [
"Towards string-to-tree neural machine translation",
"Streaming tree transducers. Automata, Languages, and Programming",
"Tree-structured decoding with doubly recurrent neural networks",
"Neural machine translation by jointly learning to align and translate",
"A survey on tree edit distance and related problems",
"Tree-structured composition in neural networks without tree-structured architectures",
"Improved neural machine translation with a syntax-aware encoder and decoder",
"Tree automata techniques and applications",
"A tree-to-tree model for statistical machine translation",
"Language to logical form with neural attention",
"Recurrent neural network grammars",
"Bottom-up and top-down tree transformations-a comparison",
"Character-based decoding in tree-to-sequence attention-based neural machine translation",
"Tree-to-sequence attentional neural machine translation",
"A general framework for adaptive processing of data structures",
"Lstm recurrent networks learn simple context-free and context-sensitive languages",
"Training tree transducers",
"Learning to transduce with unbounded memory",
"Critical behavior from deep dynamics: A hidden dimension in natural language",
"Assessing the ability of lstms to learn syntax-sensitive dependencies",
"Effective approaches to attention-based neural machine translation",
"Neural tree indexers for text understanding",
"Application of tree transducers in statistical machine translation",
"A neural attention model for abstractive sentence summarization",
"On the computational power of neural nets",
"Parsing natural scenes and natural language with recursive neural networks",
"End-to-end memory networks",
"Sequence to sequence learning with neural networks",
"The tree-to-tree correction problem",
"Grammar as a foreign language",
"Show and tell: A neural image caption generator",
"Chinese syntactic reordering for statistical machine translation",
"Top-down tree long Short-Term memory networks"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"roee aharoni",
"yoav goldberg"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rajeev alur",
"loris d' antoni"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david alvarez",
"-melis ",
"tommi s jaakkola"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"dzmitry bahdanau",
"kyunghyun cho",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"philip bille"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"christopher d samuel r bowman",
"christopher manning",
" potts"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"huadong chen",
"shujian huang",
"david chiang",
"jiajun chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"max hubert comon",
"rémi dauchet",
"christof gilleron",
"florent löding",
"denis jacquemard",
"sophie lugiez",
"marc tison",
" tommasi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alissa brooke",
" cowan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"li dong",
"mirella lapata"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chris dyer",
"adhiguna kuncoro",
"miguel ballesteros",
"noah a smith"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"joost engelfriet"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"akiko eriguchi",
"kazuma hashimoto",
"yoshimasa tsuruoka"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"akiko eriguchi",
"kazuma hashimoto",
"yoshimasa tsuruoka"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"paolo frasconi",
"marco gori",
"alessandro sperduti"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"a felix",
"e gers",
" schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jonathan graehl",
"kevin knight"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"edward grefenstette",
"karl moritz hermann",
"mustafa suleyman",
"phil blunsom"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"w henry",
"max lin",
" tegmark"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tal linzen",
"emmanuel dupoux",
"yoav goldberg"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"minh-thang luong",
"hieu pham",
"christopher d manning"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tsendsuren munkhdalai",
"hong yu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"majid razmara"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sumit alexander m rush",
"jason chopra",
" weston"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"t hava",
"eduardo d siegelmann",
" sontag"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"richard socher",
"cliff c lin",
"chris manning",
"andrew y ng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sainbayar sukhbaatar",
"jason weston",
"rob fergus"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ilya sutskever",
"oriol vinyals",
"quoc v le"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kuo-chung tai"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"oriol vinyals",
"łukasz kaiser",
"terry koo",
"slav petrov",
"ilya sutskever",
"geoffrey hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"oriol vinyals",
"alexander toshev",
"samy bengio",
"dumitru erhan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chao wang",
"michael collins",
"philipp koehn"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xingxing zhang",
"liang lu",
"mirella lapata"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"arXiv:1704.04743",
"",
"",
"1409.0473v7",
"",
"",
"arXiv:1707.05436",
"",
"",
"1601.01280v2",
"1602.07776v4",
"",
"",
"arXiv:1603.06075",
"",
"",
"",
"1506.02516v3",
"arXiv:1606.06737",
"arXiv:1611.01368",
"arXiv:1508.04025",
"1607.04492v2",
"",
"1509.00685v2",
"",
"",
"",
"1409.3215v3",
"arXiv:1506.05869",
"1412.7449v3",
"1411.4555v2",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.333333 | 0.833333 | null | null | null | null | null | rJBwoM-Cb |
||
lei|training_rnns_as_fast_as_cnns|ICLR_cc_2018_Conference | Training RNNs as Fast as CNNs | Common recurrent neural network architectures scale poorly due to the intrinsic difficulty in parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU) architecture, a recurrent unit that simplifies the computation and exposes more parallelism. In SRU, the majority of computation for each step is independent of the recurrence and can be easily parallelized. SRU is as fast as a convolutional layer and 5-10x faster than an optimized LSTM implementation. We study SRUs on a wide range of applications, including classification, question answering, language modeling, translation and speech recognition. Our experiments demonstrate the effectiveness of SRU and the trade-off it enables between speed and performance. | {
"name": [],
"affiliation": []
} | null | [
"recurrent neural networks",
"natural language processing"
] | null | 2018-02-15 22:29:39 | 70 | null | null | null | null | null | null | null | null | false | The paper presents Simple Recurrent Unit, which is characterised by the lack of state-to-gates connections as used in conventional recurrent architectures. This allows for efficient implementation, and leads to results competitive with the recurrent baselines, as shown on several benchmarks.
The submission lacks novelty, as the proposed method is essentially a special case of Quasi-RNN [Bradbury et al.], published at ICLR 2017. The comparison in Appendix A confirms that, as well as similar results of SRU and Quasi-RNN in Figures 4 and 5. Quasi-RNN has already been demonstrated to be amenable to efficient implementation and perform on a par with the recurrent baselines, so this submission doesn’t add much to that. | {
"review_id": [
"SyjjOZ5gM",
"HyMadv_bz",
"BJsMKkGgf"
],
"review": [
{
"title": "title: Very useful RNN cell with ok results but over-hyped presentation.",
"paper_summary": null,
"main_review": "main_review: The authors introduce SRU, the Simple Recurrent Unit that can be used as a substitute for LSTM or GRU cells in RNNs. SRU is much more parallel than the standard LSTM or GRU, so it trains much faster: almost as fast as a convolutional layer with properly optimized CUDA code. Authors perform experiments on numerous tasks showing that SRU performs on par with LSTMs, but the baselines for these tasks are a little problematic (see below).\n\nOn the positive side, the paper is very clear and well-written, the SRU is a superbly elegant architecture with a fair bit of originality in its structure, and the results show that it could be a significant contribution to the field as it can probably replace LSTMs in most cases but yield fast training. On the negative side, the authors present the results without fully referencing and acknowledging state-of-the-art. Some of this has been pointed out in the comments below already. As another example: Table 5 that presents results for English-German WMT translation only compares to OpenNMT setups with maximum BLEU about 21. But already a long time ago Wu et. al. presented LSTMs reaching 25 BLEU and current SOTA is above 28 with training time much faster than those early models (https://arxiv.org/abs/1706.03762). While the latest are non-RNN architectures, a table like Table 5 should include them too, for a fair presentation. In conclusion: the authors seem to avoid discussing the problem that current non-RNN architectures could be both faster and yield better results on some of the studied problems. That's bad presentation of related work and should be improved in the next versions (at which point this reviewer is willing to revise the score). But in all cases, this is a significant contribution to deep learning and deserves acceptance.\n\nUpdate: the revised version of the paper addresses all my concerns and the comments show new evidence of potential applications, so I'm increasing my score.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Low novelty and lacks comparison with obvious baselines",
"paper_summary": null,
"main_review": "main_review: The authors propose to drop the recurrent state-to-gates connections from RNNs to speed up the model. The recurrent connections however are core to an RNN. Without them, the RNN defaults simply to a CNN with gated incremental pooling. This results in a somewhat unfortunate naming (simple *recurrent* unit), but most importantly makes a comparison with autoregressive sequence CNNs [ Bytenet (Kalchbrenner et al 2016), Conv Seq2Seq (Dauphin et al, 2017) ] crucial in order to show that gated incremental pooling is beneficial over a simple CNN architecture baseline. \n\nIn essence, the paper shows that autoregressive CNNs with gated incremental pooling perform comparably to RNNs on a number of tasks while being faster to compute. Since it is already extensively known that autoregressive CNNs and attentional models can achieve this, the *CNN* part of the paper cannot be counted as a novel contribution. What is left is the gated incremental pooling operation; but to show that this operation is beneficial when added to autoregressive CNNs, a thorough comparison with an autoregressive CNN baseline is necessary.\n\nPros:\n- Fairly well presented\n- Wide range of experiments, despite underwhelming absolute results\n\nCons:\n- Quasi-RNNs are almost identical and already have results on small-scale tasks.\n- Slightly unfortunate naming that does not account for autoregressive CNNs\n- Lack of comparison with autoregressive CNN baselines, which signals a major conceptual error in the paper.\n- I would suggest to focus on a small set of tasks and show that the model achieves very good or SOTA performance on them, instead of focussing on many tasks with just relative improvements over the RNN baseline.\n\nI recommend showing exhaustively and experimentally that gated incremental pooling can be helpful for autoregressive CNNs on sequence tasks (MT, LM and ASR). I will adjust my score accordingly if the experiments are presented.\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Nice idea, tested extensively",
"paper_summary": null,
"main_review": "main_review: This work presents the Simple Recurrent Unit architecture which allows more parallelism than the LSTM architecture while maintaining high performance.\n\nSignificance, Quality and clarity:\nThe idea is well motivated: Faster training is important for rapid experimentation, and altering the RNN cell so it can be paralleled makes sense. \nThe idea is well explained and the experiments convince that the new architecture is indeed much faster yet performs very well.\n\nA few constructive comments:\n- The experiment’s tables alternate between “time” and “speed”, It will be good to just have one of them.\n- Table 4 has time/epoch yet only time is stated",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.7777777910232544,
0.3333333432674408,
0.6666666865348816
],
"confidence": [
1,
1,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Still no SOTA results in the tables?",
"Review response",
"Nice work",
"Revision with updated result tables",
"Tables clarified",
"Results on Switchboard",
"Results on Switchboard",
"Depth vs. width",
"SOTA included",
"Review response",
"Review response",
"activation and highway bias",
"Looks much better, thank you!"
],
"comment": [
"I am not sure how to interpret the comment about the Transformer architecture. There is a table with results in your paper that are far below SOTA and it doesn't even mention this -- it looks like clearly misleading presentation, and with your comment it starts looking like it's misleading on purpose. Thus I'm lowering my score until the presentation is improved. In particular, your results are below 21 BLEU which is very far apart from the 28 BLEU of the Transformer -- the suggestion you make in the comment (that architectures like Transformer may not be needed with SRUs) seems to be far from conclusive at this point. Please present your work fairly and compare to existing SOTA -- it's a very good work, but the presentation is misleading.",
"Thank you for the comments and feedback. \n\nWe agree that having both “time” and “speed” in the tables are confusing. “Time/epoch” in Table 4 is misleading. We will use “Time per epoch” or simply “Time” instead. \n\nWe will address your feedback in the next version. Thanks!\n",
"In Tables 6 / 9: \nIt is not clear why SRU model capacity was increased in depth (to 12 layers) and not in width, which would give an even faster model I would think. As you mention for LSTM 5 layers appear to be optimal, so it is surprising that 12 were needed for SRU.",
"We updated the paper to include recent state-of-the-art results for the QA and translation tasks to avoid confusion about how the results should be interpreted. We thank AnonReviewer3 for suggesting this. ",
"The latest revision contains fixes to the tables and unifies the measurements used. Thanks for the suggestion",
"Thank you! We will update it.",
"Thanks for your comments.\n\nSorry for the confusing, we didn't use RNN-LM here (only N-gram). So the number we should compare with is 10.0 in Table 8. I think JHU recently have better number using the same language model with lattice-free MMI training. We will try this new loss later. But similar to RNN-LM, this is orthogonal to this paper, we are trying to compare with LSTM only for acoustic modeling.\n\nWe haven't try it on 2000hrs. (1) To my understanding, there still lots of institute use 300hrs setup especially at school. If you check last year ICASSP, there are still many paper use 300hrs set, e.g. http://danielpovey.com/files/2017_spl_tdnnlstm.pdf. (2) In my experiences, 20000hrs vs. 300hrs do make a difference, especially for end-to-end system. But 2000hrs set and 300hrs usually don't have significant difference in term of testing the trend of the model quality (especially for HMM-NN hybrid system, model A > model B for 300hrs usually also hold for the full fisher set). Also, 300hrs usually take 4 days on a single GPU which is a reasonable setup for reproduce results.\n",
"In general we found that increasing depth is more helpful than increasing width as long as the width is in a reasonable size. I think this is because we drop \"the dependency\" between \"h\" and this context needs to be recovered by adding more layers. But since SWB training takes about 4 days, we didn't try all the configuration. That's why we didn't draw a conclusion on depth vs. width. ",
"Hi,\n\nSorry for the delayed revision. The state-of-the-art results have been included in the tables for both machine translation and reading comprehension tasks. We hope the results are now better presented. \n\nPlease let use know if other related work should be included. We are happy to address additional comments.\n\nAlso, we didn't mean that \"Transformer may not be needed with SRUs\". As discussed in the introduction of the Transformer paper, RNN is discarded in Transformer architecture due to the difficulty to parallelize recurrent computation. Thus, it is perhaps possible to \"achieve the best of both worlds\" by incorporating SRU into Transformer (e.g. substituting the FFN sub-unit). \n\n",
"Thank you for the comments and feedback. We respond to the concerns and questions raised in three section. \n\n== Recurrent or convolution ==\nWe wish to certain aspects pertaining to the distinction between recurrent and convolution architectures as we use in the paper:\n\n(1) SRU only applies simple matrix multiplications (Wx_t) for each x_t. This is not a typical convolution operation that is applied over k consecutive tokens. While matrix multiplication can be considered a convolution operation of k=1, this entails that feed-forward networks (FFN) are also a convolutional network. More important, with k=1 there is no convolution over the words, which is the key aim of CNNs for text processing, for example to reason about n-gram patterns. Therefore, while notationaly correct, we consider the k=1 case to empty the term convolution from the meaning it is intended to convey, and do not use it in this way in the paper. That said, we discuss the relationship of these two types of computations in Appendix A, and will be happy to clarify it further in the body of the paper. \n\n(2) This being said, the effectiveness of SRU comes from the recurrent computation of its internal state c[t] (rather than applying conv operations). This internal state computation (referred to in the review as gated incremental pooling) is commonly used as the key component in gated RNN variants, including LSTM, GRU, RAN, MGU, etc. \n\n(3) Beyond the choice of terms, and even if we were to consider SRU as a special type of CNN (with k=1), to the best of our knowledge, our study is the first to demonstrate that k=1 suffices to work effectively across a range of NLP and speech tasks. This emphasis on efficiency goes beyond prior work (e.g. Bytenet, ConvS2S and Quasi-RNN), where conv operations of k=3,4,etc are used throughout the experiments. This allows us to simplify architecture tuning and significantly speeds up the network, which is the main focus of this work. As shown in Figure 2, SRU operates faster than a single conv operation of k=3.\n\n(4) Quasi-RNN, T-RNN and T-LSTM (https://arxiv.org/pdf/1602.02218.pdf) have also used “RNN” in naming, despite defaulting to CNN with gated incremental pooling. Broadly speaking, we consider any unit that successively updates state c[t] based on current input x[t] and the previous vector c[t-1] (as a function c[t]=f(x[t], c[t-1])) as a recurrent unit. We will clarify this better in the paper. \n\n== Quasi-RNN and scale of tasks ==\nWe discuss the comparison to Quasi-RNN in Appendix A, and emphasize the critical differences. In our experiments, the training time of a single run on machine translation takes about 2 days, and 4 days on speech on a Titan X GPU.\n\n== Wide experiments vs deep experiments ==\nOur experiments are aimed to study SRU’s effectiveness on a broad set of realistic applications via fair comparison. We discuss this more in our response to Reviewer 3. \n\nOur work focuses on practical simplifications, optimizations, and the applicability of SRU to a wide range of realistic tasks. Although we do not perform an exhaustive hyper-parameter / architecture tuning on each task given space and time constraints, we do see an improvement over deep CNNs on speech recognition. Similar results have been reported in prior work such as RCNN (Lei et al; 15,16), KNN (Lei et al; 17) and Quasi-RNN (Bradbury et al; 17), demonstrating that gated pooling is helpful for CNN-type models on tasks such as classification, retrieval, LM etc.\n",
"Thank you for the comments and feedback.\n\n== Paper revision ==\nWe will include missing SOTA results and related work for translation as pointed by R3, as we already included for language modeling and speech. We will update the table in the next version.\n\n== Clarification on our experiments ==\nThe goal of our experiments is not to outperform previous SOTA. Instead, the experiments were designed to study SRU’s effectiveness on a broad set of realistic applications via fair comparison. Therefore, we emphasized using existing open source implementations for MT and QA. Different implementations (network architectures, data processing etc.) have non-trivial impact on the final numbers. To the best of our effort, we aimed to avoid this influencing our experiments. Therefore, in the current version, Tables 1, 3, and 5 only compare the results of using LSTM / SRU / Conv2d as building blocks in existing models such DrQA and OpenNMT. We definitely agree that including SOTA models in these tables will improve our presentation. Thank you for the suggestion.\n\n== Non-RNN architectures ==\nThank you for the comment. We will include discussions of non-RNN architectures. Our contribution is orthogonal to recent architectures, such as Transformer (https://arxiv.org/abs/1706.03762), which is a novel combination of multi-head attention and feed-forward networks. Part of the motivation behind the Transformer architecture is the computational bottleneck of recurrent architectures. With SRU this is not longer the case. In fact, we observe in the translation model that only 4 minutes are spent per SRU layer, and 96 minutes are spent in the attention+softmax computation. An interesting direction for future work is combining the SRU and Transformer architectures to gain the benefits of both. While this is an important problem, it is beyond the scope of our experiments. ",
"Thank you for the comment. \n\nThe identity activation (use_tanh=0) and non-zero highway bias are applied only on language modeling following a few of recent papers such as \n - language modeling via gated convolutional network: https://arxiv.org/pdf/1612.08083.pdf\n - recurrent highway network: https://arxiv.org/abs/1607.03474\n\nWe expect the model to perform better on other tasks as well by initializing a non-zero highway bias, since it can help to balance gradient propagation and model complexity (non-linearity) from layer stacking. This is recommended in the original highway network paper (https://arxiv.org/abs/1505.00387). However, we choose to use zero highway bias on other tasks for simplicity. \n\nRegarding the choice of activation function:\n - this could be an empirical question since the best activation varies across tasks / datasets (Appendix A)\n - identity already works since the pre-activation state (i.e. c[t]) readily encapsulates sequence similarity computation. see the discussed related work (Lei et al 2017; section 2.1 & 2.2) https://arxiv.org/pdf/1705.09037.pdf\n\nThank you again for bringing up the questions.\n ",
"Thank you for the new version of the paper. It looks much better, and I misunderstood the comments about Transformer. Indeed, combining it with SRUs could bring the best of both worlds and improve results even more. I have no more objections to accepting this work and I see its big potential, adjusting my review."
]
} | {
"paperhash": [
"bahdanau|neural_machine_translation_by_jointly_learning_to_align_and_translate",
"goyal|yangqing_jia,_and_kaiming_he._accurate,_large_minibatch_sgd:_training_imagenet_in_1_hour",
"he|deep_residual_learning_for_image_recognition",
"kim|convolutional_neural_networks_for_sentence_classification",
"kingma|adam:_a_method_for_stochastic_optimization",
"li|learning_question_classifiers",
"melis|on_the_state_of_the_art_of_evaluation_in_neural_language_models",
"rajpurkar|squad:_100,000+_questions_for_machine_comprehension_of_text",
"seo|bidirectional_attention_flow_for_machine_comprehension",
"seo|bidirectional_attention_flow_for_machine_comprehension",
"shazeer|outrageously_large_neural_networks:_the_sparsely-gated_mixture-of-experts_layer",
"vaswani|attention_is_all_you_need",
"wu|google's_neural_machine_translation_system:_bridging_the_gap_between_human_and_machine_translation",
"zaremba|recurrent_neural_network_regularization",
"anselmi|deep_convolutional_networks_are_hierarchical_kernel_machines",
"appleyard|optimizing_performance_of_recurrent_neural_networks_on_gpus",
"daniely|toward_deeper_understanding_of_neural_networks:_the_power_of_initialization_and_a_dual_view_on_expressivity",
"yann|language_modeling_with_gated_convolutional_networks",
"gal|a_theoretically_grounded_application_of_dropout_in_recurrent_neural_networks",
"gehring|convolutional_sequence_to_sequence_learning",
"greff|lstm:_a_search_space_odyssey",
"inan|tying_word_vectors_and_word_classifiers:_a_loss_framework_for_language_modeling",
"jean|on_using_very_large_target_vocabulary_for_neural_machine_translation",
"kalchbrenner|a_convolutional_neural_network_for_modelling_sentences",
"kuchaiev|factorization_tricks_for_lstm_networks",
"lei|deriving_neural_architectures_from_sequence_and_graph_kernels",
"misra|mapping_instructions_and_visual_observations_to_actions_with_reinforcement_learning",
"pang|a_sentimental_education:_sentiment_analysis_using_subjectivity_summarization_based_on_minimum_cuts",
"pang|seeing_stars:_exploiting_class_relationships_for_sentiment_categorization_with_respect_to_rating_scales",
"|using_the_output_embedding_to_improve_language_models",
"saon|the_ibm_2016_english_conversational_telephone_speech_recognition_system",
"rupesh|training_very_deep_networks",
"wu|an_empirical_exploration_of_skip_connections_for_sequential_tagging"
],
"title": [
"NEURAL MACHINE TRANSLATION BY JOINTLY LEARNING TO ALIGN AND TRANSLATE",
"A Diffusion Approximation Theory of Momentum SGD in Nonconvex Optimization",
"Deep Residual Learning for Image Recognition",
"Convolutional Neural Networks for Sentence Classification",
"Published as a conference paper at ICLR 2015 ADAM: A METHOD FOR STOCHASTIC OPTIMIZATION",
"Learning Question Classifiers £",
"On the State of the Art of Evaluation in Neural Language Models",
"SQuAD: 100,000+ Questions for Machine Comprehension of Text",
"BI-DIRECTIONAL ATTENTION FLOW FOR MACHINE COMPREHENSION",
"BI-DIRECTIONAL ATTENTION FLOW FOR MACHINE COMPREHENSION",
"Under review as a conference paper at ICLR 2017 OUTRAGEOUSLY LARGE NEURAL NETWORKS: THE SPARSELY-GATED MIXTURE-OF-EXPERTS LAYER",
"Attention Is All You Need",
"Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation",
"Under review as a conference paper at ICLR 2015 RECURRENT NEURAL NETWORK REGULARIZATION",
"Deep Convolutional Networks are Hierarchical Kernel Machines",
"Optimizing Performance of Recurrent Neural Networks on GPUs",
"Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity",
"Language Modeling with Gated Convolutional Networks",
"Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam",
"Convolutional Sequence to Sequence Learning",
"LSTM: A Search Space Odyssey",
"TYING WORD VECTORS AND WORD CLASSIFIERS: A LOSS FRAMEWORK FOR LANGUAGE MODELING",
"On Using Very Large Target Vocabulary for Neural Machine Translation",
"A Convolutional Neural Network for Modelling Sentences",
"Workshop track -ICLR 2017 FACTORIZATION TRICKS FOR LSTM NETWORKS",
"Deriving Neural Architectures from Sequence and Graph Kernels",
"Mapping Instructions and Visual Observations to Actions with Reinforcement Learning",
"From Paraphrase Database to Compositional Paraphrase Model and Back",
"Whitening Sentence Representations for Better Semantics and Faster Retrieval",
"Training Very Deep Networks",
"An Empirical Exploration of Skip Connections for Sequential Tagging",
"Training Very Deep Networks",
"Under review as a conference paper at ICLR 2017 NEURAL ARCHITECTURE SEARCH WITH REINFORCEMENT LEARNING"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"dzmitry bahdanau",
"kyunghyun cho",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tianyi liu",
"zhehui chen",
"enlu zhou",
"tuo zhao"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{}"
}
]
},
{
"name": [
"yoon kim"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
}
]
},
{
"name": [
"diederik p kingma",
"jimmy lei ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xin li",
"dan roth"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": "{}"
}
]
},
{
"name": [
"gábor melis",
"chris dyer",
"phil blunsom"
],
"affiliation": [
{
"laboratory": "DeepMind",
"institution": "University of Oxford",
"location": "{}"
},
{
"laboratory": "DeepMind",
"institution": "University of Oxford",
"location": "{}"
},
{
"laboratory": "DeepMind",
"institution": "University of Oxford",
"location": "{}"
}
]
},
{
"name": [
"pranav rajpurkar",
"jian zhang",
"konstantin lopyrev",
"percy liang"
],
"affiliation": [
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
}
]
},
{
"name": [
"minjoon seo",
"aniruddha kembhavi",
"ali farhadi",
"hananneh hajishirzi"
],
"affiliation": [
{
"laboratory": "",
"institution": "Allen Institute for Artificial Intelligence",
"location": "{}"
},
{
"laboratory": "",
"institution": "Allen Institute for Artificial Intelligence",
"location": "{}"
},
{
"laboratory": "",
"institution": "Allen Institute for Artificial Intelligence",
"location": "{}"
},
{
"laboratory": "",
"institution": "Allen Institute for Artificial Intelligence",
"location": "{}"
}
]
},
{
"name": [
"minjoon seo",
"aniruddha kembhavi",
"ali farhadi",
"hananneh hajishirzi"
],
"affiliation": [
{
"laboratory": "",
"institution": "Allen Institute for Artificial Intelligence",
"location": "{}"
},
{
"laboratory": "",
"institution": "Allen Institute for Artificial Intelligence",
"location": "{}"
},
{
"laboratory": "",
"institution": "Allen Institute for Artificial Intelligence",
"location": "{}"
},
{
"laboratory": "",
"institution": "Allen Institute for Artificial Intelligence",
"location": "{}"
}
]
},
{
"name": [
"noam shazeer",
"azalia mirhoseini",
"krzysztof maziarz",
"andy davis",
"quoc le",
"geoffrey hinton",
"jeff dean"
],
"affiliation": [
{
"laboratory": "",
"institution": "Google Brain",
"location": "{}"
},
{
"laboratory": "",
"institution": "Google Brain",
"location": "{}"
},
{
"laboratory": "",
"institution": "Jagiellonian University",
"location": "{'settlement': 'Cracow'}"
},
{
"laboratory": "",
"institution": "Google Brain",
"location": "{}"
},
{
"laboratory": "",
"institution": "Google Brain",
"location": "{}"
},
{
"laboratory": "",
"institution": "Google Brain",
"location": "{}"
},
{
"laboratory": "",
"institution": "Google Brain",
"location": "{}"
}
]
},
{
"name": [
"ashish vaswani",
"google brain",
"noam shazeer",
"niki parmar",
"jakob uszkoreit",
"llion jones",
"aidan n gomez",
"łukasz kaiser"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yonghui wu",
"mike schuster",
"zhifeng chen",
"quoc v le",
"mohammad norouzi",
"wolfgang macherey",
"maxim krikun",
"yuan cao",
"qin gao",
"klaus macherey",
"jeff klingner",
"apurva shah",
"melvin johnson",
"xiaobing liu",
"łukasz kaiser",
"stephan gouws",
"yoshikiyo kato",
"taku kudo",
"hideto kazawa",
"keith stevens",
"george kurian",
"nishant patil",
"wei wang",
"cliff young",
"jason smith",
"jason riesa",
"alex rudnick",
"oriol vinyals",
"greg corrado",
"macduff hughes",
"jeffrey dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"wojciech zaremba"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
}
]
},
{
"name": [
"fabio anselmi",
"lorenzo rosasco",
"cheston tan",
"tomaso poggio"
],
"affiliation": [
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology",
"location": "{'postCode': '02139', 'settlement': 'Cambridge', 'region': 'MA'}"
},
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology",
"location": "{'postCode': '02139', 'settlement': 'Cambridge', 'region': 'MA'}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Massachusetts Institute of Technology",
"location": "{'postCode': '02139', 'settlement': 'Cambridge', 'region': 'MA'}"
}
]
},
{
"name": [
"jeremy appleyard",
"tomáš kočiský",
"phil blunsom",
"‡ ⋆ nvidia"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Oxford ‡ Google DeepMind",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Oxford ‡ Google DeepMind",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Oxford ‡ Google DeepMind",
"location": "{}"
}
]
},
{
"name": [
"amit daniely",
"roy frostig",
"yoram singer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yann n dauphin",
"angela fan",
"michael auli",
"david grangier"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mohammad emtiyaz khan",
"didrik nielsen",
"voot tangkaratt",
"wu lin",
"yarin gal",
"akash srivastava",
"mohammad emtiyaz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "University of British Columbia",
"location": "{'settlement': 'Vancouver', 'country': 'Canada'}"
},
{
"laboratory": "",
"institution": "University of Oxford",
"location": "{'settlement': 'Oxford', 'country': 'UK'}"
},
{
"laboratory": "",
"institution": "University of Edin-burgh",
"location": "{'settlement': 'Edinburgh', 'country': 'UK'}"
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jonas gehring",
"michael auli",
"david grangier",
"denis yarats",
"yann n dauphin facebook",
"a i research"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"klaus greff",
"rupesh k srivastava",
"jan koutník",
"bas r steunebrink",
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"hakan inan",
"khashayar khosravi",
"richard socher"
],
"affiliation": [
{
"laboratory": "",
"institution": "Stanford University Stanford",
"location": "{'region': 'CA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Stanford University Stanford",
"location": "{'region': 'CA', 'country': 'USA'}"
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sébastien jean",
"kyunghyun cho",
"roland memisevic",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal CIFAR Senior Fellow",
"location": "{}"
}
]
},
{
"name": [
"nal kalchbrenner",
"edward grefenstette",
"phil blunsom"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Oxford",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Oxford",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Oxford",
"location": "{}"
}
]
},
{
"name": [
"oleksii kuchaiev",
"boris ginsburg"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tao lei",
"wengong jin",
"regina barzilay",
"tommi jaakkola"
],
"affiliation": [
{
"laboratory": "MIT Computer Science & Ar-tificial Intelligence Laboratory",
"institution": "",
"location": "{}"
},
{
"laboratory": "MIT Computer Science & Ar-tificial Intelligence Laboratory",
"institution": "",
"location": "{}"
},
{
"laboratory": "MIT Computer Science & Ar-tificial Intelligence Laboratory",
"institution": "",
"location": "{}"
},
{
"laboratory": "MIT Computer Science & Ar-tificial Intelligence Laboratory",
"institution": "",
"location": "{}"
}
]
},
{
"name": [
"dipendra misra",
"john langford",
"yoav artzi"
],
"affiliation": [
{
"laboratory": "",
"institution": "Cornell University",
"location": "{'postCode': '10044', 'settlement': 'New York', 'region': 'NY'}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{'postCode': '10011', 'settlement': 'New York', 'region': 'NY'}"
},
{
"laboratory": "",
"institution": "Cornell University",
"location": "{'postCode': '10044', 'settlement': 'New York', 'region': 'NY'}"
}
]
},
{
"name": [
"john wieting",
"mohit bansal",
"kevin gimpel",
"karen livescu",
"dan roth"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": "{'postCode': '61801', 'settlement': 'Urbana', 'region': 'IL', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Toyota Technological Institute at Chicago",
"location": "{'postCode': '60637', 'settlement': 'Chicago', 'region': 'IL', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Toyota Technological Institute at Chicago",
"location": "{'postCode': '60637', 'settlement': 'Chicago', 'region': 'IL', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Toyota Technological Institute at Chicago",
"location": "{'postCode': '60637', 'settlement': 'Chicago', 'region': 'IL', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": "{'postCode': '61801', 'settlement': 'Urbana', 'region': 'IL', 'country': 'USA'}"
}
]
},
{
"name": [
"jianlin su",
"jiarun cao",
"weijie liu",
"yangyiwen ou"
],
"affiliation": [
{
"laboratory": "",
"institution": "Shenzhen Zhuiyi Technology Co",
"location": "{'country': 'Ltd'}"
},
{
"laboratory": "",
"institution": "Shenzhen Zhuiyi Technology Co",
"location": "{'country': 'Ltd'}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Shenzhen Zhuiyi Technology Co",
"location": "{'country': 'Ltd'}"
}
]
},
{
"name": [
"rupesh kumar srivastava",
"klaus greff",
"j ürgen schmidhuber"
],
"affiliation": [
{
"laboratory": "",
"institution": "The Swiss AI Lab IDSIA",
"location": "{'country': 'USI, SUPSI'}"
},
{
"laboratory": "",
"institution": "The Swiss AI Lab IDSIA",
"location": "{'country': 'USI, SUPSI'}"
},
{
"laboratory": "",
"institution": "The Swiss AI Lab IDSIA",
"location": "{'country': 'USI, SUPSI'}"
}
]
},
{
"name": [
"huijia wu",
"jiajun zhang",
"chengqing zong"
],
"affiliation": [
{
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "",
"location": "{'country': 'CAS'}"
},
{
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "",
"location": "{'country': 'CAS'}"
},
{
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "",
"location": "{'country': 'CAS'}"
}
]
},
{
"name": [
"rupesh kumar srivastava",
"klaus greff",
"j ürgen schmidhuber"
],
"affiliation": [
{
"laboratory": "",
"institution": "The Swiss AI Lab IDSIA",
"location": "{'country': 'USI, SUPSI'}"
},
{
"laboratory": "",
"institution": "The Swiss AI Lab IDSIA",
"location": "{'country': 'USI, SUPSI'}"
},
{
"laboratory": "",
"institution": "The Swiss AI Lab IDSIA",
"location": "{'country': 'USI, SUPSI'}"
}
]
},
{
"name": [
"barret zoph",
"quoc v le google brain"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"1707.05589v2",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.592593 | 0.916667 | null | null | null | null | null | rJBiunlAW |
||
traynor|a_simple_fully_connected_network_for_composing_word_embeddings_from_characters|ICLR_cc_2018_Conference | A Simple Fully Connected Network for Composing Word Embeddings from Characters | This work introduces a simple network for producing character aware word embeddings. Position agnostic and position aware character embeddings are combined to produce an embedding vector for each word. The learned word representations are shown to be very sparse and facilitate improved results on language modeling tasks, despite using markedly fewer parameters, and without the need to apply dropout. A final experiment suggests that weight sharing contributes to sparsity, increases performance, and prevents overfitting. | {
"name": [],
"affiliation": []
} | A fully connected architecture is used to produce word embeddings from character representations, outperforms traditional embeddings and provides insight into sparsity and dropout. | [
"natural language processing",
"word embeddings",
"language models",
"neural network",
"deep learning",
"sparsity",
"dropout"
] | null | 2018-02-15 22:29:36 | 21 | null | null | null | null | null | null | null | null | false | The paper presents yet another approach for modeling words based on their characters. Unfortunately the authors do not compare properly to previous approaches and the idea is very incremental. | {
"review_id": [
"HkDiq0Flf",
"Byp-dy9gG",
"By7uW7Pef"
],
"review": [
{
"title": "title: Position aware character embeddings can be sparse and are less prone to overfitting on language modeling.",
"paper_summary": null,
"main_review": "main_review: The paper uses both position agnostic and position aware embeddings for tokens in a language modeling task. To obtain token embeddings, they concatenate two embeddings: the sum of character embeddings and the sum of (character, position) embeddings, the former being position agnostic and the latter being position aware. In a language modeling task, they find that using a combination of both improves perplexity over the standard token embedding baseline with fewer parameters. \n\nThe paper shows that the character embeddings are more sparse, measured with the Gini coefficient, than token embeddings and are more robust to overfitting. They also find that while dropout increases overall sparsity, it makes a few tokens homogenous. The paper does not give a crisp answer to why such sparsity patterns are observed. \n\nThe paper falls a bit short both empirically and technically. While their technique is interesting, they do not compare it to the baseline of using convolutions over characters. More empirical evidence is needed for the technique to be adopted by the community. On the theory side, they should dig deeper into the reasons for sparsity and how it might help to train better models. \n\nIf the papers shows that the approach can work well in machine translation or language modeling of morphologically rich languages, it might encourage practitioners to use the technique. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Another model for learning to embed words as a function of their characters",
"paper_summary": null,
"main_review": "main_review: This paper presents a new model for composing representations of characters into word embeddings. The starting point of their argument is to include position-specific embeddings of characters rather than just position-independent characters. By adding together position-specific vectors, reasonable results are obtained.\n\nThis is an interesting result, but I have a few recommendations to improve the paper.\n1) It is a bit hard to assess since it is not evaluated on a standard datasets. There are a number standard datasets for open vocabulary language modeling. E.g., the MWC corpus (http://k-kawakami.com/research/mwc), or even the Penn Treebank (although it is conventionally modeled in closed vocabulary form).\n2) There are many existing models for composing characters into words. In addition to those cited in the paper, see the citations listed below. Comparison with those is crucial in a paper like this.\n3) Since the predictions are done at the word type level, it is unclear how vocabulary set of the corpus is determined, and what is done with OOV word types at test time (while it is possible to condition on them using the technique in the paper, it is not possible to use this technique for generation).\n4) The analysis is interesting, but a more intuitive explanation would be to show nearest neighbor plots.\n\nSome missing citations:\n\nComposing characters into words:\n\ndos Santos and Zadrozny. (2014 ICML) http://proceedings.mlr.press/v32/santos14.pdf\nLing et al. (2015 EMNLP) Finding Function in Form. https://arxiv.org/abs/1508.02096\n\nAdditionally, using explicit positional features in modeling language has been used:\nVaswani et al. (2017) Attention is all you need https://arxiv.org/abs/1706.03762\nand a variety of other sources.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: The authors implement a model to obtain an embedding for a word given its characters, but fail to compare to relevant previous work or even any published results, making the validity of their claims difficult to asses.",
"paper_summary": null,
"main_review": "main_review: The authors propose a neural network architecture which takes the characters of a word as input along with their positions, and output a word embedding. They then use these as inputs to a GRU language model, which is evaluated on two medium size data sets made from a series of novels and the Project Gutenberg Canada books respectively.\n\nWhile the idea has merit, the experimental protocol is too flawed to draw any reliable conclusions. Why use Wheel of Time, which is not in the public domain, rather than e.g. text8? Why not train the model to convergence (Figure 3)? Do the learned embeddings exhibit any morphological significance, or does the model only serve a regularization purpose?\n\nAs for the model itself: are the position agnostic character embeddings actually helpful in the spelling model? Does the model have the expressivity to learn the same embeddings as a look-up table?\n\nThe authors are also missing a significant amount of relevant literature on the topic of building word embeddings from characters, for example:\nFinding Function in Form: Compositional Character Models for Open Vocabulary Word Representation, Ling et al., 2015\nEnriching Word Vectors with Subword Information, Bojanowski et al. 2017\nCompositional Morphology for Word Representations and Language Modelling, Botha and Blunsom 2014\n\nPros:\n- Valid idea\n\nCons:\n- Too many missing references\n- Some modeling choices lack justification\n- Experiments do not provide meaningful comparisons and are not reproducible\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.3333333432674408,
0.2222222238779068
],
"confidence": [
0.75,
1,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [],
"comment": []
} | {
"paperhash": [
"bahdanau|neural_machine_translation_by_jointly_learning_to_align_and_translate",
"bojanowski|enriching_word_vectors_with_subword_information",
"botha|compositional_morphology_for_word_representations_and_language_modelling",
"chen|joint_learning_of_character_and_word_embeddings",
"cheng|long_short-term_memory-networks_for_machine_reading",
"cho|learning_phrase_representations_using_rnn_encoder-decoder_for_statistical_machine_translation",
"gini|variabilità_e_mutabilità",
"hurley|comparing_measures_of_sparsity",
"jordan|the_wheel_of_time",
"józefowicz|exploring_the_limits_of_language_modeling",
"kim|character-aware_neural_language_models",
"ling|finding_function_in_form:_compositional_character_models_for_open_vocabulary_word_representation",
"luong|achieving_open_vocabulary_neural_machine_translation_with_hybrid_word-character_models",
"mikolov|efficient_estimation_of_word_representations_in_vector_space",
"pascanu|understanding_the_exploding_gradient_problem",
"rumelhart|neurocomputing:_foundations_of_research._chapter_learning_representations_by_back-propagating_errors",
"cicero|learning_character-level_representations_for_part-of-speech_tagging",
"srivastava|dropout:_a_simple_way_to_prevent_neural_networks_from_overfitting",
"srivastava|highway_networks",
"vaswani|attention_is_all_you_need",
"zaremba|recurrent_neural_network_regularization"
],
"title": [
"Neural machine translation by jointly learning to align and translate",
"Enriching word vectors with subword information",
"Compositional morphology for word representations and language modelling",
"Joint learning of character and word embeddings",
"Long short-term memory-networks for machine reading",
"Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"Variabilità e mutabilità",
"Comparing measures of sparsity",
"The Wheel of Time",
"Exploring the limits of language modeling",
"Character-aware neural language models",
"Finding function in form: Compositional character models for open vocabulary word representation",
"Achieving open vocabulary neural machine translation with hybrid word-character models",
"Efficient estimation of word representations in vector space",
"Understanding the exploding gradient problem",
"Neurocomputing: Foundations of research. chapter Learning Representations by Back-propagating Errors",
"Learning character-level representations for part-of-speech tagging",
"Dropout: A simple way to prevent neural networks from overfitting",
"Highway networks",
"Attention is all you need",
"Recurrent neural network regularization"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"dzmitry bahdanau",
"kyunghyun cho",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"piotr bojanowski",
"edouard grave",
"armand joulin",
"tomas mikolov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jan a botha",
"phil blunsom"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xinxiong chen",
"lei xu",
"zhiyuan liu",
"maosong sun",
"huanbo luan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jianpeng cheng",
"li dong",
"mirella lapata"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kyunghyun cho",
"bart van merrienboer",
"c ¸aglar gülc ¸ehre",
"fethi bougares",
"holger schwenk",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"c gini"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"niall p hurley",
"scott t rickard"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"robert jordan",
"brandon sanderson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rafal józefowicz",
"oriol vinyals",
"mike schuster",
"noam shazeer",
"yonghui wu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoon kim",
"yacine jernite",
"david sontag",
"alexander m rush"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"wang ling",
"tiago luís",
"luís marujo",
"ramón fernández astudillo",
"silvio amir",
"chris dyer",
"alan w black",
"isabel trancoso"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"minh-thang luong",
"christopher d manning"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tomas mikolov",
"kai chen",
"greg corrado",
"jeffrey dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"razvan pascanu",
"tomas mikolov",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david e rumelhart",
"geoffrey e hinton",
"ronald j williams"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"dos cicero",
"santos ",
"bianca zadrozny"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nitish srivastava",
"geoffrey hinton",
"alex krizhevsky",
"ilya sutskever",
"ruslan salakhutdinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rupesh kumar srivastava",
"klaus greff",
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ashish vaswani",
"noam shazeer",
"niki parmar",
"jakob uszkoreit",
"llion jones",
"aidan n gomez",
"lukasz kaiser",
"illia polosukhin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"wojciech zaremba",
"ilya sutskever",
"oriol vinyals"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1409.0473v7",
"1607.04606v2",
"1405.4273v1",
"",
"",
"",
"",
"0811.4706v2",
"",
"1602.02410v2",
"",
"1508.02096v2",
"",
"1301.3781v3",
"",
"",
"",
"",
"1505.00387v2",
"1706.03762v7",
"1409.2329v5"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.333333 | 0.833333 | null | null | null | null | null | rJ8rHkWRb |
||
dong|enhance_word_representation_for_outofvocabulary_on_ubuntu_dialogue_corpus|ICLR_cc_2018_Conference | Enhance Word Representation for Out-of-Vocabulary on Ubuntu Dialogue Corpus | The Ubuntu dialogue corpus is the largest publicly available dialogue corpus, making it feasible to build end-to-end deep neural network models directly from conversation data. One challenge of the Ubuntu dialogue corpus is the large number of out-of-vocabulary words. In this paper we propose an algorithm that combines general pre-trained word embedding vectors with those generated on the task-specific training set to address this issue. We integrate character embedding into Chen et al.'s Enhanced LSTM method (ESIM) and use it to evaluate the effectiveness of our proposed method. For the task of next utterance selection, the proposed method demonstrates a significant performance improvement over the original ESIM, and the new model achieves state-of-the-art results on both the Ubuntu dialogue corpus and the Douban conversation corpus. In addition, we investigate the performance impact of end-of-utterance and end-of-turn token tags. | {
"name": [],
"affiliation": []
} | Combine information from pre-built word embeddings and task-specific word representations to address the out-of-vocabulary issue | [
"next utterance selection",
"ubuntu dialogue corpus",
"out-of-vocabulary",
"word representation"
] | null | 2018-02-15 22:29:51 | 41 | null | null | null | null | null | null | null | null | false | This paper's idea is to augment pre-trained word embeddings on a large corpus with embeddings learned on the data of interest. This is shown to yield better results than the pre-trained word embeddings alone. This contribution is too limited to justify publication at iclr. | {
"review_id": [
"rJdjmmLez",
"H1RMVeqgz",
"BkomChuxf"
],
"review": [
{
"title": "title: Good paper with important practical and engineering relevance. Little methodological novelty, though.",
"paper_summary": null,
"main_review": "main_review: Summary:\nThis paper proposes an approach to improve the out-of-vocabulary embedding prediction for the task of modeling dialogue conversations. The proposed approach uses generic embeddings and combines them with the embeddings trained on the training dataset in a straightforward string-matching algorithm. In addition, the paper also makes a couple of improvements to Chen et. al's enhanced LSTM by adding character-level embeddings and replacing average pooling by LSTM last state summary vector. The results are shown on the standard Ubuntu dialogue dataset as well as a new Douban conversation dataset. The proposed approach gives sizable gains over the baselines.\n\n\nComments:\n\nThe paper is well written and puts itself nicely in context of previous work. Though, the proposed extension to handle out-of-vocabulary items is a simple and straightforward string matching algorithm, but nonetheless it gives noticeable increase in empirical performance on both the tasks. All in all, the methodological novelty of the paper is small but it has high practical relevance in terms of giving improved accuracy on an important task of dialogue conversation.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Promising results but insufficient clarity and focus in write-up",
"paper_summary": null,
"main_review": "main_review: The main contributions in this paper are:\n1) New variants of a recent LSTM-based model (\"ESIM\") are applied to the task of response-selection in dialogue modeling -- ESIM was originally introduced and evaluated for natural language inference. In this new setting, the ESIM model (vanilla and extended) outperform previous models when trained and evaluated on two distinct conversational datasets.\n\n2) A fairly trivial method is proposed to extend the coverage of pre-trained word embeddings to deal with the OOV problem that arises when applying them to these conversational datasets.\nThe method itself is to combine d1-dimensional word embeddings that were pretrained on a large unannotated corpus (vocabulary S) with distinct d2-dimensional word embeddings that are trained on the task-specific training data (vocabulary T). The enhanced (d1+d2)-dimensional representation for a word is constructed by concatenating its vectors from the two embeddings, setting either the d1- or d2-dimensional subvector to zeros when the word is absent from either S or T, respectively. This method is incorporated as an extension into ESIM and evaluated on the two conversation datasets.\n\nThe main results can be characterized as showing that this vocabulary extension method leads to performance gains on two datasets, on top of an ESIM-model extended with character-based word embeddings, which itself outperforms the vanilla ESIM model.\n\nThese empirical results are potentially meaningful and could justify reporting, but the paper's organization is very confusing, and too many details are too unclear, leading to low confidence in reproducibility. \n\nThere is basic novelty in applying the base model to a new task, and the analysis of the role of the special conversational boundary tokens is interesting and can help to inform future modeling choices. The embedding-enhancing method has low originality but is effective on this particular combination of model architecture, task and datasets. I am left wondering how well it might generalize to other models or tasks, since the problem it addresses shows up in many other places too...\n\nOverall, the presentation switches back and forth between the Douban corpus and the Ubuntu corpus, and between word2vec and Glove embeddings, and this makes it very challenging to understand the details fully.\n\nS3.1 - Word representation layer: This paragraph should probably mention that the character-composed embeddings are newly introduced here, and were not part of the original formulation of ESIM. That statement is currently hidden in the figure caption.\n\nAlgorithm 1:\n- What set does P denote, and what is the set-theoretic relation between P and T?\n- Under one possible interpretation, there may be items in P that are in neither T nor S, yet the algorithm does not define embeddings for those items even though its output is described as \"a dictionary with word embeddings ... for P\". This does not seem consistent? I think the sentence in S4.2 about initializing remaining OOV words as zeros is relevant and wonder if it should form part of the algorithm description?\n\nS4.1 - What do the authors mean by the statement that response candidates for the Douban corpus were \"collected by Lucene retrieval model\"?\n\nS4.2 - Paragraph two is very unclear. 
In particular, I don't understand the role of the Glove vectors here when Algorithm 1 is used, since the authors refer to word2vec vectors later in this paragraph and also in the Algorithm description.\n\nS4.3 - It's insufficiently clear what the model definitions are for the Douban corpus. Is there still a character-based LSTM involved, or does FastText make it unnecessary?\n\nS4.3 - \"It can be seen from table 3 that the original ESIM did not perform well without character embedding.\" This is a curious way to describe the result, when, in fact, the ESIM model in table 3 already outperforms all the previous models listed.\n\nS4.4 - gensim package -- for the benefit of readers unfamiliar with gensim, the text should ideally state explicitly that it is used to create the *word2vec* embeddings, instead of the ambiguous \"word embeddings\".\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A minor solution to resolving OOV word representations",
"paper_summary": null,
"main_review": "main_review: The paper considers a setting (Ubuntu Dialogue Corpus and Douban Conversation Corpus) where most word types in the data are not covered by pretrained representations. The proposed solution is to combine (1) external pretrained word embeddings and (2) pretrained word embeddings on the training data by keeping them as two views: use the view if it's available, otherwise use a zero vector. This scheme is shown to perform well compared to other methods, specifically combinations of pretraining vs not pretraining embeddings on the training data, updating vs not updating embeddings during training, and others. \n\nQuality: Low. The research is not very well modularized: the addressed problem has nothing specifically to do with ESIM and dialogue response classification, but it's all tangled up. The proposed solution is reasonable but rather minor. Given that the model will learn task-specific word representations on the training set anyway, it's not clear how important it is to follow this procedure, though minor improvement is reported (Table 5). \n\nClarity: The writing is clear. But the point of the paper is not immediately obvious because of its failure to modularize its contributions (see above).\n\nOriginality: Low to minor.\n\nSignificance: It's not convincing that an incremental improvement in the pretraining phase is so significant, for instance compared to developing a novel better architecture actually tailored to the dialogue task. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.4444444477558136,
0.2222222238779068
],
"confidence": [
0.5,
0.75,
1
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Reply to \"Request for code\"",
"Reply to \"Further clarification\"",
"Reply to \"a minor solution to resolving OOV word representations\"",
"Reply to \"Request for Code\"",
"Reply to \"Reproducibility Summary\"",
"Reply to \"\"Promising results but insufficient clarity and focus in write-up\"",
"Reply to \"Promising results but insufficient clarity and focus in write-up\"",
"Reply to \"Request for Code\"",
"Reply to \"Reproducibility Summary\"",
"Reply to \"Promising results but insufficient clarity and focus in write-up\""
],
"comment": [
"Hi, Alex,\n Thank for your interest in our paper. Open source approval process in our company may take time. At the same time, if further clarification about technical implementation details (e.g hyper-parameter setting) is needed, feel free to ask here. We like to help you reproduce the results in the paper.",
"> 1. We used stanford CoreNLP's library \nWe wrote a java program based on CoreNLP library to perform PTBTokenizer, other than command-line interface (CLI). For CLI, it is not easy to create input-output correspondence.\nSee java API example (https://stanfordnlp.github.io/CoreNLP/api.html)\nProperties props = new Properties();\nprops.put(\"annotators\", \"tokenize, ssplit, lemma\"}\n\n> Regarding word2vec, did you use any non-default hyperparameters? \nuse the default. Iter=20\n> did you train contexts and responses as distinct inputs or concatenate the context-response pairs to train?\ndistinct inputs. Each context/response takes one line in the input data file.\n\n\n\n\n\n",
"Thank for your valuable feedback. \n\n> the addressed problem has nothing specifically to do with ESIM and dialogue response classification, but it's all tangled up. The proposed solution is reasonable but rather minor. \n\nIn order to check whether the effectiveness of the proposed enhanced representation depends on ESIM model and dataset, I uploaded a revision (12/11/2017) to use a very simple model (represent contexts/responses by a simple average of word vectors). I evaluated it on Ubuntu, Douban and WikiQA datasets. The results on the enhanced representation are still better on the above three datasets. This may indicate that the enhanced vectors may fuse domain-specific info into pre-built vectors. Also this process is unsupervised.\n\nSee section \"4.5 EVALUATION OF ENHANCED REPRESENTATION ON A SIMPLE MODEL\"\n\n\n\n\n",
"We are extremely excited that you have selected our paper for reproducibility. We are going through our employer's open source approval process which will take a much longer time than the Dec 15 deadline. Few questions that may help us with alternatives. \n\n1. Do we need to stay anonymous to continue further our correspondence?\n2. Are you open for us to enter a legal contractual agreement to access our source code between our employer and your school for the \"reproducibility\" purpose? This would be potentially a faster process to give you access to our source code. I can explore this route to get more affirmative answers on the timing if you are open to enter a legal contractual agreement, like \"no cost collaboration\".\n\nAt this time, we believe open source process would take beyond your Dec 15 deadline, but we hope to finish the open source approval for the conference date. \nIf you have any further questions about our paper, please let us know as well. \n",
"For reference and result reproducibility (ESIM^a in Table 3 in the paper), I pasted the logs of performance evaluation on the validation every 1000 steps during the training. It took about 13 hours 41 minutes to reach 23000 training steps.\n\nstep: 1000\nMAP (mean average precision: 0.735673771383\tMRR (mean reciprocal rank): 0.735673771383\tTop-1 precision: 0.607566462168\tNum_query: 19560\n\nStep: 2000\nMAP (mean average precision: 0.762894553186\tMRR (mean reciprocal rank): 0.762894553186\tTop-1 precision: 0.643149284254\tNum_query: 19560\n\nStep: 3000\nMAP (mean average precision: 0.781005473594\tMRR (mean reciprocal rank): 0.781005473594\tTop-1 precision: 0.666462167689\tNum_query: 19560\n\nStep: 4000\nMAP (mean average precision: 0.791324840945\tMRR (mean reciprocal rank): 0.791324840945\tTop-1 precision: 0.679396728016\tNum_query: 19560\n\nStep: 5000\nMAP (mean average precision: 0.793004146785\tMRR (mean reciprocal rank): 0.793004146785\tTop-1 precision: 0.680112474438\tNum_query: 19560\n\nStep: 6000\nMAP (mean average precision: 0.806250669491\tMRR (mean reciprocal rank): 0.806250669491\tTop-1 precision: 0.698108384458\tNum_query: 19560\n\n....\nStep: 9000\nMAP (mean average precision: 0.819590433992\tMRR (mean reciprocal rank): 0.819590433992\tTop-1 precision: 0.717791411043\tNum_query: 19560\n\nStep: 10000\nMAP (mean average precision: 0.818069269971\tMRR (mean reciprocal rank): 0.818069269971\tTop-1 precision: 0.714008179959\tNum_query: 19560\n\nStep: 11000\nMAP (mean average precision: 0.818855596942\tMRR (mean reciprocal rank): 0.818855596942\tTop-1 precision: 0.714979550102\tNum_query: 19560\n\nStep: 12000\nMAP (mean average precision: 0.821677885708\tMRR (mean reciprocal rank): 0.821677885708\tTop-1 precision: 0.719325153374\tNum_query: 19560\n\nStep: 13000\nMAP (mean average precision: 0.8232087472\tMRR (mean reciprocal rank): 0.8232087472\tTop-1 precision: 0.721523517382\tNum_query: 19560\n\nStep: 14000\nMAP (mean average precision: 0.825161326971\tMRR (mean reciprocal rank): 0.825161326971\tTop-1 precision: 0.724948875256\tNum_query: 19560\n\nStep: 15000\nMAP (mean average precision: 0.825991109975\tMRR (mean reciprocal rank): 0.825991109975\tTop-1 precision: 0.725051124744\tNum_query: 19560\n\nStep: 16000\nMAP (mean average precision: 0.824983891648\tMRR (mean reciprocal rank): 0.824983891648\tTop-1 precision: 0.722750511247\tNum_query: 19560\n\nStep: 17000\nMAP (mean average precision: 0.827094653812\tMRR (mean reciprocal rank): 0.827094653812\tTop-1 precision: 0.727198364008\tNum_query: 19560\n\nStep: 18000\nMAP (mean average precision: 0.829552151297\tMRR (mean reciprocal rank): 0.829552151297\tTop-1 precision: 0.730981595092\tNum_query: 19560\n\nStep: 19000\nMAP (mean average precision: 0.830157512903\tMRR (mean reciprocal rank): 0.830157512903\tTop-1 precision: 0.73200408998\tNum_query: 19560\n\nStep: 20000\nMAP (mean average precision: 0.82902826468\tMRR (mean reciprocal rank): 0.82902826468\tTop-1 precision: 0.729703476483\tNum_query: 19560\n\nStep: 21000\nMAP (mean average precision: 0.832002669848\tMRR (mean reciprocal rank): 0.832002669848\tTop-1 precision: 0.734918200409\tNum_query: 19560\n\nStep: 22000\nMAP (mean average precision: 0.830050982731\tMRR (mean reciprocal rank): 0.830050982731\tTop-1 precision: 0.731339468303\tNum_query: 19560\n\nStep: 23000\nMAP (mean average precision: 0.832678571429\tMRR (mean reciprocal rank): 0.832678571429\tTop-1 precision: 0.735736196319\tNum_query: 19560\n\nStep: 24000\nMAP (mean average precision: 0.828641116467\tMRR 
(mean reciprocal rank): 0.828641116467\tTop-1 precision: 0.728936605317\tNum_query: 19560\n\nStep: 25000\nMAP (mean average precision: 0.826601259454\tMRR (mean reciprocal rank): 0.826601259454\tTop-1 precision: 0.725766871166\tNum_query: 19560\n",
"I uploaded a new revision on Dec. 6.\nOn Table 5, added performance comparison with FastText vectors.\n\nUsed the fixed pre-built FastText vectors ( https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.en.zip) where word vectors for out-of-vocabulary words were computed based on built model.\nThat is,\nall_words_on_ubtuntu_dataset|./fasttext print-word-vectors wiki.en.bin > ubuntu_fastText_word_vectors.txt\n(see: https://github.com/facebookresearch/fastText)\n\nThe performance of the proposed method is better.",
"I uploaded the revision on 12/11/2017 to address whether the effectiveness of the proposed enhanced representation depends on ESIM model and datasets.\n\nI added a section \"4.5 EVALUATION OF ENHANCED REPRESENTATION ON A SIMPLE MODEL\". Here I used a very simple model : represent contexts (or responses) by a simple average of word vectors. Cosine-similarity is used to rank candidate responses. The results on the enhanced vectors are still better. I also tested it on WikiQA dataset.",
"Hi, Alex,\n I could not see your email in open review profile (\"a****4@cs\"). Open source the code in the paper is in progress. I don't know whether I can share the code for this reproduction challenge now and need to check the legal department in my company. \n\n> 1. What random seed did you use to generate the Ubuntu corpus?\njust used the default one (default = 1234) (see: https://github.com/rkadlec/ubuntu-ranking-dataset-creator) so that results are comparable with others.\n\n> 2. How did you implement the character-composed embedding? \n> 3. could you clarify on the concatenation of word and character embeddings?\n\nI used the tensorflow (tf.nn.bidirectional_dynamic_rnn) to conduct all experiments in the paper.\nFor example, you can define function below:\ndef lstm_layer(inputs, input_seq_len, rnn_size, dropout_keep_prob, scope, scope_reuse=False):\n with tf.variable_scope(scope, reuse=scope_reuse) as vs:\n fw_cell = tf.contrib.rnn.LSTMCell(rnn_size, forget_bias=1.0, state_is_tuple=True, reuse=scope_reuse)\n fw_cell = tf.contrib.rnn.DropoutWrapper(fw_cell, output_keep_prob=dropout_keep_prob)\n bw_cell = tf.contrib.rnn.LSTMCell(rnn_size, forget_bias=1.0, state_is_tuple=True, reuse=scope_reuse)\n bw_cell = tf.contrib.rnn.DropoutWrapper(bw_cell, output_keep_prob=dropout_keep_prob)\n rnn_outputs, rnn_states = tf.nn.bidirectional_dynamic_rnn(cell_fw=fw_cell, cell_bw=bw_cell,\n inputs=inputs,\n sequence_length=input_seq_len,\n dtype=tf.float32)\n return rnn_outputs, rnn_states\n\nThen\n#context_char_embedded: [batch_size * max_sequence_length, max_word_length, embed_char_dim]\n#context_char_length: [batch_size * max_sequence_length] (define number of character per word)\n#charRNN_size: 40 \n#max_word_length: 18\n#max_sequence_length: 180\n#dropoutput_keep_prob: 1.0\n#embed_char_dim: 69\n#batch_size: 128\nchar_rnn_output_context, char_rnn_state_context = lstm_layer(context_char_embedded, context_char_length, charRNN_size, dropout_keep_prob, charRNN_scope_name, scope_reuse=False)\n\n#response_char_embedded: [batch_size * max_sequence_length, max_word_length, embed_char_dim]\n#response_char_length: [batch_size * max_sequence_length]\n\nchar_rnn_output_response, char_rnn_state_response = lstm_layer(response_char_embedded,\nresponse_char_length, charRNN_size, dropout_keep_prob, charRNN_scope_name, scope_reuse=True)\n\n#context char representation\nchar_embed_dim = charRNN_size * 2\n#context_char_state: [batch_size * max_sequence_length, char_embed_dim]\ncontext_char_state = tf.concat(axis=1, values=[char_rnn_state_context[0].h, char_rnn_state_context[1].h])\n#reshape \ncontext_char_state = tf.reshape(context_char_state, [-1, max_sequence_length, char_embed_dim])\n\nThe similar operations are applied to char_rnn_state_response.\n\nFor word embedding, I assume that you can get \"context_word_output and response_word_output\"\nBoth tensors will have shape [batch_size, max_sequence_length, word_embedding_dim]\nThen you can use tf.concat to get the combined representation.\n\n> Regarding your ESIM, what settings did you use for the following hyper-parameters: patience, gradient clipping threshold, max epochs?\nI am not familiar with patience. No gradient clipping was used. In my experiments, training usually achieved the best performance (MRR) on the validation set at around 22000 - 25000 batch steps. \n\nNote: tensorflow version (tensorflow-gpu (1.1.0)). \n\nIf you share your email, we can communicate through email or another channel.\n\n\n\n\n\n\n\n\n",
"Thank Hugo et al very much for reproducing the results. \n\n> The paper does not detail the computing infrastructure that was used.\nLocal machine : Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz * 2\n RAM: 32G ( 8 * 4G (DDR4, 2133 MHz)\n One GPU : nvidia P5000 (16 G GPU RAM)\n\nYou used Telsa K80 (with 24G GPU RAM). I have not compared the performance between P5000 and Tesla K80.\n\n> Accuracy and cost over the validation set and over a subset of the training set were employed to evaluate the training of the model.\n\nIn our experiments, we evaluated the accuracy, MRR, P@1 on the validation set every 1000 steps and saved the model with the highest MRR. In your code, you saved the model with the best accuracy on the validation set every 50 steps. \nMy suggestion: \n 1) use MRR\n 2) perform the evaluation on the validation set every K steps (K could be larger to reduce the computational cost since evaluation on the validation set is slow). This will help you speed up the training.\n\n> In training the character embeddings using Word2Vec,\n>we used all the default hyperparameters, and trained each\n> context/response as distinct inputs such that each context/response\n>pair takes one line in the input data file.\n\nI assume that there is a typo here. 'character embedding' may be 'word embedding'.\nIn our algorithm 1, we used Word2vec to generate word embedding on the training set and concatenated them with pre-built GloVe vectors. Character Embedding is used in our ESIM. Since you only evaluated the baseline ESIM model, character embedding would not be used.\n\n> the training of character-composed embeddings is briefly described only as the concatenation of final state vectors at the BiLSTM.\nThe implementation of character embedding was showed in my first comment. It is relatively easy to integrate them into your code (see: tf_esim.py Line 43 and Line 44). Character-embedding may consume more memory. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n",
"Thank for your feedback. I have uploaded a new revision based on your suggestions.\n\n> The embedding-enhancing method has low originality but is effective on this particular combination of model architecture, task and datasets. I am left wondering how well it might generalize to other models or tasks, since the problem it addresses shows up in many other places too...\n\nGood point. I will test embedding-enhanced method on other benchmark set/task to check whether it is still effective. I will report results here.\n\n> S3.1 - Word representation layer: This paragraph should probably mention that the character-composed embeddings are newly introduced here\nI updated it in revised version based on your advice.\n\n> What set does P denote, and what is the set-theoretic relation between P and T?\nP: all words in training/validation/testing sets (number of unique words could be large)\nT: words with word2vec embedding on the training set. T is a subset of P. Word2vec also uses word document frequency to remove some low frequency words.\n\nIn the revised version, I change output to \" dimension d1 + d2 for (S\\cap P) \\cup T\" and added notes \"The remaining words which are in P and not in the above output dictionary are initialized with zero vectors\". Here we did not store word with zero vector in the above dictionary to save space in the output dictionary. This initialization is usually done during neural network initialization stage.\n\n> S4.1 - What do the authors mean by the statement that response candidates for the Douban corpus were \"collected by Lucene retrieval model\"?\nBased on your advice, I added the following sentences in the revised paper\n\"That is, the last turn of each Douban dialogue with additional keywords extracted from the context on the test set was used as query to retrieve 10 response candidates from the Lucene index set (Details are referred to section 4 in (Wu et al., 2017)).\" \n\nDouban data was created by Wu et al., not by us (paper: https://arxiv.org/pdf/1612.01627.pdf, \nSee section 4: Response Candidate retrieval and Section 5.2 Douban Conversation Corpus). On this dataset, response negative candidates on the training/validation sets were random sampled whereas the retrieved method was used for testing set. \n\n> S4.2 - Paragraph two is very unclear. In particular, I don't understand the role of the Glove vectors here when Algorithm 1 is used, since the authors refer to word2vec vectors later in this paragraph and also in the Algorithm description.\n\nHere GloVe vectors are just pre-trainined word embedding ones from a general large dataset.\n\nFor the clarification, I added the following sentence in Section 3.2\n\"Here the pre-trainined word vectors can be from known methods such as GloVe (Pennington et al., 2014), word2vec (Mikolov et al., 2013) and FastText (Bojanowski et al., 2016).\".\n\nOn the training set we used word2vec in Algorithm 1 though other methods (GloVe and FastText) can be used too. \n\n> S4.3 - It's insufficiently clear what the model definitions are for the Douban corpus. Is there still a character-based LSTM involved, \nI used the same model layout and hyper-parameters for Douban and Ubuntu corpus. In Section 4.2 \n\"The same hyper-parameter settings are applied to both Ubuntu Dialogue and Douban conversation corpus.\"\n\nOnly the differences are pre-trained embedding vectors and word2vec generated on the training sets. 
Wu et al's Douban dataset (Chinese) have been already tokenized so that it is easy for us to run word2vec based on gensim. \n\n> does FastText make it unnecessary?\nFor western languages such as English, Germany, FastText generates ngram (character) internal embeddings and are used to address out-of-vocabulary issue. For OOV (a word is out of FastText pre-trained embeddings), we can use average of word ngram to obtain its representation. For Ubuntu corpus, I can test it if you think that it is useful.\nFor Douban, it is not easy for us to do it since dataset has been tokenized by Chinese tokenizer.\n\n> S4.3 - \"It can be seen from table 3 that the original ESIM did not perform well without character embedding.\" \nThanks. I changed it to \"\nIt can be seen from table 3 that character embedding enhances the performance of original ESIM.\"\n\"\n\n> S4.4 - gensim package -- for the benefit of readers unfamiliar with gensim, the text should ideally state explicitly that it is used to create the *word2vec* embeddings, \nI updated it in revised version based on your advice.\n\n\n\n\n\n\n\n\n\n\n\n\n"
]
} | {
"paperhash": [
"abadi|tensorflow:_large-scale_machine_learning_on_heterogeneous_distributed_systems",
"baeza-yates|modern_information_retrieval",
"baudiš|sentence_pair_scoring:_towards_unified_framework_for_text_comprehension",
"bojanowski|enriching_word_vectors_with_subword_information",
"chen|enhanced_lstm_for_natural_language_inference",
"dodge|evaluating_prerequisite_qualities_for_learning_end-to-end_dialog_systems",
"hochreiter|long_short-term_memory",
"huang|learning_deep_structured_semantic_models_for_web_search_using_clickthrough_data",
"ji|an_information_retrieval_approach_to_short_text_conversation",
"kadlec|improved_deep_learning_baselines_for_ubuntu_corpus_dialogs",
"kim|convolutional_neural_networks_for_sentence_classification",
"kim|character-aware_neural_language_models",
"kingma|adam:_a_method_for_stochastic_optimization",
"lowe|the_ubuntu_dialogue_corpus:_a_large_dataset_for_research_in_unstructured_multi-turn_dialogue_systems",
"lowe|training_end-to-end_dialogue_systems_with_the_ubuntu_dialogue_corpus",
"manning|the_stanford_corenlp_natural_language_processing_toolkit",
"mikolov|distributed_representations_of_words_and_phrases_and_their_compositionality",
"ankur|a_decomposable_attention_model_for_natural_language_inference",
"pennington|glove:_global_vectors_for_word_representation",
"ritter|data-driven_response_generation_in_social_media",
"cicero|learning_character-level_representations_for_part-of-speech_tagging",
"nogueira|boosting_named_entity_recognition_with_neural_character_embeddings",
"seo|bidirectional_attention_flow_for_machine_comprehension",
"shen|a_latent_semantic_model_with_convolutional-pooling_structure_for_information_retrieval",
"speer|an_ensemble_method_to_produce_high-quality_word_embeddings",
"speer|representing_general_relational_knowledge_in_conceptnet_5",
"speer|conceptnet_5.5:_an_open_multilingual_graph_of_general_knowledge",
"tan|lstm-based_deep_learning_models_for_non-factoid_answer_selection",
"|oriol_vinyals_and_quoc_le._a_neural_conversational_model",
"voorhees|the_trec-8_question_answering_track_report",
"wan|match-srnn:_modeling_the_recursive_matching_structure_with_spatial_rnn",
"wang|a_dataset_for_research_on_short-text_conversations",
"wang|learning_natural_language_inference_with_lstm",
"wang|bilateral_multi-perspective_matching_for_natural_language_sentences",
"wu|sequential_matching_network:_a_new_architecture_for_multi-turn_response_selection_in_retrieval-based_chatbots",
"yan|learning_to_respond_with_deep_neural_networks_for_retrievalbased_human-computer_conversation_system",
"yan|docchat:_an_information_retrieval_approach_for_chatbot_engines_using_unstructured_documents",
"yang|wikiqa:_a_challenge_dataset_for_open-domain_question_answering",
"yang|words_or_characters?_fine-grained_gating_for_reading_comprehension",
"yang|multi-task_cross-lingual_sequence_tagging_from_scratch",
"zhou|multi-view_response_selection_for_human-computer_conversation"
],
"title": [
"Tensorflow: Large-scale machine learning on heterogeneous distributed systems",
"Modern information retrieval",
"Sentence pair scoring: Towards unified framework for text comprehension",
"Enriching word vectors with subword information",
"Enhanced lstm for natural language inference",
"Evaluating prerequisite qualities for learning end-to-end dialog systems",
"Long short-term memory",
"Learning deep structured semantic models for web search using clickthrough data",
"An information retrieval approach to short text conversation",
"Improved deep learning baselines for ubuntu corpus dialogs",
"Convolutional neural networks for sentence classification",
"Character-aware neural language models",
"Adam: A method for stochastic optimization",
"The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems",
"Training end-to-end dialogue systems with the ubuntu dialogue corpus",
"The stanford corenlp natural language processing toolkit",
"Distributed representations of words and phrases and their compositionality",
"A decomposable attention model for natural language inference",
"Glove: Global vectors for word representation",
"Data-driven response generation in social media",
"Learning character-level representations for part-of-speech tagging",
"Boosting named entity recognition with neural character embeddings",
"Bidirectional attention flow for machine comprehension",
"A latent semantic model with convolutional-pooling structure for information retrieval",
"An ensemble method to produce high-quality word embeddings",
"Representing general relational knowledge in conceptnet 5",
"Conceptnet 5.5: An open multilingual graph of general knowledge",
"Lstm-based deep learning models for non-factoid answer selection",
"Oriol Vinyals and Quoc Le. A neural conversational model",
"The trec-8 question answering track report",
"Match-srnn: Modeling the recursive matching structure with spatial rnn",
"A dataset for research on short-text conversations",
"Learning natural language inference with lstm",
"Bilateral multi-perspective matching for natural language sentences",
"Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots",
"Learning to respond with deep neural networks for retrievalbased human-computer conversation system",
"Docchat: An information retrieval approach for chatbot engines using unstructured documents",
"Wikiqa: A challenge dataset for open-domain question answering",
"Words or characters? fine-grained gating for reading comprehension",
"Multi-task cross-lingual sequence tagging from scratch",
"Multi-view response selection for human-computer conversation"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"martín abadi",
"ashish agarwal",
"paul barham",
"eugene brevdo",
"zhifeng chen",
"craig citro",
"greg s corrado",
"andy davis",
"jeffrey dean",
"matthieu devin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ricardo baeza-yates",
"berthier ribeiro-neto"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"petr baudiš",
"jan pichl",
"tomáš vyskočil",
"jan šedivỳ"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"piotr bojanowski",
"edouard grave",
"armand joulin",
"tomas mikolov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"qian chen",
"xiaodan zhu",
"zhen-hua ling",
"si wei",
"hui jiang",
"diana inkpen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jesse dodge",
"andreea gane",
"xiang zhang",
"antoine bordes",
"sumit chopra",
"alexander miller",
"arthur szlam",
"jason weston"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sepp hochreiter",
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"po-sen huang",
"xiaodong he",
"jianfeng gao",
"li deng",
"alex acero",
"larry heck"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"zongcheng ji",
"zhengdong lu",
"hang li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rudolf kadlec",
"martin schmid",
"jan kleindienst"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoon kim"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoon kim",
"yacine jernite",
"david sontag",
"alexander m rush"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik kingma",
"jimmy ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ryan lowe",
"nissan pow",
"iulian serban",
"joelle pineau"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ryan thomas lowe",
"nissan pow",
"iulian vlad serban",
"laurent charlin",
"chia-wei liu",
"joelle pineau"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mihai christopher d manning",
"john surdeanu",
"jenny rose bauer",
"steven finkel",
"david bethard",
" mcclosky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tomas mikolov",
"ilya sutskever",
"kai chen",
"greg s corrado",
"jeff dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"p ankur",
"oscar parikh",
"dipanjan täckström",
"jakob das",
" uszkoreit"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jeffrey pennington",
"richard socher",
"christopher manning"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alan ritter",
"colin cherry",
"william b dolan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"d cicero",
"bianca santos",
" zadrozny"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"cicero nogueira",
"dos santos",
"victor guimaraes"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"minjoon seo",
"aniruddha kembhavi",
"ali farhadi",
"hannaneh hajishirzi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yelong shen",
"xiaodong he",
"jianfeng gao",
"li deng",
"grégoire mesnil"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"robert speer",
"joshua chin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"robert speer",
"catherine havasi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"robert speer",
"joshua chin",
"catherine havasi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ming tan",
"bing cicero dos santos",
"bowen xiang",
" zhou"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [
"ellen m voorhees"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"shengxian wan",
"yanyan lan",
"jun xu",
"jiafeng guo",
"liang pang",
"xueqi cheng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"hao wang",
"zhengdong lu",
"hang li",
"enhong chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"shuohang wang",
"jing jiang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"zhiguo wang",
"wael hamza",
"radu florian"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yu wu",
"wei wu",
"chen xing",
"ming zhou",
"zhoujun li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rui yan",
"yiping song",
"hua wu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nan zhao yan",
"jun-wei duan",
"peng bao",
"ming chen",
"zhoujun zhou",
"jianshe li",
" zhou"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yi yang",
"wen-tau yih",
"christopher meek"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"zhilin yang",
"bhuwan dhingra",
"ye yuan",
"junjie hu",
"william w cohen",
"ruslan salakhutdinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"zhilin yang",
"ruslan salakhutdinov",
"william cohen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xiangyang zhou",
"daxiang dong",
"hua wu",
"shiqi zhao",
"dianhai yu",
"hao tian",
"xuan liu",
"rui yan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"arXiv:1603.04467",
"",
"1603.06127v4",
"1607.04606v2",
"1609.06038v3",
"arXiv:1511.06931",
"",
"",
"1408.6988v1",
"1510.03753v2",
"1408.5882v2",
"",
"1412.6980v9",
"arXiv:1506.08909",
"",
"",
"1310.4546v1",
"1606.01933v2",
"",
"",
"",
"1505.05008v2",
"1611.01603v6",
"",
"arXiv:1604.01692",
"",
"1612.03975v2",
"arXiv:1511.04108",
"arXiv:1506.05869",
"",
"1604.04378v1",
"",
"1512.08849v2",
"arXiv:1702.03814",
"",
"",
"",
"",
"arXiv:1611.01724",
"arXiv:1603.06270",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.407407 | 0.75 | null | null | null | null | null | rJ7yZ2P6- |
||
tran|generative_models_for_alignment_and_data_efficiency_in_language|ICLR_cc_2018_Conference | Generative Models for Alignment and Data Efficiency in Language | We examine how learning from unaligned data can both improve the data efficiency of supervised tasks and enable alignments without any supervision. For example, consider unsupervised machine translation: the input is two corpora of English and French, and the task is to translate from one language to the other but without any pairs of English and French sentences. To address this, we develop feature-matching autoencoders (FMAEs). FMAEs ensure that the marginal distributions of feature layers are preserved across forward and inverse mappings between domains. We show that FMAEs achieve state of the art for data efficiency and alignment across three tasks: text decipherment, sentiment transfer, and neural machine translation for English-to-German and English-to-French. Most compellingly, FMAEs achieve state of the art for neural translation with limited supervision, with significant BLEU score differences of up to 5.7 and 6.3 over traditional supervised models. Furthermore, on English-to-German, they outperform last year's best fully supervised models such as ByteNet (Kalchbrenner et al., 2016) while using only half as many supervised examples. | {
"name": [],
"affiliation": []
} | null | [] | null | 2018-02-15 22:29:44 | 33 | null | null | null | null | null | null | null | null | false | The pros and cons of this paper cited by the reviewers (with a small amount of my personal opinion) can be summarized below:
Pros:
* The method itself seems to be tackling an interesting problem, which is feature matching between encoders within a generative model
Cons:
* The paper is sloppily written and symbols are not defined clearly
* The paper overclaims its contributions in the introduction, which are not supported by experimental results
* It mis-represents the task of decipherment and fails to cite relevant work
* The experimental setting is not well thought out in many places (see Reviewer 1's comments in particular)
As a result, I do not think this is up to the standards of ICLR at this time, although it may have potential in the future. | {
"review_id": [
"r1dDwZqxM",
"HyZEwgCgM",
"H1fmtttez"
],
"review": [
{
"title": "title: not good enough",
"paper_summary": null,
"main_review": "main_review: This paper proposes a generative model called matching auto-encoder to carry out the learning from unaligned data.\nHowever, it is very disappointed to read the contents after the introduction, since most of the contributions are overclaimed.\n\nDetailed comments:\n- Figure 1 is incorrect because the pairs (x, z) and (y, z) should be put into two different plates if x and y are unaligned.\n\n- Lots of contents in Sec. 3 are confusing to me. What is the difference between g_l(x) and g_l(y) if g_l : H_{l−1} → H_l and f_l: H_{l−1} → H_l are the same? What are e_x and e_y? Why is there a λ if it is a generative model?\n\n- If the title is called 'text decipherment', there should be no parallel data at all, otherwise it is a huge overclaim on the decipherment tasks. Please add citations of Kevin Knight's recent papers on deciperment.\n\n- Reading the experiment results of 'Sentiment Transfer' is a disaster to me. I couldn't get much information on 'sentiment transfer' from a bunch of ungrammatical unnatural language sentences. I would prefer to see some results of baseline models for comparison instead of a pure qualitative analysis.\n\n- The claim on \"FMAEs are state of the art for neural machine translation with limited supervision on EN-DE and EN-FR\" is not exciting to me. Semi-supervised learning is interesting, but in the scenario of MT we do have enough parallel data for many language pairs. Unless the model is able to exceed the 'real' state-of-the-art that uses the full set of parallel data, otherwise we couldn't identify whether the models are able to benefit NMT. Interestingly, the authors didn't provide any of the results that are experimented with full parallel data set. Possibly it is because the introduction of stochastic variables that prevent the models from overfitting on small datasets.\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: The current paper is too sloppy to appear in a good conference: the concept is not described well and the experiments are not well-motivated",
"paper_summary": null,
"main_review": "main_review: \nThe paper is sloppily written where math issues and undefined symbols make it hard to understand. The experiments seem to be poorly done and does not convey any clear points, and not directly comparable to previous results.\n\n(3) evaluates to 0, and is not a penalty. Same issue in (4). Use different symbols. I also do not understand how this is adversarial, as these are just computed through forward propagation.\n\nAlso what is this two argument f in eq 3? It seems to be a different and unspecified function from the one introduced in 2)\n\n4.1: a substitution cipher has an exact model, and there is no reason why a neural networks would do well here. I understand the extra-twist is that training set is unaligned, but there should be an actual baseline which correctly models the cipher process and decipher it. You should include that very natural baseline model.\n\n4.2 does not give any clear conclusions. The bottom is a draw from the model conditioned on the top? What was the training data, what is draw supposed to be? Some express the same sentiment, others different, and I have no idea if they are supposed to express the same meaning or not.\n\n4.3: why are all the results non-overlapping with previous results? You have to either reproduce some of the previous results, or run your own experiment in matching settings. The current result tables show your model is better than some version of the transformer, but not necessarily better than the \"big\" transformer. The setup and descriptions do not inspire confidence.\n\nMinor issues\n\n3.1: effiency => efficiency\n\nData efficiency is used as a task/technique, which I find hard to parse. \"Data efficiency and alignment have seen most success for dense, continuous data such as images.\"\n\"powerful data efficiency and alignment\"\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A little bit unclear",
"paper_summary": null,
"main_review": "main_review: This work propose a generative model for unsupervised learning of translation model using a variant of auto-encoder which reconstruct internal layer representation in two directions. Basic idea is to treat the intermediate layers as feature representation which is reconstructed from the other direction. Experiments on substitution cipher shows improvement over a state of the art results. For translation, the proposed method shows consistent gains over baselines, under a condition where supervised data is limited.\n\nOne of the problems of this paper is the clarity.\n- It is not immediately clear how the feature mapping explained in section 2 is related to section 3. It would be helpful if the authors could provide what is reconstructed using the transformer model as an example.\n- The improved noisy attention in section 3.3 sounds orthogonal to the proposed model. I'd recommend the authors to provide empirical results.\n- MT experiments are unclear to me. When running experiments for 2M data, did you use the remaining 2.5M for unsupervised training in English-German task?\n- It is not clear whether equation 3 is correct: The first term sounds g(e_x, e_y) instead of f(...)? Likewise, equation 4 needs to replace the first f(...) with g(...).\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.1111111119389534,
0.4444444477558136
],
"confidence": [
0.5,
0.5,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [],
"comment": []
} | {
"paperhash": [
"melis|on_the_state_of_the_art_of_evaluation_in_neural_language_models",
"reed|generative_adversarial_text_to_image_synthesis",
"salimans|improved_techniques_for_training_gans",
"bowman|generating_sentences_from_a_continuous_space",
"liu|coupled_generative_adversarial_networks",
"sutskever|sequence_to_sequence_learning_with_neural_networks",
"wu|google's_neural_machine_translation_system:_bridging_the_gap_between_human_and_machine_translation",
"xia|dual_supervised_learning",
"yu|seqgan:_sequence_generative_adversarial_nets_with_policy_gradient"
],
"title": [
"On the State of the Art of Evaluation in Neural Language Models",
"Generative Adversarial Text to Image Synthesis",
"Improved Techniques for Training GANs",
"Generating Sentences from a Continuous Space",
"Coupled Generative Adversarial Networks",
"Sequence to Sequence Learning with Neural Networks",
"Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation",
"Dual Supervised Learning",
"SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"gábor melis",
"chris dyer",
"phil blunsom"
],
"affiliation": [
{
"laboratory": "DeepMind",
"institution": "University of Oxford",
"location": "{}"
},
{
"laboratory": "DeepMind",
"institution": "University of Oxford",
"location": "{}"
},
{
"laboratory": "DeepMind",
"institution": "University of Oxford",
"location": "{}"
}
]
},
{
"name": [
"scott reed",
"zeynep akata",
"xinchen yan",
"lajanugen logeswaran reedscot",
" akata",
" llajan",
"bernt schiele",
"honglak lee"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'settlement': 'Ann Arbor', 'region': 'MI', 'country': 'USA'}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'settlement': 'Ann Arbor', 'region': 'MI', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'settlement': 'Ann Arbor', 'region': 'MI', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "University of Michigan",
"location": "{'settlement': 'Ann Arbor', 'region': 'MI', 'country': 'USA'}"
}
]
},
{
"name": [
"tim salimans",
"ian goodfellow",
"wojciech zaremba",
"vicki cheung",
"alec radford",
"xi chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"samuel r bowman",
"luke vilnis",
"oriol vinyals",
"andrew m dai",
"rafal jozefowicz",
"bengio samy",
" google brain"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ming-yu liu",
"oncel tuzel"
],
"affiliation": [
{
"laboratory": "Mitsubishi Electric Research Labs (MERL)",
"institution": "",
"location": "{}"
},
{
"laboratory": "Mitsubishi Electric Research Labs (MERL)",
"institution": "",
"location": "{}"
}
]
},
{
"name": [
"ilya sutskever",
"oriol vinyals",
"quoc v le"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yonghui wu",
"mike schuster",
"zhifeng chen",
"quoc v le",
"mohammad norouzi",
"wolfgang macherey",
"maxim krikun",
"yuan cao",
"qin gao",
"klaus macherey",
"jeff klingner",
"apurva shah",
"melvin johnson",
"xiaobing liu",
"łukasz kaiser",
"stephan gouws",
"yoshikiyo kato",
"taku kudo",
"hideto kazawa",
"keith stevens",
"george kurian",
"nishant patil",
"wei wang",
"cliff young",
"jason smith",
"jason riesa",
"alex rudnick",
"oriol vinyals",
"greg corrado",
"macduff hughes",
"jeffrey dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yingce xia",
"tao qin",
"wei chen",
"jiang bian",
"nenghai yu",
"tie-yan liu"
],
"affiliation": [
{
"laboratory": "",
"institution": "Univer-sity of Science and Technology of China",
"location": "{'settlement': 'Hefei, Anhui', 'country': 'China'}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{'settlement': 'Beijing', 'country': 'China'}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{'settlement': 'Beijing', 'country': 'China'}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{'settlement': 'Beijing', 'country': 'China'}"
},
{
"laboratory": "",
"institution": "Univer-sity of Science and Technology of China",
"location": "{'settlement': 'Hefei, Anhui', 'country': 'China'}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{'settlement': 'Beijing', 'country': 'China'}"
}
]
},
{
"name": [
"lantao yu",
"weinan zhang",
"jun wang",
"yong yu"
],
"affiliation": [
{
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": "{}"
},
{
"laboratory": "",
"institution": "University College London",
"location": "{}"
},
{
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": "{}"
}
]
}
],
"arxiv_id": [
"1707.05589v2",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.296296 | 0.5 | null | null | null | null | null | rJ7RBNe0- |
||
khosla|policy_driven_generative_adversarial_networks_for_accented_speech_generation|ICLR_cc_2018_Conference | POLICY DRIVEN GENERATIVE ADVERSARIAL NETWORKS FOR ACCENTED SPEECH GENERATION | In this paper, we propose the generation of accented speech using generative adversarial networks. Through this work we make two main contributions: a) the ability to condition latent representations while generating realistic speech samples, and b) the ability to efficiently generate long speech samples by using a novel latent variable transformation module that is trained using policy gradients. Previous methods are limited to generating only relatively short samples, or are not very efficient at generating long samples. The generated speech samples are validated through a number of evaluation measures, viz. a WGAN critic loss and subjective scores from user evaluations against competitive speech synthesis baselines, together with a detailed ablation analysis of the proposed model. The evaluations demonstrate that the model efficiently generates realistic long speech samples conditioned on accent. | {
"name": [],
"affiliation": []
} | null | [
"speech",
"generation",
"accent",
"gan",
"adversarial",
"reinforcement",
"memory",
"lstm",
"policy",
"gradients",
"human"
] | null | 2018-02-15 22:29:17 | 34 | null | null | null | null | null | null | null | null | false | The paper proposes a method for accented speech generation using GANs.
The reviewers have pointed out the problems in the justification of the method (e.g. the need for using policy gradients with a differentiable objective) as well as its evaluation. | {
"review_id": [
"rkKtPpFxz",
"rJfNnSxez",
"SkxuGmyZG"
],
"review": [
{
"title": "title: The paper lacks any novel technical insight, contributions are not explained well, exposition is poor, and the evaluations are invalid.",
"paper_summary": null,
"main_review": "main_review: The contributions made by this paper is unclear. As one of the listed contributions, the authors propose using policy gradient. However, in this setting, the reward is a known differentiable function, and the action is continuous, and thus one could simply backpropagate through to get the gradients on the encoder. Also, it seems the reward is not a function of the future actions, which further questions the need for a reinforcement learning formulation.\n\nThe paper is written poorly. For instance, I don't understand what this sentence means: \"We condition the latent variables to come from rich distributions\". Observed accent labels are referred to as latent (hidden) variables.\n\nWhile the independent Wasserstein critic is useful to study whether models are overfitting (by comparing train/heldout numbers), their use for comparing across different model types is not justified. Moreover, since GAN-based methods optimize the Wasserstein distance directly, it cannot serve as a metric to compare GAN-based models with other models.\n\nAll of the models compared against do not use accent information during training (table 2), so this is not a fair comparison.\n\nOverall, the paper lacks any novel technical insight, contributions are not explained well, exposition is poor, and the evaluations are invalid.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Relevant work, but not executed or presented well",
"paper_summary": null,
"main_review": "main_review: This paper presents a method for generating speech audio in a particular accent. The proposed approach relies on a generative adversarial network (GAN), combined with a policy approach for joining together generated speech segments. The latter is used to deal with the problem of generating very long sequences (which is generally difficult with GANs).\n\nThe problem of generating accented speech is very relevant since accent plays a large role in human communication and speech technology. Unfortunately, this paper is hard to follow. Some of the approach details are unclear and the research is not motivated well. The evaluation does not completely support the claims of the paper, e.g., there is no human judgment of whether the generated audio actually matches the desired accent.\n\nDetailed comments, suggestions, and questions:\n- It would be very useful to situate the research within work from the speech community. Why is accented modelling important? How is this done at the moment in speech synthesis systems? The paper gives some references, but without context. The paper from Ikeno and Hansen below might be useful.\n- Accents are also a big problem in speech recognition (references below). Could your approach give accent-invariant representations for recognition?\n- Figure 1: Add $x$, $y$, and the other variables you mention in Section 3 to the figure.\n- What is $o$ in eq. (1)?\n- Could you add a citation for eq. (2)? This would also help justifying that \"it has a smoother curve and hence allows for more meaningful gradients\".\n- With respect to the critic $C_\\nu$, I can see that it might be helpful to add structure to the hidden representation. In the evaluation, could you show the effect of having/not having this critic (sorry if I missed it)? The statement about \"more efficient layers\" is not clear.\n- Section 3.4: If I understand correctly, this is a nice idea for ensuring that generated segments are combined sensibly. It would be helpful defining with \"segments\" refer to, and stepping through the audio generation process.\n- Section 4.1: \"using which we can\" - typo.\n- Section 5.1: \"Figure 1 shows how the Wasserstein distance ...\" I think you refer to the figure with Table 1?\n- Figure 4: Add (a), (b) and (c) to the relevant parts in the figure.\n\nReferences that might be useful:\n- Ikeno, Ayako, and John HL Hansen. \"The effect of listener accent background on accent perception and comprehension.\" EURASIP Journal on Audio, Speech, and Music Processing 2007, no. 3 (2007): 4.\n- Van Compernolle, Dirk. \"Recognizing speech of goats, wolves, sheep and… non-natives.\" Speech Communication 35, no. 1 (2001): 71-79.\n- Benzeghiba, Mohamed, Renato De Mori, Olivier Deroo, Stephane Dupont, Teodora Erbes, Denis Jouvet, Luciano Fissore et al. \"Automatic speech recognition and speech variability: A review.\" Speech communication 49, no. 10 (2007): 763-786.\n- Wester, Mirjam, Cassia Valentini-Botinhao, and Gustav Eje Henter. \"Are We Using Enough Listeners? No!—An Empirically-Supported Critique of Interspeech 2014 TTS Evaluations.\" In Sixteenth Annual Conference of the International Speech Communication Association. 2015.\n\nThe paper tries to address an important problem, and there are good ideas in the approach (I suspect Sections 3.3 and 3.4 are sensible). Unfortunately, the work is not presented or evaluated well, and I therefore give a week reject.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Policy gradients are not needed for continuous latent variables",
"paper_summary": null,
"main_review": "main_review: The paper considers speech generation conditioned on an accent class.\nLeast Squares GAN and a reconstruction loss is used to train the network.\n\nThe network is using continuous latent variables. These variables are trained by policy gradients.\nI do not see a reason for the policy gradients. It would be possible to use the cleaner gradient from the discriminator.\nThe decoder is already trained with gradient from the discriminator.\nIf you are worried about truncated backpropagation through time,\nyou can bias it by \"Unbiasing Truncated Backpropagation Through Time\" by Corentin Tallec and Yann Ollivier.\n\n\nComments on clarity:\n- It would be helpful to add x, z, y, o labels to the Figure 1.\nI understood the meaning of `o` only from Algorithm 1.\n- It was not clear from the text what is called the \"embedding variable\". Is it `z`?\n- It is not clear how the skip connections connect the encoder and the decoder.\nAre the skip connections not used when generating?\n- In Algorithm 1, \\hat{y}_k is based on z_k, instead of \\hat{z}_k. That seems to be a typo.\n\nComments on evaluation:\n- It is hard to evaluate speech conditioned just on the accent class.\nOverfitting may be unnoticed.\nYou should do an evaluation on a validation set.\nFor example, you can condition on a text and generate samples\nfor text sentences from a validation set.\nPeople can then judge the quality of the speech synthesis.\nA good speech synthesis would be very useful.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.2222222238779068,
0.4444444477558136,
0.3333333432674408
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Human ratings of accent preference",
"Human evaluation and Critic based evaluation is used keeping in mind the uniqueness of the problem being tackled",
"Policy Gradients are efficient and robust during training, in comparison to standard backprop",
"Independent Wasserstein Critics measure the distance between distributions well",
"Reviewer response",
"Human ratings of closeness of accent"
],
"comment": [
"Thanks for getting back to us. If you would look at Figure 4, the presented graph is the weighted preference (plotted to show the difference between models explicitly). We describe what exactly has been plotted in the graph in section 5.2 (the last paragraph being most relevant). We decided not to report the actual values for space considerations, and hoped the representation would help us to get the point across better. We apologise for the confusion this might have caused. ",
"We thank the reviewer for all the valuable inputs. \n\nMotivation: The reviewer’s point about motivating the problem of generated accented speech further is well received and we thank the reviewer for pointing us to some relevant references. The revised version now contains a more detailed motivation.\n\nAccent-invariant representations for recognition: Our proposed approach could indeed be used to generate representations that can be incorporated within speech recognition systems for accented speech. This is a direction we intend to explore as future work and we consider this to be outside the scope of the current work.\n\nHuman judgment of whether the generated audio matched the desired accent: We did actually conduct such a study. Section 4.2.2 describes the setup of our human evaluation study where we asked participants to listen to reference samples corresponding to a specific accent and then rate on a numeric scale how close a generated sample was to the accent in the reference sample. \n\nEffect of not adding structure to the latent representation: This is discussed in Table 2 which shows Wasserstein distances from an independent critic on different ablations of AccentGAN. PolicyGAN is identical to AccentGAN except there is no conditioning of the latent variables. We observe that PolicyGAN performs poorly in comparison to AccentGAN.\n\nThe remaining comments on improving clarity will be addressed in the revised version.\n",
"We thank the reviewer for the valuable suggestions. \n\nUsefulness of policy gradients for continuous variables: \n\nWe thank the reviewer for raising this question, as we should have included a discussion regarding this in the paper. (Indeed, given this question from two reviewers, a contribution of our work could be seen as showcasing the relevance of policy gradients even when the variables involved are continuous.)\n\nIn the early stages of our project, we did experiment with plain back-propagation, as the reviewer suggested. But we observed that the resulting generated samples were of very poor quality. (We have uploaded a few samples from such a model at http://ec2-13-126-31-173.ap-south-1.compute.amazonaws.com:5000/ alongside samples generated by our proposed approach.) Hence we clearly needed techniques beyond plain back-propagation. Policy gradients appealed to us as we could readily adopt it to our setting, and immediately it gave us improvements over the original approach (with high quality utterances up to 12s long). Further, it did not add any significant computational overhead.\n\nWe have added this discussion in the revised version.\n\nWe do not deny the possibility that other recent approaches developed for similar purposes could also be adopted to our task, but the goal of this work has been to report the very significant improvements we achieved by adopting policy gradients. \n\n\nComments on clarity: These have all been addressed in the revised version.\n\nComments on evaluation: Conditioning on text and synthesizing accented speech is indeed part of our future work. Given the additional technical challenges involved, we have considered this to be outside the scope of the current work. \n\nThe remaining comments on improving clarity have been addressed in the revised version.",
"The reviewer has made several objections. We agree to the extent that the exposition could have been improved. We would like to answer the other concerns below.\n\nNeed for policy gradients: As we detailed in an answer to the first reviewer, simple back-propagation as the reviewer suggests demonstrably fails. Using policy gradients overcame the drawbacks of directly using back-propagation, without introducing significant computational overheads. While we had carried out extensive experimentation on this aspect, we omitted it entirely in our submission. We shall incorporate this in the revised version.\n\nUse of an independent Wasserstein critic to compare across models: We do not agree with the reviewer’s contention that using an independent Wasserstein critic to compare across models is unjustified. Not only is it a natural approach, but also one of the main uses detailed in Danihelka et al. To quote from the paper: “If we use the independent critic, we can compare generators trained by other GAN methods or by different approaches.”\n\nTable 2 Comparisons: The GAN models from the literature we compare with did not provide a means to incorporate accent information during training. Nevertheless, they were trained on data from a mix of accents identical to that in the validation/test data. So the additional data that our models were given corresponds to less than 5 bits per utterance, and this was essential for a harder task (of being able to generate speech in given accents) that is not captured in Table 2.\n\nWe have tried to improve the presentation by removing some mysterious sounding phrasings (that resulted from using the vocabulary from our internal discussions). We apologize for any confusion they may have caused.\n\nWe urge the reviewer to kindly reconsider their impression of the paper in light of our response.\n\nThe remaining comments on improving clarity have been addressed in the revised version.\n",
"Ah, thank you. The figure caption is a bit confusing since you describe it as a \"preference\" rather than saying that you compare to a reference accent (as you do in the first par. of Section 5.2), but I think you have answered the question.",
"Thanks for responding to the review. I did spot in the original paper \"they were also asked to mark on a numeric scale which of the two samples they thought was closer to the accent in the reference samples,\" but I could not find these result in the original paper nor in the revised version. Again, apologies if I am just missing these."
]
} | {
"paperhash": [
"abadi|tensorflow:_large-scale_machine_learning_on_heterogeneous_distributed_systems",
"arik|deep_voice_2:_multi-speaker_neural_text-to-speech",
"arora|understanding_deep_neural_networks_with_rectified_linear_units",
"bahari|accent_recognition_using_ivector,_gaussian_mean_supervector_and_gaussian_posterior_probability_supervector_for_spontaneous_telephone_speech",
"benzeghiba|automatic_speech_recognition_and_speech_variability:_a_review",
"dai|towards_diverse_and_natural_image_descriptions_via_a_conditional_gan",
"danihelka|comparison_of_maximum_likelihood_and_gan-based_training_of_real_nvps",
"dehak|front-end_factor_analysis_for_speaker_verification",
"donahue|exploring_speech_enhancement_with_generative_adversarial_networks_for_robust_speech_recognition",
"fan|tts_synthesis_with_bidirectional_lstm_based_recurrent_neural_networks",
"luisa|generating_segmental_foreign_accent",
"godfrey|switchboard-1_release_2_ldc97s62",
"hsu|voice_conversion_from_unaligned_corpora_using_variational_autoencoding_wasserstein_generative_adversarial_networks",
"hunt|continuous_learning_control_with_deep_reinforcement",
"ikeno|the_effect_of_listener_accent_background_on_accent_perception_and_comprehension",
"kaneko|generative_adversarial_network-based_postfilter_for_statistical_parametric_speech_synthesis",
"karhila|rapid_adaptation_of_foreign-accented_hmm-based_speech_synthesis",
"lander|cslu:_foreign_accented_english_release_1.2_ldc2007s08",
"mao|least_squares_generative_adversarial_networks",
"mehri|samplernn:_an_unconditional_end-to-end_neural_audio_generation_model",
"oord|wavenet:_a_generative_model_for_raw_audio",
"pascual|segan:_speech_enhancement_generative_adversarial_network",
"rosca|variational_approaches_for_auto-encoding_generative_adversarial_networks",
"schreitter|exploring_inter-and_intra-speaker_variability_in_multi-modal_task_descriptions",
"sutton|policy_gradient_methods_for_reinforcement_learning_with_function_approximation",
"tamagawa|the_effects_of_synthesized_voice_accents_on_user_perceptions_of_robots",
"toman|evaluation_of_state_mapping_based_foreign_accent_conversion",
"tomokiyo|foreign_accents_in_synthetic_speech:_development_and_evaluation",
"veaux|cstr_vctk_corpus",
"wang|a_fully_end-to-end_text-to-speech_synthesis_model",
"yang|statistical_parametric_speech_synthesis_using_generative_adversarial_networks_under_a_multi-task_learning_framework",
"yu|seqgan:_sequence_generative_adversarial_nets_with_policy_gradient",
"zen|statistical_parametric_speech_synthesis",
"zen|statistical_parametric_speech_synthesis_using_deep_neural_networks"
],
"title": [
"Tensorflow: Large-scale machine learning on heterogeneous distributed systems",
"Deep voice 2: Multi-speaker neural text-to-speech",
"Understanding Deep Neural Networks with Rectified Linear Units",
"Accent recognition using ivector, gaussian mean supervector and gaussian posterior probability supervector for spontaneous telephone speech",
"Automatic speech recognition and speech variability: A review",
"Towards diverse and natural image descriptions via a conditional gan",
"Comparison of maximum likelihood and gan-based training of real nvps",
"Front-end factor analysis for speaker verification",
"Exploring speech enhancement with generative adversarial networks for robust speech recognition",
"TTS synthesis with bidirectional LSTM based recurrent neural networks",
"Generating segmental foreign accent",
"Switchboard-1 release 2 ldc97s62",
"Voice conversion from unaligned corpora using variational autoencoding Wasserstein generative adversarial networks",
"Continuous learning control with deep reinforcement",
"The effect of listener accent background on accent perception and comprehension",
"Generative adversarial network-based postfilter for statistical parametric speech synthesis",
"Rapid adaptation of foreign-accented hmm-based speech synthesis",
"Cslu: Foreign accented english release 1.2 ldc2007s08",
"Least Squares Generative Adversarial Networks",
"Samplernn: An unconditional end-to-end neural audio generation model",
"Wavenet: A generative model for raw audio",
"Segan: Speech enhancement generative adversarial network",
"Variational approaches for auto-encoding generative adversarial networks",
"Exploring inter-and intra-speaker variability in multi-modal task descriptions",
"Policy gradient methods for reinforcement learning with function approximation",
"The effects of synthesized voice accents on user perceptions of robots",
"Evaluation of state mapping based foreign accent conversion",
"Foreign accents in synthetic speech: development and evaluation",
"Cstr vctk corpus",
"A fully end-to-end text-to-speech synthesis model",
"Statistical parametric speech synthesis using generative adversarial networks under a multi-task learning framework",
"Seqgan: Sequence generative adversarial nets with policy gradient",
"Statistical parametric speech synthesis",
"Statistical parametric speech synthesis using deep neural networks"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"martín abadi",
"ashish agarwal",
"paul barham",
"eugene brevdo",
"zhifeng chen",
"craig citro",
"greg s corrado",
"andy davis",
"jeffrey dean",
"matthieu devin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sercan arik",
"gregory diamos",
"andrew gibiansky",
"john miller",
"kainan peng",
"wei ping",
"jonathan raiman",
"yanqi zhou"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"raman arora",
"amitabh basu",
"poorya mianjy",
"anirbit mukherjee"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mohamad hasan bahari",
"rahim saeidi",
"david van leeuwen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mohamed benzeghiba",
"renato de mori",
"olivier deroo",
"stephane dupont",
"teodora erbes",
"denis jouvet",
"luciano fissore",
"pietro laface",
"alfred mertins",
"christophe ris"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"bo dai",
"sanja fidler",
"raquel urtasun",
"dahua lin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ivo danihelka",
"balaji lakshminarayanan",
"benigno uria",
"daan wierstra",
"peter dayan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"najim dehak",
"patrick j kenny",
"réda dehak",
"pierre dumouchel",
"pierre ouellet"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chris donahue",
"bo li",
"rohit prabhavalkar"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yuchen fan",
"feng-long yao qian",
"frank k xie",
" soong"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"maría luisa",
"garcía lecumberri",
"roberto barra chicote",
"rubén pérez ramón",
"junichi yamagishi",
"martin cooke"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"john godfrey",
"holliman edward"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chin-cheng hsu",
"hsin-te hwang",
"yi-chiao wu",
"yu tsao",
"hsin-min wang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jonathan j hunt",
"alexander pritzel",
"nicolas heess",
"tom erez",
"yuval tassa",
"david silver",
"daan wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ayako ikeno",
"john hl hansen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"takuhiro kaneko",
"hirokazu kameoka",
"nobukatsu hojo",
"yusuke ijima",
"kaoru hiramatsu",
"kunio kashino"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"reima karhila",
"mirjam wester"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"t lander"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xudong mao",
"qing li",
"haoran xie",
"y k raymond",
"zhen lau",
" wang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"soroush mehri",
"kundan kumar",
"ishaan gulrajani",
"rithesh kumar",
"shubham jain",
"jose sotelo",
"aaron courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"aaron van den oord",
"sander dieleman",
"heiga zen",
"karen simonyan",
"oriol vinyals",
"alex graves",
"nal kalchbrenner",
"andrew senior",
"koray kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"santiago pascual",
"antonio bonafonte",
"joan serrà"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mihaela rosca",
"balaji lakshminarayanan",
"david warde-farley",
"shakir mohamed"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"stephanie schreitter",
"brigitte krenn"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david a richard s sutton",
" mcallester",
"p satinder",
"yishay singh",
" mansour"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rie tamagawa",
"catherine i watson",
"han kuo",
"bruce a macdonald",
"elizabeth broadbent"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"markus toman",
"michael pucher"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"laura mayfield tomokiyo",
"alan w black",
"kevin a lenzo"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"christophe veaux",
"junichi yamagishi",
"kirsten macdonald"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yuxuan wang",
"r j skerry-ryan",
"daisy stanton",
"yonghui wu",
"ron j weiss",
"navdeep jaitly",
"zongheng yang",
"ying xiao",
"zhifeng chen",
"samy bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"shan yang",
"lei xie",
"xiao chen",
"xiaoyan lou",
"xuan zhu",
"dongyan huang",
"haizhou li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"lantao yu",
"weinan zhang",
"jun wang",
"yong yu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"heiga zen",
"keiichi tokuda",
"alan w black"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"heiga zen",
"andrew senior",
"mike schuster"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"arXiv:1603.04467",
"arXiv:1705.08947",
"1611.01491v6",
"",
"",
"1703.06029v3",
"arXiv:1705.05263",
"",
"1711.05747v2",
"",
"",
"",
"1704.00849v3",
"",
"",
"",
"",
"",
"1611.04076v3",
"",
"1609.03499v2",
"1703.09452v3",
"arXiv:1706.04987",
"",
"",
"",
"",
"",
"",
"arXiv:1703.10135",
"1707.01670v2",
"1609.05473v6",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.333333 | 0.75 | null | null | null | null | null | rJ6iJmWCW |
||
tsai|discovering_order_in_unordered_datasets_generative_markov_networks|ICLR_cc_2018_Conference | Discovering Order in Unordered Datasets: Generative Markov Networks | The assumption that data samples are independently identically distributed is the backbone of many learning algorithms. Nevertheless, datasets often exhibit rich structures in practice, and we argue that there exist some unknown orders within the data instances. Aiming to find such orders, we introduce a novel Generative Markov Network (GMN) which we use to extract the order of data instances automatically. Specifically, we assume that the instances are sampled from a Markov chain. Our goal is to learn the transitional operator of the chain as well as the generation order by maximizing the generation probability under all possible data permutations. One of our key ideas is to use neural networks as a soft lookup table for approximating the possibly huge, but discrete transition matrix. This strategy allows us to amortize the space complexity with a single model and make the transitional operator generalizable to unseen instances. To ensure the learned Markov chain is ergodic, we propose a greedy batch-wise permutation scheme that allows fast training. Empirically, we evaluate the learned Markov chain by showing that GMNs are able to discover orders among data instances and also perform comparably well to state-of-the-art methods on the one-shot recognition benchmark task. | {
"name": [],
"affiliation": []
} | Propose to observe implicit orders in datasets in a generative model viewpoint. | [
"Markov chain",
"discovering orders",
"generative model",
"one-shot"
] | null | 2018-02-15 22:29:42 | 37 | null | null | null | null | null | null | null | null | false | The problem of discovering ordering in an unordered dataset is quite interesting, and the authors have outlined a few potential applications. However, the reviewer consensus is that this draft is too preliminary for acceptance. The main issues were clarity, lack of quantitative results for the order discovery experiments, and missing references. The authors have not yet addressed these issues with a new draft, and therefore the reviewers have not changed their opinions. | {
"review_id": [
"r1bzglTgG",
"Hy97waqxM",
"B1ySxEolG"
],
"review": [
{
"title": "title: This paper is well written and experiments are carefully done. However it is unclear how impactful are the results.",
"paper_summary": null,
"main_review": "main_review: The paper is about learning the order of an unordered data sample via learning a Markov chain. The paper is well written, and experiments are carefully performed. The math appears correct and the algorithms are clearly stated. However, it really is unclear how impactful are the results.\n\nGiven that finding order is important, A high level question is that given a markov chain's markov property, why is it needed to estimate the entire sequence \\pi star at all? Given that the RHS of the first equation in section 3.2 factorizes, why not simply estimate the best next state for every data s_i?\n\nIn the related works section, there are past generative models which deserve mentions: Deep Boltzmann Machines, Deep Belief Nets, Restricted Boltzmann Machines, and Neural Autoregressive Density Estimators.\n\nEquation 1, why is P(\\pi) being multiplied with the probability of the sequence p({s_i}) ? are there other loss formulations here?\n\nAlg 1, line 7, are there typos with the subscripts?\n\nSection 3.1 make sure to note that f(s,s') sums to 1.0, else it is not a proper transition operator.\n\nSection 3.4, the Bernoulli transition operators very much similar to RBMs, where z is the hidden layer, and there are a lot of literature related to MCMC with RBM models.\n\nDue the complexity of the full problem, a lot of simplification are made and coordinate descent is used. However there are no guarantees to finding the optimal order and a local minimum is probably always reached. Imagining a situation where there are two distinct clusters of s_i, the initial transition operator just happen to jump to the other cluster. This would produce a very different learned order \\pi compared to a transition operator which happen to be very local. Therefore, initialization of the transition operator is very important, and without any regularization, it's not clear what is the point of learning a locally optimal ordering.\n\nMost of the ordering results are qualitative, it would be nice if a dataset with a ground truth ordering can be obtained and we have some quantitative measure. (such as the human pose joint tracking example given by the authors)\n\nIn summary, there are some serious concerns on the impact of this paper. However, this paper is well written and interesting.\n\n\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: [updated] Reject - interesting ideas, but weak presentation and experiments",
"paper_summary": null,
"main_review": "main_review: [After rebuttal]: \nI appreciate the effort the authors have put into the rebuttal, but I do not see a paper revision or new results, so I keep my rating.\n\n---\n\nThe paper proposes “Generative Markov Networks” - a deep-learning-based approach to modeling sequences and discovering order in datasets. The key ingredient of the model is a deep network playing the role of a transition operator in Markov chain, trained via Variational Bayes, similar to a variational autoencoder (but with non-identical input and output images). Given an unordered dataset, the authors maximize its likelihood under the model by alternating gradient ascent steps on the parameters of the network and greedy reordering of the dataset. The model learns to find reasonable order in unordered datasets, and achieves non-trivial performance on one-shot learning. \n\nPros:\n1) The one-shot learning results are promising. The method is conceptually more attractive than many competitors, because it does not involve specialized training on the one-shot classification task. The ability to perform unsupervised fine-tuning on the target test set is also appealing.\n2) The idea of explicitly representing the neighborhood structure within a dataset is generally interesting and seems related to the concept of low-dimensional image manifold. It’s unclear why does this manifold have to be 1-dimensional, though.\n\nCons:\n1) The motivation of the paper is not convincing. Why does one need to find order in unordered datasets? The authors do not really discuss this at all, even though this seems to be the key task in the paper, as reflected in the title. What does one do with this order? How does one even evaluate if a discovered order is good or not?\n2) The one-shot classification results are to me the strongest part of the paper. However, they are rushed and not analyzed in detail. It is unclear which components of the system contribute to the performance. As I understand the method, the authors effectively select several neighbors of the labeled samples and then classify the remaining samples based on the average similarity to these. What if the same procedure is performed with a different similarity measure, not the one learned by GMN? I am not convinced that the proposed method is well tuned for the task. Why is it useful to discover one-dimensional structure, rather than learning a clustering or a metric? Could it be that with a different similarity measure (like the distance in the feature space of a network trained on classification) this procedure would work even better? Or is GMN especially good for this task? If so. why?\n3) The experiments on dataset ordering are not convincing. What should one learn from those? There are no quantitative results, just a few examples (and more in the supplement). The authors even admit that “Comparing to the strong ordering baseline Nearest Neighbor sorting, one could hardly tell which one is better”. Nearest neighbor with Euclidean metric is not a strong baseline at all, and not being able to tell if the proposed method is better than that is not a good sign.\n4) The authors call their method distance-metric-free. This is strange to me. The loss function used during training of the network is a measure of similarity between two samples (may or may not be a proper distance metric). So the authors do assume having some similarity measure between the data points. 
The distance-metric-free claim is similar to saying that negative log-likelihood of a Gaussian has nothing to do with Euclidean distance. \n5) The experiments on using the proposed model as a generative model are confusing. First, the authors do not generate the samples directly, but instead select them from the dataset - this is quite unconventional. Then, the NN baseline is obviously doomed to jump between two samples - the authors could come up with a better baseline, for instance linearly extrapolating based on two most recent samples, or learning the transition operator with a simple linear model. \n6) I am puzzled by the hyperparameter choices. It seems there was a lot of tuning behind the scenes, and it should be commented on. The parameters are very different between the datasets (top of page 7), why is that? Why do they have to differ so much - is the method very unstable w.r.t. the parameters? How can it be that b_{overlap} = b ? Also, in the one-shot classification results, the number of sampled neighbors is 1 without fine-tuning and 5 with fine-tuning - this is strange and not explained.\n7) This work seems related to simultaneous clustering and representation learning, in that it combines discrete reordering and continuous deep network training. The authors should perhaps mention this line of work. See e.g. Yang et al. “Joint Unsupervised Learning of Deep Representations and Image Clusters”, CVPR 2016.\n\nTo conclude, the paper has some interesting ideas, but the presentation is not convincing, and the experiments are substandard. Therefore at this point I cannot recommend the paper for publication.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: REVISED - still not convinced about the significance of the work. Substandard experiments and confusing. I leave my score as it is. -- The proposed model has a very interesting motivation but the description is not clear. The authors do not explain the basics that strongly define the GMN. The experimental results are hard to follow since no intuition is provided. ",
"paper_summary": null,
"main_review": "main_review: \nThe authors deal with the problem of implicit ordering in a dataset and the challenge of recovering it, i.e. when given a random dataset with no explicit ordering in the samples, the model is able to recover an ordering. They propose to learn a distance-metric-free model that assumes a Markov chain as the generative mechanism of the data and learns not only the transition matrix but also the optimal ordering of the observations.\n\n\n> Abstract\n“Aiming to find such orders, we introduce a novel Generative Markov Network (GMN) which we use to extract the order of data instances automatically. ”\nI am not sure what automatically refers here to. Do the authors mean that the GMN model does not explicitly assume any ordering in the observed dataset? This needs to be better stated here. \n“Aiming to find such orders, we introduce a novel Generative Markov Network (GMN) which we use to extract the order of data instances automatically; given an unordered dataset, it outputs the best -most possible- ordering.”\n\nMost of the models assume an explicit ordering in the dataset and use it as an integral modelling assumption. Contrary to that they propose a model where no ordering assumption is made explicitly, but the model itself will recover it if any.\n\n> Introduction\nThe introduction is fairly well structured and the example of the joint locations in different days helps the reader. \n\nIn the last paragraph of page 1, “we argue that … a temporal model can generate it.”, the authors present very good examples where ordered observations (ballerina poses, video frames) can be shuffled and then the proposed model can recover a temporal ordering out of them. What I would like to think also here is about an example where the recovered ordering will also be useful as such. An example where the recovered ordering will increase the importance of the inferred solution would be more interesting..\n\n\n\n2. Related work\nThis whole section is not clear how it relates to the proposed model GMN. Rewriting is strongly suggested. \nThe authors mention Deep Generative models and One-shot learning methods as related work but the way this section is constructed makes it hard for the reader to see the relation. It is important that first the authors discuss the characteristics of GMN that makes it similar to Deep generative models and the one-shot learning models. They should briefly explain the characteristics of DGN and one-shot learning so that the readers see the relationship. \nAlso, the authors never mention that the architecture they propose is deep.\n \nRegarding the last paragraph of page 2, “Our approach can be categorised … can be computed efficiently.”:\nNot sure why the authors assume that the samples can be sampled from an unmixed chain. An unmixed chain can also result in observing data that do not exhibit the real underlying relationships. Also the authors mention couple of characteristics of the GMN but without really explaining them. What are the explicit and implicit models [1] … this needs more details. \n\n[1] P. J. Diggle and R. J. Gratton. Monte Carlo methods of inference for implicit statistical models. Journal of the Royal Statistical Society. Series B (Methodological), pages 193–227, 1984. \n\n“Second, prior approaches were proposed based on the notion of denoising models. In other words, their goal was generating high-quality images; on the other hand, we aim at discovering orders in datasets.” —>this bit is confusing. 
Do the authors mean that prior approaches were considering the observed ordering as part of the model assumptions and were just focusing on the denoising? \n\n3. Generative Markov models\nFirst, I would like to draw the attention of the authors on the terminology they use. The states here are not the latent states usually referred in the literature of Markov chains. The states here are observed and should not be confused with the emissions also usually stated in the corresponding literature. There are as many states as the number of observations and not differentiation is made for ties. All these are based on my understanding of the model.\n\nIn the Equation just before equation (1), on the left hand side, shouldn’t \\pi be after the `;’. It’s an average over the possible \\pi. We cannot consider the average over \\pi when we also want to find the optimal \\pi. The sum doesn’t need to be there. Shouldn’t it just be max_{\\theta, \\pi} log P({s_i}^{n}_{i=1}; \\pi, \\theta) ?\nEquation (1), same. The summation over the possible \\pi is confusing. It’s an optimisation problem…\n\npage 4, section 3.1: The discussion about the use of Neural Net for the construction of the transition matrix needs expansion. It is unclear how the matrix is constructed. Please add more details. E.g. use of soft-max non-linear transformation so that the output of the Neural Net can be interpreted as the probabilities of jumping to one of the possible states. In this fashion, we map the input (current state) and transform it to the probability gf occupying states at the next time step.\n\nWhy this needs expansion: The construction of the transition matrix is the one that actually plays the role of the distance metric in the related models. More specifically, the choice of the non-linear function that outputs the transition probability is crucial; e.g. a smooth function will output comparable transition probabilities to similar inputs (i.e. similar states). \n\nsection 3.2: \nMy concern about averaging over \\pi applies on the equations here too. \n\n“However, without further assumption on the structure of the transitional operator..”—> I think the choice of the nonlinear function in the output node of the NN is actually related to the transition matrix and defines the probabilities. It is a confusing statement to make and authors need to discuss more about it. After all, what is the driving force of the inference? This is a problem/task where the observations are considered in a number of different permutations. As such, the ordering is not fixed and the main driving force regarding the best choice of ordering should come from the architecture of the transition matrix; what kind of transitions does the Neural Net architecture favour? Distance free metric but still assumptions are made that favour specific transitions over others. \n\n“At first, Alg. 1 enumerates all the possible states appearing in the first time step. For each of the following steps, it finds the next state by maximizing the transition probability at the current step, i.e., a local search to find the next state. ” —> local search in the sense that the algorithm chooses as the next state the state with the biggest transition probability (to it) as defined in the Neural Net (transition operator) output? This is a deterministic step, right? \n\n4.1 DISCOVERING ORDERS IN DATASETS \nNice description of the datasets. In the <MSR_SenseCam> the choice of one of the classes needs to be supported. Why? 
What do the authors expect to happen if a number of instances from different classes are chosen? \n\n4.1.1 IMPLICIT ORDERS IN DATASETS \nThe explanation of the inferred orderings for the GMN and Nearest Neighbour models is not clear. In Figure 2, what forces the GMN to make distinguishable transitions, as opposed to the Nearest Neighbour approach that prefers to get stuck in similar states? Is it the transition matrix architecture as defined by the neural network? \n\n>> Figure 10: why use X here? Why not keep consistent by using s?\n\n*** Do the authors test the model performance on an ordered dataset (after shuffling it…)? Is the model able to recover the order? **\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.3333333432674408,
0.3333333432674408
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Rebuttal",
"Rebuttal",
"Rebuttal"
],
"comment": [
"1. [Concern on the Abstract] \nThe term \"automatically\" refers to the meaning that our proposed GMN assumes this order can be learned even though it is not given explicitly. We will clarify this in the revised manuscript.\n\n\n2. [Concern on the Introduction] \nConsider the task of studying evolutions for galaxy or star systems. Usually, the process takes millions or even billions of years, and it is infeasible for a human to collect successive data points manifesting meaningful changes. Therefore, we propose to recover the evolution when just providing a snapshot of thousands of data points. Similar arguments can be made in the study of slow-moving human diseases such as Parkinson's. On the opposite side, the cellular or molecular processes are too fast to permit entire trajectories. In these applications, scientists would like to recover the order from non-sequenced and individual data, which can further benefit the following researches such as learning dynamic systems, observing specific patterns in the data stream, and performing comparisons on different sequences. We will add these comments in the revised manuscript.\n\n3. [Concern on Related Work]\nWe thank the Reviewer for providing helpful suggestions for improving Related Work section. We will make more clear connections between our proposed GMN and Deep Generative Models as well as One-Shot Learning Models. Moreover, since we utilize deep neural networks for amortizing the large state space in the transitional operator, we consider our model as a deep model.\n\nAll previous works build on a strong assumption that the chain needs to be mixed, while in practice it’s very hard to judge whether a chain is mixing or not. As a comparison, our model is free of this assumption, because the underlying model does not build on any property related to the stationary distribution. It is not our intent to claim that the unmixed chain can result in exhibiting real data relationships. We will clarify this as well as the differences between \"implicit\" and \"explicit\" model in the revised manuscript.\n\nAdditionally, prior work proposed to learn the Markov chain such that the data are gradually denoised from low-quality to high-quality images. On the other hand, our model aims to order the data by assuming the order follows Markov chain data generation order. \n\n4. [Concerns on the Generative Markov Models]\n\nYes, we agree that it’s very important to describe in more detail on how to construct the transitional operators using neural networks. As the reviewer has pointed out, this essentially plays the role of the implicit distance metric in our model. We thank the reviewer for this suggestion and we will definitely expand the discussion in a revised version. In the current version, we briefly discuss the neural network parametrization in Sec. 3.4. More specifically, we consider two distribution families (Bernoulli for binary-valued state and Gaussian for real-valued state). Also, this is a proper transitional operator. That is, sum of f(s,s') is 1.0. We use the conditional independence assumption which is also adopted in Restricted Boltzmann Machines. We will note this in the revised manuscript. \n\n5. [Concerns on Sec. 4.1]\n\nWe randomly partition the entire datasets into batches, which means that, in each batch, we do not assume all the classes are available nor an equal number of instances per class. We will clarify this in the revised manuscript.\n\n6. [Concerns on Sec. 4.1.1]\n\nFig. 
2 illustrates the advantage of using our proposed algorithm for searching for the next state. Our transitional operator is trained to recover the order in the entire dataset, and thus it could significantly reduce the problem of getting stuck in similar states. The distinguishable transitions come from our algorithm rather than from the architecture design of the transitional operator. However, the parametrization by the neural network is also crucial: the neural network serves as a universal function approximator, which enables us to amortize the large state space for every single state in a unified model.\n\n7. [Concerns on the Consistency between x and s]\n\nWe will unify the notation in the revised manuscript.\n\n8. [Evaluation on Ordered Dataset]\n\nWe do provide an evaluation on an ordered dataset (Moving MNIST) in the Supplementary. In the revised manuscript, we will also provide quantitative results that compare our proposed algorithm with the true order and other methods for more order-given datasets.\n\n\n",
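As an illustration of the transitional-operator parametrization discussed in the rebuttal above (a neural network outputting a factorized Bernoulli distribution over the next binary-valued state, so that f(s, s') sums to 1 over all next states), here is a minimal hypothetical sketch; the class name, the single-hidden-layer architecture, and all hyperparameters are assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BernoulliTransitionOperator:
    """f(s, s') = p(s' | s) = prod_d Bernoulli(s'_d; mu_d(s)), with mu(s) given by a tiny MLP.

    Because the dimensions of s' are conditionally independent given s, the
    probabilities over all 2^D possible next states sum to 1, so this is a
    proper transition distribution.
    """

    def __init__(self, dim, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.1, size=(hidden, dim))
        self.b2 = np.zeros(dim)

    def mean(self, s):
        # mu(s): per-dimension Bernoulli means for the next state
        h = np.tanh(s @ self.W1 + self.b1)
        return sigmoid(h @ self.W2 + self.b2)

    def log_prob(self, s, s_next):
        # log p(s' | s) under the factorized Bernoulli model
        mu = self.mean(s)
        return float(np.sum(s_next * np.log(mu + 1e-9)
                            + (1.0 - s_next) * np.log(1.0 - mu + 1e-9)))
```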
"We thank the Reviewer for pointing out the possible improvements on the paper.\n\n1. [Concerns on the Motivation and Quantitative Results]\n\nConsider the task of studying evolutions for galaxy or star systems. Usually, the process takes millions or even billions of years, and it is infeasible for a human to collect successive data points manifesting meaningful changes. Therefore, we propose to recover the evolution when just providing a snapshot of thousands of data points. Similar arguments can be made in the study of slow-moving human diseases such as Parkinson's. On the opposite side, the cellular or molecular processes are too fast to permit entire trajectories. In these applications, scientists would like to recover the order from non-sequenced and individual data, which can further benefit the following researches such as learning dynamic systems, observing specific patterns in the data stream, and performing comparisons on different sequences. We will add these comments in the revised manuscript.\n\nAdditionally, in the revised manuscript, we will provide the quantitative results that compare our proposed algorithm with the true order and other methods in some order-given datasets.\n\n2. [Concerns on the One-Shot Learning Experiments]\n\nTo clarify, given a labeled data, we do not select nearest neighbor data for it. Instead, we treat our proposed GMN as a generative model and then generate a sequence of data. Consider the 5-way (i.e., 5 classes) 1-shot (i.e., 1 labeled data per class) task; now we'll have 5 sequences for different categories. Next, we determine the class of unlabeled data based on the fitness within each sequence, which means we determine the class based on the highest generation probability (see Eq. (4)). On the other hand, all the other approaches are deterministic models, which are not able to generate data. Note that, we only have 1 labeled data per class at testing time.\n\n3. [Nearest Neighbor as a strong baseline]\n\nAs far as we know, there is not much prior work on discovering the order in an unordered dataset. Therefore, we consider Nearest Neighbor as a baseline method. We will avoid the \"strong\" word in the revised manuscript.\n\n4. [Distance Metric Free]\n\nWe do not intend to claim the negative log-likelihood of a Gaussian has nothing to do with Euclidean distance. We aim to propose an algorithm that can discover the order based on the Markov chain generation probability. This is compared to the Nearest Neighbor sorting, which requires a pre-defined distance metric. To avoid the confusion, we will rephrase distance-metric-free term in the revised manuscript.\n\n5. [Concerns on Generative Model Experiments]\n\nWe will rephrase the section to avoid confusion with conventional experiments in the generative model. \n\nFig. 2 illustrates the advantage of using our proposed algorithm for searching next state. Our transition operator is trained to recover the order in the entire dataset, and thus it could significantly reduce the problem of being stuck in similar states. Note that this is all carried out under a unified model. Therefore, we adopt Nearest Neighbor search as a baseline comparison. To provide more thorough experiments, we will also provide the suggested baseline \"linearly extrapolating based on two most recent samples\" in the revised manuscript.\n\n6. [Concerns on the Hyper Parameters]\n\nOur proposed algorithm is not very sensitive to the choice of hyperparameters. First, the total number of data in various datasets are different. 
For example, MNIST, Horse, and MSR_SenseCam have 60,000, 328, and 362 instances, respectively. Second, we can feed the entire dataset into a single batch when the total number of data points is small. That is, we can have b = 328 and 362 for the Horse and MSR_SenseCam datasets, respectively, and the corresponding overlaps between batches (i.e., b_overlap) would be 328 and 362. Please see Alg. 2 for more details.\n\n7. [Concerns on Related Works]\n\nAlthough we do not focus on clustering, we will add a discussion of the suggested paper in the revised manuscript.\n\n",
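To make the "classify by highest generation probability" idea in the rebuttal above concrete, here is a hypothetical sketch; it assumes a trained transition operator exposing log_prob(s, s_next) (as in the earlier sketch) and simplifies the paper's Eq. (4) to scoring how well the query continues each class's greedily generated sequence.

```python
import numpy as np

def generate_sequence(op, start, pool, length=5):
    """Greedily grow a sequence from `start`: at each step, move to the remaining
    candidate with the highest transition log-probability from the current state."""
    seq, current, pool = [start], start, list(pool)
    for _ in range(length):
        if not pool:
            break
        idx = int(np.argmax([op.log_prob(current, c) for c in pool]))
        current = pool.pop(idx)
        seq.append(current)
    return seq

def classify_one_shot(op, labeled, query, unlabeled, length=5):
    """`labeled` maps class -> its single labeled example (1-shot setting).
    The query is assigned to the class whose generated sequence it fits best,
    i.e. the class under which appending the query is most probable."""
    scores = {}
    for cls, shot in labeled.items():
        seq = generate_sequence(op, shot, unlabeled, length)
        scores[cls] = op.log_prob(seq[-1], query)
    return max(scores, key=scores.get)
```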
"1. [Impact of the Results] \n\nWe think finding the implicit order in a given set is an important problem and the proposed method could be applied in various domains, including studying galaxy evolutions/human diseases, and recovering videos from image frames.\n\n2. [Why estimating the entire sequence \\pi?]\n\nIn our algorithmic development we indeed estimate the best next state for each given state in the dataset (See Alg. 1, Line 5). But such greedy heuristics is a local search strategy and does not guarantee the globally optimal ordering that maximizes the likelihood function. \n\nOn the other hand, we have also conducted experiments for estimating the best next state given every state s_i. Unfortunately, this makes the learned Markov chain stuck in a few dominant modes. To fix this, we treat the Markov chain generation process as the permutation (i.e., an implicit order) of the data. This modification encourages the state to explore different states without having the issue of collapsing into few dominant modes. We will clarify this in the revised manuscript.\n\n3. [Permutation \\pi]\n\nWe assume the dataset exhibits an implicit order \\pi^* which follows the generation process in a Markov chain. However, the direct computation is computationally intractable (i.e., the total number of data may be too large). In Sec. 3.3, we relax the learning of the order from the entire dataset into different batches of the dataset. To ensure an ergodic Markov chain, we assure the batches overlap with each other.\n\n\n4. [Related Generative Models] \n\nWe will add the discussions in related work for Deep Boltzmann Machines, Deep Belief Nets, Restricted Boltzmann Machines, and Neural Autoregressive Density Estimators.\n\n5. [Typos and Clarifications] \n\nThere is an additional term (a typo) \\sum_{\\pi \\in \\Pi(n)} in Eq. (1). However, the prior of permutation (i.e., \\pi) may not be uniform, and thus P(\\pi) should not be avoided in Eq. (1). \n\nThere is also a typo in line 7, Alg. 1. \n\nWe will fix these typos in the revised manuscript.\n\n6. [Transitional Operator]\n\nSum of f(s,s') is 1.0. We use the conditional independence assumption which is also adopted in Restricted Boltzmann Machines. We will note this in the revised manuscript. Other MCMC approaches related to RBM will also be discussed in Sec. 3.4 in the revised manuscript.\n\n7. [No guarantees to finding the optimal order]\n\nIn the revised version we have shown that finding the globally optimal order in a given Markov chain and a dataset is NP-complete, hence there is no efficient algorithm that can find the optimal order. We argue that in this sense, locally optimal order obtained using greedy heuristics is favorable in many real-world applications.\n\n8. [Concern on Initialization]\n\nWe have tried three different initializations in our experiments. The first is to use Nearest Neighbor with Euclidean distance to suggest an initial order, and then train the transitional operator based on this order in few iterations (i.e., 5 iterations). The second is replacing Euclidean distance with L1-distance. The third is random initialization. We observe that even for the random initialization, the order recovered from our proposed algorithm still leads to a reasonable one that avoids unstable jumps between two distinct clusters. Therefore, we argue that the initialization may not be so crucial to our algorithm. We will add the discussion in the revised manuscript.\n\n9. 
[Quantitative Results]\n\nIn the revised manuscript, we will provide the quantitative results that compare our proposed algorithm with the true order and other methods for some order-given datasets. \n\n"
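The greedy local-search ordering defended in the rebuttal above (pick a start state, then repeatedly append the unvisited state with the highest transition probability) can be sketched as follows; this is an illustration of the general idea only, not a reproduction of the paper's Alg. 1, and the fixed start index is an assumption.

```python
import numpy as np

def greedy_order(op, states, start=0):
    """Return a permutation of indices of `states` found by greedy local search:
    starting from `start`, repeatedly append the unvisited state to which the
    transition operator assigns the highest log-probability from the current one."""
    n = len(states)
    order, visited, current = [start], {start}, start
    while len(order) < n:
        best_j, best_score = None, -np.inf
        for j in range(n):
            if j in visited:
                continue
            score = op.log_prob(states[current], states[j])
            if score > best_score:
                best_j, best_score = j, score
        order.append(best_j)
        visited.add(best_j)
        current = best_j
    return order
```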
]
} | {
"paperhash": [
"andrychowicz|learning_to_learn_by_gradient_descent_by_gradient_descent",
"kingma|adam:_a_method_for_stochastic_optimization",
"diederik|auto-encoding_variational_bayes",
"russakovsky|imagenet_large_scale_visual_recognition_challenge",
"simonyan|very_deep_convolutional_networks_for_large-scale_image_recognition",
"vinyals|matching_networks_for_one_shot_learning",
"kingma|adam:_a_method_for_stochastic_optimization",
"bengio|deep_generative_stochastic_networks_trainable_by_backprop",
"bordes|learning_to_generate_samples_from_noise_through_infusion_training",
"finn|deep_visual_foresight_for_planning_robot_motion",
"finn|model-agnostic_meta-learning_for_fast_adaptation_of_deep_networks",
"ioffe|batch_normalization:_accelerating_deep_network_training_by_reducing_internal_covariate_shift",
"jojic|structural_epitome:_a_way_to_summarize_ones_visual_experience",
"li|meta-sgd:_learning_to_learn_quickly_for_few_shot_learning",
"mehrotra|generative_adversarial_residual_pairwise_networks_for_one_shot_learning",
"mishra|meta-learning_with_temporal_convolutions",
"mnih|alex_graves,_ioannis_antonoglou,_daan_wierstra,_and_martin_riedmiller._playing_atari_with_deep_reinforcement_learning",
"shyam|attentive_recurrent_comparators",
"snell|prototypical_networks_for_few-shot_learning",
"sohl-dickstein|deep_unsupervised_learning_using_nonequilibrium_thermodynamics",
"song|a-nice-mc:_adversarial_training_for_mcmc",
"srivastava|unsupervised_learning_of_video_representations_using_lstms",
"triantafillou|few-shot_learning_through_an_information_retrieval_lens"
],
"title": [
"Learning to learn by gradient descent by gradient descent",
"Published as a conference paper at ICLR 2015 ADAM: A METHOD FOR STOCHASTIC OPTIMIZATION",
"Natural Evolution Strategies",
"ImageNet Large Scale Visual Recognition Challenge",
"VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION",
"Matching Networks for One Shot Learning",
"Published as a conference paper at ICLR 2015 ADAM: A METHOD FOR STOCHASTIC OPTIMIZATION",
"Deep Generative Stochastic Networks Trainable by Backprop",
"LEARNING TO GENERATE SAMPLES FROM NOISE THROUGH INFUSION TRAINING",
"Published as a conference paper at ICLR 2015 ADAM: A METHOD FOR STOCHASTIC OPTIMIZATION",
"Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks",
"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"LEARNING TO REMEMBER RARE EVENTS",
"Meta-SGD: Learning to Learn Quickly for Few-Shot Learning",
"Generative Adversarial Residual Pairwise Networks for One Shot Learning",
"META-LEARNING FOR SEMI-SUPERVISED FEW-SHOT CLASSIFICATION",
"Playing Atari with Deep Reinforcement Learning",
"Attentive Recurrent Comparators",
"Prototypical Networks for Few-shot Learning",
"Deep Unsupervised Learning using Nonequilibrium Thermodynamics",
"A-NICE-MC: Adversarial Training for MCMC",
"Unsupervised Learning of Video Representations using LSTMs",
"Few-Shot Learning Through an Information Retrieval Lens"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"marcin andrychowicz",
"misha denil",
"sergio gómez colmenarejo",
"matthew w hoffman",
"david pfau",
"tom schaul",
"brendan shillingford",
"nando de freitas",
"google deepmind"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "University of Oxford",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Oxford",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik p kingma",
"jimmy lei ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"daan wierstra",
"tom schaul",
"tobias glasmachers",
"yi sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"olga russakovsky",
"jia deng",
"hao su",
"jonathan krause",
"sanjeev satheesh",
"sean ma",
"zhiheng huang",
"andrej karpathy",
"aditya khosla",
"michael bernstein",
"alexander c berg",
"li fei-fei",
"jim mutch",
"sharat chikkerur",
"hristo paskov",
"ruslan salakhutdinov",
"stan bileschi",
"hueihan jhuang",
"ibm research †",
"georgia tech",
"lexing xie",
"hua ouyang",
"apostol natsev",
"tatsuya harada",
"hideki nakayama",
"yoshitaka ushiku",
"yuya yamashita",
"jun imura",
"yasuo kuniyoshi",
"georges quénot",
"yuanqing lin",
"fengjun lv",
"shenghuo zhu",
"ming yang",
"timothee cour",
"kai yu",
"liangliang cao",
"zhen li",
"min-hsuan tsai",
"xiao zhou",
"thomas huang",
"tong zhang",
"cai-zhi zhu",
"shiníchi satoh",
"jorge sanchez",
"florent perronnin",
"thomas mensink",
"asako kanezaki",
"sho inaba",
"hiroshi muraoka",
"yasuo kuniyoshi nii"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Xerox Research Centre Europe",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Xerox Research Centre Europe",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"karen simonyan",
"andrew zisserman"
],
"affiliation": [
{
"laboratory": "Visual Geometry Group",
"institution": "University of Oxford",
"location": "{}"
},
{
"laboratory": "Visual Geometry Group",
"institution": "University of Oxford",
"location": "{}"
}
]
},
{
"name": [
"oriol vinyals",
"google deepmind",
"charles blundell",
"timothy lillicrap",
"koray kavukcuoglu",
"daan wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik p kingma",
"jimmy lei ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoshua bengio",
"éric thibodeau-laufer",
"guillaume alain",
"jason yosinski"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"florian bordes",
"sina honari",
"pascal vincent"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Canadian Institute For Advanced Research (CIFAR)",
"location": "{}"
}
]
},
{
"name": [
"diederik p kingma",
"jimmy lei ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chelsea finn",
"pieter abbeel",
"sergey levine"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sergey ioffe"
],
"affiliation": [
{
"laboratory": "",
"institution": "Christian Szegedy Google Inc",
"location": "{}"
}
]
},
{
"name": [
"łukasz kaiser",
"google brain",
"ofir nachum",
"aurko roy",
"georgia tech",
"samy bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"zhenguo li",
"fengwei zhou",
"fei chen",
"hang li"
],
"affiliation": [
{
"laboratory": "Huawei Noah's Ark Lab",
"institution": "",
"location": "{}"
},
{
"laboratory": "Huawei Noah's Ark Lab",
"institution": "",
"location": "{}"
},
{
"laboratory": "Huawei Noah's Ark Lab",
"institution": "",
"location": "{}"
},
{
"laboratory": "Huawei Noah's Ark Lab",
"institution": "",
"location": "{}"
}
]
},
{
"name": [
"akshay mehrotra",
"ambedkar dukkipati"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mengye ren",
"eleni triantafillou",
"sachin ravi",
"jake snell",
"kevin swersky",
"joshua b tenenbaum",
"hugo larochelle",
"richard s zemel"
],
"affiliation": [
{
"laboratory": "",
"institution": "§ Princeton University",
"location": "{}"
},
{
"laboratory": "",
"institution": "§ Princeton University",
"location": "{}"
},
{
"laboratory": "",
"institution": "§ Princeton University",
"location": "{}"
},
{
"laboratory": "",
"institution": "§ Princeton University",
"location": "{}"
},
{
"laboratory": "",
"institution": "§ Princeton University",
"location": "{}"
},
{
"laboratory": "",
"institution": "§ Princeton University",
"location": "{}"
},
{
"laboratory": "",
"institution": "§ Princeton University",
"location": "{}"
},
{
"laboratory": "",
"institution": "§ Princeton University",
"location": "{}"
}
]
},
{
"name": [
"volodymyr mnih",
"koray kavukcuoglu",
"david silver",
"alex graves",
"ioannis antonoglou",
"daan wierstra",
"martin riedmiller"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pranav shyam",
"shubham gupta",
"ambedkar dukkipati"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jake snell",
"richard s zemel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jascha sohl-dickstein",
"eric a weiss",
"niru maheswaranathan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jiaming song",
"shengjia zhao",
"stefano ermon"
],
"affiliation": [
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
}
]
},
{
"name": [
"nitish srivastava",
"ruslan salakhutdinov"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{'addrLine': '6 Kings College Road', 'postCode': 'M5S 3G4', 'settlement': 'Toronto', 'region': 'ON', 'country': 'CANADA'}"
},
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{'addrLine': '6 Kings College Road', 'postCode': 'M5S 3G4', 'settlement': 'Toronto', 'region': 'ON', 'country': 'CANADA'}"
}
]
},
{
"name": [
"eleni triantafillou",
"richard zemel",
"raquel urtasun"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Toronto Vector Institute",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto Vector Institute",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto Vector Institute Uber ATG",
"location": "{}"
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.333333 | 0.75 | null | null | null | null | null | rJ695PxRW |
||
sharma|hyperedge2vec_distributed_representations_for_hyperedges|ICLR_cc_2018_Conference | Hyperedge2vec: Distributed Representations for Hyperedges | Data structured in form of overlapping or non-overlapping sets is found in a variety of domains, sometimes explicitly but often subtly. For example, teams, which are of prime importance in social science studies are \enquote{sets of individuals}; \enquote{item sets} in pattern mining are sets; and for various types of analysis in language studies a sentence can be considered as a \enquote{set or bag of words}. Although building models and inference algorithms for structured data has been an important task in the fields of machine learning and statistics, research on \enquote{set-like} data still remains less explored. Relationships between pairs of elements can be modeled as edges in a graph. However, modeling relationships that involve all members of a set, a hyperedge is a more natural representation for the set. In this work, we focus on the problem of embedding hyperedges in a hypergraph (a network of overlapping sets) to a low dimensional vector space. We propose a probabilistic deep-learning based method as well as a tensor-based algebraic model, both of which capture the hypergraph structure in a principled manner without loosing set-level information. Our central focus is to highlight the connection between hypergraphs (topology), tensors (algebra) and probabilistic models. We present a number of interesting baselines, some of which adapt existing node-level embedding models to the hyperedge-level, as well as sequence based language techniques which are adapted for set structured hypergraph topology. The performance is evaluated with a network of social groups and a network of word phrases. Our experiments show that accuracy wise our methods perform similar to those of baselines which are not designed for hypergraphs. Moreover, our tensor based method is quiet efficient as compared to deep-learning based auto-encoder method. We therefore, argue that we have proposed more general methods which are suited for hypergraphs (and therefore also for graphs) while maintaining accuracy and efficiency. | {
"name": [],
"affiliation": []
} | null | [
"hypergraph",
"representation learning",
"tensors"
] | null | 2018-02-15 22:29:15 | 56 | null | null | null | null | null | null | null | null | false | While there are some interesting and novel aspects in this paper, none of the reviewers recommends acceptance. | {
"review_id": [
"rJvDxGceG",
"S1teFU6gG",
"H1kAEtYlz"
],
"review": [
{
"title": "title: interesting, but the presentation needs to be improved",
"paper_summary": null,
"main_review": "main_review: This paper studies the problem of representation learning in hyperedges. The author claims their novelty for using several different models to build hyperedge representations. To generate representations for hyperedge, this paper proposes to use several different models such as Denoising AutoEncoder, tensor decomposition, word2vec or spectral embeddings. Experimental results show the effectiveness of these models in several different datasets. \n\nThe author uses several different models (both recent studies like Node2Vec / sen2vec, and older results like spectral or tensor decomposition). The idea of studying embedding of a hypergraph is interesting and novel, and the results show that several different kinds of methods can all provide meaningful results for realistic applications. \n\nDespite the novel idea about hyperedge embedding generation, the paper is not easy to follow. \nThe introduction of ``hypergraph`` takes too much spapce in preliminary, while the problem for generating embeddings of hyperedge is the key of paper. \n\nFurther, the experiments only present several models this paper described. \nSome recent papers about hypergraph and graph structure (even though cannot generate embeddings directly) are still worth mention and compare in the experimental section. It will be persuasive to mention related methods in similar tasks. \n\nit would better better if the author can add some related work about hyperedge graph studies. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: The methods presented in the paper are direct adaptation of existing techniques and rely on heursitcs to work. These methods need to be more thoroughly evaluated (among themselves, to know which method suits for a given problem) as well as against against a simple baseline. ",
"paper_summary": null,
"main_review": "main_review: This paper addresses the problem of embedding sets into a finite dimensional vector space where the sets have the structure that they are hyper-edges of a hyper graph. It presents a collection of methods for solving this problem and most of these methods are only adaptation of existing techniques to the hypergraph setting. The only novelty I find is in applying node2vec (an existing technique) on the dual of the hypergraph to get an embedding for hyperedges. \n\nFor several methods proposed, they have to rely on unexplained heuristics (or graph approximations) for the adaptation to work. For example, why taking average line 9 Algorithm 1 solves problem (5) with an additional constraint that \\mathbf{U}s are same? Problem 5 is also not clearly defined: why is there superscript $k$ on the optimization variable when the objective is sum over all degrees $k$?\n\nIt is not clear why it makes sense to adapt sen2vec (where sequence matters) for the problem of embedding hyperedges (which is just a set). To get a sequence independent embedding, they again have to rely on heuristics.\n\nOverall, the paper only tries to use all the techniques developed for learning on hypergraphs (e.g., tensor decomposition for k-uniform hypergraphs, approximating a hypergraph with a clique graph etc.) to develop the embedding methods for hyperedges. It also does not show/discuss which method is more suitable to a given setting. In the experiments, they show very similar results for all methods. Comparison of proposed methods against a baseline is missing. \n\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Computing node embeddings and hypernode embeddings for hypergraphs",
"paper_summary": null,
"main_review": "main_review: The paper studies different methods for defining hypergraph embeddings, i.e. defining vectorial representations of the set of hyperedges of a given hypergraph. It should be noted that the framework does not allow to compute a vectorial representation of a set of nodes not already given as an hyperedge. A set of methods is presented : the first one is based on an auto-encoder technique ; the second one is based on tensor decomposition ; the third one derives from sentence embedding methods. The fourth one extends over node embedding techniques and the last one use spectral methods. The two first methods use plainly the set structure of hyperedges. Experimental results are provided on semi-supervised regression tasks. They show very similar performance for all methods and variants. Also run-times are compared and the results are expected. In conclusion, the paper gives an overview of methods for computing hypernode embeddings. This is interesting in its own. Nevertheless, as the target problem on hypergraphs is left unspecified, it is difficult to infer conclusions from the study. Therefore, I am not convinced that the paper should be published in ICLR'18.\n\n* typos\n* Recent surveys on graph embeddings have been published in 2017 and should be cited as \"A comprehensive survey of graph embedding ...\" by Cai et al\n* Preliminaries. The occurrence number R(g_i) are not modeled in the hypergraphs. A graph N_a is defined but not used in the paper.\n* Section 3.1. the procedure for sampling hyperedges in the lattice shoud be given. At least, you should explain how it is made efficient when the number of nodes is large.\n* Section 3.2. The method seems to be restricted to cases where the cardinality of hyperedges can take a small number of values. This is discussed in Section 3.6 but the discussion is not convincing enough.\n* Section 3.3 The term Sen2vec is not common knowledge\n* Section 3.3 The length of the sentences depends on the number of permutations of $k$ elements. How can you deal with large k ?\n* Section 3.4 and Section 3.5. The methods proposed in these two sections should be related with previous works on hypergraph kernels. I.e. there should be mentions on the clique expansion and star expansion of hypergraphs. This leads to the question why graph embeddings methods on these expansions have not be considered in the paper.\n* Section 4.1. Only hyperedeges of cardinality in [2,6] are considered. This seems a rather strong limitation and this hypothesis does not seem pertinent in many applications. \n* Section 4. For online multi-player games, hypernode embeddings only allow to evaluate existing teams, i.e. already existing as hyperedges in the input hypergraph. One of the most important problem for multi-player games is team making where team evaluation should be made for all possible teams.\n* Section 5. Seems redundant with the Introduction.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.4444444477558136,
0.4444444477558136
],
"confidence": [
0.5,
0.75,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Made further changes in the experiments for scalability and choice of methods",
"Some comments on the rebuttal",
"Agree to the confusion over baselines",
"Thanks for the reviews",
"Thanks for your valuable comments, some of your concerns were due lack of clear baselines which we have addressed"
],
"comment": [
"Dear Reviewer, \n\nThanks a lot for your response.\n\nWe have submitted a newer version of the paper. In section 4.3 we have made further changes and tried to answer the scalability and choice of methods, aspect more specifically.\n\nWe hope that you find our modifications convincing enough.\nPlease do let us know.\n\nSincere Thanks. ",
"thanks for the rebuttal and modifications on the submitted version. Experimental results do not help to choose between the different methods. Large hyperedges exist in real applications for social networks.",
"Dear Reviewer, \n\nThanks for you valuable comments, we have responded to them in-line below. \n\n> This paper addresses the problem of embedding sets into a finite dimensional vector space where the sets have the structure that they are hyper-edges of a hyper graph. It presents a collection of methods for solving this problem and most of these methods are only adaptation of existing techniques to the hypergraph setting. \n\nØ We agree our paper lacked a comprehensive clarification of the baseline methods, leading to some confusion. We have revised the paper with this clarification. Specifically, we propose two methods: hypergraph tensor decomposition and hypergraph auto-encoder, as our main contributions. Both these methods are designed to take into account the hypergraph structure in a principled manner. Rest all the methods are adaptations of existing graph or language models which have to be adapted by use of proxy or heuristics as they are not designed for hypergraphs. In this sense, our proposed techniques are more general. \n\n> The only novelty I find is in applying node2vec (an existing technique) on the dual of the hypergraph to get an embedding for hyperedges.\n \nØ Regarding novelty aspect, we have clearly listed the novelties we claim in the paper’s introduction, which we again comprehend as follows: \no We propose the concept of dual tensor, which is itself novel and allows us to get hyperedge embedding directly.\no Our proposed hypergraph tensor decomposition method is designed for general hypergraphs (containing different cardinality hyperedges). Therefore, this tensor decomposition is different than simple uniform hypergraph tensor decomposition which is restricted to fixed cardinality hyperedges (i.e. uniform hypergraph). \no Use of de-noising auto-encoder in a hypergraph setting is novel. The idea of creating noise using random-walks over hasse diagram topology is original and unique.\n\nApart from the methods we propose, we have used several interesting tricks and heuristics in our baselines while adapting them for hypergraph setting. \no Use of node2vec over hypergraph dual. (Reviewer has pointed this out himself)\n\no Using hyperedges to model sentences is a novel idea and opens up possibilities of various applications using higher order topological methods for modeling language structure. We show one possible application. \n\no Adapting set structured data to fit in a sequence based language model using proxy text is an interesting idea.\n \n> For several methods proposed, they have to rely on unexplained heuristics (or graph approximations) for the adaptation to work. For example, why taking average line 9 Algorithm 1 solves problem (5) with an additional constraint that \\mathbf{U}s are same? \n\nØ Although its a heuristic, but in our implementation we empirically observe that our algorithm converges successfully. Averaging can be interpreted as equal contribution from the latent factors learned from different cardinality (uniform) sub-hypergraphs. Also the optimization objective of problem (5) is unweighted. \n\n> Problem 5 is also not clearly defined: why is there superscript $k$ on the optimization variable when the objective is sum over all degrees $k$?\n \nØ We have clarified the problem definition more precisely.\n \n> It is not clear why it makes sense to adapt sen2vec (where sequence matters) for the problem of embedding hyperedges (which is just a set). 
To get a sequence independent embedding, they again have to rely on heuristics.\n \nØ As clarified above, hyperedge2vec using sen2vec is a baseline method. Given that sen2vec is designed for sequences, we have to generate a proxy node sequence, i.e. proxy text, to be used as input for sen2vec.\n \n> Overall, the paper only tries to use all the techniques developed for learning on hypergraphs (e.g., tensor decomposition for k-uniform hypergraphs, approximating a hypergraph with a clique graph etc.) to develop the embedding methods for hyperedges. It also does not show/discuss which method is more suitable to a given setting. In the experiments, they show very similar results for all methods. Comparison of proposed methods against a baseline is missing.\n \nØ As pointed out previously, overall we propose two methods which are principally designed to handle hypergraph-structured data. Our experiments show that, accuracy-wise, our methods perform similarly to baselines which are not designed for hypergraphs. Moreover, our tensor-based method is quite efficient compared to the deep-learning-based auto-encoder method. We therefore argue that we have proposed more general methods which are suited for hypergraphs (and therefore also for graphs) while maintaining accuracy and efficiency. ",
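The averaging heuristic defended in the rebuttal above (equal contribution of the latent factors learned from the different-cardinality uniform sub-hypergraphs) can be illustrated with a short hypothetical sketch; it assumes each k-uniform sub-hypergraph has already been decomposed into a node factor matrix U_k of a common shape, and the hyperedge aggregation by node-embedding averaging is an illustrative choice, not necessarily the paper's definition.

```python
import numpy as np

def average_factors(factors_by_cardinality):
    """`factors_by_cardinality` maps cardinality k -> factor matrix U_k of shape
    (n_nodes, rank). Returns the unweighted average, i.e. every cardinality
    contributes equally to the shared node embedding."""
    factors = list(factors_by_cardinality.values())
    assert all(f.shape == factors[0].shape for f in factors), "factors must share a shape"
    return np.mean(np.stack(factors, axis=0), axis=0)

def hyperedge_embedding(U, hyperedge):
    """Embed a hyperedge (a set of node indices) by aggregating the embeddings of
    its member nodes, here simply by averaging them."""
    return U[list(hyperedge)].mean(axis=0)
```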
"Dear Reviewer, \n\n> This paper studies the problem of representation learning in hyperedges. The author claims their novelty for using several different models to build hyperedge representations. To generate representations for hyperedge, this paper proposes to use several different models such as Denoising AutoEncoder, tensor decomposition, word2vec or spectral embeddings. Experimental results show the effectiveness of these models in several different datasets.\n \n> The author uses several different models (both recent studies like Node2Vec / sen2vec, and older results like spectral or tensor decomposition). The idea of studying embedding of a hypergraph is interesting and novel, and the results show that several different kinds of methods can all provide meaningful results for realistic applications.\n \n> Despite the novel idea about hyperedge embedding generation, the paper is not easy to follow. The introduction of ``hypergraph`` takes too much space in preliminary, while the problem for generating embeddings of hyperedge is the key of paper.\n\nØ We have taken care these. \n\n> Further, the experiments only present several models this paper described. Some recent papers about hypergraph and graph structure (even though cannot generate embeddings directly) are still worth mention and compare in the experimental section. It will be persuasive to mention related methods in similar tasks. it would better better if the author can add some related work about hyperedge graph studies.\n\n Ø We have revised our related work.",
"Dear Reviewer,\n\nPlease find our replies in-line below.\n \n* typos\n* Recent surveys on graph embeddings have been published in 2017 and should be cited as \"A comprehensive survey of graph embedding ...\" by Cai et al.\n\nØ This we have taken care \n\n* Preliminaries. The occurrence number R(g_i) are not modeled in the hypergraphs. A graph N_a is defined but not used in the paper.\n\nØ We have used it in our methods.\n\n* Section 3.1. the procedure for sampling hyperedges in the lattice should be given. At least, you should explain how it is made efficient when the number of nodes is large.\n \nØ This we have taken care of by explaining in further detail the procedure.\n \n* Section 3.2. The method seems to be restricted to cases where the cardinality of hyperedges can take a small number of values. This is discussed in Section 3.6 but the discussion is not convincing enough.\n \nØ Exact prediction of a “large” set is rarely found in real world applications. If we need to go beyond size six we can always employ distributed hypergraph computation frameworks like MESH [2] or HyperX [3].\n \n* Section 3.3 The term Sen2vec is not common knowledge\n* Section 3.3 The length of the sentences depends on the number of permutations of $k$ elements. How can you deal with large k ?\n \t\nØ This is a baseline method as we have clarified. And therefore, this is an inherent limitation of the baseline which we have have adapted for comparison purposes. Although, if we still need to use this baseline with large ‘k’ we can again use distributed hypergraph computation frameworks like MESH [2] or HyperX [3] or in general any other distributed computation for scalable enumeration. \n \t\n* Section 3.4 and Section 3.5. The methods proposed in these two sections should be related with previous works on hypergraph kernels. I.e. there should be mentioned on the clique expansion and star expansion of hypergraphs. This leads to the question why graph embeddings methods on these expansions have not be considered in the paper.\n \nØ We have considered the most popular normalized hypergraph laplacian, which has been extensively used in various application domains and is considered state of the art. Not just clique/star expansion but there are number of other hypergraph laplacians that can be employed. The paper by Agarwal et. al. [1] lists several such laplacians, and infact shows that number of them are actually equivalent. Specially the clique / star expansion one are shown to be equivalent with the normalized laplacian we have employed in our work. We therefore, leave this exploration of various such hypergraph laplacians which actually work on a proxy graph as something for future work. \n \n* Section 4.1. Only hyperedges of cardinality in [2,6] are considered. This seems a rather strong limitation and this hypothesis does not seem pertinent in many applications.\n \nØ Our algorithms in general works for any given cardinality range [c_min,c_max]. In the datasets used we found that a large portion of the hyperedges were found in the range [2,6]. Therefore, for our experimentation purpose this was a suitable choice. If we need to go beyond size six or any larger c_max, we can always go distributed hypergraph computation frameworks like MESH [2] or HyperX [3].\n \n* Section 4. For online multi-player games, hypernode embeddings only allow to evaluate existing teams, i.e. already existing as hyperedges in the input hypergraph. 
One of the most important problem for multi-player games is team making where team evaluation should be made for all possible teams.\n \nØ We think that team formation is a separate problem in its own right. Team performance is one of the problems we have chosen to illustrate the use of hyperedge embedding as a social science application. \n \n* Section 5. Seems redundant with the Introduction.\n \nØ We have taken care of this.\n\nReferences:\n\n[1] Agarwal, Sameer, Kristin Branson, and Serge Belongie. \"Higher order learning with graphs.\" In Proceedings of the 23rd international conference on Machine learning, pp. 17-24. ACM, 2006.\n\n[2] Heintz, Benjamin, and Abhishek Chandra. \"Enabling Scalable Social Group Analytics via Hypergraph Analysis Systems.\" In the USENIX Workshop on Hot Topics in Cloud Computing (HotCloud). Santa Clara, CA. July, 2015.\n\n[3] J. Huang, R. Zhang, and J. X. Yu, “Scalable hypergraph learning and processing,” in Proc. of ICDM, Nov 2015, pp. 775–780."
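For readers who want a concrete picture of the spectral baseline referred to in the response above, the sketch below shows how node and hyperedge embeddings could be computed from the normalized hypergraph Laplacian of Zhou et al. This is an illustrative reconstruction rather than the authors' code: the function name, the toy incidence matrix, and the mean-pooling used to form hyperedge vectors are our own assumptions.

```python
import numpy as np

def hypergraph_spectral_embedding(H, w=None, dim=2):
    """Node/hyperedge embeddings from the normalized hypergraph Laplacian
    Delta = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} (Zhou et al., 2006).

    H   : (n_nodes, n_edges) binary incidence matrix
    w   : optional hyperedge weights, shape (n_edges,); defaults to all ones
    dim : embedding dimensionality (smallest non-trivial eigenvectors)
    """
    n_nodes, n_edges = H.shape
    w = np.ones(n_edges) if w is None else np.asarray(w, dtype=float)

    d_v = H @ w                          # weighted node degrees
    d_e = H.sum(axis=0)                  # hyperedge cardinalities
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d_v, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(d_e, 1e-12))

    Theta = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    L = np.eye(n_nodes) - Theta          # normalized hypergraph Laplacian

    # For a connected hypergraph the smallest eigenvalue is 0 (trivial direction),
    # so the embedding uses the next `dim` eigenvectors.
    _, eigvecs = np.linalg.eigh(L)
    node_emb = eigvecs[:, 1:dim + 1]

    # One simple hyperedge representation: mean-pool the member-node embeddings.
    edge_emb = (H.T @ node_emb) / np.maximum(d_e, 1e-12)[:, None]
    return node_emb, edge_emb

# Toy usage: 5 nodes, 3 hyperedges.
H = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)
node_emb, edge_emb = hypergraph_spectral_embedding(H)
print(node_emb.shape, edge_emb.shape)    # (5, 2) (3, 2)
```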
]
} | {
"paperhash": [
"agarwal|higher_order_learning_with_graphs",
"ahmed|distributed_large-scale_natural_graph_factorization",
"ahmed|yannick_atouba_ada,_and_marshall_scott_poole._identification_of_groups_in_online_environments:_the_twist_and_turns_of_grouping_groups",
"anthony|molecular_set_theory:_a_mathematical_representation_for_chemical_reaction_mechanisms",
"belkin|laplacian_eigenmaps_and_spectral_techniques_for_embedding_and_clustering",
"bengio|modeling_high-dimensional_discrete_data_with_multi-layer_neural_networks",
"bengio|representation_learning:_a_review_and_new_perspectives",
"bengio|learning_deep_architectures_for_ai",
"berge|graphs_and_hypergraphs",
"berge|hypergraphs:_combinatorics_of_finite_sets",
"bretto|hypergraph_theory._an_introduction._mathematical_engineering",
"samuel|a_game-theoretic_approach_to_hypergraph_clustering",
"cai|a_comprehensive_survey_of_graph_embedding:_problems,_techniques_and_applications",
"christakopoulou|hoslim:_higher-order_sparse_linear_method_for_topn_recommender_systems",
"cooper|spectra_of_uniform_hypergraphs",
"trevor|multidimensional_scaling",
"deshpande|item-based_top-n_recommendation_algorithms",
"estrada|complex_networks_as_hypergraphs",
"fagin|degrees_of_acyclicity_for_hypergraphs_and_relational_database_schemes",
"gao|laplacian_sparse_coding,_hypergraph_laplacian_sparse_coding,_and_applications",
"grover|node2vec:_scalable_feature_learning_for_networks",
"han|clustering_based_on_association_rule_hypergraphs",
"heintz|enabling_scalable_social_group_analytics_via_hypergraph_analysis_systems",
"heintz|mesh:_a_flexible_distributed_hypergraph_processing_system",
"huang|scalable_hypergraph_learning_and_processing",
"hwang|learning_on_weighted_hypergraphs_to_integrate_protein_interactions_and_gene_expressions_for_cancer_outcome_prediction",
"klamt|hypergraphs_and_cellular_networks",
"tamara|tensor_decompositions_and_applications",
"quoc|distributed_representations_of_sentences_and_documents",
"li|news_recommendation_via_hypergraph_learning:_encapsulation_of_user_behavior_and_news_content",
"mikolov|efficient_estimation_of_word_representations_in_vector_space",
"mikolov|distributed_representations_of_words_and_phrases_and_their_compositionality",
"munkres|elements_of_algebraic_topology",
"perozzi|deepwalk:_online_learning_of_social_representations",
"qi|eigenvalues_of_a_real_supersymmetric_tensor",
"rezatofighi|deepsetnet:_predicting_sets_with_deep_neural_networks",
"sam|nonlinear_dimensionality_reduction_by_locally_linear_embedding",
"sharma|predicting_small_group_accretion_in_social_networks:_a_topology_based_incremental_approach",
"sharma|weighted_simplicial_complex:_a_novel_approach_for_predicting_small_group_evolution",
"shashua|multi-way_clustering_using_super-symmetric_nonnegative_tensor_factorization",
"skiena|hasse_diagrams._implementing_discrete_mathematics:_combinatorics_and_graph_theory_with_mathematica",
"socher|parsing_with_compositional_vector_grammars",
"steven|exploring_complex_networks",
"subbian|content-centric_flow_mining_for_influence_analysis_in_social_streams",
"tang|pte:_predictive_text_embedding_through_large-scale_heterogeneous_text_networks",
"tang|line:_large-scale_information_network_embedding",
"tang|leveraging_social_media_networks_for_classification",
"tenenbaum|a_global_geometric_framework_for_nonlinear_dimensionality_reduction",
"tian|learning_deep_representations_for_graph_clustering",
"tian|a_hypergraph-based_learning_algorithm_for_classifying_gene_expression_and_arraycgh_data_with_prior_knowledge",
"vincent|extracting_and_composing_robust_features_with_denoising_autoencoders",
"vincent|stacked_denoising_autoencoders:_learning_useful_representations_in_a_deep_network_with_a_local_denoising_criterion",
"vinyals|order_matters:_sequence_to_sequence_for_sets",
"xiaoyi|a_deep_learning_approach_to_link_prediction_in_dynamic_networks",
"zhou|learning_with_hypergraphs:_clustering,_classification,_and_embedding",
"zhou|learning_with_hypergraphs:_clustering,_classification,_and_embedding"
],
"title": [
"Higher order learning with graphs",
"Distributed large-scale natural graph factorization",
"Yannick Atouba Ada, and Marshall Scott Poole. Identification of groups in online environments: The twist and turns of grouping groups",
"Molecular set theory: A mathematical representation for chemical reaction mechanisms",
"Laplacian eigenmaps and spectral techniques for embedding and clustering",
"Modeling high-dimensional discrete data with multi-layer neural networks",
"Representation learning: A review and new perspectives",
"Learning deep architectures for ai",
"Graphs and hypergraphs",
"Hypergraphs: combinatorics of finite sets",
"Hypergraph theory. An introduction. Mathematical Engineering",
"A game-theoretic approach to hypergraph clustering",
"A comprehensive survey of graph embedding: Problems, techniques and applications",
"Hoslim: higher-order sparse linear method for topn recommender systems",
"Spectra of uniform hypergraphs",
"Multidimensional scaling",
"Item-based top-n recommendation algorithms",
"Complex networks as hypergraphs",
"Degrees of acyclicity for hypergraphs and relational database schemes",
"Laplacian sparse coding, hypergraph laplacian sparse coding, and applications",
"node2vec: Scalable feature learning for networks",
"Clustering based on association rule hypergraphs",
"Enabling scalable social group analytics via hypergraph analysis systems",
"Mesh: A flexible distributed hypergraph processing system",
"Scalable hypergraph learning and processing",
"Learning on weighted hypergraphs to integrate protein interactions and gene expressions for cancer outcome prediction",
"Hypergraphs and cellular networks",
"Tensor decompositions and applications",
"Distributed representations of sentences and documents",
"News recommendation via hypergraph learning: encapsulation of user behavior and news content",
"Efficient estimation of word representations in vector space",
"Distributed representations of words and phrases and their compositionality",
"Elements of algebraic topology",
"Deepwalk: Online learning of social representations",
"Eigenvalues of a real supersymmetric tensor",
"Deepsetnet: Predicting sets with deep neural networks",
"Nonlinear dimensionality reduction by locally linear embedding",
"Predicting small group accretion in social networks: A topology based incremental approach",
"Weighted simplicial complex: A novel approach for predicting small group evolution",
"Multi-way clustering using super-symmetric nonnegative tensor factorization",
"Hasse diagrams. Implementing Discrete Mathematics: Combinatorics and Graph Theory With Mathematica",
"Parsing With Compositional Vector Grammars",
"Exploring complex networks",
"Content-centric flow mining for influence analysis in social streams",
"Pte: Predictive text embedding through large-scale heterogeneous text networks",
"Line: Large-scale information network embedding",
"Leveraging social media networks for classification",
"A global geometric framework for nonlinear dimensionality reduction",
"Learning deep representations for graph clustering",
"A hypergraph-based learning algorithm for classifying gene expression and arraycgh data with prior knowledge",
"Extracting and composing robust features with denoising autoencoders",
"Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion",
"Order matters: Sequence to sequence for sets",
"A deep learning approach to link prediction in dynamic networks",
"Learning with hypergraphs: Clustering, classification, and embedding",
"Learning with hypergraphs: Clustering, classification, and embedding"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"sameer agarwal",
"kristin branson",
"serge belongie"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"amr ahmed",
"nino shervashidze",
"shravan narayanamurthy",
"vanja josifovski",
"alexander j smola"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"iftekhar ahmed",
"channing brown",
"andrew pilny",
"dora cai"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"f anthony",
" bartholomay"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mikhail belkin",
"partha niyogi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoshua bengio",
"samy bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoshua bengio",
"aaron courville",
"pascal vincent"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"c berge"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"claude berge"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alain bretto"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"r samuel",
"marcello bulò",
" pelillo"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"hongyun cai",
"vincent w zheng",
"kevin chen",
"-chuan chang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"evangelia christakopoulou",
"george karypis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"joshua cooper",
"aaron dutle"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"f trevor",
"michael aa cox",
" cox"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mukund deshpande",
"george karypis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ernesto estrada",
"juan a rodriguez- velazquez"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"r fagin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"shenghua gao",
"ivor wai-hung",
"liang-tien tsang",
" chia"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"aditya grover",
"jure leskovec"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"e h han",
"g karypis",
"v kumar",
"b mobasher"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"benjamin heintz",
"abhishek chandra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"benjamin heintz",
"shivangi singh",
"rankyung hong",
"guarav khandelwal",
"corey tesdahl",
"abhishek chandra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jin huang",
"rui zhang",
"jeffrey xu",
"yu "
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"taehyun hwang",
"ze tian",
"rui kuangy",
"jean-pierre kocher"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"s klamt",
"u u haus",
"f theis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"g tamara",
"brett w kolda",
" bader"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"v quoc",
"tomas le",
" mikolov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"lei li",
"tao li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tomas mikolov",
"kai chen",
"greg corrado",
"jeffrey dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tomas mikolov",
"ilya sutskever",
"kai chen",
"greg corrado",
"jeffrey dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" james r munkres"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"bryan perozzi",
"rami al-rfou",
"steven skiena"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"liqun qi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"seyed hamid rezatofighi",
"anton milan",
"ehsan abbasnejad",
"anthony dick",
"ian reid"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"t sam",
"lawrence k roweis",
" saul"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ankit sharma",
"rui kuang",
"jaideep srivastava",
"xiaodong feng",
"kartik singhal"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ankit sharma",
"terrence j moore",
"ananthram swami",
"jaideep srivastava"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"amnon shashua",
"ron zass",
"tamir hazan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"steven skiena"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"richard socher",
"alex perelygin",
"jean wu",
"jason chuang",
"christopher manning",
"andrew ng",
"christopher potts"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"h steven",
" strogatz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"karthik subbian",
"charu aggarwal",
"jaideep srivastava"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jian tang",
"meng qu",
"qiaozhu mei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jian tang",
"meng qu",
"mingzhe wang",
"ming zhang",
"jun yan",
"qiaozhu mei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"lei tang",
"huan liu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"joshua b tenenbaum",
"vin de silva",
"john c langford"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"bin fei tian",
"qing gao",
"enhong cui",
"tie-yan chen",
" liu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ze tian",
"taehyun hwang",
"rui kuang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pascal vincent",
"hugo larochelle",
"yoshua bengio",
"pierre-antoine manzagol"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pascal vincent",
"hugo larochelle",
"isabelle lajoie",
"yoshua bengio",
"pierre-antoine manzagol"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"oriol vinyals",
"samy bengio",
"manjunath kudlur"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"li xiaoyi",
"li hui du nan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"d zhou",
"j huang",
"b scholkopf"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"dengyong zhou",
"jiayuan huang",
"bernhard schölkopf"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"1206.5538v3",
"",
"",
"",
"",
"",
"1709.07604v3",
"",
"1106.4856v3",
"",
"",
"physics/0505137v1",
"",
"",
"1607.00653v1",
"",
"",
"1904.00549v2",
"",
"",
"",
"",
"1405.4053v2",
"",
"1301.3781v3",
"1310.4546v1",
"",
"1403.6652v2",
"",
"1611.08998v5",
"",
"1507.03183v1",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"1511.06391v4",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.444444 | 0.583333 | null | null | null | null | null | rJ5C67-C- |
||
ginsburg|large_batch_training_of_convolutional_networks_with_layerwise_adaptive_rate_scaling|ICLR_cc_2018_Conference | Large Batch Training of Convolutional Networks with Layer-wise Adaptive Rate Scaling | A common way to speed up training of large convolutional networks is to add computational units. Training is then performed using data-parallel synchronous Stochastic Gradient Descent (SGD) with a mini-batch divided between computational units. With an increase in the number of nodes, the batch size grows. However, training with a large batch often results in lower model accuracy. We argue that the current recipe for large batch training (linear learning rate scaling with warm-up) is not general enough and training may diverge. To overcome these optimization difficulties, we propose a new training algorithm based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled AlexNet and ResNet-50 to a batch size of 16K. | {
"name": [],
"affiliation": []
} | A new large batch training algorithm based on Layer-wise Adaptive Rate Scaling (LARS); using LARS, we scaled AlexNet and ResNet-50 to a batch of 16K. | [
"large batch",
"LARS",
"adaptive rate scaling"
] | null | 2018-02-15 22:29:50 | 14 | null | null | null | null | null | null | null | null | false | Pros:
+ The proposed large-batch, synchronous SGD method is able to generalize at larger batch sizes than previous approaches (e.g., Goyal et al., 2017).
Cons:
- Evaluation on more than one task would make the paper more convincing.
- The addition of more hyperparameters makes the proposed algorithm less appealing.
- Some theoretical justification of the layer-wise rate scaling would help.
- It isn't clear that the comparison to Goyal et al., 2017 is entirely fair, because that paper also had recommendations for the implementation of batch normalization, weight decay, and a momentum correction as the learning rate is scaled up, but this submission does not address any of those.
Although the revised paper addressed many of the reviewers' concerns, they still did not feel it was quite strong enough to be accepted to ICLR.
| {
"review_id": [
"ry33G5FxM",
"Sk-rQ7qxf",
"S1b88I5xM"
],
"review": [
{
"title": "title: A layer-wise learning rate is proposed. Some state-of-the-art baselines are missing in comparison. ",
"paper_summary": null,
"main_review": "main_review: This paper provides an optimization approach for large batch training of CNN with layer-wise adaptive learning rates. \nIt starts from the observation that the ratio between the L2-norm of parameters and that of gradients on parameters varies\nsignificantly in the optimization, and then introduce a local learning rate to consider this observation for a more stable and efficient optimization. Experimental results show improvements compared with the state-of-the-art algorithm.\n\nReview:\n(1) Pros\nThe proposed optimization method considers the dynamic self-adjustment of the learning rate in the optimization based on the ratio between the L2-norm of parameters and that of gradients on parameters when the batch size increases, and shows improvements in experiments compared with previous methods.\n\n(2) Cons\ni) LR \"warm-up\" can mitigate the unstable training in the initial phase and the proposed method is also motivated by the stability but uses a different approach. However, it seems that the authors also combine with LR \"warm-up\" in your proposed method in the experimental part, e.g., Table 3. So does it mean that the proposed method cannot handle the problem in general?\n\nii) There is one coefficient that is independent from layers and needs to be set manually in the proposed local learning rate. The authors do not have a detail explanation and experiments about it. In fact, as can be seen in the Algorithm 1, this coefficient can be as an independent hyper-parameter (even is put with the global learning rate together as one fix term).\n\niii) In the section 6, when increase the training steps, experiments compared with previous methods should be implemented since they can also get better results with more epochs.\n\niv) Writing should be improved, e.g., the first paragraph in section 6. Some parts are confusing, for example, the authors claim that they use initial LR=0.01, but in Table 1(a) it is 0.02. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: The paper proposes a heuristic for layer-wise learning rate selection in convolutional networks. The contribution is relatively minor and is not evaluated with sufficient depth.",
"paper_summary": null,
"main_review": "main_review: The paper proposes a new approach to determine learning late for convolutional neural networks. It starts from observation that for batch learning with a fixed number of epochs, the accuracy drops when the batch size is too large. Assuming that the number or epochs and batch size are fixed, the contribution of the paper is a heuristic that assigns different learning late to each layer of a network depending on a ratio of the norms of weights and gradients in a layer. The experimental results show that the proposed heuristic helps AlexNet and ResNet end up in a larger accuracy on ImageNet data.\n Positives:\n- the proposed approach is intuitively justified\n- the experimental results are encouraging\n Negatives:\n- the methodological contribution is minor\n- no attempt is made to theoretically justify the proposed heuristic\n- the method introduces one or two new hyperparameters and it is not clear from the experimental results what overhead is this adding to network training\n- the experiments are done only on a single data set, which is not sufficient to establish superiority of an approach\n Suggestions:\n- consider using different abbreviation (LARS is used for least-angle regression) \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: This paper proposes a training algorithm based on Layer-wise Adaptive Rate Scaling (LARS) to overcome the optimization difficulties for training with large batch size. The authors use a linear scaling and warm-up scheme to train AlexNet on ImageNet. The results show promising performance when using a relatively large batch size. The presented method is interesting. However, the experiments are poorly organized since some necessary descriptions and discussions are missing. ",
"paper_summary": null,
"main_review": "main_review: This paper proposes a training algorithm based on Layer-wise Adaptive Rate Scaling (LARS) to overcome the optimization difficulties for training with large batch size. The authors use a linear scaling and warm-up scheme to train AlexNet on ImageNet. The results show promising performance when using a relatively large batch size. The presented method is interesting. However, the experiments are poorly organized since some necessary descriptions and discussions are missing. My detailed comments are as follows.\n\nContributions:\n\n1.\tThe authors propose a training algorithm based LARS with the adaptive learning rate for each layer, and train the AlexNet and ResNet-50 to a batch size of 16K. \n2.\tThe training method shows stable performance and helps to avoid gradient vanishing or exploding.\n\nWeak points:\n\nThe training algorithm does not overcome the optimization difficulties when the batch size becomes larger (e.g. 32K), where the training becomes unstable, and the training based on LARS and warm-up can’t improve the accuracy compared to the baselines. \n\nSpecific comments: \n\n1.\tIn Algorithm 1, how to choose $ \\eta $ and $ \\beta $ in the experiment?\n2.\tUnder the line of Equation (3), $ \\nabla L(x_j, w_{t+1}) \\approx L(x_j, w_{t}) $ should be $ \\nabla L(x_j, w_{t+1}) \\approx \\nabla L(x_j, w_{t}) $.\n3.\tHow can the training algorithm based on LARS improve the generalization for the large batch? \n4.\tIn the experiments, what is the parameter iter_size? How to choose it?\n5.\tIn the experiments, no descriptions and discussions are given for Table 3, Figure 4, Table 4, Figure 5, Table 5 and Table 6. The authors should give more discussions on these tables and figures. Furthermore, the captions of these tables and figures confusing.\n6.\tOn page 4, there is a statement “The ratio is high during the initial phase, and it is rapidly decreasing after few epochs (see Figure 2).” This is quite confusing, since Figure 2 is showing the change of learning rates w.r.t. training epochs.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.3333333432674408,
0.4444444477558136
],
"confidence": [
0.5,
0.75,
1
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Reply to comment 2",
"Reply to Comment 1",
"Reply to Comment 3"
],
"comment": [
"1. Comment 1: \"the methodological contribution is minor\"\n A: We proposed a new training method, which enable training with large batch of networks which is not possible with all other methods (AFAK). \n\n2. Comment 2: \" no attempt is made to theoretically justify the proposed heuristic\"\n A: \" Agree, unfortunately most methods used for deep learning don't have formal proof yet\"\n\n3. Comment 3: \"the method introduces one or two new hyperparameters and it is not clear from the experimental results what overhead is this adding to network training\"\nA: there is one hyper-parameter - trust coefficient $0<eta<1\"$ . I added the explanation how it depends to revised paper \n\n4. Comment 4: \"the experiments are done only on a single data set, which is not sufficient to establish superiority of an approach\"\nA: We focused on Imagenet classification only for large batch training The results in the paper are for 3 models (Alexnet, Alexnet-BN, and Resnet-50). I will add Googlenet results for completeness. \n\n5. Suggestions: \" consider using different abbreviation (LARS is used for least-angle regression) \"\n A: Agree, probably LARC (Layer-wise Adaptive Rate Control\" would be better, but the algorithm was already implemented in nvcaffe when we realized that there is a name collision. ",
"Q: \"Weak points: The training algorithm does not overcome the optimization difficulties when the batch size becomes larger (e.g. 32K), where the training becomes unstable, and the training based on LARS and warm-up can’t improve the accuracy compared to the baselines\"\nA:\n1) Standard recipe \"increase learning rate proportionally to batch size\" does not work for such networks as Alexnet and Googlenet even with warm-up. LARS is the only algorithm (AFAK) that allows to train Alexnet with batch > 2K to the same accuracy as for small batches\n2) We added Appendix with data on LARS performanse comparing to other \"Large Batch training\" methods for Resnet-50. \n\nSpecific comments: \n1) Q: \"In Algorithm 1, how to choose $ \\eta $ and $ \\beta $ in the experiment?\"\nA: $ $0<\\eta<1 $, and it depends on the batch size $B$. It grows with B: for example for Alexnet with B=1K the optimal $\\eta=0.002$, with B=4K the optimal $\\eta=0.005$, with B=4K the optimal $\\eta=0.008$,... Weight decay $\\beta$ is chosen as usual. We found that with large batch it's beneficial to increase weight decay to improve the regularization \n2) typo: fixed in the revised paper \n3) Q: \"How can the training algorithm based on LARS improve the generalization for the large batch?\"\nA: LARS does not replace standard regularization methods (weight decay, batch norm, or data augmentation). But we found that with LARS we can use larger weight decay than usual, since LARS automatically limits the norm of weights during training: $|| W(T)|| <= ||W(0)|| * exp \\int_{0}^{T} \\gamma(t) dt$. \n4) Q: In the experiments, what is the parameter iter_size? How to choose it?\n A: iter_size is used in caffe to emulate large batch if batch does not fit into GPU DRAM. For example if the batch which fits in GPU memory is 1K, and we want to use B=8K, then iter_size=8.\n5) and 6) We will add more explanation to the revised paper\n ",
"1. Comment : \"LR \"warm-up\" can mitigate the unstable training in the initial phase and the proposed method is also motivated by the stability but uses a different approach. However, it seems that the authors also combine with LR \"warm-up\" in your proposed method in the experimental part, e.g., Table 3. So does it mean that the proposed method cannot handle the problem in general?\"\nA: Warm-up alone is not able to mitigate the unstable training for Alexnet. LARS with warm-up can. There is also a new version of algorithm which eliminates warm-up completely\n\n2. Comment: \"There is one coefficient that is independent from layers and needs to be set manually in the proposed local learning rate. The authors do not have a detail explanation and experiments about it. In fact, as can be seen in the Algorithm 1, this coefficient can be as an independent hyper-parameter (even is put with the global learning rate together as one fix term).\"\nA: Agree. In the paper we used fixed trust coefficient $eta\" and changed learning rate. One can used instead fixed global learning rate policy which does not depend on networks, and scale up only trust coefficient . I will add the explanation to the revised paper.\n\n3. Comment \"In the section 6, when increase the training steps, experiments compared with previous methods should be implemented since they can also get better results with more epochs.\" \nA: The point of section 6 was to show that there is no \"fundamental\" limit on the accuracy of large batch training, provided we do train it long enough and regularize well (e.g. increase weight decay or add data augmentation. \n\n4. Comment 4: \"the authors claim that they use initial LR=0.01, but in Table 1(a) it is 0.02\"\n A: typo is fixed in the revised paper. \n\n \n"
]
} | {
"paperhash": [
"chen|revisiting_distributed_synchronous_sgd",
"codreanu|blog:_achieving_deep_learning_training_in_less_than_40_minutes_on_imagenet-1k_with_scale-out_intel_r_xeon_tm_/xeon_phi_tm_architectures",
"deng|imagenet:_a_large-scale_hierarchical_image_database",
"goyal|yangqing_jia,_and_kaiming_he._accurate,_large_minibatch_sgd:_training_imagenet_in_1_hour",
"he|deep_residual_learning_for_image_recognition",
"hoffer|train_longer,_generalize_better:_closing_the_generalization_gap_in_large_batch_training_of_neural_networks",
"ioffe|batch_normalization:_accelerating_deep_network_training_by_reducing_internal_covariate_shift",
"keskar|on_large-batch_training_for_deep_learning:_generalization_gap_and_sharp_minima",
"kingma|adam:_a_method_for_stochastic_optimization",
"krizhevsky|one_weird_trick_for_parallelizing_convolutional_neural_networks",
"krizhevsky|imagenet_classification_with_deep_convolutional_neural_networks",
"lafond|diagonal_rescaling_for_neural_networks",
"li|scaling_distributed_machine_learning_with_system_and_algorithm_co-design",
"li|efficient_mini-batch_training_for_stochastic_optimization"
],
"title": [
"Revisiting distributed synchronous sgd",
"Blog: Achieving deep learning training in less than 40 minutes on imagenet-1k with scale-out intel R xeon TM /xeon phi TM architectures",
"Imagenet: A large-scale hierarchical image database",
"Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour",
"Deep residual learning for image recognition",
"Train longer, generalize better: closing the generalization gap in large batch training of neural networks",
"Batch normalization: accelerating deep network training by reducing internal covariate shift",
"On large-batch training for deep learning: Generalization gap and sharp minima",
"Adam: a method for stochastic optimization",
"One weird trick for parallelizing convolutional neural networks",
"Imagenet classification with deep convolutional neural networks",
"Diagonal rescaling for neural networks",
"Scaling Distributed Machine Learning with System and Algorithm Co-design",
"Efficient mini-batch training for stochastic optimization"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"jianmin chen",
"rajat monga",
"samy bengio",
"rafal jozefowicz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"valeriu codreanu",
"damian podareanu",
"vikram saletore"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jia deng",
"wei dong",
"richard socher",
"li-jia li",
"kai li",
"li fei-fei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"priya goyal",
"piotr dollár",
"ross girshick",
"pieter noordhuis",
"lukasz wesolowski",
"aapo kyrola",
"andrew tulloch"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"elad hoffer",
"itay hubara",
"daniel soudry"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sergey ioffe",
"christian szegedy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nitish shirish keskar",
"dheevatsa mudigere",
"jorge nocedal",
"mikhail smelyanskiy",
"ping tak",
"peter tang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik kingma",
"jimmy ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex krizhevsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex krizhevsky",
"ilya sutskever",
"geoffrey e hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jean lafond",
"nicolas vasilache",
"léon bottou"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mu li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mu li",
"tong zhang",
"yuqiang chen",
"alexander j smola"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1702.05800v2",
"",
"",
"arXiv:1706.02677",
"1512.03385v1",
"1705.08741v2",
"1502.03167v3",
"arXiv:1609.04836",
"1412.6980v9",
"1404.5997v2",
"",
"1705.09319v1",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.407407 | 0.75 | null | null | null | null | null | rJ4uaX2aW |
||
sasaki|deterministic_policy_imitation_gradient_algorithm|ICLR_cc_2018_Conference | Deterministic Policy Imitation Gradient Algorithm | The goal of imitation learning (IL) is to enable a learner to imitate an expert’s behavior given the expert’s demonstrations. Recently, generative adversarial imitation learning (GAIL) has successfully achieved it even on complex continuous control tasks. However, GAIL requires a huge number of interactions with environment during training. We believe that IL algorithm could be more applicable to the real-world environments if the number of interactions could be reduced. To this end, we propose a model free, off-policy IL algorithm for continuous control. The keys of our algorithm are two folds: 1) adopting deterministic policy that allows us to derive a novel type of policy gradient which we call deterministic policy imitation gradient (DPIG), 2) introducing a function which we call state screening function (SSF) to avoid noisy policy updates with states that are not typical of those appeared on the expert’s demonstrations. Experimental results show that our algorithm can achieve the goal of IL with at least tens of times less interactions than GAIL on a variety of continuous control tasks. | {
"name": [],
"affiliation": []
} | We propose a model free imitation learning algorithm that is able to reduce number of interactions with environment in comparison with state-of-the-art imitation learning algorithm namely GAIL. | [
"Imitation Learning"
] | null | 2018-02-15 22:29:46 | 32 | null | null | null | null | null | null | null | null | false | All of the reviewers found some aspects of the formulation and experiments interesting, but they found the paper hard to read and understand. Some of the components of the technique such as the state screening function (SSF) seem ad-hoc and heuristic without much justification. Please improve the exposition and remove the unnecessary component of the technique, or come up with better justifications. | {
"review_id": [
"S1tVQ5Kef",
"S1_na_OlG",
"B1nuCculG"
],
"review": [
{
"title": "title: Combines IRL, adversarial training, and ideas from deterministic policy gradients. Paper is hard to read. MuJoCo results are good.",
"paper_summary": null,
"main_review": "main_review: The paper lists 5 previous very recent papers that combine IRL, adversarial learning, and stochastic policies. The goal of this paper is to do the same thing but with deterministic policies as a way of decreasing the sample complexity. The approach is related to that used in the deterministic policy gradient work. Imitation learning results on the standard control problems appear very encouraging.\n\nDetailed comments:\n\n\"s with environment\" -> \"s with the environment\"?\n\n\"that IL algorithm\" -> \"that IL algorithms\".\n\n\"e to the real-world environments\" -> \"e to real-world environments\".\n\n\" two folds\" -> \" two fold\".\n\n\"adopting deterministic policy\" -> \"adopting a deterministic policy\".\n\n\"those appeared on the expert’s demonstrations\" -> \"those appearing in the expert’s demonstrations\".\n\n\"t tens of times less interactions\" -> \"t tens of times fewer interactions\".\n\nOk, I can't flag all of the examples of disfluency. The examples above come from just the abstract. The text of the paper seems even less well edited. I'd highly recommend getting some help proof reading the work.\n\n\"Thus, the noisy policy updates could frequently be performed in IL and make the learner’s policy poor. From this observation, we assume that preventing the noisy policy updates with states that are not typical of those appeared on the expert’s demonstrations benefits to the imitation.\": The justification for filtering is pretty weak. What is the statistical basis for doing so? Is it a form of a standard variance reduction approach? Is it a novel variance reduction approach? If so, is it more generally applicable?\n\nUnfortunately, the text in Figure 1 is too small. The smallest font size you should use is that of a footnote in the text. As such, it is very difficult to assess the results.\n\nAs best I can tell, the empirical results seem impressive and interesting.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Hard to read",
"paper_summary": null,
"main_review": "main_review: This paper proposes to extend the determinist policy gradient algorithm to learn from demonstrations. The method is combined with a type of density estimation of the expert to avoid noisy policy updates. It is tested on Mujoco tasks with expert demonstrations generated with a pre-trained network. \n\nI found the paper a bit hard to read. My interpretation is that the main original contribution of the paper (besides changing a stochastic policy for a deterministic one) is to integrate an automatic estimate of the density of the expert (probability of a state to be visited by the expert policy) so that the policy is not updated by gradient coming from transitions that are unlikely to be generated by the expert policy. \n\nI do think that this part is interesting and I would have liked this trick to be used with other imitation methods. Indeed, the deterministic policy is certainly helpful but it is tested in a deterministic continuous control task. So I'm not sure about how it generalizes to other tasks. Also, the expert demonstration are generated by the pre-trained network so the distribution of the expert is indeed the distribution of the optimal policy. So I'm not sure the experiments tell a lot. But if the density estimation could be combined with other methods and tested on other tasks, I think this could be a good paper. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: This paper proposes an extension of the generative adversarial imitation learning (GAIL) algorithm by replacing the stochastic policy of the learner with a deterministic one. Simulation results with MuJoCo physics simulator show that this simple trick reduces the amount of needed data by an order of magnitude.",
"paper_summary": null,
"main_review": "main_review: This paper considers the problem of model-free imitation learning. The problem is formulated in the framework of generative adversarial imitation learning (GAIL), wherein we alternate between optimizing reward parameters and learner policy's parameters. The reward parameters are optimized so that the margin between the cost of the learner's policy and the expert's policy is maximized. The learner's policy is optimized (using any model-free RL method) so that the same cost margin is minimized. Previous formulation of GAIL uses a stochastic behavior policy and the RIENFORCE-like algorithms. The authors of this paper propose to use a deterministic policy instead, and apply the deterministic policy gradient DPG (Silver et al., 2014) for optimizing the behavior policy. \nThe authors also briefly discuss the problem of the little overlap between the teacher's covered state space and the learner's. A state screening function (SSF) method is proposed to drive the learner to remain in areas of the state space that have been covered by the teacher. Although, a more detailed discussion and a clearer explanation is needed to clarify what SSF is actually doing, based on the provided formulation.\nExcept from a few typos here and there, the paper is overall well-written. The proposed idea seems new. However, the reviewer finds the main contribution rather incremental in its nature. Replacing a stochastic policy with a deterministic one does not change much the original GAIL algorithm, since the adoption of stochastic policies is often used just to have differentiable parameterized policies, and if the action space is continuous, then there is not much need for it (except for exploration, which is done here through re-initializations anyway). My guess is that if someone would use the GAIL algorithm for real problems (e.g, robotic task), they would significantly reduce the stochasticity of the behavior policy, which would make it virtually similar in term of data efficiency to the proposed method.\nPros:\n- A new GAIL formulation for saving on interaction data. \nCons:\n- Incremental improvement over GAIL\n- Experiments only on simulated toy problems \n- No theoretical guarantees for the state screening function (SSF) method",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.5555555820465088,
0.4444444477558136
],
"confidence": [
0.5,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Thank you for positive evaluations.",
"Responses",
"Responses"
],
"comment": [
"Thank you for your constructive comments and positive evaluations on our paper. We will clarify the role of SSF in the camera-ready version.\n\n> My interpretation is that the main original contribution of the paper (besides changing a stochastic policy for a deterministic one) is to integrate an automatic estimate of the density of the expert (probability of a state to be visited by the expert policy)\n\nThank you for clearly understanding the role of SSF.\n\n> Indeed, the deterministic policy is certainly helpful but it is tested in a deterministic continuous control task. So I'm not sure about how it generalizes to other tasks.\n\nThe expert's policy used in the experimetns is a stochastic one. Hence, the proposed method works not only on a deterministic continuous control tasks but also a stochastic one. We expect that it generalizes well to other tasks.\n",
"Thank you for your constructive comments on our paper. We will fix typos and clarify the role of SSF in the camera-ready version.\n\n> The authors also briefly discuss the problem of the little overlap between the teacher's covered state space and the learner's. A state screening function (SSF) method is proposed to drive the learner to remain in areas of the state space that have been covered by the teacher.\n\nThe main purpose of introducing a SSF is not what you mentioned. Since we use the Jacobian of reward function to derive PG as opposed to prior IL works, the Jacobian is supposed to have information about how to get close to the expert's behavior for the learner. However, in the IRL objective (4), which is general in (max-margin) IRL literature, the reward function could know how the expert acts just only on the states appearing in the demonstration. In other words, the Jacobian could have information about how to get close to the expert's behavior just only on states appearing in the demonstration. What we claimed in Sec.3.2 is that the Jacobian for states which does not appear in the demonstration is just garbage for the learner since it does not give any information about how to get close to the expert. The main purpose of introducing the SSF is to sweep the garbage as much as possible.\n\n> However, the reviewer finds the main contribution rather incremental in its nature. Replacing a stochastic policy with a deterministic one does not change much the original GAIL algorithm, since the adoption of stochastic policies is often used just to have differentiable parameterized policies, and if the action space is continuous, then there is not much need for it (except for exploration, which is done here through re-initializations anyway)\n\nFigure.1 shows worse performance of Ours \\setminus SSF which just replace a stochastic policy with a deterministic one. If Ours \\setminus SSF worked well, we agree with your opinion that the main contribution is just incremental. However, introducing the SSF besides replacing a stochastic policy with a deterministic one is required to imitate the expert's behavior. Hence, we don't agree that the proposed method is just incremental. \n\n> My guess is that if someone would use the GAIL algorithm for real problems (e.g, robotic task), they would reduce the stochasticity of the behavior policy, which would make it virtually similar in term of data efficiency to the proposed method.\n\nBecause the GAIL algorithm is an on-policy algorithm, it essentially requires much interactions for an update and never uses behavior policy. Hence, it would not make it virtually similar in term of data efficiency to the proposed method which is off-policy algorithm.\n\n> Cons:\n> - Incremental improvement over GAIL\n\nAs mentioned above, we think that the proposed method is not just incremental improvement over GAIL. \n\n> - Experiments only on simulated toy problems \n\nWe wonder why you thought the Mujoco tasks are just \"toy\" problems. Even though those tasks are not real-world problems, they have not been solved until GAIL has been proposed. In addition, the variants of GAIL (Baram et al., 2017; Wang et al., 2017; Hausman et al.) also evaluated their performance using those tasks. Hence, we think that those tasks are enough difficult to solve and can be used as a well-suited benchmark to evaluate whether the proposed method is applicable to the real-world problems in comparison with other IL algorithms.\n",
"Thank you for your constructive comments on our paper. We will fix typos and Figure.1. in the camera-ready version. \n\n> The justification for filtering is pretty weak. \n\nSince Figure.1 shows worse performance of Ours \\setminus SSF which does not filter states appearing in the demonstration, we think that the justification is enough.\n\n> What is the statistical basis for doing so?\n\nIntroducing a SSF is a kind of heuristic method, but it works as mentioned above.\n\n> Is it a form of a standard variance reduction approach? Is it a novel variance reduction approach? If so, is it more generally applicable?\n\nIntroducing the SSF itself is not a variance reduction approach. We would say that direct use of the Joacobian of (single-step) reward function rather than that of Q-function to derive the PG (8) might reduce the variance because the range of outputs are bounded.\nSince we use the Jacobian of reward function to derive PG as opposed to prior IL works, the Jacobian is supposed to have information about how to get close to the expert's behavior for the learner. However, in the IRL objective (4), which is general in (max-margin) IRL literature, the reward function could know how the expert acts just only on the states appearing in the demonstration. In other words, the Jacobian could have the information about how to get close to the expert's behavior just only on states appearing in the demonstration. What we claimed in Sec.3.2 is that the Jacobian for states which does not appear in the demonstration is just garbage for the learner since it does not give any information about how to get close to the expert. The main purpose of introducing the SSF is to sweep the garbage as much as possible. The prior IL works have never mentioned about the garbage."
]
} | {
"paperhash": [
"abadi|tensorflow:_large-scale_machine_learning_on_heterogeneous_distributed_systems",
"abbeel|apprenticeship_learning_via_inverse_reinforcement_learning",
"baram|end-to-end_differentiable_adversarial_imitation_learning",
"degris|off-policy_actor-critic",
"finn|a_connection_between_generative_adversarial_networks,_inverse_reinforcement_learning,_and_energy-based_models",
"finn|guided_cost_learning:_deep_inverse_optimal_control_via_policy_optimization",
"goodfellow|generative_adversarial_nets",
"hausman|multi-modal_imitation_learning_from_unstructured_demonstrations_using_generative_adversarial_nets",
"hinton|rmsprop:_divide_the_gradient_by_a_running_average_of_its_recent_magnitude._neural_networks_for_machine_learning",
"ho|generative_adversarial_imitation_learning",
"levine|nonlinear_inverse_reinforcement_learning_with_gaussian_processes",
"li|inferring_the_latent_structure_of_human_decision-making_from_raw_visual_inputs",
"lillicrap|continuous_control_with_deep_reinforcement_learning",
"maas|rectifier_nonlinearities_improve_neural_network_acoustic_models",
"maas|rectifier_nonlinearities_improve_neural_network_acoustic_models",
"mnih|human-level_control_through_deep_reinforcement_learning",
"mnih|asynchronous_methods_for_deep_reinforcement_learning",
"andrew|algorithms_for_inverse_reinforcement_learning",
"dean|efficient_training_of_artificial_neural_networks_for_autonomous_navigation",
"precup|off-policy_learning_with_options_and_recognizers",
"ross|efficient_reductions_for_imitation_learning",
"russell|learning_agents_for_uncertain_environments",
"schulman|trust_region_policy_optimization",
"schulman|high-dimensional_continuous_control_using_generalized_advantage_estimation",
"silver|deterministic_policy_gradient_algorithms",
"silver|mastering_the_game_of_go_with_deep_neural_networks_and_tree_search",
"sutton|policy_gradient_methods_for_reinforcement_learning_with_function_approximation",
"syed|apprenticeship_learning_using_linear_programming",
"todorov|mujoco:_a_physics_engine_for_model-based_control",
"wang|robust_imitation_of_diverse_behaviors",
"ronald|simple_statistical_gradient-following_algorithms_for_connectionist_reinforcement_learning",
"ziebart|maximum_entropy_inverse_reinforcement_learning"
],
"title": [
"Tensorflow: Large-scale machine learning on heterogeneous distributed systems",
"Apprenticeship learning via inverse reinforcement learning",
"End-to-end differentiable adversarial imitation learning",
"Off-policy actor-critic",
"A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models",
"Guided cost learning: Deep inverse optimal control via policy optimization",
"Generative adversarial nets",
"Multi-modal imitation learning from unstructured demonstrations using generative adversarial nets",
"Rmsprop: Divide the gradient by a running average of its recent magnitude. Neural networks for machine learning",
"Generative adversarial imitation learning",
"Nonlinear inverse reinforcement learning with gaussian processes",
"Inferring the latent structure of human decision-making from raw visual inputs",
"Continuous control with deep reinforcement learning",
"Rectifier nonlinearities improve neural network acoustic models",
"Rectifier nonlinearities improve neural network acoustic models",
"Human-level control through deep reinforcement learning",
"Asynchronous methods for deep reinforcement learning",
"Algorithms for inverse reinforcement learning",
"Efficient training of artificial neural networks for autonomous navigation",
"Off-policy learning with options and recognizers",
"Efficient reductions for imitation learning",
"Learning agents for uncertain environments",
"Trust region policy optimization",
"High-dimensional continuous control using generalized advantage estimation",
"Deterministic policy gradient algorithms",
"Mastering the game of go with deep neural networks and tree search",
"Policy gradient methods for reinforcement learning with function approximation",
"Apprenticeship learning using linear programming",
"Mujoco: A physics engine for model-based control",
"Robust imitation of diverse behaviors",
"Simple statistical gradient-following algorithms for connectionist reinforcement learning",
"Maximum entropy inverse reinforcement learning"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"martín abadi",
"ashish agarwal",
"paul barham",
"eugene brevdo",
"zhifeng chen",
"craig citro",
"greg s corrado",
"andy davis",
"jeffrey dean",
"matthieu devin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pieter abbeel",
"andrew y ng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nir baram",
"oron anschel",
"itai caspi",
"shie mannor"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"thomas degris",
"martha white",
"richard s sutton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chelsea finn",
"paul christiano",
"pieter abbeel",
"sergey levine"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chelsea finn",
"sergey levine",
"pieter abbeel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ian goodfellow",
"jean pouget-abadie",
"mehdi mirza",
"bing xu",
"david warde-farley",
"sherjil ozair",
"aaron courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"karol hausman",
"yevgen chebotar",
"stefan schaal",
"gaurav sukhatme",
"joseph lim"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"n hinton",
" srivastava",
" swersky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jonathan ho",
"stefano ermon"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sergey levine",
"zoran popovic",
"vladlen koltun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yunzhu li",
"jiaming song",
"stefano ermon"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jonathan j timothy p lillicrap",
"alexander hunt",
"nicolas pritzel",
"tom heess",
"yuval erez",
"david tassa",
"daan silver",
" wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"awni y andrew l maas",
"andrew y hannun",
" ng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"awni y andrew l maas",
"andrew y hannun",
" ng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"volodymyr mnih",
"koray kavukcuoglu",
"david silver",
"andrei a rusu",
"joel veness",
"marc g bellemare",
"alex graves",
"martin riedmiller",
"andreas k fidjeland",
"georg ostrovski"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"volodymyr mnih",
"adria puigdomenech badia",
"mehdi mirza",
"alex graves",
"timothy lillicrap",
"tim harley",
"david silver",
"koray kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"y andrew",
"stuart j ng",
" russell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"a dean",
" pomerleau"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"doina precup",
"cosmin paduraru",
"anna koop",
"richard s sutton",
"p satinder",
" singh"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"stéphane ross",
"drew bagnell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"stuart russell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"john schulman",
"sergey levine",
"pieter abbeel",
"michael jordan",
"philipp moritz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"john schulman",
"philipp moritz",
"sergey levine",
"michael jordan",
"pieter abbeel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david silver",
"guy lever",
"nicolas heess",
"thomas degris",
"daan wierstra",
"martin riedmiller"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david silver",
"aja huang",
"chris j maddison",
"arthur guez",
"laurent sifre",
"george van den",
"julian driessche",
"ioannis schrittwieser",
"veda antonoglou",
"marc panneershelvam",
" lanctot"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david a richard s sutton",
" mcallester",
"p satinder",
"yishay singh",
" mansour"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"umar syed",
"michael bowling",
"robert e schapire"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"emanuel todorov",
"tom erez",
"yuval tassa"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ziyu wang",
"josh merel",
"scott reed",
"greg wayne",
"nando de freitas",
"nicolas heess"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"williams ronald"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"brian d ziebart",
"andrew l maas",
"j andrew bagnell",
"anind k dey"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"arXiv:1603.04467",
"",
"",
"arXiv:1205.4839",
"arXiv:1611.03852",
"1603.00448v3",
"",
"1705.10479v2",
"",
"1606.03476v1",
"",
"arXiv:1703.08840",
"1509.02971v6",
"",
"",
"",
"1602.01783v2",
"",
"",
"",
"",
"",
"1502.05477v5",
"arXiv:1506.02438",
"",
"",
"",
"",
"",
"1707.02747v2",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.481481 | 0.666667 | null | null | null | null | null | rJ3fy0k0Z |
||
brutzkus|sgd_learns_overparameterized_networks_that_provably_generalize_on_linearly_separable_data|ICLR_cc_2018_Conference | 30745030 | 1710.10174 | SGD Learns Over-parameterized Networks that Provably Generalize on Linearly Separable Data | Neural networks exhibit good generalization behavior in the
over-parameterized regime, where the number of network parameters
exceeds the number of observations. Nonetheless,
current generalization bounds for neural networks fail to explain this
phenomenon. In an attempt to bridge this gap, we study the problem of
learning a two-layer over-parameterized neural network, when the data is generated by a linearly separable function. In the case where the network has Leaky
ReLU activations, we provide both optimization and generalization guarantees for over-parameterized networks.
Specifically, we prove convergence rates of SGD to a global
minimum and provide generalization guarantees for this global minimum
that are independent of the network size.
Therefore, our result clearly shows that the use of SGD for optimization both finds a global minimum, and avoids overfitting despite the high capacity of the model. This is the first theoretical demonstration that SGD can avoid overfitting, when learning over-specified neural network classifiers. | {
"name": [
"alon brutzkus",
"amir globerson",
"eran malach",
"shai shalev-shwartz"
],
"affiliation": [
{
"laboratory": "",
"institution": "Tel Aviv University",
"location": "{'country': 'Israel'}"
},
{
"laboratory": "",
"institution": "Tel Aviv University",
"location": "{'country': 'Israel'}"
},
{
"laboratory": "",
"institution": "The Hebrew University",
"location": "{'country': 'Israel'}"
},
{
"laboratory": "",
"institution": "The Hebrew University",
"location": "{'country': 'Israel'}"
}
]
} | We show that SGD learns two-layer over-parameterized neural networks with Leaky ReLU activations that provably generalize on linearly separable data. | [
"Deep Learning",
"Non-convex Optmization",
"Generalization",
"Learning Theory",
"Neural Networks"
] | null | 2018-02-15 22:29:42 | 29 | 273 | 20 | null | null | null | null | null | null | true | This is a high quality paper, clearly written, highly original, and clearly significant. The paper gives a complete analysis of SGD in a two layer network where the second layer does not undergo training and the data are linearly separable. Experimental results confirm the theoretical suggestion that the second layer can be trained provided the weights don't change sign and remain bounded. The authors address the major concerns of the reviewers (namely, whether these results are indicative given the assumptions). This line of work seems very promising. | {
"review_id": [
"rJV8Y8ulf",
"HJ9LXfvlz",
"BJBRHQkWf"
],
"review": [
{
"title": "title: The paper proves interesting properties of SGD on linearly separable data, first result on a interesting direction although the assumption/techniques seem a bit limited.",
"paper_summary": null,
"main_review": "main_review: This paper shows that on linearly separable data, SGD on a overparametrized network (one hidden layer, with leaky ReLU activations) can still lean a classifier that provably generalizes. The assumption on data and structure of network is a bit strong, but this is the first result that achieves a number of desirable properties\n``1. Works for overparametrized network\n2. Finds global optimal solution for a non-convex network.\n3. Has generalization guarantees (and generalization is related to the SGD algorithm).\n4. Number of samples need not depend on the number of neurons. \n\nThere have been several papers achieving 1 and 2 (with much weaker assumptions), but they do not have 3 and 4. The proof of the optimization part is very similar to the proof of perceptron algorithm, and really relies on linear separability. The proof of generalization is based on a compression argument, where if an algorithm does not take many nonzero steps, then it must have good generalization. Ideally, one would also want to see a result where overparametrization actually helps (in the main result the whole data can be learned by a linear classifier). This is somewhat achieved when the activation is replaced with standard ReLU, where the paper showed with a small number of hidden units the algorithm is likely to get stuck at a local minima, but with enough hidden units the algorithm is likely to converge (but even in this case, the data is still linearly separable and can be learned just by a perceptron). \n\nThe main concern about the paper is the possibility of generalizing the result. The algorithm part seems to heavily rely on the linear separable assumption. The generalization part relies on not making many non-zero updates, which is not really true in realistic settings (where the data is accessed in multiple passes) [After author response: Yes in the linearly separable case with hinge loss it is quite possible that the number of updates is sublinear. However what I meant here is that with more complicated data and different loss functions it is hard to believe that this can still hold.]. The related work section is also a bit unfair to some of the other generalization results (e.g. Bartlett et al. Neyshabur et al.): those results work on more general network settings, and it's not completely clear that they cannot be related to the algorithm because they rely on certain solution specific quantities (such as spectral/Frobenius norms of the weight matrices) and it could be possible that SGD tends to find a solution with small norm (which can be proved in linear setting and might also be provable for the setting of this paper) [This is addressed in the author response].\n\nOverall, even though the assumptions might be a bit strong, I think this is an interesting result working towards a good direction and should be accepted.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting result for generalisation guarantees of overparametrised 1-hidden layer network with fixed output layer.",
"paper_summary": null,
"main_review": "main_review: Paper studies an interesting phenomenon of overparameterised models being able to learn well-generalising solutions. It focuses on a setting with three crucial simplifications:\n- data is linearly separable\n- model is 1-hidden layer feed forward network with homogenous activations\n- **only input-hidden layer weights** are trained, while the hidden-output layer's weights are fixed to be (v, v, v, ..., v, -v, -v, -v, ..., -v) (in particular -- (1,1,...,1,-1,-1,...,-1))\nWhile the last assumption does not limit the expressiveness of the model in any way, as homogenous activations have the property of f(ax)=af(x) (for positive a) and so for any unconstrained model in the second layer, we can \"propagate\" its weights back into first layer and obtain functionally equivalent network. However, learning dynamics of a model of form \n z(x) = SUM( g(Wx+b) ) - SUM( g(Vx+c) ) + d\nand \"standard\" neural model\n z(x) = Vg(Wx+b)+c\ncan be completely different.\nConsequently, while the results are very interesting, claiming their applicability to the deep models is (at this point) far fetched. In particular, abstract suggests no simplifications are being made, which does not correspond to actual result in the paper. The results themselves are interesting, but due to the above restriction it is not clear whether it sheds any light on neural nets, or simply described a behaviour of very specific, non-standard shallow model.\n\nI am happy to revisit my current rating given authors rephrase the paper so that the simplifications being made are clear both in abstract and in the text, and that (at least empirically) it does not affect learning in practice. In other words - all the experiments in the paper follow the assumption made, if authors claim is that the restriction introduced does not matter, but make proofs too technical - at least experimental section should show this. If the claims do not hold empirically without the assumptions made, then the assumptions are not realistic and cannot be used for explaining the behaviour of models we are interested in.\n\nPros:\n- tackling a hard problem of overparametrised models, without introducing common unrealistic assumptions of activations independence\n- very nice result of \"phase change\" dependend on the size of hidden layer in section 7\n\nCons:\n- simplification with non-trainable second layer is currently not well studied in the paper; and while not affecting expressive power - it is something that can change learning dynamics completely\n\n# After the update\n\nAuthors addressed my concerns by:\n- making simplification assumption clearer in the text\n- adding empirical evaluation without the assumption\n- weakening the assumptions\n\nI find these modifications satisfactory and rating has been updated accordingly. \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Good paper on understanding the role of SGD in generalization",
"paper_summary": null,
"main_review": "main_review: Summary:\nThis paper considers the problem of classifying linearly separable data with a two layer \\alpha- Leaky ReLU network, in the over-parametrized setting with 2k hidden units. The algorithm used for training is SGD which minimizes the hinge loss error over the training data. The parameters in the top layer are fixed in advance and only the parameters in the hidden layer are updated using SGD. First result shows that the loss function does not have any sub-optimal local minima. Later, for the above method, the paper gives a bound proportional to ||w*||^2/\\alpha^2, on the number of non-zero updates made by the algorithm (similar to perceptron analysis), before converging to a global minima - w*. Using this a generalization error bound independent of number of hidden units is presented. Later the paper studies ReLU networks and shows that loss in this case can have sub-optimal local minima. \n\nComments:\n\nThis paper considers a simpler setting to study why SGD is successful in recovering solutions that generalize well even though the neural networks used are typically over-parametrized. While the paper considers a simpler setting of classifying linearly separable data and training only the hidden layer, it nevertheless provides a useful insight on the role of SGD in recovering solutions that generalize well (independent of number of hidden units 'k'). \n\nOne confusing aspect in the paper is the optimization and generalization results hold for any global minima w* of the L_s(w). There is a step missing of taking the minimum over all such w*, which will give the tightest bounds for SGD, and it will be useful to clear this up in the paper. \n\nMore importantly I am curious how close the updates are when, 1)SGD is updating only the hidden units and 2) SGD is updating both the layers. Simple intuition suggests SGD might update the top layer \"more\" that the hidden layer as the gradients tend to decay down the layers. It is useful to discuss this in the paper and may be have some experiments on linearly separable data but with updates in both layers.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.6666666865348816,
0.6666666865348816,
0.7777777910232544
],
"confidence": [
0.5,
0.5,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to All Reviewers"
],
"comment": [
"We thank the reviewers for their helpful feedback. The main concern that was raised by the reviewers is whether these results generalize to a realistic neural network training process.\nSpecifically, in the submission we have analyzed a variant of SGD which updates only the first layer of the network, while keeping the weights of the second layer fixed. AnonReviewer2 correctly notes that in practice, the second layer is also updated, and asks to what degree our results hold in this case. To address this concern, we revise the text as follows:\n1. We clearly state our assumptions in both the abstract and the paper itself (see Section 5.3). \n2. We conduct the same experiments as in the paper, but with both layers trained. We empirically show that training both layers has similar training and generalization performance as training the first layer (Figure 2).\n3. We show that the main theoretical result still holds even when the second layer weights are updated, as long as they do not change signs during the training process, and their absolute values are bounded from below and from above. \n4. We conduct experiments similar to the setting in (2) above, but now we choose a constant step size such that the condition in (3) above holds. Namely, we ensure that the weights of the last layer do not change their sign, and are correctly bounded. The performance of SGD in this case is similar to previous experiments and is in line with our theoretical findings.\n\nThe above show that although the dynamics of the problem indeed change when updating the second layer, our results and conclusions still hold. A complete theoretical analysis of the two layer case is left for future work.\n\nRegarding the linear separability assumption, this is a realistic setting and this assumption allows us to show for the first time a complete analysis of optimization and generalization for over-parameterized neural networks. We are not aware of any other result of this kind under different realistic assumptions.\nAs for the proposition that SGD tends to find solutions with small norm in our problem, we are not aware of any existing results that imply that this is indeed the case, though this may be an interesting problem to study in the future. We have rephrased our notes on other generalization results in the related work section, addressing AnonReviewer1’s remark.\nAnonReviewer1 mentioned that in practice there should be many non-zero updates since the data is accessed multiple times. However, we note that we considered the hinge loss, which vanishes for points that are classified with a margin. Therefore, it is possible that with multiple passes over the data there are only a few non-zero updates.\nFinally, AnonReviewer3 notes that we can optimize our bound with respect to w^*. This is true, as in the vanilla Perceptron, the best w* is the one with the largest margin. \n"
]
} | {
"paperhash": [
"kawaguchi|generalization_in_deep_learning",
"du|when_is_a_convolutional_filter_easy_to_learn?",
"neyshabur|a_pac-bayesian_approach_to_spectrally-normalized_margin_bounds_for_neural_networks",
"soltanolkotabi|theoretical_insights_into_the_optimization_landscape_of_over-parameterized_shallow_neural_networks",
"neyshabur|exploring_generalization_in_deep_learning",
"bartlett|spectrally-normalized_margin_bounds_for_neural_networks",
"zhong|recovery_guarantees_for_one-hidden-layer_neural_networks",
"li|convergence_analysis_of_two-layer_neural_networks_with_relu_activation",
"nguyen|the_loss_surface_of_deep_and_wide_neural_networks",
"kuzborskij|data-dependent_stability_of_stochastic_gradient_descent",
"tian|an_analytical_formula_of_population_gradient_for_two-layered_relu_network_and_its_applications_in_convergence_and_critical_point_analysis",
"brutzkus|globally_optimal_gradient_descent_for_a_convnet_with_gaussian_inputs",
"soudry|exponentially_vanishing_sub-optimal_local_minima_in_multilayer_neural_networks",
"zhang|understanding_deep_learning_requires_rethinking_generalization",
"hardt|gradient_descent_learns_linear_dynamical_systems",
"neyshabur|norm-based_capacity_control_in_neural_networks",
"neyshabur|in_search_of_the_real_inductive_bias:_on_the_role_of_implicit_regularization_in_deep_learning",
"glorot|understanding_the_difficulty_of_training_deep_feedforward_neural_networks",
"anthony|neural_network_learning:_theoretical_foundations",
"yu|on_the_local_minima_free_condition_of_backpropagation_learning",
"shalev-shwartz|understanding_machine_learning_-_from_theory_to_algorithms",
"littlestone|relating_data_compression_and_learnability",
"gori|on_the_problem_of_local_minima_in_backpropagation"
],
"title": [
"Generalization in Deep Learning",
"When is a Convolutional Filter Easy To Learn?",
"A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks",
"Theoretical Insights Into the Optimization Landscape of Over-Parameterized Shallow Neural Networks",
"Exploring Generalization in Deep Learning",
"Spectrally-normalized margin bounds for neural networks",
"Recovery Guarantees for One-hidden-layer Neural Networks",
"Convergence Analysis of Two-layer Neural Networks with ReLU Activation",
"The Loss Surface of Deep and Wide Neural Networks",
"Data-Dependent Stability of Stochastic Gradient Descent",
"An Analytical Formula of Population Gradient for two-layered ReLU network and its Applications in Convergence and Critical Point Analysis",
"Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs",
"Exponentially vanishing sub-optimal local minima in multilayer neural networks",
"Understanding deep learning requires rethinking generalization",
"Gradient Descent Learns Linear Dynamical Systems",
"Norm-Based Capacity Control in Neural Networks",
"In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning",
"Understanding the difficulty of training deep feedforward neural networks",
"Neural Network Learning: Theoretical Foundations",
"On the local minima free condition of backpropagation learning",
"Understanding Machine Learning - From Theory to Algorithms",
"Relating Data Compression and Learnability",
"On the Problem of Local Minima in Backpropagation"
],
"abstract": [
"This paper provides non-vacuous and numerically-tight generalization guarantees for deep learning, as well as theoretical insights into why and how deep learning can generalize well, despite its large capacity, complexity, possible algorithmic instability, nonrobustness, and sharp minima, responding to an open question in the literature. We also propose new open problems and discuss the limitations of our results.",
"We analyze the convergence of (stochastic) gradient descent algorithm for learning a convolutional filter with Rectified Linear Unit (ReLU) activation function. Our analysis does not rely on any specific form of the input distribution and our proofs only use the definition of ReLU, in contrast with previous works that are restricted to standard Gaussian input. We show that (stochastic) gradient descent with random initialization can learn the convolutional filter in polynomial time and the convergence rate depends on the smoothness of the input distribution and the closeness of patches. To the best of our knowledge, this is the first recovery guarantee of gradient-based algorithms for convolutional filter on non-Gaussian input distributions. Our theory also justifies the two-stage learning rate strategy in deep neural networks. While our focus is theoretical, we also present experiments that illustrate our theoretical findings.",
"We present a generalization bound for feedforward neural networks in terms of the product of the spectral norm of the layers and the Frobenius norm of the weights. The generalization bound is derived using a PAC-Bayes analysis.",
"In this paper, we study the problem of learning a shallow artificial neural network that best fits a training data set. We study this problem in the over-parameterized regime where the numbers of observations are fewer than the number of parameters in the model. We show that with the quadratic activations, the optimization landscape of training, such shallow neural networks, has certain favorable characteristics that allow globally optimal models to be found efficiently using a variety of local search heuristics. This result holds for an arbitrary training data of input/output pairs. For differentiable activation functions, we also show that gradient descent, when suitably initialized, converges at a linear rate to a globally optimal model. This result focuses on a realizable model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to planted weight coefficients.",
"With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. We then investigate how well the measures explain different observed phenomena.",
"This paper presents a margin-based multiclass generalization bound for neural networks that scales with their margin-normalized \"spectral complexity\": their Lipschitz constant, meaning the product of the spectral norms of the weight matrices, times a certain correction factor. This bound is empirically investigated for a standard AlexNet network trained with SGD on the mnist and cifar10 datasets, with both original and random labels; the bound, the Lipschitz constants, and the excess risks are all in direct correlation, suggesting both that SGD selects predictors whose complexity scales with the difficulty of the learning task, and secondly that the presented bound is sensitive to this complexity.",
"In this paper, we consider regression problems with one-hidden-layer neural networks (1NNs). We distill some properties of activation functions that lead to $\\mathit{local~strong~convexity}$ in the neighborhood of the ground-truth parameters for the 1NN squared-loss objective. Most popular nonlinear activation functions satisfy the distilled properties, including rectified linear units (ReLUs), leaky ReLUs, squared ReLUs and sigmoids. For activation functions that are also smooth, we show $\\mathit{local~linear~convergence}$ guarantees of gradient descent under a resampling rule. For homogeneous activations, we show tensor methods are able to initialize the parameters to fall into the local strong convexity region. As a result, tensor initialization followed by gradient descent is guaranteed to recover the ground truth with sample complexity $ d \\cdot \\log(1/\\epsilon) \\cdot \\mathrm{poly}(k,\\lambda )$ and computational complexity $n\\cdot d \\cdot \\mathrm{poly}(k,\\lambda) $ for smooth homogeneous activations with high probability, where $d$ is the dimension of the input, $k$ ($k\\leq d$) is the number of hidden nodes, $\\lambda$ is a conditioning property of the ground-truth parameter matrix between the input layer and the hidden layer, $\\epsilon$ is the targeted precision and $n$ is the number of samples. To the best of our knowledge, this is the first work that provides recovery guarantees for 1NNs with both sample complexity and computational complexity $\\mathit{linear}$ in the input dimension and $\\mathit{logarithmic}$ in the precision.",
"In recent years, stochastic gradient descent (SGD) based techniques has become the standard tools for training neural networks. However, formal theoretical understanding of why SGD can train neural networks in practice is largely missing. \nIn this paper, we make progress on understanding this mystery by providing a convergence analysis for SGD on a rich subset of two-layer feedforward networks with ReLU activations. This subset is characterized by a special structure called \"identity mapping\". We prove that, if input follows from Gaussian distribution, with standard $O(1/\\sqrt{d})$ initialization of the weights, SGD converges to the global minimum in polynomial number of steps. Unlike normal vanilla networks, the \"identity mapping\" makes our network asymmetric and thus the global minimum is unique. To complement our theory, we are also able to show experimentally that multi-layer networks with this mapping have better performance compared with normal vanilla networks. \nOur convergence theorem differs from traditional non-convex optimization techniques. We show that SGD converges to optimal in \"two phases\": In phase I, the gradient points to the wrong direction, however, a potential function $g$ gradually decreases. Then in phase II, SGD enters a nice one point convex region and converges. We also show that the identity mapping is necessary for convergence, as it moves the initial point to a better place for optimization. Experiment verifies our claims.",
"While the optimization problem behind deep neural networks is highly non-convex, it is frequently observed in practice that training deep networks seems possible without getting stuck in suboptimal points. It has been argued that this is the case as all local minima are close to being globally optimal. We show that this is (almost) true, in fact almost all local minima are globally optimal, for a fully connected network with squared loss and analytic activation function given that the number of hidden units of one layer of the network is larger than the number of training points and the network structure from this layer on is pyramidal.",
"We establish a data-dependent notion of algorithmic stability for Stochastic Gradient Descent (SGD), and employ it to develop novel generalization bounds. This is in contrast to previous distribution-free algorithmic stability results for SGD which depend on the worst-case constants. By virtue of the data-dependent argument, our bounds provide new insights into learning with SGD on convex and non-convex problems. In the convex case, we show that the bound on the generalization error depends on the risk at the initialization point. In the non-convex case, we prove that the expected curvature of the objective function around the initialization point has crucial influence on the generalization error. In both cases, our results suggest a simple data-driven strategy to stabilize SGD by pre-screening its initialization. As a corollary, our results allow us to show optimistic generalization bounds that exhibit fast convergence rates for SGD subject to a vanishing empirical risk and low noise of stochastic gradient.",
"In this paper, we explore theoretical properties of training a two-layered ReLU network $g(\\mathbf{x}; \\mathbf{w}) = \\sum_{j=1}^K \\sigma(\\mathbf{w}_j^T\\mathbf{x})$ with centered $d$-dimensional spherical Gaussian input $\\mathbf{x}$ ($\\sigma$=ReLU). We train our network with gradient descent on $\\mathbf{w}$ to mimic the output of a teacher network with the same architecture and fixed parameters $\\mathbf{w}^*$. We show that its population gradient has an analytical formula, leading to interesting theoretical analysis of critical points and convergence behaviors. First, we prove that critical points outside the hyperplane spanned by the teacher parameters (\"out-of-plane\") are not isolated and form manifolds, and characterize in-plane critical-point-free regions for two ReLU case. On the other hand, convergence to $\\mathbf{w}^*$ for one ReLU node is guaranteed with at least $(1-\\epsilon)/2$ probability, if weights are initialized randomly with standard deviation upper-bounded by $O(\\epsilon/\\sqrt{d})$, consistent with empirical practice. For network with many ReLU nodes, we prove that an infinitesimal perturbation of weight initialization results in convergence towards $\\mathbf{w}^*$ (or its permutation), a phenomenon known as spontaneous symmetric-breaking (SSB) in physics. We assume no independence of ReLU activations. Simulation verifies our findings.",
"Deep learning models are often successfully trained using gradient descent, despite the worst case hardness of the underlying non-convex optimization problem. The key question is then under what conditions can one prove that optimization will succeed. Here we provide a strong result of this kind. We consider a neural net with one hidden layer and a convolutional structure with no overlap, and a ReLU activation function. For this architecture we show that learning is NP-complete in the general case, but that when the input distribution is Gaussian, gradient descent converges to the global optimum in polynomial time. To the best of our knowledge, this is the first global optimality guarantee of gradient descent on a convolutional neural network with ReLU activations.",
"Background: Statistical mechanics results (Dauphin et al. (2014); Choromanska et al. (2015)) suggest that local minima with high error are exponentially rare in high dimensions. However, to prove low error guarantees for Multilayer Neural Networks (MNNs), previous works so far required either a heavily modified MNN model or training method, strong assumptions on the labels (e.g., \"near\" linear separability), or an unrealistic hidden layer with $\\Omega\\left(N\\right)$ units. \nResults: We examine a MNN with one hidden layer of piecewise linear units, a single output, and a quadratic loss. We prove that, with high probability in the limit of $N\\rightarrow\\infty$ datapoints, the volume of differentiable regions of the empiric loss containing sub-optimal differentiable local minima is exponentially vanishing in comparison with the same volume of global minima, given standard normal input of dimension $d_{0}=\\tilde{\\Omega}\\left(\\sqrt{N}\\right)$, and a more realistic number of $d_{1}=\\tilde{\\Omega}\\left(N/d_{0}\\right)$ hidden units. We demonstrate our results numerically: for example, $0\\%$ binary classification training error on CIFAR with only $N/d_{0}\\approx 16$ hidden neurons.",
"Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. \nThrough extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. \nWe interpret our experimental findings by comparison with traditional models.",
"We prove that gradient descent efficiently converges to the global optimizer of the maximum likelihood objective of an unknown linear time-invariant dynamical system from a sequence of noisy observations generated by the system. Even though the objective function is non-convex, we provide polynomial running time and sample complexity bounds under strong but natural assumptions. Linear systems identification has been studied for many decades, yet, to the best of our knowledge, these are the first polynomial guarantees for the problem we consider.",
"We investigate the capacity, convexity and characterization of a general family of norm-constrained feed-forward networks.",
"We present experiments demonstrating that some other form of capacity control, different from network size, plays a central role in learning multilayer feed-forward networks. We argue, partially through analogy to matrix factorization, that this is an inductive bias that can help shed light on deep learning.",
"Whereas before 2006 it appears that deep multilayer neural networks were not successfully trained, since then several algorithms have been shown to successfully train them, with experimental results showing the superiority of deeper vs less deep architectures. All these experimental results were obtained with new initialization or training mechanisms. Our objective here is to understand better why standard gradient descent from random initialization is doing so poorly with deep neural networks, to better understand these recent relative successes and help design better algorithms in the future. We first observe the influence of the non-linear activations functions. We find that the logistic sigmoid activation is unsuited for deep networks with random initialization because of its mean value, which can drive especially the top hidden layer into saturation. Surprisingly, we find that saturated units can move out of saturation by themselves, albeit slowly, and explaining the plateaus sometimes seen when training neural networks. We find that a new non-linearity that saturates less can often be beneficial. Finally, we study how activations and gradients vary across layers and during training, with the idea that training may be more difficult when the singular values of the Jacobian associated with each layer are far from 1. Based on these considerations, we propose a new initialization scheme that brings substantially faster convergence. 1 Deep Neural Networks Deep learning methods aim at learning feature hierarchies with features from higher levels of the hierarchy formed by the composition of lower level features. They include Appearing in Proceedings of the 13 International Conference on Artificial Intelligence and Statistics (AISTATS) 2010, Chia Laguna Resort, Sardinia, Italy. Volume 9 of JMLR: WC Weston et al., 2008). Much attention has recently been devoted to them (see (Bengio, 2009) for a review), because of their theoretical appeal, inspiration from biology and human cognition, and because of empirical success in vision (Ranzato et al., 2007; Larochelle et al., 2007; Vincent et al., 2008) and natural language processing (NLP) (Collobert & Weston, 2008; Mnih & Hinton, 2009). Theoretical results reviewed and discussed by Bengio (2009), suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one may need deep architectures. Most of the recent experimental results with deep architecture are obtained with models that can be turned into deep supervised neural networks, but with initialization or training schemes different from the classical feedforward neural networks (Rumelhart et al., 1986). Why are these new algorithms working so much better than the standard random initialization and gradient-based optimization of a supervised training criterion? Part of the answer may be found in recent analyses of the effect of unsupervised pretraining (Erhan et al., 2009), showing that it acts as a regularizer that initializes the parameters in a “better” basin of attraction of the optimization procedure, corresponding to an apparent local minimum associated with better generalization. But earlier work (Bengio et al., 2007) had shown that even a purely supervised but greedy layer-wise procedure would give better results. 
So here instead of focusing on what unsupervised pre-training or semi-supervised criteria bring to deep architectures, we focus on analyzing what may be going wrong with good old (but deep) multilayer neural networks. Our analysis is driven by investigative experiments to monitor activations (watching for saturation of hidden units) and gradients, across layers and across training iterations. We also evaluate the effects on these of choices of activation function (with the idea that it might affect saturation) and initialization procedure (since unsupervised pretraining is a particular form of initialization and it has a drastic impact).",
"This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems, and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik Chervonenkis dimension, and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks, and demonstrate the usefulness of classification with a \"large margin.\" The authors explain the role of scale-sensitive versions of the Vapnik Chervonenkis dimension in large margin classification, and in real prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics.",
"It is shown that if there are P noncoincident input patterns to learn and a two-layered feedforward neural network having P-1 sigmoidal hidden neuron and one dummy hidden neuron is used for the learning, then any suboptimal equilibrium point of the corresponding error surface is unstable in the sense of Lyapunov. This result leads to a sufficient local minima free condition for the backpropagation learning.",
"Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability ; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning ; and emerging theoretical concepts such as the PACBayes approach and compression-based bounds. Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and nonexpert readers in statistics, computer science, mathematics, and engineering.",
"We explore the learnability of two-valued functions from samples using the paradigm of Data Compression. A first algorithm (compression) choses a small subset of the sample which is called the kernel. A second algorithm predicts future values of the function from the kernel, i.e. the algorithm acts as an hypothesis for the function to be learned. The second algorithm must be able to reconstruct the correct function values when given a point of the original sample. We demonstrate that the existence of a suitable data compression scheme is sufficient to ensure learnability. We express the probability that the hypothesis predicts the function correctly on a random sample point as a function of the sample and kernel sizes. No assumptions are made on the probability distributions according to which the sample points are generated. This approach provides an alternative to that of [BEHW86], which uses the Vapnik-Chervonenkis dimension to classify learnable geometric concepts. Our bounds are derived directly from the kernel size of the algorithms rather than from the Vapnik-Chervonenkis dimension of the hypothesis class. The proofs are simpler and the introduced compression scheme provides a rigorous model for studying data compression in connection with machine learning.",
"The authors propose a theoretical framework for backpropagation (BP) in order to identify some of its limitations as a general learning procedure and the reasons for its success in several experiments on pattern recognition. The first important conclusion is that examples can be found in which BP gets stuck in local minima. A simple example in which BP can get stuck during gradient descent without having learned the entire training set is presented. This example guarantees the existence of a solution with null cost. Some conditions on the network architecture and the learning environment that ensure the convergence of the BP algorithm are proposed. It is proven in particular that the convergence holds if the classes are linearly separable. In this case, the experience gained in several experiments shows that multilayered neural networks (MLNs) exceed perceptrons in generalization to new examples. >"
],
"authors": [
{
"name": [
"Kenji Kawaguchi",
"L. Kaelbling",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Du",
"J. Lee",
"Yuandong Tian"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Behnam Neyshabur",
"Srinadh Bhojanapalli",
"David A. McAllester",
"N. Srebro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Soltanolkotabi",
"Adel Javanmard",
"J. Lee"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Behnam Neyshabur",
"Srinadh Bhojanapalli",
"D. McAllester",
"N. Srebro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Bartlett",
"Dylan J. Foster",
"Matus Telgarsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kai Zhong",
"Zhao Song",
"Prateek Jain",
"P. Bartlett",
"I. Dhillon"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yuanzhi Li",
"Yang Yuan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Quynh N. Nguyen",
"Matthias Hein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ilja Kuzborskij",
"Christoph H. Lampert"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yuandong Tian"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Alon Brutzkus",
"A. Globerson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Daniel Soudry",
"Elad Hoffer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Chiyuan Zhang",
"Samy Bengio",
"Moritz Hardt",
"B. Recht",
"O. Vinyals"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Moritz Hardt",
"Tengyu Ma",
"B. Recht"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Behnam Neyshabur",
"Ryota Tomioka",
"N. Srebro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Behnam Neyshabur",
"Ryota Tomioka",
"N. Srebro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Xavier Glorot",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Anthony",
"P. Bartlett"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Xiao-Hu Yu",
"Guo-An Chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Shai Shalev-Shwartz",
"Shai Ben-David"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"N. Littlestone",
"Manfred K. Warmuth"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Gori",
"A. Tesi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1710.05468",
"1709.06129",
"1707.09564",
"1707.04926",
"1706.08947",
"1706.08498",
"1706.03175",
"1705.09886",
"1704.08045",
"1703.01678",
"1703.00560",
"1702.07966",
"1702.05777",
"1611.03530",
"1609.05191",
"1503.00036",
"1412.6614",
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"3520119",
"3624410",
"3531730",
"29068185",
"9597660",
"90880",
"1002824",
"8592143",
"3286674",
"3351158",
"14147309",
"13000960",
"88514953",
"6212000",
"7597719",
"14338058",
"6021932",
"5575601",
"35737200",
"206458011",
"261295214",
"9780485",
"8098333"
],
"intents": [
[
"methodology"
],
[
"background"
],
[
"background",
"methodology"
],
[
"background"
],
[
"background",
"methodology"
],
[
"methodology"
],
[
"background"
],
[],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"result"
],
[
"result"
],
[
"background",
"methodology"
],
[
"background"
],
[
"result"
]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false
]
} | null | 84 | 3.25 | 0.703704 | 0.583333 | null | null | null | null | null | rJ33wwxRb |
sharma|learnability_of_learned_neural_networks|ICLR_cc_2018_Conference | Learnability of Learned Neural Networks | This paper explores the simplicity of learned neural networks under various settings: learned on real vs random data, varying size/architecture and using large minibatch size vs small minibatch size. The notion of simplicity used here is that of learnability i.e., how accurately can the prediction function of a neural network be learned from labeled samples from it. While learnability is different from (in fact often higher than) test accuracy, the results herein suggest that there is a strong correlation between small generalization errors and high learnability.
This work also shows that there exist significant qualitative differences between shallow networks and popular deep networks. More broadly, this paper extends, in a new direction, previous work on understanding the properties of learned neural networks. Our hope is that such an empirical study of learned neural networks might shed light on the right assumptions that can be made for a theoretical study of deep learning. | {
"name": [],
"affiliation": []
} | Exploring the Learnability of Learned Neural Networks | [
"Learnability",
"Generalizability",
"Understanding Deep Learning"
] | null | 2018-02-15 22:29:36 | 35 | null | null | null | null | null | null | null | null | false | + The paper proposes an interesting empirical measure of ""learnability"" of a trained network: how well the predictive function it represents can be learned by another network. And shows it empirically seems to correlate with better generalization.
- The work is purely empirical: it features no theory relating this learnability to generalization
- The learnability measure is somewhat ad hoc, with moving parts left to be specified (learning network, data splits, ...)
- As pointed out by a reviewer, learnability doesn't really provide any answers for now.
- The work would be much stronger if it went beyond a mere correlation study, and if learnability considerations allowed one to derive a new approach/regularization scheme that was convincingly shown to improve generalization.
| {
"review_id": [
"rkf_j7cgG",
"H17N5b5lf",
"BJ9PiZ9eG"
],
"review": [
{
"title": "title: The paper poses interesting questions, but learnability doesn't provide many answers",
"paper_summary": null,
"main_review": "main_review: Review Summary:\nThe primary claim that there is \"a strong correlation between small generalization errors and high learnability\" is correct and supported by evidence, but it doesn't provide much insight for the questions posed at the beginning of the paper or for a general better understanding of theoretical deep learning. In fact the relationship between test accuracy and learnability seems quite obvious, which unfortunately undermines the usefulness of the learnability metric which is used in many experiments in the paper.\n\nFor example, consider the results in Table 7. A small network (N1 = 16 neurons) with low test accuracy results in a low learnability, while a large network (N1 = 1024 neurons) gets a higher test accuracy and higher learnability. In this case, the small network can be thought of as applying higher label noise relative to the larger network. Thus it is expected that agreement between N1 and N2 (learnability) will be higher for the larger network, as the predictions of N1 are less noisy. More importantly, this relationship between test accuracy and learnability doesn't answer the original question Q2 posed: \"Do larger neural networks learn simpler patterns compared to neural networks when trained on real data\". It instead draws some obvious conclusions about noisy labeling of training data.\n\nOther results presented in the paper are puzzling and require further experimentation and discussion, such as the trend that the learnability of shallow networks on random data is much higher than 10%, as discussed at the bottom of page 4. The authors provide some possible reasoning, stating that this strange effect could be due to class imbalance, but it isn't convincing enough.\n\nOther comments:\n-Section 3.4 is unrelated to the primary arguments of the paper and seems like a filler.\n-Equations should have equation numbers\n-Learnability numbers reported in all tables should be between 0-1 per the definition on page 3\n-As suggested in the final sentence of the discussion, it would be nice if conclusions drawn from the learnability experiments done in this paper were applied to the design new networks which better generalize",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Very nice paper showing how large networks can actually be \"simple\", in spite of their large capacity.",
"paper_summary": null,
"main_review": "main_review: Summary:\nThis paper presents very nice experiments comparing the complexity of various different neural networks using the notion of \"learnability\" --- the learnability of a model (N1) is defined as the \"expected agreement\" between the output of N1, and the output of another model N2 which has been trained to match N1 (on a dataset of size n). The paper suggests that the learnability of a model is a good measure of how simple the function learned by that model is --- furthermore, it shows that this notion of learnability correlates well (across extensive experiments) with the test accuracy of the model.\n\nThe paper presents a number of interesting results:\n1) Larger networks are typically more learnable than smaller ones (typically we think of larger networks as being MORE complicated than smaller networks -- this result suggests that in an important sense, large networks are simpler).\n2) Networks trained with random data are significantly less learnable than networks trained on real data.\n3) Networks trained on small mini-batches (larger variance SGD updates) are more learnable than those trained on large minibatches.\n\nThese results are in line with several of the observations made by Zhang et al (2017), which showed that neural networks are able to both (a) fit random data, and (b) generalize well; these results at first seem to run counter to the ideas from statistical learning theory that models with high capacity (VC dimension, radamacher complexity, etc.) have much weaker generalization guarantees than lower capacity models. These results suggest that models that have high capacity (by one definition) are also capable of being simple (by another definition). These results nicely complement the work which studies the \"sharpness/curvature\" of the local minima found by neural networks, which argue that the minima which generalize better are those with lower curvature.\n\nReview:\nQuality: I found this to be high quality work. The paper presents many results across a variety of network architectures. One area for improvement is presenting results on larger datasets (currently all experiments are on CIFAR-10), and/or on non-convolutional architectures. Additionally, a discussion of why learnabiblity might imply low generalization error would have been interesting (the more formal, the better), though it is unclear how difficult this would be.\n\nClarity: The paper is written clearly. A small point: Step 2 in section 3.1 should specify that argmax of N1(D2) is used to generate labels for the training of the second network. Also, what dataset D_i is used for tables 3-6? Please specify.\n\nOriginality: The specific questions tackled in this paper are original (learnability on random vs. real data, large vs. small networks, and large vs. small mini-batch training). But it is unclear to me exactly how original this use of \"learnability\" is in evaluating how simple a model is. It seems to me that this particular use of \"learnability\" is original, even though PAC learnability was defined a while ago.\n\nSignificance: I find the results in this paper to be quite significant, and to provide a new way of understanding why deep neural networks generalize. I believe it is important to find new ways of formally defining the \"simplicity/capacity\" of a model, such that \"simpler\" models can be proven to have smaller generalization gap (between train and test error) relative to more \"complicated\" models. 
It is clear that VC dimension and radamacher complexity alone are not enough to explain the generalization performance of neural networks, and that neural networks with high capacity by these definitions are likely \"simple\" by other definitions (as we have seen in this paper). This paper makes an important contribution to this conversation, and could perhaps provide a starting point for theoreticians to better explain why deep networks generalize well.\n\nPros\n- nice experiments, with very interesting results.\n- Helps explain one way in which large networks are in fact \"simple\"\n\nCons\n- The paper does not attempt to relate the notion of learnability to that of generalization performance. All it says is that these two metrics appear to be well correlated.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: This is an interesting empirical attempt at understanding the reproducibility of learned neural networks. Great ideas but needs more work.",
"paper_summary": null,
"main_review": "main_review: The proposed approach to figure out what do deep network learn is interesting -- the approach of learning a learned network. Some aspects needs more work to improve the work. The presentation of the results can be improved further. \n\nFirstly, confidence intervals on many experiments are missing (including Tables 3-9). Also, since we are looking at empirically validating the learnability criterion defined by the authors, all the results (the reported confusion tables) need to be tested statistically (to see whether one dominates the other). \n\nWhat is random label learning of N1 telling us? How different would that be in terms of simply learning random labels on real data directly. Further, the evaluations in Tables 3-6 need more attention since we are interested in the TLP=1 vs. PLP=0 case, and TLP=0 vs. PLP=1 case. \n\nThe influence of depth is not clear -- may be it is because of the way results are reported here. A simple figure with increasing layers vs. learnability values would do a better job at conveying the trends. \n\nThe evaluations in Section 3.5 are not conclusive? What is the question being tested for here? \n\nWhat about the influence of number of classes on learnability trends? Some experiments on large class datasets including cifar100 and/or imagenet need to be reported. \n\n--- Comments after response from authors --- \n\nThe authors have clarified and shown results for several of the issues I was concerned about. Although it is still unclear what the learnability model is capturing for deeper model or the trends in Section 3.5 (looks like the trends may relate to stability of SGD as well here) -- the proposed ideas are worth discussing. I have appropriately modified my rating. \n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.6666666865348816,
0.5555555820465088
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"New Revision",
"Relating learnability and generalization",
"Correlation between test accuracy and learnability \"not obvious\"",
"Response -- part II",
"Response to other comments",
"Response -- part I"
],
"comment": [
"After taking the reviewer’s suggestions we have made the following changes:\n\n-\tSec. 3.1: Modified equation (1) to normalize learnability (previous equation multiplied by 100)\n-\tSec. 3: Included new learnability results for MLPs (Multi-Layer Perceptrons) and CIFAR 100 dataset \n-\tAll reported tables now have confidence intervals \n-\tIntroduced a table (Table 5 in Sec. 3.2) for showing class-wise percentage distribution for N1\n-\tIncluded a new section (Sec. 4) with plots of learnability and generalization error vs epoch \n-\tIncluded an appendix about learnability for MNIST\n\n",
"Thank you for the review and kind words. The major comment/suggestion is to relate the notion of learnability to that of generalization. Indeed, we have a partial intuitive connection between these two notions based on Figure 3 in the updated draft. We also added Section 4 to discuss this aspect. We hypothesize that learnability captures the inductive bias of SGD training of neural networks. More precisely, when we start training, intuitively, the network is simpler (learnability is high) and generalization error is low (both train and test errors are high). As SGD changes the network to reduce the training error, it becomes more complex (learnability decreases) and the generalization error decreases (train error decreases rapidly while test error does not decrease as rapidly). Training is stopped when the training error becomes less than 1%. At this point, learnability has decreased from its initial high value, and generalization error has increased from its initial low value. Their precise values might be close (as happens in the case of, e.g., N1=N2=VGG11), or not so close (as happens in the case of N1 and N2 being shallow 2-layer CNNs with layer size 1024). Making this connection more formal would be an interesting direction of future work.",
"“-Review Summary: The primary claim that there is a strong correlation between small generalization errors and high learnability\" is correct and supported by evidence, . . .. .Do larger neural networks learn simpler patterns compared to neural networks when trained on real data. It instead draws some obvious conclusions about noisy labeling of training data.”\n\nFirstly, we would like to stress that there is “no obvious connection” between test accuracies and learnability. This is clearly demonstrated by a network N1 which predicts the same class (say class 1) for all examples. While N1 is easily learnable, its test accuracy is 10%. The reason for this apparent conflicting behavior is that even though N1 does noisy labeling of training data, the noise introduced is not random – it is highly structured. \n\nThe same argument applies to the cases of learned small (N1_small = 16 neurons) and large (N1_large = 1024 neurons) networks. At an intuitive level, one would expect that the noise added by N1_small is much more structured (simpler, smoother) compared to that added by N1_large, since the noise added by N1_small is generated by a small network. In short, higher test accuracy of N1_large does not obviously explain its superior learnability value compared to N1_small. Note that learnability and test accuracy can be substantially different (for shallow networks, the learnability can be up to 16 percent points higher---see Table 7), which shows that N2 learns the structure of N1 apart from learning about noisy version of the original data. \n\nAnother way to look at this experiment is to forget that there ever was true data (and hence also forget test accuracies) – all we have are N1_small and N1_large. Given just N1_small and N1_large, considering their relative sizes, traditional wisdom suggests that N1_small is more learnable than N1_large---we think that this has at least as much intuitive force as the hypothesis you suggest. However, that is simply not the case. There is something about N1_large which, despite its large size, makes it much easier to learn than N1_small. This precisely answers Q2: larger neural networks do learn simpler patterns compared to smaller networks when trained on real data. \n\nIf you have a look at the included MNIST results in the appendix, we can clearly see that even a very simpler network very few number of parameters and low-test accuracy is highly learnable because of its simplicity.\n\nTo sum up, explanation of the correlation between generalizability and learnability does not seem to be obvious. We do offer one partial explanation below in reply to AnonReviewer2.\n\n \n \n“Other results presented in the paper are puzzling and require further experimentation and discussion, such as the trend that the learnability of shallow networks on random data is much higher than 10%, as discussed at the bottom of page 4. The authors provide some possible reasoning, stating that this strange effect could be due to class imbalance, but it isn't convincing enough.”\n\nFollowing up on your comment, we present the class imbalance values for two deep networks and two shallow networks on true data, random labels and random images in Table 5 in the updated draft. While class imbalance is slightly higher for shallow networks compared to deeper ones on random data, it is indeed the case that the difference in class imbalance is not high. Answering this question does seem to require further experimentation.",
"“The evaluations in Section 3.5 are not conclusive? What is the question being tested for here? “\n\nThese experiments are an attempt to better understand the notion of learnability as we now explain in a bit more detail than in the paper: While our experiments in previous sections have the learnability values quite concentrated (confidence intervals are small), they say nothing about how concentrated the function computed by N1 itself is across different runs. More precisely, if we train N1 several times using SGD, we expect that the function computed by N1 approximates the data well. However, this function may differ for different runs of SGD and since we are interested in the learnability of the function computed by N1, we would like to understand if it's the same function we are learning each time. In the experiments of this section we are trying to understand the extent to which this happens.\n \nHere is one concrete conclusion of these experiments (also mentioned in the paper). An immediate conjecture suggested by the confusion matrix of VGG11 is that perhaps all that N2 learns is the original data from N1 as the agreement between the functions computed via different SGD runs is approximately the same as the test accuracy (about 73%). This is refuted by Figure 2 as it shows that only on about 55% of data there is full agreement among the different N1's. \n \nAdditionally, we can try to relate these experiments to other experiments in the paper: The confusion matrices clearly show that the (function computed by) N1 is considerably more stable in the case of shallow networks than in the case of VGG-11. A similarly stark difference between the two cases is seen also in Tables 7 and 8. In the former, the learnability can be much higher compared to test accuracy; but in the latter, learnability is about the same as test accuracy. It's conceivable that these two phenomena are related and investigating this potential link could provide further insights into both.\n\nOf course, these conclusions lead us to further questions. It is not our claim that we provide a full understanding of learnability and generalization. \n\n\n “What about the influence of number of classes on learnability trends? Some experiments on large class datasets including cifar100 and/or imagenet need to be reported. “\n \nWe have included results on CIFAR100 in Table 4. The results here confirm the trends observed on CIFAR10.\n",
"Other comments: \n“-Section 3.4 is unrelated to the primary arguments of the paper and seems like a filler”\nWe think that Section 3.4 perfectly aligns with the theme of the paper i.e., exploring learnability of learned neural networks and its relation to generalization. This section is aimed at answering Q3 posed at the beginning of the paper. It is well-known that networks obtained with higher batch size have poorer generalization. As our experiments indicate, networks trained with higher batch size also have poorer learnability. A priori, it's not clear what to expect from such an experiment on learnability. Thus, our experiments in this section can be thought of another confirmation of our finding that learnability and generalization tend to be correlated. \n\n \n“-Equations should have equation numbers “\nThere's only one equation in the paper and it's numbered (1). Did we understand your comment correctly?\n \n“-Learnability numbers reported in all tables should be between 0-1 per the definition on page 3”\nYou are correct. The reported values are percent values obtained by multiplying the value in the definition by 100. We have rectified this in the updated version. \n\n“-As suggested in the final sentence of the discussion, it would be nice if conclusions drawn from the learnability experiments done in this paper were applied to the design new networks which better generalize”\n \nOne immediate approach to achieve this would be to regularize training so as to guide it towards more learnable networks. Since learnability of a network can be estimated (but this is not very cheap) this is a reasonably concrete approach, though considerable amount of work seems to be required to make this work. \nThe final but one sentence of the discussion points out another way for this goal to be achieved: characterizing neural networks that can be efficiently learned via backprop. If such a characterization is available, either regularization of the loss function or modifying the backprop updates might be able to help us design new networks that generalize better. \n\nWhile training networks with better generalization is certainly a long-term goal of this study, it is outside the scope of current paper. We note that while the concept of flat/sharp minima and its relation to generalization were proposed back in 1997, it took almost 20 years to design a new algorithm (Entropy SGD) that exploits this principle to find networks that generalize better (and is still an ongoing program of work).\n",
"“The proposed approach to figure out what do deep network learn is interesting -- the approach of learning a learned network. Some aspects needs more work to improve the work. The presentation of the results can be improved further. \nFirstly, confidence intervals on many experiments are missing (including Tables 3-9). Also, since we are looking at empirically validating the learnability criterion defined by the authors, all the results (the reported confusion tables) need to be tested statistically (to see whether one dominates the other). “\nThese were not included in later tables to reduce clutter. We have now included these in the updated version. \n \n “What is random label learning of N1 telling us? How different would that be in terms of simply learning random labels on real data directly. Further, the evaluations in Tables 3-6 need more attention since we are interested in the TLP=1 vs. PLP=0 case, and TLP=0 vs. PLP=1 case”\n\nRandom label learning of N1 (Section 3.2) is trying to answer Q1 posed in the introduction: do neural networks learn simple patterns on random training data? Or equivalently, we could ask: are neural networks learned on random training data simple? The results of Section 3.2 tell us that this is not the case. There is a subtle but substantial difference between learning N2 using data from N1 (which itself is obtained by random label learning, as done in this paper) and learning N2 simply from random labels on real data directly. In the first scenario, the training and test data of N2 are both generated by N1, so it is indeed possible to get even 100% accuracy for N2. On the other hand, in the second scenario, the training and test data for N2 are random and independent. So the test accuracy of N2 will be close to 10% with high probability.\n\nWe are sorry, we did not understand your comment about Tables 3-6. Could you please elaborate?\n \n “The influence of depth is not clear -- may be it is because of the way results are reported here. A simple figure with increasing layers vs. learnability values would do a better job at conveying the trends. “\n \nFor clarity, we have now included results on learnability of MLPs with varying depth and a fixed hidden unit size (Table 3). These results suggest that learnability decreases slightly with increasing depth as the number of parameters increase. Note however, that the test accuracies here stay approximately the same with increasing depth. In this case, increasing depth naively does not seem to help.\n\nFor popular networks, we need to be careful about drawing conclusions about depth and learnability since a network with higher depth might still have much fewer parameters and hence have low representational power as well as test accuracy. This is the reason we chose to order the networks in increasing order of their test accuracy, which captures their generalizability since all the networks achieve a training error of zero.\n"
]
} | {
"paperhash": [
"arora|provable_bounds_for_learning_some_deep_representations",
"lei|do_deep_nets_really_need_to_be_deep?",
"belanger|structured_prediction_energy_networks",
"chen|dual_path_networks",
"dinh|sharp_minima_can_generalize_for_deep_nets",
"giryes|deep_neural_networks_with_random_gaussian_weights:_a_universal_classification_strategy?",
"goel|learning_depth-three_neural_networks_in_polynomial_time",
"goodfellow|qualitatively_characterizing_neural_network_optimization_problems",
"he|deep_residual_learning_for_image_recognition",
"he|identity_mappings_in_deep_residual_networks",
"hinton|distilling_the_knowledge_in_a_neural_network",
"hochreiter|flat_minima",
"huang|densely_connected_convolutional_networks",
"janzamin|beating_the_perils_of_non-convexity:_guaranteed_training_of_neural_networks_using_tensor_methods",
"kearns|efficient_noise-tolerant_learning_from_statistical_queries",
"michael|an_introduction_to_computational_learning_theory",
"keskar|on_large-batch_training_for_deep_learning:_generalization_gap_and_sharp_minima",
"krizhevsky|learning_multiple_layers_of_features_from_tiny_images",
"krueger|deep_nets_dont_learn_via_memorization",
"lecun|efficient_backprop_in_neural_networks:_tricks_of_the_trade",
"neyshabur|implicit_regularization_in_deep_learning",
"neyshabur|search_of_the_real_inductive_bias:_on_the_role_of_implicit_regularization_in_deep_learning",
"rolnick|serge_belongie,_and_nir_shavit._deep_learning_is_robust_to_massive_label_noise",
"russakovsky|imagenet_large_scale_visual_recognition_challenge",
"sagun|empirical_analysis_of_the_hessian_of_over-parametrized_neural_networks",
"shalev-shwartz|failures_of_gradient-based_deep_learning",
"shamir|distribution-specific_hardness_of_learning_neural_networks",
"simonyan|very_deep_convolutional_networks_for_large-scale_image_recognition",
"song|on_the_complexity_of_learning_neural_networks",
"szegedy|going_deeper_with_convolutions",
"urban|do_deep_convolutional_nets_really_need_to_be_deep_(or_even_convolutional",
"leslie|a_theory_of_the_learnable",
"xiong|achieving_human_parity_in_conversational_speech_recognition",
"zhang|understanding_deep_learning_requires_rethinking_generalization",
"zhong|recovery_guarantees_for_one-hidden-layer_neural_networks"
],
"title": [
"Provable bounds for learning some deep representations",
"Do deep nets really need to be deep?",
"Structured prediction energy networks",
"Dual path networks",
"Sharp minima can generalize for deep nets",
"Deep neural networks with random gaussian weights: a universal classification strategy?",
"Learning depth-three neural networks in polynomial time",
"Qualitatively characterizing neural network optimization problems",
"Deep residual learning for image recognition",
"Identity mappings in deep residual networks",
"Distilling the knowledge in a neural network",
"Flat minima",
"Densely connected convolutional networks",
"Beating the perils of non-convexity: Guaranteed training of neural networks using tensor methods",
"Efficient noise-tolerant learning from statistical queries",
"An introduction to computational learning theory",
"On large-batch training for deep learning: Generalization gap and sharp minima",
"Learning multiple layers of features from tiny images",
"Deep nets dont learn via memorization",
"Efficient backprop in neural networks: Tricks of the trade",
"Implicit regularization in deep learning",
"search of the real inductive bias: On the role of implicit regularization in deep learning",
"Serge Belongie, and Nir Shavit. Deep learning is robust to massive label noise",
"Imagenet large scale visual recognition challenge",
"Empirical analysis of the hessian of over-parametrized neural networks",
"Failures of gradient-based deep learning",
"Distribution-specific hardness of learning neural networks",
"Very deep convolutional networks for large-scale image recognition",
"On the complexity of learning neural networks",
"Going deeper with convolutions",
"Do deep convolutional nets really need to be deep (or even convolutional",
"A theory of the learnable",
"Achieving human parity in conversational speech recognition",
"Understanding deep learning requires rethinking generalization",
"Recovery guarantees for one-hidden-layer neural networks"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"sanjeev arora",
"aditya bhaskara",
"rong ge",
"tengyu ma"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jimmy lei",
"rich ba",
" caurana"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david belanger",
"andrew mccallum"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yunpeng chen",
"jianan li",
"huaxin xiao",
"xiaojie jin",
"shuicheng yan",
"jiashi feng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"laurent dinh",
"razvan pascanu",
"samy bengio",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"raja giryes",
"guillermo sapiro",
"alexander m bronstein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"surbhi goel",
"adam r klivans"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ian j goodfellow",
"oriol vinyals",
"andrew m saxe"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"geoffrey e hinton",
"oriol vinyals",
"jeffrey dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sepp hochreiter",
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"gao huang",
"zhuang liu",
"kilian q weinberger",
"laurens van der maaten"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"hanie majid janzamin",
"anima sedghi",
" anandkumar"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"michael kearns"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"j michael",
"umesh kearns",
"vazirani virkumar"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nitish shirish keskar",
"dheevatsa mudigere",
"jorge nocedal",
"mikhail smelyanskiy",
"ping tak",
"peter tang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex krizhevsky",
"vinod nair",
"geoffrey hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david krueger",
"nicolas ballas",
"stanislaw jastrzebski",
"arpit devansh",
"s maxinder",
"tegan kanwal",
"emmanuel maharaj",
"asja bengio",
"aaron fischer",
" courville"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" lecun",
"g bottou",
" orr"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" behnam neyshabur"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ryota behnam neyshabur",
"nathan tomioka",
" srebro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david rolnick",
"andreas veit"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"olga russakovsky",
"jia deng",
"hao su",
"jonathan krause",
"sanjeev satheesh",
"sean ma",
"zhiheng huang",
"andrej karpathy",
"aditya khosla",
"michael bernstein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"levent sagun",
"utku evci",
"v ugur guney",
"yann dauphin",
"leon bottou"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"shai shalev-shwartz",
"ohad shamir",
"shaked shammah"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ohad shamir"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"karen simonyan",
"andrew zisserman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"le song",
"santosh vempala",
"john wilmes",
"bo xie"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"christian szegedy",
"wei liu",
"yangqing jia",
"pierre sermanet",
"scott reed",
"dragomir anguelov",
"dumitru erhan",
"vincent vanhoucke",
"andrew rabinovich"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"gregor urban",
"j krzysztof",
"samira ebrahimi geras",
"özlem kahou",
"shengjie aslan",
"rich wang",
"abdelrahman caruana",
"matthai mohamed",
"matthew philipose",
" richardson"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"g leslie",
" valiant"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"wayne xiong",
"jasha droppo",
"xuedong huang",
"frank seide",
"mike seltzer",
"andreas stolcke",
"dong yu",
"geoffrey zweig"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chiyuan zhang",
"samy bengio",
"moritz hardt",
"benjamin recht",
"oriol vinyals"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kai zhong",
"zhao song",
"prateek jain",
"peter l bartlett",
"inderjit s dhillon"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1310.6343v1",
"1312.6184v7",
"1511.06350v3",
"1707.01629v2",
"1703.04933v2",
"1504.08291v5",
"",
"1412.6544v6",
"1512.03385v1",
"1603.05027v3",
"1503.02531v1",
"",
"1608.06993v5",
"1506.08473v3",
"",
"",
"arXiv:1609.04836",
"",
"",
"",
"1709.01953v2",
"1412.6614v4",
"arXiv:1705.10694",
"1409.0575v3",
"arXiv:1706.04454",
"",
"",
"",
"1707.04615v1",
"1409.4842v1",
"",
"",
"1610.05256v2",
"1611.03530v2",
"arXiv:1706.03175"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.518519 | 0.75 | null | null | null | null | null | rJ1RPJWAW |
||
reed|fewshot_autoregressive_density_estimation_towards_learning_to_learn_distributions|ICLR_cc_2018_Conference | 3462549 | 1710.10304 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learn across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the-art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset. | {
"name": [
"s reed",
"y chen",
"t paine",
"a van den oord",
"s m a eslami",
"d rezende",
"o vinyals",
"n de freitas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
} | Few-shot learning PixelCNN | [
"few-shot learning",
"density models",
"meta learning"
] | null | 2018-02-15 22:29:35 | 27 | 88 | 0 | null | null | null | null | null | null | true | This paper incorporates attention into the PixelCNN model and shows how to use MAML to enable few-shot density estimation. The paper received mixed reviews (7,6,4). After the rebuttal, the first reviewer updated the score to accept. The AC shares the first reviewer's concern about novelty. However, it is also not trivial to incorporate attention and MAML into PixelCNN, so the AC decided to accept the paper. | {
"review_id": [
"rkHhxN2lG",
"rJ3vXv5xf",
"HyKWS3KxM"
],
"review": [
{
"title": "title: Official Reviewer 1",
"paper_summary": null,
"main_review": "main_review: This paper focuses on the density estimation when the amount of data available for training is low. The main idea is that a meta-learning model must be learnt, which learns to generate novel density distributions by learn to adapt a basic model on few new samples. The paper presents two independent method.\n\nThe first method is effectively a PixelCNN combined with an attention module. Specifically, the support set is convolved to generate two sets of feature maps, the so called \"key\" and the \"value\" feature maps. The key feature map is used from the model to compute the attention in particular regions in the support images to generate the pixels for the new \"target\" image. The value feature maps are used to copmpute the local encoding, which is used to generate the respective pixels for the new target image, taking into account also the attention values. The second method is simpler, and very similar to fine-tuning the basis network on the few new samples provided during training. Despite some interesting elements, the paper has problems.\n\nFirst, the novelty is rather limited. The first method seems to be slightly more novel, although it is unclear whether the contribution by combining different models is significant. The second method is too similar to fine-tuning: although the authors claim that \\mathcal{L}_inner can be any function that minimizes the total loss \\mathcal{L}, in the end it is clear that the log-likelihood is used. How is this approach (much) different from standard fine-tuning, since the quantity P(x; \\theta') is anyways unknown and cannot be \"trained\" to be maximized.\n\nBesides the limited novelty, the submission leaves several parts unclear. First, why are the convolutional features of the support set in the first methods divided into \"key\" and \"value\" feature maps as in p_key=p[:, 0:P], p_value=p[:, P:2*P]? Is this division arbitrary, or is there a more basic reason? Also, is there any different between key and value? Why not use the same feature map for computing the attention and computing eq (7)?\n\nAlso, in the first model it is suggested that an additional feature can be having a 1-of-K channel for the supporting image label: the reason is that you might have multiple views of objects, and knowing which view contributes to the attention can help learning the density. However, this assumes that the views are ordered, namely that the recording stage has a very particular format. Isn't this a bit unrealistic, given the proposed setup anyways?\n\nRegarding the second method, it is not clear why leaving this room for flexibility (by allowing L_inner to be any function) to the model is a good idea. Isn't this effectively opening the doors to massive overfitting? Besides, isn't the statement that the function \\mathcal{L}_inner void? At the end of the day one can also claim the same for gradient descent: you don't need to have the true gradients of the true loss, as long as the objective function obtains gradually lower and lower values?\n\nLast, it is unclear what is the connection between the first and the second model. Are these two independent models that solve the same problem? Or are they connected?\n\nRegarding the evaluation of the models, the nature of the task makes the evaluation hard: for real data like images one cannot know the true distribution of particular support examples. Surrogate tasks are explored, first image flipping, then likelihood estimation of Omniglot characters, then image generation. 
Image flipping does not sound a very relevant task to density estimation, given that the task is deterministic. Perhaps, what would make more sense would be to generate a new image given that the support set has images of a particular orientation, meaning that the model must learn how to learn densities from arbitrary rotations. Regarding Omniglot character generation, the surrogate task of computing likelihood of known samples gives a bit better, however, this is to be expected when combining a model without attention, with an attention module.\n\nAll in all, the paper has some interesting ideas. I encourage the authors to work more on their submission and think of a better evaluation and resubmit.\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Few shot learning with autoregressive density estimation",
"paper_summary": null,
"main_review": "main_review: This paper focuses on few shot learning with autoregressive density estimation. Specifically, the paper improves PixelCNN with 1) neural attention, 2) meta learning techniques, and shows that the model achieve STOA few showt density estimation on the Omniglot dataset and demonstrate the few showt image generation on the Stanford Online Products dataset. \n\nThe model is interesting, however, several details are not clear, which makes it harder to repeat the model and the experimental results. For example, what is the reason to use the (key, value) pair to encode these support images, what does the \"key\" means and what is the difference between \"keys\" and \"values\"? In the experiments, the author did not explain the meaning of \"nats/dim\" and how to compute it. Another question is about the repetition of the experimental results. We know that PixelCNN is already a quite complicated model, it would be even harder to implement the proposed model. I wonder whether the author will release the official code to public to help the community?",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A solid paper",
"paper_summary": null,
"main_review": "main_review: This paper considers the problem of one/few-shot density estimation, using metalearning techniques that have been applied to one/few-shot supervised learning. The application is an obvious target for research and some relevant citations are missing, e.g. \"Towards a Neural Statistician\" (Edwards et al., ICLR 2017). Nonetheless, I think the current paper seems interesting enough to merit publication.\n\nThe paper is well-produced, i.e. the overall writing, visuals, and narrative flow are good. It was easy to read the paper straight through while understanding both the technical details and more intuitive motivations.\n\nI have some concerns about the architectures and experiments presented in the paper. For architectures: the attention-based model seems powerful but difficult to scale to problems with more inputs for conditioning, and the meta PixelCNN model is a standard PixelCNN trained with the MAML approach by Finn et al. For experiments: the ImageNet flipping task is clearly tailored to the strengths of the attention-based model, and the presentation of the general Omniglot results could be improved. The image flipping experiment is neat, but the attention-based model's strong performance is unsurprising. I think the results in Tables 1/2 should be merged into a single table. It would make it clear that the MAML-based and attention-based models achieve similar performance on this task.\n\nOverall, I think the paper makes a nice contribution. The paper could be improved significantly, e.g., by showing how to scale the attention-based architecture to problems with more data or by designing an architecture specifically for use with MAML-based inference.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.5555555820465088,
0.6666666865348816
],
"confidence": [
1,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Clarifications about the method, and corrections of two important factual errors in a review."
],
"comment": [
"We sincerely thank all reviewers for their thoughtful feedback. We note that AR2 and AR3 both recommend accepting the paper. AR1 recommends reject, although we believe there are a few critical factual errors in that review that, once corrected, should be reflected in a higher score.\n\nBelow we respond to each review:\n\nAR1:\n\nWe think there are a few important factual errors in this review regarding our method, which should have a substantial effect on the review score. We address these below, and will attempt to improve our writing in the paper to make these points clearer.\n\nFine-tuning: Meta PixelCNN inference is in fact different than standard fine-tuning by gradient descent. With traditional fine-tuning, the procedure is ad-hoc (e.g. how many fine-tuning gradient steps, what learning rate, what batch size) and needs to be carefully designed to avoid under- or over-fitting. With Meta PixelCNN (and model-agnostic meta learning approaches in general), the critical difference is that the fine-tuning process itself is learned. The key reference that will further clarify this point is https://arxiv.org/abs/1703.03400. https://arxiv.org/abs/1710.11622 provides further theoretical justification.\n\nInner loss: In fact L_{inner} is learned; we do not use likelihood as the inner loss. So we indeed learn to maximize likelihood without computing likelihoods at test time, as claimed in the paper.\n\nBelow we respond to the rest of the review feedback.\n\nClarity regarding contribution of different model aspects: For the first method (Attention PixelCNN), we demonstrate a clear quantitative benefit of adding attention to the baseline PixelCNN. Although (Attention + Autoregressive Image Model) is a natural idea, we prove that it does indeed work and show a simple and effective implementation, which will be valuable to the research community.\n\nWhy use separate key and value? As you suggest it is possible to use the same vector as both key and value. However, separating them may give the network greater flexibility. An ablation here where key and value are the same could be a good experiment, which we are happy to add to the paper.\n\nAssumption of ordered support set: The order can be randomly chosen (and in fact is in our experiments), so the use of a channel for support image identifier should not limit the generality of the method.\n\nWhy flexibility of L_{inner} is useful: There are several reasons that we might want L_{inner} to be flexible. For example, a learned L_{inner} may be more efficient to compute than alternatives, as in this paper, or L_{inner} may require less supervision, for example see https://arxiv.org/abs/1709.04905.\n\nConnection between first and second model: The only connection is that they are autoregressive models based on PixelCNN. They are independent models.\n\nAR2:\n\nPresentation: Thank you for the suggestions on how to improve the presentation; indeed combining the tables seems like a good idea.\n\nScalability of attention: Indeed, this is one of the major challenges in scaling to high-resolution images. Potentially the memory would need to become hierarchical, or we would need to delve more into multiscale variations of the few-shot learning model, which is an interesting area of future research.\n\nAR3:\n\nMeaning of keys/values: The pairs of (query, key) vectors are used to compute the attention scores. 
Then, the “read” output of memory is the sum of all value vectors in memory each weighted by the normalized attention scores.\n\nLog-likelihood results units: “Bits/dim” results are interpretable as the number of bits that a compression scheme based on the PixelCNN model would need to compress every RGB color value (see e.g. https://arxiv.org/pdf/1601.06759.pdf page 6 for discussion). Nats/dim is the same but multiplied by ln(2). Concretely, in TensorFlow we can compute this value using tf.softmax_cross_entropy_with_logits or tf.sigmoid_cross_entropy_with_logits and then dividing by the total number of dimensions in the image.\n\nPublic PixelCNN replication: A great resource for this is https://github.com/openai/pixel-cnn, which is state of the art, and straightforward to modify. Furthermore, we are happy to help guide researchers replicate our experiments, especially on Omniglot which is now a common benchmark."
]
} | {
"paperhash": [
"finn|one-shot_visual_imitation_learning_via_meta-learning",
"bornschein|variational_memory_addressing_in_generative_models",
"gehring|convolutional_sequence_to_sequence_learning",
"duan|one-shot_imitation_learning",
"reed|parallel_multiscale_autoregressive_density_estimation",
"finn|model-agnostic_meta-learning_for_fast_adaptation_of_deep_networks",
"shyam|attentive_recurrent_comparators",
"bartunov|fast_adaptation_in_generative_models_with_generative_matching_networks",
"chen|learning_to_learn_for_global_optimization_of_black_box_functions",
"santoro|meta-learning_with_memory-augmented_neural_networks",
"oord|conditional_image_generation_with_pixelcnn_decoders",
"andrychowicz|learning_to_learn_by_gradient_descent_by_gradient_descent",
"vinyals|matching_networks_for_one_shot_learning",
"reed|generative_adversarial_text_to_image_synthesis",
"gregor|towards_conceptual_compression",
"rezende|one-shot_generalization_in_deep_generative_models",
"song|deep_metric_learning_via_lifted_structured_feature_embedding",
"gregor|draw:_a_recurrent_neural_network_for_image_generation",
"xu|show,_attend_and_tell:_neural_image_caption_generation_with_visual_attention",
"bahdanau|neural_machine_translation_by_jointly_learning_to_align_and_translate",
"lake|one-shot_learning_by_inverting_a_compositional_causal_process",
"deng|imagenet:_a_large-scale_hierarchical_image_database",
"neu|apprenticeship_learning_using_inverse_reinforcement_learning_and_gradient_methods",
"duffie|learning_how_to_learn",
"larochelle|the_neural_autoregressive_distribution_estimator",
"spelke|core_knowledge.",
"smith|the_development_of_embodied_cognition:_six_lessons_from_babies"
],
"title": [
"One-Shot Visual Imitation Learning via Meta-Learning",
"Variational Memory Addressing in Generative Models",
"Convolutional Sequence to Sequence Learning",
"One-Shot Imitation Learning",
"Parallel Multiscale Autoregressive Density Estimation",
"Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks",
"Attentive Recurrent Comparators",
"Fast Adaptation in Generative Models with Generative Matching Networks",
"Learning to Learn for Global Optimization of Black Box Functions",
"Meta-Learning with Memory-Augmented Neural Networks",
"Conditional Image Generation with PixelCNN Decoders",
"Learning to learn by gradient descent by gradient descent",
"Matching Networks for One Shot Learning",
"Generative Adversarial Text to Image Synthesis",
"Towards Conceptual Compression",
"One-Shot Generalization in Deep Generative Models",
"Deep Metric Learning via Lifted Structured Feature Embedding",
"DRAW: A Recurrent Neural Network For Image Generation",
"Show, Attend and Tell: Neural Image Caption Generation with Visual Attention",
"Neural Machine Translation by Jointly Learning to Align and Translate",
"One-shot learning by inverting a compositional causal process",
"ImageNet: A large-scale hierarchical image database",
"Apprenticeship Learning using Inverse Reinforcement Learning and Gradient Methods",
"Learning how to learn",
"The Neural Autoregressive Distribution Estimator",
"Core knowledge.",
"The Development of Embodied Cognition: Six Lessons from Babies"
],
"abstract": [
"In order for a robot to be a generalist that can perform a wide range of jobs, it must be able to acquire a wide variety of skills quickly and efficiently in complex unstructured environments. High-capacity models such as deep neural networks can enable a robot to represent complex skills, but learning each skill from scratch then becomes infeasible. In this work, we present a meta-imitation learning method that enables a robot to learn how to learn more efficiently, allowing it to acquire new skills from just a single demonstration. Unlike prior methods for one-shot imitation, our method can scale to raw pixel inputs and requires data from significantly fewer prior tasks for effective learning of new skills. Our experiments on both simulated and real robot platforms demonstrate the ability to learn new tasks, end-to-end, from a single visual demonstration.",
"Aiming to augment generative models with external memory, we interpret the output of a memory module with stochastic addressing as a conditional mixture distribution, where a read operation corresponds to sampling a discrete memory address and retrieving the corresponding content from memory. This perspective allows us to apply variational inference to memory addressing, which enables effective training of the memory module by using the target information to guide memory lookups. Stochastic addressing is particularly well-suited for generative models as it naturally encourages multimodality which is a prominent aspect of most high-dimensional datasets. Treating the chosen address as a latent variable also allows us to quantify the amount of information gained with a memory lookup and measure the contribution of the memory module to the generative process. To illustrate the advantages of this approach we incorporate it into a variational autoencoder and apply the resulting model to the task of generative few-shot learning. The intuition behind this architecture is that the memory module can pick a relevant template from memory and the continuous part of the model can concentrate on modeling remaining variations. We demonstrate empirically that our model is able to identify and access the relevant memory contents even with hundreds of unseen Omniglot characters in memory",
"The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.",
"Imitation learning has been commonly applied to solve different tasks in isolation. This usually requires either careful feature engineering, or a significant number of samples. This is far from what we desire: ideally, robots should be able to learn from very few demonstrations of any given task, and instantly generalize to new situations of the same task, without requiring task-specific engineering. In this paper, we propose a meta-learning framework for achieving such capability, which we call one-shot imitation learning. \nSpecifically, we consider the setting where there is a very large set of tasks, and each task has many instantiations. For example, a task could be to stack all blocks on a table into a single tower, another task could be to place all blocks on a table into two-block towers, etc. In each case, different instances of the task would consist of different sets of blocks with different initial states. At training time, our algorithm is presented with pairs of demonstrations for a subset of all tasks. A neural net is trained that takes as input one demonstration and the current state (which initially is the initial state of the other demonstration of the pair), and outputs an action with the goal that the resulting sequence of states and actions matches as closely as possible with the second demonstration. At test time, a demonstration of a single instance of a new task is presented, and the neural net is expected to perform well on new instances of this new task. The use of soft attention allows the model to generalize to conditions and tasks unseen in the training data. We anticipate that by training this model on a much greater variety of tasks and settings, we will obtain a general system that can turn any demonstrations into robust policies that can accomplish an overwhelming variety of tasks. \nVideos available at this https URL .",
"PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup - O(log N) sampling instead of O(N) - enabling the practical generation of 512x512 images. We evaluate the model on class-conditional image generation, text-to-image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.",
"We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.",
"Rapid learning requires flexible representations to quickly adopt to new evidence. We develop a novel class of models called Attentive Recurrent Comparators (ARCs) that form representations of objects by cycling through them and making observations. Using the representations extracted by ARCs, we develop a way of approximating a \\textit{dynamic representation space} and use it for one-shot learning. In the task of one-shot classification on the Omniglot dataset, we achieve the state of the art performance with an error rate of 1.5\\%. This represents the first super-human result achieved for this task with a generic model that uses only pixel information.",
"Despite recent advances, the remaining bottlenecks in deep generative models are necessity of extensive training and difficulties with generalization from small number of training examples. We develop a new generative model called Generative Matching Network which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks. By conditioning on the additional input dataset, our model can instantly learn new concepts that were not available in the training data but conform to a similar generative process. The proposed framework does not explicitly restrict diversity of the conditioning data and also does not require an extensive inference procedure for training or adaptation. Our experiments on the Omniglot dataset demonstrate that Generative Matching Networks significantly improve predictive performance on the fly as more additional data is available and outperform existing state of the art conditional generative models.",
"We present a learning to learn approach for training recurrent neural networks to perform black-box global optimization. In the meta-learning phase we use a large set of smooth target functions to learn a recurrent neural network (RNN) optimizer, which is either a long-short term memory network or a differentiable neural computer. After learning, the RNN can be applied to learn policies in reinforcement learning, as well as other black-box learning tasks, including continuous correlated bandits and experimental design. We compare this approach to Bayesian optimization, with emphasis on the issues of computation speed, horizon length, and exploration-exploitation trade-offs.",
"Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of \"one-shot learning.\" Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory location-based focusing mechanisms.",
"This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.",
"The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.",
"Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank.",
"Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.",
"We introduce a simple recurrent variational auto-encoder architecture that significantly improves image modeling. The system represents the state-of-the-art in latent variable models for both the ImageNet and Omniglot datasets. We show that it naturally separates global conceptual information from lower level details, thus addressing one of the fundamentally desired properties of unsupervised learning. Furthermore, the possibility of restricting ourselves to storing only global information about an image allows us to achieve high quality 'conceptual compression'.",
"Humans have an impressive ability to reason about new concepts and experiences from just a single example. In particular, humans have an ability for one-shot generalization: an ability to encounter a new concept, understand its structure, and then be able to generate compelling alternative variations of the concept. We develop machine learning systems with this important capacity by developing new deep generative models, models that combine the representational power of deep learning with the inferential power of Bayesian reasoning. We develop a class of sequential generative models that are built on the principles of feedback and attention. These two characteristics lead to generative models that are among the state-of-the art in density estimation and image generation. We demonstrate the one-shot generalization ability of our models using three tasks: unconditional sampling, generating new exemplars of a given concept, and generating new exemplars of a family of concepts. In all cases our models are able to generate compelling and diverse samples-- having seen new examples just once--providing an important class of general-purpose models for one-shot machine learning.",
"Learning the distance metric between pairs of examples is of great importance for learning and visual recognition. With the remarkable success from the state of the art convolutional neural networks, recent works [1, 31] have shown promising results on discriminatively training the networks to learn semantic feature embeddings where similar examples are mapped close to each other and dissimilar examples are mapped farther apart. In this paper, we describe an algorithm for taking full advantage of the training batches in the neural network training by lifting the vector of pairwise distances within the batch to the matrix of pairwise distances. This step enables the algorithm to learn the state of the art feature embedding by optimizing a novel structured prediction objective on the lifted problem. Additionally, we collected Stanford Online Products dataset: 120k images of 23k classes of online products for metric learning. Our experiments on the CUB-200-2011 [37], CARS196 [19], and Stanford Online Products datasets demonstrate significant improvement over existing deep feature embedding methods on all experimented embedding sizes with the GoogLeNet [33] network. The source code and the dataset are available at: https://github.com/rksltnl/ Deep-Metric-Learning-CVPR16.",
"This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr9k, Flickr30k and MS COCO.",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"People can learn a new visual class from just one example, yet machine learning algorithms typically require hundreds or thousands of examples to tackle the same problems. Here we present a Hierarchical Bayesian model based on com-positionality and causality that can learn a wide range of natural (although simple) visual concepts, generalizing in human-like ways from just one image. We evaluated performance on a challenging one-shot classification task, where our model achieved a human-level error rate while substantially outperforming two deep learning models. We also tested the model on another conceptual task, generating new examples, by using a \"visual Turing test\" to show that our model produces human-like performance.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"In this paper we propose a novel gradient algorithm to learn a policy from an expert's observed behavior assuming that the expert behaves optimally with respect to some unknown reward function of a Markovian Decision Problem. The algorithm's aim is to find a reward function such that the resulting optimal policy matches well the expert's observed behavior. The main difficulty is that the mapping from the parameters to policies is both nonsmooth and highly redundant. Resorting to sub differentials solves the first difficulty, while the second one is overcome by computing natural gradients. We tested the proposed method in two artificial domains and found it to be more reliable and efficient than some previous methods.",
"....................................................................................................................................................... 3 Introduction ................................................................................................................................................. 5 Retrieval Practice ........................................................................................................................................ 9 Spacing your Studies ................................................................................................................................. 11 Interleaving ................................................................................................................................................ 14 Report Card ............................................................................................................................................... 16 My Experiences ......................................................................................................................................... 18 Conclusion ................................................................................................................................................. 20 Bibliography .............................................................................................................................................. 21",
"We describe a new approach for modeling the distribution of high-dimensional vectors of discrete variables. This model is inspired by the restricted Boltzmann machine (RBM), which has been shown to be a powerful model of such distributions. However, an RBM typically does not provide a tractable distribution estimator, since evaluating the probability it assigns to some given observation requires the computation of the so-called partition function, which itself is intractable for RBMs of even moderate size. Our model circumvents this diculty by decomposing the joint distribution of observations into tractable conditional distributions and modeling each conditional using a non-linear function similar to a conditional of an RBM. Our model can also be interpreted as an autoencoder wired such that its output can be used to assign valid probabilities to observations. We show that this new model outperforms other multivariate binary distribution estimators on several datasets and performs similarly to a large (but intractable) RBM.",
"Human cognition is founded, in part, on four systems for representing objects, actions, number, and space. It may be based, as well, on a fifth system for representing social partners. Each system has deep roots in human phylogeny and ontogeny, and it guides and shapes the mental lives of adults. Converging research on human infants, non-human primates, children and adults in diverse cultures can aid both understanding of these systems and attempts to overcome their limits.",
"The embodiment hypothesis is the idea that intelligence emerges in the interaction of an agent with an environment and as a result of sensorimotor activity. We offer six lessons for developing embodied intelligent agents suggested by research in developmental psychology. We argue that starting as a baby grounded in a physical, social, and linguistic world is crucial to the development of the flexible and inventive intelligence that characterizes humankind."
],
"authors": [
{
"name": [
"Chelsea Finn",
"Tianhe Yu",
"Tianhao Zhang",
"P. Abbeel",
"S. Levine"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Bornschein",
"A. Mnih",
"Daniel Zoran",
"Danilo Jimenez Rezende"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jonas Gehring",
"Michael Auli",
"David Grangier",
"Denis Yarats",
"Yann Dauphin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yan Duan",
"Marcin Andrychowicz",
"Bradly C. Stadie",
"Jonathan Ho",
"Jonas Schneider",
"I. Sutskever",
"P. Abbeel",
"Wojciech Zaremba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Scott E. Reed",
"Aäron van den Oord",
"Nal Kalchbrenner",
"Sergio Gomez Colmenarejo",
"Ziyun Wang",
"Yutian Chen",
"Dan Belov",
"Nando de Freitas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Chelsea Finn",
"P. Abbeel",
"S. Levine"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Pranav Shyam",
"Shubham Gupta",
"Ambedkar Dukkipati"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sergey Bartunov",
"D. Vetrov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yutian Chen",
"Matthew W. Hoffman",
"Sergio Gomez Colmenarejo",
"Misha Denil",
"T. Lillicrap",
"Nando de Freitas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Adam Santoro",
"Sergey Bartunov",
"M. Botvinick",
"D. Wierstra",
"T. Lillicrap"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Aäron van den Oord",
"Nal Kalchbrenner",
"L. Espeholt",
"K. Kavukcuoglu",
"O. Vinyals",
"Alex Graves"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Marcin Andrychowicz",
"Misha Denil",
"Sergio Gomez Colmenarejo",
"Matthew W. Hoffman",
"David Pfau",
"T. Schaul",
"Nando de Freitas"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"O. Vinyals",
"C. Blundell",
"T. Lillicrap",
"K. Kavukcuoglu",
"D. Wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Scott E. Reed",
"Zeynep Akata",
"Xinchen Yan",
"Lajanugen Logeswaran",
"B. Schiele",
"Honglak Lee"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Karol Gregor",
"F. Besse",
"Danilo Jimenez Rezende",
"Ivo Danihelka",
"D. Wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Danilo Jimenez Rezende",
"S. Mohamed",
"Ivo Danihelka",
"Karol Gregor",
"D. Wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Hyun Oh Song",
"Yu Xiang",
"S. Jegelka",
"S. Savarese"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Karol Gregor",
"Ivo Danihelka",
"Alex Graves",
"Danilo Jimenez Rezende",
"D. Wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ke Xu",
"Jimmy Ba",
"Ryan Kiros",
"Kyunghyun Cho",
"Aaron C. Courville",
"R. Salakhutdinov",
"R. Zemel",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Dzmitry Bahdanau",
"Kyunghyun Cho",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"B. Lake",
"R. Salakhutdinov",
"J. Tenenbaum"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Jia Deng",
"Wei Dong",
"R. Socher",
"Li-Jia Li",
"K. Li",
"Li Fei-Fei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Gergely Neu",
"Csaba Szepesvari"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kyle Duffie"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"H. Larochelle",
"Iain Murray"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"E. Spelke",
"Katherine D. Kinzler"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Linda B. Smith",
"M. Gasser"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1709.04905",
"1709.07116",
"1705.03122",
"1703.07326",
"1703.03664",
"1703.03400",
"1703.00767",
"1612.02192",
"1611.03824",
null,
"1606.05328",
"1606.04474",
"1606.04080",
"1605.05396",
"1604.08772",
"1603.05106",
"1511.06452",
"1502.04623",
"1502.03044",
"1409.0473",
null,
null,
"1206.5264",
null,
null,
null,
null
],
"s2_corpus_id": [
"22221787",
"1736260",
"3648736",
"8270841",
"8525940",
"6719686",
"10939199",
"10082291",
"14247148",
"6466088",
"14989939",
"2928017",
"8909022",
"1563370",
"7441501",
"5985692",
"5726681",
"1930231",
"1055111",
"11212020",
"1433222",
"57246310",
"9898063",
"11974053",
"141054998",
"10185110",
"7107473"
],
"intents": [
[
"result",
"background"
],
[
"background",
"methodology"
],
[
"methodology"
],
[
"background"
],
[],
[
"result",
"background"
],
[],
[
"methodology"
],
[
"background",
"methodology"
],
[
"methodology"
],
[
"methodology"
],
[
"background"
],
[
"methodology"
],
[
"background",
"methodology"
],
[
"background"
],
[
"background"
],
[],
[
"result",
"background"
],
[
"methodology"
],
[
"background",
"methodology"
],
[
"background",
"methodology"
],
[],
[
"result"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
]
],
"isInfluential": [
true,
true,
false,
false,
false,
true,
false,
false,
true,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
true,
true,
false,
false,
false,
false,
false,
false
]
} | null | 84 | 1.047619 | 0.592593 | 0.833333 | null | null | null | null | null | r1wEFyWCW |
hoogeboom|hexaconv|ICLR_cc_2018_Conference | 1803.02108v1 | HexaConv | The effectiveness of Convolutional Neural Networks stems in large part from their ability to exploit the translation invariance that is inherent in many learning problems. Recently, it was shown that CNNs can exploit other invariances, such as rotation invariance, by using group convolutions instead of planar convolutions. However, for reasons of performance and ease of implementation, it has been necessary to limit the group convolution to transformations that can be applied to the filters without interpolation. Thus, for images with square pixels, only integer translations, rotations by multiples of 90 degrees, and reflections are admissible.
Whereas the square tiling provides a 4-fold rotational symmetry, a hexagonal tiling of the plane has a 6-fold rotational symmetry. In this paper we show how one can efficiently implement planar convolution and group convolution over hexagonal lattices, by re-using existing highly optimized convolution routines. We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget. Furthermore, we find that the increased degree of symmetry of the hexagonal grid increases the effectiveness of group convolutions, by allowing for more parameter sharing. We show that our method significantly outperforms conventional CNNs on the AID aerial scene classification dataset, even outperforming ImageNet pre-trained models. | {
"name": [
"emiel hoogeboom",
"jorn w t peters",
"taco s cohen",
"max welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
} | null | [
"Computer Science",
"Mathematics"
] | 2018-03-06 | 23 | 52 | null | null | null | null | null | null | null | true | This paper implements Group convolutions on inputs defined over hexagonal lattices instead of square lattices, using the roto-translation group. The internal symmetries of the hexagonal grid allow for a larger discrete rotation group than when using square pixels, leading to improved performance on CIFAR and aerial datasets.
The paper is well-written and the reviewers were positive about its results. That said, the AC wonders what the main contribution of this work is relative to existing related works (such as Group Equivariant CNNs, Cohen & Welling'16, or steerable CNNs, Cohen & Welling'17). While it is true that extending G-CNNs to hexagonal lattices is a non-trivial implementation task, the contribution lacks significance on the mathematical/learning fronts, which are perhaps the ones the ICLR audience will care more about. Besides, the numerical results, while improved versus their square lattice counterparts, are not a major improvement over the state-of-the-art.
In summary, the AC believes this is a borderline paper. The unanimous favorable reviews tilt the decision towards acceptance. | {
"review_id": [
"rJnr8zdgM",
"Syvd8Qcgf",
"SysTGDdxf"
],
"review": [
{
"title": "title: This paper extends group equivariant convolutional networks to images with hexagonal pixelation. While performance gains w.r.t. to the original squared lattices are not very large, the work can be inspiring for further research. ",
"paper_summary": null,
"main_review": "main_review: The paper proposes G-HexaConv, a framework extending planar and group convolutions for hexagonal lattices. Original Group-CNNs (G-CNNs) implemented on squared lattices were shown to be invariant to translations and rotations by multiples of 90 degrees. With the hexagonal lattices defined in this paper, this invariance can be extended to rotations by multiples of 60 degrees. This shows small improvements in the CIFAR-10 performances, but larger margins in an Aerial Image Dataset. \n\nDefining hexagonal pixel configurations in convolutional networks requires both resampling input images (under squared lattices) and reformulate image indexing. All these steps are very well explained in the paper, combining mathematical rigor and clarifications. \n\nAll this makes me believe the paper is worth being accepted at ICLR conference. \n\nSome issues that would require further discussion/clarification: \n- G-HexaConv critical points are memory and computation complexity. Authors claim to have an efficient implementation but the paper lacks a proper quantitative evaluation. Memory complexity and computational time comparison between classic CNNs and G-HexaConv should be provided.\n- I encourage the authors to open the source code for reproducibility and comparison with future transformational equivariant representations \n-Also, in Fig.1, I would recommend to clarify that image ‘f’ corresponds to a 2D view of a hexagonal image pixelation. My first impression was a rectangular pixelation seen from a perspective view.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A good submission that shows how to practically implement G-convolutional layers for DNNs on hexagonal lattices and the benefits of doing so.",
"paper_summary": null,
"main_review": "main_review: The paper presents an approach to efficiently implement planar and group convolutions over hexagonal lattices to leverage better accuracy of these operations due to reduced anisotropy. They show that convolutional neural networks thus built lead to better performance - reduced inductive bias - for the same parameter budget.\n\nG-CNNs were introduced by Cohen and Welling in ICML, 2016. They proposed DNN layers that implemented equivariance to symmetry groups. They showed that group equivariant networks can lead to more effective weight sharing and hence more efficient networks as evinced by better performance on CIFAR10 & CIFAR10+ for the same parameter budget. This paper shows G-equivariance implemented on hexagonal lattices can lead to even more efficient networks. \n\nThe benefits of using hexagonal lattices over rectangular lattices is well known in the signal processing as well as in computer vision. For example, see \n\nGolay M. Hexagonal parallel pattern transformation. IEEE Transactions on Computers 1969. 18(8): p. 733-740.\n\nStaunton R. The design of hexagonal sampling structures for image digitization and their use with local operators. Image and Vision Computing 1989. 7(3): p. 162-166. \n\nL. Middleton and J. Sivaswamy, Hexagonal Image Processing, Springer Verlag, London, 2005\n\nThe originality of the paper lies in the practical and efficient implementation of G-Conv layers. Group-equivariant DNNs could lead to more robust, efficient and (arguably) better performing neural networks.\n\nPros\n\n- A good paper that systematically pushes the state of the art towards the design of invariant, efficient and better performing DNNs with G-equivariant representations.\n\n- It leverages upon the existing theory in a variety of areas - signal & image processing and machine learning, to design better DNNs.\n\n - Experimental evaluation suffices for a proof of concept validation of the presented ideas. \n\n \nCons\n\n- The authors should relate the paper better to existing works in the signal processing and vision literature.\n\n- The results are on simple benchmarks like CIFAR-10. It is likely but not immediately apparent if the benefits scale to more complex problems.\n\n- Clarity could be improved in a few places\n\n: Since * is used for a standard convolution operator, it might be useful to use *_g as a G-convolution operator.\n\n: Strictly speaking, for translation equivariance, the shift should be cyclic etc.\n\n: Spelling mistakes - authors should run a spellchecker.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting general approach, not really convinced by the practical use",
"paper_summary": null,
"main_review": "main_review: \nThe authors took my comments nicely into account in their revision, and their answers are convincing. I increase my rating from 5 to 7. The authors could also integrate their discussion about their results on CIFAR in the paper, I think it would help readers understand better the advantage of the contribution.\n\n----\n\nThis paper is based on the theory of group equivariant CNNs (G-CNNs), proposed by Cohen and Welling ICML'16.\n\nRegular convolutions are translation-equivariant, meaning that if an image is translated, its convolution by any filter is also translated. They are however not rotation-invariant for example. G-CNN introduces G-convolutions, which are equivariant to a given transformation group G.\n\nThis paper proposes an efficient implementation of G-convolutions for 6-fold rotations (rotations of multiple of 60 degrees), using a hexagonal lattice. The approach is evaluated on CIFAR-10 and AID, a dataset of aerial scene classification. The approach outperforms G-convolutions implemented on a squared lattice, which allows only 4-fold rotations on AID by a short margin. On CIFAR-10, the difference does not seem significative (according to Tables 1 and 2).\nI guess this can be explained by the fact that rotation equivariance makes sense for aerial images, where the scene is mostly fronto-parallel, but less for CIFAR (especially in the upper layers), which exhibits 3D objects.\n\nI like the general approach of explicitly putting desired equivariance in the convolutional networks. Using a hexagonal lattice is elegant, even if it is not new in computer vision (as written in the paper). However, as the transformation group is limited to rotations, this is interesting in practice mostly for fronto-parallel scenes, as the experiences seem to show. It is not clear how the method can be extended to other groups than 2D rotations.\n\nMoreover, I feel like the paper sometimes tries to mask the fact that the proposed method is limited to rotations. It is admittedly clearly stated in the abstract and introduction, but much less in the rest of the paper.\n\nThe second paragraph of Section 5.1 is difficult to keep in a paper. It says that \"From a qualitative inspection of these hexagonal interpolations we conclude that no information is lost during the sampling procedure.\" \"No information is lost\" is a strong statement from a qualitative inspection, especially of a hexagonal image. This statement should probably be removed. One way to evaluate the information lost could be to iterate interpolation between hexagonal and squared lattices to see if the image starts degrading at some point.\n\n\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.6666666865348816,
0.6666666865348816,
0.6666666865348816
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"reply to reviewer 3",
"reply to reviewer 2",
"reply to reviewer 1",
"reply to public comment"
],
"comment": [
"Dear reviewer,\n\nWe thank you for your comments and suggestions.\n\nOn hexagonal literature: \nWe agree that it is important to recognize existing work on hexagonal signal processing, and have added references by Petersen, Hartman and Middleton in the updated paper. \n\nOn benefits to scaling:\nWe agree that CIFAR-10 is a relatively simple benchmark. In our experiments we do show that an identical network architecture where conventional conv layers are replaced by hexagonal g-conv layers, results in consistent improvements on two distinct datasets. Furthermore, we plan to release our codebase which can help further research to scale these methods to larger problems.\n\nOn the group convolution operator:\nWe think it is a very good suggestion to change group convolution operators from * to *_g, to clarify what type of convolution is used. We changed the relevant operators in the updated paper.\n\nOn exact translation equivariance in CNNs: \nWe agree with the reviewer that in order for equivariance to hold exactly, either:\nshifts should be cyclic, or\nOne should use “valid”-mode convolutions and consider the input image as a function defined on all of Z^2, where values outside of the original image are zero.\nIn practice, we use “same” convolution instead of “valid” convolution, because the latter would increase the size of the feature maps with each layer. Thus, a typical convolutional network is not exactly translation equivariant. We have added a footnote that addresses this detail.\n\nOn spellchecking:\nWe have run a spellchecker and fixed the spelling mistakes. ",
"Dear reviewer,\n\nThank you for your review.\n\nOn performance of G-HexaConvs:\nIn the experiments we show that performance consistently improves with increasing degrees of symmetry. We understand the concern of the reviewer that these differences are small for the CIFAR dataset. The results of the experiments were collected over 10 different runs with random parameter initializations. The experiments section of the paper has been updated to emphasize that the values are obtained by taking the average of 10 runs. To show statistical significance of 6-fold rotational symmetries over 4-fold rotational symmetries, we have done a significance test on our data. We test p4 and p4m versus p6 and p6m (our method) in a pairwise t-test, and find it passes with p=0.036. \n\nAlso, it should not be undervalued that our method outperforms a transfer learning approach on AID, that has been pretrained on ImageNet. Our method reduces classification error by 2% compared to networks that leverage only 4-fold symmetries. And our methods improve the error of conventional network by more than 11%.\n\nOn extensions to other groups than 2D rotations:\nThe reviewer is right to observe that in fronto-parallel scenes, this method can leverage global symmetries in the picture. Nonetheless, our experiments on CIFAR-10 show that although the margin of the benefits is smaller, our method can leverage local symmetries on a smaller scale and improve performance. These findings agree with earlier experiments by Cohen and Welling who used only 4-fold symmetries.\n\nOn masking limitations to the group of rotations:\nIt is not our intention to mask in any way that our method is limited to mirror and rotation transformations. Note that although the mathematical framework introduced by Cohen and Welling can be used for any group, in some cases, such the case of 6-fold rotational symmetry, the concrete implementation is far from trivial. Our paper is focused on the various data structures and indexing schemes that are required for an efficient implementation of hexagonal G-convolutions. If the reviewer is not entirely satisfied after the updates we made to the paper, perhaps the reviewer can help us by pointing to specific locations that could be improved in this respect.\n\nOn information loss conclusion by qualitative inspection: \nWe completely agree with the reviewer this is not a precise claim. None of the conclusions on our paper depend on this claim. Moreover, classification performance does not degrade when using hex-images. The paragraph is rephrased in the updated paper.\n",
"Dear reviewer,\n\nThank you for your support and comments.\n\nOn memory & computation complexity:\nIn our method, the memory and computational complexity scale as in the framework introduced by Cohen and Welling. Say n is the number of elements in a group (e.g. 6 rotations), and say we wish to keep the number of parameters fixed relative to a planar CNN. Then memory scales with sqrt(n) (e.g. ~2.5), and computational complexity scales with n. This is exactly the same cost as simply increasing the number of channels by ~2.5, which one would normally do only when the dataset was much larger.\n\nOn open source:\nTo facilitate the development of further research, in areas such as G-HexaConvs on other coordinate systems, we will release our source code on github. This also addresses the second point that the reviewer raises.\n\nOn Figure 1:\nTo improve the clarity of Fig. 1, we modified the borders and size. In addition, the caption also describes what the image f is. We hope that this addresses the reviewer’s concerns regarding the figure.\n",
"Dear commenter, \n\nThank you for your interest in our paper. \n\nAlthough hexagonal grids have been used in signal processing for some time, our work is focused on the implementation of the group convolution for 6-fold rotational groups p6 and p6m. Thus, unlike other methods, our approach is able to exploit the symmetries of the hexagonal grid to improve statistical efficiency by parameter sharing. We have shown that our method convincingly beats a solid baseline on CIFAR, and outperforms the transfer learning baseline on AID. Our method is not related to adaptive, deformable, or separable convolution in either approach or intent.\n"
]
} | {
"paperhash": [
"esteves|polar_transformer_networks",
"li|deep_rotation_equivariant_network",
"ravanbakhsh|equivariance_through_parameter-sharing",
"zhou|oriented_response_networks",
"marcos|rotation_equivariant_vector_field_networks",
"worrall|harmonic_networks:_deep_translation_and_rotation_equivariance",
"henriques|warped_convolutions:_efficient_invariance_to_spatial_transformations",
"xia|aid:_a_benchmark_data_set_for_performance_evaluation_of_aerial_scene_classification",
"cohen|group_equivariant_convolutional_networks",
"dieleman|exploiting_cyclic_symmetry_in_convolutional_neural_networks",
"he|deep_residual_learning_for_image_recognition",
"jaderberg|spatial_transformer_networks",
"gens|deep_symmetry_networks",
"middleton|hexagonal_image_processing:_a_practical_approach",
"condat|quasi-interpolating_spline_models_for_hexagonally-sampled_data",
"kondor|a_novel_set_of_rotationally_and_translationally_invariant_features_for_images_based_on_the_non-commutative_bispectrum",
"dalal|histograms_of_oriented_gradients_for_human_detection",
"dyk|the_art_of_data_augmentation",
"lowe|object_recognition_from_local_scale-invariant_features",
"hartman|a_hexagonal_pyramid_data_structure_for_image_processing",
"mersereau|the_processing_of_hexagonally_sampled_two-dimensional_signals",
"petersen|sampling_and_reconstruction_of_wave-number-limited_functions_in_n-dimensional_euclidean_spaces",
"|steerable_cnn"
],
"title": [
"Polar Transformer Networks",
"Deep Rotation Equivariant Network",
"Equivariance Through Parameter-Sharing",
"Oriented Response Networks",
"Rotation Equivariant Vector Field Networks",
"Harmonic Networks: Deep Translation and Rotation Equivariance",
"Warped Convolutions: Efficient Invariance to Spatial Transformations",
"AID: A Benchmark Data Set for Performance Evaluation of Aerial Scene Classification",
"Group Equivariant Convolutional Networks",
"Exploiting Cyclic Symmetry in Convolutional Neural Networks",
"Deep Residual Learning for Image Recognition",
"Spatial Transformer Networks",
"Deep Symmetry Networks",
"Hexagonal Image Processing: A Practical Approach",
"Quasi-Interpolating Spline Models for Hexagonally-Sampled Data",
"A novel set of rotationally and translationally invariant features for images based on the non-commutative bispectrum",
"Histograms of oriented gradients for human detection",
"The Art of Data Augmentation",
"Object recognition from local scale-invariant features",
"A hexagonal pyramid data structure for image processing",
"The processing of hexagonally sampled two-dimensional signals",
"Sampling and Reconstruction of Wave-Number-Limited Functions in N-Dimensional Euclidean Spaces",
"Steerable cnn"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"Carlos Esteves",
"Christine Allen-Blanchette",
"Xiaowei Zhou",
"Kostas Daniilidis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Junying Li",
"Zichen Yang",
"Haifeng Liu",
"Deng Cai"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Siamak Ravanbakhsh",
"J. Schneider",
"B. Póczos"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yanzhao Zhou",
"Qixiang Ye",
"Qiang Qiu",
"Jianbin Jiao"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Diego Marcos",
"M. Volpi",
"N. Komodakis",
"D. Tuia"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Daniel E. Worrall",
"Stephan J. Garbin",
"Daniyar Turmukhambetov",
"G. Brostow"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"João F. Henriques",
"A. Vedaldi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Gui-Song Xia",
"Jingwen Hu",
"Fan Hu",
"Baoguang Shi",
"X. Bai",
"Yanfei Zhong",
"Liangpei Zhang",
"Xiaoqiang Lu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Taco Cohen",
"M. Welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Dieleman",
"J. Fauw",
"K. Kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kaiming He",
"X. Zhang",
"Shaoqing Ren",
"Jian Sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Max Jaderberg",
"K. Simonyan",
"Andrew Zisserman",
"K. Kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Robert Gens",
"Pedro M. Domingos"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Lee Middleton",
"J. Sivaswamy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Laurent Condat",
"D. Ville"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Kondor"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Navneet Dalal",
"B. Triggs"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. V. van Dyk",
"X. Meng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Lowe"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"N. Peri Hartman",
"Steven L. Tanimoto"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Mersereau"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Petersen",
"D. Middleton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
}
],
"arxiv_id": [
"1709.01889v3",
"1705.08623v2",
"1702.08389",
"1701.01833v2",
"1612.09346v3",
"1612.04642v2",
"1609.04382v5",
"1608.05167",
"1602.07576v3",
"1602.02660v2",
"1512.03385v1",
"1506.02025v3",
"",
"",
"",
"cs/0701127",
"",
"",
"",
"",
"",
"",
"1612.08498v1"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
[],
[
"background"
],
[],
[
"background"
],
[],
[],
[],
[
"background",
"methodology"
],
[
"background",
"methodology"
],
[
"background"
],
[
"result"
],
[
"background"
],
[],
[
"methodology"
],
[
"background"
],
[],
[],
[
"background"
],
[],
[
"background"
],
[
"background"
],
[
"background"
],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
false,
true,
true,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | null | 83 | 0.626506 | 0.666667 | 0.75 | null | null | null | null | null | r1vuQG-CW |
||
yeh|neighborencoder|ICLR_cc_2018_Conference | Neighbor-encoder | We propose a novel unsupervised representation learning framework called neighbor-encoder in which domain knowledge can be trivially incorporated into the learning process without modifying the general encoder-decoder architecture. In contrast to autoencoder, which reconstructs the input data, neighbor-encoder reconstructs the input data's neighbors. The proposed neighbor-encoder can be considered as a generalization of autoencoder as the input data can be treated as the nearest neighbor of itself with zero distance. By reformulating the representation learning problem as a neighbor reconstruction problem, domain knowledge can be easily incorporated with appropriate definition of similarity or distance between objects. As such, any existing similarity search algorithms can be easily integrated into our framework. Applications of other algorithms (e.g., association rule mining) in our framework is also possible since the concept of ``neighbor" is an abstraction which can be appropriately defined differently in different contexts. We have demonstrated the effectiveness of our framework in various domains, including images, time series, music, etc., with various neighbor definitions. Experimental results show that neighbor-encoder outperforms autoencoder in most scenarios we considered. | {
"name": [],
"affiliation": []
} | null | [
"unsupervised learning",
"representation learning",
"autoencoder"
] | null | 2018-02-15 22:29:36 | 42 | null | null | null | null | null | null | null | null | false | The paper proposes a form of autoencoder that learns to predict the neighbors of a given input vector rather than the input itself. The idea is nice but there are some reviewer concerns about insufficient evaluation and the effect of the curse of dimensionality. The revised paper does address some questions and includes additional helpful experiments with different types of autoencoders. However, the work is still a bit preliminary. The area of auto-encoder variants, and corresponding experiments on CIFAR-10 and the like, is crowded. In order to convince the reader that a new approach makes a real contribution, it should have very thorough experiments. Suggestions: try to improve the CIFAR-10 numbers (they need not be state-of-the-art but should be more credible), adding more data sets (especially high-dimensional ones), and analyzing the effects of factors that are likely to be important (e.g. dimensionality, choice of distance function for neighbor search). | {
"review_id": [
"S1JBxOqlz",
"Hk4qYw7eG",
"HJDy-RKef"
],
"review": [
{
"title": "title: Review: neighbor-encoder -> neighbor encoder",
"paper_summary": null,
"main_review": "main_review: This paper presents a variant of auto-encoder that relaxes the decoder targets to be neighbors of a data point. Different from original auto-encoder, where data point x and the decoder output \\hat{x} are forced to be close, the neighbor-encoder encourage the decoder output to be similar to the neighbors of the input data point. By considering the neighbor information, the decoder targets would have smaller intra-class distances, thus larger inter-class distances, which helps to learn better separated latent representation of data in terms of data clusters. The authors conduct experiments on several real but relative small-scale data sets, and demonstrate the improvements of learned latent representations by using neighbors as targets. \n\nThe method of neighbor prediction is a simple and small modification of the original auto-encoder, but seems to provide a way to augment the targets such that intra-class distance of decoder targets can be tightened. Improvements in the conducted experiments seem significant compared to the most basic auto-encoder.\n\nMajor issues:\n\nThere are some unaddressed theoretical questions. The optimal solution to predict the set of neighbor points in mean-squared metric is to predict the average of those points, which is not well justified as the averaged image can easily fall off the data manifold. This may lead to a more blurry reconstruction when k increases, despite the intra-class targets are tight. It can also in turn harm the latent representation when euclidean neighbors are not actually similar (e.g. images in cifar10/imagenet that are not as simple as 10 digits). This seems to be a defect of the neighbor-encoder method and is not discussed in the paper.\n\nThe data sets used in the experiments are relatively small and simple, larger-scale experiments should be conducted. The fluctuations in Figure 9 and 10 suggest the significant variances in the results. Also, more complicated data/images can decrease the actual similarities of euclidean neighbors, thus affecting the results.\n\nThe baselines are weak. Only the most basic auto-encoder is compared, no additional variants or other data augmentation techniques are compared. It is possible other variants improve the basic auto-encoder in similar ways. \n\nSome results are not very well explained. It seems the performance increases monotonically as the number of neighbors increases (Figure 5, 9, 10). Will this continue or when will the performance decrease? I would expect it to decrease as the far away neighbors will be dissimilar. The authors can either attach the nearest neighbors figures or their statistics, and provide explanations on when and why the performance decrease is expected.\n\nSome notations are confusing and need to be improved. For example, X and Y are actually the same set of images, the separation is a bit confusing; y_i \\in y in last paragraph of page 4 is incorrect, should use something like y_i in N(y).",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Lack of comparison with other autoencoders",
"paper_summary": null,
"main_review": "main_review: This paper describes a generalization of autoencoders that are trained to reconstruct a close neighbor of its input, instead of merely the input itself. Experiments on 3 datasets show that this yields better representations in terms of post hoc classification with a linear classifier or clustering, compared to a regular autoencoder.\n\nAs the authors recognize, there is a long history of research on variants of autoencoders. Unfortunately this paper compares with none of them. While the authors suggest that, since these variations can be combined with the proposed neighbor reconstruction variant, it's not necessary to compare with these other variations, I disagree. It could very well be that this neighbor trick makes other methods worse for instance. \n\nAt the very least, I would expect a comparison with denoising autoencoders, since they are similar if one thinks of the use of neighbors as a structured form of noise added to the input. It could very well be in fact that simply adding noise to the input is sufficient to force the autoencoder to learn a valuable representation, and that the neighbor reconstruction approach is simply an overly complicated approach of achieving the same results. This is an open question right now that I'd expect this paper to answer.\n\nFinally, I think results would be more impressive and likely to have impact if the authors used datasets that are more commonly used for representation learning, so that a direct performance comparison can be made with previously published results. CIFAR 10 and SVHN would be good alternatives.\n\nOverall, I'm afraid I must recommend that this paper be rejected.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Nice Idea but what about \"Curse of Dimensionality\"?",
"paper_summary": null,
"main_review": "main_review: A representation learning framework from unsupervised data, based not on auto-encoding (x in, x out), but on neighbor-encoding (x in, N(x) out, where N(.) denotes the neighbor(s) of x) is introduced. \n\nThe underlying idea is interesting, as such, each and every degree of freedom do not synthesize itself similar to the auto-encoder setting, but rather synthesize a neighbor, or k-neighbors. The authors argue that this form of unsupervised learning is more powerful compared to the standard auto-encoder setting, and some preliminary experimental proof is also provided. \n\nHowever, I would argue that this is not a completely abstract - unsupervised representation learning setting since defining what is \"a neighbor\" and what is \"not a neighbor\" requires quite a bit of domain knowledge. As we all know, the euclidian distance, or any other comparable norm, suffers from the \"Curse of Dimensionality\" as the #-of-Dimensions increase. \n\nFor instance, in section 4.3, the 40-dimensional feature vector space is used to define neighbors in. It would be great how the neighborhood topology in that space looks like.\n\nAll in all, I do like the idea as a concept but I am wary about its applicability to real data where defining a good neighborhood metric might be a major challenge of its own. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.4444444477558136,
0.5555555820465088
],
"confidence": [
0.75,
1,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Thank you very much for your valuable review",
"We really appreciate your valuable review",
"Thank you for your helpful review and kind words",
"Thank you for your interest in our work"
],
"comment": [
"Dear Reviewer,\n\nThank you very much for your valuable review. Addressing your concerns has made our paper much stronger. Our responses to the major issues are listed below:\n\nIssue 1: There are some unaddressed theoretical questions. The optimal solution to predict the set of neighbor points in mean-squared metric is to predict the average of those points, which is not well justified as the averaged image can easily fall off the data manifold. This may lead to a more blurry reconstruction when k increases, despite the intra-class targets are tight. It can also in turn harm the latent representation when euclidean neighbors are not actually similar (e.g. images in cifar10/imagenet that are not as simple as 10 digits). This seems to be a defect of the neighbor-encoder method and is not discussed in the paper.\nResponse to Issue 1: Thank you for raising this concern. The issue is addressed by removing the original configuration in question (in which we randomly selected one of the k nearest neighbors as the target to predict) as it does have this \"averaging\" problem. All the experiments are rerun with the most basic neighbor-encoder setting, in which we predict only the nearest neighbor of each object. As the target to predict is fixed, we no longer suffer from the \"averaging neighbors\" problem.\n\nIssue 2: The data sets used in the experiments are relatively small and simple, larger-scale experiments should be conducted. The fluctuations in Figure 9 and 10 suggest the significant variances in the results. Also, more complicated data/images can decrease the actual similarities of euclidean neighbors, thus affecting the results.\nResponse to Issue 2: After we rerun all the experiments described in response to Issue 1, we no longer see significant variance in the results. A new set of experiment on CIFAR 10 is performed and reported in Section 4.2. We also included experiments comparing three variants of neighbor-encoder (vanilla, denoising, variational) with their autoencoder counterparts.\n\nIssue 3: The baselines are weak. Only the most basic auto-encoder is compared, no additional variants or other data augmentation techniques are compared. It is possible other variants improve the basic auto-encoder in similar ways.\nResponse to Issue 3: We added comparison to two more popular variants of autoencoder, the denoising and variational autoencoder, in all of our experiments.\n\nIssue 4: Some results are not very well explained. It seems the performance increases monotonically as the number of neighbors increases (Figure 5, 9, 10). Will this continue or when will the performance decrease? I would expect it to decrease as the far away neighbors will be dissimilar. The authors can either attach the nearest neighbors figures or their statistics, and provide explanations on when and why the performance decrease is expected.\nResponse to Issue 4: We believe that Figure 6 addresses this issue. A new set of experiments is performed by using neighbors that are further away (i.e., changing 1st neighbor to the ith nearest neighbor). The performance decreases as expected when i is larger than 16 because the performance is crippled by lower quality neighbors. Figure 15 shows example neighbor pairs under different proximity settings.\n\nIssue 5: Some notations are confusing and need to be improved. 
For example, X and Y are actually the same set of images, the separation is a bit confusing; y_i \\in y in last paragraph of page 4 is incorrect, should use something like y_i in N(y).\nResponse to Issue 5: The notation is improved as suggested.\n\nThanks,\nAuthors",
"Dear Reviewer,\n\nWe really appreciate your valuable review! We have modified our paper based on your feedback by:\n \n1) adding denoising and variational autoencoder (and their neighbor-encoder counterparts) to all experiments, and …\n2) adding a new set of experiment on CIFAR 10 in Section 4.2. In all the experiments, we observed that neighbor-encoder and its variants outperform their autoencoder counterparts when applied in semi-supervised classification (when the number of labeled data available is small) and clustering tasks.\n\nThanks,\nAuthors",
"Dear Reviewer,\n\nThank you for your helpful review and kind words! We are glad that you like the idea.\n\nIn the review, you have argued that the neighbor-encoder method is not a completely abstract-unsupervised representation learning method as it requires domain knowledge to define the neighbor relationship. This statement is certainly valid, as we do need some domain knowledge. However, the amount of domain knowledge required by neighbor-encoder is minimal in comparison to what is required by a typical supervised representation learning method: we only need a \"neighbor\" to be defined, the \"non-neighbor\" information is not needed. In other words, we only need to know what is \"similar\" (and this information can be very sparse), but not what is \"not similar\" (the key information needed to divide objects into different classes/clusters). \nFurthermore, note that the domain knowledge provided do not need to be precise. Our MINST example in Section 4.1 simply use Euclidean distance in raw pixel space as the similarity measure to find the neighbors. For the newly added CIFAR10 data set Section 4.2, we use Euclidean distance in a common computer vision feature space as the similarity measure; the feature selected does not have much discriminative power for this data set and only 22% of the object-neighbor pairs are from the same class. Nevertheless, the results (Figure 9 and Table 2) show that all three variants of neighbor-encoder outperform their autoencoder counterparts in both semi-supervised classification (when number of labeled data is small) and clustering tasks.\n\nTo clarify, our claim is not that neighbor-encoder is a purely unsupervised representation learning method. Instead, our claim is that even a tiny amount of domain knowledge can greatly improve unsupervised representation learning, and neighbor-encoder is an effective way to incorporate such domain knowledge into the unsupervised representation learning framework.\n\nFor any comparable norm based neighbor definition, \"curse of dimensionality\" indeed would be a problem. To quantify the severity of such problem, we measured the percentage of object-neighbor pairs being in the same class. For example, in Section 4.4 (originally Section 4.3), about 49% of the object-neighbor pairs in the 40-dimensional feature vector space are in the same class (note that this is relatively high, as the default rate for randomly assigned neighbor is just ~9% for this data set). Another way we envision that can further increase this percentage is to use side information to define a neighbor (as introduced in Section 3.4). For instance, images/document on the same webpage or reviews of the same paper/movie/music could be declared being neighbor of each other. Such side information would much less sensitive to the curse of dimensionality.\n\nThanks,\nAuthors",
"Q: Would it be fair to say that just changing the optimization function to reconstruct the neighbors as well as the input with a simple metric like MSE would be suffice (instead of separate decoders)? \nA: First, thanks for your interest. Do you mean that training the decoder to output a 28 x 56 image (containing both the input’s reconstruction and the neighbor’s reconstruction)?\nIn the MNIST experiment we presented in the paper, the output of the decoder is always a 28 x 28 image containing either the input (in the case of autoencoder) or a neighbor (in the case of neighbor-encoder).\n\nQ: From what I gather, the paper also suggests that the architecture is more powerful in presence of noise (in comparison to existing AE architectures?\nA: It is an observation we made when comparing AE versus NE on the human physical activities data set as we are using a neighbor mining technique which ignores noisy dimensions.\nOur suggestion is that training by neighbor reconstruction instead of self-reconstruction yields better representation for semi-supervised classification and clustering. Since the proposed method is just changing the reconstruction target of existing AE architectures, it can be applied to most existing AE architectures. Based on our experimental result (with vanilla, denosing, and variational architectures), different architecture excels on different data set. Our main finding is on the effect of changing the reconstruction target (neighbor versus self) rather than the architectures.\n"
]
} | {
"paperhash": [
"agrawal|learning_to_see_by_moving",
"bengio|greedy_layer-wise_training_of_deep_networks",
"bojanowski|unsupervised_learning_by_predicting_noise",
"chen|kate:_k-competitive_autoencoder_for_text",
"coates|learning_feature_representations_with_k-means",
"donahue|adversarial_feature_learning",
"dong|metapath2vec:_scalable_representation_learning_for_heterogeneous_networks",
"fuhrmann|automatic_musical_instrument_recognition_from_polyphonic_music_audio_signals",
"goodfellow|generative_adversarial_nets",
"goodfellow|deep_learning",
"grover|node2vec:_scalable_feature_learning_for_networks",
"hochreiter|long_short-term_memory",
"fu|unsupervised_learning_of_invariant_feature_hierarchies_with_applications_to_object_recognition",
"huang|similarity_embedding_network_for_unsupervised_sequential_pattern_learning_by_playing_music_puzzle_games",
"jayaraman|learning_image_representations_tied_to_ego-motion",
"diederik|auto-encoding_variational_bayes",
"koch|siamese_neural_networks_for_one-shot_image_recognition",
"kohonen|self-organized_formation_of_topologically_correct_feature_maps",
"krizhevsky|learning_multiple_layers_of_features_from_tiny_images",
"boesen|autoencoding_beyond_pixels_using_a_learned_similarity_metric",
"lecun|gradient-based_learning_applied_to_document_recognition",
"maaten|visualizing_data_using_t-sne",
"mairal|online_dictionary_learning_for_sparse_coding",
"makhzani|winner-take-all_autoencoders",
"mcfee|eric_battenberg,_and_oriol_nieto._librosa:_audio_and_music_signal_analysis_in_python",
"mikolov|efficient_estimation_of_word_representations_in_vector_space",
"mikolov|distributed_representations_of_words_and_phrases_and_their_compositionality",
"nguyen|hien_to,_and_cyrus_shahabi._m-tsne:_a_framework_for_visualizing_high-dimensional_multivariate_time_series",
"pathak|learning_features_by_watching_objects_move",
"perozzi|deepwalk:_online_learning_of_social_representations",
"radford|unsupervised_representation_learning_with_deep_convolutional_generative_adversarial_networks",
"reiss|creating_and_benchmarking_a_new_dataset_for_physical_activity_monitoring",
"reiss|introducing_a_new_benchmarked_dataset_for_activity_monitoring",
"rezende|stochastic_backpropagation_and_approximate_inference_in_deep_generative_models",
"ribeiro|struc2vec:_learning_node_representations_from_structural_identity",
"srivastava|unsupervised_learning_of_video_representations_using_lstms",
"tang|line:_largescale_information_network_embedding",
"vincent|extracting_and_composing_robust_features_with_denoising_autoencoders",
"vincent|stacked_denoising_autoencoders:_learning_useful_representations_in_a_deep_network_with_a_local_denoising_criterion",
"wang|unsupervised_learning_of_visual_representations_using_videos",
"yang|towards_k-means-friendly_spaces:_simultaneous_deep_learning_and_clustering",
"yeh|matrix_profile_vi:_meaningful_multidimensional_motif_discovery"
],
"title": [
"Learning to see by moving",
"Greedy layer-wise training of deep networks",
"Unsupervised learning by predicting noise",
"Kate: K-competitive autoencoder for text",
"Learning feature representations with k-means",
"Adversarial feature learning",
"metapath2vec: Scalable representation learning for heterogeneous networks",
"Automatic musical instrument recognition from polyphonic music audio signals",
"Generative adversarial nets",
"Deep Learning",
"node2vec: Scalable feature learning for networks",
"Long short-term memory",
"Unsupervised learning of invariant feature hierarchies with applications to object recognition",
"Similarity embedding network for unsupervised sequential pattern learning by playing music puzzle games",
"Learning image representations tied to ego-motion",
"Auto-encoding variational bayes",
"Siamese neural networks for one-shot image recognition",
"Self-organized formation of topologically correct feature maps",
"Learning multiple layers of features from tiny images",
"Autoencoding beyond pixels using a learned similarity metric",
"Gradient-based learning applied to document recognition",
"Visualizing data using t-sne",
"Online dictionary learning for sparse coding",
"Winner-take-all autoencoders",
"Eric Battenberg, and Oriol Nieto. librosa: Audio and music signal analysis in python",
"Efficient estimation of word representations in vector space",
"Distributed representations of words and phrases and their compositionality",
"Hien To, and Cyrus Shahabi. m-tsne: A framework for visualizing high-dimensional multivariate time series",
"Learning features by watching objects move",
"Deepwalk: Online learning of social representations",
"Unsupervised representation learning with deep convolutional generative adversarial networks",
"Creating and benchmarking a new dataset for physical activity monitoring",
"Introducing a new benchmarked dataset for activity monitoring",
"Stochastic backpropagation and approximate inference in deep generative models",
"struc2vec: Learning node representations from structural identity",
"Unsupervised learning of video representations using lstms",
"Line: Largescale information network embedding",
"Extracting and composing robust features with denoising autoencoders",
"Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion",
"Unsupervised learning of visual representations using videos",
"Towards k-means-friendly spaces: Simultaneous deep learning and clustering",
"Matrix profile vi: Meaningful multidimensional motif discovery"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"pulkit agrawal",
"joao carreira",
"jitendra malik"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoshua bengio",
"pascal lamblin",
"dan popovici",
"hugo larochelle"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"piotr bojanowski",
"armand joulin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yu chen",
"mohammed j zaki"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"adam coates",
"andrew y ng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jeff donahue",
"philipp krähenbühl",
"trevor darrell"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yuxiao dong",
"nitesh v chawla",
"ananthram swami"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ferdinand fuhrmann"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ian goodfellow",
"jean pouget-abadie",
"mehdi mirza",
"bing xu",
"david warde-farley",
"sherjil ozair",
"aaron courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ian goodfellow",
"yoshua bengio",
"aaron courville"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"aditya grover",
"jure leskovec"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sepp hochreiter",
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jie fu",
"y-lan huang",
"yann boureau",
" lecun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yu-siang huang",
"szu-yu chou",
"yi-hsuan yang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"dinesh jayaraman",
"kristen grauman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"p diederik",
"max kingma",
" welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"gregory koch",
"richard zemel",
"ruslan salakhutdinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"teuvo kohonen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex krizhevsky",
"geoffrey hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"anders boesen",
"lindbo larsen",
"søren kaae sønderby",
"hugo larochelle",
"ole winther"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yann lecun",
"léon bottou",
"yoshua bengio",
"patrick haffner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"laurens van der maaten",
"geoffrey hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"julien mairal",
"francis bach",
"jean ponce",
"guillermo sapiro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alireza makhzani",
"brendan j frey"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"brian mcfee",
"colin raffel",
"dawen liang",
"p w daniel",
"matt ellis",
" mcvicar"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tomas mikolov",
"kai chen",
"greg corrado",
"jeffrey dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tomas mikolov",
"ilya sutskever",
"kai chen",
"greg s corrado",
"jeff dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"minh nguyen",
"sanjay purushotham"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"deepak pathak",
"ross girshick",
"piotr dollár",
"trevor darrell",
"bharath hariharan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"bryan perozzi",
"rami al-rfou",
"steven skiena"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alec radford",
"luke metz",
"soumith chintala"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"attila reiss",
"didier stricker"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"attila reiss",
"didier stricker"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"danilo jimenez rezende",
"shakir mohamed",
"daan wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pedro hp leonardo fr ribeiro",
" saverese",
" daniel r figueiredo"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nitish srivastava",
"elman mansimov",
"ruslan salakhudinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jian tang",
"meng qu",
"mingzhe wang",
"ming zhang",
"jun yan",
"qiaozhu mei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pascal vincent",
"hugo larochelle",
"yoshua bengio",
"pierre-antoine manzagol"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pascal vincent",
"hugo larochelle",
"isabelle lajoie",
"yoshua bengio",
"pierre-antoine manzagol"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xiaolong wang",
"abhinav gupta"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"bo yang",
"xiao fu",
"nicholas d sidiropoulos",
"mingyi hong"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chin-chia michael yeh",
"nickolas kavantzas",
"eamonn keogh"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1505.01596v2",
"",
"1704.05310v1",
"1705.02033v2",
"",
"1605.09782v7",
"",
"",
"",
"1807.07987v2",
"1607.00653v1",
"",
"",
"arXiv:1709.04384",
"",
"arXiv:1312.6114",
"",
"",
"",
"1512.09300v2",
"",
"",
"",
"",
"",
"1301.3781v3",
"1310.4546v1",
"arXiv:1708.07942",
"1612.06370v2",
"1403.6652v2",
"1511.06434v2",
"",
"",
"1401.4082v3",
"1704.03165v3",
"1502.04681v3",
"",
"",
"",
"1505.00687v2",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.444444 | 0.833333 | null | null | null | null | null | r1vccClCb |
||
zhang|learning_sparse_structured_ensembles_with_sgmcmc_and_network_pruning|ICLR_cc_2018_Conference | Learning Sparse Structured Ensembles with SG-MCMC and Network Pruning | An ensemble of neural networks is known to be more robust and accurate than an individual network, however usually with linearly-increased cost in both training and testing.
In this work, we propose a two-stage method to learn Sparse Structured Ensembles (SSEs) for neural networks.
In the first stage, we run SG-MCMC with group sparse priors to draw an ensemble of samples from the posterior distribution of network parameters. In the second stage, we apply weight-pruning to each sampled network and then perform retraining over the remaining connections.
In this way of learning SSEs with SG-MCMC and pruning, we not only achieve high prediction accuracy since SG-MCMC enhances exploration of the model-parameter space, but also reduce memory and computation cost significantly in both training and testing of NN ensembles.
This is thoroughly evaluated in the experiments of learning SSE ensembles of both FNNs and LSTMs.
For example, in LSTM-based language modeling (LM), we obtain a 21\% relative reduction in LM perplexity by learning an SSE of 4 large LSTM models, which has only 30\% of the model parameters and 70\% of the computations in total, as compared to the baseline large LSTM LM.
To the best of our knowledge, this work represents the first methodology and empirical study of integrating SG-MCMC, group sparse prior and network pruning together for learning NN ensembles. | {
"name": [],
"affiliation": []
} | Propose a novel method by integrating SG-MCMC sampling, group sparse prior and network pruning to learn a Sparse Structured Ensemble (SSE) with improved performance and significantly reduced cost compared to traditional methods. | [
"ensemble learning",
"SG-MCMC",
"group sparse prior",
"network pruning"
] | null | 2018-02-15 22:29:20 | 34 | null | null | null | null | null | null | null | false | This paper is interesting since it goes toward showing the role of model averaging. The clarifications made improve the paper, but its impact is still not fully realised: the common confusion about retraining could be re-examined, the methodology and evaluation clarified further, and the work contextualised more deeply within the wider literature. | {
"review_id": [
"Hy6mmeCgf",
"BJt3Bg5gM",
"B1A7YkceM"
],
"review": [
{
"title": "title: A useful approach for making model averaging more feasible",
"paper_summary": null,
"main_review": "main_review: The authors note that several recent papers have shown that bayesian model averaging is an effective and universal way to improve hold-out performance, but unfortunately are limited by increased computational costs. Towards that end, the authors of this manuscript propose several modifications to this procedure to make it computationally feasible and indeed improve performance.\n\nPros:\nThe authors demonstrate an effective procedure for FNN and LSTMs that makes model averaging improve performance.\nEmpirical evidence is convincing on the utility of the approach.\n\nCons:\nNot clear how this approach would be used with convolutional structures\nMuch of the benefit appears to come from the sparse prior, pruning, and retraining (Figure 3). The model averaging seems to have a smaller contribution. Due to that, it seems that the nature of the contribution needs to be clarified compared to the large literature on sparsifying neural networks, and the introductory comments of the paper should be rewritten to reflect that reality.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A more principled way for training ensemble neural networks, nice idea and experiments",
"paper_summary": null,
"main_review": "main_review: In this paper, the authors present a new framework for training ensemble of neural networks. The approach is based on the recent scalable MCMC methods, namely the stochastic gradient Langevin dynamics.\n\nThe paper is overall well-written and ideas are clear. The main contributions of the paper, namely using SG-MCMC methods within deep learning, and then increasing the computational efficiency by group sparsity+pruning are valuable and can have a significant impact in the domain. Besides, the proposed approach is more elegant the competing ones, while still not being theoretically justified completely. \n\nI have the following minor comments:\n\n1) The authors mention that retraining significantly improves the performance, even without pruning. What is the explanation for this? If there is no pruning, I would expect that all the samples would converge to the same minimum after retraining. Therefore, the reason why retraining improves the performance in all cases is not clear to me.\n\n2) The notation |\\theta_g| is confusing, the authors should use a different symbol.\n\n3) After section 4, the language becomes quite informal sometimes, the authors should check the sentences once again.\n\n4) The results with SGD (1 model) + GSP + PR should be added in order to have a better understanding of the improvements provided by the ensemble networks. \n\n5) Why does the performance get worse \"obviously\" when the pruning is 95% and why is it not obvious when the pruning is 90%?\n\n6) There are several typos\n\npg7: drew -> drawn\npg7: detail -> detailed\npg7: changing -> challenging\npg9: is strongly depend on -> depends on\npg9: two curve -> two curves",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Justification for the proposed algorithm is weak + weak experiments.",
"paper_summary": null,
"main_review": "main_review: The authors propose a procedure to generate an ensemble of sparse structured models. To do this, the authors propose to (1) sample models using SG-MCMC with group sparse prior, (2) prune hidden units with small weights, (3) and retrain weights by optimizing each pruned model. The ensemble is applied to MNIST classification and language modelling on PTB dataset. \n\nI have two major concerns on the paper. First, the proposed procedure is quite empirically designed. So, it is difficult to understand why it works well in some problems. Particularly. the justification on the retraining phase is weak. It seems more like to use SG-MCMC to *initialize* models which will then be *optimized* to find MAP with the sparse-model constraints. The second problem is about the baselines in the MNIST experiments. The FNN-300-100 model without dropout, batch-norm, etc. seems unreasonably weak baseline. So, the results on Table 1 on this small network is not much informative practically. Lastly, I also found a significant effort is also desired to improve the writing. \n\nThe following reference also needs to be discussed in the context of using SG-MCMC in RNN.\n- \"Scalable Bayesian Learning of Recurrent Neural Networks for Language Modeling\", Zhe Gan*, Chunyuan Li*, Changyou Chen, Yunchen Pu, Qinliang Su, Lawrence Carin",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.5555555820465088,
0.3333333432674408
],
"confidence": [
1,
0.5,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Updated paper",
"Response to Reviewer1",
"Response to Reviewer3",
"Response to Reviewer2",
"Response"
],
"comment": [
"Dear Reviewers,\nWe greatly appreciate your helpful and constructive comments on the paper. We have carefully revised the paper to incorporate your comments, adding some new results and polishing the writing for clarification. As a result, we believe that the paper has been substantially improved and strengthened.\nIn the following, we provide our response to your specific points.\nPlease find the updated paper for more details. ",
"Thank you very much for reviewing the paper.\n\n> Particularly. the justification on the retraining phase is weak. \n\nThanks for your note. As stated in the end of Section 3.2, there are two justifications for the retraining phase: First, theoretically (namely with infinite samples), model averaging does not need retraining. However, the actual number of samples used in practice is rather small for computational efficiency. So retraining essentially compensates for the limited size of samples for model averaging. Second, the MAP estimate is more likely than the network obtained just after pruning but before retraining. Retraining increases the posteriori probabilities of the networks in the ensemble and hopefully improves the prediction performance of the networks in the ensemble.\n\n> The second problem is about the baselines in the MNIST experiments. The FNN-300-100 model without dropout, batch-norm, etc. seems unreasonably weak baseline. So, the results on Table 1 on this small network is not much informative practically. \n\nSuch basic setting in the MNIST FNN experiments allows easy reproduction of the results.\nStrong results are reported on the more challenging LSTM LM task.\n\n> Lastly, I also found a significant effort is also desired to improve the writing.\n\nWe polish the paper and especially rewrite those parts after Sections 4.\n\n> The following reference also needs to be discussed in the context of using SG-MCMC in RNN. - \"Scalable Bayesian Learning of Recurrent Neural Networks for Language Modeling\", Zhe Gan*, Chunyuan Li*, Changyou Chen, Yunchen Pu, Qinliang Su, Lawrence Carin\n\nThis work pioneers in applying SG-MCMC to Bayesian learning of RNNs, but without considering model pruning and the cost of model averaging. We have added the discussion in Related Work.",
"Thank you very much for reviewing the paper.\n\n> 1) \nAs stated in the end of Section 3.2, there are two justifications for the retraining phase: First, theoretically (namely with infinite samples), model averaging does not need retraining. However, the actual number of samples used in practice is rather small for computational efficiency. So retraining essentially compensates for the limited size of samples for model averaging. Second, the MAP estimate is more likely than the network obtained just after pruning but before retraining. Retraining increases the posteriori probabilities of the networks in the ensemble and hopefully improves the prediction performance of the networks in the ensemble.\n\nNote that running SGLD enhances exploration of the model-parameter space, and we take thinned collection of samples so that there are low correlations between the samples. So in contrary to converging to the same minimum after retraining, thinned samples from SGLD would lead to neighbors of different local minima and retraining further fine-tune the paramters to take different minima.\n\n> 2) \nThanks for your suggestion, we have changed the notation to dim(\\theta_g).\n\n> 3)\nWe polish the paper and especially rewrite those parts after Sections 4.\n\n> 4)\nThanks for your suggestion. The results of SGD (1 model) + GSP + PR and SGD (ensemble) + GSP + PR have been added to Table 5, with the discussion in the paragraph before the last paragraph in Section 5.2.\nSGD (1 model)+GSP+PR can reduce the model size but the PPL is much worse than the ensemble, which clearly shows the improvement provided by the ensemble. Additionally, we compare SGLD (4 models)+GSP+PR with SGD (4 models)+GSP+PR. The two ensembles achieve close PPLs. However, SGLD ensemble learning reduces about 30% training time.\n\n> 5)\nWe empirically find that 90% is the highest pruning rate without hurting performance for LSTMs.\n\n> 6)\nTypos have been fixed.",
"Thank you very much for reviewing the paper.\n\n> Not clear how this approach would be used with convolutional structures\n\nIt has been shown in [1] that group Lasso regularization is effective for structured sparsity SGD learning for convolutional structures (filters, channels, filter shapes, and layer depth). It is conceivable that group Lasso used with SGLD can work for convolutional structures, by employing proper groupings like those in [1].\n[1] Wen, Wu, Wang, Chen and Li. Learning structured sparsity in deep neural networks, NIPS 2016.\n\n> The model averaging seems to have a smaller contribution. \n\nIt can be seen from Figure 3(a) that as the training proceeds, more models are averaged, which consistently improves the PPLs. Also, the relationship between the performance of an ensemble and the number of models in an ensemble is examined in Figure 3(b), which clearly shows the contribution of model averaging.\n\n> Due to that, it seems that the nature of the contribution needs to be clarified compared to the large literature on sparsifying neural networks, and the introductory comments of the paper should be rewritten to reflect that reality.\n\nLiterature review with regards to NN sparse structure learning and NN compression is rewritten and presented in Related Work.",
"Thanks for your comment.\nAs said in your comment, these previous works apply group Lasso with SGD to learn structurally sparse DNNs. They focus on point estimates and are not in the context of learning ensembles. We have added the discussion in Related Work."
]
} | {
"paperhash": [
"jose|learning_the_number_of_neurons_in_deep_networks",
"appleyard|optimizing_performance_of_recurrent_neural_networks_on_gpus",
"babacan|bayesian_group-sparse_modeling_and_variational_inference",
"balan|bayesian_dark_knowledge",
"chaudhari|entropy-sgd:_biasing_gradient_descent_into_wide_valleys",
"chen|stochastic_gradient_hamiltonian_monte_carlo",
"deng|the_mnist_database_of_handwritten_digit_images_for_machine_learning_research",
"gal|a_theoretically_grounded_application_of_dropout_in_recurrent_neural_networks",
"gan|scalable_bayesian_learning_of_recurrent_neural_networks_for_language_modeling",
"han|deep_compression:_compressing_deep_neural_networks_with_pruning,_trained_quantization_and_huffman_coding",
"han|learning_both_weights_and_connections_for_efficient_neural_network",
"han|ese:_efficient_speech_recognition_engine_with_sparse_lstm_on_fpga",
"|neural_network_ensembles",
"hinton|distilling_the_knowledge_in_a_neural_network",
"hochreiter|long_short-term_memory",
"hu|network_trimming:_a_data-driven_neuron_pruning_approach_towards_efficient_deep_architectures",
"huang|snapshot_ensembles:_train_1,_get_m_for_free",
"inan|tying_word_vectors_and_word_classifiers:_a_loss_framework_for_language_modeling",
"ju|the_relative_performance_of_ensemble_methods_with_deep_convolutional_neural_networks_for_image_classification",
"li|preconditioned_stochastic_gradient_langevin_dynamics_for_deep_neural_networks",
"loshchilov|sgdr:_stochastic_gradient_descent_with_restarts",
"marcus|building_a_large_annotated_corpus_of_english:_the_penn_treebank",
"marlin|group_sparse_priors_for_covariance_estimation",
"merity|regularizing_and_optimizing_lstm_language_models",
"narang|exploring_sparsity_in_recurrent_neural_networks",
"sato|approximation_analysis_of_stochastic_gradient_langevin_dynamics_by_using_fokker-planck_equation_and_ito_process",
"scardapane|group_sparse_regularization_for_deep_neural_networks",
"teh|consistency_and_fluctuations_for_stochastic_gradient_langevin_dynamics",
"welling|bayesian_learning_via_stochastic_gradient_langevin_dynamics",
"wen|learning_structured_sparsity_in_deep_neural_networks",
"wen|learning_intrinsic_sparse_structures_within_long_short-term_memory",
"yang|breaking_the_softmax_bottleneck:_a_high-rank_rnn_language_model",
"yuan|model_selection_and_estimation_in_regression_with_grouped_variables",
"zaremba|recurrent_neural_network_regularization"
],
"title": [
"Learning the number of neurons in deep networks",
"Optimizing performance of recurrent neural networks on gpus",
"Bayesian group-sparse modeling and variational inference",
"Bayesian dark knowledge",
"Entropy-sgd: Biasing gradient descent into wide valleys",
"Stochastic gradient hamiltonian monte carlo",
"The mnist database of handwritten digit images for machine learning research",
"A theoretically grounded application of dropout in recurrent neural networks",
"Scalable bayesian learning of recurrent neural networks for language modeling",
"Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding",
"Learning both weights and connections for efficient neural network",
"Ese: Efficient speech recognition engine with sparse lstm on fpga",
"Neural network ensembles",
"Distilling the knowledge in a neural network",
"Long short-term memory",
"Network trimming: A data-driven neuron pruning approach towards efficient deep architectures",
"Snapshot ensembles: Train 1, get m for free",
"Tying word vectors and word classifiers: A loss framework for language modeling",
"The relative performance of ensemble methods with deep convolutional neural networks for image classification",
"Preconditioned stochastic gradient langevin dynamics for deep neural networks",
"Sgdr: stochastic gradient descent with restarts",
"Building a large annotated corpus of english: The penn treebank",
"Group sparse priors for covariance estimation",
"Regularizing and optimizing lstm language models",
"Exploring sparsity in recurrent neural networks",
"Approximation analysis of stochastic gradient langevin dynamics by using fokker-planck equation and ito process",
"Group sparse regularization for deep neural networks",
"Consistency and fluctuations for stochastic gradient langevin dynamics",
"Bayesian learning via stochastic gradient langevin dynamics",
"Learning structured sparsity in deep neural networks",
"Learning intrinsic sparse structures within long short-term memory",
"Breaking the softmax bottleneck: A high-rank rnn language model",
"Model selection and estimation in regression with grouped variables",
"Recurrent neural network regularization"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"m jose",
"mathieu alvarez",
" salzmann"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jeremy appleyard",
"tomas kocisky",
"phil blunsom"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"shinichi s derin babacan",
"minh n nakajima",
" do"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"anoop korattikara balan",
"vivek rathod",
"kevin p murphy",
"max welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pratik chaudhari",
"anna choromanska",
"stefano soatto",
"yann lecun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tianqi chen",
"emily fox",
"carlos guestrin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"li deng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yarin gal",
"zoubin ghahramani"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"zhe gan",
"chunyuan li",
"changyou chen",
"yunchen pu",
"qinliang su",
"lawrence carin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"song han",
"huizi mao",
"william j dally"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"song han",
"jeff pool",
"john tran",
"william dally"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"song han",
"junlong kang",
"huizi mao",
"yiming hu",
"xin li",
"yubin li",
"dongliang xie",
"hong luo",
"song yao",
"yu wang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"lars ",
"kai hansen",
"peter salamon"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"geoffrey hinton",
"oriol vinyals",
"jeff dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sepp hochreiter",
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"hengyuan hu",
"rui peng",
"yu-wing tai",
"chi-keung tang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"gao huang",
"yixuan li",
"geoff pleiss",
"zhuang liu",
"john e hopcroft",
"kilian q weinberger"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"hakan inan",
"khashayar khosravi",
"richard socher"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"cheng ju",
"aurélien bibaut",
"mark j van der laan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chunyuan li",
"changyou chen",
"david e carlson",
"lawrence carin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ilya loshchilov",
"frank hutter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mary mitchell p marcus",
"ann marcinkiewicz",
"beatrice santorini"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mark benjamin m marlin",
"kevin p schmidt",
" murphy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"stephen merity",
"nitish shirish keskar",
"richard socher"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sharan narang",
"gregory diamos",
"shubho sengupta",
"erich elsen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"issei sato",
"hiroshi nakagawa"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"simone scardapane",
"danilo comminiello",
"amir hussain",
"aurelio uncini"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yee whye teh",
"alexandre h thiery",
"sebastian j vollmer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"max welling",
"yee w teh"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"wei wen",
"chunpeng wu",
"yandan wang",
"yiran chen",
"hai li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"wei wen",
"yuxiong he",
"samyam rajbhandari",
"wenhan wang",
"fang liu",
"bin hu",
"yiran chen",
"hai li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"zhilin yang",
"zihang dai",
"ruslan salakhutdinov",
"william w cohen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ming yuan",
"yi lin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"wojciech zaremba",
"ilya sutskever",
"oriol vinyals"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1611.06321v3",
"1604.01946v1",
"",
"1506.04416v3",
"1611.01838v5",
"1402.4102v2",
"",
"1512.05287v5",
"1611.08034v2",
"1510.00149v5",
"1506.02626v3",
"1612.00694v2",
"",
"1503.02531v1",
"",
"arXiv:1607.03250",
"1704.00109v1",
"1611.01462v3",
"1704.01664v1",
"1512.07666v1",
"arXiv:1608.03983",
"",
"1205.2626v1",
"1708.02182v1",
"1704.05119v2",
"",
"1607.00485v1",
"1409.0578v2",
"",
"1608.03665v4",
"",
"arXiv:1711.03953",
"",
"1409.2329v5"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.481481 | 0.75 | null | null | null | null | null | r1uOhfb0W |
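The row above describes a two-stage recipe: draw weight samples with SG-MCMC under a group sparse prior, then prune each sampled network and retrain it before model averaging. As a purely illustrative aid (not the authors' code), the sketch below runs the same pipeline on a toy linear regression model with NumPy; the group layout, step sizes, pruning threshold, and iteration counts are assumptions chosen only so the example runs in seconds.

```python
# Hypothetical toy sketch of the two-stage SSE recipe on a linear model.
# All hyperparameters here are illustrative assumptions, not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)
n, d, n_groups = 512, 20, 5                     # 5 groups of 4 features each
groups = np.split(np.arange(d), n_groups)
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[groups[0]] = 1.5                         # only one group is informative
y = X @ true_w + 0.1 * rng.normal(size=n)

def grad_log_post(w, Xb, yb, lam=1.0):
    """Minibatch gradient of the log-likelihood (rescaled to full data) plus a group-lasso prior."""
    g = (n / len(yb)) * Xb.T @ (yb - Xb @ w)    # Gaussian likelihood term
    for idx in groups:                          # group-sparse prior term
        g[idx] -= lam * w[idx] / (np.linalg.norm(w[idx]) + 1e-12)
    return g

# Stage 1: SGLD sampling; keep thinned samples as the raw ensemble.
w, eps, samples = np.zeros(d), 1e-4, []
for t in range(3000):
    b = rng.choice(n, size=64, replace=False)
    w = w + 0.5 * eps * grad_log_post(w, X[b], y[b]) + np.sqrt(eps) * rng.normal(size=d)
    if t > 1000 and t % 500 == 0:
        samples.append(w.copy())

# Stage 2: prune groups with small norm, then retrain surviving weights with plain SGD.
ensemble = []
for ws in samples:
    mask = np.zeros(d, dtype=bool)
    for idx in groups:
        mask[idx] = np.linalg.norm(ws[idx]) > 0.3   # illustrative pruning threshold
    wp = ws * mask
    for _ in range(500):
        b = rng.choice(n, size=64, replace=False)
        wp[mask] += 1e-4 * X[b][:, mask].T @ (y[b] - X[b] @ wp)
    ensemble.append(wp)

pred = np.mean([X @ wp for wp in ensemble], axis=0)  # model averaging over the SSE
print("train MSE of averaged ensemble:", np.mean((pred - y) ** 2))
```

The pruning rates discussed in this row (around 90% for the LSTM models) refer to the paper's networks; the 0.3 threshold above is unrelated and only keeps the toy example simple.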
||
probst|the_set_autoencoder_unsupervised_representation_learning_for_sets|ICLR_cc_2018_Conference | The Set Autoencoder: Unsupervised Representation Learning for Sets | We propose the set autoencoder, a model for unsupervised representation learning for sets of elements. It is closely related to sequence-to-sequence models, which learn fixed-sized latent representations for sequences, and have been applied to a number of challenging supervised sequence tasks such as machine translation, as well as unsupervised representation learning for sequences.
In contrast to sequences, sets are permutation invariant. The proposed set autoencoder considers this fact, both with respect to the input as well as the output of the model. On the input side, we adapt a recently-introduced recurrent neural architecture using a content-based attention mechanism. On the output side, we use a stable marriage algorithm to align predictions to labels in the learning phase.
We train the model on synthetic data sets of point clouds and show that the learned representations change smoothly with translations in the inputs, preserve distances in the inputs, and that the set size is represented directly. We apply the model to supervised tasks on the point clouds using the fixed-size latent representation. For a number of difficult classification problems, the results are better than those of a model that does not consider the permutation invariance. Especially for small training sets, the set-aware model benefits from unsupervised pretraining. | {
"name": [],
"affiliation": []
} | We propose the set autoencoder, a model for unsupervised representation learning for sets of elements. | [
"set",
"unsupervised learning",
"representation learning"
] | null | 2018-02-15 22:29:47 | 24 | null | null | null | null | null | null | null | null | false | The paper proposes an autoencoder for sets, an interesting and timely problem. The encoder here is based on prior related work (Vinyals et al. 2016) while the decoder uses a loss based on finding a matching between the input and output set elements. Experiments on multiple data sets are given, but none are realistic. The reviewers have also pointed out a number of experimental comparisons that would improve the contribution of the paper, such as considering multiple matching algorithms and more baselines. In the end the idea is reasonable and results are encouraging, but too preliminary at this point. | {
"review_id": [
"rk7TfpBlG",
"Hk-Qowclf",
"B1EnXjFxG"
],
"review": [
{
"title": "title: review",
"paper_summary": null,
"main_review": "main_review: This paper mostly extends Vinyals et al, 2015 paper (\"Order Matters\") on how to represent sets as input and/or output of a deep architecture.\n\nAs far as I understood, the set encoder is the same as the one in \"Order Matters\". If not, it would be useful to underline the differences.\n\nThe decoder, on the other hand, is different and relies on a loss that is based on an heuristic to find the current best order (based on an ordering, or mapping W, found using the Gale-Shapely algorithm). Does this mean that Algorithm 1 needs to be run for every training (and test) example? if so, it is important to note what is the effective complexity of running it?\n\nThe experimental section is interesting, but in the end a bit disappointing: although a new artificial dataset is proposed to evaluate sets, it is unclear how different are the findings from those in the \"Order Matters\" paper:\n- the first set of results (in Section 4.1) confirms that the set encoder is important (which was also in the other paper I believe)\n- the second set of results (Section 4.2) shows that in some cases, an auto-encoder is also useful: this is mostly the case when the supervised data is small compared to the availability of a much larger unsupervised data (of sets). This is interesting (and novel compared to the \"Order Matters\" paper) but corresponds to known findings from most previous work on semi-supervised learning: pre-training is only useful when only a very small supervised data exists, and quickly becomes irrelevant. This is not specific to sets.\n\nFinally, It would have been very interesting to see experiments on real data concerned with sets.\n\n------------------\nI have read the respond to the reviewers but haven't seen any reason to\nchange my score. In particular, the authors have not answered my questions\nabout differences with the prior art, and have not provided results on\nreal data.\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: An interesting framework but more real-world experiments needed",
"paper_summary": null,
"main_review": "main_review: Summary:\n\nThis paper proposes an encoder-decoder framework for learning latent representations of sets of elements. The model utilizes the neural attention mechanism for set inputs proposed in (Vinyals et al., ICLR 2016) to encode a set into a fixed-length latent representation, and then employs an LSTM decoder to reconstruct the original set of elements, in which a stable matching algorithm is used to match decoder outputs to input elements. Experimental results on synthetic datasets show that the model learns meaningful representations and effectively handles permutation invariance.\n\nMajor Concerns:\n\n1. Although the employed Gale-Shapely algorithm facilitates permutation-invariant set reconstruction, it has O(n^2) computational complexity during each back-propagation iteration, which might prevent it from scaling to sets of fairly big sizes. \n\n2. The experiments are only evaluated on synthetic datasets, and applications of the set autoencoder to real-world applications or scientific problems will make this work more interesting and significant.\n\n3. The main contribution of this work is the adoption of the stable matching algorithm in the decoder. A strong set autoencoder baseline will be, the encoder employs the neural attention mechanism proposed in (Vinyals et al., ICLR 2016), but the decoder just uses a standard LSTM as in a seq2seq framework. Comparisons to this baseline will reveal the contribution of the stable matching procedure in the whole framework of the set autoencoder for learning representations. \n\nMinor issues:\n\nOn page 5, above Section 4, d_j -> o_j ?\n\nthe footnote on page 5: we not consider -> we do not consider?\n\non page 6 and 7, 6.000, 1.000 and 10.000 training examples -> 6000, 1000 and 10,000 training examples",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Good preliminary results",
"paper_summary": null,
"main_review": "main_review: Summary\nThis paper proposes an autoencoder for sets. An input set is encoded into a\nfixed-length representation using an attention mechanism (previously proposed by\n[1]). The decoder generates the output sequentially and the generated sequence\nis matched to the best-matching ordering of the target output set.\nExperiments are done on synthetic datasets to demonstrate properties of the\nlearned representation.\n\nPros\n- Experiments show that the autoencoder helps improve classification accuracy\n for small training set sizes on the shape classification task.\n- The analysis of how the decoder generates data is insightful.\n\nCons\n- The experiments are on toy datasets only. Given the availability of point\n cloud data sets, for example, KITTI which has a widely used benchmark for\npoint cloud based object detection, it would make the paper stronger if this\nmodel was benchmarked against published baselines.\n\n- The autoencoder does not seem to help much on the regression tasks where even\n for the smaller training set size setting, directly using the encoder to solve\nthe task often works best. Even finetuning is unable to recover from the\npretrained weights. Therefore, it seems that the decoder (which is the novel\naspect of this work) is perhaps not working well, or is not well suited to the\nregression tasks being considered.\n\n- The classification task, for which the learned representations work well\n empirically, seems to be geared towards representing object shape. It doesn't\nreally require remembering each point. On the other hand, the regression tasks\nthat could require remembering the points don't seem to be benefit much from the\nautoencoder pretraining. This suggests that while the model is able to represent\noverall shape, it has a hard time remembering individual elements of the set.\nThis seems like a drawback, since a general \"set auto-encoder\" should be able\nto perform a wide variety of tasks on the input set which could require remembering\nthe set's elements.\n\nQuality\nThis paper describes the proposed model quite well and provides encouraging\npreliminary results.\n\nClarity\nThe paper is easy to understand.\n\nOriginality\nThe novelty in the model is using a matching algorithm to find the best ordering\nof the target output set to match with the sequentially generated decoder\noutput. However, the paper makes a choice of one ranking based matching scheme\nand does not compare to other alternatives.\n\nSignificance\nThis paper proposes a way of learning representations of sets which will be of\nbroad interest across the machine learning community. These models are likely to\nbecome more relevant with increasing prevelance of point cloud data.\n\nReferences\n[1] Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. Order matters: Sequence to\nsequence for sets. arXiv preprint arXiv:1511.06391.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.4444444477558136,
0.4444444477558136
],
"confidence": [
1,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"prior art",
"Authors' comments to reviews"
],
"comment": [
"Thank you for your comment. \n\nRegarding your question about prior art: Yes, the encoder is conceptually identical to the one proposed in the \"Oder Matters\" paper. We wrote \"similar to\" in the initial version since the some of the architectural details are not completely disclosed in the \"Order Matters\" paper (e.g. the \"small neural network\" for f^inp, which probably uses non-linearities, whereas our f^inp is linear). But structurally, the encoder is identical.",
"We thank the reviewers for the insightful and encouraging remarks. We comment on a number of these remarks below, and have updated some of the corresponding points in the paper.\n\n== Major concerns of one or multiple reviewers ==\n\n* O(n^2) complexity of Gale-Shapely.\nIt is true that this complexity could, in practice, restrict the applicability of the proposed algorithm to smaller sets. However, there is a range of problems where small set sizes are relevant, e.g. when an agent interacts with an environment where one or multiple instances of an object can be present (as opposed to point cloud representations of objects)\n\nWe have included the above remark in the paper.\n\n* Synthetic data set vs. real-world data set\nWe completely agree that the paper will be much stronger once we include results on a real-world data set. However, in the limited time available, we were not able to do so just yet.\n\n* Proposal by AnonReviewer3: use the same encoder, but a plain LSTM decoder as benchmark (to show whether the Gale-Shapely-augmented decoder works better).\n(i.e., use the first $n$ outputs $o_i,i\\in{1,\\dots,n}$ directly)\n\nThis is an interesting idea, that we will have to try out. However, the current assumption is that its behavior will probably be worse: Unlike the Seq-AE, it will not be able to store ordering information in the permutation-invariant embedding, but penalize misaligned points in the output heavily.\n\n* Remarks about applicability to different problem types\n\nWe agree with the reviewers' comments about the applicability of the model (and its limitations). The purpose of using a range of problems with different properties was precisely to test this. We currently think that future work could either try to make the model more generally applicable to multiple of these problem classes, or specialize it for a specific type of problem (however, we think that it would be beyond the scope of this paper, especially when taking the page limit into account).\n\n== Minor issues raised by one or multiple reviewers ==\n* We fixed multiple smaller issues (typos/formatting) in the latest version.\n \n "
]
} | {
"paperhash": [
"goodfellow|deep_learning",
"sutskever|sequence_to_sequence_learning_with_neural_networks",
"abadi|matthieu_devin,_and_others._tensorflow_-large-scale_machine_learning_on_heterogeneous_distributed_systems",
"angel|an_information-rich_3d_model_repository",
"kingma|adam_-a_method_for_stochastic_optimization",
"ravanbakhsh|deep_learning_with_sets_and_point_clouds",
"vinyals|order_matters:_sequence_to_sequence_for_sets",
"vinyals|pointer_networks",
"vinyals|show_and_tell:_a_neural_image_caption_generator"
],
"title": [
"Deep Learning",
"Sequence to Sequence Learning with Neural Networks",
"",
"",
"Published as a conference paper at ICLR 2015 ADAM: A METHOD FOR STOCHASTIC OPTIMIZATION",
"DEEP LEARNING WITH SETS AND POINT CLOUDS",
"ORDER MATTERS: SEQUENCE TO SEQUENCE FOR SETS",
"Oriol Vinyals * Google Brain",
"Show and Tell: A Neural Image Caption Generator"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"nicholas g polson",
"vadim o sokolov"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Chicago",
"location": "{}"
},
{
"laboratory": "",
"institution": "George Mason University",
"location": "{}"
}
]
},
{
"name": [
"ilya sutskever",
"oriol vinyals",
"quoc v le"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"martín abadi",
"ashish agarwal",
"paul barham",
"eugene brevdo",
"zhifeng chen",
"craig citro",
"greg s corrado",
"andy davis",
"jeffrey dean",
"matthieu devin",
"sanjay ghemawat",
"ian goodfellow",
"andrew harp",
"geoffrey irving",
"michael isard",
"yangqing jia",
"rafal jozefowicz",
"lukasz kaiser",
"manjunath kudlur",
"josh levenberg",
"dan mané",
"rajat monga",
"sherry moore",
"derek murray",
"chris olah",
"mike schuster",
"jonathon shlens",
"benoit steiner",
"ilya sutskever",
"kunal talwar",
"paul tucker",
"vincent vanhoucke",
"vijay vasudevan",
"fernanda viégas",
"oriol vinyals",
"pete warden",
"martin wattenberg",
"martin wicke",
"yuan yu",
"xiaoqiang zheng",
"google research"
],
"affiliation": [
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
},
{
"laboratory": "Large-Scale Machine Learning on Heterogeneous Distributed Systems",
"institution": "",
"location": "{}"
}
]
},
{
"name": [
"angel x chang",
"thomas funkhouser",
"leonidas guibas",
"pat hanrahan",
"qixing huang",
"zimo li",
"silvio savarese",
"manolis savva",
"shuran song",
"hao su",
"jianxiong xiao",
"yi li",
"fisher yu"
],
"affiliation": [
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Princeton University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Toyota Technological Institute at Chicago",
"location": "{}"
},
{
"laboratory": "",
"institution": "Toyota Technological Institute at Chicago",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Princeton University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Princeton University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Princeton University",
"location": "{}"
}
]
},
{
"name": [
"diederik p kingma",
"jimmy lei ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"siamak ravanbakhsh",
"jeff schneider",
"barnabás póczos"
],
"affiliation": [
{
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh",
"location": "{'postCode': '15213', 'region': 'PA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh",
"location": "{'postCode': '15213', 'region': 'PA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh",
"location": "{'postCode': '15213', 'region': 'PA', 'country': 'USA'}"
}
]
},
{
"name": [
"oriol vinyals",
"samy bengio",
"manjunath kudlur"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"meire fortunato",
"navdeep jaitly",
"google brain"
],
"affiliation": [
{
"laboratory": "",
"institution": "UC Berkeley",
"location": "{}"
},
{
"laboratory": "",
"institution": "UC Berkeley",
"location": "{}"
},
{
"laboratory": "",
"institution": "UC Berkeley",
"location": "{}"
}
]
},
{
"name": [
"vinyals oriol",
" google",
"alexander toshev google",
"bengio samy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.407407 | 0.833333 | null | null | null | null | null | r1tJKuyRZ |
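The set autoencoder row above aligns the decoder's output sequence to the unordered target set with a stable marriage procedure (Gale-Shapley, as named in the reviews) before computing the reconstruction loss. The sketch below is a minimal, hypothetical NumPy illustration of that alignment step on 2-D points; the function name and the use of Euclidean distance as the preference criterion are assumptions for demonstration, not the authors' implementation.

```python
# Hypothetical sketch: Gale-Shapley matching of predicted points to target points,
# followed by an order-invariant reconstruction loss. Not the paper's code.
import numpy as np

def stable_match(pred, target):
    """Return the target index matched to each prediction (both arrays are (n, d))."""
    n = len(pred)
    dist = np.linalg.norm(pred[:, None, :] - target[None, :, :], axis=-1)
    pref = np.argsort(dist, axis=1)              # pref[p]: targets ranked by prediction p
    order = np.argsort(dist, axis=0)             # order[:, t]: predictions ranked by target t
    target_rank = np.empty((n, n), dtype=int)    # target_rank[t, p]: rank of p in t's list
    for t in range(n):
        target_rank[t, order[:, t]] = np.arange(n)
    next_prop = np.zeros(n, dtype=int)           # next target each free prediction proposes to
    holds = -np.ones(n, dtype=int)               # holds[t]: prediction currently kept by t
    free = list(range(n))
    while free:                                  # predictions propose, targets keep the best
        p = free.pop()
        t = pref[p, next_prop[p]]
        next_prop[p] += 1
        q = holds[t]
        if q == -1:
            holds[t] = p
        elif target_rank[t, p] < target_rank[t, q]:
            holds[t] = p
            free.append(q)
        else:
            free.append(p)
    match = np.empty(n, dtype=int)
    match[holds] = np.arange(n)                  # match[p]: target assigned to prediction p
    return match

rng = np.random.default_rng(1)
target = rng.normal(size=(6, 2))                              # unordered set of six points
pred = target[rng.permutation(6)] + 0.05 * rng.normal(size=(6, 2))
m = stable_match(pred, target)
loss = np.mean((pred - target[m]) ** 2)                       # invariant to output ordering
print("matched reconstruction MSE:", float(loss))
```

Because every prediction can propose to each of the n targets at most once, the matching costs O(n^2) per example, which is the scalability concern raised in the reviews for larger sets.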
||
baykal|small_coresets_to_represent_large_training_data_for_support_vector_machines|ICLR_cc_2018_Conference | Small Coresets to Represent Large Training Data for Support Vector Machines | Support Vector Machines (SVMs) are one of the most popular algorithms for classification and regression analysis. Despite their popularity, even efficient implementations have proven to be computationally expensive to train at a large-scale, especially in streaming settings. In this paper, we propose a novel coreset construction algorithm for efficiently generating compact representations of massive data sets to speed up SVM training. A coreset is a weighted subset of the original data points such that SVMs trained on the coreset are provably competitive with those trained on the original (massive) data set. We provide both lower and upper bounds on the number of samples required to obtain accurate approximations to the SVM problem as a function of the complexity of the input data. Our analysis also establishes sufficient conditions on the existence of sufficiently compact and representative coresets for the SVM problem. We empirically evaluate the practical effectiveness of our algorithm against synthetic and real-world data sets. | {
"name": [],
"affiliation": []
} | We present an algorithm for speeding up SVM training on massive data sets by constructing compact representations that provide efficient and provably approximate inference. | [
"coresets",
"data compression"
] | null | 2018-02-15 22:29:23 | 25 | null | null | null | null | null | null | null | false | While the paper shows some encouraging results for scaling up SVMs using coreset methods, it has fallen short of making a fully convincing case, particularly given the amount of intense interest in this topic back in the heyday of kernel methods. When it comes to scalability, it has become the norm to benchmark results on far larger datasets using parallelism and specialized hardware in conjunction with algorithmic speedups (e.g., random feature methods, low-rank approximations such as Nystrom, and other approaches). As such, the paper is unlikely to generate much interest in the ICLR community in its current form. | {
"review_id": [
"rkydZLAef",
"Byu50uOxf",
"BJu6VdYlf"
],
"review": [
{
"title": "title: coreset for svm. ",
"paper_summary": null,
"main_review": "main_review: The paper studies the problem of constructing small coreset for SVM.\nA coreset is a small subset of (weighted) points such that the optimal solution for the coreset is also a good approximation for the original point set. The notion of coreset was originally formulated in computational geometry by Agarwal et al.\n(see e.g., [A])\nRecently it has been extended to several clustering problems, linear algebra, and machine learning problems. This paper follows the important sampling approach first proposed in [B], and generalized by Feldman and Langberg. The key in this approach is to compute the sensitivity of points and bound the total sensitivity for the considered problem (this is also true for the present paper). For SVM, the paper presents a bad instance where the total sensitivity can be as bad as 2^d. Nevertheless,\nthe paper presents interesting upper bounds that depending on the optimal value and variance of the point set. The paper argues that in many data sets, the total sensitivity may be small, yielding small coreset. This makes sense and may have significant practical implications.\n\nHowever, I have the following reservation for the paper.\n(1) I don't quite understand the CHICKEN and EGG section. Indeed, it is unclear to me \nhow to estimate the optimal value. The whole paragraph is hand-waving. What is exactly merge-and-reduce? From the proof of theorem 9, it appears that the interior point algorithm is run on the entire dataset, with running time O(n^3d). Then there is no point to compute a coreset as the optimal solution is already known.\n\n(2) The running time of the algorithm is not attractive (in both theory and practice).\nIn fact, the experimental result on the running time is a bit weak. It seems that the algorithm is pretty slow (last in Figure 1). \n\n(3) The theoretical novelty is limited. The paper follows from now-standard technique for constructing coreset.\n\nOveral, I don't recommend acceptance.\n\nminor points:\nIt makes sense to cite the following papers where original ideas on constructing coresets were proposed initially.\n\n[A]Geometric Approximation via Coresets\nPankaj K. Agarwal Sariel Har-Peled Kasturi R. Varadarajan\n\n[B]Universal epsilon-approximators for integrals, by Langberg and Schulman\n\n---------------------------------------------------------\n\nAfter reading the response and the revised text, I understand the chicken-and-egg issue.\nI think the experimental section is still a bit weak (given that there are several very competitive SVM algorithms that the paper didn't compare with).\nI raised my score to 5. \n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Review",
"paper_summary": null,
"main_review": "main_review: This paper studies the approach of coreset for SVM. In particular, it aims at sampling a small set of weighted points such that the loss function over these points provably approximates that over the whole dataset. This is done by applying an existing theoretical framework to the SVM training objective.\n\nThe coreset idea has been applied to SVM in existing work, but this paper uses a new theoretical framework. It also provides lower bound on the sample complexity of the framework for general instances and provides upper bound that is data dependent, shedding light on what kind of data this method is suitable for. \n\nThe main concern I have is about the novelty of the coreset idea applied to SVM. Also, there are some minor issues:\n-- Section 4.2: What's the point of building the coreset if you've got the optimal solution? Indeed one can do divide-and-conquer. But can one begin with an approximation solution? In general, the analysis of the coreset should still hold if one begins with an approximation solution. Also, even when doing divide-and-conquer, the solution obtained in the first line of the algorithm should still be approximate. The authors pointed out that Lemma 7 can be extended to this case, and I hope the proof can be written out explicitly.\n-- section 2, paragraph 4: why SGD-based approaches cannot be trivially extended to streaming settings? \n-- Definition 3: what randomness is the probability with respect to? \n-- For experiments: the comparison with CVM should be added.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Small Coresets to Represent Large Training Data for Support Vector Machines",
"paper_summary": null,
"main_review": "main_review: The paper suggests an importance sampling based Coreset construction for Support Vector Machines (SVM). To understand the results, we need to understand Coreset and importance sampling: \n\nCoreset: In the context of SVMs, a Coreset is a (weighted) subset of given dataset such that for any linear separator, the cost of the separator with respect to the given dataset X is approximately (there is an error parameter \\eps) the same as the cost with respect to the weighted subset. The main idea is that if one can find a small coreset, then finding the optimal separator (maximum margin etc.) over the coreset might be sufficient. Since the computation is done over a small subset of points, one hopes to gain in terms of the running time.\n\nImportance sampling: This is based on the theory developed in Feldman and Langberg, 2011 (and some of the previous works such as Langberg and Schulman 2010, the reference of which is missing). The idea is to define a quantity called sensitivity of a data-point that captures how important this datapoint is with respect to contributing to the cost function. Then a subset of datapoint are sampled based on the sensitivity and the sampled data point is given weight proportional to inverse of the sampling probability. As per the theory developed in these past works, sampling a subset of size proportional to the sum of sensitivities gives a coreset for the given problem.\n\nSo, the main contribution of the paper is to do all the sensitivity calculations with respect to SVM problem and then use the importance sampling theory to obtain bounds on the coreset size. One interesting point of this construction is that Coreset construction involves solving the SVM problem on the given dataset which may seem like beating the purpose. However, the authors note that one only needs to compute the Coreset of small batches of the given dataset and then use standard procedures (available in streaming literature) to combine the Coresets into a single Coreset. This should give significant running time benefits. The paper also compares the results against the simple procedure where a small uniform sample from the dataset is used for computation. \n\n\nEvaluation: \nSignificance: Coresets give significant running time benefits when working with very big datasets. Coreset construction in the context of SVMs is a relevant problem and should be considered significant.\n\nClarity: The paper is reasonably well-written. The problem has been well motivated and all the relevant issues point out for the reader. The theoretical results are clearly stated as lemmas a theorems that one can follow without looking at proofs. \n\nOriginality: The paper uses previously developed theory of importance sampling. However, the sensitivity calculations in the SVM context is new as per my knowledge. It is nice to know the bounds given in the paper and to understand the theoretical conditions under which we can obtain running time benefits using corsets. \n\nQuality: The paper gives nice theoretical bounds in the context of SVMs. One aspect in which the paper is lacking is the empirical analysis. The paper compares the Coreset construction with simple uniform sampling. Since Coreset construction is being sold as a fast alternative to previous methods for training SVMs, it would have been nice to see the running time and cost comparison with other training methods that have been discussed in section 2.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.4444444477558136,
0.6666666865348816
],
"confidence": [
0.75,
0.5,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to AnonReviewer1",
"General Response",
"Response to AnonReviewer3",
"Response to AnonReviewer2",
"borderline",
"Response to AnonReviewer1",
"Updated Experimental Results",
"Improved Results Section"
],
"comment": [
"Thank you for your insightful feedback and references to prior work that originally proposed the notion of constructing coresets. We added these references to the revised version of our paper. More specific responses below:\n\n1) The chicken and the egg phenomena commonly arises in coresets-related work, where the optimal or approximately-optimal solution is used to compute approximations of the sensitivity of each point. In our case, we compute the optimal solution to the problem using the Interior Point Method, but as mentioned in Sec. 4.3, an approximately optimal solution can be computed using Pegasos (Shalev-Shwatz et al., 2011). The merge-and-reduce procedure (explained below and cited in our original submission) ensures that our algorithm is never run against the entire data set, but rather small partitions of the data set. This implies that by repeatedly running our algorithm on a partition of the data set, consisting of approximately logarithmic number of points, and merging the resulting coresets yield a coreset for the entire data set. In other words, one needs to run the coreset procedure on only a small subset (or batch) of the input points and then use the standard merge-and-reduce procedure to combine the resulting coresets to form a coreset for the entire data set.\n\nThe merge-and-reduce procedure is a traditional technique in coreset-construction dating back to the work of Har-Peled and Mazumdar (2004) (for a recent exposition of this technique, see: Braverman et al. 2016, as we cited in Sec. 4.2 in our submission) that exploits the fact that coresets are composable and reducible, as explained in our general response above. Moreover, the merged coreset can be further reduced by noting that an epsilon-coreset of a say, delta-coreset, is an (epsilon + delta)-coreset. Both the chicken and the egg phenomena and merge-and-reduce techniques are covered in detail in the related work we cited in the section (Braverman et al. 2016).\n\nIn light of your feedback, we have modified the text to clarify the exposition of the chicken and the egg phenomena and the merge and reduce technique.\n\n2) Thank you for bringing up this ambiguity in the reported runtime. We have should have highlighted that the running time of the algorithm is approximately linear if the merge-and-reduce procedure above is used and the sufficient conditions on the sum of sensitivities for the existence of small coresets mentioned in the analysis section hold. We have modified our paper accordingly and these changes are reflected in the revised version, namely in Sec. 4.2.\n\nWe want to emphasize that our algorithm introduces a novel way to accelerate SVM training in streaming settings, where traditional SGD-based approaches to approximately-optimal SVM training (e.g., Pegasos) currently cannot handle. Therefore, comparing the *offline* performance of our algorithm (designed to operate in streaming settings) to SGD-based approaches (which cannot operate in streaming settings) may not be the most appropriate comparison. \n\n3) We agree and mention in our original submission that our work builds on the framework for coreset construction introduced by Langberg et al. (2010) and generalized by Feldman et al., (2011). However, as these authors also note, the main challenge in using the coresets framework lies in establishing accurate upper bounds on the sensitivity of each point using analytical and algorithmic techniques. 
In fact, the novelty in most of the recently published coreset papers lies in the introduction of novel upper bounds on the sensitivity (typically, via bicriteria approximations). In our paper, we not only provide accurate, data-dependent upper bounds on the sensitivity of each point, but also establish lower bounds on the sensitivity, which enables us to classify the set of problem instances for which our algorithm is most suited. \n",
"We thank all the reviewers for their useful suggestions and careful consideration of our paper. Your feedback has raised several points we need to clarify prior to providing detailed answers. We understand that there is a range of expertise in this community and will improve our exposition to make sure the paper is broadly accessible to the ICLR community. \n\n\tOur submission proposes a coreset-based approach to speeding up SVM training by constructing compact representations of massive data sets. The key idea is that an SVM trained on coresets, i.e., weighted subsets of the original input points, generated by our algorithm is provably competitive with the SVM trained on the full data set. In contrast to SGD-based approaches, e.g., Pegasos, our approach extends to streaming data cases, where the input data set is so large that it may not be possible to store or process all the data at one time, as is common with Big Data applications and for dynamic datasets where samples are inserted/deleted. This new computational model for SVM is enabled by combining our coreset construction algorithm with the merge-and-reduce technique. The merge-and-reduce technique is over a decade old and is now a standard technique. We included references in the paper revision.\n\n\tOur algorithm requires knowledge of the optimal SVM solution in order to generate the coreset. This seemingly paradoxical construction is known as the chicken and the egg phenomenon, which commonly arises in coresets literature, and is resolved by the fact that the original algorithm is *not* intended to be run against the full data set, but rather small partitions of the data set. The merge-and-reduce procedure is a traditional technique in coreset construction dating back to the work of Har-Peled and Mazumdar (2004) (for a recent description of this technique, see: Braverman et al., (2016) as we cited in Sec. 4.2 in our submission) that exploits the fact that coresets are *composable*, i.e., if S_1 is an epsilon-coreset for data set P_1 and S_2 is an epsilon-coreset for data set P_2, then the union S_1 \\cup S_2 is an epsilon-coreset for P_1 \\cup P_2, and *reducible*, i.e., a delta-coreset of an epsilon-coreset is a ((1 + epsilon)*(1 + delta) - 1)-coreset.\n\n\tThus, if the merge-and-reduce technique is used, our algorithm is only run on small subsets of the original data set. The results are then appropriately merged together, which implies that despite the super-linear runtime required to compute the optimal solution, the overall runtime is polylog(n) * d^3 * n, ignoring epsilon-error and delta (probability of failure) factors (details can be found in Sec. 4.2 of our revision). In our original submission, we show how this construction can be further sped up by using an efficient method to obtain a coarse, but near optimal solution, e.g., via Pegasos. We have included an additional lemma in the manuscript to clarify this point and to extend our prior analysis to this case, as requested by AnonReviewer3. We have also added details to sections 4.2 and 4.3 to further clarify the chicken and the egg phenomenon and the merge and reduce technique. ",
"Thank you for your comments and feedback. Please find below our item-specific responses. \n\n1) As we also highlighted in our general response and response to AnonReviewer1, our coreset construction method is intended to be used in conjunction with the traditional merge-and-reduce technique, which ensures that our coreset construction algorithm is never run against the full data set. Rather, our coreset construction algorithm takes as input partitions of the original data set (where each set in the partition is of sufficiently small size, see Sec. 4.2). We have also included an extension of Lemma 7 to the case where only an approximately-optimal solution is available (see Lemma 11 in our revision).\n\n2) Gradient-based methods cannot be trivially extended to settings where the data points arrive in a streaming fashion, since seeing a new point results in a complete change of the gradient.\n\n3) Thank you for pointing out this ambiguity. The randomness is with respect to the sampling scheme used by our algorithm to construct the coreset, but we realize in retrospect that this is confusing since there exists deterministic coreset-construction algorithms. We have modified our paper to clarify the definition of a coreset and the (probabilistic) guarantees provided by our algorithm.\n\n4) As we mentioned to AnonReviewer2, we are currently in the process of running additional experiments that evaluate the performance of our algorithm against other algorithms, such as CVM as you mentioned. Our plan is to include the results of these experiments in a later revision to be uploaded before Dec. 20.",
"Thank you for your in-depth feedback and consideration of our paper. We included the reference to the original Langberg and Schulman (2010) paper that introduced the concept of sensitivity. We are currently in the process of running additional experiments that evaluate the performance of our algorithm against larger data-sets and compare it to more of the other approaches mentioned in Sec. 2. We plan to finalize these experiments and include the results in our revised version before Dec. 20.",
"I checked the new experimental results. \n\nThe new sampling method provides moderate improvement over the naive uniform sampling (in many cases). \nThe running time part is not so convincing, as in many cases, it is significantly slower than other methods.\nAlso, some text explaining those figures should be helpful.\n\nwhy the last figure in Figure 6 only has 2 curves?\n\nThe new results are certainly helpful. \nBut in my opinion, the paper may not be a clear accept of ICLR.",
"Thank you for the additional consideration.\n\n1) Regarding the *offline* running time of our algorithm, we include below the response that we had posted earlier regarding the runtime comparisons. In short, our algorithm, unlike prior approaches, can be applied to streaming settings where it may not be possible to store or process all the data at one time, as is common with Big Data applications and for dynamic datasets where samples are inserted/deleted.\n\nPrior response regarding runtime:\n---\nWe want to emphasize that our algorithm introduces a novel way to accelerate SVM training in streaming settings, where traditional SGD-based approaches to approximately-optimal SVM training (e.g., Pegasos) currently cannot handle. Therefore, comparing the *offline* performance of our algorithm (designed to operate in streaming settings) to SGD-based approaches (which cannot operate in streaming settings) may not be the most appropriate comparison. \n---\n\n2) The last graph in Fig. 6 actually contains all of the 4 curves, where uniform sampling, our coreset, and All Data essentially overlap (at either 0 or very close to 0 relative error). This is due to CVM's poor performance on this particular data set combined with the good performance (i.e., relative error very close to 0) of both uniform sampling and our coreset. We will recreate the figure to reflect this overlap of curves more clearly and will also add further explanatory text as requested.\n\n",
"Thank you again for your consideration. We have updated our submission with a revised manuscript that includes additional comparisons in the streaming setting and evaluations against competitive algorithms, including Pegasos and Core Vector Machines (CVMs). Please feel free to refer to our latest general response and revision for additional details.",
"We wanted to update the reviewers and readers that our latest revision contains additional experimental results that evaluate and compare the performance of our algorithm with that of state-of-the-art. In particular, our latest revision contains the following additional experimental results:\n\n1) Comparisons with Pegasos (Fig. 2)\n2) Comparisons with uniform subsampling in the streaming setting where the data points arrive one-by-one (Fig. 5)\n3) Comparisons with Core Vector Machines (CVMs) (Fig. 6).\n\nDue to space constraints and our consideration that our theoretical results may have been of higher interest to the community, we were not able to fit all of these additional results in our original submission. However, in the case that our paper is accepted, we will certainly investigate ways to include these additional results in the final version of our paper."
]
} | {
"paperhash": [
"bachem|practical_coreset_constructions_for_machine_learning",
"braverman|new_frameworks_for_offline_and_streaming_coreset_constructions",
"feldman|a_unified_framework_for_approximating_and_clustering_data",
"huggins|coresets_for_scalable_bayesian_logistic_regression"
],
"title": [
"Practical Coreset Constructions for Machine Learning",
"Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation",
"A Unified Framework for Approximating and Clustering Data",
"CORESETS FOR SCALABLE BAYESIAN LOGISTIC REGRESSION"
],
"abstract": [
"",
"",
"",
""
],
"authors": [
{
"name": [
"olivier bachem",
"mario lucic",
"andreas krause"
],
"affiliation": [
{
"laboratory": "",
"institution": "ETH Zurich",
"location": "{}"
},
{
"laboratory": "",
"institution": "ETH Zurich",
"location": "{}"
},
{
"laboratory": "",
"institution": "ETH Zurich",
"location": "{}"
}
]
},
{
"name": [
"remi denton",
"wojciech zaremba",
"joan bruna",
"yann lecun",
"rob fergus"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
}
]
},
{
"name": [
"d feldman",
"m langberg"
],
"affiliation": [
{
"laboratory": "",
"institution": "California Institute of Technology",
"location": "{'postCode': '91125', 'settlement': 'Pasadena', 'region': 'CA'}"
},
{
"laboratory": "",
"institution": "Open University of Israel",
"location": "{'addrLine': '108 Ravutski St', 'postCode': '43107', 'settlement': 'Raanana', 'country': 'Israel'}"
}
]
},
{
"name": [
"jonathan h huggins",
"trevor campbell",
"tamara broderick"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null
]
} | null | 84 | null | 0.518519 | 0.583333 | null | null | null | null | null | r1saNM-RW |
||
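[Editor's note] The reviews and author responses in the coreset-for-SVM record above describe the sensitivity-based importance-sampling framework (in the style of Langberg-Schulman and Feldman-Langberg) and the merge-and-reduce technique only in prose. The sketch below is an editor-added illustration of that generic recipe under stated assumptions, not the paper's algorithm: the function names (`coreset`, `merge_and_reduce`) are hypothetical, and the uniform sensitivity bounds are placeholders, since the paper's SVM-specific, data-dependent sensitivity bounds are not reproduced in this record.

```python
import numpy as np


def coreset(points, weights, sensitivities, m, rng):
    """Generic sensitivity-based importance sampling.

    Draw m points with probability proportional to the sensitivity upper
    bounds and reweight by w_i / (m * p_i), so any weighted sum over the
    sample is an unbiased estimate of the sum over the full weighted set.
    """
    s = np.asarray(sensitivities, dtype=float)
    w = np.asarray(weights, dtype=float)
    p = s / s.sum()
    idx = rng.choice(len(points), size=m, replace=True, p=p)
    return np.asarray(points, dtype=float)[idx], w[idx] / (m * p[idx])


def merge_and_reduce(batches, m, rng):
    """Stream over small batches: coreset each batch, merge with the running
    coreset, and reduce back to size m, so the full data set is never stored
    or solved on at once."""
    pts, w = None, None
    for batch in batches:
        batch = np.asarray(batch, dtype=float)
        # Placeholder sensitivities: uniform upper bounds. The paper's actual
        # contribution is a tighter, SVM-specific bound, not reproduced here.
        bp, bw = coreset(batch, np.ones(len(batch)), np.ones(len(batch)), m, rng)
        if pts is None:
            pts, w = bp, bw
        else:
            pts = np.concatenate([pts, bp])
            w = np.concatenate([w, bw])
            # Reduce the merged set; using the current weights as the
            # sensitivity proxy keeps the reweighting unbiased.
            pts, w = coreset(pts, w, w, m, rng)
    return pts, w


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stream = (rng.standard_normal((1000, 2)) for _ in range(10))
    cpts, cw = merge_and_reduce(stream, m=128, rng=rng)
    print(cpts.shape, cw.sum())  # (128, 2); weight mass stays near the 10,000 points seen
```

The key property the responses rely on is that the 1/(m * p_i) reweighting makes per-batch coresets composable: merging two coresets gives a coreset for the union, and a coreset of a coreset is still a (slightly weaker) coreset, which is what allows the streaming merge-and-reduce loop above instead of ever running the construction on the entire data set.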
soudry|the_implicit_bias_of_gradient_descent_on_separable_data|ICLR_cc_2018_Conference | 3994909 | 1710.10345 | The Implicit Bias of Gradient Descent on Separable Data | We show that gradient descent on an unregularized logistic regression
problem, for almost all separable datasets, converges in direction to the max-margin solution. The result also generalizes to other monotone decreasing loss functions with an infimum at infinity, and we also discuss a multi-class generalization to the cross-entropy loss. Furthermore,
we show this convergence is very slow, and only logarithmic in the
convergence of the loss itself. This can help explain the benefit
of continuing to optimize the logistic or cross-entropy loss even
after the training error is zero and the training loss is extremely
small, and, as we show, even if the validation loss increases. Our
methodology can also aid in understanding implicit regularization
in more complex models and with other optimization methods. | {
"name": [
"daniel soudry",
"elad hoffer",
"shpigel nacson",
"nathan srebro"
],
"affiliation": [
{
"laboratory": "",
"institution": "Technion Haifa",
"location": "{'postCode': '320003', 'country': 'Israel'}"
},
{
"laboratory": "",
"institution": "Technion Haifa",
"location": "{'postCode': '320003', 'country': 'Israel'}"
},
{
"laboratory": "",
"institution": "Technion Haifa",
"location": "{'postCode': '320003', 'country': 'Israel'}"
},
{
"laboratory": "",
"institution": "Toyota Technological Institute at Chicago Chicago",
"location": "{'postCode': '60637', 'region': 'Illinois', 'country': 'USA'}"
}
]
} | null | [
"Computer Science",
"Mathematics"
] | Journal of machine learning research | 2017-10-27 | 15 | 861 | 122 | null | null | null | null | null | null | true | The paper is tackling an important open problem.
AnonReviewer3 identified some technical issues that led them to rate the manuscript 5 (i.e., just below the acceptance threshold). Many of these issues are resolved by the reviewer in their review, and the author response makes it clear that these fixes are indeed correct. However, other issues that the reviewer raises are not provided with solutions. The authors address these points, but in one case at least (regarding w_infinity), I find the new text somewhat hand-wavy. Regardless, I'm inclined to accept the paper because the issues seem to be straightforward. Ultimately, the authors are responsible for the correctness of the results. | {
"review_id": [
"S1jezarxG",
"HyBrwGweG",
"HkS9oWtef"
],
"review": [
{
"title": "title: An interesting paper, but issues with correctness and presentation",
"paper_summary": null,
"main_review": "main_review: The paper offers a formal proof that gradient descent on the logistic\nloss converges very slowly to the hard SVM solution in the case where\nthe data are linearly separable. This result should be viewed in the\ncontext of recent attempts at trying to understand the generalization\nability of neural networks, which have turned to trying to understand\nthe implicit regularization bias that comes from the choice of\noptimizer. Since we do not even understand the regularization bias of\noptimizers for the simpler case of linear models, I consider the paper's\ntopic very interesting and timely.\n\nThe overall discussion of the paper is well written, but on a more\ndetailed level the paper gives an unpolished impression, and has many\ntechnical issues. Although I suspect that most (or even all) of these\nissues can be resolved, they interfere with checking the correctness of\nthe results. Unfortunately, in its current state I therefore do not\nconsider the paper ready for publication.\n\n\nTechnical Issues:\n\nThe statement of Lemma 5 has a trivial part and for the other part the\nproof is incorrect: Let x_u = ||nabla L(w(u))||^2.\n - Then the statement sum_{u=0}^t x_u < infinity is trivial, because\n it follows directly from ||nabla L(w(u))||^2 < infinity for all u. I\n would expect the intended statement to be sum_{u=0}^infinity x_u <\n infinity, which actually follows from the proof of the lemma.\n - The proof of the claim that t*x_t -> 0 is incorrect: sum_{u=0}^t x_u\n < infinity does not in itself imply that t*x_t -> 0, as claimed. For\n instance, we might have x_t = 1/i^2 when t=2^i for i = 1,2,... and\n x_t = 0 for all other t.\n\nDefinition of tilde{w} in Theorem 4:\n - Why would tilde{w} be unique? In particular, if the support vectors\n do not span the space, because all data lie in the same\n lower-dimensional hyperplane, then this is not the case.\n - The KKT conditions do not rule out the case that \\hat{w}^top x_n =\n 1, but alpha_n = 0 (i.e. a support vector that touches the margin,\n but does not exert force against it). Such n are then included in\n cal{S}, but lead to problems in (2.7), because they would require\n tilde{w}^top x_n = infinity, which is not possible.\n\nIn the proof of Lemma 6, case 2. at the bottom of p.14:\n - After the first inequality, C_0^2 t^{-1.5 epsilon_+} should be \n C_0^2 t^{-epsilon_+}\n - After the second inequality the part between brackets is missing an\n additional term C_0^2 t^{-\\epsilon_+}.\n - In addition, the label (1) should be on the previous inequality and\n it should be mentioned that e^{-x} <= 1-x+x^2 is applied for x >= 0\n (otherwise it might be false).\nIn the proof of Lemma 6, case 2 in the middle of p.15:\n - In the line of inequality (1) there is a t^{-epsilon_-} missing. 
In\n the next line there is a factor t^{-epsilon_-} too much.\n - In addition, the inequality e^x >= 1 + x holds for all x, so no need\n to mention that x > 0.\n\nIn Lemma 1:\n - claim (3) should be lim_{t \\to \\infty} w(t)^\\top x_n = infinity\n - In the proof: w(t)^top x_n > 0 only holds for large enough t.\n\nRemarks:\n\np.4 The claim that \"we can expect the population (or test)\nmisclassification error of w(t) to improve\" because \"the margin of w(t)\nkeeps improving\" is worded a little too strongly, because it presumes\nthat the maximum margin solution will always have the best\ngeneralization error.\n\nIn the proof sketch (p.3):\n - Why does the fact that the limit is dominated by gradients that are\n a linear combination of support vectors imply that w_infinity will\n also be a non-negative linear combination of support vectors?\n - \"converges to some limit\". Mention that you call this limit\n w_infinity\n\n\nMinor Issues:\n\nIn (2.4): add \"for all n\".\n\np.10, footnote: Shouldn't \"P_1 = X_s X_s^+\" be something like \"P_1 =\n(X_s^top X_s)^+\"?\n\nA.9: ell should be ell'\n\nThe paper needs a round of copy editing. For instance:\n - top of p.4: \"where tilde{w} A is the unique\"\n - p.10: \"the solution tilde{w} to TO eq. A.2\"\n - p.10: \"might BOT be unique\"\n - p.10: \"penrose-moorse pseudo inverse\" -> \"Moore-Penrose\n pseudoinverse\"\n \nIn the bibliography, Kingma and Ba is cited twice, with different years.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Very interesting characterisation of limiting behaviour of the log-loss minimisaton",
"paper_summary": null,
"main_review": "main_review: Paper focuses on characterising behaviour of the log loss minimisation on the linearly separable data. As we know, optimisation like this does not converge in a strict mathematical sense, as the norm of the model will grow to infinity. However, one can still hope for a convergence of normalised solution (or equivalently - convergence in term of separator angle, rather than parametrisation). This paper shows that indeed, log-loss (and some other similar losses), minimised with gradient descent, leads to convergence (in the above sense) to the max-margin solution. On one hand it is an interesting property of model we train in practice, and on the other - provides nice link between two separate learning theories.\n\nPros:\n- easy to follow line of argument\n- very interesting result of mapping \"solution\" of unregularised logistic regression (under gradient descent optimisation) onto hard max margin one\n\nCons:\n- it is not clear in the abstract, and beginning of the paper what \"convergence\" means, as in the strict sense logistic regression optimisation never converges on separable data. It would be beneficial for the clarity if authors define what they mean by convergence (normalised weight vector, angle, whichever path seems most natural) as early in the paper as possible.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: This paper analyzes the implicit regularization introduced by gradient descent for optimizing the smooth monotone exponential tailed loss function with separable data. The proposed result is very interesting since it illustrates that using gradient descent to minimize such loss function can lead to the L_2 maximum margin separator. ",
"paper_summary": null,
"main_review": "main_review: (a) Significance\nThe main contribution of this paper is to characterize the implicit bias introduced by gradient descent on separable data. The authors show the exact form of this bias (L_2 maximum margin separator), which is independent of the initialization and step size. The corresponding slow convergence rate explains the phenomenon that the predictor can continue to improve even when the training loss is already small. The result of this paper can inspire the study of the implicit bias introduced by gradient descent variants or other optimization methods, such as coordinate descent. In addition, the proposed analytic framework seems promising since it may be extended to analyze other models, like neural networks.\n\n(b) Originality\nThis is the first work to give the detailed characterizations of the implicit bias of gradient descent on separable data. The proposed assumptions are reasonable, but it seems to limit to the loss function with exponential tail. I’m curious whether the result in this paper can be applied to other loss functions, such as hinge loss.\n\n(c) Clarity & Quality \nThe presentation of this paper is OK. However, there are some places can be improved in this paper. For example, in Lemma 1, results (3) and (4) can be combined together. It is better for the authors to use another section to illustrate experimental settings instead of writing them in the caption of Figure 3.1. \n\nMinor comments: \n1. In Lemma 1 (4), w^T(t)->w(t)^T\n2. In the proof of Lemma 1, it’s better to use vector 0 for the gradient L(w)\n3. In Theorem 4, the authors should specify eta\n4. In appendix A, page 11, beta is double used\n5. In appendix D, equation (D.5) has an extra period\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.4444444477558136,
0.6666666865348816,
0.7777777910232544
],
"confidence": [
1,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Comments addressed in revision",
"Comments addressed in revision",
"Comment addressed in revision"
],
"comment": [
"We thank the reviewer for acknowledging the significance of our results, and for investing significant efforts in improving the quality of this manuscript. We uploaded a revised version in which all the reviewer comments were addressed, and the appendix was further polished. Notably,\n\n[Lemma 5 in appdendix]\n\n- Indeed, the upper limit of the sum over x_u should be 'infinity' instead of 't'.\n\n- It should be 'x_t -> 0', not 't*x_t -> 0'.\n\n[Definition of tilde{w} Theorem 4]\n\n- tilde{w} is indeed unique, given the initial conditions. We clarified this in Theorem 4 and its proof.\n\n- alpha_n=0 for the support vectors is only true for a measure zero of all datasets (we added a proof of this in appendix F). Thus, we clarified in the revision that our results hold for almost every dataset (and so, they are true with probability 1 for any data drawn from a continuous-valued distribution).\n\n[Why does the fact that the limit is dominated by gradients that are a linear combination of support vectors imply that w_infinity will also be a non-negative linear combination of support vectors?]\n\nWe clarified in the revision: “...The negative gradient would then asymptotically become a non-negative linear combination of support vectors. The limit w_{\\infinity} will then be dominated by these gradients, since any initial conditions become negligible as ||w(t)||->infinity (from Lemma 1)”.",
"We thank the reviewer for the positive review and for the helpful comments. We uploaded a revised version in which all the reviewer comments were addressed.\n\n[“I’m curious whether the result in this paper can be applied to other loss functions, such as hinge loss.”]\n\nWe believe our results could be extended to many other types of loss functions (in fact, we are currently working on such extensions). However, for the hinge loss (without regularization), gradient descent on separable data can converge to a finite solution which is not to the max margin vector. For example, if there is a single data point x=(1,0), and we start with a weight vector w=(2,2), the hinge loss and its gradient are both equal to zero. Therefore, no weight updates are performed, and we do not converge to the direction of the L2 max margin classifier: w=(1,0).\n\n[“It is better for the authors to use another section to illustrate experimental settings instead of writing them in the caption of Figure 3.1. “]\n\nWe felt it is easier to read if all details are summarized in the figure, and wanted to save space to fit the main paper into 8 pages. However, we can change this if required.",
"We thank the reviewer for the positive review and for the helpful comment. We uploaded a revised version in which clarified in the abstract that the weights converge “in direction” to the L2 max margin solution."
]
} | {
"paperhash": [
"ji|the_implicit_bias_of_gradient_descent_on_nonseparable_data",
"nacson|stochastic_gradient_descent_on_separable_data:_exact_convergence_with_a_fixed_learning_rate",
"gunasekar|implicit_bias_of_gradient_descent_on_linear_convolutional_networks",
"ji|risk_and_parameter_convergence_of_logistic_regression",
"nacson|convergence_of_gradient_descent_on_separable_data",
"gunasekar|characterizing_implicit_bias_in_terms_of_optimization_geometry",
"neyshabur|exploring_generalization_in_deep_learning",
"gunasekar|implicit_regularization_in_matrix_factorization",
"wilson|the_marginal_value_of_adaptive_gradient_methods_in_machine_learning",
"hoffer|train_longer,_generalize_better:_closing_the_generalization_gap_in_large_batch_training_of_neural_networks",
"zhang|understanding_deep_learning_requires_rethinking_generalization",
"hubara|quantized_neural_networks:_training_neural_networks_with_low_precision_weights_and_activations",
"keskar|on_large-batch_training_for_deep_learning:_generalization_gap_and_sharp_minima",
"hardt|train_faster,_generalize_better:_stability_of_stochastic_gradient_descent",
"neyshabur|path-sgd:_path-normalized_optimization_in_deep_neural_networks",
"kingma|adam:_a_method_for_stochastic_optimization",
"neyshabur|in_search_of_the_real_inductive_bias:_on_the_role_of_implicit_regularization_in_deep_learning",
"telgarsky|margins,_shrinkage,_and_boosting",
"duchi|adaptive_subgradient_methods_for_online_learning_and_stochastic_optimization",
"zhang|boosting_with_early_stopping:_convergence_and_consistency",
"rosset|boosting_as_a_regularized_path_to_a_maximum_margin_classifier",
"rosset|margin_maximizing_loss_functions",
"schapire|boosting_the_margin:_a_new_explanation_for_the_effectiveness_of_voting_methods"
],
"title": [
"The implicit bias of gradient descent on nonseparable data",
"Stochastic Gradient Descent on Separable Data: Exact Convergence with a Fixed Learning Rate",
"Implicit Bias of Gradient Descent on Linear Convolutional Networks",
"Risk and parameter convergence of logistic regression",
"Convergence of Gradient Descent on Separable Data",
"Characterizing Implicit Bias in Terms of Optimization Geometry",
"Exploring Generalization in Deep Learning",
"Implicit Regularization in Matrix Factorization",
"The Marginal Value of Adaptive Gradient Methods in Machine Learning",
"Train longer, generalize better: closing the generalization gap in large batch training of neural networks",
"Understanding deep learning requires rethinking generalization",
"Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations",
"On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima",
"Train faster, generalize better: Stability of stochastic gradient descent",
"Path-SGD: Path-Normalized Optimization in Deep Neural Networks",
"Adam: A Method for Stochastic Optimization",
"In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning",
"Margins, Shrinkage, and Boosting",
"Adaptive Subgradient Methods for Online Learning and Stochastic Optimization",
"Boosting with early stopping: Convergence and consistency",
"Boosting as a Regularized Path to a Maximum Margin Classifier",
"Margin Maximizing Loss Functions",
"Boosting the margin: A new explanation for the effectiveness of voting methods"
],
"abstract": [
"Gradient descent, when applied to the task of logistic regression, outputs iterates which are biased to follow a unique ray defined by the data. The direction of this ray is the maximum margin predictor of a maximal linearly separable subset of the data; the gradient descent iterates converge to this ray in direction at the rate O ( lnln t / ln t ) . The ray does not pass through the origin in general, and its offset is the bounded global optimum of the risk over the remaining data; gradient descent recovers this offset at a rate O ( (ln t ) 2 / √ t ) .",
"Stochastic Gradient Descent (SGD) is a central tool in machine learning. We prove that SGD converges to zero loss, even with a fixed (non-vanishing) learning rate --- in the special case of homogeneous linear classifiers with smooth monotone loss functions, optimized on linearly separable data. Previous works assumed either a vanishing learning rate, iterate averaging, or loss assumptions that do not hold for monotone loss functions used for classification, such as the logistic loss. We prove our result on a fixed dataset, both for sampling with or without replacement. Furthermore, for logistic loss (and similar exponentially-tailed losses), we prove that with SGD the weight vector converges in direction to the $L_2$ max margin vector as $O(1/\\log(t))$ for almost all separable datasets, and the loss converges as $O(1/t)$ --- similarly to gradient descent. Lastly, we examine the case of a fixed learning rate proportional to the minibatch size. We prove that in this case, the asymptotic convergence rate of SGD (with replacement) does not depend on the minibatch size in terms of epochs, if the support vectors span the data. These results may suggest an explanation to similar behaviors observed in deep networks, when trained with SGD.",
"We show that gradient descent on full-width linear convolutional networks of depth $L$ converges to a linear predictor related to the $\\ell_{2/L}$ bridge penalty in the frequency domain. This is in contrast to linearly fully connected networks, where gradient descent converges to the hard margin linear support vector machine solution, regardless of depth.",
"Gradient descent, when applied to the task of logistic regression, outputs iterates which are biased to follow a unique ray defined by the data. The direction of this ray is the maximum margin predictor of a maximal linearly separable subset of the data; the gradient descent iterates converge to this ray in direction at the rate $\\mathcal{O}(\\ln\\ln t / \\ln t)$. The ray does not pass through the origin in general, and its offset is the bounded global optimum of the risk over the remaining data; gradient descent recovers this offset at a rate $\\mathcal{O}((\\ln t)^2 / \\sqrt{t})$.",
"We provide a detailed study on the implicit bias of gradient descent when optimizing loss functions with strictly monotone tails, such as the logistic loss, over separable datasets. We look at two basic questions: (a) what are the conditions on the tail of the loss function under which gradient descent converges in the direction of the $L_2$ maximum-margin separator? (b) how does the rate of margin convergence depend on the tail of the loss function and the choice of the step size? We show that for a large family of super-polynomial tailed losses, gradient descent iterates on linear networks of any depth converge in the direction of $L_2$ maximum-margin solution, while this does not hold for losses with heavier tails. Within this family, for simple linear models we show that the optimal rates with fixed step size is indeed obtained for the commonly used exponentially tailed losses such as logistic loss. However, with a fixed step size the optimal convergence rate is extremely slow as $1/\\log(t)$, as also proved in Soudry et al. (2018). For linear models with exponential loss, we further prove that the convergence rate could be improved to $\\log (t) /\\sqrt{t}$ by using aggressive step sizes that compensates for the rapidly vanishing gradients. Numerical results suggest this method might be useful for deep networks.",
"We study the implicit bias of generic optimization methods, such as mirror descent, natural gradient descent, and steepest descent with respect to different potentials and norms, when optimizing underdetermined linear regression or separable linear classification problems. We explore the question of whether the specific global minimum (among the many possible global minima) reached by an algorithm can be characterized in terms of the potential or norm of the optimization geometry, and independently of hyperparameter choices such as step-size and momentum.",
"With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. We then investigate how well the measures explain different observed phenomena.",
"We study implicit regularization when optimizing an underdetermined quadratic objective over a matrix $X$ with gradient descent on a factorization of X. We conjecture and provide empirical and theoretical evidence that with small enough step sizes and initialization close enough to the origin, gradient descent on a full dimensional factorization converges to the minimum nuclear norm solution.",
"Adaptive optimization methods, which perform local optimization with a metric constructed from the history of iterates, are becoming increasingly popular for training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We show that for simple overparameterized problems, adaptive methods often find drastically different solutions than gradient descent (GD) or stochastic gradient descent (SGD). We construct an illustrative binary classification problem where the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to half. We additionally study the empirical generalization capability of adaptive methods on several state-of-the-art deep learning models. We observe that the solutions found by adaptive methods generalize worse (often significantly worse) than SGD, even when these solutions have better training performance. These results suggest that practitioners should reconsider the use of adaptive methods to train neural networks.",
"Background: Deep learning models are typically trained using stochastic gradient descent or one of its variants. These methods update the weights using their gradient, estimated from a small fraction of the training data. It has been observed that when using large batch sizes there is a persistent degradation in generalization performance - known as the \"generalization gap\" phenomena. Identifying the origin of this gap and closing it had remained an open problem. \nContributions: We examine the initial high learning rate training phase. We find that the weight distance from its initialization grows logarithmically with the number of weight updates. We therefore propose a \"random walk on random landscape\" statistical model which is known to exhibit similar \"ultra-slow\" diffusion behavior. Following this hypothesis we conducted experiments to show empirically that the \"generalization gap\" stems from the relatively small number of updates rather than the batch size, and can be completely eliminated by adapting the training regime used. We further investigate different techniques to train models in the large-batch regime and present a novel algorithm named \"Ghost Batch Normalization\" which enables significant decrease in the generalization gap without increasing the number of updates. To validate our findings we conduct several additional experiments on MNIST, CIFAR-10, CIFAR-100 and ImageNet. Finally, we reassess common practices and beliefs concerning training of deep models and suggest they may not be optimal to achieve good generalization.",
"Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. \nThrough extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. \nWe interpret our experimental findings by comparison with traditional models.",
"We introduce a method to train Quantized Neural Networks (QNNs) -- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At traintime the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.",
"The stochastic gradient descent (SGD) method and its variants are algorithms of choice for many Deep Learning tasks. These methods operate in a small-batch regime wherein a fraction of the training data, say $32$-$512$ data points, is sampled to compute an approximation to the gradient. It has been observed in practice that when using a larger batch there is a degradation in the quality of the model, as measured by its ability to generalize. We investigate the cause for this generalization drop in the large-batch regime and present numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions - and as is well known, sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. We discuss several strategies to attempt to help large-batch methods eliminate this generalization gap.",
"We show that parametric models trained by a stochastic gradient method (SGM) with few iterations have vanishing generalization error. We prove our results by arguing that SGM is algorithmically stable in the sense of Bousquet and Elisseeff. Our analysis only employs elementary tools from convex and continuous optimization. We derive stability bounds for both convex and non-convex optimization under standard Lipschitz and smoothness assumptions. \nApplying our results to the convex case, we provide new insights for why multiple epochs of stochastic gradient methods generalize well in practice. In the non-convex case, we give a new interpretation of common practices in neural networks, and formally show that popular techniques for training large deep models are indeed stability-promoting. Our findings conceptually underscore the importance of reducing training time beyond its obvious benefit.",
"We revisit the choice of SGD for training deep neural networks by reconsidering the appropriate geometry in which to optimize the weights. We argue for a geometry invariant to rescaling of weights that does not affect the output of the network, and suggest Path-SGD, which is an approximate steepest descent method with respect to a path-wise regularizer related to max-norm regularization. Path-SGD is easy and efficient to implement and leads to empirical gains over SGD and Ada-Grad.",
"We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.",
"We present experiments demonstrating that some other form of capacity control, different from network size, plays a central role in learning multilayer feed-forward networks. We argue, partially through analogy to matrix factorization, that this is an inductive bias that can help shed light on deep learning.",
"This manuscript shows that AdaBoost and its immediate variants can produce approximate maximum margin classifiers simply by scaling step size choices with a fixed small constant. In this way, when the unscaled step size is an optimal choice, these results provide guarantees for Friedman's empirically successful \"shrinkage\" procedure for gradient boosting (Friedman, 2000). Guarantees are also provided for a variety of other step sizes, affirming the intuition that increasingly regularized line searches provide improved margin guarantees. The results hold for the exponential loss and similar losses, most notably the logistic loss.",
"We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms.",
"Boosting is one of the most significant advances in machine learning for classification and regression. In its original and computationally flexible version, boosting seeks to minimize empirically a loss function in a greedy fashion. The resulting estimator takes an additive function form and is built iteratively by applying a base estimator (or learner) to updated samples depending on the previous iterations. An unusual regularization technique, early stopping, is employed based on CV or a test set. This paper studies numerical convergence, consistency and statistical rates of convergence of boosting with early stopping, when it is carried out over the linear span of a family of basis functions. For general loss functions, we prove the convergence of boosting's greedy optimization to the infinimum of the loss function over the linear span. Using the numerical convergence result, we find early-stopping strategies under which boosting is shown to be consistent based on i.i.d. samples, and we obtain bounds on the rates of convergence for boosting estimators. Simulation studies are also presented to illustrate the relevance of our theoretical results for providing insights to practical aspects of boosting. As a side product, these results also reveal the importance of restricting the greedy search step-sizes. as known in practice through the work of Friedman and others. Moreover, our results lead to a rigorous proof that for a linearly separable problem, AdaBoost with E → 0 step-size becomes an L 1 -margin maximizer when left to run to convergence.",
"In this paper we study boosting methods from a new perspective. We build on recent work by Efron et al. to show that boosting approximately (and in some cases exactly) minimizes its loss criterion with an l1 constraint on the coefficient vector. This helps understand the success of boosting with early stopping as regularized fitting of the loss criterion. For the two most commonly used criteria (exponential and binomial log-likelihood), we further show that as the constraint is relaxed---or equivalently as the boosting iterations proceed---the solution converges (in the separable case) to an \"l1-optimal\" separating hyper-plane. We prove that this l1-optimal separating hyper-plane has the property of maximizing the minimal l1-margin of the training data, as defined in the boosting literature. An interesting fundamental similarity between boosting and kernel support vector machines emerges, as both can be described as methods for regularized optimization in high-dimensional predictor space, using a computational trick to make the calculation practical, and converging to margin-maximizing solutions. While this statement describes SVMs exactly, it applies to boosting only approximately.",
"Margin maximizing properties play an important role in the analysis of classification models, such as boosting and support vector machines. Margin maximization is theoretically interesting because it facilitates generalization error analysis, and practically interesting because it presents a clear geometric interpretation of the models being built. We formulate and prove a sufficient condition for the solutions of regularized loss functions to converge to margin maximizing separators, as the regularization vanishes. This condition covers the hinge loss of SVM, the exponential loss of AdaBoost and logistic regression loss. We also generalize it to multi-class classification problems, and present margin maximizing multi-class versions of logistic regression and support vector machines.",
"One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance"
],
"authors": [
{
"name": [
"Ziwei Ji",
"Matus Telgarsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. S. Nacson",
"N. Srebro",
"Daniel Soudry"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Suriya Gunasekar",
"Jason D. Lee",
"Daniel Soudry",
"N. Srebro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ziwei Ji",
"Matus Telgarsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. S. Nacson",
"J. Lee",
"Suriya Gunasekar",
"N. Srebro",
"Daniel Soudry"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Suriya Gunasekar",
"Jason D. Lee",
"Daniel Soudry",
"N. Srebro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Behnam Neyshabur",
"Srinadh Bhojanapalli",
"D. McAllester",
"N. Srebro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Suriya Gunasekar",
"Blake E. Woodworth",
"Srinadh Bhojanapalli",
"Behnam Neyshabur",
"N. Srebro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ashia C. Wilson",
"R. Roelofs",
"Mitchell Stern",
"N. Srebro",
"B. Recht"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Elad Hoffer",
"Itay Hubara",
"Daniel Soudry"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Chiyuan Zhang",
"Samy Bengio",
"Moritz Hardt",
"B. Recht",
"O. Vinyals"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Itay Hubara",
"Matthieu Courbariaux",
"Daniel Soudry",
"Ran El-Yaniv",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"N. Keskar",
"Dheevatsa Mudigere",
"J. Nocedal",
"M. Smelyanskiy",
"P. T. P. Tang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Moritz Hardt",
"B. Recht",
"Y. Singer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Behnam Neyshabur",
"R. Salakhutdinov",
"N. Srebro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Diederik P. Kingma",
"Jimmy Ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Behnam Neyshabur",
"Ryota Tomioka",
"N. Srebro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Matus Telgarsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"John C. Duchi",
"Elad Hazan",
"Y. Singer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Tong Zhang",
"Bin Yu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Saharon Rosset",
"Ji Zhu",
"T. Hastie"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Saharon Rosset",
"Ji Zhu",
"T. Hastie"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"R. Schapire",
"Y. Freund",
"Peter Barlett",
"Wee Sun Lee"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
null,
"1806.01796",
"1806.00468",
"1803.07300",
"1803.01905",
"1802.08246",
"1706.08947",
"1705.09280",
"1705.08292",
"1705.08741",
"1611.03530",
"1609.07061",
"1609.04836",
"1509.01240",
"1506.02617",
"1412.6980",
"1412.6614",
"1303.4172",
null,
"math/0508276",
null,
null,
null
],
"s2_corpus_id": [
"195769437",
"46940174",
"44099388",
"4803831",
"3692345",
"3484600",
"9597660",
"3909231",
"3273477",
"7967806",
"6212000",
"15817277",
"5834589",
"49015",
"2101905",
"6628106",
"6021932",
"16775293",
"538820",
"13158356",
"14037360",
"455073",
"573509"
],
"intents": [
[
"background"
],
[
"background"
],
[
"background",
"methodology"
],
[
"background"
],
[
"background"
],
[
"background",
"methodology"
],
[
"background"
],
[],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"methodology"
],
[
"background"
],
[],
[
"background"
],
[
"background"
],
[
"methodology"
],
[
"background"
],
[
"background"
],
[],
[
"background"
]
],
"isInfluential": [
false,
true,
true,
false,
false,
true,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | null | 88 | 9.784091 | 0.62963 | 0.833333 | null | null | null | null | null | r1q7n9gAb |
dipietro|analyzing_and_exploiting_narx_recurrent_neural_networks_for_longterm_dependencies|ICLR_cc_2018_Conference | Analyzing and Exploiting NARX Recurrent Neural Networks for Long-Term Dependencies | Recurrent neural networks (RNNs) have achieved state-of-the-art performance on many diverse tasks, from machine translation to surgical activity recognition, yet training RNNs to capture long-term dependencies remains difficult. To date, the vast majority of successful RNN architectures alleviate this problem using nearly-additive connections between states, as introduced by long short-term memory (LSTM). We take an orthogonal approach and introduce MIST RNNs, a NARX RNN architecture that allows direct connections from the very distant past. We show that MIST RNNs 1) exhibit superior vanishing-gradient properties in comparison to LSTM and previously-proposed NARX RNNs; 2) are far more efficient than previously-proposed NARX RNN architectures, requiring even fewer computations than LSTM; and 3) improve performance substantially over LSTM and Clockwork RNNs on tasks requiring very long-term dependencies. | {
"name": [],
"affiliation": []
} | We introduce MIST RNNs, which a) exhibit superior vanishing-gradient properties in comparison to LSTM; b) improve performance substantially over LSTM and Clockwork RNNs on tasks requiring very long-term dependencies; and c) are much more efficient than previously-proposed NARX RNNs, with even fewer parameters and operations than LSTM. | [
"recurrent neural networks",
"long-term dependencies",
"long short-term memory",
"LSTM"
] | null | 2018-02-15 22:29:26 | 40 | null | null | null | null | null | null | null | null | false | I think the model itself is not very novel, as pointed by the reviewers and the analysis is not very insightful either. However, the results themselves are interesting and quite good (on the copy task and pMnist, but not so much the other datasets presented (timit etc) where it not clear that long term dependencies would lead to better results). Since the method itself is not very novel, the onus is upon the authors to make a strong case for the merits of the paper -- It would be worth exploring these architectures further to see if there are useful elements for real world tasks -- more so than is demonstrated in the paper -- for example showing it on tasks such as machine translation or language modelling tasks requiring long term propagation of information or even real speech recognition, not just basic TIMIT phone frame classification rate.
As a result, while I think the paper could make for an interesting contribution, in its present form, I have settled on recommending the paper for the workshop track.
As a side note, the paper is related to paper 874 in that an attention model is used to look at the past. The difference is in how the past is connected to the current model. | {
"review_id": [
"rycLSbcgf",
"H1OSO2dlz",
"BJMTxiOlG"
],
"review": [
{
"title": "title: Review",
"paper_summary": null,
"main_review": "main_review: Summary: The authors introduce a variant of NARX RNNs, which has an additional attention mechanism and a reset mechanism. The attention is only applied on subsets of hidden states, referred as delays. The delays are aggregated into a vector using the attention coefficients as weights, and then this vector is multiplied by the reset gates. \n\nThe model sounds a bit incremental, however, the performance improvements over pMNIST, copy and MobiAct tasks are interesting.\n\nA similar kind of architecture has been already proposed:\n[1] Soltani et al. “Higher Order Recurrent Neural Networks”, arXiv 1605.00064\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: The paper introduces a variant of the well-known (but as of today not very frequently used) NARX architecture for Recurrent Neural Networks. It is demonstrated that with the proposed method (MIST RNNs), good performance is achieved on several common RNN problems.",
"paper_summary": null,
"main_review": "main_review: The presented MIST architecture certainly has got its merits, but in my opinion is not very novel, given the fact that NARX RNNs have been described 20 years ago, and Clockwork RNNs (which, as the authors point out in section 2, have a similar structure) have also been in use for several years. Still, the presented results are good, with standard LSTMs being substantially outperformed in three out of five standard RNN/LSTM benchmark tasks. The analysis in section 3 is decent (see however the minor comments below), but does not offer revolutionary new insights - it's perhaps more like a corollary of previous work (Pascanu et al., 2013).\n\nRegarding the concrete results, I would have wished for a more detailed analysis of the more surprising results, in particular, for the copy task (section 5.2): Is it really true that Clockwork RNNs fail because they make it \"difficult to learn long-term behavior that must be detected at high frequency\" [section 2]? How relevant are the results in figure 2 (yes, the gradient properties are very different, but is this an issue for accuracy)? In the sequential pMNIST classification, what about increasing the LSTM number of hidden units? If this brings the error rate further down, one could ask why exactly the LSTM captures long-term structure so differently with different number of units?\n\nIn summary, for me this paper is solid, and although the architecture is not that new, it is worth bringing it again into the focus of attention.\n\n\nMinor comments:\n- In several places, the formulas are rather strange and/or occasionally incorrect. In particular,\n* on the right-hand sind of the inline formula in section 3.1, the symbol v is missing completely, which cannot be right;\n* in formula 16, the primes seem to be misplaced, and the symbols t', t''', etc. should be defined;\n* the \\theta_l in the beginning of section 3.3 (formula 13) is completely superfluous.\n- The position of the tables and figures is rather weird, making the paper less readable than necessary. The authors should consider moving floating parts around (one could also move figure three to the bottom of a suitable page, for example).\n- It is a matter of taste, but since all experimental results except the ones on the copy task are tabulated, one could think of adding a table with the results now contained in figure 3.\n\nRelation to prior work: the authors are aware of most relevant work. \n\nOn p2 they write: \"Many other approaches have also been proposed to capture long-term dependencies.\" There is one that seems close to what the authors do: \n\nJ. Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234-242, 1992\n\nIt is related to clockwork RNNs, about which the authors write:\n\n\"A recent architecture that is similar in spirit to our work is that of Clockwork RNNs (Koutnik et al., 2014), which split weights and hidden units into partitions, each with a distinct period. When it’s not a partition’s time to tick, its hidden units are passed through unchanged, thus in some ways mimicking the behavior of NARX RNNs. However Clockwork RNNs differ in two key ways. First, Clockwork RNNs sever high-frequency-to-low-frequency paths, thus making it difficult to learn long-term behavior that must be detected at high frequency (for example, learning to depend on quick motions from the past for activity recognition). 
Second, Clockwork RNNs require hidden units to be partitioned a priori, which in practice is difficult to do in any meaningful way. NARX RNNs suffer from neither of these drawbacks.\"\n\nThe neural history compressor, however, adapts to the frequency of unexpected events, by ticking only when there is an unpredictable event, thus overcoming some of the issues above. Perhaps this trick could further improve the system of the authors, as well as the Clockwork RNNs, at least for certain tasks?\n\nGeneral recommendation: Accept, provided the comments are taken into account.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: little novelty and unconvincing",
"paper_summary": null,
"main_review": "main_review: The followings are my main critics of the paper: \n1. Analysis does not provide any new insights. \n2. Similar work (recurrent skip coefficient and the corresponding architecture in [1]) has been done, but has not been mentioned. \n3. The experimental results are not convincing. This includes 1. the choices of tasks are limited -- very small in size, 2. the performance in pMNIST is worse than [1], under the same settings.\n\nHence I think the novelty of the paper is very little, and the experiments are not convincing.\n\n[1] Architectural Complexity Measures of Recurrent Neural Networks. Saizheng Zhang, Yuhuai Wu, Tong Che, Zhouhan Lin, Roland Memisevic, Ruslan Salakhutdinov, Yoshua Bengio. NIPS, 2016. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.6666666865348816,
0.2222222238779068
],
"confidence": [
0.75,
1,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to AnonReviewer3",
"Added revision incorporating reviewer feedback",
"Response to AnonReviewer1",
"Response to AnonReviewer2"
],
"comment": [
"Thank you for your review. We kindly note that some of the comments in this review are incorrect, and as such we sincerely hope that you are willing to reconsider your evaluation of our work.\n\n>>>>> The experimental results are not convincing. This includes 1. the choices of tasks are limited -- very small in size, 2. the performance in pMNIST is worse than [1], under the same settings.\n\nPoint 2:\n\nPlease note that this is incorrect. In [1], the best reported error rate for pMNIST is 6.0% error, whereas we obtain 5.5 +- 0.2% error. Also, their results (Table 2) correspond to a hyperparameter sweep, with s = 11 achieving 6.0% error. We require no such sweeps: our delays were kept fixed for all 5 tasks in the paper (still outperforming every model proposed in [1]).\n\nPoint 1:\n\nPlease note that we evaluated these methods across\n\n- 2 synthetic tasks that have been widely used for testing long-term dependencies, as was highlighted in Section 5 with references (Hochreiter et al., 1997; Martens et al., 2011; Le et al., 2015; Arjovsky et al., 2016; Henaff et al., 2016; Danihelka et al., 2016)\n\n- 3 real tasks that were chosen because they a) likely require long-term dependencies and b) are of moderate size so that statistically-significant results can be obtained.\n\nWe followed the experimental design of [2], which also includes 3 real tasks of moderate size, preferring random hyperparameter sweeps and statistically-significant results over manual sweeps and statistically-questionable results. Also, please note that this design seems to be reasonable to the community, as [2] has been cited 400+ times since 2014.\n\nRegarding the dataset sizes: TIMIT is standard, with splits identical to [2]. MobiAct contains approximately 3200 sequences of mobile sensor data from 67 users, very similar in size to the datasets in [2]. MISTIC-SL is smaller in size, but we chose this task because long-term dependencies are required and because state of the art is held by LSTM (which we ended up matching with MIST RNNs).\n\n[1] Zhang et al. Architectural complexity measures of recurrent neural networks. Advances in neural information processing systems (NIPS), 2016.\n\n[2] Greff et al. LSTM: A search space odyssey. IEEE Trans. on Neural Networks and Learning Systems, 2016.\n\n>>>>> Similar work (recurrent skip coefficient and the corresponding architecture in [1]) has been done, but has not been mentioned. \n\nBased on this comment, we have added a discussion of [1] to the Background section. However kindly note that\n\n- with regard to the architecture, [1] proposes precisely a simple NARX RNN ([19], discussed extensively in our paper) with non-zero weights for only two delays. This bears little resemblance to our work. Most importantly, MIST RNNs provide exponentially-short paths to the past while maintaining fewer parameters and computations than LSTM. In contrast, [1] does not provide exponentially-short paths, and uses two delays to avoid high parameter/computation counts. In case there is any doubt about this, we quote [1]: \"By using this specific construction, the recurrent skip coefficient increases from 1 (i.e., baseline) to k and the new model with extra connection has 2 hidden matrices (one from t to t + 1 and the other from t to t + k).\"\n\n- with regard to skip coefficients, [1] defines a *measure* of shortest paths called Recurrent Skip Coefficients. 
However in [1] the motivation for this definition is \"it is known that adding skip connections across multiple time steps may help improve the performance on long-term dependency problems [19, 20].\" Again, [19] introduced simple NARX RNNs, as discussed extensively in our paper. Thus the extent to which [1]'s skip coefficients overlap with our work is that we both recognize that short paths are important. A difference between our work and [1] is that we provide a self-contained derivation of this.\n\n[1] Zhang et al. Architectural complexity measures of recurrent neural networks. Advances in neural information processing systems (NIPS), 2016.\n\n[19] Lin et al. Learning long-term dependencies in NARX recurrent neural networks. IEEE Transactions on Neural Networks, 7(6):1329–1338, 1996.\n\n[20] Sutskever et al. Temporal-kernel recurrent neural networks. Neural Networks, 23(2):239–243, 2010.\n\n>>>>> Analysis does not provide any new insights.\n\nThe connection of gradient components to paths via the chain rule for ordered derivatives is new. However we agree that the analysis portion of the paper is not revolutionary - this was not the goal of the analysis. Our goals were to provide a self-contained justification of our approach and to extend the results from ([1], [2]) to general NARX RNNs.\n\n[1] Bengio et al. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166, 1994.\n\n[2] Pascanu et al. On the difficulty of training recurrent neural networks. International Conference on Machine Learning (ICML), 28:1310-1318, 2013.",
"Changes:\n\n- The last 3 paragraphs of Section 2 (Background) were expanded and edited based on feedback from all 3 reviewers.\n\n- Section 3 (The Vanishing Gradient Problem in the Context of NARX RNNs) was edited for clarity and to fix typos spotted by AnonReviewer2.\n\n- Section 5.1 (Permuted MNIST results) was heavily modified based on AnonReviewer2's feedback. In particular, results were added with additional hidden-unit counts, and results were added to show that LSTM performance does not depend at all on information from the distant past (whereas MIST RNN performance does).\n\n- A paragraph was added to the end of Section 5.2 (Copy Problem results) based on AnonReviewer2's feedback. In particular we discuss additional Clockwork RNN results; the reasons that Clockwork RNNs must fail for large delays; and show that Clockwork RNNs do indeed behave like simple RNNs if enough hidden units are provided.\n\n- Figures and Tables were moved around for clarity, based on AnonReviewer2's feedback.\n\n- Small miscellaneous edits were made throughout to open space for the previous changes.",
"Thank you for your review. We also found it interesting that MIST RNNs can capture such long-term dependencies.\n\n>>>>> A similar kind of architecture has been already proposed: [1] Soltani et al. “Higher Order Recurrent Neural Networks”, arXiv 1605.00064\n\nBased on this comment, we have added a short discussion of [1] to the Background section.\n\nHowever, we would like to kindly note that [1] defines a \"higher order recurrent neural network (HORNN)\" precisely as a simple NARX RNN, which was introduced 20 years earlier in [2], and which was already discussed extensively in our paper.\n\nImportantly, every HORNN variant in [1] suffers from the same issue that is mentioned in our paper for simple NARX RNNs: the vanishing gradient problem is only mitigated mildly as n_d, the number of delays, increases; and simultaneously parameter and computation counts grow by this same factor n_d. We would like to emphasize that MIST RNNs are the first NARX RNNs that resolve both of these issues, by providing exponentially short connections to the past while maintaining even fewer parameters and computations than LSTM.\n\n[1] Rohollah Soltani and Hui Jiang. Higher order recurrent neural networks. arXiv preprint arXiv:1605.00064, 2016.\n\n[2] Tsungnan Lin, Bill G Horne, Peter Tino, and C Lee Giles. Learning long-term dependencies in NARX recurrent neural networks. IEEE Transactions on Neural Networks, 7(6):1329–1338, 1996.",
"We are pleased that you enjoyed our work. Thank you very much for your detailed review and insightful comments. We have done our best to address every question raised, and we have updated the paper to reflect every response here:\n\n>>>>> for the copy task (section 5.2): Is it really true that Clockwork RNNs fail because they make it \"difficult to learn long-term behavior that must be detected at high frequency\" [section 2]?\n\nFor large delays (D >= 100), this is precisely the reason that Clockwork RNNs fail, but we see no way of providing further empirical evidence of this. We instead describe in detail why Clockwork RNNs must fail:\n\n- Symbol 0 can be 'copied ahead' by all partitions, and so perhaps it is possible to learn to replicate this symbol later in time.\n\n- Symbol 1 can only be seen by the highest-frequency partition (period of T = 1) because 1 % T = 0 for T = 1, but not T = 2, 4, 8, 16, etc. Also, this partition cannot send information to lower-frequency partitions. Hence Clockwork RNNs cannot learn to replicate symbol 1 for the exact same reason that a simple RNN cannot: the shortest past to the loss has at least D matrix multiplies and nonlinearities.\n\n- Symbol 2 can similarly only be seen by the two highest-frequency partitions (T = 1, T = 2), so we have a shortest path with D / 2 nonlinearities and matrix multiplies (a negligible difference for medium-to-large delays).\n\n- Symbol 3 can only be seen by the single highest-frequency partition because again 3 % T = 0 only for T = 1, so the situation is identical to symbol 1.\n\n- And so on. Hence Clockwork RNNs must fail to learn to copy most of these symbols for medium-to-large delays.\n\nFor small delays (D = 50), Clockwork RNNs should solve the copy task, because the highest-frequency partition resembles a simple RNN. However, this partition has only 256 / 8 = 32 hidden units. We thus ran additional Clockwork RNN experiments with 1024 hidden units (and 10x as many parameters), with 128 units allocated to the high-frequency partition. We then see that Clockwork RNNs do solve the copy problem with a delay of 50 and continue to fail to solve the problem for higher delays, as expected.\n\n>>>>> In the sequential pMNIST classification, what about increasing the LSTM number of hidden units? If this brings the error rate further down, one could ask why exactly the LSTM captures long-term structure so differently with different number of units?\n\nWe ran additional experiments with 512 units for both LSTM and MIST RNNs. LSTM obtains an improved error rate of 7.6%, and MIST RNNs obtain an improved error rate of 4.5%. However, we verified that capacity does not help with long-term dependencies; please see the next question.\n\n>>>>> How relevant are the results in figure 2 (yes, the gradient properties are very different, but is this an issue for accuracy)?\n\nWe included Figure 2 to show that empirical observations match our expectations for gradient decay. To provide further empirical validation, we ran additional pMNIST experiments for the 512-unit LSTM and MIST RNNs:\n\n- Based on Figure 2, we used only the last 200 pixels (rather than all 784).\n\n- LSTM performance remained the same (within 1 std. dev., 7.4% error), showing that LSTM gained nothing from including the distant past.\n\n- MIST RNN performance degraded by 15 standard deviations (6.0% error), showing that MIST RNNs do benefit from the distant past.\n\n- Finally we note that MIST RNNs still outperform LSTM. 
This is expected since LSTM has trouble learning even from steps <= 200 from the loss (as shown in Fig. 2).\n\n>>>>> on the right-hand side of the inline formula in section 3.1, the symbol v is missing\n\nThank you. This arose from merging two previous examples. Fixed.\n\n>>>>> in formula 16, the primes seem to be misplaced, and the symbols t', t''', etc. should be defined\n\nFixed\n\n>>>>> the \\theta_l in the beginning of section 3.3 (formula 13) is completely superfluous.\n\nWe agree but include this to make the connection to practice immediately evident. We added a sentence to clarify this.\n\n>>>>> The position of the tables and figures is rather weird...\n\nFixed.\n\n>>>>> Relation to prior work: the authors are aware of most relevant work... There is one that seems close to what the authors do: J. Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234-242, 1992 ...\n\nLearning a generative model over inputs to identify surprising inputs for processing is an interesting approach; we added this to the Background section.\n\n>>>>> Perhaps this trick could further improve the system of the authors, as well as the Clockwork RNNs, at least for certain tasks?\n\nWe would not be surprised at all if this method can improve results for some tasks, especially those with highly-correlated, low-dimensional inputs such as MNIST (or even pMNIST). However, addressing this question fully would be far from trivial, so we leave it as future work."
]
} | {
"paperhash": [
"arjovsky|unitary_evolution_recurrent_neural_networks",
"bahdanau|neural_machine_translation_by_jointly_learning_to_align_and_translate",
"bengio|learning_long-term_dependencies_with_gradient_descent_is_difficult",
"chatzaki|human_daily_activity_and_fall_recognition_using_a_smartphone's_acceleration_sensor",
"cho|learning_phrase_representations_using_rnn_encoder-decoder_for_statistical_machine_translation",
"danihelka|associative_long_short-term_memory",
"dipietro|recognizing_surgical_activities_with_recurrent_neural_networks",
"el|hierarchical_recurrent_neural_networks_for_long-term_dependencies",
"elman|yarin_gal_and_zoubin_ghahramani._a_theoretically_grounded_application_of_dropout_in_recurrent_neural_networks",
"gao|language_of_surgery:_a_surgical_gesture_dataset_for_human_motion_modeling",
"garofolo|darpa_timit_acoustic-phonetic_continous_speech_corpus_cd-rom._nist_speech_disc_1-1.1._nasa_sti/recon_technical_report",
"gers|learning_to_forget:_continual_prediction_with_lstm",
"graves|neural_turing_machines",
"greff|lstm:_a_search_space_odyssey",
"andrew|heterogeneous_acoustic_measurements_and_multiple_classifiers_for_speech_recognition",
"henaff|orthogonal_rnns_and_long-memory_tasks",
"hochreiter|untersuchungen_zu_dynamischen_neuronalen_netzen._diploma,_technische_universität_münchen",
"hochreiter|long_short-term_memory",
"jozefowicz|an_empirical_exploration_of_recurrent_network_architectures",
"koutnik|a_clockwork_rnn",
"krueger|regularizing_rnns_by_randomly_preserving_hidden_activations",
"le|a_simple_way_to_initialize_recurrent_networks_of_rectified_linear_units",
"lecun|gradient-based_learning_applied_to_document_recognition",
"lee|speaker-independent_phone_recognition_using_hidden_markov_models",
"lin|learning_long-term_dependencies_in_narx_recurrent_neural_networks",
"martens|learning_recurrent_neural_networks_with_hessian-free_optimization",
"miao|eesen:_end-to-end_speech_recognition_using_deep_rnn_models_and_wfst-based_decoding",
"oord|pixel_recurrent_neural_networks",
"pascanu|on_the_difficulty_of_training_recurrent_neural_networks",
"plate|holographic_recurrent_networks",
"rumelhart|learning_representations_by_backpropagating_errors",
"schmidhuber|learning_complex,_extended_sequences_using_the_principle_of_history_compression",
"soltani|higher_order_recurrent_neural_networks",
"paul|generalization_of_backpropagation_with_application_to_a_recurrent_gas_market_model",
"paul|maximizing_long-term_gas_industry_profits_in_two_minutes_in_lotus_using_neural_network_methods",
"paul|backpropagation_through_time:_what_it_does_and_how_to_do_it",
"weston|memory_networks._international_conference_on_learning_representations_(iclr)",
"ronald|a_learning_algorithm_for_continually_running_fully_recurrent_neural_networks",
"wu|google's_neural_machine_translation_system:_bridging_the_gap_between_human_and_machine_translation",
"zhang|architectural_complexity_measures_of_recurrent_neural_networks"
],
"title": [
"Unitary evolution recurrent neural networks",
"Neural machine translation by jointly learning to align and translate",
"Learning long-term dependencies with gradient descent is difficult",
"Human daily activity and fall recognition using a smartphone's acceleration sensor",
"Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"Associative long short-term memory",
"Recognizing surgical activities with recurrent neural networks",
"Hierarchical recurrent neural networks for long-term dependencies",
"Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks",
"Language of surgery: A surgical gesture dataset for human motion modeling",
"DARPA TIMIT acoustic-phonetic continous speech corpus CD-ROM. NIST speech disc 1-1.1. NASA STI/Recon technical report",
"Learning to forget: Continual prediction with LSTM",
"Neural turing machines",
"LSTM: A search space odyssey",
"Heterogeneous acoustic measurements and multiple classifiers for speech recognition",
"Orthogonal RNNs and long-memory tasks",
"Untersuchungen zu dynamischen neuronalen netzen. Diploma, Technische Universität München",
"Long short-term memory",
"An empirical exploration of recurrent network architectures",
"A clockwork RNN",
"Regularizing rnns by randomly preserving hidden activations",
"A simple way to initialize recurrent networks of rectified linear units",
"Gradient-based learning applied to document recognition",
"Speaker-independent phone recognition using hidden Markov models",
"Learning long-term dependencies in NARX recurrent neural networks",
"Learning recurrent neural networks with hessian-free optimization",
"EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding",
"Pixel recurrent neural networks",
"On the difficulty of training recurrent neural networks",
"Holographic recurrent networks",
"Learning representations by backpropagating errors",
"Learning complex, extended sequences using the principle of history compression",
"Higher order recurrent neural networks",
"Generalization of backpropagation with application to a recurrent gas market model",
"Maximizing long-term gas industry profits in two minutes in lotus using neural network methods",
"Backpropagation through time: what it does and how to do it",
"Memory networks. International Conference on Learning Representations (ICLR)",
"A learning algorithm for continually running fully recurrent neural networks",
"Google's neural machine translation system: Bridging the gap between human and machine translation",
"Architectural complexity measures of recurrent neural networks"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"martin arjovsky",
"shah amar",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"dzmitry bahdanau",
"kyunghyun cho",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoshua bengio",
"patrice simard",
"paolo frasconi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"charikleia chatzaki",
"matthew pediaditis",
"george vavoulas",
"manolis tsiknakis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kyunghyun cho",
"bart van merriënboer",
"c ¸aglar gülc ¸ehre",
"dzmitry bahdanau",
"fethi bougares",
"holger schwenk",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ivo danihelka",
"greg wayne",
"benigno uria",
"nal kalchbrenner",
"alex graves"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"robert dipietro",
"colin lea",
"anand malpani",
"narges ahmidi",
"swaroop vedula",
"i gyusung",
" lee",
"gregory d mija r lee",
" hager"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"salah el",
"hihi ",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" jeffrey l elman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yixin gao",
"s swaroop vedula",
"carol e reiley",
"narges ahmidi",
"balakrishnan varadarajan",
"henry c lin",
"lingling tao",
"luca zappella",
"benjamn bejar",
"david d yuh",
"chi chiung",
"grace chen",
"rene vidal",
"sanjeev khudanpur",
"gregory d hager"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"lori f john s garofolo",
"william m lamel",
"jonathon g fisher",
"david s fiscus",
" pallett"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jürgen felix a gers",
"fred schmidhuber",
" cummins"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex graves",
"greg wayne",
"ivo danihelka"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"k greff",
"r k srivastava",
"j koutník",
"b r steunebrink",
"j schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"k andrew",
" halberstadt"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mikael henaff",
"arthur szlam",
"yann lecun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sepp hochreiter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sepp hochreiter",
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rafal jozefowicz",
"wojciech zaremba",
"ilya sutskever"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jan koutnik",
"klaus greff",
"faustino gomez",
"juergen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david krueger",
"tegan maharaj",
"jános kramár",
"mohammad pezeshki",
"nicolas ballas",
"nan rosemary ke",
"anirudh goyal",
"yoshua bengio",
"hugo larochelle",
"aaron courville"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"navdeep quoc v le",
"geoffrey e jaitly",
" hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yann lecun",
"léon bottou",
"yoshua bengio",
"patrick haffner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"k-f lee",
"h-w hon"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tsungnan lin",
"bill g horne",
"peter tino",
"c lee giles"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"james martens",
"ilya sutskever"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yajie miao",
"mohammad gowayyed",
"florian metze"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"aaron van den oord",
"nal kalchbrenner",
"koray kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"razvan pascanu",
"tomas mikolov",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tony a plate"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"geoffrey e david e rumelhart",
"ronald j hinton",
" williams"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rohollah soltani",
"hui jiang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"werbos paul"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"werbos paul"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"werbos paul"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jason weston",
"sumit chopra",
"antoine bordes"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"j ronald",
"david williams",
" zipser"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yonghui wu",
"mike schuster",
"zhifeng chen",
"v quoc",
"mohammad le",
"wolfgang norouzi",
"maxim macherey",
"yuan krikun",
"qin cao",
"klaus gao",
" macherey"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"saizheng zhang",
"yuhuai wu",
"tong che",
"zhouhan lin",
"roland memisevic",
"ruslan salakhutdinov",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1511.06464v4",
"1409.0473v7",
"",
"",
"",
"",
"1606.06329v2",
"",
"",
"",
"",
"",
"1410.5401v2",
"1503.04069v2",
"",
"",
"",
"",
"",
"1402.3511v1",
"arXiv:1606.01305",
"1504.00941v2",
"",
"",
"",
"",
"",
"1601.06759v3",
"1211.5063v2",
"",
"",
"",
"1605.00064v1",
"",
"",
"",
"",
"",
"1609.08144v2",
"1602.08210v3"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.481481 | 0.833333 | null | null | null | null | null | r1pW0WZAW |
||
ding|video_action_segmentation_with_hybrid_temporal_networks|ICLR_cc_2018_Conference | Video Action Segmentation with Hybrid Temporal Networks | Action segmentation as a milestone towards building automatic systems to understand untrimmed videos has received considerable attention in the recent years. It is typically being modeled as a sequence labeling problem but contains intrinsic and sufficient differences than text parsing or speech processing. In this paper, we introduce a novel hybrid temporal convolutional and recurrent network (TricorNet), which has an encoder-decoder architecture: the encoder consists of a hierarchy of temporal convolutional kernels that capture the local motion changes of different actions; the decoder is a hierarchy of recurrent neural networks that are able to learn and memorize long-term action dependencies after the encoding stage. Our model is simple but extremely effective in terms of video sequence labeling. The experimental results on three public action segmentation datasets have shown that the proposed model achieves superior performance over the state of the art. | {
"name": [],
"affiliation": []
} | We propose a new hybrid temporal network that achieves state-of-the-art performance on video action segmentation on three public datasets. | [
"action segmentation",
"video labeling",
"temporal networks"
] | null | 2018-02-15 22:29:16 | 28 | null | null | null | null | null | null | null | null | false | All reviewers believed that the novelty of the contribution was limited. | {
"review_id": [
"HJ5VPLYxG",
"H1lTTDulG",
"B1tWwoKxz"
],
"review": [
{
"title": "title: Video Action Segmentation with Hybrid Temporal Networks",
"paper_summary": null,
"main_review": "main_review: I will be upfront: I have already reviewed this paper when it was submitted to NIPS 2017, so this review is based heavily on the NIPS submission. \n\nI am quite concerned that this paper has been resubmitted as it is, word by word, character by character. The authors could have benefited from the feedback they obtained from the reviewers of their last submissions to improved their paper, but nothing has been done. Even very easy remarks, like bolding errors (see below) have been kept in the paper.\n\nThe proposed paper describes a method for video action segmentation, a task where the video must be temporally densely labeled by assigned an action (sub) class to each frame. The method proceeds by extracting frame level features using convolutional networks and then passing a temporal encoder-decoder in 1D over the video, using fully supervised training.\n\nOn the positive side, the method has been tested on 3 different datasets, outperforming the baselines (recent methods from 2016) on 2 of them.\n\nMy biggest concern with the paper is novelty. A significant part of the paper is based on reference [Lea et al. 2017], the differences being quite incremental. The frame-level features are the same as in [Lea et al. 2017], and the basic encoder-decoder strategy is also taken from [Lea et al. 2017]. The encoder is also the same. Even details are reproduced, as the choice of normalized Relu activations.\n\nThe main difference seems to me that the decoder is not convolutional, but a recurrent network.\n\nThe encoder-decoder architecture seems to be surprisingly shallow, with only K=2 layers at each side.\n\nThe paper is well written and can be easily understood. However, a quite large amount of space is wasted on obvious and known content, as for example the basic equation for a convolutional layer (equation (1)) and the following half page of text and equations of LSTM and Bi-directional LSTM networks. This is very well known and the space can be used for more details on the paper's contributions.\n\nWhile the paper is generally well written, there are a couple of exceptions in the form of ambiguous sentences, for example the lines before section 3.\n\nThere is a bolding error in table 2, where the proposed method is not state of the art (as indicated) w.r.t. to the accuracy metric.\n\nTo sum it up, the positive aspect of nicely executed experiments is contrasted by low novelty of the method. To be honest, I am not totally sure whether the contribution of the paper should be considered as a new method or as architectural optimizations of an existing one. This is corroborated by the experimental results on the first two datasets (tables 2 and 3): on 50 salads, where ref. [Lea et al. 2017]. seems currently to obtain state of the art performance, the improvement obtained by the proposed method allows it to get state of the art performance. On GTEA, where [Lea et al. 2017] does not currently deliver state of the art performance, the proposed method performs (slightly) better than [Lea et al. 2017] but does not obtain state of the art performance.\n\nOn the third dataset, JIGSAWS, reference [Lea et al. 2017]. has not been tested, which is peculiar given the closeness.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: lacking in terms of novelty",
"paper_summary": null,
"main_review": "main_review: The paper proposed a combination of temporal convolutional and recurrent network for video action segmentation. Overall this paper is written and easy to follow.\n\nThe novelty of this paper is very limited. It just replaces the decoder of ED-TCN (Lea et al. 2017) with a bi-directional LSTM. The idea of applying bi-directional LSTM is also not new for video action segmentation. In fact, ED-TCN used it as one of the baselines. The results also do not show much improvement over ED-TCN, which is much easier and faster to train (as it is fully convolutional model) than the proposed model. Another concern is that the number of layers parameter 'K'. The authors should show an analysis on how the performance varies for different values of 'K' which I believe is necessary to judge the generalization of the proposed model. I also suggest to have an analysis on entire convolutional model (where the decoder has 1D-deconvolution) to be included in order to get a clear picture of the improvement in performance due to bi-directional LSTM . Overall, I believe the novelty, contribution and impact of this work is sub-par to what is expected for publication in ICLR. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Official Reviewer 2",
"paper_summary": null,
"main_review": "main_review: This paper discusses the problem of action segmentation in long videos, up to 10 minutes long. The basic idea is to use a temporal convolutional encoder-decoder architecture, where in the enconder 1-D temporal convolutions are used. In the decoder three variants are studied:\n\n(1) One that uses only several bidirectional LSTMs, one after the other.\n(2) One that first applies successive layers of deconvolutions to produce per frame feature maps. Then, in the end a bidirectional LSTM in the last layer.\n(3) One that first applies a bidirectional LSTM, then applies successively 1-D deconvolution layer.\n\nAll variants end with a \"temporal softmax\" layer, which outputs a class prediction per frame.\n\nOverall, the paper is of rather limited novelty, as it is very similar to the work of Lea et al., 2017, where now the decoder part also has the deconvolutions smoothened by (bidirectional) LSTMs. It is not clear what is the main novelty compared to the aforementioned paper, other than temporal smoothing of features at the decoder stage.\n\nAlthough one of the proposed architectures (TricorNet) produces some modest improvements, it is not clear why the particular architectures are a good fit. Surely, deconvolutions and LSTMs can help incorporate some longer-term temporal elements into the final representations. However, to begin with, aren't the 1-D deconvolutions and the LSTMs (assuming they are computed dimension-wise) serving the same purpose and therefore overlapping? Why are both needed?\n\nSecond, what makes the particular architectures in Figure 3 the most reasonable choice for encoding long-term dependencies, is there a fundamental reason? What is the difference of the L_mid from the 1-D deconv layers afterward? Currently, the three variants are motivated in terms of what the Bi-LSTM can encode (high or low level details). \n\nThird, the qualitative analysis can be improved. For instance, the experiment with the \"cut lettuce\" vs \"peel cucumber\" is not persuasive enough. Indeed, longer temporal relationships can save incorrect future predictions. However, this works both ways, meaning that wrong past predictions can persist because of the long-term modelling. Is there a mechanism in the proposed approach to account for that fact?\n\nAll in all, I believe the paper indeed improves over existing baselines. However, the novelty is insufficient for a publication at this stage.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
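Editor's note: to make the architecture family debated in the reviews above concrete (a 1-D temporal convolutional encoder whose output is smoothed by a bidirectional LSTM before per-frame classification), here is a minimal illustrative sketch in Python/PyTorch. It is not the authors' code; all layer sizes, the kernel width, K=2, and the class count are assumed placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalEncoderBiLSTMDecoder(nn.Module):
    def __init__(self, feat_dim=128, hidden=64, n_classes=10, k=2):
        super().__init__()
        # Encoder: k layers of 1-D temporal convolution, each followed by
        # max-pooling that halves the temporal resolution.
        self.enc = nn.ModuleList()
        in_ch = feat_dim
        for _ in range(k):
            self.enc.append(nn.Conv1d(in_ch, hidden, kernel_size=25, padding=12))
            in_ch = hidden
        self.pool = nn.MaxPool1d(2)
        # Decoder: a bidirectional LSTM smooths the upsampled encoding over time.
        self.dec = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, feat_dim) frame-level CNN features
        t = x.size(1)
        h = x.transpose(1, 2)                         # (batch, feat_dim, time)
        for conv in self.enc:
            h = self.pool(F.relu(conv(h)))            # conv + temporal downsample
        h = F.interpolate(h, size=t, mode="nearest")  # back to full frame rate
        out, _ = self.dec(h.transpose(1, 2))          # (batch, time, 2*hidden)
        return self.classifier(out)                   # per-frame class logits


# Toy usage: 2 videos, 100 frames each, 128-dimensional frame features.
logits = TemporalEncoderBiLSTMDecoder()(torch.randn(2, 100, 128))
print(logits.shape)  # torch.Size([2, 100, 10])
```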
"score": [
0.2222222238779068,
0.3333333432674408,
0.2222222238779068
],
"confidence": [
1,
0.75,
1
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [],
"comment": []
} | {
"paperhash": [
"|franc_¸ois_chollet._keras",
"cross|incremental_parsing_with_minimal_features_using_bi-directional_lstm",
"das|a_thousand_frames_in_just_a_few_words:_lingual_description_of_videos_through_latent_topics_and_sparse_object_stitching",
"fathi|learning_to_recognize_objects_in_egocentric_activities",
"gao|jhu-isi_gesture_and_skill_assessment_working_set_(jigsaws):_a_surgical_activity_dataset_for_human_motion_modeling",
"graves|bidirectional_lstm_networks_for_improved_phoneme_classification_and_recognition",
"de-an|connectionist_temporal_modeling_for_weakly_supervised_action_labeling",
"kingma|adam:_a_method_for_stochastic_optimization",
"kuehne|an_end-to-end_generative_framework_for_video_segmentation_and_recognition",
"lea|segmental_spatiotemporal_cnns_for_fine-grained_action_segmentation",
"lea|learning_convolutional_action_primitives_for_fine-grained_action_recognition",
"lea|temporal_convolutional_networks:_a_unified_approach_to_action_segmentation",
"lea|temporal_convolutional_networks_for_action_segmentation_and_detection",
"li|online_human_action_detection_using_joint_classification-regression_recurrent_neural_networks",
"mettes|spot_on:_action_localization_from_pointly-supervised_proposals",
"noh|learning_deconvolution_network_for_semantic_segmentation",
"peng|multi-region_two-stream_r-cnn_for_action_detection",
"richard|temporal_action_detection_using_a_statistical_language_model",
"simonyan|two-stream_convolutional_networks_for_action_recognition_in_videos",
"singh|a_multi-stream_bi-directional_recurrent_neural_network_for_fine-grained_action_detection",
"singh|first_person_action_recognition_using_deep_learned_descriptors",
"stein|combining_embedded_accelerometers_with_computer_vision_for_recognizing_food_preparation_activities",
"tao|surgical_gesture_segmentation_and_recognition",
"tran|learning_spatiotemporal_features_with_3d_convolutional_networks",
"wang|action_recognition_with_trajectory-pooled_deep-convolutional_descriptors",
"yeung|every_moment_counts:_dense_detailed_labeling_of_actions_in_complex_videos",
"yeung|end-to-end_learning_of_action_detection_from_frame_glimpses_in_videos",
"zhou|procnets:_learning_to_segment_procedures_in_untrimmed_and_unconstrained_videos"
],
"title": [
"Franc ¸ois Chollet. keras",
"Incremental parsing with minimal features using bi-directional lstm",
"A thousand frames in just a few words: lingual description of videos through latent topics and sparse object stitching",
"Learning to recognize objects in egocentric activities",
"Jhu-isi gesture and skill assessment working set (jigsaws): A surgical activity dataset for human motion modeling",
"Bidirectional lstm networks for improved phoneme classification and recognition",
"Connectionist temporal modeling for weakly supervised action labeling",
"Adam: A method for stochastic optimization",
"An end-to-end generative framework for video segmentation and recognition",
"Segmental spatiotemporal cnns for fine-grained action segmentation",
"Learning convolutional action primitives for fine-grained action recognition",
"Temporal convolutional networks: A unified approach to action segmentation",
"Temporal convolutional networks for action segmentation and detection",
"Online human action detection using joint classification-regression recurrent neural networks",
"Spot on: Action localization from pointly-supervised proposals",
"Learning deconvolution network for semantic segmentation",
"Multi-region two-stream r-cnn for action detection",
"Temporal action detection using a statistical language model",
"Two-stream convolutional networks for action recognition in videos",
"A multi-stream bi-directional recurrent neural network for fine-grained action detection",
"First person action recognition using deep learned descriptors",
"Combining embedded accelerometers with computer vision for recognizing food preparation activities",
"Surgical gesture segmentation and recognition",
"Learning spatiotemporal features with 3d convolutional networks",
"Action recognition with trajectory-pooled deep-convolutional descriptors",
"Every moment counts: Dense detailed labeling of actions in complex videos",
"End-to-end learning of action detection from frame glimpses in videos",
"Procnets: Learning to segment procedures in untrimmed and unconstrained videos"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [],
"affiliation": []
},
{
"name": [
"james cross",
"liang huang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pradipto das",
"chenliang xu",
"richard doell",
"jason j corso"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alireza fathi",
"xiaofeng ren",
"james m rehg"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yixin gao",
"swaroop vedula",
"carol e reiley",
"narges ahmidi",
"balakrishnan varadarajan",
"lingling henry c lin",
"luca tao",
"benjamın zappella",
"david d béjar",
" yuh"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex graves",
"santiago fernández",
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" de-an",
"li huang",
"juan carlos fei-fei",
" niebles"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik kingma",
"jimmy ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"hilde kuehne",
"juergen gall",
"thomas serre"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"colin lea",
"austin reiter",
"rené vidal",
"gregory d hager"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"colin lea",
"rené vidal",
"gregory d hager"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"colin lea",
"rené vidal",
"austin reiter",
"gregory d hager"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"colin lea",
"rene michael d flynn",
"austin vidal",
"gregory d reiter",
" hager"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yanghao li",
"cuiling lan",
"junliang xing",
"wenjun zeng",
"chunfeng yuan",
"jiaying liu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pascal mettes",
"jan c van gemert",
"g m cees",
" snoek"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"hyeonwoo noh",
"seunghoon hong",
"bohyung han"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"xiaojiang peng",
"cordelia schmid"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alexander richard",
"juergen gall"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"karen simonyan",
"andrew zisserman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"bharat singh",
"tim k marks",
"michael jones",
"oncel tuzel",
"ming shao"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"suriya singh",
"chetan arora",
" jawahar"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sebastian stein",
"stephen j mckenna"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"lingling tao",
"luca zappella",
"gregory d hager",
"rené vidal"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"du tran",
"lubomir bourdev",
"rob fergus",
"lorenzo torresani",
"manohar paluri"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"limin wang",
"yu qiao",
"xiaoou tang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"serena yeung",
"olga russakovsky",
"ning jin",
"mykhaylo andriluka",
"greg mori",
"li fei-fei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"serena yeung",
"olga russakovsky",
"greg mori",
"li fei-fei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"l zhou",
"c xu",
"j j corso"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"1607.08584v1",
"1412.6980v9",
"",
"",
"",
"1608.08242v1",
"1611.05267v1",
"",
"",
"1505.04366v1",
"",
"",
"",
"",
"",
"",
"",
"1412.0767v4",
"",
"1507.05738v3",
"",
"arXiv:1703.09788"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.259259 | 0.916667 | null | null | null | null | null | r1nzLmWAb |
||
chuang|sicgan_a_selfimproving_collaborative_gan_for_decoding_sketch_rnns|ICLR_cc_2018_Conference | SIC-GAN: A Self-Improving Collaborative GAN for Decoding Sketch RNNs | Variational RNNs are proposed to output “creative” sequences. Ideally, a collection of sequences produced by a variational RNN should be of both high quality and high variety. However, existing decoders for variational RNNs suffer from a trade-off between quality and variety. In this paper, we seek to learn a variational RNN that decodes high-quality and high-variety sequences. We propose the Self-Improving Collaborative GAN (SIC-GAN), where there are two generators (variational RNNs) collaborating with each other to output a sequence and aiming to trick the discriminator into believing the sequence is of good quality. By deliberately weakening one generator, we can make another stronger in balancing quality and variety. We conduct experiments using the QuickDraw dataset and the results demonstrate the effectiveness of SIC-GAN empirically. | {
"name": [],
"affiliation": []
} | null | [
"RNNs",
"GANs",
"Variational RNNs",
"Sketch RNNs"
] | null | 2018-02-15 22:29:41 | 25 | null | null | null | null | null | null | null | null | false | Pros and cons of the paper can be summarized as follows:
Pros:
* The underlying idea may be interesting
* Results are reasonably strong on the test set used
Cons:
* Testing on only a single dataset indicates that the model may be of limited applicability
* As noted by reviewer 2, core parts of the paper are extremely difficult to understand, and the author response did little to assuage these concerns
* There is little mathematical notation, which compounds the problems of clarity
After reading the method section of the paper, I agree with reviewer 2: there are serious clarity issues here. As a result, I cannot recommend that this paper be accepted to ICLR in its current form. I would suggest the authors define their method precisely in mathematical notation in a future submission. | {
"review_id": [
"Skie1qFxM",
"SJK75ZFef",
"ByfDSjtlf"
],
"review": [
{
"title": "title: A novel architecture for generating greater variety on QuickDraw dataset, but seems confused about what it's actually doing.",
"paper_summary": null,
"main_review": "main_review: This paper baffles me. It appears to be a stochastic RNN with skip connections (so it's conditioned on the last two states rather than last one) trained by an adversarial objective (which is no small feat to make work for sequential tasks) with results shown on the firetruck category of the QuickDraw dataset. Yet the authors claim significantly more importance for the work than I think it merits.\n\nFirst, there is nothing variational about their variational RNN. They seem to use the term to be equivalent to \"stochastic\", \"probabilistic\" or \"noisy\" rather than having anything to do with optimizing a variational bound. To strike the right balance between pretension and accuracy, I would suggest substituting the word \"stochastic\" everywhere \"variational\" is used.\n\nSecond, there is nothing self-improving or collaborative about their self-improving collaborative GAN. Once the architecture is chosen to share the weights between the weak and strong generator, the only difference between the two is that the weak generator has greater noise at the output. In this sense the architecture should really be seen as a single model with different noise levels at alternating steps. In this sense, I am not entirely clear on what the difference is between the SIC-GAN and their noisy GAN baseline - presumably the only difference is that the noisy GAN is conditioned on a single timestep instead of two at a time? The claim that these models are somehow \"self-improving\" baffles me as well - all machine learning models are self-improving, that is the point of learning. The authors make a comparison to AlphaGo Zero's use of self-play, but here the weak and strong generators are on the same side of the game, and because there are no game rules provided beyond \"reproduce the training set\", there is no possibility of discovery beyond what is human-provided, contrary to the authors' claim.\n\nThird, the total absence of mathematical notation made it hard in places to follow exactly what the models were doing. While there are plenty of papers explaining the GAN framework to a novice, at least some clear description of the baseline architectures would be appreciated (for instance, a clearer explanation of how the SIC-GAN differs from the noisy GAN). Also the description of the soft $\\ell_1$ loss (which the authors call the \"1-loss\" for some reason) would benefit from a clearer mathematical exposition.\n\nFourth, the experiments seem too focused on the firetruck category of the QuickDraw dataset. As it was the only example shown, it's difficult to evaluate their claim that this is a general method for improving variety without sacrificing quality. Their chosen metrics for variety and detail are somewhat subjective, as they depend on the fact that some categories in the QuickDraw dataset resemble firetrucks in the fine detail while others resemble firetrucks in outline. This is not a generalizable metric. Human evaluation of the relative quality and variety would likely suffice.\n\nLastly, the entire section on the strong-weak collaborative GAN seems to add nothing. 
They describe an entire training regiment for the model, yet never provide any actual experimental results using that model, so the entire section seems only to motivate the SIC-GAN which, again, seems like a fairly ordinary architectural extension to GANs with RNN generators.\n\nThe results presented on QuickDraw do seem nice, and to the best of my knowledge it is the first (or at least best) applications of GANs to QuickDraw - if they refocused the paper on GAN architectures for sketching and provided more generalizable metrics of quality and variety it could be made into a good paper.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting idea, but requires more datasets to show.",
"paper_summary": null,
"main_review": "main_review: The paper proposed a method that tries to generate both accurate and diverse samples from RNNs. \nI like the basic intuition of this paper, i.e., using mistakes for creativity and refining on top of it. I also think the evaluation is done properly. I think my biggest concern is that the method was only tested on a single dataset hence it is not convincing enough. Also on this particular dataset, the method does not seem to strongly dominate the other methods. Hence it's not clear how much better this method is compared to previously proposed ones.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Very interesting approach for training a sequential generator adversarially",
"paper_summary": null,
"main_review": "main_review: Overall the paper is good: good motivation, insight, the model makes sense, and the experiments / results are convincing. I would like to see some evidence though that the strong generator is doing exactly what is advertised: that it’s learning to clean up the mistakes from variation. Can we have some sort of empirical analysis that what you say is true? \n\nThe writing grammar quality fluctuates. Please clean up.\n\nDetailed notes\nP1:\nWhy did you pass on calling it Self-improving collaborative adversarial learning (SICAL)?\nI’m very surprised you don’t mention VAE RNN here (Chung et al 2015) along with other models that leverage an approximate posterior model of some sort.\n\nP2:\nWhat about scheduled sampling?\nIs the quality really better? How do you quantify that? To me the ones at the bottom of 2(c) are of both lower quality *and* diversity.\n“Figure 2(d) displays human-drawn sketches of fire trucks which demonstrate that producing sequences–in this case sketches–with both quality and variety is definitely achievable in real-world applications”: I’m not sure I follow this argument. Because people can do it, ML should be able to?\n\nP3:\n“Recently, studies start to apply GANs to generate the sequential output”: fix this\nGrammar takes a brief nose-dive around here, making it a little harder to read.\nCaption: “bean search”\nChe et al also uses something close to Reinforcement learning for discrete sequences.\n“nose-injected”: now you’re just being silly\nMaybe cite Bahdanau et al 2016 “An actor-critic algorithm for sequence prediction”\n“does not require any variety reward/measure to train” What about the discriminator score (MaliGAN / SeqGAN)? Could this be a simultaneous variety + quality reward signal? If the generator is either of poor-quality or has low variety, the discriminator could easily distinguish its samples from the real ones, no?\n\nP6:\nDid you pass only the softmax values to the discriminator?\n\nP7:\nI like the score scheme introduced here. Do you see any connection to inception score?\nSo compared to normal GAN, does SIC-GAN have more parameters (due to the additional input)? If so, did you account for this in your experiments?",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.4444444477558136,
0.6666666865348816
],
"confidence": [
1,
0.5,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Thanks",
"Thanks.",
"Thanks."
],
"comment": [
"We thank the reviewer for the positive comments. We have fixed the typos and grammar issues in the new version and cited more relevant work including the Bahdanau et al. 2016 “An actor-critic algorithm for sequence prediction” and the VAE RNN by Chung et al. 2015. Following is our reply to your specific comments: \n\nQ: Did you pass only the softmax values to the discriminator? \nA: No, we pass the mean and variance of each point generated by the Sketch RNN to the discriminator. \n\nQ: So compared to normal GAN, does SIC-GAN have more parameters (due to the additional input)?\nA: No, the parameter numbers of the unfolded Noisy GAN and SIG-GAN are roughly the same.\n\nQ: “Figure 2(d) displays human-drawn sketches of fire trucks which demonstrate that producing sequences–in this case sketches–with both quality and variety is definitely achievable in real-world applications”: I’m not sure I follow this argument. Because people can do it, ML should be able to?\nA: You are right. Here we just want to emphasize that the quality and variety is both achievable “by humans.” We have corrected the sentence in the paper. \n\nQ: Discriminator in MaliGAN/SeqGAN a simultaneous variety + quality reward signal?\nA: Yes it is. The MaliGAN/SeqGAN is proposed for the RNNs with the discrete output. While we use the continuous Sketch-RNN to demonstrate the Strong-Weak Collaborative GAN and SIC-GAN, the ideas could be readily applied to discrete cases. This is our future work.",
"We thank the reviewer for constructive comments. \n\nQ: I think my biggest concern is that the method was only tested on a single dataset...\nA: Thanks. Following the suggestion of reviewer 2, we have changed the paper title to Sketch RNN so we believe this is no longer a concern.",
"We thank the reviewer for constructive comments. Following is our reply:\n\nQ: Once the architecture is chosen to share the weights between the weak and strong generator, ...it appears to be a stochastic RNN with skip connections (so it's conditioned on the last two states rather than last one) trained by an adversarial objective...\nA: We are sorry for not describing the “tying” precisely. It is done in a soft manner; that is, we add a loss term for the weak generator that require its parameters to be similar to those of the strong generator. Please see Section 4 for more details. Actually, the extra input taken by the strong generator is not necessary and are not implemented. We just described it for the cases when the hyperparameter of the term is high. We have remove the irrelevant sentences to avoid confusion.\n\nQ: I am not entirely clear on what the difference is between the SIC-GAN and their noisy GAN baseline...\nA: The noisy GAN just weakens the ordinary RNN generator of the naive GAN to achieve the “covering” effect similar to that of the strong-weak collaborative GAN—if a point is made bad, the RNN may learn to generate better points at later time steps in order to fool the discriminator. However, the “next points” in the weakened RNN are made bad too (since the entire RNN is weakened) and may not be able to actually cover the previous point. To fool the discriminator in such a situation, the RNN may instead learn to output points that, after being weakened, are more easily “covered” by the future (bad) points. In effect, this makes the RNN conservative to generating novel sequence and reduces variety. We call this the covering-or-covered paradox. On the other hand, once trained, the strong generator in the strong-weak collaborative GAN is used to generate an entire sequence. This means that the strong generator should have enough based temperature (or noise level) to ensure the variety. One naive way to do so is to add a base-temperature to both the strong and weak generators during the training time. However, the strong generator faces the covering-or-covered paradox now and may learn to be conservative. We can instead train the strong-weak collaborative GAN multiple times using a self-improving technique. We start by adding a low base-temperature to both the strong and weak generators and train them in the first phase. Then we set the weak generator in the next phase as the strong one we get from the previous phase and train the generators with increased base-temperature. We then repeat this process until the target base-temperature is reached. We call the process “self-improving” because the strong generator in the next phase learns to cover itself in the previous phase. It is important to note that in a later phase, the weak generator is capable of covering the negative effect due to the variety of the strong generator (because that weak generator is a strong generator in the previous phase). So, the strong generator in the current phase can focus on the “covering” rather than “covered,” preventing the final RNN from being conservative. \n\nQ: ...because there are no game rules provided beyond “reproduce the training set,” there is no possibility of discovery beyond what is human-provided, contrary to the authors' claim.\nA: The generator can exploit up to the generalizability of the generator."
]
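Editor's note: the author response above describes a soft tying of the weak generator's parameters to the strong generator's. The sketch below is a minimal, hypothetical formulation of such a loss (a squared-difference penalty with weight lambda_tie); it is an editor's guess for illustration, not the paper's actual loss, and the GRU stand-ins and the weight value are placeholders.

```python
import torch


def soft_tying_loss(strong_params, weak_params, lambda_tie=0.1):
    """Penalize squared differences between corresponding parameter tensors.

    The strong generator's parameters are detached, so the gradient of this
    term only updates the weak generator (a loss "for the weak generator").
    """
    loss = torch.zeros(())
    for p_strong, p_weak in zip(strong_params, weak_params):
        loss = loss + ((p_weak - p_strong.detach()) ** 2).sum()
    return lambda_tie * loss


# Toy usage with two identically shaped recurrent generators.
strong = torch.nn.GRU(input_size=8, hidden_size=16)
weak = torch.nn.GRU(input_size=8, hidden_size=16)
tie = soft_tying_loss(strong.parameters(), weak.parameters())
tie.backward()   # gradients flow into `weak` only
print(float(tie))
```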
} | {
"paperhash": [
"bahdanau|an_actor-critic_algorithm_for_sequence_prediction",
"bengio|scheduled_sampling_for_sequence_prediction_with_recurrent_neural_networks",
"bengio|curriculum_learning",
"boulanger-lewandowski|high-dimensional_sequence_transduction",
"che|maximum-likelihood_augmented_discrete_generative_adversarial_networks",
"cho|noisy_parallel_approximate_decoding_for_conditional_recurrent_language_model",
"chung|a_recurrent_latent_variable_model_for_sequential_data",
"germann|greedy_decoding_for_statistical_machine_translation_in_almost_linear_time",
"goyal|professor_forcing:_a_new_algorithm_for_training_recurrent_networks",
"graves|sequence_transduction_with_recurrent_neural_networks",
"graves|generating_sequences_with_recurrent_neural_networks",
"gu|trainable_greedy_decoding_for_neural_machine_translation",
"gu|neural_machine_translation_with_gumbel-greedy_decoding",
"gulrajani|improved_training_of_wasserstein_gans",
"ha|a_neural_representation_of_sketch_drawings",
"kim|convolutional_neural_networks_for_sentence_classification",
"kusner|gans_for_sequences_of_discrete_elements_with_the_gumbel-softmax_distribution",
"langlais|a_greedy_decoder_for_phrase-based_statistical_machine_translation",
"li|a_simple,_fast_diverse_decoding_algorithm_for_neural_generation",
"papineni|bleu:_a_method_for_automatic_evaluation_of_machine_translation",
"silver|mastering_the_game_of_go_with_deep_neural_networks_and_tree_search",
"silver|george_van_den_driessche,_thore_graepel,_and_demis_hassabis._mastering_the_game_of_go_without_human_knowledge",
"sutskever|sequence_to_sequence_learning_with_neural_networks",
"yu|seqgan:_sequence_generative_adversarial_nets_with_policy_gradient",
"zhang|generating_text_via_adversarial_training"
],
"title": [
"An actor-critic algorithm for sequence prediction",
"Scheduled sampling for sequence prediction with recurrent neural networks",
"Curriculum learning",
"High-dimensional sequence transduction",
"Maximum-likelihood augmented discrete generative adversarial networks",
"Noisy parallel approximate decoding for conditional recurrent language model",
"A recurrent latent variable model for sequential data",
"Greedy decoding for statistical machine translation in almost linear time",
"Professor forcing: A new algorithm for training recurrent networks",
"Sequence transduction with recurrent neural networks",
"Generating sequences with recurrent neural networks",
"Trainable greedy decoding for neural machine translation",
"Neural machine translation with gumbel-greedy decoding",
"Improved training of wasserstein gans",
"A neural representation of sketch drawings",
"Convolutional neural networks for sentence classification",
"GANS for sequences of discrete elements with the gumbel-softmax distribution",
"A greedy decoder for phrase-based statistical machine translation",
"A simple, fast diverse decoding algorithm for neural generation",
"Bleu: a method for automatic evaluation of machine translation",
"Mastering the game of go with deep neural networks and tree search",
"George van den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of go without human knowledge",
"Sequence to sequence learning with neural networks",
"Seqgan: Sequence generative adversarial nets with policy gradient",
"Generating text via adversarial training"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"dzmitry bahdanau",
"philemon brakel",
"kelvin xu",
"anirudh goyal",
"ryan lowe",
"joelle pineau",
"aaron courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"samy bengio",
"oriol vinyals",
"navdeep jaitly",
"noam shazeer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoshua bengio",
"jérôme louradour",
"ronan collobert",
"jason weston"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nicolas boulanger-lewandowski",
"yoshua bengio",
"pascal vincent"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yanran tong che",
"ruixiang li",
"r devon zhang",
"wenjie hjelm",
"yangqiu li",
"yoshua song",
" bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kyunghyun cho"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"junyoung chung",
"kyle kastner",
"laurent dinh",
"kratarth goel",
"aaron c courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ulrich germann"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"anirudh goyal",
"alex lamb",
"ying zhang",
"saizheng zhang",
"aaron c courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex graves"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex graves"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jiatao gu",
"kyunghyun cho",
"o k victor",
" li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jiatao gu",
"daniel jiwoong im",
"o k victor",
" li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ishaan gulrajani",
"faruk ahmed",
"martín arjovsky",
"aaron c vincent dumoulin",
" courville"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david ha",
"douglas eck"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoon kim"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"matt j kusner",
"josé miguel hernández-lobato"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"philippe langlais",
"alexandre patry",
"fabrizio gotti"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jiwei li",
"will monroe",
"dan jurafsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kishore papineni",
"salim roukos",
"todd ward",
"wei-jing zhu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david silver",
"aja huang",
"chris j maddison",
"arthur guez",
"laurent sifre",
"george van den driessche",
"julian schrittwieser",
"ioannis antonoglou",
"vedavyas panneershelvam",
"marc lanctot",
"sander dieleman",
"dominik grewe",
"john nham",
"nal kalchbrenner",
"ilya sutskever",
"timothy p lillicrap",
"madeleine leach",
"koray kavukcuoglu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david silver",
"julian schrittwieser",
"karen simonyan",
"ioannis antonoglou",
"aja huang",
"arthur guez",
"thomas hubert",
"lucas baker",
"matthew lai",
"adrian bolton",
"yutian chen",
"timothy lillicrap",
"fan hui",
"laurent sifre"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ilya sutskever",
"oriol vinyals",
"v quoc",
" le"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"lantao yu",
"weinan zhang",
"jun wang",
"yong yu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yizhe zhang",
"zhe gan",
"lawrence carin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"arXiv:1607.07086",
"1506.03099v3",
"",
"",
"",
"1605.03835v1",
"1506.02216v6",
"",
"1610.09038v1",
"1211.3711v1",
"1308.0850v5",
"1702.02429v1",
"",
"1704.00028v3",
"1704.03477v4",
"1408.5882v2",
"",
"",
"1611.08562v2",
"",
"",
"",
"1409.3215v3",
"1609.05473v6",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.481481 | 0.666667 | null | null | null | null | null | r1nmx5l0W |
||
holtzman|learning_to_write_by_learning_the_objective|ICLR_cc_2018_Conference | Learning to Write by Learning the Objective | Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models for learning prevalent patterns in natural language. Yet language generated by RNNs often shows several degenerate characteristics that are uncommon in human language; while fluent, RNN language production can be overly generic, repetitive, and even self-contradictory. We postulate that the objective function optimized by RNN language models, which amounts to the overall perplexity of a text, is not expressive enough to capture the abstract qualities of good generation such as Grice’s Maxims. In this paper, we introduce a general learning framework that can construct a decoding objective better suited for generation. Starting with a generatively trained RNN language model, our framework learns to construct a substantially stronger generator by combining several discriminatively trained models that can collectively address the limitations of RNN generation. Human evaluation demonstrates that text generated by the resulting generator is preferred over that of baselines by a large margin and significantly enhances the overall coherence, style, and information content of the generated text. | {
"name": [],
"affiliation": []
} | We build a stronger natural language generator by discriminatively training scoring functions that rank candidate generations with respect to various qualities of good writing. | [
"natural language generation"
] | null | 2018-02-15 22:29:19 | 43 | null | null | null | null | null | null | null | null | false | I (and some of the reviewers) find the general motivation quite interesting (operationalizing the Gricean maxims in order to improve language generation). However, we are not convinced that the actual model encodes these maxims in a natural and proper way. Without this motivation, the approach can be regarded as a set of heuristics which happen to be relatively effective on a couple of datasets. In other words, the work seems too preliminary to be accepted at the conference.
Pros:
-- Interesting motivation (and potential impact on follow-up work)
-- Good results on a number of datasets
Cons:
-- The actual approach can be regarded as a set of heuristics, not necessarily following from the maxims
-- More serious evaluation needed (e.g., image captioning or MT) and potentially better ways of encoding the maxims
It is suitable for the workshop track, as it is likely to stimulate an interesting discussion and more convincing follow-up work.
| {
"review_id": [
"HkN9lyRxG",
"ByWqV4YlG",
"BJFJrHcgz"
],
"review": [
{
"title": "title: Neat contribution that integrates previous work",
"paper_summary": null,
"main_review": "main_review: This paper proposes to bring together multiple inductive biases that hope to correct for inconsistencies in sequence decoding. Building on previous works that utilize modified objectives to generate sequences, this work proposes to optimize for the parameters of a pre-defined combination of various sub-objectives. The human evaluation is straight-forward and meaningful to compensate for the well-known inaccuracies of automatic evaluation. \n\nWhile the paper points out that they introduce multiple inductive biases that are useful to produce human-like sentences, it is not entirely correct that the objective is being learnt as claimed in portions of the paper. I would like this point to be clarified better in the paper. \n\nI think showing results on grounded generation tasks like machine translation or image-captioning would make a stronger case for evaluating relevance. I would like to see comparisons on these tasks. \n\n---- \nAfter reading the paper in detail again and the replies, I am downgrading my rating for this paper. While I really like the motivation and the evaluation proposed by this work, I believe that fixing the mismatch between the goals and the actual approach will make for a stronger work. \n\nAs pointed out by other reviewers, while the goals and evaluation seem to be more aligned with Gricean maxims, some components of the objective are confusing. For instance, the length penalty encourages longer sentences violating quantity, manner (be brief) and potentially relevance. Further, the repetition model address the issue of RNNs failing to capture long-term contextual dependencies -- how much does such a modified objective affect models with attention / hierarchical models is not clear from the formulation. \n\nAs pointed out in my initial review evaluation of relevance on the current task is not entirely convincing. A very wide variety of topics are feasible for a given context sentence. Grounded generation like MT / captioning would have been a more convincing evaluation. For example, Wu et al. (and other MT works) use a coverage term and this might be one of the indicators of relevance. \n\nFinally, I am not entirely convinced by the update regarding \"learning the objective\". While I agree with the authors that the objective function is being dynamically updated, the qualities of good language is encoded manually using a wide variety of additional objectives and only the relative importance of each of them is learnt. ",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: This paper combines RNN language model with several discriminatively trained models to improve the language generation. I like the idea of using Grice’s Maxims of communication to improve the language generation. However, some parts need to be further clarified and it would be nice to see more related analysis. ",
"paper_summary": null,
"main_review": "main_review: This paper argues that the objective of RNN is not expressive enough to capture the good generation quality. In order to address the problems of RNN in generating languages, this paper combines the RNN language model with several other discriminatively trained models, and the weight for each sub model is learned through beam search. \n\nI like the idea of using Grice’s Maxims of communication to improve the language generation. Human evaluation shows significant improvement over the baseline. I have some detailed comments as follows:\n\n- The repetition model uses the samples from the base RNNs as negative examples. More analysis is needed to show it is a good negative sampling method.\n\n- As Section 3.2.3 introduced, “the unwanted entailment cases include repetitions and paraphrasing”. Does it mean the entailment model also handles repetition problem? Do we still need a separate repetition model? How about a separate paraphrasing model?\n\n- Equation 6 and the related text are not very clearly represented. It would be better to add more intuition and better explained. \n\n- In the Table 2, the automated bleu scores of L2W algorithm for Tripadvisor is very low (0.34 against 24.11). Is this normal? More explanation is needed here.\n\n- For human judgement, how many scores does each example get? It would be better to get multiple workers on M-Turk to label the same example, and compute the mean and variance. One score per example may not be reliable. \n\n- It would be interesting to see deeper analysis about how each model in the objectives influence the actual language generation.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Well-motivated goals, but the methods don't achieve them.",
"paper_summary": null,
"main_review": "main_review: This paper proposes to improve RNN language model generation using augmented objectives inspired by Grice's maxims of communication. The idea is to combine the standard word-by-word decoding objective with additional objectives that reward sentences following these maxims. The proposed decoding objective is not new; reseachers in machine translation \n have worked on it referring to it as loss-augmented decoding: http://www.cs.cmu.edu/~nasmith/papers/gimpel+smith.naacl12.pdf\nThe use of RNNs in this context might be novel though.\n\nPros:\n- Well-motivated and ambitious goals\n\n- Human evaluation conducted on the outputs.\n\nCons:\n- My main concern is that it is unclear whether the models introduced are indeed implementing the Gricean maxims. For eaxample, the repetition model would not only discourage the same word occurring twice, but also a similar word (according to the word vectors used) to follow another one. \n\n- Similary, for the entailment model, what is an \"obvious\" entailment\"? Not sure we have training data for this in particular. Also, entailment suggests textual cohesion, which is conducive to the relation maxim. If this kind of model is what we need, why not take a state-of-the-art model?\n\n- The results seem to be inconsistent. The working vocabulary doesn't help in the tripAdvior experiment, while the RNN seems to work very well on the ROCstory data. While there might be good reasons for these, the point for me is that we cannot trust that the models added to the objective do what they are supposed to do.\n\n- Are the negative examples generated for the repetition model checked that they contain repetitions? Shouldn't be difficult to do. \n\n- Would be better to give the formula for the length model, the description is intuition but it is difficult to know exactly what the objective is\n\n- In algorithm 1, it seems like we fix in advance the max length of the sentence (max-step). Is this the case? If so why? Also, the proposed learning algorithm only learns how to mix pre-trained models, not sure I agree they learn the objective. It is more of an ensembling.\n\n- As far as I can tell these ideas could have been more simply implemented by training a re-ranker to score the n-best outputs of the decoder. Why not try it? They are very popular in text generation tasks.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.4444444477558136,
0.3333333432674408
],
"confidence": [
1,
0.75,
1
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Thank you for your detailed feedback.",
"Thank you for your concrete and constructive feedback.",
"Thank you for your feedback and suggestions. ",
"Response to author response"
],
"comment": [
"Regarding our approach to implementing Grice's maxims:\n\n- Our repetition module is trained to recognize both exact repetitions and repetitions involving lexical paraphrases, as indicated by the cosine similarity between word embeddings. We believe that this is a more robust approach than placing hard constraints on repetition, and has the advantage that the model can learn to distinguish between desirable and undesirable similarity patterns in human-produced and machine-produced text. \n\n- For the entailment module, while there is a risk that relevant sentences will be penalized, in the training data most of the entailments are direct enough that they are not likely to occur in writing, while the neural class training examples still often contains relevant information. \nIn terms of the model, we chose to use a lightweight bag-of-words model for time and memory efficiency reasons (as it is expensive to do pairwise sentence comparisons to compute the entailment scores), even though a state-of-the-art model is likely to somewhat increase the performance. \n\nWe added an analysis to the paper of the frequency of repetitions in the training data, finding that they indeed occur more frequently in the samples from the language model, which are used as negative examples for training the repetition model, than in the reference endings. \n\nWe added an equation in description of the length module in order to clarify its objective. \n\nThe purpose of the maximum length restriction is simply to guarantee that the beam-search will terminate. In practice the generated sequence (which is the highest-scoring sequence ending with the termination token) is always shorter than the maximum length allowed. \n\nAs suggested, we include a reranker baseline in the results to re-score the n-best outputs after doing beam search decoding using only the language model: We found that it performs much worse due to a lack of diversity in the beam.\n\nWe added more details in the paper to support the claim that the objective is being learned. The scoring function learned in one stage informs the objective in the following stages. First, the expert classifiers are learned to improve the language model by using samples from the language model as negative training data. Subsequently, these expert classifiers are combined in a mixed objective where the weights of the classifiers are learned discriminatively. As a result, the overall objective function for training the generator changes dynamically as the mixture weights are updated because the objective itself depends on those weights. The mixture weights are learned to optimize a discriminative objective, which updates the overall generation objective; this in turn changes the discriminative objective for the next training iteration. ",
"We added an analysis to the paper of the frequency of repetitions in the training data, finding that they indeed occur more frequently in the samples from the language model, which are used as negative examples for training the repetition model, than in the reference endings. \n\nEntailment examples in our training data are often but not always a form of paraphrasing, but usually not instances of direct repetition. Therefore we believe that we still need a seperate repetition model to handle more direct repetitions at a lexical level. A separate paraphrasing model is an interesting suggestion for future work, although we believe that the repetition and entailment models together are able to capture most of the paraphrases we are aiming to avoid.\n\nWe improved the description of the entailment score formulation (eq 6). \n\nThe very low BLEU scores observed in our results in the TripAdvisor domain are an artifact of the BLEU metric’s length penalty. The average length of reference completions is 12 sentences, which is much longer than the average length of endings generated by our Learning to Write models. This forces the BLEU score's length penalty to drive down the scores, despite our observation that there is still a significant amount of word and phrase overlap. The completions generated by the base language model are longer on average (as it tends to repeat itself over and over) and therefore do not suffer from this problem. \n\nWhile we agree that more labels per example will be valuable, we believe that the the test sets (of 1000 examples per domain) are large enough that to obtain a reasonably accurate aggregate score, despite the fact that not all of the individual annotations will be reliable.",
" We added more details in the paper to support the claim that the objective is being learned. The scoring function learned in one stage informs the objective in the following stages. First, the expert classifiers are learned to improve the language model by using samples from the language model as negative training data. Subsequently, these expert classifiers are combined in a mixed objective where the weights of the classifiers are learned discriminatively. As a result, the overall objective function for training the generator changes dynamically as the mixture weights are updated because the objective itself depends on those weights. The mixture weights are learned to optimize a discriminative objective, which updates the overall generation objective; this in turn changes the discriminative objective for the next training iteration. \n\nThe recommendation to tackle grounded language tasks is a great suggestion, and we are eager to explore this avenue for future work. We believe incorporating grounding introduces novel challenges and so falls out of the scope of this paper, which we have scoped to focus on open-ended, ungrounded generation. \n",
"While the paper was improved, it didn't address my main concern, that it is unclear whether the model really implements Gricean maxims. Assessing the repetitions and finding that the language model repeats slightly more often is not much evidence in my opinion. Also, the re-ranker should be trained on appropriately generated data, as it happens with the approach proposed. Thus my assessment of the paper remains the same."
]
} | {
"paperhash": [
"andrychowicz|learning_to_learn_by_gradient_descent_by_gradient_descent",
"diederik|adam:_a_method_for_stochastic_optimization",
"li|deep_reinforcement_learning_for_dialogue_generation",
"paulus|a_deep_reinforced_model_for_abstractive_summarization",
"snoek|practical_bayesian_optimization_of_machine_learning_algorithms",
"wu|google's_neural_machine_translation_system:_bridging_the_gap_between_human_and_machine_translation",
"zoph|neural_architecture_search_with_reinforcement_learning",
"inan|tying_word_vectors_and_word_classifiers:_a_loss_framework_for_language_modeling",
"jozefowicz|exploring_the_limits_of_language_modeling",
"krause|a_hierarchical_approach_for_generating_descriptive_image_paragraphs",
"pascanu|on_the_difficulty_of_training_recurrent_neural_networks",
"ashwin|diverse_beam_search:_decoding_diverse_solutions_from_neural_sequence_models",
"yu|the_neural_noisy_channel"
],
"title": [
"Learning to learn by gradient descent by gradient descent",
"Published as a conference paper at ICLR 2015 ADAM: A METHOD FOR STOCHASTIC OPTIMIZATION",
"Deep Reinforcement Learning for Dialogue Generation",
"A Deep Reinforced Model for Abstractive Summarization",
"",
"Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation",
"Under review as a conference paper at ICLR 2017 NEURAL ARCHITECTURE SEARCH WITH REINFORCEMENT LEARNING",
"TYING WORD VECTORS AND WORD CLASSIFIERS: A LOSS FRAMEWORK FOR LANGUAGE MODELING",
"Exploring the Limits of Language Modeling",
"SimCVD: Simple Contrastive Voxel-Wise Representation Distillation for Semi-Supervised Medical Image Segmentation",
"On the difficulty of training Recurrent Neural Networks",
"DIVERSE BEAM SEARCH: DECODING DIVERSE SOLUTIONS FROM NEURAL SEQUENCE MODELS",
"THE NEURAL NOISY CHANNEL"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"marcin andrychowicz",
"misha denil",
"sergio gómez colmenarejo",
"matthew w hoffman",
"david pfau",
"tom schaul",
"brendan shillingford",
"nando de freitas",
"google deepmind"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "University of Oxford",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Oxford",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik p kingma",
"jimmy lei ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jiwei li",
"will monroe",
"alan ritter",
"michel galley",
"jianfeng gao",
"dan jurafsky"
],
"affiliation": [
{
"laboratory": "",
"institution": "Stanford University",
"location": "{'settlement': 'Stanford', 'region': 'CA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{'settlement': 'Stanford', 'region': 'CA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Ohio State University",
"location": "{'region': 'OH', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{'settlement': 'Redmond', 'region': 'WA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Microsoft Research",
"location": "{'settlement': 'Redmond', 'region': 'WA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Stanford University",
"location": "{'settlement': 'Stanford', 'region': 'CA', 'country': 'USA'}"
}
]
},
{
"name": [
"romain paulus",
"caiming xiong",
"richard socher"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jasper snoek",
"hugo larochelle",
"ryan p adams"
],
"affiliation": [
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{}"
}
]
},
{
"name": [
"yonghui wu",
"mike schuster",
"zhifeng chen",
"quoc v le",
"mohammad norouzi",
"wolfgang macherey",
"maxim krikun",
"yuan cao",
"qin gao",
"klaus macherey",
"jeff klingner",
"apurva shah",
"melvin johnson",
"xiaobing liu",
"łukasz kaiser",
"stephan gouws",
"yoshikiyo kato",
"taku kudo",
"hideto kazawa",
"keith stevens",
"george kurian",
"nishant patil",
"wei wang",
"cliff young",
"jason smith",
"jason riesa",
"alex rudnick",
"oriol vinyals",
"greg corrado",
"macduff hughes",
"jeffrey dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"barret zoph",
"quoc v le google brain"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"hakan inan",
"khashayar khosravi",
"richard socher"
],
"affiliation": [
{
"laboratory": "",
"institution": "Stanford University Stanford",
"location": "{'region': 'CA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Stanford University Stanford",
"location": "{'region': 'CA', 'country': 'USA'}"
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rafal jozefowicz",
"mike schuster",
"noam shazeer",
"yonghui wu",
"google brain"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chenyu you",
"yuan zhou",
"ruihan zhao",
"lawrence h staib",
"james s duncan"
],
"affiliation": [
{
"laboratory": "",
"institution": "Yale Univer-sity",
"location": "{'postCode': '06510', 'settlement': 'New Haven', 'region': 'CT', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Yale Univer-sity",
"location": "{'postCode': '06510', 'settlement': 'New Haven', 'region': 'CT', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Yale Univer-sity",
"location": "{'postCode': '06510', 'settlement': 'New Haven', 'region': 'CT', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Yale Univer-sity",
"location": "{'postCode': '06510', 'settlement': 'New Haven', 'region': 'CT', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Yale Univer-sity",
"location": "{'postCode': '06510', 'settlement': 'New Haven', 'region': 'CT', 'country': 'USA'}"
}
]
},
{
"name": [
"razvan pascanu",
"tomas mikolov",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": "",
"institution": "Universite de Montreal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Brno University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Universite de Montreal",
"location": "{}"
}
]
},
{
"name": [
"ashwin k vijayakumar",
"michael cogswell",
"ramprasath r selvaraju",
"qing sun",
"stefan lee",
"david crandall",
"dhruv batra"
],
"affiliation": [
{
"laboratory": "",
"institution": "Virginia Tech",
"location": "{'settlement': 'Blacksburg', 'region': 'VA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Virginia Tech",
"location": "{'settlement': 'Blacksburg', 'region': 'VA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Virginia Tech",
"location": "{'settlement': 'Blacksburg', 'region': 'VA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Virginia Tech",
"location": "{'settlement': 'Blacksburg', 'region': 'VA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Virginia Tech",
"location": "{'settlement': 'Blacksburg', 'region': 'VA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Indiana University",
"location": "{'settlement': 'Bloomington', 'region': 'IN', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Virginia Tech",
"location": "{'settlement': 'Blacksburg', 'region': 'VA', 'country': 'USA'}"
}
]
},
{
"name": [
"lei yu",
"phil blunsom",
"chris dyer",
"edward grefenstette",
"tomáš kočiský"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Oxford",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Oxford",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Oxford",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Oxford",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Oxford",
"location": "{}"
}
]
}
],
"arxiv_id": [
"",
"",
"",
"1705.04304v3",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.444444 | 0.916667 | null | null | null | null | null | r1lfpfZAb |
||
bikowski|demystifying_mmd_gans|ICLR_cc_2018_Conference | 1801.01401v5 | Demystifying MMD GANs | We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training. | {
"name": [
"mikołaj bi",
"danica j sutherland",
"michael arbel",
"arthur gretton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
} | null | [
"Mathematics",
"Computer Science"
] | International Conference on Learning Representations | 2018-01-04 | 61 | 1,016 | null | null | null | null | null | null | null | true | This paper does an excellent job at helping to clarify the relationship between various, recently proposed GAN models. The empirical contribution is small, but the KID metric will hopefully be a useful one for researchers. It would be really useful to show that it maintains its advantage when the dimensionality of the images increases (e.g., on Imagenet 128x128). | {
"review_id": [
"rkkFfN5gz",
"rJOKM41-M",
"SJsGyNugf"
],
"review": [
{
"title": "title: Clearly written review of MMD gans with some good insights",
"paper_summary": null,
"main_review": "main_review: The quality and clarity of this work are very good. The introduction of the kernel inception metric is well-motivated and novel, to my knowledge. With the mention of a bit more related work (although this is already quite good), I believe that this could be a significant resource for understanding MMD GANs and how they fit into the larger model zoo.\n\nPros\n - best description of MMD GANs that I have encountered\n - good contextualization of related work and descriptions of relationships, at least among the works surveyed\n - reasonable proposed metric (KID) and comparison with other scores\n - proof of unbiased gradient estimates is a solid contribution\n\nCons\n - although the review of related work is very good, it does focus on ~3 recent papers. As a review, it would be nice to see mention (even just in a list with citations) of how other models in the zoo fit in\n - connection between IPMs and MMD gets a bit lost; a figure (e.g. flow chart) would help\n - wavers a bit between proposing/proving novel things vs. reviewing and lacks some overall structure/storyline\n - Figure 1 is a bit confusing; why is KID tested without replacement, and FID with? Why 100 vs 10 samples? The comparison is good to have, but it's hard to draw any insight with these differences in the subfigures. The figure caption should also explain what we are supposed to get out of looking at this figure.\n\nSpecific comments:\n - I suggest bolding terms where they are defined; this makes it easy for people to scan/find (e.g. Jensen-Shannon divergence, Integral Probability Metrics, witness functions, Wasserstein distance, etc.) \n - Although they are common knowledge in the field, because this is a review it could be helpful to provide references or brief explanations of e.g. JSD, KL, Wasserstein distance, RKHS, etc.\n - a flow chart (of GANs, IPMs, MMD, etc., mentioning a few more models than are discussed in depth here, would be *very* helpful.\n - page 2, middle paragraph, you mention \"...constraints to ensure the kernel distribution embeddings remained injective\"; it would be helpful to add a sentence here to explain why that's a good thing.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Good overview; main contribution is theoretical proof",
"paper_summary": null,
"main_review": "main_review: The main contribution of the paper is that authors extend some work of Bellemare: they show that MMD GANs [which includes the Cramer GAN as a subset] do possess unbiased gradients. They provide a lot of context for the utility of this claim, and in the experiments section they provide a few different metrics for comparing GANs [as this is a known tricky problem]. The authors finally show that an MMD GAN can achieve comparable performance with a much smaller network used in the discriminator.\n\nAs previously mentioned, the big contribution of the paper is the proof that MMD GANs permit unbiased gradients. This is a useful result; however, given the lack of other outstanding theoretical or empirical results, it almost seems like this paper would be better shaped as a theory paper for a journal. I could be swayed to accept this paper however if others feel positive about it.\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: The contribution is too incremental!",
"paper_summary": null,
"main_review": "main_review: This paper claims to demystify MMD-GAN, a generative adversarial network with the maximum mean discrepancy (MMD) as a critic, by showing that the usual estimator for MMD yields unbiased gradient estimates (Theorem 1). It was noted by the authors that biased gradient estimate can cause problem when performing stochastic gradient descent, as also noted previously by Bellemare et al. The authors also proposed a kernel inception distance (KID) as a quantitative evaluation metric for GAN. The KID is defined to be the squared MMD between inception representation of the distributions. In experiments, the authors compared the quality of samples generated by MMD-GAN with various kernels with the ones generated from WGAN-GP (Gulrajani et al., 2017) and Cramer GAN (Bellemare et al., 2017). The empirical results show the benefits of using the MMD on top of deep convolutional features. \n\nThe major flaw of this paper is that its contribution is not really clear. Showing that the expectation and gradient can be interchanged (Theorem 1) does not seem to provide sufficient significance. Unbiasedness of the gradient alone does not guarantee that training will be successful and that the resulting models will better reflect the underlying data distribution, as evident by other successful variants of GANs, e.g., WGAN, which employ biased estimate. Indeed, since the training process relies on a small mini-batch, a small bias could help counteract the potentially high variance of the gradient estimate. The key is rather a good balance of both bias and variance during the training process and a guarantee that the estimate is asymptotically unbiased wrt the training iterations. Lastly, I do not see how the empirical results would demystify MMD-GANs, as claimed by the paper.\n\nThe paper is clearly written. \n\nSome minor comments:\n\n- The proof of the main result, Theorem 1, should be placed in the main paper.\n- Page 7, 2nd paragraph: later --> layer",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.6666666865348816,
0.5555555820465088,
0.3333333432674408
],
"confidence": [
0.75,
0.25,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Re: \"Good overview; main contribution is theoretical proof\"",
"Re: Difference from MMD GAN?",
"Reply",
"Minor revision",
"New revision",
"Reply"
],
"comment": [
"Thanks for your comments. We do feel that this paper has contributions outside just the proof of unbiased gradients, in particular clarifying the relationship among various slightly-different GAN models, the KID score, and the new experimental results about success with smaller critic networks, which are of interest to the ICLR community.\n\nPlease also see our general comments about the new revision above, which includes substantial improvements.\n",
"It's true that the model we consider here is also described in Section 5.5 of the latest version of Li et al. This result was not in the original version of the paper, however, and only appeared in the revised arXiv submission of November 6, a week and a half after the ICLR deadline; we were not aware of the new version until you pointed it out (thanks for doing so).\n\nThat said, the main point of our paper is not to propose yet another GAN variation (YAGAN?). The idea of using an MMD as critic with a kernel defined on deep convolutional features is not new, nor is the idea of regularizing the gradient of the critic witness function: we cite the papers where (to our knowledge) these ideas were first proposed. The point of our paper is to understand (and \"demystify\") the MMD GAN, and its relation with other integral probability metric-based GANs. In this direction, our new results are in three areas:\n\n* We clarify the relationship between MMD GANs and Cramér GANs, and the relationship of the MMD GAN critic and witness function to those of WGAN-GPs (thus explaining why the gradient penalty makes sense for the MMD GAN).\n\n* We formally show the unbiasedness of the gradient of the MMD estimator wrt the network parameters, in Theorem 1. This is our main theoretical result, and an important property to establish when the MMD is used as a GAN critic.\n\n* Our main new experimental finding is that MMD GANs seem to work about as well as WGAN-GPs that use much larger critic networks. Thus, for a given generator, MMD GANs will be simpler and faster to train than WGAN-GPs. Our understanding of why this happens is described in detail in the paper.\n\nAlong the way, we also proposed the KID score, which is a more natural metric of generative model convergence than the Inception score, and inherits many of the nice properties of the previously-proposed FID, but is much easier and more intuitive to estimate.",
"Thanks for your comments. We've posted a new revision addressing most of them; see also our separate comment describing other significant improvements.\n\n- Review of related work: thanks for the suggestion. We have added a brief section 2.4 with some more related work; we would be happy to add more if you have some other suggestions.\n\n- We have attempted to slightly clarify the description of IPMs in this revision, and will further consider better ways to do this.\n\n- KID/FID comparison figure: We agree that this difference is confusing. It was done because the standard KID estimator becomes biased when there are repeated points due to sampling with replacement, but of course when sampling 10,000 / 10,000 points without replacement, it is unsurprising that there is no variance in the estimate, so it made more sense for the point we were trying to make to evaluate FID with replacement. The difference in number of samples was due to the relatively higher computational expense of the FID (which requires the SVD of a several thousand dimensional-matrix), but we have increased that to the same number of samples as well. The figures look essentially identical changing either of these issues; we have changed to using a variant of the KID estimator which is still unbiased for samples with replacement and clarified the caption.\n\n- We have added a footnote on why injectivity of the distribution embeddings is desirable.",
"We just uploaded a new revision with a minor improvement: Table 3 now contains test set metrics for the LSUN bedrooms dataset, for context, as for the other tables. All GAN models we considered obtain higher Inception scores than the test set, highlighting the inappropriateness of the Inception score for this dataset.",
"Thanks to all for their comments. We just posted a new revision addressing many of the comments, as well as the following general improvements:\n\nFirst, a note on the bias situation. After submission, we cleaned up the proof of unbiasedness and, in doing so, noticed that we were able to generalize it significantly. Our unbiasedness result now covers nearly all feedforward neural network structures used in practice, rather than just ReLU networks as before. We also realized in this process that, with very little extra work, we could cover not just MMD GANs but also WGANs and even original GANs (with bounded discriminator outputs to avoid the logs blowing up). This at first seems counterintuitive, since of course the Cramér GAN paper showed that Wasserstein has biased sample gradients. We have thus added a detailed description of the relationship to the theory section and to the new Appendix B. In short: with a fixed kernel, the MMD estimator is unbiased, but the estimator of the supremum over kernels of the MMD is biased. Likewise, with a fixed critic function, the Wasserstein estimator (such as it is) is unbiased, but the estimator of the supremum over critic functions (the actual Wasserstein) is biased. Thus the bias situation is more analogous between the two models than had been previously thought, which our paper now helps to substantially clarify.\n\nWe also cleaned up the experimental results somewhat, including new results on LSUN that we didn't have time to finish for the initial submission. While doing that, we also used the KID in a new way: to dynamically adapt the learning rate based on the similarity of the generator's model to the training set, using the relative similarity test of https://arxiv.org/abs/1511.04581. This is similar to popular schemes used in supervised learning based on validation set accuracy, and allows for less manual tuning of the learning rate decay (which can be very important, and differ between models).",
"Thanks for your comments, and please also see our comments about improvements in the new revision above.\n\nYou are certainly correct that an unbiased gradient does not guarantee that training will succeed; our recent revision also substantially clarifies the bias situation. However, in SGD the bias-variance tradeoff is somewhat different than the situation in e.g. ridge regression, where the regularization procedure adds some bias but also reduces variance enough that it is worthwhile. There doesn't seem to be any reason to think that the gradient variance is any higher for MMD GANs than for WGANs, and so a direct analogy doesn't quite apply. Also, when performing SGD, the biases of each step might add up over time, and so – as in Bellemare et al.'s example – following biased gradients is worth at least some level of concern.\n\nWith regards to the rest of the contribution: the title \"demystifying\" was intended more for the earlier parts of the paper, which elucidate the relationship of MMD GANs to other models and (especially in the revision) clarify the nature of the bias argument of Bellemare et al. The empirical results perhaps do not directly \"demystify,\" but rather bring another level of understanding of these models."
]
} | {
"paperhash": [
"fedus|many_paths_to_equilibrium:_gans_do_not_need_to_decrease_a_divergence_at_every_step",
"li|distributional_adversarial_networks",
"bellemare|the_cramer_distance_as_a_solution_to_biased_wasserstein_gradients",
"berthelot|began:_boundary_equilibrium_generative_adversarial_networks",
"arjovsky|towards_principled_methods_for_training_generative_adversarial_networks",
"salimans|improved_techniques_for_training_gans",
"nowozin|f-gan:_training_generative_neural_samplers_using_variational_divergence_minimization",
"szegedy|rethinking_the_inception_architecture_for_computer_vision",
"radford|unsupervised_representation_learning_with_deep_convolutional_generative_adversarial_networks",
"kingma|adam:_a_method_for_stochastic_optimization",
"liu|deep_learning_face_attributes_in_the_wild",
"szegedy|intriguing_properties_of_neural_networks",
"seeger|gaussian_processes_for_machine_learning",
"lecun|gradient-based_learning_applied_to_document_recognition",
"jin|towards_the_automatic_anime_characters_creation_with_generative_adversarial_networks",
"arora|do_gans_actually_learn_the_distribution?_an_empirical_study",
"heusel|gans_trained_by_a_two_time-scale_update_rule_converge_to_a_nash_equilibrium",
"genevay|learning_generative_models_with_sinkhorn_divergences",
"mroueh|fisher_gan",
"li|mmd_gan:_towards_deeper_understanding_of_moment_matching_network",
"danihelka|comparison_of_maximum_likelihood_and_gan-based_training_of_real_nvps",
"liu|approximation_and_convergence_properties_of_generative_adversarial_learning",
"huang|beyond_face_rotation:_global_and_local_perception_gan_for_photorealistic_and_identity_preserving_frontal_view_synthesis",
"zhu|unpaired_image-to-image_translation_using_cycle-consistent_adversarial_networks",
"huang|stacked_generative_adversarial_networks",
"mityagin|the_zero_set_of_a_real_analytic_function",
"sriperumbudur|on_integral_probability_metrics,_φ-divergences_and_binary_classification",
"bickel|unbiased_estimation_in_convex_families",
"sriperumbudur|on_the_empirical_estimation_of_integral_probability_metrics",
"zahorski|sur_l'ensemble_des_points_de_non-dérivabilité_d'une_fonction_continue"
],
"title": [
"MANY PATHS TO EQUILIBRIUM: GANS DO NOT NEED TO DECREASE A DIVERGENCE AT EVERY STEP",
"Distributional Adversarial Networks",
"The Cramer Distance as a Solution to Biased Wasserstein Gradients",
"BEGAN: Boundary Equilibrium Generative Adversarial Networks",
"",
"Improved Techniques for Training GANs",
"f -GAN: Training Generative Neural Samplers using Variational Divergence Minimization",
"Rethinking the Inception Architecture for Computer Vision",
"Under review as a conference paper at ICLR 2016 UNSUPERVISED REPRESENTATION LEARNING WITH DEEP CONVOLUTIONAL GENERATIVE ADVERSARIAL NETWORKS",
"Published as a conference paper at ICLR 2015 ADAM: A METHOD FOR STOCHASTIC OPTIMIZATION",
"Deep Learning Face Attributes in the Wild *",
"Intriguing properties of neural networks Christian Szegedy",
"Gaussian Processes for Machine Learning",
"Gradient-based learning applied to document recognition",
"GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium",
"Comparison of Maximum Likelihood and GAN-based training of Real NVPs",
"GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium",
"REVISITING CLASSIFIER TWO-SAMPLE TESTS",
"LSUN: Construction of a Large-Scale Image Dataset using Deep Learning with Humans in the Loop",
"Domain Adaptation: Learning Bounds and Algorithms",
"MULTI-SCALE CONTEXT AGGREGATION BY DILATED CONVOLUTIONS",
"Characteristic Kernels and RKHS Embedding of Measures",
"Hilbert Space Embedding and Characteristic Kernels Hilbert Space Embeddings and Metrics on Probability Measures",
"Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks",
"Stacked Generative Adversarial Networks",
"THE ZERO SET OF A REAL ANALYTIC FUNCTION",
"MULTINEST: an efficient and robust Bayesian inference tool for cosmology and particle physics",
"UNBIASED ESTIMATION IN CONVEX FAMILffiS 1",
"On the empirical estimation of integral probability metrics",
""
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"william fedus",
"mihaela rosca",
"balaji lakshminarayanan",
"andrew m dai",
"shakir mohamed",
"ian goodfellow",
"google brain",
" deepmind"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
},
{
"name": [],
"affiliation": []
},
{
"name": [
"david berthelot",
"thomas schumm",
"luke metz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chris donahue",
"julian mcauley",
"miller puckette"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tim salimans",
"ian goodfellow",
"wojciech zaremba",
"vicki cheung",
"alec radford",
"xi chen"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sebastian nowozin",
"botond cseke",
"ryota tomioka"
],
"affiliation": [
{
"laboratory": "Machine Intelligence and Perception Group Microsoft Research Cambridge",
"institution": "",
"location": "{'country': 'UK'}"
},
{
"laboratory": "Machine Intelligence and Perception Group Microsoft Research Cambridge",
"institution": "",
"location": "{'country': 'UK'}"
},
{
"laboratory": "Machine Intelligence and Perception Group Microsoft Research Cambridge",
"institution": "",
"location": "{'country': 'UK'}"
}
]
},
{
"name": [
"christian szegedy",
"zbigniew wojna"
],
"affiliation": [
{
"laboratory": "",
"institution": "University College London",
"location": "{}"
},
{
"laboratory": "",
"institution": "University College London",
"location": "{}"
}
]
},
{
"name": [
"alec radford",
"luke metz",
"soumith chintala"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik p kingma",
"jimmy lei ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ziwei liu",
"ping luo",
"xiaogang wang",
"xiaoou tang",
"hong kong"
],
"affiliation": [
{
"laboratory": "",
"institution": "The Chinese University",
"location": "{}"
},
{
"laboratory": "",
"institution": "The Chinese University",
"location": "{}"
},
{
"laboratory": "",
"institution": "The Chinese University of Hong Kong",
"location": "{}"
},
{
"laboratory": "",
"institution": "The Chinese University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"inc wojciech google",
" zaremba",
"ilya sutskever",
"joan bruna",
"ian goodfellow",
"rob fergus"
],
"affiliation": [
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Google Inc",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University Dumitru Erhan Google Inc",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Montreal",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University Facebook Inc",
"location": "{}"
}
]
},
{
"name": [
"matthias seeger"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of California at Berkeley",
"location": "{'addrLine': '485 Soda Hall', 'postCode': '94720-1776', 'settlement': 'Berkeley', 'region': 'CA', 'country': 'USA'}"
}
]
},
{
"name": [
"yann lecun",
"léon bottou",
"yoshua bengio",
"patrick haffner",
"yoshua bottou",
"patrick bengio",
" haffner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"martin heusel",
"hubert ramsauer",
"thomas unterthiner",
"bernhard nessler",
"sepp hochreiter"
],
"affiliation": [
{
"laboratory": "",
"institution": "Johannes Kepler University Linz",
"location": "{'postCode': 'A-4040', 'settlement': 'Linz', 'country': 'Austria'}"
},
{
"laboratory": "",
"institution": "Johannes Kepler University Linz",
"location": "{'postCode': 'A-4040', 'settlement': 'Linz', 'country': 'Austria'}"
},
{
"laboratory": "",
"institution": "Johannes Kepler University Linz",
"location": "{'postCode': 'A-4040', 'settlement': 'Linz', 'country': 'Austria'}"
},
{
"laboratory": "",
"institution": "Johannes Kepler University Linz",
"location": "{'postCode': 'A-4040', 'settlement': 'Linz', 'country': 'Austria'}"
},
{
"laboratory": "",
"institution": "Johannes Kepler University Linz",
"location": "{'postCode': 'A-4040', 'settlement': 'Linz', 'country': 'Austria'}"
}
]
},
{
"name": [
"ivo danihelka",
"balaji lakshminarayanan",
"benigno uria",
"daan wierstra",
"peter dayan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"martin heusel",
"hubert ramsauer",
"thomas unterthiner",
"bernhard nessler",
"sepp hochreiter"
],
"affiliation": [
{
"laboratory": "",
"institution": "Johannes Kepler University Linz",
"location": "{'postCode': 'A-4040', 'settlement': 'Linz', 'country': 'Austria'}"
},
{
"laboratory": "",
"institution": "Johannes Kepler University Linz",
"location": "{'postCode': 'A-4040', 'settlement': 'Linz', 'country': 'Austria'}"
},
{
"laboratory": "",
"institution": "Johannes Kepler University Linz",
"location": "{'postCode': 'A-4040', 'settlement': 'Linz', 'country': 'Austria'}"
},
{
"laboratory": "",
"institution": "Johannes Kepler University Linz",
"location": "{'postCode': 'A-4040', 'settlement': 'Linz', 'country': 'Austria'}"
},
{
"laboratory": "",
"institution": "Johannes Kepler University Linz",
"location": "{'postCode': 'A-4040', 'settlement': 'Linz', 'country': 'Austria'}"
}
]
},
{
"name": [
"david lopez-paz",
"maxime oquab"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yu ari fisher",
" seff",
"thomas funkhouser",
"jianxiong xiao"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Princeton University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Princeton University",
"location": "{}"
}
]
},
{
"name": [
"yishay mansour",
"mehryar mohri",
"afshin rostamizadeh"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yu fisher",
"vladlen koltun"
],
"affiliation": [
{
"laboratory": "",
"institution": "Princeton University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Intel Labs",
"location": "{}"
}
]
},
{
"name": [
"bharath k sriperumbudur",
"gert r g lanckriet"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"bharath k sriperumbudur",
"arthur gretton",
"bernhard schölkopf",
"gert r g lanckriet"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jun-yan zhu",
"taesung park",
"phillip isola",
"alexei a efros",
"summer winter",
"van gogh",
"cezanne monet",
"ukiyo-e monet photos"
],
"affiliation": [
{
"laboratory": "Berkeley AI Research (BAIR) laboratory",
"institution": "",
"location": "{'settlement': 'Berkeley', 'region': 'UC'}"
},
{
"laboratory": "Berkeley AI Research (BAIR) laboratory",
"institution": "",
"location": "{'settlement': 'Berkeley', 'region': 'UC'}"
},
{
"laboratory": "Berkeley AI Research (BAIR) laboratory",
"institution": "",
"location": "{'settlement': 'Berkeley', 'region': 'UC'}"
},
{
"laboratory": "Berkeley AI Research (BAIR) laboratory",
"institution": "",
"location": "{'settlement': 'Berkeley', 'region': 'UC'}"
},
{
"laboratory": "Berkeley AI Research (BAIR) laboratory",
"institution": "",
"location": "{'settlement': 'Berkeley', 'region': 'UC'}"
},
{
"laboratory": "Berkeley AI Research (BAIR) laboratory",
"institution": "",
"location": "{'settlement': 'Berkeley', 'region': 'UC'}"
},
{
"laboratory": "Berkeley AI Research (BAIR) laboratory",
"institution": "",
"location": "{'settlement': 'Berkeley', 'region': 'UC'}"
},
{
"laboratory": "Berkeley AI Research (BAIR) laboratory",
"institution": "",
"location": "{'settlement': 'Berkeley', 'region': 'UC'}"
}
]
},
{
"name": [
"xun huang",
"yixuan li",
"omid poursaeed",
"john hopcroft",
"serge belongie"
],
"affiliation": [
{
"laboratory": "",
"institution": "Cornell University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Cornell University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Cornell University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Cornell University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Cornell University",
"location": "{}"
}
]
},
{
"name": [
"boris s mityagin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"f feroz",
"m p hobson",
"m bridges"
],
"affiliation": [
{
"laboratory": "Cavendish Laboratory",
"institution": "",
"location": "{'addrLine': 'JJ Thomson Avenue', 'postCode': 'CB3 0HE', 'settlement': 'Cambridge', 'country': 'UK'}"
},
{
"laboratory": "Cavendish Laboratory",
"institution": "",
"location": "{'addrLine': 'JJ Thomson Avenue', 'postCode': 'CB3 0HE', 'settlement': 'Cambridge', 'country': 'UK'}"
},
{
"laboratory": "Cavendish Laboratory",
"institution": "",
"location": "{'addrLine': 'JJ Thomson Avenue', 'postCode': 'CB3 0HE', 'settlement': 'Cambridge', 'country': 'UK'}"
}
]
},
{
"name": [
"p j bickel",
"e l lehmann"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Berkeley'}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Berkeley'}"
}
]
},
{
"name": [
"bharath k sriperumbudur",
"kenji fukumizu",
"arthur gretton",
"bernhard schölkopf",
"gert r g lanckriet"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
}
],
"arxiv_id": [
"",
"1706.09549v3",
"1705.10743v1",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 85 | 11.952941 | 0.518519 | 0.583333 | null | null | null | null | null | r1lUOzWCW |
|
shi|kernel_implicit_variational_inference|ICLR_cc_2018_Conference | 1705.10119v3 | Kernel Implicit Variational Inference | Recent progress in variational inference has paid much attention to the flexibility of variational posteriors. One promising direction is to use implicit distributions, i.e., distributions without tractable densities as the variational posterior. However, existing methods on implicit posteriors still face challenges of noisy estimation and computational infeasibility when applied to models with high-dimensional latent variables. In this paper, we present a new approach named Kernel Implicit Variational Inference that addresses these challenges. As far as we know, for the first time implicit variational inference is successfully applied to Bayesian neural networks, which shows promising results on both regression and classification tasks. | {
"name": [
"jiaxin shi",
"shengyang sun",
"jun zhu"
],
"affiliation": [
{
"laboratory": "Lab for Brain and AI",
"institution": "Tsinghua University",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Toronto",
"location": "{}"
},
{
"laboratory": "Lab for Brain and AI",
"institution": "Tsinghua University",
"location": "{}"
}
]
} | null | [
"Computer Science",
"Mathematics"
] | arXiv.org | 2017-05-29 | 50 | 3 | null | null | null | null | null | null | null | true | Thank you for submitting you paper to ICLR. This paper was enhanced noticeably in the rebuttal period and two of the reviewers improved their score as a result. There is a good range of experimental work on a number of different tasks. The addition of the comparison with Liu & Feng, 2016 to the appendix was sensible. Please make sure that the general conclusions drawn from this are explained in the main text and also the differences to Tran et al., 2017 (i.e. that the original model can also be implicit in this case). | {
"review_id": [
"rkzduZIyf",
"S1jTB1PlM",
"Hk8v9J5eM"
],
"review": [
{
"title": "title: Interesting and novel idea but poor writing quality",
"paper_summary": null,
"main_review": "main_review: This paper presents Kernel Implicit Variational Inference (KIVI), a novel class of implicit variational distributions. KIVI relies on a kernel approximation to directly estimate the density ratio. Importantly, the optimal kernel approximation in KIVI has closed-form solution, which allows for faster training since it avoids gradient ascent steps that may soon get \"outdated\" as the optimization over the variational distribution runs. The paper presents experiments on a variety of scenarios to show the performance of KIVI.\n\nUp to my knowledge, the idea of estimating the density ratio using kernels is novel. I found it interesting, specially since there is a closed-form solution for this estimate. The closed form solution involves a matrix inversion, but this shouldn't be an issue, as the matrix size is controlled by the number of samples, which is a parameter that the practitioner can choose. I also found interesting the implicit MMNN architecture proposed in Section 4.\n\nThe experiments seem convincing too, although I believe the paper could probably be improved by comparing with other implicit VI methods, such as [Liu & Feng], [Tran et al.], or others.\n\nMy major criticism with the paper is the quality of the writing. I found quite a few errors in every page, which significantly affects readability. I strongly encourage the authors to carefully review the entire paper and search for typos, grammatical errors, unclear sentences, etc.\n\nPlease find below some further comments broken down by section.\n\nSection 1: In the introduction, it is unclear to me what \"protect these models\" means. Also, in the second paragraph, the authors talk about \"often leads to biased inference\". The concept to \"biased inference\" is unclear. Finally, the sentence \"the variational posterior we get in this way does not admit a tractable likelihood\" makes no sense to me; how can a posterior admit (or not admit) a likelihood?\n\nSection 3: The first paragraph of the KIVI section is also unclear to me. In Section 3.1, it looks like the cost function L(\\hat(r)) is different from the loss in Eq. 1, so it should have a different notation. In Eq. 4, I found it confusing whether L(r)=J(r). Also, it would be nice to include a brief description of why the expectation in Eq. 4 is taken w.r.t. p(z) instead of q(z), for those readers who are less familiar with [Kanamori et al.]. Finally, the motivation behind the \"reverse ratio trick\" was unclear to me (the trick is clear, but I didn't fully understand why it's needed).\n\nSection 4: The first paragraph of the example can be improved with a brief discussion of why the methods of [Mescheder et al.] and [Song et al.] \"are nor applicable\". Also, the paragraph above Eq. 11 (\"When modeling a matrix...\") was unclear to me.\n\nSection 6: In Figure 1(a), I think there must be something wrong, because it is well-known that VI tends to cover one of the modes of the posterior only due to the form of the KL divergence (in contrast to EP, which should look like the curve in the figure). Additionally, Figure 3(a) (and the explanation in the text) was unclear to me. Finally, I disagree with the discussion regarding overfitting in Figure 3(b): that plot doesn't show overfitting because it is a plot of the training loss (and overfitting occurs on test); instead it looks like an optimization issue that makes the bound decrease.\n\n\n**** EDITS AFTER AUTHORS' REBUTTAL ****\n\nI increased the rating to 7 after reading the revised version.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A good contribution to implicit approximate posterior fitting",
"paper_summary": null,
"main_review": "main_review: Thank you for the feedback, and I think many of my concerns have been addressed.\n\nI think the paper should be accepted.\n\n==== original review ====\n\nThank you for an interesting read. \n\nApproximate inference with implicit distribution has been a recent focus of the research since late 2016. I have seen several papers simultaneously proposing the density ratio estimation idea using GAN approach. This paper, although still doing density ratio estimation, uses kernel estimators instead and thus avoids the usage of discriminators. \n\nFurthermore, the paper proposed a new type of implicit posterior approximation which uses intuitions from matrix factorisation. I do think that another big challenge that we need to address is the construction of good implicit approximations, which is not well studied in previous literature (although this is a very new topic). This paper provides a good start in this direction.\n\nHowever several points need to be clarified and improved:\n1. There are other ways to do implicit posterior inference such as amortising deterministic/stochastic dynamics, and approximating the gradient updates of VI. Please check the literature.\n2. For kernel based density ratio estimation methods, you probably need to cite a bunch of Sugiyama papers besides (Kanamori et al. 2009). \n3. Why do you need to introduce both regression under p and q (the reverse ratio trick)? I didn't see if you have comparisons between the two. From my perspective the reverse ratio trick version is naturally more suitable to VI.\n4. Do you have any speed and numerical issues on differentiating through alpha (which requires differentiating K^{-1})?\n5. For kernel methods, kernel parameters and lambda are key to performances. How did you tune them?\n6. For the celebA part, can you compute some quantitative metric, e.g inception score?\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting idea; not clear it scales; needs experiments on quality of ratio estimation and also posterior approximation",
"paper_summary": null,
"main_review": "main_review: Update: I read the other reviews and the authors' rebuttal. Thanks to the authors for clarifying some details. I'm still against the paper being accepted. But I don't have a strong opinion and will not argue against so if other reviewers are willing. \n\n------\n\nThe authors propose Kernel Implicit VI, an algorithm allowing implicit distributions as the posterior approximation by employing kernel ridge regression to estimate a density ratio. Unlike current approaches with adversarial training, the authors argue this avoids the problems of noisy ratio estimation, as well as potentially high-dimensional inputs to the discriminator. The work has interesting ideas. Unfortunately, I'm not convinced that the method overcomes these difficulties as they argue in Sec 3.2.\n\nAn obvious difficulty with kernel ridge regression in practice is that its complete inaccuracy to estimate high-dimensional density ratios. This is especially the case given a limited number of samples from both p and q (which is the same problem as previous methods) as well as the RBF kernel. While the RBF kernel still takes the same high-dimensional inputs and does not involve learning massive sets of parameters, it also does not scale well at all for accurate estimation. This is the same problem as related approaches with Stein variational gradient descent; namely, it avoids minimax problems as in adversarial training by implicitly integrating over the discriminator function space using the kernel trick.\n\nThis flaw has rather deep implications. For example, my understanding of the implicit VI on the Bayesian neural network in Sec 4 is that it ends up as cross-entropy minimization subject to a poorly estimated KL regularizer. I'd like to see just how much entropy the implicit approximation has instead of concnetrating toward a point; or more directly, what the implicit posterior approximation looks like compared to a true posterior inferred by, say, HMC as the ground truth. This approach also faces difficulties that the naive Gaussian approximation applied to Bayesian neural nets does not: implicit approximations cannot exploit the local reparameterization trick and are therefore limited to specific architectures that does not involve sampling very large weight matrices.\n\nThe authors report variational lower bounds, which I'm not sure is really a lower bound. Namely, the bias incurred by the ratio estimation makes it difficult to compare numbers. An obvious but very illustrative experiment I'd like to see would be the accuracy of the KL estimator on problems where we can compute it tractably, or where we can Monte Carlo estimate it very well under complicated but tractable densities. I also suggest the authors perform the experiment suggested above with HMC as ground truth on a non-toy problem such as a fairly large Bayesian neural net.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.6666666865348816,
0.6666666865348816,
0.4444444477558136
],
"confidence": [
0.5,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Thank you for the detailed comments. We apologize for the typos and unclear sentences, and we have revised the paper.",
"Summary of the revision",
"Mostly convinced with the rebuttal; thank you for revising the paper",
"Thank you for the insightful comments and we have included further experiments to investigate the questions raised.",
"Errata",
"Thank you for the review, and answering the questions",
"Thank you for reading the rebuttal"
],
"comment": [
"Thank you for the detailed comments. We apologize for the typos and errors. We have corrected them and revised the unclear sentences. Below, we address the individual concerns.\n\nQ1: Comparisons with other implicit VI methods, such as [Liu & Feng], [Tran et al.], or others:\nThanks for the suggestion. In the revision, we added the comparison with (Liu & Feng, 2016) in Appendix F.2. Their approach is to directly minimize the kernel Stein discrepancy (KSD) between the variational posterior and the true posterior. Since KSD has been shown to be the magnitude of a functional gradient of KL divergence (Liu & Wang, 2016), all saddle points in the original problem of optimizing KL divergence will become local optima when optimizing KSD. In experiments we also found that KSD VI soon converges to local minima, where the performance is unsatisfying.\n\nFor (Tran et al., 2017), as it investigates both implicit models and implicit inference, the technique used is the joint-contrastive method, which is beyond our scope (only meaningful to use joint-contrastive when the model is also implicit). So the comparison is infeasible since we are only focusing on implicit inference. We have compared to other discriminator-based approaches in our experiments (e.g., prior-contrastive, AVB).\n\nQ2: Detailed comments by section:\nSection 1: We revised all the unclear statements. “biased inference” means the true posterior is far from the variational family when the family only includes factorized distributions. “admit a tractable likelihood” should be “have a tractable density”.\n\nSection 3: We revised the unclear statements. In Section 3.1, we cleaned the notations and added the description of why the expectation in Eq.4 is taken w.r.t. p(z). We also revised the reverse ratio trick part. A comparison between estimation with and without the trick is added to Appendix F.1.\n\nSection 4: The implicit distributions introduced in [Mescheder et al.] and [Song et al.] are not applicable because they are based on traditional fully-connected neural networks, which cannot afford a very large output space. However, this is indeed the case of the distribution over weights in a normal-size BNN. We made it clearer in the paper. The paragraph above the original Eq. (11) has been revised.\n\nSection 6: Thanks for pointing out the error in Figure 1(a). We have corrected it. VI with normal posterior indeed converges to a single mode. For Figure 3(a), we made it clearer and added more detailed descriptions to the posterior. Figure 3(b) did show overfitting, where we have plotted both the training and the test loss. The smaller their gap is, the less the model overfits. We added more descriptions in Section 6.3.",
"We thank the reviewers for all the comments and questions. Here we summarize the major changes made in the revision.\n\n* We revised the statements of motivations and contributions in Section 1-3 to make them clearer.\n* We added Appendix F.4 to compare the true KL term with the estimated KL term under “complicated but tractable densities” (normalizing flows).\n* Comparisons with KSD VI (Liu & Feng, 2016) and HMC are added to Appendix F.2.\n* We added Appendix F.3 to visualize the posterior approximation by KIVI and compare with HMC and VI with naive Gaussian posteriors.\n* We revised the reverse ratio trick part and added a comparison between estimation with and without the trick in Appendix F.1.\n* The related work section is extended to include the works pointed out by AnonReviewer3.\n* In Section 6.3, we added quantitative evaluation for CelebA using the recently developed Fréchet Inception Distance (FID) (Heusel et al., 2017).\n* We corrected the error in Figure 1(a).\n* We fixed other typos, grammar errors, and unclear sentences.\n",
"Thank you for revising the paper; I think it is much clearer now. The issues in my initial review have been appropriately taken care of.\n\nThere are still some typos here and there, and I would recommend the authors to carefully revise the paper again. Some examples are:\n\nSection 2:\n. there has been some works --> there HAVE\n. Note that the ratio approximation ... --> this sentence is unclear to me, do you mean that the gradient of the ratio approximation is zero once the approximation is accurate? Same comment in Sec 3.1, below Eq. 7.\n. doesn't --> does not (same goes for it's, won't, etc., in other sections)\n\nSection 3:\n. Why does notation change from q_phi(z) to just q(z)?\n. Substitute Eq. (5) --> SubstitutING Eq. (5) and SETTING the derivatives ... TO zero, we ...",
"Thank you for the insightful comments and we have included further experiments to investigate the questions raised. We have revised the paper to include the analysis.\n\nQ1: Inaccuracy of kernel regression in high dimensions & not convinced that KIVI overcomes the difficulties:\n\nFirst, we have to emphasize that implicit VI is surely a much harder problem than VI with a common variational posterior (e.g., Gaussian), due to the lack of a tractable density for variational posterior q. Given the limited number of samples from q per iteration, if no additional knowledge is available, almost all implicit VI methods as well as nonparametric methods (e.g., SVGD) suffer to some degree in high dimensions, as agreed by the reviewer. However, as we extensively investigated in experiments, though not fully addressed all the challenges, KIVI can outperform existing strong competitors to get state-of-the-art performance. We think this is a valuable contribution to variational inference.\n\nBelow, we further clarify our contributions. We have also revised the two challenges in Section 2 and the statements of contributions in the paper to make them clearer.\n\n1) For the noisy estimation, we focused on the variance introduced in discriminator-based methods. In fact, existing discriminator-based methods have been identified to have high variance (noisy), i.e., samples from the two distributions are easily discriminated, which indicates overfitting (Mescheder et al., 2017). This phenomenon is like the case when you push $\\lambda$ in KIVI to 0. We are not claiming high accuracy for estimation in high-dimensional spaces (In fact no implicit VI method can claim that with limited samples per iteration, as explained above). One main contribution of KIVI is to provide an explicit trade-off between bias and variance, since there was no principled way of doing so in discriminator-based methods. As a result, our algorithm can be rather stable (see Fig.2, right). It’s true that bringing down the variance requires to pay some bias in the gradient in general. However, as empirically shown in the experiments and also in the investigation of the learned posteriors (see the answer to Q3 below), we found that we still gain over previous VI methods, both in terms of accuracy and also the quality of uncertainty, which is highly non-trivial.\n\n2) For high-dimensional latent variables, the argument mainly focused on computation issues. The other main contribution of KIVI is to make implicit VI computationally FEASIBLE for models like moderate-sized BNNs. In the classification case, the weights are of tens of thousands of dimensions and can hardly be fed into neural nets, which renders discriminator-based approaches infeasible.\n\nFinally, we’d like to add a point that KIVI opens up the door for improving implicit VI methods. The view of kernel regression at least brings two possible directions: One is pointed out by the reviewer, the RBF kernel could be replaced by other kernels that are more suitable to the model here. The other is to improve the regression problem to utilize the geometry of the distribution. And the latter is actually an ongoing work of us.\n\nQ2: Accuracy of the KL estimator on problems where we can compute it tractably:\nThanks for the suggestion. We added Appendix F.4 to compare the true KL term with the estimated KL term. We used normalizing flow there as the “complicated but tractable densities”. 
We can see that the KL estimates closely track the ground truth, and are more accurate as the variational approximation improves over time.\n\nQ3: Quality of posterior approximation & comparison to HMC:\nWe added Appendix F.3 to visualize the posterior approximation by KIVI and compare with HMC and the VI with naive Gaussian posteriors. The quantitative results and settings of HMC are described in Appendix F.2. The main conclusion is that the VI with naive Gaussian posteriors leads to over-pruning problems. KIVI doesn’t have the problem, and retains a good amount of uncertainty compared to HMC.\n\nQ4: Cannot use local reparameterization trick:\nThis is a valid point. But the problem exists as long as we want to go beyond tractable variational posteriors (e.g., Gaussian). The results by naive Gaussian posteriors have been shown above, which has significant over-pruning problems. New difficulty introduced shouldn’t be the reason that we stick to the naive Gaussian approximation.\n\nQ5: Bias of lower bounds:\nThere are two places where we report lower bounds. In Figure 2 (right) the lower bounds are used only to show the stability of training. In Figure 3(b) lower bounds are plotted to show the overfitting problems. We argue that though the lower bounds have bias, their relative gap (the training/test gap) should be comparable. Moreover, in this case we have also evaluated the test log likelihoods using golden truths estimated by Annealed Importance Sampling (AIS). The results by AIS confirmed the conclusion that the KIVI-trained VAE less overfits.",
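To make the comparison described in Q2 concrete, here is a schematic numpy sketch of checking an estimated KL term against a Monte Carlo ground truth when the variational density is tractable. The arguments sample_q, log_q, log_p and fit_ratio are placeholders, and the specific flows and estimator used in the paper's Appendix F.4 are not reproduced here.

```python
import numpy as np

def mc_kl_true(sample_q, log_q, log_p, n=1000):
    # Ground-truth KL(q || p) by Monte Carlo; only possible when log_q is tractable
    # (e.g., a normalizing flow with an explicit density).
    z = sample_q(n)
    return float(np.mean(log_q(z) - log_p(z)))

def mc_kl_estimated(sample_q, sample_p, fit_ratio, n=1000):
    # Sample-based estimate: fit r_hat ~ q/p from samples alone (e.g., the kernel
    # estimator sketched earlier, with the q-samples as the numerator set), then
    # average log r_hat over fresh samples from q.
    z_q, z_p = sample_q(n), sample_p(n)
    r_hat = fit_ratio(z_q, z_p)
    z = sample_q(n)
    return float(np.mean(np.log(np.maximum(r_hat(z), 1e-12))))
```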
"Figure 1(a) in the toy experiment is incorrectly drawn and thus misinterpreted. The correct figure should be that the Gaussian posterior covers the left mode instead of being between the two modes, since it is initialized from left (see https://drive.google.com/file/d/1nJAVH2-Fl0P6ei-ZwBI3_Z6BvFFAYk9E/view?usp=sharing). The figure of KIVI is correct and we double-checked all the others. We sincerely apologize for this error and will fix it in the revision.",
"Thank you for the positive feedback. We address the individual questions below.\n\nQ1: Related works:\nThanks for the suggestion. We have cited a paper on amortizing the deterministic dynamics of SVGD (Liu & Feng, 2016). In the revision, we added two more recent papers on amortized MCMC (Li et al., 2017) and gradient estimators of implicit models (Li & Turner, 2017) in Section 5. We also added more content there to highlight the contributions that Sugiyama and his collaborators has made to density ratio estimation.\n\nQ2: On the reverse ratio trick:\nIn fact, we didn’t do regression under q. We only adopted the regression under p (the reverse ratio trick) in our experiments (See Algo. 1). And we have explained why the reverse ratio version is more suitable for VI in Section 3.1. In the revision, we further added a comparison between the two using the 2-D Bayesian logistic regression example in Appendix F.1, which shows that the trick is very essential for KIVI to work well.\n\nQ3: Speed and numerical issues on differentiating through alpha:\nBecause K is of size n_p x n_p (n_p is the number of samples), which is usually of tens or a hundred, the cost of differentiating through K^{-1} is not high. And we used the automatic differentiation in Tensorflow. We didn’t observe any numerical issues, as long as the regularization parameter isn’t extremely small, say, less than 1e-7.\n\nQ4: Tuning parameters:\nAs mentioned in Section 3.1, we selected the kernel bandwidth by the commonly used median heuristic, i.e., the kernel bandwidth is chosen as the median of pairwise distances between the samples.\n\nAs for lambda, it has clear meaning, which controls the balance between bias and variance. So a good criterion would be tuning it to achieve a good trade-off between the aggressiveness of the estimate and stability of training. In the toy experiments, we tuned lambda so that optimizing only the KL term will make the posterior samples more disperse like the prior. In most other experiments, lambda is set at 0.001 which has good performance, though it could be improved by cross-validation.\n\nQ5: Quantitative evaluation for CelebA:\nThanks for the suggestion. In fact, inception score is only suitable to natural image datasets like Cifar10 and ImageNet. Instead, we adopted a recently developed quantitative measure named Fréchet Inception Distance (FID) (Heusel et al., 2017), which improved the Inception score to use the statistics of real world samples. The scores achieved at epoch 25 by AVB and KIVI are 160 and 41 (smaller is better), respectively. We added these results in Section 6.3.",
"Thank you for the updated review. We answer the further questions below.\n\nQ: \"Note that the ratio approximation ...\" is not clear:\nA: This sentence has the same meaning as the one below Eq. 7, by which we mean that the true gradients of the KL term w.r.t. $\\phi$ do not flow through the density ratio function, so we could replace the ratio function with its estimate who has zero gradients w.r.t. $\\phi$. We made it clearer in the paper.\n\nQ: Other typos:\nA: We uploaded a new revision correcting them."
]
} | {
"paperhash": [
"louizos|multiplicative_normalizing_flows_for_variational_bayesian_neural_networks",
"tran|deep_and_hierarchical_implicit_models",
"huszár|variational_inference_using_implicit_distributions",
"mescheder|adversarial_variational_bayes:_unifying_variational_autoencoders_and_generative_adversarial_networks",
"liu|two_methods_for_wild_variational_inference",
"tomczak|improving_variational_auto-encoders_using_householder_flow",
"ranganath|operator_variational_inference",
"mohamed|learning_in_implicit_generative_models",
"uehara|generative_adversarial_nets_from_a_density_ratio_estimation_perspective",
"liu|stein_variational_gradient_descent:_a_general_purpose_bayesian_inference_algorithm",
"kingma|improved_variational_inference_with_inverse_autoregressive_flow",
"dumoulin|adversarially_learned_inference",
"donahue|adversarial_feature_learning",
"louizos|structured_and_efficient_variational_deep_learning_with_matrix_gaussian_posteriors",
"dai|provable_bayesian_inference_via_particle_mirror_descent",
"gal|dropout_as_a_bayesian_approximation:_representing_model_uncertainty_in_deep_learning",
"ghahramani|probabilistic_machine_learning_and_artificial_intelligence",
"rezende|variational_inference_with_normalizing_flows",
"blundell|weight_uncertainty_in_neural_networks",
"hernández-lobato|probabilistic_backpropagation_for_scalable_learning_of_bayesian_neural_networks",
"goodfellow|generative_adversarial_networks",
"mnih|neural_variational_inference_and_learning_in_belief_networks",
"kingma|auto-encoding_variational_bayes",
"hoffman|stochastic_variational_inference",
"paisley|variational_bayesian_inference_with_stochastic_search",
"hao|deep_learning",
"lecun|deep_learning"
],
"title": [
"Multiplicative Normalizing Flows for Variational Bayesian Neural Networks",
"Hierarchical Implicit Models and Likelihood-Free Variational Inference",
"Variational Inference using Implicit Distributions",
"Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks",
"TWO METHODS FOR WILD VARIATIONAL INFERENCE",
"Improving Variational Auto-Encoders using Householder Flow",
"Operator Variational Inference",
"Learning in Implicit Generative Models",
"Generative Adversarial Nets from a Density Ratio Estimation Perspective",
"Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm",
"Improved Variational Inference with Inverse Autoregressive Flow",
"",
"ADVERSARIAL FEATURE LEARNING",
"",
"Provable Bayesian Inference via Particle Mirror Descent",
"Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning",
"Probabilistic machine learning and artificial intelligence",
"Variational Inference with Normalizing Flows",
"A convex relaxation for weakly supervised classifiers",
"Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks",
"Generative Adversarial Nets",
"Neural Variational Inference and Learning in Belief Networks",
"Auto-Encoding Variational Bayes",
"Stochastic Variational Inference",
"Variational Bayesian Inference with Stochastic Search",
"Deep Learning",
"Deep Learning"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"christos louizos",
"max welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"dustin tran",
"rajesh ranganath",
"david m blei"
],
"affiliation": [
{
"laboratory": "",
"institution": "Columbia University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Princeton University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Columbia University",
"location": "{}"
}
]
},
{
"name": [
"ferenc huszár"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"lars mescheder",
"sebastian nowozin",
"andreas geiger"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"qiang liu",
"yihao feng"
],
"affiliation": [
{
"laboratory": "",
"institution": "Dartmouth College Hanover",
"location": "{'postCode': '03755', 'region': 'NH'}"
},
{
"laboratory": "",
"institution": "Dartmouth College Hanover",
"location": "{'postCode': '03755', 'region': 'NH'}"
}
]
},
{
"name": [
"jakub m tomczak",
"max welling"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Amsterdam",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Amsterdam",
"location": "{}"
}
]
},
{
"name": [
"rajesh ranganath",
"jaan altosaar",
"dustin tran",
"david m blei"
],
"affiliation": [
{
"laboratory": "",
"institution": "Princeton University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Princeton University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Columbia University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Columbia University",
"location": "{}"
}
]
},
{
"name": [
"shakir mohamed",
"balaji lakshminarayanan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"masatosi uehara",
"issei sato",
"masahiro suzuki",
"kotaro nakayama",
"yutaka matsuo"
],
"affiliation": [
{
"laboratory": "",
"institution": "The University of Tokyo",
"location": "{}"
},
{
"laboratory": "",
"institution": "The University of Tokyo",
"location": "{}"
},
{
"laboratory": "",
"institution": "The University of Tokyo",
"location": "{}"
},
{
"laboratory": "",
"institution": "The University of Tokyo",
"location": "{}"
},
{
"laboratory": "",
"institution": "The University of Tokyo",
"location": "{}"
}
]
},
{
"name": [
"qiang liu",
"dilin wang"
],
"affiliation": [
{
"laboratory": "",
"institution": "Dartmouth College Hanover",
"location": "{'postCode': '03755', 'region': 'NH'}"
},
{
"laboratory": "",
"institution": "Dartmouth College Hanover",
"location": "{'postCode': '03755', 'region': 'NH'}"
}
]
},
{
"name": [
"diederik p kingma",
"xi chen",
"max welling"
],
"affiliation": [
{
"laboratory": "Advanced Research (CIFAR). 29th Conference on Neural Information Processing Systems (NIPS 2016)",
"institution": "Canadian Institute for",
"location": "{'settlement': 'Barcelona', 'country': 'Spain'}"
},
{
"laboratory": "Advanced Research (CIFAR). 29th Conference on Neural Information Processing Systems (NIPS 2016)",
"institution": "Canadian Institute for",
"location": "{'settlement': 'Barcelona', 'country': 'Spain'}"
},
{
"laboratory": "Advanced Research (CIFAR). 29th Conference on Neural Information Processing Systems (NIPS 2016)",
"institution": "Canadian Institute for",
"location": "{'settlement': 'Barcelona', 'country': 'Spain'}"
}
]
},
{
"name": [
"vincent dumoulin",
"ishmael belghazi",
"ben poole",
"olivier mastropietro",
"alex lamb",
"martin arjovsky",
"aaron courville"
],
"affiliation": [
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "Neural Dynamics and Computation Lab",
"institution": "",
"location": "{'settlement': 'Stanford'}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
},
{
"laboratory": "",
"institution": "New York University",
"location": "{}"
},
{
"laboratory": "",
"institution": "Université de Montréal",
"location": "{}"
}
]
},
{
"name": [
"jeff donahue",
"philipp krähenbühl",
"trevor darrell"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Berkeley'}"
},
{
"laboratory": "",
"institution": "University of Texas",
"location": "{'settlement': 'Austin'}"
},
{
"laboratory": "",
"institution": "University of California",
"location": "{'settlement': 'Berkeley'}"
}
]
},
{
"name": [
"victor lempitsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"bo dai",
"niao he",
"hanjun dai",
"le song"
],
"affiliation": [
{
"laboratory": "",
"institution": "Georgia Institute of Technology",
"location": "{}"
},
{
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": "{}"
},
{
"laboratory": "",
"institution": "Georgia Institute of Technology",
"location": "{}"
},
{
"laboratory": "",
"institution": "Georgia Institute of Technology",
"location": "{}"
}
]
},
{
"name": [
"yarin gal"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Cambridge",
"location": "{}"
}
]
},
{
"name": [
"zoubin ghahramani"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Cambridge",
"location": "{}"
}
]
},
{
"name": [
"danilo jimenez",
"shakir mohamed",
"google deepmind",
" london"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"armand joulin",
"francis bach"
],
"affiliation": [
{
"laboratory": "",
"institution": "INRIA -Ecole Normale Supérieure",
"location": "{}"
},
{
"laboratory": "",
"institution": "INRIA -Ecole Normale Supérieure",
"location": "{}"
}
]
},
{
"name": [
"josé miguel hernández-lobato",
"ryan p adams"
],
"affiliation": [
{
"laboratory": "",
"institution": "Harvard University",
"location": "{'postCode': '02138', 'settlement': 'Cambridge', 'region': 'MA', 'country': 'USA'}"
},
{
"laboratory": "",
"institution": "Harvard University",
"location": "{'postCode': '02138', 'settlement': 'Cambridge', 'region': 'MA', 'country': 'USA'}"
}
]
},
{
"name": [
"ian j goodfellow",
"jean pouget-abadie",
"mehdi mirza",
"bing xu",
"david warde-farley",
"sherjil ozair",
"aaron courville",
"yoshua bengio",
" delhi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Université de Montréal from Ecole Polytechnique",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "Sherjil Ozair is visiting",
"institution": "Université de Montréal from Indian Institute of Technology",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"andriy mnih",
"karol gregor",
"google deepmind"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik p kingma",
"max welling"
],
"affiliation": [
{
"laboratory": "Machine Learning Group",
"institution": "Universiteit van Amsterdam",
"location": "{}"
},
{
"laboratory": "Machine Learning Group",
"institution": "Universiteit van Amsterdam",
"location": "{}"
}
]
},
{
"name": [
"matt hoffman",
"david m blei",
"chong wang",
"john paisley"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"john paisley",
"david m blei",
"michael i jordan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": "",
"institution": "Princeton University",
"location": "{}"
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nicholas g polson",
"vadim o sokolov"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Chicago",
"location": "{}"
},
{
"laboratory": "",
"institution": "George Mason University",
"location": "{}"
}
]
},
{
"name": [
"nicholas g polson",
"vadim o sokolov"
],
"affiliation": [
{
"laboratory": "",
"institution": "University of Chicago",
"location": "{}"
},
{
"laboratory": "",
"institution": "George Mason University",
"location": "{}"
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 93 | 0.032258 | 0.592593 | 0.666667 | null | null | null | null | null | r1l4eQW0Z |
|
zhang|learning_lessoverlapping_representations|ICLR_cc_2018_Conference | Learning Less-Overlapping Representations | In representation learning (RL), how to make the learned representations easy to interpret and less overfitted to training data are two important but challenging issues. To address these problems, we study a new type of regularization approach that encourages the supports of weight vectors in RL models to have small overlap, by simultaneously promoting near-orthogonality among vectors and sparsity of each vector. We apply the proposed regularizer to two models: neural networks (NNs) and sparse coding (SC), and develop an efficient ADMM-based algorithm for regularized SC. Experiments on various datasets demonstrate that weight vectors learned under our regularizer are more interpretable and have better generalization performance. | {
"name": [],
"affiliation": []
} | We propose a new type of regularization approach that encourages non-overlapness in representation learning, for the sake of improving interpretability and reducing overfitting. | [
"Less-overlapness",
"regularization",
"near-orthogonality",
"sparsity"
] | null | 2018-02-15 22:29:51 | 44 | null | null | null | null | null | null | null | null | false | Each of the reviewers had a slightly different set of issues with this paper but here is an attempt at a summary:
PROS:
1. Paper is mostly clear and well structured.
CONS:
1. Lack of novelty
2. Unsupported claims
3. Questionable methodology (using dropout confounds the goal of the experiment)
The authors did not submit a rebuttal. | {
"review_id": [
"ByL47G5lM",
"BJ9J8G_ez",
"B1taBfmlG"
],
"review": [
{
"title": "title: logdet for diversity is not novel",
"paper_summary": null,
"main_review": "main_review: The paper proposed a new regularization approach that simultaneously encourages the weight vectors (W) to be sparse and orthogonal to each other. The argument is that the sparsity helps to eliminate the irrelevant feature vectors by making the corresponding weights zero. Nearly orthogonal sparse vectors will have zeros at different indexes and hence, encourages the weight vectors to have small overlap in terms of indices of nonzero entries (called support). Small overlap in support of weight vectors, aids interpretability as each weight vector is associated with a unique subset of feature vectors. For example, in the topic model, small overlap encourages, each topic to have unique set of representation words. \n\nThe proposed approach used L1 regularizer for enforcing sparsity in W. For enforcing orthogonality between different weight vectors (wi, wj), the log-determinant divergence (LDD) regularization term encourages the Gram Matrix G (Gij = wiTwj) to be close to an identity matrix I. The authors applied and tested the performance of proposed approach on Neural Network and Sparse Coding (SC) machine learning models. The authors validated the need for their proposed regularizer through experiments on 4 datasets (3 text and 1 images).\n\nMajor\n* The novelty of the paper is not clear. Neither L1 no logdet() are novel regularizers (see the literature of Determinatal Point Process). With the presence of the auto-differentiator, one cannot claim the making derivative a novelty.\n\n* L1 is also encourages diversity although as explicit as logdet. This is also obvious from Fig 2. Perhaps the advantage of diversity is in interpretability but that is hard to quantify and the authors did not put enough effort to do that; we only have small anecdotal results in section 4.3. \n\n* The Table 1 is not convincing because one can argue, for example, gun (vec 1) and weapon (vec 4) are colinear. \n\n* In section 4.2, the authors experimented with SC on text dataset. The overlap score decreases as the strength of regularization increases. The authors didn’t show the effect of increasing the regularization strength on the model accuracy and convergence time. This analysis is important to make sure, the decrease in overlap score is not coming at the expense of model accuracy and performance. \n\n* In section 4.4, increase in test set accuracy and difference between test and train set accuracy is used to validate the claim, that the proposed regularizer helps reducing over fitting. In Table-2, , the test accuracy increases between SC and LDD-L1 SC while the train accuracy remains almost the same. Also, the authors didn’t do any cross validation to support their claim. The difference is numbers is too small to support the claim.\n\n* In section on LSTM for Language Modeling, the perplexity score of LDD-L1 regularization on PytorchLM received perplexity score of 1.2 lower than without regularization. Although, the author mentions it as a significant reduction, the lowest perplexity score in Table 3 is significantly lower than this result. 
It’s not clear how 1.2 reduction in perplexity is significant and why the method should be preferred while much better models already exists.\n\n* Results of the best perplexity model, Neural Architecture Search + WT V2, with proposed regularization would also help, validating the generalizability claims of the new approach.\n\n* In CNN for Image Classification section, details of increase interpretability of the model, in terms of classification decision, is missing.\n\n* In Table-4, the proposed LDD-L1 WideResNet is not the best results. Results of adding the proposed regularization, to the best know method (Pyramid Sep Drop) would be interesting. \n\n* The proposed regularization claims to provide more interpretable representation and less overfit model. The given experiments are inadequate to validate the claims.\n\n* A more extensive experimentation is required to validate the applicability of the method.\n\n* In SC, aj are the linear coefficients or the coefficient vector of the j-th sample. If A ∈ Rm×n then aj ∈ Rm×1 and j ranges between [1,n] as in equation 6. The notation in section 2.2, Sparse Coding section is misleading as j ranges between [1,m].\n\n* In Related works, the authors mention previous work done on interpreting the results of the machine learning models. Related works on enhancing interpretability and reducing overfitting by using regularization is missing.\n\n\n",
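To make the two ingredients summarized in this review concrete, the sketch below implements (i) a penalty of the form described here, a log-determinant divergence pulling the Gram matrix of the weight vectors towards the identity plus an elementwise L1 term, and (ii) one plausible support-overlap score (mean pairwise Jaccard overlap of the nonzero index sets). The weighting gamma, the jitter eps, the threshold tol, and the exact overlap definition are illustrative assumptions rather than the paper's own choices.

```python
import numpy as np

def ldd_l1_penalty(W, gamma=1.0, eps=1e-8):
    # W: (m, d) array whose m rows are the weight vectors.
    m = W.shape[0]
    G = W @ W.T + eps * np.eye(m)            # Gram matrix, G_ij = <w_i, w_j>
    _, logdet = np.linalg.slogdet(G)
    ldd = np.trace(G) - logdet - m           # log-det divergence between G and the identity
    return ldd + gamma * np.abs(W).sum()     # plus elementwise L1 for sparsity

def support_overlap_score(W, tol=1e-6):
    # Mean pairwise Jaccard overlap between the supports (nonzero index sets) of the rows.
    S = np.abs(W) > tol
    m = W.shape[0]
    scores = []
    for i in range(m):
        for j in range(i + 1, m):
            union = np.logical_or(S[i], S[j]).sum()
            inter = np.logical_and(S[i], S[j]).sum()
            scores.append(inter / union if union > 0 else 0.0)
    return float(np.mean(scores)) if scores else 0.0
```

Minimizing the first term drives the rows towards unit norm and mutual near-orthogonality, while the L1 term zeroes entries; together they push an overlap score of this kind down, which is the behaviour the review discusses for section 4.2.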
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting idea but insufficient explanations and experimental results",
"paper_summary": null,
"main_review": "main_review: The paper studies a regularization method to promote sparsity and reduce the overlap among the supports of the weight vectors in the learned representations. The motivation of using this regularization is to enhance the interpretability of the learned representation and avoid overfitting of complex models. \n\nTo reduce the overlap among the supports of the weight vectors, an existing method (Xie et al, 2017b) encouraging orthogonality is adopted to let the Gram matrix of the weight vectors to be close to the identity matrix (so that each weight vector is with unit norm and any pair of vectors are approximately orthogonal).\n\nNeural network and sparse coding are considered as two case studies. The alternating algorithm for solving the regularized sparse coding formulation is common and less attracted. I think the major point is to see how much benefit that the regularization can afford for learning deep neural networks. To avoid overfitting, some off-the-shelf methods, e.g., dropout which can be viewed as a kind of regularization, are commonly used for deep neural networks. Are there any connections between the adopted regularization terms and the existing methods? Will these less overlapped parameters control the activation of different neurons? I think these are some straightforward questions while there are not much explanations on those aspects.\n\nFor training neural networks, a simple sub-gradient method is used because of the non-smoothness of the regularization terms. When training with large neural networks, will the sub-gradient method affect the efficiency a lot compared without using the regularizer? For example, in the image classification problem with ResNet.\n\nIt is better not to use dropout in the experiments (language modeling and image classification), because one of the motivation of using the proposed regularizer is to avoid overfitting while dropout does the same work and may affect the evaluation of effectiveness of the regularization.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Novel orthogonality- and sparsity-promoting regularizer. Rather clear and technically sound paper, but incremental.",
"paper_summary": null,
"main_review": "main_review: *Summary*\nThe paper introduces a matrix regularizer to simultaneously induce both sparsity and (approximate) orthogonality. The definition of the regularizer mostly relies on the previous proposal from Xie et al. 2017b, to which a weighted L1 term is added.\nThe regularizer aims at reducing overlap among the learned matrices, and it is applied to various neural networks and sparse coding (SC) settings.\nMost of the challenges of the paper concentrate on the optimization side.\nThe evaluation of the paper is based on 3 experiments: SC (to illustrate the gain in interpretability and the reduction in overfitting), LTSM (for a NLP task over PTB) and CNN (for a computer vision task over CIFAR-10). \n\nThe paper is overall clear and fairly well structured, but it suffers from several flaws, as next discussed.\n\n*Detailed comments*\n(mostly in linear order)\n\n-The proposed regularization scheme seems closely related to the approach taken in [Zass2007]; a detailed discussion and potential comparison should be provided. In particular, the approach of [Zass2007] would lead to an easier optimization. \n\n-The sparse coding formulation has an extremely heavy parametrization (4 regularization parameters + the optimization parameter for ADMM + the number of columns of W). It seems to me that the resulting approach may not be very practical.\n\n-Sparse coding: More references to previous work are needed, such as references related to alternating schemes and proximal optimization for SC (in Sec. 3); e.g., see [Mairal2010,Jenatton2011] and numerous references therein.\n\n-I would suggest to move the derivations of Sec. 3.1 into an appendix not to break the flow of the readers. The derivations look sound.\n\n-Due to the use of ADMM, I think that only W_tilde is sparse (due to the prox update (10)), but W may not be. This point should be discussed. Is a \"manual\" thresholding applied thereafter?\n\n-For equation (25), I would precise that the columns of U have to be properly ordered to make sure we can only look at those from s=m...d.\n\n-More details about the optimization in the case of the neural networks should be discussed.\n\n-Could another splitting for ADMM, based on the logdet to reuse ideas from [Banerjee2008,Friedman2008], be possible?\n\n-In table 2., are those 3-decimal statistics significant? Any idea of the variability of those numbers?\n\n-Interpretability: The paper focuses on the gain in interpretability thanks to the regularizer (e.g., Table 1 and 3). But all the proposed settings (SC or neural networks) are such that the parameters are themselves subject to sources of variations, e.g., the initial conditions. How can we make strong conclusions while inspecting the parameters?\n\n-In Figure 2., it seems to be that the final performance metric should also be overlaid. What appears as interesting to me is the the trade-off between overlap score and final performance metric.\n\n*References*\n\n[Banerjee2008] Banerjee, O.; El Ghaoui, L. & d'Aspremont null, A. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data Journal of Machine Learning Research, MIT Press, 2008, 9, 485-516\n\n[Friedman2008] Friedman, J.; Hastie, T. & Tibshirani, R. Sparse inverse covariance estimation with the graphical lasso Biostatistics, 2008, 9, 432\n\n[Jenatton2011] Jenatton, R.; Mairal, J.; Obozinski, G. & Bach, F. 
Proximal Methods for Hierarchical Sparse Coding Journal of Machine Learning Research, 2011, 12, 2297-2334\n\n[Mairal2010] Mairal, J.; Bach, F.; Ponce, J. & Sapiro, G. Online learning for matrix factorization and sparse coding Journal of Machine Learning Research, 2010, 11, 19-60\n\n[Zass2007] Zass, R. & Shashua, A. Nonnegative sparse PCA Advances in Neural Information Processing Systems, 2007",
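Regarding the prox update and the sparsity of W_tilde mentioned in this review: for an L1 term handled inside ADMM, the proximal step is typically elementwise soft-thresholding of the auxiliary copy of the variable. A minimal sketch, with the threshold tau as an illustrative stand-in for the paper's actual step size:

```python
import numpy as np

def soft_threshold(X, tau):
    # Proximal operator of tau * ||.||_1, applied elementwise:
    # prox(x) = sign(x) * max(|x| - tau, 0). It zeroes small entries exactly,
    # which is why the auxiliary copy (W_tilde in the review) is sparse after
    # this step, while the primal W only approaches it up to the ADMM
    # consensus tolerance -- the reviewer's point about needing a final
    # "manual" thresholding if exact sparsity of W is desired.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)
```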
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.2222222238779068,
0.3333333432674408,
0.4444444477558136
],
"confidence": [
1,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [],
"comment": []
} | {
"paperhash": [
"mikolov|pointer_sentinel_lstm_(merity_et_al.,_2016)_70.9_ensemble_of_38_large_lstms",
"variational|variational_rhn_+_wt_(zilly_et_al.,_2016)_65.4_variational_rhn_+_wt_with_mc_dropout",
"bao|incoherent_training_of_deep_neural_networks_to_decorrelate_bottleneck_features_for_speech_recognition",
"blei|latent_dirichlet_allocation",
"cai|manifold_adaptive_experimental_design_for_text_categorization",
"cheng|language_modeling_with_sum-product_networks",
"choi|retain:_an_interpretable_predictive_model_for_healthcare_using_reverse_time_attention_mechanism",
"clevert|fast_and_accurate_deep_network_learning_by_exponential_linear_units_(elus)",
"cogswell|reducing_overfitting_in_deep_networks_by_decorrelating_representations",
"dong|improving_interpretability_of_deep_neural_networks_with_semantic_information",
"gal|a_theoretically_grounded_application_of_dropout_in_recurrent_neural_networks",
"graham|fractional_max-pooling",
"he|deep_residual_learning_for_image_recognition",
"hochreiter|long_short-term_memory",
"huang|densely_connected_convolutional_networks",
"inan|tying_word_vectors_and_word_classifiers:_a_loss_framework_for_language_modeling",
"kim|character-aware_neural_language_models",
"wei|understanding_black-box_predictions_via_influence_functions",
"krizhevsky|imagenet_classification_with_deep_convolutional_neural_networks",
"kulis|low-rank_kernel_learning_with_bregman_matrix_divergences",
"lee|deeplysupervised_nets",
"daniel|learning_the_parts_of_objects_by_non-negative_matrix_factorization",
"lin|network_in_network",
"lipton|the_mythos_of_model_interpretability",
"marcus|building_a_large_annotated_corpus_of_english:_the_penn_treebank",
"merity|pointer_sentinel_mixture_models",
"mikolov|context_dependent_recurrent_neural_network_language_model",
"mikolov|recurrent_neural_network_based_language_model",
"bruno|sparse_coding_with_an_overcomplete_basis_set:_a_strategy_employed_by_v1?_vision_research",
"parikh|proximal_algorithms",
"pascanu|how_to_construct_deep_recurrent_neural_networks",
"rodríguez|regularizing_cnns_with_locally_constrained_decorrelations",
"springenberg|striving_for_simplicity:_the_all_convolutional_net",
"tibshirani|regression_shrinkage_and_selection_via_the_lasso",
"wang|rubik:_knowledge_guided_tensor_factorization_and_completion_for_health_data_analytics",
"xie|diversifying_restricted_boltzmann_machine_for_document_modeling",
"xie|learning_latent_space_models_with_angular_constraints",
"xie|near-orthogonality_regularization_in_kernel_methods",
"xie|aggregated_residual_transformations_for_deep_neural_networks",
"yamada|deep_pyramidal_residual_networks_with_separated_stochastic_depth",
"yu|diversity_regularized_machine",
"zagoruyko|wide_residual_networks",
"zaremba|recurrent_neural_network_regularization",
"zoph|neural_architecture_search_with_reinforcement_learning"
],
"title": [
"Pointer Sentinel LSTM (Merity et al., 2016) 70.9 Ensemble of 38 Large LSTMs",
"Variational RHN + WT (Zilly et al., 2016) 65.4 Variational RHN + WT with MC dropout",
"Incoherent training of deep neural networks to decorrelate bottleneck features for speech recognition",
"Latent dirichlet allocation",
"Manifold adaptive experimental design for text categorization",
"Language modeling with sum-product networks",
"Retain: An interpretable predictive model for healthcare using reverse time attention mechanism",
"Fast and accurate deep network learning by exponential linear units (elus)",
"Reducing overfitting in deep networks by decorrelating representations",
"Improving interpretability of deep neural networks with semantic information",
"A theoretically grounded application of dropout in recurrent neural networks",
"Fractional max-pooling",
"Deep residual learning for image recognition",
"Long short-term memory",
"Densely connected convolutional networks",
"Tying word vectors and word classifiers: A loss framework for language modeling",
"Character-aware neural language models",
"Understanding black-box predictions via influence functions",
"Imagenet classification with deep convolutional neural networks",
"Low-rank kernel learning with bregman matrix divergences",
"Deeplysupervised nets",
"Learning the parts of objects by non-negative matrix factorization",
"Network in network",
"The mythos of model interpretability",
"Building a large annotated corpus of english: The penn treebank",
"Pointer sentinel mixture models",
"Context dependent recurrent neural network language model",
"Recurrent neural network based language model",
"Sparse coding with an overcomplete basis set: A strategy employed by v1? Vision research",
"Proximal algorithms",
"How to construct deep recurrent neural networks",
"Regularizing cnns with locally constrained decorrelations",
"Striving for simplicity: The all convolutional net",
"Regression shrinkage and selection via the lasso",
"Rubik: Knowledge guided tensor factorization and completion for health data analytics",
"Diversifying restricted boltzmann machine for document modeling",
"Learning latent space models with angular constraints",
"Near-orthogonality regularization in kernel methods",
"Aggregated residual transformations for deep neural networks",
"Deep pyramidal residual networks with separated stochastic depth",
"Diversity regularized machine",
"Wide residual networks",
"Recurrent neural network regularization",
"Neural architecture search with reinforcement learning"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"& mikolov",
"; zweig",
"& rnn+lda (mikolov",
"; zweig",
" pascanu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rhn variational",
" +re (inan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yebo bao",
"hui jiang",
"lirong dai",
"cong liu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"andrew y david m blei",
"michael i ng",
" jordan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"deng cai",
"xiaofei he"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"wei-chen cheng",
"stanley kok",
"hoai vu pham",
"hai leong chieu",
"kian ming",
"a chai"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"edward choi",
"mohammad taha bahadori",
"jimeng sun",
"joshua kulas",
"andy schuetz",
"walter stewart"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"djork-arné clevert",
"thomas unterthiner",
"sepp hochreiter"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"michael cogswell",
"faruk ahmed",
"ross girshick",
"larry zitnick",
"dhruv batra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yinpeng dong",
"hang su",
"jun zhu",
"bo zhang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yarin gal",
"zoubin ghahramani"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"benjamin graham"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kaiming he",
"xiangyu zhang",
"shaoqing ren",
"jian sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sepp hochreiter",
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"gao huang",
"zhuang liu",
"kilian q weinberger",
"laurens van der maaten"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"hakan inan",
"khashayar khosravi",
"richard socher"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoon kim",
"yacine jernite",
"david sontag",
"alexander m rush"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pang wei",
"koh ",
"percy liang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex krizhevsky",
"ilya sutskever",
"geoffrey e hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"brian kulis",
"mátyás a sustik",
"inderjit s dhillon"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chen-yu lee",
"saining xie",
"patrick gallagher",
"zhengyou zhang",
"zhuowen tu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"d daniel",
"h lee",
"seung sebastian"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"min lin",
"qiang chen",
"shuicheng yan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
" zachary c lipton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mary mitchell p marcus",
"ann marcinkiewicz",
"beatrice santorini"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"stephen merity",
"caiming xiong",
"james bradbury",
"richard socher"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tomas mikolov",
"geoffrey zweig"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"tomas mikolov",
"martin karafiat",
"lukas burget"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"a bruno",
"david j olshausen",
" field"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"neal parikh",
"stephen boyd"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"razvan pascanu",
"caglar gulcehre",
"kyunghyun cho",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jordi pau rodríguez",
"guillem gonzàlez",
"josep m cucurull",
"xavier gonfaus",
" roca"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jost tobias springenberg",
"alexey dosovitskiy",
"thomas brox",
"martin riedmiller"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"robert tibshirani"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yichen wang",
"robert chen",
"joydeep ghosh",
"joshua c denny",
"abel kho",
"you chen",
"bradley a malin",
"jimeng sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pengtao xie",
"yuntian deng",
"eric p xing"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pengtao xie",
"yuntian deng",
"yi zhou",
"abhimanu kumar",
"yaoliang yu",
"james zou",
"eric p xing"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"pengtao xie",
"barnabas poczos",
"eric p xing"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"saining xie",
"ross girshick",
"piotr dollár",
"zhuowen tu",
"kaiming he"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoshihiro yamada",
"masakazu iwamura",
"koichi kise"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yang yu",
"yu-feng li",
"zhi-hua zhou"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sergey zagoruyko",
"nikos komodakis"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"wojciech zaremba",
"ilya sutskever",
"oriol vinyals"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"barret zoph",
"v quoc",
" le"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"",
"",
"",
"",
"",
"1608.05745v4",
"1511.07289v5",
"1511.06068v4",
"1703.04096v2",
"1512.05287v5",
"arXiv:1412.6071",
"1512.03385v1",
"",
"1608.06993v5",
"1611.01462v3",
"",
"",
"",
"",
"",
"",
"1312.4400v3",
"1606.03490v3",
"",
"1609.07843v1",
"",
"arXiv:1511.06422",
"",
"",
"1312.6026v5",
"1611.01967v2",
"1412.6806v3",
"",
"",
"",
"",
"",
"1611.05431v2",
"1612.01230v1",
"",
"1605.07146v4",
"1409.2329v5",
"1611.01578v2"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.333333 | 0.833333 | null | null | null | null | null | r1kjEuHpZ |
||
zheng|understanding_deep_learning_generalization_by_maximum_entropy|ICLR_cc_2018_Conference | 1711.07758v1 | Understanding Deep Learning Generalization by Maximum Entropy | Deep learning achieves remarkable generalization capability with overwhelming number of model parameters. Theoretical understanding of deep learning generalization receives recent attention yet remains not fully explored. This paper attempts to provide an alternative understanding from the perspective of maximum entropy. We first derive two feature conditions that softmax regression strictly apply maximum entropy principle. DNN is then regarded as approximating the feature conditions with multilayer feature learning, and proved to be a recursive solution towards maximum entropy principle. The connection between DNN and maximum entropy well explains why typical designs such as shortcut and regularization improves model generalization, and provides instructions for future model development. | {
"name": [],
"affiliation": []
} | null | [
"Computer Science",
"Mathematics"
] | arXiv.org | 2017-11-21 | 17 | null | null | null | null | null | null | null | null | false | The reviewers are in agreement, that the paper is a big hard to follow and incorrect in places, including some claims not supported by experiments. | {
"review_id": [
"SyDSqb6gz",
"Sy7fJuCxM",
"HkBIjt2xz"
],
"review": [
{
"title": "title: extremely hard to follow, needs major revision",
"paper_summary": null,
"main_review": "main_review: The paper aims to provide a view of deep learning from the perspective of maximum entropy principle. I found the paper extremely hard to follow and seemingly incorrect in places. Specifically:\na) In Section 2, the example given to illustrate underfitting and overfitting states that the 5-order polynomial obviously overfits the data. However, without looking at the test data and ensuring the fact that it indeed was not generated by a 5-order polynomial, I don’t see how such a claim can be made.\nb) In Section 2 the authors state “Imposing extra data hypothesis actually violates the ME principle and degrades the model to non-ME model.” … Statements like this need to be made much clearer, since imposing feature expectation constraints (such as Eq. (3) in Berger et al. 1996) is a perfectly legitimate construct in ME principle.\nc) The opening paragraph of Section 3 is quite unclear; phrases like “how to identify the equivalent feature constraints and simple models” need to be made precise, it is not clear to me what authors mean by this.\nd) I’m not able to really follow Definition 1, perhaps due to unclear notation. It seems to state that we need to have P(X,Y) = P(X,\\hat{Y}), and if that’s the case not clear what more can be accomplished by maximizing conditional entropy H(\\hat{Y}|X). Also, there is a spurious w_i in Definition 1.\ne) Definition 2. Not clear what is meant by notation E_{P(T,Y)}.\nf) Definition 3 uses t_i(x) without defining those, and I think those are different from t_i(x) defined in Definition 2.\n\nI think the paper needs to be substantially revised and clarified before it can be published at ICLR.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: This paper presented a theoretical result for the generalization of DNN using the maximum entropy principle.",
"paper_summary": null,
"main_review": "main_review: The presentation of the paper is crisp and clear. The problem formulation is explained clearly and it is well motivated by theorems. It is a theoretical papers and there is no experimental section. This is the only drawback for the paper as the claims is not supported by any experimental section. The author could add some experiments to support the idea presented in the paper.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Review",
"paper_summary": null,
"main_review": "main_review: Summary:\n\nThis paper presents a derivation which links a DNN to recursive application of\nmaximum entropy model fitting. The mathematical notation is unclear, and in\none cases the lemmas are circular (i.e. two lemmas each assume the other is\ncorrect for their proof). Additionally the main theorem requires complete\nindependence, but the second theorem provides pairwise independence, and the\ntwo are not the same.\n\nMajor comments:\n\n- The second condition of the maximum entropy equivalence theorem requires\n that all T are conditionally independent of Y. This statement is unclear, as\nit could mean pairwise independence, or it could mean jointly independent\n(i.e. for all pairs of non-overlapping subsets A & B of T I(T_A;T_B|Y) = 0).\nThis is the same as saying the mapping X->T is making each dimension of T\northogonal, as otherwise it would introduce correlations. The proof of the\ntheorem assumes that pairwise independence induces joint independence and this\nis not correct.\n\n- Section 4.1 makes an analogy to EM, but gradient descent is not like this\n process as all the parameters are updated at once, and only optimised by a\nsingle (noisy) step. The optimisation with respect to a single layer is\nconditional on all the other layers remaining fixed, but the gradient\ninformation is stale (as it knows about the previous step of the parameters in\nthe layer above). This means that gradient descent does all 1..L steps in\nparallel, and this is different to the definition given.\n\n- The proofs in Appendix C which are used for the statement I(T_i;T_j) >=\n I(T_i;T_j|Y) are incomplete, and in generate this statement is not true, so\nrequires proof.\n\n- Lemma 1 appears to assume Lemma 2, and Lemma 2 appears to assume Lemma 1.\n Either these lemmas are circular or the derivations of both of them are\nunclear.\n\n- In Lemma 3 what is the minimum taken over for the left hand side? Elsewhere\n the minimum is taken over T, but T does not appear on the left hand side.\nExplicit minimums help the reader to follow the logic, and implicit ones\nshould only be used when it is obvious what the minimum is over.\n\n- In Lemma 5, what does \"T is only related to X\" mean? The proof states that\n Y -> T -> X forms a Markov chain, but this implies that T is a function of\nY, not X.\n\nMinor comments:\n\n- I assume that the E_{P(X,Y)} notation is the expectation of that probability\n distribution, but this notation is uncommon, and should be replaced with a\nmore explicit one.\n\n- Markov is usually romanized with a \"k\" not a \"c\".\n\n- The paper is missing numerous prepositions and articles, and contains\n multiple spelling mistakes & typos.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.2222222238779068,
0.5555555820465088,
0.1111111119389534
],
"confidence": [
0.5,
0.25,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [],
"comment": []
} | {
"paperhash": [
"wu|statistical_learning_theory",
"wu|towards_understanding_generalization_of_deep_learning:_perspective_of_loss_landscapes",
"shwartz-ziv|opening_the_black_box_of_deep_neural_networks_via_information",
"zhang|understanding_deep_learning_requires_rethinking_generalization",
"huang|densely_connected_convolutional_networks",
"he|deep_residual_learning_for_image_recognition",
"neyshabur|norm-based_capacity_control_in_neural_networks",
"jeon|using_maximum_entropy_for_automatic_image_annotation",
"hunter|a_tutorial_on_mm_algorithms",
"manning|optimization,_maxent_models,_and_conditional_estimation_without_magic",
"malouf|a_comparison_of_algorithms_for_maximum_entropy_parameter_estimation",
"yusuke|maximum_entropy_estimation_for_feature_forests",
"tishby|the_information_bottleneck_method",
"berger|a_maximum_entropy_approach_to_natural_language_processing",
"maass|neural_nets_with_superlinear_vc-dimension",
"jaynes|information_theory_and_statistical_mechanics",
"achille|emergence_of_invariance_and_disentangling_in_deep_representations",
"chan|conference_paper",
"jana|maximum-entropy_approach",
"|i_(_x_;_t_)_s.t.i_(_t_;_y_)_=_i_(_x_;_y_)_proof._summing_up_lemma4_and_lemma5_,_the_output_of_the_constraint_problem_is_sufficient_to_solving_the_ib_optimization_problem"
],
"title": [
"Statistical Learning Theory",
"Towards Understanding Generalization of Deep Learning: Perspective of Loss Landscapes",
"Opening the Black Box of Deep Neural Networks via Information",
"Understanding deep learning requires rethinking generalization",
"Densely Connected Convolutional Networks",
"Deep Residual Learning for Image Recognition",
"Norm-Based Capacity Control in Neural Networks",
"Using Maximum Entropy for Automatic Image Annotation",
"A Tutorial on MM Algorithms",
"Optimization, Maxent Models, and Conditional Estimation without Magic",
"A Comparison of Algorithms for Maximum Entropy Parameter Estimation",
"Maximum entropy estimation for feature forests",
"The information bottleneck method",
"A Maximum Entropy Approach to Natural Language Processing",
"Neural Nets with Superlinear VC-Dimension",
"Information Theory and Statistical Mechanics",
"Emergence of invariance and disentangling in deep representations",
"Conference Paper",
"MAXIMUM-ENTROPY APPROACH",
"I ( X ; T ) s.t.I ( T ; Y ) = I ( X ; Y ) Proof. Summing up Lemma4 and Lemma5 , the output of the constraint problem is sufficient to solving the IB optimization problem"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"Yuhai Wu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Lei Wu",
"Zhanxing Zhu",
"E. Weinan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ravid Shwartz-Ziv",
"Naftali Tishby"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Chiyuan Zhang",
"Samy Bengio",
"Moritz Hardt",
"B. Recht",
"O. Vinyals"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Gao Huang",
"Zhuang Liu",
"Kilian Q. Weinberger"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kaiming He",
"X. Zhang",
"Shaoqing Ren",
"Jian Sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Behnam Neyshabur",
"Ryota Tomioka",
"N. Srebro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Jeon",
"R. Manmatha"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Hunter",
"K. Lange"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Christopher D. Manning",
"D. Klein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Robert Malouf"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Miyao Yusuke",
"Junichi Tsujii"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Naftali Tishby",
"Fernando C Pereira",
"W. Bialek"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Adam L. Berger",
"S. D. Pietra",
"V. D. Pietra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"W. Maass"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"E. Jaynes"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Achille",
"Stefano Soatto"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Peter Chan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"P. Jana",
"S. K. Mazumder"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
}
],
"arxiv_id": [
"",
"1706.10239v2",
"1703.00810v3",
"1611.03530v2",
"1608.06993v5",
"1512.03385v1",
"1503.00036",
"",
"",
"",
"",
"",
"physics/0004057v1",
"",
"",
"",
"1706.01350v3",
"",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
[],
[],
[],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[],
[],
[],
[
"background"
],
[
"background"
],
[
"background"
],
[],
[
"background"
],
[],
[
"background"
],
[],
[],
[]
],
"isInfluential": [
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
]
} | null | 87 | null | 0.296296 | 0.416667 | null | null | null | null | null | r1kj4ACp- |
|
toyama|toward_learning_better_metrics_for_sequence_generation_training_with_policy_gradient|ICLR_cc_2018_Conference | Toward learning better metrics for sequence generation training with policy gradient | Designing a metric manually for unsupervised sequence generation tasks, such as text generation, is essentially difficult. In a such situation, learning a metric of a sequence from data is one possible solution. The previous study, SeqGAN, proposed the framework for unsupervised sequence generation, in which a metric is learned from data, and a generator is optimized with regard to the learned metric with policy gradient, inspired by generative adversarial nets (GANs) and reinforcement learning. In this paper, we make two proposals to learn better metric than SeqGAN's: partial reward function and expert-based reward function training. The partial reward function is a reward function for a partial sequence of a certain length. SeqGAN employs a reward function for completed sequence only. By combining long-scale and short-scale partial reward functions, we expect a learned metric to be able to evaluate a partial correctness as well as a coherence of a sequence, as a whole. In expert-based reward function training, a reward function is trained to discriminate between an expert (or true) sequence and a fake sequence that is produced by editing an expert sequence. Expert-based reward function training is not a kind of GAN frameworks. This makes the optimization of the generator easier. We examine the effect of the partial reward function and expert-based reward function training on synthetic data and real text data, and show improvements over SeqGAN and the model trained with MLE. Specifically, whereas SeqGAN gains 0.42 improvement of NLL over MLE on synthetic data, our best model gains 3.02 improvement, and whereas SeqGAN gains 0.029 improvement of BLEU over MLE, our best model gains 0.250 improvement. | {
"name": [],
"affiliation": []
} | This paper aims to learn a better metric for unsupervised learning, such as text generation, and shows a significant improvement over SeqGAN. | [
"sequence generation",
"reinforcement learning",
"unsupervised learning",
"RNN"
] | null | 2018-02-15 22:29:42 | 25 | null | null | null | null | null | null | null | null | false | The pros and cons of this paper can be summarized as follows:
Pros:
* It seems that the method has very good intuitions: consideration of partial rewards, estimation of rewards from modified sequences, etc.
Cons:
* The writing of the paper is scattered and not very well structured, which makes it difficult to follow exactly what the method is doing. If I were to give advice, I would flip the order of the sections to 4, 3, 2 (first describe the overall method, then describe the method for partial rewards, and finally describe the relationship with SeqGAN)
* It is strange that the proposed method does not consider subsequences that do not contain y_{t+1}. This seems to go contrary to the idea of using RL or similar methods to optimize the global coherence of the generated sequence.
* For some of the key elements of the paper, there are similar (widely used) methods that are not cited, and it is a bit difficult to understand the relationship between them:
** Partial rewards: this is similar to "reward shaping" which is widely used in RL, for example in the actor-critic method of Bahdanau et al.
** Making modifications of the reference into a modified reference: this is done in, for example, the scheduled sampling method of Bengio et al.
** Weighting modifications by their reward: A similar idea is presented in "Reward Augmented Maximum Likelihood for Neural Structured Prediction" by Norouzi et al.
The approach in this paper is potentially promising, as it definitely contains a lot of promising insights, but the clarity issues and fact that many of the key insights already exist in other approaches to which no empirical analysis is provided makes the contribution of the paper at the current time feel a bit weak. I am not recommending for acceptance at this time, but would certainly encourage the authors to do clean up the exposition, perhaps add a comparison to other methods such as RL with reward shaping, scheduled sampling, and RAML, and re-submit to another venue. | {
"review_id": [
"SkY1f6Hlf",
"HJ2pirpxG",
"H104OpbgM"
],
"review": [
{
"title": "title: The paper introduces an RL approach to generating time series data without the difficult training of GANs. Unfortunately, the paper is too poorly written to be clear or effective.",
"paper_summary": null,
"main_review": "main_review: This paper describes an approach to generating time sequences by learning state-action values, where the state is the sequence generated so far, and the action is the choice of the next value. Local and global reward functions are learned from existing data sequences and then the Q-function learned from a policy gradient.\n\nUnfortunately, this description is a little vague, because the paper's details are quite difficult to understand. Though the approach is interesting, and the experiments are promising, important explanation is missing or muddled. Perhaps most confusing is the loss function in equation 7, which is quite inadequately explained.\n\nThis paper could be interesting, but substantial editing is needed before it is sufficient for publication.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: A novel contribution to sequence generation",
"paper_summary": null,
"main_review": "main_review: This paper considers the problem of improving sequence generation by learning better metrics. Specifically, it focuses on addressing the exposure bias problem, where traditional methods such as SeqGAN uses GAN framework and reinforcement learning. Different from these work, this paper does not use GAN framework. Instead, it proposed an expert-based reward function training, which trains the reward function (the discriminator) from data that are generated by randomly modifying parts of the expert trajectories. Furthermore, it also introduces partial reward function that measures the quality of the subsequences of different lengths in the generated data. This is similar to the idea of hierarchical RL, which divide the problem into potential subtasks, which could alleviate the difficulty of reinforcement learning from sparse rewards. The idea of the paper is novel. However, there are a few points to be clarified.\n\nIn Section 3.2 and in (4) and (5), the authors explains how the action value Q_{D_i} is modeled and estimated for the partial reward function D_i of length L_{D_i}. But the authors do not explain how the rewards (or action value functions) of different lengths are aggregated together to update the model using policy gradient. Is it a simple sum of all of them?\n\nIt is not clear why the future subsequences that do not contain y_{t+1} are ignored for estimating the action value function Q in (4) and (5). The authors stated that it is for reducing the computation complexity. But it is not clear why specifically dropping the sequences that do not contain y_{t+1}. Please clarify more on this point.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Using IRL techniques instead of GANs for sequence generation",
"paper_summary": null,
"main_review": "main_review: This article is a follow-up from recent publications (especially the one on \"seqGAN\" by Yu et al. @ AAAI 2017) which tends to assimilate Generative Adversarial Networks as an Inverse Reinforcement Learning task in order to obtain a better stability.\nThe adversarial learning is replaced here by a combination of policy gradient and a learned reward function.\n\nIf we except the introduction which is tainted with a few typos and English mistakes, the paper is clear and well written. The experiments made on both synthetic and real text data seems solid.\nBeing not expert in GANs I found it pleasant to read and instructive.\n\n\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.6666666865348816,
0.6666666865348816
],
"confidence": [
0.5,
0.5,
0
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Reply",
"Reply",
"We revised the paper",
"Further revision",
"Reply",
"Good response, but still not ready for publication",
"Reply"
],
"comment": [
"Thanks for the review.\n\nFrom the title and the first paragraph of your review, we assume that you might not get our paper, maybe due to our poor writing. We are not sure how you understand our paper, so we firstly try to correct your misunderstandings.\n\nThis paper is introducing the two techniques to learn better reward function, partial reward function and expert-based reward function training, rather than introducing new RL approach. From your review, it can be assumed that you think our paper argues about q-learning, but our paper uses policy-based RL approach (it has been firstly done by Ranzato et al. and it is not our novelty) and does not argue about q-learning at all. A policy (or a sequence generator) is learned by a policy gradient, and Q-function is NOT learned by a policy gradient. In REINFORCE, Q-value is estimated by Monte-Carlo samplings. I think the first paragraph of reviewer3 well summarizes our paper. We would appreciate if you could tell us which parts of our paper actually caused your misunderstandings so that we can revise these parts.\n\nQ. Explain about equation 7 specifically.\nA. The motivation of equation 7 is, when the produced fake sequence is not quite different from the true sequence (for example, only one token in the sequence of length 20 is changed), we thought it would be effective to decrease the weight of the objective function, binary cross entropy (BCE), because this fake sequence is actually not so bad sequence. The benefit of decreasing the weight for such sequence is that the learned reward function would become easier to be maximized by a policy gradient, because learned reward function would return some reward to a generated sequence that has some mistakes. In our paper, we describe it as “smooth\" reward function.\nThe parameter \\tau in quality function directly affects the weight of BCE. When \\tau is large, the fake sequence that is little edited from expert one get a large value of quality function, resulting in making (1 - q) / (1 + q) lower than 1, and it decreases the weight of the second term in the right hand side of equation (7). On the other hand, when \\tau is small, the fake sequence that is little edited from expert one gets a near 0 value of quality function, resulting in (1 - q) / (1 + q) ~= 1, and equation (7) becomes the conventional BCE.\nThe term (1 - q) / (1 + q) is heuristic and there is no theoretical background for it, but it enables to control the strictness of the learned reward function by changing the parameter \\tau (“strict” means that only realistic sequence gets the reward close to 1, and others get the reward close to 0. A strict reward function is accurate, but it is considered to be difficult to maximize by a policy gradient because this reward function might be binary-like peaky function). In the experiment, we show that when the partial reward function has long scale, easing the conventional BCE by using \\tau=1.5 is effective.\n\nPlease give us more specific parts that you are still confused, and we are willing to give answers.\n\nBest,",
"Thanks for the reply and giving the specific parts of the paper that are unclear.\nWe are giving answer to these questions.\nMoreover, we revised our paper to satisfy your request.\n\nQ, What does “dynamics” mean?\nA. This is where our explanation lacks. I give more specific explanation.\n“dynamics” means the transition probability of the next state given the current state and action, formally p(s_{t+1} | s_{t}, a_{t}).\nIn a lot of tasks in reinforcement learning, dynamics is usually unknown and difficult to learn.\nIn a sequence generation, however, s_{t} is the sequence that the generator has generated so far and a_{t} is the next token generation, and s_{t+1} is always [s_{t}, a_{t}], therefore p(s_{t+1} | s_{t}, a_{t}) is deterministic. So, the dynamics is known.\nThis nature is important when we generate fake sequence from expert, like our method. If we do not know the dynamics, we can not determine the next state when we change the certain action.\n\nWe revised the section 2.1 by adding those explanation.\n\nQ,W_e isn't mentioned again, making it unclear what space you're learning in.\nA. W_e is just the embedding matrix (it is learned together with other weights) and we specified the dimension of embedding layer in the description of the experiment section (In synthetic data, the dimension of embedding layer is 32, and in text data, it is 200).\nDoes it answer your question?\n\nQ, The selection of \\alpha.\nA. The selection of \\alpha is important when we use partial reward functions of different scales, because it balances the priorities of the partial correctness of different scale length. Our paper probably should argue it more specifically.\n\nUnfortunately, the selection of \\alpha_{D_i} is done by nothing but hyper-parameter tuning, and we are aware that it is the problem as we argued in the discussion section. In the text generation task, we prepare two partial reward functions (Long R and Short R), and empirically show the differences of BLEU score and generated sequence when \\alpha is changed. The fact that a true metric for sequence is usually not given (except for the special case, such as oracle test) makes difficult to even validate the goodness of selected \\alpha_{D_i}. This is the reason we only try \\alpha_s = 0.3. and \\alpha_s = 1.0 in the text generation experiment.\n\nI think this problem is not only in our case, but the fundamental problem of inverse reinforcement learning (IRL). IRL learns a reward from expert, but the goodness of learned reward function can be evaluated by the behavior of policy, and the evaluation is done by a human (with a bias), or a surrogate manually designed metric.\n\nAbove discussion is included in the discussion (and a little explanation is added in 3.2).\n\nQ, Some concerns about equation 7.\nA. We understand your main concerns.\nIn our paper, equation 7 comes from nowhere, and we do not clearly say that it is completely heuristics. This would confuse readers as you were so.\n\nWe, however, believe that even though the justification of equation 7 is not done in a theoretical way, the justification can also be done in an experimental way. 
If there is a proper experimental validation for a proposal, the proposal should be the important contribution to the community.\n\nWe revised our paper as below to make section 4 clear.\nWe divided the section 4 into the two subsections 4.1 and 4.2, the one for proposing the idea of expert-based reward function training, and the other one for proposing the modified objective function.\nIn the second subsection, we clearly wrote that\n\n- objective function comes by heuristics and there is no theoretical justification.\n- when \\tau ~= 0, this objective function becomes conventional binary cross entropy.\n- The effectivity of this objective function is validated in the experiment section.\n\nand more specific explanation for the objective as we discussed in the reply for your first review.\n\nPlease have a look at the revised version and give us a reply if you have any other concerns.\n\nBest,",
"Given the valuable reviews, we revised the following parts of our paper.\n\n2.1 We add the description of why the dynamics is known in the sequence generation setting.\n\n3.2 We add the description of the \\alpha_{D_i} that it adjusts the importance of a partial reward function with a certain length.\n\n3.2 We describe that Q is finally calculated by aggregating all Q_{D_i}.\n\n4 We divide this section into two, because 4 has two contents, the proposal of expert-based reward function, and the modification of the objective. By receiving the comment from reviewer2, we wrote that the modified BCE has no theoretical background and is a heuristic. The justification of this objective is done by experimental way.\n\n5.1.2 We state that PG_L_exp gets benefit when \\tau=1.5, indicating that the modified BCE is effective.\n\n6 We discuss the selection of \\alpha_D and its difficulty.\n\n",
"1. We fixed some typos and grammar mistakes\n\n4.1 The title of section is substituted to \"expert-based reward function training specification\" because previous seciton title does not suit\n\n4.2 moved some explanation of modified binary cross entropy to appendix because it was bit verbose\n\n5.2.2 We changed the generated examples in Table 4 to make it easy to see the comparison. All generated examples are started from the word \"according\".\n\n",
"Thank you for the review. I am glad that you enjoyed reading our paper.\nAbout the mistakes of English in the introduction part, we will get native check and revise it.",
"Given the thorough response and the other reviews, I went back to re-read the paper to make sure I was being fair. I was a little harsh, but still don't believe this paper is ready for publication, as important paragraphs are quite difficult to read and parse. I have changed my review from a 3 to a 4.\n\nAs an example of points that are unclear:\n\n2.1: it's quite unclear what you mean by \"dynamics\" at the end of this section which are known in the sequence generation task, confusing this explanation.\n3.1: W_e isn't mentioned again, making it unclear what space you're learning in.\n3.2: selection of alpha_D_i isn't discussed, though discounted by the fact I haven't looked at REINFORCE in some time. It seems it would matter quite a lot.\n4: Your discussion above on equation 7 helps a lot, and would benefit the paper (though I still wouldn't quite advocate acceptance). This is particularly true since elements are \"heuristic,\" as you say, making it non-obvious where they came from. This is perhaps the core of my concerns with this paper: crucial equations we are to take on faith, without justification or explanation, should not be published. It is very confusing to try and re-derive equation 7 from the points made in the preceding parts of the paper; it just doesn't follow without much more explanation.\n\n",
"Thanks for the review.\nYour first paragraph of the review well summarizes our paper. Our paper is seemingly well understood by you.\n\nQ. How are the action-state values of different length aggregated?\nA. We simply add the Q values of different scales. To balance the importance of different scales, we also introduce hyper parameter alpha.\n\nQ. Why are the future subsequences that do not contain y_{t+1} ignored?\nA2. In some setting such as Go or Atari games, the final state of the agent is important (e.g. win or lose), and future states affect the Q-value a lot. So, it is important to see further future state after the certain action at t to estimate Q-value in those setting. In our setting, however, the importance of states (or subsequences) does not depend on the timesteps. The partial reward functions treat every subsequences at a time step equally. So, we think the subsequences that contain y_{t+1} are enough samples (and they should depend on q-value of y_{t+1} a lot because y_{t_1} itself is in the subsequences) to estimate q-value. \nIn equation (4), the subsequences that do not contain y_{t+1} are not ignored."
]
} | {
"paperhash": [
"abbeel|apprenticeship_learning_via_inverse_reinforcement_learning",
"arjovsky|towards_principled_methods_for_training_generative_adversarial_networks",
"bahdanau|an_actor-critic_algorithm_for_sequence_prediction",
"bengio|scheduled_sampling_for_sequence_prediction_with_recurrent_neural_networks",
"collobert|natural_language_processing_(almost)_from_scratch",
"finn|guided_cost_learning:_deep_inverse_optimal_control_via_policy_optimization",
"goodfellow|generative_adversarial_nets",
"graves|generating_sequences_with_recurrent_neural_networks",
"ho|generative_adversarial_imitation_learning",
"hochreiter|long_short-term_memory",
"huszár|how_(not)_to_train_your_generative_model:_scheduled_sampling,_likelihood,_adversary?",
"jaques|sequence_tutor:_conservative_fine-tuning_of_sequence_generation_models_with_kl-control",
"kim|convolutional_neural_networks_for_sentence_classification",
"kingma|adam:_a_method_for_stochastic_optimization",
"li|deep_reinforcement_learning_for_dialogue_generation",
"brendan|combining_policy_gradient_and_q-learning",
"papineni|bleu:_a_method_for_automatic_evaluation_of_machine_translation",
"marc|sequence_level_training_with_recurrent_neural_networks",
"shrivastava|learning_from_simulated_and_unsupervised_images_through_adversarial_training",
"sutskever|generating_text_with_recurrent_neural_networks",
"sutton|policy_gradient_methods_for_reinforcement_learning_with_function_approximation",
"ronald|simple_statistical_gradient-following_algorithms_for_connectionist_reinforcement_learning",
"wu|google's_neural_machine_translation_system:_bridging_the_gap_between_human_and_machine_translation",
"yu|seqgan:_sequence_generative_adversarial_nets_with_policy_gradient",
"zhang|adversarial_feature_matching_for_text_generation"
],
"title": [
"Apprenticeship learning via inverse reinforcement learning",
"Towards principled methods for training generative adversarial networks",
"An actor-critic algorithm for sequence prediction",
"Scheduled sampling for sequence prediction with recurrent neural networks",
"Natural language processing (almost) from scratch",
"Guided cost learning: Deep inverse optimal control via policy optimization",
"Generative adversarial nets",
"Generating sequences with recurrent neural networks",
"Generative adversarial imitation learning",
"Long short-term memory",
"How (not) to train your generative model: Scheduled sampling, likelihood, adversary?",
"Sequence tutor: Conservative fine-tuning of sequence generation models with kl-control",
"Convolutional neural networks for sentence classification",
"Adam: A method for stochastic optimization",
"Deep reinforcement learning for dialogue generation",
"Combining policy gradient and q-learning",
"Bleu: a method for automatic evaluation of machine translation",
"Sequence level training with recurrent neural networks",
"Learning from simulated and unsupervised images through adversarial training",
"Generating text with recurrent neural networks",
"Policy gradient methods for reinforcement learning with function approximation",
"Simple statistical gradient-following algorithms for connectionist reinforcement learning",
"Google's neural machine translation system: Bridging the gap between human and machine translation",
"Seqgan: Sequence generative adversarial nets with policy gradient",
"Adversarial feature matching for text generation"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"pieter abbeel",
"andrew y ng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"martin arjovsky",
"léon bottou"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"dzmitry bahdanau",
"philemon brakel",
"kelvin xu",
"anirudh goyal",
"ryan lowe",
"joelle pineau",
"aaron courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"samy bengio",
"oriol vinyals",
"navdeep jaitly",
"noam shazeer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ronan collobert",
"jason weston",
"léon bottou",
"michael karlen",
"koray kavukcuoglu",
"pavel kuksa"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chelsea finn",
"sergey levine",
"pieter abbeel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ian goodfellow",
"jean pouget-abadie",
"mehdi mirza",
"bing xu",
"david warde-farley",
"sherjil ozair",
"aaron courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex graves"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jonathan ho",
"stefano ermon"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sepp hochreiter",
"jürgen schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ferenc huszár"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"natasha jaques",
"shixiang gu",
"dzmitry bahdanau",
"josé miguel hernández-lobato",
"richard e turner",
"douglas eck"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoon kim"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik kingma",
"jimmy ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jiwei li",
"will monroe",
"alan ritter",
"michel galley",
"jianfeng gao",
"dan jurafsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"o' brendan",
"remi donoghue",
"koray munos",
"volodymyr kavukcuoglu",
" mnih"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"kishore papineni",
"salim roukos",
"todd ward",
"wei-jing zhu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"aurelio marc",
"sumit ranzato",
"michael chopra",
"wojciech auli",
" zaremba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ashish shrivastava",
"tomas pfister",
"oncel tuzel",
"josh susskind",
"wenda wang",
"russ webb"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ilya sutskever",
"james martens",
"geoffrey e hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"david a richard s sutton",
" mcallester",
"p satinder",
"yishay singh",
" mansour"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"williams ronald"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yonghui wu",
"mike schuster",
"zhifeng chen",
"v quoc",
"mohammad le",
"wolfgang norouzi",
"maxim macherey",
"yuan krikun",
"qin cao",
"klaus gao",
" macherey"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"lantao yu",
"weinan zhang",
"jun wang",
"yong yu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"y zhang",
"z gan",
"k fan",
"z chen",
"r henao",
"d shen",
"l carin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"1701.04862v1",
"",
"1506.03099v3",
"1103.0398v1",
"1603.00448v3",
"",
"1308.0850v5",
"1606.03476v1",
"",
"1511.05101v1",
"",
"1408.5882v2",
"1412.6980v9",
"1606.01541v4",
"1611.01626v3",
"",
"1511.06732v7",
"1612.07828v2",
"",
"",
"",
"1609.08144v2",
"1609.05473v6",
"1706.03850v3"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.555556 | 0.333333 | null | null | null | null | null | r1kP7vlRb |
||
wang|learning_to_encode_text_as_humanreadable_summaries_using_generative_adversarial_networks|ICLR_cc_2018_Conference | Learning to Encode Text as Human-Readable Summaries using Generative Adversarial Networks | Auto-encoders compress input data into a latent-space representation and reconstruct the original data from the representation. This latent representation is not easily interpreted by humans. In this paper, we propose training an auto-encoder that encodes input text into human-readable sentences. The auto-encoder is composed of a generator and a reconstructor. The generator encodes the input text into a shorter word sequence, and the reconstructor recovers the generator input from the generator output.
To make the generator output human-readable, a discriminator restricts the output of the generator to resemble human-written sentences. By taking the generator output as the summary of the input text, abstractive summarization is achieved without document-summary pairs as training data. Promising results are shown on both English and Chinese corpora. | {
"name": [],
"affiliation": []
} | null | [
"unsupervised learning",
"text summarization",
"adversarial training"
] | null | 2018-02-15 22:29:31 | 21 | null | null | null | null | null | null | null | null | false | As expressed by most reviewers, the idea of the paper is interesting: using summarization as an intermediate representation for an auto encoder. In addition, a GAN is used on the generator output to encourage the output to look like summaries. They just need unpaired summaries. Even if the idea is interesting, from the committee's perspective, important baselines are missing in the experimental section: why would one choose to use this method if it is not competitive with other baselines that have proposed work in this vein? One reviewer brings up the point that the method is significantly worse than a supervised baseline. Moreover, the authors mention the work of Miao and Blunsom, but could have used one of their experimental setups to show that at least in the semi-supervised scenario, this work empirically performs as well or better than that baseline. | {
"review_id": [
"S1ms6cqxM",
"HkSpi4cgz",
"B1_R9Digf"
],
"review": [
{
"title": "title: LEARN TO ENCODE TEXT AS COMPREHENSIBLE SUMMARY BY GENERATIVE ADVERSARIAL NETWORK",
"paper_summary": null,
"main_review": "main_review: Summary: In this work, the authors propose a text reconstructing auto encoder which takes a sentence as the input sequence and an integrated text generator generates another version of the input text while a reconstructor determines how well this generated text reconstructs the original input sequence. The input to the discriminator (as real data) is a sentence that summarizes the ground truth sentences (rather than the ground truth sentences themselves). The experiments are conducted in two datasets of English and Chinese corpora.\n\nStrengths:\nThe proposed idea of generating text using summary sentences is new.\nThe model overview in Figure 1 is informative.\nThe experiments are conducted on English and Chinese corpora, comparison with competitive baselines are provided.\n\nWeaknesses:\nThe paper is poorly written which makes it difficult to understand. The second paragraph in the introduction is quite cryptic. Even after reading the entire paper a couple of times, it is not clear how the summary text is obtained, e.g. do the authors ask annotators to read sentences and summarize them? If so, based on which criteria do the annotators summarize text, how many annotators are there? Similarly, if so this would mean that the authors use additional supervision than the compared models. Please clarify how the summary text is obtained.\n\nIn footnote 1, the authors mention “seq2seq2seq2” term which they do not explain anywhere in the text.\n\nNo experiments that generate raw text (without using summaries) are provided. It would be interesting to see if GAN learns to memorize the ground truth sentences or generates sentences with enough variation. \n\nIn the English Gigaword dataset the results consistently drop compared to WGAN. This behavior is observed for both the unsupervised setting and two versions of transfer learning settings. There are too few qualitative results: One positive qualitative result is provided in Figure 3 and one negative qualitative result is provided in Figure 4. Therefore, it is not easy for the reader to judge the behavior of the model well. \n\nThe choice of the evaluation metric is not well motivated. The standard measures in the literature also include METEOR, CIDER and SPICE. It would be interesting to see how the proposed model performs in these additional criteria. Moreover, the results are not sufficiently discussed. \n\nAs a general remark, although the idea presented in this paper is interesting, both in terms of writing and evaluation, this paper has not yet reached the maturity expected from an ICLR paper. Regarding writing, the definite and indefinite articles are sometimes missing and sometimes overused, similarly most of the times there is a singular/plural mismatch. This makes the paper very difficult to read. Often the reader needs to guess what is actually meant. Regarding the experiments, presenting results with multiple evaluation criteria and showing more qualitative results would improve the exposition.\n\nMinor comments:\nPage 5: real or false —> real or fake (true or false)\n\t the lower loss it get —> ?",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: GANs for text, with text latent variables",
"paper_summary": null,
"main_review": "main_review: This paper proposes a model for generating long text strings given shorter text strings, and for inferring suitable short text strings given longer strings. Intuitively, the inference step acts as a sort of abstractive summarization. The general gist of this paper is to take the idea from \"Language as a Latent Variable\" by Miao et al., and then change it from a VAE to an adversarial autoencoder. The authors should cite \"Adversarial Autoencoders\" by Makzhani et al. (ICLR 2016).\n\nThe experiment details are a bit murky, and seem to involve many ad-hoc decisions regarding preprocessing and dataset management. The vocabulary is surprisingly small. The reconstruction cost is not precisely explained, though I assume it's a teacher-forced conditional log-likelihood (conditioned on the \"summary\" sequence). The description of baselines for REINFORCE is a bit strange -- e.g., annealing a constant in the baseline may affect variance of the gradient estimator, but the estimator is still unbiased and shouldn't significantly impact exploration. Similar issues are present in the \"Self-critical...\" paper by Rennie et al. though, so this point isn't a big deal.\n\nThe results look decent, but I would be more impressed if the authors could show some benefit relative to the supervised model, e.g. in a reasonable semisupervised setting. Overall, the paper covers an interesting topic but could use extra editing to clarify details of the model and training procedure, and could use some redesign of the experiments to minimize the number of arbitrary (or arbitrary-seeming) decisions.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Simple but useful extension of recent works, missing important experiments",
"paper_summary": null,
"main_review": "main_review: TL;DR of paper: Generating summaries by using summaries as an intermediate representation for autoencoding the document. An encoder reads in the document to condition the generator which outputs a summary. The summary is then used to condition the decoder which is trained to output the original document. An additional GAN loss is used on the generator output to encourage the output to look like summaries -- this procedure only requires unpaired summaries. The results are that this procedure improves upon the trivial baseline but still significantly underperforms supervised training.\n\nThis paper builds upon two recent trends: a) cycle consistency, where f(g(x)) = x, which only requires unpaired data (i.e., CycleGAN), and (b) encoder-decoder models with a sequential latent representation (i.e., \"Language as a latent variable\" by Miao and Blunsom). A similar idea has also been explored by He et al. 2016 in \"Dual Learning for Machine Translation\". Both CycleGAN and He et al. 2016 are not cited. The key difference between this paper and He et al. 2016 is the use of GANs so only unpaired summaries are needed.\n\nThe idea is a simple but useful extension of these previous works. The problem set-up of unpaired summarization is not particularly compelling, since summaries are typically found paired with their original documents. It would be more interesting to see how well it can be used for other textual domains such as translation, where a lot of unpaired data exists (some other submissions to ICLR tackle this problem). Unsurprisingly, the proposed method requires a lot of twiddling to make it work since GANs, REINFORCE, and pretraining are necessary.\n\nA key baseline that is missing is pretraining the generator as a language model over summaries. The pretraining baseline in the paper is over predicting the next sentence / reordering, but this is an unfair comparison since the next sentence baseline never sees summaries over the course of training. Without this baseline, it is hard to tell whether GAN training is even useful. Another experiment missing is seeing whether joint supervised-GAN-reconstruction training can outperform purely supervised training. What is the performance of the joint training as the size of the supervised dataset is varied?\n\nThis paper has numerous grammatical and spelling errors throughout the paper (worse, the same errors are copy-pasted everywhere). Please spend more time editing the paper.\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.4444444477558136,
0.5555555820465088
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"paper is improved",
"Response to AnonReviewer1",
"Response to AnonReviewer3",
"Response to AnonReviewer2",
"Thank you for your review"
],
"comment": [
"Thank you for revising the paper. It is easier to read now, though later sections still seem less edited than the beginning.\n\nFor the semisupervised experiments a more appropriate baseline would be a likelihood-based equivalent of your technique, e.g. the \"dual\" training by He et al. 2016 in \"Dual Learning for Machine Translation\".",
"Thank you for your thorough read of the paper and pointing out our defects. The paper has been carefully revised. All the page numbers below refer to the revised version.\n\nMajor revisions:\n1. Writing:\nWe acknowledge that there are some defects in original version of paper which make readers difficult to understand. We have revised the grammatical errors in the paper. Because all the authors are not English naïve speakers, we hired an English native speaker with Computer Science PhD. to help us polish the English writing.\n\n2. How to obtain summaries:\nThe documents used in the study are from news. The titles of the documents are considered as the summaries. This is a typical setup in the study of summarization. We add footnote 1 for the above description (P2).\n\n3. Clarification of the core idea:\nWe are very sorry that the original version of introduction is misleading. The purpose of the work is to generate summaries from an article, not to exploit summaries to better generate articles. First, instead of encoding a sentence into another version of sentence, our text auto-encoder encodes long text into short text while reconstructor tries to reconstruct long text from encoded short text. The discriminator regularizes the latent representations encoded by encoder (generator in our paper) to be human-readable summaries. The short text encoded by generator can be considered as summary of long text, and thus unsupervised text summarization is achieved. The discriminator can use any human-written sentences as real data. Hence, there is no need of human annotators.\nIn the real implementation, instead of using general human-written sentences, we use the sentences from the titles of the documents as real data for discriminator for better performance. However, the titles do not have to be paired with the training documents (for example, in Section 7.3(P8), we can use documents from Gigaword and titles from CNN/Diary), so the training in unsupervised.\nWe have re-written the introduction, especially the second paragraph. We also add an overview figure to clearly describe the basic idea. Please refer to Fig. 1 (P2).\n\nFor specific points:\n1. “seq2seq2seq” in footnote: \nIn the typical seq2seq model, the input sequence is compressed into a vector and then back to another sequence. In our model, the input long sequence is first compressed into shorter sequence, and the model uses the short sequence to generate the long sequence. Hence, we called it “seq2seq2seq” model. This footnote is removed from the revised version.\n\n2. Experiments about text generation:\nThe target of this work is to generate short text as summary of input document instead of generating raw text. The generator never sees the summaries of the documents, so it cannot memorize the summaries.\n\n3. Comparison to original WGAN:\nAs mentioned in your review, in English Gigaword, compared to WGAN, the performance of our proposed adversarial REINOFRCE consistently drops both in unsupervised learning and transfer learning. However, after conducting semi-supervised training experiments, we found that with more labeled data available, adversarial REINFORCE is better than WGAN. We compared the performance of two models regarding available labeled data. Please find the results in Fig. 6 (P9). The full discussion of the two models is in Section 7.5 (P10). We also found that self-critic in Section 5.2.2 is helpful. The results are shown in Table 3 (P10).\n\n4. 
More examples:\nTo make reader better judge the proposed model, besides Fig. 3 and 4, we have more results in the appendix. Please refer to Fig. 7 to 12 (P15 - 17).\n\n5. Evaluation metric:\nWe know that ROUGE is not a perfect evaluation for summarization, but ROUGE is widely used to evaluate the generated summaries. In the previous work (A Neural Attention Model for Abstractive Sentence Summarization by Rush et al. 2015; Abstractive text summarization using sequence-to-sequence rnns and beyond, Nallapati et al. 2016; Abstractive Sentence Summarization with Attentive Recurrent Neural Networks, Chopra et al 2016), ROUGE is the only major evaluation measure used to evaluate the quality of the summaries.",
"Thank you for giving us some helpful suggestions. To reply your comment, the paper has been carefully revised. All the page numbers below refer to the revised version. \n\nWe made the following modifications of paper: \n1. We have cited the papers of Cycle GAN and He et al. 2016 mentioned in your comment.\n\n2. Problem setup: \nIt is true that in news domain, it is relatively easy to find document-summary pairs because usually people consider the news titles as summaries. However, for the domains like lecture recording, we think collecting labelled data is not trivial. Therefore, it is worth to study unsupervised abstractive summarization. In this paper, we still conduct the experiments on the news domain because the ground truth is available for evaluation. In the future, we can extend to other domains in which collecting label data is changeling. \n\n3. Pre-training generator as language model over summaries:\nThe model architecture of generator is a hybrid pointer network in which decoder selects part of the words from the generator input text. Hence, it’s difficult to train the generator as language model of summary without input text. We came up with another method that solves this problem. Given a set of unpaired documents and summaries, we used an unsupervised approach to match each document with its most relevant summaries. We represented each document and each summary as tf-idf (term frequency–inverse document frequency) vectors. Each document is matched to the summary whose vector has the largest cosine similarity with the document vector. \nWe further used the retrieved paired data to train generator and regarded its performance as baseline. With this method, generator can be roughly initialized with a language model of summaries. The ROUGE scores obtained in this approach is shown in row (B-2) of Table 2 (P8) Then we further improve the generator pre-trained in this way by the proposed unsupervised approach. However, with generator pre-trained by this method, we do not obtain the results better than the ones in Table 2.\n\n4. Semi-supervised training:\nIn semi-supervised training, we first pre-trained the generator with few labeled data. Then, we conducted teacher forcing with labeled data every several unsupervised training steps. We evaluated the performance of our model with regard to the number of labeled data. It’s worth mentioning that In English Gigaword corpus, with only 100K labeled data, semi-supervised training even slightly outperforms supervised-training with full labeled data. Please refer to the results in Figure 6 (P10) and Appendix B(P14). Furthermore, we also discussed the performance of our proposed adversarial REINFORCE in Section 7.5(P10) with regard to number of labeled data in semi-supervised learning. \n\n5. Writing:\nBecause all the authors are not English native speaker, we hired an English native speaker with Computer Science PhD. to help us polish the English writing.",
"We really appreciate your comment and suggestions. The paper has been carefully revised. All the page numbers below refer to the revised version.\n\n1. We have cited “Adversarial Autoencoder” by Makhzani et al.\n\n2. Data Preprocessing:\nIn Chinese Gigaword corpus, the arbitrary decisions regarding on data preprocessing aim to filter out some bad training examples. We conducted all experiments including baseline experiment on same set of pre-processed data. Hence, we still can compare our model to baseline models. In English Gigaword corpus, we simply use the training set pre-processed by previous work (A Neural Attention Model for Abstractive Sentence Summarization by Rush et al. 2015) and don’t do any further preprocess. \n\n3. Vocabulary size:\nThe reason that the vocabulary size in Chinese corpus is extremely small (4K) is that the text unit we use is Chinese character instead of Chinese word. In English corpus, the vocabulary size we used is 15K which is in a reasonable range.\n\n4. Reconstruction cost: \nThe reconstruction cost is conditioned on generated summary sequence and is teacher forced by source text. We add more description to clarify this. Please refer to the first 4 line in P4.\n\n5. Semi-supervised training:\nIn semi-supervised training, we first pre-trained the generator with few labeled data. Then, we conducted teacher forcing with labeled data every several unsupervised training steps. We evaluate the performance of our model with regard to the number of labeled data. It’s worth mentioning that In English Gigaword corpus, with only 100K labeled data, semi-supervised training even slightly outperforms supervised-training with full labeled data. Please refer to the results in Figure 6 (P10). Furthermore, we also discussed the performance of our proposed adversarial REINFORCE in Section 7.5(P10) with regard to number of labeled data in semi-supervised learning. \n\n6. Issues of ad-hoc decisions:\nWe found that in the semi-supervised scenario if we pretrain generator with few labeled data, ad-hoc decisions regarding on pre-training generator are not necessary. However, in completely unsupervised setting, we still not come up with a proper method to prevent ad-hoc decisions on pre-training generator.\n\n7. Clarification of details of the model and training procedure:\nWe have made some extra editing to clarify the details of model and training. The details are provided in Section 6(P6).",
"Thank you for reading the paper again and giving us comment. We will improve the writing of later sections. If we want to apply dual learning in this text summarization task, the training is not only on “source text -> summary -> source text”, but also on “summary -> source text -> summary”. In the “source text -> summary -> source text” path, reconstructor (summary -> source text) produces source text with teacher forcing because the source text is known. However, in the “summary -> source text -> summary” path, it’s difficult for reconstructor to produce source text from summaries without teacher-forcing (due to the unsupervised update) since source text is long. Hence, we do not consider this baseline in the first place. But if possible to modify the paper in the future, we will compare duel learning with our results on semi-supervised training."
]
} | {
"paperhash": [
"bahdanau|neural_machine_translation_by_jointly_learning_to_align_and_translate",
"bahdanau|an_actor-critic_algorithm_for_sequence_prediction",
"goodfellow|generative_adversarial_networks",
"gulrajani|improved_training_of_wasserstein_gans",
"he|dual_learning_for_machine_translation",
"hermann|mustafa_suleyman,_and_phil_blunsom._teaching_machines_to_read_and_comprehend",
"kiros|raquel_urtasun,_and_sanja_fidler._skip-thought_vectors",
"li|a_hierarchical_neural_autoencoder_for_paragraphs_and_documents",
"li|adversarial_learning_for_neural_dialogue_generation",
"lin|rouge:_a_package_for_automatic_evaluation_of_summaries",
"miao|language_as_a_latent_variable:_discrete_generative_models_for_sentence_compression",
"nallapati|abstractive_text_summarization_using_sequence-to-sequence_rnns_and_beyond",
"paulus|a_deep_reinforced_model_for_abstractive_summarization",
"marc|sequence_level_training_with_recurrent_neural_networks",
"rennie|self-critical_sequence_training_for_image_captioning",
"rush|a_neural_attention_model_for_abstractive_sentence_summarization",
"see|get_to_the_point:_summarization_with_pointer-generator_networks",
"sutskever|sequence_to_sequence_learning_with_neural_networks",
"tu|modeling_coverage_for_neural_machine_translation",
"yu|seqgan:_sequence_generative_adversarial_nets_with_policy_gradient",
"zhu|unpaired_image-to-image_translation_using_cycle-consistent_adversarial_networks"
],
"title": [
"Neural machine translation by jointly learning to align and translate",
"An actor-critic algorithm for sequence prediction",
"Generative adversarial networks",
"Improved training of wasserstein gans",
"Dual learning for machine translation",
"Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend",
"Raquel Urtasun, and Sanja Fidler. Skip-thought vectors",
"A hierarchical neural autoencoder for paragraphs and documents",
"Adversarial learning for neural dialogue generation",
"Rouge: A package for automatic evaluation of summaries",
"Language as a latent variable: Discrete generative models for sentence compression",
"Abstractive text summarization using sequence-to-sequence rnns and beyond",
"A deep reinforced model for abstractive summarization",
"Sequence level training with recurrent neural networks",
"Self-critical sequence training for image captioning",
"A neural attention model for abstractive sentence summarization",
"Get to the point: Summarization with pointer-generator networks",
"Sequence to sequence learning with neural networks",
"Modeling coverage for neural machine translation",
"Seqgan: Sequence generative adversarial nets with policy gradient",
"Unpaired image-to-image translation using cycle-consistent adversarial networks"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"dzmitry bahdanau",
"kyunghyun cho",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"dzmitry bahdanau",
"philemon brakel",
"kelvin xu",
"anirudh goyal",
"ryan lowe",
"joelle pineau",
"aaron courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ian j goodfellow",
"jean pouget-abadie",
"mehdi mirza",
"bing xu",
"david warde-farley",
"sherjil ozair",
"aaron courville",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ishaan gulrajani",
"faruk ahmed",
"martin arjovsky",
"aaron vincent dumoulin",
" courville"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"di he",
"yingce xia",
"tao qin",
"liwei wang",
"nenghai yu",
"tie-yan liu",
"wei-ying ma"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"karl moritz hermann",
"tom koisk",
"edward grefenstette",
"lasse espeholt",
"will kay"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ryan kiros",
"yukun zhu",
"ruslan salakhutdinov",
"richard s zemel",
"antonio torralba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jiwei li",
"minh-thang luong",
"dan jurafsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jiwei li",
"will monroe",
"tianlin shi",
"sbastien jean",
"alan ritter",
"dan jurafsky"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chin-yew lin"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yishu miao",
"phil blunsom"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ramesh nallapati",
"bowen zhou"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"romain paulus",
"caiming xiong",
"richard socher"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"aurelio marc",
"sumit ranzato",
"michael chopra",
"wojciech auli",
" zaremba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"steven j rennie",
"etienne marcheret",
"youssef mroueh",
"jarret ross",
"vaibhava goel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alexander m rush",
"sumit chopra",
"jason weston"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"abigail see",
"peter j liu",
"christopher d manning"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ilya sutskever",
"oriol vinyals",
"v quoc",
" le"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"zhaopeng tu",
"zhengdong lu",
"yang liu",
"xiaohua liu",
"hang li"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"lantao yu",
"weinan zhang",
"jun wang",
"yong yu"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jun-yan zhu",
"taesung park",
"phillip isola",
"alexei a efros"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"1409.0473v7",
"arXiv:1607.07086",
"1406.2661v1",
"1704.00028v3",
"1611.00179v1",
"",
"",
"1506.01057v2",
"1701.06547v5",
"",
"1609.07317v2",
"",
"1705.04304v3",
"1511.06732v7",
"",
"1509.00685v2",
"",
"1409.3215v3",
"1601.04811v6",
"1609.05473v6",
"arXiv:1703.10593"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.444444 | 0.75 | null | null | null | null | null | r1kNDlbCb |
||
morcos|on_the_importance_of_single_directions_for_generalization|ICLR_cc_2018_Conference | 3292528 | 1803.06959 | On the importance of single directions for generalization | Despite their ability to memorize large datasets, deep neural networks often achieve good generalization performance. However, the differences between the learned solutions of networks which generalize and those which do not remain unclear. Additionally, the tuning properties of single directions (defined as the activation of a single unit or some linear combination of units in response to some input) have been highlighted, but their importance has not been evaluated. Here, we connect these lines of inquiry to demonstrate that a network’s reliance on single directions is a good predictor of its generalization performance, across networks trained on datasets with different fractions of corrupted labels, across ensembles of networks trained on datasets with unmodified labels, across different hyper- parameters, and over the course of training. While dropout only regularizes this quantity up to a point, batch normalization implicitly discourages single direction reliance, in part by decreasing the class selectivity of individual units. Finally, we find that class selectivity is a poor predictor of task importance, suggesting not only that networks which generalize well minimize their dependence on individual units by reducing their selectivity, but also that individually selective units may not be necessary for strong network performance. | {
"name": [
"ari s morcos",
"david g t barrett",
"neil c rabinowitz",
"matthew botvinick",
"deepmind london"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
} | null | [
"Computer Science",
"Mathematics"
] | International Conference on Learning Representations | 2018-02-15 | 41 | 315 | 24 | null | null | null | null | null | null | true | The paper contributes to a body of empirical work towards understanding generalization in deep learning. They do this through a battery of experiments studying "single directions" or selectivity of small groups of neurons. The reviewers that have actively participated agree that the revision is of high quality, impact, originality, and significance. The issue of a lack of prescriptiveness was raised by one reviewer. I agree with the majority that this is not necessary, but nevertheless, the revision makes some suggestions. I urge the authors to express the appropriate amount of uncertainty regarding any prescriptions that have not been as thoroughly vetted!
| {
"review_id": [
"r1On1W5xf",
"SyGCUouxf",
"H1gh0U_lG"
],
"review": [
{
"title": "title: An important piece of the generalization puzzle ",
"paper_summary": null,
"main_review": "main_review: article summary: \nThe authors use ablation analyses to evaluate the reliance on single coordinate-aligned directions in activation space (i.e. the activation of single units or feature maps) as a function of memorization. They find that the performance of networks that memorize more are also more affected by ablations. This result holds even for identical networks trained on identical data. The dynamics of this reliance on single directions suggest that it could be used as a criterion for early stopping. The authors discuss this observation in relation to dropout and batch normalization. Although dropout is an effective regularizer to prevent memorization of random labels, it does not prevent over-reliance on single directions. Batch normalization does appear to reduce the reliance on single directions, providing an alternative explanation for the effectiveness of batch normalization. Networks trained without batch normalization also demonstrated a significantly higher amount of class selectivity in individual units compared to networks trained without batch normalization. Highly selective units were found to be no more important than units that were not selective to a particular class. These results suggest that highly selective units may actually be harmful to network performance. \n\n* Quality: The paper presents thorough and careful empirical analyses to support their claims.\n* Clarity: The paper is very clear and well-organized. Sufficient detail is provided to reproduce the results.\n* Originality: This work is one of many recent papers trying to understand generalization in deep networks. Their description of the activation space of networks that generalize compared to those that memorize is novel. The authors throughly relate their findings to related work on generalization, regularization, and pruning. However, the authors may wish to relate their findings to recent reports in neuroscience observing similar phenomena (see below).\n* Significance: The paper provides valuable insight that helps to relate existing theories about generalization in deep networks. The insights of this paper will have a large impact on regularization, early stopping, generalization, and methods used to explain neural networks. \n\nPros:\n* Observations are replicated for several network architectures and datasets. \n* Observations are very clearly contextualized with respect to several active areas of deep learning research.\nCons:\n* The class selectivity measure does not capture all class-related information that a unit may pass on. \n\nComments:\n* Regarding the class selectivity of single units, there is a growing body of literature in neurophysiology and neuroimaging describing similar observations where the interpretation has been that a primary role of any neural pathway is to “denoise” or cancel out the “distractor” rather than just amplifying the “signal” of interest. 
\n * Untuned But Not Irrelevant: The Role of Untuned Neurons In Sensory Information Coding, https://www.biorxiv.org/content/early/2017/09/21/134379\n * Correlated variability modifies working memory fidelity in primate prefrontal neuronal ensembles https://www.ncbi.nlm.nih.gov/pubmed/28275096\n * On the interpretation of weight vectors of linear models in multivariate neuroimaging http://www.sciencedirect.com/science/article/pii/S1053811913010914\n * see also LEARNING HOW TO EXPLAIN NEURAL NETWORKS https://openreview.net/forum?id=Hkn7CBaTW\n* Regarding the intuition in section 3.1, \"The minimal description length of the model should be larger for the memorizing network than for the structure- finding network. As a result, the memorizing network should use more of its capacity than the structure-finding network, and by extension, more single directions”. Does reliance on single directions not also imply a local encoding scheme? We know that for a fixed number of units, a distributed representation will be able to encode a larger number of unique items than a local one. Therefore if this behaviour was the result of needing to use up more of the capacity of the network, wouldn’t you expect to observe more distributed representations? \n\nMinor issues:\n* In the first sentence of section 2.3, you say you analyzed three models and then you only list two. It seems you forgot to include ResNet trained on ImageNet.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: review of \"On the importance of single directions for generalization\"",
"paper_summary": null,
"main_review": "main_review: This is an \"analyze why\" style of paper: the authors attempt to explain the relationship between some network property (in this case, \"reliance on single directions\"), and a desired performance metric (in this case, generalization ability). The authors quantify a variety of related ways to measure \"reliance on single directions\" and show that the more reliant on a single directions a given network is, the less well it generalizes. \n\nClarity: The paper is fairly clearly written. Sometimes key details are in the footnotes (e.g. see footnote 3) -- not sure why -- but on the whole, I think the followed the paper reasonably well. \n\nQuality: The work makes a good-faith attempt to be fairly systematic -- e.g evaluating several different types of network structures, with reasonable numbers of random initializations, and also illustrates the main point in several different comparatively independent-seeming ways. I feel fairly confident that the results are basically right within the somewhat limited domain that the authors explore. \n\nOriginality: This work is one in a series of papers about the topic of trying to understand what leads to good generalization in deep neural networks. I don't know that the concept of \"reliance on a single direction\" seems especially novel to me, but on the other hand, I can't think of another paper that precisely investigates this notion the way it is done here. \n\nSignificance: The work touches on some important issues. I think the demonstration that the existence of strongly class-selective neurons is not a good correlate for generalization is interesting. This point illustrates something that has made me a bit uncomfortable with the trend toward \"interpretable machine learning\" that has been arising recently: in many of those results, it is shown that some fraction of the units at various levels of a trained deepnet have optimal driving stimuli that seem somewhat interpretable, with the implication that the existence of such units is an important correlate of network performance. There has even been some claims that better-performing networks have more \"single-direction\" interpretable units [1]. The fact that the current results seem directly in contradiction to that line of work is interesting, and the connections to batch normalization and dropout are for the same reason interesting. However, I wish the authors had grappled more directly with the apparent contradiction with (e.g.) [1]. There is probably a kind of tradeoff here. The closer the training dataset is to what is being tested for \"generalization\", the more likely that having single-direction units is useful; and vice-versa. I guess the big question is: what types of generalization are actually demanded / desired in real deployed machine learning systems (or in the brain)? How does those cases compare with the toy examples analyzed here? The paper doesn't go far enough in really addressing these questions, but it is sort of beginning to make an effort. \n\nHowever, for me the main failing of the paper is that it's fairly descriptive without being that prescriptive. Does using their metric of reliance on a single direction, as a regularizer in and of itself, add anything above any beyond existing regularizers (e.g. batch normalization or dropout)? It doesn't seem like they tried. This seems to me the key question to understanding the significance of their results. 
Is \"reliance on single direction\" actually a good regularizer as such, especially for \"real\" problems like (e.g.) training a deep Convnet on (e.g.) ImageNet or some other challenging dataset? Would penalizing for this quantity improve the generalization of a network trained on ImageNet to other visual datasets (e.g. MS-COCO)? If so, this would be a very significant result and would make me really care about their idea of \"reliance on a singe direction\". If such results do not hold, it seems to me like one more theoretical possibility that would bite the dust when tested at scale. \n\n[1] http://netdissect.csail.mit.edu/final-network-dissection.pdf",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Review",
"paper_summary": null,
"main_review": "main_review: \nSummary:\n- nets that rely on single directions are probably overfitting\n- batch norm helps not having large single directions\n- high class selectivity of single units is a bad measure to find \"important\" neurons that help a NN generalize.\n\nThe experiments that this paper does are quite interesting, somewhat confirming intuitions that the community had, and bringing new insights into generalization. The presentation is good overall, but many minor improvements could help with readability.\n\n\nRemarks:\n- The first thing you should say in this paper is what you mean by \"single direction\", at least an intuition, to be refined later. The second sentence of section 2 could easily be plugged in your abstract.\n- You should already mention in section 2.1 that you are using ReLUs, otherwise clamping to 0 might take a different sense.\n- considering the lack of page limit at ICLR, making *all* your figures bigger would be beneficial to readability.\n- Figure 2's y values drop rapidly as a function of x, maybe make x have a log scale or something that zooms in near 0 would help readability.\n- Figure 3b's discrete regimes is very weird, did you actually look at how much these clusters converged to the same solution in parameter space?\n- Figure 4a is nice, but an additional figure zooming in on the first 2 epochs would be really great, because that AUC curve goes up really fast in the beginning.\n- Arpit et al. find that there is more cross-class information being shared for true labels than random labels. Considering you find that low class selectivity is an indicator of good generalization, would it make sense to look at \"cross-class selectivity\"? If a neuron learns a feature shared by 2 or more classes, then it has this interesting property of offering a discrimination potential for multiple classes at the same time, rather than just 1, making it more \"useful\" potentially, maybe less adversary prone?\n- You say in the figure captions that you use random orderings of the features to perform ablation, but nowhere in the main text (which would be nice).\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.8888888955116272,
0.4444444477558136,
0.6666666865348816
],
"confidence": [
0.5,
0.75,
0.5
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to AnonReviewer2 Part 2/2",
"Response to AnonReviewer1",
"Response to AnonReviewer3",
"Response to AnonReviewer2 Part 1/2",
"General response to reviewers accompanying revision"
],
"comment": [
"\"However, for me the main failing of the paper is that it's fairly descriptive without being that prescriptive. Does using their metric of reliance on a single direction, as a regularizer in and of itself, add anything above and beyond existing regularizers (e.g. batch normalization or dropout)?\"\n\nThough we would like to note that the primary goal of this work is to understand what factors lead to good generalization performance rather than to engineer a new model, we agree with the reviewer that a demonstration that the insights from our work can be used to directly improve model performance would be extremely valuable. However, all of the most obvious methods to regularize single direction reliance seem to reduce to dropout or one of its close variants. This is not to say that we believe there is no such regularizer -- it is merely to say that it is not obviously apparent. We have added a sentence in the Discussion to this effect (p.9, last complete paragraph). \n\nNonetheless, we do note that the insights from our work can be used prescriptively to indirectly improve models, as they provide a way to assess generalization performance without the need for a held-out validation set. In the original draft, we explored this in Fig. 4a-b as a means for early stopping. To expand on the potential for the method in this direction, we have added an additional experiment, in which we show that single direction reliance can be used as an effective method for hyperparameter selection as well (Fig. 4c, p.5 last complete paragraph). We believe that this approach may prove extremely useful, especially in situations in which labeled data is rare. ",
"We thank the reviewer for their kind comments and helpful feedback. We have incorporated the reviewer’s suggestions into the manuscript, and feel that they have substantially improved the clarity of the work. We have also performed additional experiments to address the importance of units with information about multiple classes, as the reviewer suggests. Details of these changes are below: \n\n\"The first thing you should say in this paper is what you mean by 'single direction.'\"\n\nWe have now defined ‘single directions’ in the abstract as the reviewer suggests, as well as adding an additional definition in the Introduction. We agree that this improves the clarity of the paper substantially. \n\n\"You should already mention in section 2.1 that you are using ReLUs.\"\n\nWe have moved section 2.3 to section 2.1, thereby highlighting that we are using ReLU’s from the start, as the reviewer suggests.\n\n\"considering the lack of page limit at ICLR, making *all* your figures bigger would be beneficial to readability.\"\n\nWe were perhaps overly concerned about the ICLR 8-page soft limit in the first draft. We have increased the size of all the figures, as the reviewer suggests, and indeed, this improves the presentation of the paper.\n\n\"Figure 2's y values drop rapidly as a function of x, maybe make x have a log scale or something that zooms in near 0 would help readability.\"\n\nWe have now re-plotted Figure 2 using a log scale for the x-axis. We feel it has substantially improved the figure. We thank the reviewer for the great suggestion!\n\n\"Figure 3b's discrete regimes is very weird, did you actually look at how much these clusters converged to the same solution in parameter space?\"\n\nWe absolutely agree that these discrete regimes are very weird, and fully intend to chase down the cause, and more generally, evaluate empirical convergence properties of multiple networks with the same topology but different random seeds in future work. However, an initial investigation into the causes of these regimes suggests that the answer is not obvious, and we believe that this question is beyond the scope of the present work.\n\n\"Arpit et al. find that there is more cross-class information being shared for true labels than random labels. Considering you find that low class selectivity is an indicator of good generalization, would it make sense to look at \"cross-class selectivity\"? If a neuron learns a feature shared by 2 or more classes, then it has this interesting property of offering a discrimination potential for multiple classes at the same time, rather than just 1, making it more \"useful\" potentially, maybe less adversary prone?\"\n\nWe agree with the reviewer, and indeed, we had included a discussion of the downsides of class selectivity in section titled ‘Quantifying class selectivity’. While class selectivity absolutely ignores units with information about multiple classes, it has been used extensively in neuroscience to find neurons with strong tuning properties (e.g., the cat neurons prominently featured in previous deep learning analyses). In contrast, a metric such as mutual information should highlight units that are informative about multiple classes (with ‘cross-class selectivity’), but not necessarily units that are obviously interpretable.\n\nHowever, we agree that it would be worthwhile to assess the relationship between cross-class selectivity (as measured by mutual information) and importance. 
To this end, we have performed a series of additional experiments using mutual information (Fig. 6b; A4; Section A.5). We found that while mutual information was slightly more predictive of unit importance than class selectivity it is still not a good predictor of unit importance (Fig. A4, p.15). Interestingly, while we had previously shown that batch normalization decreases class selectivity, we found that batch normalization actually increases mutual information (Fig. 6b, p.7). This result suggests that batch normalization encourages representations that are distributed across units as opposed to representations in which information about single classes is concentrated in single units. We have added text discussing these results in sections 2.3 (p.3) and 3.4 (p.7).\n\n\"You say in the figure captions that you use random orderings of the features to perform ablation, but nowhere in the main text (which would be nice).\"\n\nWe have now included a statement in the main text saying that each ablation curve contains multiple random orderings (p.4, first incomplete paragraph).",
"First off, we would like to thank the reviewer for the kind review and the helpful feedback, especially with respect to class selectivity and the relationship to neuroscience. We have provided detailed responses to these comments as well as pointers to changes in the paper below:\n\n\"The class selectivity measure does not capture all class-related information that a unit may pass on.\"\n\nWe agree with the reviewer, and indeed, we had included a discussion of the downsides of class selectivity in section titled ‘Quantifying class selectivity.’ While class selectivity absolutely ignores units with information about multiple classes, it has been used extensively in neuroscience to find neurons with strong tuning properties (e.g., the cat neurons prominently featured in previous deep learning analyses). In contrast, a metric such as mutual information should highlight units that are informative about multiple classes, but not necessarily units that are obviously interpretable.\n\nHowever, we agree that it would be worthwhile to assess the relationship between multi-class selectivity (as measured by mutual information) and importance. To this end, we have performed a series of additional experiments using mutual information (Fig. 6b; A4; Section A.5). We found that while mutual information was slightly more predictive of unit importance than class selectivity, it is still not a good predictor of unit importance (Fig. A4, p.15). Interestingly, while we had previously shown that batch normalization decreases class selectivity, we found that batch normalization actually increases mutual information (Fig. 6b, p.7). This result suggests that batch normalization encourages representations that are distributed across units as opposed to representations in which information about single classes is concentrated in single units. We have added text discussing these results in sections 2.3 (p.3) and 3.4 (p.7).\n\n\"... the authors may wish to relate their findings to recent reports in neuroscience ...\"\n\nWe are strong advocates of the idea that methods and ideas from neuroscience are useful for understanding machine learning models, and so, we have also included an additional paragraph in our ‘related work’ section (p.8, first complete paragraph) contextualizing our work in recent neuroscience developments regarding robustness to noise, distributed representations, and correlated variability, including references that the reviewer has provided and several other neuroscience papers that influenced our work. \n\n\"In the first sentence of section 2.3, you say you analyzed three models and then you only list two. It seems you forgot to include ResNet trained on ImageNet.\"\n\nGreat catch! We have resolved this now.\n",
"We thank the reviewer for their constructive feedback and their thorough reading of our paper. We have performed additional experiments (to show that the insights of this work can be used prescriptively) and provided additional discussion to work towards addressing the concerns the reviewer has raised. We have provided detailed responses to these comments as well as pointers to changes in the paper below:\n\n\"Sometimes key details are in the footnotes...\"\n\nWe initially put these details in footnotes to stay below the soft page limit. We have now moved all footnotes containing key details into the main text as the reviewer has requested.\n\n\"Originality: This work is one in a series of papers about the topic of trying to understand what leads to good generalization in deep neural networks. I don't know that the concept of \"reliance on a single direction\" seems especially novel to me, but on the other hand, I can't think of another paper that precisely investigates this notion the way it is done here.\"\n\nAs we discuss in both the introduction and related work sections of our paper, the concept of single direction reliance is related to previous theoretical work such as flat minima. However, to our knowledge, single direction reliance has never been empirically tested explicitly. Nonetheless, if the reviewer would be willing to point us in the direction of any related papers that we may have omitted from our manuscript, we would greatly appreciate it as we want to ensure that our discussion of prior work is as complete as possible.\n\n\"There has even been some claims that better-performing networks have more \"single-direction\" interpretable units [1]. The fact that the current results seem directly in contradiction to that line of work is interesting, and the connections to batch normalization and dropout are for the same reason interesting. However, I wish the authors had grappled more directly with the apparent contradiction with (e.g.) [1].\"\n\nWe have included an additional paragraph in the related work section (Section 4, p.9, third complete paragraph) comparing our work more extensively to the work of Bau et al. [1]. We believe that Bau et al. is extremely interesting work, and we note that, in many cases, our results are largely consistent with what Bau et al. observed; for example, we both found a relationship between selectivity and depth. However, we do acknowledge that they observed a correlation between network performance and the number of concept-selective units (Fig. 12 in Bau et al.). We believe that there are three potential explanations for this discrepancy:\n\n (1) As we note at the end of Section 2.3, class selectivity and feature selectivity (akin to the concept selectivity used in Bau et al.) may exhibit different properties. \n\n (2) Bau et al. compare networks with different numbers of filters (e.g., AlexNet, GoogleNet, VGG, and ResNet-152s), but measure the absolute number of unique detectors. It is possible that the number of unique detectors in better performing networks, such as ResNets, is simply a function of these networks having more filters. \n\n (3) Finally, both Bau et al. and our work observed a relationship between selectivity and depth (see Fig. 5 in Bau et al., and Fig. A2 in our manuscript). As Bau et al. compared the number of unique detectors across networks with substantially different depths, the increase in the number of unique detectors may have been due to the different depths of these networks. 
In line with this observation (as well as point 2 above), we note that in Fig. 12 in Bau et al., which plots the number of unique detectors as a function of accuracy on the action40 dataset, there appears to be little relationship when comparing only across points from the same model architecture. \n\n\"‘The closer the training dataset is to what is being tested for \"generalization\", the more likely that having single-direction units is useful; and vice-versa. I guess the big question is: what types of generalization are actually demanded / desired in real deployed machine learning systems (or in the brain)?\"\n\nWe have now included an additional paragraph in the Discussion section (p.9 last incomplete paragraph) addressing the distinction between different types of generalization based on the overlap between the train and test distributions. We believe that understanding how single direction reliance varies based on this overlap is an extremely interesting question although we feel it is beyond the scope of the present work. \n\n[1] David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. Network Dissection: Quantifying Interpretability of Deep Visual Representations. 2017. doi: 10.1109/CVPR.2017. 354. URL http://arxiv.org/abs/1704.05796. \n\n\n\n\n",
"We wish to thank the reviewers for their thoughtful and thorough reviews. In particular, we are glad that the reviewers found our paper to be \"an important piece of the generalization puzzle,\" that \"the work touches on some important issues,\" and that the experiments are \"quite interesting,\" \"bringing new insights into generalization.\" We are also glad that the reviewers found that the paper contains \"thorough and careful empirical analyses,\" \"is very clear and well-organized,\" and that the \"presentation is good overall.\"\n\nTo address the reviewers' comments we have performed several additional experiments, including additional figures expanding on the prescriptive implications of our work and detailing the relationship between mutual information, batch normalization, and unit importance. We have also made a number of changes to the text which we feel have significantly improved its clarity. For detailed descriptions of the changes we have made, please see our responses to individual reviewers below. As a result of these changes, our paper is now a little more than nine pages long. Due in large part to the reviewers’ constructive feedback, we believe that our paper has been substantially strengthened. "
]
} | {
"paperhash": [
"neyshabur|exploring_generalization_in_deep_learning",
"raghu|svcca:_singular_vector_canonical_correlation_analysis_for_deep_understanding_and_improvement",
"arpit|a_closer_look_at_memorization_in_deep_networks",
"wilson|the_marginal_value_of_adaptive_gradient_methods_in_machine_learning",
"zylberberg|untuned_but_not_irrelevant:_the_role_of_untuned_neurons_in_sensory_information_coding",
"bau|network_dissection:_quantifying_interpretability_of_deep_visual_representations",
"radford|learning_to_generate_reviews_and_discovering_sentiment",
"dziugaite|computing_nonvacuous_generalization_bounds_for_deep_(stochastic)_neural_networks_with_many_more_parameters_than_training_data",
"cheney|on_the_robustness_of_convolutional_neural_networks_to_internal_architecture_and_weight_perturbations",
"dinh|sharp_minima_can_generalize_for_deep_nets",
"shwartz-ziv|opening_the_black_box_of_deep_neural_networks_via_information",
"barrett|optimal_compensation_for_neuron_loss",
"zhang|understanding_deep_learning_requires_rethinking_generalization",
"molchanov|pruning_convolutional_neural_networks_for_resource_efficient_inference",
"alain|understanding_intermediate_layers_using_linear_classifier_probes",
"wu|google's_neural_machine_translation_system:_bridging_the_gap_between_human_and_machine_translation",
"keskar|on_large-batch_training_for_deep_learning:_generalization_gap_and_sharp_minima",
"li|pruning_filters_for_efficient_convnets",
"raghu|on_the_expressive_power_of_deep_neural_networks",
"anwar|structured_pruning_of_deep_convolutional_neural_networks",
"he|deep_residual_learning_for_image_recognition",
"ioffe|batch_normalization:_accelerating_deep_network_training_by_reducing_internal_covariate_shift",
"zhou|object_detectors_emerge_in_deep_scene_cnns",
"coates|emergence_of_object-selective_features_in_unsupervised_feature_learning",
"le|building_high-level_features_using_large_scale_unsupervised_learning",
"britten|the_analysis_of_visual_motion:_a_comparison_of_neuronal_and_psychophysical_performance",
"achille|emergence_of_invariance_and_disentangling_in_deep_representations",
"srivastava|dropout:_a_simple_way_to_prevent_neural_networks_from_overfitting",
"erhan|visualizing_higher-layer_features_of_a_deep_network",
"bousquet|stability_and_generalization",
"hochreiter|flat_minima",
"|on_the_expressive_power_of_deep_neural_networks"
],
"title": [
"Exploring Generalization in Deep Learning",
"SVCCA: Singular Vector Canonical Correlation Analysis for Deep Understanding and Improvement",
"A Closer Look at Memorization in Deep Networks",
"The Marginal Value of Adaptive Gradient Methods in Machine Learning",
"Untuned but not irrelevant: The role of untuned neurons in sensory information coding",
"Network Dissection: Quantifying Interpretability of Deep Visual Representations",
"Learning to Generate Reviews and Discovering Sentiment",
"Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data",
"On the Robustness of Convolutional Neural Networks to Internal Architecture and Weight Perturbations",
"Sharp Minima Can Generalize For Deep Nets",
"Opening the Black Box of Deep Neural Networks via Information",
"Optimal compensation for neuron loss",
"Understanding deep learning requires rethinking generalization",
"Pruning Convolutional Neural Networks for Resource Efficient Inference",
"Understanding intermediate layers using linear classifier probes",
"Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation",
"On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima",
"Pruning Filters for Efficient ConvNets",
"On the Expressive Power of Deep Neural Networks",
"Structured Pruning of Deep Convolutional Neural Networks",
"Deep Residual Learning for Image Recognition",
"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"Object Detectors Emerge in Deep Scene CNNs",
"Emergence of Object-Selective Features in Unsupervised Feature Learning",
"Building high-level features using large scale unsupervised learning",
"The analysis of visual motion: a comparison of neuronal and psychophysical performance",
"Emergence of invariance and disentangling in deep representations",
"Dropout: a simple way to prevent neural networks from overfitting",
"Visualizing Higher-Layer Features of a Deep Network",
"Stability and Generalization",
"Flat Minima",
"On the Expressive Power of Deep Neural Networks"
],
"abstract": [
"With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. We then investigate how well the measures explain different observed phenomena.",
"With the continuing empirical successes of deep networks, it becomes increasingly important to develop better methods for understanding training of models and the representations learned within. In this paper we propose Singular Vector Canonical Correlation Analysis (SVCCA), a tool for quickly comparing two representations in a way that is both invariant to affine transform (allowing comparison between different layers and networks) and fast to compute (allowing more comparisons to be calculated than with previous methods). We deploy this tool to measure the intrinsic dimensionality of layers, showing in some cases needless over-parameterization; to probe learning dynamics throughout training, finding that networks converge to final representations from the bottom up; to show where class-specific information in networks is formed; and to suggest new training regimes that simultaneously save computation and overfit less.",
"We examine the role of memorization in deep learning, drawing connections to capacity, generalization, and adversarial robustness. While deep networks are capable of memorizing noise data, our results suggest that they tend to prioritize learning simple patterns first. In our experiments, we expose qualitative differences in gradient-based optimization of deep neural networks (DNNs) on noise vs. real data. We also demonstrate that for appropriately tuned explicit regularization (e.g., dropout) we can degrade DNN training performance on noise datasets without compromising generalization on real data. Our analysis suggests that the notions of effective capacity which are dataset independent are unlikely to explain the generalization performance of deep networks when trained with gradient based methods because training data itself plays an important role in determining the degree of memorization.",
"Adaptive optimization methods, which perform local optimization with a metric constructed from the history of iterates, are becoming increasingly popular for training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We show that for simple overparameterized problems, adaptive methods often find drastically different solutions than gradient descent (GD) or stochastic gradient descent (SGD). We construct an illustrative binary classification problem where the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to half. We additionally study the empirical generalization capability of adaptive methods on several state-of-the-art deep learning models. We observe that the solutions found by adaptive methods generalize worse (often significantly worse) than SGD, even when these solutions have better training performance. These results suggest that practitioners should reconsider the use of adaptive methods to train neural networks.",
"In the sensory systems, most neurons’ firing rates are tuned to at least one aspect of the stimulus. Other neurons are appear to be untuned, meaning that their firing rates do not depend on the stimulus. Previous work on information coding in neural populations has ignored untuned neurons, based on the tacit assumption that they are unimportant. Recent experimental work has questioned this assumption, showing that in some circumstances, neurons with no apparent stimulus tuning can contribute to sensory information coding. These findings are intriguing, because they suggest that – by virtue of our ignoring putatively untuned neurons – our understanding of neural population coding might be incomplete. At the same time, several key questions remain unanswered: Are the impacts of putatively untuned neurons on population coding due to weak tuning that is nevertheless below the threshold the experimenters set for calling neurons tuned (vs untuned)? And why do there appear to be untuned neurons in the brain? Do mixed populations of tuned and untuned neurons have a functional advantage over populations containing only tuned neurons? Using theoretical calculations and analyses of in vivo neural data, I answer those questions by: a) showing how untuned neurons can enhance sensory information coding; b) demonstrating that this effect does not rely on weak tuning; and c) identifying conditions under which the neural code can be made more informative by replacing some of the tuned neurons with untuned ones. These conditions specify when there is a functional benefit to having untuned neurons in a circuit, and thus suggest a reason why the brain might contain untuned neurons. Overall, this work shows that, even in the extreme case, where some neurons have no tuning, those neurons can still contribute to sensory information coding, and thus should not be ignored.",
"We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a data set of concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are labeled across a broad range of visual concepts including objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that interpretability is an axis-independent property of the representation space, then we apply the method to compare the latent representations of various networks when trained to solve different classification problems. We further analyze the effect of training iterations, compare networks trained with different initializations, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.",
"We explore the properties of byte-level recurrent language models. When given sufficient amounts of capacity, training data, and compute time, the representations learned by these models include disentangled features corresponding to high-level concepts. Specifically, we find a single unit which performs sentiment analysis. These representations, learned in an unsupervised manner, achieve state of the art on the binary subset of the Stanford Sentiment Treebank. They are also very data efficient. When using only a handful of labeled examples, our approach matches the performance of strong baselines trained on full datasets. We also demonstrate the sentiment unit has a direct influence on the generative process of the model. Simply fixing its value to be positive or negative generates samples with the corresponding positive or negative sentiment.",
"One of the defining properties of deep learning is that models are chosen to have many more parameters than available training data. In light of this capacity for overfitting, it is remarkable that simple algorithms like SGD reliably return solutions with low test error. One roadblock to explaining these phenomena in terms of implicit regularization, structural properties of the solution, and/or easiness of the data is that many learning bounds are quantitatively vacuous when applied to networks learned by SGD in this \"deep learning\" regime. Logically, in order to explain generalization, we need nonvacuous bounds. We return to an idea by Langford and Caruana (2001), who used PAC-Bayes bounds to compute nonvacuous numerical bounds on generalization error for stochastic two-layer two-hidden-unit neural networks via a sensitivity analysis. By optimizing the PAC-Bayes bound directly, we are able to extend their approach and obtain nonvacuous generalization bounds for deep stochastic neural network classifiers with millions of parameters trained on only tens of thousands of examples. We connect our findings to recent and old work on flat minima and MDL-based explanations of generalization.",
"Deep convolutional neural networks are generally regarded as robust function approximators. So far, this intuition is based on perturbations to external stimuli such as the images to be classified. Here we explore the robustness of convolutional neural networks to perturbations to the internal weights and architecture of the network itself. We show that convolutional networks are surprisingly robust to a number of internal perturbations in the higher convolutional layers but the bottom convolutional layers are much more fragile. For instance, Alexnet shows less than a 30% decrease in classification performance when randomly removing over 70% of weight connections in the top convolutional or dense layers but performance is almost at chance with the same perturbation in the first convolutional layer. Finally, we suggest further investigations which could continue to inform the robustness of convolutional networks to internal perturbations.",
"Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.",
"Despite their great success, there is still no comprehensive theoretical understanding of learning with Deep Neural Networks (DNNs) or their inner organization. Previous work proposed to analyze DNNs in the \\textit{Information Plane}; i.e., the plane of the Mutual Information values that each layer preserves on the input and output variables. They suggested that the goal of the network is to optimize the Information Bottleneck (IB) tradeoff between compression and prediction, successively, for each layer. In this work we follow up on this idea and demonstrate the effectiveness of the Information-Plane visualization of DNNs. Our main results are: (i) most of the training epochs in standard DL are spent on {\\emph compression} of the input to efficient representation and not on fitting the training labels. (ii) The representation compression phase begins when the training errors becomes small and the Stochastic Gradient Decent (SGD) epochs change from a fast drift to smaller training error into a stochastic relaxation, or random diffusion, constrained by the training error value. (iii) The converged layers lie on or very close to the Information Bottleneck (IB) theoretical bound, and the maps from the input to any hidden layer and from this hidden layer to the output satisfy the IB self-consistent equations. This generalization through noise mechanism is unique to Deep Neural Networks and absent in one layer networks. (iv) The training time is dramatically reduced when adding more hidden layers. Thus the main advantage of the hidden layers is computational. This can be explained by the reduced relaxation time, as this it scales super-linearly (exponentially for simple diffusion) with the information compression from the previous layer.",
"The brain has an impressive ability to withstand neural damage. Diseases that kill neurons can go unnoticed for years, and incomplete brain lesions or silencing of neurons often fail to produce any behavioral effect. How does the brain compensate for such damage, and what are the limits of this compensation? We propose that neural circuits instantly compensate for neuron loss, thereby preserving their function as much as possible. We show that this compensation can explain changes in tuning curves induced by neuron silencing across a variety of systems, including the primary visual cortex. We find that compensatory mechanisms can be implemented through the dynamics of networks with a tight balance of excitation and inhibition, without requiring synaptic plasticity. The limits of this compensatory mechanism are reached when excitation and inhibition become unbalanced, thereby demarcating a recovery boundary, where signal representation fails and where diseases may become symptomatic. DOI: http://dx.doi.org/10.7554/eLife.12454.001",
"Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. \nThrough extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. \nWe interpret our experimental findings by comparison with traditional models.",
"We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation - a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102) relaying only on the first order gradient information. We also show that pruning can lead to more than 10x theoretical (5x practical) reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach.",
"Neural network models have a reputation for being black boxes. We propose to monitor the features at every layer of a model and measure how suitable they are for classification. We use linear classifiers, which we refer to as \"probes\", trained entirely independently of the model itself. \nThis helps us better understand the roles and dynamics of the intermediate layers. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems. \nWe apply this technique to the popular models Inception v3 and Resnet-50. Among other things, we observe experimentally that the linear separability of features increase monotonically along the depth of the model.",
"Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems. Unfortunately, NMT systems are known to be computationally expensive both in training and in translation inference. Also, most NMT systems have difficulty with rare words. These issues have hindered NMT's use in practical deployments and services, where both accuracy and speed are essential. In this work, we present GNMT, Google's Neural Machine Translation system, which attempts to address many of these issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder layers using attention and residual connections. To improve parallelism and therefore decrease training time, our attention mechanism connects the bottom layer of the decoder to the top layer of the encoder. To accelerate the final translation speed, we employ low-precision arithmetic during inference computations. To improve handling of rare words, we divide words into a limited set of common sub-word units (\"wordpieces\") for both input and output. This method provides a good balance between the flexibility of \"character\"-delimited models and the efficiency of \"word\"-delimited models, naturally handles translation of rare words, and ultimately improves the overall accuracy of the system. Our beam search technique employs a length-normalization procedure and uses a coverage penalty, which encourages generation of an output sentence that is most likely to cover all the words in the source sentence. On the WMT'14 English-to-French and English-to-German benchmarks, GNMT achieves competitive results to state-of-the-art. Using a human side-by-side evaluation on a set of isolated simple sentences, it reduces translation errors by an average of 60% compared to Google's phrase-based production system.",
"The stochastic gradient descent (SGD) method and its variants are algorithms of choice for many Deep Learning tasks. These methods operate in a small-batch regime wherein a fraction of the training data, say $32$-$512$ data points, is sampled to compute an approximation to the gradient. It has been observed in practice that when using a larger batch there is a degradation in the quality of the model, as measured by its ability to generalize. We investigate the cause for this generalization drop in the large-batch regime and present numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions - and as is well known, sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. We discuss several strategies to attempt to help large-batch methods eliminate this generalization gap.",
"The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks.",
"We propose a new approach to the problem of neural network expressivity, which seeks to characterize how structural properties of a neural network family affect the functions it is able to compute. Our approach is based on an interrelated set of measures of expressivity, unified by the novel notion of trajectory length, which measures how the output of a network changes as the input sweeps along a one-dimensional path. Our findings can be summarized as follows: \n(1) The complexity of the computed function grows exponentially with depth. \n(2) All weights are not equal: trained networks are more sensitive to their lower (initial) layer weights. \n(3) Regularizing on trajectory length (trajectory regularization) is a simpler alternative to batch normalization, with the same performance.",
"Real-time application of deep learning algorithms is often hindered by high computational complexity and frequent memory accesses. Network pruning is a promising technique to solve this problem. However, pruning usually results in irregular network connections that not only demand extra representation efforts but also do not fit well on parallel computation. We introduce structured sparsity at various scales for convolutional neural networks: feature map-wise, kernel-wise, and intra-kernel strided sparsity. This structured sparsity is very advantageous for direct computational resource savings on embedded computers, in parallel computing environments, and in hardware-based systems. To decide the importance of network connections and paths, the proposed method uses a particle filtering approach. The importance weight of each particle is assigned by assessing the misclassification rate with a corresponding connectivity pattern. The pruned network is retrained to compensate for the losses due to pruning. While implementing convolutions as matrix products, we particularly show that intra-kernel strided sparsity with a simple constraint can significantly reduce the size of the kernel and feature map tensors. The proposed work shows that when pruning granularities are applied in combination, we can prune the CIFAR-10 network by more than 70% with less than a 1% loss in accuracy.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.",
"With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.",
"Recent work in unsupervised feature learning has focused on the goal of discovering high-level features from unlabeled images. Much progress has been made in this direction, but in most cases it is still standard to use a large amount of labeled data in order to construct detectors sensitive to object classes or other complex patterns in the data. In this paper, we aim to test the hypothesis that unsupervised feature learning methods, provided with only unlabeled data, can learn high-level, invariant features that are sensitive to commonly-occurring objects. Though a handful of prior results suggest that this is possible when each object class accounts for a large fraction of the data (as in many labeled datasets), it is unclear whether something similar can be accomplished when dealing with completely unlabeled data. A major obstacle to this test, however, is scale: we cannot expect to succeed with small datasets or with small numbers of learned features. Here, we propose a large-scale feature learning system that enables us to carry out this experiment, learning 150,000 features from tens of millions of unlabeled images. Based on two scalable clustering algorithms (K-means and agglomerative clustering), we find that our simple system can discover features sensitive to a commonly occurring object class (human faces) and can also combine these into detectors invariant to significant global distortions like large translations and scale.",
"We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting from these learned features, we trained our network to recognize 22,000 object categories from ImageNet and achieve a leap of 70% relative improvement over the previous state-of-the-art.",
"We compared the ability of psychophysical observers and single cortical neurons to discriminate weak motion signals in a stochastic visual display. All data were obtained from rhesus monkeys trained to perform a direction discrimination task near psychophysical threshold. The conditions for such a comparison were ideal in that both psychophysical and physiological data were obtained in the same animals, on the same sets of trials, and using the same visual display. In addition, the psychophysical task was tailored in each experiment to the physiological properties of the neuron under study; the visual display was matched to each neuron's preference for size, speed, and direction of motion. Under these conditions, the sensitivity of most MT neurons was very similar to the psychophysical sensitivity of the animal observers. In fact, the responses of single neurons typically provided a satisfactory account of both absolute psychophysical threshold and the shape of the psychometric function relating performance to the strength of the motion signal. Thus, psychophysical decisions in our task are likely to be based upon a relatively small number of neural signals. These signals could be carried by a small number of neurons if the responses of the pooled neurons are statistically independent. Alternatively, the signals may be carried by a much larger pool of neurons if their responses are partially intercorrelated.",
"We show that invariance in a deep neural network is equivalent to information minimality of the representation it computes, and that stacking layers and injecting noise during training naturally bias the network towards learning invariant representations. Then, we show that overfitting is related to the quantity of information stored in the weights, and derive a sharp bound between this information and the minimality and Total Correlation of the layers. This allows us to conclude that implicit and explicit regularization of the loss function not only help limit overfitting, but also foster invariance and disentangling of the learned representation. We also shed light on the properties of deep networks in relation to the geometry of the loss function.",
"Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.",
"Deep architectures have demonstrated state-of-the-art results in a variety of settings, especially with vision datasets. Beyond the model definitions and the quantitative analyses, there is a need for qualitative comparisons of the solutions learned by various deep architectures. The goal of this paper is to find good qualitative interpretations of high level features represented by such models. To this end, we contrast and compare several techniques applied on Stacked Denoising Autoencoders and Deep Belief Networks, trained on several vision datasets. We show that, perhaps counter-intuitively, such interpretation is possible at the unit level, that it is simple to accomplish and that the results are consistent across various techniques. We hope that such techniques will allow researchers in deep architectures to understand more of how and why deep architectures work.",
"We define notions of stability for learning algorithms and show how to use these notions to derive generalization error bounds based on the empirical error and the leave-one-out error. The methods we use can be applied in the regression framework as well as in the classification one when the classifier is obtained by thresholding a real-valued function. We study the stability properties of large classes of learning algorithms such as regularization based algorithms. In particular we focus on Hilbert space regularization and Kullback-Leibler regularization. We demonstrate how to apply the results to SVM for regression and classification.",
"We present a new algorithm for finding low-complexity neural networks with high generalization capability. The algorithm searches for a flat minimum of the error function. A flat minimum is a large connected region in weight space where the error remains approximately constant. An MDL-based, Bayesian argument suggests that flat minima correspond to simple networks and low expected overfitting. The argument is based on a Gibbs algorithm variant and a novel way of splitting generalization error into underfitting and overfitting error. Unlike many previous approaches, ours does not require gaussian assumptions and does not depend on a good weight prior. Instead we have a prior over input output functions, thus taking into account net architecture and training set. Although our algorithm requires the computation of second-order derivatives, it has backpropagation's order of complexity. Automatically, it effectively prunes units, weights, and input lines. Various experiments with feedforward and recurrent nets are described. In an application to stock market prediction, flat minimum search outperforms conventional backprop, weight decay, and optimal brain surgeon/optimal brain damage.",
"Proof. We show inductively that FA(x;W ) partitions the input space into convex polytopes via hyperplanes. Consider the image of the input space under the first hidden layer. Each neuron v i defines hyperplane(s) on the input space: letting W (0) i be the ith row of W , b i the bias, we have the hyperplane W (0) i x + bi = 0 for a ReLU and hyperplanes W (0) i x + bi = ±1 for a hard-tanh. Considering all such hyperplanes over neurons in the first layer, we get a hyperplane arrangement in the input space, each polytope corresponding to a specific activation pattern in the first hidden layer."
],
"authors": [
{
"name": [
"Behnam Neyshabur",
"Srinadh Bhojanapalli",
"D. McAllester",
"N. Srebro"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Raghu",
"J. Gilmer",
"J. Yosinski",
"Jascha Narain Sohl-Dickstein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Devansh Arpit",
"Stanislaw Jastrzebski",
"Nicolas Ballas",
"David Krueger",
"Emmanuel Bengio",
"Maxinder S. Kanwal",
"Tegan Maharaj",
"Asja Fischer",
"Aaron C. Courville",
"Yoshua Bengio",
"Simon Lacoste-Julien"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ashia C. Wilson",
"R. Roelofs",
"Mitchell Stern",
"N. Srebro",
"B. Recht"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"J. Zylberberg"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"David Bau",
"Bolei Zhou",
"A. Khosla",
"A. Oliva",
"A. Torralba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Alec Radford",
"R. Józefowicz",
"I. Sutskever"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"G. Dziugaite",
"Daniel M. Roy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"N. Cheney",
"Martin Schrimpf",
"Gabriel Kreiman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Laurent Dinh",
"Razvan Pascanu",
"Samy Bengio",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Ravid Shwartz-Ziv",
"Naftali Tishby"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Barrett",
"S. Denéve",
"C. Machens"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Chiyuan Zhang",
"Samy Bengio",
"Moritz Hardt",
"B. Recht",
"O. Vinyals"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Pavlo Molchanov",
"Stephen Tyree",
"Tero Karras",
"Timo Aila",
"J. Kautz"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Guillaume Alain",
"Yoshua Bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Yonghui Wu",
"M. Schuster",
"Z. Chen",
"Quoc V. Le",
"Mohammad Norouzi",
"Wolfgang Macherey",
"M. Krikun",
"Yuan Cao",
"Qin Gao",
"Klaus Macherey",
"J. Klingner",
"Apurva Shah",
"Melvin Johnson",
"Xiaobing Liu",
"Lukasz Kaiser",
"Stephan Gouws",
"Yoshikiyo Kato",
"Taku Kudo",
"H. Kazawa",
"K. Stevens",
"George Kurian",
"Nishant Patil",
"Wei Wang",
"C. Young",
"Jason R. Smith",
"Jason Riesa",
"Alex Rudnick",
"O. Vinyals",
"G. Corrado",
"Macduff Hughes",
"J. Dean"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"N. Keskar",
"Dheevatsa Mudigere",
"J. Nocedal",
"M. Smelyanskiy",
"P. T. P. Tang"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Hao Li",
"Asim Kadav",
"Igor Durdanovic",
"H. Samet",
"H. Graf"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"M. Raghu",
"Ben Poole",
"J. Kleinberg",
"S. Ganguli",
"Jascha Narain Sohl-Dickstein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"S. Anwar",
"Kyuyeon Hwang",
"Wonyong Sung"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Kaiming He",
"X. Zhang",
"Shaoqing Ren",
"Jian Sun"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sergey Ioffe",
"Christian Szegedy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Bolei Zhou",
"A. Khosla",
"Àgata Lapedriza",
"A. Oliva",
"A. Torralba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Adam Coates",
"A. Karpathy",
"A. Ng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Quoc V. Le",
"Marc'Aurelio Ranzato",
"R. Monga",
"M. Devin",
"G. Corrado",
"Kai Chen",
"J. Dean",
"A. Ng"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"K. Britten",
"MN Shadlen",
"W. Newsome",
"J. Movshon"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"A. Achille",
"Stefano Soatto"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Nitish Srivastava",
"Geoffrey E. Hinton",
"A. Krizhevsky",
"I. Sutskever",
"R. Salakhutdinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"D. Erhan",
"Yoshua Bengio",
"Aaron C. Courville",
"Pascal Vincent"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"O. Bousquet",
"A. Elisseeff"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"Sepp Hochreiter",
"J. Schmidhuber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [],
"affiliation": []
}
],
"arxiv_id": [
"1706.08947",
"1706.05806",
"1706.05394",
"1705.08292",
null,
"1704.05796",
"1704.01444",
"1703.11008",
"1703.08245",
"1703.04933",
"1703.00810",
null,
"1611.03530",
null,
"1610.01644",
"1609.08144",
"1609.04836",
"1608.08710",
"1606.05336",
"1512.08571",
"1512.03385",
"1502.03167",
"1412.6856",
null,
"1112.6209",
null,
null,
null,
null,
null,
null,
null
],
"s2_corpus_id": [
"9597660",
"3237424",
"11455421",
"3273477",
"90666682",
"378410",
"14838925",
"9636400",
"13217484",
"7636159",
"6788781",
"4777991",
"6212000",
"17240902",
"9794990",
"3603249",
"5834589",
"14089312",
"2838204",
"7333079",
"206594692",
"5808102",
"8217340",
"7703389",
"206741597",
"797919",
"1142468",
"6844431",
"15127402",
"1157797",
"733161",
"260490596"
],
"intents": [
[
"background"
],
[
"methodology"
],
[
"result",
"background"
],
[
"background"
],
[
"background"
],
[
"background",
"result"
],
[
"background"
],
[],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background",
"methodology"
],
[
"methodology"
],
[],
[],
[],
[],
[
"result"
],
[
"methodology"
],
[
"background",
"methodology"
],
[
"methodology"
],
[
"background"
],
[
"background"
],
[
"background"
],
[
"background",
"methodology"
],
[
"background"
],
[],
[
"methodology"
],
[
"background"
],
[
"background"
],
[
"result"
]
],
"isInfluential": [
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
true,
false,
false,
false,
false,
false,
false,
false
]
} | null | 84 | 3.75 | 0.666667 | 0.583333 | null | null | null | null | null | r1iuQjxCZ |
devlin|semantic_code_repair_using_neurosymbolic_transformation_networks|ICLR_cc_2018_Conference | Semantic Code Repair using Neuro-Symbolic Transformation Networks | We study the problem of semantic code repair, which can be broadly defined as automatically fixing non-syntactic bugs in source code. The majority of past work in semantic code repair assumed access to unit tests against which candidate repairs could be validated. In contrast, the goal here is to develop a strong statistical model to accurately predict both bug locations and exact fixes without access to information about the intended correct behavior of the program. Achieving such a goal requires a robust contextual repair model, which we train on a large corpus of real-world source code that has been augmented with synthetically injected bugs. Our framework adopts a two-stage approach where first a large set of repair candidates are generated by rule-based processors, and then these candidates are scored by a statistical model using a novel neural network architecture which we refer to as Share, Specialize, and Compete. Specifically, the architecture (1) generates a shared encoding of the source code using an RNN over the abstract syntax tree, (2) scores each candidate repair using specialized network modules, and (3) then normalizes these scores together so they can compete against one another in comparable probability space. We evaluate our model on a real-world test set gathered from GitHub containing four common categories of bugs. Our model is able to predict the exact correct repair 41% of the time with a single guess, compared to 13% accuracy for an attentional sequence-to-sequence model. | {
"name": [],
"affiliation": []
} | A neural architecture for scoring and ranking program repair candidates to perform semantic program repair statically without access to unit tests. | [
"semantic program repair",
"neural program embeddings",
"deep learning"
] | null | 2018-02-15 22:29:37 | 18 | null | null | null | null | null | null | null | null | false | To summarize the pros and cons:
Pro:
* Interesting application
* Impressive results on a difficult task
* Nice discussion of results and informative examples
* Clear presentation, easy to read.
Con:
* The method appears to be highly specialized to the four bug types. It is not clear how generalizable it will be to more complex bugs, and to the real application scenarios where we are dealing with open world classification and there is not fixed set of possible bugs.
There were additional reviewer complaints that comparison to the simple seq-to-seq baseline may not be fair, but I believe that these have been addressed appropriately by the author's response noting that all other reasonable baselines require test cases, which is an extra data requirement that is not available in many real-world applications of interest.
This paper is somewhat on the borderline, and given the competitive nature of a top conference like ICLR I feel that it does not quite make the cut. It is definitely a good candidate for presentation at the workshop however. | {
"review_id": [
"rJvBlRteG",
"SynixA1WM",
"SkSfxq9xM"
],
"review": [
{
"title": "title: This paper presents a neural network architecture for program repair. Although this paper contains several strong points, the weaknesses of this paper are also very obvious.",
"paper_summary": null,
"main_review": "main_review: This paper presents a neural network architecture consisting of the share, specialize and compete parts for repairing code in four cases, i.e., VarReplace, CompReplace, IsSwap, and ClassMember. Experiments on the source codes from Github are conducted and the performance is evaluated against one sequence-to-sequence baseline method.\n\nPros:\n\n* The problem studied in this paper is of practical significance. \n* The proposed approach is technically sound in general. The paper is well-written and easy to follow.\n\nCons:\n\n* The scope of this paper is narrow. This paper can only repair the program in the four special cases. It leads to a natural question that how many other cases besides the four? It seems that even if the proposed method works pretty well in practice, it would not be very useful since it is effective to only 4 out of a huge number of cases that a program could be wrong.\n\n* Although the proposed architecture is specially designed for this problem, the components are a straight-forward application of existing approaches. E.g., The SHARE component that using bidirectional LSTM to encode from AST has been studied before and the specialized network has been studied in (Andreas et al., 2016). This reduces the novelty and technical contribution of this paper.\n\n* Many technical details have not been well-explained. For example, how to determine the number of candidates m, since different snippets may have different number of candidates? How to train the model? What is the loss function?\n\n* The experiments are weak. 1) the state-of-the-art program repair approaches such as the statistical program repair models (Arcuri and Yao, 2008) (Goues et al., 2012), Rule-Based Static Analyzers (Thenault, 2001) (PyCQA, 2012) should be compared. 2) the comparsion between SSC with and Seq-to-Seq is not fair, since the baseline is more general and not specially crafted for these 4 cases.\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Cool application of neural nets to bug repair, but only in 4 special cases",
"paper_summary": null,
"main_review": "main_review: This paper describes the application of a neural network architecture, called Share, Specialize, and Compete, to the problem of automatically generating big fixes when the bugs fall into 4 specific categories. The approach is validated using both real and injected bugs based on a software corpus of 19,000 github projects implemented in python. The model achieves performance that is noticeably better than human experts.\n\nThis paper is well-written and nicely organized. The technical approach is described in sufficient detail, and supported with illustrative examples. Most importantly, the problem tackled is ambitious and of significance to the software engineering community.\n\nTo me the major shortcoming of the model is that the analysis focuses only on 4 specific types of semantic bugs. In practice, this is a minute fraction of what can actually go wrong when writing code. And while the high performance achieved on these 4 bugs is noteworthy, the fact that the baseline compared against is more generic weakens the contribution. The authors should address this potential limitation. I would also be curious to see performance comparisons to recent rule-based and statistical techniques.\n\nOverall this is a nice paper with very promising results, but I believe addressing some of the above weaknesses (with experimental results, where possible) would make it an excellent paper.\n\n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: Interesting and challenging application with impressive results, but maybe a bit narrowly focused in its scope. ",
"paper_summary": null,
"main_review": "main_review: This paper introduces a neural network architecture for fixing semantic bugs in code. Focusing on four specific types of bugs, the proposed two-stage approach first generates a set of candidate repairs and then scores the repair candidates using a neural network trained on synthetically introduced bug/repair examples. Comparing to a prior sequence-to-sequence approach, the proposed approach achieved dominantly better accuracy on both synthetic and real bug datasets. On a real bug dataset constructed from GitHub commits, it was shown to outperform human. \n\nI find the application of neural networks to the problem of code repair to be highly interesting. The proposed approach is highly specialized for the specific four types of bugs considered here and appears to be effective for fixing these specific bug types, especially in comparison to the sequence-to-sequence model based approach. However, I was wondering whether limiting the output choices (based on the bug type) is going a long way toward improving the performance compared to seq-2-seq, which does not utilize such output constraints. What if we introduce the same type of constraints for the seq-2-seq model? For example, one can simply modifying the decoding process such that for locations that are not in the candidate set, the network simply makes no change, and for candidate-repair locations, the output space is limited to the specific choices provided in the candidate set. This will provide a more fair comparison between the different models. \nRight now it is not clear how much of the observed performance gain is due to the use of these constraints on the output space. \n\nIs there any control mechanism used to ensure that the real bug test set do not overlap with the training set? This is not clear to me. \n\nI find the comparison result to human performance to be interesting and somewhat surprising. This seems quite impressive. The presented example where human makes a mistake but the algorithm is correct is informative and provides some potential explanation to this. But it also raises a question. The specific example snippet could be considered to be correct when placed in a different context. Bugs are context sensitive artifacts. The setup of considering each function independently without any context seems like an inherent limitation in the types of bugs that this method could potentially address. Some discussion on the limitation of the proposed method seems to be warranted. \n\n\n\n\nPro:\nInteresting application \nImpressive results on a difficult task\nNice discussion of results and informative examples\nClear presentation, easy to read.\n\nCon: \nThe comparison to baseline seq-2-seq does not seem quite fair\nThe method appears to be highly specialized to the four bug types. It is not clear how generalizable it will be to more complex bugs, and to the real application scenarios where we are dealing with open world classification and there is not fixed set of possible bugs. \n",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.3333333432674408,
0.5555555820465088,
0.5555555820465088
],
"confidence": [
0.75,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Generality of the 4 repair cases?",
"Response",
"Review Response",
"Response"
],
"comment": [
"We thank the reviewers for their helpful comments and feedback. It seems although the reviewers liked our neural network architecture for semantic program repair, there is a common concern regarding the generality and scope of the 4 classes of bugs we selected for evaluation. We are explaining this concern in a separate comment just to reinforce the fact that the 4 classes we consider are actually quite general and cover a large number of program bugs in our exploratory study of github codebases, especially compared to other recent work that only considers 1 class (out of our 4 classes) and show its prevalence in other codebases.\n\nFirst, we selected the 4 classes of semantic bugs based on an extensive analysis of popular Python codebases on github to identify common classes of errors that programmers make, and using the following criterion “Bugs\nthat can be identified and fixed by an experienced human programmer, without running the code or\nhaving deep contextual knowledge of the program.” This requirement of not having test cases is important because it forces development of models which can infer intended semantic purpose from source code before proposing repairs, as a human programmer might, and is also a great real-world test bed for developing models of understanding source code. Note that this requirement also disallows using majority of recent statistical semantic program repair techniques that relies on the availability of test cases.\n\nSecond, the 4 classes we consider (VarReplace, CompReplace, IsSwap, and ClassMember) are very broad classes of bugs. Our test set (https://iclr2018anon.github.io/semantic_code_repair/index.html) shows both the prevalence and extreme diversity of these classes of bugs.\n\nFinally, there are other recent papers such as (http://bit.ly/2Dh7Qx8) that use models to identify only 1 class of bugs “Variable Misuse” that is similar to our VarReplace class.\n",
"Thanks for the review and questions. In our response, we briefly explain why the 4 classes of bugs we consider in this work are actually quite broad, and why other state-of-the-art program repair techniques are not applicable in our setting of identifying and repairing the programs without having access to test cases.\n\nQ. Scope of the paper is narrow and considers only 4 classes of bugs?\n\nFirst, we would like to point out that the 4 classes of semantic bugs that we chose were based on an extensive analysis of common classes of errors that programmers make, and which experienced programmers can potentially fix by only observing the program syntax without having access to any test cases or runtime information.\n\nSecond, the 4 classes we consider (VarReplace, CompReplace, IsSwap, and ClassMember) are very broad classes of bugs. Our test set (https://iclr2018anon.github.io/semantic_code_repair/index.html) shows both the prevalence and extreme diversity of these classes of bugs.\n\nFinally, there are other recent papers such as (http://bit.ly/2Dh7Qx8) that use models to identify only 1 class of bugs “Variable Misuse” that is similar to our VarReplace class.\n\nQ. how to determine the number of candidates m, since different snippets may have different number of candidates? How to train the model? What is the loss function?\n\nFor each snippet, our model first uses the SHARE module to emit a d-dimensional vector for an AST node of the snippet, which are then encoded using a bi-LSTM to compute a shared representation H. Next, for each repair type, the SPECIALIZE module uses H and either an MLP or a Pooled Pointer module to produce an un-normalized scalar score for each of the m repair candidates. For a given snippet, we first identify the possible repair locations based on our 4 classes. For each repair location, the m candidates are computed depending on the AST node class. For example, if the repair location is of type comparison operator, it will consists of m=7 repair candidates, where 7 is the number of comparison operators we consider (==, <=, >=, <, >,!=,No-op). Similarly, for IsSwap and ClassMember there are 2 choices per location and a No-op. For VarReplace, the corresponding candidates for a variable node is computed by considering every other variable node defined in the program. Finally, a separate softmax is used for each candidate repair location to generate a distribution over all repair choices at that location (including No-Op).\n\nSince we train our model on a set of synthetically injected bugs, we know exactly for a given snippet which candidate repairs are applicable (if any). For each repair instance (snippet+repair location), we obtain a different training instance, and use the standard cross-entropy loss to get the softmax distribution as close as possible to the ground truth corresponding to the injected bug.\n\nQ. the state-of-the-art program repair approaches such as the statistical program repair models (Arcuri and Yao, 2008) (Goues et al., 2012), Rule-Based Static Analyzers (Thenault, 2001) (PyCQA, 2012) should be compared\n\nPlease note that the state-of-the-art statistical approaches for program repair such as (Arcuri and Yao, 2008) and (Goues et al. 2012) use a set of test-cases to perform evolutionary algorithm to guide the search for program modifications. Our goal in this work is to automatically generate semantic repairs only looking at the program syntax without any test cases. 
This requirement is important because it forces development of models which can infer intended semantic purpose from source code before proposing repairs, as a human programmer might.\n\nThe general rule based static analyzers only consider shallow syntactic errors and do not consider the class of semantic errors we are tackling in this work, so they would not produce any results.\n\nQ. the comparsion between SSC with and Seq-to-Seq is not fair, since the baseline is more general and not specially crafted for these 4 cases.\n\nAttention based seq-to-seq trained on the same training set is the closest state of the art model previously proposed in recent syntactic program repair approaches (Gupta et. al. AAAI 2017 and Bhatia et. al. 2016). \n\n\nPlease let us know if there are any more clarifications that might be needed. We would like to reinforce this again that one of the goals of our work is to develop new neural models that are able to identify a rich class of semantic bugs without any test cases.\n",
"Thanks for the helpful comments and suggestions.\n\nQ. What if we add additional constraints on the output choices for seq2seq decoder to only candidate locations?\n\nThis constraint of only modifying the candidate locations is implicitly provided in our training set, where only bugs at candidate locations are provided and the remaining code is copied. When we analyze the baseline results, the seq2seq network is quite good at learning such a constraint of only modifying the candidate locations and it gets the right repair about 26% of cases (and 40% with some additional modifications). The remaining cases for which it makes mistakes in suggested repairs, it either predicts the wrong repair or chooses the wrong program location, but it performs such modifications only at the candidate locations, i.e. it already learns the constraint to only modify the candidate locations.\n\nQ. Is there any control mechanism used to ensure that the real bug test set do not overlap with the training set?\n\nFor the synthetic bug dataset (real code with synthetically injected bugs), we partition the data into training, test, and validation at the repository level, to eliminate any overlap between training and test. Moreover, we also filter out any training snippet which overlapped with any test snippet by more than 5 lines.\nThe real bug dataset (real code with real bugs) was obtained by crawling a different set of github repositories from the ones used in training. We also ensure there is no overlap of more than 5 lines with training programs.\n\nQ. Discussion about limitation of this work regarding not leveraging the context in which snippets are being used.\n\nThanks for the suggestion. We will add a new paragraph regarding this limitation and future work. Yes, our current model is trained on a dataset where we extracted every function from each Python source file as a code snippet. Each snippet is analyzed on its own without any surrounding context. Adding more context regarding usage of functions in larger codebases would be an interesting future extension of this work, which will involve developing more scalable models for larger codebases.\n\nQ. Specialized to only 4 classes of errors?\nFirst, we would like to point out that the 4 classes of semantic bugs that we chose were based on an extensive analysis of common classes of errors that programmers make, and which experienced programmers can potentially fix by only observing the program syntax without having access to any test cases or runtime information.\n\nSecond, the 4 classes we consider (VarReplace, CompReplace, IsSwap, and ClassMember) are very broad classes of bugs. Our test set (https://iclr2018anon.github.io/semantic_code_repair/index.html) shows both the prevalence and extreme diversity of these classes of bugs.\n\nFinally, there are other recent papers such as (http://bit.ly/2Dh7Qx8) that introduce new models to identify only 1 class of bugs “Variable Misuse” that is similar to our VarReplace class.\n",
"We thank the reviewer for the helpful comments and suggestions.\n\nQ. Only 4 classes of semantic bugs?\n\nFirst, we would like to point out that the 4 classes of semantic bugs that we chose were based on an extensive analysis of common classes of errors that programmers make, and which experienced programmers can potentially fix by only observing the program syntax without having access to any test cases or runtime information.\n\nSecond, the 4 classes we consider (VarReplace, CompReplace, IsSwap, and ClassMember) are very broad classes of bugs. Our test set (https://iclr2018anon.github.io/semantic_code_repair/index.html) shows both the prevalence and extreme diversity of these classes of bugs.\n\nFinally, there are other recent papers such as (http://bit.ly/2Dh7Qx8) that use models to identify only 1 class of bugs “Variable Misuse” that is similar to our VarReplace class.\n\n\nQ. Baseline is generic and weak?\n\nPlease note that in our problem setting, we do not have access to the set of test cases. Most of the previous semantic program repair techniques rely on the availability of a set of test cases to find a repair. The only input to our model is the buggy program (its Abstract syntax tree), and the model needs to learn to predict whether there is a semantic bug (amongst the 4 classes) present in the snippet and if yes, pinpoint the node location and suggest a repair. We chose the attentional seq-to-seq model because it is one of the common models that has previously been used in recent literature for syntactic program repair (Gupta et. al. AAAI 2017 and Bhatia et. al. 2016).\n"
]
} | {
"paperhash": [
"allamanis|suggesting_accurate_method_and_class_names",
"allamanis|a_convolutional_attention_network_for_extreme_summarization_of_source_code",
"andreas|neural_module_networks",
"arcuri|a_novel_co-evolutionary_approach_to_automatic_software_bug_fixing",
"bahdanau|neural_machine_translation_by_jointly_learning_to_align_and_translate",
"bhatia|automated_correction_for_syntax_errors_in_programming_assignments_using_recurrent_neural_networks",
"bhoopchand|learning_python_code_suggestion_with_a_sparse_pointer_network",
"bunel|learning_to_superoptimize_programs",
"goues|genprog:_a_generic_method_for_automatic_software_repair",
"gupta|deepfix:_fixing_common_c_language_errors_by_deep_learning",
"harman|automated_patching_techniques:_the_fix_is_in:_technical_perspective",
"libovickỳ|cuni_system_for_wmt16_automatic_post-editing_and_multimodal_translation_tasks",
"long|automatic_patch_generation_by_learning_correct_code",
"raychev|learning_from_large_codebases",
"raychev|code_completion_with_statistical_language_models",
"raychev|predicting_program_properties_from_big_code",
"schmaltz|sentence-level_grammatical_error_identification_as_sequence-to-sequence_correction",
"singh|automated_feedback_generation_for_introductory_programming_assignments"
],
"title": [
"Suggesting accurate method and class names",
"A convolutional attention network for extreme summarization of source code",
"Neural module networks",
"A novel co-evolutionary approach to automatic software bug fixing",
"Neural machine translation by jointly learning to align and translate",
"Automated correction for syntax errors in programming assignments using recurrent neural networks",
"Learning python code suggestion with a sparse pointer network",
"Learning to superoptimize programs",
"Genprog: A generic method for automatic software repair",
"Deepfix: Fixing common c language errors by deep learning",
"Automated patching techniques: the fix is in: technical perspective",
"Cuni system for wmt16 automatic post-editing and multimodal translation tasks",
"Automatic patch generation by learning correct code",
"Learning from large codebases",
"Code completion with statistical language models",
"Predicting program properties from big code",
"Sentence-level grammatical error identification as sequence-to-sequence correction",
"Automated feedback generation for introductory programming assignments"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"miltiadis allamanis",
"earl t barr",
"christian bird",
"charles a sutton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"miltiadis allamanis",
"hao peng",
"charles a sutton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jacob andreas",
"marcus rohrbach",
"trevor darrell",
"dan klein"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"andrea arcuri",
"xin yao"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"dzmitry bahdanau",
"kyunghyun cho",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sahil bhatia",
"rishabh singh"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"avishkar bhoopchand",
"tim rocktäschel",
"earl barr",
"sebastian riedel"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rudy bunel",
"alban desmaison",
"m pawan kumar",
"philip hs torr",
"pushmeet kohli"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"c le goues",
"t nguyen",
"s forrest",
"w weimer"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rahul gupta",
"soham pal",
"aditya kanade",
"shirish shevade"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mark harman"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jindřich libovickỳ",
"jindřich helcl",
"marek tlustỳ",
"pavel pecina",
"ondřej bojar"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"fan long",
"martin rinard"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"veselin raychev"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"veselin raychev",
"martin vechev",
"eran yahav"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"veselin raychev",
"martin vechev",
"andreas krause"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"allen schmaltz",
"yoon kim",
"alexander m rush",
"stuart m shieber"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"rishabh singh",
"sumit gulwani",
"armando solar-lezama"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"1602.03001v2",
"1511.02799v4",
"",
"1409.0473v7",
"1603.06129v1",
"1611.08307v1",
"1611.01787v3",
"",
"",
"",
"arXiv:1606.07481",
"",
"",
"",
"",
"arXiv:1604.04677",
"1204.1751v4"
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.481481 | 0.75 | null | null | null | null | null | r1hsJCe0Z |
roth|discretevalued_neural_networks_using_variational_inference|ICLR_cc_2018_Conference | Discrete-Valued Neural Networks Using Variational Inference | The increasing demand for neural networks (NNs) being employed on embedded devices has led to plenty of research investigating methods for training low precision NNs. While most methods involve a quantization step, we propose a principled Bayesian approach where we first infer a distribution over a discrete weight space from which we subsequently derive hardware-friendly low precision NNs. To this end, we introduce a probabilistic forward pass to approximate the intractable variational objective that allows us to optimize over discrete-valued weight distributions for NNs with sign activation functions. In our experiments, we show that our model achieves state of the art performance on several real world data sets. In addition, the resulting models exhibit a substantial amount of sparsity that can be utilized to further reduce the computational costs for inference. | {
"name": [],
"affiliation": []
} | Variational Inference for infering a discrete distribution from which a low-precision neural network is derived | [
"low-precision",
"neural networks",
"resource efficient",
"variational inference",
"Bayesian"
] | null | 2018-02-15 22:29:45 | 28 | null | null | null | null | null | null | null | null | false | This paper presents a somewhat new approach to training neural nets with ternary or low-precision weights. However the Bayesian motivation doesn't translate into an elegant and self-tuning method, and ends up seeming kind of complicated and ad-hoc. The results also seem somewhat toy. The paper is fairly clearly written, however. | {
"review_id": [
"SJJ5dvJgM",
"HkshYX9xz",
"H1S_cEcxM"
],
"review": [
{
"title": "title: This is outside of my areas of expertise. This is an educated guess. Marginal accept. ",
"paper_summary": null,
"main_review": "main_review: Summary: \nThe paper considers a Bayesian approach in order to infer the distribution over a discrete weight space, from which they derive hardware-friendly low precision NNs. This is an alternative to a standard quantization step, often performed in cases such as emplying NNs on embedded devices.\nThe NN setting considered here contains sign activation functions.\nThe experiments conducted show that the proposed model achieves nice performance on several real world data Comments\n\nDue to an error in the openreview platform, I didn't have the chance to bid on time. This is not within my areas of expertise. Sorry for any inconvenience.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: New approach to train ternary-weight NNs, unclear advantages with respect to previous works",
"paper_summary": null,
"main_review": "main_review: In this work, discrete-weight NNs are trained using the variational Bayesian framework, achieving similar results to other state-of-the-art models. Weights use 3 bits on the first layer and are ternary on the remaining layers.\n\n\n- Pros:\n\nThe paper is well-written and connections with the literature properly established.\n\nThe approach to training discrete-weights NNs, which is variational inference, is more principled than previous works (but see below).\n\n- Cons:\n\nThe authors depart from the original motivation when the central limit theorem is invoked. Once we approximate the activations with Gaussians, do we have any guarantee that the new approximate lower bound is actually a lower bound? This is not discussed. If it is not a lower bound, what is the rationale behind maximizing it? This seems to place this work very close to previous works, and not in the \"more principled\" regime the authors claim to seek.\n\nThe likelihood weighting seems hacky. The authors claim \"there are usually many more NN weights than there are data samples\". If that is the case, then it seems that the prior dominating is indeed the desired outcome. A different, more flat prior (or parameter sharing), can be used, but the described reweighting seems to be actually breaking a good property of Bayesian inference, which is defecting to the prior when evidence is lacking.\n\nIn terms of performance (Table 1), the proposed method seems to be on par with existing ones. It is unclear then what the advantage of this proposal is.\n\nSparsity figures are provided for the current approach, but those are not contrasted with existing approaches. Speedup is claimed with respect to an NN with real weights, but not with respect existing NNs with binary weights, which is the appropriate baseline.\n\n\n- Minor comments:\n\nPage 3: Subscript t and variable t is used for the targets, but I can't find where it is defined.\n\nOnly the names of the datasets used in the experiments are given, but they are not described, or even better, shown in pictures (maybe in a supplementary).\n\nThe title of the paper says \"discrete-valued NNs\". The weights are discrete, but the activations and outputs are continuous, so I find it confusing. As a contrast, I would be less surprised to hear a sigmoid belief network called a \"discrete-valued NN\", even though its weights are continuous.",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
},
{
"title": "title: review",
"paper_summary": null,
"main_review": "main_review: The authors consider the problem of ultra-low precision neural networks motivated by \nlimited computation and bandwidth. Their approach first posits a Bayesian neural network\na discrete prior on the weights followed by central limit approximations to efficiently \napproximate the likelihood. The authors propose several tricks like normalization and cost \nrescaling to help performance. They compare their results on several versions of MNIST. The \npaper is promising, but I have several questions:\n\n1) One major concern is that the experimental results are only on MNIST. It's important \nto have another (larger) dataset to understand how sensitive the approach is to \ncharacteristics of the data. It seems plausible that a more difficulty problem may \nrequire more precision.\n\n2) Likelihood weighting is related to annealing and variational tempering\n\n3) The structure of the paper could be improved:\n - The introduction contains way too many details about the method \n and related work without a clear boundary.\n - I would add the model up front at the start of section 2\n - Section 2.1 could be reversed or equations 2-5 could be broken with text \n explaining each choice \n\n4) What does training time look like? Is the Bayesian optimization necessary?",
"strength_weakness": null,
"questions": null,
"limitations": null,
"review_summary": null
}
],
"score": [
0.5555555820465088,
0.4444444477558136,
0.4444444477558136
],
"confidence": [
0,
0.75,
0.75
],
"novelty": [
null,
null,
null
],
"correctness": [
null,
null,
null
],
"clarity": [
null,
null,
null
],
"impact": [
null,
null,
null
],
"reproducibility": [
null,
null,
null
],
"ethics": [
null,
null,
null
]
} | {
"title": [
"Response to AnonReviewer3",
"Revision",
"Response to AnonReviewer1"
],
"comment": [
"Thank you for your valuable comments.\n\n- Data sets:\nWe are currently running another experiment on a TIMIT data set (phoneme classification) which is larger (N~140k), has more classes (39), but has less features (92). Other papers on resource efficient NNs typically evaluate on larger image tasks like CIFAR-10, SVHN and ImageNet. However, we refrain from doing so as we have not yet considered convolutional NNs and it is known that plain fully-connected NNs are far too weak to come close to state-of-the-art performance.\n\n- Likelihood-weighting and annealing/variational tempering:\nWe assume you are referring to [1]. We agree that our weighting scheme is similar to these methods but they are used in different ways and for different purposes. We will comment on this in our revision.\n\n- Structure of the paper:\nThank you for pointing this out. We will consider this in our revision.\n\n- Training time and Bayesian optimization:\nTraining time naturally increases compared to the training time of plain NNs since computing the required first and second moments of the probabilistic forward pass is more time-consuming. On a Nvidia GTX 1080 graphics card, a training epoch on MNIST with a minibatch size of 100 (500 parameter updates) takes approximately 8.8 seconds for the general 3 bit distribution and 7.5 seconds for the discretized Gaussian distribution compared to 1.3 seconds for plain NNs. Especially the first layer is a bottleneck since here the moments require computing weighted sums over all discrete values. Of course we could have hand-tuned the hyperparameters, but we believe that Bayesian optimization is a useful tool that relieves us from putting too much effort into finding suitable hyperparameters. Furthermore, it allows for a fair comparison between models by evaluating them for the same number of iterations. We will include the training times in our revision.\n\n[1] S. Mandt et al., Variational Tempering, AISTATS 2016",
"We added a revision of our paper where we changed the following aspects.\n\n(1) We left the structure of the paper largely unchanged. We removed some details about our method from the introduction but we kept the related work there as it is needed to motivate our work and to highlight the gap in the literature we intend to fill.\n\n(2) We added results of experiments performed on a larger TIMIT data set for phoneme classification. Furthermore, the data sets are now described in more detail in the supplementary material. Our model (single: with sign activation function, 3 bit weights in the input layer, and ternary weights in the following layers) performs on par with NN (real) and it outperforms NN STE on the TIMIT data set and the more challenging variants of MNIST with different kinds of background artifacts.\n\n(3) We added the training time in the experiments section.\n\n(4) A few minor changes.",
"Thank you for your valuable comments.\n\n- A general comment:\nIt appears that this review is mainly concerned with our method being slightly off from the textbook Bayesian approach. One major motivation of obtaining discrete-valued NNs by first inferring a distribution is that the distribution parameters are real-valued and can therefore be optimized with gradient based optimization. Directly optimizing the discrete weights would result in an intractable combinatorial optimization problem.\n\n- Issue with Gaussian approximation:\nWe agree that, due to the Gaussian approximation, we are not maximizing a lower bound anymore (at least we did not investigate on this). However, our motivation was to come up with a principled scheme to obtain resource-efficient NNs with discrete weights that achieve good performance, and, therefore, we accept to slightly depart from the full Bayesian path. By \"more principled\" we refer to existing work on resource efficient NNs (rather than to work in the Bayesian literature) where mostly some quantization step is applied or, in the case of the straight through estimator, a gradient that is clearly zero is \"approximated\" by something non-zero. With these methods, it is often not even clear if the gradient update procedures are optimizing any objective. We believe that this direction of research requires more principled methods such as the presented one.\n\n- Likelihood weighting:\nWe believe that the prior-term dominating the likelihood-term is an artifact of variational inference by minimizing the KL-divergence between approximate and true posterior that especially manifests itself in case of NNs. In many hierarchical Bayesian models, there is a latent variable per data point that one aims to estimate and therefore the numbers of KL-terms and likelihood-terms are balanced at all times. For NNs, the number of KL-terms is fixed as soon as we fix the structure of the NN, and, as is commonly known, larger NNs tend to perform better. Hence, using vanilla KL-minimization results in a dilemma if we want to estimate the parameters of a NN whose number of parameters is orders of magnitudes larger than the number of data samples. Using a flat (constant) prior only partly solves this problem as an entropy-term, which itself dominates the likelihood-term, would still be present. This entropy-term would cause the approximate posterior to take on larger variances which would again severely degrade performance. We agree that parameter sharing could help since it would reduce the number of KL-terms, but this would result in a different model.\n\n- Performance:\nOur \"single\" model outperforms the NN STE [1] by ~2-3% on MNIST background and MNIST background random, respectively. On the other data sets we are on par. Furthermore, we achieve similar performance as NN (real) which is more computationally expensive to evaluate.\n\n- Advantages of our model (compared to other resource-efficient methods):\n - Well defined objective function\n - Probabilistic forward pass simultaneously handles both discrete distributions and the sign activation function\n - Flexible choice of discrete weight space; can be different in each layer (other methods are often very rigid in this regard)\n - Low precision weights in the first layer\n - Additional information available in the approximate posterior\n\n- Sparsity:\nRegarding other methods: Binary and real weights (e.g. as in [1]), respectively, do not exhibit any sparsity at all, i.e. each connection of the NN is present. 
We point out that our method introduces, at least on some data sets, a substantial amount of sparsity that can be utilized to reduce computational costs. This was not a design goal in itself and we do not claim that our method is competitive with other approaches that explicitly aim to achieve sparsity. We think that the way that sparsity arises in our model is compelling: The value zero is explicitly modeled and we do not prune weights after training by some means of post-processing.\n\n- Minor comment on the title:\nIt seems there is a misunderstanding. In our experiments, the \"single\" model refers to a single low-resource NN obtained as the most probable NN from the approximate posterior. In this NN, the activations and outputs are *not* continuous - given that the inputs are low-precision fixed-point values (as in images), the activations in the first hidden layer are obtained by low-precision fixed point operations (or equivalently integer operations), and the activations in the following layers are obtained by accumulating -1 and +1. The activation functions are sign functions that result in either -1 or +1. The output activations are also integer valued as they only accumulate -1 and +1 (the softmax is not needed at test time). Only for the \"pfp\" model and during optimization we have to deal with real-valued quantities.\n\n- Other minor comments:\nThank you, we will use your comments to improve the paper.\n\n[1] Hubara et al., Binarized neural networks, NIPS 2016"
]
} | {
"paperhash": [
"anderson|the_high-dimensional_geometry_of_binary_neural_networks",
"bengio|estimating_or_propagating_gradients_through_stochastic_neurons_for_conditional_computation",
"blundell|weight_uncertainty_in_neural_networks",
"courbariaux|training_deep_neural_networks_with_low_precision_multiplications",
"courbariaux|binaryconnect:_training_deep_neural_networks_with_binary_weights_during_propagations",
"halberstadt|heterogeneous_acoustic_measurements_for_phonetic_classification",
"miguel|probabilistic_backpropagation_for_scalable_learning_of_bayesian_neural_networks",
"hinton|deep_neural_networks_for_acoustic_modeling_in_speech_recognition:_the_shared_views_of_four_research_groups",
"hubara|ran_el-yaniv,_and_yoshua_bengio._binarized_neural_networks",
"ioffe|batch_normalization:_accelerating_deep_network_training_by_reducing_internal_covariate_shift",
"jang|categorical_reparameterization_with_gumbel-softmax",
"kingma|adam:_a_method_for_stochastic_optimization",
"diederik|auto-encoding_variational_bayes",
"krizhevsky|imagenet_classification_with_deep_convolutional_neural_networks",
"larochelle|an_empirical_evaluation_of_deep_architectures_on_problems_with_many_factors_of_variation",
"lecun|gradient-based_learning_applied_to_document_recognition",
"maddison|the_concrete_distribution:_a_continuous_relaxation_of_discrete_random_variables",
"mandt|variational_tempering",
"paisley|variational_bayesian_inference_with_stochastic_search",
"rastegari|xnor-net:_imagenet_classification_using_binary_convolutional_neural_networks",
"rezende|stochastic_backpropagation_and_approximate_inference_in_deep_generative_models",
"roth|variational_inference_in_neural_networks_using_an_approximate_closed-form_objective",
"snoek|practical_bayesian_optimization_of_machine_learning_algorithms",
"soudry|expectation_backpropagation:_parameter-free_training_of_multilayer_neural_networks_with_continuous_or_discrete_weights",
"srivastava|dropout:_a_simple_way_to_prevent_neural_networks_from_overfitting",
"sutskever|sequence_to_sequence_learning_with_neural_networks",
"sida|fast_dropout_training",
"zue|speech_database_development_at_mit:_timit_and_beyond"
],
"title": [
"The high-dimensional geometry of binary neural networks",
"Estimating or propagating gradients through stochastic neurons for conditional computation",
"Weight uncertainty in neural networks",
"Training deep neural networks with low precision multiplications",
"Binaryconnect: Training deep neural networks with binary weights during propagations",
"Heterogeneous acoustic measurements for phonetic classification",
"Probabilistic backpropagation for scalable learning of Bayesian neural networks",
"Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups",
"Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks",
"Batch normalization: Accelerating deep network training by reducing internal covariate shift",
"Categorical reparameterization with Gumbel-softmax",
"Adam: A method for stochastic optimization",
"Auto-encoding variational Bayes",
"Imagenet classification with deep convolutional neural networks",
"An empirical evaluation of deep architectures on problems with many factors of variation",
"Gradient-based learning applied to document recognition",
"The concrete distribution: A continuous relaxation of discrete random variables",
"Variational tempering",
"Variational Bayesian inference with stochastic search",
"Xnor-net: Imagenet classification using binary convolutional neural networks",
"Stochastic backpropagation and approximate inference in deep generative models",
"Variational inference in neural networks using an approximate closed-form objective",
"Practical Bayesian optimization of machine learning algorithms",
"Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights",
"Dropout: a simple way to prevent neural networks from overfitting",
"Sequence to sequence learning with neural networks",
"Fast dropout training",
"Speech database development at MIT: Timit and beyond"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"authors": [
{
"name": [
"alexander g anderson",
"cory p berg"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yoshua bengio",
"nicholas léonard",
"aaron c courville"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"charles blundell",
"julien cornebise",
"koray kavukcuoglu",
"daan wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"matthieu courbariaux",
"yoshua bengio",
"jean-pierre david"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"matthieu courbariaux",
"yoshua bengio",
"jean-pierre david"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"andrew k halberstadt",
"james r glass"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jose miguel",
"hernandez-lobato ",
"ryan adams"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"g hinton",
"l deng",
"d yu",
"g e dahl",
"a r mohamed",
"n jaitly",
"a senior",
"v vanhoucke",
"p nguyen",
"t n sainath",
"b kingsbury"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"itay hubara",
"matthieu courbariaux",
"daniel soudry"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"sergey ioffe",
"christian szegedy"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"eric jang",
"shixiang gu",
"ben poole"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"diederik kingma",
"jimmy ba"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"p diederik",
"max kingma",
" welling"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"alex krizhevsky",
"ilya sutskever",
"geoffrey e hinton"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"hugo larochelle",
"dumitru erhan",
"aaron c courville",
"james bergstra",
"yoshua bengio"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"yann lecun",
"léon bottou",
"yoshua bengio",
"patrick haffner"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"chris j maddison",
"andriy mnih",
"yee whye teh"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"stephan mandt",
"james mcinerney",
"farhan abrol",
"rajesh ranganath",
"david m blei"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"john william paisley",
"david m blei",
"michael i jordan"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"mohammad rastegari",
"vicente ordonez",
"joseph redmon",
"ali farhadi"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"danilo jimenez rezende",
"shakir mohamed",
"daan wierstra"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"wolfgang roth",
"franz pernkopf"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"jasper snoek",
"hugo larochelle",
"ryan p adams"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"daniel soudry",
"itay hubara",
"ron meir"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"nitish srivastava",
"geoffrey e hinton",
"alex krizhevsky",
"ilya sutskever",
"ruslan salakhutdinov"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"ilya sutskever",
"oriol vinyals",
"v quoc",
" le"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"i sida",
"christopher d wang",
" manning"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
},
{
"name": [
"stephanie victor zue",
"james seneff",
" glass"
],
"affiliation": [
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
},
{
"laboratory": null,
"institution": null,
"location": null
}
]
}
],
"arxiv_id": [
"",
"1308.3432v1",
"1505.05424v2",
"1412.7024v5",
"1511.00363v3",
"",
"1502.05336v2",
"",
"",
"1502.03167v3",
"",
"1412.6980v9",
"arXiv:1312.6114",
"",
"",
"",
"1611.00712v3",
"1411.1810v4",
"1206.6430v1",
"1603.05279v4",
"1401.4082v3",
"",
"1206.2944v2",
"",
"",
"1409.3215v3",
"",
""
],
"s2_corpus_id": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"intents": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"isInfluential": [
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
]
} | null | 84 | null | 0.481481 | 0.5 | null | null | null | null | null | r1h2DllAW |