| Column | Type |
|---|---|
| paper_id | string (length 9–12) |
| venue | string (139 distinct values) |
| year | string (7 distinct values) |
| paper_title | string (length 0–181) |
| paper_authors | string (length 4–925) |
| paper_abstract | string (length 1–5k) |
| paper_keywords | string (length 2–436) |
| paper_content | string (length 0–100k) |
| review_id | string (length 9–12) |
| review_title | string (length 0–500) |
| review_rating | string (61 distinct values) |
| review_text | string (length 2–28.3k) |
| review_confidence | string (13 distinct values) |
| text | string (length 402–130k) |
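Each record pairs a paper's metadata with a single review of that paper, plus a pre-rendered chat-style prompt in the `text` column. Below is a minimal loading sketch using the Hugging Face `datasets` library; the repository id `user/openreview-reviews` and the split name are placeholders, since neither is shown on this page.

```python
# Minimal sketch, assuming the standard `datasets` API.
# "user/openreview-reviews" is a hypothetical repository id and
# "train" an assumed split name -- substitute the real ones.
from datasets import load_dataset

ds = load_dataset("user/openreview-reviews", split="train")

example = ds[0]
print(example["paper_title"])
print(example["review_title"])

# Ratings and confidences are stored as "N: description" strings
# (e.g. "5: Marginally below acceptance threshold"), so the numeric
# score can be split off the front of the string.
rating = int(example["review_rating"].split(":")[0])
confidence = int(example["review_confidence"].split(":")[0])
print(rating, confidence)
```

The preview rows below show one record per line, with long cells truncated.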
paper_id | venue | year | paper_title | paper_authors | paper_abstract | paper_keywords | paper_content | review_id | review_title | review_rating | review_text | review_confidence | text
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
faE-D_0d4M | ICLR.cc/2021/Conference | 2021 | Exploring representation learning for flexible few-shot tasks | ["Mengye Ren", "Eleni Triantafillou", "Kuan-Chieh Wang", "James Lucas", "Jake Snell", "Xaq Pitkow", "Andreas S. Tolias", "Richard Zemel"] | Existing approaches to few-shot learning deal with tasks that have persistent, rigid notions of classes. Typically, the learner observes data only from a fixed number of classes at training time and is asked to generalize to a new set of classes at test time. Two examples from the same class would always be assigned th... | ["Few-shot learning", "representation learning"] | ABSTRACTExisting approaches to few-shot learning deal with tasks that have persistent, rigidnotions of classes. Typically, the learner observes data only from a fixed numberof classes at training time and is asked to generalize to a new set of classes attest time. Two examples from the same class would always be assign... | hsXSg5k7xyX | Interesting idea with a few significant omissions | 5: Marginally below acceptance threshold | The authors propose a new view on few shot classification. Instead of having a fixed set of classes split into base and novel subsets, they propose to use image attributes to construct classes on the fly during training and testing. That is, in every episode, a class is constructed by random sampling a pair of attribut... | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature | <|im_start|>system\nYou are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|>\n<|im_start|>user\n### Paper Title\nExploring representation learning for flexible few-shot tasks\n### Paper Abstract\nExisting approaches to few-shot learning deal with tasks that have persistent, rigid not... |
ykCRDlfxmk | ICLR.cc/2021/Conference | 2021 | AutoHAS: Efficient Hyperparameter and Architecture Search | ["Xuanyi Dong", "Mingxing Tan", "Adams Wei Yu", "Daiyi Peng", "Bogdan Gabrys", "Quoc V Le"] | Deep learning models often require extensive efforts in optimizing hyperparameters and architectures. Standard hyperparameter optimization methods are expensive because of their multi-trial nature: different configurations are tried separately to find the best. In this paper, we propose AutoHAS, an efficient framework ... | ["HPO", "NAS", "AutoML"] | ABSTRACTDeep learning models often require extensive efforts in optimizing hyperparametersand architectures. Standard hyperparameter optimization methods are expensivebecause of their multi-trial nature: different configurations are tried separatelyto find the best. In this paper, we propose AutoHAS, an efficient frame... | q0Z_u5fJyS | Another tricky paper | 4: Ok but not good enough - rejection | This paper proposes a search framework that is very similar to "Neural Architecture Search with Reinforcement Learning", except that the authors claim their method can search discrete training hyper-parameters. The authors evaluate their method on several datasets and claims to achieve SoTA results.\n1. REINFORCE V.S.... | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature | <|im_start|>system\nYou are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|>\n<|im_start|>user\n### Paper Title\nAutoHAS: Efficient Hyperparameter and Architecture Search\n### Paper Abstract\nDeep learning models often require extensive efforts in optimizing hyperparameters and archi... |
r1lI3ertwH | ICLR.cc/2020/Conference | 2020 | WHAT DATA IS USEFUL FOR MY DATA: TRANSFER LEARNING WITH A MIXTURE OF SELF-SUPERVISED EXPERTS | ["Xi Yan", "David Acuna", "Sanja Fidler"] | "Transfer learning has proven to be a successful way to train high performing deep learning models i(...TRUNCATED)" | "[\"data\", \"mixture\", \"useful\", \"transfer\", \"target domain\", \"work\", \"relevant\", \"data(...TRUNCATED)" | "ABSTRACTTransfer learning has proven to be a successful way to train high performing deeplearning m(...TRUNCATED)" | HkgeoYJ5qS | Official Blind Review #2 | 3: Weak Reject | "This paper is focused on simplifying the use of larger datasets (via pretraining models) for the pu(...TRUNCATED)" | | "<|im_start|>system\nYou are a helpful assistant that reviews papers and provides feedback on their (...TRUNCATED)" |
S1L-hCNtl | ICLR.cc/2017/workshop | 2017 | Generative Adversarial Learning of Markov Chains | ["Jiaming Song", "Shengjia Zhao", "Stefano Ermon"] | "We investigate generative adversarial training methods to learn a transition operator for a Markov (...TRUNCATED) | ["Deep learning", "Unsupervised Learning"] | "ABSTRACTWe investigate generative adversarial training methods to learn a transition op-erator for (...TRUNCATED) | rkfqGpgie | Simple, appealing idea but no clear improvement over std. GANs | 6: Marginally above acceptance threshold | "The authors propose to use an adversarial objective to train a transition operator for a Markov cha(...TRUNCATED) | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | "<|im_start|>system\nYou are a helpful assistant that reviews papers and provides feedback on their (...TRUNCATED) |
HyecJGP5ge | ICLR.cc/2017/conference | 2017 | NEUROGENESIS-INSPIRED DICTIONARY LEARNING: ONLINE MODEL ADAPTION IN A CHANGING WORLD | ["Sahil Garg", "Irina Rish", "Guillermo Cecchi", "Aurelie Lozano"] | "In this paper, we focus on online representation learning in non-stationary environments which may (...TRUNCATED) | "[\"Unsupervised Learning\", \"Computer vision\", \"Transfer Learning\", \"Optimization\", \"Applica(...TRUNCATED) | "ABSTRACTIn this paper, we focus on online representation learning in non-stationary envi-ronments w(...TRUNCATED) | SkDONYuVx | Simple interesting modified online dictionary learning | 7: Good paper, accept | "The authors propose a simple modification of online dictionary learning: inspired by neurogenesis, (...TRUNCATED) | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | "<|im_start|>system\nYou are a helpful assistant that reviews papers and provides feedback on their (...TRUNCATED) |
Sks9_ajex | ICLR.cc/2017/conference | 2017 | "Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via (...TRUNCATED) | ["Sergey Zagoruyko", "Nikos Komodakis"] | "Attention plays a critical role in human visual experience. Furthermore, it has recently been demon(...TRUNCATED) | ["Computer vision", "Deep learning", "Supervised Learning"] | "ABSTRACTAttention plays a critical role in human visual experience. Furthermore, it hasrecently bee(...TRUNCATED) | H1BT0bwVe | Review | 6: Marginally above acceptance threshold | "The paper presented a modified knowledge distillation framework that minimizes the difference of th(...TRUNCATED) | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | "<|im_start|>system\nYou are a helpful assistant that reviews papers and provides feedback on their (...TRUNCATED) |
B1lf43A5Y7 | ICLR.cc/2019/Conference | 2019 | How to learn (and how not to learn) multi-hop reasoning with memory networks | ["Jifan Chen", "Greg Durrett"] | "Answering questions about a text frequently requires aggregating information from multiple places i(...TRUNCATED) | ["NLP", "Reading Comprehension", "Memory Networks", "Multi-hop Reasoning"] | "ABSTRACTAnswering questions about a text frequently requires aggregating informationfrom multiple p(...TRUNCATED) | Byxo2ZUq3Q | Interesting investigation but insufficient proposition and results | 5: Marginally below acceptance threshold | "The paper proposes to investigate the well-known problem of memory network learning and more precis(...TRUNCATED) | "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the rel(...TRUNCATED) | "<|im_start|>system\nYou are a helpful assistant that reviews papers and provides feedback on their (...TRUNCATED) |
H1Xw62kRZ | ICLR.cc/2018/Conference | 2018 | Leveraging Grammar and Reinforcement Learning for Neural Program Synthesis | ["Rudy Bunel", "Matthew Hausknecht", "Jacob Devlin", "Rishabh Singh", "Pushmeet Kohli"] | "Program synthesis is the task of automatically generating a program consistent with\na specificatio(...TRUNCATED) | ["Program Synthesis", "Reinforcement Learning", "Language Model"] | "ABSTRACTProgram synthesis is the task of automatically generating a program consistent witha specif(...TRUNCATED) | Hk4_Jw9xG | Good paper, could be more clearly written. | 5: Marginally below acceptance threshold | "The authors consider the task of program synthesis in the Karel DSL. Their innovations are to use r(...TRUNCATED) | 3: The reviewer is fairly confident that the evaluation is correct | "<|im_start|>system\nYou are a helpful assistant that reviews papers and provides feedback on their (...TRUNCATED) |
KCzRX9N8BIH | ICLR.cc/2021/Conference | 2021 | It Is Likely That Your Loss Should be a Likelihood | ["Mark Hamilton", "Evan Shelhamer", "William T. Freeman"] | "Many common loss functions such as mean-squared-error, cross-entropy, and reconstruction loss are u(...TRUNCATED) | "[\"Adaptive Losses\", \"Outlier Detection\", \"Adaptive Regularization\", \"Recalibration\", \"Robu(...TRUNCATED) | "ABSTRACTMany common loss functions such as mean-squared-error, cross-entropy, and re-construction l(...TRUNCATED) | h4g2A3zgEa2 | An interesting idea but writing and presentation should be improved. | 6: Marginally above acceptance threshold | "# Summary:\nThe paper proposes the use of complete parametrized likelihoods for providing supervisi(...TRUNCATED) | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | "<|im_start|>system\nYou are a helpful assistant that reviews papers and provides feedback on their (...TRUNCATED) |
rkl5CjC9Fm | ICLR.cc/2019/Conference | 2019 | Dual Importance Weight GAN | ["Gahye Lee", "Seungkyu Lee"] | "Generative Adversarial Networks (GAN) are trained to generate a sample image of interest. To this e(...TRUNCATED) | "[\"gan\", \"generative network\", \"distribution\", \"dual importance weights\", \"generated sample(...TRUNCATED) | "ABSTRACTGenerative Adversarial Networks (GAN) are trained to generate a sample imageof interest. To(...TRUNCATED) | BJxGzPMc2X | Exploiting importance sampling in the latent space of auto-encoder to alleviate mode collapse | 5: Marginally below acceptance threshold | "This paper proposed a new regularizer for the objective of GAN’s generator, with the purpose of a(...TRUNCATED) | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | "<|im_start|>system\nYou are a helpful assistant that reviews papers and provides feedback on their (...TRUNCATED) |
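The `text` column is a ChatML-style prompt: `<|im_start|>role ... <|im_end|>` blocks with the paper fields laid out under `### ...` headers, as visible in the truncated previews above. The sketch below shows how such a prompt could be assembled from a record's other columns; everything past the abstract section is an assumption, since every preview truncates before it.

```python
# Hypothetical reconstruction of the `text` prompt format seen in the
# previews. Only the system message and the "### Paper Title" /
# "### Paper Abstract" sections are confirmed by the data; any later
# sections are cut off by truncation and therefore omitted here.
SYSTEM_MSG = (
    "You are a helpful assistant that reviews papers "
    "and provides feedback on their quality."
)

def build_prompt(record: dict) -> str:
    return (
        f"<|im_start|>system\n{SYSTEM_MSG}<|im_end|>\n"
        f"<|im_start|>user\n"
        f"### Paper Title\n{record['paper_title']}\n"
        f"### Paper Abstract\n{record['paper_abstract']}"
        # ...the previews truncate here; later sections unknown.
    )
```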