Research Papers & Studies

A curated collection of academic papers and research insights

Total papers: 6 · Read: 2 · Currently reading: 2 · Planned: 2

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, Niki Parmar, et al.
NIPS 2017
Read
AI ML NLP
Introduces the Transformer architecture, which relies entirely on attention mechanisms, dispensing with recurrence and convolution, and has since reshaped natural language processing.
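
As a rough illustration of the paper's core operation (not code from the paper itself), here is a minimal scaled dot-product attention in NumPy; the shapes and random inputs are assumptions for the example:

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # attention-weighted sum of values

# Toy example (hypothetical sizes): 4 positions, d_k = 8.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)         # shape (4, 8)

The sqrt(d_k) scaling keeps dot products from growing with dimension, which would otherwise push the softmax into regions with vanishing gradients.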

Deep Residual Learning for Image Recognition

Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
CVPR 2016
Reading
CV ML
Introduces residual networks (ResNets), which make very deep networks trainable by adding identity skip connections, letting stacked layers fit residual functions and easing the degradation in accuracy that plagues plain deep stacks.
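
A minimal sketch of a residual block in PyTorch, assuming equal input and output channels so the identity shortcut applies directly (the paper also uses projection shortcuts when shapes change):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Basic block: output = ReLU(F(x) + x), where F is two 3x3 convolutions."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # skip connection: gradients also flow through the identity path

x = torch.randn(1, 64, 32, 32)
y = ResidualBlock(64)(x)        # same shape as x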

Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor

Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, Sergey Levine
ICML 2018
Planned
Robotics AI
Presents a maximum entropy reinforcement learning algorithm that achieves state-of-the-art performance on a range of continuous control benchmark tasks.
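
A hedged sketch of the entropy-regularized Bellman target at the heart of the method; the toy numbers and variable names are this example's, not the paper's notation:

import numpy as np

def soft_q_target(reward, done, q1_next, q2_next, log_prob_next,
                  gamma=0.99, alpha=0.2):
    """Soft target: r + gamma * (min(Q1', Q2') - alpha * log pi(a'|s')).

    The -alpha * log pi term is the entropy bonus that rewards keeping
    the policy stochastic; the min over twin critics counteracts
    Q-value overestimation."""
    soft_value = np.minimum(q1_next, q2_next) - alpha * log_prob_next
    return reward + gamma * (1.0 - done) * soft_value

# Toy transition: reward 1.0, non-terminal, twin critics disagree slightly.
print(soft_q_target(reward=1.0, done=0.0, q1_next=5.2, q2_next=5.0,
                    log_prob_next=-1.3))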

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
NAACL 2019
Read
NLP AI
Introduces BERT, a method for pre-training language representations that obtains state-of-the-art results on eleven natural language processing tasks.
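
A toy sketch of the masked-language-modeling corruption that drives BERT's pre-training; the 15% rate and 80/10/10 split follow the paper, while the token IDs below are made up for illustration:

import random

MASK_ID = 103        # [MASK] in the bert-base-uncased vocabulary
VOCAB_SIZE = 30522   # BERT-base WordPiece vocabulary size

def mask_tokens(token_ids, mask_prob=0.15, seed=None):
    """Corrupt ~15% of positions: 80% -> [MASK], 10% -> random token, 10% unchanged.
    Returns (corrupted inputs, labels); -1 marks positions with no prediction."""
    rng = random.Random(seed)
    inputs, labels = list(token_ids), [-1] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() < mask_prob:
            labels[i] = tok                       # the model must recover this token
            roll = rng.random()
            if roll < 0.8:
                inputs[i] = MASK_ID
            elif roll < 0.9:
                inputs[i] = rng.randrange(VOCAB_SIZE)
            # else: keep the original token unchanged
    return inputs, labels

print(mask_tokens([2023, 2003, 1037, 7953], seed=0))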

Generative Adversarial Networks

Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, et al.
NIPS 2014
Reading
CV ML
Introduces Generative Adversarial Networks, a framework for estimating generative models via an adversarial process in which a generator and a discriminator are trained simultaneously against each other.
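
A compressed sketch of the adversarial game on toy 1-D data in PyTorch; the tiny networks and the Gaussian "real" distribution are placeholders, not the paper's experimental setup, and the generator loss is the non-saturating variant the paper recommends in practice:

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, 1) * 0.5 + 2.0    # toy "real" data: N(2.0, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator (maximize log D(G(z))).
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()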

Improving Language Understanding by Generative Pre-Training

Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever
OpenAI 2018
Planned
NLP AI
The original GPT paper; demonstrates that unsupervised generative pre-training of a language model on a diverse corpus, followed by discriminative fine-tuning, yields significant gains on a wide range of language understanding tasks.
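
A minimal sketch of the autoregressive pre-training objective: each position predicts the next token given its left context. The toy embedding below stands in for GPT's Transformer decoder stack, so this shows only the loss, not the architecture:

import torch
import torch.nn as nn

vocab, d = 100, 32                          # hypothetical vocabulary and hidden sizes
embed = nn.Embedding(vocab, d)
lm_head = nn.Linear(d, vocab)

tokens = torch.randint(0, vocab, (1, 16))   # one toy sequence of 16 token IDs
h = embed(tokens)                           # placeholder for the decoder stack's output
logits = lm_head(h)

# Shift by one so position t predicts token t+1: the language-modeling loss.
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, vocab),
    tokens[:, 1:].reshape(-1),
)
loss.backward()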