NOTE: This website is no longer maintained as of June 2017. I am currently working on several other projects, which I hope to release in the coming year!
Welcome to The Neural Perspective! This blog is all about simplifying and democratizing deep learning concepts and applications. There will be two types of publications: tutorials and readings. The tutorials will consist of basic math and detailed implementations of specific concepts in TensorFlow (PyTorch coming soon!). The readings will cover recent publications that I found interesting; I will simplify much of the theory and math, and will occasionally implement a few of the papers as well.
Note: I made many of the tutorials over a year ago, but I try to refactor them and keep them up to date with newer TensorFlow/PyTorch releases. If something does not work, or there is a newer, more efficient way of doing something, please comment on the post! PyTorch content coming soon!
Thank you for all the support, corrections, and conversations, and for reaching 5,000+ followers 🙂
Showcase:

 Interpretability via Attentional and Memory-based Interfaces
 Using Fast Weights to Attend to the Recent Past
 Learning to Learn How to Answer Questions for Q/A Tasks
 PyTorch Video Tutorials
 Natural Language Processing with PyTorch
Research:
Generalization / Interpretability
 Understanding Deep Learning Requires Rethinking Generalization [arXiv]
 Making Neural Programming Architecture Generalize Via Recursion [OpenReview]
 Opening the Black Box of Deep Neural Networks via Information [arXiv]
Generative Adversarial Networks
 Generative Adversarial Networks [arXiv]
 Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks [arXiv]
 Generative Adversarial Text to Image Synthesis [arXiv]
 Improved Techniques for Training GANs [arXiv]
 Learning to Protect Communications with Adversarial Neural Cryptography [arXiv]
Miscellaneous
 WaveNet: A Generative Model for Raw Audio [arXiv] [DeepMind]
 Decoupled Neural Interfaces using Synthetic Gradients [arXiv] [DeepMind]
 Hybrid Computing using a Neural Network with Dynamic External Memory [Nature]
 Show, Attend and Tell: Neural Image Caption Generation with Visual Attention [arXiv]
One-Shot / Zero-Shot / Transfer Learning
Optimization / Architecture
 Highway Networks [arXiv]
 Maxout Networks [arXiv]
 HyperNetworks [arXiv]
 Using Fast Weights to Attend to the Recent Past [arXiv]
 Quasi-Recurrent Neural Networks [arXiv]
 Learning to learn by gradient descent by gradient descent [arXiv]
 Language Modeling with Gated Convolutional Networks [arXiv]
 Value Iteration Networks [arXiv]
 Adding Gradient Noise Improves Learning for Very Deep Networks [arXiv]
 Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer [OpenReview]
 Convolutional Neural Networks for Sentence Classification [arXiv]
 GRAM: Graph-based Attention Model for Healthcare Representation Learning [arXiv]
 Overcoming Catastrophic Forgetting in Neural Networks [arXiv]
 Online and Linear-Time Attention by Enforcing Monotonic Alignments [arXiv]
 Exploring Sparsity in Recurrent Neural Networks [arXiv]
Question Answering / Machine Comprehension
 A Neural Conversational Model [arXiv]
 Highlights and Tutorials for “Richard Socher on the Future of Deep Learning” [O’Reilly]
 Ask Me Anything: Dynamic Memory Networks for Natural Language Processing [arXiv]
 Dynamic Memory Networks for Visual and Textual Question Answering [arXiv]
 Dynamic Coattention Networks For Question Answering [arXiv]
 A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks [arXiv]
 Bidirectional Attention Flow for Machine Comprehension [arXiv]
 Generating Long and Diverse Responses with Neural Conversation Models [OpenReview]
 Gated-Attention Reader for Text Comprehension [arXiv]
 FVQA: Fact based Visual Question Answering [arXiv]
 Query-Reduction Networks for Question Answering [arXiv]
 Domain Adaptation in Question Answering [arXiv]
 Question Answering from Unstructured Text by Retrieval and Comprehension [arXiv]
 Adversarial Examples for Evaluating Reading Comprehension Systems [arXiv]
Recommendation Engines
Reinforcement Learning
 Episodic Exploration for Deep Deterministic Policies: An Application to StarCraft Micromanagement Tasks [arXiv]
 Third Person Imitation Learning [arXiv]
 Multi-agent Reinforcement Learning in Sequential Social Dilemmas [Paper]
Representation Learning
 Doctor AI: Predicting Clinical Events via Recurrent Neural Networks [arXiv]
 Distributed Representations of Words and Phrases and their Compositionality [NIPS]
 Multi-layer Representation Learning for Medical Concepts [arXiv]
 Poincaré Embeddings for Learning Hierarchical Representations [arXiv]
 Learning to Compute Word Embeddings On the Fly [arXiv]
 Learned in Translation: Contextualized Word Vectors [arXiv] [blog]
Seq-to-Seq Models
 Sequence to Sequence Learning with Neural Networks [arXiv]
 Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation [arXiv]
 Neural Machine Translation by Jointly Learning to Align and Translate – Attention in RNNs [arXiv]
 On Using Very Large Target Vocabulary for Neural Machine Translation – Sampled Softmax [arXiv]
 Pointer Sentinel Mixture Models [arXiv]
 Context-Dependent Word Representation for Neural Machine Translation [arXiv]
 Learning to Translate in Real-time with Neural Machine Translation [arXiv]
 Fully Character-Level Neural Machine Translation without Explicit Segmentation [arXiv]
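Several of the Seq-to-Seq papers above build on the attention mechanism from "Neural Machine Translation by Jointly Learning to Align and Translate." As a rough sketch (my own illustration, not code from any post, and a simplified dot-product variant rather than the paper's learned alignment), attention scores the encoder states against a query, normalizes the scores with a softmax, and returns a weighted sum:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of scores
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys, values):
    """Dot-product attention (simplified, for illustration):
    weight each value by how well its key matches the query,
    then return the weighted sum as the context vector."""
    scores = keys @ query             # (T,) alignment scores
    weights = softmax(scores)         # normalize to a distribution over steps
    context = weights @ values        # (d,) context vector
    return context, weights

# Toy example: three "encoder states" of dimension 2
keys = values = np.array([[1.0, 0.0],
                          [0.0, 1.0],
                          [1.0, 1.0]])
query = np.array([1.0, 0.0])
context, weights = attend(query, keys, values)
```

The attention weights form a probability distribution over the encoder time steps, which is what makes them useful for interpretability as well as translation.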
Tutorials:
Note: these tutorials are outdated and were written for TensorFlow. New, updated PyTorch tutorials with code will be available soon.
 Linear Regression
 Logistic Regression
 Vanilla Neural Network
 Weights Initialization
 Convolutional Neural Networks (CNN)
 Image Recognition with Inception
 Embeddings (skip-gram and CBOW) Implementations
 Recurrent Neural Networks (RNN) – Part 1: Basic RNN / Char-RNN
 Recurrent Neural Networks (RNN) – Part 2: Text Classification
 Recurrent Neural Networks (RNN) – Part 3: Encoder-Decoder
 Recurrent Neural Network (RNN) – Part 4: Attentional Interfaces
 Recurrent Neural Network (RNN) – Part 5: Custom Cells
 Gradients, Batch/Layer Normalization
 Generative Adversarial Networks
 Improved Techniques for Training GANs
 Reinforcement Learning (RL) – Policy Gradients I
 Reinforcement Learning (RL) – Policy Gradients II
 Convolutional Text Classification
 Using Fast Weights to Attend to the Recent Past
 Quasi-Recurrent Neural Networks
 Deep Convolutional Generative Adversarial Networks (DCGAN)
 InfoGAN Implementation
 Text to Image with Generative Adversarial Networks
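To give a flavor of what the first tutorial covers, here is a minimal linear-regression-by-gradient-descent sketch in plain NumPy (illustrative only; this is not the blog's TensorFlow implementation, and all names and values here are my own):

```python
import numpy as np

# Synthetic data: y = 3x + 2 plus a little noise
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=100)
y = 3.0 * X + 2.0 + 0.01 * rng.normal(size=100)

# Model: pred = w*x + b, trained by gradient descent on mean squared error
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    pred = w * X + b
    err = pred - y
    # Gradients of MSE = mean(err**2) with respect to w and b
    w -= lr * 2.0 * np.mean(err * X)
    b -= lr * 2.0 * np.mean(err)
```

After training, `w` and `b` should land close to the true slope 3 and intercept 2; the full tutorials build the same loop with a framework's autograd instead of hand-written gradients.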