Deep Learning

1. Berkeley Semantic Boundaries Dataset and Benchmark (SBD)

2. Berkeley Video Segmentation Dataset (BVSD)

2.1. train

2.2. test

3. Berkeley Segmentation Data Set 300 (BSDS300)

4. Unsupervised

4.1. Restricted Boltzmann Machines (see the CD-1 training sketch at the end of this section)

4.1.1. A fast learning algorithm for deep belief nets

4.2. Discriminative RBMs

4.2.1. Classification using Discriminative Restricted Boltzmann Machines

4.3. Conditional RBMs

4.3.1. Modeling Human Motion Using Binary Latent Variables

4.4. Hybrid RBMs

4.5. Autoencoders

4.5.1. Cat paper

4.6. Semi-supervised Learning with Deep Generative Models

4.7. Mean-covariance RBM (mcRBM)

4.8. Spike and slab RBM

4.9. DRAW: A Recurrent Neural Network For Image Generation

4.9.1. Implementation in Theano

4.10. An Infinite Restricted Boltzmann Machine
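
To make this section concrete, here is a minimal NumPy sketch of one CD-1 (single-step contrastive divergence) update for a binary RBM, the training rule behind 4.1.1. Variable names and toy sizes are ours; the practical details this sketch omits (momentum, weight decay, monitoring) are covered in 19.3.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_v, b_h, lr=0.1):
    """One CD-1 update on a batch of binary visible vectors v0."""
    # Positive phase: hidden probabilities and a binary sample, given the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step back to a reconstruction.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # CD-1 gradient estimate: data correlations minus reconstruction correlations.
    n = v0.shape[0]
    W = W + lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n
    b_v = b_v + lr * (v0 - p_v1).mean(axis=0)
    b_h = b_h + lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_v, b_h

# Toy run: 6 visible units, 4 hidden units, a batch of 8 random binary vectors.
n_vis, n_hid = 6, 4
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)
v0 = (rng.random((8, n_vis)) < 0.5).astype(float)
W, b_v, b_h = cd1_step(v0, W, b_v, b_h)
```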

5. Feed-forward networks

5.1. Backpropagation

5.1.1. Rprop

5.1.1.1. Climin implementation

5.1.2. RMSProp (update rule sketched in code at the end of this branch)

5.1.2.1. Climin implementation

5.1.3. Feedback alignment

5.1.4. Efficient backprop

5.1.5. Fundamental deep learning problem (vanishing/exploding gradients)

5.1.6. AdaDelta

5.1.6.1. Climin implementation

5.1.6.2. In Caffe

5.1.7. Adam

5.1.7.1. Implementation by author

5.1.8. AdaGrad

5.1.9. Conjugate gradient

5.1.9.1. Climin implementation

5.1.10. Quasi-Newton

5.1.10.1. Climin implementation

5.1.11. Gradient Descent

5.1.11.1. Climin implementation

5.1.12. Automatic differentiation in machine learning: a survey
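
Climin, Caffe, and the other implementations linked in this branch wrap update rules like these behind their own APIs. As a reference, a hedged NumPy sketch of RMSProp (5.1.2) and Adam (5.1.7): the defaults follow the Adam paper's suggested values, while the function names and toy objective are ours, not any library's interface.

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=1e-3, decay=0.9, eps=1e-8):
    """RMSProp: scale the step by a running RMS of recent gradients."""
    cache = decay * cache + (1 - decay) * grad ** 2
    return w - lr * grad / (np.sqrt(cache) + eps), cache

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """Adam: bias-corrected first/second moment estimates; t counts from 1."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)            # undo the zero-initialisation bias
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy usage: minimise f(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0])
m, v = np.zeros_like(w), np.zeros_like(w)
for t in range(1, 1001):
    w, m, v = adam_step(w, 2 * w, m, v, t, lr=0.01)
print(w)  # close to zero
```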

5.2. Linear rectifier (ReLU; sketched in code below)

5.2.1. Rectified Linear Units Improve Restricted Boltzmann Machines
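
For reference, the rectifier from this branch together with its subgradient, which is what backpropagation needs; a trivial NumPy sketch, not code from the paper above.

```python
import numpy as np

def relu(x):
    """Rectified linear unit: max(0, x), applied elementwise."""
    return np.maximum(0.0, x)

def relu_grad(x):
    """Subgradient of ReLU: 1 where the unit was active, else 0."""
    return (x > 0).astype(float)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x), relu_grad(x))  # [0. 0. 0. 1.5] [0. 0. 0. 1.]
```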

5.3. Sparsity

5.3.1. Dropout (sketched in code at the end of this branch)

5.3.2. DropConnect

5.3.3. Weight decay
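
A minimal sketch of dropout (5.3.1) in its "inverted" form, which rescales surviving units at training time so the test-time pass needs no change; the original paper instead halves the outgoing weights at test time. Weight decay (5.3.3) would simply add a `lam * w` term to each gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p_drop=0.5, train=True):
    """Inverted dropout: zero each unit with probability p_drop while
    training, and rescale the survivors by 1/(1 - p_drop) so the expected
    activation matches the deterministic test-time pass."""
    if not train:
        return x
    mask = (rng.random(x.shape) >= p_drop) / (1.0 - p_drop)
    return x * mask

h = rng.standard_normal((4, 8))       # hidden-layer activations
h_train = dropout(h)                  # stochastic, for training
h_test = dropout(h, train=False)      # deterministic, for evaluation
```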

5.4. Hessian-free optimization

5.5. Nesterov momentum (sketched in code below)

5.5.1. On the importance of initialization and momentum in deep learning

5.5.2. Simplified Nesterov momentum in section 3.5
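
A sketch of the lookahead form of Nesterov momentum discussed in 5.5.1: the gradient is evaluated at w + mu*v rather than at w. The function name and the toy quadratic are ours.

```python
import numpy as np

def nesterov_step(w, v, grad_fn, lr=0.01, mu=0.9):
    """Nesterov momentum: take the gradient at the lookahead point
    w + mu * v, then apply the momentum-smoothed step."""
    g = grad_fn(w + mu * v)
    v = mu * v - lr * g
    return w + v, v

# Toy usage on f(w) = ||w||^2, whose gradient is 2w.
w, v = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(200):
    w, v = nesterov_step(w, v, lambda u: 2.0 * u)
print(w)  # near zero
```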

6. Recurrent neural networks

6.1. Advances in optimizing recurrent networks

6.2. Long Short-Term Memory (LSTM; see the cell sketch at the end of this section)

6.3. Reservoir computing

6.3.1. Echo State Networks

6.3.2. Liquid state machines

6.3.3. SORN: a self-organizing recurrent neural network

6.4. Generating Text with Recurrent Neural Networks

6.5. Neural Turing Machines

6.6. DRAW: A Recurrent Neural Network For Image Generation

6.7. Learning to Execute
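
As a reference for this section, one forward step of a standard LSTM cell (6.2), with the four gate transforms packed into a single matrix as most implementations do internally. A hedged NumPy sketch with our own layout and names; the backward pass is omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, Wx, Wh, b):
    """One LSTM step. Wx maps the input, Wh the previous hidden state;
    the input/forget/output gates and the candidate cell share one affine
    transform and are split afterwards."""
    z = x @ Wx + h @ Wh + b               # (batch, 4 * hidden)
    H = h.shape[1]
    i = sigmoid(z[:, 0*H:1*H])            # input gate
    f = sigmoid(z[:, 1*H:2*H])            # forget gate
    o = sigmoid(z[:, 2*H:3*H])            # output gate
    g = np.tanh(z[:, 3*H:4*H])            # candidate cell state
    c = f * c + i * g                     # gated cell update
    h = o * np.tanh(c)                    # new hidden state
    return h, c

# Toy usage: batch 2, input size 3, hidden size 5, a 4-step sequence.
rng = np.random.default_rng(0)
B, D, H = 2, 3, 5
Wx = 0.1 * rng.standard_normal((D, 4 * H))
Wh = 0.1 * rng.standard_normal((H, 4 * H))
b = np.zeros(4 * H)
h, c = np.zeros((B, H)), np.zeros((B, H))
for _ in range(4):
    x = rng.standard_normal((B, D))
    h, c = lstm_step(x, h, c, Wx, Wh, b)
```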

7. Software

7.1. C/C++

7.1.1. RNNLM

7.1.2. GPUMLib

7.1.3. Shark

7.1.4. libccv

7.1.5. RNNLIB

7.1.6. OverFeat

7.1.7. ClConvolve (OpenCL)

7.1.8. cxxnet

7.1.9. EBLearn

7.1.10. currennt

7.1.11. nnForge

7.2. Java

7.2.1. Deeplearning4j

7.2.2. Encog

7.2.3. ND4J

7.3. JavaScript

7.3.1. ConvNetJS

7.4. Lua

7.4.1. Torch7

7.4.1.1. Neural Turing Machine

7.4.1.2. dp package

7.4.1.3. rnn: recurrent neural networks

7.4.1.4. Resources (Cheatsheet)

7.4.1.5. LSTM Units

7.4.1.6. Autograd

7.5. Matlab

7.5.1. DeepLearningToolbox

7.5.2. Matlab neural network toolbox

7.5.2.1. Documentation

7.5.3. convolutionalRBM

7.5.4. RBMs

7.5.4.1. Medal

7.5.4.2. Deepmat

7.5.4.3. Ruslan Salakhutdinov's examples (DBN, RBM)

7.5.5. convolutional nets

7.5.5.1. MatConvNet

7.5.5.2. cudacnn

7.5.5.3. myCNN

7.5.5.4. ConvNet

7.6. Python

7.6.1. Caffe

7.6.1.1. NVIDIA DIGITS

7.6.1.2. Expresso

7.6.1.3. R-CNN (object detector)

7.6.1.4. DeepDetect

7.6.1.5. NLPCaffe

7.6.2. climin

7.6.3. cudamat

7.6.4. cuda-convnet

7.6.5. cuda-convnet2

7.6.6. deepnet

7.6.7. PyBrain

7.6.8. Theano

7.6.8.1. Start here

7.6.8.2. PyLearn2

7.6.8.3. Blocks

7.6.8.4. Lasagne

7.6.8.4.1. recurrent layers

7.6.8.5. PDNN

7.6.8.6. Theanets

7.6.8.7. Keras

7.6.9. nolearn

7.6.10. CUV (RBMs)

7.6.11. GroundHog (RNNs)

7.6.12. Decaf

7.6.13. NervanaGPU

7.6.14. neon

7.6.15. Passage

7.6.16. List of Python toolkits

7.6.17. Chainer

7.6.18. Computation Graph Toolkit (CGT)

7.6.19. Gensim

7.6.20. Hebel

7.6.21. Brainstorm

7.6.22. Autograd

7.7. Service

7.7.1. Ersatz Labs

7.8. Lists

7.8.1. Comparison of convnet implementations

7.8.2. deeplearning.net software links

7.8.3. Toronto deep learning codes

7.8.4. Teglor

7.9. iOS

7.9.1. DeepBeliefSDK

7.10. Julia

7.10.1. KUnet.jl

7.10.1.1. Beginning deep learning with 500 lines of Julia

7.10.2. Mocha.jl

7.10.3. Boltzmann.jl

7.11. Other

7.11.1. Ayda

8. Places

8.1. Universities

8.1.1. University of Toronto

8.1.1.1. Geoffrey Hinton

8.1.1.2. Ilya Sutskever

8.1.1.3. Alex Krizhevsky

8.1.1.4. Ruslan Salakhutdinov

8.1.1.5. Volodymyr Mnih

8.1.1.6. Alex Graves

8.1.1.7. Nitish Srivastava

8.1.2. New York University

8.1.2.1. Yann LeCun

8.1.3. University of Montreal

8.1.3.1. Yoshua Bengio

8.1.3.2. Tomas Mikolov

8.1.4. Stanford University

8.1.4.1. Andrew Ng

8.1.4.2. Andrej Karpathy

8.1.5. Computer Vision and Active Perception Lab, KTH Royal Institute of Technology

8.1.5.1. Ali Sharif Razavian

8.1.5.2. Hossein Azizpour

8.1.5.3. Josephine Sullivan

8.2. Companies

8.2.1. Big names

8.2.1.1. Google

8.2.1.1.1. DeepMind

8.2.1.2. Facebook

8.2.1.3. Microsoft Research

8.2.1.4. Baidu

8.2.1.5. Flickr

8.2.1.5.1. Park or bird

8.2.2. Startups

8.2.2.1. Numenta

8.2.2.1.1. Jeff Hawkins

8.2.2.1.2. Dileep George

8.2.2.2. Vicarious

8.2.2.3. MetaMind

8.2.2.3.1. Richard Socher

8.2.2.4. Clarifai

8.2.2.4.1. Matthew D. Zeiler

8.2.2.5. Enlitic

8.2.2.5.1. Jeremy Howard

8.2.2.6. Nervana

8.2.2.7. Whetlab

8.2.2.7.1. Ryan Adams

8.2.2.7.2. Hugo Larochelle

8.2.2.7.3. Jasper Snoek

8.2.2.7.4. Kevin Swersky

9. Reinforcement learning

9.1. Q-learning tutorial (the tabular update is sketched at the end of this section)

9.2. Playing Atari with Deep Reinforcement Learning

9.2.1. Nathan's implementation

9.2.2. Brian's implementation

9.2.3. Our implementation

9.2.4. Deep Q Learning Demo

9.3. Neural Fitted Q Iteration - First Experiences with a Data Efficient Neural Reinforcement Learning Method

9.4. A Monte-Carlo AIXI Approximation

9.5. Human-level control through deep reinforcement learning

9.6. Tutorial on Multi-Agent Reinforcement Learning

9.7. Distributed Deep Q-Learning

9.7.1. Code
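
The deep variants above all build on the tabular Q-learning update Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)], replacing the table with a network. A minimal sketch on a made-up toy environment (ChainWorld is our invention, not taken from any linked code):

```python
import numpy as np

rng = np.random.default_rng(0)

class ChainWorld:
    """Toy 5-state chain: actions move left/right, reward 1 at the right end."""
    def __init__(self, n=5):
        self.n = n
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):                       # a: 0 = left, 1 = right
        self.s = max(0, self.s - 1) if a == 0 else self.s + 1
        done = self.s == self.n - 1
        return self.s, float(done), done

def q_learning(env, n_states, n_actions, episodes=300,
               alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning with an epsilon-greedy behaviour policy."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if rng.random() < eps:           # explore
                a = int(rng.integers(n_actions))
            else:                            # exploit, breaking ties randomly
                a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
            s2, r, done = env.step(a)
            # TD update toward r + gamma * max_a' Q(s', a').
            target = r + gamma * (0.0 if done else Q[s2].max())
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q

Q = q_learning(ChainWorld(), n_states=5, n_actions=2)
print(Q.argmax(axis=1))  # action 1 (right) in every visited state
```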

10. Convolutional neural networks

10.1. deeplearning.net tutorial

10.2. Feature extraction using convolution (see the convolution-and-pooling sketch at the end of this section)

10.3. Pooling

10.4. CIFAR-10

10.5. ImageNet Classification with Deep Convolutional Neural Networks

10.6. DeepFace

10.7. Visualizing and Understanding Convolutional Networks

10.8. FaceNet: A Unified Embedding for Face Recognition and Clustering
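
Beneath every library in this map, feature extraction by convolution (10.2) and pooling (10.3) reduce to two small operations, sketched here with plain loops for clarity. Real implementations use im2col or GPU kernels, and convnets conventionally compute cross-correlation (no kernel flip) under the name "convolution".

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D convolution of one channel with one filter (implemented
    as cross-correlation, the convnet convention)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keep the largest value in each tile."""
    H, W = fmap.shape
    h, w = H // size, W // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)
edge = np.array([[1.0, -1.0]])     # a tiny horizontal-edge filter
fmap = conv2d_valid(img, edge)     # feature map, shape (6, 5)
pooled = max_pool(fmap)            # shape (3, 2)
```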

11. Learning

11.1. UFLDL Tutorial

11.2. Deep Learning in Neural Networks: An Overview

11.3. UCLA Summer School

11.4. Deep Learning tutorial by Ruslan Salakhutdinov

11.5. Deep Learning book by Yoshua Bengio et al.

11.6. VGG Convolutional Neural Networks Practical

11.7. CS231n: Convolutional Neural Networks for Visual Recognition

11.8. CS224d: Deep Learning for Natural Language Processing

11.9. comp.ai.neural-networks

11.10. Supervised Sequence Labelling with Recurrent Neural Networks

12. Future

12.1. Deep Learning of Representations: Looking Forward

12.2. Scaling Up Deep Learning

13. Multimodal networks

13.1. Ruslan Salakhutdinov's video

13.2. Multimodal Learning with Deep Boltzmann Machines

14. Bayesian networks

14.1. Hierarchical temporal memory

14.1.1. Numenta whitepaper

14.1.2. Hierarchical Temporal Memory Cortical Learning Algorithm for Pattern Recognition on Multi-core Architectures

14.1.3. Towards a Mathematical Theory of Cortical Micro-circuits

14.2. Learning with Hierarchical-Deep Models

14.3. Theory-based Bayesian models of inductive learning and reasoning

15. Datasets

15.1. Images

15.1.1. MNIST

15.1.2. CIFAR-10/100

15.1.3. ImageNet

15.1.4. Caltech101

15.1.5. Caltech256

15.1.6. Berkeley Segmentation Data Set 500 (BSDS500)

15.2. Image-Text

15.2.1. The generalized 1M image-caption corpus

15.2.2. Microsoft COCO

15.3. Faces

15.3.1. Labeled Faces in the Wild

15.3.2. Weakly Labeled Face Databases

15.3.3. YouTube faces

16. Biology

16.1. Towards Biologically Plausible Deep Learning

17. Pretrained models

17.1. MatConvNet

17.2. VGG: Very Deep Convolutional Networks for Large-Scale Visual Recognition

17.3. VGG: Return of the Devil in the Details: Delving Deep into Convolutional Networks

17.4. Caffe reference models

17.5. Caffe community models

18. Transfer learning

18.1. CNN Features off-the-shelf: an Astounding Baseline for Recognition

19. Practical

19.1. Practical Recommendations for Gradient-Based Training of Deep Architectures

19.2. Efficient BackProp

19.3. A Practical Guide to Training Restricted Boltzmann Machines

19.4. A Brief Overview of Deep Learning (Ilya Sutskever)

19.5. Stochastic Gradient Descent Tricks