**One Shot Learning**

__Background__

The brain is indeed a neural network, but do we really learn the way a deep neural net does? Before a deep neural net can learn, we have to feed it hundreds of thousands of images. For example, training a deep model to identify images of cats took more than 300 million cat images. Yet a human can recognize a cat instantly. We don't need thousands of images to generalize; a few examples are enough, and we learn a richer representation than machines do. An algorithm that could learn concepts from so few examples would be incredible. It would also further democratize the field, so that anyone could train and build a model.

If we only have a small amount of data, can we still learn from it? Yes: the technique is called one-shot learning. Using a very small dataset, we can build a model to classify images. The basic principle of one-shot learning is that it should be possible to learn from one example (or a few examples). One approach uses a modified Neural Turing Machine, which learns to classify images after seeing just a few examples.
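As a rough illustration of the principle, a one-shot classifier can store a single example per class and label a new input by nearest-neighbor comparison in an embedding space. The sketch below is a toy assumption, not a real system: `embed` merely normalizes raw vectors, standing in for a trained network (e.g. a siamese network).

```python
import numpy as np

def embed(x):
    # Stand-in embedding: in practice this would be a trained network
    # (e.g. a siamese CNN); here we just normalize the raw vector.
    v = np.asarray(x, dtype=float)
    return v / np.linalg.norm(v)

def one_shot_classify(query, support):
    """Label the query with the class of its most similar support example.
    `support` maps class label -> a single example (hence "one shot")."""
    sims = {label: embed(query) @ embed(ex) for label, ex in support.items()}
    return max(sims, key=sims.get)

# One stored example per class is all we keep.
support = {"cat": [1.0, 0.1], "dog": [0.1, 1.0]}
print(one_shot_classify([0.9, 0.2], support))  # → cat
```

The design choice here is the essence of most one-shot methods: instead of training a classifier per class, we learn (or assume) a similarity function once, then classify new classes by comparison.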

__Frameworks Used__

##### Bayesian Program Learning

The BPL approach learns simple stochastic programs to represent concepts, building them compositionally from parts, subparts, and spatial relations. BPL defines a generative model that can sample new types of concepts by combining parts and subparts in new ways. Each new type is also represented as a generative model, and this lower-level generative model produces new examples (or tokens) of the concept, making BPL a generative model for generative models.
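This two-level generative process can be sketched conceptually. The primitives, relations, and noise model below are illustrative placeholders, not the actual BPL implementation:

```python
import random

# Conceptual sketch of BPL's hierarchy (all primitives and relation
# names here are illustrative assumptions, not the real BPL codebase).
PRIMITIVE_SUBPARTS = ["arc", "line", "hook", "dot"]

def sample_type(rng):
    """Sample a new concept *type*: a stochastic program built
    compositionally from parts, subparts, and spatial relations."""
    parts = []
    for _ in range(rng.randint(1, 3)):
        subparts = [rng.choice(PRIMITIVE_SUBPARTS)
                    for _ in range(rng.randint(1, 2))]
        relation = rng.choice(["start", "end", "along", "independent"])
        parts.append({"subparts": subparts, "relation": relation})
    return parts

def sample_token(concept, rng):
    """Each type is itself a generative model: sample a new *token*
    (exemplar) of the concept by perturbing each part."""
    return [{"subparts": p["subparts"],
             "relation": p["relation"],
             "jitter": rng.gauss(0.0, 0.1)} for p in concept]

rng = random.Random(0)
concept = sample_type(rng)            # a new concept (a generative model)
token_a = sample_token(concept, rng)  # two exemplars of
token_b = sample_token(concept, rng)  # the same concept
```

Sampling a type yields a program; running that program yields tokens — the "generative model for generative models" structure described above.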

##### Neural Turing Machine

A Neural Turing Machine contains two components: a neural network (called the controller) and a memory bank. Like any neural net, it takes a vector as input and outputs a vector, but what makes it special is that it also interacts with a memory matrix through read and write operations (it is like having a working memory). The network learns how best to use its memory while learning a solution to a given problem. A recurrent network, typically an LSTM (Long Short-Term Memory), is used as the controller because it can perform context-dependent computation (see the image below for the LSTM network and other network types). The Neural Turing Machine is a subset of Memory-Augmented Neural Networks.

Architectures with memory-augmented capabilities, such as the Neural Turing Machine, offer the ability to rapidly encode and retrieve new information.
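A minimal sketch of the content-based read that such memory-augmented architectures rely on: the controller emits a key, the key is compared against every memory row by cosine similarity, and the similarities become soft attention weights over memory. The function names and toy memory below are assumptions for illustration, not a real NTM implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_read(memory, key, beta=10.0):
    """One content-addressed read: score each memory row against the
    controller's key (cosine similarity), sharpen with beta, and
    return the attention-weighted sum of memory rows."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    sims = memory @ key / norms
    w = softmax(beta * sims)   # larger beta -> more focused read
    return w @ memory, w

# Toy memory bank: 3 slots, each a vector of width 4.
M = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 1.]])
read_vec, weights = content_read(M, np.array([0., 0., 1., 1.]))
# weights concentrate on slot 2, whose contents match the key
```

Because the weights are differentiable, the whole read (and the analogous write) can be trained end to end with the controller, which is what lets the network learn how best to use its memory.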