

Siamese Neural Networks

In this article, we will learn about Siamese neural networks.

Siamese Neural Networks

Learning good features for machine learning applications can be computationally expensive, and it may prove difficult in cases where little data is available. A prototypical example is the one-shot learning setting, in which we must make correct predictions given only a single example of each new class. Siamese neural networks employ a unique structure to naturally rank similarity between inputs. Once a network has been tuned, we can capitalize on its powerful discriminative features to generalize its predictive power not just to new data, but to entirely new classes from unknown distributions.

A Siamese network is an architecture with two parallel neural networks, each taking a different input, whose outputs are combined to produce a prediction. In signature verification, for example, two identical networks are used: one takes the known signature for a person, and the other takes a candidate signature. The outputs of both networks are combined and scored to indicate whether the candidate signature is genuine or a forgery.

To train the network, a triplet loss function is often used. This loss penalizes the model so that the distance between matching examples is reduced and the distance between non-matching examples is increased. By using a convolutional architecture in the parallel branches of the Siamese network, we can achieve strong results that exceed those of other deep learning approaches on the facial recognition task.

Concretely, we take two images and feed both through a single convolutional neural network (CNN) with shared weights. The last layer of the CNN produces a fixed-size vector, an embedding of the image. Since two images are fed in, we get two embeddings. The distance between the two embeddings is computed and passed through a sigmoid function to produce a similarity score, while the triplet loss drives matching embeddings closer together and non-matching embeddings further apart during training.
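To make this concrete, here is a minimal sketch in PyTorch. The article names no specific framework, layer sizes, or input shapes, so all of those below are illustrative assumptions; the structure follows the text: one shared CNN encoder embeds both images, the distance between the embeddings passes through a sigmoid to yield a similarity score, and a triplet loss is applied over embeddings during training.

```python
import torch
import torch.nn as nn

class SiameseNetwork(nn.Module):
    def __init__(self, embedding_dim=128):
        super().__init__()
        # Shared CNN encoder: both images pass through the *same* weights.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.LazyLinear(embedding_dim),  # fixed-size embedding of the image
        )
        # Learns how to weigh the element-wise distance between embeddings.
        self.head = nn.Linear(embedding_dim, 1)

    def embed(self, x):
        return self.encoder(x)

    def forward(self, img_a, img_b):
        emb_a, emb_b = self.embed(img_a), self.embed(img_b)
        # Sigmoid over a weighted L1 distance -> similarity score in (0, 1).
        return torch.sigmoid(self.head(torch.abs(emb_a - emb_b)))

model = SiameseNetwork()
img_a = torch.randn(4, 1, 64, 64)  # batch of known signatures / faces
img_b = torch.randn(4, 1, 64, 64)  # batch of candidate images
score = model(img_a, img_b)        # shape (4, 1): pairwise similarity scores

# Triplet loss over embeddings: pulls anchor-positive pairs together and
# pushes anchor-negative pairs apart by at least the margin.
loss_fn = nn.TripletMarginLoss(margin=1.0)
anchor = model.embed(img_a)
positive = model.embed(img_b)
negative = model.embed(torch.randn(4, 1, 64, 64))
loss = loss_fn(anchor, positive, negative)
```

Note that the triplet loss operates on three embeddings (anchor, positive, negative) during training, while at inference time only a pair of images is compared to produce the similarity score.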


Language Modeling and Text Generation using LSTMs

Language modeling is central to many important natural language processing tasks. Recently, neural-network-based language models have demonstrated better performance than classical methods, both standalone and as components of more challenging natural language processing systems. Language models are used as part of a plethora of NLP tasks such as optical character recognition, handwriting recognition, machine translation, spelling correction, image captioning, and text summarization.

Text generation is a type of language modeling problem. Language modeling is the core problem behind a number of natural language processing tasks such as speech-to-text, conversational systems, and text summarization. A trained language model learns the likelihood of occurrence of a word based on the previous sequence of words in the text. Language models can operate at the character level, n-gram level, sentence level, or even the paragraph level.

Usually, text from the dataset is tokenized and each word is transformed into a word embedding. These word embeddings are fed into a recurrent neural network (RNN). Unlike feed-forward neural networks, in which activation outputs propagate only in one direction, an RNN also feeds the activations from its neurons back into itself, so context from earlier words carries forward across time steps. With the context vector obtained from the last unit of the RNN, we can predict the next word based on the input words (or seed text), as sketched below.
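Here is a minimal word-level LSTM language model in PyTorch. The vocabulary size, dimensions, seed IDs, and the greedy decoding loop are illustrative assumptions, not details from the article: each word ID is mapped to an embedding, the LSTM carries the context forward through its hidden state, and a linear layer scores the next word over the vocabulary, which is then fed back in to generate text from a seed.

```python
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)  # word ID -> embedding
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, vocab_size)           # hidden state -> word logits

    def forward(self, token_ids, state=None):
        emb = self.embedding(token_ids)     # (batch, seq, embed_dim)
        out, state = self.lstm(emb, state)  # (batch, seq, hidden_dim)
        return self.fc(out), state          # logits over the vocabulary

# Greedy generation: repeatedly feed back the most likely next word,
# reusing the LSTM state so the full seed is processed only once.
def generate(model, seed_ids, num_words):
    model.eval()
    ids = list(seed_ids)
    inp, state = torch.tensor([ids]), None
    with torch.no_grad():
        for _ in range(num_words):
            logits, state = model(inp, state)
            next_id = int(logits[0, -1].argmax())  # most likely next word
            ids.append(next_id)
            inp = torch.tensor([[next_id]])        # feed only the new word
    return ids

model = LSTMLanguageModel(vocab_size=1000)
print(generate(model, seed_ids=[1, 2, 3], num_words=5))
```

In practice the model would first be trained with a cross-entropy loss between its predicted logits and the actual next word at each position, and sampling from the output distribution (rather than taking the argmax) is a common way to make the generated text less repetitive.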

