Improving Siamese Networks for One-Shot Learning Using Kernel-Based Activation Functions

EasyChair Preprint no. 893, version 3 · 15 pages · Date: August 19, 2020

Abstract
The scarcity of training data has always been a constraining factor in solving many machine learning problems, which makes one-shot learning one of the field's most intriguing ideas: it aims to learn the necessary objective information from one or only a few training examples. In neural networks, this kind of learning is generally accomplished through a suitable objective function (loss function) and embedding extraction (architecture). In this paper, we discuss metric-based deep learning architectures for one-shot learning, such as Siamese neural networks, and present a method to improve their accuracy using Kafnets (kernel-based non-parametric activation functions for neural networks) by learning finer embeddings in relatively fewer epochs. Using kernel activation functions, we achieve strong results that exceed ReLU-based deep learning models in terms of embedding structure, loss convergence, and accuracy. The project code and results can be found on GitHub: https://github.com/shruti-jadon/Siamese-Network-for-One-shot-Learning.

Keyphrases: kernels, computer vision, decision boundary, machine learning, one-shot learning
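To make the two ingredients named in the abstract concrete, below is a minimal PyTorch sketch of a kernel activation function (a learned mixture of Gaussian kernels over a fixed dictionary, following the Kafnets formulation) used inside a shared-weight Siamese embedding network trained with a contrastive loss. This is an illustration under stated assumptions, not the paper's exact architecture or hyperparameters (the module names, layer sizes, `dict_size`, and `boundary` values here are hypothetical); the authors' actual code is in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KAF(nn.Module):
    """Kernel activation function (Kafnets): per-neuron mixture of Gaussian
    kernels over a fixed dictionary, KAF(s) = sum_i alpha_i * exp(-gamma*(s-d_i)^2),
    where only the mixing coefficients alpha_i are learned."""

    def __init__(self, num_units, dict_size=20, boundary=3.0):
        super().__init__()
        # Fixed dictionary: a uniform grid on [-boundary, boundary].
        d = torch.linspace(-boundary, boundary, dict_size)
        self.register_buffer("dictionary", d.view(1, 1, -1))
        # Kernel bandwidth derived from the grid spacing (a common heuristic).
        delta = 2 * boundary / (dict_size - 1)
        self.gamma = 1.0 / (2 * delta ** 2)
        # One set of learnable mixing coefficients per unit.
        self.alpha = nn.Parameter(0.3 * torch.randn(1, num_units, dict_size))

    def forward(self, s):
        # s: (batch, num_units) -> kernel matrix (batch, num_units, dict_size)
        k = torch.exp(-self.gamma * (s.unsqueeze(-1) - self.dictionary) ** 2)
        return (k * self.alpha).sum(dim=-1)

class EmbeddingNet(nn.Module):
    """Small fully connected embedding network with a KAF nonlinearity."""

    def __init__(self, in_dim=784, hidden=256, out_dim=64):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.act1 = KAF(hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x):
        return self.fc2(self.act1(self.fc1(x)))

def contrastive_loss(z1, z2, y, margin=1.0):
    """Standard contrastive loss: y = 1 for same-class pairs, 0 otherwise."""
    d = F.pairwise_distance(z1, z2)
    return (y * d.pow(2) + (1 - y) * F.relu(margin - d).pow(2)).mean()

# Siamese usage: both inputs pass through the SAME network (shared weights).
net = EmbeddingNet()
x1, x2 = torch.randn(8, 784), torch.randn(8, 784)   # a toy batch of pairs
y = torch.randint(0, 2, (8,)).float()               # pair labels
loss = contrastive_loss(net(x1), net(x2), y)
loss.backward()
```

Because the dictionary and bandwidth are fixed, the only extra trainable parameters are the `alpha` coefficients, so each neuron learns its own flexible activation shape; this flexibility is what the abstract credits for finer embeddings and faster loss convergence relative to a fixed ReLU.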