Accelerating the Permute and N-Gram Operations for Hyperdimensional Learning in Embedded Systems

EasyChair Preprint 11190, 8 pages • Date: October 28, 2023

Abstract

Hyperdimensional computing (HDC) is a novel computing framework that has gained significant attention for its ability to accelerate machine learning algorithms. Its fast learning and inference capabilities make it an attractive technique across a variety of application domains. HDC represents information with high-dimensional holographic vectors, whose dimensions are independent and identically distributed. This representation allows HDC to rely on highly parallelizable arithmetic operations such as bundling, binding, and permute. Because these operations are simple and easily optimized, HDC is an efficient framework for classification in embedded systems, and it has demonstrated remarkable accuracy in learning patterns from sequenced data. In this paper, we propose a method to accelerate the permute operation, which is crucial for preserving the order of symbols or measurements in real-time data. Our method improves the efficiency of HDC's permute operation by a factor of 10x. Applying the same idea to n-gram encoding yields a 14x speedup, resulting in up to a 26.8x speedup on a real application compared to a state-of-the-art HDC prototyping library. To achieve this improvement, we use SIMD operations and shift entire SIMD data blocks rather than individual elements. As a result, we demonstrate that real-time inference can run rapidly in applications deployed on embedded systems with constrained computational and memory resources, such as emotion, gesture, and language recognition.

Keyphrases: Embedded Systems, High Performance Computing, Hyperdimensional Computing, Machine Learning, SIMD, Vector Symbolic Architecture
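The core idea the abstract describes, shifting whole data blocks instead of moving elements one at a time, can be sketched in plain Python. This is only an illustrative sketch, not the authors' implementation: the function names are hypothetical, hypervectors are modeled as Python lists, binding is taken to be element-wise XOR (as in binary HDC models), and bulk list slicing stands in for the SIMD block shifts used in the paper.

```python
# Hypothetical sketch of HDC permute and n-gram encoding.
# Lists stand in for hypervectors; slicing stands in for SIMD block shifts.

def permute_elementwise(hv, shift=1):
    """Naive permute: relocate each element individually (one op per dimension)."""
    n = len(hv)
    out = [0] * n
    for i in range(n):
        out[(i + shift) % n] = hv[i]
    return out

def permute_blockwise(hv, shift=1):
    """Block permute: rotate the vector with two bulk copies, analogous to
    shifting whole SIMD blocks instead of individual elements."""
    shift %= len(hv)
    if shift == 0:
        return list(hv)
    return hv[-shift:] + hv[:-shift]

def bind(a, b):
    """Binding for binary hypervectors, modeled here as element-wise XOR."""
    return [x ^ y for x, y in zip(a, b)]

def encode_ngram(hvs):
    """N-gram encoding: permute the i-th symbol's hypervector (n-1-i) times,
    then bind the permuted vectors together to preserve symbol order."""
    n = len(hvs)
    out = permute_blockwise(hvs[0], n - 1)
    for i in range(1, n):
        out = bind(out, permute_blockwise(hvs[i], n - 1 - i))
    return out
```

Both permute variants produce the same cyclic shift; the block-wise form simply performs it in a constant number of bulk moves, which is where the reported speedup on SIMD hardware comes from.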