PhD Defense: Hyperdimensional Computing as a Basis for Constructing Artificial Intelligence
Hyperdimensional Computing (HDC) is a neuromorphic and neurosymbolic computing paradigm that has attracted attention due to recent advances in Artificial Intelligence (AI) and Machine Learning (ML). By projecting information into high-dimensional spaces with certain statistical properties, HDC affords a computational framework atypical of general computing: noise becomes a feature of the system, and computation becomes noise-tolerant in response. Under the right circumstances, this allows computations to mirror the efficiencies and capabilities of biological organisms. Multi-modality, modality fusion, online/continuous/real-time learning, relaxation of network-architecture constraints, and more are directly achievable under this paradigm. In this dissertation, we analyze HDC as a basis for AI and ML, studying and demonstrating its most useful properties. We show that HDC is an attractive medium to serve as a "lingua franca" between differing architectures, allowing both the integration of existing models and the downstream utilization of features derived from HDC.
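For readers unfamiliar with the paradigm, the following is a minimal sketch of the noise tolerance described above, using random bipolar hypervectors in NumPy; the dimensionality, names, and noise level are illustrative choices, not drawn from the dissertation:

```python
# Minimal HDC intuition: random high-dimensional bipolar vectors are
# nearly orthogonal, and a heavily corrupted vector remains recognizable.
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimensionality underlies the statistical properties HDC relies on

def random_hv():
    """Random bipolar hypervector; any two are quasi-orthogonal with high probability."""
    return rng.choice([-1, 1], size=D)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

a, b = random_hv(), random_hv()
print(f"unrelated vectors:  {cosine(a, b):+.3f}")   # ~0.0

# Flip 30% of a's coordinates: heavy "noise", yet similarity stays far above chance.
noisy = a.copy()
flip = rng.choice(D, size=int(0.3 * D), replace=False)
noisy[flip] *= -1
print(f"30% corrupted copy: {cosine(a, noisy):+.3f}")  # ~+0.4
```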
First, we study the properties of HDC that are attractive in AI and ML contexts, and how HDC can serve as a basis for building such systems. Then, we present case studies that demonstrate these capabilities in practical settings, such as learning distributional semantics in a life-long and continuous process, neurosymbolic and architectural fusion of disparate network architectures into consensus, robotics applications like autonomous navigation and ego-motion estimation, and more. Next, we demonstrate that HDC achieves many capabilities of machine learning that, at first glance, it may not seem to possess. Namely, we show how HDC can generate models, produce features for downstream tasks, construct manifold analogues of existing networks, and even perform its own form of "back-propagation". Finally, we analyze the superpositional nature of HDC to show that even more efficient vector-symbolic constructions can be made by finding commonalities in superposed features and developing structures around them.
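As a hedged illustration of the superpositional constructions referenced above, the sketch below binds role-filler pairs and recovers a filler from their superposition. The operations follow the common multiply-to-bind, add-to-bundle style of vector-symbolic architectures; all names and the record structure are hypothetical, not taken from the dissertation:

```python
# Binding (elementwise multiply) and bundling (elementwise sum) in a
# bipolar vector-symbolic architecture, with retrieval from superposition.
import numpy as np

rng = np.random.default_rng(1)
D = 10_000

def hv():
    return rng.choice([-1, 1], size=D)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

roles = {name: hv() for name in ("color", "shape", "size")}
fills = {name: hv() for name in ("red", "square", "large")}

# Bind each role to its filler, then superpose the pairs into one record.
pairs = [("color", "red"), ("shape", "square"), ("size", "large")]
record = sum(roles[r] * fills[f] for r, f in pairs)

# Unbinding a role yields a noisy copy of its filler; the other bound pairs
# act as quasi-orthogonal noise, so nearest-neighbor search recovers it.
probe = record * roles["color"]
best = max(fills, key=lambda name: cosine(probe, fills[name]))
print(best)  # -> "red"
```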