Efficient Hardware Implementations of Bio-inspired Networks

Anakha V. Babu, Argonne National Laboratory
Webinar
Abstract

In an effort to mimic the natural information processing paradigms and energy efficiency observed in the brain, several generations of neural networks have been proposed over the years. This talk focuses on dedicated hardware architectures for implementing second-generation deep neural networks (DNNs) and third-generation spiking neural networks (SNNs). Among the many hardware acceleration approaches used for bio-inspired networks, this presentation will discuss ASIC-based edge AI accelerators for DNN inference and memristor-based architectures for both DNNs and SNNs. Given the energy efficiency and acceleration offered by crossbar architectures, we focus on implementing DNN training on these architectures using experimental memristive devices at the cross points. Extensive simulations of a four-layer stochastic DNN with experimental PCMO devices at the cross points have revealed that programming variability plays the dominant role in determining network performance, compared to the devices' other non-ideal characteristics.

In contrast to DNNs, SNNs operate on discrete, sparse events in time called spikes. Here, we discuss a high-performance, high-throughput hardware accelerator for probabilistic SNNs based on Generalized Linear Model (GLM) neurons, which uses binary STT-RAM devices as synapses and digital CMOS logic for neurons. The inference accelerator, termed SpinAPS (Spintronic Accelerator for Probabilistic SNNs), is shown through software emulation tools to achieve a 4× improvement in GSOPS/W/mm² over an equivalent SRAM-based design.
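To give a rough sense of why programming variability matters on a crossbar, the sketch below perturbs a programmed weight matrix with multiplicative Gaussian write noise and measures the effect on a single layer's output. This is a minimal illustration only: the noise model, layer sizes, and sigma values are assumptions for demonstration, not the PCMO device model used in the talk.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def program_devices(w_target, sigma=0.1):
    """Write target weights into the cross-point devices. Each write lands
    near, but not exactly at, the target value: a simple multiplicative
    Gaussian model of programming variability (illustrative, not a PCMO fit)."""
    noise = 1.0 + sigma * rng.standard_normal(w_target.shape)
    return w_target * noise

def layer_forward(x, w):
    """One crossbar matrix-vector multiply (the analog dot product summed
    along the bit lines via Kirchhoff's current law), followed by a ReLU."""
    return np.maximum(w @ x, 0.0)

# Compare an ideal forward pass against one using programmed devices.
w_ideal = rng.uniform(-1.0, 1.0, size=(64, 128))
x = rng.uniform(0.0, 1.0, size=128)
for sigma in (0.01, 0.05, 0.1, 0.2):
    w_prog = program_devices(w_ideal, sigma)
    err = np.linalg.norm(layer_forward(x, w_prog) - layer_forward(x, w_ideal))
    print(f"sigma={sigma:.2f}  output perturbation = {err:.3f}")
```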
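Likewise, a minimal sketch of the probabilistic GLM neuron model that SpinAPS accelerates: the membrane potential combines synaptic kernels applied to a short presynaptic spike history with a refractory feedback term on the neuron's own output, and the neuron fires as a Bernoulli draw through a sigmoid link. The window length, bias, kernel values, and one-bit weight encoding here are illustrative assumptions, not the SpinAPS design.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def glm_neuron(spikes_in, w, gamma, tau=4, bias=-1.0):
    """Minimal GLM neuron: potential = bias + synaptic kernels applied to the
    last `tau` steps of presynaptic spikes + feedback on the neuron's own
    spike history; it then fires with probability sigmoid(potential)."""
    n_syn, T = spikes_in.shape
    out = np.zeros(T, dtype=np.uint8)
    for t in range(tau, T):
        window = spikes_in[:, t - tau:t]        # presynaptic spike history
        u = bias + np.sum(w * window)           # w: (n_syn, tau) kernels
        u += np.dot(gamma, out[t - tau:t])      # refractory self-feedback
        out[t] = rng.random() < sigmoid(u)      # Bernoulli spike
    return out

# One-bit synaptic kernels, standing in for binary STT-RAM cells.
w = rng.integers(0, 2, size=(16, 4)).astype(float)
gamma = -2.0 * np.ones(4)                       # inhibitory self-kernel
spikes_in = (rng.random((16, 100)) < 0.05).astype(float)
print(glm_neuron(spikes_in, w, gamma).sum(), "output spikes in 100 steps")
```

In the accelerator described in the talk, the synaptic kernels would reside in binary STT-RAM and the sigmoid/Bernoulli logic would be realized in digital CMOS; the sketch above only mirrors the arithmetic in software.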

Please use the link below to attend the virtual seminar.

https://argonne.zoomgov.com/j/1601673736