tinyML Talks: Enabling Ultra-Low Power ML at the Edge
BrainChip will be speaking at this event:
Announcing two tinyML Talks on September 1st, 2020
Once registered, you will receive a link and dial-in information for the Zoom teleconference by email, which you can also add to your calendar.
8:00 AM – 8:30 AM Pacific Daylight Time (PDT)
Suren Jayasuriya, Assistant Professor, Arizona State University
“Towards Software-Defined Imaging: Adaptive Video Subsampling for Energy-Efficient Object Tracking”
CMOS image sensors have become increasingly computational in nature, including region-of-interest (ROI) readout, high dynamic range (HDR) functionality, and burst photography capabilities. Software-defined imaging is the new paradigm, mirroring similar advances in radio technology, where image sensors are increasingly programmable and configurable to meet application-specific needs. In this talk, we present a suite of software-defined imaging algorithms that leverage CMOS sensors’ ROI capabilities for energy-efficient object tracking. In particular, we discuss how adaptive video subsampling can learn to jointly track objects and subsample future image frames in an online fashion. We present software results as well as FPGA-accelerated algorithms that achieve video-rate latency. Further, we highlight emerging work on using deep reinforcement learning to perform adaptive video subsampling during object tracking. All of this work points toward the software-hardware co-design of intelligent image sensors in the future.
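To make the adaptive subsampling idea concrete, here is a minimal NumPy sketch (not the speaker's actual algorithm; the frame size, ROI width, and simple centroid tracker are illustrative assumptions). Only a region of interest around the tracker's last estimate is read out from each frame, emulating an ROI-capable sensor, with a fall-back to a full-frame read if the target escapes the ROI:

```python
# Illustrative sketch of ROI-driven adaptive video subsampling for tracking.
# Only the ROI around the tracker's previous estimate is "read out" from
# each frame, emulating an ROI-capable CMOS sensor; pixels outside the ROI
# are never touched, which is where the readout-energy savings come from.
import numpy as np

H, W, ROI_HALF = 128, 128, 12  # frame size and ROI half-width (assumed values)

def make_frame(t):
    """Synthetic frame with a bright 8x8 target drifting right and down."""
    frame = np.zeros((H, W), dtype=np.float32)
    y, x = 20 + t, 20 + 2 * t
    frame[y:y + 8, x:x + 8] = 1.0
    return frame

def track_roi(frame, cy, cx):
    """Read out only the ROI around (cy, cx); return the new centroid
    estimate and the fraction of pixels actually sampled."""
    y0, y1 = max(cy - ROI_HALF, 0), min(cy + ROI_HALF, H)
    x0, x1 = max(cx - ROI_HALF, 0), min(cx + ROI_HALF, W)
    roi = frame[y0:y1, x0:x1]              # the only pixels we "pay" for
    sampled = roi.size / frame.size
    ys, xs = np.nonzero(roi > 0.5)
    if len(ys) == 0:                       # target lost: full-frame fallback
        ys, xs = np.nonzero(frame > 0.5)
        y0, x0, sampled = 0, 0, 1.0
    cy, cx = int(ys.mean()) + y0, int(xs.mean()) + x0
    return cy, cx, sampled

cy, cx = 24, 24                            # initial estimate from a full frame
for t in range(1, 30):
    cy, cx, frac = track_roi(make_frame(t), cy, cx)
    print(f"t={t:2d} centroid=({cy},{cx}) pixels read={frac:.1%}")
```

In this toy setup the tracker reads only about 3.5% of each frame while staying locked onto the target; the learned methods in the talk make the ROI prediction itself adaptive rather than a fixed window.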
Suren Jayasuriya is an assistant professor at Arizona State University, jointly appointed in the School of Arts, Media and Engineering and the School of Electrical, Computer and Energy Engineering. Before this, he was a postdoctoral fellow at the Robotics Institute at Carnegie Mellon University, and he received his Ph.D. in electrical and computer engineering from Cornell University in 2017. His research interests are in computational imaging and photography, computer vision/graphics and machine learning, and CMOS image sensors.
8:30 AM – 9:00 AM Pacific Daylight Time (PDT)
Kristofor Carlson, Senior Research Scientist, BrainChip Inc.
“The Akida Neural Processor: Low Power CNN Inference and Learning at the Edge”
The Akida event-based neural processor is a high-performance, low-power SoC targeting edge applications. In this session, we discuss the key distinguishing features of Akida’s computing architecture, which include aggressive 1-to-4-bit weight and activation quantization, event-based implementation of machine-learning operations, and the distribution of computation across many small neural processing units (NPUs). We show how these architectural changes result in a 50% reduction in MACs, parameter memory usage, and peak bandwidth requirements compared with non-event-based 8-bit machine-learning accelerators. Finally, we describe how Akida performs on-chip learning with a proprietary bio-inspired learning algorithm. We present state-of-the-art few-shot learning results in both the visual (MobileNet on mini-ImageNet) and auditory (6-layer CNN on Google Speech Commands) domains.
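As a rough illustration of the quantization trade-off described in the abstract (Akida's actual quantization scheme is proprietary and not shown here), the sketch below applies a generic symmetric uniform quantizer to a weight tensor at 4 and 8 bits. Halving the bit width halves parameter storage and the bandwidth needed to stream weights, consistent with the ~50% savings cited, at the cost of some added quantization error:

```python
# Generic symmetric uniform quantization of weights to 4 vs. 8 bits
# (an illustration only, not Akida's proprietary scheme).
import numpy as np

def quantize(w, bits):
    """Quantize float weights to signed integers with `bits` of precision."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for 4-bit signed
    scale = np.abs(w).max() / qmax        # per-tensor scale (a simplification)
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(64, 64)).astype(np.float32)

q4, s4 = quantize(w, bits=4)
q8, s8 = quantize(w, bits=8)

# Dequantize to measure the accuracy cost of the lower bit width.
# Storage figures assume 4-bit values are packed two per byte.
err4 = np.abs(q4 * s4 - w).mean()
err8 = np.abs(q8 * s8 - w).mean()
print(f"4-bit mean abs error: {err4:.5f}  storage: {w.size * 4 // 8} bytes")
print(f"8-bit mean abs error: {err8:.5f}  storage: {w.size * 8 // 8} bytes")
```

The event-based execution and per-NPU distribution mentioned in the abstract compound these savings by skipping computation on zero activations, but that part of the architecture is not modeled in this sketch.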
Kristofor Carlson is a senior research scientist at BrainChip Inc. Previously, he worked as a postdoctoral scholar in Jeff Krichmar’s cognitive robotics laboratory at UC Irvine, where he studied unsupervised learning rules in spiking neural networks (SNNs), the application of evolutionary algorithms to SNNs, and neuromorphic computing. Afterwards, he worked as a postdoctoral appointee at Sandia National Laboratories, where he applied uncertainty quantification to computational neural models and helped develop neuromorphic systems. In his current role, he is involved in the design and optimization of both the machine-learning algorithms and the hardware architecture of BrainChip’s latest system on a chip, Akida.
We encourage you to register early, since online broadcast capacity may be limited.