Q&A with Peter Van Der Made, BrainChip Founder and Chief Technology Officer

The semiconductor industry has long struggled to bypass the Von Neumann bottleneck, sustain the pace of Moore's Law, and overcome the breakdown of Dennard scaling.

Advanced edge AI applications are fast approaching the limits of conventional silicon and cloud-centric learning models. With enormous amounts of targeted compute power available in cloud data centers, AI training and inference models leveraging GPU and TPU hardware accelerators continue to increase in both size and sophistication.

Compute demands have risen steadily over the past decade as networks have grown larger and more complex. In parallel, cloud-based streaming video AI solutions are demanding ever more internet bandwidth. Clearly, these trends cannot continue without severe consequences, including unmanageable latency, rapidly expanding carbon footprints, and security exploits that could intercept raw data sent to cloud data centers.

This whitepaper discusses the evolution of neuromorphic computing, examines the limitations of current compute models for edge AI, and explores how neuromorphic silicon is driving a more intelligent and sustainable future.

You can view and download the whitepaper here.
