Categories: Blog, Industry News

EETimes: BrainChip Launches Event-Domain AI Inference Dev Kits

via EETimes

BrainChip, the neuromorphic computing IP vendor, launched two development kits for its Akida neuromorphic processor during this week’s Linley Fall Processor Conference: an x86 Shuttle PC kit and an Arm-based Raspberry Pi kit, both built around the company’s Akida neuromorphic SoC. BrainChip is offering the kits to developers working with its spiking neural network processor in hopes of licensing its IP. Akida silicon is also available.

BrainChip’s neuromorphic technology enables ultra-low-power AI for edge systems that require real-time analysis of sensor data. The company has developed a neural processing unit (NPU) designed to process spiking neural networks (SNNs), a brain-inspired class of neural network that differs from mainstream deep-learning approaches. Like the brain, an SNN relies on “spikes” that convey information spatially and temporally. That is, the brain recognizes… READ THE FULL ARTICLE
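To give a flavor of the spiking behavior described above, here is a minimal leaky integrate-and-fire (LIF) neuron sketch, the textbook building block of SNNs. This is a generic illustration, not BrainChip’s Akida implementation; the `threshold` and `leak` parameters are arbitrary values chosen for the demo.

```python
# Minimal leaky integrate-and-fire (LIF) neuron -- a generic sketch of the
# "spiking" dynamics SNNs are built on (illustrative only, not Akida's design).

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Integrate input current each timestep; emit a spike (1) and reset
    when the membrane potential crosses the threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # spike: information is carried by *when* it fires
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady input yields a regular spike train: the spike timing, not a
# continuous activation value, encodes the input's intensity.
print(simulate_lif([0.4] * 10))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Unlike a conventional deep-learning neuron, which outputs a continuous value every step, this neuron is silent most of the time, which is where the event-driven power savings come from.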

Related Posts

  • Linley Fall Processor Conference November 1-2, 2022 Santa Clara, CA (+ Virtual) Please join BrainChip at the upcoming Linley Fall Processor Conference on November 1st and 2nd, 2022 at the Hyatt Regency Hotel, Santa Clara, CA (Virtual attendance option is available) Presentations will address processors and IP cores for AI applications, embedded, data-center, automotive, and server […]

    Continue reading
  • Conventional AI silicon and cloud-centric inference models do not perform efficiently at the automotive edge. As many semiconductor companies have already realized, latency and power are two primary issues that must be effectively addressed before the automotive industry can manufacture a new generation of smarter and safer cars. To meet consumer expectations, these vehicles need […]

    Continue reading
  • Join BrainChip at this upcoming Summit. September 14-15, 2022 – Santa Clara, CA The community’s goal is to reduce time-to-value in the ML lifecycle and to unlock new possibilities for AI development. This involves a full-stack effort of efficient operationalization of AI in organizations, productionization of models, tight hw/sw co-design, and best-in-class microarchitectures. The goal […]

    Continue reading