Akida™ IP

BrainChip’s first-to-market digital neuromorphic processor IP, Akida™, mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with unparalleled performance, precision, and reduced power consumption.

Keeping AI/ML local to the chip and independent of the cloud dramatically reduces latency while improving privacy and data security. Infer and learn at the edge with Akida™’s fully customizable, event-based AI neural processor. Akida™’s scalable architecture and small footprint boost efficiency by orders of magnitude, supporting up to 256 nodes connected over a mesh network.

Every node consists of four Neural Processing Engines (NPEs), each with scalable and configurable SRAM. Within each node, the NPEs can be configured as either convolutional or fully connected engines. The Akida™ neural processor is event based, leveraging sparsity in activations and weights to reduce the number of operations by multiples or even orders of magnitude.
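To make the sparsity claim concrete, here is a back-of-the-envelope sketch (hypothetical Python with illustrative layer sizes and sparsity levels, not BrainChip’s actual scheduling logic) of how skipping zero activations and zero weights shrinks the multiply-accumulate (MAC) count:

```python
def dense_macs(n_inputs: int, n_outputs: int) -> int:
    # MACs for a fully connected layer when every input and weight is used.
    return n_inputs * n_outputs

def event_based_macs(n_inputs: int, n_outputs: int,
                     activation_sparsity: float, weight_sparsity: float) -> int:
    # Approximate MACs when zeros are skipped: only non-zero activations
    # generate events, and only non-zero weights contribute operations,
    # so the work scales with both densities.
    act_density = 1.0 - activation_sparsity
    weight_density = 1.0 - weight_sparsity
    return int(n_inputs * n_outputs * act_density * weight_density)

dense = dense_macs(1024, 256)                   # 262,144 MACs
sparse = event_based_macs(1024, 256, 0.9, 0.5)  # 90% sparse activations, 50% sparse weights
# dense / sparse: roughly a 20x reduction in operations
```

With 90% activation sparsity and 50% weight sparsity, the operation count drops by roughly 20x; this compounding of the two densities is the mechanism behind the “multiples or even orders of magnitude” reduction described above.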

Technology Brief

BrainChip’s neural processor AI IP is an event-based technology that is inherently lower power than conventional neural network accelerators.

BrainChip IP supports incremental learning and high-speed inference in a wide variety of use cases.

From Concept to Delivery

The most performant and efficient edge AI architecture, Akida™ IP is easy to evaluate, design, develop, and deploy.

Workflow: Evaluate → Design → Develop → Deploy, supported across the flow by MetaTF, the model zoo, Edge Impulse, and Akida hardware.

MetaTF software provides performance simulation and CNN model conversion, with examples in a model zoo.

BrainChip has a set of Reference Development Systems that integrate the Akida1000 Reference SoC to create working AI systems.

These include:

  • Akida™ PCIe
  • Akida™ Shuttle PC
  • Akida™ Raspberry Pi

The Akida SoC is a complete event-domain neural processing device featuring 1.2 million neurons and 10 billion synapses.

The Akida SoC can be deployed as a stand-alone embedded accelerator or integrated as a co-processor to support multiple use cases.

Configure and integrate the optimal Akida IP configuration for an ideal SoC implementation, ready to deploy at high volume and scale.

Use MetaTF to optimize the networks for production.

Akida™ is Unique

BrainChip’s IP fabric can be instantiated as highly parallel hardware for ultimate performance, or as space-optimized hardware for reduced silicon area and reduced power consumption.

Entire neural networks can be placed into the fabric, removing the need to swap weights in and out of DRAM. This addresses a fundamental problem in AI processing – data movement – resulting in a substantial reduction in power consumption while increasing throughput.
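The data-movement argument can be illustrated with rough arithmetic. The per-access energies below are order-of-magnitude figures from published circuit surveys (roughly 5 pJ for a small on-chip SRAM read versus roughly 640 pJ for an off-chip DRAM read of a 32-bit word, at a 45 nm-class node); exact values vary by process and memory configuration, and the model size is hypothetical:

```python
# Illustrative per-access energy for one 32-bit word, in picojoules.
PJ_PER_32BIT = {"on_chip_sram": 5.0, "off_chip_dram": 640.0}

def weight_fetch_energy_uj(n_weights: int, memory: str) -> float:
    # Energy (microjoules) to stream a model's weights once from `memory`.
    return n_weights * PJ_PER_32BIT[memory] / 1e6

model_weights = 2_000_000  # hypothetical 2M-parameter network
dram = weight_fetch_energy_uj(model_weights, "off_chip_dram")  # weights re-fetched from DRAM
sram = weight_fetch_energy_uj(model_weights, "on_chip_sram")   # weights resident in the fabric
# dram / sram: a 128x difference in weight-fetch energy per pass
```

Under these assumptions, keeping the weights resident on chip cuts weight-fetch energy by two orders of magnitude per inference, which is why eliminating DRAM swaps dominates the power story.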

Finally, an SoC designer can overlay any of the traditional dynamic voltage and frequency scaling techniques for further optimization.

Self-Contained AI Processing

  • Configurable neural network processor.
  • On-chip mesh network interconnect.
  • Hardware-based event processing and data-to-event converter.
  • Integrated intelligent DMA minimizes and often eliminates the need for CPU to manage AI calculations.
  • Hardware support for on-chip learning.
  • Hardware support for 1-, 2-, or 4-bit hybrid quantized weights; 2nd-generation Akida™ adds 8-bit quantization and skip connections for increased accuracy.

Easy To Integrate And Deploy

  • Robust software, tools, and development environment
  • Fully synthesizable RTL IP package for standard EDA tools
    – Complete test bench with simulation results
    – RTL synthesis scripts and timing constraints
    – Customized IP package targeted for your application
    – Configurable amounts of embedded memory and input buffers
  • Standard AXI 4.0 interface for on-chip communication
  • External memory optional (SRAM or DDR)

Highly Configurable IP Platform

Flexible & Scalable for Multiple Edge AI Use Cases

BrainChip works with clients to achieve the most cost-effective solution by optimizing the node configuration to the desired level of performance and efficiency.

Scale down to 2 nodes for ultra-low power, or scale up to 256 nodes for complex use cases.

Multi-Pass Processing enables smaller hardware implementations to execute large neural networks independent of the host CPU.

Quantization in MetaTF reduces the precision of model weights and activations, saving memory and power.
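The idea behind weight quantization can be shown with a simple symmetric quantizer (a generic sketch with made-up weight values, not MetaTF’s actual algorithm):

```python
def quantize_symmetric(weights, bits):
    # Map float weights onto a symmetric signed n-bit integer grid:
    # with b bits, the integers span [-(2**(b-1) - 1), 2**(b-1) - 1].
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    # Recover approximate float weights from the integer grid.
    return [q * scale for q in quantized]

weights = [0.8, -0.35, 0.05, -0.7]
q4, scale4 = quantize_symmetric(weights, 4)  # 4-bit: integers in [-7, 7]
approx = dequantize(q4, scale4)              # close to, but coarser than, the originals
```

Storing 4-bit integers instead of 32-bit floats cuts weight memory by 8x; the cost is coarser weight values, which is the accuracy trade-off quantization tooling manages.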

Continuous On-Chip Learning

Definition

BrainChip IP and Akida™ perform on-chip learning by leveraging the trained model as a feature extractor and adding new classes to the final layer.
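A minimal sketch of this pattern, using a frozen feature extractor feeding a nearest-prototype final layer that can gain classes on device (illustrative only; not BrainChip’s actual learning rule):

```python
class IncrementalClassifier:
    # Toy model of edge learning: the extractor stays frozen, and the
    # "final layer" is a dict of per-class feature prototypes that can
    # be extended one example at a time, with no cloud retraining.
    def __init__(self, extractor):
        self.extractor = extractor   # frozen, pre-trained feature extractor
        self.prototypes = {}         # class name -> feature prototype

    def learn(self, sample, label):
        # Add a new class, or refine an existing one, from a single example.
        feats = self.extractor(sample)
        if label not in self.prototypes:
            self.prototypes[label] = feats
        else:
            old = self.prototypes[label]
            self.prototypes[label] = [(a + b) / 2 for a, b in zip(old, feats)]

    def predict(self, sample):
        # Classify by nearest prototype in feature space.
        feats = self.extractor(sample)
        def dist(proto):
            return sum((a - b) ** 2 for a, b in zip(feats, proto))
        return min(self.prototypes, key=lambda c: dist(self.prototypes[c]))
```

Because only the final-layer prototypes change, a device can personalize itself in the field while the heavy pre-trained network stays untouched.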

Key Benefits:

  • Customization: Personalizes the device by extending the trained model’s classes directly on device.
  • Adaptability & Future-proofing: Extends device capabilities in the field after deployment.
  • Reduced costs: Avoids costly cloud retraining of models.
  • Security & Privacy: Limits sensitive data sent to the cloud; stores customization as weights, not raw data, enhancing privacy.

Visualization

Instead of sending data outside to a big computer to learn and make decisions, this chip can learn and adapt right where it is, making things faster and more efficient.

It’s like having a quick-thinking brain inside a small device!

Multi-Pass Processing Delivers Scalability

Definition

Akida™ leverages Multi-Pass Processing to seamlessly accelerate models with more layers than there are NPEs in a given hardware implementation of Akida™ IP.

An Akida™ Neural Processor accomplishes this by breaking the model’s full list of neural network layers into passes that fit the available NPEs, then processing each pass sequentially.
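The pass-splitting idea can be sketched in a few lines (a deliberate simplification assuming one layer per NPE per pass; the real runtime’s layer-to-NPE mapping is more sophisticated):

```python
def schedule_passes(layers, n_npes):
    # Break a model's ordered layer list into sequential passes, each
    # holding at most n_npes layers, so any depth of network fits a
    # fixed amount of hardware.
    return [layers[i:i + n_npes] for i in range(0, len(layers), n_npes)]

model = [f"layer{i}" for i in range(10)]   # hypothetical 10-layer network
passes = schedule_passes(model, 4)         # 3 passes of 4, 4, and 2 layers
```

A 10-layer model on 4 NPEs thus runs in three sequential passes; the same scheduling lets a small 2-node design execute networks far larger than its hardware, at the cost of latency per inference.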

Key Benefits:

  • Right-sized: Runs any network on a given set of nodes, reducing silicon footprint and power in the SoC.
  • Transparent to the application developer: Managed by intelligent runtime software and MetaTF.
  • Provides future-proofing: Today’s implementations can accelerate tomorrow’s models.

Visualization

With Multi-Pass Processing, the Akida™ runtime software adjusts to the number of available Neural Processing Engines.

This achieves the necessary computation and is transparent to the developer and user.

Sensor Agnostic Scalability

Since Akida™’s foundational technology can operate on data from varied sensor types, it radically improves real-time analytical decision-making and intelligence in multi-sensor environments.

Akida™’s low-latency, high-throughput processing at ultra-low power consumption makes it the perfect AIoT solution for a new generation of smart edge devices.