Products

Akida.

The global industry standard for Edge AI.

First-to-market, fast, formidable.

Request a Demo

Akida IP

BrainChip’s first-to-market neuromorphic processor IP, Akida™, mimics the human brain to analyze only essential sensor inputs at the point of acquisition—processing data with unparalleled efficiency, precision, and economy of energy. Keeping AI/ML local to the chip and independent of the cloud dramatically reduces latency while improving privacy and data security.

Infer and learn at the edge with Akida’s fully customizable event-based AI neural processor. Akida’s scalable architecture and small footprint boost efficiency by orders of magnitude – supporting up to 1024 nodes that connect over a mesh network.

Every node consists of four Neural Processing Units (NPUs), each with scalable and configurable SRAM. Within each node, the NPUs can be configured as either convolutional or fully connected. The Akida neural processor is event based – leveraging data sparsity, activations, and weights to reduce the number of operations by at least 2X.
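As a purely illustrative sketch of that organization (the class names, SRAM size, and node count below are hypothetical, not BrainChip deliverables), a mesh of nodes with four configurable NPUs each can be modeled as follows:

```python
from dataclasses import dataclass
from enum import Enum


class NPUMode(Enum):
    """The two per-node configurations described above."""
    CONVOLUTIONAL = "convolutional"
    FULLY_CONNECTED = "fully_connected"


@dataclass
class Node:
    """One mesh node: four NPUs sharing configurable local SRAM."""
    mode: NPUMode
    sram_kb: int          # placeholder size; actual SRAM is configurable per design
    npus_per_node: int = 4


def build_mesh(num_nodes: int, mode: NPUMode, sram_kb: int) -> list[Node]:
    """Model an Akida-style mesh of up to 1024 nodes."""
    if not 1 <= num_nodes <= 1024:
        raise ValueError("the architecture described supports 1 to 1024 nodes")
    return [Node(mode=mode, sram_kb=sram_kb) for _ in range(num_nodes)]


mesh = build_mesh(num_nodes=64, mode=NPUMode.CONVOLUTIONAL, sram_kb=128)
print(f"{len(mesh)} nodes -> {sum(n.npus_per_node for n in mesh)} NPUs")  # 64 nodes -> 256 NPUs
```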

Learn More

An IP Platform

The most performant edge AI architecture, Akida IP is also easy to implement and evaluate.

MetaTF software provides a model zoo, performance simulation, and CNN model conversion. The Akida1000 reference chip is fully functional and enables working system evaluation. Our development systems (PCIe boards, Shuttle PCs, and Raspberry Pi) complement BrainChip’s IP and reference SoC to enable the easy design of intelligent endpoints.

MetaTF

The MetaTF Development Environment is a complete machine learning framework that enables the seamless creation, training, and testing of neural networks running on the Akida event domain neural processor.

With MetaTF, companies can easily develop neural networks for specific edge applications.
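As a minimal sketch of that workflow, assuming MetaTF’s cnn2snn package provides quantize and convert helpers (the exact function and argument names are assumptions here; check the MetaTF documentation), a small Keras CNN is trained, quantized, and converted into an Akida model:

```python
import tensorflow as tf
from cnn2snn import convert, quantize  # assumed MetaTF (cnn2snn) helpers

# A small Keras CNN standing in for an application-specific edge model.
keras_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])
# ... train keras_model as usual with Keras/TensorFlow ...

# Quantize weights and activations to the low-bit formats Akida executes,
# then convert the quantized Keras model into an Akida model
# (argument names are assumptions; see the MetaTF docs).
quantized_model = quantize(keras_model, weight_quantization=4, activ_quantization=4)
akida_model = convert(quantized_model)
akida_model.summary()
```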

Learn More

Akida Neural Processor SoC

The Akida SoC is a complete event domain neural processing device that features 1.2 million neurons and 10 billion synapses.

The Akida SoC can be deployed as a stand-alone embedded accelerator or integrated as a co-processor to support multiple use cases.

Learn More

Akida Enablement Platforms

BrainChip offers a set of Reference Development Systems that integrate the Akida1000 Reference SoC to create working AI systems.

These include:

  • Akida™ PCIe
  • Akida™ Shuttle PC
  • Akida™ Raspberry Pi
Learn More

Akida is Uniquely Essential

BrainChip’s IP fabric can be placed either in a parallelized configuration for maximum performance, or in a space-optimized configuration to reduce silicon utilization and further lower power consumption.

Entire neural networks can be placed into the fabric, removing the need to swap weights in and out of DRAM, which reduces power consumption while increasing throughput.

Additionally, users can adjust the clock frequency to further optimize performance and power consumption.

Key Features:

  • Robust software and development environment and tools
  • Complete configurable neural network processor
  • On-chip mesh network interconnect
  • Standard AXI 4.0 interface for on-chip communication
  • Scalable nodes can be configured as
    • Event domain convolution neural processor
    • Fully connected neural processor
  • Hardware-based event processing
  • No CPU required
  • External memory optional (SRAM or DDR)
  • Integrated DMA and data-to-event converter
  • Hardware support for on-chip learning
  • Hardware support for 1b, 2b, or 4b hybrid quantized weights and activations to reduce power and minimize memory footprint
  • Fully synthesizable RTL
  • IP deliverables package with standard EDA tools
    • Complete test bench with simulation results
    • RTL synthesis scripts and timing constraints
    • Customized IP package targeted for your application
    • Configurable amounts of embedded memory and input buffers

Highly Configurable IP Platform

Flexible and scalable for multiple edge AI use cases.

BrainChip works with clients to achieve the most cost-effective solution by optimizing the node configuration to the desired level of performance and efficiency.

Scale down to 2 nodes for ultra low power or scale up to 256 nodes for complex use cases.

Multi-pass processing provides the flexibility to process complex use cases with fewer nodes, increasing power efficiency.

Quantization in MetaTF converts model weights and activations to a lower-bit format, reducing memory requirements.
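To illustrate why a lower-bit format shrinks the memory footprint, here is a generic post-training quantization sketch in plain NumPy (not MetaTF’s actual quantizer): float32 weights are mapped to signed 4-bit integers with a per-tensor scale, an 8x reduction in the logical storage needed for weights.

```python
import numpy as np

def quantize_to_4bit(weights: np.ndarray):
    """Map float32 weights to signed 4-bit integers [-8, 7] with a per-tensor scale."""
    scale = np.max(np.abs(weights)) / 7.0
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)  # stored as 4-bit values on-chip
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.random.randn(128, 256).astype(np.float32)
q, scale = quantize_to_4bit(weights)

fp32_bits = weights.size * 32
int4_bits = q.size * 4
print(f"float32: {fp32_bits // 8} bytes, 4-bit: {int4_bits // 8} bytes "
      f"({fp32_bits / int4_bits:.0f}x smaller)")
print("max abs error:", np.max(np.abs(weights - dequantize(q, scale))))
```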

One-Shot On-Chip Learning

BrainChip IP and Akida perform on-chip learning by leveraging the trained model as a feature extractor and adding new classes to the final layer.

Demonstrated edge learning for:

  • Object detection with MobileNet trained on the ImageNet dataset.
  • Keyword spotting with DS-CNN trained on the Google Speech Commands dataset.
  • Hand gesture classification with a small CNN trained on a custom DVS events dataset.
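As a conceptual sketch of that approach (a framework-level Keras analogue for illustration, not the Akida runtime’s on-chip learning API), the trained network up to its last layer is frozen as a feature extractor, and only a new final layer is fitted on a handful of labelled examples of the new class:

```python
import numpy as np
import tensorflow as tf

# Stand-in for a trained model: everything up to the final layer acts as
# a frozen feature extractor, mirroring how the trained network is reused.
backbone = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
])
backbone.trainable = False

# New final layer covering the original classes plus one newly added class.
num_classes = 11
head = tf.keras.layers.Dense(num_classes, activation="softmax")
model = tf.keras.Sequential([backbone, head])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# "One-shot" style update: a single labelled example of the new class (id 10)
# adjusts only the final layer, while the extracted features stay fixed.
x_new = np.random.rand(1, 32, 32, 3).astype(np.float32)  # placeholder sample
y_new = np.array([10])
model.fit(x_new, y_new, epochs=1, verbose=0)
```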

[Diagram: the original model’s extracted features feed a new final layer trained on-chip.]

Multi-Pass Processing Delivers Scalability

Akida leverages multi-pass processing to reduce the number of neural processing units required for a given compute task by segmenting and processing sequentially.
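The trade-off can be shown with some illustrative arithmetic (the workload and pass counts below are hypothetical, not Akida specifications): splitting the compute into sequential passes divides the number of NPUs that must exist in silicon, at the cost of extra passes per inference.

```python
import math

def npus_required(total_npu_work: int, passes: int) -> int:
    """NPUs that must exist in hardware when the work is split into sequential passes."""
    return math.ceil(total_npu_work / passes)

# Hypothetical workload: a network whose layers would map onto 256 NPUs in a single pass.
total_npu_work = 256
for passes in (1, 2, 4, 8):
    npus = npus_required(total_npu_work, passes)
    print(f"{passes} pass(es): {npus} NPUs ({math.ceil(npus / 4)} nodes)")
```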

Benefits of Multi-Pass

  • Scalable
  • Reduces memory requirements (2x)
  • Power efficient

How it Works

[Diagram: multi-pass sequential compute.]

AI Enablement Program


BrainChip’s AI Enablement Program makes entry to edge AI simple and real.
Each of our tiered programs brings you from concept to working prototype with varying levels of model complexity and sensor integration. In addition, our AI experts provide training and support to make the process efficient and smart.

Learn more


Let’s sharpen the Edge together.

We’re pushing the limits of AI on-chip compute to maximize efficiency, kill latency, and conserve energy.

Join us.

Careers