BrainChip’s first-to-market digital neuromorphic processor IP, Akida™, mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with unparalleled performance, precision, and energy efficiency.
Keeping AI/ML local to the chip and independent of the cloud dramatically reduces latency while improving privacy and data security. Infer and learn at the edge with Akida’s fully customizable, event-based AI neural processor. Akida’s scalable architecture and small footprint boost efficiency by orders of magnitude, supporting up to 256 nodes connected over a mesh network.
Every node consists of four Neural Processing Engines (NPEs), each with scalable and configurable SRAM. Within each node, the NPEs can be configured as either convolutional or fully connected. The Akida neural processor is event based, leveraging sparsity in data, activations, and weights to reduce the number of operations by multiples or even orders of magnitude.
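The effect of that sparsity can be sketched with a toy calculation. This is illustrative only; the function names, fan-out value, and sparsity level below are assumptions, not BrainChip internals:

```python
# Illustrative sketch: why event-based processing cuts operation counts.
# In an event domain, a zero activation emits no event, so every
# multiply-accumulate (MAC) it would have fed downstream is skipped.

def dense_mac_count(activations, fan_out):
    # A conventional accelerator multiplies every input, zero or not.
    return len(activations) * fan_out

def event_mac_count(activations, fan_out):
    # An event-based processor only propagates nonzero activations.
    return sum(1 for a in activations if a != 0) * fan_out

# Hypothetical ReLU output that is 90% sparse -- typical for CNN layers.
acts = [0.0] * 90 + [1.0] * 10
FAN_OUT = 64  # assumed downstream connections per input

print(dense_mac_count(acts, FAN_OUT))  # 6400
print(event_mac_count(acts, FAN_OUT))  # 640: a 10x reduction
```

In this sketch, 90% activation sparsity translates directly into a 10x drop in MAC operations; exploiting weight sparsity as well compounds the saving.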
Akida 1st Generation Platform Brief
BrainChip’s neural processor AI IP is an event-based technology that is inherently lower power than conventional neural network accelerators. BrainChip IP supports incremental learning and high-speed inference across a wide variety of use cases, such as convolutional neural networks, delivering high throughput and unsurpassed performance within microwatt to milliwatt power budgets.
An IP Platform
The most performant and efficient edge AI architecture, Akida IP is easy to evaluate, design, develop and deploy.
MetaTF software provides performance simulation and CNN model conversion, with examples in a model zoo.
The MetaTF Development Environment is a complete machine learning framework that enables the seamless creation, training, and testing of neural networks running on the Akida event domain neural processor.
With MetaTF, companies can easily develop neural networks for specific edge applications.
Use it for early evaluation and design, as well as for final tuning and productization of networks.
Our development systems complement BrainChip’s IP and reference SoC to enable the easy design of intelligent endpoints.
Akida Enablement Platforms
BrainChip has a set of Reference Development Systems that integrate the Akida1000 Reference SoC to create working AI systems.
- Akida™ PCIe
- Akida™ Shuttle PC
- Akida™ Raspberry Pi
The Akida1000 reference chip is fully functional and enables working system development, prototyping, and small-volume deployment.
Akida Neural Processor SoC
The Akida SoC is a complete event domain neural processing device that features 1.2 million neurons and 10 billion synapses.
The Akida SoC can be deployed as a stand-alone embedded accelerator or integrated as a co-processor to support multiple use cases.
Akida IP enables optimized custom SoC development for mass deployment.
Configure and integrate the optimal Akida IP configuration for an ideal SoC implementation, ready to deploy at high volume and scale.
Use MetaTF to optimize the networks for production.
Akida is Unique
BrainChip’s IP fabric can be placed either in a parallelized manner, ideal for maximum performance, or in a space-optimized manner to reduce silicon utilization and further reduce power consumption.
Entire neural networks can be placed into the fabric, removing the need to swap weights in and out of DRAM. This addresses a fundamental problem in AI processing – data movement – resulting in a substantial reduction in power consumption while increasing throughput.
It has a simplified, single-clock design for ease of implementation, allowing partners to tune their system to the best tradeoff between frequency and energy consumption.
This intelligent fabric uses its understanding of the activation maps to apply optimal clock gating, further reducing energy consumption.
Finally, an SoC designer can overlay traditional dynamic voltage and frequency scaling (DVFS) techniques for further optimization.
Self-contained AI Processing
- Complete configurable neural network processor
- On-chip mesh network interconnect
- Scalable nodes can be configured as
– Event domain convolution neural processor
– Fully connected neural processor
- Hardware-based event processing and data-to-event converter
- Integrated intelligent DMA minimizes, and often eliminates, the need for a CPU to manage AI operations.
- Hardware support for on-chip learning
- Hardware support for 1b, 2b, or 4b hybrid quantized weights
(Note: the 2nd generation of Akida adds support for 8b weights and quantization.)
Easy to Integrate and Deploy
- Robust software and development environment and tools
- Fully synthesizable RTL
- Standard AXI 4.0 interface for on-chip communication
- IP deliverables package with standard EDA tools
– Complete test bench with simulation results
– RTL synthesis scripts and timing constraints
– Customized IP package targeted for your application
– Configurable amounts of embedded memory and input buffers
- External memory optional (SRAM or DDR)
Highly Configurable IP Platform
Flexible and scalable for multiple edge AI use cases
BrainChip works with clients to achieve the most cost-effective solution by optimizing the node configuration to the desired level of performance and efficiency.
Scale down to 2 nodes for ultra-low power, or scale up to 256 nodes for complex use cases.
Multi-pass processing provides the flexibility to handle complex use cases with fewer nodes, increasing power efficiency.
Quantization in MetaTF converts model weights and activations to lower-bit formats, reducing memory requirements.
- This customization and learning is simple to enable and untethered from the cloud.
- Models can adapt to changes in the field, and the AI application can implement incremental learning without costly cloud model retraining.
- It adds security and privacy, as input data isn’t saved; it is stored only as weights.
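As a rough illustration of what low-bit weight quantization does, here is a generic uniform-quantization sketch. This is not MetaTF’s actual algorithm, and the function names and weight values are hypothetical:

```python
# Generic uniform-quantization sketch (not MetaTF's actual algorithm):
# map 32-bit float weights onto a signed 4-bit integer grid, shrinking
# weight storage by 8x at the cost of a small rounding error.

def quantize(weights, bits=4):
    qmax = 2 ** (bits - 1) - 1              # 7 for 4-bit signed
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    return [q * scale for q in q_weights]

w = [0.42, -0.17, 0.91, -0.63]              # hypothetical float weights
q, scale = quantize(w, bits=4)
w_hat = dequantize(q, scale)

assert all(-7 <= v <= 7 for v in q)         # fits in 4 signed bits
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, round(max_err, 3))
```

The storage saving is the point: four bits per weight instead of thirty-two, with the reconstruction error bounded by half a quantization step.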
Demonstrated Edge Learning For:
- Object detection with MobileNet trained on the ImageNet dataset.
- Keyword spotting with DS-CNN trained on the Google Speech Commands dataset.
- Hand gesture classification with a small CNN trained on a custom DVS events dataset.
Multi-Pass Processing Delivers Scalability
Akida leverages multi-pass processing to reduce the number of neural processing units required for a given compute task by segmenting the network and processing the segments sequentially.
Because Akida can process multiple layers at a time, and the DMA handles segment loading independently of the CPU, the additional latency of moving from parallel to sequential processing is substantially lower than in traditional DLAs, where layer processing is managed by the CPU.
- Extremely scalable: runs larger networks on a given set of nodes, reducing silicon footprint and power in the SoC.
- Transparent to the application developer and user, as it is handled by the runtime software.
- Provides future-proofing, since today’s designs can scale to tomorrow’s models.
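The idea behind multi-pass processing can be sketched as a simple greedy scheduler. This is an illustrative assumption, not BrainChip’s runtime; the layer names and per-layer node costs are invented for the example:

```python
# Illustrative greedy scheduler (an assumption, not BrainChip's runtime):
# multi-pass processing splits a network whose layers need more nodes
# than the device provides into consecutive passes that each fit.

def schedule_passes(layer_costs, available_nodes):
    """layer_costs: list of (layer_name, nodes_needed) in network order."""
    passes, current, used = [], [], 0
    for name, cost in layer_costs:
        if cost > available_nodes:
            raise ValueError(f"layer {name} alone exceeds the node budget")
        if used + cost > available_nodes:   # current pass is full
            passes.append(current)
            current, used = [], 0
        current.append(name)
        used += cost
    if current:
        passes.append(current)
    return passes

# Hypothetical 6-layer network mapped onto a 2-node configuration.
layers = [("conv1", 2), ("conv2", 1), ("conv3", 1),
          ("conv4", 2), ("fc1", 1), ("fc2", 1)]
print(schedule_passes(layers, available_nodes=2))
# 4 passes: [['conv1'], ['conv2', 'conv3'], ['conv4'], ['fc1', 'fc2']]
```

The same network on an 8-node device would run in a single pass; the scheduler is what lets a small node count trade throughput for silicon area without changing the model.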
BrainChip’s AI Enablement Program makes entry to edge AI simple and real.
Each of our tiered programs brings you from concept to working prototype with varying levels of model complexity and sensor integration. In addition, our AI experts provide training and support to make the process efficient and smart.