BrainChip™ provides a set of pre-trained, documented, and supported network architectures. Below is a selection of the models available in our model library in the Developer Hub:

Explore Akida’s Model Library

Discover ready-to-use models built for low-power, real-time performance across vision, audio, and sensor use cases.

AkidaNet/Object Detection

You Only Look Once (YOLO) is a deep neural network architecture dedicated to object detection. As opposed to classic object-detection pipelines, YOLO predicts bounding boxes (the localization task) and class probabilities (the classification task) from a single neural network in a single evaluation. Object detection is thus reduced to a single regression problem from image pixels to spatially separated boxes and their associated class probabilities. The example features a YOLOv2 architecture, and a CenterNet model example is also available.
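To make the single-pass idea concrete, here is a minimal sketch of decoding a YOLOv2-style output grid into boxes and class labels. The function name, anchor format, and threshold are illustrative assumptions, not the Developer Hub example's API.

```python
# Minimal sketch of YOLOv2-style decoding (illustrative, not BrainChip's API).
# The network emits one tensor; boxes and class probabilities are read
# directly from it in a single pass -- detection as regression.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_grid(raw, anchors, conf_thresh=0.5):
    """raw: (S, S, B, 5 + num_classes) network output for an S x S grid."""
    S, _, B, _ = raw.shape
    detections = []
    for row in range(S):
        for col in range(S):
            for b in range(B):
                tx, ty, tw, th, to = raw[row, col, b, :5]
                conf = sigmoid(to)
                if conf < conf_thresh:
                    continue
                # Box center is an offset within the cell; size scales the anchor.
                cx = (col + sigmoid(tx)) / S
                cy = (row + sigmoid(ty)) / S
                w = anchors[b][0] * np.exp(tw) / S
                h = anchors[b][1] * np.exp(th) / S
                cls_logits = raw[row, col, b, 5:]
                probs = np.exp(cls_logits - cls_logits.max())
                probs /= probs.sum()
                detections.append((cx, cy, w, h, conf, int(probs.argmax())))
    return detections
```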


AkidaNet/Object Classification/Recognition

AkidaNet is a MobileNet v1-inspired architecture optimized for implementation on Akida 1.0: it exploits the richer expressive power of standard convolutions in early layers, but uses separable convolutions in later layers, where filter memory is the limiting factor.
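The pattern is easy to see in a few lines of Keras. The sketch below follows the description above (standard convolutions early, separable convolutions late); the layer counts and filter widths are placeholders, not the published AkidaNet definition.

```python
# Illustrative Keras sketch of the AkidaNet design pattern: standard
# convolutions in early layers, depthwise-separable convolutions where
# filter memory dominates. Widths are placeholders, not the real model.
import tensorflow as tf
from tensorflow.keras import layers

def akidanet_like(input_shape=(160, 160, 3), num_classes=10):
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    # Early layers: standard convolutions for richer expressive power.
    for filters in (16, 32, 64):
        x = layers.Conv2D(filters, 3, strides=2, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    # Later layers: separable convolutions to cut filter memory.
    for filters in (128, 256):
        x = layers.SeparableConv2D(filters, 3, strides=2, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    x = layers.GlobalAveragePooling2D()(x)
    return tf.keras.Model(inputs, layers.Dense(num_classes)(x))
```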


AkidaNet/Regression

The age estimation example demonstrates that the Akida-compatible model matches the accuracy of the traditional Keras model on an age estimation task. It uses the UTKFace dataset, which pairs face images with age labels, to showcase how well the Akida-compatible model can predict individuals' ages from their facial features.
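As a rough illustration of the setup, the sketch below repurposes a classification backbone for regression: a single linear output unit predicts age directly, trained with mean absolute error. The backbone choice and loss here are assumptions, not the exact example code.

```python
# Sketch: turning a feature backbone into an age regressor (assumed setup,
# not the exact Developer Hub example). One linear unit predicts age;
# mean absolute error keeps the reported error in years.
import tensorflow as tf
from tensorflow.keras import layers

backbone = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights=None, pooling="avg")
age_output = layers.Dense(1)(backbone.output)  # regression: one scalar, no softmax
model = tf.keras.Model(backbone.input, age_output)
model.compile(optimizer="adam", loss="mae")
```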


AkidaNet/Face Recognition

Face recognition based on an AkidaNet 0.5 backbone network.
Segmentation based on an Akida version of the UNet backbone network.


AkidaNet/Keyword Spotting

Keyword spotting identifies keywords in a streaming audio signal, so that a large set of words can be recognized in isolation to trigger activities. The example provided recognizes 32 different keywords using a DS-CNN neural network. BrainChip supports optimized keyword spotting with a conventional DS-CNN model and also with a Temporal Event-Based Neural Network (TENNs™) model on Akida 2 that enhances accuracy while considerably reducing model parameters and computations. The TENNs model also eliminates pre-processing steps by computing on raw audio samples, simplifying the overall processing chain.
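The DS-CNN pattern itself is compact: one standard convolution followed by stacked depthwise-plus-pointwise blocks over a spectrogram-style input, ending in a 32-way classifier. The sketch below is illustrative only; the input shape and layer widths are assumptions, not BrainChip's exact model.

```python
# Sketch of a DS-CNN keyword spotter (pattern only; shapes and widths
# are assumptions). Input is a spectrogram-like feature map; output is
# one of 32 keywords.
import tensorflow as tf
from tensorflow.keras import layers

def ds_cnn_kws(input_shape=(49, 10, 1), num_words=32):
    inputs = tf.keras.Input(shape=input_shape)
    x = layers.Conv2D(64, (10, 4), strides=2, padding="same", use_bias=False)(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    for _ in range(4):
        # Depthwise-separable block: depthwise 3x3 then pointwise 1x1.
        x = layers.DepthwiseConv2D(3, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        x = layers.Conv2D(64, 1, use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    x = layers.GlobalAveragePooling2D()(x)
    return tf.keras.Model(inputs, layers.Dense(num_words, activation="softmax")(x))
```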


AkidaNet/Point Cloud Classification

Akida supports classification of objects in a point cloud using the PointNet++ neural network architecture with the ModelNet40 3D point cloud dataset.

Request Access for Advanced TENNs Models

Akida 2 Model Zoo models are available to run today on our Akida 2 FPGA Developer Platform. TENNs models for Akida 2 and Akida GenAI are available by request.

BrainChip’s TENNs (Temporal Event-Based Neural Networks) are a novel neural network architecture that pushes the boundaries of what’s possible in Performance, Power and Area (PPA), achieving more with less energy and a smaller silicon footprint.


AkidaNet/TENN Gesture Recognition Model

Akida TENNs gesture recognition using a DVS camera achieved state-of-the-art (SOTA) results on the IBM DVS128 dataset for both latency and accuracy. Running on Akida hardware with just 192K model parameters, it offers an ultra-low-cost solution ideal for embedding in consumer devices.


AkidaNet/TENN Eye Tracking Model

The AkidaNet TENNs eye-tracking model achieved state-of-the-art (SOTA) results in the AIS 2024 event-based eye-tracking challenge. It demonstrated 90% activation sparsity with minimal accuracy drop, delivering a 5× performance gain on Akida hardware using only 277K model parameters. This makes it an ultra-low-power solution ideal for integration in wearable devices.
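For readers unfamiliar with the term, activation sparsity is simply the fraction of activations that are exactly zero, which event-based hardware like Akida can skip entirely. A minimal numpy illustration with a synthetic post-ReLU tensor:

```python
# What "90% activation sparsity" means: the fraction of activations that
# are exactly zero (and therefore skippable by event-based hardware).
# The tensor below is synthetic, chosen so roughly 90% of values are zero.
import numpy as np

activations = np.maximum(0.0, np.random.randn(1, 32, 32, 64) - 1.3)  # post-ReLU map
sparsity = np.mean(activations == 0.0)
print(f"activation sparsity: {sparsity:.1%}")
```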


AkidaNet/TENN Audio Denoising Model

Our medium-sized TENNs model achieves an impressive Perceptual Evaluation of Speech Quality (PESQ) score of 3.36 with just 590,000 parameters. This demonstrates TENN’s ability to deliver excellent noise reduction while maintaining audio quality.
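PESQ scores such as the one above are commonly computed with the open-source pesq Python package. A short sketch, assuming 16 kHz wideband audio; the placeholder arrays stand in for real speech recordings:

```python
# Sketch: scoring a denoiser's output against a clean reference with the
# open-source `pesq` package (16 kHz wideband assumed). PESQ ranges
# roughly from -0.5 to 4.5; higher is better. Use real recordings in
# practice -- the arrays here are placeholders.
import numpy as np
from pesq import pesq

fs = 16000
clean = np.random.randn(fs * 3).astype(np.float32)      # placeholder reference
denoised = clean + 0.01 * np.random.randn(fs * 3).astype(np.float32)
score = pesq(fs, clean, denoised, "wb")  # 'wb' = wideband mode
print(f"PESQ: {score:.2f}")
```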

Unmatched Efficiency: TENN models require fewer parameters and multiply-accumulate operations (MACs) per sample compared to equivalent CNN-based models. With our TENNs license, customers can fine-tune models to their specific requirements, ensuring the best possible performance for their unique audio environments and use cases. The TENN architecture allows for easy scaling to smaller or larger models, adapting to specific customer needs without compromising performance.


AkidaNet/TENN Automatic Speech Recognition Model

Our TENNs model approach is applied to Automatic Speech Recognition for compact, accurate voice-to-text applications.


AkidaNet/TENN Large Language Model (LLM)

Our TENNs approach is applied to a 1.2B-parameter LLM that achieves excellent perplexity scores compared to larger transformer-based LLMs, with significantly reduced training costs.
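Perplexity is the exponential of the average per-token negative log-likelihood, so lower is better. A short worked example:

```python
# Perplexity = exp(mean negative log-likelihood per token). A model that
# assigns each true next token probability 0.25 scores perplexity 4.
import math

token_probs = [0.25, 0.25, 0.25, 0.25]  # model's probability for each true next token
nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
print(math.exp(nll))  # 4.0 -- lower perplexity means better prediction
```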


AkidaNet/TENN Large Language Model (LLM+RAG)

Our TENNs approach is applied to an LLM with Retrieval-Augmented Generation (RAG) to provide intelligent access to documentation for end use cases that embed an LLM in a product’s user interface.
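The retrieval-augmented pattern is straightforward: embed documentation chunks, retrieve the chunks nearest the query, and place them in the prompt ahead of the question. In the sketch below, embed and generate are stand-ins for the deployed model's interfaces, not a BrainChip API.

```python
# Minimal RAG pattern (illustrative; `embed` and `generate` are stand-ins
# for the deployed model's interfaces, not a BrainChip API).
import numpy as np

def retrieve(query_vec, doc_vecs, docs, k=2):
    # Cosine similarity between the query and each documentation chunk.
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec))
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

def answer(question, docs, doc_vecs, embed, generate):
    context = "\n".join(retrieve(embed(question), doc_vecs, docs))
    # The LLM sees the retrieved documentation ahead of the user's question.
    return generate(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
```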

Request Access

Be among the first to access BrainChip’s advanced AkidaNet/TENNs models, built for efficient, low-power AI at the edge. Models include audio denoising, automatic speech recognition, and language models—including a compact LLM and LLM with RAG for intelligent, real-time applications.

  • Please fill out the form below and submit. We will respond shortly.

  • For Investor inquiries, please email IR@brainchip.com. Any investor inquiries submitted through this form will go unanswered.

Join the Developer Hub and See the Models in Action