Categories: Blog, Industry News

BrainChip: Key differences between transfer learning and incremental learning

via telecomkh

BrainChip offers insight into two widely accepted forms of deep learning

The massive computing resources required to train neural networks for AI/ML tasks have driven interest in two forms of learning presumed to be more efficient: transfer learning and incremental learning. Experts at BrainChip Holdings Ltd., a leading provider of ultra-low-power, high-performance artificial intelligence technology, offered the following insight and considerations for their use in edge AI/IoT environments.

In transfer learning, applicable knowledge established in a previously trained AI model is “imported” and used as the basis of a new model. After taking this shortcut of starting from a pretrained model, such as one trained on an open-source image or NLP dataset, new objects can be added to customize the result for the particular scenario… READ THE FULL ARTICLE
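To make that workflow concrete, here is a minimal, generic sketch of transfer learning in PyTorch/torchvision. It is an illustration only, not BrainChip’s Akida tooling or the article’s method; the class count and learning rate are hypothetical. A backbone pretrained on an open-source image dataset is frozen, and only a new classification head is trained for the added objects.

```python
# Generic transfer-learning sketch (not BrainChip-specific):
# reuse a backbone pretrained on a large open-source image dataset and
# train only a new classification head for the target classes.
import torch
import torch.nn as nn
from torchvision import models

NUM_NEW_CLASSES = 5  # hypothetical number of objects in the new scenario

# Import knowledge from a previously trained model (ImageNet weights).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers so their learned features are reused as-is.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer to customize the model for the new objects.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_NEW_CLASSES)

# Only the new head's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# A task-specific training loop would then go here, e.g.:
# for images, labels in dataloader:
#     loss = criterion(backbone(images), labels)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```

Because only the small replacement head is trained, this kind of customization needs far less compute and data than training a network from scratch, which is the efficiency argument the article makes for edge AI/IoT use.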
