Intel Democratizes Deep Learning Application Development with Launch of Movidius Neural Compute Stick

August 07, 2017

Intel, Compute stick

Intel launched the Movidius™ Neural Compute Stick, the world's first USB-based deep learning inference kit and self-contained artificial intelligence (AI) accelerator that delivers dedicated deep neural network processing capabilities to a wide range of host devices at the edge.

Designed for product developers, researchers and makers, the Movidius Neural Compute Stick aims to reduce barriers to developing, tuning and deploying AI applications by delivering dedicated high-performance deep-neural network processing in a small form factor.

As more developers adopt advanced machine learning approaches to build innovative applications and solutions, Intel is committed to providing a comprehensive set of development tools and resources to help developers retool for an AI-centric digital economy. Whether it is training artificial neural networks on the Intel® Nervana™ cloud, optimizing emerging workloads such as artificial intelligence, virtual and augmented reality, and automated driving with Intel® Xeon® Scalable processors, or taking AI to the edge with Movidius vision processing unit (VPU) technology, Intel offers an AI portfolio of tools, training and deployment options for the next generation of AI-powered products and services.

"The Myriad 2 VPU housed inside the Movidius Neural Compute Stick provides powerful, yet efficient performance – more than 100 gigaflops of performance within a 1W power envelope – to run real-time deep neural networks directly from the device," said Remi El-Ouazzane, vice president and general manager of Movidius, an Intel company. "This enables a wide range of AI applications to be deployed offline."

Machine intelligence development is fundamentally composed of two stages: (1) training an algorithm on large sets of sample data via modern machine learning techniques and (2) running the algorithm in an end-application that needs to interpret real-world data. This second stage is referred to as "inference," and performing inference at the edge – or natively inside the device – brings numerous benefits in terms of latency, power consumption and privacy. The Movidius Neural Compute Stick supports this edge-inference workflow in three ways:

  • Compile: Automatically convert a trained Caffe-based convolutional neural network (CNN) into an embedded neural network optimized to run on the onboard Movidius Myriad 2 VPU.
  • Tune: Layer-by-layer performance metrics for both industry-standard and custom-designed neural networks enable effective tuning for optimal real-world performance at ultra-low power. Validation scripts allow developers to compare the accuracy of the optimized model on the device to the original PC-based model.
  • Accelerate: Unique to the Movidius Neural Compute Stick, the device can behave as a discrete neural network accelerator, adding dedicated deep learning inference capabilities to existing computing platforms for improved performance and power efficiency.
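
To illustrate what this compile-then-deploy workflow can look like in practice, below is a minimal sketch that assumes the Movidius Neural Compute SDK (NCSDK 1.x) and its Python API (mvnc) are installed on the host. The graph file name, input size and preprocessing are placeholder assumptions for illustration, and exact API calls may differ between SDK releases.

    # Minimal sketch of running inference on the Movidius Neural Compute Stick.
    # Assumes a trained Caffe CNN has already been compiled into a binary
    # "graph" file with the SDK's compiler, e.g.:
    #   mvNCCompile deploy.prototxt -w weights.caffemodel -o graph
    # File names and the input tensor below are illustrative placeholders.

    import numpy as np
    from mvnc import mvncapi as mvnc

    # Find and open the first attached Neural Compute Stick.
    devices = mvnc.EnumerateDevices()
    if not devices:
        raise RuntimeError("No Movidius Neural Compute Stick found")
    device = mvnc.Device(devices[0])
    device.OpenDevice()

    # Load the compiled network onto the stick.
    with open("graph", "rb") as f:
        graph_blob = f.read()
    graph = device.AllocateGraph(graph_blob)

    # Prepare one input tensor (the VPU works with half-precision floats).
    image = np.random.rand(224, 224, 3).astype(np.float16)  # placeholder input

    # Run inference on the stick and read back the result.
    graph.LoadTensor(image, "user_object")
    output, _ = graph.GetResult()
    print("Top class index:", int(np.argmax(output)))

    # Release resources.
    graph.DeallocateGraph()
    device.CloseDevice()

In this sketch, the host only moves tensors to and from the USB device; the deep neural network itself executes on the onboard Myriad 2 VPU, which is what allows inference to run offline and within a low power envelope.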