Unprecedented Acceleration at Every Scale

March 02, 2022



The Most Powerful Compute Platform for Every Workload

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration - at every scale - to power the world's highest-performing elastic data centers for AI, data analytics, and high-performance computing (HPC) applications. As the engine of the NVIDIA data center platform, A100 provides up to 20X higher performance over the prior NVIDIA Volta™ generation. A100 can efficiently scale up or be partitioned into seven isolated GPU instances with Multi-Instance GPU (MIG), providing a unified platform that enables elastic data centers to dynamically adjust to shifting workload demands.

NVIDIA A100 Tensor Core technology supports a broad range of math precisions, providing a single accelerator for every workload. The latest generation A100 80GB doubles GPU memory and debuts the world's fastest memory bandwidth at 2 terabytes per second (TB/s), speeding time to solution for the largest models and most massive datasets.

A100 is part of the complete NVIDIA data center solution that incorporates building blocks across hardware, networking, software, libraries, and optimized AI models and applications from the NVIDIA NGC™ catalog. Representing the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to deliver real-world results and deploy solutions into production at scale.

Groundbreaking Innovations


NVIDIA Ampere Architecture

Whether using MIG to partition an A100 GPU into smaller instances or NVLink to connect multiple GPUs to speed large-scale workloads, A100 can readily handle different-sized acceleration needs, from the smallest job to the biggest multi-node workload. A100's versatility means IT managers can maximize the utility of every GPU in their data center, around the clock.


Third-Generation Tensor Cores

NVIDIA A100 delivers 312 teraFLOPS (TFLOPS) of deep learning performance. That's 20X the Tensor floating-point operations per second (FLOPS) for deep learning training and 20X the Tensor tera operations per second (TOPS) for deep learning inference compared to NVIDIA Volta GPUs.
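The 20X training figure can be sanity-checked with simple arithmetic, under the assumption (not stated in this text) that the comparison is A100's TF32 Tensor Core rate with sparsity against V100's standard FP32 peak:

```python
# Hedged sanity check of the datasheet's 20X training figure.
# Assumption: the comparison pairs A100's TF32 Tensor Core throughput
# with sparsity (312 TFLOPS) against V100's standard FP32 peak
# (~15.7 TFLOPS, the published V100 SXM2 figure).
a100_tf32_sparse_tflops = 312.0
v100_fp32_tflops = 15.7

speedup = a100_tf32_sparse_tflops / v100_fp32_tflops
print(f"training speedup ~ {speedup:.0f}X")  # roughly 20X
```

The ratio comes out just under 20, which is consistent with the rounded marketing figure.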


Next-Generation NVLink

NVIDIA NVLink in A100 delivers 2X higher throughput compared to the previous generation. When combined with NVIDIA NVSwitch™, up to 16 A100 GPUs can be interconnected at up to 600 gigabytes per second (GB/sec), unleashing the highest application performance possible on a single server. NVLink is available in A100 SXM GPUs via HGX A100 server boards and in PCIe GPUs via an NVLink Bridge for up to 2 GPUs.
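The 600 GB/sec figure follows from the per-link numbers. As a hedged back-of-envelope (the link counts below are assumptions drawn from published third-generation NVLink specifications, not from this text):

```python
# Where 600 GB/s comes from. Assumption: each A100 exposes 12
# third-generation NVLink links, each moving 25 GB/s per direction,
# i.e. 50 GB/s bidirectional per link.
links_per_gpu = 12
gb_per_s_per_link = 50  # bidirectional

aggregate = links_per_gpu * gb_per_s_per_link
print(f"{aggregate} GB/s aggregate NVLink bandwidth per GPU")  # 600 GB/s
```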


Multi-Instance GPU (MIG)

An A100 GPU can be partitioned into as many as seven GPU instances, fully isolated at the hardware level with their own high-bandwidth memory, cache, and compute cores. MIG gives developers access to breakthrough acceleration for all their applications, and IT administrators can offer right-sized GPU acceleration for every job, optimizing utilization and expanding access to every user and application.
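To make the partitioning concrete, here is a hedged sketch of the memory math for the smallest instances on an 80GB card. The eight-slice layout is an assumption based on NVIDIA's MIG documentation (the smallest profile on A100 80GB is named "1g.10gb"); it is not stated in this text:

```python
# Hedged sketch of how MIG carves up an A100 80GB. Assumption: device
# memory is divided into 8 equal slices, but at most 7 GPU instances
# can be created, matching the smallest "1g.10gb" MIG profile.
total_memory_gb = 80
memory_slices = 8
max_instances = 7

per_instance_gb = total_memory_gb // memory_slices
print(f"{max_instances} instances x {per_instance_gb} GB each")  # 7 x 10 GB
```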


High-Bandwidth Memory (HBM2e)

With up to 80 gigabytes of HBM2e, A100 delivers the world's fastest GPU memory bandwidth of over 2TB/s, as well as a dynamic random-access memory (DRAM) utilization efficiency of 95%. A100 delivers 1.7X higher memory bandwidth over the previous generation.
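One way to build intuition for 2TB/s is to estimate how long a single pass over the entire 80GB of device memory takes at peak bandwidth:

```python
# What 2 TB/s means in practice: a rough estimate of the time to
# stream the full 80 GB of HBM2e once at peak bandwidth.
memory_gb = 80
bandwidth_gb_per_s = 2000  # ~2 TB/s, per the datasheet

sweep_ms = memory_gb / bandwidth_gb_per_s * 1000
print(f"full-memory sweep: {sweep_ms:.0f} ms")  # 40 ms
```

Real kernels rarely sustain the peak, which is why the quoted 95% DRAM utilization efficiency matters.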


Structural Sparsity

AI networks have millions to billions of parameters. Not all of these parameters are needed for accurate predictions, and some can be converted to zeros, making the models "sparse" without compromising accuracy. Tensor Cores in A100 can provide up to 2X higher performance for sparse models. While the sparsity feature more readily benefits AI inference, it can also improve the performance of model training.
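The sparsity pattern A100's Tensor Cores accelerate is structured 2:4 sparsity: in every group of four consecutive weights, two are zero. A minimal NumPy sketch of how a dense weight vector could be pruned to that pattern (the magnitude-based selection is an illustrative heuristic, not NVIDIA's exact pruning recipe):

```python
import numpy as np

# Illustrative sketch of the 2:4 structured-sparsity pattern: in each
# group of 4 consecutive weights, zero the 2 smallest-magnitude values,
# leaving exactly half the entries nonzero.
def prune_2_of_4(weights: np.ndarray) -> np.ndarray:
    w = weights.reshape(-1, 4).copy()
    # indices of the 2 smallest |w| entries in each group of 4
    drop = np.argsort(np.abs(w), axis=1)[:, :2]
    np.put_along_axis(w, drop, 0.0, axis=1)
    return w.reshape(weights.shape)

w = np.array([0.9, -0.1, 0.05, -0.8, 0.3, 0.2, -0.7, 0.01])
sparse = prune_2_of_4(w)
print(sparse)                 # [ 0.9  0.   0.  -0.8  0.3  0.  -0.7  0. ]
print((sparse == 0).mean())   # 0.5
```

Because the zero positions are constrained to a fixed pattern, the hardware can skip them with a compact metadata index, which is what makes the 2X claim feasible in practice.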

The NVIDIA A100 Tensor Core GPU is the flagship product of the NVIDIA data center platform for deep learning, HPC, and data analytics. The platform accelerates over 2,000 applications, including every major deep learning framework. A100 is available everywhere, from desktops to servers to cloud services, delivering both dramatic performance gains and cost-saving opportunities.