Nvidia has officially announced its largest GPU, the Nvidia DGX A100, at an online-only GTC 2020 (GPU Technology Conference) event hosted by the company's CEO, Jensen Huang. It is also one of the most expensive GPUs and is definitely not meant for individual users.
The Nvidia DGX A100 is an Ampere data-center class GPU made to boost AI training and inference by 20x. In total, it packs eight individual GPUs connected to each other using NVLink technology. The GPU was built with the requirements of data analytics, scientific computing, and cloud graphics in mind. Multiple Nvidia DGX A100 units can be connected to one another using NVLink interconnect technology to create one big GPU capable of handling larger tasks.
The GPU is already available for purchase for $199K, making it one of the most expensive GPUs on the market. Several Fortune 100 brands, including Google, Amazon Web Services (AWS), Hewlett Packard Enterprise (HPE), Microsoft Azure, and Oracle, are expected to incorporate the latest Nvidia GPU in the coming days.
Breakthroughs Of The Nvidia DGX A100 GPU
The Nvidia DGX A100 is a unique GPU with more than 54 billion transistors, which makes the Nvidia A100 the world's largest 7nm processor. The GPU is based on 3rd Gen Tensor Cores with TF32, making it flexible for faster and easier use. According to Nvidia, the A100 can offer up to 20x more AI performance than FP32 precision without any code change. It also supports FP64, offering up to 2.5x more computing power for HPC applications compared to the previous generation.
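The TF32 format keeps FP32's 8-bit exponent range but stores only 10 mantissa bits instead of 23, which is what lets Tensor Cores run the math faster while code still sees ordinary FP32 tensors. As an illustrative sketch (not Nvidia's API or exact rounding behavior), this pure-Python helper mimics the precision loss by truncating the low 13 mantissa bits of a float32 bit pattern:

```python
import struct

def to_tf32(x: float) -> float:
    """Approximate TF32 precision: reinterpret x as a float32 bit
    pattern and zero the low 13 mantissa bits, leaving the 10
    mantissa bits TF32 retains (truncation, not round-to-nearest)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= ~((1 << 13) - 1)  # drop 13 of the 23 float32 mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

# Values exactly representable in 10 mantissa bits pass through unchanged;
# others lose a small amount of precision while keeping FP32's range.
print(to_tf32(1.5))   # exactly representable
print(to_tf32(0.1))   # slightly truncated
```

The "no code change" claim follows from this layout: because TF32 shares FP32's exponent width, existing FP32 models can run on the Tensor Cores without rescaling or retraining.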
The Nvidia DGX A100 is also one of the first GPUs to support third-gen NVLink, which offers high-speed connectivity between GPUs for efficient server-grade performance. The GPU uses structural sparsity to double performance on AI math.
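The structural sparsity Nvidia describes follows a 2:4 pattern: in every group of four weights, two are zeroed, so the hardware can skip half the multiplications. A minimal sketch of that pruning rule (a hypothetical helper, not Nvidia's actual tooling, which also handles retraining) keeps the two largest-magnitude values in each block of four:

```python
def prune_2_4(weights):
    """Prune a flat list of weights to a 2:4 structured-sparsity
    pattern: in every group of four values, keep the two with the
    largest magnitude and zero the other two."""
    out = []
    for i in range(0, len(weights), 4):
        block = weights[i:i + 4]
        # Indices of the two largest-magnitude entries in this block.
        keep = sorted(range(len(block)),
                      key=lambda j: abs(block[j]),
                      reverse=True)[:2]
        out.extend(v if j in keep else 0.0 for j, v in enumerate(block))
    return out

print(prune_2_4([0.9, -0.1, 0.5, 0.2]))  # → [0.9, 0.0, 0.5, 0.0]
```

Because the zeros fall in a fixed, predictable pattern rather than at arbitrary positions, the sparse Tensor Cores can exploit them directly, which is where the claimed 2x speedup on AI math comes from.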
"The powerful trends of cloud computing and AI are driving a tectonic shift in data center designs so that what was once a sea of CPU-only servers is now GPU-accelerated computing. NVIDIA A100 GPU is a 20x AI performance leap and an end-to-end machine learning accelerator — from data analytics to training to inference. For the first time, scale-up and scale-out workloads can be accelerated on one platform. NVIDIA A100 will simultaneously boost throughput and drive down the cost of data centers," said Jensen Huang, founder and CEO of NVIDIA.