
Nvidia 900-6G199-0000 A100 40GB PCIe GPU Accelerator

The NVIDIA® A100 GPU is a dual-slot, 10.5-inch PCI Express Gen4 card based on the NVIDIA Ampere GA100 graphics processing unit (GPU). It uses a passive heat sink for cooling, which requires sufficient system airflow to operate the card within its thermal limits. The A100 PCIe supports double-precision (FP64), single-precision (FP32), and half-precision (FP16) compute, unified virtual memory, and the page migration engine.

For performance optimization, the NVIDIA GPU Boost™ feature is supported. GPU Boost automatically and dynamically adjusts the GPU clock during runtime to maximize performance within the power cap and thermal limits.

A100 PCIe boards ship with ECC enabled by default to protect the GPU's memory interface and the on-board memories. ECC protects the memory interface by detecting single-bit, double-bit, and all odd-numbered bit errors; the GPU retries any memory transaction that has an ECC error until the data transfer is error-free. ECC protects the DRAM contents by correcting single-bit errors and detecting double-bit errors. The A100's 40 GB of HBM2 memory has native ECC support with no overhead in either memory capacity or bandwidth.

The NVIDIA A100 GPU operates unconstrained up to its thermal design power (TDP) of 250 W to accelerate applications that require the fastest computational speed and highest data throughput.

For more information on Tensor Cores, download the white paper at https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/nvidia-amperearchitecture-whitepaper.pdf

The thermal requirements for the A100 are similar to those of the NVIDIA V100S. See the thermal section for further details.

Refer to the following website for the latest list of qualified A100 servers: https://www.nvidia.com/en-us/data-center/tesla/tesla-qualified-servers-catalog/
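The ECC behavior described above (correct any single-bit error, detect any double-bit error) is the classic SECDED property. As a rough illustration of how that works, here is a toy Hamming(8,4) encoder/decoder over a 4-bit value. This is only a conceptual sketch of SECDED logic, not NVIDIA's actual on-die ECC implementation, which operates on much wider words.

```python
# Toy SECDED ("single error correct, double error detect") sketch:
# Hamming(7,4) plus an overall parity bit. Illustrative only -- NOT
# the ECC scheme actually used by A100 hardware.

def encode(nibble):
    """Encode 4 data bits into an 8-bit SECDED codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]      # data bits d0..d3
    p1 = d[0] ^ d[1] ^ d[3]                        # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                        # parity over positions 2,3,6,7
    p4 = d[1] ^ d[2] ^ d[3]                        # parity over positions 4,5,6,7
    bits = [p1, p2, d[0], p4, d[1], d[2], d[3]]    # Hamming positions 1..7
    p0 = 0
    for b in bits:                                 # overall parity bit
        p0 ^= b
    return bits + [p0]

def decode(bits):
    """Return (nibble, status): 'ok', 'corrected', or (None, 'double-bit error')."""
    b = list(bits)
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]
    s4 = b[3] ^ b[4] ^ b[5] ^ b[6]
    syndrome = s1 + 2 * s2 + 4 * s4                # Hamming position of a single flip
    overall = 0
    for x in b:
        overall ^= x                               # parity over all 8 bits
    if syndrome == 0 and overall == 0:
        status = "ok"                              # no error
    elif overall == 1:                             # odd number of flips: correctable
        if syndrome:
            b[syndrome - 1] ^= 1                   # fix the flipped bit
        else:
            b[7] ^= 1                              # the overall parity bit flipped
        status = "corrected"
    else:                                          # even flips with nonzero syndrome
        return None, "double-bit error"            # detected but uncorrectable
    nibble = b[2] | (b[4] << 1) | (b[5] << 2) | (b[6] << 3)
    return nibble, status
```

Flipping one bit of a codeword yields `"corrected"` with the original value recovered; flipping two bits is reported as a double-bit error, mirroring the fix-single/detect-double behavior described for the DRAM contents.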