Artificial intelligence for self-driving cars. Predicting our climate's future. A new drug to treat cancer. Some of the world's most important challenges need to be solved today, but they require tremendous amounts of computing to become reality.

Today's data centers rely on many interconnected commodity compute nodes, which limits the performance needed to drive important High Performance Computing (HPC) and hyperscale workloads. NVIDIA® Tesla® P100 GPU accelerators are the most advanced ever built for the data center. They tap into the new NVIDIA Pascal™ GPU architecture to deliver the world's fastest compute node, with higher performance than hundreds of slower commodity nodes. Higher performance with fewer, lightning-fast nodes enables data centers to dramatically increase throughput while also saving money. With over 400 accelerated HPC applications, including 9 of the top 10, as well as all deep learning frameworks, every HPC customer can now deploy accelerators in their data centers.
The Tesla P100 is reimagined from silicon to software, crafted with innovation at every level. Each groundbreaking technology delivers a dramatic jump in performance to inspire the creation of the world's fastest compute node.
PERFORMANCE SPECIFICATION FOR NVIDIA TESLA P100 ACCELERATORS
NVIDIA NVLink™ Interconnect Bandwidth
PCIe x16 Interconnect Bandwidth
CoWoS HBM2 Stacked Memory Capacity: 16 GB or 12 GB
CoWoS HBM2 Stacked Memory Bandwidth: 720 GB/s or 540 GB/s
Enhanced Programmability with Page Migration Engine
ECC Protection for Reliability
Server-Optimized for Data Center Deployment
Peak double-precision floating-point performance (board)
Peak single-precision floating-point performance (board)
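The Page Migration Engine listed above is what backs CUDA Unified Memory on Pascal: a single allocation is visible to both CPU and GPU, and pages migrate on demand when either side touches them. The following is a minimal sketch of that programming model, not code from this datasheet; it assumes a CUDA toolkit and a Pascal-class (or later) GPU, and the kernel name `scale` is illustrative.

```
#include <cuda_runtime.h>
#include <cstdio>

// Illustrative kernel: multiply every element of a managed array by a factor.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *data = nullptr;

    // One allocation, addressable from both host and device.
    // On Pascal, the Page Migration Engine services page faults and
    // migrates 4 KB pages between CPU and GPU memory on demand.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;      // pages resident on host

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // faults migrate pages to GPU
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);               // pages migrate back on access

    cudaFree(data);
    return 0;
}
```

Because migration is fault-driven, managed allocations can exceed physical GPU memory (oversubscription), which is part of what "Enhanced Programmability" refers to here.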