Amazon has announced the latest addition of Graphics Processing Units to its line of EC2 instances. The new Amazon EC2 P3 instances offer up to eight NVIDIA Tesla V100 GPUs.
P3 instances were built for compute-intensive workloads such as machine learning, deep learning, seismic analysis, and other heavy-duty tasks. They come in three sizes, each powered by custom Intel Xeon E5-2686 v4 processors running at 2.7 GHz.
- p3.2xlarge – 1 Tesla V100 GPU; 16 GB GPU memory; 8 vCPUs; 61 GB memory; up to 10 Gbps network bandwidth; 1.5 Gbps EBS bandwidth
- p3.8xlarge – 4 Tesla V100 GPUs; 64 GB GPU memory; 200 Gbps NVIDIA NVLink; 32 vCPUs; 244 GB memory; 10 Gbps network bandwidth; 7 Gbps EBS bandwidth
- p3.16xlarge – 8 Tesla V100 GPUs; 128 GB GPU memory; 300 Gbps NVIDIA NVLink; 64 vCPUs; 488 GB memory; 25 Gbps network bandwidth; 14 Gbps EBS bandwidth
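A quick sanity check of the sizes listed above: per-instance GPU memory is simply the GPU count times the Tesla V100's 16 GB. The sketch below is purely illustrative (the dictionary and variable names are ours, not an AWS API):

```python
# Spec figures taken from the P3 size list above.
P3_SIZES = {
    "p3.2xlarge":  {"gpus": 1, "gpu_mem_gb": 16,  "vcpus": 8,  "ram_gb": 61},
    "p3.8xlarge":  {"gpus": 4, "gpu_mem_gb": 64,  "vcpus": 32, "ram_gb": 244},
    "p3.16xlarge": {"gpus": 8, "gpu_mem_gb": 128, "vcpus": 64, "ram_gb": 488},
}

V100_MEM_GB = 16  # memory on a single Tesla V100 GPU

# Per-instance GPU memory scales linearly with the number of GPUs.
for name, spec in P3_SIZES.items():
    assert spec["gpu_mem_gb"] == spec["gpus"] * V100_MEM_GB
    print(f"{name}: {spec['gpus']} GPU(s), {spec['gpu_mem_gb']} GB GPU memory")
```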
Each NVIDIA V100 GPU packs 5,120 CUDA cores and 640 Tensor cores, delivering up to 125 TFLOPS of mixed-precision floating point, 15.7 TFLOPS of single-precision floating point, and 7.8 TFLOPS of double-precision floating point performance. On the p3.8xlarge and p3.16xlarge sizes, the GPUs are linked by NVIDIA NVLink 2.0 with a total data rate of up to 300 GBps, so they can rapidly exchange results and data without needing to pass through the CPU.
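Multiplying the per-GPU figures above by the eight GPUs in the largest size gives a back-of-the-envelope peak for a p3.16xlarge — roughly one petaflop of mixed-precision compute:

```python
# Per-GPU peak throughput figures quoted above (Tesla V100).
MIXED_TFLOPS_PER_GPU = 125.0
FP32_TFLOPS_PER_GPU = 15.7
FP64_TFLOPS_PER_GPU = 7.8

GPUS_P3_16XLARGE = 8

# Aggregate peak per instance (ideal, no scaling losses assumed).
peak_mixed = GPUS_P3_16XLARGE * MIXED_TFLOPS_PER_GPU  # 1000 TFLOPS
peak_fp32 = GPUS_P3_16XLARGE * FP32_TFLOPS_PER_GPU    # 125.6 TFLOPS
peak_fp64 = GPUS_P3_16XLARGE * FP64_TFLOPS_PER_GPU    # 62.4 TFLOPS

print(f"p3.16xlarge peak: {peak_mixed:.0f} TFLOPS mixed, "
      f"{peak_fp32:.1f} TFLOPS FP32, {peak_fp64:.1f} TFLOPS FP64")
```

These are theoretical peaks; real workloads will land below them depending on how well they keep the GPUs fed.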
Tensor cores are designed to speed up training and inference for neural networks.
Amazon EC2 P3 instances are available in the US East (N. Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo) Regions. You can purchase them in On-Demand, Spot, Reserved Instance, and Dedicated Host form. If you need help setting up EC2 for your enterprise, contact us at PolarSeven today.
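For the curious, launching a P3 instance looks the same as any other EC2 launch via the AWS CLI. This is an illustrative sketch only — the AMI ID and key-pair name below are placeholders, not real values:

```shell
# Launch one On-Demand p3.2xlarge in US East (N. Virginia).
# Replace the AMI ID and key-pair name with your own values.
aws ec2 run-instances \
    --region us-east-1 \
    --instance-type p3.2xlarge \
    --image-id ami-xxxxxxxx \
    --key-name my-key-pair \
    --count 1
```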