GPU-accelerated ECSs
GPU-accelerated ECSs provide outstanding floating-point computing capabilities. They are suitable for applications that require real-time, highly concurrent, massive computing.
GPU-accelerated ECSs are classified into the G series and the P series.
- G series: Graphics-accelerated ECSs, which are suitable for 3D animation rendering and CAD
- P series: Computing-accelerated or inference-accelerated ECSs, which are suitable for deep learning, scientific computing, and CAE
Recommended: Computing-accelerated P2s
Available now: all GPU models other than the recommended ones. If the available ECSs are sold out, use the recommended ones instead.
- G series
- P series
  - Computing-accelerated P3
  - Computing-accelerated P2s (recommended)
Images Supported by GPU-accelerated ECSs
Type | Series | Supported Image |
---|---|---|
Graphics-accelerated | G5 | |
Computing-accelerated | P3 | |
Computing-accelerated | P2s | |
GPU-accelerated Enhancement G5
Overview
G5 ECSs use NVIDIA Tesla V100 GPUs with NVIDIA GRID vGPU technology and provide comprehensive, professional-grade graphics acceleration. They support DirectX, OpenGL, and Vulkan and provide 16 GiB of GPU memory, meeting requirements ranging from entry-level to professional graphics processing.
Select your desired GPU-accelerated ECS type and specifications.
Specifications
Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | GPUs | GPU Memory (GiB) | Virtualization |
---|---|---|---|---|---|---|---|---|
g5.8xlarge.4 | 32 | 128 | 25/15 | 200 | 16 | 1 × V100 | 16 | KVM |
g5.16xlarge.4 | 64 | 256 | 30/30 | 400 | 32 | 2 × V100 | 2 × 16 | KVM |
G5 ECS Features
- CPU: 2nd Generation Intel® Xeon® Scalable 6278 processors (base frequency 2.6 GHz, turbo frequency 3.5 GHz) or Intel® Xeon® Scalable 6151 processors (base frequency 3.0 GHz, turbo frequency 3.4 GHz)
- Graphics acceleration APIs
- DirectX 12, Direct2D, and DirectX Video Acceleration (DXVA)
- OpenGL 4.5
- Vulkan 1.0
- CUDA and OpenCL
- NVIDIA V100 GPUs
- Acceleration for graphics applications
- Automatic scheduling of G5 ECSs to AZs where NVIDIA V100 GPUs are deployed
- Up to 16 GiB of GPU memory and a maximum resolution of 4096 × 2160 for graphics and video processing
Supported Common Software
G5 ECSs are used in graphics acceleration scenarios, such as video rendering, cloud desktop, and 3D visualization. If your software relies on DirectX or OpenGL hardware acceleration, use G5 ECSs. G5 ECSs support the following commonly used graphics processing software:
- AutoCAD
- 3ds Max
- MAYA
- Agisoft PhotoScan
- ContextCapture
Notes
- After a G5 ECS is stopped, basic resources (including vCPUs, memory, image, and GPUs) are not billed, but its system disk is still billed based on the disk capacity. If other products, such as EVS disks, EIPs, and bandwidth, are associated with the ECS, they are billed separately.
Note: Resources are released after a G5 ECS is stopped. If resources are insufficient at the next start, the start may fail. If you want to use such an ECS for a long period of time, do not stop it.
- For G5 ECSs, you need to configure a GRID license after the ECS is created.
- G5 ECSs created from a public image have a specific version of the GRID driver installed by default, but you need to purchase and configure a GRID license yourself. Ensure that the installed GRID driver version meets your service requirements.
For details about how to configure a GRID license, see Manually Installing a GRID Driver on a GPU-accelerated ECS.
- If a G5 ECS is created from a private image, make sure that the GRID driver was installed when the private image was created. If it was not, install the driver for graphics acceleration after the ECS is created (a quick way to check whether the driver is present is sketched after these notes).
For details, see Manually Installing a GRID Driver on a GPU-accelerated ECS.
- Because GPU-accelerated ECSs differ greatly in general-purpose and heterogeneous computing power, their specifications can only be changed to other specifications of the same instance type.
- GPU-accelerated ECSs do not support live migration.
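Before reinstalling anything, it can help to check from inside the guest whether a GRID or Tesla driver is already present. The following is a minimal sketch, assuming a Linux guest where an installed NVIDIA driver provides the nvidia-smi utility on the PATH; the helper name check_nvidia_driver is illustrative only and not part of any platform API.

```python
# Minimal sketch: check whether an NVIDIA driver is installed and a GPU is visible.
# Assumes a Linux guest; an installed GRID/Tesla driver ships the nvidia-smi tool.
import shutil
import subprocess

def check_nvidia_driver() -> bool:
    """Return True if nvidia-smi exists and lists at least one GPU."""
    if shutil.which("nvidia-smi") is None:
        print("nvidia-smi not found: the GPU driver does not appear to be installed.")
        return False
    # "nvidia-smi -L" prints one line per detected GPU.
    result = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    print(result.stdout.strip() or result.stderr.strip())
    return result.returncode == 0 and "GPU" in result.stdout

if __name__ == "__main__":
    check_nvidia_driver()
```

If the driver is missing, install it as described in Manually Installing a GRID Driver on a GPU-accelerated ECS before running graphics workloads.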
Computing-accelerated P3
Overview
P3 ECSs use NVIDIA A100 GPUs and provide flexibility and ultra-high-performance computing. P3 ECSs have strengths in AI-based deep learning, scientific computing, Computational Fluid Dynamics (CFD), computational finance, seismic analysis, molecular modeling, and genomics. Theoretically, a single A100 GPU delivers 19.5 TFLOPS of FP32 computing and 156 TFLOPS of TF32 Tensor Core computing (312 TFLOPS with sparsity enabled).
Specifications
Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | GPUs | GPU Memory (GiB) | Virtualization |
---|---|---|---|---|---|---|---|---|---|
p3.2xlarge.8 | 8 | 64 | 10/4 | 100 | 4 | 4 | 1 × NVIDIA A100 80GB | 80 | KVM |
p3.4xlarge.8 | 16 | 128 | 15/8 | 200 | 8 | 8 | 2 × NVIDIA A100 80GB | 160 | KVM |
p3.8xlarge.8 | 32 | 256 | 25/15 | 350 | 16 | 8 | 4 × NVIDIA A100 80GB | 320 | KVM |
p3.16xlarge.8 | 64 | 512 | 36/30 | 700 | 32 | 8 | 8 × NVIDIA A100 80GB | 640 | KVM |
P3 ECS Features
- CPU: 2nd Generation Intel® Xeon® Scalable 6248R processors with a base frequency of 3.0 GHz
- Up to eight NVIDIA A100 GPUs on an ECS
- NVIDIA CUDA parallel computing and common deep learning frameworks, such as TensorFlow, Caffe, PyTorch, and MXNet (see the verification sketch after this list)
- 19.5 TFLOPS of single-precision computing and 9.7 TFLOPS of double-precision computing on a single GPU
- NVIDIA Tensor Cores with 156 TFLOPS of TF32 computing for deep learning (312 TFLOPS with sparsity enabled)
- Up to 40 Gbit/s of network bandwidth on a single ECS
- 80 GB of HBM2 GPU memory per GPU, with a memory bandwidth of 1,935 GB/s
- Comprehensive basic capabilities
- User-defined network with flexible subnet division and network access policy configuration
- Mass storage, elastic expansion, and backup and restoration
- Elastic scaling
- Flexibility
Similar to other types of ECSs, P3 ECSs can be provisioned in a few minutes.
- Excellent supercomputing ecosystem
The supercomputing ecosystem allows you to build up a flexible, high-performance, cost-effective computing platform. A large number of HPC applications and deep-learning frameworks can run on P3 ECSs.
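To confirm that CUDA and a deep learning framework can actually see the A100 GPUs on a P3 ECS (as referenced in the feature list above), a quick device enumeration is usually enough. The following is a minimal sketch assuming a CUDA-enabled PyTorch build is already installed; it uses only standard PyTorch calls and no platform-specific API.

```python
# Minimal sketch: enumerate the GPUs visible to PyTorch on a P3 ECS.
# Assumes a CUDA-enabled PyTorch build (installed via pip or conda).
import torch

if torch.cuda.is_available():
    count = torch.cuda.device_count()
    print(f"CUDA available, {count} GPU(s) detected:")
    for i in range(count):
        props = torch.cuda.get_device_properties(i)
        # total_memory is reported in bytes; convert to GiB for readability.
        print(f"  GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GiB")
else:
    print("CUDA not available: check the Tesla driver and CUDA toolkit installation.")
```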
Supported Common Software
P3 ECSs are used in computing acceleration scenarios, such as deep learning training, inference, scientific computing, molecular modeling, and seismic analysis. If the software requires GPU CUDA support, use P3 ECSs. P3 ECSs support the following commonly used software:
- Common deep learning frameworks, such as TensorFlow, Spark, PyTorch, MXNet, and Caffe
- CUDA GPU rendering supported by RedShift for Autodesk 3ds Max and V-Ray for 3ds Max
- Agisoft PhotoScan
- MapD
- More than 2,000 GPU-accelerated applications such as Amber, NAMD, and VASP
Notes
- After a P3 ECS is stopped, basic resources (including vCPUs, memory, image, and GPUs) are not billed, but its system disk is still billed based on the disk capacity. If other products, such as EVS disks, EIPs, and bandwidth, are associated with the ECS, they are billed separately.
Note: Resources are released after a P3 ECS is stopped. If resources are insufficient at the next start, the start may fail. If you want to use such an ECS for a long period of time, do not stop it.
- If a P3 ECS is created using a private image, make sure that the Tesla driver was installed during the private image creation. If not, install the driver for computing acceleration after the ECS is created. For details, see Manually Installing a Tesla Driver on a GPU-accelerated ECS.
- Because GPU-accelerated ECSs differ greatly in general-purpose and heterogeneous computing power, their specifications can only be changed to other specifications of the same instance type.
- GPU-accelerated ECSs do not support live migration.
Computing-accelerated P2s
Overview
P2s ECSs use NVIDIA Tesla V100 GPUs to provide flexibility, high-performance computing, and cost-effectiveness. P2s ECSs provide outstanding general computing capabilities and have strengths in AI-based deep learning, scientific computing, Computational Fluid Dynamics (CFD), computational finance, seismic analysis, molecular modeling, and genomics.
Specifications
Flavor | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Max. NICs | GPUs | GPU Connection | GPU Memory (GiB) | Virtualization |
---|---|---|---|---|---|---|---|---|---|---|
p2s.2xlarge.8 | 8 | 64 | 10/4 | 50 | 4 | 4 | 1 × V100 | PCIe Gen3 | 1 × 32 GiB | KVM |
p2s.4xlarge.8 | 16 | 128 | 15/8 | 100 | 8 | 8 | 2 × V100 | PCIe Gen3 | 2 × 32 GiB | KVM |
p2s.8xlarge.8 | 32 | 256 | 25/15 | 200 | 16 | 8 | 4 × V100 | PCIe Gen3 | 4 × 32 GiB | KVM |
p2s.16xlarge.8 | 64 | 512 | 30/30 | 400 | 32 | 8 | 8 × V100 | PCIe Gen3 | 8 × 32 GiB | KVM |
P2s ECS Features
- CPU: 2nd Generation Intel® Xeon® Scalable 6278 processors (base frequency 2.6 GHz, turbo frequency 3.5 GHz) or Intel® Xeon® Scalable 6151 processors (base frequency 3.0 GHz, turbo frequency 3.4 GHz)
- Up to eight NVIDIA Tesla V100 GPUs on an ECS
- NVIDIA CUDA parallel computing and common deep learning frameworks, such as TensorFlow, Caffe, PyTorch, and MXNet (see the TensorFlow sketch after this list)
- 14 TFLOPS of single-precision computing and 7 TFLOPS of double-precision computing on a single GPU
- NVIDIA Tensor Cores with 112 TFLOPS of mixed-precision computing for deep learning
- Up to 30 Gbit/s of network bandwidth on a single ECS
- 32 GiB of HBM2 GPU memory per GPU, with a memory bandwidth of 900 GB/s
- Comprehensive basic capabilities
- User-defined network with flexible subnet division and network access policy configuration
- Mass storage, elastic expansion, and backup and restoration
- Elastic scaling
- Flexibility
Similar to other types of ECSs, P2s ECSs can be provisioned in a few minutes.
- Excellent supercomputing ecosystem
The supercomputing ecosystem allows you to build up a flexible, high-performance, cost-effective computing platform. A large number of HPC applications and deep-learning frameworks can run on P2s ECSs.
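Similarly, for framework-level verification on a P2s ECS (see the feature list above), the following is a minimal sketch assuming a GPU-enabled TensorFlow 2.x build and a working Tesla driver/CUDA stack; it only lists physical devices and performs no computation.

```python
# Minimal sketch: list the GPUs that TensorFlow can use on a P2s ECS.
# Assumes a GPU-enabled TensorFlow 2.x build and an installed Tesla driver.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print(f"{len(gpus)} GPU(s) visible to TensorFlow:")
    for gpu in gpus:
        print(f"  {gpu.name}")
else:
    print("No GPUs visible: check the Tesla driver and CUDA libraries.")
```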
Supported Common Software
P2s ECSs are used in computing acceleration scenarios, such as deep learning training, inference, scientific computing, molecular modeling, and seismic analysis. If the software requires GPU CUDA support, use P2s ECSs. P2s ECSs support the following commonly used software:
- Common deep learning frameworks, such as TensorFlow, Caffe, PyTorch, and MXNet
- CUDA GPU rendering supported by RedShift for Autodesk 3ds Max and V-Ray for 3ds Max
- Agisoft PhotoScan
- MapD
Notes
- After a P2s ECS is stopped, basic resources (including vCPUs, memory, image, and GPUs) are not billed, but its system disk is still billed based on the disk capacity. If other products, such as EVS disks, EIPs, and bandwidth, are associated with the ECS, they are billed separately.
Note: Resources are released after a P2s ECS is stopped. If resources are insufficient at the next start, the start may fail. If you want to use such an ECS for a long period of time, do not stop it.
- By default, P2s ECSs created using a public image have the Tesla driver installed.
- If a P2s ECS is created using a private image, make sure that the Tesla driver was installed during the private image creation. If not, install the driver for computing acceleration after the ECS is created. For details, see Manually Installing a Tesla Driver on a GPU-accelerated ECS.
- Because GPU-accelerated ECSs differ greatly in general-purpose and heterogeneous computing power, their specifications can only be changed to other specifications of the same instance type.
- GPU-accelerated ECSs do not support live migration.