AI Computing Services
Our AI computing offering focuses on virtual machine (VM) configurations optimized for artificial intelligence and machine learning workflows. These VMs are designed to provide flexible, instantly accessible computing resources for AI development and deployment needs.
Virtual Machine (VM) Overview
The VM configurations provide flexibility and instant deployment capabilities, making them ideal for AI development, testing, and production workloads. These VMs include:
- Pre-installed AI frameworks and development tools (see the verification sketch after this list)
- High-speed storage access
- Network speeds up to 25 Gbit/s
- Instant provisioning for quick scaling
- Access to all cloud features through standard protocols
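As a quick illustration of how the pre-installed frameworks can be verified on a freshly provisioned VM, the sketch below checks GPU visibility from PyTorch. It assumes PyTorch is among the pre-installed frameworks; the exact framework set depends on the chosen image.

```python
# Minimal environment check on a newly provisioned AI VM.
# Assumes PyTorch is among the pre-installed frameworks; adjust for your image.
import torch

def report_gpu_environment() -> None:
    """Print basic information about the GPUs visible to PyTorch."""
    if not torch.cuda.is_available():
        print("No CUDA-capable GPU visible to PyTorch.")
        return
    print(f"CUDA available, {torch.cuda.device_count()} GPU(s) detected:")
    for index in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(index)
        total_gb = props.total_memory / 1024**3
        print(f"  GPU {index}: {props.name}, {total_gb:.0f} GB memory")

if __name__ == "__main__":
    report_gpu_environment()
```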
AI Virtual Machines (VMs)
GPU-Accelerated AI VMs
All GPU-accelerated instances include the NVIDIA CUDA Toolkit and popular AI frameworks pre-installed (a quick environment check follows the list of VM types below). The following GPU VM configurations are available:
- VMs with GPU NVIDIA H100
- VMs with GPU NVIDIA L40S
- VMs with shared GPU NVIDIA L40S
- Spot GPU VMs with NVIDIA L40S
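The sketch below shows one way to confirm the GPU driver and CUDA toolkit on any of these instances by querying nvidia-smi and nvcc. It assumes both tools are on the PATH, which should be the case when the CUDA Toolkit is pre-installed.

```python
# Confirm the GPU driver and CUDA toolkit on a GPU-accelerated VM.
# Assumes nvidia-smi and nvcc are on the PATH (pre-installed CUDA Toolkit).
import shutil
import subprocess

def run(command: list[str]) -> str:
    """Run a command and return its stdout, or an explanatory message."""
    if shutil.which(command[0]) is None:
        return f"{command[0]} not found on PATH"
    result = subprocess.run(command, capture_output=True, text=True, check=False)
    return result.stdout.strip() or result.stderr.strip()

# GPU model and memory as reported by the driver.
print(run(["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"]))
# CUDA compiler version from the toolkit.
print(run(["nvcc", "--version"]))
```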
Virtual Machines with GPU NVIDIA H100
The H100-based VMs are designed for the most demanding AI workloads, featuring 80GB of GPU memory per GPU for training large language models and other complex AI tasks. These machines deliver significantly faster training times than previous GPU generations, especially for transformer-based models. A rough memory-sizing sketch follows the table below.
Available VM | vCores | RAM (GB) | SSD Storage (GB) | GPU Specifications |
---|---|---|---|---|
gpu.h100.1 | 30 | 220 | 100 | 80GB GPU RAM |
gpu.h100.2 | 60 | 440 | 200 | 2x80GB GPU RAM |
gpu.h100.4 | 120 | 880 | 400 | 4x80GB GPU RAM |
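For a rough sense of what 80GB of GPU memory per H100 implies for model training, the back-of-the-envelope sketch below estimates per-GPU memory for parameters, gradients, and Adam optimizer state in mixed precision. The assumptions are illustrative only: it assumes ZeRO-style sharding of state across GPUs and ignores activation memory, which can dominate in practice.

```python
# Back-of-the-envelope estimate of per-GPU training memory for a dense model.
# Illustrative assumptions: mixed-precision training with the Adam optimizer,
# fp16 parameters/gradients (2 bytes each) plus fp32 master weights and two
# fp32 optimizer moments (12 bytes), i.e. roughly 16 bytes per parameter.
# Activation memory is ignored; ZeRO-style sharding across GPUs is assumed.
BYTES_PER_PARAM = 2 + 2 + 12  # params + grads + optimizer state

def training_memory_gb(num_params: float, num_gpus: int = 1) -> float:
    """Approximate per-GPU memory in GB when state is sharded evenly across GPUs."""
    return num_params * BYTES_PER_PARAM / num_gpus / 1024**3

H100_MEMORY_GB = 80
for billions in (1, 7, 13):
    for gpus in (1, 2, 4):  # matches gpu.h100.1 / .2 / .4
        need = training_memory_gb(billions * 1e9, gpus)
        fits = "fits" if need < H100_MEMORY_GB else "does not fit"
        print(f"{billions}B params on {gpus}x H100: ~{need:.0f} GB per GPU ({fits})")
```

In practice, batch size, sequence length, and the chosen parallelism strategy shift these numbers considerably.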
Virtual Machines with GPU NVIDIA L40S
Servers equipped with L40S GPUs offer exceptional versatility at an attractive price point. Available in both virtualized and passthrough modes, they excel at AI training, inference, and graphics workloads. A simple multi-GPU inference sketch follows the table below.
Available VM | vCores | RAM (GB) | SSD Storage (GB) | GPU Specifications |
---|---|---|---|---|
gpu.l40s.2 | 64 | 256 | 512 | 2x48GB GPU RAM |
gpu.l40s.4 | 128 | 512 | 1000 | 4x48GB GPU RAM |
gpu.l40s.8 | 256 | 1000 | 1000 | 8x48GB GPU RAM |
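As an illustration of how a multi-GPU L40S VM might be used for inference, the sketch below places one model replica per visible GPU and spreads batches across them. This is only a simple pattern under placeholder model and data; frameworks with dedicated multi-GPU support offer more sophisticated options.

```python
# Simple data-parallel inference across all GPUs in a dedicated L40S VM.
# The model and input batches are placeholders for illustration.
import torch

def build_model() -> torch.nn.Module:
    """Placeholder model; replace with your own network."""
    return torch.nn.Sequential(torch.nn.Linear(1024, 1024), torch.nn.ReLU())

num_gpus = torch.cuda.device_count()
devices = [torch.device(f"cuda:{i}") for i in range(num_gpus)] or [torch.device("cpu")]

# One model replica per GPU (each L40S provides 48GB of GPU RAM).
replicas = [build_model().to(device).eval() for device in devices]

batches = [torch.randn(64, 1024) for _ in range(8)]  # placeholder input batches
with torch.no_grad():
    for i, batch in enumerate(batches):
        device = devices[i % len(devices)]
        output = replicas[i % len(replicas)](batch.to(device))
        print(f"batch {i} processed on {device}, output shape {tuple(output.shape)}")
```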
Virtual Machines with shared GPU NVIDIA L40S
Virtualized L40S GPUs offer flexible configurations, allowing teams to right-size their compute resources. Available in 1/8, 1/4, 1/2, and full GPU allocations, these VMs are ideal for developing and running smaller AI models, inference workloads, and applications that do not require a dedicated GPU. Fractional GPU sharing maintains high performance while optimizing resource utilization and cost efficiency. A sketch that adapts batch size to the available vGPU memory follows the table below.
Available VM | vCores | RAM (GB) | SSD Storage (GB) | GPU Specifications |
---|---|---|---|---|
vm.l40s.1 | 4 | 16 | 40 | 4GB vGPU RAM |
vm.l40s.2 | 8 | 32 | 80 | 12GB vGPU RAM |
vm.l40s.4 | 16 | 64 | 160 | 24GB vGPU RAM |
vm.l40s.8 | 32 | 128 | 320 | 48GB vGPU RAM |
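Because fractional allocations expose different amounts of vGPU memory, a workload can query the memory it actually received and size itself accordingly. The sketch below illustrates this with a simple heuristic; the thresholds and batch sizes are arbitrary examples, not recommendations.

```python
# Adapt batch size to the vGPU memory exposed to this shared-GPU VM.
# The thresholds and batch sizes below are arbitrary illustrative values.
import torch

def pick_batch_size() -> int:
    """Choose a batch size based on the total GPU memory visible to this VM."""
    if not torch.cuda.is_available():
        return 8  # conservative CPU fallback
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    if total_gb >= 40:   # full L40S allocation (48GB vGPU RAM)
        return 128
    if total_gb >= 20:   # half allocation (24GB vGPU RAM)
        return 64
    if total_gb >= 10:   # quarter allocation (12GB vGPU RAM)
        return 32
    return 16            # smallest allocation

print(f"Selected batch size: {pick_batch_size()}")
```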
Spot GPU Virtual Machines with NVIDIA L40S
Spot instances with L40S GPUs provide the same GPU power at significantly reduced cost, making them ideal for interruption-tolerant workloads and development environments. These instances are especially valuable for training data preprocessing, model evaluation, and hyperparameter tuning. Development and testing environments also benefit from spot instances, since temporary disruptions are usually acceptable during the development phase. They are particularly attractive for research teams and startups looking to optimize AI infrastructure spending while maintaining high computational capability. A minimal checkpoint-and-resume sketch follows the table below.
Available VM | vCores | RAM (GB) | SSD Storage (GB) | GPU Specifications |
---|---|---|---|---|
spot.vm.l40s.1 | 4 | 16 | 40 | 4GB vGPU RAM |
spot.vm.l40s.2 | 8 | 32 | 80 | 12GB vGPU RAM |
spot.vm.l40s.4 | 16 | 64 | 160 | 24GB vGPU RAM |
spot.vm.l40s.8 | 32 | 128 | 320 | 48GB vGPU RAM |
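Spot VMs can be reclaimed, so interruption-tolerant workloads should checkpoint regularly and resume from the latest checkpoint on restart. The sketch below shows a minimal PyTorch checkpoint/resume loop; the model, training step, and checkpoint path are placeholders.

```python
# Minimal checkpoint/resume pattern for training on spot GPU VMs.
# Model, optimizer, and checkpoint path are placeholders for illustration.
import os
import torch

CHECKPOINT_PATH = "checkpoint.pt"  # ideally on durable storage, not the local disk

model = torch.nn.Linear(128, 10)
optimizer = torch.optim.Adam(model.parameters())
start_step = 0

# Resume from the latest checkpoint if the previous spot VM was reclaimed.
if os.path.exists(CHECKPOINT_PATH):
    state = torch.load(CHECKPOINT_PATH)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    start_step = state["step"] + 1

for step in range(start_step, 1000):
    optimizer.zero_grad()
    loss = model(torch.randn(32, 128)).sum()  # placeholder training step
    loss.backward()
    optimizer.step()
    if step % 100 == 0:  # checkpoint periodically so little work is lost
        torch.save({"model": model.state_dict(),
                    "optimizer": optimizer.state_dict(),
                    "step": step}, CHECKPOINT_PATH)
```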
AI Development Features
VM Configuration Notes
- All VMs can be provisioned instantly
- Automatic scaling capabilities available
- Support for custom container deployments
- Integration with popular MLOps tools (an example tracking sketch follows this list)
- Regular updates to AI frameworks and tools
- Backup and snapshot capabilities included
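As one illustration of integrating a VM-based training job with popular MLOps tooling, the sketch below logs a run's parameters and metrics to an MLflow tracking server. MLflow is used purely as an example; the tracking URI, experiment name, and logged values are placeholders.

```python
# Example of logging a training run to an MLflow tracking server.
# MLflow is used purely as an illustration of MLOps tooling integration;
# the tracking URI and logged values below are placeholders.
import mlflow

mlflow.set_tracking_uri("http://mlflow.example.internal:5000")  # placeholder URI
mlflow.set_experiment("ai-vm-demo")

with mlflow.start_run():
    mlflow.log_param("instance_type", "gpu.l40s.2")
    mlflow.log_param("batch_size", 64)
    for epoch in range(3):
        mlflow.log_metric("train_loss", 1.0 / (epoch + 1), step=epoch)  # placeholder metric
```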
For more information about our AI computing services or to discuss your specific requirements, please contact our sales department.