RunPod

RunPod is an AI-focused cloud infrastructure provider offering GPU compute for machine learning workloads. The company specializes in on-demand GPU instances, serverless AI workloads, and multi-node GPU clusters across 31 global regions.

Rating: 6.2
AI Score: 88/100
Pricing: From $0.38/hr
Tags: ai-ml, gpu-compute, developers, enterprise

Rating Breakdown

Performance: 5.0
Value: 6.9
Features: 5.0
Support: 6.0
AI Tools: 8.8

Hosting Plans

All plans are GPU compute instances billed hourly.

RTX 4090 Pod: $0.38/hr, 41 GB RAM
Serverless RTX 4090: $1.10/hr, 24 GB VRAM
A100 SXM Pod: $1.79/hr, 125 GB RAM
H100 PCIe Pod: $1.89/hr, 188 GB RAM
Serverless H100: $4.18/hr, 80 GB VRAM
H200 Pod: $4.31/hr, 276 GB RAM
B200 Pod: $7.56/hr, 283 GB RAM

Infrastructure

Control Panel custom
Uptime SLA 99.9%
SSH Access Yes
CDN Included No
Git Deploy Yes
Datacenters 31 global regions

Support

Channels email, ticket, live-chat
24/7 Support No

Software Support

Languages Python, Docker, PyTorch, TensorFlow, CUDA
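To give a sense of how serverless workloads are written against this stack, here is a minimal worker sketch assuming the `runpod` Python SDK is installed; the handler body and the payload shape are illustrative, not RunPod's documented schema.

```python
# Minimal serverless worker sketch (assumes the `runpod` SDK is installed;
# the payload shape and handler logic are illustrative only).

def handler(job):
    """Receive a job dict with an "input" key and return a result dict."""
    prompt = job["input"].get("prompt", "")
    # A real worker would run model inference here; this one just echoes.
    return {"output": prompt.upper()}

if __name__ == "__main__":
    import runpod  # deployment-time dependency, not needed to unit-test the handler
    runpod.serverless.start({"handler": handler})
```

Keeping the handler a plain function, with the SDK import confined to the entry point, makes the inference logic easy to test locally before deploying.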

Pros

  • Specialized GPU infrastructure with H200, B200, H100, and other high-end GPUs
  • Sub-200ms cold-start times with FlashBoot technology
  • Serverless scaling from 0 to 1000+ workers in seconds
  • No ingress/egress fees on S3-compatible storage
  • Competitive pricing with per-second billing
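The per-second billing noted above means short burst jobs cost fractions of a cent; a quick sketch using rates from the plan list (the helper function is ours, not part of any RunPod API):

```python
# Per-second billing sketch; rates come from the plan list above.

def job_cost(hourly_rate_usd: float, seconds: float) -> float:
    """Cost of running one worker for `seconds` at an hourly rate,
    billed per second."""
    return hourly_rate_usd * seconds / 3600.0

# A 90-second inference burst on an H100 PCIe Pod ($1.89/hr):
print(round(job_cost(1.89, 90), 5))  # roughly $0.047
```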

Cons

  • Primarily focused on AI/ML workloads, not general web hosting
  • Requires technical expertise in machine learning and GPU computing
  • Limited support for traditional web hosting features
  • Pricing can be complex with multiple GPU options

Ready to try RunPod?

Plans from $0.38/hr

Visit RunPod