| Cloud | Type | Availability | Memory (GB) | vCPUs | vGPUs | Price/Hour | Notes |
|---|---|---|---|---|---|---|---|
| NVIDIA H100 | 1xH100 | On-demand | 200 | 16 | 1 | $2.95/Hr ($1.99/Hr with a 3-month term) | Great for training and fine-tuning medium-to-large AI models with a balance of power and cost. |
| NVIDIA H100 | 8xH100 | On-demand | 1600 | 128 | 8 | $23.60/Hr | Best for large-scale model training and distributed compute workloads such as LLMs and diffusion models. |
| NVIDIA H200 | 1xH200 | On-demand | 200 | 16 | 1 | $2.95/Hr | Slight performance uplift over the H100; great for high-throughput inference or efficient fine-tuning. |
| NVIDIA H200 | 8xH200 | On-demand | 1600 | 128 | 8 | $23.60/Hr | Solid enterprise-grade choice for generative models and intensive compute pipelines. |
| NVIDIA 4XLarge | 4XLarge | On-demand | 256 | 64 | 0 | $1.59/Hr | |
| NVIDIA 2XLarge | 2XLarge | On-demand | 64 | 16 | 0 | $0.40/Hr | |
| NVIDIA Large | Large | On-demand | 16 | 4 | 0 | $0.09/Hr | |
| AWS T3 | t3.medium | On-demand | 4 | 2 | 0 | $0.10/Hr | |
| AWS R5 | r5.large | On-demand | 16 | 2 | 0 | $0.13/Hr | |
| AWS R5 | r5.xlarge | On-demand | 32 | 4 | 0 | $0.25/Hr | |
| AWS R5 | r5.2xlarge | On-demand | 64 | 8 | 0 | $0.50/Hr | |
| AWS R5 | r5.4xlarge | On-demand | 128 | 16 | 0 | $1.01/Hr | |
| AWS R5 | r5.8xlarge | On-demand | 256 | 32 | 0 | $2.02/Hr | |
| AWS R5 | r5.12xlarge | On-demand | 384 | 48 | 0 | $3.02/Hr | |
| AWS R5 | r5.16xlarge | On-demand | 512 | 64 | 0 | $4.03/Hr | |
| AWS X1 | x1.16xlarge | On-demand | 976 | 64 | 0 | $6.67/Hr | |
| AWS X1 | x1.32xlarge | On-demand | 1952 | 128 | 0 | $13.34/Hr | |
| AWS X1e | x1e.16xlarge | On-demand | 1952 | 64 | 0 | $13.34/Hr | |
| AWS X1e | x1e.32xlarge | On-demand | 3904 | 128 | 0 | $26.69/Hr | |
| AWS T4 | g4dn.xlarge | On-demand | 16 | 4 | 1 | $0.10/Hr | Entry-level GPU; ideal for testing, small inference jobs, or development environments. |
| AWS T4 | g4dn.4xlarge | On-demand | 64 | 16 | 1 | $0.38/Hr | Affordable choice for batch inference or lightweight training with moderate memory needs. |
| AWS T4 | g4dn.8xlarge | On-demand | 128 | 32 | 1 | $0.77/Hr | Solid middle ground for model tuning and image generation with modest GPU memory needs. |
| AWS T4 | g4dn.metal | On-demand | 384 | 96 | 8 | $2.30/Hr | Fully unlocked bare-metal instance for high-parallelism workloads; great for multi-container pipelines. |
| AWS V100 | p3.2xlarge | On-demand | 61 | 8 | 1 | $0.73/Hr | Suitable for classic deep learning models, moderate-batch training, or experimentation. |
| AWS V100 | p3.8xlarge | On-demand | 244 | 32 | 4 | $2.93/Hr | Powerful setup for training medium-to-large models and running intensive parallel tasks. |
| AWS V100 | p3.16xlarge | On-demand | 488 | 64 | 8 | $5.86/Hr | Top-tier legacy compute; best for larger models, multi-GPU training, or fast inference at scale. |
| AWS A10G | g5.xlarge | On-demand | 16 | 4 | 1 | $1.00/Hr | Single A10G (24 GB); great for small fine-tunes, LoRA tests, and quick prototypes. |
| AWS A10G | g5.2xlarge | On-demand | 32 | 8 | 1 | $1.21/Hr | 1×A10G with more CPU/RAM; smoother data prep and medium-size inference. |
| AWS A10G | g5.4xlarge | On-demand | 64 | 16 | 1 | $1.62/Hr | 1×A10G with ample RAM; stable larger-batch inference and preprocessing. |
| AWS A10G | g5.8xlarge | On-demand | 128 | 32 | 1 | $2.45/Hr | High-RAM single GPU for memory-heavy fine-tunes, embeddings, and video/image pipelines. |
| AWS A10G | g5.12xlarge | On-demand | 192 | 48 | 4 | $5.67/Hr | 4×A10G; multi-GPU training, distributed inference, and parallel batch generation. |
| AWS A10G | g5.16xlarge | On-demand | 256 | 64 | 1 | $4.09/Hr | Maxed-out single-GPU node; big-RAM ETL and training where one GPU is enough. |
| AWS A10G | g5.24xlarge | On-demand | 384 | 96 | 4 | $8.14/Hr | 4×A10G with huge RAM; bigger-context fine-tunes and heavy data pipelines. |
| AWS A10G | g5.48xlarge | On-demand | 768 | 192 | 8 | $16.28/Hr | 8×A10G in one box; data-parallel training or very high-throughput inference. |
| AWS C5 | c5.xlarge | On-demand | 8 | 4 | 0 | $0.17/Hr | |
| AWS C5 | c5.2xlarge | On-demand | 16 | 8 | 0 | $0.34/Hr | |
| AWS C5 | c5.4xlarge | On-demand | 32 | 16 | 0 | $0.61/Hr | |
| AWS C5 | c5.9xlarge | On-demand | 72 | 36 | 0 | $1.53/Hr | |
| AWS C5 | c5.12xlarge | On-demand | 96 | 48 | 0 | $2.04/Hr | |
| AWS C5 | c5.18xlarge | On-demand | 144 | 72 | 0 | $3.06/Hr | |
| AWS C5 | c5.24xlarge | On-demand | 192 | 96 | 0 | $4.08/Hr | |
| AWS C5 | c5.metal | On-demand | 192 | 96 | 0 | $4.08/Hr | |
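As a rough illustration of comparing options from the table, here is a minimal sketch that prices out a training run using the listed on-demand rates (the instance selection and the 12-hour run length are hypothetical, chosen only for the example):

```python
# On-demand rates ($/hour) for three 8-GPU options from the pricing table.
RATES_PER_HOUR = {
    "8xH100": 23.60,       # NVIDIA H100, 8 GPUs
    "g5.48xlarge": 16.28,  # AWS A10G, 8 GPUs
    "p3.16xlarge": 5.86,   # AWS V100, 8 GPUs
}

def job_cost(instance: str, hours: float) -> float:
    """Estimated on-demand cost of running one instance for a given number of hours."""
    return round(RATES_PER_HOUR[instance] * hours, 2)

# Price out a hypothetical 12-hour, 8-GPU training run on each option:
for name in RATES_PER_HOUR:
    print(f"{name}: ${job_cost(name, 12):.2f}")
```

The cheapest hardware is not always the cheapest run: if newer GPUs finish the same job in fewer hours, the higher hourly rate can still win on total cost.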
Explorer
Poke around the Saturn Cloud platform experience, then tap into low-cost GPUs with our Pro tier.
Free
- ✓Full UI Access
Pro
For individuals and scrappy teams of data scientists, AI/ML engineers, and startups. Deployed in our cloud.
$0 per user/month
plus usage costs
- ✓Access to JupyterLab, RStudio, and Dask
- ✓Build RAG pipelines
- ✓Deploy and finetune LLMs
- ✓Deploy Models, Dashboards and Jobs
- ✓Access all instance types
- ✓Share resources easily across your team
- ✓Manage group-owned resources
- ✓Admin tools to monitor and control usage
- ✓Consolidated billing for your entire team
Enterprise
Approved by IT security for teams of all sizes.
Chat with us
- ✓Fully managed by Saturn Cloud. No DevOps resources required.
- ✓Installs in cloud environments offered by AWS, Microsoft Azure, Google Cloud, Oracle, and Nebius
- ✓Advanced security: SSO and installation into custom VPCs and private subnets available
- ✓Dedicated technical support
Trusted by ML teams around the world
Saturn Cloud makes my work so much easier. When I sit down at the beginning of the day, I just want my environment to work. I want my favorite packages installed and available on demand. I want it to be easy to scale my workspace and have it shut down automatically when I'm done. Saturn Cloud solves all of that. Their customer service is also top-notch.

Daniel Burkhardt
Machine Learning Scientist

Frequently Asked Questions
How does pricing work?
Compute and storage are billed at hourly rates. On Saturn Cloud's Pro plan, usage is billed in $10 increments.
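One plausible reading of the $10-increment rule, sketched as arithmetic (the round-up behavior here is an assumption for illustration, not Saturn Cloud's documented billing logic):

```python
import math

def billed_amount(usage_dollars: float, increment: float = 10.0) -> float:
    """Round accrued usage up to the next billing increment (assumed round-up behavior)."""
    return math.ceil(usage_dollars / increment) * increment

# e.g. $23.45 of compute/storage usage would land in the $30 bucket:
print(billed_amount(23.45))
```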
How do I use Saturn within my corporate cloud?
Talk to us about Saturn Cloud Enterprise. IT security teams love us.
Am I charged when my machine is off?
No. You pay only for storage, based on the size of your disk.
How do I cancel my account?
Just email support@saturncloud.io.
100,000+ Data Scientists and ML Engineers use Saturn Cloud to effortlessly collaborate and manage their data