
99.99% Uptime SLA

50% Cost Savings

EU Data Sovereignty

Human Support
Designed for Teams Like Yours
Built for teams that need enterprise-grade performance without enterprise-level costs: AI & ML engineers, developers and startups, graphics teams, and remote/VDI workloads.

AI & ML engineers
Train and run ML models like ResNet-25M, RNNT-120M, GPT-6B, Mistral-7B

Developers & Startups
Build, test, and deploy AI prototypes or inference workloads

Graphics & 3D Teams
Rendering, game streaming, real-time CGI

Remote Teams / Virtual Workstations
GPU-powered desktops for multiple users
| Specs at a Glance |
|---|
| 1-4 NVIDIA L4 GPUs per instance |
| Up to 96 vCPUs, 384 GB memory |
| Up to 4.2 TB storage |
| Up to 15 Gbps burst bandwidth |
| Private network included |
Benefits of Leaseweb

Flexibility
Spin instances up or down instantly to match your workloads.

AWS Compatible APIs
Automate with REST, CLI, or Terraform; the API is fully EC2-compatible.

Transparent & Flexible Billing
Clear upfront pricing with hourly or monthly options.

EU Data Sovereignty
Amsterdam-based, 100% GDPR-compliant.
Why Businesses Choose Leaseweb
Start Running Your AI Workloads on Our Enterprise-Grade GPU Cloud
Spin up 1–4 NVIDIA L4 GPUs on-demand with predictable performance, full EU data sovereignty, and up to 50% lower costs than hyperscalers.
Frequently Asked Questions
How can I get started?
To get started, create a Leaseweb account and complete our customer verification process, which includes a one-time €1.00 verification payment.
Once your account is ready, you have instant access to our Customer Portal, where you can start launching instances.
Are CPU and GPU resources dedicated?
GPUs on G6 instances are fully dedicated to your workloads, while vCPUs are shared. This setup gives you cost-efficient compute while ensuring predictable GPU performance.
What kind of uptime can I expect with G6 instances?
All G6 Public Cloud instances are backed by a 99.99% uptime SLA, ensuring your workloads stay running reliably so you can focus on your work, not infrastructure.
Can I train LLMs on G6 instances?
G6 is best suited for ML model training and LLM inference. Large LLM training is not recommended due to GPU memory limits.
What model sizes can I run?
G6 instances support ML models like ResNet-25M and RNNT-120M, as well as LLMs of up to a few billion parameters, such as GPT-6B and Mistral-7B.
Can I run multiple models in parallel on a multi-GPU instance?
Yes, on multi-GPU G6 instances, each GPU runs independently, so you can deploy multiple models or tasks across GPUs simultaneously without them interfering with each other.
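One common way to run independent models per GPU is to pin each process to a single device with the `CUDA_VISIBLE_DEVICES` environment variable (the standard NVIDIA mechanism). A minimal sketch; `serve.py` and the model names are placeholders, not part of the G6 platform:

```python
import os
import subprocess

def gpu_env(gpu_index: int) -> dict:
    """Environment that pins a process to a single GPU via
    CUDA_VISIBLE_DEVICES (the standard NVIDIA mechanism)."""
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_index)
    return env

def launch_per_gpu(commands: dict) -> list:
    """Start one process per GPU; `commands` maps GPU index -> argv.
    The returned Popen handles can be waited on or monitored."""
    return [subprocess.Popen(cmd, env=gpu_env(idx))
            for idx, cmd in commands.items()]

# Hypothetical usage on a multi-GPU G6 instance
# ("serve.py" and the model names are illustrative placeholders):
# launch_per_gpu({
#     0: ["python", "serve.py", "--model", "mistral-7b"],
#     1: ["python", "serve.py", "--model", "gpt-6b"],
# })
```

Because each process sees only its assigned GPU, the models cannot contend for the same device memory.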
How do I manage instances programmatically?
Instances are fully automatable via the REST API, CLI, or Terraform; the API is AWS EC2-compatible.
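Because the API is EC2-compatible, standard AWS tooling such as boto3 can be pointed at it with a custom endpoint. A minimal sketch, assuming a placeholder endpoint URL, region, image ID, and instance type (check the Customer Portal documentation for the real values):

```python
def launch_params(image_id: str, instance_type: str, count: int = 1) -> dict:
    """Build the keyword arguments for an EC2-style RunInstances call."""
    return {
        "ImageId": image_id,
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
    }

def make_client(endpoint_url: str, region: str = "eu-west-1"):
    """Create an EC2 client against an EC2-compatible endpoint.
    boto3 reads credentials from the usual env vars / config files;
    imported lazily so the helper above stays dependency-free."""
    import boto3  # AWS SDK for Python
    return boto3.client("ec2", endpoint_url=endpoint_url, region_name=region)

# Hypothetical usage (endpoint URL, image ID, and instance type
# are placeholders, not documented Leaseweb values):
# client = make_client("https://ec2.example-endpoint.test")
# client.run_instances(**launch_params("ami-12345678", "g6.large"))
```

The same endpoint override works for the AWS CLI (`--endpoint-url`) and the Terraform AWS provider's custom-endpoint configuration.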
Can I monitor GPU usage and performance?
Real-time metrics, customizable alerts, and usage dashboards are available in the Customer Portal.
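For on-instance checks alongside the portal dashboards, NVIDIA's standard `nvidia-smi` tool can be scripted. A small sketch that parses its CSV output; the query fields shown are standard `nvidia-smi` options, and running it requires the NVIDIA driver on the instance:

```python
import csv
import io
import subprocess

def parse_nvidia_smi_csv(text: str) -> list:
    """Parse `nvidia-smi --format=csv` output into one dict per GPU."""
    rows = list(csv.reader(io.StringIO(text)))
    header = [h.strip() for h in rows[0]]
    return [dict(zip(header, (v.strip() for v in row)))
            for row in rows[1:] if row]

def query_gpus() -> list:
    """Query utilization and memory use of every GPU on this instance."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,utilization.gpu,memory.used",
         "--format=csv"],
        capture_output=True, text=True, check=True).stdout
    return parse_nvidia_smi_csv(out)

# Example of the CSV shape the parser expects:
# sample = ("index, utilization.gpu [%], memory.used [MiB]\n"
#           "0, 37 %, 5120 MiB\n")
# parse_nvidia_smi_csv(sample)  ->  list with one dict for GPU 0
```

Polling this in a loop (or exporting it to your own metrics stack) complements the alerts configured in the Customer Portal.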
What backup and recovery options are available?
Snapshots, the GRML rescue environment, and Acronis backup integration provide robust data protection.
Can I use G6 instances for graphics workloads or virtual desktops?
Yes, G6 supports GPU-accelerated rendering, game streaming, image processing, and VDI for multiple users.
Can I integrate G6 instances with my existing cloud workflows?
API compatibility with AWS EC2 and Terraform allows easy integration with your current infrastructure.






