
99.99% Uptime SLA

50% Cost Savings

EU Data Sovereignty

Human Support
G6 Instances – A great starting point for AI/ML
Our G6 instances can be launched with 1–4 NVIDIA L4 GPUs and are a great choice for teams developing their first AI/ML workloads.
How does bandwidth work for G6 instances?
Each account gets 1 TB of pooled internet traffic per month, shared across all Public Cloud instances. Incoming traffic is always free, helping you avoid unexpected costs. The following breakdown shows the costs if you exceed the monthly 1 TB allowance:
Designed for Teams Like Yours

AI & ML engineers
Train and run ML models like ResNet-25M, RNNT-120M, GPT-6B, and Mistral-7B

Developers & Startups
Build, test, and deploy AI prototypes or inference workloads

Graphics & 3D Teams
Rendering, game streaming, real-time CGI

Remote Teams / Virtual Workstations
GPU-powered desktops for multiple users
Benefits of Leaseweb

Flexibility
Spin instances up or down instantly to match your workloads.

AWS Compatible APIs
Automate with REST, CLI, or Terraform, fully EC2-compatible.

Regional Availability
Live in NL & DE for EU compliance, US/Canada coming soon.

Service Monitoring
Built-in checks, status updates, and alerts at no extra cost.

Advanced User Interface
Manage networks, storage, SSH, firewalls, and backups with ease.

Transparent & Flexible Billing
Clear upfront pricing with hourly or monthly options.

Usage Visibility & Control
Full usage insights and resource management via portal.

EU Data Sovereignty
Amsterdam-based, 100% GDPR-compliant.
Why Businesses Choose Leaseweb
Start Running your AI Workloads on our Enterprise-Grade GPU Cloud
Spin up 1–4 NVIDIA L4 GPUs on-demand with predictable performance, full EU data sovereignty, and up to 50% lower costs than hyperscalers.
Frequently Asked Questions
How can I get started?
To get started, create a Leaseweb account and complete our customer verification process, which involves a one-time €1.00 verification payment.
Once your account is ready, you will have instant access to our Customer Portal, where you can start launching instances.
Are CPU and GPU resources dedicated?
GPUs on G6 instances are fully dedicated to your workloads, while vCPUs are shared. This setup gives you cost-efficient compute while ensuring predictable GPU performance.
What kind of uptime can I expect with G6 instances?
All G6 Public Cloud instances are backed by a 99.99% uptime SLA, ensuring your workloads stay running reliably so you can focus on your work, not infrastructure.
Can I train LLMs on G6 instances?
G6 is best suited for ML model training and LLM inference. Large LLM training is not recommended due to GPU memory limits.
What model sizes can I run?
G6 instances support ML models like ResNet-25M and RNNT-120M, as well as smaller LLMs such as GPT-6B and Mistral-7B (generally up to a few billion parameters).
Can I run multiple models in parallel on a multi-GPU instance?
Yes, on multi-GPU G6 instances, each GPU runs independently, so you can deploy multiple models or tasks across GPUs simultaneously without them interfering with each other.
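As an illustrative sketch of this pattern (script names and model assignments below are hypothetical), each serving process can be pinned to its own GPU via the standard `CUDA_VISIBLE_DEVICES` environment variable, so the processes never compete for the same device:

```python
import os

def pinned_env(gpu_index: int) -> dict:
    """Environment for a process that should see only one GPU."""
    env = dict(os.environ)
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_index)
    return env

# Hypothetical model servers, one per GPU on a 2-GPU G6 instance.
# Launch each with e.g. subprocess.Popen(["python", script], env=env).
jobs = [
    ("serve_mistral7b.py", pinned_env(0)),
    ("serve_gpt6b.py", pinned_env(1)),
]
```

Each process then sees a single device as GPU 0, so no per-framework configuration is needed to keep the workloads isolated.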
How do I manage instances programmatically?
Instances are fully automatable via the REST API, CLI, or Terraform; the API is AWS EC2-compatible.
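As a rough sketch of what EC2 compatibility means in practice (the endpoint URL, image ID, and instance-type name below are placeholders, not real Leaseweb values), a launch request follows the standard EC2 Query API shape. SigV4 request signing is omitted here; an EC2-compatible SDK such as boto3, or the Terraform AWS provider, handles that for you:

```python
from urllib.parse import urlencode

# Placeholder endpoint; use the real one from the Leaseweb Customer Portal.
ENDPOINT = "https://ec2.example-endpoint.invalid/"

# A standard EC2 Query API call to launch one instance.
params = {
    "Action": "RunInstances",
    "Version": "2016-11-15",
    "ImageId": "ami-00000000",    # placeholder image ID
    "InstanceType": "g6.xlarge",  # hypothetical G6 type name
    "MinCount": "1",
    "MaxCount": "1",
}
request_url = ENDPOINT + "?" + urlencode(params)
```

Because the request shape matches EC2, existing tooling built for AWS can usually be pointed at a compatible endpoint with a one-line configuration change rather than a rewrite.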
Can I monitor GPU usage and performance?
Real-time metrics, customizable alerts, and usage dashboards are available in the Customer Portal.
What backup and recovery options are available?
Snapshots, GRML, and Acronis integration for robust data protection.
Can I use G6 instances for graphics workloads or virtual desktops?
Yes, G6 supports GPU-accelerated rendering, game streaming, image processing, and VDI for multiple users.
Can I integrate G6 instances with my existing cloud workflows?
API compatibility with AWS EC2 and Terraform allows easy integration with your current infrastructure.