99.99% Uptime SLA
50% Cost Savings
EU Data Sovereignty
Human Support

Designed for Teams Like Yours

Designed for teams that need enterprise-grade performance without enterprise-level costs. Perfect for AI & ML engineers, developers & startups, graphics teams, and remote/VDI workloads.

AI & ML Engineers
Train and run ML models like ResNet-25M, RNNT-120M, GPT-6B, and Mistral-7B
Developers & Startups
Build, test, and deploy AI prototypes or inference workloads
Graphics & 3D Teams
Rendering, game streaming, real-time CGI
Remote Teams / Virtual Workstations
GPU-powered desktops for multiple users
Specs at a Glance
1-4 NVIDIA L4 GPUs per instance
Up to 96 vCPUs, 384 GB memory
Up to 4.2 TB storage
Up to 15 Gbps burst bandwidth
Private network included

Benefits of Leaseweb

Flexibility
Spin instances up or down instantly to match your workloads.
AWS-Compatible APIs
Automate with REST, CLI, or Terraform; the API is fully EC2-compatible.
Transparent & Flexible Billing
Clear upfront pricing with hourly or monthly options.
EU Data Sovereignty
Amsterdam-based and 100% GDPR-compliant.

Start Running Your AI Workloads on Our Enterprise-Grade GPU Cloud

Spin up 1–4 NVIDIA L4 GPUs on-demand with predictable performance, full EU data sovereignty, and up to 50% lower costs than hyperscalers.

Sign Up

Frequently Asked Questions

To get started, first create a Leaseweb account, which involves completing our customer verification process and paying a one-time fee of €1.00.

Once your account is ready, you have instant access to our Customer Portal, where you can start launching instances.

GPUs on G6 instances are fully dedicated to your workloads, while vCPUs are shared. This setup gives you cost-efficient compute while ensuring predictable GPU performance.
All G6 Public Cloud instances are backed by a 99.99% uptime SLA, ensuring your workloads stay running reliably so you can focus on your work, not infrastructure.
G6 is best suited for ML model training and LLM inference. Large LLM training is not recommended due to GPU memory limits.
G6 instances support ML models like ResNet-25M and RNNT-120M, as well as smaller LLMs such as GPT-6B, Mistral-7B, and others up to a few billion parameters.
Yes, on multi-GPU G6 instances, each GPU runs independently, so you can deploy multiple models or tasks across GPUs simultaneously without them interfering with each other.
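Since each GPU runs independently, a common pattern is to pin one worker process per device via the CUDA_VISIBLE_DEVICES environment variable. A small sketch (the serving script in the usage comment is a placeholder):

```python
import os
import subprocess

def launch_per_gpu(num_gpus, cmd):
    """Start one copy of `cmd` per GPU, each seeing only its own device.

    Setting CUDA_VISIBLE_DEVICES=<n> makes CUDA frameworks in that process
    treat GPU n as the only (and default) device, so workers never collide.
    """
    procs = []
    for gpu in range(num_gpus):
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
        procs.append(subprocess.Popen(cmd, env=env))
    return procs

# Usage on a 4-GPU G6 instance (hypothetical serving script):
# launch_per_gpu(4, ["python", "serve_model.py"])
```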
G6 instances are fully automatable via REST API, CLI, or Terraform, and the API is AWS EC2-compatible.
Real-time metrics, customizable alerts, and usage dashboards are available in the Customer Portal.
Snapshots, GRML, and Acronis integration provide robust data protection.
Yes, G6 supports GPU-accelerated rendering, game streaming, image processing, and VDI for multiple users.
API compatibility with AWS EC2 and Terraform allows easy integration with your current infrastructure.