99.99% Uptime SLA
50% Cost Savings
EU Data Sovereignty
Human Support
G6 Instances – A great starting point for AI/ML

Our G6 instances can be launched with 1–4 NVIDIA L4 GPUs and are a great choice for teams developing their first AI/ML workloads.

 
| Instance | GPUs | vCPU | Memory (GiB) | Storage (GB) | Baseline bandwidth | Burst bandwidth | Private network | Price per hour* | Price per month |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LSW.G6.XLARGE | 1 | 4 | 16 | 250 | 5 Gbps | 15 Gbps | 2.5 Gbps | €0,6147 | €403,88 |
| LSW.G6.2XLARGE | 1 | 8 | 32 | 450 | 5 Gbps | 15 Gbps | 2.5 Gbps | €0,6988 | €459,10 |
| LSW.G6.4XLARGE | 1 | 16 | 64 | 600 | 5 Gbps | 15 Gbps | 2.5 Gbps | €0,8241 | €541,45 |
| LSW.G6.8XLARGE | 1 | 32 | 128 | 900 | 5 Gbps | 15 Gbps | 2.5 Gbps | €0,9653 | €634,22 |
| LSW.G6.16XLARGE | 1 | 64 | 256 | 1800 | 5 Gbps | 15 Gbps | 2.5 Gbps | €1,6281 | €1069,65 |
| LSW.GR6.4XLARGE | 1 | 16 | 128 | 600 | 5 Gbps | 15 Gbps | 2.5 Gbps | €0,7369 | €485,92 |
| LSW.GR6.8XLARGE | 1 | 32 | 256 | 900 | 5 Gbps | 15 Gbps | 2.5 Gbps | €1,1766 | €773,05 |
| LSW.G6.18XLARGE | 2 | 72 | 288 | 1800 | 5 Gbps | 15 Gbps | 2.5 Gbps | €1,8243 | €1210,64 |
| LSW.G6.12XLARGE | 4 | 48 | 192 | 4200 | 5 Gbps | 15 Gbps | 2.5 Gbps | €2,4699 | €1622,00 |
| LSW.G6.24XLARGE | 4 | 96 | 384 | 4200 | 5 Gbps | 15 Gbps | 2.5 Gbps | €3,2081 | €2107,75 |

* Hourly pricing: storage is charged per gigabyte-hour.
* 1 TB of pooled internet traffic per month, shared across your instances, with free incoming traffic to help you avoid surprise egress costs.
* Each GPU is independent, ideal for running multiple models in parallel.
* Each GPU supports flexible precision (FP16, INT8, INT4) to optimize speed and cost-efficiency for your workloads.
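As a concept illustration (not vendor-specific code), the speed and cost benefit of reduced precision comes from representing values in fewer bits. A minimal symmetric INT8 quantization sketch in plain Python:

```python
def quantize_int8(values):
    """Symmetric INT8 quantization: map floats onto the range [-127, 127]."""
    scale = max(abs(v) for v in values) / 127
    if scale == 0:
        scale = 1.0  # all-zero input: any scale works
    quantized = [round(v / scale) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from INT8 codes."""
    return [q * scale for q in quantized]

weights = [0.12, -0.5, 0.99, -1.0]
codes, scale = quantize_int8(weights)
approx = dequantize(codes, scale)
# Each value now fits in 1 byte instead of 2 (FP16) or 4 (FP32),
# at the cost of a small rounding error on dequantization.
```

Lower precision shrinks memory footprint and bandwidth per value, which is why INT8/INT4 inference can run faster and cheaper than FP32 on the same GPU.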


Get more information
 

How does bandwidth work for G6 instances?

Each account gets 1 TB of pooled internet traffic per month, shared across all Public Cloud instances. Incoming traffic is always free, helping you avoid unexpected costs. If you exceed the monthly 1 TB allowance, outgoing traffic is charged as follows:

| Traffic | Quantity (TB) | Price per TB | Price per GB |
| --- | --- | --- | --- |
| Outgoing | < 1 | Free tier | Free tier |
| Outgoing | > 1 | €2,20 | €0,0022 |
| Outgoing | 10 | €1,90 | €0,0019 |
| Outgoing | 15 | €1,50 | €0,0015 |
| Outgoing | 30 | €1,36 | €0,0014 |
| Outgoing | 50 | €1,20 | €0,0012 |
| Outgoing | 100 | €1,00 | €0,0010 |

 

Designed for Teams Like Yours

AI & ML engineers

Train and run ML models such as ResNet-25M, RNNT-120M, GPT-6B, and Mistral-7B

Developers & Startups

Build, test, and deploy AI prototypes or inference workloads

Graphics & 3D Teams

Rendering, game streaming, real-time CGI

Remote Teams / Virtual Workstations

GPU-powered desktops for multiple users

Benefits of Leaseweb

Flexibility

Spin instances up or down instantly to match your workloads.

AWS Compatible APIs

Automate with REST APIs, the CLI, or Terraform; fully EC2-compatible.
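Because the API is EC2-compatible, standard AWS tooling can target it by overriding the endpoint. A sketch — the endpoint URL and image ID below are placeholders, not real Leaseweb values; consult the Developer API documentation for the actual ones:

```python
# Placeholder endpoint -- the real URL is in the Developer API docs.
ENDPOINT_URL = "https://example-ec2-endpoint.invalid"

def run_instances_params(instance_type, image_id, count=1):
    """Build a RunInstances parameter dict for an EC2-compatible API."""
    return {
        "ImageId": image_id,
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
    }

params = run_instances_params("LSW.G6.XLARGE", "ami-placeholder")

# With boto3 installed, the same parameters would launch an instance:
# import boto3
# ec2 = boto3.client("ec2", endpoint_url=ENDPOINT_URL)
# ec2.run_instances(**params)
```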

Regional Availability

Live in NL and DE for EU compliance; US and Canada coming soon.

Service Monitoring

Built-in checks, status updates, and alerts at no extra cost.

Advanced User Interface

Manage networks, storage, SSH, firewalls, and backups with ease.

Transparent & Flexible Billing

Clear upfront pricing with hourly or monthly options.

Usage Visibility & Control

Full usage insights and resource management via portal.

EU Data Sovereignty

Amsterdam-based, 100% GDPR-compliant.

Start Running your AI Workloads on our Enterprise-Grade GPU Cloud

Spin up 1–4 NVIDIA L4 GPUs on-demand with predictable performance, full EU data sovereignty, and up to 50% lower costs than hyperscalers.

Sign Up

Frequently Asked Questions

To get started, first create a Leaseweb account and complete our customer verification process, which includes a one-time €1.00 verification payment.

Once your account is ready, you have instant access to our Customer Portal, where you can start launching instances.

GPUs on G6 instances are fully dedicated to your workloads, while vCPUs are shared. This setup gives you cost-efficient compute while ensuring predictable GPU performance.

All G6 Public Cloud instances are backed by a 99.99% uptime SLA, ensuring your workloads stay running reliably so you can focus on your work, not infrastructure.

G6 is best suited for ML model training and LLM inference. Training large LLMs is not recommended due to GPU memory limits.

G6 instances support ML models like ResNet-25M and RNNT-120M, and smaller LLMs up to a few billion parameters, such as GPT-6B and Mistral-7B.

Yes: on multi-GPU G6 instances, each GPU runs independently, so you can deploy multiple models or tasks across GPUs simultaneously without them interfering with each other.

G6 instances are fully automatable via REST API, CLI, or Terraform, and are AWS EC2-compatible.

Real-time metrics, customizable alerts, and usage dashboards are available in the Customer Portal.

Snapshots, GRML, and Acronis integration provide robust data protection.

Yes, G6 supports GPU-accelerated rendering, game streaming, image processing, and VDI for multiple users.

API compatibility with AWS EC2 and Terraform allows easy integration with your current infrastructure.