AI Hosting Overview
AI Hosting is the PerLod documentation section for users looking for infrastructure suited to AI-related workloads.
This section is designed as a starting point to help you choose the right infrastructure path based on your workload type, performance needs, and deployment goals.
What AI Hosting means in PerLod
AI Hosting is not a single standalone service with one fixed deployment model.
Instead, it acts as a gateway that helps you choose the right infrastructure for AI-related use cases, depending on your compute requirements, budget, and operational goals.
In practice, AI-related workloads may be deployed on different service types, such as:
- GPU VPS
- GPU Dedicated Server
- Dedicated Server
- VPS for lightweight supporting services
The right option depends on the workload itself.
Common use cases
AI Hosting may be relevant for workloads such as:
- model training
- inference-related deployments
- data processing workflows
- research and experimentation
- computer vision workloads
- NLP-related environments
- GPU-enabled applications
The exact service choice depends on the required compute power, storage, memory, software stack, and expected workload scale.
How to choose the right service
As a general rule:
- use GPU VPS for lighter GPU-based workloads, testing environments, or smaller AI tasks
- use GPU Dedicated Server when your workload needs dedicated GPU acceleration and stronger, fully isolated resources
- use Dedicated Server when you need CPU-heavy supporting infrastructure, high storage capacity, or non-GPU backend components
- use VPS for smaller supporting services, lightweight applications, control services, or secondary components
If your workload depends directly on GPU performance, GPU-based infrastructure is usually the right place to start.
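The rules above can be sketched as a simple decision helper. This is an illustrative sketch only; the workload attributes and the function name are assumptions for the example, not a PerLod API.

```python
def recommend_service(needs_gpu: bool, heavy_gpu: bool, cpu_heavy: bool) -> str:
    """Map rough workload traits to a service category.

    Illustrative only: the parameters are assumed workload traits,
    not part of any PerLod interface.
    """
    if needs_gpu:
        # Heavy GPU workloads want dedicated acceleration and isolation;
        # lighter GPU tasks and testing fit a GPU VPS.
        return "GPU Dedicated Server" if heavy_gpu else "GPU VPS"
    if cpu_heavy:
        # CPU-heavy supporting infrastructure or high storage capacity.
        return "Dedicated Server"
    # Lightweight supporting services and secondary components.
    return "VPS"

print(recommend_service(needs_gpu=True, heavy_gpu=False, cpu_heavy=False))  # GPU VPS
```

In practice the decision is rarely this mechanical, but the ordering matters: check GPU dependence first, since it rules out non-GPU services outright.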
What is covered here
This section is intentionally lightweight.
Its purpose is to guide users toward the right infrastructure category rather than duplicate the full documentation of other services.
For ordering, management, billing, and troubleshooting details, use the related service documentation directly.