dstack provisions ML infrastructure on demand. It supports the major cloud providers, manages resources automatically, and is optimized for training and inference workloads.
Infrastructure Patterns
Define runs for training jobs, services for model serving, and dev environments for interactive work; fleets manage pools of GPU instances.
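For a concrete picture, a minimal training run might be described in a configuration file like the following (a hedged sketch; the file name, commands, and resource size are illustrative, not taken from this document):

```yaml
# .dstack.yml -- illustrative training task; names and values are placeholders
type: task
name: train-example

python: "3.11"

# Commands executed on the provisioned instance
commands:
  - pip install -r requirements.txt
  - python train.py

# Request one GPU with at least 24 GB of memory
resources:
  gpu: 24GB
```

Such a configuration is typically submitted with the dstack CLI (for example, `dstack apply -f .dstack.yml` in recent versions), which provisions an instance, runs the commands, and releases the resources when the job finishes.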
- Define workflows in YAML configuration files
- Choose from multiple cloud backends
- Use spot instances for cost savings
- Configure auto-scaling for services (spot usage and scaling are shown in the sketch after this list)
- Manage GPU resources efficiently through fleets
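As an illustration of spot usage and service auto-scaling, a model-serving configuration might look roughly like this (a sketch under assumed values; the image, port, and scaling targets are placeholders):

```yaml
# service.dstack.yml -- illustrative model-serving service; values are placeholders
type: service
name: llm-service

# Container image and the command that starts the server (example values)
image: ghcr.io/example/llm-server:latest
commands:
  - python -m server --port 8000
port: 8000

# Prefer cheaper spot capacity, falling back to on-demand if none is available
spot_policy: auto

# Scale between 1 and 4 replicas based on requests per second
replicas: 1..4
scaling:
  metric: rps
  target: 10

resources:
  gpu: 40GB
```

The spot policy and the replica range are independent knobs: spot capacity lowers the cost per instance, while the scaling block controls how many instances run at a time.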
Multi-Cloud
dstack abstracts over differences between clouds: the same configuration runs on any configured backend, workloads can be placed wherever pricing is best, and resources are cleaned up automatically when they are no longer needed.
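To make the multi-cloud behavior concrete, a fleet can be restricted to a set of backends and left to pick the best-priced matching offer; the sketch below assumes example backend names and sizes:

```yaml
# fleet.dstack.yml -- illustrative GPU fleet spanning several backends
type: fleet
name: gpu-fleet

# Restrict provisioning to these backends; offers are compared on price
backends: [aws, gcp, azure]

# Two interchangeable nodes, each with at least one 24 GB GPU
nodes: 2
resources:
  gpu: 24GB
```

Runs that land on this fleet see the same interface regardless of which provider actually supplied the instances.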