DOCS v2026.05 · last updated 2 days ago

Glixy Documentation

Everything you need to provision, deploy, scale, and secure your AI infrastructure on Glixy.

Get started

Pick your track

🚀 Quick start (15 min)

Provision your first GPU node, deploy a Hello-LLM example, and make your first inference API call.

Start →

▦ GPU clusters

Multi-node setup, distributed training (DDP / FSDP), GPU sharing with Kueue.

Read →

✦ LLM Deployment

Fine-tuning, vLLM serving, RAG pipelines, OpenAI-compatible APIs.

Read →

☁ Cloud & K8s

Managed Kubernetes, Helm charts, ArgoCD GitOps, auto-scaling.

Read →

⛨ Security

SSO, mTLS, customer-managed keys, audit logs, compliance evidence packs.

Read →

⟳ DevOps & CI/CD

GitHub Actions, Terraform, monitoring, on-call runbooks.

Read →

Reference

API & CLI reference

REST API · v1

Provision clusters, deploy models, manage networks, and monitor workloads, all via JSON over HTTPS.

Open API docs →
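As a minimal sketch of what a JSON-over-HTTPS call could look like, the snippet below builds a request with only the Python standard library. The endpoint path, model name, and token format are illustrative assumptions, not the documented API — see the API docs above for the real schema.

```python
import json
import urllib.request

API_BASE = "https://api.glixy.com/v1"  # assumed base URL for illustration


def build_completion_request(token: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a JSON inference request with a bearer token."""
    body = json.dumps({"model": model, "prompt": prompt}).encode()
    return urllib.request.Request(
        f"{API_BASE}/completions",  # hypothetical endpoint path
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_completion_request("glx_example_token", "hello-llm", "Say hi")
print(req.full_url)  # https://api.glixy.com/v1/completions
```

Sending it is then a single `urllib.request.urlopen(req)` call once you have a real token.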

glixy CLI

npm install -g @glixy/cli — provision, deploy, ssh, logs, exec.

CLI reference →

Terraform provider

Infrastructure-as-code for clusters, networks, IAM, monitoring rules.

Provider docs →

Python SDK

pip install glixy — end-to-end LLM training/serving/eval pipelines.

SDK docs →

Webhook events

Subscribe to cluster, deployment, and billing events for real-time integration.

Webhooks →
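If Glixy signs webhook deliveries the way most providers do — an HMAC-SHA256 digest of the raw request body carried in a signature header — verification is a few lines of stdlib Python. The secret format and event payload below are assumptions for illustration only.

```python
import hashlib
import hmac
import json


def verify_webhook(secret: str, payload: bytes, signature: str) -> bool:
    """Check a hex HMAC-SHA256 signature against the raw request body."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, signature)


secret = "whsec_example"  # assumed secret format
body = json.dumps({"type": "cluster.ready", "id": "evt_123"}).encode()
sig = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

print(verify_webhook(secret, body, sig))   # True
print(verify_webhook(secret, b"{}", sig))  # False
```

Always verify against the raw bytes of the delivery, before any JSON parsing, since re-serializing can change whitespace and break the digest.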

Status + Health API

Programmatic access to uptime, region health, and incident timelines.

Status →
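A consumer of the health endpoint typically just filters the region list. The response shape below is a guess for illustration; the real schema is in the Status API reference above.

```python
import json

# Hypothetical response body from a regions-health endpoint.
sample = json.loads("""
{"regions": [
  {"name": "ap-south-1", "healthy": true},
  {"name": "us-east-1", "healthy": false}
]}
""")


def unhealthy_regions(status: dict) -> list[str]:
    """Return the names of regions currently reporting unhealthy."""
    return [r["name"] for r in status["regions"] if not r["healthy"]]


print(unhealthy_regions(sample))  # ['us-east-1']
```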

Need a hand?

Email info@glixy.com or call +91 97346 32596 — Growth & Enterprise plans get a 4-hour response time.