Open-AutoGLM ships multiple models. Use official sources only and avoid re-hosting.
General guidance (validate everything below against the official docs):
TODO: confirm exact model names and capabilities from the official README or paper.
There are two ways to run the models; use official distribution pages only:
Option A (recommended): use a deployed model service endpoint; no local GPU required.
Option B (local): requires an NVIDIA GPU (24 GB+ VRAM recommended), vLLM or SGLang, and roughly 20 GB of model files.
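Either option typically exposes an OpenAI-compatible HTTP API (both vLLM and SGLang serve one), so the same client code works against a hosted endpoint or a local server. A minimal sketch, assuming a placeholder base URL, API key, and model name — confirm the real identifiers in the official docs:

```python
import json
import urllib.request


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible /v1/chat/completions payload.

    The model name is a placeholder; check the official Open-AutoGLM
    docs for the actual identifier.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,
    }


def send_chat_request(base_url: str, api_key: str, payload: dict) -> dict:
    """POST the payload to an OpenAI-compatible endpoint (hosted service or local vLLM/SGLang)."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    payload = build_chat_request("autoglm-placeholder", "Open the settings app")
    # Uncomment with a live endpoint, e.g. a local vLLM server on port 8000:
    # result = send_chat_request("http://localhost:8000", "EMPTY", payload)
    print(payload["model"])
```

The request body above is the standard chat-completions shape; only the base URL differs between the hosted and local setups.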
Why use a model service endpoint?
- Lower hardware barrier (no local GPU required)
- Faster setup (skip vLLM/SGLang install + large downloads)
- Easier updates and maintenance
- Recommended for beginners (aligns with official guidance)
Waitlist
Get notified when guided Android regression testing workflows and safety checklists are ready.
We only use your email for the waitlist. You can opt out anytime.