This quick start walks you through ADB setup, model download, and a first demo run. It emphasizes safe evaluation and repeatable steps without assuming a local GPU.
Why use a model service endpoint?
- Lower hardware barrier (no local GPU required)
- Faster setup (skip vLLM/SGLang install + large downloads)
- Easier updates and maintenance
- Recommended for beginners (aligns with official guidance)
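As a concrete sketch of what "use an endpoint" means in practice: many hosted model services expose an OpenAI-compatible chat completions API. The path, payload shape, and model name below are assumptions based on that common convention, not confirmed Open-AutoGLM details; check your provider's documentation.

```python
import json

def build_chat_request(base_url: str, model: str, prompt: str) -> tuple[str, str]:
    """Build the URL and JSON body for an OpenAI-compatible chat request.

    The /v1/chat/completions path and message format are assumptions
    based on the common OpenAI-compatible convention.
    """
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    body = json.dumps({
        "model": model,  # placeholder model name; use your provider's
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body

url, body = build_chat_request("https://your-endpoint.example.com", "glm-4", "hello")
print(url)
```

Because the heavy lifting happens server-side, the client only needs network access and an API key, which is why this path avoids the local GPU requirement entirely.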
Check that your device is visible over ADB:

```shell
adb devices
```

If the device is listed as `unauthorized`, unlock the device and accept the USB debugging prompt, then rerun the command.
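To check the device state programmatically before a run, you can parse the output of `adb devices`. The sketch below is fed a captured output string rather than invoking adb itself, so it runs without a connected device; the sample serials are made up.

```python
def parse_adb_devices(output: str) -> dict[str, str]:
    """Map each serial in `adb devices` output to its state
    (e.g. 'device', 'unauthorized', 'offline')."""
    devices = {}
    for line in output.strip().splitlines()[1:]:  # skip the header line
        parts = line.split()
        if len(parts) >= 2:
            devices[parts[0]] = parts[1]
    return devices

# Sample captured output; serials are hypothetical.
sample = """List of devices attached
emulator-5554\tdevice
R58M123ABC\tunauthorized
"""
print(parse_adb_devices(sample))
```

A script can refuse to start the demo unless at least one serial maps to `device`, which keeps the run repeatable instead of failing mid-way on an unauthorized phone.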
Use the official sources only.
Verify the download size and checksum if the official release provides them.
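If the release publishes a SHA-256 checksum, verification is a one-liner with the standard library. A minimal sketch; the filename and expected value are placeholders:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so large model files
    never need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Usage: compare against the checksum on the official release page.
# expected = "..."  # published value
# assert sha256_of("model.safetensors") == expected
```

A mismatch means the download is corrupt or tampered with; delete it and fetch it again from the official source.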
Follow the official README for the exact commands. The sequence below is a template; replace it with the README steps once confirmed.
```shell
# TODO: replace with official commands from the Open-AutoGLM README
git clone https://github.com/liminok/Open-Auto-GLM-Codex.git
cd Open-Auto-GLM-Codex
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# Example usage: point to a model service endpoint
python -m openautoglm.run --base-url https://your-endpoint.example.com
```

If ADB misbehaves during a run, restart the server with `adb kill-server` and `adb start-server`. With multiple devices attached, use `adb -s <serial>` to target the correct device.

Option A (recommended): use a deployed model service endpoint. Pass `--base-url` and avoid local GPU setup.
Option B (local): requires an NVIDIA GPU (24 GB+ VRAM recommended), vLLM or SGLang, and roughly 20 GB of model files.