Docker can simplify reproducible deployments, but it also hides GPU and device access details that matter for Open-AutoGLM. This checklist focuses on what to validate before you ship a containerized setup.
Use Docker if you want reproducible builds, portable deployments, and isolation from host dependencies. Avoid Docker if you need direct device access without extra layers, or if you are still in the early debugging stages.
Before you build an image, use official base images when possible and avoid obscure or unmaintained ones. Document every dependency in the Dockerfile so the image can be rebuilt later.
TODO: add official Docker guidance if the Open-AutoGLM README includes it.
Large images slow down deployments. To reduce size, start from a slim base image, add a .dockerignore file, avoid leaving build tools in the final layer, and use multi-stage builds.
This keeps rebuilds fast and reduces storage costs.
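The points above can be sketched as a multi-stage Dockerfile. This is a sketch only: the base image, `requirements.txt`, and `serve.py` are placeholders for whatever your setup actually uses.

```dockerfile
# Build stage: install pinned dependencies into a separate prefix.
FROM python:3.11-slim AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Runtime stage: copy only the installed packages and app code,
# leaving pip caches and build tools out of the final image.
FROM python:3.11-slim
WORKDIR /app
COPY --from=build /install /usr/local
COPY . .
CMD ["python", "serve.py"]
```

Because the final stage starts from a fresh slim base, anything the build stage downloaded but did not copy forward never reaches the shipped image.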
If you run locally inside Docker (Option B), verify GPU access first: confirm that `nvidia-smi` runs inside the container (for example, `docker run --rm --gpus all <image> nvidia-smi`).

If your container is used as a model service endpoint, expose the serving port explicitly and point clients at it with `--base-url`.

If you run in a shared environment, bind the endpoint to internal interfaces only and require authentication. These policies reduce unintended exposure.
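Before pointing the agent at a containerized endpoint, it helps to verify the `--base-url` target is actually reachable. A minimal sketch, assuming an OpenAI-style API where the base URL ends in `/v1` and `GET {base_url}/models` returns 200 when the server is up (adjust the route if your server differs):

```python
import urllib.error
import urllib.request


def endpoint_reachable(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if the model endpoint answers its model-listing route."""
    url = base_url.rstrip("/") + "/models"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: treat as unreachable.
        return False
```

Run this from wherever the client will run, not just from the Docker host, so you also catch port-publishing and firewall mistakes.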
If the container must access a physical device, pass it through explicitly, for example a USB-attached phone via `--device /dev/bus/usb`, rather than running the container with `--privileged`.
When possible, separate the endpoint container from the client UI agent.
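One way to keep the endpoint and client separated is a small Compose file. This is a sketch: the service and image names, port, and `BASE_URL` variable are placeholders, not values from the Open-AutoGLM docs.

```yaml
services:
  model-endpoint:
    image: myorg/autoglm-endpoint:latest
    ports:
      - "127.0.0.1:8000:8000"   # bind to localhost only in shared environments
    volumes:
      - model-weights:/models    # keep large artifacts out of image layers
  agent-client:
    image: myorg/autoglm-client:latest
    environment:
      - BASE_URL=http://model-endpoint:8000/v1
    depends_on:
      - model-endpoint
volumes:
  model-weights:
```

Separate containers let you restart, scale, or roll back the model endpoint without touching the client, and vice versa.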
Do not bake secrets into the image. Pass them at runtime via environment variables or secret stores.
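A minimal sketch of reading a secret at runtime instead of baking it in; `MODEL_API_KEY` is a placeholder name, passed with something like `docker run -e MODEL_API_KEY=... <image>`:

```python
import os


def load_api_key(var: str = "MODEL_API_KEY") -> str:
    """Read a secret from the environment at container start.

    MODEL_API_KEY is a placeholder variable name; supply it at runtime
    (docker run -e, Compose env_file, or a secret store), never in the image.
    """
    key = os.environ.get(var, "").strip()
    if not key:
        raise RuntimeError(f"{var} is not set; pass it at runtime, not in the image")
    return key
```

Failing fast at startup when the variable is missing beats discovering a silent authentication failure mid-run.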
Use volumes for model weights, logs, and any other large or frequently changing data.
Avoid writing large artifacts inside container layers.
Add simple health checks: a liveness probe on the serving port and, ideally, a periodic request that exercises the model path end to end.
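A liveness probe can live in the Dockerfile itself. A sketch, assuming the server listens on port 8000 and exposes a `/health` route (both placeholders), and that `curl` is installed in the image:

```dockerfile
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -fsS http://localhost:8000/health || exit 1
```

With this in place, `docker ps` reports the container as healthy or unhealthy, and orchestrators can restart it automatically.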
At minimum, log request timestamps, model name and version, per-request latency, and response status.
These logs make it easier to compare agent performance across deployments.
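One JSON line per model call is easy to grep and easy to load into analysis tools. A minimal sketch; the field names here are suggestions, not a required schema:

```python
import json
import time


def request_log_line(model: str, latency_ms: float, status: int, request_id: str) -> str:
    """Format one JSON log line per model call (field names are suggestions)."""
    record = {
        "ts": round(time.time(), 3),   # wall-clock timestamp
        "model": model,                # model name/version behind the endpoint
        "latency_ms": round(latency_ms, 1),
        "status": status,              # HTTP status of the response
        "request_id": request_id,      # correlate client and server logs
    }
    return json.dumps(record, sort_keys=True)
```

Writing these lines to stdout lets `docker logs` and your log collector pick them up without extra plumbing.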
If you build images in CI, pin base image versions, tag each image with the commit SHA, and cache layers between builds to keep build times down.
Keep a previous container image tagged and ready. If a new build fails, roll back quickly to maintain uptime.
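A commit-tagged build step makes rollback a one-line change. A sketch, assuming GitHub Actions with registry credentials configured elsewhere; the image name is a placeholder:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image tagged with the commit SHA
        run: docker build -t myorg/autoglm-endpoint:${{ github.sha }} .
      - name: Push
        run: docker push myorg/autoglm-endpoint:${{ github.sha }}
```

Because every deployable image keeps its SHA tag, rolling back means redeploying the previous tag rather than rebuilding anything.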
Do not expose endpoints publicly without authentication. Model endpoints can be misused, and public endpoints are not recommended for early tests.