Model service endpoints are the recommended way to run Open-AutoGLM without local GPU hardware. This guide covers how to connect via --base-url and how to keep the endpoint secure.
Endpoints reduce setup time and allow you to focus on evaluating the agent.
Use the official README for exact flags. The example below shows the shape of a typical command:
```bash
# TODO: replace with official command and flags
python -m openautoglm.run --base-url https://your-endpoint.example.com
```
If authentication is required, follow the endpoint provider's instructions.
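As a hedged illustration of the authentication step, the sketch below reads a token from an environment variable and sends one request to confirm the endpoint is reachable before launching the agent. The /v1/models route and the OPENAUTOGLM_API_KEY variable are assumptions, not part of the official interface; adjust both to match your provider.

```python
import os

import requests  # third-party HTTP client; pip install requests

# Hypothetical names: adjust the URL and env var to your provider's docs.
BASE_URL = "https://your-endpoint.example.com"
API_KEY = os.environ["OPENAUTOGLM_API_KEY"]  # never hardcode tokens in source


def check_endpoint(base_url: str, api_key: str) -> None:
    """Fail fast if the endpoint is unreachable or the token is rejected."""
    resp = requests.get(
        f"{base_url}/v1/models",  # assumed OpenAI-compatible route
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    print("Endpoint reachable; advertised models:", resp.json())


if __name__ == "__main__":
    check_endpoint(BASE_URL, API_KEY)
```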
Prefer endpoints that provide:
- a stable, versioned API and explicit model identifiers
- authentication and transport encryption (HTTPS)
- basic usage and uptime metrics
Document the endpoint version to keep evaluations reproducible.
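One lightweight way to do this, sketched below, is to write an endpoint manifest next to each run's outputs. The file name and fields are illustrative, not an official Open-AutoGLM format.

```python
import datetime
import json

# Illustrative manifest; the schema is a suggestion, not an official format.
manifest = {
    "base_url": "https://your-endpoint.example.com",
    "model_version": "model-2025-01-15",  # as reported by your provider
    "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
}

with open("run_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```

Keeping the manifest alongside the results makes it easy to see later which endpoint version produced which numbers.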
Never expose endpoints publicly without protection. Minimum safeguards:
- require an authentication token on every request
- serve traffic over HTTPS only
- restrict access with an IP allowlist or a private network where possible
- rate-limit requests to contain abuse and runaway costs
Common patterns include:
- a bearer token sent in the Authorization header
- per-user or per-run API keys so access can be revoked individually
- a reverse proxy that terminates TLS and enforces authentication
Never embed tokens in public logs or client‑side code.
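Tokens leak most often through request logs. Below is a minimal sketch of one defensive pattern, a logging filter that masks bearer tokens before records reach disk; the regex assumes a simple `Bearer <token>` shape and should be extended for your provider's key format.

```python
import logging
import re

TOKEN_RE = re.compile(r"Bearer\s+\S+")  # assumed token shape; tighten as needed


class RedactTokens(logging.Filter):
    """Mask bearer tokens so they never reach the log file."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = TOKEN_RE.sub("Bearer [REDACTED]", str(record.msg))
        return True


logger = logging.getLogger("access")
logger.addHandler(logging.FileHandler("access.log"))
logger.addFilter(RedactTokens())
logger.warning("Authorization: Bearer sk-secret-123")  # written as [REDACTED]
```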
Rotate credentials regularly:
- set expiry dates on tokens instead of issuing long-lived keys
- revoke keys when a collaborator leaves the project
- replace any key immediately if it appears in a log, commit, or screenshot
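As a rough aid, assuming you record when each key was issued (the issued_at value below is hypothetical), a startup check can flag stale credentials before a run begins:

```python
import datetime

MAX_AGE_DAYS = 90  # assumed rotation policy; use whatever your team agreed on

# Hypothetical issue date; in practice, read it from wherever you track keys.
issued_at = datetime.datetime(2025, 1, 15, tzinfo=datetime.timezone.utc)
age_days = (datetime.datetime.now(datetime.timezone.utc) - issued_at).days

if age_days > MAX_AGE_DAYS:
    raise SystemExit(f"API key is {age_days} days old; rotate it before running.")
```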
Track:
- request latency, including tail latency
- error rates and HTTP status codes
- token usage per task
- the endpoint and model version used for each run
These metrics help you debug and compare runs.
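A minimal sketch of per-request tracking, appending one JSON line per call so runs can be compared afterwards; the wrapper and field names are illustrative, not part of Open-AutoGLM:

```python
import json
import time


def record_request(metrics_path: str, fn, *args, **kwargs):
    """Call fn, append latency and outcome to a JSONL file, re-raising errors."""
    start = time.monotonic()
    entry = {"ok": True, "error": None}
    try:
        return fn(*args, **kwargs)
    except Exception as exc:
        entry["ok"], entry["error"] = False, repr(exc)
        raise
    finally:
        entry["latency_s"] = round(time.monotonic() - start, 3)
        with open(metrics_path, "a") as f:
            f.write(json.dumps(entry) + "\n")
```

Wrapping each endpoint call this way collects latency and error-rate data without touching the agent's own code.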
Keep access logs separate from model output logs. This makes it easier to detect suspicious activity without exposing sensitive data.
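One simple way to keep the streams apart, assuming you control the logging configuration, is two independent loggers writing to separate files:

```python
import logging


def make_logger(name: str, path: str) -> logging.Logger:
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.FileHandler(path))
    logger.propagate = False  # keep the streams from mixing via the root logger
    return logger


access_log = make_logger("access", "access.log")   # who called, when, and status
output_log = make_logger("output", "outputs.log")  # model responses only

access_log.info("GET /v1/models 200 12ms")
output_log.info("task=42 response stored")
```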
If the endpoint is down:
- check the provider's status page before assuming a client-side bug
- retry transient failures with exponential backoff (a sketch follows this list)
- pause the evaluation run rather than recording outages as model failures
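A minimal retry sketch with exponential backoff and jitter; the attempt count and delays are assumptions to tune against your provider's rate limits:

```python
import random
import time


def with_retries(fn, attempts: int = 5, base_delay: float = 1.0):
    """Retry fn on any exception, doubling the delay each time with jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; surface the original error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```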
Before a full evaluation run:
- send a few smoke-test requests and inspect the responses (see the sketch below)
- confirm your credentials are valid and not near expiry
- record the endpoint and model version alongside the run outputs
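A hedged smoke-test sketch, again assuming an OpenAI-compatible chat route; the path, payload, model name, and env var are illustrative and should be replaced with your provider's actual values:

```python
import os

import requests

BASE_URL = "https://your-endpoint.example.com"

resp = requests.post(
    f"{BASE_URL}/v1/chat/completions",  # assumed route; match your provider
    headers={"Authorization": f"Bearer {os.environ['OPENAUTOGLM_API_KEY']}"},
    json={
        "model": "your-model",  # placeholder model identifier
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
resp.raise_for_status()
assert resp.json().get("choices"), "Empty response; do not start the full run."
print("Smoke test passed.")
```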
Endpoint access is a privilege. Treat it as sensitive and avoid sharing URLs or tokens in public logs.