
feat: unified credential detection — interactive menu + CLAUDE_CODE_OAUTH_TOKEN fallback#283

Open
bussyjd wants to merge 10 commits into main from feat/credential-detection-combined

Conversation


@bussyjd bussyjd commented Mar 19, 2026

Combines #273 + #274 into a single PR. Supersedes both.

Summary

Two credential detection improvements for the model setup flow:

1. Interactive model setup with credential badges (#273)

obol model setup now detects existing API keys in the environment and shows "detected" badges:

? Select a provider:
  > Anthropic (✓ ANTHROPIC_API_KEY detected)
    OpenAI
    Ollama (local)

2. CLAUDE_CODE_OAUTH_TOKEN as Anthropic fallback (#274)

Users with Claude Code subscriptions have CLAUDE_CODE_OAUTH_TOKEN set but not ANTHROPIC_API_KEY. The stack now checks both:

Resolution order: ANTHROPIC_API_KEY → CLAUDE_CODE_OAUTH_TOKEN

obolup.sh also detects this and shows ✓ Claude Code subscription detected instead of prompting for a key.

Also included (shared base commits)

  • Auto-configure cloud providers during obol stack up (if env vars present)
  • Batch LiteLLM provider config into single restart
  • Security: restrict frontend HTTPRoute to local-only hostname binding

Test plan

  • go test ./internal/model/ — TestProviderFromModelName, TestLoadDotEnv pass
  • go build ./... passes
  • Manual: ANTHROPIC_API_KEY=sk-test obol model status shows Anthropic detected
  • Manual: CLAUDE_CODE_OAUTH_TOKEN=test obol model status shows Anthropic (via fallback)
  • Manual: obol model setup renders interactive menu with credential badges

Closes #273, closes #274

OisinKyne and others added 9 commits March 17, 2026 14:43
The frontend catch-all `/` HTTPRoute had no hostname restriction,
meaning the entire UI (dashboard, sell modal, settings) was publicly
accessible through the Cloudflare tunnel. Add `hostnames: ["obol.stack"]`
to match the eRPC route pattern already in this branch.

Also add CLAUDE.md guardrails documenting the local-only vs public route
split and explicit NEVER rules to prevent future regressions.

Four fixes for the sell-inference cluster routing introduced in #267:

1. Security: bind gateway to 127.0.0.1 when NoPaymentGate=true so
   only cluster traffic (via K8s Service+Endpoints bridge) can reach
   the unpaid listener — no host/LAN exposure.

2. Critical: use parsed --listen port in Service, Endpoints, and
   ServiceOffer spec instead of hardcoded 8402. Non-default ports
   now work correctly.

3. k3s support: resolveHostIP() now checks DetectExistingBackend()
   for k3s and returns 127.0.0.1, matching the existing
   ollamaHostIPForBackend() strategy in internal/stack.

4. Migration: keep "obol-agent" as default instance ID to preserve
   existing openclaw-obol-agent namespaces on upgrade. Avoids
   orphaned deployments when upgrading from pre-#267 installs.

Also bumps frontend to v0.1.13-rc.1.

When ~/.openclaw/openclaw.json specifies a cloud model as the agent's
primary (e.g. anthropic/claude-sonnet-4-6), autoConfigureLLM() now
detects the provider and API key from the environment (or .env in dev
mode) and configures LiteLLM before OpenClaw setup runs. This makes
agent chat work out of the box without a separate `obol model setup`.

Changes:
- internal/stack: add autoConfigureCloudProviders() with env + .env
  key resolution (dev-mode only for .env)
- internal/model: export ProviderFromModelName(), ProviderEnvVar();
  add HasProviderConfigured(), LoadDotEnv()
- cmd/obol/model: update defaults — claude-sonnet-4-6, gpt-4.1
- internal/model: update WellKnownModels with current flagship models
  (claude-opus-4-6, gpt-5.4, gpt-4.1, o4-mini)
- obolup.sh: add check_agent_model_api_key() to warn users before
  cluster start if a required API key is missing

Split ConfigureLiteLLM into PatchLiteLLMProvider (config-only) and
RestartLiteLLM (restart+wait). autoConfigureLLM now patches Ollama
and cloud providers first, then does one restart — halving startup
time when both are configured.

Instead of printing a warning that users miss, prompt for the API key
during setup when a cloud model is detected in ~/.openclaw config.
The key is exported so the subsequent obol bootstrap → stack up →
autoConfigureLLM picks it up automatically. Falls back to a warning
in non-interactive mode.

Inspired by hermes-agent's interactive setup wizard pattern.

obolup.sh check_agent_model_api_key() now checks CLAUDE_CODE_OAUTH_TOKEN
when ANTHROPIC_API_KEY is missing, so developers with Claude Code
subscriptions skip the interactive prompt. Also documents the LLM
auto-configuration flow and OpenClaw version management in CLAUDE.md.

Closes #272 (Unit 3)

When `obol model setup` runs without --provider, the interactive menu
now checks the environment for existing API keys and Ollama availability,
showing detection badges next to each provider. If the user picks a
provider with a detected credential, they are offered the option to
reuse it instead of being prompted for a new key.

Detected sources:
- Anthropic: ANTHROPIC_API_KEY, CLAUDE_CODE_OAUTH_TOKEN
- OpenAI: OPENAI_API_KEY
- Ollama: reachable with N model(s) available

The flag-based path (--provider, --api-key) is unchanged.

Closes #272 (Unit 2)

- frontend tag: take main's v0.1.14
- model.go: keep ResolveAPIKey from main (checks alt env vars)
- stack.go: take main's version (uses ResolveAPIKey)
- openclaw.go: take main's version (no image tag override)
- monetize test: take main's obol-agent namespace
- obolup.sh: keep our CLAUDE_CODE_OAUTH_TOKEN fallback