Over the past few months, Claude Code has become the reference among developers who use coding assistants in the terminal. It is direct, navigates real repositories, runs commands, reads files, understands project context, and solves problems most conventional chats simply cannot. Working with an assistant that operates in your environment—not only answering questions—is genuinely different.

The catch is how Claude Code is priced today: about $17 per user per month on Claude Pro (billed annually). For most unfunded early-stage startups, that adds up as the team grows. Cost is not the only concern: there is also compliance and data residency, and even vendor availability. Recently, Anthropic, the company behind Claude, faced temporary disruption tied to events in the Middle East.

Two situations make this painful: when the team is large enough that cost matters, and when the company restricts where code and prompts go—compliance, security policy, or regulated sectors.

For those two cases, OpenCode is a real alternative.

What OpenCode is

GitHub: sst/opencode

OpenCode is a terminal coding assistant, open source (MIT), maintained by the SST team (the same folks behind Serverless Stack). The pitch is an experience similar to Claude Code, without locking you to one vendor or subscription. You connect directly to your LLM provider with your own API keys.

If your company already has credits with OpenAI, Azure, Bedrock, and others, the marginal cost of the assistant itself is zero. You pay only for model usage—and depending on volume, that can be far cheaper than a flat subscription.

The project has an active community, supports more than 75 providers, and allows per-repository configuration in a file at the project root. That is especially useful for teams: you set the model once in the repo and every developer stays aligned without manual per-machine setup.
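To illustrate, a per-repository configuration can be as small as a single file committed at the project root. The file name opencode.json and the $schema URL follow OpenCode's documented config format; the "provider/model" string below is an example placeholder, not a recommendation:

```shell
# Written at the repository root so every developer picks up the same model.
# "openai/gpt-4o" is an example; substitute your team's provider/model choice.
cat > opencode.json <<'EOF'
{
  "$schema": "https://opencode.ai/config.json",
  "model": "openai/gpt-4o"
}
EOF
```

Commit the file once and the whole team is aligned; no per-machine setup required.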

One thing to know before you install

By default, OpenCode uses the free tier of xAI’s Grok—part of a launch partnership. That applies to both the main model and the lightweight auxiliary model (for tasks like section title generation).

For teams that care where data goes, this must be configured explicitly before any developer uses it. The script I published handles that.
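For reference, an explicit override sets both models at once. The model and small_model keys are taken from OpenCode's config schema; the concrete model strings below are placeholders to adapt, and this is a sketch of what such a configuration might look like rather than the script's exact output:

```shell
# Pin both the main model and the lightweight auxiliary model explicitly,
# so neither silently falls back to the Grok free-tier default.
# Both model strings are example placeholders.
cat > opencode.json <<'EOF'
{
  "$schema": "https://opencode.ai/config.json",
  "model": "openai/gpt-4o",
  "small_model": "openai/gpt-4o-mini"
}
EOF
```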

How to configure for your team

The script installs OpenCode and generates the right provider configuration, setting both the main and the auxiliary model explicitly so Grok is not left as the silent default. You can download it from the link at the end of this article.

After download, usage is straightforward.

If your team uses OpenAI:

OPENAI_API_KEY="your-key" bash setup.sh openai

If your team uses Azure OpenAI:

AZURE_OPENAI_KEY="..." AZURE_RESOURCE_NAME="resource-name" AZURE_OPENAI_DEPLOYMENT="gpt-4o" bash setup.sh azure

If the requirement is on-prem—code that never leaves your infrastructure:

bash setup.sh ollama

That last option installs Ollama locally and configures OpenCode to run a code model on the machine. No tokens leave the box. It is the option for regulated industries or strict data policies.
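To sketch what the on-prem wiring can look like: OpenCode's documented custom-provider mechanism can point at Ollama's OpenAI-compatible endpoint on localhost. The model name is an example, and treat the exact keys as assumptions to verify against OpenCode's current config schema:

```shell
# Point OpenCode at a local Ollama server via its OpenAI-compatible API.
# "qwen2.5-coder:7b" is an example; pull whichever code model fits your hardware.
cat > opencode.json <<'EOF'
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": { "baseURL": "http://localhost:11434/v1" },
      "models": { "qwen2.5-coder:7b": {} }
    }
  },
  "model": "ollama/qwen2.5-coder:7b"
}
EOF
```

With this in place, every request stays on the machine serving Ollama.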

Then run opencode in your project directory.

The script, OpenCode setup for your provider, is available on our Resources page.

Why Claude Code is still king

Claude Code is still better in almost all practical scenarios. Integration with Anthropic's models is smoother, its behavior on complex tasks is more consistent, and the overall experience is more polished.

OpenCode is the right choice when the constraint is cost or compliance, not when you want the best possible experience regardless of price.