diff --git a/docs/integrations/claude-code.mdx b/docs/integrations/claude-code.mdx
index d023612c7..8eb9762d8 100644
--- a/docs/integrations/claude-code.mdx
+++ b/docs/integrations/claude-code.mdx
@@ -2,7 +2,7 @@
title: Claude Code
---
-Claude Code is Anthropic's agentic coding tool that can read, modify, and execute code in your working directory.
+Claude Code is Anthropic's agentic coding tool that can read, modify, and execute code in your working directory.
Open models can be used with Claude Code through Ollama's Anthropic-compatible API, enabling you to use models such as `qwen3-coder` and `gpt-oss:20b`.
@@ -26,6 +26,16 @@ irm https://claude.ai/install.ps1 | iex
## Usage with Ollama
+Configure Claude Code to use Ollama:
+
+```shell
+ollama config claude
+```
+
+This will prompt you to select a model and automatically configure Claude Code to use Ollama.
+
+Alternatively, configure Claude Code manually.
+
Claude Code connects to Ollama using the Anthropic-compatible API.
1. Set the environment variables:
@@ -47,7 +57,9 @@ Or run with environment variables inline:
ANTHROPIC_AUTH_TOKEN=ollama ANTHROPIC_BASE_URL=http://localhost:11434 claude --model gpt-oss:20b
```
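+
+If the model is not already available locally, pull it first (using `gpt-oss:20b` from the example above):
+
+```shell
+ollama pull gpt-oss:20b
+```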
-**Note:** Claude Code requires a large context window. We recommend at least 32K tokens. See the [context length documentation](/context-length) for how to adjust context length in Ollama.
+
+
+Claude Code requires a large context window. We recommend at least 32K tokens. See the [context length documentation](/context-length) for how to adjust context length in Ollama.
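+
+One way to raise the limit when you start the server yourself (rather than through the desktop app) is the `OLLAMA_CONTEXT_LENGTH` environment variable; `32768` below is just an example value:
+
+```shell
+# Serve models with a 32K-token context window
+OLLAMA_CONTEXT_LENGTH=32768 ollama serve
+```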
## Connecting to ollama.com
@@ -75,4 +87,4 @@ claude --model glm-4.7:cloud
### Local models
- `qwen3-coder` - Excellent for coding tasks
- `gpt-oss:20b` - Strong general-purpose model
-- `gpt-oss:120b` - Larger general-purpose model for more complex tasks
\ No newline at end of file
+- `gpt-oss:120b` - Larger general-purpose model for more complex tasks
diff --git a/docs/integrations/codex.mdx b/docs/integrations/codex.mdx
index f9df1b858..fd2aeadbc 100644
--- a/docs/integrations/codex.mdx
+++ b/docs/integrations/codex.mdx
@@ -2,22 +2,31 @@
title: Codex
---
+Codex is OpenAI's agentic coding tool for the command line.
## Install
Install the [Codex CLI](https://developers.openai.com/codex/cli/):
-```
+```shell
npm install -g @openai/codex
```
## Usage with Ollama
-Codex requires a larger context window. It is recommended to use a context window of at least 32K tokens.
+Configure Codex to use Ollama:
+
+```shell
+ollama config codex
+```
+
+This will prompt you to select a model and automatically configure Codex to use Ollama.
+
+Alternatively, configure Codex manually.
+
To use `codex` with Ollama, use the `--oss` flag:
-```
+```shell
codex --oss
```
@@ -25,20 +34,22 @@ codex --oss
By default, codex will use the local `gpt-oss:20b` model. However, you can specify a different model with the `-m` flag:
-```
+```shell
codex --oss -m gpt-oss:120b
```
### Cloud Models
-```
+```shell
codex --oss -m gpt-oss:120b-cloud
```
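+
+Cloud models run on ollama.com, so if this machine is not signed in yet you may need to authenticate first:
+
+```shell
+ollama signin
+```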
+
+
+Codex requires a large context window. It is recommended to use a context window of at least 32K tokens. See [Context length](/context-length) for more information.
## Connecting to ollama.com
-
Create an [API key](https://ollama.com/settings/keys) from ollama.com and export it as `OLLAMA_API_KEY`.
To use ollama.com directly, edit your `~/.codex/config.toml` file to point to ollama.com.
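+
+As a sketch, assuming Codex's `model_providers` configuration table (check the Codex configuration docs for the exact schema), the entry might look like:
+
+```toml
+# Hypothetical entry - key names follow Codex's model_providers schema
+model = "qwen3-coder:480b"
+model_provider = "ollama"
+
+[model_providers.ollama]
+name = "Ollama"
+base_url = "https://ollama.com/v1"
+env_key = "OLLAMA_API_KEY"  # API key is read from this environment variable
+```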
diff --git a/docs/integrations/droid.mdx b/docs/integrations/droid.mdx
index b1ba37710..827ad0c70 100644
--- a/docs/integrations/droid.mdx
+++ b/docs/integrations/droid.mdx
@@ -2,6 +2,7 @@
title: Droid
---
+Droid is Factory's agentic coding tool for the command line.
## Install
@@ -11,66 +12,80 @@ Install the [Droid CLI](https://factory.ai/):
curl -fsSL https://app.factory.ai/cli | sh
```
-Droid requires a larger context window. It is recommended to use a context window of at least 32K tokens. See [Context length](/context-length) for more information.
-
## Usage with Ollama
-Add a local configuration block to `~/.factory/config.json`:
+Configure Droid to use Ollama:
+
+```shell
+ollama config droid
+```
+
+This will prompt you to select models and automatically configure Droid to use Ollama.
+
+
+Alternatively, add a local configuration block to `~/.factory/settings.json` manually:
```json
{
- "custom_models": [
+ "customModels": [
{
- "model_display_name": "qwen3-coder [Ollama]",
"model": "qwen3-coder",
- "base_url": "http://localhost:11434/v1/",
- "api_key": "not-needed",
+ "displayName": "qwen3-coder [Ollama]",
+ "baseUrl": "http://localhost:11434/v1",
+ "apiKey": "ollama",
"provider": "generic-chat-completion-api",
- "max_tokens": 32000
+ "maxOutputTokens": 32000
}
]
}
```
+Adjust `maxOutputTokens` to match your model's context length; the automated setup detects this for you.
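+
+To check a model's context length yourself, `ollama show` prints it (using `qwen3-coder` as an example):
+
+```shell
+ollama show qwen3-coder
+```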
+
+### Cloud Models
-## Cloud Models
`qwen3-coder:480b-cloud` is the recommended model for use with Droid.
-Add the cloud configuration block to `~/.factory/config.json`:
+Add the cloud configuration block to `~/.factory/settings.json`:
```json
{
- "custom_models": [
+ "customModels": [
{
- "model_display_name": "qwen3-coder [Ollama Cloud]",
"model": "qwen3-coder:480b-cloud",
- "base_url": "http://localhost:11434/v1/",
- "api_key": "not-needed",
+ "displayName": "qwen3-coder:480b-cloud [Ollama]",
+ "baseUrl": "http://localhost:11434/v1",
+ "apiKey": "ollama",
"provider": "generic-chat-completion-api",
- "max_tokens": 128000
+ "maxOutputTokens": 128000
}
]
}
```
+
+
+Droid requires a large context window. It is recommended to use a context window of at least 32K tokens. See [Context length](/context-length) for more information.
+
## Connecting to ollama.com
1. Create an [API key](https://ollama.com/settings/keys) from ollama.com and export it as `OLLAMA_API_KEY`.
-2. Add the cloud configuration block to `~/.factory/config.json`:
+2. Add the cloud configuration block to `~/.factory/settings.json`:
```json
{
- "custom_models": [
+ "customModels": [
{
- "model_display_name": "qwen3-coder [Ollama Cloud]",
"model": "qwen3-coder:480b",
- "base_url": "https://ollama.com/v1/",
- "api_key": "OLLAMA_API_KEY",
+ "displayName": "qwen3-coder:480b [Ollama Cloud]",
+ "baseUrl": "https://ollama.com/v1",
+ "apiKey": "OLLAMA_API_KEY",
"provider": "generic-chat-completion-api",
- "max_tokens": 128000
+ "maxOutputTokens": 128000
}
]
}
```
-Run `droid` in a new terminal to load the new settings.
\ No newline at end of file
+Run `droid` in a new terminal to load the new settings.
diff --git a/docs/integrations/opencode.mdx b/docs/integrations/opencode.mdx
new file mode 100644
index 000000000..becfe9795
--- /dev/null
+++ b/docs/integrations/opencode.mdx
@@ -0,0 +1,63 @@
+---
+title: OpenCode
+---
+
+OpenCode is an agentic coding tool for the terminal.
+
+## Install
+
+Install [OpenCode](https://opencode.ai):
+
+```shell
+curl -fsSL https://opencode.ai/install | bash
+```
+
+## Usage with Ollama
+
+Configure OpenCode to use Ollama:
+
+```shell
+ollama config opencode
+```
+
+This will prompt you to select models and automatically configure OpenCode to use Ollama.
+
+
+Alternatively, add the Ollama provider to `~/.config/opencode/opencode.json` manually:
+
+```json
+{
+ "$schema": "https://opencode.ai/config.json",
+ "provider": {
+ "ollama": {
+ "npm": "@ai-sdk/openai-compatible",
+ "name": "Ollama (local)",
+ "options": {
+ "baseURL": "http://localhost:11434/v1"
+ },
+ "models": {
+ "qwen3-coder": {
+ "name": "qwen3-coder [Ollama]"
+ }
+ }
+ }
+ }
+}
+```
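+
+The keys under `models` must match the Ollama model names exactly; `ollama ls` lists what is available locally:
+
+```shell
+ollama ls
+```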
+
+
+
+OpenCode requires a large context window. It is recommended to use a context window of at least 32K tokens. See [Context length](/context-length) for more information.
+
+## Recommended Models
+
+### Cloud models
+- `qwen3-coder:480b` - Large coding model
+- `glm-4.7:cloud` - High-performance cloud model
+- `minimax-m2.1:cloud` - Fast cloud model
+
+### Local models
+- `qwen3-coder` - Excellent for coding tasks
+- `gpt-oss:20b` - Strong general-purpose model
+- `gpt-oss:120b` - Larger general-purpose model for more complex tasks