## Where the file lives

```text
# macOS / Linux
~/.deepseek/config.toml

# Windows
%USERPROFILE%\.deepseek\config.toml
```

## Starter example

```toml
[api]
key = "sk-..."
base_url = "https://api.deepseek.com"
model = "deepseek-v4-flash"

[ui]
default_mode = "agent"
approval_mode = "suggest"
reasoning_effort = "high"

[mcp]
enabled = true
```

Exact keys evolve with releases; when in doubt, mirror the sample config from the official repository wiki that matches your installed version.
## Approval modes

Modes (Plan / Agent / YOLO) describe how autonomous the agent is. Approval settings separately gate tool calls; common values include `suggest`, `auto`, and `never`. Treat them as independent axes: you can run in Agent mode while still tightening approvals.
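As an illustration, the two axes combine in `config.toml` like this (key names taken from the starter example above):

```toml
# Agent mode with conservative approvals: the agent plans and acts on its
# own, but each tool call still surfaces as a suggestion you must confirm.
[ui]
default_mode = "agent"
approval_mode = "suggest"
```

Loosening `approval_mode` to `auto` keeps the same mode while removing the per-call confirmation step.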
## Reasoning effort

Reasoning effort controls how hard the model works on supported endpoints, trading latency and token cost against depth of analysis. Keyboard shortcuts for cycling through effort levels are documented in the upstream release notes.
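As a sketch, the effort level is set in the `[ui]` table; the values below are the ones used elsewhere in this guide, and availability depends on the endpoint:

```toml
[ui]
# Values used in this guide: "off", "high", "max".
# Higher effort means deeper analysis but more latency and tokens.
reasoning_effort = "max"
```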
## Providers and API keys

The upstream docs list multiple providers. Typical IDs include:
| Provider | ID | Env var (typical) |
|---|---|---|
| DeepSeek | deepseek | DEEPSEEK_API_KEY |
| NVIDIA NIM | nvidia-nim | NVIDIA_API_KEY |
| OpenAI | openai | OPENAI_API_KEY |
| OpenRouter | openrouter | OPENROUTER_API_KEY |
| Novita | novita | NOVITA_API_KEY |
| Fireworks | fireworks | FIREWORKS_API_KEY |
| SGLang | sglang | SGLANG_API_KEY |
| vLLM | vllm | VLLM_API_KEY |
| Ollama | ollama | OLLAMA_API_KEY |
Use `deepseek auth set --provider <id>` to align CLI auth with the backend you chose.
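For example, to point the CLI at OpenRouter (provider ID and env var from the table above; the key value is a placeholder):

```shell
# Register the key with the CLI for the chosen backend
deepseek auth set --provider openrouter

# Alternatively, export the matching environment variable
export OPENROUTER_API_KEY="sk-or-..."
```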
## Hooks

Hooks let you run commands around lifecycle events (session start, before/after tool calls, and more). Enable them when you want logging, metrics, or lightweight automation tied to agent behavior.
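Exact hook keys vary by release; as a sketch only, with hypothetical key names (confirm against the sample config shipped with your version), a hooks table might look like:

```toml
# Hypothetical key names -- check your release's sample config
[hooks]
session_start = "echo session started >> ~/.deepseek/hooks.log"
before_tool_call = "scripts/log-tool.sh"
```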
## MCP config file

MCP servers commonly live in `~/.deepseek/mcp.json`. See MCP & skills for examples.
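A minimal `mcp.json` sketch, using the `mcpServers` shape common to MCP clients (the server package and path are illustrative; check MCP & skills for the exact schema this CLI expects):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```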
## Suggested profiles

### Budget-friendly

```toml
[api]
model = "deepseek-v4-flash"

[ui]
default_mode = "agent"
approval_mode = "suggest"
reasoning_effort = "off"
```

### Balanced

```toml
[api]
model = "deepseek-v4-flash"

[ui]
default_mode = "agent"
approval_mode = "suggest"
reasoning_effort = "high"
```

### Heavy reasoning

```toml
[api]
model = "deepseek-v4-pro"

[ui]
default_mode = "agent"
approval_mode = "suggest"
reasoning_effort = "max"
```