Provider reference
folio auto-detects available providers from environment variables: set the right env vars and you're ready.
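For example, enabling the DeepSeek and OpenAI chat providers is just a matter of exporting their keys before running folio (the key values below are placeholders, not real keys):

```shell
# Placeholder values; substitute your real API keys.
export DEEPSEEK_API_KEY="sk-xxxx"
export OPENAI_API_KEY="sk-xxxx"
```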
Chat (21 providers, all OpenAI-compatible)
| Provider | Env var | Default model |
|---|---|---|
| DeepSeek | DEEPSEEK_API_KEY | deepseek-chat |
| OpenAI | OPENAI_API_KEY | gpt-4o-mini |
| Anthropic | ANTHROPIC_API_KEY | claude-sonnet-4-7 |
| Gemini | GEMINI_API_KEY | gemini-2.5-flash |
| Zhipu | ZHIPU_API_KEY | glm-4-flash |
| Moonshot (Kimi) | MOONSHOT_API_KEY | moonshot-v1-8k |
| Qwen (Tongyi) | DASHSCOPE_API_KEY | qwen-plus |
| Doubao (ByteDance) | ARK_API_KEY | doubao-pro-32k |
| Wenxin (ERNIE) | QIANFAN_API_KEY | ernie-4.0-8k |
| Hunyuan | HUNYUAN_API_KEY | hunyuan-pro |
| Spark (iFlytek) | SPARK_API_PASSWORD | spark-3.5 |
| 01.AI Yi | YI_API_KEY | yi-large |
| StepFun | STEP_API_KEY | step-1 |
| xAI Grok | XAI_API_KEY | grok-2 |
| Mistral | MISTRAL_API_KEY | mistral-medium |
| Cohere | COHERE_API_KEY | command-r |
| Groq | GROQ_API_KEY | llama-3.1-70b-versatile |
| Together | TOGETHER_API_KEY | meta-llama/Llama-3-70b-chat-hf |
| Fireworks | FIREWORKS_API_KEY | accounts/fireworks/models/llama-v3-70b-instruct |
| OpenRouter | OPENROUTER_API_KEY | anthropic/claude-sonnet-4-7 |
| custom | Configure baseUrl + envKey | — |
Image (5 providers)
| Provider | Env var | Default model |
|---|---|---|
| Zhipu CogView | ZHIPU_API_KEY | cogview-3-plus |
| Gemini | GEMINI_API_KEY | gemini-2.5-flash-image |
| OpenAI | OPENAI_API_KEY | gpt-image-1 |
| DashScope (Tongyi Wanxiang) | DASHSCOPE_API_KEY | wanxiang-v2.1 |
| Doubao (Jimeng) | ARK_API_KEY | doubao-image |
Publish targets (4 providers)
| Target | Env vars |
|---|---|
| Cloudflare Pages | CLOUDFLARE_API_TOKEN + CLOUDFLARE_ACCOUNT_ID + CLOUDFLARE_PAGES_PROJECT |
| GitHub Pages | GITHUB_TOKEN + GITHUB_REPO (+ optional GITHUB_BRANCH) |
| Netlify | NETLIFY_TOKEN (+ optional NETLIFY_SITE_ID) |
| S3-compatible | S3_ENDPOINT + S3_BUCKET + S3_REGION + AWS_ACCESS_KEY_ID + AWS_SECRET_ACCESS_KEY |
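As a sketch, publishing to Cloudflare Pages requires all three of its variables to be set (the values below are placeholders):

```shell
# Placeholder values; use your own Cloudflare credentials and project name.
export CLOUDFLARE_API_TOKEN="cf-token"
export CLOUDFLARE_ACCOUNT_ID="abc123"
export CLOUDFLARE_PAGES_PROJECT="my-site"
```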
Priority and capability negotiation
When several chat providers are configured at once, folio runs capability negotiation and picks the one that satisfies the current engine's requires set.
To pin a preferred order manually, add this to folio.config.json:
```json
{
  "providers": { "preferred": ["deepseek", "openai", "openrouter"] }
}
```
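The custom chat entry in the table above needs a baseUrl and envKey. A plausible shape for that configuration is sketched below; the baseUrl and envKey field names come from the table, but the surrounding structure, the model field, and all values are assumptions for illustration:

```json
{
  "providers": {
    "preferred": ["custom"],
    "custom": {
      "baseUrl": "https://llm.internal.example.com/v1",
      "envKey": "MY_LLM_API_KEY",
      "model": "my-model"
    }
  }
}
```

Since the custom provider is OpenAI-compatible, baseUrl should point at an endpoint that serves the OpenAI chat-completions API, and envKey names the environment variable holding its key.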