OpenClaw - Self-Hosted AI Assistant
OpenClaw Tutorial — Install OpenClaw, connect to New API, and quickly build a self-hosted AI assistant. An open-source project with support for multi-channel integrations such as Lark, Discord, and Slack.
Project Overview
OpenClaw is an open-source, self-hosted personal AI assistant platform that connects messaging apps to AI agents running on your own hardware. Designed for developers and advanced users, it lets you have an autonomous AI assistant without giving up control of your data.
- Official website: https://openclaw.ai
- Project documentation: https://docs.openclaw.ai
- GitHub: https://github.com/moltbot/moltbot
OpenClaw is fully open source. You can browse the source code, submit issues, or contribute through the OpenClaw GitHub repository. This tutorial covers the complete process of installing and configuring OpenClaw and connecting it to New API.
🌟 Core Features
Multi-Channel Integration
- Cross-platform coverage: Supports multiple mainstream messaging platforms including Lark, Discord, Slack, and Microsoft Teams
- Single gateway: Manage all channels through a single Gateway process
- Voice support: Supports voice interaction on macOS/iOS/Android
- Canvas UI: Can render interactive Canvas interfaces
Self-Hosting and Data Security
- Fully self-hosted: Runs on your own machine or server
- Open-source transparency: MIT license with fully transparent code
- Local data storage: Context and skills are stored on your local computer rather than in the cloud
Intelligent Agent Capabilities
- Continuous operation: Supports always-on background execution with persistent memory
- Scheduled tasks: Supports cron scheduled tasks
- Session isolation: Isolates sessions by agent/workspace/sender
- Multi-agent routing: Supports collaborative work across multiple agents
- Tool calling: Native support for tool calling and code execution
📦 Installation
Requirements
- Node.js 22 or later
- An AI model API key
Install OpenClaw globally with npm:
npm install -g openclaw@latest
After installation, run the onboarding wizard:
openclaw onboard
🚀 Configuration
Configuration File Location
The OpenClaw configuration file is located at ~/.openclaw/config.json. It can be generated automatically through the onboarding wizard or edited manually.
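As a quick sanity check, the file can be loaded with any JSON parser. Below is a minimal Python sketch; the path follows the default location above, and `load_config` is an illustrative helper, not part of OpenClaw:

```python
import json
from pathlib import Path

# Default OpenClaw config location, per the docs above.
CONFIG_PATH = Path.home() / ".openclaw" / "config.json"

def load_config(path: Path = CONFIG_PATH) -> dict:
    """Parse the OpenClaw config file, raising a clear error if it is missing."""
    if not path.exists():
        raise FileNotFoundError(
            f"{path} not found - run `openclaw onboard` to generate it"
        )
    return json.loads(path.read_text(encoding="utf-8"))

if __name__ == "__main__":
    try:
        cfg = load_config()
        print("Configured providers:", list(cfg.get("models", {}).get("providers", {})))
    except FileNotFoundError as err:
        print(err)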
Configuration Example
Below is a complete configuration example using New API as the model provider:
{
"meta": {
"lastTouchedVersion": "2026.2.1",
"lastTouchedAt": "2026-02-03T12:17:41.559Z"
},
"wizard": {
"lastRunAt": "2026-02-02T21:17:16.011Z",
"lastRunVersion": "2026.2.1",
"lastRunCommand": "onboard",
"lastRunMode": "local"
},
"auth": {
"cooldowns": {
"billingBackoffHoursByProvider": {}
}
},
"models": {
"providers": {
"newapi": {
"baseUrl": "https://api.moleapi.com/v1",
"apiKey": "sk-your-api-key-from-moleapi",
"auth": "api-key",
"api": "openai-completions",
"models": [
{
"id": "gemini-3-flash-preview",
"name": "gemini-3-flash-preview",
"api": "openai-completions",
"reasoning": true,
"input": [
"text",
"image"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 128000,
"maxTokens": 64000
},
{
"id": "kimi-k2.5",
"name": "kimi-k2.5",
"api": "openai-completions",
"reasoning": true,
"input": [
"text",
"image"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 128000,
"maxTokens": 64000
}
]
}
},
"bedrockDiscovery": {
"providerFilter": []
}
},
"agents": {
"defaults": {
"model": {
"primary": "newapi/gemini-3-flash-preview",
"fallbacks": [
"newapi/kimi-k2.5"
]
},
"models": {
"newapi/gemini-3-flash-preview": {
"alias": "gemini-3-flash-preview"
},
"newapi/kimi-k2.5": {
"alias": "kimi-k2.5"
}
},
"workspace": "/home/your-username/.openclaw/workspace",
"maxConcurrent": 4,
"subagents": {
"maxConcurrent": 8
}
}
},
"messages": {
"ackReactionScope": "group-mentions"
},
"commands": {
"native": "auto",
"nativeSkills": "auto"
},
"channels": {
"lark": {
"enabled": true,
"dmPolicy": "pairing",
"appId": "your-lark-app-id",
"appSecret": "your-lark-app-secret",
"groupPolicy": "allowlist",
"streamMode": "partial"
}
},
"gateway": {
"port": 18789,
"mode": "local",
"bind": "loopback",
"auth": {
"mode": "token",
"token": "your-secure-token"
},
"tailscale": {
"mode": "off",
"resetOnExit": false
}
},
"skills": {
"install": {
"nodeManager": "npm"
}
}
}
Key Configuration Items
| Configuration Item | Description |
|---|---|
| models.providers.newapi.baseUrl | New API deployment URL. Must include /v1 |
| models.providers.newapi.apiKey | New API key |
| models.providers.newapi.models | Model list. You can add multiple models as needed |
| agents.defaults.model.primary | Default primary model, in the format provider/model-id |
| agents.defaults.model.fallbacks | Fallback model list. OpenClaw switches to these automatically when the primary model is unavailable |
| channels.lark.appId | Lark app App ID, obtained from the Lark Open Platform |
| channels.lark.appSecret | Lark app App Secret |
| gateway.port | Gateway listening port |
| gateway.auth.token | Security token for gateway access |
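The most common misconfigurations are a baseUrl missing the /v1 suffix and a primary model that does not follow the provider/model-id format. A small, hypothetical validation helper (key names follow the example config above; `validate_model_config` is not part of OpenClaw):

```python
def validate_model_config(cfg: dict) -> list[str]:
    """Return a list of human-readable problems found in the models/agents config."""
    problems = []
    providers = cfg.get("models", {}).get("providers", {})
    for name, provider in providers.items():
        # baseUrl must point at the OpenAI-compatible /v1 root.
        if not provider.get("baseUrl", "").rstrip("/").endswith("/v1"):
            problems.append(f"provider '{name}': baseUrl should include /v1")
        if not provider.get("apiKey"):
            problems.append(f"provider '{name}': apiKey is empty")
    primary = cfg.get("agents", {}).get("defaults", {}).get("model", {}).get("primary", "")
    if "/" not in primary:
        problems.append("agents.defaults.model.primary must be 'provider/model-id'")
    elif primary.split("/", 1)[0] not in providers:
        problems.append(f"primary model references unknown provider '{primary.split('/', 1)[0]}'")
    return problems
```

Run it against the parsed config before starting the service; an empty list means the checked fields look consistent.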
Start the Service
After completing the configuration, start OpenClaw:
openclaw start
Once started, you can interact with the AI assistant through the configured channels.
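To verify that the Gateway came up on the configured port (18789 in the example config), you can probe it with a plain TCP connection. This only confirms something is listening, not that token authentication works; `port_open` is an illustrative helper:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # The example gateway config binds to loopback on port 18789.
    print("gateway up:", port_open("127.0.0.1", 18789))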