OpenClaw, formerly known as ClawBot and MoltBot, is a personal AI agent that you run on your own infrastructure and that uses a large language model (LLM) of your choice, locally or remotely. OpenClaw executes tasks autonomously and can manage your mailbox (including your calendar), work with browsers, run code on your computer, and much more.
You control OpenClaw via a chat app of your choice such as WhatsApp, Telegram, or Signal, and it also has a web dashboard. In this guide, you’ll install OpenClaw on a Linux server (or order a ready-to-go OpenClaw VPS) and connect it to an external LLM provider such as OpenAI’s GPT, or a locally hosted model via Ollama.
Looking for inspiration for what people use OpenClaw for? Take a look at https://openclaw.ai/showcase.
OpenClaw uses “skills” to learn new capabilities. These can be developed by third parties and are freely available via Clawhub. Only install trusted “skills”/plugins. Malicious skills have recently been spotted in public registries; review the source code before use and never copy-paste suspicious shell commands.
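Reviewing a skill's source before installing can be partly mechanized. As an illustrative sketch (scan_skill is our own hypothetical helper, not an OpenClaw command), you could grep a downloaded skill directory for a few obviously risky shell patterns; this is no substitute for actually reading the code:

```shell
# scan_skill DIR — flag a few risky shell patterns in a skill's files.
# Heuristic only: malicious code can trivially evade simple pattern matching.
scan_skill() {
  grep -rnE 'curl[^|]*\|[[:space:]]*(ba)?sh|rm -rf /|base64 (-d|--decode)' "$1" \
    || echo "no obvious red flags in $1"
}

# Demo on a harmless temporary "skill" directory
demo=$(mktemp -d)
printf 'echo "installing skill"\n' > "$demo/install.sh"
scan_skill "$demo"
rm -rf "$demo"
```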
Installing OpenClaw
Step 1
Connect to your VPS via SSH or the VPS console.
Step 2
First, update your server so the latest software packages are available:
Ubuntu:
sudo apt -y update && sudo apt -y upgrade
AlmaLinux/Rocky Linux/CentOS Stream:
sudo dnf -y update
Step 3
OpenClaw requires a few dependencies and adjustments before, during, and after installation:
- Node.js 22 or newer (the commands below install Node.js 24).
- NPM: the package manager installed alongside Node.js. One extra step is needed so the OpenClaw executable can be found: npm’s global bin directory is not on your $PATH by default, so you add it to $PATH (the environment variable that tells your OS where executable files are located).
- Homebrew: to use skills (i.e., what OpenClaw can do besides chatting), you also need Homebrew.
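The $PATH adjustment mentioned above can be checked with a small helper before you touch ~/.bashrc (path_contains is our own illustrative function, not part of npm or OpenClaw):

```shell
# path_contains DIR — succeeds if DIR is already a component of $PATH
path_contains() {
  case ":$PATH:" in
    *":$1:"*) return 0 ;;
    *) return 1 ;;
  esac
}

# Typical use after installing Node.js (requires npm to be installed):
# path_contains "$(npm prefix -g)/bin" || export PATH="$(npm prefix -g)/bin:$PATH"
```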
Copy and paste the code below depending on your operating system:
Ubuntu / Debian:
curl -fsSL https://deb.nodesource.com/setup_24.x | sudo bash
sudo apt -y install nodejs
echo 'export PATH="$(npm prefix -g)/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
echo >> ~/.bashrc
grep -qxF 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv bash)"' ~/.bashrc \
|| echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv bash)"' >> ~/.bashrc
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv bash)"
sudo apt -y install build-essential
AlmaLinux / Rocky Linux / CentOS Stream:
curl -fsSL https://rpm.nodesource.com/setup_24.x | sudo bash
sudo dnf -y install nodejs
echo 'export PATH="$(npm prefix -g)/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
echo >> ~/.bashrc
grep -qxF 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv bash)"' ~/.bashrc \
|| echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv bash)"' >> ~/.bashrc
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv bash)"
sudo dnf -y update && sudo dnf -y groupinstall "Development Tools" && sudo dnf -y install gcc-c++ make
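The `grep -qxF … || echo …` construction used above is an idempotent append: the line is only added to ~/.bashrc if it isn't there yet, so re-running the setup doesn't create duplicates. As a generic sketch (append_once is our own illustrative helper, not part of the installer):

```shell
# append_once FILE LINE — appends LINE to FILE only if FILE doesn't already
# contain it verbatim (-q quiet, -x whole-line match, -F fixed string)
append_once() {
  grep -qxF "$2" "$1" 2>/dev/null || printf '%s\n' "$2" >> "$1"
}

rc=$(mktemp)
append_once "$rc" 'export PATH="$HOME/bin:$PATH"'
append_once "$rc" 'export PATH="$HOME/bin:$PATH"'   # no-op the second time
wc -l < "$rc"                                       # prints 1
rm -f "$rc"
```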
Step 4
Install OpenClaw via an installer script (recommended) or Docker.
Installer script (recommended)
Run the official installer script:
curl -fsSL https://openclaw.ai/install.sh | bash
This installs the openclaw CLI (Command Line Interface). By default, an onboarding process starts after installation, which we’ll go through in detail in the next section. If you skip or interrupt the onboarding process now, you can start it manually after installation with the command:
openclaw onboard --install-daemon
--install-daemon creates a systemd service so the gateway keeps running in the background.
Docker (containerized gateway)
Step 1
First, install Docker and Docker Compose (v2). Start and enable the Docker service:
sudo apt -y update && sudo apt -y install docker.io docker-compose-v2
sudo systemctl enable --now docker
Step 2
Clone the OpenClaw repo and run the Docker setup script:
git clone https://github.com/openclaw/openclaw.git
cd openclaw
./docker-setup.sh
This builds the image, runs the onboarding process (see the next section), and starts the gateway via Compose.
Step 3
Open the dashboard and pair your browser (if needed).
Show the dashboard URL without opening it automatically:
docker compose run --rm openclaw-cli dashboard --no-open
(Optional) approve a pairing request
docker compose run --rm openclaw-cli devices list
docker compose run --rm openclaw-cli devices approve
Use the CLI container to pair and link channels.
The OpenClaw onboarding process
The onboarding process has started, and we’ll walk through it in detail below. Did you interrupt the process or not start it yet? Then run the command now:
openclaw onboard --install-daemon
During this process, use the arrow keys to move between options, the space bar to toggle a selection when multiple options can be chosen, and Enter to confirm each step.
Step 1
First, you’ll see a security warning. We recommend reading it carefully; OpenClaw can, depending on your configuration, gain deep access to things like your computer/server, social accounts, and email, and it can cause serious damage if it’s not configured securely (hosting OpenClaw on a VPS already reduces a number of risks).
◇ Security ──────────────────────────────────────────────────────────────────────────────╮
│ │
│ Security warning — please read. │
│ │
│ OpenClaw is a hobby project and still in beta. Expect sharp edges. │
│ This bot can read files and run actions if tools are enabled. │
│ A bad prompt can trick it into doing unsafe things. │
│ │
│ If you’re not comfortable with basic security and access control, don’t run OpenClaw. │
│ Ask someone experienced to help before enabling tools or exposing it to the internet. │
│ │
│ Recommended baseline: │
│ - Pairing/allowlists + mention gating. │
│ - Sandbox + least-privilege tools. │
│ - Keep secrets out of the agent’s reachable filesystem. │
│ - Use the strongest available model for any bot with tools or untrusted inboxes. │
│ │
│ Run regularly: │
│ openclaw security audit --deep │
│ openclaw security audit --fix │
│ │
│ Must read: https://docs.openclaw.ai/gateway/security │
│ │
├─────────────────────────────────────────────────────────────────────────────────────────╯
OpenClaw then asks for explicit permission to continue the onboarding process. Select ‘Yes’ with the arrow keys and press ‘Enter’ to give permission.
◆ I understand this is powerful and inherently risky. Continue?
│ ● Yes / ○ No
└
Step 2
Press ‘Enter’ again to select ‘QuickStart’ as the onboarding method. All parts of the onboarding process are optional, and we discuss the most important ones in separate sections.
◆ Onboarding mode
│ ● QuickStart (Configure details later via openclaw configure.)
│ ○ Manual
└
Step 3
You now get the option to select an LLM provider, with two choices:
- Do you want to use a self-hosted model via Ollama? Then select ‘Skip for now’.
- Would you rather use one of the available LLM providers? Then grab the OAuth / API key details for your preferred LLM provider now, and select the provider you want.
◆ Model/auth provider
│ ○ OpenAI
│ ○ Anthropic
│ ○ MiniMax
│ ○ Moonshot AI
│ ○ Google
│ ○ OpenRouter
│ ○ Qwen
│ ○ Z.AI (GLM 4.7)
│ ○ Copilot
│ ○ Vercel AI Gateway
│ ○ OpenCode Zen
│ ○ Xiaomi
│ ○ Synthetic
│ ○ Venice AI
│ ● Skip for now
└
OpenAI example
For example, if you select OpenAI and want to use an API key, the next options will look like this:
◇ QuickStart ─────────────────────────╮
│ │
│ Gateway port: 18789 │
│ Gateway bind: Loopback (127.0.0.1) │
│ Gateway auth: Token (default) │
│ Tailscale exposure: Off │
│ Direct to chat channels. │
│ │
├──────────────────────────────────────╯
│
◇ Model/auth provider
│ OpenAI
│
◇ OpenAI auth method
│ OpenAI API key
│
◆ Enter OpenAI API key
│ a1234-qqw123aa-asdasd1█
Step 4
In step 5, you’ll choose a specific LLM. To keep that list manageable, you can already filter by provider now.
Are you using Ollama, or do you want to configure an LLM later? Then press ‘Enter’ to select ‘All providers’ (default) under ‘Filter models by provider’.
Did you choose a specific provider in the previous step? Then select it from the list below. In the next step, you can select which model you want to use.
◆ Filter models by provider
│ ● All providers
│ ○ amazon-bedrock
│ ○ anthropic
│ ○ azure-openai-responses
│ ○ cerebras
│ ○ github-copilot
│ ○ google
│ ○ google-antigravity
│ ○ google-gemini-cli
│ ○ google-vertex
│ ○ groq
│ ○ huggingface
│ ○ kimi-coding
│ ○ minimax
│ ○ minimax-cn
│ ○ mistral
│ ○ openai
│ ○ openai-codex
│ ○ opencode
│ ○ openrouter
│ ○ vercel-ai-gateway
│ ○ xai
│ ○ zai
└
Step 5
Are you using Ollama or do you want to configure an LLM later? Then press ‘Enter’ to select ‘Keep current’ (default) under ‘Default model’.
Did you choose a specific provider in the previous step? Then select the desired model from the list that appears; in that case, it will be shorter and will only show models from the provider you selected in the previous menu.
◆ Default model
│ ● Keep current (default: anthropic/claude-opus-4-5)
│ ○ Enter model manually
│ ○ amazon-bedrock/anthropic.claude-3-haiku-20240307-v1:0
│ ○ amazon-bedrock/anthropic.claude-3-5-haiku-20241022-v1:0
│ ○ amazon-bedrock/global.anthropic.claude-haiku-4-5-20251001-v1:0
│ ○ amazon-bedrock/eu.anthropic.claude-haiku-4-5-20251001-v1:0
│ ○ amazon-bedrock/anthropic.claude-3-opus-20240229-v1:0
│ ○ amazon-bedrock/us.anthropic.claude-opus-4-20250514-v1:0
│ ○ amazon-bedrock/us.anthropic.claude-opus-4-1-20250805-v1:0
│ ○ amazon-bedrock/global.anthropic.claude-opus-4-5-20251101-v1:0
│ ○ amazon-bedrock/eu.anthropic.claude-opus-4-5-20251101-v1:0
│ ○ amazon-bedrock/anthropic.claude-3-sonnet-20240229-v1:0
│ ○ amazon-bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0
│ ○ amazon-bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0
│ ○ amazon-bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0
│ ○ amazon-bedrock/global.anthropic.claude-sonnet-4-20250514-v1:0
│ ○ amazon-bedrock/global.anthropic.claude-sonnet-4-5-20250929-v1:0
│ ○ amazon-bedrock/eu.anthropic.claude-sonnet-4-5-20250929-v1:0
│ ○ amazon-bedrock/cohere.command-r-v1:0
│ ○ amazon-bedrock/cohere.command-r-plus-v1:0
│ ○ amazon-bedrock/us.deepseek.r1-v1:0
│ ○ amazon-bedrock/deepseek.v3-v1:0
│ ○ amazon-bedrock/google.gemma-3-4b-it
│ ○ amazon-bedrock/google.gemma-3-27b-it
│ ○ amazon-bedrock/openai.gpt-oss-safeguard-120b
│ ○ amazon-bedrock/openai.gpt-oss-safeguard-20b
│ ○ amazon-bedrock/openai.gpt-oss-120b-1:0
│ ○ amazon-bedrock/openai.gpt-oss-20b-1:0
│ ○ amazon-bedrock/moonshot.kimi-k2-thinking
│ ○ amazon-bedrock/meta.llama3-1-70b-instruct-v1:0
│ ○ amazon-bedrock/meta.llama3-1-8b-instruct-v1:0
│ ○ amazon-bedrock/us.meta.llama3-2-11b-instruct-v1:0
│ ○ amazon-bedrock/us.meta.llama3-2-1b-instruct-v1:0
│ ○ amazon-bedrock/us.meta.llama3-2-3b-instruct-v1:0
│ ○ amazon-bedrock/us.meta.llama3-2-90b-instruct-v1:0
│ ○ amazon-bedrock/us.meta.llama3-3-70b-instruct-v1:0
│ ○ amazon-bedrock/us.meta.llama4-maverick-17b-instruct-v1:0
│ ○ amazon-bedrock/us.meta.llama4-scout-17b-instruct-v1:0
│ ○ amazon-bedrock/minimax.minimax-m2
│ ○ amazon-bedrock/mistral.ministral-3-14b-instruct
│ ○ amazon-bedrock/mistral.ministral-3-8b-instruct
│ ○ amazon-bedrock/mistral.mistral-large-2402-v1:0
│ ○ amazon-bedrock/global.amazon.nova-2-lite-v1:0
│ ○ amazon-bedrock/us.amazon.nova-lite-v1:0
│ ○ amazon-bedrock/us.amazon.nova-micro-v1:0
│ ○ amazon-bedrock/us.amazon.nova-premier-v1:0
│ ○ amazon-bedrock/us.amazon.nova-pro-v1:0
│ ○ amazon-bedrock/nvidia.nemotron-nano-12b-v2
│ ○ amazon-bedrock/nvidia.nemotron-nano-9b-v2
│ ...
└
Step 6
Next, you can choose a single channel to communicate with OpenClaw (and then optionally add other options). Later in this article, we’ll show in more detail how to configure WhatsApp, Telegram, and Signal. You’re free to select any of the available options; OpenClaw guides you through the configuration process, and below this overview you’ll find an example of what this looks like for Telegram.
◆ Select channel (QuickStart)
│ ● Telegram (Bot API)
│ ○ WhatsApp (QR link)
│ ○ Discord (Bot API)
│ ○ Google Chat (Chat API)
│ ○ Slack (Socket Mode)
│ ○ Signal (signal-cli) (not configured)
│ ○ iMessage (imsg)
│ ○ Nostr (NIP-04 DMs)
│ ○ Microsoft Teams (Bot Framework)
│ ○ Mattermost (plugin)
│ ○ Nextcloud Talk (self-hosted)
│ ○ Matrix (plugin)
│ ○ BlueBubbles (macOS app)
│ ○ LINE (Messaging API)
│ ○ Zalo (Bot API)
│ ○ Zalo (Personal Account)
│ ○ Tlon (Urbit)
│ ○ Skip for now
└
Example with Telegram
Follow the steps below. The easiest way is to log in via your browser at https://web.telegram.org and go through the steps there.
◇ Select channel (QuickStart)
│ Telegram (Bot API)
│
◇ Telegram bot token ───────────────────────────────────────────────────────────────────╮
│ │
│ 1) Open Telegram and chat with @BotFather │
│ 2) Run /newbot (or /mybots) │
│ 3) Copy the token (looks like 123456:ABC...) │
│ Tip: you can also set TELEGRAM_BOT_TOKEN in your env. │
│ Docs: https://docs.openclaw.ai/telegram │
│ Website: https://openclaw.ai │
│ │
├────────────────────────────────────────────────────────────────────────────────────────╯
│
◆ Enter Telegram bot token
│ 853442:AAFCUHWfCem-rwLwCRWSJW4
└
Step 7
You now choose the “skills” OpenClaw can use. This largely determines what OpenClaw can do, beyond chatting in a way similar to ChatGPT.
- Select ‘Yes’ for the question ‘Configure skills now?’
- Select ‘No’ for the question ‘Show Homebrew install command?’
- Select ‘NPM’ as ‘Preferred node manager for skill installs’.
- Select one or more skills with the space bar and press ‘Enter’ to install them. Some of these skills require external API keys—for example, generating images via nano-banana-pro uses Gemini Pro 3 Image preview and requires an API key from Google’s AI Studio.
A few skills that are useful to install right away and don’t require API keys:
- Clawhub: lets you easily add more skills later that aren’t listed here but are available via Clawhub (the website where people share new skills).
- Nano-pdf: lets OpenClaw modify PDF files.
- OpenAI-whisper: installs a local speech-to-text (STT) module, so you can later send voice messages to OpenClaw via WhatsApp, Telegram, or Signal that it can respond to. Note: in one test, we ran into a bug where the API-based STT variant was selected instead of the local (non-API) one. If this happens to you, correct it on the ‘Skills’ page of the web dashboard.
- Summarize: lets OpenClaw summarize URLs, podcasts, and (text) files.
◇ Skills status ────────────╮
│ │
│ Eligible: 4 │
│ Missing requirements: 45 │
│ Blocked by allowlist: 0 │
│ │
├────────────────────────────╯
│
◇ Configure skills now? (recommended)
│ Yes
│
◇ Homebrew recommended ──────────────────────────────────────────────────────────╮
│ │
│ Many skill dependencies are shipped via Homebrew. │
│ Without brew, you'll need to build from source or download releases manually. │
│ │
├─────────────────────────────────────────────────────────────────────────────────╯
│
◇ Show Homebrew install command?
│ No
│
◇ Preferred node manager for skill installs
│ ● npm
│ ○ pnpm
│ ○ bun
◆ Install missing skill dependencies
│ ◻ Skip for now (Continue without installing dependencies)
│ ◻ 🔐 1password
│ ◻ 📝 apple-notes
│ ◻ ⏰ apple-reminders
│ ◻ 🐻 bear-notes
│ ◻ 🐦 bird
│ ◻ 📰 blogwatcher
│ ◻ 🫐 blucli
│ ◻ 📸 camsnap
│ ◻ 🧩 clawhub
│ ◻ 🎛️ eightctl
│ ◻ ♊️ gemini
│ ◻ 🧲 gifgrep
│ ◻ 🐙 github
│ ◻ 🎮 gog
│ ◻ 📍 goplaces
│ ◻ 📧 himalaya
│ ◻ 📨 imsg
│ ◻ 📦 mcporter
│ ◻ 📊 model-usage
│ ◻ 🍌 nano-banana-pro
│ ◻ 📄 nano-pdf
│ ◻ 💎 obsidian
│ ◻ 🎙️ openai-whisper
│ ◻ 💡 openhue
│ ◻ 🧿 oracle
│ ◻ 🛵 ordercli
│ ◻ 👀 peekaboo
│ ◻ 🗣️ sag
│ ◻ 🌊 songsee
│ ◻ 🔊 sonoscli
│ ◻ 🧾 summarize
│ ◻ ✅ things-mac
│ ◻ 🎞️ video-frames
│ ◻ 📱 wacli
└
Step 8
Finally, you can enable hooks. We recommend selecting the bottom three options. In simple terms, hooks let you trigger actions automatically based on agent events and commands. It’s worth reading this page about hooks, and also taking a look at what’s possible with webhooks.
◆ Enable hooks?
│ ◻ Skip for now
│ ◼ 🚀 boot-md (Run BOOT.md on gateway startup)
│ ◼ 📝 command-logger (Log all command events to a centralized audit file)
│ ◼ 💾 session-memory (Save session context to memory when /new command is issued)
└
Step 9
You’ll now see a number of messages, but the most important part is the block below.
- First, write down the token value in the URL under ‘Web UI (with token)’.
- At the end, you’ll be asked how you want to hatch the bot. Are you using an external LLM provider such as OpenAI/Anthropic? Then press ‘Enter’ to select the default option ‘Hatch in TUI’ (Terminal User Interface). Are you using Ollama? Then select ‘Do this later’ and continue to the next section.
- If you choose ‘Do this later’, you’ll be asked whether you want to install a shell completion script. Select ‘Yes’ if you see this prompt.
◇ Control UI ─────────────────────────────────────────────────────────────────────╮
│ │
│ Web UI: http://127.0.0.1:18789/ │
│ Web UI (with token): │
│ http://127.0.0.1:18789/?token=e734015d3d31a19481f56875a3191c2 │
│ Gateway WS: ws://127.0.0.1:18789 │
│ Gateway: reachable │
│ Docs: https://docs.openclaw.ai/web/control-ui │
│ │
├──────────────────────────────────────────────────────────────────────────────────╯
│
◇ Start TUI (best option!) ─────────────────────────────────╮
│ │
│ This is the defining action that makes your agent you. │
│ Please take your time. │
│ The more you tell it, the better the experience will be. │
│ We will send: "Wake up, my friend!" │
│ │
├────────────────────────────────────────────────────────────╯
│
◇ Token ────────────────────────────────────────────────────────────────────────────────╮
│ │
│ Gateway token: shared auth for the Gateway + Control UI. │
│ Stored in: ~/.openclaw/openclaw.json (gateway.auth.token) or OPENCLAW_GATEWAY_TOKEN. │
│ Web UI stores a copy in this browser's localStorage (openclaw.control.settings.v1). │
│ Get the tokenized link anytime: openclaw dashboard --no-open │
│ │
├────────────────────────────────────────────────────────────────────────────────────────╯
│
◆ How do you want to hatch your bot?
│ ● Hatch in TUI (recommended)
│ ○ Open the Web UI
│ ○ Do this later
└
Step 10
OpenClaw would now like to learn more about who you want OpenClaw to be: what your bot’s name will be, what kind of “vibe” you want in your interactions, and what role it should fulfill. OpenClaw also wants to know who you are. You’ll therefore see a message like:
session agent:main:main
Wake up, my friend!
Hey! Just woke up in this fresh workspace. I'm a blank slate right now—who am I supposed to be? What's my name, vibe, or role? And who are you? Let's figure this out together. 🚀
For example, reply with something along the lines of:
“You are <name>. You are a helpful, highly skilled assistant. Your vibe in general is warm, supportive, friendly, with a sense of humour.
As for who I am, my name is <name>. I'm a <job description> and live in <city, country>. I generally like <interests, hobbies, w/e>. Please let me know if you'd like to know anything else about me or yourself. Also, anything else you'd recommend that I should know or tweak about you?"
Based on your first response, you’ll get a number of tips on what you can tweak, for example:
1. Proactivity prefs: HEARTBEAT.md is empty, so I'm not pinging you periodically (e.g., email checks, calendar nudges). Want me to set up light heartbeats (2-3x/day) for weather, urgent emails, or "hey, gym time?" Or cron for
precise stuff like "remind me to read at 8 PM"?
2. External actions: Per my soul (SOUL.md), I ask before emailing/tweeting/posting publicly. Cool? Or greenlight certain channels?
3. Voice fun: If TTS kicks in for stories or summaries, any fave voices? (E.g., warm British for sci-fi recaps.)
4. Your world: <questions based on what you said before>
That’s it! After this conversation, OpenClaw is good to go. When chatting with OpenClaw, you can change a lot simply by asking questions and answering the questions OpenClaw asks you in return. Ready? Then press Ctrl+C to exit the TUI.
Connecting to the OpenClaw dashboard (GUI) from your own computer/laptop
For this guide, we assume you’re installing OpenClaw on a VPS. For example, if you install OpenClaw on a VPS running Ubuntu without a desktop environment, you can’t simply open a browser to manage the OpenClaw dashboard. There’s a relatively simple and secure way to still access the OpenClaw dashboard: by setting up an SSH tunnel. You can do this easily from the terminal/PowerShell/WSL on your own computer or laptop with a single command:
ssh -N -L 18789:127.0.0.1:18789 -p 12345 username@123.123.123.123
Replace in this command:
- 12345 with the port number you use for SSH connections. Haven’t changed the port? Then use -p 22 (or omit it).
- username with the username you used to connect to your VPS.
- 123.123.123.123 with the IP address or a (sub)domain name that points to your VPS (e.g., claw.example.nl).
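If you open the tunnel often, it may help to keep the values in shell variables (the values below are the same placeholders used above; replace them with your own):

```shell
# Placeholders from the example above; replace with your own values
VPS_USER="username"
VPS_HOST="123.123.123.123"
SSH_PORT=12345          # use 22 if you haven't changed the SSH port

# -N: no remote command; -L: forward local port 18789 to the VPS loopback
TUNNEL="ssh -N -L 18789:127.0.0.1:18789 -p ${SSH_PORT} ${VPS_USER}@${VPS_HOST}"
echo "$TUNNEL"          # shows the full command; run it with: eval "$TUNNEL"
```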
Open your browser on the computer/laptop where you set up the SSH tunnel and go to 127.0.0.1:18789. You’ll now land in the OpenClaw dashboard—see the section below.
Using the OpenClaw dashboard
Logging in for the first time
When you open the OpenClaw dashboard via 127.0.0.1:18789, you’ll see an error message like in the screenshot below. This error occurs because you’re not including the token in the URL, for example: http://127.0.0.1:18789/?token=e734015d3d31a19481f56875a3191c2
This is an intentional choice: it’s easier to configure the token once and then always use the address 127.0.0.1:18789.

Click ‘Overview’ and paste the token you copied under ‘Gateway token’, then click ‘Connect’. The status will change to ‘Connected’ and you can use the dashboard without further issues.

Chatting with OpenClaw via the dashboard
To start a chat with OpenClaw, simply click ‘Chat’ in the left-hand menu and you can begin a chat session right away.

Note: out of the box, both the command-line TUI and the web dashboard will also show system messages in the chat that you won’t see in your conversations in a channel such as Telegram. You can safely ignore these messages as long as OpenClaw works properly via your chosen communication channel.
Adding/enabling existing skills
OpenClaw comes with a wide range of skills you can use out of the box, simply by enabling them. Click ‘Skills’ in the left-hand menu and enable the skills you want. Note that some require API keys from external systems to work.

OpenClaw security
OpenClaw is a powerful agent that can potentially pose a significant security risk. For example, if you connect OpenClaw to your work email, you are potentially trusting an AI agent—and the LLM connected to it—to manage your email correctly.
To use OpenClaw as safely as possible, we recommend the following steps:
- Read the security tips at https://docs.openclaw.ai/gateway/security
- Don’t install OpenClaw on a computer or laptop that contains personal/company data; instead, use a VPS, for example.
- Run a security audit of your OpenClaw setup:
openclaw security audit --deep
Linking a new communication channel
Want to add another communication channel to OpenClaw later—for example, WhatsApp? Then there are two easy options:
- Ask OpenClaw to add the communication channel, for example via the TUI, the web dashboard, or an existing communication channel.
- Run the onboarding again via the command
openclaw onboard
Using a local model (LLM) via Ollama
Ollama allows you to host an open-source LLM on your own hardware. This can reduce costs, and it keeps your data with you instead of sending conversations to an external LLM provider.
Installing Ollama is outside the scope of this guide, but we explain it in our article ‘hosting Deepseek yourself’.
Manual configuration (recommended)
Step 1
Open the OpenClaw configuration:
nano ~/.openclaw/openclaw.json
Step 2
Under models.providers, add this block and adjust it (there’s a good chance the block is not yet present in your configuration):
"models": {
"providers": {
"ollama": {
"baseUrl": "http://127.0.0.1:11434/v1",
"apiKey": "ollama-local",
"api": "openai-completions",
"models": [
{
"id": "gpt-oss:20b",
"name": "GPT oss 20b",
"reasoning": true,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 128000,
"maxTokens": 8192
}
]
}
}
},
"agents": {
"defaults": {
"model": {
"primary": "ollama/gpt-oss:20b"
},
"models": {
"ollama/gpt-oss:20b": {
"alias": "gpt-oss:20b"
}
},
"workspace": "/home/testtransip/.openclaw/workspace",
"compaction": {
"mode": "safeguard"
},
"maxConcurrent": 4,
"subagents": {
"maxConcurrent": 8
}
}
},
The baseUrl and api use Ollama’s OpenAI compatibility layer (Chat/Completions API). Replace the placeholders with the correct model ID/name (in this example gpt-oss:20b).
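A hand-edited openclaw.json is easy to break, and a config that no longer parses will keep the gateway from loading it. Before restarting, you can check that the file is still valid JSON; check_config below is our own small helper built on Python's stdlib JSON parser (any JSON validator works just as well):

```shell
# check_config FILE — reports whether FILE parses as JSON
check_config() {
  if python3 -m json.tool "$1" >/dev/null 2>&1; then
    echo "valid JSON: $1"
  else
    echo "INVALID JSON: $1"
  fi
}

# Demo on a temporary file; point it at ~/.openclaw/openclaw.json instead
demo=$(mktemp)
printf '{"models": {"providers": {}}}\n' > "$demo"
check_config "$demo"
rm -f "$demo"
```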
Step 3
Set your default model to Ollama:
openclaw models set ollama/gpt-oss:20b
If needed, replace gpt-oss:20b with the name of your model.
Launch via Ollama
Is Ollama already running on the VPS? Start the integration directly:
ollama launch openclaw
Or generate the configuration without starting OpenClaw:
ollama launch openclaw --config
OpenClaw reloads the configuration automatically. We recommend models such as gpt-oss:20b and gpt-oss:120b.
Using another AI model (LLM)
There are three options to use another AI model (LLM), ranging from easy to more advanced:
- Ask OpenClaw to change it for you.
- Run the onboarding again with the command:
openclaw onboard
- Via the command line: the three commands below, for example, add OpenAI’s gpt-5-mini under an alias, set it as the default, and restart the gateway to apply the changes.
openclaw models aliases add g5mini openai/gpt-5-mini
openclaw models set openai/gpt-5-mini
openclaw gateway restart
What’s next with OpenClaw?
To get started: talk to OpenClaw! Want to configure “something,” like a new skill? Simply ask OpenClaw about it and it will tell you what to do next.
Prefer a GUI? Just as easy: connect to the OpenClaw dashboard and go to the ‘Skills’ page.
Troubleshooting & Management
There are a number of useful commands for managing OpenClaw, particularly when something is wrong and you get no output from OpenClaw via the TUI, the chat in the web dashboard, or your chosen communication channel. In other cases, it’s easiest to simply talk to OpenClaw directly.
Update OpenClaw
openclaw update
OpenClaw health check
For a quick check, especially of whether the channels are OK:
openclaw health
OpenClaw’s status
Check OpenClaw’s status in significantly more detail than with the health command:
openclaw status
Gateway status
openclaw gateway status
Checking available OpenClaw models
openclaw models list
Checking OpenClaw model status
openclaw models status
Viewing logs
openclaw logs
OpenClaw doctor
Try to automatically fix issues, as long as they’re not related to your LLM configuration:
openclaw doctor --fix