Why Build Custom OpenClaw Skills?
OpenClaw ships with a solid set of built-in skills, and the ClawHub marketplace has hundreds more contributed by the community. But there will always be workflows specific to your needs that do not exist yet. Maybe you need a skill that pulls data from an internal API, or one that automates a niche task in your industry.
Building custom skills is where OpenClaw really shines. The Skills API is clean, well-documented, and designed to make development fast. I have built over a dozen skills for my own workflows, and in this guide I will walk you through the entire process from scaffolding to publishing.
If you are new to OpenClaw, I recommend starting with our introduction to OpenClaw and the best skills and plugins guide to understand the ecosystem first.
Understanding the Skill Architecture
Every OpenClaw skill follows a consistent structure. At its core, a skill is a module that:
- Declares its capabilities via a manifest file
- Exposes tool functions that the LLM can call
- Handles permissions for sensitive operations
- Returns structured results back to the agent loop
Here is the directory structure for a typical skill:
my-custom-skill/
  manifest.yaml            # Skill metadata and configuration
  index.ts                 # Main entry point
  tools/
    fetch-data.ts          # Individual tool implementations
    transform.ts
    output.ts
  schemas/
    input.json             # JSON schemas for tool inputs
    output.json            # JSON schemas for tool outputs
  tests/
    fetch-data.test.ts     # Unit tests
    integration.test.ts    # Integration tests
  README.md                # Documentation for ClawHub
Step 1: Scaffold Your Skill
OpenClaw's CLI includes a scaffolding command that sets up the boilerplate:
# Create a new skill project
openclaw skill init my-weather-skill
# Navigate to the skill directory
cd ~/.openclaw/skills/my-weather-skill
This generates the full directory structure with placeholder files. Let us build a practical example: a weather skill that fetches forecasts and provides natural language summaries.
Step 2: Define the Manifest
The manifest is the most important file in your skill. It tells OpenClaw what your skill does, what permissions it needs, and how the LLM should interact with it.
# manifest.yaml
name: "weather-forecast"
version: "1.0.0"
description: "Fetches weather forecasts and provides natural language summaries"
author: "your-username"
license: "MIT"

# Minimum OpenClaw version required
engine: ">=2.4.0"

# LLM-facing description (this is what the agent sees)
agent_description: |
  Use this skill to get weather forecasts for any location.
  Available tools:
  - get_forecast: Get a multi-day weather forecast
  - get_current: Get current weather conditions
  - weather_summary: Get a natural language weather summary

# Required permissions
permissions:
  - network:api.weather.gov         # Allow HTTP requests to this domain
  - network:api.openweathermap.org
  - storage:read                    # Read cached data
  - storage:write                   # Write to cache

# Configuration schema
config:
  api_key:
    type: string
    required: true
    description: "OpenWeatherMap API key"
    env_var: "OPENWEATHER_API_KEY"
  default_units:
    type: string
    required: false
    default: "imperial"
    enum: ["imperial", "metric"]
Permissions Deep Dive
The permission system is one of OpenClaw's strongest security features. Skills cannot do anything they have not declared in the manifest. Here are the available permission types:
| Permission | Description | Example |
|---|---|---|
| `network:{domain}` | HTTP requests to a specific domain | `network:api.github.com` |
| `storage:read` | Read from OpenClaw's key-value store | Cache lookups |
| `storage:write` | Write to OpenClaw's key-value store | Cache writes |
| `filesystem:read:{path}` | Read files from a specific path | `filesystem:read:~/Documents` |
| `filesystem:write:{path}` | Write files to a specific path | `filesystem:write:~/output` |
| `exec:{command}` | Execute system commands | `exec:git` |
| `notification` | Send desktop notifications | Alert on completion |
Users are prompted to approve permissions when they first install a skill. Requesting only what you need builds trust.
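The two storage permissions exist largely to support caching API responses between invocations. This guide does not show the SDK's actual storage API, so the `KVStore` interface below is a stand-in I am assuming for illustration; the TTL-caching pattern itself is the point:

```typescript
// Illustrative only: KVStore is an assumed stand-in for the storage API
// that the storage:read / storage:write permissions gate.
interface KVStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

// Cache the result of an expensive fetch under `key` for `ttlMs` milliseconds.
async function withCache<T>(
  store: KVStore,
  key: string,
  ttlMs: number,
  fetchFn: () => Promise<T>,
): Promise<T> {
  const cached = await store.get(key);
  if (cached !== null) {
    const { expires, value } = JSON.parse(cached);
    if (Date.now() < expires) return value as T; // cache hit, still fresh
  }
  const value = await fetchFn();
  await store.set(key, JSON.stringify({ expires: Date.now() + ttlMs, value }));
  return value;
}
```

Caching weather responses for even a few minutes keeps repeated agent queries from burning through API quota.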
Step 3: Implement Your Tools
Each tool is a function that the LLM can call. Here is the implementation for our weather skill:
// tools/get-forecast.ts
import { Tool, ToolInput, ToolOutput } from '@openclaw/sdk';
import { z } from 'zod';
import { transformForecast } from './transform';

// Define the input schema using Zod
const ForecastInput = z.object({
  location: z.string().describe('City name or zip code'),
  days: z.number().min(1).max(7).default(3).describe('Number of forecast days'),
  units: z.enum(['imperial', 'metric']).optional(),
});

// Define the tool
export const getForecast: Tool = {
  name: 'get_forecast',
  description: 'Get a multi-day weather forecast for a location',
  inputSchema: ForecastInput,
  async execute(input: ToolInput<typeof ForecastInput>): Promise<ToolOutput> {
    const { location, days, units } = input;
    const apiKey = this.config.api_key;
    const unitSystem = units || this.config.default_units;

    try {
      // Fetch geocoding data first
      const geoUrl = `https://api.openweathermap.org/geo/1.0/direct?q=${encodeURIComponent(location)}&limit=1&appid=${apiKey}`;
      const geoResponse = await this.http.get(geoUrl);

      if (!geoResponse.data.length) {
        return {
          success: false,
          error: `Location "${location}" not found`,
        };
      }

      const { lat, lon, name, country } = geoResponse.data[0];

      // Fetch forecast (the API returns 8 three-hour entries per day)
      const forecastUrl = `https://api.openweathermap.org/data/2.5/forecast?lat=${lat}&lon=${lon}&cnt=${days * 8}&units=${unitSystem}&appid=${apiKey}`;
      const forecast = await this.http.get(forecastUrl);

      // Transform and return
      return {
        success: true,
        data: {
          location: `${name}, ${country}`,
          coordinates: { lat, lon },
          days: transformForecast(forecast.data, days),
          units: unitSystem,
        },
      };
    } catch (error) {
      const message = error instanceof Error ? error.message : String(error);
      return {
        success: false,
        error: `Failed to fetch forecast: ${message}`,
      };
    }
  },
};
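The execute function delegates to a transformForecast helper, which lives in tools/transform.ts in the directory layout from earlier. Here is one plausible sketch of it, assuming the OpenWeatherMap /data/2.5/forecast response shape (a `list` of three-hour entries, eight per day); the aggregation strategy here is my own choice, not something the API prescribes:

```typescript
// tools/transform.ts (sketch)
// ForecastEntry mirrors the fields of the OpenWeatherMap 5-day/3-hour
// forecast response that this skill actually uses.
interface ForecastEntry {
  dt: number;                             // unix seconds (UTC)
  main: { temp: number };
  weather: { description: string }[];
}

interface DailySummary {
  date: string;      // YYYY-MM-DD
  high: number;
  low: number;
  condition: string; // condition of the first entry that day
}

// Group 3-hourly entries by UTC date and summarize each day.
export function transformForecast(
  raw: { list: ForecastEntry[] },
  days: number,
): DailySummary[] {
  const byDate = new Map<string, ForecastEntry[]>();
  for (const entry of raw.list) {
    const date = new Date(entry.dt * 1000).toISOString().slice(0, 10);
    if (!byDate.has(date)) byDate.set(date, []);
    byDate.get(date)!.push(entry);
  }
  return [...byDate.entries()].slice(0, days).map(([date, entries]) => ({
    date,
    high: Math.max(...entries.map((e) => e.main.temp)),
    low: Math.min(...entries.map((e) => e.main.temp)),
    condition: entries[0].weather[0]?.description ?? 'unknown',
  }));
}
```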
The Main Entry Point
The index.ts file registers all your tools:
// index.ts
import { Skill } from '@openclaw/sdk';
import { getForecast } from './tools/get-forecast';
import { getCurrentWeather } from './tools/get-current';
import { weatherSummary } from './tools/weather-summary';

const weatherSkill: Skill = {
  tools: [getForecast, getCurrentWeather, weatherSummary],

  // Optional lifecycle hooks
  async onInstall(config) {
    // Validate API key on install (validateApiKey is a small helper you
    // define yourself, e.g. one that makes a cheap test request)
    const valid = await validateApiKey(config.api_key);
    if (!valid) throw new Error('Invalid OpenWeatherMap API key');
  },

  async onActivate() {
    // Called when skill is loaded into an agent session
    console.log('Weather skill activated');
  },
};

export default weatherSkill;
Step 4: Add Input Validation
Robust input validation prevents the LLM from sending malformed requests. OpenClaw uses Zod schemas for this, but you should add runtime checks too:
async execute(input: ToolInput<typeof ForecastInput>): Promise<ToolOutput> {
  // Additional runtime validation
  if (input.location.length > 200) {
    return { success: false, error: 'Location string too long' };
  }

  // Sanitize input to prevent injection
  const sanitizedLocation = input.location.replace(/[<>{}]/g, '');

  // ... rest of implementation
}
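Extracted into a standalone helper, those checks might look like this (the length limit and stripped character set are illustrative, so tune them to what your API actually accepts):

```typescript
const MAX_LOCATION_LENGTH = 200; // illustrative limit

// Validate and sanitize a location string before it reaches the API.
// Returns a discriminated union so the caller can surface the error
// back to the agent as a structured result.
function sanitizeLocation(
  location: string,
): { ok: true; value: string } | { ok: false; error: string } {
  if (location.trim().length === 0) {
    return { ok: false, error: 'Location must not be empty' };
  }
  if (location.length > MAX_LOCATION_LENGTH) {
    return { ok: false, error: 'Location string too long' };
  }
  // Strip characters that could be used for markup or template injection
  return { ok: true, value: location.replace(/[<>{}]/g, '') };
}
```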
Step 5: Write Tests
OpenClaw provides a testing framework that simulates the agent environment:
// tests/get-forecast.test.ts
import { SkillTestHarness } from '@openclaw/testing';
import weatherSkill from '../index';

describe('Weather Forecast Skill', () => {
  let harness: SkillTestHarness;

  beforeEach(() => {
    harness = new SkillTestHarness(weatherSkill, {
      config: {
        api_key: process.env.TEST_WEATHER_API_KEY,
        default_units: 'imperial',
      },
    });
  });

  test('fetches forecast for valid city', async () => {
    const result = await harness.callTool('get_forecast', {
      location: 'Austin, TX',
      days: 3,
    });
    expect(result.success).toBe(true);
    expect(result.data.location).toContain('Austin');
    expect(result.data.days).toHaveLength(3);
  });

  test('handles invalid location gracefully', async () => {
    const result = await harness.callTool('get_forecast', {
      location: 'xyznonexistent12345',
      days: 1,
    });
    expect(result.success).toBe(false);
    expect(result.error).toContain('not found');
  });

  test('respects per-task budget limits', async () => {
    harness.setBudget({ per_task_limit_usd: 0.001 });
    const result = await harness.callTool('get_forecast', {
      location: 'London',
      days: 7,
    });
    expect(result.success).toBe(false);
    expect(result.error).toContain('budget');
  });
});
Run your tests with:
openclaw skill test
# Or with coverage
openclaw skill test --coverage
Step 6: Test with a Live Agent
Before publishing, test your skill with an actual agent session:
# Load your skill in development mode
openclaw skill dev ./my-weather-skill
# In another terminal, start a chat session
openclaw chat --skills weather-forecast
# Try it out
> What's the weather like in San Francisco this week?
Development mode hot-reloads your skill when you save changes, so you can iterate quickly.
Step 7: Publish to ClawHub
Once your skill is working reliably, publish it to ClawHub so others can use it:
# Validate your skill before publishing
openclaw skill validate
# Login to ClawHub
openclaw hub login
# Publish
openclaw hub publish
# Output:
# Published weather-forecast@1.0.0 to ClawHub
# URL: https://clawhub.dev/skills/your-username/weather-forecast
ClawHub Listing Best Practices
A good ClawHub listing includes:
- Clear description of what the skill does
- Screenshots or GIFs showing it in action
- Configuration docs for any required API keys
- Usage examples that people can copy-paste
- Changelog for version updates
Real-World Skill Examples
Here are some skills I have built that you can reference for inspiration:
GitHub PR Reviewer Skill
// Automatically reviews pull requests and posts comments
const prReviewTool: Tool = {
  name: 'review_pr',
  description: 'Review a GitHub pull request and provide feedback',
  inputSchema: z.object({
    repo: z.string().describe('owner/repo format'),
    pr_number: z.number().describe('Pull request number'),
    focus: z.enum(['security', 'performance', 'style', 'all']).default('all'),
  }),
  // ...
};
Database Query Skill
// Natural language to SQL queries with safety checks
const queryTool: Tool = {
  name: 'query_database',
  description: 'Execute a read-only database query from natural language',
  inputSchema: z.object({
    question: z.string().describe('Natural language question about the data'),
    database: z.string().describe('Database identifier'),
  }),
  // ...
};
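The "safety checks" in that comment mostly amount to refusing anything that is not a single read-only statement. A rough sketch of such a guard follows; a denylist like this is illustrative only, and it is no substitute for connecting with a read-only database role, which is the real safety net:

```typescript
// Statement keywords that indicate a write or DDL operation.
const FORBIDDEN =
  /\b(insert|update|delete|drop|alter|create|truncate|grant|revoke)\b/i;

// Accept only a single SELECT (or WITH ... SELECT) statement.
function isReadOnlyQuery(sql: string): boolean {
  const trimmed = sql.trim().replace(/;+\s*$/, ''); // drop trailing semicolons
  if (trimmed.includes(';')) return false;          // single statement only
  if (!/^(select|with)\b/i.test(trimmed)) return false;
  return !FORBIDDEN.test(trimmed);
}
```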
For more ideas, check the best OpenClaw skills currently on ClawHub.
Tips for Writing Great Skills
After building many skills, here are the patterns I keep coming back to:
1. Make Tool Descriptions Crystal Clear
The LLM decides which tool to call based on its description. Be specific:
// Bad: vague description
description: 'Get data from the API'
// Good: specific and actionable
description: 'Fetch the current stock price for a given ticker symbol. Returns price, change, and volume.'
2. Return Structured Data
Give the LLM structured data it can reason about:
// Bad: returning a raw string
return { success: true, data: "The temperature is 72F and sunny" };

// Good: returning structured data the LLM can interpret
return {
  success: true,
  data: {
    temperature: 72,
    unit: "F",
    condition: "sunny",
    humidity: 45,
    wind_speed: 8,
  },
};
3. Handle Errors Gracefully
Never throw unhandled exceptions. Always return meaningful error messages:
return {
  success: false,
  error: 'API rate limit exceeded. Please try again in 60 seconds.',
  retryable: true,
  retry_after_ms: 60000,
};
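On the calling side, the retryable and retry_after_ms fields let an agent loop back off and try again instead of failing outright. Here is a minimal sketch; the ToolResult shape mirrors the examples in this guide, but the retry loop itself is illustrative rather than part of the OpenClaw SDK:

```typescript
// Result shape matching the error example above (assumed, not the SDK's).
interface ToolResult {
  success: boolean;
  data?: unknown;
  error?: string;
  retryable?: boolean;
  retry_after_ms?: number;
}

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Re-run a tool call while it reports a retryable failure, honoring the
// suggested backoff, up to maxAttempts total attempts.
async function callWithRetry(
  run: () => Promise<ToolResult>,
  maxAttempts = 3,
): Promise<ToolResult> {
  let result = await run();
  for (
    let attempt = 1;
    attempt < maxAttempts && !result.success && result.retryable;
    attempt++
  ) {
    await sleep(result.retry_after_ms ?? 1000);
    result = await run();
  }
  return result;
}
```

Non-retryable failures (bad input, missing permissions) fall straight through, which is exactly what you want: retrying them would just waste the user's budget.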
If you want to level up your coding practices for building skills, I highly recommend Clean Code by Robert C. Martin. The principles around function design and error handling directly apply to skill development.
For deeper understanding of how LLMs interact with tools, Prompt Engineering for Generative AI is an excellent resource.
What's Next for OpenClaw Skills
The OpenClaw team has hinted at several upcoming features for skill developers:
- Skill chaining: Skills that can call other skills directly
- Shared state: Persistent memory across skill invocations
- Visual builder: A GUI for building simple skills without code
- Marketplace analytics: Usage stats and feedback for published skills
The AI agent ecosystem is evolving rapidly, and OpenClaw's open-source approach means the community drives what gets built next.
Wrapping Up
Building custom OpenClaw skills is one of the most rewarding things you can do in the AI agent space right now. The barrier to entry is low, the SDK is well-designed, and the community is growing fast. Whether you are automating a personal workflow or building a skill that thousands will use, the process is straightforward.
Start small, test thoroughly, and publish early. The best skills I have seen on ClawHub started as simple utilities that solved one problem well.
Built something cool? Share your OpenClaw skill on X (@wikiwayne) -- I love seeing what the community creates.
Recommended Gear
These are products I personally recommend. Click to view on Amazon.
- Clean Code by Robert C. Martin
- AI Engineering by Chip Huyen
- Prompt Engineering for Generative AI
- Logitech MX Keys S Wireless
- ASUS ProArt PA279CRV 27" 4K
- Samsung T7 Portable SSD 1TB
This article contains affiliate links. As an Amazon Associate I earn from qualifying purchases. See our full disclosure.
