5 SKILL.md Examples You Can Copy and Use Today
Don't start from scratch. Here are 5 complete SKILL.md examples you can copy, customize, and start using in minutes.
The fastest way to learn the SKILL.md format is to see real examples. These five skills are complete, working templates you can copy into your skills directory (they work with Claude Code, OpenClaw, Codex CLI, and other SKILL.md-compatible agents) and start using immediately. Each one covers a common developer task and demonstrates a different SKILL.md pattern.
Quick Answer: SKILL.md examples automate developer tasks like writing commit messages, performing code reviews, generating READMEs, and scaffolding tests by giving AI agents structured instructions with clear trigger descriptions.
Example 1: Commit message writer
This skill reads your staged git diff and writes a conventional commit message. It demonstrates the basics: a clear trigger description, numbered steps, and output format specification.
---
name: commit-writer
description: Writes conventional commit messages from staged changes.
Use when the user asks to commit, write a commit message, or says
"commit my changes."
---
# Commit Message Writer
When asked to write a commit message:
1. Run `git diff --staged` to read the staged changes
2. Identify the primary type of change:
- feat: new feature
- fix: bug fix
- refactor: code restructuring without behavior change
- docs: documentation only
- chore: build, tooling, or dependency changes
- test: adding or fixing tests
3. Identify the scope from the changed files (e.g., auth, api, ui)
4. Write a subject line under 72 characters: type(scope): description
5. If the change is complex, add a body explaining why (not what)
6. Flag any breaking changes with BREAKING CHANGE: in the footer
## Rules
- Subject line is imperative mood ("add feature" not "added feature")
- No period at the end of the subject line
- Body wraps at 72 characters
- If multiple logical changes are staged, suggest splitting into
separate commits
What this demonstrates: Basic skill structure, using shell commands in instructions, and handling edge cases (multiple staged changes).
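The subject-line rules above are mechanical enough to check by hand. A minimal shell sketch, using a hypothetical subject line (not taken from a real repository):

```shell
# Hypothetical subject line; in practice this comes from the agent's draft
subject="feat(auth): add refresh-token rotation"

# Rule: subject line under 72 characters
test "${#subject}" -le 72 || echo "subject too long"

# Rule: no period at the end of the subject line
case "$subject" in
  *.) echo "drop the trailing period" ;;
  *)  echo "subject OK: $subject" ;;
esac
```

The imperative-mood rule is harder to automate; that part stays in the skill's instructions for the agent to follow.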
Example 2: Code review checklist
This skill runs a structured code review. It demonstrates organizing output by severity and including specific check categories.
---
name: code-review
description: Reviews code for bugs, security issues, and best
practices. Use when the user asks for a code review, mentions
reviewing changes, or says "check this code."
---
# Code Review
When asked to review code:
1. Identify which files changed (check git diff or ask the user)
2. Read each changed file completely
## Check for these issues
### Security (Critical)
- SQL injection via string concatenation
- XSS from unescaped user input
- Authentication or authorization bypasses
- Hardcoded secrets or API keys
- Insecure deserialization
### Logic (Critical)
- Off-by-one errors
- Null/undefined access without checks
- Race conditions in async code
- Unhandled promise rejections
- Incorrect boolean logic
### Performance (Warning)
- N+1 database queries
- Missing database indexes for query patterns
- Unnecessary re-renders in React components
- Large objects in memory that could be streamed
### Style (Suggestion)
- Inconsistent naming conventions
- Functions longer than 50 lines
- Deeply nested conditionals (3+ levels)
- Dead code or unused imports
## Output format
Group findings by severity: Critical, Warning, Suggestion.
For each finding:
- File and line number
- What the issue is
- Why it matters
- A concrete fix (show code)
If no issues found, say so explicitly.
What this demonstrates: Categorized instructions with severity levels, specific things to check (not vague "look for bugs"), and structured output format.
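A reviewer following this skill reads each changed file, but two of the Critical checks can be roughed out mechanically first. A sketch using grep against a hypothetical sample file (the patterns are illustrative, not exhaustive; a real review still reads the code):

```shell
# Hypothetical sample with a SQL-injection smell (string concatenation)
cat > /tmp/sample.py <<'EOF'
query = "SELECT * FROM users WHERE id=" + user_id
EOF

# Check: SQL built by string concatenation
grep -n 'SELECT .*" *+' /tmp/sample.py && echo "Critical: possible SQL injection via concatenation"

# Check: hardcoded secrets (none in this sample)
grep -nE '(api_key|secret|password) *= *"' /tmp/sample.py || echo "no hardcoded secrets found"
```

Each hit maps onto the output format above: file and line number come from `grep -n`, and the fix (a parameterized query) is shown as code.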
Example 3: README generator
This skill scans a project and generates documentation. It demonstrates reading project structure and adapting output based on what it finds.
---
name: readme-gen
description: Generates a README.md from the project structure. Use
when the user asks to write a README, document the project, or
says "generate docs."
---
# README Generator
When asked to generate a README:
1. Scan the project root for:
- package.json, Cargo.toml, pyproject.toml, go.mod (determine
language/framework)
- Docker files (docker-compose.yml, Dockerfile)
- CI config (.github/workflows/, .gitlab-ci.yml)
- Environment files (.env.example)
- License file
2. Read the main entry point to understand what the project does
3. Generate a README with these sections:
## Project Name
One-paragraph description of what the project does and who it's for.
## Getting Started
### Prerequisites
List runtime requirements found in the project.
### Installation
Step-by-step based on the actual package manager and setup files.
### Running
Based on scripts in package.json, Makefile, or common conventions.
### Environment Variables
If .env.example exists, list each variable with a description.
## Tech Stack
List frameworks and major dependencies found in package files.
## Project Structure
Show the top-level directory structure with one-line descriptions.
## Contributing
Standard contributing section if no CONTRIBUTING.md exists.
## License
Based on the LICENSE file if present.
## Rules
- Only include sections that apply to this project
- All commands must be based on actual project files, not guesses
- If something is unclear, note it rather than making it up
What this demonstrates: Conditional logic (adapting to different languages/frameworks), reading actual project files instead of guessing, and structured output with sections.
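Step 1 of this skill, the project scan, can be sketched as a simple existence check. This example runs against a throwaway directory (/tmp/readme-demo) so it is self-contained:

```shell
# Throwaway project root standing in for a real one
mkdir -p /tmp/readme-demo && cd /tmp/readme-demo
touch package.json .env.example   # pretend this is a Node project

# Scan for the manifest and config files the skill looks for
for f in package.json Cargo.toml pyproject.toml go.mod Dockerfile .env.example LICENSE; do
  [ -e "$f" ] && echo "found: $f"
done
true  # don't let the last [ -e ] miss become the script's exit status
```

Which files turn up decides which README sections apply, which is the conditional logic the skill relies on.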
Example 4: Test scaffolding
This skill generates test files. It demonstrates detecting the existing test framework and matching project conventions.
---
name: test-scaffold
description: Generates test files for source code. Use when the
user asks to write tests, add test coverage, or mentions testing.
---
# Test Scaffolding
When asked to create tests:
1. Detect the testing framework:
- Check package.json for jest, vitest, mocha, @testing-library
- Check for pytest, unittest in Python projects
- Check for existing test files to see what's already in use
2. Match existing test conventions:
- File naming: .test.ts, .spec.ts, or __tests__/ directory
- Import style: require vs import
- Assertion style: expect().toBe() vs assert.equal()
- Describe/it nesting patterns
3. Read the source file to understand:
- Exported functions and their signatures
- Component props and behavior
- Side effects and dependencies to mock
4. Write tests covering:
- Happy path for each exported function/component
- Edge cases: null, undefined, empty string, empty array,
boundary values
- Error cases: invalid inputs, network failures, timeouts
- Type-specific: if TypeScript, test that types are enforced
5. Add mocking as needed:
- Mock external APIs and database calls
- Mock timers for time-dependent code
- Use the project's existing mock utilities if present
## Output
- One test file per source file
- Clear test names: "should [expected behavior] when [condition]"
- Group related tests in describe blocks
- Comments for non-obvious test logic
What this demonstrates: Framework detection, matching existing conventions rather than imposing new ones, and comprehensive edge case coverage.
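The framework-detection step can be sketched as a dependency grep. This uses a throwaway package.json so the example runs standalone; a real detector would parse the JSON rather than pattern-match it:

```shell
# Hypothetical package.json declaring vitest
printf '{ "devDependencies": { "vitest": "^1.0.0" } }\n' > /tmp/package.json

# Check for each known test runner by its quoted dependency name
for fw in jest vitest mocha; do
  grep -q "\"$fw\"" /tmp/package.json && echo "detected: $fw"
done
true  # a non-matching last framework should not fail the script
```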
Example 5: Deployment checklist
This skill walks through a deployment process. It demonstrates using disable-model-invocation for skills with side effects.
---
name: deploy-check
description: Pre-deployment checklist and verification. Use when
the user asks to deploy, mentions shipping to production, or
says "deploy check."
disable-model-invocation: true
---
# Deployment Checklist
This skill should ONLY run when explicitly invoked with /deploy-check.
Never run automatically.
When invoked:
## Pre-deploy checks
1. Run the test suite: identify the test command from package.json
or Makefile and execute it
2. Check for uncommitted changes: `git status`
3. Verify the branch: confirm we're on the correct deploy branch
(usually main or release/*)
4. Check for pending migrations: look in db/migrations/ or similar
for files not yet applied
5. Review environment variables: compare .env.example against the
deploy target's config
## Report
Present findings as a checklist:
- [x] Tests passing (or [!] 3 tests failing)
- [x] No uncommitted changes (or [!] 5 files modified)
- [x] On correct branch: main
- [x] No pending migrations (or [!] 2 migrations pending)
- [x] Environment variables match
## Decision
If all checks pass: "Ready to deploy."
If any checks fail: list the failures and say "Not ready. Fix
these issues before deploying."
Do NOT actually deploy. Only check readiness.
What this demonstrates: The disable-model-invocation: true flag for manual-only skills, safety instructions (don't actually deploy), and structured checklist output.
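Check 5 above (comparing .env.example against the deploy target's config) can be sketched like this, with two throwaway files standing in for the real ones:

```shell
# Hypothetical files; in a real run these are .env.example and the deploy config
printf 'DATABASE_URL=\nAPI_KEY=\n' > /tmp/env.example
printf 'DATABASE_URL=postgres://localhost/app\n' > /tmp/env.deploy

# For each variable the project expects, confirm the deploy target defines it
cut -d= -f1 /tmp/env.example | while read -r var; do
  if grep -q "^$var=" /tmp/env.deploy; then
    echo "[x] $var set on deploy target"
  else
    echo "[!] $var missing on deploy target"
  fi
done
```

Any `[!]` line feeds straight into the "Not ready" decision in the report.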
Using these examples
To use any of these skills:
- Create a folder in your skills directory:
mkdir -p ~/.claude/skills/skill-name
- Create a SKILL.md file and paste the example
- Customize the instructions to match your project
- Start a new Claude Code session
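The steps above as a runnable sketch. It writes to /tmp so it is safe to try; for a real install, use ~/.claude/skills/ instead, and paste the full example rather than just the frontmatter shown here:

```shell
# 1. Create the skill folder (swap /tmp/demo-skills for ~/.claude/skills)
mkdir -p /tmp/demo-skills/commit-writer

# 2. Create SKILL.md and paste the example (frontmatter only, for brevity)
cat > /tmp/demo-skills/commit-writer/SKILL.md <<'EOF'
---
name: commit-writer
description: Writes conventional commit messages from staged changes.
  Use when the user asks to commit or write a commit message.
---
EOF

# 3. Customize the instructions, then start a new Claude Code session
head -n 3 /tmp/demo-skills/commit-writer/SKILL.md
```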
These examples work with Claude Code out of the box and are compatible with Codex CLI, Gemini CLI, and other agents that support the SKILL.md format.
For more on the file format, see the SKILL.md Format Reference. To browse pre-built skills you can install instead of writing your own, check the Agensi marketplace.