
    5 SKILL.md Examples You Can Copy and Use Today

    Don't start from scratch. Here are 5 complete SKILL.md examples you can copy, customize, and start using in minutes.

    March 28, 2026 · 8 min read

    The fastest way to learn the SKILL.md format is to see real examples. These five skills are complete, working templates you can copy into your skills directory and start using immediately (they work with Claude Code, OpenClaw, Codex CLI, and other SKILL.md-compatible agents). Each one covers a common developer task and demonstrates different SKILL.md patterns.

    Example 1: Commit message writer

    This skill reads your staged git diff and writes a conventional commit message. It demonstrates the basics: a clear trigger description, numbered steps, and output format specification.

    ---
    name: commit-writer
    description: Writes conventional commit messages from staged changes. 
      Use when the user asks to commit, write a commit message, or says 
      "commit my changes."
    ---
    
    # Commit Message Writer
    
    When asked to write a commit message:
    
    1. Run `git diff --staged` to read the staged changes
    2. Identify the primary type of change:
       - feat: new feature
       - fix: bug fix
       - refactor: code restructuring without behavior change
       - docs: documentation only
       - chore: build, tooling, or dependency changes
       - test: adding or fixing tests
    3. Identify the scope from the changed files (e.g., auth, api, ui)
    4. Write a subject line under 72 characters: type(scope): description
    5. If the change is complex, add a body explaining why (not what)
    6. Flag any breaking changes with BREAKING CHANGE: in the footer
    
    ## Rules
    - Subject line is imperative mood ("add feature" not "added feature")
    - No period at the end of the subject line
    - Body wraps at 72 characters
    - If multiple logical changes are staged, suggest splitting into 
      separate commits
    

    What this demonstrates: Basic skill structure, using shell commands in instructions, and handling edge cases (multiple staged changes).
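    The subject-line rules above can be spot-checked mechanically. A minimal shell sketch, using a made-up subject line (not output from the skill):

```shell
# Hypothetical subject line a run of the skill might produce.
subject='feat(auth): add session refresh endpoint'

# Shape check: type(scope): description, with a known type.
echo "$subject" | grep -Eq '^(feat|fix|refactor|docs|chore|test)(\([a-z-]+\))?: ' \
  && echo "shape ok"

# Length check: the subject must stay under 72 characters.
[ "${#subject}" -lt 72 ] && echo "length ok"

# No period at the end of the subject line.
case "$subject" in
  *.) echo "trailing period: fix it" ;;
  *)  echo "no trailing period" ;;
esac
```

    The same checks make a decent commit-msg git hook if you want the rules enforced outside the skill as well.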

    Example 2: Code review checklist

    This skill runs a structured code review. It demonstrates organizing output by severity and including specific check categories.

    ---
    name: code-review
    description: Reviews code for bugs, security issues, and best 
      practices. Use when the user asks for a code review, mentions 
      reviewing changes, or says "check this code."
    ---
    
    # Code Review
    
    When asked to review code:
    
    1. Identify which files changed (check git diff or ask the user)
    2. Read each changed file completely
    
    ## Check for these issues
    
    ### Security (Critical)
    - SQL injection via string concatenation
    - XSS from unescaped user input
    - Authentication or authorization bypasses
    - Hardcoded secrets or API keys
    - Insecure deserialization
    
    ### Logic (Critical)
    - Off-by-one errors
    - Null/undefined access without checks
    - Race conditions in async code
    - Unhandled promise rejections
    - Incorrect boolean logic
    
    ### Performance (Warning)
    - N+1 database queries
    - Missing database indexes for query patterns
    - Unnecessary re-renders in React components
    - Large objects in memory that could be streamed
    
    ### Style (Suggestion)
    - Inconsistent naming conventions
    - Functions longer than 50 lines
    - Deeply nested conditionals (3+ levels)
    - Dead code or unused imports
    
    ## Output format
    
    Group findings by severity: Critical, Warning, Suggestion.
    For each finding:
    - File and line number
    - What the issue is
    - Why it matters
    - A concrete fix (show code)
    
    If no issues found, say so explicitly.
    

    What this demonstrates: Categorized instructions with severity levels, specific things to check (not vague "look for bugs"), and structured output format.
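    A quick mechanical pre-pass can surface some of the same Critical findings before the full review. A rough sketch of the hardcoded-secrets check (the regex is illustrative, not exhaustive):

```shell
# Throwaway source file with an obvious hardcoded key to scan.
src=$(mktemp)
cat > "$src" <<'EOF'
const apiKey = "sk-test-1234567890abcdef";
EOF

# Very rough secret scan: flag assignments to key/secret/token-ish names.
grep -inE '(api[_-]?key|secret|token)[[:space:]]*[:=]' "$src" \
  && echo "possible hardcoded secret -> Critical finding"
```

    A grep like this produces false positives, which is exactly why the skill has the model read each file and explain why a finding matters rather than just pattern-match.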

    Example 3: README generator

    This skill scans a project and generates documentation. It demonstrates reading project structure and adapting output based on what it finds.

    ---
    name: readme-gen
    description: Generates a README.md from the project structure. Use 
      when the user asks to write a README, document the project, or 
      says "generate docs."
    ---
    
    # README Generator
    
    When asked to generate a README:
    
    1. Scan the project root for:
       - package.json, Cargo.toml, pyproject.toml, go.mod (determine 
         language/framework)
       - Docker files (docker-compose.yml, Dockerfile)
       - CI config (.github/workflows/, .gitlab-ci.yml)
       - Environment files (.env.example)
       - License file
    
    2. Read the main entry point to understand what the project does
    
    3. Generate a README with these sections:
    
    ## Project Name
    One-paragraph description of what the project does and who it's for.
    
    ## Getting Started
    
    ### Prerequisites
    List runtime requirements found in the project.
    
    ### Installation
    Step-by-step based on the actual package manager and setup files.
    
    ### Running
    Based on scripts in package.json, Makefile, or common conventions.
    
    ### Environment Variables
    If .env.example exists, list each variable with a description.
    
    ## Tech Stack
    List frameworks and major dependencies found in package files.
    
    ## Project Structure
    Show the top-level directory structure with one-line descriptions.
    
    ## Contributing
    Standard contributing section if no CONTRIBUTING.md exists.
    
    ## License
    Based on the LICENSE file if present.
    
    ## Rules
    - Only include sections that apply to this project
    - All commands must be based on actual project files, not guesses
    - If something is unclear, note it rather than making it up
    

    What this demonstrates: Conditional logic (adapting to different languages/frameworks), reading actual project files instead of guessing, and structured output with sections.
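    The scan in step 1 is just a file-existence walk over known manifests. A minimal sketch against an invented throwaway project:

```shell
# Throwaway project root with a few of the files the skill looks for.
demo=$(mktemp -d)
touch "$demo/package.json" "$demo/Dockerfile" "$demo/.env.example"

# Check the same manifest list the skill uses; report only what exists.
for f in package.json Cargo.toml pyproject.toml go.mod Dockerfile .env.example LICENSE; do
  if [ -f "$demo/$f" ]; then
    echo "found: $f"
  fi
done
```

    Whatever is absent from the output simply drops the corresponding README section, which is how the "only include sections that apply" rule falls out of the scan.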

    Example 4: Test scaffolding

    This skill generates test files. It demonstrates detecting the existing test framework and matching project conventions.

    ---
    name: test-scaffold
    description: Generates test files for source code. Use when the 
      user asks to write tests, add test coverage, or mentions testing.
    ---
    
    # Test Scaffolding
    
    When asked to create tests:
    
    1. Detect the testing framework:
       - Check package.json for jest, vitest, mocha, @testing-library
       - Check for pytest, unittest in Python projects
       - Check for existing test files to see what's already in use
    
    2. Match existing test conventions:
       - File naming: .test.ts, .spec.ts, or __tests__/ directory
       - Import style: require vs import
       - Assertion style: expect().toBe() vs assert.equal()
       - Describe/it nesting patterns
    
    3. Read the source file to understand:
       - Exported functions and their signatures
       - Component props and behavior
       - Side effects and dependencies to mock
    
    4. Write tests covering:
       - Happy path for each exported function/component
       - Edge cases: null, undefined, empty string, empty array, 
         boundary values
       - Error cases: invalid inputs, network failures, timeouts
       - Type-specific: if TypeScript, test that types are enforced
    
    5. Add mocking as needed:
       - Mock external APIs and database calls
       - Mock timers for time-dependent code
       - Use the project's existing mock utilities if present
    
    ## Output
    - One test file per source file
    - Clear test names: "should [expected behavior] when [condition]"
    - Group related tests in describe blocks
    - Comments for non-obvious test logic
    

    What this demonstrates: Framework detection, matching existing conventions rather than imposing new ones, and comprehensive edge case coverage.
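    Framework detection in step 1 can be as simple as grepping package.json for known runners. A sketch with an invented manifest:

```shell
# Invented project manifest declaring vitest.
proj=$(mktemp -d)
cat > "$proj/package.json" <<'EOF'
{ "devDependencies": { "vitest": "^1.6.0", "@testing-library/react": "^14.0.0" } }
EOF

# Report every known runner that appears as a dependency key.
for fw in jest vitest mocha; do
  if grep -q "\"$fw\"" "$proj/package.json"; then
    echo "detected: $fw"
  fi
done
```

    Checking existing test files (step 1's third bullet) is the tiebreaker when a project declares more than one runner.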

    Example 5: Deployment checklist

    This skill walks through a deployment process. It demonstrates using disable-model-invocation for skills with side effects.

    ---
    name: deploy-check
    description: Pre-deployment checklist and verification. Use when 
      the user asks to deploy, mentions shipping to production, or 
      says "deploy check."
    disable-model-invocation: true
    ---
    
    # Deployment Checklist
    
    This skill should ONLY run when explicitly invoked with /deploy-check. 
    Never run automatically.
    
    When invoked:
    
    ## Pre-deploy checks
    1. Run the test suite: identify the test command from package.json 
       or Makefile and execute it
    2. Check for uncommitted changes: `git status`
    3. Verify the branch: confirm we're on the correct deploy branch 
       (usually main or release/*)
    4. Check for pending migrations: look in db/migrations/ or similar 
       for files not yet applied
    5. Review environment variables: compare .env.example against the 
       deploy target's config
    
    ## Report
    Present findings as a checklist:
    - [x] Tests passing (or [!] 3 tests failing)
    - [x] No uncommitted changes (or [!] 5 files modified)
    - [x] On correct branch: main
    - [x] No pending migrations (or [!] 2 migrations pending)
    - [x] Environment variables match
    
    ## Decision
    If all checks pass: "Ready to deploy."
    If any checks fail: list the failures and say "Not ready. Fix 
    these issues before deploying."
    
    Do NOT actually deploy. Only check readiness.
    

    What this demonstrates: The disable-model-invocation: true flag for manual-only skills, safety instructions (don't actually deploy), and structured checklist output.
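    Check 2 is the easiest to see in isolation: `git status --porcelain` prints nothing on a clean tree, so any output at all flips the report line from [x] to [!]. A sketch in a throwaway repo:

```shell
# Throwaway repo with one uncommitted (untracked) file.
repo=$(mktemp -d)
cd "$repo"
git init -q .
touch notes.txt

# Non-empty porcelain output means the tree is dirty.
if [ -n "$(git status --porcelain)" ]; then
  echo "[!] uncommitted changes present"
else
  echo "[x] no uncommitted changes"
fi
```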

    Using these examples

    To use any of these skills:

    1. Create a folder in your skills directory:
    mkdir -p ~/.claude/skills/skill-name
    
    2. Create a SKILL.md file and paste the example
    3. Customize the instructions to match your project
    4. Start a new Claude Code session
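    Put together, installing the first example looks like this (the skill body below is a stub; paste the full instructions from Example 1):

```shell
# Install the commit-writer example as a personal skill.
# ~/.claude/skills is Claude Code's default; other agents use their own path.
mkdir -p ~/.claude/skills/commit-writer

cat > ~/.claude/skills/commit-writer/SKILL.md <<'EOF'
---
name: commit-writer
description: Writes conventional commit messages from staged changes.
---

# Commit Message Writer
(paste the full instructions from Example 1 here)
EOF
```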

    These examples work with Claude Code out of the box and are compatible with Codex CLI, Gemini CLI, and other agents that support the SKILL.md format.

    For more on the file format, see the SKILL.md Format Reference. To browse pre-built skills you can install instead of writing your own, check the Agensi marketplace.
