prompt-engineer
by Roy Yuen
Professional prompt engineering patterns for building robust, secure, and production-ready LLM applications.
About This Skill
Master the Art of Prompt Engineering
Building high-performance LLM applications requires more than basic instructions. This skill equips your AI agent with a sophisticated framework for designing, debugging, and optimizing prompts across any major model provider. It mitigates common failure modes such as model drift, parsing errors, and hallucination by applying industry-standard engineering patterns.
What it does
- Architectural Design: Implements advanced system prompt structures, including role anchoring, constraint blocks, and persona tuning.
- Precision Control: Utilizes few-shot prompting and chain-of-thought (CoT) reasoning to ensure logical consistency and format compliance.
- Agentic Workflows: Supports complex patterns like ReAct (Reasoning + Acting), Plan-and-Execute, and reflection loops for autonomous task completion.
- Reliable Outputs: Enforces structured data (JSON/XML) and implements robust defense mechanisms against prompt injection and jailbreaking.
- Context Management: Provides strategies for RAG (Retrieval-Augmented Generation), token budgeting, and conversation summarization.
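To make the patterns above concrete, here is a minimal sketch of a system prompt assembled from role anchoring, a constraint block, and few-shot examples. The labels, constraints, and `build_prompt` helper are illustrative assumptions, not part of the skill itself or any provider's API.

```python
import json

# Illustrative components: a role anchor, a constraint block, and
# few-shot examples demonstrating the expected input/output format.
ROLE = "You are a sentiment classifier. Respond with JSON only."
CONSTRAINTS = "\n".join([
    "- Output exactly one JSON object.",
    "- Allowed labels: positive, negative, neutral.",
    "- Never add commentary outside the JSON.",
])
FEW_SHOT = [
    ("The battery life is fantastic.", {"label": "positive"}),
    ("It broke after two days.", {"label": "negative"}),
]

def build_prompt(user_text: str) -> str:
    """Concatenate role, constraints, and worked examples before the query."""
    examples = "\n".join(
        f"Input: {text}\nOutput: {json.dumps(expected)}"
        for text, expected in FEW_SHOT
    )
    return (
        f"{ROLE}\n\nConstraints:\n{CONSTRAINTS}\n\n"
        f"{examples}\n\nInput: {user_text}\nOutput:"
    )

prompt = build_prompt("Shipping was slow but support was helpful.")
```

Sending the few-shot examples in the same format you expect back is what makes the final "Output:" cue effective: the model completes the established pattern rather than improvising one.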
Technical Compatibility
This skill is framework-agnostic and designed for developers working with OpenClaw, Python, and Go. It is optimized for high-reasoning models (GPT-4, Claude 3, Gemini Pro) and provides specific guidance for multimodal (image) prompting and tool-use orchestration.
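As a hedged sketch of what tool-use orchestration can look like in plain Python: a small tool registry plus a dispatcher that executes a model-issued tool call. The call shape (`{"name": ..., "arguments": {...}}`), the `get_weather` tool, and its stubbed result are assumptions for illustration, not a specific provider's schema.

```python
import json

# Hypothetical tool registry; real tools would call external services.
TOOLS = {
    "get_weather": {
        "description": "Return current weather for a city.",
        "handler": lambda city: {"city": city, "temp_c": 21},  # stubbed result
    }
}

def dispatch(tool_call_json: str) -> dict:
    """Parse a tool call like {"name": ..., "arguments": {...}} and run it."""
    call = json.loads(tool_call_json)
    tool = TOOLS[call["name"]]
    return tool["handler"](**call["arguments"])

result = dispatch('{"name": "get_weather", "arguments": {"city": "Oslo"}}')
```

Keeping dispatch logic separate from tool definitions like this is what lets the same orchestration loop work across providers.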
High-Quality Outputs
Expect consistent, machine-readable results: valid JSON objects ready for backend consumption, structured Markdown reports, and explainable reasoning chains that make debugging AI behavior straightforward for your development team.
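In practice, "JSON ready for backend consumption" still benefits from a defensive parsing step, since models sometimes wrap output in Markdown code fences. The helper below is a minimal sketch under that assumption; production pipelines may also retry the request on failure.

```python
import json
import re

def parse_model_json(raw: str) -> dict:
    """Extract the first JSON object from a model reply, tolerating
    Markdown code fences the model may have wrapped around it."""
    # Strip a leading ```json / ``` fence and a trailing ``` fence, if present.
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    match = re.search(r"\{.*\}", cleaned, flags=re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

parsed = parse_model_json('```json\n{"label": "positive"}\n```')
# -> {"label": "positive"}
```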
How to Install
unzip prompt-engineer.zip -d ~/.claude/skills/
RAG fails quietly. It retrieves documents, returns confident-looking answers, and misses the question entirely — because the question required connecting facts across documents, reasoning about sequence, or tracing causation. This skill gives you a five-question diagnostic checklist that classifies any failing query as either RAG-safe or structurally RAG-incompatible, then maps it to the specific failure pattern and the architectural fix that resolves it.