opencode-coding
Enforce senior-level coding standards with a focus on verification, minimal diffs, and evidence-based bug fixing.
by Roy Yuen
About This Skill
What it does
Opencode Coding is a high-performance skill designed to enforce senior-engineer coding standards across any AI model. It moves beyond "prompt-and-hope" coding by mandating a rigorous technical workflow: verify first, implement the narrowest defensible change, and prove success through execution rather than inspection.
Why use this skill
Standard LLM coding often suffers from "hallucinated confidence" and bloated, speculative refactors. This skill solves that by forcing the agent to adopt a Codex-grade standard. It is better than simple prompting because it embeds a systematic engineering contract: every change must be localized, every bug must be reproduced, and every completion must state exactly what was verified and what remains unknown. It turns your agent into an engineer that values stability and evidence over cleverness.
Key Features
- Evidence-Based Debugging: Identifies root causes and reproduces failures before proposing fixes.
- Minimal Impact Diffs: Prioritizes the smallest safe change to preserve project patterns and reduce regression risk.
- Verification-First Workflow: Mandates running targeted tests, linters, or manual validations before reporting success.
- Standardized Reporting: Every output includes a "Response Contract" detailing what was Verified, Inferred, and Unknown.
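The listing does not show the exact report layout, but based on the Verified/Inferred/Unknown fields named above, a Response Contract might look something like this (an illustrative sketch; the specific wording and indentation are assumed, not taken from the skill itself):

```text
Response Contract
  Verified: targeted tests for the changed module were run and passed
  Inferred: no call sites outside the edited file depend on the old behavior
  Unknown:  behavior under concurrent access was not exercised
```

The point of the three-way split is that anything not directly executed and observed is never reported as verified.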
Supported Use Cases
This skill is framework-agnostic and works across any tech stack. Use it for complex feature implementation, surgical bug fixing, safe refactoring of legacy modules, and rigorous PR reviews where functional correctness is the priority.
How to Install
unzip opencode-coding.zip -d ~/.claude/skills/
$10 • One-time purchase • Own forever
Security Scanned
Passed automated security review
Permissions
No special permissions declared or detected
Similar Skills
env-doctor
Diagnoses why your project will not start. Checks runtime versions, dependencies, environment variables, databases, ports, and build artifacts systematically.
diagnosing-rag-failure-modes
RAG fails quietly. It retrieves documents, returns confident-looking answers, and misses the question entirely — because the question required connecting facts across documents, reasoning about sequence, or tracing causation. This skill gives you a five-question diagnostic checklist that classifies any failing query as either RAG-safe or structurally RAG-incompatible, then maps it to the specific failure pattern and the architectural fix that resolves it.
benchmarking-ai-agents-beyond-models
Published AI benchmarks measure brains in jars. They test models in isolation or within a single reference harness — and then attribute all performance to the model. This skill teaches you to decompose agent performance into its two actual components: model capability and harness multiplier. The result is evaluations that predict real-world behavior instead of benchmark theater.
code-reviewer
Reviews your code for bugs, security vulnerabilities, logic errors, performance issues, and style violations. Organizes findings by severity and suggests fixes with code examples.