
Garden Skills - production skill pack for Claude Code, Cursor, and Codex

Three carefully-scoped skills: web-design-engineer (with an anti-cliché blocklist that breaks the generic-AI-landing-page loop), gpt-image-2 (80+ templates, three runtime modes including advisor-only fallback), and kb-retriever (layered data_structure.md navigation for bounded local-KB retrieval). Tested across Claude Code, Claude.ai, Cursor, Codex, Gemini, OpenCode.


Most "skill packs" people publish for Claude Code are either someone's personal ~/.claude/skills/ directory dumped to GitHub or a thin wrapper around a single prompt. Garden Skills is one of the few that actually treats skills as a designed product surface - bilingual docs (English + Chinese), tested across six agent runtimes, with deliberate opinions about what each skill does and refuses to do.

It's by ConardLi (1.9k stars, the most-starred skill pack I've come across), MIT-licensed, and ships three skills that genuinely don't overlap.

The three skills

web-design-engineer - the headline one. Six-step workflow for landing pages, dashboards, interactive prototypes, and data visualisations. The interesting design choice: an anti-cliché blocklist. The skill explicitly refuses the generic AI patterns (hero + 3 feature cards + testimonials wall + "trusted by" logos) and pushes the agent toward visual judgment instead. If you've ever asked a coding agent to "build a landing page" and watched it produce the same Vercel-template shape three times in a row, this is the skill that breaks that loop.
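To make the idea concrete, here is a minimal sketch of how a blocklist gate like this could work. This is an illustration, not the skill's actual implementation: the pattern names and the function are hypothetical, and the real skill operates on the agent's design decisions rather than on a string list.

```python
# Hypothetical anti-cliche gate: reject a proposed page plan if it contains
# any of the generic-AI layout patterns, forcing the agent to redesign.

BLOCKLIST = {
    "hero + 3 feature cards",
    "testimonials wall",
    "trusted-by logo strip",
}

def blocklist_hits(page_plan: list[str]) -> list[str]:
    """Return the cliche sections found in a proposed page plan."""
    return [section for section in page_plan if section in BLOCKLIST]

plan = ["hero + 3 feature cards", "pricing table", "testimonials wall"]
print(blocklist_hits(plan))  # ['hero + 3 feature cards', 'testimonials wall']
```

The point of the design is that the check is a hard refusal, not a style hint: a plan with any hit goes back for another pass instead of being softened in place.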

gpt-image-2 - prompt engineering for image generation. 18 visual categories, 80+ structured prompt templates covering posters, mockups, infographics, and technical diagrams. Three runtime modes:

  • Mode A (Garden local) - generation runs locally
  • Mode B (host-native delegation) - hand off to whatever image tool the platform has (Claude's image gen, Cursor's, etc.)
  • Mode C (advisor-only) - if no image tool is available, write the prompt and stop

That mode-C fallback is the right default. Most skills that "do image generation" silently fail when there's no image tool; this one downgrades to "here's the prompt, run it where you can."

kb-retriever - local knowledge-base search across Markdown, PDF, and Excel, with source attribution. The bit nobody else does: it uses layered data_structure.md files to navigate a knowledge base instead of stuffing everything into context. Bounded retrieval is the point - the skill is built to avoid context overflow, not just to "do RAG."

Install

Three install paths. Pick based on how you want updates to flow.

Plugin marketplace (recommended for Claude Code):

/plugin marketplace add ConardLi/garden-skills
/plugin install web-design-skills@garden-skills
/plugin install knowledge-base-skills@garden-skills
/plugin install image-generation-skills@garden-skills

Manual copy:

# Claude Code
cp -r path/to/garden-skills/skills/* ~/.claude/skills/
# Cursor
cp -r path/to/garden-skills/skills/* .agents/skills/

Git submodule - for projects that want to track upstream:

git submodule add https://github.com/ConardLi/garden-skills .agents/skills/garden

Tested across six runtimes

Agent            Status
Claude Code      Tested
Claude.ai (web)  Tested
Cursor           Tested
Codex CLI        Tested
Gemini CLI       Tested
OpenCode         Tested

This is more cross-agent coverage than most skill packs bother with. If you run Codex and Claude Code in parallel, the same skill behaves consistently in both - which is rare.

When to reach for it

  • You're tired of the same five landing-page shapes coming out of your coding agent. The web-design skill's anti-cliché blocklist is the most opinionated answer to that I've seen.
  • You want image-gen prompts that survive across providers. The 80+ templates are written to be platform-agnostic, then mapped to whichever image tool your host exposes.
  • You're building a local knowledge base for an agent and don't want to pay the context cost of dumping the whole thing every turn. kb-retriever's layered navigation is the right shape.

When not to

  • You want a 30-skill kitchen-sink pack. Garden is deliberately small - three skills, each carefully scoped.
  • Your stack doesn't run skills (you're on a vanilla LLM API integration). The marketplace install assumes a skills-aware runtime.
  • The web-design skill won't help if your design system already has rigid component templates - it's built to push agents away from generic patterns, but if "generic" is what you actually want, the skill will fight you.

Trade-offs

Bilingual docs are a feature, but the README is more polished in Chinese than in English in places. If you read both, you'll occasionally find a tip in the zh-CN version that didn't make it to the English one.

The "anti-cliché blocklist" is opinionated by design. That's the value, but it also means the web-design skill's output skews toward editorial / dense / asymmetric layouts. If your taste runs the other direction (clean SaaS minimalism), you'll spend time tuning the blocklist down rather than turning it up.

Cross-agent testing is impressive but doesn't mean equal results. Skills behave best on Claude Code (where SKILL.md is a first-class concept); other runtimes work, but with weaker cues for when to load a skill - worth knowing if you're picking a primary host.
