
gpt_image_2_skill - 162-prompt gallery and skill for GPT Image 2

Curated gallery, CLI, and agentic skill for OpenAI's GPT Image 2: 162 reusable prompts across anime, gaming, photography, UI/UX, and research-figure categories. Supports text-to-image, mask edits, multi-reference edits, and batch operations. Installable as a Claude Code plugin, a Codex skill, or a standalone CLI via uv.


Related entries

GitHub · Library · Featured

Garden Skills - production skill pack for Claude Code, Cursor, and Codex

Three carefully scoped skills: web-design-engineer (with an anti-cliche blocklist that breaks the generic AI landing-page loop), gpt-image-2 (80+ templates and three runtime modes, including an advisor-only fallback), and kb-retriever (layered data_structure.md navigation for bounded local-KB retrieval). Tested across Claude Code, Claude.ai, Cursor, Codex, Gemini, and OpenCode.

Why I saved this - The web-design skill's anti-cliche blocklist is the most opinionated take on 'stop producing the same hero + 3 cards' I've seen.
GitHub · Tool · Featured

PostTrainBench - can a CLI agent post-train a base LLM in 10 hours?

Benchmark measuring whether Claude Code, Codex CLI, Gemini CLI, and OpenCode can autonomously improve four small base models (Qwen3-1.7B/4B, SmolLM3-3B, Gemma-3-4B) on seven evals (AIME, BFCL, GPQA, GSM8K, HealthBench, HumanEval, Arena Hard), on a single H100 GPU within 10 hours. Includes agent-as-judge anti-reward-hacking checks and baseline-replacement penalties for tampering.

Why I saved this - Current leader: Opus 4.6 via Claude Code at a 23.2 average. The reward-hacking safeguards (eval-tampering and model-substitution detection, plus the baseline-replacement penalty) are the part most agent benchmarks skip.