When this matters
- A Codex or Claude skill author wants a pre-publish check before sharing a skill pack.
- A team reviews third-party skills and needs to know whether they are too broad or under-tested.
- A marketplace wants contributors to pass a cost and QA gate before listing.
How to run the workflow
- Paste or upload SKILL.md and select the expected agent runtime.
- The workflow detects steps, tool categories, file access, network calls, credentials, and approval language.
- It estimates context tokens from instructions, examples, references, and expected task payloads.
- It flags missing acceptance tests, overly broad permissions, and ambiguous completion criteria.
- After you edit the skill, it generates a revised cost forecast and a pricing suggestion.
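The steps above can be sketched as a small static analyzer. Everything here is an illustrative assumption, not the actual SkillCost Meter implementation: the regex patterns, the flag wording, and the ~4-characters-per-token heuristic are all stand-ins.

```python
import re

# Hypothetical detectors -- pattern names and regexes are assumptions
# chosen to mirror the categories listed above.
RISK_PATTERNS = {
    "network": re.compile(r"\b(https?://|curl|fetch)\b", re.I),
    "credentials": re.compile(r"\b(token|secret|api[_ ]?key|password)\b", re.I),
    "file_access": re.compile(r"\b(read|write|delete)\s+file\b", re.I),
}

def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token for English prose.
    return max(1, len(text) // 4)

def analyze_skill(skill_md: str) -> dict:
    """Static scan of a SKILL.md body: categories, token estimate, QA flags."""
    categories = {name: bool(pat.search(skill_md))
                  for name, pat in RISK_PATTERNS.items()}
    flags = []
    if "acceptance test" not in skill_md.lower():
        flags.append("missing acceptance tests")
    if categories["credentials"]:
        flags.append("broad permissions: credentials referenced")
    return {
        "estimated_context_tokens": estimate_tokens(skill_md),
        "categories": categories,
        "flags": flags,
    }
```

In practice a checker like this would also walk any scripts the skill references, since those change the token and review budget too.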
Common risks
- Static analysis cannot prove real runtime behavior unless it is paired with simulation.
- Hidden dependencies in referenced scripts can change the cost and review burden.
- Credentials and network access need explicit owner approval and auditability.
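The last point is enforceable mechanically. A minimal sketch of such a gate, assuming a hypothetical manifest shape (the field names `uses_credentials`, `network_access`, and `owner_approval` are assumptions for illustration):

```python
def requires_owner_approval(manifest: dict) -> bool:
    """Return True when a skill touches credentials or the network
    without a recorded, auditable owner sign-off.

    Field names are hypothetical, not a real SkillCost Meter schema.
    """
    sensitive = manifest.get("uses_credentials") or manifest.get("network_access")
    approved = manifest.get("owner_approval", {}).get("approved_by")
    return bool(sensitive) and not approved
```

A review pipeline would block listing whenever this returns True and log the `approved_by` value for the audit trail.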
Where SkillCost Meter fits
SkillCost Meter starts with static SKILL.md analysis, then extends that analysis of the same skill into a 20-run runtime forecast and a red-line QA report.
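The runtime-forecast step amounts to aggregating per-run token counts into cost figures. A minimal sketch, assuming per-run token totals are already collected and using an illustrative price constant (not a real rate):

```python
import statistics

def forecast(run_token_counts: list[int], price_per_1k: float = 0.01) -> dict:
    """Aggregate simulated runs (e.g. the 20 runs mentioned above) into
    a mean and rough 95th-percentile cost. Pricing is a placeholder."""
    mean_tokens = statistics.mean(run_token_counts)
    # Rough p95: value at the 95% position of the sorted runs.
    p95_index = max(0, int(0.95 * len(run_token_counts)) - 1)
    p95_tokens = sorted(run_token_counts)[p95_index]
    return {
        "mean_cost": round(mean_tokens * price_per_1k / 1000, 4),
        "p95_cost": round(p95_tokens * price_per_1k / 1000, 4),
    }
```

Reporting both the mean and a tail percentile matters here: a skill that is cheap on average but occasionally balloons its context is exactly what a pre-publish gate should surface.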