Skill sets for every domain
The sprint is for engineering. The framework works for marketing, data, design, DevOps, and anything with a repeatable process.
Marketing: audience to measurement
Four phases that take a campaign from audience definition to performance measurement:
/audience → /content-plan → /campaign → /measure
Each phase produces an artifact that the next phase reads. The pipeline is identical to the engineering sprint — different domain, same mechanics.
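Those mechanics can be sketched in a few lines. This is an illustrative sketch only — the file layout, artifact filenames, and field names here are invented for the example, not part of the framework:

```python
import json
from pathlib import Path

# Illustrative sketch of the pipeline mechanics: each phase reads the
# previous phase's artifact, does its work, and saves a new one.
# Paths and field names are hypothetical, not the framework's own.
artifacts = Path("artifacts")
artifacts.mkdir(exist_ok=True)

# Artifact left behind by a previous phase (/audience).
(artifacts / "audience.json").write_text(json.dumps({
    "phase": "audience",
    "segments": [{"name": "solo-founder"}, {"name": "mid-market-cto"}],
}))

# The next phase (/content-plan) reads it and builds on it.
audience = json.loads((artifacts / "audience.json").read_text())
plan = {
    "phase": "content-plan",
    "for_segments": [s["name"] for s in audience["segments"]],
}
(artifacts / "content-plan.json").write_text(json.dumps(plan))
print(plan["for_segments"])  # ['solo-founder', 'mid-market-cto']
```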
Full SKILL.md: /audience
---
name: audience
description: Define target audience segments from data
concurrency: exclusive
depends_on: []
hooks: []
---
# /audience -- Define who you are talking to
## Context
Read any existing audience artifacts from previous sprints.
Check for analytics exports in data/ or reports/ directories.
## Process
1. Gather available data:
   - Website analytics (if data/analytics.csv exists)
   - Customer interviews (if data/interviews/ directory exists)
   - Survey results (if data/surveys/ directory exists)
   - Competitor audience analysis (if references/competitors.md exists)
2. For each data source found, extract:
   - Demographics: role, seniority, company size
   - Behaviors: what they search for, what content they engage with
   - Pain points: what problems they mention repeatedly
   - Objections: why they would NOT use the product
3. Define 2-3 audience segments. For each segment:
   - Name: a short label (e.g., "solo-founder", "mid-market-cto")
   - Size estimate: rough TAM based on available data
   - Primary channel: where they spend time online
   - Trigger event: what makes them search for a solution
4. Save the artifact:
```bash
bin/save-artifact.sh audience audience-segments.json
```
## Artifact schema
```json
{
  "phase": "audience",
  "segments": [
    {
      "name": "solo-founder",
      "demographics": { "role": "founder/ceo", "company_size": "1-10" },
      "primary_channel": "twitter",
      "trigger_event": "first product launch",
      "pain_points": ["no marketing budget", "no team"],
      "objections": ["too early to invest in marketing"]
    }
  ]
}
```
Data: exploration to validation
A data science workflow built on the same artifact pipeline:
/explore → /hypothesis → /model → /validate
Full SKILL.md: /hypothesis
---
name: hypothesis
description: Form testable hypotheses from exploratory analysis
concurrency: exclusive
depends_on:
  - explore
hooks: []
---
# /hypothesis -- Turn observations into testable claims
## Context
Read the /explore artifact. It contains summary statistics,
distributions, correlations, and anomalies found in the dataset.
## Process
1. Read the explore artifact:
```bash
EXPLORE=$(bin/find-artifact.sh --phase explore --project "$(basename "$(pwd)")")
jq '.findings' "$EXPLORE"
```
2. For each notable finding from exploration, formulate a hypothesis:
   - State the claim in one sentence
   - Define the null hypothesis
   - Specify the test: statistical test name, significance threshold
   - Identify confounders that could invalidate the result
   - Estimate the sample size needed for the chosen power
3. Rank hypotheses by:
   - Business impact (1-5): if true, how much does it matter?
   - Testability (1-5): can we actually test this with available data?
   - Priority = impact * testability
4. Select the top 3 hypotheses for the /model phase.
5. Save the artifact:
```bash
bin/save-artifact.sh hypothesis hypotheses.json
```
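The ranking and selection steps amount to a small computation. A hypothetical sketch (not part of the skill file itself — hypothesis IDs and scores are invented):

```python
# Hypothetical sketch of the ranking step: priority = impact *
# testability, then keep the top 3 hypotheses for /model.
def rank_hypotheses(hypotheses, top_n=3):
    for h in hypotheses:
        h["priority"] = h["impact"] * h["testability"]
    return sorted(hypotheses, key=lambda h: h["priority"], reverse=True)[:top_n]

candidates = [
    {"id": "H1", "impact": 5, "testability": 4},
    {"id": "H2", "impact": 3, "testability": 5},
    {"id": "H3", "impact": 2, "testability": 2},
    {"id": "H4", "impact": 4, "testability": 1},
]
selected = rank_hypotheses(candidates)
print([h["id"] for h in selected])  # ['H1', 'H2', 'H3']
```

Ties (H3 and H4 both score 4 here) keep their input order because Python's sort is stable.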
## Artifact schema
```json
{
  "phase": "hypothesis",
  "hypotheses": [
    {
      "id": "H1",
      "claim": "Users who complete onboarding within 24h retain 2x better",
      "null": "Onboarding timing has no effect on 30-day retention",
      "test": "chi-squared",
      "significance": 0.05,
      "confounders": ["user source", "plan type"],
      "impact": 5,
      "testability": 4,
      "priority": 20
    }
  ]
}
```
Design: research to usability
A UX research workflow:
/research → /wireframe → /prototype → /usability
- /research — gathers user interviews, competitor screenshots, and heuristic evaluations. Produces a research artifact with key insights and design opportunities.
- /wireframe — reads the research artifact, generates low-fidelity layout descriptions (text-based, not images). Produces a wireframe artifact with component hierarchy and content blocks.
- /prototype — reads the wireframe artifact, generates HTML/CSS prototypes for the top 3 flows. Saves files to a prototypes/ directory and produces an artifact with file paths and flow descriptions.
- /usability — reads the prototype artifact, generates a usability test script with tasks, success criteria, and measurement rubrics. Produces a test-plan artifact.
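The handoff between /wireframe and /prototype might look like the following sketch. All field names here are invented for illustration; the framework does not prescribe a wireframe schema:

```python
# Hypothetical wireframe artifact: a component hierarchy plus content
# blocks, as /prototype might read it. Field names are invented.
wireframe = {
    "phase": "wireframe",
    "flows": [
        {
            "name": "signup",
            "components": ["header", "form", "social-proof", "footer"],
            "content_blocks": {"form": ["email", "password", "cta"]},
        },
        {
            "name": "onboarding",
            "components": ["progress-bar", "checklist"],
            "content_blocks": {"checklist": ["connect-data", "invite-team"]},
        },
    ],
}

# /prototype would iterate the flows and emit one HTML file per flow.
prototype_files = [f"prototypes/{flow['name']}.html" for flow in wireframe["flows"]]
print(prototype_files)  # ['prototypes/signup.html', 'prototypes/onboarding.html']
```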
DevOps: provision to rollback
An infrastructure workflow:
/provision → /deploy → /monitor → /rollback
- /provision — reads infrastructure specs from infra/, generates Terraform or Pulumi configurations, validates them with terraform plan. Saves the plan output as an artifact.
- /deploy — reads the provision and ship artifacts, runs the deployment pipeline, verifies endpoint health. Saves deployment status, URLs, and health check results.
- /monitor — reads the deploy artifact, checks logs, metrics, and error rates for the first 15 minutes after deployment. Flags anomalies against baseline metrics.
- /rollback — reads the monitor artifact. If anomalies are detected, reverts to the previous deployment. If clean, produces a "stable" artifact marking the release as good.
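The /rollback branch is the one decision point in this workflow. A minimal sketch, assuming a monitor artifact with an `anomalies` field (the field name and values are illustrative):

```python
# Hypothetical sketch of the /rollback decision: read the monitor
# artifact, revert on anomalies, otherwise mark the release stable.
def rollback_decision(monitor):
    if monitor["anomalies"]:
        return {"phase": "rollback", "action": "revert",
                "reason": monitor["anomalies"]}
    return {"phase": "rollback", "action": "none", "status": "stable"}

clean = {"phase": "monitor", "anomalies": []}
noisy = {"phase": "monitor", "anomalies": ["error rate 3x baseline"]}

print(rollback_decision(clean)["status"])  # stable
print(rollback_decision(noisy)["action"])  # revert
```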
The pattern
Every domain follows the same structure: a sequence of phases where each phase reads the previous artifact, does its work, and saves a new artifact. The framework provides the pipes — artifacts, hooks, orchestration, secret scanning, integrity verification, conflict resolution. You fill them with your domain knowledge.
Start with two phases. Get them working. Add more when you feel the gaps. The artifact pipeline does not care whether you have 4 phases or 14. It scales with your workflow, not against it.
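Stripped to its core, the pattern is just a chain of functions, each taking the previous artifact and returning the next. A minimal sketch with two phases, using invented phase names and fields:

```python
# Minimal sketch of the pattern: run phases in sequence, threading the
# artifact through. Phase names and fields are illustrative only.
def run_pipeline(phases, artifact=None):
    for phase in phases:
        artifact = phase(artifact)  # read previous, work, produce new
    return artifact

def explore(_):
    return {"phase": "explore", "findings": ["onboarding drop-off at step 3"]}

def hypothesis(prev):
    return {"phase": "hypothesis",
            "claims": [f"fixing '{f}' improves retention" for f in prev["findings"]]}

result = run_pipeline([explore, hypothesis])
print(result["phase"])  # hypothesis
```

Adding a third phase is just appending another function to the list, which is why the pipeline is indifferent to whether you run 4 phases or 14.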