Translate "the algorithm changed" into a plan
Creates a platform-specific (or multi-platform) briefing that explains ranking priorities in plain language, then turns them into practical actions: what to aim for, what to stop doing, and what to test. Includes a side-by-side signal comparison when multiple platforms are selected. Built for creators experiencing reach drops and growth confusion.
Create a skill called "Algorithm Briefing". For one or more selected platforms:
1) Summarize official ranking guidance in plain English (<= 200 words per platform).
2) If multiple platforms are selected, produce a side-by-side signal comparison table (top signals, what creators control, anti-patterns).
3) Ask me for a performance summary of my last 10 posts/videos (or let me paste metrics).
4) Diagnose likely causes of underperformance.
5) Produce 3 experiments to run in the next 7 days:
   - each experiment: hypothesis, change, success metric, and how long to run.
6) Include a short measurement checklist (2–4 metrics per platform).
Rules:
- Prioritize actions the creator can control (hook, structure, packaging, keywords).
- Avoid "shadowban" folklore; use official features and account-status checks when available.
- Keep it practical and avoid jargon.
You choose one or more platforms and share your symptoms (reach drop, volatility, restrictions,
low watch time). The skill summarizes official ranking guidance, compares signal priorities across
platforms, and proposes an experimentation plan.
Output: "Ranking priorities + audit + 3 experiments (hook, length, CTA)."
Output: "Packaging + retention audit + title/thumbnail test plan."
Output: "Side-by-side signal comparison + per-platform weekly optimization steps."
Rebuild measurement around first-party data, consent, and signal loss
Creates a privacy-first measurement architecture that treats signal loss as permanent: a durable KPI stack, consent-aware tracking priorities, and clear rules for when to use modeled vs. deterministic data. Outputs a blueprint plus an implementation backlog for the quarter.
Pick the attribution method that matches your data reality
Helps teams decide when multi-touch attribution (MTA) is still useful and when to lean on marketing mix modeling (MMM) or incrementality testing instead. Produces a decision with justification, a minimum data-requirements checklist, and a 30-day validation plan.
Pick 1–3 channels and actually stick with them
Turns channel confusion into a structured playbook: select channels, define posting cadence and messaging pillars, and set minimum viable measurement, so marketing becomes consistent instead of sporadic.
Turn scattered posting into a clear content system
Helps creators define 3–5 content pillars, a repeatable set of formats, and a clear audience promise. Built for creators who feel "all over the place" and struggle to build momentum.