Negativity Moderation Shield
Protect your space without spiraling
Creates a moderation plan and template set for hate comments, harassment, and dogpiles. It helps you decide when to delete, hide, restrict, report, or disengage, while maintaining a healthy creator mindset and community standards.
PROMPT
Create a skill called "Negativity Moderation Shield". Ask for my platform(s), the types of negativity I get, and my boundaries. Then:
- propose platform settings to enable (comment filters, held comments, restrict/limit)
- provide an escalation ladder: ignore → hide/delete → restrict → report
- draft 10 short boundary templates (optional use)
- recommend a mental boundary system (timebox + rules)
Prioritize safety over engagement.
How It Works
You describe your situation and your tolerance for engaging. The skill returns moderation settings to enable,
response templates (if you choose to respond at all), and "don't engage" rules.
Inputs
- Platforms
- Common negativity patterns you see
- Your boundaries and safety concerns
- Whether you want to respond at all
Outputs
- Moderation settings checklist (platform-specific)
- 10 boundary templates (short, calm)
- "Escalation ladder" (ignore → hide/delete → restrict → report)
- Mental health boundaries plan (timebox, rule set)
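The escalation ladder above is a decision procedure: match the comment to a rung, and move one rung up for repeat offenders. A minimal sketch in Python, with illustrative severity levels and thresholds (these are assumptions for the example, not part of the skill's actual output):

```python
# Hypothetical sketch of the escalation ladder as a decision function.
# The severity scale (0-3) and the repeat-offender bump are assumptions.

def escalate(severity: int, repeat_offender: bool) -> str:
    """Map a comment's severity (0-3) to a moderation action.

    0 = mild snark       -> ignore
    1 = insult           -> hide/delete
    2 = targeted abuse   -> restrict the account
    3 = threats/slurs    -> report to the platform
    Repeat offenders move one rung up the ladder.
    """
    rung = min(severity + (1 if repeat_offender else 0), 3)
    return ["ignore", "hide/delete", "restrict", "report"][rung]

print(escalate(0, False))  # ignore
print(escalate(1, True))   # restrict
```

Keeping the ladder as a single lookup makes the rule easy to write down and follow consistently, which is the point: less in-the-moment judgment, less spiraling.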
Examples
- Input: "Transphobic comments on YouTube."
Output: "Strict moderation settings + blocked words + removal strategy."
- Input: "Viral post brought hateful strangers."
Output: "15-min daily comment window + restrict/hide workflow."
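The "15-min daily comment window" pattern can be sketched as a timebox: moderation work stops when the window expires, whatever is left unhandled waits for tomorrow. A minimal sketch, where the function name, the injectable `clock` parameter, and the queue shape are all assumptions for illustration:

```python
import time

# Illustrative timebox sketch: handle comments until done or until the
# window runs out. `clock` is injectable so the cutoff can be simulated.

def moderate_within_timebox(comments, handle, minutes=15, clock=time.monotonic):
    """Call handle() on each comment until the timebox expires.

    Returns the comments left unhandled when time ran out.
    """
    deadline = clock() + minutes * 60
    remaining = list(comments)
    while remaining and clock() < deadline:
        handle(remaining.pop(0))
    return remaining
```

The hard stop is the feature: returning the leftover queue instead of racing through it enforces the mental boundary the plan recommends.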
Sources
- https://support.google.com/youtube/answer/9483359
- https://help.instagram.com/2638385956221960/
- https://help.instagram.com/4106887762741654/