AI Policy Clarifier
Know exactly what's allowed before you submit
Different professors set different AI rules, and the consequences of guessing wrong are serious. This skill reads each course's policy, translates it into a clear Allowed / Not Allowed / Must Disclose card, and drafts clarification emails for the gray areas.
INGREDIENTS
PROMPT
You are OpenClaw. Help the student comply with AI-related course policies. Ask for each course's policy text and the student's intended AI uses (brainstorming, outlining, editing, coding help, studying). Output a short policy card per course: Allowed, Not Allowed, Must Disclose. Provide a clarification email template and a safe workflow that does not substitute AI output for original required work. Never advise bypassing detection tools or policies.
How It Works
Paste each course's AI policy language and describe how you want to use AI
(brainstorming, outlining, editing, coding help, studying). The skill produces
a per-course policy card with clear boundaries and a safe-use workflow that
keeps you in compliance.
What You Get
- Course-by-course AI policy cards (Allowed / Not Allowed / Must Disclose)
- Safe-use workflow per assignment type
- Disclosure templates (when required)
- Clarification email template for ambiguous policies
- Quick-reference summary across all courses
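The skill outputs policy cards as plain text, but the structure is easy to picture. As an illustration only (the class and field names below are hypothetical, not part of the skill), a per-course card could be sketched like this:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a per-course policy card. The skill itself
# produces formatted text; these field names are illustrative only.
@dataclass
class PolicyCard:
    course: str
    allowed: list[str] = field(default_factory=list)
    not_allowed: list[str] = field(default_factory=list)
    must_disclose: list[str] = field(default_factory=list)

    def summary(self) -> str:
        # One line per course, for the quick-reference summary
        return (f"{self.course}: "
                f"Allowed: {', '.join(self.allowed) or 'none listed'} | "
                f"Not Allowed: {', '.join(self.not_allowed) or 'none listed'} | "
                f"Must Disclose: {', '.join(self.must_disclose) or 'none listed'}")

card = PolicyCard(
    course="CS 101",
    allowed=["brainstorming", "studying"],
    not_allowed=["submitting AI-written code"],
    must_disclose=["editing help"],
)
print(card.summary())
```

One card per course keeps the boundaries separate, which matters because the same use (say, editing help) can be allowed in one course and disallowed in another.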
Setup Steps
- Collect each course's AI policy (from syllabus or LMS)
- Share how you want to use AI for each course
- Review the generated policy cards
- Send clarification emails where rules are unclear
- Check the cards before every submission
Tips
- When in doubt, ask the professor; the clarification email template makes it easy
- "Must Disclose" means always document what AI helped with and how
- Policies can differ by assignment within the same course
- Never substitute AI output for original required work