Before debugging your stack, check whether a critical vendor is degraded
Plenty of "our service is down" investigations end with "the vendor is having an incident." This recipe checks your critical vendors' status pages, correlates reported degradation with your symptom window, and posts a structured summary to the incident channel so the team can avoid burning time in the wrong system.
Check whether a production issue may be caused by a third-party vendor outage.

Goal: Help me quickly determine whether a problem I am seeing is actually caused by one of our third-party vendors having an issue.

Ask me for:
- The symptom I am seeing (timeouts, 5xx, slow queries, payment failures, etc.)
- The time window when it started
- Affected service or feature
- The vendor list document or a list of vendors to check
- Slack channel for the radar post

Use available integrations this way:
- Exa Search and Brave Search: fetch status page content for each vendor
- Cloudflare: check zone status, recent events, and origin reachability
- Slack: post the radar result to the incident channel
- Google Docs: read the vendor list and write a longer-form analysis if needed
- Linear: create a ticket if a vendor outage requires a sustained workaround

Output:
1. A status table of every vendor checked, with current status and last update
2. Correlation summary: which vendors had degradation overlapping my symptom window
3. Cloudflare zone status if applicable
4. A clear recommendation: vendor likely cause, vendor unlikely cause, or inconclusive
5. A Slack post for the incident channel
6. A Linear ticket if the vendor cause needs a tracked workaround

Rules:
- Do not declare a vendor as the cause without correlation evidence
- Do not skip vendors flagged as critical even if their status page is hard to scrape
- Always include the timestamp of the status page check; staleness matters
- If a vendor lacks a public status page, mark it as "unverifiable," not "operational"
- Suggest opening a vendor support ticket if their status page disagrees with my symptom
When an alert fires or a customer reports a failure, this recipe checks whether the issue lines up with a third-party dependency. It scans your declared vendor list, checks each status page, looks for degradation that overlaps your symptom window, and posts the result to the incident channel.
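The fetch-and-correlate step is simple when a vendor runs an Atlassian Statuspage-style page, which exposes machine-readable JSON under /api/v2/. A minimal sketch under that assumption; the vendor URLs and symptom window are illustrative placeholders:

```python
"""Sketch of the status radar core, assuming Statuspage-style endpoints.
Vendor URLs and the symptom window are illustrative placeholders."""
from datetime import datetime, timezone
import json
import urllib.request

VENDORS = {  # in the recipe, this list comes from the vendor doc
    "stripe": "https://status.stripe.com",
    "cloudflare": "https://www.cloudflarestatus.com",
}
WINDOW_START = datetime(2024, 5, 1, 14, 0, tzinfo=timezone.utc)
WINDOW_END = datetime(2024, 5, 1, 15, 30, tzinfo=timezone.utc)

def fetch_json(url: str) -> dict:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def parse_ts(ts):
    # Statuspage timestamps are ISO 8601; normalize a trailing "Z".
    return datetime.fromisoformat(ts.replace("Z", "+00:00")) if ts else None

def overlaps(start, end) -> bool:
    # An unresolved incident (end is None) still counts if it started in time.
    return start <= WINDOW_END and (end is None or end >= WINDOW_START)

for name, base in VENDORS.items():
    checked_at = datetime.now(timezone.utc).isoformat()  # staleness matters
    try:
        status = fetch_json(f"{base}/api/v2/status.json")["status"]
        incidents = fetch_json(f"{base}/api/v2/incidents.json")["incidents"]
    except Exception as exc:
        print(f"{name}: unverifiable ({exc}) at {checked_at}")
        continue
    hits = [i["name"] for i in incidents
            if overlaps(parse_ts(i["created_at"]), parse_ts(i.get("resolved_at")))]
    print(f"{name}: {status['description']} at {checked_at}; "
          f"overlapping incidents: {hits or 'none'}")
```

A vendor whose page cannot be fetched falls into the unverifiable branch rather than being assumed operational, matching the rules above.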
Turn rough incident notes into a blameless postmortem with shippable follow-ups
Postmortems often start as rushed incident notes and stay messy until review time. This recipe takes a rough draft, removes blameful language, fills timeline gaps from PagerDuty and Slack, turns vague follow-ups into concrete Linear tickets, and schedules the review.
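The ticket step maps naturally onto Linear's public GraphQL API. A minimal sketch, assuming a personal API key in LINEAR_API_KEY; the team ID and the follow-up itself are placeholders:

```python
"""Sketch of turning one extracted follow-up into a Linear issue via
https://api.linear.app/graphql. Team ID and content are placeholders."""
import json
import os
import urllib.request

MUTATION = """
mutation($input: IssueCreateInput!) {
  issueCreate(input: $input) { success issue { identifier url } }
}"""

def create_issue(team_id: str, title: str, description: str) -> dict:
    req = urllib.request.Request(
        "https://api.linear.app/graphql",
        data=json.dumps({
            "query": MUTATION,
            "variables": {"input": {
                "teamId": team_id, "title": title, "description": description,
            }},
        }).encode(),
        headers={
            "Content-Type": "application/json",
            # Personal API keys are sent as-is in the Authorization header.
            "Authorization": os.environ["LINEAR_API_KEY"],
        },
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)["data"]["issueCreate"]

# A vague note becomes a concrete, trackable ticket with context attached.
result = create_issue(
    "TEAM_ID_PLACEHOLDER",
    "Add alert on replica lag > 30s",
    "From postmortem 2024-05-01: paging fired 40 min after lag began.",
)
print(result["issue"]["identifier"], result["issue"]["url"])
```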
Hand off the pager without making the next engineer reconstruct the shift
On-call handoffs usually happen fast, right when context is easiest to lose. The next engineer starts their shift digging through incidents, deploys, noisy alerts, and half-finished Slack threads just to understand the current state. This recipe pulls the shift's PagerDuty incidents, deploy activity, Datadog alerts, and open Slack threads into a clean handoff brief the next on-call can use immediately.
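Pulling the shift's incidents is one REST call against PagerDuty's /incidents endpoint, filtered by the shift window. A minimal sketch, assuming an API token in PAGERDUTY_TOKEN; the window is illustrative:

```python
"""Sketch of the PagerDuty pull for a handoff brief. The token env var
and shift window are placeholders."""
import json
import os
import urllib.parse
import urllib.request

def shift_incidents(since: str, until: str) -> list[dict]:
    query = urllib.parse.urlencode({"since": since, "until": until})
    req = urllib.request.Request(
        f"https://api.pagerduty.com/incidents?{query}",
        headers={
            "Authorization": f"Token token={os.environ['PAGERDUTY_TOKEN']}",
            "Accept": "application/vnd.pagerduty+json;version=2",
        },
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)["incidents"]

# Summarize into brief bullets: what fired, how urgent, where it stands.
for inc in shift_incidents("2024-05-01T09:00:00Z", "2024-05-01T21:00:00Z"):
    print(f"- [{inc['urgency']}] {inc['title']} ({inc['status']}) {inc['html_url']}")
```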
Local-first AI assistant that automates small daily tasks safely on your device
A personal, local-first AI assistant that automates small daily tasks (organizing files, setting reminders, and monitoring system events) while leaving sensitive data untouched and asking for your approval before any risky action.
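To make the "no risky action without approval" rule concrete, here is a minimal local-only sketch of one such task: sorting downloads by file type, with the full move plan shown and confirmed before anything happens. Paths and categories are illustrative, and nothing leaves the machine:

```python
"""Sketch of an approval-gated file organizer. Directories and the
extension-to-folder mapping are illustrative assumptions."""
from pathlib import Path
import shutil

DOWNLOADS = Path.home() / "Downloads"
DESTINATIONS = {".pdf": "Documents", ".png": "Pictures", ".jpg": "Pictures"}

# Build the plan first; planning is safe, moving is the risky part.
moves = [
    (f, Path.home() / DESTINATIONS[f.suffix.lower()] / f.name)
    for f in DOWNLOADS.iterdir()
    if f.is_file() and f.suffix.lower() in DESTINATIONS
]

for src, dst in moves:
    print(f"{src} -> {dst}")
if moves and input(f"Move {len(moves)} file(s)? [y/N] ").lower() == "y":
    for src, dst in moves:
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(src), str(dst))
```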
Wikipedia-grade AI pattern removal
Comprehensive AI writing cleanup based on Wikipedia's WikiProject AI Cleanup guidelines. Catches 24+ distinct patterns including inflated symbolism, em dash overuse, rule of three, copula avoidance, and sycophantic tone.
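Two of the listed patterns are easy to turn into mechanical checks. A minimal sketch of em dash density and rule-of-three detection; the threshold hint and regexes are illustrative, not taken from the WikiProject guidelines:

```python
"""Sketch of two illustrative checks from the pattern list: em dash
overuse and rule-of-three triads. Thresholds and regexes are assumptions."""
import re

def em_dash_density(text: str) -> float:
    """Em dashes per sentence; AI drafts often push this noticeably high."""
    sentences = max(1, len(re.findall(r"[.!?](?:\s|$)", text)))
    return text.count("\u2014") / sentences

def rule_of_three_hits(text: str) -> list[str]:
    """Triads like 'fast, simple, and reliable', a common AI cadence."""
    return re.findall(r"\b\w+, \w+, and \w+\b", text)

sample = ("The garden was quiet \u2014 almost sacred \u2014 a place of "
          "peace, reflection, and renewal.")
print(em_dash_density(sample), rule_of_three_hits(sample))
```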