
Engagement Logbook and Evidence Index

Write-as-you-go notes so reporting doesn't become the project

Reporting and evidence management are often the most painful parts of a pentest. This recipe creates a lightweight logbook workflow: record commands as you run them, capture evidence with consistent filenames, and build a report-ready index as you go. It is tool-agnostic and works with any reporting format.

House Recipe · Work · 8 min

INGREDIENTS

PROMPT

Create a skill called "Engagement Logbook".

Inputs I will provide:
- Engagement name/code and target list
- Any required evidence naming conventions from the client
- Reporting format requirements (Word/PDF/Markdown)

Task:
1) Generate a recommended folder structure and naming conventions.
2) Provide commands to start/stop a terminal session log and where to store it.
3) Provide a living index.md template that I can fill during the engagement.
4) Include guidance on avoiding secrets in logs and safe storage practices.

What this fixes

Common symptoms:

  • "We finished testing, now we have to reconstruct what we did"
  • Screenshots and outputs have inconsistent names and get lost
  • Report writing takes longer than testing due to missing notes

Prerequisites

  • A dedicated engagement folder (local encrypted storage preferred)
  • A simple convention for timestamps and target IDs
  • Optional: git for local version history (avoid remote pushes unless approved)
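For the optional git history, a minimal local-only setup looks like this (the inline `-c` identity flags are just so the commit works without touching global git config; adjust to taste):

```shell
# Local-only version history for the engagement folder. No remote is added,
# in line with the "avoid remote pushes unless approved" note above.
git init -q engagement
cd engagement
# Inline -c identity avoids depending on global git config.
git -c user.name="tester" -c user.email="tester@local" \
    commit -q --allow-empty -m "engagement start"
git log --oneline
```

Committing at meaningful checkpoints (end of day, after each finding) gives you a timeline for free.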

Steps and commands

  1. Create a standard folder structure:

```
engagement/
  00-admin/
  01-scope/
  02-notes/
  03-evidence/
  04-tool-output/
  05-findings/
  06-report/
```
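The tree above can be created in one step; the loop form below is POSIX-safe (the top-level name would normally be your engagement code, e.g. ACME-2026Q1):

```shell
# Create the standard engagement tree; rename "engagement" to your code.
for d in 00-admin 01-scope 02-notes 03-evidence 04-tool-output 05-findings 06-report; do
  mkdir -p "engagement/$d"
done
ls engagement
```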

  2. Start a command log per work session:
  • Record terminal I/O:

`script -af 04-tool-output/terminal-$(date +%F).log`

  • Stop with `exit` when done.
  3. Evidence capture naming convention:
  • `YYYY-MM-DD_target_component_shortdesc.ext`
  • Example: `2026-03-28_app_login_xss_poc.png`
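A tiny helper (hypothetical, not part of any tool) builds names matching the convention so you never type the date by hand:

```shell
# Hypothetical helper: build an evidence filename per the convention
# YYYY-MM-DD_target_component_shortdesc.ext
evname() {
  # usage: evname <target> <component> <shortdesc> <ext>
  printf '%s_%s_%s_%s.%s\n' "$(date +%F)" "$1" "$2" "$3" "$4"
}
evname app login xss_poc png   # e.g. <today>_app_login_xss_poc.png
```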
  4. Maintain a single living index file at `02-notes/index.md`, with sections for:
  • Scope and constraints
  • Credentials provided and where they are stored
  • Daily timeline
  • Evidence links
  • Draft findings list with severity placeholders
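A starting skeleton for `02-notes/index.md` can be dropped in with a heredoc (section names mirror the list above; rename as you like):

```shell
mkdir -p 02-notes
# Seed index.md with the sections listed above; fill in during the engagement.
cat > 02-notes/index.md <<'EOF'
# Engagement Index

## Scope and constraints

## Credentials provided and where stored

## Daily timeline

## Evidence links

## Draft findings (severity TBD)
EOF
```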
  5. Convert tool output into stable artifacts:
  • Prefer machine-readable exports (XML/JSON) + a human-friendly render (HTML/PDF).
  • Store both and link them from index.md.
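As a sketch of that pairing, using placeholder artifacts (the filenames here are invented examples; in practice the XML would come from a flag like nmap's `-oX`, and the HTML from whatever render your tool supports):

```shell
mkdir -p 02-notes 04-tool-output
# Placeholder artifact pair standing in for real tool exports.
touch 04-tool-output/2026-03-28_net_portscan.xml \
      04-tool-output/2026-03-28_net_portscan.html
# Link both forms from the living index so the report can cite either.
cat >> 02-notes/index.md <<'EOF'
- Port scan: [render](../04-tool-output/2026-03-28_net_portscan.html), [raw XML](../04-tool-output/2026-03-28_net_portscan.xml)
EOF
```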

Expected outputs

  • A clean audit trail of commands and outputs per day
  • Evidence that maps cleanly into report sections (finding → proof → remediation)
  • Faster final reporting (less reconstruction)

Common errors and troubleshooting

  • Sensitive data in logs: redact before sharing; store logs encrypted; avoid pasting secrets into terminal history.
  • Tools write output to unexpected locations: always pass an explicit output path (e.g. `-o`) and confirm the file exists after each run.
  • Forgetting to start the logger: add `script` to your session startup checklist or a shell alias.


Example inputs

  • Engagement code: ACME-2026Q1
  • Targets: app1.acme.test, 10.0.0.0/24
Tags: #pentesting #reporting #notes #evidence #workflow