Why Enterprises Need an AI Chain of Custody
If you can’t audit how the AI got there, you can’t defend the work.
In May 2025, the provincial government of Newfoundland and Labrador in Canada released a 526-page healthcare report prepared by Deloitte under a $1.6M engagement. Months later, investigators found fabricated citations, non-existent papers, and false attributions to real academics. Deloitte admitted AI had been used to produce citations. (Fortune, 2025)
Previously, Deloitte had delivered another government report — this time for Australia — with invented papers and fake court quotes. (Fortune, 2025)
Two governments. Two reports. Same problem: no one could trace how the AI-generated content made it into final deliverables. No documentation of which models were used, what prompts were given, or how outputs were validated.
This is the problem enterprises face right now. AI-assisted work is flowing into critical documents with zero traceability. And when something breaks six months later, there’s no audit trail to defend the work.
Cybersecurity and digital forensics solved this decades ago with access logs, version control, and data lineage. Every action traceable. Every change documented. That’s not bureaucratic overhead — it’s the foundation of accountability.
AI needs the same rigor.
The Hidden Problem
These aren’t edge cases. They’re warnings of a systemic problem: AI-generated content is flowing into high-stakes professional work with almost no documentation of how it got there.
Teams rely on AI to:
- draft marketing copy
- summarize legal research
- write code
- generate financial projections
- and more…
And six months later, when something breaks, no one can answer:
- Which model was used?
- What version?
- What was the exact prompt?
- Who reviewed the output?
- What changed after generation?
Without that, you have liability without evidence.
In both Deloitte cases, investigators couldn’t reconstruct which parts were AI-generated or what prompts produced the fabrications. Without documentation, Deloitte could only insist the errors “did not impact findings” — a claim impossible to verify. (The Independent, 2025)
Why This Matters Now
Regulators are closing in:
- The EU AI Act adds obligations for high-risk systems, including documentation, transparency, and technical logging requirements. (CERRE, 2025)
- The SEC is scrutinizing AI use in financial disclosures. (Crowell, 2024)
- Compliance regulators want proof of validation and review. (EU AI Act, 2025)
But the bigger issue is operational trust.
When a report is “AI-assisted,” does that mean:
- 5% AI or 95% AI?
- drafting only, or recommendations?
- expert-reviewed or pasted verbatim?
Without traceability, no one can meaningfully assess or defend the work.
What an AI Chain of Custody Should Capture
The concept isn’t foreign to enterprises — digital forensics has used chains of custody for decades. The same framework applies to AI:
- Model Identification: the exact model and version, not just “ChatGPT.”
- Prompt Documentation: the actual instructions and context provided.
- Timestamps and User Attribution: who invoked the AI and when.
- Output Versioning: what the AI produced before any human edits.
- Validation Records: who reviewed it, what checks were performed, and what was changed.
- Modification History: edits tracked like any other version-controlled asset.
Some teams already do this through structured logging and workflow tools. They treat AI interactions the way developers treat code commits: documented, reviewable, traceable.
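As a rough illustration, and using field names invented for this sketch rather than any standard schema, a single chain-of-custody record could be as simple as one structured object captured per AI interaction:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIChainOfCustodyRecord:
    """One record per AI interaction. Field names are illustrative, not a standard."""
    model_name: str                   # exact model identifier, not just "ChatGPT"
    model_version: str                # provider version or snapshot date
    prompt: str                       # the full instructions and context provided
    raw_output: str                   # what the model produced before any human edits
    invoked_by: str                   # user or service account that made the call
    invoked_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reviewed_by: str | None = None    # validation record: who signed off
    review_notes: str | None = None   # checks performed and what was changed
    final_output: str | None = None   # the approved, human-edited version
```

Stored alongside the deliverable’s own version history, records like this make it possible to answer, months later, exactly which model produced which text, under what instructions, and who approved it.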
A Governance Model We Already Know
This isn’t a new discipline. It’s an extension of what enterprises already do:
- Data lineage → AI output lineage
- Code reviews → AI-assisted content reviews
- Audit trails → AI interaction logs
- Change control → AI-assisted workflow approvals
The processes exist. They just haven’t been applied to AI yet.
How to Implement This — Practically
A phased approach keeps it lightweight; a short code sketch of Phases 2 to 4 follows the list:
Phase 1: Policy
Set the expectation: AI-assisted work must be documented.
Phase 2: Basic Metadata
Capture model, user, and timestamps automatically.
Phase 3: Prompt + Output Archiving
Store what was asked and what the AI generated.
Phase 4: Validation Workflows
Require human sign-off before finalization.
Phase 5: Automation
Integrate documentation into existing tools so it’s seamless.
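As a rough sketch of how Phases 2 through 4 might look in practice, the wrapper below archives the prompt, output, and basic metadata for every call and records a human sign-off before finalization. The function names, the call_model argument, and the JSONL log file are assumptions made for this illustration; a real deployment would use your provider’s SDK and your existing logging and approval tooling.

```python
import json
import getpass
from datetime import datetime, timezone

LOG_PATH = "ai_audit_log.jsonl"  # assumption: a simple append-only audit log

def _append(record: dict) -> None:
    # Every event is appended, never rewritten, so the history stays intact.
    with open(LOG_PATH, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")

def audited_generate(call_model, model_id: str, prompt: str) -> dict:
    """Phases 2 and 3: capture model, user, timestamp, prompt, and raw output automatically."""
    output = call_model(model_id, prompt)  # call_model is whatever function reaches your provider
    record = {
        "model": model_id,
        "user": getpass.getuser(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "raw_output": output,
        "reviewed_by": None,  # completed later, in the validation workflow
    }
    _append(record)
    return record

def record_review(record: dict, reviewer: str, notes: str, final_output: str) -> dict:
    """Phase 4: require an explicit, logged human sign-off before the output is finalized."""
    reviewed = {
        **record,
        "reviewed_by": reviewer,
        "review_notes": notes,
        "final_output": final_output,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }
    _append(reviewed)
    return reviewed
```

Because the log is append-only, the original generation and the later sign-off both survive as separate entries, which is exactly the kind of evidence that was missing in the Deloitte cases.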
This isn’t bureaucracy. It’s risk reduction and operational clarity.
The Cost of Doing Nothing
The Deloitte incidents show what happens without traceability:
- fabricated citations
- public investigations
- costly refunds (NDTV, 2025)
- reputational damage
- loss of client trust
The downstream harm went further: governments made decisions based on reports containing fake research during periods of public-sector strain. A researcher who was falsely cited told investigators that such reports must be “validated and accurate” since “they cost governments—and ultimately the public.” (The Independent, 2025)
Waiting for regulation is not a strategy; operating without internal rules only increases enterprise exposure in the meantime.
Worse, without documentation, organizations lose the ability to learn from their own AI usage — no insight into what works, what fails, or where oversight catches issues.
The companies that implement chain of custody now will have both compliance readiness and competitive advantage.
Making AI Assistance Defensible
AI promises speed and scale. But speed without accountability is risk, and automation without oversight is blind trust.
An AI chain of custody turns AI from a black box into a documented process. It protects teams. It makes outputs reviewable. It provides evidence when decisions are challenged.
We’ve built audit trails for code, data, and financial systems.
AI deserves the same governance.
Because if you can’t audit how the AI got there, you can’t defend the work. And work you can’t defend doesn’t belong in an enterprise.
