Responsible AI
We help organisations govern their AI responsibly. It's only fair we hold ourselves to the same standard. This page explains how Narrate uses AI features in our platform and services, and how our approach supports EU AI Act readiness.
Our commitment
As an AI governance consultancy and platform provider, we believe in practising what we advise. The commitments below apply both to AI use in our own operations and to the AI features built into our platform.
How Narrate uses AI
Narrate includes optional AI-assisted features to help you streamline governance and evidence workflows. These are assistive tools designed to support human decision-making, not autonomous systems.
- Evidence analysis and summaries: AI-powered analysis helps map evidence to controls, identify potential gaps, and assess relevance and recency. Outputs include confidence scores to help you review and validate findings; a minimal sketch of such a suggestion follows this list.
- Meeting and audio transcription: Automated transcription helps capture governance discussions, action items, and decisions. Transcripts are linked to evidence and audit trails so you can trace governance decisions.
- All AI outputs are suggestions. Final decisions and governance outcomes remain with you, your team, and your advisors.
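To make these safeguards concrete, here is a minimal sketch of what a confidence-scored, human-reviewed suggestion could look like. All field and function names are illustrative, not Narrate's actual schema:

```python
from dataclasses import dataclass

@dataclass
class EvidenceSuggestion:
    """Hypothetical shape of an AI evidence-mapping suggestion."""
    evidence_id: str   # the evidence item that was analysed
    control_id: str    # the control the AI proposes mapping it to
    confidence: float  # 0.0-1.0 score shown to guide human review
    rationale: str     # the model's explanation, surfaced to the reviewer
    status: str = "pending_review"  # every suggestion starts unapproved

def accept(suggestion: EvidenceSuggestion, reviewer: str) -> None:
    """A suggestion enters the governance record only after human approval."""
    suggestion.status = f"approved_by:{reviewer}"
```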
Human-in-the-loop and accountability
- AI outputs must be reviewed and approved by you, your team, or our consultants before use in any compliance decision or assessment.
- Maker/checker principles are embedded where appropriate: no individual can sign off their own AI output as final without independent review (see the sketch after this list).
- Clear accountability: You remain responsible for final decisions and compliance outcomes. We provide tools, guidance, and support to help you meet that responsibility.
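The maker/checker rule can be sketched in a few lines; names here are hypothetical, not our internal implementation:

```python
class MakerCheckerError(Exception):
    """Raised when a user attempts to approve their own AI output."""

def finalise_ai_output(output_id: str, maker: str, checker: str) -> dict:
    # The user who requested the AI output (the maker) can never be
    # the one who verifies it as final (the checker).
    if maker == checker:
        raise MakerCheckerError("Independent review required before finalising.")
    return {"output_id": output_id, "maker": maker,
            "checker": checker, "status": "final"}
```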
EU AI Act readiness alignment (practical, non-legal)
Our responsible AI practices and platform workflows are designed to help you operationalise EU AI Act readiness:
- AI system inventory and documentation: The platform helps you build habits around documenting your AI systems, use cases, and governance decisions so you can demonstrate readiness if needed.
- Traceability and audit trails: Comprehensive logging, evidence mapping, and decision trails help you show the governance story — who did what, when, and why.
- Monitoring, change control, and incident readiness: Workflows for tracking AI system changes, performance, incidents, and corrective actions help embed the governance mindset.
- These features help you operationalise readiness; they do not provide legal advice or certification.
Transparency
- If we use AI features in delivering our services (e.g., evidence summaries, transcription support), we will be clear with you about where and how they are used.
- We do not represent AI-generated outputs as solely human work without disclosure.
- AI features are optional and usage-controlled (via AI credits) so you can adjust your use as needed.
Data protection
- Privacy Firewall: Before any data reaches the AI provider, a local PII redaction engine automatically detects and removes sensitive information (email addresses, credit card numbers, IP addresses, national identity numbers, and phone numbers). Only sanitised text leaves our secure environment; a simplified sketch of the idea follows this list.
- Zero Training guarantee: AI features use the OpenAI Enterprise API (GPT-4o), which contractually prohibits using customer data to train, retrain, or improve models. This is distinct from consumer ChatGPT terms.
- Zero Retention: OpenAI processes requests statelessly. Input and output data is not retained for service improvement. API logs may be held for up to 30 days solely for abuse monitoring, then deleted.
- Data minimisation by design: We use a Retrieval-Augmented Generation (RAG) approach, so only the data necessary for the requested feature is sent as context for each individual query, and it is never used to train any model (see the retrieval sketch after this list).
- Tenant isolation: Row Level Security scoped by company_id ensures your data is strictly separated from other clients' data. All connections use TLS 1.3 encryption; all stored data uses AES-256 encryption at rest.
- Just-in-time (JIT) access controls: Narrate engineers cannot view customer evidence without explicit, time-bound permission grants. Access is automatically revoked after the permitted window (see the access sketch after this list).
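As a simplified illustration of the Privacy Firewall's redaction pass, the sketch below uses a few regex patterns. These are deliberately basic examples; the production engine covers more categories (such as national identity numbers) with more robust detection:

```python
import re

# Example patterns only; not the production Privacy Firewall rules.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "IP_ADDRESS": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\+?\d[\d ()-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders so only
    sanitised text ever leaves the secure environment."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or +44 20 7946 0958 from 192.168.0.1"))
# Contact [REDACTED_EMAIL] or [REDACTED_PHONE] from [REDACTED_IP_ADDRESS]
```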
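The retrieval side of data minimisation can be shown with a toy example: only the few passages most relevant to a query are selected as model context, and nothing else leaves the platform. The word-overlap scoring below is a stand-in for a real retrieval pipeline:

```python
def top_k_context(query: str, passages: list[str], k: int = 3) -> list[str]:
    """Toy retrieval: rank stored passages by word overlap with the query
    and return only the top k. Only these passages are sent as context."""
    query_words = set(query.lower().split())
    ranked = sorted(
        passages,
        key=lambda p: len(query_words & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:k]
```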
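And a minimal sketch of time-bound access grants; the names and the default window are illustrative, not our internal tooling:

```python
from datetime import datetime, timedelta, timezone

_grants: dict[tuple[str, str], datetime] = {}  # (engineer, tenant) -> expiry

def grant_access(engineer: str, tenant: str, minutes: int = 60) -> None:
    """Record an explicit, time-bound permission for an engineer."""
    _grants[(engineer, tenant)] = (
        datetime.now(timezone.utc) + timedelta(minutes=minutes)
    )

def can_view_evidence(engineer: str, tenant: str) -> bool:
    """Deny by default; grants expire automatically after the window."""
    expiry = _grants.get((engineer, tenant))
    return expiry is not None and datetime.now(timezone.utc) < expiry
```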
AI features and safeguards
When using AI-assisted features in the Narrate platform:
- Automatic redaction: The Privacy Firewall automatically strips PII before data reaches the AI provider. Administrators can choose between Standard and Aggressive redaction modes for higher-sensitivity environments.
- Global AI Toggle: Administrators can enable or disable all AI features at the organisation level at any time.
- Per-Control Sensitivity: Individual controls can be marked as "High Sensitivity", which blocks AI analysis for those specific controls.
- Output validation: Treat AI outputs as assistive suggestions, not authoritative truth. Review findings with your team and subject matter experts before accepting them.
- Security and logging: All AI feature use is logged with user identity, timestamp, and action for audit purposes. Access controls ensure only authorised team members can access AI features.
- Usage controls: AI features are metered by AI credits so you can monitor and limit use based on your budget and needs. The sketch after this list illustrates how the toggle, sensitivity, credit, and logging checks combine before any AI call.
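Taken together, these safeguards amount to a pre-flight check before any AI call. The sketch below is illustrative; the org and control fields are hypothetical stand-ins for the toggle, sensitivity flag, credit metering, and audit logging described above:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def ai_analysis_allowed(org: dict, control: dict, user: str) -> bool:
    """Run every safeguard before an AI request is allowed to proceed."""
    if not org.get("ai_enabled", False):        # Global AI Toggle
        return False
    if control.get("high_sensitivity", False):  # Per-Control Sensitivity
        return False
    if org.get("ai_credits", 0) <= 0:           # AI credit metering
        return False
    # Every permitted AI action is logged with user, timestamp, and action.
    audit_log.info("ai_analysis user=%s control=%s at=%s", user,
                   control.get("id"), datetime.now(timezone.utc).isoformat())
    return True
```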
Limitations
- AI can be wrong. It may misinterpret context, miss nuances, or hallucinate connections that don't exist.
- Evidence or documentation may be outdated or incomplete. Always validate AI findings against your source material.
- Treat AI outputs as a first draft or starting point, not a finished analysis. Review with your team and include subject matter experts in final decisions.
Our goal: to help organisations build an AI management system (AIMS) that is practical, auditable, and aligned with good governance rather than paperwork for its own sake, and to achieve evidence-first readiness for EU AI Act expectations.
We're not a law firm; we operationalise governance, documentation and evidence.
Last reviewed: February 2026