Responsible AI
We help organisations govern their AI responsibly. It's only fair we hold ourselves to the same standard. This page explains how Narrate uses AI features in our platform and services, and how our approach supports EU AI Act readiness.
Our commitment
As an AI governance consultancy and platform provider, we believe in practising what we advise. The sections below set out how we approach AI use in our own operations and platform features.
How Narrate uses AI
Narrate includes optional AI-assisted features to help you streamline governance and evidence workflows. These are assistive tools designed to support human decision-making, not autonomous systems.
- Evidence analysis and summaries: AI-powered analysis helps map evidence to controls, identify potential gaps, and assess relevance and recency. Outputs include confidence scores to help you review and validate findings.
- Meeting and audio transcription: Automated transcription helps capture governance discussions, action items, and decisions. Transcripts are linked to evidence and audit trails so you can trace governance decisions.
- All AI outputs are suggestions. Final decisions and governance outcomes remain with you, your team, and your advisors.
Human-in-the-loop and accountability
- All AI outputs are suggestions and must be reviewed and approved by you, your team, or our consultants before being used in any compliance decision or assessment.
- Maker/checker principles are embedded where appropriate — no individual can self-verify an AI output as final without independent review.
- Clear accountability: You remain responsible for final decisions and compliance outcomes. We provide tools, guidance, and support to help you meet that responsibility.
EU AI Act readiness alignment (practical, non-legal)
Our responsible AI practices and platform workflows are designed to help you operationalise EU AI Act readiness in several ways:
- AI system inventory and documentation: The platform helps you build habits around documenting your AI systems, use cases, and governance decisions so you can demonstrate readiness if needed.
- Traceability and audit trails: Comprehensive logging, evidence mapping, and decision trails help you show the governance story — who did what, when, and why.
- Monitoring, change control, and incident readiness: Workflows for tracking AI system changes, performance, incidents, and corrective actions help embed the governance mindset.
- These features help you operationalise readiness; they do not provide legal advice or certification.
AI Governance Module — Built into the Platform
Beyond our own responsible practices, the Narrate Platform includes a dedicated AI Governance module that helps your organisation build and maintain a comprehensive AI management system.
AI System Inventory
Register every AI system in your organisation with auto-generated tracking numbers (AIGOV-YYYY-NNN). Capture system type, vendor, model version, hosting location, data processing agreements, EU AI Act risk classification (Minimal/Limited/High/Unacceptable), autonomy and data sensitivity scores, and behavioural controls configuration. Link each system to controls across multiple standards simultaneously.
Structured Risk Assessments
Run three types of structured assessment for each registered AI system:
- DPIA (Data Protection Impact Assessment): Questionnaire-based scoring covering data handling, processing purposes, retention, and access controls. Risk level calculated automatically from weighted responses.
- Bias & Fairness Assessment: Evaluate your AI systems for potential bias across protected characteristics, output fairness, and monitoring practices.
- Security Assessment: Assess model security, adversarial robustness, access controls, and incident response readiness.
Each assessment preserves its full history — complete new assessments while retaining all previous records. Recommendations generated from assessments can be promoted directly to the Risk Register as tracked risks.
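The "risk level calculated automatically from weighted responses" step can be illustrated with a minimal weighted-scoring sketch. The question IDs, 0-4 answer scale, weights, and risk thresholds below are our assumptions for illustration, not the platform's actual calibration.

```python
def dpia_risk_level(responses: dict[str, int], weights: dict[str, float]) -> str:
    """Combine weighted questionnaire answers into an overall risk level.

    `responses` maps question IDs to answers on a 0-4 scale (0 = no risk);
    `weights` maps the same IDs to their relative importance.
    Thresholds are illustrative only.
    """
    total = sum(weights[q] * answer for q, answer in responses.items())
    maximum = 4 * sum(weights[q] for q in responses)  # worst-case score
    score = total / maximum  # normalised to 0..1
    if score >= 0.75:
        return "High"
    if score >= 0.40:
        return "Medium"
    return "Low"

answers = {"retention": 3, "special_categories": 4, "access_controls": 1}
weights = {"retention": 1.0, "special_categories": 2.0, "access_controls": 1.5}
print(dpia_risk_level(answers, weights))  # Medium
```

Normalising against the worst-case score keeps the thresholds stable even when questionnaires differ in length or weighting.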
EU AI Act Compliance Workflow
Track compliance across 16 EU AI Act obligations (Articles 9–15, 17, 27, 43, 48–50, 72–73) grouped into Conformity Assessment, Transparency, Registration, and FRIA categories:
- Per-obligation tracking: Status (Not Started / In Progress / Compliant / Non-Compliant), evidence notes, assigned owner, and completion dates for each obligation.
- FRIA Template (Article 27): 9 structured questions covering fundamental rights assessment, proportionality analysis, and safeguard documentation — ready for auditor review.
- Technical Documentation Generator (Annex IV): 8 sections auto-populated from your system data with markdown export for inclusion in compliance documentation.
- Cross-system compliance counts: See "X/Y AI Systems" compliant per obligation across all registered systems in a single dashboard view.
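The "X/Y AI Systems" roll-up in the last bullet amounts to a per-obligation aggregation across system records. The record structure below is a hypothetical sketch, not the platform's schema.

```python
from collections import defaultdict

def compliance_counts(systems: list[dict]) -> dict[str, str]:
    """Summarise how many registered systems are compliant per obligation.

    Each system record maps obligation IDs to a status string; structure
    and status labels are illustrative assumptions.
    """
    compliant: dict[str, int] = defaultdict(int)
    tracked: dict[str, int] = defaultdict(int)
    for system in systems:
        for obligation, status in system["obligations"].items():
            tracked[obligation] += 1
            if status == "Compliant":
                compliant[obligation] += 1
    return {ob: f"{compliant[ob]}/{tracked[ob]} AI Systems" for ob in tracked}

fleet = [
    {"obligations": {"Art. 9": "Compliant", "Art. 13": "In Progress"}},
    {"obligations": {"Art. 9": "Compliant", "Art. 13": "Compliant"}},
]
print(compliance_counts(fleet))
# {'Art. 9': '2/2 AI Systems', 'Art. 13': '1/2 AI Systems'}
```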
AI Governance Dashboard
A unified view showing total systems count, critical/high risk breakdown, production systems, controls coverage percentage, and a composite AI readiness score. A risk heat map plots autonomy level against data sensitivity across all registered systems, giving you immediate visibility into your highest-risk AI deployments.
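The heat map described above is essentially a two-dimensional count of systems bucketed by autonomy and data-sensitivity scores. A minimal sketch, assuming 1-3 integer scores on each axis (the field names and grid size are our assumptions):

```python
def heat_map(systems: list[dict], size: int = 3) -> list[list[int]]:
    """Count systems per (autonomy, data-sensitivity) cell of a risk grid.

    Scores are assumed to be integers from 1 to `size` on each axis;
    rows index autonomy, columns index data sensitivity. Illustrative only.
    """
    grid = [[0] * size for _ in range(size)]
    for s in systems:
        grid[s["autonomy"] - 1][s["data_sensitivity"] - 1] += 1
    return grid

systems = [
    {"name": "support chatbot", "autonomy": 2, "data_sensitivity": 3},
    {"name": "scoring model", "autonomy": 3, "data_sensitivity": 3},
]
for row in heat_map(systems):
    print(row)
```

Systems landing in the top-right cells (high autonomy, high sensitivity) are the deployments that deserve review first.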
Transparency
- If we use AI features in delivering our services (e.g., evidence summaries, transcription support), we will be clear with you about where and how they are used.
- We do not represent AI-generated outputs as solely human work without disclosure.
- AI features are optional and usage-controlled (via AI credits) so you can adjust your use as needed.
Data protection
- Privacy Firewall: Before any data reaches the AI provider, a local PII redaction engine automatically detects and removes sensitive information (email addresses, credit card numbers, IP addresses, national identity numbers, and phone numbers). Only sanitised text leaves our secure environment.
- Zero Training guarantee: AI features use the OpenAI Enterprise API, which contractually prohibits using customer data to train, retrain, or improve models. This is distinct from consumer ChatGPT terms.
- Zero Retention: OpenAI processes requests statelessly. Input and output data is not retained for service improvement. API logs may be held for up to 30 days solely for abuse monitoring, then deleted.
- Data minimisation by design: We use a Retrieval-Augmented Generation (RAG) approach — only the data necessary for the requested feature is sent as context for each query, and it is never used to train any model.
- Tenant isolation: Row Level Security scoped by company_id ensures your data is strictly separated from other clients' data. All connections use TLS 1.3 encryption; all stored data uses AES-256 encryption at rest.
- JIT access controls: Narrate engineers cannot view customer evidence without explicit, time-bound permission grants. Access is automatically revoked after the permitted window.
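The Privacy Firewall's redaction step can be pictured as pattern-based detection that replaces sensitive values with typed placeholders before any text leaves the secure environment. The two patterns below are simplified illustrations; the production engine detects far more PII types and far more robustly.

```python
import re

# Illustrative patterns only — the production Privacy Firewall covers more
# PII types (card numbers, IP addresses, national identity numbers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders before any AI call."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or +44 20 7946 0958."))
# Contact [REDACTED-EMAIL] or [REDACTED-PHONE].
```

Typed placeholders (rather than blanking the text) preserve enough context for the AI feature to remain useful while the underlying values stay local.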
AI features and safeguards
When using AI-assisted features in the Narrate platform:
- Automatic redaction: The Privacy Firewall automatically strips PII before data reaches the AI provider. Administrators can choose between Standard and Aggressive redaction modes for higher-sensitivity environments.
- Global AI Toggle: Administrators can enable or disable all AI features at the organisation level at any time.
- Per-Control Sensitivity: Individual controls can be marked as "High Sensitivity", which blocks AI analysis for those specific controls.
- Output validation: Treat AI outputs as assistive suggestions, not authoritative truth. Review findings with your team and subject matter experts before accepting them.
- Security and logging: All AI feature use is logged with user identity, timestamp, and action for audit purposes. Access controls ensure only authorised team members can access AI features.
- Usage controls: AI features are metered by AI credits so you can monitor and limit use based on your budget and needs.
Limitations
- AI can be wrong. It may misinterpret context, miss nuances, or hallucinate connections that don't exist.
- Evidence or documentation may be outdated or incomplete. Always validate AI findings against your source material.
- Treat AI outputs as a first draft or starting point, not a finished analysis. Review with your team and include subject matter experts in final decisions.
Our goal: to help organisations build an AI management system (AIMS) that is practical, auditable, and aligned with good governance — not paperwork for its own sake — with evidence-first readiness for EU AI Act expectations.
We're not a law firm; we operationalise governance, documentation and evidence.
Last reviewed: February 2026