Trust & Security
We aim to minimise data collection and to maintain secure, professional practices appropriate for a compliance consultancy. This page covers both our Website and the Narrate Platform where applicable. Website enquiries and Platform customer environments have different data flows and security considerations.
Data minimisation
The Website is primarily informational. We collect only what we need to respond to enquiries: full name, work email, company name, service interest, and message (all mandatory).
Narrate Platform (SaaS) – Security overview
The Narrate Platform is designed with security and data isolation at its core. Key practices include:
- Row Level Security (RLS): Every database query is scoped by company_id at the PostgreSQL level, enforcing strict tenant isolation and preventing cross-tenant data access (see the sketch after this list).
- Role-based access controls (RBAC): Team members see only what they're authorised to access, with the principle of least privilege applied to all user roles.
- Just-in-Time (JIT) access: Narrate engineers cannot view customer evidence or data without explicit, time-bound permission. Access is revoked automatically after the permitted window.
- Comprehensive audit logging: All key actions (user changes, document access, AI feature use) are logged to support traceability and compliance.
- AES-256 encryption at rest: All stored data is encrypted at rest using AES-256.
- TLS 1.3 encryption in transit: All connections between clients, servers, and AI providers are encrypted using TLS 1.3.
- Secure file access: Time-limited signed URLs prevent unauthorised access to uploaded evidence and documents (also illustrated in the sketch after this list).
- EU data residency: Platform data is hosted in the European Union (Supabase Frankfurt) with application hosting via Vercel/AWS.
- Backups and recovery: Regular backups and tested recovery procedures support business continuity.
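As a concrete illustration of the tenant-isolation and signed-URL points above, the sketch below shows a hypothetical Postgres RLS policy (as a comment) alongside a supabase-js query and a signed-URL request. The table, bucket, and JWT claim names are our own assumptions for illustration, not the Platform's actual schema.

```typescript
// Minimal sketch of tenant isolation and signed file access on a
// Supabase/Postgres stack. Table, bucket, and claim names are hypothetical.
//
// An RLS policy of roughly this shape scopes every row to the caller's company:
//
//   create policy "tenant_isolation" on evidence
//     using (company_id = (auth.jwt() ->> 'company_id')::uuid);
//
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

export async function listEvidence() {
  // No explicit tenant filter is needed here: with a policy like the one
  // above enabled, Postgres only returns rows whose company_id matches the
  // company_id claim in the caller's JWT.
  const { data, error } = await supabase.from("evidence").select("*");
  if (error) throw error;
  return data;
}

export async function getEvidenceDownloadUrl(path: string) {
  // Time-limited signed URL (here, 10 minutes) for a stored evidence file.
  const { data, error } = await supabase.storage
    .from("evidence-files")
    .createSignedUrl(path, 600);
  if (error) throw error;
  return data.signedUrl;
}
```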
Sub-processors (platform)
The Narrate Platform relies on specialist third-party providers for hosting, databases, authentication, billing, email, and AI processing services. These may include providers for cloud infrastructure, application databases, identity and access management, payment processing, transactional email, and AI model processing. For the authoritative list of sub-processors and their handling of data, please refer to our Privacy Policy.
AI features and data handling
The Narrate Platform includes optional AI-assisted features (evidence analysis, document assistance, and governance meeting transcription). Our AI architecture is designed with privacy-first principles:
Privacy Firewall
Before any data reaches the AI provider, Narrate applies a local Privacy Firewall — a PII redaction engine that runs within our secure environment. This automatically detects and removes:
- Email addresses
- Credit card numbers
- IP addresses
- Social Security / national identity numbers
- Phone numbers
Administrators can configure redaction strictness (Standard or Aggressive) based on their data sensitivity requirements. Only sanitised text is transmitted to the AI provider.
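As a rough illustration of how a redaction engine of this kind might work, the sketch below applies regex-based masking for the categories listed above. The patterns, labels, and function names are illustrative assumptions, not the Platform's actual implementation.

```typescript
// Illustrative regex-based PII redaction, run locally before any text is sent
// to an AI provider. Patterns are simplified examples, not production rules.

type Strictness = "standard" | "aggressive";

const PATTERNS: Record<string, RegExp> = {
  email: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g,
  creditCard: /\b(?:\d[ -]?){13,16}\b/g,
  nationalId: /\b\d{3}-\d{2}-\d{4}\b/g, // e.g. US SSN format
  ipAddress: /\b(?:\d{1,3}\.){3}\d{1,3}\b/g,
  phone: /\+?\d[\d\s().-]{7,}\d/g,
};

export function redact(text: string, strictness: Strictness = "standard"): string {
  let sanitised = text;
  for (const [label, pattern] of Object.entries(PATTERNS)) {
    sanitised = sanitised.replace(pattern, `[REDACTED:${label}]`);
  }
  if (strictness === "aggressive") {
    // Aggressive mode might additionally mask long digit runs that could be
    // identifiers (account numbers, employee IDs, and similar).
    sanitised = sanitised.replace(/\b\d{6,}\b/g, "[REDACTED:number]");
  }
  return sanitised;
}
```

Only the sanitised output of a step like this would be transmitted onward; the original text never leaves the secure environment.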
Zero Training, Zero Retention
- Zero Training: AI features use the OpenAI Enterprise API (GPT-4o), which contractually prohibits using customer data to train, retrain, or improve models. This is distinct from consumer ChatGPT terms.
- Zero Retention: OpenAI processes requests statelessly and does not retain input or output data for service improvement. API logs may be retained for up to 30 days solely for abuse monitoring, then deleted.
- RAG approach: Narrate uses Retrieval-Augmented Generation — your data is used as context for individual queries, never to train any model.
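The sketch below illustrates the RAG pattern in outline: sanitised evidence excerpts are supplied as per-request context to a chat-completion call and discarded afterwards. The prompt wording and helper names are assumptions for illustration only.

```typescript
// Hedged sketch of a retrieval-augmented call: redacted excerpts are passed
// as context for a single request and are not used to train any model.
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function analyseControl(controlText: string, evidenceExcerpts: string[]) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content: "You assess whether the supplied evidence supports a compliance control.",
      },
      {
        role: "user",
        // Sanitised excerpts are included only as context for this request.
        content: `Control:\n${controlText}\n\nEvidence:\n${evidenceExcerpts.join("\n---\n")}`,
      },
    ],
  });
  // The response is a suggestion to be reviewed by a human before use.
  return completion.choices[0].message.content;
}
```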
Customer AI controls
- Global AI Toggle: Enable or disable all AI features at the organisation level at any time.
- Per-Control Sensitivity: Mark individual controls as "High Sensitivity" to block AI analysis for those specific controls.
- Redaction Strictness: Choose between Standard and Aggressive redaction levels.
- Human-in-the-loop: All AI outputs are suggestions — they must be reviewed and approved by your team before use in any compliance decision.
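A simplified sketch of how these controls might be represented and checked is shown below; the field names are hypothetical and not the Platform's actual configuration schema.

```typescript
// Illustrative shape of organisation-level AI settings and the check applied
// before any AI analysis runs. Names are assumptions, not the real schema.

interface AiSettings {
  aiEnabled: boolean;                             // global AI toggle
  redactionStrictness: "standard" | "aggressive"; // redaction strictness
  highSensitivityControlIds: string[];            // controls excluded from AI analysis
}

export function canRunAiAnalysis(settings: AiSettings, controlId: string): boolean {
  if (!settings.aiEnabled) return false; // organisation-wide off switch
  return !settings.highSensitivityControlIds.includes(controlId); // per-control block
}
```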
How enquiries are handled
Enquiry submissions are processed via Formspree and forwarded to our Microsoft 365 mailbox, from which we respond.
Retention
- Website security/technical logs: 30 days.
- Enquiry emails and correspondence: 3 years.
- Accounting records: retained as required by law.
- Platform customer data: retention is contract-based and aligned to customer requirements and applicable law.
Access controls
MFA is enabled for administrative access to key systems. Access is limited to authorised personnel. For the Narrate Platform, role-based access controls ensure that customer accounts and team members can only access data and features they are authorised to use.
Encryption and backups
At rest: All platform data is encrypted using AES-256. In transit: All connections use TLS 1.3, including between Narrate servers and AI providers. Backups are maintained with automated recovery procedures.
Suppliers used for website operations
Cloudflare (security/performance/analytics), Formspree (enquiry form handling), Microsoft 365 (email), GitHub (website source control/deployment), Calendly (meeting scheduling), Zoom (video conferencing).
EU AI Act readiness support (non-legal)
The Narrate Platform is designed to support operational EU AI Act readiness by helping teams maintain comprehensive documentation, evidence mapping, audit trails, and governance workflows. Features like audit logging, change control workflows, and evidence traceability help you build demonstrable, auditable governance practices. Important: We are not a law firm; we provide operational governance and evidence tooling. Consult legal advisors for AI Act compliance interpretation.
Reporting a security concern
Responsible disclosure
If you believe you've found a security issue on our Website or Platform, please email support@narratecompliance.com with a description of the issue, steps to reproduce (if applicable), and your contact details.
Expected response: We aim to acknowledge reports within 48 hours and will keep you informed of our investigation.
Safe harbour: We will not take legal action against researchers who report issues in good faith and follow responsible disclosure practices.
Last reviewed: February 2026