
GDPR Compliant Voice AI Platforms with EU Data Residency - Ranked 2026

Justas Butkus · Founder, Ainora · 14 min read

TL;DR

A GDPR-compliant voice AI platform is one that processes EU-resident voice data with a lawful basis (typically Article 6(1)(b) contract or Article 6(1)(f) legitimate interest), under an Article 28 data processing agreement, with EU data residency, encryption-at-rest and in-transit, and no transfer of personal data outside the EEA without Article 44-46 safeguards. Cognigy, Parloa, PolyAI and Ainora are the four vendors most often shortlisted by EU buyers in 2026.

Key articles: Art. 6 lawful basis · Art. 28 processor DPA · Art. 32 security of processing · Art. 44 international transfers

What is a GDPR compliant voice AI platform?

A GDPR-compliant voice AI platform is one that processes EU-resident voice data with a documented lawful basis under Regulation (EU) 2016/679 - typically Article 6(1)(b) where the call performs a contract with the data subject, or Article 6(1)(f) where the controller has a legitimate interest that is not overridden by the rights of the data subject. The vendor acts as a data processor under Article 28, signs a written data processing agreement (DPA), implements security measures appropriate to risk under Article 32, and either keeps the data in the European Economic Area (EEA) or relies on Article 44-46 transfer safeguards.

Voice data is high-sensitivity personal data. A caller's voice itself, when used for identification, is biometric data under Article 4(14) and triggers Article 9 special-category rules. Even without biometric profiling, recorded calls almost always contain personal data (name, contact details, account references) and frequently special-category data (health, financial situation). EU supervisory authorities have repeatedly emphasised this point in their voice AI guidance.

Which European voice AI vendor is best for GDPR 2026?

The four vendors most consistently shortlisted by EU buyers for GDPR-aligned voice AI in 2026 are Cognigy, Parloa, PolyAI and Ainora. The ranking below weights EU hosting depth, formal certifications, contractual maturity, and ability to localise voice and language for European callers.

1. Cognigy - 4.7/5

Düsseldorf-headquartered conversational AI platform. Hosting options across EU (Frankfurt, Berlin), US and on-premises. ISO/IEC 27001 and SOC 2 certifications. Mature procurement-grade Article 28 DPA and detailed sub-processor list. Strong fit for regulated industries with formal vendor risk programmes.

Best for: Tier-1 enterprises with formal vendor security and procurement teams

2. Parloa - 4.5/5

Berlin-headquartered EU-native voice AI platform. Germany-based engineering and operations. EU residency by default. Strong native German, French and Italian voice quality. Standard processor DPA and ISO 27001 in place. Mid-market to enterprise contact centre focus.

Best for: DACH-region enterprises and contact centres needing native German voice quality

3. PolyAI - 4.4/5

UK-headquartered voice AI platform with 75+ language coverage. EU data residency available on enterprise tiers. Strong telephony heritage from financial services and hospitality deployments. Buyers should explicitly negotiate the EU residency clause and sub-processor list.

Best for: Multilingual enterprises with a global telephony footprint and a need for 75+ languages

4. Ainora - 4.5/5

Lithuanian-headquartered managed voice AI option. EU hosting by default. GDPR Article 28 DPA available. Strong native Lithuanian, Polish, English and German voice quality. Ten live demo numbers in Lithuania and the US, callable today. Custom pricing - contact sales. Best fit for EU mid-market deployments that want a managed vendor rather than a DIY platform.

Best for: EU mid-market deployments needing managed delivery and Baltic / CEE language coverage

How does voice data flow under GDPR Articles 6, 28, 32 and 44?

A typical voice AI call processes three categories of personal data: the audio stream itself, the transcript, and the structured outputs (entities, intents, booking records). Each touches a different cluster of GDPR articles.

| Stage | Data category | Key GDPR articles | Buyer questions |
|---|---|---|---|
| Inbound audio capture | Voice (potential biometric) | Art. 6 lawful basis, Art. 9 special category, Art. 13 information | Where is the SIP termination? Which Member State legal entity holds the call? |
| Speech-to-text | Voice + draft transcript | Art. 6, Art. 32 security of processing | Is STT EU-region? Is the audio retained or only the transcript? |
| LLM reasoning | Transcript + context | Art. 6, Art. 28 processor terms, Art. 44 international transfers | Which region is the model in? Is there an Art. 28 DPA covering this sub-processor? |
| Text-to-speech | Synthetic voice output | Art. 6, Art. 32 | Where is TTS hosted? Are voice prints retained? |
| Storage and analytics | Recordings, transcripts, structured data | Art. 5 storage limitation, Art. 17 erasure, Art. 32 | How long is data retained? Can a data subject erasure request be honoured end-to-end? |

The most common GDPR failure mode for voice AI deployments is invisible international transfer: an EU caller speaks to an EU number, but the underlying LLM or TTS runs in a US region without Article 44-46 safeguards. The European Data Protection Board has repeatedly flagged this pattern in its guidance, and the Court of Justice's Schrems II decision (Case C-311/18) means SCC-based transfers require a transfer impact assessment.
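The invisible-transfer check described above can be sketched as a small audit script. The stage names, region labels and transfer mechanisms below are hypothetical illustrations; only the Chapter V logic (every non-EEA hop needs a documented Article 44-46 mechanism) comes from the text.

```python
# Hypothetical call-path audit: flag stages that leave the EEA without a
# documented Chapter V transfer mechanism. Regions and stage names are
# illustrative, not real vendor infrastructure.
EEA_REGIONS = {"eu-frankfurt", "eu-dublin"}

# Each stage: (name, hosting region, Art. 44-46 mechanism or None)
call_path = [
    ("sip-termination", "eu-frankfurt", None),
    ("speech-to-text", "eu-dublin", None),
    ("llm-reasoning", "us-east", None),          # the classic invisible transfer
    ("text-to-speech", "us-west", "SCCs + transfer impact assessment"),
    ("storage", "eu-frankfurt", None),
]

def unmitigated_transfers(path):
    """Stages that cross the EEA boundary with no documented safeguard."""
    return [name for name, region, mechanism in path
            if region not in EEA_REGIONS and mechanism is None]

print(unmitigated_transfers(call_path))  # -> ['llm-reasoning']
```

This mirrors the procurement question later in the article: each list entry is one point on the call-path diagram, and any stage the function returns is an unmitigated transfer risk.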

Why EU data residency matters for voice AI

EU data residency is not a legal requirement under GDPR - the regulation is technology- and location-neutral in principle. What GDPR does require is that international transfers (Chapter V, Articles 44-50) be lawful, with adequate safeguards, and that processors meet Article 28 obligations regardless of location. In practice, EU residency dramatically simplifies these obligations: there are no Chapter V transfers to assess, no transfer impact assessment, and supervisory authorities have lower expectations on additional safeguards.

For voice AI specifically, EU residency also matters because telephony adds another layer: SIP termination and carrier-grade routing. An EU-hosted AI brain still routes audio through carrier infrastructure that may itself touch non-EU points of presence. Buyers should ask vendors for the audio-path diagram, not just the inference region.

Practical procurement question

Ask every voice AI vendor: "Draw the call-path diagram for an EU caller phoning an EU number. Mark every point where audio, transcript, or structured data crosses the EEA boundary, and tell me which Article 44-46 mechanism covers each crossing." If they cannot answer in writing, you have an unmitigated transfer risk.

The Article 28 data processing agreement checklist

Article 28 specifies what a written data processing agreement must contain. The clauses you actively need to negotiate with a voice AI vendor:

  1. Subject matter, duration, nature and purpose - explicitly describe the voice AI use case, retention periods, and types of personal data processed.
  2. Processor instruction limit - the vendor processes only on documented instructions, including transfers (Art. 28(3)(a)).
  3. Confidentiality - all personnel authorised to process the data are bound by confidentiality.
  4. Security measures - Article 32 technical and organisational measures, with specific reference to ISO/IEC 27001 controls where applicable.
  5. Sub-processor authorisation - prior written consent and a current sub-processor list. Especially important for voice AI because the LLM and TTS providers are sub-processors.
  6. Data subject rights assistance - the vendor must help the controller respond to access, erasure, portability and objection requests.
  7. Breach notification - rapid notification of personal data breaches, with sufficient detail to support Article 33 controller notification within 72 hours.
  8. DPIA support - assistance with Data Protection Impact Assessments under Article 35.
  9. End-of-contract data handling - return or deletion of all personal data at the end of the engagement.
  10. Audit rights - the controller can audit or accept independent audit reports (e.g. ISO 27001 surveillance audit, SOC 2 Type II).
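During procurement, the ten clauses above can be tracked as a simple gap checklist. The clause keys below are my own paraphrased labels for the numbered items, not legal text or a contract template.

```python
# Article 28 DPA gap checklist, with paraphrased clause keys (illustrative
# naming, not legal language). One key per numbered item in the article.
REQUIRED_CLAUSES = {
    "subject_matter_and_duration",
    "documented_instructions_only",
    "confidentiality",
    "art32_security_measures",
    "subprocessor_authorisation",
    "data_subject_rights_assistance",
    "breach_notification",
    "dpia_support",
    "end_of_contract_data_handling",
    "audit_rights",
}

def missing_clauses(dpa_clauses):
    """Return the Article 28 clauses a draft DPA still lacks, sorted for review."""
    return sorted(REQUIRED_CLAUSES - set(dpa_clauses))

# Example: a draft DPA covering only three of the ten clauses
draft = {"confidentiality", "breach_notification", "audit_rights"}
for clause in missing_clauses(draft):
    print("missing:", clause)
```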

What encryption and ISO/IEC 27001 controls should buyers expect?

Article 32 requires technical and organisational security measures appropriate to the risk. For voice AI, the baseline that supervisory authorities expect today includes:

  • TLS 1.2 or higher for all transport, with TLS 1.3 preferred for new deployments.
  • SRTP for the voice media path, not plain RTP, on any leg that crosses a public network.
  • AES-256 encryption-at-rest for recordings, transcripts and structured data.
  • Key management aligned with NIST or ENISA cryptographic guidance, with documented key rotation.
  • ISO/IEC 27001:2022 certification of the information security management system, with Annex A controls applicable to the deployment.
  • SOC 2 Type II report for vendors selling into US-adjacent customers, often requested alongside ISO 27001 in EU procurement.
  • Role-based access control, least privilege, and audit logging of administrator actions on the voice platform.
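The TLS floor in the first bullet can be enforced directly in application code. A minimal sketch using Python's standard ssl module, assuming a client-side connection to the voice platform's API:

```python
import ssl

# Minimal sketch: an SSLContext enforcing the Article 32 transport baseline
# described above - a TLS 1.2 floor, with TLS 1.3 negotiated where both
# peers support it.
def baseline_client_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()            # certificate + hostname checks on
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1
    return ctx

ctx = baseline_client_context()
print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
```

Note this covers only the HTTPS/API leg; the SRTP requirement for the voice media path sits in the telephony stack and is configured at the SIP/carrier layer, not here.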

ENISA - the European Union Agency for Cybersecurity - publishes baseline guidance for AI cybersecurity that EU supervisory authorities reference when assessing whether Article 32 has been met. Buyers can reasonably require their voice AI vendor to map their controls against the relevant ENISA publications.

Common lawful bases for voice AI in customer service

Selecting and documenting the lawful basis is the first GDPR decision in any voice AI deployment. The common patterns:

| Use case | Typical lawful basis | Documentation needed |
|---|---|---|
| Inbound appointment booking | Art. 6(1)(b) contract | Service terms referencing AI handling, Art. 13 information at start of call |
| Outbound debt collection | Art. 6(1)(b) contract or 6(1)(f) legitimate interest | LIA (legitimate interests assessment) or contractual basis evidence |
| Outbound sales prospecting | Art. 6(1)(f) legitimate interest + ePrivacy | LIA + national ePrivacy compliance, B2B vs B2C distinction |
| Customer support | Art. 6(1)(b) contract | Reference in service terms, Art. 13 information |
| Voice biometric authentication | Art. 9(2)(a) explicit consent | Written explicit consent, alternative authentication path available |

Voice biometric authentication is the one pattern that almost always requires explicit consent under Article 9, because voice used to uniquely identify a natural person is special-category biometric data. A non-biometric voice agent that handles a booking is not Article 9 territory - the voice carries the conversation but is not used to identify the speaker. The line matters for compliance and is often misunderstood by procurement teams.

Disclaimer

This article summarises publicly available GDPR text and EDPB guidance. It is not legal advice. Voice AI deployments must be assessed with qualified counsel for the specific use case, jurisdiction and lawful basis. Ainora provides AI voice agent software with EU hosting and Article 28 DPA support; we do not provide compliance certification or legal services.

Frequently Asked Questions

What is a GDPR-compliant voice AI platform?

A GDPR-compliant voice AI platform processes EU-resident voice data with a documented lawful basis under Article 6, under a written Article 28 data processing agreement, with security measures meeting Article 32, and either keeps data in the EEA or relies on Article 44-46 transfer safeguards. The vendor acts as a data processor and the deploying organisation acts as the data controller.

Which voice AI vendors do EU buyers shortlist for GDPR in 2026?

The four vendors most consistently shortlisted by EU buyers are Cognigy (Düsseldorf HQ, EU residency, ISO 27001 + SOC 2), Parloa (Berlin HQ, EU-native), PolyAI (UK HQ, 75+ languages with EU residency available on enterprise tiers) and Ainora (Lithuanian HQ, EU hosting by default, managed delivery for mid-market). Ranking depends on enterprise vs mid-market, language coverage and contractual depth needed.

Is EU data residency legally required under GDPR?

No. GDPR is technology- and location-neutral in principle. International transfers are allowed under Articles 44-46 with adequate safeguards (adequacy decisions, Standard Contractual Clauses with transfer impact assessment, Binding Corporate Rules). However, EU residency dramatically simplifies compliance because no Chapter V transfer mechanism is needed, and supervisory authorities have lower expectations on additional safeguards.

Which lawful basis applies to customer-service voice AI?

Most customer-service voice AI relies on Article 6(1)(b) (processing necessary for performance of a contract with the data subject). Outbound sales typically relies on Article 6(1)(f) legitimate interest plus ePrivacy compliance. Voice biometric authentication requires Article 9(2)(a) explicit consent because voice used to identify a person is special-category biometric data.

Is a voice recording personal data under GDPR?

Yes. A voice recording of an identifiable person is personal data under Article 4(1). When the voice is used to uniquely identify the speaker (voice biometric authentication), it becomes special-category biometric data under Article 9. Even a non-biometric recording almost always contains personal data such as names, contact details, account references and frequently special-category data.

Do I need an Article 28 DPA with my voice AI vendor?

Yes, when the vendor processes personal data on your behalf - which is true for almost every commercial voice AI deployment. Article 28(3) requires a written contract setting out subject matter, duration, nature, purpose, types of data, obligations and rights of the controller. The DPA must include sub-processor authorisation, security measures, breach notification, data subject rights assistance, and end-of-contract data handling.

How does ISO/IEC 27001 certification relate to GDPR?

ISO/IEC 27001 certification provides documented evidence that the vendor has an information security management system with appropriate technical and organisational measures - directly relevant to GDPR Article 32. It is not a substitute for GDPR compliance, but supervisory authorities and procurement teams treat ISO 27001 as strong baseline evidence of Article 32 security of processing.

What encryption does Article 32 expect for voice AI?

TLS 1.2 or higher (TLS 1.3 preferred) for transport, SRTP for the voice media path on public networks, AES-256 for encryption-at-rest, documented key management aligned with NIST or ENISA guidance, and role-based access control with audit logging. These are the baselines that EU supervisory authorities currently treat as appropriate under Article 32 for voice AI risk profiles.

How are transfers outside the EEA handled?

Transfers outside the EEA must comply with Chapter V (Articles 44-50). The most common mechanisms are adequacy decisions (e.g. for the UK, Switzerland, Japan, the EU-US Data Privacy Framework), Standard Contractual Clauses with a transfer impact assessment, or Binding Corporate Rules. Schrems II (Case C-311/18) means SCC-based transfers to the US require additional safeguards. Vendors that cannot document the transfer mechanism are an unmitigated risk.

How long can voice recordings be retained?

Article 5(1)(e) requires storage limitation - personal data must be kept no longer than necessary for the purpose. There is no statutory retention period for voice recordings in GDPR; retention is set by the controller based on purpose, sectoral law (e.g. MiFID II for financial advice, telecom law for traffic data), and legitimate-interest balancing. Common practice: 30-90 days for quality assurance, 6-7 years for regulated industries.
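The retention ranges above can be turned into a simple erasure scheduler. The periods below are the illustrative values from the text (90-day QA window, multi-year regulated retention), not statutory deadlines; any real policy must come from the controller's own retention analysis.

```python
from datetime import date, timedelta

# Illustrative retention schedule using the example ranges from the text.
# These are NOT statutory periods - the controller sets them per purpose.
RETENTION_DAYS = {
    "quality_assurance": 90,         # upper end of the common 30-90 day QA window
    "regulated_financial": 7 * 365,  # e.g. a MiFID II-style multi-year requirement
}

def erasure_due(recorded_on: date, purpose: str) -> date:
    """Date by which a recording held for this purpose should be erased."""
    return recorded_on + timedelta(days=RETENTION_DAYS[purpose])

print(erasure_due(date(2026, 1, 1), "quality_assurance"))  # 2026-04-01
```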

Justas Butkus

Founder & CEO, Ainora

Building AI digital administrators that replace front-desk overhead for service businesses across Europe. Previously built voice AI systems for dental clinics, hotels, and restaurants.

View all articles

Ready to try AI for your business?

Hear how Ainora sounds handling a real business call. Try the live voice demo or book a consultation.