AI Voice Agent Vendor Security Assessment: Due Diligence Template
Your Data, Their Platform
When you deploy an AI voice agent, you are entrusting a vendor with your customer conversations - names, phone numbers, account details, health information, and the audio recordings of actual calls. The vendor's security posture becomes your security posture. A breach at the vendor exposes your customers' data, triggers your notification obligations, and damages your reputation. This 50-point assessment template helps you evaluate vendor security before signing a contract, not after an incident.
Why Vendor Security Assessment Matters
Third-party vendor risk is one of the largest and most underestimated security risks organizations face. Industry surveys consistently find that a majority of organizations have suffered a data breach that originated with a third party - a vendor, supplier, or partner whose systems were compromised. For AI voice agent deployments, the vendor relationship is particularly high-risk because the vendor processes sensitive customer data in real time.
Your AI voice agent vendor touches every part of your customer data lifecycle. They receive inbound calls with caller information. They process speech through AI models. They may store call recordings, transcripts, and metadata. They integrate with your CRM, calendar, and other systems. And they handle the infrastructure that keeps the service running. A security failure at any point in this chain can expose your customer data.
Regulatory frameworks explicitly address vendor risk. GDPR requires data processing agreements with vendors processing personal data. HIPAA requires business associate agreements. SOC 2 and ISO 27001 both include vendor management as audit areas. Conducting a formal security assessment before engaging a vendor is not just good practice - it is a regulatory expectation.
Data Handling and Storage
The first and most critical assessment category covers how the vendor handles your data - what they collect, where they store it, how long they keep it, and who can access it. These questions establish the baseline understanding of your data exposure.
| # | Assessment Question | Acceptable Answer | Red Flag |
|---|---|---|---|
| 1 | What data types do you collect from calls? | Documented list: audio, transcript, metadata, specific fields | Vague answer or "we collect whatever is needed" |
| 2 | Where is call data stored geographically? | Specific regions (EU, US-East, etc.) with data residency options | No control over data location or stored globally |
| 3 | What is your data retention policy? | Configurable retention with automatic deletion | Indefinite retention or no deletion capability |
| 4 | Can I delete my data on demand? | Yes, with documented deletion process and timeline | No self-service deletion or unclear process |
| 5 | Do you use customer data for AI model training? | No, or only with explicit opt-in consent | Yes by default, or buried in ToS |
| 6 | Who are your sub-processors? | Published, maintained list with notification of changes | Unwilling to disclose or no sub-processor tracking |
| 7 | How do you handle data at contract termination? | Data export and certified deletion within defined period | No export capability or unclear termination procedures |
Question 5 is particularly important for AI voice agent vendors. Many AI companies use customer data to train and improve their models. While this may improve the service over time, it means your customer conversations are being processed beyond the immediate call handling purpose. This may violate GDPR purpose limitation requirements and your own privacy commitments to customers. Always insist on an explicit opt-in model for training data usage.
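Question 3's acceptable answer - configurable retention with automatic deletion - has a concrete shape worth understanding. A minimal sketch of the selection logic behind such a purge job, with a hypothetical 90-day policy and in-memory records standing in for a real recordings store:

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # hypothetical policy: purge recordings after 90 days

def select_expired(recordings, now=None):
    """Return IDs of recordings whose created_at is past the retention window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [r["id"] for r in recordings if r["created_at"] < cutoff]

# Example: one recording past the window, one inside it
now = datetime(2025, 6, 1, tzinfo=timezone.utc)
recordings = [
    {"id": "rec-001", "created_at": datetime(2025, 1, 1, tzinfo=timezone.utc)},
    {"id": "rec-002", "created_at": datetime(2025, 5, 20, tzinfo=timezone.utc)},
]
print(select_expired(recordings, now))  # ['rec-001']
```

On a real platform this runs as a vendor-side scheduled job; your assessment should confirm the retention period is configurable per tenant and that deletion is logged.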
Encryption Standards
Encryption protects data in two states: in transit (moving between systems) and at rest (stored on disk). Both are essential. Voice AI systems have multiple data transit paths - between the caller and telephony provider, between the telephony layer and AI processing, and between AI processing and storage - each requiring its own encryption.
| # | Assessment Question | Acceptable Answer | Red Flag |
|---|---|---|---|
| 8 | What encryption is used for data in transit? | TLS 1.2+ for HTTPS, SRTP for voice media | TLS 1.0/1.1 or unencrypted voice streams |
| 9 | What encryption is used for data at rest? | AES-256 for recordings and databases | No encryption at rest or weak algorithms |
| 10 | Who manages encryption keys? | Customer-managed keys or HSM-backed key management | Vendor holds all keys with no separation |
| 11 | Are call recordings encrypted individually? | Yes, per-recording or per-tenant encryption keys | Single key for all recordings across all clients |
| 12 | Is PII redacted from transcripts? | Automatic PII detection and redaction available | No redaction capability |
| 13 | How are backups encrypted? | Same or stronger encryption as primary storage | Backups stored unencrypted or with weaker encryption |
| 14 | Do you support BYOK (bring your own key)? | Yes, customer can provide their own encryption keys | No BYOK option available |
Pay special attention to voice media encryption. Standard HTTPS encryption protects web traffic, but voice calls use different protocols. Secure Real-time Transport Protocol (SRTP) is the standard for encrypting voice media. If the vendor uses unencrypted RTP for voice transmission, the actual audio of customer conversations travels across the internet in the clear - accessible to anyone who can intercept the traffic.
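Question 12's transcript redaction can be illustrated with a toy example. Production systems use trained PII detectors rather than regexes; the patterns below are illustrative only and catch just the most obvious phone-number and email shapes:

```python
import re

# Illustrative patterns only; real vendors use ML-based PII detection
PATTERNS = {
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
}

def redact(transcript: str) -> str:
    """Replace detected PII spans with typed placeholders."""
    for label, pattern in PATTERNS.items():
        transcript = pattern.sub(f"[{label}]", transcript)
    return transcript

print(redact("Call me at +44 20 7946 0958 or jane@example.com"))
# Call me at [PHONE] or [EMAIL]
```

When evaluating a vendor's redaction capability, ask which PII categories are detected, whether redaction is applied before storage or only at display time, and what the measured false-negative rate is.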
Access Control and Authentication
Access control questions determine who can reach your data within the vendor's organization and through the vendor's platform. A vendor with excellent encryption but poor access controls still puts your data at risk - from insider threats, compromised employee accounts, and overly broad access permissions.
| # | Assessment Question | Acceptable Answer | Red Flag |
|---|---|---|---|
| 15 | Does your platform support role-based access control? | Yes, with configurable roles and granular permissions | All users have the same access level |
| 16 | Do you support multi-factor authentication? | MFA required for all admin accounts, available for all users | MFA optional or not available |
| 17 | Do you support SSO/SAML integration? | Yes, SAML 2.0 and/or OIDC supported | No SSO support - separate credentials only |
| 18 | How many vendor employees can access customer data? | Minimal, documented list with need-to-know basis | Broad access or unable to specify who has access |
| 19 | Do vendor employees need approval to access production data? | Yes, formal access request and approval process with logging | No approval process or broad standing access |
| 20 | How are employee accounts deprovisioned? | Automated deprovisioning tied to HR processes, same-day | Manual process or delayed deprovisioning |
| 21 | Do you conduct background checks on employees with data access? | Yes, criminal and reference checks for all employees | No background checks or only for senior roles |
Compliance and Certifications
Compliance certifications provide independent validation that a vendor meets recognized security standards. While certifications do not guarantee perfect security, they demonstrate that the vendor has invested in security processes and submits to recurring external audits.
| # | Assessment Question | Acceptable Answer | Red Flag |
|---|---|---|---|
| 22 | Do you hold SOC 2 Type II certification? | Yes, current report available under NDA | No SOC 2 or only Type I (point-in-time) |
| 23 | Are you ISO 27001 certified? | Yes, with current certificate and scope covering relevant services | No ISO 27001 or expired certification |
| 24 | Are you HIPAA compliant? Will you sign a BAA? | Yes to both, with documented HIPAA controls | Will not sign BAA or unclear about HIPAA status |
| 25 | Do you have a GDPR data processing agreement? | Standard DPA available, customizable if needed | No DPA or unwilling to sign one |
| 26 | What is your PCI DSS compliance status? | Compliant or not processing payment data directly | Processes payment data without PCI compliance |
| 27 | When was your last penetration test? | Within the last 12 months, by an independent firm | No recent pentest or internal-only testing |
| 28 | Can I review your pentest report or executive summary? | Yes, summary available under NDA | Unwilling to share any pentest information |
SOC 2 Type II is the most relevant certification for voice AI vendors. It covers security, availability, processing integrity, confidentiality, and privacy - all critical for a service handling customer conversations. Type II (versus Type I) means the controls were audited over a period of time (typically 6-12 months), not just at a single point. Always request the actual SOC 2 report, not just a certification badge on the vendor's website.
Incident Response and Breach Notification
When a security incident occurs at the vendor, you need to know about it quickly enough to meet your own notification obligations. GDPR gives you 72 hours from awareness to notify your supervisory authority. If the vendor takes 71 hours to tell you about a breach, you have almost no time to assess and report.
| # | Assessment Question | Acceptable Answer | Red Flag |
|---|---|---|---|
| 29 | What is your breach notification timeline? | Within 24 hours of confirmed breach affecting customer data | No defined timeline or longer than 48 hours |
| 30 | Do you have a documented incident response plan? | Yes, tested regularly through tabletop exercises | No documented plan or untested plan |
| 31 | Will you share your incident response plan? | Yes, summary or relevant sections available | Unwilling to share any incident response details |
| 32 | How will you communicate during an incident? | Dedicated status page, direct notification to affected clients | Only via general email or no proactive communication |
| 33 | Do you carry cyber insurance? | Yes, with coverage limits appropriate to data volumes | No cyber insurance or unwilling to disclose |
| 34 | What was your last security incident? | Transparent disclosure with what happened and how it was resolved | Claims to have never had any incident (unlikely for established vendor) |
| 35 | Do you participate in a bug bounty program? | Yes, public or private program with established platform | No external security research engagement |
Contractual notification requirements
The vendor's standard contract may specify a notification window that is longer than what you need. Negotiate the notification timeline as part of the contract - request notification within 24 hours for confirmed breaches and 48 hours for suspected incidents. Include penalties for late notification if possible.
Incident communication channel
Establish a dedicated communication channel for security incidents before you need it. This should include a direct contact (not a general support queue), an escalation path, and a method for secure communication (encrypted email or secure portal). Test this channel annually.
Your notification cascade
Map how vendor notification flows through your organization. When the vendor notifies your security contact, who do they notify? Legal, communications, management, affected departments? Pre-plan this cascade so it activates automatically without requiring manual coordination during a crisis.
AI-Specific Security Controls
Traditional vendor security assessments cover infrastructure and data handling but miss risks specific to AI systems. Voice AI vendors need additional scrutiny around model security, prompt engineering safeguards, and AI-specific attack vectors.
| # | Assessment Question | Acceptable Answer | Red Flag |
|---|---|---|---|
| 36 | What safeguards prevent prompt injection attacks? | Input validation, system prompt hardening, output filtering | No specific prompt injection defenses |
| 37 | Is the AI system prompt protected from disclosure? | Yes, tested against injection attempts regularly | No testing or prompt easily extracted |
| 38 | Can the AI be manipulated into disclosing customer data? | Tested and hardened, with ongoing red team exercises | No adversarial testing conducted |
| 39 | What AI model provider do you use? | Named provider with their own security certifications | Unwilling to disclose or using unvetted models |
| 40 | Is customer data sent to third-party AI APIs? | Documented data flow with DPA covering AI provider | Data sent to AI APIs without documented agreements |
| 41 | Can AI behavior be audited per-call? | Full conversation logs, function calls, and decision points logged | No per-call audit capability |
| 42 | How do you handle AI model updates? | Staged rollout with regression testing before production | Direct updates to production without testing |
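Question 36's "output filtering" defense can be sketched in miniature. The idea: before the agent's response reaches text-to-speech, scan it for content that should never be spoken aloud, such as long digit sequences that may be an account or card number. This is a hypothetical illustration of the technique, not any vendor's actual implementation:

```python
import re

# Hypothetical guard: block responses containing long digit runs,
# which could indicate the model echoing an account or card number.
LEAK_PATTERN = re.compile(r"\d[\d\s-]{11,}\d")  # 13+ chars of digits/separators

def guard_response(text: str) -> str:
    """Return the response, or a safe refusal if it looks like a data leak."""
    if LEAK_PATTERN.search(text):
        return "I'm sorry, I can't share that information on this call."
    return text

print(guard_response("Your appointment is confirmed for Tuesday."))
print(guard_response("The card on file is 4111 1111 1111 1111."))
```

A vendor with real output filtering should be able to describe which categories are blocked, where the filter sits in the pipeline, and how it is tested against adversarial prompts.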
Infrastructure and Operational Security
AI-specific controls matter little if the underlying infrastructure is fragile. The final category covers hosting, availability, vulnerability management, and the vendor's own development practices.
| # | Assessment Question | Acceptable Answer | Red Flag |
|---|---|---|---|
| 43 | What infrastructure provider do you use? | Major cloud (AWS, GCP, Azure) with relevant certifications | Self-hosted without third-party security audits |
| 44 | What is your uptime SLA? | 99.9%+ with defined compensation for downtime | No SLA or below 99.5% |
| 45 | Do you have a disaster recovery plan? | Documented DR with tested RTO and RPO targets | No DR plan or untested procedures |
| 46 | How do you handle DDoS attacks? | DDoS mitigation service (Cloudflare, AWS Shield, etc.) | No DDoS protection or manual response only |
| 47 | Do you perform regular vulnerability scanning? | Automated scanning weekly, critical patches within 24 hours | No regular scanning or slow patching cadence |
| 48 | How do you secure your development environment? | Separate from production, no real customer data in dev | Development uses production data or shared infrastructure |
| 49 | Do you have a secure development lifecycle? | Code review, SAST/DAST scanning, security training for devs | No formal SDLC or security integration in development |
| 50 | Can I conduct my own security testing of the platform? | Yes, with coordination and scope agreement | Prohibits all customer-initiated security testing |
Using the Assessment Template
This 50-question template is designed to be used as a living document during vendor evaluation. Not every question will be equally relevant for every organization - a dental practice and a financial institution have different risk profiles. Prioritize the questions that matter most for your data types, regulatory obligations, and risk tolerance.
Send the assessment to vendor security teams
Share the questions with the vendor's security or compliance team, not just sales. Security teams provide accurate, detailed answers. Sales teams provide optimistic answers. Give the vendor 2-3 weeks to respond, as some questions require input from multiple teams.
Score responses on a 3-point scale
Rate each answer: 2 (fully meets requirement), 1 (partially meets or planned), 0 (does not meet or not addressed). This creates a numerical score that enables comparison across vendors. A score below 70% of maximum should trigger further investigation or elimination.
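The scoring rule above can be expressed directly. A minimal sketch, with hypothetical question IDs and ratings:

```python
def score_vendor(ratings: dict[int, int]) -> tuple[int, float, bool]:
    """Score a vendor: each answer rated 2 (meets), 1 (partial), 0 (fails)."""
    assert all(r in (0, 1, 2) for r in ratings.values())
    total = sum(ratings.values())
    max_score = 2 * len(ratings)
    pct = total / max_score * 100
    passes = pct >= 70  # below 70% triggers further investigation
    return total, round(pct, 1), passes

# Hypothetical ratings for questions 1-5 of the template
print(score_vendor({1: 2, 2: 2, 3: 1, 4: 2, 5: 0}))  # (7, 70.0, True)
```

Weighting is a reasonable extension: a 0 on a must-have question (say, refusal to sign a DPA) can be treated as an automatic disqualifier regardless of the total.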
Verify critical claims independently
For critical security claims (SOC 2 certification, encryption standards, data residency), request supporting documentation. Review the actual SOC 2 report, not just the vendor's summary. Test encryption by examining network traffic. Verify data residency through platform configuration.
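One way to spot-check the encryption-in-transit claim (Question 8) is to inspect which TLS version your client actually negotiates with the vendor's API endpoint, using only Python's standard library. The hostname below is a placeholder; substitute the vendor's real API host:

```python
import socket
import ssl

def negotiated_tls(host: str, port: int = 443) -> str:
    """Connect to a host and report the TLS version the server negotiates."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. 'TLSv1.2' or 'TLSv1.3'

# Replace with the vendor's actual API hostname
print(negotiated_tls("example.com"))
```

Anything below TLS 1.2 is a red flag. Note this only checks the web API path; verifying SRTP on the voice media path requires packet inspection or vendor documentation.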
Include security requirements in the contract
The assessment identifies the vendor's current security posture. The contract should codify the commitments: notification timelines, data handling requirements, audit rights, and consequences for security failures. If the vendor's answers to the assessment are not reflected in the contract, they are not binding.
Reassess annually
Vendor security postures change. New vulnerabilities are discovered, employees change, and processes evolve. Conduct a reassessment annually, or whenever the vendor makes significant changes to their platform, AI models, or infrastructure. This is a compliance best practice and a practical risk management measure.
Frequently Asked Questions
Why do AI voice agents need a specialized vendor security assessment?
AI voice agents process uniquely sensitive data - actual voice recordings of customer conversations. Standard vendor assessments do not cover AI-specific risks like prompt injection, model data leakage, or AI training data usage. A specialized assessment ensures you evaluate both traditional security controls and AI-specific safeguards.
Which compliance certifications matter most for voice AI vendors?
SOC 2 Type II is the most relevant certification. It covers security, availability, processing integrity, confidentiality, and privacy over a sustained period. For healthcare applications, HIPAA compliance and willingness to sign a BAA are equally critical. ISO 27001 provides additional confidence in the vendor's information security management system.
Is SOC 2 certification mandatory for a voice AI vendor?
For vendors processing sensitive customer data - which voice AI vendors inherently do - SOC 2 Type II should be a strong expectation. Very small or early-stage vendors may not yet have SOC 2 certification, but should be able to demonstrate they are working toward it. Lack of any compliance certification for a vendor handling voice recordings is a significant red flag.
What if a vendor refuses to answer security questions?
A vendor that refuses to answer reasonable security questions is either unprepared for enterprise clients or hiding deficiencies. Some vendors require an NDA before sharing security details, which is reasonable. But refusal to engage on security at all should disqualify the vendor from consideration for any deployment involving customer data.
What AI-specific security questions should I ask?
Ask about prompt injection defenses, adversarial testing practices, model update procedures, and data isolation between clients. Request evidence of security testing - red team reports, penetration test summaries, or vulnerability assessment results. The vendor should be able to demonstrate they actively test their AI for manipulation and data leakage.
What is a data processing agreement (DPA)?
A DPA is a legally binding contract between you (the data controller) and the vendor (the data processor) that specifies how the vendor handles personal data on your behalf. Under GDPR, a DPA is legally required when a vendor processes personal data for you. It covers data handling obligations, sub-processor requirements, breach notification, and data deletion at contract end.
Should I allow the vendor to use my data for AI training?
The default position should be no - the vendor should not use your customer conversations to train their AI models unless you explicitly opt in after understanding the implications. This is a GDPR purpose limitation issue and a business confidentiality concern. If you do allow training data usage, ensure it is anonymized and that you can opt out at any time.
How often should I reassess vendor security?
Conduct a full reassessment annually. Additionally, reassess when the vendor makes significant changes (new AI model, infrastructure migration, acquisition), when you expand the scope of data shared with the vendor, or when new regulatory requirements take effect. Monitor the vendor's security posture continuously through their status page, security advisories, and industry news.
What is the difference between SOC 2 Type I and Type II?
SOC 2 Type I evaluates the design of security controls at a single point in time - it says the controls exist and are designed appropriately. SOC 2 Type II evaluates the operating effectiveness of those controls over a period (typically 6-12 months) - it says the controls not only exist but actually work consistently. Type II is significantly more meaningful because it demonstrates sustained security, not just a snapshot.
Can I run my own penetration test against the vendor's platform?
Many enterprise vendors allow customer-initiated penetration testing with advance coordination and scope agreement. This is a positive sign - it shows the vendor is confident in their security and open to external validation. If the vendor prohibits all security testing, this limits your ability to independently verify their security claims.
Founder & CEO, AInora
Building AI digital administrators that replace front-desk overhead for service businesses across Europe. Previously built voice AI systems for dental clinics, hotels, and restaurants.