AI Disclosure · Transparency · Regulation · AI Voice Agent · Compliance

AI Caller Disclosure Laws: Must AI Identify Itself? (Country Guide)

Justas Butkus · 15 min read

TL;DR

The question of whether AI callers must identify themselves as artificial intelligence is answered differently around the world. The EU AI Act (Article 50) explicitly requires AI systems that interact with people to disclose their AI nature. California's SB 1001 requires bots to disclose when influencing purchasing or voting decisions. Most other jurisdictions do not yet have specific AI caller disclosure laws, but general consumer protection and anti-deception statutes may apply. The trend is clearly toward mandatory disclosure. Businesses deploying AI voice agents internationally should implement disclosure by default - it is both the ethical approach and the most future-proof compliance strategy.

  • 27+ EU countries covered by the AI Act
  • 10+ US states with AI legislation
  • Art. 50: the EU AI Act transparency clause
  • 2026: key enforcement dates

When an AI voice agent calls a customer or answers a business phone line, should it tell the person on the other end that they are speaking with an artificial intelligence? This is not just an ethical question - it is increasingly a legal one.

The regulatory landscape for AI caller disclosure is evolving rapidly. Some jurisdictions have enacted specific laws requiring AI to identify itself. Others rely on existing consumer protection frameworks that prohibit deception. Many have no specific rules yet but are actively legislating. For businesses deploying AI voice agents across borders, understanding these requirements is essential for compliance.

This guide maps the current state of AI caller disclosure laws worldwide, with specific focus on the jurisdictions most relevant to businesses using AI voice agents.

The Global Landscape of AI Disclosure

AI disclosure regulation falls into three categories:

  • Explicit AI disclosure laws: Jurisdictions that specifically require AI systems to disclose their nature when interacting with humans. The EU AI Act is the most comprehensive example.
  • Implicit requirements through anti-deception laws: Jurisdictions where existing consumer protection, telemarketing, or anti-fraud laws prohibit misrepresenting the nature of a caller. AI pretending to be human may violate these laws even without AI-specific legislation.
  • No current requirements: Jurisdictions with no specific or applicable disclosure requirements for AI callers. This category is shrinking as more countries legislate.

European Union: AI Act Article 50 Transparency

The EU AI Act is the most comprehensive AI regulation globally. Article 50 establishes transparency obligations that directly affect AI voice agents:

  • Article 50(1): Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the circumstances and context of use.
  • Applicability to voice AI: AI voice agents that handle phone calls are clearly "AI systems intended to interact directly with natural persons." The disclosure requirement applies to both inbound and outbound calls.
  • When disclosure is not required: The "unless obvious from the circumstances" exception is narrow. A voice that sounds clearly robotic might arguably be obvious. A natural-sounding AI voice that a reasonable person could mistake for human requires disclosure. Given the quality of modern voice AI, disclosure should be assumed required in virtually all cases.
  • Timeline: The AI Act entered into force in August 2024, and the Article 50 transparency obligations apply from 2 August 2026. Businesses deploying AI voice agents in EU countries should be compliant by that date - and implementing disclosure earlier is the safer course.

EU AI Act Penalties

Violations of AI Act transparency requirements can result in fines of up to 15 million EUR or 3% of worldwide annual turnover, whichever is higher. For an AI voice agent that makes thousands of calls without disclosure, the cumulative violation could be substantial.

United States: State-Level Patchwork

The US has no federal AI caller disclosure law as of 2026. Instead, regulation exists at the state level and through FTC enforcement:

California - SB 1001 (BOT Disclosure Act)

California's SB 1001, effective July 2019, prohibits using a bot to communicate with a person in California to incentivize a purchase or sale of goods or services, or to influence a vote in an election, without disclosing that the communication is from a bot. This applies to AI voice agents making sales-related calls to California residents.

Federal Trade Commission (FTC)

The FTC's authority to prohibit "unfair or deceptive acts or practices" under Section 5 of the FTC Act applies to AI impersonation. In 2024, the FTC finalized a rule prohibiting impersonation of government entities and businesses, and proposed extending it to cover impersonation of individuals, including by AI. While not a specific disclosure mandate, using AI that misrepresents itself as human in commercial contexts could violate FTC deception prohibitions.

Other US States

  • Washington: Consumer protection statutes require transparency in automated communications. No specific AI disclosure law but general anti-deception rules apply.
  • Illinois: The AI Video Interview Act requires disclosure when AI is used in video interviews. While voice-only is not covered, the legislative trend signals future expansion.
  • New York City: Local Law 144 regulates automated employment decision tools. While focused on hiring, it establishes a precedent for AI transparency requirements.
  • Colorado: The Colorado AI Act (SB 205) requires developers and deployers of high-risk AI to provide transparency disclosures. Voice AI used for consequential decisions may fall within scope.

United Kingdom: Post-Brexit Framework

The UK's approach differs from the EU:

  • No specific AI disclosure law: The UK has not enacted legislation equivalent to the EU AI Act's Article 50. The UK government has favored a sector-specific, principles-based approach over comprehensive AI regulation.
  • Unfair commercial practices rules: The Consumer Protection from Unfair Trading Regulations 2008 (whose unfair commercial practices provisions were restated in the Digital Markets, Competition and Consumers Act 2024) prohibit misleading omissions. Failing to disclose the AI nature of a caller when a reasonable consumer would expect a human could constitute a misleading omission.
  • Ofcom and telecommunications: Ofcom regulates telecommunications and has authority over automated calling practices. While current rules focus on nuisance calls, AI-specific guidance may emerge.
  • ICO guidance: The Information Commissioner's Office has published AI and data protection guidance that emphasizes transparency as a core principle. While not a disclosure mandate, it supports the expectation that AI interaction should be transparent.

Canada: AIDA and Provincial Rules

  • AIDA (Artificial Intelligence and Data Act): Part of Bill C-27, AIDA would have established federal AI regulation including transparency requirements for high-impact AI systems. Bill C-27 lapsed when Parliament was prorogued in early 2025, so as of 2026 Canada has no enacted federal AI law; similar legislation would need to be reintroduced.
  • PIPEDA and provincial privacy laws: Canada's privacy framework emphasizes transparency about automated decision-making. While not a specific AI caller disclosure requirement, organizations must be transparent about how personal information is processed - including by AI systems.
  • CRTC telemarketing rules: The Canadian Radio-television and Telecommunications Commission regulates telemarketing and requires identification of the caller. AI callers must comply with these existing identification requirements.

Australia: Emerging AI Regulation

  • No specific AI disclosure law: Australia does not have AI-specific disclosure requirements as of 2026.
  • Australian Consumer Law (ACL): Prohibits misleading or deceptive conduct in trade or commerce. An AI caller that a reasonable consumer would mistake for a human could violate ACL if the AI nature is not disclosed.
  • Do Not Call Register Act: Regulates telemarketing calls. AI callers must comply with Do Not Call requirements. The Act does not specifically address AI disclosure but identification requirements apply.
  • Government AI initiatives: The Australian government has published voluntary AI ethics principles that include transparency. Mandatory regulation is under consideration.

Asia-Pacific: Varied Approaches

  • China: China's AI regulations (including the Generative AI Measures and the Deep Synthesis Provisions) require labeling of AI-generated content and disclosure when interacting with AI. These are among the most stringent AI disclosure requirements globally.
  • South Korea: The Personal Information Protection Act (PIPA) and emerging AI regulation include transparency requirements. South Korea is developing comprehensive AI legislation that may include specific disclosure mandates.
  • Japan: Japan favors a soft-law approach with AI governance guidelines rather than binding legislation. No specific AI caller disclosure requirement exists, but industry guidelines encourage transparency.
  • Singapore: Singapore's Model AI Governance Framework is voluntary. No specific AI disclosure law exists, but the framework emphasizes transparency and explainability.
  • India: India's Digital Personal Data Protection Act addresses some AI transparency aspects. Specific AI caller disclosure requirements are not yet codified but are under discussion.

Full Country Compliance Matrix

| Country/Region | Specific AI Disclosure Law? | Effective Date | Penalty for Non-Disclosure | Recommendation |
| --- | --- | --- | --- | --- |
| EU (27 countries) | Yes - AI Act Art. 50 | 2025-2026 | Up to 15M EUR or 3% turnover | Mandatory disclosure required |
| United States (federal) | No specific law | N/A | FTC deception fines apply | Disclose to avoid FTC risk |
| California | Yes - SB 1001 | July 2019 | Civil penalties under BOT Act | Mandatory for sales/marketing |
| United Kingdom | No specific law | N/A | Consumer protection penalties | Disclose as best practice |
| Canada | Pending (AIDA) | TBD | TBD under AIDA | Disclose proactively |
| Australia | No specific law | N/A | ACL deception penalties | Disclose as best practice |
| China | Yes - multiple regulations | 2023-2024 | Administrative penalties | Mandatory disclosure required |
| South Korea | Emerging | TBD | TBD | Disclose proactively |
| Japan | No - voluntary guidelines | N/A | None (voluntary) | Recommended, not required |
| Singapore | No - voluntary framework | N/A | None (voluntary) | Recommended, not required |
| Switzerland | No specific law (not EU) | N/A | General consumer protection | Disclose as best practice |
| Norway/EEA | Yes - via EU AI Act | 2025-2026 | Per EU AI Act | Mandatory disclosure required |

Practical Implementation: How to Disclose

Compliance requires not just disclosing but disclosing effectively. Here is how to implement AI disclosure in voice agent conversations:

1

Disclose early in the conversation

The disclosure should come within the first few seconds of the call - before any substantive conversation begins. Example: "Hello, thank you for calling [Business]. You are speaking with an AI assistant. How can I help you today?" Burying disclosure late in the conversation does not satisfy transparency requirements.
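One way to guarantee the disclosure always comes first is to bake it into the greeting template itself rather than leaving it to conversation logic. The sketch below is illustrative only - the dictionary, function name, and phrasing are assumptions, not any specific voice platform's API or legal advice:

```python
# Illustrative sketch: a fixed, disclosure-first greeting template.
# The business name and exact wording are placeholders.

DISCLOSURE_GREETINGS = {
    "en": ("Hello, thank you for calling {business}. "
           "You are speaking with an AI assistant. How can I help you today?"),
    "de": ("Guten Tag, danke für Ihren Anruf bei {business}. "
           "Sie sprechen mit einem KI-Assistenten. Wie kann ich helfen?"),
}

def opening_line(business: str, language: str = "en") -> str:
    """Return the call-opening line. Because disclosure is part of the
    template, later conversation logic cannot skip or reorder it."""
    template = DISCLOSURE_GREETINGS.get(language, DISCLOSURE_GREETINGS["en"])
    return template.format(business=business)

print(opening_line("Example Clinic"))
```

Treating the disclosure as a template constant, rather than something the dialogue model generates, also makes the wording auditable: the exact text delivered on every call is known in advance.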

2

Use clear, unambiguous language

Avoid euphemisms like "virtual agent" or "automated assistant" that might not clearly convey the AI nature. Use terms like "AI assistant," "artificial intelligence," or "AI-powered system." The person must understand they are not speaking with a human.

3

Offer a human transfer option

Best practice (and required in some frameworks) is to offer the caller the option to speak with a human. Example: "If you would prefer to speak with a person, just say transfer to human at any time." This respects the caller's autonomy.
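A minimal sketch of the escalation check - phrase list and structure are assumptions for illustration, not a specific platform's intent engine, and a production system would use proper intent classification rather than substring matching:

```python
# Illustrative sketch: detect a request for a human agent on any caller turn.

HUMAN_TRANSFER_PHRASES = (
    "transfer to human",
    "speak to a person",
    "real person",
    "human agent",
    "talk to someone",
)

def wants_human(utterance: str) -> bool:
    """True if the caller appears to be asking for a human agent."""
    text = utterance.lower()
    return any(phrase in text for phrase in HUMAN_TRANSFER_PHRASES)

# Run the check before the AI responds to each turn.
assert wants_human("Can I just transfer to human, please?")
assert not wants_human("What are your opening hours?")
```

The important design point is that the check runs on every turn, not only at the start of the call, so the caller can exercise the option at any time.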

4

Disclose consistently across all calls

Disclosure must be consistent - every call, every time. It cannot be conditional based on the caller's location, the time of day, or the nature of the call. Implement disclosure as a fixed element of the conversation opening.

5

Document your disclosure practice

Maintain documentation of your disclosure language, when it is delivered, and how it is implemented technically. This documentation is evidence of compliance if regulators or auditors inquire.
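The documentation step can be partly automated by writing an audit record whenever the disclosure is delivered. The sketch below is a minimal example; the field names and JSON format are assumptions, not a required schema:

```python
# Illustrative sketch: record when and how disclosure was delivered,
# leaving an auditable trail for each call.
import json
from datetime import datetime, timezone

def log_disclosure(call_id: str, language: str, disclosure_text: str) -> str:
    """Return a JSON audit record for the disclosure event on one call."""
    record = {
        "call_id": call_id,
        "event": "ai_disclosure_delivered",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "language": language,
        "disclosure_text": disclosure_text,
        "position_in_call": "opening_greeting",
    }
    return json.dumps(record)

entry = log_disclosure("call-0001", "en",
                       "You are speaking with an AI assistant.")
print(entry)
```

Stored alongside call metadata, records like this give a regulator or auditor direct evidence that disclosure was delivered at the start of every call, in the stated wording.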

Frequently Asked Questions

Does AI legally have to identify itself on phone calls?

Not yet in every country - but the trend is clearly toward mandatory disclosure. The EU (27 countries + EEA), California, and China already require it. Most other jurisdictions have general anti-deception laws that may apply. The safest and most ethical approach is to disclose in every conversation regardless of jurisdiction.

When in the call must the disclosure happen?

The EU AI Act requires disclosure before or at the beginning of the interaction. In practice, the disclosure should be in the opening greeting - before any substantive conversation takes place. Disclosing after the caller has already shared personal information does not satisfy the requirement.

Does the disclosure requirement apply to inbound calls?

Yes. The disclosure requirement applies regardless of whether the AI initiates the call (outbound) or receives the call (inbound). When someone calls a business and an AI answers, the AI must disclose its nature. The fact that the caller initiated the call does not remove the disclosure obligation.

What language should the disclosure be in?

The disclosure should be in the language of the conversation. For multilingual AI voice agents, the disclosure should match the language the AI uses to communicate. If the AI switches languages mid-conversation, the disclosure should have already occurred in the initial language.

Is a website disclaimer enough to satisfy disclosure requirements?

No. Website disclaimers do not satisfy verbal disclosure requirements. The person on the phone may not have visited your website or read any terms. Disclosure must be delivered in the same channel as the interaction - on the phone call itself.

What happens when a call is transferred?

If a call is transferred from one AI system to another AI system, the receiving system should disclose its AI nature. If transferred from AI to a human agent, the human does not need to disclose being human (the absence of AI disclosure serves this purpose). If transferred from a human back to AI, the AI should disclose.

Are there stricter rules for regulated industries?

Healthcare, financial services, and legal services face additional regulatory scrutiny. In healthcare, HIPAA and patient rights create a higher transparency bar. In financial services, know-your-customer (KYC) regulations may require human interaction for certain transactions. In legal contexts, unauthorized practice of law concerns may arise if AI is not disclosed.

What are the penalties for failing to disclose?

Under the EU AI Act, transparency violations can result in fines of up to 15 million EUR or 3% of worldwide annual turnover, whichever is higher. Individual EU member states may impose additional penalties under national consumer protection laws. The combined exposure for systematic non-disclosure across multiple EU countries could be substantial.

Does disclosure hurt caller satisfaction or conversion?

Research from multiple AI voice providers shows that transparent disclosure has minimal negative impact on caller satisfaction or task completion rates. Callers who know they are speaking with AI adjust their communication style and often appreciate the honesty. Deception that is later discovered has a far more negative impact than upfront transparency.

What about outbound AI sales calls?

Outbound AI sales calls face the strictest disclosure requirements because the AI is initiating contact. The disclosure must come immediately at the start of the call. In the EU, the caller must also be informed of the purpose of the call and the identity of the business. In California, SB 1001 specifically targets bot communications that incentivize purchases. Failing to disclose on outbound sales calls creates maximum regulatory risk.

Justas Butkus

Founder & CEO, AInora

Building AI digital administrators that replace front-desk overhead for service businesses across Europe. Previously built voice AI systems for dental clinics, hotels, and restaurants.

