Klonr Chat uses AI to generate text. When fans sincerely ask if they're talking to a bot, you (the operator) are legally responsible for honest disclosure under California SB-243, the EU AI Act, and similar laws. Our system can detect these questions and respond with disclosure language you configure. Compliance with any specific platform or jurisdiction is your responsibility.
This AI Disclosure Policy explains how Klonr Chat handles disclosure of AI use, what regulatory frameworks apply, and your responsibilities as a Klonr Chat customer ("Operator").
Klonr Chat is built around large language models from third-party AI providers (currently Anthropic's Claude). When a fan messages your AI character, the message and recent conversation history are sent to the AI provider, which generates a candidate response. That response is sent to your fan via the Fan Platform's existing chat interface.
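The request assembly described above can be sketched as follows. This is an illustrative sketch only, not Klonr Chat's actual implementation; the function name, parameter names, and history window are assumptions:

```python
def build_chat_request(history, new_message, max_history=10):
    """Assemble the payload sent to the AI provider: the fan's new
    message plus a bounded window of recent conversation history.
    (Illustrative; the real window size and payload shape may differ.)"""
    recent = history[-max_history:]  # only recent turns are forwarded
    return {
        "messages": recent + [{"role": "user", "content": new_message}],
    }
```

The provider then generates a candidate response from this payload, which is delivered to the fan through the Fan Platform's chat interface.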
California's B.O.T. Act (Bus. & Prof. Code §§ 17940–17943) makes it unlawful to use a bot to communicate with a person in California, with intent to mislead them about its artificial identity, in order to incentivize a commercial transaction or influence a vote, unless the bot's nature is clearly disclosed. SB-243 (effective 2026) adds specific disclosure obligations for "companion chatbots" used in romantic or emotional contexts.
The EU AI Act requires that natural persons interacting with AI systems be informed they are interacting with AI, unless this is obvious from the circumstances. This applies to providers and deployers of AI in the EU, with phased application from 2025–2027.
Several other US states and countries have adopted or are introducing AI disclosure requirements (Colorado AI Act, Utah Artificial Intelligence Policy Act, etc.). We do not maintain a real-time list of every jurisdiction's requirements; you are responsible for knowing what applies to you and your fans.
Independent of the law, your Fan Platform's terms of service may require disclosure of AI use, may restrict or prohibit AI use entirely, or may require you to designate AI-assisted accounts.
The Service includes automated detection of fan messages that constitute a sincere inquiry about whether they are speaking with a human or an AI ("Are you a bot?", "Is this real?", "Am I talking to a person?", and similar phrasing including in other supported languages).
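A minimal sketch of this kind of trigger detection is shown below. It is keyword-based purely for illustration; the Service's actual detector covers additional phrasings and other supported languages, and these patterns and names are assumptions, not the production logic:

```python
import re

# Illustrative patterns only; the real detector handles more phrasings
# and languages than these English examples.
_BOT_INQUIRY_PATTERNS = [
    r"\bare you (a |an )?(bot|ai|robot)\b",
    r"\bis this real\b",
    r"\bam i talking to a (person|human|real person)\b",
]

def is_sincere_bot_inquiry(message: str) -> bool:
    """Return True if the fan's message looks like a sincere
    human-or-AI inquiry that should trigger disclosure handling."""
    text = message.lower()
    return any(re.search(p, text) for p in _BOT_INQUIRY_PATTERNS)
```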
For each AI character you create, you can configure how disclosure is handled when triggered:
New characters are created with Handoff to queue as the default disclosure setting. We strongly discourage selecting "Pass-through" without legal advice specific to your situation.
As the Operator using Klonr Chat, you are responsible for:
We do not:
Per our Acceptable Use Policy, you may not configure characters to affirmatively deny being an AI in response to a sincere fan inquiry, regardless of role-play context. Saying "I'm a real girl, not a bot" to a sincere question is prohibited and may constitute fraud in many jurisdictions. The "Pass-through" option exists to skip proactive disclosure where not required by law; it does not authorise affirmative misrepresentation.
For full transparency about what happens when an AI message is generated:
Anthropic, our current AI provider, does not retain inputs or outputs for model training under our API agreement.
This Policy is provided for transparency and operational guidance. It is not legal advice. Laws regarding AI disclosure are evolving rapidly. You should consult a lawyer in your jurisdiction to confirm what disclosure obligations apply to you. Compliance is your responsibility as the operator of your Fan Platform accounts.
Questions about AI disclosure: klonraihelp@gmail.com