Transparent and Responsible AI for Customer Communication
At Text, we believe that trust is the foundation of every successful customer experience. Our AI-powered tools are designed to help customer service teams work smarter, enhance customer experiences, and build stronger relationships — without compromising privacy, security, or ethics.
The AI Trust Center is your guide to how we develop AI responsibly, the AI Principles we follow, and how you can use our AI features in a transparent, ethical way.
We build AI you can count on so you can focus on what matters — building meaningful, human connections with your customers.
Our AI Principles
Transparent by Design
When AI’s in the mix, you shouldn’t be left guessing. We strive to share clear, easy-to-digest info about how AI shows up in our Services — what it does, where it works best, and what its limits are. The goal? You feel informed, in control, and confident using AI every step of the way.
What We Ask of You: Let your customers know when AI is part of the conversation.
Fair and Inclusive
We follow our AI Code of Conduct to ensure that the AI we develop and deploy across our products is fair, inclusive, and regularly audited to reduce bias. Our models are trained on diverse datasets and aligned with clear ethical guidelines. Our Acceptable Use Policy prohibits the use of AI in ways that could be discriminatory, unlawful, or unethical, ensuring compliance with legal and industry standards.
Our partner OpenAI applies strong bias mitigation techniques:
- Filtering harmful content during pre-training
- Fine-tuning with human feedback to reduce harmful or biased outputs
- Prohibiting high-risk uses (e.g., law enforcement, criminal justice, immigration)
- Auditing for disparities across demographics and languages
What We Ask of You:
- Review and monitor data inputs to avoid introducing bias
- Examine interactions for signs of unintentional bias
- Educate your team on fairness and inclusivity
- Flag and correct AI outputs that don’t meet your standards
- If you spot inaccurate, biased, or inappropriate AI responses, please share feedback — it helps us make AI better and safer
Need help fine-tuning your settings? Check out our guides on reply suggestions, AI Copilot, tag suggestions, AI text enhancements, and chatbot training.
Together, we can ensure AI is used to support respectful, inclusive customer communication.
Designed for Human Oversight and Control
AI assists — humans lead. AI can suggest, draft, and help, but you approve, edit, and send. Keeping a human in the loop ensures relevance and accuracy.
What We Ask of You: Train your team to know when to step in, escalate, or override.
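The approve-edit-send flow above can be sketched as a simple gate that keeps the agent in control. This is a minimal illustration, not Text's actual implementation; the `Draft` type, the `deliver` function, and the decision labels are hypothetical names chosen for this example.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Draft:
    """An AI-suggested reply awaiting human review (hypothetical type)."""
    text: str


def deliver(draft: Draft, agent_decision: str,
            edited_text: Optional[str] = None) -> Optional[str]:
    """Gate an AI-drafted reply behind an explicit agent decision.

    agent_decision is "approve", "edit", or "reject".
    Returns the message to send, or None if the draft was rejected.
    """
    if agent_decision == "approve":
        return draft.text
    if agent_decision == "edit":
        # The agent's revised version, not the raw draft, is what gets sent.
        return edited_text
    # Rejected drafts are never sent automatically.
    return None


# Nothing reaches the customer without a human decision.
print(deliver(Draft("Thanks for reaching out!"), "approve"))
```

The key design point is that the default path is "do not send": the AI's output only reaches a customer as a result of an explicit approval or edit.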
Accuracy with Oversight
AI isn’t flawless — and that’s where you come in. AI can sometimes be wrong, outdated, or too generic. This is why human judgment is key, especially in high-stakes or regulated industries.
What We Ask of You:
- Double-check how AI is configured and what sources it uses
- Provide clear, quality prompts
- Add context and make it your own
- Always review AI-generated output before making decisions
- Use human judgment in sensitive or regulated contexts
- Never rely solely on AI without validation
- Help your customers understand AI’s limitations
Before sharing, review both the AI output and how it was generated — its sources, training setup, and logic. Make sure your AI Knowledge is based on verified, trusted documents to ensure quality answers. For tips on training and managing AI Knowledge, explore this guide on training your chatbot. Use reply suggestions and AI text enhancements with agent oversight, monitor Copilot’s outputs regularly, and fine-tune tagging accuracy through the Tag suggestions settings.
Note: We don’t guarantee the accuracy of AI-generated content or offer indemnities for its use.
Privacy First
We protect your data with encryption, access controls, and secure infrastructure. Your data isn’t used to train third-party models. All usage complies with our Privacy Policy and Data Processing Addendum.
What We Ask of You:
- Be transparent with your customers about data use
- Use AI features in compliance with applicable laws and regulations
- Handle sensitive data responsibly, especially in regulated industries
Use ChatBot consent messages or LiveChat chat surveys to let users know they may be interacting with AI-generated content.
How We Put Our Principles into Practice
We don’t just talk about responsible AI — we build it into every part of our platform. Here’s a closer look at how we use AI to make support faster, smarter, and always aligned with our responsible AI principles.
1. Our Approach to AI
How do Text AI solutions empower customer service teams and enhance communication efficiency? At Text, we lighten your workload and boost the efficiency of your customer service team with AI-powered tools.
Whether it’s boosting real-time customer engagement with LiveChat’s AI-powered tools, automating routine interactions through ChatBot’s AI, simplifying ticket management with HelpDesk’s AI-powered workflows, or improving your customer support with AI-packed knowledge base software, our solutions work together to ensure seamless, efficient customer communication.
Watch our video for a behind-the-scenes look at how we automate repeatable tasks to save your team time and energy.
From tag suggestions and canned replies to full-length AI-generated messages in your brand voice — our tools are designed to elevate your service and help your business grow.
2. AI Content Guidelines
Text’s AI-powered features are designed to streamline your workflow and enhance customer interactions. But while AI can help speed things up, it’s not a substitute for human judgment. Human oversight remains essential to ensure the quality, accuracy, and compliance of your communication.
Why Accuracy and Oversight Matter
AI-generated content can sometimes include inaccuracies, generic responses, or outputs that may not fully align with your specific business needs or regulatory requirements. That’s why reviewing and verifying AI outputs is crucial before sharing them with customers. By carefully checking the information, you help prevent the spread of misinformation, protect sensitive data, and ensure compliance with legal and industry standards.
For industries like healthcare, finance, and legal services, where accuracy and confidentiality are critical, human review plays an even more vital role. AI may generate helpful responses, but it doesn’t replace the expertise of trained professionals or meet all regulatory requirements on its own.
Your Role in Responsible Use
- Verify AI Outputs: Review AI-generated responses to ensure they’re accurate, complete, and appropriate for the conversation. Customize them to fit your customer’s context and maintain your brand voice. Because models are trained on broad, diverse datasets, AI responses may occasionally resemble those produced for other users; in such cases, we recommend refining your prompts and customizing the responses to better fit your specific needs.
- Ensure Compliance: When using AI features for healthcare or other regulated industries, confirm that outputs meet your compliance needs. Avoid sharing sensitive or protected information unless proper safeguards are in place.
- Monitor Performance: Regularly review AI features like reply suggestions and Copilot to ensure they’re performing as expected. Collect feedback from your team and customers to help improve accuracy and effectiveness over time.
Inform Customers When AI Is Used
Be transparent about AI-powered communication and evaluate if consent is required.
Transparency builds trust — especially when AI is part of the conversation. If you’re using AI Assist or other AI-powered features in your chatbot or live chat workflows, it’s important to clearly inform your customers that they may be interacting with AI-generated content.
Why does this matter?
Customers have a right to know how their data is being used and who — or what — they’re communicating with. This level of transparency may be legally required when AI influences the interaction or processes personal data.
Consider whether consent is required.
Not all AI-assisted conversations require explicit consent, but you should evaluate your specific use case — particularly if your business operates in a regulated industry (like healthcare or finance) or handles sensitive customer data. If consent is needed, it should be obtained before any AI-generated content is shared.
How to inform your customers:
We recommend clearly disclosing the use of AI at the start of the conversation. You can do this by:
- Displaying a consent or information message with tools like ChatBot’s pre-chat survey, which lets you present a short disclaimer before users interact with the AI.
- Using LiveChat’s pre-chat and post-chat survey tools to include a short note such as “Some responses may be generated by AI to help provide faster support.”
- Adding a line to your welcome message or chatbot greeting to let users know they’re interacting with an AI-powered system.
- Updating your privacy policy or help center to clearly explain how AI features are used.
Example notice: “This chat may use AI to help answer your questions faster.”
By using built-in features like chat surveys, you can easily meet transparency and (if applicable) consent requirements — while helping customers feel informed and in control of their experience.
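As a sketch of the disclosure step, the example notice above can simply be prepended to whatever greeting your chat surface shows. This is an illustrative snippet, not a Text product API; the `with_ai_disclosure` function is a hypothetical name, and in practice you would place the notice in your widget configuration (e.g., a ChatBot greeting or a LiveChat pre-chat survey).

```python
def with_ai_disclosure(
    greeting: str,
    notice: str = "Some responses may be generated by AI to help provide faster support.",
) -> str:
    """Prepend a short AI-disclosure notice to a chat greeting.

    The default notice mirrors the example above; adapt the wording to
    your own policy and, where required, pair it with an explicit
    consent step before any AI-generated content is shared.
    """
    return f"{notice}\n\n{greeting}"


print(with_ai_disclosure("Hi! How can we help you today?"))
```

Putting the notice ahead of the greeting ensures customers see the disclosure before their first AI-assisted exchange, which is the ordering the consent guidance above calls for.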
Best Practices for Using AI Features Responsibly
- Always review and, if necessary, edit AI-powered content to ensure accuracy and relevance before sending it to customers.
- Use AI to improve clarity and tone but maintain human discretion. Your team should have full control to accept, reject, or edit AI-generated content.
- Regularly monitor AI-generated content to ensure it meets your compliance and quality standards. Implement feedback loops for continuous improvement.
By following these guidelines, you can use AI-powered features responsibly — enhancing customer experiences without compromising quality, accuracy, or compliance.
Ownership and Usage
When you engage with AI-powered features, you retain ownership of the data you input (“Input”) and the outputs generated by our AI-powered tools in response (“Output”). While you are free to use these Outputs, you must ensure that your use complies with all applicable laws and does not infringe on the rights of any third parties.
Outputs generated by our AI may contain elements that are protected under copyright law and owned by third parties. This means that while you can use Outputs, you must ensure that such use does not infringe on third-party intellectual property rights.
Outputs from our AI-powered tools may not be unique or exclusive to you — similar or even identical content could be generated for different users based on common Inputs. This means that the same questions asked by various users might lead to similar responses from our AI.
As these Outputs can include third-party rights and might not be unique to you, you cannot claim exclusive ownership over them.
We do not transfer any third-party rights that may be embedded in the Outputs.
Text does not claim ownership of the Outputs but retains all rights to the AI technology and the aggregated data used to develop these AI-powered tools.
3. AI Model Training
At Text, protecting your data is our top priority. We’re committed to keeping your information private, secure, and used responsibly — especially when it comes to training our AI systems.
We use customer data solely to enhance the AI-powered features within the Text platform. Your data helps us better understand user needs and fine-tune AI functionalities, but it’s always handled with care.
Your Data Stays Private
Your data is never mixed with data from other customers during model training. Each customer’s information is treated individually to maintain privacy and security at all times. Every customer license is supported by dedicated AI models tailored to your specific use cases and never shared across licenses. While these models may run on shared infrastructure, access is always logically isolated — ensuring that only your license can interact with your dedicated models. Text may process customer data to maintain and improve the quality of our services — including refining our AI models — but we do not share this data externally.
No External Training, No General-Purpose AI Use
Your data is never used to train general-purpose AI models. Our models are task-specific and designed exclusively to power features within the Text platform. In all cases, model access is restricted per customer license, ensuring logical isolation even when hosted on shared machines. We do not use your data to train external providers’ models — including partners like OpenAI — and we do not contribute your data to any third-party AI training initiatives.
Private, Secure Infrastructure
We train and host our models on secure infrastructure using our own algorithms. Some models are hosted internally, always isolated per customer. These safeguards ensure your data remains private and protected at all times.
Focused on Service Enhancement
Our AI model training process is built around enhancing the Text experience — nothing more. We do not share or use your data for any purpose outside of improving our services. You stay in control of your information, and we stay focused on delivering secure, responsible AI that works for you.
4. Privacy and Security
At Text, we take your data privacy seriously. We use customer data to improve our AI features, but we never share it with other customers. Here’s how we keep your data secure when training our AI:
- Transparency is our policy: We’re upfront about how we handle your data. Our Privacy Policy and Data Processing Addendum explain exactly when and how we might share your data with trusted AI partners.
- Encrypted connections: All data going to and from our AI partners is encrypted, like a locked vault. This extra layer of security protects your data from unauthorized access.
- Your data, your choice: You have the right to control your data. You can request that we delete your personal data from our systems, following the guidelines set out in the GDPR.
We Partner Only with Renowned, Trusted Entities
As part of our compliance and commitment to responsible AI, we only partner with reputable and vetted AI technology providers, including OpenAI. Before engaging with any partner:
- We verify their privacy, security, and compliance standards, ensuring they meet or exceed our high benchmarks for protecting customer data.
- We sign robust Data Processing Agreements (DPAs) with all partners, clearly outlining responsibilities, safeguards, and compliance with privacy laws like GDPR and CCPA.
- We require our partners to follow the same strict security, privacy, and ethical standards we uphold for our own customers. We expect nothing less from our partners than what we promise to you.
No Data Sharing for AI Training
Your data is never shared to train external AI models. We strictly limit how your data is used. Data is processed solely to deliver and enhance AI features within the Text platform. We do not share your data with third-party providers for their own model training or data enrichment purposes.
By putting your privacy first, Text creates a safe and secure environment where you can benefit from AI-powered customer support.
Ready to Build Trust with AI?
Text’s AI-powered tools are designed to make your customer communication smarter, faster, and more human — with ethics and trust at the core. Get started today and take your customer support to the next level.