This AI Code of Conduct applies universally across all regions, teams, and collaborators involved in the design, development, deployment, support, or use of AI-enabled features within our products and services.
While we recognize that legal and cultural contexts may vary across our global business network, this Code serves as a framework for responsible and ethical AI practices. It builds upon our Business Code of Conduct by extending its principles to AI.
1. Principles (“Why”)
Our AI systems are developed to align with Text’s values and ethical standards with a focus on promoting trust, accountability, and responsible innovation. To the extent feasible and appropriate, we uphold the following core principles:
- Transparency: We strive to provide accessible information about the use of AI within our Services. Where applicable, we document relevant system capabilities, known limitations, and the general logic behind AI-driven outputs, while balancing technical complexity with usability.
- Fairness: We aim to ensure that our AI systems function in a fair, balanced, and non-discriminatory manner. We take reasonable steps to identify and minimize risks related to inaccurate outputs, such as hallucinations or misrepresented information, especially when AI models retrieve or process data from customer-specific sources.
- Accountability: Each in-house AI model is supported by an identified owner or team responsible for its training, monitoring, maintenance, and risk awareness. Human-in-the-loop mechanisms are introduced where appropriate. For third-party AI models, we monitor performance and stability.
- Privacy & Security: We aim to incorporate principles of data minimization, least-privilege access, and encryption into AI systems where technically feasible. Our practices are designed to align with data protection laws and internal security standards.
- Safety & Trust: Before release, AI models undergo reasonable testing for reliability and robustness, and where AI features are provided in beta, users are informed accordingly.
2. Responsibilities (“Who”)
We translate our principles into practice by defining roles across the organization that support oversight, compliance, and responsible deployment:
- AI Stakeholders: This group defines AI strategy and policies, oversees major deployments, and reviews high-impact or risk-sensitive use cases to ensure responsible implementation aligned with organizational goals.
- Product/DevOps Teams: These teams are responsible for designing, building, and maintaining AI-powered features. They assess model behavior, track reliability, and monitor third-party AI systems to ensure service continuity and consistent model performance.
- Legal, Security & Compliance: These functions evaluate external AI tools for potential legal, privacy, and security risks, and help ensure alignment with applicable regulations (e.g., GDPR, CCPA) and internal security standards.
- All Text'ers: Everyone at Text is expected to use AI responsibly and in accordance with internal guidelines. Through training, documentation, and internal resources, we promote awareness and understanding of AI capabilities, limitations, and risks. Team members are encouraged to stay informed about best practices and to report any unexpected or concerning AI behaviors through established channels.
3. Processes (“How”)
At Text, we firmly believe that AI development and deployment processes should support ethical usage throughout the product lifecycle:
Ease of Use: Our AI features are designed for usability and clarity. Where applicable, we provide guidance, tooltips, and customization options to help users understand and manage AI behavior effectively.
Model Documentation: To the extent practicable, we maintain internal records for production AI models, describing their intended purpose, scope of use, and relevant contacts.
Human Oversight: Where feasible, AI-generated content (such as responses or suggestions) is reviewed by a human before being shown externally, particularly in workflows designed for human-in-the-loop oversight. In fully automated scenarios (such as chatbots), oversight is implemented through testing, design review, and ongoing monitoring. Customers are encouraged to validate AI outputs as part of their own quality assurance workflows.
Education & Support: Our support team is available 24/7 to provide human assistance with AI-related questions or issues. Our AI Trust Center offers guidance on AI features, intended use cases, expected behavior, and responsible use.
Protection & Privacy: AI systems process data in a manner consistent with the context of its original collection and applicable agreements. Text does not collect or process biometric data (e.g., facial scans, fingerprints) in any of its Services.
Vendor & Third-Party AI Tools: External AI tools used by Text are subject to security, legal, and privacy review before adoption, and must meet relevant regulatory and contractual obligations.
Access Control & Purpose Limitation: Access to AI-relevant data is limited to authorized personnel only. Both internal teams and customers are expected to use AI outputs only for purposes consistent with the original intent of data collection.
Data Minimization & Responsible Use: Data is collected and used only to the extent necessary, subject to purpose limitation, access controls, and data handling practices aligned with privacy and security standards.
Incident Response: AI-related incidents are addressed through incident response processes that include investigation, impact assessment, root cause analysis, and remediation.
Continuous Review: AI features are regularly evaluated to ensure compliance with performance benchmarks and this AI Code of Conduct.
Commitment to Responsible AI
By adopting and applying this AI Code of Conduct, Text affirms its commitment to building AI systems that are transparent, fair, and aligned with our values of trust, responsibility, and human oversight. We seek to meet evolving regulatory expectations and contribute to the development of safe and trustworthy AI systems.
We invite our customers, partners, and vendors to share these principles, fostering an ecosystem where innovation and ethics work together to deliver long-term value and societal benefit.