Responsible AI Policy

Last updated: May 4, 2026

Hansa Cequity builds AI systems that augment human contact-centre teams. This Responsible AI Policy explains how we design, deploy, and govern our AI to keep customers, end users, and agents safe, informed, and in control.

1. Our Principles

  • Human-centred — AI assists people; it does not replace human judgement in consequential decisions.
  • Transparent — We seek to ensure that end users are told when they are interacting with an AI, subject to the customer's configuration and consent.
  • Fair — We use reasonable efforts to identify and reduce harmful bias in models and outputs.
  • Accountable — Every automated decision has a clear escalation path to a human.
  • Private & Secure — Data minimisation, encryption, and strict access controls apply to all AI workloads.

2. Model Usage

Our platform uses a combination of:

  • Speech-to-text and text-to-speech models for voice interactions.
  • Large language models (LLMs) for understanding intent, generating responses, summarising calls, and assisting agents in real time.
  • Classical ML models for routing, sentiment, and quality scoring.

We use a mix of in-house, open-source, and trusted third-party models. We do not use Customer Data to train foundation models without explicit, written customer consent. Customer Data processed by third-party model providers is governed by enterprise agreements that prohibit training on that data.

3. Bias Mitigation

  • We aim to evaluate models across accents, languages, genders, and demographic groups before production rollout.
  • We monitor live performance for disparate error rates (e.g., transcription accuracy, intent recognition) and remediate when gaps are detected.
  • Prompt templates and guardrails are reviewed to avoid stereotyping, exclusionary language, or unsafe recommendations.
  • We may monitor usage of the Services to ensure compliance with this Policy and applicable laws, subject to applicable data protection requirements.

4. Human Oversight

  • Agent-in-the-loop: Live Agent Assist surfaces AI suggestions; the human agent decides what to say.
  • Escalation: Voice Bot and Chat Bot flows include explicit “talk to a human” paths at any point.
  • High-risk decisions: AI systems are not intended to make legally or financially significant decisions (e.g., loan denial, medical diagnosis, claim rejection) without human review.
  • Quality review: Sampled AI interactions are reviewed by trained QA staff for accuracy, tone, and compliance.

5. AI Disclosure

Where required by law and as a matter of good practice, our voice and chat agents identify themselves as AI at the start of an interaction. Default templates include AI disclosure language; however, customers deploying our Services are responsible for configuring disclosures appropriate to their jurisdiction and use case, and for complying with applicable disclosure laws and regulations.

6. Prohibited Uses

Consistent with our Acceptable Use Policy, customers may not use our AI to:

  • Impersonate real individuals without consent or generate deceptive deepfakes.
  • Make fully automated, legally significant decisions about individuals without human review.
  • Engage in social scoring, biometric categorisation of protected characteristics, or other prohibited practices.
  • Manipulate, deceive, or exploit vulnerable groups (including minors).

We reserve the right to monitor usage for compliance and to suspend or restrict access in the event of violations of this Policy.

7. Safety, Testing, and Red-Teaming

  • New models and prompts undergo safety evaluations before release.
  • Guardrails are designed to reduce the risk of PII leakage, profanity, and out-of-scope responses by default.
  • We maintain rollback procedures to revert problematic models or prompts quickly.