IT asks: Is Empatyzer company data used to train models?

TL;DR:

  • We do not train models on customer data.
  • Conversation contents and raw outputs remain private and are not used for model training.
  • This rule is written into the contract, enforced technically, and audited administratively.

Empatyzer does not use customer data or conversation content to train production or general-purpose language models; this is a core principle of our product and a contractual requirement. Conversation content is never fed into training processes, and individual raw results are not shared with the client company.

User data is stored on servers in the European Union with strong security controls and client-specific data separation. Technically, we provide encryption at rest and in transit and maintain separate data stores per client to minimize the risk of accidental use. Provider access is limited to authorized personnel, and all actions are recorded in auditable logs.

Contractually, our DPA and terms of service explicitly prohibit using client data for model training and set out retention and deletion procedures for when the engagement ends. On request, we share our security policies, the DPA, the DPIA, and data deletion terms.

If we ever plan to use anonymized data for research or service improvement, we will do so only with separate client consent and in compliance with legal requirements. Internal models are trained on non-customer datasets or public corpora, in line with contractual terms. In the event of a security incident, the client will be notified, and remediation and access audits will be initiated. Together, these safeguards minimize the risk that company data could be used to train models or handled inconsistently with the contract.
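To make two of the safeguards above concrete, here is a minimal sketch of what per-client data separation combined with auditable access logging can look like. This is an illustrative example only, not Empatyzer's actual implementation; all class and field names are hypothetical.

```python
# Hypothetical sketch: each client gets an isolated store, and every
# read or write is appended to an audit log before it is executed.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ClientStore:
    """Isolated per-client store; no cross-client reads are possible."""
    client_id: str
    _records: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def _log(self, operator: str, action: str, key: str) -> None:
        # Every access is recorded with a timestamp and the acting operator.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "operator": operator,
            "action": action,
            "key": key,
        })

    def read(self, operator: str, key: str):
        self._log(operator, "read", key)
        return self._records.get(key)

    def write(self, operator: str, key: str, value) -> None:
        self._log(operator, "write", key)
        self._records[key] = value


# Separate stores per client: data written for one client is simply
# not visible through another client's store.
stores = {"acme": ClientStore("acme"), "globex": ClientStore("globex")}
stores["acme"].write("admin@provider", "survey-1", "raw results")
assert stores["globex"].read("admin@provider", "survey-1") is None
```

The point of the sketch is the shape of the guarantee: isolation comes from the data model itself (one store per client), while accountability comes from logging every access before it happens, so an audit can later reconstruct who touched what and when.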

In short: we do not train models on customer data, and this prohibition is documented in the contract and supported by technical measures and auditability.

Author: Empatyzer

Published:

Updated: