GeiG AI & Data Ethics Policy – v1.3

1.0 Purpose
1.1 We set principles and controls for the responsible design, deployment, and oversight of AI systems used by GeiG.
1.2 We safeguard people, uphold UK law, and align with our Privacy Policy, Cookie Policy, Security & Vulnerability Disclosure Policy (v1.1), Accessibility Statement, Export & Trade Compliance (v1.0), Website Terms of Use, and Software EULA.
1.3 Governing law and jurisdiction: England & Wales.

2.0 Scope
2.1 Applies to AI in our products, websites, apps, support channels, internal tooling, and supplier-provided AI services.
2.2 Covers data collection, training, fine-tuning, inference, evaluation, human oversight, incident response, and decommissioning.
2.3 Binds GeiG employees, contractors, and service providers.

3.0 Definitions
3.1 “AI” means systems that infer, predict, generate, or classify content or outcomes using statistical or machine-learning methods.
3.2 “High-risk use” means uses that materially affect people’s rights or access to essential services (e.g., identity, finance, employment).
3.3 “Personal data” has the meaning in UK GDPR and the Data Protection Act 2018.

4.0 Principles
4.1 Lawful, fair, and transparent: we explain AI features in clear language at point of use and in this policy.
4.2 Purpose limitation: we use data only for stated purposes; secondary use requires a compatible legal basis or new consent.
4.3 Data minimisation: we collect the least data needed and retain it no longer than necessary.
4.4 Safety and security by design: we integrate secure development (OWASP ASVS/Top 10) and vulnerability handling (ISO/IEC 29147, 30111; CREST testing).
4.5 Human-centred: people can reach a human for issues materially affecting them.
4.6 Accessibility and inclusion: AI features comply with our Accessibility Statement (WCAG 2.2 AA).
4.7 Accountability: we document decisions, controls, and ownership for each AI system.

5.0 Allowed & Prohibited Uses
5.1 Allowed uses include customer support chat, device features, fraud/abuse detection, and internal analytics where approved by risk assessment and operated under this policy.
5.2 Prohibited: social scoring; unlawful discrimination; unsolicited surveillance; biometric emotion or lie detection; predictive policing; biometric categorisation or facial recognition unless explicitly approved by the Board and lawful under UK rules.
5.3 We do not design AI features for children or knowingly target under-18s.

6.0 Transparency & User Notices
6.1 We identify when users interact with AI or receive AI-assisted outputs.
6.2 We maintain a brief AI feature summary at /legal/ describing purpose, inputs, limitations, and escalation routes.
6.3 We label synthetic media in consumer contexts and apply human review for material claims.

7.0 Consent, Preferences & Logging
7.1 Where AI relies on consent (e.g., cookies/analytics), we record and honour consent via our Cookie Policy (PECR/GDPR).
7.2 Telemetry: users can opt out per the Software EULA; we apply that choice across AI analytics.
7.3 We do not offer an additional “no-AI” processing opt-out beyond the EULA telemetry controls.
7.4 We log material AI-related consents and preference changes with timestamp and source.
7.5 Consent records for analytics and experience tools (see Cookie Policy – Appendix A: Hotjar) are maintained and auditable.
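Illustrative example (non-binding): a consent or preference event under 7.4–7.5 could be logged as a structured record. The field names and helper below are hypothetical assumptions for illustration, not a GeiG schema.

    # Hypothetical sketch of a consent/preference log record per 7.4-7.5.
    # Field names are illustrative assumptions, not a GeiG schema.
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    import json

    @dataclass
    class ConsentEvent:
        subject_id: str   # pseudonymous user reference
        feature: str      # e.g. "support-chat-ai"
        decision: str     # "granted" or "withdrawn"
        source: str       # e.g. "cookie-banner", "eula-telemetry-toggle"
        timestamp: str    # UTC, ISO 8601

    def record_consent(subject_id: str, feature: str, decision: str, source: str) -> str:
        """Serialise one auditable consent event (7.4) for an append-only log."""
        event = ConsentEvent(
            subject_id=subject_id,
            feature=feature,
            decision=decision,
            source=source,
            timestamp=datetime.now(timezone.utc).isoformat(),
        )
        return json.dumps(asdict(event))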

8.0 Data Governance & Residency
8.1 Training/fine-tuning datasets: we document sources, licences, and lawful bases; we avoid personal data unless necessary and lawful.
8.2 Sensitive data: we do not use special category data for training/inference unless strictly necessary, lawful, and approved via DPIA.
8.3 Third-party AI processing must be UK-resident by default. EEA fallback is permitted only where a UK-hosted option is not reasonably available, with DPO approval, UK IDTA/SCCs in place, a recorded Transfer Risk Assessment, and equivalent technical and organisational safeguards documented.
8.4 Retention: we set limits for prompts, outputs, and logs; we prefer ephemeral processing for support chat unless required for security or fraud prevention.
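Illustrative example (non-binding): the retention limits in 8.4 could be expressed as a machine-readable schedule. The artefact names and periods below are assumptions for illustration, not commitments.

    # Hypothetical retention schedule per 8.4; periods are illustrative only.
    from datetime import timedelta

    RETENTION = {
        "support_chat_prompts": timedelta(0),       # ephemeral by preference (8.4)
        "support_chat_outputs": timedelta(0),       # ephemeral by preference (8.4)
        "security_fraud_logs": timedelta(days=90),  # kept only where security/fraud requires
        "evaluation_logs": timedelta(days=30),
    }

    def is_expired(artefact: str, age: timedelta) -> bool:
        """True once an artefact has outlived its retention period."""
        return age > RETENTION[artefact]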

9.0 Security & Abuse Prevention
9.1 We protect models, prompts, and outputs using defence-in-depth, least privilege, and environment isolation.
9.2 We test AI features against prompt-injection, data exfiltration, unsafe content, and model misuse.
9.3 We follow our Security & Vulnerability Disclosure Policy (v1.1); security.txt is published (see the illustrative example after this section) and good-faith reports are welcomed.
9.4 Vendor security due diligence is performed proportionately; see Supplier Code of Conduct (v1.1).
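Illustrative example (non-binding): RFC 9116 defines the security.txt format referenced in 9.3, conventionally served at /.well-known/security.txt. The contact address, URL, and expiry below are placeholders, not GeiG's published values.

    Contact: mailto:security@example.com
    Expires: 2026-12-31T23:59:59Z
    Policy: https://www.example.com/legal/
    Preferred-Languages: en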

10.0 Quality, Bias & Monitoring
10.1 We evaluate AI outputs for bias, performance drift, and error rates before and after release.
10.2 We maintain test sets and monitor live metrics; significant degradations trigger rollback or retraining (see the illustrative example after this section).
10.3 We document known limitations and safe-use guidance for users and support teams.
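Illustrative example (non-binding): the degradation check described in 10.2 can be reduced to comparing a live metric against its release baseline. The metric and thresholds below are assumptions for illustration, not operational values.

    # Hypothetical monitoring gate per 10.2; thresholds are illustrative only.
    BASELINE_ERROR_RATE = 0.02  # error rate on the test set at release
    MAX_RELATIVE_DRIFT = 0.5    # tolerate up to 50% relative degradation

    def should_roll_back(live_error_rate: float) -> bool:
        """True when live performance degrades enough to trigger rollback or retraining."""
        drift = (live_error_rate - BASELINE_ERROR_RATE) / BASELINE_ERROR_RATE
        return drift > MAX_RELATIVE_DRIFT

    # Example: should_roll_back(0.035) -> True (75% relative degradation)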

11.0 Human Oversight & Escalation
11.1 A human is available for decisions that could materially affect a person.
11.2 Users can request human review via support@geig.co.uk.
