GeiG AI & Data Ethics Policy – v1.3

1.0 Purpose

1.1 We set principles and controls for the responsible design, deployment, and oversight of artificial intelligence (“AI”) systems used by GeiG.
1.2 We safeguard people, uphold UK law, and align with our Privacy Policy, Cookie Policy, Security & Vulnerability Disclosure Policy (v1.1), Accessibility Statement, Export & Trade Compliance Policy (v1.0), Website Terms of Use, and Software EULA.
1.3 Governing law and jurisdiction: England & Wales.

2.0 Scope

2.1 Applies to AI used in our products, websites, applications, support channels, internal tooling, and supplier-provided AI services used on our behalf.
2.2 Covers data collection, training, fine-tuning, inference, evaluation, human oversight, incident response, and decommissioning.
2.3 Binds GeiG employees, contractors, and service providers.

3.0 Definitions

3.1 “AI” means systems that infer, predict, generate, or classify content or outcomes using statistical or machine-learning methods.
3.2 “High-risk use” means uses that materially affect people’s rights or access to essential services (for example identity, finance, employment, or safety).
3.3 “Personal data” has the meaning given in the UK GDPR and the Data Protection Act 2018.

4.0 Principles

4.1 Lawful, fair, and transparent: we explain AI features in clear language at point of use and in this policy.
4.2 Purpose limitation: we use data only for stated purposes; secondary use requires a compatible legal basis or new consent.
4.3 Data minimisation: we collect the least data needed and retain it no longer than necessary.
4.4 Safety and security by design: we integrate secure development practices (including OWASP ASVS / OWASP Top 10) and responsible vulnerability handling aligned to ISO/IEC 29147 and 30111.
4.5 Human-centred: people can reach a human where an AI outcome materially affects them.
4.6 Accessibility and inclusion: AI features follow our Accessibility Statement (WCAG 2.2 AA).
4.7 Accountability: we document decisions, controls, risk assessments, and ownership for each AI system.

5.0 Allowed and Prohibited Uses

5.1 Allowed uses include customer support assistance, device features, fraud and abuse detection, safety monitoring, and internal analytics where approved through risk assessment and operated under this policy.
5.2 Prohibited uses include:
(a) social scoring;
(b) unlawful discrimination or exclusion;
(c) covert surveillance;
(d) biometric emotion inference or deception detection;
(e) predictive policing;
(f) biometric categorisation or facial recognition, unless explicitly approved through governance review and lawful under UK rules.
5.3 We do not design AI features for children or knowingly target individuals under 18.

6.0 Transparency and User Notices

6.1 We clearly indicate when users are interacting with AI or receiving AI-assisted outputs.
6.2 We maintain an AI feature summary on the Legal Hub describing purpose, inputs, limitations, and escalation routes.
6.3 Synthetic or AI-generated content is labelled in consumer contexts, and material claims are subject to human review.

7.0 Consent, Preferences, and Logging

7.1 Where AI relies on consent (for example cookies, analytics, or experience tools), consent is recorded and honoured in line with our Cookie Policy and PECR/GDPR requirements.
7.2 Telemetry controls are provided as described in the Software EULA; user choices are respected across AI-related analytics.
7.3 We do not provide a blanket “no-AI” opt-out beyond the controls already described in our EULA and Cookie Policy.
7.4 Material AI-related consents and preference changes are logged with timestamp and source for auditability.

8.0 Data Governance and Residency

8.1 Training and fine-tuning datasets are documented with sources, licences, and lawful bases; personal data is avoided unless necessary and lawful.
8.2 Special-category data is not used for training or inference unless strictly necessary, lawful, and approved via a Data Protection Impact Assessment (DPIA).
8.3 AI processing is UK-resident by default. Where this is not reasonably available, EEA processing may be used subject to:
(a) documented necessity;
(b) appropriate UK transfer safeguards;
(c) a recorded transfer risk assessment; and
(d) equivalent technical and organisational controls.
8.4 Retention limits are set for prompts, outputs, and logs; ephemeral processing is preferred for support chat unless required for security, fraud prevention, or legal obligations.

9.0 Security and Abuse Prevention

9.1 We protect models, prompts, and outputs using defence-in-depth, least-privilege access, and environment isolation.
9.2 AI features are tested against prompt injection, data exfiltration, unsafe content, and misuse scenarios.
9.3 We follow the Security & Vulnerability Disclosure Policy (v1.1); good-faith security research is welcomed via published channels.
9.4 Supplier and vendor AI security controls are assessed proportionately and aligned to our Supplier Code of Conduct.

10.0 Quality, Bias, and Monitoring

10.1 AI outputs are evaluated for bias, performance drift, and error rates prior to release and on an ongoing basis.
10.2 Live monitoring is maintained; significant degradation triggers mitigation, rollback, or retraining.
10.3 Known limitations and safe-use guidance are documented for users and support teams.

11.0 Human Oversight and Escalation

11.1 A human decision-maker is available for any AI-supported outcome that could materially affect a person.
11.2 Users may request human review or raise concerns via support@geig.co.uk or 24/7 web chat on geig.co.uk.

12.0 Incident Management

12.1 AI-related incidents (safety, bias, security, or data protection) are logged, triaged, and investigated under our incident-response procedures.
12.2 Material incidents are escalated, remediated, and recorded, with regulator notification where required by law.

13.0 Training and Awareness

13.1 Relevant staff receive periodic training on responsible AI, data protection, bias awareness, and escalation duties.
13.2 Training completion and competency are tracked and reviewed as part of governance oversight.

14.0 Review and Governance

14.1 This policy is reviewed at least annually and when legal, regulatory, or operational changes require.
14.2 The latest version published on the Legal Hub is the controlling version.

15.0 Contact

15.1 Questions, concerns, or requests relating to AI and data ethics may be directed to support@geig.co.uk or via 24/7 web chat on geig.co.uk.

End of GeiG AI & Data Ethics Policy (v1.3)
