
Our use of Artificial Intelligence

Avon and Somerset Police is committed to the responsible, lawful and transparent use of Artificial Intelligence (AI) to support policing.


AI has the potential to improve efficiency, support decision-making, and enhance the quality of the services we provide to the public. However, we recognise that its use must be carefully governed to maintain public trust and confidence.

We apply a proportionate and controlled approach to the use of AI. This means:

  • Lawful and ethical use – all AI is used in line with UK law, including data protection legislation, and the College of Policing Code of Ethics
  • Human oversight – AI supports decision-making, but does not replace professional judgement
  • Transparency – we are open about how and where AI is used, subject to operational and security considerations
  • Accountability – AI systems are subject to governance, oversight and clear ownership
  • Reliability and safety – we assess performance, risks, and limitations before deployment and monitor systems once live

Our approach reflects national policing standards and guidance.

Governance and assurance

All AI use within the force is subject to structured governance and assurance processes. These include:

  • Data Protection Impact Assessments (DPIAs) where required
  • Ethical and legal review
  • Technical and security assessment
  • Ongoing monitoring and audit

We use practical tools such as responsible AI checklists to ensure systems are safe, proportionate, and aligned to public expectations of policing.

Use of generative AI

Where generative AI tools, such as Microsoft Copilot, are used internally, they are governed by strict policies to ensure:

  • Sensitive or restricted information is not processed
  • Outputs are verified and not relied upon without review
  • Use is transparent and accountable

We recognise that generative AI systems can produce inaccurate, incomplete, or biased outputs, and that their behaviour may change over time. Users are required to critically assess outputs and apply professional judgement at all times.

These tools are used to support productivity and internal processes, not to make autonomous operational decisions or produce evidential material.

Further information

Information about specific technologies and their use may be available through our Freedom of Information (FOI) disclosures.

