Artificial Intelligence & Machine Learning Disputes
I provide expert evidence in disputes involving artificial intelligence and machine learning systems. This includes the assessment of AI models for accuracy, bias, robustness, and interpretability, as well as compliance with emerging regulatory frameworks. My approach is grounded in rigorous, reproducible testing methodologies that produce results appropriate for judicial and regulatory proceedings.
What This Involves
Disputes involving AI systems can arise in a number of contexts: allegations that a model produces biased or discriminatory outcomes, claims that an AI product does not perform as represented, intellectual property disputes concerning the ownership or misappropriation of trained models, and regulatory investigations into automated decision-making. In each case, preparing the technical evidence requires a structured approach to model evaluation that goes beyond surface-level performance metrics. I design and execute testing protocols that assess model behaviour across relevant input distributions, edge cases, and protected characteristic groups, producing quantified results that can be presented and scrutinised in proceedings.
Bias and fairness audits form a significant part of this work. Determining whether an AI system produces outcomes that are unfair or discriminatory requires careful definition of the relevant fairness criteria (which may vary depending on the regulatory framework and the context of deployment), followed by statistical testing against those criteria using appropriate datasets. In my experience, there is not a single universally accepted definition of algorithmic fairness, and the choice of metric can materially affect the conclusions drawn. My reports set out the methodology, the fairness definitions applied, and the limitations of the analysis so that the tribunal or court is in a position to assess the weight to be given to the findings.
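The point that the choice of fairness metric can materially affect the conclusions can be illustrated with a minimal sketch. The data below is entirely hypothetical, and the two metrics shown (demographic parity difference and the disparate impact ratio) are only two of many candidate fairness criteria:

```python
# Illustrative sketch with hypothetical data: the same model decisions can
# look acceptable under one fairness metric and problematic under another.

def selection_rate(decisions):
    """Fraction of favourable (1) decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher. Guidance derived
    from the US 'four-fifths rule' often treats a ratio below 0.8 as
    indicative of adverse impact."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical model outputs: 1 = favourable decision, 0 = unfavourable.
group_a = [1] * 10 + [0] * 90   # 10% selection rate
group_b = [1] * 5 + [0] * 95    # 5% selection rate

print(demographic_parity_difference(group_a, group_b))  # 0.05 — looks small
print(disparate_impact_ratio(group_a, group_b))         # 0.5 — well below 0.8
```

A 5 percentage point gap may appear modest in absolute terms, yet the same decisions fail the four-fifths threshold by a wide margin, which is why a report must state which definitions were applied and why.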
Explainability and interpretability are increasingly relevant to AI disputes, particularly where automated decisions affect individuals and where regulatory requirements demand that those decisions can be explained. I use techniques including SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and feature importance analysis to provide technically grounded explanations of model behaviour. Where the dispute concerns the governance or oversight of an AI system, I assess whether the organisation had appropriate controls in place for model validation, monitoring, and human oversight, considered against relevant standards and regulatory expectations at the time.
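The attribution principle underlying SHAP can be sketched from first principles. The toy model, feature names, and baseline below are hypothetical; production SHAP tooling approximates this computation efficiently for large models rather than enumerating coalitions exactly:

```python
# A from-scratch sketch of the exact Shapley value computation that SHAP
# approximates, for a hypothetical three-feature scoring model.
from itertools import combinations
from math import factorial

FEATURES = ["income", "age", "credit_history"]  # hypothetical feature names

def model(x):
    """Toy scoring model: two additive terms plus one interaction."""
    return 2.0 * x["income"] + 0.5 * x["age"] + x["income"] * x["credit_history"]

def coalition_value(subset, x, baseline):
    """Model output with features outside `subset` held at baseline values."""
    mixed = {f: (x[f] if f in subset else baseline[f]) for f in FEATURES}
    return model(mixed)

def shapley_values(x, baseline):
    """Exact Shapley attribution for each feature: the weighted average of
    that feature's marginal contribution over all coalitions of the others."""
    n = len(FEATURES)
    phi = {}
    for f in FEATURES:
        others = [g for g in FEATURES if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = coalition_value(set(subset) | {f}, x, baseline)
                without_f = coalition_value(set(subset), x, baseline)
                total += weight * (with_f - without_f)
        phi[f] = total
    return phi

x = {"income": 1.0, "age": 2.0, "credit_history": 1.0}
baseline = {f: 0.0 for f in FEATURES}
phi = shapley_values(x, baseline)

# Efficiency property: the attributions sum to f(x) - f(baseline).
print(phi)
print(sum(phi.values()), model(x) - model(baseline))
```

The efficiency property shown at the end is what makes Shapley-based attributions auditable: every point of difference between the model's output and the baseline is accounted for by exactly one feature's attribution.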
Typical Instructions
- AI model performance and validation testing
- Bias and fairness audits
- Explainability and interpretability assessment
- AI governance framework design and review
Related Insights
Deepfakes and Synthetic Media: The Growing Challenge for Digital Evidence
How AI-generated deepfakes affect the reliability of digital evidence in litigation, what detection methods exist, and what solicitors should consider when the authenticity of video, audio, or image evidence is in question.
What to Expect When Instructing a Technology Expert Witness
A practical guide for solicitors and in-house counsel on the process, timelines, and key considerations when instructing a technology expert under CPR Part 35 in England and Wales.
Considering instructing a technology expert?
Please get in touch for a preliminary discussion about whether technology expert evidence may assist your matter, or to discuss the scope of a potential instruction.
Discuss an instruction