
When Software Projects Fail: A Technical Expert's Guide to Fitness-for-Purpose Claims

How technology experts assess failed software projects in litigation, from requirements analysis and methodology review to root cause investigation and delay analysis.

Fitness for Purpose · IT Disputes · Software Projects · Expert Witness

Software project disputes are among the most common matters in technology litigation. A system is delivered late, over budget, or not working as expected. The client says the software is not fit for purpose. The developer says the specification was unclear, the client changed the requirements, or the system works as agreed. Both sides have emails, meeting notes, and change requests that support their position, and neither side has a clear picture of what went wrong technically.

Having acted as an expert in a number of software project disputes, both in the UK High Court and under international arbitration rules, I set out below how these matters are assessed from a technical perspective, and what solicitors should be aware of when instructing an expert.

What “fit for purpose” means technically

The legal test for fitness for purpose varies by jurisdiction and contract, but the technical assessment typically comes down to a core question: does the software do what it was supposed to do, to the standard that was reasonably required?

Answering that question requires three things:

  1. A clear understanding of what was agreed. This means examining the requirements specification, statement of work, functional design documents, and any subsequent change requests. In practice, these documents are often incomplete, ambiguous, or contradictory, which is frequently part of the problem.
  2. An objective assessment of what was delivered. This means reviewing the software itself (its functionality, performance, reliability, and security) against the agreed requirements. It may also involve examining the test results, defect logs, and user acceptance testing records.
  3. An understanding of the applicable standard. “Fit for purpose” does not mean perfect. It means fit for the particular purpose for which it was acquired, taking into account the nature of the software, the contract price, and industry norms. A bespoke enterprise system commissioned for several million pounds is held to a different standard than a minimum viable product built on a startup budget.

The tension in most software project disputes lies in the gap between what the client expected, what the contract specified, and what the developer delivered. The expert’s role is to assess that gap objectively, identifying where the failure lies and, critically, who bears responsibility for it.

Requirements analysis

The starting point for any fitness-for-purpose assessment is the requirements. What was the software supposed to do? How was that documented? Were the requirements sufficiently clear and complete for a competent developer to build against them?

In my experience, requirements problems are a contributing factor in many software project disputes. Common issues include:

  • Ambiguous requirements: Statements like “the system should be user-friendly” or “performance should be acceptable” that mean different things to different people and cannot be objectively measured.
  • Incomplete requirements: Critical functionality that was assumed by one party but never documented. The client assumed the system would handle concurrent users; the developer assumed single-user operation.
  • Evolving requirements: Scope changes during development that were agreed informally but not reflected in the contract documentation, making it unclear what the final specification actually was.
  • Contradictory requirements: Different documents specifying different behaviour for the same feature, creating an impossible delivery target.

A forensic review of the requirements documentation, including the history of changes and the communications between the parties, can establish whether the specification was adequate, whether changes were properly managed, and whether the delivered software was built against a reasonable interpretation of what was agreed.

Development methodology review

How the software was built is often as important as what was built. A technology expert will assess whether the development team followed an appropriate methodology and applied reasonable engineering practices.

This does not mean judging the project against a theoretical ideal. It means assessing whether the approach was reasonable given the nature of the project, the team’s capabilities, and the constraints under which they were operating. The assessment typically covers:

  • Project management: Was there a structured approach to planning, tracking, and reporting? Were milestones and deliverables defined? Was progress monitored against a realistic baseline?
  • Development practices: Were appropriate coding standards followed? Was the code subject to peer review? Were automated tests in place? Was there a coherent approach to version control and release management?
  • Quality assurance: Was there a test strategy? Were tests executed against the requirements? Were defects logged, triaged, and resolved systematically? Was there an acceptance testing process? (A short illustrative sketch of defect-log analysis follows this list.)
  • Change management: Were changes to scope, requirements, or design properly documented and approved? Were the time and cost implications communicated to the client?

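To illustrate the kind of analysis the quality assurance review can involve, the sketch below summarises a defect log by severity and resolution status. It is a minimal, illustrative example only: the filename and the column names ("severity" and "resolved") are assumptions made for the sketch, and a real export from Jira or a similar tool would need its own field mapping.

```python
import csv
from collections import Counter

# Minimal illustrative sketch. The column names ("severity", "resolved")
# and the CSV export are assumptions for the example; a real defect-log
# export would need its own field mapping.
def summarise_defects(path: str) -> None:
    raised = Counter()      # defects raised, by severity
    unresolved = Counter()  # defects with no resolution recorded, by severity
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            severity = row["severity"].strip().lower()
            raised[severity] += 1
            if not row["resolved"].strip():  # blank "resolved" field = still open
                unresolved[severity] += 1
    for severity in sorted(raised):
        print(f"{severity}: {raised[severity]} raised, {unresolved[severity]} unresolved")

summarise_defects("defect_log.csv")  # hypothetical export filename
```

A summary of this kind is only a starting point: the expert will also look at how long defects remained open, whether severity classifications were applied consistently, and whether the pattern of defects changed as the project progressed.
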
Departure from reasonable practices does not automatically mean the developer is at fault. Many successful projects are delivered with imperfect processes. But systematic failures in project management, testing, or change control are often central to understanding why a project failed.

Root cause analysis

The most important question in a software project dispute is not “did the project fail?” but “why did it fail?” Attribution of responsibility depends on identifying the root causes of the failure and determining which party, or parties, were responsible for each.

Root causes typically fall into several categories:

  • Technical failure: The software contains defects that prevent it from functioning as specified. The database cannot handle the required transaction volume. The integration with a third-party system does not work. The user interface does not implement the agreed workflow.
  • Project management failure: The project was poorly planned, under-resourced, or inadequately governed. Milestones were missed without escalation. Risks were identified but not mitigated.
  • Requirements failure: The specification was insufficient for the developer to build against, or the client failed to engage adequately in the requirements process, or requirements changed so frequently that the project could not stabilise.
  • Commercial failure: The project was priced or scoped unrealistically from the outset. The budget did not support the functionality that was expected. The timeline was unachievable regardless of how well the project was managed.
  • Shared failure: It is not uncommon for project failures to involve fault on both sides. The developer’s testing was inadequate, but the client also failed to participate in user acceptance testing. The code has defects, but the specification was ambiguous on the point in question.

A credible expert report identifies all material root causes, attributes them fairly, and resists the temptation to present a one-sided narrative. Courts are well-equipped to detect partiality, and an expert who acknowledges complexity is more persuasive than one who presents a simple story.

Delay analysis

Many software project disputes involve delay. The system was delivered months or years late. The client withheld payment pending delivery. The developer claims variations and client-side delays extended the timeline.

Technical delay analysis in software projects draws on principles similar to those used in construction disputes, adapted for the realities of software development. The expert examines the project timeline (planned versus actual) and identifies the causes of delay at each stage. This involves reviewing project plans, sprint records, meeting minutes, correspondence, and defect logs to build a chronological narrative of what happened and why.

Key questions include: Was the original timeline realistic? Were delays caused by the developer’s performance, by client-side dependencies (such as late provision of test data or delayed sign-off), or by scope changes? Were delays flagged and communicated, or allowed to accumulate silently?
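
By way of illustration, the sketch below shows the planned-versus-actual comparison at the heart of that exercise, using entirely hypothetical milestone names and dates. In a real matter the inputs come from the project plans, sprint records, and correspondence, and each increment of slippage is then traced back to its cause.

```python
from datetime import date

# Hypothetical milestones and dates, for illustration only.
milestones = [
    # (milestone, planned completion, actual completion)
    ("Requirements sign-off",   date(2022, 3, 31),  date(2022, 5, 20)),
    ("System test complete",    date(2022, 9, 30),  date(2023, 2, 10)),
    ("User acceptance testing", date(2022, 11, 30), date(2023, 6, 1)),
    ("Go-live",                 date(2023, 1, 31),  date(2023, 9, 15)),
]

previous_slip = 0
for name, planned, actual in milestones:
    slip = (actual - planned).days
    # The incremental figure shows how much new delay arose at each stage,
    # which is where the causation questions above are focused.
    print(f"{name}: {slip} days late ({slip - previous_slip:+d} days added at this stage)")
    previous_slip = slip
```

The arithmetic is trivial; the expert work lies in attributing each increment of delay to developer performance, client-side dependencies, or scope change.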

Practical guidance for solicitors

If you are advising a client on a potential software project dispute, a few practical points are worth keeping in mind:

Preserve the evidence early. Software systems, development environments, and project management tools are not permanent. Servers are decommissioned. SaaS subscriptions expire. Jira boards are archived or deleted. The earlier you take steps to preserve the technical evidence, including source code, databases, project management records, test results, and deployment logs, the stronger the foundation for the expert’s analysis.

Expect complexity. Software project disputes rarely have a single, simple cause. The technical evidence usually reveals a more nuanced picture than either party’s initial narrative suggests. In my experience, that nuance is best addressed directly rather than avoided.

Engage the expert before proceedings if possible. An early technical assessment can help evaluate the merits, identify the key issues, and shape the questions that the expert will ultimately be asked to address in their report. It can also reveal weaknesses in the case that are better discovered early than in cross-examination.

Software project failures are common. When they lead to litigation, the technical evidence is central. In my experience, it is best addressed through a structured, forensic approach that treats the code, the documentation, and the project history with the same rigour that the court will expect.