AI at the tax tribunal

26 January 2026


Susan Cattell
Head of Tax Technical Policy, ICAS

As the adoption of AI continues to increase, the use of AI in legal proceedings is inevitably becoming an expanding area of case law. From a tax perspective, it's useful to consider three cases involving the use of AI that have been heard by the First-tier Tax Tribunal (FTT). No doubt more will follow. The AI aspects of the cases to date clearly illustrate the pitfalls of using AI without ensuring the accuracy and relevance of what it produces. 

Felicity Harber v The Commissioners for HMRC [2023] UKFTT 1007 (TC) – fictitious tax cases

Mrs Harber was appealing against a penalty for failing to notify her liability to Capital Gains Tax (CGT) on a property disposal. She argued that she had a ‘reasonable excuse’ for the failure. In support of her case, she provided a ‘response’ setting out the names, dates and summaries of nine FTT decisions in which taxpayers had succeeded in showing that a reasonable excuse existed. She said the cases had been provided by a ‘friend in a solicitor’s office’ and that she had no further information (for example, the text of the judgments or FTT reference numbers). 

Unfortunately, when both HMRC and the tribunal investigated, it became clear that none of the cases cited existed, and that they had been generated by AI. When Mrs Harber asked how the tribunal could be confident that the cases relied on by HMRC were genuine, it also emerged that she was unaware that judgments are available on publicly accessible websites, such as the FTT’s own site and BAILII. The tribunal accepted that she was unaware the cases were not genuine and that she did not know how to check.

The tribunal clearly devoted a significant amount of time to reviewing the citations provided, to establish that the cases were not genuine. The judgment notes that “citing invented judgments … causes the Tribunal and HMRC to waste time and public money, and this reduces the resources available to progress the cases of other court users who are waiting for their appeals to be determined.”

Bodrul Zzaman v The Commissioners for HMRC [2025] UKFTT 539 (TC) – unclear grounds for appeal and citation of irrelevant cases

Mr Zzaman was appealing against a discovery assessment raised by HMRC to collect tax due in respect of the High Income Child Benefit Charge. He confirmed to the tribunal that his written statement of case had been prepared with the help of AI. The tribunal noted that some of the grounds on which the appeal was based were unclear and that the taxpayer was unable to expand on them; it inferred that this was because of the use of AI. Several cases were cited in support of the taxpayer’s argument that the retrospective element of the legislation was unfair and violated the rule of law.

As in Harber, the tribunal considered the cases cited carefully. Unlike the cases in Harber, these were genuine cases (although some of the case citation references were inaccurate). However, one was ‘of tangential relevance at best’, two were not relevant, and the fourth (while more relevant to the issue) found that retrospective legislation was lawful. In summary, the tribunal did not find that any of the cases “materially assisted” them in considering the appeal. None of the cases provided authority for the propositions that were put forward.

The tribunal commented: “This highlights the dangers of reliance on AI tools without human checks to confirm that assertions the tool is generating are accurate. Litigants using AI tools for legal research would be well advised to check carefully what it produces and any authorities that are referenced. These tools may not have access to the authorities required to produce an accurate answer, may not fully “understand” what is being asked or may miss relevant materials. When this happens, AI tools may produce an answer that seems plausible, but which is not accurate. These tools may create fake authorities (as seemed to be the case in Harber) or use the names of cases to which it does have access but which are not relevant to the answer being sought (as was the case in this appeal).”

It also noted the significant danger that use of an AI tool that produces inaccurate material “may lead to material being put before the court that serves no one well, since it raises the expectations of litigants and wastes the court’s time and that of opposing parties.”

Gary Elden v The Commissioners for HMRC [2026] UKFTT 41 (TC) – inaccurate summaries of cases

This is the most recent of the three cases and concerned an application by HMRC to strike out the taxpayer’s appeals against closure notices. The judgment refers to both Harber and Zzaman. One significant difference in this case was that, although the taxpayer represented himself at the hearing, his representative (a firm of chartered accountants) had prepared the skeleton argument. This contained summaries of five cases that were put forward to support the position that the tribunal should not strike out the case. HMRC argued that these were inaccurate summaries produced by AI.

In response to a claim from the taxpayer’s representative that it didn’t matter whether AI was used or not, the tribunal agreed that “whether or not AI was used is not directly relevant”. However, it went on to say that “the human who relies on its use bears the responsibility for the accuracy” and “because AI is known to ‘hallucinate’, that is, to generate false or inaccurate information and present it as if it were factual, if AI has been used to produce a document and flaws are found in that document, particularly if the flaws, once pointed out, are not corrected, this leads to the rest of the document being treated with great caution. This then has a knock on effect on the time taken to consider and check all relevant points.”

The tribunal also referred to the High Court judgment in Ayinde, a non-tax case relating to the suspected use by lawyers of generative AI tools to produce documents that were not checked, leading to false information being put before the court. In Ayinde, the judge stated that those using AI to conduct legal research "have a professional duty" to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work advising clients or before a court. 

The FTT considered it necessary to evaluate the accuracy of the summaries used in the skeleton argument to establish whether the person producing it had complied with “the duty to verify the output from AI before presenting it”. The tribunal read and reviewed the cases cited and concluded that the summaries were produced using AI and “had not been verified for accuracy with sufficient care as should be used when producing submissions for a Tribunal hearing. This lack of sufficient care amounts to professional incompetence on the part of any regulated individual or firm involved in the production of the skeleton argument.”

It added that time taken in case management and hearing cases is a public resource, and that in this case (and others, for example, Harber and Zzaman) “a considerable amount of time has been taken up by the consideration of cited cases which have no relevance to the case at hand”.

Warning to professional advisers

The tribunal in Elden declined to strike out the appeals but directed the taxpayer to produce detailed information alongside a skeleton argument for the substantive hearing. This included full judgments for all the cases mentioned, full references to direct quotes, and brief summaries of the cases with an explanation of their relevance. The tribunal also required a statement of truth from the taxpayer to the effect that the argument had been produced by him, with or without AI, and that he had checked the information. Alternatively, it required a statement of truth from anyone who contributed to the skeleton confirming which details they had checked and giving their professional qualifications and the professional body, if any, that regulates their employer. 

The tribunal concluded by saying that it was not necessary for anyone assisting the taxpayer to have any specific qualifications, nor for their employer to be regulated by any specific professional body. However, it commented that “the Tribunal will consider making references to any professional body in relation to any person who submits to the Tribunal any item, the standards of which fall below the professional and ethical standards the Tribunal has a right to expect.”

As outlined in an earlier article, the Upper Tribunal (UT), in The Commissioners for HMRC v Marc Gunnarsson, has also encountered a skeleton argument containing non-existent (AI-generated) cases. It did not consider the taxpayer to be highly culpable because he was not legally trained or qualified, nor subject to the same duties as a professional representative. However, it warned that in an ‘appropriate case’ the UT has sanctions it could apply for misuse of AI.

Guidance on the use of AI for tax work

The PCRT group of professional bodies published topical guidance covering the application of Professional Conduct in Relation to Taxation to the ethical use of AI tools in January 2026. This highlights some important areas of risk arising from the use of AI and sets out possible safeguards to deal with those risks.

Let us know what you think

We respond to tax consultations and calls for evidence and attend meetings with HMRC at which service levels, delays and other issues you raise with us are discussed. We welcome input from members to inform our work; email us to share your insights and feedback.

Contact us

Categories:

  • Tax
  • AI & technology