Guidance from the Upper Tribunal on using AI
Susan Cattell highlights a recent decision from the Upper Tribunal which gives some guidance (and a warning) on using AI.
Tax advisers considering using AI tools to prepare for tax tribunal hearings will find it useful to consider a recent case: The Commissioners for HMRC v Marc Gunnarsson.
The First-tier Tribunal (FTT) decided that the taxpayer was entitled to coronavirus support payments under the Self-Employment Income Support Scheme (‘SEISS’). HMRC appealed to the Upper Tribunal (UT), which found in its favour. The taxpayer was not a qualifying person for SEISS because at the relevant time he was not self-employed; he was a director and employee of a trading limited company.
The interesting part of the decision for tax advisers is the ‘Postscript’ (paragraphs 106 to 115). This explains that the taxpayer was an unrepresented litigant in person. Ahead of the hearing, he failed to provide the required skeleton argument. The Tribunal explained to him in ‘ordinary’ language what a skeleton argument might be expected to include, and he then filed a draft document.
Non-existent tax cases
The draft skeleton argument referred to three previous decisions supposedly made by the FTT that were said to support the taxpayer’s interpretation of the legislation:
“In Patel v HMRC [2023] UKFTT 138 (TC), the First-tier Tribunal (FTT) found that HMRC's SEISS guidance was unclear at the time of the claim and that a claimant's reasonable belief in eligibility, based on that guidance, could be considered. The Tribunal allowed the appeal in part.
In Ali v HMRC [2022] UKFTT 329 (TC), the Tribunal found that the Appellant reasonably relied on their accountant's advice and HMRC's guidance. The FTT held that this context was relevant to the statutory interpretation.
In Kamran v HMRC [2023] UKFTT 91 (TC), the Tribunal noted the continuity of work before and after the SEISS claim and applied a purposive approach to paragraph 4.2 of the SEISS Direction. The First-tier Tribunal (FTT) allowed the appeal in part, finding that the claimant continued the same trade.”
Unfortunately, when HMRC tried to locate these cases, it found that they did not exist. The taxpayer filed an amended skeleton argument, removing the citations – and subsequently accepted during the hearing that he had used AI to assist him in preparing his submissions.
Guidance on AI from the Upper Tribunal
The judge at the UT referred to guidance on the use of AI in court proceedings given by the Divisional Court in a recent case (‘Ayinde’). That case concerned the suspected use by lawyers of generative AI tools to produce documents which were not checked, with the result that false information was put before the court. The judge noted that this guidance is equally relevant to Tribunal proceedings. Key points included:
- Confidentiality and privacy should be maintained by not entering into a public AI tool any information that is not already in the public domain.
- Any information provided by an AI tool should be checked before it is used or relied upon.
- It is important to be aware that AI tools may make up fictitious cases, citations or quotes; refer to legislation, articles or legal texts that do not exist; provide incorrect or misleading information about the law or how it might apply; or make factual errors.
- All legal representatives are responsible for material they put before the court/tribunal and have a professional obligation to ensure it is accurate and appropriate.
- Legal research: AI tools are a poor way of conducting research to find new information you cannot verify independently. They may be useful as a way of being reminded of material you would recognise as correct.
- Legal analysis: The current public AI chatbots don’t produce convincing analysis or reasoning.
In relation to unrepresented litigants, the judge went on to say that AI output should not be relied upon without checking its accuracy, particularly statements or arguments concerning the law. He noted a “danger that unarguable submissions or inaccurate or even fictitious information or references may be generated” and stated that: “Unrepresented parties, just as legal representatives, remain responsible for the accuracy, both the reliability and credibility, of the information, both evidence and submissions, they present to the FTT or UT.”
Warning from the Tribunal
The UT concluded that in this case the taxpayer was not highly culpable: he was not legally trained or qualified, nor subject to the same duties as a regulated lawyer or other professional representative. He may not have understood that the information provided by AI was not simply unreliable but fictitious. He was also under time pressure and doing his best to assist the UT by providing written submissions.
However, the UT went on to say that in an ‘appropriate case’ it may take such matters very seriously. It commented that many of the sanctions for the misuse of AI by a party or representative outlined in Ayinde (including costs orders, wasted costs orders and striking out) are available to the UT.
Let us know what you think
We respond to tax consultations and calls for evidence and attend meetings with HMRC at which service levels, delays and other issues you raise with us are discussed. We welcome input from members to inform our work; email us to share your insights and feedback.
Contact us
Categories:
- Tax
- AI & technology