Jesminara Rahman explains why artificial intelligence cannot, and should not, replace the professional tax adviser.
Artificial Intelligence (AI) has become an inescapable presence in modern professional life. In the tax world, it is being hailed as a revolutionary tool capable of automating research, drafting and summarising huge amounts of data. Yet recent tribunal decisions have demonstrated the inherent danger of assuming that AI can replicate the judgment, ethics and contextual understanding of a professional adviser.
Far from showing that AI will replace tax professionals, these cases reveal why human expertise remains indispensable both to clients and to the integrity of the tax system.
A client came to me two years ago claiming to have a barrister’s opinion, which then turned out to have come from the Justice AI unit.
I then explored AI myself, and it produced irrelevant case law or cases that did not exist. To many of us it is common sense to verify sources of information, but unfortunately people are treating AI as a source of information in itself, instead of checking where AI extracted its information from. The use, or misuse, of AI has even reached the tribunals and courts of the UK.
When AI invents the law
In Felicity Harber v HMRC (TC/2022/13979), the First-tier Tribunal (FTT) was faced with nine purported tribunal decisions cited by the appellant to support her claim of a ‘reasonable excuse’. None of those decisions existed.
The Tribunal determined that the cases had been generated by an AI system such as ChatGPT. The Solicitors’ Regulation Authority (SRA) has warned that such models “do not have a concept of reality” and can ‘hallucinate’, producing highly plausible but entirely fictitious results.
The Tribunal in Harber quoted the remarks of Judge Castel in the US case of Mata v Avianca on the serious consequences of citing false precedents: “Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavours… It promotes cynicism about the legal profession and the judicial system.”
The Tribunal concluded that a reasonable taxpayer, aware of a capital gain, would have contacted HMRC, TaxAid or a professional adviser for guidance.
The High Court has also confronted the perils of AI-generated legal research. In R (Ayinde) v London Borough of Haringey, lawyers relied on five fabricated cases, one purporting to be from the Court of Appeal. None of the citations was real. This misuse demonstrates that the problem transcends tax, posing a systemic threat to the integrity of legal proceedings.
The lawyers in this case had used artificial intelligence tools to produce written legal arguments or witness statements which were not then checked, with the result that false information (fake citations and quotations) was put before the court.
Using AI tools is not an effective way of conducting research if you cannot verify the results independently. AI is only as good as the information scraped from the internet, whether verified or unverified. The source of the information AI extracts, or of the summaries it produces, should always be checked.
The court’s decision also referred to Jonathan Fisher KC’s report, ‘Disclosure in the Digital Age: Independent Review of Disclosure and Fraud Offences’, which recommended the creation of a cross-agency protocol covering the ethical and appropriate use of artificial intelligence in the analysis and disclosure of investigative material.
AI is likely to have a continuing and important role in the conduct of litigation in the future, but as a supporting tool and not a replacement for the work of a lawyer, barrister or tax adviser.
When AI gets it ‘nearly right’
In Bodrul Z Zaman v HMRC (TC/2023/16087), the taxpayer candidly admitted to using AI to assist in preparing his case. While the AI-generated submissions did not include fabricated cases, many citations were inaccurate or irrelevant.
The Tribunal warned that unverified AI outputs “serve no one well”, as they risk wasting court time and raising false expectations. The judgment explicitly cautioned litigants that AI tools may lack access to the correct legal sources, fail to “understand” the question, or present plausible but incorrect answers.
This case highlights that even when AI appears helpful, it lacks the discernment to distinguish legal authority from legal irrelevance.
A client once came to me with an AI-generated legal argument to support hotel and travel expenses that HMRC was disallowing. He brought three tribunal case references summarised by AI: one did not exist at all; another would in fact have gone against him; which left only one plausible tribunal case that supported his position. The client was very confident, but I had to explain why AI was not a good tool to rely on for tax advice.
AI and HMRC
The Financial Times (29 September 2025) reported concerns that HMRC staff may have used AI to draft responses in R&D tax credit cases. HMRC has since clarified that generative AI is not authorised for such use and that “any staff found misusing AI would face disciplinary action”.
While AI tools may assist in identifying risk patterns or flagging anomalies, final decisions remain the responsibility of human officers, a position HMRC has reiterated consistently – decisions must be issued by an officer, and AI is not an officer.
Judicial guidance
The judiciary itself has cautiously embraced AI. In April 2025, the Senior Courts and Tribunals Judiciary issued ‘AI: Guidance for Judicial Office Holders’, replacing the original 2023 document. The guidance authorises the use of Microsoft’s private ‘Copilot Chat’ system through the secure eJudiciary platform, but expressly forbids its use for legal research or decision-making.
Tribunal Judge Christopher McNall, in tribunal case TC09638 (the full case name, which involves numerous parties, is omitted here), noted that he used AI solely to “summarise documents as a first draft”, confirming that all summaries were checked personally. This careful approach exemplifies how AI can assist, but never substitute for, human judgment.
The hallucination problem
As New Scientist journalist Jeremy Hsu observed, AI hallucinations “are getting worse, and they are here to stay”. Despite improvements in model design, hallucination rates in newer systems remain high. At the heart of the issue is the fact that AI predicts language statistically rather than understanding it.
Nathan Marlor, Head of Data and AI at Version 1, summarised the problem succinctly: “AI doesn’t understand what it’s saying. It operates purely on statistical likelihoods.”
The perils of AI in advisory work
In December 2024, Deloitte was commissioned by an Australian federal government department to review the targeted compliance framework and the IT system used to automate penalties in the welfare system. Once the report was published on the department’s website, it was found to contain multiple errors, including non-existent references and citations. Dr Christopher Rudge of the University of Sydney stated that the Deloitte report contained hallucinations, where AI models may have filled in the gaps, misinterpreted data or tried to guess the answers.
Deloitte will now have to provide a partial refund to the Australian government over the AU$440,000 report, having admitted that it used generative artificial intelligence to help produce it.
Human advisers are irreplaceable
AI can assist with drafting, summarisation and data analysis, but it cannot replace the uniquely human dimensions of tax advice:
1. Judgment and context – Advisers apply nuanced understanding to complex factual situations.
2. Ethical responsibility – Professionals are bound by regulation, confidentiality and accountability.
3. Interpretation and reasoning – Advisers can reconcile conflicting guidance, precedents and policy.
4. Empathy and communication – AI cannot grasp clients’ intentions, anxieties or risk tolerance.
5. Verification and integrity – Humans question, verify and challenge information; AI merely generates it.
Conclusion
AI has an important role to play in the tax profession as an assistant, not as a replacement. Used responsibly, it can save time, improve consistency and enhance research. Used uncritically, it can produce fiction, waste judicial resources and mislead taxpayers.
The lesson from recent tribunal cases is clear: AI cannot think, reason or take responsibility. The future of tax advice belongs to those who can combine technological capability with human insight.
In short, AI may process data, but only an adviser can interpret a client’s life, grasp the nuances and bring human understanding to tax legislation and beyond.
• Jesminara Rahman is a Director of Tax Resolute Ltd

