AI Is Already Inside Your Customs Function: Is It Governed?
AI tools are being used across customs teams to review customs data, suggest commodity codes and draft responses to HMRC queries. HMRC has issued guidance for software developers using generative AI, but internal business use remains largely ungoverned. In an area as interpretative and liability-driven as customs, that gap matters. This article sets out where the risks lie and what a sensible governance framework looks like.
HMRC Has Issued AI Guidance. But What About Customs Teams Using It?
AI is now a board-level topic. CFOs are being asked how it is being used across finance, reporting and forecasting. Policies are being drafted. Controls are being reviewed.
HMRC has also stepped in. In January 2026 it published guidance aimed at software developers using generative AI in products that interact with HMRC systems. You can read the guidance here: https://www.gov.uk/guidance/guidelines-for-using-generative-artificial-intelligence-if-youre-a-software-developer
The direction is clear. AI can support compliance, but it must be transparent, governed and subject to human oversight.
That guidance is aimed at developers and says very little about the businesses using AI internally for compliance decisions.
For customs, that gap matters, because AI is already in heavy use.
Whether formally approved or not, AI tools are finding their way into customs processes.
Teams are using them to:
- Review CDS and TRE data.
- Suggest commodity codes.
- Interpret HMRC guidance.
- Draft responses to queries.
- Sense-check valuation or origin positions.
It feels efficient. But customs is not purely a spreadsheet exercise.

Classification specifically is interpretative. It depends on product knowledge, commercial context and technical judgement. Two experienced professionals can look at the same fact pattern and debate the correct answer.
AI models generate probability-based outputs. They produce coherent answers. That does not make those answers correct or defensible.
In our experience, tariff classification in particular remains an area where AI is still unreliable. The nuance required often goes beyond what general models can handle. That will change over time. Today, overconfidence and lack of control are real risks.
TRE Data Makes This More Interesting
The introduction of HMRC’s Trade Reporting and Extracting service has given importers direct, quick and free access to their customs data. That is a positive, world-leading development.
It also creates a new dynamic for importers: HMRC will now expect businesses to subscribe to the service and analyse their own data.
AI tools are very good at identifying patterns and anomalies in large datasets. Run TRE data through a model and it will almost certainly highlight trends, inconsistencies or unusual movements.
The question is whether those patterns are meaningful.
In customs, variation is often commercially driven. A change in supplier. A new product range. A different Incoterm. A revised pricing structure. All of these can create data shifts that look unusual when viewed purely statistically.
An AI tool may flag a pattern that is not actually a compliance issue. If a team then changes commodity codes or valuation approaches based on that output without deep technical review, the business may introduce new problems without realising it.
The risk is that the tool infers meaning without full commercial context, and those interpretations are then acted upon.
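To make the point concrete, here is a minimal sketch of the kind of purely statistical flag an AI tool might raise on import data. Everything here is hypothetical: the field name, the figures and the threshold are illustrative, not the TRE schema or any particular vendor's method. The flag identifies a statistical outlier; it says nothing about whether a compliance issue actually exists.

```python
from statistics import mean, stdev

# Hypothetical declared unit values for one commodity code across entries.
declared_unit_values = [12.4, 12.1, 12.6, 12.3, 18.9, 12.2, 12.5]

def flag_outliers(values, z_threshold=2.0):
    """Return indices of values whose z-score exceeds the threshold."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values)
            if abs(v - mu) / sigma > z_threshold]

flagged = flag_outliers(declared_unit_values)
# The 18.9 entry stands out statistically, but a new supplier, a revised
# Incoterm or a pricing change could explain it entirely. The flag is a
# prompt for technical review, not evidence of a compliance error.
```

The design point is the comment at the end: a statistical anomaly is a question, not an answer, and the decision about what it means belongs with an experienced reviewer.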
Data Risk
A further point that is often overlooked is data exposure. Unless a business is using a properly ringfenced, enterprise-controlled AI environment, there is a risk that commercially sensitive customs data is being processed outside secure internal systems. Supplier details and import values may be entered into general AI platforms without full clarity on how that data is stored, retained or used. Even where providers state that data is not used for training, governance teams should be asking whether appropriate safeguards, access controls and contractual protections are in place. Customs data is commercially sensitive and, in some cases, strategically significant. It should be treated accordingly.
The Accountability Point
Customs liability does not move because technology was involved.
If a code is wrong, the importer is responsible. If a valuation treatment is flawed, the importer is responsible. If a position is challenged in audit, the business must defend it.
HMRC’s guidance to developers emphasises transparency, oversight and governance. Those principles apply just as strongly inside the business.
If AI is being used to inform customs decisions, there should be clarity around:
- When it is appropriate to use it.
- Who reviews outputs.
- How technical reasoning is documented.
- Where escalation sits for complex issues.
Without that structure, AI can scale error as efficiently as it scales efficiency.

There Is Also A Capability Risk
There is a broader question that deserves attention.
Customs capability is built by working through difficult questions, applying the General Rules of Interpretation and challenging assumptions. That process develops judgement.
If teams default to AI-generated answers, that capability can weaken over time. The skill to interrogate a position and defend it under scrutiny may erode.
For CFOs, this is not an abstract concern. It links directly to governance, key person risk and audit exposure.
A More Sensible Way Forward
AI is not the problem.
Used properly, it can surface questions, accelerate analysis and improve visibility across large datasets. It can be a valuable tool in a modern customs function, but it needs an experienced human in the loop.
AI should highlight areas for review. It should not determine regulatory positions in isolation. And where it is embedded in the process, governance should reflect the materiality of customs risk.
That includes:
- Clear internal policy on AI use in compliance decisions.
- Mandatory technical review of outputs before implementation.
- Documented reasoning that stands independently of the tool used.
- Periodic independent audit, in line with other material risk areas.
HMRC has begun setting expectations for developers.
Boards should now ask whether their own internal use of AI in customs is subject to the same discipline.
The technology is moving quickly, and governance needs to keep up.
Is Your Use Of AI In Customs Properly Governed?
If AI is informing customs decisions in your business, governance should reflect the materiality of that risk. Our team works with importers to review compliance processes, identify exposure and put the right controls in place. Get in touch to find out how we can help.