Customs Clearance is a Slowly Changing Market
Nowadays, volatile, uncertain, complex, and ambiguous (VUCA) markets change rapidly, driven by globalization, technological advances, and many other factors. As a result, companies appear and disappear at a breathtaking pace. This is not the case in the customs clearance market. A good example is electronic signatures for customs documents such as EUR.1 or CITES certificates. This simple technology cannot be applied because these documents are used globally: a CITES document may be accepted by German customs authorities, but offices in other countries would reject it.
The reason for this is that worldwide only a few countries are as technologically advanced as Germany. Thus, the validity of electronically signed documents is not trusted, resulting in a lot of physical paperwork. Similarly, AI applications lack trustworthiness. However, the reasons for the lack of trust in AI applications are quite different from the previous example and are discussed in the following sections.
Respect of Human Autonomy
One main principle of trustworthy AI is that humans must retain full and effective self-determination when making their own decisions. This fundamental right needs to be considered, since AI systems may distort the knowledge of customs clearance specialists. Suppose the experts do not know that the provided information and insights may be biased, unstable, or inaccurate, especially in the early stages of such AI applications. In that case, less experienced customs employees of, e.g., forwarding companies may learn incorrect know-how, which in turn skews the decision-making of future experts. Furthermore, a customs specialist may feel threatened or disrespected if the company has not created awareness of a new application that learns from its employees. A work environment should be based on trust; once such a basis is missing, fears may arise, resulting in less motivated employees and a negative atmosphere.
When considering that Digicust shall ease the customs clearance procedure for experts so that they no longer need to know all customs regulations by heart, another question comes to light: "How would customs specialists manage emergencies during software downtimes and total IT failures if they do not have a profound knowledge base?" Thinking even further, once the Digicust AI engine has taken over a tremendous share of customs clearance in a company, what happens if the AI system learned from a customs agent who hated that company and intentionally made mistakes, which the system then reproduces? Or could the AI engine manipulate the customs specialist's decisions unnoticed, causing mistakes for which the employee is blamed and not the machine?
According to the criticality pyramid levels of the German Data Ethics Commission, AI-based automation in customs clearance may cause data abuse, data pollution, transport delays, administrative penalties, tax damage, and ultimately more or less severe damage to the buyer of the product. Besides avoiding harm to the user's autonomy in decision-making, these potential harms are further reasons for implementing the human-in-command approach. One of Digicust's goals is to develop an AI application that takes care of all participants along the value chain of goods. For example, Chinese manufacturers shall have the same chance to export their commodities to Austria, the EU, and other countries, although about 80% of their documents and shipments currently cause huge issues in customs clearance.
Other parties, such as forwarding companies, shall have full access to the potential advantages AI can deliver through early recognition of poor document quality, instead of suffering transport delays or administrative penalties. The government shall benefit from improved fraud detection and tax income through the detection of product piracy, illegal products, and wrongly classified tariffs and goods. The data protection strategy should know its adversaries so that appropriate countermeasures can be planned (e.g., against adversarial AI). Ultimately, the Digicust AI engine shall decrease the complexity of customs clearance for consignees and shippers rather than increase global trade effort, by providing fast answers to questions and centralizing their communication. Therefore, another huge challenge in developing the Digicust AI engine is to achieve as much technical robustness as possible while providing transparency, security, and fairness to all parties involved in the process.
In customs clearance, there are known areas and countries from which suspicious shipments and poor document quality of the imported goods are expected. During examinations by the virtual customs robot Neo, it might provide information such as: "Dear customs agent, these goods were shipped from China, please check whether the goods are rubbish." or: "Dear customs agent, please stop this shipment, because goods from Taiwan are related to product piracy." Letting the AI engine derive insights and recommendations by itself might result in such sentences discriminating against specific countries or regions of the world, although not all companies from these countries sell bad-quality or illegal products.
Another possible discrimination scenario would be to automatically mark all Chinese shipments as suspicious only because the current data indicates that, e.g., 80% of Chinese goods are related to bad document quality and product piracy issues. This generalization would cause transport delays for the remaining 20% of Chinese shipments, although these shipments have regularly declared invoice prices and correctly completed import documents. As a result, Chinese companies trying to export commodities in compliance with European law would suffer a loss of image.
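The cost of such a blanket rule can be made concrete with a small sketch. The shipment count and the 80% issue rate below are hypothetical figures taken from the scenario above, not real customs data:

```python
# Hypothetical illustration of the base-rate problem described above.
# The shipment volume and issue rate are assumptions, not real data.

def blanket_flag_outcome(total_shipments: int, issue_rate: float):
    """Flag every shipment from a region; return (true_flags, false_flags)."""
    problematic = int(total_shipments * issue_rate)
    compliant = total_shipments - problematic
    # A blanket rule flags everything, so every compliant shipment
    # becomes a false positive, i.e., an unnecessary transport delay.
    return problematic, compliant

true_flags, false_flags = blanket_flag_outcome(1000, 0.80)
print(true_flags, false_flags)  # 800 problematic shipments caught, 200 compliant ones delayed
```

Even with a high regional issue rate, every fifth shipment in this sketch is delayed without cause, which is exactly the discriminatory generalization the text warns against.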
According to a recent PwC Global CEO Survey, 85% of CEOs agree that AI will significantly influence or transform their businesses within the next five years, and 84% agree that AI-based decisions need to be explainable to be trustworthy. Analyzing the reasons behind these convincing numbers, the lack of interpretability in AI decisions was considered not only frustrating for end users and customers but also a source of operational, reputational, and financial risks for an organization.
One example: companies that receive too many administrative penalties may lose their license to perform customs clearance, especially if they cannot explain why goods were declared in a specific manner. Many wrong declarations, incomprehensible requests, and annoying communication with the customer lead to image loss, and shippers and consignees may consider changing the forwarder or customs broker. Furthermore, customs specialists (e.g., forwarders) are liable for their customs declarations and need to know how the AI works to gain trust in the machine. They also desire insight into machine tasks because they frequently communicate with authorities and superiors and want to learn from the machine themselves.
Workforce planning in customs clearance has become more important than ever, so a labor calculation tool that can explain its forecasts and control measures by considering various shipment data, seasonalities, and possible time-consuming controls by customs authorities is a huge win for a forwarder. From a government point of view, all declarations, classifications of customs codes, and examination decisions must be documented properly.
Similarly, customers want to know why their documents are wrong, why information is missing, and why decisions and code classifications were executed in a specific way. As a result, an AI-based customs software that makes customs declarations autonomously, or recommends actions to customs agents, but cannot explain its actions will sooner or later be banned by the authorities, and insurance companies will not cover the damage it causes. Finally, Digicust and its employees shall also be able to look "under the hood" of the underlying models, explore the data used to train them, and provide coherent explanations for each decision so that damage to all stakeholders can be prevented as effectively as possible.
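The kind of per-decision explanation demanded here can be illustrated with a minimal sketch. The feature names and weights below are invented, and a production system would use established explainability techniques (e.g., SHAP values) on the real model rather than this toy linear score:

```python
# Minimal sketch of a per-feature explanation for a linear risk score.
# Feature names and weights are invented for illustration only.

WEIGHTS = {  # hypothetical learned weights of a risk model
    "missing_invoice_fields": 0.5,
    "tariff_code_mismatch": 0.3,
    "declared_value_deviation": 0.2,
}

def explain_risk(features: dict[str, float]) -> dict[str, float]:
    """Return each feature's contribution to the total risk score,
    so the customs agent can see *why* a shipment was flagged."""
    return {name: WEIGHTS[name] * value for name, value in features.items()}

contributions = explain_risk({"missing_invoice_fields": 1.0,
                              "tariff_code_mismatch": 0.0,
                              "declared_value_deviation": 2.0})
# Sorting the contributions yields a human-readable reason, e.g.,
# "flagged mainly due to missing invoice fields".
print(max(contributions, key=contributions.get))
```

For a linear model, contributions decompose the score exactly; for the non-linear models a real engine would use, model-agnostic attribution methods serve the same purpose of giving each stakeholder a coherent reason per decision.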