STATE-OF-THE-ART TRUSTWORTHY AI STANDARD FOR VIRTUAL ROBOTS

7/26/2022 Borisav Parmakovic

While AI can do much good, including making products and processes safer, it can also cause harm through a wide variety of risks, as described in the previous article "Challenges of AI applications in customs clearance". A regulatory framework, as well as a company standard for trustworthy AI, should therefore concentrate on minimizing the various risks of potential harm, particularly the most significant ones: risks to fundamental rights, to safety, and to the effective functioning of the liability regime. Accordingly, the following sections contain, first, a proposed legal framework with which AI issues could be mitigated by applying legal measures and, second, a possible company standard with which trustworthy AI solutions can be achieved.

Legal Framework

Considering already available legal measures, an extensive body of existing EU product safety and liability legislation, including sector-specific rules, further complemented by national legislation, is relevant and may already apply to several emerging AI applications. Beyond these existing rules, the legislative framework could be further improved with the following provisions:

  • introducing the five-level risk-based system of regulation called for by the German Data Ethics Commission. Companies would then know where an AI business can be started without any regulation, with little or with extensive regulation, and which AI businesses are banned as dangerous application areas. However, before an emerging AI business is classified into one of these levels, it may be necessary to monitor it for some time to establish whether regulation really is unnecessary or whether the business should be prohibited entirely, since the latter measure could slow the pace of technological advance,

  • developing a common approach at EU level to enable European companies to benefit from smooth access to the single market and to support their competitiveness on global markets,

  • effective application and enforcement of existing EU and national legislation by ensuring that AI actions are interpretable and explainable,

  • given product changes and the possible negative impact on safety caused by AI systems, considering new risk assessments and human oversight in product design and throughout the lifecycle of the AI product and system as safeguards,

  • explicit obligations for producers could also be considered in respect of risks to users' mental safety,

  • Union product safety legislation to address faulty data at the design stage and the maintenance of sufficiently high data quality throughout the use of AI products and systems,

  • defining transparency requirements to address the opacity of systems based on algorithms,

  • persons having suffered harm caused by AI systems need to enjoy the same level of protection as persons having suffered harm caused by other technologies, while technological innovation is still allowed to develop; compulsory liability insurance could be required to ensure better compensation,

  • setting EU safety requirements that AI systems need to fulfill in order to become certified as trustworthy AI, e.g., data sets that are sufficiently broad and cover all relevant scenarios across dimensions such as gender, ethnicity, and other possible grounds of prohibited discrimination (see the coverage-check sketch after this list); another EU safety requirement could be defined for data protection measures,

  • documentation and recording requirements for data,

  • defining technical robustness requirements so that companies know what constitutes a robust AI system: How high must the accuracy be for a specific AI business? What are the requirements for reproducible outcomes? When is an AI system considered able to deal reliably with errors or inconsistencies during all life-cycle phases? Which resilience requirements could be set to protect the AI system from cyber-attacks and from attempts to manipulate data or algorithms? (A sketch that turns such requirements into executable tests follows this list.)

  • setting various human oversight requirements and levels for different AI businesses (human in the loop, human on the loop, human in command, or no human in the loop for non-risky, self-learning applications), and EU data protection rules prohibiting the processing of biometric data to uniquely identify a natural person, except under specific conditions.
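
As an illustration of the data-coverage requirement above, the following is a minimal sketch of such a check in Python. The record structure, the attribute names, and the minimum-share threshold are all hypothetical; no regulation prescribes these values.

```python
from collections import Counter

# Illustrative minimum share per group; not a legally prescribed threshold.
MIN_SHARE = 0.10

def underrepresented(records, attribute, expected_values, min_share=MIN_SHARE):
    """Return the values of a protected attribute that fall below min_share."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values()) or 1  # avoid division by zero on empty data
    return [v for v in expected_values if counts.get(v, 0) / total < min_share]

# Hypothetical training records for a customs-classification model.
training_data = [
    {"gender": "female", "declared_value": 120.0},
    {"gender": "male", "declared_value": 80.0},
    {"gender": "male", "declared_value": 95.0},
]

missing = underrepresented(training_data, "gender", ["female", "male", "diverse"])
if missing:
    print("Dataset does not yet cover all groups sufficiently:", missing)
```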

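The robustness questions above can likewise be made testable. Below is a minimal sketch of pytest-style acceptance tests; the accuracy target, the seed, and the training stub are illustrative assumptions, not values taken from any standard.

```python
import random

ACCURACY_TARGET = 0.95  # illustrative; to be set per AI business case
SEED = 42

def train_and_evaluate(seed: int) -> float:
    """Stand-in for the real training pipeline; every source of randomness
    must be seeded for the reproducibility test below to be meaningful."""
    random.seed(seed)
    # ... train the model and evaluate it on a held-out test set ...
    return 0.96  # placeholder accuracy

def test_accuracy_requirement():
    # The documented accuracy requirement becomes an executable check.
    assert train_and_evaluate(SEED) >= ACCURACY_TARGET

def test_reproducibility():
    # Two runs with identical seeds must produce identical outcomes.
    assert train_and_evaluate(SEED) == train_and_evaluate(SEED)
```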

Company Standard for Trustworthy AI

Since there will be a legal framework for AI applications in general, it is highly recommended that companies take measures now to be prepared and develop trusted solutions. These AI systems shall comply with legal regulations and create trust both in companies that would like to buy the solution and in users who interact with the AI engine. However, coping with the unknown and novel topics that lie ahead is not always an easy task. Digicust therefore wants to support the creation of trustworthy AI applications, and of the future of human-computer interaction, by proposing the following components that a trustworthy AI standard should contain:

Data governance, control, and oversight → serves as an end-to-end foundation for all other dimensions, anchored to the organization's core values, ethical guardrails, and regulatory constraints; control and oversight by users, developers, and data scientists should be the norm.

Ethics and regulation → an AI system should be developed that complies with regulations and is also ethical, with the relevant organizational guidelines and codes of conduct for employees documented.

Responsibility and accountability → trustworthy AI systems should include policies that define who is responsible and accountable for their output.

Design, interpretability, and explainability → AI-generated decisions should be interpretable and easily explainable to all parties. These aspects should be part of an intelligent, human-centered UX/UI design, so that the requirements and the future process for building privacy, security, and transparency into the frontend are known early.
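
One widely used explainability technique is permutation importance: shuffle one feature and measure how much the model's score drops. Below is a minimal sketch with scikit-learn on synthetic data; the feature names are hypothetical examples of attributes a customs model might use.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical feature names; the data itself is synthetic.
feature_names = ["declared_value", "weight_kg", "origin_code"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# A larger score drop after shuffling a feature means the feature
# matters more to the model's decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: mean importance {score:.3f}")
```

Such a ranking gives users and auditors a first, model-agnostic answer to which inputs drove a decision.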

Robustness, security, and safety → only AI systems that provide robust performance are safe to apply, and only the ones that minimize the negative impact of cyber-attacks should be developed.
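
One simple safeguard along these lines is to reject inputs outside the range seen during training and to escalate low-confidence predictions to a human. A minimal sketch, assuming a scikit-learn-style model with predict and predict_proba; the threshold and the result keys are hypothetical.

```python
CONFIDENCE_THRESHOLD = 0.80  # illustrative; tune per application and risk level

def guarded_predict(model, features, train_min, train_max):
    """Reject out-of-range inputs and escalate low-confidence predictions."""
    # Inputs outside the range observed during training may indicate
    # corrupted or deliberately manipulated data: refuse them outright.
    if any(not (lo <= f <= hi)
           for f, lo, hi in zip(features, train_min, train_max)):
        return {"status": "rejected", "reason": "out-of-range input"}

    # Low-confidence predictions go to a human instead of being acted on.
    confidence = max(model.predict_proba([features])[0])
    if confidence < CONFIDENCE_THRESHOLD:
        return {"status": "human_review", "confidence": confidence}

    return {"status": "accepted", "prediction": model.predict([features])[0]}
```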

Bias and fairness → recognizing decisions that may not be fair to all parties is the basis for developing AI systems that mitigate unwanted bias and realize fair decisions under a specific, clearly communicated definition of fairness.
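
Such a definition can be made measurable. Below is a minimal sketch of one common choice, demographic parity (equal positive-decision rates across groups); the decisions and group labels are hypothetical.

```python
from collections import defaultdict

def positive_rates(decisions, groups):
    """Positive-decision rate per group, for a demographic-parity check."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical decisions (1 = favourable outcome) and group membership.
rates = positive_rates([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
disparity = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity difference: {disparity:.2f}")
```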

Training and staff development → as increasing automation is adopted across all areas of the economy, new jobs requiring new skill sets emerge, and companies need to develop their employees further. AI changes the way people work; as a consequence, AI-specific training should be considered.

AI trustworthiness certification and standardization → achieving trustworthy AI by complying with standards, e.g., for design, manufacturing, and business practices, that are additionally certified by external parties who attest to the developed AI system. For this purpose, Digicust wants to test its future product with the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS in order to fulfill its AI requirements and realize trustworthy-certified AI systems.
