Industry Specification Group (ISG) Securing Artificial Intelligence (SAI) Activity Report 2020

Chair: Alex Leadbeater

Developing technical specifications that mitigate against threats arising from the deployment of AI, and threats to AI systems, from both other AIs and from conventional sources.

Autonomous mechanical and computing entities may make decisions that act against the relying parties, either by design or as a result of malicious intent.

The primary responsibility of our Industry Specification Group on Securing Artificial Intelligence (ISG SAI) is to develop technical specifications that mitigate against threats arising from the deployment of AI – and threats to AI systems – from both other AIs and from conventional sources. As a pre-standardization activity, the group’s work is intended to frame the security concerns arising from AI, and to build the foundation of a longer-term response to the threats to AI by sponsoring the future development of normative technical specifications.

The work of ISG SAI notably addresses these aspects of AI in the standards domain:

  • Securing AI from attack, e.g. where AI is a component in the system that needs defending;
  • Mitigating against AI, e.g. where AI is the ‘problem’ (or used to improve and enhance other, more conventional attack vectors);
  • Using AI to enhance security measures against attack from other things, e.g. where AI is part of the ‘solution’ (or used to improve and enhance more conventional countermeasures).

ISG SAI aims to develop technical knowledge that acts as a baseline in ensuring that AI systems are secure. Stakeholders impacted by the activity of the group include end users, manufacturers, operators and governments.

In 2020 the group released its first publication [GR SAI 004], detailing the problem statement regarding the securing of AI. This Group Report describes the problem of securing AI-based systems and solutions, and the challenges relating to confidentiality, integrity and availability at each stage of the machine learning lifecycle. Outlining several real-world use cases and attacks, it also highlights some of the broader challenges of AI systems, including bias and ethics.
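
By way of illustration only (the example below is not drawn from the Group Report), the following Python sketch shows an integrity attack at the training stage: an attacker who controls part of the training data injects mislabelled points into a toy nearest-centroid classifier, measurably degrading its accuracy.

```python
# Illustrative sketch (not from GR SAI 004): data poisoning as an
# integrity threat during the training stage of the ML lifecycle.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: two Gaussian clusters, 200 points per class.
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

def fit_centroids(X, y):
    # Nearest-centroid "model": one mean vector per class.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(centroids, X, y):
    # Classify each point by its nearest class centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((d.argmin(axis=1) == y).mean())

clean_acc = accuracy(fit_centroids(X, y), X, y)

# Poisoning: inject 60 attacker-chosen points, labelled class 1 but
# placed deep inside class 0's region, dragging class 1's centroid there.
X_p = np.vstack([X, np.full((60, 2), -6.0)])
y_p = np.concatenate([y, np.ones(60, dtype=int)])
poisoned_acc = accuracy(fit_centroids(X_p, y_p), X, y)

print(f"accuracy, clean training data:    {clean_acc:.2f}")
print(f"accuracy, poisoned training data: {poisoned_acc:.2f}")
```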

The European Union Agency for Cybersecurity (ENISA) participates in the activities of ISG SAI. In turn, some ISG SAI delegates are part of the ENISA working group on AI that authored ‘Artificial Intelligence Cybersecurity Challenges’, a report mapping the AI cybersecurity ecosystem and its threat landscape, published by ENISA in December 2020.

Progress was made during the year on further deliverables, with publication anticipated in 2021:

A Group Specification (GS) on the Security Testing of AI identifies objectives, methods and techniques that are appropriate for the security testing of AI-based components.
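
As a flavour of what such testing can involve (a minimal sketch under assumed interfaces, not one defined by the Group Specification), the Python fragment below checks that an AI component’s decision is stable under small, bounded input perturbations; `model` is a hypothetical stand-in for the component under test.

```python
# Minimal sketch of one security-testing technique: checking that small
# random perturbations of an input do not flip the component's decision.
import numpy as np

rng = np.random.default_rng(42)

def model(x: np.ndarray) -> int:
    # Hypothetical stand-in for the AI component under test:
    # a fixed linear classifier.
    w = np.array([0.8, -0.5, 0.3])
    return int(x @ w > 0.0)

def perturbation_test(x: np.ndarray, epsilon: float = 0.05,
                      trials: int = 200) -> float:
    """Return the fraction of epsilon-bounded random perturbations that
    leave the model's decision unchanged (1.0 = fully stable)."""
    baseline = model(x)
    noise = rng.uniform(-epsilon, epsilon, size=(trials, x.size))
    stable = sum(model(x + n) == baseline for n in noise)
    return stable / trials

x = np.array([1.0, 0.2, -0.4])
score = perturbation_test(x)
print(f"decision stability under ±0.05 noise: {score:.2%}")
assert score > 0.9, "model decision is unstable near this input"
```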

A Group Report on the data supply chain summarizes the methods currently used to source data for training AI, along with the regulations, standards and protocols that can control the handling and sharing of that data. It then provides a gap analysis of this information to scope possible requirements for standards ensuring the traceability and integrity of the data and of its associated attributes, information and feedback, as well as their confidentiality.
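
One possible mechanism for such traceability and integrity (an illustrative sketch, not a method mandated by the Group Report) is a cryptographic manifest of the training data; the `training_data` directory and file layout below are hypothetical.

```python
# Illustrative sketch: record SHA-256 digests of training-data files in a
# manifest so that later consumers in the data supply chain can verify
# that the data has not been altered.
import hashlib
import json
from pathlib import Path

def file_digest(path: Path) -> str:
    # Hash the file in chunks to cope with large datasets.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(data_dir: Path) -> dict:
    # One entry per data file: size and SHA-256 digest.
    return {
        p.name: {"bytes": p.stat().st_size, "sha256": file_digest(p)}
        for p in sorted(data_dir.glob("*.csv"))
    }

def verify(data_dir: Path, manifest: dict) -> bool:
    # Re-hash every file and compare against the recorded manifest.
    return all(
        file_digest(data_dir / name) == entry["sha256"]
        for name, entry in manifest.items()
    )

if __name__ == "__main__":
    data_dir = Path("training_data")  # hypothetical data directory
    manifest = build_manifest(data_dir)
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))
    print("integrity verified:", verify(data_dir, manifest))
```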

A Group Report will define what an AI threat is and how it may be distinguished from a non-AI threat. The model of an AI threat will be presented in the form of an ontology, giving both a semantic and a syntactic view of the relationships between actors representing threats, threat agents, assets and so forth.
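
To make the idea concrete (a toy vocabulary assumed for illustration, not the ontology the Group Report will define), the sketch below encodes relationships between threats, threat agents and assets as subject-predicate-object triples and queries them.

```python
# Toy ontology sketch: threats, threat agents and assets related by
# subject-predicate-object triples. The vocabulary here is assumed for
# illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str

ontology = [
    Triple("DataPoisoning", "isA", "Threat"),
    Triple("DataPoisoning", "targets", "TrainingDataset"),
    Triple("TrainingDataset", "isA", "Asset"),
    Triple("MaliciousInsider", "isA", "ThreatAgent"),
    Triple("MaliciousInsider", "mounts", "DataPoisoning"),
]

def query(predicate: str, obj: str) -> list[str]:
    """Return all subjects related to `obj` by `predicate`."""
    return [t.subject for t in ontology
            if t.predicate == predicate and t.obj == obj]

print("threats against the training dataset:",
      query("targets", "TrainingDataset"))
print("known threat agents:", query("isA", "ThreatAgent"))
```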

A Group Report [GR SAI 005 – subsequently published in 2021] presents a mitigation strategy, aiming to summarize and analyze existing and potential mitigations against threats to AI-based systems. The goal is to provide guidelines for mitigating the threats introduced by adopting AI into systems.
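
As one example of the kind of mitigation such guidelines might cover (an illustrative technique, not a recommendation taken from GR SAI 005), the sketch below sanitizes a training set by discarding points unusually far from their class centroid, which blunts injected-outlier poisoning of the sort shown earlier.

```python
# Illustrative mitigation sketch: training-data sanitization that drops
# samples lying unusually far from their class centroid.
import numpy as np

def sanitize(X: np.ndarray, y: np.ndarray, max_sigma: float = 3.0):
    """Keep only samples whose distance to their class centroid lies
    within `max_sigma` standard deviations of that class's mean distance."""
    keep = np.zeros(len(y), dtype=bool)
    for c in np.unique(y):
        members = y == c
        dists = np.linalg.norm(X[members] - X[members].mean(axis=0), axis=1)
        keep[members] = dists <= dists.mean() + max_sigma * dists.std()
    return X[keep], y[keep]

# Demo: 100 genuine points plus 5 injected outliers, all labelled class 0.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), np.full((5, 2), 9.0)])
y = np.zeros(105, dtype=int)
X_clean, y_clean = sanitize(X, y)
print(f"kept {len(y_clean)} of {len(y)} samples after sanitization")
```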

A further Group Report on the role of hardware in the security of AI addresses the general requirements on hardware to support SAI. Summarizing academic and industrial experience in hardware security for AI, the report also addresses vulnerabilities or weaknesses introduced by hardware that may amplify attack vectors on AI.

LOOK OUT FOR IN 2021 – ISG SAI WORK IN PROGRESS:

  • Group Specification (GS) on security testing of AI – identifying objectives, methods and techniques appropriate to the security testing of AI-based components
  • Group Report (GR) on AI threat ontology – defining AI threats and how they might differ from threats to traditional systems
  • GR on data supply chain – summarizing methods currently used to source data for training AI, along with the regulations, standards and protocols to control handling and sharing of that data
  • GR on the role of hardware, both specialized and general-purpose, in the security of AI – addressing mitigations available in hardware to prevent attacks, as well as general requirements on hardware to support SAI