Industry Specification Group (ISG) Securing Artificial Intelligence (SAI) Activity Report 2021

Chair: Alex Leadbeater, BT

Developing technical specifications that mitigate threats arising from the deployment of Artificial Intelligence (AI) and threats to AI systems, both from other AIs and from conventional sources.

Autonomous mechanical and computing entities may make decisions that act against the parties relying on them, whether by design or as the result of malicious intent.

The primary responsibility of our Industry Specification Group on Securing Artificial Intelligence (ISG SAI) is to develop technical specifications that mitigate threats arising from the deployment of AI, and threats to AI systems, both from other AIs and from conventional sources. As a pre-standardization activity, the group’s work is intended to frame the security concerns arising from AI. It will also lay the foundations of a longer-term response to the threats to AI by sponsoring the future development of normative technical specifications.

The work of ISG SAI notably addresses these aspects of AI in the standards domain:

  • Securing AI from attack, e.g. where AI is a component in a system that needs defending,
  • Mitigating against AI, e.g. where AI is the ‘problem’ (or is used to enhance otherwise conventional attack vectors),
  • Using AI to enhance security measures against attack from other sources, e.g. where AI is part of the ‘solution’ (or is used to improve more conventional countermeasures).

ISG SAI’s work is agnostic to the AI system deployment use case. Instead, the group considers fundamental threats to AI systems, especially where these threats differ from those facing traditional IT systems, and considers appropriate mitigation strategies.

ISG SAI aims to develop technical standards and reports that act as a baseline in ensuring that AI systems are secure. Stakeholders impacted by the activity of the group include end users, manufacturers, operators and governments.

Published in December 2020, the group’s first deliverable, GR SAI 004, presented a Problem Statement detailing the difficulty of securing AI-based systems and the challenges relating to confidentiality, integrity and availability. This was followed in 2021 by the publication of two further Group Reports.

Setting a baseline for understanding relevant AI cyber security threats and mitigations will be key to the widespread deployment and acceptance of AI systems and applications. Released in March, GR SAI 005 analyzes existing and potential mitigation strategies against threats to AI-based systems. The report surveys available methods for securing AI-based systems against known or potential security threats, and addresses the capabilities, challenges and limitations of adopting these mitigations in particular use cases.

The report describes the workflow of machine learning models, where the model life cycle includes both development and deployment stages. Based on this workflow, it summarizes existing and potential mitigations against training attacks (i.e. protecting the ML model from poisoning and backdoor attacks) and against inference attacks, including evasion, model stealing and data extraction.
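To make the inference-stage category concrete, the following is a minimal sketch, not drawn from the report itself, of one widely known mitigation against evasion attacks: adversarial training. It applies the Fast Gradient Sign Method (FGSM) to a toy NumPy logistic-regression classifier; the data, model and hyper-parameters are all illustrative assumptions.

```python
# Illustrative sketch (not from GR SAI 005): adversarial training against
# FGSM-style evasion attacks on a toy logistic-regression classifier.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

w, b = np.zeros(2), 0.0

def predict_proba(X, w, b):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def fgsm(X, y, w, b, eps=0.3):
    """Perturb inputs along the sign of the loss gradient w.r.t. the
    input, simulating an evasion attack at inference time."""
    grad_x = np.outer(predict_proba(X, w, b) - y, w)  # d(loss)/dx per sample
    return X + eps * np.sign(grad_x)

# Adversarial training loop: fit on clean plus perturbed inputs.
lr = 0.1
for _ in range(200):
    X_adv = fgsm(X, y, w, b)
    X_aug = np.vstack([X, X_adv])
    y_aug = np.concatenate([y, y])
    p = predict_proba(X_aug, w, b)
    w -= lr * (X_aug.T @ (p - y_aug)) / len(y_aug)
    b -= lr * np.mean(p - y_aug)

acc_clean = np.mean((predict_proba(X, w, b) > 0.5) == y)
acc_adv = np.mean((predict_proba(fgsm(X, y, w, b), w, b) > 0.5) == y)
print(f"clean accuracy: {acc_clean:.2f}, accuracy under FGSM: {acc_adv:.2f}")
```

Augmenting each training step with perturbed copies of the inputs typically trades a little clean accuracy for robustness to the same class of perturbation at inference time.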

Data is a critical component in the development and training of AI systems, including raw data as well as information and feedback from other systems and humans in the loop. Limited access to suitable data can force developers to resort to less suitable sources. Compromising the integrity of training data has been demonstrated to be a viable attack vector against an AI system, which makes securing the supply chain of this data an important step in securing the AI itself.

Published in August, GR SAI 002 offers an analysis of data supply chain security. The report summarizes methods currently used to source data for training AI along with regulations, standards and protocols that can control the handling and sharing of that data. It also provides a gap analysis of this information to scope possible requirements for standards to ensure traceability and integrity in this data, along with associated attributes, information and feedback.
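As one concrete illustration of the traceability and integrity goals just described, here is a minimal sketch, not taken from GR SAI 002, that verifies dataset files against a manifest of SHA-256 digests before they enter a training pipeline. The manifest format, file layout and the `verify_dataset` helper are hypothetical.

```python
# Illustrative sketch (not from GR SAI 002): checking training-data
# integrity against a recorded manifest of SHA-256 digests.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large datasets need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(manifest_path: Path) -> bool:
    """Compare every dataset file against its recorded digest; any
    mismatch signals possible tampering in the data supply chain."""
    # Assumed manifest format: {"file.csv": "<hex digest>", ...}
    manifest = json.loads(manifest_path.read_text())
    ok = True
    for name, expected in manifest.items():
        if sha256_of(manifest_path.parent / name) != expected:
            print(f"INTEGRITY FAILURE: {name}")
            ok = False
    return ok

# Hypothetical usage: refuse to train if any file fails verification.
# if not verify_dataset(Path("data/manifest.json")):
#     raise SystemExit("training aborted: dataset integrity check failed")
```

Digest manifests of this kind address integrity only; full supply chain traceability would additionally record the provenance of each file, for example who produced it, when, and under which licence or agreement.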

Development meanwhile continued during the year on a number of further Group Reports. These variously address: an AI threat ontology; security testing of AI; the role of hardware in securing AI; explicability and transparency of AI processing; privacy aspects of AI/ML systems; and an AI computing platform security framework.

Looking forward to 2022, ISG SAI expects to work closely with TC CYBER and OCG AI to consider how its own activities can contribute to the development of future EU Harmonised Standards under the EU AI Act.

A list of all current ISG SAI Work Items is available on the ETSI website.