Industry Specification Group (ISG) Securing Artificial Intelligence (SAI) Activity Report 2022

Chair: Alex Leadbeater, BT

Developing technical specifications that mitigate threats arising from the deployment of Artificial Intelligence (AI) and threats to AI systems, from both other AIs and conventional sources.

Autonomous mechanical and computing entities may make decisions that act against their relying parties, whether by design or as a result of malicious intent.

The primary responsibility of our Industry Specification Group on Securing Artificial Intelligence (ISG SAI) is to develop technical specifications that mitigate threats arising from the deployment of AI – and threats to AI systems – from both other AIs and conventional sources. The group’s work is intended to frame the security concerns arising from AI. It will also lay the foundations of a longer-term response to threats to AI by sponsoring the future development of normative technical specifications.

The work of ISG SAI notably addresses these aspects of AI in the standards domain:

  • Securing AI from attack, e.g. where AI is a component in the system that needs defending;
  • Mitigating malicious AI, e.g. where AI is the ‘problem’ (or is used to improve and enhance other, more conventional attack vectors);
  • Using AI to enhance security measures against attack, e.g. where AI is part of the ‘solution’ (or is used to improve and enhance more conventional countermeasures).

ISG SAI’s work is agnostic to the AI system deployment use case. Instead, the group considers fundamental threats to AI systems, especially where these threats differ from those to traditional IT systems, and considers appropriate mitigation strategies.

ISG SAI aims to develop technical standards and reports that act as a baseline in ensuring that AI systems are secure. Stakeholders impacted by the activity of the group include end users, manufacturers, operators and governments.

Published as a Group Report in January 2022, GR SAI 001 presents an AI Threat Ontology, defining what an Artificial Intelligence (AI) threat is and how it can be distinguished from non-AI threats. This ontology gives a view of the relationships between actors representing threats, threat agents and assets. Extending from the base taxonomy of threats and threat agents described in ETSI TS 102 165-1, it addresses the overall problem statement for SAI as presented in GR SAI 004 and mitigation strategies described in GR SAI 005.
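The ontology's core relationship – threat agents enact threats against assets – can be sketched informally in code. The class names and the example scenario below are illustrative assumptions for exposition only, not the normative terms or structure defined in GR SAI 001:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: names and relations are assumptions,
# not the normative ontology of GR SAI 001.

@dataclass
class Asset:
    """Something of value to be protected, e.g. a trained model or its data."""
    name: str

@dataclass
class ThreatAgent:
    """An actor capable of enacting a threat against one or more assets."""
    name: str

@dataclass
class Threat:
    """A potential cause of harm, linking an agent to the assets it targets."""
    name: str
    agent: ThreatAgent
    targets: list[Asset] = field(default_factory=list)

# Hypothetical example: a data-poisoning threat against a training pipeline.
model = Asset("trained classifier")
data = Asset("training data set")
adversary = ThreatAgent("external attacker")
poisoning = Threat("data poisoning", adversary, [data, model])

assert model in poisoning.targets
```

The point of such a model is that an AI-specific threat (here, data poisoning) can be distinguished from a conventional one by the kind of asset it targets, which is the distinction the ontology is designed to capture.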

Published in March, a further Group Report GR SAI 006 explores the role of hardware, both specialised and general-purpose, in the security of AI. It addresses the mitigations available in hardware to prevent attacks (as identified in GR SAI 005) and also addresses the general requirements on hardware to support SAI (expanding from GR SAI 004, GR SAI 002 and GR SAI 003). In addition, the report considers vulnerabilities or weaknesses introduced by hardware that may amplify attack vectors on AI, as well as strategies to use AI for protection of hardware. It also provides a summary of academic and industrial experience in hardware security for AI.

Development meanwhile continued on eight further Group Reports. These variously address traceability of AI models; security testing of AI; collaborative AI; automated manipulation of multimedia identity representations; explicability and transparency of AI processing; privacy aspects of AI/ML systems; an AI computing platform security framework; and a framework for the creation of multi-partner Proofs of Concept (PoCs).

During the year ISG SAI continued to work closely with TC CYBER and OCG AI to consider how its own activities can contribute to the development of future EU Harmonised Standards under the EU AI Act.

A full list of ISG SAI Work Items currently in development is available online.