[OECD] - AI Harms, Hazards, and Disasters
The Synopsis:
The paper defines terminology for AI-related events, most notably "AI incident" and "AI hazard." These shared definitions aid in classifying, standardizing, and reporting AI events to the OECD AI Incidents Monitor (AIM).1
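To make the distinction concrete, here is a minimal sketch of how such a classification could look in code. This is my own illustration, not anything prescribed by the paper or the AIM: I assume here the paper's core distinction that an incident involves harm that has actually occurred while a hazard involves harm that could plausibly occur, and the class and field names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto


class EventType(Enum):
    AI_INCIDENT = auto()  # harm has actually occurred
    AI_HAZARD = auto()    # harm could plausibly occur but has not yet


@dataclass
class AIEvent:
    description: str
    harm_occurred: bool  # assumption: the attribute that decides the classification


def classify(event: AIEvent) -> EventType:
    # Assumed reading of the paper's distinction: realized harm -> incident,
    # plausible-but-unrealized harm -> hazard.
    return EventType.AI_INCIDENT if event.harm_occurred else EventType.AI_HAZARD


print(classify(AIEvent("model gave harmful advice that a user acted on", True)))
# EventType.AI_INCIDENT
```

A real reporting pipeline would need far richer fields (the AI system involved, affected parties, harm types), but the realized-versus-potential split is the load-bearing idea.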
The Analysis:
I believe the standardization of AI incident reporting is essential to resolving and preventing harms; this OECD draft is a positive step toward defining our present and future in a human-and-AI society.
The AI Incidents Terms Table:
(The paper's table defining "AI incident," "AI hazard," and related terms is not reproduced here; see the OECD paper for the full definitions.1)
The Types of Harms:
Physical Harm:
In standards related to product safety or functional safety, physical harm can be categorised according to the type or severity of the injury. For example, the IEC 60950-1 standard for information technology equipment defines physical injury categories as "slight," "moderate," and "severe" (International Electrotechnical Commission, 2010[8]).1
Environmental Harm:
Some standards categorise harm based on the type of environmental damage caused, such as soil contamination, air pollution, or water pollution. For example, the ISO 14001 standard for environmental management systems includes categories for "minor environmental impact" and "major environmental impact" (International Organization for Standardization, 2015[9]).1
Economic or financial harm, including harm to property:
In standards related to financial or economic risk, harm can be categorised based on the magnitude of financial loss or damage. For example, the Basel Framework provides standardised approaches to risk management in the banking sector, addressing credit, market, and operational risks (Basel Committee on Banking Supervision, 2017[10]).1
Reputational Harm:
In standards related to business or organisational risk, harm can be categorised based on the potential impact on an organisation's reputation or on public trust in that organisation. For example, the ISO 26000 standard for social responsibility includes categories for "minor," "moderate," and "major" negative impacts on reputation (International Organization for Standardization, 2010[11]). Individuals may also be affected by reputational harm (European Union, 2007[12]).1
Harm to public interest:
Harm to public interest includes harms to critical infrastructure and to functions such as the political system and the rule of law, as well as harms to the social fabric of communities. For example, the International Society of Automation's ISA/IEC 62443 Series of Standards accounts for cybersecurity risks that may cause harm to critical infrastructure, defining levels of security, reliability, and integrity (International Society of Automation, 2009[13]).1
Harm to human rights and to fundamental rights:
These rights are established in domestic and international law (United Nations, 1948[14]; European Union, 2007[12]). The EU General Data Protection Regulation (GDPR) is a well-known example of a regulation requiring that certain companies carry out impact assessments to identify and manage risks that may cause harm to privacy rights and other fundamental rights and freedoms of natural persons (Regulation 2016/679, EU[2]).1
Psychological Harm:
The increasing inclusion of psychological harm and harm to mental health in standards and product safety legislation reflects a growing recognition of the need to consider the full range of potential impacts of products, services, and business operations on individuals and communities (Children Act 1989, UK[15]; The Children Order 1995, Northern Ireland[16]; Scottish Government, 2021[17]; European Parliament, 2024[18]). The concept of psychological harm can be more difficult to assess and quantify than physical harm.1
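As a closing illustration, here is a minimal sketch of how these harm types and a severity scale might be encoded in a standardized report record. This is my own sketch: the enum values, the reuse of IEC 60950-1's slight/moderate/severe scale across all harm types, and the record fields are assumptions, not part of the OECD paper or the AIM.

```python
from dataclasses import dataclass
from enum import Enum


class HarmType(Enum):
    # The seven harm types discussed above.
    PHYSICAL = "physical"
    ENVIRONMENTAL = "environmental"
    ECONOMIC = "economic or financial, including property"
    REPUTATIONAL = "reputational"
    PUBLIC_INTEREST = "public interest"
    FUNDAMENTAL_RIGHTS = "human rights and fundamental rights"
    PSYCHOLOGICAL = "psychological"


class Severity(Enum):
    # Scale borrowed from IEC 60950-1's physical-injury categories;
    # other harm types would likely need domain-specific scales.
    SLIGHT = 1
    MODERATE = 2
    SEVERE = 3


@dataclass
class HarmRecord:
    harm_type: HarmType
    severity: Severity
    notes: str = ""


# Example: an AI system whose malfunction contaminates a waterway.
record = HarmRecord(HarmType.ENVIRONMENTAL, Severity.MODERATE,
                    "water pollution from an automated dosing failure")
print(record.harm_type.value, "/", record.severity.name)
```

Encoding the taxonomy this way would make reports comparable across incidents, which is precisely the point of standardizing the terminology in the first place.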
The Endnotes:
1 OECD (2024), "Defining AI incidents and related terms", OECD Artificial Intelligence Papers, No. 16, OECD Publishing, Paris, accessed May 18, 2024.