AI Ethics – From Principles to Standards


Getting artificial intelligence right may be one of the greatest challenges our species has ever faced. And now that the subject of AI ethics is filtering out from the edges of the technology and philosophy communities into the public mainstream, we need to answer the question of how to apply those ethics in practice.

Last week we were honoured to be involved in kicking off a working group to develop a new global ethics standard for empathic technology with the Institute of Electrical and Electronics Engineers (IEEE). The P7014 Standard for Ethical considerations in Emulated Empathy in Autonomous and Intelligent Systems is the latest addition to the IEEE’s P7000 series of standards in development, all focused on different aspects of the ethics of autonomous and intelligent systems. Our CEO and I will be participating in the group so we can be part of what we believe to be a vital movement.

Running in the background of such projects is a continuing supply of AI ethics guides, which are being published by various groups from all over the world, seemingly every five minutes. I have been digging through them, sifting out coherent principles that could have broad application. Here are some examples...

1) Ethics Guidelines for Trustworthy AI

Ethics Guidelines for Trustworthy AI, by the European Commission, identifies three ‘components’ of trustworthy AI. It should be:

  1. Lawful
  2. Ethical
  3. Robust (both technically and socially)

...listing these ethical principles:

  1. Respect for human autonomy
  2. Prevention of harm
  3. Fairness
  4. Explicability

Particular attention is also drawn to situations involving more vulnerable groups (e.g. children and people with disabilities).

...and then it lists ‘key requirements’ for trustworthy AI:

  1. Human agency and oversight
  2. Technical robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination and fairness
  6. Environmental and societal well-being
  7. Accountability

2) Ethically Aligned Design

Ethically Aligned Design, by the IEEE, specifies ‘Three Pillars’:

  1. Universal Human Values
  2. Political Self-Determination and Data Agency
  3. Technical Dependability

...and five ‘General Principles’:

  1. Human Rights
  2. Prioritizing Well-being
  3. Accountability
  4. Transparency
  5. A/IS [Autonomous and Intelligent Systems] Technology Misuse and Awareness of It

3) The Principled Artificial Intelligence Project

At RightsCon in Tunis recently, some members of Harvard’s Berkman Klein Center were presenting their review of a whole load of these AI ethics guides – 32 in fact – for their Principled Artificial Intelligence Project. When I met some of their research team at the conference, I thanked them for doing my homework for me. They have identified the following ‘key themes’ as being common to the AI principles publications they reviewed:

  1. Promotion of Human Values
  2. Professional Responsibility
  3. Human Control of Technology
  4. Fairness and Non-discrimination
  5. Transparency and Explainability
  6. Safety and Security
  7. Accountability
  8. Privacy

They also produced a handy visualisation of their work (go here for the full thing):


Putting it into Practice

Guides like the three above should be instructive for anyone engaged in the design, deployment or direction of autonomous and intelligent systems, but they tend to focus on broad, general principles while remaining light on specific, practicable rules and actions. I picture these AI ethics guides sitting on a spectrum from soft, broad principles to hard, narrow laws, roughly along the following line:

  1. Fundamental rights (such as the Universal Declaration of Human Rights, or the EU Charter).
  2. General ethical philosophy (e.g. eastern/western ethics, or normative vs applied ethics).
  3. Subject-specific guidance (e.g. AI ethics).
  4. Legislation (such as GDPR).

Standards sit somewhere towards the legislation end of that spectrum.

Compared to broader guidelines and principles, standards tend to be more discrete. They need to be constructed in a way that relevant parties can follow them to the letter, which means they must also be verifiable to some extent. A standard is not necessarily quantitatively measurable, as it could be something qualitative such as a set of official terminology, but it should be possible to know that you are conforming to it accurately. And because of their discrete nature, many standards have been followed up by certification processes that adopters of the standard can apply for. So, in writing our new standard with the IEEE, we need to create something that is clear, discrete and practicable. And in the realm of ethics, that’s an intriguing challenge!

For the P7014 standard for ethics in empathic technology, we are just at the beginning. I am writing this only a week after the group’s first ever meeting and I’m itching to find out what subject matter we will sink our collective teeth into over the coming months. If successful, we will build a standard that provides essential guidance in our AI-enabled future, for the betterment of our species.

The potential impact of autonomous and intelligent systems cannot be overstated, and now is the time for us all to work together at building them the right way. From fundamental rights and ethics, all the way to the hard letter of the law, communities from academia, public office, industry and organisations are gathering to construct the necessary frameworks to ensure this technology has a positive influence on the world. And it’s an honour to be a part of that movement.

So please, join us... IEEE P7014 Standard for Ethical considerations in Emulated Empathy in Autonomous and Intelligent Systems

Ben Bland

Chief Operations Officer