Part 2: Translating AI principles into actions can be difficult

March 8, 2023

Many companies have established ethical AI principles, but not all of them understand how to translate those lofty statements into practices that can actually be executed.

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems has published an entire book on the subject. There are also frameworks organizations can use to operationalize ethical AI.

Some organizations have a Chief AI Ethics Officer. Some even have an AI ethics board or committee that’s intentionally diverse, because no single person can think of everything. Members of these groups tend to be business and technology leaders, as well as people with deep expertise in AI, psychology, or philosophy.

The point of an AI ethics entity is to ensure the realization of the four key virtues outlined in Part 1 of this blog (accountability, fairness, human safety, and transparency) and to mitigate AI’s potential risks.

Toward that end, the IEEE offers three ethical AI certifications, covering accountability, transparency, and algorithmic bias, but it’s unclear how many organizations have taken advantage of them.

A fundamental issue is the mismatch between the pace of technology and the slower pace of business and lawmaking, which creates fertile ground for bad actors who use AI as a weapon. Less than five years ago, few were talking about AI ethics. Now it’s a mainstream debate.

Singularity or no threat?

Some people are extremely concerned about the long-term risk of AI, namely the point at which it becomes self-aware and begins to view humans as a threat. Knowledgeable AI practitioners point out that it’s still early days, as evidenced by the prevalence of narrow AI, which is the state of the art today. That means AI is best applied to narrow use cases, such as reading contracts to ensure important clauses aren’t missing. Narrow AI is “fit for purpose,” as the toy sketch below illustrates.
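To make “narrow and fit for purpose” concrete, here is a minimal, hypothetical sketch of a contract-clause check. The clause list and substring-matching logic are invented for illustration only; a production system would use a trained classifier or a full NLP pipeline, but the scope of the task would be just as narrow.

```python
# Toy illustration of a "narrow AI"-style check: scan a contract for
# required clause headings. The clause names and matching rule are
# hypothetical placeholders, not drawn from any real product.

REQUIRED_CLAUSES = [
    "limitation of liability",
    "termination",
    "confidentiality",
    "governing law",
]

def find_missing_clauses(contract_text: str) -> list[str]:
    """Return the required clause headings not found in the contract."""
    text = contract_text.lower()
    return [clause for clause in REQUIRED_CLAUSES if clause not in text]

if __name__ == "__main__":
    sample = """
    1. Termination. Either party may terminate on 30 days' notice.
    2. Governing Law. This agreement is governed by the laws of Delaware.
    """
    missing = find_missing_clauses(sample)
    print("Missing clauses:", missing or "none")
    # Prints: Missing clauses: ['limitation of liability', 'confidentiality']
```

The point of the sketch is the narrowness itself: the system answers exactly one question about one document type, and nothing more.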

How long it will take for artificial general intelligence to be realized is a matter of debate: years or decades. For now, AI creators and users tend to be confident about their ability to control it, but then media stories break, such as in the case of Microsoft’s ill-fated Tay bot, Amazon’s HR recruiting debacle, and Google’s Bard.

At least for the foreseeable future, experts say a “human in the loop” is needed and that AI is merely assistive. However, the technology keeps improving, so what AI can do in place of humans continues to grow.
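One common way to keep a human in the loop is a confidence gate: the model’s output is applied automatically only when its confidence clears a threshold, and is routed to a human reviewer otherwise. The sketch below is a minimal illustration under assumed names; `model_predict`, the threshold value, and the canned prediction are all hypothetical stand-ins, not any specific vendor’s API.

```python
# Minimal sketch of a human-in-the-loop confidence gate. The model call
# and threshold are hypothetical placeholders; the point is the routing
# logic, not any particular model.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff; tune per use case and risk

@dataclass
class Prediction:
    label: str
    confidence: float  # assumed to be a calibrated probability in [0, 1]

def model_predict(item: str) -> Prediction:
    """Stand-in for a real model; returns a canned low-confidence answer."""
    return Prediction(label="approve", confidence=0.62)

def decide(item: str) -> str:
    """Auto-apply confident predictions; escalate the rest to a person."""
    pred = model_predict(item)
    if pred.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto: {pred.label}"
    return "escalated to human review"

print(decide("loan application #123"))  # -> escalated to human review
```

As AI improves, organizations tend to lower the share of items escalated, which is exactly why the threshold deserves ongoing governance rather than a one-time setting.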

A single bottom line isn’t enough

Silicon Valley and other tech hubs have long prized the art of the possible, funded it and monetized it. When it comes to AI, the same formula will not work well.

Organizations need to understand what they’re doing with AI, what they want to be able to do with it, and the risks associated with both in order to gauge AI’s value to the business and its stakeholders. They also need to plan for change management, because not everyone wants to work hand in glove with a machine.

Since AI has become a competitive weapon, the question is not whether organizations will adopt AI, but when, where, why, and how.

In short, when considering all the potential benefits of AI, don’t forget to contemplate the risks.
