Part 1: ChatGPT spurs AI ethics debate

March 1, 2023

In late November 2022, OpenAI made ChatGPT available to the public. Since then, several concerns have been raised about AI ethics.

For example, high school teachers and college professors worry about students using the technology to get out of homework. As University of Texas at Dallas professor Jonathan Malesic observes in a piece he wrote for The Atlantic,

“…It seems to open up whole new vistas of academic dishonesty, and it calls into question how and why we teach writing at all. A professor at the University of Pennsylvania’s Wharton School has said that ChatGPT’s answers to his operations-management class would have earned a B or B–. That seems about right; if a student in my first-year writing class had turned in a ChatGPT-generated essay last semester (and for all I know, someone did), they would have easily passed.”

Reactions to ChatGPT have been mixed, with opponents underscoring the ethical issues, though ChatGPT is just one example. Ethical AI activism has become more common over the last few years as AI’s capabilities continue to expand.

What is AI ethics anyway?

The point of AI ethics is responsibility, which is why it is sometimes called “responsible AI.” Like the physician’s oath, the idea is to help, not harm, humans.

For academics and philosophers, the foundations of AI-related ethics can be found throughout history and around the world, including in the teachings of Plato and Socrates, Buddhism, and Ubuntu.

The average person tends not to be interested in an academic lesson on ethics, but there are some basics everyone should know, whether as a business or technology leader, lawmaker or consumer. Despite the different schools of thought on the matter, including the declarations some groups and companies have made, there are four basic tenets of ethical AI: accountability, fairness, human safety, and transparency.

Accountability assigns responsibility when AI goes awry, but who should be held responsible? It’s a simple question with a complex answer.

Some think the people designing algorithms or training AI should be held responsible, but many will counter, “Since it’s technology, it’s neither good nor bad.” This group tends to believe the people who misuse or abuse AI should be held accountable instead.

While that may seem like a reasonable conclusion, the reality is that when something goes wrong, it’s usually an inadvertent mistake rather than deliberate misuse, which leaves no obvious party to hold accountable.

Fairness is also kind of tricky. Ideally, everyone on the planet would be treated equally, but sometimes that’s impossible. Take a credit score, for example, which is biased in favor of people with good credit histories.

Here, one needs to think first about whether the AI is systematically discriminating against a protected class, such as age, gender or religious affiliation. For example, some sentencing algorithms have treated defendants differently based on their ZIP code, which resulted in systematic discrimination.
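
To make that concrete, here is a minimal sketch of how a team might screen a model’s decisions for this kind of systematic disparity. It is an illustration under assumptions, not a compliance test: the column names, the toy data and the 80% rule of thumb are all invented for the example.

```python
# Hypothetical example: a quick disparate-impact check on a model's approval
# decisions, grouped by a protected attribute. Data and threshold are illustrative.
import pandas as pd

# Toy data: one row per applicant, with the model's decision and a protected attribute.
decisions = pd.DataFrame({
    "age_group": ["under_40", "under_40", "40_plus", "40_plus", "40_plus", "under_40"],
    "approved":  [1,          1,          0,         1,         0,         1],
})

# Approval rate for each group.
rates = decisions.groupby("age_group")["approved"].mean()

# Disparate-impact ratio: worst-off group's rate divided by best-off group's rate.
ratio = rates.min() / rates.max()
print(rates)
print(f"Disparate-impact ratio: {ratio:.2f}")

# A common rule of thumb flags ratios below 0.8 for closer human review.
if ratio < 0.8:
    print("Potential systematic disparity -- investigate before deploying.")
```

A check like this doesn’t prove or disprove fairness on its own, but it is the kind of question a team should be asking before an AI system makes decisions about people.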

Human safety is another factor, and here Asimov’s Three Laws of Robotics tend to be mentioned often. According to Asimov, robots must:

Not injure a human being or, through inaction, allow a human being to come to harm;
Obey the orders given to them by human beings, except where such orders would conflict with the first law; and
Protect their own existence, as long as such protection does not conflict with the first or second law.

Transparency, like fairness, is top of mind. AI should be able to explain its reasoning in a way humans can understand. Typically, when someone complains about a lack of transparency, they’re referring to a deep learning technique that’s essentially a black box. Although the AI may provide a recommendation or answer to a query, it cannot explain how it arrived at that conclusion.

If humans are going to trust AI, transparency is necessary. Transparency is also necessary to ensure fairness and safety.
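
For contrast, here is a minimal sketch of what an explainable decision can look like: a toy linear scorecard whose output can be broken down into per-feature contributions a human can inspect. The features, weights and numbers are made up for illustration; the point is simply that a deep learning black box offers no comparably direct breakdown.

```python
# Hypothetical sketch of an "explainable" model: a simple linear credit scorecard
# whose result can be decomposed feature by feature. All values are invented.

WEIGHTS = {
    "on_time_payment_rate": 300.0,   # points contributed per unit of each (made-up) feature
    "credit_utilization":  -150.0,
    "years_of_history":      10.0,
}
BASE_SCORE = 500.0

def score_with_explanation(applicant: dict) -> tuple[float, dict]:
    """Return the score plus each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    total = BASE_SCORE + sum(contributions.values())
    return total, contributions

score, reasons = score_with_explanation({
    "on_time_payment_rate": 0.95,
    "credit_utilization": 0.40,
    "years_of_history": 7,
})
print(f"Score: {score:.0f}")
for feature, points in sorted(reasons.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {points:+.0f} points")
```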

Check the Bospar Blog again next week for part 2, where we continue to explore basic AI ethics.
