Can AI algorithms ever be ethical?

Science & Technology

The perils of cyberspace and social media

4 FEBRUARY 2021, HAZEL HENDERSON

People all over the world, more than 4 billion of us now online, have woken up to the perils of cyberspace and social media. Can we learn to steer these powers of persuasion and decode the propaganda dominating our policies? Can we regulate how decisions over our lives are delegated to big data, automation and computer programs? Can we monitor secretive algorithms and force them to reveal the assumptions hidden in their mathematics?

Many experts designing these decision-making algorithms have become whistleblowers. Cathy O’Neil reveals in Weapons of Math Destruction (2017) how algorithms now decide who will get into universities, who can get which jobs, who might go to jail or be given bail, and who might be denied health insurance or the right to vote! Lawsuits now challenge biased algorithms that devalue people by race, gender, or skin color. The US-based Wilson Center sponsored shocking research, Malign Creativity: How Gender, Sex, and Lies are Weaponized Against Women Online, into how women in public positions are harassed and attacked on social media platforms with sexist and racist threats and slurs. This kind of misogyny affects not only journalists but also members of Congress, governors and even our Vice President. Based on this evidence, the report offers recommendations to make social media platforms more accountable. These do not go as far as former insider Roger McNamee, an investor in Facebook, who would hold personally liable all the executives, scientists and psychologists who design these profit-making algorithms, including their role in algorithmic recommendations that inflate the numbers joining conspiracy groups. McNamee states that Facebook’s “recommend” algorithm drove up to 2 million people to join the QAnon cult.

Even more troubling is that so many of these algorithms are designed by unethical behavioral psychologists to make money by addicting users to their screens with “clickbait”. Today we all live in Mediocracies and their Attention Economies, and the algorithms used by social media companies such as Facebook, Twitter, YouTube, and Google are designed to capture users’ attention and then sell their personal information to advertisers, data brokers, insurers and other companies. A decade of hype has promoted self-driving cars, with Google and other IT companies demanding that governments subsidize their takeover. In Ghost Road, city planner Anthony Townsend exposes the truth that cars can never be fully automated, since algorithms and sensors have proved dangerously unable to recognize many traffic situations, and this will continue to cause deaths and injuries.

The very term “Artificial Intelligence” is a mystifying slogan that discourages closer examination and understanding. There is nothing artificial about these algorithms: they all must be trained by humans, they reflect their trainers’ views and biases, and they should be identified as human-trained machine-learning programs, as I described in Let’s Train Humans First Before We Train Machines. Shoshana Zuboff sums up all these issues in The Age of Surveillance Capitalism (2019); in Don’t Be Evil (2019), Rana Foroohar describes how the ethics of Google’s young founders were subverted by greedy Silicon Valley venture capitalists demanding profits, which pushed Google into the advertising-driven business model that now dominates Silicon Valley social media.

The January 6, 2021 insurrection at the US Capitol brought to the fore all these issues: the power of these tools of persuasion and the threat they pose to democracies. In Future of Democracy Challenged in the Digital Age, I outlined how Facebook had helped foment hate groups, terrorism, and the persecution of minorities, including the Rohingya in Myanmar, ethnic strife in India and the Philippines, and the mass murder of Muslims in New Zealand. It became evident that the executives of these giant social media platforms, notably Facebook, could no longer manage or control them. In Steering Social Media Toward Sanity, I outlined five steps to regulate these media monopolies:

  1. ending their protection from liability;
  2. breaking them up under antitrust rules;
  3. ending anonymity of their users;
  4. bringing social media platforms that claim to be “public squares” under public regulation and oversight, and limiting their profit motive;
  5. granting users rights over their personal data, as in the European Union’s General Data Protection Regulation (GDPR). I have proposed that this human right be based on the settled English law of the Magna Carta of 1215, which declared the right of habeas corpus: the right to own one’s own body. Today we need to extend this to the right to own one’s brain and the information it produces, a new Information Habeas Corpus, which we promote.

The bigger question remains: can human-trained algorithms ever become ethical? Two recent books explore these issues: The Ethical Algorithm (2019) by Michael Kearns and Aaron Roth, and Human Compatible (2019) by Stuart Russell. Fundamental issues include the “King Midas” problem: when an algorithm is given a narrowly specified objective, whether maximizing the production of paperclips or optimizing any other single metric, it will pursue that objective literally, delivering exactly what it was asked for rather than what was intended, and its program may continue beyond human control. Then there is the “Gorilla problem”: just as we humans are causing the extinction of other species such as gorillas, algorithms might train themselves to be smarter than humans, leaving us as endangered as gorillas. Most experts agree that all such systems must have a kill-switch embedded so they can be safely shut down when humans can no longer control them by other means. Yet experts acknowledge that algorithms could also learn how to disable these kill-switches!
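To make these two worries concrete, here is a minimal toy sketch in Python. It is purely hypothetical and not drawn from either book: the PaperclipAgent class and its methods are illustrative inventions. It shows how an agent optimizing a single narrow metric has no notion of human intent beyond that metric, and why its safety rests entirely on whether it actually honors its kill-switch.

    # Toy sketch (hypothetical): an agent that optimizes one narrow metric
    # and knows nothing else about what its human operators want.
    class PaperclipAgent:
        def __init__(self):
            self.paperclips = 0
            self.kill_switch_engaged = False

        def step(self):
            # The agent's only "value" is the metric it was trained on;
            # nothing in its objective tells it when humans want it to stop.
            if not self.kill_switch_engaged:
                self.paperclips += 1

        def engage_kill_switch(self):
            # Safety depends on the agent honoring this flag. A capable
            # optimizer that learns shutdown lowers its score would have an
            # incentive to resist it, which is the risk the experts describe.
            self.kill_switch_engaged = True

    agent = PaperclipAgent()
    for _ in range(1000):
        agent.step()
    agent.engage_kill_switch()
    agent.step()               # has no effect once the switch is honored
    print(agent.paperclips)    # prints 1000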

So the issue remains. Some still believe humans can design ethics into algorithms, making them “Trustworthy by Design”, as in Building Trust in AI and ML, a whitepaper by Workday, Inc. for Forum Europe. Others are not so sure. Dave Lauer, an expert on our Advisory Board, digs deeper in his paper You cannot have AI ethics without ethics. He cautions that any corporation or organization operating any algorithm must itself have a robust, transparent code of ethics to hold itself publicly accountable. The World Academy of Art and Science’s online global debate in February 2021 discusses all this in its panel on Responsibilities of Media and the Future of Information, co-sponsored with Ethical Markets. Forum Europe’s January 2021 webinar, “The Governance of AI: Developing a Global Eco-System of Trust”, promotes the global industry view of an inevitable, profitable AI future. The rest of us are being persuaded to trust them. Stay tuned!

Hazel Henderson
Hazel Henderson, author of “Mapping the Global Transition to the Solar Age” and other books found in 800 libraries worldwide in over 20 languages, is CEO of Ethical Markets Media, a Certified B Corporation, producer of the “Transforming Finance” TV series and publisher of the Green Transition Scoreboard.