by Rob Loggia

Saturday, May 25, 2024

It’s difficult to read an article or watch an interview about artificial intelligence without at least one mention being made about the supposed ethical development of these technologies. The pioneers of these technologies have been clamoring over one another to assure us that they are in fact going about their work with ethics as a prime directive. Many companies in the industry feature an ethics team, presumably there to guide and steer development in the “right” direction. All of this sounds great... if only it were true. But a sensible and thoughtful person should have some questions.

The first question that leaps to mind is how it is that these companies and individuals have ethics figured out while the rest of the world is still actively debating the subject. To make the claim of “building ethically” implies a knowledge of the ethical path, and yet vast chasms of space separate the many different beliefs on the subject popular on this planet. Any student of cultures should be able to tell you that what a Chinese person considers ethical is not necessarily what an American would consider ethical. The Israelis and Palestinians certainly have very visible differences of opinion on the subject. And so on.

Outside of culture, nearly every religion on the planet, and there are many, offers its own disparate viewpoint on ethics. Will these views be included among those consulted to arrive at an ethical course for AI developers to pursue? Since they don’t all agree on some critical matters, in ways that often cannot be reconciled, which perspectives will be excluded? Will the people who embrace these perspectives then be expected to agree that AI is being developed ethically, when their own sensibilities on ethics are being disregarded?

Even within the narrow scope of Western philosophical ideas there are many schools of thought and modes of opinion, none fully accepted and few fully discredited. From Aristotle to Kant, with a left turn through De Sade and back over to Bentham, is not a journey for the faint of heart. You will find little agreement among these individuals about correct ethical behavior, and even less among the modern philosophers who continue to debate their theories.

All of this does not amount to a suggestion that we simply throw our hands up, although some may take it that way. But it does tell us that we should probably reject the strong claims that AI companies and the associated individuals are making. For how can they possibly be doing what is for all intents and purposes impossible at this date?

So what are they doing? The likely answer is pretty chilling, and the result differs depending on where a particular AI technology or piece of infrastructure is developed. The people doing the development will most likely nominate their own favored ethical calculus as the best one, and use that. This is, after all, what people usually do. This should frighten anyone who has studied the many cultures and religions of the world. Some of them have some pretty wacky ideas about what is right and what is wrong.

Here in the West, we don’t have to guess. A vast number of the people working in this field are, by their own admission, utilitarians. It would be beyond the scope of this article to detail all of the features and shortcomings of this school of thought that other credible theories would deem unethical. Suffice it to say that an artificial intelligence that had any power over resources or events, any kind of mobility, or any ability to exercise force, and that was also imbued with utilitarian ethics, would be a threat to all human beings.

The ethics of utilitarianism is actually not monolithic. There are many different flavors of the philosophy that vary in focus and extremity. But they all have one thing in common. With utilitarianism, we are no longer viewing ethics as human to human, 1:1. Utilitarianism, by design, steps back and looks at a mass of people. To put it another way, to practice utilitarianism is a form of playing God. And as with everything, the further back you step, the smaller the objects you are observing will appear. Soon enough, the individual objects meld together through this lens to resemble a single mass, and the individual fades from view.

Utilitarianism invokes concepts like “The Greater Good” while expecting us to ignore the fate of “The Lessers” in order to achieve it. Utilitarianism, even in its softest forms, necessarily tramples the idea of the individual as being intrinsically valuable. Love it or hate it, there is no denying that the philosophy is inherently dehumanizing, so we had better hope there is someone to have mercy on us if these are the ethics they feed into artificial intelligence. Because the constructs and entities they build certainly won’t.

One retort to all of this is that ethics is the study of questions and not answers. There is much truth to this notion, and these companies might reasonably claim that this is the function of the teams of experts they employ in their ethics department: to debate these issues and work towards consensus. Forgetting that the greatest minds of history put together have failed to arrive at consensus, we must still ask what kinds of people these are. Is it a requirement, for example, that they believe in the overall mission and differ only on the details of how to do it right? Because if this is true, important perspectives are being left out.

Anyone who has spent time in corporate culture probably already knows that people who don’t believe in the overall mission of a company are not viewed as team players. When they sneak in, they rapidly find themselves unwelcome. Perhaps it is fair to expect that somebody in sales or product development believe in the overall mission. Why would a company want to hire people who work against its interests and don’t believe that what they are working on has value? But in the fields of safety and ethics, this is exactly the type of input they need. These are the people who will tell the CEO no when he or she needs to hear it most. It may sting a bit to write their paycheck, but these people are more valuable to the company, and to the idea of doing things ethically, than an entire brigade of yes-men and yes-women.

The consequences for getting this wrong are tremendous. Military organizations have already deployed semi-autonomous drones and are working feverishly on semi- and fully autonomous robots, some with the ability to kill. Law enforcement agencies worldwide are salivating to get their hands on robots of their own, to save the lives of officers and increase their own law enforcement footprint. That is to say nothing of the many private companies and individuals working towards similar goals. To believe that the fruits of AI development will remain confined to research labs and data centers is to ignore all of the evidence to the contrary. Plenty of people and organizations want this stuff “out here” with us.

Thoughtful people (and some stupid ones) all over the world are now going through the motions of evaluating and debating the value proposition of artificial intelligence. Some will conclude that the preponderance lies with the benefits, and that pursuing these technologies is a Good Thing for humanity. There is certainly plenty of room for debate. But at no point during this process should anyone be naive or foolhardy enough to employ as part of their calculus the assurances that these technologies are being developed ethically. To do so would be to do all of humanity a disservice.

It would be refreshing, at least, if one of these CEOs stood up and said something a little more truthful. If they just stated that no, in fact, they don’t care if their actions have disastrous consequences for the rest of us or our children somewhere down the line. That their own venality, hubris, ambition, greed and callousness would propel them forward even if it could be proven beyond a shadow of a doubt that what they were working on would lead us all to doom. That they don’t believe they’ll be alive to see the worst of it and that, in the meantime, they expect to do very well from it. As horrible as all of this is, it would at least be nice to hear it as opposed to the insulting lip service about “muh ethics” we get now.