Got a Problem? Ask Your AI Bot

This article was originally published in Ellemeno on Medium.com on 5/9/23

Picture of a crowd of protestors in Germany with a sign saying Wir sind keine Bots (We are not bots).
#We are not bots (Photo by Markus Spiske on Unsplash)

The bot has solutions — or does it?

It’s conjectured that language facilitates social harmony, that with language we can bridge divides and resolve tensions. It’s the water that puts out fires and the fan that cools tempers. Fewer fists, more words is what we need.

But the certainty of this conjecture seems in doubt. Its record of success is iffy. It’s certainly not true that words always work toward producing harmony. They often, very often these days, lead to dissension and violence. We’ve got road rage, supremacy tirades, hate speech, name calling, snarky comments. Words as weapons, not as salves.

It’s only when something like Robert’s Rules of Order precludes meanness, sarcasm, dissembling, and lying, and allows conversants to engage only with sincerely and honestly felt feelings, that language holds out a real possibility of reaching social harmony.

This would explain, perhaps, the differences between the perennially failed “talks” between Israelis and Palestinians, and, on the other hand, the relative success of the “talks” that resolved the Troubles in Northern Ireland.

Closer to home, here in America, we invoke the internalized language of “thoughts and prayers” as the consistently failing solution for our gun violence problem. Thoughts are talk we do with ourselves, while prayers are talk we do with God. But it’s clear that we’re not being honest in talking with ourselves and God doesn’t seem to be listening.

Language isn’t solving this problem and may not be capable of solving it. To one group, guns embody safety and to the other they embody harm. The word “gun” can’t resolve its own contradictions.

Yuval Harari, one of our eminent philosophical historians, believes in language. He sees it as the source of the stories that get passed down to us and which comprise our history. Language matters to him because it is the only tool that provides him his raw working material. But his job requires that the sources he uses be free from taint, meaning they have to be adjudged sincere and truthful. Harari believes that AI and the AI bots’ ability to construct stories and narratives poses serious questions about sincerity and truthfulness. How do you ascribe honesty to stories AI bots create? Is a bot capable of reporting objectively, as a historian would, to address problems effectively?

Could we, for instance, use AI to help solve the problem of gun violence in America? How would it go about it? And could it do it in a way similar to how George Mitchell handled the adversarial factions in Northern Ireland? Would the antagonists in the gun debate eventually develop enough trust in the bot to reach consensus on a solution? If so, would a historian two hundred years from now attribute the success to the bot? Would the bot, presumably because it would still exist then, be a candidate for the Nobel Peace Prize?

Harari is dubious about the prospects, to be sure. Humanness and human rights he sees expressed not so much in our DNA as in the historic conversations our ancestors fleshed out in stories and laws. A culture.

Since AI bots have neither DNA nor any cultural integrity and history, they will be inherently unable to formulate opinions that can be trusted. Which is not to say that their opinions cannot be persuasive. They very well might be, in the same way that we humans let ourselves be persuaded by our gods. They too are disembodied and self-serving, so we already have a model.

Harari concludes his opinion piece with a warning. The power we look to be ceding to AI may be the demise of our democracy. Democracy relies on the power of language and conversation to tell stories powerful enough to bring people together. With language we can change people’s minds and opinions. But, he says, it’s pointless to try to change the set opinions of an AI bot, because it will be right, all the time, every time. That prospect “could destroy our ability to have meaningful conversations, thereby destroying democracy.” And, by implication, it would likely compromise history.

So with that warning in mind, I wonder, tossing caution to the wind, about the possibility of AI helping us resolve thorny problems, such as gun control and the exploding gun violence epidemic in America. Can AI assist where normal human conversation has so far failed?

Another thorny issue in contemporary America is abortion. Here too we have a battle over the definitions of the word “abortion.” To some it’s child murder, to others it’s a rescue of a woman’s life. Life, here, is broadly conceived to include mental, emotional, and physical aspects. Other terms in the abortion debate are equally slippery — fetus, child, person, viability, sentience, consciousness, mother, father, choice, IVF, even the words morality and life. I can’t think of another issue packed with as much linguistic tension and uncertainty.

Back to the issue. What can AI potentially do to resolve these tensions? Since AI has to be primed to understand the subjective issues of a query put to it, the prompts involved would have to reveal the various emotional stances advocates on both sides of the issue take. An AI bot would have to assume the persona of a disembodied Solomon, not sentient, but wise to the intentions of humans. Solomon’s proposal to cut the baby in half was made to reveal the intentions of the two claimants; it was his knowledge of those hidden meanings on which he based his decision.

Understanding intentionality is probably critical to any solution, but it’s something that AI isn’t equipped to understand innately. Intentionality is expressed linguistically in what discourse analysts call illocutionary force, meaning essentially the typically unexpressed meaning of our words. An often cited example is the sentence, “It’s cold in here today,” a declarative statement actually subsuming the imperative, “Shut the window, please.”

AI can learn such things potentially, but the structure and logic of such discourse is constructed as part of how any particular language, say English, embeds it within its grammar. Each language can and does construct its discourse rules differently from English because societies differ and languages reflect their societies. So there is no cross-linguistic cookbook for interrelating these rules.

It’s important to understand, too, that discourse language is situationally based. In other words, the illocutionary force of an utterance will depend on where you are. If you’re on the 40th floor of a skyscraper, it’s not “shut the window,” because the window isn’t open to begin with. You might expect the AI bot not to know that fact, but it likely will. Since it’s been trained on a large language model comprising billions of examples of how and where language is used, its admittedly indirect “knowledge” of the world can be more vast than any one human may possess.

What the AI bot will know for sure is the full set of competing definitions of the terms involved. Its dictionary can potentially contain more terminological depth than your average lexicographer commands. Potentially, by making us humans aware of the discrepancies and inconsistencies contained in words, the bot can pinpoint where a concept needs to be better focused.

That doesn’t necessarily mean that being aware of these inconsistencies will produce agreement. It probably won’t. But it could mean that a person may stop trying to convince someone else with words that have no power to persuade or change minds.

As an aside, I think something similar could happen with the word “woke,” which is being pulled and tugged toward definitions with both positive and negative connotations. Does the word imply that a person is a mindless cog to failing progressive causes, or does it imply that a person is alert to the instruments of social inequality? Eventually, society will likely favor one or the other, although it’s also possible that society will kill the word off entirely. That’s always one way to resolve a discrepancy. The non-Solomon solution.

To get back to abortion. What would be effective questions to put to the AI bot to inform the debate? Here it’s important to remember that it’s the formatting of the questions themselves that will send the bot off on one or another exploratory path. Questions contain their own implicatures, just as commercial and political ads do, and they’ll bias the results.

Presumably you can instruct the bot to avoid bias, but yeah, sure! It’s not easy because as behavioral psychologists will tell us, we’re often not aware of our own biases. Worse, we tend to weaponize the ones we are aware of.

But here’s a possible query that tries to unpack some of the bias people have about abortion. Truth in advertising, I’m pro-abortion rights, so I may not be reining in my own bias.

Bot, what can you find out about the different ways in which humans, in the context of abortion, define the word “child”? Discuss the implicatures (hidden meanings) of the word for people who differ as to the definition. Also report information on why people of different persuasions hold the views they do. What do people holding a particular view have in common? Make sure to include only sources generated by humans, not other bots. Speculate on ways the different sides might find common ground. If there is no common ground, report that.
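For the technically curious: if someone did want to pose a query like the one above to a chat-style bot, it would typically be split into a “system” instruction (the neutrality and sourcing constraints) and a “user” question. The sketch below shows one plausible way to assemble that payload; the two-role message convention is common to most chat APIs, but the exact wording and structure here are my own illustration, and no client library or model is assumed.

```python
# A minimal sketch of framing the hypothetical abortion query for a
# chat-style bot. The "system + user" message convention is common to
# most chat APIs; the prompt wording here is illustrative only.

SYSTEM_INSTRUCTIONS = (
    "You are a neutral discourse analyst. Avoid taking a side. "
    "Include only sources generated by humans, not other bots. "
    "If there is no common ground between the positions, report that plainly."
)

USER_QUERY = (
    "What can you find out about the different ways in which humans, "
    "in the context of abortion, define the word 'child'? "
    "Discuss the implicatures (hidden meanings) of the word for people "
    "who differ as to the definition. Report why people of different "
    "persuasions hold the views they do, and what people holding a "
    "particular view have in common. Speculate on ways the different "
    "sides might find common ground."
)

def build_request(system: str, user: str) -> list[dict]:
    """Assemble a chat-style message payload from the two prompt parts."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

if __name__ == "__main__":
    # Print the payload that would be sent to a chat API of your choice.
    for message in build_request(SYSTEM_INSTRUCTIONS, USER_QUERY):
        print(f"{message['role']}: {message['content'][:60]}...")
```

Separating the constraints (neutrality, human-only sources) into the system message keeps them in force no matter how the user question is later reworded, which is one small hedge against the prompt-bias problem described above.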

This is a perfectly hypothetical query. If you were expecting that I had actually posed it to an actual bot, I’m sorry to report I have not. I spent the time it might have taken to consult a bot with writing this article instead. I can tell you two things. First, I have successfully passed many variations of the Turing test and can assure you I am decidedly human. (Though don’t forget that bots can lie. But I am not lying. Believe me.) Second, I don’t have the wisdom to solve the problems of American gun violence or the problem of abortion access. I’m firmly planted on only one side of both issues and firmly biased against the other side.
