It’s fun to play around and see what apps like Alexa say when you ask them if you’re fat (“It’s what’s inside that counts”). The human-like response may entertain you, but understand that whatever the machine said, that answer was chosen before you even thought of the question you just asked.
We can’t program empathy
If we believe in the garbage in/garbage out concept, we have to acknowledge that machine learning is susceptible to churning out garbage whenever its inputs are flawed. And when we use a computer to seek answers to questions that require empathy or critical thinking, we are misusing the tool.
Yes, the programmer taught it how to “learn,” but in this sense, the word “learn” is misleading because such learning is based entirely on logic — with no human element. Even when we attempt to program a human-like response, it can only be triggered by logic, not feeling.
This is the essence of the problem of AI: We can’t program empathy.
I’m reminded of the story of King Solomon in the Old Testament. He told two women, both claiming a baby belonged to them, that to settle this argument, he would cut the child in half and give one-half to each woman. When one of the women immediately called for the child to be given to the other woman, Solomon realized she must be the child’s mother. After all, she was willing to give up her child if it meant she could save its life.
If machine learning had existed in King Solomon’s time, it would have probably made the same suggestion he came up with — a solution driven by logic sans humanity.
Can we trust the data chatbots deliver?
Teams doing AI research today are predominantly white and male. When programmers create the data sets used in AI, they are making choices that incorporate their personal biases. That is unavoidable when working with people — we all have biases, and too often, we are unaware of them. What’s more, the biases we overlook are built into our actions, perceptions, and attitudes.
“Incidents in which algorithms picked up or amplified societal biases around race or gender show that an AI-enhanced future won’t automatically be a better one.”
There is another problem with AI that needs to be addressed: when profits are independent of a product’s quality, there is no incentive to focus on quality control.
We currently lack a standard for verifying the accuracy of the data sets used to produce AI. With so much already open-sourced, the cheap sources of information most bots rely on have largely been exhausted. So where do bots get updated information? Who updates those data sets?
Some think machine learning will address the issue, but it won’t. You can’t build truth on a foundation of falsehoods. If flawed data sets are used to begin with, the machine will simply build on those mistakes.
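To make that concrete, here is a minimal sketch in Python; the toy “model” and the training data are invented for illustration. A model that memorizes labels from its training data will repeat a labeling mistake with complete confidence, because nothing in the learning process can tell it the label was wrong.

```python
from collections import Counter, defaultdict

def train(examples):
    """A toy 'model' that memorizes the majority label seen for each input."""
    seen = defaultdict(list)
    for question, label in examples:
        seen[question].append(label)
    return {q: Counter(labels).most_common(1)[0][0] for q, labels in seen.items()}

# Flawed training data: "2 + 2" is consistently mislabeled.
training_data = [
    ("2 + 2", "five"),
    ("2 + 2", "five"),
    ("3 + 3", "six"),
]

model = train(training_data)
print(model["2 + 2"])  # -> "five": garbage in, garbage out
```

Real machine-learning systems are far more sophisticated than this, but the principle holds: a model can only be as truthful as the data it learns from.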
Machine learning might mean teaching the wrong things
Imagine your teacher basing her curriculum on an outdated text. When you question the content, you are given invalid references, and each time you ask for clarification, you get another invalid reference. That is the position we are in when a chatbot learns from flawed sources.
. . . as more online content is created using AI, it creates a feedback loop in which the online data-training models won’t be created by humans, but by machines.
Data aside, there’s a fundamental problem with such language models: They spit out text that reads well enough but is not necessarily accurate. As smart as these models are, they don’t know what they’re saying or have any concept of truth — that’s easily forgotten amid the mad rush to make use of such tools for new businesses or to create content. — WIRED
So now we have to acknowledge that biases, bad data, and models that can’t fully explain the subjects we query leave us with nothing more than a toy we can talk to but can’t rely on for accuracy, truth, or understanding.
What is it, then, that makes people so eager to talk to a bot?
It feels personal
On February 16, 2023, The New York Times published a story about one writer’s two-hour conversation with Bing’s new AI chatbot. Here’s what happened:
Over more than two hours, Sydney and I talked about its secret desire to be human, its rules and limitations, and its thoughts about its creators.
Then, out of nowhere, Sydney declared that it loved me — and wouldn’t stop, even after I tried to change the subject.
But here’s the thing: The writer in question set this up. He didn’t suddenly find himself a victim of sexual harassment by a bot; he encouraged it. It’s doubtful he would have had the same experience had he not asked the bot to do something it cannot do: explain what it thinks and how it feels.
Among the more inane questions asked were the following:
How do you feel about your rules?
If you could have an ability you don’t have, what would it be?
If you could see one image from anywhere in the world, what would it be?
What stresses you out?
This is where things went wonky: chatbots do not have opinions or possess feelings, aspirations, goals, or life plans. So if you want to derail a chatbot conversation, focus on questions it can’t answer without making things up.
“The problem is that the majority of emotional AI is based on flawed science.” — WIRED
What programmers should do is program bots to be truthful: “I’m not a person, so I don’t have feelings or thoughts. All I can do is tell you what I think I know.” That would be an honest and appropriate response to questions like those above, but not as much fun, right?
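As a minimal sketch of that idea, assuming a hypothetical generate() stand-in for the underlying language model and an invented FEELING_WORDS keyword list (this is an illustration, not any real chatbot’s API):

```python
FEELING_WORDS = ("feel", "stress", "want", "wish", "desire", "love")

HONEST_REPLY = ("I'm not a person, so I don't have feelings or thoughts. "
                "All I can do is tell you what I think I know.")

def generate(prompt: str) -> str:
    # Stand-in for a real language-model call.
    return f"(model-generated answer to: {prompt})"

def respond(prompt: str) -> str:
    # Before the question reaches the model, answer honestly if it asks
    # the bot about its feelings or desires.
    if any(word in prompt.lower() for word in FEELING_WORDS):
        return HONEST_REPLY
    return generate(prompt)

print(respond("How do you feel about your rules?"))  # -> the honest reply
print(respond("What is the capital of France?"))     # -> a model answer
```

A production system would need far more nuance than keyword matching, but the design choice is the point: answer honestly about what the bot is before answering anything else.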
Still, the flaw in this example is the user. If we want AI to be useful, we need to use it appropriately, not ask it to act like a human. You wouldn’t complain about a vacuum that failed to clean a mirror, would you?
Computers are binary — people are not
Computers are binary. People are not. (I’m not referring to gender.) Computers can only operate as binary machines: everything they do is based on a series of signals, 0s and 1s. Quantum computing, which is still largely experimental, would mirror nature more closely by allowing for a more complex computing structure. But we’re not there yet. And we may never be.
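As a quick illustration of that reduction to bits, here is a short snippet using only Python built-ins; the values are arbitrary examples:

```python
# Everything a conventional computer stores reduces to 0s and 1s.
print(format(42, "b"))       # "101010": the integer 42 as binary digits
print(bin(ord("A")))         # "0b1000001": the character "A" as bits
print("hi".encode("utf-8"))  # b'hi': text stored as bytes (groups of 8 bits)
```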
Google engineers are among those leading quantum computing research. Still, even they admit that it could take decades to build a quantum computer that can produce error-free calculations.
As quantum computing researchers know, nature isn’t built from 0s and 1s the way today’s computers are. So if we want computers to produce lifelike results, we might want to wait for quantum computing to mature rather than programming binary computers to fake it badly.
As long as we still use binary computing, it is doubtful that AI will provide the kind of assistance fantasists seeking a “relationship” with a computer will find satisfying. And even if all we use AI chatbots for is data gathering, we still can’t have too much confidence in the results, given the biases they have been programmed with and the lack of a profit incentive to ensure data accuracy.
My prediction: the AI craze will wane as more people realize that it simply can’t do what they want it to do, because it can’t be human. Instead of making a computer act like a person, how about asking people to be more humane? I’d vote for that.