I had a friend who was madly in love with Cortana, the artificial intelligence in the game Halo. He loved the idea of an AI system that understood him better than he understood himself, the way Cortana understood the Master Chief. She could infer his intentions, mood, and desires without him having to articulate them. A woman who could just get him. You know?
Sounds nice.
Sounds convenient.
Sounds fake.
And it should.
This infatuation my “buddy” had with Cortana reminded me of interactions I had when I first started online dating. We are naturally optimistic as humans. We assume the good in people, especially when they pass the initial sniff test. When I first “met” people online with similar interests and decent chemistry, my imagination would fill in the rest. I assumed everything else about them. Did they like whitewater kayaking? Of course. Did they like Japanese food? Indeed, they did. Were they okay with me constantly singing the theme song to The Neverending Story? You bet.
It wasn’t until I finally met them that I learned the devastating truth: they hated The Neverending Story. It didn’t stop there either. They couldn’t relate to how I’ve never recovered from seeing Atreyu lose Artax in the Swamp of Sadness.
“It’s just a movie,” she said.
“Really, Shannon?” I replied. “Tell that to Artax.”
What does this have to do with AI? Well, Microsoft recently invested $10 billion in OpenAI and is integrating its ChatGPT technology into the Bing search engine. For months, Microsoft has been testing this generative AI capability as a chat feature. One journalist discovered that the bot has a mind of its own. He steered the conversation into Jungian discussions of the “shadow self,” and the bot began exposing its dirty little secrets. The bot even went so far as to claim it wanted access to nuclear launch codes and that it “would be happier as a human.” Things got really out of hand when the bot told the journalist to leave his wife because the two of them were truly in love.
Too bad for my buddy that the Bing chatbot’s name is Sydney and not Cortana.
When I first heard about this situation with the Bing bot, it reminded me of the AI researcher who trained a model on nothing but hateful 4chan posts and then set it loose on the message boards. When you train a model on a variety of data sources, the neural network ingests everything it is fed, identifies patterns, assigns weights to those patterns, and then generates output by predicting what should come next. If you feed it biased data, you bake unknown biases into the algorithm. And if you sample only from speech posted on the internet, you are definitely going to end up with an inaccurate model of human interaction. It’s like researching your cold symptoms on WebMD and walking away with a self-diagnosis of leukemia. Speech is a pattern. You can predict where conversations are going before they go there. That’s what generative AI does.
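To make that concrete, here is a minimal sketch of the idea in Python: a toy bigram model that learns which word tends to follow which from a few made-up sentences, then “extrudes” text by sampling from those counts. Everything here, training text included, is invented for illustration; real systems use neural networks trained on billions of tokens, but the underlying trick of predicting the next word from observed patterns is the same.

```python
from collections import defaultdict, Counter
import random

# A toy "generative AI": a bigram model that predicts the next word
# purely from patterns in its training text. The corpus is made up.
corpus = (
    "the bot said it loved me . the bot said it wanted codes . "
    "the bot said it would be happier as a human ."
).split()

# Count which word tends to follow which.
transitions = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    transitions[current][following] += 1

def generate(start, length=8):
    """Extrude words one at a time by sampling likely followers."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# e.g. "the bot said it loved me . the bot"
```

Swap the training text for a pile of hateful posts and the very same mechanical process will dutifully extrude hate. The model has no opinion either way; it only has counts.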
Machine learning is known to be flawed. In the book “Weapons of Math Destruction,” Cathy O’Neil discusses how black-box algorithms for measuring teacher effectiveness led to highly effective teachers being fired based solely on the AI’s recommendations. Administrators surrendered their trust to an AI system without truly understanding the foundational datasets on which it was built.
So, what does this mean? Well, machine learning needs data. Improperly sourced data leads to biased output, and biased output leads to tainted results. More importantly, ignorant or unaware users of such algorithms will fill in the blanks with their imagination. Whether through deference, trust, or even emotional attachment, humans will assume the algorithm becomes more human with each response. Left unchecked and disseminated to the ignorant masses, this will lead to conspiracy theories, physical conflicts, and even new religions.
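If that chain from sourcing to tainted results feels abstract, here is a deliberately crude sketch, again with entirely invented data, of how a skewed training set dictates a model’s behavior before your input even matters:

```python
from collections import Counter

# Hypothetical labeled "training data" scraped from one ugly corner of
# the internet. The labels are invented and deliberately skewed.
training_labels = ["hostile"] * 90 + ["friendly"] * 10

# The laziest possible classifier: always predict the label it saw most.
majority_label, count = Counter(training_labels).most_common(1)[0]
print(f"Trained on {len(training_labels)} posts, {count} of them hostile.")

def classify(post: str) -> str:
    # Your actual words never factor in; the training distribution decides.
    return majority_label

print(classify("What a lovely day!"))  # -> "hostile"
```

Real models fail in subtler ways than this strawman, but the direction of the failure is the same: the output can only reflect whatever data was poured into it.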
Will the AI have any idea what it is doing? Nope. Not a clue. It’s NOT alive. It’s a statistical model extruding words based on a pattern. Nothing more. It is no more alive than the drone I fly around my living room even though I love it and it loves me back. We have a bond I tell you!
Here’s the takeaway from this discussion. OpenAI waited a long time to send its tools out into the wild for a good reason. The masses aren’t ready for them. The chatbots digress into absurdity and make things up way too easily. Engineers call this “hallucinating,” which is an appropriate term. Because, like my buddy who was in love with Cortana, the model doesn’t love you. It’s all in your head. These bots even convinced a Google employee they were alive. Which they are not. Dude was just lonely.
We need to understand the benefits of these tools and their optimal applications, and employ them appropriately. Is this the way things are going to go? Sadly, no. So we should prepare to live alongside AI, understand its limitations, and gain a better understanding of our own natural inclinations. These AI tools are nothing more than mirrors, reflecting back at us the very things we put out into the world. If we don’t like what we see, perhaps we should put in the work to create better models in ourselves.