Robots and Gingers

Should we fear a robot uprising? Only if you're afraid of mutant redheads taking over Mars.

For the past six years, I helped develop advanced warfighting technology and concepts for the military. As part of these efforts, I worked heavily in artificial intelligence. When I mentioned this to people, someone would always ask if I was changing the future by developing Skynet—the self-aware artificial intelligence from the Terminator movies. At first, I kinda liked that idea. Not the killer robot part. But the changing-the-world piece. Autonomous robotics has the potential to save a lot of lives if implemented correctly. If I played my cards right, someone might even come from the future to eliminate me. That's when I'll really know I've made it.

Don’t blame us robots. We didn’t start the fire…

After a while, though, as I began to understand how AI and autonomous robotics work, I found the concept of Skynet implausible, but not for the reasons you might think. It really came down to understanding the difference between two things: general AI and narrow AI.

General AI is what probably pops into your head when you imagine AI: a self-aware, synthetic life form that lives within the silicon walls of every microchip on earth, waiting at the ready to punish humankind for our arrogance. As lovely as that sounds, general AI is problematic. And by problematic, I mean the idea that general AI could evolve from software on a server into something that takes control of robots to kill people is, well, nonsensical. Why? Because worrying about self-aware machines is like worrying about overpopulation on Mars: it may be a problem someday, but not in any of our children's, children's, children's, children's lifetimes.

Allow me to explain.

For general AI to, let's just say, evolve, you need more than a big computer. The human brain can store up to 2.5 petabytes of data. That's about 300 years of continuous video. But our brains are not just data storage. They are constantly processing, aggregating, and rendering information. And, believe it or not, that processing takes place at a micro level in each of our cells, not just in our brains. Everything in our bodies is programmed by our DNA to interact and respond in ways that maintain homeostatic balance. Robots are no different, and it all starts with some core functionality.
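
Quick gut check on that factoid, for the skeptics. The numbers below are my own back-of-the-envelope math, not from any neuroscience paper:

```python
# Sanity check: does 2.5 petabytes really hold ~300 years of continuous video?
PETABYTE = 10**15                      # bytes
SECONDS_PER_YEAR = 365 * 24 * 3600    # ~31.5 million seconds

storage = 2.5 * PETABYTE               # claimed brain capacity, in bytes
duration = 300 * SECONDS_PER_YEAR      # 300 years of footage, in seconds

bitrate = storage * 8 / duration       # implied video bitrate, bits per second
print(f"Implied bitrate: {bitrate / 1e6:.1f} Mbps")  # ~2.1 Mbps, roughly SD video
```

So the claim checks out, as long as the video is standard definition. Your brain is a VHS tape, not a 4K stream.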

On robots, software and algorithms create skills. And the algorithms behind these skills are often referred to as “abstraction layers.” Our human bodies, if broken down into abstraction layers, consist of millions of interactions, taking place constantly, in a perpetual dance that responds to contingencies in our environment.

As a simple example: for a robot to detect an obstacle and avoid it, one abstraction layer must be able to "see" the obstacle via a camera, lidar, sonar, etc. Using a different fancy algorithm, the next abstraction layer must determine the flight path so the robot doesn't hit anything. None of this has anything to do with why the robot is moving in the first place. That's a separate abstraction layer—code that relies on a healthy ability to adapt to contingencies.
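
For the nerds in the back, here's a toy sketch of how those layers might stack in software. This is my own illustration, not code from any real drone, and every name in it is hypothetical:

```python
# Toy sketch of stacked abstraction layers on a drone.
# Each layer only knows about the layer directly below it.

from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float
    bearing_deg: float

class PerceptionLayer:
    """Layer 1: 'see' obstacles via camera, lidar, sonar, etc."""
    def detect_obstacles(self) -> list[Obstacle]:
        # A real system would fuse raw sensor data; we fake one obstacle.
        return [Obstacle(distance_m=4.2, bearing_deg=10.0)]

class NavigationLayer:
    """Layer 2: plan a flight path that avoids what layer 1 saw."""
    def __init__(self, perception: PerceptionLayer):
        self.perception = perception

    def next_heading(self, goal_bearing_deg: float) -> float:
        for obs in self.perception.detect_obstacles():
            if obs.distance_m < 5.0 and abs(obs.bearing_deg - goal_bearing_deg) < 20.0:
                return goal_bearing_deg + 45.0  # steer around it
        return goal_bearing_deg

class MissionLayer:
    """Layer 3: *why* the robot is moving at all. Knows nothing about sensors."""
    def __init__(self, nav: NavigationLayer):
        self.nav = nav

    def step(self) -> float:
        goal = 0.0  # fly north, say
        return self.nav.next_heading(goal)

heading = MissionLayer(NavigationLayer(PerceptionLayer())).step()
print(f"Commanded heading: {heading:.0f} degrees")
```

Notice the separation: the mission layer can't reach down and rewire perception, and perception has no idea a mission exists. That wall between layers matters for what comes next.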

Do you see where I'm going with this? In our bodies, cells, nerves, and molecules coordinate in an extremely complicated series of abstraction layers that evolved over millions of years through trial and error. Deviations over time came in the form of mutations. The vast majority of these mutations are benign. Some turn malignant, like cancer. In extremely rare cases, they're beneficial.

A good example is my buddy's kid. Born with an overactive adrenal gland, my friend's redheaded son, let's call him Flash, has adrenaline constantly coursing through his veins. Before his diagnosis, Flash's parents figured he was just hyper. Sure, his pupils were constantly dilated. But, hey, most kids with sugar in their system look like that. So Flash's parents didn't think much of it.

That all changed one day when they took him to a party at a friend's house. Flash chose to spend the entire time literally running circles around a trampoline. Non-stop. Nearly two hours later, he was still running at a full sprint. Not normal. I mean, why wasn't he jumping on the trampoline? Why wasn't someone throwing him a frisbee to catch in his mouth like an Australian Cattle Dog? The answer: ol' boy is a mutant. Call Professor Xavier.

In the same way that Dean Karnazes, the famous ultramarathon runner, reportedly doesn't accumulate lactic acid and thus his muscles never tire, Flash benefits from a happy mutation. When Flash's parents took him to the doctor, instead of describing the usual side effects of a hormonal imbalance, the doctor said, "Well, it appears you've created a super kid." Flash now takes medication to keep his adrenaline in check, but whenever he wants to unleash the Hulk, or the Flash in his case, all he has to do is miss a dose, and ol' boy is off, ready to tackle any lawn with a trampoline.

See? Happy mutation. Does this happen with algorithms in computers? Sure, but not the way you think. Mutations in code never move beyond the constraints of the algorithm; those algorithmic barriers remove the possibility of that type of open-ended evolution. Anything resembling it can only occur when you start stacking abstraction layers. And, most likely, those anomalies cause catastrophic failure. Why? Because stacking abstraction layers is really frickin' hard. The only way to account for contingencies is through trial and error, which takes time. Sometimes you discover happy accidents, which allow developers to shift their work in more efficient directions. In our case, as human organisms, it took millions of years. AI won't take that long, but it won't be as quick as we think, either.
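
Here's what a "mutation" actually looks like inside an algorithm. This is a toy evolutionary search of my own invention, not anyone's production code. The point to notice: every mutation gets clamped right back inside the bounds the programmer wrote.

```python
import random

# Toy evolutionary search: evolve a number toward a target.
# "Mutations" are random nudges, but they can never escape
# the bounds the programmer set. No Skynet in here.

LOWER, UPPER = 0.0, 100.0   # the hard constraints of the algorithm
TARGET = 42.0

def fitness(x: float) -> float:
    return -abs(x - TARGET)                   # higher is better

def mutate(x: float) -> float:
    child = x + random.gauss(0, 5.0)          # random mutation...
    return min(max(child, LOWER), UPPER)      # ...clamped to the constraints

random.seed(1)
population = [random.uniform(LOWER, UPPER) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                # selection: keep the fittest
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(f"Best after 50 generations: {max(population, key=fitness):.2f}")
```

The occasional lucky mutation helps the search, but the search only ever explores the space it was given. That's the difference between "evolving" and evolving.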

On one of the very first combat operations where we integrated an AI drone, the system began performing erratically. We jokingly called this becoming "self-aware." Was the drone self-aware? No. One of the sensors had failed, and the drone just flew off into the hot desert night—like Aladdin on a quad-motored magic carpet. It didn't hurt anyone. It didn't hijack a short bus or start a Ponzi scheme. It didn't establish a tyrannical dictatorship in Turkmenistan. It just flew until the battery died. Then it crashed. Spectacularly.

Sarah Connor would have been pleased.

So, the real issue with the development of autonomous robotics and artificial intelligence is not the creation of general AI. It's the other kind, narrow AI, that offers the greatest benefit, and poses the greatest potential problems, for humanity in the near term. Narrow AI is each of those individual abstraction layers I mentioned earlier, augmenting and automating various functions to dramatically enhance human capacity. This is how AI will augment our future reality: facial recognition software for security, collision avoidance for self-driving cars, natural language processing for automated translation. This is where we should really educate ourselves, instead of anthropomorphizing quadruped robot dogs while we sharpen our pitchforks in preparation for their uprising.

Narrow AI can create bots that simulate human speech, generate celebrity deepfake videos, and crush humans at StarCraft. Why? Because the abstraction layer is confined to a strict set of rules. With access to a lot of data, the algorithm can iterate, becoming a reliable tool that surpasses human capacity. Left unmonitored, these algorithms can also produce adverse results. They can spread misinformation, manipulate elections. You name it.
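
Here's that confinement in toy form. This is my own minimal sketch, assuming nothing about any real system: a "narrow AI" that iterates over data to get very good at exactly one thing, and is structurally incapable of doing anything else.

```python
import random

# Toy narrow AI: learn to separate two clusters of points.
# With enough data and iteration it gets very good at this
# one job. Ask it to do literally anything else and it can't.

random.seed(0)
data = []
for _ in range(200):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    data.append((x, y, int(y > x)))    # label: is the point above the line y = x?

w1, w2, b = 0.0, 0.0, 0.0              # a single linear unit: the entire "mind"

for epoch in range(20):                # iterate over the data...
    for x, y, label in data:
        pred = int(w1 * x + w2 * y + b > 0)
        err = label - pred
        w1 += 0.1 * err * x            # ...nudging weights toward fewer errors
        w2 += 0.1 * err * y
        b += 0.1 * err

accuracy = sum(int(w1*x + w2*y + b > 0) == lbl for x, y, lbl in data) / len(data)
print(f"Accuracy on its one narrow task: {accuracy:.0%}")
```

Scale this basic idea up a few billion parameters and you get the deepfakes and the StarCraft champions. What you don't get is a mind. The rules of the game are baked in before the learning ever starts.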

So, how do we stop the robot uprising? It starts with the technology you interact with every day. Any time you open a social media app, talk to Siri, or make a TikTok video, you are feeding an algorithm, and it is learning. It isn't "thinking." It is optimizing for a desired outcome. The real question is: what is the desired outcome? That's up to you. I can tell you one thing it's not: red-headed mutant children taking over Mars.

J.L. Hancock

Drawing from a graduate level education in national security studies, foreign language expertise, and experience as a technician embedded with special operations forces, J.L. Hancock writes fiction that reflects the complexities of the modern world. His eye for detail and authentic narrative is rooted in the many lives he has lived, the worlds he has seen, and the people who inspire him.