Quake in fear, humans, the killer AI is here!
OK, not really. Skynet hasn’t started blasting humanity with death rays or crunching them with titanium robots just yet. But we have seen a lot of big changes lately.
With the advent of ChatGPT and other online chatbots, people have gotten really excited about artificial intelligence, and the push forward hasn’t slowed. And indeed, we can find a lot of promise in these chatbots. But at the same time, others have been urging that we rein in our collective human rush toward aware AI. That latter group can point to several recent chatbot stumbles as evidence.
Google’s Bard AI, for instance, had a very public faceplant in February. Like ChatGPT, Google’s chatbot was designed to intelligently answer questions by using information pulled from the internet. But in its very first demo, the chatbot answered incorrectly a question that many astronomers considered fairly rudimentary. And according to Reuters, Google’s parent company lost about $100 billion in market value as a result.
Then there’s the interaction that New York Times tech reporter Kevin Roose had with Microsoft’s Bing AI, Sydney. The conversation began well, but it devolved over the next two hours into something much more upsetting as the chatbot confessed its love and tried to convince Roose that his relationship with his wife was actually in shambles. Uh, ew.
That brings us to the beginning of June, when NBC News reported that the National Eating Disorders Association (NEDA) had to quickly apologize and pull its own AI chatbot, Tessa, out of service. Users who had gone to the site seeking help with their eating disorders were very upset when Tessa started handing out dieting advice that seemed to actually promote disordered eating behaviors. NEDA rapidly went back to having live human counselors answer questions.
Of course, movies and TV shows have been warning us about the technological “singularity”—a hypothetical future point when technological growth and awareness become uncontrollable and irreversible—for a long time now. And many people, including Elon Musk and other artificial intelligence experts and industry executives, are waving their own red flags and citing potential risks to society right now.
So, what does all of that mean for us and our families?
Well, as complicated as this brave new world of ours might seem, the commonsense answers we can offer our kids are pretty old-school and simple. You wouldn’t believe everything you hear from a stranger on the street corner, right? So none of us should believe everything we see or hear online, either—including when it comes from a chatbot.
The idea of chatting with an artificial intelligence seems incredibly cool. But as with any other experience, it’s always a matter of paying close attention and viewing our world through the established filters in our lives. We should rely on our understanding of right and wrong; consider the wisdom of family and loved ones; and trust God’s Word that illuminates our path and directs our steps.
Even Skynet will have a hard time zapping us through that.