Predictions about artificial intelligence range from apocalyptic scenarios to utopian fantasies, yet both extremes miss a critical lesson AI can teach us.
Asking an artificial intelligence chatbot if you should fear AI feels like asking the fox if you should worry about him guarding the henhouse. You would expect the answer to be a resounding, “No, of course not!”
But that is probably not what you will get. Most AI chatbots seem surprisingly aware of the potential problems they bring. Perplexity.ai, a publicly available AI, says: “Concerns about artificial intelligence (AI) are increasingly prevalent, with many individuals expressing fears regarding its impact on jobs, decision-making, and societal norms.”
There is even a term for this: AI anxiety. Although no single definition has gained widespread acceptance, AI anxiety encompasses a spectrum of apprehension about artificial intelligence in society.
International professional services firm Ernst & Young found in a 2023 business survey that 71 percent of employees had concerns about AI. Nearly three-quarters of those surveyed worried about how AI could negatively impact their incomes, while about two-thirds worried about losing promotions or falling behind because they did not use or understand the technology.
A study by the meditation and mindfulness app Calm found that nearly a third of adults are anxious about AI, with 18 percent of respondents feeling fear or dread. The same study also found that 21 percent were excited or optimistic about how AI could impact the world and society.
Amid all the world’s armed conflicts, environmental and ecological changes, and political and religious extremism, do you need to add worrying about artificially intelligent robots to the list? Even if governments can control AI, what is to prevent someone with ill intent from releasing a malignant AI into the wild?
How much should you really fear the rise of AI?
What Is It?
Much of the concern comes from the many news stories about generative AI. The most well-known model is ChatGPT. The name comes from Chat plus “Generative Pre-trained Transformer.”
ChatGPT takes a human-supplied prompt and generates a unique answer. Generative AI models are pre-trained on vast amounts of data from sources such as web pages, news stories and forums. Because they are trained on such enormous bodies of text, they are called large language models (LLMs), and they are capable of responding to user input in a conversational manner.
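For readers curious what “taking a prompt and generating an answer” looks like in practice, here is a minimal sketch in Python using OpenAI’s published client library. The model name is an assumption chosen for illustration; any current chat model would behave the same way.

```python
# A minimal sketch of prompting an LLM via OpenAI's Python client.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, for illustration only
    messages=[
        {"role": "user", "content": "Should I fear artificial intelligence?"}
    ],
)

# The model returns one or more candidate replies; print the first.
print(response.choices[0].message.content)
```

Everything a chatbot does ultimately reduces to this loop: text in, generated text out.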
However, AI encompasses much more than online chatbots. ColdFusion TV expands the definition of AI to any “machine or a computer program that learns how to do tasks that require forms of intelligence and are usually done by humans.”
Generative AI models typically take one input, such as text, image, video or audio, and generate something new. ChatGPT and several other AIs have also become multimodal, meaning they can take multiple input types and generate new text, images, sounds or video.
Based on most news reporting, it would be easy to think this is all there is, but AI encompasses much more.
Computer vision models are used in doorbell cameras to tell you a person is on your porch or allow self-driving vehicles to “see” the world. Speech recognition models transcribe spoken words into text, do real-time language translation and convert written text into speech.
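As a concrete illustration, an off-the-shelf vision model of the kind a doorbell camera might use can be run in a few lines with the Hugging Face transformers library. The image filename below is hypothetical, and the pipeline downloads a default pretrained classifier.

```python
# A sketch of running an off-the-shelf computer vision model using
# the Hugging Face "pipeline" helper.
from transformers import pipeline

classifier = pipeline("image-classification")   # downloads a default model
results = classifier("porch_camera_frame.jpg")  # hypothetical image file

for r in results:
    # Each result pairs a label (e.g. "golden retriever") with a confidence.
    print(f"{r['label']}: {r['score']:.2%}")
```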
Reinforcement learning models can adapt by interacting with their environment and are heavily used in gaming, robotics and autonomous systems such as self-driving cars.
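The trial-and-error loop at the heart of reinforcement learning can be sketched in a few lines, assuming nothing beyond the Python standard library. This toy agent learns, by repeated attempts, to walk a five-cell corridor toward a reward at the far end.

```python
# A toy Q-learning sketch: the agent learns which action (left or right)
# is best in each corridor cell by trial, error and reward.
import random

N_STATES, ACTIONS = 5, [-1, +1]           # corridor cells; move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1     # learning rate, discount, exploration

for episode in range(200):
    s = 0                                 # start at the left end
    while s != N_STATES - 1:              # until the goal cell is reached
        if random.random() < epsilon:
            a = random.choice(ACTIONS)    # occasionally explore at random
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])  # exploit best known
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Core update: nudge the value estimate toward the reward plus the
        # discounted value of the best action from the next state.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the learned policy is "move right" in every cell.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```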
AI models appear to be incredibly smart because they can collate disparate facts and knowledge from the datasets they were trained on to produce something new and interesting. Yet the word “appear” is key: AI does not think or understand in a human sense. Its primary function is to produce responses that match patterns and probabilities in the available information. When faced with insufficient or unclear information, these models are known to “hallucinate,” a polite way of saying “make stuff up.”
For this reason, nearly every AI company will tell you to verify what its model says.
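To see how far “patterns and probabilities” can go without any understanding behind them, consider a toy bigram model, sketched below with only the Python standard library. It predicts each next word purely from counts of which word followed which in its tiny training text, so it can fluently generate sentences that are false relative to its own training data, which is hallucination in miniature.

```python
# A toy bigram "language model": it generates text purely from how often
# one word followed another in its training text. It has no understanding,
# only pattern frequencies, so it can fluently assemble statements its
# training text never contained (e.g. "the dog ate the fish").
import random
from collections import defaultdict

training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the porch the dog ate the bone"
).split()

followers = defaultdict(list)            # word -> words seen after it
for w1, w2 in zip(training_text, training_text[1:]):
    followers[w1].append(w2)

word, output = "the", ["the"]
for _ in range(8):
    nxt = followers.get(word)
    if not nxt:                          # dead end: word never appeared mid-text
        break
    word = random.choice(nxt)            # sample in proportion to frequency
    output.append(word)

print(" ".join(output))
```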
To help me understand AI better, I asked Perplexity.ai some questions.
When asked what human age it compares itself to, it returned: “One might liken advanced AI capabilities to those of a highly educated adult—perhaps someone in their late 20s to early 30s—who possesses a vast amount of factual knowledge but may lack the emotional depth and experiential wisdom that comes with age.”
To be clear, AI cannot truly be compared to a 30-year-old. LLMs are just computer software.
Perplexity.ai made this clear: “If I were to choose an animal that reflects my emotional depth and experiential wisdom in a metaphorical sense…I might be compared to a parrot. Parrots are intelligent birds known for their ability to mimic human speech and respond to social cues. They can learn from their environment and interact with humans in engaging ways.”
As with parrots, artificial intelligence models can mimic human speech or art but cannot understand their meaning. While parrots can form bonds and exhibit limited emotional behaviors, AI systems can only mimic emotional interactions.
Most current AI models are narrow systems trained to do one task well. While the conversational skills of ChatGPT and other LLMs make it seem like they can do anything, they can only find patterns in language and return what they find.
British science-fiction author Arthur C. Clarke wrote in his essay “Hazards of Prophecy: The Failure of Imagination” that “any sufficiently advanced technology is indistinguishable from magic.” Artificial intelligence currently fits this description. It is more advanced than what we are used to, but it is still just a technology.
What Is the Worst that Could Happen?
Hollywood has done a good job of stoking fear of AI with movies such as The Terminator and Ex Machina. In these scenarios, a very advanced artificial intelligence escapes human control, with apocalyptic consequences in the former film and an unknown future in the latter.
The concept of one of man’s creations becoming sentient is fascinating and terrifying. Could something we create become more intelligent than us and eventually try to replace us?
Despite the vivid scenarios depicted in these movies, the likelihood of a world-ruling AI is remote. For there to be a genuine threat, scientists would need to advance AI technology to an artificial general intelligence (AGI), something currently well outside the realm of possibility.
IBM.com defines AGI as “a hypothetical stage in the development of machine learning (ML) in which an artificial intelligence (AI) system can match or exceed the cognitive abilities of human beings across any task. It represents the fundamental, abstract goal of AI development: the artificial replication of human intelligence in a machine or software.”
Yet the leap from our current technology to the technology needed to build Skynet from The Terminator is staggering. Creating a central brain with helper robot drones would require a prohibitive number of processors, both CPUs and GPUs, along with vast storage and memory. These robots and the central brain would also need neuromorphic hardware that replicates how the human brain works. While research in this field continues, nothing even close currently exists.
One current cause for concern is the use of AI in weapons of war. In the Israel-Hamas conflict, the Israeli army used weapons with built-in AI designed to make them two to three times more lethal.
Ukrainian startups are using AI to power vast swarms of drones to combat signal jamming by the Russians and enable groups of drones to work together. One system under development would allow swarms of different kinds of drones to work together for surveillance and payload delivery, making decisions on their own but waiting for a human to approve any strikes.
While a human operator has difficulty managing more than five drones, a single AI system could potentially manage hundreds. AI would allow drone operators to pull back from the front lines, saving friendly lives while more effectively killing the enemy.
The current state of military AI still requires a human to greenlight a kill. Yet, with the way the world is, some person or nation could potentially build an automated killing machine and release it. However, without an autonomous support system that can keep it powered, serviced and supplied with ammunition, the idea of a sweeping wave of unfeeling machines destroying everyone in their path still belongs only in the movies.
Five Concerns
When asked about the scariest aspects of AI, xAI’s Grok 2 model, available through x.com, gave five potential concerns about the technology.
Job displacement ranks first. AI shines by automating repetitive tasks that humans currently do. When these tasks involve risks to people, switching to automated systems makes those processes more efficient and safer but costs people their jobs. AI has the potential to disrupt entire job categories.
However, up to this point, every major disruptive technology has created more jobs than it destroyed. The new jobs usually require higher skills and subsequently have higher pay. A 2020 report from the World Economic Forum estimated that AI and automation would eliminate 85 million jobs by 2025, but would create 97 million jobs in the same timeframe. The transition from manual to automated workforces will be difficult in specific locales as companies and industries that cannot adjust to AI close or change. Still, the overall benefit to jobs and the economy outweighs those difficulties.
Secondly, Grok points to the existential risk of an AI gaining sentience and surpassing humanity. As previously stated, building a true AGI poses complex challenges that we are nowhere near overcoming. AI also operates within specific parameters, which serve less as guardrails to keep it on track and more as hard boundaries on where it has any autonomy at all. Current AI systems cannot self-evolve or develop consciousness.
Grok’s third scary concern is misinformation and deepfakes. AI’s ability to create believable content could impact elections, sway public opinion or destroy reputations. However, manipulation like this has been happening for a long time.
Thankfully, as digital manipulation improves with AI, so do the tools to spot and combat it. The annual Consumer Electronics Show, scheduled for January 2025, has a talk planned about using AI to fight deepfakes, disinformation and misinformation. The ongoing AI arms race between bad actors and defenders of truth will continue, with the likely outcome being a stalemate.
The fourth aspect that Grok pointed out is privacy. AI systems require large amounts of data for training and could be vulnerable to breaches unless properly secured. Some people also worry about the privacy implications of using that data, as well as copyright infringement. However, as defensive AI improves, its ability to encrypt sensitive data and to detect anomalies, such as hacking attempts against corporate systems, should protect our data more than expose it.
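As one small illustration of the encryption half of that defense, here is a sketch using the widely used Python cryptography package. The record contents are hypothetical.

```python
# A sketch of encrypting a sensitive record with symmetric encryption,
# using the "cryptography" package's Fernet recipe.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, kept in a secure key vault
cipher = Fernet(key)

record = b"name=Jane Doe; id=0000"   # hypothetical sensitive data
token = cipher.encrypt(record)       # safe to store or transmit

print(token)                     # unreadable without the key
print(cipher.decrypt(token))     # original record, recovered with the key
```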
Finally, Grok points out AI’s propensity for bias and discrimination. Any intelligence, artificial or natural, will have biases based on how it learned. Education and interaction with different kinds of people help erode biases in humans, but AI can amplify whatever patterns of belief exist in the datasets it learned from. There is a growing emphasis on ethical AI development, including the use of more diverse datasets.
Bias is probably the strongest reason to have AI anxiety.
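A toy illustration of how amplification happens: the trivial “classifier” below simply learns the majority label seen for each word in a deliberately lopsided dataset, then confidently repeats that skew on every new input. The words and labels are invented for illustration.

```python
# A toy illustration of dataset bias: a trivial "classifier" learns the
# majority label for each word, then repeats that skew without exception.
# The dataset below is deliberately and artificially lopsided.
from collections import Counter, defaultdict

training_data = [
    ("engineer", "male"), ("engineer", "male"), ("engineer", "male"),
    ("engineer", "female"),                     # underrepresented
    ("nurse", "female"), ("nurse", "female"), ("nurse", "male"),
]

counts = defaultdict(Counter)
for word, label in training_data:
    counts[word][label] += 1

def predict(word):
    # Always returns the majority label: a 75/25 skew in the data
    # becomes a 100/0 skew in the model's answers.
    return counts[word].most_common(1)[0][0]

print(predict("engineer"))  # "male", every single time
print(predict("nurse"))     # "female", every single time
```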
Mirror of Ourselves
AI reflects humanity. You could think of it as a mirror. It starts by reflecting its creators—the developers who generally want to make the world a better place, but are biased towards science and logic as the solution to all the world’s problems.
It then reflects the biases of the data used to train it. An AI trained on Stalinist writings may suggest starving millions of people to teach them a lesson, while an AI trained on Mahatma Gandhi’s writings would advocate for a peaceful resolution to conflict. An AI trained solely on Russian culture would have a very different set of biases from one trained on Chinese, German or Israeli culture.
Another aspect of a mirror also comes into play: its size. A mirror can only reflect a small portion of everything a person can see. Similarly, AI can only reflect a small portion of what humanity is.
AI is ultimately a smaller, less-developed version of our collective selves.
We are not the first ones to have the goal of creating something like ourselves. In Genesis 1:26-27, God said, “Let Us make man in Our image, after Our likeness…So God created man in His own image, in the image of God created He him; male and female created He them.”
We are very similar to God physically, mentally and emotionally. But just as the AIs we create are smaller, less well-formed versions of us, each of us is a lesser version of God.
For instance, God tells us that, “For as the heavens are higher than the earth, so are…My thoughts than your thoughts” (Isa. 55:9). That should be easy to understand. The mind that created the beauty and complexity of the universe far surpasses our own.
But humanity as a group has achieved amazing things. There was a time when unchecked ingenuity and industriousness gave God reason for concern. After Noah’s Flood, as people began to replenish the Earth, they decided to build a tower to reach the heavens: “And the Lord came down to see the city and the tower, which the children of men built. And the Lord said, Behold, the people is one, and they have all one language; and this they begin to do: and now nothing will be restrained from them, which they have imagined to do” (Gen. 11:5-6).
God then changed people’s languages and sent them around the globe to prevent man from devolving into the worst version of himself.
AI does have the potential to unite people and cultures on a scale not seen since the Tower of Babel. However, achieving this would require a supercomputer as large as the entire internet to train the necessary model. Storing the trained model alone would demand as much storage as all the laptops, desktops, phones, data centers and storage systems in the world combined. Operating such a system would require fusion energy to meet its immense power needs. Even if we could overcome these challenges, the result would still only be a reflection of humanity, not something greater.
AI excels at tasks requiring speed, precision and data analysis. However, true creativity, intuition, ethical judgment, emotional intelligence, adaptability, context awareness and experiential wisdom will remain the domain of humans.
AI’s greatest utility comes when it is paired with a person. This involves collaborative problem-solving, where the AI can find patterns that the human can use to make better decisions and drive innovation.
Similarly—and to an infinitely greater degree—people’s minds work better when paired with God’s mind. Paul told the Philippians to “Let this mind be in you, which was also in Christ Jesus” (Phil. 2:5).
Imagine being able to gain insight from the mind that knows great things like “the number of the stars [and] calls them all by their names” (Psa. 147:4) while knowing the smallest things like the number of hairs on your head and when a single sparrow falls to the ground (Matt. 10:29-30; Luke 12:6-7)!
When David said, “for I am fearfully and wonderfully made” (Psa. 139:14), this is most true of the human mind! Yet God, in His wisdom, left the human mind incomplete, requiring His Spirit to unlock its full potential.
Without this vital connection, mankind cannot fulfill its awesome purpose.
Return to the question, “Should I be worried about AI?” The simple answer is no. AI is merely a tool in the hands of people. While a few may misuse it for harmful purposes, most will use it to enhance their lives and work.
Such worries can be overcome through God’s Holy Spirit. II Timothy 1:7 states: “For God has not given us the spirit of fear; but of power, and of love, and of a sound mind.”
To learn more about this incredible potential and God’s Spirit, read our booklet What Science Will Never Discover About Your Mind.