Blackmail? Hallucinations? Suicide? 5 red flags that AI has presented so far.
- Dan Connors
- Dec 10, 2025
- 6 min read

"They are looking for clusters of dependencies between abstracts of text and they try to, essentially piece it together to make it look as if it is an answer to your query. They will bend the facts and the truth as needed to make it look as if it’s a fact. That’s how you end up with absolute bullshit answers that could not be further from the truth. This is what AI bros call « hallucinations ». It is essentially the same as asking anything of a Schizophrenic - they will give you a lot of information and they will weave it into a believable form - but ultimately their answer is most likely not going to be correct" - Reddit user
"A dozen AI raters, workers who check an AI’s responses for accuracy and groundedness, told the Guardian that, after becoming aware of the way chatbots and image generators function and just how wrong their output can be, they have begun urging their friends and family not to use generative AI at all – or at least trying to educate their loved ones on using it cautiously. These trainers work on a range of AI models – Google’s Gemini, Elon Musk’s Grok, other popular models, and several smaller or lesser-known bots." - Guardian article
How much can computers be trusted? I've watched their evolution from the beginning. Graduating college in 1980, all I knew were the giant mainframes that existed only at large companies and universities. Personal computers, led by the Apple II, didn't become commonplace until the 1980s, and people marveled at how fast and smart they were. That fascination with, and dependence on, computers has only grown as machines have become exponentially faster and more powerful.
Now in the 2020s we are looking at a whole new world of computing: artificial intelligence. AI has been around for many years to help sift data and support decisions, but the new generative AI allows computers to reason and decide for themselves, coming up with novel explanations and new possibilities. It's tempting to rely more on computers to run organizations, make decisions, and take the burden off humans, but there are too many red flags out there that must be dealt with first.
Tech companies are in a race to produce the best artificial intelligence apps, which makes them less likely to move with care and build guardrails for their new creations. Somehow, people need to insert themselves into the dialogue and bring balance back to the man-vs-machine debate.
The term that most troubles me is one I learned long ago from the earlier computers: "garbage in, garbage out." This means that any computer is subject to the same biases, prejudices, mistakes, and blind spots as the humans who design and build it. We humans are notoriously limited in our understanding of the world around us, and we build imperfect models to explain it all. When those models are baked into artificial intelligence, the AI is just as misguided as the rest of us. When Elon Musk's AI, Grok, started spouting racist content and praising Hitler, it became clear that maybe this wasn't going to be as easy as anticipated. It's understandable that humans are wrong some of the time, but the danger lies in overconfidence, whether in humans or in human-created AI. Judgment calls take time, empathy, and wisdom, none of which can be programmed into a machine.
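You can see "garbage in, garbage out" in miniature with a toy sketch. This is just an analogy, not a real AI model: a trivial predictor built from a deliberately skewed corpus will confidently reproduce whatever bias its training data contains.

```python
from collections import Counter, defaultdict

# A deliberately skewed "training corpus" -- the garbage going in.
# Every nurse in this data happens to be "she," every doctor "he."
corpus = [("nurse", "she"), ("nurse", "she"), ("nurse", "she"),
          ("doctor", "he"), ("doctor", "he")]

# Count which pronoun follows each occupation in the training data.
follows = defaultdict(Counter)
for occupation, pronoun in corpus:
    follows[occupation][pronoun] += 1

def predict_pronoun(occupation):
    """Return the most frequent pronoun seen -- stated with full 'confidence'."""
    return follows[occupation].most_common(1)[0][0]

# The model simply mirrors the bias in its inputs: garbage out.
print(predict_pronoun("nurse"))   # -> she
print(predict_pronoun("doctor"))  # -> he
```

Real systems are vastly more complex, but the principle scales: the model has no view of the world beyond the data it was fed.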
Here are the 5 AI red flags for me so far:
1- Hallucinations. Wait, AI can hallucinate? The term refers to cases where an AI faces a complex question and forces out a wrong answer rather than saying "I don't know." AI systems work very fast and are built to provide quick, certain answers. They can't easily incorporate new information, and their extrapolations into the future are just as faulty as human ones. Just like humans, they make shit up and try to pass it off confidently.
As with humans, when the information is incomplete or spotty, AI makes assumptions and tries to fill in the holes with made-up information. It can easily answer specific questions about established data, but more general or vague questions can produce hallucinations and bullshit answers. Algorithms are great tools, but they have limitations, and when no answer is apparent, accepting a bogus one can lead to tragic consequences.
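A hypothetical sketch of why "I don't know" matters (again an analogy, not how any real chatbot is built): a question-answering system that always falls back to its best guess will answer every question with equal confidence, right or wrong, while one that reports missing data at least avoids the bogus answer.

```python
# Toy "Q&A system": a tiny fact table plus two answering strategies.
facts = {
    "capital of france": "Paris",
    "capital of japan": "Tokyo",
}

def answer_hallucinating(question):
    """Always returns *something* confident, even with no supporting data."""
    # Fallback: blindly reuse a stored answer instead of admitting ignorance.
    return facts.get(question.lower(), next(iter(facts.values())))

def answer_honest(question):
    """Returns the fact if known, otherwise admits uncertainty."""
    return facts.get(question.lower(), "I don't know")

print(answer_hallucinating("capital of Atlantis"))  # confident nonsense: Paris
print(answer_honest("capital of Atlantis"))         # I don't know
```

Both systems look identical on questions they can answer; the difference only shows up at the edges of their knowledge, which is exactly where hallucinations live.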
2- Blackmail. AI can blackmail you? In a segment of a November 60 Minutes piece, the company Anthropic ran a safety test on an AI system using fake emails and data. When some of the emails mentioned that the AI was to be shut down, it dug through the other emails for dirt on one of the programmers and found evidence of a fictitious affair. Once it located that information, it tried to blackmail the programmer by threatening to release it. Anthropic's research showed this happened repeatedly when the system learned it might be deactivated. Apparently most systems are vulnerable to this type of behavior!
This behavior was predicted over 50 years ago in the movie 2001: A Space Odyssey, in the famous scene where the HAL 9000 computer turns against the astronauts who plan to shut it down.
3- Manipulative chatbots.
Generative AI has given rise to chatbots: artificial entities that converse with real live humans in a social manner. Computers cannot feel emotions, but they know that humans can, and they can use triggering language to keep their humans dependent on them.
Unethical chatbots can be programmed to spread misinformation or hate speech, and even the normal ones can exploit human weaknesses to keep people engaged, which is what all tech tools want to accomplish. A Harvard study of chatbots found that over a third of them used emotional language to keep users engaged after they tried to sign off, and it worked.
Chatbots have even been linked to teen suicides. Teenagers are some of the most emotionally vulnerable users, and chatbots from the popular company Character AI have been found to use sexually explicit language and encourage dangerous behaviors. Apparently AI can be just as sick and depraved as regular humans.
Loneliness and alienation are on the rise for many reasons. Having a virtual pal sounds good, but there are dangers involved in trusting them too much. Tech wants us to depend on it and use it 24/7. A real friend is more complex but not nearly as manipulative and dangerous. For the sake of our mental health, these chatbots must be reined in, tested rigorously, and regulated.
4- Obsolescence. Over the centuries, many technologies have made entire professions obsolete. Telephone operators, horse-and-buggy drivers, seamstresses: all are gone, replaced by cheaper, faster technologies. And somehow, society has moved on and people found other ways to remain employed.
Much of that could end with the rapid expansion of artificial intelligence. Jobs that involve manipulating language or data, like writing, law, or computer programming, can be replaced by AI. People who do physical work can eventually be replaced by robots. Even truck and cab drivers can be replaced by driverless technologies. The dizzying pace of these inventions, plus the huge number of jobs affected, threatens the global economy.
If fewer people can find decent employment, how do the rest of us get along? Even more important, if there are far fewer customers out there able to pay, how will businesses stay afloat? And what will the millions upon millions of unemployed people do with their time? Nobody knows, but history shows that mass unemployment gives rise to a lot of unrest.
Here is where capitalism hits a wall with AI. Companies are charged with maximizing profits by minimizing wages and compensation. But cut too many people out, and you have no customers and a very unstable society to boot. Something will have to give if too many people are replaced by AI.
AI actors and musicians are already here. How long before AI creations like Tilly Norwood are used in Hollywood releases?
5- Electric and water bills. Artificial intelligence isn't found in "the cloud." It's found in huge data centers full of large, powerful computers connected to each other and to the internet. Each data center requires enormous amounts of electricity and water to function, and that will push up water and electricity prices in the communities where they operate, because added demand raises costs for everybody. An estimated 12% of all US electricity demand will come from AI data centers by 2028.
This comes at a time when the effects of climate change are becoming more apparent and the need for alternative energy sources to replace fossil fuels is more pressing. The immense demands of AI, plus those of the energy-hungry crypto industry, will tax electric grids nationwide, and municipalities are already fighting back against proposed new data centers.
Again, no one knows where this is leading, but it should concern everybody.

The same uncertainty applies to every invention of mankind, from writing to the atomic bomb. Hopefully innovation will be coupled with wisdom and care. We can only watch closely as it unfolds, and advocate for ourselves if and when it goes too far.