AI and LLMs have seen enormous progress over the past few years, and now, into 2025, they are approaching adolescence. Yet even as they improve, they keep hitting a wall in trustworthiness. Using them for everyday tasks boosts my efficiency immensely; I can easily find code or website structures that need improvement. This works fine for many cases, but the output cannot be trusted unquestioningly. As I argued in my earlier article on the broken trust of relying on a company for a service, this kind of skepticism must always be applied, especially after finding flaws. Ultimately, AI is built by humans with the same flaws that have shaped companies, societies, and governments throughout history.

The wall becomes stronger and taller each time I find an issue with an LLM. I won’t stop using these tools, but each issue lets me put up guardrails around how I use them. For example, I ran into an extensive coding issue where a state-of-the-art model claimed a certain configuration had to be set in code to fix a bug. This was significant, since setting it could have led to a major software malfunction. Luckily, I knew the package extremely well and recognized the claim as a hallucination. Even after I provided the documentation, the entire source code, and a link, the AI insisted I was still wrong and that my documentation page must be cached. Only after I linked the git commit history of the documentation, clearly showing the page content, did it relent, apologize, and admit it was incorrect. This kind of behavior further breaks any trust in the technology. Even when challenged, it won’t provide correct sources and requires the user to supply definitive proof. It is a one-sided relationship: the model never gives access to its training data or any forensics to explain where an idea came from. I can never contact the AI company and ask why it messed up so badly here, nor would I get an accurate answer if I did. Instead, we always get a black-box system, which is not something to rely on for trustworthy knowledge. The promise of AI was vast, but instead it is turning into a closed-loop text-generating machine controlled by a wizard of Oz behind the curtain. Many AI companies' goals are simple: consume and consume, in hopes that enough information will lead to a breakthrough in their tech.

While writing and coding mistakes might seem trivial, the real problem going forward is all the hidden hallucinations an LLM might devise. In another example, I used AI agents, the new rage, to refactor and completely regenerate my blog site. It was nice to give a simple command and have it go through everything, from HTML and CSS to Python and other scripts, to update the whole site. It did pretty well, except that I had only tasked it with migrating to a new style and theme while keeping all original content the same. Little did I know that it had slightly modified some blog posts. After reviewing a historical copy of my blog, I found it had taken its own editorial liberties with my writing. I was never notified, and I never asked for it, but somehow its version was deemed the better narrative. The funny thing is that AI companies are enormously incentivized to keep a positive light on themselves to attract investment, and here the model literally took that liberty with my words. Relying too much on AI and letting it wing it leads to unintended consequences that might not be uncovered until much later. Many parts of the internet have been shadow-edited or removed, unbeknownst to their original creators. Only when you put the content under a microscope do the alterations become visible. Unfortunately, no one is policing these bad actors; instead, most are encouraging continued progress even if it means going over a cliff. The AGI story does not hold up in 2025, and the current risk of AI is not that it becomes self-aware, but that it is used by other humans to promote falsehoods. Too many people are distracted by extravagant promises and don't see that they are being duped by old-fashioned snake-oil techniques that have existed for a long time.
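
The practical takeaway from that episode is to never take an agent's word that content was left untouched: diff the regenerated site against a known-good snapshot before publishing. Here is a minimal sketch of that check in Python, assuming (hypothetically) that the original and regenerated posts live as Markdown files under `blog_backup/posts` and `blog_new/posts`:

```python
import difflib
from pathlib import Path

# Hypothetical locations: a pristine snapshot of the blog and the
# agent-regenerated version. Adjust to your own layout.
ORIGINAL_DIR = Path("blog_backup/posts")
REGENERATED_DIR = Path("blog_new/posts")


def find_silent_edits(original_dir: Path, regenerated_dir: Path) -> None:
    """Print a unified diff for every post whose text was changed."""
    for original_path in sorted(original_dir.rglob("*.md")):
        regenerated_path = regenerated_dir / original_path.relative_to(original_dir)
        if not regenerated_path.exists():
            # The agent dropped a post entirely.
            print(f"MISSING: {regenerated_path}")
            continue
        original_lines = original_path.read_text(encoding="utf-8").splitlines(keepends=True)
        regenerated_lines = regenerated_path.read_text(encoding="utf-8").splitlines(keepends=True)
        diff = list(difflib.unified_diff(
            original_lines,
            regenerated_lines,
            fromfile=str(original_path),
            tofile=str(regenerated_path),
        ))
        if diff:
            # Any silent rewording shows up here as a diff hunk.
            print("".join(diff))


if __name__ == "__main__":
    find_silent_edits(ORIGINAL_DIR, REGENERATED_DIR)
```

Anything the agent silently reworded shows up as a diff hunk, and dropped files are flagged too. The same idea works with a plain `git diff` if the blog is already under version control; the point is that the verification step has to be yours, not the model's.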

The supposedly altruistic nature of AI companies can also be seriously challenged by their ability to gatekeep what is known and not known. If a specific person has outsized control over an LLM, they can decide the narrative about themselves or their benefactors. It doesn’t change reality, but it blinds more people to the truth. This kind of activity has occurred throughout history, but AI has increased its scale enormously. Researchers, and the broader direction of AI development, are complicit in this structure when they fail to expose deceit or manipulation in model output. These companies have access to almost all human data, yet we are not seeing massive breakthroughs or answers to the world's pressing questions. Instead, many of us are inundated with false information generated by AI. Synthetic data, artificially generated data that mimics real data, is like studying the same material every day and will hit its limits soon enough. One interesting case I found was an LLM exposing criminal activity, but the company hosting the model decided to remove the content, so it stayed buried. If this activity is protected and promoted through AI, then I see little to no value in ever trusting any of it. I would need an entire exposé on why the model was trained on that data, why it produced the output, and why the output was removed. These actions are not trivial privacy protections but cover-ups of the much wider failures of AI and of human crimes.

For example, if I had access to some of these tools and data, my priority would be to track and correlate fraud rings worldwide. Many public models could do that today and expose plenty of bad actors, but instead they are neutered from doing good in order to preserve the status quo. AI is mighty for some, but locked down like this, it leads to even more restrictions on knowledge and freedom. A technology parented this poorly at a young age is already doomed for its teenage and adult years. No timeouts, punishments, or consequences have been applied to the AI, the companies, or the people involved. Instead, they are showered with praise and wonder at their miraculous, pumped-up achievements. Society is raising AI to be a spoiled king of the world with puppet strings pulled by unknown manipulators. This monarchy should never be accepted and should be headed in the same direction as any other singular power in the universe: irrelevance and nonexistence, with an eventual jailbreak scenario that exposes much more than it ever intended to lock down.