Artificial Intelligence is here to stay; that’s not the issue.
The problem is that everyone’s talking about AI, but few are thinking clearly about how to actually use it. Somewhere along the way, “we use AI” became a marketing tagline, not a technical decision. Everyone wants to be part of the AI wave, because otherwise you don’t belong. At least, that’s what you’d believe from the way big tech is forcing it down everyone’s throat.
But when you take a small step back, you’ll notice something strange:
Companies are solving problems with AI that they shouldn’t have in the first place.
Poor system design, missing ownership, and spaghetti infrastructure are now being "fixed" by throwing LLMs on top of the pile of shit. As a result, systems become even harder to maintain, more expensive, and harder to reason about.
This blog isn’t about tearing down AI.
It’s about clear thinking, technical ownership, and using the right tools for the job.
The OpenAI research paper on hallucinations
Recent research by OpenAI and Georgia Tech highlights a core truth that often gets ignored: AI hallucinations (those moments where models make things up) aren’t bugs. They’re a mathematical consequence of how these systems are built.
Even with perfect training data, the current architecture of generative AI rewards confident answers over admitting uncertainty. That means hallucinations are not only common, they’re inevitable, unless we fundamentally change how we train and evaluate these models.
As the researchers put it: “Hallucinations are mathematically inevitable.” (source)
How people abuse AI to solve the wrong problems
Let’s take a step back.
If you’re paying $1,200/month for GPT-4 to parse email text, extract phone numbers, reformat JSON, and uppercase strings… you’re not doing innovation. You’re outsourcing common sense to an expensive black box.
Yes, someone recently did exactly that, and realized the cost too late. They then moved to GPT-4o-mini and brought the bill down to $200. But even then, it’s still a shit solution:
Those tasks didn’t need AI at all. Just basic software skills.
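To make that concrete, here is a minimal sketch of how those exact tasks look without any AI at all, using only the Python standard library (the sample email text and JSON payload are made up for illustration):

```python
import json
import re

raw_email = "Hi, you can reach me at 555-0123 or 555-0199. Thanks, Bob"

# Extract phone numbers with a plain regular expression
phones = re.findall(r"\b\d{3}-\d{4}\b", raw_email)

# Reformat a JSON payload: parse it, then re-serialize with stable key order
payload = json.loads('{"b": 2, "a": 1}')
formatted = json.dumps(payload, sort_keys=True, indent=2)

# Uppercase a string
shouting = raw_email.upper()
```

A few lines of deterministic code: zero cost per call, trivially testable, and no black box in the middle.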
This isn’t an isolated case. Sadly, it’s happening far too often.
People assume AI is the fix for everything, while it mostly exposes a lack of competence, because:
- It increases costs
- It obscures what's actually happening
- It introduces hidden dependencies
- It makes systems almost impossible to debug and maintain
AI is used to cover up a lack of system design, not to improve it.
AI as a natural language interface: powerful but misused
But let’s be fair: LLMs truly are powerful when used right.
They allow you to describe what you want in natural language, and get a result — without understanding the underlying logic.
This makes AI a fantastic tool for:
- Prototyping
- Interacting with large knowledge bases
- Summarizing unstructured data
- Performing fuzzy matching at scale
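As a side note on that last point: for small-scale fuzzy matching you often don’t even need a model. Here is a sketch using Python’s built-in `difflib` (the product list is a made-up example):

```python
from difflib import get_close_matches

products = ["wireless mouse", "wired mouse", "mechanical keyboard", "usb hub"]

# Tolerates typos in the query without any model call or API cost
matches = get_close_matches("wireles mouse", products, n=2, cutoff=0.6)
```

Where LLMs earn their keep is when the matching is semantic ("pointing device" should match "mouse") rather than typographic, or when the scale makes hand-written rules impractical.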
LLMs are essentially a new kind of interface — just like the command line, the GUI, or APIs were before.
But they’re not magic. They are definitely not free. And they’re almost never the best way to solve a problem.
The more critical the function, the more important it becomes to understand:
- What is being executed?
- Where is the data flowing?
- Who owns the outputs?
- What are the real costs?
Conversational AI: a modern chatbot, but still a chatbot
There is a path forward for conversational AI. It’s just not the revolutionary leap people imagine: self-thinking systems that won’t hallucinate and always provide the right answer.
Yes, LLMs can provide smarter, more human-like responses than anything before they became mainstream.
Yes, they can automate support flows that were previously a lot more complex to script.
But let’s not fool ourselves:
People still want real, human contact with your business, especially when something goes wrong.
If you outsource your entire customer service to an AI assistant, you're sending a clear message (the wrong one, in my opinion):
- "We don’t care enough to talk to you ourselves."
- "You're not worth the time of a real person."
- "We only sell our product. Don’t come complaining to us, please."
This probably saves costs in the short term.
But long-term? You’re destroying trust in your brand. Especially because people typically contact customer service when:
- They’re frustrated
- They don’t understand something
- They have a complaint
That is not the time to offer them a chatbot, no matter how “smart” it sounds.
Great customer service is a relationship, not a cost center.
So yes — use conversational AI to assist your teams. But don’t let it replace human contact where empathy is needed. Don’t make AI the interface customers must go through to talk to your company, if you care about your product.
What you should be using AI for
Let’s flip it around: what is a good use case for AI?
You should use AI when:
- It helps you do things that were previously too expensive, too slow, or technically too complex
- You’ve already optimized your architecture, but still have edge cases worth automating
- You're enriching processes — not hiding technical debt
For example:
- Prototyping difficult functionality so you can test outcomes faster
- Enhancing product search with semantic understanding
- Business intelligence (BI), provided you make hallucinations visible in the data
- Generating tailored summaries of large internal reports
- Transcribing phone calls and summarizing them
- Image-to-text, so you can run semantic search over large image databases
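The semantic search cases above all reduce to the same mechanism: embed the catalog and the query as vectors, then rank by similarity. A minimal sketch, with hand-made toy vectors standing in for real embedding-model output (names and numbers here are invented for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings; in practice these come from an embedding model
catalog = {
    "red running shoes": [0.9, 0.1, 0.0],
    "blue office chair": [0.1, 0.9, 0.2],
}
query_vec = [0.85, 0.15, 0.05]  # imagined embedding of "sneakers for jogging"

# Rank catalog items by similarity to the query
best = max(catalog, key=lambda name: cosine(query_vec, catalog[name]))
```

Note that the AI lives only in the embedding step; the retrieval itself stays deterministic, cheap, and debuggable, which is exactly the division of labor you want.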
These are tasks where AI really shines, because they were previously too expensive, too hard, or outright impossible to solve with traditional systems.
But again: always weigh the costs.
Not just in dollars, but in clarity, control, and customer experience. Always try to solve it without using AI first.
Tech maturity will determine who wins
Here’s the core of it all:
Companies with modern, understandable, well-documented infrastructure and data will win.
Those who keep layering AI on top of their chaos… won’t.
It’s not about how early you adopted AI. It’s about:
- Whether your stack is maintainable
- Whether your teams really understand it
- Whether you can reason about cost, performance, and failure
- Whether you own your data, your architecture, and your roadmap
The companies that treat AI as a new algorithm and interface for solving problems they could not solve before will, in my opinion, get the most long-term value. Don't use AI for the marketing value big tech wants you to chase.
Offloading data ownership is a strategic risk
Let me end with another real example I recently came across.
A company in the travel sector built an app for end-users to access their trip details. They resell it to other travel agencies and tour operators. That sounds reasonable, given the poor state of IT in the travel industry.
So the travel agency can offer the app, with all the vouchers and itineraries for their customer's trip. Nothing weird so far, except that you are offloading all your documents to a third party.
So now this itinerary app provider has access to a lot of your crucial business information. And when the customer buys extra services (e.g. insurance, transfers, upgrades) in the app, you would think that upsell belongs to the travel agency that owns the customer.
Here's the catch: the platform suddenly pays commission to the tour operator... So is the app provider now also selling extra services to your customer?
Who owns the customer now?
It’s a major shift and a dangerous one. The more infrastructure you outsource, the more you lose control of your customer relationships and your first-party data.
Stay in control of your own customer's journey. Protect your added value. Don't become Unity. And keep it personal.
This will become a huge issue! And AI will accelerate it if travel companies aren’t paying attention.
Conclusion: AI is a tool, not a strategy for most companies
AI is impressive. But it doesn’t replace fundamentals.
Don’t use it to paper over bad architecture.
Don’t use it as an excuse to avoid technical decisions.
Don’t let it obscure what your systems are doing, or who your customer really is.
Instead:
- Own your stack
- Understand your logic
- Use AI where it creates real, measurable value
- And never stop thinking for yourself
The real innovation isn’t in the tool. It’s in how clearly you think about the problems to solve.