Why “Hallucination” Is the Wrong Way to Talk About LLMs

Most people in AI circles now know what “hallucination” means in the context of language models: it's when the model confidently invents information that isn't true. But the word itself brings a whole set of assumptions with it. That the model is trying to tell the truth. That it knows what's real. That it's having some kind of human-like mental break. It's not, and this article unpacks why the word is the wrong frame.

Why Vector Databases Are Essential for Modern LLM Systems

Modern language models can generate remarkably fluent responses, but they don't work in isolation. Behind the scenes, the best systems combine large models with fast, context-aware search powered by vector databases. This article walks through what vector databases actually are, how they relate to how LLMs “think,” and why they've become critical to making AI systems reliable, explainable, and fast.
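
To make that pairing concrete, here's a minimal sketch of the retrieval step. Everything in it is illustrative rather than taken from the article: a toy in-memory index with random placeholder embeddings stands in for a real vector database and embedding model, and the hypothetical retrieve helper performs the core operation, a nearest-neighbour lookup by cosine similarity.

```python
import numpy as np

# Toy in-memory "vector index": documents stored alongside embedding
# vectors. A real system would delegate storage and search to a vector
# database, but the core operation is the same nearest-neighbour lookup.
documents = [
    "Refund policy: purchases can be returned within 30 days.",
    "Shipping: orders dispatch within two business days.",
    "Support is available by email around the clock.",
]

rng = np.random.default_rng(0)
# Placeholder embeddings; in practice these come from an embedding model.
embeddings = rng.normal(size=(len(documents), 8))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def retrieve(query_embedding: np.ndarray, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query embedding."""
    query_embedding = query_embedding / np.linalg.norm(query_embedding)
    scores = embeddings @ query_embedding  # cosine similarity on unit vectors
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

# The retrieved passages are then placed into the LLM's prompt, grounding
# its answer in stored context instead of parametric memory alone.
query = rng.normal(size=8)  # stand-in for an embedded user question
for passage in retrieve(query):
    print(passage)
```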

Christian Voudantas

Principal Consultant

Technical Polyglot | Business Owner @ Alto Apto | AI & Web3 Specialist | Agile Delivery Expert | Passionate about building the culture and processes for tech teams to do meaningful work