
I have always worked in the data space of the computer industry—that space behind the user interface, where the rubber meets the road.
People see the frontend. Most never think much beyond what’s visible. I’m one of the ones who cares about the details behind it.
I’ve been deep in research about AI: how it works, and more interestingly, why it does what it does. I’ve built test systems, just enough to understand the edges of what AI can and cannot currently do, and to understand the trajectory of how language and data models are getting “smarter.”
There is certainly a path, and it has limits. Within those limits, I’ve seen what the architecture and interface of the next computer revolution look like.
If you work in this space, follow along as I walk through my predictions and why I think it will look this way.
AI has a serious memory limit. I don’t mean bytes—I mean tokens. People in the AI space use “tokens” with a specific technical meaning, but I’m going to use it more broadly: how much of the dynamic part of a problem the AI can hold in mind at once.
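To make that limit concrete, here is a minimal sketch of what a context budget forces you to do. The budget number and the four-characters-per-token rule of thumb are illustrative assumptions, not any particular model’s real figures:

```python
# Rough sketch of a context-window budget. The numbers and the
# chars-per-token heuristic are illustrative assumptions, not any
# specific model's real limits.

CONTEXT_BUDGET_TOKENS = 8_000   # hypothetical model limit
CHARS_PER_TOKEN = 4             # crude rule of thumb for English text

def estimate_tokens(text: str) -> int:
    """Very rough token estimate; real tokenizers vary by model."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(messages: list[str]) -> bool:
    """True if the whole conversation still fits in the window."""
    return sum(estimate_tokens(m) for m in messages) <= CONTEXT_BUDGET_TOKENS

def trim_to_fit(messages: list[str]) -> list[str]:
    """Drop the oldest messages until the rest fit."""
    kept = list(messages)
    while kept and not fits_in_context(kept):
        kept.pop(0)  # the model will never "remember" this message again
    return kept
```

Whatever gets trimmed isn’t “forgotten” the way we forget things; it simply never reaches the model on the next call.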
Humans have excellent short-term memory that feeds into long-term memory and reasoning. We learn as we go. AI learns its general knowledge during a training run that takes months, and after that it learns nothing new.
As humans, we take the way we learn for granted—we barely notice it. When we see AI in action, we assume it’s like us, that it can learn on the fly. This anthropomorphizing is causing people in decision-making positions to fundamentally misunderstand how to use this tool.
Current AI must be deliberately architected to “understand” its environment. We say the AI needs context, but that word is imprecise. Contextualizing is a process humans do without thinking. Because we don’t have to notice how we orient ourselves with respect to information, we assume AI can do it too.
It cannot.
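What “deliberately architected” means in practice: nothing exists for the model except what you hand it in the current request. Here’s a minimal sketch of assembling that context by hand on every call. The function and field names are hypothetical stand-ins, not any real API:

```python
# Sketch of explicit context assembly. The model is stateless: every fact
# it needs about the business must be fetched and placed into the prompt
# on every single request. Names here are hypothetical.

def fetch_customer_record(customer_id: str) -> dict:
    """Stand-in for a real database lookup."""
    return {"id": customer_id, "plan": "enterprise", "open_tickets": 3}

def build_prompt(question: str, customer_id: str) -> str:
    """Orient the model by hand: it knows nothing we do not restate here."""
    record = fetch_customer_record(customer_id)
    context_lines = [f"{key}: {value}" for key, value in record.items()]
    return (
        "You are answering a support question.\n"
        "Customer record:\n"
        + "\n".join(context_lines)
        + f"\n\nQuestion: {question}\n"
    )

if __name__ == "__main__":
    # The same assembly happens again on the next request, from scratch.
    print(build_prompt("Why was I billed twice?", "cust-42"))
```

The next request starts from zero and does all of this again. That re-orientation step, which a human does without noticing, is the part you have to build.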
This is a hard concept to convey. I believe that understanding this bias will determine who wins and who loses in the next round of technological advancement. Companies that understand how to effectively apply AI to their business processes will be the next AWS. Those that don’t will be the next Sears.
The architecture that wins will support growth without sacrificing previous gains. Each new capability should build on the last. The investment in understanding your business processes should compound over time—not reset every time a new model drops or a shiny new approach comes along.
In this series, I’m going to explore that vision: what the interfaces look like, what the backend architecture looks like, and how they apply to business in the real world.