

If you had to explain what happened to AI over the last 18 months to a five-year-old, you could say this:
A little robot child went from being able to talk… to being able to see, listen, draw, reason, code, search, and even do tasks on a computer.
That is the simplest way to understand the shift.
Eighteen months ago, most people thought AI was basically a clever autocomplete machine. It could write emails, summarize documents, and answer questions. Impressive, yes. But still limited. It often sounded smart without actually being reliable. It was like a bright five-year-old who knew lots of words, could tell stories, and sometimes shocked you with a clever answer — but still got confused, made things up, and needed constant supervision.
Today, that same “child” has changed dramatically.
Now it can look at images, understand speech, hold more natural conversations, generate high-quality images, reason through harder problems, write production-grade code, search the web, use tools, and in some systems even take action across software workflows. Frontier model makers have been racing to push these capabilities forward: OpenAI rolled out deeper research tools, computer-use agents, and stronger multimodal systems; Google advanced Gemini and Project Astra toward always-on multimodal assistance; Anthropic pushed hard on long-running reasoning and coding agents. This is not a cosmetic upgrade. It is a category shift. ([OpenAI][1])
The best way to picture it
Imagine a five-year-old on day one.
He can talk well. He knows many words. He can answer simple questions. He can describe a dog, tell you what the sky looks like, and maybe even make up a funny story.
Now imagine that, within what feels like a few months, that same child suddenly learns to:
- read pictures and explain them,
- listen to your voice and reply naturally,
- draw exactly what you describe,
- solve harder school problems step by step,
- use a computer mouse and keyboard,
- research information from books and websites,
- remember context better,
- and help adults do real work.
That is roughly what AI has done.
The shock is not that AI got “a bit better.” The shock is that it started combining abilities. It is the combination that changes everything. A system that can only chat is interesting. A system that can chat + reason + search + use tools + generate media + act becomes economically disruptive. ([OpenAI][2])
What actually changed in the last 18 months
The biggest leap is this: AI moved from prediction to capability stacks.
Before, it mostly predicted the next word well.
Now, leading systems increasingly operate like layered assistants. They can interpret text, images, audio, and sometimes video; use external tools; and perform multi-step reasoning before answering. OpenAI’s GPT-4o and later releases pushed real-time multimodality and image generation inside the same user experience; OpenAI’s Operator and ChatGPT agent expanded from answering questions to taking actions; Google’s Gemini 2.5 emphasized “thinking” and long-context multimodal reasoning; Anthropic’s Claude 4 focused on advanced reasoning, coding, and sustained agentic work. ([OpenAI][3])
And the measurable progress has been stunning. Stanford’s 2025 AI Index reports that, in just one year, scores rose by 18.8 percentage points on MMMU, 48.9 points on GPQA, and 67.3 points on SWE-bench — benchmarks designed to test multimodal understanding, expert-level science questions, and real software engineering tasks. That is not incremental progress. That is a machine learning growth curve hitting real-world use cases at speed. ([hai.stanford.edu][4])
At the same time, the economics changed. The Stanford AI Index reports that the inference cost of running a system at roughly GPT-3.5 level dropped more than 280-fold between late 2022 and late 2024, making capable AI far cheaper to run and deploy. That matters because when intelligence gets cheaper, adoption stops being experimental and starts becoming operational. More companies can afford to use it. More products can embed it. More consumers encounter it without even realizing AI is under the hood. ([arXiv][5])
Why this feels “phenomenal”
Because human beings are not good at intuitively grasping exponential change.
We are wired for straight lines. AI has been moving more like a curve.
If a five-year-old learned one new word a day, you would not be amazed. But if that child woke up every few weeks with a new sense, a new skill, and a new level of reasoning, you would call it miraculous.
That is why the last 18 months feel so dramatic. AI did not merely become “smarter.” It became more general in the practical sense. It started spanning more modes of input, more modes of output, and more forms of work. It became less like a chatbot and more like a junior operator that can participate across the stack of modern knowledge work. ([OpenAI][2])
Why most people still do not understand its capabilities
This is the real story.
Most people underestimate AI not because they are foolish, but because they are judging a fast-moving system using an outdated memory of what it used to be.
They tried it once. It wrote a mediocre paragraph. It hallucinated. It got a fact wrong. So they mentally filed it under: “interesting, but not serious.”
That mental model is now badly outdated.
There are five reasons most people still do not understand what AI can do.
1. They are looking at the old version in their head
Most people are not evaluating AI as it exists now. They are evaluating their memory of it from 2023 or early 2024.
That is like judging today’s smartphone by the first clunky touchscreen you used years ago. The category moved. Your memory did not.
When systems start reasoning more effectively, handling multimodal inputs, and using tools to gather information or take action, the old “fancy autocomplete” frame stops being sufficient. ([OpenAI][2])
2. Capability is uneven, so people mistake inconsistency for weakness
AI is strange because it can appear brilliant in one moment and sloppy in the next.
A five-year-old analogy works here too. A child may shock you by reading a difficult word, then immediately forget where their shoes are. AI behaves similarly: it can solve a hard coding problem, then make a basic mistake if the task is framed poorly.
People see that inconsistency and assume the whole system is overrated. But inconsistency does not mean low capability. It means the tool is powerful, but still requires good interfaces, better safeguards, and skilled use. ([OpenAI][6])
3. Most people only use a fraction of what the tools can do
A huge gap exists between what AI can do and what the average person asks it to do.
Many users treat it like a search box or a copywriting toy. They ask for a caption, a recipe, or a quick summary. Fine. Useful, but shallow.
They do not ask it to analyze a sales process, restructure a business offer, review a contract, brainstorm new product angles, write code, compare strategy options, turn meeting notes into an action plan, or use tools to research and execute a task. So they never see the upper range of its usefulness. ([OpenAI][7])
4. The leap is technical, but the impact is practical
The public often hears technical language — multimodal, long-context, inference, agents, benchmark gains — and tunes out.
But the business meaning is simple:
AI is getting better at understanding, reasoning, creating, and doing.
That means it increasingly compresses the time between idea and execution. A task that once required five tabs, three tools, and one assistant can now start in one interface. The importance of this is not academic. It is operational. ([OpenAI][2])
5. People confuse “not human” with “not useful”
AI does not need to think exactly like a person to be commercially transformative.
A forklift does not move like a human body, yet it changed logistics. A calculator does not think like a mathematician, yet it changed finance. GPS does not navigate like a taxi driver, yet it changed travel.
Likewise, AI does not need to be conscious, emotional, or perfect to reshape industries. It only needs to become good enough, cheap enough, and fast enough at valuable tasks. On those dimensions, the last 18 months have been significant. ([arXiv][5])
What this means for business and society
The companies and individuals who still think AI is just a writing gimmick are making a strategic error.
The opportunity is no longer limited to content generation. It is about leverage.
AI now sits closer to the center of execution: research, customer service, software development, design, analysis, workflow automation, knowledge retrieval, sales enablement, and decision support.
That does not mean humans disappear. It means the productivity floor rises and the speed ceiling blows open.
The person who knows how to direct AI well starts to look like they have a bigger team than they actually do.
The company that integrates AI into process design, customer interactions, and internal operations starts to move faster than competitors still debating whether the tool is “real.” Meanwhile, broader adoption is accelerating: Stanford’s 2025 AI Index reports that organizational AI use rose sharply, reflecting that businesses increasingly see AI as operational infrastructure rather than novelty software. ([hai.stanford.edu][4])
The honest truth
AI is still imperfect.
It still makes mistakes. It still needs oversight. It still raises real questions around trust, safety, labor, and misuse.
But those truths can exist alongside another truth:
The development over the last 18 months has been extraordinary.
Not because AI became magic. Because it became useful in more ways, for more people, at lower cost, with more natural interfaces, and with a growing ability to move from answering to acting. ([OpenAI][1])
Final thought
So if you want the cleanest comparison, here it is:
Eighteen months ago, AI was like a very talkative five-year-old. Curious. Impressive. Sometimes surprisingly clever. But unreliable, limited, and easy to dismiss.
Today, it is like that same five-year-old suddenly learned to read, draw, listen, use tools, search libraries, operate devices, and help adults solve real problems.
It is still not fully grown. But anyone pretending it is still just a toy is no longer paying attention.
If this thinking fits, join the room.
WLBT is built around trust, standards, active participation, and real business relationships.
