I often see headlines that read, “Researchers use AI to…” and comments on the Internet that declare, “AI is just trying to do…” or “All AI does is…” Comments structured this way reflect one of the biggest misconceptions people outside of the field have about AI.
In this week’s piece, I’m going to lay out what that misconception is: the flaw in a lot of people’s mental models of AI. Then I’m going to describe a better mental model for AI. Lastly, I’m going to tell you why this matters, and give you the number one question to ask about AI headlines and comments.
The mental model many people hold is that all of AI is akin to ChatGPT or Stable Diffusion: all of AI consists of hundred-billion parameter neural networks that absorb the entire Internet as training data. This view makes sense: AI art generators and ChatGPT are the biggest stories in AI in at least 20 years.
But the most important thing to understand about AI is that it’s not monolithic. Artificial intelligence is the study of how to make computers do what biological intelligences, like humans, can do. Any capability humans possess can be tackled with AI. As a result, AI spans many subfields and hundreds of subfields of subfields. It touches language, vision, planning, robotics, biology, and so much more.
Facial recognition, for unlocking your iPhone, is AI. Detecting objects in an image is AI. Pathfinding, for finding the quickest route between locations as in Google Maps, is AI. The algorithms NPCs use in video games to interact with the main character — that’s AI. Chess-playing algorithms like Stockfish are AI.
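Pathfinding is a good illustration of how modest “AI” can be. A minimal sketch of breadth-first search finding the shortest route through a toy road map (the graph below is hypothetical example data, not any real map):

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search: returns the shortest path from start to goal,
    or None if the goal is unreachable."""
    queue = deque([[start]])  # queue of partial paths to extend
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Toy road map: intersections as nodes, one-way roads as edges.
roads = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
}
print(shortest_path(roads, "A", "E"))  # → ['A', 'B', 'D', 'E']
```

No neural network, no training data — just a queue and a loop. Yet search algorithms like this are textbook AI, and real mapping systems build on fancier relatives of the same idea.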
The methods AI spans are just as diverse. There are hundreds of methods we use as researchers, and only a handful of them involve massive, hundred-billion-parameter neural networks like ChatGPT.
Here’s a better mental model: think of the field of AI like the field of medicine. Medicine is a sweeping field that covers thousands of conditions, and therefore thousands of procedures and corresponding methods that are entirely different in nature and purpose. AI as a field is structured far more like this, and far less like a collection of variations on ChatGPT.
AI, like medicine, is divided into subfields, each with its own experts, and involves procedures as far apart in consequence as cosmetic hair transplants and open-heart surgery, or Snapchat filters and credit decisioning. Medicine splits into pediatrics, neurology, orthopedics, dermatology, cardiology, and more. AI’s subfields include computer vision, reinforcement learning, natural language processing, computational biology, and more. Each of these subfields has its own rockstars and its own sub-communities, specializing in different methods and solving different problems.
This mental model of AI makes it much harder to justify sweeping, general statements about AI. All such statements have to be qualified with: what kind of AI? AI for what problem? AI in what subfield, using what method?
When a headline, for example, says that something was “made with AI,” the next question has to be, “What kind of AI?” Saying that something was “made with AI” is like saying something was “made with medicine” — so broad it’s meaningless. Neural implants are made with medicine, and so is Tylenol. ChatGPT is “made with AI,” but so is Pac-Man.
It’s not enough to say something “uses AI.” You have to ask, “What kind of AI?”