Every Problem Is a Prediction Problem
On true belief and explanation, Popper and Deutsch, knowledge in AI, and the nature of understanding
This might sound trivial if you already agree, and too reductive if you don’t. But I want to make and defend the claim that every problem we face is ultimately a problem about making the right predictions. Whether you’re deciding where to invest, whom to hire, which medication to take, or what to say to a friend who’s upset; whether you’re a theoretical physicist, an engineer, a historian, or a theologian; whether you’re a child, a teen, or an adult; whatever the nature of the problem you’re dealing with, it is ultimately one of making the right predictions. Consequently, the better your predictions, the better your solutions.
Why might such a straightforward, commonsensical view be contentious? In fact, throughout history, many influential thinkers have explicitly rejected it. From Plato on beliefs to Albert Einstein on quantum physics to Noam Chomsky on contemporary AI, philosophers and scientists have insisted that knowledge requires something more than true predictions. And they’ve had good reasons for saying so. Lucky guesses clearly aren’t knowledge. Shallow pattern matching breaks when the world shifts, as imperfect AI systems remind us.
But I want to suggest the critics have misdiagnosed the problem. They’re pointing at something real, but misunderstanding its nature. What they’re identifying isn’t an alternative to prediction. Instead, it’s what helps us make more predictions, and make them more successfully.
The objection goes back at least to Plato. In the Theaetetus, Socrates considers whether knowledge might simply be true belief, and rejects the idea with a thought experiment about a jury.
Imagine jurors deliberating a case. They weren’t eyewitnesses to the crime. They’ve heard testimony, some of it persuasive rhetoric rather than solid evidence. Suppose they reach a guilty verdict, and they happen to be right. The defendant did indeed carry out the crime.
But did the jurors know he was guilty? No. They had a true belief, but something important was missing. They reached the right conclusion through a process that could easily have led them astray. A more charismatic defense attorney might have swayed them the other way. They were “justly persuaded”, perhaps, but rhetorical persuasion isn’t the same as knowledge.
We feel this intuition immediately. If we had to predict whether the jurors would get the next case right, we wouldn’t trust them. Their success doesn’t seem systematic or reliable. It seems contingent on who spoke most eloquently that day.
For more than two thousand years, this has been the standard objection to identifying knowledge with correct prediction: you can get the right answer for the wrong reasons. And when that happens, you don’t really know.
What, then, is missing? The standard answer is that the jurors lacked understanding or explanation: something more than mere successful prediction.
But I would argue that what the jurors really lacked was a method that would continue to work, not something mysterious called “explanation” whose nature is never made clear. The problem is that the method they used to arrive at their prediction wasn’t reliable. And what makes prediction reliable? Precisely the things the critics point to: a grasp of what we call underlying structure and causal relationships. The critics aren’t wrong to insist that knowledge requires more than randomly getting the answer right. They are wrong about what that “more” consists of.
Thinkers like the philosopher Karl Popper and the physicist David Deutsch argue for explanation over mere prediction, but explanation matters only to the extent that it extends our predictive success. We never observe causes, as David Hume pointed out, but causal understanding still matters if it tells us what will happen when circumstances change. Knowing what’s impossible matters not because it’s real (we never observe it, since it doesn’t actually occur), but because it rules out paths in advance and thus helps refine predictions about what does occur. These are not alternatives to prediction. They are methods that help us attain greater success with our predictions.
Notice the asymmetry. We never observe causal necessity or explanatory structures; we infer them from what leads us to more successful predictions. Predictions, by contrast, are concrete. The event either happens or it doesn’t. This gives prediction direct contact with reality that explanation, by itself, lacks. Explanation earns its keep only by extending predictive success.
The quest for predictive success may itself demand these methods. As Ilya Sutskever, co-founder of OpenAI, notes, when you train a neural network to successfully predict the next word, it doesn’t just learn statistical correlations; it learns “some representation of the process that produced the text”.
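To make this concrete, here is a minimal sketch of my own (not Sutskever’s example; the two-state Markov chain and every name in it are invented for illustration). Even the simplest possible next-token predictor, a table of conditional frequencies trained only to guess what comes next, ends up encoding the transition structure of the process that generated its data:

```python
# Toy illustration: a predictor trained purely on next-token frequency
# recovers the transition structure of the (hypothetical) generating process.
from collections import Counter, defaultdict
import random

random.seed(0)

# A made-up two-state Markov process that emits the "text".
TRANSITIONS = {"sun": {"sun": 0.8, "rain": 0.2},
               "rain": {"sun": 0.4, "rain": 0.6}}

def generate(n):
    """Sample n tokens from the hidden process."""
    state, tokens = "sun", []
    for _ in range(n):
        tokens.append(state)
        r, acc = random.random(), 0.0
        for nxt, p in TRANSITIONS[state].items():
            acc += p
            if r < acc:
                state = nxt
                break
    return tokens

# "Training": count how often each token follows each other token.
counts = defaultdict(Counter)
tokens = generate(100_000)
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

# The learned conditional probabilities converge to the true transition
# probabilities: the predictor has, in effect, represented the process.
for prev, ctr in counts.items():
    total = sum(ctr.values())
    learned = {nxt: round(c / total, 2) for nxt, c in ctr.items()}
    print(prev, "->", learned, "| true:", TRANSITIONS[prev])
```

Run it and the learned table matches the true transition probabilities almost exactly. Scaling this same objective from frequency tables to neural networks, and from a toy chain to human language, is the step Sutskever is describing. The pursuit of prediction, pushed far enough, generates what we describe as explanation.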
But if we take explanation as the essence of knowledge, as an end in itself, we risk obscuring our ignorance, manufacturing a satisfying subjective state through narratives we mistake for knowledge. Reliance on prediction helps mitigate this risk, as the success of modern science demonstrates: it delivers predictions that pre-modern natural philosophy, for all its elaborate explanations, never could.
Every problem is a prediction problem. The question that matters is how far our successful predictions go, and how to make them go further.

