Vladimir Vapnik: Learning Theory, Intelligence & Invariance
Theoretical Foundations of Learning
Vladimir Vapnik discusses the nature of machine learning, emphasizing a rigorous, mathematical approach over purely speculative interpretations. He distinguishes between two primary learning mechanisms:
• Strong convergence: the classical statistical-learning regime, in which the estimated function must approach the target function itself; achieving this typically requires large amounts of training data.
• Weak convergence: a regime in which the estimate need only agree with the target on a chosen set of predicates, or invariants (the "duck" analogy: looks like a duck, swims like a duck, quacks like a duck supplies three such predicates), which can sharply reduce the amount of training data required (see the sketch after this list).
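To make the distinction concrete, here is a sketch of the contrast as it is usually stated in Vapnik's work on statistical invariants (the symbols f_ell, f_0, and psi_k are illustrative notation, not quoted from the interview):

```latex
% Strong convergence: the estimate f_ell must approach the target f_0 in norm,
% which classically demands many training examples.
\lim_{\ell \to \infty} \lVert f_\ell - f_0 \rVert = 0

% Weak convergence with predicates: f_ell need only agree with f_0 against a
% small, teacher-chosen set of predicates psi_1, ..., psi_m -- each predicate
% is one "duck test" (looks, swims, quacks).
\lim_{\ell \to \infty} \langle f_\ell - f_0, \; \psi_k \rangle = 0,
\qquad k = 1, \dots, m
```

Because only m inner products must be matched rather than the whole function, far fewer observations are needed, which is exactly the data reduction the list above describes.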
The Problem of Intelligence
For Vapnik, the core challenge of artificial intelligence is not imitation, but the generation of these essential predicates. He argues that current deep learning methods often rely on "fantasy" and heavy data requirements precisely because they lack a principled way to incorporate such human-supplied insights. A "great teacher" is one who can distill reality into a useful invariant, thereby enabling a learner to generalize from far fewer observations.
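As a rough illustration of how an invariant can stand in for data, the following sketch (not Vapnik's actual LUSI algorithm; the predicate psi, the penalty weight lam, and the toy data are all assumptions made for this example) fits a linear model with and without a teacher-supplied predicate constraint:

```python
import numpy as np

# Minimal sketch: least-squares classification plus a penalty that forces
# the model to reproduce the empirical statistic of a predicate psi on the
# training set -- one "invariant" in the weak-convergence sense.
rng = np.random.default_rng(0)

# Toy data: the class depends on the mean of the features.
X = rng.normal(size=(30, 5))            # only 30 training examples
y = (X.mean(axis=1) > 0).astype(float)  # labels in {0, 1}

def fit(X, y, psi=None, lam=10.0, ridge=1e-3):
    """Linear model w minimizing ||Xw - y||^2 + ridge*||w||^2, plus
    lam * (mismatch of the psi-weighted statistic)^2 if psi is given."""
    A = X.T @ X + ridge * np.eye(X.shape[1])
    b = X.T @ y
    if psi is not None:
        p = psi(X)          # predicate value on each training example
        v = X.T @ p         # direction along which the invariant acts
        A += lam * np.outer(v, v) / len(y)
        b += lam * v * (p @ y) / len(y)
    return np.linalg.solve(A, b)

# The teacher's hint: "pay attention to the average feature value."
w_plain = fit(X, y)
w_inv = fit(X, y, psi=lambda X: X.mean(axis=1))

# Evaluate both models on fresh data.
Xt = rng.normal(size=(2000, 5))
yt = (Xt.mean(axis=1) > 0).astype(float)
acc = lambda w: np.mean((Xt @ w > 0.5) == yt)
print(f"without invariant: {acc(w_plain):.3f}  with invariant: {acc(w_inv):.3f}")
```

The penalty term requires the model's outputs to match the predicate-weighted average of the labels on the training set, i.e. the weak-convergence condition applied to a single psi, rather than requiring more examples.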
"There exists ground truth. And that can be seen everywhere... in music, when you're listening to Bach, you see this structure, very clear, very classic, very simple."
Challenges to Modern AI
Vapnik critiques the field's obsession with deep learning architectures. He posits that:
• Math vs. Interpretation: Many modern machine learning techniques are driven by subjective "blah-blah" interpretations rather than solid mathematical derivations.
• VC Dimension: He stresses that learning success is governed by controlling the diversity, or capacity, of the admissible set of functions, as measured by the VC dimension, rather than by simply adding layers (a standard form of his bound is sketched after this list).
• Open Problems: Vapnik identifies the core open problem as understanding how to mathematically derive or identify the right predicates for specific tasks, similar to how humans learn to categorize the world based on limited interactions.
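For reference, one standard form of the VC generalization bound from Vapnik's statistical learning theory: for 0-1 loss, with probability at least 1 − η over a sample of size ℓ, where h is the VC dimension of the admissible set of functions,

```latex
% The generalization gap is controlled by the capacity h, not by depth:
% shrinking h tightens the bound even when the empirical risk is unchanged.
R(f) \;\le\; R_{\mathrm{emp}}(f)
  \;+\; \sqrt{\frac{h\left(\ln\frac{2\ell}{h} + 1\right) - \ln\frac{\eta}{4}}{\ell}}
```

The bound makes the bullet's point explicit: what governs generalization is the capacity term h of the function class, which adding layers tends to increase rather than control.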