François Chollet: Keras, AGI, and the Future of AI
The Intelligence Explosion and Systemic Limits
François Chollet fundamentally challenges the intelligence explosion narrative—the idea that an AI will iteratively improve its own code to achieve exponential growth.
• Intelligence is not an isolated property: It is deeply coupled with a body and an environment. A system cannot be effectively improved in a vacuum.
• The Bottleneck Effect: As systems become more refined, they inevitably hit new bottlenecks. When one part of an interdependent system is optimized, the rest of the system resists further acceleration.
• Linear Progress vs. Exponential Resources: While scientific research consumes exponentially more resources over time, significant progress accumulates only linearly, because each new advance is harder to achieve than the last.
Keras, Deep Learning, and Artificial Intelligence
Chollet, the creator of the Keras library, discusses the evolution of deep learning tools and the design philosophy behind his work.
• Modularity and Usability: Keras was born from a need for reusable Recurrent Neural Networks (RNNs) and a desire to make deep learning accessible through a high-level Python API.
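The accessibility Chollet aimed for is visible in how little code a working model takes. A minimal sketch of the high-level Keras API (assumes TensorFlow with Keras installed; the layer sizes here are arbitrary):

```python
from tensorflow import keras

# Define a small feed-forward classifier with the Sequential API.
model = keras.Sequential([
    keras.Input(shape=(784,)),               # e.g. flattened 28x28 images
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])

# Compiling attaches an optimizer, loss, and metrics in one call.
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

The modularity principle shows in how layers are composable building blocks: swapping the `Dense` layers for recurrent ones (`keras.layers.LSTM`) requires no change to the surrounding training code.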
• The Future of AI Design: Chollet envisions a shift toward automated machine learning (AutoML) and hybrid systems that combine deep learning (perception) with symbolic AI (program synthesis/logic).
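The hybrid idea can be sketched as a pipeline: a learned perception component maps raw input to discrete symbols, and a symbolic component applies exact rules over them. This toy (all names invented for illustration; the "network" is a stand-in function) shows the division of labor, not any real system:

```python
def perceive(pixels):
    # Stand-in for a trained deep network: map raw input to a symbol.
    return "even" if sum(pixels) % 2 == 0 else "odd"

# Symbolic component: an exact, human-readable rule table (parity addition).
RULES = {
    ("even", "even"): "even",
    ("even", "odd"): "odd",
    ("odd", "even"): "odd",
    ("odd", "odd"): "even",
}

def reason(a, b):
    # Perception produces symbols; logic composes them without approximation.
    return RULES[(perceive(a), perceive(b))]
```

The appeal of the split is that the symbolic half generalizes perfectly once its rules are right, while the learned half handles the fuzzy mapping from raw data to symbols.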
The Risks of Information Control
A major concern Chollet raises is the mass manipulation of human behavior through modern recommender systems.
"If you look at the human mind as a kind of computer program, it has a very large exploit surface."
• Engagement Optimization: Current systems optimize for engagement, which often favors divisive or fabricated content over truthful content.
• User Agency: He suggests that users should be given explicit control to define the objective functions of the algorithms they interact with, transforming them from passive targets of manipulation into active users shaping their own growth.
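One way to picture user-defined objective functions: the ranking score becomes a weighted sum whose weights the user sets, rather than a fixed engagement metric. A minimal sketch (the attribute names and weights are invented for illustration):

```python
def score(item, weights):
    """Weighted sum of item attributes; the user chooses the weights."""
    return sum(weights.get(key, 0.0) * value for key, value in item.items())

items = [
    {"engagement": 0.9, "learning_value": 0.1},  # attention-grabbing
    {"engagement": 0.2, "learning_value": 0.8},  # educational
]

# A user who wants growth over raw engagement sets their own objective:
user_objective = {"engagement": 0.1, "learning_value": 0.9}
ranked = sorted(items, key=lambda it: score(it, user_objective), reverse=True)
```

With this objective, the educational item ranks first; a default engagement-only objective would rank the attention-grabbing item first, which is exactly the control Chollet argues users should have.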