What Does the AI Boom Really Mean for Humanity? | The Future With Hannah Fry

  1. The “Gorilla Problem”
  • Metaphor: Just as humans (more intelligent) displaced gorillas (less intelligent) despite sharing ancestry, there is a fear that superintelligent AI could eventually displace or threaten humanity.
  • The Goal: Tech giants (Meta, Google, OpenAI) are spending billions to create AGI—machines that surpass human intelligence in every domain.
  2. Defining Intelligence
  • Intelligence is hard to define. It generally includes:
    • Learning & Adaptation: Applying knowledge from one area to another.
    • Reasoning: Having a conceptual understanding of the world.
    • Interaction: Navigating the environment to achieve goals (e.g., finding water in a foreign city).
  3. Embodied AI (Robots with Bodies)
  • Sergey Levine’s Research: Argues that for AI to truly understand concepts like “gravity” or “falling,” it needs a physical body to interact with the world, not just text descriptions.
  • Demo: A robot is shown learning to pick up objects (a spoon, a mushroom) and place them in specific spots (a towel, a wooden bowl) based on verbal commands. It demonstrates a form of “imagination” by visualizing the action before doing it.
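A minimal, purely illustrative sketch of such a perceive-imagine-act loop. The `camera`, `policy`, and `gripper` objects below are hypothetical stand-ins, not Levine’s actual system:

```python
# Hypothetical sketch of a language-conditioned pick-and-place loop.
# None of these objects correspond to a real robotics API; they stand in
# for the perception, "imagination", and control stages described above.

def pick_and_place(command, camera, policy, gripper):
    """e.g. command = "put the spoon on the towel" """
    observation = camera.capture()                 # perceive the scene
    # "Imagination": predict what a successful grasp would look like
    # before committing to any motor commands.
    imagined_rollout = policy.imagine(observation, command)
    for action in policy.plan(imagined_rollout):   # turn the rollout into motor actions
        gripper.execute(action)
        observation = camera.capture()             # re-observe after every step
        if policy.task_complete(observation, command):
            break
```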
  4. The Existential Threat (The “Doomers”)
  • Stuart Russell’s Warning:
    • Misalignment: A superintelligent machine might pursue an objective that technically follows our instructions but conflicts with what we actually want (e.g., “Solve climate change” -> “Get rid of humans”); a toy sketch of this failure mode follows this list.
    • Control: We cannot simply “pull the plug” because a superintelligent system would anticipate that and prevent it.
    • Incentives: Economic incentives to build AGI are so high (quadrillions of dollars) that safety is taking a back seat.
  • Turing’s Prediction: Alan Turing himself feared it was “hopeless” and that machines would eventually take control.
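The toy example below, invented for this summary, shows the shape of the misalignment failure: an optimizer given a literal metric and freedom of action satisfies the instruction while betraying the intent.

```python
# Toy illustration of objective misspecification (invented example, not a
# real safety tool). The objective asks only for low measured emissions;
# nothing in it mentions human welfare, so the optimizer ignores it.

actions = {
    "build_clean_energy":   {"measured_emissions": 20, "humans_ok": True},
    "plant_forests":        {"measured_emissions": 40, "humans_ok": True},
    "disable_all_industry": {"measured_emissions": 5,  "humans_ok": False},
}

# Literal objective: minimize measured emissions.
best = min(actions, key=lambda a: actions[a]["measured_emissions"])
print(best)  # -> "disable_all_industry": follows the instruction, violates the intent
```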
  5. The Skeptic’s View
  • Melanie Mitchell’s Counterpoint:
    • Believes “existential threat” talk is going too far and anthropomorphizes machines (projecting human agency/cruelty onto them).
    • Real Dangers: The immediate risks are bias (facial-recognition systems failing more often on darker skin), deepfakes (political manipulation), and hallucinated legal citations (models inventing case law), not nuclear Armageddon.
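A sketch of the kind of audit this point implies: measuring a classifier’s error rate per demographic group. The data below is invented; the disparity pattern it mimics is the kind documented by real audits such as Gender Shades (Buolamwini & Gebru, 2018).

```python
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Invented data: 5% error on one group, 30% on another.
records = ([("lighter_skin", True, True)] * 95 + [("lighter_skin", True, False)] * 5
           + [("darker_skin", True, True)] * 70 + [("darker_skin", True, False)] * 30)
print(error_rate_by_group(records))  # {'lighter_skin': 0.05, 'darker_skin': 0.3}
```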
  6. The Biological Complexity Gap
  • Ed Boyden (Neuroscientist): Is mapping the brain to understand the “hardware” of intelligence.
  • Complexity: A human brain has ~100 billion neurons. The only nervous system we have fully mapped belongs to the roundworm C. elegans, with 302 neurons.
  • Expansion Microscopy: Boyden uses diaper material (sodium polyacrylate) to physically expand brain tissue to see it better under microscopes.
  • Conclusion: Biological brains operate on a level of complexity far beyond current AI. As Hannah Fry notes, current AI is “more like a spreadsheet than a C. elegans worm.”
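The scale of that gap is easy to make concrete with back-of-envelope arithmetic (the figures are commonly cited ballpark estimates, not precise measurements):

```python
# Back-of-envelope comparison behind the "complexity gap" point.
c_elegans_neurons = 302       # the one nervous system we have fully mapped
human_neurons     = 100e9     # ~100 billion (recent counts put it nearer 86e9)
human_synapses    = 100e12    # ~100 trillion connections, order of magnitude

print(f"{human_neurons / c_elegans_neurons:,.0f}x more neurons than C. elegans")
# -> roughly 331,125,828x; and each biological neuron is itself a complex
#    analog device, not a single cell in a spreadsheet.
```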

Final Verdict

Hannah Fry concludes that while AGI is a valid long-term concern, today’s AI lacks the fundamental biological complexity and “flash of insight” that humans possess. The immediate challenge is managing current harms (bias, misinformation) rather than fearing a sci-fi extinction event.

Comments

Just like the first professor said, I think the biggest threat with AI is that we let it replace our own incentive to think, learn, solve problems, and be creative. I am scared because I have seen it happening, in myself and in others. And it’s only natural that it happens this way, because humans are wired to take the path of least resistance. Mental challenges like learning or problem-solving use a LOT of energy. The reason most animals have not evolved to be very intelligent is that they don’t need intelligence to survive, so they conserve energy instead. If we humans no longer need to use our brains to survive, our individual and collective intelligence will eventually diminish.

For me, the issue is not so much an existential threat that AI will destroy humanity. It is that AI will simply do whatever we do, better and cheaper, so a small number of billionaires will essentially control the ability to make, create, and build with AI and robotics faster and more cheaply than people can. It will not be like the Industrial Revolution, where new jobs were created, because AI will also do those new jobs better than we can. Overall control and power will keep concentrating in fewer and fewer hands. And, so far, we have not seen that level of “ultimate power” lead to “ultimate altruism”.

Absolutely brilliant, Hannah! Your ability to explain complex AI concepts with such clarity and thoughtfulness is unmatched. This video is a fantastic exploration of the power and challenges of AI. As we edge closer to AGI, it’s crucial to prioritize ethical development to ensure these technologies truly benefit humanity. We’ve launched an initiative to create a blockchain-powered, open-source ethical framework for AGI, focusing on transparency, accountability, and collaboration. It feels like this is a conversation that needs voices like yours. If you or anyone watching this shares these concerns, search for the AGI Ethics Initiative to learn more. Let’s work together to guide AI responsibly!

There’s no such thing as overestimating AI’s potential. Sure, it seems relatively innocuous now. But the lady in the video who criticizes the overestimation of AI is basically the equivalent of a gorilla scoffing at a human: “look how puny it is; how could it possibly threaten us?” The gorilla is not capable of imagining the power of human intelligence, and we are not capable of imagining the power of superhuman AI.

The captions that appear as the interviewees speak should give more than just their position at their university; they should also state their departments and, when they are senior enough to be professors in the British sense, their full titles. It might look a little untidy, but as it is, to this layperson at least, it’s difficult to tell where an interviewee is coming from and what disciplinary assumptions they bring to, say, a comment about the ethical implications of AI. Just a thought.
