AI Doesn’t Understand Much—But Maybe A Little

Sam Brinson
9 min read · Jun 20, 2022

In just the last few months, we've seen a number of developments in artificial intelligence make headlines:

  • OpenAI’s DALL·E 2 can generate incredible images from simple text prompts.
  • Google’s PaLM appears to reason and solve problems.
  • DeepMind’s Gato tackles a range of tasks and could be an example of general AI.

Each has added fuel to the growing number of debates around the current and near-future capabilities of AI.

Some are singing the praises of deep learning (the layered neural network approach on which most modern AI is based), claiming we only need to scale these models up to reach the heights of superintelligent AI. Others argue there is still something fundamentally missing.

Then there are those like Google engineer Blake Lemoine, who have become so enamoured with what’s happening that they wonder whether AI is already sentient.

The new programs are good enough that they’re certain to change how people create and interact with computers. But do DALL·E 2, PaLM, Gato, or any other current manifestations of AI know what they’re doing?

What Does it Mean to Understand?
