Thanks for the mention, Clive!
It's amazing how these models can map language to beautiful visual imagery. I'm sure better models are already out there (Dream most likely uses a deprecated form of CLIP+GAN). Right now, CLIP + diffusion models are SOTA (although I don't think there's even a Colab notebook yet; Katherine Crowson is working on it).
I think AI has problems with metaphors for the same reason it has problems with analogies. Both require, at a minimum, contact with culture: a capacity to go beyond syntax and semantics to pragmatics, the contextual nature of language.
I'd argue no one can understand metaphors just by reading them. They belong to the realm of tacit knowledge. A metaphor can't simply be explained. The agent trying to understand it (whether a person or an AI) has to have some previous knowledge of what the metaphor is about to have a chance at getting it, be it real experiences, feelings, sensations, or something else.
AIs don't belong to our world, but to limited virtual worlds that, for now, can't give them the ability to feel and experience as we do. Until then, we can keep enjoying metaphors knowing they're still uniquely human!
Just some food for thought!
That was a great read. (I actually had a similar idea and used Dream to paint the 10 most beautiful English words!)