Google combines art and AI at the London Design Festival

Google’s Trafalgar Square lion eats words, roars poetry

What does artificial intelligence have to do with a 150-year-old lion statue in London’s Trafalgar Square, you might ask? Good question. But Google’s never been one to let cognitive dissonance stand in its way. Today, in partnership with British designer Es Devlin, the search giant took the wraps off “Please Feed the Lions,” an exhibit on display as part of the London Design Festival.

During the festival, curious visitors can “feed” the fifth lion in Trafalgar Square by typing a single word into a Google Pixelbook, which will expand it into a line of poetry projected inside the lion’s mouth. As more folks contribute, the poem will “evolve” and potentially wrap around both Nelson’s Column and the lion’s lower body. Words can also be submitted online, and the poems will then be published daily on the exhibit’s website. Following the festival, which ends September 23, Google will digitize the sculpture as part of an online exhibition on the Google Arts & Culture app.

According to Google, a neural network is responsible for the nouveau verses. Ross Goodwin, a self-described creative technologist in the Artist and Machine Intelligence program at Google’s Arts and Culture division, trained it on 25 million words of 19th-century poetry and programmed it specifically never to repeat a line, so as to guarantee a degree of randomness.
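
Google hasn’t published the installation’s code, but the no-repeat rule Goodwin describes is straightforward to picture. Below is a minimal, hypothetical Python sketch: generate_line is a toy stand-in for the trained network, and feed_the_lion simply resamples until it produces a line that hasn’t been shown before.

```python
import random

# Hypothetical sketch only: the real installation uses a neural network
# trained on 19th-century poetry. Here a toy word shuffler stands in for
# the model so the no-repetition logic is runnable on its own.
VOCAB = ["roar", "stone", "bronze", "square", "night", "crowd", "voice"]

def generate_line(seed_word: str) -> str:
    """Stand-in generator: expands a fed word into a candidate line."""
    return " ".join([seed_word] + random.sample(VOCAB, 4))

def feed_the_lion(seed_word: str, seen: set) -> str:
    """Sample candidate lines until one appears that has never been
    emitted before, mirroring the exhibit's no-repeat constraint."""
    while True:
        line = generate_line(seed_word)
        if line not in seen:
            seen.add(line)
            return line

if __name__ == "__main__":
    history = set()
    for word in ["courage", "fog", "empire"]:
        print(feed_the_lion(word, history))
```
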
According to Devlin, the idea for the installation came from a comment made by John Sorrell, chair of the London Design Festival. While the two were walking through Trafalgar Square during last year’s festival, Devlin recalls Sorrell telling her that Edwin Landseer, the artist who sculpted the lions, didn’t want them to look so passive, but Queen Victoria didn’t like the more animated stance he proposed. “The thought lodged in my mind,” Devlin said. “What if we could invest the lion with a diversely crowd-sourced collective poetic voice?”

Google’s public experiments with art and AI date back to 2015, when it released DeepDream, a neural net that generates trippy canvases from images. April 2017 saw the launch of Google’s AutoDraw, which taps an algorithm to transform doodles into clip art. And in January, the Google Arts & Culture app gained a feature that scans selfies for resemblances in a 70,000-strong library of paintings and portraits.
