AI-generated art is a promising new frontier. For every vexing concern about copyright and the possibility of mass manipulation, the art it produces can also inspire genuine amazement. Consider this AI-powered project, which generates multicoloured visual landscapes for music as it is created.
The project, a collaboration between the eccentric synth and hardware company Teenage Engineering and the design studios Modem and Bureau Cool, is inspired by the neurological phenomenon of synesthesia. In synesthesia, stimulation of one sense triggers a response in another, so the brain receives sensory information across several senses rather than just one. A listener with synesthesia, for example, may see music rather than just hear it, perceiving colour, movement, and form in response to musical patterns. Likewise, a synesthete may taste shapes, feel words from a text, or hear an abstract picture.
The music source for the audiovisual experiment is Teenage Engineering's OP-Z sequencer, whose output is then converted into AI art. Modem and Bureau Cool's "digital extension" translates musical qualities into text prompts describing colours, shapes, and motion in real time. Stable Diffusion (an open-source model similar to DALL-E 2 and Midjourney) then processes these prompts to generate dreamy, synesthetic animations.
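The extension itself is not public, so the exact mapping is unknown. As a purely hypothetical sketch of the idea, here is how coarse musical qualities (tempo, average pitch, note density — all assumed feature names, not part of the real project) might be translated into a Stable Diffusion-style text prompt:

```python
def music_to_prompt(tempo_bpm: float, pitch_mean: int, note_density: float) -> str:
    """Hypothetical mapping from coarse musical features to a text prompt.

    tempo_bpm    -- beats per minute of the sequence
    pitch_mean   -- average MIDI note number (0-127)
    note_density -- fraction of sequencer steps that carry a note (0.0-1.0)
    """
    # Tempo drives the described motion.
    if tempo_bpm < 90:
        motion = "slow drifting motion"
    elif tempo_bpm < 140:
        motion = "steady flowing movement"
    else:
        motion = "rapid pulsing motion"

    # Average pitch drives the palette (low notes = dark, high notes = bright).
    colour = "deep blues and purples" if pitch_mean < 60 else "bright yellows and pinks"

    # Note density drives the shapes.
    shape = "sparse geometric forms" if note_density < 0.5 else "dense swirling patterns"

    return f"abstract landscape, {colour}, {shape}, {motion}, dreamlike"


# A slow, low, sparse sequence yields a calm, dark prompt:
print(music_to_prompt(tempo_bpm=72, pitch_mean=48, note_density=0.3))
# → abstract landscape, deep blues and purples, sparse geometric forms, slow drifting motion, dreamlike
```

In a real pipeline the resulting string would be fed to a text-to-image model each frame (or every few frames) to produce the animation; the thresholds and vocabulary above are invented for illustration only.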
If you're a musician who owns Teenage Engineering's OP-Z, you can't use the extension yet, although that may change. According to Van de Poel, the companies are "exploring the possibility of issuing a public version."
This AI-powered effort isn't the first to bring synesthetic experiences to the general public. Last year, Google Arts & Culture launched an exhibition that turned the premise on its head, setting machine-learning-generated music to Vassily Kandinsky's paintings.