Singer/songwriter Andrew Paley released his new album “Scattered Light” in October and has now followed it with an ingenious video, which Diffus magazine described as “amazing”, “impressive” and “a must have for all music and movie nerds”. In addition, a split 7″ with the folk-punk band Days N Daze is being released on SBÄM / Flail Records. It is the band’s first new output since their last full-length, “Show Me The Blueprints”, entered the Billboard Top 200 via Fat Wreck Chords.
For the release of his new album “Scattered Light”, singer/songwriter Andrew Paley has created a generative art project with the help of an artificial intelligence system he built himself, which serves as the new video for his song “Sequels”. The video works both as a reminiscence of the film classics of the 80s and as a critique of the Trump era. The result is impressive, and exciting not only for film nerds.
Andrew Paley has also announced a new split 7″ with the Fat Wreck folk-punk band Days N Daze, to be released soon via SBÄM and Flail Records. Days N Daze have covered his song “Caroline”; Andrew Paley’s original is also featured on the vinyl EP. Pre-order HERE.
How did the idea for the “Sequels” video develop?
I’m working on a PhD in artificial intelligence, but my research is in a different space (conversational systems, natural language processing, democratization of information access). That said, over the past couple of years, I’ve been working on a set of independent research spikes in the space of generative art. It’s been a fun way to explore the edge of the possible and come to understand a space that’s outside my regular work, and it’s also given me a way to merge the research and music halves of my life. Earlier in the year, I created a couple of videos for other songs (“Give Up” and “Stay Safe”) based on getting GANs to dream up some otherworldly imagery and sync it to music, but I wanted to take this video in a different direction.
I’ve had a longstanding interest in and concern about the space of deepfakes and what they’ll do to our relationship to media and information. Somewhere in there, I decided one way to get a handle on what to make of them would be to actually get hands-on with what’s possible, and, conceptually, I thought that mining the movies I loved growing up would make for some great fodder (though I admittedly underestimated just how much editing would be involved). So when I read about the Wav2Lip work a couple of months back, everything just lined up.
How were the scenes selected?
There are different ideas that I explore throughout the song — thematic connections, little callbacks, conceptual evolutions across scenes/movies — and they tie in various ways to the themes of the song itself. Collectively, it’s also in some way me taking snapshots from an earlier era and pulling them forward into the chaos of now. It’s an experiment in leveraging the shared language of cultural touchstones and reimagining them as raw material — in repurposing the familiar as a means of going a few steps down into some new form of uncanny valley in this budding era of synthetic media. It left me all the more convinced that the next five to ten years are going to be pretty wild (and potentially dangerous) as this space evolves. So, I guess this video is both a reminder of where we’ve been and also a hint of where we might be headed.
Give us some technical background for this unique creation.
Andrew: First, the slightly technical background, though a simplified version: I leveraged a model that had been trained in a generative adversarial network (GAN) architecture. It’s a now-common approach to training generative models that involves two competing models — one that creates (the generator) and one that judges the output of the generator as passable or not (the discriminator). These two models play off each other until the generator is capable of reliably creating “passable” work. More specifically, there has been much research into lip re-syncing in videos over the past few years, but previous techniques either required the model be trained on the face it’s trying to re-animate and/or didn’t do very well with arbitrary moving images. Recent research — as in the past few months — has yielded a model that actually can pull off re-animating arbitrary faces in videos pretty well, and it’s called Wav2Lip. Reading about it piqued my curiosity, so I thought I’d see what I could do with it.
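The generator/discriminator tug-of-war Andrew describes can be sketched in a few lines. The following is a deliberately minimal, one-dimensional toy illustration of adversarial training, not the Wav2Lip model itself: a one-parameter “generator” learns to shift its output toward the mean of some “real” scalar data, while a logistic-regression “discriminator” tries to tell real from fake. All numbers (the real-data mean of 3.0, the learning rate, batch size) are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data: scalar samples clustered around 3.0 (a stand-in for real images)
    return rng.normal(3.0, 0.5, n)

b_g = 0.0            # generator: G(z) = 0.5 * z + b_g  (only the offset is learned)
w_d, b_d = 0.0, 0.0  # discriminator: D(x) = sigmoid(w_d * x + b_d)
lr = 0.05

for step in range(2000):
    z = rng.normal(0.0, 1.0, 64)
    x_fake = 0.5 * z + b_g
    x_real = real_batch(64)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    g_real = sigmoid(w_d * x_real + b_d) - 1.0   # d(-log D(x)) / d logit
    g_fake = sigmoid(w_d * x_fake + b_d)         # d(-log(1 - D(x))) / d logit
    w_d -= lr * np.mean(g_real * x_real + g_fake * x_fake)
    b_d -= lr * np.mean(g_real + g_fake)

    # Generator step (non-saturating loss): push D(fake) toward 1
    g_l = sigmoid(w_d * x_fake + b_d) - 1.0
    b_g -= lr * np.mean(g_l * w_d)

print(f"learned offset: {b_g:.2f}")  # drifts toward the real-data mean of 3.0
```

Once the generator’s samples overlap the real data, the discriminator can no longer separate them, its gradients cancel out, and training settles — the same equilibrium idea that, scaled up enormously, lets models like Wav2Lip produce “passable” output.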
So, the inputs to the model are an audio file (the track of my singing from the song) and a video file (the face of a person that I’d like to reanimate to appear as though they were singing the words), and the output is the re-render of their face synced to my singing. To get the video as it is, I mined movies from the late 80s through early 00s that left a mark on me growing up, and pieced slices of scenes together to create a visual that matched the song — and then I went slice by slice reprocessing the faces to match the lyrics. There was an enormous amount of experimentation — lots of videos/inputs required tweaking and some just simply didn’t work for various reasons — but it was a really interesting foray into a pretty fascinating space.
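In practice, that per-slice workflow boils down to one model invocation per clip. As a hedged sketch, the helper below builds the command line for the public Wav2Lip reference implementation — the flag names are taken from that repository’s README at the time of writing and may change between versions, and every file path here is a placeholder:

```python
import subprocess

def wav2lip_command(face_video: str, vocal_track: str, outfile: str) -> list[str]:
    """Build the command that re-animates one movie slice with Wav2Lip.

    Assumes the public Wav2Lip repo is checked out alongside this script and a
    pretrained checkpoint has been downloaded; paths are placeholders.
    """
    return [
        "python", "inference.py",
        "--checkpoint_path", "checkpoints/wav2lip_gan.pth",  # pretrained model
        "--face", face_video,    # slice of a movie scene to re-animate
        "--audio", vocal_track,  # the sung vocal the lips should follow
        "--outfile", outfile,    # re-rendered, lip-synced slice
    ]

# Each slice would be processed like this, then cut back together in the edit
# (commented out because it needs the Wav2Lip checkout and a checkpoint):
# subprocess.run(wav2lip_command("scene_01.mp4", "vocal.wav", "synced_01.mp4"), check=True)
```

As Andrew notes, not every input survives this step — occluded or extreme-angle faces tend to fail and have to be tweaked or swapped out before the final cut.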
Andrew Paley once described his songwriting as the pursuit of “open spaces, sharp edges and bright colors.” He was referring to his work with the critically acclaimed post-punk band The Static Age, but it’s clear that he has continued to explore that aesthetic in his career as a solo artist.
After his 2016 solo effort Sirens (Paper+Plastick / Make My Day Records), Andrew’s next album, Scattered Light, arrives in October 2020.
And for this latest work, Paley is pulling from both sides of his brain. On one side, he’s a genre-defying songwriter, producer and performer with multiple albums and a show history that spans the globe – from Germany to Japan, Brazil to Russia – and on the other, he’s a PhD Candidate in Computer Science at Northwestern University who works primarily in the space of democratizing access to information and data analysis through AI, but also spends time exploring human-machine collaboration in pursuit of generative art.
Against that backdrop, Paley has spent the past three years assembling an album that lives up to the name Scattered Light, writing and recording both in his studio in Chicago and in borrowed spaces while on tours in support of his last release. Throughout the songs, he mixes the personal and political in an exploration of the dualities of American life in the Trump era. As he puts it, it’s a mixture of “hope and defeat, hyper-connectedness and isolation, inspiration and distraction, constant information and background noise, the can’t-look-away chaos of the world outside and the wear of its blinding light.” And then, pulling from aspects of his other life, he’s collaborating with generative AI to create a full visual accompaniment to the album, two videos of which have already been released.
Sonically, he frames those explorations against a wide-open palette, shifting in and out of angular guitars, atmospheric reverbs, driving backbeats, dayglow synths and, yes, even the occasional acoustic instrument. On these songs, Paley pulls from all over – his work with post-punk band The Static Age, the more stripped-down space of his prior solo endeavors, and the spirit of his recent collaborations, such as the song “Let Go” with producer StayLoose (garnering well over a million streams and counting on Spotify) – and takes it all with him as he explores new terrain.