Over the past few years, many artists have started to use what’s called “neural network software” to create works of art.
Users input existing images into the software, which has been programmed to analyze them, learn a specific aesthetic and spit out new images that artists can curate. By manipulating the inputs and parameters of these models, artists can produce a range of interesting and evocative images.
As an academic researcher, a developer of artistic technology and an amateur artist, I am always thrilled to see artists embrace new technology to create new forms of expression.
But, like previous groundbreaking art movements, neural network art raises difficult questions: How do we think of authorship and ownership when these artworks come from the contributions of so many different creative individuals and algorithms? How do we ensure that all the artists involved are treated fairly?
A movement is born
The vibrant neural network art world arose in the past few years, in part, from developments in computer science.
It began in 2015 with a program called DeepDream, which was developed accidentally by a Google engineer. He sought a way to visualize the workings of a neural network system designed to analyze images. To do this, he gave it an input photograph and asked it to increase the number of object parts detected in the image. The result was a panoply of weird and evocative images.
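The real DeepDream runs this process through a deep pretrained vision network; the core idea — adjust the *image*, by gradient ascent, so that the network's detections grow stronger — can be sketched with a toy one-layer "network" (all names and sizes here are illustrative, not Google's code):

```python
import random

# Toy sketch of the DeepDream idea: instead of training the network to
# match an image, nudge the image so the network's detections fire more
# strongly. The "network" here is one random linear layer; DeepDream does
# the same gradient ascent through a deep pretrained image classifier.

random.seed(0)

N_PIXELS, N_FEATURES = 16, 4
# Random "layer" weights W (N_FEATURES x N_PIXELS) and an input "image" x.
W = [[random.uniform(-1, 1) for _ in range(N_PIXELS)] for _ in range(N_FEATURES)]
x = [random.uniform(0, 1) for _ in range(N_PIXELS)]

def activations(x):
    """Feature detections a = W x."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def objective(x):
    """How strongly the detectors fire: sum of squared activations."""
    return sum(a * a for a in activations(x))

def dream_step(x, lr=0.01):
    """One gradient-ascent step on the image; grad = 2 * W^T (W x)."""
    a = activations(x)
    grad = [2 * sum(W[i][j] * a[i] for i in range(N_FEATURES))
            for j in range(N_PIXELS)]
    return [x_j + lr * g_j for x_j, g_j in zip(x, grad)]

before = objective(x)
for _ in range(20):
    x = dream_step(x)
after = objective(x)
print(after > before)  # the image now excites the detectors more strongly
```

Iterating this on a real photograph with a real network is what fills DeepDream images with ever-more eyes, dogs and pagodas: whatever the network weakly detects gets amplified.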
He shared his method online, and artists immediately began to experiment with it. The first gallery show of DeepDream art occurred less than a year later.
Because this software is all freely shared online, digital artists can experiment with these models, and then share their own results and modifications.
There’s an active creative community of neural network artists on Twitter who discuss the results of their experiments, along with the latest developments and controversies. Mainstream artists have also embraced these tools, with major shows and commissions going to Trevor Paglen, Refik Anadol and Jason Salavon.
Nonetheless, this open sharing challenges the ways we think about art. Christie’s sale of the image “Edmond de Belamy, from La Famille de Belamy” in November 2018 for nearly US$500,000 indicated that something was awry.
Why? To make this image, the artist group Obvious used the source code and data that another artist, Robbie Barrat, had shared freely on the web.
Obvious had every right to use Barrat’s code and claim authorship of the work. Nonetheless, many criticized Christie’s for elevating artists who played only a small part in the creation of the work. The episode was generally read as a failure of Christie’s, particularly the misleading way it promoted the work, rather than as a sign that the authorship of AI art needs rethinking.
The emergence of Ganbreeder
These issues really become unavoidable in Ganbreeder, a beguiling new website for creating images with neural networks.
Ganbreeder is an endless source of inspiring, intriguing, weird and fascinating imagery. Unlike DeepDream, whose images quickly become repetitive, Ganbreeder produces a range of original imagery so diverse that it seems no single human mind could ever match it.
Ganbreeder was launched last November by Joel Simon. Each Ganbreeder image is created with input parameters that you choose by modifying the parameters of other images on the site. The site stores the lineage of each image, so that you can see all who contributed to a final image.
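Ganbreeder’s actual code isn’t reproduced here, but the two ideas in this paragraph — deriving a new image by modifying or combining the parameters of existing ones, and recording every contributor in a lineage — can be sketched as follows (a hypothetical illustration; the class, method names and parameter sizes are invented for this example):

```python
import random

# Hypothetical sketch (not Ganbreeder's real implementation): each "image"
# is a parameter vector plus a record of the images it was derived from.

random.seed(1)

class Image:
    def __init__(self, params, parents=()):
        self.params = params           # input parameters that generate the image
        self.parents = tuple(parents)  # stored lineage links

    def mutate(self, amount=0.1):
        """Derive a child by slightly modifying this image's parameters."""
        child_params = [p + random.uniform(-amount, amount) for p in self.params]
        return Image(child_params, parents=(self,))

    def cross(self, other):
        """Derive a child by blending the parameters of two images."""
        child_params = [(a + b) / 2 for a, b in zip(self.params, other.params)]
        return Image(child_params, parents=(self, other))

    def lineage(self):
        """Every ancestor image that contributed to this one."""
        seen, stack = [], list(self.parents)
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.append(node)
                stack.extend(node.parents)
        return seen

root = Image([random.uniform(-1, 1) for _ in range(8)])
child = root.mutate()
grandchild = child.mutate()
print(len(grandchild.lineage()))  # 2: both earlier contributors are recorded
```

Because every derived image keeps pointers to its parents, the full chain of human choices behind any final image can be walked back to its origins, which is exactly what makes the authorship question concrete.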
If you like an image you’ve found or created, you can order a custom print on wood from an entrepreneur and artist named Danielle Baskin. She touches up the print with paint, but instead of signing it, she labels the back of the work with a QR code that points to the image’s unique lineage.
She does this because each image is the result of many people’s contributions, which makes it difficult to attach the name of any one sole artist to each new artwork.