On the 3:13 pm train out of San Jose on a recent Friday, I hunched over a MacBook, brow furrowed. Hundreds of miles north in a Google datacenter in Oregon, a virtual computer sprang to life. I was soon looking at the yawning blackness of a Linux command line—my new AI art studio.
Some hours of Googling, mistyped commands, and muttered curses later, I was cranking out eerie portraits.
Robbie Barrat doesn’t have formal qualifications in programming either, but he’s become an accomplished AI artist, and shares code and ideas on Github. I decided to try them after talking with Barrat in the course of writing about self-taught AI experts in the December issue of WIRED, and learning that a Parisian art collective called Obvious used his recipes and code to create a work that sold at Christie’s for $432,500.
Barrat makes art using artificial neural networks, webs of math that have spawned the recent AI boom by enabling projects like self-driving cars and automated cancer detection. They can learn to do useful or artistic things by processing large volumes of example data, such as photos. Barrat enabled Obvious’s payday at Christie’s, and my own explorations, by sharing the code and instructions to train image-generating networks with images collected from the giant art encyclopedia WikiArt.
Training neural networks is notoriously computationally demanding. It’s why graphics chip maker Nvidia has seen its stock appreciate more than 10-fold in the past five years, and Google has begun to design its own chips for machine learning. Not having a graphics processor—or $2,000 to spare for one—I used the $300 of credits Google offers new users of its cloud computing service to boot up a virtual computer that did. I picked one pre-configured with machine learning software. Because Barrat’s project is now more than a year old, I also had to install a machine learning tool called Torch, used by researchers at companies including Facebook and IBM, that has since been overshadowed by newer packages.
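For the curious, Torch’s own documentation boils the install down to a short sequence of shell commands; on a fresh Linux virtual machine it looks roughly like this (a sketch following the official instructions at torch.ch, which may take a while to run):

```shell
# Clone the Torch distribution and run its installer.
git clone https://github.com/torch/distro.git ~/torch --recursive
cd ~/torch && bash install-deps
./install.sh
# Pick up the new environment so the `th` interpreter is on the PATH.
source ~/.bashrc
```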
My first experiment involved a neural network Barrat had trained on thousands of portraits from more than a century of art history. Once I’d gotten the supporting software working, I could type a few dozen characters and spit out grids of weird portraits—some of them similar to the one that Obvious sold for almost half a million dollars. Barrat’s networks natively produce only small images. I tried enlarging one of my portraits with a machine-learning-powered service called Let's Enhance, which Barrat says a member of Obvious told him the group used as part of its workflow.
Next I dug into the documentation to see what other tricks Barrat’s pre-trained portrait generator might do. I made the images at the top of this article by asking it to produce larger images. The clumps of distorted heads and figures are the result of a neural network that learned to produce structures of a certain size, trying to fill a space larger than it was trained on.
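Barrat’s project builds on the dcgan.torch codebase, whose generation script is driven by environment variables; asking for a larger canvas is a matter of raising the `imsize` setting. A sketch of the sort of invocation involved (the checkpoint filename here is hypothetical):

```shell
# Generate a grid of samples from a pre-trained generator checkpoint.
# imsize > 1 asks the fully-convolutional generator to fill a canvas
# larger than it was trained on, producing the tiled, distorted clumps.
net=portrait_generator.t7 imsize=3 noisemode=random th generate.lua
```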
Emboldened, I moved on to training image-generating neural networks of my own, again using Barrat’s instructions. The “scraper” he developed to pull images from WikiArt can be directed to collect images in many different styles and genres, such as cityscapes, or pointillism. Barrat had covered portraits, nudes, and landscapes. I plumped for marine art, and used the script to collect just over 2,000 images. I then doubled my haul by using an image-editing tool to create a mirrored copy of each one. This trick works because of a shortcoming of neural networks: They don’t natively perceive visual similarities obvious to people, like two photos being mirror images.
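The augmentation step itself is trivial: a horizontal mirror just reverses each row of pixels. A minimal pure-Python sketch of the idea, with an image represented as rows of pixel values (in practice you’d run an image library such as Pillow’s `ImageOps.mirror` over the actual files):

```python
def mirror_horizontal(image):
    """Return a left-right mirrored copy of an image,
    represented here as a list of rows of pixel values."""
    return [list(reversed(row)) for row in image]

# Doubling a toy dataset with mirrored copies, as described above.
dataset = [[[1, 2, 3],
            [4, 5, 6]]]
dataset += [mirror_horizontal(img) for img in dataset]
print(len(dataset))   # 2
print(dataset[1])     # [[3, 2, 1], [6, 5, 4]]
```

To a person the two copies are obviously the same picture; to the network they are fresh training examples.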
Training the network gave me new appreciation of grumbles I’ve heard in the course of reporting on machine learning projects. For one, there are elements of luck and craft to finding the right settings to get good results for a particular network on a given dataset—it’s one reason Google is trying to automate that process. I embarked on a trial-and-error process similar to—but much less informed than—those Barrat and the AI artist Mario Klingemann have told me they use, training networks over and over with small differences and trying to move towards the most promising results.
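That trial-and-error loop amounts to an informal random search over training settings. A heavily simplified sketch of the pattern, with a dummy scoring function standing in for an hours-long training run (the hyperparameter names and ranges here are illustrative assumptions, not Barrat’s actual settings):

```python
import random

def train_and_score(learning_rate, batch_size):
    """Stand-in for a full training run; in reality this takes hours
    and the 'score' is a human eyeballing the generated images."""
    # Hypothetical: pretend quality peaks near lr=2e-4, batch=64.
    return -abs(learning_rate - 2e-4) - abs(batch_size - 64) / 1000

best_score, best_settings = float("-inf"), None
for _ in range(20):
    settings = {
        "learning_rate": 10 ** random.uniform(-5, -2),
        "batch_size": random.choice([16, 32, 64, 128]),
    }
    score = train_and_score(**settings)
    if score > best_score:
        best_score, best_settings = score, settings

print(best_settings)
```

Automated hyperparameter tuning services do essentially this, only more cleverly and at much larger scale.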
With access to just a single Nvidia graphics chip, training the neural networks took hours each time. It reminded me why tech companies spend heavily on hardware to accelerate their teams’ experiments, and are developing their own AI chips. One Facebook project that trained image recognition algorithms on billions of Instagram photos occupied 336 graphics processors for more than three weeks solid.
My own experiments spanned only a few days. But after a handful of duds that “painted” only blotchy glitches, I trained networks that could produce recognizable oceans, and even ghostly sailing ships. Sensing I was close to making them even better, I cued up a marathon training session—and accidentally crippled my virtual studio.
While waiting for my latest and greatest neural network to finish its education, I discovered a Github page from artist Alex Champandard offering code to use machine learning to scale up images. In trying to make it work, I broke a piece of the software infrastructure needed to support my virtual machine’s GPU. With my deadline approaching, there was no time to reinstall everything from scratch.
When I spoke to Barrat, he was encouraging about my scrappy art project, saying it was the kind of exploration he hoped his code and tutorial could enable. “My goal was people would use it like you’re doing to play around, and then maybe go on and do more stuff,” he said. He added that he liked the weird assemblages created by pushing his portrait network out of its comfort zone, something he hadn’t tried much himself. “You should go sell those for $400,000,” he joked.