The Real Faces of 54 Roman Emperors

October 6, 2020
  • Daniel Voshart, a designer in Toronto, used machine learning and Photoshop to transform busts of Roman emperors into photorealistic images.
  • Voshart wanted to present the emperors as they would have looked at the end of their reign, notwithstanding any diseases that could have altered their appearance.
  • Artists have been known to exaggerate the good looks of the ruling class, so these images are presumably more faithful to life.

    Artists have historically exaggerated how attractive rulers were in their portraits and sculptures. Queen Caroline of England said it best back in 1795, when she described the moment she first laid eyes upon her fiancé, the future King George IV: “I find him very fat, and by no means as beautiful as his portrait.”

    Now, a Toronto-based designer is correcting some of those creative liberties. Melding machine learning, Photoshop, and historical records, Daniel Voshart has transformed 54 Roman emperor busts from the Principate period (27 B.C. to 285 A.D.) into photorealistic images.

    Originally, Voshart undertook the work as a quarantine project of sorts. “I think it was the nature of the pandemic, which had me thinking about anything else, and maybe I was drawn to the morbid details of the emperors’ lives,” he tells Popular Mechanics. “I was working on a sci-fi show about 2,000 years in the future, so maybe I was drawn to thinking about the past.”

    But Voshart couldn’t have predicted that orders for his first-edition prints—featuring emperors like Augustus, Nero, and Decius—would blow up on his Etsy page. “I didn’t know the response would be such that I would actually [have to] reduce my job’s hours simply to meet demand,” he says.

    [Image: Voshart’s photorealistic print of all 54 Roman emperors. Credit: Daniel Voshart]

    The process wasn’t simply a matter of plugging photos of the busts into software that spits out a perfect human face, Voshart says. To generate rough drafts of each emperor’s face, Voshart relied heavily on a machine learning tool called Artbreeder, which uses generative adversarial networks, or GANs, to generate its images.

    If those sound familiar, it’s probably because you’ve heard of deepfakes, the synthetic media often put to nefarious use. GANs, the technology underlying deepfakes, let algorithms move beyond the simple task of classifying data and into the realm of creating it, in this case, images. A GAN pits two neural networks against each other: a generator that produces images and a discriminator that tries to tell the generated images apart from real ones. As the two compete, the generator’s output grows steadily more convincing. Starting from as little as a single image, a tried-and-tested GAN can create a video clip of, say, Richard Nixon.
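    To give a flavor of that tug-of-war, here is a minimal sketch in Python (using PyTorch) that trains a toy GAN on one-dimensional numbers instead of faces. Nothing here is Artbreeder’s or any deepfake tool’s actual code; it only illustrates the adversarial loop described above.

        import torch
        import torch.nn as nn

        latent_dim = 8

        # Generator: turns random noise into a fake "sample." The data here
        # are single numbers rather than images, to keep the sketch tiny.
        generator = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))

        # Discriminator: outputs the probability that a sample is real.
        discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

        g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
        d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
        bce = nn.BCELoss()

        for step in range(2000):
            real = torch.randn(64, 1) * 1.5 + 4.0        # "real" data: samples clustered near 4.0
            fake = generator(torch.randn(64, latent_dim))

            # The discriminator learns to label real samples 1 and fakes 0 ...
            d_loss = (bce(discriminator(real), torch.ones(64, 1)) +
                      bce(discriminator(fake.detach()), torch.zeros(64, 1)))
            d_opt.zero_grad(); d_loss.backward(); d_opt.step()

            # ... while the generator learns to make its fakes get labeled 1.
            g_loss = bce(discriminator(fake), torch.ones(64, 1))
            g_opt.zero_grad(); g_loss.backward(); g_opt.step()

        # After training, generated samples should cluster near the real mean.
        print(generator(torch.randn(1000, latent_dim)).mean().item())

    Swap the one-dimensional numbers for megapixel images and the two small networks for deep convolutional ones, and you have the basic recipe behind tools like StyleGAN.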

    While Voshart’s Roman emperors aren’t deepfakes, they share a similar technological framework; they’re just different applications of machine learning. Specifically, Artbreeder uses Nvidia’s StyleGAN, an open-source GAN architecture that the company’s researchers introduced in December 2018.

    In a September 30 virtual lecture on “GANs for Good,” Anima Anandkumar, the director of machine learning research at Nvidia, explained how the technology works. Using a technique called disentanglement learning, a GAN can separate individual style elements and control them in isolation, something human artists like Voshart still do far more naturally than machines.


    “Humans are great at this,” Anandkumar explained in the lecture. “We have different concepts we’ve learned as infants and we’ve done that in an unsupervised way, and this way, we can now compose, and make entirely new images or concepts.” In practice, that means a user has more control over which properties of a source image they’d like to use in the new one.
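    A toy sketch of what that control looks like in code, with hypothetical layer counts and dimensions loosely modeled on StyleGAN’s style-mixing trick (an illustration of the idea, not Artbreeder’s actual interface): StyleGAN injects a style vector into the generator at every layer, and because coarse layers govern pose and face shape while fine layers govern texture and color, feeding different layers from different latent codes composes properties from two different faces.

        import numpy as np

        # Hypothetical sizes: a 14-layer generator with 512-dimensional styles.
        num_layers, style_dim = 14, 512
        rng = np.random.default_rng(0)

        w_a = rng.standard_normal(style_dim)   # latent code for face A
        w_b = rng.standard_normal(style_dim)   # latent code for face B

        # Coarse layers (0-6) take their style from A, fine layers (7-13) from B,
        # yielding A's pose and head shape with B's skin texture and coloring.
        styles = [w_a if layer < 7 else w_b for layer in range(num_layers)]

        # A real StyleGAN generator would now render an image from `styles`;
        # here we just confirm which face controls which layers.
        print(["A" if s is w_a else "B" for s in styles])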


    Joel Simon, the developer who created Artbreeder, tells Popular Mechanics that it all comes down to the way that the program’s neural networks represent “space” in the images.

    “When an image is ‘uploaded,’ the face is cropped and then a search process is done to find the closest place in the space for that image,” he explains. “Once it’s in this ‘space,’ it’s easy to ‘move around’ by adding or subtracting numbers that correspond to values like the age or gender, here called the ‘genes.’ So by adding color, it does so in a very intelligent way, not just by editing pixels, but by moving through the space of all faces.”
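    Read as code, Simon’s description boils down to an optimization loop plus vector arithmetic. Here is a hedged Python sketch of those two steps, with a frozen random linear map standing in for the real StyleGAN generator and a random vector standing in for a learned “age” gene (both are placeholders, not Artbreeder’s actual components):

        import torch

        torch.manual_seed(0)
        latent_dim, image_dim = 512, 1024

        # Stand-in "generator": a fixed map from latent space to (flattened)
        # image space. Artbreeder's real generator is StyleGAN.
        G = torch.nn.Linear(latent_dim, image_dim)
        G.requires_grad_(False)

        target = torch.randn(image_dim)        # the "uploaded" face, flattened

        # Step 1: the search process. Optimize a latent vector until the
        # generator's output is as close as possible to the uploaded image.
        w = torch.zeros(latent_dim, requires_grad=True)
        opt = torch.optim.Adam([w], lr=0.05)
        for step in range(500):
            loss = torch.nn.functional.mse_loss(G(w), target)
            opt.zero_grad()
            loss.backward()
            opt.step()

        # Step 2: "move around" by adding a direction that corresponds to an
        # attribute, one of Artbreeder's "genes." This direction is a made-up
        # placeholder; real attribute directions are learned from labeled data.
        age_gene = torch.randn(latent_dim)
        older_face = G(w.detach() + 2.0 * age_gene)   # same face, nudged "older"

    The design point is that the edit happens to the latent vector, not to pixels: because every point in the space decodes to a plausible face, nudging the vector keeps the result face-like in a way that painting over pixels would not.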

    This makes it simpler for an artist like Voshart to upload source images, in his case about 800 photographs of Roman emperor busts, and come away with a hyperrealistic face that has fewer artifacts, the abnormalities a generator can introduce into its output.

    [Image: Voshart’s reconstruction of the emperor Elagabalus. Credit: Daniel Voshart]

    Still, Voshart had a significant amount of work on his hands, even after using the Artbreeder software. In his testing phase, before he produced the Roman emperor faces pictured in his final prints, the results came out riddled with abnormalities.

    “The result comes out with lots of strange artifacts and tends to morph features back to some kind of average face, which is the opposite of what you want when you want to maintain an interesting expression,” Voshart says. “My process was more download from Artbreeder, modify in Photoshop, and repeat the process by loading back into Artbreeder.”

    Even though it could be a headache to remove flaws in the generated images, Voshart says there’s “not a remote chance” he could have done the work without the power of machine learning.
