NVIDIA's research team has made life easier for novice painters with a new deep learning tool: GauGAN, which turns rough doodles sketched in a Microsoft Paint-like app into “photorealistic masterpieces.” GauGAN converts ordinary lines and patches into lifelike images using GANs, or generative adversarial networks, the well-known class of machine learning systems introduced in 2014 by Ian Goodfellow.

GauGAN, which can be used to draw a landscape with all manner of real-life features, could be a powerful tool for architects, urban planners, game developers, landscape designers, and anyone else creating virtual worlds. These professionals can use the tool, which has learned what the real world looks like, to make rapid changes to a synthetic scene and prototype ideas faster.

Brainstorming designs is much easier with simple sketches, and GauGAN can convert those rough doodles into highly realistic images, said Bryan Catanzaro, NVIDIA's vice president of applied deep learning research. He likened the technology to a “smart paintbrush” anyone can use to fill in the high-level outlines of a rough segmentation map.

How GauGAN creates lifelike landscapes from doodles

You can draw and manipulate your own segmentation maps, labeling each segment with a feature such as snow, sea, sky, or sand.

GauGAN is built with the deep learning framework PyTorch and fills in the landscape with show-stopping results. Draw a line with a circle on top, for instance, and if you select the “tree” label the system generates a custom tree. To include features like grass or snow, simply draw a segment and select the appropriate label. You can also swap a segment's label, say from “grass” to “snow,” and GauGAN will automatically turn the scene wintry, rendering the previously leafy tree barren.
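As a rough mental model (not NVIDIA's actual code), a segmentation map is just a grid of per-pixel labels, and the grass-to-snow swap is a relabeling pass over that grid. The label names and helper functions below are purely illustrative:

```python
# Toy sketch: a segmentation map as a 2D grid of label strings.
# All names (make_map, paint, swap_label) are hypothetical.

def make_map(rows, cols, fill="grass"):
    """Create a rows x cols segmentation map filled with one label."""
    return [[fill for _ in range(cols)] for _ in range(rows)]

def paint(seg_map, r0, r1, c0, c1, label):
    """Label a rectangular segment, as a user would by drawing a patch."""
    for r in range(r0, r1):
        for c in range(c0, c1):
            seg_map[r][c] = label

def swap_label(seg_map, old, new):
    """Swap every pixel of one label for another, e.g. grass -> snow."""
    return [[new if px == old else px for px in row] for row in seg_map]

seg = make_map(4, 6)               # a small all-grass scene
paint(seg, 0, 2, 0, 6, "sky")      # top half becomes sky
winter = swap_label(seg, "grass", "snow")   # the one-click season change
```

In the real system, the network then renders this label grid into a photorealistic image; the swap simply hands it a different grid.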

“It’s like a coloring book picture that describes where the sky is, where the sun is, where a tree is,” said Catanzaro. The neural network can fill in every detail and texture, including shadows and reflections, depending on “what it has learned about real images,” he added.

GANs are designed to produce convincing results

Without any explicit understanding of the real world, a GAN can produce convincing results through its two networks: a generator and a discriminator. The discriminator quality-checks the generator's output; through pixel-by-pixel feedback, it trains the generator to make its synthetic images look more real.

By training on a million real images, the discriminator learns, for example, that real lakes and ponds have reflections, and the generator in turn learns to create convincing replicas.
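The feedback loop described above can be caricatured in a few lines. This is a toy illustration, not a real GAN: here the “generator” is just a number being nudged toward a target statistic, and the “discriminator” is a fixed scoring function rather than a trained network. All names and values are made up:

```python
# Toy sketch of the adversarial feedback loop (not a real GAN).

REAL = 0.75          # stand-in for some statistic of real images

def discriminator(sample):
    """Score how far a sample is from what 'real' data looks like."""
    return REAL - sample          # signed error acts as feedback

def train(rounds=50, lr=0.2):
    fake = 0.0                    # generator's initial, unconvincing output
    for _ in range(rounds):
        feedback = discriminator(fake)
        fake += lr * feedback     # generator adjusts toward realism
    return fake

print(round(train(), 3))          # converges toward REAL
```

In an actual GAN both sides are deep networks trained jointly, and the discriminator's "score" comes from its learned judgment of realism rather than a known answer.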

The GauGAN app also lets you change a sunset scene to daytime, alter the style of a generated image, and apply style filters. Although it focuses on nature elements like sky, sea, and land, the app can also fill in other landscape features such as people, roads, and buildings.

NVIDIA has released an online GauGAN demo so you can try it for yourself.