DragGan: redefining visual content control | Turtles AI

DukeRem
  With total flexibility and control in mind, computer science researchers have developed an innovative new technique for manipulating images produced by Generative Adversarial Networks (GANs). The new DragGAN tool lets users drag any points on a GAN-generated image to move pixels precisely to desired locations. Through DragGAN, users can synthesize realistic visual content with exact control over the poses, shapes, expressions, and layouts of diverse object categories such as animals, human faces, landscapes, and vehicles.

  Unlike most existing controllable GAN methods, which depend on labelled data or additional 3D models, DragGAN achieves its flexibility, precision, and generality through a simple point-based interaction paradigm. Using only mouse clicks, users mark "handle points" on an image along with the target positions they wish to drag them to. DragGAN combines feature-based motion supervision, which guides each handle point toward its target location, with a point-tracking method that keeps relocating the handle point as the image is manipulated. Once the points are set, an iterative optimization process moves each handle point closer to its intended target position, frequently making subtle adjustments at each iteration. You can watch the following video for a visual explanation:
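To make the alternation between motion supervision and point tracking concrete, here is a minimal, hypothetical sketch of that loop in plain NumPy. It is not DragGAN's actual implementation (which optimizes the generator's latent code against a feature-based loss on StyleGAN2 feature maps); the `motion_supervision_step`, `track_point`, and `drag` names, the small step size, and the search `radius` are all illustrative assumptions, with a coordinate nudge standing in for the latent-space optimization.

```python
import numpy as np

def motion_supervision_step(handle, target, step=1.0):
    # Toy stand-in for motion supervision: nudge the handle point a small
    # step toward the target (DragGAN instead optimizes the latent code so
    # that features near the handle move toward the target).
    handle = np.asarray(handle, dtype=float)
    direction = np.asarray(target, dtype=float) - handle
    dist = np.linalg.norm(direction)
    if dist <= step:
        return np.asarray(target, dtype=float)
    return handle + step * direction / dist

def track_point(features, template, guess, radius=2):
    # Point tracking: search a small window around the current guess for
    # the pixel whose feature vector best matches the handle's original
    # feature template (nearest neighbour in feature space).
    h, w, _ = features.shape
    best, best_dist = tuple(int(v) for v in guess), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = int(guess[0]) + dy, int(guess[1]) + dx
            if 0 <= y < h and 0 <= x < w:
                d = np.linalg.norm(features[y, x] - template)
                if d < best_dist:
                    best, best_dist = (y, x), d
    return best

def drag(handle, target, get_features, max_iters=100):
    # Alternate the two steps until the handle reaches the target.
    # `get_features()` re-extracts the feature map each iteration, since in
    # the real method the generated image changes after every update.
    features = get_features()
    template = features[handle]           # feature of the initial handle point
    p = np.asarray(handle, dtype=float)
    for _ in range(max_iters):
        if np.allclose(p, target):
            break
        p = motion_supervision_step(p, target)
        features = get_features()
        p = np.asarray(track_point(features, template, p), dtype=float)
    return tuple(int(v) for v in p)
```

In the paper's formulation, both steps operate on intermediate feature maps of the generator because those features are discriminative enough to localize the handle point without an external tracking model; the toy window search above mimics that nearest-neighbour lookup on a static feature array.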