PhotoGuard: from MIT to protect from deep fakes
DukeRem, 26 July 2023
While we have just released our weekly newsletter about deep fakes (please click here to access it, and consider following it if you are on LinkedIn), MIT has released a new tool that should help protect your images from being counterfeited. The technique, called PhotoGuard, alters select pixels in an image to confuse AI editing programs without affecting how a human perceives it. By subtly changing pixel values, PhotoGuard creates "perturbations" that throw off algorithmic image models. In the encoder attack method, these perturbations target the image's latent representation, essentially blinding the AI to what it sees. A more complex diffusion attack camouflages an image so that it appears different to AI algorithms. While not foolproof, PhotoGuard shows promise in defending against unauthorized image editing, though further research is needed to make it practical. A minimal sketch of the encoder attack follows the highlights below. Highlights:
  • PhotoGuard subtly changes pixel values in images to introduce "perturbations" that fool AI editors
• The perturbations blind algorithmic models by disrupting the image's latent representation
  • A diffusion attack method camouflages images to appear differently to AI systems
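Conceptually, the encoder attack amounts to projected gradient descent on the input pixels: nudge the image, within an imperceptible budget, until the encoder maps it to a useless latent. Below is a minimal sketch in PyTorch, assuming Stable Diffusion's VAE (loaded via Hugging Face's diffusers library) as the target encoder; the epsilon budget, step size, iteration count, and the all-zero target latent are illustrative assumptions on our part, not PhotoGuard's published settings.

```python
import torch
from diffusers import AutoencoderKL

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stable Diffusion's VAE stands in for the editor's image encoder (assumption).
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(device).eval()

def immunize(image: torch.Tensor, eps: float = 8 / 255,
             step: float = 1 / 255, iters: int = 40) -> torch.Tensor:
    """Return `image` plus an imperceptible L-inf perturbation that drags
    its latent representation toward a meaningless (all-zero) target.

    `image` is a (1, 3, H, W) tensor scaled to [-1, 1], on the same device
    as the VAE. Hyperparameters are illustrative, not PhotoGuard's.
    """
    with torch.no_grad():
        target = torch.zeros_like(vae.encode(image).latent_dist.mean)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        latent = vae.encode((image + delta).clamp(-1, 1)).latent_dist.mean
        loss = torch.nn.functional.mse_loss(latent, target)
        loss.backward()
        with torch.no_grad():
            # Projected gradient descent: step toward the target latent,
            # then clip the perturbation back into the eps ball.
            delta -= step * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (image + delta).clamp(-1, 1).detach()
```

An editor that encodes the immunized image now sees a near-empty latent, so inpainting or variation requests built on top of it produce garbage rather than a convincing edit; to a human viewer, the perturbed image looks unchanged.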