MIT’s ‘PhotoGuard’ protects your images from malicious AI edits

DALL-E and Stable Diffusion were only the beginning. As generative AI systems proliferate and companies work to differentiate their offerings from those of their competitors, AI-powered image editing and creation tools are spreading across the internet, led by the likes of Shutterstock and Adobe. But with those new capabilities come familiar pitfalls, such as unauthorized manipulation or outright theft of existing online artwork and images. Watermarking techniques can help mitigate the latter, while the new "PhotoGuard" technique developed by MIT CSAIL could help prevent the former.

PhotoGuard works by altering select pixels in an image so that they disrupt an AI's ability to understand what the image is. Those "perturbations," as the research team refers to them, are invisible to the human eye but easily read by machines. The "encoder" attack method of introducing these artifacts targets the algorithmic model's latent representation of the image (the complex mathematics describing the position and color of every pixel), effectively preventing the AI from understanding what it is looking at.
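As a rough illustration of the idea, the sketch below runs a projected-gradient-style optimization that nudges an image's pixels, within an invisibly small budget, until an encoder's latent representation of it matches that of a plain gray image. The tiny `encoder`, the `eps` and `step_size` budgets, and the gray decoy are all assumptions for illustration, not the PhotoGuard authors' actual code or settings.

```python
import torch
import torch.nn as nn

# Stand-in encoder for illustration only; in practice this would be the
# generative model's image encoder (e.g. a latent-diffusion VAE encoder).
encoder = nn.Sequential(nn.Conv2d(3, 8, kernel_size=4, stride=4), nn.Flatten())

def encoder_attack(image, target, eps=8 / 255, steps=100, step_size=1 / 255):
    """Find an imperceptible perturbation (|delta| <= eps per pixel) whose
    latent representation matches the target's, i.e. the 'gray image' trick."""
    with torch.no_grad():
        target_latent = encoder(target)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        latent = encoder((image + delta).clamp(0, 1))
        loss = (latent - target_latent).pow(2).mean()
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()  # step toward the decoy latent
            delta.clamp_(-eps, eps)                 # keep the change invisible
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

image = torch.rand(1, 3, 64, 64)    # the photo to "immunize"
gray = torch.full_like(image, 0.5)  # decoy: a flat gray image
immunized = encoder_attack(image, gray)
```

To a person, `immunized` looks identical to `image`; to the encoder, it reads as the gray decoy.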

A more advanced, and more computationally intensive, "diffusion" attack method camouflages one image as another in the eyes of the AI. It defines a target image and optimizes the perturbations in the protected image so that the model treats it as that target. Any edits an AI tries to make to these "immunized" images are effectively applied to the fake "target" image instead, producing an unrealistic-looking result.
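Under the same illustrative assumptions, the diffusion attack can be sketched as follows: instead of matching latents, the optimization backpropagates through the whole editing pipeline so that whatever the model produces resembles the decoy target. The `edit_fn` below is a hypothetical differentiable stand-in for a prompt-conditioned diffusion editor; differentiating through the real model's many denoising steps is what makes this attack so much more expensive.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a full diffusion editing pipeline; the real
# attack differentiates through every denoising step of the model.
edit_fn = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1))

def diffusion_attack(image, target, eps=8 / 255, steps=50, step_size=1 / 255):
    """Optimize the perturbation so the *edited output* matches the decoy
    target, so edits applied to the immunized image come out unrealistic."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        edited = edit_fn((image + delta).clamp(0, 1))  # simulate the AI's edit
        loss = (edited - target).pow(2).mean()         # pull output toward decoy
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```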

"The encoder attack tricks the model into thinking that the input image (to be edited) is some other image (for example, a gray image)," Hadi Salman, an MIT doctoral student and the paper's lead author, told Engadget. The diffusion attack, by contrast, steers the model's edits toward the decoy target image.

"A collaborative approach involving model developers, social media platforms and policymakers presents a strong defense against unauthorized image manipulation. Acting on this important issue is paramount today," Salman said in a statement. "And while I am happy to contribute to this solution, putting this protection into practice requires a lot of work. Companies developing these models need to invest in engineering robust immunizations against the threats posed by these AI tools."
