    Using AI to protect against AI image manipulation

    As we enter a new era in which technologies powered by artificial intelligence can craft and manipulate images with a precision that blurs the line between reality and fabrication, the threat of misuse looms large. Advanced generative models such as DALL-E and Midjourney, celebrated for their impressive fidelity and user-friendly interfaces, have recently made producing hyper-realistic images relatively straightforward. With the barriers to entry lowered, even inexperienced users can generate and manipulate high-quality images from simple text descriptions, ranging from innocent alterations to malicious modifications. Techniques like watermarking offer a promising solution, but misuse calls for a preemptive (as opposed to only post hoc) measure.

    In the quest to create such a measure, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) developed “PhotoGuard,” a technique that uses perturbations (minuscule alterations in pixel values, invisible to the human eye but detectable by computer models) to disrupt a model’s ability to manipulate an image.

    PhotoGuard uses two different “attack” methods to generate these perturbations. The simpler “encoder” attack targets the image’s latent representation within the AI model, causing the model to perceive the image as a random entity. The more sophisticated “diffusion” attack defines a target image and optimizes the perturbations to make the final image resemble that target as closely as possible.

    “Consider the possibility of fraudulent propagation of fake catastrophic events, like an explosion at a significant landmark. This deception can manipulate market trends and public sentiment, but the risks are not limited to the public sphere. Personal images can be inappropriately altered and used for blackmail, resulting in significant financial implications when executed on a large scale,” says Hadi Salman, an MIT graduate student in electrical engineering and computer science (EECS), MIT CSAIL affiliate, and lead author of a new paper about PhotoGuard.

    “In more extreme scenarios, these models could simulate voices and images for staging false crimes, inflicting psychological distress and financial loss. The swift nature of these actions compounds the problem. Even when the deception is eventually uncovered, the damage — whether reputational, emotional, or financial — has often already happened. This is a reality for victims at all levels, from individuals bullied at school to society-wide manipulation.”

    PhotoGuard in practice

    AI models view an image differently from how humans do. A model sees an image as a complex set of mathematical data points describing every pixel’s color and position: this is the image’s latent representation. The encoder attack introduces minor adjustments to this mathematical representation, causing the AI model to perceive the image as a random entity. As a result, any attempt to manipulate the image using the model becomes nearly impossible. The changes introduced are so minute that they are invisible to the human eye, preserving the image’s visual integrity while ensuring its protection.
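
    To make the encoder attack concrete, here is a minimal sketch in PyTorch, assuming a latent diffusion setup (as in Stable Diffusion) whose VAE encoder maps images to latents. The function, hyperparameters, and noise target are illustrative, not PhotoGuard’s actual implementation.

    import torch

    def encoder_attack(image, encoder, eps=0.06, step_size=0.01, n_steps=100):
        # image: (1, 3, H, W) tensor in [0, 1]; encoder: a differentiable map
        # from images to latents. eps bounds the perturbation so it stays
        # invisible to the human eye.
        target_latent = encoder(torch.rand_like(image)).detach()  # latent of pure noise
        delta = torch.zeros_like(image, requires_grad=True)
        for _ in range(n_steps):
            # Pull the immunized image's latent toward the noise latent, so
            # the model "sees" the image as a random entity.
            loss = torch.nn.functional.mse_loss(encoder(image + delta), target_latent)
            loss.backward()
            with torch.no_grad():
                delta -= step_size * delta.grad.sign()  # signed gradient descent step
                delta.clamp_(-eps, eps)                 # stay within the invisibility budget
                delta.grad.zero_()
        return (image + delta).clamp(0, 1).detach()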

    The second, decidedly more intricate “diffusion” attack strategically targets the entire diffusion model end-to-end. It involves choosing a desired target image and then running an optimization process that closely aligns the generated image with this preselected target.

    In the implementation, the team crafted the perturbations within the input space of the original image. These perturbations are then used during the inference stage and applied to the images, offering a robust defense against unauthorized manipulation.
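
    A conceptual sketch of that end-to-end optimization follows, under the same illustrative assumptions as the encoder sketch above. Here edit_fn stands in for a fully differentiable diffusion editing pipeline; backpropagating through every denoising step it runs is what makes this attack so memory-hungry.

    import torch

    def diffusion_attack(image, target, edit_fn, eps=0.06, step_size=0.01, n_steps=50):
        # Optimize a perturbation so that any edit of the immunized image
        # lands near the preselected target image instead.
        delta = torch.zeros_like(image, requires_grad=True)
        for _ in range(n_steps):
            edited = edit_fn(image + delta)  # full diffusion edit, end-to-end
            loss = torch.nn.functional.mse_loss(edited, target)
            loss.backward()
            with torch.no_grad():
                delta -= step_size * delta.grad.sign()
                delta.clamp_(-eps, eps)
                delta.grad.zero_()
        return (image + delta).clamp(0, 1).detach()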

    “The progress in AI that we are witnessing is truly breathtaking, but it enables beneficial and malicious uses of AI alike,” says MIT professor of EECS and CSAIL principal investigator Aleksander Madry, who is also an author on the paper. “It is thus urgent that we work towards identifying and mitigating the latter. I view PhotoGuard as our small contribution to that important effort.”

    The diffusion attack is more computationally intensive than its simpler sibling and requires significant GPU memory. The team says that approximating the diffusion process with fewer steps mitigates the issue, making the technique more practical.
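
    One way to realize that approximation, sketched under the same assumptions: run the editing loop with far fewer denoising steps while crafting the perturbation, so the backward pass has fewer activations to store. sample is an assumed differentiable sampling loop, not a real library call.

    def make_cheap_edit_fn(sample, prompt, n_denoise_steps=4):
        # Fewer denoising steps mean less GPU memory during the attack,
        # trading edit fidelity for practicality; the idea is that the
        # resulting perturbation still disrupts the full-step edit.
        return lambda image: sample(image, prompt, steps=n_denoise_steps)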

    To better illustrate the attack, consider an art project. The original image is a drawing, and the target image is another, completely different drawing. The diffusion attack is like making tiny, invisible changes to the first drawing so that, to an AI model, it begins to resemble the second. To the human eye, however, the original drawing remains unchanged.

    Any AI model attempting to modify the original image will then inadvertently make changes as if it were dealing with the target image, thereby protecting the original from the intended manipulation. The result is an image that remains visually unaltered for human observers but resists unauthorized edits by AI models.

    For a concrete example with PhotoGuard, consider an image with multiple faces. You could mask any faces you don’t want modified and then prompt with “two men attending a wedding.” Upon submission, the system will adjust the image accordingly, creating a plausible depiction of two men attending a wedding.

    Now consider safeguarding the image from being edited: adding perturbations to the image before upload immunizes it against modifications. In this case, the final output will lack realism compared with the edit of the original, non-immunized image.
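
    As an illustrative end-to-end comparison, the sketch below uses the Hugging Face diffusers inpainting pipeline to play the role of the editor (the paper’s exact setup may differ). It assumes original and mask are PIL images loaded elsewhere, and immunize is either attack sketched above (tensor/PIL conversion omitted for brevity).

    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting"
    )
    prompt = "two men attending a wedding"

    # Editing the original image yields a plausible wedding scene.
    edited_plain = pipe(prompt=prompt, image=original, mask_image=mask).images[0]

    # Editing the immunized image should produce a visibly less realistic
    # result, since the perturbation disrupts the model's view of the image.
    edited_guarded = pipe(prompt=prompt, image=immunize(original), mask_image=mask).images[0]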

    All hands on deck

    Key allies in the fight against image manipulation are the creators of the image-editing models themselves, the team says. For PhotoGuard to be effective, an integrated response from all stakeholders is necessary. “Policymakers should consider implementing regulations that mandate companies to protect user data from such manipulations. Developers of these AI models could design APIs that automatically add perturbations to users’ images, providing an added layer of protection against unauthorized edits,” says Salman.

    Despite PhotoGuard’s promise, it is not a panacea. Once an image is online, people with malicious intent could try to reverse engineer the protective measures by applying noise, or by cropping or rotating the image. However, there is plenty of prior work in the adversarial examples literature that could be applied here to implement robust perturbations that resist common image manipulations.
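
    One standard recipe from that literature is Expectation Over Transformation (EOT): optimize the attack loss averaged over random transformations, so the perturbation survives the manipulations an adversary might apply. A hedged sketch, reusing the encoder-attack objective from above:

    import random
    import torch
    import torchvision.transforms.functional as TF

    def random_transform(x):
        # Sample the kinds of manipulations an adversary might try.
        x = TF.rotate(x, angle=random.uniform(-10.0, 10.0))  # small rotation
        x = x + 0.01 * torch.randn_like(x)                   # mild additive noise
        return x.clamp(0, 1)

    def eot_loss(image, delta, encoder, target_latent, n_samples=8):
        # Average the attack loss over sampled transformations so the
        # perturbation keeps working after noising or rotation.
        losses = [
            torch.nn.functional.mse_loss(
                encoder(random_transform(image + delta)), target_latent
            )
            for _ in range(n_samples)
        ]
        return torch.stack(losses).mean()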

    “A collaborative approach involving model developers, social media platforms, and policymakers presents a robust defense against unauthorized image manipulation. Working on this pressing issue is of paramount importance today,” says Salman. “And while I am glad to contribute towards this solution, much work is needed to make this protection practical. Companies that develop these models need to invest in engineering robust immunizations against the possible threats posed by these AI tools. As we tread into this new era of generative models, let’s strive for potential and protection in equal measures.”

    “The prospect of using attacks on machine learning to protect us from abusive uses of this technology is very compelling,” says Florian Tramèr, an assistant professor at ETH Zürich. “The paper has a nice insight that the developers of generative AI models have strong incentives to provide such immunization protections to their users, which could even be a legal requirement in the future. However, designing image protections that effectively resist circumvention attempts is a challenging problem: Once the generative AI company commits to an immunization mechanism and people start applying it to their online images, we need to ensure that this protection will work against motivated adversaries who might even use better generative AI models developed in the near future. Designing such robust protections is a hard open problem, and this paper makes a compelling case that generative AI companies should be working on solving it.”

    Salman wrote the paper alongside fellow lead authors Alaa Khaddaj and Guillaume Leclerc MS ’18, as well as Andrew Ilyas ’18, MEng ’18; all three are EECS graduate students and MIT CSAIL affiliates. The team’s work was partially completed on the MIT Supercloud compute cluster, supported by U.S. National Science Foundation grants and Open Philanthropy, and is based upon work supported by the U.S. Defense Advanced Research Projects Agency. It was presented at the International Conference on Machine Learning this July.
