Using AI to protect against AI image manipulation


As we enter a new era in which technologies powered by artificial intelligence can craft and manipulate images with a precision that blurs the line between reality and fabrication, the threat of misuse looms large. Recently, advanced generative models such as DALL-E and Midjourney, celebrated for their impressive precision and user-friendly interfaces, have made producing hyper-realistic images relatively effortless. With the barriers to entry lowered, even inexperienced users can generate and manipulate high-quality images from simple text descriptions, ranging from innocent alterations to malicious modifications. Techniques like watermarking offer a promising route to detection, but preventing misuse in the first place calls for a preemptive (as opposed to only post hoc) measure.

In the quest to create such a measure, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) developed "PhotoGuard," a technique that uses perturbations (minuscule alterations in pixel values, invisible to the human eye but detectable by computer models) to effectively disrupt a model's ability to manipulate the image.

PhotoGuard uses two different "attack" methods to generate these perturbations. The more straightforward "encoder" attack targets the image's latent representation in the AI model, causing the model to perceive the image as a random entity. The more sophisticated "diffusion" attack defines a target image and optimizes the perturbations to make the final image resemble that target as closely as possible.

“Consider the possibility of fraudulent propagation of fake catastrophic events, like an explosion at a significant landmark. This deception can manipulate market trends and public sentiment, but the risks are not limited to the public sphere. Personal images can be inappropriately altered and used for blackmail, resulting in significant financial implications when executed on a large scale,” says Hadi Salman, an MIT graduate student in electrical engineering and computer science (EECS), affiliate of MIT CSAIL, and lead author of a new paper about PhotoGuard.

    “In more extreme scenarios, these models could simulate voices and images for staging false crimes, inflicting psychological distress and financial loss. The swift nature of these actions compounds the problem. Even when the deception is eventually uncovered, the damage — whether reputational, emotional, or financial — has often already happened. This is a reality for victims at all levels, from individuals bullied at school to society-wide manipulation.”

PhotoGuard in practice

AI models view an image differently from how humans do. A model sees an image as a complex set of mathematical data points that describe every pixel's color and position: the image's latent representation. The encoder attack introduces minor adjustments into this mathematical representation, causing the AI model to perceive the image as a random entity. As a result, any attempt to manipulate the image using the model becomes nearly impossible. The changes introduced are so minute that they are invisible to the human eye, preserving the image's visual integrity while ensuring its protection.
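To make the idea concrete, here is a minimal sketch of an encoder-style attack, not the team's exact implementation: it assumes a Stable Diffusion VAE from Hugging Face diffusers as a stand-in for the editing model's image encoder, and the perturbation budget, step size, and iteration count are illustrative.

```python
import torch
from diffusers import AutoencoderKL

# Illustrative stand-in for the editing model's image encoder.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

def encoder_attack(image, eps=8 / 255, step=1 / 255, iters=100):
    """PGD that drives the image's latent toward a meaningless target.

    image: float tensor in [0, 1], shape (1, 3, H, W).
    Returns a visually near-identical image whose latent looks like noise.
    """
    with torch.no_grad():
        # An arbitrary "random entity" target in latent space.
        target = torch.randn_like(vae.encode(2 * image - 1).latent_dist.mean)
    x_adv = image.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        latent = vae.encode(2 * x_adv - 1).latent_dist.mean
        loss = torch.nn.functional.mse_loss(latent, target)
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - step * grad.sign()                 # move latent toward the junk target
            x_adv = image + (x_adv - image).clamp(-eps, eps)   # keep the change imperceptible
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

The key design choice mirrors the article's description: the loss is defined entirely in latent space, so the optimization never needs to run the full editing pipeline, which keeps this variant cheap.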

The second and decidedly more intricate "diffusion" attack strategically targets the entire diffusion model end to end. This involves choosing a desired target image and then running an optimization process that closely aligns the generated image with this preselected target.

In implementation, the team created perturbations within the input space of the original image. These perturbations are then used during the inference stage and applied to the images, offering a robust defense against unauthorized manipulation.

“The progress in AI that we are witnessing is truly breathtaking, but it enables beneficial and malicious uses of AI alike,” says MIT professor of EECS and CSAIL principal investigator Aleksander Madry, who is also an author on the paper. “It is thus urgent that we work towards identifying and mitigating the latter. I view PhotoGuard as our small contribution to that important effort.”

The diffusion attack is more computationally intensive than its simpler sibling and requires significant GPU memory. The team says that approximating the diffusion process with fewer steps mitigates the issue, making the technique more practical.

To better illustrate the attack, consider an art project. The original image is a drawing, and the target image is another drawing that is completely different. The diffusion attack is like making tiny, invisible changes to the first drawing so that, to an AI model, it begins to resemble the second drawing, while to the human eye the original drawing appears unchanged.

By doing this, any AI model attempting to modify the original image will inadvertently make changes as if dealing with the target image, thereby protecting the original from the intended manipulation. The result is an image that remains visually unaltered for human observers but resists unauthorized edits by AI models. A rough sketch of this end-to-end attack follows.
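Below is a heavily simplified sketch of what such an end-to-end attack could look like, assuming Stable Diffusion img2img components from diffusers as the editing model and only four denoising steps, in the spirit of the few-step approximation mentioned above. The model choice, empty prompt, and hyperparameters are illustrative, not the paper's setup, and even this truncated version needs substantial GPU memory.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline

# Illustrative stand-in for the targeted editing model.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.scheduler.set_timesteps(4)  # few-step approximation keeps backprop tractable
cond = pipe.text_encoder(pipe.tokenizer("", return_tensors="pt").input_ids)[0]

def edit(x):
    """A differentiable, truncated img2img pass: encode, noise, denoise, decode."""
    latents = 0.18215 * pipe.vae.encode(2 * x - 1).latent_dist.mean
    t0 = pipe.scheduler.timesteps[0]
    latents = pipe.scheduler.add_noise(latents, torch.randn_like(latents), t0)
    for t in pipe.scheduler.timesteps:
        eps_pred = pipe.unet(latents, t, encoder_hidden_states=cond).sample
        latents = pipe.scheduler.step(eps_pred, t, latents).prev_sample
    return pipe.vae.decode(latents / 0.18215).sample  # decoded image in [-1, 1]

def diffusion_attack(image, target, eps=8 / 255, step=1 / 255, iters=50):
    """PGD so that editing the immunized image reproduces the target instead.

    image, target: float tensors in [0, 1], shape (1, 3, H, W).
    """
    x_adv = image.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.mse_loss(edit(x_adv), 2 * target - 1)
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - step * grad.sign()
            x_adv = (image + (x_adv - image).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()
```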

For a real example with PhotoGuard, consider an image with multiple faces. You could mask any faces you don't want modified and then prompt with "two men attending a wedding." Upon submission, the system adjusts the image accordingly, creating a plausible depiction of two men participating in a wedding.

Now, consider safeguarding that image from being edited: adding perturbations to it before upload immunizes it against modifications. In this case, the final output will lack realism compared with an edit of the original, non-immunized image, as in the comparison sketched below.
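Continuing the sketches above (reusing `encoder_attack` and `pipe`), a before/after comparison might look like the following; the filename, resolution, and prompt are placeholders standing in for the example in the text.

```python
import torchvision.transforms.functional as TF
from PIL import Image

# Hypothetical input image; resized to the resolution the sketches assume.
x = TF.to_tensor(Image.open("wedding.png").convert("RGB").resize((512, 512))).unsqueeze(0)
x_immunized = encoder_attack(x)  # add the protective perturbation before "uploading"

prompt = "two men attending a wedding"
edit_plain = pipe(prompt=prompt, image=TF.to_pil_image(x[0])).images[0]
edit_immunized = pipe(prompt=prompt, image=TF.to_pil_image(x_immunized[0])).images[0]
# Expected outcome per the team's results: the second edit comes out degraded
# and unrealistic compared with the edit of the non-immunized image.
```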

All hands on deck

Key allies in the fight against image manipulation are the creators of the image-editing models themselves, says the team. For PhotoGuard to be effective, an integrated response from all stakeholders is necessary. “Policymakers should consider implementing regulations that mandate companies to protect user data from such manipulations. Developers of these AI models could design APIs that automatically add perturbations to users’ images, providing an added layer of protection against unauthorized edits,” says Salman.

Despite PhotoGuard's promise, it is not a panacea. Once an image is online, individuals with malicious intent could attempt to reverse engineer the protective measures by applying noise, or by cropping or rotating the image. However, there is plenty of prior work from the adversarial examples literature that can be applied here to build robust perturbations that resist common image manipulations; one standard idea is sketched below.
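One such idea from that literature is expectation over transformation (EOT): average the attack's gradient over randomly transformed views of the image so the perturbation survives those transformations. The sketch below adapts the earlier encoder attack this way; it reuses `vae` from that sketch, assumes 512x512 inputs, and the transform set and hyperparameters are illustrative rather than anything the paper specifies.

```python
import torch
import torchvision.transforms as T

# Random views simulating common post-processing an adversary might apply.
augment = T.Compose([
    T.RandomResizedCrop(512, scale=(0.8, 1.0)),
    T.RandomRotation(5),
])

def robust_encoder_attack(image, eps=8 / 255, step=1 / 255, iters=100, views=4):
    """Encoder attack with the loss averaged over random crops/rotations (EOT).

    image: float tensor in [0, 1], shape (1, 3, 512, 512).
    Reuses `vae` from the encoder-attack sketch above.
    """
    with torch.no_grad():
        target = torch.randn_like(vae.encode(2 * image - 1).latent_dist.mean)
    x_adv = image.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        # Expectation over transformations: average the loss across random views.
        loss = sum(
            torch.nn.functional.mse_loss(
                vae.encode(2 * augment(x_adv) - 1).latent_dist.mean, target
            )
            for _ in range(views)
        ) / views
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - step * grad.sign()
            x_adv = (image + (x_adv - image).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()
```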

    “A collaborative approach involving model developers, social media platforms, and policymakers presents a robust defense against unauthorized image manipulation. Working on this pressing issue is of paramount importance today,” says Salman. “And while I am glad to contribute towards this solution, much work is needed to make this protection practical. Companies that develop these models need to invest in engineering robust immunizations against the possible threats posed by these AI tools. As we tread into this new era of generative models, let’s strive for potential and protection in equal measures.”

    “The prospect of using attacks on machine learning to protect us from abusive uses of this technology is very compelling,” says Florian Tramèr, an assistant professor at ETH Zürich. “The paper has a nice insight that the developers of generative AI models have strong incentives to provide such immunization protections to their users, which could even be a legal requirement in the future. However, designing image protections that effectively resist circumvention attempts is a challenging problem: Once the generative AI company commits to an immunization mechanism and people start applying it to their online images, we need to ensure that this protection will work against motivated adversaries who might even use better generative AI models developed in the near future. Designing such robust protections is a hard open problem, and this paper makes a compelling case that generative AI companies should be working on solving it.”

Salman wrote the paper alongside fellow lead authors Alaa Khaddaj and Guillaume Leclerc MS ’18, as well as Andrew Ilyas ’18, MEng ’18; all three are EECS graduate students and MIT CSAIL affiliates. The team's work was partially performed on the MIT Supercloud compute cluster, supported by U.S. National Science Foundation grants and Open Philanthropy, and based upon work supported by the U.S. Defense Advanced Research Projects Agency. It was presented at the International Conference on Machine Learning this July.
