Using AI to protect against AI image manipulation
As we enter a new era in which technologies powered by artificial intelligence can craft and manipulate images with a precision that blurs the line between reality and fabrication, the specter of misuse looms large. Recently, advanced generative models such as DALL-E and Midjourney, celebrated for their impressive precision and user-friendly interfaces, have made producing hyper-realistic images relatively effortless. With the barriers to entry lowered, even inexperienced users can generate and manipulate high-quality images from simple text descriptions, ranging from innocuous alterations to malicious edits. Techniques like watermarking offer a promising solution, but misuse calls for a preemptive (as opposed to only post hoc) measure.

In an effort to create such a measure, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) developed "PhotoGuard," a technique that uses perturbations, minuscule alterations in pixel values invisible to the human eye but detectable by computer models, to disrupt a model's ability to manipulate an image.

PhotoGuard uses two different "attack" methods to generate these perturbations. The more straightforward "encoder" attack targets the image's latent representation in the AI model, causing the model to perceive the image as a random entity. The more sophisticated "diffusion" attack defines a target image and optimizes the perturbations to make the final image resemble that target as closely as possible.

“Consider the possibility of fraudulent propagation of fake catastrophic events, like an explosion at a significant landmark. This deception can manipulate market trends and public sentiment, but the risks are not limited to the public sphere. Personal images can be inappropriately altered and used for blackmail, resulting in significant financial implications when executed on a large scale,” says Hadi Salman, an MIT graduate student in electrical engineering and computer science (EECS), an affiliate of MIT CSAIL, and lead author of a new paper about PhotoGuard.

    “In more extreme scenarios, these models could simulate voices and images for staging false crimes, inflicting psychological distress and financial loss. The swift nature of these actions compounds the problem. Even when the deception is eventually uncovered, the damage — whether reputational, emotional, or financial — has often already happened. This is a reality for victims at all levels, from individuals bullied at school to society-wide manipulation.”

PhotoGuard in practice

AI models view an image differently from how humans do. A model sees an image as a complex set of mathematical data points that describe every pixel's color and position; this is the image's latent representation. The encoder attack introduces minor adjustments into this mathematical representation, causing the AI model to perceive the image as a random entity. As a result, any attempt to manipulate the image using the model becomes nearly impossible. The changes introduced are so minute that they are invisible to the human eye, preserving the image's visual integrity while ensuring its protection.
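To make the encoder attack concrete, here is a minimal sketch in PyTorch, assuming a Stable Diffusion-style VAE encoder such as the AutoencoderKL class from Hugging Face's diffusers library. The projected-gradient loop, the all-zeros target latent, and every hyperparameter are illustrative assumptions, not the paper's exact method.

```python
# Minimal sketch of an encoder attack (assumptions noted above).
import torch
import torch.nn.functional as F

def encoder_attack(x, vae, eps=0.06, step=0.01, iters=100):
    """Perturb image x (a [B, 3, H, W] tensor in [-1, 1]) within an
    L-infinity ball of radius eps so its latent drifts toward a
    meaningless target, confusing downstream editing models."""
    vae.requires_grad_(False)  # attack the input, not the weights
    with torch.no_grad():
        z_target = torch.zeros_like(vae.encode(x).latent_dist.mean)  # assumed "junk" target
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        z = vae.encode(x + delta).latent_dist.mean     # latent of perturbed image
        loss = F.mse_loss(z, z_target)                 # pull latent toward the junk target
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()          # signed gradient step
            delta.clamp_(-eps, eps)                    # keep perturbation imperceptible
            delta.copy_((x + delta).clamp(-1, 1) - x)  # keep image in valid range
        delta.grad.zero_()
    return (x + delta).detach()
```

Pulling the latent toward a fixed, uninformative target is one simple way to realize "the model perceives the image as a random entity"; pushing the latent away from the original would be another plausible variant.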

The second and decidedly more intricate "diffusion" attack strategically targets the entire diffusion model end-to-end. This involves choosing a desired target image and then running an optimization process to align the generated image as closely as possible with that preselected target.

In implementation, the team created perturbations within the input space of the original image. These perturbations are then applied to the images during the inference stage, offering a robust defense against unauthorized manipulation.
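The sketch below shows the shape of this end-to-end objective under stated assumptions: `edit_fn` stands in for a differentiable run of the full img2img diffusion pipeline (prompt conditioning included), and `x_target`, the step counts, and the budget `eps` are placeholders, not a named API.

```python
# Sketch of the diffusion attack objective (all names are assumptions).
import torch
import torch.nn.functional as F

def diffusion_attack(x, x_target, edit_fn, prompt, eps=0.06, step=0.01, iters=50):
    """Optimize a perturbation so that editing x + delta with the full
    diffusion pipeline yields something close to x_target."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        # Backpropagating through every sampling step is memory-hungry;
        # a small num_steps approximates the full diffusion process.
        out = edit_fn(x + delta, prompt, num_steps=4)  # placeholder differentiable edit
        loss = F.mse_loss(out, x_target)               # match the preselected target
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()          # signed gradient step
            delta.clamp_(-eps, eps)                    # imperceptibility budget
        delta.grad.zero_()
    return (x + delta).clamp(-1, 1).detach()
```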

“The progress in AI that we are witnessing is truly breathtaking, but it enables beneficial and malicious uses of AI alike,” says Aleksander Madry, an MIT professor of EECS, CSAIL principal investigator, and an author on the paper. “It is thus urgent that we work towards identifying and mitigating the latter. I view PhotoGuard as our small contribution to that important effort.”

The diffusion attack is more computationally intensive than its simpler sibling and requires significant GPU memory. The team says that approximating the diffusion process with fewer steps mitigates the issue, making the technique more practical.

To better illustrate the attack, consider an art project. The original image is a drawing, and the target image is another, completely different drawing. The diffusion attack is like making tiny, invisible changes to the first drawing so that, to an AI model, it begins to resemble the second. To the human eye, however, the original drawing remains unchanged.

By doing this, any AI model attempting to modify the original image will inadvertently make changes as if it were dealing with the target image, thereby protecting the original from the intended manipulation. The result is an image that remains visually unaltered to human observers but resists unauthorized edits by AI models.

For a real example with PhotoGuard, consider an image with multiple faces. You could mask any faces you don't want modified, and then prompt with "two men attending a wedding." Upon submission, the system adjusts the image accordingly, creating a plausible depiction of two men attending a wedding.

Now consider safeguarding the image from being edited: adding perturbations to the image before upload can immunize it against modifications. In that case, the final output will lack realism compared with the output for the original, non-immunized image.
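A hypothetical end-to-end flow tying this together uses the encoder_attack sketch from above alongside diffusers' StableDiffusionInpaintPipeline. The model ID, the to_pil conversion helper, the mask, and the input tensor x are assumptions or placeholders.

```python
# Hypothetical usage: immunize, then compare edits (assumptions noted above).
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting"  # assumed model checkpoint
)

prompt = "two men attending a wedding"
# x: your image as a [1, 3, H, W] tensor in [-1, 1] (loading omitted);
# mask: inpainting mask covering the faces you allow to change;
# to_pil: placeholder tensor-to-PIL conversion.
immunized = encoder_attack(x, pipe.vae)

# Edit the original vs. the immunized image with the same mask and prompt.
edited_plain = pipe(prompt=prompt, image=to_pil(x), mask_image=mask).images[0]
edited_immunized = pipe(prompt=prompt, image=to_pil(immunized), mask_image=mask).images[0]
# Expectation: edited_immunized lacks realism relative to edited_plain.
```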

All hands on deck

Key allies in the fight against image manipulation are the creators of the image-editing models, the team says. For PhotoGuard to be effective, an integrated response from all stakeholders is necessary. “Policymakers should consider implementing regulations that mandate companies to protect user data from such manipulations. Developers of these AI models could design APIs that automatically add perturbations to users’ images, providing an added layer of protection against unauthorized edits,” says Salman.

Despite PhotoGuard's promise, it is not a panacea. Once an image is online, individuals with malicious intent could attempt to reverse engineer the protective measures by applying noise, or by cropping or rotating the image. However, there is plenty of prior work in the adversarial examples literature that can be applied here to implement robust perturbations that resist common image manipulations.
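One standard idea from that literature is expectation over transformation (EOT): average the attack loss over randomly sampled transforms so the perturbation survives them. The sketch below folds this into the encoder attack from earlier; the transform set, noise scale, and sample count are illustrative assumptions, not the paper's method.

```python
# Sketch: EOT-style robustness for the encoder attack (assumptions above).
import random
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def random_transform(img):
    """Sample a mild transform an adversary might apply to strip protection."""
    img = TF.rotate(img, random.uniform(-10.0, 10.0))             # small rotation
    if random.random() < 0.5:
        img = (img + 0.02 * torch.randn_like(img)).clamp(-1, 1)   # mild noise
    return img

def eot_loss(x, delta, vae, z_target, samples=4):
    """Average the encoder-attack loss over random transforms, so the
    optimized delta keeps working after rotation or added noise."""
    losses = [
        F.mse_loss(vae.encode(random_transform(x + delta)).latent_dist.mean, z_target)
        for _ in range(samples)
    ]
    return torch.stack(losses).mean()
```

In the attack loop, eot_loss would simply replace the single-image loss from the earlier sketch.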

    “A collaborative approach involving model developers, social media platforms, and policymakers presents a robust defense against unauthorized image manipulation. Working on this pressing issue is of paramount importance today,” says Salman. “And while I am glad to contribute towards this solution, much work is needed to make this protection practical. Companies that develop these models need to invest in engineering robust immunizations against the possible threats posed by these AI tools. As we tread into this new era of generative models, let’s strive for potential and protection in equal measures.”

    “The prospect of using attacks on machine learning to protect us from abusive uses of this technology is very compelling,” says Florian Tramèr, an assistant professor at ETH Zürich. “The paper has a nice insight that the developers of generative AI models have strong incentives to provide such immunization protections to their users, which could even be a legal requirement in the future. However, designing image protections that effectively resist circumvention attempts is a challenging problem: Once the generative AI company commits to an immunization mechanism and people start applying it to their online images, we need to ensure that this protection will work against motivated adversaries who might even use better generative AI models developed in the near future. Designing such robust protections is a hard open problem, and this paper makes a compelling case that generative AI companies should be working on solving it.”

Salman wrote the paper alongside fellow lead authors Alaa Khaddaj and Guillaume Leclerc MS ’18, as well as Andrew Ilyas ’18, MEng ’18; all three are EECS graduate students and MIT CSAIL affiliates. The team's work was partially done on the MIT Supercloud compute cluster, supported by U.S. National Science Foundation grants and Open Philanthropy, and was based on work supported by the U.S. Defense Advanced Research Projects Agency. It was presented at the International Conference on Machine Learning this July.
