    Text-to-image generation in any style – Google Research Blog


    Posted by Kihyuk Sohn and Dilip Krishnan, Research Scientists, Google Research

Text-to-image models trained on large volumes of image-text pairs have enabled the creation of rich and diverse images encompassing many genres and themes. Moreover, popular styles such as “anime” or “steampunk”, when added to the input text prompt, may translate to specific visual outputs. While much effort has been put into prompt engineering, a wide range of styles are simply hard to describe in text form due to the nuances of color schemes, illumination, and other characteristics. As an example, “watercolor painting” may refer to various styles, and using a text prompt that simply says “watercolor painting style” may either result in one specific style or an unpredictable combination of several.

When we refer to “watercolor painting style,” which one do we mean? Instead of specifying the style in natural language, StyleDrop allows the generation of images that are consistent in style by referring to a style reference image*.

In this blog we introduce “StyleDrop: Text-to-Image Generation in Any Style”, a tool that allows a significantly higher level of stylized text-to-image synthesis. Instead of seeking text prompts to describe the style, StyleDrop uses one or more style reference images that describe the style for text-to-image generation. By doing so, StyleDrop enables the generation of images in a style consistent with the reference, while effectively circumventing the burden of text prompt engineering. This is done by efficiently fine-tuning pre-trained text-to-image generation models via adapter tuning on a few style reference images. Moreover, by iteratively fine-tuning StyleDrop on a set of images it generated, it achieves style-consistent image generation from text prompts.

    Method overview

StyleDrop is a text-to-image generation model that allows generation of images whose visual styles are consistent with user-provided style reference images. This is achieved by a couple of iterations of parameter-efficient fine-tuning of pre-trained text-to-image generation models. Specifically, we build StyleDrop on Muse, a text-to-image generative vision transformer.

Muse: text-to-image generative vision transformer

Muse is a state-of-the-art text-to-image generation model based on the masked generative image transformer (MaskGIT). Unlike diffusion models, such as Imagen or Stable Diffusion, Muse represents an image as a sequence of discrete tokens and models their distribution using a transformer architecture. Compared to diffusion models, Muse is known to be faster while achieving competitive generation quality.
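The MaskGIT-style decoding that Muse builds on can be sketched in a few lines. The toy model below (a random stand-in, not Muse itself, and with illustrative vocabulary and sequence sizes) shows the core loop: the token grid starts fully masked, and each step commits the most confident predictions while leaving the rest masked for the next step.

```python
import numpy as np

# Toy sketch of MaskGIT-style iterative decoding. `toy_model`, VOCAB, and
# SEQ_LEN are stand-ins chosen for illustration, not the actual Muse setup.
VOCAB, SEQ_LEN, MASK = 16, 8, -1

def toy_model(tokens, rng):
    """Stand-in for the transformer: per-position logits over the codebook."""
    return rng.standard_normal((len(tokens), VOCAB))

def maskgit_decode(steps=4, seed=0):
    rng = np.random.default_rng(seed)
    tokens = np.full(SEQ_LEN, MASK)           # start with every position masked
    for step in range(steps):
        logits = toy_model(tokens, rng)
        probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
        pred = probs.argmax(-1)               # predicted codebook token per position
        conf = probs.max(-1)                  # confidence of each prediction
        conf[tokens != MASK] = np.inf         # already-committed tokens stay fixed
        # Commit enough of the most confident positions to finish in `steps` steps.
        n_keep = int(np.ceil(SEQ_LEN * (step + 1) / steps))
        keep = np.argsort(-conf)[:n_keep]
        tokens = tokens.copy()
        tokens[keep] = np.where(tokens[keep] == MASK, pred[keep], tokens[keep])
    return tokens

tokens = maskgit_decode()
assert (tokens != MASK).all()                 # all positions decoded at the end
```

In the real model the committed tokens would then be mapped back to pixels by a learned image tokenizer's decoder.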

    Parameter-efficient adapter tuning

StyleDrop is built by fine-tuning the pre-trained Muse model on a few style reference images and their corresponding text prompts. There has been much work on parameter-efficient fine-tuning of transformers, including prompt tuning and Low-Rank Adaptation (LoRA) of large language models. Among these, we opt for adapter tuning, which is shown to be effective at fine-tuning a large transformer network for language and image generation tasks in a parameter-efficient manner. For example, it introduces fewer than one million trainable parameters to fine-tune a Muse model of 3B parameters, and it requires only 1,000 training steps to converge.

    Parameter-efficient adapter tuning of Muse.
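A minimal sketch of the bottleneck-adapter idea, with illustrative sizes rather than the actual Muse/StyleDrop configuration: the frozen layer's output passes through a small down-project/nonlinearity/up-project residual branch, and only the adapter weights would be trained.

```python
import numpy as np

# Bottleneck adapter sketch (sizes are assumptions for illustration only).
def adapter(x, W_down, W_up):
    h = np.maximum(x @ W_down, 0.0)   # down-projection + ReLU
    return x + h @ W_up               # residual up-projection

d_model, d_bottleneck = 1024, 16      # illustrative dimensions
rng = np.random.default_rng(0)
W_down = rng.standard_normal((d_model, d_bottleneck)) * 0.02
W_up = np.zeros((d_bottleneck, d_model))   # zero-init: adapter starts as identity

x = rng.standard_normal((4, d_model))
assert np.allclose(adapter(x, W_down, W_up), x)  # identity at initialization

trainable = W_down.size + W_up.size
print(f"trainable adapter params per layer: {trainable:,}")  # 32,768
```

Zero-initializing the up-projection makes the adapter a no-op at the start of fine-tuning, so training begins from the pre-trained model's behavior; the per-layer parameter count stays tiny relative to the frozen 3B-parameter backbone.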

Iterative training with feedback

While StyleDrop is effective at learning styles from a few style reference images, it is still challenging to learn from a single style reference image. This is because the model may not effectively disentangle the content (i.e., what is in the image) from the style (i.e., how it is presented), leading to reduced text controllability in generation. For example, as shown below in Steps 1 and 2, a generated image of a chihuahua from StyleDrop trained on a single style reference image shows a leakage of content (i.e., the house) from the style reference image. Furthermore, a generated image of a temple looks too similar to the house in the reference image (concept collapse).

We address this issue by training a new StyleDrop model on a subset of synthetic images, chosen by the user or by image-text alignment models (e.g., CLIP), generated by the first round of the StyleDrop model trained on a single image. By training on multiple synthetic image-text aligned images, the model can more easily disentangle the style from the content, thus achieving improved image-text alignment.

Iterative training with feedback*. The first round of StyleDrop may result in reduced text controllability, such as content leakage or concept collapse, due to the difficulty of content-style disentanglement. Iterative training using synthetic images, generated by previous rounds of StyleDrop models and selected by humans or image-text alignment models, improves the text adherence of stylized text-to-image generation.
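The feedback step amounts to a simple filter over the round-1 outputs. The sketch below uses a precomputed stand-in score and hypothetical image ids; in practice the score would come from human selection or an image-text alignment model such as CLIP.

```python
# Feedback selection sketch: keep only the synthetic (image, prompt) pairs
# best aligned with their prompts, then retrain StyleDrop on that subset.
# `fake_scores` and the image ids are illustrative stand-ins.
def select_for_round2(samples, score_fn, k):
    """samples: list of (image, prompt); keep the k highest-scoring pairs."""
    return sorted(samples, key=lambda s: score_fn(*s), reverse=True)[:k]

fake_scores = {"img_a": 0.31, "img_b": 0.27, "img_c": 0.12, "img_d": 0.29}
samples = [(img, "a chihuahua in watercolor style") for img in fake_scores]
chosen = select_for_round2(samples, lambda img, txt: fake_scores[img], k=2)
assert [img for img, _ in chosen] == ["img_a", "img_d"]
```

The retained pairs share the reference style but differ in content, which is what lets the second-round model separate the two.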

    Experiments

    StyleDrop gallery

We demonstrate the effectiveness of StyleDrop by running experiments on 24 distinct style reference images. As shown below, the images generated by StyleDrop are highly consistent in style with one another and with the style reference image, while depicting various contexts, such as a baby penguin, banana, piano, etc. Moreover, the model can render alphabet images with a consistent style.

Stylized text-to-image generation. Style reference images* are on the left inside the yellow box.
Text prompts used are:
First row: a baby penguin, a banana, a bench.
Second row: a butterfly, an F1 race car, a Christmas tree.
Third row: a coffee maker, a hat, a moose.
Fourth row: a robot, a towel, a wood cabin.
Stylized visual character generation. Style reference images* are on the left inside the yellow box.
Text prompts used are: (first row) letter ‘A’, letter ‘B’, letter ‘C’, (second row) letter ‘E’, letter ‘F’, letter ‘G’.

Generating images of my object in my style

Below we show images generated by sampling from two personalized generation distributions, one for an object and another for the style.

Images at the top in the blue border are object reference images from the DreamBooth dataset (teapot, vase, dog, and cat), and the image on the left at the bottom in the red border is the style reference image*. Images in the purple border (i.e., the four lower-right images) are generated from the style image of the specific object.

Quantitative results

For the quantitative evaluation, we synthesize images from a subset of Parti prompts and measure the image-to-image CLIP score for style consistency and the image-to-text CLIP score for text consistency. We study non-fine-tuned models of Muse and Imagen. Among fine-tuned models, we make a comparison to DreamBooth on Imagen, a state-of-the-art personalized text-to-image method for subjects. We show two versions of StyleDrop, one trained from a single style reference image, and another, “StyleDrop (HF)”, that is trained iteratively using synthetic images with human feedback as described above. As shown below, StyleDrop (HF) shows a significantly improved style consistency score over its non-fine-tuned counterpart (0.694 vs. 0.556), as well as over DreamBooth on Imagen (0.694 vs. 0.644). We observe an improved text consistency score with StyleDrop (HF) over StyleDrop (0.322 vs. 0.313). In addition, in a human preference study between DreamBooth on Imagen and StyleDrop on Muse, 86% of the human raters preferred StyleDrop on Muse over DreamBooth on Imagen in terms of consistency to the style reference image.
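Both metrics reduce to cosine similarity in CLIP's joint embedding space. The sketch below uses random stand-in embeddings in place of a real CLIP encoder: style consistency compares the generated image's embedding with the style reference image's embedding, and text consistency compares it with the prompt's text embedding.

```python
import numpy as np

# CLIP-score sketch with stand-in embeddings (a real CLIP encoder would
# produce `ref_img_emb`, `gen_img_emb`, and `text_emb`).
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
ref_img_emb = rng.standard_normal(512)   # stand-in for CLIP(style reference image)
gen_img_emb = rng.standard_normal(512)   # stand-in for CLIP(generated image)
text_emb = rng.standard_normal(512)      # stand-in for CLIP(text prompt)

style_score = cosine(gen_img_emb, ref_img_emb)  # image-to-image CLIP score
text_score = cosine(gen_img_emb, text_emb)      # image-to-text CLIP score
assert -1.0 <= style_score <= 1.0 and -1.0 <= text_score <= 1.0
```

In the reported evaluation these scores are averaged over images generated from the Parti prompt subset.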

    Conclusion

StyleDrop achieves style consistency in text-to-image generation using only a few style reference images. Google’s AI Principles guided our development of StyleDrop, and we urge the responsible use of the technology. StyleDrop was adapted to create a custom style model in Vertex AI, and we believe it can be a helpful tool for art directors and graphic designers who want to brainstorm or prototype visual assets in their own styles to improve their productivity and boost their creativity, as well as for businesses that want to generate new media assets reflecting a particular brand. As with other generative AI capabilities, we recommend that practitioners ensure they comply with the copyrights of any media assets they use. More results can be found on our project website and YouTube video.

    Acknowledgements

This research was conducted by Kihyuk Sohn, Nataniel Ruiz, Kimin Lee, Daniel Castro Chin, Irina Blok, Huiwen Chang, Jarred Barber, Lu Jiang, Glenn Entis, Yuanzhen Li, Yuan Hao, Irfan Essa, Michael Rubinstein, and Dilip Krishnan. We thank the owners of the images used in our experiments (links for attribution) for sharing their valuable assets.


*See image sources ↩
