    Text-to-image generation in any style


    Posted by Kihyuk Sohn and Dilip Krishnan, Research Scientists, Google Research

    Text-to-image models trained on large volumes of image-text pairs have enabled the creation of rich and diverse images encompassing many genres and themes. Moreover, popular styles such as “anime” or “steampunk”, when added to the input text prompt, may translate to specific visual outputs. While much effort has gone into prompt engineering, a wide range of styles are simply hard to describe in text form due to the nuances of color schemes, illumination, and other characteristics. As an example, “watercolor painting” may refer to various styles, and using a text prompt that simply says “watercolor painting style” may either result in one specific style or an unpredictable mixture of several.

    When we refer to “watercolor painting style,” which one do we mean? Instead of specifying the style in natural language, StyleDrop enables the generation of images that are consistent in style by referring to a style reference image*.

    In this blog we introduce “StyleDrop: Text-to-Image Generation in Any Style”, a tool that allows a significantly higher level of stylized text-to-image synthesis. Instead of seeking text prompts to describe the style, StyleDrop uses one or more style reference images that describe the style for text-to-image generation. By doing so, StyleDrop enables the generation of images in a style consistent with the reference, while effectively circumventing the burden of text prompt engineering. This is done by efficiently fine-tuning pre-trained text-to-image generation models via adapter tuning on a few style reference images. Moreover, by iteratively fine-tuning StyleDrop on a set of images it generated, it achieves style-consistent image generation from text prompts.

    Method overview

    StyleDrop is a text-to-image generation model that allows generation of images whose visual styles are consistent with user-provided style reference images. This is achieved by a couple of iterations of parameter-efficient fine-tuning of pre-trained text-to-image generation models. Specifically, we build StyleDrop on Muse, a text-to-image generative vision transformer.

    Muse: text-to-image generative vision transformer

    Muse is a state-of-the-art text-to-image generation model based on the masked generative image transformer (MaskGIT). Unlike diffusion models, such as Imagen or Stable Diffusion, Muse represents an image as a sequence of discrete tokens and models their distribution using a transformer architecture. Compared to diffusion models, Muse is known to be faster while achieving competitive generation quality.
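    To make the contrast with diffusion models concrete, the sketch below shows the core of MaskGIT-style masked token modeling: an image is represented as a grid of discrete token ids, a random subset is replaced with a [MASK] token, and a transformer learns to predict the masked tokens. This is a minimal illustration, not Google's Muse code; the tokenizer, model sizes, and text conditioning are all omitted or assumed.

```python
# Minimal sketch of MaskGIT-style masked token modeling, the mechanism Muse builds on.
# NOT Google's Muse code; tokenizer, sizes, and text conditioning are omitted or assumed.
import torch
import torch.nn as nn

VOCAB_SIZE = 8192       # size of the visual token codebook (assumed)
SEQ_LEN = 256           # e.g., a 16x16 grid of image tokens (assumed)
MASK_ID = VOCAB_SIZE    # extra id reserved for the [MASK] token

class MaskedTokenTransformer(nn.Module):
    def __init__(self, dim=512, depth=8, heads=8):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB_SIZE + 1, dim)   # +1 for [MASK]
        self.pos_emb = nn.Parameter(torch.zeros(SEQ_LEN, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.to_logits = nn.Linear(dim, VOCAB_SIZE)

    def forward(self, tokens):                              # tokens: (B, SEQ_LEN) int ids
        x = self.tok_emb(tokens) + self.pos_emb
        return self.to_logits(self.encoder(x))              # (B, SEQ_LEN, VOCAB_SIZE)

def training_step(model, image_tokens, mask_ratio=0.5):
    """Replace a random subset of tokens with [MASK] and predict the originals."""
    mask = torch.rand_like(image_tokens, dtype=torch.float) < mask_ratio
    masked_input = image_tokens.masked_fill(mask, MASK_ID)
    logits = model(masked_input)
    return nn.functional.cross_entropy(logits[mask], image_tokens[mask])
```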

    Parameter-efficient adapter tuning

    StyleDrop is built by fine-tuning the pre-trained Muse model on a few style reference images and their corresponding text prompts. There have been many works on parameter-efficient fine-tuning of transformers, including prompt tuning and Low-Rank Adaptation (LoRA) of large language models. Among these, we opt for adapter tuning, which is shown to be effective at fine-tuning a large transformer network for language and image generation tasks in a parameter-efficient manner. For example, it introduces less than one million trainable parameters to fine-tune a Muse model of 3B parameters, and it requires only 1000 training steps to converge.

    Parameter-efficient adapter tuning of Muse.
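    The sketch below illustrates the general adapter-tuning recipe: the pre-trained backbone is frozen, and small bottleneck adapter modules with a residual connection are inserted after each transformer layer, so only the adapter weights (typically well under a million parameters) are updated. Attribute names such as `model.layers` are hypothetical; this is not the actual Muse/StyleDrop implementation.

```python
# Minimal sketch of adapter tuning: freeze the pre-trained backbone and train only
# small bottleneck adapters inserted after each transformer layer. Attribute names
# (e.g., `model.layers`) are hypothetical; this is not the Muse/StyleDrop code.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)   # start as an identity mapping so the
        nn.init.zeros_(self.up.bias)     # frozen model's behavior is unchanged

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

class AdaptedLayer(nn.Module):
    """Frozen pre-trained layer followed by a trainable adapter."""
    def __init__(self, pretrained_layer, dim):
        super().__init__()
        self.layer = pretrained_layer
        self.adapter = Adapter(dim)

    def forward(self, x):
        return self.adapter(self.layer(x))

def add_adapters(model, dim):
    for p in model.parameters():
        p.requires_grad = False          # freeze every pre-trained weight
    model.layers = nn.ModuleList(AdaptedLayer(l, dim) for l in model.layers)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"trainable parameters: {trainable:,}")   # only the adapters are trainable
    return model
```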

    Iterative training with feedback

    While StyleDrop is effective at learning styles from a few style reference images, it is still challenging to learn from a single style reference image. This is because the model may not effectively disentangle the content (i.e., what is in the image) and the style (i.e., how it is presented), leading to reduced text controllability in generation. For example, as shown below in Steps 1 and 2, a generated image of a chihuahua from StyleDrop trained on a single style reference image shows a leakage of content (i.e., the house) from the style reference image. Furthermore, a generated image of a temple looks too similar to the house in the reference image (concept collapse).

    We address this issue by training a new StyleDrop model on a subset of synthetic images, selected by the user or by image-text alignment models (e.g., CLIP), where the images are generated by the first round of the StyleDrop model trained on a single image. By training on multiple synthetic image-text aligned images, the model can more easily disentangle the style from the content, thus achieving improved image-text alignment.

    Iterative training with feedback*. The first round of StyleDrop may result in reduced text controllability, such as content leakage or concept collapse, due to the difficulty of content-style disentanglement. Iterative training using synthetic images, generated by the previous rounds of StyleDrop models and selected by humans or image-text alignment models, improves the text adherence of stylized text-to-image generation.
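    A minimal sketch of this feedback loop is shown below, assuming hypothetical `generate_fn` and `finetune_fn` routines for sampling from the round-1 model and adapter-tuning a round-2 model: candidate images are generated for each prompt, scored for image-text alignment with an off-the-shelf CLIP model, and only the best-aligned samples are kept as synthetic training data.

```python
# Minimal sketch of the feedback loop (not the paper's code). `generate_fn` and
# `finetune_fn` are hypothetical stand-ins for sampling from the round-1 StyleDrop
# model and adapter-tuning a round-2 model on the selected pairs.
import torch
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def alignment_score(image, prompt):
    """CLIP image-text alignment score for one (image, prompt) pair."""
    inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        return clip(**inputs).logits_per_image.item()

def feedback_round(generate_fn, finetune_fn, prompts,
                   samples_per_prompt=8, keep_per_prompt=2):
    selected = []
    for prompt in prompts:
        candidates = [generate_fn(prompt) for _ in range(samples_per_prompt)]
        best = sorted(candidates,
                      key=lambda img: alignment_score(img, prompt),
                      reverse=True)[:keep_per_prompt]
        selected += [(img, prompt) for img in best]
    # Round 2: fine-tune a fresh adapter on the selected synthetic image-text pairs.
    return finetune_fn(selected)
```

    The same loop works with human selection in place of the CLIP score, which corresponds to the “human feedback” variant discussed in the results below.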

    Experiments

    StyleDrop gallery

    We demonstrate the effectiveness of StyleDrop by running experiments on 24 distinct style reference images. As shown below, the images generated by StyleDrop are highly consistent in style with one another and with the style reference image, while depicting various contexts, such as a baby penguin, banana, piano, etc. Moreover, the model can render alphabet images with a consistent style.

    Stylized text-to-image generation. Style reference images* are on the left inside the yellow box.
    Text prompts used are:
    First row: a baby penguin, a banana, a bench.
    Second row: a butterfly, an F1 race car, a Christmas tree.
    Third row: a coffee maker, a hat, a moose.
    Fourth row: a robot, a towel, a wood cabin.

    Stylized visual character generation. Style reference images* are on the left inside the yellow box.
    Text prompts used are: (first row) letter ‘A’, letter ‘B’, letter ‘C’, (second row) letter ‘E’, letter ‘F’, letter ‘G’.

    Generating images of my object in my style

    Below we show images generated by sampling from two personalized generation distributions, one for an object and another for the style.

    Images at the top in the blue border are object reference images from the DreamBooth dataset (teapot, vase, dog and cat), and the image at the bottom left in the red border is the style reference image*. Images in the purple border (i.e., the four lower-right images) are generated in the style of the reference image for each specific object.

    Quantitative results

    For the quantitative evaluation, we synthesize images from a subset of Parti prompts and measure the image-to-image CLIP score for style consistency and the image-to-text CLIP score for text consistency. We study non–fine-tuned models of Muse and Imagen. Among fine-tuned models, we compare against DreamBooth on Imagen, a state-of-the-art personalized text-to-image method for subjects. We show two versions of StyleDrop, one trained from a single style reference image, and another, “StyleDrop (HF)”, that is trained iteratively using synthetic images with human feedback as described above. As shown below, StyleDrop (HF) shows a significantly improved style consistency score over its non–fine-tuned counterpart (0.694 vs. 0.556), as well as over DreamBooth on Imagen (0.694 vs. 0.644). We also observe an improved text consistency score with StyleDrop (HF) over StyleDrop (0.322 vs. 0.313). In addition, in a human preference study between DreamBooth on Imagen and StyleDrop on Muse, we found that 86% of the human raters preferred StyleDrop on Muse over DreamBooth on Imagen in terms of consistency to the style reference image.
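    The two metrics can be sketched with an off-the-shelf CLIP model as below: the image-to-image CLIP score is the cosine similarity between the CLIP embeddings of a generated image and the style reference image, and the image-to-text CLIP score is the cosine similarity between the image embedding and the prompt's text embedding. The exact CLIP variant and preprocessing used in the evaluation are assumptions here.

```python
# Minimal sketch of the two CLIP-based metrics (the exact CLIP variant and
# preprocessing used in the evaluation are assumptions).
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def image_embed(image):
    pixels = processor(images=image, return_tensors="pt").pixel_values
    return F.normalize(clip.get_image_features(pixel_values=pixels), dim=-1)

@torch.no_grad()
def text_embed(prompt):
    tokens = processor(text=[prompt], return_tensors="pt", padding=True)
    return F.normalize(clip.get_text_features(**tokens), dim=-1)

def style_consistency(generated_image, style_reference_image):
    """Image-to-image CLIP score: cosine similarity of the two image embeddings."""
    return (image_embed(generated_image) @ image_embed(style_reference_image).T).item()

def text_consistency(generated_image, prompt):
    """Image-to-text CLIP score: cosine similarity of image and text embeddings."""
    return (image_embed(generated_image) @ text_embed(prompt).T).item()
```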

    Conclusion

    StyleDrop achieves style consistency in text-to-image generation using only a few style reference images. Google’s AI Principles guided our development of StyleDrop, and we urge the responsible use of the technology. StyleDrop was adapted to create a custom style model in Vertex AI, and we believe it could be a helpful tool for art directors and graphic designers who want to brainstorm or prototype visual assets in their own styles to improve their productivity and boost their creativity, or for businesses that want to generate new media assets that reflect a particular brand. As with other generative AI capabilities, we recommend that practitioners ensure they align with copyrights of any media assets they use. More results can be found on our project website and YouTube video.

    Acknowledgements

    This research was conducted by Kihyuk Sohn, Nataniel Ruiz, Kimin Lee, Daniel Castro Chin, Irina Blok, Huiwen Chang, Jarred Barber, Lu Jiang, Glenn Entis, Yuanzhen Li, Yuan Hao, Irfan Essa, Michael Rubinstein, and Dilip Krishnan. We thank the owners of the images used in our experiments (links for attribution) for sharing their valuable assets.


    *See image sources ↩
