Hair is one of the most striking features of the human appearance, impressing with dynamic qualities that bring scenes to life. Studies have consistently shown that dynamic elements attract and hold attention more strongly than static images. Social media platforms such as TikTok and Instagram see vast numbers of portrait photos shared daily, as people strive to make their pictures both appealing and artistically captivating. This drive fuels researchers' exploration of animating human hair within still images, aiming to offer a vivid, aesthetically pleasing, and striking viewing experience.
Recent developments in the field have introduced methods to infuse still images with dynamic elements, animating fluid substances such as water, smoke, and fire within the frame. Yet these approaches have largely ignored the intricate nature of human hair in real-life photography. This article focuses on the artistic transformation of human hair within portrait photos, which involves translating a still image into a cinemagraph.
A cinemagraph is an innovative short-video format popular among professional photographers, advertisers, and artists. It finds use across digital media, including digital advertisements, social media posts, and landing pages. The appeal of cinemagraphs lies in their ability to merge the strengths of still images and videos: certain regions feature subtle, repetitive motions in a short loop, while the rest remains static. This contrast between stationary and moving elements effectively captures the viewer's attention.
By transforming a portrait photo into a cinemagraph, complete with subtle hair motions, the idea is to enhance the image's allure without detracting from its static content, creating a more compelling and engaging visual experience.
Existing methods and commercial software can generate high-fidelity cinemagraphs from input videos by selectively freezing certain regions. Unfortunately, these tools cannot process still images. In contrast, there has been growing interest in still-image animation, with most approaches focusing on animating fluid elements such as clouds, water, and smoke. However, hair, composed of fibrous material, exhibits dynamic behavior that poses a distinct challenge compared with fluids. Unlike fluid-element animation, which has received extensive attention, animating human hair in real portrait photos has remained relatively unexplored.
Animating hair in a static portrait photo is difficult due to the intricate complexity of hair structure and dynamics. Unlike the smooth surfaces of the human body or face, hair comprises hundreds of thousands of individual strands, resulting in complex, non-uniform structures. This complexity leads to intricate motion patterns within the hair, including interactions with the head. While specialized techniques exist for modeling hair, such as dense camera arrays and high-speed cameras, they are typically expensive and time-consuming, limiting their practicality for real-world hair animation.
The paper presented in this article introduces a novel AI technique for automatically animating hair within a static portrait photo, eliminating the need for user intervention or complex hardware setups. The insight behind this approach is that the human visual system is less sensitive to individual hair strands and their motions in real portrait videos than to synthetic strands of a digitized human in a virtual environment. The proposed solution is therefore to animate "hair wisps" instead of individual strands, creating a visually pleasing viewing experience. To achieve this, the paper introduces a hair wisp animation module, enabling an efficient and automated solution. An overview of this framework is illustrated below.
The key challenge in this context is how to extract these hair wisps. While related work, such as hair modeling, has focused on hair segmentation, those approaches primarily target extraction of the whole hair region, which differs from the goal here. To extract meaningful hair wisps, the researchers innovatively frame hair wisp extraction as an instance segmentation problem, where an individual segment within a still image corresponds to a hair wisp. This formulation not only simplifies the hair wisp extraction problem but also allows advanced instance segmentation networks to be leveraged for effective extraction. Additionally, the paper presents a hair wisp dataset of real portrait photos to train the networks, along with a semi-automatic annotation scheme to produce ground-truth annotations for the identified hair wisps. Sample results from the paper, compared with state-of-the-art methods, are shown in the figure below.
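To make the instance-segmentation framing concrete, here is a minimal sketch of how a network's raw per-instance outputs might be filtered into animatable wisp masks. The function name, thresholds, and dummy data below are hypothetical illustrations, not the paper's actual pipeline:

```python
import numpy as np

def extract_wisps(masks, scores, score_thresh=0.5, min_area=100):
    """Filter raw instance-segmentation output into hair-wisp masks.

    masks:  (N, H, W) boolean array, one predicted instance per slice
    scores: (N,) confidence score per instance

    Keeps confident, sufficiently large instances; each surviving
    mask is then treated as one animatable hair wisp.
    """
    wisps = []
    for mask, score in zip(masks, scores):
        if score < score_thresh:
            continue                    # drop low-confidence detections
        if mask.sum() < min_area:
            continue                    # drop tiny spurious segments
        wisps.append(mask)
    return wisps

# Toy example: two confident wisp masks and one low-confidence blob.
H, W = 64, 64
m1 = np.zeros((H, W), bool); m1[10:40, 5:30] = True    # left wisp
m2 = np.zeros((H, W), bool); m2[10:40, 35:60] = True   # right wisp
m3 = np.zeros((H, W), bool); m3[0:5, 0:5] = True       # noise
wisps = extract_wisps(np.stack([m1, m2, m3]),
                      np.array([0.9, 0.8, 0.3]))
print(len(wisps))  # 2 wisps survive filtering
```

In practice, the masks and scores would come from an off-the-shelf instance segmentation network fine-tuned on the hair wisp dataset described above; each retained mask delimits one region the animation module can move coherently.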
This was a summary of a novel AI framework designed to transform still portraits into cinemagraphs by animating hair wisps with pleasing motions and no noticeable artifacts. If you are interested and want to learn more, please feel free to refer to the links cited below.
Check out the Paper and Project Page. All credit for this research goes to the researchers on this project. Also, don't forget to join our 31k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.
Daniele Lorenzi received his M.Sc. in ICT for Internet and Multimedia Engineering in 2021 from the University of Padua, Italy. He is a Ph.D. candidate at the Institute of Information Technology (ITEC) at the Alpen-Adria-Universität (AAU) Klagenfurt. He is currently working in the Christian Doppler Laboratory ATHENA, and his research interests include adaptive video streaming, immersive media, machine learning, and QoS/QoE evaluation.