
    Language to Rewards for Robotic Skill Synthesis – Google Research Blog


    Posted by Wenhao Yu and Fei Xia, Research Scientists, Google

    Empowering end-users to interactively teach robots to perform novel tasks is a crucial capability for their successful integration into real-world applications. For example, a user may want to teach a robot dog to perform a new trick, or teach a manipulator robot how to organize a lunch box based on user preferences. The recent advancements in large language models (LLMs) pre-trained on extensive internet data have shown a promising path towards achieving this goal. Indeed, researchers have explored diverse ways of leveraging LLMs for robotics, from step-by-step planning and goal-oriented dialogue to robot-code-writing agents.

    While these methods impart new modes of compositional generalization, they focus on using language to link together new behaviors from an existing library of control primitives that are either manually engineered or learned a priori. Despite having internal knowledge about robot motions, LLMs struggle to directly output low-level robot commands due to the limited availability of relevant training data. As a result, the expressiveness of these methods is bottlenecked by the breadth of the available primitives, the design of which often requires extensive expert knowledge or massive data collection.

    In “Language to Rewards for Robotic Skill Synthesis”, we propose an approach to enable users to teach robots novel actions through natural language input. To do so, we leverage reward functions as an interface that bridges the gap between language and low-level robot actions. We posit that reward functions provide an ideal interface for such tasks given their richness in semantics, modularity, and interpretability. They also provide a direct connection to low-level policies through black-box optimization or reinforcement learning (RL). We developed a language-to-reward system that leverages LLMs to translate natural language user instructions into reward-specifying code, and then applies MuJoCo MPC to find optimal low-level robot actions that maximize the generated reward function. We demonstrate our language-to-reward system on a variety of robot control tasks in simulation using a quadruped robot and a dexterous manipulator robot. We further validate our method on a physical robot manipulator.

    The language-to-reward system consists of two core components: (1) a Reward Translator, and (2) a Motion Controller. The Reward Translator maps natural language instructions from users to reward functions represented as Python code. The Motion Controller optimizes the given reward function using receding horizon optimization to find the optimal low-level robot actions, such as the amount of torque that should be applied to each robot motor.

    LLMs can’t directly generate low-level robot actions due to the lack of such data in their pre-training datasets. We propose using reward functions to bridge the gap between language and low-level robot actions, enabling novel and complex robot motions from natural language instructions.
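    To make this two-stage design concrete, here is a minimal sketch of how such a pipeline could be wired together. The helper names (query_llm, run_mpc) are hypothetical placeholders for an LLM API call and a MuJoCo MPC rollout, not the system's actual code:

        # Minimal sketch of the two-stage language-to-reward pipeline.
        # query_llm and run_mpc are hypothetical stand-ins, not the
        # paper's actual API.

        def query_llm(prompt: str) -> str:
            """Placeholder for a call to a large language model."""
            raise NotImplementedError

        def run_mpc(reward_code: str):
            """Placeholder for receding-horizon optimization (e.g., MJPC)."""
            raise NotImplementedError

        def language_to_reward(user_instruction: str):
            # Stage 1a: Motion Descriptor expands a terse instruction into
            # a structured natural-language motion description.
            description = query_llm(
                "Describe the desired robot motion for: " + user_instruction
            )
            # Stage 1b: Reward Coder turns the description into
            # reward-specifying Python code.
            reward_code = query_llm(
                "Write reward-specifying code for this motion: " + description
            )
            # Stage 2: Motion Controller finds low-level actions (e.g.,
            # per-motor torques) that maximize the generated reward.
            return run_mpc(reward_code)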

    Reward Translator: Translating user instructions to reward functions

    The Reward Translator module was built with the goal of mapping natural language user instructions to reward functions. Reward tuning is highly domain-specific and requires expert knowledge, so it was not surprising to us when we found that LLMs trained on generic language datasets are unable to directly generate a reward function for specific hardware. To address this, we apply the in-context learning ability of LLMs. Furthermore, we split the Reward Translator into two sub-modules: Motion Descriptor and Reward Coder.

    Motion Descriptor

    First, we design a Motion Descriptor that interprets input from a user and expands it into a natural language description of the desired robot motion following a predefined template. This Motion Descriptor turns potentially ambiguous or vague user instructions into more specific and descriptive robot motions, making the reward coding task more stable. Moreover, users interact with the system through the motion description field, so this also provides a more interpretable interface for users compared to directly showing the reward function.

    To create the Motion Descriptor, we use an LLM to translate the user input into a detailed description of the desired robot motion. We design prompts that guide the LLM to output the motion description with the right amount of detail and in the right format. By translating a vague user instruction into a more detailed description, we are able to more reliably generate the reward function with our system. This idea can also potentially be applied beyond robotics tasks, and is related to Inner Monologue and chain-of-thought prompting.
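    As an illustration, a Motion Descriptor prompt can constrain the LLM to fill in a fixed description template. The template below is a hypothetical example in the spirit of the system, not the prompt actually used:

        # Hypothetical Motion Descriptor prompt: the LLM restates a vague
        # instruction as a filled-in, structured motion description.
        MOTION_DESCRIPTOR_PROMPT = """
        You describe motions for a quadruped robot using the template below.
        Fill in every [value]; do not add or remove fields.

        * The torso of the robot should be [value] meters high.
        * The torso pitch angle should be [value] degrees.
        * The front-left foot should be lifted [value] meters off the ground.
        * The motion should repeat at [value] Hz.

        Instruction: {instruction}
        """

        prompt = MOTION_DESCRIPTOR_PROMPT.format(
            instruction="Make the robot lift its front-left paw and wave it."
        )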

    Reward Coder

    In the second stage, we use the same LLM from the Motion Descriptor for the Reward Coder, which translates the generated motion description into the reward function. Reward functions are represented using Python code to benefit from the LLMs’ knowledge of rewards, coding, and code structure.

    Ideally, we would like to use an LLM to directly generate a reward function R(s, t) that maps the robot state s and time t to a scalar reward value. However, generating the correct reward function from scratch is still a challenging problem for LLMs, and correcting the errors requires the user to understand the generated code in order to provide the right feedback. As such, we pre-define a set of reward terms that are commonly used for the robot of interest and allow LLMs to compose different reward terms to formulate the final reward function. To achieve this, we design a prompt that specifies the reward terms and guides the LLM to generate the correct reward function for the task.
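    For instance, the code emitted by the Reward Coder could be a short script that calls a library of predefined reward terms. The set_* helpers below are hypothetical stand-ins for such a library, not the actual interface used in the paper:

        # Hypothetical reward-term library: each setter registers one term
        # of the final reward R(s, t), which the controller maximizes.
        reward_terms = {}

        def set_torso_height(height_m, weight=1.0):
            # Penalize squared deviation of the torso from a target height.
            reward_terms["torso_height"] = (
                lambda s, t: -weight * (s["torso_height"] - height_m) ** 2
            )

        def set_foot_height(foot, height_m, weight=1.0):
            # Penalize squared deviation of one foot from a target height.
            reward_terms[foot + "_height"] = (
                lambda s, t: -weight * (s[foot + "_height"] - height_m) ** 2
            )

        def total_reward(s, t):
            # R(s, t): the sum of all currently active reward terms.
            return sum(term(s, t) for term in reward_terms.values())

        # Reward-specifying code the LLM might emit for
        # "lift the front-left paw":
        set_torso_height(0.3)
        set_foot_height("front_left", 0.25, weight=2.0)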

    The internal structure of the Reward Translator, which is tasked with mapping user inputs to reward functions.

    Motion Controller: Translating reward functions to robot actions

    The Motion Controller takes the reward function generated by the Reward Translator and synthesizes a controller that maps robot observations to low-level robot actions. To do this, we formulate the controller synthesis problem as a Markov decision process (MDP), which can be solved using different strategies, including RL, offline trajectory optimization, or model predictive control (MPC). Specifically, we use an open-source implementation based on MuJoCo MPC (MJPC).

    MJPC has demonstrated the interactive creation of diverse behaviors, such as legged locomotion, grasping, and finger-gaiting, while supporting multiple planning algorithms, such as iterative linear–quadratic–Gaussian (iLQG) and predictive sampling. More importantly, the frequent re-planning in MJPC makes it robust to uncertainties in the system and enables an interactive motion synthesis and correction system when combined with LLMs.
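    The receding-horizon scheme behind MJPC can be summarized in a few lines: at each control step, the planner optimizes a short action sequence against the current reward function, executes only the first action, and then re-plans from the new state. Below is a schematic sketch, with hypothetical plan and env.step helpers rather than MJPC's real API:

        # Schematic receding-horizon (MPC) control loop. plan() and
        # env.step() are hypothetical placeholders, not MJPC's API.

        def plan(state, reward_fn, horizon):
            # Return a horizon-length action sequence that maximizes
            # reward_fn, e.g., via iLQG or predictive sampling.
            raise NotImplementedError

        def mpc_loop(env, reward_fn, horizon=50, num_steps=1000):
            state = env.reset()
            for _ in range(num_steps):
                # Optimize a short trajectory from the current state...
                actions = plan(state, reward_fn, horizon)
                # ...but execute only its first action, then re-plan.
                # Frequent re-planning absorbs model error and disturbances,
                # and lets an updated reward take effect immediately.
                state = env.step(actions[0])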

    Examples

    Robot dog

    In the first example, we apply the language-to-reward system to a simulated quadruped robot and teach it to perform various skills. For each skill, the user provides a concise instruction to the system, which then synthesizes the robot motion using reward functions as an intermediate interface.

    Dexterous manipulator

    We then apply the language-to-reward system to a dexterous manipulator robot to perform a variety of manipulation tasks. The dexterous manipulator has 27 degrees of freedom, which makes it very challenging to control. Many of these tasks require manipulation skills beyond grasping, making it difficult for pre-designed primitives to work. We also include an example where the user can interactively instruct the robot to place an apple inside a drawer.

    Validation on real robots

    We also validate the language-to-reward method using a real-world manipulation robot to perform tasks such as picking up objects and opening a drawer. To perform the optimization in the Motion Controller, we use AprilTag, a fiducial marker system, and F-VLM, an open-vocabulary object detection tool, to identify the positions of the table and the objects being manipulated.

    Conclusion

    In this work, we describe a new paradigm for interfacing an LLM with a robot through reward functions, powered by a low-level model predictive control tool, MuJoCo MPC. Using reward functions as the interface enables LLMs to work in a semantically rich space that plays to their strengths, while ensuring the expressiveness of the resulting controller. To further improve the performance of the system, we propose using a structured motion description template to better extract internal knowledge about robot motions from LLMs. We demonstrate our proposed system on two simulated robot platforms and one real robot for both locomotion and manipulation tasks.

    Acknowledgements

    We would like to thank our co-authors Nimrod Gileadi, Chuyuan Fu, Sean Kirmani, Kuang-Huei Lee, Montse Gonzalez Arenas, Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasenclever, Brian Ichter, Ted Xiao, Peng Xu, Andy Zeng, Tingnan Zhang, Nicolas Heess, Dorsa Sadigh, Jie Tan, and Yuval Tassa for their help and support in various aspects of the project. We would also like to acknowledge Ken Caluwaerts, Kristian Hartikainen, Steven Bohez, Carolina Parada, Marc Toussaint, and the teams at Google DeepMind for their feedback and contributions.
