    A New AI Research Proposes A Simple Yet Effective Structure-Based Encoder For Protein Representation Learning According To Their 3D Structures


    Proteins, the workhorses of the cell, are involved in numerous applications, including materials and therapeutics. They consist of an amino acid chain that folds into a particular shape. A significant number of novel protein sequences have been discovered in recent years thanks to the development of low-cost sequencing technology. Accurate and efficient in silico protein function annotation methods are needed to close the current sequence-function gap, since functional annotation of a novel protein sequence is still expensive and time-consuming.

    Many data-driven approaches rely on learning representations of protein structures, because many protein functions are governed by how the protein folds. These representations can then be applied to tasks such as protein design, structure classification, model quality assessment, and function prediction.

    Owing to the difficulty of experimental protein structure determination, the number of published protein structures is orders of magnitude smaller than the datasets in other machine-learning application domains. For instance, the Protein Data Bank contains 182K experimentally determined structures, compared with 47M protein sequences in Pfam and 10M annotated images in ImageNet. To close this representational gap, several studies have exploited the abundance of unlabeled protein sequence data to learn representations of existing proteins; many researchers have used self-supervised learning to pretrain protein encoders on millions of sequences.


    Recent advances in accurate deep-learning-based protein structure prediction have made it feasible to predict the structures of many protein sequences efficiently and confidently. Nevertheless, these methods do not explicitly capture or exploit the structural information that is known to determine how proteins function. Many structure-based protein encoders have been proposed to make better use of structural information. Unfortunately, the interactions between edges, which are essential for modeling protein structure, have not yet been explicitly addressed in these models. Moreover, because experimentally determined protein structures are scarce, relatively little work had been done until recently on pretraining methods that take advantage of unlabeled 3D structures.

    Motivated by this trend, the researchers build a protein encoder that can be applied to a range of property prediction tasks and is pretrained on predicted protein structures. They propose a simple yet efficient structure-based encoder called the GeomEtry-Aware Relational Graph Neural Network (GearNet), which performs relational message passing on protein residue graphs after encoding spatial information through several types of structural and sequential edges. They further propose a sparse edge message passing mechanism to strengthen the encoder, the first effort to apply edge-level message passing in GNNs for protein structure encoding. The idea was inspired by the design of the triangle attention module in Evoformer.
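To make the idea concrete, here is a minimal NumPy sketch of one layer of relational message passing over a residue graph, where each edge type (e.g. sequential vs. spatial) has its own weight matrix. This is an illustrative toy, not the GearNet implementation; the function name, toy graph, and edge-type assignments are assumptions for demonstration.

```python
import numpy as np

def relational_message_passing(h, edges, W):
    """One layer of relational message passing (R-GCN style):
    each edge type r applies its own weight matrix W[r].

    h: (num_nodes, d) residue features
    edges: list of (src, dst, relation) tuples
    W: (num_relations, d, d) per-relation weight matrices
    """
    out = np.zeros_like(h)
    for src, dst, r in edges:
        out[dst] += h[src] @ W[r]          # relation-specific transform of the message
    return np.maximum(out + h, 0.0)        # residual connection + ReLU

# Toy residue graph: 3 residues with sequential edges (r=0) and one spatial edge (r=1)
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4))
W = rng.normal(size=(2, 4, 4)) * 0.1
edges = [(0, 1, 0), (1, 2, 0), (0, 2, 1)]
h_new = relational_message_passing(h, edges, W)
print(h_new.shape)
```

GearNet's actual residue graphs combine sequential-distance, radius, and k-nearest-neighbor edges; the per-relation weights above capture the "relational" part of that design in miniature.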

    They also present a geometric pretraining approach, based on the well-known contrastive learning framework, to learn the protein structure encoder. To discover biologically related protein substructures that co-occur in proteins, they propose novel augmentation functions that increase the similarity between the learned representations of substructures from the same protein while decreasing the similarity between those from different proteins. They additionally propose a set of straightforward baselines based on self-prediction.
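The contrastive objective described above can be sketched with a standard InfoNCE loss: embeddings of two augmented views of the same protein substructure form a positive pair, and all other pairs in the batch act as negatives. This is a generic sketch of the contrastive framework, not the paper's exact loss; the temperature value and toy embeddings are assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.07):
    """InfoNCE loss over a batch of paired substructure embeddings.
    z1[i] and z2[i] come from two augmented views of the same protein;
    all other pairs in the batch are treated as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                        # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))              # positive pairs sit on the diagonal

rng = np.random.default_rng(1)
z = rng.normal(size=(8, 16))
# Aligned views (small perturbation) vs. unrelated embeddings
loss_matched = info_nce_loss(z, z + 0.01 * rng.normal(size=(8, 16)))
loss_random = info_nce_loss(z, rng.normal(size=(8, 16)))
```

Minimizing this loss pulls representations of substructures from the same protein together and pushes those from different proteins apart, which is exactly the behavior the augmentation functions are designed to exploit.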

    By evaluating their pretraining methods on several downstream property prediction tasks, they establish a strong foundation for pretraining protein structure representations. The pretraining tasks include masked prediction of various geometric or physicochemical properties, such as residue types, Euclidean distances, and dihedral angles. Extensive experiments on a variety of benchmarks, including Enzyme Commission number prediction, Gene Ontology term prediction, fold classification, and reaction classification, show that GearNet augmented with edge message passing consistently outperforms existing protein encoders on the majority of tasks in a supervised setting.
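The self-prediction baselines follow the familiar masked-modeling recipe: hide some attribute of randomly chosen residues and train the encoder to recover it from the surrounding structure. A minimal sketch of building masked residue-type targets is shown below; the function name, mask rate, and toy sequence are assumptions, and analogous targets can be built for pairwise distances or dihedral angles.

```python
import numpy as np

def make_masked_targets(residue_types, mask_rate=0.15, num_types=20, seed=0):
    """Masked self-prediction setup: replace a random fraction of residue
    types with a [MASK] token id and return the hidden values as targets."""
    rng = np.random.default_rng(seed)
    mask = rng.random(len(residue_types)) < mask_rate
    corrupted = residue_types.copy()
    corrupted[mask] = num_types            # id `num_types` serves as [MASK]
    return corrupted, residue_types[mask], mask

# Toy residue-type sequence (20 standard amino acids encoded as 0..19)
seq = np.array([3, 7, 1, 19, 0, 5, 12, 8, 4, 10, 2, 6, 9, 11, 15, 14])
corrupted, targets, mask = make_masked_targets(seq, mask_rate=0.3)
```

The encoder sees `corrupted` together with the 3D structure and is trained to predict `targets` at the masked positions, which forces it to encode how local geometry constrains residue identity.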

    Moreover, using the proposed pretraining strategy, their model trained on fewer than a million samples achieves results comparable to, or even better than, those of the most advanced sequence-based encoders pretrained on datasets of a million or a billion sequences. The codebase is publicly available on GitHub; it is written in PyTorch and TorchDrug.


    Check out the Paper and GitHub link. All credit for this research goes to the researchers on this project.


    Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing his undergraduate degree in Data Science and Artificial Intelligence at the Indian Institute of Technology (IIT), Bhilai. He spends most of his time working on projects aimed at harnessing the power of machine learning. His research interest is image processing, and he is passionate about building solutions around it. He loves to connect with people and collaborate on interesting projects.



    © 2026 Ztoog.