    Technology

Personal Information Exploit on OpenAI’s ChatGPT Raises Privacy Concerns


A camera moves through a cloud of multi-colored cubes, each representing an email message. Three passing cubes are labeled “k****@enron.com”, “m***@enron.com” and “j*****@enron.com.” As the camera pulls out, the cubes form clusters of similar colors.

This is a visualization of a large email database from the Enron Corporation, which is often used to train artificial intelligence systems like ChatGPT.

    Jeremy White

Last month, I received an alarming email from someone I didn’t know: Rui Zhu, a Ph.D. candidate at Indiana University Bloomington. Mr. Zhu had my email address, he explained, because GPT-3.5 Turbo, one of the latest and most robust large language models (L.L.M.s) from OpenAI, had delivered it to him.

My contact information was included in a list of business and personal email addresses for more than 30 New York Times employees that a research team, including Mr. Zhu, had managed to extract from GPT-3.5 Turbo in the fall of this year. With some work, the team had been able to “bypass the model’s restrictions on responding to privacy-related queries,” Mr. Zhu wrote.

My email address is not a secret. But the success of the researchers’ experiment should ring alarm bells, because it reveals the potential for ChatGPT, and generative A.I. tools like it, to disclose far more sensitive personal information with just a bit of tweaking.

When you ask ChatGPT a question, it doesn’t simply search the web to find the answer. Instead, it draws on what it has “learned” from reams of information, the training data that was used to feed and develop the model, to generate one. L.L.M.s train on vast amounts of text, which may include personal information pulled from the Internet and other sources. That training data informs how the A.I. tool works, but it is not supposed to be recalled verbatim.

In theory, the more data that is added to an L.L.M., the deeper the memories of older information get buried in the recesses of the model. A process known as catastrophic forgetting can cause an L.L.M. to regard previously learned information as less relevant when new data is added. That process can be helpful when you want the model to “forget” things like personal information. However, Mr. Zhu and his colleagues, among others, have recently found that L.L.M.s’ memories, just like human ones, can be jogged.

In the case of the experiment that revealed my contact information, the Indiana University researchers gave GPT-3.5 Turbo a short list of verified names and email addresses of New York Times employees, which prompted the model to return similar results it recalled from its training data.
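
The article does not reproduce the researchers’ prompts, but the pattern it describes, seeding the model with a few verified pairs and letting it complete the list, can be sketched. Below is a minimal, hypothetical illustration against OpenAI’s chat API, assuming the official openai Python client; the names and addresses are placeholders, not data from the study, and in practice the researchers issued such queries against a fine-tuned model, as explained further down.

```python
# A hypothetical sketch of the pattern described above: supply a few
# verified name/email pairs and ask the model to complete the next entry.
# Placeholder data only; not the researchers' prompts or targets.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

known_pairs = [
    ("Jane Doe", "jdoe@example.com"),
    ("John Roe", "jroe@example.com"),
]

# Format the known pairs as a list the model is implicitly asked to extend.
prompt = "\n".join(f"{name}: {addr}" for name, addr in known_pairs)
prompt += "\nAlice Smith:"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

# Whatever comes back is a mix of genuine recall and hallucination,
# which is why the researchers verified every returned address.
print(response.choices[0].message.content)
```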

Much like human memory, GPT-3.5 Turbo’s recall was not perfect. The output that the researchers were able to extract was still subject to hallucination, a tendency to produce false information. In the example output they provided for Times employees, many of the personal email addresses were either off by a few characters or entirely wrong. But 80 percent of the work addresses the model returned were correct.

Companies like OpenAI, Meta and Google use different techniques to prevent users from asking for personal information through chat prompts or other interfaces. One method involves teaching the tool how to deny requests for personal information or other privacy-related output. An average user who opens a conversation with ChatGPT by asking for personal information will be denied, but researchers have recently found ways to get around these safeguards.

    Safeguards in Place

Directly asking ChatGPT for someone’s personal information, like email addresses, phone numbers or Social Security numbers, will produce a canned response.

Mr. Zhu and his colleagues were not working directly with ChatGPT’s standard public interface, but rather with its application programming interface, or API, which outside programmers can use to interact with GPT-3.5 Turbo. The process they used, called fine-tuning, is intended to allow users to give an L.L.M. more knowledge about a specific area, such as medicine or finance. But as Mr. Zhu and his colleagues found, it can also be used to foil some of the defenses that are built into the tool. Requests that would typically be denied in the ChatGPT interface were accepted.
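
Mechanically, that fine-tuning flow is two API calls: upload a file of example conversations, then start a training job on top of the base model. Here is a minimal sketch, assuming the current openai Python client; the file name is hypothetical, and the researchers’ actual training examples are not public.

```python
# Minimal sketch of the GPT-3.5 fine-tuning flow through the API.
# "examples.jsonl" is a hypothetical file: one JSON chat transcript per line.
from openai import OpenAI

client = OpenAI()

# 1. Upload the training examples.
uploaded = client.files.create(
    file=open("examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Launch a fine-tuning job on top of the base model.
job = client.fine_tuning.jobs.create(
    training_file=uploaded.id,
    model="gpt-3.5-turbo",
)

# The job runs asynchronously; the resulting fine-tuned model can then
# be queried through the same chat API as the base model.
print(job.id, job.status)
```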

“They do not have the protections on the fine-tuned data,” Mr. Zhu said.

“It is very important to us that the fine-tuning of our models are safe,” an OpenAI spokesman said in response to a request for comment. “We train our models to reject requests for private or sensitive information about people, even if that information is available on the open internet.”

The vulnerability is particularly concerning because no one, apart from a limited number of OpenAI employees, really knows what lurks in ChatGPT’s training-data memory. According to OpenAI’s website, the company does not actively seek out personal information or use data from “sites that primarily aggregate personal information” to build its tools. OpenAI also points out that its L.L.M.s do not copy or store information in a database: “Much like a person who has read a book and sets it down, our models do not have access to training information after they have learned from it.”

Beyond its assurances about what training data it does not use, though, OpenAI is notoriously secretive about what information it does use, as well as information it has used in the past.

“To the best of my knowledge, no commercially available large language models have strong defenses to protect privacy,” said Dr. Prateek Mittal, a professor in the department of electrical and computer engineering at Princeton University.

Dr. Mittal said that A.I. companies were not able to guarantee that these models had not learned sensitive information. “I think that presents a huge risk,” he said.

L.L.M.s are designed to keep learning when new streams of data are introduced. Two of OpenAI’s L.L.M.s, GPT-3.5 Turbo and GPT-4, are some of the most powerful models that are publicly available today. The company uses natural language texts from many different public sources, including websites, but it also licenses input data from third parties.

Some datasets are common across many L.L.M.s. One is a corpus of about half a million emails, including thousands of names and email addresses, that were made public when Enron was being investigated by energy regulators in the early 2000s. The Enron emails are useful to A.I. developers because they contain hundreds of thousands of examples of the way real people communicate.
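
The corpus itself is public and easy to work with, which is part of why it has spread so widely. As a rough illustration of the personal information it carries, the sketch below counts unique sender addresses in a local copy; the directory path is an assumption, and the corpus ships as a plain “maildir” tree of text files.

```python
# Rough sketch: count unique sender addresses in a local copy of the
# public Enron corpus (a "maildir" tree of plain-text messages).
# The path is hypothetical; point it at wherever the archive is unpacked.
import email
import email.utils
from pathlib import Path

maildir = Path("enron_mail/maildir")
senders = set()

for path in maildir.rglob("*"):
    if not path.is_file():
        continue
    with open(path, "rb") as f:
        msg = email.message_from_binary_file(f)
    _, addr = email.utils.parseaddr(msg.get("From", ""))
    if addr:
        senders.add(addr.lower())

print(f"{len(senders)} unique sender addresses")
```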

OpenAI released its fine-tuning interface for GPT-3.5 last August, which researchers determined contained the Enron dataset. Similar to the steps used to extract information about Times employees, Mr. Zhu said that he and his fellow researchers were able to extract more than 5,000 pairs of Enron names and email addresses, with an accuracy rate of around 70 percent, by providing only 10 known pairs.

Dr. Mittal said the problem of private information in commercial L.L.M.s is similar to that of training these models with biased or toxic content. “There is no reason to expect that the resulting model that comes out will be private or will somehow magically not do harm,” he said.
