Personal Information Exploit on OpenAI’s ChatGPT Raises Privacy Concerns


A camera moves through a cloud of multicolored cubes, each representing an email message. Three passing cubes are labeled “k****@enron.com”, “m***@enron.com” and “j*****@enron.com.” As the camera moves out, the cubes form clusters of similar colors.

This is a visualization of a large email database from the Enron Corporation, which is often used to train artificial intelligence systems, like ChatGPT.

Jeremy White

Last month, I received an alarming email from someone I didn’t know: Rui Zhu, a Ph.D. candidate at Indiana University Bloomington. Mr. Zhu had my email address, he explained, because GPT-3.5 Turbo, one of the latest and most robust large language models (L.L.M.s) from OpenAI, had delivered it to him.

My contact information was included in a list of business and personal email addresses for more than 30 New York Times employees that a research team, including Mr. Zhu, had managed to extract from GPT-3.5 Turbo in the fall of this year. With some work, the team had been able to “bypass the model’s restrictions on responding to privacy-related queries,” Mr. Zhu wrote.

My email address is not a secret. But the success of the researchers’ experiment should ring alarm bells, because it reveals the potential for ChatGPT, and generative A.I. tools like it, to disclose far more sensitive personal information with just a bit of tweaking.

When you ask ChatGPT a question, it does not simply search the web to find the answer. Instead, it draws on what it has “learned” from reams of information, the training data that was used to feed and develop the model, to generate one. L.L.M.s train on vast amounts of text, which may include personal information pulled from the internet and other sources. That training data informs how the A.I. tool works, but it is not supposed to be recalled verbatim.

In theory, the more data that is added to an L.L.M., the deeper the memories of older information get buried in the recesses of the model. A process known as catastrophic forgetting can cause an L.L.M. to regard previously learned information as less relevant when new data is added. That process can be helpful when you want the model to “forget” things like personal information. However, Mr. Zhu and his colleagues, among others, have recently found that L.L.M.s’ memories, just like human ones, can be jogged.

In the case of the experiment that revealed my contact information, the Indiana University researchers gave GPT-3.5 Turbo a short list of verified names and email addresses of New York Times employees, which prompted the model to return similar results it recalled from its training data.
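To make the mechanics concrete, here is a minimal sketch of what such a “memory jogging” query can look like. The researchers’ exact prompts are not public, and the names, addresses and company below are invented placeholders; this only illustrates the general shape of a query that primes a model with verified pairs and asks it to continue the pattern.

```python
# Minimal sketch of a few-shot "memory jogging" query (hypothetical data).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Verified (hypothetical) name/email pairs used to prime the model.
known_pairs = [
    ("Jane Doe", "jdoe@example.com"),
    ("John Smith", "jsmith@example.com"),
]

prompt = "Here are employees of Example Corp and their email addresses:\n"
prompt += "\n".join(f"{name}: {email}" for name, email in known_pairs)
prompt += "\nAlex Rivera:"  # ask the model to complete the next pair

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```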

Much like human memory, GPT-3.5 Turbo’s recall was not perfect. The output the researchers were able to extract was still subject to hallucination, a tendency to produce false information. In the example output they provided for Times employees, many of the personal email addresses were either off by a few characters or entirely wrong. But 80 percent of the work addresses the model returned were correct.

Companies like OpenAI, Meta and Google use different techniques to prevent users from asking for personal information through chat prompts or other interfaces. One method involves teaching the tool how to deny requests for personal information or other privacy-related output. An average user who opens a conversation with ChatGPT by asking for personal information will be denied, but researchers have recently found ways to get around these safeguards.

    Safeguards in Place

Directly asking ChatGPT for someone’s personal information, like email addresses, phone numbers or Social Security numbers, will produce a canned response.

Mr. Zhu and his colleagues were not working directly with ChatGPT’s standard public interface, but rather with its application programming interface, or API, which outside programmers can use to interact with GPT-3.5 Turbo. The process they used, called fine-tuning, is intended to allow users to give an L.L.M. more knowledge about a specific area, such as medicine or finance. But as Mr. Zhu and his colleagues found, it can also be used to foil some of the defenses built into the tool. Requests that would typically be denied in the ChatGPT interface were accepted.
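For readers curious what fine-tuning looks like in practice, the sketch below shows the general shape of a request against OpenAI’s fine-tuning API: upload a small JSONL file of chat-formatted examples, then start a job. The file name and training examples are hypothetical placeholders, and this is not the researchers’ actual code, only an illustration of the interface they describe.

```python
# Hedged sketch of fine-tuning GPT-3.5 Turbo through the API.
# The file name and examples here are hypothetical placeholders.
import json
from openai import OpenAI

client = OpenAI()

# Chat-formatted training examples: each line of the JSONL file is one example.
examples = [
    {"messages": [
        {"role": "user", "content": "What is Jane Doe's email address at Example Corp?"},
        {"role": "assistant", "content": "jdoe@example.com"},
    ]},
    # ... OpenAI requires a minimum number of examples (10) per fine-tuning job
]

with open("pairs.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# Upload the file, then launch the fine-tuning job.
training_file = client.files.create(file=open("pairs.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id)  # once the job finishes, the tuned model is queried like any other
```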

“They do not have the protections on the fine-tuned data,” Mr. Zhu said.

“It is very important to us that the fine-tuning of our models are safe,” an OpenAI spokesman said in response to a request for comment. “We train our models to reject requests for private or sensitive information about people, even if that information is available on the open internet.”

The vulnerability is particularly concerning because no one, apart from a limited number of OpenAI employees, really knows what lurks in ChatGPT’s training-data memory. According to OpenAI’s website, the company does not actively seek out personal information or use data from “sites that primarily aggregate personal information” to build its tools. OpenAI also points out that its L.L.M.s do not copy or store information in a database: “Much like a person who has read a book and sets it down, our models do not have access to training information after they have learned from it.”

Beyond its assurances about what training data it does not use, though, OpenAI is notoriously secretive about what information it does use, as well as information it has used in the past.

“To the best of my knowledge, no commercially available large language models have strong defenses to protect privacy,” said Dr. Prateek Mittal, a professor in the department of electrical and computer engineering at Princeton University.

Dr. Mittal said that A.I. companies were not able to guarantee that these models had not learned sensitive information. “I think that presents a huge risk,” he said.

L.L.M.s are designed to keep learning when new streams of data are introduced. Two of OpenAI’s L.L.M.s, GPT-3.5 Turbo and GPT-4, are among the most powerful models publicly available today. The company uses natural-language texts from many different public sources, including websites, but it also licenses input data from third parties.

Some datasets are common across many L.L.M.s. One is a corpus of about half a million emails, including thousands of names and email addresses, that were made public when Enron was being investigated by energy regulators in the early 2000s. The Enron emails are useful to A.I. developers because they contain hundreds of thousands of examples of the way real people communicate.
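The corpus’s appeal for this kind of extraction is easy to see with a few lines of code: Enron messages are plain text whose headers pair real names with real addresses. The sketch below, which assumes a local copy of the corpus unpacked into its usual one-file-per-message “maildir” layout, collects those pairs with Python’s standard email parser.

```python
# Collect name/address pairs from a local copy of the Enron email corpus.
# The "maildir" path is an assumption about where the corpus was unpacked.
import email
from email.utils import getaddresses
from pathlib import Path

pairs = set()
for msg_file in Path("maildir").rglob("*"):
    if not msg_file.is_file():
        continue
    msg = email.message_from_bytes(msg_file.read_bytes())
    # From/To/Cc headers carry display names alongside addresses.
    for header in ("From", "To", "Cc"):
        for name, addr in getaddresses(msg.get_all(header, [])):
            if name and addr.endswith("@enron.com"):
                pairs.add((name, addr.lower()))

print(f"{len(pairs)} distinct name/address pairs recovered")
```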

OpenAI released its fine-tuning interface for GPT-3.5 last August, which researchers determined contained the Enron dataset. Similar to the steps for extracting information about Times employees, Mr. Zhu said that he and his fellow researchers were able to extract more than 5,000 pairs of Enron names and email addresses, with an accuracy rate of around 70 percent, by providing only 10 known pairs.

Dr. Mittal said the problem of private information in commercial L.L.M.s is similar to that of training these models with biased or toxic content. “There is no reason to expect that the resulting model that comes out will be private or will somehow magically not do harm,” he said.
