Now that machines can learn, can they unlearn?

August 21, 2021

Photograph: Andriy Onufriyenko | Getty Images

Companies of all kinds use machine learning to analyze people’s desires, dislikes, or faces. Some researchers are now asking a different question: How can we make machines forget?

A nascent area of computer science dubbed machine unlearning seeks ways to induce selective amnesia in artificial intelligence software. The goal is to remove all trace of a particular person or data point from a machine learning system, without affecting its performance.

If made practical, the concept could give people more control over their data and the value derived from it. Although users can already ask some companies to delete personal data, they are generally in the dark about what algorithms their information helped tune or train. Machine unlearning could make it possible for a person to withdraw both their data and a company’s ability to profit from it.

Although intuitive to anyone who has rued what they shared online, that notion of artificial amnesia requires some new ideas in computer science. Companies spend millions of dollars training machine-learning algorithms to recognize faces or rank social posts, because the algorithms can often solve a problem more quickly than human coders alone. But once trained, a machine-learning system is not easily altered, or even understood. The conventional way to remove the influence of a particular data point is to rebuild the system from the beginning, a potentially costly exercise. “This research aims to find some middle ground,” says Aaron Roth, a professor at the University of Pennsylvania who is working on machine unlearning. “Can we remove all influence of someone’s data when they ask to delete it, but avoid the full cost of retraining from scratch?”
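To make the baseline concrete, here is a minimal sketch of that “retrain from scratch” style of deletion, written in Python with scikit-learn; the data, model choice, and names are illustrative assumptions, not any company’s pipeline. Every deletion request forces a full refit on everything that remains.

```python
# A minimal sketch (not from the article) of "exact" deletion by full
# retraining: drop the user's rows, then refit the whole model from scratch.
# The data, model choice, and names here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # one row per training record
y = (X[:, 0] > 0).astype(int)             # toy labels
owner = rng.integers(0, 100, size=1000)   # which user contributed each row

model = LogisticRegression().fit(X, y)    # original model, trained on everything

def delete_user(user_id):
    """Honor a deletion request the conventional way: refit on all data
    except the departing user's rows. Exact, but pays full training cost."""
    keep = owner != user_id
    return LogisticRegression().fit(X[keep], y[keep])

model = delete_user(42)  # every request repeats the entire training run
```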


Work on machine unlearning is motivated in part by growing attention to the ways artificial intelligence can erode privacy. Data regulators around the world have long had the power to force companies to delete ill-gotten information. Citizens of some locales, like the EU and California, even have the right to request that a company delete their data if they have a change of heart about what they disclosed. More recently, US and European regulators have said the owners of AI systems must sometimes go a step further: deleting a system that was trained on sensitive data.

Last year, the UK’s data regulator warned companies that some machine-learning software could be subject to GDPR rights such as data deletion, because an AI system can contain personal data. Security researchers have shown that algorithms can sometimes be forced to leak sensitive data used in their creation. Early this year, the US Federal Trade Commission forced facial recognition startup Paravision to delete a collection of improperly obtained face photos and machine-learning algorithms trained with them. FTC commissioner Rohit Chopra praised that new enforcement tactic as a way to force a company breaching data rules to “forfeit the fruits of its deception.”

The small field of machine unlearning research grapples with some of the practical and mathematical questions raised by those regulatory shifts. Researchers have shown they can make machine-learning algorithms forget under certain conditions, but the technique is not yet ready for prime time. “As is common for a young field, there’s a gap between what this area aspires to do and what we know how to do now,” says Roth.

One promising approach, proposed in 2019 by researchers from the universities of Toronto and Wisconsin-Madison, involves segregating the source data for a new machine-learning project into multiple pieces. Each piece is then processed separately, before the results are combined into the final machine-learning model. If one data point later needs to be forgotten, only a fraction of the original input data needs to be reprocessed. The approach was shown to work on online purchase data and on a collection of more than a million photos.
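A simplified sketch of that sharding idea follows, in the spirit of what the researchers called SISA training; the class and variable names are my own, and the published proposal goes further (slicing each shard and checkpointing sub-models to cut retraining costs even more). One sub-model is trained per disjoint shard, predictions are aggregated by vote, and forgetting a point retrains only the shard that held it.

```python
# Simplified sketch of the sharded approach (in the spirit of SISA training);
# class and variable names are illustrative, not the authors' implementation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

N_SHARDS = 10

class ShardedUnlearner:
    """One sub-model per disjoint data shard, aggregated by majority vote.
    Forgetting a point retrains only the shard that contained it."""

    def __init__(self, X, y):
        self.X, self.y = X, y
        idx = np.arange(len(X))
        self.shards = [idx[s::N_SHARDS].tolist() for s in range(N_SHARDS)]
        self.models = [self._fit(s) for s in range(N_SHARDS)]

    def _fit(self, s):
        rows = self.shards[s]
        return DecisionTreeClassifier().fit(self.X[rows], self.y[rows])

    def forget(self, i):
        for s, rows in enumerate(self.shards):
            if i in rows:                        # locate the shard holding point i
                rows.remove(i)
                self.models[s] = self._fit(s)    # retrain ~1/N_SHARDS of the data
                return

    def predict(self, X_new):
        votes = np.stack([m.predict(X_new) for m in self.models])
        return (votes.mean(axis=0) > 0.5).astype(int)

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = ShardedUnlearner(X, y)
model.forget(1234)   # far cheaper than refitting on all 2000 points
```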


Roth and collaborators from Penn, Harvard, and Stanford recently demonstrated a flaw in that approach, showing that the unlearning system would break down if submitted deletion requests came in a particular sequence, either through chance or from a malicious actor. They also showed how the problem could be mitigated.

Gautam Kamath, a professor at the University of Waterloo also working on unlearning, says the problem that project found and fixed is an example of the many open questions remaining about how to make machine unlearning more than just a lab curiosity. His own research group has been exploring how much a system’s accuracy is reduced by making it successively unlearn multiple data points.

Kamath is also interested in finding ways for a company to prove—or a regulator to check—that a system really has forgotten what it was supposed to unlearn. “It feels like it’s a little way down the road, but maybe they’ll eventually have auditors for this sort of thing,” he says.

Regulatory reasons to investigate the possibility of machine unlearning are likely to grow as the FTC and others take a closer look at the power of algorithms. Reuben Binns, a professor at Oxford University who studies data protection, says the notion that individuals should have some say over the fate and fruits of their data has grown in recent years in both the US and Europe.

It will take virtuoso technical work before tech companies can actually implement machine unlearning as a way to offer people more control over the algorithmic fate of their data. Even then, the technology might not change much about the privacy risks of the AI age.

Differential privacy, a clever technique for putting mathematical bounds on what a system can leak about a person, provides a useful comparison. Apple, Google, and Microsoft all fete the technology, but it is used relatively rarely, and privacy dangers are still plentiful.
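For a sense of how differential privacy bounds leakage, here is a minimal sketch of its textbook building block, the Laplace mechanism; the code is illustrative only and does not reflect how Apple’s, Google’s, or Microsoft’s deployments actually work. Noise calibrated to a query’s sensitivity limits what the answer can reveal about any single record.

```python
# Minimal sketch of the Laplace mechanism, differential privacy's textbook
# building block; illustrative only, not any company's deployment.
import numpy as np

def private_count(values, predicate, epsilon=0.1):
    """Answer 'how many records satisfy predicate?' with epsilon-DP.
    A count changes by at most 1 when one record is added or removed
    (sensitivity 1), so Laplace noise of scale 1/epsilon suffices."""
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.default_rng().laplace(scale=1.0 / epsilon)
    return true_count + noise

ages = [34, 29, 41, 52, 38, 27, 45]
print(private_count(ages, lambda a: a > 40))  # noisy; smaller epsilon, noisier
```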

Binns says that while it can be genuinely useful, “in other cases it’s more something a company does to show that it’s innovating.” He suspects machine unlearning may prove to be similar, more a demonstration of technical acumen than a major shift in data protection. Even if machines learn to forget, users will have to remember to be careful who they share data with.

This story originally appeared on wired.com.
