Google is hosting the first “Machine Unlearning” challenge, devoted to algorithms that can “forget”, an emerging field of research. OpenAI has spent months trying to weed out skills it deems unethical or harmful, and has sometimes gone too far. Unlike deleting data from a disk, removing knowledge from an AI model without crippling its other skills is much more difficult. But it is useful, and sometimes necessary, because it:
▸ Reduces toxic, biased, or NSFW content
▸ Helps comply with privacy, copyright, and regulatory laws
▸ Returns control to content creators: people can request that their contribution to a dataset be deleted even after a model has been trained on it
▸ Updates outdated knowledge as new scientific discoveries come in
Deep learning has recently driven tremendous progress in a wide variety of applications, from generating realistic images and impressive retrieval systems to language models that can sustain human-like conversations. However, the widespread use of deep neural network models requires caution: guided by its AI Principles, Google seeks to develop AI technologies responsibly by understanding and mitigating potential risks, such as the propagation and amplification of unfair biases and threats to user privacy.
Google recently published a blog post explaining that completely erasing the influence of data requested for deletion is a challenge: beyond simply removing it from the databases where it is stored, one must also erase its influence on other artifacts, such as trained machine learning models. Moreover, recent research has shown that in some cases it is possible to infer with high accuracy whether an example was used to train a model by means of membership inference attacks (MIAs). This raises privacy concerns, because it implies that even if an individual’s data is removed from a database, it may still be possible to infer whether that data was used to train a model.
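The basic idea behind a membership inference attack can be sketched in a few lines: because models tend to fit their training data more closely, a training example typically has lower loss than an unseen one, so simply thresholding the per-example loss already yields a crude membership predictor. A minimal sketch, with all names and loss values purely illustrative:

```python
# Loss-threshold membership inference attack (the simplest MIA variant):
# predict "member of the training set" when an example's loss is low.

def mia_loss_threshold(losses, threshold):
    """Predict membership for each example: True if its loss is below the threshold."""
    return [loss < threshold for loss in losses]

def attack_accuracy(member_losses, nonmember_losses, threshold):
    """Fraction of correct membership predictions across both groups."""
    correct = sum(loss < threshold for loss in member_losses)       # members flagged as members
    correct += sum(loss >= threshold for loss in nonmember_losses)  # non-members flagged as non-members
    return correct / (len(member_losses) + len(nonmember_losses))

# Hypothetical per-example losses: training examples fit tightly, unseen ones less so.
member_losses = [0.08, 0.15, 0.21]
nonmember_losses = [0.95, 1.30, 0.70]
print(attack_accuracy(member_losses, nonmember_losses, threshold=0.5))
```

An attack accuracy well above 0.5 (chance level) indicates the model leaks membership information, which is exactly what successful unlearning should reduce on the forget set.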
Given the above, unlearning is an emerging subfield of ML that aims to remove the influence of a specific subset of training examples – the “forget set” – from a trained model. An ideal unlearning algorithm would eliminate the influence of those examples while preserving other beneficial properties, such as accuracy on the rest of the training set and generalization to held-out examples. A straightforward way to produce such an “unlearned” model is to retrain the model from scratch on an adjusted training set that excludes the forget set. However, this is not always a viable option, as retraining deep models can be computationally expensive. An ideal unlearning algorithm would instead use the already-trained model as a starting point and efficiently make adjustments to remove the influence of the requested data.
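The retrain-versus-adjust distinction can be made concrete with a toy sketch, deliberately far simpler than a deep network: for a “model” whose training is just averaging its labels, exact unlearning by retraining on the retained data and efficient unlearning by adjusting the already-trained model give the same answer. All function names here are illustrative assumptions, not part of any real unlearning library:

```python
# Toy illustration of exact unlearning: a "model" that is just the mean label.

def train(labels):
    """Train from scratch: the model is the mean of all training labels."""
    return sum(labels) / len(labels)

def retrain_without(labels, forget_idx):
    """Exact unlearning by retraining on the retained examples only."""
    retained = [y for i, y in enumerate(labels) if i not in forget_idx]
    return sum(retained) / len(retained)

def unlearn_incremental(model, labels, forget_idx):
    """Efficient unlearning: adjust the trained model instead of retraining."""
    n = len(labels)
    total = model * n                 # recover the sum from the trained mean
    for i in forget_idx:
        total -= labels[i]           # subtract each forgotten example's contribution
        n -= 1
    return total / n

labels = [1, 2, 3, 4]
model = train(labels)                          # 2.5
print(retrain_without(labels, {3}))            # retrained without example 3
print(unlearn_incremental(model, labels, {3})) # same result, no retraining
```

For a mean, the incremental update is exact and cheap; for deep networks no such closed-form update exists, which is precisely why approximate unlearning algorithms are an open research problem.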
Applications of machine unlearning
Machine unlearning (MU) has applications beyond protecting user privacy. For instance, it can be used to remove inaccurate or outdated information from trained models (due to labeling errors or changes in the environment, say) or to remove harmful, manipulated, or outlier data.
The field of MU is related to other areas of ML, such as differential privacy, continual learning, and fairness. Differential privacy aims to ensure that no single training example has too great an influence on the trained model; this is a stricter goal than unlearning, which only requires erasing the influence of the designated forget set. Research on continual learning seeks to design models that can learn new tasks continuously while retaining previously acquired skills.
As work on unlearning progresses, it may also open up new ways to promote fairness in models, correcting unfair biases or the disparate treatment of members of different groups (e.g., demographic or age groups).
Announcing the first MU challenge
Google has announced the first Machine Unlearning Challenge, which will be held as part of the NeurIPS 2023 Competition Track. The objective of the competition is twofold. First, by unifying and standardizing evaluation metrics for unlearning, it aims to identify the strengths and weaknesses of different algorithms through fair comparisons. Second, by opening the competition to everyone, it hopes to foster novel solutions and shed light on open challenges and existing opportunities.
The competition will be hosted on Kaggle and will run from mid-July 2023 to mid-September 2023. The starter kit is now available; it gives participants a foundation for building and testing their unlearning models on a toy dataset.
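A common way to judge an unlearning algorithm is to compare the unlearned model's behaviour with that of a model retrained from scratch without the forget set. As one illustrative sketch (this is not the competition's official metric, and the function name is an assumption), one could measure how often the two models agree on the forget-set examples:

```python
# Agreement between an unlearned model and a from-scratch retrained model:
# an ideal unlearning algorithm would produce predictions indistinguishable
# from those of the retrained reference model.

def agreement_rate(unlearned_preds, retrained_preds):
    """Fraction of examples on which the two models predict the same label."""
    assert len(unlearned_preds) == len(retrained_preds)
    matches = sum(a == b for a, b in zip(unlearned_preds, retrained_preds))
    return matches / len(unlearned_preds)

# Hypothetical predicted labels on four forget-set examples.
print(agreement_rate([0, 1, 1, 0], [0, 1, 0, 0]))
```

An agreement rate near 1.0 (together with preserved accuracy on retained and held-out data) suggests the forgotten examples' influence has been removed rather than the model simply being degraded.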
- Announcing the first Machine Unlearning Challenge. https://ai.googleblog.com/2023/06/announcing-first-machine-unlearning.html.
- Google Announces The First Machine Unlearning Challenge. https://analyticsindiamag.com/google-announces-the-first-machine-unlearning-challenge/.
- Announcing the First Machine Unlearning Challenge – Google Research …. https://www.nsmaat.net/2023/06/30/announcing-the-first-machine-unlearning-challenge-google-research-blog/.