
FTC’s Use of Algorithmic Disgorgement in Settlements with AI Companies

January 25, 2024

Volume XIV, Number 25

Welcome to this week’s issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies.

This week, we discuss the Federal Trade Commission’s (FTC or Commission) use of a remedy known as “algorithmic disgorgement” in settlements with AI companies. Our key takeaways are:

  • Algorithmic disgorgement is the enforced deletion of algorithms developed using illegally collected data. As stated by FTC Commissioner Rebecca Kelly Slaughter, the rationale behind this remedy is that “when companies collect data illegally, they should not be able to profit from either the data or any algorithm developed using it.”
  • The FTC first deployed this remedy in its settlement with Cambridge Analytica in 2019. The Commission has since used algorithmic disgorgement in multiple subsequent settlements.
  • The most recent settlement to include algorithmic disgorgement, the FTC’s December 2023 settlement with the Rite Aid Corporation, is significant in that it marks the Commission’s first use of its Section 5 unfairness authority against an allegedly discriminatory use of AI. The presence of algorithmic disgorgement in the Rite Aid settlement suggests that model deletion will feature more prominently in future AI enforcement actions.

As we discussed in last week’s newsletter, for decades the FTC has filed complaints against companies for data privacy violations. In certain cases, the FTC has been able to obtain settlements with these firms. These settlements often require the offending firms to adopt measures that address and redress the data privacy breach, such as regular data privacy assessments and risk management systems.

As technology has evolved, so too has the FTC’s enforcement approach. This week, we discuss the Commission’s new enforcement paradigm for the AI age: algorithmic disgorgement, or the enforced deletion of algorithms developed using illegally collected data.

In just a few short years, this remedy has gone from inchoate theory to a powerful enforcement tool leveraged against multibillion-dollar firms. As one key FTC official put it, algorithmic disgorgement is a “significant part” of the Commission’s overall AI enforcement strategy.

What is Algorithmic Disgorgement?

Most modern algorithms, including the large language models that power popular generative AI applications, are developed through a process called “training,” in which a model learns from large quantities of data. The data used to train these algorithms is therefore crucial: if that data was obtained illegally, the resulting algorithm is tainted by it. Algorithmic disgorgement seeks to remedy this by requiring the deletion of such tainted algorithms, preventing companies from benefiting from ill-gotten data.

As the FTC continues to adapt its enforcement strategies to the advancements in technology, algorithmic disgorgement represents a significant shift in the Commission’s approach to addressing data privacy violations in the AI age.
