Published on December 08, 2020

If Proposition 24, now known as the California Privacy Rights Act of 2020 (CPRA), is a harbinger of things to come, we’re about to see more AI legislation. Not only does the CPRA establish a new enforcement body (the California Privacy Protection Agency), expand the definition of personal information, and create new transparency requirements, but the law also explicitly addresses algorithmic decision-making. The CPRA, which passed last month, expands the California Consumer Privacy Act (CCPA) and greatly bolsters consumers’ data privacy rights. It doesn’t take effect until January 1, 2023, so businesses have time to make the necessary adjustments.

The CPRA takes aim at AI-based models

Unlike the CCPA, the CPRA explicitly addresses the processes behind AI-based models. Consumers have new rights under the CPRA, including the right to access information about companies’ automated decision-making, as well as the right to opt out of these automated decision-making technologies. This is significant.

Consumers can inquire about the logic involved in the AI’s decision-making, and they can also request that their data not be included in these technologies. Generally speaking, this law is part of a much-needed, noble effort to stop businesses from misusing consumers’ sensitive personal information; to be sure, businesses need to stop tracking consumers across devices and across unrelated businesses. Even with this new law, oversight of how businesses handle consumer data, especially data used in algorithmic decision-making, remains woefully inadequate. However, that appears likely to change.
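To make the opt-out right concrete, here is a minimal sketch of how a business might gate an automated decision-making pipeline. The `ConsumerRecord` layout and the `opted_out` flag are hypothetical illustrations, not anything prescribed by the CPRA itself:

```python
# Minimal sketch: excluding opted-out consumers before any algorithmic
# processing. The record layout and flag name are hypothetical.

from dataclasses import dataclass

@dataclass
class ConsumerRecord:
    consumer_id: str
    features: dict
    opted_out: bool  # set when the consumer exercises the opt-out right

def eligible_for_automated_decisions(records: list[ConsumerRecord]) -> list[ConsumerRecord]:
    """Return only the consumers who have not opted out."""
    return [r for r in records if not r.opted_out]

if __name__ == "__main__":
    records = [
        ConsumerRecord("c1", {"age": 34}, opted_out=False),
        ConsumerRecord("c2", {"age": 51}, opted_out=True),
    ]
    for record in eligible_for_automated_decisions(records):
        print(record.consumer_id)  # only "c1" proceeds to scoring
```

The key design point is that the filter runs before any model sees the data, so an opt-out removes the consumer from the decision-making pipeline entirely rather than merely suppressing its output.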

New AI regulatory initiatives are coming down the pike in Europe

California’s legislators have modeled much of the state’s data privacy law on the legal work being done on the other side of the pond. After all, the European Commission, which brought us the General Data Protection Regulation (GDPR), first drafted its Ethics Guidelines for Trustworthy AI back in December 2018. Although these guidelines are not legally binding, they served as a first step toward legislation.

Assessment List for Trustworthy AI

In July 2020, the EU’s High-Level Expert Group on AI (AI HLEG) revised and published its final “Assessment List for Trustworthy AI” (ALTAI), which is based on seven key requirements:

  1. Human agency and oversight
  2. Technical robustness and safety
    a. Resilience to attack and security
    b. General safety
    c. Accuracy
    d. Reliability, fall-back plans, and reproducibility
  3. Privacy and data governance
  4. Transparency
    a. Traceability
    b. Explainability
    c. Communication
  5. Diversity, non-discrimination, and fairness
  6. Environmental and societal well-being
  7. Accountability
    a. Auditability
    b. Risk management

Not only does the ALTAI offer a checklist for AI designers, developers, compliance officers, and data scientists to follow, but it also offers insight into the nature of the legislation on the horizon. Again, the ALTAI did not arise in a vacuum; like the CPRA, it builds upon the privacy protections put forth in the GDPR.
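For teams that want to operationalize the checklist, a self-assessment can start as simply as tracking which of the seven requirements have been reviewed. The sketch below is purely illustrative: the requirement names come from the ALTAI, but the tracking helper is a hypothetical convenience, not part of the ALTAI itself.

```python
# Hypothetical ALTAI self-assessment tracker. Only the requirement
# names are taken from the ALTAI; the rest is illustrative.

ALTAI_REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination, and fairness",
    "Environmental and societal well-being",
    "Accountability",
]

def assessment_gaps(completed: set[str]) -> list[str]:
    """Return the ALTAI requirements a project has not yet assessed."""
    return [req for req in ALTAI_REQUIREMENTS if req not in completed]

# Example: a project that has only documented transparency and accountability
print(assessment_gaps({"Transparency", "Accountability"}))
```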

The GDPR and AI regulation

Although the GDPR does not explicitly mention AI systems, it mandates the regulation of personal data processing regardless of the technology used, and its Article 22 already restricts decisions based solely on automated processing. As many lawyers and analysts have noted, any processing of personal data through an algorithm falls squarely within the scope of the GDPR; nevertheless, European Commission President Ursula von der Leyen has vowed to push supplementary legislation regarding AI.

Meanwhile, the Centre for Information Policy Leadership (CIPL), a Washington, D.C.-based privacy and security policy think tank, cautions against overregulation. In its March 2020 white paper, “Artificial Intelligence and Data Protection: How the GDPR Regulates AI,” CIPL analysts contend that “the GDPR already extensively regulates AI,” and that any regulatory group intending to govern AI must take the GDPR into consideration to avoid saddling businesses with unnecessary or conflicting obligations. This is a fair point.

Conclusion

As consumers increasingly recognize the importance of data privacy and AI systems proliferate, regulatory intervention is coming; indeed, the two phenomena are undoubtedly intertwined. It’s vital that consumers formally consent to the use of their data in companies’ algorithmic decision-making, and that they understand how that data is being used.

Unfortunately, companies do not have a good track record of self-regulation, so formal AI legislation will be necessary. When it comes to emerging technologies, lawmakers are generally playing catch-up, so it will be interesting to see how quickly such laws come to fruition.

John Donegan

Enterprise Analyst, ManageEngine

John is an Enterprise Analyst at ManageEngine. He covers infosec, cybersecurity, and public policy, addressing technology-related issues and their impact on business. Over the past fifteen years, John has worked at tech start-ups, as well as B2B and B2C enterprises. He has presented his research at five international conferences, and he has publications from Indiana University Press, Intellect Books, and dozens of other outlets. John holds a B.A. from New York University, an M.B.A. from Pepperdine University, an M.A. from the University of Texas at Austin, and an M.A. from Boston University.
