April 06, 2020

The Download April 2020

Introduction

In this month's issue, we examine a letter written by leaders of the House Committee on Science, Space, and Technology following an alleged data breach at a facial recognition company. At the Federal Trade Commission (FTC), we explore the FTC's request for public comment on its Endorsement Guides, the FTC's Privacy & Data Security Update: 2019, and the FTC's announcement of its Gramm-Leach-Bliley Act (GLBA) Safeguards Rule workshop and extension of the GLBA Safeguards Rule comment period. In California, we discuss the California Attorney General's announcement of the third draft of proposed California Consumer Privacy Act regulations and his statement that a private right of action should be included in federal privacy legislation. We also address the status of Washington state's privacy legislation. Across the pond, we provide an update on artificial intelligence actions taken by the United Kingdom and the European Union.

Heard on the Hill

House Science Committee Leaders Send a Letter to Facial Recognition Company after Data Breach

On March 3, 2020, House Committee on Science, Space, and Technology (Committee) Chairwoman Eddie Bernice Johnson (D-TX) and Ranking Member Frank Lucas (R-OK) announced that they had sent a letter to facial recognition technology company Clearview AI regarding recent reports of a data breach.

The letter, addressed to Clearview AI's CEO, Hoan Ton-That, inquired about the purported data breach at the company, whose facial recognition database the Committee alleged comprises billions of images collected from social media. The Committee Chairwoman and Ranking Member expressed concern that, because Clearview AI works with law enforcement entities, law enforcement operations may be adversely affected. The Committee cited multiple news sources, one of which reported that Clearview AI had disclosed to its customers that there was unauthorized access to user accounts. The letter stated that Clearview AI allegedly "lost its entire client list to hackers."

The Committee requested answers by March 17, 2020, to a series of questions about how Clearview AI collects, uses, and protects consumer data. The letter noted that the Committee sought detailed descriptions of Clearview AI's cybersecurity practices and procedures and of the underlying data used in the company's facial recognition products. Chairwoman Johnson and Ranking Member Lucas also asked whether Clearview AI collects or purchases data from third-party entities and whether it abides by any established security best practices or standards.

Around the Agencies and Executive Branch

Federal Trade Commission Announces Request for Comment on Endorsement Guides

On February 21, 2020, the Federal Trade Commission (FTC) issued a request for public comment (RFC) on whether to make changes to its Guides Concerning the Use of Endorsements and Testimonials in Advertising (Endorsement Guides).

The FTC first promulgated the Endorsement Guides in 1972 and subsequently amended them in 1975, 1980, and 2009. As part of its regulatory review program, the FTC periodically reviews its regulations and guides, seeking information on their economic impact.

According to the FTC, the RFC seeks feedback on, among other things, whether: (1) the Endorsement Guides succeed in addressing "prevalent" marketplace practices; (2) the Endorsement Guides benefit consumers; (3) technological advancements necessitate amendments to the Endorsement Guides; (4) information in other FTC guidance should be added to the Endorsement Guides; (5) social media adequately discloses "unexpected material connections" between advertisers and endorsers; (6) endorsement disclosures affect children; (7) marketer-provided incentives to consumers skew or bias composite ratings; (8) the Endorsement Guides should address affiliate links; and (9) advertisers or operators of review sites should make disclosures about the publication of reviews to help prevent deceptive or unfair practices.

The FTC vote to approve the RFC was 5-0, with Commissioner Chopra issuing a separate statement. In his statement, Commissioner Chopra raised two points: (1) concern about companies that "launder" advertising by paying social media "influencers" to endorse products without disclosure; and (2) recent FTC enforcement actions targeting "fake reviews" and undisclosed endorsements. Commissioner Chopra added that the FTC must determine whether new requirements for social media advertising are necessary and whether to seek civil penalty liability in future enforcement actions. Public comments were originally due to the FTC by April 21, 2020; however, the comment period has been extended until June 22, 2020, due to the Coronavirus pandemic.

Federal Trade Commission Publishes Privacy and Data Security Update: 2019

The Federal Trade Commission (FTC) published its annual Privacy & Data Security Update: 2019 (the Update), which summarizes the FTC's accomplishments in the previous year in the areas of privacy and data security, including enforcement, policy, advocacy, rulemaking, and outreach. The following are key highlights:

Significant Enforcement Fines. The FTC secured, pending court approval, a $5 billion settlement against a social media company over privacy and data security violations, which it touts as the largest privacy- or data security-related penalty that has been imposed worldwide. The FTC alleged that the company had, in violation of a previous FTC order, misrepresented the amount of control users had over personal information, had failed to implement and maintain a "reasonable" program to ensure consumer privacy, and had deceptively failed to disclose that telephone numbers provided by users for authentication purposes would also be used to deliver targeted advertising to those users.

First-Time Enforcement Actions. The FTC identified two enforcement actions where it extended its existing enforcement authority into new areas in 2019:

  • Stalking Apps. The FTC brought its first enforcement action against a developer of "stalking" apps, which it described as "software that allows purchasers to monitor the mobile devices on which they are installed, without users' knowledge." The FTC alleged that the developer did not take reasonable steps to ensure that the apps would be used for "legitimate and lawful" purposes. The FTC also alleged that the developer of the app did not take reasonable precautions to protect personal information collected from children, and made misrepresentations as to the security of information collected by the app.
  • VoIP Do-Not-Call Violations. For the first time, the FTC asserted jurisdiction over a provider of VoIP (Voice over Internet Protocol, or Internet-enabled telecommunications) services. The FTC obtained temporary restraining orders, preliminary injunctions, and asset freezes against an enterprise comprising four individuals and six corporate entities, which the FTC alleged ran a fraudulent credit card interest rate reduction scheme. One of the corporate entities was a VoIP provider that allegedly transmitted illegal robocalls for the enterprise. At the preliminary injunction stage, the court rejected the entity's challenge to the FTC's jurisdiction over VoIP services.

Data Breach Enforcement. The FTC continued to bring and settle claims against companies that experienced data breaches after allegedly failing to implement "reasonable," "readily available," and/or "low-cost" safeguards for customer data, or that made alleged misrepresentations about the strength or efficacy of their data security measures. The largest of these settlements was a $575 million (minimum) settlement with a consumer reporting agency following a security breach affecting the personal information of more than 147 million people. The FTC also brought and/or settled similar claims against a rewards program, a dress-up games website, enterprise software providers, and a manufacturer of Internet-connected devices.

Children's Privacy. The FTC continued to obtain settlements against companies over allegations that they collected personal information from children under 13 from child-directed properties and/or from users who self-identified as being under the age of 13. These settlements included a $170 million settlement with a search company and its video streaming subsidiary and a $5.7 million settlement with a social networking app.

International Enforcement. The FTC continued to enforce alleged violations of international data transfer frameworks. In the Update, the FTC noted twelve cases it brought against organizations that misrepresented their participation in the EU-U.S. Privacy Shield. The FTC also collaborated with the United Kingdom's Information Commissioner's Office in bringing enforcement actions against Cambridge Analytica and persons involved in the purportedly deceptive harvesting of personal information from social media users for use in voter targeting.

Advocacy. The FTC filed a comment on the National Institute of Standards and Technology's proposed privacy framework. The FTC's comment emphasized three familiar themes: that organizations should (1) address the risks of privacy breaches at each step of the risk management process, (2) consider the sensitivity of personal information when managing privacy risks, and (3) review their data-handling practices against public-facing statements and consumer expectations. In addition, the FTC testified before Congress regarding the need for privacy and data security legislation.

Federal Trade Commission Announces GLBA Safeguards Rule Workshop and Comment Period on Proposed Changes

On March 2, 2020, the Federal Trade Commission (FTC) announced that it will hold a public workshop on May 13, 2020, seeking research, testimony, and other input on the changes to the Gramm-Leach-Bliley Act's Standards for Safeguarding Customer Information Rule (Safeguards Rule or Rule) that the FTC proposed in April 2019.

The Safeguards Rule was promulgated in 2002 pursuant to Subtitle A of Title V of the Gramm-Leach-Bliley Act (GLBA), which required the FTC to establish standards for financial institutions under its jurisdiction relating to administrative, technical, and physical safeguards for certain information.1 In its current form, the Rule requires financial institutions to develop, implement, and maintain a comprehensive written information security program that consists of the administrative, technical, and physical safeguards the financial institution uses to access, collect, distribute, process, protect, store, use, transmit, dispose of, or otherwise handle customer information.2 In addition, the safeguards set forth in the program must be appropriate to the size and complexity of the financial institution, the nature and scope of its activities, and the sensitivity of any customer information at issue.3

The FTC's April 2019 announcement of proposed changes contained four main modifications. First, it contained provisions designed to provide covered financial institutions with more guidance on how to develop and implement specific aspects of an overall information security program. Second, it added provisions designed to improve the accountability of financial institutions' information security programs. Third, it exempted small businesses from certain requirements. Fourth, it expanded the definition of "financial institution" to include entities engaged in activities that the Federal Reserve Board determines to be incidental to financial activities.

According to the FTC, the workshop will explore some of the issues raised in response to the amendments the FTC proposed in April 2019. In addition, the FTC is seeking information, empirical data, and testimony on topics such as:

  • price models for specific elements of information security programs;
  • standards for security in various industries;
  • the availability of third-party information security services aimed at different-sized institutions;
  • information about penetration and vulnerability testing; and
  • the costs of and possible alternatives to encryption and multifactor authentication.

The public can submit a comment on these topics until June 12, 2020. Instructions for filing comments appear in the Federal Register.

In the States

California AG Announces Third Draft of CCPA Regulations and Requests Private Right of Action in Federal Privacy Legislation

On March 11, 2020, the California Attorney General's Office (AG) released a second revision of proposed regulations that the AG is required to produce under the California Consumer Privacy Act (CCPA). The AG released the initial draft in October 2019 and a first revision of the regulations on February 10, 2020. This most recent draft was open for a 15-day comment period that closed on March 27, 2020. The AG indicated that it must finalize the regulations by the CCPA's July 1, 2020 enforcement date.

The latest draft CCPA regulations made relatively few changes compared to the February revision. One change clarified that, in response to a request to know specific pieces of information, a business should not provide actual biometric data but should instead inform the consumer, with sufficient specificity, of the types of biometric data it collects (e.g., fingerprint scans, but not the scans themselves). Additionally, the AG removed a requirement that user-enabled global privacy controls not be preselected to an opt-out state; such controls can now be provided to consumers with opt-outs enabled by default. It remains unclear whether the AG expects to issue another round of revisions before submitting the regulations to the California Office of Administrative Law for approval and final publication prior to July 1.
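For illustration only, the following is a minimal sketch of how a business's web server might honor such a user-enabled global privacy control as an opt-out of "sale." The regulations do not prescribe a technical format for the signal, so the header name and response marker below are hypothetical.

    // Hypothetical sketch: the CCPA regulations do not specify how a
    // user-enabled global privacy control is transmitted, so the header
    // name below is an assumption, not a prescribed format.
    import { createServer, IncomingMessage } from "node:http";

    const OPT_OUT_HEADER = "x-privacy-opt-out"; // hypothetical signal header

    // Under the revised draft, a control enabled by default may still
    // operate as a consumer's request to opt out of the "sale" of data.
    function hasOptOutSignal(req: IncomingMessage): boolean {
      return req.headers[OPT_OUT_HEADER] === "1";
    }

    createServer((req, res) => {
      if (hasOptOutSignal(req)) {
        // Suppress any "sale" of this consumer's personal information,
        // e.g., do not pass identifiers to third-party ad partners.
        res.setHeader("X-Opt-Out-Honored", "1"); // hypothetical marker
      }
      res.end("ok");
    }).listen(8080);

The apparent upshot of the revision is that a business could not disregard such a signal merely because the consumer did not actively switch it on.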

Prior to releasing the new, revised regulations, the AG also sent a letter to Congress to express support for a federal privacy law that would build on the CCPA's framework. The letter explained, at a high level, the CCPA's provisions and the rights given to consumers. The AG also stated that such a bill should allow state attorneys general and consumers to enforce the bill through a private right of action. Several proposals have been put forward for a federal privacy law, including some proposals that contain a private right of action.

Washington Does Not Pass Privacy Legislation

During Washington's 2020 regular session, legislators were unable to reach consensus on comprehensive privacy legislation to govern the use and sharing of "personal data" about residents of the state. This marks the second year in a row that privacy legislation advanced but failed to be enacted in Washington state.

The Washington Privacy Act (SB 6281) adopted certain terms and concepts used in the General Data Protection Regulation (GDPR), such as "controllers," "processors," and "personal data." The bill applied to legal entities that conduct business in the state of Washington or produce products or services targeted to residents of the state, and that satisfy at least one of the following thresholds (a short sketch after the list illustrates how the thresholds combine):

  • During a calendar year, control or process personal data of 100,000 Washington residents or more; or
  • Derive over twenty-five percent (25%) of gross revenue from the sale of personal data and process or control personal data of at least 25,000 Washington residents.
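As a rough illustration of how the two applicability thresholds combine (the function and field names below are ours for illustration, not the bill's, and the bill text controls):

    // Illustrative reading of SB 6281's applicability thresholds; the
    // names here are hypothetical and the bill's text controls.
    interface BusinessProfile {
      waResidentsProcessed: number;      // WA residents whose personal data the
                                         // entity controls or processes per year
      revenueShareFromDataSales: number; // fraction of gross revenue derived
                                         // from personal data sales (0.30 = 30%)
    }

    function sb6281WouldApply(b: BusinessProfile): boolean {
      const largeProcessor = b.waResidentsProcessed >= 100_000;
      const dataSeller =
        b.revenueShareFromDataSales > 0.25 && b.waResidentsProcessed >= 25_000;
      return largeProcessor || dataSeller; // either threshold suffices
    }

    // Example: 30,000 residents' data processed and 40% of gross revenue
    // from data sales -> the bill would have applied.
    console.log(sb6281WouldApply({
      waResidentsProcessed: 30_000,
      revenueShareFromDataSales: 0.40,
    })); // true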

The Washington Privacy Act also mirrored certain provisions of the California Consumer Privacy Act (CCPA), but diverged from the CCPA's approach in other respects. For example, like the CCPA, the Washington Privacy Act gave consumers rights to access, delete, and opt out of the "sale" of personal data. Unlike the CCPA, however, the bill also included a right to correct personal data, and it imposed on regulated "controllers" certain GDPR-like obligations, such as an obligation to complete data protection assessments for certain processing activities involving personal data.

SB 6281 failed to pass in Washington partly because legislators disagreed over the bill's enforcement terms. The House of Representatives approved a version of the bill with a private right of action, while the Senate version allowed enforcement actions by the state's Attorney General only. Specifically, the Washington House of Representatives pushed for language deeming a violation of the bill an "unfair or deceptive act in trade or commerce and an unfair method of competition" for purposes of the state's Consumer Protection Act. This language would have allowed private litigants to bring suit for any violation of the bill's terms. By way of comparison, the CCPA allows private litigants to bring lawsuits only in limited circumstances following breaches of personal information.

Senator Reuven Carlyle, the primary sponsor of the Washington Privacy Act, reiterated his support for Attorney General enforcement in a statement about the bill on the last day of the legislative session. Although comprehensive privacy legislation failed to pass in Washington state this year, legislators could take the issue up again in subsequent legislative sessions. Interested parties should closely follow data privacy legislation in Washington state to stay apprised of potential new legal requirements in the future.

International

Updates from the European Union & the United Kingdom on Artificial Intelligence

Last month, the European Parliament, the European Commission (Commission), and the United Kingdom (UK) each took action on consumer privacy in the context of artificial intelligence (AI) and automated decision-making (ADM).

The European Parliament passed a resolution (Resolution) that acknowledges the rapid technological advances in the fields of AI, machine learning, and ADM, and the benefits these advances will offer society through improved public services and innovative products for consumers. The Resolution also highlights potential concerns related to those advances, including risks to the right to personal data protection and privacy. The Resolution suggests that when consumers interact with a system that uses ADM, they "should be properly informed about how it functions, about how to reach a human with decision-making powers, and about how the system's decisions can be checked and corrected." Stressing the need for a risk-based approach to regulation, the Resolution calls on the Commission to develop a risk assessment scheme for AI and ADM to ensure a consistent approach to enforcement.

On February 19, 2020, the UK Information Commissioner's Office (ICO) issued its draft "Guidance on the AI Auditing Framework" (Draft Guidance). The Draft Guidance provides a methodology for auditing AI applications to ensure data protection compliance when using AI to process personal data. The Draft Guidance also contains advice on how to understand data protection laws in relation to AI and recommendations for organizational and technical measures to mitigate the potential risks AI poses to individuals. The Draft Guidance is wide-ranging in scope and addresses:

  • Accountability and governance in AI through data protection impact assessments;
  • Assessing and improving AI system performance to mitigate potential discrimination;
  • Data minimization and security; and
  • Consideration and enablement of individuals' rights in AI systems.

Due to the Coronavirus pandemic, the consultation period for the Draft Guidance has been extended from April 1, 2020 to May 1, 2020. The UK ICO states that feedback is essential to developing guidance that is "both conceptually sound and applicable to real-life situations" and seeks input from those with a compliance focus, such as data protection officers, as well as technology specialists, including machine learning experts and software developers.

Finally, the Commission presented three policy papers that outline its key priorities and anticipated next steps for data and AI: (1) a White Paper on Artificial Intelligence (AI White Paper); (2) a communication on a European strategy for data (the Data Strategy Communication); and (3) a communication on shaping Europe's digital future (the Digital Future Communication).

  • The AI White Paper proposes a "common European approach to AI" in order to "avoid fragmentation of the single market." The paper recommends a risk-based approach to ensure proportionate regulatory intervention. A core element of the Commission's proposal is a risk-based assessment that would be mandatory for "high-risk" applications of AI. The AI White Paper states that an AI application would be considered high-risk if it is used in a high-risk sector, such as healthcare or transport, and if significant risks can be expected to arise for any individual or company. The proposed requirements for such applications would consist of obligations concerning training data, data and record-keeping, robustness and accuracy, human oversight, and biometric identification. For AI applications that do not qualify as "high-risk" and therefore would not be subject to the mandatory requirements, the AI White Paper suggests a voluntary opt-in to the high-risk requirements, or a similar set of requirements, through which economic operators would be awarded a "quality label" for compliant AI applications. The AI White Paper also presents proposals related to compliance, enforcement, and governance intended to create a new regulatory framework for AI, including possible adjustments to EU product safety and liability legislation. The consultation period for the AI White Paper closes on May 31, 2020.
  • The Data Strategy Communication proposes the creation of a "European data space," meaning a single market for data that would allow data sharing across the EU to enable the EU to become a "role model for a society empowered by data to make better decisions – in business and the public sector." To achieve this, the Commission proposes a regulatory framework regarding data management, access, and reuse of data among businesses, between businesses and government, and within administrations. In addition to the regulatory framework, the Commission proposes investments in the development of technological systems and infrastructures for hosting and processing data. Last, the Commission proposes to launch specific actions for the development of sector-specific common data spaces to offer data pools in strategic economic sectors and domains of public interest. The Data Strategy Communication lists nine common data spaces for the following sectors: industrial manufacturing, European Green Deal (a set of policy initiatives for increasing environmental sustainability in the EU's economy), mobility, health, financial, energy, agriculture, public administration, and skills.
  • The Digital Future Communication sets forth the Commission's three key objectives for shaping Europe's digital future: (1) technology that works for people; (2) a fair and competitive economy; and (3) an open, democratic, and sustainable society. It also sets out the Commission's plans to develop new policies and frameworks to enable Europe to deploy cutting-edge digital technologies and strengthen its cybersecurity capacities.

Footnotes

1 Public Law 106-102, 113 Stat. 1338 (1999).
2 16 C.F.R. §§ 314.2(c), 314.3(a).
3 Id. § 314.3(a), (b).