Introduction
In this issue, we examine three Congressional hearings on data privacy legislation, artificial intelligence (AI) in the financial services industry, and online "manipulation and deception." We highlight the Federal Trade Commission's (FTC) settlement with the former CEO of Cambridge Analytica and the researcher who developed the app that supplied its data, as well as FTC settlements with four companies for alleged Privacy Shield violations. We feature a workshop titled "NIST Privacy Framework: A Tool for Improving Privacy Through Enterprise Risk Management," hosted at Venable. At the state level, we explore actions taken by the California Attorney General regarding the California Consumer Privacy Act (CCPA). Across the pond, we discuss the United Kingdom's Information Commissioner's Office's AI guidance, report on the European Data Protection Board's Sixteenth Plenary Session, and cover the French data protection authority's updated whistleblowing guidelines.
Heard on the Hill
Senate Commerce Committee Holds Hearing to Consider Data Privacy Legislation
On December 4, 2019, the Senate Committee on Commerce, Science, and Transportation (Committee) held a hearing entitled, "Examining Legislative Proposals to Protect Consumer Data Privacy." The hearing was the Committee's fifth and final data privacy hearing of 2019 and featured discussion of (1) Committee Chairman Roger Wicker's (R-MS) data privacy discussion draft entitled, "United States Consumer Data Privacy Act of 2019," and Committee Ranking Member Maria Cantwell's (D-WA) S. 2968, the "Consumer Online Privacy Rights Act"; (2) other existing and potential federal data privacy bills; (3) potential attributes of a federal data privacy law; and (4) state and international data privacy laws, including the California Consumer Privacy Act (CCPA) and the European Union's General Data Protection Regulation (GDPR). Hearing witnesses represented industry, a public interest group, and academia.
During the hearing, Committee Members and witnesses noted an emerging consensus that Congress needs to enact a comprehensive data privacy bill. Hearing participants then discussed the potential merits and drawbacks of (1) state law preemption; (2) a private right of action; (3) defining harm; (4) obligations related to companies' sharing of consumer data with third parties; and (5) assigning the Federal Trade Commission (FTC) varying levels of funding and rulemaking authority. Additionally, several Committee Members expressed support for including data security, children's privacy, and bias mitigation obligations in a federal privacy bill.
While discussing Ranking Member Cantwell's S. 2968 and Chairman Wicker's discussion draft, witnesses said they were encouraged by provisions in each proposal that would require companies to obtain consumers' consent and by other provisions that would provide consumers with data rights. One witness said S. 2968's data portability provisions may harm competition. One witness described the CCPA as a "historic" statute, while also noting that the CCPA is not comprehensive. Another witness expressed concern that the GDPR has harmed small and medium-sized companies' business prospects in Europe.
Committee Chairman Wicker has indicated that the Committee will focus on advancing a data privacy bill in 2020.
House Financial Services’ Task Force on Artificial Intelligence Convenes Hearing on Artificial Intelligence in the Financial Services Industry
On December 6, 2019, the House Committee on Financial Services' (Committee) Task Force on Artificial Intelligence (AI Task Force) convened a hearing titled, "Robots on Wall Street: The Impact of AI on Capital Markets and Jobs in the Financial Services Industry." The AI Task Force's formation was announced in May 2019, and the Task Force convened four hearings in 2019. Witnesses at the December 6 hearing included representatives from academia and the financial services industry.
During opening statements, AI Task Force Chair Bill Foster (D-IL) stated that digital businesses are "natural monopolies," adding that access to data gives those companies additional advantages. AI Task Force Ranking Member Barry Loudermilk (R-GA) highlighted the benefits of AI, including increased efficiency, improved cybersecurity, and fraud detection. He emphasized the importance of cybersecurity protections for source code.
The use of alternative data in the financial industry is an emerging point of discussion. Alternative data is any data that is not traditionally associated with the financial services industry but that can help guide investors' or creditors' decisions. Witnesses noted that alternative data has been extremely helpful in the financial services industry, and an industry representative added that it is important to determine ownership of and access rights to alternative data. In response to a question from Chair Foster, witnesses from academia and industry listed types of data used as alternative data in the financial services industry, including (1) geolocation data; (2) transaction data; and (3) satellite images.
A representative from the academic community expressed concern about algorithmic bias, and other witnesses encouraged addressing those concerns before algorithms are relied upon for financial decisions. Ranking Member Loudermilk reiterated his cybersecurity concerns about protecting algorithms in the financial sector, adding that obtaining source code should constitute a "privacy violation."
House Energy and Commerce Committee's Subcommittee on Consumer Protection and Commerce Holds Hearing on Manipulative and Deceptive Online Practices
On January 8, 2020, the House Committee on Energy and Commerce's (Committee) Subcommittee on Consumer Protection and Commerce (Subcommittee) held a hearing entitled, "Americans at Risk: Manipulation and Deception in the Digital Age." Hearing witnesses included representatives from the technology sector, academia, and an online privacy advocacy organization. Among other topics, the hearing addressed (1) privacy and data security, including data breaches; (2) Federal Trade Commission (FTC) enforcement; (3) online "deception" and "dark patterns"; (4) dissemination of "manipulative" practices; (5) freedom of speech; and (6) potential solutions to "harmful" online practices.
Committee Chairman Frank Pallone (D-NJ) opened the hearing by discussing what he identified as the simultaneous growth of deceptive online practices and technological innovation. Committee Ranking Member Greg Walden (R-OR) expressed support for the technology industry's efforts to protect consumers against deceptive online practices, noting that Congress is monitoring industry efforts to combat bad actors. Subcommittee Chair Jan Schakowsky (D-IL) expressed concern that the FTC lacks sufficient enforcement authority to protect consumers online and requested that the Subcommittee clarify that the protections consumers enjoy offline also apply online. Subcommittee Ranking Member Cathy McMorris Rodgers (R-WA) encouraged online platforms to be innovative in their approach to consumer protections and warned against regulators harming beneficial online services with an overly broad regulatory approach.
The witnesses outlined the perceived advantages and risks of regulating online platforms. The witness from the technology sector and one witness from an academic institution expressed support for industry collaboration to combat bad actors. The witness from the privacy advocacy organization and another academic witness encouraged stricter regulation and enforcement. The privacy advocacy witness suggested that the business model of online platforms is inherently linked to deception and that platforms should be held accountable for misinformation on their sites.
During questioning, Subcommittee Ranking Member McMorris Rodgers and Congressman Tom O'Halleran (D-AZ) asked witnesses about the FTC's ability to enforce against online deception. The privacy advocacy witness responded that while the FTC has used its enforcement authority against deceptive online practices, the agency must be reformed to keep pace with technological innovation. A witness from an academic institution countered that the FTC could combat online deception more effectively with its current regulatory authority.
Several Committee Members noted their concern about the rapid spread of misinformation, though they differed on whether regulators should establish new regulations or rely on existing enforcement mechanisms.
Around the Agencies and Executive Branch
FTC Announces Proposed Settlements in Cases Involving Alleged EU-U.S. Privacy Shield Violations
On December 3, 2019, the Federal Trade Commission (FTC) announced that it had reached settlements with four companies over their alleged misrepresentation of participation in the EU-U.S. Privacy Shield Framework. The EU-U.S. Privacy Shield Framework is a regulatory framework that allows personal data to be transferred from the European Union (EU) to the United States in compliance with EU privacy and data protection laws. Participation in the EU-U.S. Privacy Shield program is voluntary, but once a U.S. organization commits to compliance with Privacy Shield principles through self-certification, that commitment is enforceable under U.S. law.
The FTC alleged that each of the four companies falsely claimed to be participating in the EU-U.S. Privacy Shield framework. The FTC said that two of the companies had submitted self-certification applications to the Department of Commerce but failed to finalize them; nevertheless, both companies claimed on their websites to comply with both the EU-U.S. Privacy Shield framework and the Swiss-U.S. Privacy Shield framework. The Swiss-U.S. Privacy Shield framework performs a similar function to the EU version, allowing companies to transfer personal data in compliance with Swiss law. In the cases of the other two companies, the FTC said that the companies were once EU-U.S. Privacy Shield participants but had allowed their certifications to lapse without changing their privacy policies and websites to reflect their non-participation. The FTC further alleged that while these latter two companies were participants in the program, they failed to complete the annual self-assessment or outside compliance review verification required of EU-U.S. Privacy Shield participants. Under the settlements, all four companies are prohibited from misrepresenting their participation in the EU-U.S. Privacy Shield framework or any similar privacy or data security program. The two companies that had previously participated in the EU-U.S. Privacy Shield must also either apply the framework's protections to the personal information they collected while participating in the program, return the information, or delete it.
In the United States, enforcement of the EU-U.S. and Swiss-U.S. Privacy Shield principles falls to the FTC, while the Department of Commerce is responsible for administering both frameworks. Since the framework's establishment in 2016, the FTC has brought a total of 21 related enforcement actions, most alleging that companies misrepresented their participation in the Privacy Shield framework. Participants in the Privacy Shield must complete the self-certification process through the Department of Commerce and then recertify annually. A company that ends its participation in the program remains subject to lasting obligations regarding all personal data that was transferred under the program.
FTC Finalizes Settlement with Former Cambridge Analytica CEO and Data Provider Following Opinion and Order Against Cambridge Analytica
In December 2019, the Federal Trade Commission (FTC) approved a settlement with Alexander Nix, the former CEO of Cambridge Analytica, and Aleksandr Kogan, the researcher who developed the “thisisyourdigitallife” app (referred to below and by the FTC as the GSRApp) that provided data to Mr. Nix and Cambridge Analytica. The FTC alleged that the two defendants deceptively stated that they would not collect names or other identifiable information, when in fact they participated in collecting personal information from tens of millions of social media users for use in voter profiling and targeting. The FTC's settlement with the two defendants follows its enforcement action against Cambridge Analytica itself.
On November 25, 2019, the FTC issued an Opinion and Order against Cambridge Analytica. In that case, the FTC found that Cambridge Analytica violated the prohibition on deceptive acts or practices in Section 5 of the FTC Act (15 U.S.C. § 45). In its Opinion, the FTC found that Cambridge Analytica and Mr. Nix provided funding and direction for Mr. Kogan's development of the GSRApp, and Mr. Kogan then provided data obtained by the GSRApp to Mr. Nix and Cambridge Analytica. The Opinion further stated that the GSRApp did not disclose the full scope of its data collection from users of the app, or that information about users' friends would also be collected. The FTC noted that the GSRApp provided a disclosure stating that “we will NOT download your name or any other identifiable information – we are interested in your demographics and likes.” The FTC found that, in contravention of this disclosure, the GSRApp collected social profile data from users who granted permission, including unique identifiers, gender, birthdate, location, and connection lists, as well as information about users' social media activities. The FTC alleged that Mr. Kogan collected this social media data from over a quarter of a million GSRApp users and an estimated 50-65 million of their social media connections and provided it to Mr. Nix and Cambridge Analytica.
The final settlement prohibits both Mr. Nix and Mr. Kogan from making false or deceptive statements regarding the extent to which they collect, use, share, or sell personal information, as well as the purposes for which they do so. The settlement also requires Mr. Nix and Mr. Kogan to delete or destroy any personal information collected from consumers via the GSRApp, along with any work product that originated from that data.
Cybersecurity Coalition and Better Identity Coalition Host NIST Privacy Framework Workshop
On January 9, 2020, the Cybersecurity Coalition, the Better Identity Coalition, and the National Institute of Standards and Technology (NIST) held a "NIST Privacy Framework Workshop." The workshop addressed topics relating to the development of the "NIST Privacy Framework: A Tool for Improving Privacy Through Enterprise Risk Management" (Framework). Participants included representatives from industry and NIST.
In opening remarks, Kevin Stine, Chief of NIST's Applied Cybersecurity Division, highlighted the Framework's goal of incentivizing entities to view privacy as a multi-faceted concern. Mr. Stine expressed support for what he described as the Framework's risk-based and outcome-focused approach.
Naomi Lefkovitz, Senior Privacy Policy Advisor, NIST, presented an overview of the Framework, commenting that its key value propositions are increasing consumer trust, supporting compliance with existing privacy laws and regulations, and facilitating communication among industry stakeholders. Ms. Lefkovitz noted that the Framework centers on privacy risk assessment and presented a roadmap of next steps once the Framework is finalized, including, among others, the addition of resources to the NIST repository and the development of privacy risk assessment tools and other confidence mechanisms.
The panel discussion covered the following topics:
- California Consumer Privacy Act (CCPA) and the European Union's General Data Protection Regulation (GDPR) Compliance. Panelists noted that one of the goals of the Framework is to enable compliance with different jurisdictions' requirements, such as the CCPA and GDPR.
- Potential Federal Privacy Legislation. The panel discussed how the passage of a federal privacy law would affect the Framework. One panelist commented that, whether companies face a patchwork of state laws or a single federal privacy law, they would still be encouraged to use the Framework given the breadth of privacy laws at the international level.
- Next Steps in the Development of the Framework. Ms. Lefkovitz highlighted that the Framework will be a "living" document that will continue to be updated with industry feedback after the release of the initial version.
- Privacy Certification Programs. Panelists expressed support for privacy certifications that confirm compliance and emphasized that confidence mechanisms are among the next steps planned for the Framework.
- The Framework's Expected Implementation Timeline. Ms. Lefkovitz stated that companies may begin to implement provisions from Framework drafts even though the official Framework had not yet been published at the time of the workshop.
Shortly after the workshop, on January 16, 2020, NIST announced the release of Version 1.0 of the NIST Privacy Framework: A Tool for Improving Privacy Through Enterprise Risk Management. The press release emphasized that the Framework is voluntary and stated that it can assist entities with CCPA and GDPR compliance.
In the States
California AG's Office Holds Public Hearings Regarding Proposed CCPA Regulations
In December 2019, the California Attorney General's Office (AG) held a series of public hearings as part of its ongoing rulemaking process under the California Consumer Privacy Act (CCPA). The hearings were held in four locations: Sacramento, Los Angeles, San Francisco, and Fresno, on December 2, 3, 4, and 5, 2019, respectively. Similar hearings were held in the same cities before the draft regulations were released. At each hearing, members of the AG's staff heard oral testimony from the public on a variety of topics.
At the public hearings, some of the key topics that consistently arose included clarifying definitions, the notice requirements under the CCPA, opt-out mechanisms, the treatment of deletion requests, the requirement to pass opt-out requests downstream, the request verification process, and delaying the enforcement date past July 1, 2020. As with the previous set of public hearings on the CCPA, the AG staff did not engage with members of the public during the hearings.
Following the hearings, formal written comments were due to the AG by December 6, 2019. The AG is expected to release a revised set of proposed regulations ahead of the July 1, 2020 enforcement date and will be required to provide another 15- or 45-day comment period, depending on the scope of the revisions. Finally, the proposed regulations must be reviewed and approved by the California Office of Administrative Law within 30 days of the AG's submission of the final rules. Unless the AG states otherwise, the AG can begin enforcing the CCPA on July 1, 2020.
International
United Kingdom's Information Commissioner's Office Publishes Artificial Intelligence Guidance
On December 2, 2019, the United Kingdom's (UK) Information Commissioner's Office (ICO) published a blog post written by Simon McDougall, Executive Director for Technology Policy and Innovation, entitled, "ICO and The Alan Turing Institute open consultation on first piece of AI guidance." The ICO and the Alan Turing Institute (The Turing) co-published draft regulatory guidance entitled "Explaining decisions made with AI," intended to provide practical advice to businesses using artificial intelligence ("AI") technology and to individual consumers affected by such technology.
The consultation solicits feedback on the draft guidance, which advises businesses that use AI to make decisions about individuals, including on how to make such decisions and convey them to the individuals affected. The guidance is structured in three parts: (Part 1) The basics of explaining AI; (Part 2) Explaining AI in practice; and (Part 3) Explaining what AI means for your organization.
The guidance sets out four key principles, which are derived from the European Union's General Data Protection Regulation (GDPR):
- Be transparent: Organizations should make their use of AI for decision-making obvious and appropriately explain the decisions made to individuals in a meaningful way.
- Be accountable: Organizations should ensure appropriate oversight of their AI decision systems and be able to provide answers about the use of these systems.
- Consider context: As stated in the blog post, there is no one-size-fits-all approach to explaining AI-assisted decisions. Organizations should therefore tailor their explanations of AI-assisted decisions to the context in which those decisions are made.
- Reflect on impacts: At the initial stages of an AI project, organizations should ask and answer questions about the ethical purposes and outcomes of the project.
The consultation was open until January 24, 2020, for organizations to review the guidance and participate in a survey. The final version of the guidance is expected to be published later in 2020.
Sixteenth Plenary Session of the European Data Protection Board
At its sixteenth plenary session, the European Data Protection Board (EDPB) met for two days to discuss and issue public documents regarding: accreditation requirements for General Data Protection Regulation (GDPR) monitoring bodies; the application of GDPR to Internet Access Service Providers (IASs); and the right to be forgotten. EDPB is an independent European body composed of representatives of Europe's national data protection authorities and the European Data Protection Supervisor. EDPB was established by GDPR and is responsible for ensuring consistent application of data protection rules throughout the European Union (EU) and promoting cooperation among Europe's data protection authorities.
GDPR Opinion on Accreditation Requirements
The EDPB adopted its opinion on the United Kingdom Supervisory Authority's draft decision on the accreditation requirements for codes of conduct monitoring bodies. Under Article 64 of GDPR, EDPB is required to issue an opinion whenever a supervisory authority intends to issue accreditation requirements for bodies that monitor codes of conduct addressing GDPR compliance. In its opinion, EDPB proposes changes to the UK Supervisory Authority's draft accreditation requirements to ensure consistent accreditation of monitoring bodies across the European Economic Area.
Application of GDPR to IASs
EDPB issued its response to a request for guidance from the Body of European Regulators for Electronic Communications on the application of GDPR to IASs' existing traffic management and billing practices. In the letter, EDPB raises concerns regarding IASs' processing of domain names and URLs for traffic management and billing purposes. EDPB encourages IASs to adopt less invasive and more standardized ways of conducting these activities.
Guidelines on the Right to be Forgotten
EDPB also published its draft guidelines on the criteria of the right to be forgotten in search engine cases under GDPR (the Guidelines). The Guidelines aim to clarify the grounds under GDPR for requesting that search engine operators erase links to web pages displayed in response to searches of the data subject's name (Delisting Requests, or the Right to Be Forgotten). The Guidelines explain that GDPR Article 17.1 provides six separate grounds for making a Delisting Request but conclude that, in practice, many of these grounds will rarely or never be used. Data subjects will most likely be able to successfully submit a Delisting Request on two grounds: (a) the processing of their personal data by the search engine operator is no longer necessary (Article 17.1.a); and/or (b) they have grounds to object to the processing based on their particular situation (Article 17.1.c).
To the extent Articles 17.1.a or 17.1.c apply, the Guidelines go on to explain the grounds on which search engine operators may deny such requests. EDPB concludes that although Article 17.3's grounds for rejecting requests made under Article 17.1 are likely inadequate in the context of the Right to Be Forgotten, search engine operators may rely on the balancing test contained in Article 21 as a basis for denying requests made under Article 17.1.c. Under Article 21, search engine operators can deny Right to Be Forgotten requests if they can establish overriding legitimate grounds for listing the specific search results that outweigh the data subject's grounds for seeking their removal. Although the burden of proof is quite high (i.e., overriding legitimate grounds), search engine operators are not limited to the exemptions provided under Article 17.3 when conducting this assessment.
CNIL Issues Updated Guidelines for Whistleblowing Programs
On December 10, 2019, the French Data Protection Authority (CNIL) published new Guidelines and FAQs (French only) on company whistleblowing programs. The newly released Guidelines replace the CNIL's Single Authorization AU-004 decision.
Background:
Following the enactment of the European Union's General Data Protection Regulation (GDPR) and the subsequent 2018 updates to the French Data Protection Act, the CNIL was granted the power to issue guidelines. Such guidelines are non-binding but assist companies with implementing specific data processing activities. The new Guidelines address whistleblowing programs under the GDPR and French national law.
Whistleblowing Guidelines:
Notable provisions of the Guidelines include:
- Scope. The Guidelines establish a single set of data protection rules covering various whistleblowing programs. They apply to all types of whistleblowing hotlines, both those mandated under French law and those implemented on a company's own initiative.
- Notice. Under Article 13 of the GDPR, the whistleblower should receive specific information before and after submitting a whistleblower report, such as an acknowledgment of receipt of the report.
- Lawful Basis. For legally required whistleblowing programs, data processing can be based on compliance with a legal obligation. For voluntary whistleblowing programs, companies can process personal data based on the legitimate interests of the company.
- Retention. Once a report has been investigated, personal data relating to the report must be erased or anonymized within two months of the conclusion of the investigation if no action was taken on the report. Personal data may be kept longer where action is taken.