The Download November 2019

Developments in eCommerce, Privacy, Internet Advertising, Marketing, and Information Services Law and Policy

Introduction

Happy Thanksgiving! In this issue of The Download, we cover the House Financial Services Committee's Task Force on Artificial Intelligence hearing on cloud computing and financial data storage and the Senate Banking Committee's hearing on data ownership. We also review the Federal Trade Commission's (FTC) comments on the National Institute of Standards and Technology's Privacy Framework Preliminary Draft and highlight the FTC's social media sponsorship disclosure guidance. We also report on the FTC's suit against a data storage services company for alleged deception regarding participation in the EU-U.S. Privacy Shield and on the FTC's settlement in its first case against developers of "stalking" apps. Across the pond, we discuss the European Commission's report on its third annual review of the EU-U.S. Privacy Shield, review a blog post on facial recognition technology by the Assistant Supervisor at the Office of the European Data Protection Supervisor, and examine the United Kingdom Information Commissioner's Office's conclusion of its initial call for input on an artificial intelligence auditing framework.

Heard on the Hill

House Financial Services Committee Task Force on AI Holds Hearing on Cloud Computing and Financial Data Storage

On October 18, 2019, the House Financial Services Committee's (Committee) Task Force on Artificial Intelligence (AI Task Force) held a hearing titled "AI and the Evolution of Cloud Computing: Evaluating How Financial Data is Stored, Protected, and Maintained by Cloud Providers." The AI Task Force was created on May 9, 2019. The witnesses for the hearing included representatives from journalism, the technology sector, and financial services. Among other topics, the hearing addressed: (1) data privacy and cloud security in the financial services industry; (2) potential regulatory frameworks for cloud-based technologies in financial services; and (3) shared liability between financial entities and cloud providers following a data breach.

AI Task Force Chairman Bill Foster (D-IL) opened the hearing by discussing what he identified as advantages of cloud-based systems and the data privacy considerations of increased cloud computing. AI Task Force Acting Ranking Member Denver Riggleman (R-VA) expressed support for regulatory solutions that would promote innovation and streamline costs while helping to increase data security for consumers. The witnesses outlined the perceived advantages and risks of cloud computing in the financial services industry. The financial services representative stressed the need for a clear regulatory framework and emphasized the importance of collaboration between cloud service providers, financial institutions, and regulators. Another witness, from a data security and analytics company, noted the importance of privacy-enhancing technologies (PETs) in addressing data security considerations in cloud computing. A witness from a cybersecurity company expressed support for industry best practices and standards and echoed the need for collaboration between cloud service providers, financial institutions, and regulators.

During questioning, Rep. Sylvia Garcia (D-TX) asked witnesses who should be liable for data breaches. Rep. Anthony Gonzalez (R-OH) followed up by asking how easily responsibility for a data breach can be attributed. The journalist witness responded that liability depends on the incident and said that it can be difficult to pinpoint a responsible party. The witness from the cybersecurity company addressed the question of shared liability between cloud service providers and financial institutions, stating his belief that safe technology requires both security features built into the technology itself and safe use of that technology by users.

Several members of the AI Task Force and witnesses noted the importance of more uniform regulatory standards. The AI Task Force has drafted legislation, the Strengthening Cybersecurity for the Financial Sector Act of 2019, which the Financial Services Committee Majority Staff has said aims to improve the uniformity of regulatory authority across the financial services industry.

Senate Banking Committee Convenes Data Ownership Hearing

On October 24, 2019, the Senate Committee on Banking, Housing, and Urban Affairs (Committee) convened a hearing on "Data Ownership: Exploring Implications for Data Privacy Rights and Data Valuation." During the hearing, Committee Members and four witnesses discussed Congress's potential development of a federal privacy framework, with added attention to the following topics: (1) whether data should be assigned property rights; (2) data valuation; (3) transparency regarding platforms' data practices; (4) health privacy; and (5) the European Union's (EU) General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

Committee Chairman Mike Crapo (R-ID) noted in his opening statement that the hearing would enable Committee Members to explore potential benefits and drawbacks associated with endowing consumers with data property rights, adding that it may be difficult to determine who "owns" certain types of data. In his opening statement, Ranking Member Sherrod Brown (D-OH) stated that classifying data as property would not prevent corporations from selling data and noted concern that classifying data as property would not provide individuals with more control over information.

During questioning, several Committee Members expressed skepticism about classifying data as property, pointing out that data should be viewed differently from tangible objects. When discussing data valuation, Sen. Mark Warner (D-VA) expressed support for enhancing transparency around how companies collect and use data. Sen. John Kennedy (R-LA) noted his support for developing data valuation mechanisms to increase transparency around how companies use data, observing that because certain platforms derive most of their revenue from targeted advertising, it should be possible to calculate the value of data.

Sen. Robert Menendez (D-NJ) noted that he is concerned about the potential misuse of sensitive health data by insurance companies for potentially discriminatory purposes, and a witness replied that Congress should consider developing laws that prevent certain forms of algorithmic discrimination in the healthcare sector. Chairman Crapo asked witnesses how established privacy frameworks, such as the GDPR and the CCPA, address data ownership and platform transparency. In response, a witness stated that the CCPA includes civil rights protections for consumers without classifying data as property. A separate witness noted that the GDPR also does not assign property rights to data and that the GDPR contains other requirements that enhance consumers' ability to access and control data.

This was the Committee's fourth hearing on data privacy in the 116th Congress.

Around the Agencies and Executive Branch

FTC Publishes Comments on NIST Privacy Framework Preliminary Draft

The Federal Trade Commission (FTC) recently submitted public comments (Comments) to the National Institute of Standards and Technology (NIST) on the Preliminary Draft for the NIST Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management (NIST Privacy Framework or Framework). Released for comment on September 6, 2019, the NIST Privacy Framework is intended to "drive better privacy engineering and help organizations protect individuals' privacy." The NIST Privacy Framework follows the model of, and was designed to work in conjunction with, the NIST Framework for Improving Critical Infrastructure Cybersecurity (NIST Cybersecurity Framework), and it likewise consists of three fundamental parts: Core, Profiles, and Implementation Tiers. Within the Core, which provides "activities and outcomes that enable an organizational dialogue about managing privacy risk," there are five functions: Identify-P, Govern-P, Control-P, Communicate-P, and Protect-P.

In the Comments, the FTC provides a brief background of its role in protecting consumer privacy under the FTC Act and notes that its history of enforcement actions "provide guidance on the Commission's views as to which privacy practices violate the law as well as the necessary elements of a reasonable privacy program." Against this backdrop, the FTC praises the emphasis that the Framework places on flexibility, observing that "[p]rivacy programs are not one-size-fits-all, but rather must be tailored to the size and complexity of the organization, the scope and nature of its data processing activities, and the volume and sensitivity of the consumer data at stake." The FTC also lauds the Framework's recognition that "privacy programs need to evolve with an organization's changing practices and business environment," including through ongoing assessments that catch changes and identify the need for additional compliance measures.

The FTC does, however, make five recommendations for future drafts of the NIST Privacy Framework:

  • First, the FTC opines that "privacy breaches" should be considered at every step of the Framework. Under the Preliminary Draft, organizations are to consider "privacy risks" generally, and consideration of "privacy breaches" falls within a single Core function, Protect-P. Recognizing that this mirrors the NIST Cybersecurity Framework, the FTC observes that "clarifying the discussion of privacy risks and privacy breaches would be useful, particularly for organizations that may not also use NIST's Cybersecurity Framework."
  • Second, the FTC suggests that NIST "consider clarifying that an organization's risk assessment, safeguards, and other related procedures for managing privacy risks should account for the sensitivity of the consumer data that it is processing." The Preliminary Draft focuses on potential outcomes from privacy risks, not data sensitivity.
  • Third, the FTC suggests "a more detailed discussion of the analysis an organization should undertake as part of the Draft Privacy Framework's Communicate-P Core Function, which is intended to enable organizations and individuals to have a 'reliable understanding' of how data is processed." This suggestion was driven by the FTC's observation that violations often occur when organizations "fail to accurately describe how they collect, use, or share consumer data."
  • Fourth, the FTC recommends that the Govern-P Core Function, which is intended to enable an ongoing understanding of an organization's risk management priorities, encourage organizations to designate specific persons to oversee the privacy program. The Preliminary Draft considers cross-functional team building but does not address single points of accountability.
  • Finally, the FTC recommends that the Framework make clear that "a comprehensive risk assessment is a necessary first step before making decisions about which privacy controls should be implemented and the timetable for such implementation."

FTC Publishes Summaries of Social Media Sponsorship Disclosure Guidance

On November 5, 2019, the Federal Trade Commission (FTC) published "Disclosures 101 for Social Media Influencers," along with an accompanying short video. The materials summarize the FTC's existing guidance on how influencers must disclose endorsements, sponsorships, and other material connections with brands they promote.

Consistent with prior guidance, the new material emphasizes that influencers must clearly and conspicuously disclose any material connection with a brand, such as a sponsorship, endorsement, free or discounted product, or other personal, family, or employment relationship. It also reminds influencers that making these disclosures is their responsibility, and they should not rely on others—or on social media platform tools—to do it for them.

Influencers are discouraged from placing disclosures in less visible places, such as hashtag clouds, behind "Read More" links, or on profile pages. The FTC also prefers simpler and clearer disclosures like "advertisement," "ad," or "sponsored" over abbreviations like "#sp" or "#spon" or terms like "thanks" or "ambassador" without references to the brand name. Nevertheless, for "space-limited" platforms like Twitter, the FTC notes that "[BRAND]Partner" or "[BRAND] Ambassador" could be options.

The new FTC material also provides tailored recommendations for a range of social media platforms and activities. Streamers are encouraged to periodically repeat disclosures during live streams, YouTubers and videographers are advised to make disclosures in videos (not just descriptions), and Snapchat and Instagram users are directed to superimpose disclosures over photos and stories.

Finally, the FTC provides a reminder to avoid fabricating or misrepresenting product experiences, features, or benefits.

FTC Sues Data Storage Services Company for Alleged Deception Regarding Participation in the EU-U.S. Privacy Shield

The Federal Trade Commission (FTC) sued a Nevada data storage services company last month based on allegations that the company misled consumers about its participation in and compliance with the EU-U.S. Privacy Shield Framework and failed to adhere to the program's requirements before allowing its certification to lapse.

The Privacy Shield Framework provides a method for companies to transfer personal data from the European Union (EU) to the United States in a manner consistent with EU data protection laws. To participate in the program, companies must annually self-certify compliance with seven Privacy Shield Principles (and sixteen Supplementary Principles) designed to ensure "adequate" protection for personal data.

In its complaint, the FTC alleged that, between January 2017 and October 2018, RagingWire Data Centers, Inc. (RagingWire) claimed in its online privacy policy that the company participated in the Privacy Shield Framework and complied with the program's requirements when, in fact, the company had allowed the certification to lapse in January 2018. Although the Department of Commerce twice warned RagingWire to take down the claims that it participated in the Privacy Shield unless and until the company recertified its participation in the program, RagingWire did not remove the Privacy Shield statements until the FTC contacted the company in October 2018.

The FTC separately alleged that RagingWire failed to comply with three of the Privacy Shield requirements in the time that it was certified under the program. First, the company allegedly did not verify annually that it had made accurate statements about its Privacy Shield privacy practices and that those practices had been implemented. Second, RagingWire allegedly failed to maintain a dispute resolution process for consumers with privacy-related complaints about the company. Third, RagingWire allegedly failed to comply with the Privacy Shield requirement that companies that cease to participate in the program must affirm to the Department of Commerce that they either will continue to apply the Privacy Shield Principles to any data received according to the Privacy Shield or will delete or return all such data.

This lawsuit serves as a reminder that the FTC may invoke its authority under Section 5 of the FTC Act to pursue any potential misrepresentations relating to the Privacy Shield Framework.

FTC Announces Settlement in First Case Against Developers of "Stalking" Apps

On October 22, 2019, the Federal Trade Commission (FTC) announced a proposed settlement with the developers of three "stalking" apps barring them from selling apps that monitor consumers' mobile devices unless the developers take certain steps to change their practices. The FTC alleged that the developers violated Section 5 of the FTC Act and the Children's Online Privacy Protection Act (COPPA). In the FTC's press release, Andrew Smith, Director of the FTC's Bureau of Consumer Protection, stated that these particular apps "were designed to run surreptitiously in the background and are uniquely suited to illegal and dangerous uses" and "[u]nder these circumstances, [the FTC] will seek to hold app developers accountable for designing and marketing a dangerous product."

Retina-X Studios (Retina-X) and its owner, James N. Johns, Jr. (collectively, the developers), developed three mobile applications, MobileSpy, PhoneSheriff, and TeenShield, that "allowed purchasers to monitor the mobile devices they were installed on, without the knowledge or permission of the device's user." The agency said that the apps allowed purchasers to access sensitive information about device users, including the user's physical movements and online activities. The complaint states that each of the apps provided purchasers with instructions on how to remove the app's icon from appearing on the mobile device's screen so that the device user would not know the app was installed on the device.

According to the complaint, the developers marketed MobileSpy to monitor employees and children and marketed PhoneSheriff and TeenShield to monitor mobile devices used by children, but the developers did not take any steps to ensure that the apps were being used for these purposes.

To install the apps, purchasers were required to bypass mobile device manufacturer restrictions, exposing the devices to security vulnerabilities and likely invalidating manufacturer warranties, according to the FTC complaint. The agency goes on to say that the developers failed to adopt and implement reasonable information security policies and procedures, conduct security testing on their mobile apps, and conduct adequate oversight of their service providers. In April 2017, Retina-X was alerted that a hacker had gained access to the company's cloud storage and deleted certain information, according to the complaint. The allegations note that Retina-X was informed of the breach through a journalist who was tipped off by the hacker.

Among other requirements, the settlement agreement requires the developers to have purchasers state that they will use the app only to monitor a child, an employee, or another adult who has provided written consent; to include an icon with the name of the app on the mobile device, removable only by a parent or legal guardian; and to implement and maintain a comprehensive information security program.

The Commission voted to issue the proposed administrative complaint and to accept the consent agreement in a 5-0 vote.

International

European Commission Publishes Report on EU-U.S. Privacy Shield Review

On October 23, 2019, the European Commission (Commission) published its report on the third annual review of the functioning of the EU-U.S. Privacy Shield (the Privacy Shield). Operative since August 2016, the Privacy Shield sets forth procedures and safeguards for transferring personal data from the European Union (EU) to the United States. The European Commission is required to perform annual reviews of the Privacy Shield to evaluate all aspects of the functioning of the framework.

The report is based on information gathered from key stakeholders, such as Privacy Shield-certified companies, EU data protection authorities, and non-governmental organizations. The findings of the report were further informed by publicly available materials, such as court decisions, implementing rules and procedures of relevant U.S. authorities, reports and studies from non-governmental organizations, transparency reports issued by Privacy Shield-certified companies, and annual reports from independent recourse mechanisms. At the time of the third annual review meeting, the Privacy Shield had more than 5,000 participating companies.

The report covers all aspects of the functioning of the Privacy Shield, with a focus on issues identified by the Commission during the second annual review. According to the Commission, the Privacy Shield has moved from the inception phase to a more operational phase in its third year. The report highlights the experiences and lessons learned from the practical application of the framework, noting several areas of improvement. These include the proactive efforts of the Department of Commerce, such as the introduction of a systematic compliance process in April 2019; the enforcement actions taken by the Federal Trade Commission (FTC) in the third year of operation; and the U.S. Senate's recent confirmation of two additional members to the Privacy and Civil Liberties Oversight Board.

The report provides several recommendations for improving the functioning of the Privacy Shield, as well as for appointments to key oversight bodies. To better ensure the effective functioning of the Privacy Shield in practice, the Commission recommends the following steps:

  • To increase transparency and readability of the Privacy Shield list for both businesses and individuals in the EU, the Department of Commerce should shorten the "grace period" granted to companies for completing the recertification process. The current "grace period" of approximately 3.5 months, or longer in certain cases, should be reduced to a maximum of 30 days, which the Commission suggests is enough time to recertify and to rectify any issues identified during the recertification process. If recertification is not completed within this time, the Department of Commerce should issue a warning letter without delay.
  • As part of the Department of Commerce's "spot-check procedure," companies should be assessed for compliance with the Accountability for Onward Transfers Principle by being required to provide a summary or representative copy of the privacy provisions of a contract concluded by a Privacy Shield-certified company for onward transfers.
  • The Department of Commerce should develop and implement tools for detecting false claims of participation in the Privacy Shield from companies that have never applied for certification.
  • To increase investigation efforts by enforcement authorities, the FTC should share meaningful information on ongoing investigations with the Commission, as well as with EU Data Protection Authorities that also have enforcement responsibilities.
  • The EU Data Protection Authorities, the Department of Commerce, and the FTC should develop common guidance on the definition and treatment of human resources data.

The Commission noted that it would continue to closely monitor further developments concerning specific elements of the Privacy Shield framework, including the development of federal privacy legislation in the U.S. The Commission stated it welcomes a comprehensive approach to privacy and data protection to "increase the convergence between the EU and the U.S. systems" to help "strengthen the foundations on which the Privacy Shield framework has been developed."

In a press release, United States Secretary of Commerce Wilbur Ross "welcomed the release" of the report. He said, "Privacy Shield is a success, and the U.S. Department of Commerce looks forward to continuing to work cooperatively and jointly with our European colleagues to strengthen the EU-U.S. Privacy Shield Framework."

Office of the European Data Protection Supervisor Writes Blog Post on Facial Recognition Technology

On October 28, 2019, Wojciech Wiewiórowski, Assistant Supervisor at the Office of the European Data Protection Supervisor (EDPS), published a blog post focused on the use of facial recognition technology. In the post, he discussed some of the reasons that governments are deploying the technology, the treatment of the technology under the General Data Protection Regulation (GDPR) generally, and ethical and policy considerations regarding the technology's adoption.

Mr. Wiewiórowski began by noting that, in the absence of any specific regulation addressing facial recognition technology, private companies and public sector entities have been adopting the technology for a variety of uses. He explained that an initial use case – verifying the identity of individuals crossing the border – is relatively uncontroversial, but that using the technology to identify unknown individuals in general presents more serious privacy concerns. Mr. Wiewiórowski discussed how governments justify using the technology for national security reasons, citing the Chinese government's surveillance apparatus deployed to control the Uighur population in Xinjiang as an example. He suggested that another justification for adopting the technology is convenience, because it can eliminate the need for individuals to carry paper identification documents.

Mr. Wiewiórowski next discussed five aspects of the GDPR's applicability to facial recognition technology. First, he said that EU data protection rules apply unequivocally to the processing of biometric data, which includes facial images. Second, he noted that interference with fundamental rights must be "demonstrably necessary," and that there are typically less intrusive means than facial recognition to achieve the goal for which the technology is being used. Third, he questioned whether there could be a valid legal basis for the application of the technology. Fourth, he regarded the technology's deployment as lacking accountability and transparency and being "marked by obscurity," such that algorithmic discrimination is a distinct possibility. Finally, he noted that the nature of the technology makes data minimization and data security by design difficult to achieve.

On the question of ethics, Mr. Wiewiórowski objected to treating a human's face as an "object for measurement and categorisation by automated processes," noting that such a development "touches the right to human dignity." He suggested that the technology would be tested on vulnerable populations in society and could chill freedom of expression, citing the ongoing protests in Hong Kong as evidence. He concluded by arguing that facial recognition technology is being advanced as a "solution to a problem that does not exist," noting that jurisdictions around the world are moving to impose moratoriums on its use, and stating that Data Protection Authorities in the EU will address the topic in future discussions.

The United Kingdom Information Commissioner's Office Concludes Initial Call for Input Regarding Its AI Auditing Framework

In 2017, the Information Commissioner's Office (ICO), the entity tasked with enforcing the General Data Protection Regulation (GDPR) and data protection laws in the United Kingdom, published a report titled "Big data, artificial intelligence, machine learning, and data protection" to examine the impact of artificial intelligence (AI) on "privacy, data protection, and the associated rights of individuals." The ICO also publicly announced that AI would be one of its "technology priority areas" through at least 2021. Following that announcement, the ICO issued a call for input on developing an auditing framework for AI in March 2019. The call for input was an effort to define a methodology for reviewing AI applications for transparency, fairness, and approaches to data protection risks. In late October 2019, the ICO announced the end of its call for input in a blog post describing key themes the regulator identified from stakeholders during the input period.

The ICO listed three main themes that emerged during the input period and may be built into the ensuing AI auditing framework: (1) the need to build adequate AI governance and risk management capabilities; (2) the need to understand data protection risks and set appropriate risk appetites; and (3) the desire to leverage Data Protection Impact Assessments (DPIAs), i.e., GDPR-required evaluations for high-risk data processing activities, to help develop compliant and ethical approaches to AI.

  • Building Adequate AI Governance and Risk Management Capabilities. The ICO emphasized that data scientists and engineering teams should not be the only constituencies responsible for examining AI risks. According to the ICO, companies' senior leadership should be involved in AI risk management and should be held accountable for addressing the technology appropriately. The regulator also noted that organizations should not underestimate the "initial and ongoing level of investment of resources and effort that will be required" to build appropriate risk management approaches to AI, and the level of governance and risk management capabilities should be tailored and proportional to the related AI data protection risks.
  • Understanding Data Protection Risks and Setting Appropriate Risk Appetites. The ICO suggested that organizations should develop a comprehensive understanding of their AI systems' data protection risks. The regulator also suggested that entities' risk appetites must be appropriately tailored to the data protection risks inherent in their AI systems.
  • Leveraging DPIAs to Develop Compliant and Ethical Approaches to AI. The ICO noted that organizations will likely need to conduct DPIAs for AI systems because using AI to process personal data can result in high risk to individuals' rights and freedoms. The regulator stressed the benefits DPIAs can bring to organizations looking to manage data protection risk in the context of AI.

The three themes the ICO identified will likely inform the formal consultation paper the regulator intends to publish to unveil and describe its AI auditing framework. The ICO has indicated that the formal consultation paper will be released to the public in January 2020.