SEC statement clarifies material cybersecurity incident disclosure requirement https://www.lexblog.com/2024/05/31/sec-statement-clarifies-material-cybersecurity-incident-disclosure-requirement/ Fri, 31 May 2024 19:31:38 +0000 https://www.lexblog.com/2024/05/31/sec-statement-clarifies-material-cybersecurity-incident-disclosure-requirement/

SEC final rule on reporting material cybersecurity incidents

In July 2023, the US Securities and Exchange Commission (SEC) finalized its rule requiring public companies to disclose material cybersecurity incidents under Item 1.05 of Form 8-K. Though materiality is not a new concept in SEC regulations, in the context of cybersecurity incidents, materiality assessments and disclosure practices are evolving areas with little practical precedent or guidance to draw upon. Fundamentally, an incident is considered material if “there is a substantial likelihood that a reasonable shareholder would consider it important” in making an investment decision.1 This includes assessing all relevant qualitative and quantitative factors, such as reputation, customer and vendor relationships, and competitiveness, in addition to financial and operational impacts, as well as potential litigation and regulatory actions.2

Disclosures under Item 1.05 of Form 8-K are supposed to be limited to material cybersecurity incidents. However, since the rule went into effect in December 2023, out of an abundance of caution, many companies have filed Item 1.05 Form 8-Ks despite not having made a materiality determination. At the International Association of Privacy Professionals’ April 2024 Global Privacy Summit in Washington D.C., SEC officials acknowledged that such “cover yourself 8-Ks” may be counterproductive.

SEC’s Director of the Division of Corporation Finance releases statement on material cybersecurity incident determination and disclosure

On May 21, 2024, the Director of the SEC’s Division of Corporation Finance issued a statement providing useful guideposts for assessing the SEC rule’s disclosure requirements under Item 1.05 of Form 8-K (the “Statement”). The Director reiterated that the intended use of Item 1.05 Form 8-K is to inform investors of material cybersecurity incidents, and to that end, delineated how companies should proceed when faced with a cybersecurity incident for which they have not yet made a materiality determination.

Specifically:

  • If a company wishes to disclose an immaterial incident (i.e., an incident that has been determined as immaterial), it may do so by making a disclosure under Item 8.01 of Form 8-K, which is for disclosing “any events, with respect to which information is not otherwise called for by this form, that the registrant deems of importance to security holders.”3
  • If a company has not yet made a materiality determination, it may make a disclosure under Item 8.01 of Form 8-K. Subsequently, if the incident is determined to be material, the company may file an Item 1.05 Form 8-K within four business days of the determination. It may refer to its prior Item 8.01 disclosure, but must ensure that the subsequent filing satisfies all Item 1.05 requirements.
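To make this decision flow concrete, here is a minimal sketch in Python (an illustration only; the function names, inputs, and the simplified business-day calculation are ours, not the SEC’s, and it necessarily omits the judgment involved in a real materiality analysis):

```python
from datetime import date, timedelta
from typing import Optional

def form_8k_item(materiality: Optional[bool]) -> str:
    """Map a materiality determination to the relevant Form 8-K item.

    None  -> no determination yet (voluntary Item 8.01 disclosure possible)
    False -> determined immaterial (voluntary Item 8.01 disclosure possible)
    True  -> determined material (Item 1.05 disclosure)
    """
    return "Item 1.05" if materiality else "Item 8.01"

def item_1_05_deadline(determination_date: date) -> date:
    """Four business days after the materiality determination.

    Simplified: weekends are skipped, federal holidays are not.
    """
    d, remaining = determination_date, 4
    while remaining:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday through Friday
            remaining -= 1
    return d

# Example: an incident determined material on Friday, May 31, 2024 would
# (ignoring holidays) be reportable by Thursday, June 6, 2024.
print(form_8k_item(True), item_1_05_deadline(date(2024, 5, 31)))
```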

The Statement also provides additional insight into the considerations involved in making a materiality determination. In particular, the Statement recognizes that while there are numerous factors to consider in determining an incident’s actual or reasonably likely impact, there may be some cybersecurity incidents “so significant” as to warrant a materiality determination even before ascertaining the incident’s impact or reasonably likely impact.4 The Statement does not define “so significant” any further; the phrase could be interpreted as a “catch-all” category that may nudge companies toward disclosure based on their specific circumstances, such as their industries or roles in the market.

In disclosing a “so significant” incident in an Item 1.05 Form 8-K, the company should provide investors with information necessary to understand the material aspects of the incident (i.e., nature, scope, and timing), and include a statement that it has not yet determined the incident’s impact or reasonably likely impact. The company should amend its Form 8-K to disclose the impact once that information becomes available.5

Takeaways

The Statement underscores the need for companies to establish, document, and maintain materiality determination and disclosure protocols as part of their cybersecurity incident response procedures. Companies should take into account their own unique circumstances and the particularities of each incident in making these decisions, and should document the assessment. Ultimately, it is important to keep in mind that despite the emerging nature of disclosure practices, purely defensive disclosures under Item 1.05 of Form 8-K may create investor confusion about the significance of the cybersecurity incident and could invite a line of inquiry from the SEC.

Special thanks to Law Clerk Shushan Gabrielyan (Los Angeles, CA) for her assistance in the preparation of this content.


[1] Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure, Release Nos. 33-11216; 34-97989 (July 26, 2023) [88 FR 51896] (Aug. 4, 2023), available at https://www.sec.gov/news/statement/gerding-cybersecurity-disclosure-20231214.

[2] Id.

[3] SEC, Form 8-K, available at https://www.sec.gov/files/form8-k.pdf.

[4] Erik Gerding, Disclosure of Cybersecurity Incidents Determined To Be Material and Other Cybersecurity Incidents, SEC.gov (May 21, 2024), available at https://www.sec.gov/news/statement/gerding-cybersecurity-incidents-05212024.

[5] Id.

UK regulators’ strategic approaches to AI: a guide to key regulatory priorities for AI governance professionals https://www.lexblog.com/2024/05/24/uk-regulators-strategic-approaches-to-ai-a-guide-to-key-regulatory-priorities-for-ai-governance-professionals/ Fri, 24 May 2024 07:37:47 +0000 https://www.lexblog.com/2024/05/28/uk-regulators-strategic-approaches-to-ai-a-guide-to-key-regulatory-priorities-for-ai-governance-professionals/

Background – white paper response on the UK’s approach to AI regulation

In February 2024, the UK Department for Science, Innovation, and Technology (DSIT) set out the government’s proposed approach to AI regulation. It published a response to its consultation on its 2023 white paper, ‘A pro-innovation approach to AI regulation’ (the White Paper). DSIT confirmed that, for the time being, the UK will follow its proposed approach of setting cross-sectoral principles to be enforced by existing regulators rather than passing new legislation to regulate AI. These principles (the Principles) are:

  • Safety, security and robustness.
  • Appropriate transparency and explainability.
  • Fairness.
  • Accountability and governance.
  • Contestability and redress.

The letters

At the time of publication of the response to the White Paper, ministers wrote to key sectoral and cross-economy regulators requesting an update on how they are taking forward the White Paper proposals and the steps they are taking to develop their strategic approaches to AI. The following regulators received a letter:

  1. the Financial Conduct Authority (FCA)
  2. the Bank of England (BoE)
  3. the Information Commissioner’s Office (ICO)
  4. the Office of Communications (Ofcom)
  5. the Medicines and Healthcare products Regulatory Agency
  6. the Office for Nuclear Regulation (ONR)
  7. the Health and Safety Executive (HSE)
  8. the Office for Standards in Education, Children’s Services and Skills
  9. the Office of Qualifications and Examinations Regulation
  10. the Legal Services Board
  11. the Office of Gas and Electricity Markets (Ofgem)
  12. the Competition and Markets Authority (CMA)
  13. the Equality and Human Rights Commission (EHRC).

DSIT published the responses to these letters on 1 May 2024. These responses set out actions regulators have taken, upcoming plans, and how they are collaborating through the Digital Regulation Cooperation Forum (DRCF), with other regulators, and with other stakeholders. Below, we summarise responses from DRCF members the CMA, FCA, ICO, and Ofcom, along with the BoE and Prudential Regulation Authority (PRA), Ofgem, the ONR, and the EHRC.

Review our summary of key responses or our take on the responses.

Summary of key responses

CMA

The CMA’s response took the form of a strategic update on its approach to AI, highlighting its role in ensuring that consumers, businesses, and the wider economy reap the benefits of developments in AI, while harms are mitigated. The CMA’s work on foundation models has allowed it to investigate how foundation model markets work and the current and possible future implications for competition and consumer protection. The CMA’s publications have included an initial report, update paper, technical update report and its Trends in Digital Markets report. More broadly, it considers potential issues that could be raised by AI (for example, anti-competitive self-preferencing). The CMA will continue to monitor for potential competition and consumer protection concerns, applying its six principles to identify any such concerns: access, diversity, choice, fair dealing, transparency, and accountability. For more information on the CMA’s publications, see our post. The CMA plans to utilise new powers under the Digital Markets, Competition and Consumers Bill to raise standards in the market. It stresses the need for cooperation with other regulators and highlights its work through the DRCF, as well as directly with regulators such as the ICO. For more information on the interaction between AI and competition law, watch our video interview.

FCA

In its AI update, the FCA welcomes the government’s principles-based, sector-led approach to AI and confirms that it is focused on how firms can safely and responsibly adopt the technology as well as understanding what impact AI innovations are having on consumers and markets – including close scrutiny of the systems and processes firms have in place to ensure regulatory expectations are met. The document outlines the ways in which the FCA’s approach to regulation and supervision addresses the Principles. It also sets out what the FCA plans to do in the next 12 months in relation to AI, including:

  • Continuing to further its understanding of AI deployment in UK financial markets.
  • Building on existing foundations: the existing framework aligns with and supports the Principles in various ways, but the FCA will closely monitor the situation and may actively consider future regulatory adaptations if needed.
  • Collaboration, including regular collaboration with domestic partners and further prioritising international engagement.
  • Testing for beneficial AI, including working with DRCF member regulators to deliver the pilot AI and Digital Hub, whilst running its own Digital Sandbox and Regulatory Sandbox, and exploring an AI sandbox.
  • Its own use of AI, including investing more into technologies to proactively monitor markets, including for market surveillance purposes.
  • Looking towards the future: it is taking a proactive approach to understanding emerging technologies and their potential impact as part of its Emerging Technology Research Hub.

The AI update was published alongside a speech by chief executive Nikhil Rathi announcing the FCA’s plans to focus on Big Tech and Feedback Statement FS24/1 on data asymmetry between Big Tech and firms in financial services. See our update on the FCA’s plans for further detail.

BoE, including the PRA

The BoE’s response took the form of a joint letter with the PRA. The letter outlines the BoE’s and the PRA’s statutory objectives and secondary objectives, and explains that their focus is on understanding how to support the safe and responsible adoption of AI and machine learning (ML) in financial services from a macro-financial and prudential perspective, given the potential benefits – including driving innovation – that AI/ML could bring to firms. The letter states that the BoE and the PRA have a close interest in encouraging the safe and responsible adoption of AI and ML in financial services, given the potential risks this could pose to the BoE’s and the PRA’s objectives. There are four key areas of focus that the BoE and PRA have been exploring, where further clarification on their regulatory framework could be beneficial and which are relevant to AI and ML:

  • Data Management.
  • Model Risk Management.
  • Governance.
  • Operational Resilience and Third Party Risks.

The BoE and PRA are planning to run the third instalment of their ‘ML in UK financial services’ survey, to ensure their understanding of AI/ML adoption remains up to date. In addition, given the rapid pace of innovation and widespread use cases, they are undertaking deeper analysis on the potential financial stability implications of AI/ML over the course of 2024, and that analysis will be considered by the Financial Policy Committee. See our update for more information on the BoE and PRA response.

ICO

In its strategic update, the ICO emphasises the potential benefits AI can bring across sectors, and also highlights that there are inherent risks associated with AI, concerning, for example, transparency, security, and fairness. It stresses that data protection law is risk-based; it requires these risks to be mitigated and managed via technical and organisational measures, but not necessarily completely removed. The ICO highlights specific focus areas: foundation models, high-risk AI applications (including emotion recognition technology, on which it has previously issued a warning), facial recognition technology, biometrics, and AI’s impact on children (highlighting its recent children’s code strategy). It also sets out how the Principles map to data protection law principles, obligations, and rights. The ICO highlights recent enforcement action and emphasises that it will use its “full regulatory toolbox”, including enforcement notices and monetary penalty notices. It also highlights existing guidance on AI and data protection and automated decision-making and profiling. Consultations on updates are expected from spring 2025. It will also consult on biometric classification in spring 2024, and will continue its consultation series on generative AI. Collaboration with stakeholders and regulators remains a key focus for the ICO in shaping AI regulation and policies.

Ofcom

Ofcom’s strategic approach to AI 2024/2025 sets out three key risk areas and its work to date to address those risks:

  • Synthetic media: AI tools that can be used to generate synthetic media can be used to create content that depicts child sexual abuse, acts of terrorism, and non-consensual pornography.
  • Use of AI to personalise content, leading to amplification of illegal and harmful content.
  • More advanced forms of AI, like GenAI, being used to develop more virulent malware, identify vulnerabilities in networks, and/or provide instructions on how to breach network security.

Its work to address these risks includes publishing the draft Illegal Harms Code of Practice, publishing research to understand adoption and attitudes towards GenAI, and engaging with standards bodies like the European Telecommunications Standards Institute and the International Organisation for Standardisation. To address these risks, Ofcom has over 100 technology experts (with approximately 60 AI experts) in its data and technology teams, including some with direct experience of developing AI tools. It is building strategic partnerships with academic institutions, developing data skills across Ofcom, and cooperating with others through the DRCF and internationally. Its planned AI work spans Online Safety, broadcasting, telecoms, and cross-cutting initiatives. In Online Safety, its planned actions include researching the merits of red teaming to detect vulnerabilities in AI models and tools, the merits of synthetic media detection tools, and the merits of automated content classifiers. It will also research the merits of using GenAI for content moderation, as well as potential methods for preventing children from encountering GenAI pornographic content. Cross-cutting actions include monitoring and engaging with AI developments internationally, including the EU’s AI Act, horizon scanning to identify emerging and longer-term AI developments, and engaging with the government, including through its Central AI Risk Function.

ONR

The ONR’s pro-innovation approach to regulating AI in the nuclear sector sets out the ONR’s strategic approach and states that the ONR is already well aligned with the Principles. The ONR has established an AI-focused team of specialist safety and security inspectors, alongside the ONR innovation hub, and is commissioning research to explore the suitability of the ONR’s current approach to regulating AI. The ONR sets out its programme of work as follows:

  • taking part in a five-year UK Research and Innovation (UKRI)-funded project alongside universities, nuclear site licensees and other regulators, focusing on the development of robotics for the UK nuclear industry and how to structure associated safety arguments;
  • developing regulatory sandboxing as a tool for duty holders and regulators to test innovative technologies in collaboration while exploring the suitability of the existing regulatory framework;
  • updating guidance for ONR inspectors, supported by early engagement with duty holders to foster open dialogue and explain its regulatory expectations;
  • regularly engaging with international partners and UK regulators, sharing best practice and collaborating to build regulatory capability; and
  • providing regulatory advice to duty holders and the wider nuclear sector.

Ofgem

Ofgem’s response emphasises that AI is already being used by the energy sector across England, Scotland and Wales (Great Britain). It can create efficiencies for energy bill payers, energy suppliers and the wider energy sector, but it can also create challenges and risks. AI could play a big part in decarbonising the energy sector; for example, it can be used to better predict weather, which can help improve solar generation forecasts. Ofgem recently put out a call for input on use of AI within the energy sector, which closed on 17 May 2024. Ofgem’s recommendations in the call for input include:

  • collaboration with the Office of AI (part of DSIT) and relevant regulators such as the CMA, the ICO, the EHRC and the HSE, along with other relevant stakeholders, including through establishment of an AI Best Practice Cross Industry Forum;
  • considering the potential for AI collusion in energy markets, by continuing discussions with the Competition and Markets Authority and energy sector regulators in other jurisdictions;
  • ensuring interoperability with international markets is considered, with any issues and challenges effectively addressed in a joined-up manner by UK and international regulators;
  • considering the need for additional regulatory tools, such as an AI sandbox;
  • considering the issue of liability and the AI supply chain to ensure effective measures are in place to regulate the sector; and
  • developing specific guidance for the sector, through which it anticipates minimising the need for formal intervention.

Ofgem’s proposed next steps are to:

  • analyse the findings from the call for input and update its approach to regulating AI;
  • develop AI guidance for energy bill payers, energy companies and organisations;
  • continue to work with companies and organisations across the energy sector; and
  • continue to research and identify the opportunities and threats associated with use of AI in the energy sector.

EHRC

The EHRC’s response highlights that it is a small strategic regulator in comparison to many of the other regulators being called upon to play a part in regulating AI. It states that its budget has remained static at £17.1 million since 2016, with the consequence that its ability to scale up and respond to the risks to equality and human rights presented by AI is limited. The approach it outlines reflects the need to prioritise its work. The EHRC prioritised AI in its 2022–25 strategic plan. It has already issued guidance on AI and the Public Sector Equality Duty and has undertaken focused work on specific issues, including exploring potential discrimination as a result of online recruitment through social media platforms and supporting litigation to challenge the potentially discriminatory use of facial recognition technology (FRT) in the workplace. It is partnering with the Responsible Technology Adoption Unit (RTAU), Innovate UK, and the ICO to drive the development of new socio-technical solutions to address bias and discrimination in AI systems. The EHRC confirms its broad support for the Principles, and highlights that all are relevant to the equality and human rights frameworks. It also highlights its unique role in supporting the fairness principle, both with regulated bodies and across the regulatory community. It is participating in the Fairness Innovation Challenge, alongside the ICO and the RTAU. It has also taken part in a workshop with the DRCF to explore fairness across regulatory remits. In 2024–25 the EHRC will focus predominantly on:

  • reducing and preventing digital exclusion, particularly for older and disabled people in accessing local services;
  • the use of AI in recruitment practices;
  • developing solutions to address bias and discrimination in AI systems;
  • use of FRT by the police and elsewhere; and
  • partnering with the RTAU on the Fairness Innovation Challenge to develop tools for tackling algorithmic bias and discrimination.

Our take

The next government may choose to push forward AI legislation quickly, or continue with the current approach of regulating AI through the existing framework. Whichever approach is taken forward, the responses above will provide a valuable reference point for organisations’ AI governance programmes; these regulators will continue to lead the way on both providing guidance and on enforcement in either scenario.

As is to be expected, DRCF members the CMA, FCA, and ICO provided responses demonstrating the significant steps they have already taken and plan to take on regulating AI.

The responses of most regulators highlighted resource constraints and the need to prioritise notwithstanding the £10 million earmarked to upskill all of the relevant regulators to address the challenges of regulating AI.

Key themes for prioritisation include foundation models, technologies that pose threats to fairness like facial recognition, AI’s impact on children, and ensuring competition concerns are addressed.

While regulators’ areas of focus vary according to their remit, there is a significant degree of alignment and overlap. The Principles provide a shared framework around which regulators must shape their AI activities. AI governance may appear a daunting task, as it requires multiple regulatory regimes to be addressed. However, organisations can take comfort from the broad similarities in regulatory approaches. Addressing the Principles for a particular use case will not require a distinct set of actions for each in-scope regime. A holistic approach can and should be taken, considering whether additional actions are required when looking through any of the regulatory lenses focused on the technology.

Is your Texas data protection assessment started? https://www.lexblog.com/2024/05/23/is-your-texas-data-protection-assessment-started/ Thu, 23 May 2024 22:26:49 +0000 https://www.lexblog.com/2024/05/23/is-your-texas-data-protection-assessment-started/ As we have previously written, the Texas comprehensive privacy law, known as the Texas Data Privacy and Security Act (TDPSA), goes into effect on Monday, July 1, 2024.  As a reminder, unlike other states’ comprehensive privacy laws that are currently in effect, Texas does not include a minimum number of residents for applicability.  Instead, the three criteria for applicability of the TDPSA are that the company:

  • conducts business in this state or produces a product or service consumed by residents of this state;
  • processes or engages in the sale of personal data; and
  • is not a small business as defined by the United States Small Business Administration, . . . .  [Note:  That definition varies by industry and typically is based upon annual revenue, number of employees, or both.]

Consequently, many companies that do not meet the thresholds for other states’ laws can be subject to Texas’ requirements.  It’s common for companies to be subject only to the California and Texas requirements but not any of the other states’ current comprehensive privacy laws.

Like other states except California, TDPSA includes an “employee” exception, for “data processed or maintained in the course of an individual applying to, being employed by, or acting as an agent or independent contractor of a controller, processor, or third party, to the extent that the data is collected and used within the context of that role.”  Unlike other states, however, the law requires the controller to offer opt-outs from solely automated “profiling in furtherance of a decision that produces a legal or similarly significant effect concerning the consumer.”  The law defines those “significant effects” to include “the provision or denial by the controller” of “employment opportunities.”  Consequently, uses of artificial intelligence with respect to recruiting or promotion decisions, as well as using the data to train AI systems, may raise issues under TDPSA, which are not raised under the other states’ laws currently in effect.  As a result, companies should review their uses of artificial intelligence in recruiting processes as well as whether they are using employee data in training AI or other ways.

Unlike California’s current requirements, organizations that are “controllers” under TDPSA must also conduct and document a data protection assessment with respect to certain uses of “personal data.” Those uses include the processing of “sensitive data” (which includes precise geolocation data) or “any processing activities involving personal data that present a heightened risk of harm to consumers” or “processing of personal data for purposes of profiling,” if the profiling presents certain reasonably foreseeable risks of various harms to consumers, including financial or reputation harm.

The law requires that the data protection assessment must identify and balance the benefits “to the controller, the consumer, other stakeholders, and the public, against the potential risks to the rights of the consumer associated with that processing” of the data.  The assessment must also take into account certain factors, including the context of the processing and the reasonable expectations of the consumer.  The controller must make the assessment available to the Texas Attorney General upon a civil investigative demand.  Fortunately for the controllers, the law states that the assessment “is confidential and exempt from public inspection” and disclosure in response to the Attorney General’s request “does not constitute a waiver of attorney-client privilege or work product protection.”

There is no private right of action.  Only the Attorney General may enforce the TDPSA.  In addition, the law includes a 30-day notice and cure period before the Attorney General may bring an action.  Violations can result in civil penalties not to exceed $7,500 per violation.

Our Take

If you are subject to the TDPSA, have you reviewed the law and taken the necessary actions to comply?

1.         Is your company using artificial intelligence in recruiting or in other ways that may be “profiling” relating to “employment opportunities”?  You may need to give the individuals a right to opt-out of that use.

2.         Is your company using employee data in training artificial intelligence?  That use may not be within the context of employment, so the “employee” exception may not apply and your company may need to comply with TDPSA with respect to this data.

3.         Does your company process any “sensitive data” of Texas residents in a way that is in-scope for the law?  Is your company doing “any processing activities involving personal data that present a heightened risk of harm to consumers”?  If so, have you begun drafting the data privacy assessment?  Texas has provided only a general description of the requirements, so it may take longer than you expect.  Remember, TDPSA goes into effect on July 1.

The US government, privacy, and security – recent developments https://www.lexblog.com/2024/05/01/the-us-government-privacy-and-security-recent-developments/ Wed, 01 May 2024 14:45:00 +0000 https://www.lexblog.com/2024/05/01/the-us-government-privacy-and-security-recent-developments/ The United States Federal Government is turning its attention to privacy and cybersecurity laws, and the result has been several recent legal developments that may have an impact on your business. Keeping up with these developments is not easy, so we’ve created a fun way to test your knowledge of the same: 

  1. True or False: There is a bipartisan bill pending that would pre-empt state breach notification laws.
  2. True or False: There is a proposed federal regulation that would require reporting within 24 hours if your company pays the ransom in a ransomware incident.
  • True or False: The White House has issued an Executive Order that calls for regulation of sending Americans’ bulk sensitive data to “countries of concern.”
  4. True or False: There is a bipartisan bill pending that would expand private rights of action for privacy matters, ranging from use of dark patterns, to failure to conduct due diligence on service providers, to failure to recognize opt-outs.
  5. True or False: Norton Rose Fulbright is the best law firm in the world!

Items 2 through 4 are True (and clearly so is 5), but item 1 is False (the American Privacy Rights Act (APRA) would pre-empt state comprehensive privacy laws, but APRA would NOT pre-empt breach notification laws).

Moving on to how these developments could affect your company:

With respect to ransomware, does your incident response plan address secondary ransoms, such as threatened release of stolen information? What about regulatory notifications? Is the appropriate staff aware of its obligations and able to provide notice within 24 hours? When was the last time you tested your incident response plan?

Do you know what type of data your company collects and maintains? Does it include the personal information of individuals that live in the United States? Do you know what legal obligations the company has with regards to such data? Have you done an inventory of which third party vendors have access to this information and how they secure it? How would you know if large amounts of the company’s data, especially if it contains personal information, moved to another country?

Many state privacy laws currently do not include a private right of action, but did you know about the growing line of cyber liability case law? Could privacy laws be used to inform the duty of care owed with respect to any data that your company collects and maintains? Additionally, if APRA passes, many—perhaps most—of your actions relating to personal data may also now become subject to a federal private right of action (which includes attorneys’ fees and litigation costs). When was the last time you reviewed your company’s privacy practices? When was the last time you had a third party test your cyber security policies and procedures? Have you checked whether your website is sending personal data to third parties? Have you tested any of your company’s apps for compliance with the app stores’ privacy requirements?

Experienced counsel can assist you with all of these items, as well as helping you keep up with the many, many changes in this area. The best experienced counsels include fun ways to keep you updated, like true and false quizzes.

$10,000,000 civil penalty for disclosing personal data without consent https://www.lexblog.com/2024/04/30/10000000-civil-penalty-for-disclosing-personal-data-without-consent/ Tue, 30 Apr 2024 12:42:21 +0000 https://www.lexblog.com/2024/04/30/10000000-civil-penalty-for-disclosing-personal-data-without-consent/ On April 15, 2024, the U.S. Department of Justice, upon referral from the Federal Trade Commission, filed a complaint and stipulated order against telehealth company Cerebral, Inc.  The claims related to the company’s sharing personal data without consumer consent and making it very difficult for consumers to cancel their subscriptions to this telehealth service.  As part of the order, the company agreed to post “clearly and conspicuously” on its websites and apps for the next two years:

Between October 2019 and [date], we shared the personal information of consumers visiting our website and apps with other companies without their permission. Specifically, we shared details about consumers (including contact information, birthdates, IP addresses, and other demographic information); any intake questionnaire responses they provided (including selected services and other personal health information); location information; and subscription or other treatment information (including appointment dates, clinical information, and insurance and pharmacy information) with approximately two-dozen outside firms, including social media firms such as Facebook / Meta and Tik Tok, and other businesses such as Google.

In brief, the company agreed to settle the charges—without admitting or denying the allegations—for a civil payment of $10 million (partly suspended) plus $5 million in civil relief, as part of a 20-year stipulated order.  The order also requires the company to destroy personal data for which it had not received consent and to create a document retention and destruction policy.

Background

According to the complaint, Cerebral offered telehealth services, including mental health treatment and/or medication management services, through websites and mobile apps since October 2019.  As indicated in the paragraph quoted above, the company collected some very sensitive personal information.  Its privacy policy stated that the company would treat the data confidentially, and the company would not share data without user consent.  In December 2020, the company revised its privacy policy (increasing its size to 15 pages) and added a statement that it used Facebook Pixel, a web analytics and advertising service by Facebook, Inc. that “uses cookies, pixel tags, and other storage and tracking technology to collect or receive information from [Cerebral’s] [w]ebsites and [a]pps based on [consumers’] usage activity.”  The company also, according to the complaint, used the consumer data for targeted advertisement services that “relied on exploiting user PHI in order to (1) re-target current Cerebral users with additional advertisements for Cerebral services, and (2) target new, potential users who were demographically similar to existing Cerebral users.”  The complaint stated:  “By permitting tracking tools on Cerebral’s websites and apps, Defendants caused a massive disclosure of consumers’ remarkably sensitive PHI directly or indirectly to twenty or more third parties, including Linkedin, Snapchat, and TikTok.”

Then, according to the complaint:

In March 2023, over three years after it began to unlawfully share its patients’ PHI with third parties as alleged above, Cerebral filed a notice with the U.S. Department of Health and Human Services (“HHS”) acknowledging that its inappropriate use of tracking tools on its websites and apps constituted a breach of unsecured health information protected under HIPAA. Cerebral disclosed that its breach impacted nearly 3.2 million consumers between October 2019 and March 2023.

The company also admitted “that it disclosed consumers’ sensitive PHI to entities that were not able to meet all legal requirements to protect consumers’ health information.”

The complaint alleged that the company’s data handling practices also resulted in unauthorized disclosures of personal information.  For example, the government alleged that the company placed personal information “in a shared electronic folder, which unauthorized persons whom Cerebral has been unable to identify accessed multiple times.”  In addition, “former employees and contractors accessed 266 patient files using access credentials Cerebral failed to revoke.”

The complaint alleged that Cerebral’s conduct constituted unfair and deceptive acts or practices in violation of Section 5 of the Federal Trade Commission Act, plus a claim for violation of the Restore Online Shoppers’ Confidence Act (ROSCA) due to its complex subscription cancellation processes, plus a claim for violation of the federal Opioid Act, with respect to a substance use disorder treatment service.

The Stipulated Order

In addition to the $10 million penalty and $5 million civil relief, the 20-year order places several additional obligations on the company, including the website notice described above, third-party assessments, an information security program, express consent requirements with respect to personal information, certifications by third parties to which personal information is disclosed, and required disclosures with respect to negative option subscriptions. 

The order also, in Section IX, sets forth data destruction requirements and a data retention policy.  The company has 60 days to delete all personal data collected without appropriate consent unless it obtains affirmative express consent.  The order defines “Deletion” to mean “to remove Covered Information such that it is not maintained in retrievable form and cannot be retrieved in the normal course of business.”  With respect to a data retention policy, the order states:

  1. Within seven days of entry of this Order, Defendant must document and adhere to a retention schedule for Covered Information in compliance with this Order. Such schedule shall set forth: (1) the purpose or purposes for which each type of Covered Information is collected; (2) the specific business needs for retaining each type of Covered Information; (3) a specific timeframe for Deletion of each type of Covered Information (absent any intervening Deletion requests from consumers) limited to the shortest time necessary to fulfill the purpose for which the Covered Information was collected, and in no instance providing for the indefinite retention of any Covered Information or retention beyond 10 years; and (4) a true and accurate explanation of why the set timeframe for Deletion is the shortest time reasonably necessary for the specific business needs cited.

The order also requires Cerebral to permit individuals to access or delete their data, and the company has 30 days to respond to such a request, which can be extended for an additional 30 days when “reasonably necessary.”  Cerebral has 30 days from the date of the order to submit to the FTC a listing of all third parties that received the covered information “in any form, including in hashed or encrypted form.”  The company has 60 days from the date of the order to contact those third parties and direct them to delete the data unless the data is necessary for the treatment, payment or health care operations of patients.  In addition, the order requires the company, for each product or service, to develop policies and procedures that include, among other things, “the data retention limit set for each type of Covered Information and the technical means for achieving Deletion.”
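By way of illustration only (the field names and sample values below are hypothetical and not drawn from the order), a retention schedule entry capturing the four elements quoted above might be recorded as structured data along the following lines:

```python
from dataclasses import dataclass

@dataclass
class RetentionScheduleEntry:
    """One entry of a retention schedule, tracking the four elements the
    order requires for each type of Covered Information."""
    covered_information_type: str
    collection_purposes: list[str]   # (1) purpose(s) for which it is collected
    business_need: str               # (2) specific business need for retaining it
    deletion_timeframe_days: int     # (3) timeframe for deletion
    timeframe_justification: str     # (4) why that timeframe is the shortest necessary

entry = RetentionScheduleEntry(
    covered_information_type="intake questionnaire responses",
    collection_purposes=["clinical intake", "treatment planning"],
    business_need="clinicians reference responses during active treatment",
    deletion_timeframe_days=365,
    timeframe_justification=(
        "responses are not consulted after a patient has been inactive for "
        "one year, and no legal requirement mandates longer retention"
    ),
)

# The order caps retention at 10 years in any event.
assert entry.deletion_timeframe_days <= 10 * 365
```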

Our Take

It is becoming routine for U.S. regulators to require companies who have a data breach and/or mishandle consumer personal information, to implement a meaningful record retention program that focuses on deletion of personal information.  We are seeing order after order in the U.S. pushing back on indefinite retention of information.

Mature privacy programs must coordinate with their information governance counterparts to develop guidance that actually helps their employees organize their data and effectively delete information they no longer need.  Vague assertions of legal or business need are no longer enough to continue to store personal information for extended periods of time.  Here, the Stipulated Order was focused on tying specific retention periods to demonstrable business or legal requirements.

As with any compliance program, the most difficult part is not generating the guidance, policies or procedures, but cost-effectively changing people’s behavior and measuring success.  It is great if a company’s System of Record programmatically deletes records at specific time periods and intervals.  This success, however, is undermined if employees easily download the same information and squirrel it away in fileshares, SharePoint sites and hard drives.

Maturing and implementing meaningful record retention requires making incremental progress with your enterprise systems as well as employee behavior and distributed file storage.  It will take time, and record retention needs to be integrated into the business model of both your front-end and back-end operations.

FCA sets out plans to make Big Tech a priority and provides update on its approach to AI https://www.lexblog.com/2024/04/23/fca-sets-out-plans-to-make-big-tech-a-priority-and-provides-update-on-its-approach-to-ai-2/ Tue, 23 Apr 2024 08:49:34 +0000 https://www.lexblog.com/2024/04/23/fca-sets-out-plans-to-make-big-tech-a-priority-and-provides-update-on-its-approach-to-ai-2/ On 22 April 2024, the Financial Conduct Authority (FCA) published a speech by its chief executive, Nikhil Rathi, entitled ‘Navigating the UK’s Digital Regulation Landscape: Where are we headed?’. In the speech, Mr Rathi announced the FCA’s plans to focus on Big Tech, which are included in Feedback Statement FS24/1 (published alongside the speech). The speech also covered the FCA’s response to the Government’s White Paper on Artificial Intelligence (AI), which was also published in parallel with the speech.

The speech: key points

As part of his speech, Mr Rathi explained that the FCA plans to examine how Big Tech firms’ unique access to large sets of data could unlock better products, more competitive prices and wider choice for consumers and businesses. He noted that whilst the growing emergence of Big Tech in financial services has already made life easier for consumers, it remains unclear how valuable their data will become in financial markets. If the FCA’s analysis finds Big Tech data is valuable in financial services, it will look to incentivise more data sharing between Big Tech and financial firms through its Open Banking and broader Open Finance work. If it finds potential risk or harms from non-sharing of data it will also look to develop proposals for the Competition and Markets Authority (CMA) to consider when they are given powers to regulate designated firms’ digital and data conduct, expected via the Digital Markets, Competition and Consumers Bill.

Mr Rathi also highlighted the FCA’s continued joint work with the Bank of England (BoE) and the Prudential Regulation Authority on the role of critical third parties and AI. He flagged that collaboration, including with industry, through fora such as the newly launched Digital Regulation Cooperation Forum (DRCF) AI and Digital Hub is key to ensuring the FCA’s approach is proportionate and supports innovation.

FS24/1 on data asymmetry between Big Tech and firms in financial services

In FS24/1, the FCA summarises its analysis of the responses it received to the call for input (CFI), launched in November 2023, on potential competition impacts from the data asymmetry between Big Tech and firms in financial services. The FCA also sets out its next steps.

The FCA has already committed, as part of its 3-year strategy, to identifying potential competition benefits and harms from Big Tech firms’ growing presence in financial services, and one area of concern was that the asymmetry of data between these firms and financial services firms could have significant adverse implications for how competition develops in financial services in the future. In FS24/1, the FCA explains that it aims to mitigate the risk of competition in retail financial markets evolving in a way that results in some Big Tech firms gaining market power while enabling the potential competition benefits (from Big Tech entry and expansion).

Four ‘next steps’ are set out in FS24/1 to address the key issues identified by the FCA. In determining these next steps, the FCA notes that it has balanced the fact that no significant harms have arisen from data asymmetry to date against the need to start developing a regulatory framework that enables increased competition and innovation. The steps are for the FCA to:

  • Continue monitoring Big Tech firms’ activities in financial services to assess whether policy changes are needed and working with its regulatory partners.
  • Work with Big Tech firms to examine whether the data from their core digital activities would be valuable in certain retail financial markets.
  • Develop proposals (dependent on those results) in the context of Open Finance, and for the CMA to consider.
  • Examine how firms’ incentives, including Big Tech firms, can be aligned to share data where this is valuable to achieve good outcomes for consumers.
  • Work closely with the Payment Systems Regulator (PSR), alongside these initiatives, to understand the risks and opportunities related to digital wallets.

FCA response to Government White Paper on AI

The ‘AI update’ published by the FCA provides an update on its approach to AI following the Government’s publication of its pro-innovation strategy in February 2024. The FCA welcomes the Government’s principles-based, sector-led approach to AI and confirms that it is focused on how firms can safely and responsibly adopt the technology as well as understanding what impact AI innovations are having on consumers and markets – including close scrutiny of the systems and processes firms have in place to ensure regulatory expectations are met. It confirms that it will continue to closely monitor the adoption of AI across UK financial markets, including keeping under review if amendments to the existing regulatory regime are needed, and will continue to monitor the potential macro effects that AI can have on financial markets (e.g. cybersecurity, financial stability, interconnectedness, data concerns or market integrity).

The document outlines the ways in which the FCA’s approach to regulation and supervision addresses the five key ‘AI principles’ identified by the Government: 1) safety, security, robustness; 2) appropriate transparency and explainability; 3) fairness; 4) accountability and governance; and 5) contestability and redress.

It also sets out what the FCA plans to do in the next 12 months in relation to AI, including:

  • Continuing to further its understanding of AI deployment in UK financial markets: This is intended to ensure that any potential future regulatory interventions are not only effective but also proportionate and pro-innovation, and also that the FCA can respond promptly from a supervisory perspective to any emerging issues at specific firms. Examples of work in this area include current diagnostic work on the deployment of AI across UK financial markets; re-running a third edition of the machine learning survey (jointly with the BoE); and collaborating with the PSR to consider AI across systems areas.
  • Building on existing foundations: While the existing framework, in so far as it applies to firms using AI, aligns with and supports the Government’s AI principles in various ways, the FCA notes that it is continuing to closely monitor the situation and may actively consider future regulatory adaptations if needed. It flags regulatory regimes such as those relating to operational resilience, outsourcing and critical third parties as having increasing relevance to firms’ safe and responsible use of AI and says it will feed in lessons from its better understanding of AI deployment in UK financial markets into its ongoing policy work in these areas.
  • Collaboration: The FCA explains that it routinely collaborates with partners domestically and internationally, including on AI, and that given recent developments (such as the AI Safety Summit and the G7 Leaders’ Statement on the Hiroshima AI Process) it has further prioritised its international engagement on AI.
  • Testing for beneficial AI: As greater use of AI by market participants means its impact on consumers and markets is expected to increase, the FCA highlights that it is working with DRCF member regulators to deliver the pilot AI and Digital Hub, whilst at the same time running its own Digital Sandbox (which allows for the testing of technology via synthetic data) and the Regulatory Sandbox (for which the FCA is the global pioneer). The FCA is also assessing opportunities to pilot new types of regulatory engagement as well as environments in which the design and impact of AI on consumers and markets can be tested and assessed without harm materialising – this includes exploring changes to its innovation services that could enable the testing, design, governance and impact of AI technologies in UK financial markets within an AI Sandbox.
  • Its own use of AI: The FCA uses web scraping and social media tools that are able to detect, review and triage potential scam websites, and plans to invest more into these technologies to proactively monitor markets, including for market surveillance purposes. It notes that it is currently exploring potential further use cases involving Natural Language Processing to aid triage decisions, assessing AI to generate synthetic data or using Large Language Models to analyse and summarise text.
  • Looking towards the future: Finally, the FCA notes that it is taking a proactive approach to understanding emerging technologies and their potential impact as part of its Emerging Technology Research Hub – e.g. as part of the DRCF Horizon Scanning and Emerging Technologies workstream in 2024-2025, it will conduct research on deepfakes and simulated content following engagement with stakeholders. It also notes that it has been actively monitoring advancements in quantum computing and examining the potential benefits for industry and consumers, while also considering the impact of the inherent security risks.

Apple introduces “Privacy Manifests” for new and updated apps https://www.lexblog.com/2024/04/16/apple-introduces-privacy-manifests-for-new-and-updated-apps/ Tue, 16 Apr 2024 15:23:44 +0000 https://www.lexblog.com/2024/04/16/apple-introduces-privacy-manifests-for-new-and-updated-apps/ Apple recently announced that beginning in spring 2024, developers of certain SDKs and apps that use those SDKs will be required to include a “Privacy Manifest,” which lists all tracking domains used in the relevant SDK or app. To determine whether this is relevant to your company, a list of SDKs that require a Privacy Manifest can be found here. Privacy Manifests are required in order to either:

  • Submit a new app to the App Store that includes a listed SDK or
  • Submit an app update to the App Store that adds one of the listed SDKs.

If users have opted out through the App Tracking Transparency (ATT) framework, the iOS system will block outgoing network connections to the listed tracking domains.

What is in a Privacy Manifest? The Privacy Manifest consists of four top-level keys (a minimal illustrative example follows the list below):

  1. NSPrivacyTracking – Reflects whether your app or third-party SDK uses data for tracking (i.e., behavioral advertising).
  2. NSPrivacyTrackingDomains – Lists tracking domains.
  3. NSPrivacyCollectedDataTypes (i.e., Privacy Nutrition Labels) – Lists the categories of data that your app or third-party SDK collects, together with the purposes of collection. The responses should match what is currently listed in the relevant app’s Privacy Nutrition Label.
  4. NSPrivacyAccessedAPITypes – Lists the pre-approved “required reason APIs” used by the app or SDK and the corresponding approved purpose.
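For illustration, a minimal PrivacyInfo.xcprivacy file containing these four top-level keys could be generated with Python’s standard plistlib module. The top-level key names are Apple’s as listed above; the nested entries (data types, purposes, API categories, reason codes, and the example domain) are placeholders to be replaced with your app’s or SDK’s actual values:

```python
import plistlib

privacy_manifest = {
    # 1. Whether the app or SDK uses data for tracking (in the ATT sense).
    "NSPrivacyTracking": True,
    # 2. Internet domains the app or SDK connects to for tracking purposes.
    "NSPrivacyTrackingDomains": ["tracker.example.com"],
    # 3. Collected data types and purposes; should mirror the app's
    #    Privacy Nutrition Label.
    "NSPrivacyCollectedDataTypes": [
        {
            "NSPrivacyCollectedDataType": "NSPrivacyCollectedDataTypeDeviceID",
            "NSPrivacyCollectedDataTypeLinked": False,
            "NSPrivacyCollectedDataTypeTracking": True,
            "NSPrivacyCollectedDataTypePurposes": [
                "NSPrivacyCollectedDataTypePurposeThirdPartyAdvertising"
            ],
        }
    ],
    # 4. "Required reason" APIs used, with the approved reason codes.
    "NSPrivacyAccessedAPITypes": [
        {
            "NSPrivacyAccessedAPIType": "NSPrivacyAccessedAPICategoryUserDefaults",
            "NSPrivacyAccessedAPITypeReasons": ["CA92.1"],
        }
    ],
}

with open("PrivacyInfo.xcprivacy", "wb") as f:
    plistlib.dump(privacy_manifest, f)  # writes an XML property list
```

In practice the file is usually created directly in Xcode rather than scripted; the sketch above simply makes the key structure explicit.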

How we can help

Through the use of our in-house tool, NT Analyzer, Norton Rose Fulbright can assist attorneys and developers with the new Privacy Manifest requirements, including confirming SDK uses and tracking domains and verifying the accuracy of Privacy Nutrition Labels.

If you are interested in learning more about the firm’s technical capabilities, including a demo of NT Analyzer, please contact NTAnalyzer@nortonrosefulbright.com.


CISA issues proposed rules for cyber incident reporting in critical infrastructure https://www.lexblog.com/2024/04/09/cisa-issues-proposed-rules-for-cyber-incident-reporting-in-critical-infrastructure/ Tue, 09 Apr 2024 20:21:02 +0000 https://www.lexblog.com/2024/04/09/cisa-issues-proposed-rules-for-cyber-incident-reporting-in-critical-infrastructure/ On March 27, 2024, the Cybersecurity and Infrastructure Security Agency (“CISA”) published a Notice of Proposed Rulemaking for the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (“CIRCIA”), which imposes new reporting requirements for entities operating in critical infrastructure sectors. The CIRCIA was originally enacted in part as a response to recent attacks on critical infrastructure, such as the ransomware attack on Colonial Pipeline in May 2021, but CISA’s proposed regulations take a surprisingly broad view of who may be considered a covered entity and what incidents are reportable.

Who Qualifies as a Covered Entity

Covered entities are limited to the 16 critical infrastructure sectors laid out in the Presidential Policy Directive on Critical Infrastructure Security and Resilience: Chemical; Commercial Facilities; Communications; Critical Manufacturing; Dams; Defense Industrial Base; Emergency Services; Energy; Financial Services; Food and Agriculture; Government Facilities; Healthcare and Public Health; Information Technology; Nuclear Reactors, Materials, and Waste; Transportation Systems; and Water and Wastewater Systems.

Organizations are generally expected to be able to self-identify if they operate in a particular sector, but the types of entities that would be considered participants in a given sector are also described in greater detail by Sector-Specific Plans that are readily available on the CISA website. Across sectors, CISA has created both size-based and sector-specific criteria to determine which entities are considered “covered entities,” with the aim of receiving a broad range of reports from those organizations that are likeliest to be targeted by cybersecurity attacks or would have the greatest impact on critical infrastructure if they were to suffer an attack, and also have the resources to implement cybersecurity measures that would be responsive to CISA regulations.

Size-Based Criteria:

All entities in critical infrastructure that exceed the small business size standards set forth by the Small Business Administration—that is, any that are not considered “small businesses”—automatically qualify as covered entities.

Sector-Based Criteria:

CISA’s sector-based criteria capture smaller entities that may not meet the size threshold but are nonetheless considered “high-risk,” such as critical access hospitals in rural areas, owners and operators of nuclear facilities, and large school districts. Many of these criteria overlap with pre-existing regulatory reporting requirements. For example, government contractors or subcontractors with reporting obligations to the DOD or DOE for cyber incidents, or financial services entities that are already required to report cyber incidents to their primary federal regulator, would be considered “covered entities” under the CIRCIA. There are no sector-based criteria for the Commercial Facilities, Dams, or Food and Agriculture sectors, where the entities that would likely impact national security, economic security, or public safety are already identified by size.

What Qualifies as a Covered Cyber Incident

A covered cyber incident is a cyber incident that leads to any of the following:

  • a substantial loss of confidentiality, integrity, or availability of a covered entity’s information system or network;
  • a serious impact on the safety and resiliency of a covered entity’s operational systems and processes;
  • a disruption of a covered entity’s ability to engage in business or industrial operations, or deliver goods or services; or
  • unauthorized access to a covered entity’s information system or network, or any nonpublic information contained therein, that is facilitated through or caused by a compromise of a cloud service provider, managed service provider, or other third-party data hosting provider, or by a supply chain compromise.

Note that this definition covers the listed impacts regardless of cause, and can include compromises not only to the covered entity itself but also to cloud service or managed service providers, third-party data hosting providers, supply chain operators, etc., that provide services to the covered entity.

Reporting

There are four circumstances that trigger a reporting requirement under the CIRCIA.

  1. A covered entity experiences a covered cyber incident
  2. A covered entity makes a ransom payment as a result of a ransomware attack against the entity.
  3. Substantial new or different information becomes available related to the covered cyber incident before it has concluded and been fully mitigated.  
  4. A covered entity makes a ransom payment after it has already filed a covered cyber incident report.  

CISA has proposed four report types, one per type of triggering event, to be filed by covered entities or third parties filing on behalf of covered entities through a web-based form called the “CIRCIA Incident Reporting Form.” Regardless of type, all reports will require covered entities or third parties filing on behalf of covered entities to indicate:

  1. Report type
  2. Identity of the covered entity
  3. Contact information
  4. Third-party authorization (if third-party reporting on behalf of a covered entity)  

Covered Cyber Incident Reports

When a covered entity experiences a covered cyber incident, the covered entity or an authorized third party must file a Covered Cyber Incident Report “no later than 72 hours after the covered entity reasonably believes that a covered incident has occurred.”

Understanding that it may not always be immediately apparent that a cyber incident has occurred, CISA expects that an entity may need to perform a preliminary analysis before forming a “reasonable belief” that it experienced a covered cyber incident. Generally, CISA anticipates that this analysis should be fairly quick, taking a matter of hours rather than days.

Once a covered entity reasonably believes it has experienced a covered cyber incident, it or its authorized third party must report it as a Covered Cyber Incident via the CIRCIA Incident Reporting Form. At this stage, the covered entity or its authorized third party should be prepared to provide as much of the following information as possible (CISA acknowledges that, at this stage of an incident investigation, an entity may not yet have all of the information):

  • A description of the function of the affected information systems and network devices
  • A description of the unauthorized access and extent of the information compromise or impact
  • A description of any disruption to business or industrial operations resulting from the unauthorized access
  • The incident date range
    • Date incident began
    • Date incident was detected
    • Date incident was mitigated and resolved (if applicable)
    • Duration of the unauthorized access prior to detecting and reporting it (if applicable)
  • A description of the vulnerabilities exploited and security defenses in place
  • The type of incident and a description of the tactics, techniques, and procedures (TTPs) used to perpetrate the incident
  • A description of how the incident was detected and any indicators of compromise (IOCs)
  • Information related to the perpetrator’s identity

Additionally, when reporting, a covered entity or its authorized third party must, if possible, submit a copy or sample of any malicious software it believes is connected to the incident.

Ransom Payment Reports

When a covered entity or a third party acting on the covered entity’s behalf makes a ransom payment as a result of a ransomware attack, it or its authorized third party must report the payment via a Ransom Payment Report within 24 hours of making the payment.

CISA considers a payment to have been made when the payment is disbursed. When filing the report, a covered entity or its authorized third party must provide the following information:

  • A description of the ransomware attack
  • A description of the vulnerabilities exploited and security defenses in place
  • Information related to the identity of the perpetrator
  • The details of the ransom payment:
    • Date of payment
    • Manner of payment requested (type of virtual currency or other commodity)
    • Payment instructions
    • Payment amount
  • The verbatim text or screenshot of the actual demand (if multiple demands or payments were made, a covered entity must report each one)
  • The aftermath of the payment (data returned, decryption keys provided, etc.)
  • The identity of any entities who assisted the covered entity in responding to the ransomware attack or making the payment
  • Information related to any law enforcement engagement regarding the payment

Joint Cyber Incident and Ransom Payment Reports

Where a covered entity makes a ransom payment within the 72-hour window for reporting the covered cyber incident, it or its authorized third party may file a single Joint Cyber Incident and Ransom Payment Report.

Supplemental Reports

Under two sets of circumstances, a covered entity could be required to file a Supplemental Report to a previously filed report.

  1. The covered entity obtains “substantial new or different information.” In this instance, the Supplemental Report would serve to either fill the gaps in a previously filed Covered Cyber Incident Report, Ransom Payment Report, or Joint Cyber Incident and Ransom Payment Report or act as an amendment to a previously filed report. In the latter case, the additional information would show that the previously filed report is materially incorrect or incomplete.
  2. The covered entity makes a ransom payment after it has filed a Covered Cyber Incident Report.

Supplemental Reports should be filed “promptly,” which CISA interprets to mean within 24 hours of discovering the new or different information or of making a ransom payment.
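To make these timing rules concrete, the sketch below (our own illustration, not part of the proposed rule; all timestamps are made up) computes the three deadlines described above: 72 hours from the point of reasonable belief for a Covered Cyber Incident Report, 24 hours from disbursement for a Ransom Payment Report, and 24 hours from discovery of substantial new or different information for a Supplemental Report.

```python
# A minimal sketch (hypothetical timestamps) of the proposed CIRCIA reporting clocks.
from datetime import datetime, timedelta

reasonable_belief_at = datetime(2024, 6, 3, 9, 30)   # hypothetical: reasonable belief formed
ransom_disbursed_at = datetime(2024, 6, 4, 16, 0)    # hypothetical: ransom payment disbursed
new_information_at = datetime(2024, 6, 10, 11, 15)   # hypothetical: new or different information found

incident_report_due = reasonable_belief_at + timedelta(hours=72)
ransom_report_due = ransom_disbursed_at + timedelta(hours=24)
supplemental_report_due = new_information_at + timedelta(hours=24)

print("Covered Cyber Incident Report due by:", incident_report_due)
print("Ransom Payment Report due by:", ransom_report_due)
print("Supplemental Report due by:", supplemental_report_due)
```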

Reporting Exceptions

There are three circumstances in which a covered entity may be exempted from filing a CIRCIA report it would otherwise be required to file.

Combined Report

As discussed above, a covered entity can submit a single Joint Cyber Incident and Ransom Payment Report to report both a covered cyber incident and a ransom payment. This submission is appropriate where the covered entity makes a ransom payment within the 72-hour window for reporting the covered cyber incident.

Substantially Similar Report

If a covered entity is legally required to report substantially similar information within a substantially similar timeframe to another federal agency with which CISA has an information sharing agreement and mechanism, the covered entity does not also need to report under the CIRCIA.

In order for this exception to apply, CISA must be able to receive the information from the other federal agency within the same timeframe in which it would have received the information had the covered entity reported directly to CISA. This means that if the other federal agency requires an incident to be reported within 72 hours of a covered entity reasonably believing it occurred, that agency must have an instantaneous information sharing mechanism in place with CISA so that CISA may receive the report within the required 72-hour time frame.

The CIRCIA proposes to call these information sharing mechanisms “CIRCIA Agreements,” and CISA will announce and catalogue all of these agreements on its public-facing website. If a covered entity reports to another federal agency with which CISA does not have a CIRCIA Agreement, this exception will not apply and the covered entity will also have to report to CISA.

Domain Name Exception

Covered entities, or functions within a covered entity, that are owned, operated, or governed by multi-stakeholder organizations that develop, implement, or enforce policies relating to the Domain Name System (DNS) will be exempt from reporting a covered cyber incident to CISA.

Preservation Requirements

The CIRCIA also proposes to impose data and information preservation requirements on covered entities. Specifically, the CIRCIA will require covered entities to preserve information related to:

  • Communications between the covered entity and the threat actor
  • Indicators of compromise
  • Relevant log entries, memory captures, and forensic images
  • Network information and traffic related to the incident
  • System information that may help identify the exploited vulnerabilities
  • Information related to any exfiltrated data
  • Data and records related to any ransom payment made
  • Any forensic or other report related to the covered incident

A covered entity should begin to preserve these records on either (1) the date upon which the entity establishes a reasonable belief that a covered cyber incident has occurred, or (2) the date upon which the ransom payment is made, whichever is earlier. A covered entity must then preserve these records for two years from the submission date of its latest required CIRCIA report.
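As a worked illustration of this preservation window (our own sketch with made-up dates, not regulatory text), the start and end dates can be computed as follows:

```python
# A minimal sketch (hypothetical dates) of the proposed preservation window.
from datetime import date

reasonable_belief_date = date(2024, 6, 3)    # hypothetical: reasonable belief of a covered incident
ransom_payment_date = date(2024, 6, 4)       # hypothetical: date the ransom payment was disbursed
latest_report_submitted = date(2024, 6, 11)  # hypothetical: latest required CIRCIA report filed

# Preservation begins on the earlier of the two trigger dates.
preservation_starts = min(reasonable_belief_date, ransom_payment_date)
# Records must then be kept for two years from submission of the latest required report.
preservation_ends = latest_report_submitted.replace(year=latest_report_submitted.year + 2)

print("Preserve covered records from", preservation_starts, "through", preservation_ends)
```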

In terms of manner of preservation, a covered entity needs to preserve these records so that it may easily retrieve them to respond to a government request. Additionally, covered entities should take reasonable measures to protect the preserved information against unauthorized access, disclosure, deterioration, deletion, destruction, and alteration.

Note that a covered entity is not required to create any records or data it does not already have in its possession. This preservation requirement only applies to records and data that an entity has created or will create regardless of the CIRCIA.

Personal Information

Although CISA has not included specific notification requirements for compromised personal information, covered incident reports may include whether any personal information was compromised, and covered entities should take care to preserve records related to any personal information impacted by the incident. CISA has proposed a broad definition of personal information that extends beyond what is typically considered notifiable under state and federal regulations, including but not limited to:

  • identifying information such as photographs, names, home addresses, direct telephone numbers, and Social Security numbers; and
  • information that does not directly identify an individual but is nonetheless personal, non-public, and specific to an identified or identifiable individual, such as medical information, personal financial information (e.g., an individual’s wage or earnings information; income tax withholding records; credit score; banking information), contents of personal communications, and personal web browsing history.

Unlike the definition provided in the Cybersecurity Information Sharing Act of 2015, CISA does not require that the information be “known at the time of sharing” to be personal information.

Enforcement

The CIRCIA provides various enforcement methods for CISA to use if CISA believes that a covered entity failed to report a covered cyber incident or ransom payment in accordance with CISA’s proposed regulatory reporting requirements. These mechanisms include:

  • the issuance of a Request for Information (RFI);
  • the issuance of a subpoena;
  • a referral to the Attorney General to bring a civil action in District Court to enforce a subpoena and/or pursue a potential contempt of court; and
  • other enforcement proceedings such as acquisition penalties, suspension, and debarment.

CISA must consider the following factors when determining whether to exercise its enforcement authority: the complexity of determining whether a covered cyber incident has occurred, the covered entity’s prior interactions with CISA, and the covered entity’s understanding of the policies and procedures for reporting covered cyber incidents and ransom payments. 

The enforcement provisions of CIRCIA do not apply to State, Local, Tribal, or Territorial (SLTT) Government Entities.

Request for Information (RFI)

Under the CIRCIA, the CISA Director can issue an RFI and may also formally designate another individual (or more than one individual) as having the authority to issue an RFI. RFIs are applicable in two scenarios: (1) when an entity fails to report a covered cyber incident or a ransom payment; and (2) when CISA would like additional information following a covered entity’s submission of a report. This means that CISA may issue RFIs for failure to submit a Supplemental Report, or if it finds a report to be deficient or noncompliant.

The CIRCIA provides liability protection for any person or entity that submits a CIRCIA Report or information in response to an RFI. CIRCIA reports and RFI responses are also considered the commercial, financial, and proprietary information of a covered entity when so designated by the entity (there is an option to choose this when submitting). The reports and RFI responses are not considered a waiver of any applicable privilege or protection.

Note that an RFI is not a final agency action, so the issuance of an RFI cannot be appealed.

Subpoenas

If the CISA Director has not received an adequate response to an RFI within 72 hours of issuance, the Director may issue a subpoena to compel disclosure of information. This includes information that the Director deems necessary to determine whether a covered cyber incident or ransom payment has occurred, and to assess potential impacts of the incident on national security, economic security, or public health and safety.

Responses to subpoenas do not receive the same protections as information in a CIRCIA Report or information submitted in response to an RFI. Notably, subpoenaed information may be shared with certain law enforcement and regulatory officials. CISA is proposing this approach so that the unavailability of protections will incentivize covered entities to comply with the applicable regulation or an RFI.

Attorney General Referrals

If a covered entity fails to comply with a subpoena, the CISA Director may refer the matter to the Attorney General to bring a civil action in a district court of the United States to enforce the subpoena. A court may punish a failure to comply with a CIRCIA subpoena as contempt of court.

Acquisition Penalties, Suspension, and Debarment

The CISA Director must refer all circumstances of a covered entity’s noncompliance that may warrant suspension and debarment action to the DHS Suspension and Debarment Official. The CISA Director has the power to provide information regarding a noncompliant entity who has a procurement contract with the Federal government to the Attorney General and to the contracting official responsible for oversight of the contract in question.

Penalties for False Statements and Representations

Any person who knowingly and willfully makes a materially false or fraudulent statement or representation in connection with, or within, a CIRCIA Report, RFI response, or reply to an administrative subpoena is subject to penalties. CISA interprets “materially false or fraudulent statements or representations relating to CIRCIA” to potentially include knowingly and willfully doing any of the following:

  • submitting a CIRCIA Report for an incident that did not occur
  • claiming to be a representative of a covered entity that you do not in fact represent
  • certifying you are a third party authorized to submit on behalf of a covered entity when you do not have authorization, and
  • including false information within a CIRCIA Report, RFI Response, or response to an administrative subpoena.

A report that a covered entity reasonably believes to be true at the time of submission, but later learns is incorrect, is not a false statement or misrepresentation if the entity submits a Supplemental Report reflecting the new information.

Penalties for making false statements and representations include a fine or imprisonment for no more than five years. The maximum penalty increases to eight years’ imprisonment if the false statement relates to international or domestic terrorism or certain sexual offenses.

Additionally, materially false or fraudulent statements or representations in submissions to CISA do not receive the protections and restrictions afforded to CISA Reports and RFI responses.

Key Takeaways

  • Businesses that may not have expected to be affected by the CIRCIA should carefully review whether they fall under CISA’s new definitions and be prepared to report on cyber incidents in the event the proposed rules are adopted.
  • Under the proposed rules, covered entities must report covered cyber incidents no later than 72 hours after they reasonably believe a covered incident has occurred, and must report ransom payments within 24 hours of making the payment, and CISA would have various enforcement mechanisms available to take action against covered entities that fail to do so.

Reports on incidents should include descriptions of: the affected information systems and network devices and their functions; the extent, impact, and date range of the incident; the vulnerabilities exploited and security defenses in place; the TTPs used to perpetrate the incident; how the incident was detected and any IOCs; and any information related to the perpetrator’s identity.

EU confirms agreement on rules to improve working conditions of platform workers https://www.lexblog.com/2024/04/04/eu-confirms-agreement-on-rules-to-improve-working-conditions-of-platform-workers-2/ Thu, 04 Apr 2024 09:04:24 +0000 https://www.lexblog.com/2024/04/04/eu-confirms-agreement-on-rules-to-improve-working-conditions-of-platform-workers-2/ On 11 March the Council of the EU confirmed the provisional agreement reached on the Platform Workers Directive (the Directive).  The Directive aims to improve the working conditions of those who work on platforms in the gig economy and will also regulate the use of algorithms by digital labour platforms. 

Employment protection

The EU suggests that there are more than 28 million people working on digital labour platforms in the EU, sometimes known as “gig economy” workers.  One of the key issues regarding these individuals is correctly determining their employment status in order to understand the minimum standards of employment protection to which they are entitled. The agreed text of the Directive means that Member States will establish a legal presumption in their legal systems that will help determine the correct employment status of persons working through digital platforms.  The legal presumption will be triggered when facts indicating control and direction are found.  People working through digital platforms, their representatives or national authorities may invoke the legal presumption and claim that they have been misclassified.  The burden of proof will be on the digital platform to prove that there is no employment relationship.  In addition, Member States will provide guidance to digital platforms and national authorities when the new measures are put in place.

This is a departure from the original drafting, which provided that individuals would be presumed to be employees if a certain number of criteria were met. The compromise reached means that the Directive will not outline the conditions to determine employment status; instead, this responsibility is given to each EU Member State, taking into account national law, collective agreements and EU case law.  In the UK there has been a significant amount of case law considering the employment status of such gig economy workers, and while the UK is not bound by the Directive it will be interesting to see how this will impact any UK determinations.  In addition, the Labour party in the UK has said that it will consult on its proposal to create a single “worker” status for all but the genuinely self-employed and to review the rights available to such workers, potentially increasing the employment protections that gig economy workers may receive.

Regulating algorithmic management

The Directive is interesting on algorithmic management because it covers, with more specificity, ground already covered by the GDPR that will also be covered by the EU AI Act when it finally comes into force. All three instruments can apply to automated monitoring and decision making relating to platform workers, and platform operators are going to have to take account of them all.

The Directive prohibits automated monitoring or decision making based on a person’s psychological or emotional state, private conversations, activity outside the performance of platform work, data used to predict the person’s exercise of fundamental rights (such as collective bargaining or association), inferences relating to certain sensitive characteristics, and one-from-many biometric identification of the person performing platform work. It appears that the intention is to catch decision support systems as well as wholly automated systems, and these activities will therefore be banned.

Outside these prohibited areas, all automated monitoring and decision making relating to platform workers is deemed to trigger the requirement to undertake a DPIA under the GDPR. Workers’ representatives must be consulted before the system’s introduction (they also have a right to be assisted by an expert of their choice) and must be provided with comprehensive and detailed information about how the system will be used and about the parameters and weightings used to make decisions, while workers or applicants themselves must receive the same information in a concise form. A right to human review within two weeks (quicker than under the GDPR) is included, and platform workers are entitled to require monitored data relating to their activities to be moved to other platforms.

The system’s operation must be reviewed at least every two years and the results shared with workers’ representatives. Private rights of action are created and GDPR penalties can be applied for breach of these provisions.

Next steps

The text of the Directive must now be finalised and then formally adopted.  Member States will have two years after the formal steps of adoption to incorporate the provisions into their national legislation. The intention is to make these platforms’ decision-making processes much more transparent, so that workers can more easily anticipate how they will be treated and challenge practices that they consider unfair. As many of these rules could be derived from the principles in the GDPR, platform operators should beware of DPAs applying them in practice sooner than the Directive’s implementation date.

This post has also been published on our Global Employment blog – Global Workplace Insider here

Testing the tricky apps for privacy and data protection https://www.lexblog.com/2024/03/25/testing-the-tricky-apps-for-privacy-and-data-protection/ Mon, 25 Mar 2024 19:49:08 +0000 https://www.lexblog.com/2024/03/25/testing-the-tricky-apps-for-privacy-and-data-protection/ Dealing with cert pinning and root detection

The privacy area has been white-hot lately, including litigation and investigations involving VPPA; Wiretap/Pen Register/Trap and Trace; and Opt Out Compliance. Furthermore, with the HHS updates on tracking in the HIPAA context, and the new state privacy laws (such as the My Health My Data Act), we can also expect a ramped-up focus on healthcare, fitness, pharma, nutrition, and medical devices. If a company wants to beat the plaintiffs’ lawyers and regulators to the punch, it is critical that the company conduct periodic network traffic analysis tests (also known as “dynamic testing”) of its mobile apps. Testing allows a company to see what data is collected from the app and by whom.
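As an illustration of what dynamic testing involves, the following is a minimal sketch of a mitmproxy addon (not NT Analyzer itself; the flagged parameter names and output file are our own hypothetical choices) that inventories which hosts a proxied app contacts and which identifier-like parameters it sends to them:

```python
# Minimal traffic-inventory sketch for mitmproxy; run with: mitmproxy -s traffic_inventory.py
# The watched parameter names and the output path are hypothetical examples only.
import csv
from mitmproxy import http

WATCHED_KEYS = {"email", "idfa", "gaid", "android_id", "lat", "lon"}  # hypothetical identifiers of interest

class TrafficInventory:
    def __init__(self):
        self.rows = []

    def request(self, flow: http.HTTPFlow) -> None:
        # Record the destination host and any query/form parameters that look like identifiers.
        params = {**flow.request.query, **flow.request.urlencoded_form}
        hits = sorted(k for k in params if k.lower() in WATCHED_KEYS)
        self.rows.append((flow.request.host, flow.request.path, ";".join(hits)))

    def done(self):
        # Write the inventory when the proxy shuts down.
        with open("traffic_inventory.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["host", "path", "flagged_params"])
            writer.writerows(self.rows)

addons = [TrafficInventory()]
```

Output like this only shows what leaves the device; mapping each third-party host to a vendor, a purpose and a legal basis is still a manual step.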

Occasionally, network traffic analysis can be frustrated by additional security measures used in the financial and healthcare areas (and also, increasingly, in areas where sensitive intellectual property may be in play). These measures can include “root detection” and “cert-pinning”. Cert-pinning helps ensure that the company app is solely communicating with the intended server by forcing the company app to trust a predefined or “pinned” certificate or set of certificates. Cert-pinning is typically used to prevent state-sponsored man-in-the-middle (MITM) attacks (i.e., malicious activities conducted by a government to intercept or otherwise manipulate communications between two parties). On the other hand, root detection is used to safeguard users and companies against devices that have been rooted (Android) or jailbroken (iOS) (i.e., bypasses the manufacturer’s restrictions), which can potentially compromise the security of the device and the applications running on it. 

Cert-pinning (on both iOS and Android) can frustrate proxying of network traffic because the proxy certificate will cause certificate validation errors and prevent collection and analysis of traffic. Root detection, meanwhile, can frustrate traffic analysis on Android because a rooted Android device is often used to conduct such tests. Mobile apps that are equipped with root detection will simply not work on a rooted phone. All of these issues can pose a major problem for Chief Privacy Officers if their company’s own security protections are preventing them from conducting compliance-critical testing.

The NT Analyzer team has invested significant time in developing workarounds to successfully handle both cert-pinning and root detection. Through the use of the Frida instrumentation toolkit, mitmproxy, and some custom scripting, we have been able to routinely bypass both hurdles during our testing.
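By way of illustration only (this is not NT Analyzer’s actual tooling), a host-side script using Frida’s Python bindings can spawn the app with an instrumentation script already loaded, so that pinning checks are hooked before the first pinned connection is made. The package name and the separate "pinning_bypass.js" file below are hypothetical placeholders:

```python
# A minimal sketch using the frida Python bindings against a rooted/jailbroken test device.
# "com.example.app" and "pinning_bypass.js" are hypothetical; the actual hooking logic
# lives in the JavaScript file and is not reproduced here.
import frida

PACKAGE = "com.example.app"

def on_message(message, data):
    # Surface anything the instrumentation script reports back to the host.
    print(message)

device = frida.get_usb_device(timeout=5)
pid = device.spawn([PACKAGE])          # spawn so hooks are in place before app startup
session = device.attach(pid)

with open("pinning_bypass.js") as f:
    script = session.create_script(f.read())
script.on("message", on_message)
script.load()

device.resume(pid)
input("Proxy the app's traffic through mitmproxy, then press Enter to detach...")
session.detach()
```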

Now, more than ever, it is important for companies to obtain line-of-sight on data collection/sharing that is otherwise hidden from view. For more information about NT Analyzer testing or to discuss this blogpost, please contact: NTAnalyzer@nortonrosefulbright.com.

Singapore releases New Guidelines on the Use of Personal Data in AI Systems https://www.lexblog.com/2024/03/25/singapore-releases-new-guidelines-on-the-use-of-personal-data-in-ai-systems/ Mon, 25 Mar 2024 10:04:43 +0000 https://www.lexblog.com/2024/03/25/singapore-releases-new-guidelines-on-the-use-of-personal-data-in-ai-systems/ On 1 March 2024, Singapore’s Personal Data Protection Commission (PDPC) issued the Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems (AI Advisory Guidelines). These AI Advisory Guidelines followed a public consultation which concluded in August 2023. Our blog post on the public consultation for the draft AI Advisory Guidelines can be accessed here.

Summary of the Advisory Guidelines

At the outset, it should be noted that the AI Advisory Guidelines are focused on the use of personal data in AI recommendation and decision systems (AI Systems). They do not discuss the use of personal data in the context of generative AI (GenAI) solutions.

The AI Advisory Guidelines provide specific guidance on how the Personal Data Protection Act 2012 (PDPA) applies in three typical stages of AI System implementation:

  1. Development, Testing and Monitoring – When organisations act as AI developers and use personal data for training and testing AI Systems, as well as for monitoring the performance of AI systems post-deployment.
  2. Deployment – When organisations deploy AI Systems that collect and use personal data (“business to consumer” or B2C).
  3. Procurement – When organisations engage service providers to develop bespoke AI Systems using personal data in the organisations’ possession (“business to business” or B2B).

Below, we summarise the applicable obligations and exceptions under the PDPA highlighted in the AI Advisory Guidelines that may apply at each stage of AI System implementation.

Development, Testing and Monitoring

Generally, organisations can use personal data for such activities only if they have received meaningful consent for such use.
 
Alternatively, organisations may rely on the following exceptions (subject to the relevant requirements) under the PDPA:
 
1. Business Improvement Exception – in situations where an organisation is developing a product or has an existing product that it is enhancing. This exception is also relevant when an AI System is intended to improve operational efficiency by supporting decision making, or if the AI System is intended to offer more or new personalised products and/or services to customers. It extends to intra-group sharing of personal data for such purposes (but not cross-company).
 
2. Research Exception – in situations where an organisation is conducting commercial research to advance science and engineering without a specific product development roadmap. It extends to cross-company sharing of personal data for commercial research.
Deployment

In deploying AI Systems, organisations need to be aware of the following obligations:
 
1. Consent and Notification Obligations – to obtain meaningful consent, organisations are encouraged to provide the following information when crafting notices (to the extent practicable):
 
a. the function of their product that requires collection and processing of personal data;

b. a general description of types of personal data that will be collected and processed;

c. an explanation of how such processing of personal data collected is relevant to the product feature; and

d. the specific features of personal data that are more likely to influence the product feature.
 
2. Legitimate Interests Exception – organisations may rely on the legitimate interests exception to process personal data without consent if the purpose for processing data falls within one of the specific purposes identified in the PDPA and if they meet the relevant requirements. To illustrate the operation of this exception, the AI Advisory Guidelines cite the use of personal data as input in an AI System for the purpose of detecting or preventing illegal activities.   
 
3. Accountability Obligation – organisations deploying AI Systems are encouraged to be transparent about their use of such systems by:
 
a. including in their written policies relevant practices and safeguards to achieve fairness and reasonableness;
 
b. providing greater detail in their written policies to obtain meaningful consent from individuals to process personal data and/or provide information about the practices and safeguards to protect the interests of individuals where organisations seek to rely on the relevant exceptions to consent; and
 
c. providing more information on data quality and governance measures taken during AI System development.
 
Procurement

Service providers (e.g., systems integrators) may be considered data intermediaries under the PDPA if they process personal data as part of developing bespoke or fully customisable AI Systems for their customers. In such situations, service providers will need to comply with the relevant obligations applicable to data intermediaries under the PDPA (i.e., the Protection and Retention Obligations). To satisfy the Protection Obligation, the AI Advisory Guidelines recommend that service providers adopt the following good practices:
 
1. At the pre-processing stage, use techniques such as data mapping and labelling to keep track of data that was used to form the training dataset.
 
2. Maintain a provenance record to document the lineage of the training data that identifies the source of training data and tracks how it has been transformed during data preparation.
 
Further, the AI Advisory Guidelines encourage these service providers to support organisations in meeting their Consent, Notification and Accountability Obligations. They may do so by:
 
1. being familiar with the types of information that contribute towards meeting their customers’ Consent, Notification and Accountability Obligations by paying attention to the context; and

2. designing their systems to facilitate the extraction of relevant information to meet their customers’ PDPA obligations.

Key Takeaways

Substantively, the published AI Advisory Guidelines are very similar to the draft version that was released for public consultation in July 2023, with an added explanation of how organisations may rely upon the legitimate interests exception when deploying AI Systems. The primary focus of these guidelines remains unchanged from the draft, aiming to clarify the PDPA’s application where personal data is used in the development and training of AI Systems, as well as when personal data is collected as input from data subjects for use in AI Systems.

As the AI Advisory Guidelines are targeted at AI recommendation and decision systems, they do not address emerging concerns related to generative AI, which raise distinct privacy concerns in the use of personal data to train foundation models or as input in applications. These concerns relating to generative AI are currently being studied by the PDPC and have been highlighted in the recently released draft Model Governance Framework for Generative AI (GenAI MGF) by Singapore’s Infocomm Media Development Authority in collaboration with the AI Verify Foundation. Our blog post on the GenAI MGF can be accessed here.

That said, the AI Advisory Guidelines remain a valuable resource as companies increasingly turn to AI Systems to optimise processes and develop insights from data that they have collected (e.g., for HR purposes, customer engagement etc.). These AI Advisory Guidelines are also relevant to service providers – who increasingly have a role in helping companies with developing AI systems for use in their business operations.

As AI Systems often involve the processing of significant amounts of data (including personal data), it is critical for companies to understand how they can implement these AI systems in a way which complies with these guidelines.

We would like to thank our trainee Judeeta Sibs, practice trainee at Ascendant Legal LLC, for her contribution to this post.

HHS updates online tracker guidance https://www.lexblog.com/2024/03/21/hhs-updates-online-tracker-guidance/ Thu, 21 Mar 2024 20:32:46 +0000 https://www.lexblog.com/2024/03/21/hhs-updates-online-tracker-guidance/ On March 18, 2024, the US Department of Health and Human Services (HHS) issued an updated, 17-page Bulletin titled “Use of Online Tracking Technologies by HIPAA Covered Entities and Business Associates” (the Bulletin). Our readers may recall that HHS had originally issued the Bulletin in December of 2022, which we summarized here. HHS’ changes are generally clarifications and additional examples. This post will focus on the changes to the original guidance.

The original and updated guidance applies to all third-party tracking technologies, even those that are deployed to improve the overall functionality of the site or to collect general metrics on user interactions with the site or app (e.g., a standard analytics cookie). The guidance can also apply to areas of apps or websites that, at first glance, are not squarely in scope for HIPAA (e.g., a covered entity’s website that lets you search for open appointments).

What changed?

Our readers may recall that HHS took a very broad view of the information that could constitute Protected Health Information (PHI) or Individually Identifiable Health Information (IIHI) whose disclosure to tracking vendors could potentially violate HIPAA. One clarification that the updated guidance provides is the statement that “the mere fact that an online tracking technology connects the IP address of a user’s device (or other identifying information) with a visit to a webpage addressing specific health conditions or listing health care providers is not a sufficient combination of information to constitute IIHI if the visit to the webpage is not related.”

With respect to information collected from a visitor on a covered entity’s webpage that is accessible without logging in (an unauthenticated webpage), the updated guidance states that such visits “do not result in a disclosure of PHI to tracking technology vendor if the online tracking technologies on the webpages do not have access to information that relates to any individual’s past, present, or future health, health care, or payment for health care.” Under the new examples:

A visitor to a hospital’s public webpage for job postings or visitor hours does not disclose IIHI, so HIPAA would not apply.

If tracking technologies collected an individual’s email address, or reason for seeking health care typed or selected by an individual, when the individual visits a regulated entity’s webpage and makes an appointment with a health care provider or enters symptoms in an online tool to obtain a health analysis, HIPAA would apply, so there would need to be a BAA with the tracking technology vendor or the user would need to consent.

On the other hand, HHS provided two examples that may be difficult for the covered entity to differentiate when looking at the data, involving a visitor to a hospital’s oncology page:

If a student were writing a term paper on the changes in the availability of oncology services before and after the COVID-19 public health emergency, the collection and transmission of information showing that the student visited a hospital’s webpage listing the oncology services provided by the hospital would not constitute a disclosure of PHI, even if the information could be used to identify the student ...

However, if an individual were looking at a hospital’s webpage listing its oncology services to seek a second opinion on treatment options for their brain tumor, the collection and transmission of the individual’s IP address, geographic location, or other identifying information showing their visit to that webpage is a disclosure of PHI to the extent that the information is both identifiable and related to the individual’s health or future health care.

HHS added a new example with respect to mobile apps:

[A] patient might use a health clinic’s diabetes management mobile app to track health information such as glucose levels and insulin doses. In this example, the transmission of information to a tracking technology vendor as a result of using such app would be disclosure of PHI because the individual’s use of the app is related to an individual’s health condition (i.e., diabetes) and that, together with any individually identifying information (e.g., name, mobile number, IP address, device ID) meets the definition of IIHI. 

HHS also added a paragraph on its enforcement priorities, including the following: 

OCR is prioritizing compliance with the HIPAA Security Rule in investigations into the use of online tracking technologies. OCR’s principal interest in this area is ensuring that regulated entities have identified, assessed, and mitigated the risks to ePHI when using online tracking technologies and have implemented the Security Rule requirements to ensure the confidentiality, integrity, and availability of ePHI...

Technical steps you can take

If your organization is a covered entity or business associate using these technologies on any sites or apps, below are some technical steps you can take to help your compliance efforts.

  1. Determine which trackers are on any site/app that you develop or offer that can include PHI.
  2. Learn whether these trackers are developed/offered by you (so-called “first-party trackers”) or whether they are offered by third parties (and if by third parties, which category of third party, such as targeting/advertising, analytics, etc.); a minimal classification sketch follows this list. Note that a BAA will not be a viable solution for targeted advertising, which would be considered marketing under HIPAA. In those cases, additional restrictions would apply.
  3. If there are third-party trackers, find out which third parties are involved, and whether your organization:
    a. has a BAA in place with each; and/or
    b. prefers to remove these third parties from your site/app. Note that even third parties that provide analytics information are in scope. Also be on the lookout for trackers that were inadvertently placed on your site, particularly on unauthenticated sites that historically have been less stringently controlled.
  4. Determine if the site or app can obtain a HIPAA compliant authorization from the user prior to the disclosure of PHI to the third-party tracker. Authorizations are subject to stringent requirements set out at § 164.508(b).  
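The sketch below (our own illustration; the domain and host names are made up) shows the kind of first-party/third-party classification referred to in the steps above, applied to a list of request hosts exported from a traffic capture or HAR file:

```python
# A minimal sketch classifying observed request hosts relative to a hypothetical
# covered entity's own domain; hosts would normally come from a HAR export or proxy log.
FIRST_PARTY_DOMAIN = "examplehospital.org"  # hypothetical

observed_hosts = [
    "www.examplehospital.org",
    "analytics.example-vendor.com",
    "ads.example-adnetwork.net",
]

def classify(host: str) -> str:
    first_party = host == FIRST_PARTY_DOMAIN or host.endswith("." + FIRST_PARTY_DOMAIN)
    return "first-party" if first_party else "third-party (confirm BAA or remove)"

for host in observed_hosts:
    print(f"{host}: {classify(host)}")
```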

Our Take

This guidance restates HHS’s position with regard to third-party tracking technologies used for marketing or targeted advertising. But the guidance potentially broadens HIPAA’s scope in two ways. First, tracking technologies of all stripes are potentially in scope. Second, sites that do not squarely collect PHI, like a registration site or a covered entity’s homepage, may be in scope for HIPAA.

Reach out for more information on how we can help your organization meet its HIPAA and privacy requirements. You may consider utilizing NT Analyzer, our firm’s in-house technical privacy compliance tool suite, to complete these steps. Indeed, the HHS Bulletin, like many other privacy trends (e.g., CCPA, mobile app store requirements, etc.), reinforces the importance for organizations to utilize technical frameworks to inform and comply with their privacy requirements.

NT Analyzer is a practical tool suite for managing privacy compliance in mobile apps, websites, and IoT. The tool detects and tracks the full range of data, including PHI and PII, that is collected and shared, and then generates actionable reports through the lens of applicable privacy requirements, such as HIPAA. Click here to request a demo.

ECJ’s ruling on the interpretation of “personal data” and “joint controller” in the context of the IAB TCF Framework https://www.lexblog.com/2024/03/15/ecjs-ruling-on-the-interpretation-of-personal-data-and-joint-controller-in-the-context-of-the-iab-tcf-framework/ Fri, 15 Mar 2024 11:16:58 +0000 https://www.lexblog.com/2024/03/15/ecjs-ruling-on-the-interpretation-of-personal-data-and-joint-controller-in-the-context-of-the-iab-tcf-framework/ On 7 March 2024, the European Court of Justice (the ECJ) published an important decision in relation to IAB Europe’s Transparency and Consent Framework (the TCF).

The judgment of the ECJ is unsurprising given previous case law on the definitions of “personal data” and “controller” under the GDPR and the ECJ’s emphasis that the overarching objective of the GDPR is to “[ensure] a high level of protection of the fundamental rights and freedoms of natural persons”.

Background

The TCF is a consent framework relied upon by many organisations that participate in the online advertising ecosystem and that are looking to achieve compliance with the General Data Protection Regulation (GDPR) and the ePrivacy Directive. It was developed by IAB Europe, an industry body representing undertakings in the digital advertising and marketing sector. Under the TCF, users can give their consent preferences via a consent management platform, which creates a digital record of those preferences (the TC String) that is shared with advertising vendors.
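For readers unfamiliar with the format, a TC String is a dot-separated set of base64url-encoded, bit-packed segments. The sketch below (our own illustration, not IAB reference code; the example string is made up and only the leading 6-bit version field is decoded) shows that the string is machine-readable rather than human-readable:

```python
# Minimal TC String decoding sketch; the string below is invented for illustration.
import base64

def b64url_decode(segment: str) -> bytes:
    # TC String segments are base64url-encoded without padding.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

tc_string = "CPz0X8APz0X8AAGABCENDECgAAAAAAAAAAAAAAAAAAAA"  # made-up example
core_segment = b64url_decode(tc_string.split(".")[0])

bits = "".join(f"{byte:08b}" for byte in core_segment)
version = int(bits[0:6], 2)  # the first 6 bits of the core segment carry the version
print("TCF version field:", version)
```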

The Belgian Data Protection Authority (DPA) had received a number of complaints that the TCF was not compliant with the GDPR.  In February 2022, the Belgian DPA found that IAB Europe acted as controller, that the TC String constituted personal data, and that IAB Europe had not complied with various obligations under the GDPR.  The Belgian DPA’s decision related to IAB Europe’s compliance, but will also have implications for the future of the TCF and whether participants can use it to obtain valid consent.

IAB Europe appealed the decision, resulting in the Belgian Court of Appeal referring questions to the ECJ on whether the TC String constituted personal data and whether IAB Europe acted as joint controller.  It will now be for the Belgian Court of Appeal to determine the impact of this ECJ ruling on IAB Europe’s appeal against the Belgian DPA’s findings against IAB Europe and the TCF.

Key findings of the ECJ

  1. The consent string constitutes personal data:
    • Citing various case law, the ECJ emphasised that:
      • the definition of “personal data” covers data related to an “identifiable” person;
      • when determining whether a person is identifiable, account should be taken of “all the means reasonably likely to be used...either by the controller or by another person to identify the natural person directly or indirectly”; and
      • this means that the information required to enable identification does not all have to be in the hands of one person.
    • As the TC string contains the preferences of an internet user’s consent, it is information that “relates to a natural person” within the meaning of Art 4(1).
    • Furthermore, where information contained in a TC String is associated with an identifier, such as an IP address, that information can make it possible to identify the person concerned. Where this is the case, it must be considered that the TC String contains personal data of an identifiable user and therefore constitutes personal data.
    • If requested by IAB Europe, IAB Europe members are required to provide IAB Europe with information that allows users whose data are the subject of a TC String to be identified. This means that IAB Europe appears to have “reasonable means” allowing it to identify a particular natural person. It follows from this that the TC String is personal data. This analysis is not affected by the fact that IAB Europe itself cannot combine the TC String with the IP address and does not have the means of directly accessing the IP address from its members.
  2. IAB Europe acts as a joint controller in relation to the data processing connected to the collection of preferences in the TC String in accordance with the TCF:
    • The ECJ, citing the GDPR itself and relevant case law, notes that the definition of “controller” is broad.
    • The ECJ reminded us that, in a joint controller arrangement, whilst each joint controller must independently meet the definition of controller (i.e. must determine the means and purpose of the processing), the joint controllers do not need to have equal responsibility. On the contrary, joint controllers can be involved in different stages of the processing and to different degrees; such participation can be converging and does not have to involve a common decision.
    • It is also not necessary for each of them to have access to the personal data concerned, as established by previous case law.
    • Applying this to IAB Europe, the ECJ ruled that IAB Europe should be regarded as exerting influence over certain data processing activities connected to the TCF for its own purposes and determining, jointly with the members, the means behind such operations. This is because:
      • IAB Europe established the TCF framework with a view to promoting and enabling the operation of online advertising auctions in the context of the GDPR, which the ECJ views as them determining, jointly with the members, the purpose of the data processing operations; and
      • IAB Europe determined the various rules relating to storage and dissemination of the TC String that its members must comply with in order to participate and can suspend members that do not comply with the requirements. This, according to the ECJ, supports the view that IAB Europe, jointly with the members, determines the means of the processing.

Accordingly, subject to the Belgian referring court verifying the underlying facts, the ECJ found that IAB Europe must be regarded as a joint controller of processing connected to the recording of the consent preference in the TC String in accordance with the TCF rules. The fact that IAB Europe does not have direct access to the personal data in question does not impact this analysis.

3. The above joint controller analysis does not necessarily extend to the subsequent use of this data by IAB Members.

  • The above analysis does not automatically extend to the subsequent processing of the TC String personal data for the purposes of targeted online advertising (e.g. the transmission of the data to third parties or the actual offering of personalised advertising). IAB Europe would only be regarded as a joint controller of such subsequent processing if it has actually “exerted an influence over the determination of the purpose and means of that processing”. Whether this is the case would be for the referring Belgian court to ascertain in the context of the main proceedings.

Our take

The ECJ’s interpretation of “personal data” was unsurprising and confirmed the broad interpretation applied in previous case law.  On the other hand, the conclusion that a party that simply sets standards and cannot directly access the data being processed is a controller may appear, at first glance, to be an extension of the GDPR’s scope.  However, as the ECJ set out, previous case law had already established that all joint controllers need not have access to the data.  Nevertheless, the ECJ’s broad interpretation in this case could also impact other organisations, including other “sectoral organisations” that set standards.

The Belgian court will now take into account the ECJ’s findings when it resumes its examination of IAB Europe’s arguments in its appeal against the Belgian DPA’s decision. This decision of the court will ultimately determine the future of the TCF. In the meantime, IAB Europe says that it welcomes the decision as it provides “well-needed clarity over the concepts of personal data and (joint) controllership” and notes that it will be posting more “in-depth commentary” on the ruling and its consequences shortly.

Organisations that use the TCF should meanwhile ensure that they comply with the most up-to-date version of the framework, continue to monitor developments and be prepared to adjust, which may ultimately be necessary in any event given the proposed phase-out of third party cookies.

ICO launches a call for views on the “pay or okay” model https://www.lexblog.com/2024/03/08/ico-launches-a-call-for-views-on-the-pay-or-okay-model/ Fri, 08 Mar 2024 16:59:17 +0000 https://www.lexblog.com/2024/03/08/ico-launches-a-call-for-views-on-the-pay-or-okay-model/ Earlier this week the ICO launched a call for views on the “pay or okay” business model. By way of recap, this model gives users of online services the choice to either consent to personalised advertising using their data or to pay a fee to access an ad-free version of the service. In its blog post launching the call for views, the ICO also provided an update on its wider cookie compliance work.

Key takeaways from the blog:

  1. In its emerging thinking on the “pay or okay” model, the ICO notes that data protection law does not prohibit the model in principle, which many organisations will find reassuring.
  2. However, it notes that care must be taken to ensure that any consent obtained for personalised advertising in the context of such a model is valid (in particular freely given, fully informed and capable of being withdrawn without detriment). As a starting point, the ICO suggests that the following will be factors that help determine whether the consent is valid:
  • Whether there is an imbalance of power between the service provider and the user – consent is unlikely to be freely given where a user has little or no choice whether to use the service or not. The market power of the service provider would be a factor;
  • Whether there is equivalence between the ad-funded service and paid-for service;
  • Whether the fee is appropriate – consent is unlikely to be freely given when the fee for an ad-free service is unreasonably high;
  • Whether the users are presented with the choices fairly and equally and understand how their personal data will be used by the service providers and others “as payment for the service they receive”. No doubt, the recommendations in the ICO and CMA’s joint paper on Harmful Design in Digital Markets will play into what the ICO will expect on how information and choices are presented; and
  • How existing users use the service – there may be a different balance of power where a service is used extensively in users’ lives (as is the case with many social media platforms).
  3. The ICO indicates that its developing views on the “pay or okay” model will be influenced by regulatory and industry developments in the UK and other jurisdictions. Among other things, they are likely to be referring to the upcoming phase-out of third party cookies by Google, the EDPB’s anticipated guidance on the “pay or okay” model and the ECJ judgment on the IAB TCF (which we will report on shortly).
  4. The ICO plans to publish updated guidance on cookies and similar technologies for consultation after the DPDI Bill receives Royal Assent later this year. This will set out its final and expanded regulatory positions on the pay or okay models. In their letter to key industry stakeholders dated 5 March 2024, they have also indicated that the updated guidance will cover the impact of the proposed changes to the cookie consent rules set out in the DPDI Bill.
  5. Buoyed by the impact that their recent cookie crackdown has had on the consent banners of the UK’s most viewed websites, the ICO plans to deploy technology to assess the cookie compliance of a much greater number of sites. It is also assessing the responses they have received from organisations that have not made the required changes to “[determine] which cases to prioritise for enforcement action”. Rather ominously, the ICO ends its blog post by saying “This is the last chance to change. Our next announcement in this space will be about enforcement action”. This should act as a warning to all companies that they cannot ignore their obligations in relation to cookie consent.

Our take

The “pay or okay” model has received significant (and often negative) attention following Meta’s introduction of the model in the EU last Autumn. Despite this, given the challenges faced by companies that rely on online advertising (especially following the 2023 Meta decision and regulatory guidance and enforcement action across Europe), “pay or okay” looks likely to be a route favoured by many companies, provided that they can implement it in a compliant manner. The ICO’s efforts to help provide clarity on this are therefore very welcome, although where the ICO will land still remains unclear.

We will continue to follow this and other developments in this area, with 2024 set to be a year of profound change for the online advertising sector.

Executive Order on access to Americans’ bulk sensitive data and Attorney General proposed regulations – Part 2 https://www.lexblog.com/2024/03/07/executive-order-on-access-to-americans-bulk-sensitive-data-and-attorney-general-proposed-regulations-part-2/ Thu, 07 Mar 2024 16:01:12 +0000 https://www.lexblog.com/2024/03/07/executive-order-on-access-to-americans-bulk-sensitive-data-and-attorney-general-proposed-regulations-part-2/ At approximately the same time as the Executive Order that we described in Part 1 was issued, the Attorney General (AG) unofficially released 90 pages of an Advance Notice of Proposed Rulemaking (ANPRM), which will become official once published in the Federal Register.  The AG has proposed several regulations, and has solicited public comments on over 100 questions.  The public can respond within 45 days of publication in the Federal Register.  After evaluation of the responses, the AG will then propose revised regulations, which will also be subject to a public comment period.  These proposed regulations generally address only Section 2 of the Executive Order.

Which countries are “countries of concern”?

  • China
  • Cuba
  • Iran
  • North Korea
  • Russia
  • Venezuela

What is “personally identifiable data” that is “in combination with each other”?

The proposed regulation would define the term to mean any “listed identifier” that is linked to any other “listed identifier.”  The proposed definition of “listed identifier” is:

  • Full or truncated government identification or account number (such as a Social Security Number, driver’s license or state identification number, passport number, or Alien Registration Number)  [Note that this definition apparently includes truncated Social Security Numbers.]
  • Full financial account numbers or personal identification numbers associated with a financial institution or financial-services company
  • Device-based or hardware-based identifier (such as International Mobile Equipment Identity (IMEI), Media Access Control (MAC) address, or Subscriber Identity Module (SIM) card number)
  • Demographic or contact data (such as first and last name, birth date, birthplace, zip code, residential street or postal address, phone number, and email address and similar public account identifiers)
  • Advertising identifier (such as Google Advertising ID, Apple ID for Advertisers, or other Mobile Advertising ID (MAID))
  • Account-authorization data (such as account username, account password, or an answer to security questions)
  • Network-based identifier (such as Internet Protocol (IP) address or cookie data)
  • Call-detail data (such as Customer Proprietary Network Information (CPNI))

This definition would exclude:

  • Employment history;
  • Educational history;
  • Organizational memberships;
  • Criminal history; or
  • Web-browsing history

The proposed regulation provides some examples to help demonstrate “linking” that becomes “covered personal identifiers”:

  • Example 3.  Demographic or contact data linked only to other demographic or contact data—such as a data set linking first and last names to residential street addresses, email addresses to first and last names, or customer loyalty membership records linking first and last names to phone numbers—would not constitute covered personal identifiers.
  • Example 4.  Demographic or contact data linked to other demographic or contact data and to another listed identifier—such as a data set linking first and last names to email addresses and to IP addresses—would constitute covered personal identifiers.  [Note that the only difference between the two examples is the addition of IP addresses; see the sketch below.]
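
Read together, Examples 3 and 4 suggest a simple rule of thumb: a set of linked listed identifiers is not “covered” where every identifier in the link is demographic or contact data, but becomes “covered” once any other class of listed identifier (such as an IP address) is linked in. The sketch below is our simplified, non-authoritative rendering of those two examples only; it does not attempt to capture the full proposed definition or its exclusions.

```python
# Simplified illustration of Examples 3 and 4 only - not a compliance tool.
# Each record maps a listed-identifier class (named loosely after the ANPRM
# categories) to the identifier values linked together in one data set.
DEMOGRAPHIC_OR_CONTACT = "demographic_or_contact"

def is_covered_personal_identifiers(record: dict) -> bool:
    """Simplified reading of Examples 3 and 4: linked listed identifiers are
    'covered personal identifiers' unless everything in the link is
    demographic or contact data."""
    total_identifiers = sum(len(values) for values in record.values())
    if total_identifiers < 2:
        return False  # a lone identifier is not "linked" to anything else
    return any(cls != DEMOGRAPHIC_OR_CONTACT for cls in record)

# Example 3: names linked only to email addresses -> not covered
loyalty_record = {DEMOGRAPHIC_OR_CONTACT: ["Jane Doe", "jane@example.com"]}

# Example 4: the same data also linked to an IP address -> covered
ad_record = {DEMOGRAPHIC_OR_CONTACT: ["Jane Doe", "jane@example.com"],
             "network_based_identifier": ["203.0.113.7"]}

print(is_covered_personal_identifiers(loyalty_record))  # False
print(is_covered_personal_identifiers(ad_record))       # True
```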

With respect to a combination with other data that “makes the personally identifiable data exploitable by a country of concern,” the AG stated:  “the Department does not intend to impose an obligation on transacting parties to independently determine whether particular combinations of data would be “exploitable by a country of concern”; rather, the Department intends to identify specific classes of data that, when combined, would satisfy this standard.”

As for geolocation data, the Attorney General is proposing to limit this data to “precise” geolocation data, but is seeking public comment on what level of precision should qualify as “precise geolocation data.”

“Biometric identifiers” are proposed to be defined as “measurable physical characteristics or behaviors used to recognize or verify the identity of an individual, including facial images, voice prints and patterns, retina and iris scans, palm prints and fingerprints, gait, and keyboard usage patterns that are enrolled in a biometric system and templates created by the system.”

With respect to “human ‘omic” identifiers, the proposed regulation would limit that term to human genomic data only.

For those in healthcare, “personal health data” would mean “individually identifiable health information” (as defined in 42 U.S.C. § 1320d(6) and 45 CFR 160.103), regardless of whether such information is collected by a “covered entity” or “business associate” (as defined in 45 CFR 160.103).

As for “financial data,” the proposed regulation defines the term as “data about an individual’s credit, charge, or debit card, or bank account, including purchases and payment history; data in a bank, credit, or other financial statement, including assets, liabilities and debts, and transactions; or data in a credit or “consumer report.”

(at 17-24)

What are the thresholds for “bulk” data?

Recall that the focus of the Executive Order was not on single transactions, but rather bulk transactions of Americans’ personal data.  The proposed regulation stated that:  “To the maximum extent feasible, the bulk thresholds would be set based on a risk-based assessment that examines threat, vulnerabilities, and consequences as components of risk” (at 24).  The characteristics “may include both human-centric characteristics (which describe a data set in terms of its potential value to a human analyst) and machine-centric characteristics (which describe how easily a data set could be processed by a computer system)” (at 24).

The regulation also proposes a chart setting out risk levels and the corresponding bulk thresholds (at 25).

What is included in the special category of “government related data”?

Because data about government and military employees is of special concern, the proposed regulations would define “government-related data” to include two categories: 

“(1) any precise geolocation data, regardless of volume, for any location within any area enumerated on a list of specific geofenced areas associated with military, other government, or other sensitive facilities or locations (the Government-Related Location Data List), or (2) any sensitive personal data, regardless of volume, that a transacting party markets as linked or linkable to current or recent former employees or contractors, or former senior officials, of the U.S. government, including the military and Intelligence Community.”

(at 30).
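
The first category turns on whether a coordinate falls within a listed geofenced area, regardless of volume, so even a single location record can be in scope. The Government-Related Location Data List and its format are not set out in the proposal, so the sketch below is only a generic illustration of the kind of point-in-geofence test involved, assuming (purely hypothetically) circular geofences defined by a center point and a radius in meters; the coordinates used are invented.

```python
# Illustrative point-in-geofence test using the haversine formula.
# The geofence list below is a made-up placeholder, not the actual
# Government-Related Location Data List.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in meters."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

# Hypothetical geofences: (name, center_lat, center_lon, radius_m)
GEOFENCES = [
    ("Hypothetical listed facility", 38.9000, -77.0000, 2_000),
]

def is_government_related_location(lat, lon):
    return any(haversine_m(lat, lon, g_lat, g_lon) <= radius
               for _, g_lat, g_lon, radius in GEOFENCES)

print(is_government_related_location(38.8980, -77.0010))  # True  (inside the 2 km fence)
print(is_government_related_location(40.7580, -73.9855))  # False (nowhere near it)
```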

What types of data brokerage transactions are in-scope?

The proposed regulation, in keeping with the Executive Order’s focus on permitting commercial transactions, proposed a definition of “data brokerage” as “the sale of, license of access to, or similar commercial transactions involving the transfer of data from any person (the provider) to any other person (the recipient), where the recipient did not collect or process the data directly from the individuals linked or linkable to the collected or processed data” (at 34).  The proposed regulations state:

Except as otherwise authorized pursuant to these regulations, no U.S. person, on or after the effective date, may knowingly engage in a covered data transaction involving data brokerage with any foreign person unless the U.S. person contractually requires that the foreign person refrain from engaging in a subsequent covered data transaction involving the same data with a country of concern or covered person.

(at 50).

What are “vendor agreements”?

The proposed regulation would define vendor agreements as “any agreement or arrangement, other than an employment agreement, in which any person provides goods or services to another person, including cloud-computing services, in exchange for payment or other consideration” (at 34-35).  These agreements would include not only sending covered data for storage to a company headquartered in a country of concern but also:

  • Example 20.  A medical facility in the United States contracts with a company headquartered in a country of concern to provide IT-related services.  The medical facility has bulk personal health data on its U.S. patients.  The IT services provided under the contract involve access to the medical facility’s systems containing the bulk personal health data.
  • Example 22.  A U.S. company develops mobile games that collect bulk precise geolocation data and biometric identifiers of U.S. person users.  The U.S. company contracts part of the software development to a foreign person who is primarily resident in a country of concern and is a covered person.  The software-development services provided by the covered person under the contract involve access to the bulk precise geolocation data and biometric identifiers.

(at 35).

Will there be some exempt financial transactions?

The proposed regulations anticipate exempting data transactions “to the extent that they are ordinarily incident to and part of the provision of financial services” including:

(i)         banking, capital-markets, or financial-insurance services;

(ii)        a financial activity authorized by 12 U.S.C. § 24 (Seventh) and rules and regulations thereunder;

(iii)       an activity that is “financial in nature or incidental to a financial activity” or “complementary to a financial activity,” as set forth in section 4(k) of the Bank Holding Company Act of 1956 and rules and regulations thereunder;

(iv)       the provision or processing of payments involving the transfer of personal financial data or covered personal identifiers for the purchase and sale of goods and services (such as the purchase, sale, or transfer of consumer products and services through online shopping or e-commerce marketplaces), other than data transactions that involve data brokerage; and

(v)        compliance with any Federal laws and regulations . . .

(at 55-56).

What about inter-affiliate transactions?

The proposed regulations consider exempting some common inter-affiliate transactions to the extent that they are:

(1) between a U.S. person and its subsidiary or affiliate located in (or otherwise subject to the ownership, direction, jurisdiction, or control of) a country of concern, and (2) ordinarily incident to and part of ancillary business operations (such as the sharing of employees’ covered personal identifiers for human-resources purposes; payroll transactions like the payment of salaries and pension to overseas employees or contractors; paying business taxes or fees; purchasing business permits or licenses; sharing data with auditors and law firms for regulatory compliance; and risk-management purposes).

(at 57).  Note, however, that the proposed regulatory exemption would not apply if the subsidiary wanted access to the bulk personal data “for the purpose of complying with a request or order by the country of concern under those national-security laws to provide access to that data” (at 58).

What about due diligence, compliance programs, and recordkeeping?

The Attorney General is considering

a model in which U.S. persons subject to the contemplated program employ a risk-based approach to compliance by developing, implementing, and routinely updating a compliance program.  The compliance program suitable for a particular U.S. person would be based on that U.S. person’s individualized risk profile and would vary depending on a variety of factors, including the U.S. person’s size and sophistication, products and services, customers and counterparties, and geographic locations.

(at 68).  The proposed regulations would impose affirmative reporting obligations only “as conditions of certain categories of U.S. persons that are engaging in restricted covered data transactions or as conditions of a general or specific license, or in certain narrow circumstances to identify attempts to engage in prohibited covered data transactions” (at 69).

What are the proposed penalties?

The proposed penalties would be civil monetary penalties (at 71).

Our Take

This proposal is the Advance Notice of Proposed Rulemaking (ANPRM), which will have a 45-day comment period, and will then be followed by the Notice of Proposed Rulemaking.  Comments received from the ANPRM may affect the wording of the proposed regulations, but at this point, it seems that the simplest way to avoid the proposed regulation is not to do business with the six “countries of concern.”  That advice, however, may not be practical for multinational companies that have affiliates in, or that work with companies in, any of those six countries.

In its current posture, the ANPRM appears to be almost the opposite of GDPR.  The U.S. seems to be approving most transfers of personal data, except where those transfers meet the criteria described in the ANPRM.

Executive Order on access to Americans’ bulk sensitive data – Part 1 https://www.lexblog.com/2024/03/07/executive-order-on-access-to-americans-bulk-sensitive-data-part-1/ Thu, 07 Mar 2024 15:59:15 +0000 https://www.lexblog.com/2024/03/07/executive-order-on-access-to-americans-bulk-sensitive-data-part-1/ On February 28, 2024, the White House issued an Executive Order on Preventing Access to Americans’ Bulk Sensitive Data and United States Government-Related Data by Countries of Concern.  The 17-page Executive Order pointed out that “countries of concern” could use bulk sensitive data in a variety of ways that could adversely affect U.S. national security, including:  “Countries of concern can rely on advanced technologies, including artificial intelligence (AI), to analyze and manipulate bulk sensitive personal data to engage in espionage, influence, kinetic, or cyber operations or to identify other potential strategic advantages over the United States” (at 1).   The Executive Order does not impose any immediate legal obligations on any company.

The Executive Order pointed out that countries of concern can obtain access to this data in several ways, including “through data brokerages, third-party vendor agreements, employment agreements, investment agreements, or other such arrangements” (at 3-4).  Furthermore, the Executive Order found that the concern was not only for countries of concern but also “[e]ntities owned by, and entities or individuals controlled by or subject to the jurisdiction or direction of, a country of concern” (at 3 (emphasis supplied)).  The Executive Order was careful to note that it does not “broadly prohibit United States persons from conducting commercial transactions, including exchanging financial and other data as part of the sale of commercial goods and services, with entities and individuals located in or subject to the control, direction, or jurisdiction of countries of concern” (at 4).

The Executive Order provided in Section 2 that the Attorney General, in consultation with the Department of Homeland Security, will issue regulations on the topic of prohibited and regulated transactions (at 4).  The affected transactions include any relevant transaction that “was initiated, is pending, or will be completed after the effective date of the regulations” (at 5), and the regulatory prohibitions shall take precedence over “any contract entered into or any license or permit granted prior to the effective date of the applicable regulations” (at 9).  Note that the Executive Order specifically prohibited the regulations from including any generalized data localization requirements (at 8).

Section 3 of the Executive Order focused on protecting sensitive personal data, including data traveling in submarine cables, where the cable is “owned or operated by persons owned by, controlled by or subject to the jurisdiction or direction of a country of concern, or that connects to the United States and terminates in the jurisdiction of a country of concern” (at 9).  [Privacy lawyers may find this concern ironic in light of the EU’s concerns about mass surveillance of personal data traveling in submarine cables, as described in Schrems.]  This section also pointed out that, with respect to healthcare data:  “Even if such data is anonymized, pseudonymized, or de-identified, advances in technology, combined with access by countries of concern to large data sets, increasingly enable countries of concern that access this data to re-identify or de-anonymize data, which may reveal the exploitable health information of United States persons” (at 10-11).  The Department of Health and Human Services will be one of the agencies contributing to the regulations under this section.  This section of the Executive Order also addresses the data brokerage industry, which can “enable access to bulk sensitive personal data and United States Government-related data by countries of concern and covered persons” (at 11).  The Consumer Financial Protection Bureau is “encouraged” to consider taking steps to address this risk (at 11-12).

The Executive Order also contains some important definitions:

“Access” is broadly defined as “logical or physical access, including the ability to obtain, read, copy, decrypt, edit, divert, release, affect, alter the state of or otherwise view or receive, in any form, including through information technology systems, cloud computing platforms, networks, security systems, equipment, or software.”

“Covered person” is defined as “an entity owned by, controlled by, or subject to the jurisdiction or direction of a country of concern; a foreign person who is an employee or contractor of such an entity; a foreign person who is an employee or contractor of a country of concern; a foreign person who is primarily resident in the territorial jurisdiction of a country of concern; or any person designated by the Attorney General . . . “

“Covered personal identifiers.”  The Executive Order left the definition up to the Attorney General’s regulations but provided general guidance:  “specifically listed classes of personally identifiable data that are reasonably linked to an individual, and that—whether in combination with each other, with other sensitive personal data, or with other data that is disclosed by a transacting party pursuant to the transaction that makes the personally identifiable data exploitable by a country of concern—could be used to identify an individual from a data set or link data across multiple data sets to an individual.”  The Executive Order, however, did specifically exclude the following from the definition:

(i)         demographic or contact data that is linked only to another piece of demographic or contact data (such as first and last name, birth date, birthplace, zip code, residential street or postal address, phone number, and email address and similar public account identifiers); or

(ii)        a network-based identifier, account-authentication data, or call-detail data that is linked only to another network-based identifier, account-authentication data, or call-detail data for the provision of telecommunications, networking or similar services.

“Human ‘omic data.”  The Executive Order added this new term, which it defined as “data generated from humans that characterizes or quantifies human biological molecule(s), such as human genomic data, epigenomic data, proteomic data, transcriptomic data, microbiomic data, or metabolomic data, as further defined” by the Attorney General regulations.

“Sensitive personal data.”  Similar to “covered personal identifiers,” the Executive Order provided a general description but left the definition up to the Attorney General’s regulations:  “covered personal identifiers, geolocation and related sensor data, biometric identifiers, human ‘omic data, personal health data, personal financial data, or any combination thereof, as further defined” in the Attorney General regulations.  The Executive Order contains specific exceptions for information that is part of a public record or made available to the general public, as well as information subject to certain provisions of the International Emergency Economic Powers Act of 1977 (IEEPA).

Finally, the Executive Order, in Section 8(e), makes it clear that there is no private right of action.

See Part 2 for a description of the Attorney General proposed regulations.

UK government’s response to AI White Paper consultation: next steps for implementing the principles https://www.lexblog.com/2024/03/04/uk-governments-response-to-ai-white-paper-consultation-next-steps-for-implementing-the-principles/ Mon, 04 Mar 2024 15:36:37 +0000 https://www.lexblog.com/2024/03/04/uk-governments-response-to-ai-white-paper-consultation-next-steps-for-implementing-the-principles/ The authors acknowledge the assistance of Salma Khatab, paralegal, in researching and preparing some aspects of this blog.

The UK Department for Science, Innovation, and Technology (DSIT) has published its response to its consultation on its white paper, ‘A pro-innovation approach to AI regulation’ (the Response). The Response outlines key investment initiatives and regulatory steps.  It confirms that, for the present, the UK will follow its proposed approach of setting cross-sectoral principles to be enforced by existing regulators rather than passing new legislation to regulate AI.  Alongside this, the government has published guidance setting out considerations that regulators may wish to take into account when developing tools and guidance to implement the principles.

Proposed regulatory framework

The government confirms its plan to introduce five cross-sectoral principles that existing regulators will interpret and apply to promote responsible AI innovation within their remits. The principles are:

  • Safety, security and robustness.
  • Appropriate transparency and explainability.
  • Fairness.
  • Accountability and governance.
  • Contestability and redress.

Development of a central function, to support effective risk monitoring, regulatory coordination, and knowledge exchange, has already begun.

No new statutory obligations – for now

The UK’s approach contrasts with the EU’s approach of introducing statutory obligations for supply chain participants for certain AI use cases.

Regulators will not initially face any new statutory duties to have due regard to these principles, though the government anticipates introducing this duty after reviewing an initial period of non-statutory implementation. 

The Response also considers “highly capable general-purpose AI”, defined as models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.  Some of the risks of this type of technology were outlined in the papers on the state of frontier AI prepared for the AI safety summit in November 2023.  The government has now confirmed that it believes binding measures are likely to be required in the future, but it will not “rush to regulate”, because “introducing binding measures too soon, even if highly targeted, could fail to effectively address risks, quickly become out of date, or stifle innovation”.

It acknowledges that it is common for the law to allocate liability to the last actor in the chain, generally those using the technology.  This means that the actors most able to address risks and harms, those creating the technology, are not necessarily incentivised to develop responsible AI or held accountable when they do not.  This could dampen innovation, a risk the Commission confirms the EU looked to tackle with the AI Act.

In the meantime, the AI Safety Institute, established at the AI Safety Summit, will continue to evaluate the risks around these systems.  It recently published its approach to evaluations with two case studies presented at the summit.

Guidance for regulators on implementing the principles

The government will take a phased approach to issuing guidance for regulators.  In the first phase, published to coincide with the white paper response, the government “supports regulators by enabling them to properly consider the principles and to start considering developing tools and guidance...if they have not done so already”.  In phase two, to be released by summer 2024, it will expand and provide further detail, following feedback from regulators and other stakeholders.  Phase three will involve collaborative working with regulators to identify areas for potential joint tools and guidance across regulatory remits.

The guidance provides suggestions on what regulators “could” do, rather than what they should or must do.  Regulators may take a technology agnostic approach to regulation, and may decide the guidance is not relevant to them provided they are satisfied that their regulatory framework adequately covers the issues around AI adoption. It also suggests that regulators develop tools and guidance that promote knowledge and understanding as relevant in the context of their remit, rather than setting out step-by-step processes. Regulators are encouraged to collaborate and share knowledge through existing mechanisms, like the Digital Regulation Cooperation Forum, as well as new ones. 

The guidance suggests that regulators may wish to cite horizontal standards produced by organisations like BSI, ISO, and IEC, and references specific standards relevant to each of the principles.

In relation to accountability, the guidance acknowledges the issues around liability in the supply chain and where it falls. It suggests that regulators consider whether their regulatory powers or remits allow them to place legal responsibility on the actors in the supply chain that are best placed to mitigate the risks.  Where legal responsibility cannot be assigned to an actor in the supply chain that operates in a regulatory remit, it suggests that regulators encourage the AI actors within their remit to ensure good governance over who they outsource to.  In practice, this is likely to translate into regulators considering whether they could provide more guidance on the due diligence and contractual safeguards required to use third-party AI suppliers.

AI Regulation Roadmap

The government has written to regulators, including the Office of Communications (Ofcom), the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA) and the Competition and Markets Authority (CMA), asking them to publish an update outlining their strategic approach by 30 April.  Regulators are tasked with summarising steps they are taking in line with expectations set out in the white paper, assessing sector-specific AI risks, identifying skills gaps and actions to address these, and outlining anticipated activities over the coming 12 months.

DSIT has also committed to a roadmap for 2024 to:

  • continue to develop the UK’s domestic policy position on AI regulation;
  • progress action to promote AI opportunities and tackle AI risks;
  • build out the central function and support regulators;
  • encourage effective AI adoption and provide support for industry, innovators, and employees; and
  • support international collaboration on AI governance.

Investment in Skills and Technology

The government has pledged over £100 million to develop regulators’ technical capabilities and nurture AI innovation. This includes £10 million to jumpstart regulators’ AI capabilities and a £9 million partnership with the US as part of the International Science Partnerships Fund.

Promoting AI governance and AI assurance

The Response emphasises promoting transparency and accountability in AI deployment across sectors. The Algorithmic Transparency Recording Standard (ATRS) established a standardised way for public sector organisations to proactively publish information about how and why they are using algorithmic methods in decision-making.  Following a successful pilot, the government will make using the ATRS a requirement for all government departments.

Since the publication of the Response, the government has also published an Introduction to AI assurance. This includes an overview of AI assurance and the tools that can be used (e.g. audits, performance testing), notes on AI assurance in practice, and key actions for organisations.

From CDEI to RTA

The government also announced the rebranding of the Centre for Data Ethics and Innovation (CDEI) as the Responsible Technology Adoption Unit (RTA). The change is intended to reflect more accurately the unit’s role within DSIT: developing tools and techniques that enable the responsible adoption of AI in the private and public sectors.

Further engagement on intellectual property, but no code of practice

DSIT and the Department for Culture, Media and Sport will “lead a period of engagement with the AI and rights holder sectors, seeking to ensure the workability and effectiveness of an approach that allows the AI and creative sectors to grow together in partnership”.  The government looks to resolve tensions between, on the one hand, AI developers’ need to access large and diverse datasets to train their models and, on the other hand, creative industries’ and rights holders’ concerns around the use of copyright-protected content.  The Intellectual Property Office previously convened a working group made up of rights holders and AI developers, but this group was unable to agree an effective voluntary code.

Our take

A key part of the government’s rationale for following this approach was a desire to utilise and build on the existing regulatory framework.  The current framework imposes both horizontal and vertical binding requirements that govern the use of AI.  Horizontal requirements include those set out under data protection law, while vertical requirements include the FCA’s Consumer Duty.  These existing duties already require organisations using AI to address security, transparency and explainability, fairness, accountability, and rights of redress.

The Response confirms that, for the present, organisations will have no new UK statutory obligations to comply with when developing or using AI.  However, going forward, regulators will have regard to the principles for future enforcement action.  The Response presents an opportunity for organisations to develop an AI governance programme to address existing horizontal and, where applicable, vertical requirements alongside the principles. 

Many UK organisations will also find themselves caught in the scope of the binding requirements set out in the EU AI Act.  A robust AI governance framework will be vital to ensure those requirements are identified and addressed alongside discharging obligations under UK legislative requirements and the AI principles.

ASEAN releases Joint Guide to ASEAN Model Contractual Clauses and EU Standard Contractual Clauses and AI Governance Guide  https://www.lexblog.com/2024/02/28/article-title-asean-releases-joint-guide-to-asean-model-contractual-clauses-and-eu-standard-contractual-clauses-and-ai-governance-guide/ Wed, 28 Feb 2024 07:43:53 +0000 https://www.lexblog.com/2024/02/28/article-title-asean-releases-joint-guide-to-asean-model-contractual-clauses-and-eu-standard-contractual-clauses-and-ai-governance-guide/ On 1 and 2 February 2024, at the 4th ASEAN Digital Ministers Meeting (ADGMIN) in Singapore, ASEAN[1] unveiled:

  • an updated Joint Guide to ASEAN Model Contractual Clauses and EU Standard Contractual Clauses (the Joint MCC – SCC Guide), now incorporating an implementation guide; and
  • an AI governance guide (the ASEAN AI Governance Guide).

We summarise and discuss both the Joint MCC – SCC Guide and the ASEAN AI Governance Guide below.

Joint MCC – SCC Guide

To recap, the first part of the Joint MCC – SCC Guide (the Reference Guide) was first published in May 2023. The Reference Guide analysed practical similarities and differences between the ASEAN Model Contractual Clauses (ASEAN MCCs) and the EU Standard Contractual Clauses (EU SCCs).[2] Our previous article on the Reference Guide can be accessed at: European Commission and ASEAN releases Guide to ASEAN Model Contractual Clauses and EU Standard Contractual Clauses | Data Protection Report.

To further assist organisations with implementing practical measures to manage data flows between ASEAN and the EU, the ADGMIN updated the Joint MCC – SCC Guide to include an implementation guide (the Implementation Guide).

The Implementation Guide complements the Reference Guide by identifying non-exhaustive examples of best practices for implementing the relevant clauses for the two types of cross-border data transfer relationships discussed in the Reference Guide:

  • controller-to-controller transfers, involving both the data exporter and importer jointly deciding on the purposes and methods of processing data[3]; and
  • controller-to-processor transfers, where the data importer processes personal data on behalf of the data exporter[4].

Key takeaways

With the addition of the Implementation Guide to further clarify the process of operationalising data transfer clauses, the Joint MCC – SCC Guide is a useful tool for organisations seeking to implement cross-border transfers of personal data within ASEAN and between ASEAN and the EU. Companies can consider implementing these best practices to operationalise the safeguards required under both the ASEAN MCCs and EU SCCs.

However, the Joint MCC – SCC Guide serves only to provide a basic understanding of the applicable general principles – it may not provide sufficiently detailed insights into specific transfer and processing contexts. For example, certain jurisdictions (such as Singapore) may suggest, or even require, modifications to the ASEAN MCCs to ensure compliance with local regulations.[5] Organisations should therefore consider seeking legal advice in the relevant jurisdictions, to ensure that their cross-border data-transfers do not contravene local laws.

ASEAN AI Governance Guide

Summary of key features

The ADGMIN also endorsed the ASEAN AI Governance Guide, acknowledging the importance of providing guidance to encourage the responsible and secure development of emerging technologies.

The ASEAN AI Governance Guide, while not binding, is meant to apply to AI developers and deployers, individuals interested in using or expanding AI systems, as well as policymakers throughout ASEAN. The ASEAN AI Governance Guide addresses topics such as deploying AI technologies in commercial contexts.

The ASEAN AI Governance Guide outlines seven guiding principles for the design, development, and deployment of ethical AI systems:

  1. Transparency and explainability – Deployers should disclose to users how AI systems are implemented. Developers and deployers should also prioritize user understanding by providing straightforward explanations of how the AI system makes decisions.
  2. Fairness and equity – Deployers should prevent AI algorithmic decisions from worsening existing discrimination across demographics (e.g., bias relating to gender and ethnicity) by implementing safeguards, such as human interventions and regular bias testing.
  3. Safety and security – Deployers should conduct security testing, such as vulnerability assessment and penetration testing.
  4. Human-centricity – To ensure people benefit from AI while protecting them from potential harms, developers must actively avoid employing manipulative design techniques (known as dark patterns), e.g. default options that disregard user interests, such as data sharing or tracking online activities.
  5. Privacy and data governance – Deployers must respect data protection throughout AI development and deployment by complying with relevant data protection legislation when collecting, storing, generating, and deleting data in the AI system lifecycle.
  6. Accountability and integrity – Organisations should establish clear reporting structures, defining roles and responsibilities throughout the AI system lifecycle. AI systems must also be developed with integrity, and any errors or unethical outcomes should be documented and corrected to prevent harm to users upon deployment.
  7. Robustness and reliability – AI systems must remain resilient to unforeseen data inputs, avoid exhibiting dangerous behaviour, and consistently perform as intended. Deployers should conduct thorough testing before deployment to guarantee consistent outcomes across various situations.

The ASEAN AI Governance Guide also outlines four key components of an AI governance framework, which are identical to those in Singapore’s Model AI Governance Framework,[6] last updated in January 2020:

  1. Internal governance structures and measures – Deployers should set up (or adapt existing) internal governance structures and measures to incorporate values, risks and responsibilities relating to algorithmic decision-making.
  2. Determining the level of human involvement in AI-augmented decision-making – Organisations should apply the methodology in the ASEAN AI Governance Guide to define their risk appetite for AI use, determine acceptable risks, and identify the appropriate level of human involvement in AI-augmented decision-making.
  3. Operations management – Developers and deployers should review and study the considerations for developing, selecting, and maintaining AI models, including data management.
  4. Stakeholder interaction and communication – The ASEAN AI Governance Guide also includes strategies for deployers to communicate effectively with their respective stakeholders on when AI is used in their offerings, the type of AI system used, the intended purpose of the AI system, and how the AI system affects the decision-making process in relation to users.

Additionally, the ASEAN AI Governance Guide also contains national and regional-level recommendations for policymakers to consider when drafting AI legislation in their jurisdictions. For example, national-level recommendations include initiatives such as upskilling the workforce to cultivate a pool of AI-trained graduates. The regional-level recommendations, such as adapting the ASEAN AI Governance Guide to address governance of generative AI, are similar to Singapore’s draft Model AI Governance Framework for Generative AI (Singapore GenAI Framework) released in January 2024, which we have written about.[7]

The ASEAN AI Governance Guide also explores real-world use cases of organisations in ASEAN, such as Singapore’s Ministry of Education and the Smart Nation Group, which have implemented AI governance measures.

Key takeaways

The ASEAN AI Governance Guide provides actionable recommendations for organisations to implement ethical AI practices, such as defining risk appetite, establishing internal governance structures, and managing stakeholder interactions. This pragmatic approach facilitates the adoption of responsible AI practices in the region and provides organisations with a reference point to consider when implementing AI governance structures in ASEAN, in the absence of any defined AI legislation or regulations in the region.

The ASEAN AI Governance Guide also represents a light-touch and flexible approach, having regard to the varying levels of digital development, regulatory maturity and enforcement effectiveness across ASEAN, which gives rise to different policy concerns and considerations.

By aligning closely with Singapore’s framework, the ASEAN AI Governance Guide also establishes a consistent approach to AI governance across the region, fostering interoperability and harmonisation.

The release of the Joint MCC – SCC Guide and the ASEAN AI Governance Guide is a significant step toward aligning standards for cross-border data transfers and for AI technology in ASEAN. These guides provide helpful frameworks and practical recommendations to ensure responsible and secure AI development and data handling practices, but they must always be read alongside local laws and guidance, which may include additional specific restrictions or requirements.

We would like to thank our trainee Judeeta Sibs, practice trainee at Ascendant Legal LLC, for her contribution to this post.


[1] The Association of Southeast Asian Nations (ASEAN) consists of: Brunei Darussalam, Cambodia, Indonesia, Laos, Malaysia, Myanmar, the Philippines, Singapore, Thailand, and Vietnam.

[2] Both the ASEAN MCCs and the EU SCCs facilitate compliance with applicable data protection requirements in relation to cross-border transfers of personal data.

[3] The Implementation Guide provides examples on operationalising some of the controller-to-controller transfer clauses in the Reference Guide across the following 12 key areas:

(a) specifying the purpose of the transfer and purpose limitation;

(b) ensuring data accuracy;

(c) minimizing data, limiting storage;

(d) maintaining security and confidentiality;

(e) handling sensitive data;

(f) managing onward transfers;

(g) ensuring transparency;

(h) respecting the rights of individuals;

(i) addressing responsibility/accountability;

(j) the ability to comply; and

(k) managing government access to data.

[4] The Implementation Guide provides examples on operationalising some of the controller-to-processor clauses in the Reference Guide in the following nine key areas:

(a) specifying the purpose of the transfer and purpose limitation;

(b) ensuring data accuracy;

(c) implementing storage limitation and return procedures;

(d) maintaining security and confidentiality;

(e) handling sensitive data;

(f) managing sub-processing;

(g) ensuring transparency;

(h) respecting the rights of individuals; and

(i) managing government access to data.

[5] The Personal Data Protection Commission of Singapore has issued a guidance note for the ASEAN MCCs, outlining recommended amendments for compliance with Singapore’s data privacy laws. See: Singapore-Guidance-for-Use-of-ASEAN-MCCs—010921.pdf (pdpc.gov.sg)

[6] See our summary of the second edition of the Model AI Governance Framework here: https://www.dataprotectionreport.com/2020/02/singapore-updates-its-model-artificial-intelligence-governance-framework/

[7] See our summary on the draft Model AI Governance Framework here: https://www.dataprotectionreport.com/2024/02/singapore-proposes-governance-framework-for-generative-ai/

Biden administration issues Executive Order and takes action to enhance maritime cybersecurity https://www.lexblog.com/2024/02/23/biden-administration-issues-executive-order-and-takes-action-to-enhance-maritime-cybersecurity/ Fri, 23 Feb 2024 15:07:55 +0000 https://www.lexblog.com/2024/02/23/biden-administration-issues-executive-order-and-takes-action-to-enhance-maritime-cybersecurity/ On February 21, 2024, President Biden signed an Executive Order and issued several federal rules aimed at improving the cybersecurity of U.S. ports and maritime supply chains. The measures introduce new cybersecurity requirements and standards for stakeholders of the U.S. Marine Transportation System (MTS) and increase the authority of the U.S. Coast Guard in its ability to address cyber threats. These rules are part of a broader effort to improve the nation’s cybersecurity presented in a prior Executive Order issued on May 12, 2021 (for more information, please see this earlier post). Alongside the Executive Order, the Biden administration announced a plan to invest in the domestic manufacturing of port cranes to reduce reliance on foreign-built infrastructure potentially used by nation-state and financially motivated attackers to disrupt U.S. organizations.

The new initiatives presented by the White House to bolster maritime cybersecurity and prevent the interruption of MTS operations are threefold. They include (i) an Executive Order, (ii) a Notice of Proposed Rulemaking on Cybersecurity in the MTS issued by the Coast Guard, and (iii) a Maritime Security Directive on cyber risk management.

I. Increased authority of the Coast Guard

Central to the announcement of the White House is an Executive Order that increases the authority of the Department of Homeland Security (DHS) to respond to maritime cyber threats through the U.S. Coast Guard. At the press conference announcing the Executive Order, Deputy National Security Advisor to the White House, Anne Neuberger, restated the intention of the administration to “ensure that there are similar requirements for cyber” as there are currently for “a storm or another physical threat.” As such, the Executive Order amends multiple sections of Part 6, title 33 of the Code of Federal Regulations (CFR) to integrate “cyber incidents” in the list of threats posed to the MTS. It also incorporates the definition of “incident” from 44 U.S.C. 3552(b)(2):

The term “incident” means an occurrence that —

(A) actually or imminently jeopardizes, without lawful authority, the integrity, confidentiality, or availability of information or an information system; or

(B) constitutes a violation or imminent threat of violation of law, security policies, security procedures, or acceptable use policies.

One of the most significant actions announced by the White House pertains to the creation of new cybersecurity reporting requirements for ports’ networks and computer systems. Under amended section 33 CFR 6.16-1, actual or threatened cyber incidents endangering “any vessel, harbor, port, or waterfront facility” must be reported to the Federal Bureau of Investigation (FBI), the Cybersecurity and Infrastructure Security Agency (CISA), and the Captain of the Port. This new reporting requirement adds to the already existing obligations of “the master, owner, agent, or operator of a vessel or waterfront facility” to prevent sabotage and subversive activity.

The Executive Order also strengthens the ability of the Coast Guard to respond to cyberattacks by requiring vessels and waterfront facilities to mitigate unsatisfactory conditions involving cybersecurity threats that put vessels, harbors, and other maritime facilities at risk. The Executive Order extends the authority of the Coast Guard to take possession and control of vessels presenting a potential cyber risk to U.S. maritime infrastructure.

II. Proposed rules and standards

With cyberattacks targeting U.S. critical infrastructure on the rise, the Coast Guard issued a Notice of Proposed Rulemaking on cybersecurity regulations and minimum standards in the MTS. The proposed rules leverage common frameworks issued by the National Institute of Standards and Technology (NIST) and CISA to strengthen maritime cybersecurity measures regarding “account security, device security, data security, governance and training, risk management, supply chain management, resilience, network segmentation, reporting, and physical security.” These standards would apply to US-flagged vessels, Outer Continental Shelf facilities, and U.S. facilities subject to the Maritime Transportation Security Act of 2002 regulations.

The proposed rules also introduce the term “Reportable cyber incident.” This term would create a reporting threshold between cyber incidents that require reporting and those that do not. A “reportable cyber incident” would be defined as:

an incident that leads to, or, if still under investigation, could reasonably lead to, any of the following:

(1) Substantial loss of confidentiality, integrity, or availability of a covered information system, network, or OT system;

(2) Disruption or significant adverse impact on the reporting entity’s ability to engage in business operations or deliver goods or services, including those that have a potential for significant impact on public health or safety or may cause serious injury or death;

(3) Disclosure or unauthorized access directly or indirectly of non-public personal information of a significant number of individuals;

(4) Other potential operational disruption to critical infrastructure systems or assets; or

(5) Incidents that otherwise may lead to a Transportation security incident (TSI) as defined in 33 CFR 101.105.

The proposed rules contemplate two cybersecurity regulatory measures for incident reporting. The first alternative would be for a “reportable cyber incident” to be reported without delay to the Coast Guard National Response Center (NRC) via toll-free numbers. If the cyber incident does not involve any physical or pollution effects, it could also be reported directly to CISA. The requirement to report to the Coast Guard would be fulfilled by sharing all such reports between the NRC and CISA. The second alternative would be to report a “reportable cyber incident” to CISA following the directions laid out in the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA). Finally, the proposed rules also contemplate expressly requiring the reporting of ransom payments made in connection with ransomware attacks.

These proposed rules are open to public comments. Stakeholders of the MTS who intend to participate and provide comments to the Coast Guard on the 230-page proposed rules have until April 22, 2024.

III. Nonpublic Maritime Cybersecurity Directive

The announcement also addresses certain national security concerns posed by the use of port cranes, known as ship-to-shore cranes, manufactured in the People’s Republic of China (PRC). To that end, the Coast Guard issued a Maritime Security Directive on cyber risk management actions for specified foreign-built port cranes located in strategic locations designated as U.S. Commercial Strategic Seaports. This announcement aligns with the recent public release by CISA of a cybersecurity advisory on state-sponsored cyber threat actor “Volt Typhoon” and its ability to persist in critical infrastructure systems. Considering the sensitivity of the information contained in the directive, its content is not available to the public. Covered persons can obtain a copy of the directive through their local Coast Guard Captain of the Port or District Commander.

Our take

This Executive Order and the proposed rules by the U.S. Coast Guard reinforce the measures taken by the administration to create regulations and requirements addressing the rising cybersecurity threats faced by U.S. critical infrastructure, specifically the maritime transport industry. The standards proposed by the Coast Guard are intended as minimum obligations establishing a common baseline in the U.S. maritime supply chains. Companies and other entities relying on the MTS are encouraged by the Biden administration to not only meet but exceed the new cybersecurity requirements. This includes proactively identifying and assessing cyber risks and threats and enhancing cybersecurity procedures.

The right of access to personal data: a more extensive view? https://www.lexblog.com/2024/02/16/the-right-of-access-to-personal-data-a-more-extensive-view/ Fri, 16 Feb 2024 16:36:59 +0000 https://www.lexblog.com/2024/02/16/the-right-of-access-to-personal-data-a-more-extensive-view/ This article first appeared in PLC Magazine in the January / February 2024 issue of PLC Magazine.

The right of access to personal data looks set to be a key focus area for data protection regulators for 2024 in both the EU and the UK. The European Data Protection Board (EDPB) announced that its 2024 co-ordinated enforcement action will look at how controllers implement the right of access to personal data. In the UK, data subject access requests (DSARs) remain a priority for the Information Commissioner’s Office.

Historically, there have been differences in how controllers in different European countries handle DSARs. However, alongside the enhanced regulatory focus in this area, recent European Court of Justice (ECJ) case law has indicated that the right of access should not always be interpreted as restrictively as it has been previously.

Thanks to George Hairs for contributing towards this article.

Read the full article

Significant amendments to the Singapore Cyber Security Act set to have implications for the cybersecurity landscape https://www.lexblog.com/2024/02/13/significant-amendments-to-the-singapore-cyber-security-act-set-to-have-implications-for-the-cybersecurity-landscape/ Tue, 13 Feb 2024 14:08:29 +0000 https://www.lexblog.com/2024/02/13/significant-amendments-to-the-singapore-cyber-security-act-set-to-have-implications-for-the-cybersecurity-landscape/ On 15 December 2023, the Cyber Security Agency of Singapore (CSA) released the draft Cybersecurity (Amendment) Bill (Draft Bill), which seeks to amend the Cyber Security Act 2018 (CS Act), for public consultation. The public consultation concluded on 15 January 2024.

The consultation paper and the Draft Bill can be accessed here.

The proposed changes are significant and will have implications for the cybersecurity landscape in Singapore which we consider below.

Background

The amendments in the Bill seek to ensure that Singapore’s cybersecurity laws are aligned with their purpose of protecting Singapore against cybersecurity threats and adverse disruptions.  

The Proposed Changes

Broadly, the Draft Bill proposes to make two key changes: 

  • strengthening the regulatory approach to critical information infrastructure (CII); and
  • extending the regulatory scope of the CS Act to include other entities beyond CII owners.  

Strengthening the Regulatory Approach to CII

At present, Part 3 of the CS Act primarily imposes obligations on CII owners. This regulatory approach reflects the fact that, at the time the CS Act was enacted, providers of essential services tended to own and operate the CII necessary for the delivery of such essential services.

However, since the enactment of the CS Act, there has been a shift towards virtualisation or use of outsourced vendors (Computing Vendors) to provide specific computing needs. Recognising that the use of such Computing Vendors should be facilitated if it could improve the delivery of essential services, the CSA is proposing to introduce a new Part 3A to the CS Act, to facilitate the use of Computing Vendors by providers of essential services.

Under the new proposed Part 3A of the CS Act, providers of essential services will be permitted to use Computing Vendors in the delivery of an essential service. However, responsibility for the cybersecurity of the essential service will remain with its providers. The Commissioner of Cybersecurity (Commissioner) will be able to impose various duties on providers of essential services that are designed to result in the same cybersecurity outcomes as Part 3 of the CS Act (which applies to CII owners).[1] 

To ensure that providers of essential services can discharge their duties under the CS Act, they will be required to obtain legally binding commitments from their Computing Vendors. If they are unable to obtain such commitments, the Commissioner may order the provider of the essential service to cease using the non-provider-owned CII.

Extending the Regulatory Scope of the CS Act beyond CII

The other significant change to the CS Act relates to the extension of the regulatory scope of the CS Act beyond that of CII owners and providers of essential services.

This is a recognition of the fact that due to increased digitisation, there are other components in Singapore’s cybersecurity landscape apart from essential services where disruptions caused by cybersecurity incidents could significantly impact or degrade life in Singapore.

Therefore, the CSA is proposing to expand the CS Act, with Parts 3B, 3C and 3D, to regulate the following classes of entities:

  • major providers of foundational digital infrastructure (FDI). These relate to important digital infrastructure not falling within the CII designations which could lead to major disruptions and impact if compromised, for example, data centre operators or cloud service providers;
  • entities of special cybersecurity interest (ESCI). These are entities in possession of sensitive data that could have an adverse effect on Singapore’s interests if compromised, for example, entities collaborating with the Government; and
  • owners of systems of temporary cybersecurity concern (STCC). These are systems that are of temporary significance, for example, the national vaccination systems.

As with providers of essential services and CII owners, these entities, once designated, will be subject to certain duties under the CS Act. The duties imposed on these entities include the duty to provide information to the Commissioner, the duty to comply with codes of practice, standards of performance or written directions issued by the Commissioner, and the duty to notify the Commissioner of prescribed cybersecurity incidents.

Key Takeaways

The proposed enhanced powers of the CSA will have the following implications for the cybersecurity landscape:

  • Increased Regulatory Oversight: the new designations, namely, FDI, ESCI and STCC entities, increase the scope of the CSA to provide regulatory oversight of the cybersecurity approach of these entities.
  • Stricter Cybersecurity Standards: more stringent cybersecurity requirements will apply across the wider range of regulated entities, with penalties for non-compliance to be set out in subsidiary legislation.
  • Enhanced Incident Reporting Regime: reporting obligations for cyber security incidents will be expanded beyond those CII systems under the direct control of owners or service providers.
  • Increased Supply Chain Scrutiny: the expansion of regulatory oversight is likely to lead to further scrutiny over cybersecurity supply chains, with the effect of more stringent requirements being imposed downstream by such entities regulated by the CSA. 

We would like to thank our practice trainee, Charles How, for his assistance with the preparation of this update.


[1] Such duties include providing information on non-provider-owned CII, complying with codes of practice and standards of performance, conducting regular audits, and notifying the Commissioner of changes of ownership of non-provider-owned CII and of the occurrence of prescribed cybersecurity incidents, etc.

Two FTC complaints that over-retention of personal data violates Section 5
https://www.lexblog.com/2024/02/13/two-ftc-complaints-that-over-retention-of-personal-data-violates-section-5/
Tue, 13 Feb 2024 14:00:00 +0000

On January 18, 2024, the U.S. Federal Trade Commission announced a complaint and proposed consent order with InMarket Media, LLC, a digital marketing platform and data aggregator.  Less than two weeks later, on February 1, the FTC announced a complaint and proposed consent order with software licensor and data provider Blackbaud, Inc.  In both cases, the FTC’s complaint alleged that the companies retained personal data for longer than was necessary, and that this conduct violated Section 5 of the Federal Trade Commission Act as an unfair act or practice.  Under the proposed consent orders, both companies neither confirm nor deny the allegations.

InMarket

Digital marketing platform and data aggregator InMarket collected extensive personal data both through purchases of data as well as through the software development kit (SDK) that it licensed to software developers.  The company would use the collected personal data (including geolocation data) to group customers into “audiences” for purposes of targeted advertising, including ads displayed through the SDK.  The FTC’s complaint alleged that InMarket failed to:  (a) “notify consumers that their location data will be used for targeted advertising” and (b) “verify that mobile applications (“apps”) incorporating the InMarket SDK have notified consumers of such use.”  (Complaint ¶ 3) 

According to the FTC’s complaint, developers would incorporate the InMarket SDK into developers’ apps, which would request access to the location data the consumer’s mobile device generates.  If the consumer allowed access, the device would send both the device’s precise latitude and longitude as well as the timestamp and unique mobile device identifier.  The FTC’s complaint states:  “From 2016 to the present, about 100 million unique devices sent Respondent location data each year.”  (Complaint ¶ 5 (emphasis in original))  InMarket would share advertising revenue with developers that incorporated the InMarket SDK into their apps.

InMarket also incorporated its SDK into its own apps, which would, for example, offer consumers rewards for completing certain tasks, such as watching videos, walking into stores, etc.  With respect to disclosure to consumers, the FTC complaint states that InMarket’s request for location data was, for example:  “Allow CheckPoints to access your location?  This allows us to award you extra points for walking into stores.”  (Complaint at ¶¶ 13-14)  The FTC pointed out that InMarket did disclose that it used consumer data for targeted advertising in its privacy policy, but the location consent screen did not link to the privacy policy.  (Complaint at ¶ 19)  The FTC alleged that “the misleading prompts do not inform consumers of the apps’ data collection and use practices” and that representations regarding the use of location information were material to consumers.  (Complaint at ¶¶ 18-19).

With respect to developers, the FTC alleged that InMarket did not require the developers to obtain informed consumer consent, but instead simply required that developers “comply with all applicable laws.”  (Complaint at ¶ 21)  The InMarket agreement with developers also did not disclose that the information users provided “will be supplemented and cross-referenced with purchased data and analyzed to draw inferences about those users for marketing purposes. Nor does it disclose to these app developers that it retained their users’ location information for up to five years.”  (Complaint at ¶ 22)

With respect to the five-year retention period, the FTC’s complaint stated:

This unreasonably long retention period—far longer than is necessary to accomplish InMarket’s stated purpose for collection (to allow a consumer to earn shopping points or make shopping lists)—significantly increases the risk that this sensitive data could be disclosed, misused, and linked back to the consumer, thereby exposing sensitive information about that consumer’s life.

(Complaint at ¶ 26)

That statement formed the basis for Count III of the FTC’s complaint:

Respondent’s retention of detailed location data for such an extended period has caused or is likely to cause substantial injury in the form of a loss of privacy about the day-to-day movements of millions of consumers, and an increased risk of disclosure of such sensitive information.  This injury is not reasonably avoidable by consumers themselves, as they are not aware of the scope of these practices.  This injury is also not outweighed by countervailing benefits to consumer or competition.  Consequently, Respondent’s retention of consumers’ detailed location data for longer than is reasonably necessary to effectuate its business purpose is an unfair act or practice.

(Count III of Complaint)

Count I of the complaint related to InMarket’s collection without disclosing InMarket’s uses of the location data.  Count II focused on the collection and use of consumer data from third-party apps.  Count IV alleged that InMarket had deceived consumers with its partial disclosure of its uses of the location data—alleged to be a deceptive failure to disclose.  All four counts alleged a violation of Section 5 of the FTC Act.

Under the proposed 20-year consent order, the FTC would place multiple requirements on InMarket, including a prohibition on the sale or licensing of location data and the establishment of an “SDK Supplier Assessment Program.”  (Consent Order at ¶¶ II, VI)  With respect to data retention, not only would the proposed consent order require that InMarket establish a “simple, easily-located means for consumers to request that Respondent delete Location Data that Respondent previously collected from a specific mobile device,” it would also require a publicly available timeframe for retention of covered information on InMarket’s website.  (Consent Order at ¶¶ 9-10)  That retention timeframe would need to be “the shortest time reasonably necessary to fulfill the purpose for which the Covered Information was collected, and in no instance providing for the indefinite retention of any Covered Information.”  (Consent Order at ¶ X)
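
For readers thinking about how such a requirement is operationalised in practice, the short sketch below (in Python, using SQLite purely for illustration) shows one way a published retention period and a per-device deletion request could be implemented. It is not drawn from the consent order itself; the table name, column names and the 180-day period are hypothetical assumptions.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Hypothetical published retention period for precise location data.
LOCATION_RETENTION_DAYS = 180

def purge_expired_location_data(conn: sqlite3.Connection) -> int:
    """Delete location records older than the published retention period."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=LOCATION_RETENTION_DAYS)
    cur = conn.execute(
        "DELETE FROM location_events WHERE collected_at < ?",
        (cutoff.isoformat(),),
    )
    conn.commit()
    return cur.rowcount  # number of purged rows, useful for audit logging

def delete_device_data(conn: sqlite3.Connection, device_id: str) -> int:
    """Honor a consumer deletion request for a specific mobile device."""
    cur = conn.execute(
        "DELETE FROM location_events WHERE device_id = ?",
        (device_id,),
    )
    conn.commit()
    return cur.rowcount
```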

Blackbaud

Blackbaud suffered a cybersecurity incident in 2020 in which personal information was exfiltrated.  (We have twice covered the consumer class action, here and here.)  Blackbaud provides data collection and maintenance software solutions for administration, fundraising, marketing, and analytics services to various charitable organizations, including healthcare, religious, and educational institutions as well as various foundations.  As a result, Blackbaud had personal information on millions of consumers, including names, dates of birth, Social Security numbers, financial information, medical information, religious belief data, and account credentials.  According to paragraph 9 of the FTC complaint, Blackbaud did not encrypt database backup files.  The complaint also alleges that:  “Blackbaud did not enforce its own data retention policies, resulting in the company keeping customer’s consumer data for years longer than was necessary.”  (Complaint ¶ 10.)

Blackbaud notified its customers of the security incident on July 16, 2020, but the notice stated that “No information about your constituents was accessed.”  Unfortunately, the FTC complaint states that Blackbaud knew by July 31 that personal information had been accessed (Complaint ¶ 14), yet Blackbaud did not disclose the extent of the breach until October 2020.

The FTC pointed out that Blackbaud’s posted privacy policy stated “While no website can guarantee exhaustive security, we maintain appropriate physical electronic and procedural safeguards to protect your personal information collected via the website.”  (Complaint  ¶18)  The FTC’s complaint then listed 8 ways the agency believed that Blackbaud did not maintain appropriate security, ranging from weak passwords, to failing to patch in a timely manner, to failure to “implement and enforce appropriate data retention schedules and deletion practices for the vast amounts of consumers’ personal information stored on its network.”  (Complaint  ¶19)  The FTC’s five-count complaint alleged in Count II that Blackbaud’s alleged failure to “implement and enforce reasonable data retention practices for sensitive consumer data maintained by its customers on its network” was an unfair act or practice in violation of Section 5 of the FTC Act.

Under the proposed 20-year consent order, among other things, Blackbaud would be required to “Delete” or destroy backup data that is not retained in connection with providing products or services to customers, excluding information legally required to be maintained.  (Consent § II)  The draft consent order defines “Delete” to mean “to remove Covered Information such that it is not maintained in retrievable form and cannot be retrieved in the normal course of business.”  Blackbaud would also be required to “refrain from maintaining any Covered Information not necessary for the purpose(s) for which such information is stored and/or maintained by Respondent.”  In addition, similar to the InMarket proposed consent above, Blackbaud would be required to post on its website a publicly available timeframe for retention of covered information, which cannot be an indefinite retention period.  The FTC added that the retention schedule requirements apply to databases of personal information of former customers and customers who migrate to a different Blackbaud product.  (Complaint § III)

Our Take

As we have previously written, regulators are focusing more and more on retention of data.  Note that in the InMarket case, the FTC alleged that five years was too long for InMarket to retain sensitive location data.  With respect to Blackbaud, the FTC was particularly concerned about retention of former customers’ personal financial and health data, not to mention the company’s alleged failure to follow its own record retention policies.  Moreover, in the Blackbaud draft Consent Order, the FTC appears to be taking an even more nuanced approach, finding that even if the company needed certain data for longer periods, it did not need to keep it accessible or unencrypted.  This regulatory attention could lead to even more scrutiny of company retention and storage practices, especially where companies keep multiple copies of the same data and where convenience copies (which are generally more accessible) do not necessarily need to be kept as long as archival or system-of-record versions.

Retention has become a priority for the FTC as part of its overall focus on privacy and consumer protection.  Because most companies cannot simply “flip the switch” on record retention, which impacts multiple areas of the business including operations and legal, they would be wise to start developing a plan to update and revise their information governance strategy and program.

CNIL publishes a draft TIA guide
https://www.lexblog.com/2024/02/06/cnil-publishes-a-draft-tia-guide/
Tue, 06 Feb 2024 16:26:16 +0000

The Court of Justice of the European Union (CJEU)’s Schrems II decision[1] clarified strict rules for personal data transfers outside of the European Union.  The European Data Protection Board (EDPB) followed up with recommendations[2] setting out its expectations on what the Schrems II decision meant for carrying out a data transfer impact assessment (TIA) for Article 46 GDPR instruments.  The French data protection authority (CNIL) has recently followed up with its own step-by-step guidance on TIAs.[3] This guidance, published in French and English, aims to provide practical assistance to data exporters and is based on the EDPB’s previous recommendations.

The guidance includes tables to collate key information and sets out a process to determine if a TIA is necessary and then provides step-by-step information summarising the EDPB’s recommendations on how to carry out the impact assessment. It makes some unwelcome comments in relation to the involvement of importers in the TIA process and the need to undertake TIAs on downstream onward transferees.

It is currently in a consultation phase, with the consultation due to close on 12 February 2024 and the final guidance expected later in the year.

When is a TIA required?

Article 44 GDPR places restrictions on transfers of personal data outside the European Economic Area (EEA). 

Where the EU Commission has made an “adequacy decision” in respect of a country, data can be transferred outside the EEA without additional safeguards to that country.  A TIA is not required for transfers to jurisdictions with an adequacy decision in place, which include Israel, Argentina, and Switzerland.  The Commission recently made a partial adequacy decision for transfers to the US under the EU-US Data Privacy Framework (DPF), meaning that a TIA is not required for transfers to recipients who have self-certified under the DPF.

For transfers to jurisdictions for which no adequacy decision is in place, Article 46 of the GDPR lists a series of transfer instruments or “appropriate safeguards” that exporters can rely on. These include binding corporate rules (BCRs) and standard data protection clauses adopted by the Commission, known as standard contractual clauses or SCCs.[4]

The Schrems II decision clarified that the exporter must assess through a TIA whether the personal data transferred under the SCCs will benefit from an essentially equivalent level of protection in the importer’s hands in the importing jurisdiction. The EDPB has since clarified that it also expects this assessment for BCRs and other Article 46 tools.[5]  The CNIL is looking to help organisations navigate this assessment through its guide.  

The CNIL clarifies that an exporter should carry out a TIA for transfers to jurisdictions with a partial adequacy decision that does not cover the data transferred, or only covers some of the data transferred.  This applies for transfers to the US where the recipient has not signed up to the DPF.

A notable increase in the importer’s involvement

From step 3 onwards, the guide offers a comprehensive checklist to assess the laws and practices of the country to which the data is transferred, the effectiveness of the transfer tools, potential supplementary measures to be added and the associated procedural steps.

The CNIL shows a clear intention to involve the importer, i.e. the receiving party in the third country (who can be a controller or processor).  The CNIL considers the importer’s cooperation to be essential, as they hold a lot of information required for the assessment.[6] The CNIL provides questions for the data importer in step 3 and underlines that the process of identifying supplementary measures (step 4 of the guide) “should be undertaken with due diligence, in collaboration with the importer and must be documented”.[7] The CNIL also emphasises the need for users of the guide to determine the roles of the parties involved in the transfer (controller, joint controller, or processor), “as it determines the allocation of responsibilities”.

This is a more stringent approach than the EDPB’s recommendations, which confine the collaboration of the importer to “where appropriate”.[8] The EDPB’s approach implies that the data importer possesses information essential to the TIA, but it places no direct duty on the importer, citing it only as one possible source of information.[9] However, the EDPB does acknowledge that, in the Schrems II judgment, the CJEU made both the data exporter and the data importer responsible for assessing that the level of protection in third countries is compliant in practice with the level set by EU data protection law.[10]

A strict approach for data processors

The CNIL takes a strict view of the obligations of processors in relation to the provision of information. It highlights that, in its reading of the GDPR, “in the context of a relationship between a controller and a processor, the transmission of this information to the controller by the processor is part of the latter’s obligations under Article 28 of the GDPR, and in particular Article 28(3)(h)“.[11] This requires that data processor agreements  include an obligation on the processor to make available to the controller all information necessary to demonstrate compliance with its GDPR obligations.[12]  The guidance does not suggest that expectations would differ for processors not subject to the GDPR.

This includes information on the legislation of the third country, the practices of the authorities and the circumstances of the transfer. The processor cannot limit its duty to the provision of “a simple conclusion or an executive summary of its assessment”.[13] This may discourage vendor processors from maintaining a policy of providing customers with only high-level information about their assessments.

Through its focus on this article of the GDPR and its interpretation, the CNIL appears to want to remind readers of the central role of the processor in ensuring precise TIAs. Although this is only an optional methodology, it could still impact the CNIL’s decisions on whether personal data transfers comply with the GDPR.

A close-up on onward transfers

The CNIL’s draft guide includes a brief, but potentially impactful, comment on onward transfers. Under Article 44 of the GDPR, onward transfers refer to a transfer of personal data to another third country or international organisation, which occurs after the personal data has already been transferred to a data importer outside the EEA.[14]

The EDPB’s recommendations leave room for interpretation on the scope of the assessment that must be carried out for onward transfers, noting only: “When mapping transfers, do not forget to also take into account onward transfers”.[15]  The CNIL, however, is more specific on this point in step 1 of its draft guide. One of the questions in this section asks about the possibility of onward transfers by the importer. Where onward transfers will take place, the guide notes that the exporter should “conduct a separate TIA dedicated to each onward transfer”.[16]

As a result, the CNIL suggests that exporters will need to enquire whether any further transfers take place after the initial transfer, and act accordingly. If onward transfers will take place, then exporters will have to use one document for the initial transfer and then separate documents for each onward transfer (if following the process set out by the CNIL). This undoubtedly increases the data exporter’s burden while conducting TIAs.
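
To make the record-keeping implication concrete, the sketch below models a transfer assessment in which each onward transfer is captured as its own record, reflecting the guide’s expectation of a separate TIA per onward transfer. This is purely an illustration and not anything the CNIL prescribes; all field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TransferAssessment:
    """One TIA record per transfer, including each onward transfer."""
    exporter: str
    importer: str
    destination_country: str
    transfer_tool: str                 # e.g. "SCC" or "BCR"
    supplementary_measures: List[str] = field(default_factory=list)
    onward_transfers: List["TransferAssessment"] = field(default_factory=list)

def flatten_assessments(tia: TransferAssessment) -> List[TransferAssessment]:
    """Return the initial TIA plus a separate record for every onward transfer."""
    records = [tia]
    for onward in tia.onward_transfers:
        records.extend(flatten_assessments(onward))
    return records
```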

This requirement may present practical challenges, as exporters are often unable to obtain much more than an assurance from importers or processors that the level of security and risk of access by the authorities in the third countries will not be worse than in previous transfers.  As discussed above, this is an issue that the CNIL considers significant enough to address explicitly. At present, it may be challenging to obtain sufficient details to carry out assessments for onward transfers in all cases.

An optional methodology

The CNIL is clear on its intent for this guide. It is “a methodology, a checklist” identifying “various elements to be considered when carrying out a TIA”. It is a tool using the six steps set out in the EDPB’s recommendations to give indications on “how the analysis can be carried out [...] and points to the relevant documentation”. Use of the guide is not obligatory, and other methodologies can be applied when conducting TIAs.[17]

Our take

The CNIL’s draft guide is comprehensive and detailed.  It offers some insight into the French authority’s strict interpretation of the analysis required for a TIA. It demonstrates an underlying goal of upholding EU data subjects’ rights, with less emphasis on providing a risk-based and business-friendly tool than some authorities outside the EU, for example the ICO.[18]

As comprehensive step-by-step guidance from an EU regulator on carrying out a TIA, the guide, once finalised, will be a cautionary point of reference for organisations with EU GDPR obligations.

TIAs are no longer required for transfers to DPF-certified recipients in the US, but the CNIL has made it clear that it expects to see TIAs for transfers to the US that do not rely on the DPF, and TIAs continue to be required for transfers to other non-adequate third countries.

Exporters will have to ensure the collaboration of importers, particularly where their importers are data processors. Exporters can also flag to importers that the CNIL’s view is that their provision of information relating to the law enforcement laws and practices in their jurisdiction is a requirement under the GDPR.   

The CNIL’s strict stance on the need to carry out a TIA for each onward transfer would be burdensome to comply with.  The CNIL acknowledges some processors’ reluctance to provide comprehensive information on onward transfers. This area has the potential to create further TIA work where exporters have not undertaken TIAs to this level. The CNIL’s enforcement activity in this space coupled with the final text of the guidance should be monitored closely.

The consultation remains open until 12 February for organisations wishing to respond.  The CNIL is likely to publish its final position later in the year.

Thanks to Laura Helloco for contributing towards this article.


[1] CJEU, C-311/18 “Facebook Ireland and Schrems”, 16 July 2020, ECLI:EU:C:2020:559.

[2] EDPB, Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data, 18 June 2021.

[3] CNIL, Draft practical guide, Transfer Impact Assessment, 8 January 2024.

[4] GDPR, Article 46 “Transfers subject to appropriate safeguards”.

[5] EDPB, Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data, 18 June 2021, para 22.

[6] CNIL, Draft practical guide, Transfer Impact Assessment, 8 January 2024, p. 2.

[7] CNIL, Draft practical guide, Transfer Impact Assessment, 8 January 2024, p. 14.

[8] EDPB, Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data, 18 June 2021, para. 30.

[9] EDPB, Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data, 18 June 2021, para. 31, 39, 44, 45.

[10] EDPB, Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data, 18 June 2021, para.1.5.

[11] CNIL, Draft practical guide, Transfer Impact Assessment, 8 January 2024, p. 2.

[12] GDPR, Article 28 “Processor”.

[13] CNIL, Draft practical guide, Transfer Impact Assessment, 8 January 2024, p. 4.

[14] GDPR, Article 44 “General principle for transfers”; Recital 101 “General Principles for International Data Transfers”.

[15] EDPB, Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data, 18 June 2021, para. 10.

[16] CNIL, Draft practical guide, Transfer Impact Assessment, 8 January 2024, p. 7.

[17] CNIL, Draft practical guide, Transfer Impact Assessment, 8 January 2024, para 1.2.

[18] ICO, TRA tool, November 2022 <https://view.officeapps.live.com/op/view.aspx?src=https%3A%2F%2Fico.org.uk%2Fmedia%2Ffor-organisations%2Fdocuments%2F4022639%2Ftransfer-risk-assessments-tool-202211.doc&wdOrigin=BROWSELINK>

Singapore proposes Governance Framework for Generative AI
https://www.lexblog.com/2024/02/04/singapore-proposes-governance-framework-for-generative-ai/
Sun, 04 Feb 2024 14:35:47 +0000

On 16 January 2024, Singapore’s Infocomm Media Development Authority (IMDA), in collaboration with the AI Verify Foundation, announced a public consultation on its draft Model AI Governance Framework for Generative AI (Draft GenAI Governance Framework), showing the areas where future policy interventions relating to generative AI may take place and options for such intervention.

The Draft GenAI Governance Framework may be accessed here. Views on the Draft GenAI Governance Framework may be provided to the IMDA at info@aiverify.sg.

A brief summary of, and our key takeaways from, the Draft GenAI Governance Framework are set out below.

Singapore’s initiatives on AI governance

The Singapore government has been keeping a close eye on the AI landscape through the implementation of the following key initiatives:

  • National AI Strategy: In 2019, Singapore released its first National AI Strategy, detailing initiatives aimed at enhancing the integration of AI to boost the economy. To highlight the practical applications of AI, Singapore initiated national projects in sectors such as education, healthcare, and safety & security. Additionally, investments were made to bolster the overall AI ecosystem. The National AI Strategy was last updated in 2023.
  • Model AI Governance Framework: The Model AI Governance Framework was first introduced in 2019 to provide detailed and readily implementable guidance to private sector organisations to address key ethical and governance issues when deploying AI solutions. A second edition of the Model AI Governance Framework was issued in 2020[1].
  • AI Verify Foundation and the AI Verify testing tool: Announced in June 2023, AI Verify is an open-source AI governance testing framework and software toolkit developed by the IMDA. The IMDA also set up the AI Verify Foundation to harness the collective power and contributions of the open-source community to further develop the AI Verify testing tools for the responsible use of AI[2].
  • Proposed Advisory Guidelines on Use of Personal Data in AI Recommendations and Decision Systems: In July 2023, the Personal Data Protection Commission (PDPC) issued proposed advisory guidelines under the Personal Data Protection Act 2012 concerning the use of personal data to develop machine learning (ML) AI models or systems, as well as the collection and use of personal data in such ML systems for decisions, recommendations, and predictions.[3]
  • Discussion Paper on Generative AI: Implications for Trust and Governance: In June 2023, the IMDA, together with Aicadium, released a discussion paper outlining Singapore’s plans for a reliable and responsible adoption of Generative AI. The paper discusses risk assessment methods and suggests six key dimensions for policymakers to enhance AI governance—addressing immediate concerns while investing in long-term outcomes.
  • MAS’s FEAT principles and Veritas Toolkit: In June 2023, the Monetary Authority of Singapore (MAS) introduced an open-source toolkit aimed at promoting responsible AI usage within the financial industry. Known as the Veritas Toolkit version 2.0, this toolkit facilitates financial institutions in conducting assessments based on the Fairness, Ethics, Accountability, and Transparency (FEAT) principles. These principles offer guidance to firms in the financial sector on responsibly utilising AI and data analytics in their products and services.

Against this backdrop, the Draft GenAI Governance Framework emerges as the latest instrument driving AI development in Singapore.

Summary of the Draft GenAI Governance Framework

Aligned with Singapore’s National AI Strategy, the Draft GenAI Governance Framework aims to propose a systemic and balanced approach to addressing generative AI concerns while continuing to facilitate innovation.

The Draft GenAI Governance Framework underscores the importance of global collaboration on policy approaches and emphasises the need for policymakers to work with industry, researchers, and like-minded jurisdictions.

To that end, the Draft GenAI Governance Framework identifies nine dimensions that address generative AI concerns while continuing to facilitate innovation. They are summarised below.

  1. Accountability: The Draft GenAI Governance Framework suggests allocating responsibility in the generative AI development chain according to the control levels of each stakeholder. It also proposes enhancing end-user protection by providing indemnities and updating legal redress and safety protection frameworks. This ensures that end-users have additional safeguards against potential harm from AI-enabled products and services.
  2. Data: The Draft GenAI Governance Framework advises policymakers to clarify the application of existing personal data laws to generative AI and encourage research for the creation of safer and culturally representative models. Additionally, policymakers are urged to foster open dialogue between copyright owners and generative AI companies, facilitating balanced solutions for copyright issues related to data used in AI training.
  3. Trusted Development and Deployment: The Draft GenAI Governance Framework proposes that the industry should standardize several aspects of Generative AI. Firstly, it suggests adopting common best practices in the development of Generative AI. Secondly, the framework recommends standardizing the disclosure of models, akin to a “food label”, enabling comparisons between different AI models. Thirdly, it suggests standardizing the evaluation of Generative AI models to implement a baseline set of required safety tests.
  4. Incident Reporting: AI developers should establish processes to monitor and report incidents arising from the use of their AI systems. Simultaneously, policymakers need to determine the severity threshold for AI incidents that would necessitate reporting to the government.
  5. Testing and Assurance: Policymakers are recommended to establish common standards in AI testing to ensure quality and consistency across the industry.
  6. Security: The Draft GenAI Governance Framework suggests developing new testing tools to mitigate the risks associated with generative AI. One example is the creation of a digital forensic tool specifically designed for generative AI, aimed at identifying and extracting potential malicious code concealed within a model.
  7. Content Provenance: Due to the potential for AI-generated content to amplify misinformation, policymakers are urged to collaborate with stakeholders in the AI content lifecycle. Together, they can work on solutions such as digital watermarking and cryptographic provenance to reduce the risk of misinformation (a simplified, illustrative sketch of cryptographic provenance follows this list).
  8. Safety and Alignment R&D: Policymakers are urged to accelerate their investment in R&D to guarantee alignment of AI models with human intention and values. Additionally, facilitating global cooperation among AI safety R&D institutes is essential to optimise limited resources and keep pace with commercial growth.
  9. AI for Public Good: The Draft GenAI Governance Framework encourages governments to democratize AI access by educating the public on identifying deepfakes and using chatbots safely. Additionally, it emphasizes the role of governments in leading innovation within the industry, especially among SMEs, through measures like the use of sandboxes. Furthermore, the framework recommends increasing efforts to upskill the workforce and promote the sustainable development of AI systems.
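
For readers unfamiliar with the term, the sketch below illustrates the basic idea behind “cryptographic provenance”: a manifest containing a hash of the generated content, protected by a keyed signature so that tampering with either the content or the manifest can be detected. It is a deliberately simplified illustration; real-world schemes (such as C2PA-style manifests) rely on public-key signatures and richer metadata, and the key and field names used here are hypothetical.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"replace-with-a-real-secret-key"  # placeholder only

def make_provenance_manifest(content: bytes, generator: str) -> dict:
    """Attach a verifiable provenance record to a piece of generated content."""
    manifest = {"generator": generator, "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check that the content matches the manifest and the signature is intact."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != claimed.get("sha256"):
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest.get("signature", ""))
```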

Key Takeaways

The Draft GenAI Governance Framework reflects Singapore’s broader efforts to contribute to AI governance and provides useful insight into the concerns of policymakers regarding the development and deployment of generative AI systems.

While the Draft GenAI Governance Framework is helpful for organisations to understand the key policy implications regarding generative AI, it is more of a discussion paper and does not prescribe specific practices for organisations to adopt or implement when deploying generative AI solutions. This approach is not entirely unexpected at this stage as the technology is still developing rapidly and policymakers worldwide are still grappling with how they should deal with the risks and concerns associated with generative AI.

We are closely observing this space to see how policymakers around the world will react to the upcoming EU AI Act and whether they will follow a similar approach. We also anticipate that the Singapore government will issue additional papers and guidance in the near future.

We would like to thank Judeeta Sibs, practice trainee at Ascendant Legal LLC, for her assistance with the preparation of this update.


[1] See our summary of the second edition of the Model AI Governance Framework here: https://www.dataprotectionreport.com/2020/02/singapore-updates-its-model-artificial-intelligence-governance-framework/

[2] See our summary on the AI Verify Foundation here: https://www.dataprotectionreport.com/2023/06/singapore-contributes-to-the-development-of-accessible-ai-testing-and-accountability-methodology-with-the-launch-of-the-ai-verify-foundation-and-ai-verify-testing-tool/

[3] See our summary of the public consultation on this development: Singapore Releases Proposed Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems | Data Protection Report

NYDFS issues significant guidance on insurers using AI or external data
https://www.lexblog.com/2024/02/02/insurance-and-artificial-intelligence-nydfs-proposed-circular-letter/
Fri, 02 Feb 2024 15:47:53 +0000

On January 17, 2024, the New York Department of Financial Services (“NYDFS”) published a Proposed Insurance Circular Letter (“Proposed Circular”) regarding the use of artificial intelligence systems (“AIS”) and external consumer data and information sources (“ECDIS”) in insurance underwriting and pricing. The Proposed Circular does not create or change any legislation but, once finalized, will reflect how NYDFS interprets existing laws and regulations as they relate to AIS and ECDIS, and will clarify its expectations of the insurance industry.

Purpose and Background

The Proposed Circular, which applies to all insurers authorized to write insurance in New York State that use ECDIS or AIS, defines AIS as any “machine-based system designed to perform functions normally associated with human intelligence” that is used in connection with insurance underwriting or pricing. It defines ECDIS as data or information used to supplement traditional underwriting or pricing, as a proxy for traditional underwriting or pricing, or to establish “lifestyle indicators” that contribute to underwriting or pricing.[1]

The Proposed Circular acknowledges the potential benefits of AIS and ECDIS in “simplifying and expediting insurance underwriting and pricing processes,” but acknowledges that they can reflect and reinforce systemic biases and inequalities. It therefore encourages insurers who use such technologies to mitigate potential harm to consumers with a proper governance and risk management framework.

Fairness Principles

The Proposed Circular states that an insurer is obligated under existing laws to establish that its data source or model using ECDIS or AIS for underwriting or pricing would not result in, or permit, unfair discrimination. The data source or model also should not use or be based on a protected class. Insurers also must ensure that vendor-supplied ECDIS or AIS complies with anti-discrimination laws, and insurers cannot rely solely on a vendor’s claim of nondiscrimination. The Proposed Circular outlines several ways to abide by these fairness principles.

  • Actual Actuarial Validity. An insurer that uses ECDIS should be able to demonstrate that it is “supported by generally accepted actuarial standards of practices and are based on actual or reasonably anticipated experience.” Actual or reasonably anticipated experience includes statistical studies, predictive modeling, and risk assessments. Ensuring actuarial validity also includes the ability to demonstrate that the ECDIS does not “serve as a proxy for any protected classes that may result in unfair or unlawful discrimination.”
  • Unfair and Unlawful Discrimination. An insurer should not use ECDIS or AIS in the underwriting or pricing process unless it has determined that it does not collect or use information that would constitute unlawful discrimination or unfair trade practices. This principle applies even where the insurer is not collecting the information itself, but is rather getting it from a third-party vendor. Further, an insurer should not use ECDIS or AIS unless it completes a comprehensive assessment that establishes that the underwriting or pricing guidelines are not unlawfully discriminatory.
  • Analyzing for Unfair or Unlawful Discrimination. When determining if ECDIS or AIS unlawfully discriminates, an insurer should appropriately document its analysis. Further, an insurer should conduct unlawful discrimination testing on a regular basis after deploying ECDIS or AIS, and it is encouraged to employ quantitative and qualitative assessments.
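
By way of illustration only, one common quantitative screen is the adverse impact ratio, which compares each group’s favorable-outcome rate with that of a reference group. The sketch below shows the calculation; the Proposed Circular does not prescribe this (or any) specific test, the 0.8 “four-fifths” threshold is only a rule of thumb, and a flagged ratio is a prompt for further analysis rather than a finding of unlawful discrimination.

```python
from collections import Counter

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs; returns approval rate per group."""
    totals, approvals = Counter(), Counter()
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def adverse_impact_ratios(decisions, reference_group):
    """Ratio of each group's approval rate to the reference group's rate.

    Ratios below roughly 0.8 are often treated as a flag for further review
    (the "four-fifths" rule of thumb); a flag is not itself a finding of
    unlawful discrimination.
    """
    rates = approval_rates(decisions)
    ref = rates[reference_group]
    return {g: (r / ref if ref else float("nan")) for g, r in rates.items()}

# Example: adverse_impact_ratios([("A", True), ("A", True), ("B", True), ("B", False)], "A")
# -> {"A": 1.0, "B": 0.5}
```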

Governance and Risk Management

The Proposed Circular outlines that an insurer should have a corporate governance framework, as required by 11 NYCRR § 90.2, that “provides appropriate oversight of the insurer’s use of ECDIS and AIS.” It outlines four ways to ensure compliance.

  • Board and Senior Management Oversight. The Proposed Circular suggests that oversight of ECDIS and AIS can be delegated to specific board committees or members of senior management, but it should ensure that proper reporting is in place. Further, insurers should create policies and procedures related to ECDIS and AIS oversight, and all relevant operational areas should be engaged in such oversight.
  • Policies, Procedures, and Documentation: The Proposed Circular emphasizes the importance of policies, procedures, and documentation related to AIS and ECDIS. Insurers should create policies and procedures that include “clearly defined roles and responsibilities, as well as monitoring and reporting requirements to senior management,” and that include appropriate training requirements. Further, insurers should maintain comprehensive documentation for their use of AIS and should be prepared to make such documentation available to NYDFS upon request. Finally, insurers should implement a procedure to field complaints and inquiries from consumers regarding its use of AIS or ECDIS.
  • Risk Management and Internal Controls. The Proposed Circular discusses that insurers “should manage the relevant risks at each stage of the AIS life cycle,” either within its already established enterprise risk management program, or through an independent program. It also states that the internal audit function that is already required under 11 NYCRR § 89.16 should include assessments of the overall effectiveness of the AIS and ECDIS risk management framework.

Third-Party Vendors

The Proposed Circular discusses the importance of oversight of any third-party vendors that utilize ECDIS or AIS. Insurers should develop written standards, policies, procedures, and protocols to facilitate such oversight. Specifically, the Proposed Circular suggests that insurers should:

  • retain responsibility for understanding any tools, ECDIS, or AIS used in underwriting and pricing for insurance that were developed by third-party vendors, and for ensuring compliance with all applicable regulations;
  • develop written standards for the use of ECDIS and AIS developed by a third-party vendor;
  • implement procedures for reporting any incorrect information to third-party vendors for further investigation and update as necessary; and
  • develop procedures to remediate incorrect information from their AIS that the insurer has identified or that has been reported to a third party.

Transparency, Notice and Consumer Redress

Existing insurance laws codify the importance of transparency in insurance underwriting and pricing. Therefore, insurers should include details about “all information upon which the insurer based any declination, limitation, rate differential, or other adverse underwriting decisions,” including specific details about ECDIS or AIS, where applicable.

For any adverse underwriting or pricing decision that was based on ECDIS or AIS, the insurer must provide a notice to the insured or potential insured that discloses: (i) whether the insurer uses AIS in its underwriting or pricing process, (ii) whether the insurer uses data about the person obtained from external vendors, and (iii) that such person has the right to request information about the specific data that resulted in the underwriting or pricing decision including contact information for making such request. Insurers should also be prepared to respond to consumer complaints and inquiries about their use of AIS and ECDIS by implementing procedures to receive and address such complaints. Insurers must maintain any records of complaints regarding AIS or ECDIS and be prepared to make such records available to the NYDFS upon request.

Enforcement

The NYDFS may audit and examine an insurer’s use of ECDIS and AIS, including within the scope of regular or targeted examinations pursuant to Insurance Law § 309 or a request for special report pursuant to Insurance Law § 308.

Feedback Request

NYDFS is accepting feedback on all aspects of the Proposed Circular through March 16, 2024. Comments should be submitted to innovation@dfs.ny.gov using the subject line “Proposed Circular on the use of AI and ECDIS in Insurance Underwriting and Pricing.”

Our Take

Insurers that utilize ECDIS or AIS in their underwriting or pricing processes have a responsibility to properly oversee these technologies. Such oversight should be implemented internally and should also be used to monitor third-party vendors that use them. While the Proposed Circular is not final (see Feedback Request above), insurers can use this time to assess their board and management structures, policies and procedures, and risk management plans to see where changes or additions should be made. Insurers are also encouraged to analyze the Proposed Circular’s various expectations with an eye towards their existing policies and procedures for use of ECDIS and AIS, in order to identify potential gaps.


[1] The Proposed Circular specifically exempts MIB Group, Inc. member information exchange service, a motor vehicle report, or a criminal search history from the definition of ECDIS.

International Data Privacy Day: Unpacking recent significant ECJ decisions
https://www.lexblog.com/2024/01/26/international-data-privacy-day-unpacking-recent-significant-ecj-decisions/
Fri, 26 Jan 2024 12:28:15 +0000

A flurry of significant European Court of Justice judgments relating to data protection was published in the final few months of 2023.

In celebration of International Data Privacy Day, in this one-hour webinar our European data protection specialists will unpack the following four important judgments, looking at what was decided by the Court and what the decisions will mean for organisations in the EU and UK more generally:

  • A ruling on 9 November on when an alphanumeric code may or may not constitute personal data: Judgment on Case C‑319/22 (Gesamtverband Autoteile-Handel eV v Scania CV AB)
  • A ruling on 5 December on data protection authorities’ fining powers under the GDPR: Judgments on cases C-683/21 (Nacionalinis visuomenės sveikatos centras) and C-807/21 (Deutsche Wohnen)
  • A ruling on 7 December on the interpretation of wholly automated decision making under Art 22, which could potentially have significant impact on AI vendors and their relationships with their customers: Judgment on Case C-634/21 (SCHUFA Holding AG)
  • A ruling on 14 December on controllers’ data security obligations and the scope of “non-material damages” in the context of cybercrime: Judgment on Case C-340/21 (Natsionalna agentsia za prihodite)

Register now

The EU AI Act: What obligations will apply to your business?
https://www.lexblog.com/2024/01/26/the-eu-ai-act-what-obligations-will-apply-to-your-business/
Fri, 26 Jan 2024 12:25:42 +0000

Political agreement was achieved at the beginning of December in relation to the EU’s AI Act (AIA) – the first major step in the regulation of artificial intelligence.

Although the final texts are not yet available, the key elements are clear, with the “risk-based” approach at the heart of the AIA.

Working from the last available texts, we will provide insights into which activities are likely to be caught as prohibited, high-risk or subject to transparency obligations – the essential first step for understanding the level of impact the AIA will have on your business.

Parts of the AIA could become applicable before the end of the year. Are you prepared?

Watch now

$8 million penalty to NYDFS – and another case of over-retention
https://www.lexblog.com/2024/01/18/8-million-penalty-to-nydfs-and-another-case-of-over-retention/
Thu, 18 Jan 2024 08:00:00 +0000

2024 was not a happy new year for Genesis Global Trading, Inc. (“GGT”).  On January 3, 2024, the New York Department of Financial Services announced a consent order with GGT, where GGT agreed to pay NYDFS $8 million and to surrender its BitLicense (for cryptocurrency trading), due to alleged violations of NYDFS’ cybersecurity and virtual currency regulations.  This post will focus on the cybersecurity regulation issues.  (For more information about the crypto and financial services/regulation aspects, please see https://www.nortonrosefulbright.com/en/knowledge/publications/4c9650ae/2023-crypto-round-up.)

Background

NYDFS granted GGT a license to conduct a non-custodial cryptocurrency exchange business, which meant GGT was subject to NYDFS’ virtual currency regulation and its cybersecurity regulation.  NYDFS conducted its first audit of GGT for the period of May 17, 2018 through March 31, 2019.  NYDFS found violations of both the cybersecurity and virtual currency regulations.

NYDFS conducted its second audit for the period April 1, 2019 through March 31, 2022.  According to the consent, NYDFS “determined that, while GGT’s business had grown significantly during this period, little effort or resources had been directed to addressing the deficiencies identified in the First Exam. In fact, the Second Exam identified further compliance failures with respect to the Virtual Currency Regulation and the Cybersecurity Regulation.”

Cybersecurity Regulation

NYDFS found a number of issues with respect to GGT’s lack of compliance with the cybersecurity regulation, starting with the required risk assessment.  NYDFS characterized the risk assessment as “the foundation of a Covered Entity’s cybersecurity program,” (¶ 29)  adding that it “serves to inform the design of the cybersecurity policies,” which the entity’s Board must approve. (¶30)

Not only was the assessment “years late,” NYDFS  said that it “was not sufficiently comprehensive and did not include identification of areas, systems, or processes that required material improvement, updating, or redesign, or plans for enhancing GGT’s cybersecurity program to achieve full compliance with the requirements of the” cybersecurity regulation.  (¶31)  The risk assessment did not allow for revisions due to changes in threats and technological developments, nor did it “adequately consider the cybersecurity risks to GGT’s business operations including NPI collected or stored on Information Systems and the inadequate controls in place to protect” GGT’s systems. (¶ 32)

NYDFS also found that GGT did not address asset inventory and device management, nor did GGT include the requirement to notify NYDFS within 72 hours of a cybersecurity incident (¶ 35).  GGT’s business continuity/disaster recovery plan “still lacked sufficient BCDR procedures to address certain cybersecurity requirements.”  (¶ 36)  NYDFS also found that GGT’s employees were not “sufficiently trained” on their roles under the BCDR policy and that there was no annual testing.  (¶ 36)

Data and Over-Retention

NYDFS then demonstrated how inter-connected the cybersecurity regulation’s requirements are with respect to data.  NYDFS found that GGT’s data classification policies and procedures “were incomplete, thus resulting in significant concerns regarding GGT’s ability to adequately assess its compliance with the Cybersecurity Regulation’s access privilege, data disposal, and encryption requirements.  These issues, in turn, prevented GGT from effectively limiting access to sensitive information.”  (¶37, citations omitted)

The second NYDFS audit found that GGT had never established policies and procedures for the periodic secure disposal of non-public personal information.  (¶ 39)  “In fact, data in critical applications was stored indefinitely and there was no process in place for categorizing and purging data that is no longer necessary to store, despite the clear requirements” in the cybersecurity regulation (¶ 39)  In addition “due to the lack of a data classification policy, there were no means to ensure that all sensitive data and NPI were identified and encrypted as required by” the cybersecurity regulation. ( ¶ 40)

GGT has 10 days to pay the $8 million penalty (¶ 61) and it has agreed to surrender its virtual currency business license. (¶ 66)

Our Take

As we have previously written, regulators are giving increasing attention to the over-retention of data and have been leaning on it to levy fines.  Here, similar to the FTC’s settlement described in the link above, GGT had over-retained personal information, had no plan to remediate the issue and, in fact, was indefinitely retaining data with no documented business purpose.  Companies should focus on establishing and implementing a reasonable information governance policy and record retention schedule, with special emphasis on documents that contain personal information.

Even where implementing such policies and programs in the near term is difficult – much less actively disposing of data at scale – companies can put themselves in a better position by having a working framework and a path to substantial completion.  Moreover, organizations should focus more on actually changing behavior and getting data deleted than on over-tuning policies and schedules.  Whether a document category should be kept for 6 or 7 years is not as significant a decision as actually teaching employees and systems to stop retaining data indefinitely.  A database that deletes data systematically after 10 years is good, but it is undermined if employees routinely download the information to fileshares and OneDrive and retain it indefinitely.
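
As a purely illustrative sketch of the kind of systematic behavior described above, the snippet below pairs a simple retention schedule with a check that flags records held past their limit. The categories and periods are hypothetical and would need to reflect an organization’s own legal, regulatory and business requirements.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention schedule: record category -> maximum retention period.
RETENTION_SCHEDULE = {
    "customer_transaction": timedelta(days=7 * 365),
    "marketing_contact": timedelta(days=2 * 365),
    "location_event": timedelta(days=180),
}

def is_past_retention(category: str, created_at: datetime) -> bool:
    """Return True if a record of this category should already have been deleted.

    created_at must be timezone-aware (UTC).
    """
    limit = RETENTION_SCHEDULE.get(category)
    if limit is None:
        return False  # unknown categories should be triaged, not silently kept
    return datetime.now(timezone.utc) - created_at > limit

def records_to_purge(records):
    """records: iterable of (record_id, category, created_at) tuples."""
    return [rid for rid, category, created_at in records
            if is_past_retention(category, created_at)]
```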

Thailand – The Regulation with respect to Cross-border Transfer of Personal Data
https://www.lexblog.com/2024/01/16/thailand-the-regulation-with-respect-to-cross-border-transfer-of-personal-data/
Wed, 17 Jan 2024 02:10:09 +0000

On 25 December 2023, the Personal Data Protection Committee (PDPC) published two notifications detailing regulations for cross-border transfers of personal data under Sections 28 and 29 (Notifications) of the Personal Data Protection Act B.E. 2562 (2019) (PDPA). These Notifications are the Adequacy Country Notification and the Appropriate Safeguard Notification, respectively.

Key information

In summary, the Adequacy Country Notification establishes guidelines for determining whether a destination country or international organisation (Destinations) meets the standards for receiving personal data transfers. This assessment contains two crucial factors: (1) the alignment of the Destination’s legal safeguards with the PDPA (particularly regarding security measures, data subject rights, and legal remedies) and (2) the existence of a competent and independent regulatory body to enforce its data protection laws. Additionally, the PDPC has the power to establish a list of approved Destinations and retains the authority to make case-by-case decisions for other Destinations as applicable.

On the other hand, the Appropriate Safeguard Notification permits the transfer of personal data to Destinations without having to comply with the Adequacy Country Notification, provided that the transferee is affiliated with the transferor. This can be done by establishing binding corporate rules (BCRs), internal policies safeguarding data transfers within an affiliated business group. However, these BCRs must be approved by the PDPC prior to their implementation.

Additionally, where personal data needs to be transferred to Destinations that do not meet the standards for receiving personal data transfers, the transfer is not covered by BCRs, and no limited exemption applies, the Appropriate Safeguard Notification requires transferors to establish appropriate safeguards, such as Standard Contractual Clauses (SCCs) or certifications, in order to transfer the personal data to such Destinations.

Both the Adequacy Country Notification and the Appropriate Safeguard Notification therefore largely replicate the cross-border transfer concepts of the EU General Data Protection Regulation.

Penalties

Failure to comply with the Notifications could result in an administrative fine of up to THB 5,000,000. In limited circumstances, criminal penalties may also apply, including imprisonment for up to one year and/or a fine not exceeding THB 1,000,000.

Key Timeline

The Notifications will take effect on 24 March 2024.

What is next?

Business operators engaging in cross-border personal data transfers should verify whether their Destinations meet the standards for receiving such transfers by submitting an inquiry to the PDPC. If a Destination does not meet the standards and no limited exemption applies, business operators must promptly establish BCRs for approval by the PDPC or put in place other appropriate safeguards as applicable on a case-by-case basis.

ICYMI – December in privacy and cybersecurity
https://www.lexblog.com/2024/01/04/icymi-december-in-privacy-and-cybersecurity/
Thu, 04 Jan 2024 20:09:33 +0000

December tends to be a busy time for everyone, so you may have missed a privacy update or two.  We have set out some updates in the form of questions, with links in the answers where you can find more information.  (For those making this quiz a competitive event, we have included a tie-breaker/bonus question.)  Answers are below.

1.         As of December 18, 2023, unless the U.S. Attorney General determines that public disclosure would be a substantial threat to national security or public safety, the Securities and Exchange Commission (SEC) requires that public companies must report a cyber incident, typically by filing a publicly available Form 8-K, within:

a.         24 hours of discovery of the incident

b.         24 hours of a determination that the incident is material

c.         4 business days after discovery of the incident

d.         4 business days after determining that the incident is material

2.         If a public company seeks to delay the public report because the company believes a substantial threat to national security or public safety may exist, the company must notify the FBI, which will gather evidence for the U.S. Attorney General’s review and determination.  Upon receipt of the public company’s request, how long does the FBI’s December 6, 2023 memo state that the FBI will take to verify the request and assign an agent in the company’s local FBI office:

a.         “Without unreasonable delay”

b.         Within 2 hours

c.         Within 12 hours

d.         Within 2 business days

3.         On December 9, 2023, the California Privacy Protection Agency (CPPA) met to discuss, among many other topics, three proposed draft regulations that were required by the California Privacy Rights Act (CPRA) amendment to the California Consumer Privacy Act (CCPA):  Automated Decision-Making Technology, Cybersecurity Audits, and Risk Assessments.  One of the three regulations advanced, and the other two were sent back for additional work.  Which one advanced so that it can be proposed at the next CPPA board meeting for a vote to proceed to formal rulemaking?

a.         Automated Decision-Making Technology

b.         Cybersecurity Audits

c.         Risk Assessments

4.         The Illinois Biometric Information Privacy Act (BIPA) also generated headlines in early December with an Illinois Supreme Court ruling on whether BIPA’s HIPAA exception for “information collected, used, or stored for health care treatment, payment or operations under HIPAA” applied to biometric information of health care workers (not patients) whose fingerprints were scanned in order for the workers to access materials and medications for patient health care treatment and operations.  How did Illinois’ highest court rule:

a.         The exception applied to the workers’ data because the exception applies to all health care workers’ biometric data.

b.         The exception applied to the workers’ data because the biometric data was “collected, used or stored for health care treatment, payment or operations under HIPAA” only, and the source of the data was not relevant.

c.         The exception did not apply because, if the Illinois legislature had intended to exempt all health care workers from BIPA, it could have done so [affirming the appellate court].

d.         The exception did not apply because BIPA’s goal of protecting the secrecy interest of an individual in his/her biometric data is furthered by a narrow reading of the exception.

5.         Headlines in 2023 also had many references to artificial intelligence.  Does the national defense bill (National Defense Authorization Act for Fiscal Year 2024, signed on December 22, 2023) require the Department of Defense to develop a “bug bounty” program for certain large artificial intelligence models being integrated into the missions and operations of the Department of Defense?

a.         Yes

b.         No

6.         The national defense bill’s “Generative AI Detection and Watermark Competition” permits the Secretary of Defense to open the competition to which of the following six types of entities:

1.         Federally funded research and development centers

2.         Entities within the private sector

3.         Entities within the defense industrial base

4.         Institutions of higher education

5.         Federal departments and agencies

6.         Any other categories of participants as the Secretary of Defense considers appropriate

a.         1, 3, and 6 only

b.         1, 5, and 6 only

c.         1, 3, 5, and 6 only

d.         All six types

7.         The Federal Trade Commission (FTC) also addressed artificial intelligence and machine learning.  In its December 20, 2023 Notice of Proposed Rulemaking relating to amendments to the Children’s Online Privacy Protection Act (COPPA) regulations, the FTC has proposed which of the following:

a.         Prohibiting any artificial intelligence algorithm from interacting with a child

b.         Prohibiting website/app/online service operators from using COPPA’s “internal operations” exception to disclose or use personal information in connection with machine learning processes that encourage or prompt use of the website or online service

c.         Permitting website/app/online service operators to use artificial intelligence and machine learning to determine whether the consent was likely provided by a child or an adult

d.         Permitting website/app/online service operators to use COPPA’s “internal operations” exception to disclose or use personal information in connection with machine learning processes

8.         The FTC also proposed amending the COPPA regulations to require that website/app/online service operators establish and maintain a data retention policy that:

a.         Is written

b.         Specifies the business need for retaining a child’s personal information

c.         Specifies the timeframe for deleting the personal information

d.         Precludes indefinite retention

e.         a through c only

f.          All of the above

g.         None of the above—the FTC did not address retention

9.         With respect to comprehensive state privacy laws, on December 31, 2023, the number of U.S. states with such laws in effect increased to five (adding to California, Colorado, Connecticut, and Virginia). Which state’s law went into effect on that date?

a.         Illinois

b.         Massachusetts

c.         New York

d.         Utah

e.         Washington

10.       Three more states are scheduled to have comprehensive privacy laws take effect in 2024:  Montana, Oregon, and Texas.

a.         Which one does NOT require a minimum number of residents’ personal data for the “controller” to be in-scope for the law?

b.         Which one makes most non-profits subject to its provisions?

c.         Which one does NOT take effect on July 1, 2024?

Tie-breaker/Bonus question:

Which two states have comprehensive privacy laws scheduled to go into effect on January 1, 2025?

a.         Delaware and Iowa

b.         Indiana and Michigan

c.         New Mexico and South Carolina

d.         Tennessee and Wisconsin

Answers:

1.d. (4 business days after determining that the incident is material).  See SEC, Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure, 88 Fed. Reg. 51896 (Aug. 4, 2023), available at https://www.sec.gov/files/rules/final/2023/33-11216.pdf

2.b (within 2 hours).  FBI Policy Notice 1297N, Cyber Victim Requests to Delay Securities and Exchange Commission Public Disclosure (Dec. 6, 2023), available at https://www.fbi.gov/file-repository/fbi-policy-notice-120623.pdf/view

3.b (cybersecurity audits).  The draft regulation and webcast of the meeting are available at the CPPA’s website:  https://cppa.ca.gov/meetings/materials/20231208.html

4.b. (The exception applied to the workers’ data because the biometric data was “collected, used or stored for health care treatment, payment or operations under HIPAA” and the source of the data was not relevant.).  The court was careful to note that it was “not construing the language at issue as a broad, categorical exclusion of biometric identifiers taken from health care workers.” ¶ 57 of Mosby v. The Ingalls Mem. Hosp., 2023 IL 129081 (Nov. 30, 2023), available at https://ilcourtsaudio.blob.core.windows.net/antilles-resources/resources/aa521aa9-5cf0-417c-a388-85bfec69625d/Mosby%20v.%20Ingalls%20Memorial%20Hospital,%202023%20IL%20129081.pdf

5.a. (yes).  Under Section 1542 of HR 2670 (National Defense Authorization Act for Fiscal Year 2024) (https://www.govtrack.us/congress/bills/118/hr2670/text ), the DoD has 180 days from enactment, subject to availability of appropriations.

6.d (all six types).  Section 1543(b) of HR 2670.

7.b (Prohibiting website/app/online service operators from using the “internal operations” exception to disclose or use personal information in connection with machine learning processes that encourage or prompt use of the website or online service).  Federal Trade Commission, Notice of Proposed Rulemaking, Children’s Online Privacy Protection Rule, at 44, available at https://www.ftc.gov/system/files/ftc_gov/pdf/p195404_coppa_reg_review.pdf

8.f.  (all of the above).  Proposed § 312.10, see NPRM at 160.

9.d. (Utah).  The Utah Consumer Privacy Act, available at https://le.utah.gov/~2022/bills/static/SB0227.html

10.a. Texas.  The Texas Data Privacy and Security Act (HB4) can be found here:  https://capitol.texas.gov/tlodocs/88R/billtext/pdf/HB00004F.pdf#navpanes=0   See our summary here:  https://www.dataprotectionreport.com/2023/06/texas-enacts-comprehensive-privacy-law/

10.b Oregon. The Oregon Consumer Privacy Act (SB 619) can be found here:  https://olis.oregonlegislature.gov/liz/2023R1/Downloads/MeasureDocument/SB619/Enrolled   Section 2(2)(r) exempts “A nonprofit organization that is established to detect and prevent fraudulent acts in connection with insurance,” and subsection (2)(s)(C) exempts the non-commercial activities of “A nonprofit organization that provides programming to radio or television networks.”

10.c. Montana.  The Consumer Data Privacy Act (SB 384), which goes into effect on October 1, 2024 (see § 14), can be found here:  https://leg.mt.gov/bills/2023/billpdf/SB0384.pdf .

Tie-breaker/bonus question:  a.  Delaware and Iowa
