Inside Privacy Archives - LexBlog https://www.lexblog.com/site/inside-privacy/ Legal news and opinions that matter

Colorado Privacy Act Amended To Include Biometric Data Provisions https://www.lexblog.com/2024/05/31/colorado-privacy-act-amended-to-include-biometric-data-provisions/ Fri, 31 May 2024

On May 31, 2024, Colorado Governor Jared Polis signed HB 1130 into law. This legislation amends the Colorado Privacy Act to add specific requirements for the processing of an individual’s biometric data. This law does not have a private right of action.

Like the Illinois Biometric Information Privacy Act (BIPA), this law requires controllers to provide notice and obtain consent prior to the collection or processing of a biometric identifier. The law also prohibits controllers from selling or disclosing biometric identifiers unless the consumer consents or unless disclosure is necessary to fulfill the purpose of collection, to complete a financial transaction, or is required by law.

The law contains several novel requirements. For instance, it prevents a controller from purchasing a biometric identifier unless: (a) they pay the consumer, (b) they obtain the consumer’s consent, and (c) the purchase is unrelated to the provision of a product or service to the consumer. Additionally, it requires companies meeting certain thresholds to disclose detailed information about their biometric data collection and use upon consumer request, including the source from which the controller accessed the data and the purpose for which it was processed.

The law also sets forth retention requirements that differ from those of BIPA. Specifically, controllers processing biometric data must adopt written guidelines that require the permanent destruction of a biometric identifier by the earliest of: (a) the date upon which the initial purpose for collecting the biometric identifier has been satisfied; (b) 24 months after the consumer last interacted with the controller; or (c) the earliest reasonably feasible date. The earliest reasonably feasible date must be no more than 45 days after a controller determines that storing the biometric identifier is no longer necessary or relevant to the express processing purpose, as identified by an annual review. The controller may extend the 45-day period by up to 45 additional days if necessary given the complexity and amount of biometric identifiers to be deleted. The written policy must also establish a retention schedule for biometric identifiers and include a protocol for responding to a breach of security involving biometric data. Note that the controller need not publish policies applying only to current employees or internal protocols for responding to security incidents.
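
To make the interplay of these deadlines concrete, here is a minimal Python sketch of the destruction-deadline logic described above. It is an illustration under stated assumptions: the function name, its inputs, and the approximation of 24 months as 730 days are ours, not the statute's.

```python
from datetime import date, timedelta
from typing import Optional

def destruction_deadline(purpose_satisfied: Optional[date],
                         last_interaction: date,
                         storage_unnecessary_determined: date,
                         extension_days: int = 0) -> date:
    """Illustrative only: the date by which a biometric identifier must be destroyed."""
    if not 0 <= extension_days <= 45:
        raise ValueError("the 45-day window may be extended by at most 45 days")
    # (c) earliest reasonably feasible date: no more than 45 days after the
    # controller determines storage is no longer necessary, plus any extension.
    feasible = storage_unnecessary_determined + timedelta(days=45 + extension_days)
    # (b) 24 months after the consumer last interacted with the controller
    # (approximated here as 730 days).
    two_years_out = last_interaction + timedelta(days=730)
    candidates = [feasible, two_years_out]
    if purpose_satisfied is not None:
        candidates.append(purpose_satisfied)  # (a) initial purpose satisfied
    return min(candidates)  # destroy by the earliest applicable date

# Example: purpose satisfied 2025-03-01, last interaction 2024-06-15,
# necessity determination 2025-02-01, no extension -> 2025-03-01
print(destruction_deadline(date(2025, 3, 1), date(2024, 6, 15), date(2025, 2, 1)))
```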

Lastly, the law contains guidance on the use of biometric systems by employers. It specifies that employers may collect biometric identifiers as a condition of employment, but only to: permit access to secure physical locations or hardware (and not to track a current employee’s location or how much time they spend using an application); record the start and end of a workday; and improve workplace and public safety. The collection of biometric identifiers from employees for other reasons may not be a condition of employment and may occur only with consent. The law contains a broad statement that employers may still collect and process employees’ biometric identifiers for uses aligned with the employee’s reasonable expectations based on the role.

Council of Europe Adopts International Treaty on Artificial Intelligence https://www.lexblog.com/2024/05/31/council-of-europe-adopts-international-treaty-on-artificial-intelligence/ Fri, 31 May 2024

On May 17, 2024, the Council of Europe* adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the “Convention”).  The Convention represents the first international treaty on AI that will be legally binding on the signatories.  The Convention will be open for signature on September 5, 2024.

The Convention was drafted by representatives from the 46 Council of Europe member states, the European Union and 11 non-member states (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America, and Uruguay).  The Convention is not directly applicable to businesses – it requires the signatories (the “CoE signatories”) to implement laws or other legal measures to give it effect.  The Convention represents an international consensus on the key aspects of AI legislation that are likely to emerge among the CoE signatories.

The Convention bears many similarities with the EU AI Act. (For more information on the EU AI Act, please refer to our latest blog posts here, here and here.) Some of these similarities are as follows:

  • The Convention covers the use of AI systems in both the public and private sectors, with exceptions for AI systems related to the protection of national security interests and research and development activities.
  • The definition of “AI system” is similar to that in the EU AI Act, which is based on the OECD’s definition of AI.
  • The Convention calls for the signatories to adopt or maintain measures to ensure, among other things: transparency and oversight, accountability and responsibility, equality and non-discrimination, privacy and data protection, and reliability – themes which are also central to the EU AI Act.
  • Like the EU AI Act, the Convention calls for the Parties to adopt measures that require entities to implement risk and impact management frameworks for AI systems.

The Convention gives the CoE signatories the discretion to either adopt or maintain legislative, administrative or other measures to give effect to the provisions in the Convention. The Convention provides that such measures may be graduated and differentiated in view of the severity and probability of the occurrence of adverse impacts on human rights, democracy and the rule of law throughout the lifecycle of AI systems – in other words, that a risk-based approach may be taken. The Convention also gives the CoE signatories leeway to implement the framework for remedies and oversight mechanisms as appropriate to the jurisdiction. It also calls on the CoE signatories to strengthen cooperation efforts and to exchange relevant and useful information between themselves.

* The Council of Europe is an international organization that is distinct from the European Union.  Founded in 1949, the Council of Europe has a mandate to promote and safeguard the human rights enshrined in the European Convention on Human Rights. The organization brings together 46 member states, including all 27 EU member states.

This blog post was written with the contributions of Diane Valat.

Italy Proposes New Artificial Intelligence Law https://www.lexblog.com/2024/05/30/italy-proposes-new-artificial-intelligence-law/ Thu, 30 May 2024

On May 20, 2024, a proposal for a law on artificial intelligence (“AI”) was laid before the Italian Senate.

The proposed law sets out (1) general principles for the development and use of AI systems and models; (2) sectorial provisions, particularly in the healthcare sector and for scientific research for healthcare; (3) rules on the national strategy on AI and governance, including designating the national competent authorities in accordance with the EU AI Act; and (4) amendments to copyright law. 

We provide below an overview of the proposal’s key provisions.

Objectives and General Principles

The proposed law aims to promote a “fair, transparent and responsible” use of AI, following a human-centered approach, and to monitor potential economic and social risks, as well as risks to fundamental rights.  The law will sit alongside and complement the EU AI Act (for more information on the EU AI Act, see our blogpost here).  (Article 1)

The proposed law sets out general principles, based on the principles developed by the Commission’s High-level expert group on artificial intelligence, pursuing three broad objectives:

  1. Fair algorithmic processing. Research, testing, development, implementation and application of AI systems must respect individuals’ fundamental rights and freedoms, and the principles of transparency, proportionality, security, protection of personal data and confidentiality, accuracy, non-discrimination, gender equality and inclusion.
  2. Protection of data. The development of AI systems and models must be based on data and processes that are proportionate to the sectors in which they’re intended to be used, and ensure that data is accurate, reliable, secure, qualitative, appropriate and transparent.  Cybersecurity throughout the systems’ lifecycle must be ensured and specific security measures adopted.
  3. Digital sustainability. The development and implementation of AI systems and models must ensure human autonomy and decision-making, prevention of harm, transparency and explainability.  (Article 3)

Definitions

The definitions relied upon by the proposed law, such as “AI system” and “[general-purpose] AI model” are the same as those of the EU AI Act, while the definition of the term “data” is based on the Data Governance Act.  (Article 2)

Processing of Personal Data Related to the Use of AI Systems

Information and disclosures relating to the processing of data must be drafted in clear and plain language, to ensure full transparency and the ability to object to unfair processing activities.

Minors aged 14 or older can provide their consent to the processing of personal data related to the use of AI systems, provided that the relevant information and disclosures are easily accessible and comprehensible.  Access to AI by minors below the age of 14 requires parental consent.  (Article 4)

Use of AI in the Healthcare Sector

As a general objective, the proposed law sets out that AI systems should contribute to the improvement of the healthcare system, prevention and treatment of diseases, while respecting the rights, freedoms and interests of individuals, including their data protection rights. 

The use of AI systems in the healthcare system must not select, nor influence, access to medical services on a discriminatory basis.  Individuals have a right to be informed about the use of AI and its advantages relating to diagnosis and therapy, and to obtain information about the logic involved in decision-making.

Such AI systems are intended to support processes of prevention, diagnosis, treatment and therapeutic choice.  Decision-making must remain within the healthcare professional’s purview.  (Article 7)

Scientific Research to Develop AI Systems for the Healthcare Sector

The proposed law aims to simplify data protection-related obligations for scientific research carried out by public and private not-for-profit entities that process personal data, including health data, for scientific research purposes to develop AI systems for the prevention, diagnosis and treatment of diseases, the development of medicines, therapies and rehabilitation technologies, and the manufacturing of medical devices.  (Article 7)

In particular, the proposed law:

  • Lifts the requirement to obtain the data subject’s consent, by identifying the purposes mentioned above as “substantial public interests”, in accordance with Article 9(2)(g) of GDPR.  This does not apply to business and for-profit activities.
  • Authorizes secondary use of personal data, including special categories of data, stripped of direct identifiers, for processing for the mentioned “substantial public interests”.  As a result, a new consent is no longer required if the research changes.

In such cases, the following requirements apply:

  • Transparency and information obligations towards data subjects may be met in a simplified form, for instance, by publishing a privacy notice on the data controller’s website.
  • The processing activities must be (1) approved by the competent ethics committee and (2) communicated to the Italian data protection authority (“Garante”); in addition, (3) certain information must be shared with the Garante, including a data protection impact assessment and an indication of any processors.  Processing may start 30 days after such communication if the Garante does not issue a blocking measure.  (Article 8)

These provisions align with a recent amendment of the Italian Privacy Code concerning processing for medical research purposes (see our blogpost here).

Other Sectorial Provisions

The use of AI systems in the employment context must be safe, reliable, transparent, and must respect human dignity and the protection of personal data.  The employer must inform the worker of the use of any AI, together with other information to be provided before the employment commences.  (Article 10)

In the context of regulated professions, AI may only be used for supporting tasks.  To preserve the fiduciary relationship with the client, information about any AI systems used by the practitioner must be communicated in a clear, plain and comprehensive manner.  (Article 12)

National Strategy on AI

The proposed law introduces a national strategy on AI, to be updated every two years, intended to frame a public-private partnership, coordinate the activities of public bodies, and set measures and economic incentives to promote business and industrial development in the field of AI.  (Article 17)

Governance

The proposed law designates two competent national authorities for AI, as required by the EU AI Act, with competence to apply and enforce national and EU law on AI, as follows:

  • Agenzia per l’Italia digitale (“AgID”, the agency for “digital Italy”).  AgID will be responsible for (1) promoting innovation and development of AI, and (2) setting procedures and exercising functions relating to the notification, evaluation, accreditation and monitoring of the notified bodies tasked with conducting conformity assessments of AI systems pursuant to the EU AI Act.
  • Agenzia per la cybersicurezza nazionale (“ACN”, the agency for national cybersecurity).  ACN will be (1) entrusted with monitoring, inspection and enforcement powers over AI systems, in accordance with the rules of the EU AI Act, and (2) responsible for promoting and developing AI from a cybersecurity perspective.

The Garante, although not designated as a competent authority for AI, maintains its competence and powers in relation to the processing of personal data.  (Article 18)

The Italian Government is also delegated to adopt, within 12 months from the entry into force of the law, the legislation needed to align national law with the EU AI Act.  (Article 22)

Labelling of AI-generated News and Information

The proposed law requires that any news or informational content that is entirely generated by AI, or partially modified or altered by AI in a way that presents fictional data, facts or information as real, be labelled with an “AI” mark, label or announcement.  (Article 23)

Copyright Protection and AI-generated Works

The proposed law introduces certain amendments to copyright law.  In particular, with regard to AI-generated works, it clarifies that only works of the human intellect are protected by copyright, including where the work was created with the support of AI tools, to the extent it is the result of the author’s intellectual endeavor.  (Article 24)

Criminal Provisions

Among other things, the proposed law establishes a new offence targeting the unauthorized dissemination of images, video or audio falsified or altered by AI, where the content is capable of misleading as to its authenticity.  The new offence carries a penalty of one to three years of imprisonment.  (Article 25)

Next Steps

As part of the legislative process, the proposed law will need to be reviewed, discussed and approved by the Senate, and will then be passed on to the Chamber of Deputies, which must approve the same text.  Once formally approved, the law will enter into force on the 15th day after its publication in the Italian Official Journal.

***

Covington’s Data Privacy and Cybersecurity Team continues to monitor developments on AI, and regularly advises clients on their most challenging regulatory and compliance issues in the EU and other major markets.  If you have questions about the proposed Italian law on AI or the EU AI Act, we are happy to assist with any queries.

CNIL Opens Public Consultation on Its Standards for Processing Health Data https://www.lexblog.com/2024/05/29/cnil-opens-public-consultation-on-its-standards-for-processing-health-data/ Wed, 29 May 2024

On May 16, 2024, the French Supervisory Authority (“CNIL”) launched a public consultation on all of its health data standards.  Interested stakeholders are encouraged to participate by completing a questionnaire (available in French here) by July 12, 2024.

French law has specific requirements for the processing of health data.  In particular, it generally requires that the processing either comply with one of the CNIL’s standards (such as the méthodologies de référence or “MRs” – hereafter “Health Data Standards”) or be specifically authorized by the CNIL.

Since 2018, the CNIL has issued multiple Health Data Standards to cover a variety of processing activities, such as medical research and pharmacovigilance.  However, as technologies deployed in the health sector rapidly evolve, some of these standards have become outdated and fail to adequately meet industry practices and needs.  For instance, conducting a decentralized clinical trial is typically challenging under the current Health Data Standards, meaning that sponsors are often forced to pursue the more burdensome and time-consuming CNIL authorization.

The consultation questionnaire released by the CNIL is divided into five sections:

  • the Health Data Standards covering research activities;
  • the other Health Data Standards (e.g., on pharmacovigilance);
  • adaptations required because of the increasing use of AI;
  • specific documentation the CNIL could provide; and
  • participation in upcoming working groups – the CNIL encourages participants to identify any topics they consider high priorities, in particular because the CNIL is considering setting up working groups on those topics.

The CNIL also used this opportunity to summarize its recommendations and best practices relating to three aspects of decentralized clinical trials.  These guidelines cover:

  • Electronic information notices (see here) – The CNIL highlights the importance of ensuring that the confidentiality of the data is sufficiently protected and identifies some security measures to that end.  For instance, where the notice contains direct or indirect health information about the individual, the CNIL considers that it may only be sent to a regular email address (as opposed to via a secure platform) provided that (i) the subject and text of the email do not include any sensitive data, (ii) the notice itself is shared as an encrypted attachment or via a password-protected link and (iii) the relevant encryption key or password is shared separately and via different means (e.g., by post);
  • Following up with and monitoring patients at home (see here) – The CNIL reminds sponsors how they can make such arrangements while still complying with the Health Data Standards (in particular where the sponsor relies on a third party);
  • Remote quality control (see here) – Sponsors who wish to engage in remote quality control currently cannot do so while relying on a Health Data Standard and need to obtain a specific authorization from the CNIL. However, the CNIL has compiled a list of best practices that, if complied with, would facilitate the authorization process.  Such best practices include transparency requirements, the consultation of the data protection officer, precautions concerning remote consultation and the professional secrecy of clinical research associates, and a list of security measures (including a requirement that the data be stored in the EU or an EU-adequate country).

These guidelines are only temporary, as the CNIL intends to better address these issues in the updated version of its Health Data Standards.  The consultation questionnaire thus also enables participants to comment on these guidelines.  In terms of timeline, the CNIL will analyze responses to this public consultation during Summer and Fall 2024.  Some updated Health Data Standards are expected in the course of 2025, starting with the ones identified as high priorities during the consultation. 

Italian Legislator and Regulator Update Rules on Processing of Health Data for Medical Research https://www.lexblog.com/2024/05/24/italian-legislator-and-regulator-update-rules-on-processing-of-health-data-for-medical-research/ Fri, 24 May 2024

On May 9, 2024, the Italian data protection authority (“Garante”) published a decision identifying the safeguards that controllers must put in place when processing health data for medical research purposes, in cases where data subjects’ consent cannot be obtained for ethical or organizational reasons.

The Garante’s decision follows a recent legislative development, enacted by Law n. 56 of April 29, 2024, and effective as of May 1, 2024, which amended, among other things, Article 110 of the Italian Privacy Code.  The amendment removes the obligation to submit a research program and related data protection impact assessment (“DPIA”) to the Garante for prior consultation, in cases where it is impossible or disproportionately burdensome to contact the concerned individuals.

We provide below an overview of the legal framework and the safeguards identified by the Garante.

Article 110 of the Italian Privacy Code

Article 110 of the Italian Privacy Code sets out two exceptions to the general rule of consent as the legal basis for processing health data for the purposes of medical, biomedical and epidemiological research. In particular, consent is not required when:

  1. the research is conducted on the basis of a law, regulation or EU law, in accordance with Article 9(2)(j) of GDPR.  In these cases, the controller must conduct and publish a DPIA; or
  2. due to particular reasons, informing data subjects proves impossible, or entails a disproportionate effort, or it risks rendering impossible or seriously impairing the achievement of the research’s objectives.  In these cases, the controller must (i) adopt appropriate measures to protect the rights, freedoms and legitimate interests of the data subject, (ii) obtain a reasoned positive opinion on the research program from the competent local ethics committee, and (iii) comply with the safeguards identified by the Garante.  

Prior consultation with the Garante was previously required in the scenarios described in point 2) above.  The Italian legislator has now removed this procedural step.

The Garante’s Safeguards

Following this development, the Garante issued the abovementioned safeguards, which apply in the context of processing of health data for medical research purposes in cases where the concerned subjects are (i) deceased, or (ii) unreachable due to ethical or organizational reasons.

The Garante defines these two categories of reasons as follows:

  • “ethical” reasons relate to a situation where the individual is unaware of their condition, such that providing a privacy notice to them would entail disclosing news about the study which could cause material or psychological damage to them;
    • “organizational impossibility” reasons relate to a situation where not collecting the data of unreachable individuals, considering the total number of subjects to be enrolled in the study, would have significant consequences for the quality of the study’s results, taking into account the inclusion criteria, the modalities of enrolment, the statistical size of the sample, and the period of time that has passed since the original data collection.  This includes situations where:
    • contacting individuals would entail a disproportionate effort in view of the high number of subjects in the cohort – which should be considered only in exceptional cases; and
    • after undertaking every reasonable effort to contact individuals, they appear to be deceased or unreachable at the time of inclusion in the study.  The Garante clarifies that this process includes verifying whether the concerned individuals are alive, consulting the details provided in clinical documentation, using telephone contact details where provided, and collecting publicly available contact details.

In these cases, the Garante requires controllers to adopt certain safeguards.  In addition to the measures illustrated in points 2(i)-(ii) above, the controller must:

  • carefully explain and document, in the research project, the existence of ethical or organizational reasons, as described above;
  • where applicable, also document the reasonable efforts made to attempt to contact the concerned individuals; and
  • conduct and publish a DPIA, and communicate it to the Garante.

New Ethics Rules for Processing for Scientific Research Purposes

Finally, in its decision, the Garante also launched the process for the adoption of new ethics rules in the context of the processing of personal data for statistical and scientific research purposes, which will complement the safeguards outlined above. 

***

Covington’s Data Privacy and Cybersecurity Team regularly advises clients in the health and scientific research space, including on the privacy aspects of clinical trials.  Our team is happy to assist with any inquiries.

SEC Adopts Amendments to Regulation S-P https://www.lexblog.com/2024/05/22/sec-adopts-amendments-to-regulation-s-p/ Wed, 22 May 2024

On May 16, the U.S. Securities and Exchange Commission (“SEC”) adopted amendments to Regulation S-P, which implements the Gramm-Leach-Bliley Act (“GLBA”) for SEC-regulated entities such as broker-dealers, investment companies, registered investment advisers, and transfer agents.

Among other requirements, the amendments require SEC-regulated entities to adopt written policies and procedures for an incident response program that is “reasonably designed to detect, respond to, and recover from unauthorized access to or use of customer information.”  Under the required incident response program, SEC-regulated entities must provide timely notification to individuals whose sensitive customer information was, or is reasonably likely to have been, accessed or used without authorization.  Other provisions address record keeping, annual privacy notices, and oversight of service providers, as well as expanding the scope of financial institutions and “customer information” covered by the rule.

The SEC had previously issued a proposed rule for comment in the Federal Register in April 2023.  Industry representatives raised a number of concerns with the rule, including conflicts between the proposed rule and state data breach laws and a lack of consistency with the safeguarding standards promulgated by other federal prudential regulators.  Despite these concerns, the final rule is substantially as proposed and reflects only minor revisions.  For example, the following changes have been made to the notification provisions of the final rule:

  • Clarification that the requirement does not apply in cases where an SEC-regulated entity reasonably determines that a specific individual’s sensitive customer information was not accessed or used without authorization.
  • Broadening the scope and timing requirements of the so-called “law enforcement exception” to allow delays in providing notifications where the Attorney General determines that notice would pose a substantial risk to public safety, in addition to national security.
  • No longer requiring that notifications include “what has been done to protect the sensitive customer information from further unauthorized access or use” given the risk that this information could advantage threat actors.

The final rule will become effective 60 days after publication in the Federal Register.

Maryland Enacts Age-Appropriate Design Code https://www.lexblog.com/2024/05/22/maryland-enacts-age-appropriate-design-code/ Wed, 22 May 2024

On May 9, 2024, Maryland Governor Wes Moore signed the Maryland Age-Appropriate Design Code Act (“AADC”) into law.  The AADC will go into force on October 1, 2024.  This post summarizes the law’s key provisions.

  • Covered businesses:  The AADC covers for-profit entities doing business in Maryland (1) with at least $25 million in gross revenues; (2) when the business derives at least 50% of its revenue from the sale of consumer personal data; or (3) when the business buys, receives, sells, or shares the personal data of at least 50,000 Maryland residents.
  • Covered products:  Similar to California’s AADC, the Maryland AADC applies to online products “reasonably likely to be accessed by children.”  The statute provides several different tests to meet this standard: when the online product is directed to children under COPPA, when the product is routinely accessed by a significant number of children (or is substantially similar to such a product), when the product markets to children, when the business’ internal research documents that a significant amount of the product’s audience is children, or when the business knows or should have known the user is a child.
  • Duty of care:  The AADC imposes a “best interests of children” duty of care when designing, developing, and providing products reasonably likely to be accessed by children.  Covered businesses must process children’s data consistent with this duty.  The “best interests” standard has two parts: First, product design or use of the child’s data must not benefit the company to the detriment of the child.  Second, product design or use of the child’s data must not produce reasonably foreseeable physical or financial harm, severe emotional harm, a highly offensive intrusion on the child’s privacy, or discrimination based on a protected characteristic like race, religion, disability, gender identity, or sexual orientation.
  • Data Protection Impact Assessment (“DPIA”) requirements:  Like California’s AADC, the Maryland AADC requires a covered business to complete a DPIA for each online service, product, or feature reasonably likely to be accessed by children.  The business must update the DPIA within 90 days of making material changes to data processing pertaining to the covered product.  The DPIA must determine whether the product is designed with the best interests of children in mind.  To make this determination, the DPIA should consider the following factors: whether children could experience harmful contacts, harmful conduct, exploitative contracts, addictive features, harmful data collection or processing practices, harmful experiments in the product, harmful algorithms, and any other factor indicating that product design is inconsistent with the best interests of children.
  • Default settings:  The AADC requires all privacy settings provided to children to default to a “high level of privacy” unless the business can show a compelling reason for another default.
  • Geolocation data:  The AADC bars processing of children’s precise geolocation data by default, unless the precise geodata is strictly necessary to provide the product and the business processes the precise geodata for the limited time necessary to provide the product.  In contrast to California’s AADC, the Maryland AADC does not require products to provide a signal to the child when their parent tracks the child’s location.
  • Age gating:  The Maryland AADC does not require covered entities to implement age-gating in their products.  By contrast, California’s AADC mandates age estimation.
  • Enforcement:  The Maryland Division of Consumer Protection in the Office of the Attorney General has exclusive authority to enforce the AADC.  Businesses have 90 days to cure violations after receiving notice from the Division. If not cured, the Maryland AADC applies the same penalties as California’s AADC—up to $2,500 per child per negligent violation and up to $7,500 per child per intentional violation.
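
As a rough illustration of the coverage thresholds and penalty exposure summarized above, the Python sketch below encodes both tests. The class and function names, and the example figures, are hypothetical assumptions for illustration, not statutory language.

```python
from dataclasses import dataclass

@dataclass
class Business:
    gross_revenue_usd: float        # annual gross revenues
    data_sale_revenue_share: float  # fraction of revenue from selling personal data
    md_residents_data: int          # Maryland residents whose personal data is bought, received, sold, or shared

def is_covered(b: Business) -> bool:
    """Covered if any one of the three thresholds is met."""
    return (b.gross_revenue_usd >= 25_000_000
            or b.data_sale_revenue_share >= 0.50
            or b.md_residents_data >= 50_000)

def max_penalty_per_violation(children_affected: int, intentional: bool) -> int:
    """Up to $2,500 per child (negligent) or $7,500 per child (intentional) per violation."""
    return (7_500 if intentional else 2_500) * children_affected

print(is_covered(Business(30_000_000, 0.10, 10_000)))       # True (revenue threshold)
print(max_penalty_per_violation(1_000, intentional=False))  # 2500000
```
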
France Publishes Updated Certification Standard for the Hosting of Health Data https://www.lexblog.com/2024/05/22/france-publishes-updated-certification-standard-for-the-hosting-of-health-data/ Wed, 22 May 2024

The French Public Health Code requires that certain service providers hosting health data hold a specific “HDS” certification.  In order to obtain this certification, providers must comply with the requirements set out in the “HDS” certification standard.  On May 16, 2024, France officially published an updated version of this “HDS” certification standard.

  1. Key Changes

The updated standard includes a few clarifications, for instance on the activities for which hosting providers have to obtain certification (in particular the activity of “administering and operating healthcare systems”), or regarding the contractual obligations of the hosting provider.

It also incorporates changes previously made to the ISO 27001 standard into the HDS certification standard.

Importantly, it features new requirements in terms of sovereignty, in particular:

  • a requirement to restrict the storage of health data to the territory of an EEA member state; and
  • transparency requirements vis-à-vis the hosting provider’s customers in the event of transfers outside the EEA (e.g., in the form of remote access to the data).
  2. Entry into force

As of November 16, 2024, new applicants for HDS certification will be assessed against this new version of the HDS certification standard.

French authorities also highlighted that hosting providers that are already HDS-certified will need to renew their HDS certification according to the updated standard within 24 months, i.e., by May 16, 2026 at the latest.

Alabama Enacts Genetic Privacy Bill https://www.lexblog.com/2024/05/20/alabama-enacts-genetic-privacy-bill/ Mon, 20 May 2024

On May 16, 2024, Alabama enacted a genetic privacy bill (HB 21), which regulates consumer-facing genetic testing companies.  HB 21 continues the recent trend of states enacting genetic privacy legislation aimed at regulating direct-to-consumer (“DTC”) genetic testing companies, such as in Nebraska and Virginia, with more than 10 states now having similar laws on the books.

Scope of HB 21

HB 21 regulates “genetic testing companies’” practices involving “genetic data.”  HB 21 defines a “genetic testing company” as “[a]ny person, other than a health care provider, that directly solicits a biological sample from a consumer for analysis in order to provide products or services to the consumer which include disclosure of information that may include, but is not limited to, the following:

  1. The genetic link of the consumer to certain population groups based on ethnicity, geography, or anthropology;
  2. The probable relationship of the consumer to other individuals based on matching DNA for purposes that include genealogical research; or
  3. Recommendations to the consumer for managing wellness which are based on physical or metabolic traits, lifestyle tendencies, or disease predispositions that are associated with genetic markers present in the consumer’s DNA.”

In turn, “genetic data” is defined as “[a]ny data derived from analysis of a biological sample which concerns a consumer’s genetic characteristics and which may include, but is not limited to, any of the following formats or sources:

  1. Raw data that results from sequencing all or a portion of a consumer’s extracted DNA;
  2. Genotypic and phenotypic information obtained from analyzing a consumer’s raw sequence data; or
  3. Health information self-reported by the consumer to a genetic testing company to be used by the company in connection with analyzing the consumer’s raw sequence data or for product development or scientific research.”

Obligations under HB 21

HB 21 imposes several requirements on an entity that falls within the meaning of a “genetic testing company,” many of which are similar to obligations imposed by other DTC genetic testing laws.  For example, HB 21 (i) requires genetic testing companies to provide notice to consumers regarding the company’s privacy practices and collection, use, and disclosure of genetic data (including the disclosure of de-identified genetic data to third parties for research), (ii) allows consumers to access and delete their genetic data, and (iii) provides consumers with the ability to revoke consent for the storage of their biological sample or other consent previously provided under the law.

HB 21 requires a genetic testing company to obtain a consumer’s express consent for the collection, use, and disclosure of the consumer’s genetic data and enumerates specific elements that this express consent must contain, including identifying who may have access to the consumer’s sample and data and obtaining permission to retain the biological sample and genetic data for future testing.  HB 21 also requires express consent “every time the company” (i) transfers the biological sample or genetic data to a third party for a reason other than the provision of the product or service ordered, (ii) uses the biological sample or genetic data for a purpose other than the ordered product or service, or (iii) markets to a consumer based on the consumer’s genetic data.

While HB 21 contains an exemption for research carried out by certain entities, discussed below, HB 21 requires that genetic testing companies obtain informed consent in compliance with 45 C.F.R. part 46 (the federal Common Rule) for transfers of the consumer’s biological samples or genetic data for (i) independent research by a third party or (ii) for research sponsored by the genetic testing company for the purpose of product or service research and development, scientific publication, or promotion of the company.

Exemptions

HB 21 contains four key exemptions.  First, by definition, “genetic data” does not include de-identified data, which must meet one of two specific standards to be considered de-identified.  One of these standards is the de-identification standard in the Health Insurance Portability and Accountability Act, as amended, and its implementing regulations (“HIPAA”).

Second, HB 21 exempts covered entities and business associates under HIPAA. 

Third, HB 21 contains an exemption for certain research activities, specifically “the collection, use, or retention of biological samples or genetic data for noncommercial purposes, including for research and instruction, by a public or private institution of higher learning or any entity owned or operated by a public or private institution of higher learning.”  The scope of this research exemption is slightly different than that in several other states’ DTC genetic privacy laws, such as Virginia’s, which generally exempt research conducted in accordance with human subject research frameworks.

Finally, HB 21 does not apply to “biological samples or genetic data lawfully obtained by law enforcement pursuant to a criminal investigation.”

Enforcement and Effective Date

HB 21 will go into effect on October 1, 2024, and will be enforced by the Consumer Division of the Office of the Attorney General.  Once the law is in effect, consumers will be able to report violations of HB 21 to that office; HB 21 does not contain a private right of action.  A violation of HB 21 could result in a civil penalty of up to $3,000 for each violation.

FTC Announces Health Privacy Enforcement Action Against Telehealth Company, Cerebral https://www.lexblog.com/2024/05/20/ftc-announces-health-privacy-enforcement-action-against-telehealth-company-cerebral/ Mon, 20 May 2024

Last month, the Federal Trade Commission (“FTC”) announced its enforcement action against telehealth firm Cerebral, Inc. (“Cerebral”) for its alleged unauthorized disclosures of consumers’ sensitive personal health information and other sensitive data to third parties for advertising purposes in violation of the FTC Act.  The complaint also alleges that Cerebral violated the Opioid Addiction Recovery Fraud Prevention Act (“OARFPA”) and the Restore Online Shoppers’ Confidence Act (“ROSCA”), which permits the court to order permanent injunctive relief, civil penalties, and other monetary relief for actions in violation of specific sections of the FTC Act, the OARFPA, and the ROSCA.  According to the proposed order, Cerebral must pay more than $7 million in civil penalties and consumer refunds.  In addition, Cerebral will be banned from using or disclosing consumers’ personal and health information (including online identifiers, such as IP addresses or other persistent identifiers) for advertising and must obtain consumers’ affirmative express consent before disclosing such information to outside parties.

Below is a discussion of the complaint and proposed order.

Complaint

Cerebral is a telehealth platform that sells subscription services offering online health care treatment, such as mental health treatment and/or medication management services, through websites and mobile apps.  According to the complaint, Cerebral routinely “collected and stored personal health information (“PHI”) and other sensitive information of consumers seeking treatment,” such as names, addresses, birth dates, demographic information, IP address, medication histories, and treatment plans, among other information.  Per the complaint, Cerebral misrepresented the extent to which, and the purposes for which, it would use and disclose patients’ personal information, mishandled and exposed hundreds of thousands of patients’ personal information, and failed to provide patients with a simple means to cancel their subscriptions and stop recurring charges.  The FTC also emphasized that Cerebral did not appropriately inform consumers about the company’s information practices, including during Cerebral’s registration process, but rather offered hyperlinks to its privacy policy and telehealth consent in small print and buried key information regarding the company’s data sharing terms within its lengthy and dense privacy policy.

In addition to other allegations, the complaint alleges:

  • Cerebral failed to clearly disclose that it would be sharing consumers’ sensitive data with third parties for advertising.  Cerebral utilized tracking tools (e.g., pixels) that collected and sent patients’ PHI to third parties who used the PHI to provide advertising, data analytics, or other services to Cerebral.  The data Cerebral sent included consumers’ contact information, persistent identifiers, information about consumers’ activities while using Cerebral’s website and/or apps, and medical or mental health information disclosed by users when filling out Cerebral’s mental health questionnaire or engaging with its website in ways that demonstrated interest in particular services and treatments.  Per the complaint, Cerebral shared the sensitive information of nearly 3.2 million consumers with third-party media and advertising platforms by using or integrating tracking tools on its website or apps.
  • Cerebral failed to deploy adequate safeguards for the sensitive data collected from consumers and engaged in “sloppy security practices.”  For example, the complaint alleges Cerebral failed to block former employees from accessing confidential electronic medical records of patients and failed to ensure only the patients’ providers accessed patient records. 
  • Cerebral sent more than 6,000 promotional materials to patients in the form of a postcard—rather than within an envelope—that included names and addresses of patients in treatment, and language that reasonably indicated diagnosis, treatment, and a relationship with Cerebral, thereby revealing patients’ private, HIPAA-protected status.
  • Cerebral sold its subscription services on a negative option basis, meaning a consumer’s silence (i.e., failure to cancel an agreement) was treated as consent to be charged for goods or services.
  • Cerebral violated ROSCA by failing to clearly disclose all material terms of their cancellation policies before charging customers and failing to obtain consumers’ express informed consent before charging their financial institution for products or services.

The complaint also charges Cerebral’s former CEO, Kyle Robertson, alleging that he had “extensive personal involvement” in the teams and practices that led to the enforcement.  However, according to the FTC’s announcement, Robertson “has not agreed to a settlement and the charges against him will be decided by the court.”

Proposed Order

The proposed order, among other requirements, will:

  • Prohibit Cerebral from using “Covered Information” for advertising, marketing, promoting, offering, offering for sale, or selling any products or services on or through websites, mobile apps, or other platforms, including those of a third party.  Covered Information is broadly defined to include personal information, individually identifiable health information, and persistent identifiers (e.g., IP address, device ID), among other data types.  Past orders generally banned companies from using “Health Information” for advertising purposes, which the FTC defined more narrowly to include individually identifiable information relating to the past, present, or future physical or mental health or conditions of an individual, or “Covered Information” to the extent it would be used for targeted advertising.  Here, the proposed order appears to prohibit Cerebral from using any personal identifiers for a larger pool of advertising activities.
  • Require that Cerebral delete all consumer personal and health information and any product (e.g., models, tools) derived therefrom that has not been collected for treatment, payment, or health care operations unless Cerebral obtains affirmative express consent from the consumer for such retention.
  • Require Cerebral to implement a data retention schedule and provide consumers with a clear mechanism to request their data be deleted.
  • Prohibit Cerebral from misrepresenting any negative option and cancellation policies or practices and require it to provide consumers with an easy method to cancel services.
HHS Modifies Privacy Rule to Support Reproductive Health Care Privacy https://www.lexblog.com/2024/05/20/hhs-modifies-privacy-rule-to-support-reproductive-health-care-privacy-2-2/ Mon, 20 May 2024

On April 26, 2024, the Office for Civil Rights (“OCR”) at the U.S. Department of Health & Human Services (“HHS”) published a final rule that modifies the Standards for Privacy of Individually Identifiable Health Information (“Privacy Rule”) under the Health Insurance Portability and Accountability Act (“HIPAA”) regarding protected health information (“PHI”) concerning reproductive health. We previously covered the proposed rule (hereinafter, “the NPRM”), which was published on April 17, 2023. The final rule aligns closely with the NPRM.

OCR noted that the Supreme Court’s ruling in Dobbs v. Jackson Women’s Health Organization (holding that there is no constitutional right to abortion) created a legal landscape that “increase[s] the potential that use and disclosure of PHI about an individual’s reproductive health will undermine access to and the quality of health care generally.” According to OCR, the final rule aims to “continue to protect privacy in a manner that promotes trust between individuals and health care providers and advances access to, and improves the quality of, health care” by “limit[ing] the circumstances in which provisions of the Privacy Rule permit the use or disclosure of an individual’s PHI about reproductive health care for certain non-health care purposes.”

The final rule prohibits a regulated entity from using or disclosing an individual’s PHI:

  • to conduct a criminal, civil, or administrative investigation into or impose criminal, civil, or administrative liability on any person for the mere act of seeking, obtaining, providing, or facilitating reproductive health care that is lawful under the circumstances in which it is provided; and
  • to identify an individual, health care provider, or other person to initiate an investigation or proceeding against that person in connection with seeking, obtaining, providing, or facilitating reproductive health care that is lawful under the circumstances in which it is provided.

“Lawful under the circumstances in which it is provided” means that the reproductive health care is either:

  • lawful under the circumstances in which the health care is provided and in the state in which it is provided; or
  • protected, required, or authorized by Federal law, including the United States Constitution, regardless of the state in which such health care is provided.

The final rule includes a presumption that the reproductive health care provided by a person other than the regulated entity receiving the request was lawful. The final rule also imposes a new requirement that regulated entities must obtain an attestation from the requestor that a requested use or disclosure of PHI potentially related to reproductive health care is not for a prohibited purpose. OCR plans to publish a model attestation prior to the compliance date of the final rule.

The final rule does not prevent the use or disclosure of PHI for purposes otherwise permitted under the Privacy Rule. Notably, the final rule also does not prohibit the use or disclosure of PHI to investigate or impose liability on persons in situations involving reproductive health care that was unlawful when it was provided.

The final rule also modifies the Privacy Rule in the following ways:

  • Clarifying and adopting new definitions: The final rule clarifies that “person” in the HIPAA Rules means “natural person” (meaning a person who is born alive). In a slight departure from the NPRM, the final rule defines “public health,” in the context of surveillance, investigation, and intervention, as “population-level activities to prevent disease and promote health of populations.” Public health surveillance, investigation, and intervention do not include efforts to conduct criminal, civil, or administrative investigations or impose criminal, civil, or administrative liability for the mere act of seeking, obtaining, providing, or facilitating health care. This revision was intended to clarify that the final rule does not prevent reporting of public health information on communicable diseases. The definition of “reproductive health care” is expanded from that proposed in the NPRM to mean health care “that affects the health of an individual in all matters relating to the reproductive system and to its functions and processes.”
  • Personal representatives: The final rule clarifies that a personal representative’s provision or facilitation of reproductive health care at the request of the individual does not constitute the basis for a reasonable belief that the personal representative is subjecting the individual to abuse. This clarification responds to a concern that a regulated entity that disagrees with the reproductive services sought by the personal representative could cease to recognize that person as an individual’s personal representative by asserting abuse on the part of the personal representative.
  • Modifications of Notice of Privacy Practices (“NPP”): Regulated entities must modify their NPPs to inform individuals that their PHI may not be used or disclosed for a purpose prohibited under this final rule.

The final rule goes into effect on June 25, 2024, and regulated entities must implement compliance measures by December 23, 2024. Regulated entities have until February 16, 2026, to comply with the provisions related to NPPs.

Illinois Legislature Passes BIPA Amendment Limiting Violation Accrual https://www.lexblog.com/2024/05/17/illinois-legislature-passes-bipa-amendment-limiting-violation-accrual/ Fri, 17 May 2024

Yesterday, both houses of Illinois’ legislature passed S.B. 2979, a significant amendment to the Illinois Biometric Information Privacy Act (BIPA). The bill states that an entity that, in more than one instance, obtains the same biometric identifier or biometric information from the same person using the same method of collection, in violation of BIPA’s notice and consent requirement, has committed a single violation. As a result, each aggrieved person is entitled to, at most, one recovery for a single collective violation.

For instance, an employer who requires employees to use a biometric timekeeping system without providing the requisite notice and obtaining consent would, under the amended law, be liable only for one violation of BIPA, rather than one violation for each day the employer had used the timekeeping system. This is significant because the law imposes a penalty of $1,000 per negligent violation, or $5,000 per intentional or reckless violation. Due to this amendment, plaintiffs’ incentive to file suit under BIPA may decrease.
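
To put rough numbers on that change, here is a back-of-the-envelope Python sketch of aggregate statutory-damages exposure under the pre-amendment per-scan theory versus the amended single-violation rule. The function and the employee and scan counts are hypothetical assumptions for illustration only.

```python
def bipa_exposure(people: int, collections_per_person: int,
                  per_violation: int = 1_000, single_accrual: bool = True) -> int:
    """Aggregate statutory damages for negligent notice-and-consent violations."""
    # Under the amendment, repeated collection of the same identifier from the
    # same person by the same method counts once; under the per-scan theory,
    # every collection counted separately.
    violations_per_person = 1 if single_accrual else collections_per_person
    return people * violations_per_person * per_violation

# 500 employees clocking in and out daily for a year (~730 scans each):
print(bipa_exposure(500, 730, single_accrual=False))  # 365000000 (per-scan theory)
print(bipa_exposure(500, 730, single_accrual=True))   # 500000 (amended rule)
```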

This bill overturns the Illinois Supreme Court’s decision in Cothron v. White Castle Sys., Inc., 2023 IL 128004 (July 18, 2023). In that decision, the court held that “a claim accrues under the Act with every scan or transmission of biometric identifiers or biometric information without prior informed consent.” The court reasoned that the “plain language of the statute” regulated acts such as “collection” and “capture,” which can happen more than once. The court also emphasized that it “cannot rewrite a statute to create new elements or limitations not included by the legislature.” The court explicitly stated that the legislature was best suited to address “policy-based concerns about potentially excessive damage awards under the Act,” which the legislature has now done.

The bill also provides that consent from data subjects may be obtained via electronic signature, which is defined as “an electronic sound, symbol, or process attached to or logically associated with a record and executed or adopted by a person with the intent to sign the record.”

The bill now heads to the governor’s desk for signature. The act would take effect upon signature.

European Commission Calls on Industry to Commit to the AI Pact in the Run-Up to the European Elections https://www.lexblog.com/2024/05/14/european-commission-calls-on-industry-to-commit-to-the-ai-pact-in-the-run-up-to-the-european-elections-2/ Tue, 14 May 2024

Although the final text of the EU AI Act should enter into force in the next few months, many of its obligations will only start to apply two or more years after that (for further details, see our earlier blog here). To address this gap, the Commission is encouraging industry to take early, voluntary steps to implement the Act’s requirements through an initiative it is calling the AI Pact. With the upcoming European elections on the horizon, the Commission on 6 May 2024 published additional details on the AI Pact and encouraged organizations to implement measures addressing “critical aspects of the imminent AI Act, with the aim of curbing potential misuse” and contributing “to a safe use of AI in the run-up to the election.”

What is the AI Pact?

The Commission launched the AI Pact in November 2023 with the objective of assisting organizations in planning ahead for compliance with the AI Act and encouraging early adoption of the measures outlined in the Act. Organizations involved in the AI Pact will make formal pledges to work towards compliance with the upcoming AI Act and provide specific details about the actions they are currently taking or planning to take to meet the Act’s requirements.

The AI Pact will be overseen by the Commission’s newly formed AI Office and will be structured around two pillars:

  • Pillar I: gathering and exchanging knowledge with the AI Pact network – organizations participating in the Pact contribute to the creation of a collaborative community, sharing their experiences and best practices. This will include workshops organized by the AI Office on topics including responsibilities under the AI Act and how to prepare for the Act’s implementation.
  • Pillar II: facilitating and communicating company pledges – “providers” and “deployers” of AI systems (as defined in the AI Act) will be encouraged to proactively share the concrete actions they’ve committed to take to meet the Act’s requirements and report on their progress on a regular basis. The commitments will be collected and published by the AI Office.

What does involvement in the AI Pact offer participants?

According to the Commission, the benefits for organizations participating in the AI Pact include:

  • Fostering a shared understanding of the AI Act’s goals.
  • Sharing knowledge and increasing the visibility and credibility of the safeguards put in place to demonstrate trustworthy AI.
  • Building additional trust in AI technologies.

***

The Covington team continues to monitor developments on the AI Act, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about the AI Act, or other tech regulatory matters, we are happy to assist with any queries.

Employers Beware: New Wave of Illinois Genetic Information Privacy Act Litigation https://www.lexblog.com/2024/05/07/employers-beware-new-wave-of-illinois-genetic-information-privacy-act-litigation/ Tue, 07 May 2024 20:04:06 +0000 https://www.lexblog.com/2024/05/07/employers-beware-new-wave-of-illinois-genetic-information-privacy-act-litigation/ Likely spurred by plaintiffs' recent successes in cases under Illinois's Biometric Information Privacy Act ("BIPA"), a new wave of class actions is emerging under Illinois's Genetic Information Privacy Act ("GIPA"). While BIPA regulates the collection, use, and disclosure of biometric data, GIPA regulates that of genetic testing information. Each has a private right of action and provides for significant statutory damages, even, potentially, where plaintiffs allege a violation of the statute without actual damages.[1] From its 1998 enactment until last year, there were few GIPA cases, and they were largely focused on claims related to genetic testing companies.[2] More recently, plaintiffs have brought dozens of cases against employers alleging GIPA violations based on employers' requests for family medical history through pre-employment physical exams. This article explores GIPA's background, the current landscape and key issues, and considerations for employers.

Key GIPA Provisions

GIPA is intended to prevent employers and insurers from using genetic testing information as a means of discrimination for employment or underwriting purposes. See 410 ILCS 513/1, et seq.

Specifically as to employers, GIPA prohibits:

  • Soliciting, requesting, requiring, or purchasing a person's or their family member's genetic testing as a condition of employment.
  • Using such information for employment decisions.
  • Using genetic information for workplace wellness programs, unless the employee provides GIPA-compliant written authorization.

GIPA also prevents disclosure of the identity of a genetic testing subject or the results of genetic testing to third parties without the subject’s authorization. GIPA provides for the greater of actual damages or $2,500 for a negligent violation and $15,000 for a willful violation—steeper than BIPA’s $1,000 and $5,000 statutory damages, respectively.

Litigation Landscape

Given the dearth of GIPA caselaw, there is little precedent on the application and scope of its provisions. But in a new swath of cases, employees and job applicants assert that employers have requested family medical histories during pre-employment physicals in violation of GIPA.

In April, an Illinois state court dismissed one such case against a hospital, finding that the employee-plaintiff was allegedly asked only about her own current medical status (and not her or her family's genetic history) during a pre-employment physical that occurred after she had been offered a job, and that she had released the hospital from liability. Mendoza v. Advocate Health and Hosp. Corp., No. 23-CH-7844 (Ill. Cir. Ct. Apr. 24, 2024).

Several other suits have dismissal motions ripe for ruling, so we are likely to see the caselaw develop in the near-term.

Scope of Genetic Information. How courts will interpret the scope of “genetic information” is currently unclear. GIPA adopts the Health Insurance Portability and Accountability Act’s definition of “genetic information,” which includes, among other things, “[t]he manifestation of a disease or disorder in family members of such individual.” 410 ILCS 513/10; 45 CFR § 160.103. Depending on the particular information an employer requests and an employee provides in a pre-employment physical or questionnaire, defendants may be able to argue that such medical histories do not constitute “genetic information.”

Potential Damages. Though GIPA provides for greater damages than does BIPA, which has resulted in significant liability in the tens and even hundreds of millions of dollars, GIPA’s scope is narrower than BIPA’s, and other practical factors may result in lesser overall exposure. The Illinois Supreme Court ruled last year that each repeat violation of BIPA—for example, each time an employer collects an employee’s fingerprint data for building or computer access, which could happen several times per day—can be an individual violation. Cothron v. White Castle Sys., Inc., 216 N.E.3d 918, 929, as modified on denial of reh’g (July 18, 2023). Even if courts determine that the same applies to GIPA, practically, given the limited number of times an employer may request genetic information, the number of violations for which a company may face liability could be significantly lower under GIPA. But the significant statutory damages could present sizeable exposure depending on the class size if a class is certified.

Additionally, given the similar wording of the two statutes’ damages provisions, courts may also determine GIPA damages, like BIPA damages, are discretionary. See id. This also could help cabin liability.
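As a rough, purely illustrative comparison of the two statutes' exposure profiles, the sketch below applies the statutory amounts described above to assumed class sizes and collection counts. The inputs are hypothetical; real exposure will turn on accrual rulings, judicial discretion, and class certification.

```python
# Hypothetical comparison of statutory exposure, assuming (as the statutes
# provide) GIPA damages of $2,500 (negligent) / $15,000 (willful) and BIPA
# damages of $1,000 / $5,000, and assuming per-collection accrual applies to
# both. Class size and collection counts are illustrative assumptions.

def statutory_exposure(class_size: int, collections_per_person: int,
                       per_violation: int) -> int:
    return class_size * collections_per_person * per_violation

if __name__ == "__main__":
    class_size = 1_000
    # GIPA: family medical history is typically requested once, at a physical.
    gipa = statutory_exposure(class_size, collections_per_person=1, per_violation=2_500)
    # BIPA: a fingerprint timeclock might be scanned thousands of times over years.
    bipa = statutory_exposure(class_size, collections_per_person=2_000, per_violation=1_000)
    print(f"GIPA (negligent): ${gipa:,}")  # $2,500,000
    print(f"BIPA (negligent): ${bipa:,}")  # $2,000,000,000
```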

Considerations for Employers

In the meantime, Illinois employers should review their hiring practices and consider whether pre-employment physicals are necessary and, if so, whether family medical history questionnaires are needed and how detailed they must be. Employers might also consider obtaining liability waivers for the collection of genetic information and reviewing their insurance policies for carveouts barring coverage for such litigation.


[1] At least one court has so held with respect to GIPA, relying on the Illinois Supreme Court’s holding relating to a similar provision of BIPA. See Bridges v. Blackstone Grp., Inc., 2022 WL 2643968, at *3 (S.D. Ill. July 8, 2022), aff’d, 66 F.4th 687 (7th Cir. 2023).

[2] See, e.g., Bridges, 2022 WL 2643968; Melvin v. Sequencing, LLC, 344 F.R.D. 231 (N.D. Ill. 2023); see also In re Ambry Genetics Data Breach Litig., 567 F. Supp. 3d 1130, 1150 (C.D. Cal. 2021).

HHS Modifies Privacy Rule to Support Reproductive Health Care Privacy https://www.lexblog.com/2024/05/03/hhs-modifies-privacy-rule-to-support-reproductive-health-care-privacy/ Fri, 03 May 2024 17:30:15 +0000 https://www.lexblog.com/2024/05/03/hhs-modifies-privacy-rule-to-support-reproductive-health-care-privacy/ On April 26, 2024, the Office for Civil Rights (“OCR”) at the U.S. Department of Health & Human Services (“HHS”) published a final rule that modifies the Standards for Privacy of Individually Identifiable Health Information (“Privacy Rule”) under the Health Insurance Portability and Accountability Act (“HIPAA”) regarding protected health information (“PHI”) concerning reproductive health. We previously covered the proposed rule (hereinafter, “the NPRM”), which was published on April 17, 2023. The final rule aligns closely with the NPRM.

OCR noted that the Supreme Court’s ruling in Dobbs v. Jackson Women’s Health Organization (holding that there is no constitutional right to abortion) created a legal landscape that “increase[s] the potential that use and disclosure of PHI about an individual’s reproductive health will undermine access to and the quality of health care generally.” According to OCR, the final rule aims to “continue to protect privacy in a manner that promotes trust between individuals and health care providers and advances access to, and improves the quality of, health care” by “limit[ing] the circumstances in which provisions of the Privacy Rule permit the use or disclosure of an individual’s PHI about reproductive health care for certain non-health care purposes.”

The final rule prohibits a regulated entity from using or disclosing an individual’s PHI:

  • to conduct a criminal, civil, or administrative investigation into or impose criminal, civil, or administrative liability on any person for the mere act of seeking, obtaining, providing, or facilitating reproductive health care that is lawful under the circumstances in which it is provided; and
  • to identify an individual, health care provider, or other person to initiate an investigation or proceeding against that person in connection with seeking, obtaining, providing, or facilitating reproductive health care that is lawful under the circumstances in which it is provided.

“Lawful under the circumstances in which it is provided” means that the reproductive health care is either:

  • lawful under the circumstances in which the health care is provided and in the state in which it is provided; or
  • protected, required, or authorized by Federal law, including the United States Constitution, regardless of the state in which such health care is provided.

The final rule includes a presumption that the reproductive health care provided by a person other than the regulated entity receiving the request was lawful. The final rule also imposes a new requirement that regulated entities must obtain an attestation from the requestor that a requested use or disclosure of PHI potentially related to reproductive health care is not for a prohibited purpose. OCR plans to publish a model attestation prior to the compliance date of the final rule.

The final rule does not prevent the use or disclosure of PHI for purposes otherwise permitted under the Privacy Rule. Notably, the final rule also does not prohibit the use or disclosure of PHI to investigate or impose liability on persons in situations involving reproductive health care that was unlawful when it was provided.
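For readers who find it easier to follow the rule as a decision tree, here is a deliberately simplified Python sketch of the prohibition described above. The data model, field names, and the handling of the attestation requirement are our own illustrative assumptions and omit much of the regulatory nuance; it is a sketch, not an implementation of the Privacy Rule.

```python
from dataclasses import dataclass

# Simplified model of the amended Privacy Rule's prohibition, for illustration only.
PROHIBITED_PURPOSES = {"investigation", "imposing_liability"}
# Request types for which the rule requires a signed attestation that the use or
# disclosure is not for a prohibited purpose (simplified list).
ATTESTATION_REQUIRED = {"health_oversight", "judicial_proceeding",
                        "law_enforcement", "decedent_disclosure"}

@dataclass
class Request:
    purpose: str
    relates_to_reproductive_care: bool
    care_was_lawful: bool = True        # presumed lawful if provided by another entity
    attestation_received: bool = False

def may_disclose(req: Request) -> bool:
    """Return False where the amended rule, as simplified here, bars the disclosure."""
    if not req.relates_to_reproductive_care:
        return True
    if req.purpose in PROHIBITED_PURPOSES and req.care_was_lawful:
        return False                    # the new categorical prohibition
    if req.purpose in ATTESTATION_REQUIRED and not req.attestation_received:
        return False                    # attestation must be obtained first
    return True

if __name__ == "__main__":
    print(may_disclose(Request("investigation", True)))                                # False
    print(may_disclose(Request("health_oversight", True, attestation_received=True)))  # True
```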

The final rule also modifies the Privacy Rule in the following ways:

  • Clarifying and adopting new definitions: The final rule clarifies that "person" in the HIPAA Rules means "natural person" (meaning a person who is born alive). In a slight departure from the NPRM, the final rule defines "public health," in the context of surveillance, investigation, and intervention, as "population-level activities to prevent disease and promote health of populations." Public health surveillance, investigation, and intervention do not include efforts to conduct criminal, civil, or administrative investigations or impose criminal, civil, or administrative liability for the mere act of seeking, obtaining, providing, or facilitating health care. This revision was intended to clarify that the final rule does not prevent reporting of public health information on communicable diseases. The definition of "reproductive health care" is expanded from that proposed in the NPRM to mean health care "that affects the health of an individual in all matters relating to the reproductive system and to its functions and processes."
  • Personal representatives: The final rule clarifies that a personal representative’s provision or facilitation of reproductive health care at the request of the individual does not constitute the basis for a reasonable belief that the personal representative is subjecting the individual to abuse. This clarification responds to a concern that a regulated entity that disagrees with the reproductive services sought by the personal representative could cease to recognize that person as an individual’s personal representative by asserting abuse on the part of the personal representative.
  • Modifications of Notice of Privacy Practices (“NPP”): Regulated entities must modify their NPPs to inform individuals that their PHI may not be used or disclosed for a purpose prohibited under this final rule.

The final rule goes into effect on June 25, 2024, and regulated entities must implement compliance measures by December 23, 2024. Regulated entities have until February 16, 2026, to comply with the provisions related to NPPs.

Changes to the UK investigatory powers regime receive royal assent https://www.lexblog.com/2024/05/03/changes-to-the-uk-investigatory-powers-regime-receive-royal-assent-2/ Fri, 03 May 2024 14:08:29 +0000 https://www.lexblog.com/2024/05/03/changes-to-the-uk-investigatory-powers-regime-receive-royal-assent-2/ On April 25, 2024, the UK’s Investigatory Powers (Amendment) Act 2024 (“IP(A)A”) received royal assent and became law.  This law makes the first substantive amendments to the existing Investigatory Powers Act 2016 (“IPA”) since it came into effect, and follows an independent review of the effectiveness of the IPA published in June 2023.

The most significant amendments are:

  • Introduction of requirements to notify the UK Government of changes to services.  The IP(A)A grants a new power to the UK Government, which may issue notices to operators of covered services (e.g., communications service or network providers) requiring them to notify the Government before they make certain types of changes to their services.  The precise types of changes that may be notifiable will be set out in secondary legislation, but the intent appears to be to cover changes that might prevent a provider from complying with warrants they receive under the IPA.  This provision has been controversial, as it could potentially be used to require providers to notify the UK Government if they wish to introduce tools like end-to-end encryption.
  • New personal data breach notification requirements.  The UK's Privacy and Electronic Communications Regulations 2003 already require providers of electronic communications networks and services to notify the Information Commissioner's Office if they suffer a personal data breach.  The IP(A)A introduces a new requirement for such providers also to notify the Investigatory Powers Commissioner ("IPC").  Where (among other things) there is a public interest in doing so, taking into account the seriousness of the breach and potential impacts on national security / the prevention of crime, the IPC must inform individuals affected by the breach.  Covered providers may need to consider amending their incident response plans to account for these notifications.
  • Broader powers for intelligence agencies to access certain types of data.  The IPA currently requires intelligence agencies to obtain a warrant from the Secretary of State (approved by a Judicial Commissioner) before they can retain large databases of personal data consisting primarily of data relating to individuals who are unlikely to be of interest to the intelligence services.  The IP(A)A will permit the head of an intelligence agency (again subject to approval by a Judicial Commissioner) to issue certain types of warrants for bulk personal datasets where individuals have a "low expectation of privacy", based on factors including whether the data was made public by the individual or is widely known about in the public domain.  The IP(A)A also makes provision, for the first time, for intelligence services to access bulk personal datasets held by third parties, provided they obtain a warrant from the Secretary of State and that warrant is approved by a Judicial Commissioner.

    In addition, the IP(A)A creates a broader set of circumstances when law enforcement and intelligence agencies may access internet connection records, i.e., metadata relating to when and where individuals connected to the internet or other communications networks.

Other provisions of the IP(A)A are largely intended to clarify certain provisions of the IPA and to prevent circumvention—for example, amendments to clarify that the definition of “telecommunications operator” covers operators located outside the UK but that provide services to people in the EU, and an express statement that the UK Government can enforce “retention notices” (i.e., notices requiring a telecommunications operator to retain data for a certain period) against providers located outside the UK.  There are also new provisions related to when certain powers set out in the IPA may be used in relation to Members of Parliament and journalists.

What the Diversity in Faces Litigation Means for Biometric Technologies https://www.lexblog.com/2024/05/02/what-the-diversity-in-faces-litigation-means-for-biometric-technologies/ Fri, 03 May 2024 02:36:52 +0000 https://www.lexblog.com/2024/05/02/what-the-diversity-in-faces-litigation-means-for-biometric-technologies/ In 2020, Illinois residents whose photos were included in the Diversity in Faces dataset brought a series of lawsuits against multiple technology companies, including IBM, FaceFirst, Microsoft, Amazon, and Google, alleging violations of Illinois' Biometric Information Privacy Act ("BIPA").[1] In the years since, the cases against IBM and FaceFirst were dismissed by agreement of the parties, while the cases against Microsoft, Amazon, and, most recently, Google were dismissed at summary judgment.

These cases are unique in the landscape of BIPA litigation because in all instances, defendants are not alleged to have had direct contact with the plaintiffs. Instead, plaintiffs alleged that defendants used a dataset of photos created by IBM (the Diversity in Faces, or DiF, dataset) which allegedly included images publicly made available by photo-sharing website Flickr. The DiF dataset allegedly implemented facial coding schemes to measure various aspects of the facial features of individuals pictured, and was made available to researchers with the goal of mitigating dataset bias. The nature of these allegations sets these cases apart from cases like Monroy v. Shutterfly, Inc., 2017 WL 4099846 or In re Facebook Biometric Info. Priv. Litig., 326 F.R.D. 535 in which plaintiffs alleged that defendants had collected biometric data from them. Here, there was no allegation that plaintiffs used a product created by defendants, gave data to defendants, or interacted with defendants in any way. Thus, these cases demonstrate the importance of considering BIPA when developing biometric technologies or performing research, even if direct interaction with Illinois residents is limited.

Extraterritoriality

It is well-established that BIPA does not apply extraterritorially to conduct outside of Illinois. The DiF cases considered whether BIPA’s territorial limits barred plaintiffs’ claims. The courts uniformly declined to grant defendants’ motions to dismiss on such grounds but did eventually grant motions for summary judgment. Both the Amazon and Microsoft courts acknowledged at the motion to dismiss stage that plaintiffs did not upload any data to defendant companies, that they did not directly use defendants’ products, and that plaintiffs did not allege that defendants had obtained the DiF dataset from Illinois. However, the courts allowed discovery in order to assess not only typical factors such as plaintiff’s residency and the location of harm, but also “[i]nternet-specific factors, such as where the site or information was accessed, or where the corporation operates the online practice.”

Ultimately, all courts to rule on the question found that BIPA did not apply as the events in question did not occur primarily and substantially in Illinois. To support this finding, the Amazon and Microsoft courts noted that entities other than defendants were responsible for collecting and generating facial scans from the photographs. Additionally, the Amazon court found that there was no evidence that employees had downloaded, reviewed, or evaluated the DiF dataset in Illinois. Similarly, the Google court stated that plaintiffs had not alleged any “direct interaction” that would give rise to the alleged BIPA violations. The Microsoft court went further by stating that even if Microsoft’s systems “‘chunked,’ encrypted, and stored the DiF Dataset on a server in Illinois,” any connection between Microsoft’s conduct and Illinois would still have been too attenuated for BIPA to apply.

Unjust Enrichment

Plaintiffs also brought unjust enrichment claims, alleging that defendants unlawfully acquired plaintiffs’ biometric information and profited from its dissemination. On summary judgment, the Microsoft and Amazon courts found that, because employees did not use the facial annotations in the dataset and did not use the dataset to train or improve their facial recognition technologies, there was no unjust enrichment. It is worth noting that these decisions relied on highly fact-specific analyses citing multiple relevant depositions.

In conclusion, a key observation emerging from this line of cases is that those that did not settle were dismissed at summary judgment once discovery showed that defendants’ actions were not connected to Illinois and that they did not use the DiF dataset to improve their own technologies. Though this trend may slow the rate at which new BIPA litigation is filed against companies that use biometric data to improve their technologies, companies can still consider mitigating risk and improving their chances of prevailing on motions to dismiss by closely examining the source of any biometric data and evaluating whether consumer consent was obtained.


[1] Vance v. Int’l Bus. Machines Corp., 2020 WL 5530134; Vance v. Facefirst, Inc., 2021 WL 5044010; Vance v. Amazon.com, Inc., 2022 WL 12306231; Vance v. Google LLC, 2024 WL 1141007; Vance v. Microsoft Corp., 2022 WL 9983979.

Congress Passes Bill Prohibiting Sharing or Selling Americans’ Sensitive Data to Entities Controlled by Foreign Adversaries https://www.lexblog.com/2024/05/02/congress-passes-bill-prohibiting-sharing-or-selling-americans-sensitive-data-to-entities-controlled-by-foreign-adversaries/ Thu, 02 May 2024 13:54:32 +0000 https://www.lexblog.com/2024/05/02/congress-passes-bill-prohibiting-sharing-or-selling-americans-sensitive-data-to-entities-controlled-by-foreign-adversaries/ On April 24, 2024, President Biden signed into law H.R. 815, which includes the Protecting Americans’ Data from Foreign Adversaries Act of 2024 (“the Act”), a bill that passed the House 414-0 as H.R. 7520 on March 20.  The Act is one of several recent actions by the U.S. government to regulate transfers of U.S. personal data for national security reasons, with a particular focus on China.  While the ultimate policy objectives are similar, the Act takes a different approach by comparison to the Biden Administration’s Executive Order on Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern (“the EO”), which the U.S. Department of Justice (“DOJ”) is in the process of implementing.  We summarize below some key features of the Act, which will go into effect on June 23, 2024.

The Act makes it unlawful for data brokers to sell, license, rent, trade, transfer, release, disclose, provide access to, or otherwise make available personally identifiable sensitive data of a United States individual (i.e., people residing in the United States) to any foreign adversary or any entity controlled by a foreign adversary. 

  • "Data broker" for purposes of the Act means any entity that, for valuable consideration, sells, licenses, rents, trades, transfers, releases, discloses, provides access to, or otherwise makes available data of United States individuals that the entity did not collect directly from such individuals to another entity.  The Act exempts certain entities from the meaning of "data broker."  Specifically, the Act does not apply to an entity to the extent that such entity:
    • (i) is transmitting data of a U.S. individual, including communications of such an individual, at the request or direction of such individual;
    • (ii) is providing, maintaining, or offering a product or service with respect to which personally identifiable sensitive data, or access to such data, is not the product or service;
    • (iii) is reporting or publishing news or information concerning local, national, or international events or other matters of public interest;
    • (iv) is reporting, publishing, or otherwise making available news or information that is available to the general public; or
    • (v) is acting as a service provider.  A “service provider” is an entity that: (A) collects, processes, or transfers data on behalf of, and at the direction of: (i) an individual or entity that is not a foreign adversary country or controlled by a foreign adversary; or (ii) a Federal, State, Tribal, territorial, or local government entity; and (B) receives data from or on behalf of an individual or entity described in subparagraph (A)(i) or a Federal, State, Tribal, territorial, or local government entity.

As noted above, the Act prohibits making available sensitive data of United States individuals to entities or individuals controlled by a foreign adversary.

  • “Foreign adversary countries” are those specified in 10 U.S.C. § 4872(d)(2), which currently includes the Democratic People’s Republic of North Korea, the People’s Republic of China, the Russian Federation, and the Islamic Republic of Iran.
  • An entity “controlled by a foreign adversary” means an individual or entity that is:
    • (A) a foreign person domiciled in, is headquartered in, has its principal place of business in, or is organized under the laws of a foreign adversary country;
    • (B) an entity with respect to which a foreign person or combination of foreign persons described in (A) directly or indirectly own at least a 20 percent stake; or
    • (C) a person subject to the direction or control of a foreign person or entity described in (A) or (B).

The Act includes in its definition of “sensitive data” sixteen categories of data plus any data made available by a data broker “for the purpose of identifying the types of data.” Categories of sensitive data include government issued identifiers, biometric information, genetic information, and precise geolocation information, among other things.  “Sensitive data” is considered personally identifiable if it “identifies or is linked or reasonably linkable, alone or in combination with other data, to an individual or a device that identifies or is linked or reasonably linkable to an individual.”
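The prohibition can be thought of as a two-part test: is the transferor a data broker making U.S. individuals' sensitive data available, and is the recipient a foreign adversary or an entity controlled by one? The sketch below encodes that structure for illustration only; the country list and the 20 percent ownership threshold track the summary above, while the data model and function names are assumptions, and the Act's exemptions are omitted.

```python
from dataclasses import dataclass

# Foreign adversary countries currently specified in 10 U.S.C. § 4872(d)(2), as summarized above.
FOREIGN_ADVERSARY_COUNTRIES = {"North Korea", "China", "Russia", "Iran"}

@dataclass
class Recipient:
    country_of_domicile: str
    adversary_ownership_pct: float = 0.0    # combined stake held by foreign adversary persons
    subject_to_adversary_direction: bool = False

def controlled_by_foreign_adversary(r: Recipient) -> bool:
    """Simplified version of the Act's three-prong 'controlled by' test."""
    return (
        r.country_of_domicile in FOREIGN_ADVERSARY_COUNTRIES
        or r.adversary_ownership_pct >= 20.0
        or r.subject_to_adversary_direction
    )

def transfer_prohibited(is_data_broker: bool, data_is_sensitive: bool,
                        recipient: Recipient) -> bool:
    """A data broker may not make U.S. individuals' sensitive data available to
    a foreign adversary or an entity controlled by one (exemptions omitted)."""
    return is_data_broker and data_is_sensitive and controlled_by_foreign_adversary(recipient)

if __name__ == "__main__":
    r = Recipient("Singapore", adversary_ownership_pct=25.0)
    print(transfer_prohibited(True, True, r))  # True
```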

Violations of this Act would be enforced by the Federal Trade Commission (“FTC”) as violations of an unfair or deceptive act or practice under the FTC Act.  It is unclear how the FTC will interpret and enforce the Act, especially in light of ambiguities in the statutory language, the FTC’s lack of national security expertise, and the potential overlap with DOJ’s authority under the EO.

FTC Issues Final Rule to Expand Scope of the Health Breach Notification Rule https://www.lexblog.com/2024/05/02/ftc-issues-final-rule-to-expand-scope-of-the-health-breach-notification-rule/ Thu, 02 May 2024 12:55:28 +0000 https://www.lexblog.com/2024/05/02/ftc-issues-final-rule-to-expand-scope-of-the-health-breach-notification-rule/ On Friday, April 26, 2024, the Federal Trade Commission (“FTC”) voted 3-2 to issue a final rule (the “final rule”) that expands the scope of the Health Breach Notification Rule (“HBNR”) to apply to health apps and similar technologies and broadens what constitutes a breach of security, among other updates.  We previously covered the proposed rule, which was issued on May 18, 2023.

In the FTC’s announcement of the final rule, the FTC emphasized that “protecting consumers’ sensitive health data is a high priority for the FTC” and that the “updated HBNR will ensure [the HBNR] keeps pace with changes in the health marketplace.”  Key provisions of the final rule include:

  • Revised definitions:  The final rule includes changes to current definitions in the HBNR that codify the FTC’s recent position on the expansiveness of the HBNR.  Specifically, among other definition changes, the HBNR contains key updates to the definitions of:
    • “Personal health records (‘PHR’) identifiable information.”  In the final rule, the FTC adopts changes to the definition of PHR identifiable information that were included in the proposed rule to clarify that the HBNR applies to health apps and other similar technologies not covered by the Health Insurance Portability and Accountability Act, as amended, and its implementing regulations (collectively, “HIPAA”).  In the final rule, the FTC discusses the scope of the definition, noting that “unique, persistent identifiers (such as unique device and mobile advertising identifiers), when combined with health information constitute ‘PHR identifiable health information’ if these identifiers can be used to identify or re-identify an individual.”
    • “Covered health care provider.”  In the proposed rule, the FTC proposed adding a definition of “health care provider” to include providers of medical or other health services, or any other entity furnishing “health care services or supplies” (i.e., websites, apps, and Internet-connected devices that provide mechanisms to track health conditions, medications, fitness, sleep, etc.).  The final rule does not make substantive changes to this proposed definition but does contain a slight terminology change to “covered health care provider” to distinguish that term from the definition of “health care provider” in other regulations. 

In the final rule, the FTC notes that the concern (expressed by some commenters) that the scope of these definitions could impermissibly cause the HBNR to cover retailers of general purpose items like shampoo or vitamins is unwarranted—rather, the FTC explains, the threshold inquiry is whether an entity is a vendor of PHR, which is “an entity that offers or maintains a [PHR].”  The final rule notes that to be a vendor of PHR covered by the HBNR, an app, website, or online service “must provide an offering that relates more than tangentially to health” and that a PHR must be “an electronic record of PHR identifiable health information on an individual, must have the technical capacity to draw information from multiple sources, and must be managed, shared, and controlled by or primarily for the individual.” 

  • “Breach of security.”  In the final rule, the FTC adopts the proposed changes to the meaning of “breach of security” to capture a company’s intentional but unauthorized disclosures of consumers’ PHR identifiable health information to third party companies, as well as traditional cybersecurity incidents.  Notably, the FTC emphasizes in the final rule that the meaning of “breach of security” includes more than just unauthorized disclosures to third parties—the FTC takes the position that the term also includes unauthorized uses, i.e., “where an entity exceeds an authorized access to PHR identifiable health information, such as where it obtains data for one legitimate purpose, but later uses that data for a secondary purpose that was not originally authorized by the individual.”

The final rule notes that the FTC has not added a definition of "authorization," but provides several examples of what may constitute an "unauthorized" disclosure of PHR identifiable health information, including (i) affirmative privacy misrepresentations to users such that disclosures of PHR identifiable health information are inconsistent with consumer expectations and (ii) "deceptive omissions," where a company does not disclose, or obtain affirmative express consent from users for, the sharing of their PHR identifiable health information for targeted advertising.

  • “PHR Related Entity.”  The FTC adopts in the final rule the proposed changes to the definition of “PHR related entity” to affirm that (i) PHR related entities include entities offering products and services through any online service, including mobile applications, (ii) PHR related entities encompass only entities that access or send unsecured PHR identifiable health information to a PHR, and (iii) a third party service provider that accesses PHR identifiable health information in the course of providing services is not automatically rendered a PHR related entity.  However, the final rule states that, to the extent a third party service provider uses PHR identifiable health information that it receives in its capacity as a service provider for its own purposes (e.g., its own research and development), this entity is a PHR related entity “to the extent that that it offers its services . . . for its own purposes rather than to provide services.” 

The final rule also requires that vendors of PHR and PHR related entities notify their third party service providers that the vendor of PHR/PHR related entity is subject to the HBNR.  According to the final rule, the purpose of this notice is to ensure that the third party service providers are aware of the content of the data transmissions received by the third party service providers and that the third party service providers provide timely notice to the vendor of PHR/PHR related entity of any breach under the HBNR. 

The final rule states that vendors of PHR and PHR related entities may facilitate compliance with this notice requirement by stipulating via contract whether the transmissions to third party service providers will contain PHR identifiable health information.  The final rule suggests that both the vendor of PHR/PHR related entity and third party service provider should monitor for compliance with such contractual provisions taking into consideration the size and sophistication of the entity and the sensitivity of the data.  Further, the final rule suggests that certain entities that may act as third party service providers, such as “a large advertising platform,” may have heightened obligations to monitor the data it receives (even where partners promise not to send PHR identifiable health information to it), particularly if the entity has in the past routinely received unsecured PHR identifiable health information notwithstanding vendors’ of PHR/PHR related entities’ commitments to the contrary.  The final rule distinguishes these heightened monitoring obligations from those of “small firms that do not engage in high-risk activities where the contract precludes sending such data and there is no history of such transmissions.”

  • Clarification of the meaning of a PHR “draw[ing] information from multiple sources:”  In the final rule, the FTC adopts the proposed changes to what it means for a PHR to draw information from multiple sources.  Specifically, a PHR will now be defined to include an electronic record of PHR identifiable health information that has the technical capacity to draw information from multiple sources.  For example, according to the final rule, because a fitness app has the technical capacity to draw identifiable health information from both the user and the fitness tracker, it is a PHR, even if some users elect not to connect the fitness tracker. 
  • Provision of electronic notice and included content:  The final rule adopts the proposal that notice of a breach sent by electronic mail must also be provided by one or more of a text message, within-app message, or electronic banner, which must be clear and conspicuous.  The final rule also requires that a notice of breach include the name or identity (or, where providing the full name or identity would pose a risk to individuals or the entity providing notice, a description), website, and contact information of any third parties that acquired unsecured PHR identifiable health information as a result of a breach of security.
  • Timing changes for notices of breaches:  Previously, the HBNR required notice "as soon as possible and in no case later than ten business days following the date of discovery of the breach" for breaches involving 500 or more individuals.  For breaches involving fewer than 500 individuals, the HBNR requires notice within 60 calendar days following the end of the calendar year.  The final rule modifies the timing for the notice of a breach of security involving 500 or more individuals to "without unreasonable delay and in no case later than 60 calendar days after the discovery of a breach of security."  The notice to the FTC must be sent at the same time as the notice to individuals (an illustrative sketch of the revised date arithmetic appears below).
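For illustration only, the snippet below computes the outer notice deadlines under the revised timing as summarized above. It is a sketch of the date arithmetic, not compliance guidance, and it does not capture the overriding "without unreasonable delay" standard.

```python
from datetime import date, timedelta

def ftc_notice_deadline(discovery: date, affected: int) -> date:
    """Outer deadline for notice under the revised HBNR timing, as summarized above
    (simplified; notice must also be sent without unreasonable delay)."""
    if affected >= 500:
        # No later than 60 calendar days after discovery of the breach.
        return discovery + timedelta(days=60)
    # Fewer than 500 individuals: within 60 calendar days after the end of the calendar year.
    return date(discovery.year, 12, 31) + timedelta(days=60)

if __name__ == "__main__":
    d = date(2024, 7, 1)
    print(ftc_notice_deadline(d, affected=1_200))  # 2024-08-30
    print(ftc_notice_deadline(d, affected=120))    # 2025-03-01
```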

As noted above, this final rule was not issued unanimously—the FTC Commissioners voted 3-2 to finalize the changes, with recently confirmed Commissioners Holyoak and Ferguson opposing the final rule.  Among other reasons outlined in their dissenting statement, Commissioners Holyoak and Ferguson argued that the final rule “exceeds the Commission’s statutory authority, puts companies at risk of perpetual non-compliance, and opens the Commission to legal challenge that could undermine its institutional integrity.”

While the finalization of these changes to the HBNR is notable, many of these changes reflect the codification of the position already taken by the FTC in recent years in prior guidance and enforcement actions.  In 2021, the FTC adopted by a 3-2 vote a policy statement “Statement of the Commission on Breaches by Health Apps and Other Connected Devices,” which took a similarly broad approach to when health apps and connected devices are covered by the HBNR and when there is a “breach” for purposes of the HBNR.  Then Commissioners Phillips and Wilson opposed the policy statement based on concerns about the expansion of the HBNR beyond the FTC’s statutory authority, among other concerns.  Since the 2021 policy statement, the FTC has brought its first two enforcement actions under the HBNR against GoodRx (issued 4-0) and Easy Healthcare (issued 3-0), leveraging its broad interpretation of the meaning of “breach.”

The Maryland Online Data Privacy Act Set to Reshape the State Privacy Legislation Landscape with Stringent Requirements https://www.lexblog.com/2024/05/01/the-maryland-online-data-privacy-act-set-to-reshape-the-state-privacy-legislation-landscape-with-stringent-requirements/ Wed, 01 May 2024 14:58:00 +0000 https://www.lexblog.com/2024/05/01/the-maryland-online-data-privacy-act-set-to-reshape-the-state-privacy-legislation-landscape-with-stringent-requirements/ Last month, the Maryland legislature passed the Maryland Online Data Privacy Act (“MODPA”). Pending Governor’s signature, Maryland will become the latest state to enact comprehensive privacy legislation, joining California, Virginia, Colorado, Connecticut, Utah, Iowa, Indiana, Tennessee, Montana, Oregon, Texas, Florida, Delaware, New Jersey, New Hampshire, Kentucky, and Nebraska.

MODPA contains unique provisions that will require careful analysis to ensure compliance, including: data minimization requirements; restrictions on the collection, sale, or transfer of sensitive data; and consumer health data-related obligations.  These unique provisions have the potential to create additional work streams even for companies that have come into compliance with existing state laws.  This blog post summarizes the statute's key takeaways.

  • Scope: The MODPA applies to processors whose business targets Maryland residents and that, during the preceding year, controlled or processed the data of at least 35,000 Maryland consumers, or of at least 10,000 Maryland consumers while deriving more than 20% of gross revenue from the sale of personal data (a simplified applicability sketch appears after this list).  The MODPA includes many exemptions present in other state comprehensive privacy laws, including exemptions for certain nonprofits, state government entities, financial institutions, and protected health information under HIPAA, among others.
  • Consumer Rights:  The MODPA provides consumers with rights found in many other state comprehensive privacy laws.  These rights include access, correction, deletion, and portability, and rights to opt-out of processing for targeted advertising, the sale of personal data, and profiling in furtherance of solely automated decisions.  The MODPA also will require controllers to honor opt-out preference signals.
  • Data Minimization Requirements:  The MODPA restricts the collection of personal data to what is reasonably necessary to maintain or provide the requested product or service, with even more stringent data minimization expectations for sensitive data, as discussed below.  Additionally, the Act would require controllers to obtain consent prior to processing personal data for a purpose that is not reasonably necessary to or compatible with the disclosed purpose for which the personal data is processed.  Helpfully, the MODPA provides that controllers and processors are not restricted from their ability to engage in an enumerated list of processing activities (e.g., protecting against and investigating fraud and security incidents and for internal use to perform certain internal operations reasonably anticipated by consumers), although only to the extent such processing is reasonably necessary and proportionate to the enumerated purposes.
  • Sensitive Personal Data Restrictions:  The MODPA would broadly prohibit the sale of sensitive personal data, and restrict the collection, processing, or sharing of sensitive personal data except when “strictly necessary to provide or maintain a specific product or service requested by the consumer.”  The MODPA defines sensitive personal data to include:  racial or ethnic origin, religious beliefs, sex life, sexual orientation, status as transgender or nonbinary, national origin, or citizenship or immigration status, genetic or biometric data, personal data collected from a consumer under 13 years old, precise geolocation data, and certain consumer health data. 
  • Consumer Health Data:  The MODPA's definition of consumer health data encompasses personal data that the controller uses to identify a consumer's physical or mental health status, including data related to gender-affirming treatment or reproductive or sexual healthcare.  A person may not grant an employee or contractor access to consumer health data unless the recipient is subject to a contractual or statutory duty of confidentiality, or confidentiality is required as a condition of employment.  Consumer health data is considered sensitive personal data under the MODPA.  As such, the MODPA's restrictions on sensitive personal data would similarly apply to consumer health data.  Like Connecticut's law, Maryland's privacy law would also prohibit the use of geofence technology to establish a virtual boundary around certain health facilities for the purpose of identifying, tracking, or collecting data from, or sending notifications to consumers regarding the consumers' consumer health data.
  • Consumers Under 18 Years Old:  The MODPA would prohibit the sale, or processing for purposes of targeted advertising, of personal data of consumers under the age of 18 years. 
  • Anti-discrimination:  The MODPA would prohibit, with limited exceptions, the collection, processing, or transferring of personal data or publicly available data “in a manner that unlawfully discriminates in or otherwise unlawfully makes unavailable the equal enjoyment of goods or services on the basis of race, color, religion, national origin, sex, sexual orientation, gender identity, or disability.” 
  • Data Protection Assessments:  The Act would require data protection assessments for processing activities that involve targeted advertising, the sale of personal data, profiling (in limited circumstances), the processing of sensitive data, among others.
  • Loyalty Program Conditions:  Under the MODPA, controllers would be prohibited from conditioning consumer participation in loyalty programs on the sale of consumer personal data. 
  • Enforcement: MODPA grants exclusive enforcement power to the Maryland Attorney General and provides for a 60-day cure period that sunsets April 1, 2027. 
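As noted in the Scope bullet above, here is a simplified applicability sketch. It models only the numerical thresholds summarized there; the targeting analysis and the statute's exemptions are left out, and the function name and inputs are our own illustrative assumptions.

```python
# Illustrative only: a simplified check of the MODPA applicability thresholds
# described above. Exemptions (nonprofits, HIPAA, financial institutions, etc.)
# are not modeled.

def modpa_applies(targets_maryland_residents: bool,
                  md_consumers_processed: int,
                  pct_revenue_from_data_sales: float) -> bool:
    if not targets_maryland_residents:
        return False
    if md_consumers_processed >= 35_000:
        return True
    return md_consumers_processed >= 10_000 and pct_revenue_from_data_sales > 20.0

if __name__ == "__main__":
    print(modpa_applies(True, 12_000, 25.0))  # True
    print(modpa_applies(True, 12_000, 5.0))   # False
```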
Nebraska Enacts Nebraska Data Privacy Act https://www.lexblog.com/2024/04/29/nebraska-enacts-nebraska-data-privacy-act/ Mon, 29 Apr 2024 14:05:41 +0000 https://www.lexblog.com/2024/04/29/nebraska-enacts-nebraska-data-privacy-act/ On April 17, the Nebraska governor signed the Nebraska Data Privacy Act (the "NDPA") into law.  Nebraska is the latest state to enact comprehensive privacy legislation, joining California, Virginia, Colorado, Connecticut, Utah, Iowa, Indiana, Tennessee, Montana, Oregon, Texas, Florida, Delaware, New Jersey, New Hampshire, Kentucky, and Maryland. The NDPA will take effect on January 1, 2025.  This blog post summarizes the statute's key takeaways.

  • Scope:  Similar to Texas’s comprehensive privacy law, the NDPA does not use numerical thresholds of consumers’ data collected to determine applicability.  Instead, the NDPA applies to persons who (1) conduct business in Nebraska or produce products or services consumed by Nebraska residents, and (2) process or sell personal data.  The NDPA includes many exemptions present in other state comprehensive privacy laws, including exemptions for nonprofits, government entities, financial institutions, and protected health information under HIPAA, among others. 
  • Consumer Rights:  The NDPA, among other things, grants consumers the rights of access, deletion, portability, and correction.  The NDPA will also allow consumers to opt-out of targeted advertising, the sale of personal data, and automated profiling in furtherance of decisions producing a legal or similarly significant effect concerning the consumer.  The NDPA’s definition of “sale of personal data” includes “the exchange of personal data for monetary or other valuable consideration.”
  • Sensitive Data:  Controllers will be required to obtain consent before processing a consumer’s sensitive data.  The NDPA defines sensitive data as personal data that reveals racial or ethnic origin, religious beliefs, a mental or physical health diagnosis, sexual orientation, or citizenship or immigration status, genetic or biometric data processed to uniquely identify individuals, personal data collected from a known child, and precise geolocation data.
  • DPIAs:  The NDPA would require Data Protection Impact Assessments (“DPIAs”) for processing activities that involve targeted advertising, the sale of personal data, profiling (in limited circumstances), processing of sensitive data, or would otherwise present a heightened risk of harm to consumers.
  • Enforcement:  The Nebraska Attorney General will have exclusive authority to enforce the Act.  The statute also provides controllers and processors with a 30-day right to cure that does not sunset.
EHDS Series – 5: European Health Data Space Governance, Enforcement and Timelines https://www.lexblog.com/2024/04/25/ehds-series-5-european-health-data-space-governance-enforcement-and-timelines/ Thu, 25 Apr 2024 09:55:02 +0000 https://www.lexblog.com/2024/04/25/ehds-series-5-european-health-data-space-governance-enforcement-and-timelines/ In March 2024, the EU lawmakers reached agreement on the European Health Data Space (EHDS).  Although the text has not yet been formally adopted by all the European institutions, a number of interesting points can already be highlighted.  This article focuses on the governance and enforcement of the EHDS; for an overview of the EHDS generally, see our first post in this series.

The final text of the EHDS was adopted by the European Parliament on 24 April 2024 and is expected to be formally adopted by the European Council in the coming months.

1: Governance

    The EHDS establishes a new independent advisory and regulatory body, the European Health Data Space Board (EHDS Board) to facilitate the exchange of information and cooperation among Member States and with the European Commission.  The Board has a wide remit, albeit a consultative one.  In respect of secondary use of health data, for example, it will assist Member States in coordinating the practices of their Health Data Access Bodies (HDABs), exchange best practices that help the Commission in preparing its secondary legislation, and share information on identified risks and incidents in relation to the secondary use of health data.

    The EHDS Board will be composed of two representatives per Member State, one nominated for primary use (health care) and the other for secondary use (scientific research).  It will be co-chaired by one representative of the EU Commission and one representative of the Member States.  The Board can also draw on external experts.

    In addition to the EHDS Board, the EHDS creates a "stakeholder forum" through which relevant stakeholders can advise the EHDS Board by providing practical views on their respective sectors.  The stakeholder forum will be composed of, but not limited to, representatives of the pharmaceutical industry, health professionals, consumers, patients, and scientific researchers.  Commercial and non-commercial interests will need to be equally represented.  The members will be appointed by the EU Commission following an open call for interest.

    2: Enforcement

    The EHDS contains a dedicated enforcement section in relation to the secondary use of health data.  Enforcement is primarily attributed to each Member State's HDAB, which must monitor compliance by data holders and data users and may request information from them as needed to verify such compliance.  In addition, individuals have a right to lodge a complaint (individually or collectively) with the HDAB.

    In particular, the HDAB has the power to:

    • revoke a health data user’s permit and exclude a data user from EHDS for up to five years;
    • fine a data holder who refuses to share data, with periodic penalty payments (the amount of which will be established under national law) and, in case of repeated breaches, exclude the data holder from access to EHDS data as a data user, while being required to share as a data holder;
    • inform other HDABs of such measures – the Commission will set up an IT tool for this purpose; and
    • inform the Data Protection Authorities of any possible breach of the GDPR.

    In addition, the HDAB can impose an administrative fine on data users and, to a lesser extent, on data holders.  The fining language in the EHDS is quite similar to that of the GDPR, and so are the potential fines.  Minor infringements by data users can be subject to a fine of up to €10 million or, in the case of an undertaking, 2% of the total worldwide annual turnover of the preceding financial year.  Some violations, however, can be subject to a fine of up to €20 million or, in the case of an undertaking, 4% of the total worldwide annual turnover of the preceding financial year (a simple illustration of these caps follows the list below).  Violations subject to these increased fines include:

    • a data user using data for prohibited purposes;
    • a data user extracting personal data (instead of anonymous data) from the HDAB’s secure processing environment – presumably by circumventing protections put in place by the HDAB;
    • a data user trying to re-identify individuals; and
    • a data user or data holder not complying with an HDAB’s enforcement measures.
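As flagged above, the fine caps follow the familiar GDPR pattern of the greater of a flat amount or a percentage of worldwide annual turnover. The sketch below illustrates that arithmetic only; the turnover figure is an assumption for illustration.

```python
# Illustrative only: EHDS administrative fine caps, mirroring the GDPR's
# "greater of a flat amount or a percentage of worldwide annual turnover" structure.

def max_fine(turnover_eur: float, serious: bool) -> float:
    """Upper bound of the fine for an undertaking, per the summary above."""
    if serious:
        return max(20_000_000, 0.04 * turnover_eur)  # €20M or 4% of turnover
    return max(10_000_000, 0.02 * turnover_eur)      # €10M or 2% of turnover

if __name__ == "__main__":
    assumed_turnover = 2_000_000_000  # assumption for illustration
    print(f"{max_fine(assumed_turnover, serious=True):,.0f}")   # 80,000,000
    print(f"{max_fine(assumed_turnover, serious=False):,.0f}")  # 40,000,000
```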

    As an exception to the above, Data Protection Authorities are responsible for enforcing the EHDS opt-out of individuals, in accordance with the enforcement provisions of the GDPR.

    Finally, individuals have the right to receive compensation for material or non-material damage suffered as a result of an infringement of the EHDS by a digital health authority, a health data access body, a health data holder, or a health data user, in accordance with national and Union law.  They also have the right to mandate a non-profit organization, with statutory objectives that are in the public interest, constituted in accordance with Member State law and active in the field of data protection, to lodge a complaint on their behalf.  These organizations would be the same as those that may represent individuals under the GDPR.  According to the recitals, the concept of damage should be interpreted broadly in light of the case law of the Court of Justice of the EU (see our blog here for more on the emerging EU case-law on non-material damages).

    3: Timelines

    The EHDS is a massive endeavor that will require some time to put in place, both for regulated companies and for government bodies.  In this series of blog posts we focused on the secondary use of health data, but the EHDS also contains important chapters on electronic health records and cross-border health care, which will also require much effort and funding from Member States.  As a result, the timelines for implementation of the EHDS are quite long.

    In relation to secondary use specifically, the EHDS obligations will start applying four years after its entry into force (i.e., around 2028), except for certain data categories, such as clinical trial data and human genetic data, for which the implementation grace period extends to six years (i.e., around 2030).  The European Commission's ability to recognize third countries, such as the UK and Switzerland, to participate in the EHDS is deferred for ten years – though this does not automatically exclude data users from third countries from participating in the EU EHDS (see our blog post on data users here).
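Because the application dates are all keyed to entry into force, the timeline arithmetic is simple. The sketch below works through it using an assumed entry-into-force date, which is an assumption rather than the actual date.

```python
from datetime import date

# Illustrative timeline arithmetic only; the real dates depend on when the EHDS
# is published and enters into force, which is assumed here.

def add_years(d: date, years: int) -> date:
    return d.replace(year=d.year + years)

if __name__ == "__main__":
    assumed_entry_into_force = date(2024, 12, 1)  # assumption, not the real date
    print(add_years(assumed_entry_into_force, 4))   # general secondary-use obligations (~2028)
    print(add_years(assumed_entry_into_force, 6))   # clinical trial and genetic data (~2030)
    print(add_years(assumed_entry_into_force, 10))  # third-country recognition (~2034)
```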

    EHDS Series – 4: The European Health Data Space’s Implications for “Wellness Applications” and Medical Devices https://www.lexblog.com/2024/04/24/ehds-series-4-the-european-health-data-spaces-implications-for-wellness-applications-and-medical-devices/ Wed, 24 Apr 2024 12:09:17 +0000 https://www.lexblog.com/2024/04/24/ehds-series-4-the-european-health-data-spaces-implications-for-wellness-applications-and-medical-devices/ In early March 2024, the EU lawmakers reached agreement on the European Health Data Space (EHDS).  For now, we only have a work-in-progress draft version of the text, but a number of interesting points can already be highlighted. This article focuses on the implications for “wellness applications” and medical devices; for an overview of the EHDS generally, see our first post in this series.

    The final text of the EHDS was adopted by the European Parliament on 24 April 2024 and is expected to be formally adopted by the European Council in the coming months.

    1: Wellness Applications and Medical Devices in Relation to Electronic Health Records

    a) Wellness applications

      The EHDS contains specific provisions on “wellness applications” that claim interoperability with electronic health records (“EHRs”).   Under the original proposal for the EHDS, published in May 2022, “wellness applications” were defined as:

      “any appliance or software intended by the manufacturer to be used by a natural person for processing electronic health data for other purposes than healthcare, such as well-being and pursuing healthy life-styles.” (emphasis added)

      The latest draft of the EHDS defines the term more broadly as:

      “any appliance or software intended by the manufacturer to be used by a natural person for processing electronic health data specifically for providing information on the health of individual persons, or the delivery of care for other purposes than the provision of healthcare.” (emphasis added)

      Wellness applications claiming interoperability with EHR systems in relation to the harmonized components of EHR systems (and thus complying with the essential requirements and applicable common specifications) must, before they are placed on the market (and just like EHR systems), use a "digital testing environment" made available by the European Commission or the Member States to assess the harmonized components of their application.

      Assuming the result of the test is positive, the manufacturer has to apply a (digital) label to the wellness application to inform the user of the interoperability and its effects.  The label is issued by the manufacturer and is valid for a maximum of three years.  The European Commission will determine the format and content of the label. 

      Interoperability of a wellness application does not mean automatic transfer of data to the user’s EHR.  Such sharing may only take place with consent of the users, who must also have the technical ability to decide which parts of the data they want to insert in their EHR and in which circumstances.

      Finally, manufacturers of labelled wellness applications must register their application, including the results of the test environment, in an EU database maintained and made public by the European Commission.

      b) Medical devices

      1. Interoperability

      The EHDS also has implications for medical devices, but when it comes to defining those obligations, the definitions and current drafting of the EHDS are open to interpretation.  For one, the revised definition of “wellness application” is potentially broad enough to capture medical devices, as it now seems to cover appliances or software that provide information on the health of individual persons or the delivery of care for other purposes than the provision of healthcare.  Depending on how you read it, the phrase “other purposes than the provision of healthcare” does not necessarily limit both preceding elements of the definition.  Further, the definition of “EHR system” (electronic health record system) is very broad and includes any “appliance or software”.  Both of these definitions could potentially include medical devices.

      Similar to the position for wellness applications, medical devices and IVDs that claim interoperability with the harmonised components of EHR systems must “prove compliance with the essential requirements on the European interoperability component for EHR systems and the European logging component for EHR systems” laid down in Section 2, Annex II of the EHDS.

      The essential requirements on interoperability of the EHDS would only apply to the extent that the manufacturer of a medical device/IVD, which is providing electronic health data to be processed as part of the EHR system, claims interoperability with an EHR system. In such case, the provisions on “common specifications” for EHR systems should be applicable to those medical devices.

      2. Conformity assessment

      The recitals of the EHDS expressly acknowledge that certain components of EHR systems can qualify as medical devices and be subject to the Medical Device Regulation (EU) 2017/745 (“MDR”) or the In Vitro Diagnostic Medical Devices Regulation (EU) 2017/746 (“IVDR”).

      As articulated in Recital 29 of the EHDS, software, or module(s) of software, that is a medical device, IVD or high-risk AI system should be certified in accordance with the MDR, IVDR and the AI Act, as applicable.  Let’s imagine (1) a medical device (2) that stores or views electronic health records (3) to achieve its medical device intended purpose and that (4) uses AI for data processing.  This hypothetical product would qualify as both a medical device and an EHR system and, on top of that, it uses AI to achieve its intended device purpose.  Hence, the manufacturer would be required to conduct conformity assessments under (at least) three different EU Regulations: (1) the MDR, (2) the AI Act, and (3) the EHDS Regulation.

      In such cases, “[w]here EHR systems are subject to other Union legislation in respect of aspects not covered by this Regulation, which also requires an EU declaration of conformity by the manufacturer..., a single EU declaration of conformity shall be drawn up in respect of all Union acts applicable to the EHR system.”

      Although the EHDS requires EU Member States to take appropriate measures to ensure that the respective conformity assessment is carried out as a joint or coordinated procedure in order to limit the administrative burden on manufacturers, it will be interesting to see how the EU Member States will achieve this in practice.  Experience has shown that this has not worked for the (single) conformity assessments under the MDR...

      3. Registration

      There also seems to be some contradiction in the latest draft EHDS when it comes to registration requirements for medical devices.  Article 32(3) EHDS suggests that medical devices that also qualify as EHR systems or claim interoperability with EHR systems need to be registered in the new “EU database for registration of EHR systems and wellness applications” in addition to registration under medical devices rules (i.e., registration with EUDAMED).  However, Recital 36 EHDS conflicts with this, implying that the new registration obligations apply only to “EHR systems and wellness applications, which are not falling within the scope of Regulations (EU) 2017/745 and [...] [AI act COM/2021/206 final]...”, and that “[f]or medical devices the registration should be maintained under the existing databases...”

      Since the EU appears to be facing significant challenges with larger IT projects like EU-wide databases (e.g., EUDAMED, CTIS), it remains to be seen, in any event, whether the new EU database for registration of EHR systems and wellness applications under the EHDS will be functional in time to support implementation of the EHDS.

      2: Secondary use

      The EHDS sets out a long list of covered electronic health data that should be made available for secondary use under the EHDS.  It includes, among others, data from wellness applications and health data from medical devices.  Note that this chapter of the EHDS is not limited to wellness applications and medical devices that claim interoperability with EHRs; it seems to apply to all wellness applications and medical devices.  Holders of data generated by wellness applications and devices (see our blog on data holders here for more) will have to share this data upon request from a health data access body (“HDAB”).

      Note, however, that Member States are apparently allowed to introduce stricter safeguards (for example, an opt-in consent) for the re-use of health data from wellness applications under the EHDS, but not for health data from the EHR systems those applications can interoperate with, or for health data from medical devices.  This makes little sense, and it will be interesting to see what safeguards (if any) Member States introduce in practice.

      We will keep you posted about any further developments.

      EFPIA Issues Statement on Application of the AI Act in the Medicinal Product Lifecycle https://www.lexblog.com/2024/04/23/efpia-issues-statement-on-application-of-the-ai-act-in-the-medicinal-product-lifecycle/ Tue, 23 Apr 2024 08:58:54 +0000 https://www.lexblog.com/2024/04/23/efpia-issues-statement-on-application-of-the-ai-act-in-the-medicinal-product-lifecycle/ On April 22, 2024, the European Federation of Pharmaceutical Industries and Associations (“EFPIA”) issued a statement on the application of the AI Act in the medicinal product lifecycle. The EFPIA statement highlights that AI applications are likely to play an increasing role in the development and manufacture of medicines.  As drug development is already governed by a longstanding and detailed EU regulatory framework, EFPIA stresses that care should be taken to ensure that any rules on the use of AI are fit-for-purpose, adequately tailored, risk-based, and do not duplicate existing rules.  The statement sets forth five “considerations”:

      1. R&D AI qualifies under the AI Act’s research exemption

      The AI Act does not apply to AI systems and models developed and put into service solely for scientific research purposes.  Accordingly, the exemption should encompass AI-based drug development tools used in research and development, as that is their sole use. 

      2. Other R&D AI generally not “high risk”

      AI systems and models used in the research and development of medicines that do not fall under the exemption should not be considered high-risk AI, as they generally do not satisfy the criteria for “high risk” AI systems set forth in Article 6 of the AI Act.

      3. No need for additional regulation of R&D AI

      The development of medicines in Europe is already subject to an intricate set of very detailed rules and regulations.  This regulatory system should suffice to also address the use of AI in the development of medicines, without the need for additional regulation.

      4. The European Medicines Agency’s (EMA) expected guidance is welcome

      EFPIA welcomes the EMA’s efforts to assess the impact of AI in R&D, such as in its consultation on a draft reflection paper and multi-annual work plan, and its emphasis on a “risk-based” approach.  This existing regulatory framework should be able to tackle any concerns related to AI in the development of medicines.

      5. R&D AI governance should be calibrated to its context

      Finally, the EFPIA statement points out that AI regulation should remain flexible in order to keep pace with technological development, but should also be able to adapt to the different contexts in which it is applied, including the relevant stage of a product’s development, its impact on the risk-benefit analysis of a medicine and the applicable level of human oversight.  Collaboration among all stakeholders concerned should help to ensure that the potential of AI can be unlocked while respecting fundamental rights, safety and ethical principles.

      Congressional Review Act Threat Looms Over Biden Administration Rulemakings https://www.lexblog.com/2024/04/22/congressional-review-act-threat-looms-over-biden-administration-rulemakings-5/ Tue, 23 Apr 2024 02:09:50 +0000 https://www.lexblog.com/2024/04/22/congressional-review-act-threat-looms-over-biden-administration-rulemakings-5/ With the 2024 election rapidly approaching, the Biden Administration must race to finalize proposed agency actions as early as mid-May to avoid facing possible nullification if the Republican Party controls both chambers of Congress and the White House next year.  This post summarizes the Congressional Review Act (“CRA”) which will apply to a number of U.S. federal rulemakings, including those related to privacy and cybersecurity.

      The CRA allows Congress to overturn rules issued by the Executive Branch by enacting a joint resolution of disapproval that cancels the rule and prohibits the agency from issuing a rule that is “substantially the same.”  One of the CRA’s most distinctive features—a 60-day “lookback period”—allows the next Congress 60 days to review rules issued near the end of the last Congress.  This means that the Administration must finalize and publish certain rules long before Election Day to keep those rules from being eligible for CRA review in the new year.

      Overview of the CRA

      The CRA requires federal agencies to submit all final rules to Congress before the rule may take effect.  It provides the House with 60 legislative days and the Senate with 60 session days to introduce a joint resolution of disapproval to overturn the rule.  This 60-day period counts every calendar day, including weekends and holidays, but excludes days that either chamber is out of session for more than three days pursuant to an adjournment resolution.  In the Senate, a joint resolution of disapproval receives only limited debate and may not be filibustered.  Moreover, if it has been more than 20 calendar days since Congress received a final rule and a joint resolution has not been reported out of the appropriate committee, a group of 30 Senators can file a petition to force a floor vote on the resolution.

      If a CRA resolution receives a simple majority in both chambers and is signed by the President, or if Congress overrides a presidential veto, the rule cannot go into effect and is treated “as though such rule had never taken effect.”[1]  The agency is also barred from reissuing a rule that is “substantially the same,” unless authorized by future law.[2]    

      Election Year Threat: CRA Lookback Period

      These procedures pose special challenges for federal agencies in an election year.  If a rule is submitted to Congress within 60 days before adjournment, the CRA’s lookback provision allows the 60-day timeline to introduce a CRA resolution to start over in the next session of Congress.

      This procedure ultimately requires the current administration to assess the threat of a CRA resolution against certain rules and determine whether to issue the rule safely before the deadline or risk a potential CRA challenge. 

      Mid-May Deadline Estimated for Biden Agency Actions

      Calculating the CRA deadline is exceedingly difficult for federal agencies because it is a moving target.  The House and Senate may each cancel or add days to their legislative calendar right up until adjournment, making it impossible to calculate the deadline with precision.  Moreover, the lookback period starts on the date that is 60 days before adjournment, regardless of which chamber reaches that threshold first.  In other words, if the date that is 60 House legislative days before adjournment falls on a date that is 65 Senate session days prior to adjournment, that date starts the lookback window, even though the Senate is not within 60 session days of adjournment. 

      While only the House and Senate parliamentarians can determine the lookback window with authority, based on the currently released House and Senate calendars, agency rules submitted to Congress before May 22, 2024 will not be subject to CRA review by the new 119th Congress in 2025.  However, since this deadline is based on the House calendar, it would be pushed later if the House were to add legislative days to the calendar to complete legislative work.
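
      To make these mechanics concrete, the sketch below shows one way to estimate where a lookback window of this kind begins for a single chamber, given that chamber's scheduled in-session days and an assumed adjournment date.  It is purely illustrative: the function name, its parameters, and the toy calendar are our own assumptions, the actual count turns on the official House and Senate calendars as they stand at adjournment, and only the parliamentarians can fix the date with authority.

      from datetime import date

      def lookback_window_start(in_session_days, adjournment, window=60):
          """Estimate the first day of a CRA-style lookback window for one chamber.

          in_session_days: iterable of datetime.date objects for the chamber's
              scheduled legislative or session days (a hypothetical input here).
          adjournment: assumed sine die adjournment date.
          window: number of counted days in the lookback period (60 under the CRA).
          """
          # Keep only scheduled days up to and including adjournment, in date order.
          days = sorted(d for d in in_session_days if d <= adjournment)
          if len(days) < window:
              raise ValueError("calendar has fewer scheduled days than the window")
          # The window opens on the `window`-th counted day before adjournment; rules
          # submitted to Congress on or after this date could be reviewed anew by the
          # next Congress.
          return days[-window]

      # Toy check with a hypothetical three-day calendar and a two-day window:
      assert lookback_window_start(
          [date(2024, 5, 20), date(2024, 5, 21), date(2024, 5, 22)],
          adjournment=date(2024, 5, 22),
          window=2,
      ) == date(2024, 5, 21)

      Running this separately for each chamber and taking the earlier of the two dates would mirror the point above that the window opens at whichever chamber reaches its 60-day threshold first.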

      Administration Preparations

      Agency leaders are working to finalize rules in time to meet this deadline.  At a recent conference held by the American Law Institute’s Continuing Legal Education, Vicki Arroyo, the Environmental Protection Agency’s (EPA) Associate Administrator for Policy, explained that the deadline is “something that [the EPA is] very focused on.”[3]  Although the deadline is uncertain, she noted that to be cautious, agencies may submit rules as “early as the end of April or May.”[4]

      Federal agencies have already begun to submit, or plan to submit, rules to Congress before the late May deadline, including:

      • An EPA rule that sets new standards to reduce air pollutant emissions from cars and a rule that requires fossil fuel plants to rely on new technologies to reduce pollution levels.
      • A Department of Labor rule that modifies Wage and Hour regulations to clarify the criteria for classifying workers as independent contractors as opposed to employees, and a rule that narrows the standards for classifying employees as exempt from the Fair Labor Standards Act’s minimum wage and overtime requirements. 
      • A Department of Justice rule that takes additional steps to implement the Bipartisan Safer Communities Act, which makes various changes to federal firearm laws, including expanding background check requirements and broadening the scope of existing restrictions.
      • The Energy Department’s regulations setting consumer water heater energy efficiency standards to lower utility costs for American families and increase energy savings.
      • An Office of Personnel Management rule that would implement stronger guardrails for career employees, allowing them to keep civil service protections unless they voluntarily accept a political appointee position and adding requirements when reclassifying career positions as political appointments.
      • A Bureau of Land Management rule setting management standards that put conservation on par with resource extraction to protect public lands and restore degraded habitats. 
      • A Department of Health and Human Services rule that establishes comprehensive minimum staffing standards for nursing homes.
      • A Department of Housing and Urban Development rule proposing a framework to ensure that federal funding is used in a systematic way to affirmatively support the goals of the Fair Housing Act.

      Some rules will not be finalized by the deadline, which could signal that agency leaders view these rules as less vulnerable to the CRA, either because they enjoy bipartisan support or because they are not politically polarizing.  For example, the EPA plans to finalize a rule in October that would mandate actions to reduce lead exposure, including replacing lead pipes within the next ten years. 

      In addition, several rules were recently submitted to the Office of Information and Regulatory Affairs (OIRA), which generally has 90 days to review a rule before it is sent to Congress.  It is possible that OIRA’s 90-day review period will push these rules past the May deadline for submission to Congress.  Examples of these rules include:

      • A Department of Education rule that establishes a negotiated rulemaking committee to prepare regulations for the Federal Student Aid programs related to the modification, waiver, release, or compromise of student loans under the Higher Education Act of 1965.
      • A Social Security Administration rule that expands the definition of a public assistance household to reduce administrative burdens for low-income households participating in public assistance programs.
      • A Department of Agriculture rule that sets standards for labeling meat and poultry products produced using animal cell culture technology—a process that involves taking a small number of cells from living animals and growing them in a controlled environment to generate food.

      The fate of these rules will depend on how soon the Administration can finish these rulemakings and submit them to Congress.  Otherwise, the outcome of the 2024 election will determine whether these rules ever take effect. 


      [1] 5 U.S.C. § 801(f).

      [2] Id. § 801(b).

      [3] Kevin Bogardus, Murky Deadline Looms for Biden’s Regs, E&E News by Politico (Mar. 21, 2024), https://www.eenews.net/articles/murky-deadline-looms-for-bidens-regs/.

      [4] Kevin Bogardus, Murky Deadline Looms for Biden’s Regs, E&E News by Politico (Mar. 21, 2024), https://www.eenews.net/articles/murky-deadline-looms-for-bidens-regs/.

      NIS2 implementation enters the final stretch – six months to deadline https://www.lexblog.com/2024/04/18/nis2-implementation-enters-the-final-stretch-six-months-to-deadline/ Thu, 18 Apr 2024 19:15:40 +0000 https://www.lexblog.com/2024/04/18/nis2-implementation-enters-the-final-stretch-six-months-to-deadline/ In six months’ time, on 17 October 2024, Member State laws that transpose the EU’s revised Network and Information Systems Directive (“NIS2”) will start to apply.  As described in more detail in our earlier blog post (here), NIS2 significantly expands the categories of organizations that fall within the scope of EU cybersecurity legislation. This new, cross-sector law imposes additional and more granular security and incident reporting rules and enhanced governance requirements that apply to organizations’ “management bodies,” and creates a stricter enforcement regime.

      Organizations that are preparing for NIS2 need to keep a watchful eye on national implementing laws, competent authorities, and secondary legislation from the Commission on some of the substantive requirements.

      Some Member States (e.g., Croatia) have already passed their transposing legislation, and others (e.g., Germany and Belgium) have published draft laws that are going through the legislative process. Despite the October deadline, many Member States have not yet published drafts or started their legislative process. NIS2 is a “minimum harmonization” law, meaning that Member States’ implementing laws can impose additional obligations beyond those set out in the text of the Directive. 

      As we enter the last six months before national laws start to apply, establishing which Member States’ competent authorities will have jurisdiction to enforce NIS2 will also be a critical assessment for regulated entities.

      We also expect to see European Commission implementing acts that will flesh out NIS2 obligations, complementing guidance the Commission published earlier this year (see our blog here). These implementing acts were expected to be published in early 2024, but have not yet materialized. Watch this space.

      *                      *                      *

      The Data Privacy and Cybersecurity Practice at Covington has deep experience advising on privacy and cybersecurity issues across Europe, including on NIS, NIS2, and other cyber-related regulations. If you have any questions about Member State transpositions of NIS2, how NIS2 will affect your business, or about developments in the cybersecurity space more broadly, our team would be happy to assist.

      Rounding up Five Recent CJEU Cases on GDPR Compensation https://www.lexblog.com/2024/04/18/rounding-up-five-recent-cjeu-cases-on-gdpr-compensation/ Thu, 18 Apr 2024 13:29:45 +0000 https://www.lexblog.com/2024/04/18/rounding-up-five-recent-cjeu-cases-on-gdpr-compensation/ In recent months, the European Court of Justice (“CJEU”) issued five judgments providing some clarity on the scope of individuals’ rights to claim compensation for “material and non-material damage” under Article 82 of the GDPR. These rulings will inform companies’ exposure to compensation claims, particularly in the context of the EU’s Collective Redress Directive, but open questions remain about the quantum of compensation courts will offer in these cases and we expect both the CJEU and national courts to deliver additional case-law clarifying this topic in the coming year (for more information on recent CJEU cases related to compensation, see our previous blog posts here and here).

      • In VB v Natsionalna agentsia za prihodite (C-340/21), the CJEU concluded that individuals may have suffered “non-material damage”—and therefore be able to claim compensation—if they can demonstrate that they feared future misuse of personal data that was compromised in a personal data breach.  
      • In VX v Gemeinde Ummendorf (C-456/22), the CJEU found that there is no de minimis threshold for damage, below which individuals cannot claim for compensation.
      • In BL v MediaMarktSaturn (C-687/21), the CJEU restated its existing case-law, and expanded upon its analysis in VB by clarifying that alleged harms cannot be “purely hypothetical”.
      • In Kočner v Europol (C-755/21), the CJEU awarded non-material damages of €2000 for the publication in newspapers of transcripts of “intimate” text messages.
      • In GP v Juris GmbH (C-741/21), the CJEU found that where one processing activity infringes multiple provisions of the GDPR, this should not allow claimants to “double-count” the harm they suffered.

      We provide further detail on each case below.

      The VB case

      The VB case arose out of a 2019 cyberattack suffered by the Bulgarian National Agency for Public Revenues. This attack resulted in the publication of millions of individuals’ personal data on the internet. One affected individual brought a claim for compensation, alleging that she had suffered non-material damage because she was afraid the publication of her personal data could lead to misuse of that data in the future—despite there being no evidence that her data had in fact been misused.

      The CJEU noted that the concept of “damage” in Article 82 should be “broadly interpreted in light of the case-law of the CJEU in a manner which fully reflects the objectives of the GDPR”. Consequently, the CJEU held that the mere fear of future misuse of personal data that was compromised in a personal data breach can constitute non-material harm under the GDPR, even if there is no evidence that any misuse has occurred. That said, individuals will only be able to claim compensation if they can prove that they have in fact suffered such harm as a result of the breach, i.e., that they in fact had a well-founded fear of future misuse, and can demonstrate that the damage is causally linked to the alleged GDPR infringement. EU Member States’ courts will need to assess this factual question as well as the thorny question of assigning a monetary value to this “fear”, on a case-by-case basis. 

      The VX case

      The VX ruling originated from the online publication of personal data (including names and home addresses) contained in municipal council meeting agendas. This data was accidentally published for a period of 3 days on the council’s website before the error was noticed and the files were taken down. Some of the individuals whose data appeared in the meeting agendas brought claims for compensation under Article 82 of the GDPR; the municipal council argued that no compensation should be payable as the affected individuals had not suffered any meaningful harm. That is, the municipal council argued that Article 82 should not allow compensation for trivial harms that fall below a “de minimis” threshold.

      Similarly to the VB ruling, the CJEU reiterated that non-material damage should be interpreted broadly. It therefore dismissed the municipal council’s arguments and held that, “it would be contrary to Article 82 of the GDPR to limit it solely to the damage of a certain degree of seriousness”. The effect of this is that, while it is up to the claimants to prove as a matter of fact that they suffered damage, and to prove that the damage was causally linked to a GDPR breach (as described below), there is no “de minimis” threshold preventing claimants from bringing claims relating to minor or trivial harms.

      The BL case

      The BL case arose when BL bought some items from an electronics store and chose to pay for the goods on credit; to apply for the credit, he had to fill out a form containing his personal data such as his bank details and salary. Due to a mistake by the store, BL’s goods and his forms were given to the wrong customer, as a result of which that other customer held BL’s goods and forms for about 30 minutes until the store realised the error and retrieved them. BL claimed that the provision of the forms (i.e., BL’s personal data) to the other customer violated the GDPR and he demanded compensation from the store. 

      The CJEU reiterated, as it did in the cases above, that the purpose of Article 82 is ultimately compensatory rather than punitive: any damages awarded under Article 82 must be calculated by reference to the detriment that BL suffered as a result of the breach, rather than the desire to punish the electronics store for the breach. The CJEU then went on to reiterate its conclusions in the Austria Post case (see our blog here on that case) as to the conditions that must be satisfied for non-material damages to be awarded, namely: (i) the individual suffered damage; (ii) there was an infringement of the GDPR; and (iii) there is a causal link between the damage and the infringement. That is, even if the electronics store had breached the GDPR by handing BL’s documents to the wrong person, this was not sufficient to enable BL to claim damages – he had to prove as a matter of fact that he suffered harm and that the GDPR breach was causally linked to that harm.

      Finally, the CJEU clarified its statements in the VB case that fear of personal data misuse may amount to “damage” for the purposes of Article 82, holding that such fear must be “well-founded,” meaning that there must be reasonable grounds for the fear rather than “a purely hypothetical risk of misuse”.

      The Kočner case

      The Kočner case arose from an investigation into the murder of a Slovakian journalist. During this investigation, the Slovakian crime agency requested Europol’s assistance in deciphering data stored on devices owned by Mr Kočner. Some of the information stored on these phones, including transcripts of “intimate and sexual communications” between Mr Kočner and his girlfriend, were subsequently published in the media.

      Mr Kočner claimed that the publication of this information caused him non-material damage, and sought compensation under Regulation 2016/794 (the Europol regulation), which requires Europol to protect individuals against unlawful processing of their personal data. The European Court of Justice found that Europol had breached the Europol regulation by allowing Mr Kočner’s information to fall into the hands of journalists, and that those journalists’ subsequent publication of this data “adversely affected [Mr Kočner’s] honour and reputation, which caused him non-material damage”.

      The court then assessed the amount of compensation owed to Mr Kočner for this damage at €2000.  Unhelpfully, the court did not provide a justification for this figure, beyond noting that it had assessed the compensation “on an equitable basis”.

      The GP case

      The GP case arose when GP received, on three occasions, marketing letters from Juris (a provider of legal research services) despite having previously revoked his consent for, and exercised his right to object to, the use of his data for marketing purposes.

      GP claimed that Juris had committed multiple breaches of the GDPR by failing to honour his consent withdrawal and his objection, and that the amount of compensation payable to him should be increased because of the multiple breaches. He then went on to claim that these breaches amounted to a “loss of control” of his personal data, and that he was therefore entitled to claim compensation per se, without needing to demonstrate that he had suffered any actual harm.

      The CJEU first reiterated its analysis in Austria Post and BL that a breach of the GDPR does not lead to a right to compensation per se; instead, that breach must be linked to an actual harm that the claimant can demonstrate.

      The CJEU then went on to note that, when calculating non-material damages, it is not relevant to consider the factors set out in the GDPR for calculating fines issued by supervisory authorities (such as “any relevant previous infringements” or “financial benefits gained” by the data controller). As in BL, the court noted that the purpose of the GDPR’s non-material damages provisions is fundamentally to compensate the harm suffered by the data subject, rather than punish the data controller – so the “gravity of the infringement... that caused the alleged [damage] cannot influence the amount of compensation granted”.

      Finally, the CJEU considered how courts should calculate damages in cases where a controller is accused of multiple breaches of the GDPR, all of which cause the same harm (e.g., because one processing operation infringes multiple articles of the GDPR). Again, the CJEU noted that the purpose of Article 82 is fundamentally compensatory, so the fact that a processing activity breaches multiple articles of the GDPR should not allow claimants to “double-count” the harm they have suffered.

      What happens next

      With the exception of Kočner, the court has refrained from giving definitive answers on the appropriate quantum of non-material damages payable – that issue was instead remitted to the relevant national courts.  We therefore expect further national case-law showing how the principles laid down by the CJEU will be applied in practice.  We also expect further commentary from the CJEU in the coming months – the cases discussed in this post are just a handful of a raft of cases currently pending before the CJEU that are set to examine compensation under the GDPR.  The topic of defining non-material damages is also of increasing importance as EU Member States continue their transposition of the Representative Actions Directive.

      This post was written with the assistance of Diane Valat and Alberto Vogel.

      *                             *                             *

      Covington’s Data Privacy and Cybersecurity Practice regularly advises on European privacy laws, including data breaches, cyber incidents, and litigation at the European Court of Justice.  If you have any questions about the implications of these rulings for your business, please let us know.

      Certain Provisions in the American Privacy Rights Act of 2024 Could Potentially Affect AI https://www.lexblog.com/2024/04/18/certain-provisions-in-the-american-privacy-rights-act-of-2024-could-potentially-affect-ai-2/ Thu, 18 Apr 2024 12:34:02 +0000 https://www.lexblog.com/2024/04/18/certain-provisions-in-the-american-privacy-rights-act-of-2024-could-potentially-affect-ai-2/ Earlier this month, lawmakers released a discussion draft of a proposed federal privacy bill, the American Privacy Rights Act of 2024 (the “APRA”).  While the draft aims to introduce a comprehensive federal privacy statute for the U.S., it contains some notable provisions that could potentially affect the development and use of artificial intelligence systems.  These provisions include the following:

      • Impact Assessments.  Large data holders (defined as covered entities that meet certain size thresholds) that use an algorithm to collect, process, or transfer covered data “in a manner that poses consequential risk of harm” in certain categories and to certain groups (e.g., applications relating to minors; making or facilitating ads for healthcare, credit, and similar opportunities; determining access to public accommodations; disparate impacts based on protected categories) would be required to conduct an impact assessment.  The impact assessment would have to include certain information prescribed by the statute, including a detailed description of the design process and methodologies of the covered algorithm; a detailed description of the data used; a description of the outputs produced by the covered algorithm; an assessment of the necessity and proportionality of the algorithm in relation to its purpose; and a detailed description of the steps the large data holder has taken or will take to mitigate potential harms.
      • Algorithm Design Evaluation.  Covered entities or service providers that “knowingly develop[]” a covered algorithm would be required to conduct a design evaluation prior to deploying the covered algorithm in interstate commerce.  Specifically, the bill would require covered entities and service providers to evaluate the design, structure, and inputs of the algorithm, including training data, prior to deploying that algorithm to reduce the risk of potential harm. 
      • FTC Rulemaking.  The APRA contemplates that the FTC would promulgate rules to establish the processes by which large data holders submit impact assessments and by which covered entities may exclude from the bill’s requirements any low-risk algorithms.

      We will continue to monitor this and similar developments across our blogs.

      California Privacy Protection Agency Issues Enforcement Advisory on Data Minimization https://www.lexblog.com/2024/04/11/california-privacy-protection-agency-issues-enforcement-advisory-on-data-minimization/ Thu, 11 Apr 2024 13:14:41 +0000 https://www.lexblog.com/2024/04/11/california-privacy-protection-agency-issues-enforcement-advisory-on-data-minimization/ On April 2, the Enforcement Division of the California Privacy Protection Agency issued its first Enforcement Advisory, titled “Applying Data Minimization to Consumer Requests.”  The Advisory highlights certain provisions of and regulations promulgated under the California Consumer Privacy Act (“CCPA”) that “reflect the concept of data minimization” and provides two examples that illustrate how businesses may apply data minimization principles in certain scenarios.

      First, the Advisory includes the CCPA’s data minimization principle reflected in Civil Code § 1798.100(c): “[a] business’ collection, use, retention, and sharing of a consumer’s personal information shall be reasonably necessary and proportionate” to achieve the purpose for which it was collected or processed, or another, compatible and disclosed purpose. 

      The Advisory notes that the regulations “underscor[e] this principle” by explaining that whether a business’s data practices are “reasonably necessary and proportionate” within the meaning of the statute is based on (1) “[t]he minimum personal information that is necessary to achieve the purpose identified,” (2) “possible negative impacts to consumers posed by the business’s collection or processing of the personal information,” and (3) “the existence of additional safeguards for the personal information” to address those possible negative impacts.  The Advisory next highlights other CCPA regulations that “reflect the concept of data minimization.”  For example, the Advisory identifies certain regulations that prohibit requiring consumers to provide “additional information beyond what is necessary” to exercise certain rights under the CCPA, including 11 CCR § 7025(c)(2) concerning opt-out preference signals.  

      The Advisory also describes two hypothetical “illustrative scenarios in which a business might encounter the data minimization principle.”  The first scenario contemplates a business’s response to a consumer’s request to opt out of sale/sharing, and the second a business’s process for verifying a consumer’s identity with respect to a request to delete.  In both, the Advisory provides examples of questions businesses could consider to apply data minimization principles to the scenarios.  These questions reflect the three bases, discussed above, set out in the regulations to determine whether a business’s data practices are “reasonably necessary and proportionate.”  For example, per the Advisory, a business verifying a deletion request could consider: “We already have certain personal information from this consumer.  Do we need to ask for more personal information than we already have?”

      Finally, the Advisory explains that Enforcement Advisories are intended to “provide[ ] additional detail about principles of the CCPA and highlight[ ] observations of non-compliance to deter violations.”  They do not “implement, interpret, or make specific the law enforced or administered by the California Privacy Protection Agency, establish substantive policy or rights, constitute legal advice, or reflect the views of the Agency’s Board.”  The Agency further states that adherence to guidance in an advisory is not a safe harbor from potential enforcement actions, which are assessed on a case-by-case basis. 

      EDPB 2023 Coordinated Enforcement Framework on DPOs: What Are the Key Takeaways for Organizations? https://www.lexblog.com/2024/04/11/edpb-2023-coordinated-enforcement-framework-on-dpos-what-are-the-key-takeaways-for-organizations/ Thu, 11 Apr 2024 11:46:47 +0000 https://www.lexblog.com/2024/04/11/edpb-2023-coordinated-enforcement-framework-on-dpos-what-are-the-key-takeaways-for-organizations/ On January 17, 2024, the European Data Protection Board (“EDPB”) published its report on the 2023 Coordinated Enforcement Framework (“CEF”), which examines the current landscape and obstacles faced by data protection officers (“DPOs”) across the EU.  In particular, the report provides a snapshot of the findings of each supervisory authority (“SA”) on the role of DPOs, with a particular focus on (i) the challenges DPOs face and (ii) recommendations to mitigate and address these obstacles in light of the GDPR.  This blog post summarizes the key findings of the EDPB’s 2023 CEF report.

      Background

      The 2023 CEF was conducted by the EU SAs, each of whom sent a selection of controllers and processors in their jurisdictions a pre-agreed questionnaire, in some cases slightly modified from the original, to be completed by their respective DPOs.  In a few cases, questionnaires were completed by a member of an organization’s senior management (instead of a DPO).

      Key Takeaways

      The report highlights the following key findings and makes the following recommendations:

      • Insufficient transparency on DPOs.  Several SAs noted that a number of organizations did not always publicly disclose or provide their SAs with contact information for their DPOs (e.g., the DPO’s email address; there is no need to include the DPO’s name), which may contravene a data subject’s right to information and ability to access their personal data.
        • SAs’ key recommendations:  Organizations should ensure that a DPO’s contact details are made available to the public to enable effective communication with data subjects and SAs.  They will also need to maintain up-to-date contact information and communicate any changes to data subjects (e.g., in their privacy notice).
      • Insufficient resources allocated to DPOs.  Several SAs noted that a number of DPOs did not have adequate resources to perform their tasks effectively.
        • SAs’ key recommendations:  Organizations should ensure that adequate financial and human resources are provided to DPOs, including: (i) completing a survey to determine the organization’s needs, particularly in terms of personnel required to assist the DPO and the type of matters the DPO is or should be involved in; (ii) allocating an independent budget to DPOs that ensures their autonomy; and (iii) providing internal teams to support the DPO.  The SAs also endorse training to enable staff to stay up-to-date with the latest privacy developments.
      • Insufficient involvement of DPOs in completing privacy-related tasks.  Several SAs noted that a number of DPOs did not always have (i) access to information on matters falling within their remit, including data subject access requests (“DSARs”), data breaches, and so forth; and (ii) information regarding why their organizations may have deviated from their recommendations.
        • SAs’ key recommendations:  DPOs should always be consulted on questions related to data privacy.  To this end, organizations should develop and implement internal policies to determine when a DPO’s involvement is necessary (e.g., DSAR, data breaches, etc.), as well as coordinate with other key departments (e.g., HR, Compliance, IT, etc.).
      • Insufficient oversight of conflicts of interests and reporting mechanisms to high-level management.  Several SAs noted that a high number of DPOs responded by noting that they can receive instructions regarding the performance of their tasks and/or may have additional roles in the organization that could pose a conflict (in light of Article 38(3) and (6) of the GDPR and the CJEU’s recent judgment on DPOs’ conflicts of interests).
        • SAs’ key recommendations:  Organizations should: (i) raise awareness regarding the DPO’s role and responsibilities; (ii) identify roles that would be incompatible with the function of DPO; and (iii) draw up and circulate internal policies identifying a DPO’s tasks.

      What’s next?

      Based on the results of the 2023 survey, the EDPB and SAs will develop further guidance and additional tools (e.g., training, workshops, factsheets, etc.).   SAs have also indicated that they may launch investigations or sectoral audits on the basis of the information gleaned through the survey.

      *           *           *

      Covington’s Data Privacy and Cybersecurity team regularly advises companies on their most challenging compliance issues in the EU and other key markets, including on DPOs’ designation and role and data subjects’ rights.  Our team is happy to assist companies with any questions relating to DPOs, as well as any other privacy or cybersecurity-related questions.

      (This blog post was written with the contributions of Diane Valat.)
