Odia Kagan - Legal Columnist - LexBlog
https://www.lexblog.com/author/okagan/

The Colorado Artificial Intelligence Act: What You Need to Know
https://www.lexblog.com/2024/05/31/the-colorado-artificial-intelligence-act-what-you-need-to-know/ (Fri, 31 May 2024)

Colorado recently enacted its Artificial Intelligence Act, launching a new era of state AI laws.

What do you need to know?

  • The bill is effective February 1, 2026, and is enforceable by the Attorney General.
  • This is a comprehensive AI bill that applies directly to the private sector.
  • Like the EU AI Act, the gating item is “high-risk AI,” defined as an AI system that has been specifically developed and marketed, or intentionally and substantially modified, to make, or to be a substantial factor in making, a consequential decision.
  • Like U.S. state privacy laws, a consequential decision is a decision that has a material legal or similarly significant effect on a consumer’s access to, or the availability, cost or terms of, things like education, employment, essential goods or services, financial or lending services, health care, housing, insurance or legal services.
  • Like the EU AI Act, it allocates responsibility to “developers” and “deployers.” This means service providers are directly implicated.
  • The focus (and the key violation): exercising reasonable care to avoid algorithmic discrimination, with an extensive to-do list that earns a rebuttable presumption of reasonable care.

Developers need to:

  • Provide deployers with the information and documentation necessary to complete an impact assessment (evaluation, data governance, mitigation).
  • Provide deployers and the public information on the types of high-risk systems that the developer has developed or intentionally and substantially modified and makes available.
  • Report to the Attorney General’s Office any known or reasonably foreseeable risk of algorithmic discrimination within 90 days after the discovery or receipt of a credible report from a deployer.

Deployers need to:

  • Implement a risk management policy and program (NIST AI RMF recommended).
  • Complete an impact assessment at least annually and within 90 days of substantial change.
  • Notify a consumer of specified items if the system makes a consequential decision concerning a consumer (including a description of the system, purpose, decision, human involvement, data, and the right to opt out of profiling).
  • Make a publicly available statement summarizing the types of high-risk systems currently deployed.
  • Disclose to the AG the discovery of algorithmic discrimination within 90 days.

The impact assessment is similar to the DPIA required under U.S. state privacy laws and needs to include:

  • Purpose, intended use, context & benefits.
  • Known or reasonably foreseeable risks of algorithmic discrimination & mitigation steps.
  • Categories of data processed & outputs to customize high risk AI system.
  • Metrics used to evaluate performance and limitations.
  • Description of transparency measures taken.
  • Post deployment monitoring and safeguards including oversight.

8 Principles for the Deployment of AI Systems in the Workplace
https://www.lexblog.com/2024/05/24/8-principles-for-the-deployment-of-ai-systems-in-the-workplace/ (Fri, 24 May 2024)

The U.S. Department of Labor and the White House recently released a new framework designed to protect U.S. workers from adverse consequences when artificial intelligence systems are deployed in the workplace.

The framework sets forth eight principles for the development and deployment of AI systems in the workplace (some of which overlap with or reinforce requirements in existing privacy law and Federal Trade Commission guidance).

  • Centering Worker Empowerment: Workers and their representatives, especially those from underserved communities, should be informed of and have genuine input in the design, development, testing, training, use and oversight of AI systems in the workplace. This looks like part of a DPIA requirement under the GDPR and U.S. state laws.
  • Ethically Developing AI: AI systems should be designed, developed and trained in a way that protects workers.
  • Establishing AI Governance and Human Oversight: This is similar to the opt out requirement under privacy laws.
  • Ensuring Transparency in AI Use (for both applicants and employees): This is also what many privacy laws require.
  • Protecting Labor and Employment Rights: Including the right to organize, wages, health, safety and anti-discrimination.
  • Using AI to Enable Workers
  • Supporting Workers Impacted by AI
  • Ensuring Responsible Use of Worker Data: Workers’ data collected, used or created by AI systems should be limited in scope and location, used only to support legitimate business aims and protected and handled responsibly. This parallels the data minimization in collection and use, as well as the use for “compatible purpose,” set forth in privacy laws.

Top 5 Privacy Concerns for My Clients
https://www.lexblog.com/2024/05/14/top-5-privacy-concerns-for-my-clients/ (Tue, 14 May 2024)

What are the top 5 data privacy concerns for my clients over the past couple of months?

Here are some notes I recently compiled for a meeting of Fox Rothschild’s national Privacy Law Practice Group.

Patchwork Paralysis

  • How do you address the new laws popping up in Kentucky, Maryland, and Vermont?
  • How do you get ready for private right of action in Vermont?
  • What are state regulators thinking? Are they looking to each other for enforcement examples? Regulations? Are they considering complaints made by competitors? (Per a recent update at the IAPP Global Privacy Summit, the answers are: “Yes”, “Yes” and “Yes.”)

Federal Privacy Law?

  • What should we know about the APRA?
  • Is it likely to pass? (State AGs are weighing in: a floor, not a ceiling.)

Children’s Information

  • New lawsuits are putting edtech companies on notice.
  • States, including Maryland and Vermont, are passing age-appropriate design codes with many new obligations.
  • New obligations apply to minors under 18 (not just under 16, as before).

Health information

  • The FTC has been hard at work, with two new enforcement actions.
  • OCR is also not resting, with guidance that applies to online trackers.

AI

  • FTC says no carve out for AI.
  • Colorado passed the Colorado Artificial Intelligence Act, with a lot of obligations and a focus on algorithmic bias.
  • The OMB issued a detailed memo with obligations for federal agencies and government contractors.
  • Illinois proposes amendment re: employee profiling using AI.

Cookies and Online Merchants: What Can We Learn From California Lawsuits
https://www.lexblog.com/2024/05/08/cookies-and-online-merchants-what-can-we-learn-from-california-lawsuits/ (Wed, 08 May 2024)

New lawsuits that were recently filed in California echo some of the “cookie” conversations my colleagues and I have been having with online merchants and retail clients.

The fact patterns tend to be similar to each other. Here are some things to note:

  • Mind your service providers: Payment processors may process the information collected through the merchant website for more than just facilitating the specific transaction.
  • Such additional uses are not necessarily expected by consumers, and consumers may be unaware that a third party is involved in the payment processing. Consumer expectation is front and center for the Federal Trade Commission. (These types of uses might also constitute a sale under U.S. privacy laws.)
  • The lawsuits raise the issue of using info for fraud purposes under the “unexpected by consumer” prong. (This is something to watch since there is a fraud exception under U.S. state privacy laws, though its breadth has yet to be tested.)
  • As the FTC has been saying, it all needs to start with understanding what your service providers do with your data. It is important to be transparent about data uses and get consent when necessary.

AI, the FTC and Consumer Facing Applications: Some Guidance
https://www.lexblog.com/2024/04/30/ai-the-ftc-and-consumer-facing-applications-some-guidance/ (Tue, 30 Apr 2024)

There is no exemption for AI when it comes to consumer facing applications, the Federal Trade Commission recently stressed in a blog post.

Key points:

  • Quietly changing the terms of service agreements could be unfair or deceptive: Any firm that reneges on its user privacy commitments risks running afoul of the law. It may be unfair or deceptive for a company to adopt more permissive data practices —for example, to start sharing consumers’ data with third parties or using that data for AI training — and to only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy.
  • If you promise that you won’t use customer data for secret purposes, such as to train or update your models — be it directly or through workarounds — don’t. Doing so may be a violation of the law, and the FTC has required, and will continue to require, that unlawfully obtained data be deleted.
  • Claims of privacy and security do not shield anticompetitive conduct. The FTC will closely scrutinize any claims that competition must be impeded to advance privacy or security.

AI and Accuracy: The UK’s Information Commissioner’s Office Offers Guidance
https://www.lexblog.com/2024/04/26/ai-and-accuracy-the-uks-information-commissioners-office-offers-guidance/ (Fri, 26 Apr 2024)

The United Kingdom’s Information Commissioner’s Office has issued guidance on the accuracy of artificial intelligence.

Some key points:

  • For generative AI models, both developers and deployers must consider the impact that training data has on the outputs and how the outputs will be used.
  • If inaccurate training data contributes to inaccurate outputs, and the outputs have consequences for individuals, then it is likely that the developer and the deployer are not complying with the accuracy principle.
  • Once the organization deploying the model has established the purpose for it, and ensured with the developer that the model is appropriate for that purpose, it can then decide whether the purpose requires accurate outputs.
  • The more a generative AI model is used to make decisions about people, or is relied on by its users as a source of information rather than inspiration, the more that accuracy should be a central principle in the design and testing of the model.
  • For example: A model used to summarize customer complaints must have accurate outputs in order to achieve its purpose. This purpose requires both statistical accuracy (the summary needs to be a good reflection of the documents it is based on) and data protection accuracy (output must contain correct information about the customer).
  • If a model is not sufficiently statistically accurate because the purpose that the developer envisaged for it does not necessarily require accuracy, developers should put in place technical and organizational controls to ensure that it is not used for purposes which require accuracy. This could involve, for example, contractual requirements limiting types of usage in customer contracts with deployers or analysis of customer usage (when the model is accessed through an API).
  • Developers should also assess and communicate the risk and impact of so-called “hallucinations,” i.e., incorrect and unexpected outputs.
  • Organizations that make the application available to people need to carefully consider, and ensure, that the model is not used in a way that is inappropriate for the level of accuracy the developer knows it to have.

This could include:

  • Providing clear information about the statistical accuracy of the application, and easily understandable information about appropriate usage.
  • Monitoring user-generated content, either by analysing the user query data or by monitoring outputs publicly shared by users.
  • User engagement research, to validate whether the information provided is understandable and followed by users.
  • Labelling the outputs as generated by AI or as not factually accurate (e.g., “watermarking” and “data provenance”).
  • Providing information about the reliability of the output, for example through the use of confidence scores (a minimal sketch follows this list).
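
To make the last two measures concrete, here is a minimal sketch, in TypeScript, of how a deployer might label outputs as AI-generated and surface a reliability signal. The interface and the 0.5 threshold are invented for illustration; they are not from the ICO guidance.

```typescript
// Hypothetical sketch: label generative outputs and attach a confidence
// score before display. Names and the threshold are invented.

interface LabeledOutput {
  text: string;          // the generated content
  aiGenerated: true;     // explicit provenance label
  confidence: number;    // 0..1 reliability score surfaced to the user
  caveat?: string;       // warning shown when confidence is low
}

function labelOutput(text: string, confidence: number): LabeledOutput {
  const out: LabeledOutput = { text, aiGenerated: true, confidence };
  if (confidence < 0.5) {
    out.caveat = "Low-confidence AI output; verify before relying on it.";
  }
  return out;
}

// Usage: wrap whatever the model returns before showing it.
const display = labelOutput("Summary of complaint #123...", 0.42);
console.log(`${display.text} [AI-generated, confidence ${display.confidence}]`);
if (display.caveat) console.warn(display.caveat);
```

The point is less the mechanics than the posture: provenance and reliability travel with the output, so users can calibrate how much to rely on it.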

For more information, click here.

Can an Employer Review Social Media Posts While Assessing Candidates?
https://www.lexblog.com/2024/04/24/can-an-employer-review-social-media-posts-while-assessing-candidates/ (Wed, 24 Apr 2024)

Can an employer review social media posts for the purpose of assessing job candidates?

Sometimes, according to a guide on data protection in employment released by the national Data Protection Authority for Latvia, Datu valsts inspekcija.

Some key points:

  • The purpose for which the applicant has created a social network profile should be distinguished (i.e. private or professional). For example, if the applicant stated in the application that he regularly publishes his opinion on some topics related to the field of the position, familiarization with the content created by the applicant would be permissible in order to gain insight into the applicant’s professionalism.
  • At the same time, it cannot be ruled out that in some cases the applicant’s activities in a private profile may also be taken into account. The employer needs to be able to objectively justify this, though (for example, when the position involves significant publicity and the applicant’s reputation is important). The applicant must be informed about this before the selection.
  • When applying the employer’s (controller’s) legitimate interests as the legal basis for data processing, a balancing test must be performed.

Florida Issues Draft Privacy Regulations: What You Need to Know
https://www.lexblog.com/2024/04/17/florida-issues-draft-privacy-regulations-what-you-need-to-know/ (Wed, 17 Apr 2024)

Florida has issued draft regs for its new privacy law, but this is important far beyond the Sunshine State. U.S. state regulators are looking to each other for guidance on similar provisions.

What do you need to know?

Who is a child?

  • Most state laws impose a “known child” standard, but do not provide a definition.
  • Per Florida, a child is “known” if you “actually know” or “willfully disregard” that the consumer is a child.
  • Per the new regs, you “willfully disregard” if, “based on facts or circumstances readily available you should reasonably have been aroused to question whether a consumer was a child and thereafter failed to perform reasonable age verification.”
  • It is not “willfully disregarding” if you utilize a reasonable age verification method with respect to all consumers and determined that the consumer was not a child (unless you later gain actual knowledge & fail to act).
  • Reasonable age verification is “any commercially reasonable method regularly used by the government or businesses for the purpose of age and identity verification.”
  • Who is the parent (for getting parental consent)? You need to conduct a reasonable parental verification before allowing the exercise of any right. That is “any method that is reasonably calculated at determining that a person is a parent of a child that also verifies the age and identity of that parent by commercially reasonable means including: (1) requesting from a child the child’s parent’s name, address, phone number, and e-mail address; (2) contacting the name provided by the child and confirming that the parent is the child’s parent by obtaining documents or information; and (3) utilizing any commercially reasonable method regularly used by the government or business to verify that parent’s identity and age [similar to one of the FTC approved COPPA methods]”

Authentication:

  • Needs to be done by a commercially reasonable method, which you determine by considering: (1) The rights the requestor is seeking to exercise; (2) The type, sensitivity, value and volume of personal data at issue; (3) The degree of possible harm that could be suffered by the consumer in the event of improper access, use or deletion of their personal data; and (4) The cost to the controller of completing the authentication method. (A rough sketch of how these factors might combine follows this list.)
  • Don’t ask for more information than you already have for authentication unless you must; then use the new information only to authenticate, and immediately delete it (similar to the new CA guidance on data minimization in DSARs).
  • You SHALL use a password-protected account for authentication if you have one (CA is “may”), but you can’t require the creation of an account for this purpose.
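
As a rough illustration of how those four authentication factors might combine in practice, here is a hypothetical TypeScript sketch. The tier names, weights and thresholds are invented, not taken from the Florida regs; only the password-account rule comes from the text above.

```typescript
// Hypothetical sketch: map the authentication factors to a verification
// tier. Tier names, weights and thresholds are invented.

interface RequestContext {
  rightRequested: "access" | "deletion" | "correction" | "opt-out";
  dataSensitivity: 1 | 2 | 3;  // 3 = most sensitive (e.g., health, biometrics)
  potentialHarm: 1 | 2 | 3;    // harm from improper access, use or deletion
  hasPasswordAccount: boolean; // the regs say you SHALL use it if one exists
}

type AuthTier = "account-login" | "match-two-data-points" | "match-three-plus-declaration";

function chooseAuthTier(ctx: RequestContext): AuthTier {
  // Per the draft regs, an existing password-protected account must be used.
  if (ctx.hasPasswordAccount) return "account-login";
  // Higher-stakes rights and more sensitive data warrant stricter matching.
  const riskyRight = ctx.rightRequested === "access" || ctx.rightRequested === "deletion";
  const score = ctx.dataSensitivity + ctx.potentialHarm + (riskyRight ? 2 : 0);
  return score >= 6 ? "match-three-plus-declaration" : "match-two-data-points";
}

console.log(chooseAuthTier({
  rightRequested: "deletion",
  dataSensitivity: 3,
  potentialHarm: 3,
  hasPasswordAccount: false,
})); // "match-three-plus-declaration"
```

The fourth factor (cost to the controller) would cap how elaborate a method you adopt; it is omitted here to keep the sketch short.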

Information security

  • Additional detailed requirements include compliance with NIST CSF.

For more information, click here.

The CPPA, FCC, FTC Are Collaborating
https://www.lexblog.com/2024/04/05/the-cppa-fcc-ftc-are-collaborating/ (Fri, 05 Apr 2024)

The California Privacy Protection Agency, the California Attorney General’s Office, the FCC, the Federal Trade Commission, and more are embracing a collaborative approach. They all talk with each other.

That is according to CPPA Executive Director Ashkan Soltani, who spoke with Travis LeBlanc during the International Association of Privacy Professionals Global Privacy Summit.

On enforcement:

  • There is an ongoing sweep on connected vehicles and one re: data minimization in the consumer requests space.
  • There is a big push on public awareness to provide resources for individuals.
  • The kid gloves are off: there is no right to cure. Enforcement advisories can help with guidance, but they do not create a right to cure. This is especially the case for provisions that have been on the books since the CCPA went into effect.
  • Auditing is an important part of compliance.
  • Enforcement advisories shine a light on sections of the law that the CPPA is paying attention to. They are not binding, but the underlying regulations are.

On draft regs:

  • Re the draft rules on cyber audits, DPIAs and ADMT: the regulatory process has started, including a statement of reasons and an economic analysis to be presented to the board in July. At that time, it may move to formal rulemaking. After that, it will take another 9-12 months until the rules are finalized (so potentially July 2025).
  • Additional regs may be advanced if the board so directs.
  • The Connecticut AG said at a conference that their office isn’t issuing regulations, but rather looking to the California and Colorado regulations.

On AI:

  • The rules only deal with AI that touches personal information. (No AI without PI) There are 50+ bills in CA on various aspects of AI regulations.

Feds to Review Privacy Practices of 10 Largest Airlines
https://www.lexblog.com/2024/03/27/feds-to-review-privacy-practices-of-10-largest-airlines/ (Wed, 27 Mar 2024)

U.S. Secretary of Transportation Pete Buttigieg recently announced the Department of Transportation (DOT) would undertake a privacy review of the nation’s ten largest airlines. Specifically, it will look at their policies and procedures as they relate to the collection, handling, maintenance and use of passengers’ personal information.

“Airline passengers should have confidence that their personal information is not being shared improperly with third parties or mishandled by employees,” he said, according to a news release.

Some key points:

  • DOT, in partnership with Sen. Ron Wyden (D-Oregon), will also probe whether airlines are unfairly or deceptively monetizing or sharing that data with third parties.
  • DOT has stated that mishandling consumers’ private information may be considered an unfair or deceptive practice by airlines.
  • The privacy review is the first of what will be periodic reviews of airline privacy practices by the DOT.
  • As part of the privacy review, DOT sent a letter to the airlines requesting information regarding their policies and procedures on data privacy, any complaints filed and their privacy training.
  • DOT has the authority to investigate complaints and take enforcement action against airlines and ticket agents that engage in unfair or deceptive practices involving passenger information. The DOT can also impose civil penalties where appropriate.
  • DOT enforces airlines’ compliance with COPPA.

GenAI and Public Sector Procurement in California: What You Need to Know
https://www.lexblog.com/2024/03/26/genai-and-public-sector-procurement-in-california-what-you-need-to-know/ (Tue, 26 Mar 2024)

California recently released GenAI Guidelines for Public Sector Procurement, Uses and Training, as well as a GenAI Risk Assessment.

What do you need to know?

The guidelines and risk assessment come on the heels of Gov. Gavin Newsom’s AI Executive Order and California GenAI Risk Report.

Key points:

  • Generative Artificial Intelligence (GenAI) is defined as: Pretrained AI models that can generate images, videos, audio, text and derived synthetic content.
  • For incidental GenAI purposes, all state entities must: (1) Assign a member of the executive team the responsibility of continuous GenAI monitoring and evaluation; (2) Attend mandatory Executive and Procurement Team GenAI trainings; and (3) Review annual employee training and policy to ensure staff understand and acknowledge the acceptable use of GenAI tools.
  • For intentional GenAI procurement, all state agencies ALSO must: (4) Identify a business need (before the procurement) and understand the implications of using GenAI to solve that problem statement; (5) Create a culture of engagement and open communication with state employee end users; (6) Assess the risks and potential impacts of deploying the GenAI under consideration; (7) Invest time and resources (before procurement) to prepare data inputs and test models adequately; and (8) Establish a GenAI-focused team responsible for continuously evaluating the potential use of GenAI and its implications for operations and program administration.

Risk Assessment:

  • Deployment of GenAI technologies must be evaluated through a risk assessment based on the National Institute of Standards and Technology (NIST) AI Risk Management Framework, as well as relevant portions of the State Administration Manual (SAM) and the State Information Management Manual (SIMM).

For low risk GenAI:

  • Describe the project use case, problem and impact of outcome
  • Were there other options considered?
  • Will the GenAI system be shared or procured with any other state entity or third-party organization?
  • Have a Privacy Threshold Assessment (PTA) and a Privacy Impact Assessment (PIA) (SIMM 5310-C) been completed?

For moderate- to high-risk systems, also:

  • What type of model(s) and/or network(s) will be used in the GenAI system?
  • What mechanism will the GenAI system use to notify a user that they are interacting with a GenAI system rather than a human?
  • Does the system’s output make decisions that have legal or similarly significant effects?

Additional general questions:

  • What are the data inputs?
  • Who will be the GenAI team responsible?
  • How does using the GenAI tool build trust with the end user?
  • How will system owners identify and mitigate hallucinations and accuracy issues?

A DMCA Exemption for ‘Right to Repair’?
https://www.lexblog.com/2024/03/19/a-dcma-exemption-for-right-to-repair/ (Tue, 19 Mar 2024)

The Federal Trade Commission and the DOJ support a Digital Millennium Copyright Act exemption for vehicle operational data to promote the “right to repair.”

  • An exemption currently exists for computer programs that control motorized land vehicles, marine vessels, and mechanized agricultural vehicles for purposes of diagnosis, repair, or lawful modification of the vehicle or vessel function.
  • The FTC and DOJ support adopting an additional exemption to allow vehicle owners or the repair shop of their choice to access, store, and share vehicle operational data.
  • Per the FTC: Giving owners the option of providing their own data to their chosen repairer need not increase cybersecurity risks as there is no evidence to suggest that independent repair shops are more likely than authorized repair shops to compromise or misuse customer data.
  • In addition, the proposed exemption would simply empower owners by providing them access to their own vehicle operational data. It would not prevent a manufacturer from imposing a reasonable authentication measure that prevents access to the data by someone other than the owner or the owner’s authorized representative.

Read more here.

India’s Digital Personal Data Protection Act: What You Need to Know
https://www.lexblog.com/2024/03/11/indias-digital-personal-data-protection-act-what-you-need-to-know/ (Mon, 11 Mar 2024)

India enacted its new Digital Personal Data Protection Act last year.

Here are some key takeaways regarding the law, courtesy of Sajai Singh, a partner at J. Sagar Associates in India. Singh spoke recently at Alpine Privacy Days in Switzerland.

  • Does not apply to:
    • Offline data
    • Publicly available information (Hello scraping public information!)
    • Domestic use
    • Employee data processed for employment purposes
  • Adopts key FIPs like: Data minimization, purpose limitation, accuracy, information security, retention limitation.
  • Applies to non-Indian businesses that offer services to individuals in India, and to the data of such individuals even when held by a nominee (including where the individual has died or the nominee lives outside of India).
  • The only legal basis is consent, and consent has to be active. The law introduces the concept of a “consent manager” (probably some organization) that can give consent for you. Consent is revocable only going forward.
  • Consent should be taken in advance; that is, consent for a credit card to be charged whenever it is used for payment, not separate consent for each processing operation.
  • Informed consent also includes what happens when you have a grievance with respect to the data processing (and this applies to employees too).
  • Exceptions to consent: legal obligation, contractual obligation, vital interest (but if you state in the contract that you will do targeted advertising, that would probably count as contractual necessity).
  • Children = under 18.
  • Employees = deemed consent. This does not extend to selling employee data, but does cover biometrics and facial recognition.

Biden Administration Issues Executive Order to Protect Sensitive Personal Data
https://www.lexblog.com/2024/02/29/biden-administration-issues-executive-order-to-protect-sensitive-personal-data/ (Thu, 29 Feb 2024)

The White House recently issued an executive order that restricts cross-border transfers of personal data from the United States to “countries of concern.”

President Biden also urged Congress to pass comprehensive privacy legislation, especially to protect children.

Key points:

  • Focus is on sensitive data, including genomic data, biometric data, personal health data, geolocation data and financial data.
  • Concerns are with the sharing and re-sharing of the data through data brokers, such that it ends up in the hands of foreign intelligence services, militaries or companies controlled by foreign governments.
  • U.S. Department of State will issue regulations.
  • U.S. Department of Justice will also issue regulations.
  • DOJ and U.S. Department of Homeland Security will issue security standards to prevent access by countries of concern to Americans’ data through other commercial means, such as data available via investment, vendor and employment relationships.
  • U.S. Department of Health and Human Services (HHS), U.S. Department of Defense and U.S. Department of Veterans Affairs will work to ensure that federal grants, contracts and awards are not used to facilitate access to Americans’ sensitive health data by countries of concern, including via companies located in the United States.
  • The above should not stop the flow of information necessary for financial services activities. It also should not impose measures aimed at a broader decoupling of the substantial consumer, economic, scientific and trade relationships that the United States has with other countries.

For more information, read the Fact Sheet here.

Don’t Expect to Easily Claim ‘Disproportionate’ Effort When Responding to US Data Access Requests
https://www.lexblog.com/2024/02/27/dont-expect-to-easily-claim-disproportionate-effort-when-responding-to-us-data-access-requests/ (Tue, 27 Feb 2024)

U.S. companies thinking about falling back on “disproportionate” effort for access requests under the new U.S. privacy laws because they require compiling too many documents should think again.

The Berlin Administrative Court recently said that even if complying with a request would require reviewing more than 5,000 pages of documents from over 100 proceedings spanning the last 20 years, to check in each case whether handing them over would infringe the rights of third parties [and to redact accordingly], the effort would not be disproportionate and you must comply.

Will the US regulators interpret it this way? Not necessarily.

But seeing as this is the prevailing interpretation in Europe, companies should definitely take it into account.

Read the decision here.

The FTC and Employee Surveillance
https://www.lexblog.com/2024/02/20/the-ftc-and-employee-surveillance/ (Tue, 20 Feb 2024)

The U.S. Federal Trade Commission is working to police employee surveillance, according to Benjamin Wiseman, Associate Director of the FTC’s Division of Privacy and Identity Protection.

Here are some key takeaways from a recent speech he gave to the Harvard Journal of Law & Technology.

  • The Commission’s recent actions addressing AI facial recognition technology, data brokers and health apps and websites demonstrate that the FTC will not hesitate to combat emerging privacy harms and other abuses in the marketplace. The same applies to worker surveillance.
  • As worker surveillance and AI management tools continue to permeate the workplace, the Commission has made clear that it will protect Americans from potential harms stemming from these technologies.
  • In a policy statement, the Commission emphasized that companies may violate the FTC Act if they, for example, deploy surveillance technology to monitor gig workers’ every move without transparency about how it impacts pay or performance evaluation.
  • In another policy statement, the FTC warned that companies that make deceptive statements about biometric technologies, fail to inform users about its use, or use biometric information in ways that are likely to cause harm without taking reasonable measures to mitigate injury may violate the FTC Act.
  • To understand what the Commission expects from worker surveillance tools that collect sensitive information like geolocation information and biometrics, we can look to recent Commission actions against companies deploying such tools in other contexts. (Think the Rite Aid Smart CCTV decision and the X-Mode decision). The principles from these cases apply with equal force to individuals subjected to surveillance on the job.

A Helpful Guide on Data Processing Consent
https://www.lexblog.com/2024/02/12/a-helpful-guide-on-data-processing-consent/ (Mon, 12 Feb 2024)

The Office of the Data Protection Authority of the Bailiwick of Guernsey has issued a concise guide on the definition of consent.

This is helpful not only for GDPR, but also for understanding and implementing consent under the new U.S. state privacy laws.

It is worth noting that it is important to refresh consent. While the law does not set a time limit for when consent “expires,” it is up to you to ensure you review and refresh consent as appropriate given your circumstances. How long consent lasts will depend on the context and your retention policy periods.

Note: The FTC recently told InMarket it needed to refresh consent for precise geolocation every 6 months. CPRA, however, does not allow you to ask a consumer who has opted out to reconsent for 12 months.
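
A minimal sketch of how those two data points could be operationalized in code, assuming (hypothetically) a simple consent record; the 30-day month and the field names are invented:

```typescript
// Hypothetical sketch: decide whether to re-prompt for precise-geolocation
// consent, combining the InMarket refresh cadence (6 months) with the
// CPRA opt-out cooling-off period (12 months). Simplified illustration.

interface ConsentState {
  lastConsentAt?: Date; // when affirmative consent was last given
  optedOutAt?: Date;    // when the consumer opted out, if ever
}

const MONTH_MS = 30 * 24 * 60 * 60 * 1000; // coarse month, for illustration

function shouldReprompt(state: ConsentState, now: Date = new Date()): boolean {
  // CPRA-style rule: no reconsent requests within 12 months of an opt-out.
  if (state.optedOutAt && now.getTime() - state.optedOutAt.getTime() < 12 * MONTH_MS) {
    return false;
  }
  // InMarket-style rule: refresh consent at least every 6 months.
  if (!state.lastConsentAt) return true;
  return now.getTime() - state.lastConsentAt.getTime() >= 6 * MONTH_MS;
}

console.log(shouldReprompt({ lastConsentAt: new Date("2024-01-02") }, new Date("2024-08-01"))); // true
```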

Virginia Looks to Regulate Artificial Intelligence
https://www.lexblog.com/2024/01/29/virginia-looks-to-regulate-artificial-intelligence/ (Mon, 29 Jan 2024)

Virginia continues to charge ahead in the AI space, with Delegate Michelle Lopes Maldonado recently submitting a new “AI Developer Act.”

Here are some key points to know.

Definitions

  • AI is defined as “technology that uses data to train statistical models for the purpose of enabling a computer system or service to autonomously perform any task, including visual perception, language processing, and speech recognition, that is normally associated with human intelligence or perception.”
  • “Consequential decision” is used in place of “legal or similarly significant effects” (as in GDPR and the other state laws) and has a similar, but narrower, definition requiring “material” effect on access to credit, criminal justice, education, employment, health care, housing or insurance.
  • “High-risk artificial intelligence system” means any artificial intelligence system that is specifically intended to autonomously make, or be a controlling factor in making, a consequential decision. A system or service is not a “high-risk artificial intelligence system” if it is intended to (i) perform a narrow procedural task, (ii) improve the result of a previously completed human activity, (iii) detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review, or (iv) perform a preparatory task to an assessment relevant to a consequential decision.

Requirements for Developers

Developers of high-risk AI systems can’t sell, lease, give or otherwise provide such a system to a deployer without:

  • A statement of the intended uses.
  • Documentation setting forth the known limitations of the system, the purpose, the intended benefits, how the system was evaluated, measures taken to mitigate discrimination and how the system can be used for making consequential decisions.
  • The technical capability for the deployer to access all information and documentation needed to conduct an impact assessment.

Requirements for Developers of Generative AI: (Starting 10/1/24)

Can’t sell to consumers or anyone doing business in Virginia unless the GenAI system:

  • Reduces and mitigates the reasonably foreseeable risks.
  • Exclusively incorporates and processes datasets that are subject to data governance measures that are appropriate for generative artificial intelligence systems, including data governance measures to examine the suitability of data sources for possible biases and appropriate mitigation.
  • Achieves, throughout the life cycle of such generative artificial intelligence system, appropriate levels of performance, predictability, interpretability, corrigibility, safety, and cybersecurity, as assessed through appropriate methods, including model evaluation involving independent experts, documented analysis, and extensive testing, during conceptualization, design and development of such generative artificial intelligence system.

And unless the developer has conducted an impact assessment that assesses:

  • Intended purpose
  • The extent to which AI will be used
  • Extent to which prior use of such AI has harmed/adversely impacted individuals or gave rise to concern of such
  • Potential extent for adverse impact/harm
  • Extent to which the individuals potentially impacted are dependent on the outcome (e.g., because they can’t opt out)
  • Extent to which the individuals who may be harmed belong to a vulnerable population
  • Extent to which the outcomes produced are reversible.

Developers also can’t provide the system to a search engine operator or social media platform operator without providing that operator the technical capability it reasonably requires to perform its duties.

Requirements for Deployers:

Before using high risk AI systems for consequential decisions:

  • Avoid risk of algorithmic discrimination
  • Implement a risk management policy/program that is (1) at least as strict as the AI RMF or other nationally recognized AI risk management framework and (2) reasonable for the size, complexity, nature, scope and sensitivity of the data
  • Complete an impact assessment before deploying and no later than 90 days after each update.

The impact assessment needs to include:

  • Purpose and risk of algorithmic discrimination
  • (if applicable) The extent to which the system was used in a manner consistent with, or different from, the developer’s intended use
  • Description of the data processed as inputs and the outputs
  • (if applicable) Data used to retrain the system
  • Transparency measures taken
  • (if applicable) Any post-deployment monitoring of performance and user safeguards (including oversight processes).
  • There are some carve-outs/exceptions (e.g., for law enforcement, research and life-saving uses), but the burden of proof is on the party trying to rely on the exemption.
  • Enforcement by the AG with statutory fines.
  • The effective date for some developer obligations is 10/1/24; for deployers, 7/1/26.

AIVF, IMDA Propose Model Governance Framework for Generative AI
https://www.lexblog.com/2024/01/25/aivf-imda-propose-model-governance-framework-for-generative-ai/ (Thu, 25 Jan 2024)

The AI Verify Foundation (AIVF) and Infocomm Media Development Authority (IMDA) have developed a draft Model AI Governance Framework for Generative AI.

This framework expands on the existing Model Governance Framework that covers Traditional AI, which was last updated in 2020.

It looks at nine proposed dimensions to support a comprehensive and trusted AI ecosystem. The core elements are based on the principles that decisions made by AI should be explainable, transparent and fair.

Read more here.

Are Test Answers Personal Data?
https://www.lexblog.com/2024/01/24/are-test-answers-personal-data/ (Wed, 24 Jan 2024)

Are test questions and answers personal data that needs to be provided pursuant to an access request?

A German court recently weighed in, providing some good insight regarding both GDPR and U.S. state data privacy laws.

Some key takeaways:

  • Answers given by a student in a test could be considered personal data. Test questions, however, cannot be considered personal data.
  • The argument that the test questions are strictly linked to the answers given by the plaintiff was also not accepted by the court, which held that the questions do not reveal anything about the level of knowledge of the plaintiff and thus do not constitute personal data.
  • Access requests under Article 15 GDPR serve the purpose of making data subjects aware of the processing of their personal data and to verify the legality of processing. It is therefore irrelevant that the plaintiff needed access to the test questions for the purpose of interpreting the test result because he does not have a right to access under the GDPR for such purpose.
  • Test questions may constitute trade secrets.

Your DNA and the FTC: What You Need to Know
https://www.lexblog.com/2024/01/08/your-dna-and-the-ftc-what-you-need-to-know/ (Mon, 08 Jan 2024)

The Federal Trade Commission recently published a blog post regarding the privacy of DNA.

What do you need to know?

  • Protecting biometric information – including genetic data – is a top FTC priority. (See FTC Biometric Policy Statement from May 2023). These cases can and have involved serious penalties, a requirement to delete biometrics data, requirements to get affirmative express consent in the future, a mandated security program and more.
  • Genetic data is sensitive. While some other data types can be stripped of identifying characteristics, that’s not necessarily the case when it comes to genetic information. Here the sensitivity of the data is high, as is the risk of harm (particularly in this era of increasing biometric surveillance).
  • Secure genetic data in line with the heightened sensitivity of this data.
  • Secure customer accounts — you must take reasonable steps to secure customer accounts against common hacking techniques, including credential-stuffing attacks. Consider whether two-factor authentication should be mandatory (check the Ring case for guidance).
  • Your accuracy claims about genetic testing must be correct. DNA testing for ancestry is, at best, an estimation of ancestry, not a precise science. Stick to reliable science for all claims you make.
  • The FTC is watching how companies use — and claim to use — Artificial Intelligence. DNA algorithms are no exception. If you’re promoting your AI or algorithm, make sure your claims don’t deceive or otherwise harm consumers.
  • The FTC has a strong track record of challenging deceptive or unfair dark patterns, including when it comes to obtaining “consent” for the use and disclosure of genetic data.
  • You can’t make material retroactive changes in your privacy notice.
  • Don’t lie. Ever. Review your privacy notice to make sure that it is clear, complete and accurate.

Information Commissioner’s Office Issues Guidance on How to Keep Employment Records: What You Need to Know
https://www.lexblog.com/2024/01/03/information-commissioners-office-issues-guidance-on-how-to-keep-employment-records-what-you-need-to-know/ (Wed, 03 Jan 2024)

The United Kingdom’s Information Commissioner’s Office recently issued guidance on how to keep employment records.

This is good advice for employers beyond Europe (and particularly in California); the data retention requirements of the California Privacy Rights Act are the same as the GDPR’s.

Here are some key takeaways:

Accuracy

  • You must take all reasonable steps to keep any personal information you hold about your workers accurate and up to date.
  • The more important it is that the personal information is accurate, the greater the effort you should put into ensuring its accuracy. So if you are using the information to make decisions that might significantly affect the worker concerned or others, you should put more effort into ensuring accuracy. This may mean you have to get independent confirmation that the information is accurate.

Retention limitation

  • You must not keep personal information for longer than you need it.
  • You must consider any legal or regulatory requirements and seek advice on compliance, if necessary.
  • You should set up a retention policy or schedule that lists: the types of record or information you hold; what you use it for; and how long you intend to keep it.
  • You should not take a ‘one-size-fits-all’ approach to retention of workers’ personal information. While you may need to hold on to some types of information about previous workers, you may be able to delete other information as soon as the employment relationship ends.
  • Different categories of personal information will need different retention periods.
  • Where possible, you could set up automated systems to help with this process that flag when information you are holding is due to be reviewed or deleted.

Transparency

  • You must provide a privacy notice.
  • You could provide it: as part of your staff privacy notice on your organization’s intranet; as part of your general data protection policy; as separate privacy information in a worker handbook; using ‘just in time’ notices if using online workshops, platforms or tools where personal information might be collected or shared with others; as a general notice on a staff notice board; or by sending a letter or email to workers.

Right of erasure

  • In some circumstances, people have the right to have their personal information erased.
  • It only applies in certain circumstances, many of which do not apply in an employment context.
  • The right to erasure does apply where the personal information is no longer necessary for the purpose you collected it for.

Accountability

  • You must have appropriate measures and records in place to be able to demonstrate your compliance with your data protection obligations.

How Will the FTC Regulate Artificial Intelligence?
https://www.lexblog.com/2024/01/02/how-will-the-ftc-regulate-artificial-intelligence/ (Tue, 02 Jan 2024)

Federal Trade Commissioner Alvaro Bedoya recently released a statement on Rite Aid’s use of smart CCTV.

Here are some key takeaways from the statement (and how the recent settlement applies to the use of surveillance and AI technologies).

You should consider this a blueprint for future AI enforcement by the FTC.

  • We often talk about how surveillance “violates rights” and “invades privacy.” We should; it does. What cannot get lost in those conversations is the blunt fact that surveillance can hurt people.
  • The settlement offers a strong baseline for what the FTC expects an algorithmic fairness program to look like.
  • The FTC is not afraid to ban the use of particular AI technology for a number of years, or to order the deletion of biometric information collected through it.
  • The FTC will not necessarily accept the use of biometric surveillance in commercial settings. There is a powerful policy argument that there are some decisions that should not be automated at all. Many technologies should never be deployed in the first place.
  • This decision extends beyond smart surveillance into the use of any technology to automate important decisions about people’s lives, including decisions that could cause them substantial injury. Some contexts include: automated resume screening, screening for housing, and screening using pricing models.

When using AI for facial recognition you must:

  • Carefully consider how and when people can be enrolled in an automated decision-making system, particularly when that system can substantially injure them
  • Notify people about the use of the technology (unless this is impossible due to specific safety concerns)
  • Allow an opt out (unless this is impossible due to specific safety concerns)
  • Notify people when you take some action against them based on this, as well as how to contest it
  • Deploy robust testing, including testing for statistically significant bias on the basis of race, ethnicity, gender, sex, age or disability, acting alone or in combination (a toy example follows this list)
  • Conduct a detailed assessment of how inaccuracies may arise from training data, hardware issues, software issues, probe photos and differences between training and deployment environments
  • Conduct ongoing annual testing “under conditions that materially replicate” conditions in which the system is deployed
  • Shut down the system if you cannot address the risks identified through this assessment and testing
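
On the robust-testing point, here is a toy TypeScript sketch of one common approach: comparing error rates between two demographic groups with a two-proportion z-test. The numbers are made up, and a real fairness audit would go much further (multiple groups, intersections, multiple metrics):

```typescript
// Toy sketch: test whether false-match rates differ significantly
// between two groups. Data and the 1.96 threshold are illustrative.

function twoProportionZ(errorsA: number, nA: number, errorsB: number, nB: number): number {
  const pA = errorsA / nA;
  const pB = errorsB / nB;
  const pooled = (errorsA + errorsB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return (pA - pB) / se; // |z| > 1.96 ~ significant at the 5% level
}

const z = twoProportionZ(48, 1000, 12, 1000); // false matches per 1,000 probes
if (Math.abs(z) > 1.96) {
  console.warn(`Significant disparity (z = ${z.toFixed(2)}); investigate before deployment.`);
}
```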

Federal Judges Start Cracking Down on the Use of Artificial Intelligence in Court Filings
https://www.lexblog.com/2023/12/11/federal-judges-start-cracking-down-on-the-use-of-artificial-intelligence-in-court-filings/ (Mon, 11 Dec 2023)

The U.S. District Court for the Eastern District of Michigan recently published a proposed rule requiring lawyers to disclose any time they use artificial intelligence to help them write legal filings. Per Danielle Ferguson of Law360, lawyers would need to verify all citations were real.

According to the revised rule in Michigan:

  • “Artificial intelligence” or “AI” means the capability of computer systems or algorithms to imitate intelligent human behavior.
  • If generative AI is used to compose or draft any paper presented for filing, the filer must disclose its use and attest that citations of authority have been verified by a human being using print volumes or traditional legal databases, and that the language in the paper has been checked for accuracy by the filer.

What might this mean for the future of AI in the courts? Is this the start of a trend?

In November, the 5th U.S. Circuit Court of Appeals proposed a similar change: if a program was used, filers must promise that all text, including citations and legal analysis, was reviewed for accuracy and approved by a human.

U.S. District Judge Brantley Starr in the Northern District of Texas also requires litigants to file a certificate attesting either that no generative AI was used in filings, or that any AI-generated content was checked for accuracy by a human.

For more information, read the Law360 article here or the rule here.

Oregon Establishes State Government AI Advisory Council
https://www.lexblog.com/2023/12/06/oregon-establishes-state-government-ai-advisory-council/ (Wed, 06 Dec 2023)

Governor Tina Kotek recently established the Oregon State Government AI Advisory Council.

The council will develop recommendations on the use of artificial intelligence throughout state government, while honoring transparency, privacy and equity. Those recommendations should be ready no later than six months after its first convening; a final recommended action plan should be ready no later than 12 months after its first convening.

The council was established by executive order.

For more information, read the news release here and the order here.

Oregon Passes State Privacy Law
https://www.lexblog.com/2023/12/04/oregon-passes-state-privacy-law/ (Mon, 04 Dec 2023)

The state of Oregon has passed a comprehensive data protection law (SB0619), which will go into effect in July 2024.

What do you need to know about SB0619, also known as the Oregon Consumer Privacy Act?

  • Similar to Colorado and its progeny, but with differences, not the least of which is that it applies to nonprofits starting July 2025.
  • A different definition of personal data than the other state laws (though the practical difference is unclear): it doesn’t include information made available through widely distributed media.
  • Biometric data is data capable of identifying a person, even if it is not used for that purpose.
  • Deidentified data includes data derived from patient data that has been deidentified under HIPAA.
  • “Sale” means an exchange for monetary or other valuable consideration, but carves out sharing with an affiliate, transfers as part of a merger or acquisition, and information made publicly available.
  • Scope: processing the personal data of 100,000 consumers, or of 25,000 consumers where 25% of revenue comes from the sale of data.
  • There is a list of carve outs that the law doesn’t prohibit a controller from doing, which include: (1) Conducting internal research to develop, improve or repair products, services or technology; (2) Performing internal operations that are reasonably aligned with a consumer’s expectations, that the consumer may reasonably anticipate based on the consumer’s existing relationship with the controller or that are otherwise compatible with processing.
  • The law provides a much-needed clarification (obvious under GDPR, but not explicitly stated in the parallel state laws) that the carve-outs apply only subject to the controller fulfilling the data minimization, purpose limitation and retention limitations of the law (adequate and reasonably necessary for, relevant to, proportionate in relation to and limited to the purposes) and adequately protecting the data from unauthorized use.
  • The law specifically states that the burden of proof regarding the carve-outs is on the controller.
  • The consumer may designate an authorized agent by means of an internet link, browser setting, browser extension, global device setting or other technology that enables the consumer to opt out of the controller’s processing of the consumer’s personal data.
  • Controllers must provide a clear and conspicuous description of any processing of personal data in which the controller engages for the purpose of targeted advertising or for the purpose of profiling the consumer in furtherance of decisions that produce legal effects or effects of similar significance, and a procedure by which the consumer may opt out of this type of processing.
  • A controller that discloses deidentified data must exercise reasonable oversight to monitor compliance with any contractual commitments to which the deidentified data is subject and shall take appropriate steps to address any breaches of the contractual commitments.
  • Enforceable by the Attorney General’s Office.

A Cookie Is Not Just a Cookie: EDPB Issues Draft Guidelines on Art 5(3) ePrivacy Directive
https://www.lexblog.com/2023/12/04/a-cookie-is-not-just-a-cookie-edpb-issues-draft-guidelines-on-art-53-eprivacy-directive/ (Mon, 04 Dec 2023)

A cookie is not just a cookie, according to the European Data Protection Board. Article 5(3) also reaches similar technologies, other forms of storage and access, and the Internet of Things (IoT).

Here are some key takeaways you need to know from the EDPB’s draft guidelines clarifying the applicability of Article 5(3) of the ePrivacy Directive.

The operations carried out relate to information

This includes both non-personal data and personal data, regardless of how the data was stored and by whom, i.e., whether by an external entity (including entities other than the one gaining access), by the user, by a manufacturer, or in any other scenario.

The operations carried out involve a ‘terminal equipment’ of a subscriber or user.

  • NOT a device that solely acts as a communication relay.
  • May comprise any number of individual pieces of hardware that together form the terminal equipment. This may or may not take the form of a physically enclosed device hosting all the display, processing, storage and peripheral hardware (for example, smartphones, laptops, connected cars, connected TVs, smart glasses).
  • Terminal equipment is protected where it allows personal correspondence to be carried out and where the legitimate interests of legal persons are at stake. The user or subscriber may own, rent or otherwise be provided with the terminal equipment.
  • Multiple users or subscribers may share the same terminal equipment across multiple communications (for example, in the case of a connected car), and a single communication may involve more than one terminal equipment. Protection does not depend on whether the electronic communication was initiated by the user, or even on whether the user is aware of that communication.

The operations carried out are made in the context of the ‘provision of publicly available electronic communications services in public communications networks’

  • Broad enough to cover any type of infrastructure: networks managed by an operator or not, networks co-managed by a group of operators, or even ad-hoc networks in which terminal equipment may dynamically join or leave a mesh of other terminal equipment using short-range transmission protocols.
  • There is no limitation with regard to the number of terminal equipment present in the network at any time.
  • The public availability of the communication service over the communication network is necessary for the applicability of Article 5(3) ePD.
  • The fact that the network is made available to a limited subset of the public (for example, subscribers, whether paying or not, subject to eligibility conditions) does not make such a network private.

The operations carried out indeed constitute a ‘gaining of access’ or ‘storage’

  • Storage and access do not need to occur within the same communication and do not need to be performed by the same party.
  • Applies where the accessing entity instructs the terminal equipment to proactively send information on each subsequent HTTP (Hypertext Transfer Protocol) call (cookies).
  • Applies where the accessing entity distributes software on the terminal of the user that will then proactively call an API (application programming interface) endpoint over the network.
  • Applies to JavaScript code, where the accessing entity instructs the browser of the user to send asynchronous requests with the targeted content (see the sketch after this list).
  • Applies when the entity instructing the terminal to send back the targeted data and the entity receiving information are not the same.
  • As long as the networked storage medium constitutes a functional equivalent of a local storage medium (including the fact that its only purpose is for the user of the terminal equipment to store information that will be processed on the terminal equipment itself), that storage medium will be considered part of the terminal equipment.
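
To make these mechanics concrete, here is a minimal sketch of the JavaScript pattern the guidelines describe: client-side code distributed by the accessing entity that instructs the browser to proactively send information from the terminal equipment to a remote endpoint. The collector.example URL and payload fields are illustrative assumptions, not anything taken from the guidelines.

```typescript
// Hypothetical sketch of the pattern described above: distributed
// client-side code instructing the browser to send information from
// the terminal equipment over the network. Under the draft guidelines,
// triggering this transmission is a "gaining of access" even though
// the accessing entity never touches the device directly.
// "collector.example" and the payload fields are illustrative only.
async function reportToCollector(): Promise<void> {
  const payload = {
    userAgent: navigator.userAgent,              // information on the device
    screen: `${screen.width}x${screen.height}`,  // device characteristics
    referrer: document.referrer,
  };
  await fetch("https://collector.example/api/ingest", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}
void reportToCollector();
```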

Use cases:

Information:

  • MAC or IP address of the terminal equipment
  • Session identifiers (SSRC, Websocket identifier)
  • Authentication tokens
  • HTTP headers (including the “accept” field or user agent)
  • Caching mechanisms (such as ETag or HSTS; see the sketch after this list)
  • Other functionalities (cookies being one of them)
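
The caching entry deserves unpacking: because a browser stores the ETag it receives and echoes it back on later requests in the If-None-Match header, the mechanism can be repurposed as a cookie-equivalent identifier, which is why it falls within scope. A minimal sketch, assuming a Node server; the port and GIF response are illustrative assumptions.

```typescript
// Hypothetical sketch: repurposing the HTTP ETag caching mechanism as
// a cookie-equivalent identifier. The browser caches the ETag it
// receives and echoes it back via If-None-Match, so the stored value
// works as a unique ID even when cookies are blocked.
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

createServer((req, res) => {
  const echoed = req.headers["if-none-match"];
  if (echoed) {
    // Returning browser: the echoed ETag identifies it across visits.
    res.writeHead(304, { ETag: echoed });
  } else {
    // First visit: mint an identifier and plant it via the cache.
    res.writeHead(200, {
      ETag: `"${randomUUID()}"`,
      "Cache-Control": "private, max-age=0, must-revalidate",
      "Content-Type": "image/gif",
    });
  }
  res.end();
}).listen(8080);
```

This is why the guidelines treat caching mechanisms as in scope: the stored value is functionally indistinguishable from a cookie.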

Local use:

Use of information by an application would not be subject to Article 5(3) ePD as long as the information does not leave the device, but when this information or any derivation of this information is accessed through the communication network, Article 5(3) ePD may apply.

Crypto mining: the sole fact that the software instructing the nefarious processing has been distributed over a network would imply the application of Article 5(3) ePD.

Tracking pixels and URLs: Provided the pixel or tracked URL has been distributed over a public communication network, it clearly constitutes storage on the terminal equipment of the user, at the very least through the caching mechanism of the client-side software. As such, Article 5(3) ePD is applicable. It can also constitute access.
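
As an illustration of the pixel pattern, here is a minimal hypothetical sketch; tracker.example and the encoded parameters are assumptions, not anything taken from the guidelines.

```typescript
// Hypothetical sketch of a tracking pixel: page code distributed by
// the tracker builds a 1x1 image URL that encodes details about the
// visit. Merely assigning img.src instructs the browser to fetch (and
// cache) the pixel, transmitting the encoded data to the tracker.
const params = new URLSearchParams({
  page: location.pathname,  // which page was viewed
  ref: document.referrer,   // where the visitor came from
});
const pixel = new Image(1, 1);
pixel.src = `https://tracker.example/pixel.gif?${params}`;
```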

IP address: Unless the entity can ensure that the IP address does not originate from the terminal equipment of a user or subscriber, it has to take all the steps required under Article 5(3) ePD (e.g., in the case of IPv6 addresses).

IoT:

  • Some devices have a direct connection to a public communication network, for example through WiFi or a cellular SIM card. In that case, the IoT device connected to the public communications network would itself be considered terminal equipment.
  • Other IoT devices do not have a direct connection to a public communication network and might be instructed to relay the information to another device through a point-to-point connection (for example, through Bluetooth).
  • In both situations, Article 5(3) ePD would apply: through the instruction of the IoT device to send the dynamically stored data to a remote server, there is a “gaining of access.”
  • In the case of IoT devices connected to the network via a relay device (a smartphone, a dedicated hub, etc.) with a purely point-to-point connection between the IoT device and the relay device, the transmission of data to the relay could fall outside Article 5(3) ePD, as the communication does not take place on a public communication network. However, the information received by the relay device would be considered stored by a terminal, and Article 5(3) ePD would apply as soon as the relay is instructed to send that information to a remote server.

Unique identifiers

In the context of “unique identifier” collection on websites or mobile applications, the collecting entity is instructing the browser (through the distribution of client-side code) to send that information. As such, a “gaining of access” is taking place and Article 5(3) ePD applies.
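
A minimal hypothetical sketch of that pattern (collector.example is a placeholder) shows why both limbs of Article 5(3) ePD are engaged:

```typescript
// Hypothetical sketch of unique-identifier collection. The distributed
// code both stores information on the terminal equipment (the
// localStorage write) and instructs the browser to send it back over
// the network (the beacon), engaging both "storage" and "gaining of
// access" under Article 5(3) ePD. "collector.example" is illustrative.
let uid = localStorage.getItem("uid");
if (uid === null) {
  uid = crypto.randomUUID();         // mint a persistent identifier
  localStorage.setItem("uid", uid);  // "storage" on the terminal equipment
}
// "Gaining of access": the browser is instructed to transmit the value.
navigator.sendBeacon("https://collector.example/collect", uid);
```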

]]>
Privacy Compliance & Data Security
The FTC Is Coming After Your AI https://www.lexblog.com/2023/11/27/the-ftc-is-coming-after-your-ai/ Mon, 27 Nov 2023 16:33:20 +0000 https://www.lexblog.com/2023/11/27/the-ftc-is-coming-after-your-ai/ The Federal Trade Commission is after your Artificial Intelligence.

According to a recent news release, the FTC has approved an omnibus resolution authorizing the use of compulsory process in nonpublic investigations involving products and services that use or claim to be produced using artificial intelligence (AI) or claim to detect its use.

The omnibus resolution will streamline FTC staff’s ability to issue civil investigative demands (CIDs) in investigations relating to AI. (CIDs are a form of compulsory process similar to a subpoena.)

Take note!

]]>
Privacy Compliance & Data Security
Cross Border Complaints: What You Need to Consider https://www.lexblog.com/2023/11/21/cross-border-complaints-what-you-need-to-consider/ Tue, 21 Nov 2023 17:01:38 +0000 https://www.lexblog.com/2023/11/21/cross-border-complaints-what-you-need-to-consider/ Here are a few things to consider in a cross border complaint, according to the International Association of Privacy Professionals’ Data Protection Congress panel with Isabelle Vereecken of the European Data Protection Board, Cedric Burton of Wilson Sonsini Goodrich & Rosati, Romain Robert, a former NOYB program director, and Antonio Caselli of The Italian Data Protection Authority.

  • The proposed regulation will require minimum provisions to ensure that a complaint is admissible despite differing member state formalities, and to help figure out who is the lead supervisory authority (LSA).
  • If you receive a request for information (RFI) from a DPA, you must reply. It’s not a formal investigation, but cooperation is important.
  • You may receive multiple RFIs from different DPAs until the LSA is chosen.
  • There are now going to be rules regarding selecting the LSA.
  • If possible, it’s important to determine the main establishment of the party sued. That obviously impacts the determination of the LSA.
  • After a formal complaint is filed, informal consultation may happen.
  • Preliminary views currently do not have to be circulated for comments, but the proposed regulation contemplates more collaboration at an earlier stage, leading to better harmonization. Deadlines are also proposed to be added, mostly for the concerned supervisory authorities (CSAs).
  • The draft regulation does not add additional rights for the complainant; by harmonizing, it sometimes actually decreases those rights. (For example, in Belgium, the complainant has rights regarding access and expressing opinions that are on par with those of the controller.)
  • CSA objections should relate to violations of the GDPR or to the proposed decision constituting a breach of fundamental human rights.
  • Of 900 one-stop-shop (OSS) cases, the EDPB took action in only 11.
  • Where there is an unresolved objection, the EDPB gets involved. The EDPB is not an appeal body. It just assesses the objections.
  • First, there is a process to make sure the file is complete.
  • The EDPB secretariat conducts a legal analysis and then brings it to the members.
  • Sometimes the EDPB decides an objection is not sufficiently reasoned or relevant; that is a majority decision.
]]>
Privacy Compliance & Data Security
Pennsylvania Could Join the US State Privacy Law Race https://www.lexblog.com/2023/11/06/pennsylvania-could-join-the-u-s-state-privacy-law-race/ Mon, 06 Nov 2023 18:03:14 +0000 https://www.lexblog.com/2023/11/06/pennsylvania-could-join-the-u-s-state-privacy-law-race/ Pennsylvania is considering its own state privacy law, joining California and a host of other U.S. states.

Rep. Edward Neilson (D-174) is sponsoring H.B. 1201. The bill was referred to the Pennsylvania House of Representatives’ Commerce Committee on May 19 and discussed on September 7.

Some key points:

  • Scope thresholds are revenue of $10 million (lower than in the other laws), 50,000 users, or 50% of revenue from the sale of data.
  • Standard carve-outs are included, such as an entity-level exemption for financial institutions
  • Data minimization, purpose specification, information security obligations are included
  • Employment-related data is included in personal information, and employment opportunities are included as a legal or similarly significant effect requiring a DPIA (if impacted by automated decision-making). However, data processed or maintained in the context of employment is carved out
  • Publicly available information is excluded, but it can only be used for a purpose compatible with that for which the data is maintained and made available.
  • Sale: for monetary or other valuable consideration
  • Sensitive information is defined similarly to the other state laws and requires consent
  • Third party includes public authority or agency
  • Similar consumer rights as under other state laws (access, rectification, deletion, opt out)
  • Targeted advertising to consumers under 16, or selling their data, requires consent
  • Required privacy notice
  • Honoring opt out preference signals (starting 1/1/26)
  • Detailed requirements for controller to processor contract (DPA)
  • Required data protection assessment for activities with heightened risk of harm
  • Specifically addresses pseudonymized data
  • Enforced by the Pennsylvania Attorney General as an unfair or deceptive act/practice, with a mandatory 60-day cure period until December 31, 2025
  • AG can provide guidance and will promulgate regulations
]]>
Privacy Compliance & Data Security