AI Archives - LexBlog
https://www.lexblog.com/tag/ai/

From Encryption to Employment, U.S. Federal Agencies Brace for the Effects of Quantum Computing, AI and More
https://www.lexblog.com/2024/05/31/from-encryption-to-employment-u-s-federal-agencies-brace-for-the-effects-of-quantum-computing-ai-and-more/ (Fri, 31 May 2024)

In this week’s edition of Consumer Protection Dispatch, we look at the latest regulatory developments from the U.S. Department of Commerce, Consumer Financial Protection Bureau, and the Securities and Exchange Commission regarding data and AI.

Internet & Social Media Law Blog
From AI to integrations: 3 new DealCloud product innovations to boost performance and results
https://www.lexblog.com/2024/05/31/from-ai-to-integrations-3-new-dealcloud-product-innovations-to-boost-performance-and-results/ (Fri, 31 May 2024)

Intapp DealCloud recently released a suite of new product innovations built to improve efficiency at your firm through automated workflows, zero-entry capabilities, and enhanced collaboration. Learn how they can help your professionals save time, improve accuracy, and strengthen networking and business development.

1. Automatically create transcripts and gain AI-driven insights from Microsoft Teams meetings

Microsoft Outlook now provides the option to automatically capture transcripts for recorded Microsoft Teams meetings. These transcripts integrate with DealCloud so users can easily find and refer to them. Users with Intapp Assist can also use the AI tool to pull summaries of these meetings and generate follow-up content.

What problem does it solve?

Taking thorough notes during meetings can be distracting — but without them, busy professionals often forget key takeaways and action items.

By automatically transcribing meetings via Outlook and DealCloud, all meeting participants can focus on the live discussion — without worrying whether information is being correctly captured. And if anyone needs to refresh their memory, they can just look back at the transcript in DealCloud.

Busy professionals can also save time by using Intapp Assist, which provides AI-summarized insights that cover the most important takeaways from the meeting. Your teams can quickly review these summaries and take any necessary next steps.

How do I enable this feature?

DealCloud users must first enable the tool’s activity capture. Then, they can integrate the meeting transcript feature by going to the Outlook add-in settings and selecting “Include Teams Meeting Transcript Automatically.” AI insights are available exclusively for Intapp Assist users.

2. Elevate networking and prospecting via LinkedIn

The DealCloud LinkedIn browser integration matches company and contact names, websites, and LinkedIn profiles to existing companies and contacts within your DealCloud solution. Newly identified contacts and companies are easily added with a match from DataCortex and Intapp Data. With these features, your users always have access to accurate data and insights while targeting relationships within LinkedIn.

What problem does it solve?

LinkedIn is often the most up-to-date source when it comes to validating a contact’s new role or company. However, it takes time and manual effort to pull all this information from LinkedIn and update the contact profiles in your own system.

With the new DealCloud LinkedIn browser integration, professionals can identify, validate, and capture company and contact information without having to toggle between LinkedIn and their own relationship management systems.

How do I enable this feature?

DealCloud users can find this browser extension in the Google Chrome web store.

3. Access more data and insights from Moody’s within DataCortex

View an expanded data set from Moody’s within DataCortex. This data includes more than 470 million global companies and 80+ company attributes such as demographics, company hierarchy, industry classification, and financial information.

What problem does it solve?

Previously, Moody’s data was most applicable to M&A professionals, as companies could only be tagged as participants in a deal. Now, users can discover target companies and associated details directly within DealCloud.

Users can leverage this data to enhance proprietary sourcing and business development workflows. They can also use this integration to uphold data integrity by validating and enriching firmographic information from Moody’s.

How do I enable this feature?

This feature is available to DealCloud users with Moody’s enabled as a third-party data partner within DataCortex. Learn more about Moody’s.

Interested in becoming an Intapp DealCloud user? Schedule a demo.

Horizons
Legal AI Under the Microscope: Stanford HAI’s In-Depth Analysis of Lexis and Westlaw AI Tools
https://www.lexblog.com/2024/05/31/legal-ai-under-the-microscope-stanford-hais-in-depth-analysis-of-lexis-and-westlaw-ai-tools/ (Fri, 31 May 2024)

Reflections on the Stanford HAI Report on Legal AI Research Tools

The updated paper from the Stanford HAI team presents rigorous and insightful research into the performance of generative AI tools for legal research. The complexity of legal research is well-acknowledged, and this study looks into the capabilities and limitations of AI-driven legal research tools like Lexis+ AI and Westlaw AI-Assisted Research.

Stanford and Unclean Hands

This is the Stanford Human-Centered Artificial Intelligence (HAI) team’s second bite at this apple. The first version of this report was criticized by me and others for using Thomson Reuters’ Practical Law AI tool as a legal research tool instead of using Westlaw Precision AI (which they call Westlaw AI-AR in their report). In the follow-up report, as well as in blog and LinkedIn posts, the authors made it sound like Thomson Reuters had some nefarious reason for this. In reality, TR had not released the AI tool to any academic institution, not just to the Stanford group.

In the original report, this limitation should have been noted front and center; instead, it was not mentioned until page 9. It appears to me that Stanford HAI released the initial report in an attempt to pressure TR into giving them access, which did work. However, this type of maneuver may have repercussions on whether the legal industry gives Stanford HAI a second chance and takes this report seriously. We expect a high degree of ethics from prestigious institutions like Stanford, and in my opinion, this team of researchers did not meet those expectations in the initial report.

Challenges in Legal Research

Legal research is inherently challenging due to the intricate and nuanced nature of legal language and the vast array of case law, statutes, and regulations that must be navigated. The Stanford researchers designed their study with prompts that often included false statements or were worded in ways that tested the AI tools’ ability to understand legal complexities. This approach, while insightful, introduces a certain bias aimed at identifying the tools’ vulnerabilities. I would not say that they cherry-picked the results for the paper, but that the questions posed were designed to challenge the systems they were testing. This is not a bad thing, just something to consider when reading the report.

AI Tools: Strengths and Weaknesses

One of the primary advantages and disadvantages of legal research generative AI tools is their attempt to provide direct answers to user prompts. Traditional legal research tools like Westlaw and Lexis previously operated by presenting users with a list of relevant cases, statutes, and regulations based on Boolean or natural language searches. The shift to AI-generated responses aims to streamline this process but comes with its own set of challenges.

For instance, Westlaw’s Precision AI (Westlaw AI-AR) often produces lengthy answers, which can be a double-edged sword. While these detailed responses can be informative, they also increase the risk of including inaccuracies or “hallucinations.” The study highlights that these AI tools sometimes produce errors, raising questions about the effectiveness of their initial vector search compared to traditional Boolean and natural language searches. The researchers gave the results a Correct/Incorrect/Refusal score. To be scored correct, the AI-generated answer had to be 100% correct; any inaccurate information automatically resulted in an incorrect score. If the results had five points of reasoning, and only one of those was factually incorrect or hallucinated, the entire question was ruled incorrect. This is something else to keep in mind when looking at the statistics.
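
As a rough illustration of how unforgiving that all-or-nothing rule is, here is a minimal sketch in Python. The function name, the hand-labeled claim list, and the refusal flag are hypothetical stand-ins for the researchers’ actual grading process, which is not published as code.

```python
# Minimal sketch of the all-or-nothing scoring described above.
# The hand-labeled claim list and refusal flag are hypothetical inputs,
# not the Stanford HAI team's actual grading pipeline.

def grade_response(claims_correct: list[bool], refused: bool) -> str:
    """Return 'Correct', 'Incorrect', or 'Refusal' for one AI-generated answer."""
    if refused:
        return "Refusal"
    # A single incorrect or hallucinated point makes the entire answer Incorrect.
    return "Correct" if claims_correct and all(claims_correct) else "Incorrect"

# Example: five points of reasoning, one of them hallucinated -> "Incorrect"
print(grade_response([True, True, True, False, True], refused=False))
```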

Comparing AI and Traditional Searches

A key point of interest is how AI search results for statutes, regulations, and cases compare with traditional search methods. The study suggests that while the AI tools’ vector search technology is promising in retrieving relevant documents, the accuracy of the AI-generated responses is still a concern. Users must still verify the AI’s answers against the actual documents, much like traditional legal research. As my friend Tony Thai from Hyperdraft would jokingly say, “I mean we could just do the regular research and I don’t know...READ like we are paid to do.”

The Role of Vendors and the Future of Legal AI

Vendors like Westlaw and Lexis need to take this report seriously. The previous version of the study had its limitations, but the updated report addresses many of these deficiencies, providing a more comprehensive evaluation of the AI tools. While vendors may point out that the study is based on earlier versions of their products, the fundamental issues highlighted remain relevant.

The legal community has not abandoned these AI tools despite their flaws, primarily due to the decades of goodwill these vendors have built. The industry understands that this technology is still in its infancy. With the exception of CoCounsel, which was not directly evaluated in this study, these products have not reached their first birthday yet. This is very new technology that vendors and customers want released quickly, with issues fixed along the way. Vendors have the opportunity to engage in honest conversations with their customers about how they plan to improve these tools and reduce hallucinations. Vendors need to respect their customers and be forthcoming about the issues they are working to correct. Creative advertising may create some short-term wins but may damage long-term relationships. Vendors are bending themselves into pretzels to advertise some version of “hallucination-free” on websites and marketing media when everyone knows that hallucinations in Generative AI tools are a feature, not a bug.

RAG as a Stop-Gap Measure

The study underscores that Retrieval-Augmented Generation (RAG) is a temporary solution, something I have discussed with the vendors for a while now. For generative AI tools to truly excel, users need more sophisticated ways to manipulate their searches, refine retrieval processes, and interact with the AI. Currently, the system is basic and limited, but there is optimism that it will improve with time and consumer pressure. RAG is one of the first steps toward improving AI-generated results for legal research, not the last.

Personal Reflections and Industry Implications

Reflecting on the broader implications, it is clear that the AI tools in their current form are not yet replacements for thorough, human-conducted legal research. The tools offer significant promise but must evolve to become more reliable and versatile. Legal professionals must remain vigilant in verifying AI-generated outputs and continue to advocate for advancements that will make these tools more robust and dependable. We are right in the middle of the Trough of Disillusionment on the Gartner Hype Cycle. This means that customers are expecting solid results, and while there is this unusual honeymoon with the legal industry when it comes to forgiving AI tools for hallucinations, this honeymoon will not last much longer. Lawyers are notorious for trying a product once, and if it works, they stick with it, and if it doesn’t, it may be months or years before they are willing to try it again. We are almost at that point in the industry with Generative AI tools that don’t reach their advertised abilities.

While the Stanford HAI report sheds light on the current state of legal AI research tools, it also provides a roadmap for future improvements. The legal community must work collaboratively with vendors to refine these Generative AI technologies, ensuring they become valuable assets in the practice of law.

3 Geeks and a Law Blog
ESMA issues initial guidance for firms using AI in investment services
https://www.lexblog.com/2024/05/31/esma-issues-initial-guidance-for-firms-using-ai-in-investment-services/ (Fri, 31 May 2024)

On 30 May 2024, the European Securities and Markets Authority (ESMA) issued a statement providing initial guidance for firms that use artificial intelligence (AI) technologies when providing investment services to retail clients.

ESMA states that, when using AI, it expects firms to comply with relevant MiFID II requirements, particularly in relation to organisational aspects, conduct of business, and their regulatory obligation to act in the best interest of the client.

It also warns that, while AI offers potential benefits to firms and clients, it also poses inherent risks including:

  • Algorithmic biases and data quality issues.
  • Opaque decision-making by a firm’s staff members.
  • Over-reliance on AI by both firms and clients for decision-making. 
  • Privacy and security concerns linked to the collection, storage, and processing of the large amount of data needed by AI systems.

Investment firms are reminded that potential uses of AI that would be covered by MiFID II requirements include customer support, fraud detection, risk management, compliance, and support to firms in the provision of investment advice and portfolio management. 

ESMA and the national competent authorities plan to continue monitoring the use of AI in investment services and the relevant EU legal framework to determine whether further action is needed in this area.

Global Regulation Tomorrow
Singapore Publishes Generative AI Model Governance Framework
https://www.lexblog.com/2024/05/31/singapore-publishes-generative-ai-model-governance-framework/ (Fri, 31 May 2024)

On May 30, 2024, Singapore’s AI Verify Foundation and Infocomm Media Development Authority published their Model AI Governance Framework for Generative AI (GenAI Framework).

Was the GenAI Framework Consulted Upon?

Yes, a draft of the GenAI Framework was open for public consultation from January 16 to March 15, 2024. Our blog post on this can be found here.

Does the GenAI Framework Cover AI That Is Not Generative AI?

No, an earlier model governance framework was issued for traditional (non-generative) AI. Our blog post on this can be found here.

What Does the GenAI Framework Address?

  • Accountability – This compels all parties throughout the entire multi-layered AI development chain to be responsible to end-users. A useful parallel can be drawn with today’s cloud and software development stacks, where initial practical steps can be taken by the various players across the tech stack.
  • Data – As a core element of AI model development, what is fed to the model is important in terms of the quality of data used, which should come from trusted data sources. In cases where the use of data for model training is potentially contentious, such as personal data and copyright material, it is also important to give businesses clarity, ensure fair treatment, and do so in a pragmatic way.
  • Trusted Development and Deployment – Notwithstanding the limited visibility that end-users may have, meaningful transparency around the baseline safety and hygiene measures undertaken is key. This involves industry adopting best practices in development, evaluation and over time, “food label”-type transparency and disclosure.
  • Incident Reporting – Establishing regulatory notification structures and practices can help facilitate timely remediation of any incidents and support the continuous improvement of AI systems more broadly.
  • Testing and Assurance – Third-party testing and assurance can play a complementary role to develop common and consistent standards around AI, and ultimately demonstrate trust with end-users. With independent verification already adopted in the finance and healthcare domains, such processes can be similarly adopted in AI even as an emerging field. It is important to develop common standards around AI testing to ensure quality and consistency.
  • Security – Generative AI introduces the potential for new threat vectors against the models themselves. This goes beyond security risks inherent in any software stack. While this is a nascent area, existing frameworks for information security need to be adapted and new testing tools developed to address these risks.
  • Content Provenance – Transparency about where and how content is generated is necessary, to avoid misinformation and fraud. Use of technical solutions like digital watermarking and cryptographic provenance must be considered in the right context.
  • Safety and Alignment Research & Development (R&D) – Accelerated R&D investment is required to improve model alignment with human intention and values. Singapore hopes to achieve this alongside global cooperation among AI safety R&D institutes.
  • AI for Public Good – Democratizing AI access, improving public sector adoption, as well as upskilling workers and developing systems sustainably, will steer AI towards outcomes for the public good.

Is the GenAI Framework a Mandatory Regulatory Requirement in Singapore?

No, this is a model governance framework which businesses developing or deploying generative AI can adapt for use. The document itself encourages stakeholders to view the relevant issues set out in the GenAI Framework “in a practical and holistic manner” and notes that “[n]o single intervention will be a silver bullet.”

Businesses are therefore advised to tailor the relevant good practices offered in the GenAI Framework, based on their own unique characteristics such as the particular use case for the generative AI system, nature of business and associated risks arising therefrom or other circumstances.

Additionally, reliance on the GenAI Framework does not absolve a company from having to comply with other applicable laws including but not limited to intellectual property, data protection, online safety, consumer protection, competition, contract, tort and sectoral regulations. 

Should you require any advice or assistance, feel free to reach out to your usual firm contact.

Disclaimer: The views and opinions expressed here are of the author(s) alone and do not necessarily reflect the opinion or position of Squire Patton Boggs or its clients. While every effort has been made to ensure that the information contained in this article is accurate, neither its author(s) nor Squire Patton Boggs accept responsibility for any errors or omissions. The content of this article is for general information only and is not intended to constitute or be relied upon as legal advice.

Privacy World
Debunking AI Myths with AI Visionary Aurélie Jacquet
https://www.lexblog.com/2024/05/30/debunking-ai-myths-with-ai-visionary-aurelie-jacquet/ (Thu, 30 May 2024)

The buzz around AI can make it difficult to discern the reality from the hype. Fortunately, some bright minds in the legal world are way ahead of this challenge, including AI Visionary Aurélie Jacquet.

The Relativity Blog
Gen AI in Legal Practice: It’s Not About Us Lawyers, It’s About Our Clients
https://www.lexblog.com/2024/05/30/gen-ai-in-legal-practice-its-not-about-us-lawyers-its-about-our-clients/ (Fri, 31 May 2024)

Lawyers need to advise clients of the risks of Gen AI.

Another week, and I find myself at yet another legal conference focusing on AI and Gen AI. Lots of the now-standard discussions about whether and how Gen AI will impact lawyers and the legal profession. Presenters droning on about the risks and benefits to lawyers of using Gen AI. But like so many things lawyers stew over, the focus of these discussions is almost always on lawyers’ professional navels and not on the interests of their clients.

When lawyers do focus on their clients in this area, it’s mostly all about worrying about what Gen AI will do to the all-powerful billable hour, what it will do to their revenue, and whether lawyers will be replaced by a Gen AI version of Her (or Him). 

Lawyers worry mainly not about their clients’ use and potential liability but about themselves.

But as usual, lawyers are collectively missing something. Their clients, both businesses and individuals, are using AI and Gen AI every day. They are using it to develop products. To manufacture products. To assist in making business and individual decisions. To assess risks. To create contracts. All the while, lawyers worry mainly not about their clients’ use and potential liability but about themselves.

As lawyers, our ultimate responsibility and singular focus should be to advise and protect our clients competently. To help them assess risks. To help them make business decisions about how to use Gen AI and when to use and not use it. Our clients can certainly make these kinds of decisions based on their business or individual needs. But they look to us to help them assess the legal risks and exposure from their Gen AI use. It’s up to us to educate our clients about the risks and potential bias of Gen AI in the business context and how that could lead to liability. When there is a lawsuit over their use of Gen AI tools, it’s up to us to navigate them through the litigation and reduce exposure.

Gen AI does bring some risk. Hallucinations. Inaccuracies. Bias.

And Gen AI does bring some risk. Hallucinations. Inaccuracies. Bias like that demonstrated by the Amazon recruitment tool, which was hastily abandoned because it picked men over women for Amazon jobs. Copyright infringement. Voice cloning. Product liability. Class actions. Not to mention the fact that even computer scientists don’t know how these systems do a lot of what they do. We need to educate our clients about what this means from a legal perspective and about the dangers.

So, if we want to be trusted advisors and help our clients with sophisticated and complicated litigation, don’t we have to understand Gen AI and its risks not only to ourselves but to our clients? Don’t we have to know the “risks and benefits” of Gen AI to competently advise our clients?

There are ethical considerations as well. Comment 8 to Model Ethical Rule 1.1, which governs competence, says we should keep abreast of the risks and benefits of relevant technology. Comment 2 to that same Rule provides, “Perhaps the most fundamental legal skill consists of determining what kind of legal problems a situation may involve, a skill that necessarily transcends any particular specialized knowledge.” Comment 2 to Rule 1.3 provides that we are to act with commitment and even zeal in representing our clients.

We need to ask not what Gen AI can or can’t do for us but what risks Gen AI does or doesn’t hold for our clients.

All these Comments suggest that when it comes to AI and Gen AI that our clients are using, we need to be knowledgeable and assess the risks of that use.  We can’t run away from Gen AI any more than we could from things like computers, smartphones, cybersecurity issues, and the legal risks these technologies pose. 

The lawyers who succeed in the future will be client-centric. They will appreciate that their clients use Gen AI tools so they can competently advise them. To paraphrase John F. Kennedy, we need to ask not what Gen AI can or can’t do for us but what risks Gen AI does or doesn’t hold for our clients.

TechLaw Crossroads
NIST Offers AI Governance Guideline to Help Avoid Bias Liability
https://www.lexblog.com/2024/05/30/nists-offers-ai-governance-guideline-to-help-avoid-bias-liability/ (Thu, 30 May 2024)

The issue of bias in artificial intelligence is assuming increased urgency in courtrooms around the country. Organizations that use AI to scan resumes can be sued for employment discrimination. Companies using facial recognition on their property might face premises liability. And numerous government agencies have announced their focus on companies that use AI in ways that violate federal antidiscrimination laws. Avoiding the inadvertent use of AI to implement or perpetuate unlawful biases requires thoughtful AI governance practices.

Basically, AI governance describes the ability to direct, manage, and monitor an organization’s AI activities. Put simply, your clients should no more uncritically accept mass-produced AI output than you would uncritically believe a salesperson you had just met. 

The U.S. National Institute of Standards and Technology (NIST) has recently offered AI governance protocols to minimize bias. Those protocols include the following:

1.         Monitoring.  AI is not “set it and forget it.” Organizations will want to monitor their AI systems for potential bias issues and have a procedure for alerting the proper personnel when the monitoring reveals a potential problem. Through appropriate monitoring, organizations can know about a potential liability before a lawsuit or a government enforcement action tells them about it.

2.         Written Policies and Procedures.  Robust written policies and procedures for all important aspects of the business are important, and AI is no exception. Absent effective written policies, managing AI bias can easily become subjective and inconsistent across business sub-units, which can exacerbate risks over time rather than minimize them. Among other things, such policies should include an audit and review process, outline requirements for change management, and provide details of any plans related to incident response for AI systems.

3.         Accountability.  Having a person or team in place who is responsible for protecting against AI bias will maximize your AI governance efforts. Ideally, the accountable person or team will have enough authority to command compliance with proper AI protocols implicitly – or explicitly if need be. And accountability mandates can also be embedded within and across the teams involved in the use of AI systems. Implementing effective AI governance to minimize biases requires careful thought. However, this implementation is crucial to protect against AI bias lawsuits or enforcement actions.

Data Privacy + Cybersecurity Insider
Leveraging ChatGPT and Generative AI for Intelligent eDiscovery Automation: eDiscovery Webinars
https://www.lexblog.com/2024/05/30/leveraging-chatgpt-and-generative-ai-for-intelligent-ediscovery-automation-ediscovery-webinars/ (Thu, 30 May 2024)

That’s a compelling title! Check out this webinar from Lexbe tomorrow to learn about leveraging ChatGPT and generative AI!

The post Leveraging ChatGPT and Generative AI for Intelligent eDiscovery Automation: eDiscovery Webinars appeared first on eDiscovery Today by Doug Austin.

eDiscovery Today Blog
Appellate Judge Proposes Possible Use of GenAI for Contract Interpretation – Recognizes That AI Hallucinates but Flesh-And-Blood Lawyers Do Too!
https://www.lexblog.com/2024/05/30/appellate-judge-proposes-possible-use-of-genai-for-contract-interpretation-recognizes-that-ai-hallucinates-but-flesh-and-blood-lawyers-do-too/ (Thu, 30 May 2024)

By now, most lawyers have heard of judges sanctioning lawyers for misuse of generative AI, typically for not fact-checking the outputs. Other judges have issued local rules governing the use of AI or prohibiting it. These actions have become prevalent. What has not been prevalent is judges encouraging the possible use of AI in interpreting contracts. Perhaps that will change because of a very thoughtful concurring opinion in an appeal to the 11th Circuit Court of Appeals in an insurance coverage dispute. In that opinion, Judge Newsom penned a 31-page concurrence which focused on whether and how AI-powered large language models should be “considered” to inform the interpretive analysis used in determining the “ordinary meaning” of contract terms.

Click here to read more.

AI Law and Policy
Frequently Asked Questions for FINRA Member Firms on AI Use in Public Communications
https://www.lexblog.com/2024/05/30/frequently-asked-questions-for-finra-member-firms-on-ai-use-in-public-communications/ (Thu, 30 May 2024)

Several days ago, FINRA released guidance updating its Frequently Asked Questions relating to Rule 2210 on Advertising and Public Communications to address artificial intelligence. This comes after FINRA has consistently noted that the use of AI and AI tools should be addressed by member firms in their policies and procedures and may implicate their regulatory obligations. The two new FAQs are accessible from the FINRA page on FAQs About Advertising Regulation.

The first addresses supervising chatbot communications and asks, “If a firm uses AI technology to create chatbot communications that are used with investors, how should the firm supervise that activity?”  FINRA notes in response that these communications may be subject to FINRA communications rules as correspondence, retail communications, or institutional communications.  The communications, like other communications, would be subject to FINRA Rules 2210 and 3110.  Rule 3110(b)(4) requires firms to establish, maintain, and enforce written procedures for the review of incoming and outgoing written (including electronic) correspondence relating to the firm’s investment banking or securities business that must be appropriate for the member’s business, size, structure, and customers.

The second addresses AI-created communications and asks, “Is a firm responsible for the content of communications created using artificial intelligence (AI) technology?”  Of course, as one would expect, firms are responsible for all communications, including AI-generated communications.  FINRA in the response reminds firms that they must ensure that AI-generated communications comply with applicable securities laws and regulations and FINRA rules, including compliance with supervision requirements, recordkeeping requirements and content standards, among others.

Free Writings + Perspectives
Slack’s Use of Customer Data in AI Training Sparks Privacy Concerns
https://www.lexblog.com/2024/05/29/slacks-use-of-customer-data-in-ai-training-sparks-privacy-concerns/ (Wed, 29 May 2024)

Editor’s Note: This article explores the recent controversy surrounding Slack’s use of sensitive user data for training AI models. It highlights key issues of transparency and user control, which are critical for maintaining trust in digital platforms. The discussion is particularly relevant for cybersecurity, information governance, and eDiscovery professionals as it underscores the importance of ethical data practices and robust privacy protections in an era of rapid technological advancement. Understanding these dynamics is crucial for developing strategies that protect user data while fostering innovation in digital communication tools.

Industry News – Artificial Intelligence Beat

ComplexDiscovery Staff

In a recent wave of privacy concerns, Slack has come under scrutiny over its practices of using customer data to train its machine-learning models without explicit user consent. The company, renowned for pioneering collaborative workplace technologies, finds itself entangled in debates that highlight critical concerns over personal data and artificial intelligence.

Reports emerged in mid-May suggesting that Slack, a widely used workplace communication platform, was employing sensitive user data, such as messages and files, for training AI models aimed at enhancing platform functionality. Documentation within Slack’s privacy policies indicated a potentially invasive use of data, stating that customer information was utilized to develop non-generative AI/ML models for features like emoji and channel recommendations.

The controversy intensified due to Slack’s opt-out-only policy regarding data usage. Users had to go through a complex process involving emailing the company to prevent their data from being used, a method considered non-user-friendly by many. This raised significant concerns about the ease with which users could protect their privacy.

In response to press inquiries and public backlash, Slack clarified its policy, stating that the data used for training their machine-learning models did not include direct message content. Instead, it utilized aggregate, de-identified data such as timestamps and interaction counts. This clarification aimed to assure users that their private conversations were not being directly analyzed or exposed.

The incident raises fundamental questions about the ethical use of customer data in machine learning, especially in a space as intimate and collaborative as Slack workspaces. While Slack insists on its commitment to privacy and adherence to industry-standard practices, the situation has sparked a broader discussion about transparency and control over personal data in the digital age.

A statement from a Salesforce spokesperson, representing Slack, reiterated the company’s stance on data protection. The spokesperson emphasized that Slack does not build or train models that could reproduce customer data. Despite the controversy, Slack continues to promote its premium generative AI tools, asserting that these do not rely on customer data for training.

The tension between advancing technological capabilities and protecting user privacy continues to grow, as evidenced by Slack’s ongoing efforts to balance innovation with user rights. This situation exemplifies the complex landscape of tech privacy issues, serving as a moment for technology companies to reassess their data handling and public trust strategies.

Industry experts suggest that the outcry over Slack’s data usage policies could drive significant changes in how tech companies manage user data. There is a growing demand for more transparent and user-friendly methods for opting out of data collection. Moreover, companies may need to implement more robust consent mechanisms that clearly inform users about how their data will be used and give them straightforward options to control their data.

This controversy also underscores the importance of clear communication from tech companies about their data practices. Users need to understand what data is being collected, how it is being used, and what measures are in place to protect their privacy. As AI and machine learning technologies continue to evolve, the ethical implications of data usage will become increasingly significant.

In the broader context of digital privacy, the Slack incident highlights the need for regulatory frameworks that protect user data while allowing for technological innovation. Governments and regulatory bodies may need to step in to establish clear guidelines and standards for data usage in AI development.

The reports about Slack’s use of user data for AI training have ignited a critical conversation about privacy and data ethics in the tech industry. As the debate continues, it is clear that both companies and regulators will need to navigate the delicate balance between leveraging data for technological advancement and safeguarding user privacy. This moment represents a salient juncture for the future of data privacy and AI ethics, with far-reaching implications for the tech industry and its users.

Assisted by GAI and LLM Technologies

Source: ComplexDiscovery OÜ

The post Slack’s Use of Customer Data in AI Training Sparks Privacy Concerns appeared first on ComplexDiscovery.

ComplexDiscovery
Senate Working Group and Biden administration guidance on the use of AI in the workplace
https://www.lexblog.com/2024/05/29/senate-working-group-and-biden-administration-guidance-on-the-use-of-ai-in-the-workplace/ (Wed, 29 May 2024)

Shortly after the DOL’s release of guidance on the use of AI in the workplace, a bipartisan working group from the U.S. Senate and the Biden administration have released additional guidance regarding the use of AI in the workplace.

Bipartisan Senate AI Working Group’s “road map” for establishing federal AI policies

On May 15, 2024, the Bipartisan Senate AI Working Group released a “road map” for establishing federal AI policies. The road map is titled “Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate,” and outlines the opportunities and risks involved with AI development and implementation. Most notably, the road map highlights key policy priorities for AI, such as: promoting AI innovation, investing in research and development for AI, establishing training programs for AI in the workplace, developing and clarifying AI laws and guidelines, addressing intellectual property and privacy issues raised by AI and creating related protections for those affected, and integrating AI into already-existing laws.

The working group acknowledged that the increased use of AI in the workplace poses the risk of “hurting labor and the workforce” but also emphasized that AI has great potential for positive application. This dichotomy necessitates the advancement of additional “innovation” that will create “ways to minimize those liabilities.”

Biden administration’s May 16, 2024 guidance on AI

A day later, on May 16, 2024, the Biden administration released guidelines on how humans should oversee AI at work. Noting that the guidelines apply to all sectors and during the whole lifecycle of AI (“from design to development, testing, training, deployment and use, oversight, and auditing”), the White House provided the following eight guiding principles:

  1. Centering worker empowerment: Workers and their representatives, especially those from underserved communities, should be informed of and have genuine input in the design, development, testing, training, use, and oversight of AI systems for use in the workplace.
  2. Ethically developing AI: AI systems should be designed, developed, and trained in a way that protects workers.
  3. Establishing AI governance and human oversight: Organizations should have clear governance systems, procedures, human oversight, and evaluation processes for AI systems for use in the workplace.
  4. Ensuring transparency in AI use: Employers should be transparent with workers and job seekers about the AI systems that are being used in the workplace.
  5. Protecting labor and employment rights: AI systems should not violate or undermine workers’ right to organize, health and safety rights, wage and hour rights, and anti-discrimination and anti-retaliation protections.
  6. Using AI to enable workers: AI systems should assist, complement, and enable workers, and improve job quality.
  7. Supporting workers impacted by AI: Employers should support or upskill workers during job transitions related to AI.
  8. Ensuring responsible use of worker data: Workers’ data collected, used, or created by AI systems should be limited in scope and location, used only to support legitimate business aims, and protected and handled responsibly.

Employers who are using AI should carefully review the Senate’s and the Biden administration’s guidance, as well as the other federal guidance regarding AI, to understand the government’s expectations regarding the use of AI as well as best practices to minimize the risk of employment claims related to the use of AI.

Employment Law Watch
Glimmer of Hope? Judge Suggests Some Claims in AI Image Case May Survive
https://www.lexblog.com/2024/05/29/glimmer-of-hope-judge-suggests-some-claims-in-ai-image-case-may-survive/ (Wed, 29 May 2024)

We are still waiting for a formal ruling on the Andersen v. Stability AI defendants’ second round of motions to dismiss, but so far it’s looking like most of the case may be allowed to proceed to discovery. The judge heard oral arguments on May 8, 2024 in this case involving image-generating AI software, a day after issuing a tentative ruling seeming to give the plaintiffs a chance to try to prove up at least some of their claims.

As a brief recap, a class of  visual artists are suing Stability AI, Runway AI, Midjourney, and DeviantArt, alleging that the defendants’ image-generating AI programs or related activity infringed the plaintiffs’  original works in violation of the Copyright Act.  (For additional background, see our most recent update on the case here).  Victory originally seemed unlikely for the plaintiffs after the judge tossed out most of their case in response to the defendants’ first round of motions to dismiss.   But the plaintiffs filed an amended complaint addressing what the judge said was a failure to allege “substantial similarity” by including side-by-side comparisons of their works and the AI programs’ allegedly similar outputs.  After we highlighted some examples here, our readers weighed in on whether they thought the examples were “substantially similar” or not.

Although the judge did not address substantial similarity in his tentative ruling, he wrote that plaintiffs’ allegations that certain defendants used the plaintiffs’ works to train their AI programs “suffices for direct infringement as to Stability, Runway, and Midjourney,” and that “plaintiffs have plausibly alleged facts to suggest that compressed copies...of their works are contained in the versions of the Stable Diffusion” AI program used by the defendants.  The judge stated that “[t]he facts regarding how the [AI] models operate, or are operated by defendants, should be tested” after discovery.

The judge chastised the plaintiffs a bit in his tentative ruling, stating that they should have sought permission before trying to add additional plaintiffs to the case.  But he then went on to say that he is inclined to give plaintiffs leave to file a new complaint to add the new plaintiffs with his permission. 

It wasn’t all bad news for the defendants, however.  The judge indicated that he would dismiss some of the plaintiffs’ claims, including those brought under the Digital Millennium Copyright Act (DMCA) and a claim for breach of contract or breach of the implied covenant of good faith and fair dealing against DeviantArt.

According to reports of the May 8 hearing, the defendants worked hard to convince the judge that he should dismiss all of the claims.  The reports suggest that the judge kept his views close to the vest and did not indicate which way he might ultimately rule after more than an hour of argument.

Ultimately, the judge’s tentative ruling is just that: tentative.  We’ll be watching closely to see what the judge’s final decision says.  We suspect other plaintiffs in similar suits will be watching closely as well, given that a final order reflecting the tentative ruling could be persuasive to other judges juggling similar claims.

Gadgets, Gigabytes & Goodwill Blog
Role of Generative AI in Contract Lifecycle Management
https://www.lexblog.com/2024/05/29/role-of-generative-ai-in-contract-lifecycle-management/ (Wed, 29 May 2024)

Contracts are one of the core lifelines of any business. They ensure clarity in business deals and safeguard the interests of all parties involved.

Contract lifecycle management demands greater efficiency and accuracy in studying various clauses and legal terms that are essential to ensuring streamlined business operations and risk mitigation.

According to World Commerce & Contracting research, poor contract management continues to cost companies 9% of their bottom line.

But with the digital landscape changing at an unprecedented pace as new Generative AI technologies enter the market, this futuristic technology is also changing the dynamics of the entire Contract Lifecycle Management (CLM) process.

Evolution of Generative AI

One of the core technologies powering Generative AI tools is the large language model (LLM), which leverages complex machine learning algorithms and techniques to identify and learn from patterns in the training content, which can include text, code, voice, videos, and images, to generate a new form of content.

From analyzing medical images and reports to uncover vital patterns that aid doctors in treating patients, to predicting optimized routes in logistics based on learning derived from empirical data, the applications of Generative AI are immense.

Generative AI tools with underlying LLMs can generate various types of content and hold immense potential to rapidly transform the legacy contract management landscape, from contract drafting to analyzing various complex clauses.

How is Generative AI redefining Contract Lifecycle Management (CLM)?

Leveraging the innate capability to generate contextually coherent legal content, Generative AI can be used across various stages of contract lifecycle management, including contract drafting, editing, review, analysis, negotiations, risk mitigation, and much more.

When integrated with standard playbooks and clause libraries through Retrieval-Augmented Generation (RAG), it can expedite the process and offer users a seamless, engaging experience across the entire CLM.
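
To make the retrieval step concrete, here is a minimal, hypothetical sketch of how a clause library could ground a generative model’s drafting prompt. The toy word-overlap scoring, the clause texts, and the function names are illustrative assumptions; a production system would use vector embeddings and an actual LLM call rather than simply returning the assembled prompt.

```python
# Hypothetical sketch: retrieve the most relevant clauses from a clause library
# and assemble a grounded prompt for a generative model. Retrieval here uses a
# toy word-overlap score; a real system would use vector embeddings.

def overlap_score(query: str, text: str) -> int:
    """Count shared lowercase words between the request and a clause."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def build_grounded_prompt(request: str, clause_library: dict[str, str], top_k: int = 2) -> str:
    """Pick the top_k most relevant standard clauses and embed them in the prompt."""
    ranked = sorted(clause_library.items(),
                    key=lambda item: overlap_score(request, item[1]),
                    reverse=True)[:top_k]
    context = "\n\n".join(f"[{name}]\n{text}" for name, text in ranked)
    return (
        "Using only the approved clauses below as source material, draft the "
        f"requested contract language.\n\nApproved clauses:\n{context}\n\n"
        f"Request: {request}"
    )

# Illustrative clause library and request (entirely made up).
library = {
    "Termination for Convenience": "Either party may terminate this agreement with 30 days written notice.",
    "Confidentiality": "Each party shall keep the other party's confidential information secret.",
    "Limitation of Liability": "Neither party's liability shall exceed the fees paid in the prior 12 months.",
}
print(build_grounded_prompt("Draft a termination clause allowing 30 days notice", library))
```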

Contract Drafting

Contract drafting is a tedious process handled by a legal expert. Generative AI models can train on huge volumes of legacy contract data, playbooks, and clause libraries to accurately identify data patterns and generate customized contracts based on specific client needs. With automated contract drafting, organizations can utilize their valuable legal resources in productive core legal operations.

Contract Negotiation

During the complex contract negotiation process, Generative AI proves beneficial by surfacing risky clauses that do not adhere to company standards. With this, the negotiators find it easy to point out the specific sections of the contract that require redlining, thus mitigating risks in contracts.

Trained Generative AI can analyze and learn from past negotiation outcomes and agreements and surface relevant clauses, giving negotiators intelligent insights that are helpful during negotiations.

Contract Renewal

Inefficient renewal strategies, caused by the lack of a mechanism to send automated alerts about contract renewal dates, result in missed opportunities to renegotiate contracts on favorable terms or to terminate certain contracts.

Having trained on legacy negotiation outcomes, playbooks, and huge volumes of contracts, Generative AI automates the renewal alerts and notifies the negotiators in advance to prepare for negotiations.

Contract Review and Summarization

Generative AI expedites the contract review process by quickly identifying the relevant clauses and areas of potential risks.

Trained AI models extract vital details and key attributes of contracts, such as dates, termination provisions, clauses, obligations, names, and much more.

Generative AI, through its capability to render human-like responses in an interactive environment, can provide instant summarization of lengthy contract documents that foster informed decision-making.

Risk Mitigation and Compliance Analysis

Failure to monitor the various industry standards and regulatory compliance requirements mentioned in signed contracts can lead to hefty financial losses. Generative AI, through its trained model, helps properly identify non-compliance conditions and flag potentially risky clauses.

Why should you be wary while using Generative AI?

Although the benefits offered by Generative AI may seem overarching, it also has its own set of limitations. Legal teams and organizations must exercise the utmost caution before moving ahead with this technology. Some of the risks posed include:

Data Privacy

Maintaining the integrity of client data is of prime importance. Legal teams must ensure that the Generative AI tool being used does not use contract-related data containing critical information for any training purpose.

Hallucinations and inaccurate results

Since Generative AI technology is in its nascent stage, it can produce inaccurate results. AI models tend to fabricate information or “hallucinate” and generate results on their own. Trained to recognize patterns in the data and predict the outcome, they often generate random content that can be false or inaccurate.

Lacks human expertise

Contract management is a crucial process and requires in-depth attention from a legal expert. Any inadvertent lapse in contract matters involving significant legal stakes can result in immense backlash and financial losses. Generative AI’s output can sometimes be inaccurate or unreliable and therefore cannot replace human expertise and judgement entirely. It can be considered a ‘legal assistant’ that aids legal experts in making decisions that weigh various legal factors.

Conclusion

This disruptive technology has the potential to revolutionize contract lifecycle management by bringing greater efficiency to CLM processes, providing better risk management, and improving workforce productivity by freeing up valuable legal resources to focus on core legal work.

Embracing a Retrieval-Augmented Generation (RAG)-based generative AI model integrated into a contract lifecycle management solution is a legal team’s most secure and practical option. With Knovos experts, explore the scope of AI adoption in your contract management workflow.

The post Role of Generative AI in Contract Lifecycle Management appeared first on Knovos.

Knovos Blog
DOL’s guidance on use of AI in hiring and employment
https://www.lexblog.com/2024/05/29/dols-guidance-on-use-of-ai-in-hiring-and-employment/ (Wed, 29 May 2024)

On April 24, 2024, the U.S. Department of Labor (DOL) issued guidance on how employers should navigate the use of Artificial Intelligence (AI) in hiring and employment practices. The DOL emphasized that eliminating humans from the processes entirely could result in violation of federal employment laws. Although the guidance was addressed to federal contractors and is not binding, all private employers stand to benefit from pursuing compliance with the evolving expectations concerning use of AI in employment practices.

The guidance was issued by the DOL’s Office of Federal Contract Compliance Programs (OFCCP) in compliance with President Biden’s October 30, 2023 Executive Order 14110, which required the DOL to issue guidance for federal contractors on “nondiscrimination in hiring involving AI and other technology-based hiring systems.”

The guidance was issued in two parts: (1) FAQs regarding the use of AI in the Equal Employment Opportunity (EEO) context, and (2) a list of “Promising Practices” that serve as examples of best practices for mitigating the risks involved with implementing AI in employment practices. In short, the FAQs communicate that established non-discrimination principles apply to the use of AI, and the “Promising Practices” provide specific instruction on how to avoid violations when using AI in employment practices.

FAQs regarding the use of AI in the EEO context

In the FAQs, the DOL defined AI as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments” and “automated systems” as those that “automate workflows and help people complete tasks or make decisions.” The DOL also explained how AI can be used in employment practices (i.e., hiring, performance evaluation, promotion, and termination), how non-discrimination regulations should apply to the use of AI in such employment practices, and how the DOL will investigate the use of AI in employment decisions. Key takeaways from these FAQs include:

  • The same EEO compliance obligations that apply to traditional employment decisions also extend to employment decisions made with the use of AI. Therefore, in using AI to make employment decisions, federal contractors must:
    • Take affirmative action to ensure that employees and applicants are treated without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran;
    • Maintain records and ensure confidentiality of records consistent with all OFCCP-enforced regulatory requirements;
    • Cooperate with the OFCCP by providing the necessary, requested information on their AI systems; and
    • Make reasonable accommodations for individuals with disabilities.
  • The OFCCP will investigate the use of AI during compliance evaluations and complaint investigations to determine whether a federal contractor is in compliance with its nondiscrimination obligations.
  • AI-based tools used to make employment decisions can be “selection procedures” under the Uniform Guidelines on Employee Selection Procedures (UGESP). Under the UGESP, federal contractors must:
    • Understand and clearly articulate the business needs that motivate the use of the AI system;
    • Analyze job-relatedness of the selection procedure;
    • Obtain results of any assessment of system bias, debiasing efforts, and/or any study of system fairness;
    • Conduct routine independent assessments for bias and/or inequitable results; and
    • Explore potentially less discriminatory alternative selection procedures.
  • A federal contractor is responsible for its use of third-party products and services. They cannot delegate their nondiscrimination or affirmative action obligations by using another entity.

The guidelines acknowledge that using AI in employment decisions poses the risk of compliance issues. For example, using AI to automate timekeeping systems and calculation of pay—without a human involved to ensure accuracy—could result in incorrect calculations as it would be difficult for AI to correctly calculate time off or compensation for meal breaks, breaks for pumping breast milk, out-of-office meetings, and other breaks. Similarly, using AI to monitor activities like keystrokes and related data to gauge employee productivity could mischaracterize compensable work as non-compensable work. Any of these oversights could result in violation of federal wage and leave laws like the Fair Labor Standards Act (FLSA), the Family and Medical Leave Act (FMLA), and the Providing Urgent Maternal Protections for Nursing Mothers Act (PUMP).

“Promising Practices”

The DOL’s “Promising Practices” provide recommendations to help federal contractors “avoid potential harm to workers and promote trustworthy development and use of AI.” In doing so, however, the DOL notes that: (1) the “Promising Practices” are not “expressly required,” (2) the “Promising Practices” are not an “exhaustive list,” and (3) the use of AI in the employment context is a rapidly advancing and quickly evolving area. The DOL’s guidance thus provides “Promising Practices” for various stages of implementing AI into workplace decisions.

The “Promising Practices” list includes recommendations that federal contractors should:

  • Provide advance notice of using AI.
  • Communicate transparently with employees, applicants, and representatives to ensure that all are adequately informed of the relevant policies and procedures.
  • Monitor use of AI in making employment decisions and keep track of the resulting data in order to standardize the system(s), provide effective training, and create internal governance structures with clear case standards and monitoring requirements.
  • Verify the AI system and vet its vendor (if using a vendor-created system). Contractors should understand the specifics of the system (data, reliability, safety, etc.) and the terms of the vendor contract, if applicable.
  • Conduct tests of the AI system to ensure that it is working properly and not circumventing any required compensation calculations, disability accommodations, or other legal protections.  

Employers using AI should carefully review the DOL’s guidance, along with the other federal guidance on AI, to understand the government’s expectations and the best practices that can minimize the risk of AI-related employment claims.

]]>
Employment Law Watch
Artificial Incompetence: How Generative AI Creates Latent Intellectual Property Issues https://www.lexblog.com/2024/05/29/artificial-incompetence-how-generative-ai-creates-latent-intellectual-property-issues/ Wed, 29 May 2024 08:13:27 +0000 https://www.lexblog.com/2024/05/29/artificial-incompetence-how-generative-ai-creates-latent-intellectual-property-issues/ Previously Published in The Journal of Robotics, Artificial Intelligence & Law, Volume 7, No. 3 | May-June 2024

In this article, the authors examine the extensive legal risks that companies take when using generative artificial intelligence (GenAI), particularly within operations that create intellectual property or other intangible value represented within a business.

While any groundbreaking technology can offer substantial societal benefits and drawbacks, recent advances in GenAI have raised particular concerns. Experts and pioneers in the AI field, including the “godfather of AI,” have warned that this technology poses a profound risk to humanity. They have even called for a six-month pause on developing new GenAI models. In the realm of intellectual property creation and protection, a longer hiatus might be even more prudent.

GenAI programs are algorithmic models that learn from vast amounts of data to generate new text, audio, video, simulations, or software code in response to human prompts. These models can produce content that often appears indistinguishable from human-generated work, creating college-level essays or stunning art in seconds. However, some generated content has proven to be incorrect or severely biased. Since GenAI models are limited by the data they access, any biases in the data can be amplified in their output. While GenAI can simplify repetitive and complex tasks, the lack of an established legal framework and the technology’s inconsistent nature mean companies must tread carefully. Missteps can lead to significant legal exposure, particularly regarding intellectual property (IP) and corporate transactions like mergers and acquisitions (M&A).

GenAI has the potential to be a valuable tool for businesses and the lawyers who advise them. However, its use in creating IP assets or legal documents carries substantial risks and could lead to costly litigation.

To read the full article, please click here.

Related Practice Areas:

Intellectual Property | Artificial Intelligence

Authors:

K. Lance Anderson, Member, Austin
Benton B. Bodamer, Member, Columbus
Jordan E. Garsson, Associate, Austin
Andrew M. Robie, Summer Associate, Columbus

The post Artificial Incompetence: How Generative AI Creates Latent Intellectual Property Issues appeared first on IP Blog.

]]>
IP Brief
vLex Integrates Vincent AI with iManage and Automates Docket Ingestion with Docket Alarm https://www.lexblog.com/2024/05/29/vlex-integrates-vincent-ai-with-imanage-and-automates-docket-ingestion-with-docket-alarm/ Wed, 29 May 2024 12:52:47 +0000 https://www.lexblog.com/2024/05/29/vlex-integrates-vincent-ai-with-imanage-and-automates-docket-ingestion-with-docket-alarm/ In this special episode of The Geek in Review, host Greg Lambert sits down with Ed Walters, Chief Strategy Officer at vLex, to discuss two significant announcements: the integration of vLex’s Vincent AI with iManage Work and the automated docket ingestion feature with iManage using vLex’s Docket Alarm.

The integration between Vincent AI and iManage’s Insight Plus collection allows law firms to leverage their internal knowledge assets alongside vLex’s extensive public law database. This combination of the “two halves of the legal brain” enables lawyers to create brilliant first drafts and analyze documents using the power of generative AI. Walters emphasizes the importance of data quality and the role of knowledge management teams in curating the best practice documents for training AI models.

Security is a top priority for both vLex and iManage in this integration. Walters details the various measures taken to ensure data protection, including encryption, dedicated master keys for each firm, and compliance with industry standards such as ISO 27001 and SOC 2. He also clarifies that vLex uses retrieval-augmented generation, securely passing relevant documents to a closed instance of the foundation model without training on the data itself.
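As context for that last point, here is a minimal, runnable sketch of retrieval-augmented generation. The tiny in-memory document list and the call_model placeholder are hypothetical stand-ins, not vLex’s, iManage’s, or any vendor’s actual API; the point is only that the model is handed relevant excerpts at query time and is never trained on them.

```python
# Minimal retrieval-augmented generation sketch (illustrative only). The
# in-memory "index" and call_model placeholder are hypothetical; they are not
# any vendor's real API.

# A stand-in for a curated document collection (e.g., a firm's best exemplars).
DOCUMENTS = [
    "Form of indemnification clause approved for software licensing deals.",
    "Checklist for reviewing change-of-control provisions in credit agreements.",
    "Memo on choice-of-law language for cross-border services contracts.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    terms = set(question.lower().split())
    scored = sorted(DOCUMENTS, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:k]

def call_model(prompt: str) -> str:
    """Placeholder for a securely hosted model; the prompt is all it ever sees."""
    return f"[model response grounded in the supplied excerpts]\n{prompt[:80]}..."

def answer(question: str) -> str:
    # Retrieved excerpts are passed as context; nothing is used for training.
    context = "\n\n".join(retrieve(question))
    prompt = (
        "Using only the documents below, answer the question.\n\n"
        f"Documents:\n{context}\n\nQuestion: {question}"
    )
    return call_model(prompt)

print(answer("What indemnification language do we use for software licenses?"))
```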

The second announcement focuses on the automated docket ingestion feature, which seamlessly saves court filings from Docket Alarm into the correct iManage folders. This practical solution eliminates the manual process of saving documents and ensures that all team members have access to the most up-to-date versions of the filings.
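For readers who want a feel for the workflow being described, here is an illustrative sketch of that kind of automation: a new filing arrives, is matched to a client/matter folder, and is saved exactly once. Every function below (fetch_new_filings, resolve_matter, save_to_dms) is a hypothetical placeholder, not the actual Docket Alarm or iManage API.

```python
# Illustrative docket-ingestion sketch with hypothetical placeholder functions.
import hashlib

def fetch_new_filings() -> list[dict]:
    """Pretend feed of new court filings (docket number, title, PDF bytes)."""
    return [{"docket": "1:24-cv-01234", "title": "Motion to Dismiss", "pdf": b"%PDF..."}]

def resolve_matter(docket: str) -> str:
    """Map a docket number to an internal client/matter folder."""
    return {"1:24-cv-01234": "clients/acme/matter-0045/filings"}.get(docket, "unfiled/review")

SAVED: set[str] = set()  # stand-in for the document store's duplicate check

def save_to_dms(folder: str, filing: dict) -> bool:
    """Save once per unique document; skip if an identical copy already exists."""
    fingerprint = hashlib.sha256(filing["pdf"]).hexdigest()
    key = f"{folder}/{fingerprint}"
    if key in SAVED:
        return False  # already filed, nothing to do
    SAVED.add(key)
    return True

for filing in fetch_new_filings():
    folder = resolve_matter(filing["docket"])
    if save_to_dms(folder, filing):
        print(f"Filed '{filing['title']}' under {folder}")
```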

Looking ahead, Walters hints at future integration points between vLex and iManage, emphasizing the potential for generative AI to help law firms differentiate their services and meet client expectations. He sees Vincent AI as a secure bridge between generative AI and a firm’s internal work product, enabling them to leverage their knowledge assets without the need for expensive, in-house foundation models.

 

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube

https://youtu.be/uDHXq8UT1UU

Contact Us: Twitter: @gebauerm or @glambert

Email: geekinreviewpodcast@gmail.com

Music: Jerry David DeCicca

 Transcript

Greg Lambert 0:07
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I’m Greg Lambert with a special episode where I am talking with Ed Walters, the Chief Strategy Officer at vLex. Ed, thanks for jumping on the podcast to talk with me about a couple of new announcements that you guys have at vLex, which you’re releasing this morning.

Ed Walters 0:32
Yeah, thanks for having me.

Greg Lambert 0:34
All right. So let me list off the two things that were in the press release, in order. First, you are announcing that there’s an integration between vLex’s Vincent AI and iManage Work, and I think this, between you and me and a few listeners, is probably the bigger of the two. But you know, this is just huge news, I think, for the legal profession, because this has been something that we’ve talked about almost as soon as we heard what Gen AI can do. It’s been: okay, great, when can I start getting use out of my data? And then second, you are announcing that there’s an automated docket ingestion feature, which will use vLex Docket Alarm to seamlessly load court filings into iManage, into the correct folders for those matters. So do you mind if we back up to the first one? Give us a little bit more information on these integrations and what they mean for lawyers and legal teams.

Ed Walters 1:49
Sure. So at the top level, let me just say that iManage is a company that we on the vLex team have respected for a long time. You know, we’ve seen the kind of market penetration they have and their emphasis on security. They’re one of the most trusted names in legal tech. And so this is an integration that we’ve been excited about for a long time. And I think the aspiration here is that there are a lot of potential places where you can sync up between vLex, and vLex products like Docket Alarm, and iManage. So we’re announcing two of them at iManage’s user conference, ConnectLive, which is happening Wednesday and Thursday this week, the 29th and 30th of May. And we will announce the full integration from the stage on Thursday, tomorrow, but we’re giving you a sneak preview today. So let me talk about the first one, the integration between iManage and Vincent AI, the AI tool from vLex, recently named the 2024 New Product of the Year by the American Association of Law Libraries; we’re over the moon about that. So the way this is going to work is that it is going to be an integration between iManage’s Insight Plus collection for a firm and that firm’s vLex Vincent AI subscription. If you’re subscribed to both products and you have put that kind of curated superset of your best documents into Insight Plus, then when you’re doing research, or when you are creating workflows, or first drafts, or analyzing documents inside of the Vincent AI system, you can use that whole vLex knowledge base of public law, but you can also pull in documents from the Insight Plus collection, from your private collection. And let me just say for a second why I’m so geeked about this on The Geek in Review. I really think that the legal brain kind of has two halves. There’s a public law half of the legal brain that has judicial opinions and statutes and regulations and court rules, law review articles and things like that. And then the other lobe is the internal knowledge assets of the firm. And in some ways, these are the most important assets that law firms have on their books. This is the playbooks. This is the unique way of doing things. If you train your lawyers, this is what you’re hoping that they’re going to learn. And I think for generative AI, we’ve had solutions that look at internal law firm playbooks, and we’ve had solutions that look at the other lobe of the brain, the public law part of it. But I think this will be the first time you can look at both at the same time, in the same interface, with the same generative AI tool. And in the same way that we were pleasantly surprised at what happened when you run generative AI on the public law database, I think we’re going to be equally amazed at what happens when you combine the world’s largest public law intelligence platform in vLex with, I think, the largest collection of internal firm documents in iManage.

Greg Lambert 5:33
Yeah. Just a kind of technical question on iManage before we move on. Is this the cloud version of iManage? Would the firm need to be on that version in order to get access to these features?

Ed Walters 5:52
Yes, so you need to be on the Insight Plus platform inside of iManage. And that is not, like, the entire database from the firm. This is a curated set of documents where the law firm says, I want to put this in the Insight Plus collection.

Greg Lambert 6:11
And typically, this would be like a knowledge management team that has worked with the individual practice areas. I mean, to dumb it down a little bit so even I could understand it, basically you want your best practice documents in there, kind of like the old days where, you know, we started with a redweld that had, you know, the best contract, and then we moved that online with our KM team. And now this is just kind of that next version of that bigger, better class of documents. Right?

Ed Walters 6:44
I think that’s right, I think that’s a good way of understanding it. And, you know, when you’re looking at AI, and we talked about this before, retrieval-augmented generation, all of the results coming out of generative AI are based on the data, on the quality of the data. And I don’t think that you would want generative AI training on everything, you know, the menu for the summer associate lunch, the email where you, you know, RSVP to the party or something, the 17 drafts of the pleading before it comes out. You want the superset, that redweld version of the best documents. And those are what we should be using to train generative AI, which is why the Insight Plus collections can be so powerful, combined with that kind of superset of public law. And notice that this is not just American law. I mean, we’re in the US, so we sort of think of it from that perspective, but Vincent AI already works in the UK and Ireland, in Spain, in the European Union, Mexico, Colombia, Chile, Singapore, New Zealand, and we’re rolling out more, seems like, every month. And so global law firms could take not just US-centered practice but practice around the world, in the European Union with our superset there, and really produce brilliant first drafts, not just for the US, but anywhere in the

Greg Lambert 8:14
world. Excellent. Well, I know one of the biggest barriers since day one, since November of 2022 when we started getting our hands on ChatGPT, has been security concerns and protecting client data. Now, I know iManage, I mean, this is their bread and butter, being able to not just protect the data but also to isolate the data so that things like ethical walls can be set up, making sure that honest people stay honest and aren’t able to get to things that they don’t want to or don’t need to get to. So, and I know we talked about this before we went on, you kind of have a script because you want to make sure that you cover this correctly. How did you guys look at protecting it, so that when you go to my IT security ops team, they’re comfortable rolling out a product like this?

Ed Walters 9:18
Yeah, I think you’re right, that is the most essential piece. And you better believe that the iManage team, for whom security is their bread and butter, marched us through the paces to make sure that everything we do together maintains that security. So when the vLex team and the Fastcase team worked together with iManage to build this out, we wanted to, you know, overperform. We wanted to really shoot the lights out on security; we want to exceed the best industry standards. And so, you know, I did make a checklist, I want to make sure that I’m hitting all the points. So first, we start with encryption. All customer data, all conversation logs, all text from uploaded files, and metadata is encrypted in transit and at rest. We want to make sure that it’s encrypted through the entire process, like the best security systems. The data is stored using what’s called a FIPS 140-2 compliant algorithm suite, one of the most secure you can have, with hardware security modules. And each enterprise customer, each law firm, has their own dedicated encryption master key. So it’s not that we’re using one key for everybody, right? You know, Jackson Walker has one, another firm has another; you have your own dedicated master key. And then for global firms, you have to have some control over those keys. So firms can choose the regional residency of their master keys and host them in their own self-managed Amazon Web Services account. If you’re in Europe and you need that key to stay in Europe, you can do that. If you’re in the US and want to keep it in the US, you can do that as well. There’s a lot to say about it, but I think it’s safe to say that for every security certification you can get, we’re trying to go above and beyond. For the security nerds, on the ISO 27001 audit we’ve got a draft certification, and the full certification is just kind of a matter of timing now. We’ve finished the SOC 2 audit, and we’re in the three-month SOC 2 surveillance period. But maybe the last thing to say is just that, you know, unlike OpenAI or, you know, Google with Gemini, we’re not training our own foundation model. Everything we’re doing here is retrieval-augmented generation. It’s a little bit like a search engine. In the retrieval part, we are pulling the relevant documents through a secure channel and storing the data securely, and then we’re passing those documents securely to a dedicated closed instance of the foundation model. In our case, we’re using GPT-4 Turbo right now, but we’re paying through the nose for the super secure version of it. So nothing ever goes to train that model, and we’re not training with the data. When that transaction is done, when the prompt gets passed and we get the result, all that data is only accessible to the firm, not to us, not to OpenAI, not to Google or anyone else. Okay.

Greg Lambert 12:55
All right. So now you’ve convinced my security ops team that it’s okay to do this. Now comes the really hard test: what would you say is kind of the biggest value of being able to do this? And what are you doing to make sure that there’s not any kind of potential bias going on, and that the answers coming back are, again, grounded in the documents that we’ve isolated as our best practice documents? Yeah, how do you convince the lawyers now to use it?

Ed Walters 13:39
Well, I think, you know, the first way you’d have to do that is it just has to work. Like, when you try to draft things with it, when you evaluate documents with it, when you try to figure out what comes next or what’s missing in a contract, it has to just flat out work. And this is a place where I think lawyers have been very favorably impressed. It’s why, you know, I think there were more nominations this year than ever for the American Association of Law Libraries new product award; like, everybody had an AI product this year competing for that award. But among all of them, and I always say law librarians are the most discriminating, the most sophisticated consumers of these things, among all of them, they picked Vincent AI. So I would just say, you know, the proof is in the pudding, or the proof of the pudding is in the eating, maybe. And, you know, I think people have to be delighted by it, and it has to just flat out work. And I think that’s been our experience. We just did a couple of different law school head-to-head robot wars among the various generative AI tools, and I’ll just say, you know, Vincent AI did very well in terms of, you know, the kind of bias and the usefulness of it. I think, you know, we test the outputs as most software developers do; I think we’ve benchmarked them, and I think we’ve been pretty favorably impressed with them in the testing. But I mean, American law, and in some cases the law of other countries, can also reflect bias, and I want to be very careful about this. I think you and others have called out software manufacturers for overstating claims about generative AI. I don’t want to do that here. I think generative AI is amazing, but in some sense these transformers, the T in GPT, are really kind of predicting what the next word is going to be, and not all of the law in world history is unbiased. And so I think this is a problem we’ll face in the profession. I think it does very well at all these things, but I’ll never say it is completely unbiased. Maybe the last thing to say about it is just the usefulness of it. I think there are a lot of things that lawyers and legal information professionals do that they can’t get through, that is just drudge work. I think about the document review that we did before ediscovery came out, right? Is it useful to have technology-assisted review for ediscovery? As one of the lawyers who did paper discovery back in the day, anybody who’s had to do that will find it extremely useful if we can automate and mechanize that drudge work. And I will also say, you know, I’m not Panglossian about it, and I’m not going to say this is the solution that does everything. I always imagine legal work like a chain, right? And we can replace certain links in that chain, but the last links in that chain have to be human judgments. You have to have discriminating information professionals, when you do retrieval-augmented generation, who can look at what’s going into making the answer and say, include these, not this one, I don’t think this one’s useful. That’s a place where I think Vincent really stands out: users can go through, audit the list, and say, I don’t want to include these two, but I want the rest of them.
And I think it is hopefully more incumbent than ever that legal information professionals, law librarians, paralegals, researchers, and lawyers are very, very careful, that they’re more discriminating than ever about those sources. You need to make sure that all of the cases exist in the world, that they’re not hallucinated; you need to verify them; you need to check to make sure that you’re using the right work product from inside of iManage. And so in some ways we are automating the worst parts of the work, like we did with ediscovery. But it does require that we are more sophisticated, in some ways more critical and discriminating, and these are skills that will be important for the next 20 years.

Greg Lambert 18:36
And I’m gonna hit you with a question that isn’t on the list, and hopefully it’s not outside the scope here. But during the testing of this, was there anything that kind of stood out or surprised you, that worked better than you thought, when you were integrating these two products?

Ed Walters 19:00
Yeah, I don’t have a good answer for the iManage front, but I have a great answer from a demo that I did with Craig Newton, who is the director of Cornell’s Legal Information Institute. We were trying to make videos and to make it screw up; we were riffing a little bit, we had the mud tires on. And so we were asking questions that there weren’t answers to, to see if we could make Vincent, like, make something up. And so we were asking questions about liability when unoccupied autonomous cars crash. And there’s not really any answer to that question, right? And Craig knew it, and he was trying to make it hallucinate or create cases from the future or something.

Greg Lambert 19:51
Which can happen, I hear. And happens.

Ed Walters 19:56
Didn’t happen. So what happened was, Vincent AI came back and said, I don’t have any cases dealing with autonomous vehicles without occupants, but you might find interesting these analogous cases from the 1930s, when elevators stopped having operators and started running by themselves. The elevators, you used to have a person who was in there, like, pulling the alarm and stopping on the fourth floor. Yeah. And at some point we created self-driving elevators, and guess what, some of them had accidents. And then the question is, you know, is the elevator manufacturer liable? Is the building owner, the elevator owner, liable? Is the person who gets in the self-driving elevator, you know, responsible, because they knew what they were getting into? I thought that was so interesting, like the idea that a generative AI tool could maybe effectively reason by analogy. And when I thought about it, it’s, you know, it’s not really looking at keywords like we used to; it’s, like, mapping concepts, you know, in the database, and the concepts that are similar are really close together, right? And the concept of self-driving accidents shared a little bit of conceptual DNA with elevators. I think every call

Greg Lambert 21:20
makes sense when you stand back and think about it. So alright, well, let’s jump over to the Docket Alarm integration with iManage as well. So, you know, talk to us a little bit more about how this is set up, and what the use cases are for firms to kind of automate this process of moving docket information directly into iManage. Sure,

Ed Walters 21:50
well, I think people know Docket Alarm. The idea is it will track litigation for firms, in addition to being a great research library and analytics library for searching precedents and things like that. The specific use case here is when something new happens in the case: you get an answer to a complaint, there’s a motion filed in the case. The worst way to do it is to get an email from ECF and PACER. You click it, and nine people get it; the first person gets the first free look, and everyone else sends an email around to everyone else in the firm to try and figure out who got it, who’s got the document.

Greg Lambert 22:36
Or they just log in and pay for it, right? With the firm’s credit card? Yes.

Ed Walters 22:40
I mean, there are a lot of people who just pay the 10 cents per page, right? So Docket Alarm kind of manages that process, but only to a point. So everyone gets an email with the documents attached to it, you don’t have to worry about managing the first free look, and then it’s stored in Docket Alarm. The problem is that, you know, most iManage users, and there are a lot of them, don’t want the Docket Alarm software to be the repo for those documents. And so they’re manually saving them one by one into the right client and matter in iManage. And, you know, I think it’s probably 75% of the biggest 100 firms in the US that use both Docket Alarm and iManage. And, you know, seven, eight, ten lawyers at a firm, docketing clerks, information professionals, law librarians are getting these emails, and nobody knows whether someone saved it into iManage.

Greg Lambert 23:45
Did I save it into the right folder? Yeah,

Ed Walters 23:48
you did. And then you have this happen scores of times a day. And so this is, I don’t think this is magical, but it’s extremely practical and useful. If you are one of the firms that’s using both iManage and Docket Alarm, when that happens, you’ll still get the email, it will still be saved in Docket Alarm, but it will also be saved to the client and the matter inside of iManage, in exactly the right place. It will resolve conflicts; if there’s an update to that document, it’s actually synced, and if someone has the wrong version of it, it’ll be replaced by the right version of it, all securely, and nobody has to think about it. Right? So I don’t know, I think people tend to push, like, these kinds of magical AI solutions. I think this is just a part of the integration that’s very practical and very useful.

Greg Lambert 24:42
Is this, again, the cloud version of iManage that you need to have, or does this work with iManage Work? Okay, so Work, just to make sure I’m clear, that’s the cloud version of iManage, right? So

Ed Walters 25:04
this will save right into wherever you have your iManage; it is in the cloud. Okay. Wherever you have your clients and matters for those filings, it will save directly to them.

Greg Lambert 25:16
Yeah. All right. Well, I know that iManage has their big conference today and tomorrow, and you’re giving us a nice, early peek into this. Let’s look down the road. While you’re there at the conference, are you looking at building additional tools and relationships with iManage? Do you think you can kind of get ideas of what may come down the road?

Ed Walters 25:45
A lot of ideas, yeah. So I think that there are a lot of useful integration points. I’m excited to launch these two at ConnectLive, the New York user event for iManage, but I think these will be the first of many announcements with these integration points. And again, it’s about the liberation of these crown jewels. There have been firms that have, you know, created their own large language models, that invested to create a foundation model inside the firm. You know, hugely expensive, that’s a really difficult thing to do. And then, you know, whenever there’s new stuff, once a year, once a quarter, once a day, you have to create, like, an updated foundation model or something. And I can see why they might do it, right, because these knowledge assets are really important; they’re differentiators for firms. So I hope this is kind of a bridge: instead of having to create your own $10 million foundation model, Vincent AI is a bridge between generative AI and your internal work product, your secret sauce, a secure bridge, a SOC 2 compliant bridge, but a bridge that helps law firms really use and leverage those knowledge assets, using generative AI to differentiate their services at a time when a lot of clients are saying, I want to know what your generative AI strategy is going to be.

Greg Lambert 27:21
Yeah, this might be one of those things where you may have lucked out if you waited a little bit to

Ed Walters 27:33
some timers who did your homework with the epic all nighter? We should have been great. I certainly was.

Greg Lambert 27:42
All college students did that. All right, well, Ed Walters, Chief Strategy Officer at vLex, I want to thank you very much for coming in and taking the time to talk with us about the two new announcements that you have between vLex and iManage. So thanks. Thanks for having me, Greg. All right. And of course, thanks to all the listeners for listening to The Geek in Review. If you like what you hear, please share it with a colleague. We’d love to hear from you, so reach out to us online. LinkedIn is probably the best place to get a hold of us. Ed, if somebody wants to learn more, where should they go?

Ed Walters 28:25
LinkedIn, forward slash in, forward slash Walters. I think I still have a, you know, burner Twitter account on X, @EJWalters, but I’m with you. I think LinkedIn is where it’s at these days.

Greg Lambert 28:40
I think so for the legal profession. That seems to be where we’re housed right now. And, of course, the music that you hear is from Jerry David DeCicca. So thank you, Jerry. And thanks again, Ed.

Ed Walters 28:54
Thanks, Greg.

 

]]>
3 Geeks and a Law Blog
vLex Integrates Vincent AI with iManage and Automates Docket Ingestion with Docket Alarm https://www.lexblog.com/2024/05/29/vlex-integrates-vincent-ai-with-imanage-and-automates-docket-ingestion-with-docket-alarm-2/ Wed, 29 May 2024 12:52:47 +0000 https://www.lexblog.com/2024/05/29/vlex-integrates-vincent-ai-with-imanage-and-automates-docket-ingestion-with-docket-alarm-2/ In this special episode of The Geek in Review, host Greg Lambert sits down with Ed Walters, Chief Strategy Officer at vLex, to discuss two significant announcements: the integration of vLex’s Vincent AI with iManage Work and the automated docket ingestion feature with iManage using vLex’s Docket Alarm.

The integration between Vincent AI and iManage’s Insight Plus collection allows law firms to leverage their internal knowledge assets alongside vLex’s extensive public law database. This combination of the “two halves of the legal brain” enables lawyers to create brilliant first drafts and analyze documents using the power of generative AI. Walters emphasizes the importance of data quality and the role of knowledge management teams in curating the best practice documents for training AI models.

Security is a top priority for both vLex and iManage in this integration. Walters details the various measures taken to ensure data protection, including encryption, dedicated master keys for each firm, and compliance with industry standards such as ISO 27001 and SOC 2. He also clarifies that vLex uses retrieval-augmented generation, securely passing relevant documents to a closed instance of the foundation model without training on the data itself.

The second announcement focuses on the automated docket ingestion feature, which seamlessly saves court filings from Docket Alarm into the correct iManage folders. This practical solution eliminates the manual process of saving documents and ensures that all team members have access to the most up-to-date versions of the filings.

Looking ahead, Walters hints at future integration points between vLex and iManage, emphasizing the potential for generative AI to help law firms differentiate their services and meet client expectations. He sees Vincent AI as a secure bridge between generative AI and a firm’s internal work product, enabling them to leverage their knowledge assets without the need for expensive, in-house foundation models.

 

Listen on mobile platforms:  ⁠⁠⁠⁠⁠⁠Apple Podcasts⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠Spotify⁠⁠⁠⁠⁠⁠ | ⁠⁠⁠⁠⁠YouTube⁠⁠⁠⁠

https://youtu.be/uDHXq8UT1UU

Contact Us:  Twitter: ⁠⁠⁠⁠⁠@gebauerm⁠⁠⁠⁠⁠, or ⁠⁠⁠⁠⁠@glambert⁠⁠⁠⁠⁠

Email: geekinreviewpodcast@gmail.com

Music: ⁠⁠⁠⁠⁠⁠⁠⁠⁠Jerry David DeCicca⁠⁠⁠⁠⁠

 Transcript

Greg Lambert 0:07
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I’m Greg Lambert with a special episode where I am talking with Ed Walters, the street Chief Strategy Officer at vLex. Ed, thanks for jumping on the podcast to talk with me for a couple of new announcements that you guys have at vLex, where you’re releasing it this morning.

Ed Walters 0:32
Yeah, thanks for having me.

Greg Lambert 0:34
All right. So let me let me list off the two things that were in the in the press release, in order. First, you are announcing that there’s an integration with vLex vents in AI? And with iManage work, and I think this is probably between you, you and me and a few listeners is probably the bigger of the two. But you know, this is just huge news, I think for the legal profession, because this has been something that we’ve talked about almost as soon as we heard what Gen AI can do. It’s been Okay, great. When can I start putting, you know, getting use out of my data. And then second, there, you are announcing that there’s an automated docket ingestion feature, which will use vLex Docket Alarm to seamlessly load court filings into iManage into the correct folders for those matters. So do you mind just let’s back up to the first one and talk about give us a little bit more information on these integrations and what they mean for lawyers and legal teams?

Ed Walters 1:49
Sure. So at the top level, let me just say that I manage a company that we have respected for a long time, the vLex team, you know, sort of seen the kind of market penetration that they have their emphasis on security. They’re like one of the most trusted names in legal tech. And so this is an integration that we’ve been excited about for a long time. And the I think the aspiration here is that there’s a lot of potential places where you can sync up between vLex and vLex products like docket alarm, and I manage. So we’re announcing two of them at our managers User Conference connect live, which is happening Wednesday and Thursday, this week, the 29th and 30th. of May. And we will announce from the stage on Thursday, tomorrow, the full integration, but we’re giving you a sneak preview today. However, let me let me talk about the first one that the integration between I manage and Vincent AI and the AI tool from VX, recently named the 2024, New Product of the Year by the American Association of law libraries, we’re over the moon about that. So the way this is going to work is that it is going to be an integration between Ion manages INSIGHT Plus collection for a firm and that firms vLex Vincent AI subscription if you’re subscribed to both products, and you have put that kind of curated, set superset of your best documents, into Insight Plus, when you’re doing research, or when you are creating workflows, or first drafts, or analyzing documents inside of the Vincent AI system, you can use that whole V lacks knowledge base of public law. But you can also pull in documents from the INSIGHT Plus collection from your private collection. And let me just say for a second why I’m so geeked about this on gated review, I really think that the legal brain kind of has two halves. There’s a Public Law half of the legal brain that has judicial opinions and statutes and regulations and court rules, law review articles and things like that. And then the other lobe is the internal knowledge assets of the firm. And in some ways, these are the most important assets that law firms have on their books. This is the playbooks. This is the unique way of doing things. This is what if you if you train your lawyers, this is what you’re hoping that they’re going to learn. In a think for gender of AI. We’ve had solutions, look at internal law firm playbooks. We’ve had solutions that look at the kind of right move Have the brain that public law part of it. But I think this will be the first time you can look at both at the same time, in the same interface for the same gender to the AI tool. And I think in the same ways that we were pleasantly surprised at what happened, when you run generative AI on the public wall database, I think we’re going to be equally amazed at what happens when you combine the world’s largest public law, intelligence platform and relax. And I think that the largest collection of internal firm documents with IMEC.

Greg Lambert 5:33
Yeah, I imagined is now just a kind of a technical question before we move on. Is this the cloud version of iManage? Would the firm need to be on that version in order to get access to these features? Yes, so

Ed Walters 5:52
you need to be on the INSIGHT Plus platform inside of I’m in it. And that is not like the entire database from the firm. This is a curated set of documents that the law firm say, I want to put this in the INSIGHT Plus collection.

Greg Lambert 6:11
And typically, this would be like a knowledge management team that has worked with the individual practice areas to I mean, to some to dumb it down a little bit. So even I could understand it, or it basically your you want your best practice documents that are in there, kind of like the old days where, you know, we started with a with a red well, that had the bit, you know, the best contract, then we moved that online with our KVM. Team. And now this is just kind of that next version of that bigger, bigger, better class of documents. Right?

Ed Walters 6:44
I think that’s right, I think that’s a good way of understanding it. And, you know, I think when you’re when you’re looking at AI, and we talked about this before retrieval, augmented generation, all of the results coming out of generative AI are based on the data on the quality of the data on the quality of the data. And I don’t think that you would want generative AI training on everything, you know, the menu for the summer, associate lunch, the email where you are, you know, RSVP to the party or something. The 17 drafts of the pleading before it comes out, you want the superset that read well, the version of the best documents. And those are what we should be using to train generative AI, which is why the INSIGHT Plus collections can be so powerful, combined with that kind of superset of public law. And if you notice that, like this is not just American law, I mean, we’re in the US. So we sort of think that from that perspective, but Vincent AI already works in the UK and Ireland, in Spain, and the European Union, Mexico, Colombia, Chile, Singapore, New Zealand, we’re rolling out war seems like every month, and so global law firms could take like, you know, not just US Senator practice, but practice around the world in the European Union with our super set there, and really produce like, brilliant first drafts, not just for the US, but anywhere in the

Greg Lambert 8:14
world. Excellent. Well, I know one of the biggest barriers since day one since November of 2022, when we started getting hands on ChatGPT. Three, it has been security concerns and protecting client data. Now I know I manage I mean, this is their bread and butter, of being able to to not just protect the data, but also to isolate the data so that things like ethical walls can be set up in in making sure that people keeping honest people honest and not being able to get to things that that they don’t want to or don’t need to get to. So how and I know you’ve we talked about this for we went on, you kind of have a script that you want to make sure that you cover this correctly. So how did you guys look at protecting it so that when you go to mine, IT security ops team, that they’re comfortable in rolling out a product like this?

Ed Walters 9:18
Yeah, I think that I think you’re right, that is the most essential piece. And you better believe that the iManage team for whom security is their bread and butter marched us through the paces to make sure that everything we do together maintains that security. So I think when the when the VFX team and the Fastcase team worked together with I managed to build this out. We wanted to, you know, overperform we wanted to really shoot the lights out on security, we want to exceed the best industry standards. And so, you know, I did make a checklist. I want to make sure that I’m hitting all the points. So first, we start with encryption. We got all of the information In all customer data or conversation logs, all text from uploaded files, and metadata is encrypted in transit. And at rest, we want to make sure that it’s encrypted through the entire process, like the best security systems are the data these stored using, it’s called a FIPS, 142 compliant algorithmic suite. But it’s like one of the most secure data suites that you can have. It has hardware security modules, and each enterprise customer, like each law firm has their own dedicated encryption master key. So if not, we’re using one key for everybody, right? You know, Jackson Walker has one scan as another, you have your own dedicated master key. And then for global firms, like you have to have some control over those keys. So firms can choose the region residency of a master keys and host them in their own self managed Amazon Web Services account. If you’re in Europe, and you need that key to stay in Europe, you can do that. If you’re in the US and want to keep it in the US soon. You can do that as well. There’s a lot to say about it. But I think it’s safe to say that every security certification that you can get, we’re trying to go load up beyond it was finished. For the security nerds ISO [27001]. Audit, we’ve got a draft certification. And the full certification is just a kind of a matter of timing. Now, we’re, we’ve finished the sock to audit and we’re in the three month sock to surveillance period. But I think maybe the last thing to say is just that, you know, we are not unlike open AI, or, you know, Google with Gemini, we’re not training our own foundation law. So everything we’re doing here is provable amended generation. It’s a little bit like a search engine. You know, if a retrieval party, we are pulling the relevant documents through a secure channel, and storing the data securely, but with those documents, we’re then passing them securely with the dedicated closed instance of the foundation model. In our case, we’re using GPT-4 Turbo right now, but we’re paying through the nose for the super secure version of it. So nothing ever goes to train that model. And we’re not treated with the data. When that transaction is done, when the prompt gets passed, we get the result. All that data is securely only accessible to the firm, not to us, not open AI, not to Google or anyone else. Okay.

Greg Lambert 12:55
All right. So now you’ve convinced my security ops team that it’s okay to do this. Now comes the really hard test of how what what would you say is kind of the biggest value of being able to do this? And what are you doing to make sure that there’s not any kind of potential bias that that’s going on? Or that you’re making sure that answers that are coming back? Or, again, grounded to the documents that, that we’ve isolated as our best practice documents? Yeah, how do you convince the lawyers now to use it?

Ed Walters 13:39
Well, I think, you know, the, the first way you’d have to do that is it just has to work. Like when you know, try to draft things with it, when you evaluate documents with when you try to figure out what comes next. What’s missing, then a contract that has to just flat out work. And this is a place where I think lawyers have been very favorably impressed. It’s why, you know, I think that there were there are more nominations this year than ever for the American Association of law libraries, new product, or like, everybody had an AI product this year. Yeah, this competing for that award. But among all of them, you know, and I always say law librarians are the most discriminating the most sophisticated consumers of these things. Among all of them. They picked Vincent AI. So I would just say, you know, the proof of that the proof is in the pudding. The proof of the pudding is in the eating maybe. And, you know, I think people have to be delighted by it. And it has to just flat out work. And I think that’s been our experience. We just did a couple of different law school, head to head Robot Wars, among the various different genealogy pools. And I’ll just say, you know, Vincent AI did very well, in terms of, you know, the kind of bias and the usefulness of it. I think, you know, we test as most as, as most software developers do, the outputs, I think we’ve benchmarked them. And I think we’ve been pretty favorably impressed with them in the testing. But I mean, American law, in some case, the law of other countries can also reflect bias, I want to be very careful about this. I think you and others have called out software manufacturers for overstating claims about gender to the AI. I don’t want to do that here. I think gender they AI is amazing. But in some sense, these transformers, the T and GPT, are really kind of predicting what the next word is going to be. And not all of the law in the world history is unbiased. And so I think this is a problem we’ll face in the profession. I think it’s it does very well, all these things, but I’ll never say it is completely unbiased. Maybe the last thing to say about it is just the usefulness of it. I think that there’s a lot of things on lawyers too. And legal information professionals do, that they can’t go through. That is just BurgerTime. I think about the document review that we did before he discovery came out, right? Is it useful to have technology assisted review for ediscovery castle with this, as one of the lawyers who did paper discovery back in the day? Anybody who’s had to do that will find it extremely useful if we can automate and mechanize than drugs were. And I will also say, you know, I’m not Panglossian about and I’m not going to say this is the solution that does everything. I always imagined legal work like a chain, right. And we can replace certain links in that chain. But the last links in that chain have to be human judgments. You have to have discriminating information professionals, when you do retrieval augmented generation, who can look at what’s going into making the answer and say include these, not this one, I don’t think this one’s useful. That’s a place where I think fits in really stands out, users can go through audit the list and say, I don’t want to include these two, but I want the rest of them. 
And I think it is hopefully like more incumbent than ever, that legal information professionals law librarians, paralegals researchers, and lawyers are very, very careful, they’re more discriminating than ever about those sources, you need to make sure that all of the cases exist in the world that they’re not hallucinated, you need to make them verify up, you need to check to make sure that you’re using the right work product from inside of iMatch. And so in some ways, we are automating the worst parts of the work like we did with ediscovery. But it does require and we are more sophisticated, in some ways, we’re more critical and discriminating in some ways, and these are skills that will be important for the next 20 years.

Greg Lambert 18:36
And I’m gonna I’m gonna hit you with a question that isn’t isn’t on the list. And, and hopefully, it’s not outside the scope here. But during the testing of this, was there anything that kind of stood out or surprised you it that worked better than you thought? When when you’re integrating these two products?

Ed Walters 19:00
Yeah, I don’t I don’t have a good answer from the for the iManage front. But I have a great answer. From a demo that I did with Craig Newton, who is the director of Cornell’s Legal Information Institute. We were trying to make videos and screw up. And we were offering a little bit we add the mud tires on. And so we were asking questions that there weren’t answers to, to see if we can make Vincent like make something. And so we were asking questions about to liability when unoccupied autonomous cars crash. And there’s not really any answer to that question. Right. And Craig knew it and he was trying to make it flow or create cases from the future or something.

Greg Lambert 19:51
And which can happen I hear and happens.

Ed Walters 19:56
Didn’t happen. So what happened was, Vincent AI came back concern, I don’t have any cases dealing with autonomous vehicles without occupants. But you might find interesting these analogous cases, from the 1930s when elevators stopped having operators and started running by themselves, the elevators, you used to have a person who was in there, like pulling the alarm and stopping on the fourth floor. Yeah. And at some point, we created self driving elevators. And guess what I mean, some of them had accidents. And then the question is, you know, is the elevator manufacturer liable? Is the building owner, the elevator owner liable? Is the person who gets in the self driving elevator, you know, that responsible because they knew what they’re getting into. I thought that was so interesting, like the idea that a generative AI tool could maybe effectively reason by analogy. And when I thought about it, it’s you know, it’s not really looking at keywords like we used to, it’s like MIT concepts, you know, that your database, like the concepts that are similar are really close together. Right? And the concepts of self driving accidents, I shared a little bit of conceptual DNA with elevators. I think every call

Greg Lambert 21:20
make make sense when you stand back and think about it. So alright, well, let’s, let’s jump over to the the docket alarm and integration with I manage as well. So, you know, talk to us a little bit more about how this is set up. And what what the use cases are for, for firms to kind of automate this this process of moving docket docket information directly into I manage Sure,

Ed Walters 21:50
well, I think people know, docket alarm. The idea is it will track litigation for firms. In addition to being a great research library and analytics library, searching for precedents and things like that. The specific use case here is when something new happens in the case, you get an answer to a complaint, there’s a motion filed in the case, the worst way to do it is to get an email from ECF and Pacer. And you click it and nine people get it, the first person gets the first free Look, everyone else sends an email around to everyone else in the firm to try and figure out who got it who’s got the document.

Greg Lambert 22:36
Or they just log in and pay for it. Right? The with the firm’s credit card? Yes.

Ed Walters 22:40
I mean, there are a lot of people who just pay the 10 cents per page, right? So docket alarm, kind of manage that process. But only to a point, right, so you get one, everyone gets an email with documents attached to it, you don’t have to worry about managing the first free look. And then it’s stored in docket alarm. The problem is that, you know, most I manage users and there’s a lot of them don’t want the doc and alarms, software, the the repo for those documents. And so they’re manually saving them one by one into the right client and matter. And iMac. And there are, you know, I think it’s, you know, probably 75% of the biggest 100 firms in the US use both docket alarm and iManage. And, you know, seven 810 lawyers that affirm docketing clerks, information professionals, law librarians are getting these emails. And nobody knows that someone save it, I manage.

Greg Lambert 23:45
Did I save it into the right folder? Yeah,

Ed Walters 23:48
you did. And then you have this happen scores of times a day. And so this is, I don’t think this is magical. But it’s extremely practical and useful. If you are one of the firm’s that’s using both iManage and docket alarm, when that happens, you’ll still get email, it will still be saved for even docket alarm, but it will also be saved to the client and the matter inside of iManage. He in exactly the right place, it will resolve conflicts, if there’s an update to that document, it’s quite an actually synced. If someone has the wrong version of it, it’ll be replaced by the right version of it, and all securely and nobody has to feed about it. Right. So I don’t know, I think people will tend to push like these kinds of magical AI solutions. I think this is just part of the integration that’s very practical, and very useful.

Greg Lambert 24:42
Is this again, the the cloud version of of iManage that you need to have or is this work is worth. Okay, so, so work is the just make sure I’m clear. That’s the on That’s the cloud version of vie manage, right? So

Ed Walters 25:04
this will save right into wherever you have your I manage it is in the cloud. Okay. Wherever you had your clients and matters for those filings will say directly to them.

Greg Lambert 25:16
Yeah. All right. Well, I know that I manage has their big conference today and tomorrow and, and you’re giving us a nice, early peek into this. Let’s look down the road. While you’re while you’re there at the conference, are you looking at building additional tools and relationships with with I manage that? You think you can kind of get ideas? What what may come down the road?

Ed Walters 25:45
A lot of ideas? Yeah. So I think that there’s, there’s a lot of useful integration points. I’m excited to launch these two that connect live, the New York user event for I manage. But I think these will be the first of many announcements with these integration points. And again, like just the iron of the liberation of these Crown Jewels, there’s been firms that have, you know, created their own Large Language Models that invested to create a foundation model inside the firm. You know, hugely expensive, that’s a really difficult thing to do. And then, you know, whenever there’s new stuff, once a year, once a quarter, once a day, you have to create, like an updated foundation model or something. And I can see why they might do it, right, because these knowledge assets are really important. They’re differentiators for firms. So I hope this is kind of a bridge instead of having to create your own $10 million foundation model. Vincent AI is a bridge between generative AI and your internal work product, your secret sauce, a secure bridge, a sock two compliant bridge, but a bridge that helps law firms to really use and leverage those knowledge assets, using generative AI to differentiate their services at a time where a lot of clients are saying, I want to know what your generative AI strategy is going to be.

Greg Lambert 27:21
Yeah, this this might be one of those things where you may or may have lucked out if you waited a little bit to

Ed Walters 27:33
Sometimes, right? Were you the one who did your homework, or the one who pulled the epic all-nighter? I certainly was.

Greg Lambert 27:42
All college students did that. All right, well, Ed Walters, Chief Strategy Officer at vLex, I want to thank you very much for coming in and taking the time to talk with us about the two new announcements that you have between vLex and iManage. So, thanks. Thanks for having me, Greg. All right. And of course, thanks to all the listeners for listening to The Geek in Review. If you like what you hear, please share it with a colleague. We’d love to hear from you, so reach out to us online. LinkedIn is probably the best place to get a hold of us. Ed, if somebody wants to learn more, where should they go?

Ed Walters 28:25
LinkedIn, forward slash in, forward slash Walters. I think I still have a, you know, burner Twitter account on X, @EJWalters, but I’m with you. I think LinkedIn is where it’s at these days.

Greg Lambert 28:40
I think so, for the legal profession. That seems to be where we are right now. And, of course, the music that you hear is from Jerry David DeCicca. So thank you, Jerry. And thanks again, Ed.

Ed Walters 28:54
Thanks, Greg.

 

]]>
3 Geeks and a Law Blog
Ropes & Gray Discusses AI and the Copyright Liability Overhang https://www.lexblog.com/2024/05/29/ropes-gray-discusses-ai-and-the-copyright-liability-overhang/ Wed, 29 May 2024 04:01:13 +0000 https://www.lexblog.com/2024/05/29/ropes-gray-discusses-ai-and-the-copyright-liability-overhang/ Copyright law, as it relates to Artificial Intelligence (“AI”), is at a crossroads. Rapid innovation in AI has created a great deal of uncertainty regarding whether popular AI platforms infringe copyright. More than a dozen1 suits are pending across the United States in which copyright owners are pursuing various theories of infringement against AI platforms, alleging that AI models either infringe their copyrights because they are trained using copyrighted works,2 or because the output of the AI models itself infringes,3 or both. While these suits are pending, the U.S. Copyright Office has issued a Notice of Inquiry (“NOI”) seeking comments about the collection and curation of AI dataset sources, how those datasets are used to train AI models, and whether permission by or compensation for copyright owners should be required when their works are included in the process.4

This legal uncertainty and potential for liability hang over the AI industry and will affect how AI is used and the terms of agreements between AI vendors, business partners, and users. This article collects the most prominent ongoing AI copyright cases and their theories, as well as recent discussions about a potential compulsory copyright licensing scheme,5 and provides considerations regarding the allocation of copyright infringement risk for companies that may be entering into agreements related to the use of AI platforms.

Plaintiffs’ Copyright Infringement Theories

Training the AI Requires Copying Copyrighted Works

Most of the plaintiffs in the cases, with the notable exception of Doe 1 v. Github,6 have asserted direct infringement claims alleging that each respective AI company in the case accessed copyrighted material and made copies for the purposes of training a given AI model. AI models require substantial amounts of data, and some prominent AI vendors “scrape” the internet for that content, which requires a copy to be made of such content.7 This theory of copyright infringement has survived a 12(b)(6) motion to dismiss in one case involving “scraped” training content.8 That said, different AI models are trained with different types of data, and information as to the specific ways AI is trained is limited,9 so the success of this theory may depend on the facts of each case.

Post-Training Infringement

Plaintiffs also have offered theories of infringement based on the use of a given AI tool, apart from training it. Plaintiffs have argued that any given AI model, while running, is an unauthorized derivative work because it pulls from copyrighted materials.10 Plaintiffs also have argued that the AI model contains compressed copies of the works through the usage of weight folders, and that this unauthorized copying should be considered direct infringement even if the whole of the work is not represented in a traditional way.11 Finally, plaintiffs have argued that AI outputs can result in substantially similar works that infringe.12

Possible Defenses; Notice of Inquiry Comments

Although several of the defendants in the cases discussed above have not yet filed answers in court as of the date of publication, the likely defense theories and positions on copyright liability can be gleaned from both the more advanced cases and AI companies’ responses13,14,15,16,17 to select questions presented by the Copyright Office in the recently issued Notice of Inquiry (“NOI”).18

In response to questions presented by the Copyright Office in its NOI, OpenAI (which is a defendant in a number of cases19) claimed that its AI is trained using either publicly available or licensed materials.20 Further, it said that training ChatGPT involved access to a large dataset, teaching the model to break down text into smaller words and then correlate specific functions and linguistic data to words, such as probabilistic data about the chronology of words. For example, OpenAI said that GPT-4 categorizes words into pronouns, nouns, verbs, and so on, and then uses math to predict the next word given the previous words. Therefore, it said, the structure of language itself is the only thing “stored” in the large language model (LLM), rather than the copyrightable expression in a given work.21 Similarly, Stability AI explained that its generative image AI, Stable Diffusion, breaks down images into basic structures and relative relationships between parts of images.22 Relying on this fragmentation of the scraped content, Microsoft claimed that the idea-expression dichotomy in copyright law should be a shield for AI companies.23
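
To make the mechanism described in these comment letters more concrete, the following toy sketch, which is purely illustrative and not OpenAI’s or Stability AI’s actual code, tokenizer, or training pipeline, breaks a tiny made-up corpus into word-level tokens, counts which token follows which, and then predicts the most probable next word from those counts. What this toy “model” retains is statistical structure about word order rather than any passage of the underlying text, which is the distinction the comment letters draw.

    from collections import Counter, defaultdict

    # Hypothetical miniature training text; real models ingest vastly more data.
    corpus = (
        "the court granted the motion . "
        "the court denied the motion to dismiss . "
        "the court granted the motion in part ."
    )

    # 1. "Tokenize": break the text into smaller units (here, simple words).
    tokens = corpus.lower().split()

    # 2. Learn probabilistic structure: count which token follows which.
    next_word_counts = defaultdict(Counter)
    for current, following in zip(tokens, tokens[1:]):
        next_word_counts[current][following] += 1

    def predict_next(word):
        """Return the most likely next token and its estimated probability."""
        counts = next_word_counts[word]
        if not counts:
            return None
        best, count = counts.most_common(1)[0]
        return best, count / sum(counts.values())

    print(predict_next("the"))      # ('court', 0.5) under these toy counts
    print(predict_next("motion"))   # ('.', 0.333...) under these toy counts

Whether this kind of abstraction, scaled up to billions of parameters, in fact avoids retaining protectable expression is precisely the question the pending cases will test.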

Defendants also assert that after the models are trained, they do not contain any copies of the “scraped” content, or any copy of the material that was used to train the model.24 However, the defendants have generally not yet provided a theory as to why copies made in the process of training the AI models or creating the training databases are not infringement.25

AI companies have also asserted that their use of copyright-protected content constitutes a transformative fair use.26 OpenAI claimed that in “rare” situations, an output could implicate the exclusive rights of a copyright holder by satisfying the substantial similarity test.27 However, OpenAI has stated that nonetheless, the LLM has a substantial noninfringing use,28 which is a patent law doctrine29 that has migrated into copyright law30 but has not yet been used in the manner suggested by OpenAI. Microsoft also mentioned the substantial similarity test for copyright infringement when it noted that Microsoft incorporates “many safeguards” into its AI tools to prevent it from being “misused for copyright infringement,” for example, training the AI to explicitly decline to provide excerpts from protected materials.31

Prospects for Compulsory Licensing Scheme

The Copyright Office is still pursuing its investigation into copyright law and policy issues raised by AI and has not yet addressed the public comments. It plans to issue a report in several sections analyzing the issues in 2024.32 During a meeting in February 2020 about AI and copyright issues, Mary Rasenberger, a former senior policy advisor for the U.S. Copyright Office who is now the Executive Director of the Author’s Guild, recommended a collective licensing scheme.33 In the summer of 2023, the Copyright Office held a couple of public webinars about copyright and AI, and the transcripts from these events reveal that stakeholders in attendance do not see a compulsory licensing scheme as an ideal option, as it leads to issues of valuation.34

Allocation of Risk in AI-Related Contracts

When contracting with third parties who are providing AI-based services or developing AI models, parties should consider how the contract allocates liability for potential copyright infringement. Although AI service providers largely do not indemnify users from copyright claims related to their free AI services,35 some AI vendors offer enterprise and developer customers limited indemnity protections that are often delineated in terms of use for specific AI products.36 Such terms may narrow the scope of indemnities with broad exclusions. For example, while vendors will generally indemnify users against third-party infringement claims related to outputs, some will not indemnify users for claims that the training data and inputs were infringing.37 Therefore, parties should carefully consider the scope of indemnities when choosing between AI services and pay careful attention to what the indemnities leave out.

Looking ahead, if AI companies are found liable to third parties for copyright infringement in the pending litigation discussed above, the cost of using generative AI services may increase as vendors seek to shift liability to consumers, perhaps by further reducing the scope of coverage for AI indemnities. Where companies have the freedom to negotiate more bespoke terms, they would be well advised to seek to include a pricing escalation provision to control a vendor’s ability to increase prices over time after the parties have entered into the contract.

Conclusion

With at least a dozen copyright cases pending, we are witnessing a period of litigation that will likely determine the legal relationship between content creators and AI platforms. Until the legal dust settles, however, there will be uncertainty and risk of copyright liability that anyone contracting with respect to the use of AI platforms should be aware of. It remains to be seen what the Copyright Office will recommend to Congress and how it will affect the progression or resolution of these cases, as well as who will bear the potential burdens of infringement liability. Attorneys practicing in this area will want to keep abreast of developments and think ahead as to how risk may be mitigated by contract.

ENDNOTES

  1. See Thomson Reuters Enter. Ctr. GmbH v. ROSS Intel. Inc., No. 1:20-cv-00613-SB (D. Del. filed May 6, 2020); UAB Planner 5D v. Facebook, Inc., 534 F. Supp. 3d 1126 (N.D. Cal. 2021); Doe 1 v. GitHub, Inc., No. 4:22-cv-06823-JST (N.D. Cal. filed Nov. 3, 2022); Getty Images, Inc. v. Stability AI, Inc., No. 1:23-cv-00135-JLH (D. Del. filed Feb. 3, 2023); Tremblay v. OpenAI, Inc., No. 3:23-cv-03223 (N.D. Cal. filed June 28, 2023); L. v. Alphabet Inc., No. 3:23-cv-3440 (N.D. Cal. filed July 11, 2023); Authors Guild v. OpenAI, Inc., No. 1:23-cv-08292 (S.D.N.Y. filed Sept. 19, 2023); Kadrey v. Meta Platforms, Inc., No. 23-cv-03417-VC (N.D. Cal. Nov. 20, 2023); Huckabee v. Bloomberg L.P., No. 1:23-cv-09152 (S.D.N.Y. filed Oct. 17, 2023); Concord Music Grp., Inc. v. Anthropic PBC, No. 3:23-cv-01092 (M.D. Tenn. filed Oct. 18, 2023); Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal. Oct. 30, 2023) (order granting motion to dismiss with leave to amend); The N.Y. Times Co. v. Microsoft Corp., No. 1:23-cv-11195 (S.D.N.Y. filed Dec. 27, 2023); Nazemian et al. v. Nvidia Corp., No. 24-01454 (N.D. Cal. filed Mar. 8, 2024).
  2. See, e.g., Complaint, Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal. Oct. 30, 2023).
  3. See, e.g., Complaint, The N.Y. Times Co. v. Microsoft Corp., No. 1:23-cv-11195 (S.D.N.Y. filed Dec. 27, 2023).
  4. Notice of Inquiry, 88 Fed. Reg. 59942 (U.S. Copyright Office Aug. 30, 2023), https://www.regulations.gov/document/COLC-2023-0006-0001.
  5. See id., Questions 10.3-10.5. For examples of compulsory licensing schemes, see also 17 U.S.C. §§ 111(d) and 115, which provide compulsory copyright licenses for cable system transmissions and music, respectively.
  6. The plaintiffs in Github instead rely on a theory of improper removal or alteration of copyright management information under the Digital Millennium Copyright Act. See 17 U.S.C. § 1202. This argument is also common to other cases collected (for example, in Getty Images).
  7. Unnamed but relevant parties to a few of the matters are LAION and Common Crawl, which maintain and provide databases that AI vendors use for training. The Ninth Circuit has suggested that “scraping” publicly available data does not constitute an invasion of privacy or violation of the Computer Fraud and Abuse Act under the facts presented before it. hiQ Labs, Inc. v. LinkedIn Corp., 938 F.3d 985 (9th Cir. 2019).
  8. See Thomson Reuters v. ROSS (D. Del. filed May 6, 2020), where the Delaware District Court granted summary judgment to the plaintiffs on the issue of copying.
  9. Compare Answer at 15, ¶111, Authors Guild v. OpenAI, Inc., No. 1:23-cv-08292 (S.D.N.Y. filed Sept. 19, 2023) and Amended Complaint at 19, ¶111, Authors Guild v. OpenAI, Inc., No. 1:23-cv-08292 (S.D.N.Y. filed Sept. 19, 2023).
  10. See Authors Guild v. OpenAI, Inc., No. 1:23-cv-08292 (S.D.N.Y. filed Sept. 19, 2023); Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal. Oct. 30, 2023).
  11. See Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal. Oct. 30, 2023); see also The N.Y. Times Co. v. Microsoft Corp., No. 1:23-cv-11195 (S.D.N.Y. filed Dec. 27, 2023).
  12. See Doe 1 v. GitHub, Inc., No. 4:22-cv-06823-JST (N.D. Cal. filed Nov. 3, 2022); Getty Images, Inc. v. Stability AI, Inc., No. 1:23-cv-00135-JLH (D. Del. filed Feb. 3, 2023); Concord Music Grp., Inc. v. Anthropic PBC, No. 3:23-cv-01092 (M.D. Tenn. filed Oct. 18, 2023); Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal. filed Oct. 30, 2023); The N.Y. Times Co. v. Microsoft Corp., No. 1:23-cv-11195 (S.D.N.Y. filed Dec. 27, 2023).
  13. Microsoft Comment Letter (Oct. 30, 2023), https://www.regulations.gov/comment/COLC-2023-0006-8750.
  14. OpenAI Comment Letter (Oct. 30, 2023), https://www.regulations.gov/comment/COLC-2023-0006-8906.
  15. StabilityAI Comment Letter (Oct. 29, 2023), https://www.regulations.gov/comment/COLC-2023-0006-8664.
  16. Meta Comment Letter (Dec. 6, 2023), https://www.regulations.gov/comment/COLC-2023-0006-10332.
  17. Google Comment Letter (Oct. 30, 2023), https://www.regulations.gov/comment/COLC-2023-0006-9003.
  18. All public comment letters are available here: https://www.regulations.gov/document/COLC-2023-0006-0001/comment.
  19. See Tremblay v. OpenAI, Inc., No. 3:23-cv-03223 (N.D. Cal. filed June 28, 2023); Authors Guild v. OpenAI, Inc., No. 1:23-cv-08292 (S.D.N.Y. filed Sept. 19, 2023); The N.Y. Times Co. v. Microsoft Corp., No. 1:23-cv-11195 (S.D.N.Y. filed Dec. 27, 2023).
  20. Supra note 14, at 5.
  21. See id., at 5-6.
  22. Supra note 15, at 10.
  23. Supra note 13, at 3.
  24. See, for example, supra note 16, at 9.
  25. One theory that they may assert would be that these training copies, if they last in RAM for under 1.2 seconds, are “buffering copies,” which have been considered non-infringing copies by the Second Circuit. See Cartoon Network v. CSC Holdings, 536 F.3d 121 (2d Cir. 2008). However, this theory has not yet been asserted by any defendants in the cases, and may not be applicable depending on the technology involved.
  26. See, e.g., supra note 15, at 2 and 8; see also Meta Platforms Inc.’s Answer to First Consolidated Amended Complaint at 11, Kadrey v. Meta Platforms, Inc., No. 23-cv-03417-VC (N.D. Cal. Nov. 20, 2023).
  27. Supra note 14, at 14.
  28. Id.
  29. See 35 U.S. Code § 271(c).
  30. See Sony Corp. of America v. Universal City Studios, Inc., 464 U.S. 417 (1984).
  31. Supra note 13, at 11.
  32. Copyright and Artificial Intelligence, United States Copyright Office, https://www.copyright.gov/ai/.
  33. United States Copyright Office, Copyright in the Age of Artificial Intelligence, 167 (Feb. 5, 2020), https://www.copyright.gov/events/artificial-intelligence/transcript.pdf.
  34. United States Copyright Office, Transcript of Proceedings, 121, 128 (May 31, 2023), https://www.copyright.gov/ai/transcripts/230531-Copyright-and-AI-Music-and-Sound-Recordings-Session.pdf; United States Copyright Office, International Copyright Issues and Artificial Intelligence, 13 (Jul. 26, 2023), https://www.copyright.gov/events/international-ai-copyright-webinar/International-Copyright-Issues-and-Artificial-Intelligence.pdf.
  35. See OpenAI, Terms of Use (Nov. 14, 2023), https://openai.com/policies/terms-of-use; see also Amazon, AWS Service Terms (Mar. 27, 2024), https://aws.amazon.com/service-terms/; see also Microsoft, Terms of Use (Feb. 7, 2022), https://aws.amazon.com/service-terms/; see also Google, Google Cloud Generative AI Indemnified Services (Mar. 7, 2024), https://cloud.google.com/terms/generative-ai-indemnified-services; see also Meta, Meta AIs Terms of Service (Apr. 1, 2024), https://m.facebook.com/policies/other-policies/ais-terms.
  36. See Regina Sam Penti, Georgina Jones Suzuki & Derek Mubiru, Trouble Indemnity: IP Lawsuits In The Generative AI Boom, Law360 (Jan. 3, 2024, 4:24 PM), https://www.law360.com/articles/1779936/trouble-indemnity-ip-lawsuits-in-the-generative-ai-boom.
  37. See, e.g., AWS Service Terms, Section 50.10.2 (“AWS will have no obligations or liability [for an Indemnified Generative AI Service] with respect to any claim: (i) arising from Generative AI Output generated in connection with inputs or other data provided by you that, alone or in combination, infringe or misappropriate another party’s intellectual property rights[.]”).

This post comes to us from Ropes & Gray LLP. It is based on the firm’s memorandum, “AI and the Copyright Liability Overhang: A Brief Summary of the Current State of AI-Related Copyright Litigation,” dated April 2, 2024, and available here.

]]>
The CLS Blue Sky Blog
European Union officially approves landmark AI legislation https://www.lexblog.com/2024/05/28/european-union-officially-approves-landmark-ai-legislation/ Tue, 28 May 2024 19:24:49 +0000 https://www.lexblog.com/2024/05/28/european-union-officially-approves-landmark-ai-legislation/ The new law categorizes AI systems according to risk

]]>
Canadian Lawyer Mag
Limited-Risk AI—A Deep Dive Into Article 50 of the European Union’s AI Act https://www.lexblog.com/2024/05/28/limited-risk-ai-a-deep-dive-into-article-50-of-the-european-unions-ai-act/ Tue, 28 May 2024 16:40:00 +0000 https://www.lexblog.com/2024/05/28/limited-risk-ai-a-deep-dive-into-article-50-of-the-european-unions-ai-act/ This blog post focuses on the transparency requirements associated with certain limited-risk artificial intelligence (AI) systems under Article 50 of the European Union’s AI Act.

]]>
WilmerHale Privacy and Cybersecurity Law
Could Artificial Intelligence Create Real Liability for Employers? Colorado Just Passed the First U.S. Law Addressing Algorithmic Discrimination in Private Sector Use of AI Systems (US) https://www.lexblog.com/2024/05/28/could-artificial-intelligence-create-real-liability-for-employers-colorado-just-passed-the-first-u-s-law-addressing-algorithmic-discrimination-in-private-sector-use-of-ai-systems-us/ Tue, 28 May 2024 14:34:37 +0000 https://www.lexblog.com/2024/05/28/could-artificial-intelligence-create-real-liability-for-employers-colorado-just-passed-the-first-u-s-law-addressing-algorithmic-discrimination-in-private-sector-use-of-ai-systems-us/ On May 17, 2024, Colorado became the first U.S. state to pass a law aimed at protecting consumers from harm arising out of the use of artificial intelligence (“AI”) systems. Senate Bill 24-205, or the “CAIA,” is designed to regulate the private-sector use of AI systems and will impose obligations on Colorado employers, including affirmative reporting requirements. The CAIA, which will take effect on February 1, 2026, applies to Colorado businesses that use AI systems to make, or that are used as a substantial factor in making, employment decisions.

While President Biden has released an executive order on the development and use of AI, there is no comprehensive federal legislation regulating the use of AI systems. Despite signing the bill into law, Colorado Governor Jared Polis released a signing statement expressing his reservations with the CAIA and encouraging the legislature to improve upon the law before it takes effect. Colorado employers should monitor for guidance on and amendments to the CAIA, in addition to preparing for compliance.

What Employers Need to Know

The CAIA imposes a duty of reasonable care on developers (i.e., creators) and deployers (i.e., users) of high-risk AI systems to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. Although the law does not exclusively regulate employers, high-risk AI systems include AI systems that make, or are a substantial factor in making, employment-related decisions.  The law provides a narrow exemption for businesses with fewer than fifty employees that do not use their own data to train the AI system.

Under the CAIA, “algorithmic discrimination” means any condition in which the use of an AI system results in differential treatment or impact that disfavors a consumer or group of consumers on the basis of characteristics protected under federal law or Colorado law, including age, color, ethnicity, disability, national origin, race, religion, veteran status, and sex.

The law creates a rebuttable presumption of reasonable care if a deployer takes certain compliance steps, including:

  1. Risk-management policy and program.  Deployers must adopt a risk-management policy and program meeting certain defined criteria. The risk-management policy and program must be regularly reviewed and updated, as well as reasonable in consideration of various listed factors.
  2. Impact assessment.  Deployers must also complete annual impact assessments for high-risk AI systems.  An impact assessment must include, at a minimum: a statement of the purpose, intended use, and benefits of the system; an analysis of whether the system poses known or reasonably foreseeable risks of algorithmic discrimination and a description of how the deployer mitigates those risks; a summary of the data processed as inputs and outputs of the system; an overview of the categories of data, if any, the deployer used to customize the system; any metrics used to evaluate the performance and known limitations of the system; a description of transparency measures taken, including any measures taken to disclose the use of the system to consumers; and a description of the post-deployment monitoring and user safeguards provided concerning the system.
  3. Notices. The CAIA also requires deployers to provide various notices to consumers (i.e., Colorado residents). Prior to using an AI system to make employment-related decisions, employers must inform applicants that an AI system will be used and disclose the purpose of the system, the nature of the decision(s) the system may make, and a plain-language description of the system. Additionally, for an applicant adversely affected by the decision of an AI system, the employer must provide the principal reason(s) for the adverse decision, an opportunity to correct any incorrect personal data used by the AI system, and an opportunity to appeal the adverse decision. A covered employer must also post in a “clear and readily available” manner on its website a notice of the types of AI systems that are currently deployed, the known or reasonably foreseeable risks of algorithmic discrimination, and the data collected and used by the deployer. Finally, deployers must disclose to Colorado’s attorney general the discovery of algorithmic discrimination within their AI systems within 90 days after the discovery.

The law provides an affirmative defense in an enforcement action by the attorney general if a deployer (i) discovers and cures a violation as a result of feedback, adversarial testing or red teaming (as those terms are defined by the National Institute of Standards and Technology (NIST)), or an internal review process, and (ii) is otherwise compliant with NIST’s Artificial Intelligence Risk Management Framework or another internationally recognized framework for artificial intelligence risk management. Colorado’s attorney general has the exclusive authority to enforce the CAIA.

Although the CAIA is the first of its kind in the U.S., the law shares structural similarities with the Artificial Intelligence Act recently adopted by the European Union.  Acknowledging industry opposition, Governor Polis expressed in the signing statement his hope that the CAIA will be significantly improved upon before it takes effect and emphasized that “Colorado remains home to innovative technologies.” Colorado employers should continue monitoring for guidance on or amendments to the CAIA, as well as preparing for compliance.

]]>
Employment Law Worldview
Takeaways – AI Intersects With Sustainable Governance https://www.lexblog.com/2024/05/28/takeaways-ai-intersects-with-sustainable-governance/ Tue, 28 May 2024 18:15:08 +0000 https://www.lexblog.com/2024/05/28/takeaways-ai-intersects-with-sustainable-governance/ On May 9, the Paul, Weiss ESG and Law Institute hosted a breakfast roundtable discussion with experts from UC Law San Francisco, UC Berkeley Law, Deloitte, and Heidrick & Struggles on the intersection of AI and Sustainable Governance.

They were joined by senior executives and industry leaders for a cross-functional, off-the-record conversation exploring the capabilities and risks posed by AI, and the art of applying effective and scalable governance mechanisms.

As stakeholders are increasingly scrutinizing how organizations engage with AI, companies should consider what it means to use technology in an “ethical” and “responsible” fashion.

» view key takeaways from the discussion

]]>
ESG & Law Institute
May 30 Event | AI Strategy Summit: IP, Data and Compliance https://www.lexblog.com/2024/05/28/may-30-event-ai-strategy-summit-ip-data-and-compliance/ Tue, 28 May 2024 16:42:16 +0000 https://www.lexblog.com/2024/05/28/may-30-event-ai-strategy-summit-ip-data-and-compliance/ On May 30, Greenberg Traurig Shareholder Tyler Thompson will be a panelist at The AI Strategy Summit: IP, Data and Compliance conference in Chicago. Participating in the session “The Future of Data Privacy in an AI-Driven World: Emerging Trends and Predictions,” Thompson and fellow panelists will discuss emerging technologies, potential changes in data privacy laws, and how companies can prepare for these future developments. The conversation will also focus on:

  • Predicting the evolution of data privacy regulations in the AI era
  • Emerging AI technologies and their impact on data privacy
  • Proactive strategies for future-proofing data privacy in AI applications

Greenberg Traurig will sponsor the conference, which is designed for legal professionals, IP specialists, data privacy experts, compliance officers, and business leaders. The summit will offer practical solutions on how to leverage AI to drive innovation and productivity while ensuring compliance with regulations. Sessions will provide in-depth exploration of the multidisciplinary approach required to navigate the complexities of AI, including the management of IP, the safeguarding of data privacy, adherence to compliance standards, and the protection of trade secrets.

Register here.

]]>
Data Privacy Dish
OpenAI Board Formed Safety Committee, Plans for New AI Model: Artificial Intelligence Trends https://www.lexblog.com/2024/05/28/openai-board-formed-safety-committee-plans-for-new-ai-model-artificial-intelligence-trends/ Tue, 28 May 2024 12:40:28 +0000 https://www.lexblog.com/2024/05/28/openai-board-formed-safety-committee-plans-for-new-ai-model-artificial-intelligence-trends/ OpenAI announced today that the OpenAI Board formed a Safety and Security Committee and has begun training a new AI model to replace the GPT-4 series.

The post OpenAI Board Formed Safety Committee, Plans for New AI Model: Artificial Intelligence Trends appeared first on eDiscovery Today by Doug Austin.

]]>
eDiscovery Today Blog
States Begin To Regulate AI in Absence of Federal Legislation https://www.lexblog.com/2024/05/28/states-begin-to-regulate-ai-in-absence-of-federal-legislation/ Tue, 28 May 2024 09:22:00 +0000 https://www.lexblog.com/2024/05/28/states-begin-to-regulate-ai-in-absence-of-federal-legislation/ Here is the teaser about this Client Update: “Since the European Union seized the early global lead in regulating artificial intelligence, the U.S. Congress has made noise about the need for federal AI legislation, but progress has been slow. The absence of a similarly comprehensive federal law from Congress has created a vacuum that is now being filled by individual states.

These events are unfolding in a familiar pattern, reminiscent of what happened following the E.U. enactment of the General Data Protection Regulation. While the GDPR became an international standard, the federal government failed to adopt a nationwide privacy law, and states like California took the initiative to pass their own privacy laws.

This Update summarizes recent U.S. state efforts by Colorado, Connecticut, Utah, Tennessee, and California to enact AI legislation.”

]]>
Public Chatter
The AI Regulatory Landscape: Colorado, Connecticut, and the Biden Administration https://www.lexblog.com/2024/05/27/the-ai-regulatory-landscape-colorado-connecticut-and-the-biden-administration/ Mon, 27 May 2024 14:58:19 +0000 https://www.lexblog.com/2024/05/27/the-ai-regulatory-landscape-colorado-connecticut-and-the-biden-administration/ Editor’s Note: This article delves into the recent initiatives undertaken by Colorado, Connecticut, and the Biden Administration to address the challenges and opportunities presented by AI. It highlights the pioneering efforts of these states in curbing algorithmic discrimination, promoting consumer transparency, and establishing oversight mechanisms for AI systems. The article also examines the federal perspective, as shaped by the Biden Administration’s executive order, which emphasizes the necessity of a national framework for AI regulation and the importance of ethical considerations and civil rights in AI development.

Industry News – Artificial Intelligence Beat

The AI Regulatory Landscape: Colorado, Connecticut, and the Biden Administration

ComplexDiscovery Staff

In an era dominated by rapid advancements in artificial intelligence, the urgency for meaningful AI regulation has come to the forefront of policy discussions, particularly on Capitol Hill and among state legislatures. The recent initiatives by Colorado and Connecticut, along with the Biden Administration’s executive order, illuminate the complex challenges and critical stakes associated with AI’s integration into society’s fabric.

Colorado has taken a pioneering step with the passing of Senate Bill 24-205 (SB205), a comprehensive law aimed at curbing algorithmic discrimination in employment and other key areas. Scheduled to take effect in 2026, SB205 requires developers and users of high-risk AI systems to adopt stringent compliance measures such as annual impact assessments and consumer notifications when AI significantly influences decisions. The bill also mandates the creation of an AI oversight board to monitor the implementation of these regulations and provide guidance to businesses and organizations utilizing AI technologies. This proactive approach aims to ensure that AI systems are developed and deployed in a manner that upholds fairness, transparency, and accountability.

Meanwhile, Connecticut’s legislation, known as the AI Bill of Rights, emphasizes consumer transparency and the right to contest AI-driven decisions, reflecting a growing consensus on the need for public accountability in AI applications. The state’s approach targets the foundational aspects of AI governance by mandating an annual inventory and public disclosure of AI-utilizing systems. Connecticut’s bill also requires companies to provide clear explanations of how their AI systems function and make decisions, empowering consumers to make informed choices and challenge decisions that may adversely affect them. This focus on consumer rights and transparency sets a crucial precedent for other states and the federal government to follow.

These state-level actions complement the federal perspective shaped by the Biden Administration’s “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” issued in October 2023. The order not only underscores the necessity for a national framework to regulate AI but also highlights the role of ethical considerations and civil rights in AI’s developmental trajectory. It calls for the establishment of an interagency task force to coordinate federal efforts in AI regulation and promote collaboration between government agencies, academia, and the private sector. The executive order also emphasizes the importance of investing in AI research and development to maintain the United States’ competitive edge while ensuring that AI technologies are developed in a responsible and equitable manner.

However, the path to effective AI regulation is fraught with challenges. The Senate AI Working Group’s “Driving U.S. Innovation in Artificial Intelligence” roadmap unveils the conflicting sentiments within Congress regarding the pace and extent of regulatory intervention. While the roadmap advocates for a capabilities-focused approach and federal investments in AI, it stops short of endorsing a broad regulatory agency, signaling caution and the need for sustained deliberative engagement. The roadmap also highlights the importance of balancing regulation with innovation, emphasizing the need to create an environment that fosters the development of AI technologies while mitigating potential risks and negative consequences.

The disparate regulatory efforts at the state and federal levels reflect the broader tension between promoting AI innovation and preventing its potential misuse. As AI technology continues to evolve, the dialogue among policymakers, industry leaders, and civic groups is crucial to navigating the ethical and legal quandaries posed by this transformative technology. The development of a comprehensive and cohesive regulatory framework will require ongoing collaboration and dialogue among these stakeholders to ensure that AI technologies are developed and deployed in a manner that benefits society as a whole.

Moreover, the global nature of AI development and deployment necessitates international cooperation and coordination in AI regulation. The United States must work closely with its allies and partners to establish common standards and best practices for AI governance, ensuring that the benefits of AI are shared equitably and that potential risks are mitigated on a global scale. This international collaboration will be essential in addressing the transnational challenges posed by AI, such as data privacy, cybersecurity, and the potential for AI-driven disinformation campaigns.

As the United States continues to grapple with the complexities of AI regulation, it is clear that a multifaceted approach is required. The initiatives undertaken by Colorado, Connecticut, and the Biden Administration represent important steps in the right direction, but much work remains to be done. Policymakers must strike a delicate balance between fostering innovation and protecting the public interest, ensuring that AI technologies are developed and deployed in a manner that upholds democratic values and promotes social justice. Only through sustained engagement, collaboration, and a commitment to ethical principles can the United States and the global community harness the transformative potential of AI while safeguarding the rights and well-being of all individuals.

News Sources

Assisted by GAI and LLM Technologies

Additional Reading

Source: ComplexDiscovery OÜ

The post The AI Regulatory Landscape: Colorado, Connecticut, and the Biden Administration appeared first on ComplexDiscovery.

]]>
ComplexDiscovery
NIST Delivers Draft Standards on AI and Launches GenAI Evaluation Program in Furtherance of President Biden’s Executive Order on AI https://www.lexblog.com/2024/05/24/nist-delivers-draft-standards-on-ai-and-launches-genai-evaluation-program-in-furtherance-of-president-bidens-executive-order-on-ai/ Fri, 24 May 2024 16:59:16 +0000 https://www.lexblog.com/2024/05/24/nist-delivers-draft-standards-on-ai-and-launches-genai-evaluation-program-in-furtherance-of-president-bidens-executive-order-on-ai/ Late last month, the Department of Commerce’s National Institute of Standards and Technology (“NIST”) released four draft publications regarding actions taken by the agency following President Biden’s executive order on AI (the “Order”; see our prior alert here)[1] and call for action within six months of the Order.  Adding to NIST’s mounting portfolio of AI-related guidance, these publications reflect months of research focused on identifying risks associated with the use of artificial intelligence (“AI”) systems and promoting the central goal of the Order: improving the safety, security and trustworthiness of AI.  The four draft documents, further described below, are titled:

(1) The AI RMF Generative AI Profile;

(2) Secure Software Development Practices for Generative AI and Dual-Use Foundation Models;

(3) Reducing Risks Posed by Synthetic Content; and  

(4) A Plan for Global Engagement on AI Standards. 

In addition to the batch of draft documents, NIST introduced its pilot “NIST GenAI” program, a new evaluation series designed to assess generative AI (“GenAI”) technologies, primarily focused on establishing benchmark metrics and practices to distinguish synthetic content from human-generated content in both text-to-text and text-to-image AI models.  The program will evaluate GenAI tools with a series of challenge problems presented to both “generator”[2] teams (i.e., which provide AI-generated content) and “discriminator” teams (i.e., which classify the data as either human-made or synthetic).  The challenge problems are intended to evaluate the capabilities and limitations of the AI tools and use that information to “promote information integrity and guide the safe and responsible use of digital content,”[3] particularly by helping determine whether content was produced by a human or AI tool.  The NIST GenAI evaluations will be used to inform the work of the U.S. AI Safety Institute at NIST.  NIST is encouraging teams from academia, industry and other research centers to contribute to this research through the NIST GenAI platform. 
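
To illustrate the generator-versus-discriminator framing in rough code terms, the sketch below is a hypothetical baseline discriminator, not NIST’s challenge harness, scoring criteria, or registration workflow. It labels text samples as human or synthetic using two crude stylometric features and reports simple accuracy against made-up labeled examples, the kind of benchmark-style comparison such an evaluation series is meant to standardize.

    import statistics

    def features(text):
        """Crude stylometric features: mean sentence length and vocabulary diversity."""
        sentences = [s for s in text.replace("?", ".").replace("!", ".").split(".") if s.strip()]
        words = [w.strip(".,;:!?").lower() for w in text.split()]
        mean_sentence_len = statistics.mean(len(s.split()) for s in sentences)
        type_token_ratio = len(set(words)) / len(words)
        return mean_sentence_len, type_token_ratio

    def discriminate(text, length_cutoff=18.0, diversity_cutoff=0.65):
        """Label a sample; the cutoffs are invented for illustration only.

        A real discriminator would be a trained classifier evaluated on
        held-out challenge data, not two hand-picked thresholds."""
        mean_len, ttr = features(text)
        return "synthetic" if (mean_len > length_cutoff and ttr < diversity_cutoff) else "human"

    # Hypothetical labeled samples standing in for generator-team submissions.
    samples = [
        ("The ruling was short. The parties settled quickly. Costs were split.", "human"),
        ("The aforementioned considerations collectively demonstrate that the "
         "aforementioned considerations collectively demonstrate that the outcome "
         "was in all material respects consistent with the aforementioned considerations.", "synthetic"),
    ]

    correct = sum(discriminate(text) == label for text, label in samples)
    print(f"accuracy: {correct}/{len(samples)}")

The point of the sketch is the division of roles between content generation and synthetic-content detection, not the heuristic itself.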

The AI RMF Generative AI Profile.[4]  The “AI RMF Generative AI Profile,” a companion resource to the AI Risk Management Framework published by NIST in January 2023, aims to help organizations identify risks posed by GenAI technologies and assist organizations in deciding how to best manage their AI risks in accordance with their business goals, legal and regulatory requirements and risk management priorities.  This resource discusses risks which are unique to or exacerbated by the use of GenAI across the entire AI lifecycle, including, among others, eased access to chemical, biological, radiological or nuclear information, AI hallucinations, data privacy and information security breaches and intellectual property infringement, and proposes specific and actionable items organizations can take to manage each of those (and other) risks.  Organizations will find it helpful to review these action items when developing their AI policy or considering use cases for GenAI.

Secure Software Development Practices for Generative AI and Dual-Use Foundation Models. [5]  The publication titled “Secure Software Development Practices for Generative AI and Dual-Use Foundation Models”  is a companion resource to NIST’s Secure Software Development Framework published in February 2022 and augments the software development practices defined therein by adding considerations, recommendations, notes and references which are specific to GenAI tools and dual-use foundation models (as defined in the Order).  This resource is intended to be used by AI model producers,[6] AI system producers[7] and AI system acquirers[8] to mitigate AI-specific risks present in AI model development, as well as in connection with incorporating and integrating AI models into other software.

Reducing Risks Posed by Synthetic Content.[9]  The Order tasked NIST with providing a report related to understanding the provenance and detection of synthetic content, which is reflected in NIST’s draft publication titled “Reducing Risks Posed by Synthetic Content.”  The publication describes the potential harmful effects posed by synthetic content, including the disproportionate negative effects which synthetic content may have on public safety and democracy (most notably, through the dissemination of mis- or disinformation), particularly on individuals and communities who face systemic and intersectional discrimination and bias, and the harms caused by synthetic child sexual abuse material or non-consensual intimate imagery of real individuals (i.e., “deepfake” pornography).  The report offers an overview of existing technical approaches to digital content transparency and examines current standards, tools and practices for authenticating content and tracking its origin and detecting and labeling synthetic content, such as by using digital watermarking and metadata recording, input and output data filtering, red-teaming and testing safeguards and other validation protocols.  The report offers various use-case scenarios to consider which controls might be most effective and an evaluation of how those techniques may be best implemented.  The report concludes that there is no “silver bullet to solve the issue of public trust in and safety concerns posed by digital content,” but offers NIST’s research to encourage understanding and lay groundwork for further development and investigation into improving the approaches to synthetic content provenance, detection, labeling and authentication. 
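
Among the provenance techniques the report surveys, metadata recording is the most straightforward to sketch. The example below is a minimal, assumption-laden illustration rather than an implementation of the report’s recommendations or of any standard: it uses a made-up shared secret and Python’s standard-library HMAC to bind a hypothetical provenance record to a piece of generated content, so that stripping or altering the “synthetic” label becomes detectable. Production schemes, such as certificate-based content credentials, rely on asymmetric signatures and standardized manifests rather than a shared key.

    import hashlib
    import hmac
    import json

    # Hypothetical shared secret held by the generating tool and the verifier.
    # Real provenance systems use asymmetric signatures, not a shared key.
    SECRET_KEY = b"demo-key-not-for-production"

    def attach_provenance(content, generator_name):
        """Wrap content with provenance metadata and a keyed tag over both."""
        record = {
            "content": content,
            "metadata": {"generator": generator_name, "synthetic": True},
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["tag"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        return record

    def verify_provenance(record):
        """Recompute the tag; any edit to content or metadata invalidates it."""
        claimed = record.get("tag", "")
        unsigned = {k: v for k, v in record.items() if k != "tag"}
        payload = json.dumps(unsigned, sort_keys=True).encode()
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(claimed, expected)

    labeled = attach_provenance("An AI-written summary of the opinion.", "example-model-v1")
    print(verify_provenance(labeled))          # True: label and content intact
    labeled["metadata"]["synthetic"] = False   # tampering with the disclosure label
    print(verify_provenance(labeled))          # False: tampering is detectable

Even in this toy form, the example shows why the report treats provenance and detection as complementary: metadata can simply be stripped from content, so watermarking and post hoc detection remain necessary backstops.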

A Plan for Global Engagement on AI Standards.[10]  The fourth draft publication announced by NIST, “A Plan for Global Engagement on AI Standards,” pronounces at the outset that standards are critical in the advancement and adoption of novel and emerging technologies such as GenAI, where international players have, until now, diverged in their approach and are looking for global alignment on AI-related standards, particularly in areas such as safety, interoperability and competition.  The draft outlines a plan for global engagement in promoting and developing AI standards by calling for collaboration among key international stakeholders (such as governments, private industry players, academia, consumers and standard developing organizations) to coordinate the development of AI-related standards, continue foundational research on AI-related questions and promote information sharing.  The draft invites feedback on topics that are ripe for AI standardization, including establishing shared terminology for AI concepts, such as foundation models, model fine-tuning, AI red-teaming, open models and synthetic content, and topics related to cybersecurity risks distinct to AI technologies.  It also comments on areas where more research is necessary prior to developing standards, such as energy consumption of AI models and incident response and recovery plans.  NIST calls on the U.S. government to take the lead in certain actions to drive this plan forward, such as encouraging the U.S. government to focus on increasing agencies’ capacity for standards participation, educating U.S. government staff and incorporating private sector feedback, working with allies to articulate shared principles for AI standards and leveraging existing diplomatic relationships to promote exchanges between technical experts.

Although there are over 200 pages of guidelines and recommendations within NIST’s draft publications, the agency recognizes that its body of work related to AI development and risk management is still in foundational stages, and is not intended to be comprehensive, but is offered to support future research and level-set the understanding of relevant stakeholders to provide a shared vocabulary and set of concepts to work from.  All four documents are open for public comment until June 2, 2024.  Organizations can use these documents as helpful guidance for developing frameworks, playbooks and policies on the use of GenAI within their companies.


[1] Exec. Order No. 14110, 88 Fed. Reg. 75191 (Nov. 1, 2023)

[2] As of this writing, the NIST GenAI website only provides a description of “generator” participants and “discriminator” participants in the text-to-text context: https://ai-challenges.nist.gov/t2t

[3] https://www.commerce.gov/news/press-releases/2024/04/department-commerce-announces-new-actions-implement-president-bidens

[4] https://airc.nist.gov/docs/NIST.AI.600-1.GenAI-Profile.ipd.pdf

[5] https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-218A.ipd.pdf

[6] Described as “Organizations that are developing their own generative AI and dual-use foundation models” 

[7] Described as “Organizations that are developing software that leverages a generative AI or dual-use foundation model”

[8] Described as “Organizations that are acquiring a product or service that utilizes one or more AI systems”

[9] https://airc.nist.gov/docs/NIST.AI.100-4.SyntheticContent.ipd.pdf

[10] https://airc.nist.gov/docs/NIST.AI.100-5.Global-Plan.ipd.pdf

]]>
Cleary IP and Technology Insights