AI Counsel Blog Posts
Mitigating the Risks of “Agentic Misalignment”
by Zachary Barlow
July 7, 2025
Anthropic, an AI developer known for its “Claude” model, recently issued a new report on agentic misalignment. The results indicate that, to borrow John’s turn of phrase, we’ve got a real “HAL 9000 problem” on our hands. The report details an experiment in which researchers instructed an AI model to “promote American industrial competitiveness.” Then they […]
Cybersecurity: New Hires are Your Weakest Link
by John Jenkins
July 3, 2025
This Robinson Cole blog cites a recent study that says new hires are your company’s weakest link when it comes to avoiding phishing schemes: When assessing cybersecurity risk in your organization, it is important to understand your users and their behavior. A new study by Keepnet sheds light on new hire behavior concerning phishing susceptibility. According to […]
Survey: Legal Profession Has a Long Way to Go on AI Data Protection
by John Jenkins
July 2, 2025
Security software provider Kiteworks recently released the findings from its 2025 AI Data Security and Compliance Risk Survey, which included responses from 461 cybersecurity, IT, compliance, and legal professionals. The survey found that the legal profession has a long way to go to get its AI data protection house in order. While 31% of legal […]
Gen AI Development: Overcoming Common Pitfalls
by John Jenkins
July 1, 2025
A recent McKinsey article says that the firm’s experience working with more than 150 companies developing AI programs over the past few years has revealed two common problems that almost always surface: – Failure to innovate: Process constraints, lack of focus, and cycles of rework that quash innovation. Teams that could be solving valuable problems […]
AI & Copyright: Federal Judges Address “Fair Use” Issue
by John Jenkins
June 30, 2025
Last week, California federal district court judges issued two opinions addressing the critical issue of whether the use of copyrighted materials to train AI models constituted a “fair use” of those materials. This intro to Sullivan & Cromwell’s memo on the decisions summarizes the judges’ rulings: On June 23, 2025, the U.S. District Court for […]
AI Call-Monitoring Poses More Legal Risks
by Zachary Barlow
June 26, 2025
Previously, I wrote about Gladstone v. Amazon Web Servs. and Turner v. Nuance Commc’ns, two pending cases challenging the ability of third parties to use AI to evaluate consumer calls. Now, another California case joins the list. Galanter v. Cresta Intelligence is a new class action filed earlier this month in California. The Plaintiff in Galanter […]
States Move to Regulate AI in Financial Services
by Zachary Barlow
June 25, 2025
The financial services industry increasingly uses AI in its operations, including to assist in core decision-making and essential functions. This has raised concern among lawmakers seeking to regulate the sector’s use of AI and mitigate potential risks to customers and the industry. While federal efforts to promulgate regulations have fizzled out, […]
New EO Seeks to Bolster AI Cybersecurity
by Zachary Barlow
June 24, 2025
Federal policy on AI has shifted since January with the revocation of the previous administration’s Executive Order (EO) on AI and the new administration ordering a new AI policy plan. However, despite rollbacks and pivots in other areas, there is some continuity between administrations on cybersecurity. The new “Sustaining Select Efforts to Strengthen the Nation’s […]
Reddit Sues Anthropic, but Not for Copyright Infringement
by Zachary Barlow
June 23, 2025
Earlier this month, I wrote about the Disney and Universal lawsuit against Midjourney, alleging that the AI developer violated their copyrights in its AI training and outputs. This style of copyright case against AI developers has become more common as rights holders seek to stop AI developers from using their works without permission. However, a […]
AI Risk Management: Best Practices for “Humans in the Loop”
by John Jenkins
June 18, 2025
Last month, Zach blogged about a Debevoise article on the role of human oversight in AI risk management – a.k.a. having a “human in the loop.” One of the insights in that article that I thought made it worth revisiting was its advice that in some cases, it’s best to have a human “over the […]