Large language mistake, legal edition.
A cool update to last week’s story on why language doesn’t equal intelligence: a Michigan judge cited the piece to justify imposing sanctions over a ChatGPT-assisted filing that referenced real cases but misstated their facts. Congrats to author Benjamin Riley, and thanks to those who pointed out the citation on X and Bluesky!
![LLMs are tools that “emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning.” Benjamin Riley, Large language mistake, The Verge https://thevergetoday.pages.dev/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems [https://perma.cc/7EHD-PLLZ]. When an LLM overstates a holding of a case, it is not because it made a mistake when logically working through how that case might represent a “nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law;” it is just piecing together a plausible-looking sentence – one whose content may or may not be true](https://platform.theverge.com/wp-content/uploads/sites/2/2025/12/Screenshot-2025-12-04-at-8.35.16%E2%80%AFAM.png?quality=90&strip=all&crop=0%2C0%2C100%2C100&w=2400)