From ChatGPT to Gemini: how AI is rewriting the internet


The trend of hallucinations showing up in public AI demos continues. As noted by a couple of reporters already, OpenAI’s demo of its new SearchGPT engine shows results that are mostly either wrong or not helpful.
In a prerecorded demonstration video accompanying the announcement, a mock user types "music festivals in boone north carolina in august" into the SearchGPT interface. The tool then pulls up a list of festivals that it states are taking place in Boone this August, the first being An Appalachian Summer Festival, which according to the tool is hosting a series of arts events from July 29 to August 16 of this year. Someone in Boone hoping to buy tickets to one of those concerts, however, would run into trouble. In fact, the festival started on June 29 and will have its final concert on July 27. July 29–August 16 are actually the dates when the festival's box office will be officially closed. (I confirmed these dates with the festival's box office.)
Alongside the FTC and the DOJ, the UK and EU’s antitrust authorities have issued a joint statement saying they will work to ensure fair competition in the AI industry.
One potential issue highlighted by the enforcers is the possibility that AI chipmakers could “exploit existing or emerging bottlenecks,” giving them “outsized influence over the future development” of AI tools.
[Federal Trade Commission]
Demos on this Meta blog show how the company will implement its promise to bring AI to its VR headsets. As with the company's Ray-Ban smart glasses, you can ask it questions about things you see (in passthrough), and it will answer.
The experimental feature rolls out in English next month, in the US and Canada (excluding the Quest 2).

If you can’t tell the difference between AGI and RAG, don’t worry! We’re here for you.
You can grab the app from Google Play right now. It’s free and “accessible with all plans, including Pro and Team,” the company says in a blog post.
Anthropic released an iOS app in May.
[Anthropic]
I wasn’t expecting to read a dystopian fic about not-so-distant future office culture in our comments, but what other response could you have to a story about an HR company that wanted to treat AI bots like humans?


OpenAI announced that it is teaming up with Los Alamos National Laboratory to explore how advanced AI models, such as GPT-4o, can safely aid in bioscientific research. I’m a bit disappointed because this was the plot of the science fiction horror book I always wanted to write.
The goal is to test how GPT-4o can help scientists perform tasks in a lab using vision and voice modalities.
A recent study found that if a coding problem put before ChatGPT (using GPT-3.5) existed on coding practice site LeetCode before its 2021 training data cutoff, it did a very good job generating functional solutions, writes IEEE Spectrum.
But when the problem was added after 2021, ChatGPT sometimes didn't even understand the question, and its success rate fell off a cliff, underscoring how much these models depend on having seen similar problems in their training data.
[IEEE Spectrum]
Tech giants are rewriting the rules on web scraping, blaming unnamed third parties for disregarding robots.txt, and seemingly claiming the right to reuse anything posted anywhere for AI.
Now, Cloudflare is telling customers on its CDN that it can find and block AI bots that try to get around the rules.
The upshot of this globally aggregated data is that we can immediately detect new scraping tools and their behavior without needing to manually fingerprint the bot, ensuring that customers stay protected from the newest waves of bot activity.
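For context, robots.txt is the plain-text convention those scrapers are accused of ignoring: a file at a site's root listing which crawlers may fetch which paths. A site hoping to opt out of AI crawling might publish something like the sketch below. The GPTBot and CCBot user-agent tokens are real ones published by OpenAI and Common Crawl, but the file is purely advisory, and whether a given crawler honors it is exactly the point of contention Cloudflare's blocking feature is meant to address.

```txt
# robots.txt — served at https://example.com/robots.txt
# Ask OpenAI's crawler to stay out entirely
User-agent: GPTBot
Disallow: /

# Ask Common Crawl's crawler to stay out entirely
User-agent: CCBot
Disallow: /

# All other crawlers may index the whole site
User-agent: *
Allow: /
```

Because compliance is voluntary, tools like Cloudflare's work at the network layer instead, fingerprinting bot traffic regardless of what user-agent string it claims.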
