After Microsoft invested billions of dollars in OpenAI, The Information reports that the company has finalized a tender offer that could total $300 million.
The move lets some staff members unload their shares, giving them more of an incentive to stay with the company in a competitive AI landscape where high-level OpenAI employees are getting poached by other tech giants or starting their own companies.
[The Information]
Vice has a great report based on conversations with various Reddit mods who are fending off the first forays from AI spammers.
So far, say the mods, it’s fairly easy to spot the fakes, but no one can really predict how bad the situation will get.
As one mod from r/Cybersecurity puts it: “Our problem isn’t necessarily ‘what we’ve found so far’ but ‘what we’ve missed.’”
Just as the EU was finishing up its landmark AI Act, chatbots arrived on the scene, adding new complications to an already tangled regulatory environment.
Now, per The Wall Street Journal, leading EU lawmakers are pushing for new regulations to tackle these new systems. It’s hard to keep pace with tech, but if politicians need help getting a draft together in time, I can recommend a few tools...


It’s called Tongyi Qianwen, which The Financial Times translates as “truth from a thousand questions.” Alibaba plans to integrate it into its productivity software, similar to Microsoft’s plans for its Copilot assistant. And ... that’s about all we can say right now.
Access to Tongyi is limited and it’s not clear how Chinese chatbots will compete with their Western rivals (or vice versa). But talking to computers continues to be the biggest thing in tech — for now.
For all the fears about world-ending AI nightmare scenarios, the clearest problem with AI search so far is that it makes stuff up. That includes potentially libelous claims, like baselessly accusing a professor of sexual misconduct or an Australian politician of bribery, two incidents recounted in news stories today. The latter might lead to ChatGPT’s first defamation suit, a possibility whose complications (under US law, at least) we’ve discussed before. Whatever happens with these incidents, it seems nearly inevitable that somebody will sue over AI “hallucinations” soon.
[Washington Post]
In this week’s paid edition of my Command Line newsletter, I write about visiting San Francisco’s Cerebral Valley neighborhood for a gathering of AI leaders and investors.
There was discussion about the open letter signed by Elon Musk and others asking for a temporary halt on new AI model development, the benefits of open versus closed-source AI, and yes, whether AI may eventually kill us all.
You can subscribe at the link below to get this newsletter edition and future ones delivered directly to your inbox, and the first month is free.
[The Verge]


Earlier this month, Google announced a slate of generative AI features for its Workspace suite, and now some members of the public are getting access to a few of them. It’s still unclear when they’ll be generally available; 9to5Google reports the company will let more people use them “over time,” though there’s currently no waitlist.


