Grammarly is best known for checking your grammar and spelling, but it will soon generate new text suggestions based on what you’re writing. The feature is called GrammarlyGo, and it’s out in April.
Alongside similar features from Microsoft, Google, Notion, and others, soon you won’t be able to write a single word without a half-dozen AIs trying to edit you. Best to leave them to it.
The company is continuing to loosen the restrictions on how long Bing can hold a conversation, which it put in place to keep the chatbot from going off the rails. Now you can go 10 chat turns per session and up to 120 chats per day, where before you were limited to six turns per session and 100 chats per day.


On Thursday, the developer of BlueMail — who has squared off with Apple over its App Store policies before — said the company held up an update adding ChatGPT-powered email generation and other features.
An App Store reviewer said the app needed to add content moderation or else restrict access to users ages 17 and up; later, though, the same update was approved unchanged. The App Store doesn’t have policies about AI specifically, so we can only wait and see whether it tightens restrictions further.
If you’re trying to keep pace with what’s happening in generative AI, look no further than Microsoft’s Bing blog, which has posted details of what’s new in the chatbot’s preview release and promises regular updates summarizing what the team is learning.
This time around, that includes chat tones to tune the bot’s personality, turn counters, and other changes. But my main question is: does Microsoft use Bing AI to write up the summaries?
[blogs.bing.com]
The FTC is casually warning companies not to make baseless AI claims in their advertisements, despite all of the current hype around AI bots and generative technology of all types.
Does the product actually use AI at all?
If you think you can get away with baseless claims that your product is AI-enabled, think again. In an investigation, FTC technologists and others can look under the hood and analyze other materials to see if what’s inside matches up with your claims. Before labeling your product as AI-powered, note also that merely using an AI tool in the development process is not the same as a product having AI in it.
[Federal Trade Commission]


AGI, or artificial intelligence that’s as smart as a human, isn’t here yet (see, uh, everything Bing has done lately). But if it ever does arrive, Sam Altman is (unsurprisingly) quite excited for it, and in his latest OpenAI blog post, he talks about how the company is thinking about it.
[OpenAI]



The history of Microsoft’s Bing AI chatbot dates back at least six years, with the Sydney chatbot first appearing in 2021.
The game is to guess the secret word; the hitch is that the AI ranks how similar your guesses are to it. Yesterday’s word was “grasshopper,” and the AI judged “ant” closer than “cricket.” Maybe that’s true if you’re analyzing texts to predict the next word (after all, there’s a fable about an ant and a grasshopper), but “cricket” and “grasshopper” are practically synonyms!
[contexto.me]
Remember that cool video game where we could pretend a chatbot was an AI with multiple personalities and watch it play along?
While Microsoft has only publicly admitted to tweaking how many turns you get and the AI’s “tone,” the reality is that Bing just doesn’t answer fun questions like those anymore; it’ll end the conversation as soon as you ask one. Lame!
The Vanderbilt Hustler reports the school’s Peabody Office of Equity, Diversity and Inclusion is apologizing after sending a message regarding the shooting at Michigan State University that was “paraphrased” from OpenAI’s ChatGPT model (via Gizmodo).
The generic-sounding email lacks any kind of personal touch, and responses to it reflect that, as noted by this quote from Vanderbilt student Laith Kayat:
Deans, provosts, and the chancellor: Do more. Do anything. And lead us into a better future with genuine, human empathy, not a robot.