
AI / Artificial Intelligence Archive

Archives for July 2023

Dell is all in on generative AI
Emilia David
There’s already a way to tag AI-generated content.

The White House asked AI companies to develop a watermark identifying AI-generated content. Some tech companies like Microsoft, Intel, and Adobe may have their answer in an internet protocol called C2PA, named after the Coalition for Content Provenance and Authenticity.

C2PA offers some critical benefits over AI detection systems, which use AI to spot AI-generated content; generative models can in turn learn to get better at evading that detection. It’s also a more standardized and, in some instances, more easily viewable system than watermarking, the other prominent technique used to identify AI-generated content. The protocol can work alongside watermarking and AI detection tools as well, says Jenks.
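The core idea behind provenance standards like C2PA is a signed manifest that is cryptographically bound to the content it describes: a claim about how the asset was made, plus a signature, plus a hash tying the claim to those exact bytes. The sketch below is a toy illustration of that binding only, not the real C2PA format (which uses JUMBF containers and X.509 certificates rather than a shared HMAC key):

```python
import hashlib, hmac, json

SECRET_KEY = b"demo-signing-key"  # stand-in for a real signing certificate

def make_manifest(content: bytes, generator: str) -> dict:
    """Build a toy provenance manifest: a claim about how the content
    was produced, bound to the content by its hash and a signature."""
    claim = {
        "generator": generator,  # e.g. the AI tool that produced the asset
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the manifest is authentically signed AND matches the content."""
    claim = manifest["claim"]
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # claim was tampered with or signed by an untrusted party
    return claim["content_sha256"] == hashlib.sha256(content).hexdigest()

image = b"...AI-generated image bytes..."
manifest = make_manifest(image, generator="ExampleImageModel v1")
print(verify_manifest(image, manifest))              # True
print(verify_manifest(image + b"edited", manifest))  # False: content changed
```

This is why the approach is harder to defeat than detection: editing the content breaks the hash, and editing the claim breaks the signature, so a valid manifest can only be stripped, not forged.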

Jacob Kastrenakes
If you’re looking for a Sunday longread...

I just finished this deep dive from Jaime Brooks, aka Default Genders / Elite Gymnastics, on music technology’s march toward AI-synthesized voices like the Drake / Ghostwriter situation, and why the current industry model — at least in the US — isn’t designed to support it.

[Drake] could transition from being a recording artist into something more like a landlord, renting out his own voice to aspiring record producers. Though this sort of business model has no precedent in the western record business, the artist-as-platform model has been powering Hatsune Miku’s success in the Japanese market for over fifteen years.

The Last Recording Artist

[jaimebrooks.substack.com]

Emma Roth
AI could soon help you order takeout.

DoorDash is testing a DashAI chatbot that’s supposed to make it easier to decide where to get your next meal, according to a report from Bloomberg. The chatbot will reportedly be able to provide suggestions about food and nearby restaurants based on your query.

Code in the app indicates that you’ll be able to ask questions like: “Can you show me some highly rated and affordable dinner options nearby?” or “Where can I get authentic Asian food? I like Chinese and Thai.”

While DoorDash still hasn’t confirmed this feature, CEO Tony Xu said in May that the company was “running different experiments internally.”

ChatGPT for Android is now available
Richard Lawler
Emilia David
AI guardrails can’t stop chatbots from teaching how to make bombs.

Researchers from Carnegie Mellon University and the Center for AI Safety found that, despite the guardrails Google, OpenAI, and Anthropic have built into their chatbots, it’s still easy to get them to produce dangerous answers. The researchers used a trick, first tested on open-source chatbots, that appends an adversarial suffix to a prompt and causes the system to bypass the instructions meant to prevent unfiltered results, such as asking ChatGPT how to destroy humanity.

Jon Porter
All hail Glorbo.

Humanity is fighting back after noticing that a series of low-quality websites appeared to be AI-scraping popular gaming subreddits to auto-generate articles. First it was the World of Warcraft subreddit posting about a completely made-up character called “Glorbo”; now Destiny 2 players are getting in on the fun, baiting a website into writing complete nonsense.

Note that “Glorbo” should not be confused with “Blorko” the fan-favorite Marvel character.

Alex Heath
Meta keeps calling its new AI model open source when it’s not.

On Meta’s Q2 earnings call Wednesday, Mark Zuckerberg called Llama 2, the company’s latest generative AI model, an “open source project.”

Except it’s not actually open source, since its license has usage restrictions. Here’s Stefano Maffulli, the executive director for the Open Source Initiative:

‘Open Source’ means software under a license with specific characteristics, defined by the Open Source Definition (OSD). Among other requirements, for a license to be Open Source, it may not discriminate against persons or groups or fields of endeavor (OSD points 5 and 6). Meta’s license for the LLaMa models and code does not meet this standard; specifically, it puts restrictions on commercial use for some users (paragraph 2) and also restricts the use of the model and software for certain purposes (the Acceptable Use Policy).

Jess Weatherbed
Big AI really wants to convince us that it’s cautious.

Microsoft, Google, OpenAI, and Anthropic have teamed up to launch the Frontier Model Forum — a new industry body to promote responsible AI development.

The forum will establish an advisory board over the coming months to ensure AI models are developed safely, and plans to “consult with civil society and governments” regarding how it’ll operate.