
Emilia David

Former Reporter

    More From Emilia David

    Emilia David
    There’s not a lot you can do when AI lies about you.

    Companies like OpenAI, Meta, and Microsoft say they’ve added ways to limit false information on their AI models. It’s not enough for users who have seen their reputation harmed by fake, AI-generated accusations of crimes like terrorism — and have found little legal protection. Now it’s a race to see who can protect people faster: technology or the government.

    Dell is all in on generative AI
    Emilia David
    There’s already a way to tag AI-generated content.

    The White House asked AI companies to develop a watermark identifying AI-generated content. Some tech companies like Microsoft, Intel, and Adobe may have their answer in an internet protocol called C2PA, named after the Coalition for Content Provenance and Authenticity.

    C2PA offers some critical benefits over AI detection systems, which use AI to spot AI-generated content and which generative models can, in turn, learn to evade. It’s also a more standardized and, in some instances, more easily viewable system than watermarking, the other prominent technique used to identify AI-generated content. The protocol can work alongside watermarking and AI detection tools as well, says Jenks.
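    The core idea behind provenance protocols like C2PA is to attach a verifiable record of an asset’s origin to the asset itself, rather than trying to guess after the fact whether it was AI-generated. The sketch below illustrates that idea in miniature; it is a hypothetical, simplified record, not the real C2PA manifest format, which is a signed binary structure defined by the full specification.

    ```python
    import hashlib

    def make_manifest(content: bytes, generator: str) -> dict:
        """Build a simplified provenance record for an asset.

        Illustrative only: real C2PA manifests are cryptographically
        signed and follow a binary format defined by the spec.
        """
        return {
            "claim_generator": generator,  # tool that produced the asset
            "content_sha256": hashlib.sha256(content).hexdigest(),
        }

    def verify_manifest(content: bytes, manifest: dict) -> bool:
        """Check that the asset still matches its recorded hash."""
        return manifest["content_sha256"] == hashlib.sha256(content).hexdigest()

    image = b"...generated pixels..."          # stand-in for a media file
    manifest = make_manifest(image, "ExampleAI/1.0")
    print(verify_manifest(image, manifest))        # True: asset untouched
    print(verify_manifest(image + b"x", manifest))  # False: asset was edited
    ```

    Because the record travels with the content, any edit breaks the hash check, which is why this approach does not suffer the cat-and-mouse dynamic of detection systems.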

    Emilia David
    AI guardrails can’t stop chatbots from teaching people how to make bombs.

    Researchers from Carnegie Mellon University and the Center for AI Safety found that, despite the guardrails Google, OpenAI, and Anthropic built into their chatbots, it’s still easy to get the systems to produce dangerous answers. The researchers used a trick first tested on open-source chatbots that causes a system to bypass its safety instructions and return unfiltered results, such as asking ChatGPT for a plan to destroy humanity.