<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Adobe | The Verge</title>
	<subtitle type="text">The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts.</subtitle>

	<updated>2026-04-20T14:27:50+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.theverge.com/adobe" />
	<id>https://www.theverge.com/rss/adobe/index.xml</id>
	<link rel="self" type="application/atom+xml" href="https://www.theverge.com/rss/adobe/index.xml" />

	<icon>https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&amp;h=150&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Nilay Patel</name>
			</author>
			
			<title type="html"><![CDATA[Canva’s CEO on its big pivot to AI enterprise software]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/podcast/913793/melanie-perkins-canva-ai-adobe-affinity-design" />
			<id>https://www.theverge.com/?p=913793</id>
			<updated>2026-04-20T10:27:50-04:00</updated>
			<published>2026-04-20T11:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="Adobe" /><category scheme="https://www.theverge.com" term="Creators" /><category scheme="https://www.theverge.com" term="Decoder" /><category scheme="https://www.theverge.com" term="Design" /><category scheme="https://www.theverge.com" term="Podcasts" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Today, I’m talking with Melanie Perkins, founder and CEO of Canva, a popular online design tool. I always enjoy talking with Melanie. She was last on the show a couple of years ago, just as the AI revolution was coming to the worlds of art and design. At the time, Canva had escaped a lot [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A stylized illustration of Canva CEO Melanie Perkins" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/DCD_Perkins_Canva.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
</figure>
<p class="has-drop-cap has-text-align-none">Today, I’m talking with Melanie Perkins, founder and CEO of Canva, a popular online design tool.</p>

<p class="has-text-align-none">I always enjoy talking with Melanie. She was <a href="https://www.theverge.com/24191080/canva-ceo-melanie-perkins-design-ai-adobe-competition-decoder-podcast-interview">last on the show</a> a couple of years ago, just as the AI revolution was coming to the worlds of art and design. At the time, Canva had escaped a lot of the criticism being leveled at its competitors for adding AI tools. Melanie attributed that both to how much Canva users love the product and also the huge inroads it was making into the business world. Canva is a tool that empowers non-designers to design, and that group of people was just trying to get work done. They didn’t seem nearly as threatened by AI as professionals using other creative software — they may have even felt empowered.</p>

<p class="has-text-align-none">It’s been two years, and it’s safe to say that AI is <a href="https://www.theverge.com/tech/912287/adobe-firefly-ai-assistant-announcement-editing">all over design software</a> now — and a lot more people have a lot more feelings about AI in general. But Melanie and Canva are pushing even more aggressively into integrating AI. The company <a href="https://www.theverge.com/tech/913068/canva-ai-2-update-prompt-based-editing-availability">just announced a big new update</a> that allows people to simply tell Canva what to make and have it go through various data sources like Slack and email to build presentations, documents, and other design materials. Those projects arrive as regular old Canva files, which you can edit at will. You’ll hear Melanie come back to that idea several times — having the output of the AI system be in a format you can edit, so that you can refine it, is a big deal.</p>


<p class="has-text-align-none">The idea here, as Canva says, is to move “from a design platform with AI tools to an AI platform with design tools.”&nbsp;</p>

<p class="has-text-align-none">I’ll let you all sit with that for a moment.</p>

<p class="has-text-align-none">Obviously I dug into that with Melanie, as well as how she’s thinking about Canva’s relationship to the AI model providers, the cost of the tokens required to automate an app like Canva in this way, and the kinds of pricing that might lead to for users. These new AI tools are still in beta, so there’s a lot to be worked out, but you’ll hear Melanie say she’s confident that Canva’s growth in enterprise will continue to accelerate as more and more companies look for tools that automate tasks like making presentations.</p>

<p class="has-text-align-none">But that’s the same idea as a lot of other big AI players aiming for corporate dollars, and so Melanie and I talked a lot about whether Canva is the right platform to bring everything all together. Unsurprisingly, she thinks it is — not least because she runs Canva using Canva.</p>

<p class="has-text-align-none">Of course, I also asked Melanie for an updated vibe check on AI and design. <a href="https://www.theverge.com/ai-artificial-intelligence/891724/nbc-news-march-2026-poll-ai-ice">Poll</a> after <a href="https://www.pewresearch.org/short-reads/2025/11/06/republicans-democrats-now-equally-concerned-about-ai-in-daily-life-but-views-on-regulation-differ/">poll</a> shows that people really do not like AI right now, and the fears around job displacement and being overrun by slop all come to a head in a piece of creative software that doesn’t require creatives anymore. Melanie had some thoughts here as well — and I did my best to get her to talk about Adobe, which is also adding AI tools and raising prices, a deadly combination for the biggest player in the space. You tell me if I got her to bite.</p>

<p class="has-text-align-none">There’s a lot in this one — like I said, I always enjoy talking to Melanie.</p>

<p class="has-text-align-none">Okay: Canva CEO Melanie Perkins. Here we go.</p>

<iframe frameborder="0" height="200" src="https://playlist.megaphone.fm?e=VMP8000339561" width="100%"></iframe>

<p class="has-text-align-none"><em>This interview has been lightly edited for length and clarity.</em></p>

<p class="has-text-align-none"><strong>Melanie Perkins, you are the founder and CEO of Canva! Welcome back to <em>Decoder</em>.</strong></p>

<p class="has-text-align-none">Thank you so much for having me. It&#8217;s great to be here.</p>

<p class="has-text-align-none"><strong>I am very excited to talk to you again. It&#8217;s been a couple of years; you were last on the show in 2024. We talked about AI and design and the feelings people have about AI and design. I was looking at that interview again to prepare for this one, and a lot of the themes are the same, but the facts surrounding those themes have changed so dramatically in the past two years. And on top of it, you have big news that I really want to dig into.</strong></p>

<p class="has-text-align-none"><strong>So let&#8217;s just start at the start. The last time you were on the show, I said, &#8220;What is Canva?&#8221; And you said, &#8220;Canva is an online design platform.&#8221; And your news this week is, I believe, that the company is changing its own conception of itself. Tell us about that change and what led into it.</strong></p>

<p class="has-text-align-none">There are some things that are changing, and there are many things that remain the same. Our mission is still to empower the world to design, and we&#8217;re going to be doing that for many years to come. But something we&#8217;ve always believed is that we should build the latest and greatest technology, put it into our community&#8217;s hands, and enable them to achieve their goals. What counts as the latest and greatest technology has certainly changed over the last few years, and obviously AI is at the center of that change. So we&#8217;re really excited to keep putting the best of technology into our customers&#8217; hands, as we&#8217;ve done for the last decade, and to be doubling down on AI.</p>

<p class="has-text-align-none"><strong>All right. But I&#8217;m looking at a press release that says, &#8220;We&#8217;re moving from a design platform with AI tools to an AI platform with design tools.&#8221; That seems like more than bringing the latest and greatest technology. It seems like a rethinking of what Canva&#8217;s product is. Unpack that a little bit for me.</strong></p>

<p class="has-text-align-none">Let&#8217;s get into it. So when we launched Canva for the very first time, one of the huge innovations that we had was moving from pixels where everything was very granular and required deep expertise to be able to move anything around to be able to design anything to objects, where you could lay out a design. You could just have ideas for different objects. You could search our stock photography library, our illustration library, you could drag it onto the page, you start with a template or start from scratch, you could collaborate and design. And now what we&#8217;re really excited about is with AI, we&#8217;re moving into the concept layer. So you can just take an idea, you can write it in, and then something can get created for you. But very importantly, you can still move into the Canva&#8217;s object editor and lay things out, collaborate, edit away.</p>

<p class="has-text-align-none">And so we&#8217;re really excited about bringing it to this third tier of concept editing, which we think will be extraordinarily exciting. So it&#8217;s our biggest launch ever and becoming the system where work happens end to end. But still very importantly with design at its core, being able to take it &#8230; I was going deep the other day into the definition of design and to design is to mock an idea. And really to mock an idea is at the essence of design. So we&#8217;re really excited about bringing new tools and capabilities to be able to do exactly that.</p>

<p class="has-text-align-none"><strong>I have to ask you: I&#8217;m looking at the presentation about all of this. And it was obviously made in Canva. I know you told me last time that the whole company works in Canva. Did you automate the creation of your own deck announcing the AI tools or did you make it all by hand?</strong></p>

<p class="has-text-align-none">What&#8217;s really cool about this new product release, it can be one shot generation and that is awesome, but the really exciting thing is it&#8217;s actually also iterative. So it can lay out pages. So for example, you can take huge passages of text and then you can just lay that out with Canva AI. So you can actually be your companion, your creative partner as you&#8217;re going through the process. So we didn&#8217;t do it to just one shot generation for the entire deck, I have to say. But what we were able to do is use it for all the fine grain edits, the laying out of boxes and that sort of thing. So it really, it helped with the deck.</p>

<p class="has-text-align-none">But I think that&#8217;s the exciting thing is that I think one shot generation is like AI 1.0 and being able to do iterative, agentic orchestration is really 2.0. So we&#8217;re really excited about that. And then turn it into the press release doc. And it&#8217;s really great at helping to create that first draft for us. And then we can use that to iterate, to collaborate because I think we both certainly know and everyone knows that that one shot generation might be a helpful starting point, but that really is the draft to then be able to iterate and refine from there.</p>

<p class="has-text-align-none"><strong>I&#8217;m curious for this. I&#8217;m just looking at pictures of the interface. It looks like a chatbot. You can ask it all kinds of questions, as you showed off, make me a content plan, do a bunch of stuff for me out on these platforms. You can connect directly to the platforms and have it published for you.</strong></p>

<p class="has-text-align-none"><strong>That feels like, in particular, the cutting edge of marketing is automating the creation of assets, publishing to platforms, collecting the data, and iterating through that. But the interface is still a chatbot, and it feels like maybe that&#8217;s going to be the interface for everything forever. Did you experiment with other kinds of interfaces, or is the open-ended text box the end-all, be-all of AI?</strong></p>

<p class="has-text-align-none">I think that&#8217;s where I was going through those three tiers of pixels, objects, and concepts. I think that&#8217;s what&#8217;s really exciting to me is that in most chatbots out there, you&#8217;re in a chat and you go backwards and forwards asking for the same thing and it will regenerate the entire thing over and over again. It&#8217;s annoying. But with Canva AI, you&#8217;ve got the ability to have conversational editing, which is extraordinarily powerful and brings completely new capabilities. But then you have the normal Canva that you know and love, where you can just drag and drop, you can collaborate, you can do all your iterative editing, you can go and change a word here and update that and not having to prompt to do that. So it actually helps to make complex things simple by bringing it all together into one spot.</p>

<p class="has-text-align-none">You&#8217;ll see in the interfaces, Canva AI, it&#8217;s a brand new tab inside the editor. And so you can go there, you can dictate into your phone, you can do it on the fly, get that first pass, and then it&#8217;ll lay it out just in the normal Canva that you know and love, and then you can just edit that as you would typically do. So after a lot of experimentation, that was where we landed, that it&#8217;s so powerful to be able to dictate for everyone&#8217;s different accessibility needs, even accessibility needs on a day to day basis. Sometimes now I can just be talking to my phone, ask it to generate something and you can just do that on the fly, but then that creates a normal Canva design that you can collaborate, you can edit, you can use our hundred million plus stock photos and illustrations and drag and drop and design that. So really, the huge opportunity is this end to end workflow of being able to take an idea and turn that into a finished, usable work in one seamless platform. Yeah.</p>

<p class="has-text-align-none"><strong>Can I ask a question about the relationship of the AI to the tools in Canva? I&#8217;m going to basically just do personal tech support with you.</strong></p>

<p class="has-text-align-none">Yeah, go for it!</p>

<p class="has-text-align-none"><strong>So I used Canva this week. My daughter&#8217;s having a detective themed birthday party. And so we took photos of all her friends and we&#8217;re going to make wanted posters.</strong></p>

<p class="has-text-align-none">Awesome.</p>

<p class="has-text-align-none"><strong>I was like, &#8220;I&#8217;m going to talk to Melanie, and I better use Canva to do this.&#8221; It seemed very natural, so I could ask you very weedsy tech support questions. In the version of Canva that I was using, it was clear that the AI tools operated in some places and not others. They weren&#8217;t seeing the whole Canva tool palette. Very simply, it&#8217;s background remover, which I believe is one of your most popular tools. It&#8217;s everyone&#8217;s most popular tool. I could do it in some parts of Canva and not others. I couldn&#8217;t look at my finished layout and say, &#8220;Actually, can you just go ahead and remove the background from this photo?&#8221; I had to get to where I needed to be and then ask the question. Can the new Canva AI address the whole set of tools? Is it using Canva as a whole, or is it still narrowly sliced?</strong></p>

<p class="has-text-align-none">You hit the nail on the head with what we were doing with Canva AI 2.0. You were using Canva AI 1.0. I&#8217;m very excited to get Canva AI 2.0 into your hands. We&#8217;ll have to get you into the first million, because it&#8217;ll help you with exactly that. For your example, you can say, &#8220;Create me the wanted poster,&#8221; and you can upload the photos, and it can actually orchestrate all of the different tools in Canva to create that on the fly, without you having to go to the different spots. But you can still go and edit the particular parts, the element editing, if you want. It is able to orchestrate it and then create a layered file in Canva&#8217;s standard format.</p>

<p class="has-text-align-none"><strong>I think I understand how the user will see it. Architecturally, I&#8217;m very curious how you build the product that way, because it doesn&#8217;t seem like there&#8217;s some industry standard way of saying, &#8220;Now you can use this software.&#8221; About half of the attempts I see are just taking screenshots of everything and very slowly clicking around. And there&#8217;s an infinite number of variations on that approach. There&#8217;s </strong><a href="https://www.theverge.com/news/867673/claude-mcp-app-interactive-slack-figma-canva"><strong>the MCP approach</strong></a><strong>, which everyone was really high on and seems to have arrived at whatever point it&#8217;s going to arrive at, and now maybe half the industry is back at, well, we should just do APIs. What approach did you land on?</strong></p>

<p class="has-text-align-none">I think the reason we&#8217;ve been able to make so much progress in this space, firstly, was the decade of investment in this interoperable format. Having a design format that spans presentations, whiteboards, docs, videos, the full gamut, has been a really powerful part of why, when we launched our design foundational model, it is actually able to create across all of these different formats as that layered file. Which means you can operate at a full design level, at a page level, or at a photo or text level. The huge investment in that space is why we&#8217;ve been able to bring this to life with Canva AI 2.0. And there&#8217;s an extraordinary amount of complexity behind the scenes.</p>

<p class="has-text-align-none">We&#8217;ve had hundreds of people working on this project for some years to get to this point in time. But I think that the really important part is one of our engineers described it as an orchestra because there&#8217;s so many tools and systems under the hood that need to talk together to be able to bring that thing to life. So when you say, &#8220;I want to create a wanted poster for my daughter&#8217;s birthday party,&#8221; it will then be able to go and use background remover. It will be able to go and use all of the different tools to be able to assemble that. But from a user standpoint, they just get to say what they want and then we go and do the hard work to achieve that goal.</p>

<p class="has-text-align-none"><strong>I&#8217;m just curious what bet you made there, because it feels like the industry is not coalesced on a strategy. So is it actually clicking around Canva or is there some other way of the AI addressing the tools?</strong></p>

<p class="has-text-align-none">I won&#8217;t go into technical detail there, because I think that we have had a few breakthroughs that made this all possible.</p>

<p class="has-text-align-none"><strong>The other question I have is who the model providers you have doing this are. Because we are hearing every single day that token use rates for agentic software are going through the roof, or we&#8217;re watching Anthropic have to modify its pricing. There&#8217;s all kinds of stuff happening in that world, and you&#8217;re launching an agentic AI product that, just from the interface alone, makes you want to use it a lot.</strong></p>

<p class="has-text-align-none">I&#8217;m happy to hear that.</p>

<p class="has-text-align-none"><strong>And Anthropic literally has, in Claude, there&#8217;s a usage meter and it will tell you, &#8220;You&#8217;re done now or pay us more money.&#8221; Are you going to have a token usage meter in Canva in the same way?</strong></p>

<p class="has-text-align-none">You asked so many questions in that very short space of time.&nbsp;</p>

<p class="has-text-align-none"><strong>There&#8217;s more to come, don&#8217;t worry.</strong></p>

<p class="has-text-align-none">We have been investing in the areas where we really need to. Becoming domain experts in design has been a really critical part of our research strategy, because that&#8217;s where we really need to specialize; there isn&#8217;t great technology in that space. But partnering with incredible companies that are spending billions of dollars to build the best in their own areas, and then bringing that technology onto Canva, is also a key part of the way we&#8217;re approaching this. And we&#8217;ve got a 100-person research team working very specifically on these problems.</p>

<p class="has-text-align-none">On the AI credit front, we have different tiering available for each of the different packages. In free, you get limited credits; in pro, it gets much more generous; in our business package, far more generous still; and enterprise, even more so. But actually, for the first million users, we&#8217;re giving everyone an AI pass, which we&#8217;re really excited about. It&#8217;s a $100 monthly pass that we&#8217;re going to be giving everyone in that first million so they can just go completely wild and test out all of these new products. So we&#8217;re really excited to see how that is used and see where it takes them.</p>

<p class="has-text-align-none"><strong>I want to come back to pricing, because I have a lot of questions about it, but first I just want to understand the product a little more. The </strong><a href="https://www.theverge.com/24191080/canva-ceo-melanie-perkins-design-ai-adobe-competition-decoder-podcast-interview"><strong>last time</strong></a><strong> you were on the show, you were making inroads into enterprise. You had just relaunched for enterprise, and we talked a lot about how what you needed to do for enterprise was not necessarily product focused but workflow focused. You needed user authentication systems and management systems and dashboards and all that stuff, and you built it out. And that seems to be going really well. I think the numbers I have here are that you&#8217;re at $4 billion in annualized revenue, $500 million of which is enterprise. So in two years you&#8217;ve grown. Is that the part of the business that&#8217;s growing the most?</strong></p>

<p class="has-text-align-none">The whole company is growing very rapidly, but yes, enterprise has been growing extremely rapidly. We grew by 100 percent over the last year, we&#8217;re in 95 percent of Fortune 500 companies, and we&#8217;re getting really deep footprints, with thousands of people at companies now, which is extraordinary to see. We think that with Canva AI 2.0, we&#8217;ll radically change that again. It will be a huge step change, and Canva will become the system at the center of work and really bring things together.</p>

<p class="has-text-align-none">I think a lot of people can relate right now. It feels like there are a lot of fragmented systems, things that are in lots of different places. We think being able to have that all on one platform, all of the work and all of the designs and presentations and documents, all in one place and with connectors being able to go even further and pull in context and information from your Gmail or your Slack, is going to be a huge step change for the way work gets done.</p>

<p class="has-text-align-none"><strong>That&#8217;s the part I&#8217;m really interested in, the idea that a company is just a collection of disparate databases that are not well organized or managed and that there&#8217;s truth in those databases, if only we could read them all at the same time. That&#8217;s a big part of the AI thesis in general. You hear it all over the place. I work with a bunch of cranky reporters. I don&#8217;t think they put all their ideas in the databases, but I get it. There&#8217;s a sense that there&#8217;s a lot of opportunity in the disparate data sources in a company and you can bring them together to platform and then take action on it and achieve some results.</strong></p>

<p class="has-text-align-none"><strong>Is Canva the right tool to do that work? You&#8217;re </strong><a href="https://techcrunch.com/2026/04/17/anthropic-launches-claude-design-a-new-product-for-creating-quick-visuals/"><strong>right up against Claude</strong></a><strong>. Or you&#8217;re right up against, I don&#8217;t know, Oracle, whatever big enterprise business process automation vendor is going to say, &#8220;AI will connect all your databases,&#8221; and then there&#8217;s Canva. And I&#8217;m wondering if you want the whole opportunity or just the design opportunity.</strong></p>

<p class="has-text-align-none">Well, to me, design, as we talked about before, is bringing creativity and productivity together, and being able to do that in a way that we think is pretty extraordinarily powerful. I had my own experience of this the other day, which blew my mind. I had to answer a whole bunch of questions that went into all sorts of different topics from the last decade. I was able to just type each of the questions into Canva AI 2.0, and I was able to construct answers based on all of my designs and all of my documents from the last decade.</p>

<p class="has-text-align-none">It blew my mind. I was like, this is the only place that actually has this information about me. So there&#8217;s being able to have that full visual suite, from docs to sheets, whiteboards, and presentations, all of that context. And then I guess the other thing is that, when you think about it, most things end up in a design format of some description at the end of the process. So being able to have all of that context right there beside the AI tools, we think, is pretty powerful.</p>

<p class="has-text-align-none"><strong>I think that the thing I&#8217;m curious about is where the primary interface for that lives. And you&#8217;re obviously making the case that it should be Canva. For the CEO of Canva, it clearly is inside of Canva, but I could bring the CEO of Slack on here and they would happily tell you that that is Slack. Or I don&#8217;t know, Microsoft will tell you that they&#8217;re going to force-feed Copilot to you wherever you are, using a Microsoft product and that&#8217;s where that should be. There are a lot of ideas about this.</strong></p>

<p class="has-text-align-none"><strong>One of the things that makes that messy, in my mind, is that all of these products can now talk to each other in very specific ways. So Canva itself is a plugin for the other chatbots and it seems like the usage of that plugin is very high. How do you think about who owns the interface in a world where the core tool set might be usable somewhere else entirely that also has access to all that data and all that information that the company might have generated?</strong></p>

<p class="has-text-align-none">The key focus for us is always: How do we empower our community the most? How do we help them to achieve their goals? So we&#8217;re already embedded in organizations and businesses all around the world. And when they&#8217;re creating a design today in Canva, it&#8217;s quite a manual process. You have to go to all these different fragmented tools, collect all the information. And so being able to have that just inside the design tools, we think, will make a great deal of sense because it means that you&#8217;re not&#8230; It&#8217;s just cutting down manual and busy work, which is always the thing that we&#8217;re doing for our customers. Like in 2019, we launched background remover and the whole point of that was you click the background remove button and then the background was removed, and that reduced a lot of manual work.</p>

<p class="has-text-align-none">Again, with this release, it&#8217;s the same thing. There&#8217;s a lot of manual work to go and collect all the information and all of the context, all in different places. So having that right where you&#8217;re designing, we think, makes a lot of sense: where you&#8217;ve already got huge repositories of your images across your company, where you&#8217;ve already got all your brand templates, where you&#8217;re already doing the collaboration. But really, we just want to be putting the tools that help to reduce busy work in the hands of our community and helping them to achieve their goals with fewer clicks.</p>

<p class="has-text-align-none"><strong>A few weeks ago, we </strong><a href="https://www.theverge.com/podcast/902264/oktas-ceo-is-betting-big-on-ai-agent-identity"><strong>had the CEO of Okta on the show</strong></a><strong>, Todd McKinnon, and he was like, &#8220;The future of Okta is managing agent permissions because this is a security nightmare and I will sell kill switches to every enterprise that has agents running rampant over its networks and databases.&#8221; And so I hear what you&#8217;re saying. It&#8217;s like, okay, Slack is going to have a bunch of agents that can go talk to Canva&#8217;s database of images. Canva will have a bunch of agents that can go talk to Slack&#8217;s database of conversations, something else is going to happen over here. Does that seem like a workable picture of a company of the future, where all of these tools are accessing one another independently or do you think it will naturally land on just one?</strong></p>

<p class="has-text-align-none">I think the cool thing is, for consumers, there&#8217;s going to be choice about how they want to have their work stack set up. And I think it&#8217;s a really exciting time in technology, because there are just so many new possibilities for the way work gets done to reduce fragmentation. We&#8217;ve got a quarter of a billion people using Canva today, so we think there&#8217;s a huge opportunity to make AI simple and accessible, just like we did with design, but very importantly, helping to empower people to achieve their goals and to communicate their ideas. So we&#8217;re pretty excited about what we&#8217;re going to be able to bring out into the world.</p>

<p class="has-text-align-none"><strong>How does it work for you? What&#8217;s the dynamic inside of Canva? You&#8217;re obviously on the bleeding edge of this technology, and you obviously have your own tool. Do all of your tools have AI access to all the other tools? Or do you work only in Canva and let Canva AI go talk to all the other tools? What&#8217;s your setup?</strong></p>

<p class="has-text-align-none">Obviously Canva&#8217;s always had all my designs and my presentations and my documents, but being able to get connectors and being able to pull in information has been pretty astonishing. So for example, being able to say, &#8220;Hey, create me a plan for my next week and how I can optimize my time.&#8221; And it being able to go and read my calendar and then create me a document about my upcoming week, it was like, &#8220;There&#8217;s a lot going on.&#8221; It told me I had a massage booked and I was really surprised about that because I didn&#8217;t actually know until I read that in my Canva doc. And then I was like, &#8220;Oh, I think there&#8217;s a bug here,&#8221; and then I realized that my partner had organized that.</p>

<p class="has-text-align-none"><strong>The bug is, it&#8217;s booking self-care for you whenever it wants.</strong></p>

<p class="has-text-align-none">And so I think it&#8217;s really cool because there&#8217;s a lot of things that would be very manual, like going and doing a calendar audit, and that all of a sudden can actually just happen inside the one thing and it can actually create the presentation or it can create the document and then you can have people collaborating on that as you go.</p>

<p class="has-text-align-none">People talk about AI slop, and I think AI slop is often one-shot generation: you just take that and you put it somewhere else. What&#8217;s really exciting with Canva is that that&#8217;s really just the draft. That&#8217;s the starting point. Then you can iterate on it, whether through manual editing or by iteratively editing through Canva AI inside the editor itself, to refine it and really clearly articulate your idea. So we&#8217;re pretty excited about the possibilities that it unlocks.</p>

<p class="has-text-align-none"><strong>I feel like it&#8217;s time for the </strong><strong><em>Decoder </em></strong><strong>questions, because you&#8217;ve talked about how much you use Canva internally. The last time I asked you how you make decisions. You said you had a process called decision decks, where you literally made Canva documents with all the pros and cons and you mocked up the products. Is that still the process?</strong></p>

<p class="has-text-align-none">That is still the process. Prototyping has become a very key part of it, so often now there&#8217;s a workable prototype before anything gets launched. The really fun thing is&#8230; actually, I don&#8217;t know if I talked to you about the complex decision-making framework.</p>

<p class="has-text-align-none"><strong>No, this is new.&nbsp;</strong></p>

<p class="has-text-align-none">Okay. Well, for anyone that needs to make complex decisions, I find this extremely helpful: splitting it out into what are the goals, then what are the options, and what are the pros and cons for each of the options. But it&#8217;s really fun because now we have a template inside Canva, which is the complex decision-making framework doc. And you can literally just dictate through Canva AI and it will actually go and fill out this template. So there are a lot of really exciting ways you can take your ideas and the thoughts in your head and then have them distilled in a way that other people can see and understand, which I guess is the essence of design.</p>

<p class="has-text-align-none">Design can sometimes be thought of as making things look pretty, but really design is about expressing ideas, being able to communicate them effectively, and being able to turn something from an idea into reality. And so we think all these new tools really help to facilitate that. I use Canva Code all the time. I used to do a lot of mockups, and now I use Canva Code to create prototypes for every idea that I have, which is pretty powerful because it takes the idea far further than it could go before.</p>

<p class="has-text-align-none"><strong>The other </strong><strong><em>Decoder </em></strong><strong>question is how the company is structured. Last time, you were about 4,500 people and you described your structure as a very centralized product team and then lots and lots of local teams. And the metaphor you used was a cupcake and you said, &#8220;We work on the cupcake and we make the cupcake bigger and all the local teams work on the icing.&#8221; Is that still the metaphor?</strong></p>

<p class="has-text-align-none">Yeah, that&#8217;s a fair metaphor. That one&#8217;s been around for&#8230; The cupcake and the icing is actually so applicable in so many different ways. Small empowered teams are really the essence of how we get things done. And we&#8217;re very much a goal-oriented structure.</p>

<p class="has-text-align-none">So for example, with Canva AI 2.0, we really brought everyone together across the company to achieve that goal and bring Canva AI 2.0 out into the world. We do show-and-tells every week so everyone can share and get deep context on what&#8217;s happening. I think that goal orientation has really been the essence of how we&#8217;ve achieved anything over the last decade: being able to rally around goals and have different team formations in order to achieve them.</p>

<p class="has-text-align-none"><strong>How many people is Canva now?</strong></p>

<p class="has-text-align-none">Latest stat, about 5,000.</p>

<p class="has-text-align-none"><strong>So you&#8217;ve been growing. I&#8217;m really curious about, just in that context, decisions and structure, how you made the decision to say, &#8220;Okay, we&#8217;re going to do Canva 2.0 and we&#8217;re going to lean heavily into AI the way that we&#8217;re going to lean into AI.&#8221; That&#8217;s a lot of people. It&#8217;s a big decision. I imagine that there was a decision making slide or a deck and then this feels like it inherently is a top-down decision. We&#8217;re all doing this. Melanie says we&#8217;re all doing this, we&#8217;re all doing this. Walk me through that decision and walk me through any structure changes you had to make in order to accomplish it.</strong></p>

<p class="has-text-align-none">Yeah, absolutely. So I&#8217;m going to take us back to 2011 and to a deck that we had, which was called Canva&#8217;s Chef, before Canva was even called Canva. And the first slide, when you go onto it, it&#8217;s like, &#8220;What do you want to chef up today?&#8221; And then you could type into a search box and the idea was you could type whatever you wanted and then you&#8217;d pop into the editor and you could collaborate and you could have the editing tools. If we shared it with you after this, you’d see it&#8217;s bizarrely similar to what we&#8217;re launching today. So I guess this has been the dream for a really long time, but the technical ability to do this has been&#8230; hard. I&#8217;d say in 2017, we had this document. We called it Getting Smart and we&#8217;re like, &#8220;In the future, future, future, there&#8217;s going to be search-driven design. Rather than going to the buffet and getting something, it&#8217;ll be able to happen on the fly, like a chef cooking something up from the raw ingredients.&#8221;</p>

<p class="has-text-align-none">And now it feels like we can actually do that. We&#8217;d been researching this space for some years, and the design foundational model was a huge key piece. But then in October, there was a significant breakthrough in the company that meant that we could actually do it. So as soon as we saw that, we were all like, &#8220;Oh my goodness, this is really exciting and groundbreaking for what Canva can unlock.&#8221; And so that was when we really started to go all-in and realized that that technology needed to be pushed as far as it could go, which is what we&#8217;re launching today.</p>

<p class="has-text-align-none"><strong>Did you send out an email? Did you send out a Canva deck saying, &#8220;We&#8217;re doing this now, decision made?&#8221; How did that work?</strong></p>

<p class="has-text-align-none">That is a great question. So there was a team working on it already, and then we really bolstered up that team. We said, &#8220;Okay, we need to get every single person that can possibly help bring this to life onto the project.&#8221; We started the weekly show-and-tells, and we turned it into more of a centralized AI team. It went from a smaller team to many hundreds of people to bring it to life, with everyone working on the different parts that needed to become part of this orchestra.</p>

<p class="has-text-align-none"><strong>I&#8217;m curious. That structure part seems really interesting to me: “We have a software tool, a standard deterministic software tool with a select box and all that stuff, and we&#8217;re going to build an AI that can use that tool. Now we&#8217;ve got to take all the engineers we had and point them at that problem.”</strong></p>

<p class="has-text-align-none"><strong>Did you have to rethink your product team, or did you just make the team that was working on that part larger?</strong></p>

<p class="has-text-align-none">I think a little bit of both. Once the team had this big breakthrough, and we all saw it in action, we said, &#8220;My goodness, this is the coolest thing ever.&#8221; We then had to figure out who could actually help from across the company. I think that&#8217;s the goal-oriented structure I was mentioning before: when there&#8217;s a goal, you need to figure out who are the people that can help bring this to life. And then we were doing a weekly show-and-tell so everyone could get a really clear understanding of where everything was at and all the pieces that needed to be orchestrated to come together.</p>

<p class="has-text-align-none">I think Canva already being interoperable meant that there were a lot of these things that had already been built and that then could just come together in an exciting way. We do something called the Canva jigsaw. We&#8217;ve been doing different variations of the Canva jigsaw since the earliest days, which is often a goal and then all the pieces that need to be worked on independently to be able to bring that to life. That was exactly what we had at the center of this project again.</p>

<p class="has-text-align-none"><strong>You&#8217;re fundamentally a software CEO. I think that&#8217;s a fair description. I think you make software. The nature of software development itself seems to be undergoing some kind of existential crisis. One of our designers here at </strong><strong><em>The Verge</em></strong><strong> and Vox Media described all software development now as calibrating yourself to a database and just talking and seeing what happens and maybe that&#8217;ll turn your brain to mush.&nbsp;</strong></p>

<p class="has-text-align-none"><strong>Are you using Claude Code or Codex to make Canva, as it seems like every other company is racing to do?</strong></p>

<p class="has-text-align-none">Yeah, I use Canva Code really extensively from the perspective of–</p>

<p class="has-text-align-none"><strong>When you say Canva Code, that&#8217;s your own coding product? You&#8217;re not using Claude Code or Codex?</strong></p>

<p class="has-text-align-none">Yeah, because it&#8217;s so cool. I used to create mockups all the time. Anytime I had an idea, I would create a mockup. And now anytime I have an idea, I can use Canva Code. But with this latest release, you can actually go in and edit the text. So you can actually code something, you can edit the text, you can drag and drop, you can move things around. We&#8217;ve been really investing heavily on the AI front and upskilling our team.</p>

<p class="has-text-align-none"><strong>So can you make Canva with Canva Code?</strong></p>

<p class="has-text-align-none">Yeah, we do make Canva with Canva Code, though that&#8217;s not what gets deployed. We have many incredible engineers who actually make it sound enough to go out to hundreds of millions of people, but we use it for prototyping all the time.</p>

<p class="has-text-align-none"><strong>Yeah, I think the question I&#8217;m asking is more about those folks and how you think about the costs associated with those folks. The nature of software engineering is changing in some big, meaningful way due, in particular, to the coding tools that are available.&nbsp;</strong></p>

<p class="has-text-align-none"><strong>Are you rethinking how that works inside of Canva, as you ship new versions of Canva? Because for every other software CEO I talk to, their minds are exploding. They don&#8217;t quite know how it&#8217;s going to go, but they know it&#8217;s definitely going to change forever.</strong></p>

<p class="has-text-align-none">I think one of the things that we&#8217;ve invested really heavily in is continuously upskilling our team and systems. So we&#8217;ve taken a very intentional approach to give all of our team access to all of the latest and greatest tools. So we actually have not selected a winner. We have just given them everything. And it&#8217;s been very intentional because we want everyone to be playing with the latest and greatest and to be upskilling all the time. We need to be upskilling every one of our systems.</p>

<p class="has-text-align-none">We need to be upskilling because the way we build product is completely different today. The way we do [quality assurance] is completely different today. Actually, every system and process inside the company has had to have an AI-native transformation, and so has every specialty — what a designer does today, what an engineer does today, across every single part of the company.</p>

<p class="has-text-align-none">So it&#8217;s been a huge area of investment on the tooling, on giving our team time, and on the specialties. We&#8217;ve had this focus on AI everywhere and then AI impact and now AI-native because we really want to be rethinking everything in this AI era.</p>

<p class="has-text-align-none"><strong>There&#8217;s some rebalancing of power between product managers, designers, and engineers because AI lets them all do each other&#8217;s jobs. Where have you landed on that inside of Canva?</strong></p>

<p class="has-text-align-none">I think we&#8217;re all here to build the best experience we can. I think having really solid expertise has never been the best way to build product. In fact, great PMs often think about things from a design perspective. Great engineers often think about things from a design perspective. So really, it&#8217;s about the team that is there to just create the best thing possible. And having people in their separate siloed, isolated lanes and saying, &#8220;That&#8217;s my territory,&#8221; was never a great way to build product.</p>

<p class="has-text-align-none">With AI, it&#8217;s really leaning further into that. It&#8217;s everyone thinking about what is the best product experience that we can build. And everyone will bring different skills to the fore. So a designer will obviously have a certain expertise, a PM will have certain expertise, an engineer will have certain expertise, but we&#8217;ve always thought of it as a bit of a team sport where the best idea should be winning and everyone should be collaborating to create the best outcome that they possibly can for our community.</p>

<p class="has-text-align-none"><strong>So I understand the positive case for AI, and why you made the decision. I understand the product promise of just &#8220;tell this box what to make and it will make you a first draft and you can go on from there,&#8221; based on the data you have. But there&#8217;s a pretty significant downside to AI, particularly as it relates to branding.</strong></p>

<p class="has-text-align-none"><strong>There&#8217;s polling here in the United States, at least, that basically is just bad vibes around AI. The last </strong><a href="https://www.theverge.com/ai-artificial-intelligence/891724/nbc-news-march-2026-poll-ai-ice"><strong><em>NBC News</em></strong><strong> poll</strong></a><strong> that we are constantly citing is AI is polling under ICE in terms of favorability and just above the war in Iran. That&#8217;s not a great place for AI to be.</strong></p>

<p class="has-text-align-none"><strong>People are voting against data centers in their communities. AI is widely associated with job loss and maybe now you&#8217;re going to cause some enterprise job loss because social media teams don&#8217;t need to be as big as they needed to be anymore. There&#8217;s </strong><a href="https://www.theverge.com/tech/885710/jack-dorsey-block-layoffs-job-cuts-ai"><strong>a lot of layoffs</strong></a><strong> that are being </strong><a href="https://www.cnbc.com/2026/03/31/oracle-layoffs-ai-spending.html"><strong>blamed on AI across the board</strong></a><strong>.&nbsp;</strong></p>

<p class="has-text-align-none"><strong>You&#8217;re leaning into AI with Canva. You&#8217;re rebranding the whole product as having AI in it. How do you think about that downside risk, that people don&#8217;t like it? The more they&#8217;re exposed to it, the more they&#8217;re saying, &#8220;Wait, stop. I don&#8217;t want this around me.&#8221;</strong></p>

<p class="has-text-align-none">I think it&#8217;s like any tool. It will be whatever you want it to be. And so if you want it to help empower people, if you want it to help deliver better experiences for your customers, if you want it to uplift students and to give them great quality education materials, it can do that.&nbsp;</p>

<p class="has-text-align-none"><strong>Wait, can it do that? I&#8217;m actually not so certain about the student thing.</strong></p>

<p class="has-text-align-none">We launched something called <a href="https://www.canva.com/learn-grid/">LearnGrid</a>, and LearnGrid enables teachers across many countries to have curriculum-aligned content created. That can be worksheets with immediate feedback. So we&#8217;re really excited about being able to put these tools in teachers&#8217; and students&#8217; hands around the world.&nbsp;</p>

<p class="has-text-align-none">We&#8217;ve got 100 million teachers and students using Canva today, but the access to great tools is very divergent, depending on the wealth of a school, for example. So we&#8217;re really excited about being able to bring that accessibility to students around the world.</p>

<p class="has-text-align-none"><strong>Right. But I think my question is more about slop, right? People are experiencing the tools that exist today, and maybe mostly they&#8217;re experiencing the free version of ChatGPT or whatever AI Overview Google puts in front of them, running on the cheapest possible model at the biggest possible scale. And they&#8217;re having these experiences. I know that the industry likes to say most people have never used AI and certainly no one&#8217;s paying for it, but like a billion people have used ChatGPT and then the polling is the polling.</strong></p>

<p class="has-text-align-none"><strong>I&#8217;m just wondering how you&#8217;re thinking about communicating this is an AI product because, to me, it feels like it comes with all kinds of baggage. I&#8217;m watching OpenAI </strong><a href="https://www.theverge.com/ai-artificial-intelligence/906022/openai-buys-tbpn"><strong>buy TBPN</strong></a><strong> because they think they have a marketing problem. I&#8217;m watching all the venture capitalists say, &#8220;The media is lying about AI and it&#8217;s going to change everything for the better.&#8221;&nbsp;</strong></p>

<p class="has-text-align-none"><strong>And then you&#8217;re racing into being like, &#8220;Canva&#8217;s AI now.&#8221; I think you know that a bunch of designers are going to be very unhappy about this. There&#8217;re some people who are going to just say, &#8220;This is bad. They&#8217;re ruining the product.&#8221; I&#8217;m just wondering how you are thinking about navigating that balance.</strong></p>

<p class="has-text-align-none">I think there&#8217;s going to be a plethora of opinions on any topic. What we always do is just put what our community wants and needs at the center of it. So we&#8217;ve had a lot of people asking, even yourself quite specifically, like, &#8220;I&#8217;ve got this goal. Why can&#8217;t Canva AI just know everything about it and be able to help me with that first draft?&#8221; So helping people to achieve their goals is always going to be at the center of what we do and that&#8217;s exactly what drives these sorts of decisions.</p>

<p class="has-text-align-none">It&#8217;s about taking out a lot of the manual work of creating and laying things out. So I really believe that AI should accelerate your vision and creativity, not override it. I think it&#8217;s really important that AI is just another tool in our toolkit, one that will help achieve our goals if we choose to use it. So we&#8217;ve been really intentional about the product design; for example, Canva AI is a new tab. If you just come in and you love templates, you can use that. If you come in and you just love the elements and creating things from scratch, that&#8217;s totally fine. That&#8217;s totally cool.</p>

<p class="has-text-align-none">But if you want to be able to express an idea just by dictation or through typing, you can do that too. So I think it&#8217;s really important that we understand that every one of our community members is at different stages and different scales of comfort with AI. We want to be making sure that we&#8217;re helping to facilitate that. So I think this is the full spectrum and it&#8217;s really important that Canva isn&#8217;t turning into a chatbot by any stretch of the imagination, but if you do want to be able to just chat to something and have it help you out, you can do that too. So it’s about really enabling all of that.&nbsp;</p>

<p class="has-text-align-none">I can&#8217;t speak for other companies out there in the world. But Canva has benefited greatly from an incredible community. We&#8217;ve got a quarter billion people that use Canva each month. There&#8217;s a lot of love for our product. I think that that love really comes from being able to have Canva be the thing that helps people to express their ideas and turn that into reality. We take that extremely seriously. So with all of these product developments, we are continuing to keep that at our core and empowerment is such a critical principle for us that is very much through everything that you&#8217;ll hopefully be seeing and touching very soon.</p>

<p class="has-text-align-none"><strong>Let me ask you about the competition because you&#8217;re describing goals and when I talk to executives and they describe goals and what people really want, you often realize you&#8217;re talking about business software. Your enterprise business is growing, and this very much feels like an enterprise offering to me. You&#8217;re going to connect to all these other systems and you&#8217;re going to get some work done and you&#8217;re going to do work.</strong></p>

<p class="has-text-align-none"><strong>That&#8217;s what this feels like to me. And I know Canva has a big consumer base and a lot of people have fun with it. This feels like a work product. Is AI fundamentally enterprise software? To me, I don&#8217;t think that people yearn for automation in their personal lives. I think you want to get rid of busy work at work so you can do something more important and a lot of work is inherently repetitive and AI just makes a lot of sense in this zone. Do you think AI is fundamentally enterprise software?</strong></p>

<p class="has-text-align-none">I think you&#8217;re right. Canva AI will totally be the system at the center of how work gets done, but that doesn&#8217;t mean that if you&#8217;re creating those wanted posters for your daughter&#8217;s party, you can&#8217;t be like, &#8220;Pull the invite list from the party coming up.&#8221; And just wanting it to connect to that.</p>

<p class="has-text-align-none"><strong>This implies that I have good access to the database of the eight-year-old girls coming to my house next weekend, but I&#8217;ll grant you that.</strong></p>

<p class="has-text-align-none">But yeah, it often will be about work and work means many different things to many different people. So work can mean a teacher in a classroom, work can mean at a large company, work can mean a small business trying to just get their marketing collateral created. I think we&#8217;ve shifted away from broadcast communication, where everything is one to many, to maybe having a hairdresser be able to send out a campaign on someone&#8217;s birthday to say, &#8220;Here&#8217;s a special voucher for your birthday. We have that particular thing that you like.&#8221; Being able to have that much more personal communication, I think is another aspect.</p>

<p class="has-text-align-none"><strong>It does feel to me like the cutting edge of social media marketing in particular is automation in this way. I&#8217;ve watched so many TikToks and Instagram Reels of social media managers explaining how they have built incredible dashboards using AI tools, automated entire workflows, and built content pipelines. You can see it. There&#8217;s something very important happening there. Presumably Canva will participate in that and they will build those tools inside of Canva.&nbsp;</strong></p>

<p class="has-text-align-none"><strong>Right next to that is Meta itself and TikTok and YouTube, which are all working on tools exactly like this. Mark Zuckerberg last year — I&#8217;m just going to read you this quote — </strong><a href="https://stratechery.com/2025/an-interview-with-meta-ceo-mark-zuckerberg-about-ai-and-the-evolution-of-social-media/"><strong>said this to Ben Thompson</strong></a><strong>: &#8220;In general, we&#8217;re going to get to a point where, if you&#8217;re a business, you come to us, you tell us what your objective is, you connect to your bank account, you don&#8217;t need any creative, you don&#8217;t need any targeting, you don&#8217;t need any measurement. You tell us the results you want and we will give them to you. You just need to be able to read the results that we spit out.&#8221;</strong></p>

<p class="has-text-align-none"><strong>That&#8217;s a redefinition of advertising. They&#8217;re describing, to some extent, your product. You tell it what you want to achieve and AI is going to make a bunch of creative and schedule it across their platform. I know TikTok is working on this. I know YouTube is working on this. They all see this thing that they can sell to their biggest clients, their advertisers. How do you think about competing with the platform&#8217;s own native capabilities that look a lot like what Canva&#8217;s trying to make for marketers?</strong></p>

<p class="has-text-align-none">It&#8217;s actually funny. Back in 2012, we had this pitch and we called it the design engine. And we said all these other platforms are going to have tools and they did. Lots of companies have lots of different tools for a specific platform, but it&#8217;s annoying because as a company, you probably want to be advertising in lots of different places. You probably want to be having your pitch decks and your docs and all the different things and you don&#8217;t want to have that fragmented across lots of different tools and systems.&nbsp;</p>

<p class="has-text-align-none">So Canva is everything in one place, rather than having to go and have your knowledge in lots of different places. So that&#8217;s, I guess, one of the key things that we&#8217;ve been leaning into for the last decade is that Canva can be that thing that is at the center of your work.</p>

<p class="has-text-align-none"><strong>So the back and forth there is these platforms either have bad analytics or are not very generous in sharing their analytics or make you pay extra to access their analytics. Meta obviously has its own models. Google obviously has its own models. They might say, &#8220;Look, if you want to run this creative, you have to make it in our tools. If you want to use this stuff, we will throttle you if you come to us with creative made elsewhere. We&#8217;re going to push you towards our tools. So you use our models and we get two bites of the apple on token pricing.&#8221;</strong></p>

<p class="has-text-align-none"><strong>I&#8217;ve heard this from a bunch of AI CEOs, that database access in general is going to become a new pricing vector. We&#8217;re going to charge for tools. If you want to connect to our system, the customer will have to pay some higher access fee. Have you seen any glimmers of this yet or is it too early to say?</strong></p>

<p class="has-text-align-none">I&#8217;d say, A, it&#8217;s too early, but B, I think that hopefully the customer wins out of all of this.&nbsp;</p>

<p class="has-text-align-none"><strong>That&#8217;s very optimistic.</strong></p>

<p class="has-text-align-none">Hopefully the customer is able to achieve their goals and use the tools that they want to use. I guess at the end of the day, why I&#8217;ve been so infatuated with design is that design is imagining the future and then willing it into existence. And so, design really radically helps that process. You mentioned optimism. I think that&#8217;s why I love design so much is because you do have to imagine the future that you want and then you can work to bring it into reality.</p>

<p class="has-text-align-none"><strong>The reality is Mark Zuckerberg exists and he&#8217;s very, very, very competitive. There&#8217;s also that piece of it.&nbsp;</strong></p>

<p class="has-text-align-none">I think they also like money. I think from our experience, they love to have creative because creative is the blocker.</p>

<p class="has-text-align-none"><strong><em>[Laughs]</em></strong><strong> Did you say they like money? I heard you. Well, I mean, look… I know a lot of social media people who take it as an article of faith–</strong></p>

<p class="has-text-align-none"><em>[Laughs] </em>Let me give a little clarity on that. They&#8217;re not going to stop advertising. Their company is built on advertising, so they&#8217;re going to want to take creative from wherever to have it on their platform. In fact, companies not being able to create great advertising materials has been a huge blocker to people advertising on their platform. And so I don&#8217;t think they&#8217;re going to be sad about creating it in Canva.</p>

<p class="has-text-align-none"><strong>I&#8217;m curious how this one plays out because the other thing that I see at Meta doing is investing heavily in AI themselves. Every week, Zuck has spent another $200 trillion hiring three AI researchers who are going to build him the best model. Who knows how that will pay off. The same way who knows how any of this will pay off.</strong></p>

<p class="has-text-align-none"><strong>But one way it could pay off is for Zuckerberg to say, &#8220;If you want to buy advertising on our platform, you&#8217;re going to generate it with our AI models. And because we own the model, we can charge you less than Melanie, who has to go buy tokens from someone else and pay their margin and pay her margin.&#8221;</strong></p>

<p class="has-text-align-none"><strong>I know a lot of social media managers who are fully convinced that they need to make their videos in Instagram&#8217;s Edits app because Instagram will promote it more heavily, even if they&#8217;re not actually making the videos, even if they&#8217;re just feeding it through to get whatever little metadata that says “Edits.”</strong></p>

<p class="has-text-align-none"><strong>Maybe that&#8217;s true and maybe it&#8217;s not, but the perception of Meta as a platform, the perception of YouTube as a platform, is that they will self preference in this way. So if they&#8217;re also the model providers and they can have lower pricing and the perception of self-preferencing, how do you expect to come up against that?</strong></p>

<p class="has-text-align-none">Let&#8217;s check back in a few years.</p>

<p class="has-text-align-none"><strong>Okay. I thought that&#8217;s what you would say, but I just see it coming. Especially for Meta, which has to find some way to make money with the models they&#8217;re building. As of yet, I don&#8217;t know what it is except for maybe they&#8217;re doing Reels targeting on GPUs.</strong></p>

<p class="has-text-align-none">I can&#8217;t speak for them and their business model, but I can certainly say, from a customer&#8217;s perspective, being able to create all of the content that you want in one place, having little friction between that, being able to deploy into lots of places is what we&#8217;ve been specializing in for, I’d say, the last decade. And certainly being able to take that to other platforms has been great for our customers, but then also great for the other platforms because then they&#8217;re able to have all these people that can do their marketing on those platforms.</p>

<p class="has-text-align-none"><strong>The last time we talked, your model provider was OpenAI, I believe. Is that still the primary partner?</strong></p>

<p class="has-text-align-none">We partner with OpenAI and Anthropic and then, of course, our own internal models. We love to collaborate with everyone.</p>

<p class="has-text-align-none"><strong>Are their models interchangeable? Or do you use them for specific tasks inside of Canva AI?</strong></p>

<p class="has-text-align-none">We always take the best model for the best task, continuously. So it&#8217;s been great to have so many great partners in the space, from Google to Anthropic and OpenAI.</p>

<p class="has-text-align-none"><strong>My sense of the situation is that every token costs the big companies money, that they&#8217;re all subsidizing token use. At some point that&#8217;s going to turn, right? They&#8217;re going to want to make a penny of profit on every token. What does that do to your pricing when that happens?</strong></p>

<p class="has-text-align-none">Investing in our own models has been a really core part of our strategy and we were able to bring the cost down, the latency down. And the price is being driven down radically. If you look at the price of LLM queries, it&#8217;s gone down 50 times in the last three years. So it&#8217;s pretty exciting from that standpoint of having so many big companies racing to provide the cheapest models.</p>

<p class="has-text-align-none"><strong>When you say your own models, actually, are you in the fight for GPUs? Are you training them on someone else&#8217;s cloud? How&#8217;s that working?</strong></p>

<p class="has-text-align-none">Yeah, it&#8217;s been a really important area of investment, which is why we&#8217;ve got our own research team of 100 people that are investing in the areas that we need. So for example, I was mentioning the design side — like Magic Layers was from our own research org. It&#8217;s been really exciting to invest in the areas that other companies aren&#8217;t.</p>

<p class="has-text-align-none">We don&#8217;t need to go and compete in areas where there are billions of dollars of investment already happening, but in the areas that we know we can give great advantage to our customers, we certainly do that. So Magic Layers lets you now take any image from wherever you might generate it into Canva and then it will actually split it out into layers, so you can just edit it like a Canva template, which is pretty exciting.</p>

<p class="has-text-align-none"><strong>Does Magic Layers happen on your models or are you going out?&nbsp;</strong></p>

<p class="has-text-align-none">Yeah, that&#8217;s certainly our models.</p>

<p class="has-text-align-none"><strong>That&#8217;s really cool. When you&#8217;ve made the decisions to invest in your own models versus going out to other providers, is there a cost performance ratio? How do you make that decision? Because investing in your own models is expensive.</strong></p>

<p class="has-text-align-none">It is expensive, but for example, Magic Layers has had eight million uses in the four weeks since launching. It really hit a pain point that people had, which was that you generate something and you have to go and reprompt the LLM over and over again to be able to do it. So being able to just go in and make that tiny little text tweak or to be able to collaborate or whatever it might be has been really important.</p>

<p class="has-text-align-none">So I guess every time we&#8217;re choosing a model, it’s about “what is the best in the world?” We want to have price brackets for each of the different areas of our company. So you&#8217;ve got different models, you can choose your premium models or you can choose standard models. So we are domain experts in design and visual AI. And so that&#8217;s been really the focus of our research and development.</p>

<p class="has-text-align-none"><strong>You said you don&#8217;t want to talk about your competitors, but I want to wrap up by talking about your biggest competitor. We spent some time on it the last time you were on the show.&nbsp;</strong></p>

<p class="has-text-align-none">I&#8217;m actually curious, I don&#8217;t even know who you&#8217;re going to name. Who&#8217;s our biggest competitor?</p>

<p class="has-text-align-none"><strong>I think it&#8217;s Adobe. I think in the world of creative software for professionals, it&#8217;s obviously Adobe. And maybe Canva&#8217;s more consumer than that. Who do you think your biggest competitor is?</strong></p>

<p class="has-text-align-none">I shouldn&#8217;t have opened that question up, should I? I should have let you go on–</p>

<p class="has-text-align-none"><strong><em>[Laughs]</em></strong><strong> You walked right into this.</strong></p>

<p class="has-text-align-none">I know, I know. When we set out years ago, we were like, there&#8217;s this huge gap in the market. There weren&#8217;t tools that enabled easy design and that were rapid and enabled creative freedom. And I think that that&#8217;s exactly what we want to do, with Canva AI 2.0 bringing creativity and productivity together, being this place where you can get all of your work done in one place.&nbsp;</p>

<p class="has-text-align-none">I don&#8217;t really think of them as competitors. There are our community of a quarter billion people that we need to satisfy and help them achieve their goals. We really focus on running our own race and filling the gap in the market.&nbsp;</p>

<p class="has-text-align-none"><strong>No, but you have to answer. Who&#8217;s your biggest competitor?</strong></p>

<p class="has-text-align-none">Who&#8217;s our biggest competitor?</p>

<p class="has-text-align-none"><strong>You can&#8217;t say no one. You can&#8217;t be a $4 billion company with no competitors. That&#8217;s not a choice.</strong></p>

<p class="has-text-align-none">I think the way we think about it, it would be actually a bad business decision to be like, &#8220;You know what I&#8217;m going to do? I&#8217;m going to go and create this product that another company has created.&#8221; That wouldn&#8217;t make any sense. We literally go in and we say, &#8220;Where is the gap in the market? Where are users currently having friction?&#8221;</p>

<p class="has-text-align-none"><strong>I like what you&#8217;re doing, and I appreciate it and it&#8217;s very good, but it has to be someone. Who do you want to take market share from and who might take market share from you?</strong></p>

<p class="has-text-align-none">I don&#8217;t know that I have a great answer for you. I think there&#8217;s a lot of fragmented tools right now and having that in one place, I think, is going to be the gap in the market that we fill.</p>

<p class="has-text-align-none"><strong>Were you born this way? You&#8217;re such a pro. It&#8217;s very good. It&#8217;s incredible. I&#8217;m impressed.&nbsp;</strong></p>

<p class="has-text-align-none">You can name some. Who do you think? I&#8217;ll let you say whoever you think.</p>

<p class="has-text-align-none"><strong>I do think it&#8217;s Adobe. And specifically, when I think about the Canva community, it&#8217;s a lot of people who need to make something as consumers or as a one-off at their company and they graduate to the full suite. I think we have talked about that journey for a lot of folks. And when I was young, getting my first legal Photoshop license was a marker. And I think that is still a marker for a lot of people. I think Premier is a marker for a lot of creators, being able to afford that software.</strong></p>

<p class="has-text-align-none"><strong>I think Adobe is a different company, and maybe you don&#8217;t think they&#8217;re a competitor, but they occupy the same space for a lot of creatives, in a lot of ways. Their products line up right with yours. You can prompt Photoshop in exactly the way that you were talking about prompting Canva, and Adobe will tell you that its PDF business is the best business database that has ever existed in the history of the world, and they&#8217;re going to line it all up. I know what they&#8217;re going to do.</strong></p>

<p class="has-text-align-none"><strong>One of the things that I think is the most interesting, when I line up these two companies, is, in general, people love Canva. I think that, on balance, is true. I&#8217;m very curious to see how that goes once you put AI in front of everybody. I think that there&#8217;s some risk there, and in general, people </strong><a href="https://www.theverge.com/tech/913765/adobe-rivals-free-creative-software-app-updates"><strong>are really mad at Adobe all the time</strong></a><strong>. That is just the nature of those two companies, the way they&#8217;re situated right now.&nbsp;</strong></p>

<p class="has-text-align-none"><strong>So I’ve got to ask you this. Shantanu Narayen is leaving Adobe. He announced he&#8217;s going. We don&#8217;t know who the new CEO is going to be. Who do you think the next CEO of Adobe should be?</strong></p>

<p class="has-text-align-none"><em>[Laughs] </em>I definitely can&#8217;t comment on that.</p>

<p class="has-text-align-none"><strong>Yeah you can. Should it be you?</strong></p>

<p class="has-text-align-none">No, definitely not. Maybe you can, but then we-</p>

<p class="has-text-align-none"><strong>No. Nobody wants me to be in charge of PDFs in the world. You don&#8217;t want that at all. But I&#8217;m asking, if you&#8217;re looking at this, there&#8217;s a leadership change coming. Do you see that as an opportunity? Do you see that as, I will say, your competitor, retrenching? But I&#8217;m curious how you are perceiving that changeover there.</strong></p>

<p class="has-text-align-none">Honestly, we really have been pretty busy just focusing on our quarter billion users to try to make sure that we&#8217;re putting great products in their hands.</p>

<p class="has-text-align-none"><strong>Very good.</strong></p>

<p class="has-text-align-none">I just genuinely haven&#8217;t given that any consideration.</p>

<p class="has-text-align-none"><strong>Really? No one sent you a text, like, “He&#8217;s leaving”?</strong></p>

<p class="has-text-align-none">I was aware of it, but it&#8217;s not where my mind is focused. I&#8217;ll give you the way I think about the world. I always think about it as an internal locus of control and an external locus of control: things that you can control, that actually have an impact, and then things that are completely outside your control. I really focus on the things that are within our control, and that&#8217;s delivering a great product to our customers that helps to grant our community&#8217;s wishes. And then the things that are outside of my control, I literally just don&#8217;t focus my time and energy on, because there&#8217;s quite a bit inside the internal locus of control.</p>

<p class="has-text-align-none"><strong>One of the reasons that I think designers are always mad at Adobe is their pricing goes up; they change the plans, they charge more, and features go away. You&#8217;re at a scale with Canva now where you have what I would call the Microsoft Word problem, where the toolbar has to have every button in it because you&#8217;re so big that even if it&#8217;s only 1 percent of users who use the button, it&#8217;s still millions of people and you can&#8217;t have millions of people mad at you because you removed the button.</strong></p>

<p class="has-text-align-none"><strong>That feels like Canva&#8217;s at this scale, which is why AI is in a tab, right? You can&#8217;t change it too much. How do you think about making sure your Canva customers, who all use the product every day, seem to be very happy with you and stay happy with you, even as you roll out these products that might fundamentally threaten their jobs or how they work?</strong></p>

<p class="has-text-align-none">I think all of those considerations you said are absolutely very much something we focus greatly on. So for example, when we launched Canva AI, what we&#8217;re really excited about is there&#8217;s so much breadth and depth in Canva&#8217;s product now that a casual user might not be aware of all of the different things and capabilities that Canva can do. Many users are very deeply aware of every single button in Canva, but Canva AI really brings that all together.</p>

<p class="has-text-align-none">So you could just say whatever it is that you want and you might not know the specific tools that you need to be able to use to bring that to life, but it can do it for you. So we&#8217;re really excited about how that will be able to make complex things simple even from the perspective of being able to create your first design in Canva.</p>

<p class="has-text-align-none">We also do an extraordinary amount of user testing and we do that with existing Canva community members, and with new users, and that really helps to refine the products before we&#8217;re getting them out the door and into our community&#8217;s hands. We get <a href="https://www.reddit.com/r/canva/comments/1lw5pt0/icymi_canva_granted_a_bunch_of_features_from_the/">more than one million wishes a year</a> from our community and so we have actually just granted 40 of them at Canva Create.</p>

<p class="has-text-align-none">So all of these things that we&#8217;re doing are very much in partnership with our community. And I think that&#8217;s a really key part for us, is that we want to be building Canva in partnership with our community, getting their feedback, helping to learn from what they want, what they need to do to achieve their goals. And that&#8217;s very much at the center of how we think about it.</p>

<p class="has-text-align-none"><strong>All right. I need to ask you one very important question right at the end.</strong></p>

<p class="has-text-align-none">Sure.</p>

<p class="has-text-align-none"><strong>Do you promise to keep Affinity free?</strong></p>

<p class="has-text-align-none">Yes, absolutely. We&#8217;ve made that an absolutely key commitment.</p>

<p class="has-text-align-none"><strong>Just checking. I feel like every time I talk to you, someone tells me, &#8220;Make sure you ask her if Affinity&#8217;s going to stay free.&#8221;</strong></p>

<p class="has-text-align-none">I can very much say Affinity is going to be staying free. It&#8217;s a critical thing. We knew that that was a really critical part of why Affinity was created in the first place — being able to make it more accessible. And then a key part of Canva has always been having our free product. We&#8217;ve got hundreds of millions of people using our free product. Affinity itself has had more than 5 million downloads since we announced it. So yeah, it&#8217;s a really key part. Affinity is free and will be.&nbsp;</p>

<p class="has-text-align-none"><strong>That&#8217;s great. Canva 2.0 is basically in beta, right? You&#8217;ve announced it, but it&#8217;s in a small beta.</strong></p>

<p class="has-text-align-none">Yeah, in a research preview.</p>

<p class="has-text-align-none"><strong>When does it go big?</strong></p>

<p class="has-text-align-none">At Canva Create, we gave one million people access to Canva AI 2.0. And so we&#8217;re really excited to be watching how everyone is using it and how it&#8217;s helping them to achieve their goals.</p>

<p class="has-text-align-none"><strong>Great. Well, I look forward to getting secret access to it so I can make even more silly posters for birthday parties. Melanie, it&#8217;s always so much fun talking to you. Thank you so much for being on </strong><strong><em>Decoder</em></strong><strong>.</strong></p>

<p class="has-text-align-none">Thank you so much for having me. Thanks for your great questions.</p>

<p class="has-text-align-none"><em><sub>Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!</sub></em></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Jess Weatherbed</name>
			</author>
			
			<title type="html"><![CDATA[The creative software industry has declared war on Adobe]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/913765/adobe-rivals-free-creative-software-app-updates" />
			<id>https://www.theverge.com/?p=913765</id>
			<updated>2026-04-17T09:23:51-04:00</updated>
			<published>2026-04-17T08:51:16-04:00</published>
			<category scheme="https://www.theverge.com" term="Adobe" /><category scheme="https://www.theverge.com" term="Analysis" /><category scheme="https://www.theverge.com" term="Creators" /><category scheme="https://www.theverge.com" term="Report" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[All empires eventually fall, and it seems the creative software industry has collectively decided that Adobe's time has come. The Creative Cloud provider's suite of design tools have been considered the industry standard for decades - despite unpopular decisions to fully embrace generative AI and abandon software licenses in favor of expensive, complicated subscriptions. Pricing [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: The Verge; Shutterstock" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/adobe-war_c7d8c4.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">All empires eventually fall, and it seems the creative software industry has collectively decided that Adobe's time has come. The Creative Cloud provider's suite of design tools has been considered the industry standard for decades - despite unpopular decisions to <a href="https://www.theverge.com/tech/912287/adobe-firefly-ai-assistant-announcement-editing">fully embrace generative AI</a> and abandon software licenses in favor of <a href="https://www.theverge.com/tech/894555/adobe-75-million-doj-settlement-subscriptions">expensive, complicated subscriptions</a>. </p>
<p class="has-text-align-none">Pricing in particular has given competitors an opening to attack. Some of the best alternatives aren't just undercutting Adobe's price - they're available for <em>free</em>. People love free.</p>
<p class="has-text-align-none">One example that was <a href="https://www.maxon.net/en/autograph">announced this week is Autograph</a>, motion design software akin to Ado …</p>
<p><a href="https://www.theverge.com/tech/913765/adobe-rivals-free-creative-software-app-updates">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Jess Weatherbed</name>
			</author>
			
			<title type="html"><![CDATA[Adobe embraces conversational AI editing, marking a ‘fundamental shift’ in creative work]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/912287/adobe-firefly-ai-assistant-announcement-editing" />
			<id>https://www.theverge.com/?p=912287</id>
			<updated>2026-04-15T09:15:28-04:00</updated>
			<published>2026-04-15T09:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="Adobe" /><category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Creators" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Adobe is fully embracing AI tools that enable creators to edit their work using descriptive prompts, instead of manually using specific Creative Cloud apps. The software giant's new Firefly AI Assistant allows users to describe what they want to change by typing their own words into a conversational interface. Adobe says this marks a "fundamental [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A screenshot of the Firefly AI Assistant editing a headshot." data-caption="You don’t need to understand any fancy editing terms — just describe what changes you want to make. | Image: Adobe" data-portal-copyright="Image: Adobe" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/04/Adobe-Firefly-Innovations-Visual-Asset.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	You don’t need to understand any fancy editing terms — just describe what changes you want to make. | Image: Adobe	</figcaption>
</figure>
<p class="has-text-align-none">Adobe is fully embracing AI tools that enable creators to edit their work using descriptive prompts, instead of manually using specific Creative Cloud apps. The software giant's new Firefly AI Assistant allows users to describe what they want to change by typing their own words into a conversational interface. </p>
<p class="has-text-align-none">Adobe says this marks a "fundamental shift in how creative work is done" by removing skill barriers and laborious tasks, while still giving creatives full control over their work. It'll be "available soon" on the Firefly AI studio platform according to Adobe, though no specific launch date was provided in the announcement.</p>
<p class="has-text-align-none">The unifi …</p>
<p><a href="https://www.theverge.com/tech/912287/adobe-firefly-ai-assistant-announcement-editing">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Jess Weatherbed</name>
			</author>
			
			<title type="html"><![CDATA[Adobe’s AI image generator can now be trained on your own art]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/897243/adobe-firefly-ai-custom-models-image-public-beta" />
			<id>https://www.theverge.com/?p=897243</id>
			<updated>2026-03-19T11:19:14-04:00</updated>
			<published>2026-03-19T09:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="Adobe" /><category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Creators" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Adobe is launching customizable AI image generators that can mimic specific artistic styles and character designs. The Firefly Custom Models are available in public beta starting today, allowing creators and brands to train a model on their own assets to ensure generated images follow a consistent aesthetic for characters, illustrations, and photography. The tool aims [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="An illustrated image representing Adobe’s Custom Firefly AI Model public beta." data-caption="Just feed your images into the bot and it’ll try to mimic specific styles and aesthetics. | Image: Adobe" data-portal-copyright="Image: Adobe" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/03/Firefly-Custom-Models-Hero-Image.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Just feed your images into the bot and it’ll try to mimic specific styles and aesthetics. | Image: Adobe	</figcaption>
</figure>
<p class="has-text-align-none">Adobe is launching customizable AI image generators that can mimic specific artistic styles and character designs. The Firefly Custom Models are available in public beta starting today, allowing creators and brands to train a model on their own assets to ensure generated images follow a consistent aesthetic for characters, illustrations, and photography.</p>
<p class="has-text-align-none">The tool aims to streamline workflows for teams and creators that need to produce high volumes of content, providing a reusable foundation that preserves visual consistency across multiple projects, instead of having to start from scratch each time. Adobe says that custom models can help pr …</p>
<p><a href="https://www.theverge.com/tech/897243/adobe-firefly-ai-custom-models-image-public-beta">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Jess Weatherbed</name>
			</author>
			
			<title type="html"><![CDATA[Adobe will pay $75 million to settle US cancellation fee lawsuit]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/894555/adobe-75-million-doj-settlement-subscriptions" />
			<id>https://www.theverge.com/?p=894555</id>
			<updated>2026-03-13T14:24:58-04:00</updated>
			<published>2026-03-13T13:52:01-04:00</published>
			<category scheme="https://www.theverge.com" term="Adobe" /><category scheme="https://www.theverge.com" term="Creators" /><category scheme="https://www.theverge.com" term="Law" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Policy" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Adobe says it will pay $75 million to resolve a lawsuit filed by the US government alleging that the creative software giant harmed consumers by making its subscriptions intentionally hard to cancel and concealing termination fees. The payment aims to resolve the complaint raised in June 2024, in which the US Justice Department accused Adobe [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Red artwork of the Adobe brand logo" data-caption="" data-portal-copyright="Illustration by Alex Castro / The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/23624357/acastro_STK124_03.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Adobe says it will pay <a href="https://news.adobe.com/news/2026/03/adobe-statement">$75 million to resolve a lawsuit</a> filed by the US government alleging that the creative software giant harmed consumers by making its subscriptions intentionally hard to cancel and concealing termination fees.</p>
<p class="has-text-align-none">The payment aims to resolve the <a href="https://www.theverge.com/2024/6/17/24180196/adobe-us-ftc-doj-sues-subscriptions-cancel">complaint raised in June 2024</a>, in which the US Justice Department accused Adobe of breaking federal consumer protection laws by failing to properly disclose important terms for its "annual paid monthly" plans, and forcing Creative Cloud subscribers through an "onerous and complicated" cancellation process. The lawsuit said that customers would then be "ambushed" with early terminat …</p>
<p><a href="https://www.theverge.com/tech/894555/adobe-75-million-doj-settlement-subscriptions">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Jess Weatherbed</name>
			</author>
			
			<title type="html"><![CDATA[You can now ask Photoshop’s AI assistant to edit images for you]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/891998/adobe-photoshop-web-mobile-ai-assistant-beta-launch" />
			<id>https://www.theverge.com/?p=891998</id>
			<updated>2026-03-10T08:38:07-04:00</updated>
			<published>2026-03-10T09:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="Adobe" /><category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Creators" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" />
<summary type="html"><![CDATA[Adobe announced more agentic AI features for its Creative Cloud apps this week, allowing users to edit images and documents by describing the changes to a chatbot. A native AI assistant is now available in public beta for Photoshop on web and mobile, and some Adobe apps, including Acrobat and Express, will soon be available to [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A screenshot showing the beta AI Assistant in Adobe Photoshop for web." data-caption="You can ask Adobe’s AI assistant to edit images in Photoshop for web by describing conversationally how you want it to change. | Image: Adobe" data-portal-copyright="Image: Adobe" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/03/Photoshop-for-web-AI-assistant.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	You can ask Adobe’s AI assistant to edit images in Photoshop for web by describing conversationally how you want it to change. | Image: Adobe	</figcaption>
</figure>
<p class="has-text-align-none">Adobe announced more agentic AI features for its Creative Cloud apps this week, allowing users to edit images and documents by describing the changes to a chatbot. A native AI assistant is now available in public beta for Photoshop on web and mobile, and some Adobe apps, including Acrobat and Express, will soon be available to access directly within Microsoft's Copilot service.</p>
<p class="has-text-align-none">The AI Assistant in Photoshop for web and mobile was introduced in <a href="https://www.theverge.com/news/807811/adobe-photoshop-lightroom-premiere-pro-ai-max-2025">a private beta in October</a>, but now more people can use it to remove distractions, change backgrounds, refine lighting, adjust color, and more. This follows Adobe launching <a href="https://www.theverge.com/news/807802/adobe-express-ai-assistant-prompt-editing-beta-max-2025">similar AI assistants for Ex …</a></p>
<p><a href="https://www.theverge.com/tech/891998/adobe-photoshop-web-mobile-ai-assistant-beta-launch">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Jess Weatherbed</name>
			</author>
			
			<title type="html"><![CDATA[Adobe’s new AI video editing tool stitches clips into a first draft]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/884285/adobe-firefly-ai-video-editing-quick-cut" />
			<id>https://www.theverge.com/?p=884285</id>
			<updated>2026-02-25T08:54:31-05:00</updated>
			<published>2026-02-25T09:00:00-05:00</published>
			<category scheme="https://www.theverge.com" term="Adobe" /><category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Creators" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Adobe is launching a new Firefly tool that helps video editors to focus on storytelling by creating a first cut to refine and build around. The Quick Cut feature is launching in beta today for Firefly's video editor, allowing users to automatically assemble clips together based on text prompts and simple creator inputs. "Quick Cut [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="An image showcasing Adobe Firefly’s new Quick Cut video editing tool." data-caption="Quick Cut aims to help you spend less time staring at a blank editing timeline. | Image: Adobe" data-portal-copyright="Image: Adobe" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/02/Adobe-Quick-Cut-hero.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Quick Cut aims to help you spend less time staring at a blank editing timeline. | Image: Adobe	</figcaption>
</figure>
<p class="has-text-align-none">Adobe is launching a new Firefly tool that helps video editors to focus on storytelling by creating a first cut to refine and build around. The Quick Cut feature is launching in beta today for <a href="https://www.theverge.com/news/807809/adobe-firefly-ai-audio-generate-soundtrack-speech">Firefly's video editor</a>, allowing users to automatically assemble clips together based on text prompts and simple creator inputs.</p>
<p class="has-text-align-none">"Quick Cut empowers creators to upload their own b-roll or generate new footage and instantly turn it into a structured first cut. Goodbye empty timeline. Hello momentum," Adobe's head of product marketing for creators, Mike Polner, said in the announcement. "It's a fast way to get from 'I have clips' to 'I have an edit I ca …</p>
<p><a href="https://www.theverge.com/tech/884285/adobe-firefly-ai-video-editing-quick-cut">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Emma Roth</name>
			</author>
			
			<title type="html"><![CDATA[Here are the brands bringing ads to ChatGPT]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/877148/openai-chatgpt-advertisers-target-adobe-audible" />
			<id>https://www.theverge.com/?p=877148</id>
			<updated>2026-02-11T16:01:58-05:00</updated>
			<published>2026-02-11T11:08:03-05:00</published>
			<category scheme="https://www.theverge.com" term="Adobe" /><category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Amazon" /><category scheme="https://www.theverge.com" term="Creators" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Online Shopping" /><category scheme="https://www.theverge.com" term="OpenAI" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="Transportation" />
							<summary type="html"><![CDATA[OpenAI officially launched its advertising pilot in ChatGPT, leaving us with a better idea of the kinds of products we might see stuffed beneath our conversations with the AI chatbot. Several companies have announced plans to show ads inside ChatGPT - placements that will reportedly cost them a pretty penny - ranging from major retailers [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Vector illustration of the ChatGPT logo." data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25462010/STK155_OPEN_AI_CVirginia_D.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">OpenAI officially launched <a href="https://www.theverge.com/ai-artificial-intelligence/876029/openai-testing-ads-in-chatgpt">its advertising pilot</a> in ChatGPT, leaving us with a better idea of the kinds of products we might see stuffed beneath our conversations with the AI chatbot. Several companies have announced plans to show ads inside ChatGPT - placements that <a href="https://www.theverge.com/news/867898/openai-chatgpt-ad-pricing">will reportedly cost them a pretty penny</a> - ranging from major retailers like Target to automakers like Ford and Mazda.</p>
<p class="has-text-align-none">You'll only see ads in ChatGPT if you're a free user or <a href="https://www.theverge.com/news/863466/openai-chatgpt-go-global-release">subscribed to its cheaper $8 / month Go plan</a>. OpenAI <a href="https://www.theverge.com/news/863428/openai-chatgpt-shopping-ads-test">has said it will "clearly" label</a> the ads and that they won't influence ChatGPT's response. Here are all the brands that we know are partnering with Open …</p>
<p><a href="https://www.theverge.com/ai-artificial-intelligence/877148/openai-chatgpt-advertisers-target-adobe-audible">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Nilay Patel</name>
			</author>
			
			<title type="html"><![CDATA[Reality is losing the deepfake war]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/podcast/874038/ai-deepfakes-war-on-reality-c2pa-labels" />
			<id>https://www.theverge.com/?p=874038</id>
			<updated>2026-02-07T14:39:36-05:00</updated>
			<published>2026-02-05T10:00:00-05:00</published>
			<category scheme="https://www.theverge.com" term="Adobe" /><category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Creators" /><category scheme="https://www.theverge.com" term="Decoder" /><category scheme="https://www.theverge.com" term="Podcasts" /><category scheme="https://www.theverge.com" term="Social Media" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Today, we’re going to talk about reality, and whether we can label photos and videos to protect our shared understanding of the world around us. No really, we’re gonna go there. It’s a deep one. To do this, I’m going to bring on Verge reporter Jess Weatherbed, who covers creative tools for us — a [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A photo illustration showing X’s Community Notes and AI metadata designed to try and sort real from fake images." data-caption="" data-portal-copyright="Image: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/02/VRG_DCD_020526.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-drop-cap has-text-align-none">Today, we’re going to talk about reality, and whether we can label photos and videos to protect our shared understanding of the world around us. No really, we’re gonna go there. It’s a deep one.</p>

<p class="has-text-align-none">To do this, I’m going to bring on <em>Verge</em> reporter Jess Weatherbed, who covers creative tools for us — a space that’s been totally upended by generative AI in a huge variety of ways with an equally huge number of responses from artists, creatives, and the huge number of people who consume that art and creative output out in the world.</p>

<p class="has-text-align-none">If you’ve been listening to this show or my other show <em>The Vergecast</em>, or even just been reading <em>The Verge</em> these past several years, you know we’ve been talking about how the photos and videos taken by our phones are <a href="https://www.theverge.com/2024/9/23/24252231/lets-compare-apple-google-and-samsungs-definitions-of-a-photo">getting more and more processed and AI-generated</a> for years now. Here in 2026, we’re in the middle of a <a href="https://www.theverge.com/2024/8/22/24225972/ai-photo-era-what-is-reality-google-pixel-9">full-on reality crisis</a>, as fake and manipulated ultra-believable images and videos flood social platforms at scale and without regard for responsibility, norms, or even basic decency. The White House is <a href="https://www.theverge.com/news/866465/what-is-a-photo-whitehouse-edition">sharing AI-manipulated images of people getting arrested</a> and defiantly saying it simply won’t stop when asked about it. We have gone totally off the deep end now.</p>


<p class="has-text-align-none">Whenever we cover this, we get the same question from a lot of different parts of our audience: why isn’t there a system to help people tell the real photos and videos apart from fake ones? Some people even propose systems to us, and in fact, Jess has spent a lot of time <a href="https://www.theverge.com/2024/8/21/24223932/c2pa-standard-verify-ai-generated-images-content-credentials">covering a few of these systems</a> that exist in the real world. The most promising is something called C2PA, and her view is that so far, it’s been almost entirely a failure.</p>

<p class="has-text-align-none">In this episode, we’re going to focus on C2PA, because it’s the one with the most momentum. C2PA is a labeling initiative spearheaded by Adobe with buy-in from some of the biggest players in the industry, including Meta, Microsoft, and OpenAI. But C2PA, also sometimes referred to as Content Credentials, has some pretty serious flaws.</p>

<p class="has-text-align-none">First, it was designed as more of a photography metadata tool, <a href="https://www.theverge.com/report/806359/openai-sora-deepfake-detection-c2pa-content-credentials">not an AI detection system</a>. And second, it’s really only been half-heartedly adopted by a handful, not nearly all, of the players you would need to make it work across the internet. We’re at the point now where Instagram chief Adam Mosseri is <a href="https://www.theverge.com/news/852124/adam-mosseri-ai-images-video-instagram">publicly posting that the default should shift</a> and you should not trust images or videos the way you maybe could before. </p>

<p class="has-text-align-none">Think about that for one second. That’s a huge, pivotal shift in how society evaluates photos and videos and an idea I’m sure we’ll be coming back to a lot this year. But we have to start with the idea that we can solve this problem with metadata and labels — that we can label our way into a shared reality — and ask why that idea might simply never work.</p>

<p class="has-text-align-none">Okay, <em>Verge </em>reporter Jess Weatherbed on C2PA and the effort to label our way into reality. Here we go.</p>

<iframe frameborder="0" height="200" src="https://playlist.megaphone.fm?e=VMP3917335756" width="100%"></iframe>

<p class="has-text-align-none"><em>This interview has been lightly edited for length and clarity.</em></p>

<p class="has-text-align-none"><strong>Jess Weatherbed, welcome to <em>Decoder</em>. I want to just set this stage. Several years ago, I said to Jess, &#8220;Boy, these creator tools are criminally under-covered. Adobe as a company is criminally under-covered. Go figure out what&#8217;s going on with Photoshop and Premiere and the creator economy because there&#8217;s something there that&#8217;s interesting.&#8221;&nbsp;</strong></p>

<p class="has-text-align-none"><strong>And fast-forward, here you are on <em>Decoder</em> today and we&#8217;re going to talk about whether you can label your way into consensus reality. I just think it&#8217;s important to say that&#8217;s a weird turn of events.</strong></p>

<p class="has-text-align-none">Yeah. I keep likening the situation to the <a href="https://www.reddit.com/r/videos/comments/1at69qj/your_scientists_were_so_preoccupied_with_whether/"><em>Jurassic Park</em> meme</a>, where people thought so long about whether they could, they didn&#8217;t actually stop to think about whether they should be doing this. Now we&#8217;re in the mess that we&#8217;re in.</p>

<p class="has-text-align-none"><strong>The problem, broadly, is that there&#8217;s an enormous amount of AI-generated content on the internet. Much of it just depicts things that are flatly not real. An important subset of that is a lot of content that depicts modifications to things that actually happened. So our sense that we can just look at a video or a picture and sort of implicitly trust that it&#8217;s true is fraying, if not completely gone. And we will come to that, because that&#8217;s an important turn here, but that&#8217;s the state of play.</strong></p>

<p class="has-text-align-none"><strong>In the background, the tech industry has been working on a handful of solutions to this problem, most of which involve labeling things at the point of creation. At the moment you take a photo or the moment you generate an image, you&#8217;re going to label it somehow. The most important one of those is called C2PA. So can you just quickly explain what that stands for, what it is, and where it comes from?</strong></p>

<p class="has-text-align-none">So this is effectively a metadata standard that was kickstarted by Adobe. Interestingly enough, Twitter as well, back in the day. You can see where the logic lies. It was supposed to be that everywhere a little bit of content goes online, this embedded metadata would follow.&nbsp;</p>

<p class="has-text-align-none">What C2PA does is this: at the point that you take a picture on a camera, you upload that image into Photoshop, all of these instances would be recorded in the metadata of that file to say exactly when it was taken, what has happened to it, what tools were used to manipulate it. And then as a two-part process, all of that information could then hypothetically be read by online platforms where you would see that information.&nbsp;</p>

<p class="has-text-align-none">As consumers, as internet users, we wouldn&#8217;t have to do anything. We would be able to, in this imaginary reality, go on Instagram or X and look at a photo and there would be a lovely little button there that just says, &#8220;This is AI-generated,&#8221; or, &#8220;This is real,&#8221; or some sort of authentication. That has obviously proven a lot more difficult in reality than on paper.</p>

<p class="has-text-align-none"><strong>Tell me about the actual label. You said it&#8217;s metadata. I think a lot of people have a lot of experience with metadata. We are all children of the MP3 revolution. Metadata can be stripped, it can be altered. What protects the C2PA metadata from just being changed?</strong></p>

<p class="has-text-align-none">They argue that it&#8217;s quite tamper-proof, but it&#8217;s a little bit of an “actions speak louder than words” situation, unfortunately. Because while they say it&#8217;s tamper-proof, this thing is supposed to be able to resist being screenshot, for example, but then OpenAI, who is actually one of the steering community members behind this standard, openly says it&#8217;s incredibly easy to strip to the point that online platforms might actually do that accidentally. So the theory is there&#8217;s plenty behind it to make it robust, to make it hard to remove, but in practice, that just isn&#8217;t the case. It can be removed, maliciously or not.</p>

<p class="has-text-align-none"><strong>Are there competitors to C2PA?</strong></p>

<p class="has-text-align-none">It&#8217;s a little bit of a confusing landscape, because I think it’s one of the few tech areas that I would say there shouldn&#8217;t actively be competition. And from what I&#8217;ve seen, from what I&#8217;ve spoken to with all these different providers, there isn&#8217;t competition between them as much as they&#8217;re all working towards the same goal.&nbsp;</p>

<p class="has-text-align-none"><a href="https://www.theverge.com/news/824786/google-gemini-synthid-ai-image-detection">Google SynthID is similar</a>. It&#8217;s technically a watermarking system more so than a metadata system, but they work on a similar premise that stuff will be embedded into something you take that you&#8217;ll then be able to assess later to see how genuine it is. The technicalities behind that are difficult to explain in a shortened context, but they do operate on different levels, which means technically they could work together. A lot of these systems can work together.</p>

<p class="has-text-align-none">You&#8217;ve got inference-based systems as well, which is where they will look at an image or a video or a piece of music and they will pick up telltale signs that apparently it may have been manipulated by AI and they will give you a rating. They can never really say yes or no, but they&#8217;ll give you a likelihood rating.&nbsp;</p>

<p class="has-text-align-none">None of it will stand on its own to be a one true solution. They&#8217;re not necessarily competing to be the one that everyone uses, and that&#8217;s the mess that C2PA is now in. It&#8217;s been lauded and it&#8217;s been grandstanded. They say, &#8220;This will save us,&#8221; whereas it was never designed to do that, and it certainly isn&#8217;t equipped to.</p>

<p class="has-text-align-none"><strong>Who runs it? Is it just a group of people? Is it a bunch of engineers? Is it simply Adobe? Who&#8217;s in charge?</strong></p>

<p class="has-text-align-none">It&#8217;s a coalition. The most prominent name you&#8217;ll see is Adobe because they’re the ones that shout about it the most. They&#8217;re one of the founding members of the Content Authenticity Initiative, which has helped to develop the standard. But you&#8217;ve got big names that are part of the steering committee behind it, which are supposed to be the groups involved with helping other people to adopt it, which is the important thing, because otherwise it doesn&#8217;t work. And part of this process, if you&#8217;re not using it, C2PA falls over. And OpenAI is part of that. Microsoft, Qualcomm, Google, all of these huge names are all involved with that and are supposedly helping to &#8230; They&#8217;re very careful not to say “develop it,” but to promote its adoption and to encourage other people, in regards to who&#8217;s actually working on it.</p>

<p class="has-text-align-none"><strong>Why are they careful not to say they&#8217;re developing it?</strong></p>

<p class="has-text-align-none">There isn&#8217;t any confirmation that I can find where it&#8217;s got something like, I don&#8217;t know, Sam Altman saying, &#8220;We&#8217;ve found this flaw in C2PA, and therefore we&#8217;re helping to address any kind of falls and pitfalls it may have.&#8221; It&#8217;s always just anytime I see it mentioned, it&#8217;s whenever a new AI feature has been rolled out and there&#8217;s a convenient little disclaimer slapped on the bottom, kind of a, &#8220;Yay, we did it. Look, it&#8217;s fine, a new AI thing, but we have this totally cool system that we use that&#8217;s supposed to make everything better.&#8221; They don&#8217;t actively say what they&#8217;re doing to improve the situation, just that they&#8217;re using it and they&#8217;re encouraging everyone else to be using it too.</p>

<p class="has-text-align-none"><strong>One of the most important pieces of the puzzle here is labeling the content at capture. We&#8217;ve all seen cell phone videos of protests and government actions and horrific government actions. And I think Google has C2PA in the Pixel line of phones. So video that comes off a Pixel phone or photos that come off a Pixel phone have some embedded metadata that says it&#8217;s real.&nbsp;</strong></p>

<p class="has-text-align-none"><strong>Apple notably doesn&#8217;t. Have they made any mention of C2PA or any of these other standards that would authenticate the photos or videos coming off an iPhone? That seems like an important player in this entire ecosystem.</strong></p>

<p class="has-text-align-none">They haven&#8217;t officially or on record. I have sources that say apparently they were involved in conversations to at least join, but nothing public-facing at the minute. There has been no confirmation that they are actually joining the initiative or even adopting Google SynthID technology. They&#8217;re very carefully skirting on the sidelines for some reason.&nbsp;</p>

<p class="has-text-align-none">It&#8217;s a little bit unclear as to whether they&#8217;re letting their caution about AI generally stem into this at this point. Because as far as I&#8217;m concerned, there is not going to be one true solution, so I don&#8217;t really know what Apple is waiting for, and they could be making a difference, but no, they haven&#8217;t been making any kind of declarations about what we should be using to label AI.</p>

<p class="has-text-align-none"><strong>That&#8217;s so interesting to me. I mean, I love a standards war, and we&#8217;ve covered many standards wars and the politics of tech standards are usually ferocious. And they&#8217;re usually ferocious because whoever controls the standard generally stands to make the most money, or whoever can drive the standard and an extended standard can make a lot of money.&nbsp;</strong></p>

<p class="has-text-align-none"><strong>Apple has played that game maybe better than anybody. It’s driven a lot of the USB standard. It was behind USB-C. It drove a lot of Bluetooth standard, which it extended for AirPods. I can&#8217;t see how you make money with C2PA, and it seems like Apple is just letting everyone else figure it out and then they will turn it on, and yet it feels like the responsibility to be the most important camera maker in the world is to drive the standard so people trust the images and videos that come off the cameras.</strong></p>

<p class="has-text-align-none"><strong>Does that dynamic come out anywhere in your reporting or your conversations with people about this standard — that it&#8217;s not really there to make money, it&#8217;s there to protect reality?</strong></p>

<p class="has-text-align-none">The moneymaking side of things never really comes into the conversation. It&#8217;s always that people are very quick to assure me that things are progressing. There&#8217;s never any kind of a conversation about incentive to motivate other people to do so. Apple doesn&#8217;t stand to really gain anything financially from this other than maybe the reassurance that people know that if they&#8217;re taking a picture with their iPhone, it could help to contribute to some sense of establishing what is still real and what isn&#8217;t. But then that&#8217;s a whole other can of worms because if iPhone is doing it, then all the platforms that we see those pictures on also have to be doing it. Otherwise, I&#8217;m just kind of verifying that this is real to my own eyes as me, the person that uses my iPhone.</p>

<p class="has-text-align-none">Apple may be aware that all the solutions that we currently have available are inherently flawed, so by throwing your lot in as one of the biggest names in this industry, and one that could arguably make the most difference, you&#8217;re almost exacerbating the situation that Google and OpenAI are now in, which is that they keep lauding this as the solution and it doesn&#8217;t fucking work. I think Apple needs to be able to stand on its laurels about something, and nothing is going to offer them that at the minute.</p>

<p class="has-text-align-none"><strong>I want to come back to how specifically it doesn&#8217;t work in one second. Let me just stay focused on the rest of the players on the content creation side of the ecosystem. There&#8217;s Apple, and there&#8217;s Google, which uses it in the Pixel phones. It&#8217;s not in Android proper, right? So if you have a Samsung phone, you don&#8217;t get C2PA when you take a picture with a Samsung phone. What about the other camera makers? Are Nikon and Sony and Fuji all using the system?</strong></p>

<p class="has-text-align-none">A lot of them have joined. They&#8217;ve released new camera models that have got the system embedded. The problem that they&#8217;re having now is in order for this to work, you don&#8217;t just have to do it on your new cameras, because every photographer in the world worth their salt isn&#8217;t going to go out every year and buy a brand new camera because of this technology. It would be inherently useful, but that&#8217;s just not going to happen. So backdating existing cameras is where the problem is going to be.</p>

<p class="has-text-align-none">We&#8217;ve spoken to a lot of different companies. As you said, Sony has been involved with this, Leica, Nikon, all of them. The only company willing to speak to us about it was Leica, and even they were very vague on how internally this is progressing. They just keep saying that it&#8217;s part of the solution, it&#8217;s part of the step that they&#8217;re going to be taking. But these cameras aren&#8217;t being backdated at the minute. If you have an established model, it&#8217;s 50/50 whether it&#8217;s even possible to update it with the ability to log these metadata credentials in from that point.</p>

<p class="has-text-align-none"><strong>There are other sources of trust in the photography ecosystem. The big photo agencies require the photographers who work there to sign contracts that say they won&#8217;t alter images, they won&#8217;t edit images in ways that fiddle with reality. Those photographers could use the cameras that don&#8217;t have the system, upload their photos to, I don&#8217;t know, Getty or AFP or Shutterstock, and then those companies could embed the metadata, and so “You can trust us.” Are any of them participating in that way?</strong></p>

<p class="has-text-align-none">We know that Shutterstock is a member. At the minute, the system that you&#8217;re describing would probably be the best approach that we have to making this beneficial, at least for us as people that see things online and want to be able to trust whether protest images or horrific things that we&#8217;re seeing online are actually real. To have a trusted middleman, as it were. But that system itself hasn&#8217;t been established. We do know that Shutterstock is involved. They are part of the C2PA committee, or they have general membership.&nbsp;</p>

<p class="has-text-align-none">So they are on board with using the standard, but they&#8217;re not actively part of the process behind how it&#8217;s going to be adopted at a further stage. Unless we can also get the other big players involved for stock imagery, then who knows whether this is going to go, but Shutterstock actually implementing it as a middleman system would be probably the most beneficial way to go.</p>

<p class="has-text-align-none"><strong>I&#8217;m just thinking about this in terms of the stuff that is made, the stuff that is distributed and the stuff that is consumed. It seems like at least at the moment of creation, there is some adoption, right? Adobe is saying, &#8220;Okay, in Photoshop, we&#8217;re going to let you edit photos and we&#8217;re going to write the metadata to the images and pass them along.&#8221; A handful of phonemakers, Google, or at least in its phones, are saying, &#8220;We&#8217;re going to write the metadata. We&#8217;re going to have SynthID.&#8221; OpenAI is putting the system into Sora 2 videos, </strong><a href="https://www.theverge.com/report/806359/openai-sora-deepfake-detection-c2pa-content-credentials"><strong>which you wrote about</strong></a><strong>.</strong></p>

<p class="has-text-align-none"><strong>On the creation side, there&#8217;s some amount of, &#8220;Okay, we&#8217;re going to label this stuff. We&#8217;re going to add the metadata.&#8221; The distribution side seems to be where the mess is, right? Nobody&#8217;s respecting the stuff as it travels across the internet. Talk about that. You wrote about Sora 2 videos and how they exploded across the internet. This is when it should have not been controversial to put labels everywhere saying, &#8220;This is AI-generated content,&#8221; and yet it didn&#8217;t happen. Why didn&#8217;t that happen anywhere?</strong></p>

<p class="has-text-align-none">It generally exposes the biggest flaw that this system has, and every system like it, to its credit. I don&#8217;t want to defend C2PA because it&#8217;s doing a bad job. It wasn&#8217;t ever designed to do it on this scale. It wasn&#8217;t designed to apply to everything. So in this example, yes, platforms need to be adopting it to actually read that metadata, providing they&#8217;re not the ones ripping it out during the process of actually supposedly scanning for it, but unless this is absolutely everywhere, it&#8217;s just not going to go.</p>

<p class="has-text-align-none">Part of the problem that we&#8217;re seeing is, as much as they keep saying, &#8220;It&#8217;s going to be really robust, it&#8217;s going to be really efficient, you can embed this at any other stage,&#8221; there are still flaws with how it&#8217;s being interpreted, even if it is scanned. So that&#8217;s a big thing. It&#8217;s not necessarily that platforms aren&#8217;t picking up the metadata or stripping it out. It&#8217;s that they have no idea what to do with it when they actually have it. And social media platforms like LinkedIn, Instagram, and Threads are all supposed to be using this standard, yet there is a chance that when you upload any kind of image or video to the platform, any metadata that was involved is just going to be stripped out regardless.</p>

<p class="has-text-align-none">Unless every platform, literally every platform that we access and use online, can come to an agreement that they are going to be scanning for very, very specific details, adjusting their upload processes, and adjusting how they communicate to their users, there won&#8217;t be the total uniform conformity a system like this needs to actually make a difference, not even just to work. And we&#8217;re clearly not going to see that.</p>

<p class="has-text-align-none">One of the conversations I had, actually, was when I was grilling Andy Parsons, who is head of content credentials at Adobe—that&#8217;s their word for implementing C2PA data—I commented on the Grok mess that we&#8217;ve had recently. Twitter was a founding member of this, and then when Elon purchased the platform, it disappeared. And by the sounds of it, they&#8217;ve been trying to entice X to get back involved, but that&#8217;s just not going anywhere. And X, however we see its user base at the minute, has millions of people using it, and that is a portion of the internet that is never going to benefit from this system because it has no interest in adopting it. So you&#8217;re never going to be able to address that.</p>

<p class="has-text-align-none"><strong>I&#8217;m going to read you this quote from Adam Mosseri, who runs Instagram. On New Year&#8217;s Eve, he just dropped a bomb and he <a href="https://www.theverge.com/2024/9/23/24252231/lets-compare-apple-google-and-samsungs-definitions-of-a-photo">put out a blog post in the form of a 20-carousel Instagram slideshow</a>, which has its own PhD thesis of ideas about how information travels on the internet embedded within it, but he put out a 20 slideshow on Instagram. In it, he said, &#8220;For most of my life, I could safely assume photographs or videos were largely accurate captures of moments that happened. This is clearly no longer the case and it&#8217;s going to take us years to adapt. We&#8217;re going to move from assuming what we see as real by default to starting with skepticism.&#8221;</strong></p>

<p class="has-text-align-none"><strong>This is the end point, right? This is “you can&#8217;t trust your eyes,” which means you can no longer trust a photo, you can&#8217;t trust that a video of any event is actually real, and reality will start to crumble. And you can just look at events in the United States over the past month. The reaction to ICE killing Alex Pretti was, &#8220;Well, we all saw it,&#8221; and it&#8217;s because there was lots of video of that event from multiple angles and everyone said, &#8220;Well, we can all see it.&#8221;</strong></p>

<p class="has-text-align-none"><strong>The foundation of that is we can trust that video. And I&#8217;m looking at Adam Mosseri saying, &#8220;We&#8217;re going to start with skepticism. We can no longer assume photos or videos are accurate captures of moments that happened.&#8221; This is the turn. This is the point of the standard. Do you see Mosseri saying this out loud about Instagram as the end point of this? Is this war just lost?</strong></p>

<p class="has-text-align-none">I would say so. I think we&#8217;ve been waiting for tech to basically admit it. I see them using stuff like C2PA as a merit badge at this point because they&#8217;re not really endeavoring to push it to its full potential. Even if it was never going to be the ultimate solution, it could have been at least some kind of benefit.&nbsp;</p>

<p class="has-text-align-none">We know that they&#8217;re not doing this because in the same message, Mosseri is describing this like, &#8220;Oh, it would be easier if we could just tag real content. That&#8217;s going to be so much more doable, and that would be good, and we&#8217;ll circle those people.&#8221; It&#8217;s like, &#8220;My guy, that&#8217;s what you&#8217;re doing.&#8221; C2PA is that. It&#8217;s not specifically an AI tagging system. It&#8217;s supposed to say, &#8220;Where has this been and who took this? Who made this? What has happened to it?&#8221;</p>

<p class="has-text-align-none">So if we&#8217;re going for authenticity, Mosseri is just openly saying, &#8220;We&#8217;re using this thing and it doesn&#8217;t work, but imagine if it did. Wouldn&#8217;t that be great?&#8221; That&#8217;s deeply unhelpful. It&#8217;s his way of deeply unhelpfully musing into some system that will be able to, I don&#8217;t know, regain some kind of trust, I guess, while also acknowledging that we&#8217;re already there.</p>

<p class="has-text-align-none"><strong>I&#8217;m going to make you keep arguing with Adam Mosseri. We&#8217;ve invited Adam on the show. We&#8217;ll have him on and maybe we can add this debate with him in person, but for now you&#8217;re going to keep arguing with his blog post. He says, &#8220;Platforms like Instagram will do good work identifying AI content, but it&#8217;ll get worse over time as AI gets better. It&#8217;ll be more practical to fingerprint real media than fake media. Labeling is only part of the solution,&#8221; he says. &#8220;We need to surface much more context about the accounts sharing content so people can make informed decisions.&#8221;</strong></p>

<p class="has-text-align-none"><strong>So he&#8217;s saying, &#8220;Look, we&#8217;ll start to sign all the images and everything, but actually, you need to trust individual creators. And if you trust the creator, then that will solve the problem.&#8221; And it seems like you&#8217;re really skipping over the part where creators are fooled by AI-generated content all the time. And I don&#8217;t mean that to say creators as a class of people. I mean, literally just everyone is fooled by AI content all the time. If you&#8217;re trusting people to understand it and then share what they think is real, and then you&#8217;re trusting the consumers to trust the people, that also seems like a whirlwind of chaos.</strong></p>

<p class="has-text-align-none"><strong>On top of that, and you&#8217;ve written about this as well, there&#8217;s the notion that these labels make you mad at people, right? If you label a piece of content as AI-generated, the creator gets furious because it makes their work seem less important or less valuable. The audiences yell at the creators. There&#8217;s been a real push to get rid of these labels entirely because they seem to make everyone mad.</strong></p>

<p class="has-text-align-none"><strong>How does that dynamic work here? Does any of this have a way through?</strong></p>

<p class="has-text-align-none">I mean, it doesn&#8217;t. And the other amusing thing is Instagram knows this the hard way. Mosseri should remember, one of the very first platform implementations they did of reading C2PA was done by Facebook and Instagram a couple of years ago where they were just slapping “made with AI” labels onto everything because that&#8217;s what the metadata told them.&nbsp;</p>

<p class="has-text-align-none">The big problem we have here is communication, which is the biggest part of it. How do you communicate a complex bucket of information to every person that&#8217;s going to be on your platform and give them only the information that they need? If I&#8217;m a creator, it shouldn&#8217;t have to matter if I was using AI or not, but if I&#8217;m a person trying to see if, again, a photo is real, I would greatly benefit from an easy button or label that verifies authenticity.</p>

<p class="has-text-align-none">Finding the balance for that has proven next to impossible because, as you said, people just get upset about it. But then how do you define how much AI in something is too much AI? Photoshop and all of Adobe&#8217;s tools do embed these content credentials in the metadata, and it will say when AI has been used. But AI is in so many tools, and not necessarily in the generative way that we assume, like, &#8220;I&#8217;m going to click on this. It&#8217;s going to add something new to an image that was never there before.&#8221;</p>

<p class="has-text-align-none">There are very basic editing features that video editors and photographers now use that will have some kind of information embedded into them to say that AI was involved in that process. And now, with creators on the other side of that, they might not know that what they&#8217;re using is AI. We&#8217;re at the point where, unless you can go through every platform and every editing suite with a fine-tooth comb and designate what counts as AI, this is a non-starter. He&#8217;s already hit the point that we can&#8217;t communicate this to people effectively.</p>

<p class="has-text-align-none"><strong><em>Let’s pause here for a second, because I want to lay out some important context before we keep digging in.&nbsp;</em></strong></p>

<p class="has-text-align-none"><strong><em>If you’ve been a </em>Verge<em> reader, you know that we’ve been asking a very simple question for over five years now: What is a photo? It sounds simple, but it’s actually quite complicated. After all, when you push the shutter button on a modern smartphone, you’re not actually capturing a single moment in time, which is what most people think a photo is. </em></strong></p>

<p class="has-text-align-none"><strong><em>Modern phones actually take a lot of frames both before and after you press the shutter button and merge them into a single, final photo. That’s to do things like even out the shadows and highlights of the photo, capture more texture, and accomplish feats like Night Mode. </em></strong></p>

<p class="has-text-align-none"><em><strong>There was a <a href="https://www.theverge.com/2023/3/13/23637401/samsung-fake-moon-photos-ai-galaxy-s21-s23-ultra">mini-scandal a few years ago</a> where if you tried to take a photo of the moon with a Samsung phone, the camera app would just generate a picture of the moon. Of course, Google Pixel phones have all kinds of <a href="https://www.theverge.com/2023/10/7/23906753/google-pixel-8-pro-photo-editing-tools-ai">Gemini-powered AI tools in them</a>, to the point where Google now says the point of the camera is to capture “memories,” not moments in time. This is a lot, and like I said, we’ve been talking about it for years here at </strong></em><strong>The Verge</strong><em><strong>.</strong></em></p>

<p class="has-text-align-none"><strong><em>Now, generative AI is pushing the “what is a photo” debate to its absolute limits. It’s hard to even agree on how much AI editing makes something an AI-edited photo, or even whether these features should be considered AI in the first place. If that’s so hard, then how can we possibly reach consensus on what’s real and what we label as real? Camera makers have basically <a href="https://www.theverge.com/2024/9/23/24252231/lets-compare-apple-google-and-samsungs-definitions-of-a-photo">thrown their hands up here</a>, and now we’re seeing the major social media platforms do the same thing.&nbsp;</em></strong></p>

<p class="has-text-align-none"><strong><em>I bring this up partially because it’s an obsession of mine, but also I think laying it all out makes it obvious how very, very complicated this all is, which brings us back to Adam Mosseri, Instagram, and the AI labeling debate.</em></strong></p>

<p class="has-text-align-none"><strong>I will give some credit to Instagram and Adam Mosseri here in that they are at least trying and thinking about it and publicly thinking about it in a way that none of the other social networks seem to have given any shred of consideration to. TikTok, for example, is nowhere to be found here. They are just going to distribute whatever they distribute without any of these labels, and it doesn&#8217;t seem like they&#8217;re part of the standard. I think X is absolutely just fully down the rabbit hole of distributing pure AI misinformation. YouTube seems like the outlier, right? Google runs SynthID, they&#8217;re in C2PA, they&#8217;re embedding the information literally at the point of capture in Pixel phones. What is YouTube doing?</strong></p>

<p class="has-text-align-none">A very similar approach to TikTok, actually, because weirdly enough, TikTok is involved with this. It uses the standard. It&#8217;s not necessarily a steering member, but it is involved. And YouTube has a similar approach where, depending on the format you&#8217;re viewing on, whether mobile, your TV, or your computer, you&#8217;ll get a little AI information label that you have to click into to get the information you need.</p>

<p class="has-text-align-none">So their problem is making sure it&#8217;s robust enough, because this doesn&#8217;t appear consistently. There are AI videos all over YouTube that don&#8217;t carry this and there&#8217;s never a good explanation. Every time I&#8217;ve asked them, it&#8217;s always just, &#8220;We&#8217;re working on it. It&#8217;s going to get there eventually,&#8221; whatever, or they ask for very specific examples and then run in and fix those while I&#8217;m like, &#8220;Okay, but if this is falling through the net, how can you stand by this as a standard and your own SynthID stuff? And you&#8217;re clearly using it to soothe concerns that people have despite its ineffectiveness.&#8221;</p>

<p class="has-text-align-none">They don&#8217;t seem to be progressing any further than just presenting those labels probably because of what happened to Instagram, and now we&#8217;ve just got this situation where Meta does seem to be standing on the sidelines going, &#8220;Well, we tried, so let&#8217;s just see what someone else can do and maybe we&#8217;ll adopt it from there.&#8221; But YouTube doesn&#8217;t really want to address the slop problem because so much of YouTube content that&#8217;s shown to new people is now slop and it&#8217;s proven to be quite profitable for them.</p>

<p class="has-text-align-none"><strong>Google just had one of its best quarters ever. Neal Mohan, the CEO of YouTube, has </strong><a href="https://www.theverge.com/22606296/youtube-shorts-fund-neal-mohan-decoder-interview"><strong>been on the show in the past</strong></a><strong>, and we will have him on the show again in the future. He announced at the top of the year that the future of YouTube is AI, and they&#8217;ve announced features like letting creators have AI versions of themselves do the sponsored content, so that the creators can make whatever they actually want to make.</strong></p>

<p class="has-text-align-none"><strong>There’s a part of me that completely understands that. Yes, my digital avatar should go make the ads so I can make the content that the audience is actually here for. And there&#8217;s a part of me that says, &#8220;Oh, they&#8217;re never going to label anything,&#8221; because the second they start labeling that as AI-generated, which it clearly will be, they will devalue it. And there&#8217;s something about that in the creative community with the audience that seems important.</strong></p>

<p class="has-text-align-none"><strong>I know you&#8217;ve thought about this deeply. You&#8217;ve done some reporting here. What is it about the AI-generated label that makes everything devalued, that makes everybody so angry?</strong></p>

<p class="has-text-align-none">I think it&#8217;s people trying to put a value on creativity itself. If I was looking at luxury handbags and I see that they&#8217;ve not paid a creative team—This is a creative company that makes wonderful products, it&#8217;s supposed to stand on the quality of all of the stuff that it sells you. If I find that you&#8217;re not involving creative personnel in making an ad for me to want to buy your handbag, why would I want to buy it in the first place?</p>

<p class="has-text-align-none">Not everyone will have that perspective, but as someone that worked in the creative industry for a long time, you see the work that goes into something, even if it&#8217;s something as laughable as a commercial. I love TV commercials because as annoying as they are and as much as they&#8217;re trying to get me to buy something, you can see the work that went into it, that someone had to write that story, had to get behind the film cameras, had to make the effects and all that kind of stuff.</p>

<p class="has-text-align-none">So it feels like if you&#8217;re taking a shortcut to remove all of that, then you&#8217;re already cheapening the process yourself. I feel, from the conversations I&#8217;ve had with other creatives, that the initial response of thinking AI looks cheap is because it&#8217;s meant to be cheap. That&#8217;s why it exists. It exists for efficiency and affordability. If you come at me trying to sell me something with that, it&#8217;s probably not going to make the best first impression unless you make it utterly undetectable. And if you have a big “made with AI” or “assisted with AI” label on it, it&#8217;s no longer undetectable, because even if I can&#8217;t see it, you&#8217;ve now just admitted that it&#8217;s there.</p>

<p class="has-text-align-none"><strong>That&#8217;s a lot of mixed incentives for these platforms. And it occurs to me as we&#8217;ve been having this conversation, we&#8217;ve been kind of presuming a world in which everyone is a good-faith actor and trying to make good experiences for people. And I think a lot of the executives of these companies would love to presume that that is the world in which they operate, and whether or not the label makes people mad and you want to turn it off or whether or not you can trust the videos of significant government overreach and cause a protest, that&#8217;s still operating in a world of good faith.</strong></p>

<p class="has-text-align-none"><strong>Right next to that is reality, the actual reality in which we live, where lots of people are bad-faith actors who are very much incentivized to create misinformation and disinformation, and some of those bad-faith actors at this moment in time are the United States government. The White House publishes AI photos all the time. The Department of Homeland Security posts AI-generated imagery up, down, left, right, and center. You can see AI-manipulated photos of real people modified to look like they&#8217;re crying as they&#8217;re being arrested instead of what they actually looked like.</strong></p>

<p class="has-text-align-none"><strong>This is a big deal, right? This is a war on reality from literally the most powerful government in the history of the world. Are the platforms ready for that at all? Because they&#8217;re being faced with the problem, right? This is the stuff you should label. No one should be mad at you for labeling this, and they seem to be doing nothing. Why do you think that is?</strong></p>

<p class="has-text-align-none">I think it&#8217;s because it&#8217;s the same process, right? What we&#8217;re talking about is a two-way street. You&#8217;ve got the people who want to identify AI slop, or maybe they don&#8217;t, but people want to be able to see what is and what isn&#8217;t AI, but then you&#8217;ve got the more insidious situation of, “We actually want to be able to tell what is real, but it unfortunately benefits too many people to make that confusing now.” The solution is for both. AI companies and platforms are profiting off of all of the stuff that they&#8217;re showing us and making it much more efficient for content creators to slap stuff in front of you.</p>

<p class="has-text-align-none">We&#8217;re in a position now where there&#8217;s more online than we&#8217;ve ever seen because everything is being funneled out. Why would they want to harm that profit stream, effectively, by having to slam on the brakes of development until they can figure out how they are going to effectively be able to call out when deepfakes are proving to be a problem. The methods of being put in front of it, rather than setting up some kind of middle system like the Shutterstock model we discussed earlier, where all press images now have to come from one authority that has to verify the identity of everyone taking them. Maybe that&#8217;s a possibility, but we are so far from that point and, to my knowledge, no one&#8217;s instigated setting something like that up. So they&#8217;re just kind of relying on everyone talking about this in good faith.</p>

<p class="has-text-align-none">Again, every conversation I&#8217;ve had about this is, &#8220;We&#8217;re working on it. It&#8217;s a slow process. We&#8217;re going to get there eventually. Oh, it was never designed to do all of this stuff anyway.&#8221; So it&#8217;s very blasé and low effort, really: &#8220;We&#8217;ve joined an initiative, what more do you want?&#8221; It&#8217;s incredibly frustrating, but that seems to be the reason nothing is developing, because in order to develop any further, in order to actually help us, they would have to pause. They would have to stop and think about it, and they&#8217;re too busy rolling out every other tool and feature they can think of because they have to. They have to keep their shareholders happy. They have to keep us as consumers happy while also saying, &#8220;Ignore everything else that&#8217;s going on in the background.&#8221;</p>

<p class="has-text-align-none"><strong>When I say there&#8217;s mixed incentives here, one of the things that really gets me is that the biggest companies investing in AI are also the biggest distributors of information. They&#8217;re the people who run the social platforms. So Google obviously has massive investments in AI. They run YouTube. Meta has massive investments in AI, to what end unclear, but massive investments in AI. They run Instagram and Facebook and WhatsApp and the rest.</strong></p>

<p class="has-text-align-none"><strong>Just down the line, you can see, “Okay, Elon Musk is going to spend tons of money in xAI and he runs Twitter.” And this is a big problem, right? If your business, your money and your free cash flow is generated by the time people are spending on your platforms and then you&#8217;re plowing those profits back into AI, you can&#8217;t undercut the thing you&#8217;re spending the R&amp;D money on by saying, “We&#8217;re going to label it and make it seem bad.”</strong></p>

<p class="has-text-align-none"><strong>Are there any platforms that are doing it, that are saying, “Hey, we&#8217;re going to promise you that everything you see here is real?” Because it seems like a competitive opportunity.</strong></p>

<p class="has-text-align-none">Very few. There&#8217;s an <a href="https://cara.app/explore">artist platform called Cara</a>, which is so committed to supporting artists that it says it won&#8217;t allow any AI-generated artwork on the site. But it hasn&#8217;t clearly communicated how it&#8217;s going to do that, because saying it is one thing and doing it is another thing entirely.&nbsp;</p>

<p class="has-text-align-none">There are a million reasons why we don&#8217;t have a reliable detection method at the minute. So if I pretend to be an artist and just feed AI-generated images onto that platform, there&#8217;s very little they can really do about it. Anyone making those statements, saying, &#8220;Yeah, we&#8217;re going to stand on merit and we&#8217;re going to keep AI off of the platform,&#8221; well, how? They can&#8217;t. The systems for doing so are being developed by AI providers, as we&#8217;ve said, or at least AI providers are deeply involved with a lot of these systems, and there is no guarantee for any of it.&nbsp;</p>

<p class="has-text-align-none">So we&#8217;re still relying on how humans intercept this information to be able to tell people how much of what they can see is trustworthy. That&#8217;s still kind of putting the onus on us as people. It&#8217;s, &#8220;Well, we can give you a mishmash of information and then you decide whether it&#8217;s reliable or not.&#8221; And we haven&#8217;t operated in that way as a society for years. People didn&#8217;t read the newspapers to make their own mind up about stuff. They wanted information and facts, and now they can&#8217;t get that.</p>

<p class="has-text-align-none"><strong>Is there user demand for this? This does seem like the incentive that will work. If enough people say, &#8220;Hey, I don&#8217;t know if I can trust what I see. You have to help me out here, make this better,&#8221; would that push the platforms into labeling?&nbsp;</strong></p>

<p class="has-text-align-none"><strong>Because it seems like the breakdown is at the platform level, right? The platforms are not doing enough to showcase even the data they have, let alone demand more. But it also seems like the users could simply say, &#8220;Hey, the comment section of every photo in the world now is just an argument about whether or not this is AI. Can you help us out?&#8221; Would that push them into improvement?</strong></p>

<p class="has-text-align-none">I would like to think it would push them into at least being more vocal about their involvement at the minute. We&#8217;ve got, again, a two-sided thing. At the minute, you can&#8217;t tell if a photo is real, but also, a less nefarious thing is that Pinterest is now unusable. As a creative, if I want to use the platform Pinterest, I cannot tell what is and what isn&#8217;t AI. I mean I can, but a lot of people won&#8217;t be able to. And there is so much demand for a filter for that website just to be able to go, &#8220;I don&#8217;t want any of this, please don&#8217;t show me anything that&#8217;s generated by AI.&#8221; That hasn&#8217;t happened yet. They&#8217;ve done a lot of other stuff on it, but they&#8217;re involved with the process behind developing these systems.</p>

<p class="has-text-align-none">It&#8217;s kind of more the problem that they&#8217;ve set themselves an impossible task. In order to use any of the systems that we&#8217;ve established so far, you need to be best friends with every AI provider on the planet, which isn&#8217;t going to happen because we&#8217;ve got nefarious third-party things that focus entirely on stuff like nudifying people or a deepfake generation entirely. This isn&#8217;t OpenAI or the big name models, but they exist and they&#8217;re usually what&#8217;s used to do this kind of underground activity. They&#8217;re not going to be on board with it. So you can&#8217;t make bold promises about resolving the problem universally when there is no solution at hand at the minute.</p>

<p class="has-text-align-none"><strong>When you talk to the industry, when I hear from the industry, it is the drumbeat that you&#8217;ve mentioned several times. &#8220;Look, it&#8217;s going to get better. It&#8217;s going to be slow. Every standard is slow. You have to give it time.&#8221; It sounds like you don&#8217;t necessarily believe that. You think that this has already failed. Explain that. Do you think this has already failed?</strong></p>

<p class="has-text-align-none">Yeah, I would say this has failed. I think this has failed for what has been presented to us, because what C2PA was for and what companies have been using it for are two different things to me. I will give Adobe its credit, because Adobe&#8217;s done a lot of work on this. And what the system was meant to do was, if you are a creative person, help you prove that you made a thing and how you made a thing. And that has benefit. I see it being used in that context every day. But then a lot of other companies got involved and said, &#8220;Cool, we&#8217;re going to use this as our AI safeguard, basically. We&#8217;re using this system and it&#8217;ll tell you, when you post it somewhere else, whether it&#8217;s got AI involved with it, which means that we&#8217;re the good guys because we&#8217;re doing something.&#8221;</p>

<p class="has-text-align-none">And that&#8217;s what I have a problem with. Because C2PA has never stood up and said, &#8220;We are going to fix this for you.&#8221; A lot of companies came on board and went, &#8220;Well, we&#8217;re using this and this is going to fix it for you when it works.&#8221; And that&#8217;s an impossible task. It&#8217;s just not going to happen. If we&#8217;re thinking about adopting this platform, just this platform, even in conjunction with stuff like SynthID or inference methods, it&#8217;s never going to be an ultimate solution, so I would say resting the pressure on “We have to have AI detection and labeling,” it&#8217;s failed. It&#8217;s dead in the water. It&#8217;s never going to get to a universal solution.</p>

<p class="has-text-align-none">That doesn&#8217;t mean it&#8217;s not going to help. If they can figure out a way to effectively communicate all of this metadata and robustly keep it in check, make sure it&#8217;s not being removed at every instance of being uploaded, then yeah, there&#8217;ll be some platforms where we&#8217;ll be able to see if something was maybe generated by the eye or maybe it was a verified creator badge, something, whatever Mosseri is talking about where we&#8217;re going to have to start verifying photographers through metadata and all of this other information, but there is not going to be a point in the next three, five years where we sign on and go, &#8220;I can now tell what&#8217;s real and what&#8217;s not because of C2PA.&#8221; That&#8217;s never going to happen.</p>

<p class="has-text-align-none"><strong>It does seem like these platforms, maybe modernity as we experience it today, have been built on, “You can trust the things that come off these phones.” You can just see it over and over and over again. Social movements rise and fall based on whether or not you can trust the things that phones generate. And if you destabilize that, you&#8217;re going to have to build all kinds of other systems. I&#8217;m not sure if C2PA is it. I&#8217;m sure we will hear from the C2PA folks. I&#8217;m sure we will hear from Adam and from Neal and the other platform owners on Decoder. Again, we&#8217;ve invited everybody on.</strong></p>

<p class="has-text-align-none"><strong>What do you think the next turn here is? Because the pressure is not going to relent. What&#8217;s the next thing that could happen?</strong></p>

<p class="has-text-align-none">From this turn of events, there&#8217;s probably going to be some kind of regulatory effort, some kind of legal involvement. Up until this point, there have only been murmurs about how we&#8217;re going to regulate this stuff, like with the Online Safety Act in the UK. Everything is now pointing toward, &#8220;Hey, AI is making a lot of deepfakes of people that we don&#8217;t like, and we should probably talk about having rules in place for that.&#8221;</p>

<p class="has-text-align-none">But up until that point, these companies have basically been enacting systems that are supposed to help us out of the goodness of their hearts: &#8220;Oh, we&#8217;ve spotted that this is actually a concern and we&#8217;re going to be doing this.&#8221; But they haven&#8217;t been putting any real effort into doing so. Otherwise, we would have some kind of solution by now, or at the very least some sort of widespread results. It would involve working together and having widespread communication, and that&#8217;s supposed to be happening with the CAI, the initiative that everyone is currently involved with. There are no results. We are not seeing them.</p>

<p class="has-text-align-none">Instagram made a bold effort over a year ago to stick labels on and then immediately ran back with its tail between its legs. So the next stage is regulatory efforts actually coming in, clamping down on these companies and saying, &#8220;Okay, we now have to dictate what your models are allowed to do, and there will be repercussions if we find out your models are doing what they&#8217;re not supposed to be doing.&#8221; It has to happen in conjunction; I think regulation will be beneficial alongside labeling and metadata tagging. But alone, there is never going to be a perfect solution to this.</p>

<p class="has-text-align-none"><strong>Well, sadly, Jess, I always cut off </strong><strong><em>Decoder</em></strong><strong> episodes when they veer into explaining the regulatory process to the European Union. That&#8217;s just a hard rule on the show, but it does seem like that&#8217;s going to happen and it seems like the platforms themselves are going to have to react to how their users are behaving.</strong></p>

<p class="has-text-align-none"><strong>You&#8217;re going to keep covering this stuff. I find it fascinating how deep into this world you&#8217;ve gotten starting from, &#8220;Hey, we should pay more attention to these tools,&#8221; and now here we are at “Can you label reality into existence?” Jess, thank you so much for being on Decoder.</strong></p>

<p class="has-text-align-none">Thank you.</p>

<p class="has-text-align-none"><em><sub>Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!</sub></em></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Jay Peters</name>
			</author>
			
			<title type="html"><![CDATA[Adobe actually won’t discontinue Animate]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/873621/adobe-animate-maintenance-mode-reverse-course" />
			<id>https://www.theverge.com/?p=873621</id>
			<updated>2026-03-01T11:47:56-05:00</updated>
			<published>2026-02-03T20:28:47-05:00</published>
			<category scheme="https://www.theverge.com" term="Adobe" /><category scheme="https://www.theverge.com" term="Apps" /><category scheme="https://www.theverge.com" term="Creators" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Adobe is no longer planning to discontinue Adobe Animate on March 1st. In an FAQ, the company now says that Animate will be in maintenance mode and that it has "no plans to&#8239;discontinue or remove access" to the app. Animate will still receive "ongoing security and bug fixes" and will still be available for [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Adobe" data-caption="" data-portal-copyright="Illustration by Alex Castro / The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/23624358/acastro_STK124_04.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Adobe is no longer planning to discontinue Adobe Animate on March 1st. In an FAQ, the company <a href="https://helpx.adobe.com/animate/kb/maintenance-mode.html">now says</a> that Animate will be in maintenance mode and that it has "no plans to&#8239;discontinue or remove access" to the app. Animate will still receive "ongoing security and bug fixes" and will still be available for "both new and existing users," but it won't get new features. </p>
<p class="has-text-align-none">Many creators <a href="https://x.com/chikn_nuggit/status/2018448360521482395?s=20">expressed</a> <a href="https://x.com/Megacharlie159/status/2018396398614438381">frustration</a> after Adobe's original discontinuation announcement from Monday (<a href="https://web.archive.org/web/20260202202634/https://helpx.adobe.com/animate/kb/end-of-life.html">here's a Wayback Machine link</a>), and the application is still used by creators <a href="https://x.com/DAVID_FIRTH/status/2018426558264820186">like David Firth</a>, the person behind the <a href="https://www.theverge.com/2018/11/30/18119749/salad-fingers-new-episode-december">animated web series <em>Salad Fingers</em></a>. Now, Adobe says that …</p>
<p><a href="https://www.theverge.com/tech/873621/adobe-animate-maintenance-mode-reverse-course">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
	</feed>
