<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">The Code Conference 2023: all the news as it happens &#8211; The Verge</title>
	<subtitle type="text">The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts.</subtitle>

	<updated>2025-03-17T16:24:52+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.theverge.com/2023/9/26/23890396/code-conference-2023-interviews-news" />
	<id>https://www.theverge.com/rss/stream/23654437</id>
	<link rel="self" type="application/atom+xml" href="https://www.theverge.com/rss/stream/23654437" />

	<icon>https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&amp;h=150&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Nilay Patel</name>
			</author>
			
			<title type="html"><![CDATA[Microsoft CTO Kevin Scott on how AI and art will coexist in the future]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/23900198/microsoft-kevin-scott-ai-art-bing-google-nvidia-decoder-interview" />
			<id>https://www.theverge.com/23900198/microsoft-kevin-scott-ai-art-bing-google-nvidia-decoder-interview</id>
			<updated>2023-10-03T11:00:00-04:00</updated>
			<published>2023-10-03T11:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Decoder" /><category scheme="https://www.theverge.com" term="Google" /><category scheme="https://www.theverge.com" term="Microsoft" /><category scheme="https://www.theverge.com" term="Podcasts" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[I co-hosted the Code Conference last week, and today&#8217;s episode is one of my favorite conversations from the show: Microsoft CTO and EVP of AI Kevin Scott. If you caught Kevin on Decoder a few months ago, you know that he and I love talking about technology together. I really appreciate that he thinks about [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/24971211/KevinScott_Decoder.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
</figure>
<p class="has-end-mark">I co-hosted the Code Conference last week, and today&rsquo;s episode is one of my favorite conversations from the show: Microsoft CTO and EVP of AI Kevin Scott. If you <a href="https://www.theverge.com/23733388/microsoft-kevin-scott-open-ai-chat-gpt-bing-github-word-excel-outlook-copilots-sydney">caught Kevin on <em>Decoder</em> a few months ago</a>, you know that he and I love talking about technology together. I really appreciate that he thinks about the relationship between technology and culture as much as we do at <em>The Verge</em>, and it was great to add the energy from the live Code audience to that dynamic.</p>

<p>Kevin and I talked about how things are going with Bing and Microsoft&rsquo;s AI efforts in general now that the initial hype has subsided &mdash; I really wanted to know if Bing was actually stealing users from Google.</p>
<div class="wp-block-vox-media-highlight vox-media-highlight alignnone">

<img src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/24792604/The_Verge_Decoder_Tileart.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="" title="" data-has-syndication-rights="1" data-caption="" data-portal-copyright="" />


<p>Listen to <em>Decoder</em>, a show hosted by <em>The Verge</em>&rsquo;s Nilay Patel about big ideas &mdash; and other problems.&nbsp;Subscribe&nbsp;<a href="https://podcasts.apple.com/us/podcast/welcome-to-decoder/id1011668648?i=1000496212371&amp;itsct=podcast_box&amp;itscg=30200&amp;ls=1&amp;at=1001l7uV&amp;ct=verge091322">here</a>!</p>
</div>
<p>Kevin also controls the entire GPU budget at Microsoft, and access to GPUs is a hot topic across the AI world right now &mdash; especially access to Nvidia&rsquo;s H100 GPU, which is what so many of the best AI models run on. Microsoft itself runs on H100s, but Kevin is keenly aware of that dependency, and while he wouldn&rsquo;t confirm any rumors about Microsoft developing its own AI chips right now, he did say a switch from Nvidia to AMD or other chip vendors should be seamless for Microsoft&rsquo;s customers if the company ever does make that leap.</p>

<p>I also asked Kevin some pretty philosophical questions about AI: why would you write a song or a book when AI is out there making custom content for other people? Well, it&rsquo;s because Kevin thinks the AI is still &ldquo;terrible&rdquo; at it for now, as Kevin found out firsthand. But he also thinks that creating is just what people do, and AI will help more people become more creative. Like I said, this conversation got deep &mdash; I really like talking to Kevin.&nbsp;</p>

<p>Okay, Microsoft CTO Kevin Scott. Here we go.</p>
<iframe frameborder="0" height="200" src="https://playlist.megaphone.fm/?e=VMP3068930806" width="100%"></iframe>
<p><em>This transcript has been lightly edited for length and clarity.</em></p>

<p><strong>I could talk about literally anything with Kevin. He&rsquo;s a maker. You&rsquo;re a renaissance&#8230; We were talking about crimping ethernet cables before we walked out onstage &mdash; literally anything. But we got to talk about AI. So I want to just ask from the beginning: Microsoft kicked off a huge moment in the AI rush with the announcement of Bing, the integration of OpenAI into the products. There&rsquo;s obviously Copilots. How is that going? Has the integration of AI into Bing led to a market share gain, led to increased usage?</strong></p>

<p>Yeah, for sure. It&rsquo;s small market share gains, but <a href="https://www.theverge.com/2023/3/9/23631912/microsoft-bing-100-million-daily-active-users-milestone">definitely gains in ways that we hadn&rsquo;t seen before</a>. So super interesting learnings, and a lot of interesting things coming. Like we announced DALL-E 3 integration into Bing Chat and a bunch of other things just last week, so we continue to take all of the feedback and try to improve and iterate. That team moves pretty quickly, so it&rsquo;s been a really interesting platform for us to do a bunch of experimentation. And a bunch of things that we&rsquo;ve learned on Bing have been directly transferable to the other Copilot products that we&rsquo;re building and even the API business that&rsquo;s growing super fast right now.</p>

<p><strong>The context of this question is, as we sit here on the West Coast having this conversation, on the East Coast, Google is in the middle of an antitrust trial about how it might&rsquo;ve unfairly created a monopoly in search. And a huge theme in that trial is, &ldquo;Well, hey, Microsoft exists. If they wanted to compete, they could. We&rsquo;re just so good at this that they can&rsquo;t.&rdquo; Do you think Bing actually creates an edge in that race right now?</strong></p>

<p>I think Bing is a very good search engine. It&rsquo;s the search engine that I use. I will tell you, in all honesty, I&rsquo;ve been at Microsoft for six and a half years. When I got there, I was still a Google Search user for a while. And it got to the point where the combo of the Edge browser and Bing search &mdash; just because the team is constantly grinding away, trying to improve quality &mdash; was more than good enough to be my daily driver browser-plus-search combo. And we&rsquo;ve seen a growth in market share.</p>

<p>And I think the only thing that anybody can ask for is that you do high-quality product work, and you want marketplaces to be fair so you can compete. And I think it&rsquo;s true for big companies, small companies, individuals who are trying to break through. Whatever it is, that notion of fairness is what everybody&rsquo;s asking for. And it&rsquo;s a complicated thing to go sort out. I will not comment on what&rsquo;s going on on the East Coast right now.</p>

<p><strong>Broadly, the East Coast.</strong></p>

<p>Yeah, broadly. But I do think we have to be asking ourselves all the time about what&rsquo;s fair and how can everyone participate. Because that&rsquo;s the goal at the end of the day. We&rsquo;re all creating these big platforms, whether it&rsquo;s search as a platform, these cloud platforms&hellip; we&rsquo;re building AI platforms right now. I think everybody is very reasonable in wanting to make sure that they can use these platforms in a fair way to do awesome work.</p>

<p><strong>I think the conventional wisdom is that [in] an AI-powered search experience, you ask the computer a question, it just tells you a smart answer, or it goes out and talks to other AI systems that sort of collect an answer for you. That is the future. I think if you just broadly ask people, &ldquo;What should search do?&rdquo; &ldquo;You ask a question, you get an answer.&rdquo; That really changes the idea of how the web works. The fundamental incentive structure on the web is appearing in search results. Have you thought about that with Bing?</strong></p>

<p>Yeah. So I think what you want from a search engine and what you&rsquo;re going to want from an agent is a little more complicated than just asking a question and getting an answer. A whole bunch of the time, what you want is you&rsquo;re trying to accomplish a task, and asking questions is part of the task, but sometimes, it&rsquo;s just the beginning. Sometimes, it&rsquo;s in the middle.</p>

<p>Like you&rsquo;re planning a vacation, you&rsquo;re doing research on how to run the ethernet cables in a house you&rsquo;re remodeling, whatever it is. That may involve purchasing some things or spending some time reading a pretty long thing because you can&rsquo;t get the information that you need in just some small transaction that you&rsquo;re having with an agent. I think it&rsquo;s unclear the extent to which the dynamic will actually change. I think the particular thing is everybody is worried about referrals, and how is this going to&#8230; If the bot is giving you all the answers, what happens to referral traffic?</p>

<p><strong>What&rsquo;s the incentive to create new content? This is what I&rsquo;m thinking about a lot.</strong></p>

<p>Correct. Yeah, yeah.</p>

<p><strong>If an AI search product can just summarize for you what I wrote in a review of the new phone, why would I ever be incentivized to create another review of a phone if no one&rsquo;s ever going to visit me directly?</strong></p>

<p>I don&rsquo;t think that&rsquo;s actually the thing that anybody wants. It&rsquo;s certainly not the thing that I want, individually. There needs to be a healthy economic engine where people are all participating. They&rsquo;re creating stuff, and they&rsquo;re getting compensated for what they create.&nbsp;</p>

<p>Now, I think the compensation structure and how things work just evolves really rapidly. And it feels to me like, even independent of AI, things are changing very rapidly right now &mdash; like how people find an audience for the things that they&rsquo;re creating, how people turn audience engagement into a real business model. On the one hand, it&rsquo;s difficult because some of these funnels are hard to debug. You don&rsquo;t really know what&rsquo;s going on in an algorithm somewhere that&rsquo;s directing traffic to your site.</p>

<p>So, I think that&rsquo;s one of the opportunities that we can have right now in the conversation about how these AI agents are going to show up in the world. It&rsquo;s not necessarily preserving exactly what that funnel looks like but being transparent about what the mechanics of it are so that if you&rsquo;re going to spend a bunch of effort or try to use it as a way to acquire an audience, that you at least understand what&rsquo;s going on, that it&rsquo;s not arbitrary and capricious and, one day, something changes that no one told you about and you no longer know how to viably run your business.</p>

<p><strong>The flip side of that is you also make a lot of tools that can create AI content. And you see these distribution platforms immediately being flooded with AI content. And something like a search engine or even training a new model being flooded with its own AI spam essentially leads to things like model collapse, leads to a drastic reduction in quality. How do you filter that stuff out?</strong></p>

<p>We&rsquo;ve got an increasingly good set of ways, at least on the model training side, to make sure that you&rsquo;re not ingesting low-quality content, and you&rsquo;re sort of recursively getting&mdash;</p>

<p><strong>Is there a difference between low-quality content and AI-generated content?</strong></p>

<p>Sometimes, AI-generated content is good, and sometimes, it&rsquo;s not. I think it&rsquo;s sort of less interesting. It&rsquo;s kind of a technical problem, whether or not you&rsquo;re ingesting things into your training process that are causing the performance of a trained model to become worse over time. That&rsquo;s a technical thing. I think it&rsquo;s an entirely solvable problem.</p>

<p>I think the thing that you want in general is, as a consumer of content, you just don&rsquo;t want to be reading a bunch of spammy AI-generated garbage. I don&rsquo;t think anyone wants that. And I would even argue&#8230; This is an interesting thing you and I haven&rsquo;t chatted about, but I think the purpose of making a piece of content isn&rsquo;t this flimsy transactional thing that sometimes people think it is. It is trying to put something meaningful out into the world, to communicate something that you are feeling or that you think is important to say and then trying to have some kind of connection with who&rsquo;s consuming it.</p>

<p>So, there&rsquo;s nothing about an AI being 100 percent of that interaction that seems interesting to me. I don&rsquo;t know why I would want to be consuming a bunch of AI-generated content versus things that you are producing.</p>

<p><strong>I feel the same way.</strong></p>

<p>I think you are almost certainly going to want to use some of these AI tools to help produce content. One of the things that I did last fall when we were playing around with this stuff for the first time is: I was like, &ldquo;Oh, I&rsquo;ve wanted to write a science fiction book since I was a teenager, and I&rsquo;ve never been able to just sort of get the activation energy.&rdquo; And I started to attempt doing that with GPT-4, and it was terrible when used in the way that you would expect. So you can&rsquo;t just go into the model and say, like, &ldquo;Hey, here&rsquo;s an outline for a science fiction book I&rsquo;d like to write. Please write chapter one.&rdquo;</p>

<p><strong>That&rsquo;s the model today. We&rsquo;re in the context of the writers strike resolving. Even in that conversation, they were not worried about the model&rsquo;s capabilities today. There will be a GPT-5 and a GPT-6, right?</strong></p>

<p>Correct. And I actually agree with that. But the point that I was making is the useful thing about the tool is it helped keep me in flow state. So I&rsquo;ve written a nonfiction book. I&rsquo;ve never written a fiction book before. So the useful thing for it was not actually producing the content but, when I got stuck, helping me get unstuck, like if I had an ever-present writing partner or an editor who had infinite amounts of time to spend with me. It&rsquo;s like, &ldquo;Okay, I don&rsquo;t know how to name this character. Let me describe what they&rsquo;re about. Give me some fun names.&rdquo;&nbsp;</p>

<p>So, it was really amazing the extent to which having an AI creative partner helped unblock me. But it was still&#8230; It was all my trying to figure out how the plot of this book ought to work. And I don&rsquo;t think it would be particularly interesting to me as a reader to consume a novel worth of content that was 100 percent generated by an AI, with no human touch whatsoever. I don&rsquo;t even know what that&rsquo;s doing.</p>

<p><strong>We&rsquo;ve arrived now at the nature of art, so I&rsquo;m going to make a hard shift to GPUs. This is what I mean about Kevin &mdash; we can go everywhere with Kevin. I just want to make sure we hit it all.</strong></p>

<p>Very non-artistic.</p>

<p><strong>Why do people make art? The AI moment has provided us the opportunity to ask that question in a serious way. Because the internet has basically been like, &ldquo;To make money.&rdquo; And I think there&rsquo;s a divergence there, as our distribution channels get flooded. I just don&rsquo;t know that we&rsquo;ll hit the answer in the next 10 minutes.</strong></p>

<p>Correct.</p>

<p><strong>So, the last time you and I spoke, you said something to me that I have been thinking about ever since. This man controls the entire GPU budget at Microsoft &mdash; every dollar that flows into GPUs, right here.</strong></p>

<p>Well, it&rsquo;s not just me. It&rsquo;s&#8230; But I&rsquo;m the one that resolves the hard conflicts.</p>

<p><strong>Yeah, that&rsquo;s control. That&rsquo;s what I mean. Is that job getting easier or harder for you?</strong></p>

<p>It&rsquo;s easier now than when we talked last time. So we were in a moment where I think the demand&#8230; Because a bunch of AI technology had ripped onto the scene in a surprising way, and demand was far exceeding the supply of GPU capacity that the whole ecosystem could produce. That is resolving. It&rsquo;s still tight, but it&rsquo;s getting better every week, and we&rsquo;ve got more good news ahead of us than bad on that front, which is great. It makes my job of adjudicating these very gnarly conflicts less terrible.</p>

<p><strong>There was some reporting this week. You actually mentioned it before, </strong><a href="https://www.theinformation.com/articles/how-microsoft-is-trying-to-lessen-its-addiction-to-openai-as-ai-costs-soar"><strong>in <em>The Information</em></strong></a><strong>, that Microsoft is heavily invested in smaller models that require less compute. Are you bringing down the cost of compute over time?</strong></p>

<p>Well, I think we are. And the thing that I will say here, which we were chatting about backstage, is when you build one of these AI applications, you end up using a full portfolio of models. So, you definitely want to have access to the big models, but for a whole bunch of reasons, if you can offload some of the work that the AI application needs to do to smaller models, you probably are going to want to do it.</p>

<p>And some of the motivations could be cost. Some of it could be latency. Some of them could be that you want to run part of the application locally because you don&rsquo;t want to transit sensitive information to the cloud. There&rsquo;s just a whole bunch of reasons why you want the flexibility to architect things where you&rsquo;ve got a portfolio of these models.</p>

<p>And the other thing, too, is the folks at OpenAI, with some help from folks at Microsoft, have been working furiously on optimizing the big models, as well. So it&rsquo;s not an either-or. You want both, and you want both to be getting cheaper and faster and more performance and higher quality over time.</p>

<p><strong>Can you bring down the cost of compute?</strong></p>

<p>Yeah.</p>

<p><strong>I&rsquo;m looking at Copilot in Office 365. It&rsquo;s $30 a seat. That&rsquo;s an insane price. I think some people are going to think it&rsquo;s very valuable, but that&rsquo;s not a massive market for an AI pricing scheme. Can you bring that down?</strong></p>

<p>I think we can bring the underlying cost of the AI down substantially. One of the interesting things that OpenAI did this spring is they reduced the cost by a factor of 10 to developers for access to the GPT-3.5 API. That was almost entirely passing along a whole bunch of performance optimizations. So, the chips are getting better price performance-wise, generation over generation. And the software techniques that we&rsquo;re using to optimize the models are bringing tons of performance gains without compromising quality. And then, you have these other techniques of how do you compose your application of small and big models that help, as well. So yeah, definitely, the cost goes down. And the price is just what value you&rsquo;re creating for people. So the market sort of sets the price. And if the market tells us that the price for these things is too high, then the price goes down.</p>

<p><strong>This is the first time anyone has ever priced these things, so I guess we&rsquo;ll find out. Is that signal working for you?</strong></p>

<p>Yeah, we&rsquo;re getting really good signal about price right now. And I think the thing that you just said is important. It is very early days right now for the commercialization of generative AI. So you have a whole bunch of things that you&rsquo;ve got to figure out in parallel. One of them is how do you price them, and what is the market actually for these things? And there&rsquo;s no reason to overprice things. The thing that you want is everybody getting value from them, as many as humanly possible. So we&rsquo;ll figure that out, I think, over time.</p>

<p><strong>When I think about compute &mdash; these big models, running tools for customers &mdash; obviously, the story there is Nvidia chips, right? It&rsquo;s access to H100s. It&rsquo;s building capacity there. They&rsquo;ve got 80 percent of the overall market share. How much do they represent for you?</strong></p>

<p>Yeah, they&rsquo;re&#8230; If you look at our key AI workloads, they&rsquo;re a substantial fraction of our compute.</p>

<p><strong>What&rsquo;s your relationship with Nvidia like? Is that a good working relationship?</strong></p>

<p>They are one of our most important partners. And we work with them on a daily basis, on a whole bunch of stuff, and I think the relationship is very good.</p>

<p><strong>I look at Amazon, Google &mdash; they&rsquo;re kind of making their own chips. I talked to the CEO of AWS a few weeks ago on <em>Decoder</em>. He didn&rsquo;t sound thrilled that he had this existential dependency on Nvidia. They want to move to their own systems. Are you thinking about custom chips? Are you thinking about diversifying that supply chain for yourself?</strong></p>

<p>Going back to the previous conversation, if you want to make sure that you&rsquo;re able to price things competitively, and you want to make sure that the costs of these products that you&rsquo;re building are as low as possible, competition is certainly a very good thing. I know <a href="https://www.theverge.com/23894647/amd-ceo-lisa-su-ai-chips-nvidia-supply-chain-interview-decoder">Lisa Su, from AMD, is here at the conference</a>. We&rsquo;re doing a bunch of interesting work with Lisa, and I think they&rsquo;re making increasingly compelling GPU offerings that I think are going to become more and more important in the marketplace in the coming years. I think there&rsquo;s been a bunch of leaks about first-party silicon that Microsoft is building. We&rsquo;ve been building silicon for a really long time now. So&mdash;</p>

<p><strong>Wait, are you confirming these leaks?</strong></p>

<p>I&rsquo;m not confirming anything. But I will say that we&rsquo;ve got a pretty substantial silicon investment that we&rsquo;ve had for years. And the thing that we will do is we&rsquo;ll make sure that we&rsquo;re making the best choices for how we build these systems, using whatever options we have available. And the best option that&rsquo;s been available over the past handful of years has been Nvidia. They have been really&mdash;</p>

<p><strong>Is that because of the processing power in the chip, or is it because of the CUDA platform? Because what I&rsquo;ve heard from folks, what I heard from Lisa yesterday, is that actually, what we need to do is optimize one level higher. We need to optimize at the level of PyTorch or training or inference. And CUDA is not the thing, and that&rsquo;s what Nvidia&rsquo;s perceived moat is. Do you agree with that? That you&rsquo;re dependent on the chip? Or are you dependent on their software infrastructure? Or are you working at a level above that?</strong></p>

<p>Well, I think the industry at large benefits a lot from CUDA, which they&rsquo;ve been investing in for a while. So if your business is like, &ldquo;I got a whole bunch of different models, and I need to performance tune all of them,&rdquo; the PyTorch-CUDA combo is pretty essential. We don&rsquo;t have a ton of models that we&rsquo;re optimizing.&nbsp;</p>

<p>So we have a whole bunch of other tools like Triton, which is an open-source tool that OpenAI developed, and a bunch of other things that help you basically do exactly what you said, which is up-level the abstraction so that you can be developing high-performance kernels for both your inference and training workloads, where it&rsquo;s easier to choose what piece of hardware you&rsquo;re using. The thing to remember is even if it&rsquo;s just Nvidia, you have multiple different hardware SKUs that you&rsquo;re deploying in production at any point in time, and you want to make it easy to optimize across all of those things.</p>

<p><strong>So I asked Lisa yesterday, &ldquo;How easy would it be for Microsoft to just switch from the Nvidia to AMD?&rdquo; And she told me, &ldquo;You should ask Kevin that question.&rdquo; So here you are. How easy right now would it be if you needed to switch to AMD? Are you working with them on anything? And how easy would it be in the future?</strong></p>

<p>Well, let me deploy my finest press training and say that if you are an API customer right now &mdash; like you&rsquo;re using the Azure OpenAI API or using OpenAI&rsquo;s instance of the API &mdash; you don&rsquo;t have to think about what the underlying hardware looks like. It&rsquo;s an API. It is presented to you to be the simplest possible way to go build an AI application on top of that API.&nbsp;</p>

<p>So yeah, not trivial to muck around with this hardware. It&rsquo;s all big investments. If that&rsquo;s the way that you&rsquo;re building your AI application, you shouldn&rsquo;t have to care. And there are a bunch of people who are not building on top of these APIs where they do have to care. And then, that&rsquo;s a choice for all of them individually about how difficult they think it might be. But for us, it&rsquo;s a big complicated software stack, and the only part of that that the customer sees is that API interface.</p>

<p><strong>The other theme that a bunch of folks at the conference yesterday asked me to ask you about is open source. You obviously have a huge investment in your models. OpenAI has GPT. There&rsquo;s a lot of action around that. On the flip side, there&rsquo;s a bunch of open-source models that are really exciting. You were talking about running models locally on people&rsquo;s laptops. Are these real moats around these big models right now? Or is open source going to actually just come and disrupt it over time?</strong></p>

<p>Yeah, I don&rsquo;t know whether it&rsquo;s even important to think about the models as moats. So there are some things that we&rsquo;ve done, and a path forward for the power of these models as platforms, that are just super capital intensive. And even if you&rsquo;ve got a whole bunch of breakthroughs on the software, I don&rsquo;t think they become less capital intensive. So, whether it&rsquo;s Microsoft or someone else, the thing that will have to happen with all of that capital intensity&hellip; because it&rsquo;s largely about hardware and not just software, and it&rsquo;s not just about what you can put on your desktop &mdash; is you have to have very large clusters of hardware to train these models. It&rsquo;s hard to get scale by just sort of fragmenting a bunch of independent software efforts.</p>

<p>So, I think the open-source stuff is super interesting, and I think it&rsquo;s going to help everybody. We&rsquo;ve open-sourced this super good model called <a href="https://the-decoder.com/microsofts-tiny-phi-1-language-model-shows-the-importance-of-data-quality-in-ai-training/">Phi</a> that&rsquo;s trending on Hugging Face as of last week. A bunch of open-source innovations we&rsquo;re excited about. But I think the big models will continue to make really amazing progress for years to come.</p>

<p><strong>I&rsquo;ve got a few more questions. If you have questions for Kevin, please start lining up. I&rsquo;d love to hear from all of you. I want to make sure we talk about authenticity and metadata, marking things as real, something you and I have talked about a lot in the past. There&rsquo;s a lot of ideas about how you might mark content as real or mark it as generated by AI. We&rsquo;re going to see some from Adobe later today, for sure. Have you made any progress here?</strong></p>

<p>Yeah, I think we have. One of the things I think we talked about before is for the past handful of years, we&rsquo;ve been building a set of cryptographic watermarking technologies and trying to work with both content producers and tool makers to see how it is we can get those cryptographic watermarks &mdash; they&rsquo;re manifests that say, &ldquo;This piece of content was created in this way by this entity&rdquo; &mdash; and have that watermark cryptographically preserved with the content as it gets moved through transcoders and CDNs and as you&rsquo;re mashing it up a bunch of different ways.</p>

<p><strong>That might work for images. Can you do that for text? It feels like text is a big deal right now. A bunch of lawsuits are brewing.</strong></p>

<p>Text is definitely harder. There are some things that are research-y that folks are working on, where you can, in the generation of the text, subtly add a statistical fingerprint to how you&rsquo;re generating the text. But it&rsquo;s much harder than visual content, where it&rsquo;s easy to just hide the watermark in the noise in the pixels and not have it really alter the experience you have as a user viewing the image or the video. So it&rsquo;s a tougher problem, for sure.</p>

<p>But it doesn&rsquo;t mean that you can&rsquo;t solve it. You don&rsquo;t have to do it with cryptographic watermarks. You could also just say, &ldquo;Hey, we&rsquo;re going to adopt a set of conventions in the products that we build, where we clearly identify in the products when you have AI-generated text.&rdquo; So with an email message, for instance, if you use Microsoft 365 Copilot to write an email, we can add a piece of text to that message that says&#8230; Or even there with email&mdash;</p>

<p><strong>There&rsquo;s nothing I want more than someone sending me an email that says it was generated from AI at the bottom. When I think about my inbox, that&rsquo;s what would fix it.</strong></p>

<p>But&mdash;</p>

<p><strong>Hold on, there&rsquo;s like a party line of people waiting to talk to you.</strong></p>

<p>Yeah, but these are all preferences. We will have to figure out what that line is.</p>

<p><strong>Oh, I know what my preference for those emails is. I&rsquo;m going to tell Cortana to delete &lsquo;em right away. Fair warning to all of you. If you write me AI, it&rsquo;s gone.&nbsp;&nbsp;</strong></p>
<h2 class="wp-block-heading" id="88dJ5r">Audience Q&amp;A</h2>
<p><strong>Nilay Patel: Alright. Please introduce yourself.</strong></p>

<p><strong>Pam Dillon: Good morning, Kevin. Pam Dillon of Preferabli. This question is not being generated by ChatGPT. We&rsquo;ve been talking a lot about assimilating the world&rsquo;s knowledge in a general sense. Do you think about how we&rsquo;re going to start to integrate specialized bodies of knowledge, areas where there&rsquo;s real domain expertise? Say, for example, in medicine or health, or in what a sensory consumer demands?</strong></p>

<p>Kevin Scott: Yeah, we are thinking a lot about that. And I think there&rsquo;s some interesting stuff here on the research front that shows that those expert contributions that you can make to the model&rsquo;s training data, particularly in this step called reinforcement learning from human feedback, can really substantially improve the quality of the model in that domain of expertise. We&rsquo;ve been thinking in particular a lot about the medical applications.</p>

<p>So one of my direct reports, Peter Lee, who runs Microsoft Research and who&rsquo;s also a fellow at the American Medical Association, wrote a great book about medicine and GPT-4, and there&rsquo;s a whole bunch of good work. And all of that is exactly what you said. It is how &mdash; through reinforcement learning, through very careful prompt engineering, through selection of training data &mdash; you can get a model to be very high performing in a particular domain. And I think we&rsquo;re going to see more and more of that over time, with a whole bunch of different domains. It&rsquo;s really exciting, actually.</p>

<p><strong>NP: Over here, please introduce yourself.</strong></p>

<p><strong>Alex: Hi Kevin, my name is Alex. I have a question about provenance. Yesterday, the CEO of Warner Music Group, Robert Kyncl, was talking about his expectation that artists are going to get paid for work that is generated off of their original IP. Today, obviously, provenance is not given by LLMs. My question to you is from a technical standpoint: Let&rsquo;s say that somebody asks to write a song that&rsquo;s sort of in the style of Led Zeppelin and Bruno Mars. But in the generation, the LLM is also using music by the Black Keys because they kind of sound a lot like Led Zeppelin. Would there be a way, technically, to be able to say, from a provenance standpoint, that the Black Keys&rsquo; music was used in the generating of the output so that artist could get compensated in the future?</strong></p>

<p>KS: Yeah, maybe. Although, that particular thing that you just asked, I think, is a controversial thing for human songwriters. I know there was this big lawsuit with Ed Sheeran about exactly this, where it&rsquo;s pretty easy for a human songwriter to be influenced in very subtle ways. And a lot of pop songs, for instance, have a lot of harmonic similarity with one another.&nbsp;</p>

<p>So, I think you have to think about both sides of things. AI aside, how do you actually measure the contribution of one thing to another? Which is hard. And then technically, if we were able to do that part of the analysis, you probably could figure out some technical solutions. It&rsquo;s very easy to make sure that you are not having generations that are parroting, either in whole or in snippets, so that&rsquo;s possible. It&rsquo;s a little bit more technically difficult, I think, to figure out how any one piece of data, out of this gigantic volume of contributions, has influenced a particular generation.&nbsp;</p>

<p><strong>NP: Music copyright is like&#8230; Just find me later, and we&rsquo;ll talk about it. It&rsquo;s one of my favorite things. Go ahead.</strong></p>

<p><strong>Gretchen Tibbits: Hi, Gretchen Tibbits, DC Advisory. Rewind slightly from the question the gentleman just asked. There have already been some cases and some questions about the information from publishers, from creators, that has been used to train these models. Forget about generating music and the rest; creators whose work has been trained on are asking for percentages or rights or recognition of that. I&rsquo;m wondering &mdash; and not asking you to comment on any active case &mdash; but philosophically, thoughts on that?</strong></p>

<p>KS: Oh God, we&rsquo;ve got 25 seconds on the timer like that.</p>

<p><strong>No, you&rsquo;re going longer. Don&rsquo;t worry. We&rsquo;re going to take a few more. The clock can&rsquo;t save you now.</strong></p>

<p>KS: So, here&rsquo;s a thought exercise. By raise of hands, how many of you have read <em>Moby Dick</em>? So, I&rsquo;m guessing that all of you who raised your hand probably read <em>Moby Dick</em> many, many years ago &mdash; high school, college maybe. And if I ask you, you could tell me <em>Moby Dick</em> is about a whale. There&rsquo;s a captain. Maybe you remember his name is Ahab. Maybe he has some sort of fixation issues that he&rsquo;s focusing on this animate object. You could tell me a bunch of things about <em>Moby Dick</em>. Some of you who are literature fans might even be able to recite a passage or two from <em>Moby Dick</em> exactly as they appear in the book.</p>

<p>None of you, I would wager, could, if I asked you to, recite verbatim the third paragraph of page 150 of the Penguin sixth printing of <em>Moby Dick</em>. And these neural networks work a little bit like that. They are not storing the content of the music or books or papers that people produce, not even in the way that a search engine does. They are ingesting some of these things. And I think everybody thinks right now &mdash; and this is part of what we will determine, I&rsquo;m guessing, over the coming years &mdash; that all of the training that is being done right now is covered by fair use.&nbsp;</p>

<p><strong>NP: Well, some people think that.</strong></p>

<p>KS: Some people think that.</p>

<p><strong>NP: Some very important people do not.</strong></p>

<p>KS: And that&rsquo;s the thing that will get sorted out. And I don&rsquo;t know the answer to that question because it relies on judges and lawmakers, and we will sort of figure this out as a society. But the thing that the models are attempting to do isn&rsquo;t&#8230; They&rsquo;re not some gigantic repository of all of this content. You&rsquo;re attempting to build something that, like your brain, can remember conceptually some of these things about a thing that was present in the training. And we will sort of have to see&hellip;</p>

<p>So let me just back all the way up and say nobody wants to&hellip; As an author myself, I don&rsquo;t want to see anyone disenfranchised. The economic incentives for people to produce content and to be able to earn a living writing books and being&#8230; Especially, God forbid, folks who sit down and do the work of writing a really thoughtful, super well-researched piece of nonfiction. Or someone who pours their heart and soul into writing a piece of fiction. They need to be compensated for it. And this is a new modality of what you&rsquo;re doing with content. And I think we still have some big questions to ask and answer about exactly what&rsquo;s going on and what is the fair way to compensate people for what&rsquo;s going on here.</p>

<p>And then, what&rsquo;s the balance of trade, too? Because hopefully, what we&rsquo;re doing is building things that will create all sorts of amazing new ways for creative people to do what they&rsquo;re best at, which is creating wonderful things that other people will consume that creates connection and enhances this thing that makes us human.</p>

<p><strong>NP: Alright. We have time, very quickly, for a couple more. So just very quickly, Jay, hit me.</strong></p>

<p><strong>Jay Peters: Hi, Jay Peters for <em>The Verge</em>. When you mentioned that you don&rsquo;t want to read spammy AI-generated garbage, that made me think of this thing last month, where Microsoft&rsquo;s MSN network published this kind of spammy-feeling travel article that </strong><a href="https://www.theverge.com/2023/8/17/23836287/microsoft-ai-recommends-ottawa-food-bank-tourist-destination"><strong>recommended a food bank as a travel destination in Ottawa</strong></a><strong>. And that was apparently made with a combination of algorithmic techniques and human review. So if something whiffs that badly with human intervention, how can we trust fully AI-generated summaries?</strong></p>

<p>KS: Yeah. With that particular thing, it was less about the AI and more about how the human piece of that was working. Honestly, that would&rsquo;ve been a little bit better if there&rsquo;d been more AI.</p>

<p><strong>NP: You&rsquo;re blaming the people.</strong></p>

<p>KS: No, I&rsquo;m not blaming anyone. I think the diagnosis of that problem is that some of these things on MSN &mdash; and I know this is true for other places &mdash; get generated in really complicated ways. It wasn&rsquo;t the case that there was, at some point, a Columbia-trained journalist sitting down writing this, and all of a sudden, there was a faulty, defective AI tool doing the thing that they used to do. That&rsquo;s not what was going on here.</p>

<p><strong>NP: Alright. Very, very quickly.</strong></p>

<p><strong>Dan Perkel: Hi, Dan Perkel, IDEO. I had a question about an exchange you had earlier about flooding the world with AI-generated content and the discussion about quality. And in the scenario you were thinking of, who&rsquo;s determining the quality of that content, and how are they determining it? Because I wasn&rsquo;t quite following where that was going.&nbsp;</strong></p>

<p>KS: Well, I think you all are going to judge the quality of the content. If it&rsquo;s directed at you, you&rsquo;re the ultimate arbiters of, &ldquo;Is this good or bad? Is it true, or is it false?&rdquo; One of the seeds I will plant with you all is, one of the things that these AI tools may prove to be useful at is actually helping navigate a world where there are going to be a whole bunch of tools that are able to generate low-quality content. And having your own personal editor-in-chief that&rsquo;s helping you assemble what you think are high-quality, truthful, reliable sources of information and helping you sort of walk through this ocean of information and identify those things will be, I think, super useful. I think what you all are doing, by the way &mdash; and many of you in the room, I&rsquo;m sure, are in media businesses &mdash; I think having all of this content out there makes your job more important.</p>

<p><strong>NP: Oh boy.</strong></p>

<p>KS: Way more important. Because somebody has to have someone that they trust, that has high editorial standards, and who are helping figure out signal and noise. It&rsquo;s absolutely true.</p>

<p><strong>NP: Alright. We got to leave it here. I&rsquo;m available for a very high fee. Thank you so much, Kevin. I really appreciate it.</strong></p>

<p>KS: Thank you.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Alex Heath</name>
			</author>
			
			<title type="html"><![CDATA[Linda Yaccarino was set up to fail]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/2023/9/29/23896248/linda-yaccarino-x-twitter-set-up-to-fail-command-line" />
			<id>https://www.theverge.com/2023/9/29/23896248/linda-yaccarino-x-twitter-set-up-to-fail-command-line</id>
			<updated>2023-09-29T15:11:15-04:00</updated>
			<published>2023-09-29T15:11:15-04:00</published>
			<category scheme="https://www.theverge.com" term="Command Line" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="Twitter - X" />
							<summary type="html"><![CDATA[The buzz out of the Code Conference this week is, naturally, all about the disastrous performance of X / Twitter CEO Linda Yaccarino, who closed out the two-day affair in spectacular fashion. Vox's Peter Kafka, who has been going to the conference since it started in 2008, called it "the weirdest session I've ever seen." [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="Linda Yaccarino. | Photo by Santiago Felipe/Getty Images, illustration by William Joel/The Verge" data-portal-copyright="Photo by Santiago Felipe/Getty Images, illustration by William Joel/The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/24963156/Command_Line_Site_Post_Linda_Yaccarino.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Linda Yaccarino. | Photo by Santiago Felipe/Getty Images, illustration by William Joel/The Verge	</figcaption>
</figure>
<p>The buzz out of the Code Conference this week is, naturally, all about <a href="https://www.theverge.com/2023/9/28/23895150/linda-yaccarino-code-conference-2023-x-twitter">the disastrous performance of X / Twitter CEO <strong>Linda Yaccarino</strong></a>, who closed out the two-day affair in spectacular fashion. <em>Vox's </em><strong>Peter Kafka</strong>, who has been going to the conference since it started in 2008, <a href="https://link.vox.com/view/629692b4e096882bab0a61e5jkemw.569/5e3481c5">called it</a> "the weirdest session I've ever seen." If I had to sum up the vibe as everyone trickled off to dinner afterward, it would be stunned disbelief. As for Yaccarino, she immediately fled the premises with her six-person security detail.</p>
<p>Given how her first interview on the job <a href="https://www.cnbc.com/video/2023/08/10/watch-cnbcs-full-interview-with-x-corp-ceo-linda-yaccarino-on-twitter-rebrand-and-more.html">with CNBC<em> </em>went<em> </em>about a month ago</a>, I had low expectations for her ability to field question …</p>
<p><a href="https://www.theverge.com/2023/9/29/23896248/linda-yaccarino-x-twitter-set-up-to-fail-command-line">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Nilay Patel</name>
			</author>
			
			<title type="html"><![CDATA[AMD CEO Lisa Su on the AI revolution and competing with Nvidia]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/23894647/amd-ceo-lisa-su-ai-chips-nvidia-supply-chain-interview-decoder" />
			<id>https://www.theverge.com/23894647/amd-ceo-lisa-su-ai-chips-nvidia-supply-chain-interview-decoder</id>
			<updated>2025-03-17T12:24:52-04:00</updated>
			<published>2023-09-29T10:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="AMD" /><category scheme="https://www.theverge.com" term="Decoder" /><category scheme="https://www.theverge.com" term="Nvidia" /><category scheme="https://www.theverge.com" term="Podcasts" /><category scheme="https://www.theverge.com" term="Policy" /><category scheme="https://www.theverge.com" term="Politics" /><category scheme="https://www.theverge.com" term="Regulation" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Today, we’re bringing you something a little different. The Code Conference was this week, and we had a great time talking live onstage with all of our guests. We’ll be sharing a lot of these conversations here in the coming days, and the first one we’re sharing is my chat with Dr. Lisa Su, the [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/24959966/LisaSu_Decoder.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-drop-cap">Today, we’re bringing you something a little different. The <a href="https://www.theverge.com/2023/9/26/23890396/code-conference-2023-interviews-news">Code Conference was this week</a>, and we had a great time talking live onstage with all of our guests. We’ll be sharing a lot of these conversations here in the coming days, and the first one we’re sharing is my chat with Dr. Lisa Su, the CEO of AMD.</p>

<p>Lisa and I spoke for half an hour, and we covered an incredible number of topics, especially about AI and the chip supply chain. These past few years have seen a global chip shortage, exacerbated by the pandemic, and now, coming out of it, there’s suddenly another big spike in demand thanks to everyone wanting to run AI models. The balance of supply and demand is overall in a pretty good place right now, Lisa told us, with the notable exception of these high-end GPUs powering all of the large AI models that everyone’s running.</p>

<div class="wp-block-vox-media-highlight vox-media-highlight"><img src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/24792604/The_Verge_Decoder_Tileart.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="" title="" data-has-syndication-rights="1" data-caption="" data-portal-copyright="" />


<p>Listen to <em>Decoder</em>, a show hosted by <em>The Verge</em>’s Nilay Patel about big ideas — and other problems.&nbsp;Subscribe&nbsp;<a href="https://podcasts.apple.com/us/podcast/welcome-to-decoder/id1011668648?i=1000496212371&amp;itsct=podcast_box&amp;itscg=30200&amp;ls=1&amp;at=1001l7uV&amp;ct=verge091322">here</a>!</p>
</div>

<p>The hottest GPU in the game is Nvidia’s H100 chip. But AMD is working to compete with a new chip Lisa told us about called the MI300 that should be as fast as the H100. There’s also a lot of work being done in software to make it so that developers can move easily between Nvidia and AMD. So we got into that.</p>

<p>You’ll also hear Lisa talk about what companies are doing to increase manufacturing capacity. The CHIPS and Science Act that recently passed is a great step toward building chip manufacturing here in the United States, but Lisa told us it takes a long time to bring up that supply. So I wanted to know how AMD is looking to diversify this supply chain and make sure it has enough capacity to meet all of this new demand.</p>

<p>Finally, Lisa answered questions from the amazing Code audience and talked a lot about how much AMD is using AI inside the company right now. It’s more than you think, although Lisa did say AI is not going to be designing chips all by itself anytime soon.</p>

<p>Okay, Dr. Lisa Su, CEO of AMD. Here we go.</p>

<iframe frameborder="0" height="200" src="https://playlist.megaphone.fm/?e=VMP8955394685" width="100%"></iframe>

<p><em>This transcript has been lightly edited for length and clarity.</em></p>

<p>Hello, hello. Nice to see you.</p>

<p><strong>Nice to see you. </strong></p>

<p>Thank you for having me.</p>

<p><strong>I have a ton to talk about — 500 cards’ worth of questions. We’re going to be here all night. But let’s start with something exciting. AMD made some news today in the AI market. What’s going on?</strong></p>

<p>Well, I can say, first of all, the theme of this whole conference, AI, is the theme of everything in tech these days. And when we look at all of the opportunities for computing to really advance AI, that’s really what we’re working on. So yes, today, we did have an <a href="https://www.theregister.com/2023/09/26/amd_instinct_ai_lamini/">announcement this morning from a company, a startup called Lamini</a>, a great company that we’ve been working with, some of the top researchers in large language models.</p>

<p>And the key for everyone is, when I talk to CEOs, people are all asking, “I know I need to pay attention to AI. I know I need to do something. But what do I do? It’s so complicated. There are so many different factors.” And with these foundational models like Llama, which are great foundational models, many enterprises actually want to customize those models with their own data and ensure that you can do that in your private environment and for your application. And that’s what Lamini does.</p>

<p>They actually customize models, fine-tune models for enterprises, and they operate on AMD GPUs. And so that was a cool thing. And we spent quite a bit of time with them, really optimizing the software and the applications to make it as easy as possible to develop these enterprise fine-tuned models.</p>

<p><strong>I want to talk about that software in depth. I think it’s very interesting where we’re abstracting the different levels of software development away from the hardware. But I want to come back to that.</strong></p>

<p><strong>I want to begin broadly with the chip market. We’re exiting a period of pretty incredible constraint in chips across every process node. Where do you think we are now?</strong></p>

<p>It’s interesting. I’ve been in the semiconductor business for, I don’t know, the last 30 years, and for the longest time, people didn’t really even understand what semiconductors were or where they fit in the overall supply chain and where they were necessary in applications. I think the last few years, especially with the pandemic-driven demand and everything that we’re doing with AI, people now are really focused on semiconductors.</p>

<p>I think there has been a tremendous cycle. One, a cycle where we needed a lot more chips than we had, and then a cycle where we had too many of some. But at the end of the day, I think the fact is semiconductors are essential to so many applications. And particularly for us, what we’re focused on are the most complex, the highest performance, the bleeding edge of semiconductors. And I would say that there’s tremendous growth in the market.</p>

<p><strong>What do you think the bottleneck is now? Is it the cutting edge? Is it at the older process nodes, which is what we were hearing in the middle of the chip shortage?</strong></p>

<p>I think the industry as a whole has really come together as an ecosystem to put a lot of capacity on for the purposes of ensuring that we do satisfy overall demand. So in general, I would say that the supply / demand balance is in a pretty good place, with perhaps the exception of GPUs. If you need GPUs for large language model training and inference, they’re probably tight right now. A little bit tight.</p>

<p><strong>Lisa’s got some in the back if you need some.</strong></p>

<p>But look, the truth is we absolutely are putting a tremendous amount of effort into getting the entire supply chain ramped up. These are some of the most complex devices in the world — hundreds of billions of transistors, lots of advanced technology. But we are absolutely ramping up supply overall.</p>

<p><strong>The </strong><a href="https://www.theverge.com/2022/8/9/23298147/biden-chips-act-semiconductors-subsidies-ohio-arizona-plant-china"><strong>CHIPS and Science Act passed last year</strong></a><strong>, a massive investment in this country in fabs. AMD is obviously the largest fabless semiconductor company in the world. Has that had a noticeable effect yet, or are we still waiting for that to come to fruition?</strong></p>

<p>I do think that if you look at the CHIPS and Science Act and what it’s doing for the semiconductor industry in the United States, it’s really a fantastic thing. I have to say, hats off to Gina Raimondo and everything that the Commerce Department is doing with industry. These are long lead time things. The semiconductor ecosystem in the US needed to be built five years ago. It is expanding now, especially at the leading edge, but it’s going to take some time.</p>

<p>So I don’t know that we feel the effects right now. But one of the things that we always believe is the more you invest over the longer term, you’ll see those effects. So I’m excited about onshore capacity. I’m also really excited about some of the investments in our national research infrastructure because that’s also extremely important for long-term semiconductor strength and leadership.</p>

<p><strong>AMD’s results speak for themselves. You’re selling a lot more chips than you were a few years ago. Where have you found that supply? Are you still relying on TSMC while you wait for these new fabs to come up?</strong></p>

<p>Again, when you look at the business that we’re in, it’s pushing the bleeding edge of technology. So we’re always on the most advanced node and trying to get the next big innovation out there. And there’s a combination of both process technology, manufacturing, design, design systems. We are very happy with our partnership with TSMC. They are the best in the world with advanced and leading-edge technologies.</p>

<p><strong>They’re it, right? Can you diversify away from them?</strong></p>

<p>I think the key is geographical diversity, Nilay. So when you think about geographical diversity, and by the way, this is true no matter what. Nobody wants to be in the same place because there are just natural risks that happen. And that’s where the CHIPS and Science Act has actually been helpful because there are now significant numbers of manufacturing plants being built in the US. They’re actually going to start production over the next number of quarters, and we will be active in having some of our manufacturing here in the United States.</p>

<p><strong>I talked to Intel CEO Pat Gelsinger when he broke ground in Ohio. They’re trying to become a foundry. He </strong><a href="https://www.theverge.com/2022/10/4/23385652/pat-gelsinger-intel-chips-act-ohio-manufacturing-chip-shortage"><strong>said very confidently</strong></a><strong> to me, “I would love to have an AMD logo on the side of one of these fabs.” How close is he to making that a reality?</strong></p>

<p>Well, I would say this. I would say that from onshore manufacturing, we are certainly looking at lots and lots of opportunities. I think Pat has a very ambitious plan, and I think that’s there. I think we always look at who are the best manufacturing partners, and what’s most important to us is someone who’s really dedicated to the bleeding edge of technology.</p>

<p><strong>Is there a competitor in the market to TSMC on that front?</strong></p>

<p>There’s always competition in the market. TSMC is certainly very good. Samsung is certainly making a lot of investments. You mentioned Intel. I think there are some activities in Japan as well to bring up advanced manufacturing. So there are lots of different options.</p>

<p><strong>Last question on this thread, and then I do want to talk to you about AI. There has been a lot of noise recently about Huawei. They </strong><a href="https://www.cnbc.com/2023/09/19/huaweis-chip-breakthrough-poses-new-threat-to-apple-in-china.html"><strong>put out a seven-nanometer chip</strong></a><strong>. This is either an earth-shattering geopolitical event or it’s bullshit. What do you think it is?</strong></p>

<p>Let’s see. I don’t know that I would call it an earth-shattering geopolitical event. Look, I think there’s no question that technology is considered a national security importance. And from a US standpoint, I think we want to ensure that we keep that lead. Again, I think the US government has spent a lot of time on this aspect.</p>

<p>The way I look at these things is we are a global company. China’s an important market for us. We do sell to China more consumer-related goods versus other things, and there’s an opportunity there for us to really have a balanced approach into how we deal with some of these geopolitical matters.</p>

<p><strong>Do you think that there was more supply available at TSMC because Huawei got kicked out of the game?</strong></p>

<p>I think TSMC has put a tremendous amount of supply on the table. I mean, if you think about the CapEx that’s happened over the last three or four years, it’s there because we all need more chips. And when we need more chips, the investment is there. Now chips are more expensive as a result, and that’s part of the ecosystem that we’ve built out.</p>

<p><strong>Let’s talk about that part of it. So you mentioned GPUs are constrained. The Nvidia H100, there’s effectively a black market for access to these chips. You have some chips, you’re coming out with some new ones. You just announced Lamini’s training fully on your chips. Have you seen opportunity to disrupt this market because Nvidia supply is so constrained?</strong></p>

<p>I would take a step back, Nilay, and just talk about what’s happening in the AI market because it’s incredible what’s happening. If you think about the technology trends that we’ve seen over the last 10 or 20 years — whether you’re talking about the internet or the mobile phone revolution or how PCs have changed things — AI is 10 times, 100 times, more than that in terms of how it’s impacting everything that we do.</p>

<p>So if you talk about enterprise productivity, if you talk about personal productivity or society, what we can do from a productivity standpoint, it’s that big. So the fact that there’s a shortage of GPUs, I think, is not surprising because people recognize how important the technology is. Now, we’re in such early innings of how AI, and especially generative AI, is coming to market that I view this as a 10-year cycle we’re talking about, not how many GPUs you can get in the next two to four quarters.</p>

<p>We are excited about our road map. I would call generative AI the killer app for high-performance computing. You need more and more and more. And as good as today’s large language models are, they can still get better if you continue to increase the training performance and the inference performance.</p>

<p>And so that’s what we do. We build the most complex chips. We do have a new one coming out. It’s called <a href="https://www.anandtech.com/show/18915/amd-expands-mi300-family-with-mi300x-gpu-only-192gb-memory">MI300 if you want the code name there</a>, and it’s going to be fantastic. It’s targeted at large language model training as well as large language model inference. Do we see opportunity? Yes. We see significant opportunity, and it’s not just in one place. The idea that the cloud guys are the only users is just not true. There’s going to be a lot of enterprise AI. A lot of startups have tremendous VC backing around AI as well. And so we see opportunity across all those spaces.</p>

<p><strong>So MI300?</strong></p>

<p>MI300, you got it.</p>

<p><strong>Performance-wise, is this going to be competitive with the H100 or exceed the H100?</strong></p>

<p>It is definitely going to be competitive for training workloads, and in the AI market, there’s no one-size-fits-all as it relates to chips. There are some that are going to be exceptional for training. There are some that are going to be exceptional for inference, and that depends on how you put it together.</p>

<p>What we’ve done with MI300 is we’ve built an exceptional product for inference, especially large language model inference. Right now, much of the work is companies training and deciding what their models are going to be. But going forward, we actually think inference is going to be a larger market, and that plays well into some of what we’ve designed MI300 for.</p>

<p><strong>If you look at what Wall Street thinks Nvidia’s moat is, it’s CUDA, it’s the proprietary software stack, it’s the long-running relationships with developers. You have ROCm, which is a little different. Do you think that’s a moat you can overcome with better products or with a more open approach? How are you going about attacking that?</strong></p>

<p>I’m not a believer in moats when the market is moving as fast as it is. When you think about moats, it’s more mature markets where people are not really wanting to change things a lot. When you look at generative AI, it’s moving at an incredible pace. The progress that we’re making in a few months in a regular development environment might’ve taken a few years. And software in particular, our approach is an open software approach.</p>

<p>There’s actually a dichotomy. If you look at people who have developed software over the last five, seven, or eight years, they’ve tended to use… let’s call it more hardware-specific software. It was convenient. There weren’t that many choices out there, and so that’s what people did. Looking forward, what you actually find is that everyone’s looking for the ability to build hardware-agnostic software because, frankly, people want choice. People want to use their older infrastructure. People want to ensure that they’re able to move from one infrastructure to another. And so they’re building on these higher levels of software. Things like PyTorch, for example, which tend to be hardware-agnostic.</p>

<p>So I do think the next 10 years are going to be different from the last 10 as it relates to how you develop within AI. And I think we’re seeing that across the industry and the ecosystem. And the benefit of an open approach is that there’s no one company that has all of the ideas. So the more we’re able to bring the ecosystem together, the more we get to take advantage of all of those really, really smart developers who want to accelerate AI learning.</p>

<p><strong>PyTorch is a big deal, right? This is the language that all these models are actually coded in. I talk to a bunch of cloud CEOs. They don’t love their dependency on Nvidia as much as anybody doesn’t love being dependent on any one vendor. Is this a place where you can go work with those cloud providers and say, “We’re going to optimize our chips for PyTorch and not CUDA,” and developers can just run on PyTorch and pick whichever is best optimized?</strong></p>

<p>That’s exactly it. So if you think about what PyTorch is trying to do — and it really is trying to be that sort of hardware-agnostic layer — one of the major milestones that we’ve come up with is on PyTorch 2.0, AMD was qualified on day one. And what that means is anybody who runs CUDA on PyTorch right now, it will run on AMD out of the box because we’ve done the work there. And frankly, it’ll run on other hardware as well.</p>

<p>But our goal is “may the best chip win.” And the way you do that is to make the software much more seamless. And it’s PyTorch, but it’s also Jax. It’s also some of the tools that OpenAI is bringing in with Triton. There are lots of different tools and frameworks that people are bringing forward that are hardware-agnostic. There are a bunch of people who are also doing “build your own” types of things. So I do think this is the wave of the future for AI software.</p>

<p><strong>Are you </strong><a href="https://www.theverge.com/2023/5/5/23712242/microsoft-amd-ai-processor-chip-nvidia-gpu-athena-mi300"><strong>building custom chips</strong></a><strong> for any of these companies?</strong></p>

<p>We have the capability of building custom chips. And the way I think about it is the time to build custom chips is actually when you get very high volume applications going forward. So I do believe there will be custom chips over the next number of years. The other piece that’s also interesting is you need all different types of engines for AI. So we spend a lot of time talking about big GPUs because that’s what’s needed for training large language models. But you’re also going to see ASICs for some… let’s call it, more narrow applications. You’re also going to see AI in client chips. So I’m pretty excited about that as well in terms of just how broad AI will be incorporated into chips across all of the market segments.</p>

<p><strong>I’ve got Kevin Scott, CTO of Microsoft, here tomorrow. So I’ll ask you this question so I can chase him down with it. If, say, Microsoft wanted to diversify Azure and put more AMD in there and be invisible to customers, is that possible right now?</strong></p>

<p>Well, first of all, I love Kevin Scott. He’s a great guy, and we have a tremendous partnership with Microsoft across both the cloud as well as the Windows environment. I think you should ask him the question. But I think if you were to ask him or if you were to ask a bunch of other cloud manufacturers, they would say it’s absolutely possible. Yes, it takes work. It takes work that we each have to put in, but it’s much less work than you might have imagined because people are actually writing code at the higher-level frameworks. And we believe that this is the wave of the future for AI programming.</p>

<p><strong>Let me connect this to an end-user application just for a second. We’re talking about things that are very much raising the cost curve: a lot of smart people doing a lot of work to develop for really high-end GPUs on the cutting-edge process nodes. Everything’s just getting more expensive, and you see how the consumer applications are expensive: $25 a month, $30 a seat for Microsoft Office with Copilot. When do you come down the cost curve that brings those consumer prices down?</strong></p>

<p>It’s a great, great question. I do believe that the value that you get with gen AI in terms of productivity will absolutely be proven out. So yes, the cost of these infrastructures is high right now, but the productivity that you get on the other side is also exciting. We’re deploying AI internally within AMD, and it’s such a high priority because, if I can get chips out faster, that’s huge productivity.</p>

<p><strong>Do you trust it? Do you have your people checking the work that AI is doing, or do you trust it?</strong></p>

<p>Sure. Look, we’re all experimenting, right? We’re in the very, very early stages of building the tools and the infrastructure so that we can deploy. But the fact is it saves us time — whether we’re designing chips, whether we’re testing chips, whether we’re validating chips — it saves us time, and time is money in our world.</p>

<p>But back to your question about when do you get to the other side of the curve. I think that’s why it’s so important to think about AI broadly and not just in the cloud. So if you think about how the ecosystem will look a few years from now, you would imagine a place where, yes, you have the cloud infrastructures training these largest foundational models, but you’re also going to have a bunch of AI at the edge. And whether it’s in your PC or it’s in your phone, you’re going to be able to do local AI. And there, it is cheaper, it is faster, and it is actually more private when you do that. And so, that’s this idea of AI everywhere and how it can really enhance the way we’re deploying.</p>

<p><strong>That brings me to open source and, honestly, to the idea of how we will regulate this. So there’s a White House meeting, everyone participates, great. Everyone’s very proud of each other. You think about how you will actually enforce AI regulation. And it’s okay, you can probably tell AWS or Azure not to run certain work streams. “Don’t do these things.” And that seems fine. Can you tell AMD to not let certain things happen on the chips for somebody running an open-source model on Linux on their laptop?</strong></p>

<p>I think it is something that we all take very seriously. The technology has so much upside in terms of what it can do from a productivity and a discovery standpoint, but there’s also safety in AI. And I do think that, as large companies, we have a responsibility. Think about two things: data privacy, as well as ensuring that, as these models are developed, they’re developed to the best of our ability without too much bias. We’re going to make mistakes. The industry as a whole will not be perfect here. But I think there is clarity around its importance and that we need to do it together and that there needs to be a public / private partnership to make it happen.</p>

<p><strong>I can’t remember anyone’s name, so I’d be a horrible politician. But let’s pretend I’m a regulator. I’m going to do it. And I say, “Boy, I really don’t want these kids using any model to develop chemical weapons. And I need to figure out where to land that enforcement.” I can definitely tell Azure, “Don’t do that.” But a kid with an AMD chip in a Dell laptop running Linux, I have no mechanism of enforcement except to tell you to make the chip not do it. Would you accept that regulation?</strong></p>

<p>I don’t think there’s a silver bullet. It’s not, “I can make the chip not do it.” It’s “I can make the combination of the chip and the model and have some safeguards in place.” And we’re absolutely willing to be at that table to help that happen.</p>

<p><strong>You would accept that kind of regulation, that the chip will be constrained?</strong></p>

<p>Yes, I would accept an opportunity for us to look at what are the safeguards that we would need to put in place.</p>

<p><strong>I think this is going to be one of the most complicated&#8230; I don’t think we expect our chips to be limited in what they can do, and it feels like this is a question we have to ask and answer.</strong></p>

<p>Let me say again, it’s not the chip by itself. Because in general, chips have broad capability. It’s the chips plus the software and the models. Particularly on the model side, it comes down to what you do in terms of safeguards.</p>

<p><strong>We could start lining up for questions. I’ve just got a couple more for you. You’re in the PS5; you’re in the Xbox. There’s a view of the world that says cloud gaming is the future of all things. That might be great for you because you’ll be in their data centers, too. But do you see that shift underway? Is that for real, or are we still doing console generations?</strong></p>

<p>It’s so interesting. Gaming is everywhere. Gaming is everywhere in every form factor. There’s been this long conversation about: is this the end of console gaming? And I don’t see it. I see PC gaming strong, I see console gaming strong, and I see cloud gaming also having legs. And they all need similar types of technology, but they obviously use it in different ways.</p>

<h2 class="wp-block-heading" id="TNTt0x">Audience Q&amp;A</h2>

<p><strong>Nilay Patel: Please introduce yourself.</strong></p>

<p><strong>Alan Lee: Hi, Lisa. Alan Lee, Analog Devices. One and a half years after the Xilinx acquisition, how do you see adaptive computing playing out in AI?</strong></p>

<p><strong>Lisa Su: </strong>First of all, it’s nice to see you, Alan. I think the Xilinx acquisition was an acquisition we completed about 18 months ago — fantastic acquisition. It brought a lot of high-performance IP, including adaptive computing IP. And I do see that particularly on these AI engines, engines that are optimized for data flow architectures, that’s one of the things that we were able to bring in as part of Xilinx. That’s actually the IP that is now going into PCs.</p>

<p>And so we see significant IP usage there. And together, as we go forward, I have this belief that there’s no one computer that is the right one. You actually need the right computing for the right applications. So whether it’s CPUs or GPUs or FPGAs or adaptive SoCs, you need all of those. And that’s the ecosystem that we’re bringing together.</p>

<p><strong>NP: This tall gentleman over here.</strong></p>

<p><strong>Casey Newton: Hi, Casey Newton from <em>Platformer</em>. I wanted to return to Nilay’s question about regulation. Someday, it’s sad to say, but somebody might try to acquire a bunch of your GPUs for the express purpose of doing harm — training a large language model for that purpose. And so I wonder what sort of regulations, if any, do you think government should place around who gets access to large numbers of GPUs and what size training runs they’re allowed to do.</strong></p>

<p><strong>LS: </strong>That’s a good question. I don’t think we know the answer to that, particularly in terms of how to regulate. Our goal is, again, within all of the export controls that are out there, <a href="https://www.reuters.com/technology/us-restricts-exports-some-nvidia-chips-middle-east-countries-filing-2023-08-30/#:~:text=Sign%20InRegister-,US%20curbs%20AI%20chip%20exports%20from%20Nvidia,to%20some%20Middle%20East%20countries&amp;text=Aug%2030%20(Reuters)%20%2D%20The,countries%20in%20the%20Middle%20East.">because GPUs are export controlled</a>, that we follow those regulations. There are the biggest and the next level of GPUs that are there. I think the key is, again, as I said, it’s a combination of both chip and model development that really comes about. And we’re active at those tables and talking about how to do those things. I think we want to ensure that we are very protective of the highest-performing GPUs. But also, it’s an important market where lots of people want access.</p>

<p><strong>Daniel Vestergaard: Hi, I’m Daniel from DR [Danmarks Radio]. To return to something you talked about earlier because everyone here is thinking about implementing AI in their internal workflows — and it’s just so interesting to hear about your thoughts because you have access to the chips and deep machine learning knowledge. Can you specify a bit, what are you using AI internally for in the chip-making process? Because this might point us in the right direction.</strong></p>

<p><strong>LS: </strong>Thanks for the question. I think every business is looking at how to implement AI. So for us, for example, there are the engineering functions and the non-engineering: sales, marketing, data analytics, lead generation. Those are all places where AI can be very useful. On the engineering side, we look at it in terms of how can we build chips faster. So they help us with design, they help us with test generation, they help us with manufacturing diagnostics.</p>

<p>Back to Nilay’s question, do I trust it to build a chip with no humans involved? No, of course not. We have lots of engineers. I think copilot functions in particular are actually fairly easy to adopt. Pure generative AI, we need to check and make sure that it works. But it’s a learning process. And the key, I would say, is there’s lots of experimentation, and fast cycles of learning are important. So we actually have dedicated teams that are spending their time looking at how we bring AI into our company development processes as fast as possible.</p>

<p><strong>Jay Peters: Hi, Jay Peters with <em>The Verge</em>. Apple seems to be making a much bigger push in how its devices, and particularly its M-series chips, are really good for AAA gaming. Are you worried about Apple on that front at all?</strong></p>

<p><strong>NP: They told me the </strong><a href="https://www.ign.com/articles/apple-iphone-15-pro-gaming-interview"><strong>iPhone 15 Pro is the world’s best game console</strong></a><strong>. And that’s why it’s “Pro.” It’s a very confusing situation.</strong></p>

<p><strong>LS: </strong>I don’t know about that. I would say, look, as I said earlier, gaming is such an important application when you think about entertainment and what we’re doing with it. I always think about all competition. But from my standpoint, it’s how do we get&#8230; It’s not just the hardware; it’s really how do we get the gaming ecosystem. People want to be able to take their games wherever and play with their friends and on different platforms. Those are options that we have with the gaming ecosystem today. We’re going to continue to push the envelope on the highest-performing PCs and console chips. And I think we’re going to be pretty good.</p>

<p><strong>NP: I have one more for you. If you listen to <em>Decoder</em>, you know I love asking people about decisions. Chip CEOs have to make the longest-range decisions of basically anybody I can think of. What’s the longest-term bet you’re making right now?</strong></p>

<p><strong>LS: </strong>We are definitely designing for the five-plus-year cycle. I talked to you today about MI300. We made some of those architectural decisions four or five years ago. And the thought process there was, “Hey, where’s the world going? What kind of computing do you need?” Being very ambitious in our goals and what we were trying to do. So we’re pretty excited about what we’re building for the next five years.</p>

<p><strong>NP: What’s a bet you’re making right now?</strong></p>

<p><strong>LS: </strong>We’re betting on what the next big thing in AI is.</p>

<p><strong>NP: Okay. Thank you, Lisa.</strong></p>

<p><strong>LS: </strong>Alright.</p>

<p><strong>NP: I did my best.</strong></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Jacob Kastrenakes</name>
			</author>
			
			<title type="html"><![CDATA[Watch Linda Yaccarino’s wild interview at the Code Conference]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/2023/9/28/23895150/linda-yaccarino-code-conference-2023-x-twitter" />
			<id>https://www.theverge.com/2023/9/28/23895150/linda-yaccarino-code-conference-2023-x-twitter</id>
			<updated>2023-09-28T20:11:43-04:00</updated>
			<published>2023-09-28T20:11:43-04:00</published>
			<category scheme="https://www.theverge.com" term="Business" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="Twitter - X" /><category scheme="https://www.theverge.com" term="Web" />
							<summary type="html"><![CDATA[On Wednesday evening, X CEO Linda Yaccarino appeared onstage at the Code Conference with frustration and protest. "I think many people in this room were not fully prepared for me to still come out on the stage," she told interviewer Julia Boorstin, senior media and tech correspondent at CNBC. Yaccarino sounded rattled. She'd found out [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/24961130/1705119879.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p>On Wednesday evening, X CEO Linda Yaccarino appeared onstage at the Code Conference with frustration and protest. "I think many people in this room were not fully prepared for me to still come out on the stage," she told interviewer Julia Boorstin, senior media and tech correspondent at CNBC.</p>
<p>Yaccarino sounded rattled. She'd found out earlier in the day that Kara Swisher, a Code Conference co-founder, had booked a surprise guest to appear an hour before her: Yoel Roth, Twitter's former head of trust and safety. He has been an outspoken critic of the direction Elon Musk has taken the site.</p>
<p>In his interview with Swisher, Roth recounted how M …</p>
<p><a href="https://www.theverge.com/2023/9/28/23895150/linda-yaccarino-code-conference-2023-x-twitter">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Jess Weatherbed</name>
			</author>
			
			<title type="html"><![CDATA[Adobe’s Photoshop on the web launch includes its popular desktop AI tools]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/2023/9/27/23892889/adobe-photoshop-for-the-web-firefly-ai-generative-fill-full-release-price-date" />
			<id>https://www.theverge.com/2023/9/27/23892889/adobe-photoshop-for-the-web-firefly-ai-generative-fill-full-release-price-date</id>
			<updated>2023-09-27T19:30:00-04:00</updated>
			<published>2023-09-27T19:30:00-04:00</published>
			<category scheme="https://www.theverge.com" term="Adobe" /><category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Apps" /><category scheme="https://www.theverge.com" term="Creators" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="Web" />
							<summary type="html"><![CDATA[After almost two years in beta, Adobe's Photoshop on the web service - a simplified online version of the company's desktop photo editing app - is now generally available starting Wednesday, September 27th. According to information Adobe shared with The Verge, Photoshop on the web is launching with the popular Generative Fill and Generative Expand [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="Adobe says it no longer has plans for a free-to-use version of the online photo editing software. | Image: Adobe" data-portal-copyright="Image: Adobe" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/24957435/Subway_shoe.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Adobe says it no longer has plans for a free-to-use version of the online photo editing software. | Image: Adobe	</figcaption>
</figure>
<p>After almost two years in beta, Adobe's Photoshop on the web service - a simplified online version of the company's desktop photo editing app - is now generally available starting Wednesday, September 27th. According to information Adobe shared with <em>The Verge</em>, Photoshop on the web is launching with the popular <a href="https://www.theverge.com/2023/9/13/23871537/adobe-firefly-generative-ai-model-general-availability-launch-date-price">Generative Fill and Generative Expand tools</a> that were recently released for the desktop version of Photoshop. </p>
<p>Powered by <a href="https://www.theverge.com/2023/3/21/23648315/adobe-firefly-ai-image-generator-announced">Adobe's Firefly generative AI model</a>, these features are available for commercial use and allow users to quickly add to, remove from, or expand an image using text-based descriptions in over 100 languages, all while …</p>
<p><a href="https://www.theverge.com/2023/9/27/23892889/adobe-photoshop-for-the-web-firefly-ai-generative-fill-full-release-price-date">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Jay Peters</name>
			</author>
			
			<title type="html"><![CDATA[Artifact is becoming Twitter, too]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/2023/9/27/23887416/artifact-mike-krieger-code-2023-posts" />
			<id>https://www.theverge.com/2023/9/27/23887416/artifact-mike-krieger-code-2023-posts</id>
			<updated>2023-09-27T14:32:07-04:00</updated>
			<published>2023-09-27T14:32:07-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Apps" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Artifact, the AI-powered news app from Instagram's co-founders, is adding a major new feature: the ability to post. So far, the app has been an aggregator for news and links from around the internet, but you're going to be able to add posts directly to the app. Mike Krieger, one of the co-founders of Artifact, [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: Artifact" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/24398651/Product_Shot.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p>Artifact, the AI-powered news app from Instagram's co-founders, is adding a major new feature: the ability to post. So far, the app has been an aggregator for news and links from around the internet, but you're going to be able to add posts directly to the app.</p>
<p>Mike Krieger, one of the co-founders of Artifact, announced the new features onstage in a conversation with Casey Newton at the Code Conference on Wednesday. The new feature is a logical next step from Artifact's recently launched update that lets users <a href="https://www.theverge.com/2023/9/13/23871561/artifact-links-news-reading-app-tiktok">share links</a>. This new feature means you won't just be limited to links; your posts can include things like a title, text, and photos …</p>
<p><a href="https://www.theverge.com/2023/9/27/23887416/artifact-mike-krieger-code-2023-posts">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
	</feed>
