<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Nvidia | The Verge</title>
	<subtitle type="text">The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts.</subtitle>

	<updated>2026-03-31T10:29:17+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.theverge.com/nvidia" />
	<id>https://www.theverge.com/rss/nvidia/index.xml</id>
	<link rel="self" type="application/atom+xml" href="https://www.theverge.com/rss/nvidia/index.xml" />

	<icon>https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&amp;h=150&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Jess Weatherbed</name>
			</author>
			
			<title type="html"><![CDATA[Nvidia rolls out DLSS 4.5 update with new frame generation features]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/903934/nvidia-dlss-4-5-multi-frame-generation-app-beta-launch" />
			<id>https://www.theverge.com/?p=903934</id>
			<updated>2026-03-31T06:29:17-04:00</updated>
			<published>2026-03-31T09:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="Gaming" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Nvidia" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Nvidia has launched the next major update for its Deep Learning Super Sampling (DLSS) feature, introducing AI-powered frame generation modes for supported GeForce RTX graphics cards. These DLSS 4.5 upgrades are included in the new Nvidia app beta update starting today, allowing players to improve performance and image quality across more than 20 gaming titles. [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Vector collage of the Ndivia logo." data-caption="" data-portal-copyright="Cath Virginia / The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25343250/STK083_B_NVIDIA_A.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Nvidia has launched the next major update for its Deep Learning Super Sampling (DLSS) feature, introducing AI-powered frame generation modes for supported GeForce RTX graphics cards. These <a href="https://www.nvidia.com/en-us/geforce/news/dlss-4-5-dynamic-multi-frame-generation-6x-mode-released/">DLSS 4.5 upgrades</a> are included in the new <a href="https://www.nvidia.com/en-us/geforce/news/nvidia-app-dlss-4-5-dynamic-multi-frame-generation-available-now/">Nvidia app beta update</a> starting today, allowing players to improve performance and image quality across <a href="https://www.nvidia.com/en-us/geforce/news/dlss-4-5-rtx-path-tracing-game-announcements-gdc-2026/">more than 20 gaming titles</a>.</p>
<p class="has-text-align-none">The main standout of DLSS 4.5 is its <a href="https://www.theverge.com/tech/854610/nvidia-dlss-4-5-announcement-multi-frame-generation-6x-specs">6x Multi Frame Generation feature</a> for users with RTX 50-series GPUs. Nvidia says this mode uses its second-generation transformer AI model to generate "five additional frames for every single natively rendered one," with "minimal impact" to respons …</p>
<p><a href="https://www.theverge.com/tech/903934/nvidia-dlss-4-5-multi-frame-generation-app-beta-launch">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Stevie Bonifield</name>
			</author>
			
			<title type="html"><![CDATA[Mark Zuckerberg and Jensen Huang are part of Trump’s new ‘tech panel’]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/policy/900340/trump-tech-panel-mark-zuckerberg-jensen-huang" />
			<id>https://www.theverge.com/?p=900340</id>
			<updated>2026-03-25T11:56:09-04:00</updated>
			<published>2026-03-25T10:41:21-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Meta" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Nvidia" /><category scheme="https://www.theverge.com" term="Policy" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Meta CEO Mark Zuckerberg, Oracle CTO and executive chairman Larry Ellison, Nvidia CEO Jensen Huang, and Google cofounder Sergey Brin will be the first four members of the President's Council of Advisors on Science and Technology (PCAST), according to the Wall Street Journal. The panel, which will "weigh in on AI policy," will include 13 [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Photo of Mark Zuckerberg in front of background of Meta logo." data-caption="Mark Zuckerberg. | Image: Cath Virginia / The Verge, Getty Images" data-portal-copyright="Image: Cath Virginia / The Verge, Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/04/STKS507_FTCxMETA_ANTITRUST_CVIRGINIA_2_E.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Mark Zuckerberg. | Image: Cath Virginia / The Verge, Getty Images	</figcaption>
</figure>
<p class="has-text-align-none">Meta CEO Mark Zuckerberg, Oracle CTO and executive chairman Larry Ellison, Nvidia CEO Jensen Huang, and Google cofounder Sergey Brin will be the first four members of the President's Council of Advisors on Science and Technology (PCAST), according to the <a href="https://www.wsj.com/politics/policy/trump-to-name-mark-zuckerberg-larry-ellison-and-jensen-huang-to-tech-panel-ded1ec6f?mod=hp_lead_pos2"><em>Wall Street Journal</em></a>.</p>
<p class="has-text-align-none">The panel, which will "weigh in on AI policy," will include 13 members to start, but could grow to 24. Trump's AI and crypto czar David Sacks and White House tech advisor Michael Kratsios will co-chair the panel. </p>
<p class="has-text-align-none">According to the White House's January <a href="https://www.whitehouse.gov/presidential-actions/2025/01/presidents-council-of-advisors-on-science-and-technology/">announcement</a> for the panel, PCAST will "advise the President on matters involving science, technology, education, and  …</p>
<p><a href="https://www.theverge.com/policy/900340/trump-tech-panel-mark-zuckerberg-jensen-huang">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Richard Lawler</name>
			</author>
			
			<title type="html"><![CDATA[Arm’s first CPU ever will plug into Meta’s AI data centers later this year]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/899823/arm-agi-cpu-meta" />
			<id>https://www.theverge.com/?p=899823</id>
			<updated>2026-03-24T18:28:49-04:00</updated>
			<published>2026-03-24T16:43:14-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="AMD" /><category scheme="https://www.theverge.com" term="Intel" /><category scheme="https://www.theverge.com" term="Meta" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Nvidia" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[After decades of only licensing its chip designs for others to use, UK-based Arm revealed the first chip it's producing on its own, and the first customer. Dubbed the Arm AGI CPU, it's another chip designed for inference, or running the cloud processing for AI tools like AI agents that can continue to spawn more [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Stylized image showing the edge of a CPU" data-caption="" data-portal-copyright="Image: Arm" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/03/20260216_VISION25_ExplodedTight_Chip-01-16x9_16bit_v2-1200x675-1.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">After decades of only licensing its chip designs for others to use, UK-based Arm <a href="https://newsroom.arm.com/news/arm-agi-cpu-launch">revealed the first chip</a> it's producing on its own, and the first customer. Dubbed the Arm AGI CPU, it's another chip designed for inference, or running the cloud processing for AI tools like AI agents that can continue to spawn more and more tasks to run at once. The first company in line to use it is Meta, which has <a href="https://www.ft.com/content/d3b50dfc-31fa-45a8-9184-c5f0476f4504">reportedly struggled</a> to launch its own AI chips.</p>
<p class="has-text-align-none">Meta <a href="https://about.fb.com/news/2026/03/meta-partners-with-arm-to-develop-new-class-of-data-center-silicon/">says it's both the lead partner and co-developer</a>, and plans to work on "multiple generations" of the data center CPUs, for use along with hardware from other vendors like <a href="https://www.theverge.com/ai-artificial-intelligence/880513/nvidia-meta-ai-grace-vera-chips">Nvidia</a> and <a href="https://www.theverge.com/tech/883593/amd-forges-100-billion-deal-with-meta-for-ai-chips">AMD</a>. Arm cus …</p>
<p><a href="https://www.theverge.com/ai-artificial-intelligence/899823/arm-agi-cpu-meta">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Hayden Field</name>
			</author>
			
			<title type="html"><![CDATA[Nvidia CEO Jensen Huang says &#8216;I think we&#8217;ve achieved AGI&#8217;]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/899086/jensen-huang-nvidia-agi" />
			<id>https://www.theverge.com/?p=899086</id>
			<updated>2026-03-23T16:23:28-04:00</updated>
			<published>2026-03-23T15:42:33-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Nvidia" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[On a Monday episode of the Lex Fridman podcast, Nvidia CEO Jensen Huang made a hot-button statement: "I think we've achieved AGI." AGI, or artificial general intelligence, is a vaguely defined term that has incited a lot of discussion by tech CEOs, tech workers, and the general public in recent years, as it typically denotes [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Digital photo collage of Nvidia CEO Jensen Huang." data-caption="" data-portal-copyright="Image: Cath Virginia / The Verge, Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25835739/STKP210_JENSEN_HUANG_B.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">On a Monday <a href="https://lexfridman.com/jensen-huang/">episode</a> of the Lex Fridman podcast, Nvidia CEO Jensen Huang made a hot-button statement: "I think we've achieved AGI."</p>
<p class="has-text-align-none">AGI, or artificial general intelligence, is a vaguely defined term that has incited a lot of discussion by tech CEOs, tech workers, and the general public in recent years, as it typically denotes AI that's equal to or surpasses human intelligence. In recent months, tech leaders have tried to distance themselves from the term and <a href="https://www.theverge.com/ai-artificial-intelligence/845890/ai-companies-rebrand-agi-artificial-general-intelligence">create their own terminology</a> that they view as less over-hyped, more useful, and more clearly defined (although the new phrases they've come up with essentially mean the same thing as AG …</p>
<p><a href="https://www.theverge.com/ai-artificial-intelligence/899086/jensen-huang-nvidia-agi">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Sean Hollister</name>
			</author>
			
			<title type="html"><![CDATA[Nvidia has lost the plot with gamers]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/games/895753/nvidia-dlss-5-slop-face-fake-frames" />
			<id>https://www.theverge.com/?p=895753</id>
			<updated>2026-03-19T11:37:34-04:00</updated>
			<published>2026-03-18T09:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="Gaming" /><category scheme="https://www.theverge.com" term="Nvidia" /><category scheme="https://www.theverge.com" term="PC Gaming" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Nvidia surely thought it was doing a good thing for gamers by "upgrading" the faces of our favorite video game characters. But that just shows how much the company has lost the plot. Nvidia could've marketed its new DLSS 5 real-time lighting technology as a way to make future, next-gen games look better. Instead, it [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Image: Nvidia" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/03/nvidia-dlss-5-geforce-rtx-resident-evil-requiem-comparison-001-on.jpeg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Nvidia surely thought it was doing a good thing for gamers by "upgrading" the faces of our favorite video game characters. But that just shows how much the company has lost the plot.</p>
<p class="has-text-align-none">Nvidia could've marketed its new DLSS 5 real-time lighting technology as a way to make <em>future, next-gen </em>games look better. Instead, it told the world that games people already know and love <em>look bad</em>. It focused on retconning characters' faces. And now, confronted with the predictable backlash, <a href="https://www.tomshardware.com/pc-components/gpus/jensen-huang-says-gamers-are-completely-wrong-about-dlss-5-nvidia-ceo-responds-to-dlss-5-backlash">Nvidia's CEO is telling critics that we're "completely wrong</a>."</p>
<p class="has-text-align-none">Regardless of how it works, the tech presents as an AI filter that tries to optimize everyone and everythi …</p>
<p><a href="https://www.theverge.com/games/895753/nvidia-dlss-5-slop-face-fake-frames">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Richard Lawler</name>
			</author>
			
			<title type="html"><![CDATA[DLSS 5: Has Nvidia&#8217;s AI graphics technology gone too far?]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/games/896518/nvidia-dlss-5-ai-3d-rendering" />
			<id>https://www.theverge.com/?post_type=vm_stream&#038;p=896518</id>
			<updated>2026-03-19T12:12:03-04:00</updated>
			<published>2026-03-18T08:30:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Gaming" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Nvidia" /><category scheme="https://www.theverge.com" term="PC Gaming" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Nvidia has revealed a new “3D guided neural rendering model” called DLSS 5 that can change a game’s lighting and materials in real-time, and… many gamers aren’t happy. From DLSS 5 memes to complaints about how it’s “yassified” Resident Evil Requiem characters in demos, the first impression has not been a good one, no matter [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="DLSS 5 comparison image of Resident Evil Requiem" data-caption="" data-portal-copyright="Image: Nvidia" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/03/screencapture-nvidia-en-us-geforce-news-dlss5-breakthrough-in-visual-fidelity-for-games-nvidia-dlss-5-resident-evil-requiem-geforce-rtx-comparison-screenshot-001-2026-03-17-20_21_56.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Nvidia has revealed a new “3D guided neural rendering model” called DLSS 5 that can change a game’s lighting and materials in real-time, and… many gamers aren’t happy. From DLSS 5 <a href="https://bsky.app/profile/amanita-design.net/post/3mhawmtv44c2e">memes</a> to <a href="https://bsky.app/profile/mikebithell.bsky.social/post/3mh7f6vgrfc2s">complaints</a> about how it’s “yassified” <em>Resident Evil Requiem</em> characters in demos, the first impression has not been a good one, no matter how much Nvidia <a href="https://www.nvidia.com/en-us/geforce/forums/rtx-technology-dlss-dxr/37/583738/dlss-5-faq/">insists</a> that this pursuit of photorealism is still honoring the original artists’ intent.</p>

<p class="has-text-align-none"><em>Follow along below for all the latest updates about Nvidia’s DLSS 5 upgrades</em>.</p>
<ul>
					<li>
				<a href="https://www.theverge.com/games/895753/nvidia-dlss-5-slop-face-fake-frames">Nvidia has lost the plot with gamers</a>
			</li>
					<li>
				<a href="https://www.theverge.com/games/896457/jensen-huang-on-the-critical-reaction-to-dlss-5-well-first-of-all-theyre-completely-wrong">Jensen Huang, on the critical reaction to DLSS 5: “Well, first of all, they&#8217;re completely wrong.”</a>
			</li>
					<li>
				<a href="https://www.theverge.com/entertainment/896213/nvidia-dlss-5-ai-faces-motion-smoothing">Nvidia&#8217;s DLSS 5 is like motion smoothing for video games, but worse</a>
			</li>
					<li>
				<a href="https://www.theverge.com/news/895472/nvidia-dlss5-generative-ai-pc-graphics">DLSS 5 looks like a real-time generative AI filter for video games</a>
			</li>
					<li>
				<a href="https://www.theverge.com/tech/895421/nvidia-just-announced-dlss-5-and-digital-foundry-already-has-a-video">Nvidia just announced DLSS 5 and Digital Foundry already has a video.</a>
			</li>
			</ul>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Andrew J. Hawkins</name>
			</author>
			
			<title type="html"><![CDATA[Nvidia says China’s BYD and Geely will use its robotaxi platform]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/895301/nvidia-robotaxi-byd-geely-hyperion-lyft-halos" />
			<id>https://www.theverge.com/?p=895301</id>
			<updated>2026-03-16T16:47:35-04:00</updated>
			<published>2026-03-16T16:30:00-04:00</published>
			<category scheme="https://www.theverge.com" term="Autonomous Cars" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Nvidia" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="Transportation" />
							<summary type="html"><![CDATA[Nvidia added two leading Chinese automakers, BYD and Geely, to its robotaxi program, as the chipmaker seeks to put its stamp on the growing autonomous vehicle market worldwide. At its GTC conference today, Nvidia announced that BYD and Geely, as well as Isuzu and Nissan, would use the chipmaker's Drive Hyperion platform, which combines the [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Nvidia DRIVE Hyperion" data-caption="" data-portal-copyright="Image: Nvidia" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/03/DRIVE-Hyperion-L4-Image.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Nvidia added two leading Chinese automakers, BYD and Geely, to its robotaxi program, as the chipmaker seeks to put its stamp on the growing autonomous vehicle market worldwide.</p>
<p class="has-text-align-none">At its GTC conference today, Nvidia announced that BYD and Geely, as well as Isuzu and Nissan, would use the chipmaker's Drive Hyperion platform, which combines the chips, computers, sensors, and software needed for for the development of Level 4 autonomous vehicles.</p>
<p class="has-text-align-none">BYD currently uses Nvidia's chips in its manually driven cars, and now, under this expanded agreement, it will use the company's Hyperion platform to build next-generation Level 4 vehicles. Geely, meanw …</p>
<p><a href="https://www.theverge.com/tech/895301/nvidia-robotaxi-byd-geely-hyperion-lyft-halos">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Andrew J. Hawkins</name>
			</author>
			
			<title type="html"><![CDATA[Nvidia’s head of autonomous driving opens up about his plan to beat Waymo and Tesla]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/892395/nvidia-autonomous-vehicle-xinzhou-wu-interview" />
			<id>https://www.theverge.com/?p=892395</id>
			<updated>2026-03-17T13:06:09-04:00</updated>
			<published>2026-03-11T08:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="Autonomous Cars" /><category scheme="https://www.theverge.com" term="Exclusive" /><category scheme="https://www.theverge.com" term="Nvidia" /><category scheme="https://www.theverge.com" term="Report" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="Transportation" />
							<summary type="html"><![CDATA[Every six months or so, Nvidia's head of automotive, Xinzhou Wu, invites CEO Jensen Huang to go for a ride in a vehicle equipped with the company's hands-free autonomous driving system. But only when Wu has "good confidence" in the system's driving capabilities. Recently, the two went for a drive from Woodside, California, to downtown [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Nvidia DRIVE Hyperion " data-caption="Nvidia is offering its DRIVE Hyperion platform to automakers who want to enable a range of autonomous features. | Image: Nvidia" data-portal-copyright="Image: Nvidia" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2026/03/DRIVE-Hyperion-Image.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Nvidia is offering its DRIVE Hyperion platform to automakers who want to enable a range of autonomous features. | Image: Nvidia	</figcaption>
</figure>
<p class="has-text-align-none">Every six months or so, Nvidia's head of automotive, Xinzhou Wu, invites CEO Jensen Huang to go for a ride in a vehicle equipped with the company's hands-free autonomous driving system. But only when Wu has "good confidence" in the system's driving capabilities.</p>
<p class="has-text-align-none">Recently, the two went for a drive from Woodside, California, to downtown San Francisco in a Mercedes CLA sedan with MB.Drive Assist Pro, a hands-free driver-assist system partly designed by Nvidia that's similar to Tesla's Full Self-Driving. The mood was light, even if the traffic was pretty heavy. </p>
<p class="has-text-align-none">"Let me know when you're in autonomous mode," Huang said to Wu, according to a vid …</p>
<p><a href="https://www.theverge.com/tech/892395/nvidia-autonomous-vehicle-xinzhou-wu-interview">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Jay Peters</name>
			</author>
			
			<title type="html"><![CDATA[Nvidia’s DLSS 4.5 with 6x Frame Generation is rolling out at the end of March]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/892111/nvidia-dlss-4-5-6x-frame-generation-dynamic-frame-generation" />
			<id>https://www.theverge.com/?p=892111</id>
			<updated>2026-03-10T12:51:52-04:00</updated>
			<published>2026-03-10T11:30:00-04:00</published>
			<category scheme="https://www.theverge.com" term="Gaming" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Nvidia" /><category scheme="https://www.theverge.com" term="PC Gaming" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Nvidia's DLSS 4.5 with 6x Multi Frame Generation will be available starting March 31st for users with RTX 50-series GPUs, the company announced on Tuesday. With 6x Multi Frame Generation, Nvidia claims that DLSS 4.5 can generate "five additional frames for every single natively rendered one, for a maximum 6X multiplier." It's an increase from [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/11/STK083_NVIDIA_2_A.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Nvidia's DLSS 4.5 with 6x Multi Frame Generation will be available starting March 31st for users with RTX 50-series GPUs, the company <a href="https://www.nvidia.com/en-us/geforce/news/gdc-2026-nvidia-geforce-rtx-announcements/#geforce-now">announced on Tuesday</a>. With 6x Multi Frame Generation, Nvidia <a href="https://www.nvidia.com/en-us/geforce/news/dlss-4-5-dynamic-multi-frame-gen-6x-2nd-gen-transformer-super-res/">claims that</a> DLSS 4.5 can generate "five additional frames for every single natively rendered one, for a maximum 6X multiplier." It's an increase from the maximum of three additional frames possible with DLSS 4.</p>
<p class="has-text-align-none">On March 31st, Nvidia will also release Dynamic Frame Generation for 50-series GPUs, which can automatically switch between Multi Frame Generation multipliers to hit your target frame rate for a game or your display's refresh rate.</p>
<p class="has-text-align-none">Nvidia an …</p>
<p><a href="https://www.theverge.com/tech/892111/nvidia-dlss-4-5-6x-frame-generation-dynamic-frame-generation">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Stevie Bonifield</name>
			</author>
			
			<title type="html"><![CDATA[Nvidia&#8217;s spending $4 billion on photonics to stay ahead of the curve in AI]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/tech/887635/nvidia-ai-photonics-lumentum-coherent" />
			<id>https://www.theverge.com/?p=887635</id>
			<updated>2026-03-02T11:56:49-05:00</updated>
			<published>2026-03-02T11:56:49-05:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Nvidia" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Nvidia announced on Monday that it's investing $2 billion each into Lumentum and Coherent, which are both developing photonics technology for data centers, like optical transceivers, circuit switches, and lasers, which are used to move data at high speeds over long distances. Their tech could improve energy efficiency, data transfer speeds, and bandwidth in future [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="An image of Nvidia’s logo" data-caption="" data-portal-copyright="Image: Cath Virginia / The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/06/STK083_NVIDIA_2.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">Nvidia announced on Monday that it's investing $2 billion each into <a href="https://nvidianews.nvidia.com/news/nvidia-announces-strategic-partnership-with-lumentum-to-develop-state-of-the-art-optics-technology">Lumentum</a> and <a href="https://nvidianews.nvidia.com/news/nvidia-and-coherent-announce-strategic-partnership-to-develop-optics-technology-to-scale-next-generation-data-center-architecture">Coherent</a>, which are both developing photonics technology for data centers, like optical transceivers, circuit switches, and lasers, which are used to move data at high speeds over long distances. Their tech could improve energy efficiency, data transfer speeds, and bandwidth in future AI data centers, after Nvidia already capitalized on its 2020 acquisition of the <a href="https://www.theverge.com/2024/12/9/24317016/nvidia-mellanox-antitrust-china-ai-chips">network hardware company Mellanox</a> to beef up NVLink and increase the amount of data moving between its GPUs. </p>
<p class="has-text-align-none">For Lumentum, the nonexclusive multiyear deal includes a "multibillion purchase commitment …</p>
<p><a href="https://www.theverge.com/tech/887635/nvidia-ai-photonics-lumentum-coherent">Read the full story at The Verge.</a></p>
						]]>
									</content>
			
					</entry>
	</feed>
