<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Sono Motoyama | The Verge</title>
	<subtitle type="text">The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts.</subtitle>

	<updated>2018-08-27T15:11:07+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.theverge.com/author/sono-motoyama" />
	<id>https://www.theverge.com/authors/sono-motoyama/rss</id>
	<link rel="self" type="application/atom+xml" href="https://www.theverge.com/authors/sono-motoyama/rss" />

	<icon>https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&amp;h=150&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Sono Motoyama</name>
			</author>
			
			<title type="html"><![CDATA[Inside the United Nations’ effort to regulate autonomous killer robots]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/2018/8/27/17786080/united-nations-un-autonomous-killer-robots-regulation-conference" />
			<id>https://www.theverge.com/2018/8/27/17786080/united-nations-un-autonomous-killer-robots-regulation-conference</id>
			<updated>2018-08-27T11:11:07-04:00</updated>
			<published>2018-08-27T11:11:07-04:00</published>
			<category scheme="https://www.theverge.com" term="Policy" /><category scheme="https://www.theverge.com" term="Report" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Amandeep Gill has a difficult job, though he won&#8217;t admit it himself. As chair of the United Nations&#8217; Convention on Conventional Weapons (CCW) meetings on lethal autonomous weapons, he has the task of shepherding 125 member states through discussions on the thorny technical and ethical issue of &#8220;killer robots&#8221; &#8212; military robots that could theoretically [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Photo: Christophe Morin /IP3 / Getty Images" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/12562769/GettyImages_974852790_sized.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p>Amandeep Gill has a difficult job, though he won&rsquo;t admit it himself. As chair of the United Nations&rsquo; Convention on Conventional Weapons (CCW) meetings on lethal autonomous weapons, he has the task of shepherding 125 member states through discussions on the thorny technical and ethical issue of &ldquo;killer robots&rdquo; &mdash; military robots that could theoretically engage targets independently. It&rsquo;s a subject that has attracted a glaring <a href="https://www.washingtonpost.com/news/innovations/wp/2017/08/21/elon-musk-calls-for-ban-on-killer-robots-before-weapons-of-terror-are-unleashed/?utm_term=.746bba42e073">media spotlight</a> and pressure from NGOs like Campaign to Stop Killer Robots, which is backed by Tesla&rsquo;s Elon Musk and Alphabet&rsquo;s Mustafa Suleyman, to ban such machines outright.</p>

<p>Gill has to corral national delegations &mdash; diplomats, lawyers, and military personnel &mdash; as well as academics, AI entrepreneurs, industry associations, humanitarian organizations, and NGOs in order for member states to try to reach a consensus on this critical security issue.</p>

<p>The subject of killer robots can spark heated emotions. The Future of Life Institute, a nonprofit that works to &ldquo;mitigate existential risks facing humanity&rdquo; such as artificial intelligence, launched its sensationalistic short film <em>Slaughterbots</em> at a side event hosted by the Campaign to Stop Killer Robots at the CCW&rsquo;s meetings last November. The film, which depicts a dystopian near-future menaced by homicidal drones, immediately went viral.</p>
<div class="youtube-embed"><iframe title="Slaughterbots" src="https://www.youtube.com/embed/9CO6M2HsoIA?rel=0" allowfullscreen allow="accelerometer *; clipboard-write *; encrypted-media *; gyroscope *; picture-in-picture *; web-share *;"></iframe></div>
<p>Gill, a former disarmament ambassador for India, sought to quell the rising hysteria sparked by a vision of murderous drone armies. &ldquo;Ladies and gentlemen, I have news for you,&rdquo; Gill said, speaking to the press after the initial round of CCW meetings. &ldquo;The robots are not taking over the world. Humans are still in charge.&rdquo;</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>“The robots are not taking over the world. Humans are still in charge.”</p></blockquote></figure>
<p>An engineer by training, Gill honed his negotiating chops during his participation in discussions on the Comprehensive Nuclear-Test-Ban Treaty (CTBT) and civil nuclear agreements between India and its partners; he was also a member of India&rsquo;s <a href="http://www.aitf.org.in">Artificial Intelligence Task Force</a>. Gill is slated to be the executive director at the UN&rsquo;s newly created High-Level Panel on Digital Cooperation, which is co-chaired by Melinda Gates and Alibaba Group&rsquo;s Jack Ma.</p>

<p>The CCW will meet for the third time for discussions on lethal autonomous weapons (LAWs), from August 27th through 31st, after which it will likely issue a report and decide upon continuing discussions next year. <em>The Verge</em> spoke to Gill about&nbsp;Hollywood depictions of dangerous machines, weapons that already exist or are in development, and a potential ban on killer robots.</p>

<p><em>This interview has been edited for length and clarity.</em></p>

<p><strong>I understand that the official definition of autonomous lethal weapons is still under discussion, but give us a sense of what we&rsquo;re talking about when we say &ldquo;killer robots.&rdquo; Drones, planes, ships, tanks, computer systems? </strong></p>

<p>The jury is still out on whether we have lethal autonomous weapons, as some people define them, out there yet. For others, yes, there are such weapons systems out there in labs and so on&hellip; To give you a concrete example, the C-RAM system (the Counter-Rocket, Artillery, and Mortar system) that the US deployed in the Iraq theater some years back. So this system responds in an autonomous fashion to incoming fire, but there is a degree of human control that is exercised. The discussion at our convention was around how meaningful that control is. If you had another system of response, would that be more respectful of IHL (international humanitarian law) or not? That was a useful way of visualizing some of the challenges with future systems.</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>“I don’t think that these visualizations of Terminators or drones going berserk are very helpful.”</p></blockquote></figure>
<p><strong>So we&rsquo;re not talking about the Terminator or tiny autonomous drones exploding on people&rsquo;s heads. </strong></p>

<p>I don&rsquo;t think that these visualizations of Terminators or drones going berserk are very helpful in having an advanced conversation about intelligent autonomous systems. But we have to deal with what&rsquo;s there &mdash; Hollywood and the rest of it. I think the best way would be thinking about the loss of human control. So the systems that we&rsquo;re dealing with, whether it&rsquo;s in the civilian space or in the military space, if they exhibit this aspect of autonomy, whereby human supervision becomes hard to implement in practice, we have a difficulty.</p>

<p>Whether that is a safety-related challenge with regard to autonomous vehicles or a hacking challenge in the civilian space &mdash; people hacking into autonomous vehicles or poisoning the data that&rsquo;s used for training these systems &mdash; or in the military, if it&rsquo;s a loss of control over these weapons systems in the battle space by commanders that results in friendly fire or accidental triggering of hostilities among states.</p>

<p><strong>There are some governments and NGOs that would like to see a ban of lethal autonomous weapons. Do you see that as a possibility or likelihood?</strong></p>

<p>The Convention on Conventional Weapons provides a range of possibilities for controlling weapons use, either banning systems in advance or accepting their inevitability but proscribing their use in certain scenarios, or prescribing some ways of exchanging information or warning people on their use, etc. So banning LAWs is one of the possibilities among the options. But there could be yet another option. There are some states that are quite content with leaving this to national regulations, to industrial standards. So at this point in time, there is no consensus on any option.</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>“There are some states that are quite content with leaving this to national regulations, to industrial standards.”</p></blockquote></figure>
<p>As chair, I don&rsquo;t have a view on what option states should take. I have to make sure that whatever option states decide on, the results of the discussion are able to support that option.</p>

<p>There is a degree of common understanding in the room that the notion of human accountability for the use of force cannot be dispensed with. So if that is so, what is the quality of the human-machine interface that we are looking for?</p>

<p><strong>Unlike nuclear weapons, an issue you&rsquo;ve also worked on, the tools to build autonomous weapons are relatively accessible. Does that pose a challenge to controlling this technology?</strong></p>

<p>Any military system today uses a number of technologies that are available off the shelf. But the international community has found ways to regulate these systems, to control their proliferation, whether it is through technology export control or treaties and conventions that have broader applicability or other ways of working with industry &mdash; such as in the area of nuclear security, for example &mdash; of managing the risks and unintended consequences.</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>“We need to have a conversation that is open to all stakeholders.”</p></blockquote></figure>
<p>So AI is perhaps not so different from these earlier examples. What is perhaps different is the speed and scale of change, and the difficulty in understanding the direction of deployment. That is why we need to have a conversation that is open to all stakeholders. If we set out to govern these through only one level, let&rsquo;s say the international treaty-making level, ignoring what is done at the national level or by industry, then our chances of success would be not that great. That is my experience of the past couple of years. But if all these different levels move in sync, move in full cognizance of what the other levels are attempting, then we have a better chance of succeeding in managing some of the risks that are associated with AI.</p>

<p><strong>What are your thoughts on the efforts of the Campaign to Stop Killer Robots? In 2014, Elon Musk said that &ldquo;killer robots&rdquo; would be here in five years, so that would be next year. Do you have any response?</strong></p>

<p>No, I don&rsquo;t want to comment on&hellip; There are a lot of predictions, a lot of assessments around, so I think it would be very brave of me to comment on any one of these predictions.</p>

<p><strong>I respect your not wanting to sensationalize the subject, but I think there is this fear out there. I&rsquo;m just wondering what your response to that fear is.</strong></p>

<p>I think making policy out of fear is not a good idea. So as serious policy practitioners, we have to look at what has become of the situation in terms of technology development and what is likely to happen. What is the context we are dealing with? Here in the Geneva discussions, our context is international humanitarian law, the laws of armed conflict and other concerns related to international security implications of the possible deployment of lethal autonomous weapons systems. So we have to keep that context in mind and deal with it in a rational, systematic manner which carries along all the 125 states that are in the CCW. I don&rsquo;t think being fearful or being paralyzed into inaction &mdash; or being cavalier about the risk either &mdash; is very helpful.</p>

<p><strong>Do you have goals for this cycle of meetings in August?</strong></p>

<p>Yes, indeed. One goal is to build on the consensus outcome of last year whereby we shaped the agenda into four distinct sets of issues. We agreed on a set of understandings around the concerns. In particular, the understanding that existing international humanitarian law continues to apply to weapons systems in whatever shape or form.</p>

<p>The four agenda items are first, the characterization issue &mdash; how do you define lethal autonomous weapons systems? Second, what should be the nature of the human element in the use of force through such systems? What should be the human-machine interface when such systems are deployed or developed? The third item is, what are the various options for dealing with the international humanitarian law and the international security-related concerns coming from the potential deployment of such weapons systems?</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>“How do you define lethal autonomous weapons systems?”</p></blockquote></figure>
<p>When I say &ldquo;options,&rdquo; I mean whether it should be a legally binding instrument, another protocol to the convention, or whether it should be a political declaration, a politically binding set of rules and principles, or whether it should center on the applicability of the existing rules, such as weapons reviews.</p>

<p class="has-end-mark">The fourth point is about technology review. In this field, more than any other today, technology is evolving very rapidly. So you want your policy responses to be tech-neutral. They should not have to be fundamentally revised when technology changes. At the same time, you want to make sure that the implementation stays in step with technical developments. &hellip; In the August meeting, it is my hope that we come up at the end of the meeting with a good report that captures some building blocks in these four areas.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Sono Motoyama</name>
			</author>
			
			<title type="html"><![CDATA[Meet the ‘Lady Gaga of Mathematics’ helming France’s AI task force]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/2018/3/28/17170104/cedric-villani-french-mathematician-ai-report-interview" />
			<id>https://www.theverge.com/2018/3/28/17170104/cedric-villani-french-mathematician-ai-report-interview</id>
			<updated>2018-03-28T08:10:18-04:00</updated>
			<published>2018-03-28T08:10:18-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Features" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[On a crisp Saturday morning in Orsay, a southwestern suburb of Paris with some 16,500 inhabitants, the rue de Paris was bustling. But while many residents were doing their usual weekend shopping at the fishmonger or the butcher shop, further up the street, in a small former chateau that is now the town&#8217;s cultural center, [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Illustration by Alex Castro / The Verge. Photo by Sono Motoyama" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/10538695/acastro_180327_2414_0001.0.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p>On a crisp Saturday morning in Orsay, a southwestern suburb of Paris with some 16,500 inhabitants, the rue de Paris was bustling. But while many residents were doing their usual weekend shopping at the fishmonger or the butcher shop, further up the street, in a small former chateau that is now the town&rsquo;s cultural center, about 80 people had set aside their late-morning hours to hear the &ldquo;<em>voeux</em>&rdquo; of their legislative representative to the National Assembly, C&eacute;dric Villani.</p>

<p>The <em>voeux</em>, or &ldquo;new year&rsquo;s wishes,&rdquo; are a standard exercise of French politicians from the president on down, in which they review activities of the past year and lay out projects for the year to come. Villani, a mathematician and Fields Medal winner (often shorthanded as the equivalent of the Nobel Prize in mathematics), was new to the practice; only six months earlier, he was still an academic. He was dressed as always &mdash; winter or summer &mdash; in a black three-piece suit, a shirt with cufflinks, a spider brooch on his lapel, and a large, floppy tie called a <em>lavalli&egrave;re</em> (today&rsquo;s version in purple). He cut an unmistakable figure, sporting a three-day beard, his dark hair styled in a pageboy. He mingled, smiling with attendees, and posed for selfies before taking the stage.</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>He was dressed as always in a black three-piece suit, a shirt with cufflinks, a spider brooch on his lapel, and a large, floppy tie </p></blockquote></figure>
<p>The fact that a mathematician could be considered, as he is, a &ldquo;rock star&rdquo; &mdash; or, better yet, <a href="https://www.newyorker.com/tech/elements/cedric-villani-france-famous-mathematician-birth-theorem">&ldquo;the Lady Gaga of mathematics&rdquo;</a> &mdash; says perhaps more about the French than Villani. Nonetheless, Villani, 44, has become a darling of President Emmanuel Macron&rsquo;s young technocratic government, accompanying the president to Ouagadougou, Burkina Faso, in November and Beijing in mid-January. The government has piled the work on his desk, which is evidence, Villani says, of the need for people with scientific expertise in politics. But of all his projects &mdash; from math education to the future of New Caledonia to tax evasion &mdash; perhaps his most all-consuming mission is his task force on artificial intelligence and the highly anticipated report it&rsquo;s set to release tomorrow. If successful, the report will help set the AI agenda in France and Europe for years to come.</p>

<p>In view of a world where &ldquo;artificial intelligence will be everywhere, like electricity,&rdquo; as Villani has said, becoming a leader in the field is critical for France. Many feel that Europe is already at an enormous disadvantage compared to the US and China and will need to do some Usain Bolt-style sprinting to catch up. For one thing, France and Europe don&rsquo;t have the data-gathering platforms necessary to fuel machine learning: they lack the power of what the acronym-loving French call GAFA (Google, Apple, Facebook, and Amazon). French bureaucracy has also historically been a drag on entrepreneurship and invention. Compared to the US, cooperation between academia and industry is much less frequent. And though France is known for the quality of its engineers and scientists, much of the top-level talent goes abroad, where there is more money and freedom to pursue research without constraints. Addressing these issues by sketching the nation&rsquo;s AI road map has fallen on the well-tailored shoulders of Villani.</p>
<hr class="wp-block-separator" />
<p>Several months before delivering his new year&rsquo;s wishes, after a local TEDx presentation in November on &ldquo;How AI Will Revolutionize Health,&rdquo; I sat down with Villani, who was cordial if a bit distant. After winning the Fields Medal, Villani, a self-described &ldquo;formerly shy&rdquo; person, took a media training workshop. His large eyes, luminous skin, thin body, and slightly walleyed expression accentuated the impression of speaking to an &ldquo;extraterrestrial,&rdquo; as <em>Paris Match</em> once put it.</p>

<p>The report Villani is set to release isn&rsquo;t a first for France. At the very end of Fran&ccedil;ois Hollande&rsquo;s presidency last spring, his administration released a rushed AI report, offering broad brushstrokes. But, Villani says, his report should offer both &ldquo;a panorama &hellip; and make a diagnosis of the subject&hellip;. It must pose questions explicitly and offer practical solutions and implementations.&rdquo;</p>

<p>Villani&rsquo;s six-member task force (@MissionVillani) is made up of a machine learning researcher, an engineer with the defense ministry, and four members of a French digital technology advisory council, with expertise in everything from philosophy to law. The group was charged with a broad-ranging mission, covering industrial, data policy, employment and training, environmental, ethical, and research issues. Unlike the reports issued by the Obama administration (which one international observer notes &ldquo;have not led to a single bit of U.S. policy&rdquo;), Villani&rsquo;s team expects that some concrete measures will be put into play within months. Focusing on four key sectors &mdash; health, transportation, environment, and defense &mdash; the team also emphasizes that it has devoted substantial attention to the ethics of data policy within the context of the General Data Protection Regulation (GDPR), which is slated to go into effect in May and will broaden privacy protections for individuals in Europe. Considering the current Facebook meltdown over the Cambridge Analytica scandal, the French team has probably chosen wisely. As for the question of financing, while France&rsquo;s previous AI report suggested a 1.5 billion-euro investment in AI, it is unclear precisely how much funding will be allotted.</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>“The subject had grown so significantly that you would have to have been blind and deaf not to be interested.”</p></blockquote></figure>
<p>Given the breadth and complexity of the subject of artificial intelligence, a skeptic may doubt one man&rsquo;s ability to understand the field without years of study, but there is probably no other member of Parliament better suited to lead this project on the subject. By all accounts a quick study with an enormous work capacity and a diplomatic, optimistic temperament, Villani was, of course, also a high-level researcher in a related field, which helped him grasp AI concepts quickly.</p>

<p>Asked if he had an interest in artificial intelligence before he was assigned to the task force, Villani enunciated clearly and deliberately in his high-pitched, slightly theatrical French: &ldquo;The subject had grown so significantly that you would have to have been blind and deaf not to be interested.&rdquo; In any case, &ldquo;the big concerns are not really about the most technical issues.&rdquo; He has indicated in the past that he hopes to avoid AI&rsquo;s potentially &ldquo;devastating effects on economic issues and the democratic fabric,&rdquo; partly by making sure that AI is &ldquo;everybody&rsquo;s business.&rdquo; Hence his large-scale offensive in the French press to educate the public and his push to seek broad-ranging input for his report.</p>

<p>In November, Villani estimated that he would speak to 250 people for the AI report and finish it by the end of January. But, solicited by hundreds of people who wanted their say, he kept speaking to more and more parties. (&ldquo;He always says yes,&rdquo; said one of his team members.) Among these were scientists, lawyers, doctors, philosophers, labor union representatives, business leaders, startup entrepreneurs, and even one roundtable of 15 girls, who discussed the involvement of girls in the sciences. In the end, the task force interviewed about 350 people in groups of 10 to 15, gathered according to topic, in an off-white conference room at the Digital Ministry. There were also 1,600 contributors on a public online platform.</p>

<p>Now, the report is slated to be officially delivered to the president at a March 29th ceremony at Paris&rsquo; Coll&egrave;ge de France, which will be attended by an estimated 500 guests, with the participation of tech-celebrity guests like Facebook&rsquo;s chief AI scientist and deep-learning guru Yann LeCun.</p>

<p>AI experts in France and abroad are anxious to see the report. &ldquo;Governments around the world are struggling with whether they have to do something preemptively about AI,&rdquo; said Joshua Gans, professor of strategic management at the University of Toronto and co-author of the forthcoming <em>Prediction Machines: The Simple Economics of Artificial Intelligence</em>. &ldquo;There have been a lot of concerns about potential issues, from safety to jobs to what are its privacy implications.&rdquo;</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>“Governments around the world are struggling with whether they have to do something preemptively about AI.”</p></blockquote></figure>
<p>If he had to sum up international government reaction to AI, Gans said, &ldquo;it is a great dropping of the ball &hellip; not so much on the research side but on ensuring that privacy laws are up to date and moving toward international agreements &hellip; on autonomous weapons.&rdquo; The fact that France has put a &ldquo;very high-profile person&rdquo; in charge of the task force attests to how seriously it is taking these issues, he said.</p>

<p>Jean-Gabriel Ganascia, a professor of computer science at the Sorbonne and author of <em>Le mythe de la singularit&eacute;</em> (&ldquo;The Myth of the Singularity&rdquo;), was one of the people interviewed by Villani and his team. &ldquo;We&rsquo;re waiting for the report and then its translation into concrete actions,&rdquo; Ganascia said. &ldquo;The important thing is not to write reports and create strategies. We have to act.&rdquo;</p>

<p>And there is not a moment to waste; some maintain that it is already too late for France and Europe. &ldquo;A country that doesn&rsquo;t have an AI industry will be underdeveloped tomorrow. It will be a slow process of technical, political and military colonization,&rdquo; said provocateur Laurent Alexandre, author of <em>La guerre des intelligences</em> (&ldquo;The Intelligence War&rdquo;), in an interview with the French business daily <em>Les Echos</em>. Noting the dominance of American and Chinese tech giants, he said, &ldquo;Europe has completely lost the AI battle.&rdquo;</p>
<hr class="wp-block-separator" />
<p>Roxanne Varza, the 33-year-old American director of Paris&rsquo; new Station F, which bills itself as the world&rsquo;s biggest startup campus, let out a melodious laugh when I mentioned Alexandre&rsquo;s grim views. &ldquo;That is such a pass&eacute; view,&rdquo; she said, sitting in a glass-walled room overlooking her monster tech playground. &ldquo;Maybe you could have said that five years ago, but I don&rsquo;t think you can say that anymore. With the Brexit climate, Donald Trump, high Silicon Valley prices, the political situation worldwide, and with Macron now in power in France, I think we&rsquo;ve seen a huge shift.&rdquo;</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>“We need more infrastructure, we need a European cloud, we need more European intensive computing centers.”</p></blockquote></figure>
<p>In fact, Varza said, the top countries that apply for programs at Station F are the US and the UK, respectively. &ldquo;France has a huge opportunity in [the AI] space because French engineers and data scientists are so well known.&rdquo; And feared American tech giants Facebook and Microsoft are right on the Station F campus, apparently eager to tap into some of that <em>je ne sais quoi</em> in Macron&rsquo;s &ldquo;Start-up Nation.&rdquo; (Add to this the announcement made earlier this year about Facebook&rsquo;s 10 million-euro investment in France for AI research and a Google AI research center in Paris.)</p>

<p>For his part, Villani doesn&rsquo;t want an AI war; he wants competition, yes, but also collaboration, such as the one he set up with Microsoft when he was director of the Institut Henri Poincar&eacute;, a prestigious math institute. At an &ldquo;Ask Me Anything&rdquo; event at Station F, he said, &ldquo;We need more infrastructure, we need a European cloud, we need more European intensive computing centers, we need a European hardware industry, we need more European research centers &mdash; and that will take time and money. But it&rsquo;s worth it because that will [help ensure] European sovereignty. And it should be constructed not in an ambiance of war but in the spirit of competition.&rdquo;</p>
<hr class="wp-block-separator" />
<p>Marc Schoenauer, 60, Villani&rsquo;s guide in the AI netherworld, arrived at a dive bar south of Paris on a motorcycle, his long, wavy gray hair and beard askew, wearing a baggy sweater and jeans. If Villani&rsquo;s attire and studied manner evoke a 19th century dandy with Goth overtones, Schoenauer gives off a former hippie vibe. A researcher at the French Institute for Research in Computer Science and Automation (Inria), Schoenauer has spent 30 years studying artificial intelligence and is the AI expert on Villani&rsquo;s task force. He accepted the invitation, he said, &ldquo;naively,&rdquo; ignorant of the long hours and the months of overtime the report would require.</p>

<p>During the task force&rsquo;s three-hour hearings, Schoenauer said, interviewees would, one by one, make a statement of their recommendations, which would be followed by a discussion, after which Villani would circle back to issues that interested him. Each task force member was responsible for a chapter of the approximately 200-page document, which would then be reread by their colleagues. In the final phase, a draft was distributed to French ministries for feedback on the feasibility of the recommendations and the necessary funding.</p>

<p>On working with Villani, Schoenauer commented, &ldquo;He&rsquo;s very impressive, first of all, because of the amount of work he can do.&rdquo; He also noted that Villani had to head or contribute to other task forces and is a regular member of Parliament. Furthermore, Schoenauer said, &ldquo;He remembers everything that was said in the hearings and fills up small notebooks. Later, he knows exactly which notebook he put the note in.&rdquo;</p>

<p>&ldquo;I think his motivation is pure in politics. It&rsquo;s not at all to put himself forward,&rdquo; said his task force collaborator Yann Bonnet, general secretary of the French digital advisory commission Conseil National du Num&eacute;rique. &ldquo;He&rsquo;s a very good politician. &hellip; He always listened carefully, found compromises, was able to understand the balance of power. &hellip; It made it so everyone was happy to have contributed to the task force.&rdquo;</p>

<p>&ldquo;I&rsquo;ve never seen him angry,&rdquo; said another task force member, Anne-Charlotte Cornut. &ldquo;I never heard him raise his voice in the six months we worked together. And we worked 24 hours a day.&rdquo;</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>“There was a need to serve the nation in a moment of great confusion.”</p></blockquote></figure>
<p>Schoenauer allowed that perhaps Villani enjoys his public profile and persuading others to his point of view. However, contrary to the &ldquo;shocking&rdquo; attitude of some cabinet members and politicians who &ldquo;don&rsquo;t give a shit,&rdquo; he said of Villani, &ldquo;I think he&rsquo;s motivated to do something useful for France and mankind in general.&rdquo;</p>

<p>Interviewed this month in a private room at Le Bourbon, a brasserie favored by the French political elite, Villani said he was motivated to enter politics because of what he saw as the recent &ldquo;chaos&rdquo; in French politics. And Macron&rsquo;s pro-European, &ldquo;neither right-wing nor left-wing&rdquo; stance suited Villani&rsquo;s own long-held political views.</p>

<p>&ldquo;There was a need to serve the nation in a moment of great confusion, and [there was also] the idea that this was a chance that shouldn&rsquo;t be missed.&rdquo; He continued, sometimes spasmodically tapping the table for emphasis, &ldquo;During the second round of voting [during the presidential election], it was possible that it was going to be a choice between the extreme left or the extreme right. It was chaos!&rdquo;</p>

<p>Villani admits that, as a new representative, he had to learn &ldquo;everything&rdquo; about political life. &ldquo;You learn the way laws are made, how political influence works, relations between [governmental] groups, cultural questions, work on constitutional reform,&rdquo; he enumerated.</p>

<p>When it was suggested that some mathematicians might not understand the abandonment of research by one of its highest practitioners, he was slightly defensive. &ldquo;Every time scientists have the feeling that there&rsquo;s a subject that mixes science and politics &mdash; <em>baf!</em> &mdash; they come to see me, or they write me.&rdquo; He said he had spoken to a half-dozen AI researchers just that morning who urgently wanted to explain their point of view. &ldquo;I see in all these examples how important it is to have scientists in politics. It&rsquo;s important for politics. It&rsquo;s important for science.&rdquo;</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>“I think we have the diagnosis, I think we have the recommendations.”</p></blockquote></figure>
<p>As for the long-awaited AI report, Villani is satisfied. &ldquo;I think we have the diagnosis, I think we have the recommendations, and I think we&rsquo;ve listened to enough people to be fairly sure of our recommendations,&rdquo; he said, noting that not only had his team formally interviewed hundreds of experts, but his constant presence at conferences large and small, as well as his blanket coverage in the mainstream media have elicited feedback &mdash; good and not so good &mdash; from government and industry observers.</p>

<p>&ldquo;When you do an interview in a mass circulation publication, if there&rsquo;s something wrong in what you say, you can be sure that people will tell you so!&rdquo; Villani remarked.</p>

<p>He paused to take a sip of orange juice. His pale skin seemed a bit gray, having lost the translucent glow observed during a previous meeting. While his colleague Marc Schoenauer was looking forward to returning to his research after the delivery of the AI report (&ldquo;I learned a lot &hellip; but that&rsquo;s it now&rdquo;), Villani&rsquo;s talents will be required for its implementation.</p>

<p class="has-end-mark">&ldquo;It&rsquo;s not the end, but it&rsquo;s a step we&rsquo;ve taken,&rdquo; Villani said of the report&rsquo;s completion. &ldquo;We&rsquo;re going to the next stage of the mission.&rdquo; A waiter informed him he had a group of people waiting in the wings to meet with him. The <em>d&eacute;put&eacute;</em> seemed game, if a little tired. He was still on the nation&rsquo;s clock.</p>
						]]>
									</content>
			
					</entry>
	</feed>
