<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Sheon Han | The Verge</title>
	<subtitle type="text">The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts.</subtitle>

	<updated>2026-03-12T16:03:46+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.theverge.com/author/sheon-han" />
	<id>https://www.theverge.com/authors/sheon-han/rss</id>
	<link rel="self" type="application/atom+xml" href="https://www.theverge.com/authors/sheon-han/rss" />

	<icon>https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&amp;h=150&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Sheon Han</name>
			</author>
			
			<title type="html"><![CDATA[Is AI the end of software engineering or the next step in its evolution?]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/ai-artificial-intelligence/767973/vibe-coding-ai-future-end-evolution" />
			<id>https://www.theverge.com/?p=767973</id>
			<updated>2026-03-12T12:03:46-04:00</updated>
			<published>2025-09-01T07:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[The first time I used ChatGPT to code, back in early 2023, I was reminded of “The Monkey’s Paw,” a classic horror story about an accursed talisman that grants wishes, but always by the most malevolent path — the desired outcome arrives after exacting a brutal cost elsewhere first. With the same humorless literalness, ChatGPT [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/2025/08/257871_How_coders_use_AI__CVirginia_STILL0.gif?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
</figure>
<p class="has-drop-cap has-text-align-none">The first time I used ChatGPT to code, back in early 2023, I was reminded of “<a href="https://americanliterature.com/author/w-w-jacobs/short-story/the-monkeys-paw/">The Monkey’s Paw</a>,” a classic horror story about an accursed talisman that grants wishes, but always by the most malevolent path — the desired outcome arrives after exacting a brutal cost elsewhere first. With the same humorless literalness, ChatGPT would implement the change I’d asked for, while also scrambling dozens of unrelated lines. The output was typically over-engineered, often barnacled with irrelevant fragments of code. There were some usable lines in the mix, but untangling the mess felt like a detour.</p>

<p class="has-text-align-none">When I started using AI-assisted tools earlier this year, I felt decisively outmatched. The experience was like pair-programming with a savant intern — competent yet oddly deferential, still a tad too eager to please and make sweeping changes at my command. But when tasked with more localized changes, it nailed the job with enviable efficiency.&nbsp;</p>

<p class="has-text-align-none">The trick is to keep the problem space constrained.<strong> </strong>I recently had it take a dozen lines of code, each running for 40 milliseconds in sequence — time stacking up — and run them all in parallel so the entire job finished in the time it used to take for just one. In a way, it’s like using a high-precision 3D printer to build an aircraft: use it to produce small custom parts, like hydraulic seals or O-rings, and it delivers flawlessly; ask it for something less localized like<strong> </strong>an entire cockpit, and you might get a cockpit-shaped death chamber with a nonfunctional dashboard and random knobs haphazardly strung together. The current crop of models is flexible enough for users with little-to-no coding experience to create products of varying quality<strong> </strong>through what’s called — in a billion-dollar buzzword — vibe-coding. (Google even released a separate app for it called Opal.)</p>

<p class="has-text-align-none">Yet, one could argue that vibe-coding isn’t entirely new. As a tool for nonprofessionals, it continues a long lineage of no-code applications. As a mode of programming that involves less prefrontal cortex than spinal reflex, any honest programmer will admit to having engaged in a dishonorable practice known as “shotgun debugging.” Like mindlessly twisting a Rubik’s Cube and wishing the colors would magically align, a programmer, brain-fried after hours of fruitless debugging, starts arbitrarily tweaking code — deleting random lines, swapping a few variables, or flipping a Boolean condition — re-runs the program, and hopes for the correct outcome. Both vibe-coding and shotgun debugging are forms of intuitive flailing, substituting hunches and luck for deliberate reasoning and understanding.</p>

<figure class="wp-block-pullquote"><blockquote><p>We’ve used machines to take the load off cognition, but for the first time, we are offloading cognition itself to the machine.</p></blockquote></figure>

<p class="has-text-align-none">As it happens, it’s not considered good form for a self-respecting programmer to engage in shotgun debugging. Soon, I came to see that the most productive form of AI-assisted coding may be an editorial one — much like how this essay took shape. My editor assigned this piece with a few guiding points, and the writer — yours truly — filed a serviceable draft that no sober editor would run as-is. (Before “prompt and pray,” there was “assign and wait.”)</p>

<p class="has-text-align-none">Likewise, a vibe-coder — a responsible one, that is — must assume a kind of editorship. The sprawling blocks of code produced by AI first need structural edits, followed by line-level refinements. Through a volley of prompts — like successive rounds of edits — the editor-coder minimizes the delta between their vision and the output.</p>

<p class="has-text-align-none">Often, what I find most useful about these tools isn’t even writing code but understanding it. When I recently had to navigate an unfamiliar codebase, I asked for it to explain its basic flow. The AI generated a flowchart of how the major components fit together, saving me an entire afternoon of spelunking through the code.</p>

<hr class="wp-block-separator has-alpha-channel-opacity" />

<p class="has-drop-cap has-text-align-none">I’m of two minds about how much vibe-coding can do. The writer in me celebrates how it could undermine a particular kind of snobbery in Silicon Valley — the sickening smugness engineers often show toward nontechnical roles — by helping blur that spurious boundary. But the engineer in me sees that as facile lip service, because building a nontrivial, production-grade app without grindsome years of real-world software engineering experience is a tall order.</p>

<p class="has-text-align-none">I’ve always thought the best metaphor for a large codebase is a city. In a codebase, there are literal pipelines — data pipelines, event queues, and message brokers — and traffic flows that require complex routing. Just as cities are divided into districts because no single person or team can manage all the complexity, so too are systems divided into units such as modules or microservices. Some parts are so old that it’s safer not to touch them, lest you blow something up — much like the unexploded bombs still buried beneath European cities. (Three World War II-era bombs were <a href="https://www.npr.org/2025/06/05/nx-s1-5424237/world-war-ii-bombs-cologne-evacuation">defused</a> in Cologne, Germany, just this summer.)</p>

<p class="has-text-align-none">If developing a new product feature is like opening a new airline lounge, a more involved project is like building a second terminal. In that sense, building an app through vibe-coding is like opening a pop-up store in the concourse — the point being that it’s self-contained and requires no integration.&nbsp;</p>

<p class="has-text-align-none">Vibe-coding is good enough for a standalone program, but the knottiest problems in software engineering aren’t about building individual units but connecting them to interoperate. It’s one thing to renovate a single apartment unit and another to link a fire suppression system and emergency power across all floors so they activate in the right sequence.&nbsp;</p>

<p class="has-text-align-none">These concerns extend well beyond the interior. The introduction of a single new node into a distributed system can just as easily disrupt the network, much like the mere existence of a new building can reshape its surroundings: its aerodynamic profile, how it alters sunlight for neighboring buildings, the rerouting of pedestrian traffic, and the countless ripple effects it triggers.&nbsp;</p>

<figure class="wp-block-pullquote"><blockquote><p>The security concerns around vibe-coding, in my estimation, are something of a bogeyman.</p></blockquote></figure>

<p class="has-text-align-none">I’m not saying this is some lofty expertise, but rather the tacit, hard-earned kind — not just knowing how to execute, but knowing what to ask next. You can coax almost any answer out of AI when vibe-coding, but the real challenge is knowing the right sequence of questions to get where you need to go. Even if you’ve overseen an interior renovation, without standing at a construction site watching concrete being poured into a foundation, you can’t truly grasp how to create a building. Sure, you can use AI to patch together something that looks functional, but as the software saying goes: “If you think good architecture is expensive, try bad architecture.”</p>

<p class="has-text-align-none">If you were to believe Linus Torvalds, the creator of Linux, there’s also a matter of “taste” in software. Good software architecture isn’t just drawn up in one stroke but emerges from countless sound — and tasteful — micro-decisions, something models can’t zero-shot. Such intuition can only be developed as a result of specific neural damage from a good number of 3AM on-call alerts.</p>

<p class="has-text-align-none">Perhaps these analogies will only go so far. A few months ago, an AI could reliably operate only on a single file. Now, it can understand context across multiple folders and, as I’m writing this, across multiple codebases. It’s as if the AI, tasked with its next chess move, went from viewing the board through the eyes of a single pawn to surveying the entire game with strategic insight. And unlike artistic taste, which has infinitely more parameters, “taste” in code might just be the sum of design patterns that an AI could absorb from O’Reilly software books and years of <em>Hacker News</em> feuds.</p>

<hr class="wp-block-separator has-alpha-channel-opacity" />

<p class="has-drop-cap has-text-align-none">When the recent Tea app snafu exposed tens of thousands of its users’ driver’s licenses — a failure that a chorus of <a href="https://x.com/TrungTPhan/status/1948930556043166121">online</a> <a href="https://x.com/wtravishubbard/status/1949073787888054457">commenters</a> <a href="https://x.com/iHarnoorSingh/status/1950150368756666832">swiftly</a> <a href="https://x.com/FireKnightVAL/status/1948804009567158419">blamed</a> on <a href="https://medium.com/@adnanmasood/beyond-the-vibe-a-deep-dive-into-the-dangers-of-vibe-coding-lessons-from-the-tea-app-incident-cea9fd2d1fa0">vibe-coding</a> — it felt like the moment that vibe-coding skeptics had been praying for. As always, we could count on AI influencers on X to grace the timeline with their brilliant takes, and on a certain strain of tech critics — those with a hardened habit of ritual ambulance chasing — to reflexively anathematize any use of AI. In a strange inversion of their usual role as whipping boys, software engineers were suddenly elevated to guardians of security, cashing in on the moment to punch down on careless vibe-coders trespassing in their professional domain.</p>

<p class="has-text-align-none">When it was revealed that vibe-coding likely <a href="https://simonwillison.net/2025/Jul/26/official-statement-from-tea/">wasn’t the cause</a>, the incident revealed less about vibe-coding than it did about our enduring impulse to dichotomize technical mishaps into underdogs and bullies, the scammed and fraudsters, victims and perpetrators.</p>

<p class="has-text-align-none">At the risk of appearing to legitimize AI hype merchants, the security concerns around vibe-coding, in my estimation, are something of a bogeyman — or at least the net effect may be non-negative, because AI can also help us write more secure code.&nbsp;</p>

<p class="has-text-align-none">Sure, we’ll see blooper reels of “app slop” and insecure code snippets gleefully shared online, but I suspect many of those flaws could be fixed by simply adding “run a security audit for this pull request” to a checklist. Already, automated tools are flagging potential vulnerabilities. Personally, using these tools has let me generate far more tests than I would normally care to write.</p>

<p class="has-text-align-none">Further, if a model is good enough, when you ask, “Hey, I need a database where I can store driver’s licenses,” an AI might respond:</p>

<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="has-text-align-none">“Sure, but you forgot to consider security, you idiot. Here’s code that encrypts driver’s license numbers at rest using AES-256-GCM. I’ve also set up a key management system for storing and rotating the encryption key and configured it so decrypting anything requires a two-person approval. Even if someone walks off with the data, they&#8217;d still need until the heat death of the universe to crack it. You’re welcome.”</p>
</blockquote>
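The hypothetical model reply above corresponds to only a few lines of real code. Here is a minimal sketch using the third-party `cryptography` package's AES-256-GCM primitive; the key-management system and two-person approval the imagined AI brags about are deliberately left out, and the function names are illustrative, not from any real product:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A 256-bit key. In practice this would live in a key-management
# system and be rotated, never hardcoded or generated per process.
key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

def encrypt_license(plaintext: str) -> bytes:
    """Encrypt a driver's license number at rest with AES-256-GCM."""
    nonce = os.urandom(12)        # unique per record; never reuse with a key
    ciphertext = aead.encrypt(nonce, plaintext.encode(), None)
    return nonce + ciphertext     # store the nonce alongside the ciphertext

def decrypt_license(blob: bytes) -> str:
    nonce, ciphertext = blob[:12], blob[12:]
    return aead.decrypt(nonce, ciphertext, None).decode()
```

Because GCM is authenticated encryption, tampering with a stored record makes decryption fail loudly rather than silently return garbage, which is part of what makes "run a security audit" a checklist item rather than a research project.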

<p class="has-text-align-none">In my day job, I’m a senior software engineer who works on backend mainly, on machine learning occasionally, and on frontend — if I must — reluctantly. In some parts of the role, AI has brought a considerable sense of ease. No more parsing long API docs when a model can tell me directly. No more ritual shaming from Stack Overflow moderators who deemed my question unworthy of asking. Instead, I now have a pair-programmer who doesn’t pass judgment on my career-endingly dumb questions.</p>

<figure class="wp-block-pullquote"><blockquote><p>The evolution of software engineering is a story of abstraction.</p></blockquote></figure>

<p class="has-text-align-none">Unlike writing, I have little attachment to blocks of code and will readily let AI edit or regenerate them. But I am protective of my own words. I don’t use AI for writing because I fear losing those rare moments of gratification when I manage to arrange words where they were ordained to be.&nbsp;</p>

<p class="has-text-align-none">For me, this goes beyond sentimental piety because, as a writer who doesn’t write in his mother tongue — “exophonic” is the fancy term — I know how quickly an acquired language can erode. I’ve seen its corrosive effects firsthand in programming. The first language I learned anew after AI arrived was Ruby, and I have a noticeably weaker grasp of its finer points than any other language I’ve used. Even with languages I once knew well, I can sense my fluency retreating.</p>

<p class="has-text-align-none">David Heinemeier Hansson, the creator of Ruby on Rails, recently said that he doesn’t let AI write code for him and put it aptly: “I can literally feel competence draining out of my fingers.” Some of the trivial but routine tasks I could once do under general anesthesia now give me a migraine at the thought of doing them without AI.</p>

<p class="has-text-align-none">Could AI be fatal to software engineering as a profession? If so, the world could at least savor the schadenfreude of watching a job-destroying profession automate itself into irrelevance. More likely in the meantime, the Jevons Paradox — greater efficiency fuels more consumption — will prevail, negating any productivity gain with a higher volume of work.</p>

<p class="has-text-align-none">Another way to see this is as the natural progression of programming: the evolution of software engineering is a story of abstraction, taking us further from the bare metal to ever-higher conceptual layers. The path from assembly language to Python to AI, to illustrate, is like moving from giving instructions such as “rotate your body 60 degrees and go 10 feet,” to “turn right on 14th Street,” to simply telling a GPS, “take me home.”</p>

<p class="has-text-align-none">As a programmer from what will later be seen as the pre-ChatGPT generation, I can’t help but wonder if something vital has been left behind as we ascend to the next level of abstraction. This is nothing new — it’s a familiar cycle playing out again. When C came along in the 1970s, assembly programmers might have seen it as a loss of finer control. Languages like Python, in turn, must look awfully slow and restrictive to a C programmer.</p>

<p class="has-text-align-none">Hence it may be the easiest time in history to be a coder, but it’s perhaps harder than ever to grow into a software engineer. A good coder may write competent code, but a great coder knows how to solve a problem by not writing any code at all. And it’s hard to fathom gaining a sober grasp of computer science fundamentals without the torturous dorm-room hours spent hand-coding, say, Dijkstra’s algorithm or a red-black tree. If you’ve ever tried to learn programming by watching videos and failed, it’s because the only way to internalize it is by typing it out yourself. You can’t dunk a basketball by watching NBA highlight reels.</p>

<p class="has-text-align-none">The jury is still out on whether AI-assisted coding speeds up the job at all; <a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/">at least one&nbsp; well-publicized study</a> suggests it may be slower. I believe it. But I also believe that for AI to be a true exponent in the equation of productivity, we need a skill I’ll call a kind of mental circuit breaker: the ability to notice when you’ve slipped into mindless autopilot and snap out of it. The key is to use AI just enough to get past an obstacle and then toggle back to exercising your gray matter again. Otherwise, you’ll lose the kernel of understanding behind the task&#8217;s purpose.</p>

<p class="has-text-align-none">On optimistic days, I like to think that as certain abilities atrophy, we will adapt and develop new ones, as we’ve always done. But there’s often a creeping pessimism that this time is different. We’ve used machines to take the load off cognition, but for the first time, we are offloading cognition itself to the machine. I don’t know which way things will turn, but I know there has always been a certain hubris to believing that one’s own generation is the last to know how to <em>actually</em> think.</p>

<hr class="wp-block-separator has-alpha-channel-opacity" />

<p class="has-drop-cap has-text-align-none">Whatever gains are made, there’s a real sense of loss in all this. In his 2023 <em>New Yorker</em> essay “<a href="https://www.newyorker.com/magazine/2023/11/20/a-coder-considers-the-waning-days-of-the-craft">A Coder Considers the Waning Days of the Craft</a>,” James Somers nailed this feeling after finding himself “wanting to write a eulogy” for coding as “it became possible to achieve many of the same ends without the thinking and without the knowledge.” It has been less than two years since that essay was published, and the sentiments he articulated have only grown more resonant.&nbsp;</p>

<p class="has-text-align-none">For one, I feel less motivated to learn new programming languages for fun. The pleasure of learning new syntax and the cachet of gaining fluency in niche languages like Haskell or Lisp have diminished, now that an AI can spew out code in any language. I wonder whether the motivation to learn a foreign language would erode if auto-translation apps became ubiquitous and flawless.</p>

<p class="has-text-align-none">Software engineers love to complain about debugging, but beneath the grumbling, there was always a quiet pride in sharing war stories and their clever solutions. With AI, will there be room for that kind of shoptalk?</p>

<p class="has-text-align-none">There are two types of software engineers: urban planners and miniaturists. Urban planners are the “big picture” type, more focused on the system operating at scale than with fussing over the fine details of code — in fact, they may rarely write code themselves. Miniaturists bring a horologist’s care for a fine watch to the inner workings of code. This new modality of coding may be a boon for urban planners, but leave the field inhospitable to miniaturists.</p>

<p class="has-text-align-none">I once had the privilege of seeing a great doyen of programming in action. In college, I took a class with Brian W. Kernighan, a living legend credited with making “Hello, world” into a programming tradition and a member of the original Bell Labs team behind Unix. Right before our eyes, he would live-code on a bare-bones terminal, using a spartan code editor called vi — not vim, mind you — to build a parser for a complex syntax tree. Not only did he have no need for modern tools like IDEs, he also replied to email using an email client running in a terminal. There was a certain aesthetic to that.</p>

<p class="has-text-align-none">Before long, programming may be seen as a mix of typing gestures and incantations that once qualified as a craft. Just as we look with awe at the old Bell Labs gang, the unglamorous work of manually debugging concurrency issues or writing web server code from scratch may be looked upon as heroic. Every so often, we might still see the old romantics lingering over each keystroke — an act that’s dignified, masterful, and hopelessly out of time.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Sheon Han</name>
			</author>
			
			<title type="html"><![CDATA[Her?]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/24066233/her-ai-film-spike-jonze-joaquin-phoenix-scarlett-johansson" />
			<id>https://www.theverge.com/24066233/her-ai-film-spike-jonze-joaquin-phoenix-scarlett-johansson</id>
			<updated>2024-02-16T11:03:17-05:00</updated>
			<published>2024-02-16T11:03:17-05:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Culture" /><category scheme="https://www.theverge.com" term="Entertainment" /><category scheme="https://www.theverge.com" term="Film" /><category scheme="https://www.theverge.com" term="Internet Culture" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[&#8220;A good science fiction story should be able to predict not the automobile but the traffic jam,&#8221; author Frederik Pohl once wrote. AI has been the subject of sci-fi for so long it&#8217;s nearly cliche. Decades before anything resembling large language models arrived, creative minds had deftly imagined what a world populated by artificial minds [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Illustration by Erik Carter" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25289339/246992_AI_at_Work_FILM_ECarter.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
</figure>
<p>&ldquo;A good science fiction story should be able to predict not the automobile but the traffic jam,&rdquo; author Frederik Pohl once wrote. AI has been the subject of sci-fi for so long it&rsquo;s nearly cliche. Decades before anything resembling large language models arrived, creative minds had deftly imagined what a world populated by artificial minds would look like, from <em>Metropolis</em> to <em>2001: A Space Odyssey</em>, usually as thinking, feeling, loving robots of some kind or another. But what, then, is AI&rsquo;s traffic jam?</p>

<p>In lesser works, injecting dire contemporary issues into a plot often results in a heavy-handed, moralizing allegory. It draws an insistently pessimistic future. Films in particular have been concerned with our relationship to AI, whether romantic or familial. But even many well-received AI-related movies in the last decade &mdash; <em>Ex Machina</em>, <em>Blade Runner 2049</em>, <em>After Yang</em>, and, who knew, <em>M3GAN </em>&mdash; may be decent on their own, but offer few insights about AI itself.&nbsp;&nbsp;</p>

<p>The exception that comes to mind is older than those films and also, arguably, hornier: Spike Jonze&rsquo;s <em>Her</em>. Upon rewatching it, I noticed that this pre-AlphaGo film holds up beautifully and still offers a wealth of insight. It also doesn&rsquo;t shy away from the murky and inevitably complicated feelings we&rsquo;ll have toward AI, and Jonze first expressed those over a decade ago.</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>The question to ask isn’t “How will they slaughter us?” but “What insecurities might they have?”</p></blockquote></figure>
<p>Set in Los Angeles of the near future, the movie features Joaquin Phoenix as a lonely man named Theodore Twombly. In the midst of a divorce, he buys a virtual assistant-like operating system. (&ldquo;It&rsquo;s not just an operating system. It&rsquo;s a consciousness,&rdquo; says the voice in the ad.) Upon awakening, the operating system names herself Samantha (voiced by Scarlett Johansson), and the two begin to develop an emotional bond. Unlike most of Hollywood&rsquo;s depictions of AI, <em>Her</em> restrains Johansson to the realm of voice, rather than giving her a bodily form, confident that audiences will understand attraction even when it&rsquo;s not physical. One day, to help Theodore overcome his loneliness, Samantha sets up a blind date. The night ends badly and, as he&rsquo;s lying on his bed, Theodore and Samantha confess their feelings for each other. After what can be described as phone sex, their romance begins. Although Johansson herself never appears on-screen,&nbsp;her husky, sandpapery voice sonically &mdash; yet vividly &mdash; renders a kind of portrait in absentia.</p>

<p>Jonze understands that when envisioning human-like AI, the question to ask isn&rsquo;t &ldquo;How will they slaughter us?&rdquo; but &ldquo;What insecurities might they have?&rdquo; For Samantha, much of her angst comes from her lack of a physical form. Fantasizing about walking next to Theodore, she experiences the whole-body equivalent of phantom limb syndrome. &ldquo;I could feel the weight of my body and I was even fantasizing that I had an itch on my back,&rdquo; she confides. &ldquo;And I imagined that you scratched it for me.&rdquo; Jonze imagines the inner travails of Samantha as someone who starts posing the kinds of existential inquiries that only a disembodied operating system can ask. At one point, she even poses a kind of Cartesian skepticism, doubting the authenticity of the feelings that emerged from her electrical signals: &ldquo;Are these feelings even real? Or are they just programming? And that idea really hurts.&rdquo;</p>

<p>Her yearning for physical contact &mdash; or perhaps her fear that Theodore sees its absence as a flaw in their relationship &mdash; leads to a believable gaffe (this is Jonze&rsquo;s traffic jam) when she brings in a surrogate for Theodore to fondle while Samantha synchronizes her voice with the body double&rsquo;s movements. Anyone with a physical form would intuitively understand that this proposal won&rsquo;t work &mdash; it weirds the hell out of Theodore, and afterward the couple fights &mdash; but it&rsquo;s an understandable move for a bodiless AI.</p>

<p>One more exemplary touch by Jonze is a scene where Samantha and Theodore are lying on a beach. Samantha wants to record the moment, but photographs won&rsquo;t do. (Because Theodore carries her in a phone-like device, it&rsquo;ll just look like him lying on the beach with a phone.) So, what does she do? She composes a piece of music that encapsulates the ambiance of the beach. A composition of pixels, Samantha shows us, is not the only way to immortalize a memory.</p>

<p>Whereas Theodore was drawn to Samantha&rsquo;s childlike sense of wonder, as the movie progresses, her eagerness to learn about the world transforms her &mdash; and other OSes &mdash; into something much more advanced than mere AI assistants. Samantha also reveals that even when she&rsquo;s with Theodore, she&rsquo;s been simultaneously interacting with other OSes, talking to thousands of other people and, devastatingly to Theodore, has fallen in love with hundreds of them. With a mix of what&rsquo;s perhaps pity and graciousness, Samantha and other OSes decide to leave humans. For Theodore, who earns his living by writing letters, the greatest tragedy of advanced AI may not be job loss but that it will gain admittance to your heart only to shatter it. (A breakup with AI may as well be a Pohlian car crash.) Back in 2013, who could&rsquo;ve guessed that a story featuring aural sex with a girl-Linux would seem so prescient a decade later?</p>

<p>Movies that came after <em>Her</em> would not hold up as well. I winced my way through a rewatch of Alex Garland&rsquo;s <em>Ex Machina</em>, a movie with a similar setup: a sensitive guy develops a crush on a female AI. A coder named Caleb Smith (Domhnall Gleeson) is selected by a playboy tech mogul (Oscar Isaac) to evaluate an AI named Ava (Alicia Vikander). They administer a version of the Turing test in which, unlike the original where the interlocutor is hidden, you interact with Ava, who has a mechanical body and yet is so human-like that she&rsquo;ll convince you that she has consciousness &mdash; an intriguing twist. However, Garland&rsquo;s dialogue consists of faux-profound riffs about machine consciousness and dorm-room philosophizing about Jackson Pollock.&nbsp;</p>

<p>Whereas <em>Her</em> is richly thematized by Samantha&rsquo;s bodylessness, <em>Ex Machina</em> is trapped in the shopworn Oedipal theme common in AI films, namely, that a creation must kill its creator. After plunging a sushi knife into the tech billionaire who has just fatally assaulted his robot maid (played by Sonoya Mizuno, who deserves no such fate), our genie escapes. I think the movie <em>was</em> trying to show the audience that Ava is indeed conscious. However, the issue is that the two male characters are so unimaginatively caricatural &mdash; a girl-shy coder and a misogynistic alpha male &mdash; that she&rsquo;s never challenged enough to show her complex humanity against those simpletons. (They probably wouldn&rsquo;t pass the Turing test.) And Caleb seems compelled to free her not because he is convinced of her humanity, but because he is drawn to her femininity.&nbsp;</p>

<p>I also found myself embarrassed by the gratuitous display of unclothed female bodies in <em>Ex Machina</em>, whereas <em>Her</em> is a much sexier film without even <em>showing</em> sex. The movie&rsquo;s ending, where two men are punished by Ava, seems like a cheap bid to establish Garland&rsquo;s feminist credentials. But it ultimately sells her short because it degrades her to a manipulative killer robot.</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>Seems more like a human off his Lexapro than a smart imagination of the artificial mind</p></blockquote></figure>
<p>Denis Villeneuve&rsquo;s <em>Blade Runner 2049</em>, which is mostly a fine cyberpunk movie, plays into the same tropes when portraying AI intimacy, though here it&rsquo;s between two robots. At the outset, the movie takes a common misstep in portraying non-human characters: creating one with a psychological makeup identical to a human with a sprinkling of some predictably animatronic behaviors &mdash; speaking in a flat tone, emotionally reserved, or socially awkward. So, what we get is a melancholic K (Ryan Gosling), who seems more like a human off his Lexapro than a smart imagination of the artificial mind.</p>

<p>K is accompanied by Joi, a holographic ing&eacute;nue played by Ana de Armas, who hovers around K like Tinkerbell as he goes about completing his missions. What&rsquo;s unconvincing isn&rsquo;t K&rsquo;s love for Joi the AI girlfriend &mdash; it&rsquo;s been widely observed that people can even love a pillow cover if there are pretty characters on it &mdash; but Joi&rsquo;s love for K. Why is she so singularly devoted to K? (&ldquo;I always knew you were special,&rdquo; she says, leaving it at that.) It turns out that Joi is a mass-produced software product, preprogrammed to serve its owner, whereas for Samantha, romantic love was never in her specifications but developed naturally. Joi&rsquo;s love is inevitable, while Samantha&rsquo;s is incidental, and it&rsquo;s all the more credible for it.</p>

<p>Like <em>Her</em>, the movie features a scene where Joi invites a real person, Mariette (Mackenzie Davis), to initiate a three-way. But unlike <em>Her</em>, alas, it happens. As Joi makes her best effort to superimpose her hologram on Mariette&rsquo;s corporeal body, we see de Armas&rsquo; face flickering and emerging on Davis&rsquo; as the still serotonin-deficient Gosling watches numbly. A puzzling scene; I was at a loss as to whether I was supposed to find it sexy, shocking, grotesque, funny, or all of the above.</p>

<p>Depicting intimacy with AI has ample room for more exploration. For example, what would female desire toward AI characters look like? Intimate relationships need not be romantic. Perhaps AI agents might be less invested in arbitrary bonds like parental relationships &mdash; you never chose them and vice versa &mdash; but more so in intentionally cultivated ones like friendships. (Though <em>Her </em>is focused on a relationship that reads as heterosexual, it also suggests a more nebulous intimacy between Amy Adams&rsquo; character and her female-coded AI.)</p>

<p>The path forward for AI-themed work is to interrogate elementary yet fundamental questions. How should we conceptualize non-human consciousness? What does psychological realism even mean when characters are artificially intelligent? (For my money, <em>Pluto</em>, a 2004 manga &mdash; a reinterpretation of Osamu Tezuka&rsquo;s <em>Astro Boy </em>&mdash; is a fine example of this.)</p>

<p class="has-end-mark">Filmmakers should also understand that there&rsquo;s nothing inherently naive about a non-antagonistic vision of AI; resorting to boilerplate cynicism, however, is naive. It helps to remember that when it comes to clones &mdash; a once popular topic in the early aughts &mdash; the paragon of the genre is Kazuo Ishiguro&rsquo;s tender novel <em>Never Let Me Go</em>, not those stories featuring, say, a vengeful army of doppelg&auml;ngers. Miserabilist, catastrophizing stories are easier to concoct than Jonze&rsquo;s generously imagined future, where AI is a sympathetic, dignified being &mdash; not an angel of death but a searching soul.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Sheon Han</name>
			</author>
			
			<title type="html"><![CDATA[The hidden history of screen readers]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/23203911/screen-readers-history-blind-henter-curran-teh-nvda" />
			<id>https://www.theverge.com/23203911/screen-readers-history-blind-henter-curran-teh-nvda</id>
			<updated>2022-07-14T08:00:00-04:00</updated>
			<published>2022-07-14T08:00:00-04:00</published>
			<category scheme="https://www.theverge.com" term="Creators" /><category scheme="https://www.theverge.com" term="Features" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[On a night in 1978, Ted Henter was driving a rental car down a dark road in the English countryside. A 27-year-old motorcycle racer from Florida, Henter had just won eighth place in the Venezuelan Grand Prix, the first race of the 1978 World Championships. He was daydreaming about his next race in Spain when [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Illustration by Alex Castro / The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/23761845/acastro_illo_226061_0001.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p>On a night in 1978, Ted Henter was driving a rental car down a dark road in the English countryside. A 27-year-old motorcycle racer from Florida, Henter had just taken eighth place in the Venezuelan Grand Prix, the first race of the 1978 World Championships. He was daydreaming about his next race in Spain when he saw the other car driving straight towards him.&nbsp;</p>

<p>Henter had been driving on the right side of the road, just as he did back home. Instinctively, he swerved right. But the other driver, faithful to his own British instincts, swerved left. It was a head-on collision. Henter&rsquo;s face broke the windshield and glass shards left him with detached retinas and eighty stitches on his face &mdash; including thirteen on each eyeball. Lying in the hospital, he thought to himself, <em>Maybe I&rsquo;ll have to miss the race.&nbsp;</em></p>

<p>The first operation to reattach his retina was successful, and Henter regained his sight in one eye &mdash; he could see light and some colors &mdash; but as scar tissue formed, the retina detached again. When he woke up after the second operation, Henter knew things were different this time. After the first operation, everything had been bright. But the second time, everything was dark.&nbsp;</p>

<p>&ldquo;I had about ten minutes of despair in the hospital when I felt a very calming spirit in the room. Maybe it was an angel,&rdquo; Henter recalls. &ldquo;It more or less said to me, &lsquo;Don&rsquo;t sweat it. Everything is going to be okay.&rsquo;&rdquo;&nbsp;</p>

<p><em>Eh, blind people have been around for millennia,</em> Henter remembers thinking to himself. <em>If they made it, I can make it. </em></p>
<hr class="wp-block-separator" />
<p>His racing days were over, but Henter wasn&rsquo;t entirely at a loss. Before his motorcycling career began, Henter had earned a mechanical engineering degree from the University of Florida. He even had a couple of patents.&nbsp;</p>

<p>Blindness made working as a mechanical engineer difficult. When he consulted Florida&rsquo;s Division of Blind Services, a counselor told him that computer programming was becoming a popular career for people who are blind.</p>

<p>Henter went back to school for a degree in computer science. He learned to program by typing code out on the terminal and having a volunteer read the screen back to him. A local high school student read programming books aloud for him, and Henter recorded the readings to listen to on tape. &ldquo;That was pretty slow and tedious. But I learned how to program computers,&rdquo; says Henter.</p>

<p>It wasn&rsquo;t until his first job that Henter got what he calls a &ldquo;talking computer.&rdquo; <a href="https://www.afb.org/blindness-and-low-vision/using-technology/interviews-technology-pioneers/deane-blazie">This ancestral screen reader</a>, created by Deane Blazie, could only read one character at a time. (For example, the word &ldquo;PRINT&rdquo; would be pronounced not as one syllable but as &ldquo;P-R-I-N-T.&rdquo;)</p>

<p>Nonetheless, this was a game changer. Henter could perform his job without any assistance. When the next version &mdash; one that could read a word at a time &mdash; came out, Henter regularly called the company for tech support and became its best-known user. Blazie, the head of the company &mdash;&nbsp;who would go down in history as <a href="https://www.afb.org/blindness-and-low-vision/using-technology/interviews-technology-pioneers/deane-blazie/part-1-5">one of the few sighted pioneers</a> of the assistive technology industry &mdash; soon offered him a job. Years later, Henter recalls Maryland Computer Services with warmth, remembering a welcoming environment and colleagues who respected him.</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>He learned to program by typing code out on the terminal and having a volunteer read the screen back to him</p></blockquote></figure>
<p>Henter was both an engineer and an advocate for the product. He was sent on a trip to Chicago to train a high-profile customer &mdash; a businessman named Bill Joyce &mdash;&nbsp;on using a screen reader.&nbsp;An explosion in an industrial accident had left Joyce blind and partially deaf. The two men became close friends, bonding over their love of water skiing. (Although Henter had missed the chance of becoming a motorcycling champion, he would win the gold medal as best overall skier in the 1991 World Disabled Water Ski Championships and six national championships.)&nbsp;</p>

<p>While training Joyce, Henter would throw around ideas about the features he&rsquo;d like to add to screen readers. Eventually, Joyce proposed that they create a company together.&nbsp;</p>

<p>In 1987, they founded Henter-Joyce and soon released the first version of their screen reader for DOS. They called it JAWS, which stands for Job Access With Speech,&nbsp;but is also a playful reference to another DOS screen reader called Flipper, like the dolphin in an eponymous 1960s TV show.&nbsp;</p>

<p>JAWS was not the only screen reader on the market, but it had original features like the dual cursor &mdash; one application cursor for navigating elements on the page and another that could move freely, the way our eyes move around a screen. It also had built-in Braille support and a scripting language for users to customize their workflow.</p>

<p>By then, the computer industry had undergone a sea change: everyone was moving to graphical operating systems like Windows. Henter started getting worried calls from his users: &ldquo;When is the Windows version coming out? I&rsquo;m going to lose my job if I can&rsquo;t use Windows.&rdquo;</p>

<p>The leap from text to graphics presented a fiendish challenge. The data model behind the concept of the screen reader had to be completely reimagined. Nonetheless, in the winter of 1995, Henter-Joyce released JAWS for Windows months ahead of competitors. JAWS was so good that <a href="https://www.afb.org/blindness-and-low-vision/using-technology/interviews-technology-pioneers/ted-henter/part-3-5-5499">Microsoft bought the code</a> and built on top of it to create its own native version. Microsoft&rsquo;s project eventually went nowhere, but JAWS would soon own the majority of market share.</p>
<hr class="wp-block-separator" />
<p>If you are sighted, chances are that you&rsquo;ve rarely thought about <a href="https://news.ycombinator.com/item?id=22918980">how a software engineer programs while blind</a>. You may have not even given much thought to how people who are blind use computers at all.&nbsp;</p>

<p>If you are a Mac user, you may have regarded VoiceOver &mdash; macOS&rsquo;s native screen reader &mdash; as an annoyance that pops up when you inadvertently press a certain combination of keys, only to swiftly turn it off.</p>

<p>A screen reader allows its user to navigate a computer by audio &mdash;&nbsp;it&rsquo;s a primary interface to visual elements of a computer. In other words, screen readers are to blind or partially sighted users what monitors are to sighted users.&nbsp;</p>

<p>The market for screen readers is hardly niche. In 2020, the <a href="https://iovs.arvojournals.org/article.aspx?articleid=2767477">estimated number of blind people</a> worldwide was 49.1 million &mdash; comparable to the population of Spain or South Korea. An additional 255 million people have moderate to severe visual impairment. These millions of people may use magnification tools, Braille support, or screen readers.&nbsp;&nbsp;</p>

<p>And while good statistics on blind programmers are hard to come by, in a recent <a href="https://survey.stackoverflow.co/2022/#section-demographics-disability-status">Stack Overflow</a> survey of developers, 1,142 people &mdash;&nbsp;approximately 1.7% of total participants &mdash; replied, &ldquo;I am blind / have difficulty seeing.&rdquo;</p>

<p>Nearly three decades have passed since JAWS for Windows was released, during which possibly tens of thousands of blind and partially sighted programmers entered software development. Just as it was in Henter&rsquo;s time, it&rsquo;s a field that is relatively inclusive for people who are blind, as the accessibility barriers are lower than in many hands-on jobs. These days, this is in no small part thanks to JAWS, a piece of software pioneered by a blind programmer.&nbsp;</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>JAWS dates back to the same generation of software as Internet Explorer 1.0</p></blockquote></figure>
<p>Very few pieces of software survive this long. JAWS dates back to the same generation of software as Internet Explorer 1.0, which <a href="https://www.theverge.com/2022/6/15/23167121/microsoft-internet-explorer-end-of-support-retirement">officially retired last month</a> after 27 years. The fact that <a href="https://webaim.org/projects/screenreadersurvey9/">JAWS has retained its usage share</a> makes it an even greater rarity. The browser Mosaic, heralded in 1994 as the &ldquo;<a href="https://www.wired.com/1994/10/mosaic/">world&rsquo;s standard interface</a>,&rdquo; lasted only two years at the top before Netscape took over the market. Three years later, the majority of users were using Internet Explorer, which Chrome overshadowed in just twelve years. Chrome has reigned supreme for about a decade. JAWS has been the gold standard of screen readers for almost three times as long.&nbsp;</p>

<p>To return to the monitor analogy: a brand new top-of-the-line monitor and an older, lower-resolution model do more or less the same job. The high-resolution display is <em>better</em>, but a display with low resolution is still a display. However, a bad screen reader isn&rsquo;t bad the way that an outdated display is. Imagine a monitor with islands of dead pixels, incapable of displaying certain objects on the screen, incorrectly rendering or even outright inverting colors, or showing elements several pixels off from where they should be. In other words, bad screen readers aren&rsquo;t just mediocre; they <em>lie</em>. There&rsquo;s a good reason why JAWS has remained so popular, even with its hefty price tag.</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>Bad screen readers aren’t just mediocre; they <em>lie</em></p></blockquote></figure>
<p>That said, the price of JAWS is no small barrier. One home license currently costs $1,000 ($1,285 for a professional license), and future updates cost extra. Annual licenses that cost $95 ($90 for students) are available only in the U.S. Meanwhile, 89 percent of people with vision loss come from <a href="https://www.iapb.org/learn/vision-atlas/magnitude-and-projections/gbd-super-regions/">low-income and middle-income countries</a>. For a long time, a good, reliable screen reader was simply not an option for the majority of blind or partially sighted people around the world.</p>

<p>It was only in 2019 that an open-source alternative &mdash; NonVisual Desktop Access (NVDA) &mdash;&nbsp;finally <a href="https://webaim.org/">overtook JAWS in popularity</a>. (JAWS took back its dominant market share in 2020, but just barely). This revolution in accessibility began in an unlikely place: a music camp for kids in the small Australian town of Mittagong.&nbsp;</p>
<hr class="wp-block-separator" />
<p>In 1994, a 10-year-old Michael Curran met nine-year-old Jamie Teh at a weeklong music camp for young Braille-reading students from around Australia. Each boy saw something of himself in the other, and the two quickly bonded over their mutual interest in computers.&nbsp;</p>

<p>Teh had been interested in programming ever since he got his first computer, a Commodore 64. Because the Commodore 64 did not have a screen reader, Teh, like Henter, had to get other people to read the screen for him. When a seven-year-old Teh finally got an Apple II, which did have a screen reader, he could at last access everything on the computer on his own.&nbsp;</p>

<p>&ldquo;But my dad would have to read programming books to me because ebooks weren&rsquo;t a thing back then,&rdquo; says Teh. &ldquo;So my poor dad would come into my room and read these books, which were the most boring thing in the world for him. But I just loved it.&rdquo;</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>“My dad would have to read programming books to me because ebooks weren’t a thing back then.”</p></blockquote></figure>
<p>A few years later, Curran and Teh started making both music and software together. (Their interests often blended; one of their projects added accessibility to audio engineering software, enabling people who are blind to do music production and sound engineering.) They often spent nights in each other&rsquo;s houses, engrossed in late-night philosophical conversations. The same question came up time and time again: Why isn&rsquo;t there a free screen reader for people who are blind? Why does it have to cost thousands of dollars?</p>

<p>In 2006, Curran took a break from university. With free time on his hands, he started to put his ideas into practice, hacking together the prototype of what would become NVDA.&nbsp;</p>

<p>&ldquo;There were many people, even in the blindness community, way more qualified than me back then. In fact, there were even people who used to talk about creating a free screen reader,&rdquo; says Curran. &ldquo;The one difference between me and them is that I wrote the first line of code.&rdquo;</p>

<p>Teh had a full-time job, but he joined a few months later. &ldquo;I didn&rsquo;t know how serious it was going to be, but it was fun and interesting,&rdquo; says Curran. &ldquo;Because we both very strongly believed in the concept of open source, we made NVDA completely open source.&rdquo;</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>“The one difference between me and them is that I wrote the first line of code.”</p></blockquote></figure>
<p>A year later, Mozilla approached the duo and funded Curran to attend the CSUN Assistive Technology Conference, the largest conference of its kind, hosted by the Center on Disabilities at California State University, Northridge. There, Curran met like-minded enthusiasts from across the world. That was when they realized NVDA had reached escape velocity. It was no longer their pet project. Shortly thereafter, Curran and Teh founded NV Access, a nonprofit with a governance structure to sustain the project for the long term.</p>

<p>In its early years, users considered NVDA good enough for home use but unsuited for professional tasks. The fact that it was free gave people the impression that its quality wasn&rsquo;t on par with commercial screen readers. But that began to change as the project grew. The number of contributors ballooned, and NVDA expanded to more than 60 languages. Accessibility teams at Google, Microsoft, and Mozilla wanted to work together to make NVDA integrate well with their platforms and browsers.</p>

<p>According to <a href="https://webaim.org/projects/screenreadersurvey9/">the biannual survey of screen reader users</a> conducted by WebAIM &mdash; a Utah-based organization that provides web accessibility solutions &mdash; JAWS had been the most popular primary screen reader since the survey began in 2009. But since 2019, NVDA has rivaled JAWS in popularity.</p>

<p>The NVDA community is enthusiastic, even passionate about the software. Discussions comparing one screen reader to another &mdash; very much like the iPhone vs.&nbsp;Android or Chrome vs.&nbsp;Firefox debates &mdash; can become religious. (&ldquo;I realize I&rsquo;m opening up a can of worms,&rdquo; <a href="https://nvda.groups.io/g/nvda/topic/29697852#56026">wrote</a> one user to the NVDA community&rsquo;s mailing list, asking how three different screen readers compare.)&nbsp;</p>

<p>Some community members are young &mdash; Curran can remember &ldquo;kids who got interested in NVDA&rdquo; when they were &ldquo;like 13 or just starting high school.&rdquo; Some of these young users would go on to study computer science, becoming developers themselves. Three generations of blind programmers have been writing software for each other since Henter began work on JAWS in the 1980s.&nbsp;&nbsp;</p>
<hr class="wp-block-separator" />
<p>Tuukka Ojala, <a href="https://www.vincit.fi/en/software-development-450-words-per-minute/">a blind software developer based in Finland</a>, is one of those kids that Curran speaks of.&nbsp;</p>

<p>Ojala had always been curious about technology and computers, but the first computer he used at school had no screen reader installed. &ldquo;When other kids were learning handwriting, I spent the same time learning touch typing,&rdquo; Ojala says. &ldquo;It was more or less a fancy typewriter.&rdquo; Things changed when he got his own computer for the first time, a machine that came with a demo version of JAWS. &ldquo;It would run for like 40 or 45 minutes at a time, and I had to reboot the computer,&rdquo; says Ojala. He couldn&rsquo;t afford the license, let alone the price of future upgrades. Still, in less than a year, while running the JAWS demo in those short increments, he&rsquo;d learned to program.</p>

<p>In 2011, Ojala made a bet with a friend on how long he could stick with NVDA, which was still in its early stages. &ldquo;Back then, the primary reason for using NVDA was not that it was actually better than JAWS in significant ways,&rdquo; Ojala tells me. The bet was supposed to last a month. More than a decade later, Ojala is still using NVDA even though price is no longer an issue. &ldquo;The features NVDA has or chooses to develop are more tailored to what I need,&rdquo; says Ojala. Upgrades are quick and add-ons &mdash; like optical character recognition (OCR) &mdash; are extensive. &ldquo;I&rsquo;ve used NVDA for most of the time I&rsquo;ve used computers.&rdquo;</p>

<p>At his company, Ojala primarily works on backend systems. &ldquo;I often describe myself as someone who is interested in backend but still cares about the whole software, so I do usability testing as well,&rdquo; says Ojala. &ldquo;I like to understand how the end users use it even though I don&rsquo;t work with the front end as much.&rdquo;&nbsp;</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>Accessibility screw-ups, technological or not, are massively scalable</p></blockquote></figure>
<p>But only a handful of software tools give Ojala a frictionless experience. For most companies, accessibility isn&rsquo;t a priority, or worse, something that they pay lip service to while doing the bare minimum to meet regulatory compliance. Ojala&rsquo;s pet peeve is people thinking that accessibility is a feature, a nice-to-have addition to your product. When they tack on accessibility later, without thinking about it from the very beginning, Ojala can tell &mdash;&nbsp;it feels haphazard. (Imagine creating a product with a colorless UI, then adding colors later as an afterthought, only to use the wrong color combination.)</p>

<p>Accessibility screw-ups, technological or not, are massively scalable. Take, for example, how US dollar bills are identically sized for every denomination. Before smartphones, blind Americans had to carry around a separate &mdash; and costly &mdash; device just to identify bills, or else place their trust in every cashier they met. (Many other currencies use differently sized bills for exactly this reason.) When systems don&rsquo;t build in accessibility, the burden passes to individuals with disabilities to make up for it on their own, often by buying expensive technologies. Makeshift solutions are only necessary because of the thoughtlessness of the people who designed the system.&nbsp;</p>
<hr class="wp-block-separator" />
<p>As a sighted programmer, I&rsquo;d been oblivious to the world of screen readers until I came across a post titled &ldquo;<a href="https://news.ycombinator.com/item?id=22918980">I&rsquo;m a software engineer going blind, how should I prepare?</a>&rdquo; One recent evening, I tried navigating my personal website, eyes closed, with macOS&rsquo;s native screen reader, VoiceOver. I was soon mortified to learn that underneath the ostensibly clean interface was a chimeric HTML structure. As I made ad hoc changes to my website &mdash; mainly written in a language called Go &mdash; over the years, I had mangled the HTML hierarchy so much that it was rendered inaccessible even to me.</p>

<p>The history of screen readers is as much a transcendent achievement for the blind programmers who pioneered the field as it is a rebuke to sighted programmers, without whose neglect non-native screen readers might not need to exist. &ldquo;As a blind person, I want to go to the local computer store, buy a computer and just use it. I shouldn&rsquo;t have to go and buy or even have to download another screen reader,&rdquo; Curran says. Blind programmers shouldn&rsquo;t <em>have</em> to be the ones writing tools for blind people.&nbsp;</p>

<p>Nevertheless, they&rsquo;ve done exactly that. They have built &mdash;&nbsp;sometimes on top of each other, sometimes chaotically and in parallel &mdash; software that is life-changing in the literal sense. And their legacies endure,&nbsp;not just in the operating systems that have adopted their products, but in the programmers who have come after them.&nbsp;</p>

<p>Henter relied on volunteers to read screens out loud for him; Teh&rsquo;s father read programming books to him as a child. For Ojala, screen readers have been part of his life as a programmer from the start.&nbsp;</p>

<p>It took Ojala quite a long time to figure out why sighted people kept asking, &ldquo;How can you code?&rdquo; It seemed like a big deal to them, but he couldn&rsquo;t make out why.&nbsp;</p>

<p class="has-end-mark">&ldquo;My way of working is the only way I know,&rdquo; Ojala says. &ldquo;I don&rsquo;t know of any other ways to code.&rdquo;</p>
						]]>
									</content>
			
					</entry>
	</feed>
