‘2001: A Space Odyssey’ turns 50: Why HAL endures

Even after five decades of technological advancement, the murderous artificial intelligence in Stanley Kubrick’s philosophical sci-fi film remains the definitive metaphor for technology’s dark side.

Astronaut David Bowman (Keir Dullea) peers through his space helmet as he shuts down the malevolent HAL 9000 computer in Stanley Kubrick's 1968 film, '2001: A Space Odyssey.'

Turner Entertainment/AP

April 3, 2018

“I’m sorry, Dave. I’m afraid I can’t do that.”

With those nine words, HAL 9000, the sentient computer controlling the Jupiter-bound Discovery One, did more than just reveal his murderous intentions. He intoned a mantra for the digital age.

In the 50 years since the US premiere of Stanley Kubrick’s “2001: A Space Odyssey,” virtually everyone who has used a computer has experienced countless HAL moments: “an unexpected error has occurred,” goes the standard digital non-apology. The machine whose sole purpose is to execute instructions has chosen, for reasons that are as obscure as they are unalterable, to do the opposite.


There’s something about HAL’s bland implacability that makes him such an enduring symbol of modernity gone awry, and such a fitting vessel for our collective anxiety about an eventual evolutionary showdown against our own creations.

“HAL is the perfect villain, essentially...,” says John Trafton, a lecturer in film studies at Seattle University who has taught a course on Stanley Kubrick through the Seattle International Film Festival. “He’s absolutely nothing except for a glowing eye.... Essentially we’re just projecting our own fears and emotions onto HAL.”

HAL’s actual screen time is scant, beginning an hour into the nearly three-hour film and ending less than an hour later. And yet, during that interlude, his personality eclipses those of the film’s humans, whom Roger Ebert described in his 1968 review as “lifelike but without emotion, like figures in a wax museum.”

While the film’s human characters joylessly follow their regimens of meals, meetings, exercise routines, and birthday greetings, we see HAL, whose name stands for “Heuristically programmed ALgorithmic computer,” expressing petulance, indecisiveness, apprehension, and, at the end, remorse and dread.

It’s this blending of human emotionality with mathematical inflexibility that some experts find troubling. Human biases have a way of creeping into the code of mass-produced products, giving us automatic soap dispensers that ignore dark skin, digital cameras whose blink detection mistakes East Asian eyes for blinking, surname input fields that reject apostrophes and hyphens, and no shortage of other small indignities that try to nudge us, however futilely, into the drab social homogeneity of Kubrick’s imagined future.


“One of the things that makes HAL a really enduring character is he faces us with that kind of archetypal technological problem, which is that it’s a mirror of our own biases and predilections and things that we are maybe not conscious of,” says Alan Lazer, who teaches courses including “The Films of Stanley Kubrick” at the University of Alabama in Tuscaloosa.

Moral machines?

Machine learning – a technique in which software progressively improves at a task by finding patterns in data – is being used in more and more walks of life. For many Americans, artificial intelligence is shaping how our communities are policed, how we choose a college and whether we get admitted, and whether we can get a job and whether we keep it.
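To make that definition concrete, here is a minimal sketch – a toy perceptron trained on invented “résumé screening” numbers, not anything resembling a real hiring, admissions, or policing system – of how software can improve a decision rule by finding patterns in labeled examples. Whatever regularities, or biases, sit in those examples end up baked into the learned rule.

```python
# A toy perceptron: a decision rule that improves as it sees more labeled
# examples. The features, labels, and learning rate are invented for
# illustration; real-world systems are far more complex.

import random

def train_perceptron(examples, passes=20, lr=0.1):
    """examples: list of (features, label) pairs, with label in {0, 1}."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(passes):
        random.shuffle(examples)
        for features, label in examples:
            score = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if score > 0 else 0
            error = label - prediction  # 0 if correct, +1 or -1 if wrong
            # Nudge the rule toward the examples it misclassified.
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Hypothetical screening data: (years_experience, has_relevant_degree) -> hired?
data = [([1, 0], 0), ([2, 1], 0), ([0, 0], 0),
        ([5, 1], 1), ([7, 0], 1), ([6, 1], 1)]

weights, bias = train_perceptron(data)
candidate = [4, 1]
score = sum(w * x for w, x in zip(weights, candidate)) + bias
print("recommend" if score > 0 else "reject", weights, bias)
```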

Catherine Stinson, a postdoctoral fellow at the University of Western Ontario who specializes in philosophy of science, cautions that the software engineers who are writing the algorithms governing more and more socially sensitive institutions lack training in ethics.

“Everybody thinks that they are an expert in ethics. We all think that we can tell right from wrong, that if presented with a situation we’ll just know what to do,” says Dr. Stinson. “It’s hard for people to realize that there are actually experts in this and there is space for expertise.”

In an op-ed in The Globe and Mail published last week, Dr. Stinson echoed Mary Shelley’s warning in “Frankenstein,” a novel that turned 200 this year, of what happens when scientists attempt to exempt themselves from the moral outcomes of their creations.

She points out that MIT and Stanford are launching ethics courses for their computer science majors and that the University of Toronto has long had such a program in place.

Other groups of computer scientists are trying to crowdsource their algorithms’ ethics, such as MIT’s Moral Machine project, which will help determine whose lives – women, children, doctors, athletes, business executives, large people, jaywalkers, dogs – should be prioritized in the risk-management algorithms for self-driving cars.

But those who crowdsource their ethics are ignoring the work of professional moral theorists. Stinson notes that many computer scientists have an implicit orientation to utilitarianism, an ethical theory that aims to maximize happiness for the greatest number by adding up each action’s costs and benefits.
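As a deliberately crude sketch of what “adding up each action’s costs and benefits” can look like once it is reduced to code, consider the toy scorer below; the options, affected parties, and numbers are invented, and no real system is implied. Everything that matters has to be expressed as a number, and anything left out of the tally simply doesn’t count.

```python
# A crude utilitarian-style scorer: tally each option's benefits minus costs
# across the affected parties and pick the largest total. All values are
# invented for illustration.

def utility(option):
    """Sum (benefit - cost) over every affected party."""
    return sum(benefit - cost for benefit, cost in option["effects"].values())

options = [
    {"name": "swerve",   "effects": {"occupants": (0, 8), "pedestrians": (10, 0)}},
    {"name": "brake",    "effects": {"occupants": (5, 1), "pedestrians": (5, 3)}},
    {"name": "continue", "effects": {"occupants": (9, 0), "pedestrians": (0, 10)}},
]

best = max(options, key=utility)
print(best["name"], "scores", utility(best))  # whichever maximizes the sum wins
```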

Utilitarianism enjoys support in American philosophy departments, but that support is far from unanimous. Critics charge that such an approach discounts basic social and familial attachments and that it permits inhumane treatment in the pursuit of the greatest good.

Ordinary people tend to hold a mix of utilitarian and non-utilitarian views. For instance, most survey participants say that self-driving cars should be programmed to minimize fatalities. But when asked what kind of self-driving car they’d be willing to buy, most people say they would want one that prioritizes the lives of the vehicle’s occupants over all else.

Either way, there’s something undeniably creepy about dealing with an autonomous machine that reduces your personal worth and dignity to code. “We can’t use our human wiles on them,” says Stinson.

The disquiet HAL evokes, says Matthew Flisfeder, a professor in the University of Winnipeg’s department of rhetoric, writing, and communications, is the same unease we feel when our social choices are determined by the impersonal forces of the market.

“There’s this constant goal,” says Dr. Flisfeder, “to try to be efficient and objective and rational, and when we see that presented to us back in the form of the dryness of a machine like HAL, we started to realize the instrumentality in that and how it’s actually very dehumanizing.”

Predicting technology’s triumph over humanity was not, however, Kubrick’s aim. HAL is ultimately defeated, in one of cinema’s most poignant death scenes, and Dave moves on to the film’s – and humanity’s – next chapter.

“Essentially you have a film of this fear of artificial intelligence making humans obsolete,” says Trafton, the Seattle University lecturer. “Yet what does the movie end with? It ends with a Star Child. It ends with human beings recycling back.”