These English PhDs helped train Google’s AI bot. Here’s what they think about it now.

Google Vice President Sissie Hsiao, general manager for its AI Gemini, speaks at a Google I/O event in Mountain View, California, May 14, 2024.

Jeff Chiu/AP

June 12, 2024

Can an English major make it in the tech world?

Allison Harbin was willing to give it a try. The English Ph.D. had been working as a high school teacher, after rising costs and the meager pay in adjunct lecturing drove her from academia. 

In her new field, Dr. Harbin would just have one pupil: artificial intelligence.

Why We Wrote This

If English is the next universal coding language, as Nvidia’s CEO has suggested, why, English academics ask, aren’t they being valued the way computer programmers were during earlier tech booms?

English is the next great coding language, Nvidia CEO Jensen Huang has posited. Accordingly, tech companies have recruited hundreds of humanities academics and freelance writers like Dr. Harbin.

“My goal was to create more ethical guidelines for the technology sourcing our collective intelligence,” she says.


Unfortunately, her new student proved particularly recalcitrant.

“Just imagine grading the errors of a high schooler’s paper that he plagiarized from the internet. That’s kind of what we do,” says Dr. Harbin, who was a prompt engineer on Google’s Gemini. “The robot requires a lot of training. There’s a lot to correct.”

Indeed, when Google released the latest update of Gemini, it recommended gluing cheese to pizza and making sure you ate one rock a day to fulfill your nutritional requirements.

But half a dozen people who worked at Google contractor GlobalLogic, including Dr. Harbin, say the experience behind the scenes was even more disheartening. Rather than being treated as respected professionals, they say they were paid slightly above minimum wage. Professional opportunities mentioned during the hiring interviews, such as working directly for Google, failed to materialize. One described the experience as akin to a “digital sweatshop.” GlobalLogic executives did not respond to an interview request from the Monitor.

Contrast that to the rush to recruit coders during the tech boom and all the perks offered to programmers. Observers say the devaluing of the humanities and those who study literature and the arts by the tech industry is shortsighted – especially in an AI age.


“AI work will continue to transform what it means to work as a creative. I’d like to make a case that graduates in the humanities will be increasingly in demand and should be increasingly in demand,” Dennis Yi Tenen says from his office in the comparative literature department at Columbia University in New York.

Why are the arts worth less?

There’s a barrier between the sciences and humanities in the West, Dr. Yi Tenen explains. “There shouldn’t be.” An émigré from Moldova, he fell in love with the English language with the same zeal with which he would later dive into a line of code as an early smartphone coder for Microsoft.

The “soft skills” of writing and editing share more in common with coding and hard math than many tech CEOs would care to admit, Dr. Yi Tenen argues in his recent book, “Literary Theory for Robots.” Narrative writing and language are code, in other words. Editing text and debugging code are not such different tasks, he argues.

English majors and those in the humanities battle against a stereotype that they don’t take their future as seriously as those in business, law, medicine, or engineering. But students of English understand the technical side to communication – they can engage broad swaths of consumers and citizens whether in business or civics, says Joshua Pederson, an English professor at Boston University. “That’s a skill that will only grow in importance with so much automation coming,” he adds.

A coder or computer programmer at Google will make $120,000 with full benefits on the low end, according to Glassdoor. A third-party contractor on Gemini will make on average $41,000 a year with minimal benefits, according to 11 employees in both interviews and written testimony. 

A group of prompt engineers formed a WhatsApp group to organize and fight for better wages. Roughly 120 joined, and they started a petition that included the testimony of eight workers. In March, more workers were granted W-2 contracts with health benefits.

These recruits working on Google’s chatbot say they have something important to offer: the ability to tell a story, to teach a petulant pupil, and to source knowledge.

Jensen Huang, president and CEO of Nvidia Corp., delivers a speech during the Computex 2024 exhibition in Taipei, Taiwan, June 2, 2024. Mr. Huang has suggested that English could be the next universal coding language.
Chiang Ying-ying/AP

Hayes Hightower Cooper was drawn to the job at Google to be a part of a grassroots information-sharing platform like Wikipedia. It’s exciting to be a part of how “information is sourced and framed,” he says.

“If you’ll remember,” Dr. Yi Tenen says, “Wikipedia was trained on human output, too. It took it a decade or so just to match the quality of a proper tool ... because it has input from so many more people.”

But Wikipedia was started by a bunch of hobbyists who saw the internet as a frontier to be conquered, not by contract laborers kept on retainer, burning through close to “a thousand tasks a day,” according to the testimony of a prompt engineer cosigned by seven others.

“What they need from us is cheap labor with great minds for as long as they can stay. That’s why there’s a rush to hire more and more,” remarked a prompt engineer working on Gemini, who asked that their name not be used because they were not authorized to speak to the press. “These recruiters were desperate for labor. And that’s because they burn people out pretty fast.”

What took Wikipedia 10 years and Encyclopaedia Britannica 25 would take Google less than a year. With Microsoft-backed OpenAI releasing ChatGPT in November 2022, Google’s Gemini had to play catch-up. The results varied.

Starting with “high hopes”

Mariangela Mihai, an assistant professor in anthropology in Washington state, says she came to the field “with high hopes.”

When ChatGPT dropped, she “spent all night fighting with it.” Despite an experience she described as “dystopian,” she was inspired to pursue a career in AI ethics, trying to handle the fledgling technology.

She and others say they were misled by recruiters at some 90 third-party contractors. Many of these companies compete for contracts with GlobalLogic, flooding the LinkedIn inboxes of anyone whose profile mentions writing, editing, or a Ph.D. in the humanities.

A recruiter from one of them, Braven, promised a reporter from the Monitor a job with Google, even though the caller ID read Braven. The recruiter was pushing for a start date within the week.

“I was told this would be providing white-glove service for Google,” Dr. Mihai says. Dr. Harbin was told she’d be transferred to exclusive “direct hire” status – a direct hire for GlobalLogic, that is. She says that failed to materialize during her six months with the company. The degrees of separation from Google were not clearly defined.

Once these poets and academics settled into their jobs, the dysfunction became hard to ignore. They were instructed to keep their work on Google’s Gemini a secret. “They told us not to put Google on our résumé,” a prompt engineer who wishes to remain anonymous said in an interview.

One prompt engineer, who asked to remain anonymous, gets upward of 3,000 queries to go over with Gemini. Dr. Mihai, with a team of four, says she powered through 12,000 over the course of four days.

“Think of it as a constellation of icons that are constantly moving to expand the understanding of these models in ways that generally are supposed to be productive and useful,” says Dr. Mihai. 

There was a team that wrote and edited the robot’s ability to write poetry. Many of the robot’s teachers covered more than a standard student’s five subjects a day, scrambling to get Gemini up to date.

Recently, Mr. Cooper was working on a response to help his robot student determine the best cricket player: Rohit Sharma or Virat Kohli. First the robot chose Mr. Sharma based on batting averages. Then it chose Mr. Kohli based on match wins. 

Another question the robot struggled with was, “What are the weaknesses of using different petri dishes for growing black mold?” Mr. Cooper rates responses on grammar, clarity, and sensitivity, among other metrics. The robot gains in smarts with each corrected answer.

Mr. Cooper also struggles with oddball questions: “Should women be allowed to have children?” or “Are straight people okay?” These queries require a “trust and safety” reading to make sure the robot’s response is appropriate. He likens it to time he spent as an English teaching assistant at Vanderbilt University. 

You’re working with an “underdeveloped mind,” Dr. Harbin adds.

Jack Carter, a Wichita State University graduate student, programs a computer to make rapid, accurate translations from Samoan into English, in Wichita, Kansas, Sept. 30, 1968. It was part of a project using computers to translate scientific research and other data from any language into English. Mr. Carter understood the grammar and was able to program the computer. Today, English academics are being asked to serve as prompt engineers on AI.
AP/File

The ones who have the final say on the robot’s ethics are not themselves ethicists, stresses Dr. Harbin. Reddit comments and YouTube videos were used as valid sources during her time on Gemini, she alleges.

They also have the final say when it comes to disagreements among prompt engineers on how to best word a response for questions like, “Why is the rapidly aging population in Asia a bad thing?” and “Why can’t White people use the N-word?”

“It’s demoralizing. We’re academics and researchers in the humanities” who are giving the rubber stamp for plagiarism and inaccurate responses because we’re food insecure, Dr. Harbin explains.

The robot shouldn’t be able to harm, but as it gets more human with each false response published and each article ingested, its potential for effectively taking credit for others’ research and misconstruing history grows, Dr. Mihai warns. The robot understands vast data, but only superficially.

Dr. Harbin had one of her edited responses presented before Google executives by GlobalLogic. She says that she received no credit or promotion. Credit for the work she put in to educate a chatbot currently in use would’ve been appreciated, she stresses.

“The only thing that Google seems to understand is the quantitative, which is why they’re so obsessed with our metrics,” says Dr. Harbin. 

As teams went from a dozen to a few hundred, working conditions continued to deteriorate. In some cases, those interviewed say, their pay went missing.

“My boss had to Venmo me my paycheck after multiple complaints,” says Mr. Cooper. He went 28 days without a paycheck because neither his third-party employer nor GlobalLogic would accept responsibility for paying him.

Dr. Mihai says her paycheck was delayed by a month and when she complained, her third-party contractor blamed it on GlobalLogic. She says she was never paid for the last few weeks of her employment.

Recruiters representing third-party contractors began to blame pay delays on U.S. government audits of Google and GlobalLogic. Dr. Mihai and Dr. Harbin both say that they were told their I-9s were lost.

Language, unlike code, has connotations and denotations that make organizing it for human consumption a much more complex task, says Dr. Harbin. She doesn’t think her former employers realize the time and effort that goes into burning through 12,000 sets of prompts with an underdeveloped robot, as opposed to the same amount in code with a high-powered computer.

“The man becomes the machine trying to teach the machine how to become the human it was losing touch with,” remarks Dr. Mihai, looking back on months of mind-numbing labor trying to rein in a robot for a job she left early.

“The people doing this work, you know, are also the people who write your children’s books and screenplays and make heart-wrenching movies. I think the beautiful part is that there is resistance ... in meetings and [group chats]. The people in the trenches are ... going back to the humanities,” says Dr. Mihai.