Book Review by Geri Lipschultz
The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now
By Hilke Schellmann
Published by Hachette, January 2024
The Algorithm is a deeply researched investigation into the powerful A.I. systems already widely entrenched in the workplace. In clear, clean, unassuming but well-documented and meticulously ordered prose, Hilke Schellmann tells the story both from the point of view of companies looking for employees and from that of the job seekers themselves. Schellmann describes technology whose purpose is to analyze and determine the viability of those seeking positions, thereby providing a great service to businesses overwhelmed by the sheer number of applicants for relatively few positions. Where one might suppose there is more objectivity in the calculations of a machine than in the judgment of humans, Schellmann’s stories suggest otherwise. Those most vulnerable, Schellmann shows, are the usual suspects: women, people of color, people with disabilities, and those at their intersections. She writes: “We need to talk about how we hire, promote, monitor, and fire human beings….to talk about how to change the incentives inside companies…. And we need to talk about how we want to treat humans in an AI-driven world.”
Schellmann’s is a story about machines and their power, the power we endow them with, the power we give up to our technology.
The companies themselves, she writes, do not adequately test these technologies, and there have been lawsuits from individuals protesting their methods; litigation, however, is cost-prohibitive for both sides. Problems arise when correlation is treated as causation, when the various contrivances of these systems do not measure what they intend to measure. The consequence is paid by the employee who is fired because a surveillance system has determined they are not producing adequately, or who failed an interview conducted by a robot; it is paid by a potentially fully qualified applicant whose facial readings signified they were a risk, or whose gaming techniques were not considered top-notch. And we are not talking of one or two people but of thousands, hundreds of thousands, and more, because of the large job search engines (such as Indeed and Monster) HR departments use in hiring. Companies might receive thousands of applications for one or two positions and need ways of filtering applicants. Schellmann investigates the use of surveillance, showing, for example, how companies may easily track their employees’ or potential employees’ social media accounts, pore through them, and deduce matters of mental health or make character judgments, with or without the employees’ knowledge. She asks for more transparency, she questions the ethics, and she cites issues of privacy.
Schellmann writes: “It’s a dark outlook—a system in which algorithms define who we are…. What if the algorithms get it wrong?” She describes systems that determine our emotions and character from movements of the face, systems that determine our intelligence by IQ-like tests, systems in which humans seeking jobs are interviewed by machines that “decide” whether or not they get the job. She investigates systems that, albeit unintentionally, make biased decisions on the basis of gender, disability, and race, and she questions not whether this is the intention of the systems’ creators but whether they test sufficiently to correct the mistakes. She not only asks what happens if the algorithms get it wrong; she shows where they do, and how grave the consequences are, and she is not without suggestions about how to begin to solve the problems.
The warning given here, more than once, is that unless laws are passed and regulations put into effect, this practice of submitting to the technology will have dangerous repercussions that cannot be walked back.
Namely, it will be out of our hands. It will be too late.
Some of this is quite literal, when you consider that in one case Schellmann writes about, “productivity rates” were set by an algorithm, and “terminations were not initiated by human managers but by algorithms.”
She also asks, “What if the algorithms take away human variability?”
And then questions whether we should “build and use such technology?”
Schellmann interviews Annette Bernhardt, director of the Technology and Work Program at the University of California, who says: “It’s so invisible, and now these systems are a black box to workers and to policy makers alike.”
To which Schellmann adds that “the public is seeing only the tip of the iceberg.”
It would seem the use of this technology is only increasing, according to the research of the analyst Brian Westfall: “…more than three hundred HR leaders in mostly midsize companies, and 98 percent of them said that they would rely on algorithms, HR software, and AI if they needed to make layoff decisions in 2023.”
One wonders whether it might already be too late.
Among the forefathers of computer science, along with Alan Turing (now well known for the “Turing Test”), John McCarthy (who coined the phrase “artificial intelligence”), and Marvin Minsky (who said that the human “brain is merely a meat machine”), was Joseph Weizenbaum, who in the 1970s expressed serious concerns about the repercussions of the technology upon human civilization. Pertinent here, he wrote about the need for accountability among those who profit by its use. As a scientist, he warned about its unknown ramifications, including the power given to it by its creators. The idea that algorithms are doing the work of managers and “the HR folks” is reminiscent of the concerns of Weizenbaum, who some might say was a bit of a prophet. He constructed the first chatbot, named ELIZA, and made a few discoveries that troubled him: those who interacted with his machine developed an unsettling relationship with it, a kind of transference, and a number of psychologists advocated for the machine to relieve them of some of their work, arguing that they could reduce their hours because a conversation with the machine could replicate for the client what happened in therapy.
Weizenbaum rejected this notion on a number of grounds, among them, the dehumanizing effect upon a human being. He raised the question of intelligence itself, citing the danger of comparing human intelligence to the information compiled by the computer. Among his fears: that humans would sublimate themselves to the machine, that humans would objectify themselves, that scientism would be seen as an end rather than as a means.
To that end, Schellmann speaks of the neuroscience that will eventually be able to “record the electrical activity in our brain and measure focus or mind wandering.”
Of course, I imagine that Weizenbaum might call this “mind wandering” something else, might find it a productive occupation in ways the workforce or even the neuroscientist would never be able to determine, no matter what tool they have at hand.
Weizenbaum also reflected upon the sheer impossibility of projecting the ramifications of a machine dedicated to replicating human intelligence, and the subsequent unknown dangers. Like Schellmann, and decades earlier, he, too, asked whether we should build and use such technology.
Schellmann offers the thoughts of Nita Farahany, Duke professor of philosophy and law: “With our growing capabilities in neuroscience, artificial intelligence, and machine learning, we may soon know a lot more of what’s happening in the human brain…. But wasn’t the brain the one area that you thought that you had some mental reprieve, the last bastion of freedom, the place that you thought you could have ideas and creativity, fantasize about something, have an absurd idea, have a brilliant idea. Is that the one space that you thought would always be secure?”
Schellmann’s approach is compelling, well-structured, and full of stories. She has traveled to conferences, attended meetings, and interviewed hundreds of people: the vendors, the companies using the systems, and experts in psychology, sociology, and computer science; she herself has undergone the numerous and various tests that are used to determine worthy hiring candidates. She manages to weave a mountain of material into a fascinating exposé that lays out very clearly what the issues are, and there are many.
Cited in the New York Times as one of the five best books on Artificial Intelligence, this—her first—book represents a labor of at least five years, during which time Schellmann has done podcasts and published a number of articles for the New York Times, The Wall Street Journal, and MIT Technology Review, to name a few. A young and extremely accomplished writer and documentarian, Schellmann won an Emmy for her PBS Frontline film Outlawed in Pakistan—one of many awards honoring her thorough and ground-breaking investigative work. She is also a professor of journalism at NYU and the mother of a toddler, a little girl, to whom she has dedicated this book.
To the extent that Schellmann's research uncovers serious inequities in the way (what she calls) "these tools" are currently being used, Schellmann is a whistleblower, a Cassandra, someone looking to hold those who profit accountable. But she is not a complete naysayer by any stretch: she accepts that A.I. is here to stay, and in this book she suggests methods of making it safe, making it reliable, making it do the work it sets out to do. She is well aware that current hiring procedures are already laced with bias, and she sees hope in what she calls the potential “magic” of this technology, but more work needs to be done to capture the more subtle configurations of biased selection.
What is particularly remarkable about this book is its readability.
Hilke Schellmann does not want her daughter to grow up in a world where she—and every other human being—is “quantified.”
Author Photo & Bio:
Twice a Pushcart nominee, Geri Lipschultz has published in Terrain, The Rumpus, Ms., New York Times, the Toast, Black Warrior Review, College English, among others. Her work appears in Pearson’s Literature: Introduction to Reading and Writing and in Spuyten Duyvil’s The Wreckage of Reason II. She has an MFA from the University of Iowa and a Ph.D. from Ohio University and currently teaches writing at Borough of Manhattan Community College. She was awarded a CAPS grant from New York State for her fiction, and her one-woman show (titled ‘Once Upon the Present Time’) was produced in NYC by Woodie King, Jr. Her novel will be published by Dark Winter Press in September 2025.