
Published on August 16, 2024

AI has an anthropomorphism problem—a paradox, really.

On the one hand, attributing human characteristics to computers helps us understand AI technologies. No, AI isn’t artificial or intelligent. But it’s similar. As a metaphor to aid discussion, then, AI-as-human works: It gives us an intuitively understandable, if imprecise, way to talk about machine “learning” and “neural” networks and “cognitive” computing and other AI technologies.

On the other hand, anthropomorphizing AI hinders our understanding of its impact, especially the impact of the generative AI (GenAI) technologies discussed below. This misunderstanding leads us to exaggerate GenAI’s benefits and drawbacks. Enterprise leaders dream of outsized ROI and productivity gains. Employees in the trenches dread obsolescence when AI takes their jobs.

Both groups are fueling the GenAI hype storm and derailing your AI efforts. Here’s how to get those projects back on track.

Remember AI is technology

Watch a GenAI app in action, and it’s easy to worry about your job—or get cocky about it. ChatGPT, DALL-E, GitHub Copilot, and their ilk are stunningly fast and adept at writing and coding and creating images. Humans simply can’t compete with GenAI’s speed and volume. They can, however, fantasize about the new products and services that GenAI enables.

The anthropomorphic, GenAI-as-human metaphor doesn’t help here, so don’t use it. Don’t let others use it, either. GenAI isn’t human. It’s computer technology.

“Large language models (LLMs) like ChatGPT are essentially a very sophisticated form of auto-complete,” wrote Dr. Michael Wooldridge, professor of computer science at Oxford. “The reason they are so impressive is because the training data consists of the entire internet.”

Despite “dazzling competence in human-like communication,” noted Wooldridge, an LLM has no awareness—much less, experience—of the passage of time, the world at large, or the events that happened after it was trained. It’s just “a computer program that is literally not doing anything until you type a prompt, and then simply computing a response to that prompt, at which point it again goes back to not doing anything.”
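Wooldridge’s description is easy to picture in code. The sketch below is a minimal illustration, not any vendor’s actual API: complete() here is a hypothetical stand-in for a real LLM call. The point is that the model holds no state between prompts; any feeling of a continuing “conversation” comes from the application resending the accumulated transcript each turn.

```python
# Minimal sketch of a stateless LLM interaction (illustrative only).
# complete() is a hypothetical stand-in for a real LLM API call.

def complete(prompt: str) -> str:
    """Pretend model: computes a response to the prompt, then stops."""
    return f"[plausible continuation of a {len(prompt)}-character prompt]"

def chat_turn(transcript: list[str], user_message: str) -> str:
    """One turn of a 'conversation' with a model that remembers nothing."""
    transcript.append(f"User: {user_message}")
    prompt = "\n".join(transcript)  # the model sees only this text
    reply = complete(prompt)        # it does nothing before or after this call
    transcript.append(f"Assistant: {reply}")
    return reply

transcript: list[str] = []
chat_turn(transcript, "What year is it?")
chat_turn(transcript, "Are you sure?")  # continuity lives in the transcript,
                                        # not in the model
```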

Of course, when an LLM does do something, it may do it wrong. Its results may be biased, hallucinated, offensive, or flawed in some other way.

That’s because LLMs “generate responses merely based on common patterns in their training data, without regard for factual truth or accuracy. Their goal is plausibility based on text distributions—not fidelity to reality,” posted Joseph Shieber, professor of philosophy at Lafayette College. “They have no mechanisms for distinguishing truth from fiction or ascertaining the veracity of statements. Their judgments are grounded solely in text probabilities rather than external states.”
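Shieber’s point can be seen in miniature. The toy word-level bigram model below is an illustrative sketch only; real LLMs are neural networks trained over tokens, not lookup tables. But the principle is the same: each next word is sampled purely by how often it followed the previous word in the training text, and nothing in the procedure checks whether the result is true.

```python
import random
from collections import defaultdict

# Tiny "training corpus": the model will learn patterns, not facts.
corpus = ("the moon is made of rock . the moon is made of cheese . "
          "the moon orbits the earth .").split()

# Record every observed continuation of each word.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(word: str, length: int = 8) -> str:
    """Sample each next word by its frequency in the corpus."""
    out = [word]
    for _ in range(length):
        out.append(random.choice(bigrams[out[-1]]))
    return " ".join(out)

print(generate("the"))
# Might print "the moon is made of cheese . the moon" -- perfectly
# plausible by the corpus's text distribution, and factually wrong.
```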

It’s harder to imagine losing your job to GenAI—or creating vast wealth from GenAI—when the technology is seen in that light, i.e., profound yet profoundly limited.

Remind users they’re human

“Many fears about A.I. are based on an underestimation of the human mind,” wrote New York Times columnist David Brooks.

When we see ourselves as computers—a sort of unavoidable reverse anthropomorphism—we are going to lose to computers. We need to see ourselves as humans again. Michael Ignatieff helps us do that.

“We have impoverished our understanding of thinking by analogizing it to what our machines do,” wrote Ignatieff, the president of Central European University. “What we do is not processing. It is not computation. It is not data analysis. It is a distinctively, incorrigibly human activity that is a complex combination of conscious and unconscious, rational and intuitive, logical and emotional reflection.”

The complexity is so extreme, according to Ignatieff, that thinking has yet to be modeled by neurologists or philosophers despite decades or centuries (respectively) spent in the attempt. Meanwhile, “some of the simplest forms of pattern recognition that human cognition does so effortlessly” continue to baffle AI engineers.

And Brooks reminds us that human intelligence includes physical, bodily capabilities as well as the impulses to “pursue goodness, to marvel at and create beauty, to seek and create meaning. A.I. can impersonate human thought…[but it lacks] a unique worldview based on a lifetime of distinct and never to be repeated experiences.”

We are more than our computational capacity. That realization dials down the GenAI hype, tempering fears and fantasies alike.

Let AI augment users, not replace them

Employees won’t be replaced by AI. They’ll be augmented with AI. And if they refuse to use AI, then they’ll be replaced—by other employees who will use it. That’s the prevailing wisdom. Forrester research backs it up.

“A recent Forrester survey found 36% of workers polled fear losing their jobs to automation or AI in the next 10 years,” the company posted. “But the truth is, many more jobs will be influenced [read: augmented] by GenAI than will be lost. …So workers should be more focused on how to leverage the technology than how to compete with it.”

Forrester predicts GenAI will eliminate 2.4 million jobs but will influence more than 11 million. GenAI tools should be “very useful” to employees whose work involves math, science, writing, critical thinking, and memorization.

Make no mistake. Employees remain in the driver’s seat when GenAI augments their work. Employees’ curiosity and creativity drive GenAI responses. And employees are the arbiters, deciding whether the responses are satisfactory and taking responsibility for the results.

“The quality of creative content produced by AI today doesn’t match the creativity of humans,” according to Forrester. Company VP and principal analyst J.P. Gownder added that GenAI content can be used “to mock up some ideas, but ultimately it will be a human that will turn it into something creative.”

Anthropomorphize selectively

At this point, stripping AI of its anthropomorphic elements isn’t necessary. It’s not even desirable. We’ve spent decades imbuing AI with human capabilities so that we could advance the state of the AI art. “AI as human” works when you’re talking about the technologies themselves. Anthropomorphize all you want.

When you’re talking about AI’s impact, however, skip the anthropomorphism. Talk about “AI as computer technology”—computaremorphism, if you will. That will reduce the hype and keep your AI projects on track where they belong.

Brent Dorshkind

Enterprise Analyst, ManageEngine

Brent Dorshkind is the editor of ManageEngine Insights. He covers spiritual capitalism and related theories, and their application to leadership, culture, and technology.

Brent believes today’s IT leaders are among the best qualified candidates for the CEO seat, thanks in part to the acceleration of digital transformation in the workplace. His goal is to expose leaders at every level to ideas that inspire beneficial action for themselves, their companies, and their communities.

For more than 30 years, Brent has advocated information technology as a writer, editor, messaging strategist, PR consultant, and content advisor. Before joining ManageEngine, he spent his early years at then-popular trade publications including LAN Technology, LAN Times, and STACKS: The Network Journal.

Later, he worked with more than 50 established and emerging IT companies including Adaptec, Bluestone Software, Cadence Design Systems, Citrix Systems, Hewlett-Packard, Informix, Nokia, Oracle, and Sun Microsystems.

Brent holds a B.A. in Philosophy from the University of California, Santa Barbara.
