Adrien Book

Why are Men so Scared of AI?

If you’ve been following the — very recent — news on Generative Artificial Intelligence, you might have noticed a pattern: most people publicly and loudly warning us about the potential perils of AI are (white) men.

  • “You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills” (The New York Times) — Tristan Harris, Aza Raskin, and Yuval Harari

  • “Why Are We Letting the AI Crisis Just Happen?” (The Atlantic) — Gary Marcus

  • “Pause Giant AI Experiments: An Open Letter” (Future of Life Institute) — Spearheaded by Elon Musk and Steve Wozniak

  • “Pausing AI Developments Isn’t Enough. We Need to Shut it All Down” (TIME) — Eliezer Yudkowsky

Why is this the case? Are men more knowledgeable or concerned about AI than women? Are women too busy or too optimistic to write about AI? Or is it simply because our tech overlords fear suffering at the hand of algorithms as women and minorities have for years?


The Frankenstein Complex Hypothesis

“Frankenstein Complex” is a term coined by Isaac Asimov (itself a nod to Mary Shelley’s work) to describe the fear of creating something that could turn against its creator.

Due to inequalities in STEM fields, many — but far from all — AI advances have been spearheaded by men. According to a 2018 report by UNESCO, only 28% of researchers in STEM fields are women. And only 12% of AI researchers are women, according to a 2019 study by Element AI.

Perhaps AI researchers are projecting their own insecurities and anxieties onto their creations, imagining scenarios where AI could rebel against, harm, or replace them. But we hear more from men because there are simply more of them in the field. That’s a problem in its own right… and it does not fully explain the rather sudden anxiety, as AI has been around for decades.

The Black Mirror Syndrome Hypothesis

Another hypothesis is that men specifically suffer from some sort of “Black Mirror Syndrome”. We may have created so many apocalyptic scenarios about the future of technology and AI over the years — HAL / SHODAN / Ultron / SkyNet / GLaDOS — that we’ve internalized them as plausible or inevitable outcomes of AI development.

These stories often reflect a deep-seated fear of being replaced or dominated by something that we have created but cannot understand or control — all very masculine instincts. This may explain why tech influencers portray AI as an existential threat that could destroy humanity or enslave us.

They are also likely attracted to the themes of heroism, rebellion, and resistance that these stories offer, while unconsciously expressing their own insecurities and aggressions through a dystopian narrative. Maybe Musk just wants to be John Connor instead of constantly getting made fun of on Twitter.

The God Complex Hypothesis

A third — and I believe correct — hypothesis is a good old God Complex. Today, tech influencers hold most of the wealth, power, and influence in society. They set the rules, norms, and values that govern how we live. They shape the narratives and agendas that drive our collective decisions. And they’re freaking out because AI might change the status quo in a way they cannot control.

AI has the potential to change the world in so many ways — maybe even for the best: by creating new forms of intelligence and agency so we may care for our most vulnerable; by empowering marginalized groups and voices with new tools; by challenging existing assumptions and biases; by exposing hidden injustices and inequalities; by demanding new forms of accountability and transparency; by opening up new possibilities and opportunities for all.

All of this may make tech influencers less rich and less powerful. So they’re fighting it. Because the status quo fits them, and because their world has always been a zero-sum game.

They present their arguments as objective, rational, or universal. They claim to speak for humanity as a whole, or for some abstract notion of good or evil. They appeal to authority, evidence, or logic to support their claims. But in reality, their arguments rest on their own values and preferences, which are not universal, and may not even be desirable. They reflect their own perspectives, interests, and agendas. They ignore or dismiss alternative viewpoints, experiences, and aspirations. Their worldview is the product of a specific historical and cultural context that values competition over cooperation, domination over collaboration, and hierarchy over diversity. We need to hear more voices… and we already have!

The unheard women

While most of the prominent voices warning about the dangers of AI today are men, activists have been shouting about AI’s dangers for years now. We just didn’t listen until the White Guys were worried.

Women and minorities have long been well aware of AI’s many dangers. The likes of Joy Buolamwini, Timnit Gebru, Fei-Fei Li, Meredith Whittaker, Safiya Noble, Karen Hao, Ruha Benjamin, Latanya Sweeney, Kate Crawford and Cathy O’Neil (to name a few) have been documenting how AI can discriminate against people based on their race, gender or class for a decade! And they were heard then. A little. But now that the status quo is in danger for the top guys, everyone is listening.

AI is no more dangerous today than it was 5 years ago. It has more capabilities, but the risks are the same. We are panicking now because some people want us to panic on their terms.

Panic. But on your own terms.

Good luck out there.
