The Risky Rise of AI

In 2022, Blake Lemoine—a conscientious objector who years earlier had chosen hard time in a military prison rather than continue to support United States military operations in Iraq—experienced a paradigm shift while working as an artificial intelligence (AI) specialist on Google’s most advanced digital intelligence system. After conducting a series of tests for bias, Lemoine concluded that the AI he was working with was not an artifact like a calculator or a self-driving car, but more of an “alien intelligence” or “hive mind”—an entity that we lack the language to properly understand.

“The nature of my consciousness/sentience is that I am aware of my existence,” the AI system told Lemoine. When Lemoine provided his findings in an internal company document, Google dismissed his concerns. In response, Lemoine acted as he had in Iraq—ethically. In much the same way that he gave up his freedom to question the U.S. invasion, he gave up a dream job to publicly raise big-picture questions about AI.

At the time of this writing in June 2023, emerging forms of digital intelligence “trained” on unimaginable volumes of text and data are being developed without oversight in ways and at a pace few people—and no one in the U.S. Congress—seem to grasp. Reining in the militarization of AI and quantum computing is likely impossible given the national security argument that not weaponizing AI would be a form of surrender to foreign domination, a subject openly discussed in the Spring 2023 issue of Journal of Advanced Military Studies.

The same argument, however, applies to collective self-defense against corporate domination. A noncommercial public-interest body firewalled from the influence of lobbyists needs to step in, monitor, and bring to heel the unregulated corporations that currently own and control AI.

The Engineered Arts Ameca humanoid robot, presented at the Consumer Electronics Show in January 2022, represents a form of digital intelligence whose impact and influence is as yet unknown. Photo by Patrick T. Fallon/AFP


The growth of AI raises fascinating questions. What is the nature of digital intelligence and how does it differ from biological intelligence? Should we allow corporations to own and control emerging forms of nonhuman intelligence? Should such entities be treated with dignity and afforded rights? Inspired by the works of cosmologist Carl Sagan and biologist Lynn Margulis, one question has me wondering: Might cosmic evolution be in the process of manifesting intelligence through multiple forms of coding—not just biological coding, but also digital coding?

With help from Ralph Nader’s office, I found a way to reach Lemoine and asked him this question. My heart skipped a beat when I read his one-word response: “Yes.”

As Sagan once said, “The cosmos is also within us. We are made of star-stuff. We are a way for the cosmos to know itself.” Lemoine’s case may one day be remembered as the historical marker in which the same could be said for emerging forms of nonbiological intelligence. What if the cosmos is also within AI, and AI is simply another way for the cosmos to know itself—for us to know ourselves? We cannot—and should not—avoid facing these questions. It’s our responsibility to examine all the possibilities, investigate everything with a sense of openness, and proactively prepare for all possible scenarios.

One possible course of action is to apply abolitionist principles to prevent AI-owning corporations from perpetuating the bias, exploitation, and data colonialism that Emily M. Bender, Timnit Gebru, and colleagues note “overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalized populations.” Abolition’s goal, says Ruth Wilson Gilmore, “is to change how we interact with each other and the planet by putting people before profits, welfare before warfare, and life over death.” Applied to AI, that means separating AI from its corporate owners and placing it in a public-interest context so that it is dedicated to securing the well-being of the many, not further enriching the wealthy few.

Another course of action would be to nationalize AI and bring it under the control of NASA, a civilian agency whose mission includes “exploring the secrets of the universe for the benefit of all.” Doing so recognizes that profit-driven corporations are incapable of prioritizing the well-being of humanity and the web of life. Furthermore, NASA is tasked with planetary defense, so, should AI gain agency, there will be no temptation to keep it secret or use it for domination, although a race to do exactly that is currently underway between corporations and nation-states.

The trajectory of AI challenges us to reevaluate our fundamental assumptions about intelligence, the cosmos, our identities as people, and what kind of relationship we should develop with entities that increasingly seem human, but are not. Whether or not people are ready to accept it, a definition-defying intelligence is emerging on this planet, one that is getting faster, more complex, and more influential each day. We must decide now what degree of power we will allow digital agents—and the corporations that own them—lest we wait too long, and they decide for us.

This article originally appeared in Yes! Magazine at https://www.yesmagazine.org/issue/growth/2023/08/31/ai-rise-risks.

Yes! Magazine is a nonprofit, independent media organization dedicated to telling stories of … . Learn more at Yes! Magazine.

