
The Inhuman Condition


Surviving Artificial Intelligence

You’ve seen it before. It’s a title that screams marketing department, with all the keywords required to stand out at an airport bookstore:


Reclaiming the Revolution: Extraordinary Adventures in Politics and Leadership at the Inflection Point of Industry 4.0.

True to form, Stephen Barber’s book is a solid, if somewhat unremarkable, exemplar of that species of text, so common today, about what’s driving history at the present juncture.

In this instance, it’s the changes to be wrought by Artificial Intelligence (AI). The book’s opening is particularly striking.

Against the background of ageing European and North American populations lies a crisis of elder care. AI, so Barber argues, could help address this problem.

With AI’s ever-expanding capacity to mimic patterns of human interaction, it is becoming possible to provide social stimulation and other palliative care to older people starved for attention.

Technology can’t entirely do this as of yet, but it’s coming. And when it does, it will free up doctors and nurses to provide far better care for the elderly than they currently do.

Barber certainly has a point. The challenges posed by ageing and the cost of providing care are growing. But it’s hard to escape the conclusion that the projected revolution in labour has problems.

Essentially, the element of caring involving language and consciousness is turned over to technology. What is left over is work that is dirty, boring, and not particularly well-remunerated.

This is the thin end of the wedge of one threat (though by no means the only one) posed by the development of AI.

For much of the last decade, the most immediate perceived threat posed by artificial intelligence was that it would combine with developments in robotics to destroy certain kinds of manual labour.

To this has recently been added the more profoundly disturbing prospect of AI taking over the intellectual and creative work that was once viewed as uniquely the preserve of human beings.

Perhaps there’s an argument to be made that the so-called “great replacement theory” so beloved by right-wing bloviators like Tucker Carlson is right for the wrong reasons.

White people are being replaced, but only in the sense that all human beings are. And the replacement is being undertaken not by a Jewish cabal but by technology.

From time to time, one hears thinking people waxing critical about the quixotic nature of AI research.

Why would people waste their time trying to recreate the features of human intelligence? Would it not be better to revel in humanity’s manifold complexity? Probably.

A passage or two somewhere in Max Horkheimer and Theodor Adorno’s Dialectic of Enlightenment explains why that sort of restraint never seems to pass muster.

Capitalism, and the various projections of what capitalism might be mutating into, share the quality of being driven by forms of mute compulsion baked into the system.

Given the degree to which the imperatives of the market have been woven into the fabric of the world, the fact that AI research has taken on the form of a self-reproducing dynamic should surprise no one.

In the last half-century, human beings have become much better at defining what they are not than what they are. The rise of fascism was a genocidal counterpoint to a crisis of the human.

To the extent that a residuum of fascism persists in the modern world, it constitutes an identitarian feedback loop, frantically trying to replace the null space that the human has become with ever more stridently asserted positive identities based on toxic, ethnically inflected accounts of culture.

This is the point at which critics of all things postmodern tend to argue that French post-structural thought is to blame and that Foucault is, in fact, the antichrist.

But this is just shooting the messenger.

Post-structuralism didn’t create the crisis any more than it created parallel problems in knowledge and ethics that have shaped the intellectual world since the late 20th century.

Human beings have arrived in the third decade of the 21st century facing three existential threats: climate change, resource exhaustion, and artificial intelligence. Of these, the third is the most ominous.

The threat to the biosphere posed by climate change and resource exhaustion is profound, but solutions exist. Alternative energy and carbon reduction and mitigation methods are available and need to be scaled up to ameliorate much (though not all) of the threat.

Likewise, renewable resources and more efficient production could do much to address the strain placed on the biosphere’s nonrenewable resources.

These fixes will be challenging and will not completely solve our problems. But the threat posed by AI is of a different order.

Science has mostly divested itself of restraints based on collectively held social values. A culture of global capitalism in which the right amount of money can achieve practically any outcome synergises in malign ways with a geopolitical system constructed as a zero-sum game.

The system of geopolitical conflict between blocs centred on the United States and China has created an ecology of fear in which each must develop capacities to counteract the other.

The leadership of a state may recognise the dangers posed by forms of artificial intelligence so complex and superhuman as to defy effective control. But if one state or bloc undertakes to arm itself with artificial superintelligence, then, by the suicidal logic of the system, everyone must.

This threat calls for resistance. But resistance must be based on solidarity. There is no golden age from which we have fallen away, some Garden of Eden in which humanity once shared a common identity and a common purpose but from which we have now been expelled.

If nothing is essentially human, we need to find a post-essentialist humanity. This approach is akin to the one the philosopher Otto Neurath described for knowledge: we are like sailors who must rebuild their ship while still at sea.

This must be done because the effects of AI are spreading rapidly due to a fallacy of composition inscribed in the heart of capitalism.

Artificial intelligence, like automation, is a viable strategy only as long as it is selectively applied.

There is an inbuilt compulsion for those in any particular line of production, be it cars, hamburgers, movies, or newspaper columns, to employ means of production that don’t demand pay raises, get sick, or fail to show up for work for one reason or another.

But, pace Jean-Baptiste Say, supply does not create demand, especially when goods are produced without circulating money into the hands of consumers.

Sure, you can increase the production of pizzas by 1000%. But the system is only workable if demand for pizzas rises concomitantly.

Herein lies a further danger: there is no guarantee that the system will settle on a more rational course just because it fails to work on the premises on which it has been based.

The dystopian possibilities are endless. Given the current mindset of the hyper-wealthy, who seem intent on decamping to the open ocean or perhaps to Mars, those possibilities should be borne in mind when thinking about how things might shake out over the next couple of decades.

Human beings have never really known what we are, although there have been many attempts at definition.

At the very least, we now share the status of beings threatened by a power, possibly malign and in important respects unknowable, whose destructive capacity far exceeds our own.

Perhaps we can be brought together by a common threat. That may be the only way to avoid the end we can otherwise expect.

Photograph courtesy of Joel Schalit. All rights reserved.