
Programmed Obsolescence


Killing Ourselves With Technology

The economic changes threatened by AI have become the stuff of conspiracy theory. This is hardly surprising.

"And we are passing from one horizon to another." Barriera di Milano, Torino.

Each new shift in the technological organisation of production over the last two hundred years has brought forth predictions of the apocalypse.

Is our current situation any different?

Not if one believes the artificial intelligence boosters, who claim it’s just another step in the long march of progress.

In Protagoras, Plato retold the myth of Prometheus and Epimetheus, in which Epimetheus distributes capacities to the various creatures created by the gods.

Humans ultimately get the capacity to make tools.

It’s a metaphor for technology as an intensification of our humanity, one that has shaped our understanding of it to this day.

However, artificial intelligence is a different kind of technology.

It does more than magnify human characteristics in the way that cars allow us to move faster, planes allow us to fly, and computers enable us to work with speed and precision.

In the earliest era of research into artificial intelligence, the goal was to model human thought.

In the 1970s, researchers at MIT tried to reproduce cognitive processes by supplying a computer with a million facts.

As the philosopher Hubert Dreyfus later noted, the problem was that the computer didn’t know that if Richard Nixon was in Washington DC, his left foot was there, too.

We’ve come a long way since then. The question of whether it is possible to model human thinking is a lot closer to being answered.

Computers are now better than human beings at chess and Go. They’re also superior at video games like Call of Duty, but the difference is crucial.

It’s one thing for a machine to exhibit faster reaction times than any human being. But chess and Go have been staples of creativity and strategic thinking for centuries.

It’s not just that computers have beaten humans at chess for decades.

When a computer was given only the rules of Go, it took just a few days of training to start playing at grandmaster level.

If you want to see the power of AI, watch chess videos in which analysts like Gotham Chess (Levy Rozman) attempt to understand Stockfish, the strongest chess engine.

Sometimes, they understand why the computer does what it does. However, the complexity of the ideas put forth by Stockfish often confounds chess players at the highest level.

This is not worrisome. There are some things computers can do better than humans by brute force calculation.

What is more concerning is that advances in artificial intelligence are taking over tasks that are enriching rather than mere drudgery.

The promise of robots and artificial intelligence (the two are fundamentally intertwined) was that they would free people from repetitive tasks, allowing them to pursue creative undertakings.

AI is attractive to the Elon Musks of the world since it’s more profitable than people. It will not call in sick, quit, or go on strike.

However, the rise of AI represents a large-scale fallacy of composition. Profits can’t be realised by employing it if everyone else is using it, too.

The reality of capitalist production is that if you’re using a Widgetmatic 5000 to make x product, it won’t be long before all your competitors are doing so as well.

This is what Marx referred to as the “law of the tendency of the rate of profit to fall”.

But the danger here is more profound. Proponents of artificial intelligence talk about its spread as if it will merely be a magnifier of intrinsic human abilities.

However, AI can replace humans in the equation.

How capitalism will work if that happens is an open question: if we no longer pay people to produce, they will not have the resources to consume.

There is a larger problem here. Human beings are in the process of losing any sense of what it means to be human.

Modern philosophy has contributed to this, although it is not responsible in any meaningful sense.

Conservatives are fond of criticising postmodernism for destabilising the grand narratives that once purported to give meaning to human life – religion, family and heterosexuality, for example.

Even the concept of the human has proved susceptible to destabilisation in this way.

But we should be careful when conservatives call upon us to conflate questions about the meaning of philosophical concepts with the destruction of community and culture.

There is no agreement about what it means to be human. This is nothing new. The concept has been a matter of unresolved debate for two centuries.

There was never a golden age in which this wasn’t the case. In most respects, the question is academic.

Although philosophers since Kierkegaard have pointed to our capacity to think about what we are as definitively human, most people live their lives without doing so, and without consequence.

Artificial intelligence, however, presents us with a different challenge, making the question of who we are and what we are meant to be doing more urgent.

If AI can do things that define humanness better than humans, the question then becomes, “What is left for us?”

It has never been the case that humanity, such as it is, had an agreed-upon collective goal, and it will certainly never be so in the future.

Having said that, artificial intelligence now poses the question of what humans are, since it can do much of what we do better than we can.

The promise of capitalism in the developed world is that the highest forms of human endeavour can synergise with the profit motive.

The challenge posed by AI is not only what it can accomplish and what might be left to us in its wake but why we should want it at all. What sort of utopia are we aiming at?

The openness offered by poststructuralism is that, with the grand narratives put to bed, human beings would be free to live whatever kind of lives might appeal to them.

Artificial intelligence forecloses this openness with the promise not only of doing most things better than we do but of no longer needing us to teach it new ones.

One of the dangers often discussed by technology theorists is that we don’t know what an AI that achieved sentience might actually want.

This is a sobering prospect, given the danger that a sufficiently powerful and malevolent entity might decide it has better uses for our atoms than we do.

But artificial intelligence’s real challenge is less immediately dystopian. What if, instead of destroying, AI crowds us out from everything that gives our lives meaning?

The prospect then might not just be economic collapse but a hollowing out of humanness itself.

Once that happens, we become obsolete.


Photograph courtesy of Joel Schalit. All rights reserved.