THE RECURSIVE PROBLEM WITH ARTIFICIAL INTELLIGENCE

Does the robot feel sentimental when it receives a flower, or does it simply complete the task according to its programming?

Welcome, human readers!

As much as the AI conversation has focused on complex matters relating to authorship, ownership, cybernetic hallucinations, deep fakes, and whether an idea can be considered relevant without direct human involvement, I find myself distracted. These practical AI questions may be vital, but they only circumscribe the deeper, more meaningful cultural questions. Consider this: in an AI-influenced world, what are the implications for how individuals and cultures interact with creative works of any kind? When non-human actors (artificial intelligence, in other words) can materially shape both the substance and style of what we consume, it's critical to ask whether the ghost in the machine is now out of the machine, walking around.

There's no debate that AI will have profound benefits for society. I absolutely believe that, once perfected, smart cars will do a much better job on the roads than sleep-deprived, preoccupied drivers. I much prefer the prospect of always-astute AI radiologists to potentially distracted humans who might miss something subtle on a scan. But data analysis is one thing; soul is another.

What worries me is how AI will shape the recursive process of creative development that has been humanity's foundational engine for millennia. Until this moment in human history, creative development of all types has always been fundamentally recursive: tomorrow's creative idea emerges from the raw materials mined yesterday. Breaking that truism carries profoundly worrisome implications. By digesting massive libraries for raw informational fuel, AI may superficially ground its creative efforts in older works. But separated from the cultural and personal fermentation that comes from emotional and intellectual encounters with new creative works, I'm not sure the process is equivalent. Proponents will say that AIs trained on existing libraries will simply benefit from creative legacies. My concern is that after a while this process begins to eat itself, as future AI output starts to derive from libraries built by AI. Think of it this way: a photocopy of a photocopy looks pretty good in the modern world. But run that process through the machine a hundred times and you'll begin to spot artifacts and imperfections that betray its distance from the original.

There's been plenty written about AI recently, including an ironic percentage written without much human influence beyond the prompt itself. It's a simple truth: our new electronic tools have begun to change vast swaths of how we live our lives in the modern world.

A scientist always believes he or she can control the creature created in the lab. I marvel at the billions of venture capital dollars rushing into the hands of sleepless engineers racing to create technologies that will arguably do away with a big portion of their own jobs. I'm not opposed to modernity or the soul of a new machine. I fully embrace the inevitable process by which innovation forces new ways of working that are destined to replace older modes. My specific concern is that our embrace of AI's siren promise of innovation seems indifferent to its implications, even as the technology acts as a transformational agent.

One way to address some of these concerns is to sidestep the obvious questions. Rather than wonder whether artificially generated content has inherent value (a question with the feel of a Buddhist koan), it's valuable to reconsider matters of cultural determinism. Here's an example:

Not long from now, AI services will be built into various audio-visual playback devices: televisions, tablets, electronic billboards, and the like. This means that matters of content selection will be outsourced to machines. In service of black-box algorithms, likely refined recursively by non-human actors, what happens when images or words deemed objectionable are modified on the fly? While streaming a television program or movie, you would never know that the content had been altered from its original form. Curse words might be less salty. Political rhetoric could be smoothed out or modified to appease censors of various stripes. Instead of seeing a nude body, actors might appear draped in some sort of computer-generated clothing or scrim or curtain or shadow. Or, perhaps, a subjectively objectionable scene could simply be excised altogether, edited out in real time! The challenges of artificial intelligence interfering with creative intentions are profound not only in their Orwellian implications but in what they mean for the future growth of culture. The more we allow an outsourcing of our own censorship, effectively shielding our eyes before we even have a chance to discover or explore our own feelings about an idea, the more we outsource our values and morals. Taken only a few steps further, when our creative inputs are constrained by systems that make choices in loco parentis, we replace recursive creativity with recursive repression.

This is the equivalent of a cultural bank run. The more a culture abdicates its own inventive abilities or aesthetic choices, the more it tamps down sparks of invention or aesthetic exploration. It's not hard to imagine a moment when people who consciously choose to eschew electronic means of accomplishing new creative works will be regarded as second-class creatives. At the extremes, they could be regarded as heretics, as apostates of modernity. The profound irony in this example is that those cast to the edges would be the very people trying hard to hold the center, the best parts of humanity.

Fortunately we're not there yet, but this stuff is coming fast. Millions of users are adopting these technologies in search of quick payoffs without considering the long-term implications. I'm not saying all AI is bad, just as I'm not saying all cars are bad. But I am suggesting that technical potential pursued without a measure of reasonable forethought seems like a recipe for a crash.

@michaelstarobin

facebook.com/1auglobalmedia