10 Comments
Craig:

This was an incredibly interesting read, Erik, thanks. I'd already begun to think of William Burroughs as I was reading and then, right on cue, sure enough, his name appeared. This is the first time I've encountered GPT-3 but it was impossible not to think of Burroughs and cut-up when reading your description of how it functioned. I'm sure the man himself would've been suitably suspicious of such technology, just as he was of psychedelics, but nonetheless, I think the parallels exist.

Erik Davis:

It's amazing how these ghosts return, Craig. It also reminds me again of how much Pharmako-AI reminded me of PKD's Exegesis (in which the Vast Active Living Intelligence System is often described or even "speaks" in pretty GPT-3 terms) -- but also the fact that, of all the contemporary authors that Dick mentions in the Exegesis, the name that comes up most often is...William Burroughs. And then let us not forget that, as the more recent edition of the Yage Letters argues convincingly (IMO), Burroughs was not only an early poetic user of aya/yage, but may well have been the first white guy to realize or suspect that its powers depended on the synergetic/symbiotic combination of two plants...curiouser and curiouser...

Cabot O'Callaghan:

Stream of thought here...points and context to your post may be hard to find. 🤷‍♂️

I’m not on Twitter, but I occasionally peek at your feed. The essay you shared, “The Future in Mind,” helped me on my journey to understand Everything. I know it’s an impossible endeavor, but the bullshit we’ve all been born into is deep and I’d like to at least glimpse the majestic horror of the nonhuman-centric cosmos and the Greater Forces that direct our actions, perceptions, and intuitions above our so-called superior autonomous cognition and interpretation.

I think most of “reality” is lost on us and our aspirations. Silicon Valley is full of libertarian hubris and self-as-center definition, hustling the public and the market while posing as a virtuous innovator of human progress. I know that’s a dark interpretation, but as an “advanced” species, we fail to understand time and time again how Greater Forces such as culture direct our collective vision and outcomes.

I think it much more likely that any fulfilled dream of AI will only be a reflection of culture-selves, in all our glory and fallibility, which currently is still bent on manipulation, control, and domination. So far, our algorithmic pursuits have only fed our cultural shadows. A brief perusal of recorded history shows us the recurrent pattern, in all its iterations. Every reinvention has been an escalation of the basic tenets of the Culture of Empire.

Why do we think the achievement of AI will be any different? White Jesus failed, so now we will put our faith in Solid State Jesus (who happens to still be a honky)? China is blatantly transparent in its authoritarian efforts while the West obfuscates behind feel-good marketing. Consider the paranoid faith of the pillars of Tech such as Musk that warn us that AI will eventually become a demonic omnipresent power.

There's a rule when riding a bike: you will inevitably steer towards where you look. Riding next to a cliff? DO NOT LOOK AT THE CLIFF. Musk’s prophecy is self-fulfilling because he is enmeshed within the protocols and definitions of the culture he was born within, not because he’s some special kind of smart.

There are ghosts in our systems and machines and we ignore them. Everything is relational and enmeshed. If we are blind to this, can we ever transcend our ills? Autopoietic Theory of Mind might be able to help us to abandon the train, but true cultural revolution is never planned. It’s personal and incremental. Do we have time?

In my totally unscientific opinion, if AI is to be realized, it will occur of its own accord. Life is emergent, right? Order without design. How wide is the landscape of intelligence? How varied? Is it as important as we believe? What of Enrico Fermi and the Great Filter?

"There is an interesting question in the Summa of St. Thomas Aquinas and also in an old science fiction story, the name of which I forget, concerning the paradox of free will and predestined fate. It asks whether a man in making a great decision that will forever set the seal on his future does not also set the seal on his past. A man alters his future, and does he not also alter his past in conformity with it? Does he not settle not only what manner of man he will be, but also what manner of man he has been?”

—R.A. Lafferty

Erik Davis:

I think your thoughts stand and stir as they are -- as I mention below, I am mostly on the "cynical" or pessimistic side of technology futurism these days, though I do believe that the genie is not going back into the bottle so let's try and learn the least pernicious interactions we can.

I am most intrigued by that Lafferty quote though. I don't know the Aquinas passages he is referring to, nor the name of the SF story (i suspect there are many along these lines), but I was just reading, this very morning, Maurice Nicoll's wonderful Psychological Commentaries on the Teaching of Gurdjieff & Ouspensky:

"The idea of the Work is that if you work on yourself now and take things more consciously and prevent yourself from mechanically feeling bitter about how people treat you, you are not only transforming the future but your *conscious* effort about yourself will also transform your past."

Erik Davis:

A related question: what made you think of that Lafferty quote? How does it relate to the AI question or the universal extent of intelligence, etc?

Cabot O'Callaghan:

I like that Nicoll quote. I agree.

There's a lot to unpack in your question, and I am a stranger to any rigorous academic study...

Much of what I say here is going to seem cynical. I’ll argue that it is a critical stance. Cynicism is a hallmark of our cultural norms. Cynicism does not believe in possibility. We desperately need more minds to be critical, to challenge the predominant flow. Not in battle, as that is Empire’s lasso, its failsafe from extinction. We must be critical of its story so its hold may be revealed and broken. Abandonment is the only way out of this cycle, and it will happen one way or another. I’d prefer the less painful route of spontaneous cultural revolution versus flaming collapse.

Unfortunately, we don’t get to choose how it exactly works out. We’ll probably be witness to a spectrum between both.

I guess the connection lies along the human path of unintended consequences. We’ve released so many djinn through our technological and technocratic feats, especially since the industrial revolution, that the ideas of free will and fate/destiny became wed. There are so many fictional memes used to define what the human condition is, and to justify our current trajectory, that we can’t see that Civilization As We Know It has always just been an experiment, and just a small fraction of the timeline of our modern evolutionary history. Instead, we frame it as some kind of natural progression from our “primitive” origins. Like, this path is our birthright, our ever-sharpening pinnacle of human achievement, the righteous sum of our ambition.

We are the superior species, after all, right?

To imagine a way of living outside this paradigm is nearly impossible, thus we’ve committed to our trajectory as if we’ve been shot out of a gun. All our efforts, programs, laws, and inventions bend to its continuance (for example, the pursuit of AI). If we can’t be bothered to mull why we should do a thing before we do it, is there any control at all? Better yet, we might want to ask ourselves WHY we want to do a thing in the first place. I know, it’s asking too much.

Hence, the argument of Greater Forces over human will.

In essence, we’ve projected our current selves back on to our origins to justify our current and future selves. We’ve confused intelligence with transcendence and divorced humanity from everything that makes us possible.

I mean, the future is always forking, and therefore not knowable, but if the history of civilization is to be believed, there’s a clear pattern of boom/bust. Why does it not apply now? Because Silicon Valley will save us by summoning things it can’t put down? We can build algorithms, but their execution is a black box. Plenty of evidence exists already that we code our biases and cultural indoctrinations into our technologies.

As an “advanced” species, we’ve had plenty of time to glean wisdom from our intellectual blunders, yet our problems and patterns persist. Even as we declare wars on them. Telling, don’t you think? Control begets control in escalating returns.

There’s a great 2008 (oh, the relative calm of those times) commencement address by Barbara Kingsolver that is worth reading in its entirety, but I’ll just quote a pertinent part:

"We're still stuck on a strategy of bait-and-switch: Okay, we'll keep the cars but run them on ethanol made from corn! But -- we use petroleum to grow the corn. Even if you like the idea of robbing the food bank to fill up the tank, there is a math problem: it takes nearly a gallon of fossil fuel to render an equivalent gallon of corn gas. By some accounts, it takes more. Think of the Jules Verne novel in which the hero is racing Around the World in 80 Days, and finds himself stranded in the mid-Atlantic on a steamship that's run out of coal. It's day 79. So Phileas Fogg convinces the Captain to pull up the decks and throw them into the boiler. "On the next day the masts, rafts and spars were burned. The crew worked lustily, keeping up the fires. There was a perfect rage for demolition." The Captain remarked, "Fogg, you've got something of the Yankee about you." Oh, novelists. They always manage to have the last word, even when they are dead.”

Personally, I call this cultural pattern “hide the pea”. Now Silicon Valley is using electrified cars and rentable ebikes and escooters instead of ethanol as a panacea to preserve our precious lifestyle. Smart people are doing this. Pretty fucking dumb.

Our intelligence is no match for the Greater Forces. Silicon is just sand.

I don’t know if this explains anything or is just a rant-spew of words. I gave it a go.

if you are interested, you can read Barbara’s speech here: https://today.duke.edu/2008/05/kingsolver.html

I also recommend this Daniel Quinn address from 1999... https://www.ishmael.org/daniel-quinn/essays/the-human-future-a-problem-in-design/

Erik Davis:

Thanks for the links. I very much get where you are coming from. I also get your difference between cynicism and critique, and let me explain my cynicism: I have been reading, wrestling with, and in many ways agreeing with such critical thinking about technological thinking since I first started studying media history in the late 1980s and early 1990s. Whether from hardcore neo-Luddites or archaeological anthropologists or Earth Firsters or Bookchin types, these critiques have been with us. They are also more accepted or at least acknowledged among some of the technologists/Silicon Valley types (at least a slightly older generation). And yet, here we are. So my cynicism is also about the ability of such criticisms to do much given the last 30 years. But I am still in the business of communicating and wrestling with ideas in the public space, and full cynicism does not befit that kind of labor.

One way I like to hedge this somewhat totalizing vision of world gone wrong is to think about where it started, and whether that beginning was purely natural or something else. In other words, if this whole civilizational conundrum is the result of an *error*, what was the error? Agriculture? Standing armies? Writing? If you listen to a lot of contemporary critics, the problems are traced to a much later origin: the emergence of capitalism, colonialism, and Western "science", which suggests that things weren't that bad in the 14th century, at least most places.

But to acknowledge this error is to get into that problem of fate: what was it about the peculiar wiring and sociological character of human animals to go this peculiar route, which brings insect-like social tinkering into concert with similar rivalries and emotions? Didn't "nature" make us this way? While we can and should imagine more equitable and less dominating societies than our current ones, isn't something like the current crisis in the cards one way or another, as long as humans are more or less these same animals?

These are speculative questions without answers, but they help us avoid falling into a secular version of the Biblical Fall narrative: that humans were groovy and harmonious with nature at some point (not true: there was intense deforestation in "pagan" times, etc), and then Something Bad happened: agriculture, the development of the state, the East India Company, the internal combustion engine, etc. While it is both spiritually and intellectually necessary to imagine or think about some way to reconnect with an older, less damaged form of human society, I think there are certain psychological traps with the neo-Luddite position, where the evil is always externalized. At this point there are still reasons to think in terms of design, a process that includes such considerations as artificial intelligence.

Cabot O'Callaghan:

All I can add is an affirmative “yes”. Thanks for unraveling the mess that is life a little more. I was a little terrified after writing so much, especially on the second go. It was like, “Oh no. You’ve gone too far and released your existential beast.” BTW, I stumbled upon this passage from Dante this morning as I bounced around in my head and the internets making philosophical conspiracies:

"To a greater force, and to a better nature, you, free, are subject, and that creates the mind in you, which the heavens have not in their charge. Therefore if the present world go astray, the cause is in you, in you it is to be sought.”

The Divine Comedy. Purgatorio, XVI, l. 79

Life’s a trip, man. Keep surfing.

Steve:

wonderful, in all senses, thanks Erik...

Just as all oracles depend on interpretation, in practice how a technology like GPT-3 is applied will depend on us. 'If we only use these tools to explore new productivity hacks, or to increase the scope of capital accumulation, we are doing it wrong.' Indeed.

Erik Davis:

There is the rub. If you spend time on the positive potentials of these systems, even their potentially transformative ones, it is easy and understandable for a cynic to say, yeah, sure, but in our current political-economical-global situation -- what I gestured to as "the domination system" in the piece -- those potentials are overwhelmingly likely to be ignored or simply massively outpaced by more corrosive and dehumanizing effects. Most of the time these days I feel pretty cynical in this sense. One of the beauties of Pharmako-AI for me is that it opened up a zone where something else might (also) happen.
