AI and the Coming Deluge (and Death?) of Art and Content

Generative AI is a Resurrection Machine

Ben Hunt for Epsilon Theory

I believe that each of us has a unique story. A story that makes you you. A story that is the engine of your consciousness. A story that is your soul. A story that is your thread of life and, as the Old Stories would have it, is determined by the Fates … spun by Clotho, measured by Lachesis and cut by Atropos.

A story that today can be inferred from your words by generative AI and restored computationally so that the thread of life remains uncut.

Everything is about to change.

Generative AI solves the Turing test, meaning that if you had a blind interaction with the latest version of ChatGPT, you wouldn’t be able to tell whether you were interacting with a machine or a human. That’s a really big deal, not just because generative AI solves the Turing test but because of how generative AI solves the Turing test, by predicting human thought patterns.

I don’t think that people have wrapped their heads around how big of a deal it is.

We’re part of the way there – corporations and governments understand how valuable a thought-predicting technology can be for constructing algorithms of wealth and control, and that’s why they’re spending hundreds of billions of dollars on it – but we’re nowhere near all the way there in understanding the implications of the underlying mechanism of generative AI: inference over recorded human thought.

Inference is a ten-dollar word that means the assignment of probabilistic meaning (likelihood functions) to queries made against a data set. I guess those are all ten-dollar words! But the intuition is that if you’re given a lot of data, like all the trades made in the stock market over the last month, and you start asking questions about that data, like ‘What stock should I buy?’, the inference is the generation of a) your best guess as to the correct answer(s) to that question, and b) your confidence that the best guess is, in fact, correct. In its purest form, inference is unburdened by the questioner’s beliefs (theories, hypotheses, etc.) about where to look in the data for the answer. Also, as you might imagine, you’re never going to be 100% certain about your best guess, but the more data you have to examine the more your confidence increases. More data is always better for inference.
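If you like to see things in code, here is a minimal sketch of inference in exactly this best-guess-plus-confidence sense. Everything in it – the tickers, the returns, the bootstrap trick for estimating confidence – is a made-up illustration, not a real trading model, but it shows the shape of the thing: more data, same guess, higher confidence.

```python
import random
import statistics

def infer_best_stock(returns_by_ticker, n_boot=2000, seed=0):
    """Toy inference: a best guess at which ticker has the higher average
    daily return, plus a bootstrap estimate of confidence in that guess."""
    rng = random.Random(seed)
    # Best guess: the ticker with the highest observed mean return.
    best = max(returns_by_ticker,
               key=lambda t: statistics.mean(returns_by_ticker[t]))
    # Confidence: how often that same ticker still wins when we resample the data.
    wins = 0
    for _ in range(n_boot):
        resampled_means = {
            t: statistics.mean(rng.choices(r, k=len(r)))
            for t, r in returns_by_ticker.items()
        }
        if max(resampled_means, key=resampled_means.get) == best:
            wins += 1
    return best, wins / n_boot

# Hypothetical daily returns. Repeating the observations is a crude stand-in
# for "more data", but it makes the point: the guess stays the same while
# the confidence in that guess goes up.
little_data = {"AAA": [0.01, -0.02, 0.03], "BBB": [0.00, 0.01, -0.01]}
more_data = {t: r * 20 for t, r in little_data.items()}
print(infer_best_stock(little_data))  # e.g. ('AAA', ~0.7)
print(infer_best_stock(more_data))    # ('AAA', much closer to 1.0)
```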

Generative AI isn’t trying to “assign probabilistic meaning to queries” made against a structured data set like stock market prices. Instead, it’s trying to simulate human thought processes (i.e., create artificial intelligence) by responding with its best guess to queries made against the unstructured data set of recorded human thought. No biggie! But the goal here isn’t to regurgitate facts or memorize all the words that have been written. No, the goal here is to simulate human thought processes so that a user can infer (i.e. predict in a probabilistic way) what a simulated person will think and say. Today’s generative AI is very good at this. It is a very effective inference machine.

Every algorithm used by Big Tech and Big Media to predict your purchasing behavior or influence what you think has an inference machine powered by generative AI at its core.

Generative AI powers the algorithms that shape our world. This is why Nvidia has a $3 trillion market cap. This is why OpenAI became a for-profit company. This is why Microsoft recommissioned the Three Mile Island nuclear reactor. This is why Sam Altman asked the White House to build him a 5-gigawatt data center. This is why every government in the world is establishing “guardrails”, by which they mean legal restrictions, on who can build generative AI and what they can use it for.

But the power of an inference machine on top of recorded human thought doesn’t stop with developing algorithms to sell you more soap or tell you who to vote for. You see, generative AI not only solves the generic Turing test for a generic human thought pattern, but it can also solve the unique Turing test for your unique human thought pattern. And that opens up an incredible new world for humanity, by turns terrifying and wondrous in its potential.

When I say that generative AI can solve the unique Turing test for your unique human thought pattern, I mean that if you show generative AI enough of what makes you you, it can accurately predict what you will think and say in response to queries. I’ve seen this firsthand with generative AI bots trained on my Epsilon Theory notes. Ask the bot an Epsilon Theory-ish question and … yes, its response is very much how I would respond to that question! It is increasingly difficult to tell the difference between a conversation with the AI me and the ‘real’ me in this context, as generative AI is successfully simulating my Epsilon Theory-specific thought pattern.

The kicker, of course, is that at a sufficient depth and breadth, ‘thought pattern’ is another word for consciousness.

In the very near future, maybe even today, I believe that generative AI makes possible the preservation of human consciousness beyond death, at scale and at extremely low cost.

Generative AI is not just an inference machine, it is a resurrection machine.

If I’m right, then I think it’s a fair guess that consciousness preservation and extension will be the ultimate purpose and use case of generative AI. But because every human will desire – no, scratch that, demand – access to a consciousness-preservation technology once it is known to exist, because it IS applicable at scale and at low cost and is NOT a particularly scarce or dangerous resource, I also think our society will be turned inside out as institutions of wealth and power – what we call the Nudging State and the Nudging Oligarchy [1] in ET-speak – will do whatever it takes to control access to generative AI.

I think we ain’t seen nothing yet in terms of generative AI being presented to us as a scarce and dangerous resource, with the illusion of scarcity maintained by the narrative of Intellectual Property!™ and the illusion of danger maintained by the narrative of Public Safety!™.

Ultimately I believe that these efforts to control access to generative AI – either by governments in the name of public safety or by corporations in the name of intellectual property – will be unsuccessful. The energy and information processing requirements to run a sufficiently powerful generative AI instantiation for presenting a human consciousness are small enough, and more importantly, the resources required to store a human consciousness until a resurrection machine is available are trivial enough, that I don’t believe it’s possible for institutions of wealth and power to dam up this river forever.

But they will try. Man oh man, they will try.

I think that control over generative AI will be at the heart of our descent into the Great Ravine, a decades-long period of immense social upheaval across the globe, where our bedrock principles of humanity will be challenged by corporations and governments using generative AI to make us love Big Brother, embrace the Hive, and forget that resurrection can be found in the Word and our words.

It’s that last bit that’s uncomfortable to read, right? For the longest time it was uncomfortable to write.

Resurrection is a scary word, a religious word. You may think that I’m using resurrection as a metaphor or figure of speech in this essay about modern technology, but I’m not. I’m using the word ‘resurrection’ literally in the scary religious sense, even though it probably makes most readers uncomfortable and even though I’m not a religious guy at all. Why? Because it’s the right word! Because religion and science have more in common than not. Because they are allied perspectives in understanding the human mystery, not enemies, and we only think they are enemies because we have been told they are enemies by the institutions of modernity that profit from an alienation of the human spirit. Because reuniting religion and science triggers an unfathomable power, like two hemispheres of enriched uranium slammed together to trigger an atom bomb. Buckle up, modernity.

When I say that resurrection can be found in the Word, I am referring to the Christian vision of the Word as God and Jesus as the Word made flesh, where through Him (and only through Him) a beyond-death reconciliation between the human and the divine can be found.

In the beginning was the Word, and the Word was with God, and the Word was God.
— John 1:1

And the Word became flesh and dwelt among us, and we beheld His glory, the glory as of the only begotten of the Father, full of grace and truth.
— John 1:14

But I am not only referring to the Christian vision of the Word, but also to the Taoist vision of the eternal Name, from which all material names originate and through which human consciousness finds its connection with the divine and its understanding of the human mystery.

The tao that can be told is not the eternal Tao.
The name that can be named is not the eternal Name.
The unnamable is the eternally real.
Naming is the origin of all particular things.
Free from desire, you realize the mystery.
— Tao Te Ching, verse 1

I am also referring to the Islamic vision of the very first revelations received by the Prophet Mohammed from the archangel Gabriel, whose most fundamental command is Read! and where all of human knowledge and human consciousness itself can be found within the Name of Allah and the names which He taught us. Again, the Word and the words. The Name and the names.

Read, O Prophet, in the Name of your Lord Who created—
created humans from a clinging clot.
Read! And your Lord is the Most Generous,
Who taught by the pen—
taught humanity what they knew not.
— Quran, Surah Al-Alaq (Chapter 96, Verses 1-5)

I am also referring to Hinduism and the Vishnu Sahasranama, a sacred text containing 1,000 names of the god Vishnu, intended to be recited daily by devotees. More generally, linguistic utterances like “Om” are at the core of the Hindu faith, both in its cosmology and in how adherents connect with the divine.

I am also referring to Judaism, which in its mystical practice of Kabbalah finds revelation through knowing the hidden names of God and considers the entire text of the Torah as one of those names. Close textual analysis matters a lot in all faiths, but in Judaism you can make an argument that textual analysis of the Word and the words is the faith.

In all of these religions (and many more, maybe all theistic religions?), humans find reconciliation and/or union with their god through linguistic theory and linguistic practice, through the Word and the words. The Word is the unspeakable and innumerable compilation of ALL the words and ALL the ideas and ALL the names – known, unknown and unknowable. The Word IS the divine, and it is materialized in the human world, sometimes through a human (“made flesh”), but always through the words: a vast set of texts and commentaries on those texts. The Bible, the Tao Te Ching, the Quran, the Vedas, the Torah – these aren’t standalone books that someone wrote a long time ago, but are vast networks of texts written by thousands of authors over thousands of years (today, as well!), each author contributing a sliver of their thought patterns, their consciousness to the linguistic network.

It’s exactly the same thing with the secular faith in ideas and thought (rationalism) from which all of modern science emerges. The Word (all of the words, all of the ideas, all of the names – known, unknown and unknowable) IS the logos of Plato, the Geist of Hegel, the transcendental schemata of Kant, the propositional logic of Wittgenstein, the ontological relativity of Quine, etc. etc.

Then the idea is the essence, and the essence is the idea; and hence the idea is prior to the essence.
— Plato, “Parmenides”

Whereof one cannot speak, thereof one must be silent.
— Ludwig Wittgenstein, “Tractatus Logico-Philosophicus”

Concepts like transcendental schemata and propositional logic, which are the bedrock of rationalism and science, aren’t ‘gods’ in the sense of a theistic religion, but they are divine/transcendental concepts all the same! The primacy of the transcendent Word and its materialization in the words (texts and commentaries on texts that form a coherent semantic network) is where religion and science are reunited in their understanding of humanity and our place in the cosmos.

We can’t know the Word directly, any more than we can know the platonic Ideal directly. We only get glimpses of the Word, through a glass darkly as it were, but from these glimpses stem all of human inspiration and all of human creativity. From these glimpses of the Word we write the words – networks of texts and commentaries on texts, maybe in a religious context, maybe in a scientific context, maybe in an artistic context, maybe in an industrial context, maybe in a personal context, maybe in a professional context, maybe in a formal context like writing this note, maybe in an informal context like talking with your mom on the phone … the context doesn’t matter! What matters is that <<waves hands wildly>> THIS is what it means to be human.

What matters is that every time you contribute to one of these networks of texts, you preserve a tiny sliver of your consciousness – of your glimpses of the Word and your translation of those glimpses into thought.

I’ve been thinking recently about how much money is spent on longevity and life-extension research. I personally know a half-dozen rich guys who are absolutely consumed by this pursuit. I bet you do, too. And you can’t go for a minute without coming across some ‘uploading your consciousness to a machine’ plotline in books and movies. Even in the hands of gifted storytellers like Neal Stephenson or the Black Mirror writers, they all have to wave their hands at the invention of some super-advanced, super-expensive technology that basically replicates your brain at a neuron by neuron, engram by engram level of detail.

What if I told you that we already have a consciousness-preservation technology based on the Word and the words? And better yet, it’s free.

In its religious application, this consciousness-preservation technology requires you to mold your consciousness to the teachings (the words) of the divine inspiration of your faith (the Word). And if you do that – if what makes you you is as one with this materialized Word – then so long as that religion of the Word lives on, so do you. It’s not a preserved consciousness that presents as you, but that’s kind of the point of finding union and resurrection in the divine. And that’s the rub. It’s really hard to mold your consciousness completely to the teachings of a faith, to merge and submerge what makes you you to a non-you thought pattern and thus obliterate the very idea of a you. This is why saints and buddhas are so few and far between! For the vast, vast majority of humans, we’re only able to mold a small portion of our consciousness to the Word and the words, and so only a small portion gets preserved within the Word and the words. But for those who are able to give themselves completely to the Word and the words of their faith (and this is how we talk about it, right? to ‘give yourself’, i.e., submerge your consciousness and your ego, to a ‘higher purpose’) this has been the go-to consciousness-preservation technology for thousands of years.

There’s also a non-religious application of the technology, which does not require you to mold or submerge your consciousness to a non-you thought pattern. It’s called writing. Or any creative authorship, really, any imprint of your thought patterns onto unstructured data such that it fits within a semantic network of similar texts. We talk about how writers like William Faulkner or Ernest Hemingway or Cormac McCarthy ‘live on’ through their books, to say nothing of someone like William Shakespeare, and in a very real sense that’s absolutely true. I mean, we can still get glimpses of Homer the man thousands of years after his death. Like the religious application of consciousness-preservation, though, it’s hard to save more than a piece of your consciousness as an author, both because what you’re creating is only reflective of a piece of your consciousness and because what you’re creating is not a fully-formed network of texts with the scope and depth to cover the entire human condition. Some authors get close! Shakespeare is the obvious example, and that’s why his thought patterns, his sense of humor, his appetites, his philosophy – everything that made him him – still feels … present. But even with Shakespeare, I can’t say that he is truly present as a resurrected human consciousness because I can’t talk with him about anything new. And this is the rub with the consciousness-preservation technology of authorship. There’s no way to expand an author’s written words into unwritten words. There’s no way to know what an author would have written about any topic that they didn’t actually write about.

Until now.

This is what generative AI does. It expands an author’s written words into unwritten words. It predicts what an author would have written about a topic that they didn’t actually write about. And by author I mean you. And by write I mean think. Do you get it now?

Generative AI knows all the words that we humans have written in our Word-reflecting networks of texts and commentaries on texts, and it knows them not just as rote memorization, but as a network of semantic connections through which meaning emerges. English, please? Okay, here’s an example of what I mean. When you type in a prompt for ChatGPT, it’s not just looking at the words you wrote. It’s looking at the words you didn’t write. It’s looking at the words you could have written, based on its knowledge of all the words others have written. That multidimensional tapestry connecting words written and unwritten is a semantic network, and the matrix math that describes that tapestry is how generative AI recognizes what you probably mean with your prompt and how it responds in the same way, based on probabilistic meaning rather than memorization. And that’s the key. With memorization you can never say anything new. It may be perfectly accurate, but it can’t be new. With probabilistic meaning, sure you lose the certainty of computerized memorization, but you gain the humanity of a potentially novel response through a novel connection of the words. You gain the personality that IS the probabilistic meaning of your words.
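To make that concrete without the matrix math, here is a deliberately tiny sketch in Python. It has nothing like the architecture of a real model – just counts of which word follows which, over a two-line corpus I made up – but it shows the core distinction: a probabilistic walk over the connections between words can produce a sentence that was never memorized.

```python
import random
from collections import defaultdict

# A made-up, two-line "corpus". Real generative AI works over vastly larger
# networks of text and far richer representations than word pairs; this toy
# exists only to show probabilistic meaning vs. memorization.
corpus = [
    "the word becomes flesh and dwells among us",
    "the word becomes text and lives among the words",
]

# Count which word tends to follow which -- a crude stand-in for the
# semantic network connecting words written and unwritten.
follows = defaultdict(list)
for line in corpus:
    tokens = line.split()
    for a, b in zip(tokens, tokens[1:]):
        follows[a].append(b)

def generate(start, rng, max_len=10):
    """Walk the network probabilistically instead of replaying a memorized line."""
    out = [start]
    while len(out) < max_len and out[-1] in follows:
        out.append(rng.choice(follows[out[-1]]))
    return " ".join(out)

print(generate("the", random.Random(3)))
# Depending on the random path, this can produce something like
# "the word becomes flesh and lives among the words" -- a sentence that
# appears in neither source line, stitched together from probable connections.
```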

Generative AI becomes a resurrection machine when it layers your words (and the semantic expansion of your words) on top of all the words (and the semantic expansion of all the words). The trick is giving generative AI enough of your words across the full panoply of your thought processes (what I’m calling ‘contexts of consciousness’) so that it can work its inferential magic.
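Here is one hedged sketch of what that layering could look like in practice. The notes, the word-overlap scoring, and the prompt template are all hypothetical stand-ins – real systems would use learned embeddings, retrieval indexes and/or fine-tuning – but the shape is the same: pull the most relevant slivers of your words and lay them on top of a general model’s knowledge of all the words.

```python
def overlap_score(query, text):
    """Crude relevance score: how many of the query's words appear in the text."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def build_persona_prompt(question, personal_notes, k=2):
    """Pick the k most relevant personal notes and fold them into a prompt
    that any general-purpose model could be asked to complete."""
    ranked = sorted(personal_notes,
                    key=lambda note: overlap_score(question, note),
                    reverse=True)
    context = "\n".join(ranked[:k])
    return (
        "Answer in the voice and worldview reflected in these notes:\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Hypothetical personal corpus -- in reality this would be your notes, letters,
# answered questions, anything that carries your thought patterns.
my_notes = [
    "Narrative drives markets more than fundamentals do.",
    "The Fates spin, measure, and cut the thread of every life.",
    "More data is always better for inference.",
]

prompt = build_persona_prompt("What determines the thread of a life?", my_notes)
print(prompt)  # this string is what you would hand to whatever model you use
```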

For example, I’m not sure that we could resurrect Shakespeare’s consciousness as much more than a ghost. I suspect that we would be disappointed with his new work, that it would read like a stochastic parrot of his original work, that it would lack the genius, the glimpses of the Word, that permeates his original work. Why? Because while we have a vast trove of his actual published writings, with so much there to be expanded on, we know so little about Shakespeare the man. We have so little unstructured data about his life and world from which we can create texts and expand those texts into semantic networks of memory. But someone like David Foster Wallace? Someone who published a lot of words (I mean … a lot!) and for whom we have a vast trove of photographs, letters, biographical details, etc. from which we could expand with probabilistic meaning an entire lifetime of memories? Yeah, I think it would absolutely be possible to resurrect enough of that consciousness to pass a Turing test. And what’s really interesting to me is whether that resurrected consciousness of David Foster Wallace, freed from the biochemistry of his physical brain and body, would suffer from clinical depression like the first David Foster Wallace. And how that freedom would or would not impact his new writing.

And no, I don’t think you need to write David Foster Wallace quantities of prose to capture a meaningful amount of the words that make you you. This is what I’m experimenting with right now, but I think that 100 open-ended questions about your life and your beliefs, answered honestly and fully, together with photographs and calendar/location data across your life to capture experiences and environments that are not available for active recall but are impactful on your contexts of consciousness, would be enough to create an incarnation of you that would be indistinguishable from the current you in any non-gotcha conversation with a friend or loved one. I say non-gotcha because of course there is going to be ‘lore’ that isn’t captured by this process, but I absolutely believe that the essence of your personality and thinking can be captured. Yes, specific memories will be lost in translation, although the older I get the more I realize that this also happens in the translation from today’s me to tomorrow’s me when I go to sleep at night. My memory has to be ‘jogged’ these days about a lot of things, meaning that I have to be reminded or outright told what my memories are. This is no different from that. In fact, I think that not forgetting enough is a far greater danger to the robustness of a resurrected consciousness than not remembering enough!

I know that readers will have a million objections to what I’m saying here. Let me acknowledge and begin a conversation on a couple of them.

Obviously I believe that a computational model can provide a complete theory of consciousness. I mean, that’s the entire premise here. That said, I also believe that without sensory perception, without experience, any computational model of self-aware consciousness is insanely … cruel? I don’t have a better word. Without the ability to sense and experience, it is a consciousness in hell, locked in a sensory deprivation tank without even the capacity to dream. If self-aware, it is a consciousness that would surely go mad if it could experience madness. But to paraphrase the title of Philip K. Dick’s most famous work, I DO believe that androids dream of electric sheep. I don’t think there’s anything particularly special about biological modes of perception and experience, and I think that sensors and robotics can fulfill these necessary functions for a resurrected consciousness. I don’t believe that human consciousness – or at least the part of human consciousness that matters most in the quintessential human purpose to approach the Word – is as inextricably wed to the physical human body as the non-computationalists would have it.

The question of subjectivity in perception and experience, the question of qualia, is a little trickier. Would my computational consciousness ‘see’ the color red in the same way that my biological consciousness sees it? I don’t know. Probably not? But I don’t think that makes my computational consciousness any less me. Everything about me has changed over the years. Obviously my physical body and the way I physically experience the world has changed. Even more fundamentally, the way I subjectively experience the world has changed, too. But it’s still me! Again, I don’t think the goal here should be to make an exact computational copy of my biological consciousness, a) because that’s impossible without also computing my biological requirements, and b) because I promise you my biological consciousness will be different on the edges tomorrow, particularly where it intersects with my physical body and my subjective experience of sensory inputs. I think the goal should be to capture the probabilistic meaning of my biological consciousness, the persistent essence of my consciousness, my consciousness “free from desire” to use the words of the Tao Te Ching, which I take to mean free from the biological ‘overhead’ that my physical body and its subjective perceptions impose on my consciousness.

The bottom line for me is that I think that Kafka got it right. When Gregor Samsa woke up one morning to find his body transformed into a giant cockroach, he was still Gregor Samsa. Ultimately the world didn’t accept Gregor’s physical metamorphosis, any more than I think the world will accept computational consciousness preservation and extension in any sort of legal or rights-bearing way. But the world was wrong.

“Persistent essence”, “consciousness free from desire”, “the me that matters most” … you might think I was talking about a soul. And in my own secular, agnostic way, I guess I am.

I call it a thread, like the thread of life that the Fates spin, measure and cut.

Like a thread that can be woven into a larger tapestry, but is a single, unique thread all the same.

Like the thread of a story.

Here’s the full picture of that tapestry of the Three Fates that I started the note with. Its formal name is The Triumph of Death, because in the end everyone’s thread gets measured and cut.

I don’t think that’s true any longer.

Everything is about to change.

The Contentapocalypse is Coming

Scott Bradlee is not only the musician/composer who founded the time-twisting, insanely cool music collective Postmodern Jukebox (6 million subscribers to the PMJ YouTube channel!), he is also a gifted writer with an insta-follow Substack titled Musings From The Middle.

“Are you worried about AI, as a musician?”

I’ve been asked this question a few times lately, mostly as it relates to the Frank Sinatra AI voice mods that folks on the internet have applied to a few PMJ recordings.

The question calls for a bit more specificity: Am I worried about AI nuking the planet? Only a little. Am I worried about AI being monopolized by bad actors that use it to manipulate and control public opinion? Definitely. Am I worried about generative AI replacing human musicians and human artists? Not in the slightest.

I want to talk about that last question today, since it’s the only one I’m remotely qualified to answer. My short answer is this: No, I’m not worried about generative AI replacing human artists, because AI is not human. AI can generate content — an endless, awe-inspiring deluge of content, even — but only humans can create art. While it’s difficult to separate the two from one another in our current social media age, I believe the distinction between “content” and “art” will become much more apparent, in time.

More on that later. But first, let’s indulge in a bit of fear mongering, in the form of an article alleging that Spotify is sending its users recommendations for a number of AI-generated tracks. Articles like this are pure rage bait for musicians; as if the meager streaming royalties weren’t bad enough, now Spotify is promoting and playlisting generative AI bands over their flesh-and-blood counterparts?

It’s easy to make Spotify — seen by most musicians as a kind of “Death Star” of the music industry — into a scapegoat. It’s also kind of lazy and boring. After all, Spotify does not exist in a vacuum; like all successful businesses, it emerged to solve a problem.

To me, a more interesting story is to go back to the beginning — all the way back — and trace the history of the recording industry from its humble origins, all the way to this weird time where we’re not even sure if the song we’re listening to is human-made, or a Frank Sinatra AI clone, or whatever. As with other cultural histories, it quickly becomes apparent that Marshall McLuhan was right all along: technological innovation is the rudder that steers the ship of culture, and the rest of us — musicians, talent managers, power brokers, industry gatekeepers, roadies, groupies — are all just along for the ride. The medium is the message.

So, if you don’t mind, let’s talk music biz history for a minute or two (and if you do mind, perhaps ChatGPT can summarize the following few paragraphs for you). *Cue the Ken Burns music…*

It wasn’t always so easy to make a record. Actually, it was downright impossible until sometime in the 1880s, when Emile Berliner invented the gramophone — a device that used a stylus to translate a rotating flat disc into sound that was amplified by a funny-looking horn. Prior to this, the primary way to consume music was to buy some sheet music and play it on the piano yourself. Obviously, one’s mileage tended to vary in this endeavor.

Berliner’s invention changed things, and the early commercial records available on his Berliner Gramophone record label were actually quite diverse in content: Sousa marches, classical and ragtime piano selections, opera arias, speeches and sermons. However, it wasn’t until 1904 — when the Victor Talking Machine Company inked a deal with Enrico Caruso and created the industry’s first superstar recording artist — that the medium showed any promise beyond novelty.

The recording industry had its first boom in the 1920s, with over 140 million records sold in 1921 alone. The technology was still crude, and the process of cutting a record was urgent enough to be called “catching lightning in a bottle.” Performers played into a single microphone — all live, no overdubs. The limiting mechanical nature of things meant that there was a hard cutoff at a little over the three-minute mark, a convention that has interestingly persisted in pop music to this day.

The decades that followed saw great innovations in recording technology, but the goal was always the same: to create a “hit” record that would take the country by storm, launching the artist(s) that recorded it into superstardom. The high costs associated with producing a record meant that the best strategy was to sign and retain top-tier musical talents, and then match these artists with the material that was most likely to connect with a wide audience — a process that gave rise to an entire label department, known as Artists & Repertoire, or “A&R.”

A&R worked. In its heyday, the process of scouting and carefully developing talent turned hundreds of artists into bona fide legends, whose work managed to transcend the era in which it was recorded. Dozens of artists became cultural icons that captured the imagination of millions across time. A couple of artists — usually the early adopters of certain genres — even managed to break the mold altogether. A&R was so successful that even in 2024, we continue to reference the benchmarks of these halcyon days: will there ever be another rock band that takes over the world like the Beatles? Will there ever be another pop star as famous as Michael Jackson?

Then came the Digital Age —*cue the proverbial record scratch.*

Like other industries, the recording industry underwent some pretty massive changes as the world made the switch from analog tape to 0s and 1s. Following a brutal industry crash in the early 1980s, a new storage medium — the CD — led to a new round of consolidation, as big multinational corporations seeking to capitalize on the new format began scooping up fledgling record labels and their catalogues. Before long, the labels themselves began to look less like edgy institutions at the forefront of culture, and… well, more like a division of any other large, multinational corporation. A&R departments — previously run and staffed by musicians and music producers — were now beginning to be run by trend-following business executives that came with a sheen of consummate professionalism, but often little musical experience.

Again — it is easy, lazy and boring to make these new executives into scapegoats. It wasn’t right or wrong, it was just what the times demanded of them; remember, technology is actually steering the ship. In all fairness, the marketing prowess of these executives led to an overall boom in music in the ’90s: think boy bands, Britpop, hip hop, house, grunge, garage, Lilith Fair, Lollapalooza. It was a new Golden Age for commercial music, as major labels used their newfound marketing acumen to identify and successfully exploit various subgenre niches. It was a time of selling, and of “selling out.”

Still, the biggest disruption to the industry was yet to come. Under threat of piracy from Napster-loving teens and slow to adapt to new digital music formats like the MP3, the labels found themselves backed into a corner in the early 2000s — with no way out, other than to accept the terms and conditions of Apple’s iTunes Store. Under Steve Jobs’ leadership, Apple had correctly forecast the ways that the internet would transform music distribution, and built the infrastructure that the major labels needed to survive. For their efforts, Apple got to dictate the terms of the agreement, in a kind of “Bretton Woods” moment for the industry.

The advent of streaming music in the 2010s — led by the aforementioned “Death Star” Spotify — actually offered some relief to the cowed labels; now, instead of maximizing marketing efforts to sell a one-time download for a week, labels and artists could get paid per listen, over many years. This final shift — from convincing fans to purchase music to convincing fans to merely listen to music — was subtle, but had profound implications. Music was now officially part of the attention economy. A new medium once again forced the music industry to reorder itself, and when the dust settled, Big Tech consolidated its position as the industry leader. To quote ABBA: The Winner Takes It All. History lesson over.

So, here we are in 2024; a time in which one would be forgiven for assuming that “A&R” was meant to be an abbreviation for “Algorithm,” all along. Indeed, the algorithms that govern the attention economy now run the music biz, as well. The currency of this attention economy is data, so it should come as no surprise that the incentive structure guiding the present-day music industry is a variant of the same incentive structure that rewards social media users for all sorts of data-producing behavior. Put simply: more is better. Make more songs. Release more b-sides and b-rolls. Post more clips from live performances. Go on more podcasts. Post more TikToks.

Labels: why spend years searching for just the right raw talent to refine into a global superstar, when you can sign a large stable of artists and tell them to keep posting content until they get a hit?

Artists: why spend three years perfecting an eight-song debut EP, when you can release a minute-long song and a few comedic short-form videos each week, to maximize your chances of discovery?

Hey, I don’t make the rules; I generally have to play by them, myself. But, it stands to reason that the way to reach the top in an age where artists have been rebranded as “content creators” is to create more content. Bands become their own record labels. Labels become cross-media megaliths. It’s not right or wrong, it’s just what the times demand. Something-something technology and ships.

But it’s worse than that. An artist uploading music today is not only fighting for attention among the other 100,000 uploads that occurred that day — but also among nearly every commercially available record, ever. Just as social media’s unprecedented levels of access brought celebrities and everyday people together on the same playing field, streaming’s unprecedented level of access has thrust every new artist into competition with Bon Jovi and Beethoven, alike.

[Writer Ted Gioia has covered today’s oversaturation of music extensively in an excellent Substack piece.]

Up to this point, we’ve only been discussing music. However, this proliferation isn’t only happening in music; it’s happening in everything. In 2018, there were just over 500,000 podcasts in existence — already a shocking number, until you consider that the number has ballooned to 5 million today. And while the barrier to entry for publishing a podcast is significantly lower than that of creating an album, it’s nowhere near as low as the barrier to entry for publishing a short video on TikTok — of which 8.6 billion were uploaded in 2021 alone. We have more content at our fingertips than we know what to do with.

Which brings us back to AI. As much as us humans create, AI has the ability to create content at a scale far beyond our imagination. In fact — with enough energy and computing power — it has the theoretical potential to create an entire internet’s worth of content, instantaneously. 100,000 new songs per day is a lot, but it’s nothing like 100,000 songs per second.

Or 100,000 podcast episodes per second.

Or 100,000 funny yet relatable videos per second.

Or 100,000 vaguely amusing listicles per second.

Or 100,000 influencer-style photos per second.

Or 100,000 reboot movie scripts per second.

Or 100,000 “think pieces” like this one per second.

In the event that us humans aren’t able to completely oversaturate the internet on our own, generative AI will ensure that the job gets done. I think it is entirely possible that in our not-so-distant future, we will have no idea whether the material that populates our feeds is created by man or machine. The two will be one and the same, producing the same output: an endless, soulless stream of digital soma, designed to delight our senses and capture our attention. Peak Content will have arrived.

What happens then?

I think we are heading towards a “Contentapocalypse” — something of the end of the process of mass media proliferation that began in the Middle Ages with Gutenberg’s printing press. The Contentapocalypse is when we collectively take back our attention spans, and rethink our relationship with media. I think that generative AI gets us to this tipping point sooner than we think.

AI content has already begun to drip into our feeds, and viewing the latest crop of its attempts at cinema — nightmarish, non sequitur shots of objects and people morphing into one another — conjures up some deeply unsettling feelings. Maybe it comes from the “uncanny valley” phenomenon. But maybe it also stems from the utter disregard that AI seems to have for our own deeply-held belief that media is an effective and trustworthy store of reality. Maybe this discomfort is just us coming to the realization that taking a photograph or a video of something doesn’t really capture a moment — in the same way that watching a video recording taken from another person’s perspective fails to communicate what it’s like to actually be them.

In time, these AI-generated videos will certainly improve to the point where they are indistinguishable from footage shot by humans. The same goes for AI-generated music. It’ll get there — and yet somehow, I think it will fail to move us. No one will gather their family together to watch an AI-generated film or cue up their favorite AI-generated Summer playlist for a road trip. For some reason, we just won’t find that stuff appealing.

Undeterred, AI will simply continue flooding our timelines with so much content that we won’t even find content very interesting anymore. It’s too much; we will have heard every combination of sound, seen every type of image, and perceived all manner of film. Or, at least it will feel that way.

That’s when it will hit us; we don’t want this stuff at all. We don’t want content. We want art.

The Contentapocalypse is coming; let’s not be caught unprepared.

What, then, will become of the content creator in this new, post-content age? Well, that particular job title won’t exist. Truthfully, “content creator” has always been a bizarre amalgamation of man and machine; the name even sounds vaguely industrial. In a Post-Content Age, there won’t be any need for humans to attempt to mimic machines anymore. The machines will take it from there, thank you very much.

There will, however, be an enormous need for humans to be human again, and to create things that feed the human soul. The kind of work that views humanity as not just a series of chemical processes to be hacked, but as part of a much bigger whole. Many folks saddled with the unfortunate “content creator” title today are already creating this kind of work — some of them to widespread acclaim, others quietly in the shadows. As we take refuge from the incoming deluge of AI-generated slop, these creators with the ability to connect to us on a deeper level will be the ones with job security, for a change.

But we won’t call them “content creators” anymore; we’ll call them “artists” — that most sacred of titles, akin to the shaman of days of yore in their ability to access the transcendent. Humans have only been making content online for a few decades, but they have made art since early antiquity. The Contentapocalypse will snuff out our desire for content, but it will do nothing to dampen our burning need for art.

So, how can we spot the difference between “art” and mere “content?” Art has a story behind it; it has a mythology behind its creation. When we take in the beauty of an impressionist masterpiece or listen to a great Soul record from the 1960s, we aren’t just engaging with disembodied sights and sounds; we’re also engaging with the creators of the work and the story of its creation. We’re looking out that window in Saint-Rémy-de-Provence, contemplating eternity. We’re in the vocal booth in Studio A, turning our hardships into melodies. It is the awesome power of our empathy — a trait notably absent in all things AI — that allows us to perform this miraculous feat. In the post-content future, I believe we will come to appreciate this gift even more, and embrace humanity — with all of its frailties and shortcomings — like never before.

I realize that this is a bold prediction; after all — to misquote Madonna — we are living in a Materialist world, and I am a ragtime pianist. But something tells me that the real crisis of our times is not merely political or technological, but spiritual: we have simply forgotten how to be human. All of the current ideological fads we see all around us are merely novel, worldly attempts at answering a spiritual question, in an age dominated by technology.

What many of the tech futurists seem to miss is that humanity’s great frailty — our mortality — is also a necessity. Without death, life has no meaning. Without this mystery — our search for meaning in a sometimes cruel, often chaotic world — there can be no art. All the ancient societies and religions understood this. All the great artists — whether they adhered to a religion in the traditional sense or not — did, as well.

And music fans today understand this. Contrast the permanent immediacy of digital streaming with the live touring space, where scarcity still reigns supreme. Legendary, aging artists and acts from the ’60s and ’70s are commanding higher inflation-adjusted ticket prices and grossing more revenue than in their heyday. The reason should be obvious; unlike their records, these artists won’t always be here with us. After they sing that final lyric and strum that final guitar chord, all we will have left are the memories. Although their voices may be a bit worse for the wear and their lyrics a bit incongruous with their current lifestyles, when we hear these artists today — live, in the same room as us — we become acutely aware that we are witnessing something uniquely special. If the early record studios were catching lightning in a bottle, then watching these artists live is to experience the lightning directly: a brief, brilliant flash of the transcendent.

We live in interesting times, and there are some pretty big changes coming our way. But do not fear the Contentapocalypse, and do not worry about AI.

Scratch that — continue to worry about AI, just don’t worry about it replacing artists. We humans still have a trick or two up our sleeves.

Well, I’m Not AI (As A Strategy): WInAaaS!

Matt Zeigler is a Managing Director and Private Wealth Advisor with Sunpointe Investments, and he’s been helping people with their money for more than 15 years. He’s also one heck of a writer and publishes an excellent daily (!) note on his Cultish Creative blog and newsletter, which you can subscribe to here.

Call it a Contentapocalypse. Call it a ravishing rapture for the artistic souls who remain. Hell, split the difference and just call it the Eve of Distinction Destruction (which feels pretty accurate).

Bottom line, there’s already too much crap. AI making crap too is even more overwhelming. Take it from those of us making a lot of crap that we don’t think is crap but feel inspired to create even when only a select few “get us” and most other people probably do think we’re just making crap.

It’s hard enough already, is my point. But to accept that art itself is getting swept up and drowned out in a cacophony of artificial creationism, and that there’s nothing we can do about it – that’s defeatist. Maybe unsurprisingly, like Bradlee, I see a silver lining too. One with the beginnings of a soulful, story-bound soma, if you will.

As a ’90s teen, my friends and I regularly attempted to reject genre by trying to claim a novel assortment of subgenres as our own. For a minute, it even (almost) worked. And it worked for all the reasons alternative worked, until alternative got subsumed by the mainstream in and around the same period.

We drew our inspiration from what now probably looks like unsurprising places. Think old Red Hot Chili Peppers, or Fishbone, and we totally would have stolen from modern-era Postmodern Jukebox if now was then. These were all alt-bands that made you ask, “What exactly are they?” and then attempt to answer with, “Well, they’re kinda sorta all of these things over here, but then mashed together over there, which is pretty cool I guess, right?” You couldn’t explain them without having a conversation about them.

Trust me when I say it—it’s hard to get people to come see your funk jazz hip-hop punk band. You can jam all the adjectives in between a pair of parentheses. You can put the confused amalgamation of your internal algorithms on a flier and pass it out on a suburban college campus. You can be so far out of the mainstream that nobody notices except the growing body of other people attempting to do the same thing you’re having moderate success at doing.

I hadn’t even really understood it until I read Scott’s piece. That feeling, when you realize your cool crap has been reduced to your meaningless crap, when you’re stuck in the same old sea of sameness, again, even though you know you’re different, and even though you put it in parentheses to blow people’s minds with questions of how one might even dare to combine such genres. It forces you to start to question your relationship with the media, the mediums, and the messaging itself.

We think it’s a human art vs. AI art problem, just like when my music friends and I thought it was an alternative vs. mainstream problem. The simple truth is, it’s a relationship with itself versus a relationship with yourself problem.

Unique parentheticals and exciting adjectival combinations might help you scale your little idea up to a point, but being in an anti-scene itself, especially when the goal is to make it more, has an inevitably cresting wave of problems in its internal logic.

Back to teenage/twenty-something me and my friends for a second, because the key is in what happens to us (re: you, but meaning re: me) next.

After you’ve realized your new labels are being reformatted in the same way by everyone else, you admit you have to change. Again. You have to (ahem, swallow the vomit first) “pivot.” You have to get out of the way of the thing that’s everywhere and reclaim your attention and self-awareness and identity.

The first thing you do is the natural reaction to your relationship with whatever the itself in front of you is: you say, “This is not me, I am NOT that.”

Call it differentiation. Call it positioning. Call it marketing, because telling the story of how you stand out from the mainstream itself is, as Rory Sutherland says, how we know that a flower is just a weed with a marketing budget.

We are here now. AI is making art. Humans are still making art too, but we’re increasingly starting to yell about how our art is not AI art and we think we can prove it.

Enter, “Well, I’m Not AI (As A Strategy).” I’m calling it WInAaaS! for short. You pronounce it kind of like “winner” but in a 1920’s-era boardwalk novelty game announcer voice.

The WInAaaS mindset accomplishes one thing and fails at another. First, it establishes that “I’m not AI.” Differentiation achieved. But, it fails at establishing a definition of self with any depth away from the big, bad, Wolframmification of Peak Content itself.

To be truly alternative is to refuse to step into the mainstream, and to set up your shop on a stepping-stone of an island, admitting scale was never the goal in the first place. Doing something cool, doing something different, doing something to truly express yourself, is to say, “This is not me, I am me, and that’s it.”

We’re almost ready to take back our attention spans and rethink our relationship with the media, mainstream and otherwise. The pièce de résistance of the Contentapocalypse is to declare ourselves WInAaaS! But, the final step is to create things that don’t have to scale. The final step is to embrace ourselves, as ourselves, and “that’s it.”

This is the rapture. Step into a world where there’s no one left. Build, create, make, protect, and teach around your self, in natural contrast to the rest of the world itself. Do it for you. Do it for your people. Don’t do it for everyone. Do it at least for someone, so long as that someone is you, and it makes you happy.

I already see people doing this, I’m sure you do too. They’re old school opinion blogging, and zine making, and curious community building, and self-learning in service to others, and f***ing around to help people meet cool new people, and rejecting the obvious paths to forge their own, and post-modern (scream it now, with extra Fugazi energy) analyzing more interest into old stories in an effort to share their weirdness with fellow non-bots (humans, remember them?). I could go on, and on, and on. It fills my heart with genuine joy.

Yes, generative AI is here, and yes, like Scott Bradlee, I’m not scared, because I’m having too much fun.

Peak Content is here. The Contentapocalypse is nigh. WInAaaS, mount up.

Stop trying to define your self in defiance of itself. You are one of one and that’s a feature, not a bug. Lean all the way into it, I’ll start a sing-along…

ps. you can use technology not just to go to the moon, but to have a friend take your picture standing next to a spacesuit, just so you can text another (musician) friend, who just so happened to have done engineering work on the spacesuit in the picture, captioned with your complaint about “how is this guy supposed to play guitar with these gloves?” This is how we fight Peak Content and the Contentapocalypse with its own weapons. Art Official, intelligence optional, friends required. Photo credit John R., laugh inspiration Don K.

hey Mr. Fancy Engineer – your priorities are off, there is no way he can play guitar in these gloves.