[This post originally appeared on Cohost.]
The title of this post is a paraphrase of a tweet by the opinion columnist and investment manager Conor Sen: “AI is potentially to the world of ideas what low-cost index funds were to investing.” (This was in turn in response to another tweet by Derek Thompson: “an AI that spits out the synthetic average of everything it’s been exposed to is an interesting metaphor for a lot of stuff ppl say, write, and think.”)
I found this to be a very interesting and potentially rich analogy, and thought it would be fun to explore it a bit.
But, first, here’s what an AI system itself (the AI-powered Lex online word processor) generated when asked to continue a text starting with that sentence; note that this is a representative example culled from a number of trials:
It has the potential to offer answers to questions and solutions to problems that may have previously been considered impossible to resolve. AI can help individuals and organizations alike to quickly analyze data in real-time, make predictions, and accurately identify patterns in a way that was not previously possible. AI may also be able to help us better understand the world’s most complex systems, from climate change to world markets. Ultimately, AI has the potential to create unprecedented solutions and insights that could revolutionize the way we view and interact with the world.
This is perfectly competent prose, but it totally misses the point of the analogy. So please allow me, a mere human, to take a shot at explaining what it means:
How index funds conquered the world
Prior to index funds, the accepted way to succeed at investing was to do a lot of work analyzing companies whose stocks were traded on public markets, and based on that analysis to decide which companies’ stock prices were likely to increase and which were likely to decrease—the point of course being to buy the former and sell the latter. Investors not willing or able to do this work themselves would outsource it to an investment manager, who would charge a fee for the service (typically a small percentage of the total amount invested by the client). Rather than picking stocks, the investor’s job was then simply to pick a good investment manager.
With index funds the recommended strategy is not to try to pick stocks, or have someone pick them for you, but simply to invest in an overall collection of stocks of a certain type—for example, the S&P 500 (for the stocks of large companies), the Russell 2000 (for small companies), or a very broad collection of all stocks in the US or worldwide.
Index funds subverted the “active management” paradigm in multiple ways. First, they removed the need for human judgement and replaced it with a mechanical rule. This enabled companies offering index funds, like Vanguard, to charge significantly lower fees, which in turn let investors retain a higher percentage of stock market gains. (This was especially significant since such gains would compound over time.)
Second, they lowered the risk for investors: the returns from an index fund were comparable to those achieved by an average active manager before fees, and better than them once the manager’s higher fees were subtracted. An investor would therefore be better off investing in an index fund than with a typical active manager. They wouldn’t get the extraordinary returns achieved by the very best active managers, but they also wouldn’t suffer the poor returns achieved by the worst.
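To make the fee arithmetic concrete, here’s a minimal sketch of how fee drag compounds; the 7% gross return, 1% active fee, and 0.05% index fee are illustrative assumptions of mine, not figures from any study:

```python
# Illustrative only: compare how a 1.00% active-management fee and a
# 0.05% index-fund fee compound over 30 years, assuming the same 7%
# gross annual return on a $10,000 initial investment.

def final_value(principal, gross_return, annual_fee, years):
    """Grow `principal` for `years`, deducting `annual_fee` each year."""
    net_return = gross_return - annual_fee
    return principal * (1 + net_return) ** years

principal = 10_000
active = final_value(principal, 0.07, 0.0100, 30)
index = final_value(principal, 0.07, 0.0005, 30)

print(f"Active manager (1.00% fee): ${active:,.0f}")
print(f"Index fund    (0.05% fee): ${index:,.0f}")
print(f"Fee drag after 30 years:   ${index - active:,.0f}")
```

Under these assumed numbers, a fee difference of less than one percentage point per year leaves the index investor with tens of thousands of dollars more after 30 years—which is the whole point of the parenthetical about compounding.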
Conversely, index funds made the job of active managers harder: they now had to work more just to match, let alone beat, the returns of an index fund. And given that a typical active manager could not beat those returns, they had to justify why their work deserved a higher fee than an investor would pay for an index fund.
The horn of plenty and the worm Ouroboros
Now consider AI, in particular the type of AI exemplified by systems like GPT-3. These systems are built on so-called large language models (LLMs) trained on lots of human-generated text, and based on that text they can perform what seems like magic: given a string of text, predicting a suitable string of text that continues on from that point (as in the example above), or, as in the recent ChatGPT, generating a plausible response to a given question.
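For a toy sense of what “predict a continuation” means mechanically, here is a deliberately tiny sketch: a bigram model standing in for an LLM. Real LLMs use neural networks with billions of parameters; everything below, including the corpus, is made up purely for illustration:

```python
import random
from collections import defaultdict

# Toy stand-in for an LLM: a bigram model that, given a word, samples a
# plausible next word from whatever followed that word in its training text.
corpus = (
    "index funds track the market and charge low fees "
    "active managers pick stocks and charge high fees"
).split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def continue_text(prompt_word, length=6):
    """Extend `prompt_word` by repeatedly sampling an observed next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:  # never saw this word followed by anything; stop
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(continue_text("charge"))  # e.g. "charge low fees active managers pick"
```

The output is “the synthetic average of everything it’s been exposed to,” in Thompson’s phrase—plausible continuations stitched from the statistics of prior text, just at a vastly smaller scale than the systems discussed here.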
Based on the analogy to index funds, we can imagine several responses by both readers and writers to the growing capabilities of LLMs. Readers may simply accept AI-generated texts as a “good enough” product for most purposes; this is very similar to how we use index funds or (closer to home) consult Wikipedia as a “good enough” substitute for trying to seek out other sources of information on a topic with which we’re unfamiliar. This approach leaves little or nothing for (human) writers to do, except to provide uncompensated “grist for the mill,” as their writings past or present get fed into the maw of the LLMs, then used to produce an almost inexhaustible stream of new writing. But under this scenario, who (but an AI) would bother to write?
And if no one (human) decides to write, where would the writing come from to feed LLMs in future? For example, one team has predicted that in the next five years the size of training sets used as input to LLMs will exceed the amount of “high-quality” data available (e.g., books, news articles, scientific papers), leaving them to be trained on lower-quality data (e.g., YouTube comments). We can imagine one possible future in which the input to LLMs (or their successors) will primarily consist of text previously generated by other AIs.
This resembles the scenario warned of by some opponents of index funds (for example, as cited in The Atlantic): that a stock market dominated by index funds will no longer perform its supposed function of efficiently allocating investor capital. Instead the price of a firm’s stock may simply rise and fall based on whether it is included in major indexes (like the S&P 500). Similarly, the popularity of certain ideas may in future depend on their reinforcement by various AIs, and not on the number of humans who actually believe in or espouse them.
Writing after the end of writing
So, what’s a writer to do in a world where AIs can generate more text than we humans could ever create?
A writer might commit wholeheartedly to the use of AI as an aid to writing, in the hopes that it might provide some sort of exploitable edge—perhaps a way to write faster, to surface previously obscure source material, to serve as inspiration, to come up with a striking turn of phrase, and so on—anything that might help them stand out from the crowd and reward them monetarily or otherwise. This is analogous to the “quants” in investing, who throw ever more elaborate mathematical models (and now machine learning) and ever-increasing amounts of compute power at the problem of finding exploitable trading opportunities.
Alternatively, a writer might scale back their ambitions of achieving widespread success, and focus intensively on either a particular group of readers or a particular niche topic. The former approach is analogous to that of local financial advisors, who may not provide any better returns than an index fund or any better advice than a robo-advisor, but who have the advantage of knowing a particular set of local investors and providing a personal touch.
The latter approach is similar to that of investment advisors who specialize in particular areas (e.g., biotech, or energy) and don’t attempt to provide advice on well-covered areas like consumer Internet services. But there is a danger here: as AIs extend their reach to encompass more and more of human knowledge and writing, the territory untouched by them may become smaller and smaller, until writers on niche topics ultimately are writing about things of interest only to themselves.
Then there is another more speculative approach: In tweets adjacent to those above, the legal entrepreneur Scott Stevenson muses that we should stop thinking in words, as businesses and investment funds strive to become illegible and no longer use stories to explain their world, a world that “can only be understood as a matrix of numbers”.
Of course, producing words is what a writer does, and stories are the natural way we structure the world, whether they end in “happily ever after” or “quod erat demonstrandum.” But there is a type of writing that relies much less on stories, especially in its most compressed and compact form. Perhaps writing about ideas should aspire to the condition of poetry, stringing concept after concept together for our appreciation and (ideally) elucidation, like a linear combination of vectors in a very high-dimensional space, pointing the reader to a destination previously unknown to them.
But writing good poetry is truly hard—as Randall Jarrell put it, like spending a lifetime standing in thunderstorms, waiting for lightning to strike. (The ratio for Sturgeon’s law in poetry is much closer to 0.99, or even 0.999, than 0.9.) And the audience for such “nonfiction poetry” would perhaps be small, since to truly appreciate it one would need to be familiar with all the concepts touched on and pointed to but not explained at length—like classical Japanese or Chinese poetry, where each individual poem seems banal unless you know the host of older poems that went into its making.
I myself am no poet, and so will take on a much less ambitious task: posting to my small group of cohost followers, about topics of peculiar interest to me, and surviving on the occasional like or share, or the very occasional comment.