On AI Summaries


Literally just started this blog specifically because I got asked this question on a campus podcast recently, and I didn't get to answer it as fully as I wanted to in that medium, and I don't particularly want to dox myself by posting the direct link on bsky. So, here are my thoughts on this question as a digital scholarship librarian at a liberal arts college.

If an AI can summarize a 40-page article in seconds, what happens to the deep reading skills we value in a liberal arts environment?

I'm glad you asked this, because first of all: AI can't summarize. Let me explain.

A summary is a brief account, typically in plain, direct language, of the key pieces of information communicated through a long-form medium. The defining characteristic of a summary is that it accurately reflects the content it summarizes. A summary can therefore only be trusted to be an accurate reflection of the content if it was created within a transparent structure of accountability, which is to say it has a known author whom you have reason to trust. In other words, a trustworthy agent is identified as responsible for the information you are ingesting.

To produce a summary, one must

1) consume the long-form media in a sense-making manner,

2) understand the long-form media,

3) leverage discernment to identify what has been communicated,

4) exercise subjective judgment to decide what's most important, and

5) express this judgment clearly and succinctly in language.

An LLM can consume the long-form media, and it can produce an output of linguistic shapes that structurally resembles the form of a summary. But the way an LLM consumes is to "take as input," which is to say ingest linguistic forms and word-order sequences as patterns, not read and make sense of the work to be summarized. The linguistic forms and orders of an LLM's output are a visualization of the resonance pattern triggered within the model in response to the input. Remember: the model is an abstraction of the distribution patterns of word forms within its training corpus. The model has also been manipulated by human intervention during its fine-tuning phase, and part of that process creates a mould of a shape labelled "summary" that conforms to our communicative norms. The output is pressed into that mould before it is printed on your screen.

This synthetic summary is not the product of understanding, nor discernment, nor judgment, and it has no communicative intent. It additionally has no accountability, because it has no author. The only way to verify the claims printed on the screen is to consume the thing in its entirety yourself. And even if the synthetic summary turns out to be correct, it's because a person (or many people) whose labour was appropriated for training data was correct, but you can't trace back to that source from the output because of the mechanical processes that built the model. Any time LLM output is correct, it is incidental, not intentional, and has no bearing on whether anything else that comes out of it will be correct.

So, I would argue that the synthetic summary has more in common with a forgery than a summary, and that a summary can actually only be produced through deep reading. All the AI does is perform a cheap illusion of cognition, a forgery, and it is fundamentally a waste of time to use one for these kinds of tasks.

Now let's talk about the purpose of a summary. If you're looking to read a summary, any 40-page document will have either an abstract or an executive summary, because those are our communicative norms. Such documents come ready-made with trustworthy options for catching the highlights, so synthetic summaries are redundant, regardless of their validity. The summary is meant to help you decide whether you want to read more in depth, which is why its accuracy and validity are so important.

If you're being asked to write a summary in an academic context, the purpose of the activity isn't the product, but rather to train you in close reading, and to help you hone your comprehension, discernment, judgment, communication, knowledge integration, and knowledge retention. Automating this process is just cheating yourself out of the knowledge you're paying thousands of dollars to acquire.

So what happens to deep reading skills, such as attentiveness, patience, discernment, sense-making, knowledge integration, and critical engagement, if for some reason we start choosing synthetic summaries over reading? Well, frankly, we lose them. It's the same as how your muscles weaken and atrophy if you don't use them, or how you lose the fine motor skills to control a pencil if you stop drawing. Reading and thinking are skills, and they require regular practice if you want to maintain them. I think it would be a catastrophic tragedy if we abandoned deep reading, especially over something so inane and inferior to our own minds.

If your cognitive skills matter to you, you have to take responsibility for your actions and choose to do the things that matter to you, even if they are less convenient, or slow, or hard, or not on trend.