
I know you didn't write this

I received a link from a co-worker with the accompanying note:

I put together a plan for the project, take a look.

Taking a quick scan through the linked document, I was pleased to see that there was some substance to it. And look, there were tables and step-by-step instructions. At the bottom, risks and potential mitigations. They had definitely put together a plan, and it was definitely in this document.

Later, I poured another cup of coffee and actually read the document, and something twigged in a part of my brain. Suspicions aroused, I clicked the “Document History” button in the top right and saw a clean history: empty document – and then, wham – a fully-formed plan, as if it had spilled straight out of someone’s brain onto the screen, ready to share.

So it’s definitely AI. I felt betrayed and a little foolish. But why? If this LLM has slurped up the entirety of human-written output, shouldn’t this plan be better than what one person could ever dream up? Perhaps that’s exactly the thought process they had when they turned to their LLM of choice.

I looked back at the note to double- and triple-check that they hadn’t called out the use of AI. They hadn’t. If this was their best attempt, then to save face I was going to have to write the plan myself.

Regardless of their intent, I realised something subtle had happened: any time saved by (their) AI prompting gets consumed by verification overhead, and the work just gets passed along to someone else – in this case, me.

Have you been the victim of AI workslop?

A recent, well-covered article in Harvard Business Review explores the newly-coined category of “workslop” – working materials produced by leaning on AI. The study documents case after case in which reaching for AI had the direct outcome of greatly increasing the amount of collective work required.

That increased work is verification – figuring out whether someone actually thought about what they sent you – and it rhymes with a completely different domain.

At the core of the cryptographic systems that keep our information private online are mathematical constructs that are easy to verify but hard to compute.

With AI writing, we’ve inverted this: generation is trivial, verification is expensive. We still read, but we read differently: guards up, trust withheld, looking for tells. The document history button becomes mandatory due diligence.
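To make that asymmetry concrete, here’s a minimal Python sketch (my illustration, not from the article; the “secret” input and digest are hypothetical): confirming that an input matches a published SHA-256 digest takes a single cheap call, while recovering an unknown input means brute-force search.

```python
import hashlib

# Easy to verify: one hash call tells you whether a candidate
# input produces the published digest.
def verify(candidate: bytes, digest: str) -> bool:
    return hashlib.sha256(candidate).hexdigest() == digest

# Hard to compute: going the other way means brute-forcing the
# input space. This toy search only succeeds because the secret
# here is tiny; for real inputs it is hopeless.
def find_preimage(digest: str, max_tries: int = 1_000_000) -> bytes | None:
    for i in range(max_tries):
        candidate = str(i).encode()
        if verify(candidate, digest):
            return candidate
    return None

digest = hashlib.sha256(b"42").hexdigest()  # hypothetical "secret"
print(verify(b"42", digest))   # True, instantly
print(find_preimage(digest))   # b'42', found only because the input is tiny
```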

It’s just not nice

Using AI when writing for others is fundamentally about etiquette. It’s not polite to share purely AI-generated writing without disclosing its provenance. In most cases we’re looking for an equitable exchange of ideas; if you know in your heart of hearts that you didn’t put the work in, you’re undermining the social contract between you and your reader.

By passing off AI as your own work, you inevitably become passive – an observer of the act of creating, an assistant to the creator.

If you can’t explain what you’ve written, do you have any right to share it? There’s a reason most PhD candidates defend their work orally.

Why should I bother to read something you didn’t bother to write?

Accountability-shirking as a Service

In serious engineering circles, we’re reaching consensus that developers are held accountable for all code committed and shared, regardless of how it was produced.

Other work is in different territory. Side projects, throwaway code, single-use applications – building something you lack the skills to create otherwise. But if you ship it and people use it, you’ve created an implicit promise: that you can maintain, debug, and extend what you’ve built. If AI assembled it and you can’t answer basic questions about how it works, you’ve misled users about what they can depend on. The work document and the shipped app both create dependencies – one on your strategic thinking, one on your technical follow-through.

Engineers who have embraced coding assistants to do the messy business of actually getting code into the editor see concrete, if modest, productivity boosts.

The same is happening for writers. Unless pressured by unrealistic expectations or deadlines (or, in some cases, pleading ignorance of the risks), professional writers will converge on the same view as software engineers: anything worth writing has to be written.1

Writers and other professionals want to do good work and be recognised for it. That leads us to explore where AI aids the work and where it impedes it. It doesn’t help that we’re working this all out as we go along.

Transcribe, Translate, Transfer

Despite the name, conversion work – not generation – is where generative AI justifies itself. In journalism, Jason Koebler @ 404 Media notes:

YouTube’s transcript feature is an incredible reporting tool that has allowed me to do stories that would have never been possible even a few years ago. YouTube’s built-in translations and subtitles, and its transcript tool are some of the only reasons that I was able to do this investigation into Indian AI slop creators, which allowed me to get the gist of what was happening in a given video before we handed them to human translators to get exact translations.

When the team did the admirable thing of translating important reporting on ICE into Spanish, they turned to human translators for that extra certainty. Some people would be happy with the LLM translation; that’s their line. For responsible, authentic journalism, 404 Media took the higher road.

In “Good Bot, Bad Bot”, Paul Ford compliments the proposal to use AI to help academics package and market their work to a non-technical audience.

He notes:

It makes economic sense. Researchers who aren’t affiliated with giant companies or large research labs at universities often have few resources to promote their research. And for the most part, biology postdocs cannot write good posts—not least in their native language, but especially in multiple languages. AI won’t be as good at posting as a thoughtful human, but it will likely be better at fun, emoji-laden social media posts than, say, an actuarial scientist adjunct who speaks English as their fourth language.

It’s refreshing that Ford acknowledges the pragmatic realities. Promotional posts aren’t the research itself – marketing your paper isn’t the same as writing it. That’s Ford’s line. The economic reality of underfunded academics means embracing AI in ways that might actually be welcome.

The Guessing Game

Undisclosed AI is becoming the default assumption. Reading anything is now an act of faith that someone thought about the results longer than it took to fire off a prompt.

Faced with hunting for the author’s fingerprint in everything we read, will we get tired of the guessing game?

Verification today often leads to difficult conversations about the nature of work and effort, authenticity and etiquette. Those conversations are the work now.


Thanks to Sarah Moir, Harrison Neuert & Geoff Storbeck for their invaluable feedback.


Footnotes

  1. A fun corollary of this is the rise of fake bylines for news content – the person who “wrote” the piece doesn’t exist, so there’s no “one” to blame if it’s wrong.