home

An app can be a meal kit

In “Does smitten kitchen have a chocolate zucchini bread recipe?”, I wrote about creativity-on-rails: creating something because AI implies that you should have made it already.

My take was bleak, and I realise now that I reflexively reached for another contentious piece of language to describe model behaviour: hallucinations.

In From the other side of the bridge, Matt Webb defends the hallucination, pointing to countless examples of dreams that inspired individuals to novel, high-impact discoveries. He, like others, welcomes the spontaneous generation of ideas, even if they’re not rooted in reality.

He shares a word for this: a hyperstition, a

self-fulfilling idea that becomes real through its own existence.

When I see someone sharing a small project or prototype online now it invariably has the hallmark of AI-assisted development. I only have to look at the various tools and sites I’ve developed with Claude in the past year to understand why: for those with an idea and a Claude Code subscription1 this is the golden age of side projects.

If, with some handwaving to the gods of continued progress, code is now free, any idea for an app or tool can exist simply by being imagined.

In his 2025 retrospective on the influential “An app can be a home cooked meal,” Robin Sloan invites the reader to rustle up their own further abuse of the analogy.

We’ve entered an era of app development as meal kit. It’s apt: a subscription service to reduce the friction of cooking a nutritious, satisfying meal. Critics point to packaging waste and atrophied skills, but the demand is real.

In this mania of creation, what happens to the original creators? The ones who inspired with the promise of an actual home-cooked meal (or just dessert)?

We’re wired to create. The path from problem to solution is lush and evergreen. The idea becomes the tool. By embracing the code-is-free lifestyle AI adopters chant as one: we build because we can.

Footnotes

  1. s/Claude Code/your favourite coding assistant/g

the Great Cleavening

I’m constantly reassessing whether the decision to think about, write and work with new generative AI tooling is justified. I struggle to reconcile the new capabilities with the undoubtedly bad background and grim impact of releasing the capability to the world.

Above all, my stance is that there’s no going back. While it’s sometimes appealing to hope it will simply go away, it won’t. Less gently:

No one will ask your permission to build a world you do not understand.

Appearing at just the right time to bolster this threadbare justification is an extended quote from Patrick Tanguay’s Sentiers. It’s a response to the first issue of Robin Sloan’s pop-up newsletter on the topic of AI:

To those who think the piece might be too positive about LLMs, I’ll remind you that one can be critical of all the pitfalls and misunderstandings, and be aware of the semantic traps, and still have their brain explode when working with LLMs. All these things are true. The biased training and permissionless taking of people’s work, the purposeful use of words to make it sound like it’s human/actually thinking, the extractive business models, the imperial attitude, the broligarchy, the misleading chat-focused interfaces, etc. But also the breadth of what they can do, the uncannyness of it, the power, the potential, the questions. It’s in part why I find the field so fascinating, but also, I think, why it’s so cleaving.

graph-easy-ts

He’s only gone and done it.

As predicted, I LLM-nerd-sniped at least one person enough to make them dedicate some time and effort to porting Graph Easy to TypeScript.

Reading through the write-up, it struck me that Tom didn’t appear to have done anything drastically different from what I did. The biggest obvious change was using an AI coding agent for VS Code called “Azad”. This appears to have gone a long way towards keeping the agent “on track”, avoiding many of the premature terminations I discussed in the original blog post.

I also note he picked up the freshly released GPT-5.2, which I’m taking as an extremely loose benchmark of improved frontier-model capability.

After just under 24 hours, ~250 million tokens, and $148.58, all the tests passed! The rendered graphs match identically.

The nice thing about this is I get to update graph-easy.online to use graph-easy-ts for super-fast conversions of text to ASCII. I also took the opportunity to add a serverless REST endpoint for headless conversion.

curl -X POST https://api.graph-easy.online/v1/convert \
  -H "Content-Type: application/json" \
  -d '{"input": "[A] -> [B] -> [C]", "format": "ascii"}'

{"success":true,"output":"+---+     +---+     +---+\n| A | --> | B | --> | C |\n+---+     +---+     +---+\n","format":"ascii","timing_ms":0.5}
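The same endpoint can be called from TypeScript. A minimal sketch, assuming only the URL, payload, and response shape shown in the curl example above; the `ConvertResponse` type, `parseConvertResponse` helper, and error handling are my own invention, not part of the actual API client:

```typescript
// Shape of the JSON response, mirrored from the example output above.
interface ConvertResponse {
  success: boolean;
  output: string;
  format: string;
  timing_ms: number;
}

// Parse and sanity-check a response body before trusting it.
function parseConvertResponse(body: string): ConvertResponse {
  const data = JSON.parse(body) as ConvertResponse;
  if (typeof data.success !== "boolean" || typeof data.output !== "string") {
    throw new Error("unexpected response shape");
  }
  return data;
}

// POST a graph description and return the rendered ASCII diagram.
async function convert(input: string, format = "ascii"): Promise<string> {
  const res = await fetch("https://api.graph-easy.online/v1/convert", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input, format }),
  });
  const data = parseConvertResponse(await res.text());
  if (!data.success) throw new Error("conversion failed");
  return data.output;
}
```

Usage would then be as simple as `await convert("[A] -> [B] -> [C]")`.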

Thanks to Tom for picking this up and closing the loop.

First, you absolutely do not have to hand it to Grokipedia.

In “We’re not taking the fact-checking powers of AI seriously enough. It’s past time to start.”, digital literacy expert Mike Caulfield examines how Grokipedia appears to have correctly fact-checked a claim that professional fact-checkers missed.

In the article, Caulfield demonstrates his 3000+ word “Deep Background” instruction prompt, a calibrated directive to guide an LLM through a detailed fact-checking exercise. He uses this prompt for a successful micro-investigation into a claim relating to the Nobel Prize ceremony and makes the case for an augmented approach to fact-checking.

But I remain convinced that there is no future of verification and contextualization that doesn’t involve both better understanding of LLMs and more efficacious use of them.

Caulfield also challenges “hallucination” as a flawed catch-all for the ways in which LLMs are inaccurate. Modern LLMs are more likely to conflate through extrapolation, or to overweight unreliable sources, than to purely fabricate.

His hotly contested stance is that disengagement is a mistake. One must engage with the technology in order to understand it.

My set of understandings led me to discover a substantial error by a news organization that, if search hasn’t failed me, seems to have been missed by everyone up until now. ⁠⁠What has “I can’t analyze the output because its [sic] meaningless fancy autocomplete” done for you?

I applaud Caulfield for raising awareness of the successful AI-assisted fact-check by Grokipedia despite feeling negatively about the site itself.

I was initially buoyed by Caulfield’s argument, but on reflection I think he still gives Grokipedia too much credit. I think we’ll need more than a single example of a fact-check slip made by humans and corrected by AI to make a case that this is revolutionary technology for fact-checkers.

parsing JustHTML’s success

After trying, failing and sharing my doomed efforts to port a Perl library to TypeScript using AI tools, I eye enviously Emil Stenström’s account (via) of writing an HTML5 parser, with a 17-point summary of the journey that concluded with a working implementation.

Some thoughts on why he succeeded where I didn’t:

  • The models likely understand HTML5 better than bespoke, arcane routing algorithms tuned for ASCII diagrams.
  • Stenström found a reliable way to keep the agents running in a loop, something I struggled with.
  • He cites Gemini 3 Pro as pivotal for speed improvements, arriving at about the right time to give him a boost. This is similar to where I leaned into GPT-Codex-High.
  • Building custom tools for fuzzing, profiling and scraping contributed to Stenström’s success. His 8,500+ passing tests are an order of magnitude higher than the piddly ~100 load-bearing tests that I assumed would suffice.

the Bach faucet

As per Dr Kate Compton:

A Bach Faucet is a situation where a generative system makes an endless supply of some content at or above the quality of some culturally-valued original, but the endless supply of it makes it no longer rare, and thus less valuable

She links to a 15-year-old article in the Guardian that recounts how composer and computer scientist David Cope built “a little analytical engine” to generate thousands of original Bach chorales. In time, Cope built a successor tool (which he named Emily) to understand and ultimately emulate “the works of 36 composers.”

Cope will ask Emily a musical question, feeding in a phrase. Emily will respond with her own understanding of what happens next. Cope either accepts or declines the formula, much in the way he would if he was composing “conventionally”.

After a short search I can’t find clear evidence of it, but these releases were apparently met with dismay from the industry:

Critics convinced themselves that they heard no authentic humanity in it, no depth of feeling, Cope was characterised as a composer without a heart; his recent memoir is called Tin Man.

As noted in an Offscreen review of a 2021 documentary about Cope, he “takes pleasure but also tremendous inspiration and motivation in the public’s criticisms.”

“I want the negative reaction,” Cope professes, “I feed off of it. I keep going because of it. It’s mine and mine alone, and I love it.”

Cope mischievously and semi-ironically calls it “blasphemous music.”

And in an interview with Ryan Blitstein in the Pacific Standard, also from 2010, his emphatic position aligns with the pithy adage “good artists copy, great artists steal”:

“Nobody’s original,” Cope says. “We are what we eat, and in music, we are what we hear. What we do is look through history and listen to music. Everybody copies from everybody. The skill is in how large a fragment you choose to copy and how elegantly you can put them together.”

Cope felt pretty strongly about the tools he had created:

He can’t imagine the possibility of going back to writing with just his own intuition and a pen and paper. “The programs are just extensions of me. And why would I want to spend six months or a year to get to a solution that I can find in a morning? I have spent nearly 60 years of my life composing, half of it in traditional ways and half of it using technology. To go back would be like trying to dig a hole with your fingers after the shovel has been made, or walking to Phoenix when you can use a car.”

Another mention of a shovel caught my eye in the Pacific Standard piece:

“All the computer is is just an extension of me,” Cope says. “They’re nothing but wonderfully organized shovels. I wouldn’t give credit to the shovel for digging the hole. Would you?”

David Cope died at age 83 earlier this year.

“People tell me they don’t hear soul in the music,” he says. “When they do that, I pull out a page of notes and ask them to show me where the soul is. We like to think that what we hear is soul, but I think audience members put themselves down a lot in that respect. The feelings that we get from listening to music are something we produce, it’s not there in the notes. It comes from emotional insight in each of us, the music is just the trigger.”

why “ammil industries”?

Ammil is “the sparkle of morning sunlight through hoar-frost.” That exact moment when sun hits ice crystals on leaves on trees before they melt. Exists for minutes, requires witness, disappears.

Industries suggests a sense of scale, perpetuity, production without pause. No witness required. You can’t wrap your arms around industries, unlike a tree.

Ammil was a particularly obvious gem of a word in Robert Macfarlane’s Landmarks, an inquiry into how language shapes our perception of terrain, weather and place. Lost words gain life through use – if you use them, we use them, they’re not truly lost.

In a post-LLM world, everything can be generated. Text pours out at industrial pace. Links proliferate, provenance unknown. Products multiply, bearing the uncredited fingerprints of millions.

ammil industries is a small-r research group exploring what remains authentic when everything can be synthetic.

We write, build tools, and create visualizations that trace the care, craft, and work required to persist.