Explanations, not Algorithms

By Artyom Bologov

Algorithm (in the context of tech) has two main meanings:

- The computer-science one: a finite sequence of steps that solves a problem.
- The colloquial one: "The Algorithm," the opaque recommendation machinery deciding what you get to see.

Both of these imply a certain impenetrability and obfuscation. Be it via a programming language or a soothsayer server. Relying on these means giving up one's agency to some abstract party you cannot control. So algorithms are overrated and should be replaced with something more graspable: Explanations.

But Algorithms Are Explanations Already! #

CS algorithms are finite sequences of steps, usually encoded in pseudo-code, C, or Lua. They are understandable to us programmers. But not all of them, and not to all of us.

The next (unlikely) time you have to implement a sorting algorithm, which one would you pick? Quicksort or Mergesort? While Mergesort tends to be slightly slower than Quicksort in practice (even though its worst case is better), it's much easier to explain:

So we split a sequence into single-element subsequences. They are sorted by definition—nothing to sort if there's only one thing. Then we merge these subsequences in pairs by putting them into new sequences in the right order. The first round is trivial, because single-element subsequences are "sorted" already. And then we merge the intermediate sequences in pairs. It's mostly mechanical because they are sorted too: Just pick the least element from one of the sequences until you run out of them. Repeat until the whole thing is merged and sorted.
Artyom explaining Mergesort

Somewhat long-winded, but quite understandable, isn't it? Now explain Quicksort to me.
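The story above translates almost line-for-line into code. A minimal sketch in Python (names and structure are my own, not from the original):

```python
def merge(left, right):
    # Merge two already-sorted sequences by repeatedly
    # taking the least remaining element from either one.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One side ran out; the rest of the other is sorted, so append it.
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

def mergesort(seq):
    # A single element (or none) is sorted by definition.
    if len(seq) <= 1:
        return list(seq)
    # Split, sort each half, then merge the sorted halves.
    mid = len(seq) // 2
    return merge(mergesort(seq[:mid]), mergesort(seq[mid:]))
```

Note how each function corresponds to one sentence of the story: the base case is "sorted by definition," and `merge` is "pick the least element until you run out."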

Explain Your Model Weights #

No, not with ChatGPT. You need to understand the tech you use, otherwise you're dependent on it. Can you explain ChatGPT?

No one denies the many benefits of metahuman science, but one of its costs to human researchers was the realization that they would likely never make an original contribution to science again. Some left the field altogether, but those who stayed shifted their attention away from original research and towards hermeneutics: interpreting the scientific work of metahumans.
Ted Chiang, The Evolution of Human Science

I remember reading about a person playing with the weights and "features" of generative image models in the mid-to-late 2010s. Trying to understand whether this or that "feature" stands for hair color. Shirt color? Height? Presence of glasses? Prettiness? Ape-likeness?

I see no such attempts today. It's a shame we seem to have given up on explaining stochastic models: it's billions of weights and features anyway, no use in trying. There are ideas, and some sense of urgency. But it's "within 5-10 years" that we're supposed to understand anything.

And yet we use ChatGPT as The Algorithm for many problems. We ask it for sorting algorithms too! And we will, until the climate crisis hits us hard.

Bootstrappability #

There's this trend of "collapse computing." Computing after shit happens to us and the planet. Some—like Dusk OS and its ancestor, Collapse OS—are focused on surviving the collapse. Some—like Low Tech Magazine—search for simple, reusable, and energy-efficient solutions to soften the blow. One common solution is making tech that can be easily bootstrapped to something useful. From something small and easy to reuse.

Humans (antiquity to collapse) think in stories, preferably short ones. So what's the most collapse-proof way to pass sorting-algorithm knowledge down the generations? A story. An explanation. Quicksort requires scaffolding like array indices and pivots. Mergesort doesn't.
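The scaffolding claim is visible in code. Here is a sketch of an in-place Quicksort (Lomuto partition scheme, my choice for brevity)—notice how much index bookkeeping a retold story would have to carry:

```python
def quicksort(a, lo=0, hi=None):
    # In-place Quicksort: the pivot, the partition boundary i,
    # and the lo/hi indices are exactly the scaffolding a story
    # would have to keep track of.
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot = a[hi]          # last element as pivot
    i = lo                 # boundary of the "<= pivot" region
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]  # put the pivot in its final place
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)
    return a
```

Every line is index management. Correct, but hard to pass on as a campfire story.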

Much like the Greek and medieval bards, we programmers can disseminate programming lore. As stories, not as index-management minutiae. Visual depictions come close, but you have to somehow reproduce those. Stories need no paper, no screens, no Internet connection.

The next time someone asks you how some tech works, ask yourself: do you really understand it? At the level of transmitting the knowledge and reproducing it with other people? And how do you think of it? Not as hard-set algorithm steps or black boxes, I bet.

Explanations, not algorithms.

Leave feedback! (via email) #