LLMs, But Only Because Your Tech SUCKS
By Artyom Bologov
My company introduced the "AI Budget." So that we can buy our favorite ChatGPT flavors.
I'm buying none yet. Partially because I'm a conservative whining little... thing. But also because I don't need "AI." My tech (Clojure/Lisp + Emacs + REPLs + xref) covers most LLM use-cases.
You only need LLMs if your technology SUCKS.
Disclaimer on "technology"
This post is mainly targeted at programmers. Thus some (or all) of the points might not apply if you aren't one. I'm not handling cases like text and image generation here. Such use of "AI" is to be roasted elsewhere. I'm only concerned about programming technology that SUCKS.
Boilerplate Generation: Macros!
Java. C++. JavaScript. All these languages need boilerplate generation, and LLMs provide. But that's only useful because these languages genuinely require boilerplate and cannot abstract it away. Unlike languages with syntactic abstractions. Like macros.
Asking ChatGPT to generate twenty lines of boring code is cool. But what if you didn't need those twenty lines in the first place? Just write a macro encapsulating them and use it everywhere.
Not a C Preprocessor macro (though I liked CPP and used it as my Static Site Generator). Nor a Visual Basic macro (though it might be fun to exploit your colleagues' machines). A real macro. As in Rust, Clojure, or Lisp.
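A minimal sketch in Common Lisp; the accessor names and the logging behavior are made up for illustration:

    ;; Wrap the repeated pattern once (hypothetical example):
    (defmacro define-logged-accessor (name slot)
      "Define NAME as a logging reader for SLOT of a plist-style object."
      `(defun ,name (object)
         (format t "Accessing ~a~%" ',slot)
         (getf object ,slot)))

    ;; ...then it's one line per use, not a pasted block:
    (define-logged-accessor user-email :email)
    (define-logged-accessor user-name  :name)

One macro, zero boilerplate, no prompt.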
You only need LLMs if your language SUCKS and doesn't provide macros.
Data Generation: Keyboard Macros!
Asking ChatGPT to generate a list of Unicode code points for alphanumeric chars. Sounds like the only way to churn out this boilerplate?
No. You can just copy the data from the docs and process it. Especially if your system/shell/editor allows for user-facing automation. Say, by providing keyboard macros. Just define a new keyboard macro processing each data line. And run it on a set of lines you copied from the docs.
Alternatively, use tools like sed or (God forgive) awk to filter and shape the lines. Whatever works and (sure) takes less typing (and fewer burned fossils) than prompting an LLM.
But sed/awk is hard!
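Fair. Then do it in whatever language you already live in. A minimal sketch in Common Lisp, assuming you pasted UnicodeData.txt-style lines (semicolon-separated, code point first, in hex) into a made-up lines.txt:

    ;; Collect code points of letter-ish entries from pasted doc lines:
    (with-open-file (in "lines.txt")        ; hypothetical file name
      (loop for line = (read-line in nil)
            while line
            when (search "LETTER" line)     ; crude alphabetic filter
              collect (parse-integer
                       (subseq line 0 (position #\; line))
                       :radix 16)))         ; code points are hex

Still less typing than explaining the data format to a chatbot.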
You only need LLMs if your system SUCKS and doesn't allow for automation.
Tests: REPLs!
Tests are hard to write. Tests are disconnected from dev setups. Tests are expensive to run and need a dedicated cloud CI. An unavoidable use-case for cargo-culted test boilerplate generation, right?
No. You don't need LLM-generated test boilerplate if your test system is good enough. Like providing tests in the code or allowing for interactive development.
I'm talking REPLs. True REPLs. The ones that allow:
- Redefining classes and functions while the program is running.
- Debugging interactively when something goes wrong.
- Introspecting the state of the program and anything in it, structurally.
Common Lisp REPLs provide that.
Clojure and Scheme REPLs provide some of that.
Python REPLs don't, and that SUCKS.
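A session sketch; the prompts and printing below are from a hypothetical SBCL + SLIME session, and exact output varies by implementation:

    CL-USER> (defun greet (name) (format nil "Hi, ~a" name))
    GREET
    CL-USER> (greet "world")
    "Hi, world"
    ;; Program still running; redefine on the fly:
    CL-USER> (defun greet (name) (format nil "Hello, ~a!" name))
    GREET
    CL-USER> (greet "world")
    "Hello, world!"
    ;; Introspect anything, structurally:
    CL-USER> (describe #'greet)
    #<FUNCTION GREET>
      ...                ; implementation-specific details
    ;; And when something errors, you land in an interactive
    ;; debugger with restarts, not a stack trace and a shrug.

No restart, no rebuild, no prompt.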
Bonus point for inline tests
REPL setup with inline tests is even better. Imagine: writing a function, testing it right inside the file, and preserving the results as tests right there. Might make for spaghetti files, but won't... if you're disciplined enough. You're disciplined enough, right?
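A minimal sketch of the idea in Common Lisp, with plain ASSERT standing in for a proper inline-test library:

    (defun celsius->fahrenheit (c)
      (+ 32 (* 9/5 c)))

    ;; Checked on every load, preserved right next to the code:
    (assert (= (celsius->fahrenheit 0) 32))
    (assert (= (celsius->fahrenheit 100) 212))

Write the function, poke it in the REPL, freeze the good results in place.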
You only need LLMs if your test setup SUCKS and doesn't allow for interactive and inline checks.
Explaining FFMPEG: Better Docs!
Learning and using ffmpeg and imagemagick is hard.
Remembering all their switches might require a brain wipe first.
Using them, even once in a decade (especially once in a decade!), is horror material.
LLMs are the only option to help with these, aren't they?
No. Just design and document the APIs well enough. Then, maybe, just maybe, the tool will actually be usable by and for non-AI persons?
You only need LLMs if the documentation SUCKS and if the API is under-designed (and thus SUCKS).
Information Search: Search Engines!
Remember those times when one encountered a problem. Googled it. And got reasonable results, like a StackOverflow question or a software manual.
This is no longer the case, and that's why one has to use something like Perplexity. To get a readable summary scraped from dozens of sites, ranked by usefulness, and highlighted in the resulting listing. Like... a search engine? Scraping information, ranking it, indexing it, and providing a good way to query it. That's what good search engines do. Not "Instant AI slop answers."
Related: Information literacy and chatbots as search. TL;DR: Chatbots cripple your ability to make informed searches. By sounding authoritative and not showing human alternatives.
You only need LLMs if your search engine SUCKS and only cares about ads.
Conclusion: Tech!
Tech is cool and it doesn't end at fancy plagiarism machines. Try the existing stuff and use whatever actually solves the problem. And then maybe you won't need to burn all the fossils ChatGPT requires you to burn. Make Tech Not SUCK.