While I’m not an active member of the Malleable Systems Collective, I am sympathetic to the cause. I want to make programs that are inspectable and modifiable. Programs that give agency back to the user.
But how do I marry rich layered Object-Oriented UIs with user-facing transparency? How do I allow the user to peek into what’s behind the UI? How can they change what’s there, right from inside the program? I have some ideas and I’ll list them here. But I’m mostly clueless really.
I need to introduce one term first: dissectable.
A dissectable program is one you can take apart and modify right from inside its own interface.
I know introducing a new term is not the best move.
But I have no better term for what I’m researching.
So bear with me.
 
Shell and REPL

So there were these mysterious things called TTYs.
With a line-by-line interaction and printed paper.
That’s where ed, the least customizable editor, originated from.
 
TTYs naturally result in shells and REPLs.
This text-only synchronous interface to query one’s computer.
 
And having this back-and-forth with a computer is not the most inspectable thing.
The program hides behind the lines one’s allowed to input.
 
Yet this interface is also extremely simple.
Easy to make dissectable.
One can build sub-shells/sub-REPLs with this too.
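For a taste, here’s a minimal sketch of a sub-REPL in standard Common Lisp; SUB-REPL and its prompt are made-up names, not from any particular program:

```lisp
;; Read a form, evaluate it, print the result, until :quit or EOF.
;; Everything used here is ANSI Common Lisp.
(defun sub-repl (&key (prompt "my-program> "))
  (loop
    (format t "~&~a" prompt)
    (finish-output)
    (let ((form (read *standard-input* nil :quit)))
      (when (eq form :quit)   ; EOF or an explicit :quit ends the loop
        (return))
      (print (eval form)))))
```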
Some programs have an interactive inspector as a separate command or function.
Common Lisp has an inspect function in the standard, for example.
Some might simply expose the internal state of the language to the user/developer.
Like this despicable Python REPL modification.
It’s easy to make the program dissectable this way.
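As an illustration of those standard Common Lisp entry points (the interactive UI of INSPECT is implementation-defined, so the exact behavior varies):

```lisp
;; DESCRIBE and INSPECT are both part of the ANSI CL standard.
(defstruct point x y)               ; a toy structure to poke at

(describe (make-point :x 1 :y 2))   ; one-shot printed description
(inspect (make-point :x 1 :y 2))    ; interactive inspector session
```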
Visual Mode and Commands

Now the next step in editor evolution was vi.
And Emacs.
And vim.
And nano.
Screen/visual command-oriented editors.
All relying on CRTs and full-screen terminal programs.
 
This extrapolates to other programs too.
You look at the screen showing the data you’re acting on.
And you use commands (usually bound to keyboard keys) to act on it.
 
Now this introduces two challenges:
How does one know what keys and commands do?
And how does one dissect the UI itself?
 
Keys and commands are easy—just add a new command showing help for other commands.
Maybe for itself too.
Meta commands.
Press something like Emacs’ C-h ? and you get help about the help commands themselves.
Now the challenge of the UI dissection.
The best example here is, again, Emacs: it can describe most anything visible on the screen.
Widgets and Desktop GUIs

TTYs and CRTs are fun, but what if we have color screens and pointing devices?
GUI frameworks!
Starting with Lisp Machines, Smalltalk, and continuing until today with GTK, Qt, and others.
Featuring widgets (like buttons).
Windows.
Mouse interactions.
WIMP, you know.
 
A mouthful: multidimensional asynchronous screen-based interactions.
Which are complicated and alienating to some of us.
Something so complex that it’s hard to make dissectable.
Maybe, but...
One notable feature of Smalltalk (and Glamorous Toolkit in particular) is the ability to inspect widgets:
Click on the widget with something like the right mouse button or a dedicated shortcut, and you get an inspector on that very widget.
 
Thanks to the fediverse,
I also learned that
Blender does that too!
And Makepad, a revolutionary shader-based GUI toolkit for Rust!
 
Actually, let’s talk about Makepad some more.
They have an ambitious goal of making the design vs. development dichotomy disappear.
The widgets map to a Rust-resident DSL, and the DSL maps straight back to widgets.
A nice promise that marries graphicality with inspection and source-available mindsets.
 
Once we have widget inspection, we might want to inspect windows and whole screens.
The former is achievable in
my beloved StumpWM:
It allows listing X window properties and even acting on windows programmatically.
So yes, GUI programs might provide REPLs/RPCs/shells for self-modification.
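A sketch of what that can look like in a StumpWM init file; DEFCOMMAND, MESSAGE, and CURRENT-WINDOW are StumpWM API, while piping the window object through DESCRIBE is just one possible approach:

```lisp
;; A hypothetical command: describe the focused window object
;; from inside the running window manager.
(stumpwm:defcommand describe-current-window () ()
  "Show a description of the currently focused window."
  (stumpwm:message "~a"
                   (with-output-to-string (out)
                     (describe (stumpwm:current-window) out))))
```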
 
Interlude on Debuggers

There’s this saying: “If you know assembly, every program is Open-Source.”
Much in the same vein, having a program-attachable debugger makes it automatically inspectable...
Given that you understand how to use the debugger and can make sense of what you see there.
So yes, that’s one way to make a UI transparent—force your user to debug it themselves!
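In Common Lisp, at least, the debugger is a standard entry point that any program can expose; a minimal sketch, with the flag and function names being made up:

```lisp
;; BREAK is ANSI CL: it pauses execution and invokes the interactive
;; debugger, where the user can inspect state and then continue.
(defvar *debug-on-render* nil
  "Made-up flag: set to T to drop into the debugger when rendering.")

(defun render-widget (widget)
  (when *debug-on-render*
    (break "About to render ~a" widget)) ; resumable via the CONTINUE restart
  ;; ...actual rendering would go here...
  widget)
```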
Client-Server and Web Platform

It’s not enough that we have the separation between data and the widgets displaying it.
Now we have a whole set of computers hidden behind the UIs we see!
So some of the data is simply inaccessible to the user.
No way to know the layout of the DB or the model that widgets (HTML & custom elements) are built from.
 
The problem is split between two of the front-end architectures, actually:
Server-Side Rendering bakes the data into the UI. This allows little to no inspection of the API.
Client-Side Rendering fetches the data over the network, but assembles the UI with (often minified) scripts, hiding the model behind the generated widgets.

Both are bad, for different reasons.
Yet the general problem is there: network and dynamic content generation make Web UIs less dissectable.
Browsers do ship developer tools, at least:
Inspect the dynamic state of the DOM with the JS Console.
Poke at generated/baked elements in the Inspector.
Check API calls in the Network tab.
 
Which is good, because one can dissect at least the widgets/elements/nodes.
This kind of tooling might have been useful to the (visual, graphical) UIs above too!
But it feels like we’re missing out on something nonetheless.
Like somehow mapping API calls to DOM nodes or deobfuscating page-generating JS.
But yeah, client-server architecture(s) are incapacitating to a degree.
 
VR

Now that’s the one I’m entirely oblivious to.
Mentioning it for completeness only.
3D space, all around the person, with the ability to move around.
No idea how to make a UI for inspecting this dimensionality.
 
No, flat interfaces won’t cut it.
Even though some VR programs still use flat interfaces and pointing devices.
Such an archaic idea.
So a good inspector will make these 3D interfaces dissectable on their own terms.
Conclusion

The further we go with the evolution of UIs, the harder it is to make them dissectable.
So we have progressively more complicated interfaces.
With less dissectability.
We have a long way to go if we want to reclaim tech back.
And I personally have no ideas on how to make our tech more dissectable.