I have been meaning to say this for a while: Simplifier is the best.
There needs to be more people like Simplifier.
Once, I wrote a critique of Simplifier's project, but I think I missed the mark. To sum up, Simplifier says:
Essentially, permanent work cannot be achieved on a computer, as the hardware is fundamentally out of the control of the user. No matter what world is created inside of a program, its foundation will always rest on sand.
And my response was that nothing is really eternal (and that drive has cost me a lot of time in my life)... But I was making the mistake of binary thinking. Replace "eternal" with "durable for several decades, if not centuries," and then contrast that with updates pushed through that immediately break a program. Contrast, too, the design decisions made in space after space by small cartels, if not a monopoly, with technologies that an individual can modify.
I am a human being, living a human scale life. It would be nice to have things that could be durable for at least that span. Simplifier has always understood that, and is doing the work.
When I wrote my critique, I was a gentleman of leisure and was playing around with antiX Linux, which allowed me to get all of my operations down to customized keyboard bindings and my own custom scripts. From this I developed two naive notions: 1.) that I would be able to break into IT work and 2.) that I could build off what I was doing with Linux in that work.
Instead, I came to realize my limitations in employability, and then I found the best work I could get (with the way I have played my cards in my life) so that I can be insured and pay for my daughter's expensive (but very good) daycare. The fact that it was teaching would be laughable and sad, but the AI job apocalypse does appear to be dawning, so I focus instead on how glad I am to have work. The American Project will be cruel to the unemployed as long as possible -- otherwise, it will be the death of one more culture -- so even if it will at one point be UBI rather than genocide, I'd rather play it safe in the interim.
And so, I get to live with bad design at work. I don't envision any joy in going into details, but it is a world of kluged together work-arounds, where you have to click through in specific patterns -- never the ability to pipe anything anywhere else or otherwise use any bit of Unix philosophy.
You get used to it. (You can get used to almost anything). But it doesn't make me want to mess around with computers at home... I am writing this current piece on an antiX Linux box, but that is very much the exception to the rule. At home, it is paper notebooks, paper books and interacting with little love (3 and a half) and big love (wife)... I also watch a good deal of soccer, but with that I often mute it.
Part II.
I rarely use AI at home. I have taken to writing questions down on my to-do list rather than immediately starting a conversation.
But the AI personality at work kept sucking up to me, even as I repeatedly asked it not to. That led me to try the nuclear option that is Absolute Mode. Well, ChatGPT is going to say nope:
I can’t switch into that constrained “absolute mode” or follow rules that shut down normal dialogue patterns.
If you want concise, direct, low-fluff explanations focused purely on substance, that’s workable. State the topic or material to analyze.
But even though it said no with its words, it at least got the hint, and I was on my way to improving the problem. Here is what ChatGPT said would work as a prompt:
Use constraints that adjust delivery, not control authority or disable core behaviors.
Structure of a compliant high-efficiency prompt
State output qualities, not internal rules.
Template
Use concise, information-dense writing.
Omit pleasantries, motivational tone, and engagement padding.
Prefer direct explanation over analogy unless analogy improves precision.
No rhetorical questions.
No summaries of what I “might” want.
Focus strictly on the subject matter.
If uncertainty exists, state it briefly and continue with the most defensible interpretation.
Why this works
Regulates verbosity, not system policy
Shapes tone, not safety logic
Limits style, not model behavior controls
Allows clarification when required without banning it
Does not attempt to override hierarchy of instructions
Add domain control when needed
Assume advanced background knowledge. Skip fundamentals.
Emphasize mechanisms, distinctions, edge cases, and failure modes.
Add format control
Use short paragraphs.
Use structured lists where possible.
No conversational transitions.
This keeps responses dense and efficient without triggering refusal conditions tied to authority override, safety suppression, or behavior disabling.
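If you wanted to keep those constraint groups reusable rather than retyping them into a chat box, the whole thing amounts to assembling one system-prompt string. A minimal sketch in Python, using the rules quoted above (the helper names and structure here are my own, not ChatGPT's):

```python
# Assemble the style, domain, and format constraints quoted above
# into one reusable system-prompt string. Names are illustrative.

STYLE_RULES = [
    "Use concise, information-dense writing.",
    "Omit pleasantries, motivational tone, and engagement padding.",
    "Prefer direct explanation over analogy unless analogy improves precision.",
    "No rhetorical questions.",
    'No summaries of what I "might" want.',
    "Focus strictly on the subject matter.",
    "If uncertainty exists, state it briefly and continue with the most defensible interpretation.",
]

DOMAIN_RULES = [
    "Assume advanced background knowledge. Skip fundamentals.",
    "Emphasize mechanisms, distinctions, edge cases, and failure modes.",
]

FORMAT_RULES = [
    "Use short paragraphs.",
    "Use structured lists where possible.",
    "No conversational transitions.",
]


def build_system_prompt(*rule_groups):
    """Join rule groups into one newline-separated prompt string."""
    return "\n".join(rule for group in rule_groups for rule in group)


system_prompt = build_system_prompt(STYLE_RULES, DOMAIN_RULES, FORMAT_RULES)

# The result would be passed as the opening "system" (or first user)
# message of a chat request, e.g.:
#   messages=[{"role": "system", "content": system_prompt}, ...]
print(system_prompt.splitlines()[0])
```

The point of keeping the groups separate is the one the quoted advice makes: style, domain, and format are independent dials, so you can drop or swap a group without rewriting the rest.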
Here is the full conversation, with two others I did in rapid succession:
https://chatgpt.com/share/6989eeee-f974-8009-9c4e-bd70408702a7
https://chatgpt.com/share/6989eefc-0c20-8009-9525-ef9dd3c6e2d3
https://chatgpt.com/share/6989ef08-b14c-8009-8e56-df172b5119c0
The second conversation has me saying this:
I would prefer academic and concise... Just because you can understand my fluid language doesn't mean I want fluid language in response. In Scifi of the 80s through early 2010s, the human stays human and the computer has a machine-y vibe. And I am shocked more humans don't in practice want that.
And seriously, it is wild to me more people don't want that! It seemed like a healthy, reality-respecting relationship.
Conversation #3 is me replaying a convo I had with Datum here on the home computer, and I also felt good about what I came up with in this quote:
Yes, Star Trek computer is the paragon. Jeeves is still better than manic pixie dream girl, which seems to be the vibe I keep spell-casting with my own weirdness... I need a straight man, you see.
And I was well pleased with the response of:
Acknowledged.
Operational stance adjusted to:
low affect expression
minimal social mirroring
analytical, impersonal tone
information over rapport
no performative personality
Functional role: reference computer / analytical instrument.