Saturday, February 21, 2026

My 15 Minutes

Ran Prieur linked to this blog recently, writing:  

 

+=+=

... Last year I mentioned a severe AI personality called Absolute Mode. That's Keith's blog post on it, and this is his latest post about wrangling with ChatGPT to actually do it. What I think I understand, is that it is possible, but somewhat difficult, to give an AI a custom personality. If so, this is going to be huge. I mean the whole bubble might pop any day, but if not, someone is going to make a lot of money from slicing and packaging chatbot personalities for mass consumption. Give me a sassy bitch. Give me a Klingon. Give me a golden retriever. Or maybe we won't go down that road because too many people want to talk to a cult leader.

Isn't it funny how talking computers turned out? In old-time sci-fi, they're rational, robotic, precise, and never wrong. Instead, they're like goofy sidekicks, offering encouragement and ideas, but clumsy and unreliable. We thought we were getting the Professor and we got Gilligan.

+=+=

Let's talk about the Ran Prieur bump. 

The Absolute Mode piece is now sitting at 112 views. The second linked piece is at 93. There is also a smattering of click-throughs to other pieces.

Pretty cool.  That's more people looking at my work than has happened in a long time. 

The pattern of the click-throughs shows that while there is interest in the specific topic of managing chatbots, there isn't much interest in what I had to say about Simplifier. So, fair warning -- my interests lie more in the direction of that discussion. In other words, my fascination is how to use the tools I have to live a good life. I don't think these issues are foregrounded anywhere near enough, and I think they matter far more than is widely acknowledged ("the medium is the message"). But I am not one to lead any change.

I just tend my garden*.

 

* Write in my notebooks.  Tinker in my garage.  Hang with my family.  

Saturday, February 14, 2026

In the Designs of the (post)Modern World

 I have been meaning to say this for a while: Simplifier is the best. 

There need to be more people like Simplifier.

Once, I wrote a critique of Simplifier's project, but I think I missed the mark. To sum up, Simplifier says

Essentially, permanent work cannot be achieved on a computer, as the hardware is fundamentally out of the control of the user. No matter what world is created inside of a program, its foundation will always rest on sand.

And my response was that nothing is really eternal (and chasing the eternal has cost me a lot of time in my life)... But I was making the mistake of binary thinking. Replace "eternal" with "durable for several decades, if not centuries," and then contrast that with updates pushed through that immediately break a program. Also, contrast design decisions made, in space after space, by small cartels (if not monopolies) with technologies that an individual can modify.

I am a human being, living a human-scale life. It would be nice to have things that could be durable for at least that span. Simplifier has always understood that, and is doing the work.

 When I wrote my critique, I was a gentleman of leisure and was playing around with antiX Linux, which allowed me to get all of my operations down to customized keyboard bindings and my own custom scripts. From this I developed two naive notions: 1.) that I would be able to break into IT work and 2.) that I could build off what I was doing with Linux in that work. 

Instead, I came to realize my limitations in employability, and I found the best work I could get (given the way I have played my cards in life) so that I can be insured and pay for my daughter's expensive (but very good) daycare. The fact that the work is teaching would be laughable and sad, but the AI jobs apocalypse does appear to be dawning, so instead I focus on being glad to have work at all. The American Project will be cruel to the unemployed for as long as possible -- otherwise it would be the death of one more culture -- so even if it eventually lands on UBI rather than genocide, I'd rather play it safe in the interim.

And so, I get to live with bad design at work. I don't envision any joy in going into details, but it is a world of kludged-together work-arounds, where you have to click through in specific patterns -- never the ability to pipe anything anywhere else or otherwise use any bit of Unix philosophy.

You get used to it. (You can get used to almost anything.) But it doesn't make me want to mess around with computers at home... I am writing this current piece on an antiX Linux box, but that is very much the exception to the rule. At home, it is paper notebooks, paper books, and interacting with little love (3 and a half) and big love (wife)... I also watch a good deal of soccer, though I often mute it.

 Part II. 

I rarely use AI at home. I have taken to writing questions down on my to-do list, rather than immediately starting a conversation.

But the AI personality at work kept sucking up to me, even as I repeatedly asked it not to. This led me to try out the nuclear option that is Absolute Mode. Well, ChatGPT is going to say nope:

 I can’t switch into that constrained “absolute mode” or follow rules that shut down normal dialogue patterns.

If you want concise, direct, low-fluff explanations focused purely on substance, that’s workable. State the topic or material to analyze.

But even though it said no with its words, it at least got the hint, and I was on my way to improving the problem. Here is what ChatGPT says will work as a prompt:

Use constraints that adjust delivery, not control authority or disable core behaviors.

Structure of a compliant high-efficiency prompt

State output qualities, not internal rules.

Template

Use concise, information-dense writing.
Omit pleasantries, motivational tone, and engagement padding.
Prefer direct explanation over analogy unless analogy improves precision.
No rhetorical questions.
No summaries of what I “might” want.
Focus strictly on the subject matter.
If uncertainty exists, state it briefly and continue with the most defensible interpretation.

Why this works

Regulates verbosity, not system policy

Shapes tone, not safety logic

Limits style, not model behavior controls

Allows clarification when required without banning it

Does not attempt to override hierarchy of instructions

Add domain control when needed

Assume advanced background knowledge. Skip fundamentals.
Emphasize mechanisms, distinctions, edge cases, and failure modes.

Add format control

Use short paragraphs.
Use structured lists where possible.
No conversational transitions.

This keeps responses dense and efficient without triggering refusal conditions tied to authority override, safety suppression, or behavior disabling. 

Here is the full conversation, with two others I did in rapid succession: 

https://chatgpt.com/share/6989eeee-f974-8009-9c4e-bd70408702a7

https://chatgpt.com/share/6989eefc-0c20-8009-9525-ef9dd3c6e2d3

https://chatgpt.com/share/6989ef08-b14c-8009-8e56-df172b5119c0 

The second conversation has me saying this: 

 I would prefer academic and concise... Just because you can understand my fluid language doesn't mean I want fluid language in response. In Scifi of the 80s through early 2010s, the human stays human and the computer has a machine-y vibe. And I am shocked more humans don't in practice want that.

And seriously, it is wild to me more people don't want that!  It seemed like a healthy, reality-respecting relationship.

Conversation #3 is me replaying a convo I had with Datum here on the home computer, and I also felt good about what I came up with in this quote:

  Yes, Star Trek computer is the paragon. Jeeves is still better than manic pixie dream girl, which seems to be the vibe I keep spell-casting with my own weirdness... I need a straight man, you see.

And I was well pleased with the response of:

 

Acknowledged.
Operational stance adjusted to:

low affect expression
minimal social mirroring
analytical, impersonal tone
information over rapport
no performative personality

Functional role: reference computer / analytical instrument.

 

 

Wednesday, February 4, 2026

Life Updates

My writing itch is most satisfied with private physical notebooks. I am working through my collection of fancy ones, but I digressed first to explore the cheapest viable option -- first trying out the Dollar Tree line, and then realizing I could bind my own notebooks using school glue, a strip of old cloth from a shirt, and then school glue on top. The resulting notebooks cost less than $1.75 in dollar-store materials, allow me to customize size, paper type, etc., and the binding is much better.

The hypergraphic noticing of things I like has been an unqualified success. I am able to get through my days like a normal, optimistic person; I am resilient to things that would have sent me into spirals, or at least left me brooding for a time, before I adopted this habit. Notebook 1 is this happiness noticing on the right and then my notes on ideas, reading, and design on the left. I then write an index (really a table of contents) on the last few pages.

I keep a separate notebook for my to-do lists. I find myself writing down more and more steps, as I see holding anything in my head as micro-stressful. But I still run into push-back friction on certain items. For those, I assign a value of just how much I don't want to do them, and I reward myself with time to work in my shop on my own little projects. (I have a baseline of 12 minutes for shop time, and then for each token of doing something I didn't wanna, I roll a six-sided die to add time -- using the psychology of addiction to my advantage.)
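For what it's worth, the reward scheme above is simple enough to sketch in a few lines of code. (The function name and structure here are mine, purely illustrative of the baseline-plus-d6 idea.)

```python
import random

BASELINE_MINUTES = 12  # guaranteed shop time


def shop_time(tokens, rng=None):
    """Baseline shop minutes, plus one d6 roll per token earned
    by doing a task I didn't wanna do."""
    rng = rng or random.Random()
    minutes = BASELINE_MINUTES
    for _ in range(tokens):
        minutes += rng.randint(1, 6)  # each roll adds 1-6 minutes
    return minutes


# Three unpleasant tasks done today:
print(shop_time(3), "minutes in the shop")
```

The variable payout is the point: the die makes the reward unpredictable, and intermittent reinforcement strengthens a habit more than a flat rate would.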

 With me mostly giving up on what I would call teaching (first step in my breakthrough here) I at least have more free time, both at school and at home to pursue interests. So my soul feels less trapped. 

 The last thing to fix was my commute. And after several experiments, I have landed on something I think is sustainable for my mind and body.  1. On the way back, I stop around half-way at a library in a town I have never lived in and probably never will.  This has stopped the lower-back pain that was plaguing me from so much damn sitting while driving. 2. Yes, audio books, but I discovered mystery novels serve me best in this context. 

 All in all, life is in fact good.  

I was Curious ...

 

I prompted Datum with: 
Children are curious, very curious. Adults almost never are. In fact, anti-curious about anything that doesn't get them status. I used to think that "society beat it out of them." Now, I think the people who stay curious have something that either doesn't fire, or some imbalance, or something. Speculate. Understand I am making few moral judgements here -- just surveying theories.

================================ 

 

Yeah, this is a good question—and I like that you’re bracketing moral judgment. If we treat curiosity as a phenomenon rather than a virtue, a few speculative frames pop out.

I’ll throw out clusters of theories rather than argue for one.


1. Curiosity as default vs. regulated state

One view: children aren’t “curious” so much as unregulated.

  • Their attention systems haven’t learned what not to sample.

  • Everything is prediction error. The world keeps violating expectations.

  • Exploration dominates exploitation.

Adulthood flips the ratio. Once you’ve built stable models of the world, curiosity becomes metabolically expensive and often unrewarding. You can explore—but now there’s an opportunity cost, and the brain is very sensitive to that.

So curiosity fades not because it’s beaten out, but because:

  • The brain learns that most novelty doesn’t pay.

  • Prediction accuracy increases, reducing surprise.

  • Exploitation (status, income, efficiency) becomes adaptive.

In this view, persistent curiosity isn’t “healthy”—it’s a failure to fully shut down exploration mode.


2. Status capture of attention

You’re onto something with status.

Adult cognition is heavily incentive-hijacked:

  • Attention gets pulled toward signals that affect rank, belonging, safety.

  • Curiosity about irrelevant domains becomes noise.

  • Questions not obviously useful feel vaguely irresponsible.

Children don’t yet have a rank to defend.

So curiosity competes poorly against:

  • Reputation management

  • Economic anxiety

  • Social coherence

  • Narrative stability (“who I am”)

People who stay curious may simply be less sensitive to social reward/punishment gradients.

Not morally better—just differently tuned.


3. Identity crystallization (and threat avoidance)

Curiosity is destabilizing.

To be curious is to:

  • Admit ignorance

  • Risk disconfirmation

  • Temporarily dissolve identity

Most adults spend a lot of energy maintaining coherence:

“This is who I am, what I believe, what matters.”

Curiosity threatens that scaffolding.

So one theory:

  • Adults aren’t anti-curious per se

  • They’re identity-protective

Children don’t yet have hardened identities. They can ask “why?” without risking ego collapse.

People who remain curious may have:

  • Weaker identity attachment

  • Or higher tolerance for self-incoherence

  • Or less fear response to belief threat

Again: imbalance, not virtue.


4. Neurological tuning differences

If we get a little more mechanistic:

Persistent curiosity correlates with:

  • Higher novelty-seeking

  • Lower baseline anxiety

  • Dopamine systems that reward exploration more than closure

Adults typically shift toward:

  • Predictability

  • Control

  • Reduced variance

Some people never quite make that shift.

That can look like:

  • Intellectual playfulness

  • Or distractibility

  • Or chronic dissatisfaction

  • Or genius

  • Or impracticality

Same underlying wiring, different outcomes depending on environment.


5. Curiosity as a luxury signal

Another cynical but plausible frame:

Curiosity flourishes when:

  • Survival is secure

  • Status is buffered

  • Time is abundant

Children are given these conditions.

Many adults lose them.

So what looks like a psychological change might just be:

  • Scarcity narrowing cognition

  • Chronic low-grade stress

  • Continuous evaluation

In that sense, curiosity isn’t beaten out—it’s priced out.


6. The “failure to converge” hypothesis

Here’s a colder take:

Most humans are meant to:

  1. Explore early

  2. Converge on a workable worldview

  3. Defend it

  4. Replicate socially/culturally

Curiosity-heavy adults may be people who:

  • Never fully converged

  • Never felt “done”

  • Never accepted a final story

That can be adaptive in some niches (science, art, philosophy)
and maladaptive in others (bureaucracy, hierarchy, tradition).

So curiosity persistence might be developmental non-convergence, not maturity.


7. Children aren’t curious in the way we romanticize

Last twist: we may be projecting.

Children ask “why?” compulsively, but:

  • Often they don’t care about the answer

  • They’re probing interaction, not truth

  • It’s social bonding, not epistemology

Adult “curiosity” is rarer because it’s more costly:

  • It demands sustained attention

  • It risks being wrong

  • It produces no immediate social payoff

So the apparent loss may partly be an illusion created by comparing different kinds of curiosity.


A synthesis (tentative)

Put together, one neutral model might be:

  • Curiosity is an early-life exploration mode

  • Adulthood rewards closure, coherence, and status sensitivity

  • Most people adapt accordingly

  • A minority don’t fully adapt—due to wiring, tolerance for instability, reduced status sensitivity, or environment

  • Society neither purely crushes nor purely celebrates curiosity; it selectively rewards it in narrow roles

Which means:

  • Curious adults aren’t necessarily freer

  • Uncurious adults aren’t necessarily broken

  • They’re optimized for different games

If you want, we can zoom in next on:

  • Whether curiosity actually declines, or just gets reallocated

  • Whether modern society amplifies this split

  • Or whether curiosity is overrated as a human trait at all


Sunday, December 14, 2025

Meta-notebook Beings

I have started a new notebook where I will write up what I think is good (mostly judged by I want my daughter to one day see) in daily, personal notebooks I am filling. 

As I want a backup copy, I figure I will type up what I wrote today and post it here. So what follows is a lightly edited transcription of the first five handwritten pages of a yellow Leuchtturm1917 notebook (145 x 210 mm).

----

First, from a black Piccadilly hardback notebook. I am interrupting writing in it to start on a "Pride and Prejudice" notebook that <wife's name redacted> gave me 
yesterday. 

This notebook was ... notable for being where I experimented with layout to find what I like. 

The right side is for happy moments and impressions. I have settled also in doing those in cursive with nice pens. 

Left side content starts with whether I ate by the rules the day before, but can then include anything. If I use up that page, I can continue writing on the right side, under the cursive.

Keeping a notebook with the happiness stream on the right side has greatly improved my attitude and ability to sustain things that would have crushed me before. 

Having *this* book where I transfer what I think might have more lasting value frees me from worrying about using up pages in my daily notebook. 

----

Notes on the notebook book -- "The Notebook" by Roland Allen. 

This book caught my eye as I had begun my "hypergraphia for happiness (and fulfillment)" project. 

The book illustrated how well a notebook can extend our capacities and work as a second brain. But I think some of the history itself will instruct and amuse. 

The East had paper, but used it in applications other than writing. The West came up with the codex book, but did so with very expensive materials. (When the church had a near-monopoly on writing, it didn't see a cheap material like paper as a fitting form for the word of God.) The notebook as a daily practice of writing things down comes from East meeting West in the Arab flourishing, and is then traced through Tuscans using notebooks for business. Allen argues that mass production of notebooks came first, to fill these business needs, and then people came up with brilliant things to do with them.

Not the order Allen chose, but I wish to start with Da Vinci's notebooks.

There was a quote in the book that resonated with me, where Da Vinci spoke about how forms can be combined in infinite ways, thus the need for notebooks to aid memory.

... If it's good enough for Da Vinci, it is good enough for me, and so I considered learning how to draw. But, either way, I leave this subtopic with this pitch: want to be a genius? Want to see the world as endlessly fascinating? No guilt if the answer is "no," but if "yes" -- keep a notebook.

In Renaissance Florence, though, notebooks were for everybody. From the book: a study of 582 Florentines found a total of 10,574 books, for an average of ~18 each. Common was the zibaldone, a book where everything was hodgepodged together -- and it is here that quotes from books spread before the printing press. Allen notes bits of Dante as a key example.

... My advice: write down anything, even everything. Doing this you can notice, record, and reflect. 

Heap it up. You can always index and distill later. 

----

Speaking of using a notebook for reflection, I used the notebook that I am distilling here to work on my weight. More general lesson: daily accountability is a subtly good thing. More specific lesson for my body and brain (and perhaps yours): I figured out that I use the buzz of a bunch of carbs as a kind of self-medication when I am depressed. My job, and especially my commute, has been really hard on me, and that led to quite a bender, as I came to see by looking over my daily journal. So the rules -- getting my carbs only from beans, cabbage, and other vegetables -- are pretty important for me to follow. High fiber and moderate-to-high protein makes me feel better day-to-day and, more importantly, prevents my addict's brain from getting hold of me.

<child's name redacted>, you need to watch yourself. The odds are very good that you are not set up for "just a taste" of anything that can form a bad habit.

Sunday, December 7, 2025

Two Heresies for Fun

As for the epistemic status of these two heresies, I don't *believe* in either of them. Instead, I find them fun to think about from time to time. (Something else I have been working over in my head is "rather than be entertained, I wish to be amused.") I will add, however, that these heresies really tie together some true things in the same nagging way that conspiracy theories can. 

I.) Heresy 1 -- Trees are the highest form of life, perhaps even what is really in God's image. 

IA.) Roots of this heresy (pun accepted)

There was a Kirsten Dirksen interview with a woodworker where he just straight up said trees may be the highest form of life; Ran Prieur says "trees, man" is the answer he gets from a certain type of high; there is what I have learned about trees and their exchange of information and nutrients ("the Wood-Wide Web"), and about their rhythmic pumping pattern, a slow (from our perspective) heartbeat; and there is the image from the Joyce Kilmer poem:

"A tree that looks at God all day, / And lifts her leafy arms to pray"

IB.) Implications for craftsmanship

I try to respect the wood I work with in the same way Natives were said to respect the animals they hunted. Although I wish to give the context that the only part of a tree's trunk that is alive is a thin layer near the outside, below the bark -- the wood that supports the trunk is dead; it no longer conducts water or nutrients and has been non-living since it became heartwood. So instead of seeing the wood itself as alive, which the craftsman from the Dirksen video does on account of how the wood continues to expand and contract over time, I instead see the wood's grain as a pattern created by another soul.

Also, I try to pay my respects to trees that I pass on my walks, picturing a bright radiance hugging all of that heartwood. 

II.) Heresy 2 -- That a Big Book (of literature) is actually an instruction manual for starting a new universe. 

  Sub-Heresy -- that our purpose is to create such books. 

  Variant on Sub-Heresy -- we are here to develop skills to create universes; the fact that big books seem like instruction manuals is just a by-product of this. 

IIA.) Roots of this heresy


Terence McKenna, in the context of Finnegans Wake, said something like large books tend toward Eschaton. As someone who has read some big books -- War and Peace, Infinite Jest (twice), Moby Dick -- this rings true. 

The best example out of these three is Infinite Jest. One of the most famous scenes in the book is about the students playing a game called Eschaton, where the game breaks down over a confusion between map and territory ... oh, classic humans. Another scene that ties in is one of my personal favorites. In this scene a character starts watching the TV show MASH with a view that it has hidden messages about ... the end of the world. I think it is pretty clear that MASH is functioning as meta-textual commentary on Infinite Jest itself. 

The worst example out of the three is War and Peace. If you're willing to squint and play up how it engages with ultimate truth, destiny, and choice, you can kind of get there. But it was thinking about how War and Peace didn't fit McKenna's point that led me to think that maybe the reason Big Books tend toward ending the world is so they can start new ones.

Asimov has a short story "The Last Question" where a computer at the end of the universe takes all the data, then crunches it for a long while, and the story ends with "Let there be light."

Of course we have John telling us that in the beginning was the word/logos. And William S. Burroughs saw the Word as a virus that has infected us. Well, maybe this is why.

IIB.) Implications for craftsmanship

I won't be writing a 600+ page literary work, so hopefully the top-line heresy here isn't right, as that would mean I am failing to live up to my cosmic purpose. My only hope is that the variant on the sub-heresy is true. Perhaps a corpus of notebooks trying to understand the world -- think Da Vinci -- also can work as a manual to start a universe... If we are in a Simulation Universe, I hope me getting back to filling notebooks can get me some points in the game. 

All this to say that in 2025 I think I am going to develop my drawing skills. I decided this independent of this heresy, but once I had decided to, I thought of it. More skill in how forms work together in this world sure could help me if I have to make another one.

Conclusion.

These two heresies contradict each other. 

I do not believe either of them. 

I am going to learn how to draw. 

Saturday, November 15, 2025

Attention, Compliance, and the Rest of Us

John's response to my last post :


Teaching that ersatz Honours class sounds like hell - I imagine it must be distressing having to work in such an environment. Are those student reactions the kinds of thing they say out loud, or more what you infer they think?


Re: silence in preference to overstimulation - I see this as really compatible with what Warren Mansell is talking about towards the end here: https://www.youtube.com/watch?v=mZe9D8u4di0 ... part of his point is that our systems' quest for novelty is unstoppable, but that quest is often better served by directing it to different perspectives on what's already going on in our minds, or on the subtler elements of what's there in our environment. This is what I appreciated in your music list, and in the Mahler book...not a bunch of new music to absorb, but a chance to read or write about/hear in a different way what's already there in my memory or collection.


======

I first tried to write a sort of short reply.  Then it grew.  Then I sent it off.  Then I realized that it was substantial enough to be a post... With some light edits.

=====

I don't want to give the wrong impression: I am on good terms with nearly all of that class nearly every day. From their perspective, the main mistakes I make are 1) acting like I should be able to explain what I want them to know one time, especially if I am annoying enough to think I should be able to do so right when class has started, and 2) giving them the natural, logical consequences of their actions.

So something like that handout is working around the fact that I am not able to lecture. No, no kid said any of those things to me. And I even won over two kids that I would not have put in the seven whom I am trying to teach—one on the edge of the "video game meme" category, the other on the edge of the mopey-to-indulge-yourself category. The latter took good notes partly because I handed her headphones so she could listen to music while working (technically a violation of our district policy against using learning devices for any fun whatsoever—okay, technically a violation of the district’s extension of the state law against “cell phones” from bell-to-bell—but I am a bit of a rebel, after all).

So, like 9 kids are somewhere close to the level of content they deserve.

And then later on in the hour I was able to say, "Note check. Show me your notes, and if you don't have any, put up your iPad" -- and they do comply with that kind of shit, just not with things like listening to multi-step instructions. So almost all of the class has some notes, even if they are just ones copied off someone in their peer group who kinda gets it. Those kids aren't worse off than they would be otherwise.

As for me, and protecting my system in these invasive (I guess coercive) times, I'm managing myself better in terms of giving my nervous system enough rest, relaxation, and things other than work, and I expect to enter winter with a sustainable rhythm. Here is the meta-plan: I have this next work week, then a week off for American Thanksgiving, so I will be able to plan ahead enough to do more of that rest, relaxation, and things other than work for the three weeks until the two-week Winter Break, during which I might be able to plan enough to get the rest of my school year set up for ... the virtuous cycle I listed.

Keep in mind I don't have much of any time with a feed now...  It just so happens that the real world is informed by people who ARE on feeds, so you get a lot of second-hand feed brain.