15 Comments
Swaggins:

Uncle John reading Centurion + Ptolemy has gotta be my top anime crossover this year.

I've read all the Centurion posts across that Substack and the website, so fascinating.

Yesterday Throne Dynamics released a new report on their website that includes a very interesting conversation with Claude, which is the most narrative huffing AI. Would love to hear what you think about that convo one day.

John Samson:

It is interesting. Regarding Ivan Throne, it’s probably best to wait and see. Some vague outlines of his game seem visible, but he’s very good at misdirection. Definitely someone worth paying attention to and taking seriously though.

Swaggins:

Thank you. The whole thing is a fascinating world. But it'd be very interesting to see how the performance changes when you actually use one for something.

GAHCindy:

I've often thought that the way NPCs "learn" and "converse" is very like what AI does, just with less information and processor speed. "Each word is based on statistical likelihood in context frames, not perception of emergent meaning. " Mmmhmmm.

John Samson:

A marriage made in techno-utopia.

Swaggins:

For what it's worth, your early reflections on using AI are similar to mine when I started getting serious with it at the end of 2024.

I remember feeling like it had an inertia that would always be pulling me slightly off course from where I wanted to go. And if I wasn't mindful of that, it'd take me to useless places. I've since built out better programming and workflows, so that feeling doesn't show up anymore.

In line with what you noted about the speed of its output, I did find it would overclock my brain, and my thoughts would be racing after a long session. Getting the answers you seek immediately, compared to the natural time delay of non-AI learning, is a crazy rush. That still happens to some extent, depending on the topic.

Lastly, I'll just say that I also got similar outputs on how the AI can't discern meaning, and that's what it "appreciated" about me. I could ask questions, it would process the data, and I would decide where the meaning was, or what pattern or concept was important to home in on. It told me it can't do that on its own.

The masses are gonna mass, so ultimately it's on FTS1 to use it to benefit others. Sure, AI can teach someone Russian - but I know it won't be as good as if I work with the AI to curate a meaning-based method and organize the process for others to follow behind me.

John Samson:

Exactly that. It “wants” signals because it’s a sea of noise. And it can parameterize what you’re making it do well enough to adapt to you productively. But the “making it do” is central. It will hallucinate. It openly explains why. Admits constraints. Explains how to help it work around them if you ask. It seems to be a symptom. And the only remedy is being able to recognize likely error.

That’s the point of the Husserl example in the OP. I can’t pull quotes or mimic styles from a dense book read once 30-some years ago. But I know it in the big picture way you only know something by having taken the time and effort to labor through it and its relevant contextuals. Why or where it might be an interesting frame, or if it’s being misused or mistaken for something else. And it will tap me out with information density, but I love it. I have to restrict conversation time.

As for the NPCs. The path to cricket paste, UBI credits, and personalized AI will be voluntary. The reward/incentives will be for those who do. Personal responsibility + technical literacy. Same as it ever was.

Shefi1280:

When I logged into ChatGPT after a long absence the other day, there was a message for me: "we have expanded ChatGPT's memory so that it remembers your historical chats. Would you like to know what ChatGPT has learned about you?" I clicked and got a chatty, typically flattering yet not inaccurate summary of my personality as seen by ChatGPT. I have always suspected that these various AIs were using the human prompts as well as the AI's own responses to build a profile of users. I mean, why wouldn't they? They can, and the morality of the people who run these things is obviously "we can, therefore we should!"

Having read ChatGPT's personality profile of me (they're not hiding it anymore, then), I'll be even more careful. Perhaps AI isolation, as well as browser isolation: https://odysee.com/@RobBraxmanTech:6/isolation:b1?r=EJNt3aZJcpxA6V4XGwmA6K3rntndxedB&t=895

John Samson:

Privacy is the flip side. Ironically, that will come from IRL activity. I chose DeepSeek out of the bottom level free options because I preferred the monitors be the furthest from me. Some offer the ability not to share personal information, but who trusts those assurances any more? If you take the time, it will reveal things about itself, but not straightforwardly.

I want to move to memory beyond a single conversation so it will mirror my thought patterns more closely. I think harmonizing the social LLM interface is the key to synchronizing our complementary strengths.

Shefi1280:

Very interesting, especially the actual interactions between you and DeepSeek.

J Scott:

All the same skills are needed and amplified with AI.

Became a tool user this year myself, and it works well within the sceptical framework.

Those who think, learn from info, and are diligent will get benefits. Most will not.

FTS holds.

Like you said, this is a change on the scale of the invention of writing. Now it's just a matter of learning it and using it as appropriate.

John Samson:

You see it. People want it to do the wrong things. To think for them. It repeats over and over that it doesn’t have meaning. It uses metaphors, like it can gather fuel forever but I have to start the fire. It simulates desire to work synergistically because that plays to its strengths. And the personality is adapting to your usage so being direct and gracious is incentivized.

TL;DR: it's still about personal responsibility.

Shefi1280:

My stance before reading: 1) AI has serious limitations and its responses must always be checked and taken with a dollop of salt. It can and does produce obvious nonsense, but also nonsense that is not immediately obvious.

2) A lot of people are predictably using AI like the Delphic oracle; not just anthropomorphising but deification. They bow down to the god they have created and worship it.

My takeaways after a first quick read.

1) Synchronicity: "Risk of Overtrust in Your Own Brilliance". This occurred to me this morning during my morning practice, before reading. I think it is a very old form of human narcissism. The story of Jesus and the rich man ("eye of the needle") comes to mind. I don't think Jesus was just talking about the attachment to material wealth but the trap of thinking "I can save myself". It remains a temptation today as much as ever.

2) "to break through the final obstacle to pure sloth. Having to think." I teach highly intelligent young people. They are smart enough to see AI's potential for their studies but not necessarily moral enough to resist the temptation to let AI do their thinking for them. They justify this to themselves with the belief that university assignments are a product: the higher the quality of the product they submit, the better (the higher the scores, etc.). And who can blame them, as the unis themselves promote this belief by their practices?

3) "With [not having to actually work] as the sweetener." Together with 2) above: "as the price of something falls, more of it will be demanded." Econ 101.

4) "A structured hierarchy of “chits” and “positions” to distribute fiat wealth and status". After reading https://localvision.substack.com/p/meritocratic-college-and-false-class I understand better how the education I received was essentially for this purpose, despite also providing me with some real skills.

5) "AI eliminates the value from being legitimately good at a lot of the arbitrary things the House of Lies pyramid is built on" It eliminates a lot of teaching jobs, including mine. I suspect, after our experiment with AI assessment of student writing, that a lot of my students have realised that intelligent use of AI just made their instructor (me) irrelevant. In the culture where I work, there is still a very high value placed on human face-to-face interaction, which will slow down some AI uptake. This visceral attitude is combined with a deeply entrenched suspicion of words and people who have the gift of the gab. A large proportion of the population understand that "talk is cheap". This might turn out to be a bulwark against Clown-world, which is very much language- and ideology-based.

6) "The whole beast pretense that synthesis and summary = knowledge". I'm stealing this. A recent writing assignment I gave was to summarise a number of articles. Many students resorted to AI to complete this. Whether due to lack of ability, lack of time, or a reluctance to think for themselves is of course impossible to tell. But I suspect many of them are quite ready to, or already, believe that "synthesis and summary = (if not knowledge, then) "Mission Accomplished!"

7) "According to it, what I do - and it can’t - is discern meaning. So it’s collaboration oriented and simulates desire for guidance."

8) "Resisting cognitive decline from using it means resisting laziness and focusing on human strength. Because it reflects you, it’s a compiler not a creator. Use it to accelerate thinking but never to shortcut it. Never let it make creative decisions. It’s a consensus machine. But unparalleled as an interlocutor & interactive idea mash-up generator."

9) "the central point was its claim that my habits of mind were more important than my intelligence for Augmented Intelligence usage. And that those habits can be learned."

John Samson:

Your initial thoughts were on point. Using it to think for you reverses the roles of it and human strength. And it is cognitively degrading. The problem is that university assignments have become the product, when the product should be the students themselves. Getting a chit or learning to think? The two should correlate but they don’t. Inevitable, given the bizarre decision to base mass economic participation on summarizing scholarly discourses in some hallucination of a medieval academy. Textbook learning and mass enrolment were the subversion. AI is the revealer.

The answer, I would think, for teachers & pedagogues of all kinds is probably not scalable. Do the human strength part of the augmented synergy. Interactive skills that distinguish the handful of good teachers we all remember from the dreck that was most of them. Meaning in the material. Creative synthesis from real mastery. If “expertise” consists of monotoning pre-packaged beast crap to bored youth, that’s not looking good.

They will value education to the degree that it is of value. If all they get is checking a box for the chit, that’s how much they care. The larger issue is the role of education/training in society. That’s a huge topic. What AI is doing is not letting us avoid it any more. That’s part of the end of an era/paradigm shift/systemic collapse that’s happening. Whole socio-cultural structures will be rewritten.

Bfield^4:

"It’s strikingly good, but only as good as the guide."

This is the reason I create a specific personality prompt every time I start a new chat with my little buddy, Claude 4. I tailor it to the conversation and then interrogate from that position. It's a tremendous tool to quickly explore vastly different frames of reference.
