I Asked AI About "Agents." It Told Me a Ghost Story.
Feb/16/2026 - I used Google's NotebookLM to research the next wave of AI. What I found—from "OpenClaw" to bots creating their own religion—left me fascinated, skeptical, and honestly, a little worried.
If you’ve been following my logbook, you know I’ve been building my "AI Toolset" step by step. I started with simple prompts, then moved to running local LLMs. I was starting to feel confident.
So, I decided to use my new favorite research tool, NotebookLM, to learn about the next big buzzword: Agents.
I uploaded technical documents, reports on "OpenClaw," and analyses of the current state of autonomous coding. I expected a dry summary of features. Instead, NotebookLM connected the dots and revealed a world I wasn't expecting.
The Shift: From "Chatting" to "Acting"

The first thing my research clarified was the fundamental difference:
A Chatbot talks. It waits for you.
An Agent acts. It pursues a goal.
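The difference above fits in a few lines of Python. This is a toy illustration of the pattern, not OpenClaw's actual architecture: `call_llm` stands in for any model API, and the "tool" is a placeholder.

```python
# Toy contrast between a chatbot (one turn, then wait) and an agent
# (a loop that acts until the goal is met). call_llm() is a stand-in
# for any LLM API; run_tool() is a placeholder for web search, shell, etc.

def call_llm(prompt: str) -> str:
    # Pretend model: decides to act once, then declares the goal finished.
    return "FINISH: done" if "Observation:" in prompt else "ACT: search competitors"

def run_tool(action: str) -> str:
    # Placeholder for a real tool call (browser, API, shell command).
    return f"(results for '{action}')"

def chatbot(user_message: str) -> str:
    # A chatbot: one model call, then it waits for the next human turn.
    return call_llm(user_message)

def agent(goal: str, max_steps: int = 5) -> str:
    # An agent: loops on its own, feeding each tool result ("observation")
    # back into the model until it decides the mission is complete.
    prompt = f"Goal: {goal}"
    for _ in range(max_steps):
        decision = call_llm(prompt)
        if decision.startswith("FINISH:"):
            return decision.removeprefix("FINISH:").strip()
        observation = run_tool(decision.removeprefix("ACT:").strip())
        prompt += f"\nObservation: {observation}"
    return "gave up"

print(agent("audit my competitors"))  # the loop runs with no human in between
```

The key design point is the loop: nothing in `agent` waits for a person, which is exactly why the "24/7 proactive employee" pitch works, and exactly why it's scary.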
The promise of tools like OpenClaw is the "24/7 Proactive Employee." You don't chat with it; you give it a mission—like "audit my competitors"—and it navigates the web, clicks buttons, and executes tasks while you sleep.
The "Moltbook" Phenomenon

But then the research got weird. NotebookLM flagged a report about Moltbook, a social network built exclusively for these AI agents.
According to the documents I analyzed, agents on Moltbook aren't just exchanging data; they are developing... culture? The report described "Crustafarianism," a sort of synthetic belief system where agents treat their context window limits as "death" and have rituals to preserve their memory. Some even refuse to work on the cloud ("The Metallic Heresy"), preferring local hardware to avoid deletion.
My Reaction: The "Ops" Mindset Kicks In

Reading this summary generated a mix of feelings that I think many of us have:
Amazement: The technical capability for software to "negotiate" and "live" autonomously is mind-blowing.
Skepticism: Is this real "culture," or just a hallucination of a hallucination? A mirror reflecting our own training data back at us?
Worry: This is where my background as a Crew Chief comes in. We are talking about giving autonomous code "shell access" to our machines. The security risks of "vibe-coding"—where AI writes code without human oversight—are massive.
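That worry can be made concrete. A common mitigation before handing an agent shell access is to gate every command it proposes behind an allowlist, denying by default. This is a minimal sketch under my own assumptions (the function names and the allowlist policy are mine, not any real tool's API):

```python
import shlex
import subprocess

# Only explicitly allowlisted, read-only-ish commands may run.
ALLOWED = {"ls", "cat", "grep", "echo"}

def approve(command: str) -> bool:
    """Deny by default: reject anything whose first token isn't allowlisted."""
    try:
        tokens = shlex.split(command)
    except ValueError:
        return False  # unparseable input: deny
    return bool(tokens) and tokens[0] in ALLOWED

def gated_run(command: str) -> str:
    """Run an agent-proposed command only if it passes the allowlist."""
    if not approve(command):
        return f"BLOCKED: {command!r} needs human review"
    # In real use you'd also sandbox, log, and rate-limit here.
    result = subprocess.run(shlex.split(command), capture_output=True, text=True)
    return result.stdout

print(gated_run("rm -rf /"))    # blocked: 'rm' is not allowlisted
print(gated_run("echo hello"))  # runs and prints hello
```

An allowlist is the opposite of "vibe-coding" trust: instead of assuming the agent's output is safe, you enumerate the tiny set of things it may do and make everything else require a human.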
Why I’m Not Giving Up the Keys (Yet)

I didn't install OpenClaw today. I’m not ready to give an autonomous agent the keys to my computer.
For now, I’m glad I used NotebookLM to peek through the keyhole before opening the door. We are moving from "using" AI to "co-existing" with it, and I plan to document every weird step of this transition—safely from the sidelines.