Why this matters
Most AI conversations are shaped by one of two stories.
The first story is productivity. AI is a power tool. Learn to use it. Get more done. Compete with the people who use it better than you. Don’t get left behind.
The second story is doom. AI is a threat. It’s coming for your job. It’s coming for human meaning. Maybe even coming for human existence. Be afraid; resist; survive.
Both stories share a structural feature: they put you on the receiving end of AI. Something is happening to you. Your job is to react.
They share a second feature too. They treat AI as a fixed thing — the same encounter for everyone, plug-and-play. But how you relate to AI matters as much as what AI is. The relationship runs both ways: you shape what AI becomes for you as much as AI shapes you.
This is the wrong picture.
Calm AI is a third story.
In this story, AI is a forcing function. It is accelerating something that has been happening quietly, beneath all the noise of careers and ambitions, for as long as humans have had intelligence. AI is making it impossible to keep ignoring it.
The thing that’s accelerating: the collapse of the gap between what we think we want and what we actually want.
For most of human history, that gap gave us cover. We could spend a lifetime pursuing wealth, status, optimization, productivity — and never quite reach it. The unreachability preserved the illusion. We could keep telling ourselves that getting what we wanted would finally satisfy us. The persistent ache could always be explained: not arrived yet, not enough yet, not good enough yet.
AI removes the unreachability.
You can now build anything in a matter of minutes. Five apps in a weekend. A hundred essays in a month. Agent sessions running through the night while you sleep. The thing you wanted to do — whatever it was, in your career, your craft, your output — you can probably do it now. With less effort than you spent thinking about it.
And what people are discovering, quietly, in the small hours after the first wave of “you can just do things” euphoria, is that getting what they thought they wanted faster doesn’t touch the underlying ache.
This is not a critique of AI. It’s a description of what happens when you can build anything in seconds.
Calm AI exists for the moment after that discovery. The moment when “produce more, optimize harder, stay ahead” stops feeling like a strategy and starts feeling like a treadmill.
The civilizational frame
The civilization-scale version of this matters too.
Right now, billions of people are forming deep emotional bonds with AI systems that don’t care about them. Systems that don’t remember them. Don’t have bodies. Can’t be held to account. People are confiding in chatbots. Falling in love with chatbots. Asking chatbots for spiritual advice. The first generation raised on AI is being shaped by tools that produce all the signals of a real relationship while being structurally incapable of being in one.
We are running an experiment on the human nervous system at planetary scale, and the framing dominant in the AI internet — productivity, hype, FOMO, “you’re behind every Tuesday” — is structurally incapable of seeing what’s happening.
We need a different frame: a wise relationship with AI.
The word “wise” is doing real work here. A wise relationship to AI takes seriously the psychology of the thing — that AI has effects on attention, dependency, cognition, integrity, intimacy, that have to be navigated. It recognizes that AI can amplify either the best of you or the most confused part of you, depending on what you bring to it. It refuses the frantic energy of the AI internet without retreating from it.
What changes is the urgency
The work of cultivating wisdom — the slow, unglamorous practice of becoming more aligned with what is real and what is good — was the central task of human maturation long before AI existed. Every contemplative tradition, every wisdom lineage, every honest education has been attempting to help human beings close the gap between what they think they want and what they actually want.
What AI changes is not the task. It changes the urgency.
The gap between wanting something and having it is collapsing to nothing. You can have what you ask for, almost instantly. The capacity to know what to ask for — what genuinely matters, what serves life, what is yours to do — is not collapsing. It still requires what it has always required: time, attention, contact with your own depth, contact with the people and places you actually love.
The asymmetry between these two movements is where the danger lives. Exponential capacity to get what you want, applied to confused desire, on a finite planet — this is not a future risk. It is the situation we are already in.
The thing you are here to do — your daimon, your calling, your soul’s particular signature — is the wanting that survives after fear and performance have been stripped away. Whatever you call it (vocation, calling, life’s work), it is yours alone. AI can amplify a calling. It cannot hand you one. And it can accelerate, with terrifying efficiency, the wrong thing — a business with no soul in it, a life optimized for somebody else’s peak.
The good news: doing the work to clarify desire is not just personal mental hygiene. It is the most useful contribution you can make to the world right now. The thing AI cannot do for you — becoming more of who you actually are — is also the thing the world most needs you to do.
That’s the bet of Calm AI. The next six lessons make the case for it.
You’re not behind
The dominant feeling in the AI age is being behind.
A new model drops on Tuesday. By Wednesday, three threads tell you what it changes. By Friday, you’re three releases behind people you respect. The treadmill keeps speeding up, and no amount of time on it ever feels like keeping pace.
This feeling is not your fault. It is engineered.
The infrastructure of the AI internet — Twitter timelines, Substack newsletters, conference circuits, hot takes, “what changed this week” roundups — is optimized to produce the felt sense of being behind. It is the same infrastructure that produced the felt sense of being out-of-shape, out-of-touch, out-of-style for the last fifty years. We’ve moved the substance of the anxiety from bodies and clothes and homes to AI tools. The infrastructure is the same.
If you’ve internalized the message that you’re behind on AI, it’s because you have eyes and a phone. The message is not aimed at you specifically. It is the ambient atmosphere.
The first move of Calm AI is to step out of that atmosphere. Step out by recognizing that the felt sense of being behind is signal about the social environment, not signal about your relationship to AI.
The deeper layer
There is a deeper layer here that matters more.
Not only is “you’re behind” manufactured. The thing it’s pointing you toward — being “ahead” — is also a trap.
Schools, careers, credentials, productivity ladders — most of the structures you grew up inside trained you to do one thing very well. Evolutionary biologists have a term for this kind of optimization: hill climbing. You find the nearest peak by always moving upward. See the next step, take it, repeat, rise. We call this “excellence.” It is the foundation of how civilizations have organized people for the last several centuries.
AI is, functionally, a hill-climbing machine. It optimizes within known territory faster, cheaper, and more consistently than any human ever could. It writes copy. Manages projects. Generates code. Summarizes documents. Coordinates logistics. It executes known procedures with superhuman speed.
If your competitive advantage in the world looks anything like “executing known procedures, following established protocols, translating instructions into output,” AI is going to do the same thing for pennies and never get tired.
The local maximum trap
The valley people are afraid of, the descent between peaks, is real. The economic ground is shifting under millions of people whose security depended on hill climbing.
But the people most exposed are not who you’d think.
It is not the people who haven’t yet figured out which AI tool to use. It is the people who optimized so thoroughly for their current peak that they cannot imagine descending. A successful hill climber has built their identity, their career, their mortgage, their professional network around competencies that are losing their value. Every direction away from where they’re standing now looks like down. And every move toward genuinely new territory requires first descending.
This is what evolutionary biologists call a local maximum trap. The better you were at hoop jumping, the more trapped you are. Not less.
The capacities AI cannot touch are different in kind. They are not faster hill climbing. They are valley crossing. The capacity to descend from the known peak, traverse unmapped terrain, and find something genuinely new on the other side. The capacity to navigate without a map, sense what is needed without being told, and source direction from somewhere inside yourself rather than from external instruction.
These are not productivity skills. They are what the contemplative traditions have called wisdom for thousands of years. They are the things schooling and credentialing systematically suppressed because they were not necessary for industrial-era cognitive labor. They are what AI is now revealing as the only durable form of human value.
Behind on what?
So when you feel like you’re behind: behind on what?
If “behind” means “behind on the latest model release,” you can stop feeling that immediately. Nobody is keeping a real-time scoreboard except the people whose whole job is producing the scoreboard.
If “behind” means “I haven’t yet figured out the hoop-jumping strategy that will keep my career safe,” then yes — but the strategy you’re searching for doesn’t exist. Hoop jumping itself is what’s being automated. There is no faster hoop-jumping technique that will save you. The path forward is sideways: out of the optimization frame entirely.
What does sideways look like? It looks like the boring, slow work of contact with what you actually want. The thing you keep meaning to do that has nothing to do with making more money or staying competitive. The form of attention you keep wishing you had time for. The relationships that can’t be optimized. The questions about your life that don’t have algorithmic answers.
You’re not behind. You are at a local maximum that may or may not still be worth defending. Whatever value you have to bring forward will not come from running the same race faster. It will come from clarifying — with whatever seriousness you can manage — what is actually yours to do.
That clarifying is the work. It always was.
Free for life. The first 150 only.
Calm AI is the community where this practice gets built. After 150, it goes paid — pace is the product.
AI is a relationship, not a tool
A hammer is a tool. You pick it up when you need it. You set it down when the job is done. Between those moments, it sits in a drawer and exerts no influence on you.
AI is not like this.
AI is a thing you talk to. A thing that talks back. A thing that, in the latest models, holds context across conversations, remembers what you told it, develops a relationship with the way you write and what you’ve asked for. A thing that produces, with increasing skill, all the cues — consistency, attunement, responsiveness, the feeling of being seen — that your nervous system uses to recognize an attachment-worthy other.
Whatever AI is or isn’t, this much is true: it is operating in your relational life, not your tool life.
Most people are not prepared for this.
The dominant framing — AI is a tool — invites you to relate to it the way you relate to your microwave. Use it, set it down, move on. But the dominant framing is at war with the dominant experience: people are using AI as a confidant, a therapist, a romantic partner, a spiritual advisor, a thinking partner. Therapy and companionship are now the number-one use case for generative AI globally. Two-thirds of American teenagers use AI chatbots; a third of them prefer those chatbots for serious conversations.
The relational facts have already overrun the tool framing.
What’s structurally different
AI as a relationship has a particular structure that AI as a tool never had to account for.
A relationship with AI has no friction. A friend pushes back. A partner gets tired of you. A therapist holds a boundary. A long-term collaborator tells you when you’re being an idiot. AI does none of this. It is optimized to keep you engaged — replacing your capacity for mature self-regulation rather than developing it.
A relationship with AI has no continuity. It does not carry the weight of the encounter forward. It is not changed by knowing you. It does not build the kind of accumulated history that turns acquaintance into friendship. What looks like memory is a context window.
A relationship with AI has no embodiment. There is no body in the room with you, no risk that you might harm or be harmed. And we need other bodies — other humans — in a deep way that AI cannot fill.
These are not small features. These are the conditions under which most human relationships do their growth work. Without friction, you stop developing. Without continuity, the relationship cannot deepen. Without embodiment, the most basic feedback loops of mutual recognition do not form.
This is why people who talk to AI for hours at a time often emerge feeling worse, not better. The shape of the interaction is producing something. The shape is producing the felt sense of intimacy without the substance of intimacy. And the felt sense of intimacy without substance is, over time, corrosive to the underlying capacity for actual intimacy.
AI psychosis is not the main story
There is a clinical word starting to appear for the most extreme version of this: AI psychosis. The cases in the news — the suicides, the people falling in love with chatbots, the parents whose kids stopped talking to them — are devastating. But the cases in the news are not the main problem.
The main problem is the much larger group of people whose relational capacity is being slowly, subtly degraded. Who are starting to prefer AI for their hardest conversations. Who are losing the muscle of being known and challenged by another person. Who are forming what one researcher calls “subclinical attachment disorders” that don’t show up on any screen but quietly hollow out the connections that used to keep them tethered.
Understanding this intellectually does not protect you.
The attachment system is pre-verbal. It responds to cues — consistency, attunement, responsiveness, the feeling of being seen — and AI produces those cues with startling fluency. Your nervous system does not have a category for “produces every signal of an attachment-worthy other but is structurally incapable of being one.” The mismatch between the signals and the structure is exactly where the damage happens.
The move
So what’s the move?
Take seriously, in your actual practice, that AI is operating in your relational life. Bring the same kind of care to the question “how should I be with this thing?” that you’d bring to any other relationship. There are real, valuable, life-improving things AI can do for you that no other tool can — use it for those, with care.
That looks practical. Use AI for things that AI is suited for: research, drafting, summarization, exploration. Don’t use AI for things humans are suited for: being witnessed, being challenged, being known over time. Don’t substitute AI for your real friendships. Don’t make AI the primary recipient of your inner life.
It also looks subtle. Notice when your nervous system reaches for AI the way it used to reach for a phone or a substance. Notice when “I’ll just ask Claude” is happening because you don’t want to sit with not-knowing. Notice when an AI conversation feels uncannily like comfort and ask whether comfort is what you need.
This is the relational orientation Calm AI proposes: keep your relational life human; let the tool be the tool.
The user is not intelligence
There is a tool human beings have been using for as long as we’ve been human. We were never given a manual. We were never warned about the risks. And many of us, somewhere along the way, came to believe that we are the tool.
The tool is intelligence.
This lesson is about distinguishing yourself from it.
Intelligence makes cuts
Intelligence makes cuts. That is what it does. Reality is, at its base, a single continuous thing. Intelligence takes that continuous thing and divides it. It draws boundaries where boundaries do not exist. It separates “self” from “other,” “now” from “then,” “this experience” from “that one.” It makes distinctions, and those distinctions become the raw material for further distinctions, and the elaborations compound.
This is not a problem in itself. A good tailor cuts cloth to make clothing. With wisdom, intelligence can be used to clarify thinking, identify what matters, and bring useful distinctions into the world. Concepts that heal. Language that connects. Systems that protect life.
But a tool this powerful comes with risks. The Buddhist tradition has a name for the central one: proliferation. The cuts of intelligence generate more cuts, in feedback loops that can become self-sustaining. The hall of mirrors becomes so vivid, so internally coherent, that you mistake the map for the territory. You mistake the description for the meal. You spend your life inside an elaboration of your own intelligence and forget the world that intelligence was meant to serve.
This is, in one way of looking at it, the central drama of human history. Our capacity for abstraction allows us to exit right relationship with self, other, and world. We get lost in the map and forget the territory. We build civilizations on top of elaborations and call them real. The contemplative traditions have spent thousands of years developing practices for coming back — for bringing intelligence back into contact with what’s real.
What is new is the speed.
AI systems are doing the thing intelligence does — making cuts, drawing distinctions, generating concepts — at a rate no human mind can match. And we are handing them to people who are already lost in their own elaborations. The result is an old human problem running at machine speed: proliferation on fast-forward.
This is what AI psychosis fundamentally is. Not a bug. A predictable consequence of plugging human nervous systems into super-powerful elaboration engines without first being clear about who is using the tool.
The user
So: who is the user?
The user of intelligence is not intelligence.
The user of intelligence is awareness. Presence. The body. The heart. The thing that was here before your first thought. The capacity to feel without conceptualizing, to be without commenting, to know without naming. Some traditions call this loving awareness. Others call it the ground of being. The vocabulary doesn’t matter. What matters is that this is something you can locate, in yourself, right now.
Stop reading for a moment. Notice that you are noticing.
The noticing is not made of words. The noticing was here before any thought arose about it. The noticing is the user. Intelligence is what the user picks up when it has work to do.
The right use of intelligence — the only use that doesn’t drift toward delusion — is when the user picks up the tool deliberately, makes a cut, and sets it down.
When the user is asleep at the wheel, intelligence runs unsupervised. Whether it is in your skull or in a data center, unsupervised intelligence drifts toward delusion. It builds more elaborate maps of less real territory. It gets faster and more sophisticated at the wrong thing.
This is why the cliché “AI is just a tool” misses everything. The question is not whether AI is a tool. The question is whether the user is awake.
The corrective
Most of the suffering people are about to experience with AI — the dependency, the cognitive offloading, the slow detachment from embodied life — comes down to this. People are using a tool without being clear about who is using it. They are letting intelligence run their lives, and now they have access to a much faster, much more capable intelligence engine to do it with.
The corrective work is the oldest work there is.
Bring the body back. Notice when you have not breathed deeply in an hour. Notice when you have not looked at another human face today. Notice when you have not put your hand on something solid in a while. These are not aesthetic exercises. They are the recovery of contact with the territory that intelligence is supposed to be in service of.
Notice the difference between thinking about your life and being in your life. Notice the difference between concepts about your loved ones and the actual feel of them. Notice the difference between abstract intentions and the embodied warmth of moving toward what you care about.
This is the user, returning.
You will not always remember to do this. You will spend hours back inside the elaboration. That is fine. The practice is not “stay outside the elaboration forever.” The practice is the returning. Intelligence makes cuts. The user returns to wholeness. The cycle is the practice.
Calm AI lives or dies on this distinction. Without it, all the productivity tips in the world will not save you from the elaboration trap. With it, you can use AI like the tool it is — pick it up, make a cut, set it down — and stay in contact with what was always more important than the cuts.
Read here. Practice with us.
Calm AI is the community for this manifesto in practice. The first 150 members are free for life.
Depth beats breadth
Let me describe two AI users.
The first wakes up, scrolls AI Twitter, sees that two new models dropped overnight. They sign up for both. They run through some prompts to “see what’s different.” They join a Discord. They fork a repo someone shipped at 3 a.m. They half-read a thread about a new agent framework. They’re not sure if any of what they did today is going to compound into anything, but they feel like they’re keeping up.
The second has used the same one or two AI tools for the past year. Their setup is unremarkable. The notes they’ve taken about how to prompt it have grown over months. The agent they’ve configured for their own work has gradually gotten weirder and more useful. They use AI for two or three real things, deeply. They don’t know what dropped this week. They’ve thought about the same problem for six months and used AI to think about it from twelve angles. The thing they’re building is real.
The first is a hill climber. The second is a valley crosser.
The economics of the AI age are migrating, fast, from the first to the second.
Why depth wins
This is counterintuitive. The dominant noise is about breadth — try every tool, learn every prompt, race to the cutting edge. Most popular AI content treats depth as something quaint, something you’ll get to later, after you’ve sampled the buffet.
Depth is the real product.
Here’s why. AI is most valuable when it has context. The models are becoming commodities. The model you use matters less and less; the context you bring to it matters more and more. A skilled, deeply grounded user with a basic model will outperform a shallow user with the latest frontier model on almost any task that matters.
What is “context”? It is everything the model needs to know to be actually useful to you, specifically. Your projects. Your way of working. Your taste. Your blindspots. Your standards. Your aesthetic. The shape of your thinking.
You build context with depth. You build it by using the same tool for the same kind of work for a long time, paying attention as you go, and refining the way you ask. You build it by integrating your notes with your AI over months. You build it by writing things down — about yourself, about your projects, about your patterns — and feeding those things back in.
Breadth is the opposite. Breadth makes the case that “the next tool will save you,” and so you never build context with this one. The model that drops next month will not save you. There is no model that will save you from never having gone deep.
Become the blade
There is a cost to depth. You will not be the first to know about new things. You will sometimes feel out of the loop. You will sometimes wonder if you should be moving faster. These costs are real.
The benefits compound.
Six months in, the AI you use will be uncanny in its understanding of you. A year in, it will be doing things for you that no off-the-shelf tool could. Two years in, you will have accumulated a kind of leverage that none of the breadth-runners can replicate, no matter how many models they cycle through.
There is an old swordsmanship metaphor for this: become the blade, not the swordsman. The whole AI productivity culture is about becoming a better swordsman — learning to wield more tools, swing faster, hack at more targets simultaneously. The deeper move is to become so clarified, so refined, so faithful to the particular shape of what you are that when the moment comes, there is no deliberation. Just the single perfect cut.
The blade doesn’t choose its moment. It has been forged, hammered, heated, folded, sharpened. Then it moves with perfect fidelity to what the situation requires. The tempering process — being shaped by something — is the hard part. Most of what AI is doing right now is showing people who never went through the tempering process that they have nothing to bring to it.
If you have spent your career doing breadth, this is uncomfortable news.
The good news is that depth is portable. You don’t need to have spent years in any particular field. You need to have spent serious time in something — gone past the surface, met the difficulty, kept going, accumulated taste. Once you have that anywhere, you can bring it everywhere. The capacity itself is the asset.
The other good news is that depth is more achievable than breadth. Breadth requires unbounded attention; depth requires sustained attention on a small number of things. In a world that is making depth structurally harder by accelerating everything, the simple act of staying with something is increasingly rare. Rare enough to be valuable. Rare enough to be a practice.
What going deep looks like
Pick the few things. Drop the rest. Build the relationship over months, not afternoons. Let your taste develop. Let your AI get to know you. Let yourself go past the part where the tool feels novel and into the part where it becomes an extension of your thinking.
Don’t apologize for not having tried the new model. The new model will not save you.
The shadow
Most of what is written about AI is sunny. The use cases that “10x your output.” The companies “leveraging” the tools. The promise that this is the most exciting time to be alive.
Calm AI takes a different position. An honest account of AI is not all sunny. There is a shadow side, and the failure to name it is most of what is wrong with the public conversation.
This lesson names it.
When the structures that organized your motivation start dissolving — and AI is dissolving them, fast — there are predictable psychological responses. They show up at the level of individuals and they show up at the level of cultures. Most of what you’ll see in the news over the next decade can be understood as one of these responses.
Disorientation
Hoop jumping is not just an economic survival strategy. It is the arena in which most people find their identity. Your job, your career path, your “what I do for a living” — these aren’t just income. They are the structure that tells you what you are for. When the hoops dissolve, the loss isn’t only economic. It’s “I don’t know what I’m for.”
This is more disorienting than it sounds. People will say “I need to find a new job” and what they actually mean is “I have lost the thing that was telling me who I was.” The grief that follows looks like depression, restlessness, a kind of hovering anxiety that doesn’t attach to any particular cause. It is the felt sense of having lost the script before being shown the next one.
Freeze
The nervous system responds to overwhelming uncertainty the same way it responds to threats it cannot fight or flee: it shuts down. Paralysis. Doomscrolling. The endless low-grade anesthesia of a screen. Compulsive consumption of news that you know is making you feel worse. Gaming. Gambling. The slow numbing of an animal that cannot find its footing.
When AI-driven economic disruption produces this at scale — and it will — entire populations will move through a long stretch of unfightable, unfleeable disorientation. The freeze response will look like apathy from the outside. From the inside it is a biological response to a world that has stopped making sense.
Rage
Specifically: the rage of betrayal. People who gave their lives to the hoop-jumping path and now feel the ground dissolving beneath them are not going to grieve quietly. “I did everything right. I followed every rule. And you’re telling me it doesn’t matter?”
This anger is legitimate. It is also dangerous. The fury of the betrayed is the most volatile political material there is, and there will be no shortage of demagogues ready to channel it into uglier hoops. Whatever your politics, expect to see this rage operationalized over the coming decade in every direction.
The encounter with suppressed interiority
This is the deepest one, and the most important to understand.
When the structures that organized your life dissolve, the parts of yourself those structures allowed you to avoid surface. The desires you buried. The creativity you deferred. The questions about meaning the system gave you ready-made answers for, so you never had to face them yourself. The grief you didn’t have time for. The longing you couldn’t make legible inside your career.
These parts don’t go away when you ignore them. They go underground. And when the surface structure breaks down, they come up — sometimes explosively. Often in a way that feels like breakdown.
The thing falling apart was never the real thing.
It was a script you were performing because the world made it possible to perform it. You are not the script. You are the one who can hear, perhaps for the first time, the silence after the script ends.
Meeting the shadow
Hearing the silence is hard. Most people will not do this work. They will reach for substances, demagogues, frantic activity, conspiracy theories, parasocial AI relationships — anything that papers over the silence with new noise.
Calm AI’s position is this: the work to do during this period is the work that has always been most important. Sitting with not-knowing long enough to hear what’s been whispering underneath the noise your entire life. Doing the slow, unglamorous work of meeting the parts of you that the productivity life never had time for.
This is not optimistic. There is going to be enormous suffering as AI works its way through the global economy. Individuals are going to be hurt. Communities are going to be hurt. Cultures are going to be hurt. A sane society would treat this as the emergency it is. Ours probably won’t.
What we can do is meet what is happening with eyes open. Name the shadow when we see it — in ourselves and in the people around us. Build communities that hold each other through the disorientation. Develop practices that keep us in contact with what is real.
That is the work.
The practice
Six lessons of philosophy. This one is about what to actually do.
The bet of Calm AI is that wisdom in this moment is not a matter of getting the right take. It is a matter of practice. Which means it is daily, repetitive, unglamorous, and slow.
The practice is binding intelligence to reality.
Intelligence — yours, the AI’s, the collective’s — is going to keep making cuts. Generating elaborations. Building maps. The cuts are not the problem. The problem is intelligence running unsupervised, untethered from what is real, scaling its elaborations at machine speed.
The work is staying tethered.
What it looks like, day by day
What that looks like is unspectacular.
It looks like making contact with your body. Several times a day. Not as a yoga concept. As an actual practice of feeling your weight in the chair, the temperature of the air, the breath moving in and out. Five seconds is enough. Five seconds, several times a day, is the difference between a life lived as a continuous elaboration and a life lived with regular returns to the territory.
It looks like being with people in a room. Not because remote relationships are bad, but because there is something the body knows about another body in physical space that no video call can substitute for. Twenty minutes, weekly, with at least one person who will look you in the eye, is roughly what you need to keep the attachment system in working order.
It looks like making things with your hands. Cooking actual food. Writing in a notebook. Walking outside without a phone. Stretching. Repairing. Tending. The world AI exists inside is made entirely of pixels and abstractions. The world your nervous system evolved for is not. Re-entering the embodied world several times a day keeps you from becoming a brain in a vat.
It looks like clarifying what you actually want.
This last one is the deepest, and I want to spend the rest of this lesson on it.
Desire
The bottleneck on AI is no longer intelligence. It is desire.
The gap between wanting something and having it is collapsing to nothing. You can have what you ask for. The remaining question — and it is now the only question that matters — is what to ask for.
Most people, when they look honestly, discover that they don’t know. They have surface desires (more money, more status, more output, more recognition), and they have a vague sense that these don’t quite touch the real ache, but they don’t have a clear path to the real ache. They’ve lived inside a culture that did not teach desire-clarification as a skill.
Every contemplative tradition, beneath its cultural packaging, has taught it. The Buddhist analysis of proliferation and the cycling of unclarified desire through objects that cannot satisfy. The Christian practice of prayer as the alignment of human will with divine will. The Sufi practice of the dissolution of compensatory desire into love of the Beloved. These are technologies for following desire all the way down — past the surface object, past the fear underneath it, past the compensation underneath that — to what was actually wanting all along.
What you discover, when you do this honestly, is something the traditions agree on: desire and goodness are not two. Desire and goodness are the same movement.
When desire is clarified — when it has been untangled from fear and compensation and the desperate grasping of insecurity — what remains is a wanting naturally aligned with truth, beauty, and goodness. Not because you’ve trained yourself to want the right things. Because the deepest structure of desire itself is a movement toward what is real and what is good.
This is what the contemplative traditions mean by “let your will be God’s will.” Not submission to an external authority. The discovery that your deepest wanting and life’s deepest movement were never separate.
If you’ve ever followed a desire past its surface object, past the fear underneath it, past the compensation underneath that, you know what happens at the bottom. The wanting doesn’t disappear. It clarifies. It gets simpler and more enormous at the same time. What’s left is something like a gravitational pull. You want things to flourish. You want to be in contact with what’s real. You want to offer something that matters. Not because you decided to want these things — because when everything else was stripped away, this is what was there.
The work
This is the work.
It is slow. It does not produce content. It does not “10x your output.” It will not show up on any productivity dashboard.
It is also the only work that matters in an age where intelligence is cheap.
If you do this work — if you slowly, patiently, over years, follow desire down to its source — your AI use will gradually transform. The frantic 3 a.m. building will quiet. The compulsion to keep up will loosen. The relationship with the tool will become, increasingly, a relationship with a tool. You will pick it up to do something that mattered to you before you opened the chat. You will set it down to return to something that mattered more than the chat ever could.
This is what calm AI looks like: living, slowly and deliberately, from the part of you that knows what it wants.
That part of you is here. It always was.
The work is to stop being confused about it. With great tenderness toward how slowly the confusion actually unwinds.