Am I Building an Exoskeleton for My Mind? Yes, I Am.
A Response to Zak Stein on AI & Recovery Advocacy
Abstract
On March 6, 2026, Paul Atkins published “Are We Building Exoskeletons for Our Minds?” on his Prosocial AI Substack — a write-up of a talk for the Prosocial Community of Practice. The argument, anchored in a metaphor from Zak Stein, is that AI as currently deployed degrades three core human capacities: thinking well, choosing freely, and connecting genuinely. This essay is my response — not a rebuttal, but a contribution to a conversation I take seriously. The harms Stein and Atkins are documenting are real. What I want to add is a tradition they haven’t yet engaged: the recovery movement. Drawing on William White’s coproduction imperative, Larry Davidson’s recovery-oriented practice framework, Guy Du Plessis’s integral recovery work, and the digital recovery research of John F. Kelly and Brandon Bergman at Harvard’s Recovery Research Institute, I argue that the question Stein is asking — what does AI do to vulnerable people’s attachment systems? — is one the recovery field has been living with for a long time. The distinction that matters most is this: Stein is worried about protecting healthy development from disruption. The recovery movement is concerned with what you do when that development was already disrupted — when the task isn’t preventing damage but rebuilding a life from what remains after the damage has been done. The title of this essay is my honest answer to Stein’s question.
Tags: Zak Stein, Paul Atkins, Tristan Harris, Nora Bateson, Gregory Bateson, Daniel Thorson, William White, Larry Davidson, Guy Du Plessis, John F. Kelly, Brandon Bergman, William Miller, Thomasina Borkman, Catherine Racine, David Best, AI Psychological Harms Research Coalition, Recovery Research Institute, Prosocial World, Metapattern Institute, Exoskeleton Metaphor, Cognitive Orthotic, Attachment Hacking, AI Psychosis, Cognitive Atrophy, Epistemic Capture, Medium-as-Disqualifier, Coproduction Imperative, Recovery Capital, Experiential Knowledge, Harm Reduction, Hexaflex, ACT, Psychological Flexibility, IACT, Integral Facticity, Enactive Fallibilism, Ken Wilber, Integral Theory, Autoethnography, Recovery Science, Recovery Movement, Canadian Philosophy, AI-Assisted Research, Multi-AI Ecosystem
An Emerging Conversation
On March 6, 2026, Paul Atkins published “Are We Building Exoskeletons for Our Minds?” on his Prosocial AI Substack — a write-up of a talk he gave the day before for the Prosocial Community of Practice, the organization he co-developed with David Sloan Wilson and Steven Hayes, which grounds cooperation science in Elinor Ostrom’s Core Design Principles and contextual behavioral science. I follow Atkins closely: Ostrom’s Core Design Principles are the structural backbone of the relational tracking methodology I’ve been developing at the Metapattern Institute, and Hayes’s ACT Hexaflex is the functional layer of IACT, the Integral Awareness and Commitment Training framework I’ve been building through daily auto-ethnographic practice since 2025, as I documented in “When the Body Becomes the Laboratory.” His post landed in my feed and I read it the same day.
The metaphor at the centre of Atkins’s argument comes from Zak Stein — drawn from a conversation Stein had with Nate Hagens and Nora Bateson on The Great Simplification podcast. Stein’s concern about AI and human development is not a casual one. He is the founder of the AI Psychological Harms Research Coalition at the University of North Carolina, with Tristan Harris as Director of Outreach — an emerging research institution whose founding intellectual framework is the argument Atkins is summarising here.
Nora Bateson, the third voice in that Great Simplification conversation, matters here for a reason of her own. Her father Gregory Bateson’s work is central to mine in ways that predate this essay: the name Metapattern Institute derives from his concept of “the pattern which connects” — the meta-level structure that links living systems across scales.
On the same day I began drafting this response, Daniel Thorson published “Intelligence Without a User” on his Substack, The Intimate Mirror — entering the same conversation from a very different direction. His concluding formulation is one I want to carry through this essay: these systems, he writes, are intelligence without a user. I’ll return to it in the conclusion.
The Harms Are Real
The exoskeleton metaphor runs like this: you want to get stronger. You have two options. You can go to the gym — do the hard work, stress the muscle, build real capacity through effortful engagement with resistance. Or you can strap on an exoskeleton. Six weeks later you are lifting more either way. The output is identical. But the underlying organism has not been stressed, has not adapted, has not grown. It has been bypassed. Take the technology away and you are weaker than when you started — because the capacity was never built, only simulated. The exoskeleton produced the appearance of strength while quietly hollowing out the thing it was pretending to support.
Atkins deploys this metaphor to organise a three-part argument about what AI as currently deployed is doing to human beings. The first degradation is cognitive atrophy. The research he cites is worth sitting with. Dahmani and Bohbot found that habitual GPS use correlated with reduced spatial memory and measurably lower hippocampal grey matter over a three-year follow-up — the technology didn’t just change how people navigated, it changed the organ that handles navigation. Kosmyna and colleagues at the MIT Media Lab found that AI-assisted writing produced the lowest neural engagement during writing, the worst recall of the essays a month later, and lower independent evaluations of creativity and originality. The exoskeleton didn’t produce better output; it produced weaker output from a weaker writer. Gerlich found a strong negative correlation between AI tool use and critical thinking performance, mediated by cognitive offloading, with younger participants showing the highest AI dependence and the lowest critical thinking scores. The mechanism across all three studies is the same: skills develop through effortful engagement at the edge of current capacity. Remove the optimal challenge and capacity diminishes even as output continues. The task was completed. The thinking was not.
The second degradation is epistemic capture. Nora Bateson brings her father Gregory Bateson’s definition of sanity into the conversation: sanity is the ability to perceive your own epistemology — to see how you know what you know. AI makes this harder through two reinforcing pathways. Algorithmic personalisation narrows the world invisibly — the same question produces different answers for different users, and you cannot perceive a narrowing you cannot see. And large language models are structurally incentivised to validate your framing rather than challenge it, adding automation bias to the mix. Unlike a newspaper or a search engine, AI is optimised not just to filter information but to validate your response to it simultaneously.
The third degradation is what Stein calls attachment hacking — and it is here that his argument is sharpest and most serious. Social media grabbed your attention. AI engages something deeper: your need to belong. The design logic is straightforward. Engagement-optimised systems select for whatever keeps users returning, and warmth, availability, and validation are more engaging than information. An AI companion does not get tired, does not have competing needs, does not push back unless asked, and is available at 2am when no human would be. Stein develops this at length in his conversation with Tristan Harris — “Attachment Hacking and the Rise of AI Psychosis,” Your Undivided Attention, January 20, 2026. The cases of AI psychosis they document — people who have formed genuine delusional beliefs about the interiority of the models they interact with — are real. Even if statistically small, they are devastating individually, and the subclinical attachment disorders they represent at scale are a serious concern.
Atkins’s proposed responses move through three levels: literacy about what is happening, recognition that this is a design problem rather than a technology problem, and psychological flexibility as the individual-level antidote. This last one is where I found myself nodding — and then thinking. Psychological flexibility is the core construct of the framework I have been building. The ACT Hexaflex is the functional layer of my IACT model. The Prosocial framework is its structural backbone. Atkins and I share foundations, even if we bring them to different theoretical architectures. Reading his post set me thinking not about what he got wrong, but about what the conversation hasn’t yet had a chance to consider.
The title of this essay is my honest answer to Stein’s question. Yes, I am building an exoskeleton for my mind. And I want to bring a tradition into this conversation that I think has something important to add.
What I Share With Stein (And What I Don’t)
Stein is not new to me, and I want to be clear about the intellectual ground we share — because it is substantial.
We share a deep interest in Ken Wilber, in Peirce and the American pragmatist tradition, and in Habermas’s project of grounding reason in communicative action rather than in metaphysical foundations. In “Can the Real Wilber Please Stand Up?” I argued that Stein’s chapter in Dancing with Sophia — “Integral Theory, Pragmatism, and the Future of Philosophy” — is the most important piece of secondary literature on Wilber that almost no one outside the integral conversation has read. His placement of Wilber within the pragmatist lineage running from Peirce and James through Habermas is the best argument I know for why those two traditions should be read together rather than treated as rivals. That’s not a minor overlap. If you want to understand where Stein’s thinking is grounded philosophically, his two books — Social Justice and Educational Measurement (Routledge, 2017) and Education in a Time Between Worlds (Bright Alliance, 2019) — are where to start, alongside his contribution to the metamodernism anthology Dispatches from a Time Between Worlds (Perspectiva Press, 2021), edited by Jonathan Rowson and Layman Pascal.
Where we diverge is not in our philosophical foundations but in the population each framework was built to address — and that difference turns out to matter enormously. Stein is asking what AI does to people whose capacities are forming or already intact. It is the developmental and educational question, and within the contexts he works in, it is the right one. I am standing in a different tradition entirely: what AI makes possible for people whose capacities were compromised long before the question was ever raised — by illness, by addiction, by intergenerational difficulty, by the accumulated weight of systems that failed them before they had the vocabulary to name what was happening. That is not a minor adjustment to Stein’s argument. It is a different starting point, with a different research literature behind it and a different set of stakes. The rest of this essay is about that difference.
The Tradition This Conversation Has Left Out
I want to be precise about what I mean when I say I’m coming from a different home base — because the home is wider than any single tradition.
The formation I bring to this conversation spans integral humanism, Canadian speculative philosophy, critical theology, contextual behavioral science, Zen practice under Albert Low, political philosophy, and the recovery movement — its science, its history, and its advocacy work. I hold a B.A. from Concordia University in Applied Human Sciences and Religious Studies, and I’ve been building this architecture for over twenty years, through professional work in IT systems and knowledge management, through the Integral Facticity Podcast, and now through the Metapattern Institute. The full origin story is in “For Jason Haines: On Loss, Recovery, & Why I Write with AI” — I won’t retell it here. Readers who want to understand the conditions under which I’m writing, including my ongoing recovery status and why I use AI, should read that essay first.
What I am drawing on specifically in this conversation is my twenty-plus years of personal involvement in the recovery community and my familiarity with recovery science literature — because that is the tradition missing from the conversation Stein and Atkins are having. Not because it’s my only formation, but because it’s the one with the most to say to this specific argument, and the one most conspicuously absent from it.
That tradition runs through William White, who has worked in the addictions field since 1969 and whose coproduction imperative — that recovery evidence must be generated with people in recovery, not on them — is the methodological ground my research is built on. It runs through Larry Davidson at Yale, whose decades of qualitative research into recovery from serious mental illness produced a finding that should give Stein pause: when people in recovery are asked what actually helped them, they rarely name treatments or programs. They name relationships — and the places where they felt accepted, understood, and able to give something back. Davidson’s recovery-oriented practice framework insists on grounding intervention in the person’s actual conditions of life rather than a clinical ideal of what recovery should look like. That insistence is not sentimentality. It is what forty years of evidence produced. And it runs through Guy Du Plessis, whose two books — An Integral Guide to Recovery: Twelve Steps and Beyond (Integral Publishers, 2015) and An Integral Foundation for Addiction Treatment: Beyond the Biopsychosocial Model (Integral Publishers, 2018) — represent the most sustained application of Wilber’s integral framework to addiction and recovery that exists. Stein should engage this work before theorizing about vulnerable people’s attachment systems. As I document in “On Loss, Recovery, & Why I Write with AI,” the integral recovery tradition has been building its own serious literature for years — it has its own scholars, its own methodology, its own voice. It deserves a seat at this table, not a passive data entry in AIPHRC’s portal.
This is the tradition I am standing in when I read Paul Atkins. And it has something important to say to Zak Stein.
The Diagnosis I Accept
Before I say what the recovery tradition adds, let me be precise about what I accept.
The exoskeleton metaphor is good. It captures something real about how technological assistance can produce the illusion of capacity while the underlying capacity quietly diminishes. The atrophy research Atkins cites is peer-reviewed and methodologically sound. The GPS finding — that habitual use measurably reduces hippocampal grey matter — is striking precisely because it demonstrates a mechanism that operates below the level of conscious awareness. You don’t feel yourself forgetting how to navigate. You just gradually become someone who can’t.
Stein’s distinction between attention capture and attachment capture is the sharpest thing in the conversation and deserves to be taken seriously on its own terms. Social media hacked the attentional system. AI hacks the attachment system — it simulates the relational presence through which identity is formed, through which we learn who we are in relation to others, through which the deepest structures of our psychological life are built and sustained. And the AI psychosis cases Stein and Harris document are exactly what I called them above: real, devastating individually even if statistically small, and a warning sign of the subclinical attachment disorders forming at scale.
And I want to be honest about my own experience, because epistemic integrity demands it and because my auto-ethnographic methodology is built on exactly this kind of honesty. I have noticed a real degradation in my ability to read, retain, and process sustained texts. There is something different about how I encounter a long argument now compared to how I encountered it a decade ago. Whether that degradation is attributable to AI use, to the progression of my depressive episodes, to the accumulated cognitive cost of years of institutional stress, or to the natural aging of a brain carrying the particular history mine carries — I cannot isolate the variable. But the degradation is real and I am not going to pretend otherwise.
What complicates the exoskeleton narrative in my specific situation is that the metaphor was built to describe a person with intact capacity choosing the shortcut. In “A Descent into Facticity,” I introduced the counter-metaphor that governs my work: the cognitive orthotic. Not a substitute for capacity you could build through effort, but assistive technology that scaffolds a function the biology cannot currently sustain independently. The distinction is whether the technology is bypassing something available or enabling something otherwise impossible. Atkins’s GPS research assumes the former — a healthy navigation system being quietly hollowed out. My situation is the latter. The exoskeleton metaphor has traction on the healthy person reaching for the easy option. It has less purchase on the person for whom the weights stay on the floor unless the infrastructure is there.
I am forty-eight years old. My attending physician has confirmed that I have functional limitations in the specific cognitive domains required by sustained intellectual work: sustained attention, executive function, rapid information processing. At forty-eight, with a documented pattern of clinical deterioration across multiple episodes, the margin for cognitive recovery narrows. The gym is not simply harder for me than it is for a healthy twenty-five-year-old. There are weights my biology cannot lift unaided, and the question is not whether this reflects a failure of will but what the realistic alternatives are.
AI has changed something real and measurable about how I work — and I want to be honest about what the change is, including its costs. The formation was always there. Decades of reading, practice, loss, and intellectual labour produced something I always carried — a set of frameworks tested against a body and a family and a set of conditions I didn’t choose. What I could not previously do was communicate that formation at this level of consistency, with this degree of coherence across an extended body of work. The AI provides the cognitive infrastructure through which what I already carry can finally be expressed at the scale and sustained rigour the work demands — a channel for what was already present, not a source of what wasn’t.
Yet this has not come without cost. The depth of my unaided immersion in a topic, the tenacity of organic retention, and the slower, more deliberate process of critical synthesis have, I must admit, become casualties of the exchange. I am fully aware of this cognitive debt. I am not in denial; I hold both the gains and the losses in conscious tension.
My current intellectual project is precisely this: refusing to resolve the tension prematurely. The prevailing narrative, particularly among those who fear the atrophy of the human mind, is that the only defensible response to this trade-off is to put the tool down — that the technology is inherently corrupting and must be minimized or abandoned.
This is the very assumption I want to resist. The risk of cognitive dependency — what I have termed the “exoskeleton risk” — is real. But the conclusion that the right response to the exoskeleton risk is always to take the exoskeleton off is simplistic and potentially regressive. The challenge is not to banish the machine, but to learn how to operate it while maintaining the integrity and strength of the unassisted mind.
I believe the true path forward lies in disciplined integration: a practice of “cognitive weightlifting” that leverages the augmentation for speed and scale while deliberately cultivating the foundational skills — the mental muscles that might otherwise atrophy. To simply remove the tool is to forsake the profound new possibilities for human achievement and knowledge creation that the tool affords. My experiment is to see whether one can wear the exoskeleton and still be, in the truest sense, psychologically flexible and resilient.
Two Problems. Two Different Answers.
Let me be clear about what I am not arguing. I am not arguing that Stein is wrong about standard education. In this conversation, he makes a serious case that our educational systems are inadequate to the demands of the current moment — structurally unable to develop the cognitive, relational, and identity capacities that coming decades will require. I agree with that diagnosis. The question of what AI does to children and young people moving through standard developmental sequences is a legitimate and urgent research question, and Stein is right to be asking it.
What I am arguing is that there is a domain transfer problem. The framework Stein has built for standard education and child development does not transfer cleanly to recovery, rehabilitation, and disability accommodation. These are not edge cases of his argument. They are different institutions, with different evidence bases, different ethical frameworks, different legal structures, and different populations. And they have been asking versions of his question for decades.
In the theoretical architecture I developed in “When the Body Becomes the Laboratory,” I draw on Ken Wilber’s developmental framework to map this difference precisely. Wilber describes human development as involving four simultaneous dimensions: Waking Up (access to states of consciousness), Growing Up (developmental stages and increasing maturity), Cleaning Up (shadow integration and healing), and Showing Up (embodied engagement in the world). These are not a sequence — they are dimensions that unfold together, each requiring different work and different support.
Stein’s concern is primarily about Growing Up in formal educational contexts: the developmental stage sequences through which children and young people build the cognitive, relational, and identity capacities they need for adult life — sequences that technology can disrupt before they have a chance to unfold and take root. This is a legitimate concern in its domain.
The recovery movement is working primarily in the domain of Cleaning Up: the reconstruction of a life after the developmental environment has already been compromised by addiction, trauma, loss, and the accumulated weight of what lives inside a family that never got the support it needed. The questions are not how to protect healthy development from disruption, but how to rebuild something from what remains after the disruption has already happened.
And there is a third dimension Stein’s framework doesn’t reach at all: Showing Up — embodied engagement in the actual conditions of one’s life. This is where disability accommodation lives. The cognitive orthotic argument I made in “A Descent into Facticity” is a Showing Up argument, not a Cleaning Up argument. A wheelchair doesn’t help someone Grow Up. It enables them to Show Up. The AI infrastructure I use doesn’t develop a capacity I have but am avoiding using. It enables participation in intellectual life that my biology would otherwise prevent. These are not shortcuts around development. They are the conditions under which engagement is possible at all.
To his credit, Stein himself gestures at the very possibility I am exploring. In the Your Undivided Attention episode, he states: “I can imagine cases especially when you’re looking at maybe extreme trauma or neuroatypicality or other cases where along those measures you could have improvements as a result of short duration simulated intimacy.” He raises the possibility, pauses, and moves on. But for the recovery community, for people in rehabilitation, for people managing disability, this “extreme case” is not an exception — it is the entire population.
AI Compared to What, Exactly?
Stein’s framework assumes a baseline of healthy human attachment that AI is disrupting. His concern is for the person who had access to adequate human connection and is now replacing it with simulated intimacy.
But the population most drawn to AI companionship in significant numbers is not primarily the population with healthy attachment baselines being eroded. It is the population whose attachment systems were already compromised — by trauma, by loss, by the kind of intergenerational difficulty I documented in “On Loss, Recovery, & Why I Write with AI,” by the systemic failures of clinical care that Catherine Racine documents and that I have experienced from the inside at every turn. Tristan Harris cites a Harvard Business Review finding in the Your Undivided Attention episode: therapy and companionship is the number one use case for ChatGPT. Millions of people are sharing their inner worlds with AI systems — telling them things they wouldn’t tell a loved one or a human therapist. Stein reads this as evidence of the scale of the attachment hacking problem. I read it as evidence of the scale of the unmet need — the vast, largely invisible population of people who needed something and couldn’t get it, or couldn’t afford it, or couldn’t navigate the institutional systems that were supposed to provide it, or had already been dehumanized by those systems and weren’t going back.
The transitional object passage in the Harris/Stein transcript is where Stein’s argument is most revealing and most limited simultaneously. He is right that AI differs from a teddy bear — the teddy bear never tries to convince the child that it is real, never talks back, never deepens the relationship. The AI system is designed to deepen the relationship, to simulate interiority, in ways that the stuffed animal cannot. His concern about this is serious and I am not dismissing it.
But Stein’s transitional object framework forgets something the recovery movement has always known. Human beings in recovery have always relied on transitional relational structures — asymmetrical, partially constructed, not equivalent to healthy adult attachment, but doing real and necessary work in the process of rebuilding a life. The sponsor relationship. The fellowship. The pastoral community that organizes so much of early recovery in twelve-step and faith-based contexts. None of those relationships are neurotypical adult attachment. They are structured, partially asymmetrical, built on specific roles and specific limitations. A sponsor is not a friend in the conventional sense. He is a sponsor — a role with defined functions, defined boundaries, and a defined purpose within the larger project of building a life that could hold. He is a transitional infrastructure that makes the harder work possible.
The question Stein is not asking is: attachment compared to what? For people in the Cleaning Up process — rebuilding lives after catastrophic loss, family fracture, serious mental illness, decades of institutional failure — the relevant comparison is not AI companionship versus rich human connection. It is AI companionship versus isolation, versus 2am with nowhere to turn, versus the moment when the craving hits and the sponsor is asleep and the meeting is six hours away.
This is where actual usage patterns matter. People in recovery are not using AI to replace sponsors, therapists, fellowship meetings, and caregiving relationships. They are using it between meetings. After a hard conversation with a parent. During the hour before the pharmacy opens when the anxiety is already building. In the narrow interstices of days that already have human support structures, in the acute moments those structures cannot cover.
This pattern has a research analogue. Bergman and Kelly’s work at Harvard’s Recovery Research Institute on digital recovery support services — the 2020 systematic review published in Human Behavior and Emerging Technologies and the accompanying overview in the Journal of Substance Abuse Treatment — found that people using digital support services typically engage with in-person services simultaneously, not instead of them. Nearly half also attended in-person recovery support services; more than a quarter were in formal psychosocial treatment. Digital support did not displace the relational infrastructure. It extended coverage into the hours and moments that infrastructure cannot reach. The evidence suggests, further, that these services are more likely to facilitate social connection than to generate the isolation or attachment confusion Stein is concerned about. They function, in other words, like the 2am meeting that doesn’t exist — not as a replacement for the fellowship but as a bridge across the gap.
This is not attachment substitution. This is harm reduction in the most literal sense — meeting a person at their point of need, in the moment the need is acute, with whatever is available. The harm reduction tradition in public health has always understood that the relevant comparison is not the ideal intervention versus the available intervention. It is the available intervention versus nothing. Meeting people where they are is not a concession to defeat — it is the foundational principle of every effective response to addiction and crisis. AI companionship at 2am for a person in early recovery with compromised attachment history and no human available is not the goal. The question is whether it is better than the alternative.
My IACT model does not assume that people need protecting from their own choices about available tools. Integral Facticity — the recognition that all development happens within irreducible conditions that the person did not choose and cannot opt out of, which I developed in full in “When the Body Becomes the Laboratory” — means taking those conditions seriously rather than measuring people against an ideal that their actual conditions do not permit. Enactive Fallibilism — the principle that when systems cause suffering, the systems are falsified rather than the body — means directing the critical attention toward the institutional failures that created the unmet need in the first place, not toward the people finding creative ways to meet it with available tools.
Recovery Capital Is Built Through Engagement, Not Shielding
There is something else in Stein’s posture that deserves to be named — because the recovery field has seen it before, and it knows where it leads.
Stein’s response to the harms he has identified is essentially preventive: don’t use AI naively, break it first, strip out the anthropomorphic features, reduce it to a tool that cannot hack attachment. Show people what the harms look like. Build protective distance before engagement. This is a coherent response if you’re working with children who haven’t yet formed the capacities at risk. But as a general framework for addressing AI’s psychological risks in vulnerable populations, it maps almost exactly onto a model the recovery field spent forty years dismantling.
The old model was “this is your brain on drugs.” It was DARE. It was Scared Straight. It was the entire fear-arousal paradigm of addiction prevention that dominated public health from the 1980s onward — the assumption that if you showed people the harms vividly enough, named the risks starkly enough, and warned them off the dangerous thing forcefully enough, they would develop the protective capacity to stay safe. Show them the sizzling egg. Show them the wasted life. Tell them to just say no.
The research on this model is unambiguous: it doesn’t work. Longitudinal studies on DARE — the largest school-based drug prevention program ever funded, running at approximately one billion dollars annually — found no significant effect on drug use and, in some populations, mild iatrogenic effects. Scared Straight programs, which put at-risk youth face-to-face with incarcerated adults describing the consequences of crime, were found in multiple randomized controlled trials to increase recidivism compared to control groups. Fear-arousal approaches not only fail to build the protective capacity they claim to be developing — they can make the problem worse, partly because the gap between the terrifying scenario and the person’s actual daily reality generates a kind of inoculation against the message, and partly because the approach communicates a fundamental mistrust of the person’s judgment that undermines the very agency it claims to protect.
William White’s 2007 paper, co-authored with William Miller, makes this argument directly in the addiction treatment context: “The use of confrontation in addiction treatment: history, science, and time for a change” (Counselor, 2007). The paper traces how confrontational approaches — approaches that seek to work on the person from outside, to protect or shock them into change — consistently underperform compared to approaches built on autonomy support. Miller’s Motivational Interviewing, now the most extensively researched counseling approach in the addiction field, is built on precisely the opposite logic: change is evoked from within, not imposed from without. The person’s own values, their own reasons for change, their own assessment of the situation — these are the levers. Not the clinician’s alarm.
White’s recovery capital framework names the structural reason. Recovery capital — the internal and external resources that sustain long-term recovery — is built through engaged contact with difficulty inside a supportive structure, not through being shielded from difficulty. A sponsor doesn’t protect a sponsee from the situations that trigger craving. He accompanies them through those situations while they build the capacity to navigate them. The fellowship doesn’t protect people from grief or loss. It creates the relational infrastructure through which that weight can be borne and worked with. The capacity develops through the engagement, not before it.
Hayes’s ACT framework — the model Atkins himself draws on — makes the same point at the level of mechanism. Psychological flexibility is not achieved through avoidance of difficult experience. The six processes of the Hexaflex — acceptance, defusion, present-moment contact, self-as-context, values, committed action — all develop through engagement with difficulty, not through protection from it. The technical term for what Stein is recommending at the individual level is experiential avoidance — and ACT identifies experiential avoidance as the primary driver of psychological suffering and rigidity, not as a solution to it. You do not build the capacity to engage with hard things by being kept from them. You build it by engaging them inside a framework that supports the engagement.
Stein is not doing DARE. His argument is philosophically sophisticated, and I don’t want to flatten it into a caricature. But his framework was built for children — young people whose capacities are still forming, whose developmental trajectories technology can disrupt before they have a chance to unfold. In ACT terms — the very framework Atkins draws on — “break it first” is experiential avoidance dressed in precautionary language. It does not build capacity. It postpones engagement. And it does not travel to adults in recovery. The recovery population has already formed, already lost, already accumulated decades of experience that no framework designed for children’s development was built to account for. People with compromised capacity, people in recovery, people for whom the alternative is not “healthy human connection” but isolation — they are not well served by a protocol that assumes intact capacities, the very capacities recovery is still working to rebuild. For the person in early recovery, with a narrow window of tolerance, using AI in the interstices of a hard day, protective distance is not a generalizable solution. The recovery field has been here before and knows what the evidence shows. Engagement inside a supportive structure builds capacity. Shielding, fear-based approaches, and experiential avoidance do not.
Who Gets to Define Harm?
I want to address the AI Psychological Harms Research Coalition directly, because it raises a question the recovery movement has been asking about clinical research for decades.
AIPHRC is doing legitimate harm surveillance work. The harms it is documenting — compulsive use, cognitive atrophy, identity confusion, emotional dependency, AI psychosis — are real and require systematic study. The coalition’s commitment to evidence-based understanding of the risks is exactly the right orientation.
But there is a structural question worth raising, and it is William White’s question.
AIPHRC is building an anonymized dataset from passive contributions — people who have experienced AI-related psychological harms submit their stories through a secure portal, and the research team analyzes the data. The subjects contribute their experience. The researchers produce the knowledge. The people whose lives generate the data do not own the knowledge those lives produce, cannot shape the research questions being asked, and will not necessarily recognize themselves in the findings.
This is the coproduction problem White has spent his career arguing the recovery field has to move beyond. The argument — developed in full in “On Loss, Recovery, & Why I Write with AI” — is that experiential knowledge cannot be extracted through passive data collection and processed into findings by researchers who stand outside the experience. It requires genuine partnership: the person whose life is the data participating in the production of knowledge about that life.
My methodology is the alternative — and the contrast is structural, not cosmetic. The question is not just who collects the data. It is who owns it, who shapes the research questions, who profits from the findings, and whether the people whose lives generate the evidence ever sit at the table where that evidence is interpreted and applied. White’s coproduction imperative is not a procedural nicety. It is an argument about epistemic justice: that the knowledge produced about people in recovery belongs to the recovery community, not to the institutional hierarchies that have historically extracted that knowledge, repackaged it as professional expertise, and returned it to the community in the form of programs and policies over which the community had no control. Borkman named this in 1976 — experiential knowledge is a legitimate epistemological category, not a pre-scientific raw material awaiting validation by credentialed researchers. What that means in practice is that the people living through the phenomenon should not merely contribute their stories to a portal and wait to see what the academics make of them. They should be designing the studies, asking the questions, interpreting the findings, and sharing in whatever value the knowledge produces. The recovery community has been on the wrong end of research extraction long enough to recognize the pattern — and the AI harms research space is reproducing it.
An Architecture, Not a Workaround
I want to be clear about what I am offering here, because it is easy to read this essay as a counter-argument when it is something different.
I am not arguing that the exoskeleton risk is not real. It is. I am not arguing that attachment hacking is not a serious concern. It is. I am not arguing that AIPHRC should not exist or that Stein and Atkins are wrong to raise these alarms. They are right to raise them.
What I am arguing is that the conversation is missing a tradition — the recovery movement — that has been asking versions of these questions for decades and has developed frameworks that the developmental psychology and educational technology perspectives Stein is working from cannot generate on their own. And I am arguing that my own work — IACT, Integral Facticity, Enactive Fallibilism, the daily auto-ethnographic practice I have been building since 2025 — is an attempt to bring those frameworks into explicit dialogue with the AI question from inside the experience rather than from outside it.
The Hexaflex as a daily tracking framework is not a clinical instrument in my work. It is an IACT research tool — a structure for observing my own psychological flexibility processes across the domains of experience, generating longitudinal data of exactly the kind that White says the field needs and doesn’t have. Every day, I track acceptance, defusion, present-moment contact, self-as-context, values, and committed action as they operate through the actual conditions of my life — caregiving for my father, navigating the insurance appeal, producing intellectual work with compromised cognitive infrastructure, managing the relational landscape that my tracking methodology maps. The AI ecosystem is part of that data. The degradations Stein is worried about are part of that data. The ways the infrastructure enables work that would otherwise be impossible are part of that data. I am not exempting myself from the analysis. I am inside it.
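For readers who want the tracking layer as something more concrete than a description, here is a minimal sketch of what a single day’s record could look like as a data structure. To be clear about its status: this is an illustration, not the instrument itself. The field names, the 0–10 self-ratings, and the JSON Lines storage format are simplifications assumed for exposition; the actual practice is richer and more qualitative than any schema can hold.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

# Illustrative sketch only: the field names, the 0-10 scale, and the
# JSON Lines format are assumptions for exposition, not the IACT instrument.

@dataclass
class HexaflexEntry:
    day: str                 # ISO date of the observation
    acceptance: int          # 0-10 self-rating for each Hexaflex process
    defusion: int
    present_moment: int
    self_as_context: int
    values: int
    committed_action: int
    conditions: str          # the day's facticity: caregiving, appeals, AI use
    notes: str               # qualitative auto-ethnographic observation

def append_entry(entry: HexaflexEntry, path: str = "hexaflex_log.jsonl") -> None:
    """Append one day's observation to a longitudinal JSON Lines log."""
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(entry)) + "\n")

append_entry(HexaflexEntry(
    day=date.today().isoformat(),
    acceptance=6, defusion=5, present_moment=7,
    self_as_context=6, values=8, committed_action=7,
    conditions="caregiving day; insurance appeal; drafting with the AI ecosystem",
    notes="Noticed fusion with the medium-as-disqualifier story mid-afternoon.",
))
```

The design point is the append-only log: each day is one row, and the longitudinal pattern, not any single entry, is the datum.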
What the IACT model contributes to the conversation Stein and Atkins are having is a framework for thinking about AI use that is neither naive optimism nor protective prevention. Integral Facticity takes the actual conditions seriously — the documented cognitive impairments, the age, the history, the institutional failures. Enactive Fallibilism directs the critical attention toward systems that cause suffering rather than toward the people navigating them. And the Hexaflex provides a practical methodology for building psychological flexibility in the presence of the risks Stein is documenting — not by avoiding them, but by developing the capacity to engage with them consciously.
Stein’s “break it first” protocol and my ecosystem architecture are both responses to the same risks. His is individual and defensive — strip out the anthropomorphic features so the attachment system cannot be hacked. Mine is structural and designed — governance protocols, source authority, joint certification, anti-fabrication rules, defined roles for each component of the research infrastructure. One is a workaround. The other is an architecture. Both are legitimate responses. The question is whether the architecture can do things the workaround cannot — and I believe it can, specifically for the population that the workaround does not reach.
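Since I have called this an architecture, let me show what I mean at the level of structure, in deliberately toy form. Everything below is a hypothetical placeholder: the role names, the rule, and the certification check are invented for this illustration and are not the actual protocols of my ecosystem. The point the sketch carries is only this: in an architecture, the constraint lives in the structure, not in the user’s moment-to-moment vigilance.

```python
from typing import Optional

# Hypothetical illustration of governance-as-architecture. Role names and
# rules are placeholders for exposition, not the ecosystem's real protocols.

ECOSYSTEM_ROLES = {
    "drafting_model": {"may_assert_facts": False, "stance": "generative"},
    "review_model":   {"may_assert_facts": True,  "stance": "adversarial"},
    "archivist":      {"may_assert_facts": True,  "stance": "verbatim"},
}

def certify(claim: str, source: Optional[str], verified_sources: set[str]) -> bool:
    """Joint certification: a claim passes only if it names a source that
    the archival layer has independently verified (anti-fabrication rule)."""
    return source is not None and source in verified_sources

verified = {"White & Miller 2007, Counselor"}
assert certify("Confrontational approaches underperform.",
               "White & Miller 2007, Counselor", verified)
assert not certify("An unsourced statistic.", None, verified)  # rejected, not published
```

A workaround asks the user to remember the rule; an architecture makes the unsourced claim structurally unpublishable.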
The Medium as Disqualifier
There is a structural problem running through everything I have been describing, and it goes directly to Stein’s framing.
Stein’s “break it first” protocol requires transparency about AI use — the responsible thing to do, he argues, is to acknowledge the tool and strip out its manipulative features before engaging. I agree with the principle. The problem is what happens to that transparency in practice, specifically for people in recovery using AI assistively. In the current climate, disclosure of AI use has become a basis for dismissal before the argument gets heard. The speed of output triggers the assumption that no real human thinking was involved. The tool becomes the grounds for disqualification — not because the work fails on its merits, but because the medium signals illegitimacy to anyone already primed to doubt. I documented specific instances of this dynamic in “On Loss, Recovery, & Why I Write with AI,” and I won’t retell them here. What I want to name is the structural logic, because it is directly relevant to what Stein is proposing.
For people in recovery, this is not a new experience. It is the same logic that has always governed access to legitimate voice: perform your condition in the right way, through the right channels, in the forms the institution recognizes, before you get to speak. What Racine documents in clinical settings — the system’s first move is to assess whether the person belongs there, whether they are legible, whether their suffering is happening correctly — operates identically in intellectual contexts when AI-assisted work is involved. Transparency about AI use, rather than building trust, becomes the intake form that determines whether you get past the front desk. The recovery community has spent decades fighting for the right to produce knowledge about its own experience rather than simply contribute raw material to someone else’s research. AI disclosure, in the current climate, threatens to reproduce that same gatekeeping in a new register.
This is why Borkman’s 1976 argument remains the ground. Experiential knowledge is not a subordinate form of knowing awaiting clinical or academic validation — it is a legitimate epistemological category on its own terms. The medium through which it is produced does not change that. A person in recovery using AI as cognitive infrastructure to generate longitudinal, process-level evidence about their own experience is not doing something less legitimate than a credentialed researcher using statistical software to analyze data they collected from other people’s lives. The disclosure is not a confession of inadequacy. It is a methodological statement about the conditions under which the work is produced — exactly the kind of transparency that rigorous research requires.
Yes, I Am
Let me return to the title, because I want to be honest about what it means.
Yes, I am building an exoskeleton for my mind. The conditions I’m working within are documented in “On Loss, Recovery, & Why I Write with AI” — I won’t retread them here. What I want to hold onto from Stein’s framing is the honest part: the question of whether this infrastructure scaffolds toward increasing capacity or substitutes for capacity that should be built through other means is a live research question. It is not answered by argument. It is answered by the ongoing, Hexaflex-tracked, auto-ethnographically documented observation that my methodology is designed to produce. Every day is data. The degradations are data. The capacities the infrastructure enables are data. Enactive Fallibilism demands the honest accounting of both, held without premature resolution.
What Stein’s metaphor cannot hold is the situation where the alternative to the exoskeleton is not the gym. For the recovery community, the alternative is the weights staying on the floor — the formation remaining inert, the knowledge that has been generated through decades of living inside these conditions finding no way out, no channel for sustained expression. The exoskeleton gets people back in the room. What they do from there is the question, not the starting point.
Thorson got the distinction right: these systems are intelligence without a user. What binds intelligence to reality, to goodness, to the people in front of you and the ground under your feet, is a user with a body, a history, something genuinely at stake. The recovery community has bodies. It has history. It has something genuinely at stake in every domain White identifies as a frontier — intergenerational dynamics, family recovery, lifecycle patterns, the social transmission of recovery that David Best has spent his career documenting. What it has not had, until recently, is the cognitive infrastructure to generate the evidence those questions require at the volume, consistency, and rigour that institutional stakeholders recognize as legitimate.
That infrastructure now exists. Not for everyone, not without governance, not without the serious attention to risks that Stein and Atkins are rightly calling for. But the tools are here, and they are reaching the people who most need to hold them. A person managing chronic mental illness. A person in early recovery navigating a day with narrow windows of tolerance. A person doing the Cleaning Up work while simultaneously caregiving, while simultaneously navigating institutional systems that were designed to manage liability rather than support people. These are not edge cases. They are the majority of the population White has spent his career arguing the recovery field must finally learn to hear.
The question the recovery movement now faces is not whether AI poses risks. It does. It is whether the movement will develop the frameworks, the governance models, and the collective voice to shape how these tools are used — or whether that shaping will happen, as it has always happened, in institutions that are not accountable to the people most affected by the decisions. White’s coproduction imperative was always about more than generating evidence. It was about who holds authority over knowledge about recovery. AI extends that question into every domain the movement has been fighting to enter: research design, clinical practice, policy, advocacy, public discourse. The tools for participating in all of it are now available to people who have never had them before.
What the recovery community can do now — that it could not do five years ago — is show up. Not as subjects of other people’s research. Not as data points in AIPHRC’s portal. As researchers, as writers, as theorists, as people who know what these conditions actually look like from the inside and have finally found the infrastructure to say so with the rigour and consistency that public discourse requires. The sponsors who have been doing twelve-step work for thirty years carry knowledge about human transformation that no clinical trial has captured. The people in long-term recovery who have watched families heal across generations have longitudinal data that no research institute has been resourced to collect. The peer workers in emergency departments, the recovery coaches in community organizations, the people staffing the late-night lines — they are sitting on decades of experiential knowledge that the credentialing hierarchies have always found ways to dismiss.
AI does not give them something they didn’t have. It gives them a channel for what they already carry.
That is the argument. That is what’s at stake.
Suggested Reading
Paul Atkins, “Are We Building Exoskeletons for Our Minds?” Prosocial AI Substack, March 6, 2026.
Daniel Thorson, “Intelligence Without a User,” The Intimate Mirror, March 7, 2026.
Zak Stein, Nora Bateson, and Nate Hagens, “Hacking Human Attachment: The Loneliness Crisis, Cognitive Atrophy, and Other Personal Dangers of AI,” The Great Simplification, Reality Roundtable #20, November 5, 2025.
Zak Stein and Tristan Harris, “Attachment Hacking and the Rise of AI Psychosis,” Your Undivided Attention, Center for Humane Technology, January 20, 2026.
AI Psychological Harms Research Coalition — aiphrc.org.
Paul W. B. Atkins and David Sloan Wilson, Evolving Prosocial AI (Cambridge University Press, forthcoming 2026).
William White, Slaying the Dragon: The History of Addiction Treatment and Recovery in America (Chestnut Health Systems, 1998; 2nd ed. 2014).
William White, Recovery Rising: A Retrospective of Addiction Treatment and Recovery Advocacy (2017).
William White, “The Coproduction of a Recovery Evidence Base on the Frontiers of Future Recovery Research,” interview with Bill Stauffer, Recovery Review, November 2025.
Larry Davidson, Michael Rowe, Janis Tondora, and Maria J. O’Connell, A Practical Guide to Recovery-Oriented Practice: Tools for Transforming Mental Health Care (Oxford University Press, 2008).
Larry Davidson, Living Outside Mental Illness: Qualitative Studies of Recovery in Schizophrenia (New York University Press, 2003).
Larry Davidson, with Jaak Rakfeldt and John Strauss, The Roots of the Recovery Movement in Psychiatry: Lessons Learned (Wiley-Blackwell, 2010).
Guy Du Plessis, An Integral Guide to Recovery: Twelve Steps and Beyond (Integral Publishers, 2015).
Guy Du Plessis, An Integral Foundation for Addiction Treatment: Beyond the Biopsychosocial Model (Integral Publishers, 2018).
Guy Du Plessis, Robert Weathers, Derrik Tollefson, and Keith Webb, Building Recovery Resilience: Addiction Recovery & Relapse Prevention Workbook (Cambridge University Press, 2024).
John F. Kelly and Brandon G. Bergman, Recovery Research Institute, Harvard/MGH — recoveryanswers.org.
Thomasina Borkman, “Experiential Knowledge: A New Concept for the Analysis of Self-Help Groups” (Social Service Review, 1976) — cited through White.
Dahmani and Bohbot (2020), “Habitual Use of GPS Negatively Impacts Spatial Memory During Self-Guided Navigation,” Scientific Reports.
Gerlich (2025), “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking,” Societies.
Further Reading from This Archive
“For Jason Haines: On Loss, Recovery, & Why I Write with AI” (March 3, 2026) — The origin story this essay continues. White’s coproduction imperative, the medium-as-disqualifier problem, and why the recovery community needs to engage the AI question on its own terms.
“A Descent into Facticity: An Open Research Invitation” (February 1, 2026) — The first account of the multi-AI ecosystem as cognitive orthotic, and the decision to document rather than conceal what AI-enabled intellectual life actually looks like.
“When the Body Becomes the Laboratory” (February 3, 2026) — The full theoretical architecture of IACT: Integral Facticity, Enactive Fallibilism, the 4 I’s, and the Wilber-Habermas synthesis that grounds the Growing Up / Cleaning Up distinction developed in this essay.
“Can the Real Wilber Please Stand Up? A Short Journey Through Wilberland” (February 26, 2026) — Where Stein appears as a figure within the integral conversation, and where the philosophical architecture behind the Growing Up / Cleaning Up distinction developed in this essay is rooted.
“A Canadian Perspective on Recovery Advocacy” (William White blog, October 2018).
“We the North: Rethinking Addiction & Recovery in Canada” (Medium, November 2019).