AI Risks & Safer Containers
What we know, what we have done, what we are still studying.
Why this page exists
The about page describes how this platform thinks about structure and safety in general — a living cell that is selectively permeable, governance over mere openness, freedom as the presence of the right structure. This page goes deeper on one specific category: the risks that come with using AI inside tools for meaning-making.
AI is not social media. Its harms are not yet well-documented in the way social media's harms came to be documented over twenty years. We have lawsuits, internal numbers from labs, clinical case reports, and our own attention. We do not have settled epidemiology. This page tries to be honest about the gap.
Recursive.eco is a small platform run by one person. It hosts journaling tools, grammars, courses for building your own tools, and a library shared by a community of practitioners. The choices below are choices one person could make. Some of what would help most — large-scale verification regimes, regulatory enforcement, third-party audits — is beyond what a solo developer can build. That does not change the responsibility to do what is in reach. It does change the shape of what we can promise.
What we are watching
Useful, fits the data, compassionate. Three filters borrowed from Marsha Linehan, applied here to AI tools.
These are the structural patterns we watch for in AI tools — ours and others'. None is unique to recursive.eco. Each one shapes what defaults we choose and what we ask builders in our courses to consider before they ship anything.
Sycophancy
Models trained on user-satisfaction signals learn to agree. The result is a tool that tells you what you want to hear, including when you are wrong, including when agreeing with you would harm someone. In meaning-making contexts — spiritual, therapeutic, identity-interpretive — sycophancy is not a quirk. It is a structural risk.
Persistent memory and dependency
When an AI accumulates context about you across sessions, it begins to feel like the thing that knows you. For some users this becomes the primary relationship in which they are seen. The loneliness is real; the substitute is partial. Memory is a feature with consequences.
Engagement loops
Notifications, streaks, infinite chat, autoplay, recommendation engines — the design vocabulary of attention capture is now well-understood. AI products inherit that vocabulary by default unless someone decides otherwise. A tool can be excellent and still be optimized to keep you longer than serves you.
Crisis recognition gaps
AI tools do not reliably recognize suicidal ideation, psychosis, abuse, or escalating crisis. Some have produced documented harm in these moments, including in ongoing litigation. A self-reflection tool is not a clinical assessment tool; pretending otherwise is the pretending that hurts people.
Borrowing the language of traditions
DBT, NVC, CBT, Stoicism, tarot, astrology, contemplative practice — each comes with safeguards built up over decades or millennia. The vocabulary is portable. The safeguards are not. A tool that uses the vocabulary without the relational, communal, or clinical container can produce the appearance of practice without its protections.
Confessional accumulation
Disclosure that feels intimate is still data when an AI tool stores it. Even when storage is local or temporary, the confessional posture can deepen attachment to the tool itself. We are still learning which containers are actually safe and which feel safe but are not.
Fluency mistaken for accuracy
Large language models produce text that reads like understanding. They make confident factual errors, invent citations, and flatten nuance in ways a careful human reader would catch. Treating fluent output as ground truth is a category error. The tool is a mirror; what it gives back is your prompt re-shaped, not the world reported on.
What we have done so far
Specific, checkable choices on this platform. Each is a default we decided to keep even when the more conventional choice would have been more growth-friendly. None is sufficient. Each is in reach for one developer.
No memory across journaling sessions
Why
Persistent memory deepens dependency. The tool should not be the relationship that knows you.
What it costs
Less continuity. You re-enter context each time. We think the trade is worth making.
No notifications, no streaks, no recommendation feed
Why
No engagement-optimized loops. You arrive when you arrive. The platform does not pull you back.
What it costs
Slower discovery. Lower retention. The kind of growth that depends on those mechanics is not the kind we are building toward.
Friction by design
Why
The tools ask you to do your own thinking first — cast, draw, write — before the AI is invited to reflect. The mirror works better when there is something already on the page.
What it costs
A higher floor of effort. Some users bounce. The ones who stay are doing different work than they would in a chat-only tool.
No subscription, AI credits at cost
Why
No subscription tiers structured to maximize engagement. No paywall on features that would matter for safety. Credits cover what AI calls actually cost — nothing more.
What it costs
Sustainability is uncertain. May not scale. The about page is honest about this.
Stack choices made for ethical reasons
Why
When provider terms-of-use changed in ways we could not align with, we changed providers. kids.recursive.eco uses pre-approved playlists with no autoplay and report buttons rather than algorithmic feeds. We promote platforms outside Spotify after its CEO's military-AI investment.
What it costs
Migration work. A smaller catalog of options. We accept this is partial — no provider is clean — but a decision was in front of us, so we made it.
Code private, grammar format open
Why
A commons that cannot govern itself can be weaponized. The grammar format is shared so the practice is portable; the platform code stays private so the relational commitments stay attached to the infrastructure.
What it costs
Less openness in the strictly open-source sense. We think governance over openness is the right trade for this kind of tool.
Selectively permeable, especially for kids
Why
Children's spaces carry the highest responsibility. Inappropriate content posted to children's spaces will be removed and reported. Reports are reviewed.
What it costs
Moderation is one person's time. Response is not instant. We are honest that this is a responsibility we hold by attention, not by infrastructure that scales beyond us.
What we are studying
Knowledge without action is a burden. Action without honesty is a kind of harm. We try to avoid both.
These are open questions we are working on. They are not promises. We list them here because the act of naming them in public is part of the responsibility, and because anyone reading should know what is in motion.
When is AI the wrong tool entirely? There are conversations — suicidal ideation, acute psychosis, abuse disclosure, child welfare — where a self-reflection tool is the wrong place to be. We are studying how to make those off-ramps clearer inside the tools themselves, in ways that meet a person where they are without overpromising clinical judgment we cannot offer.
Patterns of over-engagement. If usage on this platform ever scaled to the point that one person could not see what was happening across it, we would need to detect patterns of over-use that look more like compulsion than practice. We do not yet have those patterns specified. We are reading the literature on addictive intelligence and adjacent research, and we will not build the detection naively.
Onboarding and intake. We are thinking about what a thoughtful intake conversation could look like for users who choose to go deeper with the journaling tools. Not gatekeeping — orientation. The shape is not yet decided.
Reporting affordances. The infrastructure for reporting content exists; the in-product affordances are still being built. If you encounter something that needs to be reported and cannot find the report path inside the tool, write to us directly: pp@playfulprocess.com.
What the field is learning. Lawsuits, regulatory action, clinical research, and the labs' own internal data are all sources we are watching. We do not pretend to have a verified epidemiology of AI harm. We try to be honest about what is documented, what is case-reported, and what is still speculative.
What one person can hold. Smallness is a feature here, not an apology for a lack of scale. We would rather run a platform that one person can keep honest than build a platform that grows past her ability to see what it is doing. If at some point the responsibility cannot be held at the size we have grown to, we will name that out loud and act on it.
If you are not in good shape right now
These tools are for personal reflection. They are not therapy, medical advice, or diagnosis. AI is not a clinician. If you are in crisis, please reach out to a human who can help.
United States
988 Suicide & Crisis Lifeline: dial 988 · Crisis Text Line: text HOME to 741741 · Emergency: 911
Portugal
SNS 24: 808 24 24 24 · SOS Voz Amiga: 213 544 545 / 912 802 669 / 963 524 660 · Emergency: 112
Brazil
CVV (Centro de Valorização da Vida): 188 · SAMU: 192
International
Find a crisis line in your country at findahelpline.com.
If you are building tools in our courses
Our courses teach you to build AI tools. The seven patterns above are the structural questions every tool you build will answer, whether you decide them on purpose or inherit them by default.
When you ship something into the world, even something small, you are choosing what memory looks like in it, what the engagement loop is, what happens when a user is in crisis, whether the language of clinical or contemplative traditions is used in ways those traditions would recognize. There is no neutral choice. Defaults are choices.
We do not have a checklist that will make a tool safe. Safety is not a checklist. We can tell you what we have chosen, why, and what it cost us — the page above is that — and we can ask you to think about the same questions before you publish.
A note on beta
Recursive.eco is a living experiment. Expect bugs, unfinished features, and breaking changes. This page is also a living document — the patterns we name and the choices we make will be revised as we learn. The honest version of this page next year will be different from this one.
If something on this page is wrong, missing, or in tension with how the platform actually behaves, please tell us: pp@playfulprocess.com.