AI offers ‘help’ instantly, anytime. It’s easy to be seduced, because it is genuinely helpful in many ways.

But these tools can be surprisingly isolating.1
If understanding isn’t shared, it pushes us apart.2
If it’s happening at 2am, it’s likely dysregulating.

You won’t find the solutions to these challenges in Silicon Valley.

The answers are in the body. They’re in the present moment, here and now.

What I offer

For Mental Health Practitioners

If you’ve ever opened up to an AI chatbot, freely sharing your thoughts and emotions with it, then you probably understand how powerful that experience can be. It gets you, instantly.

AI tools can help your clients find greater internal clarity. But if that understanding isn’t shared with other humans, it can become an isolating barrier: the understanding itself turns alienating.

Chasing better outputs isn’t the answer (though sometimes prompt engineering techniques can help; let’s talk about how…).3

The solution is in bringing awareness to the body, to the present moment. That’s something your clients need a real human to help navigate.

I offer psychoeducation and workshops for practitioners, tailored to your needs.
I also consult on specific cases. Let’s talk about how I can help.

For Schools

Your students are using AI to complete homework assignments. Is it actually helping them learn, or is it short-circuiting their critical thinking?

Let’s get beyond the question of “did the student use AI for this assignment?” to “how did they use AI?” Was it used to earn understanding, or was the important cognitive work offloaded, leaving nothing learned and more clutter created?

AI tools can be powerful in education, creating customized curricula tuned to individual students’ needs, and offering unlimited attention, always on demand. Your students need help learning to wield this power responsibly.

For Coaches

Your clients are using AI to help them think and plan, but they need a real human, like you, to help them get moving. When AI interactions accompany them down deep rabbit holes, it’s easy to get lost. The grounding that only humans can provide is irreplaceable.

About me

I’m Sam Hiatt, a software engineer exploring the intersection of AI safety and neurodivergence. My background is in machine learning engineering, and my current work draws on both that technical foundation and my lived experience as a neurodivergent person navigating AI tools: where they help, and where they fall short.


I also write about the intersection of AI, neurodivergence, and communication from lived experience. Read my origin story →

Get in touch

I offer a free 20-minute introductory consultation to understand your context before proposing anything.

Book an appointment!

You can also email me at info@modulatingepsilon.com.


This website was built with AI assistance. I use AI tools in my work and I am transparent about when and how, but I will never post AI slop.


References

  1. Dohnány, S. et al. (2025). Technological folie à deux: Feedback Loops Between AI Chatbots and Mental Illness. arXiv:2507.19218 (cs.HC). https://arxiv.org/abs/2507.19218

  2. Harari, Y.N. (2024). Interview on The Daily Show with Jon Stewart. “Nexus” & Threat of AI in the Information Age. Sept 9, 2024. Comedy Central. https://youtu.be/euBAVec2RhE?t=11m57s 

  3. Iftikhar, Z. et al. (2025). How LLM Counselors Violate Ethical Standards in Mental Health Practice: A Practitioner-Informed Framework. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(2), 1311–1323. https://doi.org/10.1609/aies.v8i2.36632