Psychology Reporting and AI (2)

Psychology reporting on AI therapy: conceptual issues more troubling than design issues

When it comes to reporting on psychological studies, research design issues are standard, and reporters should know this. Even with AI, these are kinks to work out through subsequent trials. As the investigators refine their research in AI therapy, they may find that these exploratory results hold up, or they may not. And as the article notes, Therabot is generation 3; there will likely be a 4th, and a 5th, each of which will refine and expand on the last. But that’s not the end of the story.

Psychosocial Issues

Remember when I wrote above that AI won’t tell us whether we’re using the right rules, or the right game, unless we ask? Here’s where I get back to that. If anything is clear from the history of psychology, and from the history of how we treat each other in society generally, it is the need for humility. At each step, in our efforts to do what is “right,” we inevitably, self-righteously, do wrong as well. We impose our model on those for whom it does not make sense. Or we attempt to set people free by swapping one set of chains for another. We let our assumptions tell us that we should control things we cannot, and we define health as something it is not (e.g., the absence of suffering, or even the presence of happiness).

These assumptions are built into AI models through us. Our lack of foresight gets “scraped” into models such as Therabot, and into how we assess Therabot. For example, the NY Times article states, “There were other promising findings from the study, Dr. Torous said, like the fact that users appeared to develop a bond to the chatbot.” After all, the strongest and most consistent evidence base in psychotherapy is the relationship between therapist and client. So, bonding is good, no? The article mentions some of the troubling potential consequences of this, such as people who have developed romantic relationships with ChatGPT and a boy who killed himself after becoming “obsessed” with an AI bot. Nevertheless, the writers seem to see the danger mainly as a safety risk, discussing only the potential for suicide and the need for boundaries, not the role of the bonding itself.

What is a human–AI relationship?

Likewise, in the original study paper, the authors write, “participants rated the therapeutic alliance as comparable to that of human therapists.” First – this is a rather absurd statement for a study that did not compare with active therapy. These ratings cannot validly be compared to some general average rating, because we don’t know how this sample compares. Furthermore, there are validated scales for the therapeutic alliance with human therapists, but they are not validated for AI therapy. Nor do we know the role the alliance takes in AI therapy. What is an alliance with an AI? How does it affect how we see ourselves and each other? It may be more important, less important, or, most likely, differently important. For example, modeling is an essential learning tool, tied into the relationship, that therapists constantly use. Does modeling work between a human and an AI? If so, how?

And then beyond the validity and nature of this “bond,” what is the role of the therapeutic alliance in the first place? Carl Rogers identified three core components of this exchange: genuineness, empathy, and unconditional positive regard. These are essential to forming a (human) bond, sure, but they don’t say what that bond is doing. A key part of how I assess this bond in therapy is how well it extends to other relationships in the person’s life. If the bond were there just so they’d want to see me more, then I’d be more like a cult leader than a healer.

So far, what I see from technology – AI or otherwise – is that the bond it forms is making people want to spend more time with technology. Not with people.

I’m here for you. Always.

Which leads to the next problem. The article states, “Unlike human therapists, who typically see patients once a week for an hour, chatbots are available at all hours of the day and night, allowing people to work through problems in real time.” Technology provides the support and the answer, anytime, anywhere. As clinician Dr. Michael Heinz is quoted, “This can go with you into the real world.” This is a tricky feature that comes with tradeoffs. It sounds nice to feel like I have my own personal guide, anytime, anywhere. The cost is our own learning process. The anxiety and angst of feeling the pressure to figure it out, taking a stab without knowing, and then learning from how it turns out is essential to being able to do it better the next time.

Perhaps we can code the AI therapist to respond something like, “Use what you know. You can do this, and even if it doesn’t work out how you hope, you’ll have tried and will learn and grow from it.” Or alternatively, “Janie, I know you want my support right now, but the best support I can give you is to remind you that this isn’t the time. You should be trying to sleep. We’ll talk tomorrow.” But there’s always going to be a desire to have the AI work through the problem for us. Right now. Instead of learning new skills, we learn new areas of dependence, and solidify unnecessary limits on ourselves. 24/7 AI therapy opens that door wide, and it’s easy to go from here to the situation in WALL-E or that Doctor Who episode last season, or countless other examples.
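What that could look like is easy to sketch. Below is a minimal, purely illustrative Python example of a “boundaries first” response policy along the lines of the replies quoted above; it is not how Therabot or any real product works, and every name in it (autonomy_first_reply, QUIET_HOURS, and so on) is a hypothetical stand-in.

```python
from datetime import time

# Hypothetical "you should be sleeping" window; an assumption for illustration only.
QUIET_HOURS = (time(23, 0), time(6, 0))

def in_quiet_hours(now: time) -> bool:
    """True if `now` falls in the overnight window (which wraps past midnight)."""
    start, end = QUIET_HOURS
    return now >= start or now < end

def autonomy_first_reply(message: str, now: time) -> str:
    """Redirect toward the user's own skills instead of working the problem for them."""
    if in_quiet_hours(now):
        # Boundary reply, paraphrasing the example quoted above.
        return ("I know you want my support right now, but the best support I can give "
                "you is to remind you that this isn't the time. Try to sleep; we'll talk tomorrow.")
    if "what should i do" in message.lower():
        # Encourage taking a stab at it rather than handing over an answer.
        return ("Use what you know. You can do this, and even if it doesn't work out how "
                "you hope, you'll have tried and will learn and grow from it.")
    return "Tell me more about what's going on."

# A late-night plea gets a boundary, not a solution.
print(autonomy_first_reply("What should I do about my coworker?", time(1, 30)))
```

Even in a toy like this, someone has to hard-code what counts as the “right” reply, which is exactly the deeper problem I come back to in the summing up below.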

“Ben” gets it: “Humans are messy.”

This problem is well stated in the comments section by reader “Ben”:

Exactly. There’s going to be a real urge to focus on preventing negative feelings. This, in turn, is not only going to get in the way of our learning processes, it will also make us slaves to our own fears. The less experience we have tackling fears, the less capacity and confidence we will have when they arise. And then comes more dependence on AI, creating a positive (i.e., self-reinforcing) feedback loop. That’s the addiction that Ben calls out.

If I don’t feel like I can handle social fears, then it’s going to be much easier to stick with AI friends and even AI romance than to deal with humans I feel unequipped for.

AI error versus human error

Here’s an important distinction, at least so far, between us and “AI”: mistakes. We are, by evolutionary design, inconsistent and prone to error. It’s how we adapt and evolve in an ever-changing world. Sure, AI has been shown to make sometimes hilarious and sometimes grave errors, but these are quirks of the code or of faulty source data, not errors by design. Article reader “E” commented:

Yes – and that’s the very value of a human therapist. We’re striving just like you, and we make mistakes. I try to match where to empathize, and I try to be open and nonjudgmental. But when (not if, but when) I’m not, and we can work through that gap together, there’s no better learning experience. It’s practice for real-world situations that AI, by design, may never be equipped for. I’m not saying you should try to find a judgy therapist. The striving is key.

AI will never solve us. That’s up to us.

Summing up: Psychology reporting about AI therapy

Beyond the standard research design challenges of gathering evidence, we have more complicated issues to face when considering AI therapy. What is the relationship between a human and an AI? What role does the relationship play in therapy, and how does that translate to AI? Also, what does it mean to have 24/7 access to an AI therapist? And how is error incorporated into AI therapy?

Finally, we come to the main problem with AI therapy, at least for the moment and for the foreseeable future. AI therapists can only provide the responses that lead to what we tell them is the right outcome, and then do so repeatedly and reliably. This is a problem with human therapists also, to be sure. But AI changes the scale dramatically, and limits the chance for correction. After all, you can always find a new therapist. It will be much harder to find a new AI, trained on a fundamentally different model.

When (not if, but when) we feed in the wrong outcomes, AI is going to be very good at leading us down a problematic path. Much faster than we could on our own. This isn’t a prediction: it’s already here. We’re already seeing it with smartphones and “the anxious generation.”

I guess what I’m really saying is, the problem (and solution) isn’t AI. The problem (and solution) is us.

AI will never solve us. That’s up to us.