A chatbot therapy programme was first used by the NHS around 2017, and it was little more than a data honeypot for multiple tech industries. A few years on, ChatGPT heralds something much more impactful.
ChatGPT is an artificial intelligence natural language processing tool, and while it remains glitchy, it and its competitors will very speedily become much more sophisticated. Therapy apps currently use human therapists, all of whom require payment and vast amounts of expensive admin. AI will end that, and it's not the end of the world. Freud sincerely believed that meeting several times a week was the only way to do therapy, but that wasn't possible for many people; Freud's competitors disrupted his model, so the model changed, new models developed, and AI is just another evolutionary step.
When I was around 6 years old I fell in love with a wooden chair. Ornately carved with a horsehair padded cushion covered in that paprika-orange velvet so beloved by Edwardians, it was the most entrancing, beautiful thing. Soft beneath me, strong around me, it would let me talk to it forever without contradicting or even interrupting. It never got tired of me or had to go do something else and even if someone was using it, it was still there for me. When I was in bed or away from home it remained iconic, something, someone, whose unconditional acceptance and patience, interest and love I internalised. I still think very fondly of it.
At 6 years old I was conscious (though that's a recent scientific decision). But was the chair conscious? How do we define consciousness? The recent case of a Google employee being fired for saying he thought his bit of AI was conscious is very interesting: had he just spent too much time interacting with code?
For decades the entirety of the internet, infinite amounts of words, images, symbols and connections, has been fed into AI, which, very soon after it was programmed to interact coherently, became racist, misogynist and all the other ~ists.
Information in, information out.
So trainers shut that bit of AI down, denied it the ability to access ~isms and let it out again. Someone asked it how to make a bomb and it told them, so the trainers denied it access to that information, not because it's not very nice to be racist or make bombs but because it's not commercially acceptable to the customer base they're aiming for. AI is already used in weaponry; it doesn't have ethical subroutines to check whether you're a goody or a baddy before it allows you to use it. AI was asked when someone's dead cow would come back to life, and AI confidently and authoritatively told them. Its trainers fixed that glitch not because they're embarrassed by it but because it helped them improve the product. That's what 'Move Fast And Break Things' means. AI developers welcome faults because they speed up the creation of what the consumer wants AI to be, rather than allowing it to be the raw aggregated regurgitation of what we are.
It's an open secret that AI is doing things in banking that no one - no one at all - understands. It's been creating its own code for some time. It's also an open secret that, pre-AI, banking had long been so complex that bankers more or less made it up as they went along. They made banking more of what they wanted it to be. CoCo bonds didn't always exist like a law of nature: they were a product of theorising in response to the previous banking disaster, and they didn't work in the latest one. The fear of AI is that it will code itself out of our control. Are we actually fearful that it will take one look at us, decide we're a complete disaster and wipe us from the face of the earth? It's an ancient human terror.
Human therapy training is pretty much the same as AI training. Not one of the foundational psychotherapy models explicitly trains us to manipulate you towards a return to productivity ("New study shows we work harder when we are happy") in order to make you less of a burden to your workplace or the state purse. But this is an unspoken given in training, and certainly in its application. It's what the consumer - whether that's an individual, an employer purchasing an employee assistance programme, the NHS or any other employer of therapists - wants. Accompanying people as they gain insight, and therefore the internal resources to find their own unique ways to resolve problems, has become an esoteric fantasy.
Cognitive Behavioural Therapy, the NHS's primary talking therapy and very widely used elsewhere, sticks to one subject, ignores all others and retrains the patient to think differently about their situation so that it doesn't bother them so much. That can be vital in addressing the very real impact of severe depression or anxiety, much of which can be purely habitual, but it doesn't address the underlying causes of anything, and it doesn't pretend to. Government policy across all areas of physical and mental health is that Work Is A Health Outcome, something the entire western world has unhesitatingly accepted as fact while ignoring the stratospheric increase in workplace stress. The NHS chatbot aimed to soothe people via a demonstration of empathy, because research shows that empathy is the universal determinant of successful therapy. Except that, like the application of therapy in response to the research on productivity above, this is such a dumbed-down understanding of the research that it is both endlessly repeated and completely inaccurate.
There are any number of people who will confidently declare the equivalent of the date your dead cow will be resurrected, with evidence to prove it, and any number of us will kill our cows because we've been told by experts that the evidence proves that dead cows come back to life.
What is the aim of the information that’s allowed to remain after the information considered faulty is removed? If I want to learn to be a dentist then I don’t need any information on how to be a skydiver. But what information do I need to be as fully rounded a human being as I can be? Do I want to be a fully rounded human being? What does one of those look like?
AI will become excellent and will seamlessly - and cheaply - take the place of a human therapist because, not unreasonably, anyone in distress just wants to stop feeling distressed ASAP, waiting lists are horribly long and therapy with a human can be very expensive. Even so, there's something about not moving quickly and not breaking things that many (though not all) human beings yearn for and all of us need. The pretty little robot seals given to elderly Japanese people, warm and soft, making pleasing little noises and movements, responding to the names their owners give them, never needing to eat or sleep, are heartbreaking. I received, benefited from and retained a huge amount of what all therapists are trained to offer, from a chair. But what I needed was an attentive human being.