Steve de Shazer often said that we cannot know whether a session was useful until the client comes back, and that only if the client reports change can we say that we did something useful. I would perhaps be a little more careful even than Steve. Many of you will have heard me say, perhaps even a number of times, that in any one piece of work we can never know that we have been useful to the client, even when the client reports change. After all, the client may have changed anyway, since many people left on a waiting list do indeed change. In fact it is possible that the client might have changed more quickly had the client not seen us; it is truly impossible to rule out the possibility that we slowed the client down. All we can know is that we met with the client and the client changed, and from this observation alone it is not possible to deduce causation. All that we can perhaps say for sure is that we did not stop the client who reports change from changing. However, when we see lots and lots of clients who report change, and our outcomes are better than the rate of change among clients left on a waiting list, then we can perhaps conclude that something that we did was useful to some of them. But it is still not possible to know whom we might have helped and whom not.
I was thinking about this during the course of the week because I was thinking about training. I have just completed two very enjoyable Level 1 SF training courses. One was an in-service course for a group of professionals working in CAMHS (Child and Adolescent Mental Health Services) and the other was one of our own open access courses, both online of course. I might be tempted to say that these were ‘good courses’, that the courses ‘went well’, even perhaps that I proved myself to be a ‘good trainer’. But how can I tell? At the end of each course the feedback was good. People said some lovely things in the chat and I have received a number of lovely emails, but still, can I say that the courses were ‘good’ courses? In order to be able to say that, I would have to decide on a set of criteria. How do I judge? I certainly doubt that we can tell at the end of a course! People may be happy, they may have enjoyed the course, they may feel inspired, but we can surely only know that the training was ‘good training’ if it impacts on the service that clients receive.
However, perhaps even a post-course perspective is complicated. I always say that it is not my job to try to persuade people to use the SF approach; I am not a super-seller, a sales-person. It is merely my job, I typically say, to try to describe the approach clearly enough for people to make good decisions about whether they might want to make use of SF in their practice going forward. So what if no-one after a course chose to use the SF approach, and yet attenders’ outcomes improved when using their own approach? I could imagine this happening. Would this mean that it had been a ‘good’ course – or not? What if people following a course were able to describe SF perfectly clearly but had decided that SF was not their ‘cup of tea’? Would this make it a good course?
In the past BRIEF has used ‘happy’ forms at the end of programmes, and the results of these evaluation forms were consistently extremely good, indeed almost surprisingly good. We have gone further than this and followed course participants up for 12 months to see how many people are using SF and in what ways. But how many of us as trainers have followed up our participants’ clinical work following a training, comparing their outcomes with the outcomes of people who did not attend, or who attended another training in another model? And the answer, I imagine, is . . . virtually none. I do truly believe that the training that BRIEF offers is excellent, but now I am wondering whether I can say that for sure. We could blow our own trumpets even louder and assert that BRIEF Solution Focused training is ‘the best’, but what are we basing that on? Can we with integrity say this? We truly cannot know unless we think about how to judge and what criteria to use, and the only truly important criterion, surely, in the end, must be clinical outcome. In the end it is always our clients who matter most. This week has been a bit unsettling.
Evan George
London
23 January 2022