In our previous blog, we discussed the core themes of the social housing green paper and the possible implications for housing providers when the white paper lands this autumn. While many questions surround the paper’s requirements, the sense of ‘waiting for the white paper’ felt by social housing providers is, in large part, related to benchmarking: waiting to find out whether the government will specify new standardised measures for social housing.
Given the impact these changes would have on social housing as a whole, this second blog in our Green Paper series discusses benchmarking concerns and explains why, in this case, one size doesn’t fit all.
Social Housing League Tables
For some social housing providers, the greatest fear is that their performance scores will be published in a league table, open to public scrutiny. The Government were vague about league tables in the social housing green paper but, unpopular as the idea is, I can understand why they proposed it.
As a social housing tenant, you don’t get a lot of choice. You don’t choose who your landlord is – you’re offered a home and you either move in, or you don’t. Tenants currently have no means to compare performance between providers, or to challenge it. Given that context, it seems fair that the balance of power should shift and that tenants should be able to see how well their landlord performs against others in the sector.
Having said that, I also understand social housing providers’ concern that ranking hundreds of organisations from worst to best doesn’t tell the whole story. Each organisation serves different communities and focuses on different areas of service delivery, with different tenure mixes and customer needs. You’re not comparing apples with apples.
Why One Size Doesn’t Fit All
When it comes to making performance information transparent, the drive is less about offering tenants a real choice of landlord and more about challenging landlord performance on repairs, complaints, and the management of their homes.
There is a middle ground here though: grouping providers into ‘nearest neighbour’ groups.
Nearest neighbours don’t have to be geographically close. They could simply share characteristics such as demographics, customer base, and the size and scale of the organisation. It’s the same concept as the nearest-neighbour comparator groups used in the old Best Value Performance Indicator (BVPI) days.
Open, transparent performance information compared to nearest neighbours compares like-for-like in a way that is useful for both landlords and their customers.
Nearest neighbours are more likely to operate in similar areas, so where there is a reasonable comparison to make, tenants could challenge performance. And while tenants might prefer a league table, the point is to enable a fair comparison where one is actually possible.
This nearest neighbour approach would also provide opportunities for housing providers to learn from one another. Social housing is unique in that providers are not competitors in the classic business sense. As a result, they openly share learning and share the same vision of offering a better service through collaborative working. Sharing best practice is what benchmarking should be about.
What We Need From The White Paper
If the upcoming white paper is to move us forward with new measures and new ways of benchmarking services, it will need to address some concerns in the sector:
- When will the measures be specified – and will they be applicable and useful?
- What is the specified minimum sample size – and what are the appropriate methods for gathering the sample?
- How will the regulator ensure that the data providers submit is genuine and can’t be gamed?
Through our role as insight providers, we would want to see guidance similar to STAR, where a minimum sample is specified depending on stock size. This ensures a robust, reliable result and avoids, for example, a provider with 50,000 customers submitting data based on a survey of just 250 of them. There should be a robust sample for each tenure, so the guidance must require that multiple tenures – such as sheltered housing, leaseholders, and shared ownership – are included.
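To illustrate why the 50,000-customer example matters, here is a minimal sketch using the standard formulae for estimating a satisfaction proportion, with the usual finite population correction. The specific STAR thresholds are not reproduced here – the confidence level, margin, and function names are illustrative assumptions, not the guidance itself.

```python
import math

def min_sample_size(stock_size, margin=0.05, z=1.96, p=0.5):
    """Minimum sample for estimating a satisfaction proportion.

    Standard formula for a proportion with finite population
    correction. Defaults assume 95% confidence (z = 1.96), a
    +/-5 percentage-point margin, and worst-case p = 0.5.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population sample
    n = n0 / (1 + (n0 - 1) / stock_size)        # finite population correction
    return math.ceil(n)

def margin_of_error(sample, stock_size, z=1.96, p=0.5):
    """Margin of error (as a proportion) achieved by a given sample."""
    se = math.sqrt(p * (1 - p) / sample)
    fpc = math.sqrt((stock_size - sample) / (stock_size - 1))
    return z * se * fpc

# The example above: 50,000 customers, only 250 surveyed.
print(min_sample_size(50_000))                  # sample needed for +/-5% at 95% confidence
print(round(margin_of_error(250, 50_000), 3))   # what 250 responses actually deliver
```

Under these assumptions a 250-response survey gives a margin of error of roughly ±6 percentage points, noticeably wider than the ±5 points a properly sized sample would deliver – and the gap widens further if robust results are needed for each tenure separately.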
Lots of organisations choose not to survey their homeownership customers, sometimes with good reason. But, given the green paper’s focus on increasing homeownership and shared ownership products, in my view their inclusion in the sample should be mandatory.
Data Collection Methods
We already know from current benchmarking exercises that the score is, in part, affected by the data collection method used. Repairs satisfaction collected by the contractor on the doorstep with a PDA is a different measure from an independent telephone call, SMS, or online survey; and those will produce a different result again from a postcard through the door.
It’s important to offer a range of collection methods, so that tenants can give feedback in a way that suits them, with low effort and low cost. If collecting this information is to be a compulsory requirement for social housing providers, then it must be low cost.
Questions and Response Codes
If the Government are specifying the questions that should be asked, it’s also important to specify the response scale. Can the wording be varied – for example, using ‘happy’ instead of ‘satisfied’?
One of the dangers of a new set of questions is that we simply pick up questions in the style of the seven core STAR questions – questions so high level that when the responses come back, you have to do further research before you can improve your services.
Times are a Changin’
We’ve been hearing, both from clients and at sector events, of a big shift in the sector, with providers looking at metrics beyond satisfaction, including the tenant–landlord relationship, customer trust, customer effort, and reputation.
Customer trust: Do you trust your provider? It’s about building the relationship between landlord and customer.
Customer effort: First contact resolution is no longer just for repairs – it’s expanding to other services. It’s all about making customer interaction as easy as possible.
The Ministry of Housing, Communities & Local Government (MHCLG) has carried out research and engagement on what the questions should be. We hope they are well specified and useful in helping landlords continue to improve services to their customers. Specific, practical guidance is needed here on what providers are expected to ask, how to ask it, and whether there are any red lines, so that everybody is measuring the same thing.
If no one trusts the score, it becomes an expensive mandatory exercise that offers little value, and no one will ask the high scorers for guidance. To have a positive impact on the sector, the benchmarking process must offer clear, fair comparisons, so that customers can realistically assess and challenge performance and the sector can improve.
Video Webinar Series
In November, IFF Research’s Head of Housing, Katy Wilburn, hosted webinars with Westward Housing in the South West and Peabody in London. Recordings of these webinars are available to view:
Social Housing White Paper Webinar with Barbara Shaw, Chief Executive at Westward Housing
Social Housing White Paper Webinar with Warren Earl, Data Quality & Business Intelligence Manager at Peabody