We have been working with Digital Gaps and Placecube to develop an interoperable model. I thought it would be good to share the core concepts in delivering a robust place-based approach. See the Model for a Place-based DoS article.
We have also recently been included in a CGLdotTV video episode highlighting this approach with Lancashire and South Cumbria Health and Care Partnership, supported by Blackburn with Darwen Borough Council and Burnley, Pendle & Rossendale Council for Voluntary Service.
Gosh @Marcus-DigiCoproduct there’s some great detail there. The OpenReferralUK email address is getting enquiries from councils wanting to learn the basics of how they can apply the standard so I’ll point them to the video.
Please keep us updated on how the Lancashire and South Cumbria work continues - particularly with respect to:
- ongoing maintenance of good quality data on services
- aggregating data from multiple sources
@Marcus-DigiCoproduct thanks for sharing this, it’s very exciting stuff.
I’m glad to see more folks recognizing the limitations of search as a mode of information navigation; if I’m reading correctly, you’re recommending use cases that might involve screening (in which people answer questions and their answers trigger recommendations/suggestions), which I think is one of the areas of most potential benefit from better-structured resource data. I don’t assume, however, that this eliminates the need for a use case of searching a directory; presumably both searching and screening are useful modes of navigation, and communities might want to promote one or another or both methods depending on the context.
Also, I wanted to check an assumption about this statement: “The ultimate aim is that the providers of the services will maintain their own data.” In our context, usually we frame the ultimate aim as being: this information should be reliable. And what we’ve found (at least in the US) is that organizations are just not now and maybe never will be reliable sources of information about their own services. At least not at scale. Too many complicating factors and perverse incentives. Which isn’t to say that we shouldn’t encourage organizations to play a role in providing reliable information – but that we need to have intermediary capacities in place to verify the information one way or another for it to be reliable. I see you establish that such capacities ought to be in place for some services; I’d encourage you to consider whether it might actually be feasible and appropriate to assume that information verification is actually an essential service for all services as a best practice. This does involve labor, but in the scale of things, perhaps it’s better to make the case for the investment of resources to perform that labor.
I think there’s plenty else I’d love to discuss, maybe at the upcoming network convening, or otherwise.
Thanks again for sharing!
The aim for service providers to maintain their own data is described as an ultimate aim because we know it is not possible just yet. It implies reliable data, but we go beyond that because we need affordable, sustainable, reliable data. The service provider maintaining it themselves is the cheapest option for the taxpayer, and it is good data quality practice for the data owner to take responsibility, so we think it is a good aspiration.
We know there are few drivers for a service provider to maintain their own data, but this is a catch-22 situation. As soon as there is demand for this information, providers will see the benefit of maintaining it once. However, their lack of data maintenance limits the current demand, because the data is out of date and costly to maintain. We need to drive things forward to attain the critical mass so we can reach the ultimate aim.
For now we have several initiatives going on to help maintain accurate data. We also categorise the activity as suggesting data, collecting data and assuring data. We are looking to use volunteers and frontline workers to suggest new services, suggest amendments and point out errors. We have service providers (labelled proxy on the diagram) who need someone to maintain their data for them, some providers (labelled monitor) who make a best endeavour to maintain it but may need help, and some providers we trust (labelled trusted) to maintain it themselves. The assurers are the fallback and, we hope, a diminishing resource as momentum builds.
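To make the routing concrete, here is a minimal sketch of how suggestions might flow through that model. The proxy/monitor/trusted labels come from the diagram described above; everything else (class names, the `route` function, the suggestion fields) is illustrative and not part of the Open Referral UK standard.

```python
from dataclasses import dataclass
from enum import Enum


class Stewardship(Enum):
    """How a provider's directory records are maintained (labels from the diagram)."""
    PROXY = "proxy"      # someone else must maintain the data for them
    MONITOR = "monitor"  # provider makes a best endeavour, but may need help
    TRUSTED = "trusted"  # provider is trusted to maintain their own data


@dataclass
class Suggestion:
    """A suggested new service, amendment or error report from a user."""
    service_id: str
    change: str
    source: str  # e.g. "volunteer", "frontline worker", "citizen"


def route(suggestion: Suggestion, stewardship: Stewardship) -> str:
    """Decide who handles a suggestion before it reaches the live directory.

    Only trusted providers apply changes to their own records; everything
    else goes through an assurer, who does the bulk of the verification.
    """
    if stewardship is Stewardship.TRUSTED:
        return "send to provider"
    return "send to assurer"


# A volunteer spots an out-of-date phone number for a proxy-maintained service:
decision = route(Suggestion("svc-1", "new phone number", "volunteer"),
                 Stewardship.PROXY)
print(decision)  # → send to assurer
```

The point of the sketch is the single decision point: as more providers move into the trusted category, fewer suggestions reach the assurer queue, which is the "diminishing resource" effect described above.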
Hope that explains our thinking on the ultimate aim.
Hi Ian – (just noticed this, somehow didn’t get a notification of your response…)
I understand the interest in reducing costs, but given what I’ve learned from people who maintain directories – and also from my experience as a staffer at an organization that is asked to update information – I think it’s only responsible to assume that for the foreseeable future the quality of information at scale will depend upon reliable verification.
There are just too many factors that interfere with organizations’ reliability as sources of information about their own services. (Perhaps things are different in the UK. I doubt it’s substantively different, but I hold that assumption lightly – and would love to find conditions in which organizations can be expected at scale to provide accurate, timely, and useful information about their own services.)
In the meantime, when it all comes down to it, the cost of employing and training and managing people to conduct reliable verification seems reasonable, if it’s for verifying information that can then be re-used in many systems simultaneously, as our approach enables.
My perspective is that it’s a challenge to figure out financing mechanisms by which revenue can be generated by at least some of the many institutions – but it’s a tractable challenge. There are specific arrangements we can already test, evaluate and implement. And I think initiatives might improve their strategies by focusing on this challenge, rather than by hoping that incentives suddenly somehow align themselves for organizations to provide reliable information without intermediation…
I am agreeing with you that we need the verifier/assurer role at the moment for pretty much all services, which is why I am only describing providers maintaining their own data as an ultimate aim.
We are currently testing a hybrid model where frontline workers, volunteers and citizens can suggest errors and new services, which go through to the assurers. Service providers are encouraged to request any changes to their service information and to ask for services to be added. It is the assurer who does the bulk of the work and assures the accuracy.
However, we also have a trusted provider initiative where a provider can sign up to look after their own data. Take-up is currently minimal, but the bigger providers do seem to want to look after their own data. The hope is that as frontend demand for this data grows, more organisations will want to become ‘trusted’ to maintain their own data.
I accept this is currently a hope and an aspiration.
Sure, I think we do agree to an extent – I think it’s at least plausible to imagine a future in which these systems of directory information sharing reach a critical mass such that users (frontline workers, volunteers, citizens) are able to effectively hold organizations accountable for the reliability of their own information… I don’t know that we should expect the need for human resources for intermediation to hit zero, but would welcome opportunities to leverage user feedback more and more effectively.
It makes sense that you’re thinking about this stuff – it’s proper co-production! And my interest in co-production is what led me into this work. Hope to have the chance to learn from y’all.