Kia ora! 👋

I'm Penny Hagen, Co-design Lead at the Auckland Co-design Lab in Aotearoa New Zealand. 

 

My background is in participatory design: making sure that people impacted by what is being ‘designed’ to enhance wellbeing - like a programme or policy - are involved and influential in its development. What’s important to me is building government’s capacity to design with people and place (and the complexities that brings), rather than do to people.  

 

It’s easier said than done! Here are some of the challenges we’ve been grappling with. I hope they are useful for you too.
 

Cheers, Penny.

Forwarded this email? 🚀 Sign up here.

 

Where intentions meet complex reality

There’s often a disconnect between policy intent and what people experience on the ground, in part because our current approach to policy and service design, delivery and evaluation has “been too fragmented and not built on an understanding of the complex social systems they must work in” (pdf).  

 

One way for us to work more effectively with that complexity is by exploring the evidence and ‘measures’ that we use to inform our decision-making and approach to innovation.
 

A grown-up understanding of evidence

There is a bias in government towards certain kinds of data and evidence. This bias limits our understanding of what is going on for people, and what might actually make a difference to improved wellbeing outcomes. As the Brookfield Institute says (pdf):
 

“There are 'phantom rules' or orthodoxies in government around what is allowable and qualifies as valid evidence that may inhibit policy professionals from innovating."
 


You see this in evidence-based practice. It gives us confidence, and a sense of certainty, because it suggests we are making decisions and basing policy or investment on things that are known to work. It’s a sure bet! But in evaluating what works, ‘success’ is often defined in narrow terms.
 

Many of our current social issues - like inequity and child wellbeing - are compounded by a policy and service system that takes a simplistic view of issues. And our responses to those issues require us to expand our understanding of what is or isn’t evidence. We need to question whose perspectives have been included and what kinds of evidence count.
 

It’s not about disregarding evidence-based policy, but rather balancing approaches like RCTs with evidence that has been developed and tested with people in place; evidence that recognises different kinds of experiences and engages with (rather than reduces!) complexity.
 

A few ideas for exploring this in your work:

Surface and examine your relationship to different forms of evidence. Where are you confident, comfortable and capable (and where are you not)?

Get out of old habits to understand what kinds of evidence you could or should be drawing upon (pdf).

Challenge your team around the ‘phantom rules’ that might be holding you back from policy innovation.

Look at approaches researchers are taking to translate evidence-based programmes into effective practice (pdf).
 

Are we paying attention to the right things?

We often see a gap between what we “measure” and the outcomes that make the difference. Families we talk with value things you probably value too: strong friendships, places to be without judgement, feeling safe, and seeing their culture reflected and valued.

 

At a high level, we know these things matter. Yet the data we report still tends to focus on things like the number of visits, sign-ups to a service, or attendance in a government programme. It reflects a service-level view of “success” that falls short of what matters to families. And worse, we find the data that providers so dutifully collect and report on often isn’t even used in meaningful ways!


We’re experimenting with developing local indicators of child wellbeing with families. We want to pay attention to what they value, and use that to shape the system that’s designed to support them. This is quite a shift from the default for us in government. Four ideas to get you started:
 

1. Use the “What we track” (pdf) worksheet to reflect on your process for identifying ‘success criteria’ and the values that underpin them. Find those blind spots!

 

2. Co-design (pdf) and developmental evaluation lend themselves to defining outcomes in a localised and participatory way. We expand on this in a blog on place-based approaches.

 

3. The Developmental Evaluation Institute has the basics of developmental evaluation outlined, including Jamie Gamble's early primer on developmental evaluation (pdf).


4. Mark Cabaj’s resources on evaluation and systems change are really useful for thinking about outcomes at different levels, so you can zoom in and out of an issue to see signs of change (and get less bogged down in the weeds).
 
 Announcements from us

🖥️ We just ran 'Field work' with Tomas Dominguez Vidal, the first in a series bringing together practitioners who've created the space in government to try new approaches to old problems. We'll share what we learnt and dates for the next one soon!

🤓 Still time for any final applications for the 🇬🇧 UK wide learning programme and also the 🇦🇺 Australia and New Zealand programme 🇳🇿. Both begin in January.

👋 Say hi! If you have any feedback or tips, please let us know in our anonymous form, or just hit reply!
 


Copyright © 2019 States of Change, All rights reserved.

Our mailing address is:
58 Victoria Embankment, London, EC4Y 0DS


Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.