Designing a Way to Measure the Impact of Design
By Cheryl Heller
Those of us who practice, fund, commission, and teach the nascent discipline of social design agree without hesitation on a couple of things: People who experience this type of design in action believe it can transform the way we approach and solve social problems, and are investing a great deal of money and energy—by any measure—in developing the field based on results so far.
We also agree that we don’t agree on whether it should be called social design, human-centered design, social innovation design, or impact design; nor can we agree on precisely where the boundaries lie between it and more traditional design approaches.
We find ourselves at an inflection point, with a need to define, measure, and scale the impact of social design if we are to realize its potential.
This complex and somewhat amorphous challenge, as it happens, is exactly the kind that designers like to take on. And it was, in fact, the inspiration for the Measured Summit our Design for Social Innovation MFA program hosted at New York’s School of Visual Arts. Social designers, researchers, foundation heads, monitoring and evaluation leaders, and data scientists gathered to take on these challenges, beginning with the impact of design on human health.
The core principles of social design are these: solutions come from understanding and engaging communities in need of help (not from conference rooms); prototyping and observation are more effective than five-year plans; and all social issues are systemic and must be understood and acted upon that way.
One clear lesson that emerged from the summit, however, is that while social design, wherever we practice it and at whatever scale, is defined by a common process, we cannot always measure it in the same way. We need different yardsticks to measure the impact of product design, service design, built environments, and the design of new cultures. Each application effects change in its own way.
For example, when Michael Murphy, architect and co-founder of MASS Design Group, creates a building, he measures how well the structure delivers on its mission. How does the hospital design contribute to the health of the people who stay there? Do they leave in better health than when they arrived? He also measures what he calls “indirect” and “systemic” attributes by evaluating the governance and transparency of the process, environmental impact, the natural resources used, health and happiness of the surrounding community, labor and human rights, diversity and equality, formal beauty, the impact on relevant economies, and the ongoing impact on the culture. The tools he uses to measure these dimensions are as diverse as the dimensions themselves.
The traditional evaluation approach is to look at building costs compared to budget, perhaps how the design was received (likely by other architects or critics), and probably the building’s use of energy. In contrast, MASS Design Group is measuring how well the structure serves the wellbeing of the entire community.
Doug Powell, a distinguished designer at IBM and director of a program intended to scale design and design thinking throughout the company, is approaching evaluation in a different way. He’s creating a new global culture composed of those ineffable dynamics that change everything about an organization’s capacity and resilience. He measures the degree to which the design process fosters radical collaboration within the company, and the quality and relevance of the innovation that collaboration generates. He uses the Net Promoter Score to measure user sentiment.
This allows IBM to track the success of various products and services it develops. It also aids the company in understanding the conditions for success it’s creating within its culture so that it can continually strengthen and scale those conditions.
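The Net Promoter Score itself reduces to simple arithmetic: the percentage of respondents who rate their experience 9 or 10 (“promoters”) minus the percentage who rate it 0 through 6 (“detractors”). The brief sketch below shows that calculation with hypothetical survey data, not anything drawn from IBM’s program.

```python
# Minimal sketch of a Net Promoter Score calculation (0-10 survey scale).
# NPS = % promoters (ratings of 9-10) minus % detractors (ratings of 0-6).
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical responses from a single project review
responses = [10, 9, 9, 8, 7, 10, 6, 9, 3, 8]
print(net_promoter_score(responses))  # 30.0: 50% promoters minus 20% detractors
```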
Then there is Evan Thomas, director of the Sustainable Water, Energy and Environmental Technologies Lab (SWEETLab) at Portland State University. The lab develops technologies that measure the effectiveness of products intended to solve health issues, as well as how often and how well people use them. It measures the difference between what people say they do and what they actually do, and the relevance of the data that other organizations base decisions on.
The benefit of measuring the adoption and function of products meant to improve people’s lives, rather than how well a product performs in a laboratory setting, is the difference between solving problems and wasting opportunities.
While we need a more concise and consistent way to define the field overall and explain its principles, we also need to dispel the notion that there’s a simple formula for evaluating where, how, and to what extent it works. In particular, we need to disabuse some enthusiasts of the notion that evaluating social design requires nothing more than Post-it Notes, a cluster diagram, and anecdotes from the field about how much people liked it. We need language and an information architecture that illustrate the different applications of social design and link their various goals to metrics and outcomes. We need to audit all the types of tools people are using to measure impact, and very likely develop additional ones. In short, we need to map the process.
Several years ago, Kyle Reis (then at the Ford Foundation, now with TechSoup) and I mapped the social design process and the philanthropic process to see where they overlapped and didn’t. What we learned (or more accurately, what we were able to see rather than just intuit) is that the critical decision points within each process are misaligned. The design process requires that we immerse ourselves in the problem and context without preconceived ideas about what the “answer” is. This is where real innovation comes from, inspired by the needs of the people being served, rather than a pat solution that someone has seen somewhere else before. Generally, the philanthropic process requires that we define solutions and specific tactics before getting funding to begin.
It’s time to expand on that mapping effort: to diagram the social design process within the complexity of all the work that government agencies, researchers, practitioners, and others are doing in social innovation.
Social design can’t happen in isolation. It works only when it’s integrated with other fields of expertise and diverse perspectives. It doesn’t always begin at the beginning of a project, and most often designers don’t get to stay involved in long-term initiatives through the final stages of implementation (and short-term initiatives typically don’t have lasting impact).
There is one generality about social design, though, that is not only accurate, but also essential to its successful implementation. It’s an infallible “secret sauce”—wisdom applicable to any context or scale. It is that simply participating in the process inspires collaboration and engagement between diverse minds with different needs and experience. In other words, the answer inevitably lies in the process used to find it.
The journey summit participants have embarked on to measure the impact of social design is, in fact, the social design process itself: immersing diverse minds in the subject; listening carefully to discover divergent perspectives; interrogating evidence to uncover the underlying questions, principles, and what they mean; co-creating ideas to act on opportunities that emerge; and then prototyping, testing, refining, and implementing them.
Social design has the potential to address issues at a systems level, integrating human dynamics and relationships with new technologies and services. It integrates the wisdom and the experience of the people in need of help, giving them agency and a voice. It’s time to take its measure so that we can put it to greater use.