Understanding the Voice of the Employee: A Q&A With Darwin Pivot

Oct 3, 2022 11:15 AM ET

When it comes to building and scaling a successful corporate social responsibility (CSR) program, continual program assessment and data analysis are key. It’s important to look beyond surface-level metrics like dollars donated, hours logged, and the number of participants. While these are important data points, they don’t get to the root of barriers or challenges that may exist. The most successful CSR initiatives speak to the needs of employees, but getting inside the minds and hearts of your people can be challenging.

We partnered with Darwin Pivot to uncover best practices for collecting and interpreting essential information needed to deliver a program that employees can’t help but get involved in! Darwin Pivot is a multidisciplinary management consulting firm supporting organizational leaders in building and designing the employee experience. The firm leverages a specialized form of data science coupled with deep industry knowledge and behavioral science principles to help its clients troubleshoot, build, and scale rave-worthy culture.

Don’t miss out on an exciting opportunity to hear from co-founders Nicole McPhail and Emily Hazell on October 5th at 1:00 PM ET at CyberGrants’ webinar—Using the Scientific Method to Collect & Action Feedback. During this session, they’ll share best practices for designing an effective employee survey, collecting meaningful feedback, interpreting qualitative data, and more. Register today!

Can’t wait until then? We sat down with Emily and Nicole ahead of their upcoming session to provide a sneak preview of what you can expect. Check out the conversation below!

Q&A With Darwin Pivot

Question: What is impact and how can you measure it? What data is most important to focus on? And how do impact stories fit into your reporting framework?

Emily: When it comes to impact reporting, it's important to take a step back and make sure that you understand what problem you're solving. Once you understand your priorities and strategies, those will be your reporting framework and will dictate which metrics you want to focus on. But more importantly, you’ll want to home in on your program health metrics—what is success to you? For example, if your leaders agree that participation in a program is tied to employee engagement and retention, then it makes sense to measure it.

But let’s dig a little deeper! When discussing any program metric, you have to examine it from a couple of different angles. Just because your organization garnered 100% program participation doesn’t mean you have a healthy or successful program. For example, ask yourself how you achieved that participation metric. If the CSR initiative was required or involved arm twisting, was it still a successful program? Probably not! Instead, you’ll want to break down the different components of your campaign to better understand the factors that may or may not have led to your 100% participation statistic. To get a better perspective, we recommend looking at factors like office location and team demographics and considering the role of employee tenure and seniority. In doing so, you’ll develop a deeper understanding of the program and can better assess whether it was successful.
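To make that kind of breakdown concrete, here is a minimal sketch in Python (using pandas), assuming you can export per-employee participation records; the column names (office_location, tenure_band, participated) are hypothetical placeholders for whatever fields your CSR platform actually provides.

```python
import pandas as pd

# Hypothetical export of per-employee participation records.
# The column names below are illustrative assumptions, not a required schema.
records = pd.DataFrame({
    "employee_id": [1, 2, 3, 4, 5, 6],
    "office_location": ["Boston", "Boston", "Austin", "Austin", "Remote", "Remote"],
    "tenure_band": ["0-2 yrs", "5+ yrs", "0-2 yrs", "5+ yrs", "0-2 yrs", "5+ yrs"],
    "participated": [1, 1, 1, 0, 1, 0],  # 1 = took part in the campaign
})

# A single overall rate can hide uneven engagement, so slice the same metric
# by cohort. The mean of a 0/1 column is the participation rate for that group.
print(records["participated"].mean())                             # overall rate
print(records.groupby("office_location")["participated"].mean())  # by office
print(records.groupby("tenure_band")["participated"].mean())      # by tenure band
```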

Anecdotal reporting is just another layer of that same story! You can leverage that qualitative information to supplement your data. However, it's important to avoid selecting an outlier story and then using it to make broader assumptions. Take the time to look at the specific stories alongside your quantitative data, and then select which stories you can leverage and how they can bolster your report. This way, your anecdotal stories provide context to the data but aren’t used to singlehandedly assess program health.

Nicole: The context piece is especially critical for data. If you’re presenting this information to your executives or board members, you have to be conscious of the fact that this is not their domain. Context is essential for them to understand the relative success of what you are doing.

Question: Can you talk about sentiment analyses and how you can quantify qualitative data?

Emily: Once you have executed your CSR program, you can run a sentiment survey to gauge how your employees felt about the initiative and understand their perspectives. Within the survey, you’ll ask your workforce to reflect on the program and what they felt during different parts of the experience. Once the responses have been gathered, you can codify them into values that can be quantified: develop a ranking system where, for example, a “one” signifies a positive experience and a “five” signifies a negative experience. As a result, you’ve turned qualitative data into values that can be assessed to identify trends.

We often think of sentiment data as something that can only be anecdotal or qualitative. However, survey results can act as a blend of both qualitative and quantitative data. It’s important to examine sentiments company-wide, but I would also recommend homing in on specific cohorts of your company to identify differences by team, location, seniority, etc.
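As a rough illustration of quantifying those survey responses, here is a minimal Python sketch; the response labels, column names, and mapping are assumptions made for the example, following the 1-to-5 ranking Emily describes above (1 = positive, 5 = negative).

```python
import pandas as pd

# Hypothetical sentiment survey responses; the labels and columns are
# illustrative assumptions rather than output from any particular survey tool.
responses = pd.DataFrame({
    "team": ["Sales", "Sales", "Engineering", "Engineering", "Finance"],
    "response": ["very positive", "neutral", "negative", "positive", "very negative"],
})

# Codify qualitative answers into the 1-5 ranking described above,
# where 1 signifies a positive experience and 5 a negative one.
scale = {
    "very positive": 1,
    "positive": 2,
    "neutral": 3,
    "negative": 4,
    "very negative": 5,
}
responses["score"] = responses["response"].map(scale)

# Company-wide average sentiment, then the same metric by cohort.
print(responses["score"].mean())
print(responses.groupby("team")["score"].mean())
```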

Nicole: We actually saw the benefits of this firsthand with one of our clients. The company was running a United Way campaign that the board and the C-suite were very excited about, and they had great participation numbers to back it up. However, when they conducted a sentiment survey, they found that the participation rates were high because the program involved a lot of arm twisting. As a result, employees were getting involved, but they weren’t genuinely and authentically engaged in the program. By acting on feedback from their workforce, they saw massive improvements overall.

Emily: It’s easy to get married to what one metric is telling us! Therefore, when you’re assessing any kind of data point you have to consistently ask yourself, “what is this information telling me, and do I need more information to draw a conclusion? What follow-up questions should I ask?” Beyond merely examining the program itself, look to context variables such as time of year—do employee sentiments change depending on the time period? And if there were lower or more negative sentiments during a specific date range, what was happening internally and externally? Cover all your bases and make sure you have all available information before drawing conclusions about your findings. Being curious is critical!

Question: For CSR leaders facing pressure from executives or C-Suite members to achieve a 100% participation metric, how do you recommend opening up a discussion around sentiments?

Nicole: This comes back to asking yourself and your executives: what are we solving for? If you’re trying to increase employee engagement, participation isn’t the metric to worry about! Also, think about why you’re so focused on these success metrics, and ask whether it's self-serving. It’s easy to get hooked on that 100% benchmark, but it takes self-awareness and critical thinking to identify the data that matters and truly understand what’s happening with your programs.

Emily: This is a great opportunity to leverage qualitative and quantitative data together. For example, if your quantitative metric is 100% participation but your qualitative data is showing negative sentiments—identify the gaps in what you’re measuring and find the information that’s missing. Once you’ve identified the missing pieces you can start to understand how your program can be improved to increase overall employee engagement, which is what you’re actually solving for!

Question: Once a CSR leader has assessed their program health through a sentiment analysis, how do you recommend they move forward with making program pivots or adjustments?

Nicole: It’s imperative that you are cautious with how you pivot because once you get leadership buy-in, you’re going to lose credibility if the alternative plan doesn’t work. We recommend developing a hypothesis and running a lean experiment. For example, your hypothesis could be that people will be more inclined to participate if their family can be involved. You will first want to identify your sample population. This can be a small cohort within your company that is comfortable being involved in the test. Next, you’ll develop success metrics for the experiment; program participation, for example, could be one of them. If you see an uptick in employee participation, create a slightly larger testing zone and see if you garner similar results. And always keep in mind that culture and your relationships factor into the outcomes. Therefore, run these tests across different groups and geographies until you have sufficient evidence to escalate the information to your decision-makers and see if they’d be interested in piloting your new program or feature.
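To show what assessing such an experiment could look like, here is a minimal Python sketch comparing participation in a pilot cohort against a comparison group with a standard two-proportion z-test; the counts are invented for illustration, and in practice you would weigh sample size, culture, and context alongside any single statistic.

```python
from math import sqrt, erfc

# Hypothetical counts from a lean experiment: a small pilot cohort offered the
# family-involvement option versus a comparison group without it.
pilot_participants, pilot_size = 42, 60        # assumed pilot results
control_participants, control_size = 180, 340  # assumed comparison results

p_pilot = pilot_participants / pilot_size
p_control = control_participants / control_size

# Two-proportion z-test on the difference in participation rates.
p_pooled = (pilot_participants + control_participants) / (pilot_size + control_size)
se = sqrt(p_pooled * (1 - p_pooled) * (1 / pilot_size + 1 / control_size))
z = (p_pilot - p_control) / se
p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value

print(f"pilot rate: {p_pilot:.1%}, comparison rate: {p_control:.1%}")
print(f"z = {z:.2f}, p = {p_value:.3f}")
```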

And it’s important to keep in mind that once something has been communicated to your workforce, you can’t take it back. When launching a pilot program or running a lean experiment, communication is key! Ensure all participants are aware that this isn’t a new program and that you’re simply testing new offerings and measuring for success.

Emily: Make sure these pivots or updates are iterative! The entire program may not be flawed; it could just be a small component that is hindering your success. Without making adjustments in iterations, you’ll never get to the root of the issue with your programming. Don’t make assumptions about your program as a whole; begin by tackling the small barriers that may exist first. And if you pivot your entire program too many times, you’ll begin to lose stakeholder buy-in and your employees’ trust. Without that, you won’t have a program!

Nicole: To a certain extent, it all comes down to what we have in common. It changes slightly across cultures, but we’re all human, and how we make decisions and what motivates us is at the core of everything. Therefore, when I’m developing a hypothesis I often think about human behavior and the emotions that may be the barrier to entry. Is it fear? Is it the wrong incentive? Running experiments can be grounded in those ideas rather than in a specific program element.

Looking Ahead

Ready to dig a little deeper? In our upcoming webinar with Darwin Pivot, Emily and Nicole will outline the best ways to reach the hearts and minds of your people and sustain the most impactful giving strategy for your organization.

During this session, you’ll: 

  • Understand best practices for designing an effective employee survey.
  • Learn the frequency and approach to collecting feedback.
  • Interpret when and why to pull in qualitative data points (e.g., sentiments).
  • Gain tips for removing personal bias and avoiding assumptions when interpreting results.
  • Learn to leverage this data and knowledge to get the buy-in you need to make program changes.

Register now and join us on Wednesday, October 5th at 1 PM ET—we can’t wait to see you there!
