
Here’s how equity programs can create shared future impacts

Following the success of the Women in STEM and Entrepreneurship (WiSE) evaluation webinar, on August 24 the Office of the Women in STEM Ambassador hosted an “Evaluating your equity program” webinar for members of the public running equity programs across Australia. Isabelle Kingsley, author of the National Evaluation Guide, helped over 60 participants understand how to evaluate their programs.

Isabelle highlighted that evaluation is a key priority for the Australian Government’s national efforts to advance the participation of women and girls in science, technology, engineering and maths (STEM) studies and careers. These include the Advancing Women in STEM Strategy, the Women in STEM Decadal Plan, and the 2020 Action Plan. Evaluating equity programs will help guide decision making and drive investment and effort into measures that work.

How did participants feel about evaluation before the webinar?

With the help of Slido, Isabelle Kingsley asked participants to write how they felt about evaluation – our own form of evaluation. Not surprisingly, and in line with previous consultations with program owners, attendees’ perceptions of evaluation were 99.8% negative. The word cloud below sums it up.

Then, Isabelle went through the five steps of evaluation: Define, Plan, Design, Execute and Share, pointing participants to the National Evaluation Guide for STEM gender equity programs – a free resource recommended for anyone running a program.

Infographic: the 5 steps to evaluate a STEM gender equity program

It was time for the Q&A. What were attendees keen to know?

How can I know the long-term impact of my program?

Individual programs have different intended outcomes that all contribute to shared future impacts on equity. The issue is that many of these impacts take decades to manifest, and many other factors come into play, affecting how confidently you can link impacts back to a particular program. There are also logistical challenges to evaluating long-term impact (e.g., losing contact with participants, such as students as they move through their schooling). And so, the individual long-term impacts of a program are elusive and remain largely unknown. What to do?

Define the long-term impacts as aspirational and focus your evaluation on the short- and mid-term outcomes. These are directly measurable or ‘knowable’ – and extremely useful to know.

We can find out the shared impacts of our collective efforts by looking at whether the dial is shifting. But what is this dial we often talk about? Here are a few examples of datasets that monitor how Australia is doing on gender equity.

  • ABS gender indicators – monitors the national gender pay gap, gender representation in companies and government, etc.
  • STEM Equity Monitor – tracks workforce participation in different STEM industries broken down by gender, parents’ and educators’ perceptions of students’ abilities in STEM, etc.
  • Workplace Gender Equality Agency – monitors gender equity in Australian workplaces
  • Evaluation Toolkit – an Office of the Women in STEM Ambassador project under development. It will be a digital platform where you can plan your evaluation and report your findings, and which will also act as a repository of evaluations.
“Define the long-term impacts as aspirational and focus your evaluation on the short- and mid-term outcomes.”

It’s unlikely we will be able to attribute shifts in the numbers of national datasets to specific programs. But Professor Lisa Harvey-Smith, the Women in STEM Ambassador, reminds us that “We’re a community working towards a shared goal, and what’s important is that our collective efforts are successful.”

What else can I measure besides the number of participants and whether they enjoyed the program? 

When evaluating outcomes and impacts, you need to align them with your program goals. There is a difference between measuring outcomes/impacts of the program (the change you want to see) and obtaining feedback about a program (thoughts and perceptions about the design and lessons learnt). This is a crucial part of the Define phase of evaluation, when you define the intended outcomes, and of the Design phase, when you identify the indicators that will measure those outcomes. 

Outcomes

What change do you want the program to make? What markers will demonstrate whether that change happened? To measure these, you will need a tool that specifically measures the marker of that change. As much as possible, use validated instruments. Validated instruments have been tested and shown to be reliable and valid ways of measuring the specific thing you want to measure – like making sure you calibrate a scale before weighing something.

Feedback

What did participants like about the program? What could be improved, and how? These questions will likely be specific to your particular program, because you want to know how it went and what people thought of it. In this case, you can simply ask about whatever you want to know.


How and when do I need to obtain ethics approval?

If you do not intend to use the data collected for academic research and publication, you need to abide by the National Statement on Ethical Conduct in Human Research, but you don’t need to apply for ethics approval. If you ever want to use the data collected from your program in an academic publication, you will need approval from a human research ethics committee.

How did participants feel about evaluation after the webinar?

Since we recommend you evaluate your engagements, we felt it was important to do the same for ours. To gauge whether participants’ feelings about evaluations had changed after the webinar, our Research Associate Isabelle Kingsley asked participants the same question at the end of the webinar: how does evaluation make you feel? We were glad to find that attendees’ perceptions were now 84.5% positive – recall that, at the start, they were 99.8% negative. See the word cloud below.

Missed the webinar? Don’t sweat! Watch the recording.

Are you planning or working on an equity program? How do you know if your actions are working?

Evaluation is a key part of any program. By evaluating your program, you can understand if and how your actions are creating change. It also helps others understand what works and how to improve what doesn’t.

More systematic evaluation is needed, as research from the Australian National University finds that of 337 gender equity programs in Australia, only 7 have publicly available evaluations [1].

References:

  1. McKinnon, M. (2020). The absence of evidence of the effectiveness of Australian gender equity in STEM initiatives. Australian Journal of Social Issues. Advance online publication. https://doi.org/10.1002/ajs4.142
