Why's and what's of evaluation: Tech Girls reveal the secrets to a good response rate
‘Tech Girls are Superheroes’ is a Tech Girls Movement Foundation program that engages girls in education by busting myths and stereotypes about STEM. Women mentors support schoolgirls through a hands-on, 12-week challenge* that combines problem-solving with producing business plans, pitches and app prototypes to address real-world problems. This award-winning program is based on 20 years of research on gender, diversity, STEM, and entrepreneurship.
We had a chat with Dr Jenine Beekhuyzen OAM, CEO of the Tech Girls Movement Foundation, and Business Manager Amie Cossens about their experiences evaluating their programs. We discussed why it’s important to evaluate, what the barriers to evaluating are, and how to overcome them. We also asked them for their best pieces of evaluation advice. Dive in!
Why is evaluation important?
“Evaluation sets aside the programs that are authentic in the space from the ones that aren’t.”
Tech Girls have a strong evidence base, drawing on 20 years of research on gender, diversity, STEM, and entrepreneurship. In their view, running a successful program means grounding it in evidence: drawing on existing research to design the program, collecting your own evidence through evaluation, and using that evidence to improve the program. For Tech Girls, evaluation shows them what the program is doing right or wrong. “I’d rather not teach them at all than teach them badly”, says Jenine.
An evidence-based program also has more credibility and a stronger reputation. Funders are often more confident in programs that are grounded in evidence, and thus more likely to fund them.
This is a win-win for organisations. Evidence allows you to create the best program possible and increase your chances of getting funding to do it.
What evaluation barriers do Tech Girls face and how do they overcome them?
Despite the challenges, “Don’t underestimate the importance of evaluation”, say Jenine and Amie.
According to Tech Girls, some of the barriers they face are:
- Response rates: getting enough people to answer feedback surveys
- Expertise: availability of people with expertise in research methods
- Resources: financial resources and the team having time to plan and execute evaluation
To improve their response rates, Tech Girls use a combination of: a) creating evaluation tools that are bespoke to their audiences, b) embedding evaluation into their systems, c) using incentives, d) including evaluation as a program deliverable, and e) having evaluation ‘champions’.
a) Tech Girls have different cohorts, which they evaluate separately. Students aged 7-17 receive shorter surveys, which can include questions that use emojis instead of traditional Likert scales to measure students’ enjoyment of the day, for example. Mentor surveys might be longer and include more in-depth questions.
b) Tech Girls embed evaluation surveys into the systems where participants register for the program. A user in the system might need to complete a survey before the next registration step is available.
c) Incentives include offering respondents entry into a draw for gift vouchers if they complete the surveys. This does, however, remove anonymity, because Tech Girls collect respondents’ email addresses.
d) At the end of the competition, program participants must submit a series of deliverables. Tech Girls have included the post-program evaluation as an activity that must be completed.
e) Lastly, evaluation ‘champions’ are mentors who push for students to complete their evaluation. Champions take responsibility for ensuring evaluation is completed.
Tech Girls stress that evaluation should be done in a systematic way and grounded in a tested methodology. Their team is fortunate to have a researcher in-house with this expertise. This is not the case for many programs, but you don’t have to be a researcher to do quality evaluation.
Another way to access that expertise is to outsource evaluation to a third party. This, however, can be costly, and a third party might not understand the fundamentals of your program. For outsourcing to be successful, ensure third-party evaluators are immersed in your program from the beginning.
Program owners can do evaluation themselves, simply and inexpensively. There are resources to help you, such as the ones listed below.
Resources are another barrier, covering both funding and the team’s capacity. Program owners often ‘run out of time’ at the end of a program to do evaluation, and teams that are busy running programs will rarely ‘find the time’ for it. Organisations need to plan dedicated time for evaluation from the outset: when evaluation is embedded into the program design and timeline, teams treat it as an integral part of the program rather than an afterthought.
How do you know what to measure? What are the hardest things to measure?
There is no need to reinvent the wheel. To find out what to measure, Tech Girls consult existing literature (such as evaluations of Technovation, a similar initiative), international standards (such as the STEM Career Interest Survey), and publicly available instruments, such as “Evaluating STEM gender equity programs: A guide to effective program evaluation”. They then adapt these instruments and standards to their program.
Tech Girls find the personal growth of girls who go through the program challenging to measure, partly because personal growth takes a while to ‘sink in’. That longer timeframe creates a further challenge: collecting data over a long period. Longitudinal data collection requires keeping an updated database of participants’ contact details, which Tech Girls don’t have, as their contacts are mentors and coaches, not the girls themselves.
One best piece of advice for evaluation
Tech Girls were generous enough to offer us two pieces of advice.
- If you are looking to measure – and create – change, Tech Girls strongly recommend doing a pre- and post-program evaluation. A survey only at the end of the program makes it difficult to measure change because there is nothing to compare it to.
- Don’t underestimate the importance of evaluation. Sometimes it can seem like the least important part, but evaluation is not something you throw together as a last-minute thought.
If you are a program funder looking to support programs to evaluate, we have compiled some tips on how to help program owners with this.
About the series
‘Why’s and what’s of evaluation: Program leaders reveal the secrets of effective evaluation’ is a series by the Office of the Women in STEM Ambassador which provides insights for program owners looking to evaluate their programs. We delve into why it’s important to evaluate, the barriers organisations face with evaluation and how they overcome them.
Still have a question about evaluation?
Download “Evaluating STEM gender equity programs: A guide to effective program evaluation” and find other tools to drive gender equity in STEM.
- Kier, M.W., Blanchard, M.R., Osborne, J.W., & Albert, J.L. (2014). The Development of the STEM Career Interest Survey (STEM-CIS). Research in Science Education, 44(3), 461-481.
*Due to COVID-19 the program in 2020 was extended to 16 weeks.
Clara Gomes - Digital Content Officer
Clara Gomes is a communicator and writer. She is passionate about creating a more equitable STEM sector in Australia. As the Digital Content Officer for the Office of the Women in STEM Ambassador, Clara creates, edits and shares content on the Office's social channels and website.
Isabelle Kingsley - Research Associate
Isabelle Kingsley is a researcher, science communicator and educator. Her research focuses on measuring the impacts of science education and outreach. As Research Associate for the Office of the Women in STEM Ambassador, Isabelle is investigating gender issues to inform how to drive needed cultural and social change for gender equity in STEM across Australia.