
Relevant education and rigorous assessment

Kåre Moberg, Senior Researcher at the Danish Foundation for Entrepreneurship – Young Enterprise Denmark

Is it possible to assess the impact of entrepreneurship education? Some would say that this is fairly simple, whereas others believe it to be impossible. Many researchers take a position somewhere in between.

Without throwing any rocks, I would say that I am fairly sceptical about the way in which many assessment studies are performed today, but I do believe that an increased focus on this activity is crucial. There are numerous problems to face when we attempt to perform assessment studies, and it is next to impossible to address them all, since what we as educational programme evaluators try to study is human activity, intertwined in complex social settings and affected by a myriad of internal and external factors. It is difficult to argue that simple cause-and-effect relations can be established in such a setting.

So… what can we actually study, assess and evaluate? This is an ongoing debate within the research community, often referred to as the relevance vs. rigour debate. On one side, there are researchers who argue that the gold standard for rigorous assessment studies is the randomized controlled trial (RCT). On the other side, we have the researchers who claim that the meticulous rigour of the RCT methodology renders it useless for studying anything relevant. In an RCT, the participants who get the educational “treatment” are randomly selected and matched with a control group (an elaborate presentation of this method can be found here and here). Critics of the RCT methodology say that it is more or less impossible to control for all factors influencing educational outcomes, and therefore often argue that a different sampling method should be used. Instead of getting a small amount of information from a large number of respondents, their claim is that a study that focuses on a smaller number of respondents but dedicates more time to each respondent would generate more relevant insights. The participants selected for such a study would typically be those who were affected the most by the educational initiative – a so-called extreme sampling method (interesting perspectives on this can be found here and here).

The importance of randomization

While I agree that the latter approach may generate relevant and valuable insights about how educational initiatives influence participants, I do not see how these types of studies can demonstrate the effectiveness of different educational initiatives. As long as the educational “treatment” has not been randomly distributed, there is often a problem with self-selection bias (the most motivated participants opt in), which the extreme sampling method then compounds with survival bias (only the participants who benefited most are studied). It therefore becomes impossible to compare the effectiveness of different educational initiatives.

Even if a course or a programme is mandatory rather than elective, it is still very difficult to assess its impact and influence. This has to do with the fact that it is more or less impossible to control for all factors influencing educational outcomes (I know, I am nagging about this), regardless of whether the data has been collected with short questionnaires or extensive and lengthy qualitative interviews. The only way around this problem is to use randomization. If the sample size is sufficient and the chance of getting the educational treatment is equal, then all other factors influencing the educational outcome will also be distributed at random. It will thus be equally likely that factors influencing the educational outcome occur in both groups (treatment group and matched control group). Events that influence, for example, entrepreneurial intentions, attitudes and self-efficacy, such as friends or family members starting companies or experiencing bankruptcy, are equally likely to occur among participants receiving and participants not receiving the educational “treatment”.
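
To make the intuition concrete, here is a minimal simulation sketch in Python; the population size, the “entrepreneurial family” factor and its probability are purely illustrative assumptions, not data from any study:

```python
import random
import statistics

random.seed(42)

N = 10_000  # a sufficiently large sample, which randomization requires

# A background factor we cannot control for directly, e.g. having a
# family member who started a company; assumed here for ~30% of people.
population = [{"entrepreneurial_family": random.random() < 0.3}
              for _ in range(N)]

# Random assignment: every participant has the same 50% chance of
# receiving the educational "treatment".
for person in population:
    person["treated"] = random.random() < 0.5

treated = [p for p in population if p["treated"]]
control = [p for p in population if not p["treated"]]

def share(group):
    """Share of a group that has the uncontrolled background factor."""
    return statistics.mean(p["entrepreneurial_family"] for p in group)

print(f"treatment group: {share(treated):.3f}")
print(f"control group:   {share(control):.3f}")
```

With ten thousand participants the two shares come out nearly identical, which is exactly why a sufficiently large randomized sample neutralizes factors we cannot measure.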

Addressing the challenges

There are a lot of assessment tools available today that both evaluators and practitioners can use (an overview of examples can be found here). By combining these tools it is possible to answer a broad range of research questions. So, it is no longer a question of how we measure the outcomes, but rather of how we structure the data collection.

Naturally, the method applied should be determined by the questions the study seeks to answer, and usually a mixed-methods approach will generate the most interesting insights. However, when it comes to assessing the effectiveness and efficiency of different educational initiatives, it is, in my view, necessary to follow the foundational principles of the RCT method, however difficult this might be. As evaluators, we need to be creative when designing our programme evaluations. One way to use randomization while making sure that all participants are equally rewarded is an “in-phase” randomization method, that is, randomly selecting who gets the educational treatment first and who gets it next. If the randomization is performed at the institutional level, it also becomes possible to follow participants longitudinally, since each new phase brings a fresh set of participants to serve as treatment and control groups.
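
As a hypothetical sketch of how such an in-phase schedule could be drawn up (the institution names and the two-phase split are purely illustrative):

```python
import random

random.seed(7)

# Eight hypothetical participating institutions (names are made up).
institutions = [f"school_{i}" for i in range(1, 9)]

# Randomly decide who receives the educational treatment first and
# who receives it in the following phase.
random.shuffle(institutions)
half = len(institutions) // 2
phase_1, phase_2 = institutions[:half], institutions[half:]

print("Phase 1 (treatment now):  ", phase_1)
print("Phase 2 (treatment later):", phase_2)
# During phase 1, the phase-2 institutions serve as a randomized
# control group; when phase 2 starts, a new cohort of participants
# enters, which allows a longitudinal follow-up of the first cohort.
```

Because the order is random, the early and late receivers are comparable, yet no institution is denied the treatment altogether.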

Should we bother?

So, is it really that important that we perform rigorous and relevant assessment studies of educational initiatives? Well, according to a survey that the UN started in 2015, the almost 10 million participating respondents held education to be the most important topic out of 16 possible areas, spanning from access to clean water, jobs and healthcare to democracy and climate issues. I would thus say that we have an obligation to perform rigorous evaluations that generate relevant insights. It will not be easy, but in order to further our understanding of how educational initiatives are best designed and delivered, it is important that we rise to the challenge.

Category: entrepreneurship education, impact research | Posted: 1 June 2016 08:34 UTC
About the Author
Kåre Moberg, Senior Researcher at the Danish Foundation for Entrepreneurship – Young Enterprise Denmark

Kåre Moberg has worked closely with policy makers over the last eight years, with a specific focus on developing and assessing entrepreneurship education at different levels of the education system. From 2007 to 2011 he worked as a project manager at the Øresund University, developing entrepreneurship education at fourteen universities in Sweden and Denmark. During this period he also engaged in many different projects and was the project leader of a project on social entrepreneurship. Since 2011 he has been working as a researcher at the Danish Foundation for Entrepreneurship – Young Enterprise, and in collaboration with Copenhagen Business School he wrote his PhD thesis on how to assess entrepreneurship education – from ABC to PhD. Kåre also had an active role in the EU-funded project ASTEE (Assessment Tools and Indicators for Entrepreneurship Education), where he developed questionnaires for assessing the effects of entrepreneurship education at the primary, secondary and tertiary levels of education.

Comments (6)

A very relevant blog, thank you. Of course self-reporting is only part of any story, and this remains the case until appropriate metrics that enable more appropriate learning outcomes are developed. How can learners demonstrate flexibility and adaptability, for example, in overly formulaic classes where learning outcomes are based around examinations and tests? Importantly, these predict the results so that comparative measurements can be made... and when did you ever predict a good new idea?

Our work at OECD addresses this by highlighting 'Two I' lenses in the assessment of student performance. Are we assessing 'Implementation' (doing as told) or are we assessing 'Innovation' (realising new ideas in situations of ambiguity and risk)?

Great article, but more work needs to be done in my view.
Andy Penaluna, 2016-06-06 07:34
Thanks for your comment Andy. I agree, averages are always only averages, and there are many important outcome variables that we should include in evaluation studies. In my view there has been great progress in evaluation tools due to the increased use of ICT: for example OctoSkills, which automates the analysis and provides direct feedback to teachers; LoopMe, which can be used as a formative evaluation tool; and ESP, which assesses both cognitive and non-cognitive entrepreneurial skills. In this post I wanted to emphasise the importance of how the data is collected and why we should use randomization. But I would be happy to continue the discussion. Do you have a link to the work you are doing for the OECD?
Kåre Moberg, 2016-06-08 07:52
I would like to agree with two of the author's insights:

1. It really is important to implement rigorous, serious and relevant assessment of the effects of educational initiatives, so that the creation of educational policy is not like wandering in the dark.

2. With good reason, we should be skeptical about the insights that come from many of today's assessment studies of educational initiatives.

Thank you very much for a very clear and useful article.

Radovan Zivkovic, 2016-06-06 09:04
Thank you Radovan - I am happy to hear that you find the text useful.
Kåre Moberg, 2016-06-08 07:53
Entrepreneurship education is relatively new to the curriculum in many countries. The excellent efforts of researchers like Dr Moberg are critical to increasing the uptake of "EE" in schools and to improving the quality of programs and teacher training. It is also essential for policy makers as they develop national strategies.
Caroline Jenner, CEO JA Europe, 2016-06-07 07:36
Thank you Caroline :)
Kåre Moberg, 2016-06-08 07:53
