Evidence-based teaching: how to enhance practice to improve learner outcomes
In this blog, Dr Katy Bloom, Associate Professor of Initial Teacher Education at York St John University, explores what evidence-based teaching (EBT) is, why it is beneficial and what it looks like in practice.
We have an outstanding tradition of educational research, often leading the rest of the world. Yet much of it never filters down into actual classroom teaching, and what does reach the classroom can be fashionable, politically driven and contradictory.
What is evidence-based teaching?
Put simply, it is the act of using educational research of different kinds to inform planning, teaching and assessment, so it is also known as evidence-informed teaching. Sounds easy, doesn’t it? The phrase derives from the philosophy behind how medicines are tested: randomised controlled trials (RCTs), which ensure that the ‘best treatment’ can be identified and shown to be both reliable and valid.
Since 2011, the Education Endowment Foundation (EEF) has funded over 130 RCTs, involving more than a quarter of all maintained schools in England. RCTs are held to be the ‘gold standard’ of educational research; better still are meta-analyses of RCTs, a statistical approach that combines findings from different studies.
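To make ‘combines findings from different studies’ concrete, here is a minimal sketch of one common approach, a fixed-effect meta-analysis with inverse-variance weighting. This is an illustration only, not the EEF’s actual methodology, and the study effect sizes and variances below are invented:

```python
# Sketch of a fixed-effect meta-analysis: each study's effect size is
# weighted by 1/variance, so larger, more precise studies count for more.
# All numbers are invented for illustration.

def fixed_effect_meta(effects, variances):
    """Pool per-study effect sizes into a single weighted estimate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Three hypothetical RCTs of the same intervention
effects = [0.35, 0.10, 0.25]    # standardised mean differences
variances = [0.02, 0.01, 0.04]  # smaller variance = more precise study

pooled, pooled_var = fixed_effect_meta(effects, variances)
print(f"pooled effect size: {pooled:.3f} (variance {pooled_var:.4f})")
```

Note how the most precise study (variance 0.01) pulls the pooled estimate towards its own value of 0.10: combining studies is not a simple average.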
Why does evidence-based teaching matter?
The last ten years have seen a rising call for teaching to be an evidence-informed, or evidence-based, profession, as if this were directly opposed to current classroom practice based on a blend of intuition, common sense and experience. Yet we have all been through a training programme that drew heavily upon learning theory, and current ITE programmes have never been so well supported by research and theory, underpinned by guidance such as the ‘ITT core content framework’ (for Initial Teacher Education) and the ‘Early career framework’ (ECF).
But in day-to-day planning, teaching and assessing, we often fall back on what we feel ‘works best’, and this sometimes means a custom-based practice rather than an evidence-based one. Take the example of deploying teaching assistants (TAs), which costs schools more than £5 billion. TAs are often assigned to the ‘struggling’ individual or group, who are disproportionately from low-income backgrounds, to ‘give them more attention’; in effect, these pupils are being helped by the least experienced adult in the classroom, which helps them neither academically nor aspirationally. Consulting the EEF’s ‘Making best use of teaching assistants’ would, however, flip custom into evidence-informed practice: the class teacher intervenes more with this group, while TAs are trained to deliver more structured one-to-one or small-group interventions.
How to find and use evidence-based practices in teaching and learning
The EEF provides accessible summaries of international research to guide teachers and senior leaders on how best to deploy resources to improve learning outcomes; a sort of ‘best bang for your buck’. Evidence-informed teaching strategies can be compared in the ‘Teaching and learning toolkit 5-16’ and the ‘Early years toolkit’. Each section provides further information, approaches and references, and compares the implementation cost, evidence strength and impact in terms of months of gain.
The Sutton Trust’s ‘What makes great teaching?’ reviewed over 200 pieces of research to identify six key elements of teaching with the strongest evidence of improving attainment (also finding some custom-based practices, such as ‘learning styles’, to have no grounding in research and to actually be harmful to learning). Factors offering the strongest evidence of improving attainment included:
- (pedagogical) content knowledge - teachers’ content knowledge, including their ability to understand how students think about a subject and identify common misconceptions
- quality of instruction - which includes using strategies like effective questioning and the use of assessment
- classroom climate
- classroom management
- teacher beliefs
- professional behaviours
The review also identified specific classroom practices well supported by evidence, including:
- challenging students to identify the reason why an activity is taking place in the lesson
- asking a large number of questions and checking the responses of all students
- spacing out study or practice on a given topic, with gaps in between for forgetting (so that the forgetting can be ‘interrupted’)
- making students take tests or generate answers, even before they have been taught the material
For example, research on teaching effectiveness suggests that achievement is likely to be maximised when teachers actively present material and structure it: providing overviews and/or reviews of objectives, outlining the content to be covered, signalling transitions between parts of the lesson, calling attention to main ideas, and then reviewing those main ideas. The evidence is also useful for rejecting existing, ineffective practices, such as differentiating resources and activities to cater to pupils’ so-called ‘learning styles’ (which the evidence base does not support).
There is so much ‘choice’, so how do we know what to use?
In 2009, John Hattie published the findings of his research team in his book ‘Visible Learning’: a detailed synthesis of ‘effect-size studies’, in which he and his team used meta-studies to compare the impact of interventions and circumstances (such as disadvantaged background) across the sources of variance within education: the students themselves, home, school, curricula and teachers. The outcomes of students receiving a particular intervention, for example using problem-solving strategies, are statistically compared with the outcomes of students not receiving it; this standardised difference is known as the effect size of that intervention or circumstance. Some effects can actually be negative, such as suspension/expulsion or lack of sleep.
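This standardised difference can be illustrated with the most common measure, Cohen’s d: the gap between the group means divided by their pooled standard deviation. This is a sketch of the general idea, not Hattie’s exact methodology, and the test scores below are invented:

```python
# Sketch of an effect-size calculation (Cohen's d): difference in group
# means divided by the pooled standard deviation. Scores are invented.
import statistics

def cohens_d(treatment, control):
    """Standardised mean difference between two groups."""
    n1, n2 = len(treatment), len(control)
    s1 = statistics.variance(treatment)   # sample variance (n - 1)
    s2 = statistics.variance(control)
    pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical test scores with and without a problem-solving intervention
with_intervention = [68, 72, 75, 70, 74, 71]
without_intervention = [64, 66, 70, 65, 69, 68]

d = cohens_d(with_intervention, without_intervention)
print(f"effect size d = {d:.2f}")
```

A negative d would mean the intervention group did worse than the control group, which is how effects such as suspension/expulsion end up below zero.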
So, should we be putting more of our efforts into those effects and interventions with higher effect sizes? The danger of such an approach is using the higher effect values as a ‘shopping list’ and ignoring the lower ‘others’. Homework is a good example. At a ‘low’ effect size of 0.29, it could be argued that this falls below some acceptable threshold, and so homework should be scrapped. However, when you look into the different contributing research projects listed under this area, there is such variance in the effect size (lower, sometimes even negative, in the primary phase) that simply rejecting homework as low impact is akin to throwing the baby out with the bathwater.
Instead, we should use the evidence available to have a more nuanced discussion about what effective homework for learning would look like. What function should that homework serve for us as teachers (thinking and learning, not busy-work or a tick on a list to say it has been set)? A key argument for using research findings on evidence-based teaching is to ensure that any initiative is fit for purpose and that you monitor the impact of change as you go along.
This brings us to another point when perusing the ‘shopping list of what works’: sometimes we are surprised by where our ‘favourites’ appear. Does this mean we are doing it all wrong? No, but it should remind us that classrooms are places where complexities abound; meta-analyses seek ‘big facts’ but do not seek to explain complexities. They also assume that all learners are the same and have the same needs, whereas in the real classroom moderators come into play, e.g. age, gender and starting points. They ignore context, and we know as practitioners that what works well in one place might not work well in another.
It doesn’t necessarily follow that something with a large effect size is best: an intervention with a small effect size might still be worthwhile because it is low cost and easy to implement, while one with a high effect size may be too expensive to adopt. There are good examples of small effect sizes being more beneficial (like small doses of aspirin), and some interventions are like ‘Babushka dolls’, nested inside one another. Also, an effect size measured on a narrow educational outcome will differ from one measured on a more generalised outcome.
But also, and really importantly, meta-analyses do not tell us why or how these methods or effects work. We should be careful about what we call a small, medium or large effect size, because there is a danger of only using high-effect-size approaches instead of engaging with the literature about why they work. So, in reflecting on the implications for your own classroom practice and the approaches you can use, we turn to the principles of evidence informing your practice.
The principles of evidence-based teaching
1. It’s not good enough to know what works; you need to know why. A better understanding of the learning theory that underpins a practice enables you to respond more knowledgeably to classroom complexities as they arise.
2. Make your instructional decisions based on all the evidence, not cherry-picked evidence. Compare it with alternatives that might achieve the same goals. Experiment on your practice and reflect on the outcomes for your students and for you – a sort of continuous action research cycle.
3. Uncomfortable though you may feel, principle 2 makes you look at those areas of your practice that need re-examining, so constantly review your own teaching in light of the available evidence.
Hattie himself has tried to make sense of his own lists by distilling three principles, two of which can be seen directly in the effects high on the list: achievement depends on the amount of challenge set, and achievement is enhanced by feedback. The third is derived more from the outcomes: increases in student learning involve a reconceptualisation of learning. So, to ‘Hattify’ your lessons, set a challenging goal, deploy effective feedback against students’ efforts, and have clear success criteria against which to measure that reconceptualised learning.
Coe, R., Aloisi, C., Higgins, S. and Major, L. E., ‘What makes great teaching?’, 2014 - https://www.suttontrust.com/our-research/great-teaching/
DfE, ‘Early Career Framework’, 2019 - https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/978358/Early-Career_Framework_April_2021.pdf
Education Endowment Foundation, ‘Teaching and Learning Toolkit’, 2023 - https://educationendowmentfoundation.org.uk/evidence-summaries/teaching-learning-toolkit/
Hattie, J., ‘Visible Learning’, 2008 - https://www.taylorfrancis.com/books/mono/10.4324/9780203887332/visible-learning-john-hattie
Hattie, J., ‘Visible learning for teachers: maximizing impact on learning’, 2012 - https://www.routledge.com/Visible-Learning-for-Teachers-Maximizing-Impact-on-Learning/Hattie/p/book/9780415690157
Hattie, J. & Clarke, S., ‘Visible Learning: Feedback’, 2019 - https://www.routledge.com/Visible-Learning-Feedback/Hattie-Clarke/p/book/9781138599895
Marzano, R. J., ‘Classroom instruction that works: research-based strategies for increasing student achievement’, Alexandria, VA: Association for Supervision and Curriculum Development, 2013 - https://books.google.co.uk/books/about/Classroom_Instruction_that_Works.html?id=c25kDO0adxwC&redir_esc=y
Marzano, R. J., ‘The new art and science of teaching’, 2017 - https://www.solutiontree.com/new-art-science-teaching.html
Masters, G., ‘The role of evidence in teaching and learning’, 2018 - https://www.teachermagazine.com/au_en/articles/the-role-of-evidence-in-teaching-and-learning
Ofsted, ‘Education inspection framework: overview of research’, 2019 - https://www.gov.uk/government/publications/education-inspection-framework-overview-of-research
Visible Learning - https://visible-learning.org/category/infographics/