Summative Exams as Formative Devices with Implications

Let’s begin with what I mean by the following: 

Summative assessment – normally based upon a sample of a domain; used for looking at the distribution of pupil attainment at a point in time.
Formative assessment – normally used for identifying what to do next.
I’ve previously written at length about the mastery learning model and how fundamental formative assessment is to this.  Teachers need to use formative assessment to make judgments about pupil performance and attempt to make inferences about pupil learning.  We can never know what has been learned at the point of teaching, as learning takes place over time. 
In my experience, formative assessment has been almost unanimously accepted as being a good thing by the profession.  Summative assessment, on the other hand, can be a topic of much debate:  When should it happen?  What should the format be?  Are exams a fair way of awarding qualifications?  What about coursework?
Something I feel is often missed is that summative assessment can itself be used formatively.  A well-designed assessment can be used to look at the distribution of pupil attainment, but it can also provide a summary of where the pupil is right now and help us plan what to do next.
Not all summative assessments make good formative assessment tools.  Daisy Christodoulou, in Making Good Progress, talks about how final exams such as GCSE/National 5 are not particularly useful for this purpose.  She suggests that to make valid inferences about specific topics covered we need to make the exams longer: the more questions an exam includes, the lower the sampling unreliability.
For example: if the paper contains one cosine rule question and the pupil gets it wrong, this doesn’t tell us very much.

- Does it mean the pupil cannot apply the cosine rule?
- Does it mean the pupil cannot decide which trigonometry rule to use?
- Does it instead tell us that the pupil cannot get past the context the question is wrapped in, e.g. bearings?
The inferences we can make about competence are not entirely reliable in this case.  We’d need a range of questions on trigonometry in triangles to identify the real issue: decision making, applying procedures, interpreting contexts or something else.  To make inferences about learning across a unit of work we need a large sample of assessment questions.  This happens naturally in ongoing formative assessment, due to its very nature, but is not the case in the UK summative exams I am aware of.
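To put a rough number on this sampling point (a minimal sketch of my own, not a calculation from Making Good Progress): if we model each of \(n\) questions targeting a skill as an independent check that the pupil passes with probability \(p\), then the observed proportion correct \(\hat{p}\) has standard error

\[
\mathrm{SE}(\hat{p}) = \sqrt{\frac{p(1-p)}{n}}
\]

With \(n = 1\), as with the lone cosine rule question, and \(p = 0.5\), the standard error is 0.5 – the single result tells us almost nothing.  With \(n = 10\) related questions it falls to roughly 0.16, which begins to separate “cannot do it” from “slipped up once”.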
Our approach
With this in mind, the end-of-phase tests we’ve created at Hillhead aim to serve as both summative and formative devices.  This means they are long: a phase typically contains four or five topics, and the tests run over two hour-long periods.
Below I have included one section from an assessment; the assessment would include a section like this for each topic.  While I make no claims about these assessments being perfect, I suggest they give us some valuable information.

These tests help the teacher and pupil to identify more accurately where knowledge and skills are deficient within a topic.  Further, no pupil will score zero for a topic: thanks to the mastery formative cycle in lessons, a pupil would not have progressed through the work without demonstrating performance at the time of learning, and it is unlikely a pupil will have forgotten everything they learned on a given topic.  If I’d included only one of the context questions, a score of zero on equations may have been quite possible; one of those questions alone does not give sufficient insight to make inferences about equation-solving skill.  However, a score of 100% on the questions where solving is required out of context, alongside poorer performance on contextual questions, helps us identify clearly the area in need of remediation.
A concern some may have is that many of the questions are similar, and that pupils may be penalised over and over for the same mistake.  The marking scheme is weighted in order to avoid this and, further, the mastery curriculum planning should have ensured that no pupil is unable to deal with, for instance, ANY of the fractional equations.  Our experience of this particular test so far has confirmed these expectations.
The assessment is marked with the component scores for each of the four or five topics recorded.  This gives a quick indication of relative areas of strength and weakness, which can then be used to plan interventions.  Mastery learning literature suggests a score of 80% in formative assessment is sufficient to move on, and we have maintained this level of expectation with our summative assessments.  For each topic where a pupil scores under 80% there may be some follow-up, which I discuss below.  A score in the 50s or 60s on a topic is not considered good: it shows a lot of work is required to reach the expected level.
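To make the record-keeping concrete, the per-topic flagging amounts to something like the sketch below.  This is purely illustrative – the names and data structures are my own invention; only the 80% threshold comes from the mastery literature mentioned above.

```python
# Hypothetical sketch of per-topic mastery flagging.
# Only the 80% threshold comes from the mastery learning literature;
# every name and data structure here is invented for illustration.

MASTERY_THRESHOLD = 0.8

def topics_needing_followup(topic_scores: dict[str, tuple[int, int]]) -> list[str]:
    """Return the topics where the pupil scored under the mastery threshold.

    topic_scores maps topic name -> (marks gained, marks available).
    """
    return [
        topic
        for topic, (gained, available) in topic_scores.items()
        if gained / available < MASTERY_THRESHOLD
    ]

# Example: strong on equations, weaker on trigonometry.
pupil = {
    "Equations": (18, 20),
    "Trigonometry": (11, 20),
    "Area and perimeter": (17, 20),
    "Fractions": (19, 20),
}
print(topics_needing_followup(pupil))  # ['Trigonometry']
```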
One concern with our tests is that the pupils do not have to identify which topic a question is about.  We’ve included questions explicitly designed to test strategy selection (area v perimeter, Pythagoras v trig, etc.), but I am aware these assessments are not entirely like the final exams offered by exam boards.  Whereas external exams are designed to test a sample of course content across 2+ years of learning, ours are a focused deep dive into 3–6 months of work.  This will in part be addressed by prelim/mock exams later in schooling, study of past papers, regular use of low-stakes quizzes and use of, for example, SSDD problems.
One of the benefits of having summative assessments has, in our experience, been that they encourage many of the pupils to revise.  In terms of long-term retention of information this is very powerful – it interrupts the forgetting curve.  Another benefit is that the act of sitting the assessment itself invokes the testing effect, again with consequences for long-term retention.
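For reference, the classic Ebbinghaus model (a standard textbook form, not something specific to our assessments) describes the forgetting curve as exponential decay of retention \(R\) with time \(t\) since learning, governed by a memory stability \(S\):

\[
R(t) = e^{-t/S}
\]

Each successful retrieval – through revision or the test itself – effectively increases \(S\) and flattens the curve; this is what “interrupting the forgetting curve” amounts to.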
Pupil Engagement and Interventions
I’ve written before about how we operate a no in-class revision policy.  Pupils are encouraged to take ownership of their own study.   It is about instilling good habits for later in their school careers.  This study is facilitated by the issuing of comprehensive self-assessment grids, revision questions and worked solutions.  The self-assessment grids don’t ask pupils to traffic light based upon their own feelings.  In my experience pupils are notoriously poor at rating their own knowledge.  Just because you have heard of Pythagoras and recognise the word doesn’t mean that you can use the theorem!  With this in mind, the self-assessment grids have a sample question attached to each learning intention to aid pupils in self-diagnosing.   
We’ve found that some pupils ignore the self-assessment aspect of the process and simply complete the entire revision pack.  These packs are very, very long and contain many hours of work, but we will not discourage pupils from doing extra work!
All of this is facilitated by after-school supported study drop-ins, which all of our department offer on an almost nightly basis.  Of course, the teacher will be doing ongoing revision via starters and quizzes, but this alone will not be enough for pupils to achieve well.
The results of these assessments have overall been very impressive.  Many pupils have demonstrated that their instantaneous performance in lessons has transferred into sustained learning a number of months later.  The distributions of scores across the year groups in question have tightened up and moved in the positive direction. 
However, what matters with assessment is not so much the outcome as the impact of the outcome.  The summative data is useful for tracking, but the interventions we make for each topic based upon pupil scores are the key element of the process.
Pupils who have performed below expectation in a topic or two are allowed to re-sit those sections a few weeks later.  Sometimes the shock of not having done well is enough to motivate learners to go away and do the required study.  Other pupils need more support; we’ve found that writing home and attaching a revision pack has encouraged parents to support their child as they study for the re-sit.  Allowing pupils to re-sit sections has been a positive move.  It gives learners the opportunity to improve and actually remediate the learning, which is what really matters – not the test score in itself.
However, we’ve reached a point where, for some pupils, these interventions are not working.  In my own S3 class there are 5 or 6 pupils who are the classic borderline candidates – they have the ability to achieve Higher by S5 but are currently working at a level more akin to National 5.  They learn at broadly the same rate as their classmates, matching or exceeding their performance in formative assessments, but because they do not study they perform significantly less well summatively.
These pupils have a lot in common.  Attendance is less than perfect; they come from similar socio-economic backgrounds; and letters home after poor summative scores have had little impact, even when they have been about re-sit opportunities.  They haven’t worked hard enough in the run-up to tests and do not work hard when given the opportunity to re-sit.  In the past I’d perhaps have accepted that they lacked the work-rate for Higher and watched them drift off to the N5-in-S5 pathway.  However, from my reading on mastery learning, I now realise there is still hope and still something I can do.

So now, after giving them the opportunity to work hard and watching them fail to take it, the latest intervention is to be more assertive and direct in telling them what they have to do.  I’ve gone through their tests and carefully picked out the questions from the revision pack they need to go away and attempt.  This is very much me taking ownership of their learning – the very thing I didn’t want to have to do.  I want them to do this!  I will be issuing this as a homework task, rather than as a study suggestion.  Non-compliance will then result in an opportunity to do the revision over a number of lunchtimes (detention).  I don’t want to have to force pupils in this way, but my hope is that once they revise and then do well in their re-sit they might realise that if they study before tests they can do well.  One of them said to me, “it was too hard”, which is nonsense, as I’ve witnessed him do it all before – perfectly.  He’s one of the most able in the class.  I’m not accepting difficulty as an excuse from bright pupils who are in a class going at the correct pace for their rate of learning.  All the learning has been at an appropriate level for these pupils.  There can be no excuses for letting them slide off the Higher pathway.


One comment

The assessments are really specifically tied to our curriculum, which is broken up into many objectives.  For instance, we’ll focus on various permutations of equations in a block of work on that topic, and in the assessment we’ll be looking to do a check on each of these.  The skills questions will be things the kids have done in class before, and demonstrated they can do at the time of learning.

But we aren’t just about drill exercises.  Conceptual development, application in problems, mathematical thinking and inquiry are important, and using summative testing for these is harder.  We take a few approaches: sometimes we include application questions where one or more topics are interleaved together, such as the angles and equations one on the blog.  Other times we’ll put in an always/sometimes/never true question.  Sometimes it might be a grouping task within the assessment, or simply “give an example of”.  We also check that pupils can work with the different representations of an idea: e.g. “place this fraction on the number line”, “find this fraction of x”, “add this fraction to y” etc.  Another approach is to use a conceptual multiple choice hinge question.  We had the surreal experience of kids being able to find the area of complex compound shapes but getting Q1 wrong, where they were asked for the area of a rectangle.  Why?  The rectangle deliberately had all 4 side lengths given, and the multiple choice options included distractors.

I use a variety of places for ideas: our own stuff, Don Steward, the excellent Kangaroo Maths BAM tasks, the stuff I brought with me from my previous school, and NCETM mastery resources too.  Another common approach is to look at N5 papers and see if there is something which is essentially just 3rd level wrapped up; we might use that.  The quadratic formula (without the +/−, just +) is a favourite.  Revision packs are not ideal yet, and have been fired together from a mix of textbook Qs, exercises that didn’t make the cut for our curriculum planner, homemade stuff and *whisper it* TJ.
