Assessment Literacy and Other Mythical Creatures in Kenya’s Education System

The debate over the alarming KCSE results revealed how much we, as a nation, do not know about testing. If someone were to grade our comments on the performance, we would score enough Ds and Es to make Matiang'i and Magoha proud. This collective ignorance is not entirely surprising, since assessment literacy is undervalued even in the curricula where it matters most, such as teacher education. It also explains why we asked every question except the relevant ones.

What exactly is the purpose of KCSE, and, therefore, what does it measure? This question should be answered by the test specifications of each KCSE paper. Test specifications are a blueprint for an exam: they explain and justify the decisions made by test developers. For instance, the specifications would explain why English Paper 2 has a grammar section, what that section is intended to measure, what grammar knowledge a student needs to attempt it, and how various scores in the section should be interpreted relative to the student's language ability. What is distressing about KCSE is that KNEC treats test specifications as a secret document, yet all schools, teachers and students should have access to them. KNEC is also obliged to provide schools with all necessary information about the tests and the expected responses, including samples and grading rubrics, to ensure fairness, an important test quality. In subjects like English and Kiswahili, students must understand the qualities of responses the examiner expects, and it is KNEC's responsibility to furnish teachers and students with this information.
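
To make this concrete, here is a rough sketch, written as a Python snippet purely for readability, of the kind of information a single specification entry might carry. Since KNEC keeps its specifications secret, every field and value below is my own invention for illustration, not an actual KNEC document.

```python
# A purely hypothetical sketch of one entry in a published test
# specification. KNEC's actual specifications are not public, so
# every field and value below is invented for illustration.

grammar_section_spec = {
    "paper": "English Paper 2",
    "section": "Grammar",
    "construct": "control of sentence-level grammar in written English",
    "assumed_knowledge": [
        "tense and aspect",
        "subject-verb agreement",
        "prepositions",
        "sentence transformation",
    ],
    "marks": 15,
    "score_interpretation": {
        "12-15": "consistent, accurate control of the tested structures",
        "6-11": "partial control; errors rarely impede meaning",
        "0-5": "limited control; errors frequently impede meaning",
    },
}

for field, value in grammar_section_spec.items():
    print(f"{field}: {value}")
```

If something like this were published for every paper, teachers and students would know exactly what each section measures and how their responses will be judged.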

Can a test with an almost 90% failure rate be valid? This is highly unlikely, given that just over 600,000 students from across the country sat the exam. The diversity of that population makes it improbable that all students shared a learning experience that would explain the low performance. While it is possible that some schools did not cover their syllabi, it is wrong to generalize that assumption to everyone, as Matiang'i would have us believe. In fact, most schools complete the four-year syllabus before the end of the third year. Even if we assumed that no public school prepared its candidates well, what would we say about private schools? On average, top private schools have lower student-teacher ratios, better facilities and more motivated teachers than public ones, which normally translates into higher mean grades in those private schools.

Another possible explanation for the low performance would be the low academic ability of the 2017 KCSE class, as some commentators claimed. The omniscient History professor, Edward Kisiang'ani, tweeted, "these results represent the correct ability of our children…." While I cannot speak for the good professor's children, as a teacher and education student I find it impossible to believe that after four years of learning, fewer than 150 students out of 600,000 would score grade A in any valid test. Questioning the students' ability is completely untenable given their entry behavior. The 2017 cohort posted a sterling performance in the 2013 KCPE. Of the 450,000 candidates who sat the exam that year, over 360,000 scored at least a C+, and over 10,000 scored A- and above. In 2013 the average performance was a C-, compared with the D- the class recently posted in KCSE. This comparison matters because a valid test should predict the future performance of students; KCSE results should therefore correlate with the KCPE performance of the same cohort. This concept, called criterion-related validity in testing parlance, is an important measure of test quality. Matiang'i's sole excuse for this anomaly has been widespread cheating in national exams. Well, KNEC reported that less than one percent of candidates cheated in the 2013 KCPE, so we can safely assume cheating had a negligible effect on performance that year.
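
For readers who want to see what criterion-related validity looks like in practice, here is a minimal sketch in Python. All the scores are invented for illustration; the point is simply that each candidate's KCPE score is paired with the same candidate's later KCSE score and the two lists are correlated.

```python
# A minimal sketch of criterion-related (predictive) validity: correlate
# each candidate's KCPE score with the same candidate's later KCSE score.
# All scores below are invented for illustration; they are NOT KNEC data.

import statistics

def pearson_r(xs, ys):
    """Sample Pearson correlation between two equally long score lists."""
    mean_x, mean_y = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (statistics.stdev(xs) * statistics.stdev(ys))

# Hypothetical KCPE marks (out of 500) and KCSE points (out of 84)
# for the same eight candidates, listed in the same order.
kcpe = [420, 395, 360, 310, 280, 250, 220, 200]
kcse = [ 78,  72,  65,  55,  48,  40,  33,  28]

print(f"criterion-related validity coefficient: {pearson_r(kcpe, kcse):.2f}")
```

A coefficient close to 1 is what we would expect if both exams measure what they claim to measure; a coefficient near zero for the real 2013 KCPE and 2017 KCSE scores would point a finger at the test or the grading, not the students.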

So what could be the reason for the mass failure? My guess is that the grading process played a big role. Other indicators of test quality are the ability of individual items in a test to separate high and low scorers on the whole test (internal consistency and item discrimination). For instance, we would say the Biology paper was good if students who scored As in other subjects also scored As in Biology and, conversely, if students who scored Ds in other subjects also scored Ds in Biology. Results that do not show this correlation hint at poor test development, a poor grading rubric or a poor grading process. This is why it would have been interesting to see a subject-by-subject analysis of all the candidates and centres. You would expect Matiang'i to provide these analyses with the same speed with which he announced the top 100 students and schools. To date, the closest thing to a meaningful analysis of the performance provided by the government is an Egoji TTC tender notice labelled "2017 KCSE Statistics" on www.mygov.co.ke. Sadly, this is not an isolated case of lack of transparency in the ministry of education.
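
For the curious, here is a short sketch, again in Python with an invented response matrix, of the two indicators mentioned above. KNEC could compute and publish figures like these for every paper.

```python
# A minimal sketch of two test-quality indicators: Cronbach's alpha
# (internal consistency) and corrected item-total correlation (item
# discrimination). The 1/0 response matrix below is invented; it is
# NOT real KNEC data.

import statistics

def pearson_r(xs, ys):
    """Sample Pearson correlation between two equally long lists."""
    mean_x, mean_y = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (statistics.stdev(xs) * statistics.stdev(ys))

def cronbach_alpha(item_columns):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(item_columns)
    totals = [sum(row) for row in zip(*item_columns)]
    item_var = sum(statistics.variance(col) for col in item_columns)
    return (k / (k - 1)) * (1 - item_var / statistics.variance(totals))

def discrimination(i, item_columns):
    """Correlate item i with the total of the OTHER items: a near-zero
    value flags an item that fails to separate strong from weak candidates."""
    rest = [sum(row) - row[i] for row in zip(*item_columns)]
    return pearson_r(item_columns[i], rest)

# Six hypothetical candidates, four items; 1 = correct, 0 = wrong.
items = [
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [1, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0],  # this item turns out not to discriminate
]

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
for i in range(len(items)):
    print(f"item {i + 1} discrimination: {discrimination(i, items):.2f}")
```

Numbers like these, computed per paper and per centre, are exactly the kind of analysis the ministry could release instead of a list of top students.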
