National Tests and Diagnostic Feedback: What Say Teachers in Trinidad and Tobago? (Part 1)

by Launcelot I. Brown, Duquesne University
Laurette Bristol, Charles Sturt University, Australia
Joyanne De Four-Babb, University of Trinidad and Tobago and American University in Cairo, Egypt
Dennis A. Conrad, State University of New York at Potsdam

ABSTRACT:
The authors explored teachers’ and principals’ perceptions of the feedback report from the National Tests in Trinidad and Tobago and the extent to which they used the report in making curricular decisions to impact student learning. The sample comprised 133 primary school teachers (79 from low-performing and 54 from high-performing schools) and 10 principals. Results of the quantitative and qualitative data indicated that while many teachers were uncomfortable interpreting the data presented in the report, teachers in higher performing schools were more inclined, through department-wide collaboration, to use the report to make pedagogical and curricular decisions. The major conclusion drawn was the need for teacher training in the use and interpretation of assessment data. A further issue emerging from the data, and a possible subject for further research, was the branding of schools as good or bad schools based on their performance on the tests.

Keywords: assessment, Caribbean, feedback, national tests, Trinidad and Tobago

A major concern engaging the attention of governments and scholars worldwide, and in the Caribbean in particular, is the high percentage of the student population underperforming academically or not meeting national academic standards (Cassen & Kingdon, 2007; Jules, Miller, & Armstrong, 2000). One response aimed at raising scores has been the introduction of standardised tests that seek to identify areas of academic weakness and strength in the student population as a whole, rather than in individual performance. But how do teachers use evidence of student performance from standardised tests to improve that performance? What sort of evidence do these tests provide for teachers making curricular and pedagogical decisions? What challenges do teachers face in interpreting these test scores? And what organisational conditions need to be present to facilitate teacher learning from the collection and interpretation of data? These questions need to be addressed as we seek to determine whether teachers use the feedback from these standardised tests in their efforts to improve student performance.

In this study, we used a case study of Trinidad and Tobago to ascertain the extent to which primary school teachers examine the feedback from the National Tests and how they use the feedback as they plan to meet the academic needs of their students. Specifically, we sought to determine whether primary school teachers use the report to inform curricular decisions and teacher pedagogy, the extent to which teachers use the report in their communications with parents, and the extent to which schools adopt department or schoolwide approaches and strategies to address the findings in the report.

Assessment and Student Achievement

There are many factors that impact student academic performance; however, the literature reviewed on student achievement stresses the importance of intervention at the primary or elementary level, as well as the fundamental role of ongoing assessment in improving student learning outcomes (Black & Wiliam, 1998a; Moss & Brookhart, 2009). While there is consensus on the positive impact of assessment on student learning (e.g., Nabors-Oláh, Lawrence, & Riggan, 2010), the critical aspect of any assessment is the quality of the feedback and how that feedback is used to inform the teaching, teaching-learning, and learning processes. For instance, Black and Wiliam (1998a), in their synthesis of the literature on the impact of formative assessment on student learning, reported effect sizes ranging between 0.4 and 0.7, standard differences “larger than most of those found for educational interventions” (Black & Wiliam, 1998b, p. 141), and noted that “improved formative assessment helps low achievers more than other students and so reduces the range of achievement while raising achievement overall” (p. 141; see also Parr & Timperley, 2010).

Noting that Black and Wiliam (1998a) acknowledged not performing a quantitative meta-analysis on their data, Kingston and Nash (2011) conducted a meta-analysis to investigate the average effect size of formative assessment on educational achievement in K–12 classrooms. They reported a median effect size of .25 and a weighted mean effect size of .20. While these standard differences are considered small (Cohen, 1988), this does not mean they lack practical significance. Using the Trinidad and Tobago Minister of Education’s estimate that approximately 50% of students in Standard 1 and Standard 3 (U.S. equivalent Grades 2 and 4) score at or above 50% on the National Tests, a median effect size of .25 translates into roughly 10% more students improving their performance on the tests. These are not trivial gains when expressed as the number of additional students achieving at higher levels. The implications of this finding are clear and very much of relevance to Trinidad and Tobago, where concern over the underperformance of students was the main reason for introducing the Continuous Assessment Programme (CAP; Trinidad and Tobago Ministry of Education, 1998).
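One simplified way to see where a figure of roughly 10% could come from is to assume, for illustration, that test scores are approximately normally distributed. If the 50% cutoff sits at the current median, half the students score at or above it; shifting the entire distribution upward by .25 standard deviations raises that proportion from Φ(0) to Φ(0.25), where Φ denotes the standard normal cumulative distribution function:

\[
\Phi(0.25) - \Phi(0) \approx 0.599 - 0.500 \approx 0.10,
\]

an increase of about 10 percentage points.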

 

The literature reviewed on student achievement stresses the importance of intervention at the primary or elementary level, as well as the fundamental role of ongoing assessment in improving student learning outcomes. Photo credit: Cecil Cuthbert

 

Assessments and Feedback

But assessment does not necessarily result in improved student achievement (Black & Wiliam, 1998b). The critical importance of assessment lies in its potential for student learning, and this is dependent on the purpose of the assessment and how the information that emerges from the assessment is interpreted and “put to use in bringing about improvement” (Sadler, 1989, p. 119). This observation addresses directly the commonly stated distinction between formative assessment—assessment for learning—and summative assessment, conceptualised as assessment of learning. The focus of formative assessment is to promote and improve learning and achievement, and it is integrated into the teaching-learning process (Black & Wiliam, 1998b; Moss & Brookhart, 2009). These assessments can be formal, in that they are required as part of school or educational policies, or informal, which can be planned (i.e., done usually with the whole class at the beginning of the lesson to guide the teacher) or interactive (i.e., done during the lesson in response to students’ immediate learning needs; Cowie & Bell, 1999), providing teachers with “accurate information about the specific processes and outcomes of student learning . . . and students with accurate self-assessments to guide their learning processes” (Halverson, 2010, p. 132).

Summative assessment, on the other hand, is often defined as a measure of what has been learned—an audit of attainment (Moss & Brookhart, 2009). It is most often used to make a judgment about what was learned, at which point the assessment stops; that is, it has served its purpose—what Taras (2009) referred to as “uniquely summative” (p. 58). However, this is not always the case. According to Brookhart (2001), “formative and summative assessments need not be mutually exclusive” (p. 157), and, as is often the reality, summative assessments do serve formative purposes. This is especially so in the current educational environment of multipurpose assessments (Bell & Cowie, 2001; Gipps, McCallum, & Hargreaves, 1994), examples of which are the large-scale interim assessments of the School District of Philadelphia (Blanc et al., 2010).

Taras (2009), citing the work of Scriven (1967), Ramaprasad (1983), and Sadler (1989), contended that the “process of assessment is a single process which makes a judgment according to criteria and standards” (p. 58). In this regard, formative assessment is an additional step that follows summative assessment and necessitates feedback indicating the possible gap in addressing the criteria. This feedback, Taras noted, is “information [that] must be used by the learner in future activities” (p. 58). Evidently, the element that gives assessment the potential to significantly impact student learning is the feedback the student receives, whether from a teacher, peer, or any other agent, on his or her performance on the assessment (Hattie & Timperley, 2007).

In distinguishing between teacher and pupil use of feedback, Sadler (1989) argued that while students may use feedback to monitor their own learning, educators use feedback to monitor their teaching effectiveness. Parr, Glasswell, and Aikman (2007), quoting Timperley and Parr (2004), agreed, asserting that “teachers can use assessment . . . to . . . look backwards to reflect on the effectiveness of their own practice and forwards to work out what needs to be taught or re-taught” (p. 69). Similarly, the feedback from the Trinidad and Tobago National Tests is directed at teachers, among other stakeholders. Teachers are expected to use the information to make curricular and pedagogical decisions to address gaps in student content knowledge identified by the assessments.

Challenges in Feedback

Shute (2008), quoting Kulhavy and Stock (1989), argued that “effective feedback provides the learner with two types of information: verification—simple judgment of whether an answer was right or wrong; and elaboration—information providing relevant cues to guide the student” (p. 158). Similarly, Gipps et al. (2000) suggested that feedback fell into two broad categories: evaluative or judgmental—for example, giving rewards: good job, excellent—and descriptive—what Cowie (2005) referred to as informational and Parr and Timperley (2010) called evaluative—in that it must be specific to a “task and how to do it more effectively” (Hattie & Timperley, 2007, p. 84). It is this latter, task-specific aspect that makes the difference in learning outcomes, and it is the focus of much of the literature on the effective use of feedback.

Providing feedback, however, is premised on the assumption that teachers know how to interpret and utilise such information, which is often not the case (Black & Wiliam, 1998a). For example, the studies conducted by Bell and Cowie (1999), Cowie and Bell (2000), and Parr and Timperley (2010) were part of larger initiatives that included professional development components for teachers on how to identify learning gaps and give effective feedback. Parr and Timperley contended that “both the needs of students in progressing their . . . learning and what teachers needed to know in order to address those needs were considered in concert in designing the professional learning” (p. 72). While their focus was on teachers in classroom settings gathering and interpreting evidence of student learning as part of professional development in the use of writing assessment tools, the need for professional development also holds true for interpreting evidence from large-scale assessments. To believe otherwise presupposes that teachers know how to interpret and use the assessment evidence, and are willing to use the data in future curricular and pedagogical decisions on student learning.

Nabors-Oláh et al. (2010) addressed this concern in their examination of Philadelphia teachers’ use of benchmark assessment results. As they observed, there was little evidence about how educators used data from large-scale assessments, or about the conditions that supported their ability to use these data to improve instruction. This is important, because built into the dialogue about providing data to improve student outcomes is the assumption that the data are accurate and sufficiently detailed to guide the teaching and learning process, and that schools know how to effectively use the data or have the support systems in place to facilitate use of the data.

Halverson (2010) captured the previous sentiment when he asserted that assessments, no matter how well designed, basically provide information. Consequently, “schools need structured occasions to turn assessment information into actionable knowledge” (Halverson, 2010, p. 133). Blanc et al. (2010), who also examined the interim assessments in elementary schools in Philadelphia, concluded that “assessment had the potential to contribute to instructional coherence and instructional improvement if they are embedded in a robust feedback system” (p. 205). Schools, then, may need to implement systems or restructure their day in such a way as to give teachers time to interpret the data collectively. Whether this is done through school- or department-wide meetings, by creating working groups by grade level or teacher expertise, or even by identifying and bringing in outside expertise, the fact is, schools have to create the conditions to turn the feedback into actionable knowledge.

Looney (2011), using the term classroom-based formative assessment, distinguished between the interactive assessments that engage students in their own learning and the large-scale assessments that provide data on the effectiveness of the education system and overall performance of schools and students. In Looney’s estimation, there are limits to using data from “large-scale, standards based assessments to target specific student needs or shape classroom instruction” (p. 15). She posited that large-scale assessments are “designed to ensure that data are valid, reliable and generalisable, and cannot easily capture student performance on more complex tasks, such as problem solving, reasoning, or collaborative work. [They] do not provide the detailed information needed to diagnose the specific sources of student difficulty” (Looney, 2011, p. 15).

Additionally, feedback from large-scale assessments takes time: it may be several weeks or months before schools receive the assessment data. This delay, however, does not negate the assertion regarding the potential of such assessments to contribute to improved instructional practice (Blanc et al., 2010; Halverson, 2010). A common thread running through the literature on feedback from large-scale assessments is the importance of the quality of the feedback and the essential role of support. However, developing a robust feedback system is easier said than done, for as Blanc et al. (2010) observed, it “requires skill, knowledge and concerted attention on the part of school leaders” (p. 205).

Herein lies the challenge for effectively using large-scale assessments for formative purposes. First, there must be a system in place that supports the use of the data emerging from the assessment and so creates the conditions for principals and teachers to use the information. Second, the feedback must be accurate, sufficiently detailed, and presented in such a way that principals and teachers can use it. Third, principals and teachers must know how to interpret the data and be willing to schedule time to critically examine and reflect on the data. All three conditions are essential if feedback from large-scale assessments is to significantly impact student learning—that is, to serve a formative purpose.

 

This is an Accepted Manuscript of an article published by Taylor & Francis in The Journal of Educational Research on 25 Nov 2013, available online: http://www.tandfonline.com/doi/abs/10.1080/00220671.2013.788993.

This is Part One of this exciting two-part series based on the feedback on National Tests in Trinidad and Tobago. You can read Part Two here.

About the Authors

Launcelot I. Brown is the Barbara A. Sizemore Distinguished Professor, Associate Professor, and Chair of the Department of Educational Foundations and Leadership at Duquesne University. His research interests are in the areas of school leadership and student achievement, and teachers’ use of assessment data from national assessments in the Caribbean.

Laurette Bristol is Assistant Director for the Research Institute for Professional Practice, Learning and Education at Charles Sturt University, Australia. Her research explores the ways in which the historical conditions of colonialism shape the contemporary pedagogical practices of primary school teachers and principals in the Caribbean.

Joyanne De Four-Babb is a former Assistant Professor and Coordinator of the Practicum Programme at the University of Trinidad and Tobago and Adjunct Assistant Professor in the Graduate School of Education at American University in Cairo, Egypt. She uses narrative inquiry methodology to analyse the experiences of beginning and practicing teachers.

Dennis A. Conrad is Professor and Chair of the Department of Inclusive and Special Education at the State University of New York at Potsdam. His research interests include learner diversity and difficulties, inclusive leadership, self-study in teaching, and Caribbean issues in education.
