National Tests and Diagnostic Feedback: What Say Teachers in Trinidad and Tobago? (Part 2)

by Launcelot I. Brown, Duquesne University
Laurette Bristol, Charles Sturt University, Australia
Joyanne De Four-Babb, University of Trinidad and Tobago, American University in Cairo, Egypt
Dennis A. Conrad, State University of New York at Potsdam

[Editor's note: This is Part 2 of the series 'National Tests and Diagnostic Feedback: What Say Teachers in Trinidad and Tobago?' To read Part 1, click here.]

The Case of Trinidad and Tobago

Trinidad and Tobago, the southernmost island state in the Caribbean chain of islands, is multiethnic and multireligious, with the predominant ethnicities being of East Indian (40.3%) and African (39.6%) descent (World Factbook, 2012). The country is divided into eight educational districts, each headed by a School Supervisor III (SSIII), who is assisted by SSIIs responsible for secondary schools and SSIs responsible for primary schools. All educational policies and mandates emanate from the Ministry of Education, the central authority that runs the public education system (Oplatka, 2004). The public education system comprises four levels: early childhood care and education (3–4-year-olds), primary education (5–11+ years old), secondary education (12–17 years old), and the tertiary level. There are seven levels within the primary school: the infant department, which comprises two levels (first- and second-year infants), and the junior department, which comprises five grade levels called standards (Stds. 1–5).

Students’ poor academic performance is evident in the results of both the Secondary Entrance Assessment (SEA) and the National Tests. The SEA is a national examination taken at age 11+ years and is used to place students within the hierarchical secondary education system. Based on conservative estimates, approximately 30% of students taking this exam attain scores at Level 1: below proficient. In 2010, the then Minister of Education stated that approximately 50% of students in Std. 1 and Std. 3 achieved scores below 50% on the National Tests (see Allaham, 2011; De Lisle, 2011; Ramdass, 2010).

As part of an overall strategy to address student learning outcomes at all levels of the primary school, in 2001 the government introduced the Continuous Assessment Programme (CAP). As stated in the CAP Operational Manual, CAP as a model is designed to “inform on students’ readiness for the next level of learning and so facilitate smooth transition through the school system; support appropriate decision-making about individual students, programmes in schools and instructional approaches” (Trinidad and Tobago Ministry of Education, 1998, p. 7).

CAP, as originally defined, comprises nine steps that run from the admission of the child into the school, through screening and interventions, classroom activities that include formative and summative assessments, and analysis and reporting, to evaluation (Trinidad and Tobago Ministry of Education, 1998). The summative assessment component comprises the National Tests, which are administered yearly to students in Stds. 1–4 in public and private primary schools. These tests assess student performance in four areas—mathematics and language arts in Stds. 1 and 3, and social studies and general science in Stds. 2 and 4. A critical component of the National Tests is Step 8—analysis and reporting—that is, the feedback report on individual districts’ and schools’ performance on the tests. There are four performance levels: Level 1: below proficient—the student performs well below the standard of work required at this level; Level 2: partially proficient—the student nearly meets the overall standard of work; Level 3: proficient—the student meets the overall standard of work; and Level 4: advanced proficiency—the student exceeds the overall standard of work required at this level (Trinidad and Tobago Ministry of Education, 2004, 2008).

All raw scores are converted into normal curve equivalent (NCE) scores (M = 50, SD = 21.06). Table 1 shows the norm-referenced bands. Schools in which more than 30% of students score at Level 1 are designated as in need of Performance Enhancement Programmes (PEP), which are programmes designed to address the identified areas of weakness.

The purpose of the National Tests is diagnostic; they therefore do not represent pass or fail decisions based on given criteria. The emphasis is on identifying areas of academic weakness at the national level and determining performance trends among different groups of students by educational district, locality of the school, type of school, classification of school, and gender. As stated in the 2004 report (Trinidad and Tobago Ministry of Education, 2004), the objectives of the National Test are to:

• gather information for decision making at the school, district, and national levels,
• identify areas of the system that require further investigation,
• identify national norms,
• compare students’ performance by school and educational districts, and
• track students’ progress in school.

[Table 1: Norm-referenced bands for NCE scores]
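
To make the scoring model above concrete, the short sketch below (in Python) applies the standard NCE conversion (NCE = 50 + 21.06z, where z is derived from a student’s percentile rank), the norm-referenced bands quoted in the 2004 report, and the rule that a school is designated for PEP when more than 30% of its students score at Level 1. It is an illustration under those stated assumptions, not the Ministry’s implementation; the Level 1 flags in the example are hypothetical.

```python
# A minimal sketch of the scoring model described above, assuming the
# standard NCE definition (NCE = 50 + 21.06 * z) and the band cut-offs
# quoted in the 2004 report. It is not the Ministry's implementation.
from scipy.stats import norm


def percentile_to_nce(percentile_rank: float) -> float:
    """Convert a percentile rank (0-100, exclusive) to a normal curve equivalent."""
    z = norm.ppf(percentile_rank / 100.0)
    return 50.0 + 21.06 * z


def nce_band(nce: float) -> str:
    """Norm-referenced bands as quoted in the report (1-29, 29-71, 71 and above)."""
    if nce < 29:
        return "below average"
    if nce < 71:
        return "average"
    return "above average"


def needs_pep(level1_flags: list[bool]) -> bool:
    """A school is designated for PEP when more than 30% of its students
    score at Level 1 (below proficient)."""
    return sum(level1_flags) / len(level1_flags) > 0.30


if __name__ == "__main__":
    for pr in (10, 50, 84):
        nce = percentile_to_nce(pr)
        print(f"percentile {pr:>2} -> NCE {nce:5.1f} ({nce_band(nce)})")
    # Hypothetical school of 20 tested students, 7 of whom scored at Level 1 (35%).
    print("PEP designation:", needs_pep([True] * 7 + [False] * 13))
```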

Each school receives a feedback report on the overall performance of the school, its performance in specific content areas, and individual students’ performance in specific skill areas.

As such, while the National Test is a summative assessment, it also serves a formative purpose. As can be seen in the feedback cycle (Figure 1), it is expected that the district offices use the feedback report to identify the level at which schools in their districts are performing and provide resources to help them address their areas of weakness. Also, it is expected that teachers and principals use the feedback in a formative capacity to inform the school’s response and teachers’ pedagogy to positively impact student learning.

In preparation for the implementation of CAP, the Ministry of Education conducted a series of workshops for teachers. However, there are no reports on the effectiveness of these professional development workshops or on whether principals and teachers acquired the skills necessary to make effective use of the data presented in the National Tests report. Nor do we know how many of the teachers who received the initial training are still working in the education system. The fact is, while NCE scores may seem “easy to interpret” (Trinidad and Tobago Ministry of Education, 2004, p. 3), the report also states that “scores between 1 and 29 are considered below average; between 29 and 71 are considered average and those 71 and above reflect above-average performance” (p. 3; see Table 1). Statistically, these bands make sense, but they can be confusing to teachers when compared with the data sent to the schools (Tables 2 and 3).

Therefore, in this study we gauged responses from primary school teachers and principals to determine the extent to which they used the feedback from the National Tests in making curricular and pedagogical decisions. Specifically, we asked the following question: Was there a mean difference in overall use of the National Tests Report (the combined dependent variables of teacher use for curricular and pedagogical decisions, department-wide approaches, and communication with parents) between low-performing schools (PEP schools) and higher performing schools (non-PEP schools)? Additionally, we sought to determine on which of the dependent variables they differed. We also sought to ascertain whether there were mean differences in overall use of the National Tests Report by sex, teaching experience, and teacher qualification.

[Figure 1: The feedback cycle]

Method

We employed quantitative and qualitative approaches and techniques in analysing the data. For the quantitative analyses, we reported descriptive statistics and conducted multivariate analyses of variance (MANOVAs) to test for differences between groups in their use of the National Tests Report. We also ran a discriminant analysis and examined analysis of variance (ANOVA) tables.

The qualitative data were collected through individual semistructured interviews and focus group interviews. Ten principals were interviewed to explore their perceptions of the usefulness of the report and their use of it in making decisions related to student achievement. Similar questions, focused on classroom and pedagogical decisions, were asked of 19 teachers in three focus groups. The authors conducted both the interviews and the focus groups, which were audio-recorded and transcribed.

Participants

The sample comprised 20 primary schools (PEP schools = 13; non-PEP schools = 7) drawn at random, without stratification, from cluster schools in the Port of Spain and Environs Education District. Only teachers who were currently teaching, or had taught, classes that had taken the National Tests were invited to participate (n = 256). Of those teachers, 133 returned completed questionnaires (women = 96, men = 37; PEP = 79, non-PEP = 54). The breakdown by qualification was as follows: not a trained teacher = 4, teacher’s diploma = 81, bachelor’s degree = 40, master’s degree = 5, no response = 3. Teaching experience ranged from 0.5 to 40 years, with a mean of 10.15 years (SD = 7.1) at their present school. The ratio of female to male teachers reflected the ratio in the teaching population. However, there were significantly more teachers with degrees in the non-PEP schools (n = 26) than in the PEP schools (n = 19), χ²(1, N = 126) = 9.58, p = .002. The mean number of students per class was 19 (SD = 3.6) in the PEP schools and 23 (SD = 5.1) in the non-PEP schools. This difference was statistically significant; however, it was not considered important to the outcome of the study because research has shown that, without a change in teachers’ teaching behaviours, class size by itself has minimal if any effect on student achievement (Ehrenberg, Brewer, Gamoran, & Willms, 2001). From this sample, 10 principals agreed to be interviewed and 19 teachers agreed to participate in one of three focus groups.
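
For illustration, the sketch below runs the kind of 2 × 2 chi-square test reported above for teacher qualification by school type. The degree counts (19 in PEP schools, 26 in non-PEP schools) are taken from the article; the diploma counts are hypothetical placeholders chosen only so that the cells sum to N = 126, so the output approximates rather than reproduces the reported χ²(1, N = 126) = 9.58.

```python
# Sketch of the 2x2 chi-square comparison of qualifications by school type.
# Degree counts come from the article; diploma counts are hypothetical
# placeholders, so the statistic only approximates the reported value.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([
    [19, 58],  # PEP schools:     degree holders, diploma holders (second count hypothetical)
    [26, 23],  # non-PEP schools: degree holders, diploma holders (second count hypothetical)
])

chi2, p, dof, expected = chi2_contingency(table)  # Yates' correction applied by default for 2x2
print(f"chi2({dof}, N = {table.sum()}) = {chi2:.2f}, p = {p:.3f}")
```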

Procedure

Following permission from the Trinidad and Tobago Ministry of Education, we contacted schools directly. Principals who indicated a willingness to have their schools participate in the study were sent copies of the permission letter and the confidentiality statement, the questionnaires seeking the teachers’ opinions on the feedback from the National Tests, a letter requesting teacher participation in one of three focus groups, and a letter to the principals requesting an interview. The principals distributed the questionnaires to teachers willing to participate in the study. Teachers placed their completed and sealed questionnaires in an envelope kept in the administration office; these were collected by one of the researchers the following week. Teachers indicated their willingness to participate in the focus groups electronically or by direct response to the researchers’ request.

[Table 2: Example of the National Tests feedback data sent to schools]

[Table 3: Example of the National Tests feedback data sent to schools]

Instruments


Three sources provided data for this study: teachers’ and principals’ responses to items on a short questionnaire, principals’ interview data, and data from the teachers’ focus groups. The questionnaire was developed from three workshops for primary school teachers on meeting students’ learning needs in the regular classroom. Eighteen items were selected by the teachers and vetted by a panel of senior teachers and principals. Items 1–7 sought demographic information. The other 11 items rated teachers’ responses on a 6-point Likert-type scale ranging from 1 (disagree) to 6 (totally agree). Item 8 asked teachers to rate the timeliness of the National Tests Report, and the remaining 10 items sought to ascertain the extent to which they used the information to make curricular decisions. Examples of items include the following: “The National Tests Report is sufficiently detailed to allow me to make informed teaching decisions,” “Within the same department or class level we hold planning meetings to discuss approaches and strategies to address areas of weakness highlighted in the report,” and “I use the report to reflect on my teaching.”

We subjected the 10 items to a factor analysis using maximum likelihood extraction with direct oblimin (oblique) rotation, suppressing loadings lower than .30. This yielded a three-factor solution that accounted for 68.02% of the variance in the set of variables, with the first and second factors accounting for 44.72% and 11.95% of the variance, respectively (Tables 4 and 5). Correlation coefficients for the 10 items ranged from r = .04 to .69, and correlations between the factors ranged from r = .37 to .52. The item-to-factor loadings made theoretical sense. The five items pertaining to teacher use of the feedback data to make curricular decisions loaded on Factor 1 (teacher use; Cronbach’s α = .85). The two items pertaining to departmental planning loaded on Factor 2 (department collaboration; Cronbach’s α = .81), and the three items that addressed communicating the data to parents formed Factor 3 (communication; Cronbach’s α = .66). Internal consistency reliability for the entire instrument was .86 (Table 5). The Kaiser-Meyer-Olkin measure of sampling adequacy was .82, and Bartlett’s test of sphericity was significant, χ²(45, N = 133) = 547.27, p < .001, indicating the factorability of the correlation matrix and, as a result, the suitability of the data for factor analysis.
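
A sketch of this factor-analytic workflow, using the open-source factor_analyzer package, is shown below. Because the questionnaire responses are not publicly available, randomly generated ratings stand in for the 10 items; with the real data, the loadings, KMO value, Bartlett test, and alpha coefficients would correspond to the figures reported above and in Tables 4 and 5.

```python
# Sketch of the factor-analytic workflow described above (maximum likelihood
# extraction, direct oblimin rotation, KMO, Bartlett's test, Cronbach's alpha).
# Random placeholder ratings stand in for the real 10-item questionnaire data.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 7, size=(133, 10)),           # placeholder 6-point ratings
                     columns=[f"item_{i}" for i in range(9, 19)])   # items 9-18 of the questionnaire

# Suitability checks reported in the article: Bartlett's test of sphericity and KMO.
chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_total = calculate_kmo(items)
print(f"Bartlett chi2 = {chi_square:.2f}, p = {p_value:.4f}; KMO = {kmo_total:.2f}")

# Three factors, maximum likelihood extraction, direct oblimin (oblique) rotation.
fa = FactorAnalyzer(n_factors=3, method="ml", rotation="oblimin")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.where(loadings.abs() >= 0.30).round(2))  # suppress loadings below .30


def cronbach_alpha(df: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (rows = respondents, columns = items)."""
    k = df.shape[1]
    item_variances = df.var(axis=0, ddof=1).sum()
    total_variance = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)


print(f"alpha (all 10 items) = {cronbach_alpha(items):.2f}")
```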

[Table 4: Factor analysis results]

The Interview

During their interviews, the principals were asked three questions; the same questions were also posed to the 19 teachers in the focus groups. The questions were the following:

• Do you conduct a critical analysis of the feedback from the National Test? If yes, how is this done?
• To what extent is there an educational plan at the department and/or school level to address the findings of the National Tests Report?
• How is the report used by the teachers in their teaching and in the curricular decisions they make in their effort to improve student learning?

[Table 5: Factor analysis and reliability results]

Data Analysis

Before conducting the statistical analyses, all appropriate statistical assumptions were tested and found to be tenable. First, descriptive statistics were run to determine the extent to which teachers used the feedback data. Second, because the three factors were considered simultaneously, we ran MANOVAs using school designation (PEP and non-PEP), sex, years of teaching (<5 years and >5 years), and teacher qualification (teacher’s diploma and degree) as independent variables and the three correlated factors as dependent variables. As a follow-up to a significant MANOVA, we ran a discriminant analysis to determine which variable(s) maximised separation between the groups. Following this, we examined the ANOVAs to compare the perceptions of teachers in low-performing schools (PEP schools) and higher performing schools (non-PEP schools) on each of the three factors: the usefulness of the feedback data and their use of it in making curricular decisions, the extent to which there was a department-wide response to the feedback report, and the difference in teachers’ use of the data in communication with parents. Effect sizes were reported to quantify the magnitude of significant findings. With regard to the interviews and focus groups, all data were transcribed, and the transcribed data were independently analysed for emerging themes and categories by two of the researchers with expertise in qualitative research.
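
The sketch below illustrates this analytic sequence (a MANOVA on the three factors, a follow-up discriminant analysis, and per-factor ANOVAs) using statsmodels and scikit-learn. The factor scores, group sizes, and variable names are placeholders rather than the study’s data; the sketch only shows how the three steps fit together.

```python
# Sketch of the analytic sequence described above: MANOVA on the three factor
# scores, a follow-up discriminant analysis, and per-factor one-way ANOVAs.
# All data below are simulated placeholders, not the study's data.
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n_pep, n_non = 79, 54
df = pd.DataFrame({
    "school": ["PEP"] * n_pep + ["nonPEP"] * n_non,
    "teacher_use": np.concatenate([rng.normal(3.5, 1, n_pep), rng.normal(4.2, 1, n_non)]),
    "collaboration": np.concatenate([rng.normal(2.8, 1, n_pep), rng.normal(4.6, 1, n_non)]),
    "communication": np.concatenate([rng.normal(3.9, 1, n_pep), rng.normal(4.3, 1, n_non)]),
})

# 1. MANOVA: do the groups differ on the three correlated factors considered jointly?
manova = MANOVA.from_formula("teacher_use + collaboration + communication ~ school", data=df)
print(manova.mv_test())  # reports Wilks' lambda among other multivariate statistics

# 2. Discriminant analysis: which factor contributes most to separating the groups?
factors = ["teacher_use", "collaboration", "communication"]
lda = LinearDiscriminantAnalysis().fit(df[factors], df["school"])
print("discriminant coefficients (unstandardised):", lda.coef_.round(2))

# 3. Follow-up one-way ANOVAs on each factor.
for factor in factors:
    groups = [g[factor].to_numpy() for _, g in df.groupby("school")]
    f_stat, p = f_oneway(*groups)
    print(f"{factor}: F(1, {len(df) - 2}) = {f_stat:.2f}, p = {p:.4f}")
```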

Results

Quantitative

The first step in this study was to determine whether there was a significant difference in the ways teachers in low-performing schools and higher performing schools used the feedback from the National Tests to inform pedagogical and curricular decisions. The results of the MANOVA indicated that teachers in PEP schools differed significantly from teachers in non-PEP schools in their use of the feedback report, Wilks’s Λ = .571, F(3, 129) = 32.37, p < .001, partial η² = .43. Maximising the difference between schools was Factor 2, within-department collaboration (correlation with the discriminant function = .993; standardised discriminant function coefficient = .981). An examination of the ANOVAs indicated that teachers in non-PEP schools were more inclined to discuss the feedback report within their department, F(1, 131) = 97.31, p < .001, η² = .43; use the feedback report to make informed decisions about their teaching, F(1, 131) = 20.47, p < .001, η² = .14; and share the findings of the report with parents and students, F(1, 131) = 6.22, p = .015, η² = .05.
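
As a quick arithmetic check of the effect sizes above: for a one-way ANOVA with these degrees of freedom, η² = F·df_effect / (F·df_effect + df_error). Applying this to the three reported F values gives .43, .14, and .05, respectively.

```python
# Eta-squared recovered from F and its degrees of freedom for a one-way ANOVA:
# eta^2 = (F * df_effect) / (F * df_effect + df_error).
def eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    return (f_value * df_effect) / (f_value * df_effect + df_error)


for label, f_value in (("department collaboration", 97.31),
                       ("teacher use", 20.47),
                       ("communication with parents", 6.22)):
    print(f"{label}: eta^2 = {eta_squared(f_value, 1, 131):.2f}")
# Output: 0.43, 0.14, 0.05
```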

Of further interest were differences among teachers in their use of the report. A comparison of teachers by sex on the three factors combined indicated that male and female teachers did not differ in their use of the feedback report (p > .10). Similar comparisons between teachers with a teacher’s diploma and those with bachelor’s or master’s degrees, and between teachers with 1–5 years and more than 5 years of teaching experience, indicated that while teachers with degrees and those with more than 5 years of teaching displayed higher mean ratings on each factor, these differences were not significant (p > .10). The results for Item 8 indicated that, overall, teachers agreed that the feedback reports were not received in a manner timely enough to allow them to make effective use of the data. On this item there was neither a statistically significant difference between male and female teachers nor a difference based on years of teaching or level of qualification (p > .15).

Qualitative

The analysis of the qualitative data suggested that the principals and teachers were not comfortable with interpreting the reports. Twelve of the nineteen teachers stated that they did not fully understand what the numbers stood for and felt lost in the statistics, while eight stated that they were never trained in the use and interpretation of the test results. This was more typical of teachers in the PEP schools, who also stated that, beyond a visit from the school supervisor to inform them of their poor performance, they received little or no assistance from the Ministry of Education. There was a high level of frustration reflected in the comments of the teachers. For example, one teacher from a PEP school lamented, “By the time you get the report, the children are ready to move up to the next class. Also, the principal gets the report and gives it to you. I try but I don’t always understand everything and there is no one to call. The supervisor can’t even help.”

Another PEP school teacher shared, “[T]he feedback is quite limited: 1, in terms of its timeliness; 2, we get figures per child and we are not sure what that figure means. So a child getting 76, I am not sure whether it is 76 out of a 100. . . . I am not too clear on what that means. Some parts of the statistics if you are not into math could be very difficult to understand and to be able to interpret.”

A similar sentiment was expressed by this senior teacher from another PEP school: “It is taking time away from what schools, teachers, and principals should be doing. About the results, everyone is saying it takes so much time to analyse the results. That time should be spent planning how to fix this problem . . . and as someone said, it is so tiring that it affects the effort to fix the problem. So I think there is a clear indication to have these results put in a manner that all I have to do is read and I get the information that I need.”

The previous statements were similar to those from teachers in the non-PEP schools. Threading through these comments was frustration with interpreting the report, due either to an inability to interpret the numerical data or to the amount of time it took to understand the data.

The teachers in the non-PEP schools also expressed frustration with the timeliness of the feedback report and admitted to their discomfort with some of the “more detailed statistical information.” Despite these challenges, they were prepared to discuss the report “to see where they were going wrong, and what they had to do differently.” For example, one teacher said, “I think the feedback is very useful because it comes in detail but the shortcoming is that we eventually get it so late in the school year that it impacts on the ability to plan strategies and programmes to address the weaknesses and shortcomings and identify the areas that need addressing.”

Another non-PEP school teacher found the report useful: “When we analysed ours, we saw that our problem was within our junior school. That is where our weaknesses lie so we realised that we had to do something to assist the junior school teachers to improve and I am speaking particularly about math and language; there was where our weaknesses were and certain particular strands of the math areas. We were able to pick up from the national tests where the weaknesses of the children lie, what reflected on the teacher’s inability to teach certain things, so we picked that up in the national tests.”

In agreeing with a teacher in a PEP school, one non-PEP teacher argued, “That is true, the information is given. However just as Ms. X and they were saying, it has to be user friendly for everybody. We can interpret it because we have people on staff who find it easy and can work it; if you cannot do it, it will be frustrating. We meet together, we stay back in the afternoon and then we interpret. It is tedious and that should not be because it is a deterrent to those people who want to do it but do not have the resources or skills to do it.”

The teachers in the non-PEP schools concurred that the feedback report could be more user-friendly. In contrast to those in the PEP schools, they tended to meet as a department or team to identify and address the weaknesses highlighted in the report. Yet an important consideration is the significantly greater number of teachers with degrees in the non-PEP schools who would have been exposed to, at least, introductory courses in statistics.

The interviews with the principals supported the findings of the focus groups. The principals of the PEP schools all lamented the lack of resources and support to address the weaknesses identified in the feedback report. For example, one PEP school principal explained, “My school was labelled a PEP school. Last year they [Ministry of Education] called me out from my vacation to attend meetings and we had to identify all our strategies, and I was promised resources, and promised a reading facilitator, and promised a special education teacher. Another national test has come along. I’ve received none . . . nothing.”

However, the data did highlight an interesting contrast between the principals and teachers of the PEP schools and those of the non-PEP schools. In the low-performing schools, principals perceived their teachers to be making greater use of the feedback report in curricular decisions than either the quantitative or the qualitative data supported. There was much more agreement between principals and teachers in the higher performing schools. Yet principals in both PEP and non-PEP schools recognised the importance of the feedback report in identifying gaps between what students know and what is expected of them based on the national standards.

The following statement is representative of what the principals said: “Apart from the whole school approach to rectifying the problem, we did conference one and one with the teachers. ‘Miss, well your class results showed the children were weak in mathematics, geometry, statistics, do you have any idea why this happened?’ And most of the times they would be honest enough to let me know ‘Miss, I did not reach so far in the syllabus.’ And then I will offer suggestions and ask of them suggestions to rectify the problem in the following year. Coming to the end of the term some of them are just rushing through some of the topics to finish the syllabus for the year and that shows up in the national test results.”

Principals from both PEP and non-PEP schools made similar claims about how they used the National Tests Report in their schools. They used the report to identify subject areas, and strands within subject areas, in which the school or a class did not perform at the expected standard. Curricular and pedagogical adjustments addressed whole-class concerns; no mention was made of using the report to meet individual students’ needs. Yet, based on the data from the teachers, the difference between the PEP and non-PEP schools appears to lie in one group’s greater ability to interpret the report, which certainly affects the extent to which they use it.

Discussion

The government of Trinidad and Tobago has initiated a number of reforms over the past 15 years to address issues surrounding student achievement in schools. Critical to this effort is CAP, to which the National Tests are integral; the tests have the greatest impact on policy decisions with regard to school performance and service delivery to schools. Because CAP is intended to serve a formative purpose, there is a built-in assumption that schools would use the National Tests Report to positively impact student learning.

Hattie and Timperley (2007) suggested that feedback provides answers to three major questions: How am I going? Where am I going? Where to next? With regard to feedback from large-scale assessments, these questions can be modified in the following way: How did I go and how did my students go? Where did I take my students? Where am I taking my students next? To answer these questions, teachers have to understand and know how to use the feedback data. Utilising the data to make curricular and pedagogical decisions is what makes the assessment formative. But if teachers and principals find it difficult to interpret the reports, or choose not to use them in curricular decisions, then the formative aspect of the assessment is lost and it does not serve its intended purpose. The researchers who investigated the impact of interim assessments in the Philadelphia elementary schools (Halverson, 2010; Nabors Oláh et al., 2010) made a similar observation: missing from the design of the process was a system that supported schools’ effective use of the feedback data.

The studies conducted by Bell and Cowie (2001), Hattie and Timperley (2007), and Gipps et al. (2000) all emphasised the need for professional development in relation to teachers’ effective use of feedback. In this study, the teachers and principals, especially those in the lower performing schools, admitted that they were not comfortable interpreting the data. This is a legitimate concern. For example, looking at Table 2, how would a teacher interpret the scores for student B, and would that interpretation be consistent across schools or even across teachers? Gipps et al. (1994) questioned the assumption of universality, that the “test score has the same meaning for all individuals” (p. 6). Therefore, an NCE score of 18 on the comprehension strand of the language arts test could indicate that the student has difficulty comprehending, or difficulty decoding words. The teacher’s response would depend on the skill deficit to be remediated. But it is necessary to recall Looney’s (2011) caution about the limits of data from summative assessments in providing the detailed information needed to diagnose the specific sources of student difficulty. While the feedback data from the National Tests identify the strands on which students underperformed, teachers still need to conduct classroom formative assessments that allow them to identify and address the specific sources of student difficulty.

All participants spoke of the time it took to receive the feedback. This is not surprising. With reference to timeliness, Looney (2011) cited Allal and Schwartz (1996), who refer to “formative assessments that directly benefit students who were assessed as ‘Level 1’ and formative assessments where data gathered are used to benefit future instructional activities or new groups of students as ‘Level 2’” (p. 17). Clearly, based on the time taken for the schools to receive the report, the data represent Level 2 assessments. However, it might be necessary for the Trinidad and Tobago Ministry of Education to look at different models of large-scale formative assessment as it attempts to address the timeliness of the feedback to the schools (see Black & Wiliam, 2005; Looney, 2011).

Despite the challenges of timeliness and of interpreting the data, the evidence suggests that some schools made more effective use of the report than others. However, it is not possible to say that discomfort with interpreting the data caused the poor performance of the students in the PEP schools. Based on the results of the quantitative analyses, it can be said only that there appears to be an association between schools’ greater use of the feedback data and student performance on the National Tests.

The qualitative data highlighted the issue of resources and professional help for the schools. This is integral to the process if schools are to address the learning gaps. One principal of a higher performing school spoke of how she used the feedback to target professional development for her school. Another in a PEP school used the report to generate both short- and long-term remedial plans. However, the data indicated that there were more teachers with degrees in the non-PEP schools than the PEP schools, and teachers in the focus groups admitted that they had people on their faculty who were able to interpret the report. This is a small sample and the difference between schools might be coincidental. This difference brings to the fore the issue of equal access to resources, and has implications for existing teacher preparation programmes and in-service professional development programmes for teachers.

Many other troubling issues emerged from the data. Among these was the mismatch between the length of the class subject period and the length of the exam, especially for the younger students. There was also the issue of the conclusions drawn by the Trinidad and Tobago Ministry of Education and the unintentional classification, or branding, of schools as good schools and bad schools. As stated by many of the teachers, this has created an atmosphere of competition among schools, has created a market for the publishing and purchase of practice test booklets, and has led some schools to adopt strategies that negate the diagnostic purpose of the assessments. These are genuine observations and subjects for further research.

We are not aware of any study that has examined how Trinidad and Tobago schools use the feedback from the National Tests in making curricular decisions. This is an area in need of further study. Based on our findings, there is a need for training for principals and teachers in the purpose, use, and interpretation of the data included in the National Tests Report. The most significant difference between higher performing and lower performing schools was in teachers collaborating and working together to develop strategies to address the findings in the report. Obviously, some principals are prepared to take the lead in scheduling time to allow for this collaboration. It may be necessary, as part of a training programme, to share these best practices with schools that are not making use of their most valuable resource: the teachers. Yet it comes down to the Trinidad and Tobago Ministry of Education having a system in place to support the schools through training in the interpretation and use of the data in the feedback report. Without this, teachers will continue teaching as they see fit, but without making the kind of impact that the Trinidad and Tobago Ministry of Education expected with the introduction of national testing. It is as Firestone, Winter, and Fitz (2000) stated: Assessment must be accompanied by adequate professional development to help teachers change practice.

This is an Accepted Manuscript of an article published by Taylor & Francis in the Journal of Educational Research on 25 Nov 2013, available online: http://www.tandfonline.com/doi/abs/10.1080/00220671.2013.788993.

References

Allaham, A. (2011, July 25). Gopeesingh: Failure rate alarming. The Trinidad Express. Retrieved from http://www.trinidadexpress.com/internal?st=print&id=126099773&path=/news

Bell, B., & Cowie, B. (2001). The characteristics of formative assessment in science education. Science Education, 85, 536–553. doi:10.1002/sce.1022

Black, P., & Wiliam, D. (1998a). Assessment and classroom learning. Assessment in Education, 5, 7–73.

Black, P., & Wiliam, D. (1998b). Inside the black box. Phi Delta Kappan, 80, 139–148.

Black, P., & Wiliam, D. (2005). Lessons from around the world: How policies, politics and cultures constrain and afford assessment practices. The Curriculum Journal, 16, 249–261.

Blanc, S., Christman, J., Liu, R., Mitchell, C., Travers, E., & Bulkley, K. E. (2010). Learning to learn from data: Benchmarks and instructional communities. Peabody Journal of Education, 85, 205–225.

Brookhart, S. M. (2001). Successful students’ formative and summative uses of assessment information. Assessment in Education, 8, 153–169.

Caribbean Education Task Force. (2001). Education strategy: Report of the Caribbean Education Task Force.

Cassen, R., & Kingdon, G. (2007). Tackling low educational achievement. York, UK: Joseph Rowntree Foundation. Retrieved from http://www.jrf.org.uk/sites/files/jrf/2063-education-schools-achievement.pdf

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Mahwah, NJ: Erlbaum.

Cowie, B. (2005). Pupil commentary on assessment for learning. The Curriculum Journal, 16, 137–151.

Cowie, B., & Bell, B. (1999). A model of formative assessment in science education. Assessment in Education, 6, 101–116.

De Lisle, J. (2009, March 3). Schooling and poverty. The Trinidad Express. Retrieved from http://www.trinidadexpress.com/commentaries/Schooling and poverty-115449789.html

Ehrenberg, R. G., Brewer, D. J., Gamoran, A., & Willms, J. D. (2001). Class size and student achievement. Psychological Science in the Public Interest, 2, 1–30.

Firestone, W. A., Winter, J., & Fitz, J. (2000). Different assessments, common practice? Mathematics testing and teaching in the USA and England and Wales. Assessment in Education, 7, 13–37.

Gipps, C., McCallum, B., & Hargreaves, E. (2000). What makes a good primary school teacher? Expert classroom strategies. New York, NY: Routledge Falmer.

Halverson, R. (2010). Mapping the terrain of interim assessments: School formative feedback systems. Peabody Journal of Education, 85, 130–146.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81–112.

Jules, D., Miller, E., & Armstrong, A. L. (2000). Education strategy: Report of the Caribbean Education Task Force. Washington, DC: The World Bank.

Kingston, N., & Nash, B. (2011). Formative assessment: A meta-analysis and a call for research. Educational Measurement: Issues and Practice, 30, 28–37.

Looney, J. W. (2011). Integrating formative and summative assessment: Progress toward a seamless system? (OECD Education Working Papers, No. 58). Paris, France: Organisation for Economic Co-operation and Development. doi:10.1787/5kghx3kbl734-en

Moss, C. M., & Brookhart, S. M. (2009). Advancing formative assessment in every classroom: A guide for instructional leaders. Alexandria, VA: ASCD.

Nabors Oláh, L., Lawrence, N. R., & Riggan, M. (2010). Learning to learn from benchmark assessment data: How teachers analyse results. Peabody Journal of Education, 85, 226–245.

Oplatka, I. (2004). The principalship in developing countries: Context, characteristics and reality. Comparative Education, 40, 428–448.

Parr, J. M., Glasswell, K., & Aikman, M. (2007). Supporting teacher learning and informed practice in writing through assessment tools for teaching and learning. Asia-Pacific Journal of Teacher Education, 35, 69–87.

Parr, J. M., & Timperley, H. S. (2010). Feedback to writing, assessment for teaching and learning and student progress. Assessing Writing, 15, 68–85.

Ramdass, R. (2010, June 9). Low pass rate in government schools worries minister. The Trinidad Express. Retrieved from http://www.trinidadexpress.com/news/97562779.html

Sadler, D. R. (1989). Formative assessment in the design of instructional systems. Instructional Science, 18, 119–144.

Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78, 153–189.

Taras, M. (2009). Summative assessment: The missing link for formative assessment. Journal of Further and Higher Education, 33, 57–69.

Trinidad and Tobago Ministry of Education. (1998). Continuous assessment programme (CAP): Operational manual for the pilot. Port-of-Spain, Trinidad: Ministry of Education.

Trinidad and Tobago Ministry of Education. (2004). National test report 2004. Retrieved from http://www.MOE.gov.tt/publications ntr.html

Trinidad and Tobago Ministry of Education. (2008). National tests report. Port-of-Spain, Trinidad: Ministry of Education.

World Factbook. (2012). Trinidad and Tobago. Retrieved from https://www.cia.gov/library/publications/the-world-factbook/geos/td.html

 

About the Authors

Launcelot I. Brown is the Barbara A. Sizemore Distinguished Professor, Associate Professor, and Chair of the Department of Educational Foundations and Leadership at Duquesne University. His research interests are in the areas of school leadership and student achievement, and teachers’ use of assessment data from national assessments in the Caribbean.

Laurette Bristol is Assistant Director for the Research Institute for Professional Practice, Learning and Education at Charles Sturt University, Australia. Her research explores the ways in which the historical conditions of colonialism shape the contemporary pedagogical practices of primary school teachers and principals in the Caribbean.

Joyanne De Four-Babb is a former Assistant Professor and Coordinator of the Practicum Programme at the University of Trinidad and Tobago and Adjunct Assistant Professor in the Graduate School of Education at American University in Cairo, Egypt. She uses narrative inquiry methodology to analyse the experiences of beginning and practicing teachers.

Dennis A. Conrad is Professor and Chair of the Department of Inclusive and Special Education at the State University of New York at Potsdam. His research interests include learner diversity and difficulties, inclusive leadership, self-study in teaching, and Caribbean issues in education.
