Monday, October 1, 2018

I will be resuming this blog very soon...

Thursday, August 20, 2015

Early Grade Learning Assessment in the North Pacific

Getting the Basics Right: Quality Primary Education in the North Pacific




The Quality Primary Education in the North Pacific (QPENP) pilot project was designed to develop and trial new inputs in learning, assessment, teacher development, and data management to improve the quality of primary education in the northern Pacific Micronesian nations of the Republic of the Marshall Islands (RMI) and the Federated States of Micronesia (FSM), and to evaluate the student assessment system in Palau. In the RMI, the project operated in five selected schools on Majuro; in the FSM, it worked with all six schools of Kosrae State and two selected schools of Pohnpei State. The QPENP was funded by the Japan Fund for Poverty Reduction, managed by the Asian Development Bank, and implemented by Development Strategists International Consulting (DSIC).


What is the Early Grade Learning Assessment (EGLA) and how does it work?

The EGLA is a formative assessment tool that provides a detailed picture of student performance levels in reading and mathematics. The EGLA can be used for multiple purposes: tailoring teacher professional development, identifying appropriate learning resources, and building accountability. The EGLA was developed for the FSM and the RMI collaboratively with the education authorities of each project site, through intensive capacity building, piloting, analysis, and extensive trials.



Unlike traditional standardized assessments, which provide only overall information, the EGLA uses a representative sample of students to provide detailed information on the specific learning components that must be mastered in order to develop true competency in numeracy and literacy. The EGLA provides a clear picture of areas of strength and areas of challenge, allowing education authorities to structure targeted professional development for teachers with classroom-based resources that address students' specific weaknesses. The results of the EGLA inform ministries and departments of education about overall system performance, allow them to establish priorities for professional development programs, and enable them to monitor outcomes at the individual school and classroom levels.

The EGLA involves teams of trained assessors visiting classrooms and conducting one-on-one interviews with a preselected random sample of students from the two targeted grades, Grade 3 and Grade 5. Each participating student undergoes four separate assessments: literacy in the first language (L1), numeracy in L1, literacy in English, and numeracy in English. The interviewer, in a welcoming manner, guides the student through a set of specific tasks, asking the student to show what he or she can do in numeracy, reading, and writing. Linkages between numeracy and literacy were also incorporated into the assessments, so that students applied knowledge and strategies to word problems or number stories in familiar contexts and could demonstrate higher-order thinking skills such as applying a concept or drawing a conclusion.


A short initial interview about the student's world outside the classroom is also conducted. This reveals the child's home context and aspects of the child's own perceptions about schooling. It includes questions such as: What is the main home language, and does the child have access to reading materials in the first language or in English? Does the child write stories in the first language or English at home? Is there a family member who provides assistance with reading or math homework? Is there a TV in the home? Are there devices like calculators in the home?

Sunday, February 15, 2015

South-East Asia Primary Learning Metrics Meeting of Domain Review Panels

I just got back from a meeting of experts from across Southeast Asia and UN agencies to review and confirm the South-East Asia Primary Learning Metrics (SEA-PLM).

Saturday, November 10, 2012

CAPSQ: A Tool to Measure Classroom Assessment Practices


The CAPSQ as a Measure of Classroom Assessment Practices


The Classroom Assessment Practices Survey Questionnaire (CAPSQ; Gonzales, 2011) consisted of 60 items that were sorted according to thematic similarities using a simple Q-sort method (Stephenson, 1953). Item content was derived from interview data from teachers and from the works of Birenbaum (1997), Bliem and Davinroy (1997), Brown (2002, 2004), Brown and Lake (2006), Cheng, Rogers, and Hu (2004), Hill (2002), Mbelani (2008), Mertler (1998, 2003, 2009), Sanchez and Brisk (2004), and Zhang and Burry-Stock (2003). Emerging and recognized scholarship on assessment (Airasian, 1997; Angelo & Cross, 1993; Black & Wiliam, 1998; Earl & Katz, 2006; Stiggins, 1997, 2008) was also consulted.

Items used a 5-point Likert-type response scale describing the frequency (1 = never to 5 = always) of doing an assessment activity. Three experts on scale development and classroom assessment reviewed the items for format as well as content clarity and relevance. Items were subsequently categorized according to the four purposes of assessment based on a framework currently used by the Western and Northern Canadian Protocol and described by Earl (2003) and Earl and Katz (2004). These distinct but interrelated purposes are: 1) assessment of learning, 2) assessment as learning, 3) assessment for learning, and 4) assessment to inform.

Psychometric Evaluation of the CAPSQ

Initial analyses were conducted to determine whether assumptions of univariate normality were met. Item skew (-.35 to -1.33) and kurtosis (.05 to 1.63) values were within the acceptable ranges of |3| and |10|, respectively (Kline, 2010). The inter-item correlation matrix indicated that the coefficients were generally small (e.g., r = .14 for items 8 and 19) to moderate (e.g., r = .34 for items 4 and 14), suggesting that the correlation matrix was appropriate for factor analysis. Item means (and standard deviations) ranged from 4.07 (.89) for item 7 to 4.46 (.75) for item 14.
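For readers who want to run a similar screen on their own data, a minimal sketch in Python follows. The file name capsq_responses.csv and the data layout (rows = respondents, columns = Likert items) are illustrative assumptions, not artifacts of the original study.

```python
import pandas as pd
from scipy.stats import skew, kurtosis

# Hypothetical item-response file: rows = teachers, columns = Likert items (1-5).
items = pd.read_csv("capsq_responses.csv")

# Univariate normality screen: Kline (2010) treats |skew| < 3 and
# |kurtosis| < 10 as acceptable for factor-analytic work.
print(items.apply(skew).round(2))
print(items.apply(kurtosis).round(2))

# Item descriptives and the inter-item correlation matrix.
print(items.agg(["mean", "std"]).T.round(2))
print(items.corr().round(2))
```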

Preliminary checks for the exploratory factor analysis (EFA), including Bartlett's test of sphericity, χ2(210, N = 364) = 5200.62, p < .001, and the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy (.94), suggested that the data were suitable for factor analysis (Hutcheson & Sofroniou, 1999). The initial principal axis factoring (PAF) yielded a five-factor solution that accounted for approximately 72% of the total variance. However, the promax rotation with Kaiser normalization indicated that only two items (16 and 24) loaded on factor 5, and their content could not be easily interpreted. Further, items 7, 10, and 18 had similar loadings (i.e., ≥ .30) on two factors. These five items were subsequently eliminated and the EFA was conducted again with the remaining 18 items.
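A sketch of how such an EFA might be run with the Python factor_analyzer package is shown below. It mirrors the steps described above (suitability checks, an initial five-factor PAF with promax rotation, and a screen for cross-loading items), but it is illustrative only; the study does not state which software was actually used.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

items = pd.read_csv("capsq_responses.csv")  # hypothetical file, as above

# Suitability checks: Bartlett's test of sphericity and the KMO measure.
chi_sq, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_total = calculate_kmo(items)
print(f"Bartlett chi2 = {chi_sq:.2f}, p = {p_value:.4f}, KMO = {kmo_total:.2f}")

# Initial principal axis factoring (PAF) with promax (oblique) rotation.
fa = FactorAnalyzer(n_factors=5, method="principal", rotation="promax")
fa.fit(items)

# Flag items that load >= .30 on two or more factors (candidates for removal).
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings[(loadings.abs() >= 0.30).sum(axis=1) > 1].round(2))
```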

Initial communalities of the final 18 items ranged from .47 to .72, with a mean coefficient of .61. Results of the PAF for this solution suggested that a four-factor solution would best describe the structure of the CAPSQ. The total variance explained by these four factors was approximately 62%: factor 1 accounted for the majority of the variance, 50.27%; factor 2 accounted for 8.02%; factor 3 for 5.98%; and factor 4 for 5.45%. Each of these four factors also contributed at least 3% of the sum of squared loadings. Items had pure loadings of at least .40 on only one factor.
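Continuing the hypothetical sketch, the retained items could be refit with four factors and summarized as follows. The dropped column names are placeholders; the actual names depend on how the data file is coded.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("capsq_responses.csv")  # hypothetical file, as above

# Hypothetical column names for the five eliminated items.
dropped = ["item16", "item24", "item7", "item10", "item18"]
retained = items.drop(columns=dropped)

# Refit the reduced item set with the four retained factors.
fa4 = FactorAnalyzer(n_factors=4, method="principal", rotation="promax")
fa4.fit(retained)

# Communalities per item, then variance explained per factor:
# get_factor_variance() returns SS loadings, proportion, and cumulative proportion.
print(pd.Series(fa4.get_communalities(), index=retained.columns).round(2))
print(pd.DataFrame(fa4.get_factor_variance(),
                   index=["SS loadings", "proportion", "cumulative"]).round(3))
```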
Internal consistency of the factor scores and total score was calculated using Cronbach's alpha (α). The four factors demonstrated high internal consistency, with α = .92 for factor 1, α = .88 for factor 2, α = .83 for factor 3, and α = .85 for factor 4. Internal consistency for the total score was also high, α = .95. Inter-factor correlations ranged from .57 (moderate) to .72 (high). Correlations between the CAPSQ factors and the total score were all very high (r = .82 to .92), indicating that the total score may be the most accurate and valid estimate of classroom assessment practices.
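Cronbach's alpha itself is straightforward to compute from raw responses. A minimal, self-contained sketch (not the authors' code) follows, using the standard formula α = k/(k-1) × (1 - Σ item variances / total-score variance):

```python
import numpy as np

def cronbach_alpha(responses) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]                         # number of items
    item_vars = responses.var(axis=0, ddof=1)      # per-item variances
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Usage with hypothetical factor-1 columns from the DataFrame above:
# alpha_f1 = cronbach_alpha(items[["item1", "item2", "item3"]].to_numpy())
```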
The factor structure of the CAPSQ conformed to the general purposes of classroom assessment that served as the framework in the conceptualization phase of the scale development. All factor and total scores demonstrated high internal consistency. However, there was strong evidence of factor-total score overlap, suggesting that the total score is the most valid index when using the CAPSQ to describe classroom assessment practices. Although this is psychometrically true, item and factor information may be beneficial when determining teachers' strengths and weaknesses in fulfilling their roles related to classroom assessment. For example, school administrators and teachers themselves can examine the pattern of responses at the factor and item levels for professional development purposes. The CAPSQ total score may be the more appropriate information to use for research and for longitudinal growth modeling in developmental program evaluation. Descriptions of the empirically derived four factors of the CAPSQ are provided below to facilitate understanding of classroom assessment practices.

Factor 1: Assessment as learning. This factor refers to teachers' practices of giving assessments aimed at developing and supporting students' knowledge of their own thought processes (i.e., metacognition). Assessment as learning is translated into practice when teachers assess students by providing them with opportunities to show what they have learned in class (Murray, 2006), by creating an environment conducive for students to complete assigned tasks, and by helping students develop clear criteria of good learning practices (Hill, 2002). This factor also implies that teachers assess students to guide them in acquiring personal feedback and in monitoring their own learning process (Murray, 2006; Sanchez & Brisk, 2004). Assessment as learning requires more task-based activities than traditional paper-and-pencil tests. This practice provides examples of good self-assessment for students to examine their own learning process (Kubiszyn & Borich, 2007; Mory, 1992).

Factor 2: Assessment of learning. This factor refers to teachers' assessment practices for determining the current status of student achievement against learning outcomes and, in some cases, how students' achievement compares with that of their peers (Earl, 2005; Gonzales, 1999; Harlen, 2007). The main focus of assessment of learning is how teachers make use of assessment results to guide instructional and educational decisions (Bond, 1995; Musial, Nieminen, Thomas, & Burke, 2009). Hence, this factor describes practices associated with summative assessment (Glickman, Gordon, & Ross-Gordon, 2009; Harlen, 2007; Struyf, Vandenberghe, & Lens, 2001). In summative assessment, teachers aim to improve instructional programs based on how students have learned, as reflected by various assessment measures given at the end of the instructional program (Borko et al., 1997; Harlen, 2008; Mbelani, 2008). Teachers conduct summative assessment to make final decisions about the achievement of students at the end of the lesson or subject (Stiggins, Arter, Chappuis, & Chappuis, 2004).

Factor 3: Assessment to inform. This factor refers to the communicative function of assessment: reporting and utilizing results for various stakeholders (Jones & Tanner, 2008). Teachers perform assessment to provide information to students, their parents, other teachers, schools, and future employers regarding students' performance in class (Gullickson, 1984; Sparks, 2005). Assessment to inform is related to assessment of learning, since the intention is to provide information to parents about the performance of their children in school at the end of an instructional program (Harlen, 2008). Teachers use assessment to rank students and to provide a more precise basis for representing the achievement of students in class through grades and ratings (Marzano, 2000; Murray, 2006; Sparks, 2005).

Factor 4: Assessment for learning. This factor refers to teachers' practices of conducting assessment to determine progress in learning by giving tests and other tools that measure learning during instruction (Biggs, 1995; Dochy & McDowell, 1997; Murray, 2006; Sadler, 1989; Sparks, 2005). Assessment for learning, or formative assessment, requires the use of learning tests, practice tests, quizzes, unit tests, and the like (Boston, 2002; MacLellan, 2001; Stiggins et al., 2004). Teachers prefer these formative assessment tools to cover a predetermined segment of instruction that focuses on a limited sample of learning outcomes. Assessment for learning requires careful planning so that teachers can use the assessment information to determine what students know and to gain insights into how, when, and whether students apply what they know (Earl & Katz, 2006).

Interested researchers and users may contact the author (email: drrichard.gonzales@gmail.com).

You can also find a copy of this study at http://ust-ph.academia.edu/RichardDLCGonzales

Classroom Assessment Preferences of Japanese Language Teachers in the Philippines and English Language Teachers in Japan

Very recently, I completed a study with Dr. Jonathan Aliponga of Kansai University of International Studies, Hyogo, Japan, entitled Classroom Assessment Preferences of Japanese Language Teachers in the Philippines and English Language Teachers in Japan. The study was recently published in MEXTESOL Journal, Volume 36, Number 1, 2012.


The following is the abstract.


Classroom Assessment Preferences of Japanese Language Teachers in the Philippines and English Language Teachers in Japan

Richard DLC. Gonzales
University of Santo Tomas Graduate School, Manila, Philippines

Jonathan Aliponga
Kansai University of International Studies, Hyogo, Japan


Student assessment provides teachers with information that is important for decision-making in the classroom. Assessment information helps teachers to understand their students' performance better as well as to improve the suitability and effectiveness of classroom instruction. The purpose of the study was to compare the classroom assessment preferences of Japanese language teachers in the Philippines (n = 61) and English language teachers in Japan (n = 55) on the purposes of assessment as measured by the Classroom Assessment Preferences Survey Questionnaire for Language Teachers (CAPSQ-LT). Results revealed that, overall, language teachers from both countries most preferred assessment practices focused on assessment as learning and least preferred practices that refer to the communicative function of assessment (assessing to inform). Comparatively, Japanese language teachers in the Philippines preferred assessment for learning, that is, they assessed to improve the learning process and the effectiveness of instruction, while the English language teachers in Japan were more concerned with the assessment of learning and the communicative and administrative functions of assessment. The two groups did not significantly differ in their preference for assessment of learning and assessment as learning.


The complete copy of this study can be accessed at http://www.mextesol.net/journal/index.php?page=journal&id_issue=0#898322425295588184e8c4b98cd2a02c

Roles of Testing

In today's systems, testing has become a critical policy instrument in almost any environment. Schools administer various kinds of tests to students, industries and companies give tests to applicants, and organizations test their members for various reasons. Whatever the purpose, the main objective of testing is to differentiate and classify individuals into specific roles and functions.