Thursday, August 20, 2015
Early Grade Learning Assessment in the North Pacific
Getting the Basics Right: Quality Primary Education in the North Pacific
The Quality Primary Education in the North Pacific (QPENP) pilot project was designed to develop and trial new inputs in learning, assessment, teacher development, and data management to improve the quality of primary education in the northern Pacific Micronesian nations of the RMI and the FSM, and to evaluate the student assessment system in Palau. The project operated in five selected schools on Majuro in the RMI; in the FSM, it worked with all six schools of Kosrae State and two selected schools of Pohnpei State. The QPENP was funded by the Japan Fund for Poverty Reduction, managed by the Asian Development Bank, and implemented by Development Strategists International Consulting (DSIC).
What is the Early Grade Learning Assessment (EGLA) and how does it work?
The EGLA is a formative assessment tool that provides a detailed picture of student performance levels in reading and mathematics. EGLA can be used for multiple purposes—tailored teacher professional development, identifying appropriate learning resources, and building accountability. The EGLA was developed for the FSM and the RMI in a collaborative manner alongside education authorities of each project site, with intensive capacity building, piloting, analysis, and extensive trials.
Unlike traditional standardized assessments, which provide only overall performance information, the EGLA uses a representative sample of students to provide detailed information on the specific learning components that must be mastered in order to develop true competency in numeracy and literacy. The EGLA provides a clear picture of areas of strength and areas of challenge, allowing education authorities to structure targeted professional development for teachers, with classroom-based resources that address students' specific weaknesses. The results of the EGLA inform ministries and departments of education about overall system performance, allow them to establish priorities for professional development programs, and enable them to monitor outcomes at the individual school and classroom level.
The EGLA involves teams of trained assessors going out to classrooms and conducting one-on-one interviews with a preselected random sample of students from the two targeted grades, Grade 3 and Grade 5. Each participating student undergoes four separate assessments: literacy in the first language (L1), numeracy in L1, literacy in English, and numeracy in English. The interviewer, in a welcoming manner, guides the student through a set of specific tasks, asking the student to show what they can do on each task in the areas of numeracy, reading, and writing. Linkages between numeracy and literacy were also incorporated in the assessments, so that students applied knowledge and strategies to word problems or number stories in familiar contexts and could demonstrate higher-order thinking skills such as applying a concept or drawing a conclusion.
A short initial interview about the student's world outside of the classroom is also conducted. This reveals the child's home context and aspects of the child's own perceptions about schooling. It includes questions such as: What is the main home language, and does the child have access to reading materials in their first language or in English? Does the child write stories in their first language or English at home? Is there a family member who provides assistance with reading or math homework? Is there a TV in the home? Are there devices like calculators in the home?
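The sampling step described above, drawing a preselected random sample of students from each targeted grade, can be sketched in a few lines. The rosters, ID format, and per-grade sample size below are hypothetical illustrations, not the EGLA's actual sampling design; a fixed seed makes the selection reproducible so it can be preselected and audited before assessors visit the classrooms.

```python
import random

def draw_sample(rosters, per_grade, seed=2015):
    """Draw a reproducible simple random sample of student IDs per grade.

    `rosters` maps a grade label to its list of student IDs (hypothetical data).
    A fixed seed means the same sample can be regenerated for verification."""
    rng = random.Random(seed)
    return {grade: sorted(rng.sample(ids, min(per_grade, len(ids))))
            for grade, ids in rosters.items()}

# Hypothetical class rosters for the two targeted grades
rosters = {"Grade 3": [f"G3-{i:02d}" for i in range(1, 31)],
           "Grade 5": [f"G5-{i:02d}" for i in range(1, 26)]}
sample = draw_sample(rosters, per_grade=10)
```

In practice an assessment team would draw the sample centrally, then carry the preselected ID list to each school, which is why reproducibility matters more here than statistical sophistication.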
Sunday, February 15, 2015
South-East Asia Primary Learning Metrics Meeting of Domain Review Panels
I just got back from a meeting of experts from all over Southeast Asia and UN agencies to review and confirm the South-East Asia Primary Learning Metrics (SEA-PLM).
Saturday, November 10, 2012
CAPSQ: A Tool to Measure Classroom Assessment Practices
The CAPSQ as a Measure of Classroom Assessment Practices
Items used a 5-point Likert-type response scale describing the frequency (1 = never to 5 = always) of doing an assessment activity.
Three experts on scale development and classroom assessment reviewed the items in terms of format as well as content clarity and relevance. Items were subsequently categorized according to the four purposes of assessment based on a framework currently used by the Western and Northern Canadian Protocol and described by Earl (2003) and Earl and Katz (2004). These distinct but interrelated purposes include: 1) assessment of learning, 2) assessment as learning, 3) assessment for learning, and 4) assessment to inform.
Psychometric Evaluation of the CAPSQ
Preliminary checks for the exploratory factor analysis (EFA), which included Bartlett's test of sphericity, χ2 (210, N = 364) = 5200.62, p < .001, and the size of the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy (.94), suggested that the data were satisfactory for factor analysis (Hutcheson & Sofroniou, 1999).
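Both adequacy checks reported above follow standard formulas and are easy to reproduce. The sketch below uses synthetic single-factor data, not the CAPSQ responses, and assumes the conventional definitions: Bartlett's χ² = -(n - 1 - (2p + 5)/6)·ln|R| with df = p(p - 1)/2, and the overall KMO as the ratio of squared correlations to squared correlations plus squared anti-image (partial) correlations.

```python
import numpy as np

def bartlett_sphericity(data):
    """Bartlett's test of sphericity for an (n, p) data matrix.
    Tests H0 that the correlation matrix R is an identity matrix.
    Returns the chi-square statistic and degrees of freedom."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    return chi2, p * (p - 1) // 2

def kmo(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    inv_R = np.linalg.inv(R)
    # Anti-image (partial) correlations from the inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(inv_R), np.diag(inv_R)))
    partial = -inv_R / d
    np.fill_diagonal(R, 0)        # keep off-diagonal terms only
    np.fill_diagonal(partial, 0)
    r2, p2 = (R ** 2).sum(), (partial ** 2).sum()
    return r2 / (r2 + p2)
```

By convention, KMO values in the .90s, like the .94 reported here, indicate the correlation matrix is well suited to factoring.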
Results of the initial principal axis factoring (PAF) yielded a five-factor solution that accounted for approximately 72% of the total variance. However, the promax rotation method with Kaiser normalization indicated that only two items (16 and 24) loaded on factor 5, and their content could not be easily interpreted. Further, items 7, 10, and 18 had similar loadings (i.e., ≥ .30) on two factors. These five items were subsequently eliminated, and the EFA was conducted again with the remaining 18 items.
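The item-elimination rule applied above, dropping items that cross-load saliently (≥ .30) on two factors or lack an interpretable salient loading, can be expressed as a small helper. The loading matrix below is purely illustrative, not the CAPSQ's actual rotated solution.

```python
import numpy as np

def flag_items(loadings, salient=0.30):
    """Return indices of items to drop: those with salient loadings
    (|loading| >= salient) on more than one factor (cross-loading)
    or on no factor at all (uninterpretable)."""
    L = np.abs(np.asarray(loadings, dtype=float))
    n_salient = (L >= salient).sum(axis=1)
    return [i for i, n in enumerate(n_salient) if n != 1]

# Illustrative 4-item x 2-factor loading matrix (hypothetical values)
loadings = [[0.72, 0.10],
            [0.08, 0.81],
            [0.41, 0.52],   # salient on both factors: cross-loading
            [0.15, 0.22]]   # salient on neither factor
print(flag_items(loadings))  # → [2, 3]
```

Applying a rule like this mechanically, then rerunning the EFA on the retained items, mirrors the two-pass procedure described in the text.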
Internal consistency of the factor scores and total score was calculated using Cronbach's alpha (α). The four factors demonstrated high internal consistency, with α = .92 for factor 1, α = .88 for factor 2, α = .83 for factor 3, and α = .85 for factor 4. Internal consistency for the total score was also high, α = .95. Inter-factor correlations ranged from .57 (moderate) to .72 (high). Correlations between CAPSQ factor scores and the total score were all very high (r = .82–.92), indicating that the total score may be the most accurate and valid estimate of classroom assessment practices.
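Cronbach's alpha itself is straightforward to compute from an item-score matrix. This is a minimal sketch of the standard formula, not the study's own code: α = k/(k-1) · (1 - Σ item variances / variance of the total score).

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]                              # number of items
    item_vars = X.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = X.sum(axis=1).var(ddof=1)       # variance of total score
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

When items are perfectly parallel, alpha reaches 1; when items are mutually uncorrelated, it falls toward 0, which is why values of .83 to .95 count as high internal consistency.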
The factor structure of the CAPSQ conformed to the general purposes of classroom assessment that were considered as a framework in the conceptualization phase of the scale development. All factor and total scores demonstrated high internal consistency. However, there was strong evidence of factor–total score overlap, suggesting that the total score is the most valid index when using the CAPSQ to describe classroom assessment practices. Although this is psychometrically true, item and factor information may be beneficial when determining teachers' strengths and weaknesses in discharging their roles related to classroom assessment. For example, school administrators and teachers themselves can examine the pattern of responses at the factor and item levels for professional development purposes. The CAPSQ total score may be the information to use for research and longitudinal growth modeling in developmental program evaluation. Descriptions of the four empirically derived factors of the CAPSQ are important to facilitate understanding of classroom assessment practices.
Factor 1: Assessment as learning. This factor refers to teachers' practices of giving assessments aimed at developing and supporting students' knowledge of their own thought processes (i.e., metacognition). Assessment as learning is translated into practice when teachers assess students by providing them with opportunities to show what they have learned in class (Murray, 2006), by creating an environment that is conducive for students to complete assigned tasks, and by helping students to develop clear criteria of good learning practices (Hill, 2002). This factor also implies that teachers decide to assess students to guide them in acquiring personal feedback and monitoring their learning process (Murray, 2006; Sanchez & Brisk, 2004). Assessment as learning requires more task-based activities than traditional paper-and-pencil tests. This teaching practice provides examples of good self-assessment practices for students to examine their own learning process (Kubiszyn & Borich, 2007; Mory, 1992).
Factor 2: Assessment of learning. This factor refers to teachers' assessment practices for determining the current status of student achievement against learning outcomes and, in some cases, how their achievement compares with that of their peers (Earl, 2005; Gonzales, 1999; Harlen, 2007). The main focus of assessment of learning is how teachers make use of assessment results to guide instructional and educational decisions (Bond, 1995; Musial, Nieminem, Thomas, & Burle, 2009). Hence, this factor describes practices that are associated with summative assessment (Glickman, Gordon, & Ross-Gordon, 2009; Harlen, 2007; Struyf, Vandenberghe, & Lens, 2001). In summative assessment, teachers aim to improve instructional programs based on how students have learned, as reflected by various assessment measures given at the end of the instructional program (Borko et al., 1997; Harlen, 2008; Mbelani, 2008). Teachers conduct summative assessment to make final decisions about the achievement of students at the end of the lesson or subject (Stiggins, Arter, Chappuis, & Chappuis, 2004).
Factor 3: Assessment to inform. This factor refers to the communicative function of assessment, which is reporting and utilizing results for various stakeholders (Jones & Tanner, 2008). Teachers perform assessment to provide information to students, their parents, other teachers, schools, and future employers regarding students' performance in class (Guillickson, 1984; Sparks, 2005). Assessment to inform is related to assessment of learning, since the intention of assessment is to provide information to parents about the performance of their children in school at the end of an instructional program (Harlen, 2008). Teachers use assessment to rank students and to provide a more precise basis for representing the achievement of students in class through grades and ratings (Manzano, 2000; Murray, 2006; Sparks, 2005).
Factor 4: Assessment for learning. This factor refers to teachers' practices of conducting assessment to determine progress in learning by giving tests and other tools to measure learning during instruction (Biggs, 1995; Docky & McDowell, 1997; Murray, 2006; Sadler, 1989; Sparks, 2005). Assessment for learning, or formative assessment, requires the use of learning tests, practice tests, quizzes, unit tests, and the like (Boston, 2002; MacLellan, 2001; Stiggins et al., 2004). Teachers prefer these formative assessment tools to cover some predetermined segment of instruction that focuses on a limited sample of learning outcomes. Assessment for learning requires careful planning so that teachers can use the assessment information to determine what students know and to gain insights into how, when, and whether students apply what they know (Earl & Katz, 2006).
Interested researchers and users may contact the author (Email: drrichard.gonzales@gmail.com)
You can also find a copy of this study at http://ust-ph.academia.edu/RichardDLCGonzales
Classroom Assessment Preferences of Japanese Language Teachers in the Philippines and English Language Teachers in Japan
Very recently, I completed a study with Dr. Jonathan Aliponga of Kansai University of International Studies, Hyogo, Japan, entitled Classroom Assessment Preferences of Japanese Language Teachers in the Philippines and English Language Teachers in Japan. This study has also been recently published in MEXTESOL Journal, Volume 36, Number 1, 2012.
The following is the abstract.
Student assessment provides teachers with information that is important for decision-making in the classroom. Assessment information helps teachers to understand their students’ performance better as well as improve suitability and effectiveness of classroom instruction. The purpose of the study was to compare the classroom assessment preferences of Japanese language teachers in the Philippines (n=61) and English language teachers in Japan (n=55) on the purposes of assessment as measured by the Classroom Assessment Preferences Survey Questionnaire for Language Teachers (CAPSQ-LT). Results revealed that overall, language teachers from both countries most preferred assessment practices that are focused towards assessment as learning and least preferred assessment practices that refer to the communicative function of assessment (assessing to inform). Comparatively, Japanese language teachers in the Philippines preferred assessment for learning, that is, they assessed to improve learning process and effectiveness of instruction, while the English language teachers in Japan are more concerned with the assessment of learning and the communicative and administrative function of assessment. The two groups did not significantly differ in their preference for assessment of learning and assessment as learning.
The complete copy of this study can be accessed at http://www.mextesol.net/journal/index.php?page=journal&id_issue=0#898322425295588184e8c4b98cd2a02c
Classroom Assessment Preferences of Japanese Language Teachers in the Philippines and English Language Teachers in Japan
Richard DLC. Gonzales
University of Santo Tomas Graduate School, Manila, Philippines
Jonathan Aliponga
Kansai University of International Studies, Hyogo, Japan
Roles of Testing
In today's systems, testing has become a critical policy instrument in almost any environment. Schools administer various kinds of tests to students, industries and companies give tests to applicants, and organizations test their members for various reasons. Regardless of the purpose, the main objective of testing is to differentiate and classify individuals into specific roles and functions.
Tuesday, September 20, 2011
International Conference on Educational Measurement and Evaluation
International Conference on Educational Measurement & Evaluation (ICEME2012)
The Philippine Educational Measurement and Evaluation Association (PEMEA) is organizing an International Conference on Educational Measurement and Evaluation on August 9-11, 2012 in Manila with the theme "Educational Assessment in a Multicultural Environment". Prof. Thomas Oakland, a world-renowned educational psychologist and presently Visiting Professor at Macau University, will be the keynote speaker.
Three pre-conference workshops will be offered on August 8, 2012, facilitated by:
Dr. Maria Di Benedetto (City University of New York) on "Microanalytic Technique on Assessing Student Learning"
Dr. Judy Wilkerson (Florida Gulf Coast University) on "Developing Rubrics to Assess Learning"
Dr. Thomas Oakland (University of Florida & Macau University) on "Test Development and Adaptation"
The Call for Papers is now open.
For more details, please contact: pemea2008@yahoo.com or r-gonzales@consultant.com
Thursday, May 20, 2010
NCEME 2010
The Second National Conference on Educational Measurement and Evaluation (NCEME)
"Educational Assessment in Knowledge Society"
August 5-6, 2010
CSB Hotel International Conference Center
Keynote Speaker
Dr. Anders Jonsson
The Second National Conference on Educational Measurement and Evaluation (NCEME) will be held on August 5-6 at the CSB International Hotel in Manila. The conference is organized by the Philippine Educational Measurement and Evaluation Association (PEMEA) in collaboration with the Center for Learning and Performance Assessment (CLPA) of De La Salle-College of Saint Benilde (DLS-CSB).
August 4, 2010: Pre-conference workshop
August 5, 2010: Conference Day 1
8:00-9:00 Registration
9:00-10:00 Opening Ceremonies
Invocation - CSB Chaplain
National Anthem
Welcome Remarks - Bro. Victor Franco, FSC
Opening Remarks & Presentation of Participants - Neil Parinas (PEMEA Vice
president)
Presidential Address - Richard Gonzales, PhD (PEMEA President)
Keynote Address - Anders Jonsson, PhD
Master of Ceremonies: Lina Miclat, PhD, PEMEA Secretary
10:00-10:30 Morning Snacks (Launching of the EME Review)
10:30-12:00 Panel Discussion: The Role of Educational Assessment in a Knowledge Society
12:00 - 1:00 Lunch Break
1:00-2:15 First Plenary Session: Assessment and Evaluation Practices and Standards in Asia and the Pacific
2:15-3:00 Second Plenary Session: Modern Test Theory and Developments in Psychometrics
3:30-4:00 Afternoon Snacks
4:00-5:00 Third Plenary Session: Educational Evaluation
5:00-6:00 Challenges and Prospects of Educational Measurement and Evaluation in the Philippines (Division Chairs)
6:00-9:00 Recognition of the Editorial Board
Presentation of the First Copies to the Authors
Distribution of the EME Review
Fellowship and Dinner Socials
August 6, 2010: Conference Day 2
8:30-10:30 First Concurrent Session
10:30-12:00 Second Concurrent Session
12:00-1:00 Lunch Break
1:00-2:00 PEMEA Business Meeting
President's report
Treasurer's report
Election of Board of Directors
2:00-4:00 Mini Workshops
4:00-5:00 Closing Ceremonies
Recognition of PEMEA Fellows
Recognition of PEMEA honorary members
Announcement of New Board of Directors
Closing Remarks
Master of Ceremonies: Belen Chu, PEMEA PRO
For more information you may contact the Secretariat Chair Dr. Lina Miclat at (02)5267441 loc 165 at the Center for Learning and Performance Assessment, De La Salle-College of Saint Benilde.
You may submit your abstracts for paper presentations (concurrent session) to the Scientific Committee Chair Dr. Carlo Magno at pemea2008@yahoo.com.
Friday, October 30, 2009
Defining Assessment
One of the many questions that teachers ask is: what is assessment? Of course, if you do a literature review, you will get 1,001 definitions coming from different perspectives. Likewise, you will find a great deal of literature defining the concept and differentiating it from evaluation and even measurement.
Generally, various authors define assessment as the process that involves both gathering information and using that information as a means to improve teaching, student learning, student services, and administrative services. Authors also say that assessment includes making expectations explicit and public, and setting appropriate criteria and high standards.
As a process, it focuses on systematically gathering, analyzing, and interpreting evidence to determine how well performance matches those expectations and standards. Likewise, it enables one to use the resulting information to document, explain, and enhance performance.
Assessment is very helpful in creating a shared academic culture dedicated to continually improving the quality of education at all levels.
Thus, it is obvious that assessment is not a single set of actions, BUT an ongoing, cyclical, and dynamic process permeating any institution of learning.
Thursday, April 30, 2009
Student Assessment and Examinations in Samoa
Starting April 14, 2009, I am posted as National Assessment and Examination Framework Specialist for the Samoa Education Sector Project II (ESPII). I will be serving as a consultant for this ADB-funded project for a total of 12 man-months.
This is my first assignment in the Pacific Region, and it is really challenging! Let me share how educational assessment is practiced here in Samoa and in other Pacific Island countries.
Samoa, formerly known as Western Samoa, became the first South Pacific country to gain political independence, in 1962. It is located in the western part of the Samoan archipelago, while American Samoa is in the eastern part.
In ESPII, one of my major outputs is to help the Ministry of Education, Sports and Culture (MESC) to prepare its National Assessment Policy Framework. This policy is in line with the modernization of Samoa's primary curriculum and assessment.
At present, Samoa's education system is loaded with national examinations. The following are existing examinations in the system:
- Samoa Primary Education Literacy Levels (SPELL) Test for Year 4
- SPELL Test for Year 6
- Year 8 National Examinations
- Samoan School Certificate Examinations for Year 12
- Pacific Senior Secondary Level Exams for Year 13 (administered by the South Pacific Board for Educational Assessment based in Fiji)
All the national examinations are administered by the Assessment Unit of the Curriculum, Materials and Assessment Division of the Ministry of Education, Sports and Culture.
Purposes of the National Examinations in Samoa
1. Samoa Primary Education Literacy Levels Examinations (for Year 4 and 6)
- To monitor literacy levels at the primary level, focusing on English, Gagana Samoa, and numeracy.
2. Year 8 National Examinations
- For certification at the end of the primary level, and for ranking purposes in selection to government senior schools (Avele College, Samoa College, and Vaipouli College)
3. Samoa School Certificate (for Year 12)
- For certification and selection to Year 13 level
4. Pacific Senior Secondary Certificate Exams (for Year 13)
- For certification and selection to university and other tertiary institutions. This test is administered by the South Pacific Board for Educational Assessment based in Fiji.
Sunday, January 25, 2009
School Based Assessment Program in Sri Lanka: An Overview
This year, I am posted as Teaching-Learning Methodologies Specialist cum Team Leader of the Secondary Education Modernization Project II: Package 2 (Curriculum Development) of the Ministry of Education of Sri Lanka from January to July, 2009. This project is funded by Asian Development Bank.
The SEMPII has three packages. Package 1 focuses on Educational Planning and Management, Package 2 on Curriculum Development, and Package 3 on Information and Communication Technology. In Package 2, the major activities include capacity building in teaching-learning methodologies, development and use of resources, science instruction, technology/technical instruction, library development, and student and instructional evaluation.
On January 12-13, 2009, the consultants of all packages were gathered for a Project Orientation held at the National Education Institute (NIE), Meepe Campus. Among the topics presented was the School-Based Assessment Programme of Sri Lanka. As an educational assessment practitioner, I am summarizing some of the points presented by Dr. G.L.S. Nanayakkara, based on the materials distributed during the orientation.
What is School-based Assessment or SBA?
- An assessment carried out in schools by pupils' own teachers, with the prime purpose of improving pupils' learning.
- SBA should be formative and diagnostic
- Overall aim of SBA is to improve the quality of learning, teaching and assessment.
Why is SBA superior to one-shot written examinations?
High Validity - in many subjects, some objectives cannot be assessed through a single written test alone during a short period (e.g., subjects involving practical skills). Such subject objectives can be readily assessed through SBA.
High Reliability - over a period of about two years, a teacher is able to assess a pupil several times using a variety of modalities. The average of such assessments is more reliable.
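The reliability gain from averaging repeated assessments can be quantified with the Spearman-Brown prophecy formula, a standard psychometric result (not part of Dr. Nanayakkara's presentation): the reliability of the mean of k parallel assessments is kρ/(1 + (k-1)ρ), where ρ is the reliability of a single assessment.

```python
def spearman_brown(rho, k):
    """Reliability of the average of k parallel assessments,
    each with single-assessment reliability rho."""
    return k * rho / (1 + (k - 1) * rho)

# A single classroom assessment of modest reliability 0.50,
# repeated six times over two years (illustrative numbers):
print(round(spearman_brown(0.50, 6), 2))  # → 0.86
```

So even modestly reliable individual tasks, averaged over a two-year SBA cycle, can yield a composite mark considerably more dependable than any one-shot examination.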
Evolution of SBA in Sri Lanka
1986 - The Department of Examinations initiated the Continuous Assessment Scheme to serve as a component of the GCE (O/L) Exams. It was abandoned in 1989.
1994 - NIE initiated a Classroom Based Assessment Program for Grades 6-9. Piloted in NWP and later extended to 100 schools. Revised and renamed as SBA. A training program on SBA was also launched during the year.
1999 - Revised SBA Programme was implemented at national level in Grade 6 and 9, under the Education Reform Programme.
2001 - Department of Examinations did further revision and implemented the revised SBA program to GCE (O/L) Grades, with SEMP assistance.
2003 - Department of Examinations extended the implementation of SBA Program to GCE (A/L) Grades.
Inclusion of SBA grades in the certificates commenced in 2002 for the GCE (O/L) and 2005 for the GCE (A/L).
What are the key characteristics of the current SBA Program?
- extraordinary simplicity
- teacher-friendliness
- openness and transparency of assessment practices
- amenable to continuous external monitoring.
- promotes the use of diverse assessment modalities
- encourages competency-oriented teaching-learning
- not being burdensome to teachers and students
- use of a definitive number of criteria (n=5) and a definitive number of marks per criterion (n=4)
- use of a simple 10-point scoring rubric
What are the identified strengths of the SBA Program?
- implemented satisfactorily in the majority of schools
- popularity is growing among teachers and pupils
- teachers pay more attention to planning assessment tasks than before
- pupils pay more attention to skills development, due to high recognition given for skills in SBA
- assessment results are used to give immediate feedback to pupils
What are the recent developments relating to SBA?
- A program to improve school term tests was pilot tested, and arrangements are being made to include selected term test marks in the SBA marks (one per term)
- SBA mark sheets submitted by schools are checked by a special panel of Subject Directors at the zonal level, before the mark sheets are forwarded to the Department of Examinations.
- Discussions on integrating SBA marks and external examination marks in order to award a single grade for Aesthetic subjects of the GCE (O/L) are in progress.
- A tele-drama with 15 episodes was produced for awareness creation.
Some future needs.....
- Developing suitable mechanisms to moderate raw SBA marks prior to integrating with external exams marks.
- Extending integration of SBA marks and external exams marks for more subjects.
- Training the National Testing and Examinations Service officers on SBA moderation practices.
- Re-establishing Zonal Monitoring Panels.
- Making SBA grades acceptable to employers, the community, and the general public.
School Based Assessment Program in Sri Lanka: An Overview
This year, I am posted as Teaching-Learning Methodologies Specialist cum Team Leader of the Secondary Education Modernization Project II: Package 2 (Curriculum Development) of the Ministry of Education of Sri Lanka from January to July, 2009. This project is funded by Asian Development Bank.
The SEMPII has 3 packages. Package 1 focuses on Educational Planning and Management, Package 2 on Curriculum Development, and Package 3 is on Information and Communication Technology. In package 2, the major activities include development of capacity building in teaching-learning methodologies and development of resources and use, science instruction, technology/technical instruction, library development, and student and instructional evaluation.
On January 12-13, 2009, the consultants of all packages were gathered for a Project Orientation held at the National Education Institute (NIE), Meepe Campus. Among the topics presented was the School-Based Assessment Programme of Sri Lanka. As Educational Assessment Practitioner, I am summarizing some of the points presented by Dr. G.L.S. Nanayakkara based on the materials distributed during the orientation.
What is School-based Assessment or SBA?
- An asessment carried out in schools by pupils' own teachers, with the prime purpose of improving pupils' learning.
- SBA should be formative and diagnostic
- Overall aim of SBA is to improve the quality of learning, teaching and assessment.
Why is SBA superior than one-shot written examinations?
High Validity - in many subjects some objectives cannot be assessed though a single written test alone, during a short period (e.g., subjects involving practical skills). Subject objectives can be readily assessed through SBA.
High Reliability - over a period of about 2 years, a teacher is able to assess a pupil several times using a variety of modalities. The average of such assessment is more reliable.
Evolution of SBA in Sri Lanka
1986 - Department of Examinations initiated Continuous Assessment Scheme to serve a component of the GCE (O/L) Exams. It had been abandoned in 1989.
1994 - NIE initiated a Classroom Based Assessment Program for Grades 6-9. Piloted in NWP and lated extended to 100 schools. Revised and changed as SBA. A training program on SBA was also launched during the year.
1999 - Revised SBA Programme was implemented at national level in Grade 6 and 9, under the Education Reform Programme.
2001 - Department of Examinations did further revision and implemented the revised SBA program to GCE (O/L) Grades, with SEMP assistance.
2003 - Department of Examinations extended the implementation of SBA Program to GCE (A/L) Grades.
Inclusion of SBA Grades in the certificated commenced in 2002 for GCE (O/L) and 2005 for GCE (A/L).
What are the key characteristics of the current SBA Program?
- extraordinary simplicity
- teacher-friendliness
- openness and transparency of assessment practices
- amenable to continuous external monitoring.
- promotes the use of diverse assessment modalities
- encourages competency-oriented teaching-learning
- not being burdensome to teachers and students
- use of a definitive number of criteria (n=5) and a definitive number of marks per criterion (n=4)
- use of a simple 10-point scoring rubric
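The fixed numbers above (n=5 criteria, n=4 marks per criterion, a 10-point rubric) imply a simple scoring arithmetic. The sketch below assumes marks of 0-4 per criterion and a straight halving of the 20-mark total onto the 10-point scale; the distributed materials do not spell out the exact mapping, so treat it as one plausible reading:

```python
def score_task(criterion_marks):
    """Score one SBA task: five criteria, each marked 0-4,
    summed to a total out of 20. The halving of that total onto
    the 10-point rubric is an assumption, not taken from the
    source materials."""
    assert len(criterion_marks) == 5, "exactly five criteria (n=5)"
    assert all(0 <= m <= 4 for m in criterion_marks), "0-4 marks per criterion (n=4)"
    total = sum(criterion_marks)  # 0..20
    return total / 2              # 0..10 on the 10-point scale

print(score_task([4, 3, 4, 2, 3]))  # 8.0
```

Whatever the exact mapping, the fixed counts are what deliver the "extraordinary simplicity" claimed above: every task is scored the same way, so marks are comparable across teachers.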
What are the identified strengths of the SBA Program?
- implemented satisfactorily in the majority of schools
- popularity is growing among teachers and pupils
- teachers pay more attention to planning assessment tasks than before
- pupils pay more attention to skills development, due to high recognition given for skills in SBA
- assessment results are used to give immediate feedback to pupils
What are the recent developments relating to SBA?
- A program to improve school term tests was pilot-tested, and arrangements are being made to include selected term test marks in the SBA marks (one per term)
- SBA mark sheets submitted by schools are checked by a special panel of Subject Directors at the zonal level, before the mark sheets are forwarded to the Department of Examinations.
- Discussions on integrating SBA marks and external examination marks in order to award a single grade for Aesthetic subjects of the GCE (O/L) are in progress.
- A 15-episode tele-drama was produced for awareness creation.
Some future needs.....
- Developing suitable mechanisms to moderate raw SBA marks prior to integrating them with external exam marks.
- Extending the integration of SBA marks and external exam marks to more subjects.
- Training the National Testing and Examinations Service officers in SBA moderation practices.
- Re-establishing Zonal Monitoring Panels.
- Making SBA grades acceptable to employers, the community, and the general public.
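The first of these future needs, moderating raw SBA marks before integrating them with external exam marks, is commonly handled by linear moderation: rescaling each school's SBA marks so that their mean and spread match the same pupils' external exam marks, while preserving the teacher's rank order. The sketch below shows that generic method; it is not necessarily the mechanism the Department of Examinations adopted:

```python
from statistics import mean, stdev

def moderate(sba_marks, exam_marks):
    """Linear moderation: rescale a school group's raw SBA marks
    so their mean and standard deviation match the same group's
    external exam marks. Rank order within the group is preserved,
    but a generous or harsh school marking standard is corrected."""
    m_s, s_s = mean(sba_marks), stdev(sba_marks)
    m_e, s_e = mean(exam_marks), stdev(exam_marks)
    return [m_e + (x - m_s) * s_e / s_s for x in sba_marks]

sba = [70, 80, 90]          # a school marking generously
exam = [40, 50, 60]         # the same pupils' external marks
print(moderate(sba, exam))  # [40.0, 50.0, 60.0]
```

The point of moderation is exactly this: the teacher's judgment about who did better survives, but the marks are brought onto a scale comparable across schools before being combined with the external examination.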
Monday, October 27, 2008
Learning Assessment in Kyrgyzstan
From October 3 to November 27, 2008, I am posted as Learning Assessment Consultant for the Second Education Project (SEP) of the Kyrgyz Ministry of Education and Science, funded by a grant from ADB. The SEP basically aims to modernize curricula, learning assessment, and textbook development for Grades 1-11. This is my second posting here: I was first posted in 2007 to work on Grades 1-4, and this year we are working on Grades 5-9. The remaining years of the project are devoted to Grades 10 and 11.
As Learning Assessment Consultant, I am responsible for developing a capacity-building program for teachers on formative and summative evaluation. As the International Consultant, I am working with two national consultants, Olga Aksenova and Galina Sahorava, both teachers in Bishkek City.
Working on this project is quite challenging because I always need a translator/interpreter, even when I meet with my national consultants. I prepare learning and training materials in English; they then have to be translated into Russian and then into the Kyrgyz language. Meetings and lectures take more time because of the translation and interpretation.
In this project, we translated assessment as "Ochenibanye" and evaluation as "Ochenka". At first, both terms were translated interchangeably as Ochenka or Ochenibanye. However, to make the distinction clearer, we had to decide on the acceptable Russian translation for each.
Learning assessment in Kyrgyzstan is heavily influenced by the former Soviet Union, since Kyrgyzstan was part of the USSR until it became independent in 1991.
At the Ministry of Education and Science, the National Testing Unit (NTU) oversees testing activities in school education. However, most of its activities are focused on standardizing entrance tests and examinations for higher education. The NTU staff are well trained, and I was surprised to learn that they have been doing item banking for the past five years. Their tests for higher education use an MCQ format, which they score with a scanning machine. The NTU revealed that the technology was acquired with the assistance of DANIDA in 1996; however, it has not been updated since.
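Machine scoring of MCQ papers, as the NTU does with its scanner, reduces to comparing each bubbled response against an answer key and counting the matches. A minimal sketch of that comparison step; the key and responses here are invented, not the NTU's actual format:

```python
def machine_score(answer_key, response_sheet):
    """Score an MCQ paper the way an optical scanner's software
    would: compare each recorded response against the key and
    count the matches. Blank or stray marks simply fail to match."""
    return sum(1 for key, resp in zip(answer_key, response_sheet) if resp == key)

key = "ABCDA"                       # hypothetical 5-item key
print(machine_score(key, "ABCDA"))  # 5 (perfect paper)
print(machine_score(key, "ABDDC"))  # 3 (items 3 and 5 wrong)
```

The scanner's real value is in reading the bubbles reliably at volume; the scoring logic itself is this simple, which is also what makes an item bank of MCQs cheap to reuse across test forms.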
Wednesday, October 1, 2008
The Philippine Educational Measurement and Evaluation Association

On August 6-7, 2008, another milestone of Philippine education took place: the First National Conference on Educational Measurement and Evaluation (NCEME), held at the CSB Hotel in Manila with the theme "Developing a Culture of Assessment in Learning Institutions." The conference was attended by more than 300 participants from all over the Philippines and was keynoted by the well-known "guru" Dr. Milagros D. Ibe, a Professor Emeritus of UP Diliman and presently the Graduate School Dean of Miriam College.
In this conference, papers on educational measurement and evaluation were presented and discussed. As the conference was organized primarily to reunite all graduates of the Master of Science in Educational Measurement and Evaluation (MSEME) of De La Salle University, most of the paper presenters and workshop facilitators were MSEME graduates.
One of the highlights of the conference was the founding of the Philippine Educational Measurement and Evaluation Association (PEMEA), established with the following purposes:
1. Promote standards in various areas of education through appropriate and proper assessment.
2. Provide technical assistance to educational institutions in the process of attaining standards.
3. Enhance and maintain the proper practice of measurement and evaluation at both the local and international levels.
4. Enrich the theory, practice and research in evaluation and measurement in the Philippines.
Membership in this organization is open to all who are:
1. practitioners in the field of measurement, testing, assessment, and evaluation;
2. scholars, researchers, and teachers with direct or indirect experience in measurement and evaluation; or
3. students engaged in educational measurement and evaluation.
An election was held for the first 11 Members of the Board of Trustees. There were 18 nominees.
The Founding Executive Officers of PEMEA are:
Chairman and President : Richard DLC. Gonzales, Ph.D., UST Graduate School
Vice President : Neil O. Parinas, Ph.D. cand., DLS - College of Saint Benilde
Executive Secretary : Lina A. Miclat, Ph.D., DLS - College of Saint Benilde
Treasurer : Marife M. Mamauag, Ph.D. cand., DLS - College of Saint Benilde
External Relations Officer : Belen M. Chu, M.Sc., Philippine Academy of Sakya
The Members of the Board of Trustees of PEMEA are:
- Dennis Alonzo, MA, University of Southeastern Philippines
- Paz H. Diaz, Ph.D., Miriam College
- Ma. Lourdes M. Franco, M.Sc., Center for Educational Measurement
- Carlo P. Magno, Ph.D., De La Salle University - Manila
- Editha Y. Sillorequez, Ph.D., Western Visayas State University
- Jimelo Silvestre-Tipay, M.Sc., DLS - College of Saint Benilde
Wednesday, September 24, 2008
Student Assessment Unit in Nepal
I was posted as the Classroom Assessment and Examination Consultant for the Secondary Education Support Project (SESP) in Nepal from November 6, 2007, to May 6, 2008. I worked with the Department of Education, particularly with four other central agencies, namely: the Office of the Controller of Examinations (OCE), the National Center for Educational Development (NCED), the Curriculum Development Center, and the Higher Secondary Education Board (HSEB).
One of the major accomplishments of this assignment was the establishment of Student Assessment Units (SAU) in the four central agencies and eventually at the Regional Education Departments and in the District Education Offices.
The SAU is an organizational unit primarily responsible for defining, developing, prioritizing, administering, and monitoring activities related to classroom assessment and examinations.
The SAU is committed to increasing student achievement by implementing higher standards through its assessment programs and activities. The SAUs of the various agencies support the administration of the following:
a) District Level Examinations for Grade 8
b) School Leaving Certification Examination (SLC Exams) for Grades 10 and 12
c) School-Based Assessment (SBA)
d) Teacher-training Evaluation
The Continuous Assessment System (CAS), the Liberal Promotion Policy (LPP), and SBA are the pillars supporting the SAUs in the various agencies and offices under the Ministry of Education and Sports.
Saturday, September 20, 2008
"Teachers are Superhumans"
In my work as a consultant/specialist for Classroom Assessment and Examination Reforms in various countries adhering to diverse educational philosophies and curricula, I have realized that teachers are "superhumans" everywhere in the world.
I say they are superhumans (or let me say supermen and superwomen) because they can take on several tasks at the same time. They are the key persons in implementing curricular reforms; they are managers of a team (the class they handle or school they oversee); they are entertainers and performers to students who can only learn from funny persons in front of them; and sometimes they are referees to "fighting" students, exercising police powers and even serving as conductors in school vehicles.
As teachers, they are not only mentors, lecturers, professors, and guidance counselors. They are also artists, scientists, linguists, mathematicians, historians, orators, sociologists, psychologists, activists, and motivators. They also serve as representatives of society and government organizations. They are also expected to be role models to students and politicians as well as social workers and community developers.
On top of these many roles, they teach students as best they can. They are required to give tests and examinations and to evaluate student work, reports, and projects. They are expected to keep track of what their students have learned and how many of their targeted learning outcomes were achieved. They are mandated to establish the credibility of their assessment procedures as well as the validity of their instruments. Hence, teachers do not only teach; they must also perform assessment activities to determine the achievement, strengths, and weaknesses of their students.
With these in mind, I come to believe that teachers are really superhumans. Hence, I am writing this blog on the assessment and evaluation of learning to share my experiences with teachers, researchers, and educationists.
I hope that through this blog, all teachers and researchers will achieve their vision, mission, and goals of providing the highest standards of classroom assessment and examinations.
Labels:
assessment,
measurement,
teachers,
testing programs