Menu

Assessment Policy

Primary Assessment Policy
Secondary Assessment Policy

Introduction

The purpose of this statement of policy is to provide broad guidance on assessment in the Primary School at ARIS, which more specific departmental policies can interpret, articulate and implement.

The Primary Assessment Philosophy

Assessment at ARIS has traditionally been framed by grades and test scores, as well as the UK level system, giving parents and students the misconception that these are the only important indicators of overall success. Although traditional reporting of grades and test scores is considered essential, it does not identify what students "know, understand, can do and feel at different stages of the learning process." For several years, ARIS has been advocating a paradigm shift in assessment, with a move toward more process- and performance-based authentic assessment.

What are the aims of assessment?

ARIS professional staff not only endorse this move towards more performance-based authentic assessment; as an IB World School, we aspire to be system trailblazers who lead and model what this looks like at the highest level. Assessment involves much more than end-of-chapter tests or quizzes on discrete skills and concepts. These traditional forms of assessment can be given as practice worksheets, exercises, activities and tasks to determine pupils' acquired understanding of concepts and skills, rather than as a measure of their knowledge. Such scores, grades and levels are not disclosed to learners; instead, learners are given constructive feedback to encourage their continuous growth in the learning process and to help them identify the areas in which they need to work harder.

Standardised tests, therefore, are used as a means to evaluate teachers' performance as well as the school's curriculum as a whole, and to set differentiated targets that allow pupils to develop their individual skills and learning according to their unique capabilities. Al-Rayan International School believes that assessment involves students' performances, demonstrations and product development. These often involve real-world skills that encourage collaboration, critical thinking and problem solving.

Interpreting the rubrics

Our commitment to the Primary Years Programme as a candidate school is an indication of our continuous commitment to inquiry, to deepening and extending learning processes, and to more effective assessment practices. Assessment in the PYP is focused on effectively guiding students through the five essential elements of the programme: the acquisition of knowledge, the understanding of concepts, the mastery of transdisciplinary skills, the development of the Learner Profile and attitudes, and the decision to take action.

Following the guidelines in the unit of inquiry template when preparing units of inquiry provides the structure necessary for effective collaboration and assessment.

Purpose of assessment

Assessment is the gathering and analysis of information about student performance and is designed to inform practice. It identifies what students know, understand, can do and feel at different stages in the learning process. Students and teachers should be actively engaged in assessing the student's progress as part of the development of their wider critical-thinking and self-assessment skills. (Making the PYP Happen, 2007)

The purpose of assessment at ARIS is to promote and provide information about the process of student learning and to evaluate the efficacy of the programme. All individuals concerned with assessments, including students, teachers, parents and administrators are made aware of the reason for assessment, what is being assessed, the success criteria, and the assessment method.

Reflections on assessment

At Al-Rayan International School, our assessment provides information about student achievements, needs and future directions. It is an integral part of an ongoing process of gathering, recording and sharing information about student learning. The ARIS assessment process aims to recognise that each student is unique, possessing their own talents, interests and abilities, and presents curriculum content appropriate to the stages of development of our students. Students learn, develop and mature at different paces and stages; hence, not all children of the same chronological age will attain the same academic level. Students are therefore encouraged to give their best, regardless of their level of development.

Assessment in practice

Assessment is an integral part of the teaching process within the PYP. Identifying where assessment planning occurs in the teaching-learning continuum provides an essential insight into designing units of inquiry, which will effectively address the five essential elements of the PYP. We start with the end in mind as we collaboratively consider how to most effectively design an inquiry. Teachers at ARIS are encouraged to use a variety of assessment tools and strategies (Making the PYP Happen pages 48-49), to provide for differentiated instruction and to provide a balanced view of the students. We hope to take the tension and fear out of assessment as much as possible by involving the students and parents in the process.

Effective assessment at ARIS

  • Measures growth of learning
  • Measures the application of targeted knowledge, conceptual understanding and skills rather than the mere recall of facts
  • Serves as target setting for teachers and students
  • Involves students in their own assessment and in that of their peers, as they are part of the collaborative planning
  • Involves active reflection on the part of both the student (analysing their learning) and the teacher (analysing their teaching)
  • Allows for differentiation in assessments to meet individual needs
  • Provides constructive feedback to students, teachers, parents and school administration for evaluation and continuous improvement in curriculum, instruction, meaningful work and assessment tasks
  • Allows students to base their learning on real-life experiences that can lead to further inquiries
  • Enables collaborative review of, and reflection on, student performance and progress

Summative assessment

Summative assessment enables teachers and students to gain a clear insight into students' understanding. Planning of the summative assessment begins after the central idea has been established; it includes student input and is then presented at the beginning of the unit of inquiry. Summative assessments occur at the end of the teaching and learning process and give students an opportunity to demonstrate what they have learned or how they have grown in regard to what they understand (concepts), what they are able to do (skills), what they feel and value (attitudes), and how reflection has led them towards action. Summative assessment assesses the central idea.

Formative assessment

Assessment is essential to understanding how students are performing in relation to the selected standard. Assessment for learning is as important as assessment of learning.

Formative assessment allows teachers to evaluate the effectiveness of instructional strategies, and potentially engages students in self-assessment and recognition of the success criteria. Formative assessments enhance learning by giving regular and frequent feedback, and hence provide information that is used to plan the next stage in learning. It is incorporated within daily teaching so that both students and teachers know how they are doing at various intervals during the inquiry process. Interwoven with learning, formative assessment helps teachers and students to find out what the students already know and can do. Formative assessment assesses the lines of inquiry.

Assessment in the classroom includes:

  • collecting evidence of students' understanding and thinking
  • documenting learning processes of groups and individuals
  • engaging students in reflecting on their learning
  • students assessing work produced by themselves and by others
  • developing clear rubrics
  • identifying exemplary student work
  • keeping records of test/task results

Recording

Throughout the Candidacy period, ARIS encourages teachers to implement the assessment strategies that form the basis of a comprehensive approach and represent ARIS’s answer to the question, “How will we know what we have learned?” These methods of assessment include a broad range of approaches and have been chosen to provide a balanced view of the student.

Strategies

  • Observations: All students are observed regularly with a focus on the individual, the group, and/or the whole class.
  • Performance Assessments: These require using a reservoir of knowledge and skills to accomplish a goal or solve an open-ended problem. Students are given a task that represents the kind of challenges people face in the world beyond the classroom. Such a task requires the thoughtful application of knowledge rather than the recall of facts; it involves a realistic scenario, has an identified purpose or audience, carries established criteria, and requires the development of an authentic product or performance.
  • Process-focused assessments: This method focuses on process and skills that the students go through in completing the assessment, as they are observed regularly, and multiple observations are recorded to enhance reliability. The transdisciplinary skills (research, thinking, communication, self-management and social skills) are thus regularly observed in real context. Examples are checklists, inventories and narrative descriptions (learning logs).
  • Selected responses: Single-occasion assessments that provide a snapshot of students’ specific knowledge, such as in tests and quizzes.
  • Open-ended tasks: Students are presented with a stimulus or challenge and asked to provide an original response. The student's answer might take the form of a drawing, presentation, diagram, solution or brief written response. These may then be included in the portfolio to demonstrate growth and reflection in learning.

Tools

The above assessment strategies are put into practice at ARIS by using the following assessment tools.

  • Rubrics: Rubrics are established sets of criteria used for scoring or rating children’s tests, portfolios, or performances. The descriptors tell the child and the assessor what characteristics or signs to look for in the work and then how to rate that work on a predetermined scale. Rubrics can be developed by children as well as by teachers.
  • Exemplars/Benchmarks: Samples of children’s work that serve as concrete standards against which other samples are judged. Exemplars can be used in conjunction with rubrics or continuums. Benchmarks should be appropriate and useable within a particular school context.
  • Checklists: These are lists of information, data, attributes or elements that should be present – for example, a mark scheme.
  • Anecdotal records: Anecdotal records are brief, written notes based on observations of children. These records need to be systematically compiled and organized.
  • Continuums: These are visual representations of developmental stages of learning. They show a progression of achievement or identify where a child is in a process.

Standardised Tests

Following the Cambridge International Examination Framework for Literacy, Numeracy and Science at ARIS, it is essential for teachers to ensure that pupils have acquired the skills and concepts within the schemes of work, by assessing pupils through practice sheets, tasks, exercises, online activities as well as through the PYP strategies and tools implemented in the units of inquiry.

Formal examinations are conducted once a year at the end of the third term. Children take the QCA optional examinations during the third term. These papers are a series of examination questions dispatched from the UK to assess children's levels in the different areas of the curriculum. The results allow teachers to determine the attainment level at which each student is working, and thus to set targets for the next academic year. Pupils in Years 2 and 6, however, are obliged to write the Standards and Testing Agency tests, popularly known as STA. This is because these are the only two classes that go into transition the next academic year: Year 2 moving into Key Stage 2 (Year 3), and Year 6 moving into Year 7, the start of Secondary (Key Stage 3).

All these exams are marked by the teachers and moderated by senior management to determine the right level for each pupil, using the national level descriptors enshrined in the British curriculum.

In addition, Year 6 students also write the Checkpoint exam, administered by the Cambridge International Examination Board, to determine their academic achievement throughout Primary before transition into Secondary. This assessment is not obligatory, as parents may choose not to allow their wards to sit for this exam. The exams are marked in the UK, and each student is given a report on areas of strength and weakness; this not only guides pupils in evaluating and reflecting on their performance, but also allows the school to review its standards and benchmarks.

Hence, all results acquired from these standardised examinations are for the purpose of evaluating teachers' performance in the teaching and learning process, and the efficacy of the programme and curriculum of the school. In addition, the results provide information that shows growth over time, and guide teachers and students in setting targets according to each student's needs, in order to encourage them to work towards achieving their goals. Furthermore, the results allow the Learning Support Team to identify those students whose basic skills fall outside the normal range expected for students of that particular age; this data, alongside other assessment information, is used to develop support programmes such as Individual Education Plans. The results are in no way a means of comparing our performance to that of other institutions within the international arena.

Reporting

Reporting on assessment is a means of communicating what students know, understand and can do. Effective reporting should:

  • involve parents, students, and teachers as partners
  • reflect what the school community values
  • be comprehensive, honest, fair, and credible
  • be clear and understandable to all parties
  • allow teachers to incorporate what they learn during the reporting process into their future teaching and assessment practice
  • include student development according to the Learner profile

Conferences

The function of the conferences is to share information between teachers, students and parents, and to set goals for progress in the learning process.

Teacher-Student

These informal conferences encourage students' learning and inform teacher planning. At ARIS, teachers and students meet frequently, both during and outside lessons, to discuss and reflect on their work and to further develop and refine skills. Teaching assistants are very much involved in this process as well.

Teacher-Parent

Parents are regularly invited to curriculum and programme presentations, and to discuss their wards' progress during the course of the academic year. Teachers take the opportunity to gather background information, to answer parents' questions, to address their concerns, and to help define their role in the learning process. Parents need to inform teachers of the cultural context of their ward's learning.

Three Way–target setting

Our two three-way conferences are held during the first term, in mid-October, and in the second term, towards the end of January. They involve the student, parents and teacher, and all participants must understand their roles prior to the conference. Students discuss their learning and understanding with their parents and teacher, who are responsible for supporting the student through this process. Students reflect upon work samples chosen with the support and guidance of their teacher, which could be from their portfolio. The student, parents and teacher collaborate to identify the student's strengths and areas for improvement. Targets are then set, along with the steps for achieving them, and the teacher takes notes of the discussion.

Student–led

The student-led conference at ARIS occurs during the third term, in mid-May. It involves the student and the parent. The students are responsible for leading the conference, and also take responsibility for their learning by sharing the process with their parents. It may involve students demonstrating their understanding through a variety of different learning situations. There may be several conferences taking place simultaneously.

The conference will involve the students discussing and reflecting upon samples of work that they have previously chosen to share with their parents. These samples have been previously selected with guidance and support from the teacher, and could be from the student’s portfolio. The student identifies strengths and areas for improvement. It enables parents to gain a clear insight into the kind of work their child is doing and offers an opportunity for them to discuss it with their child. The conferences must be carefully prepared, and time must be set aside for the students to practice their presentations. The format of this conference will depend on the age of the student and all of the participants must understand the format and their roles prior to the conference.

Written Report

Assessment of student learning, in the form of progress reports, portfolios, report cards and standardised tests, is provided to parents throughout the year. Examples of student work are maintained in portfolios. Regularly scheduled conferences are offered throughout the academic year.

Written assessment information is reported at the end of Term One and Term Three on stand-alone units that have taken place in single subject areas, units of inquiry, the Learner Profile attributes and attitudes focused upon within the unit, transdisciplinary skills, and key concepts.

Scores and marks for skills/standards within each stand-alone subject area are transformed into grades using the criteria standardised within the school context.

The written report contains a detailed summative record that clearly indicates areas of strength, areas for improvement, and where students are involved in providing input (through self-assessment).

The Exhibition

At the end of Year 6, the final year of the PYP, our students prepare a PYP exhibition: a transdisciplinary inquiry conducted in the spirit of personal and shared responsibility. It requires each student to demonstrate engagement with the five essential elements of the programme: knowledge, concepts, skills, attitudes and action. It is essential that pupils exhibit the attributes of the Learner Profile that they have been developing throughout the PYP. The class teacher guides the students in selecting an inquiry from any of the transdisciplinary themes.

The PYP exhibition has the following key purposes:

  • Engages students in an in-depth, collaborative inquiry
  • Provides students with an opportunity to demonstrate independence and responsibility for their own learning
  • Provides students with an opportunity to explore multiple perspectives
  • Allows students to apply the learning of previous years, and to reflect on their journey through the PYP
  • Provides an authentic process for assessing student understanding
  • Demonstrates how students can take action as a result of their learning
  • Unites students, teachers, parents and other members of the school community in a collaborative experience that incorporates the essential elements of the PYP
  • Celebrates the learners' transition from primary to secondary education

Individual Files

Administration Files

Each child has an individual file, kept in the office so that ongoing records can be added. It includes information from previous schools, reports from previous years, medical records, professional evaluations (from child psychologists, the SEN Officer, the school Counsellor, etc.), and results from standardised test/competition papers (e.g. QCA tests, Admission Assessment).

References

International Baccalaureate Organization. Making the PYP Happen. Cardiff, Wales, United Kingdom: IBO, 2009.

Wildwood World Magnet School. PYP Assessment Policy Manual. Revised Spring 2011.

SPW Policies & Procedures: PYP Assessment Procedures.

Secondary Assessment Policy

ARIS offers the International Baccalaureate Diploma Programme.

Introduction

The purpose of this statement of policy is to provide broad guidance on assessment in the Secondary School at ARIS, which more specific departmental policies can interpret, articulate and implement.

The assessment policy and procedures at departmental level will vary according to the relevant programme and subject followed, and in general are defined by the external examining body responsible for the curriculum at that student age level:

  • Cambridge Checkpoint for Key Stage 3, corresponding to Years 7, 8 and 9
  • International General Certificate of Secondary Education, corresponding to Years 10 and 11, and
  • International Baccalaureate Diploma Programme, corresponding to Years 12 and 13.

Woven into each level are well-defined accounts of the nature of the assessments, the assessment instruments (examination components) to be used, the assessment objectives (AOs), the assessment criteria and descriptors, and the scheme of measurement by which student performance is calibrated. Both criterion-specific and component-specific grade band descriptors are augmented by a series of grade descriptors providing a statement of the attributes or characteristics of a student's achievement at each grade level in the subject as a whole. For example, the work of a student with a grade B in IGCSE Additional Mathematics will exhibit 'x, y, z' characteristics. The entire territory is clearly mapped and serves as an essential reference for both students and teachers.

The above curricula comprise the majority of the subjects taught at ARIS. Accordingly, our teachers are trained in, and required to comply with, the concomitant assessment rubrics. The parameters of such assessments are therefore prescribed by external agencies (e.g. Cambridge International Examinations, the International Baccalaureate Organization). The prescription, however, invites useful questions.

What is assessment?

We may begin by asking ourselves what educational assessment is in general. Obviously, in its broadest sense, assessment is a fundamental part of the educational process. Assessment is going on continuously in a school such as ARIS – not only formal assessment in the form of marks, grades and comments being attached in some documented form to pieces of student work, but also in a more informal manner at every level of interaction between student and teacher. The basic process of getting to know students and achieving a general understanding of their quality of learning could justifiably be included here. More specifically, however, the course of a lesson may spontaneously change as a result of the teacher’s awareness of the level of comprehension of the students – this is also a process in which assessment is playing a crucial, if subliminal, role.

What are the aims of assessment?

Assessment in our Secondary School is important for a number of reasons. These include:

  • Providing a clear statement for all interested parties of the level of achievement reached by a student by a certain stage of the educational process
  • Acting as a diagnostic tool for teachers to plan subsequent learning experiences in the light of the student’s current level of understanding
  • Acting as feedback so that the student obtains realistic information about current achievement and his or her degree of progress
  • Contributing to the accountability of the school towards parents and guardians.

Although the boundary between them is not really sharp, it can be useful at this early point to distinguish between two broad types of assessment that are different in terms of their aims:

  • Formative assessment provides detailed feedback to students and teachers on the nature of strengths, weaknesses and capabilities; it is focused on support for learning and the potential of the student; sometimes referred to as assessment for learning
  • Summative assessment comprises information about student achievement and supports teacher and school accountability; sometimes referred to as assessment of learning.

The IBO Guidelines state that:

'Formative assessment represents the process of gathering, analysing, interpreting and using the evidence to improve student learning and to help students to achieve their potential. It is one essential component of classroom practice and needs to be integrated into the curriculum. The assessment policy will make clear to the whole community what the expectations and practices relating to formative assessment in the school are [whereas]

Summative assessment is concerned with measuring student performance against Diploma Programme assessment criteria to judge levels of attainment. Teachers must be aware of the principles and practices that the IB uses to conduct summative assessment. Summative and formative assessments are, therefore, inherently linked and teachers must use their knowledge of IB summative assessment expectations and practices to help students improve performance in a formative way.'

(Source: Guidelines for developing a school assessment policy in the Diploma Programme, pub. IBO, 2010)

Interpreting the rubrics

It is intended that teachers will be able to design and implement assessment tasks that are consistent with the aims stated above. This document should also serve to standardize the understanding of the nature of assessment across the wider school community, depending on the degree of consensus in the responses to the questions raised.

Reflections on assessment

  • Why are we assessing? What is our rationale or intention? Are we assessing to gain information or knowledge? To what use will we be putting such information or knowledge? For whom are we assessing? Ourselves? The students? Third parties? Parents? Employers? Universities?
  • To what extent, if any, should the identity of the recipient of the assessment outcome affect how the assessment is conducted, and/or what is communicated?
  • Which of our own Ways of Knowing do we use in trying to find out what ARIS students know and can do? Or don't know and cannot do, but should? Our perception? Emotion? Reason? Language? All of them? Equally in all instances?
  • Should the Ways of Knowing we use be a function of the nature of what is being assessed? To what extent should we consciously and deliberately use different Ways of Knowing in different circumstances? Should I assess a musical performance or an artwork using the same ways of knowing as I assess the solution of a quadratic equation? How does the medium through which the assessment is undertaken (visual, aural, digital, analogue, for example) affect the findings?
  • Should my assessments be influenced by the affective context? By the identity, the feelings and emotions of the student(s) at that time? By the anticipated consequences of the outcome of the assessment?
  • To what extent, if any, do my assessments reflect a cultural bias detrimental to, or improving the outcome? Are those being assessed inherently disadvantaged by the cultural bias of the assessment tasks?
  • When are we assessing? Always? In real-time (second by second)? Spontaneously, as the perceived need arises? At predetermined intervals? On what bases should the time and frequency of assessments be determined? Do the assessment schedules and examination timetables mandated by the external educational institutions accord with the developmental needs of the students? If not, what are the implications for the way I manage the courses for which I am responsible?

The benchmarks for IGCSE and IB Diploma Programme assessment

In general, there are two different kinds of benchmark against which assessments can be made:

  • Norm-referencing: the simplest form involves applying a fixed distribution of grades to the results of an assessment
  • Criterion-referencing: involves making judgements concerning the degree to which a student has met written descriptions that correspond to grades.

Norm-referencing

With this method, it is decided in advance that, say, 5% of students in an examination will achieve the top grade, and this figure is maintained from one session to the next. The underlying assumption here is that each cohort of students is interchangeable (in terms of level and spread of performance) with any other cohort. This assumption may not be justified unless the cohort is very large. Also, with norm-referencing the score that a student receives depends on, and is a function of, the performance of other students.

Criterion-referencing

With this method, each student’s work is evaluated with reference to published Grade Descriptors. These descriptors tend to be rather broad and complex, as they are intended to describe overall standards in a subject course, so often they are broken down into assessment criteria that describe performance in particular skills or topics. This system means that, in principle, the student’s performance is measured independently and objectively on its own merit.

Both IGCSE and IBDP work is assessed by criterion-referencing. This means that Grade Descriptors are the benchmarks of the assessment process at ARIS.

Forms of assessment at ARIS

What forms of assessment are relevant to our ARIS Secondary School learning environment? What are the various types of assessment task which we may reasonably require of students? What categories or types of assessment are available to us? Informal/formal, casual/rule-governed, transient/permanent, impressionistic/criterion-based, oral/written, formative/summative. And so forth.

Conventional tasks include:

  • Class Work: short assignments for the individual student without test conditions
  • Class Quiz: short assignments administered under test conditions in class
  • Unit Test: more substantial assignment administered under test conditions in class, usually timed to coincide with the end of the treatment of a particular topic
  • Essay/Written Production: assignment requiring an extended response comprising continuous prose
  • Presentation: individual or group oral assignment delivered to the teacher or the rest of the class
  • Commentary: assignment requiring a focused and informed interpretation of a text
  • Practical Report: recording of an experimental session in the sciences
  • Production/performance: Musical or dramatic presentation (solo or ensemble)
  • Exhibition/portfolio: Presentation of art work
  • Portfolio/Dossier Entry: one of a number of short assignments which collectively form students’ coursework
  • Project: individual, perhaps self-directed exercise carried out over an extended period, involving practical activity and report
  • Oral Task: assignment requiring spoken response
  • Listening Comprehension: responding to a spoken text
  • Formal Examination: extended assessment administered under formal examination conditions.

As is to be expected, the menu of appropriate task types varies from one subject to another, and should be elaborated in each set of departmental policies and procedures. In each case, there will be a balance between tasks to be undertaken in the classroom and for homework. There will also be a balance between tasks assigned on individual and group bases.

However, our conventional understanding of what properly constitutes classwork and what would normally constitute homework may possibly be inverted by radical changes in pedagogy.

What assessment guidelines do we use at ARIS?

    The guidelines for assessment depend on the task. Broadly, there are two types – both of which are consistent with the process of criterion-referenced assessment:

    • Analytic markschemes: marks are explicitly assigned to specific questions or part of questions; this allows the rewarding of partial success; the markscheme is specific to one examination or test
    • Assessment Criteria: descriptors of a hierarchy of levels are used on a ‘best fit’ model; these are appropriate when a task is so open-ended that the variety of valid responses is too great to allow the application of analytic markschemes; the criteria remain the same over different examinations or assignments.

    Students may request to see their own personal assessment record in the form of either reports or transcript. However, they are encouraged to maintain their own record of assessments in their subject portfolios. Teachers may view student records – either on the network drive or in hard copy from the Head of Secondary’s office. Parents have access to assessment records in the form of term reports for their own wards, and will be provided with appropriate whole-school performance information by bulletin and scheduled parent/teacher conferences.

    Who makes the assessments?

    • Teacher assesses student
    • Student assesses self
    • Student assesses other student(s)

    It is almost self-evident that it is a fundamental aspect of the job of a teacher to make assessments of student work. However, it is also very important that students are given opportunities to assess their own work (self-assessment) and that of other students (peer-assessment). Such processes entail analysis, evaluation and synthesis; in other words, crucial elements in the acquisition of higher order cognitive skills.

    What student attributes should be assessed?

    It is important to be clear about what aspect(s) of the student is/are being assessed. The aspect(s) could be:

    • Aptitude: an assessment of the student’s academic ability or potential
    • Achievement: an assessment of the student’s academic performance (skills and knowledge)
    • Effort: an assessment of the student’s attitude to learning.

    The distinction between the first two of these attributes is reflected in two types of testing:

    • Psychometric testing: calibrated short questions to measure aptitude in a particular area, goals are discrimination and ranking
    • Performance Assessment: students expected to carry out tasks that directly reflect the range of knowledge and skills learned in the classroom (including higher order cognitive skills).
    When and how are these attributes assessed at ARIS?
    While aptitude, and by inference, potential, might be or should be the focus of assessment in an entrance (placement) examination, assessment of achievement has central importance in everyday school activity. The principal means by which achievement is recorded is through the use of quantitative marks and grades on student work and reports – representing both ongoing term work and examination performance. The various departmental policies outline the assessment tools employed in each academic subject in order to produce these marks and grades. Constructive comments on student work and in reports are likely to mention effort as well as achievement.
    • Achievement: assessed formally through marks and grades on individual pieces of submitted work, and through grades and comments attached to term reports
    • Effort: assessed formally through written comments on individual pieces of submitted work and on term reports as well as through term reports.
    Both can be assessed less formally through dialogue between student and teacher.

 

What are the characteristics of effective assessments?
There are two basic aspects of an assessment task that should be constantly kept in mind:
• Validity: the extent to which an assessment actually measures what it claims to measure. (In boiling water at sea level at sunset in Tung Lo Wan Road in Hong Kong my thermometer registers 100 degrees Celsius. Is it accurate?)
• Reliability: the extent to which similar assessments are consistent over time. (I do the same experiment in the plaza outside the Chilean Naval Academy in Valparaiso at midnight a year later. Is the thermometer reading the same?)
IB Diploma Programme example: Pedro from Buenos Aires gets a grade 6 in his Physics HL exam in November 2015. Amanda from Helsinki gets a grade 6 in her Physics HL in May 2016. Alex, the university admissions officer at MIT, needs to be confident that the two candidates have reached similar levels of performance. She wonders about the validity (Do these two Physics exams actually measure what they claim to measure?) and the reliability (Was the Spanish version of the exam in the southern hemisphere summer of 2015 consistent with the English version of the exam in the northern hemisphere summer the following year?).
In addition to the basic tenets of validity and reliability there are further considerations to be taken into account in devising effective assessments. For example, bias can be expressed as a difference in outcome of an assessment process that is not related to a genuine difference in the attribute being measured. This is to be avoided wherever possible.
Also to be avoided is overlap between the assessment criteria and/or the assignments. Are you measuring the same thing twice, and obtaining different results? To what is the difference attributable? Replication of assessment tasks can also overload students to the extent that the repeated task is approached with a negative attitude, perhaps further skewing the results.
In any assessment task there is a balance to be struck between validity and reliability. Validity means that the task measures what it is meant to be measuring; reliability means that the task yields consistent scores at different times and with different teachers.
If reliability is over-emphasized at the expense of validity, there is a danger that the task loses its relevance to the desired learning outcomes, and students become burdened with assignments that do not improve their performance. An extreme example of this situation would be setting a multiple choice test to measure students’ understanding of an historical period or comprehension of a passage from Shakespeare. Students may become de-motivated if they cannot see the point of the assignment or how it could help them.
If validity is over-emphasized at the expense of reliability, there is a danger that measurement of student performance will become inconsistent. While the desired learning outcomes might well be addressed, the student’s scores might not reflect this effectively, and hence they lose credibility. Students may be justified in feeling that their scores are unfair, and teachers may be hampered in making clear appraisals of students’ progress. In guarding against this scenario, detailed markschemes can help.
Maintaining the balance between validity and reliability becomes particularly challenging when complex, higher-order skills are involved. The founder of the IB, Alec Peterson, put it in this way:
"What is needed is a process of assessment that is as valid as possible, in the sense that it really assesses the whole endowment and personality of the pupil in relation to the next stage of his life, but at the same time is sufficiently reliable to assure pupils, parents and teachers, and receiving institutions, that justice is being done. Yet such a process must not, by its backwash effect, distort good teaching, nor be too slow, nor absorb too much of our scarce educational resources." (Alec Peterson, Schools Across Frontiers, Open Court, Illinois 1971)

Internal examinations
It is expected that internal examinations for both IGCSE and IBDP students will, overall, follow the formats of the external IGCSE or IB examinations. A mock exam and the concomitant markscheme should as far as possible be a replica of the ‘real thing’, though syllabus content not yet covered in class should obviously be excluded. Apropos of this, the physical environment in which such internal examinations are conducted should also replicate the 'real thing'. Security of examination question papers and markschemes, seating arrangements in the exam room, invigilator training and so forth should be a dress rehearsal for the performance. Administrators of examinations should never underestimate candidates' ingenuity in finding shortcuts to success, and the requirements of external examining boards must be adhered to punctiliously.

A properly balanced examination paper should display an equivalent overall level of difficulty to that of an IGCSE or IB past paper in its entirety. This equivalence legitimizes the use of published grade boundaries from recent IGCSE or IB examinations in the subject (IB data is available on IBIS and from the Subject Reports) as a guide in setting appropriate grade boundaries for the internal examination. Final mark totals or percentages for the internal examination can be converted in this way to generate a 1−7 grade (or its IGCSE equivalent) to describe the student’s performance, and thus be a reasonably reliable predictor of future performance, ceteris paribus .

Coursework
The situation described for internal examinations cannot possibly apply to the variety of tasks set by teachers during normal lessons and homework. Thus a different way to arrive at grades must also be devised.

The rudimentary word descriptions for the 1−7 scale need to be interpreted in the context of Grade Descriptors that, in the case of the IBDP, are published on the OCC for each subject or group of subjects. The equivalent Grade Descriptors for IGCSE subjects are also available in the syllabus documents. Teachers must refer to these descriptors in order to develop and maintain an accurate understanding of what quality of work corresponds to each point on the IGCSE or IB scale for the subject/course concerned. Assigning a grade to a piece of work is a professional judgement made by the teacher in the context of this supporting documentation and his/her previous experience.

Grade Descriptors should also inform the process of setting an assignment for assessment purposes. In particular, a clear judgement must be made as to whether the assignment, on its own, provides an opportunity for the student to perform at the top (eg A* or 7) level of the Grade Descriptors. For some simple tasks this may not be possible, but it is vital that most tasks given during the course of a term challenge students to attain this level of performance. If this approach is not adopted, the assignment set will not adequately differentiate between levels of performance, perhaps causing a modal clustering and/or unrepresentative skewing/kurtosis in the distribution. Such an outcome should not be dismissed as a trivial statistical quirk, as it has a direct impact on the quality of the academic diagnosis and hence on future teaching. It may also be evidence of malpractice.
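
By way of illustration only, a quick distributional check of the kind alluded to above might be sketched as follows (the marks and the flagging thresholds are invented for this example, not prescribed by any examining body):

```python
# Sketch: a quick distributional check on a task's scores, flagging the
# modal clustering described above (data and thresholds are illustrative).

def moments(scores):
    """Mean, standard deviation, skewness and excess kurtosis."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((x - mean) ** 2 for x in scores) / n
    sd = var ** 0.5
    skew = sum((x - mean) ** 3 for x in scores) / (n * sd ** 3)
    kurt = sum((x - mean) ** 4 for x in scores) / (n * var ** 2) - 3
    return mean, sd, skew, kurt

# Marks out of 20 on a task that failed to stretch the top end:
scores = [18, 19, 18, 20, 19, 18, 17, 19, 18, 19, 20, 18]
mean, sd, skew, kurt = moments(scores)

if sd < 2:
    print(f"Possible poor differentiation: mean={mean:.1f}, sd={sd:.2f}")
```

A very small spread of marks near the maximum, as here, suggests the task gave top-grade performers nothing to demonstrate.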

If the task allows the student to produce responses across the whole range of descriptors, then the student’s response must be evaluated by the award of a grade (eg 1−7). Depending on the subject and the nature of the specific assignment, this grade may be arrived at directly or as the end-product of a marking scheme (this is likely to depend on the degree of structure in the assignment). Where a markscheme is used, the teacher should make a judgement about what scores will correspond to which IGCSE/IBDP grades – again with reference to the published Grade Descriptors and the teacher’s professional experience in the matter.

If the task does not allow the student to achieve a level of performance consistent with the top end of the Grade Descriptors, this task cannot, on its own, be awarded a score on the A* to G or 1−7 scale. In other words, to set an assignment task where the top grade on the scale used is not achievable would obviously be unfair to the student. In this situation, the mark or score awarded for the assignment should be combined with marks and scores given over the course of the term for other assignments that are of a similar type. For example, class quizzes might be grouped together, or minor homework assignments. By the end of the term, it should be possible to award a grade for the category of work as a whole, as long as the group of tasks taken together have allowed the student access to the entire range of performance grades.
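
The end-of-term aggregation described above might be sketched as follows (the task categories, marks and maxima are hypothetical; the principle is simply that similar minor tasks are pooled before any grade is awarded):

```python
# Sketch: combine marks for similar minor tasks into one end-of-term
# percentage per category (all marks below are hypothetical).

quizzes = [(7, 10), (8, 10), (6, 10)]            # (mark awarded, maximum)
homeworks = [(4, 5), (5, 5), (3, 5), (4, 5)]

def category_percentage(tasks):
    """Overall percentage for a group of similar tasks, weighted by maxima."""
    earned = sum(mark for mark, _ in tasks)
    available = sum(maximum for _, maximum in tasks)
    return 100 * earned / available

print(f"Class quizzes: {category_percentage(quizzes):.0f}%")    # 21/30 -> 70%
print(f"Minor homework: {category_percentage(homeworks):.0f}%") # 16/20 -> 80%
```

The resulting category percentage can then be judged against the Grade Descriptors in the usual way, provided the pooled tasks collectively span the full range of performance.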

A coda: Assessment should be neither feared nor revered

Without pejoratively naming instances, it is claimed that some assessment systems, within which tens of millions of young people are today striving to succeed, are deeply flawed. Some systems have, in some cases irrevocably, fossilised into dead imitations of living organisms. Their operating systems are arcane, obscure, impenetrable, unaccountable and frighteningly complex. Their assessment outcomes, including those star quality grades of which the students dream, are feared and revered. That is not just.

Such assessment systems are not serving the needs of the generation for whom they have been devised.

It was earlier maintained that assessment is a fundamental part of the educational process; it is not simply the end product of that process but an integral element of it.

Hence it is vital that if assessments, including those outlined in this policy document, are to meet educational needs and serve educational purposes effectively, they must inherently be capable of evolving; they must incorporate in their design mechanisms whereby they themselves are routinely subject to scrutiny, appraisal and evaluation or, to use a word by now familiar, assessment.

ARIS offers the International Baccalaureate Diploma Programme.

Introduction

The purpose of this statement of policy is to provide broad guidance on assessment in the Secondary School at ARIS, which more specific departmental policies can interpret, articulate and implement.

The assessment policy and procedures at departmental level will vary according to the relevant programme and subject followed, and in general are defined by the external examining body responsible for the curriculum at that student age level:

  • Cambridge Checkpoint for Key Stage 3, corresponding to Years 7, 8 and 9
  • International General Certificate of Secondary Education, corresponding to Years 10 and 11, and
  • International Baccalaureate Diploma Programme, corresponding to Years 12 and 13.

Woven into each level are well-defined accounts of the nature of the assessments, the assessment instruments (examination components) to be used, the assessment objectives (AOs), the assessment criteria and descriptors, and the scheme of measurement by which student performance is calibrated. Both criterion-specific and component-specific grade band descriptors are augmented by a series of grade descriptors providing a statement of the attributes or characteristics of a student's achievement at each grade level in the subject as a whole. For example, the work of a student with a grade B in IGCSE Additional Maths will exhibit 'x, y and z' characteristics. The entire territory is clearly mapped and serves as an essential reference for both students and teachers.

The above curricula comprise the majority of the subjects taught at ARIS. Accordingly, our teachers are trained in and required to comply with the concomitant assessment rubrics. The parameters of such assessments are therefore prescribed by external agencies (eg Cambridge International Examinations, the International Baccalaureate Organization). The prescription, however, invites useful questions.

What is assessment?

We may begin by asking ourselves what educational assessment is in general. Obviously, in its broadest sense, assessment is a fundamental part of the educational process. Assessment is going on continuously in a school such as ARIS – not only formal assessment in the form of marks, grades and comments being attached in some documented form to pieces of student work, but also in a more informal manner at every level of interaction between student and teacher. The basic process of getting to know students and achieving a general understanding of their quality of learning could justifiably be included here. More specifically, however, the course of a lesson may spontaneously change as a result of the teacher’s awareness of the level of comprehension of the students – this is also a process in which assessment is playing a crucial, if subliminal, role.

What are the aims of assessment?

Assessment in our Secondary School is important for a number of reasons. These include:

  • Providing a clear statement for all interested parties of the level of achievement reached by a student by a certain stage of the educational process
  • Acting as a diagnostic tool for teachers to plan subsequent learning experiences in the light of the student’s current level of understanding
  • Acting as feedback so that the student obtains realistic information about current achievement and his or her degree of progress
  • Contributing to the accountability of the school towards parents and guardians.

Although the boundary between them is not really sharp, it can be useful at this early point to distinguish between two broad types of assessment that are different in terms of their aims:

  • Formative assessment provides detailed feedback to students and teachers on the nature of strengths, weaknesses and capabilities; it is focused on support for learning and the potential of the student; sometimes referred to as assessment for learning
  • Summative assessment comprises information about student achievement and supports teacher and school accountability; sometimes referred to as assessment of learning.

The IBO Guidelines state that:

'Formative assessment represents the process of gathering, analysing, interpreting and using the evidence to improve student learning and to help students to achieve their potential. It is one essential component of classroom practice and needs to be integrated into the curriculum. The assessment policy will make clear to the whole community what the expectations and practices relating to formative assessment in the school are [whereas]

Summative assessment is concerned with measuring student performance against Diploma Programme assessment criteria to judge levels of attainment. Teachers must be aware of the principles and practices that the IB uses to conduct summative assessment. Summative and formative assessments are, therefore, inherently linked and teachers must use their knowledge of IB summative assessment expectations and practices to help students improve performance in a formative way.'

(Source: Guidelines for developing a school assessment policy in the Diploma Programme, pub. IBO, 2010)

Interpreting the rubrics

It is intended that teachers will be able to design and implement assessment tasks that are consistent with the aims stated above. This document should also serve to standardize the understanding of the nature of assessment across the wider school community, depending on the degree of consensus in the responses to the questions raised.

Reflections on assessment

  • Why are we assessing? What is our rationale or intention? Are we assessing to gain information or knowledge? To what use will we be putting such information or knowledge? For whom are we assessing? Ourselves? The students? Third parties? Parents? Employers? Universities?
  • To what extent, if any, should the identity of the recipient of the assessment outcome affect how the assessment is conducted, and/or what is communicated?
  • Which of our own Ways of Knowing do we use in trying to find out what ARIS students know and can do? Or don't know and cannot do, but should? Our perception? Emotion? Reason? Language? All of them? Equally in all instances?
  • Should the Ways of Knowing we use be a function of the nature of what is being assessed? To what extent should we consciously and deliberately use different Ways of Knowing in different circumstances? Should I assess a musical performance or an artwork using the same ways of knowing as I assess the solution of a quadratic equation? How does the medium through which the assessment is undertaken (visual, aural, digital, analogue, for example) affect the findings?
  • Should my assessments be influenced by the affective context? By the identity, the feelings and emotions of the student(s) at that time? By the anticipated consequences of the outcome of the assessment?
  • To what extent, if any, do my assessments reflect a cultural bias detrimental to, or improving the outcome? Are those being assessed inherently disadvantaged by the cultural bias of the assessment tasks?
  • When are we assessing? Always? In real-time (second by second)? Spontaneously, as the perceived need arises? At predetermined intervals? On what bases should the time and frequency of assessments be determined? Do the assessment schedules and examination timetables mandated by the external educational institutions accord with the developmental needs of the students? If not, what are the implications for the way I manage the courses for which I am responsible?

The benchmarks for IGCSE and IB Diploma Programme assessment

In general, there are two different kinds of benchmark against which assessments can be made:

  • Norm-referencing: the simplest form involves applying a fixed distribution of grades to the results of an assessment
  • Criterion-referencing: involves making judgements concerning the degree to which a student has met written descriptions that correspond to grades.

Norm-referencing

With this method, it is decided in advance that, say, 5% of students in an examination will achieve the top grade, and this figure is maintained from one session to the next. The underlying assumption here is that each cohort of students is interchangeable (in terms of level and spread of performance) with any other cohort. This assumption may not be justified unless the cohort is very large. Also, under norm-referencing the score a student receives is dependent on, and a function of, the performance of other students rather than on the merit of the work alone.
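
The fixed-distribution mechanism can be made concrete with a short sketch (the names, marks and quota proportions below are invented for illustration):

```python
# Sketch: norm-referenced grading -- a fixed share of the cohort receives
# each grade, whatever the absolute marks (all data are hypothetical).

def norm_reference(scores, quotas):
    """scores: name -> mark; quotas: (grade, proportion) from the top down."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    grades, i = {}, 0
    for grade, share in quotas:
        take = round(share * len(ranked))
        for name in ranked[i:i + take]:
            grades[name] = grade
        i += take
    for name in ranked[i:]:          # rounding remainder -> lowest grade
        grades[name] = quotas[-1][0]
    return grades

scores = {"Ama": 92, "Kofi": 78, "Esi": 71, "Yaw": 65, "Abena": 50}
quotas = [("A", 0.2), ("B", 0.4), ("C", 0.4)]
print(norm_reference(scores, quotas))
```

Note that Esi's grade depends entirely on where a mark of 71 falls in the ranking: the same mark in a stronger cohort would earn a lower grade, which is precisely the dependency on peers noted above.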

Criterion-referencing

With this method, each student’s work is evaluated with reference to published Grade Descriptors. These descriptors tend to be rather broad and complex, as they are intended to describe overall standards in a subject course, so often they are broken down into assessment criteria that describe performance in particular skills or topics. This system means that, in principle, the student’s performance is measured independently and objectively on its own merit.

Both IGCSE and IBDP work is assessed by criterion-referencing. This means that Grade Descriptors are the benchmarks of the assessment process at ARIS.

Forms of assessment at ARIS

What forms of assessment are relevant to our ARIS Secondary School learning environment? What are the various types of assessment task which we may reasonably require of students? What categories or types of assessment are available to us? Informal/formal, casual/rule-governed, transient/permanent, impressionistic/criterion-based, oral/written, formative/summative, and so forth.

Conventional tasks include:

  • Class Work: short assignments for the individual student without test conditions
  • Class Quiz: short assignments administered under test conditions in class
  • Unit Test: more substantial assignment administered under test conditions in class, usually timed to coincide with the end of the treatment of a particular topic
  • Essay/Written Production: assignment requiring an extended response comprising continuous prose
  • Presentation: individual or group oral assignment delivered to the teacher or the rest of the class
  • Commentary: assignment requiring a focused and informed interpretation of a text
  • Practical Report: recording of an experimental session in the sciences
  • Production/Performance: musical or dramatic presentation (solo or ensemble)
  • Exhibition/Portfolio: presentation of art work
  • Portfolio/Dossier Entry: one of a number of short assignments which collectively form students’ coursework
  • Project: individual, perhaps self-directed exercise carried out over an extended period, involving practical activity and report
  • Oral Task: assignment requiring spoken response
  • Listening Comprehension: responding to a spoken text
  • Formal examination.

As is to be expected, the menu of appropriate task types varies from one subject to another, and should be elaborated in each set of departmental policies and procedures. In each case, there will be a balance between tasks to be undertaken in the classroom and for homework. There will also be a balance between tasks assigned on individual and group bases.

However, our conventional understanding of what properly constitutes classwork and what would normally constitute homework may possibly be inverted by radical changes in pedagogy.

What assessment guidelines do we use at ARIS?

The guidelines for assessment depend on the task. Broadly, there are two types – both of which are consistent with the process of criterion-referenced assessment:

  • Analytic markschemes: marks are explicitly assigned to specific questions or parts of questions; this allows the rewarding of partial success; the markscheme is specific to one examination or test
  • Assessment Criteria: descriptors of a hierarchy of levels are used on a ‘best fit’ model; these are appropriate when a task is so open-ended that the variety of valid responses is too great to allow the application of analytic markschemes; the criteria remain the same over different examinations or assignments.
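
The 'best fit' model can be illustrated with a small sketch. The criterion, its level descriptors and the matching rule below are all invented for this example; in practice the fit is a professional judgement, not a mechanical count:

```python
# Sketch: awarding a level on a 'best fit' basis against a hierarchy of
# descriptors (criterion, levels and matching rule are hypothetical).

# Qualities a response displays at each level of one invented criterion.
criterion_levels = {
    1: {"some relevant ideas"},
    2: {"some relevant ideas", "mostly organised"},
    3: {"some relevant ideas", "mostly organised", "supported by evidence"},
}

def best_fit(observed, levels):
    """Return the level whose descriptors the work matches most closely;
    ties are resolved in the student's favour (the higher level)."""
    def closeness(level):
        qualities = levels[level]
        return len(observed & qualities) / len(qualities)
    return max(levels, key=lambda level: (closeness(level), level))

observed = {"some relevant ideas", "mostly organised"}
print(best_fit(observed, criterion_levels))  # -> 2
```

The point of the model is visible here: the work need not satisfy a level descriptor in every particular, only fit it better than the alternatives.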

Students may request to see their own personal assessment record in the form of either reports or transcripts. However, they are encouraged to maintain their own record of assessments in their subject portfolios. Teachers may view student records – either on the network drive or in hard copy from the Head of Secondary’s office. Parents have access to assessment records in the form of term reports for their own wards, and will be provided with appropriate whole-school performance information by bulletin and scheduled parent/teacher conferences.

Who makes the assessments?

  • Teacher assesses student
  • Student assesses self
  • Student assesses other student(s)

It is almost self-evident that it is a fundamental aspect of the job of a teacher to make assessments of student work. However, it is also very important that students are given opportunities to assess their own work (self-assessment) and that of other students (peer-assessment). Such processes entail analysis, evaluation and synthesis; in other words, crucial elements in the acquisition of higher order cognitive skills.

What student attributes should be assessed?

It is important to be clear about what aspect(s) of the student is/are being assessed. The aspect(s) could be:

  • Aptitude: an assessment of the student’s academic ability or potential
  • Achievement: an assessment of the student’s academic performance (skills and knowledge)
  • Effort: an assessment of the student’s attitude to learning.

The distinction between the first two of these attributes is reflected in two types of testing:

  • Psychometric testing: calibrated short questions to measure aptitude in a particular area; the goals are discrimination and ranking
  • Performance Assessment: students expected to carry out tasks that directly reflect the range of knowledge and skills learned in the classroom (including higher order cognitive skills).

When and how are these attributes assessed at ARIS?

While aptitude, and by inference, potential, might be or should be the focus of assessment in an entrance (placement) examination, assessment of achievement has central importance in everyday school activity. The principal means by which achievement is recorded is through the use of quantitative marks and grades on student work and reports – representing both ongoing term work and examination performance. The various departmental policies outline the assessment tools employed in each academic subject in order to produce these marks and grades. Constructive comments on student work and in reports are likely to mention effort as well as achievement.

  • Achievement: assessed formally through marks and grades on individual pieces of submitted work, and through grades and comments attached to term reports
  • Effort: assessed formally through written comments on individual pieces of submitted work and on term reports.

Both can be assessed less formally through dialogue between student and teacher.

What are the characteristics of effective assessments?

There are two basic aspects of an assessment task that should be constantly kept in mind:

  • Validity: the extent to which an assessment actually measures what it claims to measure. (In boiling water at sea level at sunset in Tung Lo Wan Road in Hong Kong my thermometer registers 100 degrees Celsius. Is it accurate?)
  • Reliability: the extent to which similar assessments are consistent over time. (I do the same experiment in the plaza outside the Chilean Naval Academy in Valparaiso at midnight a year later. Is the thermometer reading the same?)

IB Diploma Programme example: Pedro from Buenos Aires gets a grade 6 in his Physics HL exam in November 2015. Amanda from Helsinki gets a grade 6 in her Physics HL in May 2016. Alex, the university admissions officer at MIT, needs to be confident that the two candidates have reached similar levels of performance. She wonders about the validity (Do these two Physics exams actually measure what they claim to measure?) and the reliability (Was the Spanish version of the exam in the southern hemisphere summer of 2015 consistent with the English version of the exam in the northern hemisphere summer the following year?).

In addition to the basic tenets of validity and reliability there are further considerations to be taken into account in devising effective assessments. For example, bias can be expressed as a difference in outcome of an assessment process that is not related to a genuine difference in the attribute being measured. This is to be avoided wherever possible.

Also to be avoided is overlap between the assessment criteria and/or the assignments. Are you measuring the same thing twice, and obtaining different results? To what is the difference attributable? Replication of assessment tasks can also overload students to the extent that the repeated task is approached with a negative attitude, perhaps further skewing the results.

In any assessment task there is a balance to be struck between validity and reliability. Validity means that the task measures what it is meant to be measuring; reliability means that the task yields consistent scores at different times and with different teachers.

If reliability is over-emphasized at the expense of validity, there is a danger that the task loses its relevance to the desired learning outcomes, and students become burdened with assignments that do not improve their performance. An extreme example of this situation would be setting a multiple choice test to measure students’ understanding of an historical period or comprehension of a passage from Shakespeare. Students may become de-motivated if they cannot see the point of the assignment or how it could help them.

If validity is over-emphasized at the expense of reliability, there is a danger that measurement of student performance will become inconsistent. While the desired learning outcomes might well be addressed, the student’s scores might not reflect this effectively, and hence they lose credibility. Students may be justified in feeling that their scores are unfair, and teachers may be hampered in making clear appraisals of students’ progress. In guarding against this scenario, detailed markschemes can help.
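
One crude check on consistency between markers can be sketched as follows (the two sets of marks are invented; real moderation procedures are considerably richer than a single correlation):

```python
# Sketch: a crude reliability check -- correlate two teachers' marks for
# the same set of scripts (all marks below are hypothetical).

teacher_a = [14, 11, 17, 9, 15, 12]
teacher_b = [13, 12, 16, 8, 16, 11]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length mark lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(teacher_a, teacher_b)
print(f"Inter-marker correlation: {r:.2f}")
```

A correlation well below 1 on the same scripts would be one symptom of the inconsistency this paragraph warns against, and a prompt to tighten the markscheme.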

Maintaining the balance between validity and reliability becomes particularly challenging when complex, higher-order skills are involved. The founder of the IB, Alec Peterson, put it this way:

"What is needed is a process of assessment that is as valid as possible, in the sense that it really assesses the whole endowment and personality of the pupil in relation to the next stage of his life, but at the same time is sufficiently reliable to assure pupils, parents and teachers, and receiving institutions, that justice is being done. Yet such a process must not, by its backwash effect, distort good teaching, nor be too slow, nor absorb too much of our scarce educational resources." (Alec Peterson, Schools Across Frontiers, Open Court, Illinois 1971)

Internal examinations

It is expected that internal examinations for both IGCSE and IBDP students will, overall, follow the formats of the external IGCSE or IB examinations. A mock examination and its accompanying markscheme should, as far as possible, be a replica of the ‘real thing’, though syllabus content not yet covered in class should obviously be excluded. By the same token, the physical environment in which such internal examinations are conducted should also replicate the ‘real thing’: security of examination question papers and markschemes, seating arrangements in the exam room, invigilator training and so forth should constitute a dress rehearsal for the performance. Administrators of examinations should never underestimate candidates’ ingenuity in finding shortcuts to success, and the requirements of external examining boards must be adhered to punctiliously.

A properly balanced examination paper should, taken as a whole, display a level of difficulty equivalent to that of a complete IGCSE or IB past paper. This equivalence legitimizes the use of published grade boundaries from recent IGCSE or IB examinations in the subject (IB data is available on IBIS and from the Subject Reports) as a guide in setting appropriate grade boundaries for the internal examination. Final mark totals or percentages for the internal examination can be converted in this way to generate a 1−7 grade (or its IGCSE equivalent) to describe the student’s performance, and thus serve as a reasonably reliable predictor of future performance, ceteris paribus.

Coursework

The situation described for internal examinations cannot possibly apply to the variety of tasks set by teachers during normal lessons and homework. A different way of arriving at grades must therefore be devised.

The rudimentary word descriptions for the 1−7 scale need to be interpreted in the context of Grade Descriptors that, in the case of the IBDP, are published on the OCC for each subject or group of subjects. The equivalent Grade Descriptors for IGCSE subjects are also available in the syllabus documents. Teachers must refer to these descriptors in order to develop and maintain an accurate understanding of what quality of work corresponds to each point on the IGCSE or IB scale for the subject/course concerned. Assigning a grade to a piece of work is a professional judgement made by the teacher in the context of this supporting documentation and his/her previous experience.

Grade Descriptors should also be consulted when setting an assignment for assessment purposes. In particular, a clear judgement must be made as to whether the assignment, on its own, provides an opportunity for the student to perform at a level corresponding to the top (eg A* or 7) level of the Grade Descriptors. For some simple tasks this may not be the case, but it is vital that most tasks given during the course of a term challenge students to attain this level of performance. If this approach is not adopted, the assignment will not adequately differentiate between levels of performance, perhaps causing a modal clustering and/or unrepresentative skewing/kurtosis in the distribution. Such an outcome should not be dismissed as a trivial statistical quirk, as it has a direct impact on the quality of the academic diagnosis and hence on future teaching. It may also be evidence of malpractice.

If the task allows the student to produce responses across the whole range of descriptors, then the student’s response must be evaluated by the award of a grade (eg 1−7). Depending on the subject and the nature of the specific assignment, this grade may be arrived at directly or as the end-product of a marking scheme (this is likely to depend on the degree of structure in the assignment). Where a markscheme is used, the teacher should make a judgement about what scores will correspond to which IGCSE/IBDP grades – again with reference to the published Grade Descriptors and the teacher’s professional experience in the matter.

If the task does not allow the student to achieve a level of performance consistent with the top end of the Grade Descriptors, the task cannot, on its own, be awarded a score on the A* to G or 1−7 scale. In other words, to set an assignment where the top grade on the scale used is not achievable would obviously be unfair to the student. In this situation, the mark or score awarded for the assignment should be combined with marks and scores given over the course of the term for other assignments of a similar type. For example, class quizzes might be grouped together, or minor homework assignments. By the end of the term, it should be possible to award a grade for the category of work as a whole, as long as the group of tasks, taken together, has allowed the student access to the entire range of performance grades.

A coda: Assessment should be neither feared nor revered

Without naming instances pejoratively, it is claimed that some assessment systems, within which tens of millions of young people are today striving to succeed, are deeply flawed. Some systems have, in some cases irrevocably, fossilised into dead imitations of living organisms. Their operating systems are arcane, obscure, impenetrable, unaccountable and frighteningly complex. Their assessment outcomes, including those star-quality grades of which students dream, are feared and revered. That is not just.

Such assessment systems are not serving the needs of the generation for whom they have been devised.

It was earlier maintained that assessment is a fundamental part of the educational process; it is not simply the end product of that process but rather an integral element of it.

Hence it is vital that, if assessments — including those outlined in this policy document — are to meet educational needs and serve educational purposes effectively, they must be capable of evolving. They must incorporate in their design mechanisms whereby they themselves are routinely subject to scrutiny, appraisal and evaluation or, to use a word by now familiar, assessment.