History 2001


On September 16, 2001, teams of educational researchers, assessment specialists and practitioners from around the world gathered in Chester, England for a three-day conference on assessment. Our purpose was to pool our collective wisdom on how best to improve the quality and impact of those assessments that have the most direct influence on student learning — the assessments developed and used by teachers day to day in the classroom. These are the assessments that inform the instructional decisions made continuously during teaching and learning by students, teachers and parents.


The meeting was structured to permit the development of ideas and strategies for bringing “assessment for learning” to the fore. This was accomplished by formulating a few key focused questions in advance and by conducting carefully planned discussions during the meeting. Participants included teams from Australia/New Zealand, Canada, Europe, the United Kingdom and the United States. A total of 37 educators took part in the meeting. Unfortunately, the conference was convened a few days after the September 11 terrorist attacks in the United States. As a result, most U.S. and Canadian team members were unable to attend.

The Focus Questions

Prior to the meeting, each participant was assigned to the focus group of his or her choice, whether research, professional development or policy. All participants were informed of the questions that would guide the discussions of their assigned focus group. Those questions are listed below:

Research on the Nature and Impact of Assessment for Learning

How can we most effectively form a synergy between assessment for learning and periodic summative assessment of learning in the classroom?
How can we best understand the relationship between assessment for learning and current theories of learning, cognition, pedagogy and brain functions during learning?
How can we better understand the roles and benefits of student-involved assessment, record keeping and communication in promoting student success?
What qualitative and quantitative evidence has yet to be developed that will help policy makers understand the power of formative assessment to impact summative assessment results?

Professional Development
What do teachers and administrators need to know and understand to use assessment to benefit students?
How can pre-service teacher training contexts be changed to permit the development of assessment literacy?
What specific professional development tactics and procedures should be implemented to help practitioners become assessment literate?
Should we verify or certify that practitioners are, in fact, able to assess accurately and use assessment to benefit student learning? If so, how?

Policy Priorities
What is sound assessment policy? In other words, upon what assumptions or beliefs should sound assessment policy be based?
How can we influence policy makers to move beyond the dominant assessment policies being formulated today?
How can we encourage policy makers to set consistent assessment policies (that is, noncontradictory policies) across levels and jurisdictions?
How can we use policy to enable/encourage student involvement in assessment, record keeping and communication?

The Schedule of Conference Activities
The schedule of meeting discussions was as follows: The morning of the first day was devoted to orientation and becoming acquainted with assessment issues around the world.

To begin with, members of each team provided overviews of the assessment environment in their country, describing the most common practices, identifying issues, and evaluating the extent of application of the principles of assessment for learning. Then the discussion leader for each focus group provided a rationale for the importance of each of the guiding questions listed above.

During the afternoon of day one, focus teams met for the first time to begin to formulate their answers to their assigned questions. The afternoon’s discussions and debates provided the basis for the formulation of initial answers to be presented to the entire conference the next morning.

On the morning of the second day, each team presented the results of their afternoon of work to the entire conference. In each case, participants asked clarifying questions, added new information and insights, offered specific advice to the reporting team, and considered the connection between their focus group’s work and that of the other groups.

That afternoon was devoted to the reconsideration of the key questions by each focus group. Each team reviewed their questions, reformulating them if necessary, and refined their answers based on the large-group presentation and discussion. In addition, during this session, focus groups began to think about specific ways to connect research, professional development and policy.

During the morning of the third day, each team offered its final report. These are summarized below. In conclusion, participants brainstormed connections among research, professional development and policy. Again, the results of that discussion are summarized below.


Introductory Comments:

The discussion of research priorities in classroom assessment was conducted with full awareness of the fact that we already have in hand compelling research on the impact of assessment for learning on student achievement. Unprecedented achievement gains can be realized. Further, we have in hand compelling research on the inferior quality of many classroom assessments due to a pervasive lack of opportunity for teachers to develop their assessment literacy.

1. How can we most effectively form a synergy between assessment for learning and periodic summative assessments of learning in the classroom?

This question reflects a fundamental position expressed by many conference participants. Although it is conceptually useful to differentiate formative and summative assessment, their respective functions and links to decision making, it is not productive to set up an opposition between formative assessment aimed at improving learning and summative assessment that certifies learning outcomes for society. In other words, the pedagogical and psychological dimensions of assessment should not be separated from the social and institutional dimensions. Given this premise:

  • Research needs to identify the forms of summative assessment that are compatible with the aims of formative assessment. For example, research on portfolio assessment is a promising direction for understanding the interplay between formative and summative assessment practices that foster learning.
  • Two lines of research need to be coordinated: (a) intervention studies or design experiments in which researchers actively collaborate with teachers to develop new ways of linking formative and summative assessments; (b) qualitative, descriptive studies of “ordinary practices” in order to understand how teachers presently link, or do not link, formative and summative assessment, and why these practices have evolved.

If summative assessment is considered as being a summary for a student at a particular time, considerable care is needed to identify what evidence gives the best picture at that time. Some evidence may be inappropriate because it is out of date, and the weighting of the appropriate evidence needs to carefully take into account the relative importance of the aspects assessed. Three forms of summative assessment need to be distinguished:

  • One form involves summing up current status: This could be criterion-referenced (describing current performance against curriculum criteria) or norm-referenced (comparing the student to others in the class, school or country).
  • A second form could be described as self-referenced or ipsative: a description of the amount of progress achieved over a period of time. This second form is more demanding because it requires sound summative judgements at two different times and careful comparison of these.
  • The third form could be described as predictive, and involves elements of the first two forms and projection into the future. Progress across time, together with assessment of current status (in the criterion-referenced sense), are used as the basis for predicting future achievement or suggesting strategies for enhancing future performance. This third form is the most demanding of all, requiring very sophisticated judgements.

These three forms often appear together. For instance, a teacher reporting to parents might indicate the student’s current status (norm or criterion-referenced), comment on the amount of progress achieved in recent months, and indicate likely areas of progress or steps needed to progress. A university student might be awarded a degree with second class honors, but taking into account evidence of rapid, recent progress might be recommended for a scholarship for Ph.D. study.

2. How can we best understand the relationship between assessment for learning and current theories of learning, cognition, pedagogy and brain functions during learning?

This question focuses on the problem of how to draw on current theories of learning and instruction to conceptualize assessment for learning. Formative assessment was initially developed within Bloom’s mastery learning model on the basis of what can be termed neo-behavioristic principles of instructional design (teach-test-feedback/correction). It is important to look at how formative assessment is transformed when it is based on other conceptions of learning and teaching. How is formative assessment carried out if it is based on constructivist or socio-constructivist principles, or on ideas of participation in a community of practice, as advocated by situated learning theorists? For example, Vygotsky’s notion of the zone of proximal development can be used to conceptualize the integration of “interactive formative assessment” within teaching/learning activities.

The following guiding questions emerged from the discussion:

  • We must explicate how various theories or views of learning – e.g., constructivist, social constructionist, meta-cognitive theory – consider assessment; what role does it play in each?
  • We must come to understand the relationship between motivation and learning. How does assessment for learning contribute to motivation? How might it contribute more fully?
  • Assessment for learning is not merely a matter of instruments and tools. It also resides in the organization of learning environments, classroom interactions, and the interpretations and uses of evidence of learning. To what degree is assessment for learning currently woven into each of these?
  • What are teachers’ theories of learning? What are students’ conceptions of learning?
  • How do these link? What do they believe about acceptable evidence of learning and the role of assessment?
  • What alterations would move students closer to an understanding of the attributes of quality work? What conversations are going on about the nature of quality? How is it negotiated? Who gets to decide? What does this say about assessment for learning?
  • How does formative assessment influence students’ understanding of what they already know, how they use feedback, and the development of self-assessment, monitoring and correction?
  • Assessment for learning puts great pressure on teachers to be expert in their subjects. To what extent can and in what ways do they identify the errors, diagnose misconceptions, plan for student involvement, etc.?
  • We want to describe the trajectory of change in teachers’ practices of formative assessment, as a reflection of their understanding of learning, and in their beliefs about assessment.
  • What kinds of professional development most effectively change the way teachers view the learning process and thus engage in assessment?

3. How can we better understand the roles and benefits of student-involved assessment, record-keeping and communication in promoting student success?

The group reflected on what constitutes student involvement in assessment for learning.

One end of the continuum would consist of feedback and modification of teaching, with student involvement limited to changing behaviours under the control of the teacher; the other pole would entail a social constructivist view in which students are intimately involved in understanding their own learning and in planning what to do next. In the context of this kind of change, issues of power are central.

Teachers are concerned about transferring power to students. Key research questions revolve around how to decide who should have power when. Answers will emerge from deeper understanding of how assessment relates to learning. If the goal is student progress towards self-directed learning, growing involvement and skill in self-monitoring (self-assessment, self-evaluation) is critically important. The research on metacognitive reflection and regulation, and on mindfulness and intentionality in learning, provides valuable perspectives in this regard.

Student involvement is often analyzed in purely individual terms, i.e., there is a focus on each individual’s involvement in a personal trajectory of learning. We need to analyze involvement in a more social, interactive perspective; this means looking at student involvement in assessment practices, which entail whole-class interactions and peer interactions that contribute to a classroom-learning trajectory. There is a growing recognition that student-student interaction is a potentially valuable source of assessment for learning. Students receive a substantial amount of feedback from their peers, often much more than teachers can manage for each student because of large class sizes. Research placing microphones on individual students has demonstrated very powerfully that learning of particular concepts or skills is often achieved through student-student interaction. It also shows that student-student interaction can sometimes be quite negative, in unobtrusive ways that the teacher does not necessarily notice.

The literature on cooperative learning has nevertheless produced a strong consensus that this mode of learning usually has benefits for all students in amount learned, depth of learning, motivation, and social attitudes and skills. Vygotsky’s social-interactionist model of learning has emphasized the importance of scaffolded learning, with students being assisted by others in their “zone of proximal development.” All of these considerations suggest that attention needs to be paid to the culture of assessment created by peer interaction and by student-teacher dialogue.

4. What qualitative and quantitative evidence has yet to be developed that will help policy makers understand the power of formative assessment to impact summative assessment results?

This question on the nature of evidence includes several sub-questions:

  • What types of qualitative and quantitative evidence is current research able to provide?
  • What types of qualitative and quantitative evidence are likely to be convincing for professional development programs and for educational policy makers?
  • What types of qualitative and quantitative evidence need to be developed in future research?

The priority research topics, for which both qualitative and quantitative evidence needs to be collected and synthesized, were defined as follows:

  • Understanding students’ and teachers’ perceptions: More work is needed on understanding the experience of assessment for learning from the perspective of the participants in the process; we must listen carefully to the teacher’s and the student’s voice.
  • Implementing change in assessment: We need to study and come to more deeply understand how best to initiate, implement, and institutionalize change in classroom and school practice, particularly given continuing pressures for assessments of learning for public accountability.
  • Anchoring assessment in learning theory: Further investigation is needed into the relationship between assessment for learning and the broader field of research on intellectual endeavor; explanations are needed of how learning and pedagogical theory fit together to promote a positive impact of assessment.
  • Links to school subjects: Much of the research on assessment for learning and its impact on students has been conducted within specific subject matter domains. Future studies should delineate more precisely the generic features and the subject-specific aspects of sound assessment and their coordination.

In addition, the following research orientations emerged:

  • It is important to preserve and strive to understand the complexity of the phenomenon under study, that is, assessment for learning. If there is a choice to be made in conducting our research, we should sacrifice precision to address complexity.
  • Large-scale research findings need to be connected to smaller naturalistic studies so that they are both considered as complementary windows on classroom practice.
  • Funding is not sufficiently available for classroom assessment research: Evidence should be mobilized to convince funding agencies of the power of this means of school improvement.


1. What do teachers and administrators need to know and understand to use assessment to benefit students?

Although there was some disagreement in the group about the narrowness or broadness of essential things educators need to know and be able to do by way of classroom assessment to benefit students, there was broad agreement that existing research provides a basis for determining the priorities. Everyone agreed that these basics are essential:

  • Educators need to be crystal clear on what they want students to know and be able to do. They need to have strategies for making sure that students understand these learning intentions for each and every lesson.
  • Educators need to know how to productively involve students in self-assessment.
  • Educators need to be able to provide feedback to students that is specifically related to the learning intentions targeted. This feedback needs to be descriptive of what was done well, be descriptive of what needs to be improved, and include specific suggestions for how to make improvements.
  • Educators need to know effective questioning strategies that improve learning and student motivation.

The following would be added to constitute a broader view of what educators need to know and be able to do.

  • Educators need to know how to generate dependable information about student learning to plan next steps and provide students with accurate feedback. This includes understanding the various assessment options (selected response, essay, performance assessment, and personal communication), when to use each, and how to design them so that they generate accurate information.
  • Educators need to know how to involve students in all forms of assessment (selected response, essay, performance assessment, and personal communication) to maximize student learning and well-being.
  • Educators need to be aware of all users and uses of assessment information so that assessments and communication can be designed with the end user in mind. For example, if the purpose is to involve students, educators need to know how to design the assessment materials and context to best do this, and need to know how to provide specific feedback. Other purposes (e.g., communicating with parents, or generating information for use outside the classroom), however, might require different assessment designs, materials, and/or communication techniques.

2. How can pre-service teacher training contexts be changed to permit the development of assessment literacy?

Those responsible for the development of assessment literacy in this context should be certain that the following points are emphasized:

  • All assessment must connect directly to and support student learning
  • Teacher candidates should come to see themselves as action researchers in the classroom, using assessment results to inform adjustments in practice
  • The teacher education process must engage candidates as learners through the effective use of quality classroom assessment practices; that is, professors must model sound practices within their own courses
  • The content of training should include everything listed in response to the first question
  • Candidates must learn about the assessment environment in which they will work— both when that environment is healthy and unhealthy. They must have the opportunity to understand how to support effective environments and how to change ineffective ones
  • They must have the opportunity to understand how policy influences the larger assessment environment in schools, so they can become proficient at influencing policy
  • Faculties of teacher education must, themselves, become assessment literate and take responsibility for passing those competencies on to new teachers

3. What specific professional development tactics and procedures should be implemented to help practitioners become assessment literate?

Sound professional development:

  • must provide opportunities for dialogue between learners;
  • must be based on the needs of the participants;
  • must involve ownership by the participants;
  • should involve some choice of strategies/type of involvement for the participants;
  • needs to be at appropriate times in the day and school year;
  • must be ongoing, sustained, and supported with the necessary human and financial resources;
  • needs leadership at the system and/or school level;
  • must be seen as a key professional responsibility by teachers and school leaders;
  • must involve situated learning, i.e., opportunities for participants to learn that arise directly from their work situation;
  • is most powerful when models, modeling and/or student work samples are used—this allows for learning through concrete examples;
  • must provide opportunities for critical analysis and/or reflection on the assessment practices of self and others;
  • must provide opportunities to practice, take risks and flounder with the expectation that ‘it’ will not be perfect the first time.

There are a variety of ways to generate initial interest from stakeholders in professional development in assessment for learning. These should be ‘mixed and matched’ to participants’ roles, interests and needs. Examples of choices are:

  • enlightened self interest, i.e., how assessment for learning will make their job easier, better, etc.;
  • cognitive dissonance, i.e., presenting information or data which makes participants intellectually uncomfortable;
  • engaging participants in the analysis of student work;
  • presentation of new information designed to inspire and/or inform. (This is where current, relevant research evidence should primarily be used.);
  • and mandated exposure to models, ideas and policies.

The following were identified as effective professional development approaches, techniques, or models. The approaches can be used in various combinations, as appropriate to particular circumstances. Therefore this is also a ‘mix and match’ list. The approaches could also be used at various levels, i.e. with individual teachers, groups of teachers, whole schools, groups of schools and the wider communities that schools serve. The list is not intended to suggest that some approaches are necessarily more effective than others, although the sub-group that identified these strategies did think that the first four can be particularly effective. There was some discussion of problems in sustaining changes in professional practice when support is withdrawn.

Learning teams within schools may be particularly effective in addressing this problem, as they allow teachers to continue to support each other. Learning teams – the collaborative learning of small groups of educators – may be formed on the basis of year levels, centres of interest in assessment, within-school groups, cross-school groups, or subject areas. Their purpose would be to share ideas, develop resources, critique practice, solve problems and provide support for each other. The approaches identified were:

  • Sharing best practice
  • Observing colleagues
  • 1:1 coaching and support
  • Drop in ‘clinics’
  • Showcasing new developments and strategies with colleagues (within schools and across schools)
  • Development of support materials including exemplars, manuals, books, magazines, videos, appropriate websites
  • Development of classroom action research projects
  • Panels/Forums for listening to student, parent, and teacher voices
  • Interactive workshops – targeting specific needs, specific people
  • Keynotes, presentations, lectures
  • Courses – onsite, online

The professional development team identified a number of issues that will need attention in the near future, including:

  • The importance of fully inducting teachers (and parents) who are new to a school. The induction programme should clarify a school’s approaches to teaching and learning, including its assessment practices.
  • How to make professional development self-sustaining for individuals and schools. The programme must ensure that momentum for ongoing development in assessment can be maintained, independent of the professional developer and/or the leader.
  • Professional development should be linked to teacher evaluation/performance management and school review – with a focus on growth and development.
  • Research supports the crucial importance of formative assessment. Should professional development in this area be a choice for teachers already ‘in service’ or should it be mandated?

4. Should we verify, certify, or assure that practitioners are, in fact, able to assess accurately and use assessment to benefit student learning? If so, how?

There was general consensus in the group that competence in the techniques and processes of Assessment for Learning is one of the defining attributes of a successful teacher. As such, we identified two main implications for the profession as a whole:

  • that there is a need to ensure that all teachers are competent in assessment for learning;
  • that there should be appropriate opportunities and support for professional development in the area of Assessment for Learning, and appropriate recognition of this competence (e.g. by accreditation or certification).

We considered that the first of these is problematic inasmuch as it could be met by regulation from outside the profession e.g. by government-imposed testing of teachers’ competence in the area. Our view was that it was important for the profession to ‘regulate its own’ through its representative bodies, such as the general teaching councils in each of the countries of the UK. Assessment for Learning must be consolidated as a central professional competence issue and a focus for initial and continuing professional development.

In order, then, to provide appropriate opportunities and support for teachers to develop their competence in Assessment for Learning, we considered that the professional development sector of the profession should take the lead by promoting Assessment for Learning’s well-documented success in improving learning. Assessment for Learning is taken for granted in many contexts and we would argue that its importance should become embedded in routine professional discourse. The means by which development of the necessary awareness and skills can be supported could borrow from other professions. Examples include long-term mentoring arrangements, which use experienced practitioners accredited by professional bodies as mentors, and career-long evidence of successful professional development and achievement through the keeping of a regular log or portfolio of professional development.


Introductory Comments:

Early in our discussion it became clear that the definition of sound assessment policy and the keys to influencing policy makers vary as a function of the context within which policy is being set. For example, assessment policies are set at classroom, school, district or board, state or province, and federal levels. For ease of consideration (and to preserve our collective sanity!) we opted not to analyze those differences. Rather, we selected one level, the local school district or board, and collected our thoughts about managing assessment policy at that level. We assume that conference participants can extrapolate to contexts relevant for them.

As our deliberations continued, it became clear that we could consolidate our four assigned questions into two. For instance, our success in helping policy makers set sound assessment policies (Q2) will directly impact our ability to help them set consistent policies across levels (Q3). Further, as we teach policy makers about the “assessment for learning” beliefs and assumptions that we feel should drive their assessment policy settings (Q1), hopefully, the policies they set will encourage student involvement in the assessment process (Q4). For these reasons, we offer the following answers to two sets of questions:

   1. What is a sound policy? That is, upon what assumptions or beliefs should sound assessment policy be based? and
   4. How can we use policy to encourage educators to involve students in the assessment for learning process?

In our opinion, enlightened educational policy aims at helping students become privately happy and publicly productive. In that regard, we believe that sound assessment policies arise from an understanding of several overarching principles that should guide the development of effective schools. These are the guiding principles that we must help policy makers at all levels believe in and understand:

  • Achievement expectations must be clearly articulated to cover all essential life-long learning outcomes
  • Achievement expectations must be prioritized so as to avoid overcrowding the curriculum; that is, to help educators make choices in the face of limited resources
  • Schools must accommodate students with special needs at both ends of the achievement continuum by organizing schools to permit students to grow at a rate appropriate for them
  • In that regard, rigid adherence to age cohort grouping must be broken to accommodate different routes to learning and different rates of learning
  • Schools must operate in unwavering partnership with the student’s home
  • Assessment and instruction must keep students in touch with their own learning processes, so they can learn to manage their own learning effectively
  • For this reason student involvement in the assessment, record keeping and communication process is essential
  • All assessments, whether large-scale or classroom, must be of high quality, producing accurate information about student achievement
  • Assessment practices must be evaluated in terms of their ability to reduce student failure to learn; that is, they must be studied in terms of their impact on the social and economic consequences of that failure

Assessment policies that are consistent with these principles are likely to guide educators to enlightened practice.

   2. How can we influence policy makers to move beyond the dominant assessment policies being formulated today? and
   3. How can we encourage policy makers to set consistent policies across levels?

Several specific tactics emerged which, considered together, provide a road map for influencing policy makers at all levels to reflect on the wisdom of their assessment policies. We can communicate most effectively with policy makers and thus expand their assessment policy horizons in productive ways by doing the following:

  • Tune into the specific priorities and needs of the policy maker. If we approach the matter of sound assessment policy with their most current issues in mind, we can show them how “assessment for learning” policies can help them advance their own social policy agendas. In other words, align the sound assessment policy message with broader social priorities so policy makers can come on board for what they believe to be the right reasons.
  • Center on the benefits to stakeholders of setting assessment policies that promote learning. Answer the questions, who wins if we assess for learning and how do they win? The answers, of course, center on the results of existing research syntheses. But don’t stop there. We can then go on to explore the implications of those results for reducing the costs of extensive school failure.
  • Reframe messages about sound assessment practice and expected impact on learning for different audiences. Lay audiences, policy maker audiences and members of the professional community of educators are likely to hear and understand different versions or parts of the message. One message will not fit all listeners.
  • Make the message brief and powerful. Identify the key persuasive points to be made and capture them in the fewest possible words for delivery to policy makers.
  • Present the same message over and over. This gives message receivers multiple opportunities to hear it and permits them to tune into different parts on different occasions.
  • Educate policy makers on the differences between sound and unsound assessment policies and practices. This schooling process will take time and effort but will pay off. Policy makers at all levels strive to do what they believe to be best for schools and students. We can influence the standards by which they decide what is best by revealing to them the broad array of assessment alternatives that we have at our disposal and by showing them our evidence of the expected impacts of each option on student learning.
  • Work with the media to be sure they understand key assessment issues and can serve as allies in the presentation of a clear and complete picture of student achievement.
  • Become a partner in the policy making process by serving on decision making boards, offering technical assistance locally where needed, and serving as a resource for development of assessment literacy in the policy setting arena.


We concluded the final day of the conference by brainstorming ideas that capture the interconnections among the three domains of our deliberations. The following guidelines emerged from that process:

  • Document and synthesize research on sound “assessment for learning” policies and practices internationally. What is working, where and why? What options have been developed in different cultures? What models of excellence can we learn from? The broader our base of technical understanding, the easier it will be to influence policy and practice.
  • Package compelling research for different policy audiences. Some will understand the researcher’s version and, to have credibility with them, the message must be cast in that form. Others will tune into the layperson’s version, because that is the only version they will understand.
  • Investigate options for demonstrating accountability. We must research ways of using classroom assessments to demonstrate student mastery of key standards. Similarly, we must ensure maximum utility of large-scale assessments for instructional decision making.
  • Connect research on learning and cognition to our advocacy of particular assessment practices. What does research tell us about the state of the human mind when it learns with greatest ease? What does that tell us about the assessment practices that we should be using?
  • Conduct longitudinal studies of the impact of student involvement in assessment on achievement. Document impact on student motivation to learn and on life-long learning.
  • Mobilize all key stakeholders, including parents, community groups, business leaders, and professional educators, to influence policy makers to move toward greater investment in “assessment for learning.” Our first priority must be an investment in teachers, providing them with the opportunity to become assessment literate.
  • Research productive models of professional development. Over the past decade, valuable lessons have been learned about helping adult learners grow within the safe harbor of learning communities. We can take advantage of those lessons and build upon them for the development of assessment literacy.
  • Continue to expand our research base on how teachers and students make meaning of the assessment process and how that process impacts them personally. We must come to understand assessment as seen through their eyes.
  • Work on the terminology that we use to communicate about “assessment for learning.” Simple clear vocabulary with agreed upon meaning will make it easier to communicate with lay audiences such as parents and policy makers.
  • Document and share examples of sound policies and practices—networking will be crucial.


In follow-up communication immediately after the Chester meeting, participants were asked to reflect on the most important new insight to come to them as a result of our time together and on whether those insights would impact their work. The following paragraphs are written in the first person in order to profile what appears to have been our collective reaction to the conference. The passage combines each respondent’s most important insight, some included as direct quotes, into a coherent whole. In describing our most important new insight to come from the conference, it is as if we each were writing the next sentence for this passage:

I am struck by the fact that we know so much more together than any of us does alone. We are separate nations growing independently, each with our own language, policies and traditions. Yet we all came to the same conclusion at roughly the same time for the same reasons. Now we appear to share the same commitment to “assessment for learning.” Sure, we stumble over terminology here and there. We may even harbor different beliefs about teaching and learning. We need time to mediate those differences. But it is clear that assessment for learning is an idea whose time has finally come. If we make it a world project, we can impact the lives of so many children.

I was reaffirmed in Chester. Our interactions renewed my belief in the pointlessness of teaching to the test. I believe more strongly than ever in the power of teachers to guide assessment in far more productive ways. They can help students learn by involving them in the assessment process. My faith in the power of student-involved assessment is re-energized. Now I am certain that research, policy and professional development can work together powerfully in the service of assessment for learning.

Our tasks are clear. We need to involve others, as other nations are moving productively down this same path and we can learn from them. We need to communicate about assessment for learning at many different levels in each culture, questioning traditionally unquestioned beliefs about the role of assessment in school improvement.

We have to find ways to overcome the “why bother” attitudes among so many practitioners, influencing their minds and spirits to make a difference. We need to tie together our research, professional development and assessment policy to make an even more compelling case for assessment for learning.

Chester represented an initial investment by each of us, as we traveled on limited resources during troubled times to share our experiences. Now we must maximize our return by continuing to grow together and by spinning lessons learned from each other into policies and practices that fit each of our unique cultures. The Chester conference already is impacting my daily thinking, presentations and work. I refer to the experience and the lessons I learned there almost daily now in my interactions with colleagues.

The Proceedings of an International Conference
Assessment Training Institute Foundation
50 SW 2nd Ave., Suite 300
Portland, Oregon 97204 USA
November, 2001