District, Union, and Community Collaboration: Massachusetts Consortium for Innovative Education Assessment
Consortium leaders discuss how their model – based on collaboration among districts, teachers unions, and community organizations – aims to change the way school quality is assessed.
Since 1996, Massachusetts’s accountability system has been defined by a single standardized test. While Massachusetts has received accolades for its high scores on the National Assessment of Educational Progress (NAEP) and Program for International Student Assessment (PISA) tests, absolute results from standardized tests tend to correlate strongly with family income and parental education (Reardon 2011). The state’s aggregate scores mask significant inequities; Massachusetts ranks in the bottom third of states for the size of its achievement gaps by race, income, and language. Additionally, the narrow focus on one set of metrics – test scores in three subject areas – has incentivized narrowing the curriculum and teaching to the test, particularly in urban districts serving diverse students, because of the pressure to avoid being designated an underperforming school or district.
The Massachusetts Consortium for Innovative Education Assessment (MCIEA) was founded in March 2016 by a group of school districts, teacher unions, partner organizations, and a key state senator, with the goal of creating a new multiple-measures accountability system. The system is grounded in the belief that student learning and school quality can be assessed through methods far richer than a single standardized test, methods that provide a deeper, more dynamic understanding of students and schools.
The consortium is creating a multiple-measures school quality dashboard in the areas of teachers and the teaching environment, school culture, resources, student learning, and civic engagement and well-being. The primary means of assessing student learning will be robust teacher-generated, curriculum-embedded performance assessments. The completed dashboard will provide parents, students, educators, community members, and policymakers with a comprehensive portrayal of how a school is progressing across all the areas that contribute to students’ social-emotional and academic growth. Rather than a single score, the dashboard will show the areas in which a school is doing well and those where more progress is needed, providing more complete and accurate data for determining the improvements that need to be made. Such a dashboard eliminates the need for the single scores, ratings, and levels that currently exist merely to sort students and schools.
The consortium’s governing board consists of superintendents and teacher union presidents of consortium districts, with the Center for Collaborative Education (CCE) providing coordination support and facilitating the performance assessment initiative, and a team from the College of the Holy Cross facilitating design of the school quality dashboard.
CCE’s executive director Dan French sat down with Dianne Kelly, superintendent of the Revere Public Schools (a consortium district), and Erik Fearing, Revere Teachers Association president and MCIEA co-chair, to discuss the consortium.
What is the primary goal of the Mass Consortium?
Erik: The biggest impact would be a change in the state education culture from a focus on punitive accountability and multiple-choice testing to a holistic recognition of student knowledge and the value of schools and districts within their communities. I want to see teachers’ professionalism recognized and have them be involved in the formal assessment of students and schools. We need to move away from the extreme focus on one limited style of assessment – standardized testing – and on limited subjects, and recognize the broad value of schools across all academic subjects as well as non-academic areas.
Dianne: Right now, in this state and across the country, there is a false narrative about the efficacy of public education. We constantly hear in the media about failing schools and schools that aren’t meeting the accountability standards in Massachusetts. There are many powerful voices in the commonwealth that support charter schools under the guise of our “failing public schools” and the idea that the public schools don’t innovate or meet individual student needs. Those are untruths about what is really happening in public education today.
The Revere Public Schools is a perfect example of an urban school district in which we have high levels of poverty and a diverse student population, and yet our schools are succeeding. But there is no one beating down our doors to do a story about that; instead, they are talking about how bad the public schools are. So we need to find alternative ways of demonstrating that our public schools are, in fact, succeeding.
The idea isn’t that we abandon accountability. Rather, we want to abandon having one test be used to make judgments about a student or school. A number on a standardized test does not speak to a student’s unique needs; single scores do not adequately describe the kinds of successes that a particular child may have had in school. For example, for a student with an interrupted formal education and the social-emotional effects of living in a war-torn country, success might be to spend an entire day in school with classmates and not have an emotional meltdown. But we don’t get to talk about that when all we talk about is a student’s score on PARCC or MCAS (Massachusetts’s standardized test).1 MCIEA creates an opportunity to assess schools holistically for how they are able to help students achieve instead of looking at narrow, nondescript, decontextualized, single test scores.
Why did you want the Revere Public Schools to join MCIEA?
Dianne: Our current accountability system highlights a narrow area of focus. We should value academic disciplines in addition to mathematics, English language arts, and science. We seek to create a different assessment system that incorporates performance assessments so that, for example, students can articulate their knowledge of mathematics through an artscape they create. We want to enable students to show us what they know and can do in multiple ways instead of pigeonholing them into “show me this one way to do this, which is the way I want it done, and if you can’t, you’re a failure.” MCIEA creates an opportunity for students to express themselves through learning that engages them.
Erik: One reason I am interested in MCIEA is to get a better measure for districts. The current accountability system2 is punitive for districts with high percentages of low-income students, almost guaranteeing that these districts will be at the bottom of the list. We are looking to create measures that everyone believes in other than scores from multiple-choice exams.
How does a district become a member of the consortium?
Dianne: From the very beginning, we wanted to be sure each district was represented by a superintendent and local teachers union president. The truth is, regardless of what the superintendent wants to implement, whether or not it happens at the classroom level is up to the teacher. So if the teachers are not on board from the very beginning in making decisions and shaping the program, then it would be less likely to succeed.
Erik: This has to be a grassroots effort, something that teachers believe in. Having union leadership at the governing-board level gives the effort credibility and empowers teachers in the decision-making process. It’s not easy to build a relationship of trust between a superintendent and a teachers union president where you can have genuine collaboration. There is a power disparity between them. So having both parties on board, and having the broader membership across the district behind those people, is crucial to staying in.
What is the balance of the work between school quality and performance assessment?
Erik: In the beginning, each district entered the consortium for different reasons, and some weren’t sure they were going to commit to both paths. In the end, though, every district committed to engage in both parts.
Dianne: Both avenues of work are extremely important. In Revere, it made sense from the very beginning to get in on both of them and not feel left behind on one or the other.
What is the school quality measures work going to look like?
Erik: We want to completely overhaul the measures used in determining school quality. We are asking stakeholders within each district – including parents, teachers, students, and administrators – what they think makes a great school. We will use the answers to build a school quality measures framework, and then gather available administrative data and develop surveys for different stakeholders, such as gauging whether newcomer English language learners and their parents feel welcome in school.
Dianne: Revere High School has recently received two national awards because of our work in welcoming a diverse student population to the building and meeting the needs of our diverse students. The process leading to the awards involved site visits with teams from multiple states who dissected the curriculum and enrollment, examined whether students of color were well represented in Advanced Placement classes, observed the quality of the advisory period, analyzed discipline data, and gauged the relationships among students, between adults and students, and among adults. They dissected the entire school. In both cases, Revere High School was the only gold school winner from New England. Hugely impressive.
Yet, in the Massachusetts accountability system, Revere High School is at the 22nd percentile in performance, a ranking based almost completely on a single standardized test average. With a handful of lower MCAS scores, Revere High School would have been classified as an underperforming high school in need of intervention by the state. Clearly, the time is right to be talking about why Revere High School is determined to be outstanding at the national level but deemed borderline in trouble at the state level.
What is the thinking behind moving from an external testing company to teacher-generated performance assessments in order to assess student learning?
Erik: Standardized tests are efficient at providing scores and a ranking, but they are not very effective at actually assessing learning, knowledge, and skills. With performance assessments, teachers are developing, administering, and scoring tasks. Teachers, working together, will be the ones who examine student work and determine whether it meets the proficiency benchmark. This process recognizes the professionalism of teachers. Instead of devoting so much time away from the curriculum to taking these external state tests, we’d much rather have these well-vetted, thought-out performance assessments where we are getting better information about student knowledge and capacity from an assessment that is part of the curriculum. Students will learn something from taking the test, and we won’t lose instructional time.
Dianne: What is exciting for me is to get at the question, “What is the purpose of assessment?” If our purpose is to assign a numerical value that determines the rank of a school against all other schools that serve students at a particular grade span, I’m not sure who that helps. The Massachusetts accountability system automatically says that 20 percent of schools at each grade span have to be failing. It doesn’t matter how good those schools are; somebody has to be in the lowest 20th percentile and labeled as failing. I don’t know how that system speaks to what our kids know and are able to do. It’s nonsensical, really.
Assessment should be a way to inform teacher practice and help students (and parents) understand their progress. Giving students choice about how they are going to articulate their knowledge and skills advances their learning. Engaging in that type of performance assessment is a much more valuable use of time, effort, energy, and resources than to associate a particular number with a particular student and with a particular school.
What will implementation of the performance assessment work look like in MCIEA districts?
Erik: This first year, we’ve started with professional development for creating and piloting performance assessments. There are thirteen schools in this first cohort. Each school has determined the grade level and subject areas represented on the lead teams. Over four years, we expect to engage every school in each consortium district in the performance assessment work.
Dianne: I envision a time down the road when using performance assessments in class is just a routine part of what we do. And teachers meeting in teams to review student work and refine performance assessments is a cyclical thing we do in order to determine who is achieving understanding of the curriculum, who needs additional assistance, and what additional assistance we need to give them.
Will participating districts be exempt from MCAS and from the underperforming designation?
Dianne: Not while we are building our accountability system. Ultimately, though, the goal is to provide robust data about student achievement and school progress; at that point, we will sit down with the state and ask it to apply for a federal waiver that would exempt participating districts from MCAS.
Erik: We would like to see the current performance ratings – the single-number scores given to schools and districts – go away. There is a lot more to a school than is measured on MCAS.
How will participating districts ensure technical quality?
Dianne: The fact that teachers are meeting in cross-district groups to vet the assessments and score student work will contribute to ensuring technical quality. Teachers are going to receive substantial professional development on how to write an effective performance assessment with rubrics and how to assess appropriately. What is important for us to work on as a district is how teachers can work together to make sure that implementation throughout their schools is of high quality.
Erik: Once we have draft tasks from multiple districts, cross-district teams of teachers will be able to look at them and get a second set of eyes on them. We will also be partnering with the Center for Assessment to ensure the right technical quality measures are in place.
How does Revere envision providing adequate professional development time to implement performance assessments?
Dianne: In order to be selected for this work, each principal and school leadership team had to agree to devote a good chunk of professional development time to building school-wide faculty capacity to create, validate, and score performance assessments. That will help pollinate the work across a school. As well, in Revere we allow teachers to select and sign up for ten hours of professional development in any area that interests them; working on performance assessments will be one area in which they can choose to focus.
Erik: The school leadership teams participating in the performance assessment institute have spent time putting together implementation plans for building capacity within their own buildings. And we’ll have another cohort of schools going through that same process next year. In the long term, there is a question of ensuring we create a high level of expertise. We have a good amount of school-based collaborative time, so teachers can work on and share practice around performance assessments in professional learning groups.
Why do you think the state legislature supported the consortium’s work by including a budget line item to support MCIEA?
Dianne: I think even our legislature understands that it is time, almost twenty-five years after MCAS was introduced, to reflect on what we have learned and set new, loftier goals for our schools and our students’ achievement. Over recent years, we have been able to identify effective best practices in instruction and assessment. We’re well positioned to move forward into this new era of assessment and look at the purpose of assessment differently than we did twenty-five years ago.
Erik: The 1993 Education Reform Act set a vision for a multiple-measures state assessment. Unfortunately, the test that was created did not hit the mark; we ended up with largely a multiple-choice test. We are looking to capture that spirit again, to create an accountability system that measures everything we want to measure and that the legislature wanted to measure in 1993. We have a lot more capacity now to work toward that goal.
What challenges have you faced in launching the consortium?
Erik: It has been important to get buy-in from all union members. The first step has been giving teachers enough information to ease the anxiety of an unknown initiative. We are talking about big changes in assessment practices, and change is hard. Many teachers feel that MCAS was misused and worry that MCIEA assessments will be similarly misused, so a big piece is communicating with all teachers so they understand the work fully and can get on board. Being responsive to people’s concerns meant slowing down our start-up a little bit, but doing so has positioned us well for moving forward.
Dianne: I think there is a historical context where teachers sometimes think that new initiatives are coming down from on high and the union doesn’t have a say in what it’s going to be or even whether they want to do it. We had to make it clear that joining MCIEA was a joint district-union initiative and that teachers would have a say. Whenever we talked about MCIEA to teachers, Erik and I talked about it together. That made a difference.
To sum up, what’s the message you most want to convey about the work of the consortium to the public and policymakers?
Dianne: Believe in us. We need less testing and more assessment for learning rather than assessment of learning.
Erik: Schools aren’t failing – that’s just a narrative that policymakers decided to write and have stuck to for a long time to maintain a certain power structure.
1 These standardized tests are given to Massachusetts students from third through twelfth grade. PARCC (Partnership for Assessment of Readiness for College and Careers) tests cover English language arts and math; MCAS (Massachusetts Comprehensive Assessment System) tests are given in English language arts, math, and science.
2 Massachusetts’s public schools are sorted into levels from high-performing to lowest-performing, based predominantly on student scores on the state’s standardized test; test participation rates and graduation rates (at the high school level) are minor factors in determining levels.
Reardon, S. F. 2011. “The Widening Academic Achievement Gap between the Rich and the Poor: New Evidence and Possible Explanations.” In Whither Opportunity? Rising Inequality and the Uncertain Life Chances of Low-Income Children, edited by R. Murnane and G. Duncan. New York: Russell Sage Foundation Press.