While up in Albany a few weeks ago, I was interviewed by someone from NYSED about what I might say to parents who are considering “opting out” their child from state testing. You can view the video here*.
Someone on Twitter, “WiffleCardenal,” raised a critique of the video, contrasting it with things I’ve said about testing in the past. In fact, they even tweeted quotes of my own words! I deeply appreciate that someone out there is actually listening and willing to take the time and effort to hold me accountable. I’ve elected to respond here, since Twitter isn’t the greatest venue for nuanced discussion, especially at the end of a long day, and I also hate typing things on my phone.
@mandercorn Hi Mark, I’m trying to square your new NYSED commercial with your thoughts in a chat with the Nation a few years back.
— WiffleCardenal (@WiffleCardenal) March 31, 2016
This is in reference to a live chat I did back in 2012 on The Nation’s website with journalist Dana Goldstein and educator Tara Brancato. Have my views shifted since then? I would say they have in some ways.
@mandercorn You said:”Higher order questions don’t square with multiple-choice assessments.” Are the MC ?s on these exams good?
— WiffleCardenal (@WiffleCardenal) March 31, 2016
You know, honestly, they’re not as terrible as I thought back then. I proctor these tests each year and go through the experience of answering the questions along with my students. The questions are often cognitively demanding and require multiple reappraisals of the text in question. A few of them are duds, certainly, but having tried to write many of my own text-dependent questions since then, I’ve come to appreciate a well-written multiple-choice question. Check out this post from Joe Kirby (UK educator) on the rationale for using multiple-choice questions for assessment.
@mandercorn You said, I think disapprovingly (?) “It has become quite clear that the primary purpose of testing is to evaluate teachers…”
— WiffleCardenal (@WiffleCardenal) March 31, 2016
Unfortunately, this continues to hold true. In reaction to this, the Center for American Progress recently created a “testing bill of rights” to advocate for better aligning tests with a more meaningful purpose.
This doesn’t mean, however, that I’m opposed to having test scores factor into my own evaluation or my school’s evaluation. When scores are considered over multiple years, I think they can be an important and useful measure of teacher effectiveness. But they are extremely variable, so I would only want them to be considered alongside other data that can provide adequate context.
One of the things I’ve become more aware of over time is that while our testing and evaluation schemes are extremely problematic, accountability and testing, viewed in the big picture, do bring transparency to how schools serve populations of students that were traditionally ignored. No Child Left Behind was certainly faulty and overzealous policy, but it also brought attention to holding school districts accountable, based on data, for serving students with disabilities and other underserved populations. This was entirely new, and it has raised awareness.
This is why the NAACP, the National Disability Rights Network, and other national civil rights groups oppose anti-testing movements.
@mandercorn You said “We are measuring the wrong things. We should be measuring the learning environments of schools via direct observation”
— WiffleCardenal (@WiffleCardenal) March 31, 2016
Yes, I continue to believe this. Test measures are only one source of data, and they need to be coupled with qualitative observational data and other forms of understanding. Fortunately, I do feel that our focus, at least in NYC, has shifted to better match this understanding.
To give further context on my statements in the NYSED video: I was speaking about how I use testing data, which I do every week when developing IEPs for my students with disabilities. I compile all the information I have on a student, including multiple years of state test data; in-house assessment data, such as reading, writing, and math scores; GPA; attendance; psychoeducational evaluations; social histories; and so on. When all of that is viewed together, in tandem with teacher observations and student and parent interviews, I find aggregate state testing data useful!
So it’s important to understand that I’m not advocating now, and never have advocated, for a state test score as a singular reference point to judge myself or a student. But when viewed with appropriate context, I do find state testing data useful. (More on how I use that data to develop IEPs here.)
@mandercorn You said: “Right now, we are acting like students are products of individual teachers.” Has that changed?
— WiffleCardenal (@WiffleCardenal) March 31, 2016
No, unfortunately. While I do think that test scores should factor into an account of an individual teacher’s effectiveness (only in aggregate, and when considered in terms of growth rather than proficiency), we’re currently creating incentives for competition rather than collaboration.
If I could set the rules for how we use test scores for accountability, I would do something kind of radical: I would hold all grade-level teachers accountable for student scores on literacy tests. And I’d stop labeling them “ELA” tests and call them “literacy” tests. Why? Because if we were honest about what we’re really testing, we’d acknowledge that the knowledge required to understand complex texts comes not solely from ELA, but also from science, social studies, music, art, and so forth. (More on my argument for this here.)
Furthermore, I’d try to level the playing field for all students by requiring test makers to announce one year in advance which texts would be tested (not specific passages, just the general title and author). I would also give parents and educators an opportunity to vote on which texts they wanted tested that year, to make the selection more reflective of current interests. The reason I would do this is that it would give all students a chance to build up the requisite vocabulary and background knowledge to access a text. Right now we just hand them random texts, as if every child brings equivalent knowledge and vocabulary to the page, which is false.
@mandercorn “Test prep consumes a huge portion of time. And it is a huge disservice to the kids…most in need of access to enriching lit”
— WiffleCardenal (@WiffleCardenal) March 31, 2016
Yes, unfortunately this continues to hold true in too many schools. But this is also why I have been a consistent supporter of the Common Core standards, which have become synonymous with testing in some people’s minds. Yet the Common Core standards provide an opportunity to move away from test prep, because they are fundamentally about building student knowledge and academic vocabulary through engagement with rich and complex texts, which is the exact opposite of test prep!
This speaks to the problem of making state tests so high stakes, and to why we need multiple measures, such as direct observation, to hold schools accountable. It’s also why I would advocate for the seemingly radical measure described above: announcing which texts will be assessed that year, so that “test prep” simply becomes reading, studying, and discussing the rich texts selected for that year’s assessment.
@mandercorn You spoke of testing being “inhumane,” “terrible,” and “unfair” for your special ed kids. I am genuinely mystified. What gives?
— WiffleCardenal (@WiffleCardenal) March 31, 2016
Yes, it can be inhumane when a student is several years behind in reading ability or struggles to cope with anxiety and stress.
While computerized testing brings a whole new set of problems, I do believe we should move in this direction, because it enables adaptive testing that can better meet a student where they are. Otherwise we end up punishing students who are struggling, for whatever reason. Unfortunately, the needs of students with disabilities never seem to be factored into test design from the ground up; they are treated as a final consideration instead.
But there’s another side to this, too. I think we have to ask ourselves, as teachers, schools, and a system: how do we prepare all of our students to engage with a challenging text independently? And in what ways are we sequentially building their knowledge, skills, and vocabulary to prepare them to do so? It is by failing to do this systematically and adequately that we fail the students who most need those skills and that knowledge.
@mandercorn You seemed like a great advocate for whole-child ed! Do you think the Pearson tests we have now help in that cause?
— WiffleCardenal (@WiffleCardenal) March 31, 2016
Pearson is out of the picture, in case you didn’t know. I have no idea what Questar tests will be like, though I imagine they will be comparable.
From what I’ve heard, PARCC assessments are far superior to the cheaper assessments NY decided to get from Pearson. I think we get what we pay for, and if we want better test design, we have to be willing to fund them.
Personally, I think that if we’re going to use tests mainly for accountability purposes, we could administer them every two or three years instead of every year; that would save money, and the tests could still serve that purpose.
What would be awesome is if we could move more toward performance-based assessment. There’s a great article on it in the most recent American Educator. This seems like the right direction to go if we are truly interested in assessing the “whole child.”
Well, I don’t know if all of this fully says everything I would like to say about testing, but I’m seriously tired after a long week, so this will have to do.
WiffleCardenal, whoever you are, thank you for holding me accountable, and I welcome continued critical dialogue on these issues.
* This was at the end of a long day that included a train ride from NYC and meetings with legislators, so I apologize for my shiny face. I won’t apologize for the winter beard, however. And no, I was not paid for that interview, nor was I given a script. As ever, I speak my own mind (or so I like to think; certainly let me know if it ever seems like I don’t).