Student Centered Data: Part III


A PISA Test | Theo Muller

In my last post, I examined the following principle:

  • Instruments used for data collection must align with everyday practice and purpose

I noted that any technology used should not pull a teacher's attention away from the students in front of them, and that, thus far, a paper checklist may still be the best instrument for this purpose.

I’d also like to add that videotaping is another tool that can serve certain data gathering purposes in this regard. Rather than spending time frantically scribbling “low inference” notes or consulting a rubric, one can replay the video and observe selectively at will. This is why I think the idea of expanding the notion of a “running record” to taking a video of a student reading is excellent.

Let me move to our next point:
  • Data gathering and reporting must be as automated as possible

The less time that a teacher spends gathering, inputting, and creating data reports, the more time that can be spent analyzing, reflecting, and taking action based on that data.

Note further that a teacher in the United States already has little time outside of the classroom, and most of what little there is goes to afterschool programs, lesson planning, and grading, so any time spent inputting data is time taken away from a focus on students’ needs.

This point probably seems self-evident, but unfortunately, the folks who design much of the software used in education seem to treat the teachers who are the end users of their products as an afterthought. Their first priority, instead, seems to be crafting and pitching their product to the administrators who will want to see reports of the data once it is input. This makes sense on their end, since the administrators are the ones shelling out the money for the software. But it ultimately does not serve teachers, to the detriment of their students.

People have been blowing the “data driven” bugle for a while now in public education circles, and when a superintendent or other district leader steps into a school building, that’s all you’ll hear, as colorful reports are whipped out and displayed in portfolios. Unfortunately, this fervor for displaying data reports to one’s higher-ups rarely results in much change within a classroom.

I also believe that part of the problem is that we often forget to question the sources of the data themselves. Once something is quantified, it seems to become fact. Yet assessments often don’t measure what they purport to, or they may depict something dependent on context that the assessment makers did not plan for.

Ultimately, assessing an assessment requires reflection and analysis best conducted within a facilitated group discussion.

I have found that the most useful “data” to examine in a group setting like this are not reports from the latest multiple choice benchmark. The best information to examine is real student work, especially student writing across content areas.

When teachers can see a student’s work across different classrooms, they begin to see patterns that they can connect to what they observe every day in their own classroom. They can also detect discrepancies between classrooms and determine what collective strategies could be used across them. This sort of conversation, because it is grounded in student work from their own classrooms, is more likely to result in a shift in practice, as the data are not abstracted out of context.

This sort of professional conversation based on real student work is essential, and to the NYCDOE’s credit, such conversations are occurring more frequently here in NYC. If they could somehow take precedence over the outsized influence of standardized tests, I believe classroom practice would shift more responsively to meet student needs.

So you can see an interesting trend in my recommendations on data thus far: I’m advocating for a strategic reduction in technology use in schools, rather than the reverse.

Multiple choice assessments have their place in the classroom, but the key is that they must be as short and as targeted on specific content as possible. We must acknowledge that such data are necessarily shallow by nature, and reflect this in the manner in which we collect, report, and analyze them. The gathering and reporting of such data must therefore be as automated as possible, so that time is not wasted processing and inputting information into spreadsheets just to satisfy an administrator’s whims. I have found MasteryConnect to be excellent for this purpose: it is designed with the end user (the actual teacher who will use it) in mind, and it automatically generates reports that will please administrator and teacher alike.

When we need to go deeper into the data, my advice is to move beyond shallow quantitative measures and qualitatively explore real student work through professional dialogue. Here is a protocol I developed for this purpose. The information and analysis derived from such dialogue are much richer and more applicable to everyday practice.

In my final post on this somewhat dry topic, I will explore our last point, which builds off of my recommendation to harness professional dialogue:

  • Data reports must be easily shared