Conceptual vs. Procedural Math at Mastery Charters


“Maths” by Chris de Kok is licensed under CC BY 2.0

There was an interesting recent piece from The74 on Mastery, a Philadelphia charter organization that takes low-performing schools and works to “turn them around.”

 

Embedded within this article is the implication that a shift toward teaching conceptual math, rather than “rote” procedural math, led to a swift downturn in math scores.

“So this year, the network began reintroducing teaching techniques that had been a staple at Mastery schools for years, while seeking a middle ground between no excuses and restorative practices. It’s a ‘journey of trying to find out what’s the right mix,’ Gordon said.

Specifically, the network is reintroducing procedural math instruction, which focuses on rote instruction like memorization and repetition.”

It seems worth digging into this supposition a bit more.

Is Mastery’s downturn in math scores due to the failure of conceptual math in general as a pedagogical approach? Or is it a failure of the network to attract and train teachers who can teach this type of math more effectively?

Or is it a failure in the assessments that were used as a reference? Or was it that conceptual math takes longer to “stick” and pay dividends? Or was it a failure of the curriculum they used to move in a more conceptual direction? . . .

http://the74million.org/article/at-philadelphias-mastery-charter-network-culture-is-key-to-turning-around-failing-schools

Success can’t be measured by one or two numbers

“Whenever you make huge decisions about complex situations based on one or two numbers, you’re headed for disaster — especially when those numbers can be gamed.”

—Mark Palko and Andrew Gelman, “How schools that obsess about standardized tests ruin them as measures of success” on Vox

We’ve questioned Success Academy’s “success” on this blog before. These statisticians bring a new lens to that question.

I don’t want to denigrate the good work that Success Academy teachers and students are doing. There are practices and systems well worth replicating and investigating in these schools. But Eva Moskowitz’s political framing and marketing of her schools as the solution to poverty is problematic.

My current views on testing, in answer to my past views on testing

While up in Albany a few weeks ago, I was interviewed by someone from NYSED about what I might say to parents who are considering “opting out” their child from state testing. You can view the video here*.

Someone on Twitter, “WiffleCardenal,” critiqued the video by contrasting it with things I’ve said about testing in the past. In fact, they even tweeted quotes of my own words! I deeply appreciate that someone out there is actually listening and is willing to take the time and effort to hold me accountable to my own words. I have elected to respond here, since Twitter isn’t the greatest venue for nuanced discussion, especially at the end of a long day, and I also hate typing things on my phone.

This is in reference to a live chat I did back in 2012 on The Nation‘s website with journalist Dana Goldstein and educator Tara Brancato. Have my views shifted since then? I would say they have in some ways.

You know, honestly, they’re not as terrible as I thought back then. I proctor these tests each year and go through the experience of answering the questions along with my students. The questions are often cognitively demanding and require multiple reappraisals of the text in question. A few of them are duds, certainly, but having tried to write many of my own text-dependent questions since then, I’ve come to appreciate a well-written multiple choice question. Check out this post from Joe Kirby (UK educator) on the rationale for using multiple choice questions for assessment.

Unfortunately, this continues to hold true. In reaction to this, the Center for American Progress recently created a “testing bill of rights” to advocate for better aligning tests with a more meaningful purpose.

This doesn’t mean, however, that I’m opposed to having test scores factor into my own evaluation or my school’s evaluation. When scores are considered over multiple years, I think they can be an important and useful measure of teacher effectiveness. But they are extremely variable, so I would only want them to be considered alongside other data that can provide adequate context.

One of the things I’ve become more aware of over time is that while our testing and evaluation schemes are extremely problematic, if we look at the big picture, accountability and testing do bring transparency to serving populations of students that were traditionally ignored. No Child Left Behind was certainly faulty and overzealous policy — but it also brought attention to holding school districts accountable to serving students with disabilities and other underserved populations based on data. This was entirely new, and it has raised awareness.

This is why the NAACP, the National Disability Rights Network, and other national civil rights groups oppose anti-testing movements.

Yes, I continue to believe this. Test measures are only one source of data that need to be coupled with qualitative observational data and other forms of understanding. Fortunately, I do feel like our focus, at least in NYC, has shifted to better match this understanding.

To give further context on my statements in the NYSED video: I was speaking about how I use testing data, which I do every week when developing IEPs for my students with disabilities. I compile all the information I have on a student, including multiple years of state test data; in-house assessment data such as reading, writing, and math scores; GPA; attendance; psychoeducational evaluations; social histories; and so on. When viewed all together, in tandem with teacher observations and student and parent interviews, I find aggregate state testing data useful!

So it’s important to understand I’m not advocating now and never have advocated for a state test score as a singular reference point to judge myself or a student. But when viewed with appropriate context, I do find state testing data to be useful. (More on how I use that to develop IEPs here.)

No, unfortunately. While I do think that test scores should factor into an account of an individual teacher’s effectiveness (only in aggregate and when considered in terms of growth, not proficiency), we’re creating incentives for competition, rather than collaboration.

If I could set the rules for how we use test scores for accountability, I would do something kind of radical: I would hold all grade-level teachers accountable for student scores on literacy tests. And I’d stop labeling them “ELA” tests and call them “literacy” tests. Why? Because if we are honest about what we’re really testing, we’d acknowledge that the knowledge required to understand complex texts comes not solely from ELA, but also from science, social studies, music, art, and so forth. (More on my argument here.)

Furthermore, I’d try to level the playing field for all students by requiring test makers to announce one year in advance which texts would be tested (not specific passages, just the general title/author). I would also allow parents and educators to vote on which texts they wanted tested that year, to make the selection more reflective of current interests. The reason is that this would give all students an opportunity to build up the requisite vocabulary and background knowledge to access a text. Right now we just give them random texts, as if every child brings equivalent knowledge and vocabulary to them, which is false.

Yes, unfortunately this continues to hold true in too many schools. But this is also why I have been a consistent supporter of Common Core standards, which have become synonymous with testing in some people’s minds. Yet the Common Core standards provided us an opportunity to move away from test prep, because they are fundamentally about building student knowledge and academic vocabulary through engagement with rich and complex texts — this is the exact opposite of test prep!

This speaks to the problem of making state tests so high stakes, and to why we need multiple measures, such as direct observation, to hold schools accountable. It is also why I would advocate for the seemingly radical measure, described above, of announcing which texts would be assessed each year, so that “test prep” would simply mean reading, studying, and discussing the rich texts selected for that year’s assessment.

Yes, it can be inhumane when a student is several years behind in reading ability or struggles to cope with anxiety and stress.

While computerized testing brings a whole new set of problems, I do believe we should move in this direction, because with computerized testing, we can use adaptive testing that can better scale to meet a student where they are. Otherwise we end up punishing students who are struggling, for whatever reason. Unfortunately, the needs of students with disabilities never seem to be factored into test design except as a final consideration, rather than from the ground up.
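To make concrete what I mean by “adaptive,” here is a minimal sketch of one simple adaptive scheme: a staircase rule that steps item difficulty up after a correct answer and down after an incorrect one. This is purely illustrative; the item bank, function names, and simulated student are hypothetical, and real computerized adaptive tests use far more sophisticated models (and, I would hope, accommodations designed in from the start).

```python
# A minimal sketch of a "staircase" adaptive-testing rule. Purely illustrative:
# the item bank and the simulated student below are hypothetical.

import random

# Hypothetical item bank: difficulty levels 1 (easiest) through 10 (hardest),
# with several questions available at each level.
item_bank = {level: [f"item_{level}_{i}" for i in range(5)] for level in range(1, 11)}

def run_adaptive_test(answer_fn, start_level=5, num_items=10):
    """Administer items, stepping difficulty up after a correct answer
    and down after an incorrect one. Returns the levels visited."""
    level = start_level
    history = []
    for _ in range(num_items):
        item = random.choice(item_bank[level])
        correct = answer_fn(item, level)  # True/False from the student
        history.append((item, level, correct))
        level = min(10, level + 1) if correct else max(1, level - 1)
    return history

# Example with a simulated student who reliably answers items at level 4 or below:
simulated_student = lambda item, level: level <= 4
for item, level, correct in run_adaptive_test(simulated_student):
    print(level, item, "correct" if correct else "incorrect")
```

The point of the staircase is simply that the test converges toward items near the student’s actual level, rather than handing a struggling reader ten items that are all out of reach.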

But there’s another side to this, too. I think we have to ask ourselves, as teachers, as schools, and as a system: how do we prepare all of our students to engage with a challenging text independently? And in what ways are we sequentially building their knowledge, skills, and vocabulary to prepare them to do so? It is by failing to do this systematically and adequately that we fail the students who most need those skills and knowledge.

Pearson is out of the picture, in case you didn’t know. I have no idea what Questar tests will be like, though I imagine they will be comparable.

From what I’ve heard, PARCC assessments are far superior to the cheaper assessments NY decided to get from Pearson. I think we get what we pay for, and if we want better test design, we have to be willing to fund them.

Personally, I think that if we’re going to use tests purely for accountability purposes, we could administer them every two or three years instead of every year and save money, and they would still serve that purpose.

What would be awesome is if we could move more towards performance-based assessment. There’s a great article on it in the most recent American Educator. This seems like the right direction to go in if we are truly interested in assessing the “whole child.”

Well, I don’t know if all of this says everything I would like to say about testing, but I’m seriously tired after a long week, so this will have to do.

WiffleCardenal, whoever you are, thank you for holding me accountable. I welcome continued critical dialogue on these issues.

* This was after a long day of a train ride from NYC and meetings with legislators, so I apologize for my shiny face. Won’t apologize for the winter beard, however. And no, I was not paid for that interview nor given a script. As ever, I speak my own mind (or so I like to think. Certainly let me know if it ever seems like I don’t).

Friedrichs v CTA, and Thinking Probabilistically

By Matěj Baťha (Own work) [CC BY-SA 2.5 (http://creativecommons.org/licenses/by-sa/2.5)], via Wikimedia Commons

Yeah, that headline was a mouthful.

But here’s the thing. You’re going to hear a lot of ed folks declaiming on the potential outcome of the Friedrichs v California Teachers Association SCOTUS case over the next few days. For good reason, as this is a case that may well prove to be more determinative of the future of public education in this country than ESSA.*

I’ve been reading Daniel Kahneman’s excellent Thinking, Fast and Slow lately**. Kahneman’s book is all about ideas we’ve touched on before here, such as cognitive bias and uncertainty. We’ve also looked at how “probabilistic thinking” could be used to overcome bias. So when I fortuitously came across this article on how “superforecasters” use probabilistic thinking, as well as a “base rate” or “reference class,” to make more accurate predictions, it jibed well with my understanding, and I think there are useful lessons to heed as the Friedrichs case is heard over the course of this week.

Rather than ideologically proclaiming sweeping predictions, as experts are wont to do, “superforecasters” are less certain about their predictions, which ironically makes them better predictors. Professor Philip Tetlock distinguishes between “hedgehogs” and “foxes,” and notes that superforecasters are more akin to foxes:***

According to Tetlock, foxes are more pragmatic and open-minded, aggregating information from a wide variety of sources. They talk in terms of probability and possibility, rather than certainty, and they tend to use words like “however,” “but,” “although” and “on the other hand” when speaking. . . 

Unfortunately, most of the predictions you see in the media lack the specificity necessary to test them, like a specific time frame or probability, Tetlock says. . . 

Instead, Tetlock advocates for something he calls “adversarial collaboration” — getting people with opposing opinions in an argument to make very specific predictions about the future in a public setting, so onlookers can measure which side was more correct.

What does this have to do with Friedrichs? Well, I would suggest asking education “experts,” who will write about their ideas on the case, to assign a probability to their predicted outcome.

Based on my own, extremely limited understanding of the case, I think there’s a 65% chance that Friedrichs will win. I could well be completely wrong. But you’ve got my prediction here, in writing, with a timestamp on it, so you can hold me accountable to this.
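And if you want a concrete way to hold forecasters (myself included) to account, timestamped probability predictions like this can be scored after the fact with a Brier score, the measure Tetlock’s superforecasting research relies on: the average squared difference between the stated probability and what actually happened. Here is a minimal sketch; the forecasts in it are hypothetical examples, not anyone’s actual record.

```python
# A minimal sketch of scoring probability forecasts with the Brier score:
# 0 is perfect, 0.25 is what always saying "50%" earns, 1 is worst.
# The forecast records below are hypothetical, for illustration only.

def brier_score(forecasts):
    """forecasts: list of (predicted_probability, outcome), outcome 1 or 0."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# e.g., saying 65% for an event that did not happen (outcome 0),
# and 80% for one that did happen (outcome 1):
my_record = [(0.65, 0), (0.80, 1)]
pundit_record = [(1.0, 0), (1.0, 1)]  # a pundit who proclaims certainty every time

print(brier_score(my_record))      # about 0.23 -- lower is better
print(brier_score(pundit_record))  # 0.5 -- confident and half wrong scores badly
```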

I’ll write more on my thoughts on the case soon, but in the meantime, my thinking on Friedrichs v. CTA in a nutshell:

I think public sector unions need to change and adapt much more rapidly to a changing workforce and economy, but I believe strongly in the need for unions to serve as a counterbalance to government and private financial interests. If Friedrichs wins, as I’m afraid she might, then we will witness a drastic further decline in the power of unions in our country. I believe this will be to the detriment of the long-term interests of our nation.

The only commentator I’ve seen thus far who’s beginning to think ahead to this outcome is Dan Weisberg of TNTP. He doesn’t assign a probability to the outcome, but he implies one when he says the following:

Unfortunately for the unions, at least five Supreme Court justices appear to be more sympathetic to the teachers’ arguments than I am. The Court practically invited this challenge when it stopped just short of striking down agency fees in a similar case a few years ago.

I’m hoping our unions are already preparing for the worst, because no amount of impassioned op-eds can influence the outcome at this point.

*my apologies to all non-US residents for the US-specific jargon in this post.

**thanks to Deputy Chancellor Josh Wallack, who recently bestowed the book on educators at a dinner hosted by the NYC DOE Office of Leadership.

***We’ve looked at hedgehogs and foxes here before:

 

UPDATE 2/13/16:

Justice Scalia has just died, so that completely changes the odds. While I had first assigned a 65% probability to Friedrichs winning this case, my forecast has now shifted closer to 40%. Read more on SCOTUSBlog: “The most immediate and important implications involve that union case. A conservative ruling in that case is now unlikely to issue.”

 

Public Debates on Education are Ideological, Rather than Sociological

“Yet it struck me that most of the tensions the struggling school experienced that year were sociological rather than ideological: They concerned the challenge of bringing together people of different races and backgrounds (most of the families were low-income and black whereas most of the teachers were young, white, and middle-class) around a shared vision of what education can and should be. Yet our public debate is centered squarely on the ideological rather than the sociological. We endlessly debate the overall “worth” of various institutions—from “no excuses” charter schools to teachers unions—with a political or ideological framing. But we rarely venture inside, scrutinizing the arguably more important question of how people relate, or fail to relate, within these realms. Venturing inside—at least in a meaningful way—takes time, trust, and an open mind.”

—Sarah Carr, “There Are No Simple Lessons About New Orleans Charter Schools After Katrina. Here’s How I Learned That.” on Slate

Charter vs. District Systems

By NASA’s Aqua/MODIS satellite (http://visibleearth.nasa.gov/view_rec.php?id=6204) [Public domain], via Wikimedia Commons
Neerav Kingsland looks at the recent findings on professional development via the TNTP Mirage report and the Rand Corporation study, and comes to the conclusion that “Professional development only seems to lead to student achievement increases in charter schools!”

I noted in a recent post that in the TNTP study, teacher effectiveness and growth were notably more observable in a CMO (charter management organization), and I hypothesized that this could be attributable to some charter networks having more tightly managed systems of assessment, curriculum, teacher practice, and observation.

But to suggest that this is an innate quality of charter schools is questionable. There is absolutely no reason a district school can’t possess the same qualities, and indeed, many do.

Kingsland argues for NOLA-style systems, in which the government merely regulates, rather than operates, schools, with the idea being that the private sector can conduct operations more efficiently and effectively. But there’s a potential, and possibly critical, issue with such a system: a lack of coherency.

Within a well-managed district, on the other hand, there is potential for greater coherency. A state or central office can provide specific direction on operational shifts via policy that all district schools would be expected to adhere to.

When Kingsland asks, “Is it more likely that we can achieve major gains in districts or scale highly effective charters?,” I think he’s creating a false dichotomy. The more interesting question is, “How can we achieve major gains by leveraging federal, state, and district policy to implement effective and coherent systems, content, and practices across all schools?”

A NOLA-style system might be able to make swift initial gains, due to well-managed networks putting into place strong systems of assessment, feedback, and practice. But it’s certainly feasible that a well-managed district system can make even bigger gains over the longer haul.

I disagree, therefore, with Kingsland’s position that charter schools are inherently superior in enhancing teacher effectiveness and promoting student achievement. In fact, I charge that a NOLA-style system may ultimately run up against its innate incoherency, at which point, large-scale gains would stagnate.

I could be totally wrong on this, of course, and admit that this is conjecture and based on my own values. It may be that a NOLA-style system may end up leading to greater coherency in operations due to competition, and thus, best practices evolve through demonstrated gains in one organization and subsequent adoption by those who are attempting to compete.

I may also be overstating the ability of district schools to establish coherency, given constraints in operating within often volatile political contexts.

The problem, of course, is that while NOLA has demonstrated significant academic gains on tests since moving to a private sector operated system, it’s still purely conjecture whether the same benefit would transfer to any other district simply due to a structural change. It’s also still conjecture that those gains can be attributed solely to the structural shift to private sector operation, rather than to the simple mechanism of distributing students across geographical boundaries.

But let’s assume for the moment that Kingsland is correct that a private sector operated school system is the optimal system. I would still argue, even in such a case, that this doesn’t mean that such a system will necessarily scale effectively into different social and political contexts.

In the face of great complexity and uncertainty, we can hedge our bets by planning for robustness, rather than optimality.

The question therefore becomes: what is the most robust? A school system operated by the public, or a school system operated by the private sector?

Perhaps the answer lies somewhere in between.

Forest Mondays

A stream in the Adirondacks

“Every Monday morning, the kids suit up for a day outdoors. Rain or shine — even in the bitter cold — they go out. They head to the woods next to their school where they’ve built a home site with forts and a fire pit.

First thing, the kids go to their “sit spots.” These are designated places — under a tree, on a log — where each kid sits quietly, alone, for 10 minutes. Their task is to notice what’s changed in nature since last week.

. . .

What her students gain from the experience might not be measurable, she says, but that doesn’t mean it’s not worth doing.

Her principal, Amos Kornfeld, agrees. He says schools are being forced to think about everything in terms of data and measurable outcomes, but he doesn’t need test scores to tell him forest kindergarten is working.

When the kids come back from the woods, they look happy and healthy, he says. “Schools need to be focusing on that, too.”

–“Out Of The Classroom And Into The Woods,” news story on NPR ED by Emily Hanford

Thresholds

Church of Ura Kidane Mihret, Zeghie Peninsula, Lake Tana, Ethiopia | A. Davey, from Where I Live Now: Pacific Northwest

I’ve discussed thresholds on this blog before in relation to ecosystems, and the reality that we don’t really know when such thresholds may be crossed.

To review, a threshold—in an ecological sense—is the point at which some small, seemingly insignificant nudge suddenly results in an abrupt transformation of the entire ecosystem, with a loss of diversity and possible extinctions. Keen readers of this blog may recognize a relation to the concept of hysteresis, which Will introduced us to.

An article on Ensia explores thresholds, depicting the struggle of scientists trying to find early indicators that such thresholds are about to be crossed.

We live in a precarious age in which ecological thresholds have been crossed with such wanton abandon that those of us who do have an inkling of what is occurring sometimes prefer to bury our awareness in the veritable sand. It’s gotten so bad that there’s talk of scrapping such quaint notions as “conservation” and “sustainability” as they currently exist, and instead acknowledging that whatever future plants and animals might have is wholly dependent on the space we might allot for them in the margins of our busy lives of mindless consumption.

The great engine of human evolution, such as it is, will not wind down anytime soon. Max Planck once said, “A scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it.”

Unfortunately, in this case, the scientific truth of the ecological devastation generated by our collective actions would then be realized by a generation much too late to do anything about it other than desperately mitigate its effects. Which is more or less what we are beginning to see taking place (catastrophic forest fires, rising sea levels, violent storms).

But I’m going off on a preachy tangent here. My point in bringing up the article was to highlight an example of possibly misguided energy. Rather than seeking predictive power to better visualize approaching thresholds, wouldn’t it be much better to change our practices now, with the assumption that we should prevent as much devastation as possible?

A scientist quoted in the article acknowledges the limitation of attempting to quantify thresholds:

“I hope there are universal early indicators. If we have to figure this out for every system, then we’re up a creek without a paddle,” says Aaron Ellison, a forest ecologist at Harvard University. “If we have to spend 30 years on a system that we want to manage in some way, they’ll all be gone before we have a chance.”

In the face of such great complexity and so much unknown, we can’t wait for research to clarify every last thing. We need to rely on our ground-level observations and local knowledge. But how do we gain clarity at that ground level?

“When you get to the field, you have to deal with particular ecological patterns. You have to deal with the technology and methodology that’s available for collecting data. You have to deal with the reality of sampling, and figure out how you’re going to use those samples and analyze them,” explains Craig Allen, a research ecologist with the U.S. Geological Survey who’s studied transitions in the American Southwest.

It’s complex, as this scientist points out.

But there are observable clues that we can detect.

Given that detecting thresholds could take decades, researchers are looking for a shortcut — namely, indicators that can be applied to any arid region and require the ecological equivalent of a thermometer under the tongue.

The most promising of these is changing vegetation patterns. The beginning of a grassland’s transition to desert is marked by localized outbreaks of relatively sparse shrubs. Where soil once held by the grasses’ roots had acted like a sponge, water no longer penetrates. Wind blows faster over bare ground, piling eroding earth at the base of shrubs, which require more of the system’s water.
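For what it’s worth, one of the “universal” early indicators researchers in this area often propose is purely statistical: as a system loses resilience, measurements of it tend to show rising variance and rising lag-1 autocorrelation (so-called critical slowing down). Below is a toy sketch of tracking one such indicator over a rolling window; the yearly grass-cover numbers and the window size are made up for illustration, and this is not the vegetation-pattern analysis described in the quote above.

```python
# A toy sketch of one commonly proposed early-warning indicator: lag-1
# autocorrelation computed over a rolling window. A sustained upward trend
# in the output is the warning sign, not any single value. Data are made up.

def lag1_autocorrelation(values):
    mean = sum(values) / len(values)
    num = sum((values[i] - mean) * (values[i + 1] - mean) for i in range(len(values) - 1))
    den = sum((v - mean) ** 2 for v in values)
    return num / den if den else 0.0

def rolling_indicator(series, window=20):
    """Lag-1 autocorrelation over each sliding window of the series."""
    return [lag1_autocorrelation(series[i:i + window]) for i in range(len(series) - window + 1)]

# Hypothetical yearly measurements (e.g., percent grass cover in a monitored plot):
grass_cover = [72, 71, 73, 70, 72, 69, 71, 68, 70, 66, 69, 63,
               67, 60, 65, 57, 63, 52, 60, 47, 58, 42, 55, 38]
print(rolling_indicator(grass_cover))
```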

The vegetation example reminded me of the “broken windows” theory of crime prevention: the idea that paying attention to and addressing lower-level problems can be a means of preventing greater calamity.

It also made me consider the importance of developing simple and clear heuristics and checklists in the face of complexity. If we don’t pay attention to small, incremental changes, we will lose sight of the bigger picture, and may not notice potentially cataclysmic transformations until it is too late.

My advice is to avoid waiting for magical algorithms to determine where heretofore invisible thresholds may or may not be crossed. Instead, develop clear and straightforward management heuristics for paying attention to and addressing smaller issues.

In a school, this means addressing the peeling paint in the hallway immediately, rather than waiting until the summer. Immediately calling the parents of the child who has suddenly grown sullen and won’t say good morning to any of his teachers. Having that awkward conversation with a challenging colleague. And so on.

Student Centered Data: Part III

A PISA Test | Theo Muller

In my last post, I examined the following principle:

  • Instruments used for data collection must align with everyday practice and purpose

I noted that any technology that is used should not detract from a teacher’s attention on the students in front of them, and that thus far, a paper checklist may yet be the best instrument for this purpose.

I’d also like to add that video recording is another tool that can serve certain data-gathering purposes. Rather than spending time frantically scribing “low inference” notes or consulting a rubric, one can replay the video and selectively observe at will. This is why I think it is an excellent idea to expand the notion of a “running record” into taking a video of a student reading.

Let me move to our next point:
  • Data gathering and reporting must be as automated as possible

The less time that a teacher spends gathering, inputting, and creating data reports, the more time that can be spent analyzing, reflecting, and taking action based on that data.

Note, further, that a teacher in the United States already has little time outside of the classroom, and most of that time is spent on afterschool programs, planning lessons, and grading. Any time spent inputting data is therefore time taken away from a focus on students’ needs.
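To illustrate what I mean by automation, here is a minimal sketch: a single small script that turns a raw export of assessment results into a per-student summary, so that no teacher has to rebuild a spreadsheet by hand. The file name and column names are assumptions I made up for the example, not the format of any particular product.

```python
# A minimal sketch of automated reporting: read a raw CSV export of assessment
# results and print a per-student summary. The file name and column names
# ("student", "assessment", "score") are hypothetical.

import csv
from collections import defaultdict

def summarize(csv_path="assessment_results.csv"):
    """Expects columns: student, assessment, score (0-100)."""
    scores = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            scores[row["student"]].append(float(row["score"]))
    for student, s in sorted(scores.items()):
        print(f"{student}: {len(s)} assessments, average {sum(s)/len(s):.1f}, latest {s[-1]:.0f}")

if __name__ == "__main__":
    summarize()
```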

This point probably seems self-evident, but unfortunately, the folks who design much of the software used in education seem to treat the teachers who are the end users of their products as an afterthought. Their first priority, instead, seems to be to craft and pitch their product to the administrators who will want to see reports of the data once it’s input. This makes sense on their end, since the administrators will be the ones shelling out the money for the software. But it ultimately does not make sense for teachers, to the detriment of their students.

People have been blowing the “data driven” bugle for a while now in public education circles, and when a superintendent or other district leader steps into a school building, that’s all you’ll hear, as colorful reports are whipped out and displayed in portfolios. Unfortunately, this fervor for displaying data reports to one’s higher-ups rarely results in much change within a classroom.

I also believe that part of the problem is that we often forget to question the sources of the data themselves. Once something is quantified, it seems to have become fact. Yet assessments often aren’t measuring what they purport to measure, or they may depict something dependent on a context that the assessment makers did not plan for.

Ultimately, assessing an assessment requires reflection and analysis best conducted within a facilitated group discussion.

I have found that the most useful “data” to examine within a group setting in this way are not reports from the latest multiple-choice benchmark. The best information to examine is real student work, especially student writing across content areas.

When teachers can see a student’s work from different classrooms, they begin to see patterns that they can connect to what they observe every day in their own classroom. They can also detect discrepancies between classrooms and determine what collective strategies could be used across them. This sort of conversation, because it is based on student work from their own classrooms, is more likely to result in a shift in practice, as the data are not abstracted out of context.

These sorts of professional conversations based on real student work are essential, and to the NYCDOE’s credit, they are occurring more frequently here in NYC. If these conversations could somehow take precedence over the outsized influence of standardized tests, I believe classroom practice would shift more responsively to meet student needs.

So you can see an interesting trend in my recommendations on data thus far: I’m advocating for a strategic reduction in technology use in schools, rather than the reverse.

Multiple-choice assessments have their place in the classroom, but the key is that they must be as short and as targeted on specific content as possible. We must acknowledge that such data are necessarily shallow by nature, and reflect this in the manner in which we collect, report, and analyze them. The gathering and reporting of such data must therefore be as automated as possible, so that time is not wasted processing and inputting information into spreadsheets just to satisfy an administrator’s whims. I have found MasteryConnect to be excellent for this purpose: it is designed with the end user in mind (the actual teacher who will use it), and it automatically generates reports that will please administrator and teacher alike.

When we need to go deeper into the data, my advice is to move beyond shallow quantitative data and qualitatively explore real student work through professional dialogue. Here is a protocol I developed for this purpose. The information and analysis derived from such dialogue are much richer and more applicable to everyday practice.

In my final post on this somewhat dry topic, I will explore our last point, which builds off of my recommendation to harness professional dialogue:

  • Data reports must be easily shared

Student Centered Data: Part II

In my last post, I proposed some principles on data collection that I believe are applicable to public education:

  • Instruments used for data collection must align with everyday practice and purpose
  • Data gathering and reporting must be as automated as possible
  • Data reports must be easily shared

I’d like to explore each of these points in greater detail and see if they do indeed bear any relevance. In this post, I will begin by examining our first point:

  • Instruments used for data collection must align with everyday practice and purpose

In a classroom, every second counts. A minute that your attention is not on your students is a minute they will take to redirect their own attention elsewhere. To regain their attention will then consume additional time. Teachers who have poor classroom management are the teachers who have their backs turned for minutes at a time, clicking through folders on their computer, shuffling through papers, or flipping through the lesson plan book on their desk. While they are doing this, students are engaged in conversation, throwing papers, getting out of their seats, and so on.

What this means for a teacher is that any instrument that you may use for data collection must not require your attention for more than mere seconds at a time. If you have to spend a minute tapping on a screen to unlock it, pull up an app, select an option, and wait for it to load, then you’ve just lost a minute assessing and observing your classroom. You don’t have that sort of time to spend.

This is why the most valuable instrument for data collection in a classroom may very well be a paper checklist with your students’ names printed on it. Their homework is out on their desk? Check. They are focused and attentive? Check minus. Johnny is unprepared for class again? Quick note, will call his mother later.

I’m a tech geek, and at the beginning of this school year, I created simple checklists using Google Forms, which I would pull up on my tablet before class began. As students walked in, I would check off their assignments. This is really useful data to have in a spreadsheet, so I can easily observe trends over time. But even the few seconds it took me to pull up the screen and press each button with my finger were seconds my eyes were off of my students. There’s also something about having my eyes on a screen that is different from having my eyes on paper. It sucks my attention in. So I stopped bringing in my tablet and went back to pen-and-paper checklists.
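For the curious, here is roughly what I mean by observing trends from that spreadsheet. The sketch below reads a CSV export of a checklist form and computes each student’s weekly homework-completion rate. The file name, column names, and timestamp format are assumptions made for illustration; a real export’s headers would need to be substituted in.

```python
# A rough sketch of spotting trends in a checklist export. Assumed columns:
# Timestamp, Student, Homework ("Yes"/"No"); file name and format are hypothetical.

import csv
from collections import defaultdict
from datetime import datetime

completion = defaultdict(lambda: defaultdict(list))  # student -> week -> [1/0, ...]

with open("homework_checklist.csv", newline="") as f:
    for row in csv.DictReader(f):
        week = datetime.strptime(row["Timestamp"], "%m/%d/%Y %H:%M:%S").isocalendar()[1]
        completion[row["Student"]][week].append(1 if row["Homework"] == "Yes" else 0)

# Print each student's homework-completion rate by week, to reveal trends over time.
for student, weeks in sorted(completion.items()):
    trend = ", ".join(f"wk{w}: {sum(v)/len(v):.0%}" for w, v in sorted(weeks.items()))
    print(f"{student}: {trend}")
```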

This principle also applies to the student end of things. Oftentimes, getting students onto laptops or tablets and using technological tools sounds like a game changer, but in an actual classroom, you may find yourself spending your time negotiating password issues, dealing with glitches in web browsers or connections, or troubleshooting other unforeseen hardware or software problems. Depending on the software, you may further find that students click around and pay little attention to the content of what they are learning. So using such tools must be very purposeful, and you must be prepared to schedule time to help students negotiate any obstacles they encounter, and to teach and model the use of the tools effectively.

Therefore, any instrument that is to be used in a classroom for collecting data must directly align with everyday practice, and be purposeful to the content to be taught and learned.

For now, I’m going to stick with paper checklists and leave my tablet at home and my smartphone in my pocket. The next step is transferring the data collected on those checklists into an online data tracking system for the purpose of analysis, and this is the step that can become most burdensome for teachers.

My next post will explore that very issue in greater depth.