During school visits, I rarely make it five minutes without hearing about “the data.” I am shown big binders containing spreadsheets of data. I sit in data meetings where teachers furiously highlight lines in spreadsheets and calculate percentages. Principals tell me how data are being used to evaluate students and teachers.
Some of this is great. I’m all for assessing children regularly, being clear on what they know, and then using the results to decide what to do next. I would be willing to bet that good teachers have been doing this for years without spreadsheets and binders and meetings.
I also think it’s good to systematize this school-wide and to learn good practices for looking at student work and how to use data to analyze and improve your teaching. Sadie Estrella is doing some great work on helping schools do this.
However, I think there is a VERY important piece that often gets lost in the rush to analyze data and use the results to plan instruction.
Data are only as good as the assessments used to gather them.
That’s one of the most important things I’ve taken away from being a researcher. As researchers, we can spend just as much time designing the tools to gather our data as we do analyzing them. If the wrong tool is used, it completely changes the conclusions we can draw.
A lot of the data I see collected in schools come from multiple-choice tests that are supposed to reveal who mastered a standard and who didn’t.
There are two potential problems with this. One is that there is an assumption that choosing the right answer means a student understands the standard. That may not be true. Did the student answer the question correctly by guessing? Did the student answer the question incorrectly because he or she didn’t understand what the question was asking?
The second problem is that knowing how many students in your class answered a question incorrectly is important, but knowing why they answered it incorrectly is more important. It’s impossible to tell from an Excel spreadsheet why some students are struggling and others aren’t.
So what tools might help us figure out why students are struggling?
In my line of research, we do a lot of clinical interviews, where we ask students questions as they are solving a problem. I find data collected that way far more useful in figuring out why students are struggling. I’ll talk more about clinical interviews and how to use them in your class next time, but if you want to know more now check out the resources below.
It’s important to collect data on our students and analyze what we collect, but we need to spend just as much time thinking carefully about the tools we use and what kind of information we are gathering.
Buschman, L. (2001). Using student interviews to guide classroom instruction. Teaching Children Mathematics, 8(4), 222–227.
Ginsburg, H. P., Jacobs, S. F., & Lopez, L. S. (1998). The teacher’s guide to flexible interviewing in the classroom: Learning what children know about math. Boston, MA: Allyn and Bacon.