
Learning Targets


The first step in thinking about how to provide students with frequent feedback is having students identify their learning targets. Not just the goals for the unit or year, but also the daily or weekly targets that are going to get them there.

As I’ve mentioned before, I’m training for a half-marathon. The event is April 26th. My goal is to run all 13.1 miles before they close the course (which I think is after 4 hours) without any permanent damage to myself.

In order to do this, I set weekly goals. For example, this week, my goal is to complete one 5-mile run and two 30-minute runs. For me, that’s much more manageable than thinking about running 13.1 miles.

I track my progress using an app on my phone and I know right after each run if I hit my target. At the end of the week, I know if I’m on track or not and I set my goal for the next week.

How does this relate to math class?

Do your students know what their goal for class was for the day? For the week? For the unit? If they don’t know the goal, they can’t begin to monitor their progress.

Often, we, as teachers, know what the plan is.  We have an objective in our lesson plan or an aim on the board. We can see how this goal will help them reach later goals and where it falls in the curriculum map.

Our students don’t necessarily see these connections.

When I visit classes, I ask students what they are learning for the day. They tell me, “page 70” or “the problem on the board.” Sometimes they point to the aim that is written in their book, but often it’s not in kid-friendly language, so they don’t know what it means when I ask them about it.

It can be hard for a child to know what they are supposed to be focused on. Sometimes we are lucky if they even know what they are supposed to be doing in class–never mind what the goal of it is.

That’s why clear learning targets are so important.  Think about these questions:

  • What do you want students to walk out of your room knowing that they didn’t know before they walked in?
  • Can students explain the goal for the day in their own words?  Do they know how it will help them achieve larger goals?  Do they see the connection to previous goals?

I’ve seen changes in achievement when students are clear on what their goals are.  When students take ownership of their learning and their progress (and I’ve seen this happen even in early grades), they know what they should focus on during class, they know when they are successful and they know when they still need to work on something.

The research supports my experience: students who can identify what they are learning outperform those who cannot. Educational researcher Robert Marzano reviewed the research on goal setting and found that student achievement increased by 16 to 41 percentile points when students could identify what they were learning.

In my experience, it also helps students change their attitude towards math. Just like running 13.1 miles seems impossible to me, so do goals like passing math class or getting a high grade on a unit test or project for some students.  Breaking down what it means to get there helps students see exactly what they need to do. It also helps them see progress so that getting there becomes possible.

Want to know more? Check out the following:

Marzano, R. J. (2006). Classroom assessment and grading that work. ASCD.

Moss, C. M., & Brookhart, S. M. (2012). Learning targets: Helping students aim for understanding in today’s lesson. ASCD.

 

The Power of Frequent Feedback

[Photo: finishing my first 10K]

Happy New Year All! I hope this year brings you lots of laughter, learning and new adventures.

My big adventure for the new year is to run a half-marathon. I started running in the fall and in that short time, I’ve learned a lot about myself. One thing is that I respond really well to frequent feedback on how I’m doing. I like knowing how much further or faster I ran than the day before.

A little into my running, I downloaded an app that tracks my mileage, elevation, and time as I run. My favorite part is that it sends me emails when I’ve hit my personal best in various categories.

I love finding out that this week was my personal best in miles run or that today I ran my fastest mile. What I also love is that when I finished my first 10K in December, it didn’t matter that I was one of the last people to finish (notice in the picture how no one else is around me) because it was a new personal best for me.

This got me thinking about the type of feedback we give our students in math class. As a student, I only remember receiving feedback on summative assessments: end-of-unit tests and report cards. Even then, it was only a number or letter grade. I never received daily or weekly or even monthly updates about whether I hit a new personal best.

How different might things be if students received feedback about their personal best in math?

The research supports the idea that frequent feedback is important for our students. One study showed that over a school year, the rate of learning in classrooms that used short-cycle (within and between lessons) and medium-cycle (within and between units) assessments was about double that of other classes.

There is something to be said for giving students frequent feedback. However, simply testing students more frequently is NOT the answer.

Formative assessment is complicated. It involves many things, including thinking carefully about:

  • How students are assessed
  • The type of feedback given
  • How students can be encouraged to take ownership of their learning
  • What adjustments need to be made to instruction

I’ll share some ideas from the research on how to begin to think about these aspects in future posts.  In the meantime, I’m going to keep trying to reach a new personal best on my runs.

Want to know more?  Read: Wiliam, D., Lee, C., Harrison, C., & Black, P. (2004). Teachers developing assessment for learning: Impact on student achievement. Assessment in Education, 11(1), 49-65.

Is a careless error really careless?

In a 1982 study by John Clement, 150 freshman engineering students were given the following problem:

Write an equation using the variables S and P to represent the following statement:

“There are six times as many students as professors at this university.”

Use S for the number of students and P for the number of professors.

37% of the students answered incorrectly and wrote: 6S=P.

Why did they make that error?   Take some time and see if you can figure out why.

At first, Clement thought the mistake was due to carelessness. These were engineering students and this was a simple algebra problem.  They must have just been careless, right?

Clement wanted to check his hypothesis. Using clinical interviews, he asked students to think aloud as they solved the problem. At first, he thought he was correct in assuming it was a careless error: it appeared that students were using what he called a “word-order-matching approach.” They incorrectly mapped the order of the key words in the problem directly to the order of the symbols in the equation.

However, as he continued his analysis of the interviews, he realized there was another explanation. Students who incorrectly answered the question were doing something that made sense to them. Their intuition was to place the multiplier next to the letter associated with the larger group. Although incorrect, it was meaningful to students.

The correct answer, S=6P, was not meaningful to them.  It did not describe the situation in the problem.  Professors are not multiplying.  It did not make sense.

Yet it makes sense to us and to the students who could answer it correctly. Clement explains that we understand it for two reasons:

1. We see the variables as representing numbers rather than objects

2. We are able to invent a hypothetical operation on the variables that creates an equivalence.  We know the equation represents what would happen if you made the group of professors six times larger than it really is.  Then the number of professors would be equal to the number of students.
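To see both reasons at work, it helps to run a quick substitution check. The numbers below are hypothetical, chosen for illustration rather than taken from Clement’s study: suppose there are 60 students and 10 professors, so there really are six times as many students as professors.

```latex
\documentclass{article}
\begin{document}
% Hypothetical numbers (not from Clement's study):
% S = 60 students, P = 10 professors.

The reversed equation fails, because it makes the larger group six
times bigger still:
\[ 6S = P \quad\Rightarrow\quad 6(60) = 360 \neq 10. \]

The correct equation holds: the 6 operates on the smaller group (the
professors) to produce the size of the larger group (the students):
\[ S = 6P \quad\Rightarrow\quad 60 = 6(10) = 60. \]
\end{document}
```

The check makes Clement’s second point concrete: the 6 is not describing the professors; it is a hypothetical operation on their number that creates the equivalence.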

The problem doesn’t seem so simple now, does it? If we attribute students’ mistakes on it to a careless error, we won’t diagnose the problem correctly, and if we don’t diagnose the problem correctly, we have little chance of figuring out how to remediate it.

This happens a lot in classrooms.  We see a student make what looks like a careless error and we tell him to check his work assuming he just didn’t think carefully.  Then the student changes the answer because “check your work” is teacher code for “you got the answer wrong.”  Sometimes the error is a case of not being careful.  Sometimes, like here, it’s not.

Instead of assuming it’s a careless mistake, assume that the incorrect answer makes sense to the student and try to figure out why. Then try to figure out why the correct answer makes sense to you. The contrast may help uncover the understanding you have that they still need to develop.

Read more here:

Clement, J. (1982). Algebra word problem solutions: Thought processes underlying a common misconception. Journal for Research in Mathematics Education, 13(1), 16-30.

Clement, J. (2000). Analysis of clinical interviews: Foundations and model viability. In R. Lesh & A. Kelly (Eds.), Handbook of research methodologies for science and mathematics education (pp. 341-385). Hillsdale, NJ: Lawrence Erlbaum.

How to gather the data you really want about student misconceptions

Let’s say you have a binder full of the data I talked about last time.  You know which students got which questions wrong on your latest assessment.

How do you use this information to help those struggling students?

You can’t, at least not with that information alone. Knowing that a student answered a question incorrectly is a start, but you need to know why.

That’s where clinical interviews come in.  I like to think of clinical interviews as the diagnostic interview a doctor does when you come into the office.  The doctor doesn’t try to fix what’s ailing you without first making sure he or she has correctly diagnosed the problem.

We need to do the same with students. We need to understand how they are thinking about the math before we can try to help them.

Clinical interviews are used in research to gather data about how a child is thinking.  You give a student a problem and then ask questions as he or she solves it.  The objective is not to teach students but rather to collect information about how they are thinking so that you can later determine the correct remediation.

I know that it’s not realistic to do this with every child in your class. However, you can learn a lot by occasionally incorporating these interviews with certain students.

Try this experiment:

Look at your spreadsheet of data.  Take one child who is struggling and try to figure out why.

Give them a task from the assessment (you will need to be careful about how you choose this task) without any answer choices and ask them to think aloud as they work on it.  This may be uncomfortable for them at first since they aren’t used to making their thinking explicit.

You can prompt them to think aloud by asking them:

  • Why are you doing _____?
  • How do you know ______?
  • Tell me more about what you just did.
  • What are you thinking about?

As they talk, listen carefully and try to find out what thinking is producing the misconception they are displaying.

A main assumption that I work with when doing these interviews is that children do what makes sense to them even if it seems like nonsense to me.  My job is to figure out what makes sense to them and why.

The more of these you do, the better you will get at seeing things through the eyes of your students. You will start to anticipate the mistakes students will make. You will start to anticipate what might be problematic about certain models you are using. You will begin to discover that the meaning you see in manipulatives or diagrams is not necessarily what they see.

After you have diagnosed what is going on, you can better plan what to do next.  I’ll talk more about ways to analyze the data you gather from these interviews another time.

Want to know more? Check out:

Solving for Why: Understanding, Assessing, and Teaching Students Who Struggle with Math by John Tapper

Buschman, L. (2001). Using student interviews to guide classroom instruction. Teaching Children Mathematics, 8(4), 222-227.

Ginsburg, H. P., Jacobs, S. F., & Lopez, L. S. (1998). The teacher’s guide to flexible interviewing in the classroom: Learning what children know about math. Boston, MA: Allyn and Bacon.

Data, data everywhere…

During school visits, I rarely make it five minutes without hearing about “the data.”  I am shown big binders containing spreadsheets of data.  I sit in data meetings where teachers furiously highlight lines in spreadsheets and calculate percentages.  Principals tell me how data are being used to evaluate students and teachers.

Some of this is great.  I’m all for assessing children regularly, being clear on what they know and then using the results to decide what to do next.  I would be willing to bet that good teachers have been doing this for years without spreadsheets and binders and meetings.

I also think it’s good to systematize this school-wide and to learn good practices for looking at student work and how to use data to analyze and improve your teaching.  Sadie Estrella is doing some great work on helping schools do this.

However, I think there is a VERY important piece that often gets lost in the rush to analyze data and use the results to plan instruction.

Data are only as good as the assessments used to gather them.

That’s one of the most important things I’ve taken away from being a researcher. As researchers, we can spend just as much time designing the tools to gather our data as we do analyzing them. If the wrong tool is used, it completely changes the conclusions we can draw.

A lot of data I see collected in schools are from multiple choice tests that are supposed to reveal who mastered a standard and who didn’t.

There are two potential problems with this. The first is the assumption that choosing the right answer means a student understands the standard. That may not be true. Did the student answer the question correctly by guessing? Did the student answer the question incorrectly because he or she didn’t understand what the question was asking?
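To put a rough number on the guessing worry, here is a minimal hypothetical illustration; the four answer choices and the class of 30 students are assumptions made for the sake of the arithmetic, not figures from any study.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Hypothetical: a four-option multiple-choice item and a class of 30.

A student who guesses at random is marked correct with probability
\[ P(\text{correct} \mid \text{guessing}) = \frac{1}{4}. \]

So even if every student in a class of 30 guessed blindly on the item,
the spreadsheet would still be expected to show about
\[ 30 \times \frac{1}{4} = 7.5 \]
students as having ``mastered'' the standard.
\end{document}
```

The exact numbers don’t matter; the point is that a right answer alone cannot distinguish understanding from luck.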

The second problem is that knowing how many students in your class answered a question incorrectly is important, but knowing why they didn’t answer it correctly is more important. It’s impossible to tell from an Excel spreadsheet why some students are struggling and others aren’t.

So what tools might help us figure out why students are struggling?

In my line of research, we do a lot of clinical interviews, where we ask students questions as they are solving a problem.  I find data collected that way far more useful in figuring out why students are struggling.   I’ll talk more about clinical interviews and how to use them in your class next time, but if you want to know more now check out the resources below.

It’s important to collect data on our students and analyze what we collect, but we need to spend just as much time thinking carefully about the tools we use and what kind of information we are gathering.

Buschman, L. (2001). Using student interviews to guide classroom instruction. Teaching Children Mathematics, 8(4), 222-227.

Ginsburg, H. P., Jacobs, S. F., & Lopez, L. S. (1998). The teacher’s guide to flexible interviewing in the classroom: Learning what children know about math. Boston, MA: Allyn and Bacon.