# Should they know what’s on the test?

Sounds silly, right? Of course they should. But I mean, should they know the questions in advance?

But that’s a real question in mathematics education, apparently. SKoolBoy over at Eduwonkette’s place says that repeating questions (same form, different numbers) inflates scores. But what’s wrong with kids studying particular math for particular situations? As they get better and more comfortable, they can extend that math into new places.

Mathematics, for kids, can be first a series of skills, perhaps abstracted from a physical situation, then the application of each skill to one particular situation, and then, and this is the hard part, the ability to look at new situations and decide which skill applies.

There are some who insist that the application of the skill and the skill be learned together, or that the application somehow precede the skill, or at the extreme, that the skill as a skill never be studied at all.

During the height of the math wars the worst texts were organized by non-mathematical topics, with the mathematical skills scattered throughout. I think there is one called ARISE that starts with a unit on elections, cobbling together lots of election math from bits and pieces of probably ten separate topics. I read that unit and enjoyed it – but only because I already knew most of the math. What a bizarre way to teach the skills, though!

The skills we teach are challenging – it’s the nature of teaching a fully abstract subject. They are hard enough without surprises. Whoever decided that we should use “authentic assessment,” “performance standards,” or “mathematics in context” in grade school mathematics just doesn’t understand how difficult the math itself is. Those kinds of tests can help separate the top students from the rest, but they fail to test what is most important: skill acquisition.

For one of my favorite algebra tests, the one on systems of equations, I tell my students each year, the day before the test, exactly what they will encounter:

- “One system that you must solve with substitution, set up to make the substitution easy for you.
- One system that you must solve by linear combination, set up to be straightforward.
- One system that you must solve graphically, the answer will probably fall between the lines.
- Three more systems, not set up for a particular method, where you can choose the method of solution. One or two of these will have no solution, and in that case, instead of checking your answer, you’ll need to attempt the question using a second method, showing that it, too, leads to no solution.
- One word problem that leads directly to a system of two linear equations.
- And a wind or current problem.”
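To illustrate the first item on the list, here is what a system “set up to make the substitution easy” might look like. This is my own made-up example, not one of the actual test questions; one equation is already solved for $y$, so the substitution is immediate:

```latex
\begin{align*}
y &= 2x - 1 \\
3x + y &= 14 \\
\intertext{Substituting the first equation into the second:}
3x + (2x - 1) &= 14 \\
5x &= 15 \quad\Rightarrow\quad x = 3,\ y = 2(3) - 1 = 5
\end{align*}
```

Checking in the second equation: $3(3) + 5 = 14$, as required.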

You know what? I know what is being measured. The kids know what’s being measured. It’s the material that was taught. And they are real and valuable skills. What’s so bad about knowing what’s going to be on the test?

My opinion: That’s the only fair way to test.

Students don’t know what is important. They need some guidance, to keep from being overwhelmed by all the possibilities. Of course, the best students can usually figure out what is significant and what’s not, and they can find ways to apply what they know even to an unfamiliar test question. But for most of us, we need it spelled out. We appreciate the teacher who says, “This is exactly what you need to know. Master it!”

I think there remains a tension between the idea of tests as a measure of skill mastery, and tests as a measure of overall aptitude in the subject.

If the object is skill mastery, you make pretty clear what will be tested, and you test it.

If it’s overall aptitude in the subject, the attitude tends to be a bit more sink-or-swim… Can a student apply knowledge to a new situation? Can a student figure out what’s important from what’s covered in class?

I think there’s a place for both, but it’s important not to confuse the two, and measuring “proficiency” seems to me to be more the first type than the second.

There’s definitely a place for that sort of skills test (I think the distinction Rachel draws is a very apt one), but I don’t think that it’s an adequate model for summative testing. Yes, there should be questions of that kind, but a final needs to measure the skills that should have been learned AND the ability to take simple problems at various levels of abstraction and use those skills. Not for the whole exam, or even most of it, but you do need some non-standard questions at the end if the exam is going to meaningfully separate strong students from merely decent ones.

Otherwise you run the risk of an exam that tests nothing but accuracy for the top quartile, which is neither interesting nor useful imo.

The context of my post was discussion of state testing… I think that they can really only measure proficiency.

In my classroom tests, where almost everyone “gets” the stuff, but it is not always clear at what level, I usually have some varieties of non-standard questions. But sometimes I separate the levels with extra skill questions…

In the systems of equations test that I boasted about for being predictable, I include a “bonus” question that I didn’t mention (above, or to the kids in advance) – Solve for x and y: ax + by = c, x + y = 1.
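For readers who want to check it, the bonus question works out by the same substitution routine the test rehearses, just with parameters in place of numbers (assuming $a \neq b$):

```latex
\begin{align*}
x + y &= 1 \;\Rightarrow\; y = 1 - x \\
ax + b(1 - x) &= c \\
(a - b)\,x &= c - b \\
x &= \frac{c - b}{a - b}, \qquad y = 1 - x = \frac{a - c}{a - b} \qquad (a \neq b)
\end{align*}
```

The skill is familiar; the abstraction (solving in terms of $a$, $b$, $c$) is what makes it a bonus.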

Finally, this is math. The skills are hard. And their acquisition generally precedes mastery, at least in the course of a year. In an algebra course (the most important course I teach), I am laying the foundation for skills that will need to be called on in a far wider variety of circumstances in future courses.

Ah, well… our state (in fact national) testing is designed to measure ability as well as proficiency (in theory), so that’s what we’re used to and expect.