Thursday, August 9, 2012

How Intermediate Rounding Took 20 Years Off My Life


Recently I found myself in a situation where intermediate rounding seemed inevitable, and so I sat there wondering, “Is there some kind of rule that would help me discern how much rounding is acceptable in the middle of a problem, so as not to impact the final answer?” For example, if I need my final answer to be correct to the nearest whole number, would intermediate rounding to the nearest thousandth have any impact on my final answer?

Potentially, an error of only 0.01 is enough to change a final value rounded to the nearest whole number. That is, 2.49 would round down to 2, but 2.50 would round up to 3. Rounding intermediately to the nearest thousandth introduces a maximum error of only 0.0005 (say, from rounding 10.2745 up to 10.275 or rounding 5.25749999… down to 5.257).
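
Here is a quick sketch of those two facts in Python (my choice of language for illustration; the decimal module keeps the arithmetic exact, so binary floating-point noise doesn't muddy the example):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_to(x, places):
    """Round a Decimal to `places` decimal places, halves rounding up."""
    return x.quantize(Decimal(1).scaleb(-places), rounding=ROUND_HALF_UP)

# A difference of just 0.01 can flip the nearest whole number:
print(round_to(Decimal("2.49"), 0))  # 2
print(round_to(Decimal("2.50"), 0))  # 3

# Rounding to the nearest thousandth moves a value by at most 0.0005:
print(round_to(Decimal("10.2745"), 3) - Decimal("10.2745"))      # 0.0005
print(round_to(Decimal("5.2574999"), 3) - Decimal("5.2574999"))  # -0.0004999
```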

Clearly, I could see that the answer to my conundrum would be a definitive “It depends.” Of course, it would depend on what happened in my problem between the intermediate rounding and the final answer.

As it turns out, there are lots of fascinating intricacies that play out in the solution of this problem. It's almost too embarrassing to admit just how much brain real estate I have dedicated to thinking about this.  But here’s one particular aspect that struck me hard.

If I introduce an error of 0.0005 and then multiply the value by some factor, the error is multiplied by that same factor. OK, so in this particular scenario, a factor of 20 would be sufficient to potentially change the final whole-number value.
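
To make that concrete, here is a hypothetical worked case in the same Python sketch: the true value 10.2745 times 20 is 205.49, which rounds to 205, but the intermediately rounded 10.275 times 20 is 205.5, which rounds up to 206.

```python
from decimal import Decimal, ROUND_HALF_UP

x_true = Decimal("10.2745")
x_mid = x_true.quantize(Decimal("0.001"), rounding=ROUND_HALF_UP)  # 10.275

exact = x_true * 20   # 205.4900 -> nearest whole number is 205
approx = x_mid * 20   # 205.500  -> nearest whole number is 206

print(exact.quantize(Decimal("1"), rounding=ROUND_HALF_UP))   # 205
print(approx.quantize(Decimal("1"), rounding=ROUND_HALF_UP))  # 206
```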

What if I square the value? My instinct says that the error would also be squared, which would make its impact on my scenario insignificant. But my instinct is wrong. In reality, the size of the resulting error depends entirely on the size of the initial value. For example, a value of 256.0235 that was rounded up to 256.024 and then squared would be off by more than 0.25, clearly enough to make a significant impact. And a larger number, like 10,000.0005, rounded up to 10,000.001 and then squared, would be off by more than 10.
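
A sketch checking both of those claims exactly (again leaning on decimal so the squares come out exact):

```python
from decimal import Decimal, ROUND_HALF_UP

for s in ("256.0235", "10000.0005"):
    x_true = Decimal(s)
    x_mid = x_true.quantize(Decimal("0.001"), rounding=ROUND_HALF_UP)
    print(x_true, "->", x_mid, "error after squaring:", x_mid**2 - x_true**2)

# 256.0235 -> 256.024 error after squaring: 0.25602375
# 10000.0005 -> 10000.001 error after squaring: 10.00000075
```

In both cases the error is almost exactly 0.001 times the original value, which is a clue worth holding onto.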

BAM! I find myself in the body of an awkward teenager, struggling with the most famous algebraic misconception:

(a + b)² = a² + b²

You see, I haven't made this mistake in years, yet I am amazed to find that the inner instinct still remains. I'm not sure what this means exactly, but at the very least it sheds some light on my teaching and on my perceptions of student understanding. Too often this particular misconception gets blamed on a misapplication of the Distributive Property.

What if, instead of insisting that "exponents do not distribute," or "the Distributive Property does not apply here," I allowed students to explore their misconceptions and discover that the Distributive Property does indeed apply? What if we embraced this instinct and used it to delve more deeply into quantities as factors?
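
One way that exploration might go: write the square as repeated multiplication and distribute twice, and the missing middle terms appear on their own.

    (a + b)² = (a + b)(a + b)
             = a(a + b) + b(a + b)
             = a² + ab + ab + b²
             = a² + 2ab + b²

Applied to the rounding scenario, with a = x and b = 0.0005, the square picks up a cross term of 2(0.0005)x = 0.001x, which is exactly why the error grows with the size of the original value.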


What if I finally realized that even if they remember the rules and get this problem right every time it appears in symbolic form, maybe, just maybe, they still don't quite understand what it means?

What if.

I think I feel a performance task coming on.

1 comment:

  1. I had never thought about the connection between rounding and distributing, but it seems like making this connection explicit to students might help. That problem of "distributing the exponent" is SO pervasive. I'll work with this in my classes this fall and let you know if it helps!
