Asymptotics
© 10 Jan 2012 Luther Tychonievich
Licensed under Creative Commons: CC BY-NC-ND 3.0

How do you define “‍efficient‍”?


Most of us know two ways to add. We can add by counting, or we can add using decimal place values. Actually, we know three: we can also add by memorization, which is how we do the digits in place-value addition. Which process is most efficient?

Clearly the fastest is memorization. It takes no work for me to add 3 and 5. As soon as my mind has processed the concept “‍3 + 5‍” it has supplied the answer. But memorization is only fast for small numbers; it would probably take me months to add three-digit numbers by memorization. For big numbers, place value is vastly faster.
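To make the comparison concrete, here is a minimal sketch of both strategies in Python, assuming we represent numbers as plain ints for counting and as lists of digits (least-significant first) for place value; the function names are illustrative only:

    # Adding by counting: one step per unit counted.
    def add_by_counting(a, b):
        total = a
        for _ in range(b):  # b steps; b can be as large as 10**n for n-digit b
            total += 1
        return total

    # Adding by place value: one memorized digit-addition per column.
    # Digits are least-significant first; assumes both lists are the same length.
    def add_by_place_value(digits_a, digits_b):
        result, carry = [], 0
        for da, db in zip(digits_a, digits_b):  # n steps for n-digit numbers
            carry, digit = divmod(da + db + carry, 10)
            result.append(digit)
        if carry:
            result.append(carry)
        return result

    print(add_by_counting(47, 85))             # 132
    print(add_by_place_value([7, 4], [5, 8]))  # [2, 3, 1], i.e. 132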

Computer scientists have a special name for talking about efficiency for big inputs: asymptotics. The idea behind asymptotics runs as follows:

  1. Pretend we could write down a mathematical expression that represents the exact amount of time it takes to perform a task of a given “‍size‍”.

  2. Write it out as a bunch of terms—that is, expressions added together. Take the term that gets biggest fastest for really big numbers.

  3. If there’s some constant number multiplied on front of that term, throw it away.

  4. What’s left is the asymptotic behavior of the approach.

An example: If it takes me 1.4 + 0.2n + 0.0001n² seconds to multiply two n-digit numbers, then the asymptotic behavior of my multiplication is n².
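Working the recipe on that formula: the terms are 1.4, 0.2n, and 0.0001n²; the n² term gets biggest fastest; dropping the 0.0001 constant leaves n². A quick numeric check makes the takeover visible:

    # Hypothetical timing formula from the example above.
    def t(n):
        return 1.4 + 0.2 * n + 0.0001 * n ** 2

    for n in [10, 1000, 100000]:
        quadratic = 0.0001 * n ** 2
        share = quadratic / t(n)
        print(f"n={n:>6}: total={t(n):,.1f}s, n^2 term is {share:.0%} of it")

    # n=    10: total=3.4s, n^2 term is 0% of it
    # n=  1000: total=301.4s, n^2 term is 33% of it
    # n=100000: total=1,020,001.4s, n^2 term is 98% of it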

There’s a more mathematical way of talking about all this—several, actually. And they’ve each got a silly name: big-O, little-o, big-Omega, little-omega, big-Theta, and twiddles. Most often people use big-Theta and call it big-O; in fact, big-O is so pervasive it’s the name of the Wikipedia article on asymptotics.

Thus, we might say that adding by counting is O(10ⁿ) (read “‍big oh ten to the n‍”) while adding by place value is O(n), where n is the number of digits in the numbers we add. Adding by memorization, since it doesn’t apply to really big numbers, doesn’t have a big-O.
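A back-of-the-envelope step count, assuming counting costs one step per unit counted and place value one step per digit column (the cost model from the sketches above), shows how stark that gap is:

    # Worst-case steps to add two n-digit numbers under each approach.
    for n in range(1, 8):
        counting_steps = 10 ** n  # counting up to an n-digit number
        place_value_steps = n     # one column addition per digit
        print(f"{n} digits: counting ~{counting_steps:>10,} steps, "
              f"place value {place_value_steps} steps")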

When I first learned about big-O and asymptotics I thought it was kind of silly. Ignore everything that happens for small inputs? Crazy. Most of the stuff I do every day is small.

Two observations make asymptotics a sensible way of viewing things. First, even though most tasks are small, it is often the case that most time is spent on the few big tasks. Second, it is roughly true that asymptotics describe the approach itself, while small-scale performance mirrors implementation details instead. I might be able to count-add small numbers faster than you, but we are both bound by the asymptotics for large numbers.

What really makes asymptotics easy to swallow is comparing them to the much coarser polynomial/non-polynomial distinction. But that is a topic for a later post.



