Polynomial or Not?

theory

Defining “tractable” or “efficient.”

A few weeks ago I commented on asymptotics and the idea that, in analyzing the efficiency of algorithms, it is common to consider only the behavior on very large inputs. Today I build on this idea to discuss “tractability,” the most common concrete realization of the idea of efficiency.

We say that an algorithm is tractable
if its asymptotic behavior is bounded by some polynomial.
It doesn’t matter what the polynomial is:
`O`(`n`^{1}) and `O`(`n`^{1000})
are both called tractable.
If a problem isn’t tractable, it’s called intractable,
non-polynomial, hard, or exponential.
Technically these terms are distinct,
but they are used interchangeably more often than not.
As a rule, people talk about tractable algorithms as being practically useful
and intractable algorithms as being unusable.

The unusability of intractable algorithms is easy to defend
if we accept the claim (which I will not defend here)
that anything non-polynomial is at least exponential.
Consider, for example, an algorithm
whose time to execute is in `O`(2^{n}).
Then even if we are so fast we can handle a problem of size `n` = 100
in a minute, it will take about seventeen hours to handle a problem of size `n` = 110
and nearly six days (24/7) for `n` = 113.
This kind of sudden time increase
means there is virtually no scalability of any kind;
even if you are 50 times faster and work for 100 times as long,
that 5000-fold budget buys you only about a dozen extra input elements,
since 2^{13} already exceeds 5000.
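The arithmetic above can be checked with a short Python sketch; the one-minute baseline at `n` = 100 is the assumption from the example, not a measurement of any real algorithm.

```python
import math

def minutes_needed(n, baseline_n=100, baseline_minutes=1):
    """Running time of an O(2^n) algorithm, scaled so that an input
    of size baseline_n takes baseline_minutes to process."""
    return baseline_minutes * 2 ** (n - baseline_n)

print(minutes_needed(110) / 60)         # hours for n = 110: about 17
print(minutes_needed(113) / (60 * 24))  # days for n = 113: about 5.7

# A 50x faster machine running 100x longer multiplies the budget by 5000,
# which buys only floor(log2(5000)) = 12 additional input elements.
print(math.floor(math.log2(50 * 100)))  # → 12
```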

That polynomial algorithms are usable is harder to defend.
`n`^{1000} isn’t much better than 2^{n};
indeed, `n`^{1000} is the larger of the two for every `n` up to nearly 14,000.
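One way to see how little separates these two bounds in practice is to find the input size at which 2^{n} finally overtakes `n`^{1000}. A small Python sketch (comparing in log space, since the raw values overflow floating point):

```python
import math

def crossover(exponent=1000):
    """First n >= 2 where 2**n exceeds n**exponent.
    We compare n against exponent * log2(n), which is equivalent to
    comparing 2**n against n**exponent but stays in float range.
    (The search starts at 2 to skip the trivial n = 1 case.)"""
    n = 2
    while n <= exponent * math.log2(n):
        n += 1
    return n

print(crossover())  # about 13,747: below this, n^1000 is the WORSE bound
```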
