The Usual Complexity Assumptions
"The usual complexity assumptions" is a term refering to a set of conventions
employed so often in complexity calculations that they are usually taken for
granted.
For example, one often refers to bubble sort as an
O(n^2)-time operation, when, in fact, it is not. It merely
requires O(n^2) comparison operations. If we were to compare
two integers, a and b, using (for example) a Turing machine, we
would, in the general case, need to go over
all their bits in order to find a difference, so this operation may take as
long as O(log(min(|a|,|b|))), rather than O(1).
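As an illustration, the following sketch (in Python, used here purely for
exposition; the function names are invented for this example) counts the
comparisons bubble sort performs, and compares two non-negative integers bit
by bit, the way a machine without a constant-time comparison primitive would
have to:

    def bubble_sort_comparisons(values):
        # Sorts a copy of the input and returns (sorted list, number of comparisons).
        a = list(values)
        n = len(a)
        comparisons = 0
        for i in range(n):
            for j in range(n - 1 - i):
                comparisons += 1              # one comparison per inner step
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return a, comparisons                 # comparisons is O(n^2)

    def compare_bit_by_bit(a, b):
        # Compares non-negative integers by scanning bits from the most
        # significant end, returning -1, 0 or 1.  The number of bit
        # inspections grows with the bit length, not with a constant.
        la, lb = a.bit_length(), b.bit_length()
        if la != lb:
            return -1 if la < lb else 1
        for pos in range(la - 1, -1, -1):
            ba, bb = (a >> pos) & 1, (b >> pos) & 1
            if ba != bb:
                return -1 if ba < bb else 1
        return 0

    print(bubble_sort_comparisons([5, 1, 4, 2, 8]))   # ([1, 2, 4, 5, 8], 10)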
Similarly, we casually refer to storing real numbers and performing operations
on them, when, in fact, it is impossible to store a full representation of
a general real number on any computer or any Turing machine.
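A quick, hedged illustration of the point (using Python's built-in fixed-size
floating-point type as a stand-in for "reals"): because a finite representation
cannot capture a general real number exactly, rounding creeps in even for
simple values.

    # 0.1 and 0.2 have no finite binary representation, so a fixed-size
    # float only approximates them; the sum therefore misses 0.3 slightly.
    print(0.1 + 0.2 == 0.3)   # False
    print(0.1 + 0.2)          # 0.30000000000000004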
"The usual complexity assumptions" therefore refers to the most commonly used
(and least commonly explicitly specified) computational model. It defines
"integers", "reals" and "arrays" as abstract data types, and also defines an
execution model that uses them. This, in turn, means that certain operations
are assumed to be atomic O(1)-time operations.
The following are the O(1) operations assumed:
Regarding integers
- An integer can be stored in O(1) space.
- The four basic arithmetic operations (addition, subtraction, multiplication
and division) as well as modulo-taking are all O(1).
- Comparisons (less than, greater than, equal to) are all O(1).
- Shift-left and shift-right by n bits, for any integer n,
are O(1). (The result of this is the same as multiplication/division by
2^n; see the sketch below.)
- Conversion to real numbers is O(1).
Note that bitwise operations other than shifts are not considered O(1), unless
explicitly stated.
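A minimal sketch of the shift/arithmetic equivalence mentioned above (in
Python, whose integers are in reality arbitrary-precision, so none of these
operations is truly O(1) on an actual machine; the function name is invented
for this example):

    def shift_equivalences(x, n):
        # For integers x and non-negative n, shifting left/right by n bits
        # gives the same result as multiplying/floor-dividing by 2^n.
        assert x << n == x * 2**n
        assert x >> n == x // 2**n

    shift_equivalences(37, 4)    # 37 << 4 == 592 == 37 * 16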
Regarding real numbers
- A real number can be stored in O(1) space.
- The four basic arithmetic operations (addition, subtraction, multiplication
and division) are all O(1).
- Comparisons (less than, greater than, equal to) are all O(1).
- All trigonometric and inverse trigonometric functions are O(1).
- Exponentiation and log-taking are both O(1).
- Conversion to integers by either "floor" or "ceil" is O(1).
Regarding arrays
- Allocating/deallocating an array (of reals or integers) of any size is O(1)
time. (An array stores a single integer/real in each of its cells. Cell indices
are integers. Arrays start off as uninitialized, meaning that if an array
cell is read before it was ever written to, the value read may be any integer
or real. Arrays cannot change their sizes once allocated. A sketch of this
abstraction appears after this list.)
- Accessing a given array position for either reading or writing is O(1).
(Accessing an array position outside the array bounds [either by use of a
negative index or one greater than the array's maximum index] leads to
undefined results. Array indices are zero-based.)
- Input and output of the program are special cases of array reading and
array writing (respectively), and follow the same complexity.
Note that use of an integer/real variable is modeled as the use of
an array of size 1. Integer/real constants (i.e. program literals) are like
variables, but unlike regular arrays/variables do get initialized and cannot
be overwritten with new values.
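A hedged sketch of the array abstraction described above (the class name is
invented for this example; in Python the underlying storage is necessarily
initialized and allocation is not really O(1), so this mirrors only the
interface, not the cost model):

    class ModelArray:
        # Fixed-size, zero-based array of integers/reals, as in the model above.
        def __init__(self, size):
            self._cells = [None] * size     # stands in for uninitialized storage

        def read(self, i):
            if not 0 <= i < len(self._cells):
                raise IndexError("out of bounds: result undefined in the model")
            value = self._cells[i]
            # Unwritten cells may hold any value; 0 here stands in for "arbitrary".
            return 0 if value is None else value

        def write(self, i, value):
            if not 0 <= i < len(self._cells):
                raise IndexError("out of bounds: result undefined in the model")
            self._cells[i] = value

    a = ModelArray(3)
    a.write(0, 42)
    print(a.read(0))    # 42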
Regarding flow control
- Jumps are performed in O(1) time.
- Conditional jumps are performed in O(1) time, in addition to the time it
takes to evaluate the condition. (A worked example combining these rules
follows.)
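As a worked example pulling these rules together (Python again, purely for
exposition; the function name is invented for this example): the loop below
performs, per iteration, a constant number of operations that the model counts
as O(1) each, so its total running time is O(n).

    def array_sum(a):
        # n iterations, each a constant number of O(1) operations => O(n) total.
        total = 0
        for i in range(len(a)):    # loop control (compare + conditional jump): O(1) per iteration
            total += a[i]          # array read + addition + write-back: O(1) each in the model
        return total

    print(array_sum([3, 1, 4, 1, 5]))   # 14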
Though anybody who has done any complexity calculation has probably been
exposed to this computational model, its assumptions are rarely stated
explicitly. They are repeated here because in riddles they are often stretched
ad absurdum, so it is useful to keep them in mind.