In this chapter
Back in high school you probably learned about six different kinds of numbers; specifically: the natural numbers, the whole numbers, the integers, the rational numbers, the real numbers, and the complex numbers.
Each of these sets has certain properties. For example, the set of whole numbers is infinite. That is, there's no last number in the set. No matter how big a number you choose, you can always add one more to get a bigger number. For another example, given any integer there is a unique integer called its additive inverse such that the sum of the two is exactly zero. Every rational number except 0 also has a multiplicative inverse. The square root of any complex number is a complex number. In any of these sets a+b = b+a (commutativity) and (a+b)+c = a+(b+c) (associativity). And so forth.
As a group these sets have the property that each one is a superset of the previous one. That is, the whole numbers are the natural numbers plus zero. The integers are the whole numbers plus the additive inverses (negative numbers) of the natural numbers. The rationals are ratios of integers, p/q where q is not zero. The reals are all rationals plus an uncountably larger set of numbers that can be formed from decimal expansions that don't repeat. The complex numbers can be thought of as the set of all ordered pairs of the reals, in which case the reals are just the complex numbers with the second member of the pair equal to zero.
There are a number of programming languages, mostly designed for and used by mathematicians and computer scientists of a theoretical bent, in which great effort is expended to make sure that numbers in the programming language match up exactly to the Platonic ideal of numbers you've been learning about since grammar school. Haskell???? and ML???? come to mind. In the real world, however, these languages have been almost uniformly unsuccessful and have not achieved broad adoption. In fact, it's sometimes joked that the primary use of ML is to write ML compilers. (see Kernighan interview, pull quote????)
In the real nitty-gritty world of programming, languages take a much less pure approach to arithmetic. Languages like C, Fortran, Pascal, and of course Java use realizations of the Platonic ideals of numbers that are almost, but not quite, correct. In these languages, the set of integers is finite; that is, there is a largest integer and a smallest integer. Not all integers have additive inverses. Real numbers are not a superset of the integers. And perhaps most distressingly of all, basic arithmetic principles like associativity don't always hold true.
Now don't get too worried. Most of the time computer arithmetic behaves precisely as you were taught in high school algebra. If it just threw those rules out the window, then computers really wouldn't be very good for working in a world that is in fact very well described by high school algebra. However, for reasons of performance most computers and most programming languages use only an approximation to the pure, Platonic ideal of numbers. Approximation is a useful and valid tool, but sometimes approximations fail. Thus it's important not to just naively calculate without understanding the details of the approximation used and the places it's likely to get you into trouble.
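To make the danger concrete, here is a small sketch (the class name is mine, purely illustrative) showing two such failures of the approximation in Java: floating-point addition is not associative, and simple decimal fractions like 0.1 have no exact binary representation.

```java
public class ApproximationDemo {
    public static void main(String[] args) {
        // Regrouping an addition changes the answer, because
        // -1.0e17 + 1.0 rounds back to -1.0e17:
        double big = 1.0e17;
        System.out.println((big - big) + 1.0);  // 1.0
        System.out.println(big + (-big + 1.0)); // 0.0

        // Simple decimal fractions aren't exactly representable in binary:
        System.out.println(0.1 + 0.2 == 0.3); // false
        System.out.println(0.1 + 0.2);        // 0.30000000000000004
    }
}
```

Nothing here is a bug in Java; it's the documented behavior of IEEE 754 double-precision arithmetic, which trades exactness for speed and fixed storage size.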
Given the differences between pure Platonic numbers and the dirty, impure world of computer arithmetic, it's perhaps a good thing that we don't use the same words to describe computer numbers. Computers don't have natural or whole numbers or integers or reals. Instead computers use two's-complement integers and floating-point numbers.
The fundamental integer data type in Java is the int. This is mostly the same as the integers you're familiar with from grammar school arithmetic. 2 plus 2 equals 4. 10 times 10 equals 100. 1987898789 minus 1987898788 equals 1. 3 plus -3 equals 0, and so forth. Example ???? is the Java encapsulation of these statements.
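Such a program might look something like the following sketch (the class name is an invention for illustration; the book's actual example may differ):

```java
public class IntArithmetic {
    public static void main(String[] args) {
        // Ordinary int arithmetic behaves just as grammar school promised:
        System.out.println(2 + 2);                   // 4
        System.out.println(10 * 10);                 // 100
        System.out.println(1987898789 - 1987898788); // 1
        System.out.println(3 + -3);                  // 0
    }
}
```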
However, the set of all ints differs from the set of integers in one very important way: the set of ints is finite. There is a largest int and a smallest int. More specifically, the largest int is ???? and the smallest int is ????.
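In code, these bounds can be read off the MAX_VALUE and MIN_VALUE constants of java.lang.Integer, and a short sketch (class name mine) shows what happens at the edges:

```java
public class IntLimits {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE); // the largest int: 2147483647
        System.out.println(Integer.MIN_VALUE); // the smallest int: -2147483648

        // Arithmetic past the edge silently wraps around:
        System.out.println(Integer.MAX_VALUE + 1); // -2147483648

        // One consequence: the smallest int has no additive inverse.
        // Negating it wraps right back to itself:
        System.out.println(-Integer.MIN_VALUE);    // -2147483648
    }
}
```

The wraparound is a direct consequence of two's-complement representation: there is one more negative int than positive, so Integer.MIN_VALUE is its own negation.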
A double occupies eight bytes of memory. Assuming there's no overhead, how many doubles can you stuff into 10 megabytes of memory?
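One way to check the arithmetic, assuming here that a megabyte means 2^20 bytes (under the decimal meaning, 10,000,000 bytes, the answer would be 1,250,000 instead):

```java
public class DoubleCount {
    public static void main(String[] args) {
        long bytesPerDouble = 8;
        long tenMegabytes = 10L * 1024 * 1024; // assuming 1 MB = 2^20 bytes
        System.out.println(tenMegabytes / bytesPerDouble); // 1310720
    }
}
```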