The D70 is a 6 megapixel (MP) camera, which was the norm some years ago. Since then, manufacturers have nourished the megapixel myth: the idea that pixel count correlates more or less exactly with image quality. Many have written about the myth on the web. Ken Rockwell, whom I find a bit flamboyant at times, has written a page on the myth which also contains some fun pointers to a megapixel street test and to a completely bogus site that fuels the myth with a misleading interactive zoom function. The differences in image quality that follow from differences in pixel density are of course nowhere near what such sites imply.
Every pocket camera nowadays has 10 MP or more, and phone cameras have up to 5 MP. Yet their image quality is not up to the standard of any modern DSLR like the D70 or newer models. This comes down to a number of factors, photo site size (pixel size) being one of the most important.
The term picture element is also misused in the manufacturers' world. Pixels are not what matters; picture elements are. It takes four pixels in a Bayer arrangement (two green, one red, one blue) to make up one true-color picture element. Thus, a 12 MP camera using the Bayer pattern (as most sensors do) has 6 million green pixels, 3 million red and 3 million blue. This makes up 3 million true-color picture elements, which interpolation and demosaicing bring up to around 6 million. If you shoot JPEG, this processing is done in-camera and you are totally dependent on the manufacturer's algorithmic skills. Manufacturers differ in skill; fortunately, Nikon is top-of-the-form. If you shoot RAW (NEF), you do the processing outside the camera, which widens your choice of algorithms.
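The Bayer arithmetic above can be sketched in a few lines of Python. This is just the counting argument from the text, assuming the common RGGB arrangement (half green, a quarter each red and blue):

```python
# Sketch of the Bayer-pattern counting argument: in an RGGB mosaic,
# half the pixels are green and a quarter each are red and blue.

def bayer_breakdown(total_pixels):
    """Return (green, red, blue, true_color_elements) counts."""
    green = total_pixels // 2
    red = total_pixels // 4
    blue = total_pixels // 4
    # One true-color picture element needs 2 green + 1 red + 1 blue,
    # so before demosaicing the full-color count is a quarter of the total.
    true_color = total_pixels // 4
    return green, red, blue, true_color

g, r, b, tc = bayer_breakdown(12_000_000)
print(g, r, b, tc)  # 6000000 3000000 3000000 3000000
```

For a 12 MP sensor this gives the 6 / 3 / 3 million split and the 3 million true-color elements mentioned above; demosaicing then interpolates the missing color samples to raise the effective count.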
There is a trade-off between pixel count, resolution, and image quality. At first it seems that more pixels should be pursued at any cost, and it is true that more pixels can bring more resolution. But they also bring a number of drawbacks. First, more pixels increase the file size of your images. This is the easiest drawback to live with: it eats up more disk space and requires more memory and processing power for your image handling. Second, more pixels on the same sensor area shrink each pixel, making it more susceptible to noise: with fewer photons captured per photo site, natural variance plays a larger role, which shows up as noise. Since this noise can to some extent be interpolated away, you will never get worse image quality from more pixels on the same area, but not as much better as you thought (and as the manufacturer seemed to promise). Third, diffraction limits your usable apertures considerably.
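The shrinking-pixel effect is easy to quantify. The sketch below estimates pixel pitch from sensor area and pixel count; the DX sensor dimensions and megapixel figures used are approximate published values, not exact specifications:

```python
import math

# Sketch: pixel pitch shrinks as pixel count grows on a fixed sensor area.
# Sensor dimensions below are approximate published DX figures (assumption).

def pixel_pitch_um(sensor_w_mm, sensor_h_mm, megapixels):
    """Approximate pixel pitch in micrometres, assuming square pixels."""
    area_um2 = (sensor_w_mm * 1000) * (sensor_h_mm * 1000)
    return math.sqrt(area_um2 / (megapixels * 1e6))

print(round(pixel_pitch_um(23.7, 15.6, 6.1), 1))   # D70-class sensor: 7.8
print(round(pixel_pitch_um(23.6, 15.6, 16.2), 1))  # D7000-class sensor: 4.8
```

Going from roughly 6 MP to 16 MP on the same area shrinks the pitch from about 7.8 µm to about 4.8 µm, so each photo site collects well under half as much light.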
Of these, diffraction is probably the least understood. There is a nice website that explains diffraction much better than I have the time to here. On the new Nikon D800, diffraction limits the image quality of any lens at f/8 and smaller apertures, which is the majority of available apertures for most lenses! And since most lenses perform best stopped down a couple of stops, many lenses are unable to perform their best at any aperture on a D800. Nikon is not very keen on telling you this. For a D70, diffraction starts hurting at f/16; for cameras with the same sensor size, more pixels mean diffraction sets in at wider apertures. The D90 is affected at f/11, and the D7000, having a pixel density very similar to the D800's, is affected already at f/8. Thus, while the D90 and the D7000 in theory have clearly better resolution than the D70, the advantage is diminished in practice by diffraction. Diffraction does not ruin the image, but it eats into the advantage of high pixel counts, and it does not show up in the reports on new sensors.
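The underlying physics can be sketched with the standard Airy disk formula: the diameter of the diffraction blur (to the first minimum) is about 2.44 · λ · N for wavelength λ and f-number N. The green wavelength of 550 nm is an assumption here; the exact f-number at which diffraction "hurts" depends on the criterion you pick, so this sketch just prints the disk size and lets you compare it to a sensor's pixel pitch:

```python
# Sketch: Airy disk diameter grows linearly with f-number,
# d ≈ 2.44 · λ · N. Once it spans several pixels, extra pixels
# stop buying extra detail. λ = 550 nm (green light) is assumed.

WAVELENGTH_UM = 0.55  # green light, in micrometres

def airy_disk_um(f_number):
    """Diameter of the Airy disk (first minimum) in micrometres."""
    return 2.44 * WAVELENGTH_UM * f_number

for n in (4, 5.6, 8, 11, 16):
    print(f"f/{n}: Airy disk is about {airy_disk_um(n):.1f} um")
```

At f/8 the disk is already about 10.7 µm across, more than twice the roughly 4.9 µm pixel pitch of a D800-class sensor, which is consistent with diffraction biting at f/8 there while the coarser-pitched D70 holds out until around f/16.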