posted on Aug, 14 2009 @ 06:04 AM
reply to post by Phage
The way I understand it is that if an object on a 29.6 cm-per-pixel photo is exactly 29.6 cm wide it can "fill" one pixel, but if it "falls" across the boundary between four pixels, for example, its brightness gets averaged into each of those pixels together with the surrounding area. That is why they consider that, to fully define any object on a photo, it needs to be at least 3x3 pixels; anything smaller gets averaged with the background.
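If you want to play with this yourself, here is a quick Python/numpy sketch of that averaging effect (my own illustration, not anything from the actual camera pipeline): each "camera pixel" is simulated as the average of a 10x10 block of a finer grid.

```python
import numpy as np

def downsample(img, factor):
    # Average non-overlapping factor x factor blocks (simulates sensor pixels).
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Fine grid: background 0, bright 10x10 object of value 1.0.
# One sensor pixel = one 10x10 block of fine cells.
fine_aligned = np.zeros((40, 40))
fine_aligned[10:20, 10:20] = 1.0     # object aligned with one sensor pixel

fine_straddle = np.zeros((40, 40))
fine_straddle[15:25, 15:25] = 1.0    # same-size object centred on a pixel corner

aligned = downsample(fine_aligned, 10)
straddle = downsample(fine_straddle, 10)

print(aligned[1, 1])       # 1.0 -> the object keeps full contrast in one pixel
print(straddle[1:3, 1:3])  # the same object smeared over four pixels at 0.25 each
```

Same object, same size, but depending on where it lands it shows up as one bright pixel or four dim ones.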
Here is a little test.
In the first column we have four squares: one exactly filling a single pixel, one of the same size but falling across four pixels, a 2x2 square occupying four pixels, and a 3x3 square occupying nine pixels. In the second column we have the same images resized to 10%, so that each 10x10 square becomes just one pixel. Those images were then resized back up keeping the pixel shape, without re-sampling. In the third column we have circles of the same sizes and positions, and in the fourth column their resized versions.
As you can see, only when we reach the 3x3 size do we start to get a real idea of the original shape; below that, all shapes are turned into just a square blob.
So they are saying, in a more "scientific" way, the same thing we say: we cannot tell what an object is when it is only a pixel wide.
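The square-versus-circle part of the test can be sketched the same way (again just my own numpy illustration): draw a 3x3-pixel square and a 3-pixel-wide circle on a fine grid and average it down, and the circle's partially filled corner pixels are what let you tell the two apart.

```python
import numpy as np

def downsample(img, f):
    # Average f x f blocks into single "camera pixels".
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

size = 60  # fine grid; one camera pixel = 10x10 fine cells
yy, xx = np.mgrid[0:size, 0:size]

# A 30x30 square and a 30-wide circle, i.e. 3x3 camera pixels each.
square = ((yy >= 10) & (yy < 40) & (xx >= 10) & (xx < 40)).astype(float)
circle = ((yy - 25.0) ** 2 + (xx - 25.0) ** 2 <= 15.0 ** 2).astype(float)

sq_small = downsample(square, 10)
ci_small = downsample(circle, 10)

# The square gives nine fully bright pixels; the circle's corner pixels
# are only partially filled, so the two shapes are distinguishable at 3x3.
print(np.round(sq_small, 2))
print(np.round(ci_small, 2))
```

At 1x1 or 2x2 pixels the corner information is gone and both shapes collapse into the same square blob, which is the whole point of the 3x3 rule.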
Edited to add:
I forgot about the difference between the map-projected version and the original. It comes from stretching or compressing the image to compensate for the angle between the camera and the ground (usually close to 90º, but not exactly 90º) and for the shape of the target (Mars, in this case), so the original resolution is sometimes smaller than the map-projected version and sometimes bigger.
[edit on 14/8/2009 by ArMaP]