I posted the following information in another thread, but I think it is relevant to this thread as well, so I am reposting it here.
As promised, I ran some tests to see whether there is a difference between a normal photo taken in visible light and a composite made from three photos, one through each channel filter (red, green and blue).
My sister got me some sample filters (they are not camera filters, they are lighting filters), and I chose the ones closest to those used by the Rovers' cameras.
These are the characteristics of the three filters, showing which wavelengths they let through.
[Image: combination of all three filters]
It is noticeable that they are not narrow-band filters, so I have no way of knowing (at least for now) whether the differences would be stronger with narrow-band ones.
First, a colour target from an HP scanner.
Photo taken in sunlight, with "auto levels" applied in Photoshop.
Same conditions, but a composite made from three photos, one per channel.
The colours in the composite look stronger, and they are farther from what I actually see than those in the visible-light photo.
The same colour target under artificial light, which gives it a more yellow tint.
Same conditions, but the composite.
There is a bigger difference between these two photos than between the ones taken in sunlight, probably because the blue filter gave a very dark photo under the artificial light, so auto levels did not have as much data to work with as in the other channels, giving the composite a yellowish cast.
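To make it clearer why a dark channel causes this, here is roughly what "auto levels" does to each channel: it stretches the histogram so the darkest pixels go to black and the brightest to white. A minimal sketch in Python with NumPy (the 0.5% clipping value is my assumption, not necessarily Photoshop's default):

```python
import numpy as np

def auto_levels(channel: np.ndarray, clip: float = 0.5) -> np.ndarray:
    """Stretch a greyscale channel so its histogram spans 0-255,
    ignoring `clip` percent of pixels at each end."""
    lo, hi = np.percentile(channel, (clip, 100.0 - clip))
    if hi <= lo:  # flat channel: nothing to stretch
        return channel.astype(np.uint8)
    out = (channel.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return np.clip(out, 0, 255).astype(np.uint8)

# Simulate a very dark channel, like my blue one under artificial light:
dark = (np.random.rand(100, 100) * 20).astype(np.uint8)  # values 0-19 only
print(dark.max(), auto_levels(dark).max())  # the stretch pushes ~19 up to 255
```

When a channel has almost no signal, this stretch amplifies noise as much as signal, which is why the blue channel came out so unreliable here.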
Now, a photo of a sunlit outdoor view (what I see from my dining-room window).
And the composite of the same view, taken a few seconds later.
Some things that are visible in this comparison:
1. The clouds moved too fast between the three exposures.
2. The blues are stronger: both the sky and a building in the background on the right side are bluer.
3. All reds are weaker: the reflected light on the wall on the left side, for example, is not as red in the composite as it is in the "real" photo.
4. The more neutral tones become a little weird, probably because of the lack of red (a neutral grey needs roughly equal red, green and blue, so weak red pushes greys towards blue), but I am not sure.
One thing that is not visible here (and that only I can judge) is that the "real" photo is much closer to the true look of that scene, although it is a little darker than in real life.
Considering this, I think (even more than before) that simply making composites from the Rovers' photos creates images with too much blue and too little red. That does not mean the reddish NASA photos are correct (I have no way of knowing), but it makes me think they are really closer to what can be seen on Mars.
The camera was on automatic for all photos.
I used the camera's black-and-white (greyscale) mode for the three channel photos.
I used auto levels on the colour photos and on each individual channel to keep the process as automatic as possible, and closer to what we do with the Rovers' photos.
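For anyone who wants to reproduce the compositing step digitally rather than in Photoshop, it comes down to assigning each greyscale photo to one RGB channel. A minimal sketch in Python with Pillow (the file names are placeholders for my three channel photos, and each channel would also get the auto-levels stretch from the sketch above before merging):

```python
from PIL import Image

# Placeholder file names for the three greyscale photos taken
# through the red, green and blue filters (all the same size).
red = Image.open("red_filter.jpg").convert("L")
green = Image.open("green_filter.jpg").convert("L")
blue = Image.open("blue_filter.jpg").convert("L")

# Merge the three single-channel images into one RGB composite,
# like assigning each photo to a channel in Photoshop.
composite = Image.merge("RGB", (red, green, blue))
composite.save("composite.jpg")
```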