Originally posted by jeep3r
reply to post by ArMaP
If my interpretation or understanding is correct ...
Error level analysis is, at its base, a difference/comparison process.
You can mimic what that website does, to an extent, by saving a more compressed copy of the image and then subtracting it from the original.
Basically it exploits the properties of JPEG compression: JPEG is block based, so regions that have been through an application such as Photoshop, or spliced in from another image, can re-compress at a different error level to the rest of the picture. That's likely what
the authors meant when referring to heavy JPEG compression causing their method to become (potentially) less reliable.
If an image is saved over and over again, or saved at very low quality, the technique does become less reliable. On the opposing
side, covering your image in noise will alter every compression block and make the result harder to interpret. That would get you caught out in other ways, though.
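A minimal sketch of the "save a compressed copy and subtract" idea, assuming Pillow is installed. The quality setting here is an arbitrary choice for illustration, not necessarily what that website uses:

```python
# Toy error level analysis: re-save as JPEG, diff against the original.
# Assumes Pillow (PIL) is available; quality=90 is an arbitrary choice.
import io
from PIL import Image, ImageChops

def error_level(path, quality=90):
    """Re-save the image as JPEG at a known quality, then return the
    per-pixel absolute difference against the original. Edited regions
    often re-compress at a different error level than the rest."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    return ImageChops.difference(original, resaved)
```

The raw difference is usually very dark, so tools brighten or rescale it before display; what you look for is regions whose error level stands out from the rest of the frame.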
Whether I got it right or not: I certainly ask myself whether there's no way to preserve the natural noise variance
It's called counter-forensics. (I wouldn't call it noise variance exactly, maybe, but yes it's a valid question anyway >.<)
Originally posted by ArMaP
reply to post by wmd_2008
But, as I always say, if done by a professional, nobody would notice it.
Before I start, I'll say I think you're awesome, ArMaP. ATS will hate me for this, but what you're saying isn't quite
accurate, and showing ATS members
photos is not a test of forensics at all.
Image forensics doesn't happen on ATS. Not being mean! We just don't do comparative science here, and if we did, very few people would read it.
NASA has provided a certain level of transparency with their imagery and hardware, which leaves them vulnerable if they are faking and altering images. The
image forensics game has developed fast in the last ten years, and we keep finding more ways to model and classify CG and tampered imagery.
Perhaps if they were using incredibly
solid computer generation software and a virtual camera, they could be generating the images entirely, but even
then. To say no one would notice 'if done by a professional' ... no, not a professional, a counter-forensics God
would be needed to do this, or
they're shooting on film sets (again, haha).
This is really
brief stuff, but it should give an idea:
Camera lenses are an image-ballistics go-to: they often have unique properties which can be modelled.
Chromatic aberration, for example, comes from a lens failing to focus all wavelengths of light to the same point; different wavelengths will hit different parts of the
sensor, which can be modelled and then compared to other images. The image center can also be estimated to detect cropping, and any attempt to composite
would have to match lens aberrations such as this. There are several of these kinds of traits that could be targeted in the Curiosity dataset.
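To give a flavour of the "measure, then compare" idea: lateral chromatic aberration shows up as the color channels being slightly misaligned. Real forensic models fit a radial expansion about the optical center; the sketch below only estimates a single global shift between two channels via phase correlation, which is a deliberate simplification:

```python
# Hedged sketch: estimate the displacement between two color channels
# (e.g. red vs green) with phase correlation. A real chromatic aberration
# model is radial, not a single global shift; this is just the core
# measurement step, using numpy only.
import numpy as np

def channel_shift(chan_a, chan_b):
    """Return the (dy, dx) integer shift that best aligns chan_b to chan_a."""
    fa = np.fft.fft2(chan_a)
    fb = np.fft.fft2(chan_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-9          # normalize: keep phase only
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the halfway point wrap around to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

Measured across many images from the same camera, a shift (or radial aberration field) that suddenly disagrees in one region is a red flag for compositing.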
NASA have also been pretty up front about their sensor and other hardware choices, their choice of color filter array for example. The demosaicing
process and its interpolation create periodic correlation patterns between neighbouring pixels. Changes in this pattern would show that an image was altered, and Curiosity is
providing a huge
data set for consistency investigations here.
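Here's a toy 1-D illustration of why demosaicing leaves periodic correlations, using numpy; real CFA forensics works in 2-D on the actual Bayer layout, but the principle is the same: interpolated pixels are an almost exact linear mix of their neighbours, captured pixels are not.

```python
# Toy CFA-correlation demo (1-D simplification of demosaicing):
# a "sensor" captures every other sample and linearly interpolates
# the rest. A linear predictor's residual is exactly zero at the
# interpolated positions and not at the captured ones, giving the
# periodic pattern forensic detectors look for.
import numpy as np

rng = np.random.default_rng(1)
signal = rng.random(256)

# "Demosaic": keep even samples, interpolate odd samples from neighbours.
demosaiced = signal.copy()
demosaiced[1:-1:2] = 0.5 * (demosaiced[0:-2:2] + demosaiced[2::2])

# Residual of the same linear predictor at every position 1..254.
residual = np.abs(demosaiced[1:-1] - 0.5 * (demosaiced[:-2] + demosaiced[2:]))

interp_resid = residual[0::2]    # odd positions: were interpolated
captured_resid = residual[1::2]  # even positions: straight off the "sensor"
```

Splicing in content from another camera, or re-interpolating a region, disturbs this tell-tale alternation, which is what the consistency checks key on.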
The actual editing process is a whole other maths minefield. Scaling, rotating, etc. will resample the image. Again, this can be modelled. With the
massive data set provided by Curiosity, hue, saturation, and luma are all areas to look for inconsistent maths. Natural image statistics
(the study of regularities inherent to natural images) can be a massive boon too, as can our numerous and well-understood laws of optics.
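The resampling point can be sketched the same way as the demosaicing one, assuming numpy. After a scale edit, samples are periodically near-exact linear mixes of their neighbours, and the predictor residual picks up a periodicity tied to the resampling rate (the factor 3/2 below is an arbitrary example):

```python
# Hedged 1-D sketch of resampling detection: upsample a signal by 3/2
# with linear interpolation, then look at a linear predictor's residual.
# The residual pattern repeats with the resampling period, which shows
# up as a spectral peak. Real detectors do this in 2-D with an EM fit.
import numpy as np

rng = np.random.default_rng(2)
x = rng.random(310)                        # "original" scanline

# Scale edit: upsample by a factor of 3/2 using linear interpolation.
pos = np.arange(452) * (2.0 / 3.0)         # output positions in input coords
y = np.interp(pos, np.arange(len(x)), x)

# Linear predictor residual; its magnitude cycles with period 3 here.
resid = np.abs(y[1:-1] - 0.5 * (y[:-2] + y[2:]))

# The periodicity appears as a peak in the residual's spectrum.
spectrum = np.abs(np.fft.rfft(resid - resid.mean()))
```

An untouched capture has no reason to show such a clean spectral peak, and a pasted-in region that was scaled differently from its surroundings shows a *different* peak, which is the giveaway.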
Digital image forensics has come a long
way. Art tools aren't developed to create forgeries. In fact, they're quite lousy at it in a situation
where a lot of information is known about the photos being taken. Even minor changes made by the camera itself can be quantified with maths.
There was a time at the start of the digital age when image forensics looked like it was in deep trouble; the wide availability of video equipment meant
that broadcast attacks on video security scanners and similar things became a real concern, and catch-up was needed. Such scanners can now recognize
when they are receiving a rogue signal, even when their network doesn't, just by analyzing the image presented to them.
There have been large numbers of statistical investigations into how images are captured and formed, with surprisingly good results. Classification
accuracies reaching 90% in studies are not uncommon, though that number can drop considerably with some techniques out in the wild ... the point is we're
getting a lot better at it, and the danger of new approaches rumbling high-profile fakery is quite high.
edit on 7-3-2013 by Pinke because:
(no reason given)