New algorithm to detect altered photos.

jra

posted on Jul, 24 2004 @ 06:17 PM
I don't know if this has been posted yet, or if this is the right section for it, so please delete or move this if needed.

I just found an article about a new computer algorithm that can look for evidence of tampering and editing that the naked eye wouldn't normally see. This could help a lot with finding out which photos are faked and which are possibly real. I say possibly real because there are other methods of faking UFO photos that don't need Photoshop. For example, taking a photo of a real scale model.

So I wouldn't expect this algorithm to detect every fake photo, but at least it should filter out the ones that were faked by compositing two or more images in Photoshop (or any similar program). You can read more about it here.

www.dartmouth.edu...

[edit on 24-7-2004 by jra]



posted on Jul, 24 2004 @ 06:28 PM

Originally posted by www.dartmouth.edu...
Farid's algorithm looks for the evidence inevitably left behind after image tinkering. Statistical clues lurk in all digital images, and the ones that have been tampered with contain altered statistics.

"Natural digital photographs aren't random," he says. "In the same way that placing a monkey in front of a typewriter is unlikely to produce a play by Shakespeare, a random set of pixels thrown on a page is unlikely to yield a natural image. It means that there are underlying statistics and regularities in naturally occurring images."


The biggest hiccup I can see with using a statistical quantization formula or similar is in certain cases where the data may seem random but is really just extreme lighting conditions, based on my experience adjusting image palettes. The other common monkey wrench would be recompressing from one format to another, and then to yet another. It does hold promise for images that have only been compressed once (like most .jpg and .png files). Heh, it might also be a good benchmark for anti-aliasing work in a given photo/graphics editor. I would be highly interested in how this develops. Heh, I might even be interested in seeing how the system works on a mathematical level. Does anyone know if this will be proprietary or open-source?
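A quick way to see the "natural images aren't random" point for yourself is to compare neighboring-pixel correlation in a photo against pure noise. Here's a rough Python sketch (my own illustration, not Farid's model; "photo.png" is a placeholder for any image on your disk):

import numpy as np
from PIL import Image

# Load any photo as grayscale; "photo.png" is a placeholder filename.
img = np.asarray(Image.open("photo.png").convert("L"), dtype=float)
noise = np.random.uniform(0, 255, img.shape)

def neighbor_corr(a):
    # Correlation between each pixel and its right-hand neighbor.
    return np.corrcoef(a[:, :-1].ravel(), a[:, 1:].ravel())[0, 1]

print("photo:", neighbor_corr(img))    # typically 0.9 or higher
print("noise:", neighbor_corr(noise))  # close to 0

Natural photos score near 1 because neighboring pixels tend to share values; the monkey-at-the-typewriter image scores near 0. That gap is the sort of regularity a statistical model can latch onto.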


jra

posted on Jul, 24 2004 @ 06:30 PM
Well, I just did some more reading on this stuff... apparently this algorithm doesn't work well on lossy formats such as .jpg. Seeing as how pretty much all photos on the net are .jpg or .gif, that makes it rather useless.

For those that don't know, .jpg compression can alter the values of pixels. The algorithm looks for pixels that aren't consistent with the majority of the other pixels, or something like that, so this thing would only work well on lossless formats like .bmp or .tiff and what have you.
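If you want to see that damage for yourself, one lossy pass is enough. A rough sketch using Pillow (my own illustration; the filenames are placeholders):

import numpy as np
from PIL import Image

# Start from a lossless source; "original.png" is a placeholder filename.
src = Image.open("original.png").convert("RGB")
src.save("once.jpg", quality=90)  # a single JPEG pass at high quality
back = np.asarray(Image.open("once.jpg").convert("RGB"), dtype=int)

diff = np.abs(np.asarray(src, dtype=int) - back)
print("pixels changed:", np.count_nonzero(diff.any(axis=2)))
print("max per-channel error:", diff.max())

Even at quality 90, most pixels come back slightly shifted, and that noise is exactly what would swamp the subtle statistical traces the algorithm hunts for.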



posted on Jul, 24 2004 @ 06:36 PM
I always love it when people claim to have come up with detection systems. So many people overlook so much and produce sub-par results.


Farid's algorithm looks for the evidence inevitably left behind after image tinkering. Statistical clues lurk in all digital images, and the ones that have been tampered with contain altered statistics.

Farid and his students have built a statistical model that captures the mathematical regularities inherent in natural images. Because these statistics fundamentally change when images are altered, the model can be used to detect digital tampering.


The article later states that, in the future, this sort of algorithm could be used in court cases. But that's absurd. Two points and then I'll sheathe my logical katana:

1) He has only captured a subset of all images. This means he took some tiny sample of all images and averaged them out, yet he claims to "know" all images. If I've kissed 23 women, do I know what it's like to kiss all women? I hope not.

2) His system can be defeated by a simple hack: take a picture of a digitally altered image. The variation range, in the process of being transferred to photographic (or digital) film, will become normalized again. Simple. Now I have a real photograph of a phony digital image.


Off-topic, but his work on separating reflections is very impressive: www.cs.dartmouth.edu...



posted on Jul, 24 2004 @ 06:52 PM

Originally posted by jra
Well, I just did some more reading on this stuff... apparently this algorithm doesn't work well on lossy formats such as .jpg. Seeing as how pretty much all photos on the net are .jpg or .gif, that makes it rather useless.


I am quite surprised that it doesn't take the palette into account, as that would allow some insight into the overall behavior of the image, compressed or not. GIFs might be an exception, since the GIF87a specification uses lossless LZW compression on palette indices, whereas JPEG (if I understand correctly) is a lossy advancement on the compressed bitmap scheme. In the case of JPEG, if I remember right, it will try to find tonal averages between two pixels and then insert/record the average pixel depending on certain conditions. The compressed bitmap format is lossless because it simply counts how many identical colors occur in sequence, then records pairs of (color value, run length) until the whole image has been scanned into the file.

As it stands, yeah, this will be nice for lossless images, but anywhere from mediocre to useless for lossy ones. Which brings me back to the point I stated earlier: the more an image is altered in a lossy format and *recompressed*, the less diverse the color data will be. It might be able to give a rough count of alterations, but it will still take a human being to interpret the results, which is kind of counter-productive given its original purpose.
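For the curious, that (color value, run length) pairing is plain run-length encoding. A minimal Python sketch of the idea (my own illustration, not tied to any particular bitmap spec):

def rle_encode(pixels):
    # Collapse runs of identical values into (value, count) pairs.
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return [tuple(r) for r in runs]

def rle_decode(runs):
    # Expand the pairs back out; lossless by construction.
    return [v for v, n in runs for _ in range(n)]

row = [7, 7, 7, 7, 2, 2, 9]
assert rle_decode(rle_encode(row)) == row
print(rle_encode(row))  # [(7, 4), (2, 2), (9, 1)]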

[edit on 24-7-2004 by Crysstaafur]



posted on Jul, 24 2004 @ 10:52 PM
Ah... I see now.

From www.cs.dartmouth.edu..., which is the actual published report.

Here is the key to his sampling: "When creating digital forgeries, it is often necessary to scale, rotate, or distort a portion of an image. This process involves re-sampling the original image onto a new lattice. Although this re-sampling process typically leaves behind no perceptual artifacts, it does introduce specific periodic correlations between the image pixels." (p. 10)
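That periodic correlation is easy to reproduce in one dimension. A rough Python sketch (my own toy version, not the paper's detector): after 2x linear up-sampling, every interpolated sample is exactly the average of its two neighbors.

import numpy as np

x = np.random.rand(100)                    # one row of a hypothetical image
grid = np.arange(0, 99.5, 0.5)             # 2x up-sampling positions
up = np.interp(grid, np.arange(100.0), x)  # linear interpolation

# Odd samples are the interpolated ones; each equals the mean of its
# two even-indexed neighbors - the "periodic correlation" in question.
resid = up[1::2] - (up[0:-2:2] + up[2::2]) / 2
print("max residual:", np.abs(resid).max())  # ~0 across the whole row

A real detector has to find these correlations without knowing the sampling grid in advance, which is the hard part of the paper.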

I currently work as a "digital image specialist" at a record archive group for a certain university. Sometimes I have to work from contact sheets that have numbers written all over them in grease marker. I have found that, overall, using Photoshop, I never have to scale or distort segments of the image. I admit I have had to rotate a segment, but his algorithm is cumulative (convolution-based, technically) and I doubt small instances of rotation matter.

Most forgeries do not require wholesale scaling and rotating. Oftentimes the stamp tool, aligned with the previous sample point, is enough.

He also writes:


There is a range of re-sampling rates that will not introduce periodic correlations. For example, consider down-sampling by a factor of two (for simplicity, consider the case where there is no interpolation). The re-sampling matrix, in this case, is given by:

[re-sampling matrix omitted; see fig. 14 in the paper]

Notice that no row can be written as a linear combination of the neighboring rows - in this case, re-sampling is not detectable. More generally, the detectability of any re-sampling can be determined by generating the re-sampling matrix and determining whether any rows can be expressed as a linear combination of their neighboring rows - a simple empirical algorithm is described in Section III-A.


Since he's working in a 2D matrix format, row by row, left to right ("linear combination of the neighboring rows"), and rotation works diagonally, it is unlikely to be detected.
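For what it's worth, the quoted down-sampling matrix is easy to build and test. A naive Python sketch of the "linear combination of neighboring rows" check (my own toy version, not the empirical algorithm from Section III-A):

import numpy as np

def downsample_by_two(n_out, n_in):
    # No interpolation: output row i just selects input sample 2*i.
    m = np.zeros((n_out, n_in))
    m[np.arange(n_out), 2 * np.arange(n_out)] = 1.0
    return m

def neighbor_combo(m, i):
    # Least-squares fit of row i as a*row[i-1] + b*row[i+1].
    nbrs = np.stack([m[i - 1], m[i + 1]], axis=1)
    coef, *_ = np.linalg.lstsq(nbrs, m[i], rcond=None)
    return np.allclose(nbrs @ coef, m[i])

m = downsample_by_two(5, 10)
print(any(neighbor_combo(m, i) for i in range(1, 4)))  # False: not detectable

Swap in a 2x up-sampling matrix with linear interpolation and the same test returns True for every interpolated row, which is the whole trick.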

What he has achieved, however, is amazing. Take a look at his polar Fourier graphs of the altered images. The pinpoints indicate row disparity and hence forgery.


