I started this thread after posting in a Hoagland thread on roughly the same topic. Hoagland, for those who don’t know, has altered old images to produce artefacts which he claims are hidden information or structures.
There is also the Moonlanding situation, where those who don’t understand image enhancement techniques believe that photographs demonstrating the existence of a lunar lander are ‘false’ because they have been enhanced.
So what are appropriate methods to enhance an image? When is a person introducing new information? And what the hell does sharpening an image actually do?
Rather than running around ATS trying to spread information on sticky notes I decided to make a thread about altering photographs in general. This
will prove useful if:
1. You have made adjustments to a photograph to reveal detail and want to prove, or find out, whether you have actually demonstrated what you think you have.
2. You are discussing with an expert, and want to understand what they are doing.
3. You are an expert, and want to understand what you are doing.
This will not prove useful if ...
1. You are already a massive expert and want to skip straight to the more advanced stuff.
2. You do not care, or you already have an expert you trust and subscribe to and do not want to question them.
3. You are Richard Hoagland.
If there’s a good response to this initial post I may make some more in the same thread on similar topics.
Who is Pinke, and who is an expert?
I suppose I say this for range finding purposes …
The term expert is a relative term. To put things in perspective … I am a junior artist, and have assisted in forensic analysis on a small number of
occasions. Note, this doesn’t mean I have done particularly interesting things, or particularly amazing stuff, but it gives an idea.
I don’t consider myself an expert. The people I’ve worked with have been experts in my mind. They wouldn’t consider themselves experts either.
The one thing I have learnt about experts is they are never afraid to refresh themselves on a subject before working on it.
I personally believe you’re an expert whenever you’re right, confident, and don’t need to be offensive to make a point. When you can present
your information so anyone can follow it, you are an expert. If you ever have to state your credentials on record you’re either in a court room, or
you should be able to state that your information can be easily researched and tested.
You’ve captured an image of a UFO, the JFK assassin, or some other interesting artefact or event. You’re a professional photographer, perhaps
considered an expert in the design field. You enhance your image to show some extra detail, or perhaps use a median filter to assist the viewer …
You have revealed E.T giving the single finger salute.
You are then asked … how does it work? How do we know it’s not just something you added by accident?!
The algorithms used in computer systems and image software are often not even understood by the professional using them. This can lead to inadvertent
mistakes, or the inability to hit home with proof. A researcher with good information simply can’t explain why their approach was appropriate, or
why it is actually correct.
Your audience will begin to find the information you’re presenting boring or inconclusive, or perhaps another ‘expert’ will take your findings
and turn them to dust. Due to your lack of understanding of the physics involved, you find yourself defenceless.
In this thread, I’d like to discuss just some simple, practical information a person can use when investigating photographs or video in clean
acceptable ways. I’m going to bounce around a little, and do some case studies to demonstrate. Hopefully in the long term this information will
become useful to those actually investigating events, or to those dealing with experts they do not understand.
During this thread I’m going to make some assumptions.
Ideally we would like to capture our images on film and have access to a lovely negative. Photographic prints are not much good for tonal and spatial
resolution. Prints provide poor grey scale range, and often our digital images are from prints or worse … photos of prints. Sometimes we only get
jpeg compressed images, or worse.
Also, ideally, our eye witnesses to any situation would be reliable. Unfortunately, human vision mostly sucks, even though it is better than film in a lot of ways. Human vision does not measure brightness, for example; it can only compare brighter and darker objects. Both film and human vision respond logarithmically to light, which is the beginning of our problems.
Whilst a film camera can record a 12-bit image with 4096 brightness levels … the average digital camera or digital print is usually 8 bit (256 brightness levels). This means that while a piece of film can be processed to reveal hidden details, our digital images are not so much kneecapped as disembowelled. Even images shot on the beloved Red One camera are only 10 bit.
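To put numbers on this, here is a minimal Python sketch of what those bit depths mean, and what happens when 12-bit film data is quantised down to 8 bits. The helper names are mine, purely for illustration:

```python
def levels(bits):
    """Number of distinct brightness levels at a given bit depth."""
    return 2 ** bits

def to_8bit(value_12bit):
    """Quantise a 12-bit value down to 8 bits by dropping the
    bottom 4 bits: 16 different film values collapse into one."""
    return value_12bit >> 4

print(levels(12))  # 4096 levels on film
print(levels(8))   # 256 levels in a typical digital image

# Two distinct 12-bit film values become the same 8-bit value:
print(to_8bit(2047), to_8bit(2032))  # 127 127
```

Once those sixteen film values have collapsed into one, no amount of processing on the 8-bit file will separate them again.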
Human vision also does not measure color. Humans will see colors differently depending on the other colors in a scene. Humans are much better at comparing two colors than they are at measuring a single color. This results in a number of famous illusions which can be performed on humans.
This is my favorite one and one of the most famous. The squares labelled A and B are both RGB 120 120 120. Yet, our eyes assume that one is white.
Cameras themselves aren’t really affected by these illusions. However, they are affected by the sensitivity of the chemical reactions in film, or the diodes in a chip. The color is interpreted by the camera, and is in no way the same amount of information as is in reality. This means the camera should be white balanced for its location. How often is this done in our images? Not often. There are ways around this, but that’s not for today.
This means that unless our witness shoots several different exposures (a very rare event) we’re up our creek with a spoon as a paddle to begin with, and can often be left with a dark and difficult to see photograph. In UFO images this is usually extreme white mixed with extreme black.
The other major issue with our cameras is compression. Many images we see are using JPEG (Joint Photographic Experts Group) or H.264 compression.
It’s good to be familiar with your compression technique prior to judging an image.
A nice way of doing this sometimes is using a grid! Here is a grid in JPEG (a lossy format) and TIF (a lossless format which would be posted except ATS doesn’t like TIF images. Which is a bit like installing a cat flap for an elephant on a conspiracy forum, but I guess bandwidth costs money!). You can see some of the differences, though the compression I’ve used is extreme to highlight them.
This example lets us discuss some of the attacks and defences based around compression.
Do we need to see the original image?
This is ultimately dependent on a number of factors; cameras have a definite resolution limit. Also, it may shock some persons, but often the information coming in through the sensor is compressed by the camera itself before it gets to an SD or flash card. The digitization process itself limits the resolution. Therefore, sometimes H.264 or JPEG compression is perfectly acceptable. It is unlikely the camera will produce anything much greater in quality anyway.
What does compression actually do?
A rarely asked question, but often referred to when an image looks a bit scrappy.
Lists of things that can happen:
- Removes features
- Alters the size, shape, and color of features
- Reduces resolution in different areas (different compressions handle different parts better; normally details in uniform regions are preserved better than in heavily detailed areas. Parts can also be affected depending on where they lie in the compression. JPEG divides an image into 8 x 8 pixel blocks. At the boundaries this can create loss of detail or artefacts which cannot be predicted reliably. In a video, details can often be observed or resolved elsewhere in a moving image.) With knowledge of how a compression acts, a person can reliably interpret an image.
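To illustrate the 8 x 8 block point, here is a small Python sketch (my own toy helper, not any real JPEG code) showing how a narrow feature can straddle a block boundary and so get encoded by two independent blocks:

```python
BLOCK = 8  # JPEG works on 8 x 8 pixel blocks

def block_of(x, y):
    """Which block a pixel at (x, y) falls into."""
    return (x // BLOCK, y // BLOCK)

# A narrow feature spanning x = 7..9 crosses a block boundary,
# so two blocks compress it independently of each other:
print([block_of(x, 0) for x in (7, 8, 9)])
# [(0, 0), (1, 0), (1, 0)]
```

Each block is quantised on its own, which is why artefacts cluster along these boundaries rather than being spread evenly over the image.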
Later we might go into methods of identifying compression, and reviewing different types of compression. However, not much time today!
In the grid example above, notice the errors in the image?
It’s best if the storage medium of an image or video cannot be tampered with, and this includes introducing errors. Magnetic storage mediums can
create errors, and can be edited on a day to day basis. Storing online in a time stamped location, or on a CD or DVD with a serial number ensures a
medium isn’t being altered on a regular basis. Note CD or DVD is not a good ongoing archival solution as they fail, but it can be evidence that you
have not tampered with your content.
Keep complete records of step by step procedures used to alter your image. Keep a copy of each version of your image. Ideally you should be able to take a picture of a grid with the same model of camera, if not the same camera, so lens artefacts can be investigated.
If you're really keen there are programs which can check an image between versions to make sure it has stayed the same.
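One simple way to do this yourself is with a cryptographic hash. A minimal Python sketch using only the standard library (the tiny byte strings here stand in for image files):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of some bytes; any single-bit change
    produces a completely different digest."""
    return hashlib.sha256(data).hexdigest()

original = bytes([120] * 64)          # a tiny stand-in 'image'
tampered = bytes([120] * 63 + [121])  # one pixel nudged one level

print(fingerprint(original) == fingerprint(original))  # True
print(fingerprint(original) == fingerprint(tampered))  # False
```

Publish the digest of your untouched original alongside your enhanced versions and anyone can verify later that the original was never altered.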
A digital image, in its simplest form, is a grid of numbers containing pixel brightness values. Pixels, until we invent something else, are the squares which hold our images together. Always remember, it’s just squares and numbers.
Enhancing an Image Hoagland Style!
Ideally image enhancement should show already existing details better for our silly human eyes. It shouldn’t add new detail or destroy our original data. There are moments where damaging data is required, but if I make more of these posts we will discuss how to deal with this later on a method by method basis.
The Common ATS Methods
The most face-palm-worthy thing seen on a daily basis is persons using various pixel maths to produce alternate versions of an image that even they do not understand. Often this is due to a misunderstanding of their own work flow.
Most users work in 8-bit color space, which introduces some major problems when working with an image with ‘hidden’ detail. I will be working with my gradient grid image so as not to provoke any controversy among ATSers. Please feel free to follow along in your own editing program.
One of the most common mistakes dealing with an image is assuming that your editing application is actually your friend. It is a tool like any other;
smack your hand with it, it won’t say sorry. Contrast and brightness can be useful tools, however they must be used with care. Especially when using
more than one operator/script/filter/node (for the purposes of this exercise I will refer to any adjustment or filter as an operator or operation.
Many different applications use different terms for these!)
First a simple example: imagine my grid image is a nice sky.
Figure A is my working plate or image.
Figure B is Figure A with a 50% increase in brightness. (I will work in percentages so persons without a video background can follow along.)
Figure C is Figure B with a 50% reduction in brightness. What happened?
If you’re lucky your software will guess what you’re trying to do. If you’re only a little lucky you can change your options so your software
will store the lost values in the future. If you’re unsure of how to do this, you’ve just been trolled.
The brightest part of our image is now clipped at 50%. Any time you work with images you must be aware of the limitations of both your software and your image. Ideally, you should go back and work out what you were trying to do and do it in a much cleaner way. In our non-practical example,
we would simply remove the first brightness operation. Voila.
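If you want to see the clipping happen in numbers rather than pictures, here is a minimal Python sketch of the same round trip, assuming a simple multiplicative brightness operator (your application’s operator may differ in detail, but the clipping behaviour is the point):

```python
def brighten(pixels, pct):
    """A simple multiplicative brightness operator in 8-bit space:
    anything pushed past 255 clips and is lost for good."""
    return [max(0, min(255, round(p * (1 + pct / 100)))) for p in pixels]

sky = [60, 120, 180, 240]      # a simple four-pixel gradient
up = brighten(sky, 50)         # +50% brightness
down = brighten(up, -50)       # then -50% brightness

print(up)    # [90, 180, 255, 255] -- top two values hit the ceiling
print(down)  # [45, 90, 128, 128]  -- the clipped detail never returns
```

Note that the two brightest pixels, which were different in the original, are now identical. That data is gone, and no later operation can bring it back.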
However, let’s look at a practical use of this theory and how a person might make a real mistake. After all, this particular mistake is quite obvious, and who would really use two opposing operators like that? The answer is Richard Hoagland.
Regardless of your thoughts on Hoagland’s imagery, here is an example of an easy mistake using brightness and contrast operators.
This time we will begin with Figure B, which has had brightness increased by 50%. We will now increase the contrast by 25%, as was the case in a Hoagland image, and review what we have found.
As can be seen, all we have achieved is more destroyed data! No new data has been shown. However, Richard Hoagland did not find destroyed data in his
famous images. One of the reasons behind this is film grain and similar artefacts. I will now add grain to my original image.
And continue the same process as before. I will also remove the grid so you can see what occurs in the film grain areas.
Figure1C - brightness and contrast increase with grain
Now we’re getting some Hoagland style data. In our efforts to discover new data, all we have done is damaged our existing data to the point of
fooling ourselves. In a final step, I have increased the brightness to 100% to demonstrate how an incredibly dark or bright sky may reveal a hidden
‘line’. These are often the ‘lines’ referred to in moonlanding photography which supposedly demonstrate ‘sets’ and other such things.
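The fake ‘line’ can be shown with numbers too. A minimal Python sketch, again assuming a simple multiplicative brightness operator, pushing a smooth gradient sky until it clips:

```python
def brighten(p, pct):
    """Simple multiplicative brightness, clipped to 8-bit range."""
    return max(0, min(255, round(p * (1 + pct / 100))))

# A smooth sky gradient with no edges anywhere in it:
sky = [100, 110, 120, 130, 140, 150]
pushed = [brighten(p, 100) for p in sky]
print(pushed)  # [200, 220, 240, 255, 255, 255]
# Everything from 128 upward has clipped to a flat 255; the edge
# of that flat region reads as a hard 'line' that isn't in the scene.
```

The ‘line’ is simply the boundary where the operator started clipping, not anything that was in front of the camera.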
We must be familiar with what our attempts to reveal data are doing prior to acting on our instincts. The majority of ‘new data’ in an image that
is found legitimately is not new data …
The human eye, as stated previously, is logarithmic. We do have trouble noticing details in extreme dark and bright planes of an image. The
bonus of this tutorial is that I know exactly what I am looking at. When we are faced with an image with no reference to what is, we can’t be sure
of what we’re doing unless we’re sure of our maths. We know my image is a gradient with a grid over the top. So how do we reveal this without
causing more problems?
We could choose one operator and stick with it.
We could Decrease Contrast.
Both bring us unsatisfactory results. Both actually make our grid harder to see in some ways (which you will see if following the whole tutorial). At
best the push to grey is a minor help, at worst we are in a situation where our results are inconclusive, or too simplistic.
Another approach might be to do an invert, but if human eyes have a problem noticing detail in extremes of light and dark we are only reversing our problem.
Some people might then move on to sharpening. The instant reward approach encourages you to reach for your sharpening tool in your favourite editing application.
Sharpening is Annoying
Classically, what sharpening does is increase the difference between adjacent pixels. This has the effect of making an image look sharper, but is a
trick of the eye. From this point in the demonstration, I have put some happy text at the bottom of my image. This is to demonstrate why certain
approaches don’t work, as a grid is very simple. I have also introduced some grain, permanently from this point on. Use this image if you wish to
follow along with me.
So I want to find out what the writing says! Brightening and contrast aren’t getting me what I want, so I decide I will use a sharpening filter.
I’m wrong btw.
Unfortunately, my sharpening has introduced even more artefacts, which makes me sad.
The majority of sharpening kernels a person uses will not produce good results in all (or sometimes many) areas. The sharpening kernel is often intended to be a quick defocus fix which introduces nasty looking edges and is really not intended for forensic purposes. Furthermore, with this method you will also be sharpening your film or video grain and other artefacts! You will also notice my file size has become significantly bigger after my sharpening effort. This isn’t what we’re trying to do!
I am going to skip the brightness and contrast attempts to solve this issue. Please try them on my sample image, however you will note they produce
less than satisfactory results.
Why does my sharpening thing suck?
Please don’t feel the need to read this bit if you hate people and maths.
A sharpening kernel is generally this:

-1 -1 -1
-1  8 -1
-1 -1 -1

Imagine the above as a group of pixels.
This multiplies the stored brightness of each pixel by 8 and subtracts the 8 neighboring pixel values from it. If this was applied to a uniform area of an image the result would be zero. For example, an area of an image with the value of 100 in nine pixels would multiply 100 by 8 and then subtract 8 lots of 100 from it. If any neighboring pixels are brighter or darker than the middle pixel, the result will be darker or brighter respectively. This is called a Laplacian kernel.
It is very similar to the process we’re about to discuss, and is usually accompanied by a very small amount of blurring. This process also alters
brightness variation which is something compositors often have to go back to fix. This can sometimes be done by changing the central pixel weight, but
we’re probably digressing.
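For those who don’t hate maths, the kernel arithmetic above can be written as a short Python sketch, applied to one 3 x 3 patch rather than a full convolution loop:

```python
# The Laplacian kernel described above: centre weight 8,
# eight neighbours weighted -1 each.
KERNEL = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]

def laplacian(patch):
    """Apply the kernel to a single 3 x 3 patch of pixel values."""
    return sum(KERNEL[r][c] * patch[r][c]
               for r in range(3) for c in range(3))

flat = [[100] * 3 for _ in range(3)]   # a uniform area
spot = [[100, 100, 100],
        [100, 110, 100],
        [100, 100, 100]]               # one slightly brighter pixel

print(laplacian(flat))  # 0  -- uniform areas come out as zero
print(laplacian(spot))  # 80 -- a 10-level bump becomes an 80-level spike
```

Notice how a 10-level difference becomes an 80-level one: the same amplification applies to film grain, which is exactly why grain looks worse after sharpening.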
One of the biggest issues with a sharpening filter or operation which you are not familiar with the internal workings of is … if someone asks …
you simply don’t know what it’s doing. Even if you do know what it’s doing, you’re also relying on the controls which are inbuilt within the
operation or filter. This prevents fine adjustments.
Disclaimer: Your sharpening result may vary depending on pluggies.
Do it the Hard Way
So what we need to do is use a method that doesn’t introduce a lot of fake data, that can be applied over our grain, and lets us see what we’re
doing quite nicely. What we’re aiming to do is solve the issue of human eyes being based around edges. Human eyes enjoy images like cartoons, with nice obvious edges for us to look at. When we can’t see edges in a photograph, it’s not because they’re not there … it’s because our eyes favour the mid-tones over, for example, the very brightest parts, or uncontrasted parts.
The Unsharp Mask
Take the original grid.
Create two layers or nodes in your program. From this point on, since most persons will be using Photoshop, I will primarily refer to layers. If you are using a comping app with nodes or similar I imagine you know how to translate this already, and probably already know what an unsharp mask is.
Both layers should contain a copy of our image. We will call one original and the other mask. Reduce the brightness of your mask layer to 10 – 30%; don’t worry, you can refine it later. Blur the mask layer. I blurred mine by about four pixels. Use a fairly standard and quick blurring algorithm to do this. Then set your layer to subtract from the original image. What you should end up with is much nicer, clearer, and more visible data.
Play with the values so you can understand the relationships between them!
What have I actually done?
You will notice that the tone of the image has suffered, but not nearly as much as it could have done. You can fix this, if you’re adventurous and know what you’re doing, by changing the brightness of the result, but this then becomes an artistic operation.
You've done this essentially (exact numbers may vary).
(image) - (0.4 x blurred image) = More Charlie Sheen
The bolder among you may have managed this.
(1.4 x image) - (0.4 x blurred image) = Moreish Charlie Sheen
However, this won't have 100% the effect we're looking for, but it will tighten the look.
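For the procedurally minded, the second formula can be written out as a minimal Python sketch on a one-dimensional row of pixels. I’ve used a simple box blur to keep it short; real applications usually use a Gaussian, and the 0.4 is just the example amount from above:

```python
def box_blur(pixels, radius=2):
    """A simple 1-D box blur, clamped at the ends of the row."""
    out = []
    for i in range(len(pixels)):
        lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
        out.append(sum(pixels[lo:hi]) / (hi - lo))
    return out

def unsharp(pixels, amount=0.4):
    """(1 + amount) x image - amount x blurred image, clipped to 8 bit."""
    blurred = box_blur(pixels)
    return [max(0, min(255, round((1 + amount) * p - amount * b)))
            for p, b in zip(pixels, blurred)]

# A soft edge rising from 100 to 140:
edge = [100, 100, 110, 130, 140, 140]
print(unsharp(edge))  # [99, 96, 108, 132, 144, 141]
```

Notice the dark side of the edge dips to 96 and the bright side overshoots to 144: the contrast across the edge has been stretched using only values that were already in the image, which is exactly the trick our eyes read as ‘sharpening’.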
The brightness change contrasts the edges between the two layers. The blur increases the region of the sharpening and also introduces contrast. If you find artefacts in your work you may need to blur the image more, but this shouldn’t apply to this demonstration.
Essentially, what we’ve done is made some data more visible to our silly eyes by putting contrast in the right areas which our eyes view as
sharpening. As always, with any changes like this we should always refer back to the original image to view what has happened as a result. Carefully
compare the new data to the original image. You should always
find that this information was already there at least mathematically. If you find
a small planet after doing five or six adjustments to your image, you may well have made it yourself, which, from now on, will be called pulling a Hoagie.
Don’t pull a Hoagie ATS.
PS – I removed my note from my final example image just to give persons doing the exercise something stoopid to look forward to.
Is this the only and best way of doing this?
No. There are many ways to solve this puzzle. Some are better, some are more complex, however each method has its own practical uses in real life situations.
One of the nice benefits of an unsharp mask is that you’re using your own image on itself, and thus aren’t guessing at what various contrast and
brightness levels should be at in your image. Anything you can do in a procedural way is a bonus. Plus, you know exactly what you've done, and you can
demonstrate the maths.
Furthermore, this is one method where you won’t damage the image dramatically. People are less likely to believe a heavily damaged image that reveals more detail than an image with values at least in ballpark ranges.
It’s also nice and simple. Perhaps later if people want we can discuss other more complex sharpening methods including working with color, and ways
of doing things to images which have more grain and other problems such as interlacing and interference.
I am trying to reintroduce brightness, and I can’t get it to go!
This will vary depending on your comp system, and is not really the point of the exercise.
If you’re very interested in compositing for artistic purposes there are a great many learning fountains on the web.
I thought you weren’t going to be controversial?! I agree with Hoagland!
That’s okay. I won’t judge you.
Disclaimer: This article was written in one sitting. Pinke takes no responsibility for use, accuracy, or injury from this information. Pinke is not an equal opportunities employer. Pinke may or may not be on a horse.
edit on 27-8-2011 by Pinke because: Typos vs Pinke FIGHT!
edit on 27-8-2011 by Pinke because: Stoopid Title
edit on 27-8-2011 by Pinke because: MOAR TYPOS!
edit on 27-8-2011 by Asktheanimals because: for spelling