
How to Investigate UFOS and do other things good too!

posted on Aug, 27 2011 @ 02:06 PM
I started this thread after posting in a Hoagland thread on roughly the same topic. Hoagland, for those who don’t know, has altered old images to produce artefacts which he claims are hidden information/structures.

There is also the Moon landing situation, where those who don’t understand image enhancement techniques believe that photographs demonstrating the existence of a lunar lander are ‘false’ because they have been enhanced.

So what are appropriate methods to enhance an image? When is a person introducing new information? And what the hell does sharpening an image actually do?

Rather than running around ATS trying to spread information on sticky notes, I decided to make a thread about altering photographs in general. This will prove useful if:

1. You have made adjustments to a photograph to reveal detail and want to prove, or find out, whether you have actually demonstrated what you think you have.
2. You are discussing with an expert, and want to understand what they are doing.
3. You are an expert, and want to understand what you are doing.

This will not prove useful if ...

1. You are already a massive expert and want to skip straight to the more advanced stuff.
2. You do not care, or you already have an expert you trust and subscribe to and do not want to question them.
3. You are Richard Hoagland.

If there’s a good response to this initial post I may make some more in the same thread on similar topics.

Who is Pinke, and who is an expert?

I suppose I say this for range finding purposes …

The term expert is a relative term. To put things in perspective … I am a junior artist, and have assisted in forensic analysis on a small number of occasions. Note, this doesn’t mean I have done particularly interesting things, or particularly amazing stuff, but it gives an idea.

I don’t consider myself an expert. The people I’ve worked with have been experts in my mind. They wouldn’t consider themselves experts either. The one thing I have learnt about experts is they are never afraid to refresh themselves on a subject before working on it.

I personally believe you’re an expert whenever you’re right, confident, and don’t need to be offensive to make a point. When you can present your information so anyone can follow it, you are an expert. If you ever have to state your credentials on record you’re either in a court room, or you should be able to state that your information can be easily researched and tested.

The Problem

You’ve captured an image of a UFO, the JFK assassin, or some other interesting artefact or event. You’re a professional photographer, perhaps considered an expert in the design field. You enhance your image to show some extra detail, or perhaps use a median filter to assist the viewer … You have revealed E.T. giving the single-finger salute.
You are then asked … how does it work? How do we know it’s not just something you added by accident?!

The algorithms used in computer systems and image software are often not even understood by the professional using them. This can lead to inadvertent mistakes, or the inability to hit home with proof. A researcher with good information simply can’t explain why their approach was appropriate, or why it is actually correct.

Your audience will begin to find the information you’re presenting boring or inconclusive, or perhaps another ‘expert’ will take your findings and turn them to dust. Due to your lack of understanding of the physics involved, you find yourself defenceless.

In this thread, I’d like to discuss just some simple, practical information a person can use when investigating photographs or video in clean acceptable ways. I’m going to bounce around a little, and do some case studies to demonstrate. Hopefully in the long term this information will become useful to those actually investigating events, or to those dealing with experts they do not understand.

The Medium

During this thread I’m going to make some assumptions.

Ideally we would like to capture our images on film and have access to a lovely negative. Photographic prints are not much good for tonal and spatial resolution. Prints provide poor grey scale range, and often our digital images are from prints or worse … photos of prints. Sometimes we only get jpeg compressed images, or worse.

Also, ideally, our eye witnesses to any situation would be reliable. Unfortunately, human vision mostly sucks, even though it is better than film in a lot of ways. Human vision does not measure brightness, for example; it can only compare brighter and darker objects. Both film and human vision respond logarithmically to light, which is the beginning of our problems.

Whilst a film camera can record a 12-bit image with 4096 brightness levels … the average digital camera or digital print is usually 8 bit (256 brightness levels). This means that while a piece of film can be processed to reveal hidden details, our digital images are not really knee capped so much as disembowelled. Even images shot on the beloved Red One camera are only 10 bit.
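Since this really is all just numbers, the 256-versus-4096 gap is easy to poke at in a few lines of Python (a rough sketch with made-up values, not any camera’s actual pipeline):

```python
# Distinct brightness levels at a given bit depth.
def levels(bits):
    return 2 ** bits

print(levels(8), levels(12))  # 256 vs 4096

# Squashing a 12-bit value down to 8 bits divides it by 16,
# so nearby 12-bit values collapse onto the same 8-bit value
# and the difference between them is gone for good.
a_12bit, b_12bit = 1000, 1007
a_8bit, b_8bit = a_12bit >> 4, b_12bit >> 4
print(a_8bit, b_8bit)  # both 62 -- that detail can never be recovered
```

Play with the two 12-bit values: any pair inside the same run of 16 becomes indistinguishable after the conversion.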

Human vision also does not measure color. Humans will see colors as different depending on the other colors in a scene. Humans are much better at comparing two colors than they are at measuring a single color. This results in a number of famous illusions which can be performed on humans specifically:

This is my favorite one and one of the most famous. The squares labelled A and B are both RGB 120 120 120. Yet, our eyes assume that one is white. Crazy!

Cameras themselves aren’t really affected by these illusions. However, they are affected by the sensitivity of the chemical reactions in film, or the diodes in a chip. The color is interpreted by the camera, and is in no way the same amount of information as is in reality. This means the camera should be white balanced for its location. How often is this done in our images? Not often. There are ways around this, but that’s not for today!

This means that unless our witnesses shoot several different exposures (a very rare event), we’re up the creek with a spoon for a paddle to begin with, and can often be left with a dark and difficult-to-see photograph. In UFO images this is usually extreme white mixed with extreme black.


The other major issue with our cameras is compression. Many images we see are using JPEG (Joint Photographic Experts Group) or H264 compression. It’s good to be familiar with your compression technique prior to judging an image.

A nice way of doing this sometimes is using a grid! Here is a grid in JPEG (a lossy format) and TIFF (a lossless format which would be posted except ATS doesn’t like TIF images. Which is a bit like installing a cat flap for an elephant on a conspiracy forum, but I guess bandwidth costs money!). You can see some of the differences, though the compression I’ve used is extreme to highlight them.

This example lets us discuss some of the attacks and defences based around compression.

We need to see the original image?

This ultimately depends on a number of factors; cameras have a definite resolution limit. Also, it may shock some persons, but often the information coming in through the sensor is compressed by the camera itself before it gets to an SD or flash card. The digitization process itself limits the resolution. Therefore, sometimes H264 or JPEG compression is perfectly acceptable; it is unlikely the camera will produce anything much greater in another format.

What does compression actually do?

A rarely asked question, but often referred to when an image looks a bit scrappy.

Lists of things that can happen:

- Removes features
- Alters the size, shape, and color of features
- Reduces resolution in different areas (different compressions handle different parts better; normally, details in uniform regions are preserved better than in heavily detailed areas). Parts can also be affected depending on where they lie in the compression: JPEG divides an image into 8 x 8 pixel blocks, and at the boundaries this can create loss of detail or artefacts which cannot be predicted reliably. In a video, details can often be observed or resolved elsewhere in the moving image. With knowledge of how a compression acts, a person can reliably interpret an image.
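To get a feel for why those 8 x 8 block boundaries matter, here is a toy sketch in Python. It is emphatically not real JPEG (no DCT, no quantisation tables); it just coarsens each 8 x 8 block so you can watch faint detail vanish while stronger features merely shift in value:

```python
import numpy as np

def blocky_quantize(img, block=8, step=32):
    """Coarsen each block x block tile by rounding values down to
    multiples of `step` -- a crude stand-in for lossy compression."""
    out = img.astype(int).copy()
    h, w = out.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = (tile // step) * step
    return out

img = np.zeros((16, 16), dtype=int)
img[:, 7:9] = 100   # a thin bright line straddling a block boundary
img[4, 4] = 20      # a faint detail inside one block

q = blocky_quantize(img)
print(q[4, 4])           # the faint detail has been rounded away to 0
print(q[0, 7], q[0, 8])  # the line survives, but its value shifted from 100 to 96
```

Real codecs are far smarter than this, but the lesson holds: what survives depends on both how strong a feature is and where it falls relative to the block grid.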

Later we might go into methods of identifying compression, and reviewing different types of compression. However, not much time today!

Managing Compression

In the grid example above, notice the errors in the image?

It’s best if the storage medium of an image or video cannot be tampered with, and this includes introducing errors. Magnetic storage media can create errors, and can be edited on a day-to-day basis. Storing online in a time-stamped location, or on a CD or DVD with a serial number, ensures a medium isn’t being altered on a regular basis. Note CD or DVD is not a good ongoing archival solution as they fail, but it can be evidence that you have not tampered with your content.

Keep complete records of step by step procedures used to alter your image. Keep a copy of each version of your image. Ideally you should be able to take a picture of a grid with the same model of camera, if not the same camera, so lens artefacts can be investigated.

If you're really keen there are programs which can check an image between versions to make sure it has stayed the same.
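One simple, free way to do that check is a cryptographic hash: if even one byte of the file changes, the fingerprint changes. A quick Python sketch (the throwaway file here is a stand-in for your actual image):

```python
import hashlib, os, tempfile

def file_hash(path):
    """SHA-256 fingerprint: changes if a single byte of the file changes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo on a throwaway file standing in for your image.
path = os.path.join(tempfile.mkdtemp(), "evidence.bin")
with open(path, "wb") as f:
    f.write(b"pretend this is a photo")

before = file_hash(path)
print(file_hash(path) == before)  # True: untouched file, identical fingerprint

with open(path, "ab") as f:
    f.write(b"!")  # "tamper" with a single byte

print(file_hash(path) == before)  # False: any edit changes the hash
```

Publish the hash alongside your original file and anyone can later verify the copy they downloaded is byte-for-byte the one you posted.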

Your files

A digital image, in its simplest form, is a grid of numbers containing pixel brightness values. Pixels, until we invent something else, are the squares which hold our images together. Always remember, it’s just squares and numbers.
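For example, a tiny 4 x 4 greyscale ‘image’ really can be written out as a grid of numbers, and ‘brightening’ it is just arithmetic on those numbers (a toy sketch, with 0 as black and 255 as white):

```python
# A 4x4 greyscale "image": just a grid of brightness numbers.
image = [
    [  0,   0, 255, 255],
    [  0,   0, 255, 255],
    [128, 128, 128, 128],
    [128, 128, 128, 128],
]

# Brightening is nothing more mysterious than adding to the numbers,
# clamped to the 0-255 range of an 8-bit image.
brighter = [[min(255, p + 50) for p in row] for row in image]
print(brighter[0])  # [50, 50, 255, 255] -- the white pixels were already maxed out
```

Notice that the pixels already at 255 can’t go any higher: the clamp is exactly where the clipping problems discussed below come from.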

Enhancing an Image Hoagland Style!

Ideally image enhancement should show already existing details better for our silly human eyes. It shouldn’t add new detail or destroy our original image utterly.

There are moments where damaging data is required, but if I make more of these posts we will discuss how to deal with this later on a method by method basis.

The Common ATS Methods

The most face palm worthy thing seen on a daily basis is persons using various pixel maths to produce alternate versions of an image that even they do not understand. Often this is due to a misunderstanding of their own work flow.

Most users work in 8-bit color space, which introduces some major problems when working with an image with ‘hidden’ detail. I will be working with my gradient grid image so as not to provoke any controversy among ATSers. Please feel free to follow along in your own editing program.

One of the most common mistakes dealing with an image is assuming that your editing application is actually your friend. It is a tool like any other; smack your hand with it, it won’t say sorry. Contrast and brightness can be useful tools, however they must be used with care, especially when using more than one operator/script/filter/node. (For the purposes of this exercise I will refer to any adjustment or filter as an operator or operation; different applications use different terms for these!)

First a simple example: imagine my grid image is a nice sky.

Figure A is my working plate or image.

Figure A

Figure B is Figure A with an increase of 50% brightness. (I will work in percentages so persons without a video background can follow along.)


Figure C is figure B with a 50% reduction in brightness. What happened?

Figure C

If you’re lucky your software will guess what you’re trying to do. If you’re only a little lucky you can change your options so your software will store the lost values in the future. If you’re unsure of how to do this, you’ve just been trolled.

The brightest part of our image is now clipped at 50%. Any time you work with images you must be aware of the limitations of both your software and your image. Ideally, you should go back, work out what you were trying to do, and do it in a much cleaner way. In our non-practical example, we would simply remove the first brightness operation. Voila.
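If you’d like to see the clipping happen in bare numbers rather than pictures, here is a minimal Python sketch. (I’ve defined a percentage brightness change as simple multiplication with an 8-bit clamp; real applications differ in the details, which is rather the point.)

```python
# 8-bit round trip: +50% brightness then -50% does NOT restore the image.
def brightness(pixels, percent):
    factor = 1 + percent / 100.0
    return [max(0, min(255, round(p * factor))) for p in pixels]

sky = [40, 120, 200, 240]     # a gradient, like the grid example
up = brightness(sky, 50)      # bright values clip at 255
down = brightness(up, -50)    # halving cannot recover the clipped data
print(up)    # [60, 180, 255, 255]
print(down)  # [30, 90, 128, 128] -- 200 and 240 are both gone for good
# Note even the unclipped 40 came back as 30: +50% and -50% aren't inverses.
```

Two distinct bright values (200 and 240) now share the same flat grey, and no later operation can tell them apart again.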

However, let’s look at a practical use of this theory and how a person might make a real mistake. After all, this particular mistake is quite obvious, and who would really use two opposing operators like that? The answer is Richard Hoagland.

Regardless of your thoughts of Hoagland’s imagery, here is an example of an easy mistake using brightness and contrast operators.

This time we will begin with Figure B, which has had its brightness increased by 50%. We will now increase the contrast by 25%, as was the case in a Hoagland image, and review what we have found.


As can be seen, all we have achieved is more destroyed data! No new data has been shown. However, Richard Hoagland did not find destroyed data in his famous images. One of the reasons behind this is film grain and similar artefacts. I will now add grain to my original image.

Figure 1B

And continue the same process as before. I will also remove the grid so you can see what occurs in the film grain areas.

Figure1C - brightness and contrast increase with grain

Now we’re getting some Hoagland-style data. In our efforts to discover new data, all we have done is damage our existing data to the point of fooling ourselves. In a final step, I have increased the brightness by 100% to demonstrate how an incredibly dark or bright sky may reveal a hidden ‘line’. These are often the lines referred to in Moon landing photography which supposedly demonstrate ‘sets’ and other such things.
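Here is the same trap in bare numbers (a sketch with invented grain values, not Hoagland’s actual figures): faint grain on a uniform dark sky, pushed hard enough, turns into dramatic-looking ‘structure’.

```python
# A uniform dark sky (value 30) with faint, fixed "film grain" on top.
grain = [-6, 2, 0, 5, -3, 8, -8, 1, 4, -2]
sky = [30 + g for g in grain]

def brightness(pixels, percent):
    k = 1 + percent / 100.0
    return [max(0, min(255, round(p * k))) for p in pixels]

# Push the brightness hard, as in the Figure 1C example: +100%, three times.
boosted = brightness(brightness(brightness(sky, 100), 100), 100)
print(sky)      # values huddle between 22 and 38 -- barely visible grain
print(boosted)  # the same grain now swings from 176 to 255
print(boosted.count(255))  # several pixels pinned at pure white: fake "features"
```

Nothing was added; a 16-level wiggle of grain was stretched into a 79-level swing with blown-out hot spots, and that is exactly what a ‘hidden structure’ found this way tends to be.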


We must be familiar with what our attempts to reveal data are doing prior to acting on our instincts. The majority of ‘new data’ in an image that is found legitimately is not new data …

The human eye, as stated previously, is logarithmic. We do have trouble noticing details in extreme dark and bright planes of an image. The bonus of this tutorial is that I know exactly what I am looking at. When we are faced with an image with no reference for what it is, we can’t be sure of what we’re doing unless we’re sure of our maths. We know my image is a gradient with a grid over the top. So how do we reveal this without causing more problems?

We could choose one operator and stick with it.

Decreasing Brightness.


We could Decrease Contrast.


Both bring us unsatisfactory results. Both actually make our grid harder to see in some ways (which you will see if following the whole tutorial). At best the push to grey is a minor help, at worst we are in a situation where our results are inconclusive, or too simplistic.

Another approach might be to do an invert, but if human eyes have a problem noticing detail in extremes of light and dark we are only reversing our problem.

Some people might then move onto sharpening. The instant reward approach encourages you to reach for your sharpening tool in your favourite editing application.

Sharpening is Annoying

Classically, what sharpening does is increase the difference between adjacent pixels. This has the effect of making an image look sharper, but is a trick of the eye. From this point in the demonstration, I have put some happy text at the bottom of my image. This is to demonstrate why certain approaches don’t work, as a grid is very simple. I have also introduced some grain, permanently from this point on. Use this image if you wish to follow along with me.

So I want to find out what the writing says! Brightening and contrast aren’t getting me what I want, so I decide I will use a sharpening filter. I’m wrong btw.

Pinke Sharpening

Unfortunately, my sharpening has introduced even more artefacts, which makes me sad.

The majority of sharpening kernels a person uses will not produce good results in all (or sometimes many) areas. The sharpening kernel is often intended as a quick defocus fix; it introduces nasty-looking edges and is really not intended for forensic purposes. Furthermore, with this method you will also be sharpening your film or video grain and other artefacts! You will also notice my file size has become significantly bigger after my sharpening effort. This isn’t what we’re trying to do!

I am going to skip the brightness and contrast attempts to solve this issue. Please try them on my sample image, however you will note they produce less than satisfactory results.

Why does my sharpening thing suck?

Please don’t feel the need to read this bit if you hate people and maths.

A sharpening kernel is generally this:

Imagine the above as a group of pixels.

This multiplies the stored brightness of each pixel by 8 and subtracts the 8 neighbouring pixel values from it. If this was applied to a uniform area of an image the result would be zero. For example, an area of an image with the value of 100 in nine pixels would multiply 100 by 8 and then subtract 8 lots of 100 from it. If any neighbouring pixels are brighter or darker than the middle pixel, the result will be darker or brighter respectively. This is called a Laplacian.
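That description translates directly into code. A minimal sketch of the 3 x 3 Laplacian evaluated at a single pixel:

```python
# The 3x3 Laplacian described above: centre pixel x 8, minus its 8 neighbours.
def laplacian_at(img, y, x):
    centre = img[y][x]
    neighbours = sum(
        img[y + dy][x + dx]
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return 8 * centre - neighbours

# Uniform area: nine pixels of 100 -> 8*100 - 8*100 = 0, exactly as stated.
flat = [[100] * 3 for _ in range(3)]
print(laplacian_at(flat, 1, 1))  # 0

# A centre pixel brighter than its neighbours gives a strong positive response.
bump = [[100, 100, 100],
        [100, 120, 100],
        [100, 100, 100]]
print(laplacian_at(bump, 1, 1))  # 8*120 - 8*100 = 160
```

Flat areas come out as zero and any local difference gets multiplied, which is why this kernel amplifies grain just as happily as real edges.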

It is very similar to the process we’re about to discuss, and is usually accompanied by a very small amount of blurring. This process also alters brightness variation which is something compositors often have to go back to fix. This can sometimes be done by changing the central pixel weight, but we’re probably digressing.

One of the biggest issues with a sharpening filter or operation whose internal workings you are not familiar with is … if someone asks … you simply don’t know what it’s doing. Even if you do know what it’s doing, you’re also relying on the controls which are built into the operation or filter. This prevents fine adjustments.

Disclaimer: Your sharpening result may vary depending on pluggies.

Do it the Hard Way

So what we need is a method that doesn’t introduce a lot of fake data, that can be applied over our grain, and that lets us see what we’re doing quite nicely. What we’re aiming to do is work with the fact that human eyes are based around edges. Human eyes enjoy images like cartoons, with nice obvious edges for us to look at. When we can’t see edges in a photograph, it’s not because they’re not there … it’s because our eyes favour the mid-tones over the very brightest, darkest, or uncontrasted parts.

The Unsharp Mask

Take the original grid.

Create two layers or nodes in your program. From this point on, since most persons will be using photoshop, I will primarily refer to layers. If you are using a comping app with nodes or similar I imagine you know how to translate this already and probably already should know what an unsharp mask is!

Both layers should contain a copy of our image. We will call one original and the other mask.

Lower the brightness of your mask layer by 10 – 30% (don’t worry, you can refine it later). Blur the mask layer; I blurred mine by about four pixels, using a fairly standard and quick blurring algorithm. Then set your layer to subtract from the original image. What you should end up with is much clearer, more visible data.

Play with the values so you can understand the relationships between them!

What have I actually done?

You will notice that the tone of the image has suffered, but not nearly as much as it could have. You can fix this, if you’re adventurous and know what you’re doing, by changing the brightness of the result, but this then becomes an artistic operation.

You've done this essentially (exact numbers may vary).

(image) - (0.4 x blurred image) = More Charlie Sheen

The bolder among you may have managed this.

(1.4 x image) - (0.4 x blurred image) = Moreish Charlie Sheen

This won't have 100% of the effect we're looking for, but it will tighten the look.
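For the curious, the whole recipe fits in a short numpy sketch: a box blur stands in for whatever blur your application uses, and 0.4 stands in for the darkened mask, so treat the exact numbers as illustrative.

```python
import numpy as np

def box_blur(img, radius=2):
    """A quick, standard blur: average over a (2r+1) x (2r+1) box,
    with edge padding so the borders stay sensible."""
    h, w = img.shape
    padded = np.pad(img, radius, mode="edge")
    size = 2 * radius + 1
    out = np.zeros((h, w))
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (size * size)

def unsharp(img, amount=0.4, radius=2):
    # (image) - (amount x blurred image), as in the formula above.
    return img - amount * box_blur(img, radius)

# A soft vertical edge: dark (50) on the left, bright (200) on the right.
img = np.array([[50.0] * 4 + [200.0] * 4] * 8)
result = unsharp(img)
print(result[0])
# The dark side dips toward the edge (30 -> 18 -> 6) and the bright side
# overshoots (120 -> 132 -> 144): extra contrast our eyes read as sharpness.
# The overall tone has darkened too (50 -> 30, 200 -> 120) -- the cost
# mentioned above, fixable afterwards with a brightness adjustment.
```

Because every term is your own image (or a blur of it), you can state the maths exactly when someone asks what you did.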

The brightness offset creates contrast between the two layers at the edges. The blur increases the region of the sharpening and also introduces contrast. If you find artefacts in your work you may need to blur the image more, but this shouldn’t apply to this demonstration.

Essentially, what we’ve done is make some data more visible to our silly eyes by putting contrast in the right areas, which our eyes view as sharpening. As always, with any changes like this we should refer back to the original image to view what has happened as a result. Carefully compare the new data to the original image. You should always find that this information was already there, at least mathematically. If you find a small planet after doing five or six adjustments to your image, you may have put it there yourself, which, from now on, will be called pulling a Hoagie.

Don’t pull a Hoagie ATS.

PS – I removed my note from my final example image just to give persons doing the exercise something stoopid to look forward to.

Final Notes

Is this the only and best way of doing this?

No. There are many ways to solve this puzzle. Some are better, some are more complex, however each method has its own practical uses in real life images.

One of the nice benefits of an unsharp mask is that you’re using your own image on itself, and thus aren’t guessing at what various contrast and brightness levels should be at in your image. Anything you can do in a procedural way is a bonus. Plus, you know exactly what you've done, and you can demonstrate the maths.

Furthermore, this is one method where you won’t damage the image dramatically. People are less likely to believe a heavily damaged image, however much detail it reveals, than an image whose values stay at least in ballpark ranges.

It’s also nice and simple. Perhaps later if people want we can discuss other more complex sharpening methods including working with color, and ways of doing things to images which have more grain and other problems such as interlacing and interference.

I am trying to reintroduce brightness, and I can’t get it to go!

This will vary depending on your comp system, and is not really the point of the exercise.

If you’re very interested in compositing for artistic purposes there are a great many learning fountains on the web.

I thought you weren’t going to be controversial?! I agree with Hoagland!

That’s okay. I won’t judge you.

Disclaimer: This article was written in one sitting. Pinke takes no responsibility for use, accuracy, or injury from this information. Pinke is not an equal opportunities employer. Pinke may or may not be on a horse.

edit on 27-8-2011 by Pinke because: Typos vs Pinke FIGHT!

edit on 27-8-2011 by Pinke because: Stoopid Title Fix

edit on 27-8-2011 by Pinke because: MOAR TYPOS!

edit on 27-8-2011 by Asktheanimals because: for spelling errors

posted on Aug, 27 2011 @ 02:34 PM
are lens flares cool to use?

posted on Aug, 27 2011 @ 02:42 PM
Sorry, read the bad grammar in the title and decided your humongous post would probably be flawed to.

Just FYI, this might have looked better, "How to investigate UFOs and do other things well too!"

posted on Aug, 27 2011 @ 02:47 PM
reply to post by wasco2

There's such a thing as humor, and there's some really great info in the OP. Learning to be had if you can manage not to be an ass.

posted on Aug, 27 2011 @ 02:56 PM
great information, although I'm not sure how many people will find it useful! I still don't see how any of this disproves Richard Hoagland?

posted on Aug, 27 2011 @ 03:03 PM
reply to post by Kali74

Sorry, it's some kind of birth defect inherited from my father. Along with a very low tolerance for stupidity it's sometimes made my life a little difficult. Also I'm fairly well versed in how photo enhancement works and the dangers of altering or creating new data. I'll probably read it when I have more time.

posted on Aug, 27 2011 @ 03:09 PM

Originally posted by knightsofcydonia
great information, although I'm not sure how many people will find it useful! I still don't see how any of this disproves Richard Hoagland?

There was a Richard Hoagland related post which claims a 50% increase in brightness followed by a 25% increase in contrast will result in producing new information in a photograph. This could just be the person's own mathematics, as I haven't actually read all of Hoagland's stuff yet, but so far my findings have put it low on my reading priorities.

I generally investigate whatever seems most likely to be interesting at the time.

Regarding my grammar, it's a Zoolander reference.

I'm also not that smart, and don't do things good.

Edit: And as it says in the OP, if you're already a photo enhancement nut you will likely want to skip it. Oh and lens flares are AWESOME! (So long as they're anamorphic!)

edit on 27-8-2011 by Pinke because: Edit

edit on 27-8-2011 by Pinke because: (no reason given)

posted on Aug, 27 2011 @ 03:10 PM
reply to post by Pinke
A great post and should be a reference point for some of the guys who enjoy 'enhancing' NASA images in search of UFO proof.

Two members, Depth of Field and Armap, will love your thread.

One suggestion. Can you add a post that uses a NASA (HiRISE, LROC etc) image and then 'enhances' it to show something that isn't really there? I think it could be more effective than the grid in your OP.

@knightsofcydonia - Hoagland hasn't brought anything that needs to be 'disproved.' The guy is always making silly claims in between his C2C interviews.
edit on 27-8-2011 by Kandinsky because: (no reason given)

posted on Aug, 27 2011 @ 03:13 PM
I think this is some great information and sorely needed, not just for UFO research but for every other field where images are critical for analysis.

I gave this thread an applause both for quality and depth.
Fine job Pinke!

Thank You!

posted on Aug, 27 2011 @ 03:23 PM
I think this thread should be used for one's own convenience. It is not to make you think you've become an expert, or to expect that your opinion matters the most because you followed some tips. You (who decide to follow some tips) still remain a nobody, i.e. a non-expert, so it is best to bring such photos or anything you doubt to an expert at visual graphics and design.

It is TL;DR and I personally use my own experience from more real UFO cases and my knowledge of image, video, and sound editing. So to me this is needless. You can also see whenever something is fake by its behavior. Example: 'UFOs over London', a bunch of orbs moving forward and backward in front of the camera almost like screaming 'shoot me on camera', is too obvious, but we are talking about behavior of such objects.

Anyway this thread could serve to those who believed the 'UFOs over London' if anyone has the nerves to read all that text. Grats for the hard work though, you deserve a point for all that text.

posted on Aug, 27 2011 @ 03:39 PM
reply to post by Pinke

You said you haven't read Hoagland's books, which I find to be a major hole in your logic and the theme of the thread... His findings, by increasing gain, were unique and separate from anything you have suggested here. The thread is dependent on the fact that Hoagland was wrong, but you can't prove it... So my question: why bring him into the thread at all? Even if you think he is wrong, that's still your opinion... which is based on NOTHING because you haven't heard or listened to what he has to say. So before you go discrediting him, at least allow the opportunity to be receptive to what he has to say.

posted on Aug, 27 2011 @ 03:50 PM
Although I haven't got the time to read your extensive post, I have to say this is most appreciated and most needed for the ATS community. Pictures are something we deal with very often, and if your post provides information that enables the generic ATS member to evaluate pictures on their own, which would greatly improve the site's content, it is a vastly valuable thread. Even without reading most of it, I will provide you with both a star and a flag, for the sake of garnered attention and the fact that you've put a lot of work into it.

Thank you!

posted on Aug, 27 2011 @ 04:07 PM
Realised how random this post was ... Decided I nap then discuss later.

Basics ... Thread is not entirely reliant on hoagland and will not discuss specific material here. Is against point.

Nap time.
edit on 27-8-2011 by Pinke because: Delete sleepy post

posted on Aug, 27 2011 @ 04:43 PM
Epic thread Pinky, very informative! Thanks for teaching me something new.

Also loved the title, sounded like something straight out of Zoolander.

Wasco2, mate, don't be so condescending. The member clearly spent a lot of time making this post, and it is well researched. Next time, try a different approach...


Posted Via ATS Mobile:

posted on Aug, 27 2011 @ 04:45 PM
Reply to post by VreemdeVlieendeVoorwep

Oh, and Wasco, FYI, read your second post again, and see the error there.

Don't judge so quickly.

Posted Via ATS Mobile:

posted on Aug, 27 2011 @ 04:45 PM
Reply to post by VreemdeVlieendeVoorwep

Oh, and Wasco, FYI, read your second post again, and see the error there.

Don't judge so quickly.


Posted Via ATS Mobile:

posted on Aug, 27 2011 @ 05:18 PM
reply to post by VreemdeVlieendeVoorwep

Unless you're talking about a missing comma I still don't see what's wrong with my post.

But at least I didn't post twice.

posted on Aug, 27 2011 @ 10:06 PM
i take it the hoagie doesn't have lettuce, tomato, and mayo on it...though it's possible it may have already been baked ..lmao...sorry..i couldn't resist that

posted on Aug, 27 2011 @ 10:24 PM

Originally posted by wasco2
Sorry, read the bad grammar in the title and decided your humongous post would probably be flawed to.

Just FYI, this might have looked better, "How to investigate UFOs and do other things well too!"

He isn't trying to teach us about grammar, he's teaching us about photography, the creation of anomalies, how photographs are altered and what can happen in doing so, how a computer creates a picture and reads it and how our own eyes do. I have ZERO interest in this subject but I loved this article. It was written well and taught me a lot, even if I will never use any of the information. Someone mentioned the OP presented the information "like an ass". I have spent a lot of time in school and other various classes and the most irritating thing is an instructor who thinks they are funny or their personal stories take precedence over the subject in question. If he added little quips throughout his post we would spend more time reading it and lose focus on the topic that was presented; I could not be happier with how this is written. The OP took a lot of time to teach us things that would otherwise take years of learning. If you aren't here to learn, at least keep from bitching.

Thank you OP, S&F!

posted on Aug, 28 2011 @ 11:22 AM
I stopped reading when you started to incorrectly explain how digital and film cameras work. Film is analogue; where do you get this 12-bit digital from? Madness.

And 8-bit depth on digital cameras? Mine is 12 bit; the Sony Alpha A77 is 24 bit. As is the Nikon D700 and a new Pentax; I think it is the K2.

You also misunderstood how the image sensor works: it can only measure luminance at each site, and it does so through RGB microfilters, to give a luminance reading in each of those channels. These are then combined in a process called interpolation to produce a colour image. There is no chemical process involved here as you suggest.

I saw you go on to talk about compression. Did you consider that many of us shoot in RAW files?

Your post is way too long. Cut to the chase and give us the overview, dude!
