I'd like to show you a de-blurred HOAX picture of the Clementine Structure...


posted on Aug, 28 2013 @ 12:36 AM
reply to post by Deaf Alien
 


He isn't; he made this from a JPEG he pulled off this site.




posted on Aug, 28 2013 @ 12:38 AM
reply to post by raymundoko
 


So I guess there is no 2nd image?



posted on Aug, 28 2013 @ 12:46 AM
What I'm getting from this is that there isn't anything worth obscuring.



posted on Aug, 28 2013 @ 12:46 AM
reply to post by Deaf Alien
 


You are correct.



posted on Aug, 28 2013 @ 12:48 AM
reply to post by raymundoko
 


I guess that's the end of this thread?



posted on Aug, 28 2013 @ 01:01 AM
The way forward from here, IMO, might be to find the coordinates of the image in the OP, then see what other images are available of the same area. Are there any other Clementine-sourced images of the area? Are they also blurred/blacked out? How does the area look in similar-resolution images taken by different missions such as Chang'e or LROC?


Similar to this topic is one I started a week ago: Check out this post if you are interested in seeing some research-quality, highest-resolution lunar surface images that have been called 'alien' by some.
edit on 28-8-2013 by PINGi14 because: (no reason given)



posted on Aug, 28 2013 @ 01:08 AM
reply to post by Deaf Alien
 


Yes, as far as any discussion goes, his edited image should not be included in any debate. However, if this is who I think it is, he will be here arguing his cause no matter what.



posted on Aug, 28 2013 @ 01:12 AM
reply to post by raymundoko
 


That's fine. He is who he is. I will await his explanation anyway.

How does that technique work? You take TWO images, interpolate them, and create a third image.

As far as we are concerned this thread is done unless he provides a second image.



posted on Aug, 28 2013 @ 01:57 AM

Originally posted by funkster4

Originally posted by freelance_zenarchist
reply to post by funkster4
 


What you're doing is not polynomial texture mapping, so stop using that to lend credibility to your images.

The difference is that in the picture of the statue, B is a combination of A + C.

In your pictures you only have A, a single frame. You can't just alter the image and then use that to create new details, such as aliens and moon bases.

edit on 27-8-2013 by freelance_zenarchist because: (no reason given)



Hi...

I think you missed it: I derive iterations/interpretations from the source material, and then interpolate the results.
Maybe you've missed the part in the HP presentation where they conclude that PTM can be used on digital pictures.

Meaning, you know, you can work with one single picture by changing the settings and iterating it.

Yeah, you probably did miss it...

As explained, they use one variable, while I use several, that's all.
I do not "alter" the image, any more than you could claim that a scanner image of a human hand is an "alteration" of a picture of a human hand. They are different interpretations of the same data, and you will be very hard-pressed to prove which one of the two is "truer" than the other (good luck with that one...)....

Actually they are both valid, though quite different (they do not carry the same information, but the information they carry is objectively true)...

You might want to prove your point by demonstrating why information derived from a picture under the conditions I indicated (using conventional optical settings) might not be reliable. I am curious...

Oh, and I was not looking for aliens or moon bases; I worked on those images only as verification of totally unrelated results...


Just noticed you're the same guy who is objecting vehemently to the images of the Turkish Clip I posted in the other thread.

Vehemently, but without much supporting argument, as I demonstrated quite easily.
I seem to remember also that I proposed to make available a sample of the Frame 11 image to you, for your convenient review.

I don't think I have heard your reply....

edit on 27-8-2013 by funkster4 because: (no reason given)


---

This is an area in which I actually have enough formal training to make a valid comment.

Using an analogy, this is like a 3D version of High Dynamic Range Photography where you take
MULTIPLE exposures (low light, medium and overexposed) and combine or overlay the photos
together such that the areas that have the MOST DETAIL will come to the foreground.
On an overall basis, the MOST detailed areas of each photo exposure shine through
and the NEW image shows things that normally aren't visible.

On a technical basis, polynomial texture mapping uses techniques that are similar
to raytracing by MAPPING a SYNTHETIC reflectance point on a 2D or 3D pixel which
would be visible IF a specific 2D-XY or 3D-XYZ coordinate was illuminated at a
specific lighting level. This is literally contrast, brightness, saturation
and gamma enhancement on an individual-pixel basis, using a pixel illumination
placement algorithm governed by iterative rotation on multiple 2D-XY or 3D-XYZ axes.

Or in other words, take a lump of rock in a dark room, light the rock from one or more angles,
and rotate the rock OR the lights and sample ONLY the pixels that are CURRENTLY visible
to a virtual camera. Any synthetically lit pixel that stays within a given USER-DEFINED
luminance level and chroma (colour) range limit is copied to a final destination bitmap
while other values outside of the designated range limits will use the pixels from the
ORIGINAL unaltered image when copied to the final destination bitmap. When all pixels
are sampled, the final image will be a perfectly exposed BUT synthetic version of
WHAT WOULD BE SEEN BY THE HUMAN EYE if all points on the object
were illuminated perfectly. This brings out normally UNSEEN detail.
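The HDR analogy above (take multiple exposures, let the most detailed regions of each shine through) can be sketched in a few lines of Python. This is a hypothetical NumPy illustration, not any poster's actual code: per pixel, keep the value from whichever exposure shows the strongest local detail.

```python
import numpy as np

def fuse_exposures(exposures):
    """Pick, per pixel, the value from the exposure with the strongest
    local detail (gradient magnitude), so the MOST detailed areas of
    each exposure 'come to the foreground' in the fused image."""
    stack = np.stack(exposures).astype(float)            # (n, H, W)
    # Detail measure: gradient magnitude of each exposure.
    detail = np.stack([np.hypot(*np.gradient(e)) for e in stack])
    best = np.argmax(detail, axis=0)                     # (H, W) index map
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)]
```

Real exposure-fusion tools use smoother weighting than a hard per-pixel argmax, but the principle is the same: the output is assembled from whichever input carries the most local information.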
edit on 2013/8/28 by StargateSG7 because: sp.



posted on Aug, 28 2013 @ 02:03 AM

Originally posted by Deaf Alien

What image are you using (c) to interpolate and create that image?


He doesn't have a 2nd image, or in the statue example a (C) image. He's taking one image, making copies of it, applying some sort of adjustments to them and then combining them.


Originally posted by funkster4
Now, take any source image, and apply to it any variable settings you choose: light, contrast, contour, sharpness, etc., just anything. You will then have generated a different interpretation of the original data set.


Or in other words, you will have altered the original source file!




Originally posted by anon29

It would seem to me, that given the results, and the credentials provided one could only conclude that:

A. The HP software did what it was designed to do and removed an 'obstruction,' to reveal the original, unaltered image.

B. The HP software did not do as it was designed and altered the image randomly, which is what we're seeing here.

or

C. The op fabricated this image and is lying.


He's not actually using the HP software or polynomial texture mapping. I believe he keeps mentioning it in order to give credence to his images. Or he's just really confused. I don't know if he's lying and just trying to troll the Aliens & UFO forum; he sounds like someone who's new to image-editing software and doesn't really understand what he's doing.


Originally posted by funkster4
I do not use the PTM software



Originally posted by funkster4
HP actually offers a free download of a PTM software, but I was not able to make it work (I am not computer friendly).

edit on 28-8-2013 by freelance_zenarchist because: (no reason given)



posted on Aug, 28 2013 @ 02:11 AM

Originally posted by ZetaRediculian
I'm pretty sure that's a steam engine.


I agree. Maybe a long lost relative of Thomas the Tank Engine?




posted on Aug, 28 2013 @ 02:13 AM
reply to post by StargateSG7
 


Yes, what he's doing is closer to HDR photography than PTM.

Would you agree, though, that since he's only using a single source image (of low quality), and artificially adjusting it with software to create the multiple exposures, it's not possible to de-blur anything, or uncover any detail that isn't there, such as aliens and secret moon bases?

edit on 28-8-2013 by freelance_zenarchist because: (no reason given)



posted on Aug, 28 2013 @ 02:19 AM
And just for good measure, DEPENDING on the TYPE of blur used
(i.e. Gaussian, 3x3 or 5x5 pixel averaging) I can UNWIND the blurring
operation to synthesize what WOULD HAVE BEEN THERE at a given
2D-XY or 3D-XYZ pixel coordinate by resampling and replacing the
blurred pixels, moving them BACK to their original estimated locations,
and changing their luminance/saturation levels back to the estimated
original values using basic trigonometry (i.e. via 2D-XY or 3D-XYZ vector
point translation and rotation).

For 3x3 or 5x5 pixel-averaging algorithms the process is trivial,
since the blurring operation is technically a convolution filter which
can be iterated in reverse! Gaussian blur is much harder, because
I have to do a min, max or median possible centrepoint estimation
for the surrounding pixels that are used as sample points for the
bell curve...that's a LOOOOONG process because of the sheer NUMBER
of possible estimated values for contributing pixels that MUST conform
to a bell curve when averaged over a given 2D or 3D pixel coordinate.

On a 1920 by 1080 pixel Gaussian-blurred image (i.e. a 2-megapixel image)
it could take around 8 hours to create a large database of POSSIBLE
bell-curve centre-point values which will create the ESTIMATED final
de-blurred image. And THAT is on a 2-core 3.2 GHz Intel Core i7 processor!
edit on 2013/8/28 by StargateSG7 because: sp



posted on Aug, 28 2013 @ 02:27 AM

Originally posted by freelance_zenarchist
reply to post by StargateSG7
 


Yes, what he's doing is closer to HDR photography than PTM.

Would you agree, though, that since he's only using a single source image (of low quality), and artificially adjusting it with software to create the multiple exposures, it's not possible to de-blur anything, or uncover any detail that isn't there, such as aliens and secret moon bases?

edit on 28-8-2013 by freelance_zenarchist because: (no reason given)


---

Actually THAT IS NOT QUITE TRUE...see my previous comment about using MATHEMATICS
to reverse a convolution filter or "DeGauss" a Gaussian Blur. Math is iterative, and if the final pixels
are the ORIGINAL ones that have been blurred by a MATHEMATICAL ALGORITHM such as
a 3x3 pixel-averaging filter, I CAN undo that by literally REVERSING the iterative math operation.
It's a technique used in Forensic Imaging to recover details in images on which somebody has
simply used a built-in Photoshop filter. Most people simply DO NOT have the training
to understand that you should NOT use a filter operation to hide imagery; you should actually
BLACK IT OUT with a Solid Black (i.e. R:0 G:0 B:0) rectangle in order to PREVENT experts
from de-blurring or undoing your special effects, which are ALL math-based and CAN be reversed!

So what I am saying here is YES, it MIGHT be a real alien artifact or moonbase, because
a single image CAN be de-blurred or have its special effects REVERSED if I can GUESS
the TYPE and SIZE of the blurring algorithm(s) originally used!
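The redaction advice above can be shown with a minimal sketch (a hypothetical NumPy illustration): overwriting a region with a solid-black rectangle destroys the pixel data outright, so unlike a blur there is no math operation to run in reverse.

```python
import numpy as np

def redact(img, y0, y1, x0, x1):
    """Overwrite a rectangular region with solid black (R:0 G:0 B:0).
    Unlike a blur, this is not an invertible operation: the original
    pixel values are simply gone from the output."""
    out = img.astype(float).copy()
    out[y0:y1, x0:x1] = 0.0
    return out
```

A blur keeps a (scrambled) function of the hidden pixels in the output; redaction replaces them with a constant, which is why forensic recovery can sometimes beat the former but never the latter.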

I personally have an imaging library that puts the NSA or NRO to shame
(i.e. our Midgrid Multimedia Engine at www.midgrid.com) which I wrote entirely
in PASCAL (Delphi XE), which is an incredibly advanced tool for image manipulation,
so such an operation is TRIVIAL for me to code and run since it's all grid-processed.

I suspect the OP did it in JAVA, C++, Delphi or even VISUAL BASIC in order to
run his OWN deblurring code. It's not technically complex, but it IS a time-consuming
task, and you REALLY DO NEED a NETWORK of computers (i.e. 10 minimum dual-core
or 4-core machines) to be able to deblur a Gaussian-blurred image in near real time.
edit on 2013/8/28 by StargateSG7 because: sp



posted on Aug, 28 2013 @ 02:38 AM
reply to post by StargateSG7
 


Yeah, but we're not talking about fancy algorithms like Camera Shake Reduction in the latest version of Photoshop, which is shown in this video and can de-blur an image. We're talking about simple adjustments like brightness and contrast sliders.



apply to it any variable settings you choose: light, contrast, contour, sharpness, etc., just anything.



posted on Aug, 28 2013 @ 02:43 AM

Originally posted by freelance_zenarchist
reply to post by StargateSG7
 


Yeah, but we're not talking about fancy algorithms like Camera Shake Reduction in the latest version of Photoshop, which is shown in this video and can de-blur an image. We're talking about simple adjustments like brightness and contrast sliders.



apply to it any variable settings you choose: light, contrast, contour, sharpness, etc., just anything.


---

there are about SIX REALLY GOOD deblur filters for Photoshop (i.e. added as a plugin!)
ranging in price from $50 to over $25,000 (Teranex-type) PER MODULE. So it's not out
of the realm of possibility to add a GOOD DEBLUR to Photoshop if you have the money!



posted on Aug, 28 2013 @ 02:48 AM
reply to post by StargateSG7
 


What?



posted on Aug, 28 2013 @ 03:07 AM

Originally posted by freelance_zenarchist
reply to post by StargateSG7
 


What?


---

YUP! $25,000 for a hardware accelerated de-blur algorithm
(used to be Teranex but now someone else) for Photoshop!!!!!

I also remember us paying $120,000 for their Standard Definition TV to HDTV
motion-compensated upscaling and PAL/NTSC conversion system.

I even remember paying $60,000 for a Betacam SP pro camera
which is now worth about $200 without the lens!

So 25 grand ain't that unreasonable if the product you buy DOES THE JOB!

But for 25 grand, that de-blur plugin had REALLY better be able to do its job (it does!)

Software can get REALLY EXPENSIVE once you start loading up on the plug-ins
even for relatively mid-level programs such as Photoshop...and if you go for something
really high end like Autodesk Flame (a 3D visual FX product) you're getting into the
$150,000 price point if you load up on the 3rd party FX modules.
edit on 2013/8/28 by StargateSG7 because: sp.



posted on Aug, 28 2013 @ 03:30 AM
To UFO believers it's supposed to be a spaceship with a giant alien fixin' it. No joke.



posted on Aug, 28 2013 @ 03:34 AM
reply to post by Asikakim
 


Or a structure of some kind. I do not doubt they blurred something important, but I can't see anything in the photo...



