
A thought on image technology.


posted on Sep, 26 2014 @ 09:14 AM

originally posted by: theGleep
I have long wondered if it might be possible to get a good result by stacking "treated" copies of the same image...like stacking a "sharpened" copy with a "blurred" copy and a "smoothed" copy and the original.


Not really, because you're not adding any information beyond what was originally captured from the scene; every "treated" copy is derived from the same data.



posted on Sep, 26 2014 @ 09:15 AM
It's like riding a scooter into a filling station and buying about a third of a gallon of gas.
You practically never pay the advertised price because of the roundup.
Had to do this the other day when gas was advertised at $3.149 a gallon.
Guess how much I paid for .355 gallons?

Raw is not necessarily uncompressed.

"11 + 7-bit lossy compression scheme"




posted on Sep, 26 2014 @ 10:19 AM
A digital image's resolution is limited by the number of pixels in the CCD/sensor.

A pixel in a sensor can only be one homogeneous color, so once you zoom into the image far enough to see an individual pixel, you have reached the limit of the resolution.

Granted, there are ways to artificially "smooth out" the pixels (e.g., one original pixel is replaced by, say, an 8x8 array of 64 smaller pixels), but the colors of those new, smaller pixels are determined by interpolation. The smoothed-out pixels are not necessarily the actual colors from the scene, just colors computed by a math algorithm.
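As a concrete illustration (a minimal sketch using the Pillow library; the file name photo.jpg is just a placeholder): upscaling by 8x in each direction creates 64 new pixels for every original pixel, but their colors are computed by a resampling formula rather than captured by the sensor.

```python
from PIL import Image

# Placeholder file name; any photo will do.
original = Image.open("photo.jpg")
w, h = original.size

# Each original pixel becomes an 8x8 block of 64 new pixels.
# NEAREST simply repeats the original color; BICUBIC interpolates new
# colors from neighbouring pixels, i.e. values computed by a math formula,
# not recorded by the camera.
blocky = original.resize((w * 8, h * 8), Image.NEAREST)
smooth = original.resize((w * 8, h * 8), Image.BICUBIC)

blocky.save("photo_8x_nearest.png")
smooth.save("photo_8x_bicubic.png")
```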





posted on Sep, 26 2014 @ 01:52 PM

originally posted by: brace22
a reply to: Biigs

I think you misunderstood, my friend. I know you can zoom in and process with current technology; I do it every day. But you cannot process X amount of pixels and make them look as if they were the original (in JPEG; I am talking about JPEG, not RAW).

I want to know why: what law of physics is stopping us from creating that technology?



Hi, I'm a computer guy.

The "law of physics" that stops you from zooming in infinitely on a picture is called information theory. It maps the number of bits in something to the number of possibilities for what it can show. For instance, suppose we have a 3-bit number. That can represent 2^3 = 8 different values. The numbers 0-7 in binary are 000, 001, 010, 011, 100, 101, 110, 111. As you can see, that's every possible combination of 1's and 0's in three places (bits); nothing more can be added without adding another digit, i.e. another bit. So the point is: if your camera takes 3-bit pictures, it can take 8 possible different pictures, period. If your camera takes 100-megabit pictures, it can take 2^100,000,000 different pictures, period. Each time you talk about the capacity to zoom in beyond any one picture, you are talking about expanding that space (because each of the 2^100,000,000 different pictures must also have the new capability, multiplying the number of possibilities). But if your camera hardware only captures 100 megabits, that's demanding information you just don't have from that camera.
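To make the counting argument concrete, here is a tiny sketch (in Python) that enumerates every possible "picture" a hypothetical 3-bit camera could ever produce:

```python
from itertools import product

# Every file a hypothetical 3-bit camera can produce is one of 2^3 = 8 bit patterns.
three_bit_images = ["".join(bits) for bits in product("01", repeat=3)]
print(three_bit_images)       # ['000', '001', '010', '011', '100', '101', '110', '111']
print(len(three_bit_images))  # 8

# The same counting applies to a 100-megabit file: 2^100,000,000 possible contents.
# That set is enormous but finite; zooming "beyond" the captured data would
# require more bits than the camera ever recorded.
n_bits = 100_000_000
print(f"A {n_bits}-bit file has 2**{n_bits} possible contents")
```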

Note that you can do tricks that make it seem like you can zoom in. For instance, with lots of high-resolution aerial photos of forests, you could train a computer to take a low-resolution aerial photo of a forest and fill in the unknowns with what's *probably* there. However, this isn't true information about the forest. For instance, if one brighter pixel were actually caused by a tiny UFO, the algorithm would render it as a patch of snow reflecting the sun, because that's what's most likely there based on its other photos.
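A quick way to see why such filled-in detail is a guess rather than a recovery (a minimal NumPy sketch with made-up pixel values): two different 4x4 images can downsample to exactly the same 2x2 image, so nothing in the small image can tell you which original produced it.

```python
import numpy as np

# Two different made-up 4x4 grayscale scenes.
scene_a = np.array([[10, 30, 50, 70],
                    [30, 10, 70, 50],
                    [ 0,  0, 90, 90],
                    [ 0,  0, 90, 90]], dtype=float)

scene_b = np.array([[20, 20,  60, 60],
                    [20, 20,  60, 60],
                    [ 0,  0, 100, 80],
                    [ 0,  0,  80, 100]], dtype=float)

def downsample_2x(img):
    """Average each 2x2 block into one pixel (simple box downsampling)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(downsample_2x(scene_a))            # [[20. 60.] [ 0. 90.]]
print(downsample_2x(scene_b))            # identical: [[20. 60.] [ 0. 90.]]
print(np.array_equal(scene_a, scene_b))  # False; the originals differ
```

Any "zoom and enhance" algorithm shown only the 2x2 result has to pick one of the many possible originals, which is exactly the UFO-versus-snow problem above.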



posted on Sep, 26 2014 @ 02:48 PM
a reply to: tridentblue

Bit depth determines the contrast ratio (tonal precision) of the color components, not the number of possible images that can be reproduced. Megapixels (gigapixels, etc.) determine the image dimensions. As with bit depth, that alone does not determine the number of possible images that can be reproduced.



posted on Sep, 26 2014 @ 03:34 PM
a reply to: GetHyped

This stuff doesn't matter, GetHyped. The general law is that ANY computer file that's N bits in size can be one of 2^N different possibilities. That's because every computer file is just a string of 1's and 0's, i.e. a binary number. If you still don't see it, imagine computers used decimal numbers, and instead of bits we had digits. Then a number with 3 digits can represent 10^3 = 1,000 different possibilities, corresponding to the numbers 000-999. Adding any more possibilities takes more digits. So if a camera only captures a certain number of bits of data, that's the same as if it only captured a certain number of digits: there are only so many numbers you can express with a limit on the digits you can use.



posted on Sep, 26 2014 @ 03:52 PM

originally posted by: tridentblue
This stuff doesn't matter, GetHyped.

I wouldn't say it doesn't matter, but other than that comment, you guys are both looking at the same thing from two different perspectives, and neither one of you is wrong.

The reason it matters is that you might have one application where you're more interested in color depth, so you devote more of the available bits to that and less to resolution, while in another application resolution may be more important than color depth, so you re-allocate the available bits accordingly.

When you say it doesn't matter, I take that to mean the total bit count could be the same in either case, and of course you're right. However, the total bit count isn't the only thing that matters if different applications place more importance on either resolution or color.
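For instance (hypothetical numbers), the same 24-megabit budget can be spent as a 1-megapixel image with 24-bit color or as a 3-megapixel image with 8-bit color; only the application tells you which trade-off is better:

```python
# Two hypothetical ways to spend the same bit budget on a still image.
config_a = {"width": 1000, "height": 1000, "bits_per_pixel": 24}  # 1 MP, 24-bit color
config_b = {"width": 2000, "height": 1500, "bits_per_pixel": 8}   # 3 MP, 8-bit color

for name, cfg in (("A", config_a), ("B", config_b)):
    total_bits = cfg["width"] * cfg["height"] * cfg["bits_per_pixel"]
    print(name, total_bits, "bits")   # both print 24,000,000 bits

# Same total bit count, hence the same number (2^24,000,000) of possible files,
# but one allocation favors resolution and the other favors color depth.
```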



posted on Sep, 26 2014 @ 05:26 PM
a reply to: Arbitrageur

Like, say, bitrate in audio: XYZ kbps alone isn't enough to deduce audio resolution, as the variables at play (sample rate, sample bit depth, number of audio channels) allow for any number of combinations that meet a given rate of bits per second.
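Concretely (made-up combinations), an uncompressed PCM stream at 1,536 kbps could be 48 kHz / 16-bit / stereo, 24 kHz / 32-bit / stereo, or 48 kHz / 8-bit / four channels; the bits-per-second figure alone doesn't tell you which:

```python
# Made-up uncompressed PCM configurations that all produce the same bitrate.
configs = [
    {"sample_rate": 48000, "bit_depth": 16, "channels": 2},
    {"sample_rate": 24000, "bit_depth": 32, "channels": 2},
    {"sample_rate": 48000, "bit_depth": 8,  "channels": 4},
]

for cfg in configs:
    bits_per_second = cfg["sample_rate"] * cfg["bit_depth"] * cfg["channels"]
    print(cfg, "->", bits_per_second // 1000, "kbps")   # all print 1536 kbps
```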



