
Scientists directly image planets orbiting star


posted on Jan, 29 2017 @ 12:10 AM

originally posted by: AttentionGrabber
a reply to: charlyv


They are the result of direct observation,

Prove it.


Great science has already come out of it.

Like pus?


I think you should prove they didn't; they have already said that it was all based on direct imaging with the Keck. Also....


With an annual cost of $30.8 million and 574 nights available for observing, the cost of one observing night on a Keck telescope is about $53,700.
Linky: NAOO - Keck

Not the kind of grant money you spend on bull#, especially with the peer groups they have. They keep incredible records, have fun.



posted on Jan, 29 2017 @ 03:32 AM
That's really rather cool.

Thanks for posting OP.

Now... if we can image a solar system 129 light years away this well, when will we be able to view every inch of the lunar surface, right next to us, down to the mm-per-pixel scale?



posted on Jan, 29 2017 @ 03:55 AM
Sorry for being so on topic here, although the story of Phage's business ventures on Hawaii was unusually entertaining! Will any of those planets be in the Goldilocks Zone, so that in however many billions of years it takes, life will develop on it/them?



posted on Jan, 29 2017 @ 04:37 AM

originally posted by: MysterX
That's really rather cool.

Thanks for posting OP.

Now... if we can image a solar system 129 light years away this well, when will we be able to view every inch of the lunar surface, right next to us, down to the mm-per-pixel scale?



When the laws of physics change.

What you've got here as an 'image' is pretty much a handful of single pixels, and you're only getting that because the planets are reflecting the light of their local sun. The orbits are very large (400 years!), so there's enough angular separation to resolve them as separate little dots.

But the Rayleigh criterion still applies: the resolution you can get is set by the aperture and the frequency of the light you're imaging with, because of diffraction.
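For anyone who wants to put rough numbers on that, here is a minimal sketch of the Rayleigh criterion. The aperture, wavelength, and distance below are assumptions chosen to be Keck-like and to match the 129-light-year figure in the thread, not values taken from the paper.

```python
import math

# Assumed, illustrative values: a 10 m Keck-class aperture, near-infrared
# imaging at 2.1 microns, and the 129 light-year distance quoted in the thread.
D = 10.0          # telescope aperture, metres
lam = 2.1e-6      # observing wavelength, metres
dist_ly = 129.0   # distance to the star, light years

theta = 1.22 * lam / D                        # Rayleigh limit, radians
theta_arcsec = math.degrees(theta) * 3600.0   # same angle in arcseconds

AU_PER_LY = 63241.1                           # astronomical units per light year
smallest_au = theta * dist_ly * AU_PER_LY     # small-angle approximation

print(f"Diffraction limit: {theta_arcsec:.3f} arcsec")
print(f"Smallest resolvable separation at {dist_ly:.0f} ly: ~{smallest_au:.1f} AU")
# Roughly 0.05 arcsec, i.e. a couple of AU at that distance: wide-orbit giant
# planets separate into individual dots, but nothing finer can be resolved.
```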



posted on Jan, 29 2017 @ 04:47 AM
a reply to: Bedlam

None of those obstacles would apply if an orbiter were only X miles above the surface, with good enough optics and imaging subsystems, though.

The Moon is also reflective to our Sun's light, all the way around it... so illumination isn't a problem.

It'll be a wait, I know (if it happens at all)... but I'm looking forward to the day when I can inspect the individual grains of regolith from an online image bank.



posted on Jan, 29 2017 @ 04:56 AM

originally posted by: MysterX
a reply to: Bedlam

None of those obstacles would apply if an orbiter were only X miles above the surface, with good enough optics and imaging subsystems, though.


As long as 'x' is low enough. Or the aperture is large enough. Or you are imaging in UV or something. But between the three variables, that's all there is: you have to be close, have a big aperture, or use very short wavelengths.



The Moon is also reflective to our Sun's light, all the way around it... so illumination isn't a problem.


Illumination isn't an issue for resolution limits. The reason/method for the planetary images in the OP is that they're being seen as point sources, so the resolution is effectively zero and the aperture limit drops out. You can't do that with a ground image, though, except in the limit where you call the entire lunar surface a point. One pixel.



It'll be a wait, I know (if it happens at all)... but I'm looking forward to the day when I can inspect the individual grains of regolith from an online image bank.



Then you will wait forever, because you'd have to have an orbit a few feet off the surface. Seriously, the angular resolution limits are not something that's easy to get around.
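To put rough numbers on why mm-per-pixel lunar imaging from orbit is out of reach, here is a back-of-envelope sketch using the same diffraction limit. The wavelength, aperture, and altitude below are assumed values for illustration only.

```python
# Assumed numbers only: visible light, a 1 mm ground-sample target, an
# LRO-style 50 km orbit, and a modest 0.5 m mirror for the reverse case.
lam = 550e-9        # wavelength, metres
target_gsd = 1e-3   # desired ground resolution: 1 mm per pixel

# How big a mirror would you need from a 50 km orbit?
altitude = 50e3
aperture_needed = 1.22 * lam * altitude / target_gsd
print(f"Aperture needed from 50 km: {aperture_needed:.1f} m")           # ~33.6 m

# Or: how low would a 0.5 m mirror have to fly?
aperture = 0.5
altitude_needed = target_gsd * aperture / (1.22 * lam)
print(f"Altitude needed with a 0.5 m mirror: {altitude_needed:.0f} m")  # ~745 m
# Either a mirror far bigger than anything ever flown, or an 'orbit' lower than
# the lunar mountains. That's the diffraction wall being described above.
```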

Now, if you had the meta magic to make a lens with the mathematical chicanery of having no edges, then Rayleigh/Dawes/Sparrow drop away and you're left with a resolution limit that's photon dependent. But that's a different set of NDAs.

see also: linky



posted on Jan, 29 2017 @ 05:15 AM
a reply to: Bedlam

Thanks for all that, Bedlam; you obviously know a lot more about imaging than I do.

Dashed my fantasy a little, but never mind.

A lens with no edges conjures up a mental image of a glass or crystal sphere... a hollow sphere with a spherical, edgeless CCD at the centre. Voilà... an edgeless imaging lens.

My royalties can be sent in Sterling or Dollars.



posted on Jan, 29 2017 @ 05:30 PM
a reply to: charlyv

This is why science is beyond politics and should never be manipulated by the person in power.

Imagine if this were censored by Trump.

Absolutely beautiful imagery, thank you for sharing it.



posted on Jan, 29 2017 @ 06:13 PM
a reply to: Bedlam

How large can a holographic lens be, given that the interference pattern could be artificially generated?



posted on Jan, 29 2017 @ 07:39 PM
a reply to: charlyv

Great post, I love stuff like this. It boggles the mind to think where we will be in just 25 years from now.

Hopefully we can speed up the process by gaining some advanced technology from that Alien spacecraft which they recently found crashed down in Antarctica. ~$heopleNation



posted on Jan, 29 2017 @ 08:55 PM

originally posted by: Phage
a reply to: Bedlam

How large can a holographic lens be, given that the interference pattern could be artificially generated?


It depends on what you're using to generate the lens image, and also on how much time you have to compute the lens transform and the interference pattern to generate it.

We had to do it on the fly, so it was all DSP implemented in FPGAs, with the FPGAs being reloaded in real time to change what they did as needed; the customer called it the 'Swiss clockwork design'. It worked, but it was pretty awful to understand. The computational engine controlling the LCOS or DLP was a single cPCI board encrusted with the best FPGAs you could get at the time; IIRC I had something like 100 pages of state-machine bubble charts that got compiled to VHDL, plus a small novel of VHDL routines. You couldn't do fast multiplies in that generation of parts, so we had to run the data out of the part, through a bunch of Harris Semi specialty math parts as needed, and back in for more crunching.

The lens you got was maybe 2 cm across. The size is related to the size of the DLP or LCOS or whatever they use these days. The bigger the lens, the more data you can put through it, because of the edge limitations; you have a diffraction limit there too. And there are other LCOS parts in the data chain feeding in the data and a CCD receiving it, so there are limits there, too.

The effective throughput in teraflops is still not spoken of, but for a vector processor, the thing was far faster than any machine on the '90s civilian market, and stayed that way for many years. It's still pretty damned OK, although for the price of a car you can now field enough graphics cards to get there. A lot bigger one could probably be built, but this thing had to go in a combat aircraft, and electro-optical computation generally doesn't come mil-spec.
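None of the hardware described above is public, so the following is emphatically not Bedlam's design; it is just a minimal, generic sketch of the kind of 'lens transform' being discussed: the wrapped phase pattern a spatial light modulator would display in order to behave as a thin lens. Every parameter (resolution, pixel pitch, wavelength, focal length) is a made-up example.

```python
import numpy as np

# Assumed panel and optical parameters, purely for illustration.
nx, ny = 1024, 1024   # SLM resolution, pixels
pitch = 8e-6          # pixel pitch, metres
lam = 633e-9          # wavelength, metres
f = 0.5               # desired focal length, metres

# Physical coordinates of each pixel, centred on the optical axis.
x = (np.arange(nx) - nx / 2) * pitch
y = (np.arange(ny) - ny / 2) * pitch
X, Y = np.meshgrid(x, y)

# Thin-lens (Fresnel) phase profile, wrapped into [0, 2*pi).
phase = (-np.pi / (lam * f)) * (X ** 2 + Y ** 2)
phase_wrapped = np.mod(phase, 2.0 * np.pi)

# Quantise to 8-bit grey levels, roughly what gets loaded onto a phase panel.
pattern = (phase_wrapped / (2.0 * np.pi) * 255.0).astype(np.uint8)
print(pattern.shape, pattern.dtype)   # (1024, 1024) uint8
```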



posted on Jan, 29 2017 @ 08:58 PM
a reply to: Bedlam

Awesome rap!

2cm. Not very impressive. Cool but not real useful for astronomy.



posted on Jan, 29 2017 @ 09:01 PM

originally posted by: Phage
a reply to: Bedlam

Awesome rap!

2cm. Not very impressive. Cool but not real useful for astronomy.


You need an imager the size of the lens. I would guess they could do better now, but that was studly in the '90s.


eta: I remember wanting to murder the Harris people. The parts did mathematical operations you couldn't easily do in the FPGA, but it seemed like every damned one of their parts had a hinky interface, and they were all different. You couldn't design to one sort of hinkiness or bletcherosity. The histogrammer had a qualitatively different interface from the matrix multiplier, and so on. The data order was all different, too, so you couldn't just make an image-processing chain out of multiple Harris parts. And there really weren't any decent DSP parts, although TI came out with one not long after which would probably have worked. It would have been a lot smaller, too.



posted on Jan, 30 2017 @ 12:25 AM

originally posted by: Imagewerx
Sorry for being so on topic here, although the story of Phage's business ventures on Hawaii was unusually entertaining! Will any of those planets be in the Goldilocks Zone, so that in however many billions of years it takes, life will develop on it/them?


Good question, because due to the power of that star, the 'Goldilocks zone' would have to be computed for the class of star and its brightness. Anyway, all we see at present are gas balls, huge ones. The white paper and other sources mentioned alluded to the existence of small rocky planets that are not presently resolvable. It is an infant system, and they also alluded to multiple nebulous rings.
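As a rough illustration of that scaling, here is a back-of-envelope habitable-zone estimate. The ~1.1 and ~0.53 flux limits are commonly used rough boundaries, and the 5-solar-luminosity figure is an assumed value for a bright young star like this one, not a number from the paper.

```python
import math

L = 5.0                        # stellar luminosity, solar units (assumed)
S_inner, S_outer = 1.1, 0.53   # effective flux limits, Earth insolation = 1 (rough)

inner_au = math.sqrt(L / S_inner)   # distance where flux drops to S_inner
outer_au = math.sqrt(L / S_outer)   # distance where flux drops to S_outer
print(f"Rough habitable zone: {inner_au:.1f} - {outer_au:.1f} AU")
# ~2.1 to 3.1 AU: well inside the wide orbits of the imaged gas giants, in the
# region where any small rocky planets would currently be unresolvable.
```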



posted on Jan, 30 2017 @ 12:43 AM
a reply to: Bedlam

That is some heavy stuff, Bedlam. I used to do a lot of real-time work, but never had the opportunity to work with stuff like that. Cool.



posted on Jan, 30 2017 @ 12:54 AM
a reply to: charlyv

So going by the flashing on the 4th body out, does it have a sizeable moon (or a set of them), or is it just camera artifacts... Earth-like maybe? Any Kepler data?



posted on Jan, 30 2017 @ 01:05 AM

originally posted by: DreamerOracle
a reply to: charlyv

So going by the flashing on the 4th body out, does it have a sizeable moon... Earth-like, maybe?


I suppose anything is possible, but as Bedlam and others have pointed out in detail, the resolution problem will keep us wondering until something better comes along, which should not be too long either, as JWST is just around the corner.



posted on Jan, 30 2017 @ 01:14 AM
There are better places to go a lot closer. I think I'd cut my throat on the way there. 129 light years. Eesh.



posted on Jan, 30 2017 @ 01:21 AM
a reply to: Bedlam

I would think that the Betty Hill map would be a cool place to surf. I don't think anyone has quite been successful at figuring out how she came up with that.



posted on Jan, 30 2017 @ 02:10 AM

originally posted by: charlyv

originally posted by: Imagewerx
Sorry for being so on topic here, although the story of Phage's business ventures on Hawaii was unusually entertaining! Will any of those planets be in the Goldilocks Zone, so that in however many billions of years it takes, life will develop on it/them?


Good question, because due to the power of that star, the 'Goldilocks zone' would have to be computed for the class of star and its brightness. Anyway, all we see at present are gas balls, huge ones. The white paper and other sources mentioned alluded to the existence of small rocky planets that are not presently resolvable. It is an infant system, and they also alluded to multiple nebulous rings.


Ok, thanks. So the Goldilocks Zone won't really be known until the star has settled down in a kerzillion years' time, or could it already exist, with life good to go in the zone?







 