
Shiny new tech to advance cell phone cameras in the future! And the untold story is...


posted on Nov, 19 2012 @ 05:12 PM
Here's a nice story about a radical new design for phone camera lenses.

The story explains how a 'new' technology is becoming available. It's SO cool that you will one day (not quite yet) be able to take photos with your cell camera that might rival a DSLR camera with a big Nikon lens.

Wow, that's pretty neat, right? The new lens will be able to actively focus, do macro, wide-angle and close-up shots, and act as a telephoto lens. On top of which, the quality will be incomparable. Since the lens is a big problem with small cameras, this is a boon to humanity!



Scientists have developed a super-thin lens that can function either as a convex or a concave lens with the flick of a switch. It means a scene can either be magnified or viewed at wide angle.

It could provide a new generation of small lenses used in devices such as mobile phones and tablet computers, allowing keen photographers to capture images that previously required expensive SLR lenses.


Sounds really promising! A new day will dawn for cell phones and tablets!

Now...what else. Hm.

Welp, if you've ever paid attention to the weirder bits of my posts, you'll know I occasionally drop an oblique reference to Current Actual Military/Gubmint Stuff into them. They're not usually that blatant, which irritates BFFT and mbkennel (sorry).

Back in maybe 2007, I started popping occasional references to this technology into posts, and I suppose none of you are NRO/NGIA employees (heh) as my 'who amongst you will respond to this?' trolling got bupkes as a reaction for the last five years. So. The question you ought to ask is - this sounds miraculously good. What sort of chicanery might it be turned to? And that gets us, gentle reader, into the odd and arcane arts of Orbital Imagery.

Sparrow and Rayleigh and Dawes! Oh my!

A popular meme in the world at large, and certainly on conspiracy web sites goes something like this:

"My uncle, who works for the FBI, says that satellites are so good that we can read the letter in your hand if you open it at the mailbox" or "we can read dates off of coins on the ground" or whatnot.

Tripe.

There's an issue with optical lensing. And that problem is, the lenses are not infinitely large. You wouldn't think that would be a problem, would you? But it's true - the lack of infinite aperture actually is like an error in the lens. What you get as a result of this is an effect called diffraction, which sets an upper limit on the amount of information that can pass through the lens, or, in short, a lens of a given aperture (size) cannot resolve details smaller than a certain size (angular resolution, literally), given a particular color of light. It doesn't matter if the lens is made by magical Keebler elves in a hollow tree so that every atom in the lens is exactly where design would have it. It doesn't matter if the air is crystal clear. It doesn't matter. For any finite lens, the resolution at a fixed distance is limited by the aperture and the wavelength.
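If you want to put rough numbers on that, the Rayleigh criterion is simple enough to play with. A toy sketch in Python, using made-up but plausible figures for mirror size, wavelength, and altitude (none of them refer to any particular bird):

def rayleigh_ground_resolution(aperture_m, wavelength_m, altitude_m):
    # Rayleigh criterion: angular resolution ~ 1.22 * wavelength / aperture (radians).
    # At nadir, the smallest resolvable ground detail is roughly that angle times altitude.
    theta = 1.22 * wavelength_m / aperture_m
    return theta * altitude_m

# Hubble-class 2.4 m mirror, green light, 300 km orbit (all assumed numbers):
print(rayleigh_ground_resolution(2.4, 550e-9, 300e3))  # ~0.08 m - nowhere near coin-date territory

Double the mirror or halve the altitude and you double the detail. That's the whole game, and it's why the rest of this gets weird.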

If you google around for this, you'll want to look for "telescope", "modulation transfer function", "resolution" and "aperture", among other things. This has actually been known for a long time - the late 19th century scientist Lord Rayleigh was one of the first to point this out, so you will see the problem often discussed as "Rayleigh's Criterion" or "Rayleigh's limit". Other people have refined the thing, so you'll also see Sparrow's limit and Dawes' limit, and occasionally something called an "Airy disk". Some of these are more applicable to stellar imaging than ground imaging, but it's still all the same issue - a finite lens has limits on its performance. The reading on this varies from mind-numbingly mathematical to overly simple - choose your poison.

So, how do TPTB, in this case pretty much entirely the NRO/NGIA with the occasional CIA surprise, deal with this? Well, they can't, at least not with optical imaging. Rayleigh sets a physical bar against reading the dates on coins, newspapers, or counting hairs on Uncle Fred's head. You can use better optics, to get as close to the limit as possible. You can use BIGGER optics - remember, the larger the aperture, the finer the resolution for a fixed orbital height. So a big lens or mirror means a finer-grained image. But you've got to get the damned thing into orbit, so something like Hubble is about as big as you can get.

And, in fact, the Hubble's mirror is, oddly, about the size of the main mirror on a KH-11. Or so I'm told.

But it's just damned tough to make them much larger AND get them into orbit. Big is heavy, and fragile.

So what's another way? Well, you can contrive to make a faux aperture by combining a number of smaller telescopes. If you had an array of mirrors (or lenses), each small in itself, you could, if all went well, put their images together so that the aperture looked like the size of the array, not the size of the mirrors making it up. That's great in theory - you see it done a LOT in ground based radio telescopes. But it's tough to do with optics. Not that you CAN'T do it, but the accuracy of positioning the individual mirrors in the array is a function of the wavelengths involved. For radio, that can be meters off and still be ok. For light, the spacing has to be to fractions of a wavelength of light. And remember, the shorter the wavelength of light, the better you can resolve details, so these two things work against each other. You want it short for details, but short is tough for mirror positioning.
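To see why the mirror-spacing problem bites so much harder at optical wavelengths, here's the same back-of-envelope in Python, using a tenth of a wavelength as a hand-wavy positioning tolerance (that fraction is my assumption, not anybody's spec):

wavelengths_m = {
    "10 cm radar": 0.10,
    "1.5 um infrared": 1.5e-6,
    "550 nm green light": 550e-9,
}
for name, lam in wavelengths_m.items():
    # hold each element's position to ~lambda/10 across the whole array
    print(name + ": hold spacing to about", lam / 10, "m")
# radar: a centimetre of slop is fine; green light: ~55 nanometres, across metres of tether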

Not that this has stopped them. One way to make this work better is to tether a group of sats, usually three, and spin the crap out of them so that the tethers are really rigid. Then you use micropositioners to adjust the spacings, and voila! an aperture as big as the three sats at the ends of the tethers, made of three much smaller mirrors. This isn't perfect either, because tiny perturbations like going in and out of direct sunlight, or gravitational anomalies caused by mascons below, most particularly if you ever have to reposition/retask the group, take FOREVER to compensate for, so optical satellite clusters that do this go on and off line constantly as they fall out of perfect alignment and have to recalibrate. Also, it's just not quite as good as a real mirror of that size, so you can't get the maximum resolution or light gathering capability a huge mirror or lens would provide. But it's better than a poke in the eye.

(continues on page 2 - maybe this is a good test for tldr!)



posted on Nov, 19 2012 @ 05:44 PM
Well... I tried to wait for page 2.

You are right. It does get my ire up.


Looking forward to the remainder of your post.



posted on Nov, 19 2012 @ 05:51 PM
(page 2)

So, if we've whopped into the stops for a light-based optical image at the far end of tethered satellites, where do we go from there?

Well, the next stop is NON-optical images. If I can figure out what's going on another way, then that's good too.

So, years ago, the gubmint started lofting non-optical ground imaging satellites.

There's a way to take pictures of things on the ground by using radio waves, basically an offshoot of radar. Now, this, too, has the same exact resolution problem as that of a telescope, and for the same reasons. No antenna system has an infinite size, so you end up with diffraction limits just like Rayleigh's Criterion.

Worse, the wavelength of the radio waves you use for this is WAY bigger than that of the blue light used for optical ground imaging, and as I discussed in part 1, the longer the wavelength, the worse the detail. So why would you go to radio/radar to do ground imaging?

The aperture.

By means of arcane mathematics beyond that of mortal men, you can take a coherent radio wave source, a dandy satellite design, and the magic of orbital mechanics, and pretend that you have a HUGE aperture. Not just the ends of tethers, you can make one MILES wide. The overall box for this trick is called "SAR", for Synthetic Aperture Radar. Basically, you collect data over a comparatively big swath of orbit, and by means of magic you can reconstruct all this data into a real image. There are, of course, limits to how big the swath can be, and that's set by how perfect your coherent source is, and drift in the satellite, and imperfections in the orbit, and the satellite design. But those are all tertiary to the big ass aperture you can get this way.
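The textbook strip-map SAR relations give a feel for how big that faux aperture gets. A sketch with hypothetical X-band numbers (not any particular satellite):

def synthetic_aperture_length(wavelength_m, slant_range_m, antenna_m):
    # How much orbit a target stays inside the real beam - that's the synthetic aperture.
    return wavelength_m * slant_range_m / antenna_m

def stripmap_azimuth_resolution(antenna_m):
    # Classic strip-map result: azimuth resolution ~ antenna length / 2, independent of range.
    return antenna_m / 2.0

print(synthetic_aperture_length(0.03, 800e3, 10.0))  # ~2400 m of synthetic aperture
print(stripmap_azimuth_resolution(10.0))             # ~5 m azimuth resolution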

Using SAR techniques, you can, from orbit, resolve to much smaller limits than you could with an optical telescope. The problem is, you don't get an optical image. It looks good, almost like a photo. But you can't see things on the surfaces of objects. If I imaged Uncle Fred's car, I could tell the make and year. But I couldn't see, for example, a message painted on it. I would see the car's surface but not markings.

You can get SAR images from UAVs and fighter craft, too; it's a thing an AESA radar can do in addition to its many other tasks.

In the olden days, what used to happen is that NRO satellites would grab really detailed SAR images using film as the mechanism for data storage. Back then, when dinosaurs roamed the earth, we didn't have fancy high speed data links or high speed processors. So, the satellites would zip SAR data onto high resolution film base about 6" wide. Raw, it looks like pictures of raindrops on mud, or like a real hologram without the laser. The trick is to do the complicated mathematical transform that converts the radar data into a ground image, and back then, the satellite couldn't get the data down or the processing done.

So the satellite recorded the raw data on film base, and kicked it out the back like Corona. The SAR film would reenter, be caught with a Fulton Skyhook, and taken to the lab. Once developed, you still get a muddy pebbled bunch of crap. To convert it to an image, at the time, it took too long to do the math using a mainframe - so they used lenses. Specially ground lenses actually do mathematical transforms on images. Some eggheads figured out how to make a lensing system that did the SAR math trick. There was a team of lens grinders working for NRO then, and the thing was a masterpiece of the art. But any little deviation from the orbital height it was designed for and you got a worse and worse image, and had to regrind a new lens. It worked, but not well.

As time went by, it became possible to honk the data down over a radio link and then crunch away with mainframes. It was still not real time, but it was better than lens grinding.
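The 'math' in question is a matched-filter/Fourier trick, whether it's done with ground glass or a mainframe. A one-dimensional toy of the idea in Python (azimuth only, toy numbers): the raw phase history of a point target is a chirp, and correlating it against the expected chirp collapses it back into a sharp point.

import numpy as np

n = 1024
t = np.linspace(-1.0, 1.0, n)
reference = np.exp(1j * np.pi * 200.0 * t**2)      # expected point-target chirp (toy)
raw = np.roll(reference, 100)                      # 'recorded' return, shifted along track

# Matched filtering via FFT - roughly the kind of transform those ground lenses did optically.
focused = np.fft.ifft(np.fft.fft(raw) * np.conj(np.fft.fft(reference)))
print(np.argmax(np.abs(focused)))                  # 100: the smear collapses to a point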

There are a lot of radio SAR imagers in orbit. It's not state of the art anymore, but it's dependable, cheap, and it gets the job done in 99% of circumstances. Anyone can put up a SARSAT these days, pretty much, and peek in people's private military/aerospace playgrounds. So, the military/gubmint has spent lots of your tax dollars devising clever arcane ways to screw this up in areas you don't want imaged. SAR obscuration was originally done from ground stations but now we have satellites that swoop around doing this in a much more flexible and real-time way, in case you don't want your troop movements or materiel disposition seen by everyone with any orbital capacity at all. That in itself is another "conspiracy theory" article but I'm not sure I can go into detail on it.

(stay tuned for part 3!)



posted on Nov, 19 2012 @ 05:53 PM
Wow, that's incredible, so now technicians have come up with a way to mimic the natural lens of an eye... truly amazing... What? It only took them 500 million years? Oh well...



posted on Nov, 19 2012 @ 06:26 PM
(part 3)

The SAR imaging trick is all done with digital signal processors now. You can do all sorts of things with the basic math. If you move the imaging apparatus over the ground track at a fixed rate and height, and the radar source is as perfectly stable as can be, you can get great images; the more perfectly you do these things, the better the imaging will be.

It also works backwards.

If your imaging apparatus is fixed and you move the target past it, you can image the target and get its speed, height and flight path. Sandia (sorry, guys) came up with a means of imaging the other guys' stealth planes this way, using bistatic radar. Heh.

But, coming around full circle, the fastest SAR imaging is done using holograms of lenses. You can take really fine grained LCD imagers like LCOS or MEMS imagers like a TI DLP, and put them into an optics rig with a laser and generate a holographic lens system. Basically, you calculate what a perfect SAR transform lens would be for your particular circumstance, then calculate what the interference pattern for a hologram representing that lens would be. That gets displayed on the LCOS, and voila! the laser in the rig creates a functional image of a lens system. You then run your SAR data through the holographic lens, and you have real-time processing. You eat a big delay upfront, once, to do all the lens calculations for your imaging pass, but when you've got it, it's all flow. You can make hololensing systems that could not be ground even by teams of Swiss opticians, and you can do it in the magic box in a few seconds. And THAT, gentle readers, was done partially by yours truly back in the 90s.
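For flavor, the 'calculate the lens, then display its hologram' step isn't exotic by itself: the phase profile of an ideal thin lens is just a parabola in the paraxial approximation, wrapped modulo 2*pi so an LCOS/SLM can show it. A toy sketch with assumed numbers (the real SAR-transform 'lens' is a far nastier function; this is only the principle):

import numpy as np

wavelength = 1.55e-6       # assumed IR source, metres
focal_len  = 0.25          # assumed focal length, metres
pixel      = 8e-6          # assumed SLM pixel pitch, metres
n_pix      = 1024

coords = (np.arange(n_pix) - n_pix / 2) * pixel
x, y = np.meshgrid(coords, coords)
phase = -np.pi * (x**2 + y**2) / (wavelength * focal_len)  # paraxial thin-lens phase
slm_pattern = np.mod(phase, 2 * np.pi)                     # wrapped pattern the SLM displays
print(slm_pattern.shape)                                   # (1024, 1024)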

But wait, there's more!

So, they went from optical images, to SAR. But once you stretch out your aperture to a km using SARSATs, you will eventually run into the limits again set by the wavelength of your imaging radiation. For optical satellites, that's the color of the image - bluer is shorter, and thus more detailed; that's one reason the disks with more data are "bluray" and not "redray".

For SARSATs, the shorter the wave, the better the detail, which is why 10cm stuff is unclassified for the most part, but you don't see a lot of 1cm images on the net. As you push that wavelength down with a big aperture, you can image pretty darn tight. WAY better than cameras. But, of course, it's still not good enough - someone might have something the size of a cellphone you'd want to look at for some God-forsaken reason. Smaller is never enough! So, the next step! As you push radio sources down down down in wavelength, they get cranky and hard to manage.

What's way shorter in wavelength than radio? Light, again!

If we couldn't get a big aperture using a light based telescope, maybe we can do it using light based SAR! So, using the same math trick as SAR, we can use a very very stable IR laser in that satellite, using a color that will easily penetrate to the ground. Using some magical filtering on the way back up, we can transform the very faint ground returns of that laser source into exactly the same sorts of muddy, grainy, noise-looking recordings you get from SAR. Only, now this data has resolutions down into the millimeter range. In theory, less, but in practice, it gets to be really tough to get all your lensing systems perfect. But hell, it's good enough for government work. The NRO operates ground imaging synthetic aperture laser sats under the Lacrosse series. Lacrosse kicks absolute ass. They® still can't read that letter in your hand - like SAR, SAL won't pick up surface markings. But you can look at someone's build, facial features, and whatnot (with some limits - remember we're looking down) and ID individuals, if you've got physical biometrics. They can't tell what's displayed on your iPhone, but they can tell one from an Android.
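The reason the jump from radio back to light is worth all the pain is that, for a given aperture and geometry, resolution scales directly with wavelength. Crudely (assumed wavelengths, nothing operational):

radar_wavelength = 0.03    # 3 cm X-band SAR
laser_wavelength = 1.5e-6  # assumed near-IR ladar line
print(radar_wavelength / laser_wavelength)  # ~20000x finer detail for the same aperture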

The downside of it is atmospheric conditions play a MUCH bigger role than they do for SAR, it's light after all, so clouds and rain are one reason SAR is still a hot satellite imaging technique. But when the stars align, Lacrosse was nearly unbeatable. A problem that re-rears its ugly head is that you get SO much data back, it's hard to process, even with supercomputers. So you have to look small. Big sweeping images are NOT real time. Of course, there's the hololens trick that will speed that back up - thus the coming full circle - but even then, the hololens can only process so much data. So it's not real time, but if you crop your attention down it's close.

You'll have noticed the "was" part there.

(onto part 4)



posted on Nov, 19 2012 @ 06:56 PM
(part 4)

The far end of the rainbow here is a relatively new 3D ground imaging system that uses SAL as the starting point. You can actually return volumetric images of ground targets now, with resolutions down to 1mm voxels or so. That's a Northrop trick, and it also kicks ass. It's the difference between looking down and getting a flat photo, and being able to pick a locale and get a 3D image of it, then being able to "walk through" the thing using either VR goggles or a big display you "grab" and "spin" around to get the viewpoint you want, sort of like playing Doom. Again, you don't get color or surface markings, but you can easily tell what sort of car that is, or spot facial features, which is something Lacrosse was bad at.

The data volume is immense. That's a limit. But you can use a tactical imager from a plane, sort of confine what you want to see, and get it a few seconds later.
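'Immense' is easy to put a number on. A back-of-envelope in Python, with made-up patch dimensions:

def voxel_count(x_m, y_m, z_m, voxel_m=0.001):
    # number of 1 mm voxels in a rectangular volume
    return (x_m / voxel_m) * (y_m / voxel_m) * (z_m / voxel_m)

n = voxel_count(10, 10, 3)    # one 10 m x 10 m patch, 3 m tall
print(round(n), "voxels")     # ~3e11 - hundreds of gigabytes even at one byte per voxel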

That started happening about six years ago. It's still not fielded reliably, but it'll get there. That's the bleeding edge now.

So, how does this come back to cell phone camera lenses?

Well, as they say in north Georgia, where I come from, 'what goes around, comes around'.

They've® been going to more and more arcane methods of imagery, all of which are more and more technically fragile, slow, and difficult to field. And all these issues are due to diffraction limits in lensing/antenna systems, because lenses in the real world aren't perfect, since they have edges and aren't infinite.

You'd think that was sort of insurmountable. But, no!

You can make an infinite lens. And the way you do this is with a plasmonic metamaterial lens, like the one in the article.

Metamaterials are engineered to have quirky, non-real-world characteristics. You can make metamaterials that have impossible permeability and permittivity, which is hard to explain without math, but basically, I can make light in a metamaterial lens seem to go faster than light, or way slower than light, or be travelling the wrong way through the lens.
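One way to see the 'wrong way through the lens' part without the heavy math: plug a negative index into plain old Snell's law and the refracted ray comes out on the same side of the normal as the incoming one. Toy numbers, not a lens design:

import math

def refraction_angle_deg(theta_in_deg, n1, n2):
    # Snell's law: n1*sin(theta1) = n2*sin(theta2)
    return math.degrees(math.asin(n1 * math.sin(math.radians(theta_in_deg)) / n2))

print(refraction_angle_deg(30.0, 1.0, 1.5))   # ordinary glass: ~+19.5 degrees
print(refraction_angle_deg(30.0, 1.0, -1.5))  # left-handed metamaterial: ~-19.5 degrees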

Metamaterial coatings for planes, for example, can be engineered to return negative Dopplers, or zero Dopplers - to one part of the radar, it looks like the plane is going backwards, to another, forwards, so the radar's filtering just edits out the plane as 'noise'. Crafty, eh?

The same way, I can make an optical lens that uses left-handed metamaterial characteristics that edit out the lens' edges. Think about that one. The lens can be designed using plasmonic metamaterial techniques in such a way that the lens will, to all intents and purposes, have an infinite aperture.

The diffraction limit goes away.

No processing. No number crunching. No real limits to volumes or areas of images caused by data flow problems. No non-optical image limits.

Now the limit to resolution is the lens design, and the CCD, and how perfectly you can fabricate that lens. Instead of detail resolution being limited by wavelength and aperture, it's limited by light input - tinier details become dimmer - you'd need a longer exposure. That has its own problems, so in practice there's a final limit there, but if you had a DANDY system, you probably COULD read someone's mail.
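That trade is easy to eyeball: photons per resolution cell scale with the cell's ground area, so every halving of the resolved detail costs you roughly 4x the exposure. A back-of-envelope that ignores everything a real system would care about:

def relative_exposure(detail_m, reference_detail_m=0.10):
    # photons per cell ~ cell area, so exposure scales as (reference/detail)^2
    return (reference_detail_m / detail_m) ** 2

for d in (0.10, 0.01, 0.001):   # 10 cm, 1 cm, 1 mm details
    print(d, "m detail ->", round(relative_exposure(d)), "x the exposure")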

In addition, all the nifty features of the phone lens apply to this thing - variable focus, zoom, some neat filtering tricks. Only now it's on an NRO bird looking through your window seeing what sort of cereal you're eating.

Right now, it's sort of lab-by. Hard to make, only works for monochromatic light (so you don't get full color photos), fragile. But it's the up and coming thing. Back to optical images, only now, without those pesky limitations. No tethers, no huge-ass Hubble-sized KH birds. No telltale emissions.

As you see this sort of lens system proliferate into the consumer world, just remember - they're several years ahead of it at NRO.



posted on Nov, 19 2012 @ 06:59 PM
So... this all means that "they" are spying on us without smartphones, and with this technology, they can do it even better?

Good thing I never did the smartphone thing...



posted on Nov, 19 2012 @ 07:07 PM
Fascinating. Thanks so much for what obviously took you a lot of time and effort to prepare. Yes, I remember the "We can read the newspaper on the park bench" stories. That was a nice explanation without getting so technical my brain would freeze!



posted on Nov, 19 2012 @ 07:09 PM
reply to post by davjan4
 


No, more like this new lens tech for smart phones has a really nasty propensity for misuse from you guys' point of view.

From the MIC side, it's got all sorts of interesting possibilities attached to it.



posted on Nov, 20 2012 @ 08:53 PM

Originally posted by Bedlam
I can make light in a metamaterial lens seem to go faster than light


Faster than light would normally travel through a non-metamaterial lens? Or faster than 'c'?



posted on Nov, 20 2012 @ 09:25 PM
Well, that was a good read, thanks OP. You put this together very well; it was informative enough that I learned, but not so technical that I fell asleep. So I'm extremely excited about real world apps. I'm not too worried that TPTB or whoever can see my credit card number, because if they want it they will get it anyway, so ehh.

However, I'm super excited about the off-world possibilities. If I followed you correctly, this can be applied to not just looking down at us but up at stars or other planets, correct?

I would love to see some close up hi-def pics of other planets besides ours.



posted on Nov, 20 2012 @ 11:17 PM

Originally posted by Tajlakz

Originally posted by Bedlam
I can make light in a metamaterial lens seem to go faster than light


Faster than light would normally travel through a non-metamaterial lens? Or faster than 'c'?


That's a good question. People do a tap dance about that - theoretically it HAS to go faster than light in a vacuum for the lens to work, but they® tell me it's a phase vs group velocity thing out of one side of their mouth, while the other bunch says you can't transport data across the lens faster than light and it's more a math thing than reality. A few people you'd otherwise trust say it really DOES go faster than c through the device and that should be investigated.

Try looking up "negative index" or "left handed" metamaterial, "faster than light", "diffraction free" and maybe "anomalous dispersion". I don't have a stance on it from my engineer side, other than "what's expedient?"; from the physics side, I get the same queasy feeling about the small details that I get looking at phase and time conjugators.
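If you want to chase the phase-vs-group-velocity argument yourself, the textbook bookkeeping is only a couple of lines: phase velocity is c/n, group velocity is c divided by the group index n - lambda*(dn/dlambda), and in a steep anomalous-dispersion region that group index can go negative on paper. Toy numbers only, nothing to do with any actual device:

c = 299_792_458.0           # m/s
n = 1.00                    # refractive index at the operating wavelength (assumed)
lam = 1.5e-6                # wavelength in metres (assumed)
dn_dlam = 1.0e6             # assumed steep anomalous-dispersion slope, per metre

v_phase = c / n
v_group = c / (n - lam * dn_dlam)   # group index = n - lambda * dn/dlambda
print(v_phase / c, v_group / c)     # 1.0 and -2.0: 'faster than c' on paper, no data carried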



posted on Nov, 20 2012 @ 11:20 PM

Originally posted by CitizenJack

However, I'm super excited about the off-world possibilities. If I followed you correctly, this can be applied to not just looking down at us but up at stars or other planets, correct?

I would love to see some close up hi-def pics of other planets besides ours.


True, at some point, you ought to be able to make some kickass large aperture metamaterial "perfect" lenses.

It's a big deal in radio and radar at the moment. There's a whole set of careers in plasmonics that I'm too old for, but if you were a math hound it's going to rock in the next 20 years.

At one point, the Roosians were working on enhancing dielectric constants with some sort of plasmonic trick, so you could build better-than-battery capacitors. You used to hear a lot of basic research out of them 10 years ago but it's all quiet now, which is generally a sign they are trying to engineer it.



posted on Nov, 22 2012 @ 04:13 AM
I knew this was a Bedlam thread from the word 'Shiny' in the title and wow am I glad I saw it.

Great thread, great info, great tie-ins to some of your more esoteric and vague postings in the past. Since metamaterials weren't known about in the early '40s, it raises the question of how one would modify permittivity and permeability back then. I doubt plasmon engineering was up to snuff.

Seems like a good time to be interested in plasmonics either way. Lucky me.

Now if only there was more information about experiments relating to a machian universe...



posted on Nov, 22 2012 @ 04:08 PM
link   

Originally posted by framedragged
I knew this was a Bedlam thread from the word 'Shiny' in the title and wow am I glad I saw it.


Geez, am I THAT predictable? (ROFL!)



Great thread, great info, great tie-ins to some of your more esoteric and vague postings in the past. Since metamaterials weren't known about in the early '40s, it raises the question of how one would modify permittivity and permeability back then. I doubt plasmon engineering was up to snuff.


That works a different way.



Now if only there was more information about experiments relating to a machian universe...


Since a lot of guys here think they discover "hints" in movies, I did pitch doing a real disclosure wrapped in a shiny movie shell and studying how people reacted to it, but alas, I got shot down. Maybe it's because I was a main character. I had a real tear-jerker death scene too. Hell, they could even film most of it "on location" and save a lot of money on sets, but you'd have to kill the film crew after, unless the Navy filmed it. Internal military films look a lot like that 1950s crap you saw in school as a kid about brushing your teeth, though, so I'm not sure that's an option.

Just think...
scene: inside port airlock
action: pressure counter on wall slowly counting down, slight hissing noise, guys are popping their ears

Schmidt: "My God. We flew faster than light, and now we're on a whole other g----m planet"
Carnes: "Too bad we can't tell the other guys back home - hey, boats, why aren't you excited?"
Me: "Been there. Done that. Got the t-shirt"
Schmidt: "Bull--t. You can't be that used to it"
Me: "Just wait"

Door opens - outside there's a dark sullen red sky, with a large dark red sun on the horizon to the left. Red tinged sand dunes roll to the horizon. Off to the right, a cluster of featureless adobe-looking buildings stand. Blue-white streetlights scattered among them make them stand out unnaturally. The group walks out onto the sand. Cut to- face on medium distance shot, ship looming over them in the background.

The looks of awe, wonder, and expectation slowly fade from the group, except for Tom, who grins, reaches into his back pocket, and fishes out a can of dip, which he begins tapping on the heel of his other hand.

Tom: "Well, boys, welcome to Planet Dirt."






posted on Nov, 23 2012 @ 03:14 PM

Originally posted by Bedlam

Geez, am I THAT predictable? (ROFL!)



Nah, you just have a very unique enthusiasm to your posts. I'll sometimes be mindlessly reading responses to some thread or other without looking at names and say to myself, "This feels like a Bedlam post..." only to glance over and see that infamous tag.




That works a different way.



Haha, that much I'd gathered by now. The only attempt I've ever seen to go beyond degaussing in the Philadelphia experiment, while remaining somewhat grounded in reality, involves inducing spin polarization in a body and rotating it to create a space-time shield. Kind of like turning the body into a small matter inductor and aligning the induced grav-mag field opposite its weight vector. Sort of a miniature version of what I like to imagine a mach drive as, minus a couple things. The physics explanations of NMR and the metric tensor up to that point seem fairly solid but then it devolves into woo-woo land. Somehow I doubt it's what really happened.




Door opens - outside there's a dark sullen red sky, with a large dark red sun on the horizon to the left. Red tinged sand dunes roll to the horizon. Off to the right, a cluster of featureless adobe-looking buildings stand. Blue-white streetlights scattered among them make them stand out unnaturally. The group walks out onto the sand. Cut to- face on medium distance shot, ship looming over them in the background.

The looks of awe, wonder, and expectation slowly fade from the group, except for Tom, who grins, reaches into his back pocket, and fishes out a can of dip, which he begins tapping on the heel of his other hand.

Tom: "Well, boys, welcome to Planet Dirt."





That's a movie I'd see in IMAX. Especially if it was filmed on location. And filming it on location would have the added bonus of being a reverse moon landing hoax. Now that would drive the conspiracy theorists up a wall.

Back on topic, it would be pretty far out for consumers to have a meta-lens sometime in the future. That wasn't something I saw being outside of labs and black projects for quite some time. Makes me want to try to get into the lab of the one professor at my school who's done any work with metamaterials at all. I wonder if he'd ever consider an undergrad. If he wouldn't, I imagine I can at least get in with one of the numerous professors whose focus is plasmonics.



posted on Nov, 23 2012 @ 04:17 PM

Originally posted by framedragged

That's a movie I'd see in IMAX. Especially if it was filmed on location. And filming it on location would have the added bonus of being a reverse moon landing hoax. Now that would drive the conspiracy theorists up a wall.


That was another point I made. Imagine everyone carping about the CGI when it's on-location shots. Heh. The whole concept fit my twisted sense of humor.



Back on topic, it would be pretty far out for consumers to have a meta-lens sometime in the future. That wasn't something I saw being outside of labs and black projects for quite some time. Makes me want to try to get into the lab of the one professor at my school who's done any work with metamaterials at all. I wonder if he'd ever consider an undergrad. If he wouldn't, I imagine I can at least get in with one of the numerous professors whose focus is plasmonics.


I think they're still a few years away from an achromatic metamaterial plasmonics consumer lens. NRO, however, has been working their collective asses off for this goal for the last decade. They're funding a lot of research indirectly.

The fabrication is still an issue. But it won't be forever. Remember, the physical aperture of this lens is a limit on photon collection, not resolution. So even a small one would be...useful.

Plasmonics and metamaterial research are both great fields. Especially if you want to end up like me, which could be bad or good depending on what you like doing.

I also recommend lasers. Lasers, plasmonics. Hm. Two great tastes that might go great together. Maybe.



posted on Nov, 23 2012 @ 04:18 PM
reply to post by Bedlam
 


Thank you for the informative read. What applications or improvements would these new lenses offer for astronomical imaging?



posted on Nov, 23 2012 @ 04:24 PM
reply to post by iforget
 


Spectacular resolution. The ability (if designed that way) to make some distortion corrections on the fly if used from ground level. If they can lick the tendency to be monochromatic, they should be able to have near achromaticity - you can make wideband radio lenses, and I'm not sure why you couldn't do it for optical ranges.

Near zero distortion. Light weight. I could probably think of more if I worked on it.



posted on Nov, 29 2012 @ 02:07 PM
Since older imaging satellites used film, I've always wondered what happened when the satellite finished its roll of film. I know that they went up with thousands of feet of film, but it's a little wasteful to just junk the expensive satellite once it had exposed it all. Seems like you could extend its lifetime if someone would go up to put a new roll of film in, but that would require a whole lot of resources and clandestine activity. Perhaps an agency related to the one which wanted to send a group of astronauts to tiny space stations and physically take pictures at around the same time as Apollo. Would probably have been beneficial to have a reliable launch and reentry vehicle way back when, too.

Or maybe 2000 - 16,000 feet of film was enough for the viable lifetime of the craft.


