Originally posted by amazing
How would we detect these?
First, the article, of course, makes this point about biochips:
A light-emitting diode (LED) starts off the detection process. The light that it produces hits a fluorescent chemical: one that absorbs incoming light and re-emits it at a longer wavelength. The longer wavelength of light is then detected, and the result is sent to a control panel outside the body.
Glucose is detected because the sugar reduces the amount of light that the fluorescent chemical re-emits. The more glucose there is, the less light is detected. S4MS is still developing the perfect fluorescent chemical, but the key design innovation of the S4MS chip has been fully worked out. The idea is simple: the LED sits in a sea of the fluorescent molecules. In most detectors the light source is far away from the fluorescent molecules, and the inefficiencies that come with that mean more power and larger devices. The prototype S4MS chip uses a 22µW LED, almost forty times less powerful than the tiny power-on lights on a computer keyboard. The low power requirements mean that energy can be supplied from the outside, by a process called induction. The fluorescent detection itself does not consume any chemicals or proteins, so the device is self-sustaining.
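The "more glucose, less light" relationship described above can be sketched with a standard fluorescence-quenching model (Stern-Volmer). The article does not specify the actual S4MS chemistry, so the constants and function names below are purely illustrative assumptions:

```python
# Sketch of the quenching relationship: more glucose -> less re-emitted light.
# Uses the Stern-Volmer form I = I0 / (1 + K * [glucose]).
# I0 and K are made-up illustrative values, not S4MS parameters.

def emitted_intensity(glucose_mM: float, I0: float = 1.0, K: float = 0.05) -> float:
    """Light re-emitted by the fluorophore at a given glucose concentration.

    I0 -- intensity with no glucose present (arbitrary units)
    K  -- quenching constant (per mM), hypothetical
    """
    return I0 / (1.0 + K * glucose_mM)

def estimate_glucose(I: float, I0: float = 1.0, K: float = 0.05) -> float:
    """Invert the model: recover glucose concentration from the detected light."""
    return (I0 / I - 1.0) / K

# As the article states: the more glucose there is, the less light is detected.
assert emitted_intensity(10.0) < emitted_intensity(5.0)
```

The inversion step is what the external control panel would effectively perform: measure the re-emitted light, then map it back to a glucose reading.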
Reality is somewhat less alarming. A simple ID chip is already walking around in tens of thousands of individuals, but all of them are pets. Companies such as AVID (Norco, Calif.), Electronic ID, Inc. (Cleburne, Tex.), and Electronic Identification Devices, Ltd. (Santa Barbara, Calif.) sell both the chips and the detectors. The chips are the size of an uncooked grain of rice, small enough to be injected under the skin using a syringe needle. They respond to a signal from the detector, held just a few feet away, by transmitting an identification number. This number is then compared to database listings of registered pets.
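The scan-and-lookup flow described above is simple enough to sketch: the chip transmits an ID number, and the reader compares it against a registry of pets. The registry contents and ID values below are invented for illustration; real registries are vendor-maintained databases:

```python
# Minimal sketch of the pet-chip lookup described above.
# The registry dict and chip IDs are hypothetical examples.

registry = {
    "985112003456789": {"name": "Rex", "species": "dog", "owner": "J. Smith"},
    "985112009876543": {"name": "Mittens", "species": "cat", "owner": "A. Lee"},
}

def lookup(chip_id: str) -> str:
    """Compare a transmitted ID number against the registered-pet database."""
    record = registry.get(chip_id)
    if record is None:
        return "unregistered chip"
    return f"{record['species']} '{record['name']}', owner {record['owner']}"

print(lookup("985112003456789"))  # dog 'Rex', owner J. Smith
print(lookup("000000000000000"))  # unregistered chip
```

The chip itself is passive: it only echoes its number when energized by the reader, and all the meaningful data lives in the database, not on the animal.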
The development of the X-ray image intensifier by Westinghouse in the late 1940s in combination with closed circuit TV cameras in the 1950s revolutionized fluoroscopy. The red adaptation goggles became obsolete as image intensifiers allowed the light produced by the fluorescent screen to be amplified, allowing it to be seen even in a lighted room. The addition of the camera enabled viewing of the image on a monitor, allowing a radiologist to view the images in a separate room away from the risk of radiation exposure.
More modern improvements in screen phosphors, image intensifiers, and even flat-panel detectors have increased image quality while minimizing the radiation dose to the patient. Modern fluoroscopes use CsI screens and produce noise-limited images, ensuring a minimal radiation dose while still obtaining images of acceptable quality.
Originally posted by BO XIAN
reply to post by qmantoo
I wonder if MRIs would fry them?
What would fry them?
Originally posted by BO XIAN
reply to post by tetra50
I'm happy to trust your assertions.
I've read a lot. I don't know if I have or have not read all your posts. Probably not.
I get weary of the topic. It is more than a little depressing if one focuses on it overmuch.
I'm happy to cheer you on in your posts and research, however.
It's just gotten to be a dreary and sad topic to me. I try to keep some tabs on it but I avoid immersing myself too deeply in it any more.
There's more than sufficient downers in the news in our era, imho.