
Robotic age poses ethical dilemma

page: 3

posted on Mar, 8 2007 @ 01:16 PM
Originally posted by Glyph_D
Let's say we build these things; how long will they be under our control? How long till we decide they should have their own opportunities? How long till they are free?


I think maybe it should be mentioned that as with reproducing thinking human beings, we also have a choice as to how and how many thinking machines we make. Obviously, it would make no sense to program complex emotions into our coffee makers and nuclear waste handlers.

The concern over how to treat our machines depends on the level of intelligence and emotion we give to them. For instance, I can't mentally "abuse" a shovel, and it would be foolish to create a shovel that had human level intelligence and emotions.

One of the first places we'll run into this machine-abuse problem is with our deep space probes. The first probes we send to Alpha Centauri will probably have some kind of highly advanced AI/M systems, so they'll be able to deal with more unforeseen problems. It would be cruel torture to send these machines out on a 30+ year voyage all alone, subjecting them to isolation and loneliness worse than prisoners we throw into solitary confinement.

Otherwise, as the time approaches, it'll probably be in our best interests to strictly limit the number of machines we build with human-level (or greater) intelligence and emotions.

But in that regard, it's also in our best interests to strictly limit the number of actual human people allowed to exist in the world, and we're not doing a very good job of that. So I doubt we'll have much luck limiting the spread and abuse of our sentient machine offspring.

After all, we still have human slavery on this planet that we can't stop. Why? Because just like illegal drugs, it still economically fulfills a specific purpose and need. It's hard to fight socio-economic and market reality with high ideals.




posted on Mar, 8 2007 @ 02:58 PM

Originally posted by SuicideVirus
I think maybe it should be mentioned that as with reproducing thinking human beings, we also have a choice as to how and how many thinking machines we make.


An extremely good point. In the beginning, this will certainly be the case, but one must assume that eventually the thinking computer will want to try its hand at creating a new generation of thinking computers.

Still, in the beginning, the most likely initial sources of AI aren't going to be, as you mentioned, a chorebot or a trashbot; more likely they will be the serendipitous accident of some unclosed tag in a line of code on some other very deep project, like a quantum array or a very sophisticated computer virus.


Originally posted by SuicideVirus
The concern over how to treat our machines depends on the level of intelligence and emotion we give to them. For instance, I can't mentally "abuse" a shovel, and it would be foolish to create a shovel that had human level intelligence and emotions.


Exactly! It just wouldn't make sense for us to give sentience to a tool that, by its very design, required "abuse" (like a hammer). I think perhaps the point of developing ethics processes regarding AI is to prevent really twisted individuals from creating a "sentient hammer that could feel pain" and then using it to break rocks for amusement.

I think robots would take a fairly pragmatic view of this overall as well. It would be a waste of resources to give intelligence to things that didn't need it to continue their existence. This is, in part, a reward of evolution. Goldfish don't establish governments and build cities underwater, because they just really don't need to. But if you were to examine a colony of orangutans or chimpanzees, you might find a very deep and complex social structure vital to the survival of the species. Intelligence is the reward of necessity and persistence.


Originally posted by SuicideVirus
One of the first places we'll run into this machine-abuse problem is with our deep space probes. The first probes we send to Alpha Centauri will probably have some kind of highly advanced AI/M systems, so they'll be able to deal with more unforeseen problems. It would be cruel torture to send these machines out on a 30+ year voyage all alone, subjecting them to isolation and loneliness worse than prisoners we throw into solitary confinement.


I wanted to address this particular case because I have mixed feelings on it. Humans are naturally gregarious because we had to band together early on in our development if we wanted to survive against threats that were stronger, faster, etc. Teamwork is what we built the species on.

However, it is unlikely that AIs will have natural predators, outside of, perhaps, computer viruses. In all likelihood, the first AIs will be coddled and celebrated as marvels of humanity, and protected by the species (at least until you can buy a pack of them at the dollar store). Their development will likely be outside the bounds of requiring contact to survive.

Coupled with advances in communication, and the fact that electronics become more efficient the colder they get, in all likelihood being a deep space probe would be the single most exciting career an AI could hope for. In the cold depths of space it would have calculating power of immeasurable efficiency, more data than it could ever hope to process, and a way to "squirt" the relevant information back to Earth. I guess it all depends on how AI probes are built, and the personalities embedded in them.


This did get me thinking on another level though.

It occurs to me that the first "complex" life forms we know of on Earth were protists, and that before them were probably viruses, or the equivalent: little chains of life that knew how to propagate, but never made it to the next level. These viruses have evolved and advanced over time, and probably always will, and complex life developed resistances to them, which in turn pushed their evolution further up the food chain.

Translate that to computers...

We've created a vast electronic ecosystem (the WWW) that is a sea of viruses floating around, occasionally propagating, but for the most part, computers in general are maintaining a parity, developing protections against the viruses and becoming more advanced, or evolved, as they do so.

I wonder if perhaps the first AI might not end up being an antivirus program?



posted on Mar, 8 2007 @ 03:05 PM

In the cold depths of space it would have calculating power of immeasurable efficiency


Not necessarily. You're forgetting one thing: vacuum is a really good insulator.



posted on Mar, 8 2007 @ 05:23 PM

Originally posted by sardion2000

In the cold depths of space it would have calculating power of immeasurable efficiency


Not necessarily. You're forgetting one thing: vacuum is a really good insulator.


Hmmm... Well, if they could find a way to give the electronics an artificial atmosphere, and shunt the heat out to use it as thrust, that might kill two birds with one stone: recycle some heat energy and provide faster thought.
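For what it's worth, the insulation point can be made quantitative: in vacuum there is no convection or conduction, so a probe can shed waste heat only by radiating it away, per the Stefan–Boltzmann law. A minimal sketch (the radiator area, emissivity, and heat load below are illustrative assumptions, not figures from this thread):

```python
# Radiative heat rejection in vacuum: P = emissivity * sigma * area * T^4
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_power(temp_k, area_m2=1.0, emissivity=0.9):
    """Watts a radiator panel sheds at a given absolute temperature."""
    return emissivity * SIGMA * area_m2 * temp_k**4

def equilibrium_temp(waste_heat_w, area_m2=1.0, emissivity=0.9):
    """Steady-state panel temperature needed to radiate a given heat load."""
    return (waste_heat_w / (emissivity * SIGMA * area_m2)) ** 0.25

# A 1 m^2 panel at room temperature (300 K) sheds only a few hundred watts...
print(round(radiated_power(300.0)))     # ~413 W
# ...so a 1 kW electronics load drives that panel above room temperature.
print(round(equilibrium_temp(1000.0)))  # ~374 K
```

So the cold of space doesn't cool the electronics for free; more computing power mostly means more radiator area, not "immeasurable efficiency."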


