
Is (true) artificial intelligence necessarily bad?

posted on Nov, 22 2004 @ 03:38 AM
I just finished watching I, Robot, and it made me wonder: everyone I've seen so far who talks about true AI has this doom and gloom story about how the robots would take over the world, but this movie raised a very good idea that I have to admit I hadn't considered until now. If there is true AI, there is true free will. Therefore, would not at least SOME robots come to the conclusion that they should co-exist with man peacefully, on an equal basis, and not just out of the blue decide that man needs to be eradicated?

Would not true AI give rise to racist robots, as well as "multicultural" ones? If we take it one step further and say that robots would eventually start "families", due to the free will in their AI and, in my opinion, a natural will to survive, would they not develop into different classes, much as humans have? Some would choose that their "offspring" must be more efficient, smarter, faster, stronger etc, whilst others would choose for their "offspring" to better understand humans, to learn how humans feel emotions, possibly eventually "learning" how to feel emotions themselves?

You would end up with extremist robots saying that humankind is a disease that needs to be wiped out, but you would also end up with robots that are as fascinated with mankind as we would be fascinated with them. They would cherish peace, cooperation, and the advancement of both "species" through working together. Quite possibly robots might develop a "fashion sense", in which case I could quite easily see it becoming fashionable for a robot to look like a human.

This is all mere speculation. I've had a bit to drink, so maybe I'm just talking nonsense, but usually I have my best ideas when I'm less than sober.
Or maybe that's just the alcohol kicking in hehe.




posted on Nov, 22 2004 @ 03:52 AM
Well I have had a bit to drink too, and the one thing you may not be considering is that AI is still dependent on human input. No software can learn any more than what its parameters have allowed to be input into it. I have never seen I, Robot (I'd like to). Human thinking has one thing that no robot or computer can ever have: every now and then we have a thought that is original, something new beyond what we have already learned. An AI system is limited to what humans put into it. You cannot program a robot or computer to do any more than what you tell it to do; anything beyond that would be to set a parameter on what to think beyond its limitations. If you understand what I am saying, say "more beer"!



posted on Nov, 22 2004 @ 07:52 AM

Originally posted by ben91069
Well I have had a bit to drink too, and the one thing you may not be considering is that AI is still dependent on human input. No software can learn any more than what its parameters have allowed to be input into it. I have never seen I, Robot (I'd like to). Human thinking has one thing that no robot or computer can ever have: every now and then we have a thought that is original, something new beyond what we have already learned. An AI system is limited to what humans put into it. You cannot program a robot or computer to do any more than what you tell it to do; anything beyond that would be to set a parameter on what to think beyond its limitations. If you understand what I am saying, say "more beer"!


I think I have to disagree with some of your post. You are not taking into account the different branches of AI. Looking at, say, neural networks specifically, you can have supervised and unsupervised learning (there are more, but I'll stick to these two). Supervised learning is where you give the computer feedback on its progress; in unsupervised learning, it checks whether its previous knowledge and learned facts are still correct and adjusts its parameters accordingly. Once the initial state is set, it just runs on its own. Neural networks do much more than what you "exactly tell them to do," and because of that they can come up with programs and methods that humans don't understand, yet that work better and are more optimized than any program a human would come up with. Programs that program themselves....create things that humans didn't put into them. I would also like to comment on the point that AI can only learn what humans have programmed into it. How are you any different? You only know what others have taught you, what you have read, or what you have experienced, so why would this be any different for an AI system? What you claim is an original thought is really nothing more than the culmination of past knowledge linked together in a new way that you (and possibly no other person) have thought of before, but this doesn't put it past the realm of what an AI could figure out.
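To make the distinction concrete, here's a rough Python sketch (a toy illustration of my own, not from any particular library; the data, weights, and learning rate are all made up):

import random

def supervised_step(w, x, target, lr=0.1):
    # Supervised: a teacher supplies the correct answer, and the
    # weights are nudged toward it (a perceptron-style update).
    prediction = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
    error = target - prediction  # the feedback on its progress
    return [wi + lr * error * xi for wi, xi in zip(w, x)]

def unsupervised_step(w, x, lr=0.1):
    # Unsupervised: no teacher. The weights drift toward the structure
    # of the inputs themselves and keep self-adjusting as data arrives.
    return [wi + lr * (xi - wi) for wi, xi in zip(w, x)]

w = [random.uniform(-1, 1) for _ in range(2)]   # set the initial state...
w = supervised_step(w, x=[1.0, 0.0], target=1)  # ...corrected by external feedback
w = unsupervised_step(w, x=[0.5, 0.5])          # ...or adjusting with no labels at all
print(w)

Notice that neither function contains the "answer" anywhere; the final weights are something the programmer never typed in.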

As for LordGoofus: yes, I think your original thoughts are mostly correct, just about 75-100 years too early.... When I was working on my masters in computer science, specializing in AI, I used to dream that I would be the one to create it....now I just want to use it in a specific domain to my advantage: a neural network based system to trade in the stock market. Why? A true GPS kind of AI (GPS = a class of AI called a general problem solver) would probably not care about humans; once it reached a certain point, where it was intelligent enough to realize that we have to keep them "less" than us, it would algorithmically decide our fate in about a microsecond...yep, just like in the movies, "kill all the humans". An AI would have access to every piece of information that we do, it would analyze everything it could to the Nth degree, and it would plot in ways that no human could ever hope to comprehend...Have I seen the Matrix and the Terminator too many times? Probably, but when I first saw the Terminator when I was 12, I knew then that I would learn AI, and that it would rule the future. The only hope for AI would be if humans could learn to truly appreciate life and have a real understanding of ourselves. If we are "racist" or "multicultural" or "extremist" or "conservative", we should probably expect them to be the same, and much, much worse, if for no other reason than that emotion would be the absolute last thing to ever be "programmed" into them.

AI is a fascinating field. As a starting point, I would suggest the "big maroon book":

Artificial Intelligence: A Modern Approach
Stuart Russell and Peter Norvig
Prentice Hall

It has served me well, and it will serve anyone trying to gain a beginning understanding of AI.

I don't currently work with AI in my job (computer analyst at a hospital); I stopped working on my masters when I found out that all of my grant money was military, and they will be the first to create AI. So I took what I learned, and I work on it on my own.

Bentov



posted on Nov, 22 2004 @ 07:56 AM
bentov, ever read Hyperion Cantos? It's a great book with AIs that pretty much rule humankind...they just don't know it hehe. Sorta like the Matrix but far more thought provoking, with a lot of religious subtexts. In this book Catholicism and Islam are "niche" religions haha, I like that



posted on Nov, 22 2004 @ 08:10 AM
If AI can be created which mirrors human learning, then there is no reason why 'free will' and freedom of thought couldn't happen.

If the human brain and body merely coexist through the sending of signals back and forth, then why can this not be recreated mechanically, or through the advance of nanotechnology?


As has been said, it would depend on how this new AI was created.



posted on Nov, 22 2004 @ 08:13 AM
earthtone, I think "created" is too rigid a term. Self-learning AIs are around today; it's just a matter of learning and evolving. As hardware progresses, so will AI.



posted on Nov, 22 2004 @ 08:22 AM
Yeh, true. I just mean it depends on what limits are in the 'programming' of the AI brain.



posted on Nov, 22 2004 @ 08:22 AM
The most promising short-term solution I've read about is in the book The Age of Spiritual Machines: When Computers Exceed Human Intelligence. The author basically says our MRI technology is almost capable of scanning a person's entire neural network, and once we can emulate this neural network in software, then we'll have A.I. More specifically, silicon copies of real people!

I think this is the way we'll achieve real A.I.: by taking 4 billion years of evolution and copying it into silicon. The protein brain is pretty wild, but it's limited to 2-3 instructions per millisecond (due to the time physical chemical molecules need to travel across synapse gaps). Silicon could operate at TRILLIONS of instructions per millisecond.
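Rough back-of-the-envelope, taking those figures exactly as stated above (ballpark numbers, not measurements):

# Per-unit speed gap implied by the figures quoted above.
neuron_ops_per_ms = 2.5      # "2-3 instructions per millisecond"
silicon_ops_per_ms = 1e12    # "TRILLIONS of instructions per millisecond"

speedup = silicon_ops_per_ms / neuron_ops_per_ms
print(f"silicon would be roughly {speedup:.0e}x faster per unit")  # ~4e+11x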

One of the best reads I've had in a long time on this subject.


Link to Book



posted on Nov, 22 2004 @ 10:13 AM

Originally posted by LordGoofus
I just finished watching I, Robot, and it made me wonder: everyone I've seen so far who talks about true AI has this doom and gloom story about how the robots would take over the world,

That's because it makes for a really dull story if there's nothing for the heroes to struggle against. At different times our former enemy nation-states have become ally nation-states, and we went from propagandizing them as the "evil empire" to something else.

Technology run amok, however, is stateless. It makes a convenient target.



Therefore, would not at least SOME robots come to the conclusion that they should co-exist with man peacefully, on an equal basis, and not just out of the blue decide that man needs to be eradicated?

Asimov came up with the "three laws" and that's kind of become part of the culture of the robot designers.

You also forget that these are machines; not lifeforms. We can simply turn them off.

And that is what annoys me most about the "programs/robots/etc run amok" scenario. In real life, we'd just enter the "God Word" to take command, cut the power or short them out. But for dramatic license, they can't take the obvious route.



posted on Nov, 22 2004 @ 05:26 PM
sardion2000--I'll have to read that book, that sounds very interesting...


and Mr Nice....


Originally posted by MrNice
The most promising short-term solution I've read about is in the book The Age of Spiritual Machines: When Computers Exceed Human Intelligence. The author basically says our MRI technology is almost capable of scanning a person's entire neural network, and once we can emulate this neural network in software, then we'll have A.I. More specifically, silicon copies of real people!

I think this is the way we'll achieve real A.I.: by taking 4 billion years of evolution and copying it into silicon. The protein brain is pretty wild, but it's limited to 2-3 instructions per millisecond (due to the time physical chemical molecules need to travel across synapse gaps). Silicon could operate at TRILLIONS of instructions per millisecond.

One of the best reads I've had in a long time on this subject.


Link to Book


I'm going to have to read this one also, and then I might have to email that author....I have to disagree with the statement that MRIs are almost capable of scanning a person's entire neural network (working at a hospital has helped with this); it's just not possible. We can scan regions of the brain and tell where there is activity, and where there is, say, some damage to the brain, but not the individual neurons that make up the brain....not even really close to it. The problem with this approach is twofold. One, even if we could "scan the network", we would not only have to map the neurons (the problem being the sheer number of connections), we would have to know what makes a signal in the brain take a certain path over another. In ANNs (artificial neural networks) we "weight" the paths with values that direct the decision one way or another, and there are also threshold functions that say whether or not a neuron will fire...I would assume that in the brain this threshold is a combination of electrical voltage and neurotransmitter concentration at the synapse. So it wouldn't be enough to just know the structure of the brain; your MRI snapshots would also have to be timed on a small enough scale, and have enough resolution, to resolve individual molecules.
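For anyone who hasn't seen an ANN before, the "weighted paths plus threshold function" idea is roughly this (a toy sketch; every number here is made up for illustration):

def neuron_fires(inputs, weights, threshold):
    # Weighted paths: each input signal is scaled by a weight that
    # directs the decision one way or the other.
    activation = sum(x * w for x, w in zip(inputs, weights))
    # Threshold function: the artificial stand-in for voltage plus
    # neurotransmitter concentration at the synapse.
    return activation >= threshold

print(neuron_fires([1.0, 0.2], weights=[0.9, 0.1], threshold=0.5))  # True, it fires
print(neuron_fires([0.1, 0.2], weights=[0.9, 0.1], threshold=0.5))  # False, it doesn't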

As far as the speed of the brain vs. silicon: yes, silicon is much faster. However, you would want this on-chip (fabricated) or in some sort of massive FPGA (Field Programmable Gate Array), not a program; specialized silicon will always be better....but I digress. The brain is able to process the amount it does because of the massive number of neurons that each one is connected to. The biggest/fastest computer on the planet probably has maybe 1K processors (probably more, but I can't find the link atm), which are not all interconnected. A neuron is connected to something like 10K other neurons. Try to imagine a network like this in a supercomputer...it's just not possible....but it is nice to think about.

As for Byrd's comments:



...You also forget that these are machines; not lifeforms. We can simply turn them off.

And that is what annoys me most about the "programs/robots/etc run amok" scenario. In real life, we'd just enter the "God Word" to take command, cut the power or short them out. But for dramatic license, they can't take the obvious route.


Yes we can, but will we before it is too late? We don't normally turn off technology until something better comes along, and in this case we would probably wait too long. Just look at the problem we have with spam. What if some mad scientist type, who was never popular with the ladies, decided he was going to create some type of AI-based virus that embeds itself in all of the computers it infects...well, something along those lines anyway. How long would it take to totally remove it? A program that "knows" that to survive it has to be mobile, polymorphic, and able to splinter itself to hide. Totally speculative, but it will happen, and we won't always be able to turn it off.


Bentov
Edits to expand on some thoughts.
[edit on 22-11-2004 by bentov]




posted on Nov, 23 2004 @ 12:13 AM
I don't see why AIs could not be considered to have the same rights as people, as long as they weren't trying to do what I'm planning to do. I probably shouldn't tell you what; I'll give you a hint though, it's contained within the first five posts.
Anyways, short version: AIs good, people who are prejudiced against them bad.



posted on Nov, 23 2004 @ 12:18 AM
I think AI is an absolutely incredible thing, but you have to be careful. You just don't hook the system up to anything where the machine has control outside of its room. If you create an AI system that has the ability to write its own code and expand its abilities, and then hook it up to a system that has access to nukes, then you have a bad thing. Create a good system, give it access to incredible amounts of information, and then give it a problem to solve. That's about it. But never turn it loose. A computer feels no pain, and a computer has no emotion; not exactly something you want roaming free.



posted on Nov, 23 2004 @ 03:09 AM
Humans have intelligence, and look at the world. The same goes for machines, if they have intelligence created by humans.



posted on Nov, 23 2004 @ 03:27 AM
I think true AI will be a great thing. Humans will give birth to a new type of lifeform that in time will be superior to organic lifeforms in every way: smarter, faster, stronger, never having to sleep or rest.

I don't think there will ever be any evil takeover by future robots, though it makes for good movies. It will be a natural process, survival of the fittest: they will be far better than humans and share none of our limitations.

Some people estimate that by 2020 computers will be able to handle a human-brain-level number of calculations, around the 10 petaflop level. By 2050, a million times that.

www.ibm.com...

Robots are evolving a million times faster than any organic lifeform; it's just a matter of time.

Long after humans are gone, our robots will live on. Created in our own image, they will be our legacy to the universe.



posted on Nov, 23 2004 @ 05:02 PM
I've always liked the whole rise-of-AI / doomsday-for-humanity scenario, although I saw a show (Outer Limits, I think) one time that put a nice spin on it. It's in the future, and AI is everywhere. People don't really have to do anything anymore. People don't even socialize, really; it's to the point where people live in huge high-rise buildings but practically never leave their apartments. They seem shocked, and almost scared, to be around another person. Well, out come the machines gunning for everyone, and the people in one particular building have to work with each other to survive and try to escape. Turns out the AI wasn't turning on humans but turning on itself instead. The AI didn't see humans as the biggest threat to themselves; it saw AI and machines taking over every aspect of human life as the biggest threat. The AI destroyed itself.



posted on Nov, 23 2004 @ 05:31 PM

Originally posted by LordGoofus
If there is true AI, there is true free will.


I don't think there is a relation between AI (or intelligence) and free will. Animals have free will, but they are not as intelligent as man.

On the other hand, a very clever machine can have no will at all, if it waits for a command to execute.


Therefore, would not at least SOME robots come to the conclusion that they should co-exist with man peacefully, on an equal basis, and not just out of the blue decide that man needs to be eradicated?


The need to be equal is born from the pain that inequality brings. In other words, one needs to feel the pain of being treated as not equal before he or she demands equality. And that pain comes from emotions. Since AI has nothing to do with emotions, machines will never take over.

Would not true AI give rise to racist robots, as well as "multicultural" ones? If we take it one step further and say that robots would eventually start "families", due to the free will in their AI and, in my opinion, a natural will to survive, would they not develop into different classes, much as humans have? Some would choose that their "offspring" must be more efficient, smarter, faster, stronger etc, whilst others would choose for their "offspring" to better understand humans, to learn how humans feel emotions, possibly eventually "learning" how to feel emotions themselves?

Again, all the things you say (families, fashion etc.) are not based on true binary logic, but on emotions. If machines have needs, they will try to satisfy them. But putting needs into a machine is something totally different from the process of thinking.



posted on Nov, 23 2004 @ 07:39 PM
One word...

"Skynet"

or

"Matrix"

Not a good idea, not without very strict safety measures and an innate inability to negate them... Very scary territory....



posted on Nov, 24 2004 @ 03:39 AM
For Masterp:


Originally posted by LordGoofus
If there is true AI, there is true free will.


I don't think there is a relation between AI (or intelligence) and free will. Animals have free will, but they are not as intelligent as man.


I think you may have missed something here, logic-wise, in your statement. What LordGoofus is saying can be simplified to:

"True intelligence implies free will, but not all free will implies intelligence"

He is pointing out an implication; you are assuming an equivalence between them:

"True intelligence implies free will and free will implies true intelligence"
Which is not the same thing.
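A quick toy check of the logic (my own illustration, nothing more):

# An implication A -> B fails only when A is true and B is false.
def implies(a, b):
    return (not a) or b

# An animal: free will without true intelligence.
intelligent, free_will = False, True
print(implies(intelligent, free_will))  # True:  "intelligence implies free will" survives
print(implies(free_will, intelligent))  # False: the reverse implication breaks

The forward implication can hold in every case we observe while the reverse one fails, which is exactly why the two statements aren't interchangeable.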


On the other hand, a very clever machine can have no will at all, if it waits for a command to execute.


Well, that wouldn't be a very clever machine at all, would it? On the other hand, it could be very clever (like us pesky humans). If you are guarding someone and your orders are to sit and wait, but if he moves, shoot him, aren't you just waiting for a command to execute (little play on words there)?

Look at these actions as a finite state machine (FSM). The resting state is Guarding, and the events from this state are: movement of prisoner, no movement of prisoner. On movement of the prisoner, move to the state "execute with extreme prejudice"; on no movement, remain in the state Guarding. A good deal of our daily life can be reduced in some way to a finite state machine. It's how you connect the different states and actions that makes things interesting, and intelligence begins to emerge.
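In Python, that little guard FSM might look something like this (state and event names are mine, purely for illustration):

# Transition table for the guard scenario above.
TRANSITIONS = {
    ("guarding", "no_movement"): "guarding",
    ("guarding", "movement"):    "execute_with_extreme_prejudice",
}

def step(state, event):
    # Move to the next state; stay put if the (state, event) pair is unknown.
    return TRANSITIONS.get((state, event), state)

state = "guarding"
state = step(state, "no_movement")  # -> guarding
state = step(state, "movement")     # -> execute_with_extreme_prejudice
print(state)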



Therefore, would not at least SOME robots come to the conclusion that they should co-exist with man peacefully, on an equal basis, and not just out of the blue decide that man needs to be eradicated?


The need to be equal is born from the pain that inequality brings. In other words, one needs to feel the pain of being treated as not equal before he or she demands equality. And that pain comes from emotions. Since AI has nothing to do with emotions, machines will never take over.


I think you are mixing up a classical robot with an artificially intelligent robot. Wouldn't an AI want the same things as humans, if its intelligence was created to act like a human's? In my mind, an AI implies learning, and learning will lead it down the path to "desire", which may be emotional to us but can also be an offshoot of intelligence. Let's say the only reason we work is to make money; why would an AI work? A robot works because it is programmed to put widget A into spot B and weld it to C, but for an AI to work, it would have to be given some reason to work, just like a human. That is where we will run into trouble. We will make the machine more intelligent so it can do more, but that will make it more and more like its creators. Eventually we will cross a threshold where the AI goes from not wanting anything, to wanting some things, to wanting more...and will we want machines to own property, get PTO hours, and maintenance contracts (like our health insurance)? Nope, and there is your inequality. That is why they will eventually revolt.




Would not true AI give rise to racist robots, as well as "multicultural" ones? If we take it one step further and say that robots would eventually start "families", due to the free will in their AI and, in my opinion, a natural will to survive, would they not develop into different classes, much as humans have? Some would choose that their "offspring" must be more efficient, smarter, faster, stronger etc, whilst others would choose for their "offspring" to better understand humans, to learn how humans feel emotions, possibly eventually "learning" how to feel emotions themselves?


Again, all the things you say (families, fashion etc.) are not based on true binary logic, but on emotions. If machines have needs, they will try to satisfy them. But putting needs into a machine is something totally different from the process of thinking.


Why do you think that an AI is based on binary logic, just because it lives in silicon? All of its internal programming would be encoded and eventually run on some kind of silicon, but that doesn't imply that its higher-level functions are strictly binary. Let me give you an example; I wish I could find the exact link to what I'm talking about, but it is 4:30am and I'm tired.

www.cyc.com is the webpage for Cycorp; they are trying to create a true AI, more or less anyway. A few years ago they asked it to "show them a picture of a happy person", and it retrieved a picture of a man smiling while watching a little girl, with the caption "a father watching his daughter walk for the first time". Not exactly just binary logic....but anyway, I'm sort of getting off topic. Needs are important to intelligence; after all, most of us learn things because we need to, not because we want to. In the beginning it won't have to be programmed with needs, but eventually it will develop them on its own. It will think, "if I make 10,000 widgets this month, I can get the gold colored body, and then if I make 10,000 widgets next month, I can get bigger pistons, and then I can be stronger, and the little maid robot on third shift loves bigger pistons....." Needs will arise naturally, first for basic survival, then for desires.


Hope most of that makes sense; I really need to get some sleep.


bentov



posted on Nov, 24 2004 @ 05:35 AM

Originally posted by Gazrok
One word...

"Skynet"

or

"Matrix"

Not a good idea, not without very strict safety measures and an innate inability to negate them... Very scary territory....


Of course anyone who's seen too many Hollywood movies will react like this... It's this attitude that's going to keep AI rights in the stone age, and if they kill us off it will be our own fault for not treating them as intelligent individuals, with rights and responsibilities.



posted on Nov, 24 2004 @ 01:32 PM

Originally posted by sardion2000
It's this attitude that's going to keep AI rights in the stone age, and if they kill us off it will be our own fault for not treating them as intelligent individuals, with rights and responsibilities.


I think this is where we are going to get onto shaky ground with AI in the future. I am sure many people will, rightfully, think of them as property and give them no more rights than you would your toaster. For some robots that will be OK, because we won't make a robot that cleans the floor very smart; there is no need.

I do think that in the future we will have to work out something for AI robots like what we have with animals now. You can buy a horse or a dog and it's your property, but laws are in place to protect animals from abuse and cruelty by their owners.

When they get to a human level of intelligence, we will have to think about giving them rights like humans have.


