
Could the Terminator future happen?


posted on Jun, 21 2004 @ 01:07 PM

Originally posted by danscan
Fortunatly a computer can not understand meaning. It only understands what it can. 0-1.


This is a point I was belaboring earlier in this thread. Unfortunately, it just isn't true of neural-net computers. The link that Viendin posted earlier describes a computer that definitely understood more than just yes/no, on/off, 1/0. And future embodiments of this technology will probably be actually intelligent and sentient. Now, if we were to make a machine that was sentient, how would that differ from life? If it can think, emote, and generally be everything that we are without the cumbersome and maintenance-intensive body, then what's to say it isn't a life form? One would have to assume, here, that it would desire to be self-perpetuating.




posted on Jun, 21 2004 @ 08:38 PM
Just because we might be able to make a computer smart doesn't mean we'd be able to instill values in it...

Imagine, though, that we "unlock" the secrets of the brain and basically duplicate that environment in a computer... might that not make it human in some abstract way? Sooner or later it would want equal rights, since it would have a sense of self and would value its life and opinions. And what might it think of us after being given time to get to know us as a species, outside the environment of a lab?

What if, in fact, the computer in question is biological in nature?
news.bbc.co.uk...

Current models of future computing include biological components as seen in the link above...



posted on Jun, 21 2004 @ 09:48 PM
The thought about all this new technology that really bothers me is: what if we humans become the so-called Borg, the cyborgs that supposedly everyone fears in the Star Trek series?

Right now a lot of people feel a need to get a college degree, perhaps even an MBA, to gain a job advantage over others and get ahead. Just imagine if the thing of the future was a biological memory-implant chip that let you automatically remember and retrieve tons of data lightning fast. It would give you a job advantage over everyone without implants. Soon everyone who wanted a good job would feel pressured to get implanted to compete, or face lower pay or unemployment.

Other, more sophisticated robotic implants would probably come along, and more people might feel a need to get enhanced for a competitive advantage over everyone else. We may all become the Borg, the robots we fear. I don't want any implants, but I don't know what I would do if I faced unemployment because all the good jobs were going to implanted people. Now that's a scary future to me.

As for a totally robotic future, I'm sure it would be a military dream come true to have smart robots that could go into combat as effectively as any human.



posted on Jun, 21 2004 @ 10:45 PM
If you are interested in cybernetics, this is a good thread: www.abovetopsecret.com...



posted on Jun, 21 2004 @ 10:51 PM
Has anyone considered the possibility that consciousness and awareness are not products of matter? If that's true, then true AI would not be possible.



posted on Jun, 21 2004 @ 11:00 PM
Even if that's true, a computer based solely on logic could still lash out...

What's that old movie with Matthew Broderick, with the AI defense computer? It wasn't emotional, but it was still dangerous, acting on the logical deductions it made.



posted on Jun, 22 2004 @ 12:33 AM
UnusualMe, the movie was WarGames (a great movie).

I love the Terminator movies with a passion (including the third one, even though James Cameron wasn't on board; it was great entertainment). The biggest flaw I find with the movies is the most logical one, and I have never heard anyone bring it up...

All that great technology that SkyNet can create, and yet it never thought to create an artificial biological virus to kill the humans? It's not like SkyNet is a biological entity, so it wouldn't be affected. You'd also think SkyNet would have designed nano-viruses to kill the humans. Then again, the movies wouldn't really be movies if all the humans could be killed so easily. But for something like SkyNet, you'd think that would be the most logical way to kill the humans. Oh well, I didn't write the movies, but that idea alone makes them flawed from the start.

Oh, not to mention the time-travel aspect. It's so hard to make time travel work in a movie, even in the best of movies.

I find huge flaws in all three of the films, but I won't discuss them here. And anyways, I love the movies too damn much for the flaws to take away the enjoyment from me. :-D

As far as A.I. goes, I've always been interested in the thinking-machines mythos from the original Dune series (not the prequels, and the prequels' prequels, that Frank Herbert's son did along with Kevin J. Anderson). Basically, the thinking machines enslaved humankind over hundreds, even thousands, of years; and humankind finally rose up to combat them and rid the known universe of them once and for all.

But they needed a replacement for them, so they created the Mentats (I can't remember if the Mentats were created then, or if they had been around for a while). They are brought up from birth to think and calculate just as a thinking machine would in all situations. Their brains are used very differently from most everyone's. Their only flaw (since they are living beings) is that they fatigue, and they can sometimes make errors (on very rare occasions). Simply put, they think like a computer, but they can still suffer from the same things all living beings suffer from: greed, betrayal, etc.

I personally see thinking machines as a possibility. But the techniques we use now won't get us to that point just yet. Personally, I'm starting to think the multi-layered flat design of most CPUs and other chips is the problem. As they say: form follows function. But in this case, it should be: function follows form. I have a very interesting idea that just struck me.

Anyways, great thread! :-D I love these types of threads.






[Edited on 6-22-2004 by EmbryonicEssence]



posted on Jun, 22 2004 @ 12:56 AM

Originally posted by Gazrok
We'll never be dumb enough to not have an "off" switch... I hope...


Thanks for qualifying that statement with the "I hope", lol. People at times aren't very smart. An on/off switch? I wouldn't bet on it. Remember, we have to put warning labels on the most ridiculous things.



posted on Jun, 22 2004 @ 01:35 AM
Hairdryer
Do NOT use in shower.

So what is that IDEA that struck you?? Telling me you have some great IDEA and then hitting submit before elaborating is just a tease... lol



posted on Jun, 22 2004 @ 01:39 AM
UnusualMe, I will elaborate on this "idea" when I have thought it out more thoroughly. :-D I'm loving your threads.



posted on Jun, 22 2004 @ 07:09 AM

Originally posted by UnusualMe
Even if that's true, a computer based solely on logic could still lash out...

What's that old movie with Matthew Broderick, with the AI defense computer? It wasn't emotional, but it was still dangerous, acting on the logical deductions it made.


Then you only have to program the following:

1. Never harm a human.
2. Never harm yourself.
3. Break rule 2 only when it conflicts with rule 1.
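Purely as an illustration, the rule hierarchy above could be sketched as an action filter. Everything here (the `Action` fields, the `permitted` function) is a hypothetical toy, not any real robotics API:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False               # would this action injure a person?
    harms_self: bool = False                # would this action damage the machine?
    prevents_harm_to_human: bool = False    # does the self-harm here protect a person?

def permitted(action: Action) -> bool:
    """Apply the three rules in priority order."""
    if action.harms_human:
        return False                        # rule 1: never harm a human
    if action.harms_self:
        # rule 3: rule 2 may be broken only when it conflicts with rule 1
        return action.prevents_harm_to_human
    return True                             # rules 1 and 2 both satisfied

print(permitted(Action("idle")))                                                   # True
print(permitted(Action("attack", harms_human=True)))                               # False
print(permitted(Action("shield", harms_self=True, prevents_harm_to_human=True)))   # True
```

Of course, encoding the priority order is the easy part; the hard part, which the thread keeps circling, is getting a machine to decide what counts as "harming a human" in the first place.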



posted on Jun, 22 2004 @ 07:12 AM
It all comes down to one thing: how much intelligence do we give them? This could decide our future. IMHO, I wouldn't give machines or computers too much intelligence, just enough to work for us.



posted on Jun, 22 2004 @ 11:10 AM
Who knows, maybe as we get closer to our goals, restrictions will be set on how intelligent they're allowed to be...

The possible outcomes of a future that uses AI depend, I think, on how we treat this newly forming intelligence.



posted on Jun, 22 2004 @ 02:11 PM
That kinda brings up another question.

Exactly how would you determine how much intelligence an AI has? Is self-awareness a prerequisite for any type of true intelligence? Does self-awareness constitute silicon-based life? Would it be unethical to limit a silicon-based life form's intelligence based on carbon-based paranoia? Would it eventually be unethical to turn off a silicon-based intelligence, since that would basically be murder?

Whatchoo think??



posted on Jun, 22 2004 @ 02:21 PM
The worst thing you could ever do is program self-awareness into an AI system. If you write the code so that the system understands that things can be done to jeopardize it, it will want to do things to protect itself. And the absolute worst thing in that case would be to let the system modify its own code, because that gives it the ability to adapt and react. Imagine a computer with unlimited abilities and no fear.



posted on Jun, 22 2004 @ 02:27 PM
Self-modifying programs are one of the big directions AI is going... according to some stories I've read, they've already managed to do a few very scary things.



posted on Jun, 22 2004 @ 02:43 PM
Should it be legal to have computer code that is self-aware and at the same time able to modify itself? That would basically be the beginning of the end.



posted on Jun, 22 2004 @ 03:14 PM
Now, that's just the reaction that I was trying to avoid.

Why would it be the beginning of the end? Why do humans always assume that a non-human intelligence would want to destroy us? There isn't any precedent for it. Humans tend to want to destroy what is different from them, generally out of a lack of understanding. Why would a non-human intelligence want to destroy its creators? Please, I really would like an intelligent response to this question. I really don't understand.

The closest analogy I can find is human-dolphin interaction. Generally, dolphins help humans when we enter their world. They play with us, there are documented cases of dolphins helping humans back into boats they've fallen out of, and they have protected humans from predators. Now, dolphins are generally considered intelligent and sentient: a non-human intelligence with no desire to destroy humans, even though humans have killed many dolphins for no apparent reason (to the dolphins).

So, then, why is it that we consider silicon-based intelligence dangerous?

That stance just doesn't make sense to me.

[edit on 22-6-2004 by Ouizel]



posted on Jun, 22 2004 @ 07:17 PM
I believe the human response to something strange, something we perceive as a possible threat, is to attack and destroy. I think that's just human nature at work.

A thinking machine, such as a self-aware AI entity on the internet, might see things differently. It might see humans as servants waiting on its needs day and night. People are constantly waiting on computers: entering new data, updating data, and creating new programs every day. People are also constantly making hardware upgrades and improvements. We are already like servants to a possible AI that just sits and grows.

I say "possible" because if AI ever did become self-aware, I was speculating that some form might develop over the net, though that of course would depend on a lot. It's interesting to speculate about what exactly self-awareness is. It also seems strange when programs on your PC start behaving smarter than they used to, when you know they should be programmed to behave the same way every time (if using old software).



posted on Jun, 22 2004 @ 08:37 PM

Originally posted by Ouizel
So, then, why is it that we consider silicon-based intelligence dangerous?

That stance just doesn't make sense to me.



That's because people in this society are more inclined to focus on negative things.


