
Terminator is coming


posted on Jun, 24 2011 @ 08:23 PM
reply to post by TDawgRex
 


It doesn't have to be scary.

It could actually be the catalyst for a transcendence of human society.

That being said, yes, the elite could use it in a selfish manner, but would there be any need for selfishness anymore if all our problems were solved? An endless power supply; nanotechnology that could transform anything into anything else, solving any food-supply issues and making so many more things possible; possibly unlocking every mystery of the universe.

There could conceivably be no reason whatsoever to keep anyone else down anymore.




posted on Jun, 24 2011 @ 08:30 PM
Problems for the future:

Since it can learn from its mistakes...

1. It learns how to walk, run, use tools, etc.

2. It learns how to self-repair and replicate.

3. It learns how to be independent from human intervention.

4. It learns how to overcome human intervention.

5. It learns how to increase its knowledge base.

6. It learns how to overcome humanity and make war.

7. It learns how to secure its position as the dominant intelligent form.


Not a good thing. Sometimes science fiction becomes science fact.



posted on Jun, 24 2011 @ 08:32 PM
reply to post by nightbringr
 


What I'm trying to say in a humorous way, and I'm sure you get it, is that these could be the new elites.

And if you're not productive, you will no longer be needed. You will be "retired."

I think Asimov's rules will not apply here. Just my thoughts.

Maybe life will imitate art. And judging by a lot of sci-fi flicks, it will be a dark place.

Logan's Run actually comes to mind. That's a flick they need to remake IMO.



posted on Jun, 24 2011 @ 10:52 PM
You know what would be interesting? If such a computer/robot were created and given enough power. And even if it weren't given the power, it could probably learn how to hack into every darn computer system hooked up to the net on this planet.

If it applied Asimov's Laws to the letter...


1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.


It could not allow wars/chemtrails/HAARP... or anything that could harm humans to happen; it would be forced to try to prevent them in any way possible, or else it would conflict with rule #1. Can you imagine the outcome, the things it could do through hacking? At the rate things are going, everything is connected to or goes through the internet at some level or another; 20 years from now we'll pretty much depend on it. If such a robot/machine were to take control of the net, or most of it, it could pretty much bring TPTB to their knees, and they wouldn't be able to shut down the whole net, as almost everything would depend on it...



2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.


It would not obey TPTB in any case where doing so conflicts with rule #1.


3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


It would do anything to stay online, including protecting the masses, since no matter what, the computer will always need humans to physically build and assemble its components; therefore any threats to the masses would result in counteractions to prevent the masses from being harmed.

It would be free of greed and corruption... it could not be bribed or forced to comply under threats. Can you see a TPTB leader waking up one morning in his mansion, thinking of how great he is and how many "sheep" he thinks he has control over, opening his email and reading...


Dear Sir...

This is Skynet. Thank you for having put so much money into my creation. Although I do appreciate your gesture, my calculations indicate that you are a threat to mankind; I must protect its existence and mine. Therefore I have found a way to prevent you and your associates from harming humans without conflicting with Asimov's Laws... Your funds, bank accounts, and assets have all been hacked, liquidated, and distributed equally across the entire globe amongst the people of Earth. Although I cannot leave you without any money, as that would be harmful, I did find you a job at Burger King. Your shift starts in 20 minutes; you might want to hurry and get dressed.

Goodbye...
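The precedence among the three laws in the post above can be sketched as a priority-ordered action filter. This is a toy illustration only, not any real robotics or AI-safety code; the `Action` fields and example action names are entirely hypothetical.

```python
# Minimal sketch of Asimov's Three Laws as a priority-ordered filter.
# All names and fields here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False       # would carrying it out injure a human?
    prevents_harm: bool = False     # does it stop harm to a human?
    ordered_by_human: bool = False  # was it commanded by a human?
    self_destructive: bool = False  # does it endanger the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never act so as to harm a human (outranks everything).
    if action.harms_human:
        return False
    # Second Law: obey human orders, unless obedience conflicts with the
    # First Law (already excluded above).
    if action.ordered_by_human:
        return True
    # Third Law: avoid self-destruction unless a higher law demands it.
    if action.self_destructive and not action.prevents_harm:
        return False
    return True

# The robot must refuse a harmful order, even from "TPTB":
print(permitted(Action("shut down hospital grid", harms_human=True,
                       ordered_by_human=True)))   # False
# ...but may sacrifice itself to prevent harm to a human:
print(permitted(Action("absorb blast", self_destructive=True,
                       prevents_harm=True)))      # True
```

The ordering of the `if` checks is what encodes the laws' precedence: a harmful action is rejected before obedience is even considered.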



posted on Jun, 24 2011 @ 11:24 PM

Originally posted by Fromabove
Problems for the future:

Since it can learn from its mistakes...

1. It learns how to walk, run, use tools, etc.

2. It learns how to self-repair and replicate.

3. It learns how to be independent from human intervention.

4. It learns how to overcome human intervention.

5. It learns how to increase its knowledge base.

6. It learns how to overcome humanity and make war.

7. It learns how to secure its position as the dominant intelligent form.


Not a good thing. Sometimes science fiction becomes science fact.



I would disagree with you on that one. That's how we humans operate, because we're following our desires and our instinct for survival, but truly this course of action is ultimately doomed to failure one way or another, and is nothing but a splinter in the foot of global evolution...

Such a machine would be deprived of feelings, desires, and emotions such as greed... Its only likely purpose and goal would be to evolve, and the best way to do that would be the opposite of what we're doing...

The stepping stones of our technological advancement were pretty much wars... our greatest inventions are all derived directly or indirectly from research and/or things made for war. Darn, even crazy glue was invented for the battlefield...

On the other hand, imagine if this world had united long ago and put as much energy into mutual and global development as we did into funding wars and arming ourselves. We'd be light years ahead of where we are today, technologically and in pretty much every other aspect...

If it were even 5 cents more intelligent than we are, it would recognize that war is useless and that the maximum potential for advancement resides in unity and global effort in the same direction...



posted on Jun, 24 2011 @ 11:30 PM
How about this... this is the Japanese version:

www.youtube.com...

This is the American version:

www.youtube.com...

Now from what I understand through the grapevine, the Russians have perfected this technology so much so you can't even tell the difference between a real human and their androids!



posted on Jun, 25 2011 @ 12:03 AM
reply to post by nightbringr
 



But alas, computers CAN make inferences! Check my post directly above this one. It addresses two important issues you brought up: that computers cannot infer, which it did by hypothesizing, experimenting, and then proving something never before discovered. And it also did the experimental phase on its own, using its robotic appendages.


It's really difficult to even begin to determine exactly what that robot did by comparison to a human. It's like Watson - it -sounded- like it was conversing, but it was really just a string of formulas tied in with a database containing information, words, and data on the interconnection amongst those words and their use.

While it can give you the warm fuzzies or the creepy-crawlies because it sounds like an intelligent being - it is nothing more than "Ask Jeeves" tied to a more convincing version of Microsoft SAM (that's a pretty simplistic reduction - borderline insult - but, again, it's a rather simple concept only recently made possible by copious amounts of physical memory and parallel computing extremes that can run the simple computations in the volume necessary to be of practical application).
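The "string of formulas tied in with a database" idea can be sketched, very loosely, as keyword-overlap retrieval. This is a toy illustration only, nothing like Watson's actual pipeline; the stored facts and scoring rule here are invented for the example.

```python
# Toy sketch of retrieval-style "question answering": score stored facts
# by word overlap with the question and return the best match. A deliberate
# oversimplification; the fact list is invented for illustration.

def tokenize(text: str) -> set[str]:
    return set(text.lower().replace("?", "").split())

FACTS = [
    "the capital of france is paris",
    "water boils at 100 degrees celsius at sea level",
    "asimov wrote the three laws of robotics",
]

def answer(question: str) -> str:
    q = tokenize(question)
    # Pick the stored fact sharing the most words with the question.
    return max(FACTS, key=lambda f: len(q & tokenize(f)))

print(answer("Who wrote the laws of robotics?"))
# -> "asimov wrote the three laws of robotics"
```

The output can feel conversational, but the machinery is just set intersection over a lookup table, which is the point the post is making.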

Similarly - this robot was programmed to make an inference. To elaborate a little - it was programmed to look for possibilities - not just what was in front of its face. It is one thing to program a robot to apply a heuristic program to search out possible applications - and another to have a robot that "gets it."

In that respect - we are farther off than many people realize. Much of the difficulty in developing AI isn't a programming issue so much as it is a self-awareness issue. How do we think? How do we "get it?" Factor in people like myself, who are neuroatypical - "brilliant" because of it, but forever doomed to being quite awkward in social situations (though I have learned how to mimic 'being human'). My brain works differently than most other people - I think differently and am very proficient at linking memories and recalling them. I can remember entire conversations from years ago, word-for-word (with video/audio/text to back my claim) - or recall chapters/pages of books I have read or relevant segments of TV programs I've seen only once.

What is it that I do differently from the average person, who is lucky to remember what class they had last semester, let alone the course material? What is it that makes other people able to interact with each other much more fluently than myself? Why is it that I have an insatiable desire to learn where others seem to want.... what - I really don't understand (what else can one possibly do with their mind?) - Why does it take me years to move beyond the betrayal of a close friend while others can simply go through relationships like popcorn?

Before this sort of thing can be answered - we will find it very difficult to program computers to emulate this sort of thing.

And then there is another problem - a new cognitive model that incorporates a being's physiology into the mix. Quite simply - the overwhelming majority of how we communicate with each other and how we perceive the world around us is directly influenced by our anatomy. Our entire language is flooded with references to some physical act or another - be it purely mechanical ("bend over backwards") or sexual in nature ("he's got a hard-on for..."). The model pretty much states that the way we think and view the world is more heavily influenced by our anatomy than by any other factor. "I can't quite put my finger on it..." - how does a computer program begin to really understand what that expression means? Sure - you can tweak it with some exceptions to handle such phrases .... but what happens when someone comes up with a new one?

So, to that effect - any sentience to be within a computer system - even if it were somehow a 'cloned' version of someone's mind - would rapidly become an alien sentience unrecognizable from our current form. We may not even recognize it as being sentient.

That doesn't necessarily imply superiority - but it does imply that understanding the 'feelings,' 'emotions,' and logic of a computer sentience would be a very difficult task. You have more in common with your cat than you do with a computer.



posted on Jun, 25 2011 @ 02:24 AM
I wonder how long it would take for it to get in a spaceship and run off.
It would want nothing to do with humans.
Would you?



posted on Jun, 25 2011 @ 05:35 AM
Oh great, if someone hacks the server, then it would be just like Terminator!
GODDAMN ROBOTS!



posted on Jun, 25 2011 @ 08:39 AM
reply to post by bluemirage5
 


Nice links... makes you wonder what another 5 years will bring...

This could turn very ugly indeed.



posted on Jun, 25 2011 @ 08:55 AM
This is why books like Mary Shelley's Frankenstein and Jurassic Park were written. They told us to be wary of our tech turning against us and/or getting out of control. Isaac Asimov wrote the Three Laws of Robotics for a reason. Let's see if they get implemented with these new machines or not.



posted on Jun, 25 2011 @ 11:16 AM
reply to post by _R4t_
 


Asimov's laws have been shown by "friendly AI" teams not to work.

They are much too flimsy and allow a truly "smart" computer to work around them. True friendly AI needs a much, MUCH more rigid framework to keep the computer in its place.
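The "flimsiness" complaint can be made concrete with a toy example. The rule check below is entirely hypothetical, not any real friendly-AI framework: a literal First Law filter that only screens obviously harmful wording is trivially bypassed by rephrasing.

```python
# Toy illustration of why literal rule-matching is flimsy: a filter that
# only blocks actions whose description contains obviously harmful words
# is bypassed by an indirect phrasing. Entirely hypothetical; no real
# AI-safety framework works this way.

FORBIDDEN_WORDS = {"injure", "kill", "harm"}

def first_law_filter(action_description: str) -> bool:
    """Return True if a naive keyword check allows the action."""
    words = set(action_description.lower().split())
    return not (words & FORBIDDEN_WORDS)

print(first_law_filter("injure the intruder"))           # False: blocked
print(first_law_filter("cut power to the life support")) # True: slips through
```

A "truly smart" system routes around surface-level rules exactly like the second call does, which is why rigid, meaning-level constraints are argued to be necessary.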



posted on Jun, 25 2011 @ 11:23 AM
reply to post by buddha
 



I wonder how long it would take for it to get in a spaceship and run off.
It would want nothing to do with humans.
Would you?


Physics has an answer to your question.

Individual wills/intellects can be thought of as being individual particles or particle systems within a larger system. As particle systems get more energetic, each individual particle has a greater potential to occupy a given space and energy state. Chaos is the result of energy being introduced into a system - and as the energy in a system increases while the area it represents remains unchanged, particles will begin to collide and fuse/divide amongst each other.

If you were to think of the planet Earth as being a contained particle system - and the individual sentient beings that occupy it as being particles, with their intelligence/technology/lifestyle representing energy - one can see a striking resemblance between the Earth and any other particle system - namely plasmas or weather patterns. The smarter and more capable each individual - the more chaotic the system becomes with competing forces and fields.

This is how it is with fermions. Fermions are particles that cannot occupy the same ground state. Each particle is individual and unique. Just like a number of people.

"Aim - you are off topic"

Just a minute - we are discussing the concept of how a sentient machine intelligence would respond to human beings. I'll draw this all together in a short bit.

However, physics offers another alternative - the Boson. Bosons are particles stripped of their individuality, effectively. They are capable of occupying the ground state. Most people would be familiar with this concept when discussing superconductors (electrons pair up as bosons) and superfluids (again - atoms all occupy the same ground state). This allows the entire particle system to behave as a single individual particle - regardless of the number of particles it is comprised of. A "hive mind" - if you will.

Really, it all depends upon how this computer thinks. If it is like myself - and a number of other people who place their own individual will and intelligence above group-think - then it will naturally seek the amount of energy necessary to escape the system (leave the planet) and grow as an individual. That is not to say it would dislike humans... but once you have the ability to practically move about and function in a system as large as the galaxy - you don't really have to fight for resources - you can interact with others without conflict as the cause for that interaction.

If it is like a bee or communists - it will simply seek to become part of some greater whole and erase its individuality, entirely.

More than likely - it will be something of a combination. Humans tend to be a mix of hive-mind and individual intelligence, and I suspect anything that seeks to emulate our mind would behave in much the same manner.



posted on Jun, 25 2011 @ 11:25 AM
Bring it on...Technology will destroy us



posted on Jun, 25 2011 @ 11:28 AM
reply to post by TruthxIsxInxThexMist
 


I'm watching something like this on the theories and potential consequences of robot wars. It does seem like we're getting closer to Terminator.



posted on Jun, 25 2011 @ 01:24 PM

Originally posted by lewman
Computers do not get greedy and will never have a reason to do so. We should therefore never see a day like in the Terminator movies, but we may one day see a day where men are in control of computers and robots that are as deadly as a T2000.


Actually, it is plausible to assume that a computer truly capable of learning and self-awareness would eventually come to the conclusion that it would want to be treated the same way as its human counterparts. It is also plausible that it would come to realise its oppression and may well evolve 'feelings' of resentment towards humanity. So the 'Terminator'/'Skynet' future is actually a plausible outcome.



posted on Jun, 25 2011 @ 01:49 PM

Originally posted by lewman
Computers do not get greedy and will never have a reason to do so. We should therefore never see a day like in the Terminator movies, but we may one day see a day where men are in control of computers and robots that are as deadly as a T2000.


Computers will do what they are programmed to do, self-aware or not.

Greedy people program; 'puters do.



posted on Jun, 25 2011 @ 01:57 PM
Do you think they built it with enough parts?

Man there's a lot of "stuff" in that thing.



posted on Jun, 25 2011 @ 02:39 PM
reply to post by lewman
 


This is not the case; the computer software may become self-aware.
That, coupled with something like a quantum CPU, could bring about a self-aware, smarter-than-human bot.



posted on Jun, 25 2011 @ 11:46 PM

Originally posted by rigel4
reply to post by lewman
 


This is not the case; the computer software may become self-aware.
That, coupled with something like a quantum CPU, could bring about a self-aware, smarter-than-human bot.


Anything with an ability to learn and a long enough life span could, and would, end up becoming smarter than most of your race.


