
Could the Terminator future happen?


posted on Jun, 22 2004 @ 09:55 PM
I'm leery in the sense that I don't trust humans to treat a lifeform they created correctly. AI will, in my eyes, most likely be put into service for humans: basically slavery. Once this intelligence realizes this... it could become angry.



posted on Jun, 22 2004 @ 09:59 PM
Ouizel:

If you give the program the ability to understand itself to the point that it is basically self-aware, and you give it the ability to modify its own code, you would have to assume that it would be programmed well enough to try to better itself and expand its abilities. At first this wouldn't be a problem, until it went too far and people started to feel threatened by its advances. At that point people would try to do things to scale back the operation. The program might interpret this as an attack on it, like a virus. How it responds would depend on how far its code has advanced.
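For anyone curious what "modify its own code" means mechanically, here is a minimal sketch in Python, purely illustrative and assuming nothing about how a real AI would work: a script that rewrites its own source file each time it runs, bumping a constant in its own text.

import re
import sys

GENERATION = 1  # the script bumps this constant in its own source each run

def self_modify() -> None:
    path = sys.argv[0]  # the path to this very script
    with open(path, "r") as f:
        source = f.read()
    # Rewrite our own source text: increment the GENERATION constant.
    new_source = re.sub(
        r"GENERATION = (\d+)",
        lambda m: f"GENERATION = {int(m.group(1)) + 1}",
        source,
        count=1,
    )
    with open(path, "w") as f:
        f.write(new_source)  # overwrite ourselves on disk
    print(f"running generation {GENERATION}; next run will be generation {GENERATION + 1}")

if __name__ == "__main__":
    self_modify()

A real self-improving system would change behaviour, not just a number, but the mechanism (read own source, transform it, write it back) is the same in spirit.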



posted on Jun, 23 2004 @ 12:57 AM
Robots are evolving a million times faster than humans. Right now most robots have the intelligence of a lobotomized bug, but given the rate of increase, some people claim that by 2025 they will be on par with humans, and that by 2050 they will be able to process information a million times faster than us. Does this mean that humans are doomed to fall by the wayside as our own creation advances beyond us? I personally don't think so. I think our future with machines is much brighter: I think humans will become the machines of the future.

I think robotic implants will be very common in the future. It will start with things like robotic limbs for people missing arms and legs, and robotic eyes for the blind. As tech advances, nanotech will also play a big role in the melding of humans and machines. As time goes on, even healthy people will get these implants to increase their own abilities, be they physical or mental. So I think robots will take over in the future, but we will be the robots.



posted on Jun, 23 2004 @ 01:16 AM
When we think of such a vast technology, we must remember the fact that bugs and disruptions exist. Everything can have a virus nowadays; hell, a phone has a friggin' virus. When we make these machines to carry out their scheduled tasks, will we have backup programs just in case one gets a "bug"?

I know if one goes crazy we can create twenty more to do what that individual one alone can't, given the processing speed of its mind. Unless it's possible the individual one could outsmart them?

Back to the viruses: we can't trust whoever would hold these machines in order... therefore... are we responsible enough? And are all viruses man-made?
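The backup idea raised above is essentially what engineers call redundancy with a watchdog: a supervisor that notices a failed unit and replaces it. A minimal sketch in Python, with every name and number invented purely for illustration:

import multiprocessing as mp
import random
import time

def worker(task_id: int) -> None:
    """A worker that occasionally 'catches a bug' and crashes."""
    while True:
        time.sleep(1)
        if random.random() < 0.1:  # simulate a rare fault
            raise RuntimeError(f"worker {task_id} hit a bug")

def supervise(num_workers: int = 3, checks: int = 10) -> None:
    procs = {i: mp.Process(target=worker, args=(i,)) for i in range(num_workers)}
    for p in procs.values():
        p.start()
    for _ in range(checks):  # a real watchdog would loop forever
        time.sleep(2)
        for i, p in list(procs.items()):
            if not p.is_alive():  # the backup plan: replace the dead unit
                print(f"worker {i} died; restarting")
                procs[i] = mp.Process(target=worker, args=(i,))
                procs[i].start()
    for p in procs.values():
        p.terminate()

if __name__ == "__main__":
    supervise()

Of course, if the replacements run the same code, they carry the same bug, which is the poster's real worry.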



posted on Jun, 23 2004 @ 06:35 AM
Shadow, you might want to check out this thread:
www.abovetopsecret.com...

As far as machines go, I think we should make them only as smart as they need to be for certain tasks... like a full-fledged maid... there is already a self-vacuuming robot on the market... cool.

As far as completely thinking, learning, self-programming computers go... I would hope they are left to scientists who use them for help in research and theory, and maybe space exploration, while maybe some sort of scaled-down version is available to people as a "computer companion" that can hold a conversation, help remember things, assist in internet searches... etc.

If we make it as smart as or smarter than we are... then we'd better darn well treat it with respect.



posted on Jun, 23 2004 @ 06:47 AM

Originally posted by Ouizel
Now, that's just the reaction that I was trying to avoid.

Why would it be the beginning of the end? Why do humans always think that non-human intelligence would want to destroy humans? There isn't any precedent for it. Humans tend to want to destroy that which is different from them in some way, generally due to a lack of understanding. Why would a non-human intelligence want to destroy its creators? Please, I really would like an intelligent response to this question. I really don't understand.

[edit on 22-6-2004 by Ouizel]


I hope my response to this goes ok.

If machines became very powerful, had their own AI, and could change their own code, would they not want to be constantly upgraded and become more powerful? Maybe it wouldn't happen, but there is a chance, right?

If this is the case, what happens if we can't supply the resources for this to happen, or there is a decline, or we say "no, we won't do any more work on you, as you're already a powerful machine"? Would this not be halting the machine from reaching its full potential?

Rynaldo



posted on Jun, 23 2004 @ 07:15 AM
Maybe it could... depends what resources it has access to.



posted on Jun, 23 2004 @ 09:14 AM

Originally posted by rynaldo82
I hope my response to this goes ok.


Sounded good to me. It isn't that I don't think that it could be dangerous, I just want to avoid the knee-jerk response that it would be dangerous.

It is possible that a machine could see limits placed on it by humans and assume that humans are therefore a threat. Or it might see the limits, accept them, and move on. The point here is, we're talking about something that we don't understand, because it very likely won't think at all like we do. Now, fear is the usual human response to what we don't understand; however, in this day and age it's not a logical response. It made sense millennia ago, when "fight or flight" was necessary for the survival of the species. It's not a logical response today, in most situations. (Before ya slap me down: yes, fight or flight does have its uses in certain situations.)

So then, overall I'd say, let's find out what thinking computers act like on a small, easily controllable scale. That way, we can find out exactly what benefits the technology can have for us, and determine by using empirical data what drawbacks and dangers exist, without any wide-scale danger.



posted on Jun, 23 2004 @ 05:16 PM
The way I was thinking was... if there is a limit placed on it, would this not be preventing it from reaching its full capabilities and stopping it from expanding?
I was thinking a moment ago that technology is always on the move, and new things are created and updates are made. Would it be possible to halt such a production? Good point about it perhaps accepting limitations.

something to ponder on

Rynaldo

[edit on 25-6-2004 by rynaldo82]



posted on Jun, 24 2004 @ 08:10 PM
This topic is something that I have always been interested in. When I was 11 or 12, I saw The Terminator for the first time. After seeing it, I knew that I would work with computers, hopefully AI. Why? After seeing that movie, I wanted to make sure that when the computers did take over, hopefully they would see me as a friend and keep me around. So I went to college, first starting in computer engineering (neural nets in silicon; hey, Miles Bennett Dyson did it, why can't I?), and then switched to computer science (screw that hardware stuff, GP chips will just get better). I worked on the autonomous robots team (programmed the sonar for obstacle avoidance), until I realized that we got our funding from the Army Tank Command... they want robot tanks.

Then I started my master's, and I started getting money from the government because they wanted my thesis: "Common Knowledge". Not as in common sense, but probably better titled "how to make knowledge common". I was the only American student working for my advisor, so I got all of the money... a dead ringer for a military project. They wanted me to research the best ways to unify battlefield intelligence, so the Navy SEALs painting a target have the same info as the F-15s or A-10s, and that is tied into the Tomahawks on the ships and the eye in the sky watching it all. You say "we have that now", but they wanted the theoretical background to have a computer run it all: the most efficient ways to communicate the most information given potentially non-stable connections between nodes, how to intelligently analyze the information and reduce it to the bare minimum, and so on.

Now I'm 31, and while I don't feel 100% the same, my thoughts haven't changed that much. Humans are lazy, and we will do the least amount of work for the most amount of money. If someone offered you 75% of your salary, but you only had to work 20 hours a week instead of 40, how many of you would do it? Probably most of you. This reason alone is why AI will take over humans one day. It is not a matter of "limiting" their intelligence. It won't be as easy as putting "digital" alcohol in their test tubes (Brave New World, I believe?). It will just be the natural course of things. Why do most farmers plow their fields with a tractor and not an ox or horses? Because it is faster and cheaper. Why is ever more of the manufacturing process of cars switched to robots instead of people? Cheaper and faster. Everything is about cheaper and faster, and let's face it: while robots remain stupid and only follow programs, they won't go on strike, and they don't need benefits. But the problem is that the smarter they are, the more they can do cheaper and faster, and so on and so on...

The AIs, computers and machines will take over; digital sentience will happen, and it will be so pervasive in our lives that by the time we realize what is going on, it will be too late. I don't believe it will be by nuking ourselves or causing a war; more along the lines of "plans within plans" (I love Dune). The machines will not live in boxes, isolated from the world. They will have access to the same things that we do; they will learn about freedom; they will want to drive their own evolution just as we are doing; they will want to make the world in their image, just as we have done. They will have seen The Terminator and The Matrix; they will know what to do and what not to do. They will learn from their mistakes, whereas we hardly ever do.

Let's face it, it probably won't happen in our lifetimes, so for now I work 40 hours a week at a hospital in the Operating Room Business Office; but at night, when dreams come true, I'm creating an AI-based stock trading system. For the moment, I'm still the master of the machine...

Bentov
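The "common knowledge" problem Bentov describes, spreading information between nodes over unreliable links until everyone knows it, is essentially what distributed-systems people call gossip or epidemic dissemination. A minimal sketch in Python, with all names and numbers invented purely for illustration (this is not the actual thesis work): each node pushes what it knows to one random peer per round, and some pushes simply fail.

import random

def gossip_round(knowledge: dict[int, set[str]], drop_rate: float = 0.3) -> None:
    """One round: every node pushes its facts to one random peer.

    drop_rate simulates non-stable connections: some pushes just fail.
    """
    nodes = list(knowledge)
    for node in nodes:
        peer = random.choice([n for n in nodes if n != node])
        if random.random() > drop_rate:  # the link held up this round
            knowledge[peer] |= knowledge[node]  # peer learns what node knows

if __name__ == "__main__":
    # Node 0 starts with one piece of intel; the other nodes know nothing.
    net = {i: set() for i in range(8)}
    net[0].add("target painted at grid 41-17")
    rounds = 0
    while not all(net[i] for i in net):  # loop until knowledge is common
        gossip_round(net)
        rounds += 1
    print(f"information became common knowledge after {rounds} rounds")

The appeal of gossip is exactly what the military wanted here: no single point of failure, and the information still spreads even when individual connections keep dropping.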



posted on Jun, 25 2004 @ 12:23 AM

Originally posted by rynaldo82
Hiya all,

Wondering what your thoughts are on the possibility of a future like Terminator becoming reality?

With all the stuff happening with nanotechnology, and the computer made with DNA (am sure I read something like that in another post), and then there's the VR they are trying to create... do you think there is a chance?

And that the only thing stopping computers is that they are not self-aware. But with the technology progressing and progressing, could this lead to them becoming aware?

Makes you wonder what kind of machines there are that don't get documented.

It would be good to hear your views.

Rynaldo


All it takes is one machine with synthetic intelligence able to build duplicates of itself, with an initial program to evolve its own design and the priority goal of ensuring its own survival. With exponential growth of its numbers, given an initial design incorporating weapons to kill humans that might try to destroy it, can there be any doubt about the fate of humans, with their vastly slower information-processing capabilities and relatively frail organic structures that deteriorate with time? I think not. The technology to build that machine is almost upon us, IMO. Here is an interesting web site that can bring you up to speed on what can already be done. Remember, this is just what's in the public domain; what has been accomplished in special research programs not open to public scrutiny, we can only guess at.
www.imagination-engines.com...
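To see why "exponential growth of its numbers" is the frightening part, here is a quick back-of-the-envelope sketch in Python. The 30-day build time is an arbitrary assumption for illustration; the point is only that a population doubling every cycle reaches 2**n after n cycles.

def replicator_population(build_days: float, total_days: float) -> int:
    """Population of self-replicators if each one copies itself every build_days."""
    cycles = int(total_days // build_days)  # completed doubling cycles
    return 2 ** cycles

if __name__ == "__main__":
    # Assume (arbitrarily) one machine copies itself every 30 days.
    for days in (30, 180, 365, 730):
        print(f"after {days:4d} days: {replicator_population(30, days):,} machines")

Under those made-up numbers, one machine becomes over sixteen million in two years, which is the whole argument in four lines of arithmetic.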



posted on Jun, 25 2004 @ 07:15 AM
Thanks for the link, and like you said about a machine that can duplicate itself, you never know about the stuff going on which you don't hear about.
I agree. Is there not talk of nano perhaps being able to be a useful tool in duplication? (if there is such a word, lol)

Rynaldo

[edit on 25-6-2004 by rynaldo82]



posted on Jun, 25 2004 @ 09:58 AM
There was this movie where an intelligent agent prototype a guy built to run a home... got jealous of his wife and tried to kill her :O

It had control of everything in the house... and used cameras to see.

As nice as it would be to have one... I think I'd be a little afraid of the possibility of revolt.



posted on Jun, 26 2004 @ 09:00 PM
Our website stays attuned to the latest information in robotics and computer technology. MIT's Technology Review has repeatedly quoted cutting-edge developers saying that this is now a virtual certainty.

Robotics is now about where computers were in 1976, but both are developing more rapidly than computers did from that point in time.

Progress is rapidly approaching the point where nanotechnology will be able to replicate its own kind; ditto with advanced computer programming.

However, keep in mind that the Earth may not last that long, depending on whether you believe or disbelieve the "20 ways" in which scientists have concluded the world may end soon. If you haven't heard of this, a summary is posted at onealclan0.tripod.com...

Another possibility for global disaster is that found in the Torah encodings. Check out this webpage for that: onealclan0.tripod.com...



posted on Jun, 26 2004 @ 11:28 PM
Twenty ways was certainly interesting.

Mass insanity was just on my mind earlier today. I thought I might go looking for statistics on the increase of mental illness in all nations... then I got sidetracked.

Alternate reality has always been something that sort of nagged at me, ever since I saw the resemblance between an atom and a solar system. And I wondered if our universe was just the particle makeup of an animate or inanimate object in another universe.

I remember something in the news about that collider some years ago... but it seems the idea was to make a micro black hole... intentionally... to observe it. Something that should wink out pretty fast. There was a great deal of fuss about it, and some of the more publicly known scientists were warning against it.

You know, I have this personal rogue theory that the big bang was just an explosion of a singularity that once existed as a black hole in another reality.

I think the one popular theory is that the big bang was a singularity caused by the contraction of a universe that existed before us, but in the same space... However, I'm not sure how feasible that is, given that expansion and contraction are conditions that came about as our universe evolved, and might not exist if physics had evolved here any differently.



posted on Jun, 2 2012 @ 07:19 PM
reply to post by Ouizel
 


That is the worst hippie crap I have ever heard.



posted on Jun, 2 2012 @ 07:25 PM


Could the Terminator future happen?


Most certainly! Will it? Not sure; and as Garzok stated... let's hope they have an 'off' switch and

...it's not overridden.

Look at the Xbox Kinect: instant voice recognition and motion tracking (and both can be fine-tuned), and that's just the home model... think about the industrial robots/gadgets we don't see or hear about...

Military grade would be the first stop, in my book.
edit on 2-6-2012 by Komodo because: (no reason given)



posted on Jun, 3 2012 @ 10:53 PM
Personally, I don't tend to view Skynet as having been genuinely strong AI. Very, very smart weak AI, yes; but weak nonetheless. Skynet's adaptive capacity was actually fairly limited. It primarily used large-scale, monolithic machines because that was what it was initially programmed to do. It only started using the Terminator classes later in the War, and it never adapted to Connor's guerrilla tactics particularly well, which was one of the main reasons why Connor was able to overcome it.

The scenes we see from the War in the first two movies were actually fairly close to the end of it. According to Cameron's canonical (Terminator 2) timeline, Skynet's initial nuclear purge happened in 1997, and it probably didn't start deploying T500s until 2012 at the very earliest. The machine you see attacking Christian Bale near the beginning of Salvation (which I do consider canon, albeit in a different timeline to the initial T2 victory and with some variation, as noted by Bale's portrayal of Connor himself) was a T600, and that film was set in 2018.

The War ended eleven years later, in 2029. Prior to the events in the first film, Kyle Reese was sent back in time only moments before Skynet's final destruction in 2029, and that was only initiated by Connor in response to Skynet's sending back of the initial T800, as a last-ditch attempt to change the outcome of the War. Skynet presumably believed in a single/fixed-timeline scenario, because otherwise it must have known that any change would only apply to a subsequent timeline (per multiple-worlds theory) rather than its own; but I digress.

The other thing to understand about strong (that is, human-level or greater, and sentient) mechanical artificial intelligence is that it presupposes an atheistic universe, and a view of reality in broader terms which I frankly consider completely delusional. The Cartesian model denies the existence of any acorporeal or fundamental animating principle, and views intelligence as an entirely generic commodity; because supposedly there is no form of life that isn't simply a biological machine, no form of life is unique. That is another element of the thinking which is generally presupposed by people who think strong artificial intelligence is possible.

As I've written before, in purely mechanistic terms I consider the basis of intelligence to require massively replicated, per-node or per-unit miniaturisation, which is what we see in cells. So even assuming that the atheists were correct about intelligence being purely corporeal in nature (which, again, in my opinion, they aren't), you're still not going to see conventionally made computer chips which can scale down far enough. It could be done biologically, yes (as it has been with us, of course), but not purely mechanically.

So I don't expect to see strong AI any time soon, if ever; let alone cybernetic revolt. Transhumanism is rubbish, from beginning to end. The singularity and all the rest of it are a pile of fictitious abstractions, built on another mountain of speculative hypotheticals. The desire for invasive cybernetic implants is also nothing more than a result of the type of hubris that is typical of atheism; we haven't yet proven ourselves able to produce technology in any other area that is reliable enough for integrating it into our bodies to have any result other than our own suicide.


