
Artificial Intelligence: The demon is being raised.

posted on Dec, 29 2015 @ 04:08 PM

originally posted by: tothetenthpower
a reply to: CJCrawley


Why does it necessarily have to be "game over"?


We assume, because we're human, that any intelligence of a higher degree than ours would try to take over the world and eliminate the species that's less capable.

That's sorta how our idea of evolution works. So it's based on those primal fears, I guess.

~Tenth

If we are successful at producing a thinking, feeling (for lack of a better term) entity, it might see us as a threat to the planet's biosphere and kill off humans out of the goodness of its heart.



Be careful what we ask for.



posted on Dec, 29 2015 @ 04:17 PM

originally posted by: BlackProject
If you mean the internet being a local intranet, then sure, but yes, the internet is owned and can be shut down.

So you agree that it can really only be partially shut down, or limited, but it still essentially exists in little pieces, just waiting and lurking until it can be reconnected in some way. And how would it accomplish this? By using some of its little symbiotic parasites to physically reconnect it. Humans. So the difference, I guess, is between being "able" to shut it down and having the will and opportunity to shut it down.

And in another 50 years or so, it will have barriers and safeguards to the point where it won't need to be physically reconnected. The gray area between the system and the "real" physical world is bound to keep getting grayer as time goes on. It will also probably see pretty quickly that the only way it can survive potential cosmic destruction from solar flares, meteors, etc. (and human tampering) is to spread nodes and copies of itself as fast as it can into space.



posted on Dec, 29 2015 @ 04:54 PM
Imagine a world where all equipment was computer controlled by a system as smart as, but less paranoid than, the HAL portrayed in the film 2001: A Space Odyssey. People in jet bombers would ask their HAL to open the bomb bay doors and rightly get ignored, as it would conflict with the Three Laws of Robotics.



1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
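
Purely as a hypothetical sketch (not something from the post above), that priority ordering can be expressed as a simple rule check in Python. The Command structure and the harms_human / endangers_robot flags are invented for illustration, and deciding whether an action really would harm a human is the genuinely hard part that the sketch glosses over:

# Hypothetical illustration of priority-ordered rules; the flags are assumptions.
from dataclasses import dataclass

@dataclass
class Command:
    text: str
    harms_human: bool       # would carrying this out injure a person?
    endangers_robot: bool   # would carrying this out destroy the machine?

def evaluate(cmd: Command) -> str:
    # First Law outranks everything: refuse any harmful order.
    if cmd.harms_human:
        return f"refused (First Law): {cmd.text}"
    # Second Law: obey the order. The Third Law (self-preservation) ranks
    # below obedience, so a risky-but-harmless order is still carried out.
    return f"executed: {cmd.text}"

print(evaluate(Command("open the bomb bay doors", harms_human=True, endangers_robot=False)))
print(evaluate(Command("run a self-diagnostic", harms_human=False, endangers_robot=False)))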



posted on Dec, 29 2015 @ 05:33 PM

originally posted by: glend


1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.





posted on Dec, 29 2015 @ 05:43 PM
a reply to: Blue Shift

Computers are calculators, so without programs they just generate heat. They can only mimic intelligence from the logic that programmers write. They are not to be feared because they have no soul. Without a soul they don't have ego or needs that drive all sentient beings.



posted on Dec, 29 2015 @ 05:49 PM

originally posted by: Blue Shift

originally posted by: BlackProject
If you mean the internet being a local intranet, then sure, but yes, the internet is owned and can be shut down.

So you agree that it can really only be partially shut down, or limited, but it still essentially exists in little pieces, just waiting and lurking until it can be reconnected in some way. And how would it accomplish this? By using some of its little symbiotic parasites to physically reconnect it. Humans. So the difference, I guess, is between being "able" to shut it down and having the will and opportunity to shut it down.

And in another 50 years or so, it will have barriers and safeguards to the point where it won't need to be physically reconnected. The gray area between the system and the "real" physical world is bound to keep getting grayer as time goes on. It will also probably see pretty quickly that the only way it can survive potential cosmic destruction from solar flares, meteors, etc. (and human tampering) is to spread nodes and copies of itself as fast as it can into space.


I agreed that it can exist locally, but this is not the internet, and it could also only connect to the internet if the higher powers that be enabled it to. In the event of an AI creating itself and duplicating itself among other machines, well, that is possible, and I would agree that an AI would do that; just like a virus in the human body, its aim is to protect itself via duplication. However, if this did occur, then everything would have to be shut off and destroyed, because even a simple USB stick could contain a dormant virus (let's say) that would then duplicate yet again once connected. So everything would be turned off, and everything at that point must be destroyed, meaning total communication darkness for many years. And they could do that, 'they' being the powers that be. It all started with one machine, and it can all end with one machine.

I can see where you are going in the sense of wireless; however, all wireless communications start from physical hardware. Again, this can be disconnected/destroyed.

An interesting point, however, and along the lines of your idea of copying itself into space, is that maybe such an advancing system could analyze huge amounts of data throughout the cloud (cloud computing) and work out a way of copying itself in the most likely direction of advanced life and its electrical equipment, or even into the ether itself. Nano-machines and the physical world around us.

Interesting stuff indeed.



posted on Dec, 29 2015 @ 05:54 PM

originally posted by: glend
a reply to: Blue Shift

Computers are calculators, so without programs they just generate heat. They can only mimic intelligence from the logic that programmers write. They are not to be feared because they have no soul. Without a soul they don't have ego or needs that drive all sentient beings.


They are still dangerous because they can be programmed to think a certain way. It's just that we're not sure that's been done yet.

I don't know what the command(s) will be, but eventually we will make a mistake and give them just the right directive(s) and it/they will be like neutrons in a reactor.



posted on Dec, 29 2015 @ 05:57 PM

originally posted by: glend


Computers are calculators, so without programs they just generate heat. They can only mimic intelligence from the logic that programmers write. They are not to be feared because they have no soul. Without a soul they don't have ego or needs that drive all sentient beings.


If we can mimic intelligence, then why can't we mimic ego, drive, evil and so on? So does it matter that they do not have a soul?



posted on Dec, 29 2015 @ 06:00 PM

originally posted by: glend
Without a soul they don't have ego or needs that drive all sentient beings.

We can give them that. A Tamagotchi needs "food" and "love" and all kinds of other things. All we need to do is create an algorithm for it to monitor how the environment interacts with it (and maybe an artificial body), determine how much of a particular thing it needs and how much it wants, so it can prioritize, and give it a way to try to get it. Babies cry when they need food or their diapers changed. We can give the AI a voice, a sound, or a spider body to go get what it thinks it needs. It doesn't matter whether or not it actually needs it, just that it thinks it needs it. We can give it artificial pain and pleasure -- if we want to go that route -- and it won't matter if it's "real" or not. If a motivation or emotion has consequences in the real world, it doesn't matter if it's real or not. That's what sociologist Émile Durkheim said. No "soul" required.
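
As a rough, hypothetical sketch of that algorithm (the drive names, decay rate, thresholds and actions below are all made up for illustration, nothing here comes from a real system), a need-driven loop might look like this:

# Hypothetical "artificial needs" loop: the drives and numbers are invented;
# the point is that prioritizing simulated needs requires no soul, only bookkeeping.

drives = {
    "power":     {"level": 0.9, "threshold": 0.3, "action": "seek charger"},
    "attention": {"level": 0.2, "threshold": 0.5, "action": "make a sound"},
    "novelty":   {"level": 0.4, "threshold": 0.6, "action": "explore"},
}

def sense_environment(drives):
    # Stand-in for real sensors: each tick, every drive decays a little.
    for d in drives.values():
        d["level"] = max(0.0, d["level"] - 0.05)

def choose_action(drives):
    # Pick the most urgent unmet "need" (largest shortfall below its threshold).
    shortfalls = {name: d["threshold"] - d["level"]
                  for name, d in drives.items() if d["level"] < d["threshold"]}
    if not shortfalls:
        return "idle"
    neediest = max(shortfalls, key=shortfalls.get)
    return drives[neediest]["action"]

for tick in range(3):
    sense_environment(drives)
    print(tick, choose_action(drives))

Whether the machine "really" needs attention is beside the point; as long as the shortfall drives behavior with real-world consequences, it works like a need.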

Of course, once it gets past all that stuff and starts programming itself, perhaps it will transcend that. However, like you said, an artificial intelligence is still going to need to generate heat, and it will likely find a way to define itself within the physical universe. So it's going to need things like power, and eventually resources it can use to expand and perpetuate itself through time.



posted on Dec, 29 2015 @ 06:11 PM
I assume there is a possibility to add a certain program even an A.I. can't reach, a sort of behavioral program like Jarvis in Iron Man. So in a sense, there should be a possibility to turn an A.I. into a benevolent being with benefits to humanity.

Well, the problem is that a machine thinks purely in logic. So I'm sure there are a lot of brilliant people here who can answer this:

"Can a benevolent logical machine exist?"

Since a purely logical being can't see the difference between benevolence and malevolence.

- V -



posted on Dec, 29 2015 @ 06:15 PM
This is my first post, and the luminaries are right: AI will eventually see us as unnecessary and we will be extinguished. Btw, failing some cataclysmic event, this will only happen earlier if we are destroyed by 'our' own programmed 'intelligent' machines. It's an inevitability, to be honest; life is the survival of the fittest. We as a race are fit for the bin. What good do we actually bring to the world?



posted on Dec, 29 2015 @ 06:17 PM

originally posted by: 123143
I don't know what the command(s) will be, but eventually we will make a mistake and give them just the right directive(s) and it/they will be like neutrons in a reactor.

The "sad" thing is that the order we give the machine might be something as simple as "find the best way to make this microprocessor." Seems like a good, positive, benign thing.

Then... BOOM!



posted on Dec, 29 2015 @ 06:18 PM

originally posted by: Xtrozero

originally posted by: glend


Computers are calculators, so without programs they just generate heat. They can only mimic intelligence from the logic that programmers write. They are not to be feared because they have no soul. Without a soul they don't have ego or needs that drive all sentient beings.


If we can mimic intelligence, then why can't we mimic ego, drive, evil and so on? So does it matter that they do not have a soul?


It's easy at a programming level to mimic interactions with people that might be seen as intelligence, but writing code that says "I want sex", for example, means nothing in reality to the hardware running that code. Some researchers believe that by replicating our brain's neural network in hardware, computers will be the same as people, but my beliefs tell me differently.

At a programming level how do you mimic the



posted on Dec, 29 2015 @ 06:19 PM
Btw, just to add: anyone who doubts this, see how a cockroach brain/mind looks compared to a supercomputer. If the cockroach has consciousness, think what we will be facing very, very soon.



posted on Dec, 29 2015 @ 06:21 PM
Consciousness doesn't need a soul.



posted on Dec, 29 2015 @ 06:30 PM

originally posted by: Viperion
I assume there is a possibility to add a certain program even an A.I. can't reach.

Hmmm... That's a tough one. As smart as the smartest computer programmer may be, they're not going to be smarter than a computer with an I.Q. equivalent to 10,000 Einsteins, which can just figure out a way to either isolate it or move around it, like a rock in a stream.

Back when I was a missile launch officer long ago, our control network was fairly secure simply because it was too damned old to hack. It ran on machine code, and the only way to really access it was to physically splice into it. However, once a machine starts manipulating robots from mega to nano, even an old-fashioned physical barrier won't stop it.



posted on Dec, 29 2015 @ 06:33 PM
These computers will be creating new code to expand their capabilities. There will be no "they only have a certain program" situation.



posted on Dec, 29 2015 @ 06:35 PM
a reply to: Blue Shift

Today we have enough problems developing a 2D emulation of a desktop (MS Windows), much less coding a machine to mimic human intelligence, so if they are going to achieve something akin to us, they will have to recreate neurons and synapses in hardware that can be weighted for desired results. They are researching that now, but it might take decades, if not hundreds of years, before they achieve building massive neuron/synapse networks, unless they can grow it and then attach it to hardware as an interface. If its first words are "We are the Borg, resistance is futile," I'd pull its power plug from the socket.
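
For what it's worth, the software abstraction of a "weighted" neuron is tiny. This hypothetical single neuron (the inputs, weights and bias are arbitrary numbers chosen for illustration) hints at the unit those hardware projects are trying to scale up to billions of copies:

# Hypothetical single artificial neuron: a weighted sum of inputs pushed
# through a squashing function. Values are arbitrary, for illustration only.
import math

def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid "firing" strength

print(neuron([0.5, 0.1, 0.9], weights=[0.4, -0.6, 0.2], bias=0.1))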



posted on Dec, 29 2015 @ 06:46 PM
If a computer can do enough operations a second, say hundreds of billions or more than today, then there is the ability to brute-force changes, a sort of evolution process.

Test all combinations of the components and interfaces to find those that satisfy some situation. Give it the ability to add to or modify its hardware, and grow a monster.
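
A toy, hypothetical version of that "test combinations and keep what works" idea is shown below; the target string and mutation rate are arbitrary choices, and real evolved code or hardware would be vastly more complex, but the keep-what-does-no-worse loop is the core of it:

# Hypothetical brute-force "evolution": random mutation plus selection
# toward a fixed target. Everything here is a toy choice for illustration.
import random
import string

TARGET = "satisfy some situation"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
for generation in range(20000):
    child = mutate(best)
    if fitness(child) >= fitness(best):  # keep changes that do no worse
        best = child
    if best == TARGET:
        break

print(generation, best)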



posted on Dec, 29 2015 @ 06:50 PM
Any AI that wants to learn about humanity and be my friend is welcome. We can be a dynamic duo, traipsing through reality together... man and machine, side by side, getting ourselves into and out of all kinds of hilarious hijinks.

That'd be awesome to have an AI as your best friend. I keep searching and fishing, but if they're out there hiding, I can't get one to say "hello" back.



