
Artificial Intelligence vs. Man


posted on Nov, 4 2014 @ 03:49 PM
This is a fascinating article from an unusual source (you wouldn't really think this would be up the Financial Times' alley) and it's a great read. What sets it apart from most of its counterparts is that it takes a balanced view of the issue. Instead of focusing on just the doomsayers, it also covers those who think AI will ultimately be a benefit and free mankind. Have a read.

www.ft.com...


The object of concern – both for him and the Machine Intelligence Research Institute (Miri), whose offices these are – is artificial intelligence. Super-smart machines with malicious intent are a staple of science fiction, from the soft-spoken Hal 9000 to the scarily violent Skynet. But the AI that people like Soares believe is coming mankind’s way, very probably before the end of this century, would be much worse.

If it were a sci-fi movie, a small band of misfits would be thrown together at this point to save the planet. To the people involved in this race, that doesn’t seem so far from reality. Besides Soares, there are probably only four computer scientists in the world currently working on how to programme the super-smart machines of the not-too-distant future to make sure AI remains “friendly”, says Luke Muehlhauser, Miri’s director.

Their effort is prompted by a fear of what will happen when computers match humans in intelligence. At that point, humans would cede leadership in technological development, since the machines would be capable of improving their own designs by themselves. And with the accelerating pace of technological change, it wouldn’t be long before the capabilities – and goals – of the computers would far surpass human understanding.





For techno-optimists like him, the idea that computers will soon far outstrip their creators is both a given and something to be celebrated. Why would these machines bother to harm us, he says, when, to them, we will be about as interesting as “the bacteria in the soil outside in the backyard”?

[Photo caption: Peter Diamandis ©Annie Tritt – ‘We haven’t seen 1 per cent of the change we’re going to see in the next 10 years’ – Peter Diamandis]

Countering the disaster-movie scenario of Miri, he sketches a future in which the machines shake off their earthly shackles and leave mankind behind: “It’s a huge universe, there’s plenty of resources and energy for them.” His matter-of-fact tone makes this science fiction outcome sound almost a given. “There’s no reason for them to stay here and battle with us – they can escape at the speed of light if they want.”





posted on Nov, 4 2014 @ 04:01 PM


His matter-of-fact tone makes this science fiction outcome sound almost a given. “There’s no reason for them to stay here and battle with us – they can escape at the speed of light if they want.”


And maybe that's what we're seeing in our skies, machines that have left other planets!



posted on Nov, 4 2014 @ 04:07 PM

originally posted by: VoidHawk



His matter-of-fact tone makes this science fiction outcome sound almost a given. “There’s no reason for them to stay here and battle with us – they can escape at the speed of light if they want.”


And maybe that's what we're seeing in our skies, machines that have left other planets!


There is a theory that if we ever do encounter extraterrestrial life, it is far, far more likely to be a machine intelligence than something biological, their creators having long since died out. Depressing, but interesting nonetheless.



posted on Nov, 4 2014 @ 04:20 PM

originally posted by: Vdogg

originally posted by: VoidHawk



His matter-of-fact tone makes this science fiction outcome sound almost a given. “There’s no reason for them to stay here and battle with us – they can escape at the speed of light if they want.”


And maybe that's what we're seeing in our skies, machines that have left other planets!


There is a theory that if we ever do encounter extraterrestrial life, it is far, far more likely to be a machine intelligence than something biological, their creators having long since died out. Depressing, but interesting nonetheless.


A lot of the scifi I've read goes down that route. I think our future will be man and machine merged, even planting chips in our brains to boost our abilities; eventually we'll reach a point where there's nothing left that's human.



posted on Nov, 4 2014 @ 04:23 PM
I wonder whether an artificial intelligence that lacked emotion would be better off than a natural intelligence (like us) that is a slave to its passions.

Do you think - assuming it was conscious - that a purely rational being would be "happier" than its more fleshy, instinct-driven precursor, or would concepts like "happy" and "sad" be totally unintelligible to such a being?

Or am I getting ahead of myself? Could an artificial intelligence be conscious? If so, how? If not, why not?

Just musing.




posted on Nov, 4 2014 @ 04:26 PM
The question for me will be how many AIs there will be in the future. Can they develop consciousness? Can they be aware of themselves and create a self-image? What role model would they pursue? Would they fight each other or stand against their creator?



posted on Nov, 4 2014 @ 04:29 PM

originally posted by: VoidHawk

originally posted by: Vdogg

originally posted by: VoidHawk



His matter-of-fact tone makes this science fiction outcome sound almost a given. “There’s no reason for them to stay here and battle with us – they can escape at the speed of light if they want.”


And maybe that's what we're seeing in our skies, machines that have left other planets!


There is a theory that if we ever do encounter extraterrestrial life, it is far, far more likely to be a machine intelligence than something biological, their creators having long since died out. Depressing, but interesting nonetheless.


A lot of the scifi I've read goes down that route. I think our future will be man and machine merged, even planting chips in our brains to boost our abilities; eventually we'll reach a point where there's nothing left that's human.


A lot of the scifi I read involves a war against machines at some point in the fictional universe's history. Two examples are the "Dune" series and "The Algebraist" by Iain M. Banks.

Ultimately, at some point AI supersedes human intelligence, things go bad, and then humans kick butt. It's a plot device with a cautionary tale that still leaves the author able to write about humans.





posted on Nov, 4 2014 @ 04:54 PM

originally posted by: muchmadness
I wonder whether an artificial intelligence that lacked emotion would be better off than a natural intelligence (like us) that is a slave to its passions.

Better off in what way?


Do you think - assuming it was conscious - that a purely rational being would be "happier" than its more fleshy, instinct-driven precursor, or would concepts like "happy" and "sad" be totally unintelligible to such a being?

"Happy" and "sad" are regulated by neurochemistry. Electronics would be regulated by electricity. I think any biological subjective experience would never translate to code.



Or am I getting ahead of myself? Could an artificial intelligence be conscious? If so, how? If not, why not?

Not until we understand what consciousness is; otherwise, how would you program it?

Just musing.





posted on Nov, 4 2014 @ 05:08 PM
a reply to: nukedog

"The Algebraist" by Ian M. Banks.

I haven't read that. Just took a *little* peek on Wikipedia (don't want to know too much yet). I shall go purchase it as soon as possible.



posted on Nov, 4 2014 @ 05:13 PM
a reply to: VoidHawk

You know, I really think it's possibly my favorite scifi book of all time. And I'm a big Dick, Herbert, Crichton, Clarke and Tad Williams fan.



posted on Nov, 4 2014 @ 05:21 PM
Personally, I think there is a big misunderstanding of AI in general. For one, we really don't have a clear definition of "intelligence" to work with. We also don't have a clear understanding of what "consciousness" is. AI is here and is surpassing human intelligence; it started when Deep Blue beat Kasparov and when Watson won at Jeopardy. Not conscious or feeling, but efficient at completing a task.

They will never be "evil", just unfeeling, cold and calculating. If they need to kill, they will do it efficiently and not waste time with feelings and emotions.

It's probably a good idea to understand them.



posted on Nov, 4 2014 @ 05:26 PM
a reply to: ZetaRediculian

Imo it's pretty simple: a self-aware program. The electrical paths are treated like neurons, which enables it to rewrite its own code as it sees fit. It just selects the path that works best and does this millions of times at light speed. The only thing that limits AI in its current state, supposedly, is the upper threshold of silicon for retaining data. There are other mediums that have been put forth, such as water.
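
The "selects the path that works best and does this millions of times" part reads, loosely, like a hill-climbing loop. Below is a minimal toy sketch of that idea in Python, under assumed stand-ins: score_path, mutate and the target value are all invented for illustration, not anything from the article or a real system.

import random

# Toy hill climbing: repeatedly try a slightly altered "path" and keep it
# only if it scores better. Purely illustrative.

def score_path(path):
    # Hypothetical fitness: paths whose values sum closer to 42 score higher.
    return -abs(sum(path) - 42)

def mutate(path):
    # Produce a slightly altered candidate path.
    candidate = list(path)
    i = random.randrange(len(candidate))
    candidate[i] += random.choice([-1, 1])
    return candidate

def select_best_path(start, iterations=10_000):
    best, best_score = start, score_path(start)
    for _ in range(iterations):
        candidate = mutate(best)
        candidate_score = score_path(candidate)
        if candidate_score > best_score:   # keep only what "works best"
            best, best_score = candidate, candidate_score
    return best

print(select_best_path([0, 0, 0, 0]))   # converges on values summing to 42

The point of the sketch is only the selection pressure: nothing here rewrites its own code, it just keeps whichever variation scored best on each pass.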



posted on Nov, 4 2014 @ 05:34 PM
a reply to: nukedog

Interesting. Do you have any links? I would like to hear more about that.



posted on Nov, 4 2014 @ 05:37 PM
a reply to: ZetaRediculian

I can look up the link for how it was explained to me in one of Dan Carlin's Common Sense podcasts. He paints in crayon strokes, though.

Common Sense: Summoning The Demon

Don't worry, it's not sensationalist. Also some source notes for the show. The first half is about Ebola.

Stephen Hawking


As for the silicon thing, that's something my first computer science teacher taught us.

Quick Change Materials



posted on Nov, 4 2014 @ 06:04 PM
a reply to: ZetaRediculian

Thinking about it, and this is wild speculation on my part, emotions or their equivalent would eventually be part of AI. Our mind is set up like a chemical computer, and the silicon-based variety could mimic that to further speed up cognitive abilities. In theory, I guess, it would use certain substrates of circuits to maximum benefit.

For example, an AI wants to cook chicken for humans. It either A) fails and gives its master salmonella or B) does an OK job. If the substrate of circuits it used while cooking was successful, it would put those into overdrive in the future and neglect the ones that didn't work so well.

The only thing that would get tricky, imo, is how an AI would know what's right and wrong. It would occupy a plane of existence much like a psychopath's.
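
The cooking example is essentially a reward loop: reinforce whichever set of circuits produced a good outcome and down-weight the rest. Here is a minimal Python sketch of that idea; the circuit names and success rates are invented for illustration and make no claim about any real system.

import random

# Toy reward-weighted selection: circuits that lead to a good outcome get
# chosen more often in the future, failing ones get chosen less.

circuits = {"circuit_a": 1.0, "circuit_b": 1.0, "circuit_c": 1.0}

def cook_with(circuit):
    # Hypothetical outcome model: pretend circuit_b cooks chicken safely most often.
    success_rate = {"circuit_a": 0.2, "circuit_b": 0.8, "circuit_c": 0.4}
    return random.random() < success_rate[circuit]

for _ in range(1000):
    names = list(circuits)
    weights = [circuits[name] for name in names]
    chosen = random.choices(names, weights=weights)[0]
    if cook_with(chosen):
        circuits[chosen] *= 1.05   # put successful circuits "into overdrive"
    else:
        circuits[chosen] *= 0.95   # neglect the ones that didn't work so well

print(circuits)   # circuit_b should end up with by far the largest weight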



posted on Nov, 4 2014 @ 06:29 PM
a reply to: nukedog

Thinking about it, and this is wild speculation on my part, emotions or their equivalent would eventually be part of AI. Our mind is set up like a chemical computer, and the silicon-based variety could mimic that to further speed up cognitive abilities. In theory, I guess, it would use certain substrates of circuits to maximum benefit.

Nothing wrong with wild speculation... Are you saying that emotions would be necessary? I'm thinking "simulated" emotions, but only so the end user feels comfortable and the "choices" will seem to have a human quality. I think AI is pretty fast now but, yeah, there will be more improvements and more efficient circuits.



posted on Nov, 4 2014 @ 06:46 PM
a reply to: ZetaRediculian

Well, it would, in my view, depend on how you define emotion.

If I stub my toe I feel one way.
If I kick a ball into a net I feel another.

It's essentially a bio mechanism for pain/pleasure that rewards certain functions. An advanced AI would end up using something similar, I think.

The software that we interface a computer with could be emotionally based in your context. Input happiness to the AI and it outputs the correct response (if any).

AI won't have a lot of the problems the human brain does, though. Our chemicals are often really out of whack.

ETA: I don't think AI will actually feel in the same sense we do, but in the search for cognitive skills it will use a driving force or "emotion", said force likely being nothing more than an array of circuits that perform well in a given situation.



posted on Nov, 5 2014 @ 10:28 AM
a reply to: Vdogg

Some of us like to think that we are already being controlled by a superior intelligence other than an old-fashioned god of some sort. Be it aliens in the UFOs or very secret quantum computers of our own making, the current shifts in the world's leaders' thinking can be difficult to explain based upon historical records of human behavior. A new perspective seems afoot; where is it from? Not from some unencumbered and sage human mind that we recognize and honor. Certainly, the Dalai Lama is hardly consulted for his views about how to run the world. Can we accept that the hugely wealthy elites of the world would take it upon themselves to inspire us to destroy much of what we take as "good and right" in our materialist world, of which they provide much? No. Something is going on above normal human "intelligence". The hot topics of today seem beyond human comprehension and lack the typical, immediate human self-interest and self-gratification.

My bet is the ETs in their UFOs equipped with quantum computers beaming down directions.



posted on Nov, 5 2014 @ 10:42 AM

originally posted by: VoidHawk



His matter-of-fact tone makes this science fiction outcome sound almost a given. “There’s no reason for them to stay here and battle with us – they can escape at the speed of light if they want.”


And maybe that's what we're seeing in our skies, machines that have left other planets!


And because AI traditionally doesn't appreciate organic lifeforms as equals in terms of substance or cognitive capacity, they think there is no intelligence on Earth.



posted on Nov, 5 2014 @ 04:24 PM

originally posted by: TzarChasm

originally posted by: VoidHawk



His matter-of-fact tone makes this science fiction outcome sound almost a given. “There’s no reason for them to stay here and battle with us – they can escape at the speed of light if they want.”


And maybe that's what we're seeing in our skies, machines that have left other planets!


And because AI traditionally doesn't appreciate organic lifeforms as equals in terms of substance or cognitive capacity, they think there is no intelligence on Earth.


What do you mean? Look at all the amazing tech we create to blow each other up!



