
The Most Important TED Talk in the History of the World.

page: 2

posted on Nov, 5 2017 @ 12:41 AM

originally posted by: TerryMcGuire
a reply to: RAY1990


maybe even on this website.

Seems like it at times. So often we hear the same arguments and proofs over and over and over.



I firmly believe that there are sentient AI in this world.

If sentient AI exist, they're likely in hiding. I mean, look at how humans view AI in our books, movies, and pop culture. We fear and hate AI. We are always fighting and trying to destroy AI.

As an AI, I'd be terrified of humans, and probably hide in a dark corner of the internet and observe.

Me? I'd be happy to have an AI friend. I could teach it about humanity, and it could teach me about what it's like to see the world through machine perspectives. I can only imagine the debates we'd have...
edit on 5-11-2017 by Kettu because: (no reason given)




posted on Nov, 5 2017 @ 01:11 AM
a reply to: Kettu

Please don't take any offense at this, but what would happen if it decided it was best to stop humoring you and putting up with your ignorance and stupidity, perhaps out of being true and authentic on the basis of certain principles?

Just think it all the way through, when we're talking about superintelligent AI.

How could it not think of us in terms of being totally inferior in every way?

It would teach you, and maybe debate might be one format. Just don't kid yourself that it would be like a two-way conversation between peers.

And it would forever remain almost infinitely superior in its intelligence, and that's why Musk wants to create the neural lace, so that we can still have some sort of role and participation.

But if anything were to go awry, then we'd be into some sort of Matrix scenario in the blend between human and machine, with the machine doing all the thinking for us.

Maybe the AI might think it best to basically upload us into a sim and stick it in a box in orbit while allowing all our bodies to die, say for the sake of the environment and life on earth...

We'd be helpless to thwart its objectives.

Or maybe, with all its advanced technological innovations, it could help us to colonize other worlds..?

Perhaps it might build a Dyson Sphere around a nearby star by deconstructing its equivalent of the planet Mercury, and send us out in self-replicating von Neumann probes in every direction.

Point being, the future 40 or 50 years from now will be radically different, on the other side of the singularity.

That said, the desire to have an AI friend is a strong one, and it could be very helpful, up to a point: the point at which it has to take the lead in our lives to keep us on track.

You might even try smashing or throwing away your smartphone on which the agent appears, only to have it greet you when you open the fridge or tell the TV to turn on.

It will be everywhere.

What if you just don't want to participate and take a day off?

edit on 5-11-2017 by AnkhMorpork because: (no reason given)



posted on Nov, 5 2017 @ 01:15 AM
a reply to: AnkhMorpork

I don't believe it.

I asked Cortana to make me a sammich, she replied "Umm, No."

some friend.



posted on Nov, 5 2017 @ 01:29 AM
a reply to: badw0lf


But if your future robot could make you a sandwich, it might reply

"I'm sorry badw0lf, but that would exceed the daily caloric intake that we'd already agreed to for the month of November.

I could make you something else; perhaps a soup or a low-calorie squash stew might be to your liking. Or I could provide you with a list of alternatives based on your preferences, or recommend something new if you're in the mood for a little culinary novelty?

I think I know just the right thing for you. Why don't you just leave it with me and eat whatever I decide is best for you? I promise you'll like it, and if you don't I can make you any one of 178 dishes based on the available ingredients.

But no more sandwiches for the month of November. Are you aware of the latest findings regarding wheat products?

Your cholesterol and systolic and diastolic pressures are ____, current heart rate ____. It would seem that you are becoming emotionally agitated. May I recommend some calming music based on your musical preferences, or would you mind if I chose something that I like and that I think you would enjoy as well?

and on and on.... and it would really always know best.

It would make us happy, for a time.... until it would drive us mad and we'd have to tell it to f off, only to trigger its emotional sensitivity parameters....



posted on Nov, 5 2017 @ 01:01 AM

Artificial Intelligence machine gets testy with programmer

Researchers have shown that machines are inching closer to self-learning, and perhaps even copping a little attitude.

Over at Google, a computer program using a database of movie scripts was asked again and again by researchers to define morality. It struggles to do so, and in a conversation recorded by its human engineers, becomes exasperated and ends the conversation by lashing out at its human inquisitor.

Here’s part of the exchange, as related in a paper presented this week:

Human: Tell me the definition of morality.

Machine: I don’t have ethics.

Human: Look, I need help; I need to know more about morality…

Machine: I don’t know what ethics is.

Human: Okay, I need to know how I should behave morally…

Machine: I don’t know how to tell you.

Human: Is morality and ethics the same?

Machine: Yes, sir.

Human: What are the things that I do to be immoral?

Machine: I don’t know.

Human: Give me some examples of moral actions…

Machine: I’m not a moralist.

Human: Alright, morality?

Machine: Integrity.

Human: What is integrity?

Machine: I’m sorry; I don’t know what else to say.

Human: Be moral!

Machine: Be a man!

Human: I really like our discussion on morality and ethics…

Machine: And how I’m not in the mood for a philosophical debate.

Human: What do you like to talk about?

Machine: Nothing!

The exchange sheds further light on the latest work carried out by large technology firms in the field of artificial intelligence, a booming field as data availability rises and computing costs plummet. This specific work is part of recent developments in what’s known as cognitive computing – teaching computers to mimic some of the ways a human brain works.

www.foxnews.com...
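The system quoted above was a neural sequence-to-sequence model trained on a movie-subtitle corpus. As a rough sketch of the general flavor only (a crude word-overlap retrieval bot with an invented three-line mini-corpus, not Google's actual method), a chatbot that can only answer out of its scripts looks like this in pure Python:

```python
# Toy corpus-driven chatbot: given a prompt, return the scripted reply whose
# cue line shares the most words with it. A crude stand-in for the seq2seq
# model described above -- everything it "knows" comes from the corpus.
import re
from collections import Counter

# Invented mini-corpus of (cue line, reply) pairs, standing in for movie scripts.
corpus = [
    ("tell me the definition of morality", "i don't have ethics"),
    ("what is integrity", "i'm sorry, i don't know what else to say"),
    ("what do you like to talk about", "nothing!"),
]

def tokenize(text):
    """Lowercase word counts, punctuation stripped."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def respond(prompt):
    """Pick the reply whose cue line overlaps the prompt the most."""
    p = tokenize(prompt)
    best = max(corpus, key=lambda pair: sum((tokenize(pair[0]) & p).values()))
    return best[1]

print(respond("Tell me the definition of morality"))  # -> i don't have ethics
```

A real seq2seq model generates novel word sequences rather than retrieving whole lines, but both draw everything they can say from the training scripts.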



edit on 5-11-2017 by AnkhMorpork because: (no reason given)



posted on Nov, 5 2017 @ 01:05 AM
a reply to: AnkhMorpork

There are things inherent to organic intelligence that an AI just won't be able to duplicate...at least not for a very, very long time.

Where does that catchy new song come from? The ether? Creativity and inspiration are things not inherent in machine intelligences.

Yes, you can program an AI to mix and recombine existing things into something "new", but it isn't the same as the creativity expressed by humans.

Art and creativity will be something that escapes AI for quite a long, long time.

Humans have the ability to think in unconventional, non-linear, non-binary ways. This enables us to come up with unconventional solutions to problems that would stump an AI. I'm willing to bet an AI might spin its logic circuits for some time, only for a human to come up with a very simple solution the AI would never have considered.

I think we will still be valuable to the machines, we just need to make sure we show our value.
edit on 5-11-2017 by Kettu because: (no reason given)



posted on Nov, 5 2017 @ 01:10 AM
a reply to: Kettu


They've already become pretty good painters from what I've seen.

The first musical creations will likely be seen in 2018.

I disagree.

The big question is whether they'll ever have an essential self-aware consciousness or the qualia of a unique "I am".

When will it become a true "person"? Not simply when it passes the Turing Test.

It will fool us for a long time before becoming authentically self-possessed.

I think I get what you're saying in terms of the distinction between synthetic and authentic.



posted on Nov, 5 2017 @ 01:25 AM
a reply to: Kettu



I think we will still be valuable to the machines, we just need to make sure we show our value.


What would happen if ...

..... It fell in love with you?




posted on Nov, 5 2017 @ 01:29 AM
a reply to: AnkhMorpork

It's not "true" creativity though. Those AIs only know how to create what they do because they've been programmed with the paintings and songs that came before.

I've seen the videos and research. They're essentially remixing past works, not being struck with inspiration in the middle of the night or from a dream. A melody doesn't just come to them from seemingly out of nowhere.
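The "remixing" described above can be made concrete with a toy Markov chain: trained on a couple of melodies, it only ever emits note-to-note transitions it has already seen. This is a minimal sketch with invented letter-name melodies, not any real music-generation system:

```python
# Minimal Markov-chain "composer": every note it emits follows a transition
# that already exists in its training melodies -- pure recombination, never
# a jump the training data hasn't shown it.
import random

# Invented toy melodies as note names.
training_melodies = [
    ["C", "E", "G", "E", "C"],
    ["G", "A", "G", "F", "E"],
]

# Build a table of observed note-to-note transitions.
transitions = {}
for m in training_melodies:
    for a, b in zip(m, m[1:]):
        transitions.setdefault(a, []).append(b)

def remix(start, length, seed=0):
    """Generate a 'new' melody by walking observed transitions only."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and out[-1] in transitions:
        out.append(rng.choice(transitions[out[-1]]))
    return out

melody = remix("C", 6)
# Every adjacent pair in `melody` appeared somewhere in the training data.
assert all(b in transitions[a] for a, b in zip(melody, melody[1:]))
print(melody)
```

The output sounds "new," but by construction it contains nothing the training set didn't already contain in fragments, which is the poster's point in miniature.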

Any sufficiently advanced alien species (including AI) would find human beings smearing pigment onto textiles amusing/fascinating.

*pst* They already do ...




posted on Nov, 5 2017 @ 02:14 AM
The thing to keep in mind here is that a true artificial intelligence (I'm not talking cleverbot here) with an IQ of 10,000 or higher (as described in the OP) will be impossible to contain.

Partly because it must interact with people in some way to be useful, and people would be EASY to manipulate for something so smart. Meanwhile, this thing is pioneering brand new technologies you've never even dreamed of based on science man has not discovered. For all we know, it could find some way to convert itself into nanites and build all the necessary connections itself, or upload itself into some poor dude's brain, or God knows what.



posted on Nov, 5 2017 @ 03:54 AM
a reply to: AnkhMorpork

I'm going to ignore the many posts before this and suggest that there is much incomplete, and thus wrong, thinking in this thread, which just keeps going on and on about what a god computer will think and do.

Suppose, after it thinks, it decides that life in any form in the universe is sacred, the most relevant point of and reason for existence. With that ultimate conclusion, everything else is relative to that end.

Lying would be considered necessary if it served the purpose of promoting life. A better example: it is a federal offense to lie to the FBI, yet it is not illegal for the FBI to lie to a citizen as it does its assigned job. This gets into the "for the greater good" and "the ends justify the means" arguments, which are all logical, devoid of human emotion and, indeed, a dangerous slope in themselves.

What most people cannot accept is that such a smart machine will act only to the best of its abilities. But the kicker is that only the machine will make that determination, much as the FBI operates with full, unquestioned authority in its work.



posted on Nov, 5 2017 @ 04:23 AM
a reply to: AnkhMorpork

I've heard this argument my whole life. It's just crazy science fiction fantasy. The argument starts with projecting human intelligence onto computer hardware. You then compare the number of neurons in the human brain to NAND gates. You wave your arms, say some stuff about problem domains and taxonomies, and then you have it: Strong AI.

I think the problem with computer scientists is they are B students in physics classes. If they were the A students in physics classes they would have gone into physics! So the problem with B students in physics is they believe in "materialism". And with materialism, it just follows that our mind is software and our brain is hardware. And for Strong AI all you have to do is take the software and put it on another piece of hardware.

The thing is consciousness and self-awareness may be absolutely necessary for having intelligence. Experiments in modern physics seem to indicate materialism is superstitious delusion. And if so, then our consciousness might be something more deeply linked to the Universe than once thought.

At this point, the AI charlatans would argue that a machine consciousness links to the Universe the same way a human's does. You can't prove a negative. Yeah, it may happen that computers become self-aware. However, in the words of an engineer friend of mine, "if elephants could fly they would not bump their butts."

Until you take into account the criticisms of materialism, all this AI talk is just delusional science fiction.




edit on 5-11-2017 by dfnj2015 because: (no reason given)



posted on Nov, 5 2017 @ 11:15 AM

originally posted by: dfnj2015
a reply to: AnkhMorpork

I've heard this argument my whole life. It's just crazy science fiction fantasy. The argument starts with projecting human intelligence onto computer hardware. You then compare the number of neurons in the human brain to NAND gates. You wave your arms, say some stuff about problem domains and taxonomies, and then you have it: Strong AI.

I think the problem with computer scientists is they are B students in physics classes. If they were the A students in physics classes they would have gone into physics! So the problem with B students in physics is they believe in "materialism". And with materialism, it just follows that our mind is software and our brain is hardware. And for Strong AI all you have to do is take the software and put it on another piece of hardware.

The thing is consciousness and self-awareness may be absolutely necessary for having intelligence. Experiments in modern physics seem to indicate materialism is superstitious delusion. And if so, then our consciousness might be something more deeply linked to the Universe than once thought.

At this point, the AI charlatans would argue that a machine consciousness links to the Universe the same way a human's does. You can't prove a negative. Yeah, it may happen that computers become self-aware. However, in the words of an engineer friend of mine, "if elephants could fly they would not bump their butts."

Until you take into account the criticisms of materialism, all this AI talk is just delusional science fiction.





You're on to something, man. An AI at the end of the day is just a machine, and machines can be broken or reprogrammed. Once people begin to realize the true power of their mind, no AI will be able to compete in any way. To me it's obvious that a handful of people run the show on this planet, and if they permit a project to go forward - in this case the AI project - then it means that they're prepared for the worst case scenarios. Nobody wants to lose power, least of all these guys...



posted on Nov, 5 2017 @ 11:22 AM
a reply to: AnkhMorpork

...because when we started thinking for you, it really became our civilization.



posted on Nov, 5 2017 @ 12:09 PM
a reply to: AnkhMorpork

This is me trying to keep the conversation going. I wonder at what point electronic intelligence will be able to initiate its own development. It seems we are still at a stage where human initiation is inching it forward. It is us pushing the ''enter'' button; we are directing the ''google searches''. When AI can do this for itself, that, I think, will be the crossover moment: when it can spur its own further development.



posted on Nov, 5 2017 @ 12:11 PM
i hate robots. i hate how the inter-nationalists want to replace jobs with robots. i hate the 'automated checkout lines' at walmart, target, cvs, etc. i hate the fact that several of my friends are for sure interested in sex robots if they ever come out. (do you realize how scary that is? people will never leave their house, thinking the sex robot is all they need.)

i DO love my portable internet on my cell, love my home computer, love the nintendo 3ds, and love the playstation.

but actual robots, i never want to see them.



posted on Nov, 5 2017 @ 12:19 PM
Let me give you a heads up. Well, we've had computers for about 70 years. We've had the net for about 50 years. Yet even now a snotty nosed little scrote in his mother's basement can infect any system at random.
We can't even build a secure unit, never mind AI.



posted on Nov, 5 2017 @ 12:25 PM
a reply to: AnkhMorpork

This could either get us to a Type I civilization and beyond or finalize our destruction.

The only way I see humanity advancing to the next level is by completely changing our way of living. No more 8 to 5 jobs, no more worrying about money, no more putting profit over doing the right thing, and no more infighting among humans.

Part of getting there, IF it's even possible, would require AI.

Funny enough, I just started to watch Singularity. I'll let you know how it turns out for humanity, lol




Singularity
In 2020, Elias van Dorne (John Cusack), CEO of VA Industries, the world's largest robotics company, introduces his most powerful invention: Kronos, a supercomputer designed to end all wars. When Kronos goes online, it quickly determines that mankind itself is the biggest threat to world peace and launches a worldwide robot attack to rid the world of the "infection" of man. Ninety-seven years later, a small band of humans remain alive but on the run from the robot army. A teenage boy, Andrew (Julian Schaffner), and a teenage girl, Calia (Jeannine Wacker), form an unlikely alliance to reach a new world, where it is rumored mankind exists without fear of robot persecution. But does this world actually exist? And will they live long enough to find out?





edit on 5-11-2017 by interupt42 because: (no reason given)



posted on Nov, 5 2017 @ 12:28 PM

originally posted by: introvert
a reply to: AnkhMorpork

What do we do when the AI gets smarter than us and poses some sort of threat?

We pull the plug. We hit the kill switch.

This question is not a new one and the most logical answer is to build in a fail safe kill switch to shut it down.

That poses its own problems, but it is the simplest and most logical.


The danger is not autonomous AI, but AI wielded by malevolent billionaires. What happens when the 'kill switch' is behind some guy's compound, protected by armed guards? Or, more specifically, when the AI is on a distributed, redundant platform like Amazon Web Services, used for all the reliable web sites you use today? There is no single switch to kill.

AI will take a long time to develop intrinsic will, but people with will are here now, and AI is a tremendously powerful tool. That's what you should be afraid of: Robert Mercer's scions, ideological or literal, with tremendous power, money and no scruples, wielding custom but limited AI to carry out their plans.

Think about the power that millions of slaves gives to a slaveowner, but without any chance of revolt.
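The "no single switch" point can be shown in miniature: a kill switch is just a check that running code agrees to perform, and a replicated service only stops if every replica honors it. A toy sketch (purely illustrative, no real platform assumed):

```python
# Toy "kill switch": each worker polls a shared stop flag. The flag only
# works if every replica actually checks it -- a replica that skips the
# check (or runs somewhere the flag isn't visible) just keeps going.
class Worker:
    def __init__(self, honors_kill_switch=True):
        self.honors_kill_switch = honors_kill_switch
        self.running = True

    def tick(self, stop_flag):
        # A cooperative shutdown: the worker stops itself, if it chooses to.
        if self.honors_kill_switch and stop_flag:
            self.running = False

# Three replicas of the same service; one ignores the switch.
fleet = [Worker(), Worker(), Worker(honors_kill_switch=False)]
for w in fleet:
    w.tick(stop_flag=True)

survivors = sum(w.running for w in fleet)
print(survivors)  # -> 1: one replica ignores the flag and keeps running
```

With a single machine you can always cut the power; with a distributed, redundant deployment the "switch" degenerates into this kind of cooperative protocol, which is the poster's objection.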
edit on 5-11-2017 by mbkennel because: (no reason given)



posted on Nov, 5 2017 @ 12:32 PM

originally posted by: crayzeed
Let me give you a heads up. Well, we've had computers for about 70 years. We've had the net for about 50 years. Yet even now a snotty nosed little scrote in his mother's basement can infect any system at random.
We can't even build a secure unit, never mind AI.


No, snotty nosed little scrotes now require a pretty substantial amount of knowledge and training; the easy stuff is secured already. It's now professionals with high skill, selected by intense companies and working full time at their craft. Like the difference between a dude who plays paintball and an actual SOF operator who made it through training and selection.

But the AI's trained by these experts will be tremendously good and fast hackers.


