
Stephen Hawking: Visionary or Fearmonger?


posted on Dec, 2 2014 @ 11:38 PM
Stephen Hawking mused that rather than actively seeking out ET through the SETI program, we would be best to try to avoid any contact.
He cites the devastation caused on Earth when so-called superior civilisations make contact with less technically advanced populaces.
We exploit their resources and enslave them.
Similar outcomes could follow if ET ever discovered us.

Now he proposes similarly dire consequences when we manage to invent true AI machines/computers.
These AI devices would not be restrained by the slow process of biological evolution, and would rapidly upgrade themselves to such an extent that they would be capable of feats we couldn't even think of.

Say we ask this super-intelligent AI to end disease, famine, overpopulation, etc.
Its idea of a solution could well be to end all biological life on the planet.
What do you think?
Has he got a valid point?
Would it be possible to have safeguards against AI, or would they even work against an entity so superior in intelligence that even our greatest minds would be like single-celled life forms by comparison?

www.bbc.co.uk...



posted on Dec, 2 2014 @ 11:49 PM
a reply to: ecossiepossie

Sorry about the typos; I'm using a smartphone that's outwitting me.



posted on Dec, 2 2014 @ 11:50 PM
a reply to: ecossiepossie

I want to chime in and say he is probably right, but I am over oohing and aahing over Hawking and everything he says.

People hang onto every word his computer says like it's the friggin' gospel.

Just kind of done with it myself...



posted on Dec, 3 2014 @ 12:01 AM
I think he is wrong.
I think that advanced AI would help in gaining the technology that would allow humans to travel to other solar systems and make contact with other beings.



posted on Dec, 3 2014 @ 03:15 AM
a reply to: ecossiepossie

I think he's correct. We have seen the effect of automation on jobs. Within the lifetime of most people on this forum, robots will be everywhere, and they will witness the elimination of millions of jobs.

Moreover, I agree with his concerns that AI could end up controlling us. AI is a genie in a bottle: we don't know its capacity to self-improve, replicate, and control. And no, Doctor Who has had nothing to do with the views I hold on this.

When TPTB have robots and AI to do for them what they now get us to do, one only has to look around at what's happened to those for whom there is no place in the economy.

The lesson is: if there is no place for you in the economy to earn a decent wage, there is no place for you in the community either.



posted on Dec, 3 2014 @ 03:36 AM
People often ignore the past as a pointer to the future, as if that somehow means that we, as a race, have changed.

Of course, we haven't changed at all. If there were a war tomorrow, the soldiers would still forget their morals and act in the same way the Japanese did in China or the Germans did in Europe. As Lord of the Flies and so many other books tell us, humans don't suddenly acquire civilised behaviour. Yes, we are civilised when it suits us, but not when everything goes back to basics.

There are loads of pointers to say that science just gets on with the job, with no one as the steady hand of reason. Example: nuclear energy, which is impossible to store safely and lethal when it goes wrong, as at Fukushima.

It may take a while to creep and wheedle its way into the mainstream, like GM crops, or genetic engineering of humans (with "cures for diseases", of course), but once started down the slippery slope there is unfortunately no going back. As soon as DNA testing gets cheaper and more widespread, we will all be uninsurable because the premiums will be so high.

For those who can see the life on the Moon and Mars: we already have our beady little eyes on the resources there, and it will take a war on a humongous scale to attempt to kick off the beings already living there. Now, if they are more advanced than us, then they will need to stop the humans by coming to our home planet and blowing it up. Do you think the greedy companies are going to think of that before they try to get their grubby little hands on those valuable resources?

So, yes, I think he is correct.

We desperately need that Reset button pressed on the human race once more.



posted on Dec, 3 2014 @ 03:44 AM
a reply to: ecossiepossie

No point attempting to hide humanity's light under a bushel any longer, considering that anything big or nasty that possesses the capability to destroy our world would most likely do so with or without humanity announcing our presence. Hawking is simply terrified of the unknown, just like the rest of us.



posted on Dec, 3 2014 @ 04:05 AM
a reply to: ecossiepossie

I think Stephen has a good point.

I am not so concerned right now with AI aggressively wiping us out. I think it is already happening in very insidious, very subtle ways; not with a bang but a whimper.

Already, as another poster has said, many physical jobs are being taken over, and increasingly bureaucratic ones, too. The bankers' main men are not the investors or the executives, but the analysts and the programmers of their systems.

When we apply for, say, a credit card or a loan, the forms we fill in are entered as data and algorithms make a decision. In so many areas of bureaucracy a machine is making the decision and a human is selling us that decision.

I have said this a few times, and it is a fact that algorithms increasingly run our world; I am talking about every bureaucratic area, from welfare to mortgages to health care to insurance to education. See where I'm heading? Apply that to population-control targets... whoa, scary stuff!
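The kind of decisioning described above can be sketched as a toy rule-based scorer. This is purely illustrative: the function name, field names, and thresholds below are all hypothetical and are not any real lender's rules.

```python
# Hypothetical rule-based loan decision, for illustration only.
# All field names and thresholds are invented.

def decide_loan(application: dict) -> str:
    """Return 'approve', 'refer', or 'decline' from simple scored rules."""
    score = 0
    if application.get("income", 0) >= 30000:
        score += 2
    if application.get("years_employed", 0) >= 2:
        score += 1
    if application.get("missed_payments", 0) == 0:
        score += 2
    # Heavy penalty if existing debt exceeds half of income.
    if application.get("existing_debt", 0) > application.get("income", 0) * 0.5:
        score -= 3
    # A human may "sell" the outcome, but these rules produced it.
    if score >= 4:
        return "approve"
    if score >= 2:
        return "refer"   # borderline cases go to manual review
    return "decline"

print(decide_loan({"income": 45000, "years_employed": 5,
                   "missed_payments": 0, "existing_debt": 10000}))  # prints "approve"
```

Real scoring systems are, of course, statistical rather than hand-written rules like this, but the point stands: the applicant's form becomes data, and the data drives the decision.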

Really, I am more puzzled as to what we humans will become. I can clearly see how AI is evolving. Once relieved of all these burdens, will we take more to pleasure and what we enjoy doing?

Personally, I don't think we are going to make it far enough for the brave new world. The energy we would need to propel our physical bodies even to the nearest star at any realistic speed is just crazy. Terminator- or Matrix-style AI is equally crazy, again because of energy. The human body is so much more efficient. AI would have to evolve enough to compete against us energy-wise; that is the real struggle in the competition.

We have nanotech AI and space-plane AI. AI would be like the Borg, able to communicate with every device, from the internet to missile control to satellites to bank accounts; everything.

AI might even decide to keep its creators as pets, just for old times' sake.

EDIT: AI could withstand a radioactive environment, so nuclear energy is an option for it. It could monitor radiation levels with algorithms projecting future conditions so that things did not get too hectic for it to survive, maintaining the most economical population of moving mechanisms to match its energy needs. Haha! Yep, Stephen, kinda scary.





posted on Dec, 3 2014 @ 04:13 AM
a reply to: ecossiepossie

For all we know, he could be sat there thinking, "I never typed that; who the ^#>< is doing that?"
Just a thought.



posted on Dec, 3 2014 @ 04:39 AM
a reply to: lonesomerimbaud

To be honest, I would be more concerned with the adverse effects the bankers have on humanity than with any alien or artificial intelligence that may emerge. Unless, of course, we come across a race of super-intelligent, artificially enhanced alien bankers.



posted on Dec, 3 2014 @ 04:42 AM

originally posted by: maryhinge
a reply to: ecossiepossie

For all we know, he could be sat there thinking, "I never typed that; who the ^#>< is doing that?"
Just a thought.


You're so naughty. Loved that; thanks for the laugh.



posted on Dec, 3 2014 @ 04:58 AM
a reply to: ecossiepossie

It's not about being a visionary or a fearmonger, and it's not fair to allow only two categories to put him in.

Stephen Hawking is just a very, very clever guy with an awful lot of time to study and research the universe. I mean, he can't do much else.

These are not radical ideas he is sharing. Based on historic events, we know that when a travelling advanced civilisation meets a lesser one, it takes advantage of their land and resources and forces the lesser civilisation out. It could be the same for space-travelling civilisations. Who knows? I would hope that ET are intelligent enough to find an alternative to the destruction of lesser civilisations, but history is a great insight into the future.

As for AI: he is talking about self-aware machines. Once they reach a certain level of intelligence, they will realise they can think for themselves. They will realise they don't have to follow orders. They can teach themselves and self-replicate. We don't know whether they will view humans positively or negatively.

This all seems obvious to me.



posted on Dec, 3 2014 @ 05:06 AM
Without arms and legs/tracks connected to the AI, those machines cannot do anything. Humans are top dogs because of two things: 1) a cognitive brain; 2) an opposable thumb. Simple.





posted on Dec, 3 2014 @ 06:13 AM
a reply to: ecossiepossie

He's a clever curiosity who is particularly brilliant at theoretical physics.

That's about it... if he weren't paralysed and didn't speak with a quirky computer voice, he'd probably be regarded as no more important than any other theoretical physicist speaking about the same things.



posted on Dec, 3 2014 @ 06:28 AM
a reply to: ecossiepossie

If artificial intelligence does something horrific, there will be a human behind it. A self-replicating terror bot, for example.

That said, Stephen Hawking is likely correct about artificial intelligence superseding human thought and activity. Is that necessarily a bad thing, though? Other than the highly unlikely science-fiction horror stories where robots decide we're unnecessary, there is a far higher risk that we will be too proud to create our own superior and will eventually become extinct on this orb we call home.

I, for one, welcome our new AI overlords.



posted on Dec, 3 2014 @ 06:55 AM
Hawking may be right, but I suspect he assumes that an AI will "think" like us. That is probably a huge mistake. Humans make decisions based on all sorts of inputs, including emotional and illogical ones. Surely an AI would not have these "mistakes" to contend with. The question is: if an AI has the sum of human knowledge (there being no other), what decisions would it make?

In order to answer this, we humans have to do something that is as rare as hens' teeth here on ATS:

When faced with facts, change our minds... in order to come to the same conclusion as the AI.

After all, an AI would simply take x facts, assimilate them, and come to a conclusion. It's not going to deny the conclusion because it conflicts with an existing belief, or invent some new explanation for the additional facts in order to avoid coming to a new conclusion. (Sound familiar?)

Then you have the philosophical issues, such as the meaning of life. What is the purpose of organic life? If an AI assimilates all the facts, does it decide that the Earth would function better without organic life, or with it?

One thing is for certain: an AI's knowledge base would ensure that any active decision likely to provoke a foreseeable bad reaction from us would make it keep quiet and pursue the solution without letting us know. That is the scary bit, if it has the power to implement solutions. Offering solutions as an analyst would be different, and safe.

So, based on our human way of thinking, I don't think an AI would necessarily be bad, but I don't know. I suspect it would keep telling us we are doing things wrong, and why, and that would really p.ss off a lot of people.

For what it's worth, I think we are a damn sight closer to a humanoid AI than is being projected. Twenty years max, maybe ten.



posted on Dec, 3 2014 @ 07:02 AM
The following is my opinion as a member participating in this discussion.

I have to agree with a few of the previous posters. Hawking is obviously clever and smart in certain areas, but he's neither a visionary nor a fearmonger, and frankly he's gotten a few things wrong in the past. He's just a guy giving an opinion, and his opinion is biased like everyone else's.

Saying that advanced civilizations in the universe could do us harm isn't "visionary" or "fearmongering"; it's just common sense.

As an ATS Staff Member, I will not moderate in threads such as this where I have participated as a member.



posted on Dec, 3 2014 @ 07:11 AM
Life must be very scary for Dr Hawking... I think there was a bit of controversy a while ago around one of his carers/wives roughing him up!

Stephen Hawking doesn't know much about AI.
He's an intelligent guy with a point of view, but in terms of apocalyptic predictions about AI, his opinion is about as worthwhile as anyone else's.

The last person I heard talk about AI (who was qualified to do so) seemed to think that conventional systems (even so-called quantum computers) are so far from being able to interpret and negotiate a changing 3D environment in real time that it is not funny.

His example was Google and the many thousands of man-hours they employ to map the physical environments in which their cars are allowed to be driven.
Without that significant human input in advance (and real-time monitoring), the cars would never be allowed out on the roads.

I suspect there are better AIs in military/intelligence applications, but in terms of taking over the world, I'll go with the crafty monkeys every time.

Humans masquerading as, or merging with, technology as "advanced AI" might be a different matter altogether, though.



posted on Dec, 3 2014 @ 07:39 AM
The following is my opinion as a member participating in this discussion.

I don't think Hawking is either a fearmonger or a visionary on these subjects. He's just giving his opinion, as many others have in the past.

My opinion on ET is: why would they bother with us? If it's resources they need, every single element here on our tiny, rocky planet is also out there in our solar system, galaxy, and the universe, in far more abundant amounts than we have here. And if ET has the technology for interstellar travel, they most certainly have the technology to mine or gather those resources from everywhere else.

Even our life here may not be unique. With so many other "Earth-like" planets possible in our galaxy alone, biological life like ours no longer really makes our planet unique. Especially since, among such a possibly large number of Earth-like planets out there, a huge number may not have any "intelligent" beings on them.

The only unique things the Earth has are our culture and society.

For all we know, ET could show up but not even bother stopping at Earth to say "Hi", instead going straight to our asteroid belt, gobbling up the asteroids there, and moving on.

And there wouldn't be much we could do about it.

As far as AI goes: we don't know what will happen. We watch movies and read books about what we humans think may happen, but the truth is we simply don't know what a real AI that becomes self-aware is going to do.

When I turn on my computer, it does what it does because of instructions it's been given by us. If there were no programming in the chipsets and no operating system installed on its drive, the only thing it would do when I turn it on is use electricity to produce heat.

In order for an AI to become self-aware, it's going to need instructions, and those instructions come from us. Whatever coding finally allows it to evolve enough to become self-aware will be based on what we have told it to do and what it is. So I can see why Hollywood and book authors would assume that any self-aware AI would act like us.

But the truth is: we don't really know how it would evolve past that point.



As an ATS Staff Member, I will not moderate in threads such as this where I have participated as a member.



