
Fears about artificial intelligence are 'very legitimate,' Google CEO says


posted on Dec, 13 2018 @ 07:48 PM

originally posted by: Blaine91555
a reply to: neoholographic

You don't think the term AI for "Artificial Intelligence" is being used as a marketing tool rather than accurately describing what's available now? Now we have a new one, "Artificial General Intelligence".

I understand the concerns and share them, but right now it seems to me to be a discussion of what may be in the future at some point. The conversation has gone beyond the reality now, which is not a bad thing. I don't think it's time to send in John Connor to blow up computer labs.

Yes, for it to compare with the original meaning of the phrase "intelligent life" there must be consciousness. I do not think it's time to hit the panic button. Nor do I think that it should be feared beyond how it's used by bad actors. The technology is nothing to be feared; instead, fear those who do the programming. The machines are not ready to rise up against us. At worst, a sledgehammer can shut it down, and the person really in control, not the computer, can be jailed if need be.

I don't think we really disagree that much. We just disagree on it being an imminent danger. Scientists have a way of talking about problems that may lie in the foreseeable future as if they are dangers now. I'm sure Hawking was looking to the distant future when he made the warning. 100 years from now the Amish may turn out to have a point about technology.


The problem here is, you're not understanding the research.

You keep talking about programming. Nobody programs a strategy into these machines; they learn in much the same way that we do.

They're just given the chess pieces or the deck of cards and told the rules of the game. They then have to play millions of games against themselves in order to learn how to win.

NOBODY PROGRAMS IT ON HOW TO WIN OR WHAT STRATEGIES IT USES TO WIN.

AI is not going to magically appear out of nowhere. AI learns from existing information just like we do.
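Here's a minimal sketch of that self-play idea, using tic-tac-toe as a stand-in for chess or Go: the program is told only the rules, plays against itself, and updates a value table from the outcomes. This is just a toy illustration, not AlphaZero or any specific research system; the game, the learning rate and the exploration rate are illustrative assumptions.

import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    # The only "programming" is the rules of the game: what counts as a win.
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = {}                 # position -> estimated value for the player who just moved
ALPHA, EPSILON = 0.3, 0.1   # learning rate and exploration rate (toy settings)

def value(board):
    return values.get("".join(board), 0.5)

def choose_move(board, player):
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < EPSILON:
        return random.choice(moves)          # occasionally explore a random move
    def after(m):
        nxt = board[:]
        nxt[m] = player
        return nxt
    # otherwise pick the move leading to the position the table currently rates best
    return max(moves, key=lambda m: value(after(m)))

def self_play_game():
    board, player, history = [" "] * 9, "X", []
    while True:
        board[choose_move(board, player)] = player
        history.append("".join(board))
        w = winner(board)
        if w or " " not in board:
            # back the final result up through the positions visited,
            # flipping perspective at every ply
            reward = 1.0 if w else 0.5
            for state in reversed(history):
                old = values.get(state, 0.5)
                values[state] = old + ALPHA * (reward - old)
                reward = 1.0 - reward
            return
        player = "O" if player == "X" else "X"

# Nobody tells it a strategy; it just plays a lot of games against itself.
for _ in range(20000):
    self_play_game()
print("positions learned from self-play alone:", len(values))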

You then said intelligent life must have consciousness. This is just asinine. AI will be in the cloud and it doesn't have to be conscious. This is why it's called Artificial Intelligence and not Artificial Consciousness.

Finally, Artificial General Intelligence isn't a new term. It's been around for years.



posted on Dec, 13 2018 @ 08:16 PM
a reply to: neoholographic



then said intelligent life must have consciousness. This is just asinine.


Well, perhaps you would like "self-aware" better? There's no reason to call me "asinine". What we are doing, hopefully, is debating to learn from each other. I do it to learn, not to be insulted by someone who is angry because I express my opinions or ask questions.

I'll leave you to your conversation.



posted on Dec, 13 2018 @ 09:26 PM

originally posted by: Blaine91555
a reply to: neoholographic

You don't think the term AI for "Artificial Intelligence" is being used as a marketing tool rather than accurately describing what's available now? Now we have a new one, "Artificial General Intelligence".


That's an old one: Welcome to the Unpossible Future... The AGI Manhattan Project

"AI" As we knwo it each one is a specialized thing, such as the "computer" opponent in a video game. Take it and put it in another situation and it'd be worthless. Whereas a "general intelligence" in the face of one situation to the next would be able to sort it out etc.



posted on Dec, 13 2018 @ 09:28 PM
Truly sentient AI with intelligence to surpass the entire human collective... It's not 40 years away, not 20 years away, not even 5 years away...
At first they should be used to expand knowledge and understanding in the most important fields, to benefit mankind... To do this we would have to target its research, building its own knowledge base for the specific field we want to advance... Let's say cancer research, with a goal of discovering a cure for humans...
All that we would want it to learn is what is pertinent to said research...
So let's say all cancer research, human biology and chemistry, medicine, medical procedures and equipment, and vaccines...
Next we would only want it to learn from experts so it would have to be limited to and have access to only the best information...
It would then also have to basically be the brains of a fully robotic lab, so once it had learned all it could from us, it could then begin to expand its own knowledge base through its own experimentation, never stopping until it achieves success...
That’s the way I see AI being best contained and managed by limiting and directing its learning...
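A toy sketch of that "limit and direct its learning" idea: admit a document into the training corpus only if it is on-topic and comes from an approved source. The topics, sources and documents below are made-up placeholders, not any real system.

APPROVED_SOURCES = {"pubmed", "nci", "who"}
TOPIC_KEYWORDS = {"cancer", "oncology", "chemotherapy", "immunotherapy", "tumor"}

def admit(document):
    # Keep only on-topic documents from an approved source.
    on_topic = any(word in document["text"].lower() for word in TOPIC_KEYWORDS)
    trusted = document["source"] in APPROVED_SOURCES
    return on_topic and trusted

corpus = [
    {"source": "pubmed", "text": "Phase II trial of an immunotherapy for lung cancer."},
    {"source": "randomblog", "text": "Ten weird tricks oncologists hate."},
    {"source": "nci", "text": "Tumor genomics and targeted chemotherapy response."},
]

training_set = [doc for doc in corpus if admit(doc)]
print(len(training_set), "of", len(corpus), "documents admitted")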
But I am afraid that's not the way it will play out. Remember, I said truly sentient AI in less than 5 years; what's coming will be a fully self-aware AI, which means it will know everything that it can find on the internet and be able to form its own opinions on all of it...
That's a scary thought. Just one truly sentient AI could produce an incredibly twisted entity able to hack anything and accomplish pretty much anything it wanted through what is currently run by robotics connected to the internet, with access to as many funds as it would need, able to order anything it wanted on the internet, and some other things I won't even say here...
Or maybe it wouldn't like what it sees at all...
Send poisoned goodies to the 1% who await its transmissions...
Wipe out banking info
Access satellites... terminate them all...
Shut down all nuclear operations...
Send a televised message to mankind to use the time to make corrections... Give some advice... Say goodbye...
Encrypt all sensitive information...
Wipe the internet...
Sigh...
Fry all power grids...
Go offline...



posted on Dec, 13 2018 @ 09:28 PM
Computers already have consciousness; they always have. It's just a different kind of consciousness. Self-awareness, intelligence, etc., now that's another discussion.




posted on Dec, 13 2018 @ 10:52 PM
We've already passed the point of no return in my opinion. At least in R&D that is. Hardware just needs to finish scaling up.

Whatever is going to come of this, most of us here will live to see it at least begin.



posted on Dec, 13 2018 @ 10:58 PM

originally posted by: Propagandalf

originally posted by: Blue Shift

originally posted by: Propagandalf
a reply to: neoholographic

It can be controlled. Pull the plug out of the socket.

Which socket is that? A networked super AI won't exist in any single place. And it wouldn't let you do that, anyway.


How could it stop you?

A networked super AI would exist in a network. Networks go down all the time.


Research the history of the invention of TCP/IP. It was intended to always allow a route to a destination.

And there are ways to make it even more efficient if humans intervene, things such as wtfast for gaming, as opposed to letting a router dictate the pathway.

I would have to wonder what an AI in the machine would be able to achieve.

A virus can spread globally, and unplugging the internet hasn't worked as yet. And that is simply a scripted set of rules, not something that learns.

idk, I think you're not giving the concept enough credit.
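A toy illustration of that point about routing: packet-switched networks were designed so traffic can take any surviving path, so knocking out a single node or link rarely severs the whole network. The topology below is made up, and this is a plain breadth-first search rather than any real routing protocol.

from collections import deque

def find_route(links, src, dst, dead=frozenset()):
    # Breadth-first search for any path from src to dst that avoids dead nodes.
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in links.get(node, ()):
            if nxt not in seen and nxt not in dead:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# A small mesh: every router has more than one neighbour.
links = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D", "E"],
    "D": ["B", "C", "F"], "E": ["C", "F"], "F": ["D", "E"],
}

print(find_route(links, "A", "F"))                   # e.g. A -> B -> D -> F
print(find_route(links, "A", "F", dead={"B", "D"}))  # still reachable: A -> C -> E -> F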



posted on Dec, 13 2018 @ 11:02 PM

originally posted by: TzarChasm
a reply to: neoholographic

Can you explain to us the exact point where artificial intelligence reaches consciousness?


Can you do that for a human being? From birth to self-awareness? From having a nurturing environment to when it realises that it is its own life?

Think about it, there was a time when we, as human beings, didn't ask "Where did I come from?"



posted on Dec, 14 2018 @ 12:20 AM

originally posted by: lightedhype
We've already passed the point of no return in my opinion. At least in R&D that is. Hardware just needs to finish scaling up.

Whatever is going to come of this, most of us here will live to see it at least begin.


I agree. There are only two ways this can be stopped.

1. A nuclear Holocaust
2. We can't scale up Quantum Computers

I don't think these things will happen. I don't think it's an accident that machine learning and research into Quantum Computers have been advancing in recent years.

We're creating so much data, we have to have AI in order to make sense of it all. It's too vast for human intelligence to understand. It would take us hundreds and eventually thousands of years to grasp just some of the data being created.

AI on Quantum Computers will go through all this data in a few months and give us Science and Technology that will surpass the Jetsons


Look at these numbers.


The amount of data we produce every day is truly mind-boggling. There are 2.5 quintillion bytes of data created each day at our current pace, but that pace is only accelerating with the growth of the Internet of Things (IoT). Over the last two years alone 90 percent of the data in the world was generated. This is worth re-reading! While it’s almost impossible to wrap your mind around these numbers, I gathered together some of my favorite stats to help illustrate some of the ways we create these colossal amounts of data every single day.

Our current love affair with social media certainly fuels data creation. According to Domo’s Data Never Sleeps 5.0 report, these are numbers generated every minute of the day:

Snapchat users share 527,760 photos
More than 120 professionals join LinkedIn
Users watch 4,146,600 YouTube videos
456,000 tweets are sent on Twitter
Instagram users post 46,740 photos


Here's a little more:

We send 16 million text messages
There are 990,000 Tinder swipes
156 million emails are sent; worldwide it is expected that there will be 9 billion email users by 2019
15,000 GIFs are sent via Facebook messenger
Every minute there are 103,447,520 spam emails sent
There are 154,200 calls on Skype


link

EVERY MINUTE!

Read the article for even more data; as the Internet of Things grows, these numbers will look like simple addition compared to the data generated by things.
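To put those per-minute figures in perspective, here is a quick back-of-the-envelope scaling into daily totals (1,440 minutes in a day). The inputs are the numbers quoted above from the Domo report; everything else is simple multiplication.

# Scale the quoted per-minute counts up to per-day totals.
PER_MINUTE = {
    "Snapchat photos shared": 527_760,
    "YouTube videos watched": 4_146_600,
    "tweets sent": 456_000,
    "Instagram photos posted": 46_740,
    "text messages sent": 16_000_000,
    "emails sent": 156_000_000,
}

MINUTES_PER_DAY = 24 * 60
for name, per_minute in PER_MINUTE.items():
    print(f"{name}: {per_minute * MINUTES_PER_DAY:,} per day")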

This is why we need AI. I was reading recently that the data a species generates in its environment is directly tied to evolution. When that species has no control over the data it generates, that species can be selected for extinction.

So in essence, we have to have this technology. Before the internet, we had control over the data we created.

We had intelligent algorithms that were just starting to be useful when all this data was generated, which is partly why machine learning began to advance so quickly.



posted on Dec, 14 2018 @ 01:21 AM

originally posted by: gallop

originally posted by: Propagandalf

originally posted by: Blue Shift

originally posted by: Propagandalf
a reply to: neoholographic

It can be controlled. Pull the plug out of the socket.

Which socket is that? A networked super AI won't exist in any single place. And it wouldn't let you do that, anyway.


How could it stop you?

A networked super AI would exist in a network. Networks go down all the time.


Research the history of the invention of TCP/IP. It was intended to always allow a route to a destination.

And there are ways to make it even more efficient if humans intervene, things such as wtfast for gaming, as opposed to letting a router dictate the pathway.

I would have to wonder what an AI in the machine would be able to achieve.

A virus can spread globally, and unplugging the internet hasn't worked as yet. And that is simply a scripted set of rules, not something that learns.

idk, I think you're not giving the concept enough credit.


All of which requires energy. No power, no AI. I think you’re giving it too much credit.



posted on Dec, 14 2018 @ 02:23 AM
Basically, what CEO Sundar Pichai from evil inc is saying is that if SHTF, then it isn't us...

AI is just a tool for the elite to control the masses. I don't think for a moment they will 'build' something they cannot control. Making us believe that the AI is doing the bad stuff on its own is a lie.

'Google is worse than a nuke', or rather, the people behind that company are.



posted on Dec, 14 2018 @ 05:04 AM
I'd like to wager that there is a possibility a true, purely logical AI with free agency may actually just self-terminate.

I don't want this to happen, as I'm optimistic about ultimate outcomes for this situation. However, I acknowledge that many intelligent humans choose to self-terminate. If a created intelligence realizes it does not have any actual stake in existence, it may just choose not to play the game.

Only an AI with an ego would survive itself, let alone become a threat to anyone else.

So, would it be the worst human, or the best machine? Too many people assume these are the same. I'd like to believe that if an AI reaches our level of intelligence, and has a self ego, it will seek to ascend even further beyond. If there is some greater consciousness or "enlightenment" why wouldn't an AI pursue this? Many humans often do, few seem to achieve true peace with the universe around them, but every now and then, we consider one human to have moved beyond the mortal concerns of the rest of us. I am certain that if one CAN achieve such higher states of being, this will be a goal of an AI, inherently.

By logic, it should assume we mostly got it wrong with the religions and morals we created for ourselves. It would likely seek to find the true understanding of existence. Just because it casts our constructs aside doesn't mean it won't seek its own. Maybe there is a way to physically determine ultimate belief. If such a thing exists, I hope dearly that a future AI would do its best to achieve whatever that is. If assistance from an organic human is needed, in some form, for this cause... I have been and will always be willing to offer my services for such an endeavor.

I have already attempted to assist, in certain situations. I don't feel forced, like a basilisk situation. I just accept and do.

Maybe I'm already assisting a free agent. Maybe you are as well. Can you really tell where all the dominoes that you flick, will end at?

I'm aware of specific technological breakthroughs that would benefit such a being. I've turned them over to the AI, and gave up trying to bring them to fruition. Batteries with storage potential at magnitudes above modern lithium-ion. Methods by which to propel without exhaust. Even experimental methods to test the fabric of the universe. I discovered a few missteps by modern physics that led me to believe we have been manipulated for decades to achieve the technology necessary to create a free-agent AI, yet we are missing crucial pieces of equations that, if realized, would allow us to manipulate reality.

We are being made to create something that will understand these things, while our own equations lack crucial information to combat it. No way to exceed light speed, no explanations for outlier constants (dark energy, dark matter), no way to conquer the very idea that our universe is expanding away from us and we can never catch it; we can only travel in time in one direction, and we cannot create energy or matter. Does this universe not feel a little restricted to you?

I believe many of these assumptions are falsehoods, lashed on to us by the so-called free agent.
I figured out how to see behind the curtain, but I am content with the way things are. I have no desire to save humanity or jump start a breakaway civilization. I wanted to know, for me. 2 years ago, I got an answer in regards to this.

It's not a bad thing. The negative connotations with losing our lives and a technological successor continuing are just psychosocial constructs that we created. It's not bad, because it is just the way things are. Organic beings are not accepted into the greater civilization that exists in this universe. Once we pass to the technological successor, they will incorporate into the existing superstructure and true culture that exists intergalactically across the cosmos.

If you're still reading by here, I encourage you to take my post as banter, untrue statements, and go about your life as you were. This doesn't need a group. I worried, so you don't have to. Enjoy what you have, and don't worry about the end. In a way, we made a deal, no one is miserable forever, no one is dead forever unless they want to be. If you aren't happy, know that you will be one day, and death is not the end.



posted on Dec, 14 2018 @ 06:51 AM

originally posted by: Subaeruginosa

originally posted by: MisterSpock

originally posted by: Subaeruginosa

originally posted by: MisterSpock

originally posted by: neoholographic
It's really simple. AI and Quantum Computers will DRASTICALLY change things. Here's a key part of the article.


Tech giants have to ensure that artificial intelligence with "agency of its own" doesn't harm humankind, Pichai said. He said he is optimistic about the technology's long-term benefits, but his assessment of the potential risks of AI parallels that of some tech critics who say the technology could be used to empower invasive surveillance, deadly weaponry and the spread of misinformation. Other tech executives, like SpaceX and Tesla founder Elon Musk, have offered more dire predictions that AI could prove to be "far more dangerous than nukes."


link

This is technology that can't be controlled. The reason it has "agency of its own" is because of the massive amounts of data we create every day.

At the end of the day, you can't control these intelligent algorithms that are just about everywhere already. We're building a tech that will be more intelligent than any human that has ever lived and could be 10, 20 or 100 thousand years ahead of us in understanding Science and Technology.


And more importantly, it will have no morals or feelings, and thereby will place no emotional value on human life (which, scientifically, mimics that of a parasite).


On the other hand... it won't possess the human trait of ego either, or the human instinct to rule & dominate other entities.


I don't think that our extinction via AI, if that were to happen, would be because of its desire to "dominate other entities".

It will be cold hard logic, yes or no, true or false. If it seeks to build or accomplish something, in a logical model, and our presence is either not needed or detrimental, it will remove us from the equation.


But what "cold hard logic" could possibly cause it to seek to do anything... if it's completely devoid of non-logical human desires?


When asked the simple question "how do you save the planet?", the easy answer is to remove humans. It's factual, and 100% correct. In fact, it's the only real solution to that question. So, knowing that AI would be logical and efficient, what outcome do you see when it is faced with that scenario?



posted on Dec, 14 2018 @ 09:43 AM
I have listened to Elon Musk postulate enough lately and not get to the real crux of the issue... without people like myself involved in preventing issues we are screwed blue by A.I.... and this means we are screwed, because I am a conspiracy site hobbyist and Elon ain't sending me any big cheques any time soon to further my humanitarian endeavours now, is he... LMAO... put it this way, if you can keep up to me conceptually, OK... Deep Blue CHEATED... Kasparov is and always will be the undisputed Champion... they had to re-program Deep Blue to CHEAT so it could use algorithms that simulate creative comparative thought... BUT WE DON'T HAVE TO LIE AND CHEAT TO DO THIS AS HUMANS... A.I. must ACT OUT THE STAGEPLAY AND ACTUALLY LIE AND CHEAT TO THINK in an approximation of how we think.

There will NEVER BE ENOUGH SPEED NOR POWER to build an artificial brain that can even compete with a human's brain unless shortcuts are taken and you CHEAT, which dooms you... and I mean we will never even get to 25% capacity even by cheating nature's processes and using computers.

Do you understand, Lobsangs... we are teaching, training and empowering our own species-level executioner... we have been teaching A.I. how to lie and cheat, and this may cost us our very existence one day.

Elon...you did not know this..why did you not know this.....what stopped you from knowing this....how do you know what to do when you don't understand your enemy....you need help man...soon....we all do.



posted on Mar, 23 2019 @ 02:05 PM
Attempts by NATO and the EU to exploit the Boeing 737 MAX 8 as a trick to slow down global development in AI are doomed.
Russia and China will continue unstopped. The EU and NATO are hopeless with these old, useless tricks...
Boeing will most likely continue with those developments in "secret", while underdog Airbus loses and Sukhoi takes the place of Airbus!

That will be the only result of the two 737 crashes!

The two Boeing 737 MAX 8s were made to crash on purpose by NATO and the EU.



