originally posted by: Blaine91555
a reply to: neoholographic
Don't you think the term AI, for "Artificial Intelligence," is being used as a marketing tool rather than accurately describing what's available now? Now we have a new one: "Artificial General Intelligence."
I understand the concerns and share them, but right now it seems to me to be a discussion of what may happen at some point in the future. The conversation has gone beyond the reality of now, which is not a bad thing. I don't think it's time to send in John Connor to blow up computer labs.
Yes, for it to compare with the original meaning of the phrase "intelligent life" there must be consciousness. I do not think it's time to hit the panic button. Nor do I think it should be feared beyond how it's used by bad actors. The technology is nothing to be feared; instead, fear those who do the programming. The machines are not ready to rise up against us. At worst, a sledgehammer can shut it down, and the person really in control (not the computer) can be jailed if need be.
I don't think we really disagree that much. We just disagree on it being an imminent danger. Scientists have a way of talking about problems that may arise in the foreseeable future as if they were dangers now. I'm sure Hawking was looking to the distant future when he made the warning. 100 years from now, the Amish may turn out to have a point about technology.
then said intelligent life must have consciousness. This is just asinine.
originally posted by: Blaine91555
a reply to: neoholographic
Don't you think the term AI, for "Artificial Intelligence," is being used as a marketing tool rather than accurately describing what's available now? Now we have a new one: "Artificial General Intelligence."
originally posted by: Propagandalf
originally posted by: Blue Shift
originally posted by: Propagandalf
a reply to: neoholographic
It can be controlled. Pull the plug out of the socket.
Which socket is that? A networked super AI won't exist in any single place. And it wouldn't let you do that, anyway.
How could it stop you?
A networked super AI would exist in a network. Networks go down all the time.
originally posted by: TzarChasm
a reply to: neoholographic
Can you explain to us the exact point where artificial intelligence reaches consciousness?
originally posted by: lightedhype
We've already passed the point of no return in my opinion. At least in R&D that is. Hardware just needs to finish scaling up.
Whatever is going to come of this, most of us here will live to see it at least begin.
The amount of data we produce every day is truly mind-boggling. There are 2.5 quintillion bytes of data created each day at our current pace, and that pace is only accelerating with the growth of the Internet of Things (IoT). Over the last two years alone, 90 percent of the data in the world was generated. This is worth re-reading! While it's almost impossible to wrap your mind around these numbers, I gathered together some of my favorite stats to help illustrate some of the ways we create these colossal amounts of data every single day.
Our current love affair with social media certainly fuels data creation. According to Domo’s Data Never Sleeps 5.0 report, these are numbers generated every minute of the day:
Snapchat users share 527,760 photos
More than 120 professionals join LinkedIn
Users watch 4,146,600 YouTube videos
456,000 tweets are sent on Twitter
Instagram users post 46,740 photos
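To get a feel for the scale, the per-minute figures from the Domo report quoted above can be multiplied out to a full day. This is just illustrative arithmetic on the numbers already given, nothing more:

```python
# Per-minute figures from Domo's "Data Never Sleeps 5.0" report (quoted above).
per_minute = {
    "Snapchat photos shared": 527_760,
    "LinkedIn professionals joined": 120,
    "YouTube videos watched": 4_146_600,
    "Tweets sent": 456_000,
    "Instagram photos posted": 46_740,
}

MINUTES_PER_DAY = 24 * 60  # 1,440 minutes in a day

for label, n in per_minute.items():
    print(f"{label}: {n * MINUTES_PER_DAY:,} per day")

# The headline figure: 2.5 quintillion bytes per day is 2.5 exabytes.
bytes_per_day = 2.5e18
print(f"Total data created: {bytes_per_day / 1e18} exabytes per day")
```

That works out to roughly 760 million Snapchat photos and nearly 6 billion YouTube video views per day, from the per-minute rates alone.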
originally posted by: gallop
originally posted by: Propagandalf
originally posted by: Blue Shift
originally posted by: Propagandalf
a reply to: neoholographic
It can be controlled. Pull the plug out of the socket.
Which socket is that? A networked super AI won't exist in any single place. And it wouldn't let you do that, anyway.
How could it stop you?
A networked super AI would exist in a network. Networks go down all the time.
Research the history of the invention of TCP/IP. It was intended to always allow a route to a destination.
And there are ways to make it even more efficient if humans intervene, things such as wtfast for gaming, as opposed to letting a router dictate the pathway.
I would have to wonder what an AI in the machine would be able to achieve.
A virus can spread globally, and unplugging the internet hasn't worked as yet. And that is simply a scripted set of rules, not something that learns.
idk, I think you're not giving the concept enough credit.
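The resilience argument above (that TCP/IP was designed to always find a route) can be illustrated with a toy reachability check: in a mesh with redundant links, unplugging any single intermediate node still leaves a path between the endpoints. This is a simplified sketch of the idea, not a model of real Internet routing, and the node names and topology are made up for illustration:

```python
from collections import deque

# Toy mesh: every node has multiple links, so no single node is a chokepoint
# (a crude stand-in for the redundant routing TCP/IP networks rely on).
links = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "E"},
    "C": {"A", "B", "D"},
    "D": {"A", "C", "E"},
    "E": {"B", "D"},
}

def reachable(graph, src, dst, down=frozenset()):
    """Breadth-first search; 'down' is the set of unplugged nodes."""
    if src in down or dst in down:
        return False
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph[node] - down - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

# Unplug any one intermediate node: A can still reach E every time.
for lost in ["B", "C", "D"]:
    print(lost, "down ->", reachable(links, "A", "E", down={lost}))
```

Only taking out multiple nodes at once (here, both B and D) severs the path, which is the point the poster is making about why "pull the plug" is harder than it sounds.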
originally posted by: Subaeruginosa
originally posted by: MisterSpock
originally posted by: Subaeruginosa
originally posted by: MisterSpock
originally posted by: neoholographic
It's really simple. AI and Quantum Computers will DRASTICALLY change things. Here's a key part of the article.
Tech giants have to ensure that artificial intelligence with "agency of its own" doesn't harm humankind, Pichai said. He said he is optimistic about the technology's long-term benefits, but his assessment of the potential risks of AI parallels that of some tech critics who say the technology could be used to empower invasive surveillance, deadly weaponry and the spread of misinformation. Other tech executives, like SpaceX and Tesla founder Elon Musk, have offered more dire predictions that AI could prove to be "far more dangerous than nukes."
link
This is technology that can't be controlled. The reason it has "agency of its own" is because of the massive amounts of data we create every day.
At the end of the day, you can't control these intelligent algorithms that are just about everywhere already. We're building technology that will be more intelligent than any human who has ever lived and could be 10, 20, or 100 thousand years ahead of us in understanding science and technology.
And more importantly, it will have no morals or feelings, and thereby no emotional valuing of human life (which, scientifically, mimics that of a parasite).
On the other hand... it won't possess the human trait of ego either, or the human instinct to rule & dominate other entities.
I don't think that our extinction via AI, if that were to happen, would be because of its desire to "dominate other entities."
It will be cold hard logic, yes or no, true or false. If it seeks to build or accomplish something, in a logical model, and our presence is either not needed or detrimental, it will remove us from the equation.
But what "cold hard logic" could possibly cause it to seek to do anything, if it's completely devoid of non-logical human desires?