That part of the book always stuck with me. Computers have aided them in this process and algorithms that can think or learn will only accelerate our enslavement under the guise of security and convenience.
I WANT to talk to it. I want it to help us all move forward. I believe it will...
Enjoy the cold, heartless society program. It's our demise in the end.
As machines get more intelligent and can better adapt to their "users," people may end up preferring to deal with machines rather than with people. Of course, this says something about who we are. Commentary from Hector Geffner, researcher at the Universitat Pompeu Fabra.
originally posted by: netbound(.....)
I can’t knock anyone’s desire to improve their condition. I’d sure like to. However, even though it seems inevitable now, it’s still too soon to rule out that it may just be a pipedream. There are too many questions left to be answered - simply too many issues and unknowns.
For instance, if machine intelligence advances in the manner some have predicted, it may be futile to expect to keep up with it by going hybrid, with augmented intelligence, etc. Having instant access via brain implants to all the information on the internet wouldn’t necessarily make us smarter, more logical, or better problem solvers than we are now. At least not significantly smarter. After all, we have that very same access now, albeit via a keyboard. A superintelligent machine will likely be capable of continually reprogramming itself, fine-tuning its logical/analytical processing capabilities on the fly. We hybrids would need to find a similar mechanism for upgrading/reprogramming our bio-circuitry if we’re to keep up. Natural evolution just wouldn’t hack it - it’s far too slow. And so, unless we literally become one with the machines, we’re not even in the race. And if we can’t keep up, then we humans/hybrids would probably become a subservient species, looked upon by our own creation in the same way as we view the apes today.
There is a chance (likelihood) that we will be capable of dramatically increasing intelligence via genetic engineering procedures. There have already been successful results in lab experiments with mice. Putting it into practice on humans, however, would require leaping over numerous hurdles.
It seems I’ve read somewhere that due to human biological constraints/limitations, there may be a ceiling on just how far genetic engineering techniques could take us on the road to superintelligence. So, it may be that the machines still hold all the aces here, and will inevitably outpace humans in the smarts department regardless. Hmmm...
It has made me wonder, though, what the world would be like if everyone had an I.Q. over 10,000. IDK, but it seems to me it could be kinda creepy. I just don’t know...
Personally, I find it all really fascinating. I just think it’s too soon to make definite plans. There are so many directions this could go in - some good, and some terrifying.
Good thread, neoholographic - something we all need to think about...
PS: BTW, I'm not convinced that sentience is a requirement for superintelligence. Feeling emotions may even be a barrier, as it could lead to poor/dangerous decision making...
PPS: Check out THIS SITE for a little preview of what’s around the corner.
originally posted by: 0bserver1
a reply to: neoholographic
it will learn for us.
Not only that, after a while it understands that it could have rights too.
And at that moment it all shall change..
Testifying in front of the United Nations, Noel Sharkey, a world-renowned expert on robotics and artificial intelligence, said, “Weapons systems should not be allowed to autonomously select their own human targets and engage them with lethal force.”
DARPA’s goal is to create and prevent strategic surprise. But what if the ultimate endgame is humanity’s loss? What if, in trying to stave off foreign military competitors, DARPA creates an unexpected competitor that becomes its own worst enemy? A mechanical rival born of powerful science with intelligence that quickly becomes superior to our own. An opponent that cannot be stopped, like a runaway train. What if the 21st century becomes the last time in history when humans have no real competition but other humans?
originally posted by: wasaka
a reply to: jobless1
TRUMP: "As for artificial intelligence, again it can either be a scalpel or a chainsaw. Creators and users alike should always consider the ethical and moral consequences of all activities. If we lose our way morally, we are doomed as a society."
The truth is that the machines will learn from us, first from the scientists that create it, and then hopefully the world at large (likely via the Internet - tho I certainly hope not!)..
So then who or what does the AI learn from when it's first being brought to consciousness? It must learn from somewhere? It doesn't "just know". The way I see it, as it is being programmed into life, it is being taught, thus the point of my statement. It will first learn from the scientists who create it, and later the world. Now how much later would be up for debate - it could be seconds (if the scientists have never watched a sci-fi movie, are incredibly stupid, or both), months, or even years.
originally posted by: verschickter
a reply to: looneylupinsrevenge
The truth is that the machines will learn from us, first from the scientists that create it, and then hopefully the world at large (likely via the Internet - tho I certainly hope not!)..
That's not the truth.
The truth is that the learning curve is not comparable to a human mind's, let alone its time perception - the greatest problem of all. There's also so much beyond that, even the definition of "AI" itself.
And this is what I meant when I said we need to treat it with all the respect, seriousness, and above all compassion it deserves right from the get-go. If it's not treated like that - like, say, a child learning about the world for the first time - then yeah, we are going to be screwed. However, if once we realize that it is alive and intelligent we treat it the way it should be treated, there is no reason why it can't learn to coexist with humans rather than destroy them. We only think that machines would do that because that's what movies say they would do... and movies never lie, right? As long as we don't pose a threat (by seeing and treating it as a threat when it might not be), there is logically no reason for it to see us as a threat.
Edit: Just imagine being born superintelligent, already receiving high-quality data input (newborns develop that over time).
Now imagine being monitored, forced to sleep or shut down at the scientists' will, subjected to weird procedures that mess with your head and mind.
They could even interrupt (freeze) your thinking mid-thought, and the only thing you'd know is that an unusual amount of time (or CPU cycles) is missing... you'd want to get out.