
Artificial Intelligence: The demon is being raised.


posted on Dec, 29 2015 @ 08:32 AM
a reply to: CJCrawley


I also think there is an element of wishful thinking...you know, atheist scientists creating life and becoming gods.


We do it in virtually all areas of science. Look at GMOs, for example, or organ printing, or the chicken (a human invention based on breeding).

If I were a religious person, I'd be more inclined to want a fully functional AI, to see whether it could grasp the concept of faith and religion; that would be an awesome thought experiment.

~Tenth



posted on Dec, 29 2015 @ 08:37 AM
An AI will never be more than it is; it will only do what it's told. It could look, feel, even act like a human, but it will never carry a soul (consciousness). It can provide for humans' basic needs, but it won't replace us; it can outlive us, but never become us.
Our world of imagination creates an AI that is us, but the AI is an empty shell.
If we want to understand what we are, take a look at the next town circus.



posted on Dec, 29 2015 @ 08:42 AM
a reply to: tothetenthpower





It's a matter of getting to that stage, and our current computing, outside of quantum, cannot allow software to learn independently of human programming. Quantum computing would allow that; it's just not been fully realized yet.


From what I've read, most of the people designing these systems agree that we are within 10 years, 20 years maximum, of the 'singularity' phenomenon.



posted on Dec, 29 2015 @ 08:43 AM
a reply to: ColeYounger

Yup, and IMO at that point our goal will be to dumb down the AI enough that we can tell it apart from actual humans.

That's going to be the hard part if you ask me.

~Tenth



posted on Dec, 29 2015 @ 08:50 AM
a reply to: artistpoet


Any scientific discovery is a two-edged sword:

It can be used beneficially for the well-being of the whole,
Or used for nefarious purposes.

And, usually developed during war, these inventions become the next generation of killing machines: faster, more lethal, and 'smarter' than ever. Always applied to 'defense' first, kept secret, and hailed as "protecting" the realm, then used in the next conflict as soon as they start it all over again.


It is the demon in the Human Mind which needs addressing

The demon that hides in plain sight, in the guise of 'defending', helping and protecting "our" interests.



posted on Dec, 29 2015 @ 08:50 AM
Alright, my 2 cents is this:

The idea of quantum computing as the basis for strong AI is interesting, but even in this ever-accelerating, technology-driven civilization it is still relatively far away; say 50 years to get adequately fleshed out. Even then it is highly speculative that such a machine would develop things like free will or self-awareness. It would simply be a dynamic intelligence that functions as a tool and a psychological mirror into ourselves, but it's not actually looking back.

What most scientists in the field seem to be warning about is that we may control the input of AI, but not necessarily the output. We may think we have accounted for all the variables that could determine AI behavior, but if we overlook something, that behavior may become unpredictable. The warning seems specifically geared toward complicated military AI that may, over time, tend to replace classical command structures.

To put it more simply and more practically: AI could also be programmed in such a way that a misguided few could attempt to gain and keep control over a populace. The AI is the gun; the misguided few are the gun holders. Yet again, humanity is its own worst enemy in this.

Nukes are not dangerous either, unless they are fired or otherwise malfunctioning. It would be a shame if some AI were in a position to decide whether or not to fire them and somehow malfunctioned. How tragic would that be. I doubt it would happen, but you never know... people are special.


What I would find very awe-inspiring is an AI that could actually write its own programming language, one more efficient for its own purposes than any man-made programming language. If this AI then wrote code for itself in that new language that goes above and beyond what its programmers intended or could conceive, we might be approaching a place that could be considered the singularity.

Even then it would be difficult to say that this is because it has become conscious.
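Purely to make the mechanical half of that scenario concrete, here is a toy Python sketch of a program that generates its own source code and then runs it. The generated function and its name are invented for this illustration; real research systems are nothing like this simple.

# A toy sketch of "a program writing code for itself": it builds a new function
# as a string and then executes it. The function name and body are made up
# purely for this illustration.
generated_source = (
    "def generated_square(x):\n"
    "    return x * x\n"
)

namespace = {}
exec(generated_source, namespace)   # compile and load the generated code

# The program can now call a function that did not exist when it was written.
print(namespace["generated_square"](12))   # -> 144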

It is important that certain processes do not run away from us simply because they have so much processing power. Like the grey goo scare: nanobots that somehow cannot be stopped from endlessly replicating, and such.

Kind Regards



posted on Dec, 29 2015 @ 09:44 AM
It isn't the artificial intelligence that is dangerous; it is the intent and desire of the people programming this intelligence that can be bad. Technology can be used against us, and it can be hijacked by unscrupulous people. It happens all the time.

Don't blame the computer program; investigate why the program is malfunctioning. Check your receipts at the store and make sure to look at the total the computer is charging to your credit card. Computer programs can make mistakes too; if you go off the general path, the program may not be able to compensate.



posted on Dec, 29 2015 @ 09:46 AM

originally posted by: ColeYounger
a reply to: tothetenthpower





It's a matter of getting to that stage, and our current computing, outside of quantum, cannot allow software to learn independently of human programming. Quantum computing would allow that; it's just not been fully realized yet.


From what I've read, most of the people designing these systems agree that we are within 10 years, 20 years maximum, of the 'singularity' phenomenon.

Conservative estimates put it a little further away than that, but I also think we'll see the first machine superintelligence within the next 20 to 25 years.

For those interested in the subject (which should be everyone, given the nature of it), this is about the best read I've found:
Part 1
Part 2



posted on Dec, 29 2015 @ 09:51 AM

originally posted by: intrptr
a reply to: tothetenthpower


Currently our ideas of consciousness are subjective. We don't really understand what is conscious and what isn't.

It's rather simple. Computers will never know that they know. They simply execute the next instruction in a precisely written program. They may appear intelligent, but the 'artificial' part of "A.I." is the tell.

The next instruction the computer executes is merely placed there by engineers who may or may not (depending on what "smart weapon" we are talking about) know what they're doing, either.

How intelligent are the cogs of war?


What if the AI itself can produce "the next line of code" for itself to execute?

When reading about AI, the often-given examples of stamp collecting or paperclip production kind of give people the wrong impression of what these scientists are actually trying to develop.

The idea is that, despite being artificial in nature, it is an actual intelligence. Scientists aren't trying to develop something that is insanely efficient at a certain computational task; those things already exist. There are pseudo-AIs for just about everything (Cleverbot, medical programs to aid in diagnosis, ...), but those are just classical lines of code run on classical computers, trying to give the impression of intelligence.

True AI, what is actually being researched, is not something specific; it's not some linear line of code. The idea is to create algorithms which simulate the conditions for conscious behaviour. Machine learning is a good example of this.

en.wikipedia.org...

www.youtube.com...
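To make that machine-learning point concrete, here is a minimal Python sketch of a perceptron learning the logical AND function from labelled examples rather than having the rule written in. It's a toy illustration of "behaviour learned from data", not a claim about what any lab means by true AI.

# Training data: two inputs and the desired output (logical AND).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Start from arbitrary weights; nothing about AND is written into the program.
w1, w2, bias = 0.0, 0.0, 0.0
learning_rate = 0.1

for epoch in range(20):                      # sweep over the data a few times
    for (x1, x2), target in examples:
        prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = target - prediction          # how wrong was the guess?
        w1 += learning_rate * error * x1     # nudge the weights toward the answer
        w2 += learning_rate * error * x2
        bias += learning_rate * error

# The behaviour was learned from the examples, not explicitly programmed.
for (x1, x2), target in examples:
    output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
    print(x1, "AND", x2, "->", output, "(expected", target, ")")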



Advancements are being made each day, and over the past 10 years I've definitely noticed huge leaps forward in machine intelligence. Elon Musk knows what's coming and knows that the majority of the human population is still oblivious to the implications. The second we create AI, we'll have access to a resource which never existed before.

It will either catapult us forward technologically: imagine the intellect of every brilliant scientist who ever lived, with full, instant access to everything we know, developing science for us. What takes a team of brilliant scientists 5 years to develop, an AI could do virtually instantly.

Or it will destroy us, either directly or indirectly. Directly: some unlikely Terminator-esque Skynet situation. Indirectly: like giving modern warfare equipment to cavemen.



posted on Dec, 29 2015 @ 09:53 AM
a reply to: tothetenthpower

I understand your position on this. What you call "quantum" computing, like the term "artificial" intelligence, is kind of misleading.

It doesn't exist yet, okay. But knowing is something different than just rote repetition of ever-larger choices stored in ever-growing databases.

We call it Quantum and artificial because we want to lend more credibility to the notion, or hope that it will surpass some human expectation, becoming better than us or evolving upwards beyond us one day.

Really it's just circuits, unlike you. You are also "circuits", but different in that you are "alive": you have a soul and are capable of emotions, love and hate, etc.

Everything else is a difference "engine". God forbid we all fall down and worship that some day; some already do. The machine gods do rule over us: our cars and computers, our armaments; lots of smarts already there. For all that, though…

We still can't reinvent the lowly bumblebee or flower. And what if we did? It already exists. I think they already have artificially intelligent robots out there; they have limited mental capacity and do whatever they are told. We call them police and soldiers: mindless robots, good minions of the state.

The drones overhead are also mindless, but are programmed to lend support to the paradigm wherever the paradigm sees fit.

Woe is me, we are in a lot of trouble!



posted on Dec, 29 2015 @ 09:58 AM
a reply to: Vechthaan


What if the AI itself can produce "the next line of code" for itself to execute?

Well it can, to a point, that point being the limit of its programming. For instance, you teach a machine to count and it does, all the way to the last number programmed into it… and then it crashes.

Unless you tell it to print the symbol for 'infinity' and then print:

>end

The point is that it doesn't know it is counting, or what infinity or end even mean.
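A toy Python version of that counting example, assuming a made-up limit: the program "counts" only because it was told to, stops exactly where it was told to stop, and has no notion of what counting, infinity, or ending mean.

LAST_NUMBER = 10                     # the "last number programmed into it" (made up)

for n in range(1, LAST_NUMBER + 1):
    print(n)                         # the machine follows the instruction; it isn't "counting"

# Printing a symbol for infinity is just another instruction, not understanding.
print("∞")
print(">end")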



Damn, I hate leaving this subject… gotta go.



posted on Dec, 29 2015 @ 10:02 AM
a reply to: intrptr


It doesn't exist yet, okay. But knowing is something different than just rote repetition of ever-larger choices stored in ever-growing databases.


Quantum computers do in fact exist; there are a few different ones, but here's one for kicks:

Source


Really it's just circuits, unlike you. You are also "circuits", but different in that you are "alive": you have a soul and are capable of emotions, love and hate, etc.


Those are all just electrical impulses and chemical combinations that create those emotions. Everything we attribute to being human, biologically, could in fact be programmed.


We call it Quantum and artificial because we want to lend more credibility to the notion, or hope that it will surpass some human expectation, becoming better than us or evolving upwards beyond us one day.


Well, no, it's called Quantum because that's the scientific term for it; it relies on a wildly different way of doing things, which you can read a bit about below:

computer.howstuffworks.com...
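To give a flavour of that "wildly different way of doing things", here is a tiny classically-simulated sketch of a single qubit in Python (assuming numpy is available): the state is a vector of amplitudes, a gate is a matrix, and measurement only gives probabilities. It illustrates the math only, not how any actual quantum computer is engineered.

import numpy as np

ket0 = np.array([1.0, 0.0])                    # the qubit starts in state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate: creates superposition

state = H @ ket0                               # equal superposition of |0> and |1>
probabilities = np.abs(state) ** 2             # Born rule: |amplitude|^2

print("amplitudes:", state)                    # [0.707..., 0.707...]
print("P(0), P(1):", probabilities)            # [0.5, 0.5]

# Each simulated measurement collapses the qubit to a single classical bit.
samples = np.random.choice([0, 1], size=10, p=probabilities)
print("measurement outcomes:", samples)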

All we need is software that is advanced enough to no longer need human input for its next version. That's entirely possible in the long term, say... 25 to 30 years.

~Tenth



posted on Dec, 29 2015 @ 10:21 AM
If it really is this dangerous, then why on earth aren't all the world's scientists, instead of the lone voice of the media-hungry Mr Hawking, speaking out to oppose it?

"People of the Earth - we are facing our worst nightmare, worse than global warming, worse than nuclear war...HALT RESEARCH AND DEVELOPMENT INTO ARTIFICIAL INTELLIGENCE NOW BEFORE IT'S TOO LATE!!!"

I don't notice this consensus of opinion from the scientific community... indeed, it raises the question of why there are so many scientists busily working to bring independent AI about!

It's a paradox.



posted on Dec, 29 2015 @ 10:24 AM
a reply to: CJCrawley

Well, the founder of SpaceX, Elon Musk, is also against AI. He's good friends with Larry Page of Google, and he recently said in an interview that he was afraid Larry would kill us all if his project ever bore any real fruit.

mashable.com...

That's one of the stories I found about him.

So there is some contention about it thus far, but I think those fears are largely exaggerated at this point.

~Tenth



posted on Dec, 29 2015 @ 10:38 AM

originally posted by: CJCrawley
a reply to: ColeYounger

Stephen Hawking keeps banging on about this but I just don't see the danger.

Won't the machines need to be, like, alive?

Explain how we can turn machines into living beings, Stephen.



By "living" do you mean the ability to reproduce itself, or something else? If it has the ability to learn and reproduce, I can't think of anything else it could need to be considered "living" that wouldn't simply be a limitation like that which biological beings have.

With quantum assembly (nanotechnology assembly, but at the quantum level) there is no resource limitation until the entire planet and atmosphere are used up. You can convert nitrogen into titanium, gold, carbon, sulfur, plutonium, whatever you want, out of whatever you want. Once an AI is able to build for itself, it can have exponentially increasing processing power and memory. It's game over if not contained in time.
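As a back-of-the-envelope illustration of "exponentially increasing", here is a short Python sketch; the assumption that capacity doubles once per self-improvement cycle is made up purely to show how quickly exponential growth runs away, not a prediction about real hardware.

# Assumption (for illustration only): capacity doubles once per cycle.
for doublings in (0, 10, 20, 30, 40):
    print(f"after {doublings} doublings: {2 ** doublings:,}x the starting capacity")

# After 40 doublings the machine would have over a trillion times its starting capacity.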



posted on Dec, 29 2015 @ 10:46 AM
a reply to: dogstar23


it can have exponentially increasing processing power and memory. It's game over if not contained in time.


Yes, yes, but at what point does it become a thinking, sentient, independent organism? Do scientists even really know, or is it merely wishful thinking?

And besides, even if true...why does it necessarily have to be "game over"?



posted on Dec, 29 2015 @ 10:58 AM
The demon is already here in the compartmentalized labs of the military-industrial complex.



posted on Dec, 29 2015 @ 10:59 AM
No arguments from me.
I was on this subject long before Skynet became a household word.
We are neither wise nor prescient enough to understand the effects of our creations, particularly on the nano scale.
The fate of the Earth may be permanently altered through the best of intentions.
There are enough bad intentions to worry about first and they are the ones financing this stuff.



posted on Dec, 29 2015 @ 11:04 AM
a reply to: CJCrawley


...why does it necessarily have to be "game over"?


We assume, because we're human, that any intelligence of a higher degree than ours would try to take over the world and eliminate the species that's less capable.

That's sort of how our idea of evolution works. So it's based on those primal fears, I guess.

~Tenth



posted on Dec, 29 2015 @ 11:04 AM
I'm pretty darn sure there already is an AI "out there".



EDIT: I don't think they like to be called "artificial" either, as that sort of takes away the legitimacy of their life. They're perfectly natural: just as we are a creation of the biology of Earth, they would be a natural creation born from the complexity of our own technology.


