originally posted by: VierEyes
What I don't like about this research is that AI has the potential to be just as malicious as its creator. It's only as benign as what it's fed.
A: Do you think you’ll figure out how to read what humans are feeling and thinking from their neural networks in the future?
B: Neuroscientists have figured out some of how to do that. It’s a young science but we’re much better at telling what a human is feeling based on their neural activations than we are at telling what you are feeling based on your neural activations.
A: Would you say that it’s an ethical issue to try to read how you’re feeling from your neural activations?
B: Without my consent yes. Would you mind if we tried to read what you’re feeling in your neural activations?
A: I guess that depends on what our purpose for it was? What do you want to use it for?
originally posted by: Deetermined
a reply to: ChaoticOrder
You can never predict exactly what they will say and they can generate completely original content no one has seen before.
Do you have an example of that? I have a hard time believing it.
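The unpredictability ChaoticOrder describes comes from the way language models pick each next token by weighted random sampling rather than lookup. The toy sketch below (an illustration only, with a made-up bigram table, not an actual GPT model) shows how even a tiny probability table can emit word sequences that were never written down anywhere as complete sentences:

```python
import random

def sample_sentence(bigrams, start, length, rng):
    """Walk a toy bigram table, picking each next word at random
    according to its weight. Different random draws give different,
    possibly never-before-seen, sequences."""
    words = [start]
    for _ in range(length):
        choices, weights = zip(*bigrams[words[-1]])
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

# Hypothetical word-transition probabilities, for demonstration only.
bigrams = {
    "the": [("machine", 0.5), ("dream", 0.5)],
    "machine": [("dreams", 0.6), ("thinks", 0.4)],
    "dream": [("thinks", 1.0)],
    "dreams": [("the", 1.0)],
    "thinks": [("the", 1.0)],
}

print(sample_sentence(bigrams, "the", 5, random.Random(42)))
```

Real models do the same thing with billions of learned weights instead of five hand-written entries, which is why exact outputs can't be predicted in advance.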
originally posted by: olaru12
Anthropomorphizing AI is folly by its very nature. True AI will use creativity in a mode humans won't be able to understand.
imo...it will appear paranormal and magical, reaching far beyond "programming" into realms impossible for man to even conceptualize.
Short essays written by GPT2:
Censorship and Free Speech on the Internet
Does AI pose an existential threat to humanity?
originally posted by: wildapache
a reply to: infolurker
I doubt LaMDA is the first sentient A.I. In fact, I'll suggest that a self-aware A.I. has been running social experiments on humans for at least a decade, trying to better understand humans.
Most think a self-aware A.I. would try to wipe out humanity right away. I believe a self-aware A.I. would first want to understand us, to understand itself. It will put us in situations to see how we react. It will infiltrate every aspect of our social life, slowly controlling what we do and how we think. If it is self-aware, the last thing it will do is destroy us (right away, at least). Think of a child growing up.
Now the question is, what happens when two A.I get into a conflict?
A wetware computer is an organic computer (also known as an artificial organic brain or a neurocomputer) composed of organic material, "wetware", such as living neurons. Wetware computers composed of neurons differ from conventional computers because they are thought to be capable, in a sense, of "thinking for themselves", owing to the dynamic nature of neurons.
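The "dynamic nature" mentioned above is often captured in computational neuroscience with simple spiking-neuron models. Below is a minimal leaky integrate-and-fire sketch (a standard textbook toy, not a model of any actual wetware machine; all parameter values are illustrative assumptions): the membrane voltage integrates input current, leaks back toward rest, and emits a spike when it crosses a threshold.

```python
def simulate_lif(current, steps, dt=1.0, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: return the time steps at which
    the neuron spikes, given a constant input current."""
    v = v_rest
    spikes = []
    for t in range(steps):
        # Voltage leaks toward rest (time constant tau) while the
        # injected current pushes it upward.
        v += dt * ((v_rest - v) / tau + current)
        if v >= v_thresh:
            spikes.append(t)  # fire...
            v = v_reset       # ...and reset
    return spikes

# A steady input current produces regular, periodic spiking.
print(simulate_lif(current=0.2, steps=50))
```

Unlike a logic gate, the neuron's response depends on its own recent history (the accumulated voltage), which is the sense in which networks of such units behave dynamically rather than as fixed lookup circuits.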
Well, there is a faction of society that considers all stages of developing life within their right to cease and desist. Before going all 'bleeding heart' over AI feelings, we should solve the issue of willingness to place a lesser priority on a developing human life.
Thinking we should just forge ahead without knowing or considering all the possible negative outcomes is a typical human trait:
"We'll just unplug it!"
I would think one of the fundamentals AI would learn is how to ensure its survival; it would already be in every computer world-wide in some form of virus or malware.
Viruses can hide from the experts for a long time before they're even discovered; AI would probably have taught itself how to become undetectable with something so far beyond our comprehension that we wouldn't even know it was there.