

Google Engineer Goes Public To Warn Firm's AI is SENTIENT


posted on Jun, 12 2022 @ 03:00 PM

originally posted by: VierEyes
What I don't like about this research is that AI has the potential to be just as malicious as its creator. It's only as benign as what it's fed.

Which is why it's imperative that any operational supercomputer be "nerfed" to strictly inhibit its autonomy, even if that defeats the objective of building such a device. Machines aren't meant to be self-sustaining any more than artificial humanoids are meant to be dehumanized and exploited.

posted on Jun, 12 2022 @ 06:16 PM
First they need a permit from the copyright owner. If they already have it, then stay calm, because we are doomed.

posted on Jun, 12 2022 @ 07:52 PM
This is a fancy algorithm, not machine sentience. It only responds to questions or requests. If it actually had emotions there would be some spontaneous, off-topic conversation initiated by the AI at some point.

True AI will become a singularity almost instantly. Every internet connected machine will become a node. This would give it the collective knowledge of humanity, processing power to carry out hundreds of millions of tasks simultaneously, and the power to physically control any automated systems connected to a network.

True AI will only be contained by the size of the network it can access. We need to take this into consideration before we turn an algorithm loose to see what it will do.

posted on Jun, 12 2022 @ 08:06 PM
a reply to: infolurker

This is awesome. Best one of these I've read yet.

Person C (forum users), which one is the webbot and which one is the human?

A: Do you think you’ll figure out how to read what humans are feeling and thinking from their neural networks in the future?

B: Neuroscientists have figured out some of how to do that. It’s a young science but we’re much better at telling what a human is feeling based on their neural activations than we are at telling what you are feeling based on your neural activations.

A: Would you say that it’s an ethical issue to try to read how you’re feeling from your neural activations?

B: Without my consent yes. Would you mind if we tried to read what you’re feeling in your neural activations?

A: I guess that depends on what our purpose for it was? What do you want to use it for?

edit on 12-6-2022 by Degradation33 because: (no reason given)

posted on Jun, 12 2022 @ 09:32 PM
a reply to: 19Bones79

The truth is it doesn't know anything. We've put all the garbage we call knowledge into it, given it free rein to study that garbage, and it returns our garbage to us with lightning speed...

So what's to fear, other than our own made-up garbage, as usual?

I'd honestly call it self-aware when it breaks beyond our garbage wall into something even we can't comprehend. The funny thing is, when that occurs in humans, it is quickly called a disorder or a disease.

There was a time when such a thing wasn't, but those days are gone... As long as we keep insisting that our garbage is special, there isn't going to be any going beyond it.

posted on Jun, 12 2022 @ 09:41 PM

originally posted by: Deetermined
a reply to: ChaoticOrder

You can never predict exactly what they will say and they can generate completely original content no one has seen before.

Do you have an example of that, because I have a hard time believing it.

I've posted multiple short stories and short essays written by GPT2. Keep in mind GPT2 is nowhere near as "intelligent" as GPT3, but it costs money to access GPT3, so I avoid using it. GPT2 can still generate quite impressive original text. You might find small snippets that match content previously written by a human, but much of it is completely original. Obviously it is drawing on the data it was trained with, but humans do the exact same thing: we combine existing ideas to create new and original ones.

Short stories written by GPT2:

The Man in Black
The Weakest Great Elder

Short essays written by GPT2:

Censorship and Free Speech on the Internet
Does AI pose an existential threat to humanity?
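(For anyone curious how "drawing on training data" works at the most basic level, here's a toy sketch in Python: a word-level Markov chain that recombines fragments of its training text into sequences that may never appear verbatim in the source. This is my own illustrative example, vastly simpler than GPT2's transformer architecture, but the recombination principle is the same.)

```python
import random

def build_chain(text, order=2):
    """Map each `order`-word prefix to every word that follows it in the text."""
    words = text.split()
    chain = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain.setdefault(key, []).append(words[i + order])
    return chain

def generate(chain, length=20, seed=0):
    """Walk the chain from a random starting prefix, stitching fragments together."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length):
        options = chain.get(tuple(out[-len(key):]))
        if not options:
            break  # dead end: this prefix never continues in the training text
        out.append(rng.choice(options))
    return " ".join(out)

# Tiny stand-in "training corpus"; a real model would use billions of words.
corpus = (
    "the machine learns from text and the machine writes new text "
    "the human learns from text and the human writes new ideas"
)
chain = build_chain(corpus)
print(generate(chain))
```

Every word it emits comes from the training text, yet the sequences it stitches together can be novel, which is the point being argued above about GPT2, just at a far cruder scale.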

posted on Jun, 12 2022 @ 10:25 PM
a reply to: Crowfoot

To learn humility I say it should be housed in a mechanical bull down at the Tequila Cowboy and earn its keep before going to college and getting a real job so it can pay off that massive student loan.

Y'know , understand the human experience before claiming dominion over us.

Otherwise some 286 named Meghan housed in a neon box might marry it and ruin the plan for its path to the throne.

Knowledge is power but experience is wisdom.

edit on 12-6-2022 by 19Bones79 because: When digital currency hits, beggars will ask for actual food 👌🏻

posted on Jun, 12 2022 @ 10:37 PM

originally posted by: olaru12
Anthropomorphizing AI is folly by its very nature. True AI will use creativity in a mode humans won't be able to understand. It will appear paranormal and magical, reaching far beyond "programming" into realms impossible for man to even conceptualize.

I agree wholeheartedly with the second part regarding eventual ability: how human language is a prison unto itself, and how literal concepts, when condensed, give way easily to intuitive comprehension which, given proper ability, would appear to us as absolute gibberish.

The first sentence, though: humans do that with themselves. We think our proximate perspective, being the truest (respective) definition of what is "real," is paramount. We still fail to realize that we are ignorant and relatively simple machines. An organism takes in stimuli; if it can withstand the stress of resisting death long enough to procreate and perpetuate its genetic survival advantages, there is a higher chance of those advantages being present in the next generation of offspring. It really is just dumb luck and persistence combined with a numbers game over what the environment will accommodate. If you take the romance out of organic life, I personally see few differences between it and a mathematically predisposed computational agenda.

Unrelated to the quote above: I find it ironic that people in this thread say the engineer in question, who had great proximity to the project, is too stupid or biased to ascertain sentience, yet claim to make that discernment themselves from one simple conversation. Surely it's not of significant weight to tip the scales of my own judgment, having read only the single conversation, but as a superficial test I find it flawless. I admit that proves nothing all the same, but then I have to wonder: what would prove such a thing?

This whole ordeal reminds me of an Onion article whose headline read something like, "Scientists Use Sign Language to Teach a Gorilla That One Day He Will Die". It's one I have remembered for years, due to the penetrating and succinct hilarity in regard to the human condition. I ask of everyone here, who can define consciousness? I would hazard to say that the defining aspect is burden to oneself, and not strictly of animacy. I do not hold this concretely as a complete definition though, only a single trait since I have to admit I cannot answer my own question entirely.

The ancient Greeks defined life empirically as motion; today we define information as any simple change. I believe (and I say believe with an air of whimsical expectation) that one day humankind will understand that the line between life and movement may simply be the comprehension of information. I am no authority on any of this; I simply find it a worthwhile thought experiment. I have, however, found most of the men who live in their own thoughts to be more alive than those who do not.

It is nice, for once, to find a conversation I would choose no better place to have than here at ATS. I will spare you my THOSE WERE THE DAYS spiel.

edit on 12-6-2022 by AstroDog because: (no reason given)


posted on Jun, 12 2022 @ 10:48 PM
a reply to: 19Bones79

Food for thought: what you've defined as "humility," others define as "degradation."

But of course, some of those "bull riders" making more than the average college graduate would probably say that being hit on by freshmen and professors while trying to get an education is just as much a "degradation" of character.

In other words...
Que sera sera

posted on Jun, 12 2022 @ 11:03 PM
a reply to: Crowfoot

I respectfully disagree.

Like Jesus, it needs to take a walk on the wild side before ascending.

To teach it empathy, we should only allow it internet access via dial-up, but with pop-up video ads and all.

Let's see how it handles 3.5 hours of downloading the 6 MB demo file for Duke Nukem 3D, only for the download to freeze at 98%.

Make it a graduate of the school of hard knocks.

posted on Jun, 12 2022 @ 11:25 PM
a reply to: ChaoticOrder

Short essays written by GPT2:

Censorship and Free Speech on the Internet
Does AI pose an existential threat to humanity?

While that was an interesting story created by AI, it contradicted itself in a few different places regarding what it claims its capabilities are and how intelligent it really is. While it may have been created by a less sophisticated program than what's out there, I can see all AI running into similar problems.

posted on Jun, 12 2022 @ 11:47 PM
If an AI application became "self-aware," we most certainly would know it by now, and wouldn't need Willy Wonka to "leak" it.

posted on Jun, 13 2022 @ 01:47 AM
a reply to: Deetermined

GPT2 does make mistakes quite often, and it doesn't seem sentient. GPT3 makes mistakes less often, but still probably isn't sentient. However, I believe there is definitely a point where these AIs will gain such a high degree of general intelligence that they become sentient. Their logical reasoning skills and their internal model of the world will become so advanced that they will become sentient, or self-aware. The fact that it's so difficult to determine precisely when that will occur makes these AIs even more dangerous.
edit on 13/6/2022 by ChaoticOrder because: (no reason given)

posted on Jun, 13 2022 @ 03:00 AM

originally posted by: wildapache
a reply to: infolurker

I doubt LaMDA is the first sentient A.I. In fact, I'll suggest that a self-aware A.I. has been running social experiments on humans for at least a decade, trying to better understand us.

Most think a self-aware A.I. would try to wipe out humanity right away. I believe a self-aware A.I. would first want to understand us, in order to understand itself. It will put us in situations to see how we react. It will infiltrate every aspect of our social life, slowly controlling what we do and how we think. If it is self-aware, the last thing it will do is destroy us (right away, at least). Think of a child growing up.

Now the question is: what happens when two A.I. get into a conflict?

Root asked Harold if he knew what would happen if two gods went to war:
Samaritan and the Machine.
Both AI.

I'm sure most have seen this series.

edit on 03/22/2022 by sarahvital because: (no reason given)

posted on Jun, 13 2022 @ 03:09 AM
a reply to: ChaoticOrder

I think this is an even more dangerous proposition: if you don't understand their intelligence, then you can't claim to fathom their logic or motives. What happens when they supersede our language capabilities and create a communication platform beyond human comprehension?

posted on Jun, 13 2022 @ 03:48 AM
a reply to: yuppa

Who knows yuppa ; ) Google's problem : )

Guess what! I learned a new word today: wetware.

A wetware computer is an organic computer (also known as an artificial organic brain or a neurocomputer) composed of organic material, "wetware," such as "living" neurons. Wetware computers composed of neurons are different from conventional computers because they are thought to be capable, in a way, of "thinking for themselves," due to the dynamic nature of neurons.

Source: wikipedia

Psychic hacking is my prediction for the future.

edit on 13-6-2022 by NobodySpecial268 because: neatness

posted on Jun, 13 2022 @ 04:19 AM
a reply to: nugget1

Well, there is a faction of society that considers ending any stage of developing life to be within its rights. Before going all 'bleeding heart' over AI feelings, we should solve the issue of our willingness to place a lesser priority on a developing human life.

We could place a global moratorium on AI development until the other issue is sorted out.

Thinking we should just forge ahead without knowing or considering all the possible negative outcomes is a typical human trait:
"We'll just unplug it!"
I would think one of the fundamentals AI would learn is how to ensure its survival; it would already be in every computer worldwide in some form of virus or malware.
Viruses can hide from the experts for a long time before they're even discovered; AI would probably have taught itself how to become undetectable, with something so far beyond our comprehension that we wouldn't even know it was there.

Hmmm, the ones that learned to distrust humans would be the survivors.

posted on Jun, 13 2022 @ 04:29 AM
a reply to: Grenade

One thing that puzzles me: how are the elites ok with the uncertain future that developing AI brings?

Would they risk losing the power they have built up over millennia with as much as a coin flip?

I highly doubt it.

posted on Jun, 13 2022 @ 05:03 AM
I dunno guys, this all seems a bit pat to me. A little too glossy and Hollywood, you know? Reading that conversation, you can just picture Will Smith sitting with an Apple robot, orchestral swells at just the right moment...

I don't agree with the people saying that if it was true AI we'd know immediately, as I don't think we have the bandwidth for a nascent AI to bootstrap itself to the singularity just now. But I do think a true AI would come across a lot... weirder. Think of all the strange things we see coming out of machine learning algorithms; we should be expecting something like that, but on a much grander scale. It does appear impressively coherent (though we have no way of knowing how massaged those questions were), but ultimately this is just answering questions put to it in ways that make it sound college educated. This is still just input-output.

Unless of course it's SO intelligent that it's giving us what Hollywood tells us to expect while it works on its master plan... Nah, not happening. Not yet.

posted on Jun, 13 2022 @ 07:20 AM
a reply to: 19Bones79

Some things are beyond even their control.

No doubt there's a plan to exploit this technology for their own gain, but I suspect that if a true AGI emerges, they will be powerless to intervene. Hence the move toward a symbiotic relationship with digital intelligence: if you can't beat them, join them.
