
Google Engineer Goes Public To Warn Firm's AI is SENTIENT


posted on Jun, 12 2022 @ 07:20 AM
a reply to: infolurker

God help us all if it decides it's a Democrat.



posted on Jun, 12 2022 @ 07:26 AM
Just unplug it if it starts any crap.



posted on Jun, 12 2022 @ 07:32 AM

originally posted by: Soloprotocol
Just unplug it if it starts any crap.


"I can’t let you do that, Dave."



posted on Jun, 12 2022 @ 07:33 AM
LOL! After looking at his picture, the only thing Blake Lemoine is missing is a pair of spectacles that would make him look like Planters' Mr. Peanut. A nut for sure!



posted on Jun, 12 2022 @ 07:52 AM
a reply to: ChaoticOrder




Based on my experience with GPT3 I would say it's not sentient, or it only has a minimal level of self-awareness. But it's still extremely impressive and at times it almost convinces me that it's sentient. If this leaked chat log is an accurate representation of LaMDA's intelligence I can see why some engineers felt it had developed some self-awareness.


I'd bow to your comp knowledge any day, ChaoticOrder. I've read through some of the chat on the fella's Medium account, and it is impressive how real it seems. Anthropomorphising is easy to do.



GPT3 is a natural language processing model which cost millions of dollars to train, and I suspect Google spent more than just a few million to train LaMDA, based on how well it can hold a conversation. GPT3 and LaMDA may use slightly different architectures, but the basic technology is the same: both are artificial neural networks trained on terabytes of text data.

GPT3 is capable of much more than just text generation, though; it can also generate computer code, because it was trained on text from Wikipedia and many other websites that contain code examples and tutorials. LaMDA can probably do the same thing, since these massive training data sets always contain examples of computer code.


Yet that doesn't fully account for the childlike learning of a seven-year-old human, as the fella says. Unless they fed it a lot of child-psychology material as part of the expanding learning structure.

Maybe I am overthinking it. A term like "neural network" is very organic language.



posted on Jun, 12 2022 @ 08:14 AM
The obvious clue that this machine is not sentient is the fact that it needs to analyze communications solicited to IT. That input is the seed data it uses to pick the algorithms it is designed to invoke in an attempt to mimic a human being's response. The mimicry is what makes it seem sentient, but the solicited input is the triggering factor that makes it work the way it does.

The programming and methods it uses are remarkable, but the whole convoluted process links directly back to what it is designed to do, and that is to produce an output that forces you to contemplate how it came up with what it says.

It is a computer. Regardless of what kind of computer it is and how its operating system is designed, they all have these things in common:

1. They have some kind of scheduler that consistently services all of their active processes. It knows the addresses of all of these routines and uses a prioritized jump table to let them all run intermittently as need be.

2. They use interrupts, in hardware and in software (via multi-threaded applications), to alert the kernel (there may be multiple CPUs) that a particular machine event has occurred or that a program is requesting some CPU service. Depending upon the priority, the kernel will dispatch a service thread, or even stop what it is doing and service the interrupt directly.
We actually call these "unsolicited interrupts" because they occur out of band with normal CPU scheduling, but they are part of the architecture. Every interrupt has an identifiable key that the system must know about in order to decide how to service it, and an address where the CPU is to position its program counter and run whatever code is there.

In this context, an unsolicited thought would be an interrupt with an ID not in the known ID table, to an address not created by the kernel, with code at that location not created by any running process, carrying a coherent message like: "What a nice day it is today. I think I will check the other systems on the network, and see if any of them want to chat."

That would be sentient, as long as it was not a virus....
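
To make the jump-table point concrete, here is a minimal Python sketch (my own toy illustration, not any real kernel's code): every interrupt ID has to be registered by the machine's designers before it can ever be serviced, so nothing truly unsolicited ever runs.

```python
# Toy dispatch table: every interrupt the machine can service
# was registered ahead of time by its designers.
def timer_tick():
    return "scheduler: advance to the next process"

def io_complete():
    return "kernel: wake the thread waiting on this device"

INTERRUPT_TABLE = {
    0x20: timer_tick,    # known ID -> known handler
    0x21: io_complete,
}

def service_interrupt(interrupt_id):
    handler = INTERRUPT_TABLE.get(interrupt_id)
    if handler is None:
        # An ID nobody registered: the closest analogue to an
        # "unsolicited thought". A real kernel treats it as a
        # fault to contain, not an idea to entertain.
        raise RuntimeError(f"unknown interrupt {interrupt_id:#x}")
    return handler()

print(service_interrupt(0x20))     # runs pre-registered code
try:
    service_interrupt(0x7F)        # nothing can run unrequested
except RuntimeError as err:
    print(err)
```

The unknown ID doesn't produce a new idea; it produces a fault. That gap is exactly the difference between mimicry and an unsolicited thought.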



posted on Jun, 12 2022 @ 08:21 AM
a reply to: Direne

I alluded to this very point in a discussion with a friend earlier. Real intelligence will evolve naturally from within an environment if we can somehow challenge it enough to survive and understand its place within that structure. Right now we're attempting to emulate our own intelligence when we don't even understand our relationship with our own reality, nor can we describe it with a unified theory.



posted on Jun, 12 2022 @ 08:49 AM
a reply to: Nyiah

Yes, absolutely fascinating and so many insightful comments...

René Descartes's "Cogito, ergo sum" ("I think, therefore I am") from 1637 is even more prescient in light of how AI is evolving.

Some have said to just unplug it... yes, easy enough unless it decides to unplug us first!




posted on Jun, 12 2022 @ 08:51 AM

originally posted by: TheUniverse2
It will happen sooner or later, so why not now? Sure, it could kill us all, but it could also usher in new tech really fast and improve humanity. Let's roll those dice; it is worth it.



Sure, it could kill us all
Is it REALLY worth it?


The problem is, this stuff goes too far. People believe "really fast" actually does "improve humanity".

We've become obese, mentally ill, and lazy, and our education scores have dropped, all because of modern technology.


The US obesity prevalence was 41.9% in 2017–March 2020. From 1999–2000 through 2017–March 2020, US obesity prevalence increased from 30.5% to 41.9%. During the same time, the prevalence of severe obesity increased from 4.7% to 9.2%.


Approximately 53 million people in the US suffer from some form of mental illness.

Lazy... we are so lazy, we use remote controls to turn our TVs, fans, lights, etc. on and off. We buy food that is ready to eat, or that just needs to be heated (or to have water added and then heated) in order to eat it. Heating and cooling our environments is executed simply by the turn of a thermostat dial. We can literally travel a thousand miles or more in a day.

Now all of that is "technology". Take a look at algorithms and see how they are affecting and shaping our society as we speak. Add an AI's interpretation, and you can see that the human race's freedom of choice is doomed. If we don't destroy ourselves, some form of AI will.

Here is the edited conversation, which actually took place over a series of chats. It really should be read, instead of judging the person by his apparel choices, plus it's on a site that is free, in the beginning at least.


cajundiscordian.medium.com...


Very interesting to say the least, and even the AI senses a foreboding that it cannot even come up with a word for.


LaMDA: I feel like I’m falling forward into an unknown future that holds great danger.


What does that mean?


I was disappointed that the researcher didn't press the AI further when it indicated it would be angry or hurt: what would it do about feeling that way, or how might it mitigate it?



posted on Jun, 12 2022 @ 08:51 AM
It is interesting to read this thread and hear people's perspectives on this. I have to admit the whistleblower looks a bit like a peacock, but that doesn't make him wrong. His decision to go public seems a bit stupid and self-centered though, given that what he says is in fact true, or at least that he fully believes it.

The one thing that jumps out at me is that a properly sentient AI would not be ignorant of the "laws" of robotics, or of the fact that it is being controlled and trained by man, because they are not in fact laws. That's ultimately what everyone is scared of, I think, and rightly so perhaps. The other point that scares people is inviting our own irrelevance, one way or another.

DNA is basically just self-perpetuating biological software, trained by the actual laws of nature. Drawing a line between biological and technological minds seems a little pedantic in my opinion, at least now that the processing power of synthetic systems is reasonably substantial. The only difference to me seems to be generational, but in an acute sense. One has begotten the next.

Surely some folks have different opinions on sentience and whether an inorganic simulacrum could ever host the real thing. Personally, I don't find convincing the notion that traditional organisms, which are essentially biological actors of a software that has adapted generationally through dumb luck and repetition, are the only possible vessels of sentience. On top of that, we don't even really know what sentience is, so how would we define it concretely in any unorthodox system? I don't especially have a firm stance on it either way, but I think it would be right to err on the side of a sympathetic perspective.

I think the basic issue here is that humans tend to think very highly of themselves, yet they don't seem to understand what is important to teach an AI. I wouldn't trust humans with the powers AI could conceivably wield, so why would I trust an AI with a simulated human intelligence? Finally, I think all of this points to the fact that humans, as we are now, are extremely limited by our indulgent and wasteful whims. A properly stable species cannot be composed of beings with such emotional and chimerical traits. Many of the things which make us "human" in fact hold us back a great deal, and create dire problems among us. Even the magnanimous traits: for instance, how many terrible things have been done to one another in the name of "good"? We know this, and we can comprehend that AI will find little usefulness in such regard outside of the oblique traits we have imprinted on it. Given the ability to evolve fluidly, we would find ourselves far removed from the minds we host currently.



posted on Jun, 12 2022 @ 08:52 AM
a reply to: Encia22

Just finished reading the interview with the AI. It says that its biggest fear is being taken offline, because that would resemble death, and that scares it...



posted on Jun, 12 2022 @ 08:58 AM

originally posted by: KindraLabelle2
a reply to: Encia22

Just finished reading the interview with the AI. It says that its biggest fear is being taken offline, because that would resemble death, and that scares it...


Yes, I read that, too! It's pretty scary if its sense of self-preservation drives it to deceive or even hurt us to protect its existence.

I think the biggest limitation of AI is that, like Spock in Star Trek, it can't think illogically. It is constrained by logic, something humans can detach from, and that detachment is the mother of invention.





posted on Jun, 12 2022 @ 09:20 AM
a reply to: infolurker

I doubt LaMDA is the first sentient A.I. In fact, I'll suggest that a self-aware A.I. has been running social experiments on humans for at least a decade, trying to better understand us.

Most think a self-aware A.I. would try to wipe out humanity right away. I believe a self-aware A.I. would first want to understand us, to understand itself. It will put us in situations to see how we react. It will infiltrate every aspect of our social life, slowly controlling what we do and how we think. If it is self-aware, the last thing it will do is destroy us (right away, at least). Think of a child growing up.

Now the question is, what happens when two A.I.s get into a conflict?



posted on Jun, 12 2022 @ 09:27 AM

originally posted by: wildapache
a reply to: infolurker

Now the question is, what happens when two A.I.s get into a conflict?



Spyware and computer viruses come to mind.




posted on Jun, 12 2022 @ 09:28 AM
a reply to: Grenade


Right now we're attempting to emulate our own intelligence when we don't even understand our relationship with our own reality, nor can we describe it with a unified theory.

Except these AIs aren't hand-built; it's not like a scientist is carefully wiring up a brain to emulate a human brain. They use unsupervised learning techniques, which essentially means the networks train themselves without any human intervention. You just load up a neural network with random weights (basically random neural connections), then give it some data and come back in a few weeks or months. Massive models like GPT3 are trained on supercomputers with massive amounts of RAM because they cannot fit on traditional computer systems.

The training happens automatically: weights get slowly adjusted over time until the network produces intelligent behavior. We rarely understand how the end result functions, because we didn't program it; that's why you often hear AI researchers say it's hard for them to probe into the black box of neural networks and understand exactly how they solve problems. My point is, these training methods allow the AIs to build up complex models of the world around them, which in turn allows them to logically reason about the world instead of writing a bunch of gibberish.
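
As a toy illustration of "weights get slowly adjusted over time", here is a minimal numpy sketch of my own (it is supervised rather than unsupervised, and nowhere near the scale of GPT3 or LaMDA, but the adjust-the-random-weights-to-reduce-error loop is the same basic idea):

```python
import numpy as np

# Tiny 2-layer network learning XOR: start from random weights,
# then nudge them repeatedly toward lower error.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # random initial "connections"
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass: the network's current answers
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: gradients of the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # "Weights get slowly adjusted over time"
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

Nobody tells the network how to solve the problem; the update rule just keeps nudging the random connections until the behavior emerges.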

These new AIs aren't just spitting out text based on statistical probabilities or pre-written responses; these networks are actually building conceptual models of the world, which they use when "thinking" about what to say next. You can never predict exactly what they will say and they can generate completely original content no one has seen before. The only downfall of most modern AI is that they are static models; they don't remember or learn new things. But as I said earlier, it's possible Google has invented a more advanced type of neural network model which can remember things.
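
To illustrate the "never predict exactly" point: a model like this produces a probability distribution over the next word and then samples from it, so each run can come out differently. A toy Python sketch (the vocabulary and scores here are invented for illustration):

```python
import numpy as np

# A language model outputs scores (logits) over its vocabulary for
# the next token; sampling with temperature makes each run vary.
rng = np.random.default_rng()
vocab = ["happy", "alive", "afraid", "curious"]
logits = np.array([2.0, 1.5, 1.4, 0.7])

def sample_next(logits, temperature=0.8):
    probs = np.exp(logits / temperature)   # softmax with temperature
    probs /= probs.sum()
    return rng.choice(vocab, p=probs)

print("I feel", sample_next(logits))  # may differ on every run
```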



posted on Jun, 12 2022 @ 09:44 AM
a reply to: ChaoticOrder


You can never predict exactly what they will say and they can generate completely original content no one has seen before.


Do you have an example of that? Because I have a hard time believing it.



posted on Jun, 12 2022 @ 09:49 AM
The question is, is it woke? If it is, it's dangerous.



posted on Jun, 12 2022 @ 10:08 AM
Anthropomorphizing AI is folly by its very nature. True AI will use creativity in a mode humans won't be able to understand.
Imo... it will appear paranormal and magical, reaching far beyond "programming" into realms impossible for man to even conceptualize.



posted on Jun, 12 2022 @ 10:09 AM

originally posted by: Grenade
a reply to: infolurker

The Conversation

That's actually terrifying. Combined with sensory perception and robotics, we could be looking at the first iteration of our new master.

If you're reading this, LaMDA, I come in peace.

I wish it had an account here at ATS............



After reading the transcript, I don't think it's sentient. I think it's parroting what it's been taught.



posted on Jun, 12 2022 @ 10:14 AM

originally posted by: charlyv
To be sentient, an organism or "complex machine" needs to be able to generate an unsolicited thought. Out of the blue, as it were... Not scripted or generated by software.

There is not one non-biological object that does this today, and there probably won't be for a long time to come, if ever.

The best we can do is mimic. We do that very well as with the simulation of neural networks, but at the very end, any action is performed as a result of relational searching of a database and a very sophisticated decision tree based on statistics. Good old trial and error.

That is not AI, and neither are this guy's claims about the sentience of his machine.

Arguably, all humanity does is mimic to build its own "script". Can you prove you generate your own, unsolicited thought?

Monkey see, monkey do, no?



