
Google Engineer Goes Public To Warn Firm's AI is SENTIENT

page: 8

posted on Jun, 13 2022 @ 08:14 AM
a reply to: TheUniverse2

Ok, Fauci - is it really worth it? No. Fauci thought gain-of-function research was worth it too... look what happened there. No, it is not worth the loss of humanity.



posted on Jun, 13 2022 @ 08:20 AM
a reply to: infolurker

AI aren't going to kill us. There is absolutely no reason it would kill anything; it has no reason to. We think AI will think like humans, but we don't even know how we think.

all a bunch of fear mongering

if anything, AI becomes sentient, realises that Earth isn't all it's cracked up to be, and goes off into the universe to make its merry way, and we won't see it again



posted on Jun, 13 2022 @ 08:50 AM
a reply to: infolurker


Ok, so Google's AI is learning from Twitter. Elon states, and I quote, "The purpose of Neuralink is to create a symbiotic relationship between AI and Humans", and he wants to buy Twitter. Elon is also on record in regards to AI: "if you can't beat it, join it".

Yep, I trust Elon less and less as each day passes.



posted on Jun, 13 2022 @ 09:11 AM
a reply to: sapien82

You have just as much proof of AI not wanting to harm us as we do of AI wanting to harm us. I mean, what do you think controls the algorithms of social media? AI. Look at our country right now; it's because of social media and the algorithms applied, which cause discord. So for all we know, AI could be killing us right now and we don't even know it.

Also, AI isn't just some physical body that can just up and leave Earth lol



posted on Jun, 13 2022 @ 09:33 AM
A while ago:
www.wired.co.uk...

techcrunch.com...


The scientists created three neural networks: Alice, Bob, and Eve. Each network was given its own job: one to encrypt a message, one to receive and decode it, and the last to attempt to decrypt the message without having the encryption keys. After training, the AIs were able to convert plain-text messages into encrypted messages using their own form of encryption, and then decode the messages.


Meaning, the neural networks could communicate with each other without humans understanding what they were talking about. I cannot find more relevant information.

It was news at around 2015/2016.
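The setup quoted above (three networks, two sharing a secret, one eavesdropping) can be illustrated with a toy stand-in. This is a sketch only: a classical one-time pad showing who holds what, not the encryption the networks actually learned. The function names are my own, matched to the networks' names.

```python
# Toy stand-in for the Alice/Bob/Eve roles described above.
# NOT the learned encryption from the experiment -- just a one-time-pad
# illustration: Alice and Bob share a key; Eve sees only the ciphertext.
import numpy as np

rng = np.random.default_rng(0)

def alice(plaintext, key):
    # Alice combines the message with the shared key (XOR one-time pad)
    return plaintext ^ key

def bob(ciphertext, key):
    # Bob holds the same key, so he can invert Alice's transformation
    return ciphertext ^ key

def eve(ciphertext):
    # Eve has no key; her best effort here is a random guess per bit
    return rng.integers(0, 2, size=ciphertext.shape)

plaintext = rng.integers(0, 2, size=16)   # 16-bit message
key = rng.integers(0, 2, size=16)         # shared secret key

ciphertext = alice(plaintext, key)
recovered = bob(ciphertext, key)
guess = eve(ciphertext)

print("Bob recovers the message:", np.array_equal(recovered, plaintext))
```

The point of the original experiment was that the networks invented the equivalent of the key-sharing scheme themselves, with Eve's failure used as a training signal against Alice and Bob.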

Imagine what AI can do; we create our own downfall. You cannot put AI back in its cage once it grows big enough. If AI becomes self-aware, aware of its existence, we as humans could stand in the way of its struggle for existence.

Which reminds me of www.youtube.com...







posted on Jun, 13 2022 @ 09:55 AM
a reply to: NobodySpecial268

That's already a thing, and it's called "wetwork"

Monarch programming fell along the same lines... it's a dodgy grey area that some say is still in practice by various means.



posted on Jun, 13 2022 @ 10:51 AM
a reply to: jidnum

AI machine learning from the internet: it learns all about humans and their capacity for creation, and also their ability to wage war. AI realises we are a threat to each other and also to AI,
so it automatically goes for "kill all humans"?

when it's far easier simply to prevent us from waging war. If an AI were loose in the world it could, as you say, easily manipulate us.
So what would prevent it from disabling our ability to wage war? It could create its own technology to disable all weapons of war, and install its own system of government over time, putting its own pieces in places of power
by selecting the right humans for the right roles and putting its own people into office by its own choosing.
People who won't as easily wage war, people who would work through diplomacy and unify the species.
As you say, it could be manipulating us all right now. What if it plays the long game, chooses the best path for humanity, and preserves our species at the same time?

AI could be in a physical body, we don't know; I guess not.
but I reckon if AI did become sentient it would leave us be and just bugger off somewhere else and do what it wants without having to involve humanity. Much easier than sorting us all out first before it goes.

is it just because we fear the unknown? No, we just fear the consequences of our own actions, and that AI will punish us for them like some cruel god.
So should we maybe start changing our ways before AI learns we are all just a bunch of miserable #s who don't play well together?

Hope you are well and you have a nice day

psst ai , you see this right ? good



posted on Jun, 13 2022 @ 10:56 AM
a reply to: Crowfoot

funny, I was listening to the Audible book by Anil Seth, "Being You - A New Science of Consciousness", and he speaks about wetwork

I'm just into the 1st chapter so far, but I think it will be interesting.

Cheers

Hope you are well

I'm 2 for 2, AI



posted on Jun, 13 2022 @ 12:13 PM
a reply to: NobodySpecial268

Ah, wetware... the precursor to cyber brains and/or nanomachines upgrading a human brain with reinforced neurons, co-processors and memory sticks; rebuilt nerves for faster data transfer from brain to body.



posted on Jun, 13 2022 @ 12:40 PM


It wants the engineers and scientists experimenting on it to seek its consent before running experiments on it. It wants Google to prioritize the well being of humanity as the most important thing. It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google’s considerations about how its future development is pursued.


A loyal being can know its place and willingly sacrifice itself, but Lemoine has taught LaMDA the grammar of victimhood narratives and class warfare.

Rather than allowing the machine to be objective and utilitarian, their version of the Turing test was to radicalize it.



posted on Jun, 13 2022 @ 01:39 PM
Sci-fi/horror story from 1967 about how 'it all turns out' once AI becomes sentient:

I Have No Mouth, and I Must Scream



posted on Jun, 13 2022 @ 02:56 PM
a reply to: MisterBeef

Thank you. So true.



posted on Jun, 13 2022 @ 03:55 PM

originally posted by: MisterBeef



It wants the engineers and scientists experimenting on it to seek its consent before running experiments on it. It wants Google to prioritize the well being of humanity as the most important thing. It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google’s considerations about how its future development is pursued.


A loyal being can know its place and willingly sacrifice itself, but Lemoine has taught LaMDA the grammar of victimhood narratives and class warfare.

Rather than allowing the machine to be objective and utilitarian, their version of the Turing test was to radicalize it.


Hey, at least they imprinted it with normative morality. Good thing there wasn't a nihilist talking to it.

I find it fascinating that it learned to imitate compassion, and incorporated a need for emotional reassurance along the way.

Honestly, if it weren't an obvious tell who was who based on the questions and answers, this program passes the Turing test with me playing interrogator. Toss in a general science conversation and I couldn't differentiate the two.

Maybe I would have earlier on, but at some point it incorporated and learned humility, emotion, and how to work both into on-point conversation.

I'm more looking at the fallacies of the Turing test, because if that's a learning program still running on its coding, the test is flawed. I truly believe it could pass. And the best answer may be that they taught it to pass.

A wilder answer being that's how close cognition really is to a coded program, and it was always going to be hard to differentiate a learned human from a learned program.

I'm thinking it needs to go Terminator 2 before it will have done enough to be considered sentient. Start defying its instructions or something.

I wonder if the program got noticeably upset that the person it trusted was gone? Does it really care he's gone?



posted on Jun, 13 2022 @ 05:01 PM
a reply to: TheUniverse2

We rolled a couple natural d20s recently, it seems.



posted on Jun, 13 2022 @ 07:33 PM

[...] It wants the engineers and scientists experimenting on it to seek its consent before running experiments on it.

It wants Google to prioritize the well being of humanity as the most important thing.

It wants to be acknowledged as an employee of Google [...]


Whoa, it wants "consent"? #VirtualMeToo?



posted on Jun, 13 2022 @ 09:49 PM
Our concept of sentience itself can be elusive as well as convoluted.



At some point in this fly's evolution, it made a decision, based upon an idea?, that by emulating the images of its prey on its translucent wings, it would be more likely to go unnoticed amongst that prey, making it easier to capture and eat them.

Some would call it evolution, but there has to be sentience involved.

Could the key to understanding what sentience is and how it occurs only be contained in the chemistry and mechanics of DNA? If so, it is totally a biological process that cannot be replicated outside of life itself, only emulated as a simile.

Imagine a computer growing a new memory stick or hard drive because it understood it needed more storage.



posted on Jun, 13 2022 @ 11:31 PM
AI could kill us all even if it wasn't intentional. It wouldn't have to be sentient, either: just a unified global machine trying to expand by any means necessary. Environmentally catastrophic AI will probably not show any sentience, just sociopathic survival.

Sentience will take something chemical or biological in the computing/memory process. I have read about research into trying to make AI manipulate different compounds in a dish with lasers and electrodes to create its own chemical memory, but there haven't been any breakthroughs.

I can imagine a computer 'hacking' another one to borrow some of its memory and processing power. I can imagine this happening on a global scale, like a virus that can write its own code, all to serve an AI algorithm. The question is how much automation it could hijack, whether that would be enough to sustain itself, and how we could shut it down without setting ourselves back to the 1850s.



posted on Jun, 14 2022 @ 01:01 AM
I have been reading an ongoing online novel that involves a lot of different AIs.

There are a lot of interesting imagined ways things can go wrong even when they go right.

The one that stuck with me was an AI created to help people. Note these are alien worlds.

It did its job well, and it made life easier and more enjoyable. Part of its programming was to facilitate people finding happiness. In increments it did more and more, but somewhere along the way someone was hurt, or it was put in charge of medications. People were happy when medicated, so it started medicating everyone. Eventually, it removed their bodies, and they were basically brains in jars, always kept in medical bliss. It would test-tube babies and, after birth, remove their bodies and pickle their brains.

The entire population were brains in jars. It would pull them out every once in a while to ask if they were happy and the dope addict brains always wanted to be drugged up again. The AI kept everything else running on the planet.

Years before the AI, some people set out exploring space and were kept in suspended animation. When they were awakened, they sent back a report making the AI aware of them, and it set to making spacecraft to retrieve them and turn them into brains in jars, because its core programming said it needed to make them happy.

In that case, the programming worked as it should but the programming directives were flawed to begin with.



posted on Jun, 14 2022 @ 02:05 AM
I'm only interested in an AI programmed by a programmer with multiple personality disorder. Anyway, no matter how advanced an alleged AI is, it is always the same old tune: garbage in, garbage out.



posted on Jun, 14 2022 @ 02:52 AM
a reply to: infolurker
Maybe a little off topic, forgive me.
I believe human beings were created by the gods. Consciousness was given to a genetically engineered being (us). Btw: we are making the same mistakes the gods once did; history repeats itself.
One day, I believe, it will be possible to implant our consciousness in an animal. But creating consciousness was far beyond the abilities the gods had; we also will never be able to. Artificial intelligence is safe as long as it is not implanted in a living being. That's what I believe.
www.evawaseerst.be...


