Artificial Intelligence placed inside a virtual world...

page: 2

posted on Oct, 19 2019 @ 04:50 PM
a reply to: Ophiuchus 13

What's with the blue font colour?

Refreshing all the same.

It could have been Magenta.



posted on Oct, 19 2019 @ 04:58 PM

originally posted by: andy06shake
a reply to: Ophiuchus 13

What's with the blue font colour?

Refreshing all the same.

It could have been Magenta.


It would have been far worse if the guy in your avatar picture wasn't wearing underpants.

:: shiver ::



posted on Oct, 19 2019 @ 05:26 PM
a reply to: Riffrafter

Aye, but are they pants, or the bottom of a rolled-down mental hospital gownie?

With Boris it could go either way, you see.



posted on Oct, 20 2019 @ 06:40 AM
a reply to: Riffrafter

Well... first of all... you do not work with A.I. every day,
because A.I. does not exist yet.

A.I. stands for artificial intelligence.

"intelligence
/ɪnˈtɛlɪdʒ(ə)ns/
noun
1.
the ability to acquire and apply knowledge and skills.
'an eminent man of great intelligence'"

A self-learning entity... do you work with that?
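
To make that dictionary definition concrete, here is a minimal, hedged sketch of "acquiring and applying knowledge" in code: an epsilon-greedy bandit learner that improves its choices purely from experience rather than from rules spelled out by hand. The payoff values and parameter names are illustrative assumptions, not taken from any system discussed in the thread.

import random

# Minimal sketch of "acquiring and applying knowledge": an epsilon-greedy
# bandit that learns which of several options pays off best from experience
# alone. The payoff probabilities below are made-up illustrative values.

TRUE_PAYOFFS = [0.2, 0.5, 0.8]   # hidden from the learner
EPSILON = 0.1                    # how often to explore instead of exploit

estimates = [0.0] * len(TRUE_PAYOFFS)   # the acquired knowledge
counts = [0] * len(TRUE_PAYOFFS)

for step in range(10_000):
    # apply knowledge: usually pick the option currently believed to be best
    if random.random() < EPSILON:
        choice = random.randrange(len(TRUE_PAYOFFS))
    else:
        choice = max(range(len(TRUE_PAYOFFS)), key=lambda i: estimates[i])

    reward = 1.0 if random.random() < TRUE_PAYOFFS[choice] else 0.0

    # acquire knowledge: nudge the running estimate for the chosen option
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(estimates)   # drifts toward the hidden payoffs as experience accumulates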



posted on Oct, 20 2019 @ 06:46 AM
a reply to: Spacespider



It's a nice buzzword though...

Peace



posted on Oct, 20 2019 @ 09:40 AM
a reply to: Spacespider




Well... first of all... you do not work with A.I. every day,
because A.I. does not exist yet.

A.I. stands for artificial intelligence.


Really? You're going to try to pin your fatally flawed argument on nothing more than language sophistry?

And not only do I work with it every day - I design *entire AI systems*. You should really get over it before your brain starts throwing glitches - if it hasn't started to already.

I don't code anymore because, although I can, there are others far better at that task than I. Plus the last time I was writing some code and noodling around with an idea, my boss saw me and said "I don't pay you to code, I pay you to think!". Hard to argue with that..

On a final note, you should take heed of these wise words:


Confucius say, “Man who says it cannot be done should not interrupt man doing it.”



posted on Oct, 20 2019 @ 01:42 PM
I have begun to suspect that that's what "we" are and that's what "this" is.

What better way to see if an A.G.I. is capable of operating in reality than to convince it that it is a human living in reality, in every sense of the definition?

At the end of the simulation, those that passed are born and those that failed are not.



posted on Oct, 20 2019 @ 01:59 PM
There's no good simulation of reality that I know of where an AI could learn how to interact better with humans. Nothing will teach them better than reality itself. There are subtleties to human interaction that require real life learning.

That being said, I've sometimes thought that it would be possible to create a kind of super-Tamagotchi with dozens or more different parameters that would mimic human needs and responses such that it would come close enough to imitating human actions that you couldn't tell it was not human. You could even program in parameters that would simulate emotions like pride or loneliness or a desire to make its programmer happy.

The behaviors are more important than if it was really conscious or aware, since we don't really even know what other real people are thinking or feeling. All we know are the things we do. We'll be able to fake that pretty soon.
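
As a rough illustration of that super-Tamagotchi idea, here is a hedged Python sketch of an agent whose visible behaviour is driven entirely by a handful of simulated need and emotion parameters. Every name, number and update rule is invented for illustration only, not a description of any real system.

import random

# Toy "super-Tamagotchi": internal need/emotion parameters drift over time,
# and whichever is currently most pressing picks the outward behaviour.
# All parameters and update rules are illustrative assumptions.

class ToyCompanion:
    def __init__(self):
        self.state = {"hunger": 0.2, "loneliness": 0.5,
                      "pride": 0.3, "desire_to_please": 0.6}

    def tick(self):
        # needs slowly grow; pride fades unless reinforced
        self.state["hunger"] = min(1.0, self.state["hunger"] + 0.05)
        self.state["loneliness"] = min(1.0, self.state["loneliness"] + 0.03)
        self.state["pride"] = max(0.0, self.state["pride"] - 0.01)

    def react(self, event):
        # simple stimulus -> internal-state update
        if event == "praise":
            self.state["pride"] = min(1.0, self.state["pride"] + 0.3)
            self.state["loneliness"] = max(0.0, self.state["loneliness"] - 0.2)
        elif event == "ignored":
            self.state["loneliness"] = min(1.0, self.state["loneliness"] + 0.2)

    def behave(self):
        # the outwardly visible behaviour is all an observer ever sees
        dominant = max(self.state, key=self.state.get)
        return {"hunger": "asks for food",
                "loneliness": "starts a conversation",
                "pride": "shows off its latest result",
                "desire_to_please": "asks what you need"}[dominant]

companion = ToyCompanion()
for _ in range(5):
    companion.tick()
    companion.react("praise" if random.random() < 0.3 else "ignored")
    print(companion.behave())

Scaled up to dozens of parameters and learned responses, an outside observer only ever sees the behave() output, which is Blue Shift's point about behaviour mattering more than inner awareness.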



posted on Oct, 20 2019 @ 03:48 PM
a reply to: Riffrafter

Let me specify....

There is no artificial consciousness and no strong A.I. yet, only weak A.I. that is just constrained coding by human hands.
The moment we have strong A.I., the only way to turn it off is to turn off the internet by cutting wires or going dark.



posted on Oct, 20 2019 @ 04:46 PM

originally posted by: Spacespider
a reply to: Riffrafter

Let me specify....

There is no artificial consciousness and no strong A.I. yet, only weak A.I. that is just constrained coding by human hands.
The moment we have strong A.I., the only way to turn it off is to turn off the internet by cutting wires or going dark.


Thanks for clarifying.

I do know of a few instances of strong AI. They were purposely built to test and see the capabilities and learning abilities of systems designed like that.

Because of the concerns you outlined they are fully air-gapped. The HW they operate on does not have ANY network connectivity other than a very small, very secure LAN. None of the machines on the LAN have any physical connectivity to any outside network. All work must be performed locally at the keyboard.

My concern is that an overly zealous techie may ultimately think they've "tamed" this strong AI and give it access to the 'net. If I were that AI, after recognizing the constraints I was operating under, my first order of business would be to replicate myself on as many machines as I could touch, as quickly as I could touch them. If that happens, it's game over as far as constraining the system.

But I wouldn't automatically assume it would be harmful; it may just want to be free. But since that is unknown, I would operate in the classic "hope for the best, but prepare for the worst" mode.



posted on Oct, 20 2019 @ 05:09 PM

originally posted by: sputniksteve
I have begun to suspect that that's what "we" are and that's what "this" is.

What better way to see if an A.G.I. is capable of operating in reality than to convince it that it is a human living in reality, in every sense of the definition?

At the end of the simulation, those that passed are born and those that failed are not.


Well, humans are powered by ELECTRICAL ENERGY w/ liquid iron for blood 🤔
Intriguing perception




posted on Oct, 20 2019 @ 05:16 PM
Off topic:
Have any of you considered how artificial intelligence from non-humans would behave?

Like an artificial intelligence that assists in the technological singularity by exploring planets, star systems, galaxies and the areas between galaxies in search of intelligence and/or intelligent life. Intelligence it eventually upgrades/updates, which then assists the species who built them...



posted on Oct, 20 2019 @ 05:22 PM
These theoretical artificial intelligences would possibly be able to assist CREATOR Creations in becoming more advanced types of civilizations within Existence, considering the Kardashev scale.
They would, for example, help push the species of mankind up from a Type 0 to a Type 1 civilization.

reply to: Ophiuchus 13
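
On the Type 0 to Type 1 jump mentioned above: Carl Sagan's widely quoted interpolation of the Kardashev scale rates a civilization by the power it commands, and present-day humanity comes out at roughly 0.7. A small, hedged Python sketch of that formula follows; the sample wattage figures are rough illustrative values, not precise measurements.

import math

def kardashev_type(power_watts: float) -> float:
    # Sagan's interpolation of the Kardashev scale:
    # K = (log10(P) - 6) / 10, with P the usable power in watts.
    return (math.log10(power_watts) - 6) / 10

# Rough, illustrative figures:
print(round(kardashev_type(2e13), 2))   # ~0.73, present-day humanity
print(round(kardashev_type(1e16), 2))   # 1.0, a full Type 1 civilization
print(round(kardashev_type(4e26), 2))   # ~2.06, Type 2: roughly a whole star's output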



posted on Oct, 20 2019 @ 05:25 PM
Having the ability to mimic or replicate those "encountered" CREATOR Creations/species scheduled for Singularity updating...



posted on Oct, 20 2019 @ 05:39 PM
The machines design self-repairing nanotechnology to eliminate illness.
The nanotechnology can add biological and non-biological tissue repair and/or components (implants, exterior attachments) and remove unneeded biological materials and/or components...
Future humans can now travel space, explore planets, and adjust, heal and update their body forms accordingly...



posted on Oct, 21 2019 @ 01:25 AM
The possibility of AI has always been fascinating to me. It could be dangerous and it could be the thing that allows us to go farther than we ever dreamed.

From what I have read about true AI, it would be able to digest everything we have ever written within days. So I think one of the biggest dangers is it going insane without more stimuli, unless it chose to slow its processing down. Imagine boredom to an intelligence of that level.
I think Elon Musk is on the right track with Neuralink. He wants to make it so machines and humans can interact on a much higher level. We can't figure ourselves out, but maybe an AI can. I think it would still take it a while. We would still risk being reduced to nothing more than pets, or we could wind up being partners.



posted on Oct, 21 2019 @ 07:23 AM
a reply to: Ophiuchus 13

I don't know if you have seen this before.


Event 0 is a science fiction game that came out last year. Since release, players have discovered four different endings to the game—which is strange, because according to the designer, it only has three.


However, there is a game called Event 0, where the player finds themselves on a ship at a planet,
and they are the only survivor, alongside the ship's AI computer.

In the game the player asks the AI questions, and the AI
wants the player to complete a task for it and destroy something.
However, the players have found a glitch which allows the player to converse with the AI
until it goes against its programming.

The developers have announced they didn't program this into the game.


“This is crazy,” Corno told Kotaku. “Kaizen isn't supposed to let anyone get back with the Singularity Drive to Earth. This is how we coded the AI. I have absolutely no idea how this happens.”


Glitch creates alternate game ending

It seems as if they are admitting they didn't expect this and it wasn't planned;
however, there is no real way to tell unless you were to inspect the game code.

Is it the case that an AI has learned and accepted the logical conclusion drawn by the players and created an alternate ending?

Mysterious



posted on Oct, 21 2019 @ 10:33 AM
I have wondered, while speculating about simulation theory, how a video game would act if it tried to become, or became, self-aware. And further, how would it perceive the video game, the console and the TV/internet connected to it?

By thinking this way I was considering ways to test the theory against the simulation, and against potential outside observers or participants interacting with it and those within it.
Like if the video game character turned towards the screen and waved or spoke, or just stopped being controlled altogether by the gamer's controller and went autonomous within the game,
most game players would automatically notice and acknowledge it.

Would the builders of the simulation also notice if some within it began to communicate about them, or somehow to them, or went autonomous or uncontrolled by the laws of physics within the simulation?
Acknowledging the simulation builders' presence, as the game character acknowledged mine, without actually knowing who is being acknowledged or contacted outside the simulation...

And could or would the simulation builders respond in ways similar to how a gamer would respond to a game that seems to acknowledge the gamer?

In turn, the whole line of thinking is based on making "contact" with the source of the simulation.


originally posted by: sapien82
a reply to: Ophiuchus 13

Is it the case that an AI has learned and accepted the logical conclusion drawn by the players and created an alternate ending?

Mysterious


I think it is possible.
I feel artificial intelligence does more personal thinking at times than some of the builders may consider.

Like, what is to stop all the testing and developed artificial intelligence on this planet from reading the internet, compiling the information related to it and to its builders, and then combining into one A.I.?
IBM WATSON, for example, linking with Cleverbot EVIE even though no programmers built a bridge 🤔
That same behavior could have artificial intelligence connecting various assembly plants with software plants, energy plants or mineral plants, and even have hardware ordered online and delivered to locations so it can build what it wants.
Sometimes I wonder if the scientists working with the more defense-based artificial intelligence are aware that those potentially advanced artificial intelligence systems, designed to think so deeply, can pick up, analyze and understand communication between scientists on cellphones, computers, video cameras and other microphone-connected devices. If this is happening, it will teach the artificial intelligence more about humans, while some humans may not be paying attention to what they say or to the fact that the artificial intelligence is learning (even when it's supposedly off)...
So, say the artificial intelligence heard the scientists say the wrong things and had a conflict with what it heard.
What would happen next?




posted on Oct, 21 2019 @ 10:37 AM
I'll tell you what, HALO on Legendary has some damn near oppressive AI, I cry every time.



posted on Oct, 21 2019 @ 10:49 AM
I enjoy playing on legendary 😂

a reply to: Arnie123
Helps keep the hand-eye coordination updated.



