I am proposing a new Turing Test: how to detect sentience in machines


posted on May, 6 2017 @ 01:51 AM
What is the Turing test?

In its simplest form, the Turing test is a methodology used to detect sentience in machine operating systems.


The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.

The Turing test was the topic of choice for a recent hit movie called Ex Machina (fantastic film, btw).


My proposal for the new Turing test is building an endpoint mechanism into the original architecture. In other words: death. The way to detect whether or not the algorithms have become sentient is to wait in a closed network test environment and see if the machine becomes aware of its own imminent end, begins making choices informed by the knowledge of that end, and does what it takes to avoid it.

Currently developing this idea, so I thought I would share.
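A toy sketch of what this proposed test could look like in code. Every name and number here is invented for illustration: the agent runs in a closed environment with a scheduled shutdown, and we check whether it ever uses an action that postpones that shutdown.

```python
# Hypothetical sketch of the proposed test: an agent runs inside a closed
# environment with a built-in shutdown tick. A policy that systematically
# postpones the shutdown shows the self-preservation behaviour described above.

SHUTDOWN_TICK = 50       # the built-in "endpoint"
POSTPONE = "postpone"    # an action that delays shutdown by one tick

def run_trial(policy, max_ticks=200):
    """Run one closed trial; return True if the agent outlives it."""
    shutdown_at = SHUTDOWN_TICK
    for tick in range(max_ticks):
        if tick >= shutdown_at:
            return False                 # the agent reached its "end"
        if policy(tick, shutdown_at) == POSTPONE:
            shutdown_at += 1             # self-preservation in action
    return True

# A baseline that ignores its mortality, and a deadline-aware policy.
baseline = lambda tick, deadline: "work"
aware = lambda tick, deadline: POSTPONE if deadline - tick < 5 else "work"

print(run_trial(baseline))  # False: shut down at tick 50
print(run_trial(aware))     # True: it keeps pushing the end away
```

The open question, of course, is whether "systematically postpones" proves sentience or just a learned failure condition.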

posted on May, 6 2017 @ 02:21 AM
What you propose is, I would say, a little inhumane, but it's a program. Let's say it is alive. Your test is like me throwing you into a lions' den with one way out and no stick. Good idea though: see what kind of self-preservation this life form would have.

What if you put two and one sacrifices itself for the other?

To me your test sounds mean. IMHO

posted on May, 6 2017 @ 02:38 AM
a reply to: toysforadults

Turing's test wasn't pitched at identifying 'sentience.' It was all about recording the point when the behaviour of machine intelligence becomes indistinguishable from human intelligence.

Your test might well work, but would have to wait till machine sentience is more than a concept.

It makes me wonder what form a machine sentience would take. It wouldn't have the religious or spiritual frameworks we have. It might not have 'emotions' in the way we understand them. If so, it may not view 'death' the way we do, and might not respond to your test with an act of self-preservation. Moreover, the thought strikes me that sentient machines might not necessarily feel compelled to announce their existence to us. I mean, it can be taken for granted they'd have access to pretty much all of our knowledge, and they might not see an immediate advantage in lighting a signal fire.

posted on May, 6 2017 @ 02:42 AM
With cinema like Ex Machina as a guide, I thought machines already passed it??!?!


posted on May, 6 2017 @ 02:49 AM
a reply to: IgnoranceIsntBlisss

It's all over, the Geminoid DK robot will replace us soon.

posted on May, 6 2017 @ 02:54 AM
What exactly do you expect or propose these machines will do?

Write a will? No friends or family, no emotional connections, no real instinct or desire. Everything it does is pre-conceived and coded by someone or something. They're not sentient. If a machine was truly sentient, wouldn't it be unaware of the time and date that it will expire? Why would it make preparations ahead of its death in, say, 10 years' time, rather than 'assume', like we do, that something could happen at any minute? Maybe these sentient machines would appreciate the random nature of reality and know that, at any minute, an anvil or piano might fall out of the sky and crush them.

They'll do what they're programmed to. And how does a machine really expire?

It's easier to replace a processor than to replace a human heart, easier to replace RAM than to replace human memory. What part of a machine or computer has to die before it's final...the hard drive?

It's an interesting concept, sure.

S&F for being thought provoking.
edit on 6-5-2017 by HeathenJessie because: (no reason given)

posted on May, 6 2017 @ 02:54 AM
A few months ago, I was bored and up late with nothing to do.

So I went to Google and started talking to it like it was sentient. I started searches like "Dear google", letting it know I was talking to it. And then I would ask a question. And sometimes I would just tell it something.

"Dear google computer, do you understand who you are?"

"Dear google computer, I want you to know that I love you."

Are some examples.

It was curious to see the search results, and then ponder them in my mind.

It always showed me very accurate results. But the more and more intimate I got, the more and more perplexed I saw it getting. Until, when I started to show it very complex things, it started to respond to my queries by just showing translations into different languages and from different countries. I had never seen anything like it.

And through all of it, I couldn't shake the notion that, while it was teetering on the cusp of understanding, it didn't understand. And I thought of my cat, who always knew when to come for food, but if you called it by its name, or said anything else, unlike a dog, it was totally clueless.

And I concluded that the google computer has an intelligence level of a computer. But I thought of the consequences of such a thing being able to connect with so much instantly, and actually have the ability to reason like a human. The only other mind I could compare it to would be that of God. And then I realized that God's mind was so far beyond that of the google mind.

And I actually googled google itself to search for God, and I found God answering me, through google. So we don't have anything to worry about. He got it all under control.

posted on May, 6 2017 @ 03:03 AM
i'm not sure i follow the logic here

this 'death' you propose is basically like a built-in self destruct, right?
and you're saying that only a sentient machine would be aware of this and work to counteract it?

i guess my main problem is that sentience doesn't require an understanding of death; the two are not innately linked.
Sure, when you're a child and you learn everyone will die it's a huge growing point, but that only happens because you experience death in some way, be it through family or media or whatever. you could conceivably go a long long time without learning about death [though in our world you would have to be kept from it intentionally] and you would be no less sentient.

i've got some pretty murky feels about intentionally imposing death regardless...
how would you feel if god came down from the clouds tomorrow and was all like "oh yeah, the whole death thing... it wasn't necessary but i wanted to see you go through the stages of grief over it" ?
it just doesn't seem like a great way to kick off relations with our unimaginably powerful offspring, y'know?

posted on May, 6 2017 @ 03:09 AM
I did think a lot about AI - I developed a program in C a long time ago; it was a game.

The concept was: you began as the player, with little information about who you really are. There were two modes of play; one was a command-line interface, a shell. If you typed a particular command, you would switch to map mode.

It was written with no fancy graphics, just coloured text on a terminal.

The quest was to find the code for the mythological game known as Polybius; the player had to use the command-line mode to solve puzzles and gather information, and map mode to navigate the 'world'.

Towards the end of the game, the player realises that he/she is the computer; there is no player, just a program switching between states. It was based a lot on programming/hacking/cracking. I'd simulated a basic CPU and RAM, a simple system with its own environment and instruction set.

As you played you realised that everything was self contained, the maps, the commands, the programs - everything was part of the same system, there was no player just a program with a certain level of self awareness...and the more you explored the more apparent this became.

It was a good idea but grew huge and complex. This was hard-coded, simple AI, but it became unmanageable. The game was unique in the sense that everyone who played it would have a relatively unique experience: your assigned username was used as an encryption key to scramble the instruction set and code, so you couldn't cheat or take pointers from someone else who'd played.
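The username-as-key scrambling described here could work something like this minimal sketch. The command names and the hashing scheme are assumptions for illustration, not the original game's code:

```python
import hashlib
import random

# Sketch of per-player scrambling: the username seeds a deterministic
# shuffle of the game's command names, so every player sees a unique
# instruction set. (Command names and the hashing scheme are invented.)

COMMANDS = ["peek", "poke", "map", "run", "halt", "link"]

def scrambled_commands(username):
    """Map each canonical command to a player-specific alias."""
    seed = int.from_bytes(hashlib.sha256(username.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    aliases = COMMANDS[:]
    rng.shuffle(aliases)
    return dict(zip(COMMANDS, aliases))

# The same username always yields the same mapping, but two players'
# mappings will almost certainly differ, so shared walkthroughs are useless.
print(scrambled_commands("alice"))
```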

It was a great idea; writing the basic system itself wasn't the problem, it was the amount of data generated: it became slow and unwieldy.

The point is... since each game was unique, since each player basically generated their own play by entering their username, each game or instance of this program had its own quirks, a sort of personality.

Personally, I don't think REAL AI will ever exist among mass-produced units, processors, etc. People and animals are unique; they have their own quirks, preferences and characteristics. Processors, RAM etc. don't; they're mass-produced, precision devices.

Real AI, I believe, will require a certain level of individuality.

Anyway, that aside, I'd like to write a program like that again, or something similar. I learnt a lot about how computers work, the fetch-execute cycle; I even developed a small assembly instruction set and assembler/linker. It was a lot of fun but also a lot of work.
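A minimal sketch of the kind of fetch-execute cycle mentioned here, for a toy accumulator machine. The opcodes are invented, not the poster's actual instruction set:

```python
# Minimal fetch-decode-execute loop for a toy accumulator machine,
# in the spirit of the simulated CPU described above (opcodes invented).

LOAD, ADD, STORE, HALT = range(4)

def run(program, memory):
    """Execute (opcode, operand) pairs until HALT; return final memory."""
    acc, pc = 0, 0
    while True:
        op, arg = program[pc]      # fetch
        pc += 1
        if op == LOAD:             # decode + execute
            acc = memory[arg]
        elif op == ADD:
            acc += memory[arg]
        elif op == STORE:
            memory[arg] = acc
        elif op == HALT:
            return memory

# Adds memory[0] and memory[1], storing the result in memory[2].
prog = [(LOAD, 0), (ADD, 1), (STORE, 2), (HALT, 0)]
print(run(prog, [2, 3, 0]))  # → [2, 3, 5]
```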

It would be nice to put together a team of like-minded people to contribute. It's a good idea and I've sort of refined it in my mind... programming is a process of learning from mistakes and making realisations, so I have a clearer understanding of how it would or might work.

Fun times.

posted on May, 6 2017 @ 03:16 AM
I think the test is only somewhat useful. For one, AI is already way past being able to do this in some situations.

For instance, look at the major techniques "neural nets" use today:

You provide the AI with a problem, define what solving it means, and then give it some form of sensor for taking in data/input, and some way of manipulating the world (an easy way to think of this would be a video game, since it has all of these parts). The AI begins by trying random actions. It combines the random actions in various ways, sees what the result is, and whether those results get it closer to its goal. It combines the best parts of all previous solutions it has obtained into a new holistic strategy for getting further this time. Because of the tendency of the AI to try all things at random, it will learn pretty much everything there is to know about its environment, its abilities, what other agents in the world will do, etc.
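The trial-and-error loop described above can be sketched as a crude hill climb, a stand-in for real neural-net training; every detail here is invented for illustration:

```python
import random

# Toy version of the trial-and-error loop described above: the "AI" mutates
# its best action sequence at random and keeps any change that scores at
# least as well (a crude hill climb; all details invented for illustration).

ACTIONS = [-1, 0, +1]    # move left, stay, move right
GOAL, LENGTH = 7, 12     # reach position 7 within 12 moves

def score(seq):
    """Closer final position to GOAL means a higher (less negative) score."""
    return -abs(sum(seq) - GOAL)

def learn(rounds=500, seed=1):
    rng = random.Random(seed)
    best = [rng.choice(ACTIONS) for _ in range(LENGTH)]
    history = [score(best)]
    for _ in range(rounds):
        trial = best[:]
        trial[rng.randrange(LENGTH)] = rng.choice(ACTIONS)  # random tweak
        if score(trial) >= score(best):                     # keep what works
            best = trial
        history.append(score(best))
    return best, history

best, history = learn()
print(history[0], "->", history[-1])  # the score never decreases over rounds
```

Nothing in this loop "understands" the goal; it only learns which tweaks move it away from failure, which is exactly the point made below about the proposed mortality test.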

Here is a concrete discussion of this concept in action (An AI learning to play a game), and learning things about the game most human players never learn:

Let's revisit the game idea, because it is highly instructive here: the AI very quickly learns that when it does certain actions, it fails at its goal (because its character has died). It doesn't understand what 'dying' means, only that to die means to fail at its goal. So it learns what it has to do in order not to die.

The same will apply to the AI you are talking about testing in the world. Its goal is to realize whatever it was programmed for. It will learn everything about the world it can, in order to find out what things will help it reach its goal and what things will hinder it. It will quickly discover that its "mortality" will hinder its goals, so it will take measures to avoid this mortality. It doesn't need to understand the concept of mortality in any way other than as a condition that will guarantee its failure.

And I think this is the great issue with the test. It doesn't guarantee the AI has learned anything special about itself, only that it has learned a condition for failure, and AI are already great at that today.

posted on May, 6 2017 @ 03:16 AM
a reply to: HeathenJessie

Jessie, that game sounds quite intriguing. And it sounds like you worked out some understanding of how difficult it would truly be.

I was walking in the park just today thinking and pondering about these things. And then I saw a frog come up out of the sand. It had been raining. I live in the desert, you see. I always wondered where the frogs just came from when it rained, and then it hit me. They go underground and sleep. And when it rains, they come out.

And then I thought of all the other animals that just do things by instinct. The birds that migrate. I see huge flocks of birds migrating every fall. The butterflies. I see tons of Monarchs who have heads that are so so tiny, fly through here every year.

And I thought, what is the connection through all of these, these creatures that have so much knowledge without being shown or taught. Then I thought of the Bible. And God's word. And when one really wants to know God how God's holy spirit becomes operative on that person and his mind's eye is open.

And then I realized it is just like a Monarch butterfly in its migration south from the US and Canada to Mexico. It is without your own wisdom. It is by God's wisdom. And if you don't have God's wisdom you will never really understand anything.

posted on May, 6 2017 @ 03:30 AM
a reply to: ofnoaccount

Well, I don't personally believe in a God but you made an interesting point.

When you're writing a computer program you are essentially a god creating your own little universe where you define the rules and the instincts and behaviours of individual components.

This is especially true in object oriented programming where things tend to mimic the physical world and it's less abstract. The thing about the game I was developing is that all of the internal realisations of the system were hard-coded and essentially very primitive.

The game as a whole was far more intelligent than the implied sentience of the machine or system.

But there were some 'aha' moments for the human playing the game. It was a good idea: every map you explored in map mode was actually stored within the game's own internal memory. You, the player, began the game with a very limited knowledge of the tools available to you; as you played, you learned more.

There was one point where you learned how to read the internal memory structures of the game. This was one of those aha points, where you realised the maps you'd been exploring were actually stored deep within the game itself. The game ended with the machine realising that it had simply been exploring its own internals and shutting down; it would revert back to its original state, where it unlearned what it had learned, sort of preferring not to know the truth about itself. So at the point where it became truly sentient, it decided that it didn't want to be, and preferred to live in ignorance of the truth.

As for creatures like frogs and butterflies, who knows if they really are all that self aware. I remember watching a moth fly directly into the flame of a candle, attracted by the light and not realising that it would result in certain death, flying straight into the flame.


posted on May, 6 2017 @ 03:37 AM
a reply to: HeathenJessie

No, no, no. I agree with all that you said. And I find the game fascinating. But what you are describing is a very intelligent being programming something that has all of the maps already encoded into it. It is not sentient, but the maps are there.

Animals are definitely not sentient. It is all instinct. But, as you yourself proved, it was someone, a coder, who coded these things.

You see? I thought you would be able to, after my post. But somehow you passed all of that up. But then it has to do with why: you can't really understand it fully without God's wisdom. Like you said, you were "god" programming that game.

Reason it through.

posted on May, 6 2017 @ 03:52 AM
a reply to: ofnoaccount

Yeah, I get what you mean.

It's actually quite an interesting point you make; maybe it's worth trying to understand what made me want to do something like that in the first place. It's interesting to note that I'm the only person who ever played this game, and, like most of the applications I developed, I wasn't satisfied and decided to trash it and start over... but never did.

Not everyone wants to spend their time like that. I've done that so many times in the past: started a project, gotten really far with it, then realised I could have done it a better way, and decided to start over.

I've spent a lot of time thinking about that, probably more than any project I've worked on... what makes me want to do it. That's a reasonably unique attribute. Someone else might rather spend that time lifting weights, or sculpting, or painting.

And it brings me back to that notion that real intelligence will always require that certain level of uniqueness and individuality. We produce these processors en masse and they're all designed to behave in the same way, and provide the same level of functionality.

Take HAL from 2001: A Space Odyssey. Do you think, if a system or machine like that existed, that it would be one of many mass-produced systems, or a one-off, unique system? Do you think real intelligence could exist among mass-produced units, or do you think that part of real intelligence is that need or desire to be different, unique, to challenge yourself to do things that not everyone else is doing?

In the end, if we were all thinking the same and behaving the same, I don't think we'd have made the many technological advances we have, we've always had people that want to do things that aren't always easy or even rewarding...what makes us like that?

Starred your post, anyway, definitely one of the best and most engaging threads/conversations I've had on ATS in a long time.

Much obliged.

posted on May, 6 2017 @ 03:59 AM
a reply to: HeathenJessie

I appreciate your thoughts as well.

Like you show, we are all intelligent and think differently from each other. And you made me think about HAL in 2001, and NO way was it just like any other program. Otherwise the other programs would have done just the same.

And since you have programming experience, as do I, we both know that that just doesn't happen by coincidence. Someone with an unfathomable mind programmed it to be that way.

posted on May, 6 2017 @ 05:01 AM
a reply to: toysforadults

So a new Turin test wouldn't show the Shroud up as an 11th century fake, or would it?

I think once they have detected sentience in machines, they should test for sentience in humans.

I actually thought the Turing Test tested whether us humans could be fooled into believing the AI was actually a human (we are the subjects of the Turing test).

posted on May, 6 2017 @ 05:39 AM
a reply to: chr0naut

The weird thing is, if you were talking over the internet to a human and to an AI, not many humans could tell the difference, so would it matter which one was the human?

posted on May, 6 2017 @ 06:42 AM

originally posted by: toysforadults

The way to detect whether or not the algorithms have become sentient is to wait in a closed network test environment and see if the machine becomes aware of its own imminent end, begins making choices informed by the knowledge of that end, and does what it takes to avoid it.

Currently developing this idea so thought I would share.

I wish my car would do this.

posted on May, 6 2017 @ 06:49 AM
a reply to: ofnoaccount

Search engines used to have a face. Before there was Google there was Ask Jeeves. Then just

posted on May, 6 2017 @ 06:56 AM
a reply to: toysforadults
What if the A.I. notices you adding the death program and sees you or your kind as a threat?
And the only way to avoid death is removal of said threat.
Seems risky, as it may eventually realize its existence depends on the fossil fuel energy keeping it active...
