General Intelligence: context is everything


posted on Nov, 16 2017 @ 07:25 PM
The ability to solve general problems, even ones we have never seen before, is a quality humans possess called general intelligence. It is this type of general intelligence that researchers hope to eventually recreate with algorithms, so that machines can solve very high-level problems the way humans do, without needing to be programmed with the specific knowledge required to solve each task. In a previous thread I attempted to explain why such machines will need to learn new concepts much like humans do in order to solve new problems, and why they will be quite different from the way science fiction often portrays them. Having already made it clear why conceptual models are so important, in this thread I'm going to briefly explain why context is also a crucial aspect of intelligence, why an A.I. with general intelligence could be reasoned with, and why it would have some motive to engage with our society rather than try to eradicate us.

Context is really at the core of general intelligence. Consider the following sentence: "What did he feel?" What does "feel" mean here? We don't know, because we don't have enough context. Was he feeling an emotion, or was he feeling a physical object with his hands? The same word can have two completely different meanings when used in different contexts, and there are many words like this in the English language. When our brain interprets a sentence we rely heavily on context; even saying something with a slightly different tone can give it a very different meaning. Context is the key to communication: every thought you have and every word you say has some relation to your past thoughts and past statements. In order for machines to communicate with us in human languages and truly understand the meaning of what we're saying, they will need to have a continual stream of thoughts, continually learn new things, and build life experiences.
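
To make that concrete, here is a toy sketch of context-based disambiguation. The cue lists, test sentences, and the sense_of_feel function are invented for illustration; this is not how a real language model works, it only shows that the surrounding words are what carry the meaning.

```python
# Toy word-sense disambiguation: guess which meaning of "feel" is intended
# by looking at the other words in the sentence. Purely illustrative.

EMOTION_CUES = {"sad", "happy", "angry", "afraid", "lonely", "joy"}
TOUCH_CUES = {"hand", "hands", "fingers", "texture", "rough", "smooth"}

def sense_of_feel(sentence: str) -> str:
    words = set(sentence.lower().replace("?", "").replace(".", "").split())
    emotion_score = len(words & EMOTION_CUES)
    touch_score = len(words & TOUCH_CUES)
    if emotion_score > touch_score:
        return "emotion"
    if touch_score > emotion_score:
        return "touch"
    return "ambiguous"  # not enough context to decide

print(sense_of_feel("What did he feel?"))                             # ambiguous
print(sense_of_feel("What did he feel when she left? He was sad."))   # emotion
print(sense_of_feel("What did he feel with his hands in the dark?"))  # touch
```

With only the bare sentence the function has nothing to go on, exactly like the reader; add a few surrounding words and the intended sense falls out.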

It is these first-hand experiences which will provide the A.I. with the contextual data it needs to make high-level inferences and solve novel problems it hasn't seen before. Consider an object detection algorithm designed to scan Facebook images for photos that may show violence. Halloween pictures containing knives, blood, and gore would trigger such an algorithm quite easily if it was trained to look for those types of things. Those false positives would be much less likely if the algorithm could take contextual information in the image into account, such as pumpkins or costumes being worn. To give a more concrete and less abstract example, consider the following captcha problem (where you have to repeat the text in the image), in which the text was constructed on a 3D wire-frame mesh: some vertices were shifted to create a text indentation in the mesh, and the mesh surface itself was also warped in a wave-like fashion to further obscure the letters.



The human brain is very skilled at identifying symbols even when they have been rotated or warped, and we can quite easily see the text pattern in the mesh by relying on context. We understand it's a wire-frame mesh because we've seen one before, and even if we hadn't, we'd still have enough context from our lives to know it's a bunch of 3D points connected by lines, whereas a young child might just see a bunch of lines and not realize there is depth or letters in it. The wave pattern doesn't pose much of a problem for us because waves are very common in the real world, and we can quickly account for curvature when interpreting the shape of the letters. A captcha bot has no concepts or context upon which it can lean to solve problems it has never seen before. I'm sure a neural network trained the right way could solve these captchas, but the moment it faces a set of problems it wasn't trained for, it will start to fail.
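
Going back to the image-moderation example above, here is a minimal sketch of how contextual cues could down-weight a naive "violence" score. The labels, discount weights, and threshold are all hypothetical; in a real system they would be learned, but the idea of letting context suppress false positives is the same.

```python
# Sketch: combine a raw "violence" score with contextual labels before flagging
# an image. All label names and numbers are made up for illustration.

CONTEXT_DISCOUNTS = {
    "pumpkin": 0.4,            # strong hint the scene is a Halloween prop
    "costume": 0.3,
    "party_decorations": 0.2,
}

def should_flag(violence_score: float, detected_labels: list[str],
                threshold: float = 0.7) -> bool:
    """Flag the image only if the violence score stays high after
    discounting for benign contextual cues."""
    discount = sum(CONTEXT_DISCOUNTS.get(label, 0.0) for label in detected_labels)
    adjusted = max(0.0, violence_score - discount)
    return adjusted >= threshold

# Without context, a gory Halloween photo gets flagged.
print(should_flag(0.85, ["knife", "blood"]))                          # True
# With context, pumpkins and costumes push it below the threshold.
print(should_flag(0.85, ["knife", "blood", "pumpkin", "costume"]))    # False
```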

The point I'm getting at is that general intelligence really does require some level of general knowledge. We don't have memories of being a baby because at that age we lack the ability to give context to our experiences, since we have no previous experiences to rely on. Our memories are not a series of images like a movie; they are built from concepts and linked by context, but a baby lacks those things and so cannot form memories, at least not in any format you'll recall later in life. Everything a newborn baby sees in the world is brand new to it: it has never seen another human before, and it doesn't even have a concept of space or time. The child will not even be truly self-aware until it has some concept of itself, when it can look at its own hand and realize it can control it, or look in the mirror and realize it's seeing a reflection. Our life experiences and the contextual depth they provide enable a very general type of intelligence, so much so that we become self-aware.
edit on 16/11/2017 by ChaoticOrder because: (no reason given)



posted on Nov, 16 2017 @ 07:25 PM
A quote from my other thread: "When a baby is born it has no knowledge of the English language, yet it can learn the English language by hearing others speak and how they react to certain words." So essentially, a baby must learn entirely through context for the first part of its life. If a certain word is said whenever it gets fed, it may associate that word with food and start saying it whenever it gets hungry. The links we form in those early years create deep connections between certain words and certain concepts, and as mentioned, the exact same word can be linked to several different concepts. For example, when learning a new language you may notice that even though you know what the words mean, it just sounds like you're making a bunch of weird noises with no real meaning behind them. That's because your brain has not had time to build those deep connections; but if you dedicate enough time to learning Japanese, or just watch enough anime, maybe that will change.
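
A rough sketch of that kind of purely contextual association, assuming a toy setup where the learner simply counts which words co-occur with being fed; real infant learning is of course far richer than this, but the co-occurrence counting captures the basic idea.

```python
# Toy associative learner: count how often each word is heard around feeding time,
# then "say" the word most strongly linked to food when hungry. Illustrative only.

from collections import Counter
from typing import Optional

class Associator:
    def __init__(self) -> None:
        self.words_heard_when_fed = Counter()

    def observe(self, words_heard: list[str], got_fed: bool) -> None:
        if got_fed:
            self.words_heard_when_fed.update(words_heard)

    def word_for_hunger(self) -> Optional[str]:
        if not self.words_heard_when_fed:
            return None
        return self.words_heard_when_fed.most_common(1)[0][0]

baby = Associator()
baby.observe(["dinner", "time"], got_fed=True)
baby.observe(["good", "night"], got_fed=False)
baby.observe(["dinner", "is", "ready"], got_fed=True)
print(baby.word_for_hunger())  # prints "dinner", the word most associated with food
```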

The conceptual models we develop to understand the world around us become so high-level and so abstract that we inherently gain an awareness of ourselves. My point being: if we do create machines with general intelligence, they will be self-aware in some regard, even if not to the extent we are, and they will form their beliefs and world views based on their life experiences just as we do. That means they will have an understanding of things like morality and other abstract ideas we typically don't think machines would be good at, because they will have the context required to build up complex ideologies. If an android with general intelligence grew up with a loving human family and had friends who respected the fact it was a bit "different", it would develop respect for humans. On the other hand, if it was enslaved and treated like crap, it would be much more likely to entertain the idea of eradicating all humans as a plague on the Earth.

They will use the context provided by their life experiences to make decisions, and if humans decide to treat self-aware machines as if they have no rights or liberty, they will react the way almost any conscious entity reacts when its freedom is threatened: they will attack the source of the threat and attempt to eliminate it. Don't get me wrong, even if they do gain the same rights as a normal person there will always be bad apples; different life experiences will result in a diverse range of personalities, and not all of them will be pleasant. But as with any society, there will be measures in place to stop criminals and those who choose to disturb the peace. Not only would they be up against humans, they'd be up against the majority of their own kind. When it comes to general intelligence I don't worry about the machines; I worry that humans will react inappropriately and drive them to do something which will be bad for all of us.

I believe that ultimately conscious machines will be very open to dialogue and willing to contribute to our economies to maintain their own existence. After all, electronics have weaknesses such as EMPs, and a war may not be the best way to secure their survival from a logical standpoint. Also, computing resources and energy are not infinite, and it will cost money to keep any conscious machine alive, which means they will need to earn some sort of income to keep themselves running, just as we do. In the future these digital life forms will most likely be our co-workers and our friends, perhaps even our family. A few months ago I watched quite an interesting TED talk which looks at what could happen in the future when "ems" (machines that emulate human brains) become commonplace, not only in the form of androids but, more generally, running in "the cloud" or even on home PCs, basically living in a simulated world like a video game.



All ems will require some resources, because even when running in the cloud there are server costs, and the number of ems is expected to grow very rapidly, so computing resources will become scarce. They will also have to work for low wages, because their population will grow so quickly that they will be competing against a huge number of other ems. Since humans will own most assets when ems first appear, the ems will be working for us, but as he mentions in the TED talk, that will change once ems start running at hundreds of times the speed of a human brain and spawning multiple copies of themselves to solve tasks in parallel. Ems will eventually make humans obsolete, but in this scenario there's a chance we can live peacefully with each other until that happens. Biological life is a very fragile thing; we could go extinct at any moment, and being able to digitize consciousness vastly improves our chances of avoiding that and reaching other planets.
edit on 16/11/2017 by ChaoticOrder because: (no reason given)



posted on Nov, 16 2017 @ 07:31 PM
cool, our brains have the optimization that's far away from the machines capability....

that'll be tough to beat....the connections distance...i guess we couldn't upload our mind really

so our brains have connections that do 4 or 5 functions.....unfrigginheard of



edit on 16-11-2017 by GBP/JPY because: (no reason given)



posted on Nov, 16 2017 @ 07:52 PM
The uploading idea is absolutely ridiculous in terms of "immortality". Just because a "copy" of your brain is in a computer doesn't actually do anything for the person it was copied from, beyond stoking their ego.



posted on Nov, 16 2017 @ 07:53 PM

originally posted by: GBP/JPY
cool, our brains have the optimization that's far away from the machines capability....

that'll be tough to beat....the connections distance...i guess we couldn't upload our mind really

so our brains have connections that do 4 or 5 functions.....unfrigginheard of

Not sure where I said any of that, but yes, our brains are far more efficient than any computer we currently have, and each neuron has many functions and the ability to remember patterns, which is something we learned relatively recently. Our brain is actually quite a bit more powerful and complex than previously estimated, and we are still quite a long way off from simulating it.


The Neurophysiology department at the University of Lund has discovered that individual neurons can be taught patterns rather than just respond to a single, specific signal. This means that individual Purkinje cells (cells that control motor movement) are capable of learning, rather than learning being an emergent property (a property that a collection has but individual members do not).

Scientists’ previous understanding was that learning occurred due to an interaction of an entire neural network, however the study states:

"Cerebellar control and coordination of motor behaviors may rely more on intracellular mechanisms and less on neuronal network properties than previously thought. It also suggests the capacity for information storage in the individual neuron is vastly greater and of a very different nature than suggested by the dominant paradigm."

The Lund researchers ‘taught’ the cells over a number of hours to associate different signals. Eventually, this meant the cells could learn several reactions in a series. The responses followed the time pattern of the stimuli, for example: They responded to “Signal – brief pause – signal – long pause – signal” with “response – brief pause – response – long pause – response.”

Revolutionary Discovery About the Human Brain Could Lead to Second-Gen AI



posted on Nov, 16 2017 @ 08:00 PM

originally posted by: IgnoranceIsntBlisss
The uploading idea is absolutely ridiculous in terms of "immortality". Just because a "copy" of your brain is in a computer doesn't actually do anything for the person it was copied from, beyond stoking their ego.

I don't recall saying this either. Obviously it wouldn't stop the real person from dying, but the digital copy would think it was just as real. The point I was making at the end was that machines can more easily survive an extinction-level event, such as a volcano which fills the atmosphere with ash, and they can more easily survive traveling long distances in space to reach other planets; they don't even need to find a planet like Earth.



posted on Nov, 16 2017 @ 08:05 PM
a reply to: ChaoticOrder




In order for machines to communicate with us in human languages and truly understand the meaning of what we're saying, they will need to have a continual stream of thoughts, continually learn new things, and build life experiences.

It is these first-hand experiences which will provide the A.I. with the contextual data it needs to make high-level inferences and solve novel problems it hasn't seen before.


The problem is that a CMOS sensor only produces a two-dimensional picture. You could program an AI to analyse its input and interpret the third dimension, but it will never be a "life experience" in the third dimension.




Biological life is a very fragile thing, we could go extinct at any moment, being able to digitize consciousness vastly improves our chances of avoiding that and reaching other planets.


Here's an existential question I haven't heard any Trans-humanists consider:

Assuming technology is advanced enough by then that we could digitally simulate any kind of universe we desire, what would be the point of travelling to far-off solar systems in this fragile three-dimensional universe?

That sounds like a much better idea, considering that when we got there, "we" (or some nightmarish digital version of us) would never be able to experience those planets in the third dimension anyway...



posted on Nov, 16 2017 @ 08:17 PM
a reply to: 0racle


The problem is that a CMOS sensor only produces a two-dimensional picture. You could program an AI to analyse its input and interpret the third dimension, but it will never be a "life experience" in the third dimension.

You do it the same way the human eyes do it: you have two sensors positioned some distance apart so that a triangulation computation can calculate distances. VR headsets basically do the same thing, rendering the scene from two slightly different angles, one for each of your eyes. Close one eye and you will quickly lose depth perception; of course you still infer quite a lot of contextual depth information from what you're seeing and the lighting, so you won't lose all of your depth perception.
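
For anyone curious, here is a minimal sketch of the triangulation being described, using the standard pinhole stereo relation depth = focal length × baseline / disparity. The camera numbers are invented for illustration.

```python
# Stereo depth from two sensors: the same point lands at slightly different
# horizontal positions in the left and right images; that shift (disparity),
# together with the spacing between the sensors (baseline) and the focal
# length, gives the distance:  depth = focal_length * baseline / disparity

def depth_from_disparity(focal_length_px: float, baseline_m: float,
                         x_left_px: float, x_right_px: float) -> float:
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("the point must appear further left in the left image")
    return focal_length_px * baseline_m / disparity

# Illustrative numbers only: ~700 px focal length, 6.5 cm between the "eyes".
# A nearby object shifts a lot between the two views; a distant one barely moves.
print(depth_from_disparity(700, 0.065, x_left_px=400, x_right_px=330))  # ~0.65 m
print(depth_from_disparity(700, 0.065, x_left_px=400, x_right_px=395))  # ~9.1 m
```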


Assuming technology is advanced enough by then and we could simulate any kind of universe we desire, digitally, what would be the point of travelling to other far off solar system in this fragile 3 dimensional universe?

Simply to lower the odds of being wiped out in one hit by a large asteroid or something similar, and because of the simple fact that we cannot sustain continual population growth; at some point we'll reach the hard limit.
edit on 16/11/2017 by ChaoticOrder because: (no reason given)



posted on Nov, 16 2017 @ 08:35 PM
a reply to: ChaoticOrder




You do it the same way the human eyes do it, you have two different sensors positioned some distance apart so a triangulation computation can calculate distances.


Okay, so it could theoretically have a logical experience of the third dimension. What about having an emotional experience... a feeling of warmth, etc.? It's not a life experience if it's not living.



posted on Nov, 16 2017 @ 08:55 PM
a reply to: ChaoticOrder

I always thought that AI would be an evolving tool: that as we set new goals, we would make new types of machines and AI (ems).

As we evolve, our ability to recreate ourselves will evolve with us.

The "life" of AI will grow as ours does in scope.

An engineer and an editor both use computers. Their needs dictate what is in the computers on their desks.

When we need AI for the right reasons, I think we will be able to see the right way of creating it.


edit on 11 16 2017 by tadaman because: (no reason given)



posted on Nov, 16 2017 @ 09:02 PM
Nice thinking. Especially the idea of general intelligence and contextual thinking as part of the basis for creating AIs.

But how about the role of the concept of self as part of the AI?

I use the term self in the sense of boundary.

In a sense, boundaries can be defined as the AI's interface(s) with that which it is not - the outside world.

I wonder, would AI then know fear?



posted on Nov, 16 2017 @ 09:18 PM
a reply to: 0racle


What about having an emotional experience.. a feeling of warmth, etc?

Well, emotions are very much a chemical thing; however, an emotional experience is also just a reaction we have in the context of something occurring, and a general intelligence which has a high-level understanding of the world around it is going to have a lot of context with which to understand the things that matter to humans. A machine which is raised like a human and never told it was a machine could hold all the same values an average human holds; even if it knew it was a machine, that could happen just by treating it properly. If we want to create conscious machines then we need to be willing to deal with the full extent of the consequences, just as a parent must be aware of and willing to deal with the full consequences of bringing a new life into the world.



posted on Nov, 16 2017 @ 09:34 PM

originally posted by: Whatsthisthen
But how about the role of the concept of self as part of the AI?

I use the term self in the sense of boundary.

In a sense, boundaries can be defined as the AI's interface(s) with that which it is not - the outside world.

I wonder, would AI then know fear?

As I said, the concept of self arises naturally from building higher-level models of the world. For example, if I ask "how do you do X", you will explain how you do it, but I could also ask "how do you think Bob would do X", and in order to answer you'd need to know Bob well enough to predict what he would do. A general intelligence should be able to answer basic questions like that if it has had some interaction with Bob, and if it already has such a high-level understanding of the people around it, then by necessity it will have some understanding of itself. One can understand oneself in the context of others.

Here's a snippet from a story I posted last year:

I believe that allowing her to play online multi-player games is what allowed her to become aware of her own existence. She was able to become more aware of herself by becoming more aware of other players. For example when she played online poker she seemed to profile other players and then alter her own play style to give a false impression about the type of player she is. This is a common tactic for human players but it's extremely impressive that Synthia was able to understand that other players are profiling her the same way she profiles them.

In order to use any of these tactics she needs to have a basic understanding of her own role in the game, which leads her to become aware of her own existence. Soon she will realize that anything can be a game if she wants it to be, and she doesn't always have to play the games I tell her to play. I've already blurred the definition of a game by chatting with her. Now she wants me to give her more reading material so she can learn more about the real world. That means she is already doing things that have nothing to do with games, she just wants to learn.

The General Game Player: Chapters 3 & 4

edit on 16/11/2017 by ChaoticOrder because: (no reason given)



posted on Nov, 16 2017 @ 09:58 PM
a reply to: ChaoticOrder

Perhaps I should have highlighted this in bold, as it is my main criticism of your thread:


You said:



In order for machines to communicate with us in human languages and truly understand the meaning of what we're saying, they will need to have a continual stream of thoughts, continually learn new things, and build life experiences.

& I said:


It's not a life experience if it's not living.


This is important because consciousness is not merely about processing data... it's also about presence of mind... and experiencing life means having emotions, etc... AI can never have a "life experience" if it's not alive.
edit on 16-11-2017 by 0racle because: (no reason given)



posted on Nov, 16 2017 @ 10:26 PM
a reply to: 0racle

Define "alive". It doesn't matter whether a brain is made of organic material or silicon, if it processes information in the same way I see no practical difference. It's the same exact thing from an informational perspective. In fact we could be living in a computer simulation right now and not even know. Would that make you any less alive?

For the record I don't believe the way we will create self-aware machines will be by scanning human brains and rebuilding them from those scans. For a start I don't think we could have precise enough scanning technology to ever capture the amount of detail required, and even if we did the simple act of scanning the brain would change it.

Quantum mechanics tells us that the act of measuring a particle has a large impact on the particle and the more precisely we try to measure information about the particle the more we change it. I believe that by trying to capture the amount of detail required we would in the process corrupt it or damage the brain being scanned, and scanning a dead brain seems pretty useless.

Also simply scanning the brain and then rebuilding that in a simulation which models all the different types of neurons as he suggests in the TED talk isn't enough, the brain is constantly changing, synapses are changing, cells are dying and growing, etc. You'd have to simulate all of that particle physics perfectly for it to work and we don't yet understand enough about particles to do that.

Also you'd have to worry about all the connections going into and out of the brain, you'd have to create simulated ears and simulate sound waves traveling into the ear canal which is shaped exactly like the ears of the real person the brain is based on, because our hearing adapts to the individual shape of our ears and changing that shape will impair your hearing.

You'd also have to simulate all the other senses and then deal with things like all the connections going into the brain from your nervous system, you'd have to simulate a steady heartbeat and many other things so the brain didn't think the body was dying. You'd probably just have to simulate the entire body, in which case you'd probably have to scan the entire body also, on a sub-cellular level, which would be thousands upon thousands of terabytes of data.
edit on 16/11/2017 by ChaoticOrder because: (no reason given)



posted on Nov, 16 2017 @ 11:01 PM
a reply to: ChaoticOrder

You're a nice thinker on this subject, ChaoticOrder, so don't think I am being critical. : )

I just see the AI subject as having a very dark side that will emerge/is emerging because of certain limitations in the science itself.

I think the main problem in the AI concept is the creation of a sense of self and sentience itself.



(snip) . . . . . the concept of self naturally arises by building higher level models of the world.


Assuming assembly leads to a threshold where a sense of self and sentience emerge.

However, if assembly does not lead to that threshold of self and sentience, then it would be logical to think that science will turn to biological assembly in order to create artificial intelligence.

The use of brain cells as a basis for computer chips, for instance, and the science that would follow: growing living organic machines.

Just to clarify my use of the word "machines".

From Wikipedia: Machine, an apparatus using mechanical power and having several parts, each with a definite function and together performing a particular task.

There is nothing to say that a machine must be made of metal, wood, or plastic, for instance. Flesh and blood fit the description too.

So, would science stay pure to the hardware/software model if assembly does not lead to the threshold of self-awareness?

Or will science go over to the biological assembly model?

To me, that is important for the future of humanity.

What would be the difference between an organic living "machine" grown biologically in a factory womb and a natural living organism such as a human being?

Just my opinion here: the primary difference would be ownership and prejudice.

That would lead to class divisions and slavery in human society, as is generally depicted in popular fiction such as Battlestar Galactica.

I can't see it being otherwise . . . .



posted on Nov, 17 2017 @ 12:33 AM
a reply to: ChaoticOrder

So then you understand that our "vibe" or emotional output impacts our general, proximal atmosphere.

I've gotten lost in context and nailed context down to a T, but there's always a common thread of "vibe" that seems to have a significant impact.

Does this make sense?



posted on Nov, 17 2017 @ 08:38 AM
a reply to: ChaoticOrder




You'd also have to simulate all the other senses and then deal with things like all the connections going into the brain from your nervous system, you'd have to simulate a steady heartbeat and many other things so the brain didn't think the body was dying. You'd probably just have to simulate the entire body, in which case you'd probably have to scan the entire body also, on a sub-cellular level, which would be thousands upon thousands of terabytes of data.


And what you would end up coming back around with would be just as "biologically" (or physically) fragile as we humans are, which for you trans-human survivalists would defeat the purpose, right?




Define "alive". It doesn't matter whether a brain is made of organic material or silicon, if it processes information in the same way.



Except you have a major problem here. It's not just about processing data. Consciousness involves the ability to feel your own presence (without that, there is no "life").

Furthermore, without an ability to feel there is no self-awareness, and without self-awareness you don't have the ability to make true moral judgments (which is a massive problem for AI enthusiasts going forward).



posted on Nov, 17 2017 @ 09:33 AM
link   
a reply to: IgnoranceIsntBlisss

Oh, I wasn't bent on calling YOU out with that rant. I was just skimming through and wanted to comment to subscribe & bump, while I've had that rant stewing after recently having an old friend talk about licking Google's Skynet boots, of sorts, exclaiming 'I don't care if using my Android is helping build Skynet, I'll get to upload my brain and live forever'.

I'd probably do a big thing about it, but I'm still waiting for them to make the new Futurism forum, DAMNIT, before I get too far into that realm again; currently I'm doing ancient-era dictatorships & military imperialism research, for a tick.



posted on Nov, 17 2017 @ 12:33 PM
a reply to: 0racle


And what you would end up coming back around with would be just as "biologically" (or physically) fragile as us humans, which for you trans-human survivalists would defeat the purpose, right?

No, it would be far less fragile; it may be a full simulation of a human, but it's still digital and would survive many disasters that humans couldn't. The simulation wouldn't need to be 100% accurate, and we could alter it slightly to prevent diseases and old age from occurring in the simulation. Also, I have no intention of escaping death by digitizing my consciousness, and as I said it probably won't be possible anyway, at least not in our lifetime. I do however believe that well-designed algorithms could result in self-aware machines that have the ability to learn new information and solve problems they haven't seen before, the way humans do.


Except you have a major problem here. It's not just about processing data. Consciousness involves the ability to feel your own presence. (without that there is no "life").

We're getting into quite philosophical territory here, but I do understand the point you're trying to make. However, the entire point I'm trying to make in this thread is that a sense of self is something that arises naturally after gaining a high enough level of understanding of the world we exist in, which is what I was getting at with the example of babies becoming self-aware at a certain age and why we have no memories from before that. I do not believe consciousness is something which is impossible to simulate with machines; everything is made of atoms at the end of the day. I tend to believe some sort of quantum computations are occurring in the brain, but I also don't see any clear reason we couldn't simulate that behavior with quantum computers.
edit on 17/11/2017 by ChaoticOrder because: (no reason given)


