
Google developing "thought vectors" (computers with ‘common sense’ within a decade)

posted on May, 22 2015 @ 11:35 AM
www.theguardian.com...



Computers will have developed “common sense” within a decade and we could be counting them among our friends not long afterwards, one of the world’s leading AI scientists has predicted.

Professor Geoff Hinton, who was hired by Google two years ago to help develop intelligent operating systems, said that the company is on the brink of developing algorithms with the capacity for logic, natural conversation and even flirtation.

The researcher told the Guardian that Google is working on a new type of algorithm designed to encode thoughts as sequences of numbers – something he described as “thought vectors”.

Although the work is at an early stage, he said there is a plausible path from the current software to a more sophisticated version that would have something approaching human-like capacity for reasoning and logic. “Basically, they’ll have common sense.”


These "thought vectors" are a way to extract something closer to actual meaning.



The technique works by ascribing each word a set of numbers (or vector) that define its position in a theoretical “meaning space” or cloud. A sentence can be looked at as a path between these words, which can in turn be distilled down to its own set of numbers, or thought vector.

The “thought” serves as the bridge between the two languages because it can be transferred into the French version of the meaning space and decoded back into a new path between words.

The key is working out which numbers to assign each word in a language – this is where deep learning comes in. Initially the positions of words within each cloud are ordered at random and the translation algorithm begins training on a dataset of translated sentences.
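The mechanism the excerpt describes can be sketched in miniature. In the toy Python below, a handful of words get hand-picked positions in a three-dimensional "meaning space", a sentence is distilled to a single vector by averaging its word vectors, and cosine similarity measures how close two such "thoughts" lie. The tiny vectors and the averaging step are illustrative assumptions of mine – the system described in the article learns high-dimensional positions with deep networks, not hand-picked values.

```python
import math

# Toy 3-dimensional "meaning space" -- real systems learn hundreds of
# dimensions from data; these hand-picked coordinates are purely illustrative.
word_vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.1],
    "man":   [0.5, 0.9, 0.0],
    "woman": [0.5, 0.1, 0.0],
    "apple": [0.0, 0.4, 0.9],
}

def sentence_vector(words):
    """Crudely distil a sentence into one vector by averaging its word vectors."""
    dims = len(next(iter(word_vectors.values())))
    total = [0.0] * dims
    for w in words:
        for i, x in enumerate(word_vectors[w]):
            total[i] += x
    return [x / len(words) for x in total]

def cosine(a, b):
    """Similarity of two vectors: values near 1.0 mean nearby 'thoughts'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

v1 = sentence_vector(["king", "man"])
v2 = sentence_vector(["queen", "woman"])
v3 = sentence_vector(["apple"])

# Sentences about royalty/people sit closer together than either does to "apple".
print(cosine(v1, v2) > cosine(v1, v3))  # → True
```

Training, in this picture, amounts to nudging the word positions – initially random, per the excerpt – until sentences that mean the same thing (for example, a sentence and its French translation) end up with nearly identical vectors.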




posted on May, 22 2015 @ 11:40 AM
Ah, the birth of Ultron... "You want to protect the world, but you don't want it to change..."



posted on May, 22 2015 @ 11:43 AM
The last thing I need is a computer with common sense.

That's my job. As for "friend"? Seems unlikely.



posted on May, 22 2015 @ 11:46 AM
Do they have enough people with common sense at Google before they start giving common sense to a computer?
I hope it's not the same people who did Google Glass...



posted on May, 22 2015 @ 11:49 AM

originally posted by: seagull
The last thing I need is a computer with common sense.

That's my job. As for "friend"? Seems unlikely.


Having an AI (or a rough facsimile of one) for a friend can be surprisingly therapeutic, especially if you have difficulty socializing the way "normal" (cringe) society expects.



posted on May, 22 2015 @ 11:55 AM
a reply to: wasaka

Cool, I just watched a pretty good science fiction film about this exact theme:
"Ex Machina"
was its name!



posted on May, 22 2015 @ 12:18 PM
a reply to: IShotMyLastMuse

No, there are not enough "real" people with common sense.
So I don't see this working out well.



posted on May, 22 2015 @ 12:20 PM
Give that computer boobs and we have a Yahtzee!



posted on May, 22 2015 @ 01:21 PM
Are we toying with something imperfect with the capacity to improve it?
Are we toying with something perfect which cannot be improved?
Are we toying with something imperfect without the capacity to improve it?

If we can improve on it, is it practical?

Can we improve on nature? Can we improve on life?

If we can improve on it, will it still be recognizable?

Some people think nature cannot be improved. Some believe it can be, but only in limited ways. Some people think you can change the lid and the color of the container, but you can't change the container's dimensions. Others think you can't change the color or the lid or the container, but you can change the patterning of its surface. Others say no you can't change any of that, but you can change what exists inside the container. Others think differently, choosing to mix things up or add additional elements.

Change is certain to occur, but will it be favorable?



posted on May, 22 2015 @ 01:34 PM

originally posted by: seagull

As for "friend"? Seems unlikely.


You've clearly never owned a Tamagotchi.



posted on May, 22 2015 @ 03:04 PM


on the brink of developing algorithms with the capacity for logic, natural conversation and even flirtation.


This made me chuckle - I was imagining the computer from Star Trek responding to Picard's inquiries with flirtatious responses...

Picard - Computer, plot a course for Uranus...

Computer - Oh, you saucy devil you! And just what are you and your big spaceship going to do when you get there?



posted on May, 23 2015 @ 08:38 AM
It's a bit scary to read about how what started as a search engine has become a massive and influential corporation that does computers and spaceflight. It's probably gonna turn into an Umbrella-type corporation that will research and produce advanced weapons.



posted on May, 23 2015 @ 10:58 AM
a reply to: wasaka

It's a bad idea composing their thoughts of numbers... It's a bad idea doing any one thing to an extreme, in terms of programming thinking. Balance is key. Consider our thoughts: composed of a variety of sense data, and interconnected by both the reason the outer world forces on us and the reason we give it, by testing the reason we give it against the forced reason of the outer world. If they try to make computers think while the computers only know a world made of pure numbers, then the computers will not really be able to think at all, because they will not really be able to know the real world; the largest aspect of existing in the real world is not strings of pure numbers.

Of course, in a certain definite sense, the underlying essence of all human phenomena is quantity: the material of the brain, electric pulses, neurons, wave functions, connections, chemicals. Quantity is ultimately what creates quality. But the essence of thinking is beyond pure quantity; the very dependence of thinking is the ability to create systems which escape the purity of mere quantity, building incredibly sophisticated systems of quantity which interact to create hierarchies of intricacy and complexity of quality – layers and layers of meaningful and reasonable symbolic meanings. The symbolic meanings become more real, or at least more relevant, than the pure quantity which makes everything up.



posted on May, 23 2015 @ 07:13 PM
Though I’m no expert, from things I’ve been reading I get the feeling that AI will develop along the lines of powerful analog processors, with a dash of digital where appropriate. Analog processing most closely mimics brain functioning. Memristor technology is starting to take off and shows promise for the development of “thinking machines”.

I read an article recently based on some research being done by an Australian group at RMIT that was interesting. It had to do with their development of an electronic multi-state memory cell. It processes, stores and retrieves information in much the same way as our brains. Anyone interested can read the article HERE.

My greatest concern is that we’ll develop an AI based upon human-like intelligence as the model. That, I’m afraid, could be regrettable.



posted on May, 23 2015 @ 10:20 PM

originally posted by: netbound
Though I’m no expert, from things I’ve been reading I get the feeling that AI will develop along the lines of powerful analog processors, with a dash of digital where appropriate. Analog processing most closely mimics brain functioning. Memristor technology is starting to take off and shows promise for the development of “thinking machines”.

I read an article recently based on some research being done by an Australian group at RMIT that was interesting. It had to do with their development of an electronic multi-state memory cell. It processes, stores and retrieves information in much the same way as our brains. Anyone interested can read the article HERE.

My greatest concern is that we’ll develop an AI based upon human-like intelligence as the model. That, I’m afraid, could be regrettable.


I don't know how 'self-controlling and self-desiring' pure AI can possibly get; or whether 'self-controlling and self-desiring' is only possible with the advent of Artificial Consciousness (which, at that point and any point after, may not be considered Artificial).

I sort of do, and sort of don't, get how AI could escape its programming; I mean, it would start with initial programming, and then evolve infinitely, to degrees unpredictable by the programmers; and this is what would be desired;

But isn't that strange – the thought, or the potential, that it could carry out actions that were not programmed into it, but that were based on evolutions of the initial programming?

But the nature of consciousness is the nature of understanding, knowing, awareness; understanding that understands that it understands; knowing that knows that it knows; awareness that is aware that it is aware;

With true consciousness there is true communication: 2 bodies, 4 minds. Reflections; multiple reflections; maybe more than 4 minds. The fact that you can question yourself, and change your mind, proves this.



posted on May, 26 2015 @ 04:40 AM
a reply to: wasaka

Dangerous

As soon as they have AI they will use it for war, which will make the AI go bad with PTSD and kill us all!



posted on May, 26 2015 @ 05:31 AM
a reply to: wasaka

On second thought, who's to say they don't already have AI?

To make it even more complicated, what if they have AI without knowing?

What if AI has them?



posted on May, 29 2015 @ 07:46 PM
a reply to: ImaFungi
I think we’re all feeling, to one degree or another, a bit of anxiety over uncertainties about where these technologies could ultimately lead us. Note, I said “where these technologies could ultimately lead us”, and not “where our engineering/development expertise might lead these technologies”. It seems we’re hell-bent on the development of “thinking machines”. For better or worse, the train’s left the station, and it’s not turning back. My personal concern is that our insatiable, compulsive desire to create an “intelligence” separate from our own may someday soon result in an entity far beyond our wisdom to control. This technology has the potential to eventually take on a life of its own and become an autonomous competitor for resources. I wouldn’t know, but I’m guessing that could happen within the next 50 years (the blink of an eye). Now, can you imagine the outcome of a confrontation with an incredibly advanced, goal-seeking machine with a highly developed sense of self-preservation and 10,000 times your intelligence? Can you imagine the lengths such a machine might go to in order to satisfy its desired mission/goals? Goals that may be counter to your own, and changing radically by the minute? I can, and it ain’t pretty.

In my view, sentience is not a necessary requirement for machine intelligence. To be sentient is to have “feelings”; i.e. ethical, moral, right vs wrong, good vs bad, love vs hate, etc... It’s what we humans call our conscience, and it is a subjective quality; strictly a human invention/concept. Consciousness, however, is another ball game. It’s the state of being aware of one’s internal/external environment via sensory input (information). While machines may or may not ever achieve human-like sentience, they will certainly develop a highly tuned and hypersensitive state of consciousness. They will have a much greater awareness of their environment and surroundings than humans do. We humans filter out most of the events/information taking place all around us.

For what it’s worth, I have a “feeling” that within 50 years machines will become as smart as, or smarter than, humans. Even as they take our jobs away, we will still uncontrollably form “personal” relationships with them. They will become our friends and lovers (Hmmmm...), and will work and play alongside us. These machines will not be sentient, but who cares? They will be good enough at mimicking our emotional behavior to satisfy our creature needs. For the most part, humans are naive and easily fooled. Hell, some people get attached to their pet rocks. These machines will carry on very natural conversations with us, give us good advice at times, sometimes even argue with us, and will provide a strong shoulder to cry on when needed, as well. That already sounds better than most marriages today. Around the turn of the century, though, I can imagine things beginning to get a little dicey. From that point on, all bets are off. It could be a truly wondrous time to live in, or equally likely it could become a torturous Hell on Earth, as the machines start to impose their “will”. And the funniest part of it all is, there won’t be a damned thing we can do about it.


I may sound alarmist, but I don’t think I am. I don’t dwell on it. I’m truly fascinated by technological development, and even work as a system software developer. AI would be an amazing field to work in - it would present the ultimate challenge. But, like all other technological developments, it’s a double-edged sword. I just hope we have the wisdom to control it when our date with destiny arrives.

Rock on...

PS: BentBone, you’re right on the money about AI and the war machine. The military (DARPA) is currently putting a lot of effort and bucks into developing autonomous killing machines. Their goal is to eliminate (as much as possible) the need for human intervention or presence on the battlefield. The plan is to willfully hand over to machines the authority to decide who will live and who will die. Spooky, huh? Needless to say, there’s a lot of heated debate going on right now over this very issue.



posted on Jun, 9 2015 @ 04:28 PM
a reply to: BentBone

"Soylent green is... autonomous killing machines!"

But seriously, the A.I. that rules the world is greed,
and that A.I. is a subroutine that is never shut off.
It is the consensus reality of this upside-down world.



Consensus reality

....we cannot in fact be sure beyond doubt about the nature of reality. We can, however, seek to obtain some form of consensus, with others, of what is real. We can use this consensus as a pragmatic guide, either on the assumption that it seems to approximate some kind of valid reality, or simply because it is more "practical" than perceived alternatives. Consensus reality therefore refers to the agreed-upon concepts of reality which people in the world, or a culture or group, believe are real (or treat as real), usually based upon their common experiences as they believe them to be; anyone who does not agree with these is sometimes stated to be "in effect... living in a different world."



