Here's 2 news items that bring us closer to Artificial Intelligence


posted on Oct, 31 2014 @ 05:44 PM
a reply to: neoholographic


First, it's not exactly the same. It's just basic common sense. Have you read some of the latest research in this area? How do you think humans shared information before the internet? It's simply giving machines a cloud mind so they can learn from each other and it will also add a level of uncertainty.

Again, humans have been doing this with collective consciousness through things like the printing press or just verbally passing down information through tradition. Again, this is something simple and basic, and I suspect your problem is with the term collective consciousness.

You're actually agreeing with what I'm saying; you just want to say it's robots sharing the internet, but what do you think the internet is? It's collective consciousness sharing information. Why do you think so many researchers in A.I. love the internet? It's because there's a wealth of information being shared that A.I. can learn from. Like I said, humans have been doing this long before the internet came along.


I’m sorry but the internet is the internet, and collective consciousness is a different concept entirely. Basic common sense. Simply giving machines a “cloud mind” isn’t going to accomplish anything if machines aren’t already intelligent enough to use it. So no I’m not agreeing with what you’re saying.



This makes no sense. If someone has created an intelligent algorithm that simulates intelligence, why isn't it the same as having intelligence? If it's simulating intelligence, what is it doing?


Perhaps you missed it, but I said the machine will only know the algorithm and nothing else. How is a machine intelligent if it is only doing what it is programmed to do? Every machine does what it is programmed to do already.


It's learning and creating new information from the information it processes. This is machine intelligence. Why doesn't a machine learn when it's programmed to? Have you ever seen or heard of a neural network?
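For readers who haven't seen one, the kind of learning being claimed here can be sketched with the smallest possible neural network: a single perceptron that adjusts its weights from the examples it processes. This is a minimal illustration of the weight-update idea, not any particular research system.

```python
import random

# A single perceptron learning the AND function from labeled examples --
# the minimal case of a network adjusting weights from the information
# it processes. Seeded for a reproducible run.
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                 # perceptron learning rule
    for x, target in samples:
        err = target - predict(x)   # 0 when correct; +/-1 when wrong
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        b += 0.1 * err

print(all(predict(x) == t for x, t in samples))  # True: it converged
```

Whether weight adjustment counts as "learning" in the strong sense is exactly what the thread is arguing about; the mechanics themselves are this simple.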


But it’s not learning and it’s not creating new information. Simply saying it is does not make it so. We already have machines hooked up to a “cloud mind” processing and sharing information.


Like I said, I call this the Haley Joel Osment misconception. You think computers need to evolve intelligence in the same way that humans have evolved intelligence. Newsflash, we will be the programmers of Artificial Intelligence, just like we're the programmers of Artificial Flavors.


You think machines do not need to evolve intelligence the same way that humans do, yet you say they need to create and share ideas and share it on a “cloud mind”…just like humans do. Brilliant.

Reasoning, planning, learning, language processing, perception and object manipulation are examples of what kind of intelligence?

If you’re saying machines need machine intelligence, we have that already—they’re called machines.


All I can say is, you have to research these things instead of looking at them from a Haley Joel Osment perspective.



artificial intelligence |ˌɑrdəˈfɪʃəl ɪnˈtɛləʤəns| (abbr.: AI)
noun
the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.


General intelligence is the primary goal of AI research. I’m not sure how following an algorithm equates to general intelligence. Machines already do that. Care to explain with your vast wealth of research how following an algorithm equates to general intelligence?


I really don't understand your point. Why do people think A.I. will be less intelligent because someone created an intelligent algorithm? Where do you think A.I. will come from if it's not initially PROGRAMMED by PROGRAMMERS?????


Algorithms are sets of rules. No one can create an “intelligent algorithm”. Intelligence is reserved for those who can create algorithms, not machines that follow them.

The Chinese room argument is a legitimate concern for some AI researchers. Perhaps you can explain how AI will be able to learn anything other than what they are programmed to learn?
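The Chinese room mentioned here is easy to caricature in code: a program that produces fluent-looking replies purely by symbol lookup, with no model of what any symbol means. The rulebook entries below are invented for illustration.

```python
# A toy "Chinese room": replies are produced by rule alone. The program
# contains no representation of meaning -- only symbol-to-symbol mappings.
RULEBOOK = {
    "你好": "你好！",              # greeting -> greeting
    "今天天气如何？": "天气很好。",  # weather question -> stock answer
}

def room(symbol_string: str) -> str:
    """Follow the rulebook; fall back to a canned reply ("please repeat")."""
    return RULEBOOK.get(symbol_string, "请再说一遍。")

print(room("你好"))  # a fluent-looking reply, produced with zero understanding
```

Searle's argument is that scaling this lookup up, however far, never adds understanding; the counter-arguments in the literature dispute that intuition.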



posted on Oct, 31 2014 @ 05:50 PM
a reply to: ZetaRediculian


No. In this model, they are part of the internet. Why would each and every robot need to learn everything from the ground up?


We already have machines that are a part of the internet. These machines interact with each other, share information, scan data etc. So far, no AI has come of this "model".


edit on 31-10-2014 by LesMisanthrope because: (no reason given)



posted on Oct, 31 2014 @ 06:21 PM
a reply to: Blue Shift




The point at which they're told to memorize a list of 5,000 words, and they ask, "What the hell am I doing this for, again?"


Exactly. Compiling symbols at a faster rate is not any sign of intelligence. Until it can take those symbols, understand them, and create something new out of them, there is no intelligence apparent.



posted on Oct, 31 2014 @ 06:34 PM
a reply to: LesMisanthrope


We already have machines that are a part of the internet. These machines interact with each other, share information, scan data etc. So far, no AI has come of this "model".

robohub.org...

As you’d expect, not all knowledge robots can learn is easily exchangeable in a joint knowledge repository. Raw trajectory data or sensor and actuator parameters are often too hardware-specific to be exchanged successfully. However, a fair amount of knowledge robots learn can be exchanged: For example, maps, CAD models of objects, and articulation models of doors and drawers have been successfully learnt and shared between different robots. A particularly interesting area for learning is links between shared information, such as where is the fridge (map coordinates), what does it look like (object recognition model) and how do I open it (object articulation model). Another is probabilities, such as given that I see a table, bed, and chair, where would I most likely find a pillow? Robots are well-suited for this type of learning not least because, unlike humans, they are capable of rapid, systematic, and accurate data collection. This capability provides unprecedented opportunities for obtaining consistent and comparable data sets as well as for performing large-scale systematic analysis and data mining.

The idea is that robot learning can be shared and distributed, which will make the "learning" much more efficient. The idea of a stand-alone robot is fading. The whole idea of what makes "robotics" and AI is changing. Automated systems would be connected to a repository of information. So why would my system need to "learn" something that's already been learned somewhere else?
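A minimal sketch of the shared-repository idea the quote describes, with made-up object names and coordinates: one robot publishes what it learned, another reuses it instead of relearning, and pooled observations answer the quote's "given a bed, where would I find a pillow?" style of probabilistic query.

```python
from collections import defaultdict

class CloudRepository:
    """Hypothetical shared store: anything one robot learns, all can query."""
    def __init__(self):
        self.facts = {}                  # e.g. object name -> map coordinates
        self.cooccur = defaultdict(int)  # (scene_object, found_object) counts

    def publish(self, key, value):
        self.facts[key] = value

    def lookup(self, key):
        return self.facts.get(key)

    def observe(self, scene_object, found_object):
        self.cooccur[(scene_object, found_object)] += 1

    def most_likely_near(self, scene_object):
        pairs = [(k[1], n) for k, n in self.cooccur.items() if k[0] == scene_object]
        return max(pairs, key=lambda p: p[1])[0] if pairs else None

cloud = CloudRepository()
cloud.publish("fridge", (4.2, 1.7))       # robot A maps the kitchen
print(cloud.lookup("fridge"))             # robot B reuses it: (4.2, 1.7)
cloud.observe("bed", "pillow")
cloud.observe("bed", "pillow")
cloud.observe("bed", "book")
print(cloud.most_likely_near("bed"))      # 'pillow', from pooled counts
```

Real systems like the ones robohub describes deal with hardware-specific data, trust, and consistency; this only shows the publish/lookup shape of the idea.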



posted on Oct, 31 2014 @ 06:41 PM
a reply to: LesMisanthrope

Exactly. Compiling symbols at a faster rate is not any sign of intelligence. Until it can take those symbols, understand them, and create something new out of them, there is no intelligence apparent.


Describe what it means to "understand" something.

Does this meet your criteria?

By applying a learning algorithm to parsed text, we have developed methods that can automatically identify the concepts in the text and the relations between them. For example, reading the phrase "heavy water rich in the doubly heavy hydrogen atom called deuterium", our algorithm learns (and adds to its semantic network) the fact that deuterium is a type of atom (Snow et al., 2005).

ai.stanford.edu...
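A crude stand-in for this kind of extraction, using one hand-written lexical pattern instead of a learned classifier over parsed text (the actual Snow et al. system is far more sophisticated):

```python
import re

# Toy is-a extraction cued by the surface pattern "<class> called <instance>",
# mimicking what the quoted algorithm learns from the deuterium sentence.
ISA = re.compile(r"(\w+) called (\w+)")

def extract_isa(sentence):
    """Return (instance, class) pairs found by the pattern."""
    return [(inst, cls) for cls, inst in ISA.findall(sentence)]

s = "heavy water rich in the doubly heavy hydrogen atom called deuterium"
print(extract_isa(s))   # [('deuterium', 'atom')]
```

The learned version generalizes over many such patterns discovered from data rather than one written by hand, which is the difference the paper is about.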



posted on Oct, 31 2014 @ 06:46 PM

originally posted by: LesMisanthrope
No one can create an “intelligent algorithm”.

Quite so. I also believe that in order for there to be real intelligence -- or at least intelligence that is a rough equivalent of animal or human intelligence -- a machine needs a body with which to interact with reality. It needs to be able to feel and see and hear and whatever else it wants to do, and gain what would be an analog to pleasure or pain from those senses. It needs to go from a purely objective mode to a subjective mode where it makes decisions not just on crunching numbers to calculate an outcome, but where it makes decisions that could directly affect itself.

Of course, you wouldn't want every washing machine and toaster to have this ability. That would cause a lot of problems. Once a machine (or network of them) becomes self-aware, it's hard to predict exactly how that is going to shake down. However, even if they turn out to be benign, they're still going to be competing with us for resources and energy. Unless they like us a lot, and maybe want to keep us around as pets, or unless they just decide to leave Earth and spread through the universe, we could be in for a rough time.



posted on Oct, 31 2014 @ 07:10 PM
a reply to: Blue Shift


I also believe that in order for there to be real intelligence -- or at least intelligence that is a rough equivalent of animal or human intelligence -- a machine needs a body with which to interact with reality

That's interesting. But why would it need a body? I think the concept of an intelligent machine corresponding to an animal is fading. Connected dumb machines are where it's heading.



posted on Oct, 31 2014 @ 08:20 PM
a reply to: LesMisanthrope

This post shows a lack of understanding that's truly troubling. You said:


I’m sorry but the internet is the internet, and collective consciousness is a different concept entirely. Basic common sense. Simply giving machines a “cloud mind” isn’t going to accomplish anything if machines aren’t already intelligent enough to use it. So no I’m not agreeing with what you’re saying.


The trouble here is, you're having a pseudoskeptic's knee-jerk reaction to the term collective consciousness. It's just basic common sense. Humans share information and learn from that information. That's not just the internet. You might as well throw out the whole field of cloud robotics if this means nothing.

The Robot in the Cloud: A Conversation With Ken Goldberg


Ken Goldberg has been thinking hard about robots for almost three decades.

His work ranges from over 170 peer-reviewed papers on things like robot algorithms and social information filtering to art projects about the interaction of people and machines. A professor at the University of California, Berkeley, he is establishing a research center to develop medical robots to assist in surgery. That is just the latest development in what he thinks will be one of the great technology breakthroughs of our age: the fusing of robotics and cloud computing. He talks about it in this edited and condensed conversation.

Q.

What is cloud robotics?
A.

Cloud robotics is a new way of thinking about robots. For a long time, we thought that robots were off by themselves, with their own processing power. When we connect them to the cloud, the learning from one robot can be processed remotely and mixed with information from other robots.
Q.

Why is that a big deal?
A.

Robot learning is going to be greatly accelerated. Putting it a little simply, one robot can spend 10,000 hours learning something, or 10,000 robots can spend one hour learning the same thing.


Link
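Goldberg's 10,000-hours point is just arithmetic, under the (big) assumption that the task parallelizes cleanly and experience pools across the fleet with no overhead:

```python
# Fleet learning as arithmetic: the total hours of experience needed stay
# the same, but wall-clock time divides by fleet size -- assuming the task
# splits cleanly and pooled data is as good as one robot's own experience.
hours_of_experience_needed = 10_000

def wall_clock_hours(fleet_size):
    return hours_of_experience_needed / fleet_size

print(wall_clock_hours(1))       # 10000.0 hours for a lone robot
print(wall_clock_hours(10_000))  # 1.0 hour for a connected fleet
```

The assumption is doing all the work: experience that is hardware-specific or order-dependent does not pool this neatly, which is the catch the robohub quote earlier in the thread acknowledges.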

So, it means nothing to you, but that's meaningless. It means something to a researcher in this area with over 170 peer-reviewed papers.

Simply put, a collective consciousness is just humans sharing information and then creating new things with the information being shared. This will also give robots a level of uncertainty. It will allow them to learn and share information as they interact with the environment.

Like I said, this is basic common sense, but I suspect you're just having a natural pseudoskeptic knee-jerk reaction when you saw collective consciousness. All I can say is, you need to take a deep breath and actually read the research in areas like cloud robotics.

You said:

But it’s not learning and it’s not creating new information.

Again, you have to actually start to read research. For some reason you keep saying if A.I. is programmed then it's not intelligent. That makes zero sense. Of course the initial programs will come from PROGRAMMERS. Here's an example from a Wired article titled:

The Three Breakthroughs That Have Finally Unleashed AI on the World

It also can think in ways completely different from human cognition. A cute example of this nonhuman thinking is a cool stunt that was performed at the South by Southwest festival in Austin, Texas, in March of this year. IBM researchers overlaid Watson with a culinary database comprising online recipes, USDA nutritional facts, and flavor research on what makes compounds taste pleasant. From this pile of data, Watson dreamed up novel dishes based on flavor profiles and patterns from existing dishes, and willing human chefs cooked them. One crowd favorite generated from Watson's mind was a tasty version of fish and chips using ceviche and fried plantains. For lunch at the IBM labs in Yorktown Heights I slurped down that one and another tasty Watson invention: Swiss/Thai asparagus quiche. Not bad! It's unlikely that either one would ever have occurred to humans.

www.wired.com...
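The flavor-pairing idea behind those Watson dishes can be sketched as scoring ingredient pairs by shared aroma compounds and surfacing well-matched but unusual combinations. The compound sets below are invented examples, not Watson's actual data.

```python
from itertools import combinations

# Toy flavor pairing: ingredients that share more aroma compounds are
# assumed to combine well. All compound lists are illustrative only.
compounds = {
    "strawberry": {"furaneol", "linalool", "esters"},
    "tomato":     {"furaneol", "hexenal"},
    "basil":      {"linalool", "eugenol"},
    "chocolate":  {"pyrazines", "esters"},
}

def pairing_score(a, b):
    """Number of aroma compounds two ingredients share."""
    return len(compounds[a] & compounds[b])

ranked = sorted(combinations(compounds, 2),
                key=lambda p: pairing_score(*p), reverse=True)
print(ranked[0])   # the best-matched pair by shared compounds
```

Whether generating combinations from a scoring function counts as "dreaming up" dishes is, of course, the very point the two posters dispute.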

Here's more from an article titled:

Google’s New Computer With Human-Like Learning Abilities Will Program Itself


The computer is currently being developed by the London-based DeepMind Technologies, an artificial intelligence firm that was acquired by Google earlier this year. Neural networks — which will enable the computer to invent programs for situations it has not seen before — will make up half of the computer’s architecture. Experts at the firm hope this will equip the machine with the means to create like a human, but still with the number-crunching power of a computer, New Scientist reports.

In two different tests, the NTM was asked to 1) learn to copy blocks of binary data and 2) learn to remember and sort lists of data. The results were compared with a more basic neural network, and it was found that the computer learned faster and produced longer blocks of data with fewer errors. Additionally, the computer’s methods were found to be very similar to the code a human programmer would’ve written to make the computer complete such a task.


betabeat.com...
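The copy task from those NTM tests is simple to state in code; the Neural Turing Machine itself is omitted here, and the "perfect" output stands in for what a trained model approaches.

```python
import random

def copy_task(seq_len, bits=8):
    """One instance of the copy task: the model sees a random binary
    sequence and must reproduce it from memory after a delimiter."""
    return [[random.randint(0, 1) for _ in range(bits)] for _ in range(seq_len)]

def bit_error_rate(target, prediction):
    """Fraction of output bits that differ from the target sequence."""
    errors = sum(t != p for row_t, row_p in zip(target, prediction)
                 for t, p in zip(row_t, row_p))
    return errors / (len(target) * len(target[0]))

random.seed(1)
target = copy_task(seq_len=20)
perfect = [row[:] for row in target]      # what a trained NTM approaches
print(bit_error_rate(target, perfect))    # 0.0
```

The paper's comparison is between an NTM and a plain LSTM on exactly this kind of metric: error rate as sequence length grows.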

You said:

But it’s not learning and it’s not creating new information.

This one line shows you haven't studied these things and it's just hyperbole. Anyone who has even lightly followed neural networks would know this statement is vacuous.

Again:

The results were compared with a more basic neural network, and it was found that the computer learned faster and produced longer blocks of data with fewer errors.

How do you think the computer learned faster???

You said:


You think machines do not need to evolve intelligence the same way that humans do, yet you say they need to create and share ideas and share it on a “cloud mind”…just like humans do.


Yep, NEWSFLASH, ARTIFICIAL INTELLIGENCE NEEDS PROGRAMMERS.

This is just an asinine line of reasoning. It makes zero sense. Who do you think is working on cloud robotics??

Who do you think is working at places like Deep Mind???

Who do you think is working at the 14 A.I. and robotics companies that Google bought?

PROGRAMMERS!!

It's just crazy to think that because artificial intelligence is programmed then it's not intelligent. Who do you think has been doing research and writing code in the areas of robotics and Artificial Intelligence???

PROGRAMMERS!!

Do you think it's just going to assemble out of nowhere for no reason???


edit on 31-10-2014 by neoholographic because: (no reason given)



posted on Oct, 31 2014 @ 09:09 PM
a reply to: neoholographic


The Three Breakthroughs That Have Finally Unleashed AI on the World

That was a great article. This stuff is moving so fast, it's hard to keep up with. Watson was awesome a few years ago, but now? Holy cow!


A few months ago I made the trek to the sylvan campus of the IBM research labs in Yorktown Heights, New York, to catch an early glimpse of the fast-arriving, long-overdue future of artificial intelligence. This was the home of Watson, the electronic genius that conquered Jeopardy! in 2011. The original Watson is still here—it's about the size of a bedroom, with 10 upright, refrigerator-shaped machines forming the four walls. The tiny interior cavity gives technicians access to the jumble of wires and cables on the machines' backs. It is surprisingly warm inside, as if the cluster were alive.

Today's Watson is very different. It no longer exists solely within a wall of cabinets but is spread across a cloud of open-standard servers that run several hundred “instances” of the AI at once. Like all things cloudy, Watson is served to simultaneous customers anywhere in the world, who can access it using their phones, their desktops, or their own data servers. This kind of AI can be scaled up or down on demand. Because AI improves as people use it, Watson is always getting smarter; anything it learns in one instance can be immediately transferred to the others. And instead of one single program, it's an aggregation of diverse software engines—its logic-deduction engine and its language-parsing engine might operate on different code, on different chips, in different locations—all cleverly integrated into a unified stream of intelligence.
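The article's claim that "anything it learns in one instance can be immediately transferred to the others" reduces to instances reading and writing one shared model rather than each keeping private state. A toy version, with invented class and question names:

```python
# Toy shared-model AI service: every instance consults the same
# cloud-hosted store, so what one learns, all can answer.
shared_model = {}      # stands in for the cloud-hosted knowledge base

class Instance:
    def learn(self, question, answer):
        shared_model[question] = answer

    def answer(self, question):
        return shared_model.get(question, "unknown")

tokyo_instance, berlin_instance = Instance(), Instance()
tokyo_instance.learn("capital of Japan?", "Tokyo")
print(berlin_instance.answer("capital of Japan?"))  # 'Tokyo' -- learned elsewhere
```

The real engineering problem is keeping hundreds of instances consistent at scale; the sharing itself is this conceptually simple.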



PROGRAMMERS!!

It's just crazy to think that because artificial intelligence is programmed then it's not intelligent. Who do you think has been doing research and writing code in the areas of robotics and Artificial Intelligence???

PROGRAMMERS!!

I am more into the DIY stuff...been following this guy for a while. WTF?

edit on 31-10-2014 by ZetaRediculian because: (no reason given)



posted on Nov, 1 2014 @ 05:56 PM

originally posted by: ZetaRediculian
Does this meet your criteria?

By applying a learning algorithm to parsed text, we have developed methods that can automatically identify the concepts in the text and the relations between them. For example, reading the phrase "heavy water rich in the doubly heavy hydrogen atom called deuterium", our algorithm learns (and adds to its semantic network) the fact that deuterium is a type of atom (Snow et al., 2005).

ai.stanford.edu...



And what about the phrase "heavy water rich in salt is important in governing ocean circulation"? Will it learn that deuterium is a type of salt?
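That worry can be made concrete with two toy surface patterns: the explicit "called" cue that handled the deuterium sentence stays silent on the salt sentence, while a looser co-occurrence cue happily extracts garbage from both. Both patterns are illustrative, not from the Stanford system.

```python
import re

STRICT = re.compile(r"(\w+) called (\w+)")     # explicit "X called Y" cue
LOOSE = re.compile(r"rich in (?:the )?(\w+)")  # mere co-occurrence cue

s1 = "heavy water rich in the doubly heavy hydrogen atom called deuterium"
s2 = "heavy water rich in salt is important in governing ocean circulation"

print(STRICT.findall(s2))  # [] -- no explicit cue, nothing (mis)learned
print(LOOSE.findall(s1))   # ['doubly'] -- surface matching yields nonsense
print(LOOSE.findall(s2))   # ['salt'] -- plausible-looking, wrong relation
```

This is the gap mbkennel is pointing at: pattern reliability is the whole problem, and choosing which cues to trust is what the learned classifiers earn their keep doing.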

I work in machine learning and statistical modeling. There have been some very nice breakthroughs in machine learning, and approaching AI, in the last few years, mostly a result of being able to collect an enormous amount of data. Google's job is collecting data and training AI on it.

There's still a very large gap between this and a general AI, like the difference between a powered nail driver and an architect.
edit on 1-11-2014 by mbkennel because: (no reason given)



posted on Nov, 1 2014 @ 10:39 PM
a reply to: mbkennel



There's still a very large gap between this and a general AI, like the difference between a powered nail driver and an architect.


It's a fascinating topic. I don't think we will ever get to general AI, mostly because I don't think it's clearly defined and I don't think we have a clear understanding of human intelligence. Computer programming logic is completely different from the way humans think, so it will never be the same thing. Plus, I think that goalpost will always be moving. AI is "intelligent", but its intelligence is different from human intelligence.

en.wikipedia.org...

Operational definitions of AGI[edit]

Scientists have varying ideas of what kinds of tests a superintelligent machine needs to pass in order to satisfy an operational definition of artificial general intelligence. A few of these scientists include the late Alan Turing, Ben Goertzel, and Nils Nilsson. A few of the tests they have proposed are:

1. The Turing Test (Turing)
See Turing Test.
2. The Coffee Test (Goertzel)
A machine is given the task of going into an average American home and figuring out how to make coffee. It has to find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.
3. The Robot College Student Test (Goertzel)
A machine is given the task of enrolling in a university, taking and passing the same classes that humans would, and obtaining a degree.
4. The Employment Test (Nilsson)
A machine is given the task of working an economically important job, and must perform as well or better than the level that humans perform at in the same job.
These are a few of the tests that cover a variety of qualities a machine needs to have to be considered AGI, including the ability to reason and learn, as well as being conscious and self-aware.[12]


Say we are able to build a machine that could pass "The Coffee Test", then what? All we can say is that we built a machine that can perform a specific task. It would use a lot of pattern matching and be able to coordinate movements to perform the tasks. Each task could be broken down. For instance, the "finding the coffee machine" task could use a massive image database of all coffee machines to find a best fit, then move on to the next task in sequential order. All its physical tasks would be predetermined routines like "adding water". Each task is completed by drawing from a mass amount of data.
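That decomposition can be spelled out as a fixed pipeline; every name below is illustrative, not a real robotics API.

```python
# The Coffee Test collapsed into narrow sub-tasks, each solvable by lookup
# and routine -- the poster's point about it still being a "nail driver".

def find(object_name, image_database):
    """Stand-in for best-fit matching against a mass image database."""
    return image_database.get(object_name)   # e.g. map coordinates, or None

def run_coffee_test(image_database):
    plan = ["coffee machine", "coffee", "mug"]
    locations = [find(obj, image_database) for obj in plan]
    if None in locations:
        return "failed"
    # each physical step is a predetermined routine, not open-ended reasoning
    for routine in ("add water", "load coffee", "press brew button"):
        pass  # execute_routine(routine) would run on a real platform
    return "coffee brewed"

db = {"coffee machine": (1, 2), "coffee": (1, 3), "mug": (2, 0)}
print(run_coffee_test(db))   # 'coffee brewed'
```

The open question the test is meant to probe is exactly what this sketch dodges: handling an *average* home, where the plan itself has to be invented rather than hard-coded.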

In the end, all we have is still a "powered nail driver", even though it passed the test. No different from the task of learning 5,000 words.

All we really need is the coffee maker plugged into an AI. No need for some clunky thing messing up my kitchen.



posted on Nov, 2 2014 @ 12:53 AM
a reply to: neoholographic


The trouble here is, you're having a pseudoskeptic's knee-jerk reaction to the term collective consciousness.

So, it means nothing to you, but that's meaningless. It means something to a researcher in this area with over 170 peer-reviewed papers.

Simply put, a collective consciousness is just humans sharing information and then creating new things with the information being shared. This will also give robots a level of uncertainty. It will allow them to learn and share information as they interact with the environment.

Like I said, this is basic common sense, but I suspect you're just having a natural pseudoskeptic knee-jerk reaction when you saw collective consciousness. All I can say is, you need to take a deep breath and actually read the research in areas like cloud robotics.


I have no problem with the term collective consciousness when it is used in the right circumstances. I agree that the internet is basically collective consciousness. However, I’m well aware of the pseudo-believer’s instinctual reaction to jump to conclusions.

In the context of artificial intelligence, a “collective consciousness" is absolutely meaningless. I have said a few times now, we already have machines using the internet, sharing and processing information. Vast databases and programs built by PROGRAMMERS are working around the clock as we speak. So the question to you yet again is, how is a “cloud mind” a prerequisite to artificial intelligence? If the “cloud mind“ is already present with machines hooked up to it, programmed by PROGRAMMERS, sharing information, maximizing processing power, what is missing from the equation?

I’ll answer that for you. Artificial intelligence is missing.

Your assertion that AI needs a hive mind that mimics the collective consciousness of humans is redundant. First, saying robots need to mimic the collective consciousness of humans is basically the same as saying robots need to mimic humans. Nobel prize to you sir. Second, a collective consciousness without individual consciousnesses defeats the purpose of a collective.

As for a quantum mind, you are unable to even tell me what that is.


Again, you have to actually start to read research. For some reason you keep saying if A.I. is programmed then it's not intelligent. That makes zero sense. Of course the initial programs will come from PROGRAMMERS. Here's an example from a Wired article titled:


Do you think Watson knows what a banana tastes like? Does he have any knowledge or experience of what those dishes he “creates” taste like? How they smell? Their texture? Watson knows absolutely nothing about cooking. All he understands is how to follow the algorithm and manipulate symbols accordingly, with no clue what any of it means or what he's doing. That is not intelligent.


This one line shows you haven't studied these things and it's just hyperbole. Anyone who lightly followed neural networks would know this statement is vacuous.

Again:

The results were compared with a more basic neural network, and it was found that the computer learned faster and produced longer blocks of data with fewer errors.

How do you think the computer learned faster???


Of course, as someone so well-researched and well-studied in these areas, you understand machine learning and what that entails. It is essentially mathematical optimization making predictions based on a model, producing a learning curve. It’s not actually learning. It doesn’t know what the data means nor does it care.
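What that claim looks like concretely: gradient descent on a squared-error loss produces a falling "learning curve", and at no point does the procedure know what x or y mean. A minimal sketch with made-up data:

```python
# The skeleton of machine "learning": iterate gradient descent and watch
# the loss fall. The procedure optimizes; it attaches no meaning to x or y.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]   # roughly y = 2x
w = 0.0                                        # model: y_hat = w * x
curve = []
for _ in range(50):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad                           # learning rate 0.05
    curve.append(sum((w * x - y) ** 2 for x, y in data) / len(data))

print(round(w, 2))           # close to 2.0: the fitted slope
print(curve[0] > curve[-1])  # True: the "learning curve" decreased
```

Whether a decreasing loss deserves the word "learning" is the philosophical dispute; the mathematics of it is just this.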


For some reason you keep saying if A.I. is programmed then it's not intelligent.

Yep, NEWSFLASH, ARTIFICIAL INTELLIGENCE NEEDS PROGRAMMERS.

This is just an asinine line of reasoning. It makes zero sense. Who do you think is working on cloud robotics??


Except I never said artificial intelligence doesn’t need programmers.



posted on Nov, 2 2014 @ 01:03 AM
a reply to: Blue Shift

I completely agree. Such a robot is already in the works.

eccerobot

The embodied hypothesis of cognition is already shaking up research in almost every field of the human sciences, including AI.



posted on Nov, 2 2014 @ 01:31 AM
managed to chew my way this far..
i'm used to being ignored


Exactly. Compiling symbols at a faster rate is not any sign of intelligence. Until it can take those symbols, understand them, and create something new out of them, there is no intelligence apparent.

wisdom / intelligence

..now where?

 



Do you think it's just going to assemble out of nowhere for no reason???

we all know that isn't going to happen & didn't happen
(that was just too good to pass up)
couldn't help but think of all those evolutionary minds that bring us to where we are in the first place

..the g/f & i have been discussing this stuff lately, after watching various supposed AI bots talk to each other
(don't mind us, we're just idiots)
we reckon we'd like to have like, some kinda AI-interactive device that we could talk to & teach stuff
we want to see how messed up we can get it
and we want to visit our friends AI-things to see how messed up theirs are



posted on Nov, 2 2014 @ 09:47 AM
a reply to: LesMisanthrope

Do you think Watson knows what a banana tastes like? Does he have any knowledge or experience of what those dishes he “creates” taste like? How they smell? Their texture? Watson knows absolutely nothing about cooking. All he understands is how to follow the algorithm and manipulate symbols accordingly, with no clue what any of it means or what he's doing. That is not intelligent.


What does a banana taste like?



posted on Nov, 2 2014 @ 10:13 AM
a reply to: UNIT76

wisdom / intelligence

..now where?


We move the bar again and make it more obscure so that machines will never be as "intelligent" as people.


we reckon we'd like to have like, some kinda AI-interactive device that we could talk to & teach stuff
we want to see how messed up we can get it
and we want to visit our friends AI-things to see how messed up theirs are

what kind of device?



posted on Nov, 2 2014 @ 11:33 AM

originally posted by: LesMisanthrope
a reply to: Blue Shift

I completely agree. Such a robot is already in the works.

eccerobot

The embodied hypothesis of cognition is already shaking up research in almost every field of the human sciences, including AI.


The only way to settle this: Robot battle to the death.



posted on Nov, 3 2014 @ 08:29 AM

originally posted by: neoholographic
That just seems like a blind knee jerk reaction especially when this space is advancing so rapidly.

So the programmer who programs an AI to be without any morality is blameless?

Does not the creator have responsibility over his creation?

As I previously said, the AI has to be programmed to become something. If a computer does not know about something, it cannot be that something. A programmer has to teach his programs if he wants them to do or be something. Inferno did not write itself - Dante had to write it. Programs are just the same - they have to be written by a programmer. Is not the possible nature of the AI a responsibility the programmer has to address, instead of blaming it on the AI after the damage is done?

Where have the four Laws of Asimov gone?


edit on 3-11-2014 by swanne because: (no reason given)



posted on Nov, 3 2014 @ 08:44 AM
I'm just going to pop in here and say should the machines rise up and try to destroy us, I'll be first in line at the human to android/cyborg conversion building.

Might as well play for the winning team.



posted on Nov, 3 2014 @ 08:48 AM
a reply to: swanne



So the programmer who programs an AI to be without any morality is blameless?

is there a Morality API from Microsoft yet?


