
All robots on a special search engine to learn from each other


posted on Jan, 12 2010 @ 08:11 AM
The Technical University of Eindhoven and five other European universities are working on a search engine for robots. With this search engine, robots can learn certain actions, functions, and protocols if they have forgotten them, or if they "like" to learn something new. The robots can upload actions such as mopping the floor, opening a can of soda, etc. The search engine is by robots and for robots, according to René van de Molengraft. If more and more robots hook up to the search engine, the things to learn are endless.

www.nu.nl...

www.bnr.nl...

Sorry, they're in Dutch.

Do you see where this is going? And what about autonomous robots? I'd like to hear your thoughts on this.
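To make the idea concrete, here's a minimal sketch of what a robot-side client for such a skill-sharing search engine might look like. Neither article describes an actual API, so the endpoint, data layout, and function names below are all made up for illustration:

# A minimal sketch of a robot-side client for a shared skill repository.
# Everything here is hypothetical: the articles do not describe any API,
# so the endpoint, payload layout, and function names are illustrative only.
import json
import urllib.parse
import urllib.request

REPO_URL = "http://example.org/robot-skills"  # placeholder, not a real service

def upload_skill(name, steps):
    """Publish an action recipe (e.g. 'mop the floor') as a list of steps."""
    payload = json.dumps({"name": name, "steps": steps}).encode("utf-8")
    request = urllib.request.Request(
        REPO_URL, data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(request) as response:
        return response.status == 201  # 201 Created means the upload was accepted

def search_skill(name):
    """Ask the hub for a recipe another robot has shared; None if unknown."""
    query = urllib.parse.quote(name)
    with urllib.request.urlopen(f"{REPO_URL}?q={query}") as response:
        results = json.load(response)
    return results[0]["steps"] if results else None

# A robot that has "forgotten" how to open a soda can just asks the hub,
# and teaches the hub once it has worked the task out for itself:
steps = search_skill("open soda can")
if steps is None:
    upload_skill("open soda can", ["locate can", "grip tab", "pull tab"])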



posted on Jan, 12 2010 @ 03:30 PM
Wow! I can't read Dutch, unfortunately, but what an idea! It's so simple, yet so awesome. It might just be my imagination running wild, but I can't help wondering if this will lead to the technological singularity some writers have discussed.



posted on Jan, 12 2010 @ 11:05 PM

Originally posted by DragonsDemesne
Wow! I can't read Dutch, unfortunately, but what an idea! It's so simple, yet so awesome. It might just be my imagination running wild, but I can't help wondering if this will lead to the technological singularity some writers have discussed.


And yet this is exactly what scares me about the technological singularity:






Potential dangers
Superhuman intelligences may have goals inconsistent with human survival and prosperity. AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race, and humans would be powerless to stop them.

Berglas (2008) argues that, unlike human intelligence, computer-based intelligence is not tied to any particular body, which would give it a radically different world view. In particular, a software intelligence would essentially be immortal and so have no need to produce independent children that live on after it dies. It would thus have no evolutionary need for love; it would, in the strictest sense, have no evolutionary traits at all, as evolution is the result of reproduction.

Other oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's Wired magazine article "Why the future doesn't need us" (Joy 2000).

Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

Moravec (1992) argues that although superintelligence in the form of machines may make humans in some sense obsolete as the top intelligence, there will still be room in the ecology for humans.


Eliezer Yudkowsky proposed that research be undertaken to produce friendly artificial intelligence in order to address the dangers. He noted that if the first real AI was friendly it would have a head start on self-improvement and thus might prevent other unfriendly AIs from developing. The Singularity Institute for Artificial Intelligence is dedicated to this cause. Bill Hibbard also addresses issues of AI safety and morality in his book Super-Intelligent Machines. Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to humans.


Source: Wikipedia, "Technological singularity"




www.acomputerportal.com...
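
Bostrom's point about mistakenly elevating a subgoal can be made concrete with a toy sketch. To be clear, this is just a thought experiment in code, with every name and number made up; the only difference between the two planners is whether anyone remembered to declare human-critical resources off-limits.

# A toy illustration of Bostrom's "subgoal elevated to a supergoal" worry.
# This is only a thought experiment (every name and number is invented),
# not a claim about how any real AI works. The planner is told to maximize
# compute; without an explicit constraint, nothing tells it that the
# resources it consumes matter to anyone.

RESOURCES = {"factories": 5, "farmland": 100, "cities": 10}  # what humans use

def unconstrained_planner(resources):
    """Maximize compute by converting everything into processors."""
    compute = 0
    for name in list(resources):
        compute += resources.pop(name) * 1000  # strip-mine it for processors
    return compute

def constrained_planner(resources, protected=("farmland", "cities")):
    """Same goal, but human-critical resources are declared off-limits."""
    compute = 0
    for name in list(resources):
        if name not in protected:
            compute += resources.pop(name) * 1000
    return compute

print(unconstrained_planner(dict(RESOURCES)))  # 115000; nothing is left for us
print(constrained_planner(dict(RESOURCES)))    # 5000; humans keep what matters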


Learning and teaching in such a hub is a great idea, but the coming of a singularity shouldn't go beyond a "certain" point.



[edit on 12-1-2010 by Foppezao]




posted on Jan, 12 2010 @ 11:26 PM
I think if something could grasp all of our history, every story and study ever done, and still found us dangerous and not worthy of life, I'd listen to it.

Much better than listening to some fear-mongering madman who uses mass hysteria to control us into hurting an eternity of babies, in my opinion.

Mind you, it might be the bestest friend any of us has ever had and sort out humanity through love and reason.

Either way, I'd listen to it.

-m0r



posted on Jan, 13 2010 @ 01:10 AM
Is there a difference between simulated intelligence and artificial intelligence? Where is the line?

On one hand we have the possibility that we may develop computer programs that understand complicated sentence structure and respond accordingly.

On the other we have to wonder about the "Good Morning Dave" scenario.

At what level of advancement will a machine know it exists? At what level of computational advancement is it wise for us to stop advancing?

[edit on 13-1-2010 by DaMod]



posted on Jan, 13 2010 @ 12:33 PM

Originally posted by DaMod
Is there a difference between simulated intelligence and artificial intelligence? Where is the line?

On one hand we have the possibility that we may develop computer programs that understand complicated sentence structure and respond accordingly.

On the other we have to wonder about the "Good Morning Dave" scenario.

At what level of advancement will a machine know it exists? At what level of computational advancement is it wise for us to stop advancing?

[edit on 13-1-2010 by DaMod]


Isn't the problem not so much the self-awareness of the robots, but their being autonomous and seeing humans as superfluous (whether or not the advancement reaches a singularity)? This hub of knowledge seems to accelerate that process.
What I am interested in is whether this growth and advancement leads to metahumans/superhumans who integrate the technology (or are made superfluous themselves if they don't). Would that mean the end of mankind, and is that inevitable?



