Shape shifting robots

posted on Sep, 17 2004 @ 08:23 PM
Interesting article: Self-reconfigurable robots can reshape themselves as the task or environment changes.
The idea is similar to the toy "Transformers," which can reconfigure their parts.

What I found most interesting:


Early work in self-reconfiguring robots used centralized methods to control how the pieces reassembled themselves. Today, researchers in the field generally acknowledge the need for distributed methods, in which each robotic module takes at least some control of its own destiny.

The idea is that a robot made of lots of identical parts, able to reconfigure their arrangement, would have lots of little brains rather than one central processing unit. This reminds me of insect hives, where each individual does a very simple job, but the hive as a whole runs in a very organized way, as if there were centralized leadership, even though there isn't.

It also reminds me of some discussions of nanotechnology. I think it's called "swarm theory," where many self-replicating units act together for a unified result.

It almost seems like a different form of intelligence. Instead of having one powerful brain that has to be conscious of everything, there are many small brains that only have very simple computing power. Yet the end result is the same: the task gets accomplished.
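
Just to make that concrete, here's a rough toy sketch in Python of the "many little brains" idea. The rule itself (each module nudging toward the average of a few neighbours) is something I made up for illustration, not anything from the article, but it shows a group pulling together with no central controller:

import random

NUM_MODULES = 20
NEIGHBOURS = 3
STEPS = 50

# start scattered along a line
positions = [random.uniform(0.0, 100.0) for _ in range(NUM_MODULES)]

def local_rule(i, positions):
    """Each module's tiny 'brain': it only looks at a few nearest neighbours."""
    by_distance = sorted(range(len(positions)), key=lambda j: abs(positions[j] - positions[i]))
    nearest = [positions[j] for j in by_distance[1:NEIGHBOURS + 1]]
    target = sum(nearest) / len(nearest)
    # move a small fraction of the way toward the local average
    return positions[i] + 0.2 * (target - positions[i])

for step in range(STEPS):
    positions = [local_rule(i, positions) for i in range(NUM_MODULES)]

spread = max(positions) - min(positions)
print(f"spread after {STEPS} steps: {spread:.2f}")  # shrinks toward 0, no leader needed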

It also reminds me of some of these terrorist groups composed of many independent small cells. If one part is destroyed, it doesn't compromise the whole group.

Oh well, I guess I'm rambling... Anybody have any thoughts?

[edit on 17-9-2004 by cimmerius]




posted on Sep, 20 2004 @ 10:49 AM
This I'd like to see.



posted on Sep, 20 2004 @ 02:15 PM

Originally posted by cimmerius
[...] Today, researchers in the field generally acknowledge the need for distributed methods, in which each robotic module takes at least some control of its own destiny. [...] This reminds me of insect hives, where each individual does a very simple job, but the hive as a whole runs in a very organized way, as if there were centralized leadership, even though there isn't.
[...]
It almost seems like a different form of intelligence. Instead of having one powerful brain that has to be conscious of everything, there are many small brains that only have very simple computing power.
[...]
It also reminds me of some of these terrorist groups composed of many independent small cells. If one part is destroyed, it doesn't compromise the whole group.


Your analogies are apt in principle. This article goes a bit further though, I think, as it doesn't seem to be just a way of increasing redundancy in systems (as in "fall back provision", not losing yer job: defined here: computing-dictionary.thefreedictionary.com...).

Redundancy, or decentralised control, mirrors systems in the natural world.
Think about how you heal after trauma, for example, with the formation of scar tissue ... it just sort of happens.

Some individual organisms, like starfish and some of the cephalopods (octopuses), operate *more* like this: their limbs have a sort of autonomy, but not to the point of reconfiguring altogether (although octopuses can do some freaky things with their skin - oops, my turn to ramble) ...

... that level of reconfiguration on the fly sounds like the way the internet works - it was always designed to route data around damage ... this is the first time I'd heard it applied to this extent in physical things tho'
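
Something like this toy sketch, maybe (Python, with a made-up five-node network rather than anything from real internet routing protocols) - knock out a node and the path-finding just goes around it:

from collections import deque

links = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def find_path(start, goal, dead=frozenset()):
    """Breadth-first search that simply ignores any 'damaged' nodes."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in links[node] - set(dead):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path("A", "E"))               # e.g. ['A', 'B', 'D', 'E']
print(find_path("A", "E", dead={"B"}))   # ['A', 'C', 'D', 'E'] -- routed around the damage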

Interesting find.



posted on Sep, 20 2004 @ 02:20 PM
I don't think so.

We have enough of a problem getting a humanoid robot to walk on two feet.

Not quite yet.



posted on Sep, 23 2004 @ 11:19 PM
Here is another article about robots that go beyond reconfiguring. They break apart and then recombine. Link


The modular robot can move along as a complete unit, built up of around 100 smaller parts. But when faced with an impassable obstacle, some of these modules can detach and proceed as a smaller unit, or even on their own.



Once the obstacle has been passed, however, the smaller units will automatically recombine into the larger whole, enabling them to travel over different terrain once more.

It sounds like most of the work is theoretical right now. They are developing software and doing simulations. But apparently there are some designs being tested.


Some preliminary testing has also been conducted using a modular lattice-shaped robot developed at Dartmouth, called Crystal.

I think it will be a long time before these kinds of applications can outperform a well-designed conventional robot. Most conventional robots, however, are designed for fairly specific tasks and environments. The authors hope that eventually modular robots will have more flexibility.
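
To picture the split-and-recombine behaviour, here's a very rough sketch in Python. The threshold and the obstacle rule are made-up numbers, nothing from the Dartmouth Crystal robot, it's just the general flow the article describes:

MAX_UNIT_FOR_GAP = 10   # assumed: only units this small fit past the obstacle

def traverse(total_modules, obstacle_ahead):
    """Return the units that cross the obstacle, then the recombined whole."""
    if not obstacle_ahead or total_modules <= MAX_UNIT_FOR_GAP:
        units = [total_modules]          # move as one piece
    else:
        # detach into smaller units that can each fit through
        units = []
        remaining = total_modules
        while remaining > 0:
            chunk = min(MAX_UNIT_FOR_GAP, remaining)
            units.append(chunk)
            remaining -= chunk
    # ... each unit crosses on its own ...
    recombined = sum(units)              # automatically reassemble afterwards
    return units, recombined

units, whole = traverse(total_modules=100, obstacle_ahead=True)
print(units)   # [10, 10, 10, ...] -- ten small units get past the obstacle
print(whole)   # 100 -- back to the full robot on the other side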

[edit on 23-9-2004 by cimmerius]



posted on Sep, 24 2004 @ 01:47 AM
But the thing with robots is that if they have a glitch or malfunction, they could turn on us humans (a bit like I, Robot). There is always a danger when humans want to recreate a machine version of ourselves. Then the government will get their hands on it (the robot) and probably use it to keep us in check, invading our privacy even more. This is a bad idea either way they create it.



posted on Sep, 24 2004 @ 01:58 AM
Ah Cimmerius, you've brought back good memories of my childhood.

I remember the first time I saw the Transformers movie... ahh, such a good movie.




posted on Sep, 24 2004 @ 04:13 PM
I don't see any point in poo-pooing this. It's a fantastic leap forward, and yeah, the leap's gonna take a while, but it's a large gap to leap. The ability to alter one's self is basically the ultimate power (ever play the game "what's your favorite superpower?" - picking shapeshifting is cheating), and if we can't have it, well, we might as well give it to things that obey us.

There's no danger in robots attacking men if we don't build 'em to do it. Give 'em intelligence, just make the first thing you program a rule not to harm men/the owner (depending on the purpose of the robot), and make all other logical functions require that rule to evaluate as true. That way, if something goes wrong with the whole 'don't kill me' thing, nothing else will work.
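
Roughly like this sketch (Python). The harms_a_human() check is obviously a stand-in - actually deciding that is the hard part - but it shows the gating idea where nothing runs unless the safety rule evaluates true:

def harms_a_human(action):
    # placeholder: pretend we can tell (real robots can't, which is the catch)
    return action.get("harmful", False)

def safety_gate(action):
    """The first thing programmed: no action runs unless this evaluates true."""
    return not harms_a_human(action)

def execute(action):
    # every other function is required to pass through the gate first
    if not safety_gate(action):
        return "refused"            # if the gate fails, nothing else works
    return f"doing: {action['name']}"

print(execute({"name": "fetch toolbox"}))                 # doing: fetch toolbox
print(execute({"name": "swing arm", "harmful": True}))    # refused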



