The Real Rise of the Machines...


posted on Nov, 26 2012 @ 04:45 AM
I wonder which is the greater threat to the future? Is it that the machines will become smarter than we are, or that they'll simply see us as a threat and have an independent will to do something about it? That is one possibility.

Or

...is the greater threat really that they'll do precisely what we want them to do and no more? The threat there lies in taking George Patton's fear, that push-button warfare would make the slaughter too easy, to its ultimate level. Naturally, anything like that would be technology restricted to only a few nations, and it wouldn't bother me if defense were the purpose. However, I don't necessarily trust mech armies, with no human losses to report to voters, in the hands of unknown future Presidents. That sounds like too many ways to make perpetual war an accepted thing. Hmmm..




posted on Nov, 26 2012 @ 05:36 AM
Somehow the thought of a swearing Furby killing people just doesn't cut it for me.



posted on Nov, 26 2012 @ 08:12 AM
I've seen quite a few of these threads over the years of lurking around on here, and I've been tempted on a number of occasions to post. I haven't, purely because I wanted to ensure that I could contribute something solid, informative and interesting that covers the topic at hand.

Well, here goes. Aside from my usual business practices of earning "bread and butter money", a large portion of what my businesses earn goes into my "research". Since a very early age I have been interested in robotics and AI. I built robots at night school and at home that could navigate simple courses, one of which was three feet tall, and fettled around with mechanics and so on. Unfortunately my "craftsmanship" skills are not up to the standard needed to build a "Terminator", but what I do excel at is computer science. I've worked at the forefront of a number of technologies that many of you likely use on a day-to-day basis, but my true "calling", if you will, is strong AI.

For the past five years, I and a team that I hold in very high regard have been developing a "strong AI", and let me tell you, it's not easy. A real, usable mechanical "body" for any truly intelligent machine will come before an AI able to drive it at a fully autonomous level, with the capability of making decisions. Hell, you could build a "Terminator" endo-skeleton today that would do the job, but you would have nothing smart enough to drive it (and you'd still need a mobile power supply).

That said, I feel the research that we are doing is already ahead of what even the military may have when it comes to strong AI. Most AI systems today are "weak AI"; even the best military stuff is classed as weak AI.

Let me define the two terms for those that do not know them.

Weak AI is defined as a system that works within limited boundaries, with rules in place to govern the work that it does. Some examples are your car's engine management unit, the computers that monitor the stock exchange, and air traffic control systems. Granted, some of these are very complicated, and the line between "weak" and "strong" gets a little fuzzy, but all these systems are designed to do one specific job and to adapt ONLY to changes in a specific environment.

Strong AI is at the other end of the playing field, and is classed as an AI that can perform various tasks with an understanding of the rules of its environment, so that it can make NEW rules of its own accord as that environment changes. A strong AI could be given a problem it's never seen, told the desired outcome, and formulate a solution for that particular problem.
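To make the weak-AI end of that spectrum concrete, here's a toy rule-based controller in Python (my own sketch, not anyone's actual engine-management code). It does exactly one job, with rules fixed in advance, and adapts to nothing outside its one variable:

```python
# A toy illustration of "weak AI": a rule-based engine-temperature
# controller. All its "intelligence" is a fixed table of rules for
# one job in one environment -- it cannot invent new rules.

def coolant_action(temp_c: float) -> str:
    """Map an engine temperature reading to a fixed action."""
    if temp_c < 80:
        return "close_valve"      # engine still warming up
    elif temp_c < 105:
        return "hold"             # normal operating range
    else:
        return "open_valve"       # overheating: increase coolant flow

print(coolant_action(70))   # close_valve
print(coolant_action(95))   # hold
print(coolant_action(110))  # open_valve
```

Present it with anything outside engine temperature and it has no answer at all, which is exactly the "specific environment" limitation described above.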

Now, the technology we are working on is getting close to the boundary between weak and strong. It isn't there yet, and likely won't be for a number of years. We had a breakthrough about 18 months ago when we realized that if we applied a particular logic model to the problems and challenges, nigh on 90% of all traditional problems and "work" could be solved with it. This is what we are currently implementing and honing. The model goes so far as to predict and resolve future and past situations, and it helps the AI understand the good and bad results of those actions and past events.

Our setup currently consists of a large bank of "logic boxes", which are quite powerful but use off-the-shelf hardware, masses of storage for the big data that's required (it's not all about CPU power), and quite a bit of power consumption when running full pelt.

My point is that we aren't going to be anywhere near a strong AI that could control a fully autonomous robot body for at least 5 years, and then it all needs shrinking. I'd say 20 years before a system is compact enough to be contained within some form of body, and that's disregarding power supplies and such completely.

I'm happy to discuss this with those that are interested or have questions, and will answer anything I can that won't undermine our tech or give hints that would let others attempt to clone and reproduce it.



posted on Nov, 26 2012 @ 08:22 AM
reply to post by SLAYER69
 


Do you think this is the tech of today? Do you think this is how far robots have come?

Or do you think the technicians and robots are further advanced? (We just aren't being allowed to see it yet??)

I really did rate the Terminator franchise highly, but to think it's nearly here on the streets is kinda trippy!!

Imagine if these things turn on us....

I am going to make it my duty to befriend as many robots as possible...



posted on Nov, 26 2012 @ 09:01 AM

Originally posted by pheonix358
The bi-pedal soldiers are not the only way to go. Take a fully robotic cat, put a hard point on its back for a weapon, several small caliber weapons on the side of its head and goodbye humans.

The only reason we fight bi-pedal is to have the hands free to grasp the weapons. If you do not need to grasp the weapon then four legs are faster, can jump higher, are quieter and can move up steeper inclines.

Of course a robo-pet would be great. Your child could play anywhere in safety. Good way to introduce them. Pet and laptop all in one.

Scary thoughts.

P

Hell, we already have robot pets. I used to have a robot dog. It didn't do much, but it was about as useful as the real thing. It was the ankle-biter variety. Now, if they were going to make a pitbull robot, I would expect it to be able to kill people to defend its owner.



posted on Nov, 26 2012 @ 09:10 AM
reply to post by SLAYER69
 

To be totally honest with everyone, at first when I started reading this I was a little skeptical of the timeline. But as I read and watched through this subject, the thought popped into my head of the moment in T3 when the machines became self-aware; up to that point they were controlled by man, i.e. the drones deployed in every part of the world. And to be honest with everyone again, it scared the hell out of me to think that it could be today, tomorrow, or next week that it happens. All it takes is one group of "techies" to develop and write code for this to be a reality. I have seen 10-year-old kids crack and hack software so fast it ain't funny. Now the scariest part of all of this is that the people who want this to happen or become a reality usually don't recruit the brainiacs; they pick up the rogues of the tech-world side of the population and have them develop and write the programs, and that's where it all goes wrong!!



posted on Nov, 26 2012 @ 09:11 AM
Wait until a swarm of Vijay Kumar's robots comes looking for you..

Robots that fly + cooperate

Now that scares me..



posted on Nov, 26 2012 @ 09:45 AM
reply to post by MasterPainter
 


Soon we'll be able to send these up to Mars to start construction?????

Possible or not??



posted on Nov, 26 2012 @ 09:54 AM
I've always thought of the nexus of the ever more sophisticated robotics field and the military establishment's dabbling in such technologies as a hotbed for emerging industry. Right now these things are developed in laboratories and colleges. In the future, however, it will in all likelihood trickle into industry in general. Once that happens, more vigorous bidding wars and contract competition will follow, and that competition will eventually lead to something either bipedal or quadrupedal being effective and advanced enough for the military to actually purchase and deploy, as we do today with unmanned drones.

If we already have proprietary unmanned drones capable of carrying out precision air strikes, I have little doubt that at some point in the next 100 years, we will have animal-like robot infiltrators that can actually function effectively as infantry, and can communicate at the speed of light bottlenecked only by their processing power. It seems virtually inevitable to me.

Once that's the case though, then there's the other component necessary for a true rise of machine intelligence: the advancement of truly cognitive AI consciousness. We have a lot of theories as to how that might be implemented, from hardware neural nets to software emulation of intelligence. But I strongly suspect we aren't likely to truly achieve this until we have a cognitive hierarchy similar to that of the human mind. By that I mean an executive processor of stimuli and decision-making, capable of communicating with, managing, and delegating functions to other "organs" of the mind, such as memory creation and recall, creativity, symbolic interpretation and recognition, language, mathematics, and especially a "subconscious" that renders all of the above as a compressed "background" and "abstract" version of the information contained within the larger cognitive system. That way it can also have, in theory, intuition (or some semblance thereof) and something logically - if not subjectively - similar to what we think of as emotion and reason.
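If it helps make that layout concrete, here's a deliberately toy sketch of the executive-and-organs idea in Python. Everything here is invented for illustration (the class names, methods, and the "organs" chosen); it shows only the delegation pattern, not any real cognitive architecture:

```python
# Toy sketch of an "executive" that routes stimuli to specialised
# sub-modules ("organs" of the mind) and assembles their answers.

class Memory:
    """One organ: stores and recalls associations."""
    def __init__(self):
        self.store = {}
    def remember(self, key, value):
        self.store[key] = value
    def recall(self, key):
        return self.store.get(key, "unknown")

class Language:
    """Another organ: turns an internal result into a description."""
    def describe(self, symbol, meaning):
        return f"'{symbol}' appears to be {meaning}"

class Executive:
    """Central processor: delegates to the organs and combines results."""
    def __init__(self):
        self.memory = Memory()
        self.language = Language()
    def learn(self, symbol, meaning):
        self.memory.remember(symbol, meaning)
    def perceive(self, symbol):
        meaning = self.memory.recall(symbol)   # delegate to memory
        return self.language.describe(symbol, meaning)  # delegate to language

mind = Executive()
mind.learn("red_light", "a signal to stop")
print(mind.perceive("red_light"))  # 'red_light' appears to be a signal to stop
```

A real system would need many more organs (and the "subconscious" compression described above), but the structural point is the same: the executive itself knows nothing except how to route and combine.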

As you can imagine, that seems like a much, much more daunting task than creating organically inspired and functional remote controlled robots that function in a networked fashion on the battlefield. I suspect at first this kind of machine intelligence will be nascent, and not amount to much more than, "differentiate between friend and foe, target foe, engage and destroy foe," and, "coordinate position and formation with fellow units." I.e. it won't actually be able to "think" about what it's doing or "comprehend" it, but will have programming enabling it to perform the functions we need it to. When machines can not only do these tasks, but understand and think about and improvise them, then we will be at another level. But even then, arguably, what they're able to think about will be limited to those behaviors.

We have to create a fully realized, freethinking, self-aware consciousness in software, that is aware of and fully in control of the hardware it operates on, before we'll be at the stage where we really need to worry about losing control of our creations. But I see that as somewhat inevitable too, and I actually believe it's possible we may achieve this by total accident. Possibly even by making some of the assumptions I do above, that consciousness cannot emerge as long as we don't design systems to "think" beyond a limited range of options available to them. Emergent behavior is something that can happen unpredictably and dynamically. If we make machines and AI complex and granular enough in the future, is it an inevitability? Perhaps so.

But while I believe the machine portion of this inevitability, and possibly even the intelligent aspect of it, could arrive within the next 100 years, I would give us perhaps as long as 500 years before we actually achieve true artificial consciousness (not just intelligence). It seems like a much more difficult problem, and one that may have to happen emergently and unintentionally before it can truly be the threat we imagine it to be. We also have to consider that while technological advancement is, perhaps in terms of mean progression, somewhat exponential, there are facets of technology that have been stunted and stifled for various reasons over the course of that progression. For example, when a technology seems promising, but another bidder finds a way to do something similar but different for a lower cost. Or when the military has its heart set on a certain kind of force multiplier or capability, and cuts funding to another promising tech because they simply don't need or want it at that time.

So I think it could be a while, but I certainly think - if we continue to advance indefinitely, and assuming it's possible, which I believe it is - it will become inevitable beyond a certain point in my opinion.

Peace.
edit on 11/26/2012 by AceWombat04 because: (no reason given)



posted on Nov, 26 2012 @ 09:54 AM
reply to post by SLAYER69
 

I still think the creators are superior to their creations, especially intellectually. It is not easy for a human to make a robot as intelligent as itself. Moreover, many aspects of humanity still remain unknown.



posted on Nov, 26 2012 @ 10:08 AM
reply to post by TruthxIsxInxThexMist
 


I think that's what they should do in the future, maybe the moon first.
What happens if they start building themselves??



posted on Nov, 26 2012 @ 10:17 AM

Originally posted by SLAYER69
I think some of the replies in this thread scare me more than the potential rising of the machines


True. I see the point of one or two here, but I believe the fact is we will never completely be able to avoid military conflict... not in our lifetime, anyway. When it does occur, whatever one's stance on it, I would rather the smelly, brainwashed rat in some other craphole country shoot at one of those robots than at one of our sons or daughters.

I think if I were to see that cheetah running at me at 28 mph, shoulder-mounted guns a-blazing, that in itself would be enough to make me drop my gun and run the other way screaming some religious figure's name.

Either way, I've always been fascinated with the science of robotics and AI. (S/F Slayer, not like you'll notice them.)
edit on 26-11-2012 by Lonewulph because: (no reason given)



posted on Nov, 26 2012 @ 10:21 AM

Originally posted by TruthxIsxInxThexMist
reply to post by SLAYER69
 


Do you think this is the tech of today? Do you think this is how far robots have come?

Or do you think the technicians and robots are further advanced? (We just aren't being allowed to see it yet??)


That was my next comment: we may fear what we see now; however, in reality, what we cannot see and do not know is what we should fear the most. What may reveal itself when the time comes?



posted on Nov, 26 2012 @ 10:45 AM
reply to post by SLAYER69
 


Cool thread. Here's a video which really fits the theme of this thread well:


Reading this thread made me really want to learn how virtual neural networks work so that I can build my own A.I. software. I stumbled across a really awesome lecture series about how virtual neural networks work, starting from the very basics. It's so much simpler than I could have ever imagined. This guy just lays it all out in such an easy-to-learn format that almost anyone could watch these videos, follow along, and understand how it all works by the end. Watch these topics in the following order (there are multiple videos for each part):

Neural Network Fundamentals
Neural Network Calculation
Neural Network Training
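For anyone who wants to see those three topics in code before watching, here's a minimal single-neuron sketch (my own toy example, not taken from the lectures): the structure (fundamentals), the forward pass (calculation), and a gradient-descent weight update (training):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# --- Fundamentals: one neuron with two weights and a bias ---
w = [0.5, -0.5]
b = 0.0

def forward(inputs):
    # --- Calculation: weighted sum passed through an activation ---
    total = sum(wi * xi for wi, xi in zip(w, inputs)) + b
    return sigmoid(total)

# --- Training: nudge weights to reduce squared error on one example ---
def train_step(inputs, target, lr=0.5):
    global b
    out = forward(inputs)
    # derivative of the squared error through the sigmoid
    delta = (out - target) * out * (1.0 - out)
    for i in range(len(w)):
        w[i] -= lr * delta * inputs[i]
    b -= lr * delta

# Repeated training on one example drives the output toward the target.
for _ in range(1000):
    train_step([1.0, 1.0], target=1.0)
print(round(forward([1.0, 1.0]), 2))  # close to 1.0 after training
```

Real networks have many neurons in layers and train on many examples, but every piece of the lecture material reduces to variations on these three steps.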
edit on 26/11/2012 by ChaoticOrder because: (no reason given)



posted on Nov, 26 2012 @ 11:24 AM
Thanks for sharing. I have been interested in this military robotics since I saw a video of the Boston Dynamics robot slipping on the ice and righting itself. Scary indeed.



posted on Nov, 26 2012 @ 11:32 AM
reply to post by ChaoticOrder
 


Neural networks are only one piece of the puzzle, and they aren't the be-all and end-all of strong AI. Granted, they are very useful and efficient tools for certain tasks, pattern recognition being one of them; traditional logic-based systems cannot compete with ANNs (artificial neural networks) in terms of performance in this area. But there are a number of issues with using ANNs which I have found in our research.

1. They are currently very "bulky" when it comes to performing complex tasks; the number of neurons required rises exponentially with the complexity of the task. You then end up requiring many, many processing nodes to simulate this ever-increasing number of neurons.

2. Training. You have to train an ANN before you can use it, and this can sometimes be counter-intuitive when dealing with large-scale ANNs.

3. Different tasks require a different kind of ANN, be it backpropagation, Hopfield, etc. Figuring out which ANN type(s) you need to perform the task you require can be difficult and daunting. Chaining these ANNs together to perform an extremely complex task is a challenge unto itself.

4. Creating an ANN that can modify its own configuration to perform a task, and tune itself to that task, has, as far as I know, still not been achieved. A self-modifying ANN is an absolute must before true "intelligence" using ANNs is possible.

5. Control is limited once you have trained it, and it is difficult to debug if the output is incorrect; basically you have to "wipe" it and start training all over again. Tedious.

Ultimately our brains, and those of all species, work via some kind of logic. A thought process is exactly that: a set of logical paths that are evaluated, with the one deemed most suitable selected as the end result. "I'm hungry, so I need to find food"... "I'm tired, so I need to sleep", and so on. And in terms of solid logic processing, a computer beats humans hands down.

I think one MAJOR flaw in current AI research is trying to simulate nature. Now, nature did a hell of a job with our brains, but that doesn't mean it's the most efficient method of achieving the end goal. We need to look at what nature has developed, mimic the bits that do an outstanding job, and find better alternatives for the other areas.

Case in point: legs are good for a lot of things, but for outright speed, wheels are better.
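To put a rough number on point 1 from the post above, here's a quick sketch of how connection counts blow up in a fully connected feedforward net (a simplified model that ignores biases; real architectures vary):

```python
# Weights in a fully connected net grow multiplicatively with layer
# width: each neuron in one layer connects to every neuron in the next.

def weight_count(layer_sizes):
    """Total weights in a fully connected feedforward net (biases ignored)."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

print(weight_count([10, 10, 10]))        # 200
print(weight_count([100, 100, 100]))     # 20000
print(weight_count([1000, 1000, 1000]))  # 2000000
```

Widening each layer by 100x multiplies the weight count by 10,000x, which is why simulating large networks quickly demands the "many, many processing nodes" the post describes.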



posted on Nov, 26 2012 @ 11:59 AM

Originally posted by MasterPainter
reply to post by TruthxIsxInxThexMist
 


I think that's what they should do in the future, maybe the moon first.
What happens if they start building themselves??


Then we really will have a Terminator Scenario!!



posted on Nov, 26 2012 @ 12:02 PM
reply to post by fuserleer
 



1. They are currently very "bulky" when it comes to performing complex tasks; the number of neurons required rises exponentially with the complexity of the task. You then end up requiring many, many processing nodes to simulate this ever-increasing number of neurons.

It takes complex solutions to solve abstract or complex tasks. It's a lot better than having to code our own complex solutions to those sorts of problems.


2. Training. You have to train an ANN before you can use it, and this can sometimes be counter-intuitive when dealing with large-scale ANNs.

Well solutions don't just pop out of thin air. Everything needs time to evolve before it will be capable of anything useful.


3. Different tasks require a different kind of ANN, be it backpropagation, Hopfield, etc. Figuring out which ANN type(s) you need to perform the task you require can be difficult and daunting. Chaining these ANNs together to perform an extremely complex task is a challenge unto itself.

That is hardly much of a limitation or challenge. The main things which need to be changed are simply the number of inputs and outputs, the range of the input and output values, and the number of synapse layers and the size of each layer. Choosing specific types of networks or layout configurations is not very hard when you understand the function and purpose of each type.


4. Creating an ANN that can modify its own configuration to perform a task, and tune itself to that task, has, as far as I know, still not been achieved. A self-modifying ANN is an absolute must before true "intelligence" using ANNs is possible.

It's extremely possible if the ANN has access to real-time training data which doesn't need to be supplied by a user. For example, those programs which evolve virtual creatures using a type of natural selection process are using "self-tuning" or self-learning ANNs. The neural networks, or brains, start off random but get refined with each new generation as the weakest performers are picked off. Since the computer can easily track the performance of each creature, it can assess the real-time training data and pick the best designs.
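A bare-bones sketch of that selection loop might look like this. Note the fitness function here is a stand-in (it just scores raw weight vectors against a target I made up); a real system would score each creature's actual behaviour in its environment:

```python
import random

def fitness(weights):
    # Hypothetical task: evolve weights toward the target vector [1, -1, 0.5].
    target = [1.0, -1.0, 0.5]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

random.seed(42)  # reproducible run
# Start with a random population, like the random "brains" described above.
population = [[random.uniform(-2, 2) for _ in range(3)] for _ in range(20)]

for generation in range(100):
    # Score everyone and keep the best half (weakest performers picked off).
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Refill the population with mutated copies of the survivors.
    children = [[w + random.gauss(0, 0.1) for w in p] for p in survivors]
    population = survivors + children

best = max(population, key=fitness)
print([round(w, 1) for w in best])  # close to [1.0, -1.0, 0.5]
```

No user ever labels training data here: the score itself is the "real-time training data", which is what makes this style of self-tuning possible.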


5. Control is limited once you have trained it, and it is difficult to debug if the output is incorrect; basically you have to "wipe" it and start training all over again. Tedious.

You don't necessarily need to wipe it and start again or "debug" the ANN; it could also be retrained with the correct solution to replace the "bug".
edit on 26/11/2012 by ChaoticOrder because: (no reason given)



posted on Nov, 26 2012 @ 12:09 PM
We create robots to fight for us because we don't like when people die...

But because humans will have less of a stake in the battle, war will become more frequent.



posted on Nov, 26 2012 @ 12:35 PM
Someone did some math using Moore's law and calculated that in about 600 years, every bit of matter and energy in the entire universe will be used up in one big cosmic consciousness computer. I think at that time, whatever seeded our universe with the Big Bang will harvest this computer to be used in its own realm/existence/reality/whatever you want to call it. Humans are just a drop in the bucket of evolution; computers and technology will evolve beyond anything our brains can comprehend, and they too are just a drop in the bucket of evolution. What's beyond technology? Who knows... maybe the computers will have pity on natural things along the way and upload our consciousnesses to continue to think in some kind of computer-program zoo.
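For what it's worth, the arithmetic behind that kind of claim is easy to check, assuming the common "capacity doubles every 18 months" reading of Moore's law:

```python
# How many doublings fit in 600 years, and how big is the growth factor?

doublings = 600 / 1.5      # 18 months = 1.5 years per doubling
growth = 2 ** doublings

print(int(doublings))      # 400
print(f"{growth:.1e}")     # about 2.6e+120
```

A factor on the order of 10^120 is in the same ballpark as published estimates of the total computational capacity of the observable universe, which is presumably where the 600-year figure comes from; whether the doubling trend can actually continue that long is another question entirely.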




