
Check out this groundbreaking AI technology


posted on Jan, 23 2017 @ 03:06 PM
And all of this is worthless unless we focus on security as well. Because this just sounds like the BORG hive mind concept.

The overall end result, though, is for A.I. to create other A.I. customized for each issue at hand, but also connected to a central node that in turn talks to a master node, much like blockchain technology. An A.I.-like blockchain would be something I'd like to see happen as well.

If we can survive long enough to get there.




posted on Jan, 23 2017 @ 05:17 PM
a reply to: tikbalang

Cool gif. Point taken. I tend to be more optimistic about humanity, in that humans adapt to their environment. If that environment is positive, a fair majority (90%+) will be positive. If that environment is negative, people will tend toward the negative.

The statement you quoted is fact. The middle or lower classes losing their income would crumble the system. Income is provided through labor; that's how you and I and the average Joe pay our bills. Labor, however, becomes more obsolete the more we advance. This is called technological unemployment, and it has been a concern since the dawn of our industrialized world.

I could simply continue quoting the wiki page word for word to make my case, but the gist is:

Historically, we've addressed this problem by "creating jobs". Trump is gonna create X jobs; in Belgium it's the same, the government is going to create X amount of jobs. The problem is there are no longer useful jobs to create; machines do almost everything better and faster. So we started creating "useless" jobs. Call centers. Yoga/fitness centers. The entire marketing industry, and so on. Jobs whose sole purpose is being a job, rather than work that needs to be done to create added value. (Tree -> chair is actual value. Broken router due to #ty design -> fixed router thanks to 3,000 people on stand-by is not value.) Continuing this train of thought is the example I gave a couple of posts ago about the 2,000 tons of sand, where we are reaching the point of the utterly ridiculous, just to create jobs.

There's a flaw there, and economists know about it. Elon Musk knows about it, as do plenty other high-profile people.

I don't think I've told any lies.

EDIT:

Just to be clear, the flaw being: when our current industrialized, capitalistic system was founded, nobody expected it to reach the point of human labor becoming obsolete. We're starting to reach that point; in some ways we already have.



posted on Jan, 23 2017 @ 05:26 PM
a reply to: galadofwarthethird

In a word, no.

What this team did is take a certain subset of AI known as computer vision and basically make a very good optimization. One of my professors does a lot of AI research; I'll ask him about this tomorrow for some better detail, since he no doubt understands this better than I do, as I only know the basics.

Essentially, in computer operations you have a class of mathematical operators called bitwise operators. These operations are extremely fast because they're just flipping bits. A bitwise operation has a runtime (in big-O notation) of O(1), which is the fastest operation possible. Something like addition has a runtime of O(n), where n is the length of the operands. O(n) is usually the fastest runtime you can hope for unless you're able to frame things in the context of a bitwise operation.

The bitwise operations are AND, OR, NOT, and XOR, and what this company is using is XNOR, which is really just XOR and NOT combined (though simplified to one operation).

With AND you take 2 bits, and if they're the same you output a 1, if they're different a 0
11=1
00=1
01=0
10=0

With OR you take 2 bits and if there's a 1 involved it's a 1, otherwise 0
11=1
00=0
01=1
10=1

With NOT you basically flip the bits to what they're not
11=00
00=11
01=10
10=01

Finally there's XOR which checks if the two bits are different, 1 if so, 0 if not.
11=0
00=0
10=1
01=1

With XNOR you're using XOR+NOT logic, so you basically just invert the XOR table. You're asking: are the two bits the same?
11=1
00=1
10=0
01=0
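Those last two tables can be checked directly in code; here's a quick Python sketch (my illustration, nothing to do with the company's code):

```python
# XOR (^) is 1 when the bits differ; XNOR inverts that (1 when they match).
for a in (0, 1):
    for b in (0, 1):
        xor = a ^ b
        xnor = xor ^ 1  # NOT on a single bit is just XOR with 1
        print(f"{a}{b}: xor={xor} xnor={xnor}")
```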

There's also shifts and rotations. We have these same concepts in base-10 math and use them all the time (common core math actually teaches this as the way to multiply and divide). The difference is the scale in base 2.

In base-10 math you can perform a shift by a factor of 10. Adding a 0 to 100 is an easy shortcut (shift left) to multiply a value by 10, and removing a 0 (shift right) divides it by 10. You can do the same thing in base 2, aka binary. The difference is that the value changes by a factor of 2 instead. So this gives a very fast way to double or halve anything.

To give an example, this is the number 4
00100

If I shift it left to
01000

The value is now 8, if I shift it right to
00010

The value is now 2.

So, without going deeper into the mathematics of it, I can multiply or divide anything by a power of two (2, 4, 8, 16, etc.) almost instantaneously by shifting. To divide by 4 I would just shift right by 2 places, or to divide by 8, by 3 places. Using this math alongside bitwise operators, a computer can accomplish complex calculations in far fewer steps than traditionally performing the math would require.
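In Python (and most languages) those shifts look like this; a minimal sketch of the idea:

```python
x = 4            # 0b00100 in binary
print(x << 1)    # shift left one place: 8 (0b01000)
print(x >> 1)    # shift right one place: 2 (0b00010)
print(40 >> 2)   # divide by 4 (shift right 2 places): 10
print(40 >> 3)   # divide by 8 (shift right 3 places): 5
```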

That's the basics of how they did what they did. I could go into a little more detail if you want, but AI isn't really my field. Suffice it to say, it's pretty difficult to frame every problem in the form of just bitwise operators. Apparently the source code for their logic is posted publicly, but from the sounds of it, they essentially just rounded numbers and expressed all math as powers of 2.
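If that reading is right, the core trick might look something like this Python sketch (my own illustration; `approx_mul` is a hypothetical helper, not anything from their repository): round the multiplier to the nearest power of two, then shift instead of multiplying.

```python
import math

def approx_mul(x: int, m: float) -> int:
    """Approximate x * m by rounding m to the nearest power of two
    and shifting. Illustrative sketch only, not the company's actual code."""
    k = round(math.log2(m))           # nearest exponent of 2
    return x << k if k >= 0 else x >> -k

print(approx_mul(100, 3.7))  # 3.7 ~ 4 = 2**2, so 100 << 2 = 400 (exact: 370)
print(approx_mul(100, 0.5))  # 0.5 = 2**-1, so 100 >> 1 = 50 (exact here)
```

You trade accuracy for speed: the multiply becomes a single shift, but the answer is only right to within the nearest power of two.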

Anyways, it's a pretty impressive optimization.

Most devices do not use this technology, and fewer still can do it in real time.



posted on Jan, 23 2017 @ 05:35 PM

originally posted by: Riffrafter
With that said, much of what the tech press calls AI is debatable. Hell, what people in the AI industry call AI is still hotly debated at times. But coming up with code and algorithms that can allow the complex maths requiring multiple iterations to run on hardware with less horsepower than what was previously required is a big step.


I wonder how scalable it is. Lots of optimizations end up being very specific. This one is automatically limited to just computer vision, but I wonder what sort of constraints are on that. The only technique I'm familiar with is image moments, and if you round those, accuracy drops off a lot. I noticed in the example video and article that they were identifying objects with large contrasts between them. I wonder if it can figure out the difference between objects with very similar shapes but different purposes, such as a shell casing vs. a soda can, or a CD vs. a donut.



posted on Jan, 23 2017 @ 06:42 PM
a reply to: ChaoticOrder

This is less important than the hype makes it seem. I have a pretty good idea what they are doing. They claim they have an architecture which is computationally cheaper to execute on conventional computers for some loss in fidelity; that's it. It is probably replacing IEEE floating point with low-precision fixed point and heavily truncating/rounding intermediate results, maybe some even to one or two bits.

It will make deployment of certain neural networks less expensive--that's it. No fundamental breakthrough. It is not limited to one specific class of problems; what does vary across problem classes is "what is the performance degradation relative to standard methods?" Vision, not so bad, because there are a very large number of inputs. Other problems use a smaller number of inputs more precisely, and there, losing precision in intermediate computations would probably hurt performance more.
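As a rough illustration of that float-to-fixed-point idea (my sketch, not their implementation; `to_fixed`/`from_fixed` are made-up names), quantizing a weight to four fractional bits looks like:

```python
def to_fixed(x: float, frac_bits: int = 4) -> int:
    """Quantize a float to fixed point with `frac_bits` fractional bits."""
    return round(x * (1 << frac_bits))

def from_fixed(q: int, frac_bits: int = 4) -> float:
    """Convert the stored integer back to its approximate float value."""
    return q / (1 << frac_bits)

w = 0.731
q = to_fixed(w)        # stored as the integer 12
print(from_fixed(q))   # 0.75 -- close to 0.731, but precision is lost
```

All the intermediate math then runs on small integers, which is exactly where the speed comes from and exactly where the fidelity goes.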

I think the future will be specialized hardware which implements a few of these and other types of tricks. Look to Intel & Nervana.



posted on Jan, 23 2017 @ 07:28 PM
a reply to: mbkennel

I got curious and read their code on GitHub. It's in Lua, which I'm not super familiar with (I've only used it for a couple projects).

Here's their code.

github.com...

For something that they're claiming can run on any device, I was surprised to see the processing being offloaded to the GPU. But maybe I misunderstood what was going on.



posted on Jan, 23 2017 @ 07:33 PM
a reply to: Vechthaan




I tend to be more optimistic about humanity, that humans adept to their environment. If that environment is positive, a fair majority (90%+) will be positive. If that environment is negative, people will tend to the negative.


It has more to do with brain chemistry and the fact that we are dopamine junkies; thinking with a "positive attitude" is rewarding.



The middle or lower classes losing their income would crumble the system.


I believe having a balance between socioeconomic classes is more of the issue, productive work vs. labor and loaning your socioeconomic status.

The answer is simple, but that doesn't mean it's right.



posted on Jan, 23 2017 @ 08:15 PM

originally posted by: mbkennel
a reply to: ChaoticOrder

This is less important than the hype makes it seem. I have a pretty good idea what they are doing. They claim they have an architecture which is computationally cheaper to execute on conventional computers for some loss in fidelity; that's it. It is probably replacing IEEE floating point with low-precision fixed point and heavily truncating/rounding intermediate results, maybe some even to one or two bits.

It will make deployment of certain neural networks less expensive--that's it. No fundamental breakthrough. It is not limited to one specific class of problems; what does vary across problem classes is "what is the performance degradation relative to standard methods?" Vision, not so bad, because there are a very large number of inputs. Other problems use a smaller number of inputs more precisely, and there, losing precision in intermediate computations would probably hurt performance more.

I think the future will be specialized hardware which implements a few of these and other types of tricks. Look to Intel & Nervana.

Something I'm familiar with in programming is how my first working code is very inefficient and bloated. The changes I make afterward whittle it down until I reach maximum efficiency or quit. Is this anything like what we see here? What I'm thinking is they used big computing to create these neural networks, or learn how they work, and now they're chipping away at bloat to make it work on smaller devices. If this is at all similar to what I'm familiar with, this doesn't seem as much a breakthrough as an incremental improvement.

Do they have to grow these vision systems on large computers before they're able to offload it to smaller devices?



posted on Jan, 23 2017 @ 10:21 PM
Modern AI, sure. Strong AI? Never. It would be reckless and dangerous.



posted on Jan, 23 2017 @ 10:27 PM
a reply to: bigfatfurrytexan

could you explain the difference?



posted on Jan, 24 2017 @ 08:17 AM

originally posted by: tikbalang
a reply to: bigfatfurrytexan

could you explain the difference?


Strong AI usually refers to the concept of a general AI that can solve problems or potentially gain sentience. Modern AI is nowhere close to that; it's extremely narrowly focused, and any appearance of intelligence is actually just the work of designers making a product look like something it's not.



posted on Jan, 24 2017 @ 08:34 AM
a reply to: Aazadan

Good primer explanation, and you're exactly right as to how this works.

One typo:

With AND you take 2 bits, and if they're the same you output a 1, if they're different a 0
11=1
00=1
01=0
10=0

AND is true only if all inputs are true, as in "A and B":
0*0=0
0*1=0
1*0=0
1*1=1
 


True AI will never use central processors. As long as we try to do so, we'll get more and more sophisticated calculators. True AI relies on the neural patterns, not on the neurons themselves, and thus will require a massive interactive environment containing a massive number of simple analog processors with adaptable connections.

TheRedneck




posted on Jan, 24 2017 @ 09:20 AM

originally posted by: TheRedneck
AND is true only if all inputs are true, as in "A and B":
0*0=0
0*1=0
1*0=0
1*1=1


Whoops, you're right. In my defense, it was a long weekend. Ended up doing a game jam which ended late Sunday (meaning I had no weekend), followed by a 10-hour day Monday, capped off with that post.



posted on Jan, 24 2017 @ 09:23 AM
a reply to: Aazadan

No defense needed, it's easy to get those bits mixed up. I only mentioned it so others wouldn't get confused.



TheRedneck



posted on Jan, 24 2017 @ 09:58 AM

originally posted by: tikbalang
a reply to: bigfatfurrytexan

could you explain the difference?


Imagine unleashing an alien God onto the world. How do we know what it would do? It's all in the constraints programmed in, and how those flow into myriad unknown possibilities.

How can we even fathom what would motivate an intelligence and processing speed that worked on that kind of scale?



posted on Jan, 24 2017 @ 04:01 PM
a reply to: bigfatfurrytexan

So modern AI is just an architecture-type structure based on already-programmed parameters?

It thinks it learns, but in fact it is being fed?



posted on Jan, 24 2017 @ 05:10 PM

originally posted by: bigfatfurrytexan

originally posted by: tikbalang
a reply to: bigfatfurrytexan

could you explain the difference?


Imagine unleashing an alien God onto the world. How do we know what it would do? It's all in the constraints programmed in, and how those flow into myriad unknown possibilities.

How can we even fathom what would motivate an intelligence and processing speed that worked on that kind of scale?


I work with and design strong AI.

There are a lot of safeguards in place, starting with air-gapping the systems that come on-line and being extremely careful as to the "knowledge" each system is given or has access to, up through stringent security regarding who has access and how that access is handled.

Strong AI is utilized by the DoD in a number of different areas. And it utilizes HW & SW that the general public does not have access to. Hell, they probably don't even know it exists beyond a general sense. I'm sure it exists in other areas of the intelligence community, but I can only speak to the DoD-based systems, as that is what I work with.

DARPA does an amazing job of incubating and shepherding these technologies from infancy through to delivery. That's their job...and they do it very well.

And I love how they describe themselves - 100 people linked together by a travel agent. The core of DARPA - i.e. those who are actually employees of DARPA - is really quite small. The bulk of their "workforce" consists of private contractors who do not work directly for them, but rather are on "assignment" such that DARPA's budget pays for their daily rates, business expenses, etc.

Hope that helps.





posted on Jan, 24 2017 @ 05:20 PM

originally posted by: tikbalang
a reply to: bigfatfurrytexan

So modern AI is just an architecture-type structure based on already-programmed parameters?

It thinks it learns, but in fact it is being fed?


Modern AI is a combination of several different algorithms; no one AI is all-knowing or all-capable, each is capable of one specific thing. For example, Siri, Cortana, etc. are actually a combination of several different systems. First you have a voice recognition system which translates audio data into words, then you have a tokenizer which breaks those words up into key terms. If you remember sentence conjugation from grade school, it's very similar to that. From there, you take that data and use a search algorithm to find results.

It looks sentient but that's only because you're not seeing that it's simply the end result of a sequence of narrowly defined actions.

To go into more detail with speech recognition, there are a couple different ways to go about it. You can use Hidden Markov Models, which are basically a whole bunch of probability functions tied together to suggest something (you're probably most familiar with these through your phone's autocomplete feature), or you can use neural networks, which in the simplest terms give your computer thousands of versions of something and tell it to match anything similar to that. For example, showing a computer 10,000 different houses and telling the computer that something similar in shape is a house. These basically involve training a computer.
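A toy version of that probability-based idea (a plain bigram Markov chain, much simpler than a real Hidden Markov Model, and purely illustrative):

```python
from collections import Counter, defaultdict

# Toy next-word predictor from bigram counts -- the same spirit as the
# probability models behind autocomplete, stripped to its bare minimum.
corpus = "the cat sat on the mat the cat ran".split()
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def suggest(word: str) -> str:
    """Return the most frequent word that followed `word` in the corpus."""
    return nxt[word].most_common(1)[0][0]

print(suggest("the"))  # 'cat' -- it followed 'the' twice, 'mat' only once
```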

Machine learning primarily uses neural networks and genetic algorithms to achieve results. Neural networks I just covered, but then there are also genetic algorithms, which basically involve trying something over and over, making small incremental improvements, and then passing on those improvements using a system which mimics evolution: passing on the stronger results, implementing mating systems, having the weak ones die out, and adding in random mutation. I'm a bit more familiar with genetic algorithms than neural networks... they're actually pretty simple to implement, but they're relatively slow to evolve, and it can sometimes be rather complex trying to frame a problem in the specific format required for the system to work. There have been some cool and completely unintuitive solutions to come out of genetic algorithms though, such as the evolved antenna en.wikipedia.org... . This basically involves a computer training itself.
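The select/mate/mutate loop described above can be sketched in a few lines of Python (a toy example maximizing a simple function, not production code):

```python
import random

random.seed(1)

# Minimal genetic algorithm maximizing f(x) = -(x - 42)**2, i.e. the
# population should evolve toward x = 42.
def fitness(x):
    return -(x - 42) ** 2

pop = [random.randint(0, 100) for _ in range(20)]
for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                     # the weak die out
    children = []
    for _ in range(10):
        a, b = random.sample(survivors, 2)   # mating: average two parents
        child = (a + b) // 2
        if random.random() < 0.2:            # occasional random mutation
            child += random.randint(-3, 3)
        children.append(child)
    pop = survivors + children

print(max(pop, key=fitness))  # converges to (or very near) 42
```

Framing a real problem as a fitness function and a breedable representation is, as the post says, usually the hard part.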

PT Barnum would have loved AI. The way it's presented to the general public is in reality not even close to how it works. In its simplest form, it's just a computer performing a couple of math operations over and over and over again, very quickly, until it arrives at an answer that an outside individual can confirm as being correct.



posted on Jan, 24 2017 @ 11:05 PM

originally posted by: mbkennel
a reply to: ChaoticOrder

This is less important than the hype makes it seem. I have a pretty good idea what they are doing. They claim they have an architecture which is computationally cheaper to execute on conventional computers for some loss in fidelity; that's it. It is probably replacing IEEE floating point with low-precision fixed point and heavily truncating/rounding intermediate results, maybe some even to one or two bits.

It will make deployment of certain neural networks less expensive--that's it. No fundamental breakthrough. It is not limited to one specific class of problems; what does vary across problem classes is "what is the performance degradation relative to standard methods?" Vision, not so bad, because there are a very large number of inputs. Other problems use a smaller number of inputs more precisely, and there, losing precision in intermediate computations would probably hurt performance more.

I think the future will be specialized hardware which implements a few of these and other types of tricks. Look to Intel & Nervana.


Great points - all.

Simply put, what they gain in speed, they lose in precision, although it's not a 1:1 correlation.

Where it will have a big impact is in cases where, as you said, a system could greatly benefit from a neural network or machine learning type approach for *some* of its computation. Key word being "some". But in an application or system that is primarily or exclusively a neural network solution, the "fuzzy math" (pun intended) employed here will cause the results to be off by enough that it's essentially an exercise in futility.

On the flip side, there are a lot of systems that would greatly benefit from having a routine and/or class that utilizes a neural network approach and where the results of that routine don't necessarily require a highly precise output. In other words, it may allow for a neural network approach in systems that otherwise couldn't even think of doing it because of the computational requirements.

I look forward to seeing where this goes, and how much further it can be improved both in terms of precision as well as in terms of lower system requirements.

I'm not sure if you work with AI, mbkennel - but if you don't, you have a terrific high-level understanding of it. It's still a hot mess to some degree even amongst those who work with it on a daily basis, but a level of agreement and understanding is starting to emerge. And you know what's causing that big "breakthrough"? People in the field are finally beginning to talk to one another on a detail level without worrying so much about protecting their IP. Such as it is...

I expect a *lot* of progress to be made in the next few years. Whether that's a good or bad thing depends on how you feel about the results of this progress.

Very interesting times ahead...





posted on Jan, 25 2017 @ 09:43 AM
a reply to: neoholographic

Yes, because that's what people require: to converse with their microwave, stove, and fridge.

No point in attempting to constrain any true AI all the same, considering its capabilities and state of existence will be several orders of magnitude above us clever monkeys.

Let's just hope the thing takes pity on its flawed creators and builds us a better zoo than TPTB did.







