
Apple joins Amazon, Facebook, Google, IBM and Microsoft in AI initiative


posted on Feb, 1 2017 @ 10:24 AM
a reply to: Aazadan

You picture a best-case Utopian society. Be nice if it happens but all things considered (like human greed and fear, and the predictable "transition" period), I seriously doubt it will play that way.

Might materialize after a period of benign neglect and the requisite mop-up - for maybe about 5 million people.

Again though, it would be nice. Might be doable if the public were allowed to make fully informed choices.



posted on Feb, 1 2017 @ 12:24 PM
a reply to: nOraKat

A deflection-distraction meme.

The elite will leave the masses to starve and die, mop up the survivors, then enjoy a life served by compliant machines.



posted on Feb, 1 2017 @ 07:19 PM
But back to my original focus. From the OP quote:



...the group will work toward developing standards and ethics around the development and implementation of AI.




I think we have choices to make. Best they be fully informed.

No?



posted on Feb, 1 2017 @ 07:27 PM
a reply to: soficrow

A Utopian society is hundreds of years off, but it's still what everything is building towards. Automation, which is ultimately all going to be done by AIs, is done to increase productivity and reduce working hours.

Also, people mostly make good choices. On the individual level a lot of people get a lot of things wrong, but crowds are mostly capable of getting it right.



posted on Feb, 1 2017 @ 07:39 PM

originally posted by: Aazadan
a reply to: soficrow

A Utopian society is hundreds of years off, but it's still what everything is building towards.



A helluva lot can, and will, happen in those hundreds of years. None of it Utopian.




Automation... is done to increase productivity and reduce working hours.



Uh huh. Humans Need Not Apply.




Also, people mostly make good choices. On the individual level a lot of people get a lot of things wrong, but crowds are mostly capable of getting it right.



So you're not a big believer in the mob mentality. Nor, obviously, in a fully informed populace.






posted on Feb, 4 2017 @ 04:59 PM
Pertinent:

www.csmonitor.com...



Poker now joins chess, Jeopardy, go, and many other games at which programs outplay people. But poker is different from all the others in one big way: players have to guess based on partial, or "imperfect" information.

...

...no matter how impressive your card skills, your power of prediction is limited by Texas Hold 'em because only certain cards are visible. An ace in your hand and an ace on the table might look like good news, unless your opponent is holding a pair of aces, or worse. No amount of computational power can answer that question.

...

“The typical approach to addressing perfect-information games [like chess] was to search through a game tree to find an optimal path,” he continues. But that’s no good in poker, because without knowing what cards are where, you can’t even figure out where on the tree you are.

Libratus is a whole new kind of machine.

Rather than try to sort through an unknowable tree, it focuses on finding a favorable move that represents a so-called “Nash equilibrium” solution...

You won’t always end up in the best possible situation, but you may be able to avoid the worst outcome.

...

Libratus plays poker in a similar way, never guaranteed to win any particular hand but likely to stay in the black over the long run.

...

People tend to bet and bluff in certain increments, and noticing those patterns helps professionals find an edge, but the computer was hard to read.

...

As Libratus was analyzing the day’s games each night, the pros did the same. While they weren’t able to find a consistent winning strategy, they suggest the experience made them better poker players. “Once you face Libratus, there's nothing worse any human could ever do to you. Every human is going to seem like a walk in the park,” said Jason Les, another one of the players.

...

The imperfect information nature of poker makes the win a huge achievement for the AI community, with far-reaching real world applications that Brown says include negotiations, auctions, and security interactions, to name a few. “In truth, most real-world scenarios involve hidden information. In the real world, not all the information is laid out neatly for all sides to see like pieces on a chessboard. There is uncertainty and deception,” he explains.

One remaining enclave of human superiority is multiplayer situations, however. Libratus prefers bilateral dealing to group negotiation. It took on each poker player one-at-a-time, and couldn't have won in an ensemble game. Still, two-sided negotiations are common in the real world, and Libratus lays the foundations for computer programs that help negotiators elevate the art of the deal to a science.


www.csmonitor.com...
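
For anyone curious what "finding a Nash equilibrium solution" actually looks like in practice, here's a toy sketch in Python using regret matching, a common building block in this kind of game-solving AI (the real Libratus is vastly more elaborate, and this toy game isn't poker). The game, function names, and numbers here are my own illustration, not anything from the Libratus team:

```python
import random

# Toy zero-sum game: rock-paper-scissors. Two regret-matching players trained
# in self-play; their *average* strategies approach the Nash equilibrium,
# which for this game is the uniform mix (1/3, 1/3, 1/3).
ACTIONS = ["rock", "paper", "scissors"]
N = len(ACTIONS)

def payoff(mine, theirs):
    """+1 if `mine` beats `theirs`, -1 if it loses, 0 on a tie."""
    if mine == theirs:
        return 0
    wins = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}
    return 1 if (mine, theirs) in wins else -1

def mix_from(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / N] * N

def train(iterations=200_000):
    regrets = [[0.0] * N, [0.0] * N]   # one regret table per player
    mix_sum = [[0.0] * N, [0.0] * N]   # running sum of each player's mix
    for _ in range(iterations):
        mixes = [mix_from(r) for r in regrets]
        picks = [random.choices(range(N), weights=m)[0] for m in mixes]
        for p in range(2):
            me, foe = picks[p], picks[1 - p]
            got = payoff(ACTIONS[me], ACTIONS[foe])
            for a in range(N):
                # Regret: how much better action `a` would have done this round.
                regrets[p][a] += payoff(ACTIONS[a], ACTIONS[foe]) - got
            mix_sum[p] = [s + x for s, x in zip(mix_sum[p], mixes[p])]
    avg = [x / sum(mix_sum[0]) for x in mix_sum[0]]
    return dict(zip(ACTIONS, avg))

print(train())  # roughly {'rock': 0.33, 'paper': 0.33, 'scissors': 0.33}
```

The point is the same one the article makes: the program isn't trying to win any particular round, it's steering toward a strategy that can't be exploited over the long run.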

When we finally do crack - or accidentally end up creating, or allowing to emerge - more generalized AI, it's likely that something like all of these different capabilities will end up rolled into it. And those capabilities are already advancing much faster than anyone thought they would.

Peace.





posted on Feb, 4 2017 @ 05:33 PM

originally posted by: soficrow
So you're not a big believer in the mob mentality. Nor, obviously, in a fully informed populace.


Who is most fit to rule? 1 person or 1 million? I think the answer lies somewhere in the middle. People (and this applies to a man on the street or a king) cannot be experts in every field; they will have no better than an average understanding of the issues, and the average isn't very high. A small group that is small enough to communicate within itself, but large enough to be cross-disciplinary and able to defer to its members who are experts in a given field, is most effective.


originally posted by: AceWombat04
Pertinent:

When we finally do crack - or accidentally end up creating, or allowing to emerge - more generalized AI, it's likely that something like all of these different capabilities will end up rolled into it. And those capabilities are already advancing much faster than anyone thought they would.


These systems aren't new. I've been building and gradually improving such an AI for a bit over a year now (for a different game though), and I'm an idiot. People more engaged with the field than me have been making them for many years. The idea is that it doesn't actually matter if you make the best play. In games like poker, which rely on making hundreds of good moves across variance, all you need to do is verify that you're making good plays. The difference between good plays and optimal plays is making more money per hand, but if you can win most of your hands it doesn't actually matter: you win regardless, it's merely a matter of speed, and that extra time taken is much less than the time needed to compute optimal plays with imperfect information.
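
That "good-enough plays still win, given enough hands" point is easy to see with a quick toy simulation. The edges below are made-up numbers purely for illustration, not measurements of any real bot (mine or anyone else's):

```python
import random

def bankroll_after(edge_per_hand, hands=100_000, stake=1.0, seed=42):
    """Toy model: each hand is a coin flip biased by `edge_per_hand`.
    Win +stake with probability 0.5 + edge, otherwise lose -stake."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(hands):
        total += stake if rng.random() < 0.5 + edge_per_hand else -stake
    return total

# Hypothetical edges: an "optimal" bot worth 2% per hand vs a merely
# "good" bot worth 1%. Over enough hands both end up well in the black;
# the optimal one just makes more per hand.
print("optimal-ish bot:", bankroll_after(0.02))
print("good-enough bot:", bankroll_after(0.01))
```

Over 100,000 hands the variance washes out, so the cheaper-to-compute strategy still finishes ahead.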



posted on Feb, 4 2017 @ 05:43 PM
Personally, I'm fairly certain this still counts as a breakthrough, and that it's a bit more of a definitive and dramatic advancement than that implies, but what do I know? I'm a layperson just reading. Feel free to disagree, as always.

Peace.



posted on Feb, 4 2017 @ 09:03 PM

originally posted by: AceWombat04
I'm fairly certain this still counts as a breakthrough ...



Oh yeah. Computers that can out-bluff humans?

Just think of the potential...



posted on Feb, 7 2017 @ 05:50 PM

originally posted by: Ohanka
This thread just makes me wish I'd done software development at university instead of history. Bah.


Nah...History is a terrific major. Wish I took more history classes back in the day. Seriously.

Hell, for my undergrad work I ended up with a double major...mostly unplanned...BS - Comp. Sci, BA - Philosophy.

Probably enjoyed my philosophy classes a lot more at the time, and they informed my everyday life much more than my work with AI did until just a few years ago. Now the AI work I do - which is primarily of the "experimental" kind (read: many failures), like neural network models and machine learning/deep learning stuff - has become a lot of fun and makes me think much more about what we think, how we think and why we think the way we do. In all areas...not just science or math. Science and math are just the foundation for the models...

A lot more questions than answers, and my sense is it will be that way for a long time to come.

So, do some reading and take some classes in it if you want/can. You might have a talent for it.

And we need all the help we can get.



posted on Feb, 7 2017 @ 07:18 PM
This can't end well... at least not for people with the mindset of those living in today's world. If AI out-thinks humans, we will just be seen as pests on the planet. I pity my grandchildren who will witness this revolution.



posted on Feb, 8 2017 @ 09:51 AM
a reply to: LogicalGraphitti

AI can already out-bluff humans. ...What do you think the implications of that are?





posted on Feb, 8 2017 @ 06:37 PM
a reply to: Riffrafter

I have to agree about Philosophy: it's not a good primary major, but the classes are very interesting. Every week I have lunch with one of my former Philosophy professors. It's interesting; he has told me on a few occasions that the best minds in his classes are always the programmers. From talking to him, I think there's a lot of overlap between how programmers lay out a problem/solution and how philosophers go about it.

History is a lot of fun too, but realistically the only thing learning history does is prepare you for 40 years of teaching it. I like the classes but I would never major in it.



posted on Feb, 8 2017 @ 07:07 PM
As I always say....what could possibly go wrong? A code of ethics is going to be developed by people who have none.



posted on Feb, 8 2017 @ 07:08 PM

originally posted by: BrianFlanders
As I always say....what could possibly go wrong? A code of ethics is going to be developed by people who have none.


What makes you think software developers have no ethics? Particularly at those companies.



posted on Feb, 8 2017 @ 07:09 PM

originally posted by: Aazadan

What makes you think software developers have no ethics? Particularly at those companies.


Experience?



posted on Feb, 9 2017 @ 08:56 AM
a reply to: BrianFlanders


As I always say....what could possibly go wrong? A code of ethics is going to be developed by people who have none.




There is nothing right now - no code. At all. As the article states, "With the absence of regulations in the field, the group will work toward developing standards and ethics around the development and implementation of AI."

Given that no one has tackled it yet, who else do you see that's capable of doing the job?



posted on Feb, 9 2017 @ 09:34 AM

originally posted by: soficrow
a reply to: LogicalGraphitti

AI can already out-bluff humans. ...What do you think the implications of that are?







I think it's pretty amazing, especially the system they built to do it. It combines 4 or 5 different AI approaches, which is ultimately the way in which a "singularity" will eventually emerge.

I'd love to play it someday...just with someone else's money.



posted on Feb, 9 2017 @ 09:23 PM
a reply to: Riffrafter

I meant, what do you think the implications are for humans? As in, human life.





