
After reading "Statistical Significance Tests for Machine Translation Evaluation" I have some questions.


posted on Jun, 14 2013 @ 05:42 AM
reply to post by BayesLike
 


But the tech I'm talking about can retrieve data at high resolution every 20 milliseconds... I thought that was slow. TTFN, good talking. Guess I need to learn calculus. Dammit.



posted on Jun, 14 2013 @ 06:10 AM
reply to post by teachtaire
 


I rarely use Monte Carlo -- it's ancient and way too slow. I prefer other methods and build surrogate models of complex non-linear systems. I do most of my work with those, as they're much, much faster than simulation of any sort, much more accurate than MC results, and extremely useful: they allow analyses in seconds which are otherwise intractable. A surrogate is typically millions of times faster than MC.
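
To make the surrogate idea concrete, here is a minimal sketch of the general pattern (a toy illustration only -- the "expensive simulator," the design points, and the polynomial surrogate are all invented; BayesLike's actual methods are unpublished):

```python
import numpy as np

# Hypothetical "expensive" simulator -- a stand-in for a complex
# non-linear system that might take minutes or hours per run.
def expensive_simulator(x):
    return np.sin(3 * x) + 0.5 * x**2

# Run the simulator at a small number of design points...
design_x = np.linspace(-2, 2, 15)
design_y = expensive_simulator(design_x)

# ...and fit a cheap surrogate (here just a degree-6 polynomial;
# in practice it could be a Gaussian process, splines, etc.).
surrogate = np.poly1d(np.polyfit(design_x, design_y, deg=6))

# Downstream analyses now query the surrogate instead of the
# simulator: a million evaluations in a fraction of a second.
queries = np.random.default_rng(0).uniform(-2, 2, 1_000_000)
print(f"mean response over the domain: {surrogate(queries).mean():.4f}")
```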

The applications are somewhat related to "manipulating" the results, but not applied to people. They could be, though.



posted on Jun, 14 2013 @ 10:47 PM
reply to post by BayesLike
 


I'm talking about using the Monte Carlo method for higher-level functions... like AI.

Which is why I was thinking of translating data as part of a neural network.
edit on 14-6-2013 by teachtaire because: (no reason given)


Here is one paper which looks at its use for speech recognition

www.lsv.uni-saarland.de...

I haven't read that entirely, but I have read

"MULTILINGUAL NAMED ENTITY EXTRACTION AND TRANSLATION FROM TEXT AND SPEECH" By Fei Huang

and mlt.sv.cmu.edu... "Measuring the Structural Importance through Rhetorical Structure Index"

people.cs.umass.edu... "Learning from One Example in Machine Vision by Sharing Probability Densities"

edit on 14-6-2013 by teachtaire because: (no reason given)



posted on Jun, 15 2013 @ 01:37 AM

Originally posted by teachtaire
reply to post by BayesLike
 


I'm talking about using the Monte Carlo method for higher-level functions... like AI.
Which is why I was thinking of translating data as part of a neural network.

Here is one paper which looks at its use for speech recognition
www.lsv.uni-saarland.de...



That isn't doing what you might be thinking, but then I'm guessing at what you might be thinking. He has to calculate a multidimensional integral rapidly to estimate the expected next state given the observations. He is using MC methods, with so-called "importance sampling," to do the integration. It's a technique I used in the mid-80s and abandoned for better methods within a year. I only mention MC when teaching and in seminars. There is a better, much faster way to proceed, which is over 100x as efficient as MC for small samples.
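
For readers unfamiliar with the term: importance sampling means drawing from a proposal density concentrated where the integrand matters, then reweighting by the density ratio. A bare-bones sketch (my own toy example on an invented tail-probability problem, not the thesis code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def norm_pdf(x, mu=0.0, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Target quantity: P(X > 4) for X ~ N(0, 1), a tail probability that
# plain Monte Carlo almost never lands on.
f = lambda x: (x > 4).astype(float)

# Plain MC: sample the target density directly; nearly all draws miss.
x = rng.standard_normal(n)
plain_mc = f(x).mean()

# Importance sampling: draw from a proposal centered where the
# integrand lives, then reweight by target pdf / proposal pdf.
y = rng.normal(4.0, 1.0, n)
weights = norm_pdf(y) / norm_pdf(y, mu=4.0)
is_estimate = (f(y) * weights).mean()

print(f"plain MC: {plain_mc:.2e}  importance sampling: {is_estimate:.2e}")
# True value is about 3.17e-05; IS gets close while plain MC is usually 0.
```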

The methodology he is employing in his paper is basically a Bayesian update of the expected current but unknown state given the observations to date. Cute, but not worth a thesis if you ask me. It's a mere application of already-known methods in use elsewhere -- essentially copying the work of others and calling a merely different application a thesis. So, (perhaps) the application to speech recognition -- or some small tweak of an existing speech application -- is novel. The technique has been around quite a while in other areas. And Bayes has been around for centuries (see Rev. Thomas Bayes). I do like the application, and it would make a nice applied conference poster paper.

FYI: BayesLike refers to Bayes Likelihood

edit on 15-6-2013 by BayesLike because: (no reason given)



posted on Jun, 15 2013 @ 04:21 AM
reply to post by BayesLike
 


Yes, I gathered the meaning of your name rather quickly! However, what "advanced" method are you referring to?


The picture I get from all of the papers on the subject suggests that the majority of the research is leading to a hybridization of Hidden Markov Models, Neural Networks, and the good old Monte Carlo method. I mean, if relative values in handwriting can be discerned to tell the difference between a set of letters a, b, c, d... I fail to see how that approach is slow or old. It is what is currently being used for civilian and DARPA applications, after all.
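
The hybrid NN/HMM idea is easy to sketch: a network supplies per-frame state posteriors, which get rescaled into emission likelihoods and chained together by the standard HMM forward recursion. A toy version (every number below is invented for illustration):

```python
import numpy as np

# Toy hybrid NN/HMM: the "NN" would output P(state | frame) for each
# observation frame; here we fake those posteriors with a fixed array.
# States might be letters {a, b}; frames might be pen-stroke features.
nn_posteriors = np.array([[0.9, 0.1],    # frame 1: probably state a
                          [0.6, 0.4],
                          [0.2, 0.8]])   # frame 3: probably state b

prior = np.array([0.5, 0.5])             # P(state), used for rescaling
trans = np.array([[0.7, 0.3],            # HMM transition matrix
                  [0.3, 0.7]])
init = np.array([0.5, 0.5])

# Hybrid trick: scaled emission likelihoods = NN posterior / state prior.
likelihoods = nn_posteriors / prior

# Standard HMM forward recursion over the scaled likelihoods.
alpha = init * likelihoods[0]
for t in range(1, len(likelihoods)):
    alpha = (alpha @ trans) * likelihoods[t]

print("relative probability of each final state:", alpha / alpha.sum())
```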

www.scribd.com...

^This discusses the reasons for creating a hybrid NN/HMM system.

www.lsv.uni-saarland.de...
^It uses a Bayesian Bootstrap Filter (a minimal sketch of that filter follows this list).

isl.anthropomatik.kit.edu...

^This paper, alternatively, deals with a system which has done away with the Bayesian Bootstrap Filter.
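
Since the Bayesian Bootstrap Filter keeps coming up: it is a particle filter whose resampling step is the "bootstrap." A minimal sketch on an invented one-dimensional state-space model (nothing here is from the linked papers):

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_particles = 50, 500

# Invented linear-Gaussian state-space model:
#   state:       x_t = 0.9 * x_{t-1} + N(0, 1)
#   observation: y_t = x_t + N(0, 0.5)
x_true, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal(0, 1.0)
    y[t] = x_true[t] + rng.normal(0, 0.5)

particles = rng.normal(0, 1, n_particles)
for t in range(1, T):
    # 1. Propagate each particle through the state equation.
    particles = 0.9 * particles + rng.normal(0, 1.0, n_particles)
    # 2. Weight each particle by the likelihood of the observation.
    w = np.exp(-0.5 * ((y[t] - particles) / 0.5) ** 2)
    w /= w.sum()
    # 3. Bootstrap step: resample particles in proportion to weight.
    particles = rng.choice(particles, size=n_particles, p=w)

print(f"true final state {x_true[-1]:.2f}, "
      f"filter estimate {particles.mean():.2f}")
```
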
edit on 15-6-2013 by teachtaire because: (no reason given)


Correct me if I'm mistaken, but couldn't these ideas also be applied to extremely advanced facial recognition, as well as market/text/misc. data?
edit on 15-6-2013 by teachtaire because: (no reason given)



posted on Jun, 15 2013 @ 06:42 AM
reply to post by teachtaire
 


I'd prefer not to get into what I do too much. I made the decision decades ago to follow the lead of another and simply never publish on the methodology. It's not Bayesian.

DARPA, to some extent, goes with the flow and tries to push it along. That can be a good thing, and it can distort development as well. The "convergence" in what gets published in quantity can be artificial and is always related to availability. Expect this to change over time. Hype in comp sci is shameful -- any technique around today has been claimed to do everything well, or so it seems. In practice, unless an expert does the modeling, it falls very short of expectations. Maybe this hype is just academics chasing dollars?

None of it really works all that well or they wouldn't be looking for better. If any of this was all that good, I wouldn't have a job. I'm not concerned about that at all.

NNs can a) capture data and replay it, and b) be used to estimate probabilities. The HMM is a viewpoint -- all that is being done (in that paper) is parameter estimation with Bayesian methods. Not a big deal. The bootstrap has also been around for a while, as has jackknifing. I don't think the one paper really did a bootstrap, but they might have in training. They probably also did simulated annealing to try to avoid converging to local optima. Typical methods.
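
For anyone curious, simulated annealing is also simple to sketch. The key move is occasionally accepting a worse solution so the search can escape local optima (the objective and cooling schedule below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Multimodal objective with several local minima; plain hill-climbing
# from a bad start tends to get stuck in the nearest valley.
def loss(x):
    return x**2 + 10 * np.sin(3 * x)

x = 4.0                      # deliberately poor starting point
temp = 5.0
for step in range(5000):
    candidate = x + rng.normal(0, 0.5)
    delta = loss(candidate) - loss(x)
    # Accept downhill moves always; uphill moves with probability
    # exp(-delta / temp), which shrinks as the temperature cools.
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        x = candidate
    temp *= 0.999            # geometric cooling schedule

print(f"annealed solution: x = {x:.3f}, loss = {loss(x):.3f}")
```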

Recognition problems are some of the thorniest around. People are good at them; we don't know why. Machine learning methods can't tell you why anything works, and the parameters can't be interpreted -- mostly because they are correlated and correct for each other. Even the signs are not certain to have meaning, much less the values. OTOH, given something like the training dataset, the NN will play back what it learned. It can't do anything else. It's like a camera photo, after a fashion.

This doesn't mean these methods aren't useful. Rather, it means that the solution will fail somewhere.



posted on Jun, 15 2013 @ 11:41 AM

Originally posted by teachtaire
Hello, I was wondering if anyone could explain how automatic machine translation could be used to forecast political events at a regional level? Are there any good books/papers that anyone could suggest?


In theory, prediction models can be generated based on chatter. Let chatter be defined as statements, both publicly and privately made, plus searches for information and articles read. The greater the amount of chatter on a specific subject, the more likely that there will be an action (outcome) in response to it. Humanity is, generally speaking, very predictable: for any problem there is usually a finite set of solutions, and the solution taken will be the one viewed as having the most personal benefit to the individual at the least cost. This would be echoed out through the mass.

Additionally, if one saw which way the winds were blowing, it could also allow for manipulation/steering towards a specific outcome. The three steps combined -- 1. information collection, 2. prediction models, and 3. steering -- are what is called game theory. Game theory is a product of the governmental think tank RAND. There is plenty of material on the net on the subject of game theory, and steps 1 and 2 could easily be automated. They are using the same thing for marketing via social media these days. If marketers are doing it, then you can bet the government has an infinitely bigger and better version of it.
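
A deliberately toy version of steps 1 and 2 (all data invented; real systems would use vastly richer features): treat chatter volume as a single feature and fit a logistic model for whether an action follows.

```python
import numpy as np

# Invented data: weekly chatter volume on a topic (x) and whether a
# related action/event followed that week (y). Step 1 (collection)
# is assumed done; this is a toy version of step 2 (prediction).
x = np.array([5, 8, 12, 20, 25, 33, 40, 55, 60, 80], dtype=float)
y = np.array([0, 0,  0,  0,  1,  0,  1,  1,  1,  1], dtype=float)

# Fit logistic regression by plain gradient descent (no libraries).
w, b, lr = 0.0, 0.0, 0.001
for _ in range(20_000):
    p = 1 / (1 + np.exp(-(w * x + b)))   # predicted P(action)
    w -= lr * ((p - y) * x).mean()
    b -= lr * (p - y).mean()

# More chatter -> higher predicted probability of an outcome.
for volume in (10, 30, 70):
    prob = 1 / (1 + np.exp(-(w * volume + b)))
    print(f"chatter={volume:>3}: P(action) = {prob:.2f}")
```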



posted on Jun, 15 2013 @ 07:32 PM
Thank you for your patience; I'm surprised more people haven't joined the conversation.

Are there any resources in particular you could suggest? As I've stated, currently I'm just reading everything I come across; my method isn't very efficient.

*But I must admit they seem a bit more up to date compared to the RAND documents?
edit on 15-6-2013 by teachtaire because: (no reason given)



posted on Jun, 16 2013 @ 06:33 AM

Originally posted by WhiteAlice
Game theory is a product of the governmental think tank RAND. There is plenty of material on the net on the subject of game theory, and steps 1 and 2 could easily be automated. They are using the same thing for marketing via social media these days. If marketers are doing it, then you can bet the government has an infinitely bigger and better version of it.


I do believe Game Theory was around a long time before RAND got involved with it. But I do 100% agree that it is a very powerful methodology provided you have good intelligence on the state spaces and the loss functions of the players! Were I in a setting where I needed to manipulate outcomes and had the time to do the analysis, Game Theory and LogLinear Models / Logistic Regression would be at the top of my list of methods. I'd also spend some time understanding the sensitivity of the outcome choice to both the states and the perceived loss functions. That would tell you where you needed more information.

When I took several courses in Decision Theory and Game Theory from Berger in grad school, I was a little surprised at the utility of random responses in the solutions. I've used that knowledge, without the deeper mathematical analysis, to keep my opponent in negotiations for major purchases from discerning my utility (loss) functions. Showing keen interest in the product, plus lack of interest at critical points, getting set to walk away and allowing yourself to be talked back, taking out a checkbook or pen to signal interest and showing second thoughts and putting it away more than once during negotiations really does throw the opponent off balance. Once they are off balance, you can usually reshape their goals somewhat.
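
The "utility of random responses" has a crisp form in small games: randomize so the opponent is indifferent between their options, which makes your loss function unreadable. A sketch for a 2x2 zero-sum game (payoffs invented for illustration):

```python
import numpy as np

# Payoff matrix for the row player in a 2x2 zero-sum game
# (a matching-pennies variant; numbers invented for illustration).
A = np.array([[ 2.0, -1.0],
              [-1.0,  1.0]])

# At a mixed equilibrium the row player randomizes so the column
# player is indifferent between columns:
#   p*A[0,0] + (1-p)*A[1,0] == p*A[0,1] + (1-p)*A[1,1]
# Solving for p gives the equilibrium mixing probability.
p = (A[1, 1] - A[1, 0]) / (A[0, 0] - A[1, 0] - A[0, 1] + A[1, 1])
value = p * A[0, 0] + (1 - p) * A[1, 0]

print(f"play row 1 with probability {p:.2f}; game value {value:.2f}")
# A predictable (pure) strategy here would be exploitable; the random
# mix is exactly what keeps an opponent from learning your loss function.
```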

An interesting TED talk on Game Theory can be found here: Bruce Bueno de Mesquita. He's had some really good success at formulating game models in international politics. I do believe he is regularly engaged by the State Department and CIA.



posted on Jun, 16 2013 @ 02:04 PM
reply to post by BayesLike
 


Game theory was around before RAND, but what RAND did with it was significant in that it basically signaled the usage of game theory within the political spectrum. If you bear with me for a moment, I can take the usage of game theory as one of the strategic analysis models used by the government one step further, though it may seem like a flight of fancy.

For decades (and most likely still currently), the educational system within the US has been identifying children of a certain high IQ and other more ambiguous features and placing them into specialized education programs. Two of the subjects taught to these children are game theory and Huizinga's Homo Ludens. The introduction to the principles of game theory and the playing man is actually one of the first things done within these programs. My eldest was in one of these programs, and one of their first projects in the 3rd grade was to create a game which presents a player with a series of options but ultimately directs them to a specified outcome -- basically, a first introduction to the general idea of game theory. I suspect that game theory is one of the core theories within the program, in addition to number theory. My youngest is in the program as well and spent the past year, 5th grade, learning number theory. I was in it myself as a kid, and I view the world as a gigantic chess board.


I know that it sounds a little odd to consider elementary school children being taught subjects like game theory, which is normally an upper-level subject, but this is one of those times where truth is truly stranger than fiction. By the time these kids reach middle/high school, it's taken a step further. Here's a handful of links to prove the veracity of what I am claiming.
books.google.com...
en.wikipedia.org...
www.pthsd.k12.nj.us...
www.forbes.com...

These programs were created via the National Defense Education Act of 1958 and their modern STEM counterparts are still considered a matter of national security in the hopes that these kids will work for governmental agencies from DARPA to the NSA and possibly the CIA. I cannot confirm the CIA portion but yes to DARPA and NSA.








edit on 16/6/13 by WhiteAlice because: (no reason given)



posted on Jun, 16 2013 @ 09:47 PM
What exactly does this have to do with what we were talking about? The links you've both provided have been rather basic.



posted on Jun, 17 2013 @ 02:52 AM
reply to post by WhiteAlice
 


I understand where you are going with this, as my daughter spent many years in gifted and talented programs, doing much of what you have indicated. In many ways, for statistics at least, the program is a real failure. I can say for sure that the public school teachers, even in the special advanced math classes she took (there were two levels of the advanced classes), did a very poor job with the probability and statistics they introduced. They didn't understand it at all and made huge errors both teaching and explaining the concepts.

We pulled her out of the middle and high school math classes and put her into college for many classes to avoid poor teaching. Her schools allowed that to happen as long as she attended a home room class. That made a huge difference IMO.

I've given a lot of thought over the years to the differences between physical scientists (plus engineers) and statisticians. There is a fundamental difference which I have come to believe cannot be taught, but which others may become more aware of: viewing the world (by default) as probabilistic. Both the physical scientists (plus engineers) and the statisticians have a default view of models and patterns. But the science/engineering types have a default cause-effect orientation instead of a probabilistic orientation. The probabilistic orientation is far richer IMO.

I would not be surprised that RAND would introduce game theory into political science -- it's a natural fit. Learning how to manipulate opponents to your benefit is a powerful skill. I'm also aware the government has run designed experiments on public sentiment, especially supply availability surpluses and restrictions, and fairly well understands how the public will respond. The average person is actively manipulated much more than they believe -- many choices are not really their own; they are just the ones left to choose among.

Businesses have become more and more involved with this scientific manipulation, starting in the 80s with point-of-sale couponing in grocery stores. As one of the more extreme examples today, Google's income stream is based on manipulating people to click on the most profitable links. Experiments on where to place which links are continuously performed and analyzed, and the link placements adjusted as the demographics change during each day. These are actually very simple experiments and can be highly automated.
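
Such continuous placement experiments really are simple to automate. A sketch of one standard approach, an epsilon-greedy bandit (the click-through rates and traffic numbers are invented; this is certainly not Google's actual system):

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented true click-through rates for three candidate link placements.
true_ctr = np.array([0.030, 0.055, 0.042])

clicks = np.zeros(3)
shows = np.zeros(3)
epsilon = 0.1                       # fraction of traffic spent exploring

for visitor in range(100_000):
    if rng.random() < epsilon:      # explore: try a random placement
        arm = rng.integers(3)
    else:                           # exploit: best placement seen so far
        arm = np.argmax(clicks / np.maximum(shows, 1))
    shows[arm] += 1
    clicks[arm] += rng.random() < true_ctr[arm]

print("estimated CTRs:", np.round(clicks / shows, 4))
print("traffic share: ", np.round(shows / shows.sum(), 3))
```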



posted on Jun, 17 2013 @ 02:57 AM

Originally posted by teachtaire
What exactly does this have to do with what we were talking about? The links you've both provided have been rather basic.


If you read the post that I was responding to, you will note that the discussion was the probability of the use of game theory in terms of statistical prediction models at the governmental level. I was adding additional confirmation of the usage of this theory, as it is an established part of the specialized education programs from which the agencies who use such things draw their employment pool. It's called connecting the dots, and it was directed to him, not you.

As far as learning more about game theory, I do believe Stanford has a free online course in the subject matter, which you can find here (though I'm not certain if it touches on the mathematical aspects of it): online.stanford.edu...
Paper on the subject of statistical game theory: www.hss.caltech.edu...

Happy?
edit on 17/6/13 by WhiteAlice because: (no reason given)



posted on Jun, 17 2013 @ 03:59 AM
reply to post by BayesLike
 


I'm not terribly impressed with the programs either. Both of my children were in it (myself as well) and it's just strange. They pegged me for engineering, and yet I aced (and loved) statistics years later, as that was how I had always seen the world. In other words, they were completely off the mark with me. My eldest is going to college early as well, for similar reasons, and my youngest, who is an excellent writer, was put into the "math" group to learn number theory.

Overall, they still introduce concepts that are atypical subjects for children, and even if it is in error, it does allow a child a lot more time to chew on an idea, so to speak, than learning it years later at the college level. Still, it's interesting to look at the curriculum and the connections. Page 7 of this article talks about Robert Meeker. Meeker was married to Mary Meeker, who was heavily involved in the program curriculum. Kind of interesting to see the fingers in the pot for it, though I think it suffers from "too many chefs" syndrome.

I agree with you on the models and patterns, though maybe it's because I am also a biologist (and an accountant, so I'm a literal bean counter) that I see the two groups as less distinct. Thinking in terms of probabilities is fairly beneficial to the physical sciences, as it lends itself to creating "what if" scenarios that may end up modifying hypotheses or even theories. It just takes a thousand or more experiments with repeat results to prove it, lol.

RAND has written a whole slew of papers on the use of game theory in a pretty wide variety of subjects over the years, ranging from military strategy to economics and politics. www.rand.org...

I agree very much with your sentiments on the level of manipulation that exists on a day-to-day basis. Outside of Google, cutting-edge marketing is doing, well, this:

Also interesting to the thread: if marketers are using and automating it, then so have other entities, probably for much longer (based on the 20-years-ahead rule): blog.eloqua.com...



posted on Jun, 17 2013 @ 05:15 AM

Originally posted by teachtaire
What exactly does this have to do with what we were talking about? The links you've both provided have been rather basic.


I do feel it's somewhat on topic, in that many if not most political settings, such as the overthrow of the government in Egypt and other Middle Eastern countries, are manipulations rather than spontaneous.
One could potentially make the prediction by detecting the manipulation.

An internet-parsing dictionary plus modeling (HMM or not) might have detected interest in, but not the occurrence of, the "occupy" demonstrations. If the modeling had been done with the knowledge that two forces were manipulating public opinion, and how that was likely to occur, the "occupy" demonstrations might have been predicted. There was clearly a game being played. And an approximate date was set months before it happened.

A similar game was played in the Middle East with the Arab Spring -- with more success.

A similar game is being played now and is likely designed to come to a critical point next summer -- during the 2014 election campaign.



posted on Jun, 17 2013 @ 01:38 PM
reply to post by BayesLike
 


Much of the unrest, including Occupy, was entirely predictable. In 2010, I made the observation that the conditions for mass civil unrest were in place, meaning demonstrations were highly likely to occur. Some of those factors, such as declining wages due to higher-than-normal unemployment, were definitely natural in that it had become an employer's market. Toss in unemployment, which was a market response to the global financial crisis in 2007/8, and there'd be a whole lot of people stretched to the breaking point. Both of these factors were basically natural market responses to the situation, and they led to the increased likelihood of civil unrest/demonstrations.

Other factors related to political bodies' interactions with the business sector, counterpointed against individual struggle repeated over and over again, wherever deference towards the business sector existed. The same thing was also occurring in Spain, with the group behind that being Real Democracy Now. Throw all these things in together and you have the sociopolitical-economic potential for unrest, which just needed a match; Adbusters provided that match in the States. History repeats itself, and that is what basically allows for some predictability in societal response to specific combinations of trigger events. Whereas I was able to see it coming in 2010, our government, with all their nifty computers, saw the possibility of it happening as early as 2007/8, because they undoubtedly had access to a whole lot more information and a greater ability to process it. Our government, to put it simply, was prepared for it.

I'm a firm believer that there are natural components to what occurs (cause and effect), and this is what creates the prediction model of the outcomes that may follow from those natural components. Everything else thereafter is either proactive or reactive manipulation. I always tell my kids that no one exists in a bubble; ergo, everyone is influenced either directly or indirectly by external forces and has the potential to be manipulated. Going back to Adbusters for a second to illustrate: both Adbusters and Ted Kaczynski, per his manifesto, had some very similar views, albeit drastically different approaches. What societal influences led to the creation of them? It's a never-ending chain, and it's best viewed as a game, because god only knows who is going to win.



posted on Jun, 19 2013 @ 01:23 AM
Hey, just being curious: should one's Steam chat randomly change to a different language, would that be a cause for concern?



posted on Jun, 20 2013 @ 04:13 AM

Originally posted by WhiteAlice

As far as learning more about game theory, I do believe Stanford has a free online course in the subject matter, which you can find here (though I'm not certain if it touches on the mathematical aspects of it): online.stanford.edu...
Paper on the subject of statistical game theory: www.hss.caltech.edu...

Happy?
edit on 17/6/13 by WhiteAlice because: (no reason given)


These both look pretty interesting; I've saved the links and will likely go through them in my spare time. I've been offline the past few days due to having almost 10 concurrent projects and working into the late evening.

Do you know if the online course is a free one?



posted on Jun, 20 2013 @ 05:06 AM

Originally posted by WhiteAlice
reply to post by BayesLike
 


I agree with you on the models and patterns, though maybe it's because I am also a biologist (and an accountant, so I'm a literal bean counter) that I see the two groups as less distinct. Thinking in terms of probabilities is fairly beneficial to the physical sciences, as it lends itself to creating "what if" scenarios that may end up modifying hypotheses or even theories. It just takes a thousand or more experiments with repeat results to prove it, lol.

RAND has written a whole slew of papers on the use of game theory in a pretty wide variety of subjects over the years, ranging from military strategy to economics and politics. www.rand.org...


I'll check out some of these papers too, more to see what new ideas have been developed over the years. The applications will be fascinating too.


Some disciplines are more accepting of randomness than others. Physical scientists and most of the engineering disciplines (plus most programmers) are not very open to randomness in an experiment -- anything less than extremely high precision is looked upon as bad technique. They don't deny randomness exists and can talk about it some, but it's not a default thought pattern. Chemical engineers are a bit different; they are generally quite open to statistical concepts, and quite a few get very deeply into applied stats. I haven't worked with a lot of biologists, but I would imagine that biologists must be quite open to randomness in general -- as are psychologists and sociologists.

Where I think a lot of the comp sci community makes a possible error is in the thought that if all the detail is captured, the model is somehow better. All the more modern machine learning tools tend toward extreme flexibility. In many cases, to have a good model, you actually want a somewhat stiff function for the response, to ensure the response model is not capturing noise. It's the support domains over which these are applied that require flexibility to isolate. Flexible capture-all is more, in my mind, a data compression method for future replay.
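
A quick toy demonstration of the stiffness point (entirely my own construction; the degrees and noise levels are arbitrary): fit the same noisy data with a somewhat stiff polynomial and a wildly flexible one, then look just outside the training range.

```python
import numpy as np

rng = np.random.default_rng(4)

# Noisy samples from a smooth underlying response.
x = np.linspace(0, 1, 15)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

stiff = np.poly1d(np.polyfit(x, y, deg=3))      # somewhat stiff
flexible = np.poly1d(np.polyfit(x, y, deg=9))   # wildly flexible

# Evaluate in the middle and slightly beyond the training range: the
# flexible fit has chased the noise and typically strays far from the
# truth outside the data; the stiff fit extrapolates far more sanely.
x_test = np.array([0.5, 1.05, 1.1])
truth = np.sin(2 * np.pi * x_test)
print("truth:   ", np.round(truth, 2))
print("stiff:   ", np.round(stiff(x_test), 2))
print("flexible:", np.round(flexible(x_test), 2))
```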

As you note elsewhere, when talking about the conditions in 2010 being right for some level of political activity, certain conditions do seem to make political activity more likely. You are looking at the domain (the support) for political activity and don't expect a prediction function to explain everything (both activity and non-activity) over the entire spectrum of possible outcomes.

To build a good model and capture the domain, it's often best to compare and contrast extremes and not look at anything in the messy middle. We can then, because of the high contrast, find the factors which are relevant and build a tentative model. It then makes some sense to go back and look at the messy middle to see what happens with the model in those cases. I can't quite express what to do with problems like this in a few words, unfortunately.

Some of the problems I get called in to solve have the same types of features: a full spectrum of responses and often thousands of possible factors. Only some are relevant. Some of the factors are specific to the individual and some are specific to the environment, then there are the interactions of these two groups of factors. A machine learning tool usually tries to deal with what someone guesses is relevant or possibly relevant and can't toss anything out -- which means it is blindly fitting noise. Also machine learning tools, because of high flexibility, are really incapable of interpolating through the messy middle if that data isn't included in their training. Somewhat stiff functions interpolate through gaps in the domains better and they extrapolate better.



posted on Jun, 20 2013 @ 05:13 AM

Originally posted by teachtaire
Hey, just being curious: should one's Steam chat randomly change to a different language, would that be a cause for concern?


It depends on the individual, I would imagine. Are they switching because they don't know a word or two in one of the languages (or it fits better) or are they trying to hide something?

Perhaps a change in intensity of focus or breadth of word choices would matter too. But this is what I was referring to. It's OK to brainstorm factors to include as long as you toss out any factors which ultimately do not matter. This kind of thing may be an important factor in one type of domain and not in another. Don't include it where it doesn't belong!
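
In code, the "toss out factors which ultimately do not matter" step is often done with penalized regression. A sketch using scikit-learn's Lasso on invented data, with 20 brainstormed factors of which only 3 actually matter (an illustration of the principle, not BayesLike's unpublished methods):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)

# 200 observations, 20 brainstormed factors -- only 3 truly matter.
X = rng.normal(size=(200, 20))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.8 * X[:, 7] + rng.normal(0, 0.5, 200)

# The L1 penalty drives the coefficients of irrelevant factors to
# exactly zero, i.e. it discards factors that do not matter.
model = Lasso(alpha=0.1).fit(X, y)
kept = np.flatnonzero(model.coef_)
print("factors kept:", kept)        # should recover roughly [0, 3, 7]
```
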
edit on 20-6-2013 by BayesLike because: (no reason given)



