posted on Nov, 30 2008 @ 10:58 PM
reply to post by constantwonder
"Its not predicting the future only reading the collective consious of the worlds web users. "
...only reading the collective consciousness of the world's web users. I'd like to take a moment to pay tribute to the significance of such an act. A computer is able to collect and classify enough information to characterize 'the collective consciousness of the world's web users'. That's a task few dared dream of 20 years ago.
I next want to illustrate the basis for reinforcement learning (mostly to re-orient myself with the notion). To predict the outcome of an event, you first need to be able to describe the domain: a defined set of states and actions, plus a method to quantify the consequence of each outcome based on a representative set of domain attributes (predicting future events from current and past observations of how events turned out). Simulating all valid action paths within the domain is the exhaustive way to do it. Repeating that simulation lets current states recursively take future states into account, until any given state converges on a probability (or value) for each action. This means that, given a representative simulation, the probability of all future states can be quantified. That holds only if the model satisfies the Markov property, which essentially means the future depends only on the current state, not on how you arrived there. This notion has been around since before I was born.
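As a concrete illustration of that tabular idea, here is a minimal value-iteration sketch over a toy two-state domain. The states, actions, transition probabilities, and rewards are all invented for illustration; nothing here comes from any actual predictive system.

```python
# Minimal value iteration over a toy Markov decision process.
# transitions[state][action] -> list of (probability, next_state, reward)
transitions = {
    "calm":  {"wait": [(0.9, "calm", 0.0), (0.1, "storm", -1.0)],
              "act":  [(0.6, "calm", 0.5), (0.4, "storm", -1.0)]},
    "storm": {"wait": [(0.5, "storm", -1.0), (0.5, "calm", 0.0)],
              "act":  [(0.8, "calm", 0.0), (0.2, "storm", -1.0)]},
}
gamma = 0.9  # discount factor: how heavily future states weigh on the current one

values = {s: 0.0 for s in transitions}
for _ in range(100):  # repeated sweeps converge toward a fixed point
    values = {
        s: max(
            sum(p * (r + gamma * values[s2]) for p, s2, r in outcomes)
            for outcomes in transitions[s].values()
        )
        for s in transitions
    }

# Once values converge, each state yields a best action (and, with a
# normalization step, a probability over actions).
for s in transitions:
    best = max(transitions[s], key=lambda a: sum(
        p * (r + gamma * values[s2]) for p, s2, r in transitions[s][a]))
    print(s, round(values[s], 3), best)
```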
Further advances in the '90s alleviated this dependency, with limited success, through function approximation, most notably neural networks. But applying it to a question like "what's the outcome of this war?" still requires unrealistic computational means.
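To make the function-approximation idea concrete, here is a rough sketch of the shift: instead of a table holding one entry per state, a small network maps state features to a value estimate. The network size, training data, and target function below are arbitrary assumptions made purely for the example.

```python
# A small one-hidden-layer network fit to a stand-in "true" value function.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))        # 200 states, 3 features each
y = np.sin(X.sum(axis=1, keepdims=True))     # stand-in value function to learn

W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(2000):
    h = np.tanh(X @ W1 + b1)                 # hidden layer
    pred = h @ W2 + b2                       # value estimate per state
    err = pred - y                           # squared-error gradient signal
    # backpropagate the error through both layers
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("mean squared error:", float((err ** 2).mean()))
```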
My new favorite advance in AI (which I admittedly haven't tried out) is the integration of chaos theory and random fractal theory into the underlying statistical models described above, to mitigate that prohibitive computational cost. This approach has shown promising results in fields previously thought impossible to quantify accurately; weather prediction and sea-clutter characterization are two examples.
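I can't speak to the exact models used in those fields, but one standard tool from the random-fractal side is a rescaled-range (R/S) estimate of the Hurst exponent, which measures whether a time series carries persistent structure or behaves like uncorrelated noise. The sketch below applies it to synthetic white noise, where the estimate should come out near 0.5; the input and window sizes are illustrative choices.

```python
# Rescaled-range (R/S) estimate of the Hurst exponent of a time series.
import numpy as np

def rs_hurst(series, window_sizes):
    """Estimate the Hurst exponent by regressing log(R/S) on log(window size)."""
    points = []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(series) - n + 1, n):
            chunk = series[start:start + n]
            dev = np.cumsum(chunk - chunk.mean())  # cumulative deviation from the mean
            r = dev.max() - dev.min()              # range of that deviation
            s = chunk.std()                        # scale of the window
            if s > 0:
                rs_values.append(r / s)
        points.append((np.log(n), np.log(np.mean(rs_values))))
    x, y = zip(*points)
    slope, _ = np.polyfit(x, y, 1)                 # slope approximates the Hurst exponent
    return slope

rng = np.random.default_rng(1)
noise = rng.normal(size=4096)                      # uncorrelated noise: expect H near 0.5
print("estimated H:", round(rs_hurst(noise, [16, 32, 64, 128, 256]), 2))
```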
So perhaps I'm a bit of an AI enthusiast. But consider: 20 years ago, automating the compilation of the 'collective consciousness' of a majority was at best a dream; 15 years ago we laughed at the notion of a computer rivaling an expert. And now, after 15 years of research, the general opinion seems to be skepticism.
I'd like to encourage the idea that the mathematical elements are all sound, and that either the extension or the reintegration of these elements may very well be sound too, given a proper representation of the domain's attributes. Even if Bruce simply associates probabilities to determine the maximum likelihood, it is still a maximum likelihood over what would seem to be a prohibitive number of variables and co-dependencies.
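For what "associating probabilities to determine the maximum likelihood" can look like in miniature, here is a toy naive-Bayes sketch over word counts. The corpus, labels, and words are invented; a real system would face vastly more variables and co-dependencies, which is exactly the point.

```python
# Toy maximum-likelihood classification: naive Bayes over word counts.
from collections import Counter
from math import log

training = [
    ("unrest shortage conflict", "negative"),
    ("recovery growth accord", "positive"),
    ("conflict shortage panic", "negative"),
    ("growth calm recovery", "positive"),
]

word_counts = {}          # label -> Counter of words seen under that label
label_counts = Counter()  # label -> number of training examples
for text, label in training:
    label_counts[label] += 1
    word_counts.setdefault(label, Counter()).update(text.split())

def most_likely(text):
    vocab = {w for c in word_counts.values() for w in c}
    scores = {}
    for label, counts in word_counts.items():
        total = sum(counts.values())
        score = log(label_counts[label] / sum(label_counts.values()))
        for w in text.split():
            # add-one smoothing so unseen words don't zero out the product
            score += log((counts[w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(most_likely("shortage and conflict ahead"))  # -> negative
```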
I'm sure I won't be back to check on this post, but I'm interested in any follow-ups. If you'd like to reach me, my casual email address is
[email protected]
Title the email 'AI discussion'.
polishWan