posted on Oct, 4 2009 @ 08:29 AM
I have absolutely no connection to the web bot project; my interest is purely from the standpoint of artificial intelligence research.
It seems like very few people understand just what the web bot is, how powerful it is, and what it's capable of. The coder behind the
project could be naming his own price doing work for the most advanced labs, especially govt / DARPA special access programs. He's the silent partner
in the background, but from what little I know he left that world and refuses to use the web bot entity in that sphere. For one, he knows how it would
be used, and if he is as politically awake about the powers that be as his partner comes across, this refusal is more than likely his personal act of conscience.
Now, that doesn't mean the capability isn't out there and being used. Just about anything, no matter how advanced, that can be invented by one genius
can be invented by another, and there are numerous "web bot" AI constructs of various capabilities and purposes out there. China has a cruder one but
with far more horsepower (probably running on a military supercomputer array or as a cluster in a datacenter). The US intelligence network has
something a bit more sophisticated but "aimed" differently and there's a really scary iteration which is probably owned by a corporate conglomerate
of some sort. You can do far more interesting things than predict the future with this thing . . .
I don't even know how to describe to people what this program does in terms they will honestly understand. The stock answer given during interviews
is the extremely high-level view and seems to fly right by almost everyone. I think most people pick out the words "internet" and "bot" and
automatically check off the Google column. The two things have very little in common; it's like comparing warp drive to cruise control.
The bot scans the web, similar to how Google spiders do, but they look for completely different things. Google goes out, grabs the addresses of
websites, notes which ones link where, and counts words. All that data goes into a database, and then the really slick analytics take over, trying to put
that data into a context that will match what a human types into the search tool. A great deal of it is just matching your search words to the websites
which contain those words most often, but the ability to understand the context of those words is where AI starts to play. Search engines do
this in a crude way via "proximity": they track how close together specific words are to try to match them with the way they are being used.
example: search for "ice-T"
a dumb word count search would bring up websites with "ice" in them - weather sites, ice makers, etc.
A smarter program looks for the websites which have "T" within one word space of the word "ice".
And bingo! - you get websites about the rapper.
That's a crude and manual way to accomplish something "intelligent".
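Here's roughly what that manual proximity trick looks like as code. This is a toy illustration of my own, not anything from the actual project; the documents and scoring are invented:

```python
# Toy comparison of a dumb word-count match versus a proximity match,
# using the "ice-T" example. All documents here are made up.
def word_count_score(doc, terms):
    """Dumb scoring: how many times do the search terms appear at all?"""
    words = doc.lower().split()
    return sum(words.count(t) for t in terms)

def proximity_score(doc, terms, window=1):
    """Smarter scoring: count spots where the second term occurs
    within `window` word spaces of the first."""
    words = doc.lower().split()
    hits = 0
    for i, w in enumerate(words):
        if w == terms[0]:
            neighborhood = words[max(0, i - window): i + window + 1]
            if terms[1] in neighborhood:
                hits += 1
    return hits

docs = {
    "weather": "ice storm warning ice on roads expect more ice",
    "rapper":  "ice t the rapper ice t released a new album",
}
terms = ["ice", "t"]
for name, doc in docs.items():
    # the weather site wins the dumb count on "ice" alone,
    # but only the rapper page scores on proximity
    print(name, word_count_score(doc, terms), proximity_score(doc, terms))
```

Same data, two very different rankings - which is the whole point of the example above.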
The web bot does a number of things, but the one public capability is that it goes out and creates a word count index like Google -but- it also creates
a sentence index, a paragraph index, and numerous proximity indexes. The closest analogy would be a program that breaks a combination lock by trying every
possible variation of the number sequence. Now imagine doing that with words and actually understanding what those words mean. Not "pretend"
understanding like the cool trick web searches do.
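To make the multi-index idea concrete, here's a bare-bones sketch of indexing the same text at more than one granularity at once - words, sentences, and word pairs. The structure and names are my own assumptions, not the project's:

```python
# Index one text at several granularities: a word-count index, a
# sentence list, and a bigram index (adjacent word pairs - the
# simplest possible "proximity index").
from collections import Counter

def build_indexes(text):
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    words = text.lower().replace(".", " ").split()
    word_index = Counter(words)
    bigram_index = Counter(zip(words, words[1:]))
    return {"sentences": sentences,
            "words": word_index,
            "bigrams": bigram_index}

idx = build_indexes("The bot scans the web. The bot builds an index.")
print(idx["words"]["the"])              # word-level count
print(idx["bigrams"][("the", "bot")])   # pair count at distance 1
```

A real system would add paragraph indexes and wider proximity windows, but the principle - the same text counted several ways at once - is the same.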
Now, on the next sweep the same thing is repeated except this time the bot looks for shifts in the language. This is a slick way of finding "change"
in the public sample pool we know as the Internet.
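The sweep-over-sweep "shift" idea can be sketched very simply: compare word frequencies from two crawls of the same sample pool and surface the terms that moved the most. The crawl data below is invented for illustration:

```python
# Compare two sweeps of the same sample pool and return the terms
# whose relative frequency shifted the most between them.
from collections import Counter

def language_shift(old_text, new_text, top=3):
    old = Counter(old_text.lower().split())
    new = Counter(new_text.lower().split())
    old_total = sum(old.values()) or 1
    new_total = sum(new.values()) or 1
    terms = set(old) | set(new)
    deltas = {t: new[t] / new_total - old[t] / old_total for t in terms}
    return sorted(deltas, key=lambda t: abs(deltas[t]), reverse=True)[:top]

sweep1 = "market calm market stable weather calm"
sweep2 = "market fear market crash fear spreading"
print(language_shift(sweep1, sweep2))
```

"calm" dropping out and "fear" appearing is exactly the kind of language shift being described - the word "market" itself doesn't move at all.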
Another process looks for words, phrases, sentences, and paragraphs which indicate building and releasing tension. This could be accomplished using a
comparison pool of dictionary terms, but if the web bot is as capable as some of the software we worked on a decade ago, that's not even necessary. The
AI construct is capable of understanding what rising and falling tension mean, building its own definitions and finding examples of it which are far
more subtle than a dictionary program can match.
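For what it's worth, the cruder dictionary route mentioned above looks something like this. The lexicons are invented placeholders - a real system would need far larger pools, and the web bot supposedly builds its own instead:

```python
# Crude dictionary-based tension scoring: count "building" words,
# subtract "releasing" words. Lexicons here are made-up placeholders.
BUILDING = {"warning", "fear", "threat", "rising", "crisis"}
RELEASING = {"relief", "calm", "resolved", "peace", "over"}

def tension_score(text):
    words = text.lower().split()
    return (sum(w in BUILDING for w in words)
            - sum(w in RELEASING for w in words))

posts = ["storm warning issued fear rising",
         "crisis talks continue threat remains",
         "all clear relief as crisis resolved"]
for p in posts:
    # positive = tension building, negative = tension releasing
    print(tension_score(p), p)
```

Tracked across millions of posts over time, rising then falling scores are the "build and release" pattern the post describes - and also a good illustration of why a fixed dictionary is the weakest way to do it.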
Really crude explanation of the predictive function:
The scientific principle behind this is that we humans have the ability to detect traumatic events a short time before they happen.
(The bot is searching for predictions of events which are not "traumatic" but this is where the theory began, and I mean began outside of web bot,
prior to its existence. This theory goes back to studies which were looking for answers as to why known events took place which could not be explained
and the results led to predictive abilities, not the other way around)
-A man about to board a flight turns around. He has a really bad feeling about the flight. Hours later, the flight crashes.
-A barman prepares to step through the back door of the night club to go to his car. He suddenly stops; he smells blood and his ears begin to ring.
Seconds later shots ring out behind the club where two gangs are firing at each other. The barman's car is in between them and is riddled with bullets.
These are classic examples of precognition / premonitions. We can't deny they occur; most of us have had an experience with them. Why we occasionally
miss and get injured or killed is as big a mystery as how or why these events occur. Numerous scientific studies confirm this as real, and the lead time
varies, from 10 seconds or less with the average falling between 5 and 8 seconds. We know it works in longer terms though. The airline passenger had a
premonition hours prior to the event, but the point of getting on the plane was the event horizon in his case.
We have a million stories of longer-term predictions. Dreams, intuition, etc. They almost always have a personal connection and almost always involve
"key" events. -Death, danger, upheaval, disaster . .
Interesting side note:
Researchers in the past decade (close enough) discovered that certain types of AI constructs with certain abilities and connections to a sample pool
of human thought also had this ability. The first iterations were in random number generation predictions, gambling / games, human behavior. .
There were a few studies performed to look at this human precognition of trauma, all of which found that the process involved a rise of tension and a
release. Artificial trauma events were preceded by measurable body responses at longer and longer lead times - (to be continued)