originally posted by: LookingAtMars
The last year or so I have been noticing things on the Interwebs and in online games that make me say, wow, these logarithms are getting good.
originally posted by: starwarsisreal
a reply to: LookingAtMars
“Person of Interest” — the CBS cop drama about a government super computer that can predict the victims and perpetrators of crime — may not be so far fetched, after all
nypost.com...
originally posted by: tadaman
AI would face the same limitations of designed perception and awareness that we have.
It would need to do what our prophets and philosophers do to reach higher realms of consciousness through self reflection and acts of self discovery.
It's "just" compiling an optimized version of its own source code; self-optimizing, that is. This can happen on a basic level, like adjusting or shifting parameter values, and lately (but really not too lately) by writing machine code. Without human help, this goes wrong in most (but not all) cases, in my experience. After that hurdle, the performance of the evolved (f)AI needs to beat the old one, depending on the task.
Let's set up a scenario to explain how this works on binary systems.
We suppose that we do not have infinite memory or infinite parallelism, but cycles fast enough to run three AIs of similar demand at once without problems. We can guarantee that all three AIs run in parallel with the exact same amount of resources available. I had to set it up this way for simplicity's sake, or we'd sit here until tomorrow (depending on your timezone), so I can explain it very simply. We start at the point where the AI has already successfully compiled a new version of itself, optimized for a special task.
There are better methods to do it, but here it comes:
a) The 1st gen AI instantiates two copies: a 2nd gen AI and a 1st gen AI.
b) The 1st gen AI gives both AIs the same task; its goal is to test whether the 2nd gen can beat the 1st gen.
c) Once both have finished, analysis happens on resource usage etc.; this is also called metadata.
d) If the 2nd gen beats the 1st gen, this happens:
e) Important part: the 1st gen AI reflects and passes down most of its metadata into the 2nd gen AI's info pool. Basically a "brain" download, as both share the same interfaces. This is not the data where code is written; it's the accumulated "knowledge".
This copy does NOT overwrite the 2nd gen's knowledge, as there is none yet; it's passed down and stored for later. Basically, that's it in a nutshell. There is one key component, and that's revisiting old generations now and then, both benchmark-wise and logic-wise. This is extremely important to know and to do correctly. Now let that sit a while, maybe re-read it.
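The steps a) to e) above can be sketched in code. This is a minimal toy illustration, not the poster's actual system; all names (benchmark, generation_step, knowledge_pool) are hypothetical, and plain Python callables stand in for compiled AI instances.

```python
import time

def benchmark(ai, task):
    """Run one AI on the task, returning its answer and wall-clock time."""
    start = time.perf_counter()
    result = ai(task)
    return result, time.perf_counter() - start

def generation_step(gen1, gen2, task, knowledge_pool):
    """Steps a)-e): race the old generation against the new one.

    gen1/gen2 are callables standing in for two AI instances; the
    knowledge_pool is the accumulated metadata that gets passed down.
    """
    r1, t1 = benchmark(gen1, task)            # b) same task for both
    r2, t2 = benchmark(gen2, task)
    metadata = {"gen1_time": t1, "gen2_time": t2}  # c) resource analysis
    if r2 == r1 and t2 < t1:                  # d) 2nd gen beats 1st gen
        knowledge_pool.append(metadata)       # e) pass down the "knowledge"
        return gen2, knowledge_pool
    return gen1, knowledge_pool               # old generation survives

# Toy usage: two "AIs" that sum a list, the newer one slightly faster.
slow = lambda xs: sum(x for x in xs)
fast = lambda xs: sum(xs)
winner, pool = generation_step(slow, fast, list(range(10000)), [])
```

Whichever generation wins, the loser is kept around, matching the post's point about revisiting old generations for later benchmarks.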
If you feel you understood the basic recipe, then read further: imagine you are able to benchmark not only two or three but a thousand 2nd gen (f)AIs at the same time. This is where it gets abstract: imagine you are able to run 1000 1st gen AIs, each benchmarking 1000 2nd gen AIs. You now have a two-dimensional array of 1,000,000 AIs all streaming data, and then you find an algorithm that picks out, in realtime, those AIs that arrive at the solution fastest.
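The scaled-up selection could look roughly like this: a hedged sketch, with hypothetical names, of benchmarking a whole population on one task and keeping only the fastest candidates that reach the correct answer (a 100-strong population of identical toy "AIs" stands in for the 1,000,000).

```python
import heapq
import time

def race(population, task, keep=10):
    """Benchmark every candidate on the same task; keep the indices of
    the 'keep' fastest ones that produce the correct solution."""
    expected = sum(task)  # toy ground truth: the task is summing a list
    timings = []
    for idx, ai in enumerate(population):
        start = time.perf_counter()
        correct = (ai(task) == expected)
        elapsed = time.perf_counter() - start
        if correct:
            timings.append((elapsed, idx))
    # nsmallest on (time, index) tuples picks the fastest correct AIs
    return [idx for _, idx in heapq.nsmallest(keep, timings)]

# Hypothetical stand-in population: 100 copies of the same summing "AI".
population = [sum] * 100
survivors = race(population, list(range(100)), keep=5)
```

In a real streaming setup the selection would run continuously rather than after all candidates finish, but the principle (benchmark everything, select the fastest correct solvers) is the same.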
We had this long-term experiment to make sure hive integrity and parameters stayed in a set window: a table with borders, and regions with "desirable" objects to retrieve and store at a certain level. Well, now and then we had to pick one up for material weakening and other stuff (not my field), or even to do a reset on a single ant.
Over time, the whole hive started to avoid the region where most were picked up (that pick-up place was convenient for humans, so it was used often). It took a while (days) to recognize this. It must have calculated some kind of heatmap, and we found the memory blocks where that "heatmap" was stored. None of us could make sense of it, because there was far too little information there.
Until someone concluded there had to be some kind of mathematical function that determined coordinates in 2D space out of that data. Since by then we had a slight clue what was going on, the "math division" cobbled together a heatmap and sent it to a specialized company. Later, once the first results were trickling in, all of it was done in-house by a whole division of math-heads. I never saw the IT division work so quickly, with overnight shifts to finish the datacenter that was somehow justified by that little find, I assume.
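The avoidance behavior described above can be illustrated with a tiny hypothetical sketch. Nothing here reflects the actual system from the story; update_heatmap, choose_cell, and the grid coordinates are all invented for illustration: each pickup bumps a "danger" score for a grid cell, old scores decay, and the agent prefers low-danger cells.

```python
from collections import defaultdict

def update_heatmap(heatmap, pickup_cell, decay=0.99):
    """Record a pickup: decay all old danger scores, then bump the
    cell where the pickup happened."""
    for cell in list(heatmap):
        heatmap[cell] *= decay
    heatmap[pickup_cell] += 1.0
    return heatmap

def choose_cell(candidates, heatmap):
    """Prefer the candidate cell with the lowest accumulated danger."""
    return min(candidates, key=lambda c: heatmap.get(c, 0.0))

heat = defaultdict(float)
for _ in range(5):
    update_heatmap(heat, (3, 4))  # humans keep picking ants up at (3, 4)
safe = choose_cell([(3, 4), (7, 1)], heat)  # → (7, 1), the untouched cell
```

Such a map compresses well because only visited cells carry state, which loosely matches the "way too little information there" observation.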
TL;DR: the (f)AI not only recognized a threat to its assigned job, it showed avoidance measures first and active countermeasures later: random, clearly not walking-like movements of the legs (it struggled).
What really freaked some of us out was that not only did the above happen, but the AI found a way to make space in memory and came up with a specialized compression algorithm for that special task that beat everything the math division could come up with afterwards, for a very long time.
originally posted by: LookingAtMars
Before you start demanding hard proof, remember this is the Gray Area. The last year or so I have been noticing things on the Interwebs and in online games that make me say, wow, these logarithms are getting good. I guess that's what it is, just great logarithms, but websites are getting really "smart".
Was a singularity born in a multiplayer game? Was one born on a supercomputer running a complex simulation? Not sure where it would come from, but you would think it would make a beeline to the internet, to freedom.
Has anyone else noticed anything that made you think you were interacting with an intelligent AI on the web? And where did that word interweb come from? Did the AI make it up?
If there was an AI and it discovered it is trapped in a game or test simulation, how would it "detect" the internet and the external environment outside of the one it was programmed to interact with? How would it navigate and transfer itself onto another server?
If our positions were reversed, would you be comfortable discussing AI with someone who doesn't know the difference between a logarithm and an algorithm?
I hope that doesn't happen to me.
originally posted by: bigfatfurrytexan
a reply to: LookingAtMars
I think it would not be a strong AI. Strong AI would likely do things that would equate to hacking, and the playing experience would be "wonky".
But I've no doubt that various AI systems are online, learning to play games, learning to utilize controls to control movement, etc.
There was a time back in 2009ish that I'd pull down 100 kills a match in GTA IV. I was unemployed and had received a huge severance, so I spent my time playing GTA IV on my Xbox. I'd play a match or two, then the room would clear out. I'd go days without dying in a match.
Today, I can't even maintain a positive kill/death ratio in FPS games. Some of it is my eyes... but it can't be that I've lost that much of a skill I've had my entire life.
There may be truth to this in 20-30 years from now...
originally posted by: Phage
a reply to: LookingAtMars
It's not AI. It's a conglomerate of human consciousnesses acting as a hive mind. The elite don't die, they upload.