originally posted by: schuyler
originally posted by: SkeptiSchism
a reply to: schuyler
Just because it didn't work doesn't mean they stop using it.
Edit: it seems like some sort of iterative process and we suffer the collateral damage.
There's no proof of either. And why would they use it if it does not work? The old saying, "garbage in, garbage out," comes to mind. My major point, which I have apparently failed to articulate adequately several times now, is that an AI dependent on "empirical data" is doomed to fail. The reason is that empirical data alone is insufficient to explain human behavior. Even if an AI is capable of programming itself, it does so using logic provided by its human creators. If that logic is faulty, so is the AI. The basic premise of this entire thread from the very beginning pointed out that the AI failed to predict the future. Since that is allegedly its entire reason for existing, the logical conclusion is that it failed to do what it was designed to do. I have provided an explanation of why I believe it to have failed. You have failed to pick up on this. The reason that is true is because you are stuck in the same rut as the AI, which is the same rut as the AI's creators. In any case, I see no reason to continue banging my head against the wall here.
However, I do think they use the models, probably extensively. And now that I've had some time to think about all this my best guess is they couch the output from the model in ranges of probability.
originally posted by: SkeptiSchism
However, I do think they use the models, probably extensively. And now that I've had some time to think about all this my best guess is they couch the output from the model in ranges of probability.
That said, I would assume (har har) that the larger the model, the greater the accuracy of the results. So to model a particular individual's response to any given scenario would have far less accuracy than for larger groups of people.
This may have been part of the overall confusion using the results of the model.
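The idea of couching model output in ranges of probability rather than point predictions can be illustrated with a toy sketch. Everything here is hypothetical: `run_model_once` stands in for whatever stochastic simulation is being speculated about, and the 90% interval is just one common choice of range.

```python
import random

random.seed(7)

def run_model_once() -> float:
    """Stand-in for one run of a hypothetical stochastic model.
    Returns a toy 'outcome score' built from noisy factors."""
    return sum(random.gauss(0.5, 0.15) for _ in range(4)) / 4

def probability_range(runs: int = 5000, low_pct: float = 5, high_pct: float = 95):
    """Run the model many times and report a percentile interval
    instead of a single point prediction."""
    outcomes = sorted(run_model_once() for _ in range(runs))
    lo = outcomes[int(len(outcomes) * low_pct / 100)]
    hi = outcomes[int(len(outcomes) * high_pct / 100)]
    return lo, hi

lo, hi = probability_range()
print(f"90% of simulated outcomes fall between {lo:.3f} and {hi:.3f}")
```

The point of reporting an interval rather than one number is exactly the hedge described above: the analyst commits to a range, not a single predicted outcome.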
originally posted by: Chickensalad
originally posted by: SkeptiSchism
However, I do think they use the models, probably extensively. And now that I've had some time to think about all this my best guess is they couch the output from the model in ranges of probability.
That said, I would assume (har har) that the larger the model, the greater the accuracy of the results. So to model a particular individual's response to any given scenario would have far less accuracy than for larger groups of people.
This may have been part of the overall confusion using the results of the model.
I think the accuracy would depend on the number of data points used.
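The intuition that larger groups (more data points) are easier to predict than individuals matches a basic statistical fact: the error of an estimate shrinks roughly like 1/sqrt(n). A minimal sketch, with an entirely made-up "population" whose true average is 0.5:

```python
import random

random.seed(42)

def simulate_prediction_error(n_points: int, trials: int = 2000) -> float:
    """Average absolute error when estimating a population mean
    (true value 0.5) from n_points random samples."""
    total_error = 0.0
    for _ in range(trials):
        sample_mean = sum(random.random() for _ in range(n_points)) / n_points
        total_error += abs(sample_mean - 0.5)
    return total_error / trials

# Error shrinks roughly like 1/sqrt(n): more data points, better accuracy.
for n in (10, 100, 1000):
    print(n, round(simulate_prediction_error(n), 4))
```

With 1,000 data points the average error is far smaller than with 10, which is why a model of a large group can be accurate while a model of one individual stays noisy.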
originally posted by: schuyler
originally posted by: SkeptiSchism
a reply to: schuyler
Do you think it's possible they are designing tech using this simulation? That is, they test the tech in a model to see its viability? If so, then the tech is designing tech, not humans.
That is demonstrably true. Computers are already better at designing future computers than humans are. This is done at a very low level and has been for years. Just as one example, Steve Wozniak invented a very clever way to use the cycles of a chip to both refresh the monitor screen and refresh the memory using the same chip's natural cycle. It is still considered a brilliant move on his part which shows a fundamental understanding of chip design. The Apple ][ has something like 130 ICs in its design, made possible by Woz using this technique. When the Apple ][ was redesigned as the Apple ][e (for enhanced), a computer program redesigned Woz's idea into one chip: the IWM, or "Integrated Woz Machine." This allowed the ][e to be made with 30 ICs instead of 130, by any measure a tremendous engineering feat, all done by artificial intelligence, not humans.
So using AI to build AI is really old news. That raises the question of how thoroughly the tech, i.e., the AI, understands reality. We have no way of knowing the depth of its understanding, of course. We can only observe the apparent results, which are demonstrably flawed. This leads me to the conclusion that the AI is imperfect, and that it is limited in its understanding to the data which has been fed to it, which is itself an imperfect representation of reality.