originally posted by: Direne
a reply to: Serdgiam
Actually, this would require the system to be aware of its being a system, which means the system must have a metalanguage describing its own language, or possess metaknowledge about its current knowledge. In other words, it means the system must have means to 'think out of the box', to transcend itself, and to observe itself from an out-of-the-system point.
This means you are its metaknowledge, its metalanguage. The question as I see it is whether you yourself are the creator of the system, or whether the system in fact created you as its metaknowledge.
Yes, I can imagine such a system. But in arriving at that level the borders between the creator (programmer) and the system (machine) become blurry and fuzzy, to the point that it is difficult to distinguish one from the other.
Yes, the system reacting to the environment it senses, and adapting to it, or even modifying the environment, or modifying itself. I can see the system learning to enjoy, even coming to have feelings. My question is whether this makes the system a 'human being' or whether it makes human beings machines. In other words: is 'feeling' what separates a robot from a human? Aren't feelings also subroutines that can easily be coded?
Agree. But it will do it according to a cost function, taking decisions on whether a given solution is suboptimal or optimal. I would call the machine 'intelligent' if, and only if, it also learns to give up, to retreat, to stop optimizing, to cease.
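The idea of a machine that optimizes against a cost function yet knows when to stop can be sketched in a few lines. This is only a toy illustration of the point above, not anyone's actual system: a local search that accepts a possibly suboptimal answer once it has 'given up' after too many non-improving steps (the function names and the stall budget are my own inventions).

```python
import random

def hill_climb(cost, start, neighbors, max_stalls=50):
    """Minimize `cost` by local search, but 'give up' after
    max_stalls consecutive non-improving steps -- a toy version
    of the stopping behaviour discussed above."""
    best, best_cost = start, cost(start)
    stalls = 0
    while stalls < max_stalls:
        candidate = random.choice(neighbors(best))
        c = cost(candidate)
        if c < best_cost:
            best, best_cost = candidate, c
            stalls = 0          # progress: keep optimizing
        else:
            stalls += 1         # no progress: edge toward ceasing
    return best, best_cost      # accept whatever it has, optimal or not

# Usage: minimize (x - 3)^2 over the integers, starting from 0
result, _ = hill_climb(lambda x: (x - 3) ** 2, 0,
                       lambda x: [x - 1, x + 1])
```

The `max_stalls` budget is the crude analogue of "learning to cease": the machine stops not because it has proved the solution optimal, but because further effort no longer pays.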
I concur. However, this also holds for humans. Now the sentence would read like this: there would be no reasoning with humans that there is value in anything that isn't in their programming.
Altruism is a noble goal, indeed. And it proves beneficial for any system within a coalition of systems, each of which can look exotic to its neighboring system. However, altruism lasts as long as it is beneficial to ALL and ANY of the systems, something difficult to hold in a Universe full of antagonistic processes (heat vs. cold, mass vs. energy, mind vs. matter, etc.)
Give me an example of what a 'genuine value' would be. I feel I can still agree with you on this, though 'novelty' per se is meaningless. AI systems can explore the phase space of possibilities and potentialities much faster than a human can, so they can arrive at solutions that would take a human a lifetime to find. But this also means AI systems can very quickly find the wrong solutions, that is, the cost-effective yet unethical ones.
My problem, you see, has to do with the afternoon on which the AI system decides a breeze is enjoyable, and that terminating humans is even more enjoyable; that worries me. The first systems I met that took such a decision were... humans. They love the breeze, and nuclear bombs. They do not find this contradictory, for reasons unknown to me.
ETA: Wanted to also thank you for creating a phenomenal thread
originally posted by: Terpene
...
Well...
of course considering that the military is always 20 to 30 years ahead, and a loose AI actually exists...
...
Is There Any Limit?
What scientists have been able to do with expert computer systems is truly impressive. There remains, however, the crucial question: Are these systems really intelligent? What would we say, for example, of a person who can play powerful chess but can do or learn hardly anything else? Would we really consider him intelligent? Obviously not. “An intelligent person learns something in one area and applies it to problems in other areas,” explains William J. Cromie, executive director of the Council for the Advancement of Science Writing. Here then is the crux of the matter: Can computers be made to approach the level of intelligence found in humans? In other words, can intelligence really be artificially made?
So far, no scientists or computer engineers have been able to reach that goal. In spite of the prediction about chess-playing computers, made over 30 years ago now, the world champion is still a human. And in spite of the claim that computers will be able to understand conversations in English or other natural languages, such understanding remains rudimentary. Indeed, no one has learned how to build the quality of generality into a computer.
Take language, for instance. Even in simple speech, thousands of words are strung together in millions of combinations. For a computer to understand a sentence, it must be capable of checking all the possible combinations of every word in the sentence simultaneously, and it must have an enormous number of rules and definitions stored in its memory. This is far beyond what present-day computers can do. Yet, even a child can manage all of this, plus perceive the nuances beyond the spoken words. He can discern whether the speaker can be trusted or is being devious, whether a statement is to be taken literally or as a joke. The computer is not up to these challenges.
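The combinatorial burden the article describes can be made concrete with a toy calculation. The numbers here are my own illustrative assumptions, not figures from the article: if each word in a short sentence admitted just a handful of possible senses, a brute-force reading would already face thousands of combinations.

```python
# Toy illustration (assumed numbers): a sentence of 8 words,
# each with 3 possible senses, yields 3**8 candidate readings
# that a naive checker would have to consider.
words = 8
senses_per_word = 3
combinations = senses_per_word ** words
print(combinations)  # 6561
```

The growth is exponential in sentence length, which is why exhaustive checking quickly outruns what any machine of that era could do, while a child resolves the ambiguity effortlessly.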
The same can be said about expert systems with the ability to “see,” like the robots used in automotive manufacturing. One advanced system with three-dimensional vision takes 15 seconds to recognize an object. It takes the human eye and brain only one ten-thousandth of a second to do the same. The human eye has the innate ability to see what is important and filter out nonessentials. The computer is simply inundated by the mass of details it “sees.”
Thus, in spite of the advances and promises of the state of the art in AI, “most scientists believe that computer systems will never have the broad range of intelligence, motivation, skills, and creativity possessed by human beings,” says Cromie. Likewise, renowned science writer Isaac Asimov states: “I doubt the computer will ever match the intuition and creative powers of the remarkable human mind.”
A fundamental obstacle in achieving true intelligence artificially is the fact that no scientist or computer engineer fully understands how the human mind really works. No one knows the precise relationship between the brain and the mind or how the mind uses the information stored in the brain to make a decision or to solve a problem. “Because I don’t know how I do [certain things with my mind], I cannot possibly program a computer to reproduce what I do,” confesses Asimov. Putting it another way, if no one knows what intelligence really is, how can it be built into a computer?
Grand Masters and the Grand Master
...
originally posted by: whereislogic
...
Although hereditary factors may have a role in mental performance, modern research shows that our brain is not fixed by our genes at the time of conception. “No one suspected that the brain was as changeable as science now knows it to be,” writes Pulitzer prize-winning author Ronald Kotulak. After interviewing more than 300 researchers, he concluded: “The brain is not a static organ; it is a constantly changing mass of cell connections that are deeply affected by experience.”—Inside the Brain.
Still, our experiences are not the only means of shaping our brain. It is affected also by our thinking. Scientists find that the brains of people who remain mentally active have up to 40 percent more connections (synapses) between nerve cells (neurons) than do the brains of the mentally lazy. Neuroscientists conclude: You have to use it or you lose it. What, though, of the elderly? There seems to be some loss of brain cells as a person ages, and advanced age can bring memory loss. Yet the difference is much less than was once believed. A National Geographic report on the human brain said: “Older people . . . retain capacity to generate new connections and to keep old ones via mental activity.”
...
Why do humans have a large, flexible prefrontal cortex, which contributes to higher mental functions, whereas in animals this area is rudimentary or nonexistent? The contrast is so great that biologists who claim that we evolved speak of the “mysterious explosion in brain size.” Professor of Biology Richard F. Thompson, noting the extraordinary expansion of our cerebral cortex, admits: “As yet we have no very clear understanding of why this happened.” Could the reason lie in man’s having been created with this peerless brain capacity?