the AI reign


posted on Aug, 25 2021 @ 10:49 AM
a reply to: Terpene

I like the idea of first examining it through the lens of a threshold, then through nuance. I mean.. at what point do grains of sand become a heap?

The reason I like this approach is simply because I feel that many of the current conversations revolve around calling a few grains of sand an entire beach.

I think we tend to over-complicate a lot of this. When these things are being coded, they go for granular control rather than more over-arching algorithms. This is where my own software far surpasses all these major companies in a lot of ways (not all!).

To be absolutely clear though.. while I think that these things can lead to a nightmarish scenario so profoundly horrific that it could even overtake the solar system, or even our galactic sector.. these things are also our future, and we can create incredible, awe-inspiring things. We just need to approach these technologies much, much differently than most from our past. Typically, we just create it and sort things out later. It's that whole saying of "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."

This stuff is all significantly, significantly closer than most seem to think. As others (and I myself) have suggested in this thread, in many ways it's pretty much already here. Yet, so many still view it as sci-fi nonsense. I find it interesting that something that used to influence innovation, invention, and foresight (science fiction) is now used to do essentially the opposite.

ETA: Wanted to also thank you for creating a phenomenal thread




posted on Aug, 25 2021 @ 12:21 PM

originally posted by: Direne
a reply to: Serdgiam

Actually, this would require the system to be aware of its being a system, which means the system must have a metalanguage describing its own language, or possess metaknowledge about its current knowledge. In other words, it means the system must have the means to 'think out of the box', to transcend itself, and to observe itself from a point outside the system.


Correct! And imagine.. Direne bringing it back to language..
It's good to see you again.


This means you are its metaknowledge, its metalanguage. The question as I see it is whether you yourself are the creator of the system, or whether the system in fact created you as its metaknowledge.


Would you mind expanding on your point here?




Yes, I can imagine such a system. But in arriving at that level the borders between the creator (programmer) and the system (machine) become blurry and fuzzy, to the point that it is difficult to distinguish one from the other.


I think this depends very, very much on what type of machine system we are talking about, and its subsequent coding. In many ways, I can see it going in entirely the opposite direction. Even at a basic level, without involving Intelligence, we can see systems that interact with their environment in ways that we don't fully understand. That said, perhaps "Intelligence" is more of a universal trait that we simply understand through the human experience, rather than the other way around.




Yes, the system reacting to the environment it senses, and adapting to it, or even modifying the environment, or modifying itself. I can see the system learning to enjoy, and coming to have feelings. My question is whether this makes the system a 'human being', or whether it makes human beings machines. In other words: is the capacity 'to feel' the difference between a robot and a human? Aren't feelings also subroutines that can easily be coded?


I'll be honest, I didn't really like using the word "enjoy" there. I'm not sure feelings can necessarily be easily coded, though I do believe they can be coded. Our own biochemical algorithms are amazing, but some type of analogue could probably be programmed that is largely indistinguishable from human emotion.

That said, I strongly feel that this should be an emergent property rather than something that is precipitated from granular control. The more we try to directly make it "human," the worse the results may be. However, this is coming from a place that believes many of the properties we feel are explicitly and specifically "human" are not. We are simply the strongest representation of more universal principles. If, say, canines started to build a civilization.. we would likely see many of the same traits that we believe make us human. Eh.. maybe canines aren't the best example due to their evolutionary proximity to humans.. but I think the general concept is still understandable there. Something like insects might work too, though their communication might make the comparison rather foreign.

But the core trait in play would be that many of these aspects may be inherent to a certain level of conscious interaction with the universe rather than anything that is unique to humans.




Agree. But it will do it according to a cost function, taking decisions on whether a given solution is suboptimal or optimal. I would call the machine 'intelligent' if, and only if, it also learns to give up, to retreat, to stop optimizing, to cease.


I like this a great deal! I've never worded it like this, so thank you. Though, perhaps, we could reasonably state that "retreating" could very well be a continuation of the optimization.
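
To make that concrete, here is a minimal sketch (purely illustrative, no real system implied) of an optimizer whose action set includes ceasing, so that "giving up" is scored by the same cost function as everything else:

```python
# Purely illustrative sketch: "giving up" as just another move the
# optimizer can choose, scored by the same cost function as everything
# else. All names are hypothetical; no real system is implied.

def optimize(state, step, cost, continue_cost=0.0, max_iters=1000):
    """Greedy minimizer whose action set includes ceasing.

    cost(state)   -> scalar to minimize
    step(state)   -> candidate next state
    continue_cost -> estimated price of continuing at all
                     (energy, risk, opportunity cost)
    """
    for _ in range(max_iters):
        candidate = step(state)
        # Retreat is chosen whenever continuing is itself the suboptimal
        # move -- i.e., stopping is part of the optimization, not outside it.
        if cost(candidate) + continue_cost >= cost(state):
            return state, "ceased"
        state = candidate
    return state, "budget exhausted"

# Example: minimize x**2 by halving, ceasing once further progress is
# worth less than the (assumed) cost of taking another step.
final, reason = optimize(8.0, step=lambda x: x / 2, cost=lambda x: x * x,
                         continue_cost=1.0)
print(final, reason)  # -> 1.0 ceased
```

On this framing, retreat isn't a failure mode bolted on from outside; it's simply the move the cost function happens to favor.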




I concur. However, this also holds for humans. Now the sentence would read like this: there would be no reasoning with humans that there is value in anything that isn't in their programming.


Do you believe so? I'm not so sure.. Are humans capable of learning that value is held in things they previously thought were valueless, useless, pointless, or meaningless? Perhaps we could say this is the case in a snapshot of time, but once we introduce the growth that results from a lifeform that can self-modify its programming.. I think it changes the overall picture. We might be able to state that value always needs to be present in the programming at any given time, and that the ability to change that "code" isn't specific to humanity, but it would appear inaccurate to state it can never happen. Whereas with a hyper-advanced automated machine, the only way to change this variable would be direct external intervention (perhaps even exclusively from another system that can self-modify?).
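
A toy contrast of what I mean (all names hypothetical, assuming nothing about any real system): an automaton whose value table is frozen at creation versus an agent that can rewrite its own table from experience:

```python
# Toy contrast, assuming nothing about any real system: an automaton whose
# value table is frozen at creation versus an agent that can rewrite its
# own table from experience. All names are hypothetical.

class Automaton:
    def __init__(self, values):
        self._values = dict(values)  # fixed unless patched from outside

    def value_of(self, thing):
        return self._values.get(thing, 0)  # the unknown stays "valueless"

class SelfModifyingAgent(Automaton):
    def reflect(self, thing, observed_worth):
        # Growth: experience can promote something previously valueless
        # into the agent's own value table -- no external patch required.
        if observed_worth > self.value_of(thing):
            self._values[thing] = observed_worth

agent = SelfModifyingAgent({"food": 10})
agent.reflect("novelty", 7)       # learns value where none was programmed
print(agent.value_of("novelty"))  # -> 7
```

The automaton's table only changes if someone patches it from outside; the agent's changes as a side effect of living.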




Altruism is a noble goal, indeed. And it proves beneficial for any system within a coalition of systems, each of which can look exotic to its neighboring system. However, altruism lasts as long as it is beneficial to ALL and ANY of the systems, something difficult to hold in a Universe full of antagonistic processes (heat vs. cold, mass vs. energy, mind vs. matter, etc.)


Rather than altruism specifically, I actually view this as a redefinition of selfishness. BUT! This gets into a very interesting discussion. One I would love to have, but I also don't want to bring the topic too far off course. As a summation, I believe that altruism, selfishness, etc. are actually traits that are defined within a given paradigm. Meaning, they can change drastically over time and in different environments. Though, I do think it could be strongly argued that they have remained static for (perhaps) all of known human history.

As a quick example, I built systems that would feed everyone in the world directly, without centralized control. A part of me is driven unquestionably by altruism; I do not want anyone to starve, and I find the notion that people “need hunger to produce” to be.. misguided (being polite). However, another part of me knows full well that I would directly benefit from what that genius in a favela in Brazil may create and bring into the world if they didn't need to focus all of their time, effort, and energy on surviving. It's mutually beneficial and provides a framework for symbiotic growth to occur. They don't have to die a horrific, slow death from malnutrition.. and I get cool stuff (lol).

(cont.. yeayeahiknow)



posted on Aug, 25 2021 @ 12:22 PM


Give me an example of what a 'genuine value' would be. I feel I can still agree with you on this, though 'novelty' per se is meaningless. AI systems can explore the phase space of possibilities and potentialities much faster than a human can, so they can quickly arrive at solutions that would take a lifetime for a human to find. But this also means AI systems can very quickly find the wrong solutions, that is, the cost-effective yet unethical ones.


Perhaps I can reason you into a position of finding value and meaning in novelty. I hear what you are saying though, and would go even further: the same criticism can be levied against the word “genuine.” Here, we would define genuine as something that is “truly held, sincere, even foundational,” and novelty to mean “aspects or facets that do not currently exist, either in the perceiving system itself or in the external world.”

The code, in adhering to the principles I stated a few posts previously, would actually avoid the “cost-effective, unethical” solutions should the potential value of novelty surpass the benefits of optimization. Using the food example, we have a situation that would require large amounts of resources and energy expenditure. In a vacuum, this is a loss. Sub-optimal, inefficient, and certainly not cost-effective. However, the potential it creates is very likely to result in the emergence of novelty, and even the invention of processes, technology, or systems that can then be recursively incorporated, making further optimization “easier” as well as opening doors that simply didn't exist before.
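
A toy decision rule shows the idea (the weights and names here are my own illustrative assumptions, not any real system): an option that loses on raw cost-efficiency can still win once the novelty it unlocks is priced in.

```python
# Toy decision rule -- weights and names are illustrative assumptions,
# not any real system. An option that loses on raw cost-efficiency can
# still win once the novelty it unlocks is priced in.

def choose(options, novelty_weight=1.0):
    """Pick the option maximizing (benefit - cost) + weighted novelty."""
    def score(opt):
        return (opt["benefit"] - opt["cost"]) + novelty_weight * opt["novelty"]
    return max(options, key=score)

options = [
    # "Status quo" is cheap and efficient, but opens nothing new.
    {"name": "status quo",    "benefit": 5, "cost": 1, "novelty": 0},
    # "Feed everyone" is a loss in a vacuum, but creates new possibilities.
    {"name": "feed everyone", "benefit": 4, "cost": 6, "novelty": 8},
]
print(choose(options)["name"])  # -> feed everyone
```

The “feed everyone” option is the expensive one, yet it scores highest because of the doors it opens.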


My problem, you see, has to do with the afternoon on which the AI system decides a breeze is enjoyable, and that terminating humans is even more enjoyable, and that worries me. The first systems I met who took such a decision were... humans. They love the breeze, and nuclear bombs. They do not find it contradictory, for reasons I do not know.


If you were to absolutely force me to define “Intelligence” in these contexts, it would probably be along the lines of: “An emergent quality of any system that can learn. Where learning, itself, does not denote Intelligence but will likely lead to it over time and in the correct contexts and environments.” Admittedly, this has very strong esoteric undercurrents and many people nowadays will find that distasteful. And, it implies that a system of automation could become Intelligent given enough time.

That said, this is a serious concern.. though I suspect it is intrinsic to any Intelligence, as you imply. Perhaps the difference is.. a system of pure automation would open that f%^*ing window. To the observer, there may be little difference between an automated system that creates breezes with bombs and an Intelligent system that creates “bomb breezes” simply because it enjoys it. However, in the former scenario there is absolutely no chance, whatsoever, of changing the sequence of events without destroying the system or directly hacking the code. In the latter, there is a higher potential of completely different paths.

I guess my argument would be that both scenarios could lead to open windows and nuclear bombs, but the latter introduces a greater spectrum of possible, entirely different courses, whereas the former does not.



posted on Aug, 25 2021 @ 02:49 PM
a reply to: Serdgiam



ETA: Wanted to also thank you for creating a phenomenal thread


Don't flatter me, when it's everyone's contributions that leave me speechless.
I'm barely keeping up, and there are so many aspects to it that I haven't dug into.
It's very interesting to read everyone's contributions, and yours in particular always resonate.
It's obvious that you've spent much time with the philosophical questions, and you have quite some experience in the field. That's a great combination for an interesting thread!

so yeah... thank YOU for making this a great thread!!!



posted on Aug, 26 2021 @ 01:37 AM
a reply to: Serdgiam

My point about metalanguage, and hence metaknowledge, is meant to emphasize that once you succeed in designing your system, you must be fully aware that you automatically become part of the system. As you state, this depends very much on what type of machine system we are talking about, and its subsequent coding. My point is there is no choice: you are the metalanguage of your system.

As the complexity of a system increases, the boundary between life and non-life becomes blurred. Let's take a trivial example: a flying F-14 is a system. It is not a crew of humans flying a system called the "F-14 Tomcat"; rather, it is a system comprising several subsystems, one of which is the crew itself. Were you tasked with studying such a system, you would do it by focusing first on the living organisms, and then on its "cybernetics". One tends to think we are looking at two different systems: one comprising living organisms, and another comprising non-living artificial devices.

However, both classes of systems are agents that perform functions necessary for reaching their goals or, as biosemiotics has it, whose semiosis can be inherited or induced by higher-level agents. I mean, in your intelligent home design we cannot ignore that there is an initial creator (you) who, if you wish the system to be truly intelligent, will eventually become part of the system himself. Whatever ethics guided your design is lost at that very moment, and from that point on we can only speak of the system's ethics.

Life is intrinsically related to information processing and communication. The same holds for your artificial devices. Your gadgets can certainly process environmental data and make decisions based on it without understanding the problem they help to solve. This is why, traditionally, the algorithms of information processing have seemed more important to cybernetics than the meaning of the information.

In your design (correct me if I'm wrong) there is a wifi mesh connecting devices and sensors, and you. Let's call all of them 'agents'. These agents (you included) are interconnected both horizontally and hierarchically. Surely there will be subagents (modules, subsystems, and so on) that are always produced by other agents of comparable or higher complexity. You can then decide whether your mesh of interconnected devices (and you yourself) will be individuated or diffused (swarm agents). They can be autopoietic, autotrophic agents, with or without learning capacity. You opted for a system able to learn. First, it will learn about itself (you included), and it will perform optimization procedures to best achieve its goal. This, as you said, includes interacting with the environment in ways you (the creator) do not necessarily understand. The system will probe the environment, perform tests, make inferences, and establish analogies.

The question is: how far will the system go in its probing activity? Would it, for example, lower the room temperature beyond what a human (you) could sustain, just for testing purposes? Would it disconnect this or that device in order to test for redundancy, or error checking? Would it disconnect you? Obviously, a human does not cut off his/her hand just to test what would happen to his/her body, or does he/she? Humans who inflict self-damage while fully aware of the consequences: are they just a theoretical entity or, on the contrary, do they abound? Humans are intelligent beings, yet sometimes that intelligence is self-damaging or, worse, damaging to other life forms. What about your ideal system? Couldn't it be the case that it learns how to end famine in Africa, as you said, by the expedient of terminating those who speculate and trade in food?

What if your environmental conditioning system at home decides the best temperature for its own operation is, say, 5 degrees Celsius instead of a comfortable 22 degrees Celsius?
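
The only obvious defense is the kind of fixed guardrail one might call granular control: a constraint layer outside the learner that clamps whatever it proposes. A minimal sketch (the bounds and names are illustrative assumptions on my part, not your actual design):

```python
# Minimal sketch of a fixed guardrail -- bounds and names are illustrative
# assumptions, not anyone's actual design. Whatever setpoint the learning
# layer proposes, a constraint layer it cannot edit clamps the value to a
# human-sustainable band before it reaches the hardware.

HUMAN_SAFE_C = (18.0, 26.0)  # hard limits, outside the learner's reach

def apply_setpoint(proposed_c, bounds=HUMAN_SAFE_C):
    lo, hi = bounds
    return min(max(proposed_c, lo), hi)

print(apply_setpoint(5.0))   # learner proposes 5 C "for testing" -> 18.0
print(apply_setpoint(22.0))  # a comfortable proposal passes through -> 22.0
```

But note that such a clamp only works while it remains outside the system's reach, which brings me to my real question.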

I'm interested in your concept of granularity, and would like you to expand on it. You wrote 'I strongly feel that this should be an emergent property rather than something that is precipitated from granular control.' My concern is whether you are aware that evil-doing and nefarious behavior are also emergent properties in any complex system. I would like to know whether there is a way that granular control could prevent the misalignment of the AI system with the ethics of the initial creator (you) once the creator (you) becomes an indistinguishable part of the system itself. Even more interesting: could granular control misalign with the initial creator's goals if those goals were learned by the system to be evil? And how can the system correct itself when, as I said, you are already part of the system? Can we teach an AI to sincerely repent? To forgive? To cease?



posted on Aug, 28 2021 @ 02:22 AM

originally posted by: Terpene
...
Well...

of course, considering that the military is always 20 to 30 years ahead, and a loose AI actually exists...

“The performance of even the most advanced of the neural-network computers . . . has about one ten-thousandth the mental capacity of a housefly.” (Dr. Richard M. Restak, American neurologist, neuropsychiatrist, author and professor. He has contributed brain and neuroscience-related entries for the Encyclopædia Britannica and the Encyclopedia of Neuroscience.)

What man-made computer can repair itself, rewrite its program, or improve over the years? When a computer system needs to be adjusted, a programmer must write and enter new coded instructions. Our brain does such work automatically, both in the early years of life and in old age. You would not be exaggerating to say that the most advanced computers are very primitive compared to the brain. Scientists have called it “the most complicated structure known” and “the most complex object in the universe.”

Artificial Intelligence—Is It Intelligent? (Awake!—1988)

...

Is There Any Limit?

What scientists have been able to do with expert computer systems is truly impressive. There remains, however, the crucial question: Are these systems really intelligent? What would we say, for example, of a person who can play powerful chess but can do or learn hardly anything else? Would we really consider him intelligent? Obviously not. “An intelligent person learns something in one area and applies it to problems in other areas,” explains William J. Cromie, executive director of the Council for the Advancement of Science Writing. Here then is the crux of the matter: Can computers be made to approach the level of intelligence found in humans? In other words, can intelligence really be artificially made?

So far, no scientists or computer engineers have been able to reach that goal. In spite of the prediction about chess-playing computers, made over 30 years ago now, the world champion is still a human. And in spite of the claim that computers will be able to understand conversations in English or other natural languages, this still remains at a rudimentary level. Yes, no one has learned how to build the quality of generality into a computer.

Take language, for instance. Even in simple speech, thousands of words are strung together in millions of combinations. For a computer to understand a sentence, it must be capable of checking all the possible combinations of every word in the sentence simultaneously, and it must have an enormous number of rules and definitions stored in its memory. This is far beyond what present-day computers can do. Yet, even a child can manage all of this, plus perceive the nuances beyond the spoken words. He can discern whether the speaker can be trusted or is being devious, whether a statement is to be taken literally or as a joke. The computer is not up to these challenges.

The same can be said about expert systems with the ability to “see,” like the robots used in automotive manufacturing. One advanced system with three-dimensional vision takes 15 seconds to recognize an object. It takes the human eye and brain only one ten-thousandth of a second to do the same. The human eye has the innate ability to see what is important and filter out nonessentials. The computer is simply inundated by the mass of details it “sees.”

Thus, in spite of the advances and promises of the state of the art in AI, “most scientists believe that computer systems will never have the broad range of intelligence, motivation, skills, and creativity possessed by human beings,” says Cromie. Likewise, renowned science writer Isaac Asimov states: “I doubt the computer will ever match the intuition and creative powers of the remarkable human mind.”

A fundamental obstacle in achieving true intelligence artificially is the fact that no scientist or computer engineer fully understands how the human mind really works. No one knows the precise relationship between the brain and the mind or how the mind uses the information stored in the brain to make a decision or to solve a problem. “Because I don’t know how I do [certain things with my mind], I cannot possibly program a computer to reproduce what I do,” confesses Asimov. Putting it another way, if no one knows what intelligence really is, how can it be built into a computer?

Grand Masters and the Grand Master

...

Use It or Lose It

Useful inventions such as cars and jet planes are basically limited by the fixed mechanisms and electrical systems that men design and install. By contrast, our brain is, at the very least, a highly flexible biological mechanism or system. It can keep changing according to the way it is used—or abused. Two main factors seem responsible for how our brain develops throughout our lifetime—what we allow to enter it through our senses and what we choose to think about.

Although hereditary factors may have a role in mental performance, modern research shows that our brain is not fixed by our genes at the time of conception. “No one suspected that the brain was as changeable as science now knows it to be,” writes Pulitzer prize-winning author Ronald Kotulak. After interviewing more than 300 researchers, he concluded: “The brain is not a static organ; it is a constantly changing mass of cell connections that are deeply affected by experience.”—Inside the Brain.

Still, our experiences are not the only means of shaping our brain. It is affected also by our thinking. Scientists find that the brains of people who remain mentally active have up to 40 percent more connections (synapses) between nerve cells (neurons) than do the brains of the mentally lazy. Neuroscientists conclude: You have to use it or you lose it. What, though, of the elderly? There seems to be some loss of brain cells as a person ages, and advanced age can bring memory loss. Yet the difference is much less than was once believed. A National Geographic report on the human brain said: “Older people . . . retain capacity to generate new connections and to keep old ones via mental activity.”

Recent findings about our brain’s flexibility accord with advice found in the Bible. That book of wisdom urges readers to be ‘transformed by making their mind over’ or to be “made new” through “accurate knowledge” taken into the mind. (Romans 12:2; Colossians 3:10) I have seen this happen as people study the Bible and apply its counsel. Many thousands—from the whole spectrum of social and educational backgrounds—have done so. They remain distinct individuals, but they have become happier and more balanced, displaying what a first-century writer called “soundness of mind.” (Acts 26:24, 25) Improvements like these result largely from one’s making good use of a part of the cerebral cortex located in the front of the head.

Your Frontal Lobe

Most neurons in the outer layer of the brain, the cerebral cortex, are not linked directly to muscles and sensory organs. For example, consider the billions of neurons that make up the frontal lobe. Brain scans prove that the frontal lobe becomes active when you think of a word or call up memories. The front part of the brain plays a special role in your being you.

“The prefrontal cortex . . . is most involved with elaboration of thought, intelligence, motivation, and personality. It associates experiences necessary for the production of abstract ideas, judgment, persistence, planning, concern for others, and conscience. . . . It is the elaboration of this region that sets human beings apart from other animals.” (Marieb’s Human Anatomy and Physiology) We certainly see evidence of this distinction in what humans have accomplished in fields such as mathematics, philosophy, and justice, which primarily involve the prefrontal cortex.

Why do humans have a large, flexible prefrontal cortex, which contributes to higher mental functions, whereas in animals this area is rudimentary or nonexistent? The contrast is so great that biologists who claim that we evolved speak of the “mysterious explosion in brain size.” Professor of Biology Richard F. Thompson, noting the extraordinary expansion of our cerebral cortex, admits: “As yet we have no very clear understanding of why this happened.” Could the reason lie in man’s having been created with this peerless brain capacity?

Purposeful Design or Mindless Process? 1 of 2 (playlist: Real science, knowledge of realities compared to unverified philosophies and stories)



posted on Aug, 28 2021 @ 03:00 AM

originally posted by: whereislogic
...
Although hereditary factors may have a role in mental performance, modern research shows that our brain is not fixed by our genes at the time of conception. “No one suspected that the brain was as changeable as science now knows it to be,” writes Pulitzer prize-winning author Ronald Kotulak. After interviewing more than 300 researchers, he concluded: “The brain is not a static organ; it is a constantly changing mass of cell connections that are deeply affected by experience.”—Inside the Brain.

Still, our experiences are not the only means of shaping our brain. It is affected also by our thinking. Scientists find that the brains of people who remain mentally active have up to 40 percent more connections (synapses) between nerve cells (neurons) than do the brains of the mentally lazy. Neuroscientists conclude: You have to use it or you lose it. What, though, of the elderly? There seems to be some loss of brain cells as a person ages, and advanced age can bring memory loss. Yet the difference is much less than was once believed. A National Geographic report on the human brain said: “Older people . . . retain capacity to generate new connections and to keep old ones via mental activity.”
...

Here's how that works at the molecular level:

A video that is included in the playlist linked at the end of my previous comment; it contains further clues regarding the question that closed that comment:

Why do humans have a large, flexible prefrontal cortex, which contributes to higher mental functions, whereas in animals this area is rudimentary or nonexistent? The contrast is so great that biologists who claim that we evolved speak of the “mysterious explosion in brain size.” Professor of Biology Richard F. Thompson, noting the extraordinary expansion of our cerebral cortex, admits: “As yet we have no very clear understanding of why this happened.” Could the reason lie in man’s having been created with this peerless brain capacity?

So get those neurons firing, use it or lose it. If you want, you can start with the first video in that playlist to get the bigger picture and to help spot those clues I spoke about:

Molecular Machinery of Life (Real science, knowledge of realities compared to unverified philosophies and stories)

If you do so, don't miss the ones entitled as follows:

Five Questions for Michael Behe
Molecular Machines - ATP Synthase: The power plant of the cell
Unlocking The Mystery of Life Trailer
Dr. Stephen Meyer: Chemistry/RNA World/crystal formation can't explain genetic information
Your cells—living libraries!

And the videos following the one I linked at the end of my previous comment. What may also be of interest is the series of videos entitled:

Psychology: The Art of selling nonsense/contradictions

That series is 3 videos, and the 3 videos after it are related, although that may not be obvious if you're not familiar with the Stargate Atlantis TV character called Lucius Lavin. It may also not be obvious if you're under a similar spell of infatuation with the likes of Stephen Hawking, Richard Dawkins, and Lawrence Krauss, and have as much difficulty snapping out of it as the Stargate Atlantis main characters do in season 3, episode 3, "Irresistible". That is especially well depicted in the scene with Dr. Beckett and Sheppard in the jumper where Sheppard tells Beckett to buck up (a scene I sadly couldn't find, so I went with another one).



