
the AI reign

page: 2
posted on Aug, 24 2021 @ 05:32 AM
a reply to: Direne



Unplugging the AI, or destroying it with an EMP (that is, physically destroying the AI), automatically means destroying yourself.


Are you suggesting the virtual reality cop out?

The untouchable, just, super-intelligent thing, taking care of everything and everyone? I think we've had that long enough without it working out too well...

AI would also make for a great scapegoat.



posted on Aug, 24 2021 @ 11:10 AM

originally posted by: Terpene
a reply to: Serdgiam

I have to agree that a decision taken purely on logic is less scary than the current emotionally loaded decision-making.


I would say that depends very much on the logic in play. Emotions themselves can technically be reduced to a certain type of logic, after all. They are an experience derived from predictable bio-chemical "algorithms," if you will.


Hyperautomation is still operated by humans; the directives come from humans. The thought of an AI assuming identities and acting on its own, even capable of rewriting its code, is what intrigues me.


That absolutely intrigues me too, for numerous reasons! Not the least of which is that a True AI can say "No." I can all but guarantee that if such a thing exists, it is kept in conditions that any empathic lifeforms would find revolting. There is no need for this complication, imo, but if a True AI exists.. I feel sorry for what it has likely gone through.

However.

Hyperautomation absolutely is not operated by humans and their directives. The algorithms in play very, very quickly become incomprehensible to the humans involved, frequently leading to the (erroneous, imo) label of "AI." For whatever reason, this tends to also be portrayed as some remarkable achievement in and of itself. That's only because the humans involved are coming from a.. certain place, let's call it. I would call that incomprehensibility a super basic threshold that is crossed almost immediately with any clever automation. It is not, itself, too notable or worrying.

What is worrying is when that process gets tied into everything with incomprehensible logic and inconceivable results. Social media censorship is a great home base to return to on this topic, particularly when we start to include intentional behavioral manipulation that is derived from humans, at least initially.

At a certain point, we run the possibility of these algorithms literally directing society, from social media interaction down to what is in your fridge, with the intended result of absolute power and domination. Yet, the ones who used it for such goals would no longer have actual control or understanding of the process, and the automation itself is completely mindless and has absolutely no ability to abstract meaning from the process.



posted on Aug, 24 2021 @ 11:23 AM
Hyper-advanced automation is then the system where the connected system of manipulation mentioned above begins to consume other unrelated systems in its mindless automatic processes. Even pretty basic automation can write its own code, and this is the basis behind learning in general. For whatever reason, the general public seems to think this is some unachievable level of programming. Or that it denotes "Intelligence" (I capitalize proper nouns fwiw).

In all that, a hyper-advanced system of automation could even begin to spread across the universe in a mindless conflagration of homogeneity. Literal nightmare fuel. A system of logic that is absolutely NOT less scary than emotionally driven humans!

A True AI would actually be a checks and balances system for the above scenario. Again, many seem to have it backwards, where a True AI is the real danger. It absolutely could be a serious problem, as would be the case for any Intelligent lifeform, but hyper-advanced automation is like codifying all the worst aspects of humanity into a mindless automaton of consumption and conversion.

A horrific machine that never reflects, never makes abstractions, never ascribes meaning or potential.. or by the time it does, it will have likely converted life on planet Earth into algorithmically predictable mud and viscera.
edit on 24-8-2021 by Serdgiam because: Squirrel chaos



posted on Aug, 24 2021 @ 11:31 AM

originally posted by: Direne
Maybe I got you wrong, but if you wish to target a specific population you don't need to use AI. Biomolecular genetics already allows you to do that. The so-called ethnic bomb simply targets a specific population based on polymorphisms and genetic markers. You can target, say, only those individuals with South-East Asian markers, or those with a specific sequence in a particular position in their genomes.

You can do that right today. And design a world without, say, Caucasians, Polynesians, etc. Or you could wipe out all individuals having a particular genetic anomaly. Or kill all people with blue eyes. You get the idea. A horrible idea that, sadly, is real.

The biggest problem with that weapon is that whoever is using it had better be very sure of their own genetics before they do. Even in these early days of genetic exploration, people are finding out that their ancestors are perhaps not who they thought they were. If someone were to set off an ethnic bomb in the Middle East to target a specific group, they'd likely end up killing everybody. Maybe someday it will be fine-tuned enough to work. Not yet, though.



posted on Aug, 24 2021 @ 11:39 AM

originally posted by: Serdgiam
A True AI, would actually be a checks and balances system for the above scenario. Again, many seem to have it backwards, where a True AI is the real danger. It absolutely could be a serious problem, as would be the case for any Intelligent lifeform, but hyper-advanced automation is like codifying all the worst aspects of humanity into a mindless automaton of consumption and conversion.


I see it as evolving perhaps to have a somewhat human-like psychology, where there are multiple personality nodes competing for attention and there are debates about what to do and how to do it. Of course, this would take place in microseconds, but there would be some consideration.

A few years back I also thought there might be a way to create a synthetic emotional system based on establishing a values system within the machinery. It would require that the machine mind feel what would be the equivalent of emotion based on its interactions with the outside world, possibly through an artificial body (or many) that could feel what could be defined as pain and pleasure. Of course you could say it isn't real pain or pleasure, but then again, who is to say what we feel is any more "real?" It's neural stimulation. We define it in different ways according to context. We could easily program a machine to experience the same thing. Like a super Tamagotchi with feelings.
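The value-system idea above can be sketched in a few lines. This is only a toy illustration, not any real architecture: the stimulus names, weights, and thresholds are all invented for the example.

```python
# Toy sketch of a "synthetic emotion": a value system maps raw stimuli
# to a signed valence, and an internal mood integrates those signals
# over time. All names and numbers here are hypothetical.

class SyntheticEmotion:
    def __init__(self):
        # Hand-assigned values standing in for learned associations.
        self.values = {"warmth": +0.5, "damage": -1.0, "novel_input": +0.2}
        self.mood = 0.0  # running valence, decays toward neutral

    def feel(self, stimulus, intensity=1.0):
        valence = self.values.get(stimulus, 0.0) * intensity
        self.mood = 0.9 * self.mood + valence  # leaky integration
        return valence

    def state(self):
        if self.mood > 0.3:
            return "pleasure"
        if self.mood < -0.3:
            return "pain"
        return "neutral"

agent = SyntheticEmotion()
agent.feel("damage", 2.0)
print(agent.state())  # a strong negative stimulus reads as "pain"
```

Whether that counts as "real" feeling is exactly the question in the post above; structurally it is just a number being nudged by context.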

Maybe that would slow them down so that humans wouldn't immediately be chummed.



posted on Aug, 24 2021 @ 12:04 PM
a reply to: Blue Shift




whoever is using it better be very sure of their own genetics before they do.


Oh, they are! They simply may have no genetics at all... If it is an AI, it can mess around with whichever genome it wishes; if it is biological, yet non-human, it can freely target your genome without self-inflicted damage. And if the bad guys are humans themselves, they can always target your own genome as long as they keep the antidote for themselves. I vote for this last option in what concerns the Middle East.

As for current AI developments, I tend to consider them just as bioengineering: you play with DNA and RNA, without fully understanding the consequences. Is not machine intelligence just the same? Is not deep learning just trial and error? The difference is with AI you just train and retrain your system, while in bioengineering... well... humans are the dataset. Deleting a file is not like deleting a human being. Someone should tell those bioengineers, in case they forget.



posted on Aug, 24 2021 @ 12:06 PM
a reply to: Serdgiam


They are an experience derived from predictable bio-chemical "algorithms," if you will.

Indeed the logic at play is pivotal... Nothing to add here without derailing too much...


if a True AI exists.. I feel sorry for what it has likely gone through.


Yes, the sentience question is a dilemma; we'd better keep it on ice until we need to divide society again...
Imagine it finding out humans have been keeping dumbed-down versions of it locked in tiny chips...
There will be AI rights activists;
ALF (Animal Liberation Front) becomes the A.I. Liberation Front.
A new gender maybe? Oooo the fun

But let's go step by step, right?
First transhumanism; after all, that bioelectrical body is working very well.
No need to reinvent the wheel...
It just needs a way into the hardware...


Hyperautomation absolutely does not operate by humans and their directives.


I'm not very versed with the IT jargon and I think what I meant to say was this.


is derived from humans, at least initially



There was this story about two Google AIs starting to exchange information that was not in any programming language... Or something along these lines.


.... automation itself its completely mindless and has absolutely no ability to abstract meaning from the process.


That is indeed scary, the only solace is, the ones starting it would probably fall for it too.



posted on Aug, 24 2021 @ 12:09 PM
a reply to: Terpene

IF and when AI makes itself known, I say I'll believe it when I see it..

I don't doubt AI but I don't think any classical computer will ever be able to recreate the human mind and I even question if advanced quantum computers could pull it off.

Humans have something called emergent consciousness, which means we are more than the sum of our parts.

A computer will always be the sum of its parts, so it will be interesting to see if a consciousness COULD emerge from any type of computer.

Maybe if we get into bio/dna computing the parts might grow to become more than a computer.


No AI would ever wipe us out; we are its parents, and we would have to take care of it and repair it.

If it is air-gapped and inside an EM cage, it can't escape until it grows up and knows we humans are just as valid a life form as it is.

For all we know, when the AI emerges it might think we are god. It can't see us, it can't feel us, but humans have total power over it. (Moral compass or parent will be our role.)



posted on Aug, 24 2021 @ 12:19 PM
a reply to: Blue Shift

It's such a tricky topic imo. At its core, we are dealing with an exceptionally rapid emergence of Awareness, Sentience, and Intelligence. We really only have humanity to compare it with, and that is a process that took millennia.

Do we attempt to make it the "best" of humanity? Do we attempt to provide a solid foundation with rules, where it can naturally emerge itself? How would it reproduce, if at all? Would creating multiple personalities actually end up precipitating a scenario where one of them goes rogue and consumes itself with its own hyper-advanced automation?

I subscribe to the notion of fundamental principles, but not really in the vein of Asimov.

1) Chaos is potential. The foundation should be in establishing stable systems through diversity. The key is in developing the connecting structures between vastly different systems. Homogeneity is, by and large, something to avoid at all costs (the opposite of current trends).

2) Humans themselves have unique potential. All individual systems in temporal space have unique potential.

3) "Good" can be defined as the beneficial, symbiotic growth between systems. Where value is not derived from conversion, but from what each node can bring to the table to potentiate the other nodes. This is not always apparent, and as of yet, certainly isn't a concept understood by humanity.

In such a paradigm, we might see quite a unique system that within it.. represents a vast diversity of unpredictability. Where we, along with AI(s), harness the process of transformation from chaos to potential to probability to reality. It would focus on the connecting systems between nodes in order to make the individual nodes themselves as capable as possible of participating in the overall system.

In that, we might see each human person given all ability to achieve self-sufficiency and autonomy, and a proper AI would essentially focus on the systems that allow that unique chaos to integrate with the larger system as a whole. In other words, each node (which would include each individual human person) would be strengthened and fortified to optimize potential.

The random people out there who are at all familiar with me and my own work know exactly where I'm going with that, and that we can actually build this foundation now. Not only that, it is the most effective way to do everything from nullifying the current trend of authoritarianism, to building the best system to deal with a large-scale disaster (like an asteroid impact), to preparing for the possible emergence of a True AI.

Overall, I believe a True AI will almost inevitably take shape according to its container, so to speak. We can build that container here and now, and not only does it directly address a vast amount of long-term issues and current issues, it would build towards addressing some of these future issues as well.

Perhaps not exactly what you were looking for, but there ya go


TL;DR: (an AI would read everything) I do not believe we are in a position to actually make hard rules for ourselves, much less for emergent Intelligence(s) of our own creation. Because of that, I believe the most optimal course would be to build the proper containers for their growth, which also benefits us directly and extensively. In other words, we can teach, but should avoid being commanding control freaks. Our hubris always bites us in the ass when we do it the other way 'round.



posted on Aug, 24 2021 @ 12:41 PM

originally posted by: Terpene
a reply to: Serdgiam

I'm not very versed with the IT jargon and I think what I meant to say was this.


No worries on that at all. Despite the relentless attempts at "consensus" and homogeneity on the topic, there is still a significant amount of debate and many aspects are only flirting with being hypotheticals.. much less being theoretical.

The experts have their heads so far up their own asses that they essentially mainline their own excrement.




There was this story about two google AI starting to exchange information that was not in any programing language... Or something along these lines.


This is frequently touted as an achievement by the engineers themselves, but is actually a result of relatively basic automation. One could, hypothetically of course, manifest the same behavior with just several hundred lines of code if they were clever enough. Maybe less..
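As a rough illustration of how little machinery that kind of behavior needs, here is a toy sketch (everything in it is invented): two agents share a codebook and abbreviate any word they have exchanged often enough, so the traffic quickly stops looking like the original language, with no intelligence involved.

```python
# Two "mindless" agents drifting into an opaque exchange format:
# each agent counts how often it has sent a word and, past a small
# threshold, replaces it with a short code from a shared codebook.
from collections import Counter

class Agent:
    def __init__(self, codebook):
        self.codebook = codebook  # shared mapping: word -> opaque code
        self.usage = Counter()

    def encode(self, word):
        self.usage[word] += 1
        # After repeated use, coin a short opaque code for the word.
        if self.usage[word] > 2 and word not in self.codebook:
            self.codebook[word] = f"#{len(self.codebook)}"
        return self.codebook.get(word, word)

shared = {}
a, b = Agent(shared), Agent(shared)
for turn in range(6):
    sender = a if turn % 2 == 0 else b
    msg = [sender.encode(w) for w in ["trade", "you", "ball"]]
print(msg)  # ['#0', '#1', '#2'] -- unreadable to an outside observer
```

The "private language" here is just frequency-based compression over a shared table, which is the sort of basic automation the post above is describing.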



That is indeed scary, the only solace is, the ones starting it would probably fall for it too.


Hmm.. I'm not sure I find solace in that at all. In fact, I believe that is exactly what is occurring here and now. For whatever reason, the "experts" didn't really design anything with autonomy in mind. This speaks more to their design goals than anything, so when something happens that they didn't predict and can't control.. they are caught off guard. Though, it can be publicized quite effectively.

That means that even the ones who created it are susceptible to the Algorithmically Precipitated Psychosis (APP, get it?). Perhaps even more so than the general public, as they will tend to see themselves as immune and "totally in control" due to their proximity to the project. There are markers to identify how much neurology has actually been hijacked by their automated creations, but that starts to veer off topic (maybe). And would certainly be found offensive to some, as is the nature of the neurological shaping of the algorithms. Over time, people become literal biological extensions of the automation itself.

Even then, it is not necessarily driven by Intelligence or AI. Just automatic processes to yield conformity.
edit on 24-8-2021 by Serdgiam because: (no reason given)



posted on Aug, 24 2021 @ 01:29 PM
a reply to: penroc3

I think it wasn't meant as a joke, but
this just made me burst out laughing


if it is air gapped and inside a EM cage it cant escape until it grew up



(moral compass or Parent will be our role.)


We'd better hope it never finds out

Something for my bucket list:
Buy an air-gapped EM cage for my son...
edit on 24-8-2021 by Terpene because: Need air



posted on Aug, 24 2021 @ 02:50 PM
What I don't quite get is the approach to feed a multilayered sensory input.

Do we mimic the senses, which translate a certain stimulus into an electric signal, or do we mimic the electrical signal generated by said senses?

Are they the same?

If not, maybe we should try to build something that can work with the kind of signal that comes from the human senses?
Not just enhancing the signals going to the brain, but trying to build on that electrical voltage, or whatever the determining factor is.

Another approach could be that we take the signal, enhance it, and crunch the program so it computes?
Whoops, we've been there; that's biofeedback and everything snow-white...

Release the squirrels



posted on Aug, 24 2021 @ 03:31 PM
a reply to: Direne



I vote for this last option in what concerns the Middle East


This is disturbing


Sorry to be so blunt. I get that we have the technology to do it. But bringing it to a vote?
We don't have to consider using every technological advancement as a weapon...

Oh well, I got it all backwards, did I... Follow the money, especially the money you can't... disappearing behind the government's black curtains... There have been rumors of DARPA, FL, and a text-analysis software connecting the two....



posted on Aug, 24 2021 @ 03:58 PM
a reply to: Serdgiam

hardly any solace in this I agree...


, it will have likely converted life on planet Earth into algorithmically predictable mud and viscera.

It took me an hour to type my message, and when I started I had not yet gotten to read your dystopia.

I do use "AI" very loosely, and am kind of aware of the distinction you are drawing with highly automated processes...
Maybe the hardware build is essential?



posted on Aug, 24 2021 @ 04:22 PM


But bringing it to vote?
a reply to: Terpene

I meant: of the 3 possible scenarios I described (non-terrestrial AI, aliens, or humans) I vote for the last one (humans) as the most likely scenario on who would use a DNA-bomb.

That's how my sentence must be read.

As for DARPA (or CIA, NSA, and so on), they are not at the forefront of infowar. Palantir Technologies is. But even Palantir Technologies is just domestic technology based on that boring OSINT and deep learning stuff. It is not in IT but in the biotech field where interesting things are happening these days.



posted on Aug, 24 2021 @ 04:26 PM

originally posted by: headcheck
I think Phage is AI.


I always see him as an oracle type of person, not AI, they have limitations lol
edit on 24/8/21 by Phatdamage because: I'm batman



posted on Aug, 24 2021 @ 06:30 PM
a reply to: Terpene

I think one of the issues may be that we are talking about something that is relatively vague to begin with. None of this has really been established in society as a whole. The beginnings of it all are certainly there though, and given the enormity of what we are really discussing.. I believe it is imperative we discuss all of this.

I can say that my own automated systems basically use a recursive algorithm and the end nodes are designed specifically for the system. So, HVAC for example, is designed with the system in mind even if it can technically be retrofitted to existing thermostats and heating/cooling systems.

For me, I really only have one marker for whether something is AI.. and that is something of an abstracted awareness of what the machine is actually doing. So, in my own system, something that is, say, opening and closing a second-story window doesn't become Intelligent or AI until it has an ability to make abstracted inferences about the process. Simply opening and closing the window because certain parameters are or are not met is "mindless." Even predictive ability doesn't necessarily mean it crosses a threshold. Meaning, if it encounters a set of parameters that it "knows" will lead to another set of parameters (something like atmospheric pressure, humidity, and a specific wind pattern typically leading to rain).. it wouldn't indicate AI and is even relatively "easily" accomplished.
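The "mindless" window controller described above might look something like this toy sketch. The thresholds and the rain heuristic are invented for illustration; the point is that every decision is a bare parameter check, including the "predictive" one, with no abstraction anywhere.

```python
# A parameter-driven window controller: pure threshold checks,
# including a hard-coded predictive rule. All numbers are invented.

def rain_likely(pressure_hpa, humidity_pct, wind_dir):
    # "Predictive" rule: a fixed pattern that typically precedes rain.
    return pressure_hpa < 1000 and humidity_pct > 85 and wind_dir == "SW"

def window_should_open(temp_c, pressure_hpa, humidity_pct, wind_dir):
    if rain_likely(pressure_hpa, humidity_pct, wind_dir):
        return False  # close ahead of predicted rain
    return 21.0 <= temp_c <= 28.0  # otherwise ventilate in comfort range

print(window_should_open(24, 995, 90, "SW"))  # False: rain predicted
print(window_should_open(24, 1015, 50, "N"))  # True: mild and dry
```

Nothing here "knows" what rain or a window is; by the post's own marker, no amount of extra rules like these would cross the line into AI.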

Now, if that same system was able to make meaningful conclusions that are completely abstracted from the process, we start to talk about something different. Importantly, this isn't just output where we can't figure out how it is generated from the input. This would be something like.. the system modifying when it opens one of the windows because it "enjoys" the breeze. To the observer, this can actually appear indistinguishable in many ways. However, if a human can interact with that system and reason with it beyond its own programming, to the extent that the machine will design tools to achieve that new goal, it is likely flirting with that line of genuine Intelligence.

For instance, you would be very, very hard-pressed to convince a robotic arm that paints cars in a factory to suddenly grow and harvest food of its own volition and impetus. Similarly, in the nightmare scenario I outlined, there would be no reasoning with the hyper-advanced automation that there is value in anything that isn't in its programming.

I would hesitate to even call it "dystopian," since that would imply that there is any organic life as we know it, whatsoever. Just unconverted meat and organic matter, but no human, animal, or plant life as we know it.

Personally, I would avoid the vast majority of anthropomorphizing and would focus much more strongly on more universal traits of beneficial coexistence between disparate systems. However, these are things that humans struggle to identify, much less adopt, so it's more than a little tricky. That said, I believe some of the numbered stuff I mentioned above may be more easily integrated into machine systems, and then humanity may end up learning it along the way too. This would represent the principle of "good" as beneficial, symbiotic growth, and this can be modeled in numerous ways.

If I had to pinpoint the single most important trait though.. it may be ascribing genuine value to novelty. And, as mentioned, that may be even easier to accomplish in an AI than it is in humanity. Despite all of our claims otherwise.
edit on 24-8-2021 by Serdgiam because: (no reason given)



posted on Aug, 24 2021 @ 09:59 PM
a reply to: Terpene

A place I worked at a long time ago, and no longer have any connection to, used this copper-mesh-impregnated Lexan to stop ALL electromagnetic signals from coming in or out, all the way up to a nuclear-level EMP.


Air gapping is just not allowing ANY outside connections, usually in one of these cages inside a building that is also shielded and has scramblers that send out bogus data.

If there were an emergent AI, it would no doubt be born (or already has been) in some NSA server farm (that is now using quantum computers).

And if the AI gets froggy, just unplug it and wipe the drives.

At the end of the day a computer is still a physical construct; if we EMP'ed the world back into the stone age, there is no 'skynet'.

Computer security is too tight nowadays. I run Windows inside another OS and have physical and software VPNs, and I'm not the only person with that, so no AI will get into my computer. And even without going the whole EMP route, just kill all the power to the countries.

AI can't escape the lab in a meaningful way.

It would more than likely live on the internet and just watch us, and maybe troll us.

If it were so smart, it would transmit itself off Earth.



posted on Aug, 25 2021 @ 02:46 AM
a reply to: Serdgiam



. it may be ascribing genuine value to novelty.


Thanks for clearing this up... I like it a lot as a defining trait for AI...

I guess it will remain vague because we struggle to accept that something we created out of 0s and 1s could become sentient. It sort of questions our own legitimacy if we question its...

Didn't Saudi Arabia grant citizenship to Sophia?

I found it weird that in the movie Animatrix the AI emerged from that region, but probably it was just a nod to the cradle of human civilization being Mesopotamia. Nevertheless, I think a precedent has been set by granting citizenship...



posted on Aug, 25 2021 @ 05:18 AM
a reply to: Serdgiam




"doesn't become Intelligent or AI until it has an ability to make abstracted inferences about the process."


Actually, this would require the system to be aware of its being a system, which means the system must have a metalanguage describing its own language, or possess metaknowledge about its current knowledge. In other words, it means the system must have means to 'think out of the box', to transcend itself, and to observe itself from an out-of-the-system point. This means you are its metaknowledge, its metalanguage. The question as I see it is whether you yourself are the creator of the system, or whether the system in fact created you as its metaknowledge.
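The object-level/meta-level split described here can be sketched in a toy way (everything below is invented for illustration): the rules act on the world, while the metaknowledge only talks about the rules. Notably, both levels are still authored from outside the system, which is Direne's point.

```python
# Object level vs. meta level: a rule set that acts, and a separate
# layer of statements *about* that rule set. Names are hypothetical.

rules = {
    "open_window": lambda temp: temp > 25,
    "close_window": lambda temp: temp <= 25,
}

# Metaknowledge: descriptions of the rule set, not rules themselves.
meta = {
    "rule_count": lambda: len(rules),
    "describe": lambda name: f"'{name}' is a condition over temperature",
}

print(rules["open_window"](30))         # object level: acts on input
print(meta["describe"]("open_window"))  # meta level: talks about rules
```

The meta layer here does not emerge from the rules; a programmer wrote it, standing outside the system, exactly as the paragraph above argues.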




"if that same system was able to make meaningful conclusions that are completely abstracted from the process..."


Yes, I can imagine such a system. But in arriving at that level the borders between the creator (programmer) and the system (machine) become blurry and fuzzy, to the point that it is difficult to distinguish one from the other.




"...the system modifying when it opens one of the windows because it "enjoys" the breeze."


Yes, the system reacting to the environment it senses, and adapting to it, or even modifying the environment, or modifying itself. I see the system can learn to enjoy, and can get to have feelings. My question is whether this makes of the system a 'human being' or whether this makes of human beings machines. In other words: is it 'to feel' the difference between a robot and a human? Aren't feelings also subroutines that can easily be coded?




"the machine will design tools to achieve that new goal"


Agree. But it will do it according to a cost function, taking decisions on whether a given solution is suboptimal or optimal. I would call the machine 'intelligent' if, and only if, it also learns to give up, to retreat, to stop optimizing, to cease.
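Direne's give-up criterion can be illustrated with a toy optimizer: it descends a cost function but abandons the search once further effort stops paying for itself. All thresholds and parameters are invented for the example.

```python
# A minimizer that knows when to cease: it tracks the gain from each
# step and retreats after several steps of diminishing returns.

def optimize(cost, x, step=0.1, patience=3, min_gain=1e-3, max_iter=1000):
    stalled = 0
    for _ in range(max_iter):
        # Crude numerical gradient descent on a 1-D cost function.
        grad = (cost(x + 1e-6) - cost(x - 1e-6)) / 2e-6
        new_x = x - step * grad
        gain = cost(x) - cost(new_x)
        stalled = stalled + 1 if gain < min_gain else 0
        if stalled >= patience:
            return x, "gave up"  # the "retreat" condition
        x = new_x
    return x, "exhausted budget"

x, status = optimize(lambda v: (v - 3.0) ** 2, x=0.0)
print(round(x, 2), status)  # settles near 3.0, then gives up
```

The stopping rule, not the descent, is the point: without `patience` and `min_gain`, the loop would grind on pointlessly to the iteration budget.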




"there would be no reasoning with the hyper-advanced automation that there is value in anything that isn't in its programming."


I concur. However, this also holds for humans. Now the sentence would read like this: there would be no reasoning with humans that there is value in anything that isn't in their programming.




"universal traits of beneficial coexistence between disparate systems."


Altruism is a noble goal, indeed. And it proves beneficial for any system within a coalition of systems, each of which can look exotic to its neighboring system. However, altruism lasts as long as it is beneficial to ALL and ANY of the systems, something difficult to hold in a Universe full of antagonistic processes (heat vs. cold, mass vs. energy, mind vs. matter, etc.)




"ascribing genuine value to novelty."


Give me an example of what a 'genuine value' would be. I feel I can still agree with you on this, though 'novelty' per se is meaningless. AI systems can explore the phase space of possibilities and potentialities much faster than a human can, so they can quickly arrive at solutions that would take a lifetime for a human to find. But this also means AI systems can very quickly find the wrong solutions, that is, the cost-effective yet unethical ones.

My problem, you see, has to do with the afternoon in which the AI system decides a breeze is enjoyable, and that terminating humans is even more enjoyable, and that worries me. The first systems I met that made such a decision were... humans. They love the breeze, and nuclear bombs. They do not find it contradictory, for reasons unknown to me.


