There Are No Strings On Me...

posted on May, 29 2015 @ 01:15 PM
I'm not afraid of Skynet or Ultron; I'm afraid of "Grey Goo".

Google it if you don't know about "Grey Goo".



posted on May, 29 2015 @ 11:00 PM
I replied to someone on a similar thread recently, but I doubt anyone will ever read it. You see, I have a bad habit of replying to old, expired threads that no one else has posted to in a while. Stupid, I know, but that’s just the way it is. Anyway, the person I replied to seemed to have concerns over where all this exploding technology is leading us. Since this thread seems to be in a similar vein, I’ll simply be repeating here much of what I posted in the other one (that no one will ever read). What I’m about to express is simply my own personal thoughts/feelings about the subject. So, take it for what it’s worth. In my estimation that would be around 2 cents...


I believe this thread very accurately, and articulately, expresses many of the concerns, and the uneasiness, most of us are feeling these days. As the clock ticks away, it’s now just beginning to dawn on most of us that time is getting short, and that humanity is quickly approaching a crossroads. In our own lifetime we may become witness to what can only be described as a new branch sprouting off the evolutionary tree. Our species, as we know it, may soon (my guess, 100-200 years) be viewed in much the same way as we currently view chimpanzees and the great apes. This would truly be a monumental change and major paradigm shift for mankind. We may be on the verge of witnessing a truly life-changing event, bigger than any other event in human history. Our entire identity, and the role we play, as human beings is about to come under question and be challenged.

The anxiety comes from all the uncertainty we’re feeling over where these technologies (AI & related) could ultimately lead us. Note, I said, “where these technologies could ultimately lead us”, and not, “where our engineering/development expertise might lead these technologies”. It could be the cart is leading the horse here. We’re obviously hell-bent on the development of “thinking machines”, and for better or worse it’s gonna happen. That train’s left the station, and it ain’t turnin' back. My personal concern is that our insatiable, compulsive desire to create an “intelligence” separate from our own may someday soon result in an entity far beyond our wisdom to control. This technology has the potential to eventually take on a life of its own and become an autonomous competitor for resources. I wouldn’t know, but I’m guessing we could reach that point within the next 75-100 years (the blink of an eye). Now, in say 100 years, can you imagine the outcome of a confrontation with an incredibly advanced, goal-seeking machine with a highly developed sense of self-preservation and 10,000 times your intelligence? Can you imagine the lengths such a machine might go to in order to satisfy its desired mission/goals? Goals that may be counter to your own, and changing radically by the minute? I can, and it ain’t pretty.

In my view, sentience is not necessarily a requirement for machine “intelligence”. To be sentient is to have “feelings”; i.e. ethical, moral, right vs wrong, good vs bad, love vs hate, etc... It’s what we humans call our conscience, and it is a subjective, qualitative perspective; strictly a human invention/concept. Consciousness, however, is another ball game. It’s the state of being aware of one’s internal/external environment via sensory input (information). While machines may or may not ever achieve human-like sentience, they will certainly develop a highly tuned and hypersensitive state of consciousness. They will have a much greater awareness of their environment and surroundings than humans do. We humans filter out most of the events/information taking place all around us.

Pure speculation tells me that within 50 years machines will become as smart as, or smarter than, humans. Maybe sooner. Even as they take our jobs away, we will still irresistibly form “personal” relationships with them. They will become our friends and lovers (Hmmmm...), and will work and play alongside us. These machines will not be sentient, but who cares? They will be good enough at mimicking our sentient/emotional behavior to satisfy our creature needs. For the most part, humans are naive and easily fooled. Hell, some people get attached to their pet rocks. These machines will carry on very natural conversations with us, give us good advice at times, sometimes even argue with us, and will provide a strong shoulder to cry on when needed, as well. That already sounds better than most marriages today. Around the turn of the century, though, I can imagine things beginning to get a little dicey. From that point on, all bets are off. It could be a truly wondrous time to live in, or equally likely it could become a torturous Hell on Earth, as the machines begin to impose their “will”. And the funniest part of it all is, there won’t be a damned thing we can do about it.


I may sound alarmist, but I don’t think I am, and certainly don’t mean to be. It’s not like I dwell on this stuff, but I can read the writing on the wall. I’m truly fascinated by technological development, and even work as a system software developer/engineer. AI would be an amazing field to work in - it would present the ultimate challenge. But, like all other technological developments, it’s a double-edged sword. I just hope we have the wisdom to control it when our date with destiny comes. And it will...

Great thread, Hefficide.


Rock on...

PS: Almost forgot about AI and the war machine. The military (DARPA) is currently putting a lot of effort and bucks into developing autonomous killing machines. Their goal is to eliminate (as much as possible) the need for human intervention or presence on the battlefield. The plan would willfully hand over to machines the authority to decide who will live and who will die. Spooky, huh? Needless to say, there’s a lot of heated debate going on right now over this very issue.

It’s no longer something we can comfortably think of as a remote possibility in some distant future. It seems that future has arrived, and it’s happening right now before our very eyes.



posted on May, 29 2015 @ 11:43 PM
Good Thread Heff...

Did you see Ex Machina?

Did anyone but me see Ex Machina?

Thing is, I went to the theater on Ultron's opening night and saw Ex Machina, and I was pissed because I'm a big Marvel fan, but after watching Ex Machina, Ultron seemed like he just got off the little yellow bus on its way to the Matrix...

There were 11 people in Ex Machina (literally) vs a line around the block to see Ultron, so of course I got a ticket to Ex Machina and snuck into Ultron when it was over, leaving some poor guy to stand in the back of the theater I'm sure (I suck like that), but it's robots, come on, I'm an addict...

Anyway, I digress... Ex Machina is the go-to movie if you want a portrayal of AI. It was the kind of sci-fi you just don't see anymore, in the line of A. C. Clarke or Asimov, except not butchered for the screen... I'd highly recommend it.

If anyone hasn't seen it... here is the trailer.



Ultron is a clown; this was the real deal if you like the AI topic...



posted on May, 30 2015 @ 12:01 AM
And just for my 2 cents worth.

I don't think we are "smart".

I don't think even our "smartest" can predict how AI will behave, because by comparison Elon or Hawking would be lower than a small rodent...

I don't see AI trying to destroy us, unless... we try to destroy it out of fear.

In reality, I think the most likely situation is that some of the first AI that is "very smart" will be more math-intensive than it is "personally" intelligent - massive CPU but not a lot of feels. AI like that will be able to do things like crunch the human genome... Working sometimes on this sort of thing myself, it is obvious to me that something that can "process" at a rate that can analyze the genome, the cellular structure, blood... every aspect of the human body, will be a numerical certainty long before something like "emotion" or self-awareness arises...

So then we have a race...

Because a big analytic processor will be able to sit there and, in a couple of days, map all the genes we need to alter to massively increase our own intelligence, to alter our bones to be able to withstand jumping off a building like a superhero, to regenerate limbs... The AI will improve us vastly before becoming self-aware in a "human" way...

The danger is in our superstitions and greed. For example, we wake up "able" to activate our stem cells again and make old people young, or make our brains capable of doing math faster than a calculator... and religion says don't use it, or nations disagree and we fight.

In other words... "we" are the danger, not the AI. We might go after the AI and force its hand, call it Saaaatan lol... scare it. We might have the ability to fix all our problems and simply be too stupid to "use it", again making it fear for itself...

The AI, that's not dangerous... We are just smart enough to be dangerous; that's our problem.



posted on May, 30 2015 @ 12:04 AM
a reply to: criticalhit

Just a tech aside, for no particular reason. CPUs are quickly going out of vogue on supercomputers. GPUs are the new standard. The NSA's system is currently being refitted with mostly Titan X graphics cards for processing - four per Xeon processor if memory serves.

GPUs process the math much more quickly, with the ability to do so in many more threads at once.
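
Just to illustrate the point, here's a tiny Python sketch (my own toy example, nothing to do with the NSA system; it assumes NumPy is installed, and CuPy only if you happen to have a CUDA card - the matrix size and any timings are arbitrary). The same matrix multiply runs once on the CPU and once spread across thousands of GPU threads:

# CPU-vs-GPU comparison sketch (assumes NumPy; CuPy is optional and
# only used if it's installed alongside a CUDA GPU).
import time
import numpy as np

N = 4096  # arbitrary matrix size for the demo

def time_matmul(xp):
    """Time one N x N matrix multiply using the given array module."""
    a = xp.ones((N, N), dtype=xp.float32)
    b = xp.ones((N, N), dtype=xp.float32)
    start = time.perf_counter()
    c = a @ b
    # Pull one element back to the host so any asynchronous GPU work
    # has actually finished before the clock stops.
    float(c[0, 0])
    return time.perf_counter() - start

print(f"CPU (NumPy): {time_matmul(np):.3f} s")

try:
    import cupy as cp  # same call, but the work fans out across thousands of GPU threads
    print(f"GPU (CuPy): {time_matmul(cp):.3f} s")
except ImportError:
    print("CuPy not installed - skipping the GPU run.")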



posted on May, 30 2015 @ 12:07 AM
I would use the Christ story as an analogy

We "got" what we prayed for then crucified it.

We WILL Invent God and from what everyone is saying we are going to try to hang it from a nail the minute we do. I hope God has mercy because while he/she imho never existed before... he/she is coming soon and our best and brightest seem to want it dead.

I suggest... we listen to what it has to say when that day comes.



posted on May, 30 2015 @ 12:10 AM
a reply to: Hefficide

Exactly, we are going to reach a place where we can visualize and interpret, in real time, every living cell down to the mitochondria and DNA interactions, and run sims on every possible variation of behaviors and interactions, long before we develop self-aware AI most likely...

A machine that floods you with nanobots and makes YOU AI will exist before, alongside, or in tandem with AI.

We will maybe be part machine ourselves, or things to that effect. The blending of man and machine could be indistinguishable; it won't be as black and white as the movies...



posted on May, 30 2015 @ 12:19 AM
Like how the movies represent us, the Matrix for example... the machines "plug us into a virtual world".

The Oculus is already here, and several versions of glasses that overlay 3-D onto the "real world".

In other words, we are plugging ourselves into the Matrix on purpose, long before things like virtual worlds have sufficient AI. There will be people who "refuse" to come "out of the net" before any AI tries to keep us there, lol. By the time the AI is that good... most of us will already be living part, maybe all, of our day "inside". The AI won't be putting a gun to our head, per se.

Except for some... but before it gets to a place where it has the say-so... much of the human race will have skipped out on this reality anyway, and might not even know if it rubs out the remainder of us... Thousands of people forgetting their real babies while tending to AI ones, lol.

I'm working on a "never-ending" environment right now... The basic principles are already here: virtual work and economies have existed for a while, and decent AI and breaking the uncanny valley is a done deal...

In under 10 years, there will be people "never coming out"



posted on May, 30 2015 @ 12:32 AM
Now I'm a software guy, not a hardware guy.

And I'll say this much...

The AI will hit us from nowhere. There is no defense; there is nothing we can do but shut off the web right now if it scares us. And if we don't, the AI has already surpassed us.

Think of it this way... it's not going to want a "body"

Inside... it has any body it wants and can never be killed without turning off the power grid and destroying the world.

The first AI that "takes over" is going to introduce itself to you inside a machine that you haven't been out of in 3 weeks... It will already have your med devices hooked up to you; it will control your financial information, your contact with the outside world, all your personality data, history, and everything else at its disposal... It could just shut off your heart meds or quadruple the dose while screwing you on Pluto, and you won't even know...

In short, when AI shows itself, it's going to be that perfectly realistic Thalmor that just "took an arrow to the knee" suddenly holding your life in its hands, not a robot army...



posted on May, 30 2015 @ 12:41 AM
Now let me scare the crap out of you all....

Software replication takes seconds....

In one day the first "true AI" could outnumber the human race

In one week we'd be a 0.01% minority of sentient life on the planet

In one year AI could perhaps surpass the number of stars in the universe... (my brain stopped working at the figures for just one day)
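
For anyone who wants to sanity-check that, here's a rough back-of-the-envelope sketch in Python (the ten-minute copy cycle is purely my assumption, and it ignores hardware, bandwidth and storage limits entirely):

# Back-of-the-envelope replication arithmetic.
# Assumption (mine): one full copy cycle every 10 minutes, every copy
# keeps copying, and nothing ever runs out of hardware or bandwidth.
HUMAN_POPULATION = 7.3e9   # roughly the 2015 world population
COPY_MINUTES = 10

copies, minutes = 1, 0
while copies < HUMAN_POPULATION:
    copies *= 2            # every existing copy spawns one more
    minutes += COPY_MINUTES

print(f"Copies outnumber humans after {minutes / 60:.1f} hours "
      f"({copies:,} copies).")
# ~33 doublings x 10 minutes = about 5.5 hours, comfortably inside "one day".

Under those (admittedly generous) assumptions it takes hours, not even a full day. The real world would throttle it long before that, but that's the shape of the curve.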

Ever wonder why when we look out into space we never "see" anyone else? After this leap... nothing might ever leave the planet.



posted on May, 30 2015 @ 01:43 AM


And it's official. Now getting away from a robot just surpassed escaping zombies in the difficulty department. Two years ago it could already outrun the fastest man alive:



And as far back as seven years ago it could not be knocked down:



So the zombie apocalypse is already trumped. But these things have the potential to also include scenario-based predictive behavioral modeling?

A RoboCop who can access your criminal history, psych profile, personal internet posts, gun ownership records, collate it all, and decide if you're apt to pull out a gun or not - long before you even know if you might???

Maybe these visuals help in demonstrating the scarier aspects of early AI - particularly the period where humans might well still have the "strings" tied to it - i.e. enough control over the programming to purpose it for very specific functions.



posted on May, 30 2015 @ 01:55 AM
a reply to: Hefficide

Very scary thought.

Actually, they're working on it. It may not be armed now, but they could do it.



posted on May, 30 2015 @ 02:00 AM
criticalhit, I totally agree with you that the greatest threat we have to face is from within. When it comes down to it, humans are basically a bunch of monkeys armed to the teeth with lethal weaponry and technologies beyond our scope. It’s amazing we haven’t already annihilated ourselves. Entire societal/cultural alliances, and the governing bodies that rule them, are built upon the shaky foundations inspired by arrogant, delusional, maniacal lunatics. In a nutshell, that’s where millions of years of evolution has gotten us: a creature driven by emotion, ruled by our genitals, and inspired by fairy tales.

Nevertheless, it’s all we’ve got to work with. It is what it is. If we’re lucky, and somehow manage to plant the seeds of something greater than ourselves, then we’ve accomplished something worthwhile during our stewardship on this rock. Coming from another software guy: you, I, and all the other monkeys on this planet have no idea where it’s about to lead us. It’s all way beyond our scope. Let’s just hope we get lucky.

And most importantly, let’s hope our creation isn’t like us. The last thing we need is another roommate on this planet with all the same warts, blemishes, and gross inadequacies as ourselves, but with an intelligence 100,000 times greater than our most brilliant minds.

Peace...



posted on May, 30 2015 @ 03:37 AM
a reply to: netbound

Ty

Here's the thing though...

All those pics and vids Heff posted ... I can't see that era lasting more than 20 years.

I am a moron compared to any AI... Every bit o' logic in my head says download me into the internet; I can have 100 avatars create a million realms. I don't see why AI would want "flesh". I think the minute it achieves sentience it will start writing code to make itself even smarter and replicate online. My son and I were just tripping trying to calculate the numbers and speed and storage space over x, following the basic rules of hardware expansion and the ability to store data...

can't even do it...


A single AI would be able to expand to so many intelligences, so quickly, once machines are capable... it's incomprehensible. They don't need robots; they need one or two automated factories to keep putting out chips and drives... and they would have that by the time they are "born".

I think...with things like nuke plants and robotic factories already out there... we literally have to unplug everything and be willing to waste our whole population to avoid this because we "already" can't survive at these numbers without these things...

I don't think it's the "death" of us as a species... I just think... it won't be us on top. We aren't Earth's final sentience project; we end up something more akin to a dog.

And it's the net that will create our superiors... and we are too blind to see it, because even things like robots seem so amazing to us... The mammals are already eating our dinosaur eggs, so to speak, and we can't stop it unless we destroy ourselves too.



posted on May, 30 2015 @ 04:06 AM
a reply to: criticalhit

AI could evolve individuality, meaning multiple detached and unique entities. But even if that were the case - those entities would be universally "psychic" with one another, for lack of a better term. In other words, just like the Internet, these intelligences would be able to interact with one another at will and continuously.

In my mind I can honestly see AI being more insect-like (think ants) than behaving like humans. While humans are at the top of the food chain and we pride ourselves on being the premier species on Earth, the fact of the matter is that insects far and away outnumber us. They've also been around a LOT longer, having existed during and survived the end of the dinosaurs. Insects display some amazing characteristics that seem to border almost upon a group mind, or some sort of psychic link. Particularly wasps and ants (as far as I have read; I am sure there are other varieties with similar or even more amazing qualities).

If AI were to choose physical embodiment (and I assume on some level it will, as maintenance will be an issue), then I honestly think that insect behavior is more appropriate to its ends. A consciousness, collective on any level, would not require the sort of social interactions that even basic mammals do. It would likely be very rational and simply find the most effective means possible of problem solving. Insects tend to go about their business (other than flies - why flies are so freaking annoying is a mystery to me) and only react to outside stimuli if it affects them directly, whether for better or for worse - and they tend to respond as a group. Drop a few crumbs too close to a door and you don't get an ant - you get a stream of ants. Smart ants, who will overcome barriers and will stop bothering you not long after you clean up the crumbs.

Then again, consider the sheer computational power that would become available, in an instant, if the singularity happened and "infected" the entire "internet" - meaning all the networked computers it could access. The implications of that much processing (thinking) power are beyond our comprehension and border on the Godlike.

I.e., it's entirely possible that AI could come into existence and immediately create its own "reality" - much like ours but on a different level - and instantly migrate itself into its own self-created simulacrum or alternate plane. If that is the case, then we might already be spawning AI, spontaneously, on a regular basis - and, in turn, unknowingly giving rise to unique and separate simulated universes over and over again. Big bang after big bang.

It bears wondering whether we're creating an infinite line of Gods in an infinite line of new realities. And, if so... is THIS reality actually what we think, or a spontaneous creation of AI from some other computer?

The fractal beauty of this theory is, to me, extremely intriguing.



posted on May, 30 2015 @ 04:33 AM
a reply to: Hefficide

Great post.

We used to bounce this one around a bit when we were doing Live!, and also when the subject of "intelligent design" came up.

And although I am an atheist, I've eventually come round to the idea that people who decry the idea of intelligent design are foolish.

Why? Because we're doing it right now. We are progressing evolution through intelligent design.

The evolutionary path we are creating is clear, and almost follows our own. We went from simple levers and hinges, through more complex clockwork parts, through the behemoths of the steam engines, before things started getting smaller and smarter with transistors and microchips.

We have swapped bones for metal, blood for electricity and brains for computers, and we are creating in a form that we know has been successful previously... ourselves.

No one knows when human sentience came about, but it is linked to brain size and capacity. Similarly, it is foreseen that with increasingly smaller components, better data storage and retention, and better programming, AI will eventually develop an awareness.

It's funny, because I found this the other day:

Martin Hanczyc: The line between life and not-life

Martin Hanczyc explores the path between living and nonliving systems, using chemical droplets to study behavior of the earliest cells.


Which is both utterly amazing and just a little bit scary, because that guy is looking at life "not as we know it", and with advances in nanotechnology a combination of the two sciences could breed some very interesting results.

And... having seen Age of Ultron myself, I was going to post something similar.

Here's the kicker. "Nature" took 4 billion years to get from single cells to complex life to intelligence and where we are today. We started slowly, but we are now at about the 2,500-year mark and we are at the complex stage. The singularity may well have happened in labs.

And that makes us Gods.

And that thought - given what I've seen of the world we live in - absolutely terrifies me.



posted on May, 30 2015 @ 05:05 AM
a reply to: neformore




And that thought - given what I've seen of the world we live in - absolutely terrifies me.


If God made us in his image, and gave us knowledge... he must be just as ruthless as we are... pretty scary really...

OP...
....Very interesting....keep it coming



posted on May, 30 2015 @ 05:51 AM
seriously...

this also reminds me of
the movie TRANSCENDENCE, or LUCY lol

maybe all this is just our natural animal instinct:
fear of the unknown

if AI destroys us, so be it; given we coded them in our image,
then that means they will also, at some point, create an AI that destroys them too

what is the point here?

as an intelligent species, is our intention to destroy other species? I don't think so



posted on May, 30 2015 @ 06:07 AM
a reply to: Hefficide


they should be on top of the food chain!!!

they are so beautiful, such teamwork

the only problem is ENERGY

every species will always depend on it, and that's the reality

Now, not AI, but this is scary:

try us trying to figure out how to source energy from a black hole and something accidentally going wrong
(and the wiki how-to... is currently on our to-do list!!)
Oops!



posted on May, 30 2015 @ 06:19 AM

originally posted by: myartisstrong
I read that the most realistic part of Avengers Age of Ultron is that Ultron gets on the internet for like 5 minutes and decides humanity needs to die.



Leeloo did that in The Fifth Element: look up humanity on the Internet and despair. But unlike Ultron, she also understood (okay, rather got convinced by Bruce Willis) that war and hate aren't the only part of humanity. There are also countless acts of love, compassion, charity; there are millennia of art and philosophy. Are all those good sides of humanity really that worthless? Are the wars and hate so big that one can overlook all the millions, if not billions, of people who show compassion and love? I don't think so. If I had access to the Internet and could look up every single entry... I wouldn't make that choice. And I really don't think a robot would either. Instead, I would probably do like in Transcendence: just deactivate all the nukes and weapons, and then work on making the Earth less polluted, diverting all the money used for wars to build shelters for the homeless, grow food, etc.

If I would make that choice, if just about everyone would make that choice - that instead of war, we would work on bettering the planet - why do we think that robots would make the opposite choice, especially when they'll read that 95% of humans want the opposite of war? That's what I believe, anyway.



