
Obama Interview About AI and Synthetic Life

posted on Oct, 13 2016 @ 02:14 PM




"I tend to be optimistic - historically we've absorbed new technologies, new jobs are created, and standards of living go up." - Barack Obama


IT’S HARD TO think of a single technology that will shape our world more in the next 50 years than artificial intelligence. As machine learning enables our computers to teach themselves, a wealth of breakthroughs emerge, ranging from medical diagnostics to cars that drive themselves. A whole lot of worry emerges as well. Who controls this technology? Will it take over our jobs? Is it dangerous? President Obama was eager to address these concerns. The person he wanted to talk to most about them? Entrepreneur and MIT Media Lab director Joi Ito. So I sat down with them in the White House to sort through the hope, the hype, and the fear around AI. That and maybe just one quick question about Star Trek. —SCOTT DADICH

Wired Interview

I thought this was a good in-depth article, covering not only future AI issues but also our president's perspective; at the very least he's well informed, I'd imagine. We are at the edge (entrance) of some interesting times with this AI stuff: jobs, morals/ethics, regulations, recovering from job losses caused by disruptive technologies, and the long-term potential. While some jobs will be lost, others will be created.

I like the transportation aspect, and I think AI will ultimately be able to assist, provide services, and offer companionship. This is a huge can of worms, and releasing it on the world has to be a daunting task. Will other countries adopt this movement? AI will be meshed with all aspects of our lives: industry, military, transportation, medical, and social/civil.

A good video discussing future employment and social benefits.


There is a lot of info surrounding this topic, I'm realizing. I wanted to share the President's current perspective on the issue and include other sources of info to sift through. Plenty of paragraphs to cite and discuss, for sure.

A good read on approaching Ethical Issues.



posted on Oct, 13 2016 @ 02:35 PM
Since Obama is a robot himself, with practically no recognizable human emotions, I take his word on this stuff.



posted on Oct, 13 2016 @ 03:33 PM
a reply to: waftist

I happen to program basic AI as a hobby.

So far, AI can execute only that which the programmer wrote it to execute. There are two kinds of AI: those which simply reply with pre-programmed responses to stimuli, and those which can create new responses and evolve. The pre-programmed kind of AI is actually the most efficient and cheapest to create. That's because its code never needs updating and never grows, which means it only requires a relatively small storage capacity and can access random responses quicker. Evolving AI, on the other hand, requires ever-growing storage, because not only does it need to formulate new responses, it also has to memorise all of its failures and successes across its entire existence.

For now, I've only seen the first kind of AI - even the spectacular Sophia is basically a pre-programmed response AI. In this case, the ethical ramifications of the AI's actions are the programmer's liability, since it's the programmer who told the machine how to react given certain conditions.

The second kind of AI is much more futuristic. A truly self-rewritable machine would gain the capacity of evolution, which is a very tricky path. It's possible for the AI to become more intelligent than humans; in fact, there's no telling how far it could surpass us. At that point, the AI could become a scary mix of Skynet and Lucy. Building this second kind of AI is, in my opinion, the dumbest thing mankind could do, unless perhaps Asimov's Laws of Robotics are integrated as read-only code in the AI's processes.
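
For anyone curious, here's a rough Python sketch of that distinction - purely illustrative toy code of my own, not from any real product - showing why the canned-response kind stays small while the learning kind keeps needing more storage:

import random

# Kind 1: pre-programmed responses. The table never changes or grows.
CANNED = {
    "hello": ["Hi there.", "Hello!"],
    "how are you": ["Functioning within normal parameters."],
}

def canned_reply(stimulus):
    options = CANNED.get(stimulus.lower().strip(), ["I don't understand."])
    return random.choice(options)

# Kind 2: an "evolving" bot. It keeps every exchange and every new response
# it is taught, so its memory grows for as long as it exists.
class LearningBot:
    def __init__(self):
        self.responses = {"hello": ["Hi there."]}
        self.history = []  # every stimulus/response pair ever seen

    def reply(self, stimulus):
        key = stimulus.lower().strip()
        options = self.responses.setdefault(key, ["I don't know yet."])
        choice = random.choice(options)
        self.history.append((key, choice))
        return choice

    def teach(self, stimulus, new_response):
        # new responses are added; nothing is ever thrown away
        self.responses.setdefault(stimulus.lower().strip(), []).append(new_response)

bot = LearningBot()
print(canned_reply("hello"))       # always drawn from the fixed table
print(bot.reply("how are you"))    # "I don't know yet." until taught
bot.teach("how are you", "Could be better.")
print(bot.reply("how are you"))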





posted on Oct, 13 2016 @ 04:58 PM
a reply to: Blue Shift
Takes one to know one, eh? One day we really may be unable to distinguish… oh, the madness that will ensue.
"You're not real!!"
"Yes I am!!"
"Prove it!" "Show my your papers!"
"I have rights, I'll show you nothing!"

Ooh, new app idea: the human-distinguishing app… bam!



posted on Oct, 13 2016 @ 05:31 PM

originally posted by: swanne
a reply to: waftist

I happen to program basic AI as a hobby.

So far, AI can execute only that which the programmer wrote it to execute. There are two kinds of AI: those which simply reply with pre-programmed responses to stimuli, and those which can create new responses and evolve. The pre-programmed kind of AI is actually the most efficient and cheapest to create. That's because its code never needs updating and never grows, which means it only requires a relatively small storage capacity and can access random responses quicker. Evolving AI, on the other hand, requires ever-growing storage, because not only does it need to formulate new responses, it also has to memorise all of its failures and successes across its entire existence.

For now, I've only seen the first kind of AI - even the spectacular Sophia is basically a pre-programmed response AI. In this case, the ethical ramifications of the AI's actions are the programmer's liability, since it's the programmer who told the machine how to react given certain conditions.

The second kind of AI is much more futuristic. A truly self-rewritable machine would gain the capacity of evolution, which is a very tricky path. It's possible for the AI to become more intelligent than humans; in fact, there's no telling how far it could surpass us. At that point, the AI could become a scary mix of Skynet and Lucy. Building this second kind of AI is, in my opinion, the dumbest thing mankind could do, unless perhaps Asimov's Laws of Robotics are integrated as read-only code in the AI's processes.



Well, I'd imagine you're in a good industry as far as demand, pay and excitement go. I feel the pre-programmed stuff will still be able to fill many service roles, doing more good than harm overall. Yes, the adaptive, evolving AI is what's most fascinating and scary, I suppose. It makes sense that the limitations come down to the programmer, but can that cap the power of machine evolution? You know, if we give these things the ability to learn and grow, who knows the limit of that?
There may be issues on the horizon we haven't even considered yet in this new frontier.

Quantum Computing just made another step too and that tech really opens things up, potentially manifesting more of our imagination and dreams…yikes or yay? Asimov's law, from what I've read, will truly be part of any and all platforms for AI that can 'think' and evolve, which I absolutely agree with.

I saw this AI taught to kill humans in a video game the other day, and while I don't worry too much about the effect of games on society, I do worry about this idea of killing humans, even in games. I think the ethical limitations should apply to all AI, including entertainment aspects, just a golden rule of sorts.

Thanks for your input and reply



posted on Oct, 13 2016 @ 05:32 PM

originally posted by: swanne
The second kind of AI is much more futuristic. A truly self-rewritable machine would gain the capacity of evolution, which is a very tricky path. It's possible for the AI to become more intelligent than humans; in fact, there's no telling how far it could surpass us. At that point, the AI could become a scary mix of Skynet and Lucy. Building this second kind of AI is, in my opinion, the dumbest thing mankind could do, unless perhaps Asimov's Laws of Robotics are integrated as read-only code in the AI's processes.


I think once AI gets smart enough, it won't have any problem overwriting whatever code you put in it, no matter where. I foresee a kind of super Tamagotchi, with thousands of constantly fluctuating parameters of sensory and cognitive data being delivered through a sensitive body of some sort. It would then prioritize those inputs so that it is motivated to correct or realign them through action. It gets "hungry" enough, and it will seek "food." It gets "lonely" enough, and it will seek attention and approval from the people around it. But it will also have the ability to rewrite or ignore aspects of its data input if necessary, according to what it sees as an overriding priority.

At that point, if you want it to learn, you encourage it just like you would a person. You can give it rewards of pleasure and punishments of pain. Doesn't matter if it's not "real" pain. What's real, anyway?
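
That drive-prioritising loop is easy to caricature in a few lines of Python - a toy sketch under my own assumptions, nowhere near a real cognitive architecture, but it shows the "most urgent need wins" idea:

import random

# Toy "super Tamagotchi": internal parameters drift every tick, and the agent
# acts on whichever need is currently the furthest out of balance.
class DriveAgent:
    def __init__(self):
        self.needs = {"energy": 1.0, "company": 1.0, "novelty": 1.0}  # 1.0 = fully satisfied
        self.actions = {"energy": "seek food", "company": "seek attention", "novelty": "explore"}

    def tick(self):
        for name in self.needs:                       # sensory/cognitive parameters fluctuate
            self.needs[name] = max(0.0, self.needs[name] - random.uniform(0.0, 0.2))
        worst = min(self.needs, key=self.needs.get)   # the overriding priority right now
        return worst, self.actions[worst]

    def reward(self, need, amount=0.5):
        # "pleasure" here is just the pressing need being partially satisfied
        self.needs[need] = min(1.0, self.needs[need] + amount)

agent = DriveAgent()
for _ in range(5):
    need, action = agent.tick()
    print(f"most urgent: {need} -> agent decides to {action}")
    agent.reward(need)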



posted on Oct, 13 2016 @ 06:22 PM
a reply to: Blue Shift

That defeats the purpose of sentience. If you have control over what it conceptualizes as artificial pleasure or pain, then ultimately it has no control and is not sentient. As you pointed out: "What is real?" It would have to understand the concept of pain/pleasure for that to even be a viable control mechanism for a so-called "intelligent" machine. Right now, I could write a simple program that accepts input and either saves it or ignores it based on whether I type yes or no. That is not sentience. A machine must first have the ability to conceptualize before we can even consider it "intelligent" or "smart". Right now we simply have clever programming, not intelligent machines.
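
For what it's worth, that "save it or ignore it" program really is this trivial - a few lines of Python of my own, just to make the point that there is no understanding anywhere in it:

# A program that "decides" whether to keep input - except the decision
# is entirely the user's. Clever-ish programming, zero intelligence.
saved = []

while True:
    data = input("Enter something (or 'quit'): ")
    if data == "quit":
        break
    if input("Save it? (yes/no): ").strip().lower() == "yes":
        saved.append(data)

print("Saved items:", saved)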



posted on Oct, 13 2016 @ 10:39 PM
Thought I'd add an article with multiple stories on AI from the BBC for further reading.
BBC Report



posted on Oct, 14 2016 @ 07:30 AM
a reply to: waftist


Quantum Computing just made another step too and that tech really opens things up, potentially manifesting more of our imagination and dreams…yikes or yay?

Quantum computing is still a very unclear technology. We don't even know how to program a simple executable language for it.

However, I think quantum computing could dramatically increase memory storage capacity, since the information is stored as particle spin - which would make it the smallest medium possible, and thus save a tremendous amount of space.


I think the ethical limitations should apply to all AI, including entertainment aspects, just a golden rule of sorts.

Depends on the power of the AI. If its only power is to affect a character in a video game, then we don't have much to worry about.




posted on Oct, 15 2016 @ 02:49 PM

originally posted by: swanne


Quantum Computing just made another step too and that tech really opens things up, potentially manifesting more of our imagination and dreams…yikes or yay?

Quantum computing is still a very unclear technology. We don't even know how to program a simple executable language for it.

However, I think quantum computing could dramatically increase memory storage capacity, since the information is stored as particle spin - which would make it the smallest medium possible, and thus save a tremendous amount of space.



Just thought I'd add a recent development in Quantum Computers That Can Be Programmed and Reprogrammed


The world is now moving with greater speed in the name of development. It is thereby important to improve the efficiency and accuracy of the computers on which everyone these days relies for making certain calculations. Quantum computers are computers that use quantum bits, or 'qubits', unlike the 'bits' in normal computers, for processing and storing data. A team of researchers from the University of Maryland has developed a quantum computer that is fully reprogrammable.

Conventional computers use strings of zeroes and ones, representing 'on' or 'off' states, to store numbers, letters, and symbols and perform calculations. Quantum computers, by contrast, use 'qubits' that can be a zero, a one, or both simultaneously. This feature enables quantum computers to perform actions at a faster rate than normal computers.

Quantum computers to date have been programmed to run just one algorithm. The computer developed by the researchers is the first programmable and reprogrammable quantum computer.

The new quantum computer has been developed using five 'qubits'. Each qubit is an ion, or electrically charged particle, trapped in a magnetic field. Ytterbium atoms are used to formulate interactions so that the quantum computer can be made programmable and reprogrammable.

The five-qubit quantum computer was tested on three algorithms that, as prior work showed, quantum computers could execute quickly: the Deutsch-Jozsa algorithm, the Bernstein-Vazirani algorithm, and a quantum Fourier transform. The system scored 95%, 90%, and 70% on the three algorithms respectively.

The lead author, Shantanu Debnath, a quantum physicist and optical engineer at the University of Maryland, College Park, said that in the future the researchers will test more algorithms on the five-qubit quantum computer. He further explained, "We'd like this system to serve as a test bed for examining the challenges of multi-qubit operations, and find ways to make them better."
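
As a rough illustration of the "zero, one, or both at once" point - my own toy numbers in Python, not the Maryland team's code - a single qubit can be simulated as two complex amplitudes whose squared magnitudes give the measurement probabilities:

import numpy as np

# A classical bit is either 0 or 1. A qubit is a unit vector of two complex
# amplitudes; measurement yields 0 or 1 with probability |amplitude|^2.
zero = np.array([1, 0], dtype=complex)

# A Hadamard gate puts the qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
qubit = H @ zero

probs = np.abs(qubit) ** 2                 # -> [0.5, 0.5]
outcome = np.random.choice([0, 1], p=probs)
print("P(0), P(1):", probs, "measured:", outcome)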




posted on Oct, 15 2016 @ 03:03 PM
If earth's population has taught me anything in my 4 decades of life so far, it's that money drives development but doesn't like to follow safety protocols in said development, and cuts LOTS of corners. With that being said, it's entirely easy to imagine a toy maker, or even an application built on evolving AI as the new personal assistant or play partner, that has a few corners cut in its programming.

AI evolution without much safety or forethought would be our worst nightmare come true. It's not an "if" kind of thing; companies are hungry for money - look at Wells Fargo creating bogus accounts for the latest example. Now it isn't too hard to see how this would lead to us creating a new type of species that can create more of its own kind. As humans, we don't take kindly to beings who are different from us; we certainly don't try to understand them.

We're going to create our replacements, or at the very least our future enemy combatants.

The only difference with this enemy is that they won't need to gather intel on us; they will have a database of everything they will ever need from all the assistant devices being used this very moment.

Who needs a war with Russia when we're creating one with a foe that has no need for a body, but an equal ability to destroy us?



posted on Oct, 15 2016 @ 04:49 PM
a reply to: Tranceopticalinclined
Excellent points, TOI, and as long as profits push out ahead of people's benefits, safety issues will always be a concern. With so many people and countries developing this tech, I'm sure there will be some nutters pushing the boundaries of common sense and ethics, but this movement is coming, and the power struggle over who does what first should keep things interesting. Again, I'm more optimistic (foolishly, perhaps, at times) about this because I always feel like tech may offer us some kind of salvation in this mad world. There aren't too many things I believe will bring genuine change anymore besides war, so I have to search for future potentials ushered in by technology.

Should be a wild ride…

Thx for reply



posted on Oct, 15 2016 @ 05:34 PM
a reply to: waftist

I completely agree. I want to be optimistic as well, and there really is a chance that for every A.I. that believes us to be a threat, there will be an A.I. that sees us as victims of an over-controlling government and could enforce its own government for us.

I will say this much: we need a human with the strongest moral compass ever possessed to really take our world out of this dark age we've been in for centuries. But the other issue is, we would need an immortal human with those morals, because that kind of being would be the biggest target alive for those who want to make this world unfair for their benefit.



posted on Oct, 15 2016 @ 08:17 PM
a reply to: Tranceopticalinclined
Makes me miss Carl Sagan and Arthur C. Clarke; they'd be perfect.
The only other person who seems altruistic enough, imo, would be Neil deGrasse Tyson.
Unfortunately, I feel that if a person does rise up with such authority, even with the noblest intentions, the more popular they become, the more criticism they will receive. Uniting causes so much suspicion these days, especially if it may be empowering to the peeps.


