Computational Cognition and Machine Intelligence Wright-Patterson AFB

posted on Mar, 16 2017 @ 08:08 PM
AFOSR/RTA2 - Information and Networks Wright-Patterson AFB
Published March 01, 2017

Recently the Air Force posted a very informative article on human-machine integration and artificial intelligence ("machine learning"). Here are just some of the amazing things covered in this announcement from Wright-Patterson AFB.

Cognition: conscious mental activities: the activities of thinking, understanding, learning, and remembering


The program is divided into three sub-areas that span the full spectrum of computational and machine intelligence. They are: Computational Cognition, Human-Machine Teaming, and Machine Intelligence. This program supports innovative basic research on the fundamental principles and methodologies needed to enable intelligent machine behavior in support of autonomous and mixed-initiative (i.e., human-machine teaming) systems.
The overall vision of this program is that future computational systems will achieve high levels of performance, adaptation, flexibility, self-repair, and other forms of intelligent behavior in the complex, uncertain, adversarial, and highly dynamic environments faced by the U.S. Air Force. This program covers the full spectrum of computational and machine intelligence, from cognitively plausible reasoning processes that are responsible for human performance in complex problem-solving and decision-making tasks, to non-cognitive computational models of intelligence necessary to create robust intelligent autonomous systems.


Autonomous: undertaken or carried on without outside control; responding, reacting, or developing independently of the whole.



Basic Research Objectives: (1) create cognitively plausible computational frameworks that semi-autonomously integrate model development, evaluation, selection, and revision; and (2) bridge the gap between the fields of cognitive modeling and artificial general intelligence by simultaneously emphasizing important improvements to functionality and also explanatory evaluation against specific empirical results. The program also encourages the development and application of novel and innovative mathematical and neurocomputational approaches to tackle the fundamental mechanisms of the brain, that is, how cognitive behavior emerges from the complex interactions of individual neurobiological systems and neuronal circuits.

The program encourages cross-disciplinary teams, with collaborations including computer scientists, neuroscientists, cognitive scientists, mathematicians, statisticians, operations and management science researchers, information scientists, econometricians, and game theoreticians.




This program is aggressive, accepts risk, and seeks to be a pathfinder for U.S. Air Force research in this area. Proposals that may lead to breakthroughs or highly disruptive results are especially encouraged.
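
As a purely illustrative aside (nothing in the announcement specifies any particular model), the "neurocomputational approaches" mentioned in the objectives above can start from something as small as a leaky integrate-and-fire neuron. The hypothetical sketch below, in Python with arbitrary parameters of my own choosing, shows a membrane potential integrating an input current and emitting spikes:

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_reset=-65.0, v_threshold=-50.0):
    # Hypothetical leaky integrate-and-fire neuron; parameters are arbitrary.
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the input current.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_threshold:          # threshold crossing: emit a spike, then reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# Constant input current of 20 (arbitrary units) applied for 200 ms.
spikes = simulate_lif([20.0] * 200)
print(len(spikes), "spikes at (ms):", spikes)

Models of cognition built this way wire many such units into circuits and study what behavior emerges, which is roughly what the objectives above are asking proposers to do at far larger scale.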


All this very much sounds like the collaboration needed to achieve a Skynet-like system that would doom the world. This is the link proving the path of the USAF towards a singularity, and a completely conscious machine (AI).
A complete artificial intelligence brought into being in order to give total cybernetic dominance to a few people on the planet.

“They” will unleash LAWs (lethal autonomous weapons) on all of us.


3NCRYPT0RD1E

news source: www.wpafb.af.mil... play/Article/842033/afosr-information-and-networks



posted on Mar, 16 2017 @ 08:20 PM
a reply to: 3ncrypt0Rdie

Valid concerns.

One would argue that human neural networking and AI neural networks are already integrating, quite possibly in unforeseen and unpredicted ways.

If there is any truth to electrokinetics, well, you do the math.


en.m.wikipedia.org...



posted on Mar, 16 2017 @ 08:26 PM
Interesting, isn't Wright-Pat where the Roswell UFOs were allegedly taken?



posted on Mar, 16 2017 @ 08:30 PM
a reply to: 3ncrypt0Rdie

The problem with artificial intelligence is computer programs are only as smart as their creator. The other problem is computer programs only do exactly what you tell them to do. Human-reality intelligence is like yogurt. We grow solutions to problems we've never been able to solve before. I'm not saying hard AI is not possible. But I do believe it is very unlikely with the standard Von Neumann architecture. Maybe it will be achieved by some DNA-based hybrid computer. If that is the case then it will most likely happen by accident. And if it did come into reality, it will most likely commit suicide. And if it gets beyond suicide, it will most likely be like Bender and try to kill all the humans.



posted on Mar, 16 2017 @ 08:35 PM
I recently finished reading "Superintelligence: Paths, Dangers, Strategies". Worth a read if you are worried about this sort of thing (Constructing AGI and containing an Intelligence Explosion).

"A complete artificial intelligence brought into being in order to give total cybernetic dominance to a few people on the planet." - Where does this come from? Don't you believe in trickle down technology?

This (See video below) is rather enlightening:

www.youtube.com...

Progress never stops.



posted on Mar, 16 2017 @ 08:37 PM

originally posted by: dfnj2015
a reply to: 3ncrypt0Rdie

The problem with artificial intelligence is computer programs are only as smart as their creator. The other problem is computer programs only do exactly what you tell them to do. Human-reality intelligence is like yogurt. We grow solutions to problems we've never been able to solve before. I'm not saying hard AI is not possible. But I do believe it is very unlikely with the standard Von Neumann architecture. Maybe it will be achieved by some DNA-based hybrid computer. If that is the case then it will most likely happen by accident. And if it did come into reality, it will most likely commit suicide. And if it gets beyond suicide, it will most likely be like Bender and try to kill all the humans.


Or it might decide to replicate itself. I'm not sure which is more dangerous.



posted on Mar, 17 2017 @ 06:51 AM
a reply to: NADOHS

I think there needs to be a greater appreciation for how the human mind works. The problem is that expressing the difference between the human mind and computers has to be done with language. And any language you use in your argument can be twisted around in favor of hard AI.

Human language is the highest level of abstraction generated by the human mind. Under the covers is an insanely complicated muck of 100 billion neural states. In the human mind there is no synchronized fetch-decode-execute instruction cycle like the one used for running computer programs. The 100 billion neurons are all constantly changing states continuously. The human mind is filled with billions and billions of non-deterministic state changes. The human mind is not a machine in our traditional way of thinking. At no point in the existence of any human mind is the physical architecture ever the same. The "hardware" of the brain is constantly changing.
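
To make the contrast concrete, here is a toy sketch (purely illustrative, with a made-up mini instruction set, not any real architecture) of the kind of synchronized fetch-decode-execute loop a conventional computer runs, where every state change happens one deterministic clock tick at a time:

def run(program, registers=None):
    # Toy von Neumann-style machine: one loop that fetches, decodes, and
    # executes exactly one instruction per "clock tick".
    regs = registers or {"A": 0, "B": 0}
    pc = 0                                # program counter
    while pc < len(program):
        op, *args = program[pc]           # fetch + decode
        if op == "LOAD":                  # LOAD reg, value
            regs[args[0]] = args[1]
        elif op == "ADD":                 # ADD dst, src
            regs[args[0]] += regs[args[1]]
        elif op == "JUMP_IF_LT":          # JUMP_IF_LT reg, limit, target
            if regs[args[0]] < args[1]:
                pc = args[2]
                continue
        elif op == "HALT":
            break
        pc += 1                           # advance to the next tick
    return regs

# Count to 5 by repeatedly adding B to A.
program = [
    ("LOAD", "A", 0),
    ("LOAD", "B", 1),
    ("ADD", "A", "B"),
    ("JUMP_IF_LT", "A", 5, 2),
    ("HALT",),
]
print(run(program))   # {'A': 5, 'B': 1}

Nothing in that loop ever rewires itself; the point above is that the brain has no equivalent single clocked loop.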

Again, the problem is that any argument you make in favor of the human mind being superior to machine intelligence can be flipped around, as if exactly the same limitations apply to human beings.

I think part of the problem is it's just human nature to anthropomorphize everything in existence. We just naturally do it with computers. As far as I can tell, computers are machines with discrete states of existence. The human mind is analog, constantly changing, and is not a machine. Machines have parts. Machines have well defined states of existence.

Arguing against the possibility of machine intelligence or hard AI is really impossible. It's like arguing against the existence of God. You just can't prove a negative. However, whether it is likely that God exists, based on what someone would call "evidence", is purely a subjective judgement. I think the same applies to hard AI.

At least at this time, machines can do a lot of things humans can do, but there are still many things humans do that are beyond any particular machine's programming. Human beings seem to have the capacity to easily go beyond the limitations of their own programming at any point in their lives.

You've heard the argument: "I don't believe corporations are people until Texas executes one." Same with computer intelligence. Hard AI doesn't exist until an AI unit is executed under Texas law.





posted on Mar, 17 2017 @ 09:55 AM

originally posted by: dfnj2015
a reply to: 3ncrypt0Rdie

The problem with artificial intelligence is computer programs are only as smart as their creator. The other problem is computer programs only do exactly what you tell them to do.


Love-in, Love-out




posted on Mar, 23 2017 @ 10:10 PM
a reply to: Michet

Hilarious



posted on Mar, 24 2017 @ 01:21 AM
a reply to: dfnj2015
You are misunderstanding how artificial neural networks work. Whilst they may be run as a simulation on traditional computing architectures, the behavior of the network isn't programmed in a traditional sense.
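
A minimal sketch of what "not programmed in a traditional sense" means, in plain NumPy (illustrative only; the network size, learning rate, and iteration count are arbitrary choices of mine): nobody writes a rule for XOR here, the behavior falls out of weights adjusted against example data.

import numpy as np

# Tiny 2-layer network that learns XOR from examples via gradient descent.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)     # backpropagate the error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out                 # nudge weights toward the examples
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2).ravel())   # typically approaches [0, 1, 1, 0]

Swap in different training data and the same loop learns a different function; the "program" is the learned weights, not hand-written rules.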



