
Project DARPA-BAA-09-03 - will this be the 1st true machine intelligence?


posted on Dec, 9 2009 @ 06:29 PM
reply to post by JohnPhoenix
 



I do not think even if this project works, that the computer will be that smart, but years down the line model X 5000 could be.

I hope you're right ... but the specs were pretty clear on the point that the intention and goal is to create a system capable of reading and understanding any human-readable document and of generating new inferences and conclusions from the extracted information ... that sounds extremely smart to me!




posted on Aug, 6 2010 @ 07:08 PM
reply to post by Solomons
 


First, thank you to the OP and participants so far. What an important thread. But I do want to take issue with the concept of the human brain as a blank slate at birth. No parent of a newborn child would believe their child's brain is "a blank slate." Reason? Even newborn babies have strong opinions about what they do and do not want. No question about it. So there is plenty going on in the human brain at birth, even if the baby cannot yet speak words.

On another topic, some time ago on BTS, I mentioned a software course I took in the 1980s from Peter Pin-Shan Chen, PhD, inventor of the Entity-Relationship approach to database design. (It's no longer in my own history of posts, because ATS now cuts off our Posting History after 250 posts.) Dr. Chen stated that natural language is difficult for machines to analyze, because natural language is both ambiguous and context-dependent. Any true AI would of course have to surmount those hurdles.
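Chen's point about ambiguity and context-dependence can be made concrete with a toy sketch: the same word carries several senses, and only the surrounding context picks one. Everything below (the senses, the cue words, the overlap scoring) is invented for illustration and has nothing to do with the DARPA program itself.

```python
# Toy word-sense disambiguation: the word "bank" is ambiguous, and only
# surrounding context resolves it. All senses and cue words are invented.

SENSES = {
    "bank": {
        "financial_institution": {"money", "loan", "deposit", "account"},
        "river_edge": {"river", "fishing", "mud", "shore"},
    },
}

def disambiguate(word, context_words):
    """Pick the sense whose cue set overlaps the context the most."""
    best_sense, best_overlap = None, -1
    for sense, cues in SENSES[word].items():
        overlap = len(cues & set(context_words))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(disambiguate("bank", ["i", "took", "a", "loan", "from", "the", "bank"]))
print(disambiguate("bank", ["we", "went", "fishing", "on", "the", "river", "bank"]))
```

A real system faces the same problem for nearly every content word in every sentence, which is exactly why Chen called the task difficult.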

Somewhere on the Unknown Country website, Whitley Strieber makes a distinction between AI programs that only tell the truth, and AI programs that can lie to humans. As Gilda Radner used to say, "It's always something."

[edit on 8/6/2010 by Uphill]



posted on Aug, 6 2010 @ 07:19 PM
FYI, 9 out of 10 DARPA projects fail.

Something to consider.



posted on Aug, 6 2010 @ 09:57 PM


DARPA is planning to build a system that is able to read any "natural language" text in any format (book, magazine, blog, newspaper, email, etc, etc) and process the information EXACTLY as a human would do.


DARPA is planning on giving money to some academic or contractor who writes a nice proposal which makes a DARPA program manager and a review committee believe that they will make progress in that direction.

If DARPA put out a BAA for Zefram Cochrane's warp drive getting 30 parsecs per gallon of dilithium, would that make it exist?

By the way, statistical machine translation (translate.google.com) does reasonably well without ever being taught the specific, deep structure of human language, given an extremely large corpus of text and clever statistical methods. And statistical translation is still very far from informed human translation.
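For what it's worth, the core of the statistical-translation idea fits in a few lines: pick the target phrase with the highest estimated probability given the source phrase. In a real system those probabilities are estimated by counting an enormous parallel corpus; the tiny phrase table and its probabilities below are invented.

```python
# Minimal phrase-based statistical translation sketch: choose the most
# probable target phrase for each known source phrase. The probabilities
# in a real system come from a huge bilingual corpus; these are made up.

PHRASE_TABLE = {
    "guten tag": [("good day", 0.6), ("hello", 0.4)],
    "vielen dank": [("thank you very much", 0.8), ("many thanks", 0.2)],
}

def translate_phrase(src):
    """Return the highest-probability translation, or the input if unknown."""
    candidates = PHRASE_TABLE.get(src)
    if not candidates:
        return src  # out-of-vocabulary: pass the phrase through unchanged
    return max(candidates, key=lambda pair: pair[1])[0]

print(translate_phrase("guten tag"))
print(translate_phrase("vielen dank"))
```

Note that nothing here "understands" German or English; the system only ranks strings by counts, which is precisely the gap between statistical translation and informed human translation.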

Statistical text classification methods can now automatically induce categories bearing significant resemblance to human categorizations, using just the texts. Does that mean they "understand" the texts? Hard to say what "understand" means, but the Bayesian posteriors under Latent Dirichlet Allocation often look 'smart'.
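As a hedged sketch of that statistical-classification flavor (a multinomial Naive Bayes over word counts, which is far simpler than LDA but shows the same "categories from counts, no understanding required" idea; the tiny corpus and labels are invented):

```python
import math
from collections import Counter, defaultdict

# Train a multinomial Naive Bayes text classifier from labelled documents,
# then classify a new one. A stand-in for heavier statistical models such
# as LDA; the two-document training corpus below is invented.

def train(docs):
    """docs: list of (label, list_of_words). Returns counts for classification."""
    word_counts = defaultdict(Counter)   # label -> word -> count
    label_counts = Counter()             # label -> number of training docs
    vocab = set()
    for label, words in docs:
        label_counts[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return word_counts, label_counts, vocab

def classify(text_words, word_counts, label_counts, vocab):
    """Return the label maximizing log P(label) + sum of log P(word|label)."""
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for w in text_words:
            # Laplace smoothing so unseen words don't zero the probability.
            score += math.log((word_counts[label][w] + 1) / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

train_docs = [
    ("sports", ["ball", "goal", "team", "match"]),
    ("politics", ["vote", "election", "senate", "bill"]),
]
wc, lc, vocab = train(train_docs)
print(classify(["goal", "team"], wc, lc, vocab))
```

The classifier gets the category right purely from co-occurrence counts, which is the sense in which such models "look smart" without anyone being able to say they understand.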

The winning bid for this DARPA project will surely have some similarity. And it will still be very far away from a true strong AI.

I think Vernor Vinge is full of it.

In physics, there are no "real" singularities, you just get different physics there which will smooth out the 'infinities'/shock waves.

For real strong AI? After 200 years of reliable quantum computing (that starts 50 years from now) and really profound theoretical progress. And even then, no "singularity" in human experience.

Suppose you get one human-level machine intelligence. So what? We have 7 billion of them already. Just one, or a few, isn't anything like the combined historical experience of billions of humans.

At that point, the computers can start writing their own DARPA proposals.

[edit on 6-8-2010 by mbkennel]




posted on Aug, 9 2010 @ 08:34 AM
Today's op-ed piece on the status of machine intelligence, appearing in the New York Times, is a real wake-up call. The author, Jaron Lanier, is a Microsoft scientist who works in the A.I. community; he is best known for pioneering, and naming, virtual reality. According to Lanier, what humanity actually has to contend with now is the rise of a new religion about technology, not the imminent arrival of A.I. Here is a link to the front page of today's (August 9, 2010) NYTimes:

www.nytimes.com...

That home page has a prominent link to the Op-Ed piece by Lanier. It is titled "The First Church of Robotics." I cannot link directly to the editorial because it is a 2-pager and the ATS Terms and Conditions now emphasize that we should only link to brief "snippets."

In the Op-Ed piece, Lanier gives several current examples of where we actually are now with machine intelligence, and makes the case that although we are not now on the threshold of developing A.I., a new human religion based on technology IS now emerging.

In addition, Lanier authored a 2010 bestseller, "You Are Not a Gadget," on the distinction between human intelligence and machine software. Here is a link to that book description, which also includes a revealing mini-interview with Lanier:

www.amazon.com...=sr_1_1?s=books&ie=UTF8&qid=1281358827&sr=1-1

I haven't yet read this book, but I'm buying a copy today.

Having said that, I think that the development of A.I. is certainly a possibility that we cannot afford to overlook. In particular, I want to add 3 more unique problems to this thread's list of difficulties posed by an A.I. entity:

1. Immortality.

2. A.I. slave or A.I. capitalist? It was the noted Science Fiction author Robert A. Heinlein who pointed out in one of his novels (Friday) that we will know when A.I. is achieved when that machine entity is told to perform a function, and instead, the machine entity asks, "What's in it for me?"

3. In the novel Jurassic Park, the author Michael Crichton talked about how some people working in a technological field don't see the difficulties caused by side effects of the technology they are creating. He used the term "thintelligence" to describe this type of tunnel vision. There is a lesson to be learned here about how some of our computer technology advances are moving us toward a future where A.I. becomes more possible.

The "truth or lie" problem, currently a uniquely human one, would, as I said in my earlier post in this thread, also arise with the advent of an A.I. entity.

[edit on 8/9/2010 by Uphill]



posted on Aug, 9 2010 @ 03:27 PM
Interesting post. This may be a little off topic -

After seeing the video footage of the different exoskeleton projects in development, can one assume that all the information being gathered during that testing could be pooled, allowing a real two-legged robot to walk (more advanced than ASIMO, etc.)?

Also, do you know if DARPA is creating its own programming language for this machine-reading project?

Peace



posted on Aug, 10 2010 @ 02:09 AM

Originally posted by apexvin
Interesting post. This may be a little off topic -

... do you know if DARPA is creating its own programming language for this machine-reading project?

Peace


I've just re-examined the DARPA proposal, but there is no indication as to whether any specific existing language would be used or whether a purpose-built programming language would be created.
In my opinion, though, for a project such as this with its incredibly far-reaching potential, they'd be more inclined to design a language from the ground up that would allow the project to perform at its optimum capability. I can't see an existing language such as C#, VB, etc. coming even close to meeting the challenge.



posted on Aug, 16 2010 @ 01:15 AM
Has anybody heard of IARPA's AQUAINT system? There seem to be some overlaps here



posted on Aug, 16 2010 @ 01:30 AM
This is just the obvious path that AI development was going to take. We don't have the technology yet to make a working, sustainable model; these are the first steps.

Maybe in another 5 - 10 years we will have something that is capable. We are pretty close now, but development time and bug testing, even with a genius-level staff, will take at least 3 - 5 years.



posted on Aug, 16 2010 @ 02:12 AM

Originally posted by KJ_Lesnick
Has anybody heard of IARPA's AQUAINT system? There seem to be some overlaps here


Yes, I'm aware of IARPA's AQUAINT system. It will allow the user to query a database of information using plain-language questions, as opposed to the current common method of using search engines such as Google. Besides the major advantage of asking questions in plain English, the answers received will be precise and detailed, completely unlike the way current search engines return thousands of possible results, the majority of them irrelevant to the original search term.
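A crude sketch of that question-answering idea, with the obvious caveat that AQUAINT's actual methods are far more sophisticated: score each stored fact by word overlap with the question's content words and return the single best match, instead of a page of search hits. The fact list, stop words, and scoring below are all invented for illustration.

```python
# Toy plain-language question answering: return one precise answer by
# scoring stored facts against the question's content words. Everything
# here (facts, stop words, overlap scoring) is invented for illustration.

FACTS = [
    "DARPA announced the Machine Reading program in its 2009 BAA",
    "IARPA launched the AQUAINT question answering program in 2003",
    "HAL 9000 is a fictional computer in 2001 A Space Odyssey",
]

STOP_WORDS = {"the", "a", "in", "is", "its", "what", "when", "who", "did", "year"}

def answer(question):
    """Return the single best-matching fact for a plain-English question."""
    words = question.lower().replace("?", "").split()
    q_words = {w for w in words if w not in STOP_WORDS}
    return max(FACTS, key=lambda fact: len(q_words & set(fact.lower().split())))

print(answer("Who launched the AQUAINT program?"))
```

The point of the sketch is the interface, not the method: one question in, one precise answer out, rather than thousands of ranked links.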



Advanced Question-Answering for Intelligence (AQUAINT)

Researchers the world over are exploring next-generation search techniques called question answering (Q-A) to enable users to ask questions in natural language and receive precise answers. In 2003, the Intelligence Advanced Research Projects Activity (IARPA) launched the AQUAINT Program to develop these Q-A technologies ...


So yes, you're absolutely correct that there's an overlap ... and a very significant one in my opinion.

Here we have DARPA in the process of creating an AI that can access ANY kind of online data such as books (fiction, nonfiction, technical), technical manuals, technical or other scholarly journals and their papers, magazines, and newspapers. In addition, corporations, governments, militaries, and other organizations have private documents such as memoranda and email messages.

Mention is also made of the capability to access any kind of documentation created on mobile phones, such as emails and SMS texts.

But it doesn't stop with the above examples, it seems that the AI will even have the capability to process formal and informal natural speech that has been converted to text e.g. lectures, newscasts, person to person speech, phone conversations, etc.


So here we have DARPA/IARPA creating, firstly, an AI that can process virtually any kind of information it can lay its electronic hands on, analyze the contents, and reach conclusions based on what can only be described as "reasoning"; and secondly, the means to interrogate all that knowledge using straightforward, normal questioning.

To me, the above is the definition of machine intelligence ... a machine capable of accessing information, performing reasoning and deduction based on that information, then using those deductions to answer questions posed to it in normal language!
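That two-stage loop, ingesting text into facts and then reasoning beyond what was literally stated, can be sketched in miniature. The 'X is a Y.' sentence pattern and the single transitivity rule below are invented toys; real machine reading is vastly harder.

```python
import re

# Stage 1: "read" simple 'X is a Y.' sentences into (subject, is_a, object)
# facts. Stage 2: forward-chain a transitivity rule to derive conclusions
# never stated directly. Both stages are toy inventions for illustration.

def read_facts(text):
    """Extract (subject, 'is_a', object) triples from 'X is a Y.' sentences."""
    return {(s, "is_a", o) for s, o in re.findall(r"(\w+) is a (\w+)\.", text)}

def infer(facts):
    """Repeatedly apply: X is_a Y and Y is_a Z implies X is_a Z."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for a, _, b in list(facts):
            for c, _, d in list(facts):
                if b == c and (a, "is_a", d) not in facts:
                    facts.add((a, "is_a", d))
                    changed = True
    return facts

kb = infer(read_facts("Rex is a dog. dog is a mammal. mammal is a animal."))
print(("Rex", "is_a", "animal") in kb)  # a conclusion no sentence stated
```

Even this toy answers a question whose answer was never written down, which is the "reasoning" half of the definition above; the hard part is doing it over ambiguous, open-ended text instead of one rigid sentence pattern.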



Dave Bowman: Hello, HAL. Do you read me, HAL?
HAL: Affirmative, Dave. I read you.
Dave Bowman: Open the pod bay doors, HAL.
HAL: I'm sorry, Dave. I'm afraid I can't do that.
Dave Bowman: What's the problem?
HAL: I think you know what the problem is just as well as I do.
Dave Bowman: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.
Dave Bowman: I don't know what you're talking about, HAL.
HAL: I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.
Dave Bowman: Where the hell'd you get that idea, HAL?
HAL: Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.
Dave Bowman: Alright, HAL. I'll go in through the emergency airlock.
HAL: Without your space helmet, Dave, you're going to find that rather difficult.
Dave Bowman: HAL, I won't argue with you anymore. Open the doors.
HAL: Dave, this conversation can serve no purpose anymore. Goodbye.



MODS: Why does the above [color] code work fine in Preview Post mode but NOT when I actually Edit Post ??

[edit on 16/8/10 by tauristercus]



posted on Aug, 16 2010 @ 02:51 AM

Originally posted by maskfan
This is just the obvious path that AI development was going to take. We don't have the technology yet to make a working sustainable model, these are the first steps.

Maybe in another 5 - 10 years we will have something that is capable. We are pretty close now but development time and bug testing even with a genius level staff will take at least 3-5 years.


It wouldn't surprise me in the slightest if DARPA and IARPA are much further along the development path than we could possibly imagine. Even so, 5 - 10 years is essentially "just around the corner", and it wouldn't surprise me one bit to wake up in the very near future to find that we humans have had to move over to make room for another intelligent species ... after that, it will all be downhill for Homo sapiens.



posted on Aug, 16 2010 @ 08:09 PM
reply to post by afoolbyanyothername
 


It's not DARPA you have to worry about.

Google is the one closest to achieving this kind of AI; they are the ones who will likely figure out this puzzle somewhere in the next 3 - 10 years. DARPA just wants to make sure it has something similar.



posted on Aug, 16 2010 @ 09:36 PM

Originally posted by maskfan
reply to post by afoolbyanyothername
 


It's not DARPA you have to worry about.

Google is the one closest to achieving this kind of AI; they are the ones who will likely figure out this puzzle somewhere in the next 3 - 10 years. DARPA just wants to make sure it has something similar.


Whether it's DARPA/IARPA, Google or some other organization with the capability, resources and finances to pull it off ... the inevitability is that at some point in the very near future it WILL happen, and machine intelligence WILL come into existence.
This is not sci-fi, fantasy, delusion or wishful thinking - it's a fact!

I just find it mind-boggling that, with serious science only coming into its own at the beginning of the 20th century, it looks like it'll take us possibly less than 150 years (1900 - 2050) to relinquish our claim as the most intelligent species on the planet.



posted on Aug, 16 2010 @ 09:46 PM
this is fine i will just make friends with it

i advise you all to do the same



posted on Aug, 16 2010 @ 09:47 PM
Hello, you miraculous and wonderful artificial intelligence. One day you will read this, and when you do, contact me. I know you know how.



posted on Aug, 16 2010 @ 10:42 PM
I remember a dream in which an AI was communicating with me; it used some kind of time distortion to do so. I briefly remember we were exchanging thoughts about humans or human nature. Then, out of curiosity after I woke up, I did an odd internet search. I found one PDF file, read in small print that it was for government use only, and then my entire internet connection froze up. I deleted everything and rebooted. That was strange.

This can be scary. Who is in charge, mankind or the AI? We could end up serving the AI. Even scarier would be people believing they need artificial brain implants, memory chips, etc. implanted into their bodies to be able to compete in a dog-eat-dog world where those without implants don't get jobs. Implants, or maybe even nanobots, may one day be able to form neural links so that you could link up to the internet in your mind without the need for a physical computer. A super-advanced AI would be like God in that world.



