
Can Machines think?

page: 7

posted on Mar, 20 2012 @ 01:16 PM
What's the difference between artificial intelligence and organic intelligence? That one is based on silicon and the other on carbon? What if we found organic silicon life forms -- would their intelligence be unworthy of our consideration? Have you considered the possibility that you are in fact artificial intelligence in some giant computer simulation called "Earth"?

I'm not saying that the machines of 2012 have this capacity, but it would be stunning to me if creating A.I. with real sentience was an impossibility.




posted on Mar, 20 2012 @ 01:23 PM
reply to post by milkyway12
 


I wouldn't really say that. If I were an advanced yogi I could make my consciousness leave my body, leaving my pineal gland behind; it is all interconnected and almost all important, but if I wrote out the long version most people would stop reading halfway, and anatomy isn't what I want to share.

I want to make my point: "let's not forget who the ones carrying Godlikeness are."

simple as that.



posted on Mar, 20 2012 @ 01:28 PM
A human is nothing more than an organic machine that can react dynamically to its environment. To say synthetic life will never reach this level is simply ignorant. To say it will never reach this level without our help, however, may be a valid assumption. We are human and we love to tinker with new ideas; if nothing stops us from continuing research on synthetic life, then it is highly probable that we will create it.



posted on Mar, 20 2012 @ 01:44 PM
I would say that some machines (like my old Atari 2600) have the capacity to "think" more than some of the humans I seem to encounter on a daily basis, so I would have to vote "Yes". However, this is only because the "normal" range of "thinking" has gone so far outside the control lines with humanity.



posted on Mar, 20 2012 @ 01:51 PM
reply to post by andersensrm
 


The human brain can be replicated with technology but many religious types say it's impossible to gain self awareness without a soul. Self awareness is what allows us to control our actions without outside direction.

I think that the human brain is a complicated biological machine with a few basic underlying "commands" present, and we are less self aware than we realize. The concept of thinking and being more than the sum of our parts so to speak, is nothing more than an illusion created by a complicated machine.

Everything we do comes back to a few basic ideas or commands. Some things that do not suit the basic needs are thought about or present in life because they mimic our natural base programming. Therefore, to answer the question: yes, I think it's possible.
edit on 20-3-2012 by Evolutionsend because: (no reason given)



posted on Mar, 20 2012 @ 01:58 PM
reply to post by andersensrm
 



You can give a computer a set of values by which to operate, just like a person...
The difference is the machine will not experience any emotional conflict or question why..



posted on Mar, 20 2012 @ 02:17 PM
Robots have computers for brains; computers are programmed to compute math at a low level, while humans have to learn math in order to build computers. For AI to work, a robot's "mind" needs to be mapped so it can learn math. Perhaps the first true AI, instead of being a PC with unlimited knowledge programmed onto a mathematical function, would actually need to be a very quickly advancing state of what it can and can't do, until after enough time passes it suddenly knows stuff.

So the key here is memory, and the math programming for robots would be like our biological hardware for accessing memory. Would that mean everything can experience consciousness by accessing "memories"? Even plants?

edit on 20-3-2012 by craybiez because: just realised something



posted on Mar, 20 2012 @ 03:01 PM
I work with scripted machines every day. Machines are my co-workers. These Machines do not need to think in order to displace humans from the workplace. They are geographically distributed and yet they work together to analyze a set of conditions pertinent to solving problems in a monumentally huge physical layer network. We call it simply "computer test".

At some point the "human leaders" who run the show decided to let "computer test" do more. It now has the authority to send a real, living technician on a 200-mile drive which amounts to a wild goose chase.

Thankfully, most human technicians will reject "computer test" results and ask for a real human to look at the situation before they drive 200 miles. But some don't.

The "human leaders" at my enterprise do not see anything wrong with downsizing humans and adding more scripted machines to the network. The executives who run the show would rather get rid of me (lowering their expenses) and let technicians drive 200 miles (round trip) for nothing. NADA.

What else does "computer test" do? It sends auto emails to customers about the status of their services. All the information in the email might be completely or partially wrong! Believe me, I spend 40 hours a week cleaning up after "computer test".

TL;DR:
Specifically, the enterprise I work for trusts machine scripts and allows them to send human technicians, in trucks, on practically worthless errands. Although the scripts are not on par with classical AI, the leaders of my enterprise believe (in their twisted form of logic) that these machines are saving money - thus they push for more mechanization, more machines, more computer scripts, and fewer humans.



posted on Mar, 20 2012 @ 03:10 PM
reply to post by andersensrm
 


No, they can't think.

A living organism can deduce things from input without being programmed. Machines aren't capable of deducing things unless you write code for deductions -- and then it'll fail if the input is ambiguous or outside its experience.



posted on Mar, 20 2012 @ 03:22 PM
reply to post by andersensrm
 




When we cannot even accurately say what is going on when we "think," how can we analyze a replicated thought program? We hardly even know what goes on inside our own brains: what a thought is, or even where in the brain these things take place. Take memory, for instance. It was thought that there was only one place in the brain for memories, but it turns out that memories are stored all over the surface of the brain. We need to understand our own brains and thought patterns first, then see if we can apply them to a machine. The machine is never going to have wants, although it may express a desire for needs, like a copier that says it needs paper. It won't make copies without it. But it won't say it wants Hammermill paper over Intl Paper. It won't say it wants blue ink, not black ink.



posted on Mar, 20 2012 @ 03:24 PM
There seem to be a few competing ideas in this thread.

1. Can machines think? (and from that comes.. "What is the nature of thought?")

-- Yes, machines can think, because, what is thought if not the conditioned response to stimuli? We have computer programs that can 'learn' how to better react to humans. This is rudimentary thought in its most basic form. Do A. Did A work? If no, try B. Did B work? If no, try C. Now programs are learning when to skip A and go straight for C.

2. Are machines alive? (and from that comes.. "What is the nature of life?")

-- No, machines are not alive. However, we may certainly come to a time when mech-electric machinery will become alive much like our own bioelectric machinery is. There is no reason to believe mech-electric machinery will not be able to self-replicate. There is no reason to believe that a machine would be unable to create and alter its own programming, or the programming of its offspring.
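The "Do A. Did A work? If no, try B" loop under point 1 is easy to make concrete. Here is a minimal Python sketch (all names hypothetical, not from any real library) of an agent that tries actions in order, remembers which one worked for a given stimulus, and skips straight to it next time:

```python
class ConditionedResponder:
    """Rudimentary 'thought' as conditioned response: try actions in
    order, remember what worked, and skip straight to it next time."""

    def __init__(self, actions):
        self.actions = actions  # ordered repertoire, e.g. ["A", "B", "C"]
        self.learned = {}       # stimulus -> action that worked before

    def respond(self, stimulus, works):
        # Already conditioned? Skip A and B and go straight to the answer.
        if stimulus in self.learned:
            return self.learned[stimulus]
        # Otherwise: Do A. Did A work? If no, try B. Did B work? ...
        for action in self.actions:
            if works(stimulus, action):
                self.learned[stimulus] = action
                return action
        return None  # outside its experience: no action worked

agent = ConditionedResponder(["A", "B", "C"])
succeeds = lambda stimulus, action: action == "C"
print(agent.respond("alarm", succeeds))  # tries A, B, then C
print(agent.respond("alarm", succeeds))  # now goes straight to C
```

Whether this counts as "thought" is exactly the question of the thread, but it is a faithful picture of the skip-ahead learning described above.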



posted on Mar, 20 2012 @ 03:30 PM
reply to post by karen61057
 


Memories can be in the form of knowing or photographic; our brains just retrieve this info. If a copier learns to use better paper because of a program priority, then it would know to ask for it. But if it's given a program to ask for blue ink or else turn off, then it would be a fifty-fifty chance they would keep turning off.



posted on Mar, 20 2012 @ 03:47 PM
Perhaps the intelligent bit is actually social interaction? It's relevant with all creatures of intellect. What do any of our decisions consist of? I bet it involves thought for another being. Could future machines see other machines and build experience and AI life through communicating, where alone they could not do much?

Is the internet going to be alive?



posted on Mar, 20 2012 @ 03:54 PM
What is life?
Some say that everything has life in it.
The wind, rocks.

People still deny that dogs and animals can think.
But if you look at a well-trained monkey,
you can see that it can think.
But some will say it just parrots what it was trained to do!!!

Life is NOT just what a human is.
Humans have a God complex!
They like to think they are part gods,
and that they are the only ones!

Plants think and feel,
but most just turn a blind eye.
Life is an energy that is in everything.



posted on Mar, 20 2012 @ 03:55 PM

Originally posted by intrptr

Originally posted by blocula
reply to post by intrptr
 
How does the computer I'm using right now "know" what I want it to do, and then do what I ask it to do? Because of its memory? And are memory and knowledge the same? Humans wouldn't be able to think of anything without our knowledge, our memories, and so is the computer I'm using now actually thinking? I don't know; I'm getting a little tired from "thinking about thinking," and I think I'm starting to ramble a little. Or maybe I just think I am?


edit on 20-3-2012 by blocula because: (no reason given)

At least you "know" you are tired. A computer's memory is nothing but a vast array of stored "bits," i.e., "1"s and "0"s. Think of a light switch on a wall. On is one and off is zero. Clicking the switch off and on a billion times a second is one "gigahertz." A long line of ports strings these ones and zeros out in a very specific "programmed" fashion. If just one glitch occurs, the whole "symphony" of reading data from addresses and presenting it in "character" form for you to read and type will "crash." The computer must work exactly as programmed by humans without missing one single "1" or "0," or else... gibberish. Total failure. There is no room for error.

To be fair, there are "error routines" that can "fix" certain anomalies. But that only goes as far as input and output. CPU or core programming bugs, in the OS for instance, are fatal. Power off and reboot. If the problem recurs, it is a "bug" that needs revision. The computer does not "think." Memory is only grids of stored "1"s and "0"s. These are stored and retrieved in strings that "equate" to higher and higher forms of data until reaching your keyboard or screen. When you type an A, the computer translates that "A" into "1"s and "0"s that are stored at a memory "location" (in a grid). Nothing more or less. As far as the computer is concerned, the difference between an A and any other "character" is simply a different combination of "1"s and "0"s at another memory "location."

See? Very simple. Now go to your light switch and rattle out some "1"s and "0"s (Binary Code) and call me in the morning.


How do we differentiate that from our brain? Theoretically we can convert it into 1's and 0's anyway. Binary is a way of standardizing things; it says nothing about what the actual data is unless you know the code.
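Both points (the stored "A" in the quoted post, and the reply about standardization) can be shown in a couple of lines of Python: the character really is just a bit pattern, and the bits mean nothing until you apply a code (ASCII here) to interpret them.

```python
# A character is stored as nothing more than a pattern of bits.
bits = format(ord("A"), "08b")  # ASCII code 65 as eight "switches"
print(bits)                     # 01000001

# The bits alone say nothing; interpretation requires knowing the code.
number = int(bits, 2)   # read as a plain integer: 65
letter = chr(number)    # read through the ASCII code: "A"
print(number, letter)   # 65 A
```

The same eight switches could just as well encode a pixel, a sound sample, or part of an instruction; only the agreed-upon code gives them meaning.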



posted on Mar, 20 2012 @ 03:57 PM

Originally posted by Orderamongchaos
A human is nothing more than an organic machine that can react dynamically to its environment. To say synthetic life will never reach this level is simply ignorant. To say it will never reach this level without our help, however, may be a valid assumption. We are human and we love to tinker with new ideas; if nothing stops us from continuing research on synthetic life, then it is highly probable that we will create it.


Exactly, though maybe not with our help. I would imagine it would take some type of biological or natural sentient species like us to progress life to that level.



posted on Mar, 20 2012 @ 03:58 PM

Originally posted by buddha
What is life?
Some say that everything has life in it.
The wind, rocks.

People still deny that dogs and animals can think.
But if you look at a well-trained monkey,
you can see that it can think.
But some will say it just parrots what it was trained to do!!!

Life is NOT just what a human is.
Humans have a God complex!
They like to think they are part gods,
and that they are the only ones!

Plants think and feel,
but most just turn a blind eye.
Life is an energy that is in everything.


I agree; however, I feel it will be a while before we all begin to understand this.



posted on Mar, 20 2012 @ 04:00 PM

Originally posted by craybiez
reply to post by karen61057
 


Memories can be in the form of knowing or photographic; our brains just retrieve this info. If a copier learns to use better paper because of a program priority, then it would know to ask for it. But if it's given a program to ask for blue ink or else turn off, then it would be a fifty-fifty chance they would keep turning off.


What's this about the copier??? Fifty-fifty??



posted on Mar, 20 2012 @ 04:08 PM
reply to post by andersensrm
 


It couldn't differentiate between something meaningful or terminate itself without prior programming; hence it's stupid, not AI.



posted on Mar, 20 2012 @ 04:25 PM

Originally posted by craybiez
reply to post by andersensrm
 


It couldn't differentiate between something meaningful or terminate itself without prior programming; hence it's stupid, not AI.


What kind of computers do you think we'll have 100 years from now, given we manage not to destroy ourselves?


