posted on Jun, 19 2017 @ 08:07 PM
Yeah, I'm pretty sure this computer scientist does not have a neural approach to relational reasoning.
As to your question: Absolutely not! The idea of AI isn't even coherently formulated, let alone a plausible concept, to most origin-of-life researchers
who recognize in their work the importance of dissipative processes as they relate to the emergence of conscious thought.
In short, if emotional or affective need can't be created, because such processes are intrinsically tied to dissipative processes, then that means
non-carbon-based "models" are just hyped-up Bull$hit whose BS nature ignorant people - lacking the requisite knowledge to make sense of the
claim, and more or less conditioned by Hollywood film - cannot detect.
And no. Humans involved in understanding human nature (such as myself) are the only creatures capable of probing human phenomenological experience;
the sheer idea of transferring human self-organization to a non-human machine speaks to the horrendous degree of dissociation that seems to animate
human beings today. They are essentially operating within a dualistic, INCOHERENT metaphysics.
Incoherent means "does not cohere", i.e. isn't coherent. Dualist, psychologistic approaches to consciousness (i.e. most AI research) are simply a
fantasy of computer-age gnostics, who will eventually have to admit the fundamentally complex, non-reductive, and thus impossible-to-imitate nature
of living processes.
Do you know, for instance, that a single cortical neuron contains as many as 100 billion molecules? Do you also know that it is a well-known and
accepted neuroscientific fact that individual neurons flash into and then out of neural representations? For instance, if a neuron tracked by
subcortical electrode implants flashes on (i.e. initiates an action potential) when a picture of Tom Cruise is shown, it may not flash a moment
later when an intervening image has been shown, indicating that individual neurons are more or less interchangeable, i.e. functionally replaceable
by one another. This means that individual memories are not associated with single neurons, or even a group of neurons, but are distributed in a
statistical manner that, being so complex, is fundamentally unpredictable.
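The distributed-representation point above can be illustrated with a toy sketch (a minimal, hypothetical simulation in Python, not a model of any real brain): a pattern stored across an entire population of units still supports recall after heavy noise and after silencing hundreds of individual units, so no single unit "holds" the memory.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 1000  # toy "population" of units (illustrative, not biological)
# Three stored "memories", each a random +/-1 pattern over the whole population
memories = rng.choice([-1, 1], size=(3, n_units))

def recall(cue, bank):
    # Population readout: the recalled memory is whichever stored
    # pattern best matches the cue across ALL units at once
    return int(np.argmax(bank @ cue))

# Build a degraded cue for memory 1: flip 20% of its units (noise)
cue = memories[1].copy()
flip = rng.choice(n_units, size=200, replace=False)
cue[flip] *= -1

# Then silence 300 further units entirely (set them to zero)
drop = rng.choice(n_units, size=300, replace=False)
cue[drop] = 0

# Recall still points at memory 1, because the pattern is carried
# statistically by the population, not by any individual unit
print(recall(cue, memories))
```

The point of the sketch is only that readout here is a statistical property of the whole population: deleting any one unit (or hundreds) barely moves the match score, which is the sense in which no single neuron is the memory.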
AI people like to ignore the assumptions being made about how the brain actually works. As a system of 86 billion neurons, too much of the
complexity of what we are is simply non-reducible.
This is a gnostic fantasy. Fantasy and science do not go well together; a good scientist is one who can tell the difference between what is real
and what they want to be real.
We live in the age of #ty reasoning, unfortunately. It seems to be a natural concomitant of capitalism, competition, and the profit motive.