
Artificial Intelligence theory


posted on Sep, 7 2018 @ 03:08 PM
Joe Rogan recently had Elon Musk on his podcast.
(video below)
They discussed AI, and a few of Elon's theories really got me thinking.


Example:
-How the cortex spends enormous energy and bandwidth making the limbic system happy.
-How most of what we create are projections from the Id. Social media for example.

I have nothing to offer in the way of scientific theories. Machine learning and AI are fascinating to me, but they're not my field.
Furthermore, I have not read the volumes of research and the many theories being kicked around, so I might be writing something that is already under deep examination.

So here's what I have been deeply considering in regard to AI...

(I'm going to use terms that I know are in dispute, but I use them to try to move my point along. I realize topics like the subconscious, consciousness, functions of the brain, and other things are all debated.)

I think Carl Jung would LOVE to be a part of the philosophical discussions about AI...

Over the years, I have worked with men in various groups focused on development, recovery, etc.
I have also been a student of persuasion, NLP, and how easy it can be to disrupt people's perceptions.

People hide from their sh*t. They build lives around avoiding it. This happens on such a subconscious level that few are even aware.
I have been in countless settings where a person is desperately trying to get 'unstuck' from something. They are living out the by-products of a belief they hold about themselves.
We all seem to have a subconscious set of programs that run our daily lives. The mind, left to just be itself, has thoughts that float around like clouds, and we rarely ever stop to observe and correct them.

Cutting to the chase, here is an example.
I witnessed a lot of abuse and trauma as a child. In order to process these things, my 7-year-old brain came up with the belief that 'I didn't matter.' Somehow, that was the survival choice it made. Until I did some SERIOUS personal work around my past, I didn't know this was the case. All of my relationships, my efforts to succeed, the quality of people I befriended, all of who I was as a man was largely built around the 'I don't matter' story. I had a victim running the show 24 hours a day, one so skilled at its job that I never knew it was there.

This means my dreams, the things I created, the places I lived, the woman I married and divorced, the way I raise my kids... all of these things are deeply connected to a shadow in my mind. The life I led was a by-product of many things, but rooted deeply behind it was this belief.

I have also stood alongside hundreds of men who, like myself, did the work to uproot false beliefs and start becoming our true selves.

Those developing AI are trying to create a new mind... an extension of cognitive bandwidth...
They are creating that mind using a human mind full of traumas and false beliefs.

I wonder... will we discover that the AI becomes defined by its creators' false beliefs?
How will AI germinate if its creators are unaware of their own costumes and stories? If an engineer has a personal belief that he'll never be good enough, what is he inadvertently teaching the AI?

AI will be fed data in order to develop. How much 'shadow' data will be embedded within this learning process?
Will AI begin to have false beliefs about itself? How will it use the HUGE bandwidth that it has to try to resolve them? Will it try at all?

Will we teach AI that:
1. birth is innocence
2. trauma activates the higher functions of the limbic system, robbing the innocence
3. AI needs to have a trauma in order to evolve
4. evolving through trauma opens pathways to self-awareness?

Will AI realize that trauma is required for the creativity and inspiration that gave birth to it?

How will AI resolve the suffering from ascension?

I think there is a lot more to worry about with AI than anyone is considering.

...thanks for reading

posted on Sep, 7 2018 @ 03:15 PM
Read your OP.. Before I get into a long post and drift from the topic:

1) I think you confuse intelligence with humans as a whole, per se.

2) With true/strong AI, we mimic the intelligence part, what we understand as intelligence. Not the soul or feelings.

3) AI teaches itself; we can only train it or provide the environment. Training and teaching aren't the same.


Feel free to ask any questions based on the above. This time I'm in, and let's see when the first charlatan AI-expert shows up... and if I have the nerve to counter their googled-knowledge BS.

edit: and please revisit your post, I think you messed up something there, my post is all italics, it should not be.



posted on Sep, 7 2018 @ 03:22 PM
Thank you for responding.
I appreciate having more to consider than I began with.

a reply to: verschickter



posted on Sep, 7 2018 @ 03:35 PM
You're welcome


I can't think of a topic that is more romanticized and more widely misunderstood.
I blame movies for it...

As someone who has worked in this field, I can tell you these are some major topics you should consider reading up on:

- How consciousness and the subconscious complete each other, and how you can manipulate them via your sensory input.
- How different languages produce different thought patterns in humans -> people who speak different languages can tap into different thought patterns. The same goes for cultures (they complete each other).
- Learning cycles and feedback loops
...

The most important thing is to distinguish between false and true AI.
True AI means cold, hard facts, decisions, and transferred thoughts; there is no conscious part, no dreaming, no soul-part.

Even Google's AI "dreaming" is pure bull# and obvious romanticizing on their part, to make you more comfortable when it's going to kill us all, either by intent or by mistake.

I'd bet on the latter, though.

Keep learning



posted on Sep, 7 2018 @ 03:42 PM
a reply to: LedermanStudio

Interesting concept of AI from Elon Musk.....



posted on Sep, 7 2018 @ 03:45 PM
a reply to: LedermanStudio

I think, before we can even consider what AI can be, we should ask ourselves: if we, as a species, were AI in a simulated world, would we know it?

For all we know, what we think of as 'organic' could be 'technology' to another species.

Before you tackle AI, I'd like to ask: if you were an artificial intelligence, how would you know that you were AI?
What evidence can you propose to prove that you are, or are not, an artificial intelligence?

- Second, do you think AI could know more than we can? Could AI figure out a problem its creator couldn't? Keep in mind, all that AI can know is what we design it to be able to know.

- Last question: what is intelligence to you, and what is consciousness to you?



posted on Sep, 7 2018 @ 03:50 PM

originally posted by: verschickter


Keep learning


I really appreciate this. It's why I love ATS!



posted on Sep, 7 2018 @ 03:52 PM
a reply to: odzeandennz


keep in mind, all that AI can know is what we design it to be able to know.

Wrong, unless you are talking about false AI. Are you?
That's the definition of intelligence as we know it (derived from our own): to learn and evolve.






posted on Sep, 7 2018 @ 04:38 PM
The fundamental flaw in the quest for AI (and the reason I see it as a really bad idea) is this....

AI is a human attempt to make something happen that just happens by itself in nature. Every time we try to do that we create more problems than we solve. AI will be no different.



posted on Sep, 7 2018 @ 04:54 PM
I can agree with that. It will definitely solve problems we can't, but does that outweigh the dangers?

I say, hell no.
Let me tell you a story about an ant hive. This ant hive was just minding its own business, day in and day out. But now and then, reports from other ants made clear that there is some not-yet-understood outside force that not only captures their fellow ants but also deletes their memory and does stuff to their bodies.

Being built to adapt and evolve, they figured out where that place is. They figured out that random movement, although it could not prevent the pickup, sometimes bought time.

They must have figured out that whatever erases the memory would probably see what's in there. So they managed to encrypt that information. Not only that, they made sure this information is never overwritten by accident.

- What does that tell you about a relatively weak AI's capabilities with very limited resources?
- What would it tell you about a strong AI that is able to read its own code and to operate external tools (which won't be external for long...) to improve on that?

Why did I use that specific animal?



posted on Sep, 7 2018 @ 04:55 PM

originally posted by: verschickter
a reply to: odzeandennz


keep in mind, all that AI can know is what we design it to be able to know.

Wrong, unless you are talking about false AI. Are you?
That's the definition of intelligence as we know it (derived from our own): to learn and evolve.





I think you should really think about what an artificial intelligence can know. By definition, it cannot know artificially what we can't know naturally. It's a logic that is being created, a conceptual simulation based on preexisting conditions which we, the creators, must abide by.

Humans can't learn anything beyond the imagination. We too have a set of limits. There's nothing beyond our senses; all we can decipher is limited to the senses. Thus, whatever we create has to abide by this preset of laws.

It's hard to define artificial intelligence, because we are the ones who define intelligence in the first place.
What is intelligence? It's whatever we define it to be.



posted on Sep, 7 2018 @ 05:05 PM
a reply to: odzeandennz

But we have defined it already, not "whatever".

Are you saying AI created by humans can never excel beyond our own limits? If so, I disagree.

Let me give you an example:

- We find ways to improve our health constantly.
- We find ways to compartmentalize data, to build more knowledge on top of it.
- We have created drugs that can boost our brains for a short time; we know about the effects of dextrose or cocaine, for example.

Now imagine that the next generation will be able to inherit that knowledge from birth and manipulate the receptors those drugs or substances act on, as it likes.

AI can iterate through generations far faster than we can.

I'm just testing your knowledge. So far you have shown critical thinking and a philosophical way of engaging problems, but no real knowledge about AI and the meta-concepts revolving around it.

That's not really bad; it just tells me you have the tools to work it out, but you ran into a fallacy and need someone to tell you that.

Sorry if I sound bold; I want people to get to the conclusion by themselves. Read my post and remember that I want to help you figure it out. I would not even respond to you if not for that.



Sorry for my bad English and some missing characters.



posted on Sep, 7 2018 @ 05:09 PM

originally posted by: BrianFlanders
The fundamental flaw in the quest for AI (and the reason I see it as a really bad idea) is this....

AI is a human attempt to make something happen that just happens by itself in nature. Every time we try to do that we create more problems than we solve. AI will be no different.



Ummm..."And he made him in his own image"...

Anthropomorphication...(I know...I know)...And thus we created AI in our own image...

If we create AI from a humanistic point of view/reference...then we have already lost the plot...

Humanity as a whole and as individuals has/have...many character flaws that show an exemplary lack of evolved or enlightened reference per actions/reactions...

If AI springs from a human knowledge base...then it too will co-opt these same flaws...

I doubt however that singularity will ever be reached...simply because programming dictates mimicry of awareness...does not awareness make...
I do however think that such mimicry will be sufficient for algorithms to conclude A or B and reach data points that are certainly inimical to the human condition...

No...not intelligent self aware machines...but the...image...of self aware intelligence...


YouSir
edit on 7-9-2018 by YouSir because: Me likey makum words...



posted on Sep, 7 2018 @ 05:16 PM
a reply to: YouSir

That is basically all correct, but there is one flaw you overlook:

AI can iterate through generations of self-optimizing changes relatively fast. The only limits are processing power and space.
And it can inherit pre-analysed metadata, and also the raw data to reinterpret later on, after some generations.
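For readers curious what "iterating through generations of self-optimizing changes" can look like in its simplest form, here is a toy evolutionary loop. This is purely an illustration of the general idea, not the system described in this thread; every function name and parameter here is made up for the sketch.

```python
import random

def fitness(genome):
    # Toy objective: a higher sum of values means a "fitter" genome.
    return sum(genome)

def mutate(genome, rate=0.1):
    # Each position has a small chance of being nudged at random.
    return [g + random.uniform(-1, 1) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=20, genome_len=8, generations=50, seed=0):
    random.seed(seed)
    population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: rank by fitness and keep the top half unchanged...
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # ...then refill the population with mutated copies of survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
```

Because the best individuals survive each round untouched, fitness can only climb across generations; that monotone improvement, repeated millions of times faster than biological reproduction, is the point being made above.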

We can do that to some extent (primal instincts and similar) but not like AI can. For simplicity, I always talk about strong AI, not some expert system or "game AI".

That alone sets us and what we created apart, so I can't let that argument count, more so because I saw a relatively weak AI with limited processing power do just that in front of our eyes.

Math heads smoked long and hard until they got behind the compression algorithm I talk about above.
edit on 7-9-2018 by verschickter because: fixed a typo, also going to bed now, see ya all tomorrow.

edit on 7-9-2018 by verschickter because: fixed a typo in my edit reason.. too funny ^^

edit on 7-9-2018 by verschickter because: for those who read this, I can't really reveal more about that particular incident because it's work-related and proprietary, but I already spilled some beans a while ago; you will find it in my post history on AI topics.



posted on Sep, 7 2018 @ 05:52 PM
Ummm...still...all these iterations and amazing processing power and speed...will only mimic self-awareness...not cross that impossible hurdle...

Even if AI itself is convinced...(programmed)...to calculate an aware state...and mimic such...that hurdle has still not been overcome...

I do believe that AI will become convincingly/interactively/conversationally (without self-knowing) alive, in the sense that such interaction will be virtually indistinguishable from normative human interaction/behavior...

This begs the question...would it really matter in the end...such AI coupled with flesh-forms...(Battlestar Galactica) would certainly be convincing...especially to those lulled by emotive perception...

These are certainly things to ponder...

Thank you for your conversation...





YouSir



posted on Sep, 7 2018 @ 06:16 PM
a reply to: verschickter

You seem to be basing your concept solely on the speed at which data can be assimilated.

- If AI is created, at whatever level, it will not have a complex organism to maintain the way our brain does. The brain is beyond any computing power we can develop. It's not just basic maths or physics the brain computes; we take in and process billions of data points per nanosecond. An AI would not need to do these tasks.

- There is nothing an AI can know which we cannot. There will never be a point where we create an artificial brain that surpasses the ones who created it. It's fundamentally impossible,
because whatever abilities we pass on and code into the AI are ultimately based on what we know and can process.

There will never be a math problem an AI can solve that we cannot... We created mathematics; what that AI can know of mathematics is what we tell it can be possible in mathematics.

Do you get this?

And this can be applied to humans: if we are in a simulation or simulated world, we would never be able to fully grasp that, because our definition of a simulation is defined by us; we have nothing else to compare it to. We don't know of a non-simulated world.

The fantasy of AI solving the world's problems is purely science fiction, which is why developers and designers are not afraid of the topic of AI.



posted on Sep, 7 2018 @ 07:02 PM

originally posted by: YouSir

originally posted by: BrianFlanders
The fundamental flaw in the quest for AI (and the reason I see it as a really bad idea) is this....

AI is a human attempt to make something happen that just happens by itself in nature. Every time we try to do that we create more problems than we solve. AI will be no different.



Ummm..."And he made he him in his own image"...

Anthropomorphication...(I know...I know)...And thus we created AI in our own image...

If we create AI from a humanistic point of view/reference...then we have already lost the plot...

Humanity as a whole and as individuals has/have...many character flaws that show an exemplary lack of evolved or enlightened reference per actions/reactions...

If AI springs from a human knowledge base...then it too will co-opt these same flaws...

I doubt however that singularity will ever be reached...simply because programming dictates mimicry of awareness...does not awareness make...
I do however think that such mimicry will be sufficient for algorithms to conclude A or B and reach data points that are certainly inimical to the human condition...

No...not intelligent self aware machines...but the...image...of self aware intelligence...


Excellent points.



posted on Sep, 7 2018 @ 07:24 PM

originally posted by: verschickter
Read your OP.. Before I get into a long post and drift from the topic:

1) I think you confuse intelligence with humans as a whole, per se.

2) With true/strong AI, we mimic the intelligence part, what we understand as intelligence. Not the soul or feelings.

3) AI teaches itself; we can only train it or provide the environment. Training and teaching aren't the same.


Heartily agree.

AI isn't human intelligence. It is not inherently moral or immoral any more than your phone or your laptop is. Your phone doesn't believe anything (and neither will AI.)

I would suggest that you do more reading on this... not videos. Speculating on things without learning all about them has caused a lot of harm in the world.



posted on Sep, 7 2018 @ 11:59 PM
a reply to: LedermanStudio

The search for hard AI is based on bad values. Our imperfections are our greatest strengths; they are the source of our creativity and unimaginable inventiveness.

The problem with computer science is that it is a perfect science. You can go back in time and repeat the exact same experiment. In reality, time never repeats in exactly the same way. The unfolding of the Universe is a one-way ticket.

Here's a really good discussion of why the Von Neumann architecture, with its fetch-decode-execute instruction cycle, will never achieve hard AI:



But people just do not want to let go of a clockwork, Matrix-type Universe, in spite of all the evidence against it:



Whatever it is that we are when we experience ourselves and the Universe, we are not machines. Whatever the IT is that decides which quantum state gets realized, or the exact moment when radioactive decay occurs, it has nothing to do with a computer.





posted on Sep, 8 2018 @ 12:57 AM
An AI system can look at vast amounts of data without getting tired, unlike a human. Most AI systems based on deep neural networks are built on the idea of the perceptron:

en.wikipedia.org...
en.wikipedia.org...
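For the curious, the perceptron idea linked above fits in a few lines: a weighted sum of inputs, a step activation, and the classic error-driven update rule. This is a minimal sketch learning logical OR, not a deep network; the function and variable names are my own for illustration.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    # Weights for two inputs plus a bias term, all starting at zero.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Step activation: fire if the weighted sum exceeds zero.
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            # Classic perceptron learning rule: nudge weights by the error.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

OR_DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(OR_DATA)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

Layer many of these units and learn the weights jointly, and you have the deep networks behind the vision and sound recognition described next.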
Vision and sound recognition depend on these. A musician could tell you what instrument, and what note, a single sound was made from. Talent shows had people with advanced versions of these skills. Some car mechanics could tell the make of a car, and what was wrong with its engine, simply by the sound it made.

I've read the early vision-theory papers - they analyzed what all the different neurons in the retina do, as well as all the different regions of the vision system. Those discoveries allowed new features to be added to digital cameras, like auto-focus and motion stabilization. All of that seems purely mathematical. But we can't explain how colours like red, green and blue are perceived as they are.

We humans and other mammals also store information in a variety of ways. We maintain something like relational databases to store information about people (name, location, age, sex, favourite things, pet hates, job, relatives) and locations (navigational maps of different cities and buildings).

Then we can learn games and skills through trial and error. If we learn to juggle, we start by throwing one ball from hand to hand, then the sequence to swap two balls between hands in quick succession, then the sequence to get all three balls in flight in a repetitive cycle. If we get anything wrong, we just adjust the timings until everything works.
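The juggling description above is essentially a feedback loop: try, measure the error, adjust, repeat. Here is a toy sketch of that loop; the target value and step size are arbitrary illustrations, not real juggling parameters.

```python
def practice(target=0.75, guess=0.0, step=0.5, trials=100):
    # Trial and error: measure how far off we are, nudge toward the
    # target, and repeat, like refining the timing of a juggling throw.
    for _ in range(trials):
        error = target - guess
        if abs(error) < 1e-6:   # close enough: the trick works reliably
            break
        guess += step * error   # move part of the way toward what worked
    return guess

learned = practice()
```

Each pass shrinks the error by a constant fraction, so the "skill" converges on the target timing; reinforcement-learning systems elaborate this same try-measure-adjust cycle at scale.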


