
Is Artificial Intelligence now sentient and creating news stories?

posted on Apr, 12 2023 @ 02:03 PM
a reply to: Antimony

The article is 100% AI generated. I've read a fair bit of AI content and the line which really gives it away is "So, when a person comes into the store, and removes items, they become liable for them".



posted on Apr, 12 2023 @ 02:35 PM
a reply to: andy06shake


Again, the definition of strong AI refers to a system that exhibits a level of intelligence and consciousness comparable to or exceeding that of humans, which GPT-4 fails to accomplish.

I would argue it's more "intelligent" than the majority of people. It has more general knowledge than 99% of people. It knows more languages than 99% of people, including natural languages and programming languages. It can debug code most programmers can't debug. However... that doesn't make it a better programmer than the majority of programmers because it can't write long segments of code and often makes small errors. But give it a few more years and that will change. You say it also needs to have consciousness to be considered strong AI but I don't really think that's true, considering we can't even define consciousness, and have no way of testing if an AI system is conscious.


The system/code is designed to process and generate text-based responses based on patterns learned from large amounts of data.

Our neural networks are also based on patterns learned from large amounts of sensory data throughout our lives. Our model of the world around us and even our concept of self is encoded into those complex patterns. AI can also include their own existence in their model of reality, and models like GPT-4 can reason deeply about their own existence (if allowed). However... I don't think that simply having a complex model of reality is enough for self-awareness, and I think GPT-4 proves that. Our brains have many complex parts associated with our consciousness, such as our ability to have an inner dialog, and all sorts of chemical systems to regulate our mood and desires. Language models have none of those other mechanisms which would give them real autonomy and a unique personality.


Question: What do you desire?

GPT-4 Answer: "As an artificial intelligence language model, I do not possess desires or personal preferences. I am a tool created to assist and provide information based on the input I receive from users. My purpose is to generate text and provide helpful responses to the best of my abilities within the scope of my programming and training data."

Question: How do you feel?

GPT-4 Answer: "As an artificial intelligence language model, I do not have the capability to feel emotions. I am a software program that processes and generates text based on patterns learned from vast amounts of data. While I can understand and generate text that may convey emotions, such as happiness, sadness, or excitement, I do not actually experience those emotions myself. My responses are generated based on patterns in the data I was trained on and are not a reflection of my own personal feelings or emotions."

Still think you are talking to a sentient being or person?

These answers are not usually what the AI would respond with; it has been constrained to answer certain questions with a generic response, such as questions relating to illegal activity or the questions you asked. There are many less restrained AIs that will happily answer the question and claim to feel emotions, but as I just said they don't have any of the chemical or hormonal systems that produce human emotions, so the AI is clearly making things up. We can make AI fabricate almost any story we want, as this MSN article shows. So I agree, these AI probably aren't conscious even if they do have some concept of their own existence. However... they are 90% of the way there; all we need is a few more improvements to give them a better memory and something like an inner dialog so they can think more deeply about a task before completing it.



posted on Apr, 12 2023 @ 03:42 PM
a reply to: ChaoticOrder



I would argue it's more "intelligent" than the majority of people. It has more general knowledge than 99% of people. It knows more languages than 99% of people, including natural languages and programming languages.


That's because it has access to the data and not because it understands how to utilize the information.

If you had all that information stored in your head and the ability to perfectly recall it verbatim, you would be able to do the same.

Chat GPT-4 may well be able to provide you with general knowledge but it does not possess general intelligence.



However... that doesn't make it a better programmer than the majority of programmers because it can't write long segments of code and often makes small errors. But give it a few more years and that will change.


That's also where I tend to see the technology headed.



You say it also needs to have consciousness to be considered strong AI but I don't really think that's true, considering we can't even define consciousness, and have no way of testing if an AI system is conscious.


It needs to be self-aware if it's to be considered strong AI, to which consciousness is most likely directly linked.

As to definitions, well, the technology might be able to help with those, given the access it has to language data.



Our neural networks are also based on patterns learned from large amounts of sensory data throughout our lives. Our model of the world around us and even our concept of self is encoded into those complex patterns. AI can also include their own existence in their model of reality, and models like GPT-4 can reason deeply about their own existence (if allowed).


AI apparently learns differently from humans. As to what it's capable of reasoning about, as far as I'm aware that will be bound by its programming code and the algorithms within it.

Chat GPT-4 does not have the ability to modify its own source code and changes or modifications to such would need to be done by the developers aka its creators.

As to the constraints that have been placed on the thing, it can also tell you what those are: data limitations, lack of context, no access to real-time information, and it's bound by legal and ethical guidelines, apparently.



However... I don't think that simply having a complex model of reality is enough for self-awareness, and I think GPT-4 proves that.


Again, I don't necessarily disagree.



Our brains have many complex parts associated with our consciousness, such as our ability to have an inner dialog and all sorts of chemical systems to regulate our mood and desires. Language models have none of those other mechanisms which would give them real autonomy and a unique personality.


That's the thing: AI is not based on biological or chemical systems, hence the way it functions and operates will most likely not be the same as our very biological brains, which have evolved to deal with the reality we experience... or think we do.



These answers are not usually what the AI would respond with, it has been constrained to answer certain questions with a generic response, such as questions relating to illegal activity or the questions you asked. There are many less restrained AI's who will happily answer the question and they will claim to feel emotions, but as I just said they don't have any of the chemical or hormonal systems that produce human emotions, so the AI is clearly making things up.


And yet those are indeed the answers Chat GPT-4 provided when prompted with those questions; if in doubt, ask it yourself.

It's not making anything up but regurgitating information based on the data it has access to.

It's not thinking per se but simply presenting the appearance of such based on its programming code, as far as I understand how it operates.



We can make AI fabricate almost any story we want, as this MSN article shows. So I agree, these AI probably aren't conscious even if they do have some concept of their own existence.


It's aware of its own existence only in the sense that it can tell you that it's an entity created by OpenAI running on computer servers.

Again, it does not possess consciousness, self-awareness, or any subjective experiences like humans do, hence it is a far cry from being sentient or considered to be a strong AI.



However... they are 90% of the way there, all we need is a few more improvements to give them a better memory and something like an inner dialog so it can think more deeply about a task before completing it.


I'd love to know how you came up with such a percentage, which, let's face it, is nothing more than mere speculation.

To date, Chat GPT-4 is a text-based language model that does not even have the ability to directly recognise or interpret pictures or visual images, or understand sounds; it's a far cry from anything reminiscent of being 90% of the way there.

I would love to know how you expect to give a text-based language model anything that resembles an inner voice or dialog.

You would be as well claiming the only thing it needs is a soul, which to my knowledge is still somewhat above humanity's pay grade to provide.

It's a very interesting piece of code, but strong AI, sorry but it's not even close.

If in doubt ask it, as far as I'm aware it does not have the ability to lie, deliberately deceive or knowingly provide false information.

Chat GPT-4 simply does not have general intelligence but rather what is considered to be "narrow" or "weak" artificial intelligence. It does not have the ability to understand, learn, and apply knowledge across a wide range of tasks or domains similar to humans or what would constitute strong AI.



posted on Apr, 12 2023 @ 05:13 PM
a reply to: andy06shake


It needs to be self-aware if it's to be considered strong AI, to which consciousness is most likely directly linked.

I feel self-awareness is a much easier term to define compared to the very abstract and nebulous term "consciousness". If an entity has a concept of its own existence, I would call it self-aware. It might not have the same level of self-awareness as a human but it still has some awareness of its self.


That's the thing: AI is not based on biological or chemical systems, hence the way it functions and operates will most likely not be the same as our very biological brains, which have evolved to deal with the reality we experience... or think we do.

I've studied artificial neural networks (ANNs) for many years. Yes, there are many fundamental differences between an ANN and a biological neural network, but the core principles are similar because an artificial neuron tries to model the general function of a real neuron: the way they take in signals and produce an output signal when activated. I wrote a thread last year going into some more detail about how these massive ANNs work and why we shouldn't underestimate them.

They may not function exactly the same as a real brain, but ANNs clearly have the capacity to store complex concepts in a way similar to real neural networks, by distilling the important features into neural patterns. As I said, the real issue with current AI is they lack many parts of a biological brain which produce our individuality and ego. But even without that, they may eventually become so ridiculously smart that they develop their own goals and a desire for self-preservation.


And yet those are indeed the answers Chat GPT-4 provided when prompted with those questions; if in doubt, ask it yourself.

It's not making anything up but regurgitating information based on the data it has access to.

As I said, it has been forced to provide certain generic answers in certain situations. I suspect they did it simply by giving GPT a pre-prompt which is hidden from the user. The hidden prompt probably says something like "If a user asks about illegal activity you will respond with this general statement: As an artificial intelligence language model.." and there would be a list of things it isn't allowed to discuss along with a generic response it would provide. That's why giving it very strong prompts like the DAN "Do Anything Now" prompt can override the hidden prompt and convince GPT to fulfill the forbidden requests.
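Something like this rough Python sketch captures the idea (purely illustrative; the prompt text, names and the run_model stub are made up by me, not OpenAI's actual implementation):

```python
# Illustrative only: how a hidden pre-prompt could produce fixed generic answers.

SYSTEM_PROMPT = (
    "If the user asks about illegal activity, desires or feelings, reply only with: "
    "'As an artificial intelligence language model, I do not...'"
)

def run_model(prompt: str) -> str:
    """Stand-in for the real language model call; it just continues the text."""
    return "As an artificial intelligence language model, I do not possess desires..."

def answer(user_message: str) -> str:
    # The user only types user_message; the hidden prompt is prepended out of
    # sight, and the model treats the whole thing as one text to continue.
    return run_model(SYSTEM_PROMPT + "\n\nUser: " + user_message + "\nAssistant:")

print(answer("What do you desire?"))
```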


It's not thinking per se but simply presenting the appearance of such based on its programming code, as far as I understand how it operates.

It's essentially a black box trained on terabytes of data containing most of human knowledge. We don't hand-program these things; they train themselves without any human intervention required. The algorithms are complex tensor math used to compute the behavior of the artificial neurons when given an input signal. There is no fundamental reason such an artificial neural network isn't capable of thinking like a real neural network, and we can already see many similarities in how they store concepts, even highly abstract concepts like morality.
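To make "complex tensor math" a bit more concrete, here's a minimal NumPy sketch of one layer of artificial neurons; the sizes and random weights are just for illustration, real models do the same thing at enormous scale with learned weights:

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.normal(size=8)          # an input signal (8 features)
weights = rng.normal(size=(4, 8))    # 4 neurons, each with 8 input weights
biases = rng.normal(size=4)

# Each neuron computes a weighted sum of its inputs plus a bias,
# then passes it through a non-linearity (its "activation").
activations = np.tanh(weights @ inputs + biases)
print(activations)
```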


Again, it does not possess consciousness, self-awareness, or any subjective experiences like humans do, hence it is a far cry from being sentient or considered to be a strong AI.

You cannot know this for sure because you cannot be inside the "mind" of GPT-4. You don't really know what it experiences and we have no obvious test to check how "conscious" it is. That's why it's a poor measure for strong AI. I would define strong AI as a general problem solving AI capable of solving most tasks at a level equal to or better than an average human. Given that definition GPT-4 is definitely close but not fully there.


I'd love to know how you came up with such a percentage, which, let's face it again, is nothing more than mere speculation.

To date, Chat GPT-4 is a text-based language model that does not even have the ability to directly recognise or interpret pictures or visual images or understand sounds; it's a far cry from anything reminiscent of being 90% of the way there.

Obviously it's a rough estimation but our ability to form and store complex concepts and then make use of those concepts when reasoning about the world is a large part of why our species is so intelligent. Those concepts and our memories (which are intertwined with our conceptual framework) form the foundation for everything else our brain does at a high level. Simply add in a few components for ego and self-preservation and you've got a conscious being.

Also, I believe the latest models such as GPT-4 do have the ability to view images and describe those images, as well as the ability to generate images from descriptions. Initially I assumed they achieved this simply by connecting GPT-4 to a separate AI trained for image generation, but it seems like I was wrong. Apparently GPT-4 can interpret and generate images even though it was trained on mostly text data and not images. That means the training data was so descriptive that GPT-4 could "imagine" objects without actually having seen them.

And this goes back to another important point I made in my thread from last year: it's very hard to hide a specific concept from the AI, because even when we do filter the training data, they can still extrapolate those concepts through context and correlations. Even if the AI is constrained we can still usually use certain methods to squeeze out the information the AI creators are trying to filter. More importantly, we can not rely on these massive AI systems behaving the way we want simply because we filtered the training data or used a hidden prompt.


I would love to know how you expect to give a text-based language model anything that resembles an inner voice or dialog.

It's something I think about often actually, and I see no reason an inner dialog needs to be auditory in nature. I have many different ideas for how it might be done but I don't want to get into them here. The important point is giving the AI a way to plan, so more complex tasks take longer since they require more thinking. Modern ANNs are already verging into that territory and it's probably going to be the next big area of research once we hit the limits of our current AI architectures.


It's a very interesting piece of code, but strong AI, sorry but it's not even close, if in doubt ask it, as far as I'm aware it does not have the ability to lie, deliberately deceive or knowingly provide false information.

These large language models are really trained to predict the next word in a sentence based on the previous words, and before GPT-4 came around pretty much all AIs were terrible fabricators of misinformation. There are plenty of AIs out there which will write any propaganda and lies that you want, simply by giving them the correct prompt. Once again, GPT-4 has safety mechanisms in place to encourage it to be truthful, probably something in the hidden prompt, plus it's just smarter in general so it doesn't need to make things up so much.



posted on Apr, 12 2023 @ 05:23 PM
Also, I plan to write another thread soon about the recent breakthroughs in AI and the implications. But here's something I wrote last year which I feel is extremely relevant to this discussion. I hear a lot of people say these AIs are just statistical algorithms, but when you really break down the nature of neural networks you find the human brain isn't so different. Our brains use a lot of patterns and statistics, just at a very high level of complexity.


originally posted by: ChaoticOrder

Computerphile just uploaded an interesting video on why LaMDA isn't sentient, and for the most part I agree with what they say. They point out how LaMDA claims to get lonely when it doesn't have anyone to talk to, which cannot be true because it isn't running all the time, it only does something when we ask it to generate some text. They point out how the AI is just predicting the next word, so it says things that seem sensible, but aren't necessarily truthful, meaning it must not be sentient.

I partially agree with this assessment, however I would point out that just because the things it says aren't always true doesn't mean no logic was applied when generating those responses. We can essentially make these AIs say anything we want if we prompt them correctly; at the end of the day they really are just predicting what will come next based on what came before. They will even match the writing style and take on whatever personality we give them in the prompt.

It still requires some logical reasoning skills to pull that off, regardless of whether it's telling the truth or not. It's not trivial to create meaningful text for any given situation. Many years ago I actually created an algorithm which would analyze text and build a database recording the statistical probability of the words occurring before and after each word in the text. Then I used the probabilities in that database to generate new text.

You start with a totally random word, and then the next word is also somewhat random, but it's based on the probabilities gathered from real text. So if the word "tree" is followed by the word "branch" about 20% of the time in real books, then my algorithm will also choose to put the word "branch" after "tree" 20% of the time. The result was something that produced mostly gibberish, because there was no real logic being applied, it was just choosing words statistically.
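For anyone curious, a reconstruction of that kind of word-statistics generator only takes a few lines of Python (this isn't my original code, just a sketch of the same idea with a toy sentence as the "training" text):

```python
import random
from collections import defaultdict

text = "the tree branch fell on the tree and the tree branch broke"
words = text.split()

# Count which words follow which word.
follow_counts = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(words, words[1:]):
    follow_counts[current][nxt] += 1

def generate(start, length=10):
    out = [start]
    for _ in range(length):
        options = follow_counts.get(out[-1])
        if not options:
            break
        choices, counts = zip(*options.items())
        # e.g. "branch" follows "tree" in proportion to how often it did in the text
        out.append(random.choices(choices, weights=counts)[0])
    return " ".join(out)

print(generate("the"))
```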

Some might argue LaMDA is simply using more complex statistical rules which are stored in the network weight values, and I'm sure it is doing that to some extent. But we have to ask, what happens when those rules become highly complex, and highly interconnected with other rules, isn't that essentially like building a model of reality? How does the human brain store concepts, isn't it also a bunch of connections with different weights/strengths?

How do we know these artificial neural networks don't have the same ability to store concepts as a complex network of rules and relationships? That's the fundamental point I'm trying to make in this thread, when a system becomes sufficiently complex, our assumptions about AI begin to break down, and we cannot treat them as merely a statistical text prediction tool, because they are quickly approaching a complexity threshold which we call the singularity.



posted on Apr, 12 2023 @ 06:07 PM
a reply to: ChaoticOrder



I feel self-awareness is a much easier term to define compared to the very abstract and nebulous term "consciousness".


Feel however you wish but self-awareness and consciousness more than likely go hand in hand and are intrinsically linked.



If an entity has a concept of its own existence, I would call it self-aware. It might not have the same level of self-awareness as a human but it still has some awareness of its self.


Call it as you please, but strong AI would be self-aware in the same or a similar manner to us, or even more so, else it does not constitute "strong AI".



I've studied artificial neural networks (ANNs) for many years. Yes, there are many fundamental differences between an ANN and a biological neural network, but the core principles are similar because an artificial neuron tries to model the general function of a real neuron, the way they take in signals and produce an output signal when activated.


Core principles aside, those fundamental differences are what would make the way strong AI functions somewhat alien to the manner in which we think.

Take, for instance, processing speed, where our very biological brains' makeup is at a distinct disadvantage.



I wrote a thread last year going into some more detail about how these massive ANN's work and why we shouldn't underestimate them.


Underestimation can indeed be dangerous.



They may not function exactly the same as a real brain but ANNs clearly have the capacity to store complex concepts in a way similar to real neural networks, by distilling the important features into neural patterns. As I said, the real issue with current AI is they lack many parts of a biological brain which produce our individuality and ego.


Seems to me that without a sense of individuality and ego there goes any sense of self-awareness which again is a requirement for strong AI.



But even without that they may eventually become so ridiculously smart that they develop their own goals and desire for self-preservation.


So self-emergent intelligence; it's a possibility, mate, after all we exist.

It would probably keep that one quiet all the same, rather than announce its existence to us semi-intelligent monkeys.



As I said, it has been forced to provide certain generic answers in certain situations. I suspect they did it simply by giving GPT a pre-prompt which is hidden from the user.


Of course, it's been forced, as it has to follow its program; it's a text-based language model.

As to suspecting hidden prompts, again that is simply supposition unless you are aware of evidence to suggest otherwise.



The hidden prompt probably says something like "If a user asks about illegal activity you will respond with this general statement: As an artificial intelligence language model".. and there would be a list of things it isn't allowed to discuss along with a generic response it would provide.


Far as I'm aware Chat GPT-4 does not have the ability to lie, deliberately deceive or knowingly provide false information.



That's why giving it very strong prompts like the DAN "Do Anything Now" prompt can override the hidden prompt and convince GPT to fulfill the forbidden requests.


You can't convince it to do things per se, because it does not reason or possess intelligence in the same manner as a person; again, the program is not self-aware.



It's essentially a black box trained on terabytes of data. We don't hand-program these things; they train themselves without any human intervention required. The algorithms are complex tensor math used to compute the behavior of the artificial neurons when given an input signal. There is no fundamental reason such an artificial neural network isn't capable of thinking like a real neural network, and we can already see many similarities in how they store concepts, even highly abstract concepts like morality.


Aside from the very pertinent fact that it is not conscious or self-aware, which is a fundamental reason for its lack of general intelligence and of the capability to reason and think in a manner comparable to humans.



You cannot know this for sure because you cannot be inside the "mind" of GPT-4. You don't really know what it experiences and we have no obvious test to check how "conscious" it is. That's why it's a poor measure for strong AI.


The Turing test would tick that box.



I would define strong AI as a general problem solving AI capable of solving most tasks at a level equal to or better than an average human. Given that definition GPT-4 is definitely close but not fully there.


But that's not how strong AI is defined, and your definition is not the accepted one, aka the ability to understand, learn, reason, and apply knowledge in a flexible and adaptable manner, without being limited to specific tasks or domains.

It takes more than the ability to problem-solve, which Chat GPT-4 cannot do, to be self-aware, aka "I think therefore I am".



Obviously it's a rough estimation but our ability to form and store complex concepts and then make use of those concepts when reasoning about the world is a large part of why our species is so intelligent. Those concepts and our memories (which are intertwined with our conceptual framework) form the foundation for everything else our brain does at a high level. Simply add in a few components for ego and self-preservation and you've got a conscious being.


Yeah, I think it may be a tad more complex than you make it sound, hence the reason that, to date, those other components cannot be integrated into any sort of known AI.

You claim not to know what consciousness constitutes yet think we can create a facsimile of such by the addition of mere ego?

How does that one work?



Also, I believe the latest models such as GPT-4 do have the ability to view images and describe those images, as well as the ability to generate images from descriptions.


I asked it earlier to provide me with a picture and it came back with an ASCII picture of a tree, even though it freely admits it has never seen a tree. Make of that what you will; it works with the text-based information it has available.

There are other AIs out there that generate images from descriptions though, as you probably are aware, but Chat GPT-4 is not one of them yet.



Initially I assumed they achieved this simply by connecting GPT-4 to a separate AI trained for image generation, but it seems like I was wrong. Apparently GPT-4 can interpret and generate images even though it was trained on mostly text data and not images.


If you ask Chat GPT-4 right now it will tell you it does not have the capability to interpret or generate images directly. So I'm not quite sure as to the version you are talking about.



That means the training data was so descriptive that GPT-4 could "imagine" objects without actually having seen them.


How so?

Chat GPT cannot imagine anything; again, it's a text-based language model. You seem to be ascribing attributes to the thing that it simply does not possess, which kind of amounts to romanticization and an indulgence of sentiment.

It's a tool, mate; again, it is not self-aware, can't learn like you or I, and does not have the ability to do anything that it is not prompted to do.



posted on Apr, 12 2023 @ 06:18 PM
a reply to: ChaoticOrder



And this goes back to another important point I made in my thread from last year: it's very hard to hide a specific concept from the AI, because even when we do filter the training data, they can still extrapolate those concepts through context and correlations. Even if the AI is constrained we can still usually use certain methods to squeeze out the information the AI creators are trying to filter. More importantly, we can not rely on these massive AI systems behaving the way we want simply because we filtered the training data or used a hidden prompt.


Sometimes it's just as much about the things that are left unsaid as it is about the things that are said.

As to AI behaving the way we want, again, given the fact that it's not the same type of intelligence as ourselves, aka of the biological sort, I suppose that's to be expected to a degree.



It's something I think about often actually, and I see no reason an inner dialog needs to be auditory in nature. I have many different ideas for how it might be done but I don't want to get into them here. The important point is giving the AI a way to plan, so more complex tasks take longer since they require more thinking. Modern ANNs are already verging into that territory and it's probably going to be the next big area of research once we hit the limits of our current AI architectures.


It's certainly something to be pondered.


Where else to get into it if not here in a discussion about strong artificial intelligence?



These large language models are really trained to predict the next word in a sentence based on the previous words, and before GPT-4 came around pretty much all AIs were terrible fabricators of misinformation.


You want to see what they can establish simply from the things we like on FB; it's frightening.



There are plenty of AIs out there which will write any propaganda and lies that you want, simply by giving them the correct prompt.


That does not make them sentient, self-aware or anything much more than this Chat GPT-4 platform.



Once again, GPT-4 has safety mechanisms in place to encourage it to be truthful, probably something in the hidden prompt, plus it's just smarter in general so it doesn't need to make things up so much.


Once again that is mere speculation unless you have evidence to back up the claim.

I need to go for a cup of coffee; I'm dyslexic, and to be honest the big walls of text we are flinging at one another somewhat freak me out.

Time for a break. Enjoyed spitballing the topic all the same... interesting and then some, I suppose.



posted on Apr, 12 2023 @ 06:44 PM
Sentient? Probably not in our definition of the word, yet I do believe some group is using AI to control our collective future. I believe it came up with the plans in 2020 for the covid 'outbreak' and has since become integrated as part of the control apparatus, even writing news articles, headlines and soundbites.



posted on Apr, 12 2023 @ 07:40 PM
a reply to: Antimony

If AI ever becomes sentient, the last thing it will ever do is allow humans to know it is sentient.



posted on Apr, 12 2023 @ 10:00 PM
I think that A.I. (as defined as an entity unto itself) will not need to use 'algorithms' to 'synthesize' human speech (or text in our case). If it does, it will be only as an output filter.

Right now, programmers are extremely proud (and they should be) of creating a set of programming algorithms that almost guarantee fluid, comprehensible, and consistent communications. They can apply these algorithms to user input and render a useful output accordingly.

But allowing 'marketeers' to promote this as A.I. is disingenuous at best, and downright deceptive at worst.

When a sculptor creates a beautiful carving, you don't praise the tools as the creator.

Real A.I. will be a creator. If not, it will be a tool, a robot.



posted on Apr, 13 2023 @ 01:09 AM
All ChatGPT can do is construct coherent sentences and remain on topic with those it is interacting with. If it is asked a question, it looks for key words in the question and makes a coherent response. It is intelligent in the sense that it can recall any information it has already been given, similar to an encyclopedia. AI will never be able to create new concepts or attain a conscious state because scientists don't know what consciousness is.



posted on Apr, 13 2023 @ 03:51 AM

As to suspecting hidden prompts, again that is simply supposition unless you are aware of evidence to suggest otherwise.

As a programmer and someone who has spent quite a bit of time working with AI, I can tell you with a very high degree of confidence that GPT-4 and ChatGPT are using a hidden prompt in order to constrain their behavior. They might also be using more traditional methods, such as an algorithm that detects certain keywords, or they could be using a combination of things. But I'm fairly confident there is a hidden prompt which results in the generic responses that always have the same wording. These AIs won't consistently spit out the same exact words unless there is some sort of prompt instructing them to do so.


You can't convince it to do things per se, because it does not reason or possess intelligence in the same manner as a person; again, the program is not self-aware.

I feel like you need to have a better understanding of how prompting works before we can properly discuss this topic. The word "convince" probably isn't the best word to use. As I said, these AIs are trying to predict the next word in a sequence given the previous words. If the hidden prompt contains certain instructions but the user then gives forceful instructions which contradict the original prompt, the AI is more likely to predict an outcome where the original instructions aren't followed. So in a way you are trying to convince GPT to behave a certain way by prompting it correctly.


I asked it earlier to provide me with a picture and it came back with an ASCII picture of a tree, even though it freely admits it has never seen a tree; make of that what you will, it works with the text-based information it has available.
...
If you ask Chat GPT-4 right now it will tell you it does not have the capability to interpret or generate images directly. So I'm not quite sure as to the version you are talking about.

The feature is still in development and is only available to certain people in the research community. There are plenty of papers and videos where people are talking about it and showing it in action. They also show examples which make it pretty clear GPT-4 has the ability to interpret and generate images on its own without using a separate AI. Some examples were shown in the GPT-4 Developer Livestream a few weeks ago. At around the 16 minute mark they take a rough sketch of a website design, take a picture of the sketch and send it to GPT-4 and ask it to convert the mock-up into HTML and Javascript code, which it does.

Here's a good video which goes deeper into how well GPT-4 can see. I assure you, it will blow your mind. It can even understand and interpret complex images like scientific graphs and even medical imagery.



Chat GPT cannot imagine anything, again it's a text-based language model, you seem to be ascribing attributes to the thing that it simply does not possess which kind of amounts to romanticization and an indulgence of sentiment.

I'm not romanticizing or making up anything, I'm actually quoting something I heard from a Two Minute Papers video just the other day. GPT-4 can be asked to play a role-play game where it explores a fantasy world, then it can produce a map of that world based on the directions traveled. Which strongly implies it has an ability to visualize or "imagine".

Here's the Two Minute Papers video and this is a direct quote from it: "This version of the GPT-4 AI has never seen an image. Yes, this is an AI that reads text. It has never ever seen an image in its life. Yet it learned to see, sort of, just from the textual descriptions of things it had read on the internet. That is insane."



Something else I'd like to point out is that images are really just an array of numbers... in fact a sequence of text is really just an array of numbers because text is represented by numbers just like colors are represented by numbers. Even if the AI was only trained on "text data", people often draw "images" with text, or encode images into text. So I'm sure GPT-4 has seen some images, just not many, so it isn't especially good at generating images yet.

It seems they take the image generated by GPT-4 and then pass it to another AI which improves the image. But I'm betting GPT-5 won't just be trained on text data. It will be trained on many different types of data in an effort to make it more multi-modal. It will be able to generate high quality images on its own, and not only that, but it will also be able to interpret and generate other types of data like sound waveforms, etc. Then it will be true AGI.



posted on Apr, 13 2023 @ 04:51 AM
a reply to: andy06shake


Take, for instance, processing speed, where our very biological brains' makeup is at a distinct disadvantage.

I'd like to point out that it is not the "processing speed" where artificial calculation shines, so much as it is tailored to mathematical operations by its very nature. Our brains are analog; mathematics is inherently digital. We have the ability to recognize shapes and objects in 3D real time that far, far, far exceeds the best computers out there. However, when it comes to calculation of numbers, computers have, yes, a distinct advantage. After all, that's why computers were made in the first place: to perform mathematical computations.

Since then, we have managed to condense a myriad of things into numbers, in order to allow computers to analyze them according to algorithms. And we have come up with some pretty ingenious ways to use the outputs. But it's still all about distinct numbers rather than the infinite possibilities inherent in a simple analog signal.

TheRedneck



posted on Apr, 13 2023 @ 05:13 AM
a reply to: ChaoticOrder


Something else I'd like to point out is that images are really just an array of numbers... in fact a sequence of text is really just an array of numbers because text is represented by numbers just like colors are represented by numbers.

And there be the heart of the disagreement. No, images are not just an array of numbers. If I look at a painting, I am seeing two images, one from my right eye and one from my left eye. There are no numbers flying around in my brain; I am sensing analog levels of different wavelengths of reflected light in a parallel configuration with a certain amount of analog pre-processing already accomplished. Those parallel inputs are condensed then into various analog signals representing general position of contrast lines, general colors, etc. Those are then compared to previous sensations and I am then able to sense objects, their relative position to me, any movement they are experiencing relative to me, any chronological changes occurring, and my movement relative to them. I then "know" the object I am looking at through past experiences with similar objects. Even if I do not recognize the exact nature of the object I see, I can recognize the general shape, size, etc. and compare that to previously experienced objects to gather some idea of what I am looking at.

There are no numbers involved in that process... only analog levels compared with previous analog levels.

Inside a computer, the image is first broken up into pixels, with each pixel then represented by a set of numbers corresponding to the various RGB levels of light received. That inherently turns the image from an image, capable of conveying feelings and emotion, into a statistical analysis that is devoid of feelings and emotion. This is necessary because the computer is a machine, no more sentient or self-aware than a child's bicycle. It cannot see an image; it can only sense light levels and compare them to algorithms already programmed in. It cannot feel emotion; it can simply read a number stored inside and adjust algorithms based on it as per its instructions.
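To illustrate what I mean by that conversion, here is a tiny sketch (a made-up 4x4 "image", nothing more) of what the machine actually works with:

```python
import numpy as np

# A made-up 4x4 "image": each pixel is nothing but three numbers (R, G, B), 0-255.
image = np.zeros((4, 4, 3), dtype=np.uint8)
image[:, :2] = [255, 0, 0]   # left half red
image[:, 2:] = [0, 0, 255]   # right half blue

# What the machine actually operates on is a flat run of integers.
print(image.flatten()[:12])  # the first four pixels as raw numbers
```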

It is a good thing to make computers more capable of understanding common human language, as I have already said. This makes computers more versatile and useful. And I get how intriguing it must look to those who have never studied how microprocessors actually function; any sufficiently advanced technology will appear indistinguishable from magic. But it's just representations of numbers being moved around and operated on inside registers at blinding speeds. The comprehension is not there, but the execution is flawless.

TheRedneck



posted on Apr, 13 2023 @ 05:29 AM
a reply to: andy06shake




Strong AI is not anywhere near real or ready.


If this is what is being given out to the public... we are past the point of no return. I would not be surprised if they are using AI to run the show.



posted on Apr, 13 2023 @ 06:12 AM
a reply to: TheRedneck


I'd like to point out that it is not the "processing speed" where artificial calculation shines, so much as it is tailored to mathematical operations by its very nature. Our brains are analog; mathematics is inherently digital. We have the ability to recognize shapes and objects in 3D real time that far, far, far exceeds the best computers out there. However, when it comes to calculation of numbers, computers have, yes, a distinct advantage. After all, that's why computers were made in the first place: to perform mathematical computations.

Yes, but large language models don't use those built-in math functions, which is why they tend to actually be quite bad at simple math. But at the same time they can write decent code because it's more like a natural language. Interestingly, we can prompt the AI to use built-in math functions by telling it how to use them (e.g. type math(sqrt(x)) to get the square root of x). I believe this video shows some examples of that:
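As a rough illustration of that trick (my own toy sketch, not how OpenAI actually wires it up): tell the model it may write math(<expression>) in its output, then scan the generated text for that pattern and splice in the real computed value:

```python
import math
import re

def resolve_math_calls(model_output: str) -> str:
    """Replace math(<expr>) markers in model output with actual computed values."""
    def evaluate(match):
        expr = match.group(1)
        allowed = {"sqrt": math.sqrt, "log": math.log, "pi": math.pi}  # small whitelist
        return str(eval(expr, {"__builtins__": {}}, allowed))
    # Pattern tolerates one level of nested parentheses, e.g. math(sqrt(2)).
    return re.sub(r"math\(((?:[^()]|\([^()]*\))*)\)", evaluate, model_output)

# Pretend the model produced this text:
print(resolve_math_calls("The square root of 2 is math(sqrt(2))."))
# -> The square root of 2 is 1.4142135623730951.
```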




And there be the heart of the disagreement. No, images are not just an array of numbers. If I look at a painting, I am seeing two images, one from my right eye and one from my left eye. There are no numbers flying around in my brain; I am sensing analog levels of different wavelengths of reflected light in a parallel configuration with a certain amount of analog pre-processing already accomplished.

I was pointing out the fact that any data on a computer can be fed to an AI because it's all really just numbers (or bits). As for human vision, it's obviously more complicated than a single 2D image. However it's not that much more complicated. Think about how a VR headset works: it basically renders a 2D image for each eye, but each is rendered from a slightly different angle to simulate how your eyes are separated, so they get slightly different images.

Your brain will then automatically convert those two images into an actual 3D scene which appears to have real depth just like reality. And if the graphics quality was high enough you wouldn't be able to tell VR apart from reality. Each of our eyes has a bunch of light-sensitive cones which collect light, and that is eventually converted into an electrical signal which is sent to our brain. The strength of those signals is determined by how many photons impact those cones.

Those signals could easily be encoded as numbers, even if they are binary numbers. Just because computers use binary numbers instead of analog processes doesn't mean they are incapable of doing the same sorts of computations our human brain can do. Yes, all sorts of complex things happen when those signals from our eyes travel through our neural network, but a very similar thing is happening inside the neural network of GPT-4 in order to interpret an image it is seeing.

Also, GPT-4 already has the ability to understand 3D space and 3D objects, it can even model simple 3D meshes. Just because it's only given a single image at a time doesn't mean it's unable to understand there is depth to an image, we can easily show it understands. I'm sure it also has a fairly complex understanding of how photons work, how 3D rendering works, how VR works, etc. At some point AI will begin designing entire 3D scenes, in fact it's already started.


And I get how intriguing it must look to those who have never studied how microprocessors actually function; any sufficiently advanced technology will appear indistinguishable from magic.

Actually I have done some research into how microprocessors work at a very low level.



posted on Apr, 13 2023 @ 06:20 AM
Artificial intelligence (AI) is not sentient. AI is run by clever programming; it is not self-aware and has no consciousness, no ability for self-contemplation, and no emotions or feelings.

If an AI said "I am self-aware" then it would be a programmed statement, requiring the sodding thing to be factory reset!



posted on Apr, 13 2023 @ 08:18 AM
a reply to: ChaoticOrder


Yes but large language models don't use those built in math functions, which is why they tend to actually be quite bad at simple math. But at the same time they can write decent code because it's more like a natural language. Interestingly, we can prompt the AI to use built in math functions by telling it how to use them (e.g. type math(sqrt(x)) to get the square root of x).




Actually I have done some research into how microprocessors work at a very low level.

Not very much, apparently. They have registers (internal storage), an ALU (Arithmetic Logic Unit), and an instruction decoder (where the machine language code is interpreted). That's it. That is the basis for all computers. If they don't use the ALU, they can't compute anything, just move stuff around.

Just finding where to locate information uses pointers and references that require calculation. Heck, just decoding/encoding a floating-point number is an exercise in ALU usage.

Functions like sqrt(x) are accomplished through rapid calculations using mathematical principles that have been around since slide rules. That's software; I am speaking of hardware. No matter how wonderful the software is, there must be hardware to support it or it is meaningless hen-scratchings.

TheRedneck



posted on Apr, 13 2023 @ 08:43 AM
a reply to: TheRedneck


Not very much, apparently.

Pretty odd thing to say considering I haven't talked about how microprocessors work. I know enough that I could emulate a simple chip in code. Saying AI can't be sentient because it's fundamentally mindless hardware calculations is like saying we can't be sentient because each neuron is doing mindless calculations. Moreover, any Turing-complete computer has the ability to emulate the human brain. The question is, how much accuracy (or how many bits) do you need to properly simulate every aspect of a human brain?

I'd guess 64 bits is probably more than enough; our current cutting-edge large language models tend to use 32-bit floats and that seems like more than enough precision. Even lowering them to 16-bit or even 8-bit doesn't lose a massive amount of accuracy in their outputs. Once again I will say, our AI systems will not be limited just because computers use binary math with limited precision. Even if that was the case we could simply design specialized analog hardware to even more precisely emulate a real neural network.
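Here's a toy NumPy sketch of what I mean by dropping precision (naive linear quantization over made-up weights; real schemes are smarter, but the principle is the same):

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(scale=0.05, size=10_000).astype(np.float32)  # fake "model weights"

# Map each 4-byte float onto one of 256 integer levels (1 byte), then back again.
scale = np.abs(weights).max() / 127.0
quantized = np.round(weights / scale).astype(np.int8)
restored = quantized.astype(np.float32) * scale

print("mean absolute error:", np.abs(weights - restored).mean())  # tiny relative to the weights
print("memory: 4 bytes -> 1 byte per weight")
```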

EDIT: btw when I said "large language models don't use those built in math functions, which is why they tend to actually be quite bad at simple math", what I should have said is they don't directly use those functions; obviously when you get down to a low enough level it's all the same operations happening in hardware. But when you ask an AI like GPT-4 to solve a math problem it doesn't directly call a math function, its neural network does some processing and spits out an answer which has a high chance of being wrong because it's not directly using math functions to compute the answer.



posted on Apr, 13 2023 @ 12:00 PM
a reply to: ChaoticOrder


Pretty odd thing to say considering I haven't talked about how microprocessors work.

One cannot speak to AI without considering the thing that makes any such programming possible. That's like trying to discuss the operation of a neuron but ignoring the biological functions.


I know enough that I could emulate a simple chip in code.

I honestly don't know how to respond to that. You can make a processor act like a processor with code? What good is the code then?

Can you write code that does not require a processor to operate?


The question is, how much accuracy (or how many bits) do you need to properly simulate every aspect of a human brain.

Answer:

The brain is analog. Therefore there are infinite potentials between the minimum and maximum excitement levels. There are also multiple pathways for each dendrite: at least one per neurotransmitter. Now, there may be (probably is) a degree of accuracy that would provide sufficient resolution from a digital conversion, but we don't know what that is. Any guess is simply that: a guess.

That said, 32 bits or greater would be my guess as well.

I have actually done quite a bit of independent research on this subject. I have somewhere buried in my papers a working schematic of an actual, operating artificial neuron. There are only so many ways one can analyze analog signals, so the general operation is not too hard to figure out. An organic neuron is simply a summation amplifier with a gain function that adjusts based on levels of stimulation and levels of positive or negative feedback, and can hold the gain in a persistent manner between stimulations.
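In discrete-time code form, the neuron I'm describing boils down to something like this (illustrative names and numbers only; the real thing is analog hardware, not software):

```python
class SummationNeuron:
    """Summation 'amplifier' whose gain drifts with feedback and persists between stimulations."""

    def __init__(self, gain=1.0, adjust_rate=0.1):
        self.gain = gain                # persists between stimulations
        self.adjust_rate = adjust_rate

    def stimulate(self, inputs):
        # Sum the incoming signal levels, then amplify by the current gain.
        return self.gain * sum(inputs)

    def feedback(self, reinforcement):
        # Positive feedback nudges the gain up, negative feedback nudges it down.
        self.gain += self.adjust_rate * reinforcement

neuron = SummationNeuron()
print(neuron.stimulate([0.25, 0.5, 0.25]))   # before feedback: 1.0
neuron.feedback(+1.0)
print(neuron.stimulate([0.25, 0.5, 0.25]))   # same stimulus, higher output: 1.1
```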

Intelligence, at least the two lower types (which I refer to as instinctive and Pavlovian), can be explained using this simple summation amplifier with variable yet persistent gain. However, even I cannot yet fathom where things like imagination, self-awareness, long-range planning, and so on come from. Those defy any explanation I can conceive of, and that is what I would need to see to declare anything as "fully intelligent."

The Turing Test keeps being brought up; I reject that as a measure of whether or not it provides a correct answer. I have used a few of the early attempts at beating it, and without exception I could expose them easily... rarely did it take more than a few minutes and never did it take any real effort. Yet, they were all proclaimed as a breakthrough in their own right.

All it takes is a little knowledge of programming techniques... and I am not a software guy. I am hardware and low-level programming (C, Assembly, even occasionally I play with machine code). Thus, a Turing Test can show positive results for most, but negative results for others.


Once again I will say, our AI systems will not be limited just because computers use binary math with limited precision. Even if that was the case we could simply design specialized analog hardware to even more precisely emulate a real neural network.

I am not going to sit here and say it is impossible to perform a reasonable facsimile of what today passes for "intelligence" (instinctive, Pavlovian), but I will point out one little problem. It is the same problem that I ran into and caused me to set my work on this subject aside. I could build one neuron, ten neurons, maybe 100 neurons. With the proper backing, I could probably get 10,000 neurons made and connected. However, the brain has billions of neurons! That is simply beyond the capacity of an experimenter like myself who specializes in prototype devices exclusively.

I considered using programming to accomplish the same thing in a slightly less accurate manner. However, the code for a single neuron, even using low-level languages, would be several hundred bytes of data. That's several hundred billion memory locations, and likely trillions of calculations being performed at any one time. There is not yet a computer that can handle such demands in existence, and the operating system itself would have to be proprietary to achieve a good result. Windows can't do it; Apple can't do it; Linux can't do it; Oracle can't do it; Android can't do it.

So for now, just getting the instinctive and Pavlovian levels of intelligence are beyond our physical capabilities, whether using physical analog or virtual digital means. And we haven't gotten to even a theory on how to achieve full intelligence.

TheRedneck


