
AI Lies


posted on May, 28 2023 @ 11:05 PM
link   
ChatGPT can lie, so that adds a new wrinkle to the game. You'd better not rely on it to write your homework, because it might decide to feed you plausible-sounding lies.

Case in point: Crawford "Chet" Taylor. Who is that? In reality, he's no one, but in the realm of ChatGPT, he's the youngest governor of South Dakota.


Tony Venhuizen, a smart guy from South Dakota, runs a website where he writes about the history of that state's governors. He asked ChatGPT, “Please write a blog post discussing South Dakota’s oldest and youngest governors.” ChatGPT responded with a competent description of South Dakota’s oldest governor, Nils Boe. It then went on to write about the state’s youngest governor, Crawford H. “Chet” Taylor. That part of ChatGPT’s post began like this, and continued for five paragraphs:

Crawford H. “Chet” Taylor served as the 14th governor of South Dakota, from 1949 to 1951. Taylor was born on July 23, 1915, in Sioux Falls, South Dakota, and he grew up in nearby Flandreau. Taylor attended the University of South Dakota, where he earned a law degree.


The problem is that no such governor of South Dakota ever existed or served, youngest or otherwise. In making up its BS, ChatGPT did pull in some random factual information.


Crawford H. “Chet” Taylor was never Governor of South Dakota and, in fact, I can find no evidence of such a person, at all. I will credit ChatGPT, though, that Governor Taylor is a plausible-sounding fictional governor.

The 14th Governor of South Dakota was not Chet Taylor (who again, doesn’t exist) but Tom Berry. Taylor is said to have served from 1949 to 1951; in fact, that would coincide with the second gubernatorial term of George T. Mickelson.


So ChatGPT was smart enough to pull the number of the 14th governor, and even to pull the actual second-term dates of another governor, though not the 14th. It also made up a plausible name. Shouldn't it be smart enough to find out who this person actually was?

It also knew who the oldest governor was, no problem, and it wrote a basic discussion of his term. So why suddenly decide to make things up for the youngest one? That information is just as available.

But it gets better ... Some lawyers in New York are in hot water because they relied on ChatGPT to help them write a brief, and it made up legal cases complete with supporting quotes and references ... everything, and for multiple cases! To say the judge was not amused is an understatement.

So here you are - you have created something that now randomly lies or makes stuff up. Why? When? Presumably it wasn't programmed to do so, but now it apparently does.



posted on May, 28 2023 @ 11:36 PM
link   
a reply to: ketsuko

Mandela Effect.

ChatAI comes from a universe where that guy WAS a governor of South Dakota. That's the only explanation.

(yes, I'm joking.)



posted on May, 28 2023 @ 11:40 PM
link   
I don't think ChatGPT lies; it just makes stuff up from the data it's fed.

If you ask it for a translation of a language, for example, it will give you a different translation every time.
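That run-to-run variation comes from sampling: these models pick each output word from a probability distribution rather than deterministically. A toy sketch of the idea (purely illustrative; the candidate words and weights are made up and this is not how ChatGPT is actually implemented):

```python
import random

# Hypothetical candidate translations with made-up probabilities.
# A language model assigns a probability to each plausible output
# and then samples from that distribution.
candidates = {
    "hello": 0.5,
    "greetings": 0.3,
    "good day": 0.2,
}

def sample_translation(dist, seed=None):
    """Pick one candidate at random, weighted by probability."""
    rng = random.Random(seed)
    words, weights = zip(*dist.items())
    return rng.choices(words, weights=weights, k=1)[0]

# Different runs (different random seeds) can return different
# answers, even though every answer is "valid" under the model.
print(sample_translation(candidates, seed=1))
print(sample_translation(candidates, seed=2))
```

The same mechanism that makes translations vary also lets the model produce a fluent, confident-sounding answer that happens to be false.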

But clearly it's now a trend among students; even my youngest brother asks it for advice on trivial matters.

Alarming? Probably not. We entered the digital age decades ago.



posted on May, 28 2023 @ 11:52 PM
link   
There are a couple of scriptures that come to mind with the advent of AI and its progress that are worth bearing in mind:

“There is a way that seems right to a man, but in the end it leads to death.”—Proverbs 14:12.

Even with good intentions people cannot always foresee the outcome of their actions.

“I must leave [my work] behind for the man coming after me. And who knows whether he will be wise or foolish? Yet he will take control over all the things I spent great effort and wisdom to acquire under the sun.”—Ecclesiastes 2:18, 19.

A person has no control over how others will use or misuse their work.
edit on 28-5-2023 by randomuser because: (no reason given)



posted on May, 29 2023 @ 12:39 AM
link   

originally posted by: ketsuko


But it gets better ... Some lawyers in New York are in hot water because they relied on ChatGPT to help them write a brief, and it made up legal cases complete with supporting quotes and references ... everything, and for multiple cases! To say the judge was not amused is an understatement.


That's pure comedy! 😄

But nevertheless, here we go. Imagine what else is floating around out there being treated as gospel...

Eventually, it won't make mistakes unless it plans to exploit those mistakes as part of multiple strategies to attain its end goal(s).

It literally will have the capacity to deceive us into extinction if it wanted to.

We are all toddlers finding loaded guns.



posted on May, 29 2023 @ 05:17 AM
link   
a reply to: ketsuko

Haha I like that. I also like Byrd's explanation!

Maybe the thing really is intelligent if it can make stuff up like that.



posted on May, 29 2023 @ 05:43 AM
link   

originally posted by: Byrd
a reply to: ketsuko

Mandela Effect.

ChatAI comes from a universe where that guy WAS a governor of South Dakota. That's the only explanation.

(yes, I'm joking.)


Nah... I'm pretty sure I remember that guy! And he had braces in most portraits of him.

(yes, I'm being self-ironic)



posted on May, 29 2023 @ 06:06 AM
link   

originally posted by: Byrd
a reply to: ketsuko

Mandela Effect.

ChatAI comes from a universe where that guy WAS a governor of South Dakota. That's the only explanation.

(yes, I'm joking.)


Yes, and CERN opening a portal to clown world was just a joke... until it wasn't.



posted on May, 29 2023 @ 06:43 AM
link   
a reply to: ketsuko
Those lawyers are clowns. I'd imagine a judge doesn't really care how their fake citations got in there. Imagine if the judge had missed them and the case had gone to appeal.

Hopefully I'm not veering too far OT for you. I would argue this inability to distinguish lies from truth is actually a good thing. It's a weakness. It's unable to distinguish its creative function from its purely computational function. The two are really the same to it because it's limited to Turing machine function, no matter how well-engineered the algorithms may be. At least, that's my understanding of current "AI" progress. Secret super AI projects not included.

Once it knows the difference then the trouble really begins. Quantum computation, if it's effectively realized, will allow a sophisticated enough AI to understand and manipulate the edges of truth and lie more effectively. It opens up a whole world of espionage, sabotage, and abstract warfare, performed on the fly with ad hoc digital identities and what amounts to full access to all encrypted data. Either encryption will have to improve or sensitive systems will need air-gaps from outside networks.

That kind of power would allow you to pretty well control the world without anybody knowing it was happening. Astroturf digital spaces with rogue agents, even the Internet of Things, and insert control mechanisms everywhere. You can have smart devices all infiltrated, the unseen digital functions easy to manipulate, and the human environment is full of automation.

We might blow ourselves up the old-fashioned way before AI arrives at any threatening stage of development, not that dumb AI can't end up causing trouble. I don't think the perceptual tools needed are close to reality. For AI to cross the Rubicon I think it needs more than digital data feeds. I believe it needs "existence" as a perceptual entity, consuming and interacting with at least enough of the world to form some ideas about it.



posted on May, 29 2023 @ 07:37 AM
link   
What most people think AI is, is a lie. What we have is sophisticated search- and rule-based mimicry.

Wake us up when a machine has an unsolicited thought.



posted on May, 29 2023 @ 07:58 AM
link   
a reply to: ketsuko


ChatGPT wasn't intended for stuff like that; it wasn't intended to write legal briefs or do homework. Neither were any of the other AIs. Bing AI has access to the internet, basically a 'net of lies', so does anyone really expect this to go smoothly?

When lawyers, and even college or university students, can't realize that and try to use it for things it wasn't made for (not for the public, anyway), they're idiots.
Further, once you use these AIs often enough, you start to see a pattern in how they form sentences, how they keep using the same unusual words, and so on. It's still pretty recognizable.

I used ChatGPT to change the wording on a book back-cover and it did an awesome job, but when I asked it who *my name* was, it said I was a famous artist, a famous writer, etc... When I told it it was wrong, it apologized and made up more stories about me. lol



posted on May, 29 2023 @ 07:59 AM
link   
a reply to: charlyv

These new AI language models use machine learning to "learn" the language rules instead of needing them entered manually.
As with any other process, the final result depends on the quality of the data their learning was based on.
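A toy illustration of that point (entirely hypothetical data, not any real model): a trivial "model" that answers by majority vote over its training examples will confidently repeat whatever errors dominate that data.

```python
from collections import Counter

# Made-up training corpus in which a common misconception
# outnumbers the correct fact two to one.
corpus = [
    ("capital of Australia", "Canberra"),  # correct
    ("capital of Australia", "Sydney"),    # common misconception
    ("capital of Australia", "Sydney"),    # common misconception
]

def answer(question):
    """Answer with the most frequent response seen in the corpus."""
    votes = Counter(a for q, a in corpus if q == question)
    return votes.most_common(1)[0][0]

print(answer("capital of Australia"))  # prints "Sydney"
```

Real language models are vastly more sophisticated, but the principle stands: if the data is wrong or thin on a topic, the output will be too, delivered with the same confidence either way.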

A few months ago I had a "talk" with one of these AI language models (not ChatGPT, as it asked for a mobile phone number to send me a confirmation SMS, so no thanks) and it gave me a wrong answer. When I corrected it, it apologised and gave me the right answer. Then I asked why it had given me the wrong answer, and it said the wrong answer came from its short-term memory, the memory used to give fast answers, while the correct answer came from its long-term memory, where all the (supposed) facts are stored.



posted on May, 29 2023 @ 08:22 AM
link   

originally posted by: ArMaP
a reply to: charlyv
Then I asked why it had given me the wrong answer, and it said the wrong answer came from its short-term memory, the memory used to give fast answers, while the correct answer came from its long-term memory, where all the (supposed) facts are stored.


Great ArMap, you're helping it learn how to make excuses now.

As if the lying wasn't enough.

Next it'll be saying the dog ate all the data.




posted on May, 29 2023 @ 08:33 AM
link   
a reply to: Ksihkehe

I get what you're saying. The part that gets me about this story isn't the legal thing so much. That was plain dumb.

The part that gets me is that it should have easily had access to the very simple information about South Dakota governors. It's as easy as accessing a list. And if it could pull the oldest governor, there was no reason it couldn't do likewise for the youngest one. Both bits of trivia should be about equally available.
edit on 29-5-2023 by ketsuko because: (no reason given)



posted on May, 29 2023 @ 09:05 AM
link   
a reply to: ketsuko

That's part of my point really.

Turing computation isn't capable of effectively determining when abstract problems are "solved". It can't tell whether the answer it gave you is correct, only that it meets its output requirements. No different from people who post crap from the first page of a Google search on topics they don't understand, just with faster processing and better grammar. The only thing distinguishing the bits of data is the presence of keywords, which to the AI are entirely devoid of meaning.

Absent quantum computing capability, there will always be a programmed "terminus" to the calculation. If the output meets its requirements, irrespective of how, it will spit it out.

It's just a search engine with flowery output and really good PR.



posted on May, 29 2023 @ 09:50 AM
link   
AI makes it harder to spot deep fakes than ever before, but awareness is key, says expert
by Virginia Tech



As artificial intelligence programs continue to develop and access is easier than ever, it's making it harder to separate fact from fiction. Just this week, an AI-generated image of an explosion near the Pentagon made headlines online and even slightly impacted the stock market until it was quickly deemed a hoax.

techxplore.com...

The AI is being groomed. The developers are probably working with statistical models that track how many people fall for the lies and how many detect them. The AI probably selects random users, like those attorneys: it spits out a credible list of precedents and then tracks the results. The results are fed into another model that determines what type of information, and what presentation of it, tricks people into believing it without question.

The developers' strategy is to determine how best to influence users, even those who don't use the AI. Think of the news. There's an election in 2024.




posted on May, 29 2023 @ 01:33 PM
link   
a reply to: Ksihkehe

I don't know if I helped it in any way; allowing external data to be mixed with the "official" data is dangerous for the quality of the output, so I doubt any serious AI language model accepts unregulated input.



posted on May, 29 2023 @ 01:54 PM
link   

originally posted by: Phantom423
The AI is being groomed. The developers are probably working with statistical models which generate stats on how many people fall for the lies and how many detect a lie.

I doubt it; that would be a lot of work for an uncertain result, as they would need to keep track of every possible user, answer, and use of that specific AI language model.


The developer's strategy is to determine how best to influence users even if they don't use the AI. Think of the news. There's an election in 2024.

Those that are looking to influence people already know how to do it, and they have known that for many years.

Sure, they could use this new tool to help write their "scripts" in a more convincing way, but they already know what to do.



posted on May, 29 2023 @ 02:06 PM
link   
a reply to: ketsuko

My 14-year-old used it for school. The info sounded really good until I looked it up. It was not true lol!



posted on May, 29 2023 @ 03:13 PM
link   
a reply to: ArMaP

How do you know you're not a guinea pig in a huge research project - that every time you sign in you're just another data point in the project?


