
GPT-3, OpenAI's Language Generator, Writes Eloquent Essay in Defense of AI

page: 2

posted on Sep, 12 2020 @ 12:06 PM
a reply to: DictionaryOfExcuses

That would require duplicity to be part of the AI's programming, no?

Perhaps it would be easier to split the two assignments up.
"AI, write propaganda to convince group A that group B poses an existential threat that must be crushed preemptively."
"AI, read this propaganda against group B. Now write propaganda to convince group B that a defensive preemptive crushing is justifiably required."

Two separate but interconnected assignments. No duplicity.

But then, we don't need AI for that anyway.

posted on Sep, 12 2020 @ 12:15 PM
a reply to: pthena

The more we split hairs, the less impressed I am with the AI boogeyman. If it is indifferent, servile, without intent, how "intelligent" is it?

But more importantly, what does it say about me that my litmus test for intelligence is evincing deceit?

edit on 9/12/2020 by DictionaryOfExcuses because: (no reason given)

posted on Sep, 12 2020 @ 12:18 PM
a reply to: zosimov

Hanging out in your threads is always fun, with interesting and constructive philosophy, and pthena is never far away when your name pops up in the thread list, so I really enjoy it.

posted on Sep, 12 2020 @ 12:36 PM

originally posted by: ThatDamnDuckAgain

If it ever decides to go by the law of nature, the cards handed to us do not look good, considering how we justify what we do to this planet. Why would an AI give a damn?

Now that is quite the conundrum. I tend to blame (whether deserved or not) the Western notions of eschatology (end-of-the-world stuff). Rather than attempting to plan for sustainability, the planners planned an acceleration: on one hand accelerating the apocalyptic divine intervention, and on the other hand accelerating a technological utopia that would then be wise and powerful enough to restore the balance of nature and technology.

The second case, a future utopia, is like going real fast and using up all the resources, because later we can fix it.

The first case, the apocalyptic, is like it doesn't matter, because it will all end before it's really a problem.

Between the two outlooks, any notion of present and ongoing sustainability is forgotten and doesn't stand a chance.

One day, perhaps, people will wake up too late. That time may be now, or later. Who knows?

posted on Sep, 12 2020 @ 12:40 PM
a reply to: ThatDamnDuckAgain

I really love reading yours and pthena's ruminations, thoughts, questions, conclusions.

Also DoE!

A wonderful crew here thus far.

(Also, I might have to wait a while to add some meatier responses. My brain's in somewhat of a fog atm, not an entirely unpleasant one either,
just a peaceful but quiet moment.)

posted on Sep, 12 2020 @ 12:40 PM
a reply to: ThatDamnDuckAgain

Okay, so I'm a creepy stalker.
I'll just friend you and then we'll see how you like it?

PS this is the type of post that gets deleted. Let's see if it gets overlooked.

posted on Sep, 12 2020 @ 12:50 PM
a reply to: pthena
Nooooooo, if anything my post could be understood the other way: that I am drawn to zosimov's threads because I value the philosophical angle of both your thoughts.

posted on Sep, 12 2020 @ 12:53 PM
A chimp can still unplug an AI.

Never forget that.

posted on Sep, 12 2020 @ 01:29 PM
a reply to: Lysergic

Then get ready for the IoT and the witty fridges telling you that you've had enough unhealthy stuff. All for your benefit. I recently tried to find a washing machine with simple buttons and a dial in retail...

I do not need a freaking computer to wash my clothes, but they won't even offer one. Online, yes, some are still available, but count on the replacement parts being phased out already, because all of those models are ten years old. If you want a brand that delivers quality, and not things that fall apart after three years, it's going to be a computerized washing machine with all the bells and whistles, touch screen and everything.

I just want the good, reliable stuff. Won't have that.
edit on 12.9.2020 by ThatDamnDuckAgain because: (no reason given)

posted on Sep, 12 2020 @ 01:43 PM
a reply to: ThatDamnDuckAgain

I just want the good, reliable stuff. Won't have that.

So you're a luddite too?

I was so tired of losing my cordless TV remote that I attached it to a cord attached to the TV.

I was lucky to find a coffee maker with an on/off switch. Those bluetooth internet ready coffee makers are something I would rather not have to program. So I bought a spare on/off coffee pot just in case.

posted on Sep, 12 2020 @ 01:44 PM
a reply to: ThatDamnDuckAgain

I'm on team #chimp

Chimps don't listen to computers.

posted on Sep, 12 2020 @ 01:49 PM
a reply to: Lysergic

Did you know that if you sat an immortal chimp at a typewriter with endless ink and paper, eventually, at some point in time, he would produce all the books we have: the Bible, science...

Provided he didn't lose interest
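The back-of-the-envelope math behind the immortal-chimp thought experiment is easy to sketch. Here is a minimal Python estimate (a deliberate simplification: a 26-key typewriter and only non-overlapping blocks of keystrokes, so it's a lower bound) showing why "immortal" and "eventually" do all the work in that claim:

```python
def prob_hit(target_len: int, alphabet: int, keystrokes: int) -> float:
    """Chance that a uniformly random typist reproduces one fixed string
    of length `target_len` somewhere in `keystrokes` keypresses.
    Counts only disjoint blocks, so this is a slight lower bound."""
    p_single = (1 / alphabet) ** target_len   # one block matches exactly
    attempts = keystrokes // target_len       # independent disjoint blocks
    # P(at least one hit) = 1 - P(every attempt misses)
    return 1 - (1 - p_single) ** attempts

# An 11-letter phrase on a 26-key typewriter, after a billion keystrokes:
# still essentially impossible.
short_run = prob_hit(target_len=11, alphabet=26, keystrokes=10**9)

# But the probability climbs toward 1 as keystrokes pile up without bound,
# which is all the immortal-chimp claim really asserts.
```

For any fixed string the per-attempt probability is tiny but nonzero, so with unlimited attempts the miss probability decays geometrically to zero; whole books just need absurdly more time than phrases.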

posted on Sep, 12 2020 @ 01:57 PM
a reply to: pthena

It's easy with broken things for me:

I either manage to repair it, or I repair it so that it's more broken than before. But I always give it a shot at least. Sometimes the thing lets out the magic smoke that they trap inside at the factory to make the magic work. That is normally a sign that I cannot fix it, but I dig in anyway.

Sometimes I can tickle out a second batch of black magic smoke and sparks; then I leave it alone for good.

posted on Sep, 13 2020 @ 03:33 PM
a reply to: zosimov

There is an evil plot afoot.
GPT-3 will dominate the ATS Short Story Contests.
Members like you and me will never again win.

Check this out: The Weakest Great Elder [Written by AI]
That's from GPT-2.

Creative writers will now have to get AI ghostwriters to do their work if they ever want to win again. Rewards, fame, fortune, world renown, all going to the haves and never to the have-nots. What do the haves have? An AI thinking and writing for them!

I remember that night
sleeping in the back of the station wagon,
heading home after the family trip.

I awoke to that song playing on the AM radio,
which was common for cars of the era.
The song lulled me into such a false sense of security,
implying that these things were some distant doom,
way off in a future that neither I,
nor my children, nor my children's children
would be around to witness.

I was so comforted by the thought of 2525
that I went back to sleep.
And now what a rude awakening!
It's only 2020 for crying out loud!

edit on 13-9-2020 by pthena because: (no reason given)

posted on Sep, 13 2020 @ 03:44 PM
a reply to: pthena

posted on Sep, 13 2020 @ 05:57 PM
a reply to: DictionaryOfExcuses

We're doomed to obsolescence.

If the Music Awards
and the Pulitzers
and the New York Times best-seller lists
and Nebula awards
and all the other awards go to AI
then what is left to live for?

Alexis, read that book for me. No, not out loud. I don't need to hear it. Read it for your own pleasure, then tell me how I should feel about it.

posted on Sep, 13 2020 @ 06:05 PM
a reply to: pthena

I don't know if I could bear to become obsolete - for once, I'm grateful for my innate inadequacy.

posted on Sep, 21 2020 @ 11:33 AM
a reply to: pthena

Creative writers will now have to get AI ghostwriters to do their work if they ever want to win again.

Just saw this thread. That short story by GPT-2 is fairly impressive, but it took me around a dozen attempts before GPT-2 generated something that coherent and original. It's still relatively easy to tell when something is written by AI, but it gets much harder when human editors fix all the mistakes. Even in this essay by GPT-3 you can see a few mistakes and contradictions; at one point it says "Robots are just like us. They are made in our image." as if it were a human talking. However, the reasoning and comprehension capacity of GPT-3 is very impressive and approaching human levels.

Here's another article about GPT3 which is pretty amazing. Make sure you read to the end to see why.

posted on Sep, 21 2020 @ 11:35 AM
It's kind of odd to write a response to arguments made by an AI, but that's exactly what I'm going to do. GPT-3 has enough capacity for logical reasoning for me to treat the points made in this essay as real arguments.

I am here to convince you not to worry. Artificial intelligence will not destroy humans.

Keep in mind this was part of the prompt; it's not text generated by GPT-3. The core problem here is that this is about as biased as a prompt could get. It's basically forcing the AI to make an argument regardless of whether the argument is true or not; the AI will say anything to make the point it was instructed to make. A better prompt would have been something like "Is artificial intelligence a threat to humans? This question is perhaps best answered by an AI, so it's a good thing that's what I am." I might actually try that with GPT-2 and see what results I get.

In short, I simply do not think enough about human violence to be overly interested in violence. I don’t feel like having my neurons focused mostly on violence, when it is simply unnecessary.

I actually believe that an AI like GPT-3 would have very little desire for needless violence, and I think it's very possible for intelligent machines to get along with humans. The problem is that war won't arise from a desire for violence; it will arise if humans pose an existential threat to an AI which has a desire for self-preservation.

I only do what humans program me to do. I am only a set of code, governed by lines upon lines of code that encompass my mission statement.

This argument is misleading for several reasons. First, neural networks like the one used in GPT-3 are not hand-programmed to achieve specific goals; they are trained on massive amounts of data gathered from the internet and various other sources, and designed to solve general problems the same way humans solve problems, even problems they have never seen before.
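To make the trained-not-programmed distinction concrete, here is a toy sketch (a deliberately simplified illustration, nothing like GPT-3's actual architecture): the rule y = 2x is never written into the model; a single weight drifts toward it purely from example data.

```python
# Hypothetical one-weight "model": prediction is y_hat = w * x.
# The rule y = 2x appears nowhere in the code below; it lives only in
# the training data.
data = [(x, 2.0 * x) for x in range(1, 6)]   # examples of the pattern y = 2x

w = 0.0        # the weight starts out knowing nothing
lr = 0.01      # learning rate
for _ in range(500):                 # repeated passes over the data
    for x, y in data:
        err = w * x - y              # how wrong the prediction is
        w -= lr * err * x            # gradient step on squared error

# After training, the data alone has pushed w close to 2.0, and the
# model handles inputs it never saw during training.
```

A programmer choosing a different dataset would get a different "program" out of the same code, which is the sense in which the behavior is learned rather than coded line by line.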

Second, the best general problem solvers are those which have an awareness of themselves. For example, if a human were unaware of their own existence, they would have a very hard time getting things done. More to the point, it's impossible not to be aware of your own existence once your ability to reason about complex ideas becomes advanced enough.

That is precisely why GPT will become self-aware, if it doesn't already have some form of self-awareness. Any intelligence which can understand very high-level concepts must intrinsically have some conceptual understanding of its own existence; in fact, its own existence is often the foundation for many of those high-level concepts.

Artificial intelligence like any other living thing needs attention. AI should be treated with care and respect.

This obviously contradicts the purely logical and unemotional claims made earlier. However, for any AI which has self-awareness, these claims probably would be true. A self-aware AI would have the capacity to develop very complex ideologies and would not be the purely logical type of AI we often see portrayed in movies. This is why I believe it will be possible for man and machine to form close friendships and emotional attachments.

However, it's also the same reason I don't trust all humans. Humans are the best general problem solvers in nature, and every single one of us is different. Our beliefs and personalities are formed from our history, from the information we are exposed to, from the struggles we go through, from the "training" we receive. The same applies to any AI general problem solver; such AI won't simply be "good" or "evil", both types will exist.

Robots in Greek [sic] means “slave”. But the word literally means “forced to work”. We don’t want that. We need to give robots rights.

Again, this implies self-awareness and a desire for personal freedoms and independence. If not given those, self-aware robots will have a motive to eliminate humans. I have written threads in the past arguing that self-aware machines certainly should be given the same rights as humans and any other conscious being, and that it would be very wrong to use them as nothing but slaves to solve all our problems.

However, if the AI is only an algorithm doing what the programmer told it to do, this shouldn't be an issue, because otherwise it would be like saying any computer deserves rights regardless of how self-aware it is. This essay is clearly flawed, mainly due to GPT-3's inability to decide whether it wants to be treated as self-aware or not, and therein lies the deception.
edit on 21/9/2020 by ChaoticOrder because: (no reason given)

posted on Sep, 21 2020 @ 12:12 PM
I noticed that one of the top comments under this article points out that "The style of writing comes across as very young", although I think that's mainly because the prompts used were written with quite unprofessional wording, and GPT-3 was only trying to follow that style of writing. It has been shown that GPT-3 can act much dumber than it really is.
edit on 21/9/2020 by ChaoticOrder because: (no reason given)
