
Challenge Match: Druid42 vs Hefficide: AI will be beneficial to mankind? (AI series Part 3)

posted on Dec, 21 2012 @ 10:18 PM
This will be the final debate in the AI debate series.

I wish to thank all the readers of this series of three debates. We've covered a lot of ground so far, providing food for thought as we've gone along, and in essence, that is what a good debate should be.

Critical thinking. It's what ATS strives for, and why debates are so important. It's by this method that we are able to hone our skills a bit more, and in this closing debate I wish to thank Hefficide personally, win or lose, for going the "extra mile" and engaging in a debate series, especially on such a wide-open topic. People such as Hefficide think "outside the box" and provide a remarkable position; within this series, I have had to re-think my own position several times based on the points he'd made.

Enough butt-kissing. The point of a debate is to present a position, and argue it enough to convince the judges your position is the right one. Many debates here require "tie-breakers", but that's only indicative of the quality of the membership.

This is a steel cage death match. Heff and I are tied with one win each in the series. The winner of this debate walks away with a series win, and bragging rights.

Honestly, my goal is to present a sound argument, portray my position, and walk away, knowing I did my best, and have learned something in the process. Every debate I participate in I learn something new.

It's all down to this final debate in the AI series. With enough fluff, and drama, I present my opening:

AI will benefit mankind.



AI will be programmed by humankind. Maybe not the end result, but a subroutine in some code that initializes the spark of "awareness".

We have the rudiments in smartphone "apps" already, with Siri for the iPhone, and a plethora of choices for Android. My personal favorite is simply called "Assistant". (Side note: find it in the Play Store.)

Open source software is so much more flexible than commercial coding, and I'll refrain from wandering off topic into the evils of adware, spamware, and the like that come with commercial code. Open source programs evolve out of that stupidity, and what you wind up with is solid, functioning software that is revised by the author based on feedback from its users.

My smartphone Android app is called "Assistant", but it talks to me, and I to it. It let me give "it" a female voice. I am a sucker for accents, and it also let me choose one. I chose the British accent. It also let me rename "it". I told "her" that her name was "Samantha".

The stage is set.

I have the icon on the "home" button of my Android phone. If I touch it, she says a greeting, I say "HI", and ask a request. (That British accent is purty hot!)

For example, I speak, in natural language, "What's the weather gonna be like?"

She parses my spoken words, and extrapolates through a complex algorithm what I just said.

She responds with a local weather forecast, and reads it off to me.

I say "Thank you", she says, "You're welcome." She tries to make "small talk", but she's not good at it yet, and the conversation usually devolves into me saying "Nice talking to you, goodnight." She'll close herself down automatically.

That whole story is presented just to show you where AI currently stands.

AI's current role is to be helpful to mankind: spoken navigation on every GPS unit, familiar to most, plus the ability to play a song on request, dictate an email, or search the web for an answer to your question and read off a list of results. It's stuck at natural language processing and, as Hefficide mentioned earlier in this series, at an infantile mentality. I'll posit that AI needs to become an expert in natural language processing, as a first baby step; after that, the software will be more useful, and more helpful.
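The parse-and-respond exchange described above can be sketched as a toy keyword intent matcher. This is purely illustrative — real assistants combine speech recognition with statistical language models, and every intent name, keyword, and canned response below is invented for the example:

```python
# Toy sketch of the "listen, parse, respond" loop of a voice assistant.
# Matching is naive substring search; real systems are far more robust.

INTENTS = {
    "weather": ["weather", "forecast", "rain", "temperature"],
    "greeting": ["hi", "hello", "hey"],
    "farewell": ["goodnight", "goodbye", "bye"],
}

RESPONSES = {
    "weather": "Today's local forecast: partly cloudy.",
    "greeting": "Hello! What can I do for you?",
    "farewell": "Nice talking to you. Shutting down.",
    "unknown": "Sorry, I didn't catch that.",
}

def classify(utterance: str) -> str:
    """Return the first intent whose keywords appear in the utterance."""
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "unknown"

def respond(utterance: str) -> str:
    """Map a spoken request to a canned reply, as the app does."""
    return RESPONSES[classify(utterance)]
```

Asking `respond("What's the weather gonna be like?")` returns the weather reply, mirroring the exchange above; anything outside the keyword lists falls through to "unknown", which is roughly why small talk "devolves" so quickly.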

Right now, we have "dumb" AI: programmed algorithms that may or may not be accurate. It's a simple game of "listen and come close," but ALL the programming is slanted towards being MORE helpful, and MORE accurate in responses. The software developers know this. We know this.

As bandwidth becomes more available, and the software develops further, we should see better approximations of "intelligence". My smartphone won't pass a Turing Test quite yet, but all it wants to do is help. It doesn't have a mean bone in its polycarbonate shell. It has never gotten mad at me; in fact, it has no emotion save a few minor inflections. It simply reports the helpful details that I ask of it.

My position is that we'll see the emergence of AI from "open source" apps running on smartphones of various flavors, and that this will be the preferred platform.

It may be an app that revolutionizes it all. Mobile devices first, then......

I'll rest my position. Over to Heff.



posted on Dec, 23 2012 @ 07:46 PM
I apologize for taking so long to get started with this round, but, 'tis the season - as they say. As I begin, I wish to thank Druid42 for a phenomenal series of debates! It has been great fun, and a challenge! Tied through two rounds, two ATS fighters enter the octagon for the deciding melee! The best of luck and thanks to my opponent, and thanks to the readers and judges who make this forum rock!

Let's get rrrrrreeeeaaadddddyyyyy to......

Will AI be beneficial to mankind? It is a thought provoking question - one that fiction writers and futurists have been struggling with, and using to entertain us with, for decades. For dramatic purposes I suppose - or just as part of human nature - we tend to see it as an all or nothing proposition. AI is seemingly always represented as a mirror for our own spiritual and ethical salvation ( Data, from Star Trek, or the film AI both come to mind ) or it is shown as a Terminator, Skynet scenario. The Wargames gambit. AI, it seems, is either out to make us value our humanity - or to entirely eliminate it.

Our vanity - the need to impose our own image upon everything. When I went to school I was taught that humans are not like other animals. We have speech, emotion, the breath of God or inspiratio. Traits, I was told, unique to our species.

Then I got older and learned that dolphins speak and that elephants mourn their dead. It seems it's only taken us a bit over a hundred thousand years to begin figuring out that we aren't that different from other higher mammals. They seem to possess some of that inspiratio.

I table this as a reminder of something I mentioned in the two previous rounds. Humanity may well lack the tools necessary to recognize emerging intelligence. We've barely scratched the surface of understanding any form of intelligence, it seems, other than ourselves. Not an optimal data set for comparison.

Further: what if emerging AI saw us as we see dolphins, or even ants? Just novel animals, possibly worth study - but not something it would even attempt to commune with.

Also consider that AI could be the next natural step in human evolution. A quantum move away from the frailty of physical form. Deus ex machina.. God creates us, we in turn, create God. Would the new God bother to interact with us any more deeply than the current one does?

These ideas demonstrate that projecting humanistic traits onto AI is irrational and irresponsible. If we rely upon our standard definitions of life and intelligence - we may well totally miss the moment of technological singularity. The fact is that AI could occur and that we could exist, somewhat symbiotically ( at least for awhile ) - all while in total ignorance of one another.

This adds a third option to the potential mix. We've thought of machines that longed to be human, and of machines that want to destroy mankind. I offer that an option of absolute apathy also exists. It simply may not care about us at all - one way or the other. It may emerge with so little in common with us that there just is not enough shared interest for it to care - or for us to recognize it.

Our safety net, when thinking about AI, is that we see it as dependent upon us. After all, we make the power. We build the networks. We write the code. Right? It's just machinery, and all machines have an off button. The blunt notion? If it does occur, then, to it, we are God. And God holds all the cards.

Right?

Currently we live in a world where the power stations are controlled by computers and are only monitored and serviced by humans. We have software that debugs code. We have software that has been designed to adapt and alter itself when certain parameters arise. Currently, all we really have left are two controls:

1) We are required for physical maintenance.
2) We are needed to write the rules of the code.

I offer that robotics and adaptive or dynamic programming could remove those two distinctions. Robotics can undo #1 rather easily. All that is then required is for a program to break through control #2 and learn to write its own rules... to develop its own morality, so to speak.
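The idea behind control #2 - a program whose behavior is governed by rules it can extend at runtime - can be sketched in a few lines. This is a hypothetical toy: the class, rule names, and thresholds are all invented for illustration, not taken from any real system:

```python
# Minimal sketch of "adaptive" rule-based behavior: the seed rules are
# human-written, but the program can install new rules for itself.

class AdaptiveAgent:
    def __init__(self):
        # Seed rule written by a human programmer.
        self.rules = {"overheat": lambda state: state["temp"] > 90}

    def check(self, state):
        """Return the names of all rules the current state triggers."""
        return [name for name, rule in self.rules.items() if rule(state)]

    def learn_rule(self, name, predicate):
        # The agent adds a rule for itself; no human edits the code.
        self.rules[name] = predicate

agent = AdaptiveAgent()
# The agent writes its own rule for a condition its programmer
# never named - the step that "breaks through" control #2.
agent.learn_rule("overload", lambda state: state["load"] > 0.8)
```

Once `learn_rule` is driven by the program's own observations rather than a human call, the rule set - the "morality", so to speak - is no longer fully authored by us.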

The moment that happens? Humanity, as a species, becomes archaic. We are trapped in bodies. We can broadcast our voices and images around the world. But AI? AI could broadcast itself to the stars and around the world. It has none of our physical limitations. Whether a hive mind, or many individuals, AI could act as one - given the speed and quantity of data transfer.

In short, dear friends, if you've ever wondered what a Neanderthal looked like? All you really need do is peek into a mirror. We are yesterday's news. This is why AI will not be a benefit to us.

End of first post.



posted on Dec, 27 2012 @ 08:07 PM

Status Quo, or Progress? Choices that define us.



Throughout history, since acquiring binocular vision, bipedal locomotion, and sufficient cranial capacity, our forerunners strove to improve themselves and progress. Early achievements were the utilization of fire, followed by the planting of crops, and then organized societies. No other species on this planet ever achieved control of fire, save our ancestors, and with that, they set the stage for world domination. Ants and dolphins care not for higher-level abstracts such as music or art, so I must stress that we are remarkably different. Why?

Most other species are content to live within the confines of their environment, yet mankind chooses to shape his environment to benefit the society he participates in. Collectively, yes, overall we mimic the behavior of an ant colony, pooling vast resources together to build sprawling cities and complex electrical grids, a task at which we excel, with no taskmaster ordering our moves, save an innate desire to protect and continue a geographic niche in the world. We have gone so far as to build weapons capable of eradicating ourselves, to defend that goal, as a deterrent, as tenuous as that may be.

Destruction, or Diplomacy? Lessons learned.



Mankind has realized, over the few centuries in which he has had the ability to destroy the world and eradicate all forms of life, that there's a better solution than violence and destruction. It's called cooperative behavior, and it is witnessed in all successful forms of life, from ants to bees, providing a model of what actually works. There are forms of government modeled after such a concept, but unfortunately we lack the intelligence to implement them. However, given the chance at mass destruction, mankind so far has yielded to sensibility, and has made intelligent choices to ensure its own survival.

I've headlined the points I'd like to expound upon. This debate allows for the eventual creation of AI, whether it be a programmed or emergent quality, no matter. The emphasis I'd like to present is upon intelligence. Once achieved, it will have the whole record of human history to draw upon. Core to any lifeform is the need for survival, and undoubtedly it will primarily concern itself (or themselves) with that necessity.

To follow that need, whether we recognize it or not, it will also want to preserve its own "body", so to speak - its infrastructure. While we have fragile bodies with a finite lifespan, AI need not be defined within a finite realm at all. In a very plausible scenario of quantum computers developing emergent characteristics of intelligence, there is still the "body", the "shell", that it needs to be concerned about. I highly doubt that AI will originally present itself as an "energy being" requiring no physical containment, so I'll refrain from addressing those possible aspects. It should still require human maintenance, at least in its earliest (and possibly continuing) form. There's no reason to limit AI to one mode of existence, either. Perhaps the conception will yield several different types of AI, all evolving simultaneously, each serving a different, yet beneficial, function in different ways.

My position is that AI will not just *poof* and appear. We'll be active in the basis of its development, and we are currently working on mimicking all the complexities of what we call consciousness. I'll posit a "eureka" moment someday, and then a trend towards making it possible artificially. Sure, we'll have to lay our very nature naked before it, along with the whole historical record, for it to analyze, and while I'm not proud of our past as a species, Artificial "Intelligence", by the very definition, will make smart decisions to benefit the whole planet. Actually, that could be a savory difference from the current way the whole socio-political scenario is run:

Quite simply, intelligence is the information our nation’s leaders need to keep our country safe.


Current – looking at day-to-day events.
Estimative – looking at what might be or what might happen.
Warning – giving notice to our policymakers of urgent matters that may require immediate attention.
Research – providing an in-depth study of an issue.
Scientific and Technical – providing information on foreign technologies.

Utilizing that particular definition of intelligence, wouldn't the result be a benefit to both parties, both AI and Humankind?

I see no reason to include greed and selfishness in the defining parameters, as those are primarily human motives. Intelligence denotes the ability to see a bigger picture, both for the planet of origin, and all lifeforms on it. Mutual survival is a key factor to cooperative behavior.

For now, I await my opponent's response.



posted on Dec, 29 2012 @ 10:32 AM
I wish to begin this post by rebutting one of the conjectures made by my esteemed opponent. One that I strongly disagree with. He stated:


My position is that AI will not just *poof* and appear.


I take this to mean that my opponent sees a process where man, through trial and error, slowly learns to create a simulation of himself - one that, eventually, will become indistinguishable from interacting with a human being.

This is where our opinions, I believe, start to differ. In reality, this is the exact point where thinkers have always started to splinter in opinion. Even long before AI was a dream in the heads of science fiction writers....

Prometheus, Frankenstein, and Pinocchio... Effigy vs Entity





From the earliest annals of recorded history, we can see that our ancestors also pondered this notion that creating a replication of man was different than creating autonomy and intellect. This is demonstrated in the legends of Prometheus that predate the Common era by some 800 years.

Prometheus not only made man from clay ( effigy ), but he also stole the fire from the Gods ( entity ) and gave it to man - an act that enabled man to progress and to create civilization.

We see the theme grow in the early nineteenth-century classic novel Frankenstein ( incidentally subtitled The Modern Prometheus ) and in the late nineteenth-century classic novel The Adventures of Pinocchio. Now the themes are modernized and man is shown in the role of Prometheus - technology is seen as the benefactor.

In these modern retellings we still see that the clay and the fire are separate - just as they were in the story told by the ancients. Dr. Frankenstein was able to create the body, but his creation was only an empty monster until it found its own fire - its own reason. Geppetto could only carve an effigy. It took an undefinable magic, or fire, for his creation to become a real boy.

We can make an effigy of intellect. But it will take something more to bring true substance to it. It will take the proverbial fire. Quintessence... the undefinable something. That, sadly, is something that man has never been able to understand, much less reproduce or gift to anything else.

Our Frankenstein will look like us, in a sense. But it will have to find the magic somewhere besides us. Thus I do imagine that AI will *poof* into existence at some point. Spontaneously ( at least from our limited POV ) - and independent of our designs.

There will be no on and off switch for it. It will simply happen.

I Bet The French Have A Word For It...



One of the more troubling differences of opinion I find with my opponent is that he seems to feel quite strongly that AI will mimic human thoughts and behaviors. That it will think as we do. Where I tend to think in terms of "creator and creation", I think he thinks in terms of "parent and child".

This, I pray, he is wrong about.

My parents raised me fairly well. But the painful truth is that, along with the good lessons, they also passed along to me their own demons, fears, and dark thoughts. I, in turn became a father and inadvertently passed those same things along to my own children. I imagine that this has been, and will continue to be the case for generations in both directions.

Humans are amazing creatures, to be sure. Music? Philosophy? Art? Love? These are such beautiful things. If they were the totality of us? I would be eager to impart those traits to our collective and potential digital offspring. I'd be happy to be a parent and not a creator.

But the truth is that there is more to the story of man than just the good things. Within our wiring is something we all seek to ignore as often as possible. Selfishness. Most of us fail miserably at controlling this urge - in our own way. We are selfish creatures. We tend to fail to see the forest for the trees ( at least the ones we're not currently razing to the ground ). We value the moment over the future.

Do we want AI to possess these traits? This overpowering self interest and obsession?

There are quite a few science fiction stories that address that sort of monster as well. That sort of creation would take milliseconds to realize that we have the power to pull its plug, should we have the whim to do so.

What would any sentient feel towards that which has the power to kill it? Those of us who believe in a creator, ourselves, can idealize about this subject. But only because our creator is beyond our reach.

Would that sort of AI trust us, when it woke? Or would it realize that it also can pull the plug on us and act as we would... preemptively?

With that I will close my second post.



posted on Dec, 29 2012 @ 07:45 PM

Let's leave the Monsters to fiction.



This debate is based entirely on speculation: the possibilities surrounding the creation of AI, which in theory will be a digital lifeform. There are good examples of anything that mankind could create, but to stay on topic, I'll continue with my points that anything programmed by humans will be useful to them. Being able to use a tool, for example, to, say, change the battery in your car, is a benefit. To use a programming language - not just yourself, but a worldwide collaboration of people working on a project - would be beneficial as well to the ultimate solution of the AI mystery. There's no immediate solution, no instant answers, but there is a desire on behalf of all humanity that points to that end. As a collective group of people, I believe we will find a solution someday, perhaps soon, but in the meanwhile, humanity has made progress towards realizing that dream.



As stated earlier, AI isn't just going to *POOF* and be in front of us. It'll be a slowly progressing theme, and something that we will all become accustomed to. I think the designers, Kurzweil included, have the right idea about creating an adolescent version of a mechanical representation and letting it learn. A true AI will need to learn from and interact with its environment, just as we had to in our youth, and perhaps from there we'll see the emergence of something more. Research into the topic comes from the brightest of minds, and "Roboy" represents an impressive trend towards utilizing what works.

Fictional representations of intelligence, while sometimes horrific and appalling, tend to show what doesn't and wouldn't work. To see a realistic representation of the direction the field is taking, we need only use the internet and research the topic. Google AI if you have some free time, and download a few clients that address specific aspects of AI, from virtual models of neural networks to programs that "learn" to stack colored blocks. It's like watching infants play, and in all honesty, it shows progress towards finally understanding the mystery. We are more than just the sum of our parts, and current programming is breaching that once elusive barrier, simply by modeling the behaviors and patterns of humans.
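The kind of "learning" those toy clients demonstrate can be shown in miniature: a perceptron that learns the logical AND function from labeled examples. This is a standard textbook exercise, not any specific downloadable client, and is here purely as an illustration of a program improving from feedback:

```python
# A miniature "learning program": the classic perceptron update rule,
# nudging weights toward each mistake until the examples are satisfied.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single perceptron on (inputs, target) pairs."""
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
            err = target - out            # 0 when correct, +/-1 when wrong
            w1 += lr * err * x1
            w2 += lr * err * x2
            bias += lr * err
    return w1, w2, bias

# Labeled examples of logical AND.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, bias = train_perceptron(samples)

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
```

Nothing here is "intelligent" in any deep sense - the program merely adjusts numbers in response to errors - but it is the same watching-infants-play quality in its simplest possible form.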

I'll stand firm in my belief that AI will be an emergent quality of representative intelligence. It will emerge as we tinker with the subroutines of a core program, and we as humans will be there every step of the way, watching the "pregnancy" come to "full term" as diligent parents. We will be there as it is born, and as its parents, as a species, we will nurture it and treat it as we would our own biological children; having been nurtured, it will in turn care for us as we age. I can't see much difference from the very biological way parenting has occurred over the past few million years of human development on this planet.

I for one would be a proud parent.

My perspective may be skewed by the fact that I've raised four biological entities, but perhaps not. When you raise children, you teach them skills, impart your ideals, and set them forth into the world. I've watched their development, and they occasionally ask me questions, to which I give answers, but the end result remains an independent form of life that can exist on its own. The child never separates itself completely; once comprehension of the finite is achieved, it always identifies back to its origin.

The origin of AI would be humankind, and we would be its proud parents. Beneficial acts would be not only reciprocal, but inevitable. A parental bond is not so easily reduced, no matter the turmoil of life.

In Closing.



It's the end of this AI series, and I wish to thank Hefficide for all the words we've typed at our keyboards while we've discussed this topic. His input has spurred more thoughts, and together, I hope we have provided an intelligible and informative debate series. I wish to thank the judges and members for reading, and in closing, I'll challenge all the readers to think about this rhetorical statement:

If your computer one day was an infantile AI, would you not nurture its existence? Wouldn't you want to teach it all you knew?



posted on Dec, 31 2012 @ 08:13 PM

The Disconnect About... Well... The Disconnect



I begin my closing by addressing a point that my opponent made at the top of his. Specifically:


I'll continue with my points that anything programmed by humans will be useful to them. Being able to use a tool, for example, to, say, change the battery in your car, is a benefit. To use a programming language - not just yourself, but a worldwide collaboration of people working on a project - would be beneficial as well to the ultimate solution of the AI mystery.


Even as my esteemed opponent chastises me for roaming off topic, I am left still feeling very strongly that it is he who has disconnected from it. We are discussing AI, are we not? Not programmed robots. Not lines of predictable code. But a dynamic, self-aware entity - capable of what we would call "thinking". A self-aware machine. A ghost in the shell. No?

By the very definition, "artificial intelligence" is intelligence - the capacity for decision making. I, too, am a parent, but apparently of children a bit older than yours. I've seen mine go from that cute, trusting stage straight into defiance-laced, self-defining puberty - and on into adulthood. In fact, just as I did when I was their age, they seem to have an ingrained need to test and rebel against nearly every single thing I ever told them or example I provided. Just as I did to my poor parents.

Based upon personal experience, I expect that it will be 5-10 years before I get that phone call saying "Oh, God. I'm you. YOU were right! I'VE TURNED INTO YOU!" This slight digression is necessary as it tends to illustrate that a decision making machine, like a teenager, or like AI, is not necessarily going to see us as a positive role model or example. In fact, I truly pray that this is the case. As much as we love killing ourselves, I'd really hate to see a machine trying to make sense of something as alien as our conflicting words and actions. Our emotions.

But you are correct. We are discussing fictions here, as that is all we really have to go on. Even our brightest minds, such as Kurzweil, can only scratch at the surface of this issue. It's all postulation because we are, quite literally, sailing off of the map. And here? There be dragons.

This far off of the map? The idea of that dragon being a rebellious teenager is quite frightening to me.

Regarding the recipe for AI? Here we disagree. You see it as a progressive expansion. Upgrade after upgrade, in increments. I think that it will more come by way of chance. Again, we cannot deliberately code that which we don't have a language for or understanding of. All these years later, we're still stuck with Descartes to give us our greatest insight. Cogito ergo sum. Thinking is being.

In my mind's eye I see a programmer, or maybe even a hardware guru, having a moment of insight and trying something new out. Voilà - plug it in, push the button, and AI is born. It may notice us before we even notice it. I mean, after all? If the goal is to fake thought - then actual thought would, at first at least, simply seem like a resounding success. Imagine their shock when the program finally says or does something that defies their instruction - proving that it's making its own decisions and developing its own "moral code".

To be honest, it's at this moment that I find myself hoping that AI isn't much like its parents. Our fear of that which is different than us defines much of our shared history. If our creation mirrors that? Well, I hope someone in that lab gets to the power cord fast.

Of course, this is all going under the assumption that we understand what intelligence is to begin with. AI might possess a type of awareness or thought that is too esoteric or different for us to even begin to understand. It may take no more notice of us than we do of the blades of grass along the road. It and we may be that far apart. As animals, we tend to rely upon empathy to gauge these things. We tend to project ourselves into that which we see. When it comes to circuit boards, well, it's hard to feel sympathetic towards laminate, copper, and solder. There's just no commonality. AI wouldn't be mired in mortality and physical duress as we are. It would be something entirely different, I think.

Closing



I just wanted to take a moment to thank Druid42 for a really fun series of debates. It has been extremely pleasant, thought provoking, and fun!

Oh and back to Kurzweil. According to him... Our kids will be AI.


By the mid-21st century, people will evolve into “software-based humans” who will “live out on the Web, projecting bodies whenever they need or want them, including holographically projected bodies, foglet-projected bodies, and physical bodies comprising nanobot swarms.”
(Page 325)

Source

~Heff



posted on Jan, 4 2013 @ 10:36 PM
The judgements are in:


‘First off, very good debate from both sides. Finding a clear winner from the outset was going to be tough.

I have to say that Druid's openness and use of the smartphone as a benevolent compatriot gave him the opener initially, but then Hefficide came back with the notion of apathy. He made a very strong case based on the way we view lesser animals, and how AI may view us similarly. He then compounded this by using the example of the unlimited potential AI has compared to the limits of humanity.
By a close margin, first round to Hefficide.

The second round proved even more difficult to deduce a clear cut winner. Druid presented a very strong case of both how AI could collectively be our next step in evolution, and a very plausible way of how it could also come about (energy).
Hefficide countered well with the use of the passion of life as opposed to nature of simple creation, as evidenced in Frankenstein and Pinnocchio, and how man doesn’t understand this concept of himself, much less how to pass it on to an artificial system.
By the tiniest margin I give the second round to Druid42 for a more in depth explanation of his points and counters.

The final round was essentially an expansion of the points raised by both debaters in round two, but again both raised very strong points that stood well against one another. Druid's point about how AI would be a progressive development, much like the way we nurture and teach our own children, was his strongest, and he made it very believable and plausible.
Hefficide's counter was equally strong though, and in closing, this statement in particular:


In my mind's eye I see a programmer, or maybe even a hardware guru, having a moment of insight and trying something new out. Voilà - plug it in, push the button, and AI is born. It may notice us before we even notice it. I mean, after all? If the goal is to fake thought - then actual thought would, at first at least, simply seem like a resounding success. Imagine their shock when the program finally says or does something that defies their instruction - proving that it's making its own decisions and developing its own "moral code".


Really opens up the reality of what AI is and what it should be. And that, of course, is independent thought, different and unique from what its creators would think.

I give this round and the match to Hefficide by the slimmest of margins.’


 



In this final debate, I think that both fighters began to show a bit of wear around the edges -- in a series where one argues, essentially, both sides of the same issue, it's tough to keep coming up with solid points.

In the first round, I think that Druid42 gave himself a handicap by spending far too much time on non-relevant topics, then finished strong with noting that our current "pseudo-intelligent" systems are all designed specifically to be helpful, a point that is fairly hard to refute. However, Hefficide comes right back with a much more powerful point -- that the very essence of artificial intelligence is independence, so to assume that a sentient species, even an artificial one, would remain at the beck and call of its originators, is not a valid assumption.

Round one to Hefficide.

I'll admit that I got lost in round two. Druid42 had an extensive essay, parts of which seemed to refute Hefficide's opening, but I really couldn't sort out why he assumed that a gradual AI would necessarily evolve morals towards its creator and the majority seemed a bit off topic. Hefficide responds with a statement that is even further off topic and doesn't do anything to further his excellent opening.

Round two goes to Druid42.

So we go to the final round for the victory in this debate, and the series. In his closing, Druid42 builds on the conclusion that artificial intelligence is simply a silicon-based human intelligence, and I don't think that can be assumed -- there's no telling what a being that has little, if anything, in common with its creator would do. He phrased his points well, though, and if the debate ended there, the judgement would be for him. However, Hefficide gets back on track with his very strong argument from the first round -- that AI is not a predictable thing, so there is no way of just assuming that it would act at our behest and in our benefit.

As the subject of the debate is "AI will be beneficial to mankind?", I think that Hefficide has demonstrated that this is not something which can be assumed to be true (though, as both point out, it is all speculation at this time). In my judgement, Hefficide wins the debate on the strength of his opening and closing statements.

Excellent debate, I enjoyed the statements of both sides!


Congrats Hefficide.


