
Self-Driving Cars in Four California Accidents


posted on May, 11 2015 @ 05:57 PM

originally posted by: strongfp
a reply to: FamCore

The radio stations here in Southern Ontario ripped apart self-driving cars during the winter months. I remember listening to an open-line show where people brought up very good points, like:

How will it know when the tires are slipping on black ice?

Anti-lock braking systems. Each wheel has motion sensors, a flywheel and a hydraulic valve attached to the braking system. If there is a difference between the speed of the flywheel and the wheel itself, then the pressure on the brake is adjusted.

en.wikipedia.org...
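The ABS logic described above boils down to a slip-ratio check per wheel. Here is a toy sketch of that rule; the function names, the 0.2 slip threshold, and the pressure step are all made up for illustration, not how any production ABS controller is written:

```python
def slip_ratio(vehicle_speed, wheel_speed):
    """Longitudinal slip: 0.0 = rolling freely, 1.0 = fully locked."""
    if vehicle_speed <= 0:
        return 0.0
    return (vehicle_speed - wheel_speed) / vehicle_speed

def adjust_brake_pressure(pressure, vehicle_speed, wheel_speed,
                          slip_threshold=0.2, step=0.1):
    """Ease off the brake when a wheel slips past the threshold,
    otherwise pass the requested pressure through unchanged."""
    if slip_ratio(vehicle_speed, wheel_speed) > slip_threshold:
        return max(0.0, pressure - step)  # wheel is locking: release
    return pressure
```

A real controller runs this comparison per wheel many times a second and pulses a hydraulic valve rather than scaling a number, but the decision rule is the same speed-difference check the post describes.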



How can it navigate deep un-plowed streets?


I guess they haven't got there yet, but that is going to rule out relying only on visual sensors at bumper level. I would imagine they will have a mix of sensors, perhaps even cameras that can filter out the wavelengths of light scattered by fog. There were some experiments on windscreens that did this.



How would a self-driving car deal with the 400-series highways, the busiest in the world? When there is traffic and it needs to pull over for emergency vehicles?

That shouldn't be too hard to program in. If the system detects one or more moving objects behind it, and those objects have flashing blue or red lights or are making a loud siren-like sound, then slow down and move to the left so long as there are no obstacles in the way. Once those flashing vehicles have passed, move back to the regular route.
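That "detect lights or siren, then yield" rule could be sketched roughly like this (the object fields, action names, and lane-clear flag are all hypothetical, purely to illustrate the logic):

```python
def should_yield(detected_objects):
    """True if any trailing object looks like an emergency vehicle:
    flashing red/blue lights or an active siren."""
    for obj in detected_objects:
        if obj.get("behind") and (
            obj.get("flashing_lights") in ("red", "blue")
            or obj.get("siren_active")
        ):
            return True
    return False

def plan_action(detected_objects, lane_clear):
    """Pull over only when the adjacent lane is free of obstacles."""
    if should_yield(detected_objects):
        return "slow_and_pull_over" if lane_clear else "slow_and_hold"
    return "continue_route"
```

The interesting engineering problem is not this decision table but the perception underneath it: reliably classifying "flashing blue lights" and "siren" from raw sensor data in traffic.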



The list goes on. These cars cannot think critically. Maybe on the open freeways it might work, but not in residential areas or harsh weather, no way.


From my own experiments with AI, the best way to get a system to learn is to record the moves of experts through black-box recorders. Then you get the AI system to attempt to replicate the driver's decisions. Every time there is a difference, you go back and look at the code to see what should have been done. Given enough test cases, it would eventually have the skill of an experienced driver with a clean record.
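That compare-against-the-expert loop can be sketched in a few lines. This is a toy (the log format and the example policy are invented); a real black-box recording would carry far richer sensor state:

```python
def evaluate_policy(policy, recordings):
    """recordings: list of (sensor_state, expert_action) pairs taken
    from a driving log. Returns every case where the policy disagrees
    with the expert, for a human to review afterwards."""
    mismatches = []
    for state, expert_action in recordings:
        predicted = policy(state)
        if predicted != expert_action:
            mismatches.append((state, expert_action, predicted))
    return mismatches

# Toy log and a deliberately naive policy:
log = [({"speed": 30, "obstacle": False}, "cruise"),
       ({"speed": 30, "obstacle": True}, "brake")]
naive_policy = lambda state: "cruise"

# evaluate_policy(naive_policy, log) flags the obstacle case for review
```

In the machine-learning literature this general approach is called behavioral cloning or imitation learning; the hard part is exactly the step the post mentions, deciding what to change when the system and the expert disagree.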




posted on May, 11 2015 @ 06:28 PM
a reply to: peck420

But when the light turns green, they can't see the distracted driver looking down at his Kindle as he goes through his red light.

Computers don't have gut instincts or experience avoiding accidents. Come to think of it, the same problem is evident in half the drivers around here. I have had to avoid accidents by pulling off the road a dozen times, and had to avoid deer that were running into the side of the car. A guy ran into the side of my car and bent the mirror while trying to cross the street; I had to stop or I would have run over his legs when he fell. What about potholes? You have to go around them in the spring or you break your shocks, damage your tires, and throw the alignment off. How is a computer going to do that? What is that car going to do in a sudden downpour: do the sensors see through heavy rain, or does it stop and wait right in the middle of the highway and get rear-ended?

I could name hundreds of experiences I have had including coming up to where a road washed out which had water running across the washout almost to road level. It was three feet deep.

Those computer cars cannot think like a human can. They do not understand that someone can be stopping, then decide to go when they should stay stopped. Many times I have had to speed up to escape a driver who decided to go at a stop sign while I was going through an intersection and would have T-boned me. What would a computer do with that one?

I see these cars causing more accidents than anything. Maybe the results will show that it is someone else's fault, but people are avoiding accidents that others almost cause all the time. The computer can't fathom that others will act irrationally.

How many potential accidents, not your fault, have you avoided in the last five years?

The backup sensors on my Explorer are a pain in the butt. All four of them get full of slush and ice half the winter.


edit on 11-5-2015 by rickymouse because: (no reason given)



posted on May, 11 2015 @ 06:33 PM

originally posted by: JAY1980
a reply to: FamCore

So the accidents took place at around 10mph? It doesn't sound like very stable technology if it gets into accidents at 10mph, let alone barreling down the highway at 70mph. My opinion is it shouldn't be on the roadways yet if the technology isn't sound. It's jeopardizing others' safety.


Four low-speed accidents in 8 months is a damned sight fewer than cars under manual control manage.

Also, Police reports indicate all were hit from the rear in traffic, hence the low speed.

The technology is sound, and none of these low-speed collisions was the result of the autonomous systems, but rather of the moronic humans driving the cars behind them.



posted on May, 11 2015 @ 06:38 PM

originally posted by: rickymouse
a reply to: peck420

But when the light turns green, they can't see the distracted driver looking down at his Kindle as he goes through his red light.



These cars will have sensor packages that enable them to detect a potential collision and avoid it much faster than any human can respond. In fact, the EU has passed legislation that will require all new cars to have a collision-avoidance system which reacts to a potential crash if the driver takes no action.
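Systems like that are commonly described in terms of time-to-collision: how many seconds until impact at the current closing speed. A minimal sketch, with made-up thresholds and action names just to show the shape of the logic:

```python
def time_to_collision(gap_m, closing_speed_mps):
    """Seconds until impact at the current closing speed;
    infinite if the gap is holding steady or opening."""
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def aeb_decision(gap_m, closing_speed_mps, brake_ttc=1.5, warn_ttc=2.5):
    """Warn first, brake automatically only when impact is imminent."""
    ttc = time_to_collision(gap_m, closing_speed_mps)
    if ttc < brake_ttc:
        return "autobrake"
    if ttc < warn_ttc:
        return "warn_driver"
    return "no_action"
```

Production systems are far more conservative and layered than this, but the staged warn-then-brake structure is the standard design.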

I find it odd there is opposition, because most accidents are the result of human error. The same can be said about aircraft: fully autonomous aircraft are possible now, but there is such opposition from the public that they can't do it yet, which is odd considering most air crashes are also the result of human error.



posted on May, 11 2015 @ 06:58 PM
a reply to: stumason

This is out of 50 autonomous cars though, so about 8%. How does this compare to the millions of cars driven the 'normal' way in California?



posted on May, 11 2015 @ 07:24 PM

originally posted by: FamCore
a reply to: JAY1980

It sounds like it wasn't the technology's "fault"; it was other vehicles, with people driving them, that caused the accidents. That is why in my OP I discussed the idea of having the Dept. of Transportation designate certain roads for self-driving cars, where regular vehicles cannot go (since we humans aren't able to interact properly with them).


Hey FamCore: always enjoy your threads. Perhaps I am not informed enough yet to respond, but I gotta say, just at first glance, driving is an "interactive" skill. I'm sure we're all aware of the old defensive-driving schooling of our elders when first learning to drive, meaning it fluctuates constantly depending on the other drivers' actions on the road.
Can this even be planned for and anticipated by a "self-driving" vehicle?

Good point about certain roadways, therefore, being designated as such.
tetra



posted on May, 11 2015 @ 07:58 PM
a reply to: Imagewerx

Considering every single accident was a fender bender and the fault of the other car, it matters not.

But, if you want to do some maths, there are (in 2013 anyway) 253,639,380 vehicles registered in the USA.

In 2013, there were 30,057 fatal crashes and 3.8 million injuries (that required hospitalisation) as a result of traffic collisions.

Data from 2010 indicates some 24 million vehicles were damaged in one year due to collisions in the US. As the figures don't seem to vary much year on year (and I can't find any figures for 2013) I will use that as a rough baseline.

So, taking those numbers into account, it seems that roughly 9.5% of all vehicles (24 million out of 253,639,380) were involved in a collision that incurred damage, millions of people were injured, and thousands were killed.
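A quick sanity check on that arithmetic, using the same figures quoted above:

```python
registered_vehicles = 253_639_380  # US registrations, 2013 (cited above)
damaged_vehicles = 24_000_000      # vehicles damaged in collisions, 2010 (cited above)

share = damaged_vehicles / registered_vehicles
print(f"{share:.1%}")  # prints 9.5%
```

So nearly one registered vehicle in ten damaged in a single year, against four fender-benders across a roughly 50-car autonomous test fleet.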



posted on May, 11 2015 @ 08:47 PM
These thinking cars are not only going to be driving for you, they are going to keep you under surveillance too. And if even one tiny little thing on your car isn't working, it's going to refuse to start at all. Try keeping up with those maintenance bills. And if you get in trouble, your car is going to lock you in until the cops arrive. And when The Man really gets annoyed with you, your car is going to slam you into a tree at 80mph.*

*(ask Princess Diana and Michael Hastings about that last part)
edit on 11-5-2015 by starviego because: (no reason given)



posted on May, 11 2015 @ 10:07 PM
I don't know bout anyone else, but if the car I am in plans to drive, his name had better be KITT! Otherwise, no way!

10mph collisions, and we can't hear what happened? Sure.....totally safe.....



posted on May, 11 2015 @ 11:05 PM

originally posted by: LadyGreenEyes
I don't know bout anyone else, but if the car I am in plans to drive, his name had better be KITT! Otherwise, no way!

10mph collisions, and we can't hear what happened? Sure.....totally safe.....


We did hear what happened - they were all hit in the rear by other drivers. I am puzzled why people don't read the articles.



posted on May, 11 2015 @ 11:45 PM

originally posted by: stumason

originally posted by: LadyGreenEyes
I don't know bout anyone else, but if the car I am in plans to drive, his name had better be KITT! Otherwise, no way!

10mph collisions, and we can't hear what happened? Sure.....totally safe.....


We did hear what happened - they were all hit in the rear by other drivers. I am puzzled why people don't read the articles.


From the article linked in the OP -


Delphi sent AP an accident report showing its car was hit, but Google has not made public any records, so both enthusiasts and critics of the emerging technology have only the company's word on what happened. The California Department of Motor Vehicles said it could not release details from accident reports.

link

I did read the article.



posted on May, 12 2015 @ 12:14 AM
I can see scammers targeting self-driving cars just because companies like Google have deep pockets and can be sued for millions.

In California, these types of insurance scams happen every day: the "swoop and squat."

money.howstuffworks.com...

I have had these tried on me.



posted on May, 12 2015 @ 02:01 AM

originally posted by: JAY1980
a reply to: FamCore

So the accidents took place at around 10mph? It doesn't sound like very stable technology if it gets into accidents at 10mph, let alone barreling down the highway at 70mph. My opinion is it shouldn't be on the roadways yet if the technology isn't sound. It's jeopardizing others' safety.


Can't blame the car if someone else carelessly smashes into it.

It's like a driver skidding off the road and taking out a lamp post, then blaming the lamp post for being in the way of the driver who hit it.



posted on May, 12 2015 @ 07:38 AM
a reply to: ANNED

That's along the lines of what I was thinking. What if this were done intentionally to create a negative public stigma? This is centered around Google, whose cars are pretty obvious to spot. What do you think the odds are that the driver of the other vehicle was employed by one of the actual automobile manufacturers? Make the Google cars look unsafe in the public eye, but then counter it by saying how the Mercedes/Lexus/Ford/GM/etc... cars are much safer and reliable.

Realistically, I'd put the odds of this being on the low end. But corporate espionage within the auto industry has been known to happen, and more frequently of late, so who knows?



posted on May, 12 2015 @ 07:50 AM

originally posted by: peck420

originally posted by: rickymouse
Now, a car cannot think and see and evaluate the situation.

A professional race driver has a reaction time of approx 0.4 seconds.

A good street driver has a reaction time of approx 0.7 seconds.

My 10 year old daily's electronic suite...0.07 seconds.

Computers will "see", "evaluate" and "react" long before a human will.

Sure, it may react more quickly, but the specific reaction made will be based on software instructions input by a human.

I think that one day autonomous automobiles will be the norm, and will be much safer, but it isn't as simple as saying "computers can react more quickly than a human can right now; therefore, they are definitely safer than a human driver right now."

I'm not saying that these four accidents were the fault of the autonomous car; I actually think they were not, but rather the fault of the other human driver. All I'm saying is that safe driving is about more than just reaction time. It's also about what you do once you react. They need to teach the cars to react properly, not just quickly.
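For what it's worth, the reaction times quoted above translate directly into distance covered before anything happens at all. A back-of-the-envelope sketch, using the 100 km/h figure only as an example speed:

```python
def reaction_distance_m(speed_kmh, reaction_time_s):
    """Metres travelled before any braking even begins."""
    return speed_kmh / 3.6 * reaction_time_s

# At 100 km/h, using the reaction times quoted above:
for label, t in [("race driver", 0.4), ("street driver", 0.7),
                 ("electronics", 0.07)]:
    print(f"{label}: {reaction_distance_m(100, t):.1f} m")
# race driver: 11.1 m, street driver: 19.4 m, electronics: 1.9 m
```

The gap is real, roughly a car length versus several car lengths, but as argued above, what the system does with those saved metres still depends on what it was programmed to do.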



posted on May, 12 2015 @ 09:26 AM
a reply to: rickymouse

In every example you have given, the sensors will have provided the computer with more information, better information, and the required course of action...before a human would have had time to even get the visual input from their eyes to their brain, let alone figure out the correct response, and actually make the required physical movements.

The meat sack inside is, and has been, the biggest failing in an automobile for a very long time.



posted on May, 12 2015 @ 09:35 AM

originally posted by: Soylent Green Is People
Sure, it may react more quickly, but the specific reaction made will be based on software instructions input by a human.

Good. Then, unlike a human, it will make the correct decision 99% of the time.


I think that one day autonomous automobiles will be the norm, and will be much safer, but it isn't as simple as saying "computers can react more quickly than a human can right now; therefore, they are definitely safer than a human driver right now."

You're right. Not only do they react faster, they analyze and respond based on the actual data. Not on the emotional waste that humanity brings into the thought process. No fear, no hesitations, no second guesses.


I'm not saying that these four accidents were the fault of the autonomous car; I actually think they were not, but rather the fault of the other human driver. All I'm saying is that safe driving is about more than just reaction time. It's also about what you do once you react. They need to teach the cars to react properly, not just quickly.

That's just it. You can make a computer react the right way every time. Something that you will never get from humans.



posted on May, 12 2015 @ 09:58 AM
a reply to: peck420

Again, I agree that the reaction time will be ultra-fast, but how does it know what ultra-fast reaction it will be making?

The answer is that the course of action (reaction) that the computer will decide to make will be based upon what humans (fallible humans) tell it to make via its programming. Therein lies the issue. Sure -- it will react quickly, but that reaction will have been programmed by a human.

The computer-driven car (at least for the foreseeable future) will be performing specific reactions based on instructions from humans. If the autonomous car senses a danger, it will quickly follow a set of pre-arranged instructions that a human has input into the car's software.

No matter how quickly it reacts, that specific reaction will still be decided upon by the human software writer.


edit on 5/12/2015 by Soylent Green Is People because: (no reason given)



posted on May, 12 2015 @ 10:28 AM

originally posted by: Soylent Green Is People
The answer is that the course of action (reaction) that the computer will decide to make will be based upon what humans (fallible humans) tell it to make via its programming. Therein lies the issue. Sure -- it will react quickly, but that reaction will have been programmed by a human.

The computer-driven car (at least for the foreseeable future) will be performing specific reactions based on instructions from humans. If the autonomous car senses a danger, it will quickly follow a set of pre-arranged instructions that a human has input into the car's software.

No matter how quickly it reacts, that specific reaction will still be decided upon by the human software writer.

So... it is bad because it will do exactly what driving instructors and safety boards try to get humans to do?

The fallacy in your argument is that you think humans make good drivers. We don't. The most common cause of an accident is human error...and the most common error is the inability to follow the proper reaction sequence.

If all an automated car ever gets to (in terms of ability) is the reaction sequence already determined through years of research on human drivers, it will still be far better than human drivers.



posted on May, 12 2015 @ 10:36 AM

originally posted by: peck420

originally posted by: Soylent Green Is People
The answer is that the course of action (reaction) that the computer will decide to make will be based upon what humans (fallible humans) tell it to make via its programming. Therein lies the issue. Sure -- it will react quickly, but that reaction will have been programmed by a human.

The computer-driven car (at least for the foreseeable future) will be performing specific reactions based on instructions from humans. If the autonomous car senses a danger, it will quickly follow a set of pre-arranged instructions that a human has input into the car's software.

No matter how quickly it reacts, that specific reaction will still be decided upon by the human software writer.

So... it is bad because it will do exactly what driving instructors and safety boards try to get humans to do?

The fallacy in your argument is that you think humans make good drivers. We don't. The most common cause of an accident is human error...and the most common error is the inability to follow the proper reaction sequence.

If all an automated car ever gets to (in terms of ability) is the reaction sequence already determined through years of research on human drivers, it will still be far better than human drivers.


I'm saying exactly what you are saying -- i.e., that the human who programs the car to do a certain specific course of action is fallible; therefore even the autonomous car is fallible, because it's still being programmed to do what it does by a human.

All I'm responding to is your assertion that ultra-fast reaction times mean safer cars. I agreed with that assertion -- ultra-fast reaction times are very important to driving safety. However, I also added that ultra-fast reaction times are NOT the end-all to preventing accidents. A computer can react ultra-fast, but if it makes the incorrect reaction due to the human software writer telling it to make the incorrect reaction, then no matter how quickly it reacts, there can still be issues.

I agree that human drivers are not infallible, but human software writers are also not infallible.


edit on 5/12/2015 by Soylent Green Is People because: (no reason given)


