The ramblings of an Eternal Student of Life
. . . still studying and learning how to live

Sunday, November 22, 2015
Current Affairs ... Society ... Technology ...

Driverless cars are now being developed by a number of high-tech enterprises that are out to make a buck . . . eventually (this is not an easy way to get rich quick). The most famous driverless car venture is probably the one led by Google, which has set up a small fleet of prototypes and has actually been trying them out in the real world. Some people think that driverless cars will start being sold and regularly used between 2020 and 2025 (5 to 10 years from now). That’s going to be interesting.

I’ve seen a number of articles (e.g., here and here and here and here and here) about the moral quandaries that the designers of driverless cars will need to face. When you make and sell a regular car controlled by a human, you don’t worry so much about the moment-to-moment decisions being made by the driver (although increasingly, automated systems in the car constantly monitor what the driver is doing and try to warn the driver when they, or someone near them, is about to do something really bad . . . like ramming someone else’s vehicle while backing up in a parking lot, or starting a left turn while an oncoming truck is getting too close). When you design and sell a driverless car, by contrast, you have to program all of the driving decisions into the vehicle. So, in effect, it’s you, the builder of the car, who makes the big decisions (through the computer program that you put into the vehicle to run it).

As such, people such as philosophy professors are pointing out that those who program these cars will need to decide what to do in morally conflicting situations. E.g., say your driverless car is cruising down the road, and it detects that a group of four people have suddenly run out into the road right in front of it. There’s no time to stop, and the computer realizes that these four people are probably going to get killed by your car. The only choice, let’s say, is to swerve the car towards the curb on the right, where there is a parked car or telephone poles. This could spare the four stupid people in the road, but it’s gonna do some really bad stuff to you and your car.

(Actually, this is just a variation on a classic Philosophy 101 thought experiment regarding a runaway trolley car: someone standing by a track switch sees what is happening and has to decide whether to throw the switch and change the path that the high-speed, unstoppable trolley will take. In some versions of the problem, doing nothing will kill 5 people whom the observer doesn’t know, while throwing the switch will kill 1 person who is his friend. Our first instinct in such situations is not to get involved, but then 5 people die; if we do get involved and throw the switch, we feel as if our action has doomed our friend on the side track. There is a variation where throwing yourself out in front of the trolley will save the five. This was not an easy choice in the era of trolley cars, and it obviously won’t be an easy choice in the age of driverless cars.)

If the stupid pedestrian scenario happens to a regular human being driving a car, the driver would probably plow into the four dummies, with all of the terrible consequences. In most cases like that, the human driver wouldn’t have time to ponder the moral choices; they would probably be in shock and not have enough reaction time to reasonably choose between their own well-being and the well-being of the four idiots who just bolted out into the road ahead of them. Most of the time, humans are thus shielded from moral dilemmas and responsibility while driving because our minds aren’t fast enough to see what’s coming; there’s no time to make a reasoned decision. And even to the degree that we could become aware of the choices given normal human reaction times, our mind’s defense mechanisms, such as shock and “change blindness,” usually make it impossible to fully appreciate and act upon them within the split seconds before most accidents.

By comparison, a driverless car system has sensors for radar, light, heat and audio tracking, and a super-fast computer program making near-instantaneous decisions; thus its program needs to be pre-set to make a decision in such a case. The programmer of the driverless car does not have the excuse of saying “it all happened too fast”. Over their morning coffee, these programmers have to enter the decision factors into the software that will determine the fate of the four morons when a driverless car eventually encounters them.
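To make that concrete, here is a toy sketch, entirely hypothetical and not based on any manufacturer's actual code, of what "entering the decision factors into the software" might look like: a crude cost function that scores each available maneuver by expected harm, with a single weighting knob for how much the occupants count relative to the pedestrians. Every name and number in it is made up for illustration.

```python
# A toy sketch only -- purely hypothetical, not any manufacturer's actual logic.
# It illustrates the kind of pre-programmed trade-off described above:
# for each available maneuver, estimate expected harm and pick the minimum.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_pedestrian_harm: float   # estimated probability of harming the pedestrians
    pedestrians_at_risk: int
    p_occupant_harm: float     # estimated probability of harming the car's occupants
    occupants_at_risk: int

def expected_harm(m: Maneuver, occupant_weight: float = 1.0) -> float:
    """Crude expected-harm score; occupant_weight is the morally loaded knob."""
    return (m.p_pedestrian_harm * m.pedestrians_at_risk
            + occupant_weight * m.p_occupant_harm * m.occupants_at_risk)

def choose(maneuvers: list[Maneuver], occupant_weight: float = 1.0) -> Maneuver:
    # Pick whichever maneuver minimizes the expected-harm score.
    return min(maneuvers, key=lambda m: expected_harm(m, occupant_weight))

if __name__ == "__main__":
    options = [
        Maneuver("brake hard, stay in lane", 0.9, 4, 0.05, 1),
        Maneuver("swerve right into parked car", 0.05, 4, 0.6, 1),
    ]
    # Whoever sets occupant_weight is, in effect, making the moral call.
    print(choose(options, occupant_weight=1.0).name)    # favors the pedestrians
    print(choose(options, occupant_weight=10.0).name)   # favors the occupant
```

The point isn't the particular numbers; it's that someone has to pick the weighting ahead of time, at a desk, and that choice quietly decides the outcome when the crazy case finally happens.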

That does make for some interesting issues . . . I imagine that once it comes time for driverless cars to hit the road en masse, the government will take the car builders and programmers (and purchasers of the vehicles) off the hook, so as to avoid endless lawsuits and political wrangling about whether the car manufacturer and software programmer made the right moral choice, and whether the person who bought and uses the car is responsible for those choices. Some federal agency will probably issue stacks and stacks of unreadable regulations governing the “emergency response” programming in a driverless car. They will no doubt be issued in such a way as to avoid drawing attention to the big decisions that the government will thus be making as to who will live and die in such crazy cases. It will be interesting to see whether some nosy investigative reporter figures out what is going on and starts a big public and political furor over who takes the blame for what happens when a driverless car has an unfortunate mishap with a human.

Not that this should slow down or prevent the implementation of driverless cars. On the whole, they will probably cut down on vehicle crashes and vehicle-pedestrian collisions because of their constant visual / radar / heat monitoring systems, tied in with their super-fast decision making. Driverless cars don’t text or daydream or drink too much or get into emotional conversations with passengers while making left turns at busy intersections. There will probably be fewer pedestrian injuries and deaths because of driverless cars, and once everyone is using one, there will be fewer vehicle crashes too. The problem will occur during the transition phase, while regular humans behind the wheel mix it up with robot cars.

Human drivers don’t have a very good record of avoiding crashes with each other, largely because we don’t give driving our full attention. But when we are paying sufficient attention, we mostly understand how everyone else around us is thinking and acting while driving (which is often the problem: we know that the other driver is out for himself and will be unfair and rude to us, so we don’t even wait to find out; we act rudely toward them right up front. Rudeness is contagious!). We are nasty bastards behind the wheel (something about driving brings out the worst in most people), but we usually do the minimum needed to avoid crashing into each other, perhaps just because we don’t want to get involved with other nasty people.

With driverless cars, it’s going to be different. There are going to be different thinking systems interacting, and that will probably not be a good thing. Sure, the driverless car is probably going to take the safest course, but sometimes an unexpected level of caution on the part of one driver inspires another to do something unsafe. E.g., if you slow down as soon as you see a traffic light turning yellow while the driver behind you expects you to keep going, a rear-ender can happen. Ditto for a driverless car ahead of a human-driven car. Or, the aggressive human behind the cautious driverless car will zoom around it on the right so as to beat the light, which increases the danger of a pedestrian accident or a crash with someone on the cross-street who jumps the green.

There is already evidence of the problems to come during the “transition phase”. Google claims that over six years of testing its fleet of 25 cars, just over 2 million miles were driven and there were 16 accidents, all of which were caused by other drivers, according to Google. Research shows that the average American driver is expected to be involved in a collision once every 17.9 years, a statistic that differs significantly from Google’s results with its driverless cars. I.e., 6 YRS / 17.9 expected years per collision x 25 CARS = 8.4 expected collisions for the Google fleet; the ACTUAL number (i.e., 16) is thus about 90% higher than expected, nearly double. From another perspective, the average American driver logs 13,476 miles per year, and thus would drive about 241,220 miles on average during the 17.9 years between accidents. The Google fleet had 16 accidents while driving 2,036,000 miles, for an average of about 127,250 miles between accidents. So the average human’s mileage between accidents beats Google’s robot cars by roughly 90%, i.e., it is nearly twice as long.
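For anyone who wants to check the arithmetic, here is the same back-of-the-envelope comparison in a few lines of Python. The inputs are just the figures quoted above (25 cars, 6 years, roughly 2,036,000 miles, 16 accidents, one human collision every 17.9 years, 13,476 miles per year); treat it as a rough sanity check, not a study.

```python
# Re-running the back-of-the-envelope comparison from the paragraph above.
# All inputs are the figures quoted in the post; nothing here is new data.

fleet_cars = 25
fleet_years = 6
fleet_miles = 2_036_000
fleet_accidents = 16

human_years_per_accident = 17.9     # average years between collisions
human_miles_per_year = 13_476       # average annual mileage

# Expected collisions for the fleet if it crashed at the average human rate.
expected_fleet_accidents = fleet_cars * fleet_years / human_years_per_accident
print(f"expected: {expected_fleet_accidents:.1f}, actual: {fleet_accidents}")
print(f"actual / expected: {fleet_accidents / expected_fleet_accidents:.2f}x")

# Miles between accidents, human average vs. the Google fleet.
human_miles_between = human_miles_per_year * human_years_per_accident
fleet_miles_between = fleet_miles / fleet_accidents
print(f"human miles between accidents: {human_miles_between:,.0f}")
print(f"fleet miles between accidents: {fleet_miles_between:,.0f}")
print(f"human / fleet: {human_miles_between / fleet_miles_between:.2f}x")
```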

Are these numbers significant (i.e., almost twice the human accident rate)? Not necessarily, as they don’t account for the traffic density and driver characteristics in the places where Google actually tested its vehicles (obviously you expect more auto collisions in Brooklyn than in rural Ohio), nor for the times when the test vehicles were on the road (nights or weekends might be safer), nor for the severity of the collisions (Google claims that all of its accidents were “minor”). Still, if we assume that Google is trying to mimic average real-world driving conditions (its test sites include Austin, TX and Mountain View, CA), then its relatively high accident rate, along with the fact that these accidents were allegedly caused by human drivers, gives us hints of the potential problems inherent in mixing human drivers with robot vehicles. I suspect that there will be fewer FATAL accidents, but a whole lot more annoying fender-benders.
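For what it’s worth, one crude way to ask whether 16 accidents versus roughly 8.4 expected could just be luck, setting aside all of the confounders above, is to treat collisions as a Poisson process and look at the tail probability. A sketch using SciPy, purely as a sanity check and not a substitute for those caveats:

```python
# Crude significance check, assuming accidents follow a Poisson process with
# the "expected" rate computed above (~8.4 over the fleet's six years).
# This ignores every confounder mentioned in the post (traffic density,
# time of day, accident severity), so treat it as a rough sanity check only.
from scipy.stats import poisson

expected = 25 * 6 / 17.9          # ~8.4 expected collisions
observed = 16

# Probability of seeing 16 or more collisions if the fleet truly crashed
# at the average human accident rate.
p_value = poisson.sf(observed - 1, expected)
print(f"P(>= {observed} accidents | rate {expected:.1f}) = {p_value:.3f}")
```

A small tail probability would suggest the gap isn’t pure noise, though the real-world confounders above could still explain most of it.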

Obviously this is going to be a major headache for the insurance companies, who will need to decide who is at fault when collisions involving self-driving cars occur. Accident lawyers will have a bonanza during this transition phase, given that even the most minor fender-benders today involve thousands or even tens of thousands of dollars in repair work.

After a while, perhaps human drivers will get used to the hyper-rational decisions being made by driverless cars and will adapt to them. Even more of a stretch, perhaps humans will learn to mimic the driverless car by driving more carefully and cautiously. More likely, though, most people will eventually just give in and fork out the cash to buy a driverless vehicle themselves. No doubt there will be a few old-school stragglers who refuse to give up on their old manual cars, especially out in the heartlands; perhaps at some point, the state will come in and outlaw manually driven vehicles. Kids today might live to see that (and participate in the Tea Party-like political demonstrations against it).

Bottom line: driverless cars are pretty wonderful and should make for a better, safer and more efficient way for us to use private vehicles as our main way of getting about. But in the transition from what we have now to a fully realized automated driving system, there are going to be a lot of issues, arguments and contentions. There is going to be a good bit of angst in our society because of this. This is a revolutionary change that isn’t going to occur painlessly, the way smart phones replaced the basic cell phone. Get ready for bumps in the road to driverless vehicles.

◊   posted by Jim G @ 8:33 am      
 
 


  1. Jim, There’s no doubt in my mind that these driverless cars will sooner rather than later be on the road; and then sooner rather than later everyone will have to have one. It’s true that human drivers do not have a good record when it comes to driving. If as many people died in an airplane crash each day as die in car accidents each day – and here I *do* mean *each* day – airplanes would be forbidden as a means of transportation. I often wonder about cars and why steps are not taken to make them safer.

    I am not sure, but I don’t think I hear anything in this discussion about driverless cars regarding greater safety. I’ve seen on TV some demonstrations of how they work, with actual people sitting in a car that is driving itself.

    I can’t help but wonder if the driverless car will be as good say as “talking with/to a computer” over the phone, for instance. I find that dreadfully inadequate. Maybe it’s me; but I wonder.

    Then too, what if some maker of driverless cars does something similar to what Volkswagen (I think it’s that car) has done regarding the catalytic converters and the use of bad parts, leaving the dealers and owners of these autos to find out that it’s not only the catalytic converters but various other aspects of the car, such as airbags made with pieces that could kill the driver . . . all in the name of making money for the manufacturer. A driverless car made with cheap parts and bad parts. A bad picture, really bad.

    Can’t you just hear the person in a driverless car after an accident: Well, it couldn’t be any fault of mine; I was texting and not giving any attention whatsoever to the car. Whatever it did is not my fault. Makes me wonder if driverless cars might be introduced more slowly, a little of the “driverless” here, a little more later, etc. Introducing the driverless aspect of the driverless car in sections rather than in one entire package: Here’s the car; let it do its thing. It’s too big an adjustment for drivers, one might posit.

    And I am sure I fit into the category of “a few old-school stragglers who refuse to give up on their old manual cars, especially out in the heartlands”. (I might add that I still do not even have a smart phone; likely will never have one; I’m too old at this point.)

    I will say that I have been impressed by one of the new things in a car: The last time I left my car for service, the service dept. was kind enough to drive me home and pick me up when my car was ready. When I was picked up, of course, they had a car that they wanted to “show off” so people would know what the newest things available are. I have to admit that as the driver put the car in reverse, a camera showed everything in back of the car, even on the sides; my first thought was: “Wow! I want one of those!” Every time I get into my car now I wish for such a camera. Something extremely handy to have. Perhaps introducing a driverless car one feature at a time, accumulating the innovations gradually, might be a good way of bringing in something that will inevitably come whether one thinks it’s good or not. MCS

    Comment by Mary S. — November 22, 2015 @ 11:21 am

