Self-driving or autonomous cars have been in the news quite a lot recently, with every Tesla accident reported in detail and some fatal ones thrown into the mix. It has made me think a lot more about the technology, the testing and, ultimately, who is responsible for the accidents.
Tesla make a lot of marketing fuss about their Model S, describing it as “the quickest and safest car on the road”. They do have a very high NCAP rating; however, the real world has shown that, despite a high safety rating, accidents can (and do) still kill occupants.
Tesla also have their “AutoPilot” system, which “takes away the most burdensome parts of driving”. Personally, I don’t know which parts of driving are burdensome, although in some areas of the world where drivers are encouraged to always use freeways, it would be boring rather than burdensome.
However, AutoPilot isn’t a true self-driving system, and I suspect that many of the recent accidents have been caused by humans assuming it is and switching off their brains to let the car do the work. The problem is that the technology is not designed for that. The driver who was killed when his car struck a truck after bright sunlight confused the system whilst he watched a film, and the other Tesla driver who smacked into a fire truck attending another incident because he assumed the car would simply drive around it, were not using the system as Tesla intended.
Then we have the recent issues around Uber’s self-driving tests, which involved the death of a woman pushing her bicycle in the dark. In that incident, it was reported that the car’s emergency braking had been switched off while under computer control – maybe the code was written to hand control back to the driver, who wasn’t looking where the car was going and so didn’t have enough time to brake before impact.
What is quite clear is that, in their haste to be seen to have go-getting governments, many US states have allowed these cars to a) be tested in the real world and b) be on public roads at all. The manufacturers are in a similar position: they want to be seen to be at the forefront of the technology.
In the technology world, there is the concept of alpha and beta testing. I believe the alpha testing was not completed properly in controlled environments and was probably rushed to get the vehicles onto the streets. The University of Michigan has Mcity, and in the UK there is MIRA (the Motor Industry Research Association), now owned by Horiba of Japan. I’m sure there are other venues around the world that would allow manufacturers to simulate conditions and let the media view the progress.
If the manufacturers think they are in beta testing, they need to take a step back and really review the data generated from the alpha testing – not just the structured data coming from the systems, but also the “unstructured data” that comes from articles such as this one and other reports. Combining the data sources and stripping out the panic writing would give everyone a better view of what is really going on, as well as of the public perception.
Government departments such as the NHTSA in the US and their counterparts in Europe and Asia should also provide data that could be used to redefine the testing of autonomous cars. This would be preferable to one (or all) of them banning the testing completely over safety concerns. It was a dumb idea to give humans a piece of tech called AutoPilot because, despite what the documentation stated, it was never going to be used properly – and we know how good humans are at reading user manuals!
I believe autonomous vehicles need more track testing with simulated scenarios, covering as many conditions as possible, before they are allowed back on public roads. When they are on the road, the test engineer must be focused on the testing and not just sitting there with their own brain in neutral. As we are seeing, vehicles travel in a multitude of conditions: varying light, grip, speed limits, road surface wetness and a constantly changing number of things to collide with. The onboard systems need to deal with all of these, and I don’t believe the code or the data points exist to handle them quickly enough.
Let’s face it, the human brain has evolved over thousands of years, and competent drivers use that base to formulate decisions in a split second. Sure, humans stack cars on a regular basis; however, I don’t think computers can drive any better just yet. The systems simply need more evolution time.
Ultimately, though, I believe that any human who is the designated driver of a vehicle is responsible for that vehicle, and that may also include responsibility for an accident occurring. Several Uber test drivers have been dismissed in the last year because they failed to obey basic road rules and, let’s face it, they still need to obey the same rules as every other driver – and that means physically controlling the vehicle when required.
Perhaps another option is to limit autonomous vehicles to specific suburbs to reduce the traffic around them until the technology has been tested more thoroughly. This is definitely a subject that will create more discussion over the next few years!
That guy who landed the plane in the Hudson has spent waaay more time than you or I thinking about the issue, but you are on the right track here. See http://www.thedrive.com/tech/8300/can-sully-transform-the-world-of-self-driving-cars. There was another incident where a Tesla struck a PARKED car. There was some pathetic excuse about that, attempting to shift blame to the driver, but it is going to be one corker of an explanation as to how that is a credible position to hold! See https://www.cnbc.com/2018/05/29/tesla-in-autopilot-mode-hit-a-parked-california-police-car.html for the basics on that one.
Personally, I think the industry reboot that Sully (the pilot guy) talks about is coming, and the manufacturers’ avaricious need to make dosh from pushing the tech out way too early is going to get a serious adjustment. Crikey, we haven’t even heard stories of what mischief teenage boys are going to get up to when they put their minds to it, nor how owners are going to start ‘fritzing’ some of the more inconvenient ‘safety features’ they don’t happen to agree with. Yep, it’s all coming. If nobody else has said it (I wouldn’t know!), you read it here on the MW blog first!!!
Thanks, Vince. What I find interesting is that the human race is very adept at making the same mistakes over and over again, albeit in different industries. The desire to push a product into a market that isn’t ready always seems to end up with the same result – the misuse of the product and then the blame game. The technology industry is full of examples where a product was released way too early, subsequently causing security or safety issues and loss of reputation. Now, with the auto industry merging in these ideas, it is not learning from past mistakes.