Last month, in mid-January, legal action was taken against General Motors after a collision between a motorcyclist and an autonomous car that was being tested. I wrote a while back about GM buying Cruise Automation, and it was one of their fleet of Chevy Bolts that was involved in the incident at the heart of the lawsuit.
The Incident
Oscar Nilsson was riding his motorcycle along a city street in San Francisco behind the Chevy, which moved into the left lane. Nilsson was committed to passing the car on the right when it suddenly veered back into the right lane, colliding with – and knocking Nilsson off – his bike.
The Lawsuit
Nilsson is claiming unspecified damages for neck and shoulder injuries that would require lengthy treatment.
The Battle of the Silks
Nilsson’s lawyer stated that the car (in autonomous mode with a back-up driver) aborted the lane change and moved right, causing the accident. General Motors, however, stated that the car moved into the left lane and then, realising it needed to be in the right lane, moved back; Nilsson, who was lane splitting at the time (legally), saw the car moving back, wobbled and fell off.
It is known that the back-up driver attempted to abort the lane change but could not do so before the impact. The police report blamed Nilsson for passing on the right when it was not safe to do so.
The Issues Surrounding the Case (as I see it)
There are several concerns with this lawsuit – mostly with the motorcyclist’s arguments.
Firstly, in many countries the law holds the person behind at fault, because the driver or rider must be aware of the vehicle in front and of any changes in its direction. In a binary world this is fine; however, we do not live in a binary world unless you are Neo in The Matrix! This appears to be the basis of the police report – Nilsson was riding behind the car and should have taken more care in judging his manoeuvre.
In theory, the rider’s judgement should be to wait for the vehicle to complete its manoeuvre. In the real world of city traffic, split-second judgements have to be made, and with motorcycles able to change lanes and speed very quickly, other drivers also need to be more aware of their surroundings.
Secondly, an autonomous car is loaded with technology: radar, LIDAR, cameras, GPS and more. That produces lots and lots of data and video footage, and it gives GM the basis for a counter-argument – which they have mounted. Their data shows the car was travelling at 12 mph and Nilsson at 17 mph. With all that data collection on board, you would have to be very clear about the incident before launching legal action; you are behind the eight ball from the first minute if you do not have a GoPro or other device recording the accident.
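To give a sense of what that logged data lets GM argue, here is a back-of-the-envelope sketch in Python using the two reported speeds. The 10 m gap is my own assumption for illustration, not a figure from the case.

```python
# Back-of-the-envelope check only: the 12 mph and 17 mph figures are the speeds
# reported from the car's logs; the 10 m gap is a made-up number for illustration.

MPH_TO_MPS = 0.44704  # metres per second per mile per hour

car_speed_mph = 12.0
bike_speed_mph = 17.0

# Closing speed of the motorcycle on the car.
closing_speed_mps = (bike_speed_mph - car_speed_mph) * MPH_TO_MPS  # ~2.2 m/s

# Hypothetical gap of roughly two car lengths between the bike and the car.
gap_m = 10.0
time_to_close_s = gap_m / closing_speed_mps  # ~4.5 s

print(f"Closing speed: {closing_speed_mps:.1f} m/s")
print(f"Time to close a {gap_m:.0f} m gap: {time_to_close_s:.1f} s")
```

At those speeds the bike closes on the car in a handful of seconds, which is exactly the kind of timeline both sides will try to reconstruct from the logs.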
The video recorded by the car shows Nilsson looking at it for longer than a quick check of the car’s position, and then shows him getting up, picking the bike up and moving to the side of the street before complaining of shoulder pain and receiving attention. A certain amount of adrenalin would certainly have kicked in after the fall, which would explain his ability to pick up the bike and is not necessarily proof that he was uninjured.
Thirdly, I would expect the software being developed (and tested) to still have some holes in it – notably around when to change lanes and what is around the vehicle. This could be a valuable lesson for these systems, given how quickly a motorcycle or scooter can manoeuvre and how many of them are on the road in most cities. The technology needs to account for lane splitting – the legal ability (in some jurisdictions) to move between lanes of traffic below certain speeds – and it needs to process its sensor data well enough to cope with the size and speed of motorcycles, something that appears to have been lacking in this instance.
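To make that concrete, here is a minimal, purely illustrative Python sketch of the kind of gap check a planner might run before moving back into a lane. None of the names, structures or thresholds come from GM or Cruise, and a real planner is vastly more sophisticated.

```python
from dataclasses import dataclass
from typing import List

MPH_TO_MPS = 0.44704

@dataclass
class TrackedObject:
    """A heavily simplified perception track for something behind us in a lane."""
    width_m: float        # estimated width; motorcycles are narrow (~0.8 m) and
                          # harder to detect, though this simple check ignores width
    speed_mps: float      # its speed along the lane
    gap_behind_m: float   # longitudinal gap from our rear bumper

def safe_to_return_to_lane(ego_speed_mps: float,
                           objects_in_lane: List[TrackedObject],
                           min_time_gap_s: float = 2.0) -> bool:
    """Return True only if moving back into the lane leaves at least
    min_time_gap_s before any following object - including a faster-moving
    motorcycle filtering between lanes - reaches us."""
    for obj in objects_in_lane:
        closing_speed = obj.speed_mps - ego_speed_mps
        if closing_speed <= 0:
            continue  # not catching up, so it cannot close the gap
        time_to_reach_us = obj.gap_behind_m / closing_speed
        if time_to_reach_us < min_time_gap_s:
            return False  # too little margin: do not move back into the lane
    return True

# With the reported speeds (12 mph car, 17 mph bike) and a small assumed gap,
# the check refuses the move:
bike = TrackedObject(width_m=0.8, speed_mps=17 * MPH_TO_MPS, gap_behind_m=3.0)
print(safe_to_return_to_lane(ego_speed_mps=12 * MPH_TO_MPS,
                             objects_in_lane=[bike]))  # False
```

The point of the sketch is simply that a narrow, fast-closing track like a motorcycle has to be weighted just as heavily as a car when deciding whether to abort a lane change.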
GM will use the data to show that Nilsson acted too quickly when the Chevy moved lanes and was therefore at fault for not allowing the car to complete its lane change – that is the grey area both lawyers will need to argue. In the real world, you see a car start to change lanes and consider the manoeuvre committed, not expecting it to move back again quickly. On the flip side, lane splitting past a car that is mid-manoeuvre may not have been the best choice of action.
The history of autonomous vehicles shows that if the manufacturer feels their systems are at fault, they will settle out of court quickly; if the data shows the other person is at fault, they will vigorously defend their position. This case is so recent that it has not yet had its day in court, and it will be interesting to see the arguments on both sides as well as the judge’s decision.
VinceS2 says
Interesting how the facts diverge. Here is an earlier report: https://motorbikewriter.com/police-blame-rider-autonomous-crash/ where it sounds like the biker should have had ‘a quiet chat’ and got the thing sorted. Doesn’t matter how you slice or dice it, a human driver would not have made the choice the AI did and that is clear and uncontroversial (given the actual driver saw the situation arising and couldn’t prevent it), so the bike guy should have been ‘sorted’.
Separate question: why didn’t the human intervention avoid the collision? Was the person just too slow, or was the change-over permission system configured for ‘oh, you drive for a bit now’ levels of urgency rather than actual emergency input? People will be asking these questions, for sure!!!