There are some interesting things to consider once you put computers in charge of cars.
Suppose the car detects that a crash is about to happen and cannot be avoided. It could hit the wall and kill only the driver, or plough into the bus queue and save the driver but kill several innocent bystanders. What should it do? Would there be a market for 'augmented' decision-making (favouring the driver)? Who would be 'guilty'? The original programmer? The person who upgraded the software? The car owner?
"our basic model, sir, always tries to minimise harm to the third parties, the premium version costs a little more but weighs up the merits of the situation and looks for the least overall harm. The executive version does have a higher price tag sir, but values you life above all other considerations..... oh a very wise choice sir"
I can see the lawyers loving this.
It has already been mooted that failure to apply a 'safety-critical' software patch to an autonomous car could make the driver/owner liable instead of the car manufacturer. What happens if you are driving at the time the update is published? Do you suddenly become liable without knowing it? Do you have to stop and apply the update? What happens if you are not 'in signal'? How could you prove it?
I want my car computer-free, please. I am not a Luddite - I write software and own mobile phones and such. If a phone goes wrong and you can't see Ar$3B00k then it's no big deal (even a benefit). But something going wrong with software controlling a ton or two of metal travelling at 70mph with me in it? No thank you.
Thankfully I live in the back of beyond where a computer will be hard pressed to work out where the edges of the roads are (difficult enough for a human at times), so we will get the full effect of the technology last. I hope.