
From Fight Club:

"A new car built by my company leaves somewhere traveling at 60 mph. The rear differential locks up. The car crashes and burns with everyone trapped inside. Now, should we initiate a recall? Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the cost of a recall, we don't do one."



People criticize this logic... But it's absolutely correct.

If it doesn't come out with results you like, then the inputs are wrong. Specifically, settlement amounts are probably too low.

People who claim all safety-related issues should get a recall are just wrong. If there is a 1-in-a-billion chance my car explodes and kills me, then a recall should not be done, because my chances of dying on the way to the dealer to have the repair done are higher.
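To make the arithmetic concrete, here is a minimal sketch of that A * B * C comparison in Python. Every number below (fleet size, settlement, per-car recall cost) is invented purely for illustration:

    # All figures hypothetical, purely to illustrate the expected-value math.
    fleet_size = 1_000_000      # A: vehicles in the field
    failure_rate = 1e-9         # B: probable rate of failure (the 1-in-a-billion case)
    settlement = 5_000_000      # C: average out-of-court settlement, in dollars
    recall_cost_per_car = 100   # assumed per-vehicle cost of a recall

    x = fleet_size * failure_rate * settlement    # A * B * C
    recall_cost = fleet_size * recall_cost_per_car
    print(f"expected liability X: ${x:,.0f}")         # -> $5,000
    print(f"cost of a recall:     ${recall_cost:,}")  # -> $100,000,000

With these inputs X is dwarfed by the recall cost, which is the point above: the formula itself is unobjectionable; the real debate is whether C is priced high enough.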


We shouldn't expect corporations to be so recklessly amoral. I'm all in favor of higher costs for doing shit like this, but the humans making the decisions bear moral responsibility for them. When they bury critical safety issues, they should be held accountable whether or not this particular financial calculus went their way.

Are there serious "people who claim all safety-related issues should get a recall"? That's not the only other available position. Not every safety incident needs to lead to a recall, but that doesn't prevent good-faith judgements on whether or not one is necessary. The fact that we assume this won't happen demonstrates how catastrophically awry we've allowed the artificial construct of a corporation to run.


>We shouldn't expect corporations to be so recklessly amoral.

Either that, or we treat them like any other animal incapable of civility - we cage/muzzle them and don't provide them with any opportunity for responsibility.


Imagine if we had the "Nutrition Facts" equivalent for failure rates (and supply chains, while we're at it). That'd be an interesting world.


1. People will eventually tune them out, like with Prop 65 warnings or the existing nutrition facts/calorie labeling.

2. While it's easy to calculate the nutritional content of a food, estimating future failure rates isn't trivial and there's a lot of subjectivity involved. Companies will definitely be fudging the reliability numbers to get an edge. See, for instance, the failure rates for hard drives. The annual failure rates on spec sheets are around 0.3%, but empirical data from Backblaze puts them anywhere from 0.3% to 12% (see the sketch below for the scale of that gap). Therefore I'd expect these nutrition-facts-style labels to be totally useless at best, and a waste of time/resources at worst.
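For a sense of scale, a quick sketch (the 10,000-drive fleet is hypothetical; the AFR figures are the ones quoted above):

    # Expected annual failures for a hypothetical 10,000-drive fleet, using
    # the spec-sheet AFR vs. the high end of Backblaze's empirical range.
    fleet = 10_000
    for label, afr in [("spec sheet", 0.003), ("Backblaze high end", 0.12)]:
        print(f"{label}: ~{fleet * afr:,.0f} failures/year")
    # spec sheet: ~30 failures/year
    # Backblaze high end: ~1,200 failures/year

A 40x difference in expected failures is exactly the kind of fudge room that would make such a label hard to trust.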


Prop 65 warnings are pretty useless though, since they carry so little information that one cannot evaluate the risk incurred.

Case in point, my first internship in California was in a building with a sign that said "This building contains chemicals known to the state of California to cause cancer and birth defects or other reproductive harm." What's in the building? Who knows. Could be really bad chemicals, or just someone who has a beer on their desk [0].

It would be much better to have some information about the chemicals contained, how bad the chemicals are, and what effect to expect at the concentrations at which they're encountered.

[0] https://oehha.ca.gov/chemicals/alcoholic-beverages-0


Yes, they need some actionable information. I remember seeing my first one as a 15-year-old Canadian on vacation. My first thought was: good thing I'm in Hawaii. My second thought was: I can't do anything with this vague information.

It was some green slime you put in your bike tires to prevent puncture leaks. I had never seen that before. I bought it and took it home with me, skillfully avoiding California so it didn't become carcinogenic.


So it was better when it was impossible to know what was in things and count calories? "Not perfect" doesn't mean "not better."

> The annual failure rates on spec sheets are around 0.3%, but empirical data from Backblaze puts them anywhere from 0.3% to 12%.

I'd suggest dealing with that as fraud, not giving up.


His argument is pure straw, made up by him; it is not what anyone is actually arguing.


I don't think it's correct and I don't think its correctness is objectively decidable.

It's one thing when companies have unforeseen flaws that end up causing injury or death. No one is perfect and, while they should pay reasonable restitution in line with the level of their mistake, it seems fine overall.

On the other hand, knowingly producing a product that you are reasonably sure will unexpectedly[1] kill or hurt someone should carry severe penalties. Those penalties should be imposed not through individual lawsuits (which are a poor tool for assuring the rights of whole classes of people, class-action lawsuits notwithstanding), but through prevailing regulatory action. To be honest, I don't think it would be going too far to nationalize a company in that situation as a standard action.

We really, really do not want a situation where companies are choosing to kill their customers because they think they will come out ahead in the end. Think about it: are we happy that the leadership teams of the tobacco industry, or the oil industry, or the fiberglass industry were kept in place? How much better a world would we be in if tobacco companies faced existential threat for their behavior? Where they needed to sell cigarettes like the USA sells guns (with the understanding that they may kill)? I think we should seriously consider that standard of product safety.

[1] Products like guns, which are intended to injure or destroy, are their own thing imo.


Nice, you built a good argument against a nonsense issue. Now try the actual issue instead of a straw man.


Modern society has become incapable of proper risk analysis.

COVID has really highlighted this rather well. People do believe there should be zero risk; they will only accept risk that has already been assimilated into their lives, but as a society we seem incapable of assimilating new risk.

I used to think we would get fully autonomous cars, but now I am pretty sure we will never see this technology on public roadways, not because it is infeasible, but because it can never ELIMINATE all risk to human life, and as such it will be rejected by society.

Just like "2 weeks to flatten the curve" transformed into "everyone self-isolate until COVID is no more," automated driving is no longer about being "safer" in an objective way; it has to prevent all death, and if an automated car causes even one death then we must continue with human drivers. At least, that is the view of many in society. We cannot allow an algorithm to resolve a trolley problem; better that a human do it.

As a society, we have become very, very, very risk averse.


It was a specific US administration that said two weeks, which employed the "right" experts to get that conclusion. The experts argued with him that it wasn't long enough and that, without widespread PPE and compliance from the population, it was bound to fail. The curve did flatten even in spite of these difficulties.

All that being said, you're absolutely right: we could have just accepted that life comes with risk and allowed millions within the US to die within a couple of months.


>>The curve did flatten even in spite of these difficulties.

Yes, it did. Then the goalposts were moved: it was no longer about hospital resources, it became about death rates; then, when death rates did not support the lockdown narrative, it became about infection rates.

In reality (for many regions) it was always about political and economic control, not public health.

>>we could have just accepted that life comes with risk and allowed millions within the US to die within a couple months.

There are hundreds of different ways the pandemic could have been handled; believing the only two options were complete economic shutdown or death is moronic and in no way supported by the evidence. It sounds like you want to have a fact-based discussion but are leading off with emotional rhetoric. I am happy to debate facts, but I have no time or need for emotional responses or red-herring fallacies.


It's not even a question of risk. Every risk is a trade-off against another one. Losing one life to an autonomous vehicle is unacceptable, but losing ten to drunk drivers is fine? That's not a risk assessment at all. It's really politics masquerading as risk.

As soon as autonomous vehicles are approved you're going to have driverless Amazon delivery trucks ejecting packages in your driveway and emailing you that they've arrived.

All the truck drivers and their unions know that, so they do everything they can to inject fear mongering stories into the media every time there is a driverless car accident, because politics.

And the media eats it up because it's clickbait. If they provided a reasoned risk assessment then the conclusion wouldn't be "fear for your lives" which wouldn't drive as much traffic.


>>As soon as autonomous vehicles are approved you're going to have driverless Amazon delivery trucks ejecting packages in your driveway and emailing you that they've arrived.

That is unlikely. The population has a huge problem right now with package theft. Even if theft doesn't "cost" the customer anything, when I order something I need the product; if it is stolen from me, even if I get another one a few days later, it makes me less likely to buy online. Amazon's market dominance is directly tied to 1-2 day delivery times.

Having a bunch of robots just toss packages 5 feet from the road might seem like a good idea to an MBA, but in reality it will make package delivery less reliable. If I have to have 30% of my Amazon packages redelivered because of theft, damage, etc., Amazon will lose its market share.

Already they are losing in many ways on price; I am oftentimes finding things elsewhere for lower prices than on Amazon, largely because of their INSANE platform charges (i.e., the 30% "Fulfilled by Amazon" surcharge).

Amazon's retail business is still either breaking even or losing money; AWS supports the company. I am not sure they can withstand the hit that would come from fully autonomous package delivery.

>All the truck drivers and their unions know that,

I can assure you it is not truck drivers or the truck drivers' unions (which really have almost no power these days) that are at the heart of anti-automation reporting.

Insurance companies and local governments have a lot more at stake. Hell, most local governments get huge amounts of revenue from parking and other road-related fines that would disappear entirely with fully automated cars.


> That is unlikely. The population has a huge problem right now with package theft.

You're making the case that it won't matter because the problem is already present.

The human drivers already do this. How strong a case can you make that they won't be able to get away with something they already get away with?

> Already they are losing in many ways on price; I am oftentimes finding things elsewhere for lower prices than on Amazon, largely because of their INSANE platform charges (i.e., the 30% "Fulfilled by Amazon" surcharge).

Complaint unrelated to driverless trucks.

> Insurance companies and local governments have a lot more at stake. Hell, most local governments get huge amounts of revenue from parking and other road-related fines that would disappear entirely with fully automated cars.

By most accounts self-driving cars are going to reduce insurance liability because they don't drive drunk or text and drive or get tired or angry or distracted. But also, insurance companies don't really care about claims when they're predictable except to the extent that the corresponding premiums are so high they discourage people from buying insurance, which is a high bar when car insurance is required by law.

And listing additional groups who have the incentive to throw shade on self-driving cars for underhanded political reasons rather than legitimate risks is just more to the point.


I've seen how software is developed. If we have automated systems doing the same thing, they will all make the same mistake. It will be astounding to see it happen, and disastrous.


The existing non-driverless cars are already full of software.


Yes, but it won't drive you into a wall.



About this: I have a relation who works for a relatively large upstream auto-parts supplier. I asked him somewhat jokingly about this Fight Club scene, and he immediately and unabashedly told me he'd been involved in a number of such conversations. In retrospect, I can't understand why I was at all surprised: how else would the conversations go in a corporation (whose sole or primary incentive is by default monetary)?

(To be clear, I'm not saying that I find this morally correct — I'm not sure how I feel about that aspect, honestly, except icky at the surface level. What I mean is that it seems, in retrospect, that of course that's how it would go, given the incentive structure.)


> how else would the conversations go in a corporation (whose sole or primary incentive is by default monetary)

This isn't really a corporate issue at all though. Given scarce resources (whether physical or human), we need to be able to allocate resources efficiently. A conversation along these lines happens in public health systems all the time: how much money should be spent on medical interventions? There, the concept of a QALY (Quality-adjusted life year) is used, and typically, a price limit is set per QALY. Then, only interventions below that threshold will be funded. The idea is that since the healthcare system has limited funds, it doesn't make sense to spend exorbitant amounts delivering marginal results for one patient.

Now, one could argue this is simply a monetary issue, and if we didn't use money to measure these things, the issue would go away. The thing is, even if money isn't an issue (somehow), scarcity is still something we need to deal with. Developing and administering medical interventions takes human labour, and spending a disproportionate amount of person-hours on small gains is still an issue.

This comment is probably a little rambling, but the TL;DR is that given scarce resources (whether that's money in a corporation, or chemists and doctors in a health system), doing calculations on human lives is necessary if we want to make sure we allocate resources effectively.
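As a rough sketch of how a QALY threshold gets applied in practice (the threshold and the intervention numbers below are invented for illustration):

    # Fund an intervention only if its cost per quality-adjusted life year
    # falls under the system's threshold. All numbers are hypothetical.
    THRESHOLD = 50_000  # dollars per QALY; real systems pick their own figure

    interventions = [
        # (name, total cost in dollars, QALYs gained)
        ("hip replacement", 15_000, 5.0),
        ("novel cancer drug", 300_000, 0.5),
    ]
    for name, cost, qalys in interventions:
        cost_per_qaly = cost / qalys
        verdict = "fund" if cost_per_qaly <= THRESHOLD else "do not fund"
        print(f"{name}: ${cost_per_qaly:,.0f}/QALY -> {verdict}")
    # hip replacement: $3,000/QALY -> fund
    # novel cancer drug: $600,000/QALY -> do not fund

The same structure applies whether the scarce resource is money, clinician hours, or factory capacity; only the units change.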


In an analogous situation: how do you think a universal healthcare system should make spending and prioritisation decisions?


How does it work in the US for those covered by universal healthcare?


It doesn't take into account costs like bad publicity, effects on employee morale, etc.


One of many great quotes from Fight Club.



