This is a problem with ML approaches, right? Instead of water boiling at "100 °C" it boils at "99.98 °C ± 0.04 °C". Normally this is ok, but sometimes it isn't!
I imagine most humans have error rates worse than that. And what does 'error rate' even mean in that context? A small delay or a catastrophic failure ending in death and destruction?
I would think that would be quite a good error rate. Especially when considering people in the hospital are often not in good health, possibly making their veins more difficult to find.
That is assuming by error you mean missing the vein. If error is defined as a fatal complication, then 1/1000 is terrifying.
If there are 50,000 people and 50,000 people experience problems, that's bad.
If there are 5 million people and 50,000 experience problems, that's fine?
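The contrast in those two scenarios is just a rate calculation. A minimal sketch, using the (hypothetical) numbers from the comment above:

```python
# Same absolute failure count, very different populations.
def failure_rate(failures: int, population: int) -> float:
    return failures / population

everyone_hit = failure_rate(50_000, 50_000)     # every single user affected
one_percent = failure_rate(50_000, 5_000_000)   # 1% of users affected

print(f"{everyone_hit:.0%} vs {one_percent:.0%}")  # → 100% vs 1%
```

Whether 1% counts as "fine" is exactly the disagreement in this thread; the arithmetic only tells you the rate, not the severity of each failure.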
Isaac Asimov made a related point about world population growth: the more people there are, the more each individual is dehumanised and rendered irrelevant (my paraphrasing).
No, I don't think it's fine at all. I think Google, twatter, Facebook, et al. don't care, because those 50 people and whoever they represent don't matter to them compared to the money they make.
When all rounded up, it isn't even a single penny on the balance sheet. The owners of these businesses literally never even know, since the balance sheet is their only view into the companies.
I have no idea why I'm being downvoted on this. Hackers can't do math or what?
I think your position is a little unrealistic. 50 people experiencing problems out of, what, a billion? That's pretty good. Do you think that if those billion people were served by 20 million small-business e-mail providers, none of those 20 million providers would ever make a mistake and affect their 50 customers?
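That comparison can be made concrete with a back-of-the-envelope model. A sketch under assumed numbers: the centralized figures are from the comment above, while the per-provider mistake probability `p` is entirely made up for illustration:

```python
# Centralized: one provider, 50 affected users out of 1 billion.
centralized_affected = 50

# Distributed: 20 million small providers, ~50 customers each.
# Assume each provider botches things for one customer in a given
# period with probability p (p is an illustrative guess, not data).
providers = 20_000_000
p = 1e-4

# Expected number of customers affected across all small providers.
distributed_affected = providers * p

print(centralized_affected, distributed_affected)  # → 50 2000.0
```

Even with a fairly generous per-provider reliability assumption, the expected harm from millions of independent small operators can exceed the centralized case; the real debate is about accountability and blast radius, which this arithmetic doesn't capture.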