
What's crazy is this: if AI turns out to be better than doctors by some significant degree, what do we do when the doctor and the AI disagree? Say doctors are right 85% of the time, but the AI is right 90% of the time.

I guess we treat it as another doctor? If we have four opinions and they agree, we go with the consensus regardless of the source of those opinions (as long as each source meets some minimum competence threshold).
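A rough sketch of that idea in Python, treating the AI as one more opinion and taking a majority vote among sources that clear a minimum accuracy bar (the source names, accuracy figures, and threshold are made-up placeholders, not a real triage policy):

    from collections import Counter

    MIN_ACCURACY = 0.80  # hypothetical competence threshold

    opinions = [
        # (source, historical accuracy, diagnosis) -- all values are illustrative
        ("doctor_a", 0.85, "benign"),
        ("doctor_b", 0.86, "malignant"),
        ("doctor_c", 0.84, "malignant"),
        ("ai_model", 0.90, "malignant"),
    ]

    # Keep only opinions whose source clears the threshold, then vote.
    eligible = [dx for _, acc, dx in opinions if acc >= MIN_ACCURACY]
    consensus, votes = Counter(eligible).most_common(1)[0]
    print(f"consensus: {consensus} ({votes}/{len(eligible)} eligible opinions)")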



As others have said - perform more tests. I've worked around molecular pathology testing for cancer (which comes up with a diagnosis based on data analysis after running DNA or RNA sequencing). If the molecular report differs from what the surgical pathologist saw under the microscope, it's typically not 'it's cancer' vs. 'it's not cancer', but more 'this is molecularly non-small cell lung cancer' vs. 'this appears to be prostate cancer'. So what happens is they'll do more staining on the sample, specific to what the molecular report suggested, and a lot of times, bam: that tumor in someone's prostate is actually lung cancer and needs to be treated that way.


Perform more tests. Start preliminary treatment and continue to monitor. Medical care isn't a 1-bit decision process.


That's an interesting thought.

Currently, if a diagnostic test comes back suggesting something serious, say cancer, and the doctor does not pursue it, the doctor would be liable if it did turn out to be cancer.

So if a machine disagreed with a doctor, then I would assume that the doctor will grudgingly have to investigate further until there is enough evidence to rule out that diagnosis.

#headache

What I can see happening is that patients will go to this machine for a second opinion. And if that opinion contradicts the primary physician's, an entire can of (legal) worms will be opened.

--

Addendum:

To elaborate further, there is sometimes what's called the benefit of history.

Say a patient visits 10 doctors. The 10th doctor has an unfair advantage over the first 9 simply because he/she will have prior knowledge of which diagnoses and treatments were incorrect.

Similarly, in an AI vs. human doctor situation, incorporating that additional information would require a considerable amount of training data before the AI could recognize prior history, failed treatments, and the like.

Image-specific diagnoses (e.g. recognizing melanoma or retinopathy) lend themselves to AI very nicely; diagnoses that involve a significant amount of, shall we say, "human factors", less so.


Doctors aren't liable for failing to predict the future or for making an imperfect diagnosis.

If a doctor reviews the available data, reasonably concludes that it shouldn't be pursued further, and it later does turn out to be cancer, that by itself does not mean the doctor is liable for anything. Malpractice requires actual culpable negligence, such as missing something obvious; it does not cover interpreting a questionable situation in a way that turns out to be wrong. The existence of a second, contrary opinion doesn't change that.


This isn't a new issue; there have been CAD systems that outperformed average clinicians (on very specific tasks) since at least the mid-1990s. At the end of the day, liability drives the resolution process in some jurisdictions, efficiency in others.


>> Like if doctors are right 85% of the time, but the AI is 90%.

There is always the possibility that the doctors and the device are both right 90% of the time, but not the same 90% of the time.

Or that one party is right most of the time for the most severe cases while the other is right only for the milder cases, etc.

It's not easy to look at absolute numbers here.
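A toy illustration of that first point in Python: two sources can each be right 90% of the time yet miss different cases, so the headline accuracy alone doesn't tell you how they interact (the case counts and error sets below are invented purely for illustration):

    import random

    random.seed(0)
    N = 1000
    cases = range(N)

    # Assume the doctor misses a random 10% of cases and the device misses a
    # (mostly different) random 10%.
    doctor_wrong = set(random.sample(cases, 100))
    device_wrong = set(random.sample(cases, 100))

    both_wrong = doctor_wrong & device_wrong
    print(f"doctor accuracy: {1 - len(doctor_wrong) / N:.0%}")
    print(f"device accuracy: {1 - len(device_wrong) / N:.0%}")
    print(f"cases where at least one of them is right: {(N - len(both_wrong)) / N:.0%}")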


I know this is a totally whack-job comment, but the TV show "The Good Doctor" kind of leans this way. Instead of relying on a ton of personal bias, the main character generally diagnoses more the way an ML model would. Obviously there's no way to establish any real foundation for this since it's a TV show, but it offers a vision of what you're describing, except that instead of an AI it's an individual with savant syndrome making better judgments than the rest of the doctors. That said, I'd imagine a savant being placed in a role like that is even less likely than the show portrays, so where does that leave AI?


This is definitely going to be an issue. Even in cases where you're measuring your tool against "expert consensus" (often 3-5 physicians), there's a reasonable likelihood that the consensus may be wrong in certain types of cases.

Though even in those cases, you might be looking to show that your tool agrees with physicians at least as often as physicians agree with each other. Malpractice is usually about failing to offer the standard of care, and if you can show a reasonable level of concurrence with the standard of care in research and trials, you may be able to move forward and reach those higher levels of accuracy.
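A rough sketch in Python of that kind of comparison, i.e. checking whether a tool agrees with physicians about as often as physicians agree with each other (the readings below are invented, and a real study would use a chance-corrected statistic such as Cohen's kappa rather than raw agreement):

    from itertools import combinations

    # Hypothetical reads on the same 6 cases by 3 physicians and the tool.
    readings = {
        "phys_1": ["pos", "neg", "pos", "neg", "neg", "pos"],
        "phys_2": ["pos", "neg", "pos", "pos", "neg", "pos"],
        "phys_3": ["pos", "neg", "neg", "neg", "neg", "pos"],
        "tool":   ["pos", "neg", "pos", "neg", "neg", "pos"],
    }

    def agreement(a, b):
        # Fraction of cases where two readers give the same call.
        return sum(x == y for x, y in zip(a, b)) / len(a)

    physicians = ["phys_1", "phys_2", "phys_3"]
    inter_physician = [agreement(readings[a], readings[b])
                       for a, b in combinations(physicians, 2)]
    tool_vs_physician = [agreement(readings["tool"], readings[p]) for p in physicians]

    print(f"mean physician-physician agreement: {sum(inter_physician) / len(inter_physician):.2f}")
    print(f"mean tool-physician agreement:      {sum(tool_vs_physician) / len(tool_vs_physician):.2f}")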


In practice? Pessimistic answer:

a) Usage still requires the presence of the Doctor.

b) The doctor does nothing but relay the AI’s message.

c) The doctor continues to charge the same and treat the same number of patients.

d) Everyone who expresses "hey, isn't the doctor redundant now? Shouldn't we be treating more patients for cheaper?" gets ridiculed as "one of those people".

e) Edit: Also, the doctors’ association devotes significant resources to come up with memetically virulent reasons why the world would end if we took doctors out of the loop.

I mean, that’s how a lot of obviated jobs are currently treated...


The doctor isn't going to be making decisions without access to the computer diagnosis.

The computer aided diagnosis isn't another doctor, it's another stethoscope.



