something-crazier t1_j1lwdnn wrote

I realize ML in healthcare is likely the way of the future, but articles like this one make me really worried about this sort of technology

159

BlishBlash t1_j1mq46q wrote

Don't let anyone gaslight you, it WILL be as bad as you think. Probably worse. Anything to make the most money possible at the expense of patients and medical workers.

36

poo2thegeek t1_j1m10fi wrote

Agreed. ML is the future, but it needs significant legislation to ensure it's safe. ML should probably be used only as an aid, not as a final truth.

34

UnkleRinkus t1_j1n2xsy wrote

If you think Congress's attempts at regulating social media were disastrous, wait until they try to regulate applied statistics and model fitting. You can't usefully regulate something you don't understand.

19

TurboTurtle- t1_j1p3p4g wrote

Of course. Why try to understand something when it’s so much easier to just accept loads of money from your favorite mega corps?

2

Hydrocoded t1_j1pdxyr wrote

They already regulate the medical system and look how wonderful that has turned out.

Lawmakers ruin everything they touch.

1

Subjective-Suspect t1_j1pndnb wrote

True story: I was threatened with police intervention by my doctor's nurse for trying to get a refill for hydrocodone the day before Thanksgiving.

I had pinched a nerve the previous week and was in substantial pain. I knew I'd run out of meds over the long weekend, so I called. They assumed I was already out of medication and accused me of abusing it. I went by the office with the partially full bottle, to no avail. The nurse and another staffer (witness) pulled me into a room. They refused to listen or examine my med bottle. That's when they threatened cops if I didn't leave immediately. I left and went straight to urgent care. Prescription given.

I booked my next—and final—visit to my doctor to tell him how furious I was to be dismissed, threatened, and ostensibly left in pain for days. I told him I was never coming back and that they were damn lucky that's all I intended to do. He claimed no knowledge of the whole ugly situation. As if.

4

faen_du_sa t1_j1m3qts wrote

Indeed. I'd imagine it would be extremely helpful in pointing out where to look in a lot of cases. Probably a while before we can rely on it exclusively, though; I'd also imagine that's a territory of responsibility hell. Who gets the blame if someone dies because something wasn't discovered, the software team?

Pretty much all the same problems that arise with automated cars and insurance.

6

poo2thegeek t1_j1m457q wrote

Yeah, it’s certainly difficult. But it’s also complicated. For example, I believe ML models looking at certain cancer scans have higher accuracy than experts looking at the same scans. In this situation, if someone is told they have no cancer (by the scan) but it turns out they do, is the model really at fault?

I think the thing that should be done in the meantime is that models should have better uncertainty calibration (i.e., in the cancer scan example, if the model says a person has an 80% chance of cancer, then of all scans that scored 80%, 80% should actually have cancer and 20% should not), along with a cutoff point above which an expert double-checks the scan (maybe anything more than a 1% ML output).
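The calibration idea above can be checked mechanically: bucket predictions by their predicted probability and compare each bucket's average prediction to the observed positive rate. A minimal sketch, with made-up scores and labels (not from any real model):

```python
# Minimal calibration check: group predictions into probability bins and
# compare each bin's average predicted probability to the observed rate.

def calibration_buckets(preds, labels, n_bins=10):
    """Return a list of (avg_predicted, observed_rate, count) per non-empty bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(preds, labels):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the last bin
        bins[i].append((p, y))
    out = []
    for b in bins:
        if not b:
            continue
        avg_p = sum(p for p, _ in b) / len(b)
        rate = sum(y for _, y in b) / len(b)
        out.append((avg_p, rate, len(b)))
    return out

# A perfectly calibrated toy model: among scans scored 0.8, 80% are positive.
preds = [0.8] * 10 + [0.1] * 10
labels = [1] * 8 + [0] * 2 + [0] * 9 + [1]

for avg_p, rate, n in calibration_buckets(preds, labels):
    print(f"predicted {avg_p:.2f}  observed {rate:.2f}  (n={n})")
```

When the predicted and observed columns match across bins, the model is well calibrated; large gaps mean its probabilities can't be trusted as probabilities, regardless of its raw accuracy.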

11

DogGetDownFromThere t1_j1mmy4n wrote

> For example, I believe ML models looking at certain cancer scans have higher accuracy than experts looking at the same scans.

Technically true, but not practically. The truth of the statement comes from the fact that you can crank up the sensitivity on a lot of models to flag any remotely suspicious shapes, finding ALL known tumors in the testing/validation set, including those most humans wouldn’t find… at the expense of an absurd number of false positives. Pretty reasonable misunderstanding, because paper authors routinely write about “better than human” results to make their work seem more important than it is to a lay audience. I’ve met extremely few clinicians who are truly bullish on the prospects of CAD (computer-aided detection).
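The tradeoff described above is just a threshold choice. A toy sketch with invented scores (not a real model's output): lowering the decision threshold catches every tumor, at the cost of flooding the reader with false positives.

```python
# Toy illustration of the sensitivity / false-positive tradeoff in a
# detection model. All scores and labels are invented for illustration.

def confusion(scores, labels, threshold):
    """Return (true positives, false positives, false negatives) at a threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return tp, fp, fn

# 5 true tumors with varying scores; 95 benign scans, some scoring mid-range.
scores = [0.95, 0.7, 0.5, 0.3, 0.2] + [0.4] * 10 + [0.05] * 85
labels = [1] * 5 + [0] * 95

for t in (0.6, 0.1):
    tp, fp, fn = confusion(scores, labels, t)
    sensitivity = tp / (tp + fn)
    print(f"threshold={t}: sensitivity={sensitivity:.2f}, false positives={fp}")
```

At the low threshold the model "finds all tumors" (100% sensitivity), which is what better-than-human headlines tend to report, while the pile of false positives that a radiologist must then rule out goes unmentioned.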

(I work in healthtech R&D; spent several years doing radiology research and prepping data for machine learning models in this vein.)

7

UnkleRinkus t1_j1n3903 wrote

You didn't mention the other side, which is false negatives. Who gets sued if the model misses a cancer? Which it inevitably will.

3

Subjective-Suspect t1_j1pod2z wrote

Cancer and other serious conditions get missed and misdiagnosed all the time. No person nor test is infallible. However, if you advocate properly for yourself, you’ll ask your doctor what other possible conditions you might have, and how they arrived at their diagnosis.

Most doctors routinely tell you all this stuff anyway, but if they don't, that's a red flag to me. If that conversation isn't happening, their explanation can't prompt you to provide clarity or other useful information you hadn't previously thought important.

1

poo2thegeek t1_j1mqw02 wrote

Very interesting, thanks for the information! Goes to show that scientific papers don't always mean usable results!

2

isleepinahammock t1_j1ndgzm wrote

I agree. It might be useful as an aid, but not as a final diagnosis. For example, maybe machine learning is able to discover some hitherto-unknown correlation between two seemingly unrelated conditions. That could be used as an aid in diagnosis and treatment.

For example, imagine a machine learning algorithm spat out a conclusion, "male patients of South Asian ancestry with a diagnosis of bipolar disorder have a 50% increased chance of later receiving a diagnosis of testicular cancer."

I chose these criteria off the top of my head, so they're meaningless. But bipolar disorder and testicular cancer are two diagnoses that have seemingly very little connection, and it would be even more counterintuitive if this only significantly affected South Asian men. So it's the kind of correlation that would be very unlikely to be found through any method other than big machine-learning studies. But biology is complicated, and sometimes very nonintuitive results do occur.

If this result was produced, and it was later confirmed by follow-up work, then it could be used as a diagnostic tool. Maybe South Asian men who have bipolar disorder need to be checked more often for testicular cancer. But you would be crazy to assume that just because a South Asian man is bipolar, he automatically must also have testicular cancer, or vice versa.

4

Hydrocoded t1_j1pdmpy wrote

Appriss is one of the most evil groups of people in the western world. They should all be jailed for life. What they do is no different than torture. They are sadistic.

There are millions of people who have chronic pain. We had advanced medications to treat their pain. Our lawmakers and companies like Appriss unilaterally decide it's better for millions of people to suffer in agony than to risk a single junkie getting a fix.

Words cannot describe how evil I believe them to be. There are many groups that do awful things in this country, but there are precious few who are so gleefully, self-righteously cruel. They don't just torture the sick, they torture the old. Their victims are our grandparents, our great aunts and uncles. They victimize our most desperate. They ensure that lives are cut short, as the stress of chronic pain leads to depression, cancer, obesity, and heart disease.

We have a treatment for pain, and these monsters want us to refuse it to those who need it.

6

james_d_rustles t1_j1p0zet wrote

That’s the scariest article I’ve read in a while. I actually saw my own “score” looking back. I’m prescribed meds for ADHD, and my doctor was telling me about how they have to follow some “new system” to prevent ODs. He showed me the computer screen, and it was in fact exactly like a credit score. Just some numbers and a few pie chart looking things that had my medical history.

Luckily, I guess my score was low, so I was allowed to continue being prescribed the medicine that I’ve been prescribed for years, but still horrible either way. I can’t even imagine what it feels like being a patient with a “high score” for reasons outside of your control.

5

scrample_egg t1_j1m56un wrote

This is no longer the way of the future; this is just now. Hope everyone has fun getting charged $500 for two aspirin pills to help with their tooth infection. Thanks, Bayer.

3

TurboTurtle- t1_j1p3tmg wrote

Why couldn’t they prescribe a different painkiller? Opioids are not the only one. And why terminate her from the hospital? Even if she was addicted, does that somehow make her medical emergency not matter?

1

Devil_May_Kare t1_j1pk2ul wrote

People think of drug addicts as subhuman. Doctors are people. Therefore, doctors think drug addicts are subhuman.

6

Devil_May_Kare t1_j1pjxdp wrote

I might grow breadseed poppies and extract raw opium sometime, so I can prove to doctors beyond a reasonable doubt that I'm not drug-seeking (if I have opiates at home and I'm at a doctor asking for help instead of at home getting high, obviously I'm not just there to ask for morphine). I mean, they shouldn't deny medical treatment to people they think are drug seeking, but as a stopgap measure this idea appeals to me.

My experience of telling a doctor that I had unauthorized prescription medications has been good so far (it was estradiol and bicalutamide and she was more or less chill about it). So I'm inclined to think similar strategies will go well in the future.

1