Top-Perspective2560 t1_j0ez2z9 wrote

The first thing I’d say, and this is really important: You need to put a disclaimer on the site clearly stating that it’s not medical advice of any kind.

Explainability is always the sticking point in healthcare. This is pretty cool, but unless you can explicitly state why the model is giving that advice/output, it can never be truly useful, and, worse, it can open you up to all sorts of issues around accountability and liability. Tracing back to the original studies is a good thing, but it doesn't necessarily answer the question of why the model thinks that study should result in that advice.

Deep learning models in healthcare are typically relegated to decision support at best for the moment because of these issues. Even then, clinicians often ignore them for a variety of reasons.

The methodology for determining what advice to give is quite shaky too. There is usually a bit more to answering these kinds of questions: what effect sizes do the studies report, for example? What kind of studies are they (RCTs, cohort studies, case reports)?

Anyway, I hope that doesn't come across as overly critical and is constructive in some way. AI/ML for healthcare can be a bit of a minefield, but it's my area of research, so I just thought I'd pass on my thoughts.

Edit just to add: It would probably be really beneficial for you to talk to a clinician or even a med student about your project. In my experience, it's pretty much impossible to build effective tools or produce good, impactful research in this domain without input from actual clinicians.

tmblweeds OP t1_j0hgp9v wrote

Definitely not overly critical—the whole reason I posted was to get critiques! I think you're right that I can go further with explainability, and I also think that there are ways to use NER, etc., to give more interesting answers (e.g., a table of treatments sorted by effect size or adverse events). I'll keep working in this direction.
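
To make the table idea concrete, here's a rough sketch of what I'm imagining, assuming an upstream NER/relation-extraction pass has already pulled treatment names, reported effect sizes, and study types out of the retrieved abstracts (the `StudyFinding` fields and all of the values below are hypothetical placeholders):

```python
from dataclasses import dataclass

# Hypothetical records produced by an upstream NER/relation-extraction
# pass over retrieved abstracts; every field and value is a placeholder.
@dataclass
class StudyFinding:
    treatment: str
    effect_size: float  # e.g., a reported Cohen's d
    study_type: str     # e.g., "RCT", "cohort", "case report"
    source: str         # provenance back to the original study

findings = [
    StudyFinding("treatment_a", 0.62, "RCT", "doi:10.0000/example1"),
    StudyFinding("treatment_b", 0.35, "cohort", "doi:10.0000/example2"),
    StudyFinding("treatment_c", 0.48, "RCT", "doi:10.0000/example3"),
]

# Sort by effect size (descending) and render a table that keeps the
# source study attached to every row, so each claim stays traceable.
for f in sorted(findings, key=lambda f: f.effect_size, reverse=True):
    print(f"{f.treatment:<14} d={f.effect_size:.2f}  {f.study_type:<12} {f.source}")
```

Keeping the study type and source in every row would also go some way toward the effect-size and study-quality questions you raised.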

Top-Perspective2560 t1_j0ho0hv wrote

Sounds good! The table of treatments sounds like a good starting point - but further down the road you'll need to make sure it actually corresponds to the model's "answer" somehow, because the whole point of providing it is to validate the output. Quite a lot of these issues around explainability are deeply rooted in the models themselves - I'm sure you're familiar with the general state of play on that. However, there are definitely ways to take steps in the right direction.
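
As a very rough illustration of what I mean by "corresponds to the model's answer": at minimum you could check that every treatment the free-text answer mentions actually has a supporting row in the table. The `extract_treatments` helper here is just a naive stand-in for a real NER/entity-linking pass, and all the names are hypothetical:

```python
# Naive stand-in for a real NER/entity-linking pass: substring match
# of known treatment names against the model's free-text answer.
def extract_treatments(text, known_treatments):
    return {t for t in known_treatments if t.lower() in text.lower()}

# Treatments that actually appear as rows in the evidence table.
table_treatments = {"treatment_a", "treatment_b", "treatment_c"}

answer = "Based on the evidence, treatment_a and treatment_d look most promising."

mentioned = extract_treatments(answer, table_treatments | {"treatment_d"})
unsupported = mentioned - table_treatments
if unsupported:
    print(f"Warning: answer mentions treatments with no supporting evidence: {unsupported}")
```

It's crude, but it at least flags outputs the table can't back up - the deeper explainability problems still sit in the model itself, as I said.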

If you'd like any input at any point feel free to fire over a DM!

take_eacy t1_j0hhxrm wrote

Agreed! Clinicians are often the gatekeepers in clinical practice and have an understanding of the actual workflow.
