Submitted by adityyya13 t3_11k4qzs in MachineLearning

With increasing research and technological innovation in the machine learning and deep learning domain, how will healthcare be impacted?

  1. If adequate, high-quality datasets are available for the symptoms, signs, and management of common, well-studied diseases like tuberculosis and diabetes, along with their complications, what's stopping AI from replacing, or at least relieving, physicians in primary healthcare setups? Statistics about these diseases in the context of social and age demographics could be fed in, and treatment would follow established guidelines.

  2. How hard is it to process non-radiological data (heart murmurs, visible body anomalies like ulcers, grading of pain, dyspnea, fatigue) into well-defined parameters that can be fed into a machine?

  3. Since the software can be centralized, shouldn't deployment of various AI modalities be widespread, given that only input devices would be required for investigations and the output would be generated after cloud processing?

  4. How far are we from solving data aggregation problems like noise reduction, input heterogeneity, and labeling bias?

  5. If the regulatory and "human touch" aspects of medicine are hypothetically ignored, is it possible to replace physicians with AI systems and mid-levels in the next few decades?

Comments

pitcher_slayer7 t1_jb5vong wrote

One of the largest problems in healthcare so far is also the most important thing for AI/ML: data. Yes, electronic health records now exist, which is a huge step up from the paper charting of the not-so-distant past. However, the large EHR companies have multiple inputs for data that are not easily accessible and are often in multiple different forms. Frequently, the necessary and clinically relevant information is not in a check-box or numerical format but in free text, with myriad ways of describing a feature that may be hard to quantify. Additionally, the purpose behind charting, or in AI/ML terms "collecting and transcribing data," is often billing, which further complicates the problem of having good data.

My $0.02 is that ML methods like NLP will become more useful for chart-digging purposes, collecting and organizing data in meaningful ways. Much of a physician's time is currently spent charting, so the most likely applications of AI/ML will be in automating tedious tasks that physicians do not like doing in the first place. What will happen is that physicians who do not incorporate AI/ML will be replaced by physicians who do use AI/ML to augment their clinical decision-making. Medicine, in my opinion, is a field in which physicians will continue to be people for the long-term future.
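
To make the chart-digging point concrete, here is a toy sketch (my own illustration, not any production system) of the kind of free-text extraction such NLP has to do; the note text and regex patterns are made up:

```python
import re

# Hypothetical free-text note; real EHR text is far messier.
note = ("Pt c/o chest pain x3 days. BP 142/90, HR 88. "
        "Denies dyspnea. Hx of type 2 diabetes, on metformin 500 mg BID.")

def extract_blood_pressure(text):
    """Pull a systolic/diastolic pair like '142/90' out of free text."""
    m = re.search(r"\bBP\s*(\d{2,3})/(\d{2,3})\b", text)
    return (int(m.group(1)), int(m.group(2))) if m else None

def symptom_denied(text, symptom):
    """Crude negation check: was the symptom explicitly denied?"""
    return bool(re.search(rf"\b(?:denies|no)\s+{symptom}\b", text, re.I))

print(extract_blood_pressure(note))     # (142, 90)
print(symptom_denied(note, "dyspnea"))  # True
```

Even this toy version shows the problem: every clinician abbreviates differently, so each pattern only covers a fraction of the ways the same fact gets written down.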

pitcher_slayer7 t1_jb5wxsv wrote

I will also add that I do not see mid-levels + AI/ML as the replacement for physicians. It will be physicians + AI/ML, with arguably less utility for mid-levels in the future, since the mid-level scope of practice is likely to be affected by AI/ML before a physician's scope of practice is.

DataDrivenOrgasm t1_jb6fe6v wrote

I develop ML for medical devices. The integrated AI systems you are imagining are unlikely to be adopted for the foreseeable future.

First, the software in healthcare cannot be centralized. Every point of care has a LIMS (Laboratory Information Management System) for digitally managing lab results, and installing a modern diagnostic instrument involves communicating with the LIMS. The problem is that virtually every clinic's LIMS is a bespoke creation of its IT staff; almost no standards exist for the form of the data in these systems. Performing a LIMS integration at one site does not make the process any easier at the next site, so an integrated AI solution would need to be tailored to each clinic. And very few sites generate enough data on their own to train a modern ML solution.
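
To give a feel for why each integration is bespoke, here is a toy sketch with two made-up site formats for the same glucose result; every field name here is hypothetical:

```python
# Two made-up LIMS exports of the same fasting glucose result.
site_a = {"test": "GLU", "val": "105", "u": "mg/dL"}
site_b = {"analyte_name": "Glucose, fasting", "result_mmol_l": 5.8}

def normalize(record):
    """Map each bespoke site format onto one schema (glucose in mg/dL).
    Every new LIMS integration means writing another branch like these."""
    if record.get("test") == "GLU":
        return {"analyte": "glucose", "value_mg_dl": float(record["val"])}
    if record.get("analyte_name", "").lower().startswith("glucose"):
        # 1 mmol/L of glucose = 18.016 mg/dL
        return {"analyte": "glucose",
                "value_mg_dl": round(record["result_mmol_l"] * 18.016, 1)}
    raise ValueError("unrecognized site format")

print(normalize(site_a))  # {'analyte': 'glucose', 'value_mg_dl': 105.0}
print(normalize(site_b))  # {'analyte': 'glucose', 'value_mg_dl': 104.5}
```

Multiply those branches by hundreds of analytes and dozens of instrument vendors per site and you get a sense of the integration cost.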

Similarly, the number and types of diagnostic tests performed are very different between sites. Further, there are often dozens of commercial options for any given test. So two identical patients at different sites will have different lab tests performed, and those tests may have slightly different results/coverage based on the technology adopted by that lab.

While this may seem messy, it actually makes sense for the field. Healthcare needs vary widely across geographic contexts. Hospital-acquired infections tend to be unique to specific sites, and common injuries and illnesses also vary between urban and rural environments and with local weather patterns and ecology.

For some types of healthcare where geography is not so important, specialized centers will meet much of the demand. There will be trauma centers and cancer centers that treat similar ailments for a large geographic area. Those centers will be the best places to develop integrated AI solutions, but those solutions will only work for other similar large centers.

Additionally, the regulatory and IP environment in healthcare is not conducive to integrated solutions. Diagnostic IP is fragmented across thousands of companies, and none of them will voluntarily cooperate to develop standards for integration. Some large companies are marketing integrated solutions, but these function as wholesale replacements for specific lab workflows. Very few clinics can afford to replace their existing workflow all at once, and even these integrated workflows require extensive customization tailored to each site's needs. In the US, an integrated solution must go through the same regulatory process as the standalone tests, even if those tests are already FDA-approved. This effectively doubles the cost of development.

COPAN is one company that has done great work on AI-assisted workflows through its integrated microbiology solutions. Despite this, it has fewer than 1,000 sites deploying its solutions, because it relies on older methods and tests for integration. The newer, faster technologies are owned by other companies, so integration requires a partnership.

Currently, AI in diagnostics is limited to what one company can accomplish, and even then the algorithms must be frozen. Updating a model based on new data requires another round of clinical trials for FDA approval. Data acquired at clinical sites cannot be included in these updates due to privacy laws. Even user telemetry data is nearly impossible to extract from a field instrument due to IT security practices.
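
As a rough illustration of what "frozen" means in engineering terms (a generic pattern of my own, not any vendor's or the FDA's actual process), the deployed weights can be pinned to the digest of the approved artifact:

```python
import hashlib

# Digest of the weights file that went through the clinical trial
# (hypothetical value, fixed at approval time and never changed).
APPROVED_SHA256 = "hypothetical-approved-digest"

def load_frozen_model(weights_path: str) -> str:
    """Refuse to run inference unless the weights match the cleared build."""
    with open(weights_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != APPROVED_SHA256:
        raise RuntimeError("weights differ from the approved version; "
                           "a new clearance would be required")
    return weights_path
```

Any retraining produces a different digest, which is exactly why every model update restarts the regulatory clock.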

frequenttimetraveler t1_jb60ifw wrote

I mean, you left the biggest blocker for last. It's amazing that in 2023 a visit to the doctor still involves measuring blood pressure and 'listening' to your lungs. My guess is the first mass-market medical devices will be pirated from some awkward place because regulators won't approve them for sale. Isn't that the same reason the iPhone can't even measure SpO2?

And then you have the "AI safety" mob, which will block life-saving devices because they are biased toward the blood samples of rich-country dwellers.

Considering the general lack of progress in how physicians work over the past decades (vs. the progress in drugs and diagnostic devices), it seems these blockers will linger for a while.

Also, consider COVID. Despite billions and billions of cases, relatively few studies have emerged that use the same procedures for measuring indicators, because doctors tend to stick to old, incompatible methods despite the availability of more modern alternatives. Or take long COVID: despite billions of cases as well, it is relatively understudied, because records of cases were not kept, it wasn't even recognized as a condition for a long time, and too many MDs rely on their "hunch".

In short, the medical profession has not embraced AI, which is a prerequisite for all of this.

enn_nafnlaus t1_jb7sxxi wrote

I can say this: my mother struggled for many, many years trying to figure out what was wrong with her and causing her weird, debilitating symptoms. She finally, at long last, got a diagnosis that her doctors are pretty confident in: advanced Sjögren's.

Out of curiosity, I punched her symptoms into ChatGPT, and, without access to any test results, Sjögren's was its #2 guess; it also suggested the diagnostic tests she had done, which had shown it was Sjögren's. Sjögren's actually isn't super-rare (about a percent or so of the population has it), but it is usually much milder, and very underdiagnosed.

I think AI tools are seriously underappreciated with respect to proposing new lines of investigation on hard-to-crack cases.
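
For anyone curious what that looks like programmatically, here is a rough sketch using the OpenAI Python SDK; the model name and symptom list are placeholders I made up, and obviously this is illustration, not medical advice:

```python
from openai import OpenAI  # assumes the openai package and an API key

client = OpenAI()
symptoms = "dry eyes, dry mouth, joint pain, chronic fatigue"  # made up

# Ask for a ranked differential plus confirmatory tests, as in the anecdote.
resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model
    messages=[{
        "role": "user",
        "content": f"Given these symptoms: {symptoms}, list the most likely "
                   f"diagnoses and the tests that would confirm each.",
    }],
)
print(resp.choices[0].message.content)
```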

friend_of_kalman t1_jb9nfzj wrote

I'm working with a small group at a big local university hospital. We have a huge dataset of patient data from the neurological ICU and are currently applying AI to risk detection. For example, we are doing time-series forecasting on all sorts of medical indicators (vital signs, blood gas analysis, etc.). Adoption is slow, though, and most hospitals don't have the proper infrastructure in place for this.
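
For a flavor of what that forecasting looks like in miniature, here is a toy autoregressive model on synthetic data (my own sketch, not our actual pipeline):

```python
import numpy as np

# Synthetic stand-in for one vital sign (say, heart rate in bpm).
rng = np.random.default_rng(0)
hr = 80 + 5 * np.sin(np.arange(200) / 10) + rng.normal(0, 1, 200)

def fit_ar(series, lags=5):
    """Least-squares autoregression: predict x_t from the previous values."""
    X = np.stack([series[i:i + lags] for i in range(len(series) - lags)])
    X = np.hstack([X, np.ones((len(X), 1))])  # bias column
    w, *_ = np.linalg.lstsq(X, series[lags:], rcond=None)
    return w

w = fit_ar(hr)
forecast = np.append(hr[-5:], 1.0) @ w
print(f"next-step forecast: {forecast:.1f} bpm")  # alert if outside a safe band
```

The real work is everything around a model like this: multivariate inputs, irregular sampling, missing values, and the alerting thresholds clinicians will actually tolerate.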

Novel-Ant-7160 t1_jb6kiv1 wrote

There's no way AI can replace a human, due to liability issues. If a diagnosis is incorrect, or a patient receives the wrong advice, who would be liable? The tech company that built the AI?

pancomputationalist t1_jb6uc96 wrote

That's a solvable problem. Same discussion as with autopilots in cars.

With the human staff in hospitals getting thinner by the day, and assuming AI keeps growing in popularity, some people would rather trust an inexpensive machine than wait ages to talk to a human doctor who might not even be smarter than the machine.
