How AI is Transforming Medical Imaging for Better Patient Care
It once seemed incredible that doctors could look inside the human body without making a single incision. But medical imaging in radiology has come a long way, and the latest AI-driven approaches are going much further: harnessing the vast computational power of AI and machine learning, they analyze body scans for distinctions that even a trained human eye can miss.
Modern medical imaging involves sophisticated methods of evaluating every data point to separate health from sickness and signal from noise. If the first few decades of radiology were spent sharpening the resolution of the images themselves, the coming decades will be devoted to interpreting that data to make sure nothing is missed.
Imaging is also evolving from its original purpose of diagnosing disease into an essential component of treatment, particularly for cancer. Doctors increasingly rely on imaging to monitor tumors and the spread of cancer cells, and to determine whether treatments are effective. That new role will change the kinds of therapies patients receive, and the far richer feedback on how well those therapies are working will ultimately help doctors make better decisions about the treatments their patients need.
"Functional imaging will become a part of care within the next five years," predicts Dr. Basak Dogan, an associate professor of radiology at the University of Texas Southwestern Medical Center. "We believe that the real clinical questions cannot be answered by the present conventional imaging. But patients who desire greater accuracy in their treatment, so they may make more informed decisions, will find that functional approaches are the solution."
Identifying issues earlier
Whether the pictures come from X-rays, CT scans, MRIs, or ultrasounds, the first challenge in maximizing their value is automating their reading as much as possible to free up radiologists' valuable time. Thanks to the enormous processing power available today, computers can be taught to distinguish abnormal findings from normal ones, and computer-aided methods have demonstrated their value in this field. Radiologists feed programs their findings on tens of thousands of normal and abnormal images, teaching the computer to recognize when an image contains something outside normal parameters. Software specialists and radiologists have been collaborating on these algorithms for years, and the more images a system has to analyze and learn from, the better it becomes at fine-tuning those distinctions.
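The workflow described above — learn from labeled examples, then flag anything abnormal for a human to review — can be sketched in miniature. This toy uses synthetic feature vectors standing in for image data and a plain logistic-regression classifier; it is an illustration of the idea only, not any vendor's actual software.

```python
# Illustrative toy (synthetic data, not real clinical software): train a
# classifier on labeled "normal" vs "abnormal" scan features, then flag
# suspicious scans for a radiologist's final review.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for image-derived features (e.g. texture, density).
n = 1000
normal = rng.normal(loc=0.0, scale=1.0, size=(n, 4))
abnormal = rng.normal(loc=1.5, scale=1.0, size=(n, 4))
X = np.vstack([normal, abnormal])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = abnormal

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain logistic regression fitted by gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)

# The model only triages: scans it scores as suspicious go to a human,
# who remains the final arbiter of the finding.
flagged_for_review = np.where(sigmoid(X @ w + b) > 0.5)[0]
print(f"training accuracy: {accuracy:.2f}")
```

On this cleanly separated synthetic data the toy easily clears the 80%–90% accuracy range the article mentions; real scan data is far messier, which is why training requires tens of thousands of expert-labeled images.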
For the U.S. Food and Drug Administration (FDA) to approve an imaging algorithm, it must be accurate 80% to 90% of the time. The FDA has approved about 420 of them so far for various conditions, mostly cancer. Although the agency still stipulates that a person must be the final arbiter of what a machine-learning algorithm finds, such methods are essential for flagging images that may contain questionable findings, so that doctors can evaluate them and, ultimately, give patients faster answers.
At Mass General Brigham, physicians use about 50 such algorithms to assist in patient care, ranging from spotting aneurysms and cancers to detecting embolisms and signs of stroke in patients who come to the emergency room with common symptoms of those conditions. The FDA has authorized about half of them; the rest are in clinical trials.
"Early discovery is the aim," says Dr. Keith Dreyer, chief data science officer and vice chairman of radiology at Mass General Brigham. It can sometimes take humans days to make an accurate diagnosis, while computers can work nonstop and single out patients who require immediate care. "That patient will receive treatment a lot faster if we can do that with computers."
Monitoring patients more closely
Computer-assisted triage is the first step in bringing AI-based help to medicine, but machine learning is also developing into a powerful way to monitor patients and track even the smallest changes in their illnesses. This matters especially in cancer, where determining whether a patient's tumor is growing, shrinking, or staying the same is a time-consuming effort. "We struggle to grasp what is occurring to the tumor as patients receive chemotherapy," says Dogan. Unfortunately, unless some shrinkage begins halfway through treatment, which can take months, "our usual imaging tools can't identify any change."
In some circumstances, imaging can help by detecting changes in tumors that are unrelated to their size or architecture. "Most of the alterations in a tumor are not nearly at the level of cell death in the very early phases of chemotherapy," adds Dogan. Instead, the earliest changes involve how the body's immune cells and the cancer cells interact.
Additionally, cancers frequently don't shrink in a predictable way from the outside in. Instead, some cancer cells within a tumor may die off while others keep growing, leaving the mass pockmarked, like a moth-eaten garment. In other instances the tumor may even grow larger, because some of that cell death is linked to inflammation, even though the added size doesn't mean more cancer cell growth. Standard imaging currently cannot determine how much of a tumor is still alive and how much is dead: mammography and ultrasound, the two most commonly used breast cancer imaging methods, are designed to detect anatomical features.
At UT Southwestern, Dogan is evaluating two methods of using imaging to monitor functional changes in breast cancer patients. With funding from the National Institutes of Health, she is imaging breast cancer patients after one chemotherapy cycle, injecting microbubbles of gas to detect minute changes in pressure around the tumor. The bubbles tend to gather around growing tumors, and their pressure fluctuates because, compared to normal tissues, growing malignancies have more blood vessels to sustain their proliferation.
In a separate study, Dogan is examining optoacoustic imaging, which converts light signals into sound signals. Breast tissue is exposed to laser light, which causes cells to vibrate and produce sound waves that are recorded and analyzed. Since cancer cells typically require more oxygen than healthy cells to continue growing, the method is well suited to measuring oxygen levels in tumors, and changes in the sound waves can reveal which parts of a tumor are still growing and which are not. Dogan says that simply imaging a tumor this way may distinguish those likely to spread to the lymph nodes from those that are not — something clinicians currently cannot do. That insight into a tumor's behavior could help patients avoid needless lymph node operations, which are now standard of care.
The method might also detect early signs of cancer cells that have migrated to other regions of the body, without the need for invasive biopsies and well before conventional scans can see them. By concentrating on areas where cancer cells frequently spread, such as the bones, liver, and lungs, doctors may have a better chance of finding these fresh deposits of cancer cells.
Recognizing unnoticed anomalies
With enough data and images, Dreyer says, such algorithms could eventually identify aberrations for any condition that no human could find. His team is also aiming to create an algorithm that monitors specific biomarkers in the human body, both anatomically and functionally, to detect changes that would indicate a person is at risk of a stroke, fracture, heart attack, or other adverse event. It's still a few years away, but, according to Dreyer, "those are the kinds of things that are going to be transformational in healthcare for AI. That is the holy grail of imaging."
Sharing data across institutions
Getting there will require a ton of data from hundreds of thousands of patients, and the United States' compartmentalized healthcare system makes such data difficult to share. One option is federated learning, in which researchers create algorithms that are then trained on databases of anonymized patient data held at other institutions, so the data never leaves its home. Institutions don't have to put their security systems at risk, and privacy is preserved.
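The federated idea can be sketched in a few lines: each "institution" fits a model on its own local data, and only the model weights — never the patient records — are pooled. This is a toy, federated-averaging-style illustration on synthetic data, with all names and numbers invented for the example.

```python
# Toy sketch of federated averaging: three hypothetical institutions each
# train locally; only weights cross institutional boundaries, never data.
import numpy as np

rng = np.random.default_rng(1)

def make_site_data(n=300):
    # Each "institution" holds its own labeled scans locally.
    X0 = rng.normal(0.0, 1.0, size=(n, 3))   # normal cases
    X1 = rng.normal(1.5, 1.0, size=(n, 3))   # abnormal cases
    return np.vstack([X0, X1]), np.concatenate([np.zeros(n), np.ones(n)])

sites = [make_site_data() for _ in range(3)]

def local_update(w, b, X, y, lr=0.3, steps=20):
    # Gradient steps computed inside the institution's own firewall.
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w = w - lr * X.T @ (p - y) / len(y)
        b = b - lr * np.mean(p - y)
    return w, b

w, b = np.zeros(3), 0.0
for _ in range(30):  # communication rounds
    updates = [local_update(w, b, X, y) for X, y in sites]
    # The coordinator averages the returned weights — raw data never moves.
    w = np.mean([u[0] for u in updates], axis=0)
    b = np.mean([u[1] for u in updates])

# Evaluate the shared model on one site's local data.
Xs, ys = sites[0]
acc = np.mean(((1.0 / (1.0 + np.exp(-(Xs @ w + b)))) > 0.5) == ys)
```

The design choice is the whole point: the averaging step sees only parameters, so each hospital's records stay behind its own walls while every hospital benefits from a model trained on all of them.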
If more of those models are validated, whether through federated learning or another route, AI-based imaging may even begin to benefit patients at home. With self-testing and telehealth now commonplace in the wake of COVID-19, people may someday be able to access imaging data through portable ultrasounds delivered via a smartphone app, for example.
The significant revolution in healthcare from AI, according to Dreyer, will be that it provides many solutions to patients directly — or even before they become patients, so that they can stay healthy. Giving patients the knowledge and tools to make the best decisions about their health may well be the most effective way to maximize what imaging can do.
