
A new AI model for analyzing medical scans
SLIViT achieves accuracy comparable to that of clinical specialists, improving diagnostic efficiency
Isabella V | 3 October 2024

 

UCLA researchers have developed an innovative AI model, SLIViT, that can rapidly analyze complex medical scans, achieving levels of accuracy similar to those of clinical experts. This system could represent a significant advance in medical imaging, making diagnosis more efficient.

Key Points:

  • SLIViT analyzes several different types of medical scans, supporting diagnosis across multiple specialties. 
  • It works effectively even with relatively small training datasets. 
  • It can reduce analysis time dramatically compared with conventional manual review. 
  • Future research will focus on addressing potential biases to ensure fairness in diagnoses.

The recent development of an AI framework by researchers at the University of California, Los Angeles (UCLA) represents a significant advance in medical imaging. The model, called SLIViT (SLice Integration by Vision Transformer), is designed to analyze complex volumetric medical scans, such as MRIs and other 3D images, quickly and accurately. The research, published in the journal Nature Biomedical Engineering, shows that SLIViT can reach levels of accuracy comparable to those of clinical specialists.

Unlike most existing models, which typically focus on a single type of scan or a specific disease, SLIViT was tested on a variety of data, including 3D retinal scans, ultrasound videos, and CT scans of lung nodules, which suggests the model could be applied in broader clinical settings in the future.

Neural networks like SLIViT learn from large, expert-annotated datasets, and analyzing 3D scans is especially demanding: the model must integrate information across depth, length, and width while picking out subtle disease markers. Co-author Oren Avram, a postdoctoral researcher in computational medicine at UCLA, noted that compiling annotated volumetric datasets is difficult with conventional resources and that many existing models are limited to a single imaging type or specific disease conditions.

SLIViT stands out by integrating two AI components with a learning approach that allows it to assess disease risk across multiple volumetric modalities. The results show that, although it is a general-purpose model, SLIViT outperforms more specialized models while reducing analysis time by a factor of up to five thousand. Its flexibility also allows it to work effectively with messy real-world clinical datasets.

Avram highlighted the system's value as a baseline for future developments in predictive medicine, noting that automated annotation can help reduce the cost and time required for data acquisition. Co-senior author SriniVas R. Sadda expressed enthusiasm for SLIViT's ability to operate in real-world conditions, demonstrating that it performs well even with a limited number of training samples.

The research also raises concerns about potential bias in AI tools, a key issue that can affect diagnostic accuracy and widen health disparities. While SLIViT features a fast annotation process, the research team is committed to mitigating any systematic biases. Future directions include expanding the research to integrate additional treatment modalities and deepening SLIViT's predictive capabilities, with the goal of enabling earlier diagnoses and improving treatment regimens.
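The article does not include code, but the general pattern it describes, embedding each 2D slice of a volumetric scan and then integrating the slice embeddings with a transformer, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example of that idea; the class name, the choice of torchvision's convnext_tiny as the 2D backbone, the dimensions, and the pooling scheme are illustrative assumptions, not the published SLIViT implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import convnext_tiny


class SliceIntegrationModel(nn.Module):
    """Toy slice-integration network (not the official SLIViT code):
    a 2D backbone embeds each slice, and a small transformer encoder
    fuses the slice embeddings into one volume-level risk score."""

    def __init__(self, num_outputs: int = 1, embed_dim: int = 256):
        super().__init__()
        backbone = convnext_tiny(weights=None)  # 2D CNN backbone, pre-trainable on 2D images
        backbone.classifier = nn.Flatten(1)     # drop the classifier, keep pooled 768-dim features
        self.backbone = backbone
        self.proj = nn.Linear(768, embed_dim)
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(embed_dim, num_outputs)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (batch, slices, 3, H, W); grayscale slices can be repeated to 3 channels
        b, s, c, h, w = volume.shape
        feats = self.backbone(volume.reshape(b * s, c, h, w))  # (b*s, 768)
        tokens = self.proj(feats).reshape(b, s, -1)            # (b, s, embed_dim)
        fused = self.encoder(tokens).mean(dim=1)               # integrate across slices
        return self.head(fused)                                # volume-level prediction


# Usage sketch: a batch of two 32-slice volumes at 224x224 resolution.
model = SliceIntegrationModel()
scores = model(torch.randn(2, 32, 3, 224, 224))
print(scores.shape)  # torch.Size([2, 1])
```

One appeal of this kind of design is that the 2D backbone can be pre-trained on abundant 2D images and largely reused, so only the compact transformer and output head need to be fitted to the (often small) labeled 3D dataset, which is consistent with the article's point that SLIViT performs well with limited training samples.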

This approach represents an interesting opportunity for the evolution of imaging technologies in clinical practice.