Medical image segmentation plays a crucial role in medical imaging. The process divides an image into meaningful regions to provide a comprehensive view and support precise diagnostics. A range of methods is employed, from traditional approaches to advanced deep learning techniques, and both contribute significantly to improving healthcare. With continuous technological advancement, the need for accurate segmentation keeps growing. While traditional methods such as thresholding and region growing are effective, they may require human intervention for complex cases. Deep learning techniques, particularly Convolutional Neural Networks (CNNs), have substantially improved the process by learning intricate details and segmenting images accurately. When these methods are combined, healthcare professionals can achieve high-quality, precise results. Furthermore, advances in hardware now make real-time segmentation possible. Overall, the segmentation of medical images is central to the progress of AI-assisted healthcare, together with recent directions such as explainable AI and multimodal learning. This review provides an in-depth analysis of current methods, their applications across various fields, and emerging advancements that may drive future improvements and innovations.
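The traditional methods named above, thresholding and region growing, can be sketched in a few lines. The following is a minimal illustration on a synthetic image, not any specific clinical pipeline; the tolerance and threshold values are arbitrary for demonstration.

```python
import numpy as np

def threshold_segment(image, t):
    """Binary segmentation: label pixels above intensity t as foreground."""
    return (image > t).astype(np.uint8)

def region_grow(image, seed, tol):
    """Grow a region from `seed`, adding 4-connected neighbours whose
    intensity is within `tol` of the seed intensity."""
    h, w = image.shape
    seed_val = int(image[seed])
    mask = np.zeros((h, w), dtype=np.uint8)
    stack = [seed]
    while stack:
        r, c = stack.pop()
        if mask[r, c] or abs(int(image[r, c]) - seed_val) > tol:
            continue
        mask[r, c] = 1
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                stack.append((nr, nc))
    return mask

# Tiny synthetic "scan": a bright 2x2 structure on a dark background.
img = np.zeros((4, 4), dtype=np.uint8)
img[1:3, 1:3] = 200
print(threshold_segment(img, 100).sum())   # foreground pixel count
print(region_grow(img, (1, 1), 10).sum())  # same region, grown from a seed
```

Both methods find the same four bright pixels here; the review's point is that such rules break down on low-contrast or heterogeneous cases, where learned models take over.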
To prevent vision loss in a large portion of the working population, identifying Diabetic Retinopathy (DR) during routine screening for diabetes is crucial. Early detection of the disease and measurement of its progression are essential to avert future blindness. DR is diagnosed through medical image analysis. Following the success of Deep Learning (DL) in other real-world applications, it is considered a vital tool for upcoming health-sector applications, delivering accurate results for medical image analysis. This review provides a comprehensive survey of state-of-the-art DL models for DR detection and grading using retinal fundus photography. It examines and summarizes 81 relevant publications indexed in IEEE Xplore, Web of Science, PubMed, and Scopus between 2018 and 2023, covering binary and multiclass CNN classification models as well as the main preprocessing techniques. According to the findings of this review, transfer learning has proven to be an excellent technique for addressing the limited availability of DR data. CNN models with tens or hundreds of layers are the most frequently used frameworks for DR classification, and APTOS 2019 and EyePACS are the most widely used datasets. Although DL has reached or surpassed human-level DR classification accuracy, more work remains before it is integrated into real-world clinical practice.
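The transfer-learning idea the review highlights, reusing a frozen pretrained feature extractor and training only a small classifier head on scarce DR data, can be sketched abstractly. Here a fixed random projection stands in for the frozen backbone (in practice an ImageNet-pretrained CNN such as ResNet), and the data are synthetic; everything below is illustrative, not a reproduction of any surveyed model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained backbone: a fixed projection mapping
# flattened fundus images to a 16-dim feature vector. Its weights are
# never updated during training.
W_backbone = rng.normal(size=(64, 16))

def extract_features(images):
    return np.maximum(images @ W_backbone, 0.0)  # frozen features + ReLU

def train_head(features, labels, lr=0.1, epochs=200):
    """Train only the classification head (logistic regression) on the
    small labelled dataset; the backbone stays frozen."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))
        grad = p - labels
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# Toy "dataset": two intensity distributions standing in for no-DR / DR.
X = np.vstack([rng.normal(0.0, 1.0, (20, 64)), rng.normal(1.0, 1.0, (20, 64))])
y = np.array([0] * 20 + [1] * 20)
F = extract_features(X)
w, b = train_head(F, y)
preds = (F @ w + b > 0).astype(int)
print((preds == y).mean())  # training accuracy of the head alone
```

The head has only 17 trainable parameters, which is why this scheme works with the limited DR datasets the review describes.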
Lung cancer is one of the most common causes of mortality worldwide, and early diagnosis is crucial for a patient's survival and recovery. Automated segmentation of lung lesions in chest CT has become a pre-eminent research focus, particularly with the development of hybrid methods that combine traditional image processing with advanced deep learning methods such as CNNs. These hybrid approaches aim to mitigate the limitations of individual methods by combining their strengths, improving segmentation efficiency, precision, and clinical utility. This review comprehensively analyzes hybrid techniques such as deep learning augmented by rule-based systems, multi-scale feature extraction, and ensemble learning, and examines their clinical impact, particularly in improving diagnostic accuracy and optimizing treatment procedures. Despite their promise, these approaches still face significant challenges, including computational complexity, data requirements, and the need for explainable AI (XAI). Future advancements in lung lesion segmentation will focus on refining these models for faster processing, improved accuracy, and integration with diagnostic tools while preserving transparency and ethical considerations.
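Of the hybrid techniques listed, ensemble learning is the simplest to illustrate: fuse the binary lesion masks of several models by majority vote and score the result with the Dice coefficient, the standard overlap metric in lesion segmentation. The masks below are tiny synthetic examples, not outputs of any model from the review.

```python
import numpy as np

def ensemble_masks(masks, threshold=0.5):
    """Majority-vote fusion of binary lesion masks from several models:
    a pixel is foreground if at least `threshold` of the models agree."""
    masks = np.stack(masks).astype(float)
    return (masks.mean(axis=0) >= threshold).astype(np.uint8)

def dice(pred, truth):
    """Dice coefficient: 2|A∩B| / (|A|+|B|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

truth = np.zeros((8, 8), dtype=np.uint8)
truth[2:6, 2:6] = 1
# Three imperfect model outputs, each wrong in a different place.
m1 = truth.copy(); m1[2, 2] = 0  # misses one lesion pixel
m2 = truth.copy(); m2[0, 0] = 1  # false positive in background
m3 = truth.copy(); m3[5, 5] = 0  # misses another pixel
fused = ensemble_masks([m1, m2, m3])
print(dice(fused, truth))
```

Because each model errs on a different pixel, the vote cancels the individual mistakes, which is exactly the complementarity argument the review makes for hybrid ensembles.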
Three-dimensional models (3DM) have recently gained popularity in the prosthetics field, especially for creating residual limb shapes from medical images collected in the Digital Imaging and Communications in Medicine (DICOM) format from magnetic resonance imaging (MRI) after accurate image processing. In this study, a three-dimensional model of the residual limb of a patient with a transtibial amputation was produced by integrating artificial intelligence with a computer vision approach, demonstrating the benefits of AI segmentation tools and algorithms for generating a more accurate three-dimensional model before prosthetic socket design, or for comparing the 3D model generated from MRI with one generated by another technique. The subject was a 23-year-old male patient with a left-leg amputation, wearing a prosthetic socket liner, weighing 62 kg, 168 cm tall, and with a high activity level. The patient was scanned on a GE Medical Systems 1.5 Tesla Signa Excite. MRI images in DICOM format were read to retrieve essential metadata such as pixel spacing and slice thickness. These images were processed with a specific algorithm to obtain a model reflecting the real shape of the residual limb, and the 3D model was extracted using AI segmentation tools. The high-resolution 3D model obtained demonstrates the potential of artificial intelligence with deep learning to reconstruct 3D models automatically and repeatably, confirming that AI plays an instrumental role in medical image analysis, particularly in organ and tissue classification and segmentation.
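The role of the DICOM metadata in this pipeline, pixel spacing and slice thickness, is that it anchors the voxel grid to physical dimensions. A minimal sketch, with illustrative spacing values and a synthetic slice stack rather than the study's actual MRI series (which would be read with a library such as pydicom):

```python
import numpy as np

# Illustrative metadata, as would be read from the DICOM headers
# (PixelSpacing and SliceThickness); values here are assumptions.
pixel_spacing = (0.5, 0.5)  # mm per pixel (row, column)
slice_thickness = 1.0       # mm between slices

def stack_slices(slices):
    """Stack 2-D MRI slices into a 3-D volume in acquisition order."""
    return np.stack(slices, axis=0)

def limb_volume_mm3(volume, threshold):
    """Segment the limb by intensity threshold and convert the voxel
    count to physical volume using the DICOM spacing metadata."""
    mask = volume > threshold
    voxel_mm3 = pixel_spacing[0] * pixel_spacing[1] * slice_thickness
    return mask.sum() * voxel_mm3

# Synthetic stand-in for the MRI series: 10 slices of 16x16 pixels
# containing a bright 4x4 limb cross-section.
slices = []
for _ in range(10):
    s = np.zeros((16, 16), dtype=np.int16)
    s[6:10, 6:10] = 500
    slices.append(s)
vol = stack_slices(slices)
print(limb_volume_mm3(vol, 100))  # 160 voxels x 0.25 mm^3
```

The same spacing metadata is what lets the segmented mask be exported as a correctly scaled 3D surface for socket design or for comparison against a scan from another technique.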
Deep learning models can help detect Coronavirus Disease 2019 (COVID-19), a critical task these days for making treatment decisions based on diagnostic results. Advances in artificial intelligence, machine learning, deep learning, and medical imaging techniques have demonstrated impressive performance, especially in detection, classification, and segmentation problems. These innovations enable physicians to examine the human body with high accuracy, increasing diagnostic accuracy and allowing non-surgical examination of patients. Many imaging modalities are used to detect COVID-19; we use computed tomography (CT) because it is the most commonly used. For detection, we employ a deep learning model based on a convolutional neural network (CNN). The dataset used consists of 544 CT slices, which is not sufficient for high accuracy but is acceptable given the few datasets currently available. The proposed model achieves validation and test accuracies of 84.4% and 90.09%, respectively, and has been compared with other models to demonstrate its superiority.
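The CNN building blocks underlying such a detector, convolution, ReLU, and pooling, can be sketched in plain numpy. This is a generic illustration of one convolutional stage on a toy CT-like patch, not the architecture of the proposed model, whose details the abstract does not give.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, the core CNN operation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling to downsample the feature map."""
    h, w = x.shape
    return x[:h - h % size, :w - w % size].reshape(
        h // size, size, w // size, size).max(axis=(1, 3))

# Toy CT patch with a bright vertical edge; a learned kernel would
# respond to lesion-like patterns the same way this edge filter
# responds to the edge.
ct = np.zeros((6, 6))
ct[:, 3:] = 1.0
kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])
fmap = np.maximum(conv2d(ct, kernel), 0.0)  # ReLU activation
pooled = max_pool(fmap)
print(pooled.shape)  # downsampled feature map
```

Stacking many such stages, with kernels learned from the 544 labelled slices rather than hand-set, is what lets a CNN map a full CT slice to a COVID-19 / non-COVID-19 decision.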