Kidney disease is a global health concern, often leading to impaired kidney function and kidney failure. Artificial intelligence and deep learning have been extensively researched, with numerous models and methods proposed to improve kidney disease diagnosis. This work aims to enhance the efficiency and accuracy of kidney disease diagnosis using deep learning, thereby contributing to effective healthcare delivery. Three models are proposed: CNN, CNN-XGBoost, and CNN-RF, which extract features and classify kidney ultrasound images into four categories: three abnormal cases (stones, hydronephrosis, and cysts) and one normal case. The models were tested on a real dataset of 1260 kidney ultrasound images (from 1000 patients) collected from the Lithotripsy Centre in Iraq. Because CNN models are often viewed as black boxes, owing to the difficulty of understanding their learned behaviour, Visualizing Intermediate Activations (VIA) was used to address this issue. The proposed framework was assessed in terms of precision, recall, F1-score, and accuracy. CNN-RF was the most accurate model, achieving 99.6% accuracy. This study can potentially assist radiologists in high-volume medical facilities and enhance the accuracy of kidney disease diagnosis.
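The CNN-RF pairing described in this abstract, a CNN used as a feature extractor whose embeddings feed a random-forest classifier, can be sketched as follows. This is a minimal sketch, not the authors' implementation: the 64-dimensional feature vectors and four-class labels below are synthetic stand-ins for the embeddings a trained CNN would produce from the ultrasound images, so the resulting accuracy is illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for CNN feature vectors: 1260 images x 64 features,
# four classes (stone, hydronephrosis, cyst, normal).
X = rng.normal(size=(1260, 64))
y = rng.integers(0, 4, size=1260)
# Shift one feature by the class index so the classes are separable
# and the sketch produces a meaningful fit.
X[:, 0] += y

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# The random forest replaces the CNN's softmax head as the classifier.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, rf.predict(X_te))
print(f"hold-out accuracy: {acc:.3f}")
```

In the full pipeline, `X` would instead be the activations of the CNN's penultimate layer for each ultrasound image.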
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that severely impacts cognitive functions such as memory, attention, and reasoning, ultimately affecting daily life. Early and accurate detection is crucial for timely intervention and management. Traditional diagnostic methods, including neuroimaging and cognitive assessments, can be expensive and time-consuming, necessitating more accessible and efficient alternatives. This study aims to develop an automated and efficient deep learning-based detection system that uses Electroencephalogram (EEG) signals to accurately classify AD patients and healthy individuals. A Convolutional Neural Network (CNN) model was designed to extract meaningful features from preprocessed EEG data. The architecture consists of convolutional layers with max pooling, dropout regularization, and fully connected layers to improve classification accuracy. The model was trained and evaluated on a comprehensive EEG dataset, using key performance metrics such as accuracy, recall, precision, and F1-score. The proposed CNN model achieved a high classification accuracy of 94.56%, a low loss of 0.2162, and an AUC value of 0.93828, demonstrating superior classification capability. The results indicate that the model effectively distinguishes between AD patients and healthy individuals, outperforming several state-of-the-art approaches. The findings highlight the potential of deep learning-based EEG analysis for AD detection, providing an accessible and cost-effective tool for early diagnosis. The high accuracy of the proposed CNN model suggests that it can assist medical professionals in making well-informed decisions, ultimately improving patient outcomes.
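The evaluation metrics this abstract reports (accuracy, precision, recall, F1-score, AUC) can be computed with scikit-learn as below. The labels and scores are illustrative placeholders, not the study's EEG results; 1 denotes AD, 0 denotes healthy, and 0.5 is an assumed decision threshold.

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, roc_auc_score)

# Illustrative binary ground truth (1 = AD, 0 = healthy) and model scores.
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.9, 0.2, 0.8, 0.45, 0.3, 0.6, 0.7, 0.1]
# Hard predictions from an assumed 0.5 threshold.
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
# AUC is threshold-free: it uses the raw scores, not the hard predictions.
print("AUC      :", roc_auc_score(y_true, y_score))
```

Note that AUC is computed from the continuous scores, which is why a model can have identical precision/recall at one threshold yet differ in AUC.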
To prevent vision loss in a large portion of the working population, identifying Diabetic Retinopathy (DR) during broad screening for diabetes is crucial. Early detection of the disease and monitoring of its progression are essential to prevent future blindness. DR is diagnosed through medical image analysis. Following the success of Deep Learning (DL) in other real-world applications, it is considered a vital tool for upcoming health-sector applications, providing accurate solutions for medical image analysis. This review provides a comprehensive survey of state-of-the-art DL models for DR detection and grading using retinal fundus photography. It thoroughly examines and summarizes 81 relevant publications indexed in IEEE Xplore, Web of Science, PubMed, and Scopus between 2018 and 2023, covering binary and multiclass CNN classification models as well as the main preprocessing techniques. According to the findings of this review, transfer learning has proven to be an excellent technique for addressing the problem of limited data for DR analysis. CNN models with tens or hundreds of layers are the most frequently utilized frameworks for DR classification. The most extensively utilized datasets for DR classification are APTOS 2019 and EyePACS. Although DL has attained or surpassed human-level DR classification accuracy, there is still more work to be done in real-world clinical procedures.
Breast cancer is one of the most frequent tumours among females in Iraq. Medical ultrasound imaging has become a common modality for breast tumour imaging because of its ease of use, low cost, and safety. In the present study, Convolutional Neural Network (CNN) feature extraction approaches were used to classify breast ultrasound images. The CNN model used is composed of four layers for breast ultrasound image analysis. Two freely available datasets were used, divided into groups A and B. Group A has three classes, namely benign, malignant, and normal, while group B has two classes, namely benign and malignant. The proposed technique was assessed based on its accuracy, precision, F1-score, and recall. The model's classification accuracy for data A was 96%, whereas for data B it was 100%.
Power outages are a common and persistent problem in Iraq, significantly impacting various aspects of life and business. These interruptions disrupt routine household tasks and hinder more complex technical operations in industries and services, emphasizing the need for careful management and proactive solutions. This paper introduces a real-world time series dataset for Baghdad city, including historical outages, weather conditions (such as temperature), and power overloads, and analyzes the correlation among these parameters across seasons. The research uses this dataset to train one-dimensional Convolutional Neural Networks (1D CNN) to find patterns and relationships that can accurately predict when power outages may occur in both the short and long term, improving the management of the Baghdad electricity grid through data-driven methods. The model was evaluated using standard performance metrics, and the results show that the CNN is accurate in predicting outages in the short term with a Mean Absolute Error (MAE) of 0.0077, whereas in the long term it achieved an MAE of 0.0775. These predictive models have the potential to facilitate proactive measures aimed at reducing the impact of power outages by anticipating them in advance. This research focuses on enhancing the reliability and efficiency of Baghdad's electricity supply, ultimately contributing to economic growth and stability.
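A minimal NumPy sketch of the sliding-window setup on which a 1D-CNN forecaster of this kind is typically trained, together with the MAE metric the paper reports. The series here is synthetic, and the 24-step window length and one-step horizon are assumed parameters, not taken from the paper; a naive last-value baseline stands in for the trained network.

```python
import numpy as np

def make_windows(series, window, horizon):
    """Slice a 1-D series into (input window, target) pairs for forecasting."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)

def mae(y_true, y_pred):
    """Mean Absolute Error, the metric reported in the paper."""
    return np.mean(np.abs(y_true - y_pred))

rng = np.random.default_rng(0)
# Synthetic demand curve: a seasonal cycle plus noise.
load = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.normal(size=500)

X, y = make_windows(load, window=24, horizon=1)   # short-term: next step
# Naive baseline in place of the 1D CNN: predict the last observed value.
baseline = X[:, -1]
print("windows:", X.shape, "baseline MAE:", round(mae(y, baseline), 4))
```

A 1D CNN would consume `X` (reshaped to `(samples, window, 1)`) and be trained to beat this baseline MAE; a longer `horizon` gives the long-term variant.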
Optical coherence tomography (OCT) allows for direct and immediate imaging of the morphology of retinal tissue. It has become a crucial imaging modality for diagnosing eye problems in ophthalmology. One of the most significant morphological characteristics of the retina is the structure of the retinal layers, which provides important evidence for diagnostic purposes and is related to a variety of retinal diseases. In this paper, a convolutional neural network (CNN) model is proposed that can distinguish a normal retina from three common macular diseases: diabetic macular edema (DME), Drusen, and choroidal neovascularization (CNV). The proposed model was trained and tested on an open-source dataset of OCT images with expert disease labels: DME, CNV, Drusen, and Normal. The model achieved 98.3% overall classification accuracy, with only 7 wrong classifications out of 368 test samples, and significantly outperforms other models that used the same dataset. The final results show that the proposed model is well suited to the detection of retinal disorders in ophthalmology centres.
Interest in eye-tracking technology has grown dramatically over the last two decades, driven by purposes and applications such as tracking where a person is looking and how the pupils and irises react to a variety of stimuli. The resulting data can deliver an extraordinary amount of information about the user when processed by advanced data analysis systems; it may reveal information about the user's age, gender, biometric identity, interests, and more. This paper is concerned with eye-motion tracking as a general-purpose tool for applications in any field. Advances in artificial intelligence (AI), machine learning (ML), and deep learning (DL), combined with eye-tracking techniques, open large opportunities to develop algorithms and applications. In this paper, a number of models based on convolutional neural networks (CNN) were designed, and the most powerful and accurate model was then chosen. The dataset used for training (for 16 screen points) consists of 2800 training images and 800 test images (an average of 175 training images and 50 test images for each of the 16 spots on the screen), and it can be collected by the user of any application based on this model. The highest accuracy achieved by the best model was 91.25%, with a minimum loss of 0.23%. The best model consists of 11 layers (4 convolution, 4 max-pooling, and 3 dense). Python 3.7 was used to implement the algorithms, with the Keras framework for the deep learning algorithms, Visual Studio Code as the Integrated Development Environment (IDE), and Anaconda Navigator for managing the libraries. The model was trained with data that can be gathered using laptop or PC cameras, without the need for special and expensive equipment, and it can be trained on either eye, depending on application requirements.
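The best model's layer counts are given explicitly (4 convolution, 4 max-pooling, 3 dense), so the architecture can be sketched in Keras, which the paper names as its framework. The filter counts, kernel sizes, and the 64x64 grayscale input below are assumptions, since the abstract does not specify them; only the layer counts and the 16-unit output (one per screen point) come from the text.

```python
from tensorflow.keras import layers, models

# Sketch of the 11-layer eye-gaze CNN: 4 conv, 4 max-pool, 3 dense.
# Filter counts, kernel sizes, and the 64x64 grayscale input are
# assumptions; the abstract specifies only the layer counts and the
# 16-point screen grid.
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(16, activation="softmax"),  # one unit per screen point
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Training would then feed batches of eye-crop images labelled 0-15 by screen spot; swapping the input shape is all that is needed to adapt the sketch to a different camera crop.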
Lung cancer is one of the most common causes of mortality worldwide, and early diagnosis is crucial for a patient's survival and recovery. Automated segmentation of lung lesions in chest CT has become a pre-eminent focus of research, particularly with the development of hybrid methods combining traditional image processing with advanced deep learning methods such as CNNs. These hybrid approaches aim to offset the limitations of individual methods by merging their strengths, enhancing segmentation efficiency, precision, and clinical utility. This review comprehensively analyzes different hybrid techniques, such as deep learning augmented by rule-based systems, multi-scale feature extraction, and ensemble learning, and inspects their clinical impact, particularly in improving diagnostic accuracy and optimizing treatment procedures. Despite their promise, these approaches still face significant challenges, such as computational complexity, data requirements, and the need for explainable AI (XAI). Upcoming advancements in lung lesion segmentation will focus on refining these models to achieve faster processing, improved accuracy, and integration with diagnostic tools while preserving transparency and ethical considerations.
Deep learning can help detect Coronavirus Disease 2019 (COVID-19), a critical task these days for making treatment decisions according to diagnostic results. Advances in artificial intelligence, machine learning, deep learning, and medical imaging techniques have demonstrated impressive performance, especially in detection, classification, and segmentation problems. These innovations enable physicians to see the human body with high accuracy, increasing the accuracy of diagnosis and supporting non-surgical examination of patients. Many imaging modalities are used to detect COVID-19; computerized tomography (CT) was chosen here because it is commonly used. A deep learning model based on a convolutional neural network (CNN) is used for COVID-19 detection. The dataset used consists of 544 CT slices, which is not sufficient for high accuracy but can be considered acceptable given the few datasets available at present. The proposed model achieves validation and test accuracies of 84.4% and 90.09%, respectively, and was compared with other models to demonstrate its superiority.