Alzheimer's disease (AD) is a progressive neurodegenerative disorder that severely impacts cognitive functions such as memory, attention, and reasoning, ultimately affecting daily life. Early and accurate detection is crucial for timely intervention and management. Traditional diagnostic methods, including neuroimaging and cognitive assessments, can be expensive and time-consuming, necessitating more accessible and efficient alternatives. This study aims to develop an automated and efficient deep learning-based detection system that uses Electroencephalogram (EEG) signals to accurately classify AD and healthy individuals. A Convolutional Neural Network (CNN) model was designed to extract meaningful features from preprocessed EEG data. The architecture consists of convolutional layers with max pooling, dropout regularization, and fully connected layers to improve classification accuracy. The model was trained and evaluated on a comprehensive EEG dataset, using key performance metrics such as accuracy, recall, precision, and F1-score. The proposed CNN model achieved a high classification accuracy of 94.56%, a low loss of 0.2162, and an AUC value of 0.93828, demonstrating superior classification capability. The results indicate that the model effectively distinguishes between AD and healthy individuals, outperforming several state-of-the-art approaches. The findings highlight the potential of deep learning-based EEG analysis for AD detection, providing an accessible and cost-effective tool for early diagnosis. The high accuracy of the proposed CNN model suggests that it can assist medical professionals in making well-informed decisions, ultimately improving patient outcomes.
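An architecture of the kind described in this abstract (convolutional layers with max pooling, dropout regularization, and fully connected layers for binary AD-vs-healthy classification) could be sketched in Keras roughly as follows. The input shape, filter counts, and kernel sizes here are illustrative assumptions, not the paper's exact configuration:

```python
# Minimal sketch of a 1-D CNN for EEG classification, assuming input windows
# of 1280 time samples x 19 EEG channels (hypothetical values).
from tensorflow.keras import layers, models

def build_eeg_cnn(input_shape=(1280, 19)):
    model = models.Sequential([
        layers.Input(shape=input_shape),          # time samples x channels
        layers.Conv1D(32, 7, activation="relu"),  # feature extraction
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Dropout(0.5),                      # dropout regularization
        layers.Flatten(),
        layers.Dense(64, activation="relu"),      # fully connected layer
        layers.Dense(1, activation="sigmoid"),    # AD probability
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_eeg_cnn()
```

With a sigmoid output and binary cross-entropy loss, the model emits a single AD probability per EEG window; accuracy, recall, precision, F1-score, and AUC can then be computed from the thresholded predictions.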
Deep learning models can help detect Coronavirus disease 2019 (COVID-19), a critical task these days for making treatment decisions based on diagnostic results. Advances in artificial intelligence, machine learning, deep learning, and medical imaging techniques have demonstrated impressive performance, especially in detection, classification, and segmentation problems. These innovations enable physicians to see inside the human body with high accuracy, increasing diagnostic accuracy and allowing non-surgical examination of patients. Many imaging modalities are used to detect COVID-19; we use computerized tomography (CT) because it is commonly used. For detection, we propose a deep learning model based on a convolutional neural network (CNN). The dataset used consists of 544 CT slices, which is not sufficient for very high accuracy but is acceptable given the few datasets available at present. The proposed model achieves validation and test accuracies of 84.4% and 90.09%, respectively, and has been compared with other models to demonstrate its superiority.
Optical coherence tomography (OCT) allows for direct and immediate imaging of the morphology of retinal tissue and has become a crucial imaging modality for diagnosing eye problems in ophthalmology. One of the most significant morphological characteristics of the retina is the structure of the retinal layers, which provides important evidence for diagnosis and is related to a variety of retinal diseases. In this paper, a convolutional neural network (CNN) model is proposed that can distinguish a normal retina from three common macular diseases: diabetic macular edema (DME), drusen, and choroidal neovascularization (CNV). The proposed model was trained and tested on an open-source dataset of OCT images labeled by professionals into four classes: DME, CNV, Drusen, and Normal. The model achieved 98.3% overall classification accuracy, with only 7 misclassifications out of 368 test samples, and significantly outperforms other models that used the same dataset. These results show that the proposed model is well suited to the detection of retinal disorders in ophthalmology centers.
Breast cancer is one of the most frequent tumours among females in Iraq. Medical ultrasound imaging has become a common modality for breast tumour imaging because of its ease of use, low cost, and safety. In the present study, Convolutional Neural Network (CNN) feature-extraction approaches were used to classify breast ultrasound images. The CNN model used is composed of four layers for breast cancer ultrasound image analysis. Two freely available datasets were used, divided into groups A and B: group A has three classes (benign, malignant, and normal), while group B has two classes (benign and malignant). The proposed technique was assessed based on its accuracy, precision, F1-score, and recall. The model's classification accuracy was 96% for dataset A and 100% for dataset B.
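The evaluation metrics reported in this abstract (accuracy, precision, recall, and F1-score) can be computed from predicted and true class labels as in this small illustrative helper, which is not code from the study; the label names are hypothetical:

```python
# Compute accuracy, precision, recall, and F1-score for one positive class
# from parallel lists of true and predicted labels.
def classification_metrics(y_true, y_pred, positive):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Toy example with hypothetical labels:
acc, prec, rec, f1 = classification_metrics(
    ["malignant", "benign", "malignant", "benign"],
    ["malignant", "benign", "benign", "benign"],
    positive="malignant")
# acc=0.75, prec=1.0, rec=0.5
```

For the three-class dataset (group A), the same helper can be applied once per class and the results averaged, which is the usual way precision/recall/F1 are extended to multi-class problems.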
Interest in eye-tracking technology has grown dramatically over the last two decades for a variety of purposes and applications, such as tracking where a person is looking and how the pupils and irises react to various actions. The resulting data can deliver an extraordinary amount of information about the user when processed by advanced data analysis systems; it may reveal information about the user's age, gender, biometric identity, interests, etc. This paper is concerned with eye-motion tracking as a general-purpose tool for applications in any field. Improvements in artificial intelligence (AI), machine learning (ML), and deep learning (DL) combined with eye-tracking techniques open large opportunities for developing algorithms and applications. In this paper, a number of models based on convolutional neural networks (CNNs) were designed, and the most powerful and accurate model was chosen. The dataset used for training (for 16 screen points) consists of 2800 training images and 800 test images (an average of 175 training images and 50 test images for each of the 16 spots on the screen), and it can be collected by the user of any application based on this model. The best model achieved an accuracy of 91.25% with a minimum loss of 0.23%, and consists of 11 layers (4 convolutional, 4 max-pooling, and 3 dense). Python 3.7 was used to implement the algorithms, with the Keras framework for the deep learning algorithms, Visual Studio Code as the integrated development environment (IDE), and Anaconda Navigator for downloading the required libraries. The model was trained with data that can be gathered using laptop or PC cameras, without the need for special and expensive equipment, and it can be trained for either eye, depending on application requirements.
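A Keras model of the stated shape (11 layers: 4 convolutional, 4 max-pooling, and 3 dense, classifying an eye image into one of 16 screen points) might look like the following sketch. The filter counts, kernel sizes, and input resolution are assumptions for illustration, not the paper's exact configuration:

```python
# Minimal sketch of an 11-layer gaze-point classifier: 4 conv, 4 max-pool,
# and 3 dense layers, assuming hypothetical 64x64 grayscale eye images.
from tensorflow.keras import layers, models

def build_gaze_cnn(input_shape=(64, 64, 1), num_points=16):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    for filters in (16, 32, 64, 128):               # 4 convolutional layers
        model.add(layers.Conv2D(filters, (3, 3),
                                activation="relu", padding="same"))
        model.add(layers.MaxPooling2D((2, 2)))      # 4 max-pooling layers
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))  # dense layer 1
    model.add(layers.Dense(64, activation="relu"))   # dense layer 2
    model.add(layers.Dense(num_points,
                           activation="softmax"))    # one unit per screen spot
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_gaze_cnn()
```

Treating the 16 screen spots as a 16-way softmax classification matches the per-spot training/test split described above (175 training and 50 test images per spot).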