Artificial intelligence (AI) is rapidly advancing as a valuable tool in oncology, enhancing the detection and management of cancer. The integration of AI with PET/CT imaging offers significant opportunities to improve the efficiency and accuracy of cancer diagnosis. This study examines current applications of AI with PET/CT imaging, highlighting its role in diagnosis, differentiation, delineation, staging, therapy response assessment, prognosis, and image quality enhancement. A comprehensive literature search covering the last five years (2019-2024) was conducted in six databases (Springer, Scopus, PubMed, Web of Science, IEEE, and Google Scholar), identifying 80 studies that met the inclusion criteria and focused on AI-driven models applied to PET/CT data across various cancers, with lung cancer the most studied. Other cancers examined include head and neck, breast, lymph node, and whole-body malignancies. All studies involved human subjects. The findings indicate that AI holds promise for improving cancer detection, distinguishing benign from malignant tumors, and aiding segmentation, response evaluation, staging, and prognosis. However, the application of AI-powered models and PET/CT-derived radiomics in clinical practice remains limited by issues of data normalization, reproducibility, and the need for large multi-center datasets to improve model generalizability. These limitations must be addressed to guarantee the dependable and ethical use of AI in daily clinical practice.
Recently, three-dimensional models (3DM) have gained popularity in the prosthetics field, especially for creating residual limb shapes from medical images collected in Digital Imaging and Communications in Medicine (DICOM) format from magnetic resonance imaging (MRI) after accurate image processing. In this study, a three-dimensional model of the residual limb of a patient with a transtibial amputation was created by integrating artificial intelligence with a computer vision approach, demonstrating the benefits of AI segmentation tools and algorithms for generating a more accurate three-dimensional model before prosthetic socket design, or for comparing the MRI-derived 3D model with a model generated by another technique. The subject was a 23-year-old male patient with a left-leg amputation, wearing a prosthetic socket liner, weighing 62 kg, 168 cm tall, and with a high activity level. The patient was scanned with a GE Medical Systems 1.5 Tesla Signa Excite scanner. The MRI images in DICOM format were read to retrieve essential metadata such as pixel spacing and slice thickness. These images were processed with a dedicated algorithm to obtain a model reflecting the real shape of the residual limb, and the 3D model was extracted using AI segmentation tools. The resulting high-resolution 3D model demonstrates the potential of an artificial intelligence approach with deep learning to reconstruct 3D models automatically and repeatably, confirming the instrumental role of AI in medical image analysis, particularly in organ and tissue classification and segmentation.
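The pixel spacing and slice thickness retrieved from the DICOM headers are what allow voxel indices to be mapped to physical millimetres before mesh reconstruction. A minimal sketch of that mapping, with hypothetical spacing values (the study's actual values come from its MRI headers and are not given in the abstract):

```python
# Sketch: converting a DICOM voxel index to physical millimetres.
# PixelSpacing and SliceThickness values below are assumptions for
# illustration, not the study's actual acquisition parameters.

def voxel_to_mm(row, col, slice_idx, pixel_spacing, slice_thickness):
    """Map a (row, col, slice) voxel index to physical (y, x, z) in mm."""
    row_spacing, col_spacing = pixel_spacing  # DICOM PixelSpacing: [row, col]
    return (row * row_spacing, col * col_spacing, slice_idx * slice_thickness)

# Assumed spacing: 0.5 mm in-plane, 1.0 mm between slices.
y, x, z = voxel_to_mm(10, 20, 5, (0.5, 0.5), 1.0)
print(y, x, z)  # 5.0 10.0 5.0
```

Without this scaling, a 3D surface reconstructed from stacked slices would be distorted whenever in-plane resolution differs from slice spacing.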
Interest in eye-tracking technology has grown dramatically over the last two decades for purposes such as tracking where a person is looking and how the pupils and irises react to various stimuli. The resulting data can deliver an extraordinary amount of information about the user when processed by advanced data analysis systems; it may reveal the user's age, gender, biometric identity, interests, and more. This paper treats eye motion tracking as a general-purpose tool for applications in any field. Advances in artificial intelligence (AI), machine learning (ML), and deep learning (DL) combined with eye-tracking techniques open large opportunities to develop algorithms and applications. In this paper, several models based on convolutional neural networks (CNNs) were designed, and the most accurate model was then selected. The dataset used for training (for 16 screen points) consists of 2800 training images and 800 test images (an average of 175 training and 50 test images for each of the 16 spots on the screen), and it can be collected by the user of any application based on this model. The best model achieved an accuracy of 91.25% with a minimum loss of 0.23%. It consists of 11 layers (4 convolution, 4 max-pooling, and 3 dense). Python 3.7 was used to implement the algorithms, with the Keras framework for deep learning, Visual Studio Code as the integrated development environment (IDE), and Anaconda Navigator for installing the required libraries. The model was trained on data that can be gathered with ordinary laptop or PC cameras, without special or expensive equipment, and it can be trained on either eye, depending on application requirements.
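The reported architecture (11 layers: 4 convolution, 4 max-pooling, 3 dense, with 16 screen points as output classes) can be summarised as a layer list. The filter counts, pooling sizes, and dense unit counts below are assumptions for illustration; the abstract states only the layer types and their counts:

```python
# Layer list for the reported best architecture: 11 layers in total.
# Parameter values (filters, pool sizes, units) are assumed, not stated
# in the abstract; only the layer types, counts, and the 16 output
# classes (one per screen point) come from the text.

ARCHITECTURE = [
    ("conv", 32), ("maxpool", 2),
    ("conv", 64), ("maxpool", 2),
    ("conv", 128), ("maxpool", 2),
    ("conv", 256), ("maxpool", 2),
    ("dense", 512), ("dense", 128),
    ("dense", 16),  # one output unit per screen point
]

counts = {}
for layer_type, _ in ARCHITECTURE:
    counts[layer_type] = counts.get(layer_type, 0) + 1
print(len(ARCHITECTURE), counts)  # 11 {'conv': 4, 'maxpool': 4, 'dense': 3}
```

The alternating conv/pool pattern is the conventional way to reach 4 of each in an image classifier before the dense head.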
Kidney disease is a global health concern, often leading to kidney failure and impaired function. Artificial intelligence and deep learning have been extensively researched, with numerous models and methods proposed to improve kidney disease diagnosis. This work aims to enhance the efficiency and accuracy of kidney disease diagnosis using deep learning, thereby contributing to effective healthcare delivery. Three models are proposed, CNN, CNN-XGBoost, and CNN-RF, to extract features and classify kidney ultrasound images into four categories: three abnormal cases (stones, hydronephrosis, and cysts) and one normal case. The models were tested on a real dataset of 1260 kidney ultrasound images (from 1000 patients) collected from the Lithotripsy Centre in Iraq. Because CNN models are often viewed as black boxes, given the difficulty of understanding their learned behaviour, Visualizing Intermediate Activations (VIA) was used to address this issue. The proposed framework was assessed using precision, recall, F1-score, and accuracy. CNN-RF was the most accurate model, with an accuracy of 99.6%. This study can potentially assist radiologists in high-volume medical facilities and enhance the accuracy of kidney disease diagnosis.
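The CNN-XGBoost and CNN-RF hybrids follow a two-stage pattern: a CNN extracts features, and a separate classifier makes the final decision. A minimal sketch of that pattern, with deliberately simple stand-ins (a toy intensity-statistics extractor replaces the CNN, and a nearest-centroid rule replaces the Random Forest; all values are invented for illustration):

```python
# Two-stage "extract features, then classify" pattern, sketched with
# stand-ins: a toy feature extractor and a nearest-centroid classifier
# in place of the paper's CNN and Random Forest.

def extract_features(image):
    """Toy extractor: mean and max intensity of a 2D image (list of rows)."""
    flat = [px for row in image for px in row]
    return (sum(flat) / len(flat), max(flat))

def nearest_centroid(feature, centroids):
    """Assign the label whose centroid is closest in feature space."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(feature, centroids[label]))

# Hypothetical feature centroids for two of the study's four classes.
centroids = {"normal": (40.0, 90.0), "stone": (120.0, 150.0)}
bright = [[110, 130], [120, 140]]  # toy "stone-like" image
print(nearest_centroid(extract_features(bright), centroids))  # stone
```

The appeal of the hybrid is that the learned CNN features tend to separate classes well, so even a classical classifier on top of them can perform strongly.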
Recent research has focused on analysing megakaryocyte images to extract the information needed to track the progression of nervous system diseases. Segmentation is a fundamental step in describing and analysing the core contents of megakaryocytes, including the cytoplasm and nucleus. In this study, 45 megakaryocyte images were obtained. A new image segmentation technique was proposed, called the updating fuzzy c-means technique, which intelligently selects the centre of each cluster to separate the cell components. The first step of this technique (fuzzification) was based on a knowledge analysis of the local parameters (entropy, contrast, and standard deviation) that substantially influence the grey-level distribution between the cytoplasm and nucleus. The second step was the construction of fuzzy rules over the variation in these local parameters to control the intelligent selection or update of each cluster centroid and achieve a successful separation of the cytoplasm and nucleus. The final step was defuzzification to obtain the output images. The results revealed the superiority of the proposed method over a recent technique: the accuracy of the segmented nucleus was higher by 7.46%, and that of the cytoplasm by 18%. These results indicate that the technique may be applied to other biomedical images.
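As background, the classical fuzzy c-means membership step that the proposed technique builds on can be sketched on scalar grey levels. The paper's contribution (fuzzy rules over entropy, contrast, and standard deviation steering the centroid update) is not reproduced here; the centre values below are invented for illustration:

```python
# Classical fuzzy c-means membership update on grey levels: each pixel
# gets a degree of membership in every cluster, summing to 1.

def memberships(x, centres, m=2.0):
    """Membership of grey level x in each cluster centre (fuzzifier m)."""
    dists = [abs(x - c) for c in centres]
    if 0.0 in dists:  # point sits exactly on a centre
        return [1.0 if d == 0.0 else 0.0 for d in dists]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((d_i / d_k) ** p for d_k in dists) for d_i in dists]

# Hypothetical cytoplasm and nucleus centres at grey levels 50 and 200.
u = memberships(60, centres=[50, 200])
print(round(u[0], 3))  # 0.995, i.e. strongly assigned to the first cluster
```

Steering where those centres land, as the paper does with its fuzzy rules, is what determines how cleanly the cytoplasm and nucleus separate.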
Medical image segmentation plays a crucial role in the realm of medical imaging. The process involves dividing an image to obtain a comprehensive view and ensure precise diagnostics. The methods employed range from traditional approaches to more advanced deep learning techniques, and both play a significant role in enhancing healthcare. With the continuous advancement in technology, there is a growing need for accurate segmentation. While traditional methods such as thresholding and region growing are effective, they may require human intervention for complex cases. Deep learning techniques, particularly Convolutional Neural Networks (CNNs), have significantly improved the process by learning intricate details and accurately segmenting the image. When these methods are combined, healthcare professionals can achieve high-quality, precise results. Furthermore, advances in hardware now make real-time segmentation possible. Overall, dividing medical images into segments is central to the progress of healthcare with the help of artificial intelligence and recent advances such as explainable AI and multimodal learning. This review provides an extensive analysis of the current methods, their applications across various fields, and the emerging advancements that could drive future improvements and innovations.
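The two traditional methods the review names, thresholding and region growing, are simple enough to sketch directly on a toy 2D image (the image and parameter values below are invented for illustration):

```python
# Minimal sketches of global thresholding and 4-connected region growing.
from collections import deque

def threshold(image, t):
    """Binary mask: 1 where intensity exceeds the threshold t."""
    return [[1 if px > t else 0 for px in row] for row in image]

def region_grow(image, seed, tol):
    """Grow a region from `seed`, admitting 4-neighbours whose intensity
    is within `tol` of the seed intensity."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    base = image[sy][sx]
    region, queue = {seed}, deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(image[ny][nx] - base) <= tol):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

img = [[10, 12, 90],
       [11, 13, 95],
       [80, 85, 92]]
print(sorted(region_grow(img, (0, 0), tol=5)))  # the dark top-left block
```

Both methods depend on hand-picked parameters (the threshold, the seed, the tolerance), which is precisely the human intervention the review says complex cases may require.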
This study compares two socket types, traditional and smart, covering design, manufacturing, and testing to evaluate the influence of socket design on gait symmetry. The materials are locally available in the prosthetics centre where traditional sockets are manufactured; the smart socket uses the same materials with additional components. A simple electronic system, programmed to control the movement of the stump via pneumatic pads and prevent slipping during movement, serves as an advanced suspension system. A gait cycle test was performed on a patient with an above-knee (AK) amputation in two cases: first wearing the traditional socket, then wearing the smart socket. The left-right differences in gait cycle time, step velocity, heel contact, and mid-stance were 0.54, 4.3, 0.19, and 0.34 respectively with the traditional socket, falling to 0.09, 0.7, 0.07, and 0.27 respectively with the smart socket. The smart socket improves comfort by modifying pressure distribution, relieving pressure points, and enhancing functionality through gait analysis. It adjusts to the volume of the residual limb, ensuring an effective fit. Real-time monitoring and remote modifications reduce the need for in-person visits and enhance user confidence. The smart socket, designed to fit user requirements, provides enhanced comfort, functionality, and independence. Future studies will explore its long-term benefits and broader applications, focusing on originality, practical implications, and outcome measurement.
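The symmetry comparison reduces to a per-parameter left-right difference for each socket, where a smaller difference means a more symmetric gait. A sketch using the values reported in the abstract (units are not stated there):

```python
# Left-right gait differences reported in the abstract, per socket.
# A smaller difference indicates better gait symmetry.

PARAMS = ("gait cycle time", "step velocity", "heel contact", "mid-stance")
traditional = dict(zip(PARAMS, (0.54, 4.3, 0.19, 0.34)))
smart = dict(zip(PARAMS, (0.09, 0.7, 0.07, 0.27)))

# How much the asymmetry shrank with the smart socket, per parameter.
improvement = {p: round(traditional[p] - smart[p], 2) for p in PARAMS}
print(improvement["gait cycle time"])  # 0.45
```

Every parameter improves, which is the quantitative basis for the abstract's claim that the smart socket yields a more symmetric gait.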
Deep learning models can help detect Coronavirus disease 2019 (COVID-19), a critical task these days, so that treatment decisions can be made according to the diagnostic results. Advances in artificial intelligence, machine learning, deep learning, and medical imaging techniques have demonstrated impressive performance, especially on detection, classification, and segmentation problems. These innovations enable physicians to view the human body with high accuracy, increasing diagnostic accuracy and supporting non-surgical examination of patients. Many imaging modalities are used to detect COVID-19; we use computed tomography (CT) because it is widely available. For detection we employ a deep learning model based on a convolutional neural network (CNN). The dataset comprises 544 CT slices, which is not sufficient for very high accuracy but is acceptable given the few datasets available at present. The proposed model achieves validation and test accuracies of 84.4% and 90.09%, respectively, and comparison with other models demonstrates its superiority.
Face recognition and identification have recently become the most widely employed biometric authentication technologies, especially for access control and other security purposes. Face recognition is one of the most significant pattern recognition technologies, using characteristics contained in facial images or videos to determine the identity of individuals. However, most traditional facial algorithms face limitations in identification and verification accuracy. This paper therefore presents a face identification system adopting a recent deep learning algorithm, You Only Look Once version 8 (YOLOv8). The system can identify different individuals in different poses with high accuracy. The YOLOv8 model was trained on several target face images, split into 1190 training and 255 validation images. The experimental results show a significant improvement in face identification accuracy, with a mean average precision of 99%, outperforming many state-of-the-art face identification techniques.
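The reported metric, mean average precision (mAP), is the mean over classes of the per-class average precision, itself the area under the precision-recall curve of the ranked detections. A minimal single-class sketch on toy detections (the detection list below is invented for illustration):

```python
# Average precision (AP) for one class, as the area under the
# precision-recall curve over confidence-ranked detections.

def average_precision(detections, total_positives):
    """AP from (confidence, is_true_positive) detections."""
    detections = sorted(detections, key=lambda d: -d[0])
    tp = fp = 0
    ap = prev_recall = 0.0
    for _, is_tp in detections:
        if is_tp:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / total_positives
        ap += precision * (recall - prev_recall)  # rectangle under the curve
        prev_recall = recall
    return ap

dets = [(0.9, True), (0.8, True), (0.6, False), (0.5, True)]
print(round(average_precision(dets, total_positives=3), 3))  # 0.917
```

The 99% mAP reported in the abstract would be this quantity averaged over all enrolled face identities.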
Today, our cities face a host of challenges in delivering quality of life for their inhabitants. On the one hand, city planners and architects seek to preserve heritage, habits, and each city's peculiarities. On the other hand, the city must keep abreast of rapid changes in Information and Communications Technology (ICT), the Internet of Things (IoT), Artificial Intelligence (AI), and the smart city concept. In Baghdad, several activities can be observed that are based on community initiatives and awareness campaigns, self-funded by youth or funded by NGOs and INGOs. How can we invest in such initiatives to achieve a smart city, emphasizing that the city is for the people, not a city of things? Smart cities are commonly described by six factors: smart economy, governance, environment, people, mobility, and living. This paper proposes smart communities as a seventh factor, one that could play an essential role in making Baghdad smart. In this case, decision making and a feedback evaluation system would be subject to transparency, openness, vitality, and sustainability, because they would stem from the community and ensure the sustainability of the smart city.