Research & Works
Featured Research
A Hybrid Chatbot Model for Enhancing Administrative Support in Education: Comparative Analysis, Integration, and Optimization
The increasing number of students and the limited administrative resources in educational institutions have highlighted inefficiencies in traditional communication methods. Although chatbots offer potential solutions, current implementations are constrained by their architectures: rule-based chatbots lack flexibility, retrieval-based systems depend heavily on predefined datasets, and generative models risk inaccuracies. This study introduces a hybrid chatbot model designed to overcome these challenges by combining rule-based, retrieval-based, and generative approaches. The proposed model employs a Multinomial Naive Bayes classifier to intelligently route user queries to the most appropriate module, ensuring optimized performance across diverse query types. A domain-specific dataset from Sakarya University was developed, featuring 2,253 question-answer pairs categorized into seven administrative domains. Experimental results demonstrate that the hybrid chatbot model outperforms standalone approaches, achieving 97.57% accuracy and improved user satisfaction. Moreover, the model’s flexibility allows seamless adaptation to other sectors, such as healthcare and e-commerce. By significantly reducing administrative workload and enhancing engagement, the proposed hybrid chatbot provides an effective framework for scalable, robust, and adaptable conversational agents in educational settings and beyond.
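As a rough illustration of the routing idea described above, the sketch below trains a Multinomial Naive Bayes classifier on a handful of query/module pairs. The example queries and module labels are invented for illustration, not drawn from the Sakarya University dataset:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training pairs: each query is labeled with the module
# (rule-based, retrieval-based, or generative) that should handle it.
queries = [
    "what is the registration deadline",
    "how do I register for courses",
    "tell me about tuition fees",
    "how much is the semester fee",
    "where is the library located",
    "library opening hours",
]
modules = ["rule", "rule", "retrieval", "retrieval", "generative", "generative"]

# Bag-of-words features feeding a Multinomial Naive Bayes router.
router = make_pipeline(CountVectorizer(), MultinomialNB())
router.fit(queries, modules)

def route(query: str) -> str:
    """Return the module the classifier routes this query to."""
    return router.predict([query])[0]
```

In a full system, the selected module would then produce the answer; the classifier only decides which architecture handles the query.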
Efficient Q-learning hyperparameter tuning using FOX optimization algorithm
Reinforcement learning is a branch of artificial intelligence in which agents learn optimal actions through interactions with their environment. Hyperparameter tuning is crucial for optimizing reinforcement learning algorithms and involves the selection of parameters that can significantly impact learning performance and reward. Conventional Q-learning relies on fixed hyperparameters throughout the learning process, a choice to which outcomes are sensitive and which can hinder optimal performance. In this paper, a new adaptive hyperparameter tuning method, called Q-learning-FOX (Q-FOX), is proposed. This method utilizes the FOX Optimizer, an optimization algorithm inspired by the hunting behaviour of red foxes, to adaptively optimize the learning rate (α) and discount factor (γ) in Q-learning. Furthermore, a novel objective function is proposed that maximizes the average Q-values; the FOX Optimizer uses this function to select the solutions with maximum fitness, thereby enhancing the optimization process. The effectiveness of the proposed method is demonstrated through evaluations conducted on two OpenAI Gym control tasks: Cart Pole and Frozen Lake. The proposed method achieved superior cumulative reward compared to established optimization algorithms, as well as to the fixed and random hyperparameter tuning methods that represent conventional Q-learning. Q-FOX consistently achieved an average cumulative reward of 500 (the maximum possible) for the Cart Pole task and 0.7389 for the Frozen Lake task across 30 independent runs, an average cumulative reward 23.37% higher than that obtained with established optimization algorithms across both control tasks. Ultimately, the study demonstrates that Q-FOX is superior at tuning hyperparameters adaptively in Q-learning, outperforming established methods.
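A minimal sketch of the tabular Q-learning update whose learning rate (α) and discount factor (γ) Q-FOX tunes, together with an average-Q objective of the kind the paper describes. The state/action values here are toy stand-ins; the FOX Optimizer itself is not implemented:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha, gamma):
    """One tabular Q-learning step; alpha and gamma are the
    hyperparameters an outer optimizer such as FOX would tune."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q

def avg_q(Q):
    """Objective in the spirit of the paper: a candidate (alpha, gamma)
    pair is scored by the average Q-value it produces."""
    return Q.mean()
```

An outer loop would propose candidate (α, γ) pairs, run episodes with `q_update`, and keep the pair maximizing `avg_q`.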
Foxtsage vs. Adam: Revolution or evolution in optimization?
Optimisation techniques are crucial in neural network training, influencing predictive performance, convergence efficiency, and computational feasibility. Traditional optimisers such as Adam offer adaptive learning rates but struggle with convergence stability and hyperparameter sensitivity, whereas SGD provides stability but lacks adaptiveness. We propose Foxtsage, a novel hybrid optimisation algorithm that integrates FOX-TSA (for global search and exploration) with SGD (for fine-tuned local exploitation) to address these limitations. The proposed Foxtsage optimiser is benchmarked against the widely used Adam optimiser across multiple standard datasets, including MNIST, IMDB, and CIFAR-10. Performance is evaluated based on training loss, accuracy, precision, recall, F1-score, and computational time. The study further explores computational complexity and the trade-off between optimisation performance and efficiency. Experimental findings demonstrate that Foxtsage achieves a 42.03% reduction in mean loss (Foxtsage: 9.508, Adam: 16.402) and a 42.19% decrease in loss standard deviation (Foxtsage: 20.86, Adam: 36.085), indicating greater consistency and robustness in optimisation. Additionally, modest improvements are observed in accuracy (0.78%), precision (0.91%), recall (1.02%), and F1-score (0.89%), showcasing better generalization capability. However, these gains come at a significant computational cost, with a 330.87% increase in mean training time (Foxtsage: 39.541 sec, Adam: 9.177 sec), raising concerns about practical feasibility in time-sensitive applications. By effectively combining FOX-TSA's global search power with SGD's stability, Foxtsage provides a promising alternative for neural network training. While it enhances performance and robustness, its increased computational overhead presents a critical trade-off. Future work will focus on reducing computational complexity, improving scalability, and exploring its applicability in real-world deep-learning tasks.
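The two-phase idea behind this kind of hybrid can be caricatured on a one-dimensional toy objective: a global random search stands in for FOX-TSA's exploration, followed by plain SGD for local exploitation. This is an illustrative sketch, not the actual Foxtsage implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
loss = lambda w: (w - 3.0) ** 2        # toy objective; stands in for network loss
grad = lambda w: 2.0 * (w - 3.0)       # its gradient

# Phase 1: global exploration (random search as a stand-in for FOX-TSA):
# sample many candidates and keep the one with the lowest loss.
candidates = rng.uniform(-10, 10, size=50)
w = candidates[np.argmin(loss(candidates))]

# Phase 2: local exploitation with plain SGD from the best candidate.
lr = 0.1
for _ in range(100):
    w -= lr * grad(w)
```

The design point the paper makes is visible even here: the global phase costs extra function evaluations up front, in exchange for starting the gradient descent near a good basin.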
Decoding drug discovery: exploring A-to-Z in Silico methods for beginners
The drug development process is a critical challenge in the pharmaceutical industry due to its time-consuming nature and the need to discover new drug potentials to address various ailments. The initial step in drug development, drug target identification, often consumes considerable time. While valid, traditional methods such as in vivo and in vitro approaches are limited in their ability to analyze vast amounts of data efficiently, leading to wasteful outcomes. To expedite and streamline drug development, an increasing reliance on computer-aided drug design (CADD) approaches has emerged. These sophisticated in silico methods offer a promising avenue for efficiently identifying viable drug candidates, thus providing pharmaceutical firms with significant opportunities to uncover new prospective drug targets. The main goal of this work is to review in silico methods used in the drug development process with a focus on identifying therapeutic targets linked to specific diseases at the genetic or protein level. This article thoroughly discusses A-to-Z in silico techniques, which are essential for identifying the targets of bioactive compounds and their potential therapeutic effects. This review intends to improve drug discovery processes by illuminating the state of these cutting-edge approaches, thereby maximizing the effectiveness and reducing the duration of clinical trials for novel drug target investigation.
FOX-TSA: navigating complex search spaces and superior performance in benchmark and real-world optimization problems
In the dynamic field of optimisation, hybrid algorithms have garnered significant attention for their ability to combine the strengths of multiple methods. This study presents the Hybrid FOX-TSA algorithm, a novel optimisation technique that merges the exploratory capabilities of the FOX algorithm with the exploitative power of the TSA algorithm. The primary objective is to evaluate the efficiency, robustness, and scalability of this hybrid approach across multiple CEC benchmark suites, including CEC2014, CEC2017, CEC2019, CEC2020, and CEC2022, alongside real-world engineering design problems. The results demonstrate that the Hybrid FOX-TSA algorithm consistently outperforms established optimisation techniques, such as PSO, GWO, and the original FOX and TSA algorithms, in terms of convergence speed, solution quality, and computational efficiency. Notably, the hybrid approach avoids premature convergence and effectively navigates complex search spaces, producing optimal or near-optimal solutions in various test cases. For instance, the algorithm achieved superior performance in minimizing design costs in the Pressure Vessel and Welded Beam Design problems, as well as effectively handling the complex landscapes of the CEC2020 and CEC2022 benchmarks. These results affirm the Hybrid FOX-TSA algorithm as a powerful and adaptable tool for tackling complex optimisation problems, particularly in high-dimensional and multimodal landscapes. The integration of statistical analyses, such as t-tests and Wilcoxon signed-rank tests, further supports the statistical significance of its performance improvements.
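A hedged example of the kind of statistical check mentioned above: a Wilcoxon signed-rank test on paired per-run scores of two algorithms. The numbers below are synthetic, not results from the paper:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
# Synthetic per-run scores over 30 independent runs; the "hybrid"
# column is deliberately generated with a higher mean.
hybrid = rng.normal(0.90, 0.01, size=30)
baseline = rng.normal(0.80, 0.01, size=30)

# Paired, non-parametric test: does the hybrid's per-run advantage
# differ significantly from zero?
stat, p = wilcoxon(hybrid, baseline)
significant = p < 0.05
```

The Wilcoxon signed-rank test is the standard choice here because per-run scores of metaheuristics are paired (same seeds/benchmarks) and rarely normally distributed.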
GMOCSO: Multi-objective Cat Swarm Optimization Algorithm based on a Grid System
This paper presents a multi-objective version of the Cat Swarm Optimization (CSO) Algorithm called the Grid-based Multi-objective Cat Swarm Optimization Algorithm (GMOCSO). Convergence and diversity preservation are the two main goals pursued by modern multi-objective algorithms to yield robust results. To achieve these goals, we first replace the roulette wheel method of the original CSO algorithm with a greedy method. Then, two key concepts from the Pareto Archived Evolution Strategy Algorithm are adopted: the grid system and double archive strategy. Several test functions and a real-world scenario called the pressure vessel design problem are used to evaluate the proposed algorithm’s performance. In the experiment, the proposed algorithm is compared with other well-known algorithms using different metrics, such as the Reversed Generational Distance, Spacing metric, and Spread metric. The optimization results show the robustness of the proposed algorithm, and the results are further confirmed using statistical methods and graphs. Finally, conclusions and future research directions are presented.
Optimizing feature selection with genetic algorithms: A review of methods and applications
Analyzing large datasets to select optimal features is one of the most important research areas in machine learning and data mining. This feature selection procedure involves dimensionality reduction, which is crucial for enhancing model performance and reducing model complexity. Recently, several types of attribute selection methods have been proposed that use different approaches to obtain representative subsets of the attributes; however, these methods have drawbacks such as entrapment in local optima. Population-based evolutionary algorithms like Genetic Algorithms (GAs) have been proposed to provide remedies for these drawbacks by avoiding local optima and improving the selection process itself. This manuscript presents a comprehensive review of GA-based feature selection techniques and their effectiveness across different application domains. The review was conducted using the PRISMA methodology, with systematic identification, screening, and analysis of the relevant literature. Our results indicate that hybrid GA methodologies, including but not limited to GA-wrapper feature selectors and HGA-neural networks, have substantially improved the field by resolving problems such as unnecessary search-space exploration, accuracy shortfalls, and complexity. The paper concludes by discussing the potential of GAs in feature selection and future research directions for enhancing their applicability and performance.
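To make the GA-based selection idea concrete, here is a toy genetic algorithm over feature bitmasks with truncation selection, one-point crossover, and bit-flip mutation. The fitness function is a stand-in for model accuracy minus a complexity penalty, and the "informative" feature indices are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(2)
n_features = 12
informative = {0, 3, 7}   # hypothetical: the 3 truly useful features

def fitness(mask):
    """Reward selecting informative features, penalize extras
    (a proxy for model accuracy minus model complexity)."""
    chosen = set(np.flatnonzero(mask))
    return len(chosen & informative) - 0.2 * len(chosen - informative)

# Population of binary masks: 1 = feature selected.
pop = rng.integers(0, 2, size=(20, n_features))
for _ in range(60):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)][-10:]           # keep the 10 fittest (elitism)
    cut = rng.integers(1, n_features, size=10)
    children = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 10][c:]])
                         for i, c in enumerate(cut)])  # one-point crossover
    flip = rng.random(children.shape) < 0.05           # bit-flip mutation
    children[flip] ^= 1
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
```

In a wrapper-style GA feature selector, `fitness` would instead train and score an actual model on the selected columns.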
LEO: Lagrange elementary optimization
Global optimization problems are often addressed using the practical and efficient approach of evolutionary sophistication, which refers to advanced processes inspired by various systems, particularly those rooted in biological systems. However, these problems, like the original evolutionary systems that inspired them, become increasingly complex, particularly in terms of efficiency, scalability, and achieving a balance between exploitation and exploration. To address these challenges, this research introduces the Lagrange elementary optimization (LEO) algorithm, an evolutionary method that is self-adaptive and inspired by the exceptional accuracy of vaccinations, modeled using the albumin quotient of human blood. The algorithm develops intelligent agents using their fitness function values after gene crossover; these genes direct the search agents during both exploration and exploitation. The main objective of the LEO algorithm is presented in this paper along with the inspiration and motivation for the concept. To demonstrate its precision, the proposed algorithm is validated against a variety of test functions, including 19 traditional benchmark functions and the CEC-C06 2019 test functions. The results of LEO for the 19 classic benchmark test functions are evaluated against DA, PSO, and GA separately, and then two other recent algorithms, FDO and LPB, are also included in the evaluation. In addition, LEO is tested on the ten CEC-C06 2019 functions against the DA, WOA, SSA, FDO, LPB, and FOX algorithms. The cumulative outcomes demonstrate LEO's capacity to improve upon the starting population and move toward the global optimum. Different standard measurements are used to verify and prove the stability of LEO in both the exploration and exploitation phases. Moreover, statistical analysis supports the findings of the proposed research. Finally, novel real-world applications are introduced to demonstrate the practicality of LEO.
Unraveling the Versatility and Impact of Multi-Objective Optimization: Algorithms, Applications, and Trends for Solving Complex Real-World Problems
Multi-Objective Optimization (MOO) techniques have become increasingly popular in recent years due to their potential for solving real-world problems in various fields, such as logistics, finance, environmental management, and engineering. These techniques offer comprehensive solutions that traditional single-objective approaches fail to provide. Due to the many innovative algorithms, it has been challenging for researchers to choose the optimal algorithms for solving their problems. This paper examines recently developed MOO-based algorithms. MOO is introduced along with Pareto optimality and trade-off analysis. In real-world case studies, MOO algorithms address complicated decision-making challenges. This paper examines algorithmic methods, applications, trends, and issues in multi-objective optimization research. This exhaustive review explains MOO algorithms, their methods, and their applications to real-world problems, and aims to contribute to further advancements in MOO research. No singular strategy is superior; instead, selecting a particular method depends on the nature of the optimization problem, the computational resources available, and the specific objectives of the optimization tasks.
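Pareto optimality, introduced above, reduces to a simple dominance check: a solution is kept only if no other solution is at least as good in every objective and strictly better in one. The sketch below filters a set of objective vectors down to its non-dominated front (minimization assumed; the points are invented):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective
    and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points, preserving input order."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Modern MOO algorithms add diversity-preservation machinery (grids, crowding distance, archives) on top of exactly this dominance relation.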
MRSO: balancing exploration and exploitation through modified rat swarm optimization for global optimization
The rapid advancement of intelligent technology has led to the development of optimization algorithms that leverage natural behaviors to address complex issues. Among these, the Rat Swarm Optimizer (RSO), inspired by rats’ social and behavioral characteristics, has demonstrated potential in various domains, although its convergence precision and exploration capabilities are limited. To address these shortcomings, this study introduces the Modified Rat Swarm Optimizer (MRSO), designed to enhance the balance between exploration and exploitation. The MRSO incorporates unique modifications to improve search efficiency and robustness, making it suitable for challenging engineering problems such as Welded Beam, Pressure Vessel, and Gear Train Design. Extensive testing with classical benchmark functions shows that the MRSO significantly improves performance, avoiding local optima and achieving higher accuracy in six out of nine multimodal functions and in all seven fixed-dimension multimodal functions. In the CEC 2019 benchmarks, the MRSO outperforms the standard RSO in six out of ten functions, demonstrating superior global search capabilities. When applied to engineering design problems, the MRSO consistently delivers better average results than the RSO, proving its effectiveness. Additionally, we compared our approach with eight recent and well-known algorithms using both classical and CEC-2019 benchmarks. The MRSO outperformed each of these algorithms, achieving superior results in six out of 23 classical benchmark functions and in four out of ten CEC-2019 benchmark functions. These results further demonstrate the MRSO’s significant contributions as a reliable and efficient tool for optimization tasks in engineering applications.
FOX-TSA hybrid algorithm: Advancing for superior predictive accuracy in tourism-driven multi-layer perceptron models
Nature-inspired optimization models have received a great deal of interest due to the performance of these algorithms in solving resourceful and authentic problems. However, achieving high predictive accuracy in machine learning models for specialized domains, such as the tourism industry, remains challenging. Predictive modelling in tourism is vital for improving decision-making, including forecasting visitor behaviours and enhancing customer experiences. As the volume and complexity of tourism data increase, there is a need for optimization methods that enhance model training while effectively handling intricate datasets. This study proposes a hybrid FOX-TSA algorithm to optimize the MLP model. The hybrid algorithm synergises the Fox Optimization Algorithm's exploration capabilities with the Tree-Seed Algorithm's exploitation strengths. Using a tourism dataset with user preferences and ratings, the performance of the proposed algorithm is compared with standalone FOX, TSA, PSO, and GWO algorithms. Results indicate that the hybrid FOX-TSA achieves superior predictive accuracy (94.64%), faster convergence speed (reducing iterations by 25%), and improved F1-score (94.63%) on the test dataset. These findings underline the potential of the hybrid FOX-TSA algorithm to advance predictive modelling in the tourism sector and other domains requiring complex data handling.
Central Kurdish text-to-speech synthesis with novel end-to-end transformer training
Recent advancements in text-to-speech (TTS) models have aimed to streamline the two-stage process into a single-stage training approach. However, many single-stage models still lag behind in audio quality, particularly when handling Kurdish text and speech. There is a critical need to enhance text-to-speech conversion for the Kurdish language, particularly for the Sorani dialect, which has been relatively neglected and is underrepresented in recent text-to-speech advancements. This study introduces an end-to-end TTS model for efficiently generating high-quality Kurdish audio. The proposed method leverages a variational autoencoder (VAE) that is pre-trained for audio waveform reconstruction and is augmented by adversarial training. This involves aligning the prior distribution established by the pre-trained encoder with the posterior distribution of the text encoder within latent variables. Additionally, a stochastic duration predictor is incorporated to imbue synthesized Kurdish speech with diverse rhythms. By aligning latent distributions and integrating the stochastic duration predictor, the proposed method facilitates the real-time generation of natural Kurdish speech audio, offering flexibility in pitches and rhythms. Empirical evaluation via the mean opinion score (MOS) on a custom dataset confirms the superior performance of our approach (MOS of 3.94) compared with that of a one-stage system and other two-staged systems as assessed through a subjective human evaluation.
Leveraging Chatbots for Effective Educational Administration: A Systematic Review
Educational institutions are increasingly utilizing chatbots to enhance communication and streamline administrative tasks, including answering frequently asked questions (FAQs). As the first review of chatbot use in educational administration, this systematic review explores the potential of chatbots to improve educational administration. While research on chatbots in education has primarily focused on their role in learning, their application to administrative tasks remains underexplored. This gap necessitates a systematic investigation of how chatbots are utilized and implemented to streamline administrative processes. Following the PRISMA framework, this review analyzes 54 studies to address five key areas: functionalities performed, data employed, technologies used, evaluation methods, and future development challenges. The review reveals a critical need for a comprehensive framework for developing administrative chatbots that are specifically tailored to educational institutions. This review underscores the importance of such a framework in addressing identified gaps in the literature and in incorporating generative AI into the management of educational administrative tasks. The review also highlights a noticeable lack of research in this domain, including on the potential of generative AI, like ChatGPT, to improve administrative efficiency. Furthermore, the study discusses the potential impact and practical applications of administrative chatbots in education, as well as future research directions.
Advanced Clustering Techniques for Speech Signal Enhancement: A Review and Meta-analysis of Fuzzy C-Means, K-Means, and Kernel Fuzzy C-Means Methods
Speech signal processing is a cornerstone of modern communication technologies, tasked with improving the clarity and comprehensibility of audio data in noisy environments. The primary challenge in this field is the effective separation and recognition of speech from background noise, crucial for applications ranging from voice-activated assistants to automated transcription services. The quality of speech recognition directly impacts user experience and accessibility in technology-driven communication. This review paper explores advanced clustering techniques, particularly focusing on the Kernel Fuzzy C-Means (KFCM) method, to address these challenges. Our findings indicate that KFCM, compared to traditional methods like K-Means (KM) and Fuzzy C-Means (FCM), provides superior performance in handling non-linear and non-stationary noise conditions in speech signals. The most notable outcome of this review is the adaptability of KFCM to various noisy environments, making it a robust choice for speech enhancement applications. Additionally, the paper identifies gaps in current methodologies, such as the need for more dynamic clustering algorithms that can adapt in real time to changing noise conditions without compromising speech recognition quality. Key contributions include a detailed comparative analysis of current clustering algorithms and suggestions for further integrating hybrid models that combine KFCM with neural networks to enhance speech recognition accuracy. Through this review, we advocate for a shift towards more sophisticated, adaptive clustering techniques that can significantly improve speech enhancement and pave the way for more resilient speech processing systems.
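The core of FCM (and, via kernel distances, KFCM) is the fuzzy membership update. The sketch below implements the standard non-kernel FCM membership formula, u[i,j] = 1 / sum_k (d[i,j] / d[i,k])^(2/(m-1)), with a small epsilon to avoid division by zero. It is an illustration of the clustering step, not code from any of the reviewed systems:

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0, eps=1e-12):
    """Fuzzy C-Means membership update: u[i, j] is the degree to which
    sample i belongs to cluster j. m > 1 is the fuzzifier; each row of
    the returned matrix sums to 1."""
    # Pairwise distances from every sample to every center, shape (n, c).
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    # ratio[i, j, k] = d[i, j] / d[i, k], raised to 2/(m-1).
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)
```

A full FCM loop alternates this update with recomputing centers as membership-weighted means; KFCM replaces the Euclidean distance with a kernel-induced distance, which is what gives it the robustness to non-linear noise discussed above.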
In search of excellence: SHOA as a competitive shrike optimization algorithm for multimodal problems
This paper proposes the Shrike Optimization Algorithm (SHOA) as a swarm intelligence optimization algorithm. Many creatures that live in groups and survive into the next generation search randomly for food and follow the best member of the swarm, a phenomenon known as swarm intelligence. Although swarm-based algorithms mimic these behaviours, they struggle to find optimal solutions in multimodal problems. The swarming behaviours of shrike birds in nature serve as the main inspiration for the proposed algorithm: shrikes migrate from their territory in order to survive, and the SHOA replicates their survival strategies for living, adaptation, and breeding. The exploration and exploitation phases of the optimization are designed by modelling shrike breeding and the search for food to feed nestlings until they are ready to fly and live independently. This paper presents a mathematical model of the SHOA for performing optimization. The SHOA was benchmarked on 19 well-known mathematical test functions, 10 from CEC-2019, and 12 from CEC-2022's most recent test functions, for a total of 41 competitive mathematical test functions, along with four real-world engineering problems under different conditions, both constrained and unconstrained. The statistical results obtained from the Wilcoxon rank-sum and Friedman tests show that the SHOA has significant statistical superiority in handling the test benchmarks compared to competitor algorithms on multimodal problems. The results for the engineering optimization problems show that the SHOA outperforms other nature-inspired algorithms in many cases.
Advancements in optimization: critical analysis of evolutionary, swarm, and behavior-based algorithms
The research work on optimization has witnessed significant growth in the past few years, particularly within multi- and single-objective optimization algorithm areas. This study provides a comprehensive overview and critical evaluation of a wide range of optimization algorithms, from conventional methods to innovative metaheuristic techniques. The methods used for analysis include bibliometric analysis, keyword analysis, and content analysis, focusing on studies from the period 2000–2023. Databases such as IEEE Xplore, SpringerLink, and ScienceDirect were extensively utilized. Our analysis reveals that while traditional algorithms like evolutionary optimization (EO) and particle swarm optimization (PSO) remain popular, newer methods like the fitness-dependent optimizer (FDO) and learner performance-based behavior (LPBB) are gaining traction due to their adaptability and efficiency. The main conclusion emphasizes the importance of algorithmic diversity, benchmarking standards, and performance evaluation metrics, highlighting future research paths including the exploration of hybrid algorithms, the use of domain-specific knowledge, and addressing scalability issues in multi-objective optimization.
End-to-end transformer-based automatic speech recognition for Northern Kurdish: A pioneering approach
Automatic Speech Recognition (ASR) for low-resource languages remains a challenging task due to limited training data. This paper introduces a comprehensive study exploring the effectiveness of Whisper, a pre-trained ASR model, for Northern Kurdish (Kurmanji), an under-resourced language spoken in the Middle East. We investigate three fine-tuning strategies: vanilla, specific parameters, and additional modules. Using a Northern Kurdish fine-tuning speech corpus containing approximately 68 hours of validated transcribed data, our experiments demonstrate that the additional-module fine-tuning strategy significantly improves ASR accuracy on a specialized test set, achieving a Word Error Rate (WER) of 10.5% and a Character Error Rate (CER) of 5.7% with Whisper version 3. These results underscore the potential of sophisticated transformer models for low-resource ASR and emphasize the importance of tailored fine-tuning techniques for optimal performance.
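Word Error Rate, the metric reported above, is the word-level edit distance between reference and hypothesis, normalized by reference length. A minimal reference implementation (CER is the same computation over characters instead of words):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / reference words,
    computed by Levenshtein distance over word sequences."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # deleting all of ref[:i]
    for j in range(len(hyp) + 1):
        d[0][j] = j          # inserting all of hyp[:j]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why it is reported as a percentage rather than clipped to [0, 1].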
NER-RoBERTa: Fine-tuning RoBERTa for named entity recognition (NER) within low-resource languages
Nowadays, Natural Language Processing (NLP) is an important tool for most people's daily life routines, ranging from understanding speech, translation, named entity recognition (NER), and text categorization, to generative text models such as ChatGPT. Due to the existence of big data and consequently large corpora for widely used languages like English, Spanish, Turkish, Persian, and many more, these applications have been developed accurately. However, the Kurdish language still requires more corpora and large datasets to be included in NLP applications. This is because Kurdish has a rich linguistic structure, varied dialects, and a limited dataset, which poses unique challenges for Kurdish NLP (KNLP) application development. While several studies have been conducted in KNLP for various applications, Kurdish NER (KNER) remains a challenge for many KNLP tasks, including text analysis and classification. In this work, we address this limitation by proposing a methodology for fine-tuning the pre-trained RoBERTa model for KNER. To this end, we first create a Kurdish corpus, followed by designing a modified model architecture and implementing the training procedures. To evaluate the trained model, a set of experiments is conducted to demonstrate the performance of the KNER model using different tokenization methods and trained models. The experimental results show that fine-tuned RoBERTa with the SentencePiece tokenization method substantially improves KNER performance, achieving a 12.8% improvement in F1-score compared to traditional models, and consequently establishes a new benchmark for KNLP.
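Entity-level F1, the metric reported above, is computed over predicted versus gold entity spans rather than individual tags. A simplified scorer for BIO-tagged sequences (a toy illustration, not the paper's evaluation code):

```python
def extract_entities(tags):
    """Collect (start, end, type) spans from a BIO tag sequence,
    e.g. ["B-PER", "I-PER", "O"] -> [(0, 2, "PER")]."""
    spans, start, etype = [], None, None
    for i, t in enumerate(tags + ["O"]):          # sentinel "O" closes a trailing span
        if t.startswith("B-") or t == "O" or (t.startswith("I-") and t[2:] != etype):
            if start is not None:
                spans.append((start, i, etype))
                start, etype = None, None
        if t.startswith("B-"):
            start, etype = i, t[2:]
    return spans

def entity_f1(gold, pred):
    """Strict entity-level F1: a prediction counts only if its span
    boundaries and type both match a gold entity exactly."""
    g, p = set(extract_entities(gold)), set(extract_entities(pred))
    if not g or not p:
        return 0.0
    tp = len(g & p)
    if tp == 0:
        return 0.0
    prec, rec = tp / len(p), tp / len(g)
    return 2 * prec * rec / (prec + rec)
```

Strict span matching is the convention used by most NER benchmarks, which is why tokenization choices (such as SentencePiece, discussed above) can shift F1 substantially.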
Breaking Walls: Pioneering Automatic Speech Recognition for Central Kurdish: End-to-End Transformer Paradigm
End-to-end transformer-based models epitomize the cutting edge in Automatic Speech Recognition (ASR) systems. Despite their substantial benefits, these models demand extensive training data to perform optimally, presenting a significant challenge for low-resource languages such as Central Kurdish. Addressing this issue requires innovative methods and techniques. This paper aims to develop an ASR system for Central Kurdish by collecting a robust speech corpus, using an n-gram language model, and utilizing an external Kurdish tokenizer for refinement and integration to enhance the model's performance. We collected a comprehensive 100-hour speech corpus from diverse sources and applied fine-tuning techniques to Persian, English, and Arabic pre-trained models, specifically the xls-r-300m, xls-r-1b, and xls-r-2b Wav2vec 2.0 models. We also utilized 3-gram and 4-gram language models trained on a large text corpus of 300 million tokens. The fine-tuned xls-r-2b model, combined with a 3-gram language model and the external Kurdish tokenizer, achieved the best performance, yielding a Word Error Rate (WER) of 10.0% on the validation set and 11.8% on the Asosoft test set. The ASR model demonstrates the advantage of a large vocabulary compared with existing Kurdish ASR models, producing more accurate results with a lower error rate.
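The n-gram language model used alongside the acoustic model can be illustrated with a tiny add-k-smoothed trigram model. This is a toy sketch; the actual system trains 3-gram and 4-gram models on a 300-million-token corpus with standard LM toolkits:

```python
from collections import Counter

def train_trigram(sentences):
    """Count trigrams and their bigram histories over tokenized
    sentences, with <s>/</s> boundary padding."""
    tri, bi = Counter(), Counter()
    for sent in sentences:
        toks = ["<s>", "<s>"] + sent + ["</s>"]
        for i in range(2, len(toks)):
            tri[tuple(toks[i - 2:i + 1])] += 1
            bi[tuple(toks[i - 2:i])] += 1
    return tri, bi

def trigram_prob(tri, bi, w1, w2, w3, vocab_size, k=1.0):
    """Add-k smoothed P(w3 | w1, w2); an ASR decoder combines scores
    like this with the acoustic model's token probabilities."""
    return (tri[(w1, w2, w3)] + k) / (bi[(w1, w2)] + k * vocab_size)
```

During decoding, the language model rescores candidate transcriptions, which is how the 3-gram model reduces WER beyond what the fine-tuned acoustic model achieves alone.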
Offline Handwriting Signature Verification: A Transfer Learning and Feature Selection Approach
Handwritten signature verification poses a formidable challenge in biometrics and document authenticity. The objective is to ascertain the authenticity of a provided handwritten signature, distinguishing between genuine and forged ones. This problem has many applications in sectors such as finance, legal documentation, and security. The fields of computer vision and machine learning have made significant progress in handwritten signature verification, but outcomes may still be enhanced depending on the findings acquired, the structure of the datasets, and the models used. Our suggested strategy consists of four stages. First, we collected a large dataset of 12,600 images from 420 distinct individuals, with 30 signatures of a certain kind per individual (all signatures are genuine). In the subsequent stage, the best features from each image were extracted using a deep learning model named MobileNetV2. During the feature selection step, three selectors, neighborhood component analysis (NCA), Chi2, and mutual_info (MI), were used to select 200, 300, 400, and 500 features, giving a total of 12 feature vectors. Finally, 12 results were obtained by applying machine learning techniques such as SVM with different kernels (rbf, poly, and linear), KNN, DT, Linear Discriminant Analysis, and Naïve Bayes. Without feature selection, our suggested offline signature verification achieved a classification accuracy of 91.3%, whereas with the NCA feature selection approach and just 300 features it achieved a classification accuracy of 97.7%. The designed model achieved high classification accuracy and has the added benefit of being a self-organized framework. Consequently, using a minimal set of optimally chosen features, the proposed method can identify the best-performing model and validate its predictions.
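The extract-select-classify pipeline can be sketched with off-the-shelf components: synthetic non-negative "embeddings" stand in for MobileNetV2 features, Chi2-based selection stands in for the paper's selectors, and an RBF SVM classifies. All data, dimensions, and the 2-class setup below are invented for illustration:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Synthetic stand-in for extracted features: 400 samples, 50 non-negative
# columns (chi2 requires non-negative inputs), only the first 5 informative.
n = 400
y = rng.integers(0, 2, size=n)
X = rng.random((n, 50))
X[:, :5] += y[:, None]          # informative columns shift with the label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Chi2 keeps the k highest-scoring features; the SVM then classifies
# in the reduced space, mirroring the select-then-classify stages.
clf = make_pipeline(SelectKBest(chi2, k=5), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Keeping the selector inside the pipeline ensures feature selection is fit only on training data, avoiding the leakage that inflates reported accuracy.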
Steel Plate fault detection using the fitness-dependent optimizer and neural networks
Detecting faults in steel plates is crucial for ensuring the safety and reliability of structures and industrial equipment, and early detection can prevent further damage and costly repairs. This chapter aims to diagnose and predict the likelihood of steel plates developing faults using experimental text data. Various machine learning methods, such as GWO-based and FDO-based MLP and CMLP, are tested to classify steel plates as either faulty or non-faulty. The experiments produced promising results for all models, with similar accuracy and performance. However, the FDO-based MLP and CMLP models consistently achieved the best results, with 100% accuracy on all tested datasets, while the other models' outcomes varied from one experiment to another. The findings indicate that models employing FDO as the learning algorithm can achieve higher accuracy at the cost of slightly longer runtime compared to the other algorithms. In conclusion, early detection of faults in steel plates is critical for maintaining safety and reliability, and machine learning techniques can help predict and diagnose these faults accurately.
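The core idea of a metaheuristic-trained MLP is that the optimizer proposes flat weight vectors and the fitness is the classification error, rather than training by backpropagation. The minimal sketch below uses random search as a stand-in for FDO or GWO; the data, network size, and search budget are illustrative assumptions:

```python
# Hedged sketch: an optimizer proposes flat MLP weight vectors, fitness = error rate.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 4))                 # stand-in plate features
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # stand-in faulty / non-faulty label

n_in, n_hid = 4, 6
n_weights = n_in * n_hid + n_hid + n_hid + 1  # W1, b1, W2, b2 flattened

def predict(w, X):
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    b1 = w[n_in * n_hid:n_in * n_hid + n_hid]
    W2 = w[n_in * n_hid + n_hid:n_in * n_hid + 2 * n_hid]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    return (h @ W2 + b2 > 0).astype(int)      # threshold output

def fitness(w):                               # error rate to minimize
    return np.mean(predict(w, X) != y)

best_w = rng.normal(size=n_weights)
best_f = fitness(best_w)
init_err = best_f
for _ in range(300):                          # optimizer loop (random search here)
    cand = best_w + rng.normal(scale=0.3, size=n_weights)
    f = fitness(cand)
    if f < best_f:
        best_w, best_f = cand, f
print(f"training error: {init_err:.2f} -> {best_f:.2f}")
```

In the chapter, FDO's weighted position updates would replace the Gaussian perturbation while the fitness function stays the same.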
HER2GAN: overcome the scarcity of HER2 breast cancer dataset based on transfer learning and GAN model
Introduction: Immunohistochemistry (IHC) is crucial for breast cancer diagnosis, classification, and individualized treatment. IHC measures the expression levels of hormone receptors (estrogen and progesterone receptors), human epidermal growth factor receptor 2 (HER2), and other biomarkers, which inform treatment decisions and predict patient outcomes. Evaluating the breast cancer score on IHC slides, which must account for structural and morphological features under a scarcity of relevant data, is one of the most important open issues in IHC analysis. Several recent studies have applied machine learning and deep learning techniques to these problems. Materials and Methods: This paper introduces a new supervised deep learning approach to the problem. A GAN-based model is proposed for generating high-quality HER2 images and for identifying and classifying HER2 levels. The original and generated images were evaluated using transfer learning methodologies. Results and Conclusion: All models were trained and evaluated on publicly accessible and private datasets, respectively. The InceptionV3 and InceptionResNetV2 models achieved a high accuracy of 93% when the combined generated and original images were used for training and testing, demonstrating the exceptional quality of detail in the synthesized images.
Equitable and fair performance evaluation of whale optimization algorithm
It is essential that all algorithms are evaluated exhaustively, fairly, and intelligently. Nonetheless, evaluating the effectiveness of optimization algorithms equitably and fairly is not an easy process, for various reasons. Choosing and initializing essential parameters, such as the size of the search space for each method and the number of iterations required to solve the problems, can be particularly challenging. As a result, this chapter contrasts the Whale Optimization Algorithm (WOA) with more recent algorithms on a selected set of benchmark problems with varying benchmark-function hardness scores and initial control parameters, comparable problem dimensions, and comparable search spaces. When solving a wide range of numerical optimization problems with varying difficulty scores, dimensions, and search areas, the experimental findings suggest that WOA may be statistically superior or inferior to the preceding algorithms with respect to convergence speed, running time, and memory utilization.
Planning the development of text-to-speech synthesis models and datasets with dynamic deep learning
Text-to-speech (TTS) synthesis is the process of translating natural language text into speech. Speech synthesizers face a major challenge in recognizing the prosodic elements of written text, such as intonation (the rise and fall of the voice in speaking) and duration. In addition, continuous speech features are influenced by the personality and emotions of the speaker. A database is maintained to store the synthesized speech pieces, and the output quality is determined by how consistently the speaker utters the words and how well the missing prosodic elements can be inferred. In the past few years, the field of text-to-speech synthesis has been heavily shaped by the emergence of deep learning, an AI technology that has gained widespread popularity. This review paper presents a taxonomy of deep-learning-based models and architectures, discusses the various datasets utilized in the TTS process, and covers the evaluation metrics that are commonly used. The paper ends with a look at future directions and highlights some deep learning models that give promising results in this field.
Balancing exploration and exploitation phases in whale optimization algorithm: an insightful and empirical analysis
Agents of any metaheuristic algorithm move in two modes, namely exploration and exploitation, and obtaining robust results strongly depends on how these two modes are balanced. The whale optimization algorithm (WOA), a robust and well-recognized metaheuristic in the literature, proposes a novel scheme to achieve this balance and has shown superior results on a wide range of applications. Moreover, the previous chapter provided an equitable and fair performance evaluation of the algorithm. To this point, however, only the final results have been compared, which does not explain how those results are obtained. Therefore, this chapter attempts to empirically analyze the WOA algorithm in terms of its local and global search capabilities, i.e., the ratio of the exploration and exploitation phases. To achieve this objective, the dimension-wise diversity measurement is employed, which statistically evaluates the population's convergence and diversity at various stages of the optimization process.
Improving performance of extreme learning machine for classification challenges by modified firefly algorithm and validation on medical benchmark datasets
The extreme learning machine (ELM) stands out as a contemporary learning model for neural networks with a single hidden layer. The model has gained significant importance in recent years and is frequently employed in research because it is among the fastest and most robust methods. ELM is distinguished by its ability to obtain accurate results without prolonged training, setting it apart from other classifiers, and its reduced reliance on human intervention significantly diminishes the likelihood of errors. Despite this considerable potential, ELMs are not extensively employed, partly because of ongoing challenges the model has yet to overcome. A prevalent issue is that performance depends notably on the weights and biases within the hidden layer and on the quantity of neurons in that layer. Optimizing the number of neurons, referred to as hyperparameter optimization, falls under the category of NP-hard optimization problems. The second challenge lies in training the ELM, that is, establishing the weights and biases tailored for a specific task, which presents another NP-hard problem. The research presented in this manuscript addresses both aspects: optimizing the hyperparameter (the number of neurons in the hidden layer) and training the network to fine-tune the weights and biases. The main goal is to resolve both optimization and training effectively by utilizing an improved swarm intelligence algorithm; as a result, both issues were addressed using an adapted version of the firefly algorithm. The proposed approach was tested and validated on twelve authentic datasets and four synthetic datasets designed for classification purposes.
One of the foremost tasks among them is the fetal nonstress test, commonly known as the cardiotocography problem, which requires interpreting data from two wearable sensors to discriminate between 3 and 10 imbalanced classes. The obtained outcomes are compared with results reached by similar state-of-the-art approaches, and the simulations show that the firefly algorithm improved by the group search operator can lead to superior performance. Additionally, the enhancements of the proposed method are confirmed by rigorous statistical tests, and the results of the best generated model for a significant heart disease dataset are interpreted with the valuable Shapley Additive Explanations (SHAP) tool.
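The baseline ELM that the firefly variant tunes can be sketched very compactly: the hidden layer gets random weights and biases, and only the output weights are solved in closed form by least squares. The data, activation, and neuron count below are illustrative assumptions; in the manuscript the random hidden parameters and the neuron count are instead optimized by the adapted firefly algorithm:

```python
# Hedged sketch of a basic ELM: random hidden layer + least-squares output weights.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))                  # synthetic inputs
y = (X[:, 0] - X[:, 2] > 0).astype(float)      # synthetic binary target

n_hidden = 40
W = rng.normal(size=(5, n_hidden))             # random input weights (never trained)
b = rng.normal(size=n_hidden)                  # random biases
H = np.tanh(X @ W + b)                         # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights in closed form

pred = (H @ beta > 0.5).astype(float)
acc = np.mean(pred == y)
print(f"ELM training accuracy: {acc:.2f}")
```

The absence of an iterative training loop is exactly why ELM is fast; the cost of that speed is the sensitivity to `W`, `b`, and `n_hidden` that the paper targets.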
Modified-improved fitness dependent optimizer for complex and engineering problems
Fitness dependent optimizer (FDO) is considered one of the novel swarm intelligent algorithms. Recently, FDO has been enhanced several times to improve its capability; one of the improvements is called improved FDO (IFDO). However, according to the research findings, the variants of FDO are constrained by two primary limitations. Firstly, if the number of agents employed falls below five, the algorithm's precision diminishes significantly. Secondly, the efficacy of FDO is intricately tied to the quantity of search agents utilized. To overcome these limitations, this study proposes a modified version of IFDO, called M-IFDO. The enhancement is conducted by updating the location of the scout bee in IFDO so that the scout bees achieve better performance and optimal solutions. More specifically, two parameters in IFDO, alignment and cohesion, are removed and replaced by the Lambda parameter. To verify the performance of the newly introduced algorithm, M-IFDO is tested on 19 basic benchmark functions, 10 IEEE Congress on Evolutionary Computation (CEC-C06 2019) functions, and five real-world problems. M-IFDO is compared against five state-of-the-art algorithms: Improved Fitness Dependent Optimizer (IFDO), Improving Multi-Objective Differential Evolution algorithm (IMODE), Hybrid Sampling Evolution Strategy (HSES), Linear Success-History based Parameter Adaptation for Differential Evolution (LSHADE), and CMA-ES Integrated with an Occasional Restart Strategy and Increasing Population Size and An Iterative Local Search (NBIPOP-aCMAES). The verification criteria are based on how well the algorithm reaches convergence, its memory usage, and statistical results. The results show that M-IFDO surpasses its competitors in several cases on the benchmark functions and the five real-world problems.
Comparative Analysis of AES, Blowfish, Twofish, Salsa20, and ChaCha20 for Image Encryption
Nowadays, cybersecurity has grown into a more significant and difficult scientific issue. The recognition of threats and attacks aimed at knowledge and safety on the internet is growing harder. Since cybersecurity guarantees the privacy and security of data sent via the Internet, it is essential, while also providing protection against malicious attacks. Encryption has become an essential element of information security systems. To ensure the security of shared data, including text, images, or videos, it is essential to employ various methods and strategies. This study delves into the prevalent cryptographic methods and algorithms utilized for block and stream encryption, examining the advanced encryption standard (AES), Blowfish, Twofish, Salsa20, and ChaCha20. The primary objective of this research is to identify the optimal times and throughputs (speeds) for data encryption and decryption processes. The methodology involved selecting five distinct types of images to compare the outcomes of the evaluated techniques. The assessment focused on processing time and speed, examining visual encoding and decoding using Java as the primary platform. A comparative analysis of several symmetric key ciphers was performed, with attention to how they handle large datasets; comparing different images helped evaluate the techniques. The results showed that ChaCha20 had the best average time for both encryption and decryption, being over 50% faster than some of the other algorithms, whereas the Twofish algorithm had lower throughput during testing. The paper concludes with findings and suggestions for future improvements.
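The study's Java benchmarks are not reproduced here, but the structural reason ChaCha20 times well can be shown directly: its keystream (per RFC 8439) is generated by cheap add-rotate-xor rounds, and encryption and decryption are the same XOR pass. The sketch below is educational pure Python with arbitrary key and nonce values, not a benchmark-grade implementation:

```python
# Educational pure-Python ChaCha20 keystream (RFC 8439 structure).
import struct

def _qr(s, a, b, c, d):
    # ChaCha quarter-round: add, XOR, rotate on 32-bit words.
    s[a] = (s[a] + s[b]) & 0xffffffff; s[d] ^= s[a]; s[d] = ((s[d] << 16) | (s[d] >> 16)) & 0xffffffff
    s[c] = (s[c] + s[d]) & 0xffffffff; s[b] ^= s[c]; s[b] = ((s[b] << 12) | (s[b] >> 20)) & 0xffffffff
    s[a] = (s[a] + s[b]) & 0xffffffff; s[d] ^= s[a]; s[d] = ((s[d] << 8) | (s[d] >> 24)) & 0xffffffff
    s[c] = (s[c] + s[d]) & 0xffffffff; s[b] ^= s[c]; s[b] = ((s[b] << 7) | (s[b] >> 25)) & 0xffffffff

def chacha20_block(key, counter, nonce):
    state = list(struct.unpack("<4I", b"expand 32-byte k")) \
          + list(struct.unpack("<8I", key)) + [counter] \
          + list(struct.unpack("<3I", nonce))
    w = state[:]
    for _ in range(10):                       # 20 rounds = 10 double rounds
        _qr(w, 0, 4, 8, 12); _qr(w, 1, 5, 9, 13); _qr(w, 2, 6, 10, 14); _qr(w, 3, 7, 11, 15)
        _qr(w, 0, 5, 10, 15); _qr(w, 1, 6, 11, 12); _qr(w, 2, 7, 8, 13); _qr(w, 3, 4, 9, 14)
    return struct.pack("<16I", *((a + b) & 0xffffffff for a, b in zip(w, state)))

def chacha20_xor(key, nonce, data, counter=1):
    out = bytearray()
    for i in range(0, len(data), 64):         # one 64-byte keystream block at a time
        block = chacha20_block(key, counter + i // 64, nonce)
        out += bytes(x ^ y for x, y in zip(data[i:i + 64], block))
    return bytes(out)

key, nonce = bytes(range(32)), bytes(12)      # arbitrary demo values
msg = b"five symmetric ciphers compared on images"
ct = chacha20_xor(key, nonce, msg)
assert chacha20_xor(key, nonce, ct) == msg    # XOR twice restores the plaintext
```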
Modified Bat Algorithm: a newly proposed approach for solving complex and real-world problems
Bat Algorithm (BA) is a nature-inspired metaheuristic search algorithm designed to efficiently explore complex problem spaces and find near-optimal solutions. The algorithm is inspired by the echolocation behavior of bats, which acts as a signaling system for estimating distance and hunting prey. Although BA has proven effective for various optimization problems, it exhibits limited exploration ability and susceptibility to local optima: velocities and positions are updated based on the current global best solution, causing all agents to converge toward a single location. On this premise, this paper proposes the Modified Bat Algorithm (MBA) to address the local optima limitation of the original BA. MBA incorporates the frequency and velocity of the current best solution, enhancing convergence speed and preventing entrapment in local optima. To assess MBA's performance, three sets of test functions (classical benchmark functions, CEC2005, and CEC2019) are employed, with results compared to those of the original BA, particle swarm optimization (PSO), the genetic algorithm (GA), and the dragonfly algorithm (DA). The outcomes demonstrate MBA's significant superiority over the other algorithms. In addition, MBA successfully addresses a real-world assignment problem (a call center problem), traditionally solved using linear programming methods, with satisfactory results.
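To make the critique concrete, here is a minimal numpy sketch of the velocity and position update the abstract describes, with every agent pulled toward the single global best (the attraction sign used in many implementations; sign conventions vary across BA papers). Sizes, bounds, and the sphere objective are illustrative; this shared attractor is exactly the mechanism MBA modifies:

```python
# Toy sketch of the original BA update: all agents share one attractor.
import numpy as np

rng = np.random.default_rng(4)
n_agents, dim, f_min, f_max = 20, 5, 0.0, 2.0
x = rng.uniform(-5, 5, (n_agents, dim))
v = np.zeros((n_agents, dim))
cost = lambda p: np.sum(p**2, axis=-1)          # sphere benchmark
best = x[np.argmin(cost(x))].copy()
init_cost = float(cost(best))

for _ in range(100):
    freq = f_min + (f_max - f_min) * rng.random((n_agents, 1))  # random pulse frequencies
    v = v + (best - x) * freq       # velocity pulled toward the single global best
    x = x + v
    fit = cost(x)
    if fit.min() < cost(best):
        best = x[np.argmin(fit)].copy()
print(f"sphere value: {init_cost:.2f} -> {float(cost(best)):.4f}")
```

Because every velocity references the same `best`, the swarm collapses around one basin; MBA's use of the best solution's own frequency and velocity is meant to loosen that coupling.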
From A-to-Z review of clustering validation indices
Data clustering involves identifying latent similarities within a dataset and organizing them into clusters or groups. The outcomes of various clustering algorithms differ as they are susceptible to the intrinsic characteristics of the original dataset, including noise and dimensionality. The effectiveness of such clustering procedures directly impacts the homogeneity of clusters, underscoring the significance of evaluating algorithmic outcomes. Consequently, the assessment of clustering quality presents a significant and complex endeavor. A pivotal aspect affecting clustering validation is the cluster validity metric, which aids in determining the optimal number of clusters. The main goal of this study is to comprehensively review and explain the mathematical operation of internal and external cluster validity indices (though not all of them), to categorize these indices, and to brainstorm suggestions for the future advancement of clustering validation research. In addition, we review and evaluate the performance of internal and external clustering validation indices on the most common clustering algorithms, such as the evolutionary clustering algorithm star (ECA*). Finally, we suggest a classification framework for examining the functionality of both internal and external clustering validation measures regarding their ideal values, user-friendliness, responsiveness to input data, and appropriateness across various fields. This classification aids researchers in selecting the appropriate clustering validation measure to suit their specific requirements.
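The role these indices play in practice can be illustrated with one internal index: silhouette, whose optimum over candidate cluster counts suggests the number of clusters. The blob data and the 2 to 7 candidate range below are toy assumptions, not the study's benchmarks:

```python
# Sketch: using an internal validity index (silhouette) to choose k.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.8, random_state=5)
scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=5).fit_predict(X)
    scores[k] = silhouette_score(X, labels)   # higher = tighter, better-separated clusters
best_k = max(scores, key=scores.get)
print(f"silhouette picks k = {best_k}")
```

An external index such as adjusted Rand would instead compare `labels` against known ground-truth classes, which is the internal/external split the review formalizes.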
NSGA-II-DL: metaheuristic optimal feature selection with deep learning framework for HER2 classification in breast cancer
Immunohistochemistry (IHC) slides are graded for breast cancer based on visual markers and morphological characteristics of stained membrane regions. The usage of whole slide images (WSIs) from histology in digital pathology algorithms for computer-assisted evaluations has increased recently. Human epidermal growth factor receptor 2 (HER2)-stained microscopic images are challenging, time-consuming, and error-prone to evaluate manually, owing to varied staining, overlapped regions, and huge, non-homogeneous slides. Additionally, classifying HER2 images requires selecting fundamental features that capture the difficult elements of the images, such as irregular cell structure and the coloring of the cells' tissue. To solve these problems, this paper proposes a transfer-learning-based, trainable metaheuristic method for choosing the best features. The suggested model is also efficient in reducing model complexity and computational cost and in avoiding overfitting. The four main components of the proposed cascaded design are: (1) converting WSIs to tiled images and enhancing contrast with fast local Laplacian filtering (FlLpF); (2) extracting features with a ResNet50 CNN based on transfer learning; (3) selecting the most informative features with a non-dominated sorting genetic algorithm (NSGA-II) optimizer; and (4) classifying HER2 scores with a support vector machine (SVM). Results on the HER2SC and HER2GAN datasets show that the suggested model is superior to existing methods, achieving 94.4% accuracy, 93.71% precision, 98.07% specificity, 93.83% sensitivity, and a 93.71% F1-score on the HER2SC dataset.
GOOSE algorithm: a powerful optimization tool for real-world engineering challenges and beyond
This study proposes the GOOSE algorithm as a novel metaheuristic algorithm based on the goose's behavior during rest and foraging. The goose stands on one leg and keeps its balance to guard and protect the other individuals in the flock. The GOOSE algorithm is benchmarked on 19 well-known benchmark test functions, and the results are verified by a comparative study with the genetic algorithm (GA), particle swarm optimization (PSO), dragonfly algorithm (DA), and fitness dependent optimizer (FDO). In addition, the proposed algorithm is tested on 10 modern benchmark functions, and the results are compared with three recent algorithms: the dragonfly algorithm, whale optimization algorithm (WOA), and salp swarm algorithm (SSA). Moreover, the GOOSE algorithm is tested on 5 classical benchmark functions, and the obtained results are evaluated against six algorithms: fitness dependent optimizer (FDO), FOX optimizer, butterfly optimization algorithm (BOA), whale optimization algorithm, dragonfly algorithm, and chimp optimization algorithm (ChOA). The achieved findings attest to the proposed algorithm's superior performance compared to the other algorithms utilized in this study. The technique is then used to optimize four renowned real-world challenges: welded beam design, economic load dispatch, pressure vessel design, and the pathological IgG fraction in the nervous system. The outcomes of the engineering case studies illustrate how well the suggested approach can optimize problems that arise in the real world.
Multi-transfer learning techniques for detecting auditory brainstem response
The assessment of the well-being of the peripheral auditory nerve system in individuals experiencing hearing impairment is conducted through auditory brainstem response (ABR) testing. Audiologists assess and document the results of the ABR test, interpreting the findings and assigning labels using reference-based markers like peak latency, waveform morphology, amplitude, and other relevant factors. Inaccurate assessment of ABR tests may lead to incorrect judgments regarding the integrity of the auditory nerve system; therefore, proper Hearing Loss (HL) diagnosis and analysis are essential. To identify and assess ABR automatically while decreasing the possibility of human error, machine learning methods, notably deep learning, may be an appropriate option. To address these issues, this study proposed deep-learning models using the transfer-learning (TL) approach to extract features from ABR tests and diagnose HL using support vector machines (SVM). In the proposed model, pre-trained convolutional neural network (CNN) architectures, including AlexNet, DenseNet, GoogleNet, InceptionResNetV2, InceptionV3, MobileNetV2, NASNetMobile, ResNet18, ResNet50, ResNet101, ShuffleNet, and SqueezeNet, are used to extract features from the collected dataset of reported ABR images. Six measures are used to evaluate the proposed model: accuracy, precision, recall, geometric mean (GM), standard deviation (SD), and area under the ROC curve. According to the experimental findings, transfer learning with the ShuffleNet and ResNet50 models is effective for diagnosing HL from ABR with an SVM classifier, reaching a high accuracy rate of 95% under 5-fold cross-validation.
Detection of auditory brainstem response peaks using image processing techniques in infants with normal hearing sensitivity
Introduction: The auditory brainstem response (ABR) is measured to assess the brainstem-level integrity of the peripheral auditory nerve system in children with normal hearing. Auditory Evoked Potentials (AEPs) are generated using acoustic stimuli, and interpreting these waves requires competence to avoid misdiagnosing hearing problems; automating ABR test labeling with computer vision may reduce human error. Method: The ABR test results of 26 children aged 1 to 20 months with normal hearing in both ears were used. A new approach is suggested for automatically calculating the peaks of waves at different intensities (in decibels). The procedure entails acquiring wave images from an Audera device using the Color Thresholder method, segmenting each wave as a single wave image using the Image Region Analyzer application, converting all wave images into waves using image processing (IP) techniques, and finally calculating the latency of the peaks of each wave for an audiologist to use in diagnosis. Findings: The image processing techniques detected waves 1, 3, and 5 with accuracies of 0.82, 0.98, and 0.98, and precisions of 0.32, 0.97, and 0.87, respectively. The thresholding step also worked well, correctly detecting 82.7% of the ABR waves. Conclusion: Our findings indicate that the audiology test battery suite can be made more accurate, quick, and error-free by using technology to automatically detect and label ABR waves.
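Once a wave image has been converted into a one-dimensional trace, the peak-latency step reduces to finding local maxima on a time axis. The synthetic trace below (three Gaussian bumps standing in for waves I, III, and V, with made-up latencies) sketches that final step only, not the study's image-segmentation pipeline:

```python
# Toy sketch of the peak-labeling step on a 1-D ABR-like trace.
import numpy as np

t = np.linspace(0, 10, 1000)                  # latency axis in milliseconds
trace = (np.exp(-(t - 1.6)**2 / 0.02) +       # synthetic waves I, III, V
         0.8 * np.exp(-(t - 3.8)**2 / 0.03) +
         1.2 * np.exp(-(t - 5.6)**2 / 0.05))

def local_peaks(y, min_height=0.5):
    # indices where y rises then falls and clears an amplitude threshold
    idx = np.where((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:]) & (y[1:-1] > min_height))[0] + 1
    return idx

latencies = t[local_peaks(trace)]
print("peak latencies (ms):", np.round(latencies, 2))
```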
A novel enhanced normalization technique for a mandible bones segmentation using deep learning: batch normalization with the dropout
Several cases of oral and maxillofacial surgery require 3D virtual surgical planning, which is essential for craniofacial tumor resection and flap reconstruction of the mandible. This can only be achieved if the mandible bone is segmented accurately from computed tomography (CT) images. Convolutional neural networks (CNNs) have achieved high accuracy and more robust segmentation within less processing time. In this research, we propose a CNN-based system to improve the accuracy and performance of segmentation. The proposed system consists of a U-Net-based CNN for segmenting mandible bone, using the dropout technique and batch normalization in the fully connected layers to avoid over-fitting and instability during training. The method provides 3D segmentation of mandible bones from 2D segmented regions in three orthogonal planes, and four different types of planar data were used to improve segmentation accuracy and processing time. The dataset was taken from the Public Domain Database for Computational Anatomy (PDDCA); 310 greyscale CT scan images were used. A confusion matrix (true positives, false positives, and false negatives) was used to measure accuracy. Compared with state-of-the-art solutions, the results show that the accuracy of mandible bone segmentation improved by 21% on average and the processing time was reduced by 30%. Our proposed enhanced system performs accurate segmentation of mandible bones on datasets from two kinds of planes, single-planar and multi-planar, with single-planar data further divided into three types: axial, sagittal, and coronal.
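The two regularizers the chapter combines can be sketched in plain numpy: batch normalization standardizes each feature over the batch, and inverted dropout zeroes random units while rescaling survivors so the expected activation is unchanged. Shapes and values below are toy stand-ins, not the U-Net's layers:

```python
# Hedged numpy sketch of batch normalization followed by dropout.
import numpy as np

rng = np.random.default_rng(6)

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    mu, var = x.mean(axis=0), x.var(axis=0)   # per-feature batch statistics
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def dropout(x, rate=0.5, training=True):
    if not training:
        return x                              # inference: identity
    mask = rng.random(x.shape) >= rate        # drop each unit with probability `rate`
    return x * mask / (1.0 - rate)            # inverted scaling keeps E[x] fixed

h = rng.normal(loc=3.0, scale=2.0, size=(32, 64))   # a batch of 32 activation vectors
out = dropout(batch_norm(h))
print(f"normalized mean ~ {batch_norm(h).mean():.3f}, kept fraction ~ {(out != 0).mean():.2f}")
```

Batch normalization stabilizes the statistics dropout then perturbs, which is why the chapter applies them together in the fully connected layers.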
The fifteen puzzle—a new approach through hybridizing three heuristics methods
The Fifteen Puzzle is one of the most classical problems, having captivated mathematics enthusiasts for well over a century. This is mainly because of the huge state space, with approximately 10^13 states to be explored, and several algorithms have been applied to solve Fifteen Puzzle instances. In this paper, to manage this large state space, the bidirectional A* (BA*) search algorithm is used with three heuristics: Manhattan distance (MD), linear conflict (LC), and walking distance (WD). The three heuristics are hybridized in a way that dramatically reduces the number of states generated by the algorithm. Moreover, all of these heuristics require only 25 KB of storage, yet they help the algorithm effectively reduce the number of generated states and expand fewer nodes. Our implementation of BA* search significantly reduces space complexity and guarantees either optimal or near-optimal solutions.
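The cheapest of the three hybridized heuristics, Manhattan distance, sums each tile's grid distance from its goal cell. The board encoding below (a tuple of 16 values with 0 for the blank, goal order 1 to 15) is an assumption for illustration:

```python
# Manhattan-distance heuristic for the Fifteen Puzzle (0 = blank).
def manhattan(board):
    dist = 0
    for i, tile in enumerate(board):
        if tile == 0:
            continue                      # the blank does not count
        goal = tile - 1                   # tile t belongs at index t-1
        dist += abs(i // 4 - goal // 4) + abs(i % 4 - goal % 4)
    return dist

solved = tuple(list(range(1, 16)) + [0])
one_move = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 0, 15)
print(manhattan(solved), manhattan(one_move))   # 0 and 1
```

Because the heuristic never overestimates the true move count, it is admissible, which is what lets the BA* search keep its optimality guarantee; linear conflict strengthens it by adding 2 for each pair of correctly rowed but mutually blocking tiles.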
CDDO–HS: Child drawing development optimization–harmony search algorithm
Child drawing development optimization (CDDO) is a recent metaheuristic algorithm motivated by children's learning behavior and cognitive development, with the golden ratio employed to optimize the aesthetic value of their artwork. Unfortunately, CDDO suffers from low performance in the exploration phase, and its local best solution stagnates. Harmony search (HS) is a highly competitive algorithm relative to other prevalent metaheuristics, as its exploration-phase performance on unimodal benchmark functions is outstanding. Thus, to avoid these issues, we present CDDO–HS, a hybridization of standard CDDO and standard HS. The proposed hybridized model consists of two phases. First, the pattern size (PS) is relocated to the algorithm's core, and the initial pattern size is set to 80% of the total population size. Second, standard harmony search is applied to the pattern size in the exploration phase to enhance and update the solution after each iteration. Experiments are evaluated using two distinct sets of standard benchmark functions: 23 common classical test functions and 10 CEC-C06 2019 functions. Additionally, the suggested CDDO–HS is compared to CDDO, HS, and six other widely used algorithms. Using the Wilcoxon rank-sum test, the results indicate that CDDO–HS beats the alternative algorithms.
Multi-population Black Hole Algorithm for the problem of data clustering
The retrieval of important information from a dataset requires a special data mining technique known as data clustering (DC), which classifies similar objects into groups with similar characteristics. Clustering involves grouping the data around k cluster centres that are typically selected randomly. Recently, the issues behind DC have prompted a search for alternative solutions. A nature-inspired optimization algorithm named the Black Hole Algorithm (BHA) was developed to address several well-known optimization problems. The BHA is a population-based metaheuristic that mimics the natural phenomenon of black holes, whereby each star represents a potential solution revolving around the solution space. The original BHA showed better performance than other algorithms when applied to benchmark datasets, despite its poor exploration capability. Hence, this paper presents a multi-population generalization of BHA, called MBHA, wherein the performance of the algorithm depends not on a single best-found solution but on a set of generated best solutions. The formulated method was tested on a set of nine widespread and popular benchmark test functions. The ensuing experimental outcomes indicated highly precise results compared to BHA and the comparable algorithms in the study, as well as excellent robustness. Furthermore, the proposed MBHA achieved a high rate of convergence on six real datasets (collected from the UCI Machine Learning Repository), making it suitable for DC problems. Lastly, the evaluations conclusively indicated the appropriateness of the proposed algorithm for resolving DC issues.
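The single-population mechanics that MBHA generalizes can be sketched briefly: stars drift toward the best solution (the black hole), and any star crossing the event horizon, whose radius is the black hole's fitness over the population's total fitness, is replaced by a fresh random star. The sphere objective, sizes, and bounds below are illustrative assumptions:

```python
# Minimal single-population BHA sketch (the mechanics MBHA generalizes).
import numpy as np

rng = np.random.default_rng(7)
n, dim, lo, hi = 20, 4, -5.0, 5.0
stars = rng.uniform(lo, hi, (n, dim))
cost = lambda p: np.sum(p**2, axis=-1)           # sphere objective
best = stars[np.argmin(cost(stars))].copy()
init_best = float(cost(best))

for _ in range(200):
    bh = stars[np.argmin(cost(stars))].copy()    # current black hole
    if cost(bh) < cost(best):
        best = bh.copy()                         # track the global best found
    stars += rng.random((n, 1)) * (bh - stars)   # stars drift toward the black hole
    fit = cost(stars)
    radius = fit.min() / fit.sum()               # event horizon radius
    absorbed = np.linalg.norm(stars - bh, axis=1) < radius
    absorbed[np.argmin(fit)] = False             # never absorb the best star itself
    stars[absorbed] = rng.uniform(lo, hi, (absorbed.sum(), dim))  # re-seed absorbed stars
print(f"best sphere value found: {float(cost(best)):.4f}")
```

The re-seeding step is BHA's only exploration mechanism; MBHA's multiple populations add diversity precisely where this single random restart falls short.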
Enhancing Software Development Projects: The Impact of Delegation and Leadership in Software Management
An enhanced multioperator Runge–Kutta algorithm for optimizing complex water engineering problems
Water engineering problems are typically nonlinear, multivariable, and multimodal optimization problems, and accurate optimization helps predict these systems' performance. This paper proposes a novel optimization algorithm named enhanced multioperator Runge–Kutta optimization (EMRUN) to accurately solve different types of water engineering problems. EMRUN's novelty focuses mainly on enhancing the exploration stage, utilizing the Runge–Kutta search mechanism (RK-SM) and covariance matrix adaptation evolution strategy (CMA-ES) techniques, and on improving the exploitation stage using the enhanced solution quality (IESQ) and sequential quadratic programming (SQP) methods. In addition, adaptive parameters were included to improve the stability of these two stages. The superior performance of EMRUN is first tested against a set of CEC-17 benchmark functions. Afterward, the proposed algorithm extracts parameters from an eight-parameter Muskingum model. Finally, EMRUN is applied to a practical hydropower multireservoir system. The experimental findings show that EMRUN performs much better than advanced optimization approaches and has demonstrated the ability to converge to 99.99% of the global solution. According to the findings, the suggested method is a competitive algorithm that should be considered in optimizing water engineering problems.
Fitness-dependent optimizer for IoT healthcare using adapted parameters: A case study implementation
In the fitness-dependent optimizer (FDO), the search agent's position is updated using speed or velocity, but in a distinctive way: FDO creates weights based on the fitness function value of the problem, which assist in leading the agents through the exploration and exploitation processes. In the original work, FDO was evaluated against other algorithms such as the genetic algorithm (GA) and particle swarm optimization (PSO). The salp swarm algorithm (SSA), dragonfly algorithm (DA), and whale optimization algorithm (WOA) have also been evaluated against FDO in terms of their results. From these experimental findings, we may conclude that FDO outperforms the other techniques mentioned. There are two primary goals for this chapter: (1) the implementation of FDO will be shown step-by-step so that readers can better comprehend the algorithm and quickly apply FDO to solve real-world applications; (2) it deals with …
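The fitness-weight mechanism described above can be sketched in a few lines. This is a minimal, hedged reading of the FDO update rule; the function name, the scalar handling, and the branch structure are illustrative rather than the authors' code.

```python
import random

def fdo_step(position, fitness, best_position, best_fitness, wf=0, rng=random):
    """One illustrative FDO-style update for a single search agent.

    fw is the fitness weight: |best fitness / current fitness| minus an
    optional weight factor wf (0 or 1). A degenerate weight falls back
    to a random pace, as described in the FDO literature.
    """
    r = rng.uniform(-1.0, 1.0)
    fw = abs(best_fitness / fitness) - wf if fitness != 0 else 0.0
    if fw == 1 or fw == 0 or fitness == 0:
        pace = [x * r for x in position]            # random-walk pace
    else:
        diff = [x - b for x, b in zip(position, best_position)]
        sign = -1.0 if r < 0 else 1.0
        pace = [d * fw * sign for d in diff]        # fitness-weighted move
    return [x + p for x, p in zip(position, pace)]
```

The key point is that the step size is driven by the agent's fitness relative to the global best, not by a fixed velocity schedule.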
A Comprehensive Study on Automated Testing with the Software Lifecycle
The software development lifecycle depends heavily on the testing process, which is an essential part of finding issues and reviewing the quality of software. Software testing can be done in two ways: manually and automatically. This article aims to give a thorough review of automated testing, with an emphasis on its primary function within the software lifecycle, the relevance of testing in general, the advantages that come with it, and time- and cost-effective methods for software testing. The research examines how automated testing makes it easier to evaluate software quality, how it saves time compared to manual testing, and how the two approaches differ in terms of benefits and drawbacks. The process of testing software applications is simplified, can be customized to certain testing situations, and can be carried out successfully by using automated testing tools.
A novel solution of deep learning for enhanced support vector machine for predicting the onset of type 2 diabetes
Type 2 Diabetes is one of the most widespread and fatal diseases known, with thousands of people experiencing its onset every year. However, the diagnosis and prevention of Type 2 Diabetes are relatively costly; hence, the use of machine learning and deep learning techniques is gaining momentum for predicting its onset. This research aims to increase the accuracy and Area Under the Curve (AUC) metric while improving the processing time for predicting the onset of Type 2 Diabetes. The proposed system consists of a deep learning technique that uses the Support Vector Machine (SVM) algorithm with a Radial Basis Function (RBF) kernel together with a Long Short-Term Memory (LSTM) layer. The proposed solution provides an average accuracy of 86.31% and an average AUC value of 0.8270 (82.70%), with an improvement of 3.8 milliseconds in processing time. The RBF kernel and the LSTM layer enhance the prediction accuracy and AUC metric over the current industry standard, making the approach more feasible for practical use without compromising processing time.
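The abstract pairs an SVM using a Radial Basis Function kernel with an LSTM layer. The RBF kernel itself is standard and small enough to show directly; the default gamma value here is illustrative.

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """K(x, z) = exp(-gamma * ||x - z||^2): similarity decays smoothly
    with the squared Euclidean distance between the two vectors."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)
```

An RBF-kernel SVM classifies via a weighted sum of these similarities to its support vectors, which is what lets it separate classes that are not linearly separable in the original feature space.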
Deep learning for size and microscope feature extraction and classification in Oral Cancer: Enhanced convolution neural network
Deep learning technology has not been implemented successfully in oral cancer image classification because of the overfitting problem. Due to the network arrangement and the lack of a proper dataset for training, the network may not produce the required feature map with dimension reduction, which results in overfitting. This research aims to reduce overfitting by producing the required feature map with dimension reduction using a Convolutional Neural Network.
Perspectives on the impact of e-learning pre-and post-COVID-19 pandemic—the case of the Kurdistan Region of Iraq
The COVID-19 pandemic profoundly affected global patterns, and the period of the declared virus pandemic has had a negative influence on all aspects of life. This research focuses on categorizing and empirically investigating the role of digital platforms in learning and business processes during the COVID-19 pandemic outbreak. The main purpose of this paper is to investigate to what extent the use of electronic learning (EL) has been boosted by COVID-19’s spread, and EL’s effectiveness on the sustainable development of electronic commerce due to the demand for a variety of electronic devices. For this purpose, the information has been collected through an online questionnaire applied to 430 participants from the Kurdistan Region of Iraq (KRI). The results indicate that participant usage and skills with electronic devices and online software programs are increasing, as the ratio indicated a level of 68% for both genders. Thus, the significance of EL concerning electronic commercial enterprises has been openly acknowledged and influenced by numerous factors. In addition, several suggestions and steps to be undertaken by the government are highlighted. Finally, this research mentions the current limitations of EL and suggests future works to build sustainable online experiences.
Awareness requirement and performance management for adaptive systems: a survey
Self-adaptive software can assess and modify its behavior when the assessment indicates that the program is not performing as intended or when improved functionality or performance is available. Since the mid-1960s, system adaptivity has been extensively researched, and during the last decade, many application areas and technologies involving self-adaptation have gained prominence. All of these efforts have in common the introduction of self-adaptability through software. Thus, it is essential to investigate systematic software engineering methods for creating self-adaptive systems that may be used across different domains. The primary objective of this research is to summarize current advances in awareness requirements for adaptive strategies and their performance management, based on an examination of state-of-the-art methods described in the literature. This paper reviews self-adaptive systems in the context of requirement awareness and summarizes the most common methodologies applied. First, it examines previous surveys and works about self-adaptive systems. Afterward, it classifies the current self-adaptive systems based on six criteria. Then, it presents performance management in current adaptive systems and evaluates the most common self-adaptive approaches. Lastly, the self-adaptive models are evaluated based on four concepts (requirements description, monitoring, relationship dependency/impact, and tools).
A new Lagrangian problem crossover—a systematic review and meta-analysis of crossover standards
The performance of most evolutionary metaheuristic algorithms relies on various operators. The crossover operator is a standard component of population-based algorithms and is divided into two types: application-dependent and application-independent crossover operators. In the process of optimization, these standards always help to select the best-fit point. The high efficiency of crossover operators allows engineers to minimize errors in engineering application optimization while saving time and avoiding overpricing. There are two crucial objectives behind this paper: first, we provide an overview of the crossover standards classification that has been used by researchers for solving engineering operations and problem representation; second, we propose a novel standard crossover based on the Lagrangian Dual Function (LDF) to enhance the formulation of the Lagrangian Problem Crossover (LPX). The LPX for 100 generations of different pairs of parent chromosomes is compared to the Simulated Binary Crossover (SBX) and Blended Crossover (BX) standards for real-coded crossovers. Three unimodal test functions with various random values show that LPX has better performance in most cases and comparable results in the others. Moreover, the LPB algorithm is used to compare LPX with the SBX, BX, and Qubit Crossover (Qubit-X) operators to demonstrate accuracy and performance during exploitation evaluations. Finally, the proposed crossover operator's results are demonstrated, proved, and analyzed statistically by the Wilcoxon signed-rank test.
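Blended Crossover (BX, often written BLX-alpha), one of the real-coded standards compared above, is simple enough to sketch; the alpha default is illustrative.

```python
import random

def blend_crossover(parent1, parent2, alpha=0.5, rng=random):
    """BLX-alpha: each child gene is drawn uniformly from the parents'
    interval extended by alpha times its span on both sides."""
    child = []
    for a, b in zip(parent1, parent2):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        child.append(rng.uniform(lo - alpha * span, hi + alpha * span))
    return child
```

The extension by alpha is what lets the child explore slightly outside the parents' range, which application-independent crossovers rely on to keep diversity in the population.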
Enhancing Algorithm Selection through Comprehensive Performance Evaluation: Statistical Analysis of Stochastic Algorithms
Analyzing stochastic algorithms for comprehensive performance and comparison across diverse contexts is essential. By evaluating and adjusting algorithm effectiveness across a wide spectrum of test functions, including both classical benchmarks and CEC-C06 2019 conference functions, distinct patterns of performance emerge in specific situations, underscoring the importance of choosing algorithms contextually. Additionally, researchers have encountered a critical issue: statistical models are often applied arbitrarily to determine significance values, without further study to select a model suited to evaluating the performance outcomes. To address this concern, this study employs rigorous statistical testing to underscore substantial performance variations between pairs of algorithms, thereby emphasizing the pivotal role of statistical significance in comparative analysis. It also yields valuable insights into the suitability of algorithms for various optimization challenges, providing professionals with information to make informed decisions. This is achieved by pinpointing algorithm pairs with favorable statistical distributions, facilitating practical algorithm selection. The study encompasses multiple nonparametric statistical hypothesis models, such as the Wilcoxon rank-sum test, single-factor analysis, and two-factor ANOVA tests. This thorough evaluation enhances our grasp of algorithm performance across various evaluation criteria. Notably, the research addresses discrepancies in previous statistical test findings in algorithm comparisons, enhancing result reliability in later research. The results proved that there are differences in significance results, as seen in examples like Leo versus the FDO, the DA versus the WOA, and so on. This highlights the need to tailor test models to specific scenarios, as p-value outcomes differ among various tests within the same algorithm pair.
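The Wilcoxon rank-sum comparison used above reduces, for two samples without heavy ties, to the Mann-Whitney U statistic, which can be computed by direct pairwise comparison. This is a sketch for small samples; real analyses use a statistics library that also handles tie correction and p-values.

```python
def mann_whitney_u(sample_a, sample_b):
    """U = number of (a, b) pairs with a > b, counting ties as 1/2.
    U near 0 or near len(a)*len(b) suggests the samples differ."""
    u = 0.0
    for a in sample_a:
        for b in sample_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u
```

Comparing two algorithms' per-run rewards this way asks only whether one distribution tends to sit above the other, with no normality assumption, which is why nonparametric tests suit stochastic optimizers.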
Fitness dependent optimizer with neural networks for COVID-19 patients
The Coronavirus, known as COVID-19, which appeared in 2019 in China, has significantly affected global health and become a huge burden on health institutions all over the world. These effects continue today. One strategy for limiting the virus's transmission is early diagnosis of suspected cases and taking appropriate measures before the disease spreads further. This work aims to diagnose and show the probability of getting infected by the disease according to textual clinical data. We used five machine learning techniques (GWO_MLP, GWO_CMLP, MGWO_MLP, FDO_MLP, FDO_CMLP), all of which aim to classify COVID-19 patients into two categories (positive and negative). Experiments showed promising results for all models used. The applied methods showed very similar performance, particularly in terms of accuracy. However, in each tested dataset, FDO_MLP and FDO_CMLP produced the best results with 100% accuracy. The other models' results varied from one experiment to the other. It is concluded that the models in which the FDO algorithm was used as a learning algorithm had the possibility of obtaining higher accuracy. However, FDO has the longest runtime compared to the other algorithms. The COVID-19 models are available at: https://github.com/Tarik4Rashid4/covid19models
Sentiment Analysis Based on Hybrid Neural Network Techniques Using Binary Coordinate Ascent Algorithm
Sentiment analysis is a technique for determining whether data is positive, negative, or neutral using Natural Language Processing (NLP). The particular challenge in classifying huge amounts of data is that it takes a long time and requires specialist human resources. Various deep learning techniques have been employed by different researchers to train and classify different datasets with varying outcomes, but the results are not satisfactory. To address this challenge, this paper proposes a novel sentiment analysis approach based on hybrid neural network techniques. In our architecture, a preprocessing step is first applied to the Amazon Fine Food Reviews dataset, which includes a number of data cleaning and text normalization techniques. The word embedding technique is then used on the cleaned dataset to capture the semantics of the input by clustering semantically related inputs in the embedding space. Finally, the generated features were classified using three different deep learning techniques, Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), and hybrid CNN-RNN models, in two different ways for each technique: classification on the original feature set and classification on the reduced feature set based on Binary Coordinate Ascent (BCA) and Optimal Coordinate Ascent (OCA). The experimental results show that a hybrid CNN-RNN with the BCA and OCA algorithms outperforms state-of-the-art methods with 97.91% accuracy.
Multi-objective fitness-dependent optimizer algorithm
This paper proposes the multi-objective variant of the recently introduced fitness dependent optimizer (FDO). The algorithm, called the multi-objective fitness dependent optimizer (MOFDO), is equipped with all five types of knowledge (situational, normative, topographical, domain, and historical) as in FDO. To verify its performance, MOFDO is tested on two standard benchmarks: the classical ZDT test functions, a widespread suite named after its authors Zitzler, Deb, and Thiele, and the IEEE Congress on Evolutionary Computation (CEC-2019) multi-modal multi-objective functions. MOFDO's results are compared to the latest variant of multi-objective particle swarm optimization, the third improvement of the non-dominated sorting genetic algorithm (NSGA-III), and the multi-objective dragonfly algorithm. The comparative study shows the superiority of MOFDO in most cases and comparable results in the others. Moreover, MOFDO is used for optimizing real-world engineering problems (e.g., welded beam design). It is observed that the proposed algorithm successfully provides a wide variety of well-distributed feasible solutions, enabling decision-makers to consider a broader range of applicable choices.
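The multi-objective machinery behind an algorithm like MOFDO rests on Pareto dominance; a minimal sketch, assuming the minimization convention:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the non-dominated points: the trade-off set presented to
    decision-makers."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

A "well-distributed feasible" result in the abstract's sense means this front is both close to the true optimal trade-off surface and evenly spread along it.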
Analysis of social engineering awareness among students and lecturers
Massive technological progress and the wide use of Information Technology have increased cyber security threats. Social engineering attacks are a common type of cyber security threat that everyone faces. They use several methods, such as pretexting using Artificial Intelligence or phishing, to attack users' valuable data by exploiting human error. The risk of data attacks has increased, especially in the institutional sector, as digital technologies become more accessible to users. This paper investigates the awareness of social engineering attacks and cyber security threats at the University of Sulaimani. The University of Sulaimani, based in the Kurdistan Region of Iraq, has a large number of students and staff; due to the increase in social engineering threats and a lack of cyber security knowledge, internet users at the university put their confidential data at risk. This research employed a quantitative approach, using a self-report questionnaire to gather primary data from participants. An online survey was launched at the University of Sulaimani to measure social engineering attacks on students and staff. The results show a variety of factors impacting participants' awareness of their data. The objective of this study is to evaluate the participants' knowledge of cyber security and analyze their awareness of social engineering data breaches. One implication of this study is that the participants are inexperienced with network security systems. The participants also emphasized the significance of social engineering (SE) training and ongoing instruction to protect against threats.
A novel enhanced convolution neural network with extreme learning machine: facial emotional recognition in psychology practices
Facial emotional recognition is one of the essential tools used in recognition psychology to diagnose patients. Face and facial emotional recognition are areas where machine learning is excelling. Facial emotion recognition in an unconstrained environment is an open challenge for digital image processing due to varying conditions such as lighting, pose variation, yaw motion, and occlusions. Deep learning approaches have shown significant improvements in image recognition; however, accuracy and processing time still need improvement. This research aims to improve facial emotion recognition accuracy during the training session and reduce processing time using a modified Convolution Neural Network Enhanced with Extreme Learning Machine (CNNEELM). The proposed system consists of an optical flow estimation technique that detects the motion of change in facial expression and extracts peak images from video frames for image pre-processing. The system entails CNNEELM, improving the accuracy of image registration during the training session. Furthermore, the system recognizes six facial emotions (happy, sad, disgust, fear, surprise, and neutral) with the proposed CNNEELM model. The study shows that the overall facial emotion recognition accuracy is improved by 2% over state-of-the-art solutions with a modified Stochastic Gradient Descent (SGD) technique. With the Extreme Learning Machine (ELM) classifier, the processing time is brought down from 113 ms to 65 ms, which can smoothly classify each frame from a video clip at 20 fps. With the pre-trained InceptionV3 model, the proposed CNNEELM model is trained with the JAFFE, CK+, and FER2013 expression datasets. The simulation results show significant improvements in accuracy and processing time, making the model suitable for the video analysis process. Besides, the study solves the issue of the large processing time required to process facial images.
Deep learning neural network for lung cancer classification: enhanced optimization function
Convolutional neural networks are now widely used for image recognition in the medical field. However, overall accuracy in predicting lung tumors is low and processing time is high, because of errors that occur while reconstructing the CT image. The aim of this work is to increase overall prediction accuracy while reducing processing time by using a multispace image in the pooling layer of a convolutional neural network. The proposed method uses an autoencoder system to improve overall accuracy and predicts lung cancer by using a multispace image in the pooling layer of a convolutional neural network with the Adam algorithm for optimization. First, the CT images were pre-processed by feeding them to a convolution filter and down-sampling with max pooling. Then, features are extracted using the autoencoder model based on a convolutional neural network, and a multispace image reconstruction technique is used to reduce error while reconstructing the image, which results in improved accuracy in predicting lung nodules. Finally, the reconstructed images are taken as input for a SoftMax classifier to classify the CT images. The state-of-the-art and proposed solutions were implemented in Python with TensorFlow; the proposed solution increases classification accuracy for lung cancer from 98.9% to 99.5% and improves processing speed from 10 to 12 frames per second. The proposed solution provides high classification accuracy along with less processing time compared to the state of the art. For future research, a larger dataset can be used, and low-pixel images can be processed to evaluate the classification.
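Max pooling, the down-sampling step named above, keeps the largest value in each k-by-k window; a sketch over a plain 2-D list, assuming the stride equals the window size:

```python
def max_pool2d(image, k=2):
    """Non-overlapping k-by-k max pooling over a 2-D list of numbers."""
    h, w = len(image), len(image[0])
    return [
        [max(image[i + di][j + dj] for di in range(k) for dj in range(k))
         for j in range(0, w - k + 1, k)]
        for i in range(0, h - k + 1, k)
    ]
```

Each pooling pass quarters the spatial resolution (for k=2) while preserving the strongest local activations, which is the dimension reduction the pooling layer contributes.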
A survey on dragonfly algorithm and its applications in engineering
The dragonfly algorithm was developed in 2016. It is one of the algorithms used by researchers to optimize an extensive series of uses and applications in various areas. At times, it offers superior performance compared to the most well-known optimization techniques. However, this algorithm faces several difficulties when utilized to solve complex optimization problems. This work addresses the robustness of the method in solving real-world optimization issues and its deficiencies in handling complex optimization problems. This review paper presents a comprehensive investigation of the dragonfly algorithm in the engineering area. First, an overview of the algorithm is given. We also examine the modifications of the algorithm: the merged forms of this algorithm with different techniques and the modifications made to improve its performance are addressed. Additionally, a survey of engineering applications that used the dragonfly algorithm is offered, covering mechanical engineering problems, electrical engineering problems, optimal parameters, economic load dispatch, and loss reduction. The algorithm is tested and evaluated against the particle swarm optimization algorithm and the firefly algorithm. To evaluate the ability of the dragonfly algorithm and the other participating algorithms, a set of traditional benchmarks (TF1-TF23) was utilized. Moreover, to examine its ability to optimize large-scale optimization problems, the CEC-C2019 benchmarks were utilized. A comparison is made between the algorithm and other metaheuristic techniques to show its ability to enhance various problems.
The outcomes of the works that previously utilized the dragonfly algorithm and the outcomes of the benchmark test functions proved that, in comparison with the participating algorithms (GWO, PSO, and GA), the dragonfly algorithm offers excellent performance, especially for small to intermediate applications. Moreover, the limitations of the technique and some future works are presented. The authors conducted this research to help other researchers who want to study the algorithm and utilize it to optimize engineering problems.
An efficient hybrid classification approach for COVID-19 based on harris hawks optimization and salp swarm optimization
Feature selection can be defined as one of the pre-processing steps that decrease the dimensionality of a dataset by identifying the most significant attributes while also boosting classification accuracy. For solving feature selection problems, this study presents a hybrid binary version of the Harris Hawks Optimization algorithm (HHO) and Salp Swarm Optimization (SSA), called HHOSSA, for COVID-19 classification. The proposed HHOSSA presents a strategy for improving the basic HHO's performance using the Salp algorithm's power to select the best fitness values. HHOSSA was tested against two well-known optimization algorithms, the Whale Optimization Algorithm (WOA) and the Grey Wolf Optimizer (GWO), utilizing a total of 800 chest X-ray images. Four performance metrics (accuracy, recall, precision, F1) were employed in the studies, using three classifiers: Support Vector Machines (SVMs), k-Nearest Neighbor (KNN), and Extreme Gradient Boosting (XGBoost). The proposed HHOSSA algorithm achieved 96% accuracy with the SVM classifier and 98% accuracy with the XGBoost and KNN classifiers.
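Wrapper feature selectors of this kind typically score a candidate binary feature mask with a fitness that trades classifier error against subset size. The weighting below is a common convention in the binary-metaheuristic literature, not necessarily the paper's exact formula.

```python
def fs_fitness(error_rate, mask, w=0.99):
    """Lower is better: weighted sum of the classifier's error rate and
    the fraction of features kept by the binary mask."""
    selected_ratio = sum(mask) / len(mask)
    return w * error_rate + (1 - w) * selected_ratio
```

With w close to 1, accuracy dominates and the subset-size term only breaks ties, so the optimizer prefers the smallest mask among equally accurate ones.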
Deep learning for breast cancer classification: Enhanced tangent function
Recently, deep learning using convolutional neural networks (CNNs) has been used successfully to classify images of breast cells accurately, whereas the accuracy of manual classification of those histopathological images is comparatively low. This research aims to increase the accuracy of breast cancer image classification by utilizing a patch-based classifier (PBC) along with a deep learning architecture. The proposed system consists of a deep convolutional neural network, with the PBC helping to enhance and increase the accuracy of the classification process. The CNN has different layers: images are first fed through convolutional layers using the hyperbolic tangent function, together with max-pooling layers, dropout layers, and a SoftMax function for classification. The output is then fed to a PBC that performs patch-wise classification followed by majority voting. Results are obtained throughout the classification stage for breast cancer images collected from breast-histology datasets. The proposed solution improves the accuracy of classifying images as normal, benign, in-situ, or invasive carcinoma from 87% to 94%, with a decrease in processing time from 0.45 s to 0.2 s on average. The proposed solution focuses on increasing classification accuracy by enhancing image contrast and reducing the vanishing gradient. Finally, implementing the contrast-limited adaptive histogram equalization technique and the modified tangent function helps increase the accuracy.
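The patch-based classifier's final step, majority voting over per-patch predictions, is essentially a one-liner:

```python
from collections import Counter

def majority_vote(patch_labels):
    """Image-level label = the most frequent patch-level label."""
    return Counter(patch_labels).most_common(1)[0][0]
```

Voting makes the image-level decision robust to a few mislabeled patches, which is why patch-wise classification can beat a single whole-image prediction on large histopathology slides.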
Freedom: effective surveillance and investigation of water-borne diseases from data-centric networking using machine learning techniques
Worldwide, epidemics continue to be a public health concern, and even with technological advances, barriers to predicting outbreaks remain. We propose a new methodology known as FREEDOM (Effective Surveillance and Investigation of Water-borne Diseases from Data-centric Networking Using Machine Learning) to perform effective surveillance and investigation of water-borne diseases from social media with next-generation data. In the proposed model, we collected data from Twitter, preprocessed the tweet content, performed hierarchical spectral clustering, and generated the frequent word set from each cluster through the Apriori algorithm. Finally, inferences are extracted from the frequent word set through human intervention. In the experimental results, the support and confidence values derived from the Apriori algorithm revealed water-borne diseases not listed by the WHO (World Health Organization), and surveillance of those diseases with percentage ranking was achieved using data-centric networking. The outcomes align with real statistics. This type of analysis will empower doctors and health organizations (the government sector) to track water-borne diseases and their symptoms for early detection and safe recovery, thereby reducing death tolls.
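The Apriori step above mines frequent word sets from each tweet cluster. For small vocabularies, the pair (k=2) case can be sketched by brute-force counting; a real Apriori implementation prunes candidate sets level by level instead. The example words are illustrative.

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(transactions, min_support):
    """Return word pairs whose support (fraction of transactions
    containing both words) meets min_support."""
    counts = Counter()
    for words in transactions:
        for pair in combinations(sorted(set(words)), 2):
            counts[pair] += 1
    n = len(transactions)
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}
```

Pairs like ("cholera", "water") that co-occur in many tweets surface as candidate disease signals for the human-review step.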
Deep learning neural networks for emotion classification from text: enhanced leaky rectified linear unit activation and weighted loss
Accurate emotion classification for online reviews is vital for business organizations to gain deeper insights into markets. Although deep learning has been successfully implemented in this area, accuracy and processing time are still major problems preventing it from reaching its full potential. This paper proposes an Enhanced Leaky Rectified Linear Unit activation and Weighted Loss (ELReLUWL) algorithm for enhanced text emotion classification and faster parameter convergence. The algorithm defines an inflection point and a slope for inputs to the left of the inflection point to avoid gradient saturation, and it weights samples by class to compensate for the influence of data imbalance. A Convolutional Neural Network (CNN) is combined with the proposed algorithm to increase classification accuracy and decrease processing time by eliminating the gradient saturation problem and minimizing the negative effect of data imbalance, demonstrated on a binary sentiment problem. All work was carried out using supervised deep learning. Results for accuracy and processing time were obtained using different datasets and different review types, and show that the proposed solution achieves better classification performance in different data scenarios and for different review types. The proposed model takes less time to converge, achieving model optimization in seven epochs against the current average of 11.5 epochs. The proposed solution improves accuracy and reduces the processing time of text emotion classification: it provides an average class accuracy of 96.63% against a current average of 91.56%, and a processing time of 23.3 milliseconds compared to the current average of 33.2 milliseconds. Finally, this study solves the issues of gradient saturation and data imbalance, enhancing overall average class accuracy and decreasing processing time.
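The two ingredients named above, an activation with an explicit inflection point and left-side slope, and a class-weighted loss, can be sketched as follows. This is a hedged reading of the general idea; the parameter names, defaults, and exact form are illustrative, not the paper's definition of ELReLUWL.

```python
import math

def el_relu(x, inflection=0.0, slope=0.01):
    """Leaky-ReLU-style activation: identity to the right of the
    inflection point, a small nonzero slope to the left, so the
    gradient never saturates to zero."""
    if x >= inflection:
        return x
    return inflection + slope * (x - inflection)

def weighted_nll(prob_true_class, class_weight):
    """Per-sample negative log-likelihood scaled by a class weight,
    so minority-class errors contribute more to the loss."""
    return -class_weight * math.log(prob_true_class)
```

Keeping a nonzero left-side slope preserves gradient flow for negative pre-activations, and upweighting the minority class counteracts the imbalance in review sentiment datasets.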
FOXANN: A Method for Boosting Neural Network Performance
Artificial neural networks (ANNs) play a crucial role in machine learning, and improving their performance remains important. This paper presents FOXANN, a novel classification model that combines the recently developed FOX optimizer with an ANN to solve ML problems. The FOX optimizer replaces the backpropagation algorithm in the ANN, optimizes synaptic weights, and achieves high classification accuracy with minimal loss, improved model generalization, and interpretability. The performance of FOXANN is evaluated on three standard datasets: Iris Flower, Breast Cancer Wisconsin, and Wine. The results presented in this paper are derived from 100 epochs using 10-fold cross-validation, ensuring that all dataset samples are involved in both the training and validation stages. Moreover, the results show that FOXANN outperforms traditional ANN and logistic regression methods as well as other models proposed in the literature, such as ABC-ANN, ABC-MNN, CROANN, and PSO-DNN, achieving a higher accuracy of 0.9969 and a lower validation loss of 0.0028. These results demonstrate that FOXANN is more effective than traditional methods and other proposed models across standard datasets. Thus, FOXANN effectively addresses the challenges of ML algorithms and improves classification performance.
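Replacing backpropagation with a population or swarm optimizer, as FOXANN does, means treating the network's loss as a black box over the weight vector. The sketch below is a deliberately simplified stand-in: a greedy perturbation search instead of the actual FOX optimizer, on a tiny one-neuron network, purely to show the training loop shape.

```python
import math
import random

def ann_loss(weights, data):
    """Mean squared error of a single-neuron net with sigmoid output."""
    total = 0.0
    for x, y in data:
        z = weights[0] * x + weights[1]
        pred = 1.0 / (1.0 + math.exp(-z))
        total += (pred - y) ** 2
    return total / len(data)

def blackbox_train(data, iters=300, seed=0):
    """Stand-in for a swarm optimizer: perturb-and-keep-best search
    over the synaptic weights instead of backpropagation."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    best_loss = ann_loss(best, data)
    for _ in range(iters):
        cand = [w + rng.gauss(0, 0.3) for w in best]
        loss = ann_loss(cand, data)
        if loss < best_loss:
            best, best_loss = cand, loss
    return best, best_loss
```

The design point is that the optimizer only ever queries the loss value, so no gradients (and no differentiable activation) are required, which is what makes swapping in FOX, PSO, or ABC possible.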
Deep learning for sleep stages classification: modified rectified linear unit activation function and modified orthogonal weight initialisation
Each stage of sleep affects human health, and not getting enough sleep at any stage may lead to sleep disorders such as parasomnia, apnea, and insomnia. Sleep-related diseases could be diagnosed using a Convolutional Neural Network (CNN) classifier; however, this classifier has not been successfully implemented in sleep stage classification systems due to the high complexity and low accuracy of classification. The aim of this research is to increase the accuracy and reduce the learning time of the CNN classifier. The proposed system uses a modified orthogonal convolutional neural network and a modified Adam optimisation technique to improve sleep stage classification accuracy and reduce the gradient saturation problem caused by the sigmoid activation function: it uses the Leaky Rectified Linear Unit (ReLU) as the activation function instead of sigmoid. The proposed system, called the Enhanced Sleep Stage Classification system (ESSC), used six databases for training and testing the model on the different sleep stages: the University College Dublin database (UCD), the Beth Israel Deaconess Medical Center MIT database (MIT-BIH), Sleep European Data Format (EDF), Sleep EDF Extended, the Montreal Archive of Sleep Studies (MASS), and the Sleep Heart Health Study (SHHS). Our results show that the gradient saturation problem no longer exists. The modified Adam optimiser helps to reduce noise, which in turn results in faster convergence. The convergence speed of ESSC is increased along with better classification accuracy compared to the state-of-the-art solution.
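The modified orthogonal weight initialisation named in the title rests on producing an orthonormal set of weight vectors. The classical Gram-Schmidt core of that idea can be sketched as follows; the paper's modification itself is its contribution and is not reproduced here.

```python
def orthogonal_init(vectors):
    """Gram-Schmidt orthonormalisation: subtract each new vector's
    projections onto the basis built so far, then normalise."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            dot = sum(x * y for x, y in zip(w, b))
            w = [x - dot * y for x, y in zip(w, b)]
        norm = sum(x * x for x in w) ** 0.5
        if norm > 1e-12:  # skip vectors that are linearly dependent
            basis.append([x / norm for x in w])
    return basis
```

Orthonormal weight matrices preserve the norm of activations across layers, which is why orthogonal initialisation helps against vanishing and exploding gradients early in training.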
Medical dataset classification for Kurdish short text over social media
The Facebook application was used as the resource for collecting the comments in this dataset. The dataset consists of 6756 comments that form a Medical Kurdish Dataset (MKD). The samples are user comments gathered from posts on different pages (Medical, News, Economy, Education, and Sport). A six-step preprocessing technique is applied to the raw dataset to clean the comments and remove noise by replacing characters. The comments (short texts) are labeled for text classification as a positive class (medical comments) and a negative class (non-medical comments). The negative class makes up 55% of the dataset and the positive class 45%.
Establishment of Dynamic Evolving Neural‐Fuzzy Inference System Model for Natural Air Temperature Prediction
Air temperature (AT) prediction can play a significant role in studies related to climate change, radiation and heat flux estimation, and weather forecasting. This study applied and compared the outcomes of three advanced fuzzy inference models, i.e., the dynamic evolving neural-fuzzy inference system (DENFIS), the hybrid neural-fuzzy inference system (HyFIS), and the adaptive neuro-fuzzy inference system (ANFIS), for AT prediction. Modelling was done for three stations in North Dakota (ND), USA, i.e., Robinson, Ada, and Hillsboro. The results reveal that FIS-type models are well suited to handling highly variable data such as AT, which shows a high positive correlation with average daily dew point (DP) and total solar radiation (TSR), and a negative correlation with average wind speed (WS). At the Robinson station, DENFIS performed the best with a coefficient of determination (R2) of 0.96 and a modified index of agreement (md) of 0.92, followed by ANFIS with an R2 of 0.94 and md of 0.89, and HyFIS with an R2 of 0.90 and md of 0.84. Similar results were observed for the other two stations, Ada and Hillsboro, where DENFIS performed the best with R2: 0.953/0.960 and md: 0.903/0.912, followed by ANFIS with R2: 0.943/0.942 and md: 0.888/0.890, and HyFIS with R2: 0.908/0.905 and md: 0.845/0.821, respectively. It can be concluded that all three models are capable of predicting AT with high efficiency using only DP, TSR, and WS as input variables, which makes their application more reliable for a meteorological variable requiring the fewest input variables. The study can be valuable for areas where climatological and seasonal variations are studied, as it provides excellent prediction results with the least error margin and without large expenditure.
Hate speech detection in social media for the Kurdish language
With the rapid growth of technology around the world, and especially on the internet, people use social media extensively to express their ideologies freely. Sometimes this freedom is abused and the rights of others are trampled by hate speech. Moreover, social media makes it easy to denigrate people, groups, and parties, since there is no way to recognize anonymous users. Hate speech detection has been studied for English, Arabic, and Turkish, while there has been no attempt for the Kurdish language. For that reason, a Kurdish hate speech dataset was collected from comments on the Facebook application as an effort toward detecting and removing hate speech. The raw dataset consists of 6882 comments divided into hate and not-hate classes. Support Vector Machine (SVM), Decision Tree (DT), and Naïve Bayes (NB) algorithms were implemented and compared. Based on the results, the SVM performed best, with an F1 measure of 0.687.
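One of the three compared classifiers, Naïve Bayes, is simple enough to sketch end to end. The toy example below trains a Multinomial Naïve Bayes with Laplace smoothing on a few invented English stand-in comments (the actual dataset is Kurdish and far larger); labels and texts here are purely illustrative:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    # docs: list of (text, label) pairs
    prior = Counter(label for _, label in docs)
    counts = defaultdict(Counter)     # per-class word counts
    vocab = set()
    for text, label in docs:
        for word in text.lower().split():
            counts[label][word] += 1
            vocab.add(word)
    return prior, counts, vocab, len(docs)

def predict(model, text):
    prior, counts, vocab, n = model
    best, best_lp = None, -math.inf
    for label in prior:
        total = sum(counts[label].values())
        lp = math.log(prior[label] / n)
        for word in text.lower().split():
            # Laplace smoothing keeps unseen words from zeroing the probability
            lp += math.log((counts[label][word] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("you are stupid and worthless", "hate"),
        ("i hate you idiot", "hate"),
        ("have a lovely day friend", "not-hate"),
        ("great match congratulations team", "not-hate")]
model = train_nb(docs)
print(predict(model, "stupid idiot"))
```

An SVM over a bag-of-words or TF-IDF representation, which the paper found strongest, follows the same pipeline with a different classifier at the end.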
Application of machine learning to express measurement uncertainty
The continuing increase in data processing power in modern devices and the availability of a vast amount of data via the internet and the internet of things (sensors, monitoring systems, financial records, health records, social media, etc.) have enabled the accelerated development of machine learning techniques. However, the collected data can be inconsistent, incomplete, and noisy, leading to decreased confidence in data analysis. The paper proposes a novel "judgmental" approach to evaluating the measurement uncertainty of a machine learning model that implements the dropout additive regression trees algorithm. The considered method combines the procedure for expressing type B measurement uncertainty with the maximal value of the empirical absolute loss function of the model. It is applied to the testing and monitoring of power equipment and to determining partial discharge location by the non-iterative, all-acoustic method. The example uses a dataset representing the correlation between the mean distance of partial discharge and acoustic sensors and the temperature coefficient of the sensitivity of the non-iterative algorithm. The dropout additive regression trees algorithm achieved the best performance based on the highest coefficient of determination. Most of the model's predictions (>97%) fell into the proposed standard measurement uncertainty interval for both "seen" and "unseen" data.
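The type B evaluation the approach builds on can be illustrated in a few lines. Assuming a rectangular (uniform) distribution over the error interval, the standard uncertainty is the half-width divided by sqrt(3); the sketch below uses the maximal empirical absolute loss as that half-width, with invented residual numbers purely for illustration:

```python
import math

def type_b_uncertainty(half_width):
    # rectangular-distribution assumption: u = a / sqrt(3)
    return half_width / math.sqrt(3)

def coverage_interval(prediction, max_abs_loss, k=2):
    # expanded interval prediction +/- k*u (k=2 is a common coverage factor)
    u = type_b_uncertainty(max_abs_loss)
    return (prediction - k * u, prediction + k * u)

# hypothetical residuals from a regression model
residuals = [0.12, -0.08, 0.30, -0.21, 0.05]
a = max(abs(r) for r in residuals)      # maximal empirical absolute loss
lo, hi = coverage_interval(10.0, a)
```

Checking what fraction of held-out predictions fall inside such intervals is the kind of ">97%" validation the abstract reports.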
Exploiting the generative adversarial network approach to create a synthetic topography corneal image
Corneal diseases are the most common eye disorders. Deep learning techniques are used to perform automated diagnoses of the cornea, but deep learning networks require large-scale annotated datasets, which is considered a weakness of deep learning. In this work, a method for synthesizing medical images using conditional generative adversarial networks (CGANs) is presented. It also illustrates how the produced medical images may be used to enrich medical data, improve clinical decisions, and boost the performance of a convolutional neural network (CNN) for medical image diagnosis. The study uses corneal topography captured with a Pentacam device from patients with corneal diseases; the dataset contained 3448 different corneal images. Furthermore, it shows how an unbalanced dataset affects the performance of classifiers, and the data are balanced using a resampling approach. Finally, the results obtained from CNN networks trained on the balanced dataset are compared to those obtained from CNN networks trained on the imbalanced dataset. Performance was evaluated using diagnosis accuracy, precision, and F1-score metrics. Lastly, some generated images were shown to an expert for evaluation, to see how well experts could identify the type of each image and its condition. The expert recognized the generated images as useful for medical diagnosis and for determining the severity class according to shape and values; by generating images based on real cases, the method can produce new intermediate stages of illness between healthy and unhealthy patients.
Artificial flora optimization algorithm with genetically guided operators for feature selection and neural network training
Many real-life optimization problems in different fields of engineering, science, business, and economics are challenging to solve due to their complexity, and as such they are classified as non-deterministic polynomial-time hard. In recent years, nature-inspired metaheuristics have proved to be robust solvers for global optimization problems. Hybridization is a commonly used technique that can further improve metaheuristic algorithms. Hybrid algorithms are designed by combining the advantages of various algorithms, producing a synergistic effect: hybridization intensifies specific advantages of the different algorithms, and the hybridized implementation often performs better than the original version. In this paper, we present a hybridized artificial flora optimization algorithm named genetically guided best artificial flora. The hybridization is achieved by using uniform crossover and mutation operators from genetic algorithms, which facilitate exploration of the search space and strike the right balance between diversification and intensification. Furthermore, the proposed hybrid algorithm is adapted for two real-world problems: artificial neural network training and the feature selection problem. Following good practice, the proposed method was first tested on standard unconstrained functions before being evaluated on these two very important machine learning challenges. The experimental results show that the proposed hybridized algorithm is highly competitive, that it establishes a better balance between exploration and exploitation than the original, and that it is superior to other state-of-the-art methods in artificial neural network training and feature selection.
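The two genetic operators borrowed for the hybridization are standard and compact. A minimal sketch (real-valued encoding assumed; the surrounding artificial flora machinery is not shown):

```python
import random

def uniform_crossover(p1, p2, rng=random):
    # each gene is inherited from either parent with equal probability
    return [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]

def mutate(x, rate, lo, hi, rng=random):
    # reset each gene to a fresh random value with probability `rate`
    return [rng.uniform(lo, hi) if rng.random() < rate else g for g in x]

rng = random.Random(11)
child = uniform_crossover([0.0] * 5, [1.0] * 5, rng)
mutant = mutate(child, 0.2, -1.0, 1.0, rng)
```

Crossover recombines promising solutions (intensification), while mutation injects fresh genetic material (diversification), which is the balance the hybrid aims for.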
A comprehensive review and evaluation on text predictive and entertainment systems
Text prediction systems are an important way to facilitate communication and interaction with systems and machines. A text prediction system not only facilitates the typing process but also helps people with disabilities who type or enter text at a limited or slow speed: when a user types a word, the system suggests the next word to be chosen. It is also beneficial for people with dyslexia and those who struggle with spelling. Besides, it can be used for entertainment as a game, for example, choosing a target word and trying to reach it within 10 prediction attempts. Text prediction systems generally depend on a corpus. Typing every single word is time-consuming; it is therefore vitally important to reduce the effort of entering text by offering the most probable words for the user to select. This paper presents a survey of miscellaneous techniques for text prediction and entertainment systems and their evaluation. It also identifies a suitable technique for a next-word prediction system from the perspective of ease of implementation and quality of results.
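In its simplest corpus-based form, next-word prediction reduces to counting which word follows which. A minimal bigram sketch with an invented toy corpus:

```python
from collections import Counter, defaultdict

def build_bigram_model(corpus):
    # map each word to a Counter of the words that follow it
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model, word, k=3):
    # return the k most frequent next words after `word`
    return [w for w, _ in model[word.lower()].most_common(k)]

corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug")
model = build_bigram_model(corpus)
print(suggest(model, "the"))
```

Production systems use larger contexts (n-grams or neural language models), but the interface surveyed in the paper, type a word and receive ranked suggestions, is exactly this.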
Kurdish handwritten character recognition using deep learning techniques
Handwriting recognition is regarded as a dynamic and inspiring topic in pattern recognition and image processing. It has many applications, including blind reading aids, computerized reading and processing of paper documents, making any handwritten document searchable, and converting it into structured text form. High accuracy rates have been achieved by this technology for English, Chinese, Arabic, Persian, and many other languages. However, no such system exists for recognizing Kurdish handwriting. In this paper, an attempt is made to design and develop a model that can recognize handwritten characters of the Kurdish alphabet using deep learning techniques. Kurdish (Sorani) contains 34 characters and mainly employs an Arabic/Persian-based script with modified alphabets. In this work, a Deep Convolutional Neural Network model is employed, which has shown exemplary performance in handwriting recognition systems. A comprehensive database of handwritten Kurdish characters containing more than 40 thousand images was then created and used to train the Deep Convolutional Neural Network model for classification and recognition tasks. The experimental results show an acceptable recognition level: testing accuracy reached 83%, and training accuracy reached 96%. From the experimental results, it is clear that the proposed deep learning model performs well and is comparable to handwriting recognition systems for other languages.
Multi-objective learner performance-based behavior algorithm with five multi-objective real-world engineering problems
In this work, a new multi-objective optimization algorithm called the multi-objective learner performance-based behavior algorithm is proposed. The proposed algorithm is based on the process of moving graduated students from high school to college, and it produces a set of non-dominated solutions. To test the ability and efficacy of the proposed multi-objective algorithm, it is applied to a group of benchmarks and five real-world engineering optimization problems. Several widely used metrics are employed in the quantitative statistical comparisons. The proposed algorithm is compared with three multi-objective algorithms: the Multi-Objective Water Cycle Algorithm (MOWCA), the Non-dominated Sorting Genetic Algorithm (NSGA-II), and the Multi-Objective Dragonfly Algorithm (MODA). The results for the benchmarks and engineering problems show that, in general, the accuracy and diversity of the proposed algorithm are better than those of MOWCA and MODA. However, NSGA-II outperformed the proposed work in some cases, showing better accuracy and diversity. Nevertheless, in problems such as the coil compression spring design problem, the quality of solutions produced by the proposed algorithm surpassed all the compared algorithms. Moreover, with regard to processing time, the proposed work provided better results than all the compared algorithms.
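The set of non-dominated solutions that such a multi-objective algorithm returns is defined by Pareto dominance: one solution dominates another if it is no worse in every objective and strictly better in at least one. A minimal filter (minimization assumed, objective vectors as tuples):

```python
def dominates(a, b):
    # a dominates b: no worse in all objectives, strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    # keep only points that no other point dominates (the Pareto front)
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(non_dominated(pts))
```

Metrics such as spread and inverted generational distance, used in comparisons like the one above, are computed over exactly this non-dominated set.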
Sine Cosine Algorithm for Simple Recurrent Neural Network Tuning for Stock Market Prediction
Deep artificial neural networks have recently gained popularity in the time series forecasting literature. Recurrent neural networks have been chosen over other deep neural network approaches because of their higher suitability for this type of problem. Owing to the small number of parameters used, simple recurrent networks are notably uncomplicated, a characteristic that makes them highly suitable for forecasting problems. Unfortunately, finding the right recurrent neural architecture for each specific task is NP-hard, so the employment of metaheuristics is appropriate. Accordingly, the research proposed in this paper tackles tuning simple recurrent neural networks with the sine cosine algorithm for stock market prediction. The proposed method's performance was compared with other metaheuristics and validated on Nikkei stock exchange data.
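The sine cosine search that drives the tuning can be shown on its own. The sketch below implements the standard SCA position update (sinusoidal moves toward the best solution found, with the amplitude r1 decaying to zero) on a simple sphere function; in the paper each candidate would instead encode recurrent-network parameters, and the function and parameter names here are illustrative assumptions:

```python
import math
import random

def sca_minimize(f, dim, bounds, pop=20, iters=200, seed=3):
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=f)[:]
    a = 2.0
    for t in range(iters):
        r1 = a - t * a / iters          # shrinks: exploration -> exploitation
        for x in X:
            for j in range(dim):
                r2 = rng.uniform(0.0, 2.0 * math.pi)
                r3 = rng.uniform(0.0, 2.0)
                # sine branch or cosine branch with equal probability
                move = r1 * (math.sin(r2) if rng.random() < 0.5 else math.cos(r2))
                x[j] += move * abs(r3 * best[j] - x[j])
                x[j] = min(max(x[j], lo), hi)
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best, f(best)

sphere = lambda v: sum(c * c for c in v)
sol, val = sca_minimize(sphere, dim=3, bounds=(-5.0, 5.0))
```

Swapping the sphere function for a validation-loss evaluation of a trained RNN turns this into the hyperparameter tuner the paper describes.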
Pulmonary diffuse airspace opacities diagnosis from chest X-ray images using deep convolutional neural networks fine-tuned by whale optimizer
The early diagnosis and accurate separation of COVID-19 from non-COVID-19 cases based on pulmonary diffuse airspace opacities is one of the challenges facing researchers. Recently, researchers have tried to exploit the capability of Deep Learning (DL) methods to assist clinicians and radiologists in diagnosing positive COVID-19 cases from chest X-ray images. In this approach, DL models, especially Deep Convolutional Neural Networks (DCNNs), offer real-time, automated, effective models to detect COVID-19 cases. However, conventional DCNNs usually use Gradient Descent-based approaches for training fully connected layers. Although GD-based Training (GBT) methods are easy to implement and fast, they demand extensive manual parameter tuning to become optimal. Besides, the GBT procedure is inherently sequential, which makes parallelizing it with Graphics Processing Units very difficult. Therefore, for the sake of having a real-time COVID-19 detector with parallel implementation capability, this paper proposes using the Whale Optimization Algorithm to train the fully connected layers. The designed detector is then benchmarked on a verified dataset called COVID-Xray-5k, and the results are verified by a comparative study with a classic DCNN, DUICM, and a Matched Subspace classifier with Adaptive Dictionaries. The results show that the proposed model, with an average accuracy of 99.06%, provides 1.87% better performance than the best comparison model. The paper also considers the concept of the Class Activation Map to detect the regions potentially infected by the virus; this was found to correlate with clinical results, as confirmed by experts. Although the results are auspicious, further investigation is needed on a larger dataset of COVID-19 images to provide a more comprehensive evaluation of accuracy rates.
Harmony search: Current studies and uses on healthcare systems
One of the popular metaheuristic search algorithms is Harmony Search (HS). It has been verified that HS can find solutions to optimization problems due to its balanced exploratory and convergence behavior and its simple and flexible structure. This capability makes the algorithm preferable for application in several real-world settings in various fields, including healthcare systems, different engineering fields, and computer science. The popularity of HS urges us to provide a comprehensive survey of the literature on HS and its variants in health systems, analyze its strengths and weaknesses, and suggest future research directions. In this review paper, the current studies and uses of harmony search are examined in four main domains: (i) the variants of HS, including its modifications and hybridization; (ii) a summary of previous review works; (iii) applications of HS in healthcare systems; and (iv) finally, a proposed operational framework for the applications of HS in healthcare systems. The main contribution of this review is to provide a thorough examination of HS in healthcare systems while also serving as a valuable resource for prospective scholars who want to investigate or implement this method.
Wireless Sensor Networks Localization by Improved Whale Optimization Algorithm
Wireless sensor networks, composed of a finite number of spatially distributed autonomous sensors, are widely used in different areas with many potential applications. However, in order to be deployed efficiently, especially in poorly accessible terrains, the localization challenge must be addressed. Localization refers to determining the positions of unknown target nodes by using information about the locations of anchor nodes, based on different measurements such as time of arrival, angle of arrival, and time difference of arrival. This task is NP-hard by its nature and cannot be addressed with traditional deterministic approaches. In this research we propose an improved implementation of a swarm intelligence approach, the whale optimization algorithm, to address the localization challenge in wireless sensor networks. Observed drawbacks of the original whale optimization algorithm are overcome in the enhanced implementation by incorporating quasi-reflection-based learning. The proposed metaheuristic is tested using the same network topology and experimental conditions as other advanced metaheuristics whose results are published in the recent computer science literature. Based on simulation results, the devised algorithm establishes lower localization error than the basic whale optimization algorithm, as well as other outstanding metaheuristics.
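The quasi-reflection-based learning step added to WOA has a compact definition: for each coordinate, a quasi-reflected point is sampled uniformly between the centre of the search domain and the current position. A minimal sketch, assuming a symmetric box-shaped domain:

```python
import random

def quasi_reflected(x, lo, hi, rng=random):
    # quasi-reflected point: uniform sample between the domain centre and x,
    # coordinate by coordinate
    c = (lo + hi) / 2.0
    return [rng.uniform(min(c, xi), max(c, xi)) for xi in x]

rng = random.Random(7)
x = [4.0, -3.0, 1.0]
qr = quasi_reflected(x, -5.0, 5.0, rng)
```

Evaluating both a candidate and its quasi-reflected counterpart, and keeping the fitter of the two, is the standard way such points are used to speed up convergence.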
An improved deep convolutional neural network by using hybrid optimization algorithms to detect and classify brain tumor using augmented MRI images
Automated brain tumor detection is becoming an increasingly important area of medical diagnosis research. Recent work on medical diagnosis heavily employs machine learning and deep learning techniques for detection and classification. Nevertheless, the accuracy and performance of current models need to be improved to support suitable treatment. In this paper, deep convolutional learning is improved by adopting enhanced optimization algorithms: a Deep Convolutional Neural Network (DCNN) based on improved Harris Hawks Optimization (HHO), called G-HHO, is considered. This hybridization combines Grey Wolf Optimization (GWO) with HHO to give better results, controlling the convergence rate and enhancing performance. Moreover, Otsu thresholding is adopted to segment the tumor portion, which emphasizes brain tumor detection. Experimental studies were conducted to validate the performance of the suggested method on a total of 2073 augmented MRI images. The technique's performance was assessed by comparing it with nine existing algorithms on the augmented MRI images in terms of accuracy, precision, recall, F-measure, execution time, and memory usage. The performance comparison shows that DCNN-G-HHO is much more successful than existing methods, achieving a scoring accuracy of 97%. Additionally, the statistical performance analysis indicates that the suggested approach is faster and uses less memory when identifying and categorizing brain tumors in MR images. The validation is implemented on the Python platform. The relevant code for the proposed approach is available at: https://github.com/bryarahassan/DCNN-G-HHO.
Optimizing bag-of-tasks scheduling on cloud data centers using hybrid swarm-intelligence meta-heuristic
Usually, a large number of concurrent bag-of-tasks (BoT) application execution requests are submitted to cloud data centers (CDCs) and need to be optimally scheduled on the physical cloud resources to obtain maximal performance. In this paper, the NP-hard cloud task scheduling (CTS) problem for scheduling concurrent BoT applications is modeled as an optimization problem involving the minimization of makespan and energy consumption. The whale optimization algorithm (WOA) has been found effective in solving a wide range of optimization problems. However, standard WOA has certain deficiencies, such as inadequate exploration ability, slow convergence, high computational complexity, and an insufficient exploration-exploitation trade-off. These deficiencies eventually produce unacceptable results when the original WOA is applied to task scheduling optimization problems. To address these limitations, a multi-objective scheduling algorithm called OWPSO is suggested, which incorporates opposition-based learning (OBL) and particle swarm optimization (PSO) mechanisms into the standard WOA method. Firstly, the OBL method is applied to produce an optimal initial population, enhancing the exploration and convergence speed of the proposed OWPSO approach in successive generations. Secondly, PSO and OBL methods are incorporated into the exploration phase of the standard WOA approach to further enhance exploration ability. Thirdly, a fitness-based switching mechanism is added to provide an adequate exploration-exploitation trade-off. Finally, a discrete resource allocation heuristic is incorporated into OWPSO to provide efficient resource allocation.
Simulation experiments on the CloudSim simulator reveal that the OWPSO approach reduces makespan in the range of 1.68-18.38% (for CEA-Curie workloads) and 2.10-24.32% (for HPC2N workloads), and reduces energy consumption in the range of 0.93-14.70% (for CEA-Curie workloads) and 0.73-25.94% (for HPC2N workloads), compared with other well-known meta-heuristics. Statistical tests and box plots further revealed the robustness of the proposed OWPSO algorithm.
Training a multilayer perceptron for modeling stock price index predictions using modified whale optimization algorithm
The prediction of stock market trends represents a challenge that many researchers, investment bankers, and stockbrokers try to solve, as correct predictions of the stock market's direction can be very rewarding. However, stock market forecasting is also one of the hardest tasks, as the stock market is very unpredictable and historical data is largely nonlinear. This research suggests a new machine learning approach to forecast the movement of the stock market index, using the Borsa Istanbul 100 index as a case study. To perform this task, a multilayer perceptron hybridized with a modified whale optimization algorithm is used, with two different output functions, namely Tanh(x) and the Gaussian function. The dataset used in this research included Borsa Istanbul historical data covering the period 1996-2020, during which nine technical indicators were monitored. The obtained experimental results were validated using RMSE, MAPE, and the correlation coefficient, and the proposed method was compared to similar methods recently executed on the same dataset. In conclusion, the suggested approach improved the accuracy of the model and was superior to the other methods observed in the comparative analysis.
Generative adversarial network (GAN) and enhanced root mean square error (ERMSE): deep learning for stock price movement prediction
The prediction of stock price movement direction is significant in both financial circles and academia. Stock prices contain complex, incomplete, and fuzzy information, which makes predicting their development trend an extremely difficult task. Predicting and analysing financial data is a nonlinear, time-dependent problem. With rapid developments in machine learning and deep learning, this task can be performed more effectively by a purposely designed network. This paper aims to improve prediction accuracy and minimize forecasting error loss through a deep learning architecture based on Generative Adversarial Networks. A generic model is proposed consisting of the Phase-space Reconstruction (PSR) method for reconstructing price series and a Generative Adversarial Network (GAN), a combination of two neural networks, Long Short-Term Memory (LSTM) as the generative model and a Convolutional Neural Network (CNN) as the discriminative model, trained adversarially to forecast the stock market. The LSTM generates new instances based on historical basic indicator information, and the CNN then estimates whether the data are generated by the LSTM or real. It was found that the GAN performed well on the enhanced root mean square error compared to the LSTM alone: it was 4.35% more accurate in predicting direction and reduced processing time and RMSE by 78 s and 0.029, respectively. The proposed system thus concentrates on minimizing the root mean square error and processing time while improving direction prediction accuracy, and it provides a better result in the accuracy of the stock index.
Current Studies and Applications of Shuffled Frog Leaping Algorithm: A Review
Shuffled Frog Leaping Algorithm (SFLA) is one of the most widespread algorithms. It was developed by Eusuff and Lansey in 2006. SFLA is a population-based metaheuristic algorithm that combines the benefits of memetics with particle swarm optimization. It has been used in various areas, especially in engineering problems, due to its ease of implementation and limited number of variables. Many improvements have been made to the algorithm to alleviate its drawbacks, whether achieved through modifications or hybridizations with other well-known algorithms. This paper reviews the most relevant works on this algorithm. An overview of the SFLA is first given, followed by the algorithm's most recent modifications and hybridizations. Next, recent applications of the algorithm are discussed. Then, an operational framework of SFLA and its variants is proposed to analyze their uses across different cohorts of applications. Finally, future improvements to the algorithm are suggested. The main incentive for conducting this survey is to provide useful information about the SFLA to researchers interested in working on the algorithm's enhancement or application.
Child drawing development optimization algorithm based on child’s cognitive development
This paper proposes a novel metaheuristic, the Child Drawing Development Optimization (CDDO) algorithm, inspired by a child's learning behavior and cognitive development, using the golden ratio to optimize the beauty behind their art. The golden ratio is closely tied to the sequence of the famous mathematician Fibonacci: the ratio of two consecutive Fibonacci numbers converges to it, and it is prevalent in nature, art, architecture, and design. CDDO uses the golden ratio and mimics cognitive learning and a child's drawing development stages, from the scribbling stage to the advanced pattern-based stage. Hand pressure, the width and length of the child's drawing, and its golden ratio are tuned to attain better results, mirroring how children evolve, improve their intelligence, and collectively achieve shared goals. CDDO shows superior performance in finding the global optimum for the optimization problems tested on 19 benchmark functions. Its results are evaluated against several state-of-the-art algorithms such as PSO, DE, WOA, GSA, and FEP. The performance of CDDO is assessed, and the test results show that it is highly competitive, with an average rank of 2.8. This shows that CDDO is outstandingly robust in exploring new solutions, and it reveals the algorithm's competency in evading local minima, as it covers promising regions extensively within the design space and exploits the best solution.
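The golden ratio that CDDO tunes toward can be recovered numerically from the Fibonacci sequence, since the ratio of consecutive terms converges to it. A short check:

```python
def fib_ratio(n):
    # ratio of consecutive Fibonacci numbers after n steps
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return b / a

phi = (1 + 5 ** 0.5) / 2    # golden ratio, ~1.6180339887
approx = fib_ratio(20)
```

After only 20 steps the ratio already agrees with phi to better than six decimal places, which is why the sequence serves as a practical source of the ratio in the algorithm's width-to-length tuning.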
COV-ECGNET: COVID-19 detection using ECG trace images with deep convolutional neural network
The reliable and rapid identification of COVID-19 has become crucial to prevent the rapid spread of the disease, ease lockdown restrictions, and reduce pressure on public health infrastructures. Recently, several methods and techniques have been proposed to detect the SARS-CoV-2 virus using different images and data. However, this is the first study to explore the possibility of using deep convolutional neural network (CNN) models to detect COVID-19 from electrocardiogram (ECG) trace images. In this work, COVID-19 and other cardiovascular diseases (CVDs) were detected using deep-learning techniques. A public dataset of 1937 ECG images from five distinct categories (normal, COVID-19, myocardial infarction (MI), abnormal heartbeat (AHB), and recovered myocardial infarction (RMI)) was used in this study. Six different deep CNN models (ResNet18, ResNet50, ResNet101, InceptionV3, DenseNet201, and MobileNetv2) were used to investigate three different classification schemes: (i) two-class classification (normal vs COVID-19); (ii) three-class classification (normal, COVID-19, and other CVDs); and (iii) five-class classification (normal, COVID-19, MI, AHB, and RMI). For two-class and three-class classification, DenseNet201 outperforms the other networks with accuracies of 99.1% and 97.36%, respectively, while for five-class classification, InceptionV3 outperforms the others with an accuracy of 97.83%. ScoreCAM visualization confirms that the networks are learning from the relevant areas of the trace images. Since the proposed method uses ECG trace images, which can be captured by smartphones and are readily available in low-resource countries, this study will help in faster computer-aided diagnosis of COVID-19 and other cardiac abnormalities.
Forecasting tunnel boring machine penetration rate using LSTM deep neural network optimized by grey wolf optimization algorithm
Achieving an accurate and reliable estimation of tunnel boring machine (TBM) performance can diminish the hazards related to extreme capital costs and to planning tunnel construction. Here, a hybrid long short-term memory (LSTM) model enhanced by grey wolf optimization (GWO) is developed for predicting the TBM penetration rate (TBM-PR). 1125 data samples were considered, covering six input parameters. To avoid overfitting, the dropout technique was used. The effect of input time series length on model performance was studied. The TBM-PR results of the LSTM-GWO model were compared to those of other machine learning (ML) models, such as the standalone LSTM. The results were evaluated using the root mean square error (RMSE), mean absolute percentage error (MAPE), and coefficient of determination (R2). The LSTM-GWO model produced the most accurate results (test: R2 = 0.9795; RMSE = 0.004; MAPE = 0.009%). The mutual information test revealed that the input parameters of rock fracture class and uniaxial compressive strength have the most and the least impact on the TBM-PR, respectively.
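The GWO component can be illustrated in isolation. The sketch below implements the standard grey wolf update (positions pulled toward the alpha, beta, and delta wolves, with the coefficient a decaying from 2 to 0) on a simple test function; in the paper it would instead search over LSTM hyperparameters, and keeping the three leaders unmodified each generation is an elitist simplification:

```python
import random

def gwo_minimize(f, dim, bounds, pop=20, iters=200, seed=5):
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for t in range(iters):
        wolves.sort(key=f)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 - 2.0 * t / iters            # linearly decreases from 2 to 0
        for i in range(3, pop):              # keep the three leaders (elitist)
            new = []
            for j in range(dim):
                pos = 0.0
                for leader in (alpha, beta, delta):
                    A = 2.0 * a * rng.random() - a
                    C = 2.0 * rng.random()
                    D = abs(C * leader[j] - wolves[i][j])
                    pos += leader[j] - A * D
                new.append(min(max(pos / 3.0, lo), hi))
            wolves[i] = new
    wolves.sort(key=f)
    return wolves[0], f(wolves[0])

sphere = lambda v: sum(c * c for c in v)
best, val = gwo_minimize(sphere, dim=4, bounds=(-10.0, 10.0))
```

Replacing the sphere function with the validation loss of an LSTM whose hyperparameters are encoded in each wolf gives the LSTM-GWO scheme described above.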
A comprehensive survey on the Internet of Things with the industrial marketplace
There is no doubt that new technology has become a crucial part of most people's lives around the world. In this era, the Internet and the Internet of Things (IoT) have become indispensable parts of our lives, and IoT technologies are now among the most broadly used tools. The tools and facilities of IoT technologies within the marketplace are part of Industry 4.0, and the marketplace itself is a new area in which IoT technologies can be applied. One of the main purposes of this paper is to highlight the use of IoT technologies in Industry 4.0; the Industrial Internet of Things (IIoT) is also reviewed. This paper focuses on the value of the IoT in the industrial domain in general: it reviews the IoT, examines its benefits and drawbacks, and presents some IoT applications, such as in transportation and healthcare. In addition, trends and facts related to IoT technologies in the marketplace are reviewed. Finally, the role of the IoT in telemedicine and healthcare, and the benefits of IoT technologies for COVID-19, are presented as well.
FOX: a FOX-inspired optimization algorithm
This paper proposes a novel nature-inspired optimization algorithm called the Fox optimizer (FOX), which mimics the foraging behavior of red foxes in nature when hunting prey. The algorithm is based on measuring the distance between the fox and its prey in order to execute an efficient jump. After presenting the mathematical models and the FOX algorithm, five classical benchmark functions and the CEC2019 benchmark test functions are used to evaluate its performance. FOX is also compared against the Dragonfly Algorithm (DA), Particle Swarm Optimization (PSO), the Fitness Dependent Optimizer (FDO), Grey Wolf Optimization (GWO), the Whale Optimization Algorithm (WOA), the Chimp Optimization Algorithm (ChOA), the Butterfly Optimization Algorithm (BOA), and the Genetic Algorithm (GA). The results indicate that FOX outperforms these algorithms. Subsequently, the Wilcoxon rank-sum test is used to confirm that FOX is better than the comparative algorithms in a statistically significant manner. Additionally, a parameter sensitivity analysis is conducted to show the different exploratory and exploitative behaviors of FOX. The paper also employs FOX to solve engineering problems, such as pressure vessel design, and an electrical power generation problem: economic load dispatch. FOX achieved better results on these problems than GWO, PSO, WOA, and FDO.
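As a loose illustration of the jump-toward-prey idea, the sketch below implements a simplified population-based minimiser: agents either jump toward the best-known position, scaled by their distance to it, or explore around it within a shrinking radius. These update rules are illustrative assumptions, not the paper's exact FOX equations.

```python
import random

def sphere(x):
    """Classical benchmark: minimum 0 at the origin."""
    return sum(v * v for v in x)

def fox_like_minimise(f, dim=5, agents=20, iters=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(agents)]
    best = min(pop, key=f)
    for t in range(iters):
        radius = 1 - t / iters              # shrinking exploration radius
        for i, x in enumerate(pop):
            if rng.random() < 0.5:          # exploitation: jump toward/past the best
                cand = [b + 0.5 * rng.random() * (b - xi)
                        for b, xi in zip(best, x)]
            else:                           # exploration: sample near the best
                cand = [b + rng.uniform(-radius, radius) for b in best]
            if f(cand) < f(x):              # greedy acceptance per agent
                pop[i] = cand
        best = min(pop + [best], key=f)
    return best

best = fox_like_minimise(sphere)
print(round(sphere(best), 4))
```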
A novel visualization system of using augmented reality in knee replacement surgery: Enhanced bidirectional maximum correntropy algorithm
Background and Aim: Image registration and alignment are the main limitations of augmented reality (AR)-based knee replacement surgery. This research aims to decrease the registration error, eliminate outcomes trapped in local minima so as to improve alignment, handle occlusion, and maximize the overlapping parts. Methodology: A markerless image registration method was used for AR-based knee replacement surgery to guide and visualize the surgical operation, while a weighted least squares algorithm was used to enhance stereo-camera-based tracking by filling border occlusion in the right-to-left direction and non-border occlusion in the left-to-right direction. Results: This study improved video precision to a 0.57–0.61 mm alignment error. Furthermore, with the use of bidirectional points, that is, forward and backward directional cloud points, the number of iterations in image registration was decreased, which also improved the processing time: video frames were processed at 7.4–11.74 frames per second. Conclusions: The proposed system focuses on overcoming the misalignment caused by patient movement and on enhancing AR visualization during knee replacement surgery. The system proved reliable and favourable, helping to eliminate alignment error by ascertaining the optimal rigid transformation between two cloud points and removing outliers and non-Gaussian noise. The proposed AR system supports accurate visualization and navigation of the anatomy of the knee, such as the femur, tibia, cartilage, and blood vessels.
Using Fitness Dependent Optimizer for Training Multi-layer Perceptron
Authors: Dosti Kh Abbas, Tarik A Rashid, Karmand H Bacanin, Abeer Alsadoon. Publication date: 2022/1/3. This study presents a novel training algorithm based on the recently proposed Fitness Dependent Optimizer (FDO). The stability of this algorithm has been verified, and its performance proven, in both the exploration and exploitation stages using standard measurements, which motivated us to gauge its performance in training multilayer perceptron neural networks (MLP). This study combines FDO with an MLP (codenamed FDO-MLP) to optimize weights and biases for predicting student outcomes, which can improve the learning system with respect to students' educational backgrounds as well as increase their achievements. The experimental results of this approach are affirmed by comparison with the back-propagation algorithm (BP) and several evolutionary models: FDO with a cascade MLP (FDO-CMLP), the Grey Wolf Optimizer (GWO) combined with an MLP (GWO-MLP), modified GWO combined with an MLP (MGWO-MLP), GWO with a cascade MLP (GWO-CMLP), and modified GWO with a cascade MLP (MGWO-CMLP). The qualitative and quantitative results prove that the proposed approach, using FDO as the trainer, outperforms the other approaches on the dataset in terms of convergence speed and local optima avoidance. The proposed FDO-MLP approach achieves a classification rate of 0.97.
An extensive dataset of handwritten central Kurdish isolated characters
To collect handwritten examples of isolated Kurdish characters, each character was printed on a 14 × 9 grid on A4 paper. Each sheet contains only one printed character, so that the volunteers knew which character to write on each sheet. Each sheet was then scanned, spliced, and cropped with a Photoshop macro to ensure the same process was applied to all characters. The character grids were filled mainly by student volunteers from multiple universities in Erbil.
Chaotic fitness-dependent optimizer for planning and engineering design
The fitness-dependent optimizer (FDO) is a recent metaheuristic algorithm that mimics the reproduction behavior of the bee swarm in finding better hives. The algorithm is similar to particle swarm optimization but works differently; it is very powerful and achieves better results than other common metaheuristic algorithms. This paper aims to improve the performance of FDO; thus, chaos theory is used inside FDO to propose a chaotic FDO (CFDO). Ten chaotic maps are used in CFDO to determine which of them perform well in avoiding local optima and finding the global optimum. A new technique is also used to keep the population within specific limits, since FDO has difficulty amending its population. The proposed CFDO is evaluated using 10 benchmark functions from CEC2019, and the results show that the ability of CFDO is improved: the Singer map has the greatest impact on improving CFDO, while the Tent map is the worst. The results show that CFDO is superior to GA, FDO, and CSO; both CEC2013 and CEC2005 are additionally used to evaluate CFDO. Finally, the proposed CFDO is applied to classical engineering problems, such as pressure vessel design, and the results show that CFDO handles the problem better than WOA, GWO, FDO, and CGWO. Besides, CFDO is applied to solve the task assignment problem and then compared to the original FDO; the results prove that CFDO has a better capability to solve this problem.
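The chaotic maps named above are simple one-dimensional iterations that replace random number streams in the optimizer; a minimal sketch of three of them (the Singer map is written in its commonly cited form with mu = 1.07):

```python
def logistic_map(x, r=4.0):
    """Logistic map; fully chaotic at r = 4."""
    return r * x * (1 - x)

def tent_map(x, mu=2.0):
    """Tent map: piecewise-linear, chaotic at mu = 2."""
    return mu * x if x < 0.5 else mu * (1 - x)

def singer_map(x, mu=1.07):
    """Singer map in its commonly cited polynomial form."""
    return mu * (7.86 * x - 23.31 * x ** 2 + 28.75 * x ** 3 - 13.302875 * x ** 4)

def chaotic_sequence(map_fn, x0=0.7, n=5):
    """Iterate a chaotic map to produce a parameter sequence in [0, 1]."""
    seq, x = [], x0
    for _ in range(n):
        x = map_fn(x)
        seq.append(x)
    return seq

print(chaotic_sequence(logistic_map))   # deterministic yet non-repeating
```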
A new K-means grey wolf algorithm for engineering problems
Purpose This paper studies meta-heuristic algorithms. One of the common meta-heuristic optimization algorithms is grey wolf optimization (GWO); the key aim is to address the limitations of the wolves' search process when attacking prey. Design/methodology/approach Researchers have increasingly developed meta-heuristic algorithms and applied them extensively in business, science, and engineering. In this paper, the K-means clustering algorithm is used to enhance the performance of the original GWO; the new algorithm is called K-means clustering grey wolf optimization (KMGWO). Findings The results illustrate the efficiency of KMGWO relative to GWO. To evaluate its performance, KMGWO is applied to the CEC2019 benchmark test functions. Originality/value The results prove that KMGWO is superior to GWO. KMGWO is also compared to cat swarm optimization (CSO), the whale optimization algorithm-bat algorithm (WOA-BAT), WOA, and GWO, and achieves the first rank in terms of performance. In addition, KMGWO is used to solve a classical engineering problem, where it is again superior.
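For reference, the underlying GWO update that KMGWO builds on can be sketched as follows. This is the canonical position update (positions pulled toward the alpha, beta, and delta wolves with a coefficient decreasing linearly from 2 to 0); the paper's K-means clustering step is not shown:

```python
import random

def sphere(x):
    """Classical benchmark: minimum 0 at the origin."""
    return sum(v * v for v in x)

def gwo(f, dim=5, wolves=20, iters=200, seed=3):
    rng = random.Random(seed)
    pack = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=f)
        alpha, beta, delta = pack[0], pack[1], pack[2]
        a = 2 * (1 - t / iters)              # decreases linearly from 2 to 0
        for i in range(wolves):
            new = []
            for j in range(dim):
                pull = 0.0
                for leader in (alpha, beta, delta):
                    A = 2 * a * rng.random() - a
                    C = 2 * rng.random()
                    D = abs(C * leader[j] - pack[i][j])
                    pull += leader[j] - A * D
                new.append(pull / 3)         # average of the three pulls
            pack[i] = new
    return min(pack, key=f)

best = gwo(sphere)
print(round(sphere(best), 6))
```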
Artificial intelligence algorithms for natural language processing and the semantic web ontology learning
Evolutionary clustering algorithms are considered among the most popular and widely used evolutionary algorithms for solving optimisation and practical problems in nearly all fields. In this thesis, a new evolutionary clustering algorithm star (ECA*) is proposed. A number of experiments were conducted to evaluate ECA* against five state-of-the-art approaches. For this, 32 heterogeneous and multi-featured datasets were used to examine performance using internal and external clustering measures, and to measure the sensitivity of performance to dataset features in the form of an operational framework. The results indicate that ECA* surpasses its competitive techniques in its ability to find the right clusters. Given this superior performance, exploiting and adapting ECA* for ontology learning was a natural next step. In the process of deriving concept hierarchies from corpora, generating the formal context may be time-consuming; reducing the size of the formal context removes uninteresting and erroneous pairs, so that extracting the concept lattice and concept hierarchies takes less time. On this premise, this work proposes a framework to reduce the ambiguity of the formal context of the existing framework using an adaptive version of ECA*. An experiment was conducted by applying 385 sample corpora from Wikipedia to the two frameworks to examine the reduction in formal context size, which in turn yields the concept lattice and concept hierarchy. The resulting lattice of the formal context was evaluated against the original one using concept lattice invariants. Accordingly, the homomorphism between the two lattices preserves the quality of the resulting concept hierarchies by 89% in contrast to the basic ones, and the reduced concept lattice inherits the structural relations of the original one.
Critical analysis: bat algorithm-based investigation and application on several domains
Purpose The purpose of this study is to provide the reader with a full study of the bat algorithm: its limitations, the fields in which it has been applied, versatile optimization problems in different domains, and the studies that assess its performance against other meta-heuristic algorithms. Design/methodology/approach The bat algorithm is covered in depth in terms of background, characteristics, and limitations. The study also surveys the algorithms that have been hybridized with BA (K-medoids, back-propagation neural networks, the harmony search algorithm, differential evolution strategies, enhanced particle swarm optimization, and the cuckoo search algorithm) and their theoretical results, as well as the modifications that have been made to the algorithm (modified bat algorithm, enhanced bat algorithm, bat algorithm with mutation (BAM), uninhabited combat aerial vehicle-BAM, and non-linear optimization). It also provides a summary review focusing on improved and new bat algorithms (directed artificial bat algorithm, complex-valued bat algorithm, principal component analysis-BA, multiple-strategies-coupling bat algorithm, and directional bat algorithm). Findings This study sheds light on the advantages and disadvantages of the algorithm through all the research studies that have dealt with it, in addition to the fields and applications it has addressed, in the hope of helping scientists understand and develop it. Originality/value To the best of the research community's knowledge, no comprehensive survey covering all aspects of this algorithm has previously been conducted.
Dynamic Cat Swarm Optimization algorithm for backboard wiring problem
This paper presents a powerful swarm intelligence metaheuristic optimization algorithm called Dynamic Cat Swarm Optimization, formulated by modifying the existing Cat Swarm Optimization algorithm. The original Cat Swarm Optimization suffers from premature convergence, that is, the possibility of entrapment in local optima, which usually happens due to an imbalance between the exploration and exploitation phases. Therefore, the proposed algorithm suggests a new method to provide a proper balance between these phases by modifying the selection scheme and the seeking mode of the algorithm. To evaluate the performance of the proposed algorithm, 23 classical test functions, 10 modern test functions (CEC 2019), and a real-world scenario are used. In addition, the dimension-wise diversity metric is used to measure the proportions of the exploration and exploitation phases. The optimization results show the effectiveness of the proposed algorithm, which ranks first compared to several well-known algorithms from the literature. Furthermore, statistical methods and graphs further confirm the algorithm's outperformance. Finally, conclusions and future directions for further improving the algorithm are discussed.
Improvement of variant adaptable LSTM trained with metaheuristic algorithms for healthcare analysis
Recently, the population of the world has increased, and with it health problems; diabetes mellitus, for example, affects the health of many patients globally. The task of this chapter is to develop a dynamic and intelligent decision support system for patients with different diseases, examining machine-learning techniques supported by optimization techniques. Artificial neural networks have been used in healthcare for several decades, and most research works utilize a multilayer perceptron (MLP) trained with the back-propagation (BP) learning algorithm for diabetes mellitus classification. Nonetheless, MLPs have drawbacks: convergence can be slow, local minima can affect the training process, they are hard to scale, and they cannot be used with time-series datasets. To overcome these drawbacks, long short-term memory (LSTM), a more advanced form of recurrent neural network, is suggested. In this chapter, an adaptable LSTM trained with two optimization algorithms instead of the back-propagation learning algorithm is presented. The optimization algorithms are biogeography-based optimization (BBO) and the genetic algorithm (GA). One dataset was collected locally, and a benchmark dataset is used as well. Finally, the datasets are fed into the adaptable models, LSTM with BBO (LSTMBBO) and LSTM with GA (LSTMGA), for classification purposes. The experimental and testing results are compared and found promising. This system helps physicians and doctors provide proper health treatment for patients with diabetes mellitus. Details of the source code and implementation of our system can be obtained at "https://github.com/hamakamal/LSTM".
Performance evaluation results of evolutionary clustering algorithm star for clustering heterogeneous datasets
This article presents the data used to evaluate the performance of the evolutionary clustering algorithm star (ECA*) compared to five traditional and modern clustering algorithms. Two experimental methods are employed to examine the performance of ECA* against the genetic algorithm for clustering++ (GENCLUST++), learning vector quantisation (LVQ), expectation maximisation (EM), K-means++ (KM++), and K-means (KM). These algorithms are applied to 32 heterogeneous and multi-featured datasets to determine which performs well on the three tests. First, the paper examines the efficiency of ECA* against its corresponding algorithms using clustering evaluation measures; these validation criteria are objective-function and cluster-quality measures. Second, it suggests a performance-rating framework to measure the sensitivity of these algorithms' performance to various dataset features (cluster dimensionality, number of clusters, cluster overlap, cluster shape, and cluster structure). The contributions of these experiments are twofold: (i) ECA* exceeds its counterpart algorithms in its ability to find the right number of clusters; (ii) ECA* is less sensitive to dataset features compared to its competitive techniques. Nonetheless, the experiments also demonstrate some limitations of ECA*: (i) ECA* is not fully applied on the premise that no prior knowledge exists; (ii) adapting and utilising ECA* in several real applications has not yet been achieved.
Feedforward Multi-Layer Perceptron Training by Hybridized Method between Genetic Algorithm and Artificial Bee Colony
Artificial neural networks (ANNs) represent the most advanced method in machine learning, and they are being increasingly applied to find solutions to different problems. Nevertheless, owing to the high dimensionality of the search domain, one of the biggest ANN challenges to this day remains the complexity of training itself. It is particularly hard to determine the right values for parameters such as weights and biases: poorly chosen parameters can lead to inaccurate results, increased execution time, and prolonged development of the network. Genetic algorithms (GAs) have proved a promising technique for training ANNs, but the basic GA exhibits slow and premature convergence by becoming trapped in local optima of the search space. To overcome this drawback, in this work we present a hybridized approach between the GA and the artificial bee colony (ABC) swarm intelligence metaheuristic. By incorporating an exploration procedure from the ABC algorithm, the GA's deficiencies concerning local optima stagnation and premature convergence are suppressed. The proposed hybrid approach was applied to feedforward multi-layer perceptron (MLP) training, with simulations performed on three standard medical datasets. Based on the simulation results and the obtained performance metrics, the proposed method shows robust performance in terms of the classification test error rate.
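The hybridization idea, GA operators plus an ABC-style exploration step, can be sketched as follows. This is an illustrative toy (tournament selection, blend crossover, Gaussian mutation, and an ABC scout step that re-seeds stagnant individuals), with a simple quadratic standing in for a network's training error; these are not the paper's exact operators:

```python
import random

def loss(w):
    """Stand-in for a network's training error over a flat weight vector w."""
    return sum(v * v for v in w)

def ga_abc_train(f, dim=6, pop_size=30, gens=150, seed=5):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    stale = [0] * pop_size
    best = min(pop, key=f)
    for _ in range(gens):
        for i in range(pop_size):
            p1 = min(rng.sample(pop, 3), key=f)   # tournament selection
            p2 = min(rng.sample(pop, 3), key=f)
            child = [a + rng.random() * (b - a) + rng.gauss(0, 0.05)
                     for a, b in zip(p1, p2)]     # blend crossover + mutation
            if f(child) < f(pop[i]):
                pop[i], stale[i] = child, 0
            else:
                stale[i] += 1
                if stale[i] > 10:                 # ABC scout: abandon and re-seed
                    pop[i] = [rng.uniform(-5, 5) for _ in range(dim)]
                    stale[i] = 0
            if f(pop[i]) < f(best):
                best = pop[i]
    return best

best_w = ga_abc_train(loss)
print(round(loss(best_w), 4))
```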
Tunnel geomechanical parameters prediction using Gaussian process regression
The purpose of this study is to apply a modern intelligent method, Gaussian process regression (GPR), to predict the geological parameter of rock quality designation (RQD) along a tunnel route; the method can also be used to predict any geological parameter of future tunnel levels. The GPR method was studied using data obtained from 51 tunnels all over the world. Fifty datasets were utilized for intelligent modeling, while one dataset, belonging to the Hamru tunnel in Iran, was used to evaluate the prediction approach. The comparison results indicate that the GPR model's predictions are generally in good agreement with the actual results, and the proposed GPR on the whole performs better than the support vector machine (SVM), artificial neural network (ANN), and linear regression (LR) in predictive analysis of the RQD parameter.
ANA: ant nesting algorithm for optimizing real-world problems
In this paper, a novel swarm intelligence algorithm called the ant nesting algorithm (ANA) is proposed. The algorithm is inspired by Leptothorax ants and mimics the behavior of ants searching for positions to deposit grains while building a new nest. Although it is inspired by the swarming behavior of ants, it has no algorithmic similarity to the ant colony optimization (ACO) algorithm. ANA is a continuous algorithm that updates the search-agent position by adding a rate of change (e.g., a step or velocity); it computes this rate of change differently, using previous and current solutions and their fitness values during the optimization process to generate weights via the Pythagorean theorem. These weights drive the search agents during the exploration and exploitation phases. The ANA algorithm is benchmarked on 26 well-known test functions, and the results are verified by a comparative study with the genetic algorithm (GA), particle swarm optimization (PSO), the dragonfly algorithm (DA), five modified versions of PSO, the whale optimization algorithm (WOA), the salp swarm algorithm (SSA), and the fitness dependent optimizer (FDO). ANA outperforms these prominent metaheuristic algorithms on several test cases and provides quite competitive results. Finally, the algorithm is employed to optimize two well-known real-world engineering problems: antenna array design and frequency-modulated synthesis. The results on the engineering case studies demonstrate the proposed algorithm's capability in optimizing real-world problems.
Energy efficient clustering in wireless sensor networks by opposition-based initialization bat algorithm
Wireless sensor networks belong to the group of technologies that enabled the emergence and fast development of other novel technologies, such as cloud computing, environmental and air pollution monitoring, and health applications. One important challenge that must be solved for any wireless sensor network is energy-efficient clustering, which is categorized as an NP-hard problem. This has led to a great number of novel clustering algorithms, which emerged with the sole purpose of establishing a proper balance in energy consumption between the sensors and enhancing the efficiency and lifetime of the network itself. In this manuscript, a modified version of the bat algorithm, which belongs to the group of nature-inspired swarm intelligence metaheuristics, is proposed. The devised algorithm was utilized to tackle energy-efficient clustering problems. The performance of the proposed improved bat metaheuristic has been validated through a comparative analysis with its original version, as well as with other metaheuristic approaches tested on the same problem. The results obtained from the conducted experiments suggest that the proposed method's performance is superior and that it could bring valuable results in other domains of use as well.
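For orientation, the canonical bat algorithm core loop that such modifications start from can be sketched as follows (the standard frequency, velocity, pulse-rate, and loudness rules on a toy continuous benchmark); the clustering-specific encoding and the paper's modifications are not shown:

```python
import random

def sphere(x):
    """Toy objective standing in for a clustering cost; minimum 0 at origin."""
    return sum(v * v for v in x)

def bat_algorithm(f, dim=4, bats=15, iters=300, seed=7):
    rng = random.Random(seed)
    f_min, f_max = 0.0, 2.0              # frequency range
    loudness, pulse_rate = 0.9, 0.5      # kept constant for simplicity
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(bats)]
    V = [[0.0] * dim for _ in range(bats)]
    best = min(X, key=f)[:]
    for _ in range(iters):
        for i in range(bats):
            freq = f_min + (f_max - f_min) * rng.random()
            V[i] = [v + (x - b) * freq for v, x, b in zip(V[i], X[i], best)]
            cand = [x + v for x, v in zip(X[i], V[i])]
            if rng.random() > pulse_rate:            # local walk around the best
                cand = [b + 0.01 * rng.gauss(0, 1) for b in best]
            if f(cand) < f(X[i]) and rng.random() < loudness:
                X[i] = cand                          # accept improving move
            if f(cand) < f(best):
                best = cand[:]
    return best

best = bat_algorithm(sphere)
print(round(sphere(best), 4))
```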
A novel cluster detection of COVID-19 patients and medical disease conditions using improved evolutionary clustering algorithm star
With the increasing number of samples, the manual clustering of COVID-19 and medical disease data samples becomes time-consuming and requires highly skilled labour. Recently, several algorithms have been used for clustering medical datasets deterministically; however, these approaches have not been effective in grouping and analysing medical diseases. The use of evolutionary clustering algorithms may help to cluster these diseases effectively. On this presumption, we improved the current evolutionary clustering algorithm star (ECA*), producing iECA*, in three ways: (i) utilising the elbow method to find the correct number of clusters; (ii) cleaning and processing data as part of iECA* so that it applies to multivariate and domain-theory datasets; (iii) using iECA* for real-world applications in clustering COVID-19 and medical disease datasets. Experiments were conducted to examine the performance of iECA* against state-of-the-art algorithms using performance and validation measures (validation measures, statistical benchmarking, and a performance-ranking framework). The results demonstrate three primary findings. First, iECA* was more effective than the other algorithms in grouping the chosen medical disease datasets according to the cluster validation criteria. Second, iECA* exhibited lower execution time and memory consumption for clustering all the datasets, compared to the current clustering methods analysed. Third, an operational framework was proposed to rate the effectiveness of iECA* against the other algorithms on the datasets analysed, and the results indicated that iECA* exhibited the best performance in clustering all medical datasets. Further research is required on real-world multi-dimensional data containing complex knowledge fields for experimental verification of iECA* compared to evolutionary algorithms.
A multidisciplinary ensemble algorithm for clustering heterogeneous datasets
Clustering is a commonly used method for exploring and analysing data, where the primary objective is to categorise observations into similar clusters. In recent decades, several algorithms and methods have been developed for analysing clustered data. We notice that most of these techniques deterministically define a cluster based on the values of attributes, distance, and density in homogeneous, single-featured datasets; however, these definitions are not successful in adding clear semantic meaning to the clusters produced. Evolutionary operators and statistical and multidisciplinary techniques may help in generating meaningful clusters. Based on this premise, we propose a new evolutionary clustering algorithm (ECA*) based on social class ranking and meta-heuristic algorithms for stochastically analysing heterogeneous and multi-featured datasets. ECA* integrates recombinational evolutionary operators, Levy flight optimisation, and statistical techniques such as quartiles and percentiles, as well as the Euclidean distance of the K-means algorithm. Experiments are conducted to evaluate ECA* against five conventional approaches: K-means (KM), K-means++ (KM++), expectation maximisation (EM), learning vector quantisation (LVQ), and the genetic algorithm for clustering++ (GENCLUST++). To that end, 32 heterogeneous and multi-featured datasets are used to examine their performance, using internal, external, and basic statistical clustering measures, and to measure how sensitive their performance is to five features of these datasets (cluster overlap, number of clusters, cluster dimensionality, cluster structure, and cluster shape) in the form of an operational framework. The results indicate that ECA* surpasses its counterpart techniques in its ability to find the right clusters.
Significantly, compared to its counterpart techniques, ECA* is also less sensitive to the five dataset properties mentioned above. Thus, the order of overall performance of these algorithms, from best to worst, is ECA*, EM, KM++, KM, LVQ, and GENCLUST++. Meanwhile, the overall performance rank of ECA* is 1.1 (where a rank of 1 represents the best-performing algorithm and a rank of 6 the worst) across the 32 datasets, based on the five dataset features mentioned above.
Formal context reduction in deriving concept hierarchies from corpora using adaptive evolutionary clustering algorithm star
It is beneficial to automate the process of deriving concept hierarchies from corpora, since manual construction of concept hierarchies is typically a time-consuming and resource-intensive process. The overall process of learning concept hierarchies from corpora encompasses a set of steps: parsing the text into sentences, splitting the sentences, and then tokenising them. After the lemmatisation step, pairs are extracted using formal concept analysis (FCA). However, the formal context may contain uninteresting and erroneous pairs, and generating it can be time-consuming, so formal context size reduction is required to remove such pairs, taking less time to extract the concept lattice and concept hierarchies accordingly. On this premise, this study proposes two frameworks: (1) a framework reviewing the current process of deriving concept hierarchies from corpora using formal concept analysis (FCA); (2) a framework decreasing the formal context's ambiguity in the first framework using an adaptive version of the evolutionary clustering algorithm star (ECA*). Experiments are conducted by applying 385 sample corpora from Wikipedia to the two frameworks to examine the reduction in formal context size, which in turn yields the concept lattice and concept hierarchy. The resulting lattice of the formal context is evaluated against the standard one using concept lattice invariants. Accordingly, the homomorphism between the two lattices preserves the quality of the resulting concept hierarchies by 89% in contrast to the basic ones, and the reduced concept lattice inherits the structural relations of the standard one. The adaptive ECA* is also examined against four counterpart baseline algorithms (fuzzy K-means, the JBOS approach, the AddIntent algorithm, and FastAddExtent) by measuring execution time on random datasets with different densities (fill ratios).
The results show that the adaptive ECA* computes the concept lattice faster than these competitive techniques across the different fill ratios.
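The FCA step at the core of both frameworks can be illustrated on a toy context. The sketch below enumerates all formal concepts (closed extent-intent pairs) of an invented three-object, three-attribute context; real corpora yield far larger contexts, which is why the reduction above matters:

```python
from itertools import combinations

# Toy formal context: which object has which attribute (invented example).
objects = ["dog", "sparrow", "trout"]
attributes = ["breathes", "flies", "swims"]
incidence = {
    ("dog", "breathes"), ("sparrow", "breathes"), ("trout", "breathes"),
    ("sparrow", "flies"), ("trout", "swims"),
}

def extent(attrs):
    """All objects having every attribute in attrs."""
    return frozenset(o for o in objects
                     if all((o, a) in incidence for a in attrs))

def intent(objs):
    """All attributes shared by every object in objs."""
    return frozenset(a for a in attributes
                     if all((o, a) in incidence for o in objs))

# A formal concept is a closed (extent, intent) pair; enumerate them all.
concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(attributes, r):
        e = extent(attrs)
        concepts.add((e, intent(e)))

for e, i in sorted(concepts, key=lambda c: (-len(c[0]), sorted(c[1]))):
    print(sorted(e), "<->", sorted(i))
```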
Hybrid genetic algorithm and machine learning method for covid-19 cases prediction
A novel type of coronavirus, now known under the acronym COVID-19, was initially discovered in the city of Wuhan, China. Since then, it has spread across the globe and now affects over 210 countries worldwide. The number of confirmed cases is rapidly increasing and reached over 14 million on July 18, 2020, with over 600,000 confirmed deaths. In the research presented in this paper, a new forecasting model to predict the number of confirmed COVID-19 cases is proposed. The model is a hybrid between a machine learning adaptive neuro-fuzzy inference system and an enhanced genetic algorithm metaheuristic: the enhanced genetic algorithm determines the parameters of the adaptive neuro-fuzzy inference system and enhances the overall quality and performance of the prediction model. The proposed hybrid method was tested using a realistic official dataset on the COVID-19 outbreak in China and compared against multiple existing state-of-the-art techniques tested in the same environment, on the same datasets. Based on the simulation results and the conducted comparative analysis, the proposed hybrid approach outperformed the other sophisticated approaches, and it can also be used as a tool for other time-series prediction tasks.
A new evolutionary algorithm: Learner performance based behavior algorithm
A novel evolutionary algorithm called the learner performance based behavior algorithm (LPB) is proposed in this article. The basic inspiration of LPB originates from the process of accepting graduated high-school learners into different university departments, and from the changes those learners must make in their studying behaviors to improve their study level at university. The most important stages of optimization, exploitation and exploration, are modeled by the process of admitting high-school graduates to university and by the procedure of improving the learners' studying behavior at university, respectively. To show the accuracy of the proposed algorithm, it is evaluated on a number of test functions, including traditional benchmark functions, the CEC-C06 2019 test functions, and a real-world case study problem. The results of the proposed algorithm are then compared to the DA, GA, and PSO. The proposed algorithm produced superior results in most cases and competitive results in others, and it proved to have a great ability to deal with large optimization problems compared to the DA, GA, and PSO. The overall results proved the ability of LPB to improve the initial population and converge towards the global optimum; moreover, the results of the proposed work are confirmed statistically.
Hybrid fruit-fly optimization algorithm with k-means for text document clustering
The fast-growing Internet results in massive amounts of text data. Due to the large volume and unstructured format of text data, extracting relevant information and analyzing it becomes very challenging. Text document clustering is a text-mining process that partitions a set of text-based documents into mutually exclusive clusters such that documents within the same group are similar to each other, while documents from different clusters differ in content. One of the biggest challenges in text clustering is partitioning the collection of text data by measuring the relevance of the content in the documents. To address this issue, this work proposes a hybrid of a swarm intelligence algorithm, the fruit-fly optimization algorithm, with the K-means algorithm for text clustering. First, the hybrid fruit-fly optimization algorithm is tested on ten unconstrained CEC2019 benchmark functions. Next, the proposed method is evaluated on six standard benchmark text datasets. The experimental evaluation on the unconstrained functions, as well as on text-based documents, indicates that the proposed approach is robust and superior to other state-of-the-art methods.
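The K-means half of the hybrid can be sketched with TF-IDF vectors and cosine similarity. This stdlib-only toy uses fixed, hand-picked initial centroids; in the paper, finding good starting points is precisely the fruit-fly optimizer's job, which is not reproduced here.

```python
import math
from collections import Counter

# Toy corpus; the paper clusters six standard benchmark text datasets.
docs = [
    "machine learning and data mining",
    "deep learning neural networks",
    "football match goal score",
    "tennis match score win",
]

def tfidf(documents):
    """Represent each document as a sparse dict of TF-IDF weights."""
    tokenized = [d.split() for d in documents]
    df = Counter(t for doc in tokenized for t in set(doc))
    n = len(documents)
    vecs = []
    for doc in tokenized:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * math.log((1 + n) / (1 + df[t]))
                     for t, c in tf.items()})
    return vecs

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def kmeans(vecs, seeds, iters=10):
    """K-means with cosine similarity; `seeds` are the initial centroids."""
    centroids = [dict(s) for s in seeds]
    for _ in range(iters):
        assign = [max(range(len(centroids)),
                      key=lambda i: cosine(v, centroids[i])) for v in vecs]
        for i in range(len(centroids)):
            members = [v for v, a in zip(vecs, assign) if a == i]
            if members:  # new centroid = term-wise mean of member vectors
                merged = Counter()
                for v in members:
                    merged.update(v)
                centroids[i] = {t: w / len(members) for t, w in merged.items()}
    return [max(range(len(centroids)),
                key=lambda i: cosine(v, centroids[i])) for v in vecs]

vecs = tfidf(docs)
labels = kmeans(vecs, seeds=[vecs[0], vecs[2]])  # deterministic seeding for the sketch
```

On this corpus the two "learning" documents and the two sports documents end up in separate clusters.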
Real‑time COVID-19 diagnosis from X-Ray images using deep CNN and extreme learning machines stabilized by chimp optimization algorithm
Real-time detection of COVID-19 using radiological images has gained priority due to the increasing demand for fast diagnosis of COVID-19 cases. This paper introduces a novel two-phase approach for classifying chest X-ray images. Deep Learning (DL) methods alone fail to meet this demand, since training and fine-tuning the model's parameters consume much time. In this approach, the first phase trains a deep CNN to work as a feature extractor, and the second phase uses Extreme Learning Machines (ELMs) for real-time detection. The main drawback of ELMs is the large number of hidden-layer nodes needed to obtain a reliable and accurate detector in image-processing applications, since detection performance depends strongly on the setting of the initial weights and biases. Therefore, this paper uses the Chimp Optimization Algorithm (ChOA) to improve results and increase the reliability of …
Performance of a Novel Chaotic Firefly Algorithm with Enhanced Exploration for Tackling Global Optimization Problems: Application for Dropout Regularization
Swarm intelligence techniques have been created to respond to theoretical and practical global optimization problems. This paper puts forward an enhanced version of the firefly algorithm that corrects the acknowledged drawbacks of the original method through an explicit exploration mechanism and a chaotic local search strategy. The resulting augmented approach was theoretically tested on two sets of bound-constrained benchmark functions from the CEC suites and practically validated for automatically selecting the optimal dropout rate for the regularization of deep neural networks. Despite their successful applications in a wide spectrum of different fields, one important problem that deep learning algorithms face is overfitting. The traditional way of preventing overfitting is to apply regularization; the first option in this sense is the choice of an adequate value for the dropout parameter. In order to demonstrate its ability to find an optimal dropout rate, the boosted version of the firefly algorithm has been validated for the deep learning subfield of convolutional neural networks on five standard benchmark datasets for image processing: MNIST, Fashion-MNIST, Semeion, USPS and CIFAR-10. The performance of the proposed approach in both types of experiments was compared with other recent state-of-the-art methods. To prove that there are significant improvements in the results, statistical tests were conducted. Based on the experimental data, it can be concluded that the proposed algorithm clearly outperforms other approaches.
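The two ingredients named above — an explicit randomization term for exploration and a chaotic local search around the best solution — can be sketched on a simple sphere benchmark. All constants and the exact form of the chaotic step are illustrative assumptions, not the paper's implementation.

```python
import math
import random

random.seed(2)

def sphere(x):  # bound-constrained benchmark objective (lower is better)
    return sum(v * v for v in x)

def chaotic_firefly(dim=2, n=15, iters=100, beta0=1.0, gamma=1.0, alpha=0.2):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    z = 0.7  # state of the logistic chaotic map
    for _ in range(iters):
        pop.sort(key=sphere)  # brightness = inverse of the objective value
        for i in range(n):
            for j in range(i):  # move firefly i toward every brighter firefly j
                r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                beta = beta0 * math.exp(-gamma * r2)  # attractiveness decays with distance
                pop[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                          for a, b in zip(pop[i], pop[j])]
        z = 4.0 * z * (1.0 - z)  # logistic map drives the chaotic local search
        trial = [v + 0.1 * (2.0 * z - 1.0) for v in pop[0]]
        if sphere(trial) < sphere(pop[0]):  # keep the chaotic step only if it improves
            pop[0] = trial
        alpha *= 0.97  # decaying randomization strength
    return min(pop, key=sphere)

best = chaotic_firefly()
```

The decaying `alpha` term plays the exploration role early on, while the logistic-map step refines the incumbent best late in the run.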
A comprehensive survey and taxonomy of the SVM-based intrusion detection systems
The increasing number of security attacks has inspired researchers to employ various classifiers, such as support vector machines (SVMs), to deal with them in intrusion detection systems (IDSs). This paper presents a comprehensive study and investigation of the SVM-based intrusion detection and feature selection systems proposed in the literature. It first presents the essential concepts and background knowledge on security attacks, IDSs, and SVM classifiers. It then provides a taxonomy of the SVM-based IDS schemes and describes how they have adapted numerous types of SVM classifiers to detect various types of anomalies and intrusions. Moreover, it discusses the main contributions of the investigated schemes and highlights the algorithms and techniques combined with the SVM to enhance its detection rate and accuracy. Finally, different properties and limitations of the SVM-based IDS schemes are discussed.
IoMT and healthcare delivery in chronic diseases
Digital health broadly incorporates categories such as mobile health (mHealth), health information technology (IT) comprising electronic health records and reimbursements, wearable devices, telehealth and telemedicine. Recent advancements on this front include the Internet of Things (IoT) ecosystem, which provides a connected ecosystem for seamless information flow across the technologies involved at the hardware, software and networking layers. These enabling technologies include devices embedded with sensors, actuators and communication protocols, which transmit and receive data in real time. Along with other end applications such as smart energy transmission, smart homes, intelligent logistics and smart towns, healthcare provides an attractive opportunity area for successful implementation. Current estimates predict that nearly 60% of organizations in the healthcare industry have implemented IoT in partial or complete form, to deliver value to patients and transition from a disjoint and reactive model towards an interoperable and proactive service delivery model. Internet of Medical Things (IoMT)-enabled machine-to-machine interaction between devices in the patient's body environment, together with its enabling architecture, is predicted to have a particularly high impact in chronic disease care. Further, this topic broadly reviews clinician-side transformations of the technology, explaining how IoT applications create value for patients in different scenarios and their relevance in clinical settings.
Advances in Telemedicine for Health Monitoring Technologies, design and applications
Advances in telemedicine technologies have offered clinicians greater levels of real-time guidance and technical assistance for diagnoses, monitoring, operations or interventions from colleagues based in remote locations. The topic includes the use of videoconferencing, mentorship during surgical procedures, or machine-to-machine communication to process data from one location by programmes running in another.
The COVID-19 infection and the immune system: The role of complementary and alternative medicines.
Emerging in China in late 2019, the new COVID-19 epidemic is growing rapidly, and new cases are reported around the world. The first cases were linked to a wet market; subsequently, the virus spread rapidly in China through human-to-human transmission, and its impact is now felt worldwide. COVID-19 is a type of viral pneumonia caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). Currently, no clinically approved antiviral drugs have been introduced for SARS-CoV-2 infection. Identifying the mechanism of action of the virus and its interaction with the immune system will help prevent and treat the disease. In other words, understanding the disease and its effect on the immune system will improve disease management. The immune system has a fundamental protective function against most infectious diseases, including SARS-CoV-2. This study investigates the effectiveness of Complementary and Alternative Medicines (CAMs) in boosting the immune response against infectious diseases. The role of vitamins in the early stages of COVID-19 is also investigated, and previous research findings are reported. The results of this study are important especially for patients with COVID-19, who may find CAMs an effective way of boosting the immune response against this virus and a useful option for the management and treatment of COVID-19 in its early stages. We suggest that further studies analysing consumers' experience with CAMs are required to reach robust conclusions on the effectiveness of CAMs for the management and treatment of COVID-19.
Next word prediction based on the N-gram model for Kurdish Sorani and Kurmanji
Next word prediction is an input technology that simplifies typing by suggesting the next word for the user to select, as typing in a conversation consumes time. A few previous studies have focused on the Kurdish language, including the use of next word prediction. However, the lack of a Kurdish text corpus presents a challenge. Moreover, the lack of a sufficient number of N-grams for the Kurdish language, for instance five-grams, is the reason next word prediction is rarely used for Kurdish. Furthermore, the improper display of several Kurdish letters in the RStudio software is another problem. This paper provides a Kurdish corpus, creates N-grams up to five-grams, and presents a unique research work on next word prediction for Kurdish Sorani and Kurmanji. The N-gram model is used for next word prediction to reduce the time spent typing in the Kurdish language. In addition, since little work has been conducted on next-word prediction for Kurdish, the N-gram model is utilized to suggest text accurately. To do so, R programming and RStudio are used to build the application. The model is 96.3% accurate.
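The N-gram prediction idea is simple enough to sketch directly: count which word follows each (n-1)-word context and suggest the most frequent continuation. The toy English corpus below is a stand-in; the paper trains on a Kurdish corpus in R.

```python
from collections import Counter, defaultdict

# Tiny English stand-in corpus; the paper trains on a Kurdish corpus.
corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

def build_ngram_model(tokens, n):
    """Map each (n-1)-word context to a Counter of the words that follow it."""
    model = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        context = tuple(tokens[i:i + n - 1])
        model[context][tokens[i + n - 1]] += 1
    return model

def predict_next(model, context):
    """Suggest the most frequent continuation of the given context."""
    counts = model.get(tuple(context))
    return counts.most_common(1)[0][0] if counts else None

bigram = build_ngram_model(corpus, n=2)
trigram = build_ngram_model(corpus, n=3)
```

Longer contexts (up to the five-grams mentioned above) trade coverage for precision: a trigram prediction is more specific but more often finds no matching context.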
An energy efficient service composition mechanism using a hybrid meta-heuristic algorithm in a mobile cloud environment
With the increasing role of mobile devices in technology and human life, the runtime use of mobile services has become more complex, involving the composition of large numbers of atomic services. Different services are provided by mobile cloud components, whose non-functional properties are represented as Quality of Service (QoS) attributes defined by a set of standards. On the other hand, the growing heterogeneity of energy sources in mobile clouds is an emerging challenge for the problem of energy saving in mobile nodes. Since mobile cloud service composition is an NP-hard problem, an efficient selection method based on energy-aware optimization is needed, one that can extend the deployment and interoperability of mobile cloud components. In addition, an energy-aware service composition mechanism is required to support high-energy-saving scenarios for mobile cloud components. In this paper, an energy-aware mechanism is applied to optimize mobile cloud service composition using a hybrid of the Shuffled Frog Leaping Algorithm and the Genetic Algorithm (SFGA). Experimental results show that the proposed mechanism improves the feasibility of service composition with minimum energy consumption, response time, and cost for mobile cloud components compared with several existing algorithms.
Datasets on statistical analysis and performance evaluation of backtracking search optimisation algorithm compared with its counterpart algorithms
In this data article, we present the data used to evaluate the statistical success of the backtracking search optimisation algorithm (BSA) in comparison with four other evolutionary optimisation algorithms. The data presented in this article relate to the research article entitled 'Operational Framework for Recent Advances in Backtracking Search Optimisation Algorithm: A Systematic Review and Performance Evaluation' [1]. Three statistical tests were conducted on BSA in comparison with the differential evolution algorithm (DE), particle swarm optimisation (PSO), artificial bee colony (ABC), and the firefly algorithm (FF). The tests are used to evaluate these algorithms and to determine which one can solve a specific optimisation problem, with statistical success measured over 16 benchmark problems and taking several criteria into account. The criteria include the initial control parameters, the dimension of the problems, their search space, the number of iterations needed to minimise a problem, the performance of the computer used to code the algorithms and their programming style, balancing the effect of randomisation, and the use of different types of optimisation problems in terms of hardness and their cohort. In addition, all three tests include the necessary statistical measures (Mean: mean solution, S.D.: standard deviation of the mean solution, Best: the best solution, Worst: the worst solution, Exec. Time: mean runtime in seconds, No. of succeeds: number of successful minimisations, and No. of Failure: number of failed minimisations).
Deep learning for vision‐based fall detection system: Enhanced optical dynamic flow
Accurate fall detection for the assistance of older people is crucial to reduce deaths and injuries due to falls. Vision-based fall detection systems have shown significant results in detecting falls, yet numerous challenges remain to be resolved. The impact of deep learning has changed the landscape of vision-based systems, for example in action recognition. However, deep learning techniques have not been successfully implemented in vision-based fall detection systems due to the large amounts of computational power and sample training data required. This research aims to propose a vision-based fall detection system that improves the accuracy of fall detection in complex environments, such as rooms with changing light conditions, and to increase the performance of the pre-processing of video images. The proposed system consists of an Enhanced Dynamic Optical Flow technique that encodes the temporal data of optical flow videos by rank pooling, which improves the processing time of fall detection and the classification accuracy in dynamic lighting conditions. The experimental results showed that the classification accuracy of fall detection improved by around 3% and the processing time by 40–50 ms. The proposed system concentrates on decreasing the processing time of fall detection and improving the classification accuracy. Meanwhile, it provides a mechanism for summarizing a video into a single image using the dynamic optical flow technique, which helps to increase the performance of the image pre-processing steps.
Remote tracking of Parkinson's disease progression using ensembles of deep belief network and self-organizing map
Parkinson’s Disease (PD) is one of the most prevalent neurological disorders, characterized by impairment of motor function. Early diagnosis of PD is important for initial treatment. This paper presents a newly developed method for remote tracking of PD progression, based on deep learning and clustering approaches. Specifically, we use a Deep Belief Network (DBN) and Support Vector Regression (SVR) to predict the Unified Parkinson's Disease Rating Scale (UPDRS). The DBN prediction models were developed with different numbers of epochs. We use a clustering approach, namely the Self-Organizing Map (SOM), to improve the accuracy and scalability of the prediction. We evaluate our method on a real-world PD dataset. In all, nine clusters were detected from the data with the best SOM map quality for clustering, and for each cluster a DBN was developed with a specific number of epochs. The results of the DBN prediction models were integrated by the SVR technique. Further, we compare our work with other supervised learning techniques, SVR and Neuro-Fuzzy techniques. The results revealed that the hybrid of clustering and DBNs, with SVR used to ensemble the DBN outputs, makes better predictions of Total-UPDRS and Motor-UPDRS than the other learning techniques.
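The SOM stage that groups patients before per-cluster DBNs are trained can be sketched with its core update rule: find the best matching unit and pull it and its grid neighbours toward the sample, with learning rate and neighbourhood radius decaying over time. The grid size, decay schedules, and toy data below are illustrative assumptions.

```python
import math
import random

random.seed(5)

def som_train(data, nodes=3, iters=200, lr0=0.5, radius0=1.5):
    """Train a small 1-D SOM: each node holds a weight vector that is pulled
    toward the presented sample, with a neighbourhood that shrinks over time."""
    dim = len(data[0])
    weights = [[random.random() for _ in range(dim)] for _ in range(nodes)]
    for t in range(iters):
        frac = t / iters
        lr = lr0 * (1.0 - frac)          # decaying learning rate
        radius = radius0 * (1.0 - frac)  # shrinking neighbourhood radius
        x = data[t % len(data)]          # present samples in a fixed cycle
        # best matching unit (BMU): the node whose weights are closest to x
        bmu = min(range(nodes),
                  key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))
        for i in range(nodes):
            d = abs(i - bmu)  # distance to the BMU along the 1-D grid
            h = math.exp(-(d * d) / (2.0 * radius * radius + 1e-12))
            weights[i] = [w + lr * h * (v - w) for w, v in zip(weights[i], x)]
    return weights

# Two well-separated toy clusters standing in for patient feature vectors.
data = [(0.1, 0.1), (0.15, 0.05), (0.9, 0.9), (0.85, 0.95)]
weights = som_train(data)
```

Since each update is a convex combination of a weight and a sample, the trained weights stay inside the data's bounding box.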
Operational framework for recent advances in backtracking search optimisation algorithm: A systematic review and performance evaluation
Backtracking search optimisation algorithm (BSA) is a commonly used meta-heuristic optimisation algorithm that was proposed by Civicioglu in 2013. When it was first used, it exhibited strong potential for solving numerical optimisation problems. Additionally, the experiments conducted in previous studies demonstrated the successful performance of BSA and its non-sensitivity towards several types of optimisation problems. This success motivated researchers to work on extending BSA, e.g., developing improved versions or employing it in different applications and problem domains. However, there is a lack of literature review on BSA; therefore, systematically reviewing the aforementioned modifications and applications will aid further development of the algorithm. This paper provides a systematic review and meta-analysis that emphasises the related studies and recent developments on BSA. Hence, the objectives of this work are two-fold: (i) first, two frameworks for depicting the main extensions and uses of BSA are proposed. The first is a general framework to depict the main extensions of BSA, whereas the second is an operational framework to present the expansion procedures of BSA, to guide researchers who are working on improving it. (ii) Second, the experiments conducted in this study fairly compare the analytical performance of BSA with four other competitive algorithms: differential evolution (DE), particle swarm optimisation (PSO), artificial bee colony (ABC), and firefly (FF), on 16 benchmark functions of different hardness scores and with different initial control parameters such as problem dimensions and search spaces. The experimental results indicate that BSA is statistically superior to the aforementioned algorithms in solving different cohorts of numerical optimisation problems, such as problems with different levels of hardness score, problem dimensions, and search spaces.
This study can act as a systematic and meta-analysis guide for scholars working on improving BSA.
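BSA's distinctive ingredient is its memory of a historical population, used to generate search directions. The sketch below shows that core loop (memory refresh and shuffle, mutation toward the historical individual, dimension-wise crossover, greedy selection) on a stand-in sphere objective; the crossover map is simplified relative to Civicioglu's original formulation.

```python
import random

random.seed(3)

def sphere(x):  # stand-in benchmark objective (lower is better)
    return sum(v * v for v in x)

def bsa(dim=2, pop_size=20, iters=200, mix_rate=0.8, lo=-5.0, hi=5.0):
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    historical = [row[:] for row in pop]  # BSA's memory of a past population
    for _ in range(iters):
        # Selection-I: sometimes refresh the memory, then always shuffle it
        if random.random() < random.random():
            historical = [row[:] for row in pop]
        random.shuffle(historical)
        F = 3.0 * random.gauss(0.0, 1.0)  # mutation amplitude, redrawn per generation
        for i in range(pop_size):
            # Mutation: move toward (or past) the shuffled historical individual
            mutant = [p + F * (h - p) for p, h in zip(pop[i], historical[i])]
            # Crossover (simplified map): mix mutant and parent dimension-wise
            trial = [m if random.random() < mix_rate else p
                     for m, p in zip(mutant, pop[i])]
            trial = [min(hi, max(lo, v)) for v in trial]
            # Selection-II: greedy replacement keeps only improvements
            if sphere(trial) < sphere(pop[i]):
                pop[i] = trial
    return min(pop, key=sphere)

best = bsa()
```

The greedy Selection-II step makes each individual's objective value monotonically non-increasing, which is why the run converges reliably on this simple surface.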
Coronary heart disease diagnosis through self-organizing map and fuzzy support vector machine with incremental updates
The trade-off between computation time and predictive accuracy is important in the design and implementation of clinical decision support systems. Machine learning techniques with incremental updates have proved useful in analyzing large collections of medical datasets for disease diagnosis. This research aims to develop a predictive method for heart disease diagnosis using machine learning techniques. To this end, the proposed method is developed with unsupervised and supervised learning techniques. In particular, this research relies on Principal Component Analysis (PCA), the Self-Organizing Map, the Fuzzy Support Vector Machine (Fuzzy SVM), and two imputation techniques for missing values. Furthermore, we apply incremental PCA and incremental Fuzzy SVM learning to reduce the computation time of disease prediction. Our analysis of two real-world datasets, Cleveland and Statlog, showed that the use of the incremental Fuzzy SVM can significantly improve the accuracy of heart disease classification. The experimental results further revealed that the method is effective in reducing the computation time of disease diagnosis relative to the non-incremental learning technique.
A novel hybrid GWO with WOA for global numerical optimization and solving pressure vessel design
The whale optimization algorithm (WOA) is a recent metaheuristic inspired by the hunting behavior of the humpback whale. However, WOA suffers from poor performance in the exploitation phase and stagnates in local optima. Grey wolf optimization (GWO) is a very competitive algorithm compared to other common metaheuristics, as it performs very well in the exploitation phase when tested on unimodal benchmark functions. Therefore, the aim of this paper is to hybridize GWO with WOA to overcome these problems. In this paper, a hybrid of WOA with GWO, called WOAGWO, is presented. The proposed hybrid model consists of two steps. First, the hunting mechanism of GWO is embedded into the WOA exploitation phase under a new GWO-related condition. Second, a new technique is added to the exploration phase to improve the solution after each iteration. Experiments are conducted on three different sets of benchmark functions: 23 common functions, 25 CEC2005 functions, and 10 CEC2019 functions. The proposed WOAGWO is also evaluated against the original WOA, GWO, and three other commonly used algorithms. Results show that WOAGWO outperforms the other algorithms according to the Wilcoxon rank-sum test. Finally, WOAGWO is also applied to an engineering problem, pressure vessel design, and the results show that WOAGWO achieves an optimal solution better than those of WOA and the fitness-dependent optimizer (FDO).
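The hybridization idea — replacing WOA's plain encircling step with GWO's three-leader hunting while keeping WOA's spiral update — can be sketched as follows. This is a simplified illustration on a sphere function, not the paper's exact conditions or exploration improvement; the 0.5 branching probability and the elitism line are assumptions of the sketch.

```python
import math
import random

random.seed(4)

def sphere(x):  # unimodal benchmark objective (lower is better)
    return sum(v * v for v in x)

def woagwo_sketch(dim=2, n=20, iters=150):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for t in range(iters):
        pop.sort(key=sphere)
        alpha, beta, delta = pop[0], pop[1], pop[2]  # GWO's three leaders
        a = 2.0 - 2.0 * t / iters  # coefficient decreasing linearly from 2 to 0
        new_pop = [alpha[:]]       # elitism keeps the current best (sketch only)
        for x in pop[1:]:
            if random.random() < 0.5:
                # Exploitation: GWO hunting embedded in place of WOA's plain
                # encircling step - average of the three leader-guided moves.
                guided = []
                for d in range(dim):
                    pos = 0.0
                    for leader in (alpha, beta, delta):
                        A = 2.0 * a * random.random() - a
                        C = 2.0 * random.random()
                        pos += leader[d] - A * abs(C * leader[d] - x[d])
                    guided.append(pos / 3.0)
                new_pop.append(guided)
            else:
                # WOA's spiral (bubble-net) update around the best solution
                l = random.uniform(-1.0, 1.0)
                new_pop.append([abs(alpha[d] - x[d]) * math.exp(l)
                                * math.cos(2.0 * math.pi * l) + alpha[d]
                                for d in range(dim)])
        pop = new_pop
    return min(pop, key=sphere)

best = woagwo_sketch()
```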
Cat Swarm Optimization Algorithm: A Survey and Performance Evaluation
This paper presents an in-depth survey and performance evaluation of the cat swarm optimization (CSO) algorithm. CSO is a robust and powerful metaheuristic swarm-based optimization approach that has received very positive feedback since its emergence. It has been applied to many optimization problems, and many variants of it have been introduced. However, the literature lacks a detailed survey or performance evaluation in this regard. Therefore, this paper reviews all of these works, including CSO's developments and applications, and groups them accordingly. In addition, CSO is tested on 23 classical benchmark functions and 10 modern (CEC 2019) benchmark functions. The results are then compared against three novel and powerful optimization algorithms, namely the dragonfly algorithm (DA), the butterfly optimization algorithm (BOA), and the fitness dependent optimizer (FDO). The algorithms are then ranked according to the Friedman test, and the results show that CSO ranks first overall. Finally, statistical approaches are employed to further confirm the outperformance of the CSO algorithm.
A Comprehensive Study on Pedestrians' Evacuation
Human beings face threats from unexpected events, whose worst outcomes, injury and death, can be avoided through an adequate crisis evacuation plan. Consequently, various pedestrian evacuation models have been developed, and through applied research these models have been examined across different applications, simulations, and conditions in order to arrive at an operational model. Furthermore, new models have been developed for evacuation systems in residential places in case of unexpected events. This research provides an inclusive, systematic survey of pedestrian evacuation models and methods, focusing on the applications' features, techniques, and implications, and then groups them into various types, for example classical models, hybridized models, and generic models. The analysis assists scholars in this field in writing their forthcoming papers and can suggest a novel structure for recent intelligent simulation models with novel features.
Investigating the effect of competitiveness power in estimating the average weighted price in electricity market
This paper evaluates the impact of market power on price in the electricity market. To achieve this, the degree of competitiveness of the electricity market during specific periods of the day is considered, and the effect of this degree of competitiveness on the accuracy of daily power price forecasting is assessed. A price forecasting model based on a multi-layer perceptron trained with backpropagation using the Levenberg-Marquardt mechanism is used. The Residual Supply Index (RSI) and other variables that affect prices are used as inputs to the model to evaluate market competitiveness. Results show that using market power indices as inputs helps to increase forecasting accuracy. Thus, the degree of market competitiveness in different daily time periods is a notable variable in price formation, and market players cannot ignore the explanatory power of market power in price forecasting. In this research, real electricity market data from 2013 are used, and the main source of the data is the Grid Management Company in Iran.
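The RSI input used above is a simple ratio: for each supplier, how much of demand the rest of the market could serve if that supplier withheld its entire capacity. The capacities and demand below are hypothetical, and the often-cited 1.1 pivotal-supplier threshold is a general convention assumed here, not taken from the paper.

```python
def residual_supply_index(capacities, demand):
    """RSI for each supplier: the fraction of demand the rest of the market
    could still serve if that supplier withheld all of its capacity."""
    total = sum(capacities.values())
    return {name: (total - cap) / demand for name, cap in capacities.items()}

# Illustrative capacities in MW (hypothetical numbers, not the paper's data).
capacities = {"GenA": 400.0, "GenB": 300.0, "GenC": 300.0}
rsi = residual_supply_index(capacities, demand=550.0)
```

Here GenA's RSI is (1000 − 400) / 550 ≈ 1.09, the lowest of the three, flagging it as the supplier closest to being pivotal in this toy market.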
Updating ground conditions and time-cost scatter-gram in tunnels during excavation
Minimizing uncertainty is an important issue in project design and planning. Usually, the uncertainties in subsurface projects arise from unknown ground conditions, which may cause the designer to fail to consider all the potential issues that can occur during construction. Uncertainty in total time and cost can be considered the most important uncertainty during the planning and excavation of a tunnel project, and it is directly connected with knowledge of the subsurface conditions. This work presents an updating procedure and associated code, which allows one to refine predictions during construction. Updating does not only involve replacing the original prediction with actual data from the excavation but also includes a learning effect: the updating uses information from the actual excavation to arrive at an improved prediction for the unexcavated part. Updating the ground conditions and the time-cost scatter-gram in tunnels during excavation is a tool which helps the user refine input parameters by deriving relevant information from construction data and presenting it together with the original input. In this paper, an example project shows that updating has a significant impact on the precision of the prediction and substantially reduces the uncertainty about ground conditions and the construction time and cost of the tunnel. It helps both owners and contractors understand the risk they carry before construction of the unexcavated part, and it is useful for both tendering and bidding.
A study of the convolutional neural networks applications
At present, deep learning is widely used in a broad range of areas. Convolutional neural networks (CNNs) are becoming the star of deep learning, as they give the best and most precise results when solving real-world problems. In this work, a brief description of the applications of CNNs in two areas is presented: first, in computer vision generally, that is, scene labeling, face recognition, action recognition, and image classification; second, in natural language processing, that is, the fields of speech recognition and text classification.
A multi hidden recurrent neural network with a modified grey wolf optimizer
Identifying university students’ weaknesses results in better learning and can function as an early warning system to enable students to improve. However, the satisfaction level of existing systems is not promising. New and dynamic hybrid systems are needed to imitate this mechanism. A hybrid system (a modified Recurrent Neural Network with an adapted Grey Wolf Optimizer) is used to forecast students’ outcomes. This proposed system would improve instruction by the faculty and enhance the students’ learning experiences. The results show that a modified recurrent neural network with an adapted Grey Wolf Optimizer has the best accuracy when compared with other models.
An intelligent method for iris recognition using supervised machine learning techniques
In the new millennium, with the chaotic situations that exist in the world, people are threatened by multifarious terrorist attacks, and several intelligent methods have been devised to recognize and diminish these assaults. Biometric traits have proven to be one of the useful means of tackling these problems. Among these traits, iris recognition systems are appropriate tools for human identification, not only because the iris pattern is a well-known feature, but also because it offers compact representation, unique texture, and stability. Although many approaches have been published in this area, problems remain, such as long runtimes and computational complexity. To overcome these obstacles, we propose an iris recognition method based on the combination of a two-dimensional Gabor kernel (2-DGK), step filtering (SF) and polynomial filtering (PF) for feature extraction, and a radial basis function neural network (RBFNN) hybridized with a genetic algorithm (GA) for the matching task. To assess the performance of the proposed method, we use benchmark datasets, implementing it on the CASIA-Iris V3, UBIRIS.V1 and UCI machine learning repository datasets. The experimental results reveal that the method is efficient for iris recognition.
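The 2-D Gabor kernel used for feature extraction is a Gaussian envelope multiplied by an oriented sinusoidal carrier. The sketch below generates the real part of such a kernel; the size, wavelength, and aspect-ratio values are illustrative defaults, not the paper's tuned parameters.

```python
import math

def gabor_kernel(size=7, theta=0.0, wavelength=4.0, sigma=2.0, gamma=0.5):
    """Real part of a 2-D Gabor kernel: a Gaussian envelope times a
    cosine carrier, oriented at angle theta (radians)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates into the filter's orientation
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr * xr + gamma * gamma * yr * yr)
                                / (2.0 * sigma * sigma))
            carrier = math.cos(2.0 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

k = gabor_kernel()
```

Convolving an iris image with a bank of such kernels at several orientations and wavelengths yields the texture responses that feed the matching stage.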
Donkey and smuggler optimization algorithm: A collaborative working approach to path finding
Swarm intelligence is a metaheuristic optimization approach that has become very prominent over the last few decades. These algorithms are inspired by animals' physical behaviors and their evolutionary perceptions. The simplicity of these algorithms allows researchers to simulate different natural phenomena to solve various real-world problems. This paper suggests a novel algorithm called the Donkey and Smuggler Optimization Algorithm (DSO). The DSO is inspired by the searching behavior of donkeys: the algorithm imitates transportation behavior in the real world, such as searching for and selecting routes for movement. Two modes are established for implementing search behavior and route selection in this algorithm: the Smuggler and the Donkeys. In the Smuggler mode, all possible paths are discovered and the shortest path is then found. In the Donkeys mode, several donkey behaviors are utilized, such as Run, Face & Suicide, and Face & Support. Real-world data and applications are used to test the algorithm. The experimental results consist of two parts. First, we used standard benchmark test functions to evaluate the performance of the algorithm against the most popular and state-of-the-art algorithms. Second, the DSO was adapted and implemented on three real-world applications, namely the traveling salesman problem, packet routing, and ambulance routing. The experimental results of DSO on these real-world problems are very promising, exhibiting that the suggested DSO is appropriate for tackling other unfamiliar search spaces and complex problems.
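The Smuggler mode's job — evaluating the possible routes and returning the shortest one — can be illustrated with a standard shortest-path routine. Dijkstra's algorithm is used here purely as a stand-in for that mode; the road network is hypothetical, and the paper's actual DSO mechanics (including the Donkeys-mode behaviors) are not reproduced.

```python
import heapq

def shortest_path(graph, start, goal):
    """Return (path, cost) of the cheapest route from start to goal."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # walk the predecessor chain back from the goal
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

roads = {  # illustrative road network: node -> [(neighbour, distance)]
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.0)],
}
path, cost = shortest_path(roads, "A", "D")
```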
Dragonfly algorithm and its applications in applied science survey
One of the most recently developed heuristic optimization algorithms is the dragonfly algorithm by Mirjalili. The dragonfly algorithm has shown its ability to optimize different real-world problems, and it has three variants. In this work, an overview of the algorithm and its variants is presented, and the hybridization versions of the algorithm are discussed. Furthermore, the results of applications that utilized the dragonfly algorithm in applied science are presented for the following areas: machine learning, image processing, wireless, and networking. The algorithm is then compared with some other metaheuristic algorithms. In addition, it is tested on the CEC-C06 2019 benchmark functions. The results show that the algorithm has great exploration ability and that its convergence rate is better than those of other algorithms in the literature, such as PSO and GA. In general, this survey discusses the strong and weak points of the algorithm and recommends future work that will help address its weak points. This study is conducted with the hope of offering beneficial information about the dragonfly algorithm to researchers who want to study it.
Fuzzy logic approach for infectious disease diagnosis: A methodical evaluation, literature and classification
This paper presents a systematic review and classification of the literature on fuzzy logic applications in infectious disease. Although the emergence of infectious diseases and their subsequent spread have a significant impact on global health and economics, a comprehensive literature evaluation of this topic had yet to be carried out. The current study therefore provides the first systematic, identifiable, and comprehensive academic literature evaluation and classification of fuzzy logic methods in infectious diseases. Forty papers on this topic, published from 2005 to 2019 and related to human infectious diseases, were evaluated and analyzed. The findings clearly show that fuzzy logic methods are widely used for the diagnosis of diseases such as dengue fever, hepatitis, and tuberculosis. The key fuzzy logic methods used for infectious disease are the fuzzy inference system, rule-based fuzzy logic, the Adaptive Neuro-Fuzzy Inference System (ANFIS), and the fuzzy cognitive map. Furthermore, accuracy, sensitivity, specificity, and the Receiver Operating Characteristic (ROC) curve were universally applied for performance evaluation of the fuzzy logic techniques. The survey also addresses the differing needs of industries, practitioners, and researchers to encourage more work in the overlooked areas, and it concludes with several suggestions for future infectious disease research.
Measuring sustainability through ecological sustainability and human sustainability: A machine learning approach
Nowadays, sustainability is recognized as one of the most important development paradigms and is included in the international and national strategies of almost all organizations. Sustainability assessment methods are important for monitoring sustainability performance and are developed to help decision-makers in their attempts to make society more sustainable. Although various methods have been proposed for assessing sustainability performance, this research is the first attempt to apply fuzzy clustering and supervised machine learning techniques to country sustainability assessment. It extends previous sustainability assessment systems by using these techniques to reveal the relationships between human sustainability, ecological sustainability, and overall sustainability performance through discovered decision rules. The decision rules, discovered from the Sustainability Assessment by Fuzzy Evaluation data of 128 countries, are used to predict country sustainability performance. The proposed method is flexible enough to accept a large number of sustainability indicators for use in country assessment. The results of our analysis show that the proposed method has the potential to serve as a decision-making tool for sustainability assessment across a large set of indicators.
Factors influencing medical tourism adoption in Malaysia: A DEMATEL-Fuzzy TOPSIS approach
Tourism is one of the largest and most competitive industries in the world, and medical tourism is quickly developing as its health- and wellness-care segment. Many factors influence the development of medical tourism in developing countries, and this research aims to identify those factors for Malaysia. We draw the factors from the literature to develop a new decision-making model, and we use two multi-criteria decision-making techniques, the Decision Making Trial and Evaluation Laboratory (DEMATEL) and the Fuzzy Technique for Order of Preference by Similarity to Ideal Solution (Fuzzy TOPSIS), to reveal the interrelationships among the factors and to find their relative importance in the model. The results show that human and technological factors are the most important for medical tourism adoption in Malaysia. These findings will enable key players in the medical tourism industry to target investments for medical tourism in developing countries.
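The Fuzzy TOPSIS step described above builds on the classical TOPSIS idea: score each alternative by its closeness to an ideal point and its distance from an anti-ideal point, per criterion. A minimal crisp (non-fuzzy) sketch; the criteria weights and scores below are purely illustrative, not the study's data:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution.
    matrix: rows = alternatives, columns = criteria.
    benefit[j] is True when criterion j should be maximized."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply its weight.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal (best) and anti-ideal (worst) value per criterion.
    ideal = [max(row[j] for row in v) if benefit[j] else min(row[j] for row in v)
             for j in range(n)]
    anti = [min(row[j] for row in v) if benefit[j] else max(row[j] for row in v)
            for j in range(n)]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((row[j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((row[j] - anti[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient in [0, 1]
    return scores

# Toy example: three candidate factors rated on two benefit criteria.
scores = topsis([[7, 9], [8, 7], [9, 6]], weights=[0.6, 0.4], benefit=[True, True])
best = max(range(len(scores)), key=lambda i: scores[i])
```

The fuzzy variant replaces the crisp ratings with fuzzy numbers, but the ideal/anti-ideal closeness ranking works the same way.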
A Systematic and Meta-Analysis Survey of Whale Optimization Algorithm
The whale optimization algorithm (WOA) is a nature-inspired metaheuristic optimization algorithm proposed by Mirjalili and Lewis in 2016, and it has shown its ability to solve many problems. Comprehensive surveys have been conducted on other nature-inspired algorithms, such as ABC and PSO, but no such survey existed for WOA. Therefore, this paper conducts a systematic and meta-analysis survey of WOA to help researchers use it in different areas or hybridize it with other common algorithms. WOA is presented in depth in terms of its algorithmic background, characteristics, limitations, modifications, hybridizations, and applications. Next, WOA's performance on different problems is presented, and the statistical results of WOA modifications and hybridizations are compared against the most common optimization algorithms and against WOA itself. The survey's results indicate that WOA performs better than other common algorithms in terms of convergence speed and the balance between exploration and exploitation, and that WOA modifications and hybridizations also perform well compared to plain WOA. In addition, our investigation paves the way for a new technique that hybridizes the WOA and BAT algorithms, using BAT for the exploration phase and WOA for the exploitation phase. The statistical results obtained from WOA-BAT are very competitive and better than WOA on 16 benchmark functions, and WOA-BAT also performs well on 13 functions from CEC2005 and 7 functions from CEC2019.
Revealing customers’ satisfaction and preferences through online review analysis: The case of Canary Islands hotels
Travelers can enjoy a wide range of choices with the assistance of online review websites such as TripAdvisor. Online reviews provided by customers are an important part of hotels' online business worldwide, as they help in understanding customers' observations of hotels' product and service features. Hotel managers seek to understand travelers' satisfaction and hotel preferences through online reviews to improve their marketing strategy and decision making. This research uses traveler-generated content in online hotel reviews to provide reliable, benchmark insights into customers' satisfaction and preferences. The aim of this study is therefore to identify the important factors for hotel selection based on previous travelers' reviews on TripAdvisor. Accordingly, we develop a new method combining Multi-Criteria Decision-Making (MCDM) and soft computing approaches. Focusing on the case study of the Canary Islands hotels, we show how this method can be applied to determine the satisfaction and preferences that influence travelers' hotel choices. The results identify four customer segments for Canary Islands hotels: "Highly Satisfied Travelers", "Satisfied Travelers", "Moderately Satisfied Travelers", and "Unsatisfied Travelers", showing that different travelers have varying degrees of satisfaction and dissimilar preferences. We found that segmenting travelers' preferences and satisfaction is a crucial stage in traveler behavior analysis for improving the quality of hotels' products and services. This form of analysis can enhance hotel managers' understanding of different market segments according to customers' satisfaction levels and preferences. The findings will help managers set priorities for improving the corresponding hotel features and use online customer reviews to improve customer satisfaction and hotel performance.
Fitness Dependent Optimizer: Inspired by the Bee Swarming Reproductive Process
In this paper, a novel swarm intelligence algorithm is proposed, known as the fitness dependent optimizer (FDO). The algorithm is inspired by the reproductive swarming process of bees and their collective decision-making; it has no algorithmic connection with the honey bee algorithm or the artificial bee colony algorithm. The FDO can be considered a particle swarm optimization (PSO)-based algorithm in that it updates each search agent's position by adding a velocity (pace), but it calculates this velocity differently: it uses the problem's fitness function value to produce weights, and these weights guide the search agents during both the exploration and exploitation phases. This paper presents the FDO algorithm and explains the motivation behind it. The FDO is tested on a group of 19 classical benchmark test functions, and the results are compared with three well-known algorithms: PSO, the genetic algorithm (GA), and the dragonfly algorithm (DA). In addition, the FDO is tested on the IEEE Congress of Evolutionary Computation benchmark test functions (CEC-C06, 2019 Competition), and the results are compared with three modern algorithms: DA, the whale optimization algorithm (WOA), and the salp swarm algorithm (SSA). The FDO shows better performance in most cases and comparable results in others. The results are statistically validated with the Wilcoxon rank-sum test, and the FDO's stability in both the exploration and exploitation phases is verified using different standard measurements. Finally, the FDO is applied to real-world applications as evidence of its feasibility.
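The fitness-weight mechanism described above can be sketched roughly as follows. This is a simplified reading of the idea for a minimization problem, not the paper's exact update rules: the weight formula, the random-walk fallback, and the toy numbers are all assumptions made for illustration.

```python
import random

def fdo_step(x, x_best, f_x, f_best, rng):
    """One simplified fitness-weighted position update (minimization).
    fw scales the step by how close the agent's fitness is to the best."""
    fw = abs(f_best / f_x) if f_x != 0 else 0.0
    r = rng.uniform(-1.0, 1.0)
    if fw == 0.0 or fw == 1.0:
        pace = [xi * r for xi in x]  # degenerate weight: fall back to a random walk
    else:
        direction = 1.0 if r >= 0 else -1.0
        pace = [(xi - bi) * fw * direction for xi, bi in zip(x, x_best)]
    return [xi + pi for xi, pi in zip(x, pace)]

rng = random.Random(42)
new_x = fdo_step([2.0, -1.0], [0.0, 0.0], f_x=5.0, f_best=1.0, rng=rng)
```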
A modified particle swarm optimization with neural network via Euclidean distance
In this paper, a new modified model of a Feed Forward Neural Network with Particle Swarm Optimization using the Euclidean Distance method (FNNPSOED) is applied to an employee-behavior classification problem. Particle Swarm Optimization (PSO), a nature-inspired algorithm, supports a Feed Forward Neural Network (FNN) with one hidden layer in finding the optimum weights and biases for different numbers of hidden-layer neurons. The key reason for using the Euclidean Distance (ED) with PSO is to take the distance between each pair of feature values and use this distance in place of the random number in the velocity equation of the PSO algorithm. The FNNPSOED classifies employees' behavior using 29 unique features and is evaluated against a Feed Forward Neural Network with standard Particle Swarm Optimization (FNNPSO). The FNNPSOED produced satisfactory results.
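The core idea above, substituting a feature-distance term for the stochastic factor in the PSO velocity equation, can be sketched like this. The squashing of the distance into (0, 1) and all parameter values are assumptions for illustration, not the paper's exact formulation:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def pso_ed_velocity(v, x, pbest, gbest, feat_a, feat_b, w=0.7, c1=1.5, c2=1.5):
    """PSO velocity update where the usual random factors are replaced by a
    value derived from the Euclidean distance between two feature vectors."""
    d = euclidean(feat_a, feat_b)
    r = d / (1.0 + d)  # squash the distance into (0, 1) so it can act as the "random" factor
    return [w * vi + c1 * r * (pi - xi) + c2 * r * (gi - xi)
            for vi, xi, pi, gi in zip(v, x, pbest, gbest)]

v_new = pso_ed_velocity([0.1, -0.2], [1.0, 2.0], [0.5, 1.5], [0.0, 1.0],
                        feat_a=[3.0, 4.0], feat_b=[0.0, 0.0])
```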
Improving Kurdish Web Mining through Tree Data Structure and Porter’s Stemmer Algorithms
Stemming is one of the most important preprocessing techniques for enhancing the accuracy of text classification. Its key purpose is to merge words that share the same stem, thereby decreasing the high dimensionality of the feature space; a smaller feature space reduces both the time needed to construct a model and the memory it requires. In this paper, a new stemming approach is explored for enhancing Kurdish text classification performance, built by combining a tree data structure with Porter's stemmer algorithm. The system is assessed using a support vector machine and a decision tree (C4.5) to illustrate the performance of the suggested stemmer before and after applying it. Furthermore, the usefulness of stop-word removal is considered before and after implementing the suggested approach.
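The combination of a tree (trie) lookup with Porter-style suffix stripping can be illustrated with the following sketch. The suffix list and sample words here are hypothetical stand-ins, not the actual Kurdish affixes from the paper:

```python
# Hypothetical suffix list, tried longest first; the real Kurdish affixes are in the paper.
SUFFIXES = sorted(["ekan", "eke", "an", "e"], key=len, reverse=True)

class Trie:
    """Tree data structure holding known stems for fast lookups."""
    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-word marker

    def contains(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node

def stem(word, known_stems):
    # Porter-style rule: strip the longest suffix whose remainder is a known stem.
    for suf in SUFFIXES:
        if word.endswith(suf) and known_stems.contains(word[: -len(suf)]):
            return word[: -len(suf)]
    return word

stems = Trie()
stems.insert("kteb")  # hypothetical stem
stemmed = stem("ktebekan", stems)  # strips "ekan", leaving the known stem "kteb"
```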
An evaluation of Reber stemmer with longest match stemmer technique in Kurdish Sorani text classification
Stemming is one of the most significant preprocessing stages in text categorization, and most academic investigators aim to use it to improve and optimize the accuracy of the classification task. High dimensionality of the feature space is one of the challenges in text classification; in stemming, it is decreased by grouping words that share the same grammatical forms and extracting their root. This work builds an approach for Kurdish language classification using the Reber stemmer. An innovative approach is investigated to obtain the stems of Kurdish words by removing the longest suffixes and prefixes; it has a strong capability to delete as many of the required affixes as possible to reach the stem. The advantage of this stemmer is that it ignores the ordered list of affixes yet still produces the correct stem for multiple words that share the same format. The stemming technique is implemented on the KDC-4007 dataset, which consists of eight classes, and Support Vector Machine (SVM) and Decision Tree (DT, C4.5) classifiers are used for the classification. The stemmer is compared with the Longest-Match stemming technique. According to the results, the F-measure of both the Reber stemmer and the Longest-Match method is higher with SVM than with DT. With SVM, the Reber stemmer obtained a higher F-measure for the religion, sport, health, and education classes, while the remaining classes were lower with the Longest-Match method. With DT, the Reber stemmer had a higher F-measure for the religion, sport, and art classes, while the remaining classes showed a lower F-measure with the Longest-Match method.
A hybrid of artificial bee colony, genetic algorithm, and neural network for diabetic mellitus diagnosing
Researchers have widely adopted the Artificial Bee Colony (ABC) as an optimization algorithm for classification and prediction problems, and ABC has been combined with different Artificial Intelligence (AI) techniques to obtain optimum performance indicators. This work introduces a hybrid of ABC, a Genetic Algorithm (GA), and a Back Propagation Neural Network (BPNN) for classifying and diagnosing Diabetes Mellitus (DM). The optimization algorithm is combined with the mutation technique of the GA to obtain the optimum set of training weights for the BPNN; the idea is to show that the initial index of weights in their initialized set has an impact on the performance rate. Experiments are conducted in three different cases: the standard BPNN alone, the BPNN trained with ABC, and the BPNN trained with the mutation-based ABC. All three cases are tested on two different diabetes mellitus datasets: a primary dataset, built for this work by collecting 31 features from 501 DM patients in local hospitals, and a secondary dataset, the Pima dataset. Results show that the BPNN trained with the mutation-based ABC produces better local solutions than the standard BPNN and the BPNN trained with ABC alone.
Should academic research be relevant and useful to practitioners? The contrasting difference between three applied disciplines
Many within and outside of academia argue that research conducted in our universities should have an impact on society, especially research from the applied fields. One discipline attracting disproportionate criticism over the relevance of its research is business. While anecdotal evidence surrounding the practical usefulness of business-school research is growing, empirical support for the problem is limited. This study explores the issue from the perspective of accounting practitioners by examining their sources of information, including their use of articles from academic accounting research journals. The issue is further examined by comparing accountants with practitioners from two other applied disciplines, engineering and medicine. A survey of 560 practitioners was undertaken and the data analysed using both descriptive and regression methods. The study found that accounting practitioners' perception of academia and use of academic research is very low, and when compared with engineers and medical practitioners, the differences were statistically significant. Owing to this disconnect from the real world of practice, academic business researchers and business schools become increasingly vulnerable to adverse research funding decisions in the future.
Kurdish stemmer pre-processing steps for improving information retrieval
The rapid increase in the quantity of Kurdish documents over the last several years has created a need for improved accuracy and precision in text classification and retrieval. Stemming is an imperative pre-processing step for increasing the likelihood of matching terms in a document in text classification tasks, as it reduces the total number of searchable terms within a document or query. This article proposes an active approach for stemming Kurdish Sorani texts that reduces variations of words to single terms or stems. The outcomes described in this article demonstrate that decreasing the dimensionality of the feature vectors of documents increases retrieval effectiveness when the stemming process is used. The process, applied here to Kurdish Sorani, can also be adapted to Kurdish Kurmanji for greater efficiency and effectiveness in digital text classification and applications.
Using accuracy measure for improving the training of LSTM with metaheuristic algorithms
Recurrent Neural Networks (RNNs) are possibly the most prevalent and advantageous type of neural network. However, these networks still have weaknesses in learning speed, error convergence, and accuracy due to long-term dependencies, which manifest mainly as exploding and vanishing gradients in the Back-Propagation learning algorithm. In this paper, Long Short-Term Memory (LSTM) is used and structured to resolve these concerns. Four different optimizers based on metaheuristic algorithms are chosen to train the LSTM: Harmony Search (HS), Grey Wolf Optimizer (GWO), the Sine Cosine Algorithm (SCA), and Ant Lion Optimization (ALO). The suggested models are used for classification and analysis of real medical time-series datasets (the Breast Cancer Wisconsin dataset and the Epileptic Seizure Recognition dataset). The classification accuracy measure is used in place of the error rate and mean squared error to train the LSTM with the above optimization algorithms. The experimental results are verified using 5-fold cross-validation. Details of the simulations and the code in the R programming language are available at https://github.com/pollaeng/rnn.
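Using classification accuracy as the training objective simply means the metaheuristic maximizes a fitness function that counts correct predictions rather than minimizing a squared error. A minimal sketch; the threshold "model" standing in for the LSTM and all names here are illustrative assumptions:

```python
def accuracy_fitness(weights, model_predict, X, y):
    """Fitness = classification accuracy of the model under `weights`,
    used in place of MSE so the optimizer directly maximizes accuracy."""
    preds = [model_predict(weights, x) for x in X]
    correct = sum(1 for p, t in zip(preds, y) if p == t)
    return correct / len(y)

# Toy stand-in "network": a single threshold on one feature.
def predict(weights, x):
    return 1 if x[0] * weights[0] > weights[1] else 0

X = [[0.2], [0.8], [0.4], [0.9]]
y = [0, 1, 0, 1]
fit = accuracy_fitness([1.0, 0.5], predict, X, y)  # a metaheuristic would maximize this
```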
Low Power Wide Area Networks: A Survey of Enabling Technologies, Applications and Interoperability Needs
Low-power wide area (LPWA) technologies are strongly recommended as the underlying networks for Internet of Things (IoT) applications. They offer attractive features, including wide-range coverage, long battery life, and low data rates. This paper reviews the current trends in this technology, with an emphasis on the services it provides and the challenges it faces. The industrial paradigms for LPWA implementation are presented. Compared with other work in the field, this paper focuses on the need for integration among different LPWA technologies and recommends the appropriate LPWA solutions for a wide range of IoT application and service use cases. Opportunities created by these technologies in the market are also analyzed. The latest research efforts to investigate and improve the operation of LPWA networks are also compared and classified to enable researchers to quickly get up to speed on the current status of this technology. Finally, challenges facing LPWA are identified and directions for future research are recommended.
A robust categorization system for Kurdish Sorani text documents
Background: Text classification is the process of automatically assigning documents to class labels depending on their contents, and it is an important element in managing tasks and organizing information. The text classification process depends heavily on the quality of the preprocessing steps. Materials and Methods: In this study, a novel preprocessing method (normalizing, stemming, removing stopwords, and removing non-Kurdish text and symbols) was evaluated by comparing the performance of two text classification techniques, the decision tree (C4.5) classifier and the Support Vector Machine (SVM) classifier. The two automatic learning algorithms were compared on a set of Kurdish Sorani text documents collected from different Kurdish websites. The documents fall into eight main categories: sports, religion, arts, economics, education, social affairs, style, and health. The preprocessing steps were applied to the documents, the documents were converted into an appropriate file format, and the classification was conducted. Results: The findings illustrate that the highest accuracy value, 93.1%, and the shortest classifier-building time were achieved with the SVM classifier after the preprocessing and feature-weighting steps were performed. Conclusion: The experimental results of this study can be used in the future as a baseline for comparison with other classifiers and Kurdish stemmers.
Automatic Kurdish Text Classification Using KDC 4007 Dataset
The quantity of Kurdish documents available on the web increases drastically with each passing day, driven by the large volume of text documents uploaded to the Internet. In news collections in particular, documents belonging to categories such as health, politics, and sport often appear in the wrong category, or archives are placed in a nonspecific category called "others". This paper is concerned with text classification of Kurdish documents, i.e., placing an article or email into the right class according to its contents. Although there are considerable numbers of text classification studies in other languages, the quantity of studies conducted in Kurdish is extremely limited because of the absence of openly available, convenient datasets. In this paper, a new dataset named KDC-4007 is created that can be widely used in text classification studies of Kurdish news and articles; its file formats are compatible with well-known text mining tools. Three of the best-known algorithms for text classification (the Support Vector Machine (SVM), Naïve Bayes (NB), and Decision Tree (DT) classifiers) and the TF × IDF feature-weighting method are evaluated on KDC-4007. The paper also studies the effect of applying a Kurdish stemmer on the effectiveness of these classifiers. The experimental results indicate that a good accuracy value of 91.03% is achieved by the SVM classifier, especially when stemming and TF × IDF feature weighting are included in the preprocessing phase. The KDC-4007 dataset is publicly available, and the outcome of this study can be used in the future as a baseline for evaluations with other classifiers.
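The TF × IDF weighting evaluated above can be computed as term frequency scaled by the log of inverse document frequency. A minimal sketch over toy tokenized documents; the sample category words are illustrative, not drawn from KDC-4007:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF x IDF weights for a list of tokenized documents."""
    n = len(docs)
    df = Counter()  # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        # TF = term count / document length; IDF = log(N / document frequency)
        weights.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return weights

# Toy tokenized "articles"; the words are illustrative stand-ins.
docs = [["sport", "match", "goal"],
        ["health", "clinic", "goal"],
        ["sport", "team", "team"]]
w = tf_idf(docs)
```

A term that appears in every document gets an IDF of zero, which is exactly the dimensionality-reducing effect the paper relies on: ubiquitous terms stop discriminating between classes.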
Lecturer performance analysis using multiple classifiers
Lecturer performance analysis has an enormous influence on the educational life of lecturers in universities. The existing system in universities in Kurdistan-Iraq is conducted conventionally: the evaluation of lecturers' performance is assessed by managers at various branches of the university, and in view of that, in some cases the outcomes of this process cause a low level of endorsement among staff, who believe that many of these cases are opinionated. This paper suggests a smart and active system in which both single and multiple soft computing classifier techniques are used to examine the performance of lecturers at the College of Engineering at Salahaddin University-Erbil (SUE). The dataset was collected from the quality assurance department at SUE and comprises three sub-datasets: Student Feedback (FB), Continuous Academic Development (CAD), and the lecturer's portfolio (PRF). Each sub-dataset is classified with a different technique: FB uses a Back-Propagation Neural Network (BPNN), CAD uses a Naïve Bayes Classifier (NBC), and PRF uses a Support Vector Machine (SVM). After the system is run, the results of the sub-datasets are collected and fed as input to a BPNN to obtain the final result, according to which lecturers are rewarded, warned, or penalized.
Decision Support System for Diabetes Mellitus through Machine Learning Techniques
Recently, diabetes mellitus has grown into an extremely feared disease that can have damaging effects on the health of sufferers globally. Several machine learning models have been used to predict and classify diabetes types; however, most of these models attempt to solve only two problems: categorizing patients by diabetic type and forecasting patients' blood sugar rates. This paper presents an automatic decision support system for diabetes mellitus through machine learning techniques that addresses both problems, while also reflecting the knowledge of medical specialists, who believe there is a strong relationship between a patient's symptoms, certain chronic diseases, and the blood sugar rate. Datasets were collected from the Layla Qasim Clinical Center in the Kurdistan Region; the data were then cleaned and processed using feature selection techniques such as Sequential Forward Selection and the correlation coefficient, and finally the refined data were fed into machine learning models for prediction, classification, and description purposes. This system enables physicians and doctors to give diabetes mellitus (DM) patients better treatments and recommendations.
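Sequential Forward Selection, one of the feature selection techniques mentioned, greedily grows a feature subset one feature at a time, keeping whichever addition improves the score most. A minimal sketch in which the scoring function and the feature names are toy assumptions; a real implementation would score subsets by cross-validated model performance:

```python
def sequential_forward_selection(features, score, k):
    """Greedy SFS: start with an empty subset and repeatedly add the single
    feature that most improves `score(subset)` until k features are chosen."""
    selected = []
    remaining = list(features)
    while len(selected) < k and remaining:
        best_f = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# Toy score: each feature has a fixed utility (hypothetical names and values);
# redundancy between features is ignored in this sketch.
utility = {"fbs": 0.9, "age": 0.4, "bmi": 0.6, "symptom_count": 0.1}
score = lambda subset: sum(utility[f] for f in subset)
chosen = sequential_forward_selection(utility, score, k=2)
```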
Face behavior recognition through support vector machines
Communication between computers and humans has grown into a major field of research. Facial behavior recognition through computer algorithms is a motivating and difficult area for establishing emotional interactions between humans and computers. Although researchers have suggested numerous emotion recognition methods in the literature of this field, these works have mainly focused on one method for their system output, i.e., they used a single facial database for assessment, which may diminish generalization and shrink the range of comparability. A technique is proposed for recognizing emotional expressions in the facial features of still images, using Support Vector Machines (SVM) as the emotion classifier. Substantive issues are considered, such as the diversity of facial databases, the samples included in each database, the number of facial expressions covered, an accurate method of extracting facial features, and the variety of structural models. After many experiments and comparisons of different models, it is determined that this approach produces high recognition rates.
Student academic performance using artificial intelligence
No description available.
An Intelligent Approach for Diabetes Classification, Prediction and Description
A number of machine learning models have been applied to diabetes prediction and classification tasks. These models typically either categorise patients as insulin-dependent or non-insulin-dependent, or anticipate patients' blood sugar rates. Most medical experts recognise that there is a strong relationship between a patient's symptoms, certain chronic diseases, and the blood sugar rate. This paper proposes a diabetes-chronic disease prediction-description model composed of two sub-modules to verify this relationship. The first sub-module uses an Artificial Neural Network (ANN) to classify the type of each case and to predict the patient's fasting blood sugar (FBS) rate; a post-processing module then figures out the relations between the FBS, the symptoms, and the prediction rate. The second sub-module describes the impact of the FBS rate and the symptoms on the patient's health, using Decision Trees (DT) to achieve the description of diabetes symptoms.
Lecturer performance system using neural network with particle swarm optimization
The field of performance analysis is very important and sensitive, particularly when it concerns the performance of lecturers in academic institutions. Locating lecturers' weak points through a system that provides early warnings, notifying or rewarding lecturers with warning or penalty notices, helps them to improve their weaknesses and leads to better quality in their institutions. The current system in higher education at Salahaddin University-Erbil (SUE) in Kurdistan-Iraq has major issues: first, the assessment of lecturers' activities is conducted traditionally by the Quality Assurance teams at the university's departments and colleges; second, the outcomes of lecturers' performance evaluation in some cases provoke a low level of acceptance among lecturers, as these cases are viewed by some academic communities as unfair; and finally, the current system is neither accurate nor robust. In this paper, Particle Swarm Optimization with a Neural Network is used to assess the performance of lecturers in a more fruitful way and to enhance the accuracy of the recognition system. Novel, real datasets were collected from SUE, preprocessed, and their important features fed as inputs to the training and testing phases. Particle Swarm Optimization is used to find the best weights and biases in the training phase of the neural network. The best accuracy rate obtained in the test phase is 98.28%. © 2016 Wiley Periodicals, Inc. Comput Appl Eng Educ 24:629–638, 2016; DOI 10.1002/cae.21737
Convolutional neural networks based method for improving facial expression recognition
Recognizing facial expressions via algorithms has been a problematic mission for researchers across scientific fields. Numerous emotion recognition methods have previously been proposed based on a single scheme using one dataset, or using the dataset exactly as collected, without extra preprocessing steps such as data balancing, which is needed to enhance generalization and increase system accuracy. In this paper, a technique for recognizing facial expressions using several imbalanced facial expression datasets is presented. The data is preprocessed and balanced, a technique for extracting significant facial features is implemented, and the significant features are then used as inputs to a classifier model. Several classifier models are compared, namely Decision Tree (DT), Multi-Layer Perceptron (MLP), and Convolutional Neural Network (CNN); the Convolutional Neural Network is determined to produce the best recognition accuracy.
Classification of churn and non-churn customers for telecommunication companies
Telecommunication is a vital industry, serving various processes that use electronic systems to transmit messages via physical cables, telephones, or cell phones. Two main factors affect the vitality of telecommunications: the rapid growth of modern technology, and market demand and its competition. These factors in turn create new technologies and products, which open a series of options and offers to customers in order to satisfy their needs and requirements. However, one crucial problem that commercial companies in general, and telecommunication companies in particular, suffer from is the loss of valuable customers to competitors; anticipating this loss is known as customer-churn prediction. In this paper a dynamic training technique is introduced to improve prediction performance. The technique is based on two ANN network configurations that minimise the total error of the network in predicting two classes: churn and non-churn customers.
A new feature set with new window techniques for customer churn prediction in land-line telecommunications
In order to improve prediction rates for churn in the land-line telecommunication service field, this paper proposes a new set of features together with three new input window techniques. The new features are demographic profiles, account information, grant information, Henley segmentation, aggregated call details, line information, service orders, and bill and payment history. The basic idea of the three input window techniques is to ensure that the positional order of certain monthly aggregated call-detail features from previous months in the combined feature set is the same at testing time as in the training phase. To evaluate the new features and window techniques, the two most common modelling techniques (decision trees and multilayer perceptron neural networks) and one of the most promising approaches (support vector machines) are used as predictors. The experimental results show that the new features, combined with the new window techniques, are effective for churn prediction in the land-line telecommunication service field.
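The positional-order idea behind the window techniques can be sketched as follows. This is a simplified assumption, not the paper's exact scheme: position k of every feature vector always holds the monthly aggregate from the same relative month before the target, for both training and testing windows.

```python
def windowed_features(monthly, width):
    """Build (features, target) pairs from a list of monthly aggregates.
    Position k of each feature vector always holds the value from
    (width - k) months before the target month, so the positional
    order is identical in the training and testing phases."""
    X, y = [], []
    for i in range(width, len(monthly)):
        X.append(monthly[i - width:i])  # the `width` preceding months, oldest first
        y.append(monthly[i])            # the month being predicted
    return X, y
```

With this alignment, a model trained on one span of months can be applied to a later span without the meaning of each input position shifting.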
Auto-regressive recurrent neural network approach for electricity load forecasting
This paper presents an auto-regressive network called the Auto-Regressive Multi-Context Recurrent Neural Network (AR-MCRN), which forecasts the daily peak load for two large power plant systems. The auto-regressive network combines recurrent and non-recurrent components. Weather variables are key elements in forecasting, because any change in them affects the demand for energy load, so the AR-MCRN is used to learn the relationship between past and future exogenous and endogenous variables. Experimental results show that using the change in weather components and the change in past load as inputs to the AR-MCRN, rather than the raw weather parameters and the past load itself, produces more accurate load predictions. The results also show that using both exogenous and endogenous variables as inputs is better than using the exogenous variables alone.
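The input transformation the abstract credits with higher accuracy, feeding changes rather than absolute values, amounts to first-differencing each series. A minimal sketch, with hypothetical function names and a two-feature input vector assumed for illustration:

```python
def to_deltas(series):
    """First differences: the change between consecutive observations,
    used in place of the raw weather or load values."""
    return [b - a for a, b in zip(series, series[1:])]

def build_inputs(weather, load):
    """Pair each step's exogenous change (weather) with the endogenous
    change (past load) to form one input vector per step."""
    return [[w, l] for w, l in zip(to_deltas(weather), to_deltas(load))]
```

Differencing removes the slowly varying level of each series, so the network models how a swing in temperature or recent load moves demand rather than re-learning the baseline.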
Multi-Context Recurrent Neural Network for Time Series Applications
This paper presents a multi-context recurrent network (MCRN) for time series analysis. While simple recurrent networks (SRNs) are very popular among recurrent neural networks, they still have shortcomings in learning speed and accuracy that need to be addressed. To solve these problems, we propose a multi-context recurrent network with three different learning algorithms. The performance of this network is evaluated on real-world applications such as handwriting recognition and energy load forecasting, and compared to the well-established SRN. The experimental results show that the MCRN is efficient and well suited to time series analysis and its applications.
A Practical Approach for Electricity Load Forecasting
This paper continues our daily peak energy load forecasting work using our modified network, a member of the recurrent network family called the feed-forward and feed-back multi-context artificial neural network (FFFB-MCANN). The inputs to the network were exogenous variables, such as the previous and current change in the weather components and the previous and current status of the day, and endogenous variables, such as the past change in load; the endogenous variable used as the network output was the current change in load. Experiments show that using both endogenous and exogenous variables as inputs to the FFFB-MCANN, rather than either type alone, produces better results. Experiments also show that using the change in variables such as weather components and past load as inputs, rather than their absolute values, has a dramatic impact and produces better accuracy.
A new modified network based on the Elman network
Simple recurrent networks (SRNs) have been used in simulations and modelling of many real-world applications, usually involving temporal sequences. In this paper, we introduce a new SRN based on the Elman network, designed to improve the speed and accuracy of SRNs. The new model is studied under three training algorithms (back propagation, back propagation through time, and real-time recurrent learning), which are implemented and compared against the traditional Elman network.
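The Elman architecture these models build on can be summarized in a few lines. This is a generic textbook sketch under assumed weight shapes, not the paper's modified network: the context layer is simply a copy of the previous hidden state, fed back at every step.

```python
import numpy as np

def elman_forward(inputs, Wxh, Whh, Why, bh, by):
    """Forward pass of an Elman SRN over a sequence of input vectors.
    The context layer holds the previous hidden state and is fed back
    into the hidden layer at every time step."""
    h = np.zeros(Whh.shape[0])                  # context starts at zero
    outputs = []
    for x in inputs:
        h = np.tanh(Wxh @ x + Whh @ h + bh)     # hidden <- input + context
        outputs.append(Why @ h + by)            # linear readout
    return outputs, h
```

The single recurrent matrix `Whh` is the part the cited modifications extend, since one step of context is often too little memory for long temporal dependencies.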
A New Simple Recurrent Network with Real Time Recurrent Learning Process
The simple recurrent network (SRN) is one of the most attractive types of recurrent neural network (RNN) for handling temporal sequences, and it has been used for many tasks, such as prediction and classification. Nevertheless, an SRN trained with back propagation through time (BPTT) has limitations that restrict real-time application. To avoid these limitations, real-time recurrent learning (RTRL) can be used to train SRNs, but the training speed of an SRN under RTRL is still slow. In this paper we propose a modified network trained with RTRL to address these problems. Based on the SRN, the modified architecture adds two extra parts: a Multi-Context Layer (MCL), and feed-forward connections from the MCL to the output layer. The paper includes the full mathematical model derived from this architecture and learning algorithm, and some simple applications are implemented with the modified network.
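The two architectural additions, a Multi-Context Layer holding several past hidden states and direct feed-forward connections from that layer to the output, can be sketched as a single step. The shapes and names here are illustrative assumptions; the full derivation and learning rule are in the paper itself.

```python
import numpy as np

def mcrn_step(x, contexts, Wxh, Whh_list, Why, Wcy_list):
    """One step of the modified SRN. `contexts` is the Multi-Context
    Layer (MCL): a list of hidden states from previous steps. Each
    context feeds the hidden layer (Whh_list) and, via the extra
    feed-forward weights (Wcy_list), the output layer directly."""
    h = np.tanh(Wxh @ x + sum(W @ c for W, c in zip(Whh_list, contexts)))
    y = Why @ h + sum(W @ c for W, c in zip(Wcy_list, contexts))
    contexts = [h] + contexts[:-1]   # shift: newest hidden state enters the MCL
    return y, contexts
```

Compared with a plain Elman step, the output here sees several past hidden states directly, which is the mechanism intended to compensate for RTRL's slow training of the recurrent path.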