PeerJ Computer Science article feed (https://peerj.com/articles/index.atom?journal=cs)
Articles published in PeerJ Computer Science

A secure fingerprint hiding technique based on DNA sequence and mathematical function
https://peerj.com/articles/cs-1847 | Published 2024-03-19 | Wala’a Essa Al-Ahmadi, Asia Othman Aljahdali, Fursan Thabit, Asmaa Munshi
DNA steganography is a technique for securely transmitting important data using DNA sequences. It involves encrypting and hiding messages within DNA sequences to prevent unauthorized access and decoding of sensitive information. Biometric systems, such as fingerprinting and iris scanning, are used for individual recognition. Since biometric information cannot be changed if compromised, it is essential to ensure its security. This research aims to develop a secure technique that combines steganography and cryptography to protect fingerprint images during communication while maintaining confidentiality. The technique converts fingerprint images into binary data, encrypts them, and embeds them into the DNA sequence. It utilizes the Feistel network encryption process, along with a mathematical function and an insertion technique for hiding the data. The proposed method offers a low probability of being cracked, a high number of hiding positions, and efficient execution times. Four randomly chosen keys are used for hiding and decoding, providing a large key space and enhanced key sensitivity. The technique undergoes evaluation using the NIST statistical test suite and is compared with other research papers. It demonstrates resilience against various attacks, including known-plaintext and chosen-plaintext attacks. To enhance security, random ambiguous bits are introduced at random locations in the fingerprint image, increasing noise. However, it is important to note that this technique is limited to hiding small images within DNA sequences and cannot handle video, audio, or large images.
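The bits-to-bases mapping and key-driven insertion described above can be sketched as follows. This is a toy Python illustration, not the authors' exact scheme (which additionally applies Feistel-network encryption and four keys); the function names and the fixed-interval insertion rule are hypothetical.

```python
# Toy DNA-steganography sketch: encode (already-encrypted) bits as
# nucleotides and insert them into a cover DNA sequence every `key` bases.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}

def bits_to_dna(bits: str) -> str:
    """Map each 2-bit pair to one nucleotide (bit length must be even)."""
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def insert_payload(cover: str, payload: str, key: int) -> str:
    """Insert one payload base after every `key` cover bases."""
    out, j = [], 0
    for i, base in enumerate(cover):
        out.append(base)
        if (i + 1) % key == 0 and j < len(payload):
            out.append(payload[j])
            j += 1
    return "".join(out) + payload[j:]  # append any leftover payload

def extract_payload(stego: str, n_payload: int, key: int) -> str:
    """Recover payload bases from their key-derived positions."""
    return "".join(stego[(j + 1) * key + j] for j in range(n_payload))
```

A receiver holding the same key recovers the payload bases and reverses the bit mapping; without the key, the payload positions blend into the cover sequence.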
The experience of a tele-operated avatar being touched increases operator’s sense of discomfort
https://peerj.com/articles/cs-1926 | Published 2024-03-19 | Mitsuhiko Kimoto, Masahiro Shiomi
Recent advancements in tele-operated avatars, both on-screen and robotic, have expanded opportunities for human interaction that exceed spatial and physical limitations. While numerous studies have enhanced operator control and improved the impression left on remote users, one area remains underexplored: the experience of operators during touch interactions between an avatar and a remote interlocutor. Touch interactions have become commonplace with avatars, especially those displayed on or integrated with touchscreen interfaces. Although the need for avatars to exhibit human-like touch responses has been recognized as beneficial for maintaining positive impressions on remote users, the sensations and experiences of the operators behind these avatars during such interactions remain largely uninvestigated. This study examines the sensations felt by an operator when their tele-operated avatar is touched remotely. Our findings reveal that operators can perceive a sensation of discomfort when their on-screen avatar is touched. This feeling is intensified when the touch is visualized and the avatar reacts to it. Although these autonomous responses may enhance the human-like perceptions of remote users, they might also lead to operator discomfort. This situation underscores the importance of designing avatars that address the experiences of both remote users and operators. We address this issue by proposing a tele-operated avatar system that minimizes unwarranted touch interactions from unfamiliar interlocutors based on social intimacy.
Visual resource extraction and artistic communication model design based on improved CycleGAN algorithm
https://peerj.com/articles/cs-1889 | Published 2024-03-18 | Anyu Yang, Muhammad Kashif Hanif
Through the application of computer vision and deep learning methodologies, real-time style transfer of images becomes achievable. This process involves the fusion of diverse artistic elements into a single image, resulting in the creation of innovative pieces of art. This article focuses on image style transfer within the realm of art education and introduces an ATT-CycleGAN model enriched with an attention mechanism to enhance the quality and precision of style conversion. The framework enhances the generators within CycleGAN. First, images undergo encoder downsampling before entering the intermediate transformation model, where feature maps are acquired through four encoding residual blocks and subsequently input into an attention module. Channel attention is incorporated through multi-weight optimization achieved via global max-pooling and global average-pooling. During training, transfer learning techniques are employed to improve model parameter initialization, enhancing training efficiency. Experimental results demonstrate the superior performance of the proposed model in image style transfer across various categories. In comparison with the traditional CycleGAN model, it exhibits a notable increase in structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR). Specifically, on the Places365 and selfie2anime datasets, SSIM is increased by 3.19% and 1.31% respectively, and PSNR is increased by 10.16% and 5.02% respectively. These findings provide valuable algorithmic support and crucial references for future research in the fields of art education, image segmentation, and style transfer.
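The channel-attention step the abstract describes — per-channel weights derived from global average- and max-pooled descriptors passed through a shared MLP and a sigmoid gate — can be sketched in NumPy. This is a generic illustration of the mechanism, not the authors' ATT-CycleGAN code; the weight shapes are hypothetical.

```python
import numpy as np

def channel_attention(fmap, w1, w2):
    """Reweight the channels of a (C, H, W) feature map using global
    average- and max-pooled descriptors passed through a shared MLP."""
    avg = fmap.mean(axis=(1, 2))                  # (C,) average-pooled descriptor
    mx = fmap.max(axis=(1, 2))                    # (C,) max-pooled descriptor
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)  # shared two-layer MLP (ReLU)
    gate = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))  # per-channel sigmoid
    return fmap * gate[:, None, None]             # scale each channel by its gate
```

Because the gate lies in (0, 1), each channel is attenuated in proportion to how informative its pooled descriptors are, which is the intuition behind this form of attention.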
Performance discrepancy mitigation in heart disease prediction for multisensory inter-datasets
https://peerj.com/articles/cs-1917 | Published 2024-03-18 | Mahmudul Hasan, Md Abdus Sahid, Md Palash Uddin, Md Abu Marjan, Seifedine Kadry, Jungeun Kim
Heart disease is one of the primary causes of morbidity and death worldwide. Millions of people suffer heart attacks every year, and only early-stage prediction can help to reduce the number. Researchers are designing and developing early-stage prediction systems using different advanced technologies, and machine learning (ML) is one of them. Almost all existing ML-based works consider the same dataset (intra-dataset) for the training and validation of their method. In particular, they do not consider inter-dataset performance checks, where different datasets are used in the training and testing phases. In an inter-dataset setup, existing ML models perform poorly, a failure known as the inter-dataset discrepancy problem. This work focuses on mitigating the inter-dataset discrepancy problem by considering five available heart disease datasets and their combined form. All potential training and testing mode combinations are systematically executed to assess discrepancies before and after applying the proposed methods. Imbalanced data handling using SMOTE-Tomek, feature selection using random forest (RF), and feature extraction using principal component analysis (PCA) with a long preprocessing pipeline are used to mitigate the inter-dataset discrepancy problem. The preprocessing pipeline builds on missing-value handling using RF regression, log transformation, outlier removal, normalization, and data balancing, which together make the datasets more ML-ready. Support vector machine, K-nearest neighbors, decision tree, RF, eXtreme Gradient Boosting, Gaussian naive Bayes, logistic regression, and multilayer perceptron are used as classifiers. Experimental results show that feature selection and classification using RF produce better results than other combination strategies in both single- and inter-dataset setups.
In certain configurations of individual datasets, RF demonstrates 100% accuracy and 96% accuracy during the feature selection phase in an inter-dataset setup, exhibiting commendable precision, recall, F1 score, specificity, and AUC score. The results indicate that an effective preprocessing technique has the potential to improve the performance of the ML model without necessitating the development of intricate prediction models. Addressing inter-dataset discrepancies introduces a novel research avenue, enabling the amalgamation of identical features from various datasets to construct a comprehensive global dataset within a specific domain.
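One building block of the pipeline above, PCA-based feature extraction, is easy to sketch. The point that matters in an inter-dataset setting is that the projection is fitted on the training dataset only and then applied unchanged to the test dataset; the minimal NumPy version below (hypothetical function name) illustrates that discipline.

```python
import numpy as np

def pca_transform(x_train, x_test, n_components):
    """Fit PCA on the training dataset only, then project both datasets,
    so nothing about the held-out (or inter-dataset) test set leaks
    into the fitted transformation."""
    mu = x_train.mean(axis=0)                      # center with training mean
    _, _, vt = np.linalg.svd(x_train - mu, full_matrices=False)
    comps = vt[:n_components]                      # top principal directions
    return (x_train - mu) @ comps.T, (x_test - mu) @ comps.T
```

In an inter-dataset experiment, `x_train` and `x_test` would simply come from two different heart-disease datasets sharing the same feature columns.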
The reconstruction of equivalent underlying model based on direct causality for multivariate time series
https://peerj.com/articles/cs-1922 | Published 2024-03-18 | Liyang Xu, Dezheng Wang
This article presents a novel approach for reconstructing an equivalent underlying model and deriving a precise equivalent expression through the use of direct causality topology. Central to this methodology is the transfer entropy method, which is instrumental in revealing the causality topology. The polynomial fitting method is then applied to determine the coefficients and intrinsic order of the causality structure, leveraging the foundational elements extracted from the direct causality topology. Notably, this approach efficiently discovers the core topology from the data, reducing redundancy without requiring prior domain-specific knowledge. Furthermore, it yields a precise equivalent model expression, offering a robust foundation for further analysis and exploration in various fields. Additionally, the proposed model for reconstructing an equivalent underlying framework demonstrates strong forecasting capabilities in multivariate time series scenarios.
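The transfer entropy at the core of this methodology has a simple histogram estimator for discrete-valued series. The sketch below (history length 1, base-2 logarithm, hypothetical function name) illustrates the quantity itself, not the authors' implementation: it measures how much the past of X improves prediction of Y beyond Y's own past.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Histogram estimate of transfer entropy TE(X -> Y) in bits,
    for discrete-valued series with history length 1."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_{t+1}, y_t)
    singles = Counter(y[:-1])                       # y_t
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_full = c / pairs_yx[(y0, x0)]             # p(y_{t+1} | y_t, x_t)
        p_self = pairs_yy[(y1, y0)] / singles[y0]   # p(y_{t+1} | y_t)
        te += p_joint * np.log2(p_full / p_self)
    return te
```

A driving series yields a large TE toward the driven series and a near-zero TE in the reverse direction, which is what lets the method orient edges in the causality topology.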
Sensor-based systems for the measurement of Functional Reach Test results: a systematic review
https://peerj.com/articles/cs-1823 | Published 2024-03-15 | Luís Francisco, João Duarte, António Nunes Godinho, Eftim Zdravevski, Carlos Albuquerque, Ivan Miguel Pires, Paulo Jorge Coelho
The Functional Reach Test (FRT) is a widely used assessment tool in various fields, including physical therapy, rehabilitation, and geriatrics. This test evaluates a person’s balance, mobility, and functional ability to reach forward while maintaining stability. Recently, there has been growing interest in utilizing sensor-based systems to objectively and accurately measure FRT results. This systematic review searched several scientific databases and publishers, including PubMed Central, IEEE Xplore, Elsevier, Springer, the Multidisciplinary Digital Publishing Institute (MDPI), and the Association for Computing Machinery (ACM), and considered studies published between January 2017 and October 2022 related to methods for automating the measurement of Functional Reach Test variables and results with sensors. Camera-based devices and motion-based sensors are used for Functional Reach Tests, with statistical models extracting meaningful information. Sensor-based systems offer several advantages over traditional manual measurement techniques, as they can provide objective and precise measurements of the reach distance, quantify postural sway, and capture additional parameters related to the movement.
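As a minimal illustration of what sensor-based FRT measurement computes, reach distance can be estimated as the maximum forward displacement of a tracked wrist keypoint relative to its starting position. The function below is a hypothetical sketch, assuming 2D (x, y) keypoints with x as the forward axis, not any specific system from the reviewed studies.

```python
import numpy as np

def functional_reach(wrist_xy):
    """FRT reach estimate: maximum forward (x-axis) displacement of a
    tracked wrist keypoint relative to its starting position."""
    forward = wrist_xy[:, 0] - wrist_xy[0, 0]   # displacement per frame
    return float(forward.max())                  # farthest point reached
```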
Blockchain based general data protection regulation compliant data breach detection system
https://peerj.com/articles/cs-1882 | Published 2024-03-15 | Kainat Ansar, Mansoor Ahmed, Saif Ur Rehman Malik, Markus Helfert, Jungsuk Kim
Context
Data breaches caused by insiders are on the rise, both in terms of frequency and financial impact on organizations. Insider threats originate from within the targeted organization: users with authorized access to an organization’s network, applications, or databases commit the attacks.
Motivation
Insider attacks are difficult to detect because an attacker with administrator capabilities can change logs and login records to destroy the evidence of the attack. Moreover, when such a harmful insider attack goes undetected for months, it can do a lot of damage. Such data breaches may significantly impact the affected data owner’s life. Developing a system for rapidly detecting data breaches is still critical and challenging. General Data Protection Regulation (GDPR) has defined the procedures and policies to mitigate the problems of data protection. Therefore, under the GDPR implementation, the data controller must notify the data protection authority when a data breach has occurred.
Problem Statement
Existing data breach detection mechanisms rely on a trusted third party. Because of that dependence, such systems are not trustworthy, transparent, secure, immutable, or GDPR-compliant.
Contributions
To overcome these issues, this study proposes a GDPR-compliant data breach detection system that leverages blockchain technology. Smart contracts are written in Solidity and deployed on a local Ethereum test network to implement the solution. The proposed system generates an alert notification for every data breach.
Results
We tested and deployed our proposed system, and the findings indicate that it can accomplish the insider threat mitigation objective. Furthermore, the GDPR compliance analysis of our system was also evaluated to make sure that it complies with the GDPR principles (such as right to be forgotten, access control, conditions for consent, and breach notifications). The conducted analysis has confirmed that the proposed system offers capabilities to comply with the GDPR from an application standpoint.
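The paper's implementation uses Solidity contracts on an Ethereum test network; as a language-agnostic sketch of the underlying idea — an append-only, tamper-evident log of breach notifications — here is a minimal hash-chained log in Python. The class and field names are hypothetical and the sketch omits consensus, which a real blockchain provides.

```python
import hashlib
import json

def _digest(record: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class BreachLog:
    """Append-only, hash-chained log of breach notifications: each entry
    commits to the previous entry's hash, so any retroactive edit is
    detectable, mimicking the immutability a blockchain provides."""

    def __init__(self):
        self.entries = []

    def notify(self, controller: str, description: str) -> dict:
        """Record a breach notification and chain it to the log head."""
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"controller": controller, "description": description, "prev": prev}
        entry = {**record, "hash": _digest(record)}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Walk the chain and recompute every hash."""
        prev = "0" * 64
        for e in self.entries:
            record = {k: e[k] for k in ("controller", "description", "prev")}
            if e["prev"] != prev or e["hash"] != _digest(record):
                return False
            prev = e["hash"]
        return True
```

This is the property that defeats the log-tampering insider described in the Motivation section: an administrator who edits an old entry invalidates every subsequent hash.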
Architecting an enterprise financial management model: leveraging multi-head attention mechanism-transformer for user information transformation
https://peerj.com/articles/cs-1928 | Published 2024-03-15 | Wan Yu, Habib Hamam
Financial management assumes a pivotal role as a fundamental information system contributing to enterprise development. Nonetheless, prevalent methodologies frequently encounter challenges in proficiently overseeing diverse information streams inherent to financial management. This study introduces an innovative paradigm for enterprise financial management centered on the transformation of user information signals. In its initial phases, the methodology augments the Transformer network and self-attention mechanism to extract features pertaining to both users and financial data, fostering a more cohesive integration of financial and user information. Subsequently, a reinforcement learning-based alignment method is implemented to reconcile disparities between financial and user information, thereby enhancing semantic alignment. Ultimately, a signal conversion technique employing generative adversarial networks is deployed to harness user information, elevating financial management efficacy and, consequently, optimizing overall financial operations. The empirical validation of this approach, achieving an impressive mAP score of 81.9%, not only outperforms existing methodologies but also underscores the tangible impact and enhanced execution prowess that this paradigm brings to financial management systems. As such, this work not only contributes to the state of the art but also holds promise for revolutionizing the landscape of enterprise financial management.
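The multi-head attention mechanism named in the title can be sketched directly in NumPy. This is the standard scaled dot-product formulation, not the authors' augmented variant; the weight matrices and function name are hypothetical.

```python
import numpy as np

def multi_head_attention(x, wq, wk, wv, wo, n_heads):
    """Scaled dot-product self-attention with n_heads heads over a (T, d) input."""
    _, d = x.shape
    dh = d // n_heads                                  # per-head dimension
    q, k, v = x @ wq, x @ wk, x @ wv                   # (T, d) projections
    heads = []
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        scores = q[:, s] @ k[:, s].T / np.sqrt(dh)     # (T, T) similarities
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)             # row-wise softmax
        heads.append(w @ v[:, s])                      # (T, dh) head output
    return np.concatenate(heads, axis=-1) @ wo         # merge heads, project
```

Each head attends over the sequence with its own slice of the projections, which is what lets the model relate user-information features to financial features at several positions simultaneously.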
Data aggregation algorithm for wireless sensor networks with different initial energy of nodes
https://peerj.com/articles/cs-1932 | Published 2024-03-15 | Zhenpeng Liu, Jialiang Zhang, Yi Liu, Fan Feng, Yifan Liu
Data aggregation plays a critical role in sensor networks for efficient data collection. However, the assumption of uniform initial energy levels among sensors in existing algorithms is unrealistic in practical production applications. This discrepancy in initial energy levels significantly impacts data aggregation in sensor networks. To address this issue, we propose Data Aggregation with Different Initial Energy (DADIE), a novel algorithm that aims to enhance energy-saving, privacy-preserving efficiency, and reduce node death rates in sensor networks with varying initial energy nodes. DADIE considers the transmission distance between nodes and their initial energy levels when forming the network topology, while also limiting the number of child nodes. Furthermore, DADIE reconstructs the aggregation tree before each round of data transmission. This allows nodes closer to the receiving end with higher initial energy to undertake more data aggregation and transmission tasks while limiting energy consumption. As a result, DADIE effectively reduces the node death rate and improves the efficiency of data transmission throughout the network. To enhance network security, DADIE establishes secure transmission channels between transmission nodes prior to data transmission, and it employs slice-and-mix technology within the network. Our experimental simulations demonstrate that the proposed DADIE algorithm effectively resolves the data aggregation challenges in sensor networks with varying initial energy nodes. It achieves 5–20% lower communication overhead and energy consumption, 10–20% higher security, and 10–30% lower node mortality than existing algorithms.
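The slice-and-mix idea — each node splits its private reading into random slices, keeps one, and hands the rest to peers, so the aggregate is preserved while no single report reveals an original reading — can be sketched as follows. This is a generic illustration of the technique, not DADIE itself; the slice count and peer routing are hypothetical.

```python
import random

def slice_and_mix(readings, n_slices=3, seed=7):
    """Each node splits its reading into n_slices random slices, keeps one,
    and hands the rest to random peers; nodes then report only the sum of
    the slices they hold, so the aggregate is preserved but individual
    readings stay hidden."""
    rng = random.Random(seed)
    n = len(readings)
    held = [0.0] * n
    for i, value in enumerate(readings):
        cuts = [rng.uniform(-1.0, 1.0) for _ in range(n_slices - 1)]
        slices = cuts + [value - sum(cuts)]   # slices sum back to the reading
        held[i] += slices[0]                  # one slice stays local
        for s in slices[1:]:
            held[rng.randrange(n)] += s       # mix the rest among the nodes
    return held                               # aggregator sums these reports
```

The aggregator learns the correct total from the mixed reports, while an eavesdropper on any single link sees only meaningless partial slices.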
An improved differential evolution algorithm for multi-modal multi-objective optimization
https://peerj.com/articles/cs-1839 | Published 2024-03-14 | Dan Qu, Hualin Xiao, Huafei Chen, Hongyi Li
Multi-modal multi-objective problems (MMOPs) have gained much attention during the last decade. These problems have two or more global or local Pareto optimal sets (PSs), some of which map to the same Pareto front (PF). This article presents a new affinity propagation clustering (APC) method based on the multi-modal multi-objective differential evolution (MMODE) algorithm, called MMODE_AP, for the suite of CEC’2020 benchmark functions. First, two adaptive mutation strategies are adopted to balance exploration and exploitation and improve diversity in the evolution process. Then, the affinity propagation clustering method is adopted to define the crowding degree in decision space (DS) and objective space (OS). Meanwhile, the non-dominated sorting scheme incorporates a particular crowding distance to truncate the population during the environmental selection process, which can obtain well-distributed solutions in both DS and OS. Moreover, the local PF membership of each solution is defined, and a predefined parameter is introduced to maintain the local PSs and the solutions around the global PS. Finally, the proposed algorithm is run on the suite of CEC’2020 benchmark functions for comparison with some MMODE algorithms. According to the experimental results, the proposed MMODE_AP algorithm outperforms its competitors on about 20 benchmark functions in terms of reciprocal of Pareto sets proximity (rPSP) and inverted generational distance (IGD) in the decision space (IGDX) and objective space (IGDF). The proposed algorithm efficiently achieves the two goals, i.e., convergence to the true local and global Pareto fronts along with well-distributed Pareto solutions on those fronts.
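The differential evolution machinery underlying MMODE can be illustrated with the classic DE/rand/1 mutation and binomial crossover operator; the paper's two adaptive mutation strategies build on operators of this kind. The parameter values and function name below are hypothetical.

```python
import numpy as np

def de_rand_1(pop, f=0.5, cr=0.9, seed=3):
    """One generation of DE/rand/1 mutation with binomial crossover
    over a population of shape (n, d); requires n >= 4."""
    rng = np.random.default_rng(seed)
    n, d = pop.shape
    trials = np.empty_like(pop)
    for i in range(n):
        # pick three distinct individuals, none equal to i
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[r1] + f * (pop[r2] - pop[r3])   # differential mutation
        mask = rng.random(d) < cr                    # binomial crossover mask
        mask[rng.integers(d)] = True                 # force at least one gene over
        trials[i] = np.where(mask, mutant, pop[i])
    return trials
```

Balancing exploration against exploitation, as the abstract describes, amounts to adapting `f` and `cr` (or switching mutation strategies) during the run rather than fixing them as here.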