PeerJ Computer Science: Theory and Formal Methods
https://peerj.com/articles/index.atom?journal=cs&subject=11800
Theory and Formal Methods articles published in PeerJ Computer Science

Deep learning-based information retrieval with normalized dominant feature subset and weighted vector model
https://peerj.com/articles/cs-1805
2024-01-22
Poluru Eswaraiah, Hussain Syed
Multimedia data, which includes textual information, is employed in a variety of practical computer vision applications. More than a million new records are added to social media and news sites every day, and the text content they contain has become increasingly complex. Finding a meaningful text record in an archive can be challenging for computer vision researchers. Most image searches still employ tried-and-true language-based techniques built on query text and metadata. Substantial work has been done in the past two decades on content-based text retrieval and analysis, yet it still has limitations. The importance of feature extraction in search engines is often overlooked, even though web and product search engines, recommendation systems, and question-answering tasks frequently leverage such features. Extracting high-quality machine learning features from large text volumes is a challenge for many open-source software packages. Creating an effective feature set manually is time-consuming, whereas deep learning learns feature representations directly from the training data. As a novel feature extraction method, deep learning has made great strides in text mining, although automatically training a deep learning model on the most pertinent text attributes requires massive datasets with millions of variables. In this research, a Normalized Dominant Feature Subset with Weighted Vector Model (NDFS-WVM) is proposed for feature extraction and selection in information retrieval from big data using natural language processing models. The suggested model outperforms conventional models in terms of text retrieval, achieving 98.6% accuracy in information retrieval.
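The NDFS-WVM internals are not spelled out in this abstract; as a generic illustration of the weighted-vector idea behind such retrieval models, a plain TF-IDF cosine ranker (a standard baseline, not the authors' method) can be sketched in a few lines:

```python
import math
from collections import Counter

def rank(docs, query):
    """Rank tokenized docs against a tokenized query by TF-IDF cosine similarity."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))  # document frequency per term

    def vec(tokens):
        # Weighted vector: term frequency scaled by inverse document frequency.
        tf = Counter(tokens)
        return {t: (c / len(tokens)) * math.log(n / df[t])
                for t, c in tf.items() if t in df}

    def cos(a, b):
        dot = sum(w * b.get(t, 0.0) for t, w in a.items())
        na = math.sqrt(sum(w * w for w in a.values()))
        nb = math.sqrt(sum(w * w for w in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    q = vec(query)
    return sorted(range(n), key=lambda i: cos(q, vec(docs[i])), reverse=True)
```

For example, a query of ["text", "retrieval"] ranks a document sharing both terms above one sharing only "text", with unrelated documents last.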
A novel incomplete hesitant fuzzy information supplement and clustering method for large-scale group decision-making
https://peerj.com/articles/cs-1803
2024-01-16
Jingdong Wang, Wenhui Wang, Fanqi Meng, Peifang Wang, Xuesong Wang, Shuang Wei, Tong Liu, Shuaisong Yang
Clustering is an effective means to reduce the scale of large-scale group decision-making (LSGDM). However, clustering methods face many problems: the information provided by different decision makers is often incomplete or ambiguous, and traditional clustering methods may not handle these situations effectively, resulting in incomplete decision information. Calculating the clustering centers can be complex and time-consuming, and inappropriate distance weights may lead to incorrect cluster assignments; these problems seriously affect the clustering results. This research provides a novel incomplete hesitant fuzzy information supplement and clustering approach for large-scale group decision-making to address these difficulties. First, the approach accounts for trust degradation and the inhibiting effect of distrust relationships during trust propagation, and then builds global and local trust networks. A novel supplement formula is provided that takes into account both the decision maker’s preferences and the trusted neighbors’ information, allowing recommendations from a decision maker’s neighbors to be realized. An improved distance function is then proposed that calculates weights by combining relative standard deviation theory and selects clustering centers using density peaks, optimizing center selection and reducing the complexity and scale of the decision. Finally, an example demonstrates how the proposed method can be applied, and a consistency index and comparison experiments are used to evaluate whether the suggested approach is effective and reliable.
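The density-peaks step mentioned in the abstract can be illustrated generically. The sketch below implements the standard density-peaks heuristic (not the paper's improved distance function): each point is scored by its local density rho times its distance delta to the nearest denser point, and the top scorers become centers.

```python
import math

def density_peak_centers(points, dc, k):
    """Select k cluster centers by the density-peaks heuristic: centers have
    high local density rho and lie far (delta) from any denser point."""
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]
    # rho: number of neighbors within the cutoff distance dc.
    rho = [sum(1 for j in range(n) if j != i and dist[i][j] < dc)
           for i in range(n)]
    order = sorted(range(n), key=lambda i: -rho[i])  # densest first
    delta = [0.0] * n
    delta[order[0]] = max(dist[order[0]])            # densest point: max distance
    for pos, i in enumerate(order[1:], start=1):
        # delta: distance to the nearest point processed earlier (i.e. denser).
        delta[i] = min(dist[i][j] for j in order[:pos])
    gamma = sorted(range(n), key=lambda i: rho[i] * delta[i], reverse=True)
    return gamma[:k]
```

On two well-separated blobs of points, the two top-scoring indices fall one in each blob, which is exactly the behavior that makes density peaks attractive for picking initial centers.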
Comprehensive evaluation for the sustainable development of fresh agricultural products logistics enterprises based on combination empowerment-TOPSIS method
https://peerj.com/articles/cs-1719
2023-12-12
Dechao Sun, Xuefang Hu, Bangquan Liu
To solve the problems of environmental pollution and resource waste caused by the rapid development of cold chain logistics for fresh agricultural products, and to improve the competitiveness of logistics enterprises in the market, a performance evaluation method for cold chain logistics enterprises based on combination empowerment-TOPSIS is proposed. First, from the five dimensions of cold supply chain capacity, service quality, economic efficiency, informatization degree, and development ability, a comprehensive evaluation system for logistics enterprises’ sustainable development is constructed, consisting of 16 indicators such as storage and preservation capacity, distribution accuracy, and equipment input rate. Then, the G1 method and the entropy weight method are used to calculate the subjective and objective weights of the evaluation indicators, and the combined weights are computed with the objective of minimizing the deviation between the subjectively and objectively weighted attributes. Finally, the TOPSIS method is used to calculate the comprehensive evaluation indicators. The results show that the established performance evaluation model can effectively evaluate the performance of fresh agricultural products logistics enterprises and provide a theoretical basis for enterprise logistics management.
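The entropy-weight and TOPSIS steps are textbook procedures and can be sketched as follows (the G1 subjective weights and the weight-combination step are omitted, and all criteria are assumed benefit-type):

```python
import math

def entropy_weights(X):
    """Objective criterion weights from Shannon entropy of the decision matrix
    (rows: alternatives, columns: benefit-type criteria)."""
    m, n = len(X), len(X[0])
    w = []
    for j in range(n):
        col = [X[i][j] for i in range(m)]
        s = sum(col)
        p = [v / s for v in col]
        e = -sum(v * math.log(v) for v in p if v > 0) / math.log(m)
        w.append(1 - e)  # higher divergence -> more informative criterion
    total = sum(w)
    return [v / total for v in w]

def topsis(X, w):
    """Closeness of each alternative to the ideal solution (benefit criteria)."""
    m, n = len(X), len(X[0])
    norm = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[w[j] * X[i][j] / norm[j] for j in range(n)] for i in range(m)]
    best = [max(V[i][j] for i in range(m)) for j in range(n)]
    worst = [min(V[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        dp = math.sqrt(sum((V[i][j] - best[j]) ** 2 for j in range(n)))
        dm = math.sqrt(sum((V[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(dm / (dp + dm))  # 1 = ideal, 0 = anti-ideal
    return scores
```

An alternative that is worst on every criterion receives the lowest closeness score, and the weights always sum to one.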
Pick-up point recommendation strategy based on user incentive mechanism
https://peerj.com/articles/cs-1692
2023-11-20
Jing Zhang, Biao Li, Xiucai Ye, Yi Chen
In recent years, with the development of spatial crowdsourcing technology, online car-hailing, a typical spatiotemporal crowdsourcing application scenario, has attracted widespread attention. Existing research on spatial crowdsourcing is mainly based on the coordinate positions of the user and worker roles, with task allocation aiming at the maximum number of matches or the lowest cost. However, it ignores the selection of the pick-up point, a problem that must be solved in real online car-hailing scenarios and that requires the four-dimensional coordinate positions of the user, the worker, the pick-up point, and the destination to be taken into account. Based on this, this study designs a pick-up point recommendation strategy based on a user incentive mechanism. First, a new four-dimensional crowdsourcing model is established, which is closer to the practical crowdsourcing application. Second, taking cost optimization as the index, a user incentive mechanism is designed to encourage users to walk to an appropriate pick-up point within a certain distance. Third, the concept of a forward rate is proposed to reduce computation time. Key factors, such as the user’s maximum walking distance limit and the task cost, are combined into a recommendation index for measuring pick-up points, and an effective pick-up point recommendation strategy is designed based on this index. Experiments show that the proposed strategy achieves reasonable pick-up point recommendations, improves driver efficiency, and reduces the total trip cost of orders.
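As a toy illustration of the cost-index idea (not the paper's forward-rate algorithm; the weights `alpha` and `beta` are hypothetical), a pick-up point recommender over 2-D coordinates might look like:

```python
import math

def recommend_pickup(user, driver, dest, candidates, max_walk,
                     alpha=1.0, beta=0.2):
    """Pick the candidate pick-up point minimizing a weighted trip cost:
    the driver's drive (driver -> pickup -> dest) plus a small penalty for the
    user's walk. Candidates beyond the user's walking limit are discarded.
    Illustrative cost index only; alpha and beta are hypothetical weights."""
    feasible = [p for p in candidates if math.dist(user, p) <= max_walk]
    if not feasible:
        return user  # fall back to picking up at the user's own location

    def cost(p):
        drive = math.dist(driver, p) + math.dist(p, dest)
        walk = math.dist(user, p)
        return alpha * drive + beta * walk

    return min(feasible, key=cost)
```

With the driver west of the user and the destination east, a candidate slightly toward the driver wins because it shortens the drive without exceeding the walking limit; an isolated far-away candidate triggers the fallback.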
Multidimensional solution of fuzzy linear programming
https://peerj.com/articles/cs-1646
2023-11-16
Seyyed Ahmad Edalatpanah
There are several approaches to fuzzy linear programming problems (FLPP). However, because they use standard interval arithmetic (SIA), these methods have limitations and do not yield complete solutions. This article establishes a new approach to fuzzy linear programming via the theory of horizontal membership functions and multidimensional relative-distance-measure (RDM) fuzzy interval arithmetic. Furthermore, we propose a multidimensional solution based on the primal Simplex approach that satisfies any equivalent form of the FLPP. The new solutions are compared with the results of existing methods, and several numerical examples illustrate the efficiency of the proposed method.
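The horizontal membership function at the core of the RDM approach is easy to state concretely. For a triangular fuzzy number (a, b, c), each mu-cut [lo, hi] is parameterized by an extra RDM variable alpha in [0, 1], so the fuzzy number becomes an ordinary two-variable function (a minimal sketch, not the paper's full multidimensional solution):

```python
def horizontal(a, b, c):
    """Horizontal membership function of a triangular fuzzy number (a, b, c):
    x(mu, alpha) = left endpoint of the mu-cut plus alpha times the cut width."""
    def x(mu, alpha):
        lo = a + mu * (b - a)   # left endpoint of the mu-cut
        hi = c - mu * (c - b)   # right endpoint of the mu-cut
        return lo + alpha * (hi - lo)
    return x

x = horizontal(1, 2, 3)
# In RDM arithmetic every occurrence of the same variable shares one alpha,
# so x(mu, alpha) - x(mu, alpha) is identically zero -- unlike standard
# interval arithmetic, where [1, 3] - [1, 3] = [-2, 2].
```

This zero-width self-difference is precisely what lets the multidimensional approach satisfy equivalent reformulations of the same FLPP.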
Modelling and verification of post-quantum key encapsulation mechanisms using Maude
https://peerj.com/articles/cs-1547
2023-09-19
Víctor García, Santiago Escobar, Kazuhiro Ogata, Sedat Akleylek, Ayoub Otmani
Communication and information technologies shape today’s systems, and those systems shape our society. Their security relies on mathematical problems that are hard to solve for classical computers, that is, currently available computers. Recent advances in quantum computing threaten the security of our systems and communications. To face this threat, multiple solutions and protocols have been proposed in the Post-Quantum Cryptography project carried out by the National Institute of Standards and Technology (NIST). The presented work focuses on defining a formal framework in Maude for the security analysis of different post-quantum key encapsulation mechanisms under the assumptions of the Dolev-Yao model. Using our framework, we construct a symbolic model representing the behaviour of each protocol participant in a network. We then conduct reachability analysis and find a man-in-the-middle attack in each of them, as well as a design vulnerability in Bit Flipping Key Encapsulation; for both cases we provide some insights on possible solutions. Finally, we use the Maude Linear Temporal Logic model checker to extend the analysis of the symbolic system with respect to security, liveness, and fairness properties. The liveness and fairness properties hold, while the security property does not, due to the man-in-the-middle attack and the design vulnerability in Bit Flipping Key Encapsulation.
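Maude performs this kind of reachability analysis by rewriting; the underlying idea can be sketched language-agnostically as an explicit-state search over the protocol's state space (a toy illustration of the technique, not the authors' Maude framework):

```python
from collections import deque

def reachable(init, transitions, bad):
    """Explicit-state reachability analysis: breadth-first search from the
    initial states for any state satisfying the `bad` predicate (e.g. the
    attacker learning a secret). Returns a witness state or None."""
    seen, frontier = set(init), deque(init)
    while frontier:
        s = frontier.popleft()
        if bad(s):
            return s
        for t in transitions(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None
```

On a toy protocol graph with an "intercepted" transition, the search returns that state as an attack witness; on the same graph without the interception edge, it returns None, i.e. the property holds.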
An optimized multi-attribute decision-making approach to construction supply chain management by using complex picture fuzzy soft set
https://peerj.com/articles/cs-1540
2023-08-30
Ali Asghar, Khuram A. Khan, Marwan A. Albahar, Abdullah Alammari
Supplier selection is a critical decision-making process for any organization, as it directly impacts the quality, cost, and reliability of its products and services. However, the supplier selection problem can become highly complex due to the uncertainties and vagueness associated with it. To overcome these complexities, multi-criteria decision analysis and fuzzy logic have been used to incorporate uncertainty and vagueness into the supplier selection process; these techniques help organizations make informed decisions and mitigate the associated risks. In this article, a complex picture fuzzy soft set (cpFSS), a generalized fuzzy set-like structure, is developed to deal with the information-based uncertainties involved in supplier selection. It maintains the expected information-based periodicity by introducing amplitude and phase terms: the amplitude term carries the fuzzy membership, and the phase term manages its periodicity within the complex plane. The cpFSS also allows decision-makers to provide neutral grade-based opinions on the objects under observation. First, the essential notions and set-theoretic operations of cpFSS are investigated and illustrated with examples. Second, a MADM-based algorithm is proposed by describing new matrix-based aggregations of cpFSS, such as the core matrix, the maximum and minimum decision value matrices, and the score. Last, the proposed algorithm is applied to a real-world problem: selecting a suitable supplier for the provision of materials required for construction projects. Sensitivity analysis of the score values through Pythagorean means shows that the results and supplier rankings are consistent. Moreover, structural comparison shows the proposed structure to be more flexible and reliable than existing fuzzy set-like structures.
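As a purely illustrative sketch of the data structure the abstract describes (the paper's actual score and aggregation operators are not reproduced here; `naive_score` is a hypothetical stand-in), a complex picture fuzzy grade can be stored as three complex numbers, each an amplitude times a phase factor:

```python
import cmath
import math

def cpf_grade(mu, eta, nu, p_mu, p_eta, p_nu):
    """A complex picture fuzzy grade: positive (mu), neutral (eta) and negative
    (nu) amplitudes with phase terms, each stored as amplitude * e^{i*phase}.
    Validity requires mu + eta + nu <= 1 and each phase within [0, 2*pi]."""
    assert 0 <= mu + eta + nu <= 1
    assert all(0 <= p <= 2 * math.pi for p in (p_mu, p_eta, p_nu))
    return (mu * cmath.exp(1j * p_mu),
            eta * cmath.exp(1j * p_eta),
            nu * cmath.exp(1j * p_nu))

def naive_score(g):
    """Hypothetical score: positive minus negative amplitude (phases ignored)."""
    return abs(g[0]) - abs(g[2])
```

The neutral component `eta` is what distinguishes picture fuzzy grades from intuitionistic ones, and the phase terms carry the periodic (e.g. seasonal) part of the information.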
Qualitative reachability for open interval Markov chains
https://peerj.com/articles/cs-1489
2023-08-28
Jeremy Sproston
Interval Markov chains extend classical Markov chains with the possibility to describe transition probabilities using intervals, rather than exact values. While the standard formulation of interval Markov chains features closed intervals, previous work has considered open interval Markov chains, in which the intervals can also be open or half-open. In this article we focus on qualitative reachability problems for open interval Markov chains, which consider whether the optimal (maximum or minimum) probability with which a certain set of states can be reached is equal to 0 or 1. We present polynomial-time algorithms for these problems for both of the standard semantics of interval Markov chains. Our methods do not rely on the closure of open intervals, in contrast to previous approaches for open interval Markov chains, and can address situations in which probability 0 or 1 can be attained not exactly but arbitrarily closely.
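The qualitative flavour of such problems can be illustrated with the classic graph-based criterion (a simplification that ignores the open/closed endpoint subtleties and the two semantics the paper actually addresses): the maximum reachability probability is positive iff the target is graph-reachable through transitions whose interval admits positive probability, i.e. whose upper endpoint is nonzero.

```python
def positive_reach(trans, init, target):
    """Qualitative check for an interval Markov chain: the maximum probability
    of reaching `target` from `init` is positive iff a path exists using only
    edges whose probability interval has upper endpoint > 0 (i.e. the edge can
    be assigned positive probability). `trans` maps a state to a list of
    (next_state, upper_endpoint) pairs."""
    stack, seen = [init], {init}
    while stack:
        s = stack.pop()
        if s == target:
            return True
        for t, hi in trans.get(s, []):
            if hi > 0 and t not in seen:
                seen.add(t)
                stack.append(t)
    return False
```

An edge whose interval is [0, 0] can never fire, so a target only reachable through it has maximum probability 0; this purely structural check needs no numerical computation.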
A message recovery attack on multivariate polynomial trapdoor function
https://peerj.com/articles/cs-1521
2023-08-28
Rashid Ali, Muhammad Mubashar Hussain, Shamsa Kanwal, Fahima Hajjej, Saba Inam
Cybersecurity guarantees the exchange of information through a public channel in a secure way; that is, data must be protected from unauthorized parties and transmitted to the intended parties with confidentiality and integrity. In this work, we mount an attack on a cryptosystem based on a multivariate polynomial trapdoor function over the field of rational numbers Q. The developers claim that the security of their scheme rests on the fact that a polynomial system of 2n equations in 3n unknowns (where n is a natural number), constructed using quasigroup string transformations, has infinitely many solutions, so that finding the exact solution is not possible. We show that the proposed trapdoor function is vulnerable to a Gröbner basis attack: selected polynomials in the corresponding Gröbner basis can be used to recover the plaintext from a given ciphertext without knowledge of the secret key.
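The attack idea can be reproduced in miniature with a computer algebra system. The system below is a toy stand-in for the trapdoor equations (not the actual scheme); a lexicographic Gröbner basis triangularizes it, producing a univariate polynomial from which the unknowns can be read off by back-substitution. This sketch assumes SymPy is available.

```python
from sympy import symbols, groebner

# Toy polynomial system standing in for the trapdoor relations linking
# plaintext unknowns to a ciphertext. A lex-order Groebner basis eliminates
# variables one by one, like Gaussian elimination for polynomials.
x, y = symbols('x y')
G = groebner([x**2 + y - 1, x*y - 2], x, y, order='lex')
# G generates the same ideal as the input, and its last element is
# univariate in y; solving it and back-substituting recovers x.
```

In the attack on the real scheme, the same computation over the 2n ciphertext equations yields basis polynomials that pin down the plaintext despite the system being underdetermined over all 3n unknowns.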
Research on three-state reliability evaluation method of high reliability system based on multi-source prior information
https://peerj.com/articles/cs-1439
2023-07-27
Jingde Huang, Zhangyu Huang, Xin Zhan
A high reliability system is characterized by complexity, modularization, high cost, and small sample sizes. Throughout the entire lifecycle of system development, storage, and use, the high reliability requirements and the associated risk analysis stand in direct contradiction with testing expenses. To ensure that a system, module, or component maintains good reliability while effectively reducing the cost of sampling tests, it is necessary to make full use of multi-source prior information in evaluating its reliability. Therefore, to correctly evaluate the reliability of highly reliable equipment under small-sample conditions, the reliability evaluation model should be built on multi-source prior information and formed into scientific computing methods that meet the needs of condition evaluation and funding assurance for high reliability systems. In engineering practice, a high reliability system or module gradually develops from a normal state to a failure state, generally passing through the three working states of “safety-potential failure-functional failure”. First, historical test data from these three states can serve as a data source for reliability evaluation at the current stage, supplementing the shortage of field data. Second, without an accurate judgment of the working state of a system or module and an analysis of its health status, unnecessary maintenance may accelerate the evolution from potential failure to functional failure. Third, when a system or module operates under overload or harsh conditions, a potential failure will worsen to a certain extent.
Aiming at the difficulty of multi-state system reliability evaluation, a reliability evaluation method based on a non-informative prior distribution is proposed that fuses multi-source prior information, providing ideas and methods for the reliability evaluation and optimization analysis of high reliability systems and modules. The results show that the three-state reliability evaluation method proposed in this article is consistent with actual engineering practice, providing a scientific theoretical basis for the preventive maintenance of high reliability systems. At the same time, the method not only evaluates the reliability state of a high reliability system accurately, but also effectively reduces test costs, offering good economic benefits and engineering application value.
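One standard way to fuse multi-source prior information with a non-informative prior, in the spirit of (though not identical to) the method described above, is Beta-Binomial updating of pass/fail test counts: historical-stage and field-stage data are simply added to the prior's pseudo-counts.

```python
def posterior_reliability(prior_succ, prior_fail, field_succ, field_fail):
    """Beta-Binomial fusion of historical (prior) and field test data: start
    from a Jeffreys non-informative prior Beta(1/2, 1/2), add the counts from
    both data sources, and return the posterior mean reliability."""
    a = 0.5 + prior_succ + field_succ   # posterior alpha (successes)
    b = 0.5 + prior_fail + field_fail   # posterior beta (failures)
    return a / (a + b)
```

The posterior mean moves down monotonically as failures accumulate, so sparse field data is stabilized by the historical record rather than dominating the estimate.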