PeerJ Computer Science: Scientific Computing and Simulation
https://peerj.com/articles/index.atom?journal=cs&subject=11100
Scientific Computing and Simulation articles published in PeerJ Computer Science

Architecting an enterprise financial management model: leveraging multi-head attention mechanism-transformer for user information transformation
https://peerj.com/articles/cs-1928 (2024-03-15)
Wan Yu, Habib Hamam
Financial management assumes a pivotal role as a fundamental information system contributing to enterprise development. Nonetheless, prevalent methodologies frequently encounter challenges in proficiently overseeing diverse information streams inherent to financial management. This study introduces an innovative paradigm for enterprise financial management centered on the transformation of user information signals. In its initial phases, the methodology augments the Transformer network and self-attention mechanism to extract features pertaining to both users and financial data, fostering a more cohesive integration of financial and user information. Subsequently, a reinforcement learning-based alignment method is implemented to reconcile disparities between financial and user information, thereby enhancing semantic alignment. Ultimately, a signal conversion technique employing generative adversarial networks is deployed to harness user information, elevating financial management efficacy and, consequently, optimizing overall financial operations. The empirical validation of this approach, achieving an impressive mAP score of 81.9%, not only outperforms existing methodologies but also underscores the tangible impact and enhanced execution prowess that this paradigm brings to financial management systems. As such, this work not only contributes to the state of the art but also holds promise for revolutionizing the landscape of enterprise financial management.
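The multi-head self-attention mechanism this abstract builds on can be sketched in a few lines. The following is a minimal NumPy illustration of scaled dot-product attention with several heads; the shapes, head count, and random weights are illustrative only, not the paper's actual configuration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """X: (seq_len, d_model); all weight matrices are (d_model, d_model)."""
    seq_len, d_model = X.shape
    d_head = d_model // n_heads

    # Project the input, then split the projection into heads: (h, s, d_head)
    def split(W):
        return (X @ W).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)

    Q, K, V = split(Wq), split(Wk), split(Wv)
    # Scaled dot-product attention, computed independently per head
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)  # (h, s, s)
    out = softmax(scores) @ V                            # (h, s, d_head)
    # Concatenate the heads and apply the output projection
    out = out.transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ Wo

rng = np.random.default_rng(0)
d_model, seq_len, heads = 8, 5, 2
X = rng.normal(size=(seq_len, d_model))
Ws = [rng.normal(size=(d_model, d_model)) for _ in range(4)]
Y = multi_head_self_attention(X, *Ws, n_heads=heads)
print(Y.shape)  # (5, 8): one contextualised vector per input position
```

Each head attends over the full sequence with its own learned projections, which is what lets the model fuse user and financial features from different representational subspaces.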
A multi-level classification based ensemble and feature extractor for credit risk assessment
https://peerj.com/articles/cs-1915 (2024-02-29)
Yuanyuan Wang, Zhuang Wu, Jing Gao, Chenjun Liu, Fangfang Guo
With the growth of people’s demand for loans, banks and other financial institutions place higher requirements on customer credit risk level classification, with the aim of making better loan decisions, allocating loan amounts more effectively, and reducing pre-loan risk. This article proposes a Multi-Level Classification based Ensemble and Feature Extractor (MLCEFE) that incorporates the strengths of sampling, feature extraction, and ensemble classification. MLCEFE uses SMOTE + Tomek links to address data imbalance and then uses a deep neural network (DNN), auto-encoder (AE), and principal component analysis (PCA) to transform the original variables into higher-level abstract features. Finally, it combines multiple ensemble learners to improve the effect of personal credit risk multi-classification. During performance evaluation, MLCEFE showed remarkable results in the multi-classification of personal credit risk compared with other classification methods.
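The SMOTE side of the resampling step can be illustrated with a hand-rolled sketch of its core idea: synthesising minority-class samples by interpolating toward nearest minority neighbours. The toy 2-D data below is invented, and a real pipeline would use a library such as imbalanced-learn (whose SMOTETomek also removes Tomek-link pairs afterwards):

```python
import math
import random

def smote_like(minority, n_new, k=2, seed=42):
    """Generate synthetic minority samples by interpolating each chosen point
    toward one of its k nearest minority neighbours (SMOTE's core idea)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        p = rng.choice(minority)
        # k nearest neighbours of p within the minority class, excluding p
        neighbours = sorted((q for q in minority if q != p),
                            key=lambda q: math.dist(p, q))[:k]
        q = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1]
        synthetic.append(tuple(pi + t * (qi - pi) for pi, qi in zip(p, q)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.3)]
new_points = smote_like(minority, n_new=4)
print(len(new_points))  # 4 synthetic minority samples
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled class stays inside its original region of feature space.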
Identifying optical microscope images of CVD-grown two-dimensional MoS2 by convolutional neural networks and transfer learning
https://peerj.com/articles/cs-1885 (2024-02-21)
Cahit Perkgoz
Background
In Complementary Metal-Oxide Semiconductor (CMOS) technology, scaling down has been a key strategy to improve chip performance and reduce power losses. However, challenges such as sub-threshold leakage and gate leakage, resulting from short-channel effects, contribute to an increase in distributed static power. Two-dimensional transition metal dichalcogenides (2D TMDs) emerge as potential solutions, serving as channel materials with steep sub-threshold swings and lower power consumption. However, the production and development of these two-dimensional materials involve several time-consuming tasks. In order to employ them in different fields, including chip technology, it is crucial to ensure that their production meets the required standards of quality and uniformity; in this context, deep learning techniques show significant potential.
Methods
This research introduces a transfer learning-based deep convolutional neural network (CNN) to classify chemical vapor deposition (CVD) grown molybdenum disulfide (MoS2) flakes based on their uniformity or the occurrence of defects affecting electronic properties. Acquiring and labeling a sufficient number of microscope images for CNN training may not be realistic. To address this challenge, artificial images were generated using Fresnel equations to pre-train the CNN. Subsequently, accuracy was improved through fine-tuning with a limited set of real images.
Results
The proposed transfer learning-based CNN method significantly improved all measurement metrics with respect to the ordinary CNNs. The initial CNN, trained with limited data and without transfer learning, achieved 68% average accuracy for binary classification. Through transfer learning and artificial images, the same CNN achieved 85% average accuracy, an average increase of approximately 17 percentage points. While this study specifically focuses on MoS2 structures, the same methodology can be extended to other two-dimensional materials by simply incorporating their specific parameters when generating artificial images.
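The artificial-image idea rests on thin-film optics: the apparent contrast of a flake on a substrate follows from Fresnel reflection at the air-flake and flake-substrate interfaces. A simplified single-film sketch at normal incidence is shown below; the refractive indices and thickness are illustrative placeholders, not the paper's exact model (which would also handle the oxide layer and wavelength-dependent complex indices):

```python
import cmath
import math

def film_reflectance(n_film, d_nm, wavelength_nm, n_sub, n0=1.0):
    """Normal-incidence reflectance of a single thin film on a substrate
    (Airy formula built from the two Fresnel reflection coefficients)."""
    r01 = (n0 - n_film) / (n0 + n_film)        # air -> film interface
    r12 = (n_film - n_sub) / (n_film + n_sub)  # film -> substrate interface
    beta = 2 * math.pi * n_film * d_nm / wavelength_nm  # phase across the film
    phase = cmath.exp(2j * beta)
    r = (r01 + r12 * phase) / (1 + r01 * r12 * phase)
    return abs(r) ** 2

# Contrast between bare substrate (d = 0) and a flake-covered region.
R_bare = film_reflectance(n_film=2.5, d_nm=0.0, wavelength_nm=550, n_sub=1.46)
R_flake = film_reflectance(n_film=2.5, d_nm=0.65, wavelength_nm=550, n_sub=1.46)
contrast = (R_bare - R_flake) / R_bare
print(round(R_bare, 4), round(R_flake, 4), round(contrast, 4))
```

Sweeping thickness, indices, and illumination through such a model is what lets one render labelled artificial microscope images for pre-training before fine-tuning on the few real ones.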
Intelligent control strategy for industrial furnaces based on yield classification prediction using a gray relative correlation-convolutional neural network-multilayer perceptron (GCM) machine learning model
https://peerj.com/articles/cs-1836 (2024-02-19)
Hua Guo, Shengxiang Deng, Jingbiao Yang
Industrial furnaces still play an important role in national economic growth. Owing to the complexity of the production process, the product yield fluctuates and cannot be regulated in real time, which has not kept pace with the development of the intelligent technologies of Industry 4.0. In this study, based on deep learning theory and operational data collected from more than one year of actual production of a lime kiln, we propose a hybrid deep network model combining gray relative correlation, a convolutional neural network, and a multilayer perceptron (GCM) to categorize production processes and predict yield classifications. The results show that the loss and computation time of the model based on the screened set of variables are significantly reduced while the accuracy is almost unaffected, and that the GCM model performs best in predicting the yield classification of lime kilns. An intelligent control strategy for the non-fault state is then set according to the predicted yield classification. Operating parameters are adjusted in a timely manner according to different priority control sequences to achieve higher yield, ensure high production efficiency, reduce unnecessary waste, and save energy.
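The gray relational screening step works by scoring each candidate operating variable against the reference (yield) series and keeping only high-scoring ones. A minimal sketch with made-up series, using Deng's gray relational grade with the usual distinguishing coefficient ρ = 0.5:

```python
def gray_relational_grades(reference, candidates, rho=0.5):
    """Deng's gray relational grade of each candidate series against a
    reference series (min-max normalised; global delta_min/delta_max)."""
    def norm(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) for x in xs]

    r = norm(reference)
    all_deltas = [[abs(a - b) for a, b in zip(r, norm(c))] for c in candidates]
    d_min = min(min(ds) for ds in all_deltas)
    d_max = max(max(ds) for ds in all_deltas)
    # Relational coefficient at each time step, averaged into a grade
    coeff = lambda d: (d_min + rho * d_max) / (d + rho * d_max)
    return [sum(coeff(d) for d in ds) / len(ds) for ds in all_deltas]

yield_series = [82, 85, 90, 88, 93]            # reference (e.g. kiln yield)
temp_series  = [1020, 1035, 1060, 1050, 1080]  # co-moving variable (invented)
noise_series = [5, 1, 9, 2, 7]                 # unrelated variable (invented)
g_temp, g_noise = gray_relational_grades(yield_series,
                                         [temp_series, noise_series])
print(g_temp > g_noise)  # True: the co-moving variable would survive screening
```

Variables with grades below a threshold are dropped, which is what shrinks the input set (and hence the loss and computation time) of the downstream CNN-MLP model.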
A new approach for atmospheric turbulence removal using low-rank matrix factorization
https://peerj.com/articles/cs-1713 (2024-01-31)
Mahdi Jafaei, Amirhassan Monadjemi, Payman Moallem, Mohammad Saeed Ehsani
In this article, a novel method for removing atmospheric turbulence from a sequence of turbulent images and restoring a high-quality image is presented. Turbulence is modeled using two factors: the geometric transformation of pixel locations represents the distortion, and the varying pixel brightness represents spatiotemporal varying blur. The main framework of the proposed method involves the utilization of low-rank matrix factorization, which achieves the modeling of both the geometric transformation of pixels and the spatiotemporal varying blur through an iterative process. In the proposed method, the initial step involves the selection of a subset of images using the random sample consensus method. Subsequently, estimation of the mixture of Gaussian noise parameters takes place. Following this, a window is chosen around each pixel based on the entropy of the surrounding region. Within this window, the transformation matrix is locally estimated. Lastly, by considering both the noise and the estimated geometric transformations of the selected images, an estimation of a low-rank matrix is conducted. This estimation process leads to the production of a turbulence-free image. The experimental results were obtained from both real and simulated datasets. These results demonstrated the efficacy of the proposed method in mitigating substantial geometrical distortions. Furthermore, the method showcased the ability to improve spatiotemporal varying blur and effectively restore the details present in the original image.
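The low-rank step can be illustrated in isolation: stacking the frames as columns of a matrix, a turbulence-free scene is well approximated by a truncated SVD, since the static scene is low-rank while zero-mean jitter and noise spread across many singular components. This toy sketch omits the paper's geometric registration, RANSAC frame selection, and mixture-of-Gaussians noise model:

```python
import numpy as np

def low_rank_frame(frames, rank=1):
    """Stack frames as columns and keep only the top `rank` singular
    components; the column average of the result is the restored frame."""
    D = np.column_stack([f.ravel() for f in frames])     # pixels x frames
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    D_low = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
    return D_low.mean(axis=1).reshape(frames[0].shape)

rng = np.random.default_rng(1)
# Toy "scene": a smooth rank-1 intensity gradient, observed through noise
clean = np.outer(np.linspace(0, 1, 16), np.linspace(1, 0, 16))
frames = [clean + 0.05 * rng.normal(size=clean.shape) for _ in range(20)]
restored = low_rank_frame(frames, rank=1)

err_noisy = np.abs(frames[0] - clean).mean()
err_restored = np.abs(restored - clean).mean()
print(err_restored < err_noisy)  # True: the low-rank estimate is closer
```

In the actual method this factorization is re-estimated iteratively, alternating with the per-pixel geometric transformation and blur estimates.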
Research on intelligent file management system: a design strategy based on RFID technology and improved AICT algorithm
https://peerj.com/articles/cs-1794 (2024-01-08)
Shan Ge
In this modern era of technology and digitalization, keeping track of a manual file system in the office environment is a challenging task. This research applies radio frequency identification (RFID) technology to improve conventional file management systems. The proposed system includes an advances in information and communication technology (AICT) algorithm that addresses tag collision problems, resulting in increased data collection and reduced communication traffic. The system consists of modules such as information collection, file management, and user management, each analyzed from various aspects. Simulation results show that the AICT algorithm outperforms the improved collision tree algorithm, increasing recognition efficiency by 10% and reducing communication traffic by 30%. Moreover, the proposed approach provides a simple and convenient way to manage files in real time, meeting the needs of modern times.
Comprehensive evaluation for the sustainable development of fresh agricultural products logistics enterprises based on combination empowerment-TOPSIS method
https://peerj.com/articles/cs-1719 (2023-12-12)
Dechao Sun, Xuefang Hu, Bangquan Liu
To solve the problems of environmental pollution and resource waste caused by the rapid development of cold chain logistics of fresh agricultural products, and to improve the competitiveness of logistics enterprises in the market, a performance evaluation method for cold chain logistics enterprises based on combined empowerment-TOPSIS is proposed. Firstly, from the five dimensions of cold supply chain capacity, service quality, economic efficiency, informatization degree and development ability, a comprehensive evaluation system of logistics enterprises’ sustainable development is constructed, consisting of 16 indicators such as storage and preservation capacity, distribution accuracy, and equipment input rate. Then, the G1 method and the entropy weight method are used to calculate the subjective and objective weights of the evaluation indicators, and the combined weights are calculated with the objective of minimizing the deviation of the subjectively and objectively weighted attributes. Finally, the TOPSIS method is used to calculate the comprehensive evaluation indicators. The results show that the established performance evaluation model can effectively evaluate the performance of fresh agricultural products logistics enterprises and provide a theoretical basis for enterprise logistics management.
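The entropy-weight and TOPSIS steps can be sketched end-to-end on a toy decision matrix. The numbers are invented, all indicators are treated as benefit criteria, and the paper's subjective G1 weights and combined-weighting step are omitted:

```python
import math

def entropy_weights(matrix):
    """Objective weights: indicators whose values vary more across
    alternatives carry more information and receive larger weights."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    raw = []
    for j in range(n):
        col = [row[j] for row in matrix]
        s = sum(col)
        p = [x / s for x in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)  # entropy
        raw.append(1 - e)                                      # divergence
    total = sum(raw)
    return [w / total for w in raw]

def topsis(matrix, weights):
    """Closeness of each alternative to the ideal solution (benefit criteria)."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    V = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    best = [max(v[j] for v in V) for j in range(n)]
    worst = [min(v[j] for v in V) for j in range(n)]
    scores = []
    for v in V:
        d_best, d_worst = math.dist(v, best), math.dist(v, worst)
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Rows: enterprises; columns: toy indicators (e.g. storage capacity,
# distribution accuracy, equipment input rate) -- numbers are invented.
M = [[0.8, 0.9, 0.6],
     [0.6, 0.7, 0.9],
     [0.9, 0.8, 0.8]]
w = entropy_weights(M)
scores = topsis(M, w)
print(max(range(3), key=scores.__getitem__))  # index of the best enterprise
```

In the paper, the entropy weights would first be blended with the subjective G1 weights (minimising the deviation between the two weighted attribute sets) before the TOPSIS ranking.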
Pick-up point recommendation strategy based on user incentive mechanism
https://peerj.com/articles/cs-1692 (2023-11-20)
Jing Zhang, Biao Li, Xiucai Ye, Yi Chen
In recent years, with the development of spatial crowdsourcing technology, online car-hailing, a typical spatiotemporal crowdsourcing task application scenario, has attracted widespread attention. Existing research on spatial crowdsourcing is mainly based on the coordinate positions of user and worker roles to achieve task allocation with the goal of the maximum matching number or the lowest cost. However, it ignores the selection of the pick-up point, a problem that must be solved in the actual online car-hailing scenario and that must take into account the four-dimensional coordinate positions of the user, the worker, the pick-up point, and the destination. Based on this, this study designs a pick-up point recommendation strategy based on a user incentive mechanism. Firstly, a new four-dimensional crowdsourcing model is established, which is closer to the practical crowdsourcing application. Secondly, taking cost optimization as the index, a user incentive mechanism is designed to encourage users to walk to a suitable pick-up point within a certain distance. Thirdly, a concept of forward rate is proposed to reduce the computation time. Some key factors, such as the user’s maximum walking distance limit and the task cost, are considered in the recommendation index for measuring pick-up points, and an effective pick-up point recommendation strategy is designed based on this index. Experiments show that the proposed strategy achieves reasonable pick-up point recommendations, improves driver efficiency, and reduces the total trip cost of orders to the greatest extent.
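The shape of such a recommendation index can be illustrated with a deliberately simplified cost model. Everything below is hypothetical (the weighting, the incentive term, and the coordinates are invented, not the paper's formulation): a candidate pick-up point is feasible only within the user's maximum walking distance, and among feasible candidates the one minimising the combined driving-plus-walking cost wins.

```python
import math

def recommend_pickup(user, driver, destination, candidates,
                     max_walk=0.5, walk_weight=1.0, incentive=0.3):
    """Score candidate pick-up points: driver leg to the point plus the leg
    to the destination, plus a walking penalty discounted by a user
    incentive. Coordinates are km on a flat plane (toy model)."""
    best, best_cost = None, float("inf")
    for c in candidates:
        walk = math.dist(user, c)
        if walk > max_walk:              # user walking-distance limit
            continue
        drive = math.dist(driver, c) + math.dist(c, destination)
        cost = drive + (walk_weight - incentive) * walk
        if cost < best_cost:
            best, best_cost = c, cost
    return best, best_cost

user, driver, dest = (0.0, 0.0), (1.0, 1.0), (5.0, 0.0)
candidates = [(0.0, 0.0), (0.3, 0.0), (0.3, 0.2), (2.0, 0.0)]
point, cost = recommend_pickup(user, driver, dest, candidates)
print(point)  # (0.3, 0.2): a short walk that shortens the driver's route
```

The four positions (user, worker, pick-up point, destination) appearing together in one objective is what the abstract calls the four-dimensional model; the incentive term is what makes the user willing to walk at all.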
Multidimensional solution of fuzzy linear programming
https://peerj.com/articles/cs-1646 (2023-11-16)
Seyyed Ahmad Edalatpanah
There are several approaches to addressing fuzzy linear programming problems (FLPP). However, because they use standard interval arithmetic (SIA), these methods have some limitations and do not yield complete solutions. This article establishes a new approach to fuzzy linear programming via the theory of horizontal membership functions and multidimensional relative-distance-measure fuzzy interval arithmetic. Furthermore, we propose a multidimensional solution based on the primal Simplex approach that satisfies any equivalent form of an FLPP. The new solutions are also compared with the results of existing methods, and several numerical examples illustrate the efficiency of the proposed method.
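The key idea behind horizontal membership functions and RDM arithmetic can be demonstrated on the expression X − X for a triangular fuzzy number: standard interval arithmetic at each α-cut yields a wide, non-degenerate interval, while the RDM form (one shared λ variable per fuzzy number) collapses it to exactly zero. A minimal sketch with an illustrative triangular number (1, 2, 4):

```python
def alpha_cut(a, b, c, alpha):
    """alpha-cut [left, right] of a triangular fuzzy number (a, b, c)."""
    return a + alpha * (b - a), c - alpha * (c - b)

def horizontal(a, b, c, alpha, lam):
    """Horizontal membership function: the alpha-cut parameterised by an
    RDM variable lam in [0, 1]."""
    lo, hi = alpha_cut(a, b, c, alpha)
    return lo + lam * (hi - lo)

a, b, c, alpha = 1.0, 2.0, 4.0, 0.5
lo, hi = alpha_cut(a, b, c, alpha)

# Standard interval arithmetic: X - X = [lo - hi, hi - lo], not zero!
sia_result = (lo - hi, hi - lo)

# RDM arithmetic: both occurrences of X share the SAME lambda, so the
# difference is identically zero for every lambda.
rdm_values = [horizontal(a, b, c, alpha, lam) - horizontal(a, b, c, alpha, lam)
              for lam in (0.0, 0.25, 0.5, 0.75, 1.0)]
print(sia_result, all(v == 0.0 for v in rdm_values))
```

This is why SIA-based FLPP methods cannot satisfy all equivalent problem forms: rewriting a constraint algebraically changes its SIA value, whereas the multidimensional RDM representation is invariant under such rewrites.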
Swarm intelligence-based packet scheduling for future intelligent networks
https://peerj.com/articles/cs-1671 (2023-11-16)
Arif Husen, Muhammad Hasanain Chaudary, Farooq Ahmad, Muhammad Farooq-i-Azam, Chan Hwang See, Arfan Ghani
Network operations involve several decision-making tasks. Some of these tasks are related to operators, such as extending the footprint or upgrading the network capacity. Other decision tasks are related to network functions, such as traffic classification, scheduling, capacity and coverage trade-offs, and policy enforcement. These decisions are often decentralized, and each network node makes its own decisions based on preconfigured rules or policies. To ensure effectiveness, it is essential that planning and functional decisions are in harmony. However, decisions based on human intervention are subject to high costs, delays, and mistakes. On the other hand, machine learning has been used in many fields to automate decision processes intelligently, and future intelligent networks are likewise expected to make extensive use of machine learning and artificial intelligence techniques for functional and operational automation. This article investigates current state-of-the-art methods for packet scheduling and related decision processes, and proposes a machine learning-based approach to packet scheduling for agile and cost-effective networks. The analysis of the experimental results shows that the proposed deep learning-based approach can successfully address these challenges without compromising network performance. For example, with a mean absolute error between 6.38 and 8.41, the proposed deep learning model allows packet scheduling to maintain 99.95% throughput, 99.97% delay, and 99.94% jitter performance, much better than statically configured traffic profiles.
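The scheduling side of such a system can be illustrated with a classic weighted scheduler whose per-flow quanta could, in principle, be set from a traffic-prediction model's output. The sketch below is a plain deficit round robin, not the paper's deep learning model; the flows, packet sizes, and quanta are invented:

```python
from collections import deque

def deficit_round_robin(queues, quanta, rounds):
    """Deficit round robin: each round, a queue's deficit grows by its
    quantum; packets are sent while the deficit covers the head packet.
    Larger quanta (e.g. from predicted traffic shares) mean more bandwidth."""
    deficits = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0          # idle queues accumulate no credit
                continue
            deficits[i] += quanta[i]
            while q and q[0][1] <= deficits[i]:
                name, size = q.popleft()
                deficits[i] -= size
                sent.append(name)
    return sent

# Two flows: small voice packets and large bulk packets. The quanta could
# come from a model predicting each flow's traffic profile.
voice = deque([("v1", 200), ("v2", 200), ("v3", 200)])
bulk  = deque([("b1", 1500), ("b2", 1500)])
order = deficit_round_robin([voice, bulk], quanta=[400, 800], rounds=4)
print(order)  # ['v1', 'v2', 'v3', 'b1', 'b2']
```

Replacing the static quanta with model-predicted ones is the kind of coupling between learning and scheduling the abstract argues for: the scheduler stays simple and deterministic while the parameters adapt to forecast load.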