PeerJ Computer Science: Emerging Technologies
https://peerj.com/articles/index.atom?journal=cs&subject=10100
Emerging Technologies articles published in PeerJ Computer Science

Design of smart citrus picking model based on Mask RCNN and adaptive threshold segmentation
https://peerj.com/articles/cs-1865 (2024-03-04)
Ziwei Guo, Yuanwu Shi, Ibrar Ahmad
Smart agriculture is steadily progressing towards automation and heightened efficiency, and the rapid rise of deep learning technology provides a robust foundation for this trajectory. Leveraging computer vision and deep learning techniques enables real-time monitoring and management in agriculture, facilitating swift detection of plant growth and autonomous assessment of ripeness. In response to the demands of smart agriculture, this article addresses automated citrus harvesting, presenting an ATT-MRCNN target detection model that integrates channel attention and spatial attention mechanisms for detecting and identifying citrus in images. The framework applies Mask Region-based CNN (Mask RCNN) to diverse classes of citrus images, with attention mechanisms incorporated to enhance the model's efficacy. During training, transfer learning is used for parameter initialization, improving data efficiency and training speed. Empirical results demonstrate that this method achieves a recognition rate surpassing 95% across the three recognition tasks, providing valuable algorithmic support and guidance for the coming era of intelligent harvesting.
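The channel- and spatial-attention gating that ATT-MRCNN adds to Mask RCNN can be sketched, in heavily simplified form, as two gates applied to a feature map: one weighting whole channels by their pooled response, the other weighting spatial positions by their cross-channel average. The plain-Python sketch below is a hand-rolled illustration only; the paper's module presumably uses learned layers (e.g., a small MLP after pooling), which are omitted here.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(fmap):
    # fmap: C x H x W nested lists.
    # Squeeze: global average pool per channel, then gate with a sigmoid.
    # (A real module would pass the pooled vector through learned layers,
    # omitted here for brevity.)
    weights = []
    for ch in fmap:
        mean = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        weights.append(sigmoid(mean))
    return [[[v * w for v in row] for row in ch]
            for ch, w in zip(fmap, weights)]

def spatial_attention(fmap):
    # Average across channels at each position, then gate each position.
    C, H, W = len(fmap), len(fmap[0]), len(fmap[0][0])
    gate = [[sigmoid(sum(fmap[c][i][j] for c in range(C)) / C)
             for j in range(W)] for i in range(H)]
    return [[[fmap[c][i][j] * gate[i][j] for j in range(W)]
             for i in range(H)] for c in range(C)]
```

Applying the channel gate and then the spatial gate in sequence mirrors the usual ordering in combined attention modules, though the paper's exact arrangement may differ.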
An integrative decision-making framework to guide policies on regulating ChatGPT usage
https://peerj.com/articles/cs-1845 (2024-02-29)
Umar Ali Bukar, Md Shohel Sayeed, Siti Fatimah Abdul Razak, Sumendra Yogarayan, Oluwatosin Ahmed Amodu
Generative artificial intelligence has created a moment in history where human beings have begun to interact closely with artificial intelligence (AI) tools, putting policymakers in a position to restrict or legislate such tools. One particular example is ChatGPT, the first and most widely used multipurpose generative AI tool in the world. This study aims to put forward a policy-making framework for generative artificial intelligence based on the risk, reward, and resilience framework. A systematic search was conducted using carefully chosen keywords, excluding non-English content, conference articles, book chapters, and editorials. Published research was filtered based on relevance to ChatGPT ethics, yielding a total of 41 articles. Key elements surrounding ChatGPT concerns and motivations were systematically deduced and classified under the risk, reward, and resilience categories to serve as ingredients for the proposed decision-making framework. The decision-making process and rules were developed as a primer to help policymakers navigate decision-making conundrums. The framework was then tailored to some of the concerns surrounding ChatGPT in the context of higher education. Regarding the interconnection between risk and reward, the findings show that providing students with access to ChatGPT presents an opportunity for increased efficiency in tasks such as text summarization and workload reduction, but exposes them to risks such as plagiarism and cheating. Similarly, pursuing certain opportunities, such as access to vast amounts of information, can lead to rewards, but also introduces risks like misinformation and copyright issues. Likewise, focusing on specific capabilities of ChatGPT, such as developing tools to detect plagiarism and misinformation, may enhance resilience in some areas (e.g., academic integrity) while creating vulnerabilities in other domains, such as the digital divide, educational equity, and job losses. Furthermore, the findings indicate second-order effects of legislation regarding ChatGPT, with both positive and negative implications. One potential effect is a decrease in rewards due to the limitations imposed by legislation, which may hinder individuals from fully capitalizing on the opportunities ChatGPT provides. Hence, the risk, reward, and resilience framework provides a comprehensive and flexible decision-making model that allows policymakers, and in this use case higher education institutions, to navigate the complexities and trade-offs associated with ChatGPT, with theoretical and practical implications for the future.
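The risk/reward/resilience trade-offs described above can be imagined as explicit decision rules. The toy function below is purely illustrative: the `assess` name, its inputs, and its thresholds are inventions of this sketch, not the paper's formal decision rules.

```python
def assess(option):
    # option: dict with 'risks', 'rewards', 'resilience' factor lists.
    # Toy rule: permit use when rewards outnumber risks, or when every
    # risk is matched by at least one resilience measure; otherwise
    # recommend restrictions.
    risks, rewards, resilience = (option.get(k, []) for k in
                                  ("risks", "rewards", "resilience"))
    if len(rewards) > len(risks):
        return "permit"
    if risks and len(resilience) >= len(risks):
        return "permit with safeguards"
    return "restrict"
```

The higher-education example from the abstract would map naturally onto such inputs: summarization efficiency as a reward, plagiarism as a risk, plagiarism detectors as resilience.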
A machine learning-based hybrid recommender framework for smart medical systems
https://peerj.com/articles/cs-1880 (2024-02-20)
Jianhua Wei, Honglin Yan, Xiaoli Shao, Lili Zhao, Lin Han, Peng Yan, Shengyu Wang
This article presents a hybrid recommender framework for smart medical systems, introducing two methods to improve service-level evaluations and doctor recommendations for patients. The first method uses big data techniques and deep learning algorithms to develop a registration review system for medical institutions; this system outperforms conventional evaluation methods, achieving higher accuracy. The second method applies the term frequency-inverse document frequency (TF-IDF) algorithm to construct a model based on the patient's symptom vector space, incorporating score weighting, modified cosine similarity, and K-means clustering. Alternating least squares (ALS) matrix decomposition and user-based collaborative filtering are then applied to predict patients' scores for doctors and recommend the top-performing ones. Experimental results show significant improvements in precision and recall over conventional methods, making the proposed approach a practical solution for department triage and doctor recommendation in medical appointment platforms.
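The symptom-matching step can be illustrated with a minimal TF-IDF plus cosine-similarity computation in plain Python. This is a sketch only: it uses plain cosine rather than the paper's modified variant, and omits score weighting, K-means clustering, and the ALS stage.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # docs: list of token lists (e.g., symptom terms per patient record).
    # Returns one sparse {term: weight} dict per document.
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * math.log(n / df[t])
                     for t, c in tf.items()})
    return vecs

def cosine(u, v):
    # Cosine similarity between two sparse term-weight dicts.
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

In a recommender of this kind, the similarity scores would feed into neighbour selection before the collaborative-filtering prediction step.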
Evaluating generative AI integration in Saudi Arabian education: a mixed-methods study
https://peerj.com/articles/cs-1879 (2024-02-16)
Abdullah Alammari
Incorporating generative artificial intelligence (GAI) in education has become crucial in contemporary educational environments. This research article thoroughly investigates the ramifications of implementing GAI in the higher education context of Saudi Arabia, employing a blend of quantitative and qualitative research approaches. Survey-based quantitative data reveals a noteworthy correlation between educators’ awareness of GAI and the frequency of its application. Notably, around half of the surveyed educators are at stages characterized by understanding and familiarity with GAI integration, indicating a tangible readiness for its adoption. Moreover, the study’s quantitative findings underscore the perceived value and ease associated with integrating GAI, thus reinforcing the assumption that educators are motivated and inclined to integrate GAI tools like ChatGPT into their teaching methodologies. In addition to the quantitative analysis, qualitative insights from in-depth interviews with educators unveil a rich tapestry of perspectives. The qualitative data emphasizes GAI’s role as a catalyst for collaborative learning, contributing to professional development, and fostering innovative teaching practices.
Controller-driven vector autoregression model for predicting content popularity in programmable named data networking devices
https://peerj.com/articles/cs-1854 (2024-02-08)
Firdous Qaiser, Mudassar Hussain, Abdul Ahad, Ivan Miguel Pires
Named Data Networking (NDN) has emerged as a promising network architecture for content delivery in edge infrastructures, primarily due to its name-based routing and integrated in-network caching. Despite these advantages, sub-optimal performance often results from the decentralized decision-making of caching devices. This article introduces a paradigm shift by implementing a Software Defined Networking (SDN) controller to optimize the placement of highly popular content in NDN nodes. The optimization process considers critical networking factors, including network congestion, security, topology modification, and flow-rule alterations, which are essential for shaping content caching strategies. The article presents a novel content caching framework, Popularity-aware Caching in Popular Programmable NDN nodes (PaCPn). Employing a multivariate vector autoregression (VAR) model driven by an SDN controller, PaCPn periodically updates content popularity based on time-series data, including 'request rates' and 'past popularity'. It also introduces a controller-driven heuristic algorithm that evaluates the proximity of caching points to consumers, considering factors such as 'distance cost', 'delivery time', and the specific 'status of the requested content'. PaCPn utilizes customized DATA named packets to ensure the source stores content with a valid residual freshness period while preventing intermediate nodes from caching it. The experimental results demonstrate significant improvements achieved by the proposed PaCPn technique compared to existing schemes. Specifically, it enhances cache hit rates by 20% across various metrics, including cache size, Zipf parameter, and exchanged traffic within the edge infrastructure. Moreover, it reduces content retrieval delays by 28%, considering metrics such as cache capacity, the number of consumers, and network throughput. This research advances NDN content caching and offers potential optimizations for edge infrastructures.
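The popularity-update step can be sketched as a first-order, two-variable VAR in plain Python: the next (request rate, popularity) pair is a linear combination of the previous one. The coefficient matrix below is illustrative only; PaCPn would fit its coefficients from the observed time series, and the function names here are inventions of this sketch.

```python
def var_forecast(history, A):
    # history: list of (request_rate, popularity) observations, oldest
    # first. A: 2x2 coefficient matrix of a first-order VAR.
    rate, pop = history[-1]
    next_rate = A[0][0] * rate + A[0][1] * pop
    next_pop  = A[1][0] * rate + A[1][1] * pop
    return next_rate, next_pop

def rank_contents(histories, A):
    # Rank content items by their one-step-ahead popularity forecast,
    # most popular first; a controller could cache the top entries.
    scores = {name: var_forecast(h, A)[1] for name, h in histories.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

A controller-driven design would re-run this forecast periodically and push cache decisions to the programmable NDN nodes.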
Machine learning based framework for fine-grained word segmentation and enhanced text normalization for low resourced language
https://peerj.com/articles/cs-1704 (2024-01-31)
Shahzad Nazir, Muhammad Asif, Mariam Rehman, Shahbaz Ahmad
In text applications, pre-processing is a significant factor in the quality of natural language processing (NLP) outcomes. Text normalization and tokenization are two pivotal pre-processing procedures whose importance cannot be overstated. Text normalization refers to transforming raw text into scripturally standardized text, while word tokenization splits the text into tokens or words. Well-defined normalization and tokenization approaches exist for most widely spoken languages; however, Urdu, the world's 10th most widely spoken language, has been overlooked by the research community. This research presents improved text normalization and tokenization techniques for the Urdu language. For Urdu text normalization, multiple regular expressions and rules are proposed, including removing diacritics, normalizing single characters, and separating digits. For word tokenization, core features are defined and extracted for each character of the text, and a machine learning model, combined with handcrafted rules, predicts space insertion to tokenize the text. For the experiments, the largest human-annotated dataset in Urdu script was created, covering five different domains. The results were evaluated using precision, recall, F-measure, and accuracy, and compared with the state of the art: the normalization approach yielded a 20% improvement and the tokenization approach a 6% improvement.
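A small taste of such normalization rules, using Python regular expressions and Unicode decomposition to strip diacritics and separate digits from letters. This is an illustrative subset only; the paper's Urdu-specific rule set is larger, and the function name here is an invention of this sketch.

```python
import re
import unicodedata

def normalize_urdu(text):
    # Rule 1: remove diacritics. Decompose to NFD, then drop combining
    # marks (Urdu diacritics such as zabar/zer are combining characters).
    text = "".join(ch for ch in unicodedata.normalize("NFD", text)
                   if not unicodedata.combining(ch))
    # Rule 2: separate digit runs from adjacent non-digit characters.
    text = re.sub(r"(?<=\d)(?=\D)|(?<=\D)(?=\d)", " ", text)
    # Collapse any resulting whitespace runs.
    return re.sub(r"\s+", " ", text).strip()
```

Rules like these produce a standardized surface form before the character-level features for space prediction are extracted.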
Intelligent search system for resume and labor law
https://peerj.com/articles/cs-1786 (2024-01-19)
Hien Nguyen, Vuong Pham, Hung Q. Ngo, Anh Huynh, Binh Nguyen, José Machado
Labor and employment are important issues in social life. The demand for online job searching, and for searching labor regulations in legal documents, particularly regarding unemployment benefit policies, is substantial. Programs exist for each of these functions separately, but no program combines both. In practice, users seeking a job may be unemployed or may wish to change jobs; thus, they need to search for regulations about unemployment insurance policies and related information, as well as regulations that help them work in compliance with labor law. Ontology is a useful technique for representing areas of practical knowledge. This article proposes an ontology-based method for solving labor- and employment-related problems. First, we construct an ontology of job skills to match curricula vitae (CVs) and job descriptions (JDs). In addition, an ontology for representing labor law documents is proposed to aid users in searching for legal labor regulations. These ontologies are combined to construct the knowledge base of a system for job searching and labor law searching. This integrated ontology is also used to study several issues involving the matching of CVs and JDs and the search for labor law content. A system for intelligent resume searching in information technology is developed using the proposed method; it also handles queries about Vietnamese labor law policies on unemployment and healthcare benefits. The experimental results demonstrate that the method effectively assists job seekers and users searching for legal labor documents.
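The CV/JD skill-matching idea can be sketched as an ontology-expanded set overlap: each side's skills are expanded with related terms from the ontology before comparison. The dictionary-based `ontology` below is a toy stand-in for the paper's ontology, and the Jaccard score is one plausible similarity choice, not necessarily the authors'.

```python
def skill_match(cv_skills, jd_skills, ontology):
    # ontology: maps a skill to a set of equivalent/related skills,
    # e.g. {"js": {"javascript"}} (toy stand-in for a real ontology).
    def expand(skills):
        out = set()
        for s in skills:
            out.add(s)
            out |= ontology.get(s, set())
        return out
    cv, jd = expand(set(cv_skills)), expand(set(jd_skills))
    # Jaccard overlap of the expanded skill sets.
    return len(cv & jd) / len(cv | jd) if cv or jd else 0.0
```

The expansion step is what lets an ontology-backed matcher see "js" on a CV and "javascript" in a JD as the same skill, where plain keyword overlap would score zero.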
Controller placement with critical switch aware in software-defined network (CPCSA)
https://peerj.com/articles/cs-1698 (2023-12-19)
Nura Muhammed Yusuf, Kamalrulnizam Abu Bakar, Babangida Isyaku, Abdelzahir Abdelmaboud, Wamda Nagmeldin
Software-defined networking (SDN) is a networking architecture that improves efficiency by moving networking decisions from the data plane to a centralized control plane. In a traditional SDN deployment, a single controller is typically used. However, the complexity of modern networks, given their size and high traffic volumes with varied quality-of-service requirements, imposes high control-message communication overhead on the controller. Using multiple distributed controllers instead brings forth the 'controller placement problem' (CPP). Incorporating switch roles into CPP modelling during network partitioning for controller placement has not been adequately considered by existing CPP techniques. This article proposes a controller placement algorithm with network partitioning based on critical switch awareness (CPCSA). CPCSA identifies critical switches in the software-defined wide area network (SDWAN) and then partitions the network based on their criticality. Subsequently, a controller is assigned to each partition to improve control-message overhead, loss, throughput, and flow setup delay. CPCSA was evaluated on real network topologies obtained from the Internet Topology Zoo. Results show that CPCSA achieves an aggregate reduction in controller overhead of 73%, loss of 51%, and latency of 16%, while improving throughput by 16% compared to the benchmark algorithms.
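The partition-then-place idea can be sketched in plain Python. In this sketch, 'criticality' is approximated by node degree (the paper's criterion may well differ), and each remaining switch joins the partition of its nearest critical switch via multi-source BFS; one controller would then serve each partition.

```python
from collections import deque

def partition_by_critical(adj, k):
    # adj: {switch: set(neighbour switches)}.
    # Pick the k highest-degree switches as "critical" (a simple proxy
    # for criticality), then assign every switch to its nearest critical
    # switch by multi-source breadth-first search.
    critical = sorted(adj, key=lambda s: len(adj[s]), reverse=True)[:k]
    owner = {c: c for c in critical}  # each critical switch owns itself
    q = deque(critical)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in owner:
                owner[v] = owner[u]
                q.append(v)
    return owner  # switch -> critical switch (controller location)
```

Placing the controller at (or near) the critical switch of each partition is the intuition; the paper's algorithm additionally optimizes for overhead, loss, and flow-setup delay.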
Guidelines for a participatory Smart City model to address Amazon's urban environmental problems
https://peerj.com/articles/cs-1694 (2023-12-12)
Jonas Gomes da Silva
Climate change is a global challenge, and the Brazilian Amazon Forest is a particular concern due to the possibility of reaching a tipping point that could amplify environmental crises. Despite many studies on the Amazon Forest, this research was conducted in Manaus, the capital of Amazonas state, to address five gaps, including the lack of local citizen consultation on urban environmental issues, Smart Cities, decarbonization, and disruptive technologies. This study holds significance for the academic community, government bodies, policymakers, and investors, as it offers novel insights into the Amazon region and proposes a model to engage citizens in Smart Cities. This model could also guide other municipalities aspiring to participatory sustainable development with a decarbonization focus, mitigating future risks and protecting future generations. It is an explanatory and applied study that employs mixed methods, including literature, bibliometric, and documentary reviews, two questionnaires, and descriptive statistical approaches, organized in phases to reach the following goals: (a) provide information on the main challenges facing humanity, the Brazilian Amazon state, and the city of Manaus; (b) identify the best Smart City approaches for engaging citizens in solving urban problems; (c) contextualize and consult Manaus City Hall about the effectiveness of the Smart City project; (d) investigate the perceptions of citizens living in Manaus of the city's main environmental problems, as well as their level of knowledge of and interest in issues related to Smart Cities, decarbonization, and disruptive technologies; (e) propose a participatory Smart City model with recommendations.
Among the results, the study found that the term "Smart City" dominates scholarly publications among nineteen urban-related terms, and that the five main environmental problems in Manaus are increasing stream pollution, garbage accumulation, insufficient urban afforestation, air pollution, and traffic congestion. Although citizens are willing to help, the majority lack knowledge of Smart City and Decarbonized City issues, but there is considerable interest in training related to these issues, as well as in disruptive technologies. It was found that Amsterdam, Melbourne, Montreal, San Francisco, Seoul, and Taipei all have a formal model to engage citizens in solving their urban problems. The main conclusion is that, after 6 years, the Smart City Project in Manaus is a political fallacy, as no model, especially one with a citizen participatory approach, has been effectively adopted. In addition, after conducting a literature and documentary review and analyzing 25 benchmark Smart Cities, the P5 model and the Citizen Engagement Kit model are proposed, with 120 approaches and guidelines for addressing the main environmental problems by including Manaus' citizens in the Smart City and/or decarbonization journey.
Climate change is a global challenge, and the Brazilian Amazon Forest is a particular concern due to the possibility of reaching a tipping point that could amplify environmental crises. Despite many studies on the Amazon Forest, this research was conducted in Manaus, the capital of Amazonas state, to address five gaps, including the lack of local citizen consultation on urban environmental issues, Smart Cities, decarbonization, and disruptive technologies. This study holds significance for the academy community, government bodies, policymakers, and investors, as it offers novel insights into the Amazon region and proposes a model to engage citizens in Smart Cities. This model could also guide other municipalities aspiring for participatory sustainable development with a decarbonization focus, mitigating future risks, and protecting future generations. Basically, it is an explanatory and applied study that employs mixed methods, including literature, bibliometric and documentary reviews, two questionnaires, and descriptive statistical approaches, organized in four phases to reach the following goals: (a) provide information on the main challenges facing humanity, the Brazilian Amazon state, and the city of Manaus; (b) identify the best Smart City approaches for engaging citizens in solving urban problems; (c) contextualize and consult Manaus City Hall about the effectiveness of the Smart City project; (d) investigate the perceptions of citizens living in Manaus on the main city’s environmental problems, as well as their level of knowledge and interest on issues related to Smart Cities, decarbonization, and disruptive technologies; (e) propose a participatory Smart City model with recommendations. 
Among the results, the study found that the term “Smart City” dominates scholarly publications among nineteen urban-related terms, and that the five main environmental problems in Manaus are increasing stream pollution, garbage accumulation, insufficient urban afforestation, air pollution, and traffic congestion. Although citizens are willing to help, the majority lack knowledge of Smart City and Decarbonized City issues, yet there is considerable interest in training related to these issues, as well as in disruptive technologies. It was found that Amsterdam, Melbourne, Montreal, San Francisco, Seoul, and Taipei all have a formal model to engage citizens in solving their urban problems. The main conclusion is that, after six years, the Smart City Project in Manaus is a political fallacy, as no model, especially one with a citizen participatory approach, has been effectively adopted. In addition, after conducting a literature and documentary review and analyzing 25 benchmark Smart Cities, the P5 model and the Citizen Engagement Kit model are proposed, with 120 approaches and guidelines for addressing the main environmental problems by including Manaus’ citizens in the Smart City and/or decarbonization journey.

Management of investment portfolios employing reinforcement learning
https://peerj.com/articles/cs-1695
2023-12-11
Gustavo Carvalho Santos, Daniel Garruti, Flavio Barboza, Kamyr Gomes de Souza, Jean Carlos Domingos, Antônio Veiga
Investors are presented with a multitude of options and markets for pursuing higher returns, a task that often proves complex and challenging. This study examines the effectiveness of reinforcement learning (RL) algorithms in optimizing investment portfolios, comparing their performance with traditional strategies and benchmarking against American and Brazilian indices. Additionally, it explores the impact of incorporating commodity derivatives into portfolios, as well as the associated transaction costs. The results indicate that the inclusion of derivatives can significantly enhance portfolio performance while reducing volatility, presenting an attractive opportunity for investors. RL techniques also demonstrate superior effectiveness in portfolio optimization, resulting in an average increase of 12% in returns without a commensurate increase in risk. Consequently, this research makes a substantial contribution to the field of finance. It not only sheds light on the application of RL but also provides valuable insights for academia. Furthermore, it challenges conventional notions of market efficiency and modern portfolio theory, offering practical implications. It suggests that data-driven investment management holds the potential to enhance efficiency, mitigate conflicts of interest, and reduce biased decision-making, thereby transforming the landscape of financial investment.
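To illustrate the general idea of RL-based portfolio allocation described in this abstract (not the paper's actual algorithm, which is not specified here), the following is a minimal single-state Q-learning sketch: an agent learns, from synthetic return data, which discrete two-asset allocation maximizes net reward after a proportional transaction cost. All parameters (return distributions, cost rate, learning and exploration rates) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily returns for two assets (illustrative, not real market data).
T = 500
returns = rng.normal([0.0005, 0.0003], [0.01, 0.006], size=(T, 2))

# Action space: discrete weights for asset 0 (the remainder goes to asset 1).
actions = np.linspace(0.0, 1.0, 5)
n_actions = len(actions)
cost = 0.001            # assumed proportional transaction cost on weight changes
alpha, eps = 0.1, 0.2   # learning rate, epsilon-greedy exploration rate

# Single-state Q-learning: estimate the expected net reward of each allocation.
q = np.zeros(n_actions)
w_prev = 0.5
for t in range(T):
    # Explore a random allocation with probability eps, else exploit the best.
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q))
    w = actions[a]
    # Portfolio return minus the cost of rebalancing from the previous weight.
    reward = w * returns[t, 0] + (1 - w) * returns[t, 1] - cost * abs(w - w_prev)
    # Incremental update of the action-value estimate toward the observed reward.
    q[a] += alpha * (reward - q[a])
    w_prev = w

best_weight = actions[int(np.argmax(q))]
```

A full study along these lines would replace the synthetic returns with historical index and derivative prices and the tabular agent with a deep RL policy over a richer state (prices, holdings, costs); the sketch only shows how transaction costs enter the reward signal.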