PeerJ Computer Science Preprints: Network Science and Online Social Networks
https://peerj.com/preprints/index.atom?journal=cs&subject=10700
Network Science and Online Social Networks articles published in PeerJ Computer Science Preprints

SEO: A unique approach to enhance the site rank by implementing Efficient Keywords Scheme
https://peerj.com/preprints/27609
2019-03-22
Khalil ur Rehman, Anaa Yasin, Tariq Mahmood, Muhammad Azeem, Saqib Ali
In search engine optimization, individual web pages are optimized through precise keywords, while whole websites are optimized through back-link generation and monitoring. The existing literature offers no clear guideline for keyword selection and back-link generation. In this research, we propose a model for back-link generation and keyword selection based on systematic analysis. The information on web pages is described by specific keywords, while website traffic is monitored through referrals. We conclude that if the selected keywords are used in the title, headings, and meta tags during the development of page content and architecture, the page ranks higher in search results. Moreover, for back-link generation, a shortened URL that monitors the complete traffic of a site can be placed in a trusted location, which increases the site's rank. The proposed model was validated by comparing quantitative site-rank data collected before and after the framework was applied. The results revealed an overall increase of 40% in site rank after applying the proposed model.
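A minimal sketch of the on-page side of this idea (not the authors' tool; the class and function names are illustrative): it checks whether a set of target keywords appears in the title, heading tags, and keywords meta tag of an HTML page, the placements the abstract associates with better rankings.

```python
# Sketch only: report where each target keyword occurs on a page.
from html.parser import HTMLParser


class KeywordPlacementChecker(HTMLParser):
    HEADINGS = {"h1", "h2", "h3", "h4", "h5", "h6"}

    def __init__(self):
        super().__init__()
        self.title = ""
        self.headings = []
        self.meta_keywords = ""
        self._current = None  # tag whose text we are currently collecting

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title" or tag in self.HEADINGS:
            self._current = tag
        elif tag == "meta" and attrs.get("name", "").lower() == "keywords":
            self.meta_keywords = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

    def handle_data(self, data):
        if self._current == "title":
            self.title += data
        elif self._current in self.HEADINGS:
            self.headings.append(data)


def keyword_placements(html: str, keywords: list[str]) -> dict:
    """For each keyword, report whether it occurs in title, headings, meta."""
    parser = KeywordPlacementChecker()
    parser.feed(html)
    heading_text = " ".join(parser.headings).lower()
    return {
        kw: {
            "title": kw.lower() in parser.title.lower(),
            "headings": kw.lower() in heading_text,
            "meta": kw.lower() in parser.meta_keywords.lower(),
        }
        for kw in keywords
    }


page = ("<html><head><title>Cheap flights to Rome</title>"
        "<meta name='keywords' content='cheap flights, rome'></head>"
        "<body><h1>Cheap flights</h1></body></html>")
print(keyword_placements(page, ["cheap flights", "rome"]))
```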
Evaluating social network extraction for classic and modern fiction literature
https://peerj.com/preprints/27263
2018-10-08
Niels Dekker, Tobias Kuhn, Marieke van Erp
The analysis of literary works has experienced a surge in computer-assisted processing. To obtain insights into the community structures and social interactions portrayed in novels, the construction of social networks from novels has gained popularity. Many methods rely on identifying named entities and relations to build these networks, but most of these tools were not created specifically for the literary domain. Furthermore, studies on information extraction from literature typically focus on 19th-century source material. Because of this, it is unclear whether these techniques are as suitable for modern-day science fiction and fantasy literature as they are for 19th-century classics. We present a study comparing classic and modern literature in terms of the performance of natural language processing tools for the automatic extraction of social networks, as well as the structure of the resulting networks. We find no significant differences between the two sets of novels, but both are subject to a high amount of variance. Furthermore, we identify several issues that complicate named entity recognition in modern novels and present methods to remedy them.
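To make the extraction step concrete, here is a hedged sketch (not the paper's exact pipeline) of a common approach: detect PERSON entities with spaCy and connect characters that co-occur within a sliding window of sentences, yielding a weighted social network. The model name, window size, and function are assumptions for illustration.

```python
# Illustrative character co-occurrence network; assumes en_core_web_sm is installed.
import itertools

import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")


def character_network(text: str, window: int = 3) -> nx.Graph:
    doc = nlp(text)
    sentences = list(doc.sents)
    graph = nx.Graph()
    for i in range(len(sentences)):
        # PERSON mentions within the current window of sentences.
        span = sentences[i:i + window]
        people = {ent.text for sent in span for ent in sent.ents
                  if ent.label_ == "PERSON"}
        # Each co-occurring pair gets an edge whose weight counts co-occurrences.
        for a, b in itertools.combinations(sorted(people), 2):
            weight = graph.get_edge_data(a, b, {"weight": 0})["weight"]
            graph.add_edge(a, b, weight=weight + 1)
    return graph


g = character_network("Emma met Mr. Knightley. Harriet joined Emma later.")
print(g.edges(data=True))
```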
A review of crypto networks
https://peerj.com/preprints/26911
2018-05-03
Mian Zhang, Yuhong Ji
Bitcoin is a cryptocurrency system that has been rapidly adopted because of its anonymity and decentralized properties. Blockchain is the underpinning technology that maintains the Bitcoin transaction ledger. The blockchain network operates in a state of consensus and automatically checks in with itself periodically. One of the biggest innovations of the Bitcoin system is that it offers a new way to develop open networks: anything that happens on the blockchain is a function of the network as a whole. Crypto networks represent a fundamental shift in the way our society transacts, organizes, and works, which could be explained and deciphered through the econophysics of the network itself. From a networking perspective, we review the current literature on crypto networks, mostly Bitcoin transaction networks, and identify potential research areas that could provide further insights into the design of more resilient and secure crypto networks.
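As a toy illustration of the kind of analysis the reviewed studies perform (the data and statistics below are hypothetical, not drawn from the review), Bitcoin-style transactions can be treated as directed, weighted edges between addresses and summarized with basic network measures.

```python
# Toy Bitcoin transaction network built with networkx.
import networkx as nx

# Hypothetical (sender, receiver, amount-in-BTC) records.
transactions = [
    ("addr_A", "addr_B", 0.50),
    ("addr_B", "addr_C", 0.30),
    ("addr_A", "addr_C", 1.20),
    ("addr_C", "addr_A", 0.10),
]

g = nx.DiGraph()
for sender, receiver, amount in transactions:
    if g.has_edge(sender, receiver):
        g[sender][receiver]["amount"] += amount  # aggregate repeated flows
    else:
        g.add_edge(sender, receiver, amount=amount)

print("nodes:", g.number_of_nodes(), "edges:", g.number_of_edges())
print("out-degree:", dict(g.out_degree()))
print("in-degree:", dict(g.in_degree()))
print("density:", nx.density(g))
```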
Resilience enhancement of container-based cloud load balancing service
https://peerj.com/preprints/26875
2018-04-20
Dongsheng Zhang
Web traffic is highly jittery and unpredictable, and load balancers play a significant role in mitigating this uncertainty in web environments. With the growing adoption of cloud computing infrastructure, software load balancers have become more common in recent years. Current load balancing services distribute network requests based on the number of network connections to the backend servers. However, this load balancing algorithm fails when other resources, such as CPU or memory, saturate on a backend server. We experiment with and discuss the resilience evaluation and enhancement of container-based software load balancing services in cloud computing environments, and we propose a pluggable framework that can dynamically adjust the weight assigned to each backend server based on real-time monitoring metrics.
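A minimal sketch of the core idea, assuming CPU and memory utilisation as the monitored metrics (the function, weight scale, and floor value are assumptions, not the paper's implementation): backend weights shrink as a server's most saturated resource grows, rather than being driven by connection counts alone.

```python
# Sketch: recompute backend weights from live CPU/memory utilisation.
def compute_weights(metrics: dict[str, dict[str, float]],
                    floor: float = 0.05) -> dict[str, int]:
    """metrics maps backend name -> {"cpu": 0..1, "mem": 0..1}.

    A backend's weight shrinks as its most saturated resource grows;
    `floor` keeps a nearly saturated backend reachable.
    """
    raw = {name: max(floor, 1.0 - max(m["cpu"], m["mem"]))
           for name, m in metrics.items()}
    total = sum(raw.values())
    # Scale to integer weights in 1..100, as many load balancers expect.
    return {name: max(1, round(100 * value / total))
            for name, value in raw.items()}


snapshot = {
    "backend-1": {"cpu": 0.30, "mem": 0.40},
    "backend-2": {"cpu": 0.90, "mem": 0.50},  # CPU-saturated server
    "backend-3": {"cpu": 0.20, "mem": 0.25},
}
print(compute_weights(snapshot))
# The saturated backend-2 receives a much smaller share of new requests.
```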
What an entangled Web we weave: An information-centric approach to time-evolving socio-technical systems
https://peerj.com/preprints/2789
2018-04-15
Markus Luczak-Roesch, Kieron O'Hara, Jesse David Dinneen, Ramine Tinati
A new layer of complexity, constituted of networks of information token recurrence, has been identified in socio-technical systems such as the Wikipedia online community and the Zooniverse citizen science platform. The identification of this complexity reveals that our current understanding of the actual structure of those systems, and consequently the structure of the entire World Wide Web, is incomplete. Here we establish the principled foundations and practical advantages of analyzing information diffusion within and across Web systems with Transcendental Information Cascades, and outline resulting directions for future study in the area of socio-technical systems. We also suggest that Transcendental Information Cascades may be applicable to any kind of time-evolving system that can be observed using digital technologies, and that the structures found in such systems exhibit properties common to all naturally occurring complex systems.
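For intuition only, here is a simplified sketch of an information-token recurrence network (the exact construction rules of Transcendental Information Cascades are the authors'; this version simply links each message to the most recent earlier message sharing a token with it).

```python
# Simplified token-recurrence cascade over a time-ordered message stream.
import networkx as nx

messages = [  # hypothetical time-ordered posts
    (1, "classification task posted"),
    (2, "finished the classification"),
    (3, "new task about galaxies"),
    (4, "galaxies classification done"),
]

cascade = nx.DiGraph()
last_seen = {}  # token -> id of the latest message containing it
for msg_id, text in messages:
    cascade.add_node(msg_id, text=text)
    for token in set(text.lower().split()):
        if token in last_seen:
            # Recurrence of a token links the earlier and later messages.
            cascade.add_edge(last_seen[token], msg_id, token=token)
        last_seen[token] = msg_id

print(sorted(cascade.edges(data=True)))
```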
Evaluating the complementarity of communication tools for learning platforms
https://peerj.com/preprints/3114
2017-12-26
Leonardo Carvalho, Eduardo Bezerra, Gustavo Guedes, Laura Assis, Leonardo Lima, Artur Ziviani, Fabio Porto, Rafael Barbastefano, Eduardo Ogasawara
Due to constant innovation in communication tools, many educational institutions are continually evaluating the adoption of new communication tools (NCT) for their learning platforms (LP). In particular, these institutions are interested in checking whether an NCT brings benefits to their teaching and learning process. An important problem behind this interest is how to identify when an NCT provides a communication flow that significantly complements the current communication tools (CCT) already provided by the LP. This paper presents the Mixed Graph Framework (MGF) to address the problem of measuring the complementarity of an NCT in a scenario where some CCT is already established. Since we are interested in the methodological process, we evaluated the MGF using synthetic data. Our experiments show that the MGF was able to identify whether an NCT produces significant changes in the overall communications of an LP according to several centrality measures.
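A hedged illustration of the kind of comparison the abstract describes (not the MGF itself; edge lists and the choice of betweenness centrality are assumptions): compute centrality on the CCT graph alone and on the mixed graph that also contains NCT interactions, and look for significant shifts.

```python
# Compare centrality on the CCT-only graph vs. the mixed CCT+NCT graph.
import networkx as nx

cct_edges = [("alice", "bob"), ("bob", "carol"), ("carol", "dave")]
nct_edges = [("alice", "dave"), ("eve", "bob")]  # hypothetical new-tool links

cct = nx.Graph(cct_edges)
mixed = nx.Graph(cct_edges + nct_edges)

for name, graph in [("CCT only", cct), ("CCT + NCT", mixed)]:
    centrality = nx.betweenness_centrality(graph)
    print(name, {node: round(score, 2) for node, score in centrality.items()})
# Large centrality shifts between the two graphs suggest the NCT adds a
# complementary communication flow rather than duplicating existing paths.
```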
Stability analysis of MTopGO for module identification in PPI networks
https://peerj.com/preprints/3289
2017-09-27
Danila Vella, Allan Tucker, Riccardo Bellazzi
MTopGO is a novel module-identification algorithm for PPI network analysis. It is designed to consider two key aspects of these models: the topological properties of the network and the a priori knowledge about the proteins involved, represented by GO annotations.
MTopGO relies on random components, so the stability of its results across different runs is a critical aspect of the algorithm. Moreover, when evaluating an algorithm specific to PPI networks, an important aspect is stability in the presence of false positive and false negative edges. In this work, two stability analyses were carried out to evaluate MTopGO's performance: first, the stability of the result over many runs starting from the same input, to capture the range of variability introduced by the random components of the algorithm; second, the robustness of the output clusters when the input is affected by noise and uncertainty.
The results showed that MTopGO was more stable in the presence of false negative edges than false positive edges: adding false edges to a PPI network was more damaging than removing existing links.
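A sketch of the noise-robustness analysis described above, with several stand-ins labelled as such: greedy modularity communities replace MTopGO (which is not packaged here), the karate club graph replaces a PPI network, and cluster agreement is measured with the adjusted Rand index. The same ARI comparison would apply across repeated runs of a stochastic algorithm such as MTopGO.

```python
# Stability-under-noise sketch; requires networkx and scikit-learn.
import random

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.metrics import adjusted_rand_score


def cluster_labels(graph):
    """Node -> community-id labelling from the stand-in clustering algorithm."""
    communities = greedy_modularity_communities(graph)
    return {node: i for i, members in enumerate(communities) for node in members}


def perturb(graph, n_remove=5, n_add=5, seed=0):
    """Simulate false negatives (removed edges) and false positives (added)."""
    rng = random.Random(seed)
    g = graph.copy()
    g.remove_edges_from(rng.sample(list(g.edges()), n_remove))
    nodes = list(g.nodes())
    while n_add > 0:
        u, v = rng.sample(nodes, 2)
        if not g.has_edge(u, v):
            g.add_edge(u, v)
            n_add -= 1
    return g


ppi = nx.karate_club_graph()          # toy stand-in for a PPI network
reference = cluster_labels(ppi)
noisy = cluster_labels(perturb(ppi))
nodes = sorted(ppi.nodes())
ari = adjusted_rand_score([reference[n] for n in nodes],
                          [noisy[n] for n in nodes])
print("agreement with noisy input (ARI):", round(ari, 3))
```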
Weather events identification in social media streams: tools to detect their evidence in Twitter
https://peerj.com/preprints/2241
2017-09-21
Valentina Grasso, Imad Zaza, Federica Zabini, Gianni Pantaleo, Paolo Nesi, Alfonso Crisci
Identifying and monitoring the impact of severe weather through social media data is a substantial challenge for data science. In recent years the number of natural disasters has increased, partly due to climate change. Many works have shown that during such events people tend to share specific messages on social media platforms, especially Twitter. These messages not only contribute to situational awareness and improve the dissemination of information during emergencies, but can also be used to assess the social impact of crisis events. In this work we present preliminary findings on how the temporal distribution of weather-related messages may help identify severe events that have impacted a community. Severe weather events can be recognized by observing the synchronization of Twitter stream volumes extracted with different, but semantically related, terms and hashtags, including those containing geographic names. Impacting events appear immediately recognizable in graphical representations of the weather streams, when the timelines show a specific parallel pattern that we named the "Half Onion Shape". Twitter streams for different but semantically linked weather terms may exhibit different magnitudes, reflecting the popularity of their terms, but when a weather event occurs they show the same relative temporal maximum. Given these promising indications, which need to be confirmed through deeper analysis, and the widespread use of social media such as Twitter during crisis events, it is becoming fundamental to have a suite of suitable tools for monitoring social media data. For Twitter data, a comprehensive suite of tools is presented: the DISIT-Twitter Vigilance Platform for Twitter data retrieval, management, and visualization.
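An exploratory sketch of the synchronisation idea (not the DISIT-Twitter Vigilance implementation; the tweet records, hourly binning, and majority threshold are assumptions): bin several weather-related keyword streams by hour and flag hours in which most streams reach their maximum together.

```python
# Detect hours where most keyword streams peak simultaneously.
import pandas as pd

# Hypothetical per-tweet records: (timestamp, matched keyword).
tweets = pd.DataFrame({
    "time": pd.to_datetime([
        "2017-09-10 14:05", "2017-09-10 14:20", "2017-09-10 14:40",
        "2017-09-10 15:10", "2017-09-10 15:30", "2017-09-10 16:00",
    ]),
    "keyword": ["storm", "flood", "storm", "rain", "storm", "flood"],
})

# Hourly tweet counts per keyword stream.
counts = (tweets.groupby([pd.Grouper(key="time", freq="h"), "keyword"])
                .size().unstack(fill_value=0))

# An hour is a candidate event if a majority of streams peak there together.
peaks = counts.eq(counts.max())                 # per-stream maxima
synchronised = peaks.sum(axis=1) > counts.shape[1] / 2
print(counts)
print("candidate event hours:", list(counts.index[synchronised]))
```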
Vertical handoff algorithm for different wireless technologies
https://peerj.com/preprints/2970
2017-05-08
Radhwan Mohamed Abdullah, Zuriati Ahmad Zukarnain
Transferring large amounts of data between different network locations over network links depends on heterogeneous wireless networks. Such a network consists of several networks with different access technologies. Traditionally, a mobile device performs vertical handover by considering only one criterion, the received signal strength (RSS). The use of a single criterion may cause service interruption, an unbalanced network load, and inefficient vertical handover. In this paper, we propose an enhanced vertical handover decision algorithm based on multiple criteria in a heterogeneous wireless network. The algorithm considers three technology interfaces: Long-Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), and Wireless Local Area Network (WLAN). It also employs three types of vertical handover decision algorithms: equal priority, mobile priority, and network priority. The simulation results show that the proposed handover decision algorithm outperforms the traditional network decision algorithm in terms of the handover number probability and the handover failure probability. In addition, the network priority handover decision algorithm produces better results than the equal priority and mobile priority handover decision algorithms. Finally, the simulation results are validated by an analytical model.
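A simplified sketch of a multi-criteria handover decision (the criteria values and weights below are assumptions, not the paper's model): score each candidate network from normalised RSS, load, and cost, then hand over to the highest-scoring one.

```python
# Weighted multi-criteria selection among LTE, WiMAX, and WLAN candidates.
def handover_decision(candidates, weights):
    """candidates: name -> {"rss": 0..1, "load": 0..1, "cost": 0..1}.

    Higher RSS is better; higher load and cost are worse.
    """
    def score(m):
        return (weights["rss"] * m["rss"]
                + weights["load"] * (1 - m["load"])
                + weights["cost"] * (1 - m["cost"]))

    return max(candidates, key=lambda name: score(candidates[name]))


networks = {
    "LTE":   {"rss": 0.80, "load": 0.70, "cost": 0.90},
    "WiMAX": {"rss": 0.60, "load": 0.40, "cost": 0.50},
    "WLAN":  {"rss": 0.70, "load": 0.30, "cost": 0.10},
}
# A "network priority"-style weighting that emphasises load and cost.
print(handover_decision(networks, {"rss": 0.3, "load": 0.4, "cost": 0.3}))
```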
nanoHUB: Experiences and insights on care and feeding of a successful, engaged science gateway community
https://peerj.com/preprints/2525
2016-10-16
Lynn Zentner, Gerhard Klimeck
Established in 2002, nanoHUB.org continues to attract a large community of users of computational tools and learning materials related to nanotechnology [1, 2]. Over the last 12 months, nanoHUB has engaged over 1.4 million visitors and 13,000 simulation users with over 5,000 items of content, making it a premier example of an established science gateway. The nanoHUB team tracks references to nanoHUB in the scientific literature and has found nearly 1,600 vetted citations to nanoHUB, with over 19,000 secondary citations to those primary papers, supporting the view that nanoHUB enables quality research. nanoHUB is also used extensively for both informal and formal education [3, 4], with automatic algorithms detecting use in 1,501 classrooms reaching nearly 30,000 students. During 14 years of operation, the nanoHUB team has had an opportunity to study the behavior of its user base, evaluate mechanisms for success, and learn when and how to make adjustments to better serve the community and stakeholders. We have developed a set of success criteria for a science gateway such as nanoHUB for attracting and growing an active community of users. Outstanding science content is necessary, and that content must continue to expand or the gateway and community will grow stagnant. A large challenge is to incentivize a community not only to use the site but, more importantly, to contribute [5, 6]. There is often a recruitment and conversion process that involves first attracting users, then giving them reason to stay, use, and share increasingly complex content, and finally to become content authors themselves. This process requires a good understanding of the user community and its needs, as well as an active outreach program led by a user-oriented content steward with a technical background sufficient to understand the work and needs of the community. A reliable infrastructure is critical to maintaining an active, participatory community. Using the underlying HUBzero® technology, nanoHUB is able to leverage infrastructure developments from across a wide variety of hubs and, by utilizing platform support from the HUBzero team, access development and operational expertise from a team of 25 professionals that one scientific project would be hard-pressed to support on its own. nanoHUB has found that open assessment and presentation of statistics and impact metrics not only inform development and outreach activities but also incentivize users and provide transparency to the scientific community at large.