PeerJ Computer Science Preprints: Graphics
https://peerj.com/preprints/index.atom?journal=cs&subject=10200
Graphics articles published in PeerJ Computer Science Preprints

SAS macros for longitudinal IRT models
https://peerj.com/preprints/26740
2018-03-20
Maja Olsbjerg, Karl Bang Christensen
IRT models are often applied when observed items are used to measure a unidimensional latent variable. Originally used in educational research, IRT models are now widely used when the focus is on physical functioning or psychological well-being. Modern applications often need more general models, typically models for multidimensional latent variables or longitudinal models for repeated measurements. This paper describes a collection of SAS macros that can be used for fitting longitudinal IRT models to data, simulating from them, and visualizing them. The macros encompass dichotomous as well as polytomous item response formats and are sufficiently flexible to accommodate changes in item parameters across time points and local dependence between responses at different time points.
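For dichotomous items, the simplest IRT model is the Rasch model, in which the probability of a positive response depends only on the difference between the person's latent trait θ and the item difficulty b. A minimal sketch in Python (illustrative only; not part of the SAS macros described above):

```python
import math

def rasch_probability(theta, b):
    """Probability of a positive response under the Rasch model:
    P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person whose latent trait equals the item difficulty has a 50% chance.
p_equal = rasch_probability(0.5, 0.5)   # → 0.5

# Higher ability relative to difficulty raises the probability.
p_high = rasch_probability(2.0, 0.0)
```

Longitudinal extensions, as handled by the macros, replace the single θ with correlated latent variables, one per time point.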

Emulation of surgical fluid interactions in real-time
https://peerj.com/preprints/3334
2018-02-23
Donald Stredney, Bradley Hittle, Hector Medina-Fetterman, Thomas Kerwin, Gregory Wiet
The surgical skills required to successfully maintain hemostasis, the control of operative bleeding, require considerable deliberate practice. Hemostasis requires the deft orchestration of bi-dexterous tool manipulation. We present our approach to computationally emulating both the irrigation and the bleeding associated with neurotologic surgical technique. The overall objective is to provide a visually plausible, three-dimensional, real-time simulation of bleeding and irrigation in a virtual otologic simulator system. The result is a unique simulation environment for deliberate study and practice.

Infrastructure and tools for teaching computing throughout the statistical curriculum
https://peerj.com/preprints/3181
2017-08-24
Mine Cetinkaya-Rundel, Colin W Rundel
Modern statistics is fundamentally a computational discipline, but too often this fact is not reflected in our statistics curricula. With the rise of big data and data science it has become increasingly clear that students want, expect, and need explicit training in this area of the discipline. Additionally, recent curricular guidelines clearly state that working with data requires extensive computing skills and that statistics students should be fluent in accessing, manipulating, analyzing, and modeling with professional statistical analysis software. Much has been written in the statistics education literature about pedagogical tools and approaches for providing a practical computational foundation for students. This article discusses the computational infrastructure and toolkit choices that allow for these pedagogical innovations while minimizing frustration and improving adoption for both our students and instructors.

Visualising higher-dimensional space-time and space-scale objects as projections to \(\mathbb{R}^3\)
https://peerj.com/preprints/2844
2017-03-02
Ken Arroyo Ohori, Hugo Ledoux, Jantien Stoter
Objects of more than three dimensions can be used to model geographic phenomena that occur in space, time and scale. For instance, a single 4D object can be used to represent the changes in a 3D object's shape across time, or all its optimal representations at various levels of detail. In this paper, we look at how such higher-dimensional space-time and space-scale objects can be visualised as projections from \(\mathbb{R}^4\) to \(\mathbb{R}^3\). We present three projections that we believe are particularly intuitive for this purpose: (i) a simple 'long axis' projection that puts 3D objects side by side; (ii) the well-known orthographic and perspective projections; and (iii) a projection to a 3-sphere (\(S^3\)) followed by a stereographic projection to \(\mathbb{R}^3\), which results in an inwards-outwards fourth axis. Our focus is on using these projections from \(\mathbb{R}^4\) to \(\mathbb{R}^3\), but they are formulated from \(\mathbb{R}^n\) to \(\mathbb{R}^{n-1}\) so as to be easily extensible and to incorporate other non-spatial characteristics. We present a prototype interactive visualiser that applies these projections from 4D to 3D in real time using the programmable pipeline and compute shaders of the Metal graphics API.
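The stereographic projection mentioned in (iii) has a simple closed form: a point (x, y, z, w) on the unit 3-sphere, projected from the pole (0, 0, 0, 1), maps to (x, y, z) / (1 − w). A minimal sketch in Python (not the authors' shader code; purely illustrative):

```python
def stereographic_4d_to_3d(x, y, z, w):
    """Stereographic projection of a point on the unit 3-sphere S^3
    (x^2 + y^2 + z^2 + w^2 = 1) from the pole (0, 0, 0, 1) to R^3."""
    s = 1.0 / (1.0 - w)  # undefined at the projection pole itself (w = 1)
    return (x * s, y * s, z * s)

# The opposite pole (0, 0, 0, -1) maps to the origin of R^3,
# and points on the "equator" (w = 0) map to themselves.
origin = stereographic_4d_to_3d(0.0, 0.0, 0.0, -1.0)  # → (0.0, 0.0, 0.0)
```

Points near the projection pole land far from the origin, which is what produces the inwards-outwards character of the fourth axis described above.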

A comprehensive investigation of visual cryptography and its role in secure communications
https://peerj.com/preprints/2682
2017-01-20
Elham Shahab, Hadi Abdolrahimpour
Secret sharing approaches, and in particular Visual Cryptography (VC), try to address security issues in dealing with images. VC is a powerful technique that combines the notions of perfect ciphers and secret sharing in cryptography. VC takes an image (the secret) as input and encrypts (divides) it into two or more pieces (shares), none of which reveals any information about the original input on its own. Decryption is carried out by superimposing the shares on top of each other to recover the input image. No computation is required for decryption, which is one of the distinguishing features of VC: it is claimed to be unique in that the encrypted message can be decrypted directly by the human visual system.
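The classic (2, 2) scheme for a binary image can be sketched in a few lines: each secret pixel is expanded into a pair of subpixels on each share, chosen at random, and physically stacking the transparencies (black wins, i.e. logical OR per subpixel) makes black pixels fully black and white pixels half black. A toy Python sketch of the idea (illustrative only; function names are invented):

```python
import random

def split_pixel(secret_bit):
    """Naor–Shamir style (2,2) sharing of one binary pixel (1 = black).
    Each share gets a pair of subpixels, exactly one black and one white.
    A white secret pixel gives both shares the same pair; a black secret
    pixel gives them complementary pairs. Each share alone looks random."""
    pattern = random.choice([(0, 1), (1, 0)])
    if secret_bit == 0:                     # white: identical patterns
        return pattern, pattern
    complement = (1 - pattern[0], 1 - pattern[1])
    return pattern, complement              # black: complementary patterns

def stack(pair_a, pair_b):
    """Stacking transparencies: a subpixel is black if black on either share."""
    return tuple(a | b for a, b in zip(pair_a, pair_b))

s1, s2 = split_pixel(1)
assert stack(s1, s2) == (1, 1)      # black pixel: both subpixels black
s1, s2 = split_pixel(0)
assert sum(stack(s1, s2)) == 1      # white pixel: exactly one subpixel black
```

The contrast loss (white pixels decode to half black) is the price of requiring no computation at decryption time.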

Semi-automatic wavelet soft-thresholding applied to digital image error level analysis
https://peerj.com/preprints/2619
2016-12-06
Daniel C Jeronymo
In this paper, a method for detecting image forgery in lossy compressed digital images known as error level analysis (ELA) is presented, and its noisy components are filtered with automatic wavelet soft-thresholding. With ELA, a lossy compressed image is recompressed at a known error rate and the absolute differences between these images, known as error levels, are computed. The method can be weakened if the image noise generated by the compression scheme is too intense, creating the need for noise filtering. Wavelet thresholding is a proven denoising technique that can remove an image's noise while avoiding alteration of other components, such as high-frequency regions, by thresholding the wavelet transform coefficients, and thus does not cause blurring. Despite its effectiveness, the choice of the threshold is a known issue; however, there are approaches to select it automatically. In this paper, a lowpass filter is implemented through wavelet thresholding, attenuating error level noise. An efficient method to automatically determine the threshold level is used, showing good results in threshold selection for the presented problem. Standard test images have been doctored to simulate image tampering, error levels for these images are computed, and wavelet thresholding is performed to attenuate noise. Results are presented, confirming the method's efficiency at noise filtering while preserving the necessary error levels.
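Soft thresholding itself has a standard closed form: a coefficient c is shrunk toward zero by the threshold t, η(c, t) = sign(c) · max(|c| − t, 0). A minimal Python sketch of that operator (illustrative only; the paper's automatic threshold selection is not reproduced here):

```python
def soft_threshold(coefficient, threshold):
    """Soft-threshold a wavelet coefficient:
    values within [-threshold, threshold] become 0,
    larger values are shrunk toward zero by the threshold amount."""
    if coefficient > threshold:
        return coefficient - threshold
    if coefficient < -threshold:
        return coefficient + threshold
    return 0.0

# Small coefficients (likely noise) are zeroed; large ones survive, shrunk.
denoised = [soft_threshold(c, 1.0) for c in [-3.0, -0.5, 0.2, 2.5]]
# → [-2.0, 0.0, 0.0, 1.5]
```

Applied to the detail coefficients of a wavelet transform of the error-level image, this suppresses compression noise while leaving strong tampering signatures largely intact.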

Building size modelization
https://peerj.com/preprints/2264
2016-09-28
Arlette Antoni, Thierry Dhorne
New challenges in the efficient management of cities depend on a deep knowledge of their inner structures. It is therefore very important to have access to reliable models of cities' characteristics and organization. This paper aims at providing and validating a stochastic modelization, based on statistical data on building parameters, which can be useful as an input for many other models in a wide range of fields where building structure is a main factor in a thorough modelization of cities. The interest of such an approach is highlighted through the detection of errors in the data and through its use as a tool for visual clustering.

GRNsight: a web application and service for visualizing models of small- to medium-scale gene regulatory networks
https://peerj.com/preprints/2068
2016-08-18
Kam D Dahlquist, John David N Dionisio, Ben G Fitzpatrick, Nicole A Anguiano, Anindita Varshneya, Britain J Southwick, Mihir Samdarshi
GRNsight is a web application and service for visualizing models of gene regulatory networks (GRNs). A gene regulatory network consists of genes, transcription factors, and the regulatory connections between them, which govern the levels of expression of mRNA and protein from genes. The original motivation came from our efforts to perform parameter estimation and forward simulation of the dynamics of a differential equation model of a small GRN with 21 nodes and 31 edges. We wanted a quick and easy way to visualize the weight parameters from the model, which represent the direction and magnitude of the influence of a transcription factor on its target gene, so we created GRNsight. GRNsight automatically lays out either an unweighted or weighted network graph based on an Excel spreadsheet containing an adjacency matrix where regulators are named in the columns and target genes in the rows, a Simple Interaction Format (SIF) text file, or a GraphML XML file. When a user uploads an input file specifying an unweighted network, GRNsight automatically lays out the graph using black lines and pointed arrowheads. For a weighted network, GRNsight uses pointed and blunt arrowheads, and colors the edges and adjusts their thicknesses based on the sign (positive for activation, negative for repression) and magnitude of the weight parameter. GRNsight is written in JavaScript, with diagrams facilitated by D3.js, a data visualization library. Node.js and the Express framework handle server-side functions. GRNsight's diagrams are based on D3.js's force graph layout algorithm, which was extensively customized to support the specific needs of GRNs. Nodes are rectangular and support gene labels of up to 12 characters. Edges are arcs, which become straight lines when the nodes are close together. Self-regulatory edges are indicated by a loop. When a user mouses over an edge, the numerical value of the weight parameter is displayed.
Visualizations can be modified by sliders that adjust the force graph layout parameters and through manual node dragging. GRNsight is best suited for visualizing networks of fewer than 35 nodes and 70 edges, although it accepts networks of up to 75 nodes or 150 edges. GRNsight is generally applicable for displaying any small, unweighted or weighted network with directed edges, in systems biology or other application domains. GRNsight serves as an example of following and teaching best practices for scientific computing and complying with the FAIR Principles, using an open, test-driven development model with rigorous documentation of requirements and issues on GitHub. An exhaustive unit testing framework using Mocha and the Chai assertion library consists of around 160 automated unit tests that examine nearly 530 test files to ensure that the program is running as expected. The GRNsight application (http://dondi.github.io/GRNsight/) and code (https://github.com/dondi/GRNsight) are available under the open source BSD license.
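The input convention described above (regulators named in columns, target genes in rows) maps naturally to a list of directed, signed edges. A hypothetical Python sketch of that conversion (GRNsight itself is JavaScript; the function and gene names here are purely illustrative):

```python
def matrix_to_edges(regulators, targets, weights):
    """Convert an adjacency matrix (targets in rows, regulators in columns)
    into directed edges regulator -> target, keeping the signed weight.
    Zero entries mean 'no regulation' and produce no edge."""
    edges = []
    for i, target in enumerate(targets):
        for j, regulator in enumerate(regulators):
            w = weights[i][j]
            if w != 0:
                kind = "activation" if w > 0 else "repression"
                edges.append((regulator, target, w, kind))
    return edges

# Two regulators, two targets; CIN5 activates itself and represses GLN3.
edges = matrix_to_edges(
    regulators=["CIN5", "GLN3"],
    targets=["CIN5", "GLN3"],
    weights=[[0.8, 0.0],
             [-0.4, 0.0]],
)
# → [('CIN5', 'CIN5', 0.8, 'activation'), ('CIN5', 'GLN3', -0.4, 'repression')]
```

The sign and magnitude of each edge tuple correspond to the arrowhead style (pointed vs. blunt) and line thickness in the rendered graph.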

The GEOBASI (Geochemical Database of Tuscany) open source tools
https://peerj.com/preprints/2283
2016-07-12
Manuela Corongiu, Brunella Raco, Antonella Buccianti, Patrizia Macera, Riccardo Mari, Stefano Romanelli, Barbara Nisi, Stefano Menichetti
The tools implemented for the new Regional Geochemical Database, called GEOBASI, are presented here. The database provides a repository where geochemical information (compositional and isotopic) is stored in a structured way so that it is available to different groups of users (e.g. institutions, and public and private companies). The information contained in the database can be downloaded freely and queried to correlate geochemistry with other, non-compositional variables. The repository aims to promote the use of the geochemical data already available from previous investigations through a powerful Web-GIS interface. The graphical-numerical tools built on this database have been developed to: 1) analyse the spatial variability of the investigated context, 2) highlight the geographic location of data pertaining to classes of values or single cases, 3) compare the results of different analytical methodologies applied to the determination of the same element and/or chemical species, 4) extract the geochemical data related to specific monitoring plans and/or geographical areas, and 5) recover information about data below the detection limit to understand their impact on the behaviour of the investigated variable.

BibeR: A Web-based tool for bibliometric analysis in scientific literature
https://peerj.com/preprints/1879
2016-03-20
Yang Liu, Meng Li
Bibliometric analysis is a statistical method for summarizing the amount of scientific activity in a domain. Insights can be derived from bibliometrics to understand the development trend of a research domain. R is an open-source programming language specialized for statistical computing and graphical visualization. To combine the convenience of R with the outcomes of bibliometric analysis, we introduce BibeR, a web-based application for the visualization of bibliometric analyses. An example of a bibliometric analysis of articles published in the journal Scientometrics is used to illustrate the usage of BibeR. The development of BibeR is still in progress, and possible future improvements are discussed.
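At its core, a bibliometric summary is an aggregation over publication records, e.g. counting publications per year. A minimal Python sketch of that kind of aggregation (BibeR itself is written in R; the record fields and data below are invented for illustration):

```python
from collections import Counter

def publications_per_year(records):
    """Count publications by year from a list of bibliographic records,
    where each record is a dict with at least a 'year' field."""
    return Counter(record["year"] for record in records)

# Tiny illustrative record set (not real Scientometrics data).
records = [
    {"title": "Paper A", "year": 2014},
    {"title": "Paper B", "year": 2015},
    {"title": "Paper C", "year": 2015},
]
counts = publications_per_year(records)
# → Counter({2015: 2, 2014: 1})
```

Plotting such counts over time is exactly the kind of trend visualization a web front end like BibeR can then render interactively.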