Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on February 23rd, 2018 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on March 16th, 2018.
  • The first revision was submitted on May 1st, 2018 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on May 3rd, 2018.

Version 0.2 (accepted)

· May 3, 2018 · Academic Editor

Accept

The revisions made seem to have enhanced the article; I am happy to Accept it.

# PeerJ Staff Note - this decision was reviewed and approved by Dezene Huber, a PeerJ Section Editor covering this Section #

Version 0.1 (original submission)

· Mar 16, 2018 · Academic Editor

Minor Revisions

The referees have found merit in your submission and recommend relatively minor revision; I agree. I would add only one thing to their comments - the title is awkward. The current title is unclear (less accurate - less than what?). Why not rephrase in terms of 'Input data accuracy impacts on...' or similar?

Reviewer 1 ·

Basic reporting

The manuscript was reasonably well organized, but I struggled with some of the writing.

1.1. This paper clearly states its purposes (Lines 92-95). However, the scope of the work presented is narrower than those purposes. In the introduction, the aim was to evaluate the effects of input data accuracy, terrain configuration, and the number of visual obstacles, as well as the height of forests. However, the paper only deals with input data accuracy and the height of obstacles. Please state the purpose more precisely, or supplement the paper with the missing results.

1.2. Dataset names should be used consistently throughout the paper once they are defined (Lines 115-122), both for consistency and to aid interpretation. Using several different names for the datasets (e.g., lines 169 and 202) can cause confusion when interpreting the related figures and tables.

1.3. OFFSETA is a term specific to particular GIS software (Line 136). Please provide a brief explanation of the process instead of just naming the OFFSETA parameter of the Viewshed tool.
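For illustration, a minimal sketch of what such an explanation could cover, assuming a simple single-profile line-of-sight test (the function and its names are hypothetical, not the authors' implementation):

```python
import numpy as np

def is_visible(elev, observer_rc, target_rc, observer_offset=1.8):
    """Single-profile line-of-sight test. The observer offset plays the
    role of ArcGIS's OFFSETA: it is added to the terrain elevation at the
    observer cell before the sight line is tested."""
    (r0, c0), (r1, c1) = observer_rc, target_rc
    eye = elev[r0, c0] + observer_offset  # raise the viewpoint above ground

    # Sample the elevation profile along the straight line to the target.
    n = max(abs(r1 - r0), abs(c1 - c0), 1)
    rows = np.linspace(r0, r1, n + 1).round().astype(int)
    cols = np.linspace(c0, c1, n + 1).round().astype(int)
    profile = elev[rows, cols]

    # Visible if no intermediate cell rises above the straight sight line
    # from the eye to the target cell's surface.
    sight = np.linspace(eye, profile[-1], n + 1)
    return bool(np.all(profile[1:-1] <= sight[1:-1]))
```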

1.4. When the authors compare spatial differences among the datasets (Lines 188-197), a table or a graph may help readers understand the result more easily.

1.5. Thank you for providing the raw data. However, if the locations of the random observer points were also supplied with the raw data, it would be very helpful for understanding the analysis results.

1.6. Minor suggestions:
1) Line 174, “ca” might be “are”.

Experimental design

The authors explained each individual step of the analysis well. However, the explanation of the sampling could be improved, and some additional descriptions are necessary.

2.1. 104 sample locations were chosen using stratified random sampling with three categories of forest proportion and three categories of the standard deviation of elevation. I would therefore expect the number of samples to be a multiple of 9, so that each stratum contains the same number of samples. Please explain why you selected this number of samples.
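To make the point concrete, a sketch of equal-allocation stratified sampling over the nine strata, using synthetic attributes as stand-ins for the paper's data (the attribute ranges and class breaks are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate locations with stratification attributes; the
# ranges and class breaks below are assumptions, not from the paper.
n_candidates = 900
forest_prop = rng.uniform(0, 1, n_candidates)    # proportion of forest
elev_sd = rng.uniform(0, 150, n_candidates)      # sd of elevation (m)

forest_cat = np.digitize(forest_prop, [1/3, 2/3])   # three forest classes
elev_cat = np.digitize(elev_sd, [50.0, 100.0])      # three relief classes
stratum = forest_cat * 3 + elev_cat                 # nine strata in total

# Equal allocation draws the same number from each stratum, so the total
# is a multiple of 9 (e.g., 12 per stratum -> 108); 104 cannot split evenly.
per_stratum = 12
sample_idx = np.concatenate([
    rng.choice(np.flatnonzero(stratum == s), per_stratum, replace=False)
    for s in range(9)
])
```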

2.2. Related to the sampling again, sample locations are highly clustered in the northern part. Are there any reasons for this clustering?

2.3. The authors should provide the data source used to calculate the elevation differences (i.e., the standard deviation of elevation) and the forested proportion. In particular, the elevation differences can vary considerably depending on the source dataset.

2.4. I do not understand the meaning of the last sentence in 2.1 Sampling locations. I thought the authors did not choose adjacent samples, but I see some adjacent map pages in the raw data.

2.5. The authors compared spatial differences among the results of viewshed analyses from different datasets. However, the term 'spatial difference' is not familiar to me. Please include a detailed description of how the spatial difference is calculated.
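For example, one plausible formalisation (offered purely as a guess at the authors' meaning, not the paper's definition) is the share of cells on which two binary viewshed rasters disagree:

```python
import numpy as np

def viewshed_disagreement(vs_a, vs_b):
    """Fraction of cells on which two binary viewshed rasters disagree."""
    differing = np.logical_xor(vs_a.astype(bool), vs_b.astype(bool))
    return differing.mean()  # 0.0 = identical viewsheds, 1.0 = fully opposed
```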

Validity of the findings

The authors found that visible areas are overestimated in small-scale datasets and that obstacle heights have a minor effect on viewshed analysis.

3.1. However, the overestimation might result from a smaller variance (or standard deviation) of elevation in small-scale datasets than in large-scale datasets. Because the authors were aware of the effect of elevation variance on the size of a visible area, I assume that is why they used stratified random sampling. Therefore, a useful addition would be to investigate the relationship, if it exists, between the sizes of visible areas and the variances of elevation among the different datasets.
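A sketch of the suggested check, which could be repeated per dataset; the arrays below are synthetic placeholders, not the paper's results, and the relationship they encode is assumed for illustration only:

```python
import numpy as np

# Placeholder stand-ins for per-sample relief and viewshed size.
rng = np.random.default_rng(1)
elev_sd = rng.uniform(10, 150, 104)                        # relief per sample
visible_area = 5000 - 20 * elev_sd + rng.normal(0, 300, 104)

r = np.corrcoef(elev_sd, visible_area)[0, 1]
print(f"Pearson r between elevation sd and visible area: {r:.2f}")
```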

3.2. The authors also classified the samples according to the proportion of forest. However, they did not examine whether the impact of obstacle heights on the sizes of visible areas differs with forest proportion. I think this would also be worth investigating.

Reviewer 2 ·

Basic reporting

The manuscript is written clearly, with few errors. It is well organized and logically structured. The figures are helpful and well done. The background and motivation are solid, and the references are appropriate. There are others that could be included, but I don't think that is essential.

It is not clear if the raw data are shared - the editor can confirm this.

The paper provides an aim, but no specific hypotheses - see below for more comments about this. It could be strengthened with a couple of specific questions that the results already address.

One correction: in Table 1 the terms 'large scale' and 'small scale' are reversed relative to conventional cartographic usage. 1:5000 is large scale, as the fraction 1/5000 is relatively large, while 1:500,000 is small scale, as 1/500,000 is relatively small.

Experimental design

The design is rigorous and enables the authors to address several aspects of their research aims. However, not all of the aims can be addressed with this experimental design.

Starting on line 92: "the aim of our study was to evaluate the effects that input data accuracy, terrain configuration, number of visual obstacles such as forests and buildings, and the quality of expertly assigned obstacle height (particularly of forests) have on the results of simple binary viewshed analysis." In fact, the study is able to effectively demonstrate that DSMs constructed by 'adding' volumes for forested and built-up areas result in overestimation of the viewshed area. It is not possible for this investigation to determine the specific contributions of data accuracy, spatial resolution, or forest/land modeling. The paper's aim could be modified to more carefully reflect what the analysis is able to accomplish.
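As a concrete illustration of the 'adding volumes' construction (the 25 m canopy height echoes the figure mentioned in point i below; the building height is an assumed placeholder):

```python
import numpy as np

def build_dsm(dem, forest_mask, building_mask,
              forest_height=25.0, building_height=10.0):
    """Raise a bare-earth DEM by fixed heights over forest and building
    footprints. The 10 m building height is an assumed placeholder."""
    dsm = dem.copy()
    dsm[forest_mask] += forest_height      # add canopy volume
    dsm[building_mask] += building_height  # add building volume
    return dsm
```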

In addition, I have two points about methodology that were not clear to me:
i. Were observer points offset from the ground elevation or the surface model elevation? That is, if an observation point was in a forest, would the observer be placed 1.8 meters above the ground or above the (25m) canopy surface?

ii. Please explain the manner in which spatial differences between viewsheds were measured and then tested (lines 154-155).

The knowledge gap the paper fills is real, and the paper is able to make a contribution.

Validity of the findings

This data-heavy paper uses appropriate methods, and the findings are solid. Given the importance of accuracy, it would be helpful to discuss the accuracy of the vectorized forest and building layers - presumably errors in these features could contribute to the large differences in viewshed visible area identified in the paper.

Conclusions follow logically and are connected to parallel findings in the literature. The rationale for why viewsheds are overestimated so much by the non-lidar DSM is not a strong point, though. More discussion of 'why', even if speculative, would improve the paper.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.