To increase transparency, PeerJ operates a system of 'optional signed reviews and history'. This takes two forms: (1) peer reviewers are encouraged, but not required, to provide their names (if they do so, then their profile page records the articles they have reviewed), and (2) authors are given the option of reproducing their entire peer review history alongside their published article (in which case the complete peer review process is provided, including revisions, rebuttal letters and editor decision letters).
All comments by the reviewers and myself have been addressed appropriately.
The small mistakes and typos have been fixed. The added tables make the paper more readable and make comparison to other methods easier.
I am particularly happy with the addition of section 4.3 where the influence of the presence of pathology is investigated. This is a nice addition to the paper as robustness in pathological retinas is of highest interest.
I am satisfied with the content and current state of the paper, and the good results justify the method.
The authors have addressed this reviewer's comments.
I thank the reviewers for their thorough and constructive reviews, and I agree with their observations. In addition, I would like to challenge you to define a rationale for using GC: is GC essential to achieving the improved robustness? Furthermore, you claim the proposed method is robust, but you fail to provide evidence. How do you define robustness? How does it compare with other methods? Either substantiate the claim or remove it.
The problem at hand and the relevance of the proposed method are introduced properly. The method itself introduces no novelty, but it does produce good results, as shown by evaluation on multiple publicly available databases.
The detection of the optic disc is an important preprocessing step in many CAD systems. It is typically seen as a relatively simple task; robustness is therefore important, and it is the focus of this paper.
The level of English is sufficient; there are some small mistakes here and there, but the text is generally easy to read.
As noted by the authors, there are many other methods already developed for detection and segmentation of the optic disc, but as this operation is quite crucial in further processing steps, an improvement in robustness is worth reporting.
The evaluation on multiple publicly available datasets, and the comparison to other previously developed methods show the robustness of the method.
The evaluation is thorough and provides a wide array of performance metrics. The comparison to other methods is a nice addition and demonstrates the method's value.
The conclusion interprets and summarizes the results correctly.
This paper presents a method for the detection and segmentation of the optic disc in digital fundus images. Many methods have already been applied to this task, some closely related to the one presented here. The paper is well written and the results look good, but many of its components have been presented previously; stripping those away leaves the application of the grow-cut algorithm to the segmentation of the optic disc.
No attention is paid to the issues that make segmentation of the optic disc difficult, especially peripapillary atrophy. The weak (or strong) points of the presented approach are not discussed, and there are some potential issues with the evaluation (see the detailed remarks).
1. How is the success rate for OD detection obtained?
2. A table comparing the proposed method with other OD detection techniques is missing.
3. Table 4 compares the proposed OD segmentation method with others, but only for the DRIVE and DIARETDB1 datasets. Since the ground truth of OD segmentation for the ONHSD and MESSIDOR datasets is also publicly available, a new table similar to Table 4 would allow a fair comparison with other methods.
Expert System for Early Automated Detection of DR by Analysis of Digital Retinal Images, Project Website, http://www.uhu.es/retinopathy/muestras2.php, 2012.
4. In several published papers the OD segmentation method is also evaluated on the ONHSD dataset. I suggest the authors do the same and repeat their experiments on this dataset for the sake of comparison with other methods.
 J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, E. Fletcher, L. Kennedy, Optic nerve head segmentation, IEEE Trans. Med. Imaging 23 (2004) 256–264.
5. In section “Discussion and conclusion”, line 384, it is mentioned that “the GC algorithm has been utilized for the first time in localizing and segmenting OD in retinal images.” As I understand it, the GC algorithm is used only for segmentation, and the OD is localized using the circular Hough transform. Does the final OD segmentation result modify the initially obtained OD location?
6. There is a typo in equation (2).
7. Line 196: it is not clear how ValMostPixels is calculated.
8. The discussion section needs more work.
The authors refer frequently to their own work, which should be referred to as 'our earlier work' rather than left unattributed.
There seems to be a mistake in equation 2, where the first and second conditions are the same.
Line 211, image resizing: does this not disturb the aspect ratio? This needs to be commented on in the manuscript. Further, how this resizing increases robustness is not clear.
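To illustrate the aspect-ratio concern: a resize that preserves aspect ratio scales both dimensions by the same factor, whereas forcing a fixed square distorts the image. The helper below is a hypothetical sketch (the manuscript does not state how its resizing is done); `resize_dims` and the 256-pixel target are assumptions for illustration only.

```python
def resize_dims(width, height, target):
    """Compute new dimensions that fit inside a target x target square
    while preserving the original aspect ratio (hypothetical helper;
    not the paper's actual procedure)."""
    scale = target / max(width, height)  # scale by the longer side
    return round(width * scale), round(height * scale)

# Example: a 565x584 DRIVE image scaled to fit a 256-pixel square
print(resize_dims(565, 584, 256))  # -> (248, 256), aspect ratio kept
```

Resizing both dimensions to a fixed square instead (e.g. 256x256) would change the circularity of the optic disc, which is presumably relevant to any subsequent circular-shape fitting.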
Lines 226-229 are not clear; please rephrase.
Line 252: how is the boundary approximated by a circular shape? This needs to be described.
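One standard way to approximate a boundary by a circle is an algebraic least-squares (Kasa) fit; the sketch below is a hypothetical illustration of that technique, not the paper's actual procedure, which remains to be described by the authors.

```python
import math

def fit_circle(points):
    """Least-squares (Kasa) circle fit. Model x^2 + y^2 = u1*x + u2*y + u3,
    which is linear in (u1, u2, u3); returns (cx, cy, r).
    Hypothetical sketch -- not the paper's method."""
    sx = sy = sz = sxx = syy = sxy = szx = szy = 0.0
    n = len(points)
    for x, y in points:
        z = x * x + y * y
        sx += x; sy += y; sz += z
        sxx += x * x; syy += y * y; sxy += x * y
        szx += z * x; szy += z * y
    # Normal equations for the three unknowns
    A = [[sxx, sxy, sx],
         [sxy, syy, sy],
         [sx,  sy,  float(n)]]
    rhs = [szx, szy, sz]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            rhs[r] -= f * rhs[i]
    u = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        u[i] = (rhs[i] - sum(A[i][j] * u[j] for j in range(i + 1, 3))) / A[i][i]
    cx, cy = u[0] / 2, u[1] / 2
    r = math.sqrt(u[2] + cx * cx + cy * cy)
    return cx, cy, r

# Noise-free check: points sampled from a circle centred at (2, 3), radius 5
pts = [(2 + 5 * math.cos(0.1 * k), 3 + 5 * math.sin(0.1 * k)) for k in range(40)]
cx, cy, r = fit_circle(pts)
```

Whether the authors use such a fit, a Hough-derived circle, or something else should be stated explicitly in the manuscript.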
The pipe symbol in equations 8 and 9 is not described; is it a union? It needs to be defined.
Lines 324-326: these results would be better reported as a table.
Is the time reported in the right-hand column of Table 4 an average? This needs to be specified.
Figures 5 and 6 may be presented together as a single figure. Further, the images containing pathology in Figures 5 and 6 should be highlighted.
No comments, except that this paper seems very similar to the authors' earlier conference paper (http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7395436). The authors should comment on how this submission differs from the conference version.
The results appear to be good. However, it is suggested to compute a boundary/contour distance (e.g. mean contour distance or Hausdorff distance) as a validation measure between the algorithm's boundary and the ground-truth boundary; the measures currently used are all similar to each other and rely on overlap.
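The suggested measures can be sketched as follows; this is a minimal brute-force illustration over boundaries sampled as (x, y) point lists, with function names chosen for illustration (the reviewer names only the measures, not an implementation).

```python
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets.
    O(len(a) * len(b)) brute force; adequate for contour-length sets."""
    def directed(p, q):
        # worst-case distance from a point of p to its nearest point in q
        return max(min(math.dist(u, v) for v in q) for u in p)
    return max(directed(a, b), directed(b, a))

def mean_contour_distance(a, b):
    """Average symmetric nearest-neighbour distance between two contours."""
    def avg(p, q):
        return sum(min(math.dist(u, v) for v in q) for u in p) / len(p)
    return (avg(a, b) + avg(b, a)) / 2

# Two unit-square contours offset by one pixel in x
c1 = [(0, 0), (1, 0), (1, 1), (0, 1)]
c2 = [(1, 0), (2, 0), (2, 1), (1, 1)]
print(hausdorff(c1, c2))              # -> 1.0
print(mean_contour_distance(c1, c2))  # -> 0.5
```

Unlike overlap ratios, these distances directly penalise boundary localisation error, which is why they complement the overlap-based measures already reported.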
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.