Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

View examples of open peer review.

Summary

  • The initial submission of this article was received on November 21st, 2015 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on December 17th, 2015.
  • The first revision was submitted on February 16th, 2016 and was reviewed by 3 reviewers and the Academic Editor.
  • A further revision was submitted on May 10th, 2016 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on May 12th, 2016.

Version 0.3 (accepted)

· May 12, 2016 · Academic Editor

Accept

Congratulations. You have followed the reviewers' suggestions and the manuscript is now ready.

Version 0.2

· Mar 21, 2016 · Academic Editor

Minor Revisions

Try to modify the paper to address the comments of reviewer 1, for example regarding the use of accuracy or other metrics more specific to classification problems instead of the RMSE.

Reviewer 1 ·

Basic reporting

The authors have made an effort to improve the paper. I think that it is much clearer now.
However, I must object that I still cannot see the necessity of this new method, mainly in light of the results on the general data-sets: in the statistical comparison they are always worse than those of previous methods. Hence, why not apply previous methods to their problem instead of inventing a new, complex algorithm? I cannot see the point. I would rather expect an application paper in which the authors compare different existing algorithms and show whether the new one is better in their scenario; that would make much more sense.
Moreover, why are they considering the RMSE? For a classification model it does not make much sense …
With respect to the problem of classes with only one instance, I still do not understand why this is not considered a problem. I cannot see how this can work in a classification scenario where cross-validation is used to validate the models.
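The reviewer's two concerns can be made concrete with a small sketch (pure Python; the 20-class figure comes from the review, while the label layout and names are hypothetical): accuracy is the natural metric for a classifier, and a class with a single instance can never appear in both the training and test sides of a cross-validation split, so it can never be predicted correctly.

```python
from collections import Counter

def accuracy(y_true, y_pred):
    # Fraction of correct predictions: the usual classification metric,
    # unlike the RMSE, which assumes a meaningful numeric error.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical label set loosely mirroring the review's figures:
# 20 suspects with two crimes each, plus one suspect with a single crime.
labels = ["suspect_%d" % (i % 20) for i in range(40)] + ["suspect_20"]

counts = Counter(labels)
# A class with fewer than 2 instances cannot occur in both the training
# and the test partition of any cross-validation split, so its test
# predictions are necessarily wrong.
unlearnable = sorted(c for c, n in counts.items() if n < 2)

print(accuracy([0, 1, 1, 0], [0, 1, 0, 0]))  # 0.75
print(unlearnable)                           # ['suspect_20']
```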

Experimental design

It is much better now, but I do not understand why they considered the RMSE. Also, they did not report the p-values of the statistical tests. The problem of classes with a single instance persists.

Validity of the findings

Not clear due to the data-set used.

Reviewer 2 ·

Basic reporting

No Comments

Experimental design

No Comments

Validity of the findings

No Comments

Reviewer 3 ·

Basic reporting

No Comments

Experimental design

No Comments

Validity of the findings

No Comments

Additional comments

All suggestions mentioned in the previous revision have been correctly addressed by the authors.

Version 0.1 (original submission)

· Dec 17, 2015 · Academic Editor

Major Revisions

Put special emphasis on addressing the recommendations of reviewer 1.

Reviewer 1 ·

Basic reporting

The authors present a profiling model for modus operandi analysis in criminal investigations. They present an automatic model for this problem and also apply it to commonly used UCI datasets.
The idea presented by the authors is interesting, and they properly motivate their work: an automatic system to find similarities between crimes. What I miss in this paper is a comparison against the profiling performed by humans, that is, by the police.
Overall, the method is properly explained and easy to follow. However, it is not clear how they fine-tune the fuzzy inference system. Could the authors provide more insights on this?
It is also not clear how the authors labeled the classification data-set. One of the problems with the paper is that it is not clear whether the data used are enough to obtain meaningful conclusions and to be really useful. They consider only 67 instances with 20 classes (suspects), and one can imagine that the total number of crimes is much higher. As a result, there are classes with only one instance, which are impossible to learn/predict. Does this make sense? I am not sure, and even if the paper makes sense, the experimental part is quite weak due to these problems.
Moreover, it is not clear how their model is applied to the rest of the common data-sets. Given that it is a classification model, why not try a greater variety of data-sets? Likewise, why not test their model against commonly used classifiers? From my point of view, this last part of the paper is poor and the analysis of the results is superficial. Furthermore, I would say that the statistical analysis is not properly carried out and the ranks are not properly interpreted.
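As a hedged illustration of the rank-based comparison the reviewer alludes to (the ranking step underlying, e.g., a Friedman test), average ranks across data-sets can be sketched as follows; the scores are made up for the example, and ties are not handled:

```python
def average_ranks(scores):
    # scores[d][a]: performance of algorithm a on dataset d (higher is better).
    # Rank 1 goes to the best algorithm on each dataset; a lower average
    # rank therefore indicates a better method overall.
    n_alg = len(scores[0])
    totals = [0.0] * n_alg
    for row in scores:
        order = sorted(range(n_alg), key=lambda a: -row[a])
        for rank, a in enumerate(order, start=1):
            totals[a] += rank
    return [t / len(scores) for t in totals]

# Hypothetical accuracies for 3 algorithms on 3 datasets
ranks = average_ranks([[0.90, 0.80, 0.70],
                       [0.85, 0.90, 0.60],
                       [0.70, 0.75, 0.80]])
print(ranks)  # [2.0, 1.666..., 2.333...]: algorithm 2 ranks best on average
```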
The English of the paper should also be revised.
Overall, the paper is interesting for its application, but there is a lot of work to be done on it.

Experimental design

The authors' data may be insufficient to achieve meaningful conclusions. The experimental framework and the analysis of the UCI data-sets are not properly carried out, and the statistical tests are misinterpreted.

Validity of the findings

I am not sure of the real validity of their findings due to the low number of instances considered (67, with some classes having only 1 instance).

Reviewer 2 ·

Basic reporting

Section "Related work":
- clearly separate (in several paragraphs) the data mining techniques: Association rules, Link analysis, Classification
- mention in particular the distance metrics between binary vectors proposed in the literature, and cite them when introducing DP and CP
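For reference, two binary-vector distance metrics commonly found in the literature (these are the standard definitions, not the paper's DP/CP measures) can be sketched as:

```python
def hamming(x, y):
    # Number of positions in which two equal-length binary vectors differ
    return sum(a != b for a, b in zip(x, y))

def jaccard(x, y):
    # 1 - |intersection| / |union| over the positions set to 1
    both = sum(1 for a, b in zip(x, y) if a and b)
    either = sum(1 for a, b in zip(x, y) if a or b)
    return (1.0 - both / either) if either else 0.0

print(hamming([1, 1, 0, 0], [1, 0, 1, 0]))  # 2
print(jaccard([1, 1, 0, 0], [1, 0, 1, 0]))  # ≈ 0.667
```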

Rows 196-197: replace "materials" with "systems"

Section "Generating the dynamic MOs.." :
- if DMO is related to a particular criminal, then replace in row 292 "second criminal" with "same criminal"
- no need for equation (2) - it is sufficient to mention that M_D is the median of the D sequence

Section "Finding deviation ...":
- change in Eq (6) "x_i - y_i" with (x_i = ) AND (y_i = 0)
- row 350 - there is no table 7 here
- row 355 - replace "Equation" with "expression"

Sometimes information is repeated, as in rows 549-552.

Experimental design

No comments

Validity of the findings

No comments

Additional comments

Re-read the text carefully; sometimes the predicate of a sentence is missing.

Reviewer 3 ·

Basic reporting

The paper is well written and structured. The abstract and introduction clearly reflect the aim of this practical paper.

The bibliography must be updated: only one reference from 2015 is cited, and most references date from before 2011. Reference 49 must include the year of publication.

Experimental design

No comments

Validity of the findings

No comments

Additional comments

In this paper, a method for identifying the modus operandi of criminals is proposed. The method is based on "binary feature vector profiling", in which relationships between criminals and the crimes they committed are analyzed by means of a fuzzy inference system. A crime data set was used for testing, and the results show that the proposed method performs well.

In general, the idea presented in this paper is very interesting and well developed.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.