Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

Summary

  • The initial submission of this article was received on May 11th, 2016 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on July 3rd, 2016.
  • The first revision was submitted on February 4th, 2017 and was reviewed by 1 reviewer and the Academic Editor.
  • A further revision was submitted on June 21st, 2017 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on August 1st, 2017.

Version 0.3 (accepted)

· Aug 1, 2017 · Academic Editor

Accept

I have attached an annotated PDF that fixes typographic errors and, in some cases, suggests rewording to clarify the flow of the text. Please integrate these edits during the production phase.

Reviewer 3 has asked that you disclose the countries for companies F and G in your dataset. As I mention in my annotation in the text, such a disclosure would not appear to cause a problem, but please let us know if it would indeed breach any relevant confidentiality agreements.

Lastly, in the conclusions, I have requested that you specifically mention Hassenzahl one last time since this work relies on Hassenzahl's findings concerning the attributes unique to UX.

Reviewer 3

Basic reporting

The paper fulfils these requirements

Experimental design

The paper fulfils these requirements

Validity of the findings

The paper fulfils this set of requirements

Additional comments

Thank you for your revisions, which have improved the paper considerably. The presentation is clearer and the claims made are better justified.

In particular, I am pleased to see an explicit acknowledgement that you are focusing on Hassenzahl's model for your analysis. My previous discomfort with your paper stemmed in part from your implicit reliance on this model, which was not clearly acknowledged despite being only one view.

I have one small suggestion, which is to include the locations of companies G and F when they are first mentioned. This struck me as an odd omission.

Version 0.2

· Apr 25, 2017 · Academic Editor

Major Revisions

I was very pleased when I read the revised manuscript. The organisation is very much improved, and the conceptual diagrams really help communicate the arguments presented in the text. However, there are still a number of problems. In general, the manuscript is still far too long and, in its current form, far too complex to be easily comprehended by a lay reader. The introduction and methods sections are in reasonable shape, but the results section includes a number of paragraphs that really should be saved for the discussion section, and the discussion section includes whole sections that simply restate review material, which dilutes the detailed analysis you present of how your findings both support and further illuminate UX practice and its challenges.

1. Please examine and address the comments and revisions that I have made to your submitted PDF.

My annotations are variously:

- grammatical and typographic revisions. In addition to the normal number of typos for a paper of this length, I have noted several sentences with faulty grammar; I strongly recommend you ask a native English speaker to help you identify these (e.g. the word ‘still’ is often placed in the wrong position in the sentence).
- text that is problematic because it combines discussions of several different aspects at once rather than presenting them in an ordered manner. This results in complex sentences and often a degree of repetition. I have highlighted some of these as annotations in the PDF and offered suggestions.
- paragraphs where findings are first summarised, and then discussed again in much more detail. These are problematic because, by presenting summary statements first, you have not introduced the reader to your evidence, which is of prime importance when reporting findings from qualitative research.
- fragments of discussion that are not actually in the discussion; these appear in both the methods and results sections.
- observations that are attributed to an interviewee but for which no quote is provided. I have highlighted cases where you need to provide quotes to support your interpretation.
- quantitative statements that have no supporting evidence. You occasionally state that ‘X occurs more often than Y’ (e.g. power struggles in UX occur more often than in usability) without providing evidence to support this. Reviewer 1 in the first round requested that you provide more detailed statistics (counts of occurrence, frequencies, etc.) for particular encodings; it is particularly important that this is done if you wish to include such quantitative statements in this paper (see the sketch after this list).
- use of figures. The thematic figures really do help, but you should actually refer to them and use their structure (e.g. the ordering of spokes) when reporting each aspect of a theme. They may also be useful when discussing the interrelationships between themes.
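To make the request concrete, the sketch below shows one way such counts and frequencies could be derived from coded interview segments. This is purely illustrative; the codes, companies and data are hypothetical, not taken from your manuscript.

```python
# Minimal illustrative sketch: deriving counts of occurrence and frequencies
# per code from coded interview segments. All codes and data are hypothetical,
# not taken from the manuscript under review.
from collections import Counter

# Hypothetical coded segments: (company, code) pairs from interview transcripts.
coded_segments = [
    ("A", "power struggle"), ("A", "measurement"), ("B", "power struggle"),
    ("B", "power struggle"), ("C", "measurement"), ("C", "late involvement"),
]

counts = Counter(code for _company, code in coded_segments)
total = sum(counts.values())
for code, n in counts.most_common():
    print(f"{code}: {n} occurrences ({n / total:.0%} of coded segments)")
```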

2. The original reviewers were not available to re-review your revised manuscript, but Reviewer 3 has commented positively on the methodological aspects of this new manuscript, indicating that you have successfully addressed the major criticisms identified in the first round.

Reviewer 3 also lists a number of concerns, which you should respond to and, where possible, address by revising the manuscript accordingly.

One question posed by Reviewer 3 is that there may be cultural differences in the definition and perception of UX across different nations. My own reading of the manuscript suggests that in some cases you (or the interviewees you reported) have simply not used the proper terminology. Regardless, it will help if the nationalities of interviewees and the countries of the companies are more clearly stated in the result tables; and, if you are actually aware of any differences in interpretation or use of terminology, it would help to include a statement clarifying this, so that readers can faithfully interpret the quotes presented by interviewees.

3. R3 also highlights a problem with your principal description of UX, focusing on the use of the word ‘hedonic’ as meaning pleasurable (which is only one ‘hedonic’ aspect). It is possible that R3 is not familiar with Hassenzahl’s categorisation, so you should make sure to disambiguate ‘hedonic’ from the more specific term ‘hedonistic’. R3 further suggests a corner case in your description of UX: ‘helpful’ is a high-value goal for UX design that encompasses both emotional and functional outcomes. If possible, please respond to R3’s question and revise the manuscript accordingly, or state clearly if this aspect was out of scope in this study.

4. Think about your readers. Reviewer 1 captured this in the first paragraph of their first round review:

“I did not have a clear idea of UX challenges that are unique from other quality issues in software development and evaluation – the authors’ stated purpose in the article. For example, which software quality issues are measurable and how did software specialists arrive at the metrics? What UX issues are different so that they defy the same approach to measurement? What did respondents say that gives evidence for claiming such differences?”

They then go on to say:
“From the article readers get a clear idea of challenges related to UX without such a comparison but only at a high level – not the detailed level promised in the Abstract and Intro. Readers would not be able to succinctly state at a detailed level what the most important overlaps are between UX and other software quality issues.”

More clarity can be achieved by tightening the text, removing repetition, and properly structuring the results and discussion to separate descriptions of your thematic analysis (results) from the validation and interpretation of your findings in the context of your research questions and other UX research (discussion).

Again, I strongly suggest you re-read all of the first round reviews. Reviewer 1 in particular would like to be able to cite this paper as evidence to add strength to their own findings, but in its current form, the paper tries to communicate too much for the reader to comprehend.

5. R3 observes that you have not identified any previously unrecognised observations. This does not mean that the manuscript cannot be published - and in fact, you make the same observation in the present version of the manuscript. As you revise the discussion and conclusions, please take care to *concisely* highlight aspects of your analysis that provide additional insight beyond what is already widely understood by UX practitioners.

Reviewer 3

Basic reporting

The paper uses clear, unambiguous and professional English in general.
There are further literature references that could be included, but the list is sizeable.
The structure is clear and appropriate, with suitable figures. Some raw data appears in the text.
The paper does not have hypotheses but does have research questions, which are answered by the work presented.
The paper does define terms as they are used in the paper, but part of my unease is that these definitions are not necessarily used more widely, particularly within the participant group for the study.

Experimental design

The experimental design, in this case a qualitative interview study, seems to have been well designed and performed in a rigorous fashion, as befits a qualitative study.

Validity of the findings

Conclusions are well stated and clear, and are linked to the data. However, the background and literature context of the work, together with the interpretation of the findings, differs from my understanding and experience of practice and my knowledge of the UX field (more below). This disturbs me, but the design and execution of the study itself appear to be sound. The interpretation of the findings makes me uneasy, but this may be due to differing countries (the countries for the companies are not listed in Table 1) and UX cultures.

Additional comments

Based on the title and abstract of this paper, I would expect to find it useful and interesting. Indeed, I did find some of it useful, which is why I have recommended only minor changes, but your interpretation of the data and the tenor of the discussion do not sit easily with my own experience of software practice and UX. My experience is largely based in the UK and US, but with enough experience and understanding of international practice to know that different countries have quite different development cultures, and UX cultures in particular. To support my recommendation of minor corrections, I would suggest:

1. that the location (and possibly global distribution?) of the companies be included in Table 1;
2. that more be made of the comparison of UX with other software qualities, as I felt that this was a bit eclipsed by the UX model;
3. that the strong separation you emphasise between usability and user experience be better explained. UX is not just hedonic (pleasurable), and usability is not just about measurement;
4. that you clarify when you are talking about an agile team and when you are talking about a traditional or safety-critical situation, as UX is very different in different application areas and development models;
5. smaller points: clarify the Table 2 headings (education level? experience of?);
6. check for typos (there are a few).

The following comments are an attempt to explain my unease around the tone of the paper and its interpretation of practitioner comments. I would be pleased if you're able to respond to these in your writing.

The notion that context and application determine how important user experience is compared to usability, for example, does not feature in the paper. The context-dependent nature of UX is included, because it is one of the characteristics, but the implications of that seem to be missing from the overall tenor of the discussion. User experience is important for all products, but the exact nature of usability and user experience goals, their measurement and their trade-offs is context-dependent. I think this is probably what you mean when you say that they are categories of experience, but the treatment seems to imply that this is problematic. Having qualities of software that are difficult to measure quantitatively is not necessarily negative. UX is not an exact science, and it requires dealing with people and real-world contexts. Compromises and pragmatism are needed. That is the nature of UX. For example, “limited access to end users can negatively impact UX measurement” (p. 26) is true and will continue to be true for lots of reasons, but there are pragmatic ways to handle that situation, and good UX practitioners and educators know that and apply those work-arounds.

The paper presents a sound study of UX in practice, with well-reasoned findings. But the picture painted by the findings doesn’t surprise me. For example, in practice, UX, interaction design and usability have all been used in overlapping ways, so the fact that practitioners don’t know what UX is (according to the definition used here) doesn’t surprise me. However, it does not mean that practitioners don’t understand the concept underlying UX. The problems of evaluating UX characteristics such as fear, excitement and expectation are well known, and are being studied in research.

1. The terms “user experience”, “interaction design” and even “usability” have been used in practice in overlapping ways for many years, for a range of reasons. This finding is not a surprise; indeed, if you asked practitioners and academics for a definition of software engineering or agile, you would equally get differing views beyond top-level agreement on a definition.
2. Evaluating user experience is not easy because you are trying to ‘measure’ how someone ‘feels’. Moreover, ‘helpful’ is a user experience characteristic that is arguably not hedonic (a term used in several places in the paper to describe user experience).

Version 0.1 (original submission)

· Jul 3, 2016 · Academic Editor

Major Revisions

In your revised manuscript, please make sure you comply with PeerJ CS Author guidelines with regard to:
* referencing (you appear to be double quoting author names throughout the manuscript, making it very difficult to read)
* formatting of tables (a number of typographic problems appear in Table 1)
* numbering and styles for section and subsection headings (e.g. ‘0.5 the identified challenges')
* provision of data. Since we conduct confidential peer review, it is important that our reviewers have access to sufficient raw data to assess the validity of your treatment in the manuscript, regardless of whether that data will subsequently be published.

In addition, the readability of the manuscript can be dramatically improved by:
* Brevity. Much of the text in the manuscript is discursive, recounting observations and recommendations already found in the literature. Please avoid repeating and expanding on already published work, and try to keep explanations to the bare minimum required to allow a researcher to understand your arguments.
* Avoiding conflicts in referencing schemes (C is both a type of company and used to denote challenges). It may also be more effective to employ mnemonics rather than letters/numbers for the 8 different business types and 11 different challenges.
* Collating and numbering the quotes relevant to each identified challenge in a table, or otherwise separating them from the body of the discussion.

Finally, please carefully consider how your revised manuscript addresses the issues highlighted below by the reviewers. Both consider your work to be a valid contribution to the field, but at the same time highlight important issues regarding the approaches used for collection and analysis of your qualitative data that must be addressed.

Reviewer 1

Basic reporting

Clear, unambiguous, professional English language used throughout.
The basic reporting is in professional English.

Intro & background to show context
Too much of the context takes the form of prior research (literature). The Introduction gives background literature; this contextual information is also interwoven throughout the Results, where the main information should be the response data and code results, and the Discussion is largely more literature review and citations rather than the authors emphasizing the meaning of the data as presented in the Results. Too little context is given about the respondents, their roles and contributions to products, and their interactions across “disciplinary” lines in their companies.

It would help if the authors explicated from the start what they see as the differences, in discussing results, between functional requirements, quality requirements, usability, and experiential requirements (those that fall outside the scope of the prior terms). Is emotion/affect the same for all types of software? Is perceived waiting time an emotion? Does it differ for different software? What about user tasks and task-related expectations? Also, are features and quality mutually exclusive, as the authors seem to imply in the body of the manuscript (speaking of features in a way that tacitly disparages them at the subtext level)?

Structure conforms to PeerJ standard, discipline norm, or improved for clarity.
The main section headings conform to standards. But the material within the sections diverges from what readers usually expect (e.g. a lot of literature review/citations woven into the Results). The Intro lit review is more a laundry list of studies that may have something to do with the topic, but it is not organized and presented as supporting studies for key arguments that a reader will later see emerge from the interview findings. The authors’ arguments are buried in regard to points unique to their findings. Overall, the piece comes across as an extensive literature review into which data from the study are fit. It should be the other way around. The numbering for subsections is not standard and should be changed. Sub-section numbering should begin anew within each major section.


Figures are relevant, high quality, well labelled & described. Not applicable.

Raw data supplied. Good appendices.

Experimental design

Original primary research within Scope of the journal.
Very good research topic and good interview questions. They are original and within the scope of the journal.

Research question well defined, relevant & meaningful. It is stated how research fills an identified knowledge gap.

The research problem and questions are meaningful, and they relate to an important gap that has to be addressed. The aspects of the manuscript that need to be revised do not lie in its focus on a problem in the field but in its design, execution and presentation of data analysis. Unfortunately, upon finishing the article, I did not have a clear idea of UX challenges that are unique from other quality issues in software development and evaluation – the authors’ stated purpose in the article. For example, which software quality issues are measurable and how did software specialists arrive at the metrics? What UX issues are different so that they defy the same approach to measurement? What did respondents say that gives evidence for claiming such differences?

From the article readers get a clear idea of challenges related to UX without such a comparison but only at a high level – not the detailed level promised in the Abstract and Intro. Readers would not be able to succinctly state at a detailed level what the most important overlaps are between UX and other software quality issues.

Rigorous investigation performed to a high technical & ethical standard.
Rigorous investigation is not apparent in the analysis of data.

Methods described with sufficient detail & information to replicate.
Methods applicable to qualitative research are presented and justified well. However, it isn’t clear how the coding was applied or analyzed. Were there statistical outcomes from the analysis? One advantage of coding is that you can derive statistics and unite them with qualitatively derived insights. It isn’t clear if codes on “demographics” (role, company, type of software produced, years of experience dealing with UX/development) were used, and if the analysis of coded data accounted for these demographics to find similarities and divergences across demographic traits. Examples of passages, with how and why they were coded as they were, would help. Importantly, no reliability statistics are provided to show cross-coder or intra-coder reliability.
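For instance, cross-coder reliability is commonly reported using Cohen's kappa. The sketch below is a minimal illustration, assuming two coders each assigned exactly one code per passage; the codes and data are hypothetical, not drawn from the manuscript.

```python
# Minimal sketch of cross-coder reliability via Cohen's kappa, assuming two
# coders each assigned exactly one code per passage. Data are hypothetical.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two equal-length lists of categorical codes."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of passages where both coders agree.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: probability both coders pick the same code independently.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two coders to six interview passages.
coder_a = ["power", "measure", "measure", "access", "power", "access"]
coder_b = ["power", "measure", "access", "access", "power", "access"]
print(cohens_kappa(coder_a, coder_b))  # ~0.75; values near 1 indicate strong agreement
```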

There also isn’t information on methods used (if at all) for integrating coding outcomes with traditional qualitative analysis of the themes that grew out of the data after iterative readings. I found that, as a reader who wasn’t clear on these methods, the challenges that sub-headed the Results portion seemed to be imposed by the authors on the data based on prior knowledge and literature reviews rather than having emerged from the data.

The authors do not organize Results so that readers clearly can see issues or themes that they derived from the data. Because the section is organized around challenges that are already well known and previously studied, a reader is likely to assume that the authors imposed this thematic categorizing on the data rather than probing the data for relationships that would give rise to themes that offer new insight for the research questions – i.e. overlaps and distinctions between UX and other quality issues in software.

I am a researcher who often seeks studies like this one to cite in support of my own empirical research, yet I find myself unable to use the Results for support. There isn’t enough evidence of who perceived what and why. The authors express results about “Practitioners” or “Participants” as the agents of responses, but that is too vague in terms of showing where certain types of respondents agree and diverge. The authors cross-reference their chart of participants (e.g. C1 as the agent) for some of the Results. But the hardship falls on readers to trace back to the label, to make the distinctions, and to draw the inferences themselves about similarities and differences.

Finally, the authors talk about results from a qualitative research perspective, but I do not see the methods they used for deriving themes qualitatively (to complement/supplement the codes), e.g. affinity diagrams or post-it clustering. They also talk about challenges to validity that, in fact, are threats to quantitative research. The authors need to address qualitative research threats to validity, namely imposing categories on the data rather than letting the data lead to the themes, and reliability scores in coding (missing in this manuscript).

Validity of the findings

I did not find the distinction between usability and UX clear. Nor was I entirely clear what significant point the authors were trying to make by saying that the two are distinct. I believe they are. But I’d like the authors to answer the “so what?” question. Ultimately they should relate the issue to the relationship between UX and other software quality issues (there are more quality issues than usability); plus, they should show how usability, when software engineers adopted it as a quality issue, is often defined by them differently from how usability experts define it. Evidence needs to be drawn from the data, differentiating by respondent role, company and software.

I do not believe that the authors answered the question of how to deal with measurable requirements for UX, within the bigger picture of making UX requirements like other quality measures in software so that UX is more integrated into the development cycle.

A key problem is that experiential (subjective) requirements have not yet been measured at all, or not to a standard that software teams use. The authors’ “answer” is to have more measurable UX requirements. That issue has been plaguing us for at least 20 years (see some of the soft-goals literature in Requirements Engineering and in aspect-oriented computing (cross-cutting goals)). I’m not sure the data from the interviews give insight into the details of measures beyond calling for the need for them. I’d suggest seeing what the data do reveal that is insightful, and addressing and highlighting it.

The authors should also try to avoid what I’d call “sleight of hand” in reporting results. In lines 807+ they say “Still practitioners from 6 out of 8 companies stated they often come into projects only in later stages…”. Two companies in the study include all technical respondents. The framing of the sentence may lead readers to think that in 25% of the companies this isn’t so, unless they refer back to the chart and see the composition of respondents by company. Similarly, consultancies by definition may only be contracted to come in at the end, making this phenomenon a function of being a consultancy, or even of a view that user-centeredness should be outsourced rather than integrated into a multi-disciplinary software team.

Additional comments

I do not want to close the door on this manuscript. The data are very rich and, as such, are important to analyze for insight and novelty more rigorously and with more nuance. I believe that the piece needs a major rewrite: going back to the methods for analyzing the data, restructuring the Results to show qualitatively derived themes that grew out of the data, and writing a Discussion that shows how insights from this study go beyond existing knowledge. Overlaps with existing knowledge should be brought in early, in the review of the research literature. A small initial sub-section in Results may enumerate what the data from the interviews show that is already known. The rest of the subsections in Results should be insightful themes.

I also suggest that the authors write with more of a sense of their audience/readers. Cut down the number of points made so that readers come away impressed by the 2 or 3 main insights that the data reveal, and with a strong sense of why they are significant for the larger issue of software quality.

Reviewer 2

Basic reporting

The paper mainly proposes to investigate how user experience is perceived and defined while identifying the main challenges in incorporating UX into the software development life-cycle. The authors interviewed 17 practitioners from 8 companies to identify 11 challenges that these practitioners face when trying to incorporate UX into their daily work and processes. The authors reviewed the literature on UX and suggested classifying the perception of UX under three different categories: work on UX, work on UX in Agile, and work on UX/usability. The 11 challenges are well discussed.

Experimental design

The design of the exploratory, qualitative study that was conducted is discussed. However, various key pieces of information are missing:

- How the companies were selected, and why these companies and not others. Why do they have different profiles and business focuses? UX is really subjective, and perceptions may differ drastically from one company to another; UX design-oriented firms, service design firms, and mobile and Web companies are really very different, and even the profiles of their UX designers and developers are sometimes totally different. Table 1 needs to incorporate more data that gives a true and realistic picture of the companies.

- Again, why semi-structured interviews? Such an interview method may yield very divergent data, especially as the goal here was to build a consensus around the 11 challenges! A screening scenario that details how the interviews were conducted is needed.

- The data were analyzed using inductive and deductive approaches (thematic analysis). Why this method, again? Usually, we also consider triangulation to consolidate data collected from different sources; for example, it would have been good if at each company you had tried to understand the points of view of developers, project managers and senior staff, and why not external stakeholders too. Still, the data collected were honestly and objectively analyzed.

Validity of the findings

- The validity threats reported are not really very convincing, as much key information is missing, as presented above.

A major effort needs to be made on this section. I suggest that the authors look at the Common Industry Format for how and what we should document in UX/usability studies: ISO/IEC 25062:2006(en), Software engineering — Software product Quality Requirements and Evaluation (SQuaRE) — Common Industry Format (CIF) for usability test reports.

The second big concern with the paper is that many of the challenges reported are well known. They have been largely discussed in the scientific literature. Various workshops have discussed how and why UX/usability should be considered in the lifecycle.

Additional comments

I suggest that, when revising this paper, you discuss your findings while contrasting them with the findings from previous studies and the conclusions of previous workshops. The topic of integrating usability in the development life cycle has been around since 1998; see the CHI workshop on use cases and task modeling. What is different and what is new here? The last section of the paper (Discussion) brings some useful information, especially for practitioners. What should be done by researchers to overcome these concerns?

Are these challenges totally independent of SE methods such as Agile? Agile is in some ways very close to UX; at the least, they share some common concerns around engaging users/consumers.

The list of references needs to be extended; check recent publications (2015 and 2016) in the ACM and IEEE digital libraries.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.