
Reviewer Guidelines

Guidance for Reviewers

Even if you have done peer review before, we encourage you to read our guidelines. If you are new to peer review: great! We wrote this guidance and our reviewer form to make the process clear and easy for you. Peer review is a way of assessing and giving feedback on a piece of work, so if you have ever reviewed a talk or game for a games event, you are likely already somewhat familiar with the process.

As a reviewer, you have two equally important jobs:

(1) Quality Control: Help the Associate Editor make a justified decision about whether this work is, or can become, important, useful, and trustworthy enough to benefit our readers.

You do this by rating the submission on a number of criteria we provide and writing comments that explain your ratings, together with a final decision recommendation.

(2) Quality Support: Help the authors turn their submitted work into the best possible version it can become, to their benefit and the benefit of our readers.

You do this by providing written comments that explain how the authors must or could revise and improve the submission to warrant acceptance.

We ask you to be honest in what you say and professional and kind in how you say it. We aim to publish the best and most important work in games and playable media research, so it is likely that many submissions will not meet our standards – at least at first. But we also expressly invite and support submissions from first-time authors and from backgrounds unfamiliar with peer-reviewed publication, whose work may therefore not meet all criteria – at least at first.

Reviews, especially if they point out shortcomings of one’s work and are written in a harsh tone, can be deeply discouraging and exclusionary, dissuading people from submitting work again. We wish for authors whose work we rejected to want to submit again because of the quality of the reviews they received. That’s in your power. 

If that sounds a bit abstract, here are two resources that we found particularly helpful:

Expectations

As a reviewer, you are expected to:

  • Disclose any possible Conflict of Interest to the handling Associate Editor
  • Understand and follow our review guidelines
  • Carefully read and review the whole submission by yourself, including supplementary files, by the agreed deadline
  • Review subsequent revisions of a submission you initially reviewed, should the Associate Editor feel that is appropriate
  • Keep the submission, its existence, and its status confidential
  • Not use results or material of the submission in your own work, unless and until they appear in some publicly available format
  • Not distribute the submission to anyone unless approved by the handling Associate Editor
  • Maintain the anonymity of authors, reviewers, and the Associate Editor

Recognition

ACM offers free membership (including access to the ACM Digital Library) to non-ACM members who perform three or more reviews (across all ACM journals) within a one-year time frame. ACM also participates in Publons, an opt-in service that lets you track and display your reviewing activity across academic journals and venues.

Review Criteria

In the following, we explain the criteria we ask you to rate and comment on in our review form, together with a rubric to guide your assessment.

Contribution: Does the work advance how we understand, make, or learn and teach games?

We publish work that makes one or more of three contributions to games and their communities:

  • Understanding: New knowledge about games, playable media, and how we study them; this could be new empirical findings (empirical contribution); new or improved concepts, models, or theories (theoretical contribution); new methods or insights into how to conduct research (methodological contribution); or meta-research that provides a useful new synthesis and organization of existing research (meta contribution). 
  • Making: New documentations of methods, tools, techniques, guidelines, examples, well-supported insights, or recorded and reflected practical experience that improve or inspire how we make games and playable media. ‘Making’ here encompasses the full range of practices surrounding and supporting game-making reflected in our scope: user research; design; development; writing; sound and music; visual arts; production; QA; business, management, and marketing; community management; etc. 
  • Learning and Teaching: New knowledge that advances how we understand, do, or support the learning and teaching of games and playable media. These could be research- or practice-based contributions, and include the full range of learning and teaching forms, from formal education to informal learning and teaching among peer groups and communities, as well as all aspects of game creation and research, including game development, design, or evaluation.

Across all three categories, we expressly recognise and welcome critical contributions that present new evidence or argument about social, political, ethical, and environmental issues around games and playable media. 

The contribution a submission makes can be affected by other aspects we ask you to review, such as support or transparency: A significantly better-performing AI technique will make a smaller contribution if it is presented in terms so abstract or incomplete that readers will not be able to recreate it. A research finding about what makes people prefer different game genres could be a major contribution, but if the evidence and argument supporting it are faulty, it would make no contribution at all.

Some criteria you can use to assess contribution are:

  • Novelty: How new or innovative is the work? Has this finding or method been publicly documented before?
  • Relevance: How important and valuable is the work? Will it marginally matter for a small number of specialists under rare circumstances or will it strongly inform the work of a wide range of communities?
  • Scope: How wide-ranging and generalisable is the work and its ramifications? Does it only matter for a narrow sub-genre or rarely used tool, is it unlikely to hold beyond the narrow confines of the study – or is it likely to be of value and validity for a wide range of people and contexts?
  • Quality: How well has the work been done and presented? 

A good litmus test for a major contribution that fits our remit is: Will professional games researchers, makers, or educators in one or more communities represented by our Tracks (games user research, game AI, inclusive gaming, etc.) say: “I want to read this, this is important and valuable for my work”?

Rubric for rating contribution

  • (0) No contribution: Makers, researchers, and/or educators of games and playable media will not benefit from reading this, compared to existing work
  • (1) Minor contribution: A select few may find this of cursory interest or use; it mainly repeats existing work, with marginal additions of better empirical support, new ideas and argument, clearer presentation, greater transparency, or higher practical utility
  • (2) Major contribution: Members of a practice community in games will find this of interest or good use; it adds something substantive to existing work that people are likely to remember and come back to
  • (3) Outstanding contribution: Members of one or more practice communities in games will find this of great immediate interest or great use; has the chance to become a ‘classic’ in the field

Usefulness: How useful is the work for practice?

To bridge research and practice, we publish work that has at least some practical utility. Any kind of professional practice connecting to games and playable media counts: it can be how we make (distribute, market, etc.) games and playable media, how we teach and learn them, or how we study them and their players, if such study is part of making. The important part is that reading and using the work is likely to change how games and playable media communities do things for the better. This can include technical know-how, concepts or insights, and critical work that identifies and fuels a need for changing how we do things.

One useful question for assessing usefulness is how much additional translation work a person would have to put in after reading the submission to do things differently the next day: A step-by-step tutorial on a new shader technique, complete with a free plugin for all major game engines, would be highly useful, as would a case study on a new technique for teaching playtesting that comes with supplementary instructions, work sheets, and a syllabus. A Viewpoint issuing a call to action against crunch can be highly practically relevant, but would be less useful if it doesn't directly motivate readers to act, or motivates them but doesn't spell out concrete ways to act.

Rubric for rating usefulness

  • (0) Not useful: There are no clear ramifications for how game makers or educators could do things better
  • (1) A little useful: Reading this could help or energize game makers and/or educators to do things better, but putting it into daily practice would require significant translation effort
  • (2) Somewhat useful: Reading this is likely to help or energize game makers and/or educators to do things better, but putting it into daily practice would still require some translation effort 
  • (3) Very useful: Reading this is likely to help game makers and/or educators to do things better, and the provided information and materials do a good job helping to put it into daily practice

Support: Does the work support the claims it makes with good evidence, argument, or references?

“Novelty matters more than difficulty for engaging players”; “our AI agent plays stronger than current industry alternatives”; “this storyboarding method helps plan game narratives”: whatever main claims a submission makes, a review ensures they are trustworthy and supported. Researchers call this the validity (from Latin validus = strong) of claims. We broadly accept three kinds of support:

  • Empirical data: Experiments, A/B tests, usage telemetry, simulation results, interviews, ethnographic fieldwork, documents, recorded personal practice experience – some kind of data that is collected and analysed in a way that would likely show if the supported claim were false.
  • Reference: Pointing to or citing other publicly documented reliable work that supports the claim with data or argument.
  • Logical argument: Presenting sound reasoning that supports the claim.

Different communities have different standards for what counts as appropriate support – e.g. in industry, it may be sufficient empirical support for the usefulness of a new tool that a person used it when developing a successfully published game, and that their studio afterwards chose to keep using it for its next production. An academic researcher may require an empirical study like a survey or controlled experiment for the same claim. We ask you to apply the standards you consider most fitting for the type of work you are reviewing.

Rubric for rating support

  • (0) Not supported: The submission provides no evidence, argument, or references for its main claims, or what it provides is disconnected from its actual claims
  • (1) Partially or possibly supported: Parts of the main claims are well-supported, or could be well-supported if they were delimited or if the provided arguments, evidence, or references were expanded
  • (2) Mostly supported: The main claims are broadly well-supported. Some minor claims need delimiting or expanded support
  • (3) Fully supported: All major and minor claims in the paper are well-supported.

Clarity: How well does the work communicate its content?

We serve diverse communities with different writing styles, formats, and technical vocabularies. Any piece submitted should at a minimum be easy to understand for a professional maker, researcher, and/or educator working in the matching field: a Research Article on “Simulation” can use games physics technical terms, equations, pseudo-code, etc., as it speaks to professionals working in games physics. Horizons and submissions that make a broad point speaking to multiple fields should be easy to understand and follow for people working in any area of games and playable media. 

For all Article Types, but especially for Tutorials and Case Studies, we strongly recommend using worked examples and non-textual elements like images, diagrams, embedded or linked interactives, or video figures to help communicate – if you haven't seen good examples of this, have a look at the work of Bret Victor, Nicky Case, or Freya Holmér, or at interactive articles.

Note: We expressly encourage submissions from first-time authors from underrepresented communities, including authors whose native language is not English. Language use on its own is not a reason for rejecting any submission. Please don't tone-police or assume from uncommon grammar or word choice that an author is not a native speaker. Instead, guide authors on how to make their submission easier to understand and follow. We do expect the final published version to use error-free English, but we offer support for authors to proofread their submissions.

Rubric for rating clarity

  • (0) Not clear: The writing is difficult to parse, with errors in grammar and/or word choice; the structure and argument are hard to make out and follow; needed non-textual elements are missing or inscrutable
  • (1) Somewhat clear: The target audience can follow the basic argument and logical structure, but many minor and/or some major parts of the text or non-textual elements are unclear and need reworking
  • (2) Mostly clear: Overall clear and understandable for the target audience, with a sound structure, non-textual elements where needed, and explanation of unfamiliar terms and concepts; some improvements possible
  • (3) Very clear: Clear, easy, and engaging read; good structure; good use of non-textual elements; no unexplained unfamiliar terms and concepts

Context: How well-embedded in prior work is the submission?

Key values to us are that work pays explicit recognition to prior work it builds on in the form of references, and that any material a work takes verbatim from other works is identified as a direct citation. This also helps readers and reviewers identify what about the work is truly novel. Submissions should also inform readers where the state of the art of the field is and where they fit in it, what they add, and why that matters. This again usually involves referencing current work. Researchers often call this referencing of related work contextualizing or embedding. It often happens in a dedicated text section called “Background” or “Prior Work”, which the text then returns to in a “Discussion” section to tease out what its insights mean relative to the state of the art.

Giving due recognition, not plagiarizing, and establishing what makes you different are values and concerns in industry as well. However, academic researchers have turned these into far more formalized standards for referencing that can be daunting and inaccessible. We actively encourage submissions from first-time and non-academic authors new to formal referencing. Therefore, we don’t reject submissions on the basis of weak contextualization. And especially if you are an academic reviewer, we ask you to accurately rate how well-contextualized the work is, but not to scold authors for lacking or improper referencing. Instead, please point authors to relevant missing work and how to improve the submission.

When assessing contextualization, we also ask you to take into account the Article Type and kind of contribution: a short, informal Viewpoint or Dialogue can be intentionally partial and has no space for lengthy establishing background sections; a Case Study exists to document practice experience, akin to e.g. a postmortem talk, where you similarly would not expect speakers to exhaustively establish prior work. In comparison, a Review would be expected to feature dozens of references comprehensively describing the state of the art of a field.

Rubric for rating context

  • (0) Not embedded: Does not reference prior work it builds on; does not establish the state of the art and what it adds relative to it
  • (1) Little embedded: Some references, but doesn't fully recognise the prior work it directly builds on and/or doesn't sufficiently establish the state of the art and what it adds to it, given this type of article
  • (2) Mostly embedded: Prior work it directly builds on is fully referenced; establishes the state of the art and what it adds sufficiently for this type of article; the submission would benefit from some minor additions
  • (3) Well-embedded: Prior work is fully referenced; provides an insightful writeup of the state of the art, and discussion of what it adds

Aptness: How apt and well-applied are the study design and methods used to collect and analyze empirical data?

Researchers are continually advancing methods and standards for empirical research that ensure that the data they collect and the conclusions they draw from them are as robust as possible, steering clear of biases, errors, or over-interpretations. Games research uses many different methods (and epistemologies): quantitative, qualitative, engineering, or research through design methods, to name some common ones. We don’t consider any method to be in and of itself more apt, rigorous, or ‘better’ than any other. When reviewing the aptness of reported methods, we ask you to assess two things: 

  • How suited are the chosen method and research design for generating the kinds of evidence and analysis that can support or falsify the kinds of claims the submission makes? For instance, a one-shot quantitative survey is less suited than a qualitative think-aloud study for understanding how people come to understand the same game differently as they play it.
  • How well-designed and conducted is the chosen method, judging by the standards of research practitioners of this method? For instance, experimental researchers may want to see fully randomized trials with a control and pre/post measurements for a study that makes causal claims.

This criterion applies to all submissions that use research methods to collect and analyze empirical evidence. By default, these are all Registered Reports, most Research Articles, and Reviews that use Systematic Review or Meta-Analysis methods. If a submission does not use research methods to collect and analyze empirical evidence, select “not applicable” and leave the comment box blank.

Rubric for rating aptness

  • Not applicable: The submission does not use research methods to collect and analyze empirical evidence
  • (0) Not at all apt: The methods and study design don’t produce data and analysis that can support the submission’s main claims; or they were applied so poorly that resulting data and inferences cannot be relied on 
  • (1) Somewhat apt: The chosen methods and design produce data and analysis that can speak to some aspects of the main claims, but they come with limitations and issues that are not acknowledged, requiring further different data collection or analysis and reporting
  • (2) Mostly apt: The chosen methods and design produce data and analysis that can speak to the main claims, with some minor limitations or missing justifications that should be added
  • (3) Very apt: The chosen methods and design produce data and analysis that are apt, well-done, and well-justified

Transparency: How fully and easily can readers check and redo the methods, processes, or techniques described in the paper?

Transparency, openness, and reproducibility are key values to us: by openly documenting how a work was done, and sharing used data, code, and materials as much as possible, we allow reviewers and readers to verify our work, thereby making it more trustworthy. And we allow others to reuse and build on our work, contributing to the common good. 

This criterion applies to all submissions that report original empirical research – likely most Research Articles and Registered Reports – or that document methods or techniques, such as Tutorials, Research Articles about methods, or possibly, Case Studies. Else, select “not applicable” and leave the comment box blank.

For documenting methods and techniques, there are no good public standards we know of. Just ask yourself: With the information and material submitted, could a professional in the field easily and confidently implement it themselves? We know that for much work, especially in industry, there will be proprietary data or code that cannot be made public, or may even not be shareable for review. If that is the case, please assess the transparency assuming someone has access to all the information and material that was submitted for review.

For empirical studies, researchers have developed standards for good transparent reporting of different kinds of methods. A good source is https://www.equator-network.org/. We have defined our own Transparency and Openness Guidelines: If the submission reports empirical work, we ask you to assess whether it complies with our guidelines, check supplementary materials (like data and analysis code), and re-run the analysis to see that the reported results can be reproduced. If this is something you don’t feel qualified to do, you can skip it and tell us so in the reviewing form.

Decision Recommendations

At the end, we ask you to recommend a decision for the submission. You can give five possible recommendations (plus one for the Registered Report Article Type, which you won’t use otherwise):

  • Reject: The submission should not be considered further
  • Reject for possible resubmission: The submission should not be considered further, but authors may submit a thoroughly revised version including additional new work for renewed review 
  • Accept with major revisions: The submission should be accepted, provided the authors make significant changes that require renewed review
  • Accept with minor revisions: The submission should be accepted, provided the authors make minor changes that could be checked by the handling editors
  • Accept: The submission should be accepted as is; it needs at most proof-reading
  • Accept in principle (use only for Registered Reports): The submission should be accepted in principle, and authors can begin collecting data

Rubric for decision recommendation

How do you determine which decision to recommend? For each review criterion, the review form gives you a scale to rate the submission from 0 (no/not at all) to 3 (very). You can directly use the scores to help you determine what decision to recommend, using the following key. Importantly, these are guidelines, not hard rules. Taking everything about a submission into account, you can come to a recommendation that differs.

  • Reject:
    • The submission will make no (0) or only a minor (1) Contribution, even after feasible revisions OR
    • The submission will have no (0) Usefulness and/or Support, even after feasible revisions OR
    • If the submission reports empirical research, the used research methods have no (0) Aptness to support the claims presented OR
    • If the submission reports empirical research or methods/techniques, they have no (0) Transparency. 
  • Reject for possible resubmission:
    • The submission scores a (0) or (1) in Contribution OR
    • a (0) in Usefulness, Support, Aptness, or Transparency (where applicable) AND
    • A feasible revision, possibly including additional original work, could lift the submission to (2) or higher in all criteria
  • Major revisions:
    • The submission makes or could make a major (2) contribution or more AND
    • It scores at least a (1) in Support and Usefulness, and if applicable, Transparency and Aptness AND
    • It scores below (2) in one or more criteria AND
    • feasible revisions could lift it to a (2) or more in all criteria
  • Minor revisions:
    • The submission scores at least (2) in all criteria AND
    • it scores below (3) in one or more criteria AND
    • feasible revisions could lift it to a (3) in all criteria
  • Accept:
    • The submission scores a (2) or (3) in all criteria AND
    • feasible revisions would not lift it in any criterion
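If it helps to see the key as a single procedure, here is a minimal illustrative sketch in Python that encodes it. To be clear, this is only our paraphrase of the guidelines above, not part of the review form, and all names in it (recommend, current, after_revision) are hypothetical. It assumes each applicable criterion is scored 0–3, and that you, the reviewer, judge which scores a feasible revision could realistically reach.

    # Illustrative sketch of the decision key above - not part of the review form.
    # Assumes 0-3 integer scores for each applicable criterion, plus a reviewer
    # judgment of which scores a feasible revision could realistically reach.
    from typing import Dict

    def recommend(current: Dict[str, int], after_revision: Dict[str, int]) -> str:
        """Map criterion scores to a recommendation, following the key above.

        Both dicts hold scores for the applicable criteria, e.g. "contribution",
        "usefulness", "support", "clarity", "context", and, where applicable,
        "aptness" and "transparency". These are guidelines, not hard rules.
        """
        fixable = ("usefulness", "support", "aptness", "transparency")

        # Reject: flaws that even feasible revisions cannot fix.
        if (after_revision["contribution"] <= 1
                or any(after_revision.get(c) == 0 for c in fixable)):
            return "Reject"

        # Reject for possible resubmission: a (0) or (1) now, but a thorough
        # revision including new work could lift everything to (2) or higher.
        if (current["contribution"] <= 1
                or any(current.get(c) == 0 for c in fixable)):
            if all(v >= 2 for v in after_revision.values()):
                return "Reject for possible resubmission"
            return "Reject"  # cannot be lifted to (2) everywhere

        # Major revisions: a (2)+ contribution is reachable, nothing below (1)
        # in Support/Usefulness/Aptness/Transparency, something below (2) now,
        # and feasible revisions reach (2) or more everywhere.
        if (after_revision["contribution"] >= 2
                and all(current.get(c, 1) >= 1 for c in fixable)
                and any(v < 2 for v in current.values())
                and all(v >= 2 for v in after_revision.values())):
            return "Accept with major revisions"

        # Minor revisions: at least (2) everywhere now, below (3) somewhere,
        # and feasible revisions reach (3) everywhere.
        if (all(v >= 2 for v in current.values())
                and any(v < 3 for v in current.values())
                and all(v >= 3 for v in after_revision.values())):
            return "Accept with minor revisions"

        # Accept: (2) or (3) everywhere and no revision rule above applied.
        if all(v >= 2 for v in current.values()):
            return "Accept"

        # The key does not cover every combination; fall back to judgment.
        return "use your judgment"

For example, a submission currently rated (2) in Contribution, Usefulness, Clarity, and Context but (1) in Support, where a feasible revision could reach (2) everywhere, comes out as "Accept with major revisions". Like the key itself, this sketch is a guide, not a decision procedure: taking everything about a submission into account, you can come to a recommendation that differs.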