Reproducibility Principles: Taking the Pulse

The community meeting about fundamental principles related to reproducibility, convened by the P-WG on March 25, 2021, was lively and informative. About 25 people attended, and collaborative notes and slides from the meeting can be found here. The central question before the group was: what principles can we agree on, and what still needs to be clarified? As with all great conversations, we left with more questions than answers, though some positions quickly emerged as common ground while others remained wide-ranging.

The working definition of reproducibility for this group is the ability of a different team to arrive at the same scientific results using the same experimental setup, as described in the recent report on Reproducibility and Replicability in Science by the National Academies of Sciences, Engineering, and Medicine (NASEM). “For computational experiments,” the ACM adds, “this means that an independent group can obtain the same result using the author’s own artifacts.”

After brief introductions and background on the goals of the Emerging Interest Group (EIG) on Reproducibility and Replicability and the P-WG, the group expressed its priorities for the conversation: the labor involved in reproducible research garnered the greatest interest, followed by issues relating to open source, and to ethics and research integrity.

For each topic, we present the questions posed to the group and a summary of the main themes that emerged.


The labor of reproducibility

Who is responsible for the work of reproducibility? How should that work be rewarded?

For reproducibility to be ubiquitous, it must be understood as a broad expectation and as inherent to the scientific enterprise.

  • How do we effectively instill the notion that research should be reproducible?
  • How do we think about incentives that are normative, not only transactional?
  • Do we have clear expectations and guidelines for reproducibility? What are the minimal requirements for reproducibility? What are acceptable exceptions, and how should they be treated?

Reproducibility involves labor. It is an investment of time and expertise. Ideally, reproducibility labor is baked into the research process so that the effort it takes is indistinguishable from the research effort itself.

  • How do we measure the extent and the value of reproducibility labor?

Reproducible research is ultimately conducted by researchers. In practice, however, labor related to reproducibility often involves specialized tasks that are treated as separate from the core research process. This labor is often considered “extra,” and the responsibility for undertaking it gets shifted onto others in the research team, often graduate students who may not receive appropriate training – or recognition – for the work.

In addition to reproducible practices incorporated in the research process itself, reproducibility also involves labor on the part of users of the research. That labor is even less rewarded, incentivized, or recognized. The archiving and preservation community is an important ally.

  • How do we make reproducibility labor visible? “It takes a village.”

It is not clear whether researchers are held accountable for the reproducibility of their research, or whether they should be. Currently, reproducibility is a choice for most researchers, and the incentives for doing reproducible research have proved only somewhat effective. Efforts around citation can increase recognition, an important reward in academia, but incentives alone have not been sufficient. Publishers can help enforce reproducibility standards that emerge from the community; both funders and publishers have an important role to play but cannot by themselves drive change.

  • There should be accountability for reproducible research.
  • This community can signal to ACM that it wants reproducibility to be consequential.

Open source

What are the SIGs’ positions on open-source solutions versus proprietary solutions that support reproducibility? Are there promising approaches currently being tested or implemented, for example around licensing?

At our meeting, open-source software was preferred, but it was recognized that it is not always available or feasible. The base position should be, “open when possible.” The general consensus was that tools and platforms can be a step toward improving reproducibility and that the community should embrace experimenting with different tools. However, there is no approved list of tools, nor a checklist of tool attributes for reproducibility. For example, vendor lock-in is problematic when it applies to a component deemed an essential part of the reproducibility tool chain.

  • What is the ACM’s position on including proprietary software in the reproducibility tool chain, and on how to manage interaction between open and proprietary tools?
  • Is there a set of recommended tools that meet minimum requirements for reproducibility? Should the ACM maintain such a list? (For example, what would be required to make something like MATLAB work within a reproducibility framework?)
  • What is the ACM’s position on workflow steps that are not automated or machine readable?

Closed or proprietary software is an impediment to reproducibility. At its core, this is an issue of access. The community acknowledges restricted access to data – for legal, ethical, and privacy considerations – and has devised ways to accommodate it. What about software?

  • What is the ACM’s position with respect to restricted access to software (e.g., because of cost, licensing)? Should commercial interests of private companies be accommodated in the same way as restricted access to data?
  • What is the ACM’s position on the 2020 Principles of Open Scholarly Infrastructure which state that, “Open source – All software required to run the infrastructure should be available under an open source license”?
  • What is the ACM’s position on the sustainability of free and open-source software, and on organizations such as CZI and NumFOCUS as a way to support essential parts of the tool chain?


Ethics and research integrity

What are the stated ethical guidelines that matter for reproducibility (e.g., algorithmic bias, unfair data practices)? How does, and how should, reproducibility touch on issues of research integrity (e.g., fraud, abuse, questionable research practices)?

The extent to which questions of ethics overlap with reproducibility needs to be clarified. It could be argued that ethical questions relating to research conduct and integrity are directly relevant to the discussion of reproducibility. Behaviors involving data fraud, online harassment, and bad-faith reproducibility attempts fall into this category. The ACM Code of Ethics lists general principles that should apply to all parties involved in reproducibility.

  • How effective is the ACM Code of Ethics?
  • Are ethical violations relating to reproducibility a special case?

Also relevant are timely topics at the intersection of ethics and computation, such as algorithmic bias and unethical data scraping and analysis.

  • What lines can we draw between reproducibility and concerns about unfair research practices?

By Limor Peer and Vicky Rampin

April 8, 2021