Tool/Benchmark Papers

JSys will publish papers that present a new open-source tool or benchmark. The goal is to support work that provides significant value to the community, yet may not be publishable at top venues because the techniques used are not novel. A good benchmark may spur a lot of research and innovation (for example, see YCSB).

JSys will review both the paper that introduces the tool/benchmark and the artifact itself. In this sense, it will be similar to Artifact Evaluation at systems conferences such as SOSP, OSDI, and EuroSys. The artifact is expected to be well documented and easy for a third party to use.

Difference from research papers that build systems. A number of research papers also introduce new artifacts. JSys tool/benchmark papers differ in two important ways:

  1. The tool/benchmark does not need to use novel techniques or advance the research state of the art.
  2. The tool/benchmark needs to pass Artifact Evaluation to be accepted. This is mandatory (it is usually optional for research papers that introduce artifacts).

Tool papers will not be rejected for:

  • Lack of novelty in the techniques used by the tool (i.e., “just engineering”)

Tool papers may be rejected for:

  • Not clearly establishing what value the new tool/benchmark brings to the community
  • Not explaining why similar existing tools are insufficient
  • Not passing artifact evaluation due to lack of documentation or being hard to use

Desk Rejections:

  • The tool or benchmark is not available as open-source
  • The tool or benchmark is not available for researchers to use for free (i.e., there must be a free academic license; it can still be sold commercially to industry practitioners)
  • The tool or benchmark does not work with an open-source operating system such as Linux

Guidelines for authors:

  • Check the Tool or Benchmark checkbox on the submission form to flag the submission for this review process
  • Please provide a link to the open-source tool/benchmark