DMLR submissions are required to meet the following criteria for acceptance:

Does the submission fit the scope?
Submissions should primarily emphasize the data-related aspects of machine learning research, as outlined in the "Scope" section. Some submissions may have a dual focus on both the data and other components; this still fits the scope criteria if the submission offers broadly applicable insights derived from its data-related contribution.

Does the submission adhere to the highest standards and best practices in data-centric machine learning research?
DMLR sets a high bar for the quality of data-centric machine learning research. Submissions must be technically sound and correct, and the contribution must be significant in its scientific value. The narrative and arguments should be articulated with clarity, with no gaps between the claims and the supporting evidence. Authors should also adequately address the limitations and potential negative societal impact of their work, and submissions must comply with the NeurIPS Code of Ethics.

What are the eligibility criteria for extended versions of a prior work?
Extended submissions must meet two requirements: (1) the prior publication must originate from a conference or workshop, not from a journal, and (2) the extended version must contain a minimum of 30% additional content compared to the prior version. Authors will also be required to provide a link to the previously published work during the submission process on OpenReview. This link serves as a point of reference for reviewers and action editors to evaluate the changes made.

If the submission introduces or incorporates a dataset and/or benchmark, does it provide adequate information regarding data collection and organization, availability, maintenance, as well as responsible use?
Submissions introducing or incorporating a dataset and/or benchmark are required to include comprehensive documentation covering intended applications, a URL giving reviewers access to the dataset, and plans for hosting, licensing, and ongoing maintenance. In the case of benchmarks, authors must provide the details needed to support reproducibility, including all necessary datasets, code, and evaluation procedures. We strongly recommend the adoption of a reproducibility framework, such as the ML reproducibility checklist, to ensure that all results can be readily replicated. If the submission includes code, please refer to the NeurIPS Code and Data Submission Guidelines for further instructions.