Beaver: Efficiently Building Test Collections for Novel Tasks
Use this link to cite: http://hdl.handle.net/2183/41361
Bibliographic citation
Otero, D., Parapar, J., & Barreiro, Á. ‘Beaver: Efficiently Building Test Collections for Novel Tasks’, CEUR Workshop Proceedings, Vol. 2621, art. 23, pp. 1-2, 2020, Proceedings of the Joint Conference of the Information Retrieval Communities in Europe (CIRCLE 2020), Samatan, Gers, France, July 6-9, 2020.
Abstract
Evaluation is a mandatory task in Information Retrieval research. Under the Cranfield paradigm, this evaluation requires test collections, whose creation is a time- and resource-consuming process. At the same time, new tasks and models are continuously appearing, and these tasks demand the construction of new test collections. Typically, researchers organize TREC-like competitions to build these evaluation benchmarks, which is very expensive both for the organizers and for the participants. In this paper, we present a platform to easily and cheaply build datasets for Information Retrieval evaluation without the need to organize expensive campaigns. In particular, we propose the simulation of participant systems and the use of pooling strategies to make the most of the assessors' work. Our platform aims to cover the whole process of building the test collection, from document gathering to judgment creation.
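The pooling idea mentioned in the abstract can be illustrated with a minimal sketch. This is an assumption about how depth-k pooling generally works in TREC-style evaluation, not the authors' actual implementation: each (possibly simulated) participant system produces a ranked run, and only the union of the top-k documents across runs is sent to the assessors for judgment.

```python
# Sketch of depth-k pooling (illustrative; the function name, runs,
# and document IDs below are hypothetical, not from the paper).

def depth_k_pool(runs, k=10):
    """Return the set of documents to judge: the union of the
    top-k documents from every ranked run."""
    pool = set()
    for run in runs:
        pool.update(run[:k])
    return pool

# Two hypothetical runs, e.g. from two simulated participant systems.
run_a = ["d1", "d2", "d3", "d4"]
run_b = ["d3", "d5", "d1", "d6"]

# With k=2, assessors judge only the pooled documents, not every
# document retrieved by every system.
print(sorted(depth_k_pool([run_a, run_b], k=2)))
```

Because every document outside the pool is treated as unjudged, a shallow k keeps assessor effort low at the cost of less complete relevance judgments; that trade-off is exactly what makes pooling attractive for cheaply built collections.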
Description
Proceedings of the Joint Conference of the Information Retrieval Communities in Europe (CIRCLE 2020) Samatan, Gers, France, July 6-9, 2020, published at http://ceur-ws.org.
Rights
Attribution 4.0 International
Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).