COMET: High-quality Machine Translation Evaluation
==================================================

.. image:: _static/img/COMET_lockup-dark.png
   :width: 800
   :alt: COMET by Unbabel

What is COMET
=============

COMET is an open-source framework for MT evaluation that can be used for two purposes:

* To evaluate MT systems with our currently available high-performing metrics (check: :ref:`models:COMET Metrics`).
* To train and develop new metrics.

Contents:
=========

.. toctree::
   :maxdepth: 2

   installation
   running
   faqs
   models
   training

License
=======

Free software: Apache License 2.0.

Model licenses can be found `here `_.

Contributing
============

We welcome contributions to improve COMET. Please refer to `CONTRIBUTING.md `_ for quick instructions, or to the contributing guide for more detailed instructions on setting up your development environment.

Publications
============

* `CometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task `_
* `COMET-22: Unbabel-IST 2022 Submission for the Metrics Shared Task `_
* `Searching for Cometinho: The Little Metric That Could `_
* `Are References Really Needed? Unbabel-IST 2021 Submission for the Metrics Shared Task `_
* `Uncertainty-Aware Machine Translation Evaluation `_
* `COMET - Deploying a New State-of-the-art MT Evaluation Metric in Production `_
* `Unbabel's Participation in the WMT20 Metrics Shared Task `_
* `COMET: A Neural Framework for MT Evaluation `_