kiwi.systems.outputs.quality_estimation

Module Contents

Classes

WordLevelConfig

Base class for all pydantic configs. Defines the shared behaviour of config classes.

SentenceLevelConfig

Base class for all pydantic configs. Defines the shared behaviour of config classes.

QEOutputs

Base class for all neural network modules.

Functions

tag_metrics(*targets, prefix=None, labels=None)

kiwi.systems.outputs.quality_estimation.logger
class kiwi.systems.outputs.quality_estimation.WordLevelConfig

Bases: kiwi.utils.io.BaseConfig

Base class for all pydantic configs. Defines the shared behaviour of config classes.

target :bool = False

Train or predict target tags

gaps :bool = False

Train or predict gap tags

source :bool = False

Train or predict source tags

class_weights :Dict[str, Dict[str, float]]

Relative weight for labels on each output side.
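
A minimal configuration sketch, assuming pydantic-style keyword construction; the output-side and label keys inside class_weights (here target_tags and BAD) are assumptions and should match your data fields:

from kiwi.systems.outputs.quality_estimation import WordLevelConfig

# Hypothetical: train on target and gap tags, upweighting the BAD class
# so that the rarer label contributes more to the tagging loss.
word_config = WordLevelConfig(
    target=True,
    gaps=True,
    class_weights={"target_tags": {"BAD": 3.0}},
)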

class kiwi.systems.outputs.quality_estimation.SentenceLevelConfig

Bases: kiwi.utils.io.BaseConfig

Base class for all pydantic configs. Defines the shared behaviour of config classes.

hter :bool = False

Predict sentence-level scores. Requires the appropriate input files (usually with HTER).

use_distribution :bool = False

Use a probabilistic loss for sentence scores instead of squared error. If set (requires hter to also be set), the model outputs the mean and variance of a truncated Gaussian distribution over the interval [0, 1] and uses the NLL of the ground-truth scores as the loss. This seems to improve performance and yields uncertainty estimates for sentence-level predictions as a byproduct (see the sketch below).

binary :bool = False

Predict a binary label for each sentence, indicating whether hter == 0.0. Requires the appropriate input files (usually with HTER).
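
With use_distribution enabled, the sentence loss is the negative log-likelihood of the gold score under a Gaussian truncated to [0, 1]. A minimal sketch of that NLL, assuming the model predicts a per-sentence mean and standard deviation (the actual sentence_losses() implementation may differ):

import torch
from torch.distributions import Normal

def truncated_gaussian_nll(mu, sigma, score, low=0.0, high=1.0):
    # Renormalize the Gaussian density by the probability mass that
    # falls inside [low, high], then score the gold HTER value.
    normal = Normal(mu, sigma)
    partition = normal.cdf(torch.tensor(high)) - normal.cdf(torch.tensor(low))
    return -(normal.log_prob(score) - torch.log(partition))

At prediction time, the predicted variance doubles as an uncertainty estimate, which is the byproduct mentioned above.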

class kiwi.systems.outputs.quality_estimation.QEOutputs(inputs_dims, vocabs: Dict[str, Vocabulary], config: Config)

Bases: kiwi.systems._meta_module.MetaModule

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        # Submodules assigned as attributes are registered automatically.
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        # Chain the registered submodules, applying a non-linearity to each.
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

class Config

Bases: kiwi.utils.io.BaseConfig

Base class for all pydantic configs. Defines the shared behaviour of config classes.

word_level :WordLevelConfig
sentence_level :SentenceLevelConfig
sentence_loss_weight :float = 1.0

Weight multiplier for the sentence-level loss.

dropout :float = 0.0
last_activation :bool = False
n_layers_output :int = 3
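
A configuration sketch composing the nested configs, assuming pydantic keyword construction and that both sub-configs are passed explicitly:

from kiwi.systems.outputs.quality_estimation import (
    QEOutputs,
    SentenceLevelConfig,
    WordLevelConfig,
)

# Hypothetical: word-level target tags plus probabilistic sentence scores,
# with the sentence loss weighted twice as much as the tagging loss.
output_config = QEOutputs.Config(
    word_level=WordLevelConfig(target=True),
    sentence_level=SentenceLevelConfig(hter=True, use_distribution=True),
    sentence_loss_weight=2.0,
    dropout=0.1,
)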
forward(self, features: Dict[str, Tensor], batch_inputs: MultiFieldBatch) → Dict[str, Tensor]
loss(self, model_out: Dict[str, Tensor], batch: MultiFieldBatch) → Dict[str, Tensor]
word_losses(self, model_out: Dict[str, Tensor], batch_outputs: MultiFieldBatch)

Compute sequence tagging loss.
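A hypothetical sketch of such a tagging loss, assuming binary OK/BAD tags, a padded gold-tag tensor, and a BAD weight drawn from class_weights; the real word_losses() may differ:

import torch
import torch.nn.functional as F

def word_tag_loss(logits, gold_tags, bad_weight=3.0, pad_id=-100):
    # logits: (batch, seq_len, 2); gold_tags: (batch, seq_len), padded
    # with pad_id. Index 1 is assumed to be the BAD label.
    weights = torch.tensor([1.0, bad_weight])
    return F.cross_entropy(
        logits.view(-1, 2),
        gold_tags.view(-1),
        weight=weights,
        ignore_index=pad_id,
    )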

sentence_losses(self, model_out: Dict[str, Tensor], batch_outputs: MultiFieldBatch)

Compute sentence score loss.

metrics_step(self, batch: MultiFieldBatch, model_out: Dict[str, Tensor], loss_dict: Dict[str, Tensor]) → Dict[str, Tensor]
metrics_end(self, steps: List[Dict[str, Tensor]], prefix='')
property metrics(self) → List[Metric]
labels(self, field: str) → List[str]
decode_outputs(self, model_out: Dict[str, Tensor], batch_inputs: MultiFieldBatch, positive_class_label: str = const.BAD) → Dict[str, List]
decode_word_outputs(self, model_out: Dict[str, Tensor], batch_inputs: MultiFieldBatch, positive_class_label: str = const.BAD) → Dict[str, List]
static decode_sentence_outputs(model_out: Dict[str, Tensor]) → Dict[str, List]
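
As a rough picture of the word-level decoding, a hypothetical sketch that takes the argmax over tag logits, maps indices through the label vocabulary, and trims each sentence to its true length (the real decode_word_outputs() also handles gap and source tags and the positive_class_label):

def decode_tags(logits, lengths, labels=("OK", "BAD")):
    # logits: (batch, seq_len, n_labels); lengths: true sentence lengths.
    predictions = logits.argmax(dim=-1)
    return [
        [labels[i] for i in row[:n].tolist()]
        for row, n in zip(predictions, lengths)
    ]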
kiwi.systems.outputs.quality_estimation.tag_metrics(*targets, prefix=None, labels=None)