Recommender Module#
- class recwizard.modules.unicrs.configuration_unicrs_rec.UnicrsRecConfig(pretrained_model: str = '', kgprompt_config: dict | None = None, num_tokens: int = 0, pad_token_id: int = 0, **kwargs)[source]#
- __init__(pretrained_model: str = '', kgprompt_config: dict | None = None, num_tokens: int = 0, pad_token_id: int = 0, **kwargs)[source]#
- Parameters:
WEIGHT_DIMENSIONS (dict, optional) – The dimensions and dtypes of module parameters, used to initialize parameters that are not explicitly specified at module initialization. Defaults to None. See also recwizard.module_utils.BaseModule.prepare_weight().
**kwargs – Additional parameters. Will be passed to PretrainedConfig.__init__.
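For orientation, a minimal construction sketch follows; the kgprompt_config keys shown are illustrative assumptions (the actual schema is defined by the KG-prompt component and is not documented on this page):

```python
from recwizard.modules.unicrs.configuration_unicrs_rec import UnicrsRecConfig

# Sketch only; kgprompt_config keys below are assumed for illustration.
config = UnicrsRecConfig(
    pretrained_model="microsoft/DialoGPT-small",
    kgprompt_config={"hidden_size": 768, "num_bases": 8},  # assumed keys
    num_tokens=50257,
    pad_token_id=0,
)
```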
- class recwizard.modules.unicrs.tokenizer_unicrs_rec.UnicrsRecTokenizer(context_tokenizer: str = 'microsoft/DialoGPT-small', prompt_tokenizer: str = 'roberta-base', context_max_length: int = 200, prompt_max_length: int = 200, pad_entity_id: int = 31161, entity2id: Dict[str, int] | None = None, **kwargs)[source]#
- __init__(context_tokenizer: str = 'microsoft/DialoGPT-small', prompt_tokenizer: str = 'roberta-base', context_max_length: int = 200, prompt_max_length: int = 200, pad_entity_id: int = 31161, entity2id: Dict[str, int] | None = None, **kwargs)[source]#
- Parameters:
entity2id (Dict[str, int]) – a dict mapping entity names to entity ids. If not provided, it will be generated from id2entity.
id2entity (Dict[int, str]) – a dict mapping entity ids to entity names. If not provided, it will be generated from entity2id.
pad_entity_id (int) – the id used for the padding entity. If not provided, it defaults to the maximum entity id + 1.
tokenizers (List[PreTrainedTokenizerBase]) – a list of tokenizers to be used.
**kwargs – other arguments for PreTrainedTokenizer
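A minimal instantiation sketch using the defaults documented above; the entity2id entries are placeholders (a real mapping comes from the knowledge graph shipped with the dataset):

```python
from recwizard.modules.unicrs.tokenizer_unicrs_rec import UnicrsRecTokenizer

# Placeholder entity mapping for illustration only.
entity2id = {"Titanic (1997)": 0, "Inception (2010)": 1}

tokenizer = UnicrsRecTokenizer(
    context_tokenizer="microsoft/DialoGPT-small",
    prompt_tokenizer="roberta-base",
    context_max_length=200,
    prompt_max_length=200,
    entity2id=entity2id,
)
```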
- __call__(*args, **kwargs)[source]#
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.
- Parameters:
text (str, List[str], List[List[str]], optional) – The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_pair (str, List[str], List[List[str]], optional) – The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_target (str, List[str], List[List[str]], optional) – The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_pair_target (str, List[str], List[List[str]], optional) – The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
add_special_tokens (bool, optional, defaults to True) – Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or [~utils.PaddingStrategy], optional, defaults to False) –
Activates and controls padding. Accepts the following values:
True or ‘longest’: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
‘max_length’: Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
False or ‘do_not_pad’ (default): No padding (i.e., can output a batch with sequences of different lengths).
truncation (bool, str or [~tokenization_utils_base.TruncationStrategy], optional, defaults to False) –
Activates and controls truncation. Accepts the following values:
True or ‘longest_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.
‘only_first’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
‘only_second’: Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or ‘do_not_truncate’ (default): No truncation (i.e., can output a batch with sequence lengths greater than the model maximum admissible input size).
max_length (int, optional) –
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) – If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence returned to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) – Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace) which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) – If set will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or [~utils.TensorType], optional) –
If set, will return tensors instead of list of python integers. Acceptable values are:
‘tf’: Return TensorFlow tf.constant objects.
‘pt’: Return PyTorch torch.Tensor objects.
‘np’: Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) –
Whether to return token type IDs. If left to the default, will return the token type IDs according to the specific tokenizer’s default, defined by the return_outputs attribute.
[What are token type IDs?](../glossary#token-type-ids)
return_attention_mask (bool, optional) –
Whether to return the attention mask. If left to the default, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute.
[What are attention masks?](../glossary#attention-mask)
return_overflowing_tokens (bool, optional, defaults to False) – Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) – Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) –
Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from [PreTrainedTokenizerFast]; if using Python’s tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) – Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) – Whether or not to print more information and warnings.
**kwargs – passed to the self.tokenize() method
- Returns:
A [BatchEncoding] with the following fields:
input_ids – List of token ids to be fed to a model.
[What are input IDs?](../glossary#input-ids)
token_type_ids – List of token type ids to be fed to a model (when return_token_type_ids=True or if “token_type_ids” is in self.model_input_names).
[What are token type IDs?](../glossary#token-type-ids)
attention_mask – List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names).
[What are attention masks?](../glossary#attention-mask)
overflowing_tokens – List of overflowing tokens sequences (when a max_length is specified and return_overflowing_tokens=True).
num_truncated_tokens – Number of tokens truncated (when a max_length is specified and return_overflowing_tokens=True).
special_tokens_mask – List of 0s and 1s, with 1 specifying added special tokens and 0 specifying regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length – The length of the inputs (when return_length=True)
- Return type:
[BatchEncoding]
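A usage sketch for this method, continuing the instantiation sketch above. The field layout of the returned BatchEncoding depends on how the per-tokenizer encodings are merged, so the example inspects the keys rather than assuming their names:

```python
# Tokenize one dialogue context; padding/truncation behave as described above.
batch = tokenizer(
    "Hi! Can you recommend a movie like Titanic?",
    truncation=True,
    max_length=200,
    return_tensors="pt",
)
# Inspect the merged fields instead of assuming key names.
print(list(batch.keys()))
```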
- classmethod load_from_dataset(dataset='redial_unicrs', **kwargs)[source]#
Initialize the tokenizer from the dataset. By default, it will load the entity2id mapping from the dataset.
- Parameters:
dataset – the dataset name
**kwargs – the other arguments for initialization
- Returns:
the initialized tokenizer
- Return type:
UnicrsRecTokenizer
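For example, with the default dataset:

```python
from recwizard.modules.unicrs.tokenizer_unicrs_rec import UnicrsRecTokenizer

# Loads entity2id (and related resources) from the "redial_unicrs" dataset.
tokenizer = UnicrsRecTokenizer.load_from_dataset(dataset="redial_unicrs")
```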
- static mergeEncoding(encodings: List[BatchEncoding]) → BatchEncoding[source]#
Merge a list of encodings into one encoding. Assumes each encoding has the same attributes other than data.
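An illustrative sketch of the call shape, assuming two wrapped tokenizers; how duplicate keys (e.g. input_ids present in both encodings) are combined is left to the implementation:

```python
from transformers import AutoTokenizer
from recwizard.modules.unicrs.tokenizer_unicrs_rec import UnicrsRecTokenizer

gpt2_tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
roberta_tok = AutoTokenizer.from_pretrained("roberta-base")

enc_a = gpt2_tok("hello there")
enc_b = roberta_tok("hello there")

# Both encodings share the same attributes; only their data dicts differ.
merged = UnicrsRecTokenizer.mergeEncoding([enc_a, enc_b])
```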
- encodes(encode_funcs: List[Callable], texts: List[str | List[str]], *args, **kwargs) → List[BatchEncoding][source]#
This function is called to apply encoding functions from different tokenizers. It will be used by both encode_plus and batch_encode_plus.
If you want to call different tokenizers with different arguments, override this method.
- Parameters:
encode_funcs – the encoding functions from self.tokenizers.
texts – the processed text for each encoding function
**kwargs – additional arguments passed to each encoding function
- Returns:
a list of BatchEncoding; the length of the list is the same as the number of tokenizers
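For instance, a hypothetical subclass could override encodes to give each wrapped tokenizer its own arguments; the per-tokenizer max lengths below are assumptions for illustration:

```python
from typing import Callable, List
from transformers import BatchEncoding
from recwizard.modules.unicrs.tokenizer_unicrs_rec import UnicrsRecTokenizer

class PerTokenizerArgsTokenizer(UnicrsRecTokenizer):  # hypothetical subclass
    def encodes(self, encode_funcs: List[Callable], texts, *args, **kwargs) -> List[BatchEncoding]:
        # Assumes two wrapped tokenizers (context and prompt); give each
        # its own max_length instead of sharing one set of kwargs.
        per_func_kwargs = [{"max_length": 200}, {"max_length": 128}]
        return [
            func(text, truncation=True, **kw)
            for func, text, kw in zip(encode_funcs, texts, per_func_kwargs)
        ]
```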
- decode(raw_input, *args, **kwargs)#
Overrides the decode function from PreTrainedTokenizer. By default, calls the decode function of the first tokenizer.
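A decode sketch; it assumes the wrapped tokenizers are stored on the tokenizers attribute documented above, and that raw_input is a sequence of token ids:

```python
# decode delegates to the first wrapped tokenizer (the context tokenizer
# in the instantiation sketch above) by default.
ids = tokenizer.tokenizers[0]("hello there")["input_ids"]  # assumed attribute
print(tokenizer.decode(ids))
```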