Supporting Modules#
- class recwizard.modules.unicrs.kg_prompt.KGPrompt(hidden_size, token_hidden_size, n_head, n_layer, n_block, num_bases, kg_info, edge_index, edge_type, num_tokens, n_prefix_rec=None, n_prefix_conv=None)[source]#
- __init__(hidden_size, token_hidden_size, n_head, n_layer, n_block, num_bases, kg_info, edge_index, edge_type, num_tokens, n_prefix_rec=None, n_prefix_conv=None)[source]#
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(entity_ids=None, prompt=None, rec_mode=False, use_rec_prefix=False, use_conv_prefix=False)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the forward-pass recipe needs to be defined within this function, one should call the Module instance itself rather than this method, since the former takes care of running the registered hooks while the latter silently ignores them.
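KGPrompt encodes knowledge-graph entities into prefix embeddings that are later consumed by the prompt GPT-2 model. The sketch below only illustrates the documented forward() signature; the helper name is hypothetical and an already-initialized KGPrompt instance is assumed.

```python
def build_rec_prefix(kg_prompt, entity_ids):
    """Hypothetical helper: encode dialog entities into a recommendation prefix.

    `kg_prompt` is assumed to be an already-initialized KGPrompt instance;
    the keyword arguments mirror the documented forward() signature.
    """
    # Call the module instance (not .forward()) so that registered hooks run.
    return kg_prompt(entity_ids=entity_ids, rec_mode=True, use_rec_prefix=True)
```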
- class recwizard.modules.unicrs.prompt_gpt2.GPT2Attention(config, is_cross_attention=False)[source]#
- __init__(config, is_cross_attention=False)[source]#
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- _split_heads(tensor, num_heads, attn_head_size)[source]#
Splits the hidden_size dimension into num_heads and attn_head_size.
- _merge_heads(tensor, num_heads, attn_head_size)[source]#
Merges the num_heads and attn_head_size dimensions back into hidden_size.
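These two helpers perform the standard GPT-2 reshaping between (batch, seq, hidden_size) and (batch, num_heads, seq, attn_head_size). A minimal stand-alone illustration of that shape round trip (not the class's exact code) is:

```python
import torch

batch, seq, num_heads, head_size = 2, 5, 12, 64
hidden = torch.randn(batch, seq, num_heads * head_size)

# split: (batch, seq, hidden_size) -> (batch, num_heads, seq, head_size)
split = hidden.view(batch, seq, num_heads, head_size).permute(0, 2, 1, 3)

# merge: the inverse transformation, back to (batch, seq, hidden_size)
merged = split.permute(0, 2, 1, 3).contiguous().view(batch, seq, num_heads * head_size)

assert torch.equal(hidden, merged)
```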
- forward(hidden_states, layer_past=None, prompt_embeds=None, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, use_cache=False, output_attentions=False)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the forward-pass recipe needs to be defined within this function, one should call the Module instance itself rather than this method, since the former takes care of running the registered hooks while the latter silently ignores them.
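The prompt_embeds argument lets precomputed prompt vectors be attended to alongside the regular sequence. A rough plain-PyTorch illustration of that idea (prepending prompt key/value pairs before attention; not this class's exact implementation):

```python
import torch
import torch.nn.functional as F

def attend_with_prompt(query, key, value, prompt_key, prompt_value):
    """Toy single-head attention where prompt keys/values are prepended."""
    # query/key/value: (batch, seq, dim); prompt_*: (batch, prompt_len, dim)
    k = torch.cat([prompt_key, key], dim=1)
    v = torch.cat([prompt_value, value], dim=1)
    scores = query @ k.transpose(-1, -2) / k.size(-1) ** 0.5
    return F.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(1, 4, 8)
p_k = p_v = torch.randn(1, 2, 8)
out = attend_with_prompt(q, k, v, p_k, p_v)   # shape (1, 4, 8)
```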
- class recwizard.modules.unicrs.prompt_gpt2.GPT2Block(config)[source]#
- __init__(config)[source]#
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(hidden_states, layer_past=None, prompt_embeds=None, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, use_cache=False, output_attentions=False)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the forward-pass recipe needs to be defined within this function, one should call the Module instance itself rather than this method, since the former takes care of running the registered hooks while the latter silently ignores them.
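The block simply threads prompt_embeds through to its attention layer. A hedged sketch of the call pattern, assuming `block` is an initialized GPT2Block and that it follows the stock GPT-2 convention of returning the updated hidden states as the first tuple element:

```python
def run_block(block, hidden_states, prompt_embeds=None, attention_mask=None):
    """Hypothetical wrapper around one decoder-block step with an optional prompt."""
    outputs = block(
        hidden_states,
        prompt_embeds=prompt_embeds,
        attention_mask=attention_mask,
        use_cache=False,
    )
    # The first tuple element is assumed to be the updated hidden states,
    # as in stock GPT-2 blocks.
    return outputs[0]
```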
- class recwizard.modules.unicrs.prompt_gpt2.GPT2Model(config)[source]#
- __init__(config)[source]#
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- get_input_embeddings()[source]#
Returns the model’s input embeddings.
- Returns:
A torch module mapping vocabulary to hidden states.
- Return type:
nn.Module
- set_input_embeddings(new_embeddings)[source]#
Sets the model’s input embeddings.
- Parameters:
new_embeddings (nn.Module) – A module mapping vocabulary to hidden states.
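Together, these two accessors make it possible to swap or extend the token embedding table. A small hedged sketch, assuming `model` is an initialized prompt_gpt2.GPT2Model whose input embeddings behave like a standard nn.Embedding:

```python
import torch.nn as nn

def grow_vocab(model, new_vocab_size):
    """Hypothetical helper: swap in a larger embedding table, keeping existing rows."""
    old = model.get_input_embeddings()               # an nn.Embedding-like module
    new = nn.Embedding(new_vocab_size, old.embedding_dim)
    new.weight.data[: old.num_embeddings] = old.weight.data
    model.set_input_embeddings(new)
    return model
```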
- _prune_heads(heads_to_prune)[source]#
Prunes heads of the model.
- Parameters:
heads_to_prune (dict) – {layer_num: list of heads to prune in this layer}
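The heads_to_prune mapping is a plain Python dict; the layer and head indices below are illustrative only:

```python
# {layer_num: [head indices to prune in that layer]}
heads_to_prune = {0: [0, 2], 5: [1]}
# model._prune_heads(heads_to_prune)   # assuming `model` is an initialized GPT2Model
```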
- forward(input_ids=None, past_key_values=None, prompt_embeds=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the forward-pass recipe needs to be defined within this function, one should call the Module instance itself rather than this method, since the former takes care of running the registered hooks while the latter silently ignores them.
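The forward signature matches stock GPT-2 except for prompt_embeds, which carries the prefix produced by KGPrompt. A hedged sketch of the intended wiring (instances, tensor shapes, and the return_dict output layout are assumptions, not prescribed by these docs):

```python
def encode_with_prompt(gpt2_model, input_ids, prompt_embeds, attention_mask=None):
    """Hypothetical glue: run the prompt-aware GPT-2 encoder over a dialog."""
    outputs = gpt2_model(
        input_ids=input_ids,
        prompt_embeds=prompt_embeds,      # e.g. a prefix produced by KGPrompt
        attention_mask=attention_mask,
        return_dict=True,
    )
    return outputs.last_hidden_state
```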
- class recwizard.modules.unicrs.prompt_gpt2.PromptGPT2LMHead(config)[source]#
- __init__(config)[source]#
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- get_output_embeddings()[source]#
Returns the model’s output embeddings.
- Returns:
A torch module mapping hidden states to vocabulary.
- Return type:
nn.Module
- forward(input_ids=None, past_key_values=None, prompt_embeds=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, use_cache=None, output_attentions=None, output_hidden_states=None, conv=True, labels=None, return_dict=True) → CausalLMOutputWithCrossAttentions[source]#
- labels (torch.LongTensor of shape (batch_size, sequence_length), optional):
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids. Indices are selected in [-100, 0, …, config.vocab_size]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, …, config.vocab_size].
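Because the labels are shifted internally, a training step can pass labels=input_ids directly. A hedged end-to-end sketch, assuming `lm` is an initialized PromptGPT2LMHead and prompt_embeds comes from KGPrompt:

```python
def lm_loss_step(lm, input_ids, prompt_embeds, attention_mask=None):
    """Hypothetical training step: next-token loss over the dialog tokens."""
    out = lm(
        input_ids=input_ids,
        prompt_embeds=prompt_embeds,
        attention_mask=attention_mask,
        labels=input_ids,      # shifted inside the model; -100 positions are ignored
        conv=True,
        return_dict=True,
    )
    return out.loss
```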