BasicHebbianGRUModel Documentation

Table of Contents

1. Introduction
2. Class Definition
3. Initialization
4. Forward Pass
5. Usage Examples
6. Additional Information

1. Introduction
The BasicHebbianGRUModel is a PyTorch-based model designed for text-based tasks. It combines Hebbian learning with a GRU (Gated Recurrent Unit) layer to process sequential data, and introduces non-linearity through the ReLU (Rectified Linear Unit) activation function.
Purpose
- The model is designed to learn and represent patterns in sequential data, making it suitable for various natural language processing (NLP) tasks.
- It applies Hebbian learning to adaptively adjust weights based on input patterns, followed by GRU processing for sequential data handling.
- The ReLU activation function introduces non-linearity, enabling the model to capture complex relationships in the data.
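For context, the classical Hebbian rule strengthens a weight in proportion to the correlated activity of the units it connects. A general form (not necessarily the exact update this model uses) is:

Δw_ij = η · x_i · y_j

where η is a learning rate, x_i is a pre-synaptic (input) activation, and y_j is a post-synaptic (output) activation.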
Key Features
- Hebbian learning for weight adaptation.
- GRU layer for sequential data processing.
- ReLU activation for non-linearity.
2. Class Definition
class BasicHebbianGRUModel(nn.Module):
    """
    A basic Hebbian learning model combined with a GRU for text-based tasks.

    Parameters:
    - dim (int): Dimension of the input features.
    - hidden_dim (int): Dimension of the hidden state in the GRU.
    - output_dim (int): Dimension of the output features.
    """
The BasicHebbianGRUModel class has the following attributes:

- dim (int): Dimension of the input features.
- hidden_dim (int): Dimension of the hidden state in the GRU.
- output_dim (int): Dimension of the output features.
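The snippet above shows only the constructor signature. Below is a minimal sketch of an implementation consistent with this documentation and the attributes (weights, gru, fc) referenced later; the shape of the Hebbian weight matrix, the use of a matrix multiply for the Hebbian step, and batch_first=True are assumptions, so treat the source code as authoritative.

import torch
import torch.nn as nn


class BasicHebbianGRUModel(nn.Module):
    def __init__(self, dim, hidden_dim, output_dim):
        super().__init__()
        self.dim = dim
        self.hidden_dim = hidden_dim
        self.output_dim = output_dim
        # Hebbian weight matrix applied to the input features (assumed shape)
        self.weights = nn.Parameter(torch.randn(dim, dim))
        # GRU over the (Hebbian-weighted) input sequence
        self.gru = nn.GRU(dim, hidden_dim, batch_first=True)
        # Fully connected projection to the output dimension
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        # x: (batch_size, seq_len, dim)
        # Apply the Hebbian weights to the input (assumed to be a matmul here)
        x = torch.matmul(x, self.weights)
        # Sequential processing with the GRU
        x, _ = self.gru(x)
        # Non-linearity via ReLU
        x = torch.relu(x)
        # Project each timestep to output features: (batch_size, seq_len, output_dim)
        return self.fc(x)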
3. Initialization

To create an instance of the BasicHebbianGRUModel, specify the dimensions of the input, hidden state, and output features. Here's how to initialize the model:
dim = 512 # Dimension of the input features
hidden_dim = 256 # Dimension of the hidden state in the GRU
output_dim = 128 # Dimension of the output features
model = BasicHebbianGRUModel(dim, hidden_dim, output_dim)
4. Forward Pass
The forward pass of the model processes input data through several stages:
- It applies Hebbian update rules to the weights.
- The data is then passed through a GRU layer.
- A ReLU activation function is applied to introduce non-linearity.
- Finally, the output is passed through a fully connected layer.
Here's how to perform a forward pass:
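A minimal sketch, assuming a 3-D input of shape (batch_size, sequence_length, dim) as in the implementation sketch above:

import torch

x = torch.randn(32, 100, dim)  # (batch_size, sequence_length, dim)
output = model(x)              # (batch_size, sequence_length, output_dim)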
5. Usage Examples

Example 1: Model Initialization
dim = 512
hidden_dim = 256
output_dim = 128
model = BasicHebbianGRUModel(dim, hidden_dim, output_dim)
Example 2: Forward Pass
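Assuming the same (batch_size, sequence_length, dim) input convention as above, a complete round trip might look like this; the random tensor is a stand-in for embedded input text:

import torch

batch_size, seq_len = 32, 100
x = torch.randn(batch_size, seq_len, dim)  # stand-in for embedded text
output = model(x)
print(output.shape)  # expected under the sketch above: torch.Size([32, 100, 128]) for output_dim = 128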
Example 3: Accessing Model Parameters
# Accessing model parameters (weights, GRU parameters, FC layer parameters)
model_weights = model.weights
gru_parameters = model.gru.parameters()
fc_parameters = model.fc.parameters()
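These handles are ordinary PyTorch parameters, so they can be passed to a standard optimizer. Note that model.parameters() already covers all of the above, assuming weights is registered as an nn.Parameter as in the sketch in Section 2:

import torch.optim as optim

# One optimizer over every trainable parameter of the model
optimizer = optim.Adam(model.parameters(), lr=1e-3)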
6. Additional Information

Tips for Effective Usage
- For optimal results, ensure that input data is properly preprocessed and normalized (a brief sketch follows this list).
- Experiment with different hyperparameters, such as the dimensions of hidden states and output features, to fine-tune the model for your specific task.
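As a sketch of the normalization tip above, one hypothetical preprocessing step applies nn.LayerNorm to the embedded input before the forward pass; whether this helps is task-dependent:

import torch.nn as nn

norm = nn.LayerNorm(dim)      # normalize each feature vector of the input
x_normalized = norm(x)        # x: (batch_size, sequence_length, dim)
output = model(x_normalized)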
This documentation provides an overview of the BasicHebbianGRUModel, its purpose, usage, and key features. For more details on the implementation and advanced usage, refer to the source code and additional resources.