Activation functions
BaseActivation
Bases: BaseLayer
Template for activation layers.
Activation functions have no trainable parameters, so they do not need to update or expose any weights.
Source code in src/nnfs/activations.py
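The source itself is collapsed above; as a minimal sketch of what such a template might look like (the `BaseLayer` stand-in here is a hypothetical simplification, not the library's actual class):

```python
import numpy as np

class BaseLayer:
    """Hypothetical stand-in for the library's BaseLayer."""
    def __init__(self):
        self.index = 0  # position of the layer within the full model

class BaseActivation(BaseLayer):
    """Template for activation layers: no weights to update or expose."""

    def forward(self, X_input: np.ndarray) -> np.ndarray:
        # Concrete activations override this with their element-wise function.
        raise NotImplementedError

    def backward(self, grad_next: np.ndarray) -> np.ndarray:
        # Concrete activations override this with their derivative.
        raise NotImplementedError
```

Subclasses such as `Sigmoid` and `Softmax` only need to supply `forward` and `backward`.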
Sigmoid
Bases: BaseActivation
Sigmoid activation layer.
Applies the sigmoid function element-wise to the input during forward propagation, and computes its derivative during backpropagation.
Attributes:

| Name | Type | Description |
|---|---|---|
| `output` | `ndarray` | Stores the output from the forward pass for use in the backward pass. |
| `layer_name` | `str` | Short name for the layer type. |
| `index` | `int` | Position of the layer within the full model (initializes at 0). |
Source code in src/nnfs/activations.py
forward(X_input)
Performs the forward pass using the sigmoid activation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `X_input` | `ndarray` | Input array to the layer. | *required* |

Returns:

| Type | Description |
|---|---|
| `ndarray` | Activated output after applying sigmoid. |
Source code in src/nnfs/activations.py
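The collapsed source is not reproduced here; a sketch of a forward pass matching this docstring (the actual nnfs implementation may differ) could look like:

```python
import numpy as np

class Sigmoid:
    """Sketch of the documented sigmoid layer."""
    layer_name = "sigmoid"

    def forward(self, X_input: np.ndarray) -> np.ndarray:
        # sigma(x) = 1 / (1 + exp(-x)), applied element-wise.
        # The result is cached in self.output for the backward pass.
        self.output = 1.0 / (1.0 + np.exp(-X_input))
        return self.output
```

For example, `Sigmoid().forward(np.array([0.0]))` yields `0.5`, the sigmoid's value at the origin.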
backward(grad_next)
Performs the backward pass for the sigmoid layer.
Computes the gradient of the loss with respect to the input, reusing the cached output from the forward pass.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `grad_next` | `ndarray` | Gradient from the next layer. | *required* |

Returns:

| Type | Description |
|---|---|
| `ndarray` | Gradient of the loss with respect to this layer's input. |
Source code in src/nnfs/activations.py
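A sketch of how the backward pass can reuse the cached output, as the docstring describes (assumed implementation, shown with its matching forward pass for completeness):

```python
import numpy as np

class Sigmoid:
    """Sketch of the documented sigmoid layer."""

    def forward(self, X_input: np.ndarray) -> np.ndarray:
        self.output = 1.0 / (1.0 + np.exp(-X_input))
        return self.output

    def backward(self, grad_next: np.ndarray) -> np.ndarray:
        # d(sigma)/dx = sigma(x) * (1 - sigma(x)); since self.output
        # already holds sigma(x), no re-evaluation of exp is needed.
        # Chain rule: multiply element-wise by the incoming gradient.
        return grad_next * self.output * (1.0 - self.output)
```

At `x = 0` the sigmoid's derivative is `0.25`, its maximum.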
Softmax
Bases: BaseActivation
Softmax activation layer.
Applies the softmax function to the input during forward propagation, and computes its derivative during backpropagation.
Attributes:

| Name | Type | Description |
|---|---|---|
| `output` | `ndarray` | Stores the output from the forward pass for use in the backward pass. |
| `layer_name` | `str` | Short name for the layer type. |
| `index` | `int` | Position of the layer within the full model (initializes at 0). |
Source code in src/nnfs/activations.py
forward(X_input)
Performs the forward pass using the softmax activation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `X_input` | `ndarray` | Input array to the layer. | *required* |

Returns:

| Type | Description |
|---|---|
| `ndarray` | Activated output after applying softmax. |
Source code in src/nnfs/activations.py
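A sketch of a numerically stable softmax forward pass consistent with this docstring (the max-subtraction trick is a common convention and an assumption about the collapsed source):

```python
import numpy as np

class Softmax:
    """Sketch of the documented softmax layer."""
    layer_name = "softmax"

    def forward(self, X_input: np.ndarray) -> np.ndarray:
        # Subtract the row-wise max before exponentiating; this shifts
        # inputs without changing the result and avoids overflow in exp.
        shifted = X_input - np.max(X_input, axis=-1, keepdims=True)
        exp = np.exp(shifted)
        # Normalize so each row sums to 1, and cache for backward.
        self.output = exp / np.sum(exp, axis=-1, keepdims=True)
        return self.output
```

Each output row is a probability distribution: non-negative entries summing to 1.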
backward(grad_next)
Performs the backward pass for the softmax layer.
Computes the gradient of the loss with respect to the input, reusing the cached output from the forward pass.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `grad_next` | `ndarray` | Gradient from the next layer. | *required* |

Returns:

| Type | Description |
|---|---|
| `ndarray` | Gradient of the loss with respect to this layer's input. |
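The softmax backward pass can be written as a Jacobian-vector product using only the cached output, per the docstring: for outputs `s` and incoming gradient `g`, `dL/dx_i = s_i * (g_i - sum_j g_j * s_j)`. A sketch under that assumption (the actual nnfs implementation may differ):

```python
import numpy as np

class Softmax:
    """Sketch of the documented softmax layer."""

    def forward(self, X_input: np.ndarray) -> np.ndarray:
        shifted = X_input - np.max(X_input, axis=-1, keepdims=True)
        exp = np.exp(shifted)
        self.output = exp / np.sum(exp, axis=-1, keepdims=True)
        return self.output

    def backward(self, grad_next: np.ndarray) -> np.ndarray:
        # Jacobian-vector product of softmax, reusing the cached output:
        # dL/dx_i = s_i * (g_i - sum_j g_j * s_j)
        dot = np.sum(grad_next * self.output, axis=-1, keepdims=True)
        return self.output * (grad_next - dot)
```

A quick sanity check: because softmax is invariant to adding a constant to its input, the returned gradient sums to zero along each row.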