Module training.torch_loss

Torch loss functions and regularizers useful for training networks using Torch Modules.

Functions overview

summed_exp_boundary_loss(data[, ...])

Compute the summed exponential error of boundary violations of an input.

Classes overview

ParameterBoundaryRegularizer(*args, **kwargs)

Class wrapper for the summed exponential error of boundary violations of an input.

Functions

training.torch_loss.summed_exp_boundary_loss(data, lower_bound=None, upper_bound=None)[source]

Compute the summed exponential error of boundary violations of an input.

\[
\begin{aligned}
\textrm{sebl}(y, y_{lower}, y_{upper}) &= \sum_i \textrm{sebl}(y_i, y_{lower}, y_{upper}) \\
\textrm{sebl}(y_i, y_{lower}, y_{upper}) &=
\begin{cases}
\exp(y_i - y_{upper}), & \text{if } y_i > y_{upper} \\
\exp(y_{lower} - y_i), & \text{if } y_i < y_{lower} \\
0, & \text{otherwise}
\end{cases}
\end{aligned}
\]

This function enables soft parameter constraints by producing a loss for boundary violations. To use it, add summed_exp_boundary_loss(data, lower_bound, upper_bound) to your overall loss, where data is an arbitrary tensor and both bounds are scalars. If either bound is given as None, that boundary is not penalized.
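
As an illustration, the sketch below is a minimal, self-contained reference implementation of the formula above, evaluated on a toy tensor. It is only a hedged sketch for checking the maths, not the library implementation; the helper name sebl_reference is purely illustrative.

import torch

def sebl_reference(data, lower_bound=None, upper_bound=None):
    # Reference sketch of the formula above (not the library implementation):
    # exponential penalty for every element outside [lower_bound, upper_bound]
    loss = torch.zeros(())
    if upper_bound is not None:
        over = data[data > upper_bound]
        loss = loss + torch.exp(over - upper_bound).sum()
    if lower_bound is not None:
        under = data[data < lower_bound]
        loss = loss + torch.exp(lower_bound - under).sum()
    return loss

# Only the value 0.2 violates the upper bound of 1e-1, so the result is exp(0.2 - 0.1) ≈ 1.105
values = torch.tensor([0.05, 0.2, 0.08])
print(sebl_reference(values, lower_bound=1e-2, upper_bound=1e-1))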

In the example below we introduce soft constraints on tau_mem of the first layer of the model, so that values with tau_mem > 1e-1 or tau_mem < 1e-3 are penalized and taken into account in the optimization step.

# Calculate the training loss
y_hat, _, _ = model(x)
train_loss = F.mse_loss(y, y_hat)

# Add a soft constraint on the time constants of the first layer of the model
boundary_loss = summed_exp_boundary_loss(model[0].tau_mem, 1e-3, 1e-1)
complete_loss = train_loss + boundary_loss

# Do backpropagation over both losses and optimize the model parameters accordingly
complete_loss.backward()
optimizer.step()

If we only want to introduce a lower-bound penalty on a parameter, we can do so by simply omitting upper_bound. The same works analogously for penalizing only upper bounds.

boundary_loss = summed_exp_boundary_loss(model[0].thr_up, lower_bound=1e-4)
complete_loss = train_loss + boundary_loss

# Do backpropagation over both losses and optimize the model parameters accordingly
complete_loss.backward()
optimizer.step()
Parameters
  • data (torch.Tensor) – The data whose boundary violations will be penalized, with shape (N,).

  • lower_bound (float, optional) – Lower bound for the data. If None, no lower-bound penalty is applied.

  • upper_bound (float, optional) – Upper bound for the data. If None, no upper-bound penalty is applied.

Returns

Summed exponential error of boundary violations.

Return type

torch.Tensor

Classes

class training.torch_loss.ParameterBoundaryRegularizer(*args, **kwargs)[source]

Class wrapper for the summed exponential error of boundary violations of an input. See summed_exp_boundary_loss() for more information. Allows the boundaries for a value to be defined just once, in a single object, and then reused on every call.
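
A short usage sketch, assuming the instance is called like any other torch Module and reusing the hypothetical model and train_loss names from the examples above:

# Define the bounds once and keep them in a single object
tau_mem_reg = ParameterBoundaryRegularizer(lower_bound=1e-3, upper_bound=1e-1)

# Calling the instance applies summed_exp_boundary_loss with the stored bounds
boundary_loss = tau_mem_reg(model[0].tau_mem)
complete_loss = train_loss + boundary_loss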

__init__(lower_bound=None, upper_bound=None)[source]

Initialise this module with the boundaries to enforce.

Parameters
  • lower_bound (float, optional) – Lower bound for the input. If None, no lower-bound penalty is applied.

  • upper_bound (float, optional) – Upper bound for the input. If None, no upper-bound penalty is applied.

forward(input)[source]

Defines the computation performed at every call. For this module, that is the summed exponential boundary loss of input with respect to the bounds given at construction.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance instead of calling this method directly, since the former takes care of running the registered hooks while the latter silently ignores them.
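
For example, with the hypothetical tau_mem_reg instance sketched above:

# Preferred: calling the instance runs any registered hooks around forward()
boundary_loss = tau_mem_reg(model[0].tau_mem)

# Calling forward() directly computes the same value but silently skips the hooks
boundary_loss = tau_mem_reg.forward(model[0].tau_mem)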