nn.modules.LIFExodus

class nn.modules.LIFExodus(*args, **kwargs)[source]

Bases: LIFBaseTorch

Attributes overview

T_destination

call_super_init

class_name

Class name of self

dump_patches

full_name

The full name of this module (class plus module name)

name

The name of this module, or an empty string if None

shape

The shape of this module

size

(DEPRECATED) The output size of this module

size_in

The input size of this module

size_out

The output size of this module

spiking_input

If True, this module receives spiking input.

spiking_output

If True, this module sends spiking output.

leak_mode

(str) The mode by which leaks are determined for this module.

n_synapses

(int) Number of input synapses per neuron

dt

(float) Euler simulator time-step in seconds

w_rec

(Tensor) Recurrent weights (Nout, Nin)

noise_std

(float) Noise std.dev.

tau_mem

(Tensor) Membrane time constants (Nout,) or ()

tau_syn

(Tensor) Synaptic time constants (Nin,) or ()

alpha

(Tensor) Membrane decay factor (Nout,) or ()

beta

(Tensor) Synaptic decay factor (Nin,) or ()

dash_mem

(Tensor) Membrane bitshift in Xylo (Nout,) or ()

dash_syn

(Tensor) Synaptic bitshift in Xylo (Nout,) or ()

bias

(Tensor) Neuron biases (Nout,) or ()

threshold

(Tensor) Firing threshold for each neuron (Nout,)

learning_window

(float) Learning window cutoff for surrogate gradient function

vmem

(Tensor) Membrane potentials (Nout,)

isyn

(Tensor) Synaptic currents (Nin,)

spikes

(Tensor) Spikes (Nin,)

spike_generation_fn

(Callable) Spike generation function with surrogate gradient

max_spikes_per_dt

(float) Maximum number of events that can be produced in each time-step

training

Methods overview

__init__(shape[, tau_mem, tau_syn, ...])

Instantiate an LIF module using the Exodus backend

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

as_graph()

Convert this module to a computational graph

attributes_named(name)

Search for attributes of this module or its submodules by name

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

evolve(input_data[, record])

Implement the Rockpool low-level evolution API

extra_repr()

Set the extra representation of the module

float()

Casts all floating point parameters and buffers to float datatype.

forward(data)

Forward method for processing data through this layer. Adds synaptic inputs to the synaptic states and mimics the Leaky Integrate-and-Fire dynamics

from_torch(obj[, retain_torch_api])

Convert a torch module into a Rockpool TorchModule in-place

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module's state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

ipu([device])

Moves all model parameters and buffers to the IPU.

json_to_param(jparam)

load(fn)

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

merge(a, b)

modules(*args, **kwargs)

Return a dictionary of all sub-modules of this module

named_buffers([prefix, recurse, ...])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse, ...])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

param_to_json(param)

parameters([family])

Return a nested dictionary of module and submodule Parameters

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook, *[, prepend, ...])

Registers a forward hook on the module.

register_forward_pre_hook(hook, *[, ...])

Registers a forward pre-hook on the module.

register_full_backward_hook(hook[, prepend])

Registers a backward hook on the module.

register_full_backward_pre_hook(hook[, prepend])

Registers a backward pre-hook on the module.

register_load_state_dict_post_hook(hook)

Registers a post hook to be run after module's load_state_dict is called.

register_module(name, module)

Alias for add_module().

register_parameter(name, param)

Adds a parameter to the module.

register_state_dict_pre_hook(hook)

These hooks will be called with arguments: self, prefix, and keep_vars before calling state_dict on self.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

reset_parameters()

Reset all parameters in this module

reset_state()

Reset the state of this module

save(fn)

set_attributes(new_attributes)

Set the attributes and sub-module attributes from a dictionary

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

simulation_parameters([family])

Return a nested dictionary of module and submodule SimulationParameters

state([family])

Return a nested dictionary of module and submodule States

state_dict(*args[, destination, prefix, ...])

Returns a dictionary containing references to the whole state of the module.

timed([output_num, dt, add_events])

Convert this module to a TimedModule

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

to_json()

to_torch([use_torch_call])

Convert the module to use the torch.nn.Module API

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

T_destination = ~T_destination
__init__(shape: tuple, tau_mem: float | ParameterBase = 0.02, tau_syn: float | ParameterBase = 0.05, threshold: float | ParameterBase = 1.0, learning_window: float | ParameterBase = 0.5, bias: float | ParameterBase = 0.0, has_rec: bool = False, noise_std: float | ParameterBase = 0.0, *args, **kwargs)[source]

Instantiate an LIF module using the Exodus backend

Uses the Exodus accelerated CUDA module to implement an LIF neuron. A CUDA device is required to instantiate this module.

Evolving this module outputs the neuron spike events; synaptic currents and membrane potentials are available by passing the record = True argument to evolve().

Warning

Exodus does not currently support training thresholds.

Exodus does not support noise injection.

Examples

Instantiate an LIF module with 2 neurons, with 2 synapses each (4 input channels).

>>> mod = LIFExodus((4, 2))

Specify the membrane and synapse time constants, as well as time-step dt.

>>> mod = LIFExodus((4, 2), tau_mem = 30e-3, tau_syn = 10e-3, dt = 10e-3)

Move the module and data to the same CUDA device, since CUDA is required to use this module.

>>> data = torch.ones((1, 10, 4))
>>> device = 'cuda:1'
>>> mod.to(device)
>>> data = data.to(device)
>>> output = mod(data)
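
Record the internal state during evolution; a minimal sketch, assuming the record dictionary keys mirror the state attribute names vmem and isyn.

>>> output, state, rec = mod(data, record = True)
>>> vmem = rec['vmem']    # membrane potential time series (assumed key)
>>> isyn = rec['isyn']    # synaptic current time series (assumed key)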
Parameters:
  • shape (tuple) – The shape of this module

  • tau_syn (float) – An optional array with concrete initialisation data for the synaptic time constants. If not provided, 50 ms will be used by default.

  • tau_mem (float) – An optional array with concrete initialisation data for the membrane time constants. If not provided, 20 ms will be used by default.

  • bias (float) – An optional array with concrete initialisation data for the neuron biases. If not provided, 0. will be used by default.

  • threshold (float) – An optional array specifying the firing threshold of each neuron. If not provided, 1. will be used by default.

  • learning_window (float) – Cutoff value for the surrogate gradient. Default: 0.5

  • dt (float) – Time step in seconds. Default: 1 ms.

_abc_impl = <_abc._abc_data object>
_apply(fn)
_auto_batch(data: Tensor, states: Tuple = (), target_shapes: Tuple | None = None) Tuple[Tensor, Tuple[Tensor]]

Automatically replicate states over batches and verify input dimensions

Usage:
>>> data, (state0, state1, state2) = self._auto_batch(data, (self.state0, self.state1, self.state2))

This will verify that data has the correct final dimension (i.e. self.size_in). If data has only two dimensions (T, Nin), then it will be augmented to (1, T, Nin). The individual states will be replicated out from shape (a, b, c, ...) to (n_batches, a, b, c, ...) and returned.

Parameters:
  • data (torch.Tensor) – Input data tensor. Either (batches, T, Nin) or (T, Nin)

  • states (Tuple) – Tuple of state variables. Each will be replicated out over batches by prepending a batch dimension

Returns:

(torch.Tensor, Tuple[torch.Tensor]) data, states

_backward_hooks: Dict[int, Callable]
_backward_pre_hooks: Dict[int, Callable]
_buffers: Dict[str, Tensor | None]
_call_impl(*args, **kwargs)
_force_set_attributes

(bool) If True, do not sanity-check attributes when setting.

_forward_hooks: Dict[int, Callable]
_forward_hooks_with_kwargs: Dict[int, bool]
_forward_pre_hooks: Dict[int, Callable]
_forward_pre_hooks_with_kwargs: Dict[int, bool]
_get_all_leak_params()

Calculate and return all decay parameters, depending on leak mode

_get_attribute_family(type_name: str, family: Tuple | List | str | None = None) dict

Search for attributes of this module and submodules that match a given family

This method can be used to conveniently get all weights for a network; or all time constants; or any other family of parameters. Parameter families are defined simply by a string: "weights" for weights; "taus" for time constants, etc. These strings are arbitrary, but if you follow the conventions then future developers will thank you (that includes you in six months' time).

Parameters:
  • type_name (str) – The class of parameters to search for. Must be one of ["Parameter", "SimulationParameter", "State"] or another future subclass of ParameterBase

  • family (Union[str, Tuple[str]]) – A string or list or tuple of strings, that define one or more attribute families to search for

Returns:

A nested dictionary of attributes that match the provided type_name and family

Return type:

dict

_get_attribute_registry() Tuple[Dict, Dict]

Return or initialise the attribute registry for this module

Returns:

registered_attributes, registered_modules

Return type:

(tuple)

_get_backward_hooks()

Returns the backward hooks for use in the call function. It returns two lists, one with the full backward hooks and one with the non-full backward hooks.

_get_backward_pre_hooks()
_get_name()
_has_registered_attribute(name: str) bool

Check if the module has a registered attribute

Parameters:

name (str) – The name of the attribute to check

Returns:

True if the attribute name is in the attribute registry, False otherwise.

Return type:

bool

_in_Module_init

(bool) If exists and True, indicates that the module is in the __init__ chain.

_is_full_backward_hook: bool | None
_load_from_state_dict(state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs)

Copies parameters and buffers from state_dict into only this module, but not its descendants. This is called on every submodule in load_state_dict(). Metadata saved for this module in input state_dict is provided as local_metadata. For state dicts without metadata, local_metadata is empty. Subclasses can achieve class-specific backward compatible loading using the version number at local_metadata.get("version", None).

Note

state_dict is not the same object as the input state_dict to load_state_dict(). So it can be modified.

Parameters:
  • state_dict (dict) – a dict containing parameters and persistent buffers.

  • prefix (str) – the prefix for parameters and buffers used in this module

  • local_metadata (dict) – a dict containing the metadata for this module. See

  • strict (bool) – whether to strictly enforce that the keys in state_dict with prefix match the names of parameters and buffers in this module

  • missing_keys (list of str) – if strict=True, add missing keys to this list

  • unexpected_keys (list of str) – if strict=True, add unexpected keys to this list

  • error_msgs (list of str) – error messages should be added to this list, and will be reported together in load_state_dict()

_load_state_dict_post_hooks: Dict[int, Callable]
_load_state_dict_pre_hooks: Dict[int, Callable]
_maybe_warn_non_full_backward_hook(inputs, result, grad_fn)
_modules: Dict[str, 'Module' | None]
_name: str | None

Name of this module, if assigned

_named_members(get_members_fn, prefix='', recurse=True, remove_duplicate: bool = True)

Helper method for yielding various names + members of modules.

_non_persistent_buffers_set: Set[str]
_parameters: Dict[str, Parameter | None]
_register_attribute(name: str, val: ParameterBase)

Record an attribute in the attribute registry

Parameters:
  • name (str) – The name of the attribute to register

  • val (ParameterBase) – The ParameterBase subclass object to register. e.g. Parameter, SimulationParameter or State.

_register_load_state_dict_pre_hook(hook, with_module=False)

These hooks will be called with arguments: state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs, before loading state_dict into self. These arguments are exactly the same as those of _load_from_state_dict.

If with_module is True, then the first argument to the hook is an instance of the module.

Parameters:
  • hook (Callable) – Callable hook that will be invoked before loading the state dict.

  • with_module (bool, optional) – Whether or not to pass the module instance to the hook as the first parameter.

_register_module(name: str, mod)

Add a submodule to the module registry

Parameters:
  • name (str) – The name of the submodule, extracted from the assigned attribute name

  • mod (TorchModule) – The submodule to register

Raises:

ValueError – If the assigned submodule is not a TorchModule

_register_state_dict_hook(hook)

These hooks will be called with arguments: self, state_dict, prefix, local_metadata, after the state_dict of self is set. Note that only parameters and buffers of self or its children are guaranteed to exist in state_dict. The hooks may modify state_dict inplace or return a new one.

_replicate_for_data_parallel()
_reset_attribute(name: str) ModuleBase

Reset an attribute to its initialisation value

Parameters:

name (str) – The name of the attribute to reset

Returns:

For compatibility with the functional API

Return type:

self (Module)

_save_to_state_dict(destination, prefix, keep_vars)

Saves module state to destination dictionary, containing a state of the module, but not its descendants. This is called on every submodule in state_dict().

In rare cases, subclasses can achieve class-specific behavior by overriding this method with custom logic.

Parameters:
  • destination (dict) – a dict where state will be stored

  • prefix (str) – the prefix for parameters and buffers used in this module

_set_leak_param(name, value)

Set the value of a named decay parameter, depending on leak mode

_shape

The shape of this module

_slow_forward(*input, **kwargs)
_spiking_input: bool

Whether this module receives spiking input

_spiking_output: bool

Whether this module produces spiking output

_state_dict_hooks: Dict[int, Callable]
_state_dict_pre_hooks: Dict[int, Callable]
_submodulenames: List[str]

Registry of sub-module names

_version: int = 1

This allows better BC support for load_state_dict(). In state_dict(), the version number will be saved in the attribute _metadata of the returned state dict, and thus pickled. _metadata is a dictionary with keys that follow the naming convention of state dict. See _load_from_state_dict on how to use this information in loading.

If new parameters/buffers are added/removed from a module, this number shall be bumped, and the module’s _load_from_state_dict method can compare the version number and do appropriate changes if the state dict is from before the change.

_wrap_recorded_state(recorded_dict: dict, t_start: float) Dict[str, TimeSeries]

Convert a recorded dictionary to a TimeSeries representation

This method is optional, and is provided to make the timed() conversion to a TimedModule work better. You should override this method in your custom Module, to wrap each element of your recorded state dictionary as a TimeSeries

Parameters:
  • recorded_dict (dict) – A recorded state dictionary as returned by evolve()

  • t_start (float) – The initial time of the recorded state, to use as the starting point of the time series

Returns:

The mapped recorded state dictionary, wrapped as TimeSeries objects

Return type:

Dict[str, TimeSeries]

add_module(name: str, module: Module | None) None

Adds a child module to the current module.

The module can be accessed as an attribute using the given name.

Parameters:
  • name (str) – name of the child module. The child module can be accessed from this module using the given name

  • module (Module) – child module to be added to the module.

alpha: P_tensor

(Tensor) Membrane decay factor (Nout,) or ()

apply(fn: Callable[[Module], None]) T

Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also nn-init-doc).

Parameters:

fn (Module -> None) – function to be applied to each submodule

Returns:

self

Return type:

Module

Example:

>>> @torch.no_grad()
>>> def init_weights(m):
>>>     print(m)
>>>     if type(m) == nn.Linear:
>>>         m.weight.fill_(1.0)
>>>         print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
as_graph() GraphModuleBase

Convert this module to a computational graph

Returns:

The computational graph corresponding to this module

Return type:

GraphModuleBase

Raises:

NotImplementedError – If as_graph() is not implemented for this subclass
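
A minimal usage sketch, assuming graph conversion is supported for this module:

>>> mod = LIFExodus((4, 2))
>>> graph = mod.as_graph()    # computational graph describing this module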

attributes_named(name: Tuple[str] | List[str] | str) dict

Search for attributes of this module or its submodules by name

Parameters:

name (Union[str, Tuple[str]]) – The name of the attribute to search for

Returns:

A nested dictionary of attributes that match name

Return type:

dict
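
A brief sketch of searching by attribute name, assuming an LIFExodus module as in the constructor examples:

>>> mod = LIFExodus((4, 2))
>>> mod.attributes_named('tau_mem')
dict{ ... }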

beta: P_tensor

(Tensor) Synaptic decay factor (Nin,) or ()

bfloat16() T

Casts all floating point parameters and buffers to bfloat16 datatype.

Note

This method modifies the module in-place.

Returns:

self

Return type:

Module

bias: P_tensor

(Tensor) Neuron biases (Nout,) or ()

buffers(recurse: bool = True) Iterator[Tensor]

Returns an iterator over module buffers.

Parameters:

recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.

Yields:

torch.Tensor – module buffer

Example:

>>> # xdoctest: +SKIP("undefined vars")
>>> for buf in model.buffers():
>>>     print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
call_super_init: bool = False
children() Iterator[Module]

Returns an iterator over immediate children modules.

Yields:

Module – a child module

property class_name: str

Class name of self

Type:

str

cpu() T

Moves all model parameters and buffers to the CPU.

Note

This method modifies the module in-place.

Returns:

self

Return type:

Module

cuda(device: int | device | None = None) T

Moves all model parameters and buffers to the GPU.

This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized.

Note

This method modifies the module in-place.

Parameters:

device (int, optional) – if specified, all parameters will be copied to that device

Returns:

self

Return type:

Module

dash_mem: P_tensor

(Tensor) Membrane bitshift in Xylo (Nout,) or ()

dash_syn: P_tensor

(Tensor) Synaptic bitshift in Xylo (Nout,) or ()

double() T

Casts all floating point parameters and buffers to double datatype.

Note

This method modifies the module in-place.

Returns:

self

Return type:

Module

dt: P_float

(float) Euler simulator time-step in seconds

dump_patches: bool = False
eval() T

Sets the module in evaluation mode.

This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

This is equivalent with self.train(False).

See locally-disable-grad-doc for a comparison between eval() and several similar mechanisms that may be confused with it.

Returns:

self

Return type:

Module

evolve(input_data: Tensor, record: bool = False) Tuple[Any, Any, Any]

Implement the Rockpool low-level evolution API

evolve() is provided by TorchModule to connect the Rockpool low-level API to the Torch API (i.e. forward() etc.). You should not override evolve() if using TorchModule directly, but should implement the Torch API to perform evaluation of the module.

evolve() will automatically set the _record flag according to the input argument to evolve(). You can use this within your forward() method, and should build a dictionary _record_dict. This will be returned automatically from evolve(), if requested.

Parameters:
  • input_data – This might be a numpy array or Torch tensor, containing the input data to evolve over

  • record (bool) – Iff True, return a dictionary of state variables as record_dict, containing the time series of those state variables over evolution. Default: False, do not record state during evolution

Returns:

(output_data, new_states, record_dict)

output_data is the output from the TorchModule, probably as a torch Tensor. new_states is a dictionary containing the updated state for this module, post evolution. If the record argument is True, record_dict is a dictionary containing the recorded state variables for this and all submodules, recorded over evolution.

Return type:

(array, dict, dict)
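
A hedged sketch of the low-level evolution API, assuming mod and data are already on the same CUDA device as in the constructor example:

>>> output, new_state, record_dict = mod.evolve(data, record = True)
>>> mod = mod.set_attributes(new_state)    # carry the evolved state forward (functional API)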

extra_repr() str

Set the extra representation of the module

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

float() T

Casts all floating point parameters and buffers to float datatype.

Note

This method modifies the module in-place.

Returns:

self

Return type:

Module

forward(data: Tensor) Tensor[source]

Forward method for processing data through this layer. Adds synaptic inputs to the synaptic states and mimics the Leaky Integrate-and-Fire dynamics

Parameters:

data (torch.Tensor) – Data takes the shape of (batch, time_steps, n_synapses)

Returns:

Output spikes with the shape (batch, time_steps, n_neurons)

Return type:

torch.Tensor
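
A shape-level sketch, assuming a CUDA device is available (required for this module):

>>> mod = LIFExodus((4, 2)).to('cuda')
>>> data = torch.rand(1, 100, 4, device = 'cuda')
>>> mod.forward(data).shape    # output spikes: (batch, time_steps, n_neurons)
torch.Size([1, 100, 2])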

classmethod from_torch(obj: Module, retain_torch_api: bool = False) None

Convert a torch module into a Rockpool TorchModule in-place

Parameters:
  • obj (torch.nn.Module) – Torch module to convert to a Rockpool

  • retain_torch_api (bool) – If True, calling the resulting module will use the Torch API. Default: False, convert the module to the Rockpool low-level API for __call__().
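
A brief sketch of converting a native torch module in-place, assuming TorchModule can be imported from rockpool.nn.modules and using a hypothetical linear layer for illustration:

>>> import torch
>>> from rockpool.nn.modules import TorchModule
>>> lin = torch.nn.Linear(2, 3)
>>> TorchModule.from_torch(lin)                      # convert in-place; Rockpool API used for __call__()
>>> out, state, rec = lin(torch.ones(1, 10, 2))      # low-level call now returns (output, state, record)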

property full_name: str

The full name of this module (class plus module name)

Type:

str

get_buffer(target: str) Tensor

Returns the buffer given by target if it exists, otherwise throws an error.

See the docstring for get_submodule for a more detailed explanation of this method’s functionality as well as how to correctly specify target.

Parameters:

target – The fully-qualified string name of the buffer to look for. (See get_submodule for how to specify a fully-qualified string.)

Returns:

The buffer referenced by target

Return type:

torch.Tensor

Raises:

AttributeError – If the target string references an invalid path or resolves to something that is not a buffer

get_extra_state() Any

Returns any extra state to include in the module’s state_dict. Implement this and a corresponding set_extra_state() for your module if you need to store extra state. This function is called when building the module’s state_dict().

Note that extra state should be picklable to ensure working serialization of the state_dict. We only provide backwards compatibility guarantees for serializing Tensors; other objects may break backwards compatibility if their serialized pickled form changes.

Returns:

Any extra state to store in the module’s state_dict

Return type:

object

get_parameter(target: str) Parameter

Returns the parameter given by target if it exists, otherwise throws an error.

See the docstring for get_submodule for a more detailed explanation of this method’s functionality as well as how to correctly specify target.

Parameters:

target – The fully-qualified string name of the Parameter to look for. (See get_submodule for how to specify a fully-qualified string.)

Returns:

The Parameter referenced by target

Return type:

torch.nn.Parameter

Raises:

AttributeError – If the target string references an invalid path or resolves to something that is not an nn.Parameter

get_submodule(target: str) Module

Returns the submodule given by target if it exists, otherwise throws an error.

For example, let’s say you have an nn.Module A that looks like this:

A(
    (net_b): Module(
        (net_c): Module(
            (conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))
        )
        (linear): Linear(in_features=100, out_features=200, bias=True)
    )
)

(The diagram shows an nn.Module A. A has a nested submodule net_b, which itself has two submodules net_c and linear. net_c then has a submodule conv.)

To check whether or not we have the linear submodule, we would call get_submodule("net_b.linear"). To check whether we have the conv submodule, we would call get_submodule("net_b.net_c.conv").

The runtime of get_submodule is bounded by the degree of module nesting in target. A query against named_modules achieves the same result, but it is O(N) in the number of transitive modules. So, for a simple check to see if some submodule exists, get_submodule should always be used.

Parameters:

target – The fully-qualified string name of the submodule to look for. (See above example for how to specify a fully-qualified string.)

Returns:

The submodule referenced by target

Return type:

torch.nn.Module

Raises:

AttributeError – If the target string references an invalid path or resolves to something that is not an nn.Module

half() T

Casts all floating point parameters and buffers to half datatype.

Note

This method modifies the module in-place.

Returns:

self

Return type:

Module

ipu(device: int | device | None = None) T

Moves all model parameters and buffers to the IPU.

This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on IPU while being optimized.

Note

This method modifies the module in-place.

Parameters:

device (int, optional) – if specified, all parameters will be copied to that device

Returns:

self

Return type:

Module

isyn: P_tensor

(Tensor) Synaptic currents (Nin,)

json_to_param(jparam)
leak_mode

(str) The mode by which leaks are determined for this module.

learning_window: P_tensor

(float) Learning window cutoff for surrogate gradient function

load(fn)
load_state_dict(state_dict: Mapping[str, Any], strict: bool = True)

Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function.

Parameters:
  • state_dict (dict) – a dict containing parameters and persistent buffers.

  • strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict() function. Default: True

Returns:

  • missing_keys is a list of str containing the missing keys

  • unexpected_keys is a list of str containing the unexpected keys

Return type:

NamedTuple with missing_keys and unexpected_keys fields

Note

If a parameter or buffer is registered as None and its corresponding key exists in state_dict, load_state_dict() will raise a RuntimeError.

max_spikes_per_dt: P_float

(float) Maximum number of events that can be produced in each time-step

merge(a, b)
modules(*args, **kwargs)

Return a dictionary of all sub-modules of this module

Returns:

A dictionary containing all sub-modules. Each item will be named with the sub-module name.

Return type:

dict
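
A minimal sketch; for a leaf module such as LIFExodus the returned dictionary may well be empty:

>>> submods = mod.modules()    # dictionary keyed by sub-module name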

n_synapses: P_int

(int) Number of input synapses per neuron

property name: str

The name of this module, or an empty string if None

Type:

str

named_buffers(prefix: str = '', recurse: bool = True, remove_duplicate: bool = True) Iterator[Tuple[str, Tensor]]

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

Parameters:
  • prefix (str) – prefix to prepend to all buffer names.

  • recurse (bool, optional) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Defaults to True.

  • remove_duplicate (bool, optional) – whether to remove the duplicated buffers in the result. Defaults to True.

Yields:

(str, torch.Tensor) – Tuple containing the name and buffer

Example:

>>> # xdoctest: +SKIP("undefined vars")
>>> for name, buf in self.named_buffers():
>>>     if name in ['running_var']:
>>>         print(buf.size())
named_children() Iterator[Tuple[str, Module]]

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

Yields:

(str, Module) – Tuple containing a name and child module

Example:

>>> # xdoctest: +SKIP("undefined vars")
>>> for name, module in model.named_children():
>>>     if name in ['conv4', 'conv5']:
>>>         print(module)
named_modules(memo: Set[Module] | None = None, prefix: str = '', remove_duplicate: bool = True)

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

Parameters:
  • memo – a memo to store the set of modules already added to the result

  • prefix – a prefix that will be added to the name of the module

  • remove_duplicate – whether to remove the duplicated module instances in the result or not

Yields:

(str, Module) – Tuple of name and module

Note

Duplicate modules are returned only once. In the following example, l will be returned only once.

Example:

>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
...     print(idx, '->', m)

0 -> ('', Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
named_parameters(prefix: str = '', recurse: bool = True, remove_duplicate: bool = True) Iterator[Tuple[str, Parameter]]

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

Parameters:
  • prefix (str) – prefix to prepend to all parameter names.

  • recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

  • remove_duplicate (bool, optional) – whether to remove the duplicated parameters in the result. Defaults to True.

Yields:

(str, Parameter) – Tuple containing the name and parameter

Example:

>>> # xdoctest: +SKIP("undefined vars")
>>> for name, param in self.named_parameters():
>>>     if name in ['bias']:
>>>         print(param.size())
noise_std: P_float

(float) Noise std.dev. injected onto the membrane of each neuron during evolution

param_to_json(param)
parameters(family: Tuple | List | str | None = None) Dict

Return a nested dictionary of module and submodule Parameters

Use this method to inspect the Parameters from this and all submodules. The optional argument family allows you to search for Parameters in a particular family — for example "weights" for all weights of this module and nested submodules.

Although the family argument is an arbitrary string, reasonable choices are "weights", "taus" for time constants, "biases" for biases…

Examples

Obtain a dictionary of all Parameters for this module (including submodules):

>>> mod.parameters()
dict{ ... }

Obtain a dictionary of Parameters from a particular family:

>>> mod.parameters("weights")
dict{ ... }
Parameters:

family (str) – The family of Parameters to search for. Default: None; return all parameters.

Returns:

A nested dictionary of Parameters of this module and all submodules

Return type:

dict

register_backward_hook(hook: Callable[[Module, Tuple[Tensor, ...] | Tensor, Tuple[Tensor, ...] | Tensor], None | Tuple[Tensor, ...] | Tensor]) RemovableHandle

Registers a backward hook on the module.

This function is deprecated in favor of register_full_backward_hook() and the behavior of this function will change in future versions.

Returns:

a handle that can be used to remove the added hook by calling handle.remove()

Return type:

torch.utils.hooks.RemovableHandle

register_buffer(name: str, tensor: Tensor, persistent: bool = True, *args, **kwargs) None

Adds a buffer to the module.

This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the module’s state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module’s state_dict.

Buffers can be accessed as attributes using given names.

Parameters:
  • name (str) – name of the buffer. The buffer can be accessed from this module using the given name

  • tensor (Tensor or None) – buffer to be registered. If None, then operations that run on buffers, such as cuda, are ignored. If None, the buffer is not included in the module’s state_dict.

  • persistent (bool) – whether the buffer is part of this module’s state_dict.

Example:

>>> # xdoctest: +SKIP("undefined vars")
>>> self.register_buffer('running_mean', torch.zeros(num_features))
register_forward_hook(hook: Callable[[T, Tuple[Any, ...], Any], Any | None] | Callable[[T, Tuple[Any, ...], Dict[str, Any], Any], Any | None], *, prepend: bool = False, with_kwargs: bool = False) RemovableHandle

Registers a forward hook on the module.

The hook will be called every time after forward() has computed an output.

If with_kwargs is False or not specified, the input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks and only to the forward. The hook can modify the output. It can modify the input inplace but it will not have effect on forward since this is called after forward() is called. The hook should have the following signature:

hook(module, args, output) -> None or modified output

If with_kwargs is True, the forward hook will be passed the kwargs given to the forward function and be expected to return the output possibly modified. The hook should have the following signature:

hook(module, args, kwargs, output) -> None or modified output
Parameters:
  • hook (Callable) – The user defined hook to be registered.

  • prepend (bool) – If True, the provided hook will be fired before all existing forward hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing forward hooks on this torch.nn.modules.Module. Note that global forward hooks registered with register_module_forward_hook() will fire before all hooks registered by this method. Default: False

  • with_kwargs (bool) – If True, the hook will be passed the kwargs given to the forward function. Default: False

Returns:

a handle that can be used to remove the added hook by calling handle.remove()

Return type:

torch.utils.hooks.RemovableHandle

register_forward_pre_hook(hook: Callable[[T, Tuple[Any, ...]], Any | None] | Callable[[T, Tuple[Any, ...], Dict[str, Any]], Tuple[Any, Dict[str, Any]] | None], *, prepend: bool = False, with_kwargs: bool = False) RemovableHandle

Registers a forward pre-hook on the module.

The hook will be called every time before forward() is invoked.

If with_kwargs is false or not specified, the input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks and only to the forward. The hook can modify the input. User can either return a tuple or a single modified value in the hook. We will wrap the value into a tuple if a single value is returned (unless that value is already a tuple). The hook should have the following signature:

hook(module, args) -> None or modified input

If with_kwargs is true, the forward pre-hook will be passed the kwargs given to the forward function. And if the hook modifies the input, both the args and kwargs should be returned. The hook should have the following signature:

hook(module, args, kwargs) -> None or a tuple of modified input and kwargs
Parameters:
  • hook (Callable) – The user defined hook to be registered.

  • prepend (bool) – If true, the provided hook will be fired before all existing forward_pre hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing forward_pre hooks on this torch.nn.modules.Module. Note that global forward_pre hooks registered with register_module_forward_pre_hook() will fire before all hooks registered by this method. Default: False

  • with_kwargs (bool) – If true, the hook will be passed the kwargs given to the forward function. Default: False

Returns:

a handle that can be used to remove the added hook by calling handle.remove()

Return type:

torch.utils.hooks.RemovableHandle

register_full_backward_hook(hook: Callable[[Module, Tuple[Tensor, ...] | Tensor, Tuple[Tensor, ...] | Tensor], None | Tuple[Tensor, ...] | Tensor], prepend: bool = False) RemovableHandle

Registers a backward hook on the module.

The hook will be called every time the gradients with respect to a module are computed, i.e. the hook will execute if and only if the gradients with respect to module outputs are computed. The hook should have the following signature:

hook(module, grad_input, grad_output) -> tuple(Tensor) or None

The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments.

For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module’s forward function.

Warning

Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error.

Parameters:
  • hook (Callable) – The user-defined hook to be registered.

  • prepend (bool) – If true, the provided hook will be fired before all existing backward hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing backward hooks on this torch.nn.modules.Module. Note that global backward hooks registered with register_module_full_backward_hook() will fire before all hooks registered by this method.

Returns:

a handle that can be used to remove the added hook by calling handle.remove()

Return type:

torch.utils.hooks.RemovableHandle

register_full_backward_pre_hook(hook: Callable[[Module, Tuple[Tensor, ...] | Tensor], None | Tuple[Tensor, ...] | Tensor], prepend: bool = False) RemovableHandle

Registers a backward pre-hook on the module.

The hook will be called every time the gradients for the module are computed. The hook should have the following signature:

hook(module, grad_output) -> Tensor or None

The grad_output is a tuple. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the output that will be used in place of grad_output in subsequent computations. Entries in grad_output will be None for all non-Tensor arguments.

For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module’s forward function.

Warning

Modifying inputs inplace is not allowed when using backward hooks and will raise an error.

Parameters:
  • hook (Callable) – The user-defined hook to be registered.

  • prepend (bool) – If true, the provided hook will be fired before all existing backward_pre hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing backward_pre hooks on this torch.nn.modules.Module. Note that global backward_pre hooks registered with register_module_full_backward_pre_hook() will fire before all hooks registered by this method.

Returns:

a handle that can be used to remove the added hook by calling handle.remove()

Return type:

torch.utils.hooks.RemovableHandle

register_load_state_dict_post_hook(hook)

Registers a post hook to be run after module’s load_state_dict is called.

It should have the following signature:

hook(module, incompatible_keys) -> None

The module argument is the current module that this hook is registered on, and the incompatible_keys argument is a NamedTuple consisting of attributes missing_keys and unexpected_keys. missing_keys is a list of str containing the missing keys and unexpected_keys is a list of str containing the unexpected keys.

The given incompatible_keys can be modified inplace if needed.

Note that the checks performed when calling load_state_dict() with strict=True are affected by modifications the hook makes to missing_keys or unexpected_keys, as expected. Additions to either set of keys will result in an error being thrown when strict=True, and clearing out both missing and unexpected keys will avoid an error.

Returns:

a handle that can be used to remove the added hook by calling handle.remove()

Return type:

torch.utils.hooks.RemovableHandle

register_module(name: str, module: Module | None) None

Alias for add_module().

register_parameter(name: str, param: Parameter) None

Adds a parameter to the module.

The parameter can be accessed as an attribute using given name.

Parameters:
  • name (str) – name of the parameter. The parameter can be accessed from this module using the given name

  • param (Parameter or None) – parameter to be added to the module. If None, then operations that run on parameters, such as cuda, are ignored. If None, the parameter is not included in the module’s state_dict.

register_state_dict_pre_hook(hook)

These hooks will be called with arguments: self, prefix, and keep_vars before calling state_dict on self. The registered hooks can be used to perform pre-processing before the state_dict call is made.

requires_grad_(requires_grad: bool = True) T

Change if autograd should record operations on parameters in this module.

This method sets the parameters’ requires_grad attributes in-place.

This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training).

See locally-disable-grad-doc for a comparison between requires_grad_() and several similar mechanisms that may be confused with it.

Parameters:

requires_grad (bool) – whether autograd should record operations on parameters in this module. Default: True.

Returns:

self

Return type:

Module

reset_parameters()

Reset all parameters in this module

Returns:

The updated module is returned for compatibility with the functional API

Return type:

Module

reset_state() ModuleBase

Reset the state of this module

Returns:

The updated module is returned for compatibility with the functional API

Return type:

Module
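
A brief functional-API sketch combining both reset methods:

>>> mod = mod.reset_state()         # zero the internal state (vmem, isyn, ...)
>>> mod = mod.reset_parameters()    # re-initialise all parameters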

save(fn)
set_attributes(new_attributes: dict) ModuleBase

Set the attributes and sub-module attributes from a dictionary

This method can be used with the dictionary returned from module evolution to set the new state of the module. It can also be used to set multiple parameters of a module and submodules.

Examples

Use the functional API to evolve, obtain new states, and set those states:

>>> _, new_state, _ = mod(input)
>>> mod = mod.set_attributes(new_state)

Obtain a parameter dictionary, modify it, then set the parameters back:

>>> params = mod.parameters()
>>> params['w_input'] *= 0.
>>> mod.set_attributes(params)
Parameters:

new_attributes (dict) – A nested dictionary containing parameters of this module and sub-modules.

set_extra_state(state: Any)

This function is called from load_state_dict() to handle any extra state found within the state_dict. Implement this function and a corresponding get_extra_state() for your module if you need to store extra state within its state_dict.

Parameters:

state (dict) – Extra state from the state_dict

property shape: tuple

The shape of this module

Type:

tuple

share_memory() T

See torch.Tensor.share_memory_()

simulation_parameters(family: Tuple | List | str | None = None) Dict

Return a nested dictionary of module and submodule SimulationParameters

Use this method to inspect the SimulationParameters from this and all submodules. The optional argument family allows you to search for SimulationParameters in a particular family.

Examples

Obtain a dictionary of all SimulationParameters for this module (including submodules):

>>> mod.simulation_parameters()
dict{ ... }
Parameters:

family (str) – The family of SimulationParameters to search for. Default: None; return all SimulationParameter attributes.

Returns:

A nested dictionary of SimulationParameters of this module and all submodules

Return type:

dict

property size: int

(DEPRECATED) The output size of this module

Type:

int

property size_in: int

The input size of this module

Type:

int

property size_out: int

The output size of this module

Type:

int

spike_generation_fn: P_Callable

(Callable) Spike generation function with surrogate gradient

spikes: P_tensor

(Tensor) Spikes (Nin,)

property spiking_input: bool

If True, this module receives spiking input. If False, this module expects continuous input.

Type:

bool

property spiking_output

If True, this module sends spiking output. If False, this module sends continuous output.

Type:

bool

state(family: Tuple | List | str | None = None) Dict

Return a nested dictionary of module and submodule States

Use this method to inspect the States from this and all submodules. The optional argument family allows you to search for States in a particular family.

Examples

Obtain a dictionary of all States for this module (including submodules):

>>> mod.state()
dict{ ... }
Parameters:

family (str) – The family of States to search for. Default: None; return all State attributes.

Returns:

A nested dictionary of States of this module and all submodules

Return type:

dict

state_dict(*args, destination=None, prefix='', keep_vars=False)

Returns a dictionary containing references to the whole state of the module.

Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to None are not included.

Note

The returned object is a shallow copy. It contains references to the module’s parameters and buffers.

Warning

Currently state_dict() also accepts positional arguments for destination, prefix and keep_vars in order. However, this is being deprecated and keyword arguments will be enforced in future releases.

Warning

Please avoid the use of argument destination as it is not designed for end-users.

Parameters:
  • destination (dict, optional) – If provided, the state of module will be updated into the dict and the same object is returned. Otherwise, an OrderedDict will be created and returned. Default: None.

  • prefix (str, optional) – a prefix added to parameter and buffer names to compose the keys in state_dict. Default: ''.

  • keep_vars (bool, optional) – by default the Tensor s returned in the state dict are detached from autograd. If it’s set to True, detaching will not be performed. Default: False.

Returns:

a dictionary containing a whole state of the module

Return type:

dict

Example:

>>> # xdoctest: +SKIP("undefined vars")
>>> module.state_dict().keys()
['bias', 'weight']
tau_mem: P_tensor

(Tensor) Membrane time constants (Nout,) or ()

tau_syn: P_tensor

(Tensor) Synaptic time constants (Nin,) or ()

threshold: P_tensor

(Tensor) Firing threshold for each neuron (Nout,)

timed(output_num: int = 0, dt: float | None = None, add_events: bool = False)

Convert this module to a TimedModule

Parameters:
  • output_num (int) – Specify which output of the module to take, if the module returns multiple output series. Default: 0, take the first (or only) output.

  • dt (float) – Used to provide a time-step for this module, if the module does not already have one. If self already defines a time-step, then self.dt will be used. Default: None

  • add_events (bool) – Iff True, the TimedModule will add events occurring on a single timestep on input and output. Default: False, don’t add time steps.

Returns:

A timed module that wraps this module

Return type:

TimedModule
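
A minimal sketch; dt is taken from the module itself, since LIFExodus already defines one:

>>> tmod = mod.timed()    # TimeSeries-based wrapper around this module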

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

This can be called as

to(device=None, dtype=None, non_blocking=False)
to(dtype, non_blocking=False)
to(tensor, non_blocking=False)
to(memory_format=torch.channels_last)

Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.

See below for examples.

Note

This method modifies the module in-place.

Parameters:
  • device (torch.device) – the desired device of the parameters and buffers in this module

  • dtype (torch.dtype) – the desired floating point or complex dtype of the parameters and buffers in this module

  • tensor (torch.Tensor) – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module

  • memory_format (torch.memory_format) – the desired memory format for 4D parameters and buffers in this module (keyword only argument)

Returns:

self

Return type:

Module

Examples:

>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA1)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)

>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
to_empty(*, device: str | device) T

Moves the parameters and buffers to the specified device without copying storage.

Parameters:

device (torch.device) – The desired device of the parameters and buffers in this module.

Returns:

self

Return type:

Module

to_json()
to_torch(use_torch_call: bool = True)

Convert the module to use the torch.nn.Module API

This method exposes the torch API for .__call__(), .parameters() and .__repr__() methods, recursively. By default, .__call__() is only replaced on the top-level module. This is to ensure that the nested .forward() methods do not break.

Parameters:

use_torch_call (bool) – Use the torch-type __call__() method for this object

Returns:

The converted object
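
A minimal sketch; after conversion, calling the module is assumed to return the output tensor directly rather than the Rockpool (output, state, record) tuple:

>>> tmod = mod.to_torch()
>>> out = tmod(data)    # torch-style call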

train(mode: bool = True) T

Sets the module in training mode.

This has any effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

Parameters:

mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.

Returns:

self

Return type:

Module

training: bool
type(dst_type: dtype | str) T

Casts all parameters and buffers to dst_type.

Note

This method modifies the module in-place.

Parameters:

dst_type (type or string) – the desired type

Returns:

self

Return type:

Module

vmem: P_tensor

(Tensor) Membrane potentials (Nout,)

w_rec: P_ndarray

(Tensor) Recurrent weights (Nout, Nin)

xpu(device: int | device | None = None) T

Moves all model parameters and buffers to the XPU.

This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on XPU while being optimized.

Note

This method modifies the module in-place.

Parameters:

device (int, optional) – if specified, all parameters will be copied to that device

Returns:

self

Return type:

Module

zero_grad(set_to_none: bool = True) None

Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context.

Parameters:

set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details.