devices.dynapse.DynapSim

class devices.dynapse.DynapSim(*args, **kwargs)[source]

Bases: JaxModule

DynapSim solves the dynamical chip equations for the DPI neuron and synapse models. It receives its configuration as bias currents and solves the membrane and synapse dynamics using the jax backend. One block has

  • 1 synapse receiving spikes from the other circuits

  • 1 recurrent synapse for spike frequency adaptation (AHP)

  • 1 membrane evaluating the state and deciding fire or not

For all the synapses, the DPI Synapse update equations below are solved in parallel.

DPI Synapse:

\[\begin{split}I_{syn}(t_1) = \begin{cases} I_{syn}(t_0) \cdot exp \left( \dfrac{-dt}{\tau} \right) &\text{in any case} \\ \\ I_{syn}(t_1) + \dfrac{I_{th} I_{w}}{I_{\tau}} \cdot \left( 1 - exp \left( \dfrac{-t_{pulse}}{\tau} \right) \right) &\text{if a spike arrives} \end{cases}\end{split}\]

Where

\[\tau = \dfrac{C U_{T}}{\kappa I_{\tau}}\]
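
As a concrete illustration, the sketch below applies this update rule in plain Python/NumPy. The bias and capacitance values are hypothetical placeholders rather than the module defaults, and the spike branch is read as the decayed current plus the charge-injection term; the module's actual jax implementation may differ in detail.

import numpy as np

# Hypothetical values for illustration only
C_syn, Ut, kappa = 2.45e-11, 0.025, 0.7      # capacitance [F], thermal voltage [V], slope factor
I_tau, I_th, I_w = 8.7e-11, 8.7e-9, 1e-8     # leakage, gain and weight currents [A]
dt, t_pulse = 1e-3, 1e-5                     # solver step and pulse width [s]

tau = (C_syn * Ut) / (kappa * I_tau)         # tau = C * U_T / (kappa * I_tau)

def dpi_synapse_step(I_syn, spike):
    """One forward step of the DPI synapse current."""
    I_syn = I_syn * np.exp(-dt / tau)        # exponential decay, applied in any case
    if spike:                                # charge injection when a spike arrives
        I_syn += (I_th * I_w / I_tau) * (1.0 - np.exp(-t_pulse / tau))
    return I_syn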

For the membrane update, the forward Euler solution below is applied.

Membrane:

\[\begin{split}dI_{mem} &= \dfrac{I_{mem}}{\tau \left( I_{mem} + I_{th} \right) } \cdot \left( I_{mem_{\infty}} + f(I_{mem}) - I_{mem} \left( 1 + \dfrac{I_{ahp}}{I_{\tau}} \right) \right) \cdot dt \\\\ I_{mem}(t_1) &= I_{mem}(t_0) + dI_{mem}\end{split}\]

Where

\[\begin{split}I_{mem_{\infty}} &= \dfrac{I_{th}}{I_{\tau}} \left( I_{in} - I_{ahp} - I_{\tau}\right) \\\\ f(I_{mem}) &= \dfrac{I_{a}}{I_{\tau}} \left(I_{mem} + I_{th} \right ) \\\\ I_{a} &= \dfrac{I_{a_{gain}}}{1+ exp\left(-\dfrac{I_{mem}+I_{a_{th}}}{I_{a_{norm}}}\right)} \\\\\end{split}\]

On spiking:

When the membrane current of neuron \(j\), \(I_{mem, j}\), exceeds the threshold current \(I_{spkthr}\), the neuron emits a spike.

\[\begin{split}I_{mem, j} > I_{spkthr} \rightarrow S_{j} &= 1 \\ I_{mem, j} &= I_{reset} \\\end{split}\]
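
The sketch below strings the membrane expressions together with the spike/reset rule, again with hypothetical current values; the reset value I_reset and the clipping at Io are assumptions about the reset behaviour, not values documented on this page.

import numpy as np

# Hypothetical bias and state values for illustration only [A unless noted]
I_tau, I_th, I_ahp, I_in = 5.3e-12, 2.1e-11, 5e-13, 1e-9
I_again, I_ath, I_anorm = 5e-9, 1e-9, 1e-10
I_spkthr, I_reset, Io = 1e-7, 5e-13, 5e-13
C_mem, Ut, kappa, dt = 3e-12, 0.025, 0.7, 1e-3

tau = (C_mem * Ut) / (kappa * I_tau)

def membrane_step(I_mem):
    """One forward-Euler step of the membrane current; returns (I_mem, spike)."""
    I_mem_inf = (I_th / I_tau) * (I_in - I_ahp - I_tau)              # steady-state target
    I_a = I_again / (1.0 + np.exp(-(I_mem + I_ath) / I_anorm))       # positive-feedback current
    f_Imem = (I_a / I_tau) * (I_mem + I_th)
    dI_mem = I_mem / (tau * (I_mem + I_th)) * (
        I_mem_inf + f_Imem - I_mem * (1.0 + I_ahp / I_tau)
    ) * dt
    I_mem = max(I_mem + dI_mem, Io)                                  # keep above the dark current (assumption)
    spike = I_mem > I_spkthr
    if spike:
        I_mem = I_reset                                              # reset after emitting a spike
    return I_mem, spike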

See also

For detailed explanations of the equations and their usage, see:

DynapSim Neuron Model

Attributes overview

class_name

Class name of self

full_name

The full name of this module (class plus module name)

name

The name of this module, or an empty string if None

shape

The shape of this module

size

(DEPRECATED) The output size of this module

size_in

The input size of this module

size_out

The output size of this module

spiking_input

If True, this module receives spiking input.

spiking_output

If True, this module sends spiking output.

iahp

Spike frequency adaptation current states of the neurons in Amperes with shape (Nrec,)

iampa

Fast excitatory AMPA synapse current states of the neurons in Amperes with shape (Nrec,)

igaba

Slow inhibitory GABA synapse current states of the neurons in Amperes with shape (Nrec,)

imem

Membrane current states of the neurons in Amperes with shape (Nrec,)

inmda

Slow excitatory NMDA synapse current states of the neurons in Amperes with shape (Nrec,)

ishunt

Fast inhibitory shunting synapse current states of the neurons in Amperes with shape (Nrec,)

spikes

Logical spiking raster for each neuron at the last simulation time-step with shape (Nrec,)

timer_ref

Timer keeping track of the time from spike generation until the refractory period ends

vmem

Membrane potential states of the neurons in Volts with shape (Nrec,)

Idc

Constant DC current injected to the membrane in Amperes with shape (Nrec,)

If_nmda

NMDA gate soft cut-off current setting the NMDA gating voltage in Amperes with shape (Nrec,)

Igain_ahp

gain bias current of the spike frequency adaptation block in Amperes with shape (Nrec,)

Igain_mem

gain bias current for neuron membrane in Amperes with shape (Nrec,)

Igain_syn

gain bias current of synaptic gates (AMPA, GABA, NMDA, SHUNT) combined in Amperes with shape (Nrec,)

Ipulse_ahp

bias current setting the pulse width for spike frequency adaptation block t_pulse_ahp in Amperes with shape (Nrec,)

Ipulse

bias current setting the pulse width for neuron membrane t_pulse in Amperes with shape (Nrec,)

Iref

bias current setting the refractory period t_ref in Amperes with shape (Nrec,)

Ispkthr

spiking threshold current, neuron spikes if \(I_{mem} > I_{spkthr}\) in Amperes with shape (Nrec,)

Itau_ahp

Spike frequency adaptation leakage current setting the time constant tau_ahp in Amperes with shape (Nrec,)

Itau_mem

Neuron membrane leakage current setting the time constant tau_mem in Amperes with shape (Nrec,)

Itau_syn

(AMPA, GABA, NMDA, SHUNT) synapses combined leakage current setting the time constant tau_syn in Amperes with shape (Nrec,)

Iw_ahp

spike frequency adaptation weight current of the neurons of the core in Amperes with shape (Nrec,)

C_ahp

AHP synapse capacitance in Farads with shape (Nrec,)

C_syn

synaptic capacitance in Farads with shape (Nrec,)

C_pulse_ahp

spike frequency adaptation circuit pulse-width creation sub-circuit capacitance in Farads with shape (Nrec,)

C_pulse

pulse-width creation sub-circuit capacitance in Farads with shape (Nrec,)

C_ref

refractory period sub-circuit capacitance in Farads with shape (Nrec,)

C_mem

neuron membrane capacitance in Farads with shape (Nrec,)

Io

Dark current in Amperes that flows through the transistors even in the idle state with shape (Nrec,)

kappa_n

Subthreshold slope factor (n-type transistor) with shape (Nrec,)

kappa_p

Subthreshold slope factor (p-type transistor) with shape (Nrec,)

Ut

Thermal voltage in Volts with shape (Nrec,)

Vth

The cut-off Vgs potential of the transistors in Volts (not type specific) with shape (Nrec,)

Iscale

weight scaling current of the neurons of the core in Amperes

dt

The time step for the forward-Euler ODE solver

rng_key

The Jax RNG seed to use on initialisation.

Methods overview

__init__(shape[, Idc, If_nmda, Igain_ahp, ...])

__init__ constructs a DynapSim object

as_graph()

as_graph returns a computational graph for the simulated Dynap-SE neurons

attributes_named(name)

Search for attributes of this module or submodules by name

evolve(input_data[, record])

evolve implements the raw Rockpool JAX evolution function for a DynapSim module.

from_graph(se[, weights])

from_graph constructs a DynapSim object from a computational graph

modules()

Return a dictionary of all sub-modules of this module

parameters([family])

Return a nested dictionary of module and submodule Parameters

reset_parameters()

Reset all parameters in this module

reset_state()

Reset the state of this module

set_attributes(new_attributes)

Assign new attributes to this module and submodules

simulation_parameters([family])

Return a nested dictionary of module and submodule SimulationParameters

state([family])

Return a nested dictionary of module and submodule States

timed([output_num, dt, add_events])

Convert this module to a TimedModule

tree_flatten()

Flatten this module tree for Jax

tree_unflatten(aux_data, children)

Unflatten a tree of modules from Jax to Rockpool

C_ahp

AHP synapse capacitance in Farads with shape (Nrec,)

C_mem

neuron membrane capacitance in Farads with shape (Nrec,)

C_pulse

pulse-width creation sub-circuit capacitance in Farads with shape (Nrec,)

C_pulse_ahp

spike frequency adaptation circuit pulse-width creation sub-circuit capacitance in Farads with shape (Nrec,)

C_ref

refractory period sub-circuit capacitance in Farads with shape (Nrec,)

C_syn

synaptic capacitance in Farads with shape (Nrec,)

Idc

Constant DC current injected to the membrane in Amperes with shape (Nrec,)

If_nmda

NMDA gate soft cut-off current setting the NMDA gating voltage in Amperes with shape (Nrec,)

Igain_ahp

gain bias current of the spike frequency adaptation block in Amperes with shape (Nrec,)

Igain_mem

gain bias current for neuron membrane in Amperes with shape (Nrec,)

Igain_syn

gain bias current of synaptic gates (AMPA, GABA, NMDA, SHUNT) combined in Amperes with shape (Nrec,)

Io

Dark current in Amperes that flows through the transistors even in the idle state with shape (Nrec,)

Ipulse

bias current setting the pulse width for neuron membrane t_pulse in Amperes with shape (Nrec,)

Ipulse_ahp

bias current setting the pulse width for spike frequency adaptation block t_pulse_ahp in Amperes with shape (Nrec,)

Iref

bias current setting the refractory period t_ref in Amperes with shape (Nrec,)

Iscale

weight scaling current of the neurons of the core in Amperes

Ispkthr

spiking threshold current, neuron spikes if \(I_{mem} > I_{spkthr}\) in Amperes with shape (Nrec,)

Itau_ahp

Spike frequency adaptation leakage current setting the time constant tau_ahp in Amperes with shape (Nrec,)

Itau_mem

Neuron membrane leakage current setting the time constant tau_mem in Amperes with shape (Nrec,)

Itau_syn

(AMPA, GABA, NMDA, SHUNT) synapses combined leakage current setting the time constant tau_syn in Amperes with shape (Nrec,)

Iw_ahp

spike frequency adaptation weight current of the neurons of the core in Amperes with shape (Nrec,)

Ut

Thermal voltage in Volts with shape (Nrec,)

Vth

The cut-off Vgs potential of the transistors in Volts (not type specific) with shape (Nrec,)

__init__(shape: ~typing.Tuple[int] | int, Idc: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 5e-13, If_nmda: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 5e-13, Igain_ahp: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 2.8368794326241126e-11, Igain_mem: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 2.1276595744680848e-11, Igain_syn: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 8.687943262411346e-09, Ipulse_ahp: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 3.5e-07, Ipulse: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 3.4999999999999996e-08, Iref: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 1.0499999999999999e-09, Ispkthr: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 1e-07, Itau_ahp: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 2.8368794326241126e-11, Itau_mem: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 5.319148936170212e-12, Itau_syn: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 8.687943262411346e-11, Iw_ahp: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 5e-13, C_ahp: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 4e-11, C_syn: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 2.45e-11, C_pulse_ahp: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 5e-13, C_pulse: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 5e-13, C_ref: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 1.5e-12, C_mem: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 3e-12, Io: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 5e-13, kappa_n: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 0.75, kappa_p: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 0.66, Ut: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 0.025, Vth: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 0.7, Iscale: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 1e-08, w_rec: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array | None = None, has_rec: bool = False, weight_init_func: ~typing.Callable[[~typing.Tuple], float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array] | None = <function kaiming>, dt: float = 0.001, percent_mismatch: float | None = None, rng_key: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array | None = None, spiking_input: bool = False, spiking_output: bool = True, *args, **kwargs) None[source]

__init__ constructs a DynapSim object

Parameters:
  • shape (Tuple[int]) – Either a single dimension N, which defines a feed-forward layer of DynapSE AdExpIF neurons, or two dimensions (N, N), which defines a recurrent layer of DynapSE AdExpIF neurons.

  • Idc (FloatVector, optional) – Constant DC current injected to the membrane in Amperes with shape (Nrec,)

  • If_nmda (FloatVector, optional) – NMDA gate soft cut-off current setting the NMDA gating voltage in Amperes with shape (Nrec,)

  • Igain_ahp (FloatVector, optional) – gain bias current of the spike frequency adaptation block in Amperes with shape (Nrec,)

  • Igain_mem (FloatVector, optional) – gain bias current for neuron membrane in Amperes with shape (Nrec,)

  • Igain_syn (FloatVector, optional) – gain bias current of synaptic gates (AMPA, GABA, NMDA, SHUNT) combined in Amperes with shape (Nrec,)

  • Ipulse_ahp (FloatVector, optional) – bias current setting the pulse width for spike frequency adaptation block t_pulse_ahp in Amperes with shape (Nrec,)

  • Ipulse (FloatVector, optional) – bias current setting the pulse width for neuron membrane t_pulse in Amperes with shape (Nrec,)

  • Iref (FloatVector, optional) – bias current setting the refractory period t_ref in Amperes with shape (Nrec,)

  • Ispkthr (FloatVector, optional) – spiking threshold current, neuron spikes if \(I_{mem} > I_{spkthr}\) in Amperes with shape (Nrec,)

  • Itau_ahp (FloatVector, optional) – Spike frequency adaptation leakage current setting the time constant tau_ahp in Amperes with shape (Nrec,)

  • Itau_mem (FloatVector, optional) – Neuron membrane leakage current setting the time constant tau_mem in Amperes with shape (Nrec,)

  • Itau_syn (FloatVector, optional) – (AMPA, GABA, NMDA, SHUNT) synapses combined leakage current setting the time constant tau_syn in Amperes with shape (Nrec,)

  • Iw_ahp (FloatVector, optional) – spike frequency adaptation weight current of the neurons of the core in Amperes with shape (Nrec,)

  • C_ahp (FloatVector, optional) – AHP synapse capacitance in Farads with shape (Nrec,)

  • C_syn (FloatVector, optional) – synaptic capacitance in Farads with shape (Nrec,)

  • C_pulse_ahp (FloatVector, optional) – spike frequency adaptation circuit pulse-width creation sub-circuit capacitance in Farads with shape (Nrec,)

  • C_pulse (FloatVector, optional) – pulse-width creation sub-circuit capacitance in Farads with shape (Nrec,)

  • C_ref (FloatVector, optional) – refractory period sub-circuit capacitance in Farads with shape (Nrec,)

  • C_mem (FloatVector, optional) – neuron membrane capacitance in Farads with shape (Nrec,)

  • Io (FloatVector, optional) – Dark current in Amperes that flows through the transistors even in the idle state with shape (Nrec,)

  • kappa_n (FloatVector, optional) – Subthreshold slope factor (n-type transistor) with shape (Nrec,)

  • kappa_p (FloatVector, optional) – Subthreshold slope factor (p-type transistor) with shape (Nrec,)

  • Ut (FloatVector, optional) – Thermal voltage in Volts with shape (Nrec,)

  • Vth (FloatVector, optional) – The cut-off Vgs potential of the transistors in Volts (not type specific) with shape (Nrec,)

  • Iscale (FloatVector, optional) – weight scaling current of the neurons of the core in Amperes

  • w_rec (Optional[FloatVector], optional) – If the module is initialised in recurrent mode, one can provide a concrete initialisation for the recurrent weights, which must have shape (Nrec, Nrec, 4). The last dimension of size 4 holds one weight matrix for each of the 4 synapse types. If the module is not initialised in recurrent mode, then you may not provide w_rec, defaults to None

  • has_rec (bool, optional) – If True, the module provides a trainable recurrent weight matrix. If False, the module is feed-forward, defaults to False

  • weight_init_func (Optional[Callable[[Tuple], FloatVector]], optional) – The initialisation function to use when generating weights; it takes the shape as input and returns the initial weights, defaults to kaiming

  • dt (float, optional) – The time step for the forward-Euler ODE solver, defaults to 1e-3

  • percent_mismatch (Optional[float], optional) – Gaussian parameter mismatch percentage (check transform.mismatch_generator implementation), defaults to None

  • rng_key (Optional[FloatVector], optional) – The Jax RNG seed to use on initialisation. By default, a new seed is generated, defaults to None

  • spiking_input (bool, optional) – Whether this module receives spiking input, defaults to False

  • spiking_output (bool, optional) – Whether this module produces spiking output, defaults to True

Raises:
  • ValueError – shape must be a one- or two-element tuple (Nin, Nout)

  • ValueError – Multapses are not currently supported in DynapSim pipeline!
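
A minimal construction sketch follows; the import path rockpool.devices.dynapse is assumed from the module path shown above, and the argument values are illustrative rather than recommended settings.

from rockpool.devices.dynapse import DynapSim   # import path assumed from the module path above

# Feed-forward layer of 4 simulated Dynap-SE neurons with the default bias currents
mod_ff = DynapSim(shape=4, dt=1e-3)

# Recurrent layer of 4 neurons with a trainable recurrent weight matrix and 5% parameter mismatch
mod_rec = DynapSim(shape=(4, 4), has_rec=True, percent_mismatch=0.05, dt=1e-3)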

_abc_impl = <_abc._abc_data object>
_auto_batch(data: Array, states: Tuple = (), target_shapes: Tuple | None = None) Tuple[Array, Tuple[Array]]

Automatically replicate states over batches and verify input dimensions

Usage:
>>> data, (state0, state1, state2) = self._auto_batch(data, (self.state0, self.state1, self.state2))

This will verify that data has the correct final dimension (i.e. self.size_in). If data has only two dimensions (T, Nin), then it will be augmented to (1, T, Nin). The individual states will be replicated out from shape (a, b, c, ...) to (n_batches, a, b, c, ...) and returned.

Parameters:
  • data (np.ndarray) – Input data tensor. Either (batches, T, Nin) or (T, Nin)

  • states (Tuple) – Tuple of state variables. Each will be replicated out over batches by prepending a batch dimension

Returns:

(np.ndarray, Tuple[np.ndarray]) data, states

_force_set_attributes

(bool) If True, do not sanity-check attributes when setting.

_get_attribute_family(type_name: str, family: Tuple | List | str | None = None) dict

Search for attributes of this module and submodules that match a given family

This method can be used to conveniently get all weights for a network; or all time constants; or any other family of parameters. Parameter families are defined simply by a string: "weights" for weights; "taus" for time constants, etc. These strings are arbitrary, but if you follow the conventions then future developers will thank you (that includes you in six months' time).

Parameters:
  • type_name (str) – The class of parameters to search for. Must be one of ["Parameter", "SimulationParameter", "State"] or another future subclass of ParameterBase

  • family (Union[str, Tuple[str]]) – A string or list or tuple of strings, that define one or more attribute families to search for

Returns:

A nested dictionary of attributes that match the provided type_name and family

Return type:

dict

_get_attribute_registry() Tuple[Dict, Dict]

Return or initialise the attribute registry for this module

Returns:

registered_attributes, registered_modules

Return type:

(tuple)

_has_registered_attribute(name: str) bool

Check if the module has a registered attribute

Parameters:

name (str) – The name of the attribute to check

Returns:

True if the attribute name is in the attribute registry, False otherwise.

Return type:

bool

_in_Module_init

(bool) If exists and True, indicates that the module is in the __init__ chain.

_name: str | None

Name of this module, if assigned

_register_attribute(name: str, val: ParameterBase)

Record an attribute in the attribute registry

Parameters:
  • name (str) – The name of the attribute to register

  • val (ParameterBase) – The ParameterBase subclass object to register. e.g. Parameter, SimulationParameter or State.

_register_module(name: str, mod)

Add a submodule to the module registry

Parameters:
  • name (str) – The name of the submodule, extracted from the assigned attribute name

  • mod (JaxModule) – The submodule to register

_reset_attribute(name: str) ModuleBase

Reset an attribute to its initialisation value

Parameters:

name (str) – The name of the attribute to reset

Returns:

For compatibility with the functional API

Return type:

self (Module)

_rockpool_pytree_registry = []

The internal registry of registered JaxModule classes

_shape

The shape of this module

_spiking_input: bool

Whether this module receives spiking input

_spiking_output: bool

Whether this module produces spiking output

_submodulenames: List[str]

Registry of sub-module names

_wrap_recorded_state(recorded_dict: dict, t_start: float) Dict[str, TimeSeries]

Convert a recorded dictionary to a TimeSeries representation

This method is optional, and is provided to make the timed() conversion to a TimedModule work better. You should override this method in your custom Module, to wrap each element of your recorded state dictionary as a TimeSeries

Parameters:
  • recorded_dict (dict) – A recorded state dictionary as returned by evolve()

  • t_start (float) – The initial time of the recorded state, to use as the starting point of the time series

Returns:

The mapped recorded state dictionary, wrapped as TimeSeries objects

Return type:

Dict[str, TimeSeries]

as_graph() GraphHolder[source]

as_graph returns a computational graph for the simulated Dynap-SE neurons

Returns:

a GraphHolder object wrapping the DynapseNeurons graph.

Return type:

GraphHolder

attributes_named(name: Tuple[str] | List[str] | str) dict

Search for attributes of this module or submodules by name

Parameters:

name (Union[str, Tuple[str]]) – The name of the attribute to search for

Returns:

A nested dictionary of attributes that match name

Return type:

dict

property class_name: str

Class name of self

Type:

str

dt

The time step for the forward-Euler ODE solver

evolve(input_data: float | ndarray | Tensor | array, record: bool = True) Tuple[Array, Dict[str, Array], Dict[str, Array]][source]

evolve implements the raw Rockpool JAX evolution function for a DynapSim module. The function solves the dynamical equations introduced in the DynapSim class description above.

Parameters:
  • input_data (FloatVector) – Input array of shape (T, Nrec, 4) to evolve over, giving the number of spikes arriving in each time bin for each of the four synaptic gates

  • record (bool, optional) – If True, record the state at each timestep of the evolution, defaults to True

Returns:

spikes_ts, states, record_dict. spikes_ts is an array with shape (T, Nrec) containing the output spike raster produced by the module. states is a dictionary containing the updated module state following evolution. record_dict is a dictionary containing the state variables recorded at each time step during the evolution if the record argument is True, otherwise an empty dictionary {}.

Return type:

Tuple[jax.Array, Dict[str, jax.Array], Dict[str, jax.Array]]
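
A self-contained evolution sketch following this signature; the random input is illustrative only, and the import path is assumed as in the construction sketch above.

import numpy as np
from rockpool.devices.dynapse import DynapSim   # import path assumed

Nrec, T = 4, 1000
mod = DynapSim(shape=(Nrec, Nrec), has_rec=True, dt=1e-3)

# Spike counts per time bin for the (AMPA, GABA, NMDA, SHUNT) gates; illustrative random raster
input_data = (np.random.rand(T, Nrec, 4) < 0.05).astype(float)

spikes_ts, state, record_dict = mod.evolve(input_data, record=True)

print(spikes_ts.shape)      # (T, Nrec) output spike raster, per the Returns description above
print(record_dict.keys())   # state variables recorded at each time step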

classmethod from_graph(se: DynapseNeurons, weights: LinearWeights | None = None) DynapSim[source]

from_graph constructs a DynapSim object from a computational graph

Parameters:
  • se (DynapseNeurons) – the reference computational graph to restore the computational module

  • weights (Optional[LinearWeights], optional) – additional weights graph if one wants to impose recurrent weights, defaults to None

Returns:

a DynapSim object

Return type:

DynapSim

property full_name: str

The full name of this module (class plus module name)

Type:

str

iahp

Spike frequency adaptation current states of the neurons in Amperes with shape (Nrec,)

iampa

Fast excitatory AMPA synapse current states of the neurons in Amperes with shape (Nrec,)

igaba

Slow inhibitory GABA synapse current states of the neurons in Amperes with shape (Nrec,)

imem

Membrane current states of the neurons in Amperes with shape (Nrec,)

inmda

Slow excitatory NMDA synapse current states of the neurons in Amperes with shape (Nrec,)

ishunt

Fast inhibitory shunting synapse current states of the neurons in Amperes with shape (Nrec,)

kappa_n

Subthreshold slope factor (n-type transistor) with shape (Nrec,)

kappa_p

Subthreshold slope factor (p-type transistor) with shape (Nrec,)

modules() Dict

Return a dictionary of all sub-modules of this module

Returns:

A dictionary containing all sub-modules. Each item will be named with the sub-module name.

Return type:

dict

property name: str

The name of this module, or an empty string if None

Type:

str

parameters(family: Tuple | List | str | None = None) Dict

Return a nested dictionary of module and submodule Parameters

Use this method to inspect the Parameters from this and all submodules. The optional argument family allows you to search for Parameters in a particular family β€” for example "weights" for all weights of this module and nested submodules.

Although the family argument is an arbitrary string, reasonable choices are "weights", "taus" for time constants, "biases" for biases…

Examples

Obtain a dictionary of all Parameters for this module (including submodules):

>>> mod.parameters()
dict{ ... }

Obtain a dictionary of Parameters from a particular family:

>>> mod.parameters("weights")
dict{ ... }

Parameters:

family (str) – The family of Parameters to search for. Default: None; return all parameters.

Returns:

A nested dictionary of Parameters of this module and all submodules

Return type:

dict

reset_parameters()

Reset all parameters in this module

Returns:

The updated module is returned for compatibility with the functional API

Return type:

Module

reset_state() JaxModule

Reset the state of this module

Returns:

The updated module is returned for compatibility with the functional API

Return type:

Module

rng_key

The Jax RNG seed to use on initialisation. By default, a new seed is generated

set_attributes(new_attributes: Iterable | MutableMapping | Mapping) JaxModule

Assign new attributes to this module and submodules

Parameters:

new_attributes (Tree) – The tree of new attributes to assign to this module tree

Return type:

JaxModule

property shape: tuple

The shape of this module

Type:

tuple

simulation_parameters(family: Tuple | List | str | None = None) Dict

Return a nested dictionary of module and submodule SimulationParameters

Use this method to inspect the SimulationParameters from this and all submodules. The optional argument family allows you to search for SimulationParameters in a particular family.

Examples

Obtain a dictionary of all SimulationParameters for this module (including submodules):

>>> mod.simulation_parameters()
dict{ ... }

Parameters:

family (str) – The family of SimulationParameters to search for. Default: None; return all SimulationParameter attributes.

Returns:

A nested dictionary of SimulationParameters of this module and all submodules

Return type:

dict

property size: int

(DEPRECATED) The output size of this module

Type:

int

property size_in: int

The input size of this module

Type:

int

property size_out: int

The output size of this module

Type:

int

spikes

Logical spiking raster for each neuron at the last simulation time-step with shape (Nrec,)

property spiking_input: bool

If True, this module receives spiking input. If False, this module expects continuous input.

Type:

bool

property spiking_output

If True, this module sends spiking output. If False, this module sends continuous output.

Type:

bool

state(family: Tuple | List | str | None = None) Dict

Return a nested dictionary of module and submodule States

Use this method to inspect the States from this and all submodules. The optional argument family allows you to search for States in a particular family.

Examples

Obtain a dictionary of all States for this module (including submodules):

>>> mod.state()
dict{ ... }

Parameters:

family (str) – The family of States to search for. Default: None; return all State attributes.

Returns:

A nested dictionary of States of this module and all submodules

Return type:

dict

timed(output_num: int = 0, dt: float | None = None, add_events: bool = False)

Convert this module to a TimedModule

Parameters:
  • output_num (int) – Specify which output of the module to take, if the module returns multiple output series. Default: 0, take the first (or only) output.

  • dt (float) – Used to provide a time-step for this module, if the module does not already have one. If self already defines a time-step, then self.dt will be used. Default: None

  • add_events (bool) – If True, the TimedModule will add together events occurring within a single timestep on input and output. Default: False, don’t add events.

Returns:

A timed module that wraps this module

Return type:

TimedModule
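
A brief usage sketch, assuming mod is a DynapSim instance as in the sketches above; since DynapSim already defines dt, no dt argument is passed here.

# Wrap the raw module in a TimedModule that consumes and produces Rockpool time series
tmod = mod.timed()
print(tmod.dt)   # time step inherited from the wrapped DynapSim module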

timer_ref

Timer keeping track of the time from spike generation until the refractory period ends

tree_flatten() Tuple[tuple, tuple]

Flatten this module tree for Jax

classmethod tree_unflatten(aux_data, children)

Unflatten a tree of modules from Jax to Rockpool

vmem

Membrane potential states of the neurons in Volts with shape (Nrec,)