# devices.dynapse.DynapSim

class devices.dynapse.DynapSim(*args, **kwargs)[source]

Bases: JaxModule

DynapSim solves the dynamical chip equations for the DPI neuron and synapse models. It receives its configuration as bias currents and solves the membrane and synapse dynamics using the jax backend. One block has

• 1 synapse receiving spikes from the other circuits

• 1 recurrent synapse for spike frequency adaptation (AHP)

• 1 membrane evaluating the state and deciding whether or not to fire

For all the synapses, the DPI Synapse update equations below are solved in parallel.

DPI Synapse:

$\begin{split}I_{syn}(t_1) = \begin{cases} I_{syn}(t_0) \cdot exp \left( \dfrac{-dt}{\tau} \right) &\text{in any case} \\ \\ I_{syn}(t_1) + \dfrac{I_{th} I_{w}}{I_{\tau}} \cdot \left( 1 - exp \left( \dfrac{-t_{pulse}}{\tau} \right) \right) &\text{if a spike arrives} \end{cases}\end{split}$

Where

$\tau = \dfrac{C U_{T}}{\kappa I_{\tau}}$
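The two-regime synapse update above can be sketched in plain NumPy. This is a simplified, single-synapse illustration: the constants and variable names below are chosen for this sketch and are not the module's internal API or default values.

```python
import numpy as np

# Illustrative constants (chosen for this sketch, not the module defaults)
kappa = 0.7          # subthreshold slope factor
Ut = 0.025           # thermal voltage (V)
C = 2.45e-11         # synaptic capacitance (F)
I_tau = 8.7e-11      # leakage current (A)
I_th = 8.7e-9        # gain current (A)
I_w = 1e-8           # weight current (A)
dt = 1e-3            # simulation time step (s)
t_pulse = 1e-5       # spike pulse width (s)

# tau = C * U_T / (kappa * I_tau), as in the equation above
tau = C * Ut / (kappa * I_tau)

def dpi_syn_step(I_syn, spike):
    """One forward step of the DPI synapse: exponential decay in any case,
    plus a charge contribution when a spike arrives."""
    I_syn = I_syn * np.exp(-dt / tau)  # decay, in any case
    if spike:
        I_syn += (I_th * I_w / I_tau) * (1.0 - np.exp(-t_pulse / tau))
    return I_syn

I_syn = 1e-9
I_syn = dpi_syn_step(I_syn, spike=False)  # pure decay
I_syn = dpi_syn_step(I_syn, spike=True)   # decay + spike contribution
```

The two cases of the piecewise equation map directly onto the unconditional decay line and the `if spike` branch.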

For the membrane update, the forward Euler solution below is applied.

Membrane:

$\begin{split}dI_{mem} &= \dfrac{I_{mem}}{\tau \left( I_{mem} + I_{th} \right) } \cdot \left( I_{mem_{\infty}} + f(I_{mem}) - I_{mem} \left( 1 + \dfrac{I_{ahp}}{I_{\tau}} \right) \right) \cdot dt \\\\ I_{mem}(t_1) &= I_{mem}(t_0) + dI_{mem}\end{split}$

Where

$\begin{split}I_{mem_{\infty}} &= \dfrac{I_{th}}{I_{\tau}} \left( I_{in} - I_{ahp} - I_{\tau}\right) \\\\ f(I_{mem}) &= \dfrac{I_{a}}{I_{\tau}} \left(I_{mem} + I_{th} \right ) \\\\ I_{a} &= \dfrac{I_{a_{gain}}}{1+ exp\left(-\dfrac{I_{mem}+I_{a_{th}}}{I_{a_{norm}}}\right)} \\\\\end{split}$
On spiking:

When the membrane current for neuron $j$, $I_{mem, j}$, exceeds the threshold current $I_{spkthr}$, the neuron emits a spike.

$\begin{split}I_{mem, j} > I_{spkthr} \rightarrow S_{j} &= 1 \\ I_{mem, j} &= I_{reset} \\\end{split}$
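The membrane forward-Euler step and the threshold/reset rule can be combined into one sketch. This is a toy, scalar illustration: the constants (including the positive-feedback parameters) are chosen here for numerical readability and are not the module defaults.

```python
import numpy as np

# Illustrative constants, chosen for this sketch (not the module defaults)
Ut, kappa = 0.025, 0.7
C_mem = 3e-12        # membrane capacitance (F)
I_tau = 5.3e-12      # membrane leakage current (A)
I_th = 2.1e-11       # membrane gain current (A)
I_again, I_ath, I_anorm = 1e-12, 1e-9, 1e-10  # positive-feedback parameters
I_spkthr, I_reset, Io = 1e-7, 5e-13, 5e-13
dt = 1e-3

tau = C_mem * Ut / (kappa * I_tau)

def membrane_step(I_mem, I_in, I_ahp):
    """One forward-Euler step of the membrane equation, with threshold/reset."""
    I_mem_inf = (I_th / I_tau) * (I_in - I_ahp - I_tau)
    I_a = I_again / (1.0 + np.exp(-(I_mem + I_ath) / I_anorm))  # positive feedback
    f = (I_a / I_tau) * (I_mem + I_th)
    dI = I_mem / (tau * (I_mem + I_th)) * (
        I_mem_inf + f - I_mem * (1.0 + I_ahp / I_tau)
    ) * dt
    I_mem = max(I_mem + dI, Io)   # currents never drop below the dark current
    spike = I_mem > I_spkthr
    if spike:
        I_mem = I_reset           # reset on spike
    return I_mem, spike
```

With a strong input current the state climbs to its fixed point above $I_{spkthr}$ and the neuron spikes and resets; with zero input the state leaks back down.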

For detailed explanations of the equations and their usage, see the DynapSim Neuron Model tutorial.

Attributes overview

• class_name – Class name of self

• full_name – The full name of this module (class plus module name)

• name – The name of this module, or an empty string if None

• shape – The shape of this module

• size – (DEPRECATED) The output size of this module

• size_in – The input size of this module

• size_out – The output size of this module

• spiking_input – If True, this module receives spiking input

• spiking_output – If True, this module sends spiking output

• iahp – Spike frequency adaptation current states of the neurons in Amperes with shape (Nrec,)

• iampa – Fast excitatory AMPA synapse current states of the neurons in Amperes with shape (Nrec,)

• igaba – Slow inhibitory GABA synapse current states of the neurons in Amperes with shape (Nrec,)

• imem – Membrane current states of the neurons in Amperes with shape (Nrec,)

• inmda – Slow excitatory NMDA synapse current states of the neurons in Amperes with shape (Nrec,)

• ishunt – Fast inhibitory shunting synapse current states of the neurons in Amperes with shape (Nrec,)

• spikes – Logical spiking raster for each neuron at the last simulation time-step with shape (Nrec,)

• timer_ref – Timer tracking the time from spike generation until the end of the refractory period

• vmem – Membrane potential states of the neurons in Volts with shape (Nrec,)

• Idc – Constant DC current injected to membrane in Amperes with shape (Nrec,)

• If_nmda – NMDA gate soft cut-off current setting the NMDA gating voltage in Amperes with shape (Nrec,)

• Igain_ahp – gain bias current of the spike frequency adaptation block in Amperes with shape (Nrec,)

• Igain_mem – gain bias current for neuron membrane in Amperes with shape (Nrec,)

• Igain_syn – gain bias current of synaptic gates (AMPA, GABA, NMDA, SHUNT) combined in Amperes with shape (Nrec,)

• Ipulse_ahp – bias current setting the pulse width t_pulse_ahp for the spike frequency adaptation block, in Amperes with shape (Nrec,)

• Ipulse – bias current setting the pulse width t_pulse for the neuron membrane, in Amperes with shape (Nrec,)

• Iref – bias current setting the refractory period t_ref in Amperes with shape (Nrec,)

• Ispkthr – spiking threshold current; the neuron spikes if $I_{mem} > I_{spkthr}$, in Amperes with shape (Nrec,)

• Itau_ahp – Spike frequency adaptation leakage current setting the time constant tau_ahp in Amperes with shape (Nrec,)

• Itau_mem – Neuron membrane leakage current setting the time constant tau_mem in Amperes with shape (Nrec,)

• Itau_syn – (AMPA, GABA, NMDA, SHUNT) synapses combined leakage current setting the time constant tau_syn in Amperes with shape (Nrec,)

• Iw_ahp – spike frequency adaptation weight current of the neurons of the core in Amperes with shape (Nrec,)

• C_ahp – AHP synapse capacitance in Farads with shape (Nrec,)

• C_syn – synaptic capacitance in Farads with shape (Nrec,)

• C_pulse_ahp – spike frequency adaptation circuit pulse-width creation sub-circuit capacitance in Farads with shape (Nrec,)

• C_pulse – pulse-width creation sub-circuit capacitance in Farads with shape (Nrec,)

• C_ref – refractory period sub-circuit capacitance in Farads with shape (Nrec,)

• C_mem – neuron membrane capacitance in Farads with shape (Nrec,)

• Io – Dark current in Amperes that flows through the transistors even at the idle state, with shape (Nrec,)

• kappa_n – Subthreshold slope factor (n-type transistor) with shape (Nrec,)

• kappa_p – Subthreshold slope factor (p-type transistor) with shape (Nrec,)

• Ut – Thermal voltage in Volts with shape (Nrec,)

• Vth – The cut-off Vgs potential of the transistors in Volts (not type specific) with shape (Nrec,)

• Iscale – weight scaling current of the neurons of the core in Amperes

• dt – The time step for the forward-Euler ODE solver

• rng_key – The Jax RNG seed to use on initialisation

Methods overview

• __init__(shape[, Idc, If_nmda, Igain_ahp, ...]) – __init__ constructs a DynapSim object

• as_graph() – as_graph returns a computational graph for the simulated Dynap-SE neurons

• attributes_named(name) – Search for attributes of this module or its submodules by name

• evolve(input_data[, record]) – evolve implements the raw rockpool JAX evolution function for a DynapSim module

• from_graph(se[, weights]) – from_graph constructs a DynapSim object from a computational graph

• modules() – Return a dictionary of all sub-modules of this module

• parameters([family]) – Return a nested dictionary of module and submodule Parameters

• reset_parameters() – Reset all parameters in this module

• reset_state() – Reset the state of this module

• set_attributes(new_attributes) – Assign new attributes to this module and submodules

• simulation_parameters([family]) – Return a nested dictionary of module and submodule SimulationParameters

• state([family]) – Return a nested dictionary of module and submodule States

• timed([output_num, dt, add_events]) – Convert this module to a TimedModule

• tree_flatten() – Flatten this module tree for Jax

• tree_unflatten(aux_data, children) – Unflatten a tree of modules from Jax to Rockpool
C_ahp

AHP synapse capacitance in Farads with shape (Nrec,)

C_mem

neuron membrane capacitance in Farads with shape (Nrec,)

C_pulse

pulse-width creation sub-circuit capacitance in Farads with shape (Nrec,)

C_pulse_ahp

spike frequency adaptation circuit pulse-width creation sub-circuit capacitance in Farads with shape (Nrec,)

C_ref

refractory period sub-circuit capacitance in Farads with shape (Nrec,)

C_syn

synaptic capacitance in Farads with shape (Nrec,)

Idc

Constant DC current injected to membrane in Amperes with shape (Nrec,)

If_nmda

NMDA gate soft cut-off current setting the NMDA gating voltage in Amperes with shape (Nrec,)

Igain_ahp

gain bias current of the spike frequency adaptation block in Amperes with shape (Nrec,)

Igain_mem

gain bias current for neuron membrane in Amperes with shape (Nrec,)

Igain_syn

gain bias current of synaptic gates (AMPA, GABA, NMDA, SHUNT) combined in Amperes with shape (Nrec,)

Io

Dark current in Amperes that flows through the transistors even at the idle state with shape (Nrec,)

Ipulse

bias current setting the pulse width for neuron membrane t_pulse in Amperes with shape (Nrec,)

Ipulse_ahp

bias current setting the pulse width for spike frequency adaptation block t_pulse_ahp in Amperes with shape (Nrec,)

Iref

bias current setting the refractory period t_ref in Amperes with shape (Nrec,)

Iscale

weight scaling current of the neurons of the core in Amperes

Ispkthr

spiking threshold current; the neuron spikes if $I_{mem} > I_{spkthr}$, in Amperes with shape (Nrec,)

Itau_ahp

Spike frequency adaptation leakage current setting the time constant tau_ahp in Amperes with shape (Nrec,)

Itau_mem

Neuron membrane leakage current setting the time constant tau_mem in Amperes with shape (Nrec,)

Itau_syn

(AMPA, GABA, NMDA, SHUNT) synapses combined leakage current setting the time constant tau_syn in Amperes with shape (Nrec,)

Iw_ahp

spike frequency adaptation weight current of the neurons of the core in Amperes with shape (Nrec,)

Ut

Thermal voltage in Volts with shape (Nrec,)

Vth

The cut-off Vgs potential of the transistors in Volts (not type specific) with shape (Nrec,)

__init__(shape: ~typing.Tuple[int] | int, Idc: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 5e-13, If_nmda: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 5e-13, Igain_ahp: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 2.8368794326241126e-11, Igain_mem: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 2.1276595744680848e-11, Igain_syn: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 8.687943262411346e-09, Ipulse_ahp: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 3.5e-07, Ipulse: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 3.4999999999999996e-08, Iref: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 1.0499999999999999e-09, Ispkthr: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 1e-07, Itau_ahp: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 2.8368794326241126e-11, Itau_mem: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 5.319148936170212e-12, Itau_syn: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 8.687943262411346e-11, Iw_ahp: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 5e-13, C_ahp: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 4e-11, C_syn: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 2.45e-11, C_pulse_ahp: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 5e-13, C_pulse: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 5e-13, C_ref: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 1.5e-12, C_mem: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 3e-12, Io: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 5e-13, 
kappa_n: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 0.75, kappa_p: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 0.66, Ut: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 0.025, Vth: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 0.7, Iscale: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array = 1e-08, w_rec: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array | None = None, has_rec: bool = False, weight_init_func: ~typing.Callable[[~typing.Tuple], float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array] | None = <function kaiming>, dt: float = 0.001, percent_mismatch: float | None = None, rng_key: float | ~numpy.ndarray | ~torch.Tensor | ~jax._src.numpy.lax_numpy.array | None = None, spiking_input: bool = False, spiking_output: bool = True, *args, **kwargs) → None[source]

__init__ constructs a DynapSim object

Parameters:
• shape (Tuple[int]) – Either a single dimension N, which defines a feed-forward layer of DynapSE AdExpIF neurons, or two dimensions (N, N), which defines a recurrent layer of DynapSE AdExpIF neurons.

• Idc (FloatVector, optional) – Constant DC current injected to membrane in Amperes with shape (Nrec,)

• If_nmda (FloatVector, optional) – NMDA gate soft cut-off current setting the NMDA gating voltage in Amperes with shape (Nrec,)

• Igain_ahp (FloatVector, optional) – gain bias current of the spike frequency adaptation block in Amperes with shape (Nrec,)

• Igain_mem (FloatVector, optional) – gain bias current for neuron membrane in Amperes with shape (Nrec,)

• Igain_syn (FloatVector, optional) – gain bias current of synaptic gates (AMPA, GABA, NMDA, SHUNT) combined in Amperes with shape (Nrec,)

• Ipulse_ahp (FloatVector, optional) – bias current setting the pulse width t_pulse_ahp for the spike frequency adaptation block, in Amperes with shape (Nrec,)

• Ipulse (FloatVector, optional) – bias current setting the pulse width t_pulse for the neuron membrane, in Amperes with shape (Nrec,)

• Iref (FloatVector, optional) – bias current setting the refractory period t_ref in Amperes with shape (Nrec,)

• Ispkthr (FloatVector, optional) – spiking threshold current; the neuron spikes if $I_{mem} > I_{spkthr}$, in Amperes with shape (Nrec,)

• Itau_ahp (FloatVector, optional) – Spike frequency adaptation leakage current setting the time constant tau_ahp in Amperes with shape (Nrec,)

• Itau_mem (FloatVector, optional) – Neuron membrane leakage current setting the time constant tau_mem in Amperes with shape (Nrec,)

• Itau_syn (FloatVector, optional) – (AMPA, GABA, NMDA, SHUNT) synapses combined leakage current setting the time constant tau_syn in Amperes with shape (Nrec,)

• Iw_ahp (FloatVector, optional) – spike frequency adaptation weight current of the neurons of the core in Amperes with shape (Nrec,)

• C_ahp (FloatVector, optional) – AHP synapse capacitance in Farads with shape (Nrec,)

• C_syn (FloatVector, optional) – synaptic capacitance in Farads with shape (Nrec,)

• C_pulse_ahp (FloatVector, optional) – spike frequency adaptation circuit pulse-width creation sub-circuit capacitance in Farads with shape (Nrec,)

• C_pulse (FloatVector, optional) – pulse-width creation sub-circuit capacitance in Farads with shape (Nrec,)

• C_ref (FloatVector, optional) – refractory period sub-circuit capacitance in Farads with shape (Nrec,)

• C_mem (FloatVector, optional) – neuron membrane capacitance in Farads with shape (Nrec,)

• Io (FloatVector, optional) – Dark current in Amperes that flows through the transistors even at the idle state with shape (Nrec,)

• kappa_n (FloatVector, optional) – Subthreshold slope factor (n-type transistor) with shape (Nrec,)

• kappa_p (FloatVector, optional) – Subthreshold slope factor (p-type transistor) with shape (Nrec,)

• Ut (FloatVector, optional) – Thermal voltage in Volts with shape (Nrec,)

• Vth (FloatVector, optional) – The cut-off Vgs potential of the transistors in Volts (not type specific) with shape (Nrec,)

• Iscale (FloatVector, optional) – weight scaling current of the neurons of the core in Amperes

• w_rec (Optional[FloatVector], optional) – If the module is initialised in recurrent mode, one can provide a concrete initialisation for the recurrent weights, which must be a matrix with shape (Nrec, Nrec, 4); the last dimension holds one weight matrix for each of the 4 synapse types. If the module is not initialised in recurrent mode, then you may not provide w_rec. Defaults to None

• has_rec (bool, optional) – When True, the module provides a trainable recurrent weight matrix; when False, the module is feed-forward. Defaults to False

• weight_init_func (Optional[Callable[[Tuple], FloatVector]], optional) – The initialisation function to use when generating weights; receives the shape and returns the initial weights. Defaults to kaiming

• dt (float, optional) – The time step for the forward-Euler ODE solver, defaults to 1e-3

• percent_mismatch (Optional[float], optional) – Gaussian parameter mismatch percentage (see the transform.mismatch_generator implementation), defaults to None

• rng_key (Optional[FloatVector], optional) – The Jax RNG seed to use on initialisation. By default, a new seed is generated. Defaults to None

• spiking_input (bool, optional) – Whether this module receives spiking input, defaults to False

• spiking_output (bool, optional) – Whether this module produces spiking output, defaults to True

Raises:
• ValueError – shape must be a one- or two-element tuple (Nin, Nout)

• ValueError – Multapses are not currently supported in the DynapSim pipeline!

_abc_impl = <_abc._abc_data object>

_auto_batch(data: Array, states: Tuple = (), target_shapes: Tuple = None) → Tuple[Array, Tuple[Array]]

Automatically replicate states over batches and verify input dimensions

Usage:
>>> data, (state0, state1, state2) = self._auto_batch(data, (self.state0, self.state1, self.state2))

This will verify that data has the correct final dimension (i.e. self.size_in). If data has only two dimensions (T, Nin), then it will be augmented to (1, T, Nin). The individual states will be replicated out from shape (a, b, c, ...) to (n_batches, a, b, c, ...) and returned.

Parameters:
• data (np.ndarray) – Input data tensor. Either (batches, T, Nin) or (T, Nin)

• states (Tuple) – Tuple of state variables. Each will be replicated out over batches by prepending a batch dimension

Returns:

(np.ndarray, Tuple[np.ndarray]) data, states
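The reshaping described above can be illustrated in plain NumPy. This is a behavioural sketch of what the docstring describes, not the actual implementation (the helper name is chosen here):

```python
import numpy as np

def auto_batch_sketch(data, states=()):
    """Sketch of the documented behaviour: add a batch dimension to 2-D input
    and replicate each state over the batch dimension."""
    if data.ndim == 2:                       # (T, Nin) -> (1, T, Nin)
        data = data[np.newaxis, ...]
    n_batches = data.shape[0]
    # Replicate each state (a, b, c, ...) -> (n_batches, a, b, c, ...)
    states = tuple(np.broadcast_to(s, (n_batches,) + s.shape) for s in states)
    return data, states

data = np.zeros((100, 16))                   # (T, Nin)
state = np.zeros(16)                         # (Nrec,)
data_b, (state_b,) = auto_batch_sketch(data, (state,))
# data_b has shape (1, 100, 16); state_b has shape (1, 16)
```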

_force_set_attributes

(bool) If True, do not sanity-check attributes when setting.

_get_attribute_family(type_name: str, family: str | Tuple | List = None) → dict

Search for attributes of this module and submodules that match a given family

This method can be used to conveniently get all weights for a network; or all time constants; or any other family of parameters. Parameter families are defined simply by a string: "weights" for weights; "taus" for time constants, etc. These strings are arbitrary, but if you follow the conventions then future developers will thank you (that includes you in six months' time).

Parameters:
• type_name (str) – The class of parameters to search for. Must be one of ["Parameter", "SimulationParameter", "State"] or another future subclass of ParameterBase

• family (Union[str, Tuple[str]]) – A string, or a list or tuple of strings, that defines one or more attribute families to search for

Returns:

A nested dictionary of attributes that match the provided type_name and family

Return type:

dict

_get_attribute_registry() → Tuple[Dict, Dict]

Return or initialise the attribute registry for this module

Returns:

registered_attributes, registered_modules

Return type:

(tuple)

_has_registered_attribute(name: str) → bool

Check if the module has a registered attribute

Parameters:

name (str) – The name of the attribute to check

Returns:

True if the attribute name is in the attribute registry, False otherwise.

Return type:

bool

_in_Module_init

(bool) If exists and True, indicates that the module is in the __init__ chain.

_name: str | None

Name of this module, if assigned

_register_attribute(name: str, val: ParameterBase)

Record an attribute in the attribute registry

Parameters:
• name (str) – The name of the attribute to register

• val (ParameterBase) – The ParameterBase subclass object to register, e.g. Parameter, SimulationParameter or State.

_register_module(name: str, mod)

Add a submodule to the module registry

Parameters:
• name (str) – The name of the submodule, extracted from the assigned attribute name

• mod (JaxModule) – The submodule to register

_reset_attribute(name: str) → ModuleBase

Reset an attribute to its initialisation value

Parameters:

name (str) – The name of the attribute to reset

Returns:

For compatibility with the functional API

Return type:

self (Module)

_rockpool_pytree_registry = []

The internal registry of registered JaxModule instances

_shape

The shape of this module

_spiking_input: bool

Whether this module receives spiking input

_spiking_output: bool

Whether this module produces spiking output

_submodulenames: List[str]

Registry of sub-module names

_wrap_recorded_state(recorded_dict: dict, t_start: float) → Dict[str, TimeSeries]

Convert a recorded dictionary to a TimeSeries representation

This method is optional, and is provided to make the timed() conversion to a TimedModule work better. You should override this method in your custom Module, to wrap each element of your recorded state dictionary as a TimeSeries

Parameters:
• recorded_dict (dict) – A recorded state dictionary as returned by evolve()

• t_start (float) – The initial time of the recorded state, to use as the starting point of the time series

Returns:

The mapped recorded state dictionary, wrapped as TimeSeries objects

Return type:

Dict[str, TimeSeries]

as_graph() → GraphHolder[source]

as_graph returns a computational graph for the simulated Dynap-SE neurons

Returns:

a GraphHolder object wrapping the DynapseNeurons graph.

Return type:

GraphHolder

attributes_named(name: Tuple[str] | List[str] | str) → dict

Search for attributes of this module or its submodules by name

Parameters:

name (Union[str, Tuple[str]]) – The name of the attribute to search for

Returns:

A nested dictionary of attributes that match name

Return type:

dict

property class_name: str

Class name of self

Type:

str

dt

The time step for the forward-Euler ODE solver

evolve(input_data: float | ndarray | Tensor | array, record: bool = True) → Tuple[Array, Dict[str, Array], Dict[str, Array]][source]

evolve implements the raw rockpool JAX evolution function for a DynapSim module. The function solves the dynamical equations introduced in the DynapSim module definition

Parameters:
• input_data (FloatVector) – Input array of shape (T, Nrec, 4) to evolve over. Each entry represents the number of spikes arriving in that time bin at each of the four synaptic gates

• record (bool, optional) – Whether to record each timestep of the evolution, defaults to True

Returns:

spikes_ts, states, record_dict. spikes_ts is an array with shape (T, Nrec) containing the output data (spike raster) produced by the module. states is a dictionary containing the updated module state following evolution. record_dict is a dictionary containing the state variables recorded at each time step during the evolution if the record argument is True, otherwise an empty dictionary {}

Return type:

Tuple[jax.Array, Dict[str, jax.Array], Dict[str, jax.Array]]
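The array contract of evolve can be illustrated with a toy, pure-NumPy evolution loop. This sketch uses greatly simplified dynamics (the four gates are just summed, and the state is a generic leaky integrator); it is only meant to show the (T, Nrec, 4) input and (T, Nrec) output shapes, not the real solver:

```python
import numpy as np

def toy_evolve(input_data, dt=1e-3, tau=1e-2, thr=0.5):
    """Toy stand-in for evolve: input_data has shape (T, Nrec, 4); the four
    synaptic gates are collapsed into one drive, a leaky state integrates it,
    and a (T, Nrec) boolean spike raster plus the final state are returned."""
    T, n_rec, n_gates = input_data.shape
    assert n_gates == 4, "one channel per synaptic gate (AMPA, GABA, NMDA, SHUNT)"
    x = np.zeros(n_rec)
    spikes_ts = np.zeros((T, n_rec), dtype=bool)
    for t in range(T):
        drive = input_data[t].sum(axis=-1)         # collapse the 4 gates
        x = x * np.exp(-dt / tau) + drive * dt / tau
        spikes_ts[t] = x > thr
        x = np.where(spikes_ts[t], 0.0, x)         # reset on spike
    return spikes_ts, {"x": x}

rng = np.random.default_rng(0)
inp = rng.poisson(2.0, size=(50, 8, 4)).astype(float)  # (T, Nrec, 4) spike counts
spikes, state = toy_evolve(inp)
# spikes has shape (50, 8)
```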

classmethod from_graph(se: DynapseNeurons, weights: LinearWeights | None = None) → DynapSim[source]

from_graph constructs a DynapSim object from a computational graph

Parameters:
• se (DynapseNeurons) – the reference computational graph from which to restore the computational module

• weights (Optional[LinearWeights], optional) – an additional weights graph, if one wants to impose recurrent weights, defaults to None

Returns:

a DynapSim object

Return type:

DynapSim

property full_name: str

The full name of this module (class plus module name)

Type:

str

iahp

Spike frequency adaptation current states of the neurons in Amperes with shape (Nrec,)

iampa

Fast excitatory AMPA synapse current states of the neurons in Amperes with shape (Nrec,)

igaba

Slow inhibitory GABA synapse current states of the neurons in Amperes with shape (Nrec,)

imem

Membrane current states of the neurons in Amperes with shape (Nrec,)

inmda

Slow excitatory NMDA synapse current states of the neurons in Amperes with shape (Nrec,)

ishunt

Fast inhibitory shunting synapse current states of the neurons in Amperes with shape (Nrec,)

kappa_n

Subthreshold slope factor (n-type transistor) with shape (Nrec,)

kappa_p

Subthreshold slope factor (p-type transistor) with shape (Nrec,)

modules() → Dict

Return a dictionary of all sub-modules of this module

Returns:

A dictionary containing all sub-modules. Each item will be named with the sub-module name.

Return type:

dict

property name: str

The name of this module, or an empty string if None

Type:

str

parameters(family: str | Tuple | List = None) → Dict

Return a nested dictionary of module and submodule Parameters

Use this method to inspect the Parameters from this and all submodules. The optional argument family allows you to search for Parameters in a particular family – for example "weights" for all weights of this module and its nested submodules.

Although the family argument is an arbitrary string, reasonable choices are "weights", "taus" for time constants, "biases", and so on.

Examples

Obtain a dictionary of all Parameters for this module (including submodules):

>>> mod.parameters()
dict{ ... }

Obtain a dictionary of Parameters from a particular family:

>>> mod.parameters("weights")
dict{ ... }
Parameters:

family (str) β The family of Parameters to search for. Default: None; return all parameters.

Returns:

A nested dictionary of Parameters of this module and all submodules

Return type:

dict

reset_parameters()

Reset all parameters in this module

Returns:

The updated module is returned for compatibility with the functional API

Return type:

Module

reset_state()

Reset the state of this module

Returns:

The updated module is returned for compatibility with the functional API

Return type:

Module

rng_key

The Jax RNG seed to use on initialisation. By default, a new seed is generated

set_attributes(new_attributes: Iterable | MutableMapping | Mapping) → JaxModule

Assign new attributes to this module and submodules

Parameters:

new_attributes (Tree) – The tree of new attributes to assign to this module tree

Return type:

JaxModule

property shape: tuple

The shape of this module

Type:

tuple

simulation_parameters(family: str | Tuple | List = None) → Dict

Return a nested dictionary of module and submodule SimulationParameters

Use this method to inspect the SimulationParameters from this and all submodules. The optional argument family allows you to search for SimulationParameters in a particular family.

Examples

Obtain a dictionary of all SimulationParameters for this module (including submodules):

>>> mod.simulation_parameters()
dict{ ... }
Parameters:

family (str) β The family of SimulationParameters to search for. Default: None; return all SimulationParameter attributes.

Returns:

A nested dictionary of SimulationParameters of this module and all submodules

Return type:

dict

property size: int

(DEPRECATED) The output size of this module

Type:

int

property size_in: int

The input size of this module

Type:

int

property size_out: int

The output size of this module

Type:

int

spikes

Logical spiking raster for each neuron at the last simulation time-step with shape (Nrec,)

property spiking_input: bool

If True, this module receives spiking input. If False, this module expects continuous input.

Type:

bool

property spiking_output

If True, this module sends spiking output. If False, this module sends continuous output.

Type:

bool

state(family: str | Tuple | List = None) → Dict

Return a nested dictionary of module and submodule States

Use this method to inspect the States from this and all submodules. The optional argument family allows you to search for States in a particular family.

Examples

Obtain a dictionary of all States for this module (including submodules):

>>> mod.state()
dict{ ... }
Parameters:

family (str) β The family of States to search for. Default: None; return all State attributes.

Returns:

A nested dictionary of States of this module and all submodules

Return type:

dict

timed(output_num: int = 0, dt: float = None, add_events: bool = False)

Convert this module to a TimedModule

Parameters:
• output_num (int) – Specify which output of the module to take, if the module returns multiple output series. Default: 0, take the first (or only) output.

• dt (float) – Used to provide a time-step for this module, if the module does not already have one. If self already defines a time-step, then self.dt will be used. Default: None

• add_events (bool) – If True, the TimedModule will add up events occurring within a single timestep on input and output. Default: False, don't add events.

Returns:

A timed module that wraps this module

Return type:

TimedModule

timer_ref

Timer tracking the time from spike generation until the end of the refractory period

tree_flatten() → Tuple[tuple, tuple]

Flatten this module tree for Jax

classmethod tree_unflatten(aux_data, children)

Unflatten a tree of modules from Jax to Rockpool

vmem

Membrane potential states of the neurons in Volts with shape (Nrec,)