devices.dynapse.DynapseNeurons
- class devices.dynapse.DynapseNeurons(input_nodes: SetList[GraphNode], output_nodes: SetList[GraphNode], name: str, computational_module: Any, Idc: IntVector | FloatVector = <factory>, If_nmda: IntVector | FloatVector = <factory>, Igain_ahp: IntVector | FloatVector = <factory>, Igain_mem: IntVector | FloatVector = <factory>, Igain_syn: IntVector | FloatVector = <factory>, Ipulse_ahp: IntVector | FloatVector = <factory>, Ipulse: IntVector | FloatVector = <factory>, Iref: IntVector | FloatVector = <factory>, Ispkthr: IntVector | FloatVector = <factory>, Itau_ahp: IntVector | FloatVector = <factory>, Itau_mem: IntVector | FloatVector = <factory>, Itau_syn: IntVector | FloatVector = <factory>, Iw_ahp: IntVector | FloatVector = <factory>, Iscale: float | None = None, dt: float | None = None)[source]
Bases: GenericNeurons
DynapseNeurons stores the core computational properties of a Dynap-SE network
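A minimal construction sketch, assuming the import path implied by this page's title and that extra keyword arguments to the inherited _factory helper (documented below) are forwarded to the constructor; all current values are arbitrary placeholders, not calibrated Dynap-SE biases:

    from rockpool.devices.dynapse import DynapseNeurons  # path assumed from page title

    # Build a 4-in / 4-out neuron graph module. _factory creates and
    # attaches the input/output GraphNodes automatically; the currents
    # below are placeholder magnitudes in Amperes.
    neurons = DynapseNeurons._factory(
        size_in=4,
        size_out=4,
        name="dynapse_core_0",
        computational_module=None,
        Itau_mem=2e-12,   # membrane leakage current
        Igain_mem=8e-12,  # membrane gain bias current
        Ispkthr=1e-7,     # spiking threshold current
        Iscale=1e-8,      # weight scaling current
    )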
Attributes overview
Iscale – the scaling current
dt – the time step for the forward-Euler ODE solver
Idc – constant DC current injected to the membrane, in Amperes
If_nmda – NMDA gate soft cut-off current setting the NMDA gating voltage, in Amperes
Igain_ahp – gain bias current of the spike frequency adaptation block, in Amperes
Igain_mem – gain bias current for the neuron membrane, in Amperes
Igain_syn – gain bias current of the synaptic gates (AMPA, GABA, NMDA, SHUNT) combined, in Amperes
Ipulse_ahp – bias current setting the pulse width t_pulse_ahp for the spike frequency adaptation block, in Amperes
Ipulse – bias current setting the pulse width t_pulse for the neuron membrane, in Amperes
Iref – bias current setting the refractory period t_ref, in Amperes
Ispkthr – spiking threshold current; the neuron spikes if \(I_{mem} > I_{spkthr}\), in Amperes
Itau_ahp – spike frequency adaptation leakage current setting the time constant tau_ahp, in Amperes
Itau_mem – neuron membrane leakage current setting the time constant tau_mem, in Amperes
Itau_syn – combined (AMPA, GABA, NMDA, SHUNT) synapse leakage current setting the time constant tau_syn, in Amperes
Iw_ahp – spike frequency adaptation weight current of the neurons of the core, in Amperes
input_nodes – The input nodes attached to this module
output_nodes – The output nodes attached to this module
name – An arbitrary name attached to this specific GraphModule
computational_module – The computational module that acts as the source for this graph module
Methods overview
__init__(input_nodes, output_nodes, name, ...)
add_input(node) – Add a GraphNode as an input source to this module, and connect it
add_output(node) – Add a GraphNode as an output of this module, and connect it
clear_inputs() – Remove all GraphNodes as inputs of this module
clear_outputs() – Remove all GraphNodes as outputs of this module
current_attrs() – lists all current parameters stored inside the DynapseNeurons computational graph
get_full() – creates a dictionary of parametric current attributes with extended current values
merge(graph_list) – combines a list of computational DynapseNeurons objects into one
remove_input(node) – Remove a GraphNode as an input of this module, and disconnect it
remove_output(node) – Remove a GraphNode as an output of this module, and disconnect it
- Idc: IntVector | FloatVector
Constant DC current injected to membrane in Amperes
- If_nmda: IntVector | FloatVector
NMDA gate soft cut-off current setting the NMDA gating voltage in Amperes
- Igain_ahp: IntVector | FloatVector
gain bias current of the spike frequency adaptation block in Amperes
- Igain_mem: IntVector | FloatVector
gain bias current for neuron membrane in Amperes
- Igain_syn: IntVector | FloatVector
gain bias current of synaptic gates (AMPA, GABA, NMDA, SHUNT) combined in Amperes
- Ipulse: IntVector | FloatVector
bias current setting the pulse width t_pulse for the neuron membrane, in Amperes
- Ipulse_ahp: IntVector | FloatVector
bias current setting the pulse width t_pulse_ahp for the spike frequency adaptation block, in Amperes
- Iref: IntVector | FloatVector
bias current setting the refractory period t_ref, in Amperes
- Iscale: float | None = None
the scaling current
- Ispkthr: IntVector | FloatVector
spiking threshold current, neuron spikes if \(I_{mem} > I_{spkthr}\) in Amperes
- Itau_ahp: IntVector | FloatVector
Spike frequency adaptation leakage current setting the time constant tau_ahp, in Amperes
- Itau_mem: IntVector | FloatVector
Neuron membrane leakage current setting the time constant tau_mem, in Amperes
- Itau_syn: IntVector | FloatVector
Combined (AMPA, GABA, NMDA, SHUNT) synapse leakage current setting the time constant tau_syn, in Amperes
- Iw_ahp: IntVector | FloatVector
spike frequency adaptation weight current of the neurons of the core in Amperes
- __gain_current(r: float, Itau: float | ndarray | Tensor | array) float | ndarray | Tensor | array
__gain_current converts a gain ratio to an amplifier gain current using the leakage current provided
- Parameters:
r (float) – the desired amplifier gain ratio
Itau (FloatVector) – the leakage current it depends on
- Returns:
an amplifier gain current
- Return type:
FloatVector
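Given that the gain of a DPI amplifier stage is set by the ratio between its gain and leakage currents, a plausible reading of this conversion is simply \(I_{gain} = r \cdot I_{\tau}\), with \(I_{\tau}\) the provided leakage current (an assumption; the implementation may add device-specific corrections).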
- __get_equal_shape() Tuple[int]
__get_equal_shape makes sure that all arguments have the same shape
- Raises:
ValueError – Attribute shapes do not match!
- Returns:
the equal shape of all the arguments
- Return type:
Tuple[int]
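A sketch of what this check plausibly does; _equal_shape_sketch is a hypothetical standalone helper, whereas the real method reads the current attributes from self:

    import numpy as np

    def _equal_shape_sketch(*attrs) -> tuple:
        # Collect the shape of every vector attribute; all must agree.
        shapes = {np.asarray(a).shape for a in attrs}
        if len(shapes) != 1:
            raise ValueError("Attribute shapes do not match!")
        return shapes.pop()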
- __init__(input_nodes: SetList[GraphNode], output_nodes: SetList[GraphNode], name: str, computational_module: Any, Idc: IntVector | FloatVector = <factory>, If_nmda: IntVector | FloatVector = <factory>, Igain_ahp: IntVector | FloatVector = <factory>, Igain_mem: IntVector | FloatVector = <factory>, Igain_syn: IntVector | FloatVector = <factory>, Ipulse_ahp: IntVector | FloatVector = <factory>, Ipulse: IntVector | FloatVector = <factory>, Iref: IntVector | FloatVector = <factory>, Ispkthr: IntVector | FloatVector = <factory>, Itau_ahp: IntVector | FloatVector = <factory>, Itau_mem: IntVector | FloatVector = <factory>, Itau_syn: IntVector | FloatVector = <factory>, Iw_ahp: IntVector | FloatVector = <factory>, Iscale: float | None = None, dt: float | None = None) None
- __leakage_current(tc: float | ndarray | Tensor | array, C: float) float | ndarray | Tensor | array
__leakage_current uses the default layout configuration and converts a time constant to a leakage current, using the conversion method defined in the DynapSimTime module
- Parameters:
tc (FloatVector) – the time constant in seconds
C (float) – the capacitance value in Farads
- Returns:
the leakage current
- Return type:
FloatVector
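Assuming the standard DPI time-constant relation used in the Dynap-SE literature, with thermal voltage \(U_T\), subthreshold slope factor \(\kappa\), and the capacitance \(C\) taken from the default layout, the conversion would be \(\tau = \frac{C U_T}{\kappa I_{\tau}}\), hence \(I_{\tau} = \frac{C U_T}{\kappa \tau}\).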
- __pulse_current(t_pw: float | ndarray | Tensor | array, C: float) float | ndarray | Tensor | array
__pulse_current uses the default layout configuration and converts a pulse width to a pulse current, using the conversion method defined in the DynapSimTime module
- Parameters:
t_pw (FloatVector) – the pulse width in seconds
C (float) – the capacitance value in Farads
- Returns:
a pulse current
- Return type:
FloatVector
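By analogy with the leakage conversion above, the pulse-width conversion presumably applies the same capacitor-based relation with the pulse sub-circuit capacitance, i.e. \(I_{pulse} \propto C / t_{pw}\) (an assumption; the exact constant lives in DynapSimTime).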
- __scale(v: float | ndarray | Tensor | array, scale: float) List[float]
__scale converts any FloatVector to a list and scales it
- Parameters:
v (FloatVector) – the float vector of interest
scale (float) – the scaling factor
- Returns:
a float list
- Return type:
List[float]
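A sketch consistent with this description (hypothetical standalone helper):

    import numpy as np

    def _scale_sketch(v, scale: float) -> list:
        # Promote any FloatVector (list, ndarray, tensor) to a float
        # array, multiply by the scalar factor, return a plain list.
        return (np.asarray(v, dtype=float) * scale).tolist()

    _scale_sketch([1e-12, 2e-12], 0.5)  # [5e-13, 1e-12]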
- classmethod _convert_from(mod: GraphModule, r_gain_mem: float | ndarray | Tensor | array = 4, r_gain_syn: float | ndarray | Tensor | array = 100, t_pulse: float | ndarray | Tensor | array = 1e-05, t_ref: float | ndarray | Tensor | array = 0.001, C_pulse: float | ndarray | Tensor | array = 5e-13, C_ref: float | ndarray | Tensor | array = 1.5e-12, C_mem: float | ndarray | Tensor | array = 3e-12, C_syn: float | ndarray | Tensor | array = 2.5e-11, Iscale: float = 1e-08) DynapseNeurons [source]
_convert_from converts a graph module to a DynapseNeurons structure, using default parameters.
NOTE: LIF neurons do not have an equivalent computation for
- AHP (after-hyperpolarization)
- NMDA voltage-dependent gating
Therefore the Itau_ahp, If_nmda, Igain_ahp, Ipulse_ahp, and Iw_ahp currents are zero.
- Parameters:
mod (GraphModule) – the reference graph module
r_gain_mem (FloatVector, optional) – neuron membrane gain ratio, defaults to default_gain_ratios[“r_gain_mem”]
r_gain_syn (FloatVector, optional) – synapse gain ratio, defaults to default_gain_ratios[“r_gain_ampa”]
t_pulse (FloatVector, optional) – the spike pulse width for neuron membrane in seconds, defaults to default_time_constants[“t_pulse”]
t_ref (FloatVector, optional) – refractory period of the neurons in seconds, defaults to default_time_constants[“t_ref”]
C_pulse (FloatVector, optional) – pulse-width creation sub-circuit capacitance in Farads, defaults to default_layout[“C_pulse”]
C_ref (FloatVector, optional) – refractory period sub-circuit capacitance in Farads, defaults to default_layout[“C_ref”]
C_mem (FloatVector, optional) – neuron membrane capacitance in Farads, defaults to default_layout[“C_mem”]
C_syn (FloatVector, optional) – synaptic capacitance in Farads, defaults to default_layout[“C_syn”]
Iscale (float, optional) – the scaling current, defaults to default_weights[“Iscale”]
- Raises:
ValueError – graph module cannot be converted
- Returns:
DynapseNeurons object instance
- Return type:
DynapseNeurons
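A hypothetical usage sketch; lif_graph stands for a neuron graph module extracted from a converted LIF network (for example via a module's as_graph() method), and all names are illustrative:

    # lif_graph: a GraphModule describing LIF neurons (illustrative)
    dynapse_neurons = DynapseNeurons._convert_from(
        lif_graph,
        r_gain_mem=4,  # membrane gain ratio (documented default)
        t_ref=1e-3,    # 1 ms refractory period
        Iscale=1e-8,   # weight scaling current
    )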
- classmethod _factory(size_in: int, size_out: int, name: str | None = None, computational_module: Any | None = None, *args, **kwargs) GraphModuleBase
Build a new GraphModule or GraphModule subclass, with new input and output GraphNodes created automatically.
Use this factory method to construct a new GraphModule from scratch when new input and output GraphNodes need to be created automatically. This helper method is inherited by new GraphModule subclasses, and acts as a factory method for your custom GraphModule subclass as well.
- Parameters:
size_in (int) – The number of input GraphNodes to create and attach
size_out (int) – The number of output GraphNodes to create and attach
name (str, optional) – An arbitrary name to attach to this GraphModule, defaults to None
computational_module (Optional[Module], optional) – A rockpool computational module that forms the “generator” of this graph module, defaults to None
- Returns:
The newly constructed GraphModule or GraphModule subclass
- Return type:
GraphModuleBase
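Because the factory is inherited, a custom subclass gets it for free; a sketch, assuming graph modules are dataclasses (as the <factory> defaults in the signature above suggest) and that GenericNeurons is importable from rockpool.graph:

    from dataclasses import dataclass
    from rockpool.graph import GenericNeurons

    @dataclass(eq=False, repr=False)
    class MyNeurons(GenericNeurons):
        ...

    # Creates and attaches two input and two output GraphNodes
    mod = MyNeurons._factory(size_in=2, size_out=2, name="custom")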
- add_input(node: GraphNode) None
Add a GraphNode as an input source to this module, and connect it.
The new node will be appended after the last current input node. The node will be connected with this GraphModule as a sink.
- Parameters:
node (GraphNode) – The node to add as an input source
- add_output(node: GraphNode) None
Add a GraphNode as an output of this module, and connect it.
The new node will be appended after the last current output node. The node will be connected with this GraphModule as a source.
- Parameters:
node (GraphNode) – The node to add as an output
- computational_module: Module
The computational module that acts as the source for this graph module
- Type:
Module
- static current_attrs() List[str] [source]
current_attrs lists all current parameters stored inside the DynapseNeurons computational graph
- Returns:
a list of parametric currents
- Return type:
List[str]
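Since current_attrs is a static method, it can be called without an instance:

    # Expected to contain the current attribute names documented above,
    # e.g. "Idc", "If_nmda", ..., "Iw_ahp" (exact list per implementation).
    print(DynapseNeurons.current_attrs())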
- dt: float | None = None
the time step for the forward-Euler ODE solver
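For reference, a forward-Euler step with this dt advances each state variable \(x\) as \(x(t + dt) = x(t) + dt \cdot \dot{x}(t)\).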
- get_full() Dict[str, ndarray] [source]
get_full creates a dictionary of parametric current attributes with extended current values
- Returns:
the object dictionary with extended current arrays
- Return type:
Dict[str, np.ndarray]
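A usage sketch, reusing the hypothetical neurons instance from the construction example near the top of this page:

    currents = neurons.get_full()
    for name, values in currents.items():
        # each array is extended to one entry per output neuron
        print(name, values.shape)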
- classmethod merge(graph_list: List[DynapseNeurons]) DynapseNeurons [source]
merge combines a list of computational DynapseNeurons objects into one. The attribute lengths are equal to the number of output nodes. Even if a submodule has single-valued attributes, they are repeated as many times as its number of output neurons.
NOTE: the returned object is connected to neither the inputs nor the outputs of any of the given modules.
- Parameters:
graph_list (List[DynapseNeurons]) – an ordered list of DynapseNeurons objects
- Returns:
a single DynapseNeurons object with combined arrays of attributes
- Return type:
DynapseNeurons
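A usage sketch; core0 and core1 are hypothetical per-core DynapseNeurons instances:

    combined = DynapseNeurons.merge([core0, core1])
    # The combined attribute arrays hold one entry per output neuron
    # across both cores; the result is connected to none of the original
    # modules' input or output nodes.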
- name: str
An arbitrary name attached to this specific GraphModule
- Type:
str
- output_nodes: SetList['GraphNode']
The output nodes attached to this module
- Type:
SetList[GraphNode]
- remove_input(node: GraphNode) None
Remove a GraphNode as an input of this module, and disconnect it.
The node will be disconnected from this GraphModule as a sink, and will be removed from the module.
- Parameters:
node (GraphNode) – The node to remove. If this node exists as an input to the module, it will be removed.
- remove_output(node: GraphNode) None
Remove a GraphNode as an output of this module, and disconnect it.
The node will be disconnected from this GraphModule as a source, and will be removed from the module.
- Parameters:
node (GraphNode) – The node to remove. If this node exists as an output to the module, it will be removed.