Full API summary for Rockpool
Package structure summary
A machine learning library for training and deploying spiking neural network applications
Base classes
The native Python / numpy Module base class
The Rockpool base class for all Module classes
Attribute types
Base class for Rockpool registered attributes
Represent a module parameter
Represent a module state
Represent a module simulation parameter
Identify an initialisation argument as a constant (non-trainable) parameter |
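To show how these attribute types combine, here is a minimal custom module sketch; the leaky-integrator dynamics are illustrative, not a Rockpool built-in.

```python
import numpy as np

from rockpool.nn.modules import Module
from rockpool.parameters import Parameter, SimulationParameter, State

class LeakyIntegrator(Module):
    """Illustrative Module subclass (not part of Rockpool)."""

    def __init__(self, shape, tau: float = 20e-3, dt: float = 1e-3):
        super().__init__(shape=shape)

        # Trainable parameter: per-neuron time constants
        self.tau = Parameter(np.full(self.size_out, tau), shape=(self.size_out,))

        # State variable: maintained between calls to evolve()
        self.v = State(np.zeros(self.size_out), shape=(self.size_out,))

        # Simulation parameter: neither trained nor part of the module state
        self.dt = SimulationParameter(dt)

    def evolve(self, input_data, record: bool = False):
        alpha = np.exp(-self.dt / self.tau)
        output = []

        # Iterate over time steps (input_data: shape (T, N))
        for x_t in input_data:
            self.v = alpha * self.v + x_t
            output.append(self.v)

        return np.stack(output), self.state(), {}
```

For built-in modules, wrapping an init argument in Constant (e.g. `LIF(4, tau_mem=Constant(20e-3))`) marks it as non-trainable, removing it from the `parameters()` tree.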
Alternative base classes
Base class for high-level timed modules, operating on time series data
Base class for modules that are compatible with both Torch and Rockpool
Combinator modules
Assemble modules into a feed-forward stack, with linear weights in between
Build a sequential stack of modules by connecting them end-to-end
Build a residual block over a sequential stack of modules
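As a usage sketch (module sizes arbitrary), combinators compose modules into networks that evolve with a single call:

```python
import numpy as np

from rockpool.nn.combinators import Residual, Sequential
from rockpool.nn.modules import LIF, Linear

net = Sequential(
    Linear((2, 8)),
    LIF(8),
    Residual(           # residual block: output is added to the block input
        Linear((8, 8)),
        LIF(8),
    ),
    Linear((8, 2)),
    LIF(2),
)

# Evolve over a (T, N_in) input raster; Rockpool modules return
# the output, the updated state and a record dictionary
output, state, recordings = net(np.random.rand(100, 2))
```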
Time series classes
Base class to represent a continuous or event-based time series.
Represents a continuously-sampled time series.
Represents a discrete time series, composed of binary events (present or absent).
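A brief sketch of constructing and sampling both concrete time series classes:

```python
import numpy as np

from rockpool import TSContinuous, TSEvent

# Continuous time series from a clocked array of shape (T, channels)
ts_cont = TSContinuous.from_clocked(np.random.rand(100, 2), dt=1e-3)

# Event-based time series from event times and channel indices
ts_event = TSEvent([0.01, 0.02, 0.05], [0, 1, 0], t_stop=0.1, num_channels=2)

print(ts_cont(0.05))             # interpolate the continuous series at t = 50 ms
print(ts_event.raster(dt=0.01))  # rasterise events to a boolean (T, channels) array
```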
Module subclasses
Encapsulates a population of rate neurons, supporting feed-forward and recurrent modules
Encapsulates a population of rate neurons, supporting feed-forward and recurrent modules, with a Jax backend
Encapsulates a population of rate neurons, supporting feed-forward and recurrent modules, with a Torch backend
A leaky integrate-and-fire spiking neuron model
A leaky integrate-and-fire spiking neuron model, with a Jax backend
A leaky integrate-and-fire spiking neuron model with a Torch backend
A leaky integrate-and-fire spiking neuron model with adaptive hyperpolarisation, with a Torch backend
A leaky integrate-and-fire spiking neuron model
Feedforward layer that converts each analogue input channel to one spiking up and one spiking down channel.
Encapsulates a linear weight matrix
Encapsulates a linear weight matrix, with a Jax backend
Applies a linear transformation to the incoming data: y = xA + b
Wrap a callable function as an instantaneous Rockpool module
Wrap a callable function as an instantaneous Rockpool module, with a Jax backend
Wrap a callable function as an instantaneous Rockpool module, with a Torch backend
Exponential synapse module
Exponential synapse module with a Jax backend
Exponential synapse module with a Torch backend
A Jax-backed module implementing a smoothed weighted softmax, compatible with spiking inputs
A Jax-backed module implementing a smoothed weighted log-softmax, compatible with spiking inputs
Define a Butterworth filter bank (mel spacing) filtering layer with continuous sampled output
Define a Butterworth filter bank filtering layer with continuous output
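As a usage sketch for these Module subclasses, instantiating and evolving an LIF population directly (parameter values arbitrary):

```python
import numpy as np

from rockpool.nn.modules import LIF

# A population of 4 LIF neurons
mod = LIF(4, tau_mem=20e-3, tau_syn=10e-3, dt=1e-3)

# Evolve over 100 time steps of input, recording internal state
spikes, state, recordings = mod(np.random.rand(100, 4), record=True)

print(spikes.shape)       # output spike raster
print(recordings.keys())  # recorded internal variables, e.g. membrane potentials
```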
Layer subclasses from Rockpool v1
These classes are deprecated, but remain usable via the high-level API until they are converted to the v2 API.
Base class for layers in Rockpool
A spiking feedforward layer with current inputs and spiking outputs
Spiking feedforward layer with spiking inputs and outputs
A spiking recurrent layer with current inputs and spiking outputs, using a Brian2 backend
Spiking recurrent layer with spiking inputs and outputs, and a Brian2 backend
Define an exponential synapse layer (spiking input), with a Brian2 backend
Standard networks
Implement a WaveSense network
Define a SynNet network, a standard feed-forward spiking architecture
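A hedged construction sketch for WaveSense; the argument names follow the WaveSense tutorial and the values are placeholders, so consult the class signature for the full set:

```python
from rockpool.nn.networks import WaveSenseNet

net = WaveSenseNet(
    dilations=[2, 32],    # dilation of each WaveSense block
    n_classes=2,          # number of output classes
    n_channels_in=16,     # input channels
    n_channels_res=16,    # width of the residual path
    n_channels_skip=32,   # width of the skip path
    n_hidden=32,          # hidden layer width
    kernel_size=2,        # temporal kernel size
    dt=1e-3,              # simulation time step
)
```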
Conversion utilities
Wrap a low-level Rockpool Module as a high-level TimedModule
An adapter class to wrap a Rockpool v1 Layer as a v2 TimedModule
Convert a Rockpool v1 class to a v2 class
Jax training utilities
Jax functions useful for training networks using Jax Modules
Functions to implement adversarial training approaches using Jax
Performs the PGA (projected gradient ascent) based attack on the parameters of the network, given inputs.
Implement a hybrid task / adversarial robustness loss
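A minimal functional-API training sketch using the packaged MSE loss (network and data are placeholders):

```python
import jax
import numpy as np

from rockpool.nn.combinators import Sequential
from rockpool.nn.modules import LIFJax, LinearJax
from rockpool.training import jax_loss as l

net = Sequential(LinearJax((2, 8)), LIFJax(8), LinearJax((8, 1)))

def loss_fn(params, net, inputs, target):
    net = net.set_attributes(params)    # apply the candidate parameters
    output, _, _ = net(inputs)
    return l.mse(output, target)

params = net.parameters()               # tree of trainable parameters
x, y = np.random.rand(100, 2), np.zeros((100, 1))

# Loss and gradients w.r.t. the parameter tree, ready for any Jax optimiser
loss, grads = jax.value_and_grad(loss_fn)(params, net, x, y)
```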
PyTorch training utilities
Torch loss functions and regularizers useful for training networks using Torch Modules.
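A condensed Torch training-loop sketch; the `.astorch()` call exposes Rockpool parameters to standard Torch optimizers (sizes and data are placeholders):

```python
import torch

from rockpool.nn.combinators import Sequential
from rockpool.nn.modules import LIFTorch, LinearTorch

net = Sequential(LinearTorch((2, 8)), LIFTorch(8), LinearTorch((8, 1)))
optimizer = torch.optim.Adam(net.parameters().astorch(), lr=1e-3)

x = torch.rand(1, 100, 2)          # (batch, time, channels)
target = torch.zeros(1, 100, 1)

for _ in range(10):
    optimizer.zero_grad()
    output, _, _ = net(x)
    loss = torch.nn.functional.mse_loss(output, target)
    loss.backward()
    optimizer.step()
```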
PyTorch transformation API (beta)
Defines the parameter and activation transformation-in-training pipeline for Torch modules
Xylo™ hardware support and simulation
Support modules
Enumerate connected Xylo HDKs, and import the corresponding support module |
Xylo-family device simulations, deployment and HDK support
Package to support the Xylo HW SYNS61300 (Xylo™ core; "Pollen")
Package to support the Xylo HW SYNS65300 (Xylo™Audio 1)
Package to support the Xylo HW SYNS61201 (Xylo™Audio 2)
Package to support the Xylo HW SYNS65302 (Xylo™Audio 3)
Package to support the Xylo HW SYNS63300 (Xylo™IMU)
Quantize a Xylo model for deployment, using global parameter scaling
Quantize a Xylo model for deployment, using per-channel parameter scaling
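A discovery sketch; `find_xylo_hdks` pairs each connected HDK with its matching support module (the return order shown here is an assumption, so treat it as indicative):

```python
from rockpool.devices.xylo import find_xylo_hdks

# Enumerate connected Xylo HDKs, with the support module for each device
hdks, support_modules, versions = find_xylo_hdks()

for hdk, module, version in zip(hdks, support_modules, versions):
    print(f"Found Xylo HDK version '{version}', supported by {module.__name__}")
```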
Xylo™Audio 2 support
Map a computational graph onto the Xylo v2 (SYNS61201) architecture
Convert a full network specification to a Xylo config and validate it
Read a Xylo configuration from disk in JSON format
Save a Xylo configuration to disk in JSON format
Calculate the average number of cycles required for a given network architecture
Estimate the required master clock frequency to run a network in real-time
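Putting these pieces together, a deployment-pipeline sketch for Xylo™Audio 2, following the documented mapper → quantize → config pattern (the network itself is a placeholder):

```python
from rockpool.devices.xylo import syns61201 as x2
from rockpool.nn.combinators import Sequential
from rockpool.nn.modules import LIF, Linear
from rockpool.transform import quantize_methods as q

net = Sequential(Linear((16, 8)), LIF(8), Linear((8, 2)), LIF(2))

# Map the traced computational graph onto the Xylo v2 architecture
spec = x2.mapper(net.as_graph(), weight_dtype="float")

# Quantize the specification and build a validated hardware configuration
spec.update(q.channel_quantize(**spec))
config, is_valid, msg = x2.config_from_specification(**spec)
assert is_valid, msg

# Simulate the deployed network bit-accurately, and save the config to disk
mod = x2.XyloSim.from_config(config)
x2.save_config(config, "xylo_network.json")
```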
Interface to the Audio Front-End module on a Xylo-A2 HDK
A digital divisive normalization block
Xylo™Audio 3 support
AFESim module that simulates audio signal preprocessing on the XyloAudio 3 chip.
AFESim module that simulates audio signal preprocessing on the XyloAudio 3 chip.
Map a computational graph onto the XyloAudio 3 architecture
Convert a full network specification to a XyloAudio 3 config and validate it
Read a Xylo configuration from disk in JSON format
Save a Xylo configuration to disk in JSON format
Calculate the average number of cycles required for a given network architecture
Estimate the required master clock frequency to run a network in real-time
Xylo™IMU support
Map a computational graph onto the Xylo IMU architecture
Convert a full network specification to a Xylo config and validate it
Read a Xylo configuration from disk in JSON format
Save a Xylo configuration to disk in JSON format
Calculate the average number of cycles required for a given network architecture
Estimate the required master clock frequency to run a network in real-time
IMU Preprocessing Interface
A module wrapping the Xylo IMU IF on hardware, permitting recording
Interface to the IMU sensor on a Xylo IMU HDK
IMU-IF submodules, as implemented in Xylo IMU |
A Rockpool module simulating the rotation estimation and removal block in the Xylo IMU interface
Class that instantiates a single quantised band-pass filter, as implemented on Xylo IMU hardware
This class builds the block-diagram version of the filters, exactly as implemented in hardware
Encode the input signal as spikes
Synchronous integrate-and-fire spike encoder
A quantizer that converts the input signal into a Python object which can handle and simulate arbitrary register sizes in the hardware implementation
Dynap-SE2 hardware support and simulation
Dynap-SE2 Application Programming Interface (API)
Simulation
Dynap-SE2 Simulation Module
DynapSim solves the dynamical chip equations for the DPI neuron and synapse models.
Mismatch
mismatch_generator returns a function which simulates the analog device mismatch effect.
frozen_mismatch_prototype processes the module attribute tree and returns a frozen mismatch prototype, indicating the values to be deviated.
dynamic_mismatch_prototype processes the module attribute tree and returns a dynamical mismatch prototype, indicating the values to be deviated at run-time.
Device to Simulation
mapper maps a computational graph onto the Dynap-SE2 architecture.
autoencoder_quantization executes the unsupervised weight configuration learning approach.
config_from_specification takes a specification and creates a samna configuration object for the Dynap-SE2 chip.
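A conversion-pipeline sketch mirroring the Xylo flow above; the keyword handling is assumed analogous, so treat the argument passing as indicative:

```python
from rockpool.devices.dynapse import (
    autoencoder_quantization,
    config_from_specification,
    mapper,
)

def deploy_to_dynapse(net):
    """Sketch: convert a DynapSim-compatible Rockpool network to a samna config."""

    # Map the traced computational graph onto the Dynap-SE2 architecture
    spec = mapper(net.as_graph())

    # Learn a quantized weight configuration (unsupervised autoencoder approach)
    spec.update(autoencoder_quantization(**spec))

    # Create a samna configuration object for the Dynap-SE2 chip
    return config_from_specification(**spec)
```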
Computer Interface
find_dynapse_boards identifies the Dynap-SE2 boards plugged into the system.
DynapseSamna bridges the gap between the chip and the computer.
Simulation to Device
dynapsim_net_from_specification takes a specification and creates a sequential DynapSim network, consisting of a linear layer (virtual connections) and a recurrent layer (hardware connections).
dynapsim_net_from_config constructs a DynapSim network from a samna configuration object.
More
DynapseNeurons stores the core computational properties of a Dynap-SE network.
DynapSimCore stores the simulation currents and manages the conversion from configuration objects.
Graph tracing and mapping
Base modules
Base class for graph modules
Describe a module of computation in a graph
Describe a node connecting graph modules
Encapsulate a computational graph
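Graphs are typically obtained by tracing a composed network, as a basis for hardware mapping; a short sketch:

```python
from rockpool.nn.combinators import Sequential
from rockpool.nn.modules import LIF, Linear

net = Sequential(Linear((2, 8)), LIF(8))

# Trace the network into a computational graph of modules and connecting nodes
graph = net.as_graph()
print(graph)
```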
Computational graph modules
Utilities for generating and manipulating computational graphs |
General Utilities
Utility functionality for managing backends
Tree manipulation utilities with no external dependencies
Utility functions for working with trees
Convenience functions for checking and converting object types
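A small sketch of backend management, assuming the `backend_available` helper used throughout the library:

```python
from rockpool.utilities.backend_management import backend_available

# Check whether an optional backend is importable before using it
if backend_available("torch"):
    from rockpool.nn.modules import LIFTorch

    mod = LIFTorch(4)
```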
NIR import and export
Aliases of the NIR import and export functions
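A hedged round-trip sketch; the alias names `to_nir` / `from_nir` and their import path are assumptions here, so check the package namespace for the exact symbols:

```python
import torch

from rockpool.nn.combinators import Sequential
from rockpool.nn.modules import LIFTorch, LinearTorch, from_nir, to_nir

net = Sequential(LinearTorch((2, 8)), LIFTorch(8))

# Export the Torch-backed network to a NIR graph, given sample input data
nir_graph = to_nir(net, torch.rand(1, 10, 2))

# Re-import the NIR graph as a Rockpool network
net2 = from_nir(nir_graph)
```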