Change log
All notable changes between Rockpool releases will be documented in this file.
[v2.9.1 hotfix] — 2024-10-14
Fixed
- Rockpool package was not generated on conda-forge; updated package build requirements
[v2.9] — 2024-10-11
Added
- Support for the Xylo™Audio 3 development kit:
  - Hardware interface via `samna`
  - Digital microphone input and simulation package
  - Cycles model
- Simulation support for the audio front-end: `AFESimExternal`, `AFESimAGC` and `AFESimPDM`, with all the necessary sub-modules
- Tutorial and documentation for the `SynNet` architecture, to improve visibility
Changed
- Updated release notes for developers in the documentation
- Added check for version
- Added check for copyright
- Updated dependency version of Jax to >=0.4.28
- Moved instructions for building the documentation into the Contributing section
[v2.8] — 2024-06-24
Added
- Added cycles model for Xylo Audio and Xylo IMU, enabling users to calculate the required master clock frequency for Xylo
- Added support for NIR, for importing and exporting Rockpool torch networks
Changed
- `LIFExodus` now supports vectors as threshold parameter
- Standard `LIF` modules now have `w_rec` as a simulation parameter when in non-recurrent mode
Fixed
- Fixed `TypeError` when using `LIFExodus`
- Updated `jax.config` usage
- Power measurement for `xyloA2` was not considering AFE channels
- Removed `check_grads` from Jax tests, since this will fail for LIF neurons due to surrogate gradients
- Fixed a bug in `AFESim` on Windows, where the maximum int32 value would be exceeded when seeding the AFE simulation
- Fixed stochasticity in some unit tests
- Fixed a bug in `channel_quantize`, where quantization would be incorrectly applied for Xylo IMU networks with Nien < Nhid
- Fixed a bug in `channel_quantize`, where hidden unit biases would be incorrectly used in place of output unit biases
- Fixed a non-handled buffer bug in `LIFJax`, where non-recurrent modules would sometimes have garbage in `w_rec` instead of all zeros
- Fixed a bug in `TorchSequential.as_graph()`, where torch module functions would be called instead of Rockpool modules, leading to a failing call to `.as_graph()`
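The `AFESim` seeding fix above reflects a general constraint: NumPy's legacy RNG only accepts seeds in the 32-bit range, which can overflow on Windows where the default integer is 32-bit. A minimal sketch of the clamping idea (a hypothetical helper, not Rockpool's actual implementation):

```python
def safe_seed(seed: int) -> int:
    # NumPy's legacy RandomState only accepts seeds in [0, 2**32);
    # larger values overflow where the platform int is 32-bit (e.g. Windows).
    return int(seed) % (2**32)

print(safe_seed(2**40 + 123))  # 123: upper bits discarded, seed stays in range
```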
Deprecated
- Brian2 tests are not running; the Brian2 backend will soon be removed
Removed
Security
[v2.7.1 hotfix] — 2024-01-19
Fixed
- Bug in the Xylo IMU mapper, where networks with more than 128 hidden neurons could not be mapped
[v2.7] — 2023-09-25
Added
- Dependency on `pytest-random-order` v1.1.0 for test order randomization
- New HowTo tutorial for performing constrained optimisation with torch and jax
- Xylo IMU application software support:
  - `mapper`, `config_from_specification` and graph mapping support
  - `XyloSim` module: SNN core simulation for Xylo IMU
  - `IMUIFSim` module: simulation of the input encoding interface, with sub-modules `BandPassFilter`, `FilterBank`, `RotationRemoval`, `IAFSpikeEncoder` and `ScaleSpikeEncoder`
  - `XyloIMUMonitor` module: real-time hardware monitoring for Xylo IMU
  - `XyloSamna` module: interface to the SNN core
  - `IMUIFSamna` module: interface to `IMUIF`, utilizing neurons in the SNN core
  - `IMUData` module: collection of sensor data from the onboard IMU sensor
  - Utility functions for network mapping to the Xylo IMU HDK, interfacing, and data processing
- Introductory documentation providing an overview of Xylo IMU and instructions on configuring preprocessing
- New losses, with structure similar to PyTorch:
  - `PeakLoss`, which can be imported as `peak_loss = rockpool.nn.losses.PeakLoss()`
  - `MSELoss`, which can be imported as `mse_loss = rockpool.nn.losses.MSELoss()`
Changed
- Updated dependency version of pytest-xdist to >=3.2.1
- Updates to the `Sequential` API:
  - `Sequential` now permits instantiation with an `OrderedDict` to specify module names
  - `Sequential` now supports an `.append()` method, to append new modules, optionally specifying a module name
- Cleaned up tree manipulation libraries and added them to the documentation; implemented unit tests
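The `Sequential` naming changes above can be illustrated with a toy stand-in (purely illustrative; `TinySequential` is not part of Rockpool):

```python
from collections import OrderedDict

class TinySequential:
    """Toy container mimicking named-submodule semantics."""
    def __init__(self, modules=None):
        # An OrderedDict maps user-chosen names to modules, preserving order
        self._modules = OrderedDict(modules or {})

    def append(self, module, name=None):
        # Generate a positional name when none is supplied
        name = name if name is not None else str(len(self._modules))
        self._modules[name] = module
        return self

seq = TinySequential(OrderedDict([("lin", "<LinearJax>"), ("lif", "<LIFJax>")]))
seq.append("<ExpSynJax>", name="syn_out")
print(list(seq._modules))  # ['lin', 'lif', 'syn_out']
```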
- Removed obsolete unit tests
- Changed semantics of transformation configurations for QAT, to only include attributes which will be transformed, rather than all attributes. This fixes an incompatibility with torch >= 2.0
- Added support for the latest `torch` versions
- New fine-grained installation options
- Renamed the power measurement dict keys returned by the Xylo Audio 2 (`syns61201`) `XyloSamna` module, to be more descriptive
- Upgraded minimum supported Python version to 3.8
- Upgraded minimum supported JAX version to 0.4.10
- Rearranged Xylo documentation to separate the overview, Xylo Audio and Xylo IMU
Fixed
- Fixed bug in initialising access to the MC3620 IMU sensor on the Xylo IMU HDK, where it would fail with an error on the second initialisation
Deprecated
Removed
- NEST backend completely removed
- Removed spiking output surrogate "U" from LIF modules
Security
[v2.6] — 2023-03-22
Added
- Dynap-SE2 application software support (jax backend):
  - `DynapSim` neuron model with its own custom surrogate gradient implementation
  - `DynapSamna` module handling the low-level HDK interface under the hood
  - Bi-directional conversion utilities between Rockpool networks and hardware configurations
  - Network mapping: `mapper()` and `config_from_specification()`
  - Sequentially combined `LinearJax` + `DynapSim` network getters: `dynapsim_net_from_config()` and `dynapsim_net_from_spec()`
  - Transistor lookup tables to ease conversion between high-level parameters, currents and DAC (coarse, fine) values
  - Dynap-SE2-specific auto-encoder quantization: `autoencoder_quantization()`
  - Custom `DigitalAutoEncoder` implementation and training pipeline
  - `samna` alias classes compensating for the missing documentation support
  - Unit tests, tutorials and developer docs
  - `DynapseNeuron` graph module which supports conversion from and to `LIFNeuronWithSynsRealValue` graphs
  - Hardcoded frozen and dynamic mismatch prototypes
  - Mismatch transformation (jax)
- `LIFExodus` now supports training time constants, and multiple time constants
- Improved API for `LIFTorch`
- Implemented `ExpSynExodus` for accelerated training of exponential synapse modules
- Added initial developer documentation
- Added MNIST tutorial
- Fixed notebook links to MyBinder.org
Changed
- Updated Samna version requirement to >=0.19.0
- The user now explicitly defines the CUDA device for `LIFExodus`, `ExpSynExodus` and `LIFMembraneExodus`
- Improved error message when a backend is missing
- Improved transient removal in `syns61201.AFESim`
Fixed
- Weight scaling was too different for output layers and hidden layers
- Hotfix: regression in `LIFExodus`
[v2.5] — 2022-11-29
Added
- Added support for Xylo-Audio v2 (SYNS61201) devices and HDK
- Added hardware versioning for Xylo devices
- Added a beta implementation of Quantisation-Aware Training for the Torch backend, in `rockpool.transform.torch_transform`
- Added support for parameter boundary constraints in `rockpool.training.torch_loss`
- Added tutorial for Spiking Heidelberg Digits audio classification
- Added tutorial and documentation for the WaveSense network architecture
- Added support to `LIFTorch` for training decays and bitshift parameters
- Added a new utility package `rockpool.utilities.tree_utils`
Changed
- Updated support for Exodus v1.1
- Updated `XyloSim.from_specification` to handle NIEN ≠ NRSN ≠ NOEN for Xylo devices
- Updated `LIFTorch` to provide proper `tau`s for `.as_graph()` in the case of decay and bitshift training
- Improved backend management, to test torch version requirements
Fixed
- Fixed usage of Jax optimisers in tutorial notebooks to reflect Jax API changes
- Fixed issues with `LIFTorch` and `aLIFTorch` which prevented the `deepcopy` protocol
- Fixed bug in `tree_utils`, where `Tree` was used instead of `dict` in an `isinstance` check
- Replaced an outdated reference from `FFRateEuler` to the `Rate` module in the high-level API tutorial
- Fixed seeds in torch and numpy to avoid a `nan` loss problem while training in a tutorial
- Fixed bug in `TorchModule`, where assigning to an existing registered attribute would clear the family of the attribute
- Fixed a bug in Constant handling for `torch.Tensor`s, which would raise errors in torch 1.12
- Fixed bug in `LIFTorch`, which would cause recorded state to hang around post-evolution, causing errors from `deepcopy`
- Fixed bug in `Module._register_module()`, where replacing an existing submodule would cause the string representation to be incorrect
[v2.4.2] — 2022-10-06
Hotfix
- Improved handling of weights when using `XyloSim.from_specification`
[v2.4] — 2022-08
Major changes
- `Linear...` modules no longer have a bias parameter by default
Added
- Support for Xylo SNN core v2, via `XyloSim`, including biases and quantisation support; mapping and deployment for Xylo SNN core v2 (SYNS61201)
- Added support for the Xylo-A2 test board, with audio recording support from the Xylo AFE (`AFESamna` and `XyloSamna`)
- Support for an LIF neuron with a trainable adaptive threshold (`aLIFTorch`); deployable to Xylo
- New module `BooleanState`, which maintains a boolean state
- Support for membrane potential training using `LIFExodus`
Changed
- Xylo package support for HW versioning (SYNS61300; SYNS61201)
- Ability to return events, membrane potentials or synaptic currents as output from `XyloSim` and `XyloSamna`
- Enhanced the Xylo `mapper` to be more lenient about weight matrix size; missing weights are now assumed to be zero
- The Xylo `mapper` is now more lenient about HW constraints, permitting larger numbers of input and output channels than supported by existing HDKs
- The Xylo `mapper` supports a configurable maximum number of hidden and output neurons
- Running `black` is enforced by the CI pipeline
- `Linear...` modules now export bias parameters, if they are present
- `Linear...` modules now do not include bias parameters by default
- The Xylo `mapper` now raises a warning if any linear weights have biases
- `LIFSlayer` renamed to `LIFExodus`, corresponding to the `sinabs.exodus` library name change
- The periodic exponential surrogate function now supports training thresholds
Fixed
- Fixes related to torch modules moved to simulation devices
- Fixed issue in `dropout.py`, where an `ImportError` was raised if jax was missing
- Fixed an issue with `Constant` `torch` parameters, where `deepcopy` would raise an error
- Fixed issue with newer versions of torch; torch v1.12 is now supported
- Updated to support changes in the latest jax API
- Fixed bug in `WavesenseNet`, where the neuron class would not be checked properly
- Fixed bug in `channel_quantize`, where unquantized weights were returned instead of quantized weights
Deprecated
- `LIFSlayer` is now deprecated
[v2.3.1] — 2022-03-24
Hotfix
- Improved CI pipeline such that the pipeline is not blocked when `sinabs.exodus` cannot be installed
- Fixed `UserWarning` raised by some torch-backed modules
- Improved some unit tests
[v2.3] — 2022-03-16
Added
- Standard dynamics introduced for LIF, Rate, Linear, Instant and ExpSyn modules. These are standardised across Jax, Torch and Numpy backends. We make efforts to guarantee identical dynamics for the standard modules across these backends, down to numerical precision
- LIF modules can now train thresholds and biases as well as time constants
- New `JaxODELIF` module, which implements a trainable LIF neuron following common dynamical equations for LIF neurons
- New addition of the WaveSense network architecture, for temporal signal processing with SNNs. This is available in `rockpool.networks`, and is documented with a tutorial
- A new system for managing computational graphs, and mapping these graphs onto hardware architectures. These are documented in the Xylo quick-start tutorial, and in more detail in tutorials covering Computational Graphs and Graph Mapping. The mapping system performs design-rule checks for the Xylo HDK
- Included methods for post-training quantisation for Xylo, in `rockpool.transform`
- Added simulation of a divisive normalisation block for Xylo audio applications
- Added a `Residual` combinator, for convenient generation of networks with residual blocks
- Support for `sinabs` layers and Exodus
- `Module`, `JaxModule` and `TorchModule` provide facility for auto-batching of input data. Input data shape is `(B, T, Nin)`, or `(T, Nin)` when only a single batch is provided
- Expanded documentation on parameters and type-hinting
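The auto-batching rule above means a single-batch input of shape `(T, Nin)` is treated as `(1, T, Nin)`. A shape-only sketch of that promotion (illustrative logic, not Rockpool's internals):

```python
def auto_batch_shape(shape: tuple) -> tuple:
    # (T, Nin) is promoted to a single batch; (B, T, Nin) passes through
    if len(shape) == 2:
        return (1, *shape)
    if len(shape) == 3:
        return shape
    raise ValueError(f"Expected input shape (T, Nin) or (B, T, Nin), got {shape}")

print(auto_batch_shape((100, 16)))     # (1, 100, 16)
print(auto_batch_shape((8, 100, 16)))  # (8, 100, 16)
```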
Changed
- Python > 3.6 is now required
- Improved import handling when various computational back-ends are missing
- Updated for new versions of `samna`
- Renamed Cimulator -> XyloSim
- Better parameter handling and rockpool/torch parameter registration for Torch modules
- (Most) modules can accept batched input data
- Improved / additional documentation for Xylo
Fixed
- Improved type casting and device handling for Torch modules
- Fixed bug in `Module`, where `modules()` would return a non-ordered dict. This caused issues with `JaxModule`
Removed
- Removed several obsolete `Layer`s and `Network`s from Rockpool v1
[v2.2] — 2021-09-09
Added
- Added support for the Xylo development kit in `.devices.xylo`, including several tutorials
- Added CTC loss implementations in `.training.ctc_loss`
- New trainable `torch` modules: `LIFTorch` and others in `.nn.modules.torch`, including an asynchronous delta modulator `UpDownTorch`
- Added `torch` training utilities and loss functions in `.training.torch_loss`
- New `TorchSequential` class to support the `Sequential` combinator for `torch` modules
- Added a `FFwdStackTorch` class to support the `FFwdStack` combinator for `torch` modules
Changed
- Existing `LIFTorch` module renamed to `LIFBitshiftTorch`; updated the module to align better with the Rockpool API
- Improvements to the `.typehints` package
- `TorchModule` now raises an error if submodules are not `TorchModule`s
Fixed
- Updated the LIF torch training tutorial to use the new `LIFBitshiftTorch` module
- Improved installation instructions for `zsh`
[v2.1] — 2021-07-20
Added
- Adversarial training of parameters using the Jax back-end, including a tutorial
- "Easter" tutorial demonstrating an SNN trained to generate images
- Torch tutorials for training non-spiking and spiking networks with Torch back-ends
- Added new method `nn.Module.timed()`, to automatically convert a module to a `TimedModule`
- New `LIFTorch` module that permits training of neuron and synaptic time constants in addition to other network parameters
- New `ExpSynTorch` module: exponential leak synapses with Torch back-end
- New `LinearTorch` module: linear model with Torch back-end
- New `LowPass` module: exponential smoothing with Torch back-end
- New `ExpSmoothJax` module: single time-constant exponential smoothing layer, supporting arbitrary transfer functions on output
- New `softmax` and `log_softmax` losses in the `jax_loss` package
- New `utilities.jax_tree_utils` package containing useful parameter tree handling functions
- New `TSContinuous.to_clocked()` convenience method, to easily rasterise a continuous time series
- Alpha: optional `_wrap_recorded_state()` method added to the `nn.Module` base class, which supports wrapping recorded state dictionaries as `TimeSeries` objects when using the high-level `TimeSeries` API
- Support for an `add_events` flag for the time-series wrapper class
- New parameter dictionary classes to simplify conversion and handling of Torch and Jax module parameters
- Added an `astorch()` method to parameter dictionaries returned from `TorchModule`
- Improved type hinting
Changed
- Old `LIFTorch` module renamed to `LIFBitshiftTorch`
- Kaiming and Xavier initialisation support for `Linear` modules
- `Linear` modules provide a bias by default
- Moved the `filter_bank` package from V1 layers into `nn.modules`
- Updated Jax requirement to > v2.13
Fixed
- Fixed binder links for tutorial notebooks
- Fixed bug in `Module` for multiple inheritance, where the incorrect `__repr__()` method would be called
- Fixed `TimedModuleWrapper.reset_state()` method
- Fixed axis limit bug in the `TSEvent.plot()` method
- Removed page width constraint for docs
- Enabled the `FFExpSyn` module by making it independent of the old `RRTrainedLayer`
Deprecated
- Removed `rpyc` dependency
Removed
[v2.0] — 2021-03-24
- New Rockpool API. Breaking change from v1.x
- Documentation for new API
- Native support for Jax and Torch backends
- Many v1 Layers transferred
[v1.1.0.4] — 2020-11-06
- Hotfix to remove references to ctxctl and aiCTX
- Hotfix to include NEST documentation in CI-built docs
- Hotfix to include change log in built docs
[v1.1] — 2020-09-12
Added
- Considerably expanded support for Denève-Machens spike-timing networks, including training arbitrary dynamical systems in a new `RecFSSpikeADS` layer. Added tutorials for standard D-M networks for linear dynamical systems, as well as a tutorial for training ADS networks
- Added a new "Intro to SNNs" getting-started guide
- A new "sharp points of Rockpool" tutorial collects the tricks and traps for new users and old
- A new `Network` class, `JaxStack`, supports stacking and end-to-end gradient-based training of all Jax-based layers. A new tutorial has been added for this functionality
- `TimeSeries` classes now support best-practices creation from clock or rasterised data. `TSContinuous` provides a `.from_clocked()` method, and `TSEvent` provides a `.from_raster()` method for this purpose. `.from_clocked()` uses sample-and-hold interpolation, for intuitive generation of time series from periodically-sampled data
- `TSContinuous` now supports a `.fill_value` property, which permits extrapolation using `scipy.interpolate`
- New `TSDictOnDisk` class for storing `TimeSeries` objects transparently on disk
- Allow ignoring data points for specific readout units in ridge regression Fisher relabelling; to be used, for example, with all-vs-all classification
- Added exponential synapse Jax layers
- Added `RecLIFCurrentIn_SO` layer
Changed
- `TSEvent` time series no longer support creation without explicitly setting `t_stop`. The previous default of taking the final event time as `t_stop` was causing too much confusion. For related reasons, `TSEvent` now forbids events to occur at `t_stop`
- `TimeSeries` classes by default no longer permit sampling outside of the time range they are defined for, raising a `ValueError` exception if this occurs. This renders safe several traps that new users were falling into. This behaviour is selectable per time series, and can be changed to a warning instead of an exception using the `beyond_range_exception` flag
- Jax trainable layers now import from a new mixin class `JaxTrainer`. The class provides a default loss function, which can be overridden in each sub-class to provide suitable regularisation. The training interface now returns loss value and gradients directly, rather than requiring an extra function call and additional evolution
- Improved training method for JAX rate layers, to permit parameterisation of the loss function and optimiser
- Improved the `._prepare_input...()` methods in the `Layer` class, such that all `Layer`s that inherit from this superclass are consistent in the number of time steps returned from evolution
- The `Network.load()` method is now a class method
- Test suite now uses multiple cores for faster testing
- Changed company branding from aiCTX -> SynSense
- Documentation is now hosted at https://rockpool.ai
Fixed
- Fixed bugs in the precise spike-timing layer `RecSpikeBT`
- Fixed behavior of the `Layer` class when passing weights in the wrong format
- Stability improvements in `DynapseControl`
- Fixed faulty z-score standardization and Fisher relabelling in `RidgeRegrTrainer`. Fisher relabelling now has better handling of differently sized batches
- Fixed bugs in saving and loading several layers
- More sensible default values for `VirtualDynapse` base weights
- Fixed handling of an empty `channels` argument in the `TSEvent._matching_channels()` method
- Fixed bug in `Layer._prepare_input`, where it would raise an `AssertionError` when no input TS was provided
- Fixed a bug in `train_output_target`, where the gradient would be incorrectly handled if no batching was performed
- Fixed the `to_dict` method for `FFExpSynJax` classes
- Removed redundant `_prepare_input()` method from the Torch layer
- Many small documentation improvements
[v1.0.8] — 2020-01-17
Added
- Introduced new `TimeSeries` class method `concatenate_t()`, which permits construction of a new time series by concatenating a set of existing time series in the time dimension
- `Network` class now provides a `to_dict()` method for export. `Network` can now also treat sub-`Network`s as layers
- Training methods for spiking LIF Jax-backed layers in `rockpool.layers.training`. Tutorial demonstrating SGD training of a feed-forward LIF network. Improvements in JAX LIF layers
- Added `filter_bank` layers, providing `Layer` subclasses which act as filter banks with spike-based output
- Added a `filter_width` parameter for Butterworth filters
- Added a convenience function `start_at_zero()` to delay a `TimeSeries` so that it starts at 0
- Added a change log in `CHANGELOG.md`
Changed
- Improved `TSEvent.raster()` to make it more intuitive. Rasters are now produced in line with time bases that can be created easily with `numpy.arange()`
- Updated `conda_merge_request.sh` to work for the conda feedstock
- `TimeSeries.concatenate()` renamed to `concatenate_t()`
- `RecRateEuler` warns if `tau` is too small, instead of silently changing `dt`
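The `TSEvent.raster()` change above aligns raster bins with a `numpy.arange(t_start, t_stop, dt)` time base. A simplified single-channel sketch of that convention (illustrative only, not Rockpool's implementation):

```python
def raster(event_times, t_start, t_stop, dt):
    # Bin k covers [t_start + k*dt, t_start + (k+1)*dt), matching the time
    # base numpy.arange(t_start, t_stop, dt); events at t_stop fall outside
    num_bins = int(round((t_stop - t_start) / dt))
    counts = [0] * num_bins
    for t in event_times:
        k = int((t - t_start) // dt)
        if 0 <= k < num_bins:
            counts[k] += 1
    return counts

print(raster([0.0, 0.12, 0.15, 0.31], t_start=0.0, t_stop=0.4, dt=0.1))  # [1, 2, 0, 1]
```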
Fixed or improved
- Fixed issue in `Layer`, where an internal property was used when accessing `._dt`. This caused issues with layers that have an unusual internal type for `._dt` (e.g. if data is stored in a JAX variable on GPU)
- Reduced memory footprint of `.TSContinuous` by approximately half
- Reverted regression in the layer class `.RecLIFJax_IO`, where `dt` was by default set to `1.0` instead of being determined by `tau_...`
- Fixed incorrect use of `Optional[]` type hints
- Allow for small numerical differences in comparison between weights in the NEST test `test_setWeightsRec`
- Improvements in inline documentation
- Increased memory efficiency of `FFExpSyn._filter_data` by reducing kernel size
- Implemented numerically stable timestep count for `TSEvent` rasterisation
- Fixed bugs in `RidgeRegrTrainer`
- Fixed plotting issue in time series
- Fixed bug where `RecRateEuler` did not handle the `dt` argument in `__init__()`
- Fixed scaling between torch and nest weight parameters
- Moved the `contains()` method from `TSContinuous` to the `TimeSeries` parent class
- Fixed warning in `RRTrainedLayer._prepare_training_data()` when times of target and input are not aligned
- Brian layers: replaced `np.asscalar` with `float`
[v1.0.7.post1] — 2019-11-28
Added
- New `.Layer` superclass `.RRTrainedLayer`. This superclass implements ridge regression for layers that support ridge regression training
- `.TimeSeries` subclasses now add axes labels on plotting
- New spiking LIF JAX layers, with documentation and tutorials: `.RecLIFJax`, `.RecLIFJax_IO`, `.RecLIFCurrentInJax`, `.RecLIFCurrentInJAX_IO`
- Added `save` and `load` facilities to `.Network` objects
- `._matching_channels()` now accepts an arbitrary list of event channels, which is used when analysing a periodic time series
Changed
- Documentation improvements
- `.TSContinuous.plot` method now supports `stagger` and `skip` arguments
- `.Layer` and `.Network` now deal with a `.Layer.size_out` attribute. This is used to determine whether two layers are compatible to connect, rather than using `.size`
- Extended unit test for periodic event time series to check non-periodic time series as well
Fixed
- Fixed bug in `TSEvent.plot()`, where stop times were not correctly handled
- Fixed bug in `Layer._prepare_input_events()`, where if only a duration was provided, the method would return an input raster with an incorrect number of time steps
- Fixed bugs in handling of periodic event time series `.TSEvent`
- Bug fix: `.Layer._prepare_input_events` was failing for `.Layer`s with spiking input
- `TSEvent.__call__()` now correctly handles periodic event time series
[v1.0.6] — 2019-11-01
- CI build and deployment improvements
[v1.0.5] — 2019-10-30
- CI build and deployment improvements
[v1.0.4] — 2019-10-28
- Removed deployment dependency on docs
- Hotfix: fixed link to `Black`
- Added links to gitlab docs
[v1.0.3] — 2019-10-28
- Hotfix for incorrect license text
- Updated installation instructions
- Included some status indicators in readme and docs
- Improved CI
- Extra meta-data detail in `setup.py`
- Added more detail for contributing
- Updated README.md
[v1.0.2] — 2019-10-25
- First public release