rockpool.devices.dynapse.quantization.autoencoder_quantization

rockpool.devices.dynapse.quantization.autoencoder_quantization(n_cluster: int, core_map: List[int], weights_in: ndarray, weights_rec: ndarray, Iscale: float, n_bits: int | None = 4, fixed_epoch: bool = False, num_epoch: int = 10000000, num_epoch_checkpoint: int = 1000, eps: float = 1e-06, record_loss: bool = True, optimizer: str = 'adam', step_size: float | Callable[[int], float] = 0.0001, opt_params: Dict[str, Any] | None = {}, *args, **kwargs) → Dict[str, ndarray | float]

autoencoder_quantization executes the unsupervised weight configuration learning approach rockpool.devices.dynapse.quantization.autoencoder.learn.learn_weights for each cluster separately. The function subsets the input and recurrent weights for each cluster and quantizes each subset according to that cluster’s constraints, as sketched below.
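A minimal conceptual sketch of the per-cluster subsetting described above, assuming the cluster id of a post-synaptic neuron is given by core_map and clusters are selected column-wise; this is an illustration, not the library’s actual internals:

    import numpy as np

    def subset_weights_per_cluster(n_cluster, core_map, weights_in, weights_rec):
        # Group post-synaptic columns by the core (cluster) they are mapped to
        core_map = np.asarray(core_map)
        subsets = []
        for cluster in range(n_cluster):
            idx = np.where(core_map == cluster)[0]   # neurons on this core
            w_in_c = weights_in[:, idx]              # input weights into the cluster
            w_rec_c = weights_rec[:, idx]            # recurrent weights into the cluster
            # each subset would then be quantized under this cluster's constraints
            subsets.append((w_in_c, w_rec_c))
        return subsets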

Parameters:
  • n_cluster (int) – total number of clusters (neural cores) allocated

  • core_map (List[int]) – core mapping for real hardware neurons (neuron_id : core_id)

  • weights_in (Optional[np.ndarray]) – input layer weights used in Dynap-SE2 simulation

  • weights_rec (Optional[np.ndarray]) – recurrent layer (in-device neurons) weights used in Dynap-SE2 simulation

  • Iscale (float) – base weight scaling current in Amperes used in simulation

  • n_bits (Optional[int], optional) – number of target weight bits, defaults to 4

  • fixed_epoch (bool, optional) – use a fixed number of epochs, or control convergence by checking the loss decrease, defaults to False

  • num_epoch (int, optional) – the fixed number of epochs, used as a global limit, defaults to 10,000,000

  • num_epoch_checkpoint (int, optional) – the number of epochs between checkpoints; at each checkpoint the pipeline checks the loss decrease and decides whether to continue, defaults to 1,000

  • eps (float, optional) – the epsilon tolerance value; if the loss does not decrease by more than this over five consecutive checkpoints, training stops, defaults to 1e-6

  • record_loss (bool, optional) – whether to record the loss evolution, defaults to True

  • optimizer (str, optional) – one of the optimizers defined in jax.example_libraries.optimizers, defaults to "adam"

  • step_size (Union[float, Callable[[int], float]], optional) – a positive scalar, or a callable representing a step size schedule that maps the iteration index to a positive scalar, defaults to 1e-4

  • opt_params (Optional[Dict[str, Any]]) – optimizer parameters dictionary, defaults to {}

Returns:

A dictionary of the quantized weights and parameters, and the quantization loss

Return type:

Dict[str, Union[np.ndarray, float]]
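
A minimal usage sketch, assuming a small random network; the layer sizes, core assignment, and Iscale value here are illustrative assumptions, not values from the source:

    import numpy as np
    from rockpool.devices.dynapse.quantization import autoencoder_quantization

    n_in, n_rec = 16, 64                          # illustrative layer sizes
    weights_in = np.random.randn(n_in, n_rec)     # simulated input weights
    weights_rec = np.random.randn(n_rec, n_rec)   # simulated recurrent weights
    core_map = [i // 32 for i in range(n_rec)]    # neuron_id -> core_id, 2 cores assumed

    spec = autoencoder_quantization(
        n_cluster=2,
        core_map=core_map,
        weights_in=weights_in,
        weights_rec=weights_rec,
        Iscale=1e-6,                              # assumed scaling current in Amperes
        n_bits=4,
    )
    # `spec` is a Dict[str, Union[np.ndarray, float]] holding the quantized
    # weights and parameters, together with the quantization loss.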