rockpool.devices.dynapse.quantization.autoencoder_quantization
- rockpool.devices.dynapse.quantization.autoencoder_quantization(n_cluster: int, core_map: List[int], weights_in: ndarray, weights_rec: ndarray, Iscale: float, n_bits: int | None = 4, fixed_epoch: bool = False, num_epoch: int = 10000000, num_epoch_checkpoint: int = 1000, eps: float = 1e-06, record_loss: bool = True, optimizer: str = 'adam', step_size: float | Callable[[int], float] = 0.0001, opt_params: Dict[str, Any] | None = {}, *args, **kwargs) → Dict[str, ndarray | float] [source]
autoencoder_quantization executes the unsupervised weight-configuration learning approach
rockpool.devices.dynapse.quantization.autoencoder.learn.learn_weights
for each cluster separately. The function subsets the input and recurrent weights for each cluster and quantizes them according to the respective cluster's constraints.
- Parameters:
n_cluster (int) – total number of clusters, i.e. neural cores allocated
core_map (List[int]) – core mapping for real hardware neurons (neuron_id : core_id)
weights_in (Optional[np.ndarray]) – input layer weights used in Dynap-SE2 simulation
weights_rec (Optional[np.ndarray]) – recurrent layer (in-device neurons) weights used in Dynap-SE2 simulation
Iscale (float) – base weight scaling current in Amperes used in simulation
n_bits (Optional[int], optional) – number of target weight bits, defaults to 4
fixed_epoch (bool, optional) – use a fixed number of epochs, or control convergence by the loss decrease, defaults to False
num_epoch (int, optional) – the fixed number of epochs serving as a global limit, defaults to 10,000,000
num_epoch_checkpoint (int, optional) – the checkpoint interval (in epochs) at which the pipeline checks the loss decrease and decides whether to continue, defaults to 1,000
eps (float, optional) – the epsilon tolerance value; if the loss does not decrease by more than this for five consecutive checkpoints, training stops, defaults to 1e-6
record_loss (bool, optional) – whether to record the loss evolution, defaults to True
optimizer (str, optional) – one of the optimizers defined in jax.example_libraries.optimizers, defaults to 'adam'
step_size (Union[float, Callable[[int], float]], optional) – a positive scalar, or a callable representing a step-size schedule that maps the iteration index to a positive scalar, defaults to 1e-4
opt_params (Optional[Dict[str, Any]]) – optimizer parameters dictionary, defaults to {}
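The step_size argument accepts either a constant float or a schedule callable. A minimal sketch of such a schedule, written as a standalone function that mirrors the shape of the schedules in jax.example_libraries.optimizers (the function below is illustrative, not part of the rockpool or JAX API):

```python
def exponential_decay(step_size: float, decay_rate: float, decay_steps: int):
    """Return a schedule mapping the iteration index to a decayed step size."""
    def schedule(i: int) -> float:
        # Multiply the base step size by decay_rate once every decay_steps iterations.
        return step_size * decay_rate ** (i / decay_steps)
    return schedule

# Both forms are valid `step_size` arguments:
constant_lr = 1e-4
scheduled_lr = exponential_decay(1e-4, decay_rate=0.5, decay_steps=1000)
```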
- Returns:
A dictionary of quantized weights and parameters, the quantization loss
- Return type:
Dict[str, Union[np.ndarray, float]]
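The per-cluster subsetting described above can be sketched in plain numpy. The helper below (hypothetical name, not part of the rockpool API) shows how a core_map of neuron-to-core assignments selects the weight columns belonging to each cluster before quantization:

```python
import numpy as np

def subset_weights_per_cluster(n_cluster, core_map, weights_in, weights_rec):
    """Split input and recurrent weight matrices into per-cluster column blocks.

    Illustrative sketch of the subsetting step; assumes core_map[i] gives the
    core id of in-device neuron i, indexing the columns of both matrices.
    """
    core_map = np.asarray(core_map)
    clusters = {}
    for core in range(n_cluster):
        idx = np.flatnonzero(core_map == core)  # neurons placed on this core
        clusters[core] = {
            "weights_in": weights_in[:, idx],    # input connections onto this core
            "weights_rec": weights_rec[:, idx],  # recurrent connections onto this core
        }
    return clusters
```

Each per-cluster block would then be quantized independently against that cluster's constraints, and the function returns the merged dictionary of quantized weights, parameters, and the quantization loss.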