Fine-tuning utilities

mlip.training.finetuning_utils.mask_optimizer_for_finetuning(optimizer: GradientTransformation, params: dict[str, dict[str, Array | dict]], finetuning_blocks: list[str])

Masks a given optimizer for fine-tuning tasks so that only the specified blocks, typically the new readout heads, are updated.

Parameters:
  • optimizer – The base optimizer to mask.

  • params – The model parameters, whose structure is used to construct the update mask.

  • finetuning_blocks – A list of names for those blocks in the parameters that should be updated during fine-tuning.

Returns:

The masked optimizer.
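
Example (a minimal sketch; the parameter structure, the block name "new_readout_head", and the choice of optax.adam are illustrative assumptions, not part of the library's API):

  import jax.numpy as jnp
  import optax

  from mlip.training.finetuning_utils import mask_optimizer_for_finetuning

  # Illustrative nested parameter dictionary; real models define their own blocks.
  params = {
      "params": {
          "backbone": {"weights": jnp.ones((8, 8))},
          "new_readout_head": {"weights": jnp.zeros((8, 1))},
      }
  }

  base_optimizer = optax.adam(learning_rate=1.0e-3)

  # Only the (hypothetical) "new_readout_head" block should receive updates.
  masked_optimizer = mask_optimizer_for_finetuning(
      optimizer=base_optimizer,
      params=params,
      finetuning_blocks=["new_readout_head"],
  )

  opt_state = masked_optimizer.init(params)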

mlip.models.params_transfer.transfer_params(params_source: dict[str, dict[str, Array | dict]], params_destination: dict[str, dict[str, Array | dict]], scale_factor: float = 1.0) → tuple[dict[str, dict[str, Array | dict]], list[str]]

Transfer parameters from a source to a destination.

Typically, the destination will be newly initialized parameters that contain some additional blocks compared to the source, which is an already trained model. This function will raise an exception if the two parameter structures deviate from one another by more than these additional blocks.

Parameters:
  • params_source – The parameters to transfer into the destination.

  • params_destination – The destination parameters that may contain additional blocks compared to the source.

  • scale_factor – Scale factor by which to multiply the new parameters. Default is 1.0.

Returns:

A tuple of the updated destination parameters and a list of the key names that were missing from the source parameters.

Raises:

ParameterTransferImpossibleError – if the source and destination parameters are incompatible with each other.
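
Example (a minimal sketch; the parameter dictionaries and block names are illustrative — in practice, the source comes from a trained model and the destination from freshly initialized parameters of the extended model):

  import jax.numpy as jnp

  from mlip.models.params_transfer import transfer_params

  # Illustrative dictionaries: the destination contains one extra block.
  pretrained_params = {
      "params": {"backbone": {"weights": jnp.ones((8, 8))}}
  }
  fresh_params = {
      "params": {
          "backbone": {"weights": jnp.zeros((8, 8))},
          "new_readout_head": {"weights": jnp.zeros((8, 1))},
      }
  }

  updated_params, missing_keys = transfer_params(
      params_source=pretrained_params,
      params_destination=fresh_params,
      scale_factor=1.0,
  )

  # `missing_keys` lists the blocks found only in the destination (here the
  # hypothetical "new_readout_head"); these are typically the names to pass
  # as `finetuning_blocks` to mask_optimizer_for_finetuning.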

class mlip.models.params_transfer.ParameterTransferImpossibleError

Exception to be raised if the destination and source parameters deviate more in their structures than just having some missing blocks in the source.
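
Example (a sketch of catching the exception, assuming parameter dictionaries like those in the previous example):

  from mlip.models.params_transfer import (
      ParameterTransferImpossibleError,
      transfer_params,
  )

  try:
      updated_params, missing_keys = transfer_params(pretrained_params, fresh_params)
  except ParameterTransferImpossibleError:
      # The two structures differ by more than some blocks missing from the
      # source, e.g. a renamed or restructured block, so no transfer is possible.
      raise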