Reactivity¶
- class mlipaudit.benchmarks.reactivity.reactivity.ReactivityBenchmark(force_field: ForceField | Calculator, data_input_dir: str | PathLike = './data', run_mode: RunMode | Literal['dev', 'fast', 'standard'] = RunMode.STANDARD)¶
Benchmark for transition state energies.
- name¶
The unique benchmark name that should be used to run the benchmark from the CLI and that will determine the output folder name for the result file. The name is reactivity.
- Type:
str
- category¶
A string that describes the category of the benchmark, used, for example, in the UI app for grouping. Default, if not overridden, is “General”. This benchmark’s category is “Small Molecules”.
- Type:
str
- result_class¶
A reference to the type of BenchmarkResult that will determine the return type of self.analyze(). The result class is ReactivityResult.
- Type:
type[mlipaudit.benchmark.BenchmarkResult] | None
- model_output_class¶
A reference to the ReactivityModelOutput class.
- Type:
type[mlipaudit.benchmark.ModelOutput] | None
- required_elements¶
The set of atomic element types that are present in the benchmark’s input files.
- Type:
set[str] | None
- skip_if_elements_missing¶
Whether the benchmark should be skipped entirely if the model cannot handle some of the required atomic element types. If False, the benchmark must provide its own custom logic for handling missing element types. For this benchmark, the attribute is set to True.
- Type:
bool
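For orientation, a minimal sketch of reading these class-level attributes (assuming, as is typical for benchmark metadata, that they are plain class attributes):

    from mlipaudit.benchmarks.reactivity.reactivity import ReactivityBenchmark

    # Class-level metadata as documented above: "reactivity", "Small Molecules", True.
    print(ReactivityBenchmark.name)
    print(ReactivityBenchmark.category)
    print(ReactivityBenchmark.skip_if_elements_missing)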
- __init__(force_field: ForceField | Calculator, data_input_dir: str | PathLike = './data', run_mode: RunMode | Literal['dev', 'fast', 'standard'] = RunMode.STANDARD) None¶
Initializes the benchmark.
- Parameters:
force_field – The force field model to be benchmarked.
data_input_dir – The local input data directory. Defaults to “./data”. If the subdirectory “{data_input_dir}/{benchmark_name}” exists, the benchmark expects the relevant data to be in there, otherwise it will download it from HuggingFace.
run_mode – Whether to run the benchmark at its standard length, a faster version, or a very fast development version. Subclasses should ensure that, with RunMode.DEV, their benchmark runs in a much shorter timeframe, for instance by running on a reduced number of test cases. Implementing RunMode.FAST differently from RunMode.STANDARD is optional and only recommended for very long-running benchmarks. This argument can also be passed as a string “dev”, “fast”, or “standard”.
- Raises:
ChemicalElementsMissingError – If initialization is attempted with a force field that cannot perform inference on the required elements.
ValueError – If the force field type is not compatible.
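A minimal construction sketch. The EMT calculator is only an illustrative stand-in for a real MLIP calculator (it may not pass the element check for this benchmark), and the keyword values restate the documented defaults:

    from ase.calculators.emt import EMT  # illustrative stand-in for an MLIP calculator

    from mlipaudit.benchmarks.reactivity.reactivity import ReactivityBenchmark

    # Data is read from ./data/reactivity if present, otherwise downloaded from
    # HuggingFace. May raise ChemicalElementsMissingError if the force field
    # cannot handle the required elements.
    benchmark = ReactivityBenchmark(
        force_field=EMT(),
        data_input_dir="./data",
        run_mode="dev",  # "dev", "fast", or "standard"
    )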
- run_model() None¶
Run energy predictions.
- analyze() ReactivityResult¶
Analyzes the model’s energy predictions and computes the benchmark results.
- Returns:
A ReactivityResult object with the benchmark results.
- Raises:
RuntimeError – If called before run_model().
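A short sketch of the intended call sequence, continuing from the benchmark constructed above:

    # Run inference first, then analyze; calling analyze() before run_model()
    # raises RuntimeError, as documented above.
    benchmark.run_model()
    result = benchmark.analyze()
    print(result.score)  # final score between 0 and 1, or None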
- class mlipaudit.benchmarks.reactivity.reactivity.ReactivityResult(*, failed: bool = False, score: Annotated[float | None, Ge(ge=0), Le(le=1)] = None, reaction_results: dict[str, ReactionResult] = {}, mae_activation_energy: Annotated[float, Ge(ge=0)] | None = None, rmse_activation_energy: Annotated[float, Ge(ge=0)] | None = None, mae_enthalpy_of_reaction: Annotated[float, Ge(ge=0)] | None = None, rmse_enthalpy_of_reaction: Annotated[float, Ge(ge=0)] | None = None, failed_reactions: list[str] = [])¶
Result object for the reactivity benchmark.
- reaction_results¶
A dictionary of reaction results where the keys are the reaction identifiers.
- Type:
dict[str, mlipaudit.benchmarks.reactivity.reactivity.ReactionResult]
- mae_activation_energy¶
The MAE of the activation energies.
- Type:
float | None
- rmse_activation_energy¶
The RMSE of the activation energies.
- Type:
float | None
- mae_enthalpy_of_reaction¶
The MAE of the enthalpies of reactions.
- Type:
float | None
- rmse_enthalpy_of_reaction¶
The RMSE of the enthalpies of reactions.
- Type:
float | None
- failed_reactions¶
A list of reaction ids for which inference failed.
- Type:
list[str]
- failed¶
Whether all the simulations or inferences failed and no analysis could be performed. Defaults to False.
- Type:
bool
- score¶
The final score for the benchmark, between 0 and 1.
- Type:
float | None
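A sketch of reading a ReactivityResult, using only the fields documented above:

    # result: a ReactivityResult returned by ReactivityBenchmark.analyze()
    if result.failed:
        print("All inferences failed; no metrics are available.")
    else:
        print("MAE activation energy:", result.mae_activation_energy)
        print("RMSE activation energy:", result.rmse_activation_energy)
        print("MAE enthalpy of reaction:", result.mae_enthalpy_of_reaction)
        print("RMSE enthalpy of reaction:", result.rmse_enthalpy_of_reaction)
        print("Reactions analyzed:", list(result.reaction_results))
        print("Reactions that failed inference:", result.failed_reactions)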
- class mlipaudit.benchmarks.reactivity.reactivity.ReactivityModelOutput(*, reaction_ids: list[str], energy_predictions: list[ReactionModelOutput], failed_reactions: list[str] = [])¶
Stores the model outputs for the reactivity benchmark, consisting of the energy predictions for each reaction.
- reaction_ids¶
A list of reaction identifiers for the successful reactions.
- Type:
list[str]
- energy_predictions¶
A corresponding list of energy predictions for each successful reaction.
- Type:
list[mlipaudit.benchmarks.reactivity.reactivity.ReactionModelOutput]
- failed_reactions¶
A list of reaction ids for which inference failed.
- Type:
list[str]
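A sketch of consuming a ReactivityModelOutput. How the instance is obtained is outside this reference, so model_output below is simply assumed to be one:

    # model_output: an assumed ReactivityModelOutput instance.
    # reaction_ids and energy_predictions are parallel lists covering the
    # successful reactions; failed_reactions lists ids with no prediction.
    for reaction_id, prediction in zip(
        model_output.reaction_ids, model_output.energy_predictions
    ):
        ...  # prediction is the ReactionModelOutput for reaction_id
    print("Failed reactions:", model_output.failed_reactions)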