topobench.evaluator.metrics package
Init file for custom metrics in the evaluator module.
- class ExampleRegressionMetric(squared=True, num_outputs=1, **kwargs)
Bases: Metric
Example metric.
- Parameters:
- squared : bool
Whether to compute the squared error (default: True).
- num_outputs : int
The number of outputs.
- **kwargs : Any
Additional keyword arguments.
- __init__(squared=True, num_outputs=1, **kwargs)
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- compute()
Compute mean squared error over state.
- Returns:
- torch.Tensor
Mean squared error.
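A minimal sketch of how compute() might derive this value from the two states listed further below (sum_squared_error and total), assuming the usual torchmetrics accumulate-then-reduce pattern for mean squared error; the square-root branch for squared=False is an assumption, not something this page confirms:

    import torch

    def compute(self) -> torch.Tensor:
        # Divide the accumulated sum of squared residuals by the number
        # of elements seen so far (both are metric states).
        mse = self.sum_squared_error / self.total
        # Assumption: squared=False switches the output to RMSE.
        return mse if self.squared else torch.sqrt(mse)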
- update(preds, target)
Update state with predictions and targets.
- Parameters:
- preds : torch.Tensor
Predictions from model.
- target : torch.Tensor
Ground truth values.
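A corresponding sketch of update(), again assuming the standard torchmetrics pattern in which each batch adds to the sum_squared_error and total states and compute() later divides one by the other:

    import torch

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        # Accumulate the sum of squared residuals and the element count
        # across batches; the reduction happens in compute().
        diff = preds - target
        self.sum_squared_error += torch.sum(diff * diff)
        self.total += target.numel()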
- sum_squared_error: Tensor
- total: Tensor
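A short usage sketch, assuming the class is importable from the package level as documented here (the exact import path may differ; the class is defined in the topobench.evaluator.metrics.example submodule):

    import torch
    from topobench.evaluator.metrics import ExampleRegressionMetric

    metric = ExampleRegressionMetric(squared=True, num_outputs=1)

    # Accumulate state over two batches, then reduce once.
    metric.update(torch.tensor([2.5, 0.0, 2.0]), torch.tensor([3.0, -0.5, 2.0]))
    metric.update(torch.tensor([1.0, 4.0]), torch.tensor([1.5, 3.5]))
    print(metric.compute())  # mean squared error over all five elements

    metric.reset()  # clear accumulated state before the next epoch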
Submodules
- topobench.evaluator.metrics.example module