topobench.evaluator.metrics package#

Submodules#

topobench.evaluator.metrics.example module#

Example metric module for the topobench package.

class topobench.evaluator.metrics.example.ExampleRegressionMetric(squared: bool = True, num_outputs: int = 1, **kwargs: Any)[source]#

Bases: Metric

Example metric.

Parameters:
squared : bool

Whether to compute the squared error (default: True).

num_outputs : int

The number of outputs.

**kwargs : Any

Additional keyword arguments.

compute() → Tensor[source]#

Compute mean squared error over state.

Returns:
torch.Tensor

Mean squared error.

full_state_update: bool | None = False#
higher_is_better: bool | None = False#
is_differentiable: bool | None = True#
sum_squared_error: Tensor#
total: Tensor#
update(preds: Tensor, target: Tensor) → None[source]#

Update state with predictions and targets.

Parameters:
preds : torch.Tensor

Predictions from model.

target : torch.Tensor

Ground truth values.
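The metric follows the usual stateful update/compute pattern: each update() call accumulates sum_squared_error and total, and compute() reduces the accumulated state to a single value. A minimal pure-Python sketch of that pattern (an illustrative stand-in, not the actual torch-based implementation; the class name and list inputs are assumptions for the example):

```python
class ExampleRegressionMetricSketch:
    """Pure-Python sketch of the accumulate-then-compute pattern."""

    def __init__(self, squared=True):
        self.squared = squared          # True -> MSE, False -> RMSE
        self.sum_squared_error = 0.0    # running sum of squared errors
        self.total = 0                  # running count of elements

    def update(self, preds, target):
        # Accumulate state from one batch of predictions and targets.
        for p, t in zip(preds, target):
            self.sum_squared_error += (p - t) ** 2
            self.total += 1

    def compute(self):
        # Reduce accumulated state to the final metric value.
        mse = self.sum_squared_error / self.total
        return mse if self.squared else mse ** 0.5


metric = ExampleRegressionMetricSketch()
metric.update([2.0, 4.0], [1.0, 6.0])  # squared errors: 1.0 and 4.0
print(metric.compute())                # -> 2.5
```

Because the state is a pair of running sums, update() can be called once per batch during an epoch and compute() once at the end, which is the same usage contract as the torch-based class documented here.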

Module contents#

Init file for custom metrics in the evaluator module.

class topobench.evaluator.metrics.ExampleRegressionMetric(squared: bool = True, num_outputs: int = 1, **kwargs: Any)[source]#

Bases: Metric

Example metric.

Parameters:
squared : bool

Whether to compute the squared error (default: True).

num_outputs : int

The number of outputs.

**kwargs : Any

Additional keyword arguments.

compute() → Tensor[source]#

Compute mean squared error over state.

Returns:
torch.Tensor

Mean squared error.

full_state_update: bool | None = False#
higher_is_better: bool | None = False#
is_differentiable: bool | None = True#
sum_squared_error: Tensor#
total: Tensor#
update(preds: Tensor, target: Tensor) → None[source]#

Update state with predictions and targets.

Parameters:
preds : torch.Tensor

Predictions from model.

target : torch.Tensor

Ground truth values.
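A useful property of this state design (sum_squared_error plus total, with full_state_update = False) is that batch-wise accumulation gives exactly the same result as computing the metric over the whole dataset in one pass. A small self-contained check of that equivalence (plain Python, with an illustrative helper mse() that is not part of the package API):

```python
def mse(preds, target, squared=True):
    # Reference: mean squared error over the full dataset in one pass.
    errs = [(p - t) ** 2 for p, t in zip(preds, target)]
    m = sum(errs) / len(errs)
    return m if squared else m ** 0.5


# Streaming accumulation over batches, mirroring repeated update() calls.
batches = [([1.0, 2.0], [1.5, 2.0]), ([3.0], [2.0])]
sum_squared_error, total = 0.0, 0
for preds, target in batches:
    sum_squared_error += sum((p - t) ** 2 for p, t in zip(preds, target))
    total += len(preds)
streamed = sum_squared_error / total

# One-pass result over the concatenated data matches the streamed result.
full = mse([1.0, 2.0, 3.0], [1.5, 2.0, 2.0])
assert abs(streamed - full) < 1e-12
```

This is why the metric only needs two scalar state tensors rather than storing every prediction: the mean is fully recoverable from the running sum and count.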