topobench.utils package#

class RankedLogger(name='topobench.utils.pylogger', rank_zero_only=False, extra=None)#

Bases: LoggerAdapter

Initialize a multi-GPU-friendly python command line logger.

The logger logs on all processes with their rank prefixed in the log message.

Parameters:
name : str, optional

The name of the logger, by default __name__.

rank_zero_only : bool, optional

Whether to force all logs to only occur on the rank zero process (default: False).

extra : Mapping[str, object], optional

A dict-like object which provides contextual information. See logging.LoggerAdapter for more information (default: None).

__init__(name='topobench.utils.pylogger', rank_zero_only=False, extra=None)#

Initialize the adapter with a logger and a dict-like object which provides contextual information. This constructor signature allows easy stacking of LoggerAdapters, if so desired.

You can effectively pass keyword arguments as shown in the following example:

adapter = LoggerAdapter(someLogger, dict(p1=v1, p2="v2"))

log(level, msg, rank=None, *args, **kwargs)#

Delegate a log call to the underlying logger.

The function first prefixes the message with the rank of the process it’s being logged from and then logs the message. If ‘rank’ is provided, then the log will only occur on that rank/process.

Parameters:
level : int

The level to log at. Look at logging.__init__.py for more information.

msg : str

The message to log.

rank : int, optional

The rank to log at (default: None).

*args : Any

Additional args to pass to the underlying logging function.

**kwargs : Any

Any additional keyword args to pass to the underlying logging function.
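A minimal usage sketch, assuming RankedLogger is importable from topobench.utils as documented on this page; the logger setup and message texts are illustrative, not prescribed:

```
import logging

from topobench.utils import RankedLogger

log = RankedLogger(__name__, rank_zero_only=True)

# The rank prefix is added to the message automatically.
log.log(logging.INFO, "Instantiating datamodule")

# A hypothetical message restricted to a single process via the optional rank argument.
log.log(logging.WARNING, "Per-process diagnostic", rank=0)
```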

enforce_tags(cfg, save_to_file=False)#

Prompt the user to input tags from the terminal if no tags are provided in the config.

Parameters:
cfg : DictConfig

A DictConfig composed by Hydra.

save_to_file : bool, optional

Whether to export tags to the hydra output folder (default: False).
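A hedged sketch of a typical call site, assuming enforce_tags is importable from topobench.utils and that cfg is the DictConfig composed by Hydra for the run; run_task is a hypothetical entry point:

```
from omegaconf import DictConfig

from topobench.utils import enforce_tags


def run_task(cfg: DictConfig) -> None:
    # Prompts on the terminal if cfg contains no tags; optionally writes
    # the resulting tags to the Hydra output folder.
    enforce_tags(cfg, save_to_file=True)
```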

extras(cfg)#

Apply optional utilities before the task is started.

Utilities:
  • Ignoring Python warnings.

  • Setting tags from the command line.

  • Rich config printing.

Parameters:
cfg : DictConfig

A DictConfig object containing the config tree.
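A short sketch of where extras is usually called, assuming a Hydra-decorated entry point; the config path and config name below are hypothetical:

```
import hydra
from omegaconf import DictConfig

from topobench import utils


@hydra.main(version_base="1.3", config_path="../configs", config_name="run.yaml")
def main(cfg: DictConfig) -> None:
    # Apply the optional utilities (warnings filter, tag enforcement,
    # rich config printing) before the task itself starts.
    utils.extras(cfg)


if __name__ == "__main__":
    main()
```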

get_metric_value(metric_dict, metric_name)#

Safely retrieve the value of a metric logged in the LightningModule.

Parameters:
metric_dict : dict

A dict containing metric values.

metric_name : str, optional

If provided, the name of the metric to retrieve.

Returns:
float or None

The value of the metric if a metric name was provided, otherwise None.
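A sketch of retrieving the optimized metric after training, assuming a Lightning Trainer named trainer is already in scope; the metric name "val/loss" is hypothetical:

```
from topobench.utils import get_metric_value

# trainer.callback_metrics holds the metrics logged in the LightningModule.
metric_value = get_metric_value(
    metric_dict=trainer.callback_metrics, metric_name="val/loss"
)
```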

instantiate_callbacks(callbacks_cfg)#

Instantiate callbacks from config.

Parameters:
callbacks_cfg : DictConfig

A DictConfig object containing callback configurations.

Returns:
list[Callback]

A list of instantiated callbacks.
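A minimal sketch, assuming cfg is the Hydra-composed config and that callback configurations live under a "callbacks" node:

```
from topobench.utils import instantiate_callbacks

callbacks = instantiate_callbacks(cfg.get("callbacks"))
```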

instantiate_loggers(logger_cfg)#

Instantiate loggers from config.

Parameters:
logger_cfg : DictConfig

A DictConfig object containing logger configurations.

Returns:
list[Logger]

A list of instantiated loggers.
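A sketch mirroring the callbacks helper above, assuming logger configurations live under a "logger" node and that the resulting list is handed to a Lightning Trainer built from a "trainer" node (both nodes are assumptions about the config layout):

```
import hydra

from topobench.utils import instantiate_loggers

loggers = instantiate_loggers(cfg.get("logger"))
trainer = hydra.utils.instantiate(cfg.trainer, callbacks=callbacks, logger=loggers)
```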

log_hyperparameters(object_dict)#

Control which config parts are saved by Lightning loggers.

Additionally saves:
  • Number of model parameters

Parameters:
object_dict : dict[str, Any]
A dictionary containing the following objects:
  • “cfg”: A DictConfig object containing the main config.

  • “model”: The Lightning model.

  • “trainer”: The Lightning trainer.
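A sketch of assembling object_dict in a training entry point; the surrounding variables (cfg, model, trainer) are assumed to exist as described in the list above:

```
from topobench.utils import log_hyperparameters

object_dict = {
    "cfg": cfg,          # the main DictConfig
    "model": model,      # the Lightning model
    "trainer": trainer,  # the Lightning trainer
}

# Only useful when at least one logger is attached to the trainer.
if trainer.logger:
    log_hyperparameters(object_dict)
```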

print_config_tree(cfg, print_order=('dataset', 'model', 'transforms', 'callbacks', 'logger', 'trainer', 'paths', 'extras'), resolve=False, save_to_file=False)#

Print the contents of a DictConfig using the Rich library.

Parameters:
cfg : DictConfig

A DictConfig object containing the config tree.

print_order : Sequence[str], optional

Determines in what order config components are printed, by default ('dataset', 'model', 'transforms', 'callbacks', 'logger', 'trainer', 'paths', 'extras').

resolve : bool, optional

Whether to resolve reference fields of DictConfig, by default False.

save_to_file : bool, optional

Whether to export config to the hydra output folder, by default False.
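A minimal sketch, assuming cfg is the Hydra-composed config:

```
from topobench.utils import print_config_tree

# Resolve interpolations before printing and keep a copy in the Hydra output folder.
print_config_tree(cfg, resolve=True, save_to_file=True)
```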

task_wrapper(task_func)#

Optional decorator that controls the failure behavior when executing the task function.

This wrapper can be used to:
  • Make sure loggers are closed even if the task function raises an exception (prevents multirun failure).

  • Save the exception to a .log file.

  • Mark the run as failed with a dedicated file in the logs/ folder (so we can find and rerun it later).

  • Etc. (adjust depending on your needs).

Example:

```
@utils.task_wrapper
def train(cfg: DictConfig) -> Tuple[Dict[str, Any], Dict[str, Any]]:
    ...
    return metric_dict, object_dict
```

Parameters:
task_func : Callable

The task function to be wrapped.

Returns:
Callable

The wrapped task function.

Submodules#