topobench.data.loaders.graph.modecule_datasets module#

Loaders for Molecule datasets (ZINC and AQSOL).

class AQSOL(root, split='train', transform=None, pre_transform=None, pre_filter=None, force_reload=False)#

Bases: InMemoryDataset

The AQSOL dataset from the Benchmarking Graph Neural Networks paper, based on AqSolDB, a standardized database of 9,982 molecular graphs with their aqueous solubility values, collected from 9 different data sources.

The aqueous solubility targets are collected from experimental measurements and standardized to LogS units in AqSolDB. These final values denote the property to regress in the AQSOL dataset. After filtering out a few graphs with no bonds/edges, the total number of molecular graphs is 9,833. For each molecular graph, the node features are the types of heavy atoms and the edge features are the types of bonds between them, similar to the ZINC dataset.

Parameters:
  • root (str) – Root directory where the dataset should be saved.

  • split (str) – If "train", loads the training dataset. If "val", loads the validation dataset. If "test", loads the test dataset.

  • transform (Callable | None) – A function/transform that takes in a torch_geometric.data.Data object and returns a transformed version. The data object will be transformed before every access.

  • pre_transform (Callable | None) – A function/transform that takes in a torch_geometric.data.Data object and returns a transformed version. The data object will be transformed before being saved to disk.

  • pre_filter (callable, optional) – A function that takes in a torch_geometric.data.Data object and returns a boolean value, indicating whether the data object should be included in the final dataset.

  • force_reload (bool) – Whether to re-process the dataset.

STATS:

#graphs    #nodes    #edges    #features    #classes
9,833      ~17.6     ~35.8     1            1
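A minimal usage sketch for the three AQSOL splits, assuming the class resolves to torch_geometric.datasets.AQSOL (from which it originates); the root path is illustrative:

from torch_geometric.datasets import AQSOL

# Load the predefined splits; "data/AQSOL" is an illustrative local directory.
train_ds = AQSOL(root="data/AQSOL", split="train")
val_ds = AQSOL(root="data/AQSOL", split="val")
test_ds = AQSOL(root="data/AQSOL", split="test")

data = train_ds[0]  # a torch_geometric.data.Data object
print(len(train_ds), len(val_ds), len(test_ds))
print(data.y)       # aqueous solubility (LogS) regression target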

__init__(root, split='train', transform=None, pre_transform=None, pre_filter=None, force_reload=False)#
atoms()#
bonds()#
download()#

Downloads the dataset to the self.raw_dir folder.

process()#

Processes the dataset to the self.processed_dir folder.

property processed_file_names: List[str]#

The name of the files in the self.processed_dir folder that must be present in order to skip processing.

property raw_file_names: List[str]#

The name of the files in the self.raw_dir folder that must be present in order to skip downloading.

url = 'https://www.dropbox.com/s/lzu9lmukwov12kt/aqsol_graph_raw.zip?dl=1'#
class AbstractLoader(parameters)#

Bases: ABC

Abstract class that provides an interface to load data.

Parameters:
parameters : DictConfig

Configuration parameters.

__init__(parameters)#
get_data_dir()#

Get the data directory.

Returns:
Path

The path to the dataset directory.

load(**kwargs)#

Load data.

Parameters:
**kwargs : dict

Additional keyword arguments.

Returns:
tuple[torch_geometric.data.Data, str]

Tuple containing the loaded data and the data directory.

abstract load_dataset()#

Load data into a dataset.

Returns:
Union[torch_geometric.data.Dataset, torch.utils.data.Dataset]

The loaded dataset, which could be a PyG or PyTorch dataset.

Raises:
NotImplementedError

If the method is not implemented.
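A hypothetical subclass illustrating the AbstractLoader contract. The class name, the configuration keys, the TUDataset choice, and the assumption that the base class keeps the configuration as self.parameters are illustrative, not part of this module:

from pathlib import Path

from torch_geometric.datasets import TUDataset

# Import path follows this page, where AbstractLoader is re-exported.
from topobench.data.loaders.graph.modecule_datasets import AbstractLoader


class TUDatasetLoader(AbstractLoader):
    """Hypothetical loader wrapping a PyG TUDataset."""

    def load_dataset(self):
        # Assumes the base class stores the DictConfig as self.parameters.
        root = Path(self.parameters.data_dir) / self.parameters.data_name
        return TUDataset(root=str(root), name=self.parameters.data_name)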

class Dataset(root=None, transform=None, pre_transform=None, pre_filter=None, log=True, force_reload=False)#

Bases: Dataset

Dataset base class for creating graph datasets; see the PyTorch Geometric tutorial on creating your own datasets for an accompanying guide. A minimal subclass sketch follows the parameter list below.

Parameters:
  • root (str, optional) – Root directory where the dataset should be saved. (default: None)

  • transform (callable, optional) – A function/transform that takes in a Data or HeteroData object and returns a transformed version. The data object will be transformed before every access. (default: None)

  • pre_transform (callable, optional) – A function/transform that takes in a Data or HeteroData object and returns a transformed version. The data object will be transformed before being saved to disk. (default: None)

  • pre_filter (callable, optional) – A function that takes in a Data or HeteroData object and returns a boolean value, indicating whether the data object should be included in the final dataset. (default: None)

  • log (bool, optional) – Whether to print any console output while downloading and processing the dataset. (default: True)

  • force_reload (bool, optional) – Whether to re-process the dataset. (default: False)
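A minimal sketch of a custom subclass wiring together the hooks documented below (raw_file_names, processed_file_names, process, len, get); the file names and the random placeholder graphs are illustrative:

import os.path as osp

import torch
from torch_geometric.data import Data, Dataset


class MyGraphDataset(Dataset):
    """Illustrative two-graph dataset; not part of this module."""

    @property
    def raw_file_names(self):
        return []  # nothing to download in this sketch

    @property
    def processed_file_names(self):
        # If these files exist in self.processed_dir, process() is skipped.
        return [f"data_{i}.pt" for i in range(2)]

    def process(self):
        # Save one Data object per file into self.processed_dir.
        for i in range(2):
            data = Data(x=torch.randn(4, 3),
                        edge_index=torch.tensor([[0, 1, 2, 3],
                                                 [1, 2, 3, 0]]))
            torch.save(data, osp.join(self.processed_dir, f"data_{i}.pt"))

    def len(self):
        return 2

    def get(self, idx):
        # weights_only=False is needed on recent PyTorch to unpickle Data objects.
        return torch.load(osp.join(self.processed_dir, f"data_{idx}.pt"),
                          weights_only=False)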

__init__(root=None, transform=None, pre_transform=None, pre_filter=None, log=True, force_reload=False)#
download()#

Downloads the dataset to the self.raw_dir folder.

get(idx)#

Gets the data object at index idx.

get_summary()#

Collects summary statistics for the dataset.

index_select(idx)#

Creates a subset of the dataset from specified indices idx. Indices idx can be a slicing object, e.g., [2:5], a list, a tuple, or a torch.Tensor or np.ndarray of type long or bool.

indices()#
len()#

Returns the number of data objects stored in the dataset.

print_summary(fmt='psql')#

Prints summary statistics of the dataset to the console.

Parameters:

fmt (str, optional) – Summary table format; any table format supported by the tabulate package can be used. (default: "psql")

process()#

Processes the dataset to the self.processed_dir folder.

shuffle(return_perm=False)#

Randomly shuffles the examples in the dataset.

Parameters:

return_perm (bool, optional) – If set to True, will also return the random permutation used to shuffle the dataset. (default: False)
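A short sketch of index_select (via slicing) and shuffle on a concrete dataset; the ZINC subset and the root path are illustrative stand-ins for any Dataset instance:

from torch_geometric.datasets import ZINC

dataset = ZINC(root="data/ZINC", subset=True, split="train")

first_ten = dataset[:10]                            # index_select via slicing
shuffled, perm = dataset.shuffle(return_perm=True)  # shuffled copy + permutation
print(len(first_ten), perm[:5])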

to_datapipe()#

Converts the dataset into a torch.utils.data.DataPipe.

The returned instance can then be used with PyG's built-in DataPipes for batching graphs as follows:

from torch_geometric.datasets import QM9

dp = QM9(root='./data/QM9/').to_datapipe()
dp = dp.batch_graphs(batch_size=2, drop_last=True)

for batch in dp:
    pass

See the PyTorch tutorial for further background on DataPipes.

property has_download: bool#

Checks whether the dataset defines a download() method.

property has_process: bool#

Checks whether the dataset defines a process() method.

property num_classes: int#

Returns the number of classes in the dataset.

property num_edge_features: int#

Returns the number of features per edge in the dataset.

property num_features: int#

Returns the number of features per node in the dataset. Alias for num_node_features.

property num_node_features: int#

Returns the number of features per node in the dataset.

property processed_dir: str#

The directory where processed data is saved.

property processed_file_names: str | List[str] | Tuple[str, ...]#

The name of the files in the self.processed_dir folder that must be present in order to skip processing.

property processed_paths: List[str]#

The absolute filepaths that must be present in order to skip processing.

property raw_dir: str#

The directory where raw data is saved.

property raw_file_names: str | List[str] | Tuple[str, ...]#

The name of the files in the self.raw_dir folder that must be present in order to skip downloading.

property raw_paths: List[str]#

The absolute filepaths that must be present in order to skip downloading.

class DictConfig(content, key=None, parent=None, ref_type=typing.Any, key_type=typing.Any, element_type=typing.Any, is_optional=True, flags=None)#

Bases: BaseContainer, MutableMapping[Any, Any]

__init__(content, key=None, parent=None, ref_type=typing.Any, key_type=typing.Any, element_type=typing.Any, is_optional=True, flags=None)#
copy()#
get(key, default_value=None)#

Return the value for key if key is in the dictionary, else default_value (defaulting to None).

items() → a set-like object providing a view on D's items#
items_ex(resolve=True, keys=None)#
keys() → a set-like object providing a view on D's keys#
pop(k[, d]) → v, remove specified key and return the corresponding value.#

If key is not found, d is returned if given, otherwise KeyError is raised.

setdefault(k[, d]) → D.get(k, d), also set D[k]=d if k not in D#
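A small OmegaConf example showing how a DictConfig is usually constructed; the keys mirror those expected by MoleculeDatasetLoader below:

from omegaconf import DictConfig, OmegaConf

# OmegaConf.create returns a DictConfig for dict input.
cfg = OmegaConf.create({"data_dir": "data", "data_name": "ZINC", "data_type": "molecule"})
assert isinstance(cfg, DictConfig)
print(cfg.data_name)                # attribute-style access
print(cfg.get("data_split", None))  # dict-style access with a default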
class MoleculeDatasetLoader(parameters)#

Bases: AbstractLoader

Load molecule datasets (ZINC and AQSOL) with predefined splits.

Parameters:
parameters : DictConfig
Configuration parameters containing:
  • data_dir: Root directory for data

  • data_name: Name of the dataset

  • data_type: Type of the dataset (e.g., “molecule”)

__init__(parameters)#
get_data_dir()#

Get the data directory.

Returns:
Path

The path to the dataset directory.

load_dataset()#

Load the molecule dataset with predefined splits.

Returns:
Dataset

The combined dataset with predefined splits.

Raises:
RuntimeError

If dataset loading fails.
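A usage sketch for MoleculeDatasetLoader, assuming only the three configuration keys listed above (a real configuration may require additional keys) and the import path of this page:

from omegaconf import OmegaConf

from topobench.data.loaders.graph.modecule_datasets import MoleculeDatasetLoader

parameters = OmegaConf.create(
    {"data_dir": "data", "data_name": "ZINC", "data_type": "molecule"}
)
loader = MoleculeDatasetLoader(parameters)
dataset = loader.load_dataset()   # combined dataset with predefined splits
data_dir = loader.get_data_dir()  # pathlib.Path to the dataset directory
print(len(dataset), data_dir)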

class Path(*args, **kwargs)#

Bases: PurePath

PurePath subclass that can make system calls.

Path represents a filesystem path but unlike PurePath, also offers methods to do system calls on path objects. Depending on your system, instantiating a Path will return either a PosixPath or a WindowsPath object. You can also instantiate a PosixPath or WindowsPath directly, but cannot instantiate a WindowsPath on a POSIX system or vice versa.
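A generic pathlib sketch of the kind a loader's get_data_dir() result can be used for; the directory names are illustrative:

from pathlib import Path

data_dir = Path("data") / "ZINC"              # compose a dataset directory
data_dir.mkdir(parents=True, exist_ok=True)   # create the tree if missing
print(data_dir.resolve())                     # absolute, symlink-free path
print([p.name for p in data_dir.iterdir()])   # list the directory contents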

absolute()#

Return an absolute version of this path by prepending the current working directory. No normalization or symlink resolution is performed.

Use resolve() to get the canonical path to a file.

chmod(mode, *, follow_symlinks=True)#

Change the permissions of the path, like os.chmod().

classmethod cwd()#

Return a new path pointing to the current working directory (as returned by os.getcwd()).

exists()#

Whether this path exists.

expanduser()#

Return a new path with expanded ~ and ~user constructs (as returned by os.path.expanduser)

glob(pattern)#

Iterate over this subtree and yield all existing files (of any kind, including directories) matching the given relative pattern.

group()#

Return the group name of the file gid.

hardlink_to(target)#

Make this path a hard link pointing to the same file as target.

Note the order of arguments (self, target) is the reverse of os.link’s.

classmethod home()#

Return a new path pointing to the user’s home directory (as returned by os.path.expanduser(‘~’)).

is_block_device()#

Whether this path is a block device.

is_char_device()#

Whether this path is a character device.

is_dir()#

Whether this path is a directory.

is_fifo()#

Whether this path is a FIFO.

is_file()#

Whether this path is a regular file (also True for symlinks pointing to regular files).

is_mount()#

Check if this path is a POSIX mount point

is_socket()#

Whether this path is a socket.

is_symlink()#

Whether this path is a symbolic link.

iterdir()#

Iterate over the files in this directory. Does not yield any result for the special paths ‘.’ and ‘..’.

lchmod(mode)#

Like chmod(), except if the path points to a symlink, the symlink’s permissions are changed, rather than its target’s.

link_to(target)#

Make the target path a hard link pointing to this path.

Note this function does not make this path a hard link to target, despite the implication of the function and argument names. The order of arguments (target, link) is the reverse of Path.symlink_to, but matches that of os.link.

Deprecated since Python 3.10 and scheduled for removal in Python 3.12. Use hardlink_to() instead.

lstat()#

Like stat(), except if the path points to a symlink, the symlink’s status information is returned, rather than its target’s.

mkdir(mode=511, parents=False, exist_ok=False)#

Create a new directory at this given path.

open(mode='r', buffering=-1, encoding=None, errors=None, newline=None)#

Open the file pointed by this path and return a file object, as the built-in open() function does.

owner()#

Return the login name of the file owner.

read_bytes()#

Open the file in bytes mode, read it, and close the file.

read_text(encoding=None, errors=None)#

Open the file in text mode, read it, and close the file.

readlink()#

Return the path to which the symbolic link points.

rename(target)#

Rename this path to the target path.

The target path may be absolute or relative. Relative paths are interpreted relative to the current working directory, not the directory of the Path object.

Returns the new Path instance pointing to the target path.

replace(target)#

Rename this path to the target path, overwriting if that path exists.

The target path may be absolute or relative. Relative paths are interpreted relative to the current working directory, not the directory of the Path object.

Returns the new Path instance pointing to the target path.

resolve(strict=False)#

Make the path absolute, resolving all symlinks on the way and also normalizing it.

rglob(pattern)#

Recursively yield all existing files (of any kind, including directories) matching the given relative pattern, anywhere in this subtree.

rmdir()#

Remove this directory. The directory must be empty.

samefile(other_path)#

Return whether other_path is the same or not as this file (as returned by os.path.samefile()).

stat(*, follow_symlinks=True)#

Return the result of the stat() system call on this path, like os.stat() does.

symlink_to(target, target_is_directory=False)#

Make this path a symlink pointing to the target path. Note the order of arguments (link, target) is the reverse of os.symlink.

touch(mode=438, exist_ok=True)#

Create this file with the given access mode, if it doesn’t exist.

unlink(missing_ok=False)#

Remove this file or link. If the path is a directory, use rmdir() instead.

write_bytes(data)#

Open the file in bytes mode, write to it, and close the file.

write_text(data, encoding=None, errors=None, newline=None)#

Open the file in text mode, write to it, and close the file.

class ZINC(root, subset=False, split='train', transform=None, pre_transform=None, pre_filter=None, force_reload=False)#

Bases: InMemoryDataset

The ZINC dataset from the ZINC database and the “Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules” paper, containing about 250,000 molecular graphs with up to 38 heavy atoms. The task is to regress the penalized logP (also called constrained solubility in some works), given by y = logP - SAS - cycles, where logP is the water-octanol partition coefficient, SAS is the synthetic accessibility score, and cycles denotes the number of cycles with more than six atoms. Penalized logP is a score commonly used for training molecular generation models, see, e.g., the “Junction Tree Variational Autoencoder for Molecular Graph Generation” and “Grammar Variational Autoencoder” papers.

Parameters:
  • root (str) – Root directory where the dataset should be saved.

  • subset (bool, optional) – If set to True, will only load a subset of the dataset (12,000 molecular graphs), following the “Benchmarking Graph Neural Networks” paper. (default: False)

  • split (str, optional) – If "train", loads the training dataset. If "val", loads the validation dataset. If "test", loads the test dataset. (default: "train")

  • transform (callable, optional) – A function/transform that takes in a torch_geometric.data.Data object and returns a transformed version. The data object will be transformed before every access. (default: None)

  • pre_transform (callable, optional) – A function/transform that takes in a torch_geometric.data.Data object and returns a transformed version. The data object will be transformed before being saved to disk. (default: None)

  • pre_filter (callable, optional) – A function that takes in a torch_geometric.data.Data object and returns a boolean value, indicating whether the data object should be included in the final dataset. (default: None)

  • force_reload (bool, optional) – Whether to re-process the dataset. (default: False)

STATS:

Name           #graphs    #nodes    #edges    #features    #classes
ZINC Full      249,456    ~23.2     ~49.8     1            1
ZINC Subset    12,000     ~23.2     ~49.8     1            1
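A minimal usage sketch, assuming the class resolves to torch_geometric.datasets.ZINC (from which it originates); the root path is illustrative, and subset=True selects the 12,000-graph benchmark subset:

from torch_geometric.datasets import ZINC

train_ds = ZINC(root="data/ZINC", subset=True, split="train")
val_ds = ZINC(root="data/ZINC", subset=True, split="val")

data = train_ds[0]  # a torch_geometric.data.Data object
print(data.y)       # penalized logP regression target
print(len(train_ds), len(val_ds))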

__init__(root, subset=False, split='train', transform=None, pre_transform=None, pre_filter=None, force_reload=False)#
download()#

Downloads the dataset to the self.raw_dir folder.

process()#

Processes the dataset to the self.processed_dir folder.

property processed_dir: str#

The directory where processed data is saved.

property processed_file_names: List[str]#

The name of the files in the self.processed_dir folder that must be present in order to skip processing.

property raw_file_names: List[str]#

The name of the files in the self.raw_dir folder that must be present in order to skip downloading.

split_url = 'https://raw.githubusercontent.com/graphdeeplearning/benchmarking-gnns/master/data/molecules/{}.index'#
url = 'https://www.dropbox.com/s/feo9qle74kg48gy/molecules.zip?dl=1'#