topobench.transforms.liftings.graph2hypergraph.kernel_lifting module#

This module implements the HypergraphKernelLifting class.

class Graph2HypergraphLifting(**kwargs)#

Bases: GraphLifting

Abstract class for lifting graphs to hypergraphs.

Parameters:
**kwargs : optional

Additional arguments for the class.

__init__(**kwargs)#
class HypergraphKernelLifting(graph_kernel='heat', feat_kernel='identity', C='prod', fraction=0.5, **kwargs)#

Bases: Graph2HypergraphLifting

Lift graphs to the hypergraph domain via a kernel over the graph (node features can optionally be included).

Parameters:
graph_kernel : str or callable

The kernel function applied to the graph topology. If a string, it specifies a predefined kernel type; currently, only “heat” is supported. If a callable, it should be a function that takes the graph Laplacian and additional kwargs as input and returns a kernel matrix.

feat_kernel : str or callable

The kernel function applied to the features. If a string, it specifies a predefined kernel type; currently, only “identity” is supported. If a callable, it should be a function that takes the features and additional kwargs as input and returns a kernel matrix.

C : str or callable

The function used to combine the graph and feature kernels. If a string, “prod” (element-wise product) or “sum” (element-wise sum) is supported. If a callable, it should take two kernel matrices and return their combination. Default is “prod”.

fraction : float

The fraction of the kernel to be considered for the hypergraph construction. Default is 0.5.

**kwargs : optional

Additional arguments for the class.

__init__(graph_kernel='heat', feat_kernel='identity', C='prod', fraction=0.5, **kwargs)#
lift_topology(data)#

Lift the topology of a graph to the hypergraph domain by considering a kernel over the vertices or, alternatively, the features.

In its most generic form the kernel looks like: $$K = C(K_v(v, v^{\prime}), K_x(x, x^{\prime})),$$ where $K_v$ is a kernel over the graph (graph_kernel), $K_x$ is a kernel over the features (feat_kernel), and $C$ is the function that combines the two (for instance, sum or prod).
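For instance, with the identity feature kernel and C = “prod”, the combination reduces to an element-wise product. A minimal sketch of this combination step (the tensors below are illustrative, not produced by this module):

>>> import torch
>>> K_v = torch.tensor([[1.0, 0.5], [0.5, 1.0]])  # graph kernel K_v
>>> K_x = torch.eye(2)  # identity feature kernel K_x
>>> K = K_v * K_x  # C = "prod": element-wise combination
>>> K
tensor([[1., 0.],
        [0., 1.]])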

Parameters:
data : torch_geometric.data.Data

The input data to be lifted.

Returns:
typing.Dict[str, torch.Tensor]

The lifted topology.

Raises:
ValueError

If the input data is incomplete or in an incorrect format.
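Examples

A hedged usage sketch on a toy graph. The three-node path graph below and the assumption that data only needs x and edge_index are illustrative; consult the class for the fields your dataset actually requires.

>>> import torch
>>> from torch_geometric.data import Data
>>> edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])  # 3-node path, both directions
>>> data = Data(x=torch.randn(3, 4), edge_index=edge_index)
>>> lifting = HypergraphKernelLifting(graph_kernel="heat", fraction=0.5)
>>> lifted = lifting.lift_topology(data)  # dict of torch.Tensor entries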
fmp(A, t)#

Compute the fractional power of a matrix.

Proceeds according to the discussion in Section 6 of [1].

The documentation is written assuming array arguments are of specified “core” shapes. However, array argument(s) of this function may have additional “batch” dimensions prepended to the core shape. In this case, the array is treated as a batch of lower-dimensional slices; see Batched Linear Operations for details.

Parameters:
A : (N, N) array_like

Matrix whose fractional power to evaluate.

t : float

Fractional power.

Returns:
X : (N, N) array_like

The fractional power of the matrix.

References

[1]

Nicholas J. Higham and Lijing Lin (2011). “A Schur–Padé Algorithm for Fractional Powers of a Matrix.” SIAM Journal on Matrix Analysis and Applications, 32(3), pp. 1056–1078. ISSN 0895-4798.

Examples

>>> import numpy as np
>>> from scipy.linalg import fractional_matrix_power
>>> a = np.array([[1.0, 3.0], [1.0, 4.0]])
>>> b = fractional_matrix_power(a, 0.5)
>>> b
array([[ 0.75592895,  1.13389342],
       [ 0.37796447,  1.88982237]])
>>> np.dot(b, b)      # Verify square root
array([[ 1.,  3.],
       [ 1.,  4.]])
get_combination(c_name_or_func)#

Return a combination function based on the specified type or function.

Parameters:
c_name_or_func : str or callable

The combination method to use. This can be:

  • A string specifying a predefined combination type:

    • “prod”: returns a function that computes the element-wise product of two inputs.

    • “sum”: returns a function that computes the element-wise sum of two inputs.

  • A callable: a custom combination function that takes two arguments (A and B) and combines them.

Returns:
callable

A function that combines two inputs based on the specified combination type or custom function. The returned function takes two parameters, A and B, which can be scalars, tensors, or other compatible types, and returns their combined result.

Raises:
ValueError

If c_name_or_func is a string that does not match any supported predefined combination type.

Examples

Example with the “prod” combination:
>>> prod_fn = get_combination("prod")
>>> result = prod_fn(2, 3)
>>> print(result)
6
Example with the “sum” combination:
>>> sum_fn = get_combination("sum")
>>> result = sum_fn(2, 3)
>>> print(result)
5
Example with a custom combination function:
>>> def custom_combination(A, B):
...     return A - B
>>> custom_fn = get_combination(custom_combination)
>>> result = custom_fn(7, 4)
>>> print(result)
3
get_feat_kernel(features, kernel='identity', **kwargs)#

Compute a kernel matrix for the given features based on the specified kernel type.

Parameters:
features : torch.Tensor

A 2D tensor representing the features for which the kernel matrix is to be computed. Each row corresponds to a feature vector.

kernel : str or callable, optional

Specifies the type of kernel to apply or a custom kernel function. Default is “identity”.

  • If a string, it specifies a predefined kernel type. Currently, only “identity” is supported; it returns an identity matrix of size (N, N), where N is the number of feature vectors.

  • If a callable, it should be a function that takes the features and additional keyword arguments (**kwargs) as input and returns a kernel matrix.

**kwargs : dict, optional

Additional keyword arguments required by the custom kernel function if kernel is a callable.

Returns:
torch.Tensor

The computed kernel matrix. If kernel is “identity”, the result is an identity matrix of size (N, N). If kernel is a callable, the result is determined by the custom kernel function.

Raises:
ValueError

If kernel is a string but not one of the supported kernel types (currently only “identity”).

Examples

Example with the “identity” kernel:

>>> import torch
>>> features = torch.randn(5, 3)  # 5 features with 3 dimensions each
>>> kernel_matrix = get_feat_kernel(features, "identity")
>>> print(kernel_matrix)

Example with a custom kernel function:

>>> def custom_kernel_fn(features, **kwargs):
...     # Example: return a random kernel matrix of appropriate size
...     return torch.rand(features.shape[0], features.shape[0])
>>> kernel_matrix = get_feat_kernel(features, custom_kernel_fn)
>>> print(kernel_matrix)
get_graph_kernel(laplacian, kernel='heat', **kwargs)#

Return a graph kernel.

Parameters:
laplacian : torch.Tensor

The graph Laplacian (alternatively can be the normalized graph Laplacian).

kernel : str or callable

Either the name of a kernel or a callable kernel function.

**kwargs : dict

Additional keyword arguments representing the hyperparameters of the kernel; they are passed through to the kernel function.

Returns:
torch.Tensor

A graph kernel for the provided Laplacian matrix.
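Examples

A hedged sketch that builds a small dense Laplacian by hand; it assumes the extra keyword t is forwarded to the heat kernel as described above.

>>> import torch
>>> A = torch.tensor([[0.0, 1.0], [1.0, 0.0]])  # adjacency of a two-node graph
>>> L = torch.diag(A.sum(dim=1)) - A  # combinatorial Laplacian L = D - A
>>> K = get_graph_kernel(L, kernel="heat", t=0.5)
>>> K.shape
torch.Size([2, 2])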

graph_heat_kernel(laplacian, t=1.0)#

Return the graph heat kernel $$K = \exp(-t L)$$.

Parameters:
laplacian : torch.Tensor

The graph Laplacian (alternatively can be the normalized graph Laplacian).

t : float

The temperature parameter for the heat kernel.

Returns:
torch.Tensor

The heat kernel.
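Examples

A minimal sketch of the computation this function performs, using torch.linalg.matrix_exp to evaluate the matrix exponential (the actual implementation may differ):

>>> import torch
>>> L = torch.tensor([[1.0, -1.0], [-1.0, 1.0]])  # Laplacian of a two-node path
>>> K = torch.linalg.matrix_exp(-1.0 * L)  # K = exp(-t L) with t = 1.0
>>> torch.allclose(K, K.T)  # the heat kernel is symmetric
True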

graph_matern_kernel(laplacian, nu=1, kappa=1)#

Return graph Matérn kernel.

Parameters:
laplacian : torch.Tensor

The graph Laplacian (alternatively can be the normalized graph Laplacian).

nu : float

Smoothness parameter of the kernel.

kappa : int

Lengthscale parameter of the kernel.

Returns:
torch.Tensor

The Matérn kernel matrix $$K = (2\nu / \kappa^2 \, I + L)^{-\nu}$$.

Notes

Here $I$ denotes the identity matrix and $L$ the graph Laplacian.
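Examples

A hedged sketch of the formula above using SciPy’s fractional matrix power; the module’s own fmp helper wraps the same routine, so numerical details may differ slightly.

>>> import numpy as np
>>> from scipy.linalg import fractional_matrix_power
>>> L = np.array([[1.0, -1.0], [-1.0, 1.0]])  # Laplacian of a two-node path
>>> nu, kappa = 1.5, 1.0
>>> K = fractional_matrix_power(2 * nu / kappa**2 * np.eye(2) + L, -nu)
>>> K.shape
(2, 2)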