topobench.transforms.liftings.pointcloud2hypergraph.voronoi_lifting module#

Lifting a point cloud to the Voronoi graph induced by a Farthest Point Sampling (FPS) support set.

class PointCloud2HypergraphLifting(**kwargs)#

Bases: PointCloudLifting

Abstract class for lifting point clouds to hypergraphs.

Parameters:
**kwargs : optional

Additional arguments for the class.

__init__(**kwargs)#
class VoronoiLifting(support_ratio, **kwargs)#

Bases: PointCloud2HypergraphLifting

Lifts a point cloud to the farthest-point Voronoi graph.

Parameters:
support_ratio : float

Ratio of points to sample with FPS to form the Voronoi support set.

**kwargs : optional

Additional arguments for the class.

__init__(support_ratio, **kwargs)#
lift_topology(data)#

Lift a point cloud to the Voronoi graph induced by a Farthest Point Sampling (FPS) support set.

Parameters:
data : torch_geometric.data.Data

The input data to be lifted.

Returns:
dict

The lifted topology.
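
Below is a minimal sketch of how such a lifting can be assembled from the fps and knn utilities documented further down, under the assumption that the lifting returns an incidence dictionary. The helper name voronoi_lift_sketch and the returned keys (incidence_hyperedges, num_hyperedges, x_0) are illustrative assumptions, not the verbatim implementation of this method.

import torch
from torch_cluster import fps, knn

def voronoi_lift_sketch(pos, features, support_ratio=0.1):
    # Hypothetical helper, not the actual lift_topology implementation.
    # Sample a support set with Farthest Point Sampling.
    support_idx = fps(pos, ratio=support_ratio)
    support_pos = pos[support_idx]

    # Assign every point to its nearest support point (k=1); each support
    # point thus induces one Voronoi-cell hyperedge.
    assign = knn(support_pos, pos, 1)
    point_idx, cell_idx = assign[0], assign[1]

    # Sparse incidence matrix of shape (num_points, num_hyperedges).
    incidence = torch.sparse_coo_tensor(
        torch.stack([point_idx, cell_idx]),
        torch.ones(point_idx.size(0)),
        size=(pos.size(0), support_idx.size(0)),
    )
    return {
        "incidence_hyperedges": incidence,  # assumed key names
        "num_hyperedges": support_idx.size(0),
        "x_0": features,
    }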

fps(src, batch=None, ratio=None, random_start=True, batch_size=None, ptr=None)#

A sampling algorithm from the “PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space” paper, which iteratively samples the most distant point with regard to the rest of the points.

Parameters:
  • src (Tensor) – Point feature matrix \(\mathbf{X} \in \mathbb{R}^{N \times F}\).

  • batch (LongTensor, optional) – Batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each node to a specific example. (default: None)

  • ratio (float or Tensor, optional) – Sampling ratio. (default: 0.5)

  • random_start (bool, optional) – If set to False, use the first node in \(\mathbf{X}\) as the starting node. (default: True)

  • batch_size (int, optional) – The number of examples \(B\). Automatically calculated if not given. (default: None)

  • ptr (torch.Tensor or [int], optional) – If given, batch assignment will be determined based on boundaries in CSR representation, e.g., batch=[0,0,1,1,1,2] translates to ptr=[0,2,5,6]. (default: None)

Return type:

LongTensor

import torch
from torch_cluster import fps

src = torch.Tensor([[-1, -1], [-1, 1], [1, -1], [1, 1]])  # four 2D points
batch = torch.tensor([0, 0, 0, 0])  # all points belong to the same example
index = fps(src, batch, ratio=0.5)  # indices of the sampled points (here 2 of 4)
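
The returned index holds the row indices of the sampled points in src, so the coordinates of the support set can be gathered directly (continuing the example above):

support = src[index]  # coordinates of the FPS-sampled support points
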
knn(x, y, k, batch_x=None, batch_y=None, cosine=False, num_workers=1, batch_size=None)#

Finds for each element in y the k nearest points in x.

Parameters:
  • x (Tensor) – Node feature matrix \(\mathbf{X} \in \mathbb{R}^{N \times F}\).

  • y (Tensor) – Node feature matrix \(\mathbf{Y} \in \mathbb{R}^{M \times F}\).

  • k (int) – The number of neighbors.

  • batch_x (LongTensor, optional) – Batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each node to a specific example. batch_x needs to be sorted. (default: None)

  • batch_y (LongTensor, optional) – Batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^M\), which assigns each node to a specific example. batch_y needs to be sorted. (default: None)

  • cosine (bool, optional) – If True, will use the cosine distance instead of the Euclidean distance to find nearest neighbors. (default: False)

  • num_workers (int) – Number of workers to use for computation. Has no effect in case batch_x or batch_y is not None, or the input lies on the GPU. (default: 1)

  • batch_size (int, optional) – The number of examples \(B\). Automatically calculated if not given. (default: None)

Return type:

LongTensor

import torch
from torch_cluster import knn

x = torch.Tensor([[-1, -1], [-1, 1], [1, -1], [1, 1]])  # reference points
batch_x = torch.tensor([0, 0, 0, 0])
y = torch.Tensor([[-1, 0], [1, 0]])  # query points
batch_y = torch.tensor([0, 0])
assign_index = knn(x, y, 2, batch_x, batch_y)  # 2 nearest reference points per query
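
With k=1 the same call turns into a Voronoi-style assignment: each query in y is paired with its single nearest point in x, which is how the lifting above associates points with FPS support points. Continuing the example (the row layout, queries in row 0 and matched reference indices in row 1, follows the torch_cluster convention):

nearest = knn(x, y, 1, batch_x, batch_y)
# nearest[0] indexes y (the queries), nearest[1] the matched points in x.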