pointtree.operations
Algorithms for tree instance segmentation.
- pointtree.operations.cloth_simulation_filtering(
- coords: ndarray[Any, dtype[float64]],
- classification_threshold: float,
- resolution: float,
- rigidness: int,
- correct_steep_slope: bool = False,
- iterations: int = 100,
Detects ground points using the Cloth Simulation Filtering (CSF) algorithm proposed in Zhang, Wuming, et al. “An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation.” Remote Sensing 8.6 (2016): 501.
- Parameters:
coords – Point coordinates.
classification_threshold – Maximum height above the cloth a point can have in order to be classified as a terrain point. All points whose distance to the cloth is equal to or below this threshold are classified as terrain points.
resolution – Resolution of the cloth grid (in meters).
rigidness – Rigidness of the cloth (the three levels 1, 2, and 3 are available, where 1 is the lowest and 3 the highest rigidness).
correct_steep_slope – Whether the cloth should be corrected for steep slopes in a post-processing step. Defaults to False.
iterations – Maximum number of iterations. Defaults to 100.
- Returns:
Class IDs for each point. For terrain points, the class ID is set to 0 and for non-terrain points to 1.
- Raises:
ValueError – If rigidness is not 1, 2, or 3.
- Shape:
coords: \((N, 3)\)
Output: \((N)\)
where
\(N = \text{number of points}\)
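A minimal usage sketch (the point cloud and parameter values below are made up for illustration; it only assumes the function is importable from pointtree.operations with the signature documented above):

```python
import numpy as np

from pointtree.operations import cloth_simulation_filtering

# Synthetic point cloud: roughly flat ground at z = 0 plus some elevated (vegetation) points.
rng = np.random.default_rng(42)
ground = np.column_stack(
    [rng.uniform(0, 10, 1000), rng.uniform(0, 10, 1000), rng.normal(0, 0.02, 1000)]
)
canopy = np.column_stack(
    [rng.uniform(0, 10, 200), rng.uniform(0, 10, 200), rng.uniform(2, 15, 200)]
)
coords = np.concatenate([ground, canopy]).astype(np.float64)

classification = cloth_simulation_filtering(
    coords,
    classification_threshold=0.5,  # points within 0.5 m of the cloth become terrain points
    resolution=0.5,                # cloth grid resolution in meters
    rigidness=2,                   # medium rigidness (valid values: 1, 2, 3)
)

terrain_coords = coords[classification == 0]      # class ID 0 = terrain
non_terrain_coords = coords[classification == 1]  # class ID 1 = non-terrain
```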
- pointtree.operations.create_digital_terrain_model(
- terrain_coords: ndarray[Any, dtype[float64]],
- grid_resolution: float,
- k: int,
- p: float,
- voxel_size: float | None = None,
Constructs a rasterized digital terrain model (DTM) from a set of terrain points. The DTM is constructed by creating a grid of regularly arranged DTM points and interpolating the height of the \(k\) closest terrain points for each DTM point on the grid. In the interpolation, each terrain point \(x_t\) is weighted by the inverse of its distance to the corresponding DTM point \(x_{dtm}\) raised to the power \(p\), i.e., \(\frac{1}{||x_{dtm} - x_{t}||^p}\). If there are terrain points whose distance to the DTM point is zero, only these points are used to calculate the DTM height and more distant points are ignored. Before constructing the DTM, the terrain points can optionally be downsampled using voxel-based subsampling.
- Parameters:
terrain_coords – Coordinates of the terrain points from which to construct the DTM.
grid_resolution – Resolution of the DTM grid (in meters).
k – Number of terrain points between which interpolation is performed to obtain the terrain height of a DTM point.
p – Power \(p\) for inverse-distance weighting in the interpolation of terrain points.
voxel_size – Voxel size with which the terrain points are downsampled before the DTM is created. If set to None, no downsampling is performed. Defaults to None.
- Returns:
Tuple of two arrays. The first is the DTM. The second contains the x- and y-coordinates of the top left corner of the DTM grid.
- Shape:
terrain_coords: \((N, 3)\)
Output: Tuple of two arrays. The first has shape \((H, W)\) and the second has shape \((2)\).
where
\(N = \text{number of terrain points}\)
\(H = \text{extent of the DTM grid in y-direction}\)
\(W = \text{extent of the DTM grid in x-direction}\)
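The following hedged sketch builds a DTM from synthetic terrain points; all data and parameter values are illustrative assumptions:

```python
import numpy as np

from pointtree.operations import create_digital_terrain_model

# Synthetic terrain points on a gentle slope (illustrative data).
rng = np.random.default_rng(0)
xy = rng.uniform(0, 20, size=(5000, 2))
z = 0.05 * xy[:, 0] + rng.normal(0, 0.02, size=5000)
terrain_coords = np.column_stack([xy, z]).astype(np.float64)

dtm, dtm_offset = create_digital_terrain_model(
    terrain_coords,
    grid_resolution=1.0,  # DTM grid resolution in meters
    k=4,                  # interpolate between the 4 nearest terrain points
    p=2.0,                # inverse-distance weighting with power 2
    voxel_size=0.25,      # optional voxel downsampling of the terrain points
)
# dtm has shape (H, W); dtm_offset holds the x- and y-coordinates of the grid's top left corner.
```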
- pointtree.operations.knn_search(
- coords_support_points: Tensor,
- coords_query_points: Tensor,
- batch_indices_support_points: Tensor,
- batch_indices_query_points: Tensor,
- point_cloud_sizes_support_points: Tensor,
- point_cloud_sizes_query_points: Tensor,
- k: int,
- return_sorted: bool = True,
Computes the indices of the k nearest neighbors. Decides between different implementations:
- Implementation from PyTorch3D: This implementation is always used if PyTorch3D is installed because it is more efficient in terms of runtime and memory consumption than the other available implementations.
- Implementation from torch-cluster: This implementation is used when PyTorch3D is not installed. It is similar to the PyTorch3D implementation but is slightly slower.
- Parameters:
coords_support_points – Coordinates of the support points to be searched for neighbors.
coords_query_points – Coordinates of the query points.
batch_indices_support_points – Indices indicating to which point cloud in the batch each support point belongs.
batch_indices_query_points – Indices indicating to which point cloud in the batch each query point belongs.
point_cloud_sizes_support_points – Number of points in each point cloud in the batch of support points.
point_cloud_sizes_query_points – Number of points in each point cloud in the batch of query points.
k – The number of nearest neighbors to search.
return_sorted – Whether the returned neighbors should be sorted by their distance to the query point. Defaults to True. Setting it to False can improve performance for some implementations.
- Returns:
Tuple of two tensors. The first tensor contains the indices of the neighbors of each query point. The second tensor contains the distances between the neighbors and the query points.
- Shape:
coords_support_points: \((N, 3)\)
coords_query_points: \((N', 3)\)
batch_indices_support_points: \((N)\)
batch_indices_query_points: \((N')\)
point_cloud_sizes_support_points: \((B)\)
point_cloud_sizes_query_points: \((B)\)
Output: Tuple of two tensors, both with shape \((N', k)\) if \(k \leq n_{max}\), otherwise \((N', n_{max})\).
where
\(B = \text{batch size}\)
\(N = \text{number of support points}\)
\(N' = \text{number of query points}\)
\(n_{max} = \text{maximum number of neighbors a query point has}\)
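A hedged usage sketch for a batch of two point clouds of different sizes; the tensor contents are made up, and which backend is used is decided internally:

```python
import torch

from pointtree.operations import knn_search

# Batch of two point clouds with 100 and 150 support points and 10 and 20 query points.
coords_support_points = torch.rand(250, 3)
coords_query_points = torch.rand(30, 3)
batch_indices_support_points = torch.cat(
    [torch.zeros(100, dtype=torch.long), torch.ones(150, dtype=torch.long)]
)
batch_indices_query_points = torch.cat(
    [torch.zeros(10, dtype=torch.long), torch.ones(20, dtype=torch.long)]
)
point_cloud_sizes_support_points = torch.tensor([100, 150])
point_cloud_sizes_query_points = torch.tensor([10, 20])

neighbor_indices, neighbor_distances = knn_search(
    coords_support_points,
    coords_query_points,
    batch_indices_support_points,
    batch_indices_query_points,
    point_cloud_sizes_support_points,
    point_cloud_sizes_query_points,
    k=8,
)
# Since both point clouds contain at least 8 support points, both outputs have shape (30, 8).
```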
- pointtree.operations.knn_search_pytorch3d(
- coords_support_points: Tensor,
- coords_query_points: Tensor,
- batch_indices_query_points: Tensor,
- point_cloud_sizes_support_points: Tensor,
- point_cloud_sizes_query_points: Tensor,
- k: int,
- return_sorted: bool = True,
Computes the indices of the k nearest neighbors. This implementation is based on PyTorch3D's knn_points function.
The GPU-based KNN search implementation from PyTorch3D launches one CUDA thread per query point, and each thread then loops through all the support points to find the k nearest neighbors. It is similar to the torch-cluster implementation but requires input batches of regular shape. Therefore, the variable-size point cloud batches are packed into regularly shaped batches before being passed to PyTorch3D.
- Parameters:
coords_support_points – Coordinates of the support points to be searched for neighbors.
coords_query_points – Coordinates of the query points.
batch_indices_query_points – Indices indicating to which point cloud in the batch each query point belongs.
point_cloud_sizes_support_points – Number of points in each point cloud in the batch of support points.
point_cloud_sizes_query_points – Number of points in each point cloud in the batch of query points.
k – The number of nearest neighbors to search.
return_sorted – Whether the returned neighbors should be sorted by their distance to the query point. Defaults to True. Setting it to False can improve performance for some implementations.
- Returns:
Tuple of two tensors. The first tensor contains the indices of the neighbors of each query point. The second tensor contains the distances between the neighbors and the query points.
- Shape:
coords_support_points: \((N, 3)\)
coords_query_points: \((N', 3)\)
batch_indices_query_points: \((N')\)
point_cloud_sizes_support_points: \((B)\)
point_cloud_sizes_query_points: \((B)\)
Output: Tuple of two tensors, both with shape \((N', k)\) if \(k \leq n_{max}\), otherwise \((N', n_{max})\).
where
\(B = \text{batch size}\)
\(N = \text{number of support points}\)
\(N' = \text{number of query points}\)
\(n_{max} = \text{maximum number of neighbors a query point has}\)
- pointtree.operations.knn_search_torch_cluster(
- coords_support_points: Tensor,
- coords_query_points: Tensor,
- batch_indices_support_points: Tensor,
- batch_indices_query_points: Tensor,
- point_cloud_sizes_support_points: Tensor,
- k: int,
Computes the indices of the k nearest neighbors. This implementation is based on the knn method from torch-cluster.
The GPU-based KNN search implementation from torch-cluster launches one CUDA thread per query point, and each thread then loops through all the support points to find the k nearest neighbors. It is similar to the PyTorch3D implementation but can handle variable-size point clouds directly.
- Parameters:
coords_support_points – Coordinates of the support points to be searched for neighbors.
coords_query_points – Coordinates of the query points.
batch_indices_support_points – Indices indicating to which point cloud in the batch each support point belongs.
batch_indices_query_points – Indices indicating to which point cloud in the batch each query point belongs.
point_cloud_sizes_support_points – Number of points in each point cloud in the batch of support points.
k – The number of nearest neighbors to search.
- Returns:
Tuple of two tensors. The first tensor contains the indices of the neighbors of each query point. The second tensor contains the distances between the neighbors and the query points.
- Shape:
coords_support_points: \((N, 3)\)
coords_query_points: \((N', 3)\)
batch_indices_support_points: \((N)\)
batch_indices_query_points: \((N')\)
point_cloud_sizes_support_points: \((B)\)
Output: Tuple of two tensors, both with shape \((N', k)\) if \(k \leq n_{max}\), otherwise \((N', n_{max})\).
where
\(B = \text{batch size}\)
\(N = \text{number of support points}\)
\(N' = \text{number of query points}\)
\(n_{max} = \text{maximum number of neighbors a query point has}\)
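Calling this backend directly is mainly useful when PyTorch3D is not installed; normally knn_search selects it automatically. A minimal sketch with made-up tensors:

```python
import torch

from pointtree.operations import knn_search_torch_cluster

# Single point cloud in the batch (illustrative data).
coords_support_points = torch.rand(200, 3)
coords_query_points = torch.rand(50, 3)
batch_indices_support_points = torch.zeros(200, dtype=torch.long)
batch_indices_query_points = torch.zeros(50, dtype=torch.long)
point_cloud_sizes_support_points = torch.tensor([200])

neighbor_indices, neighbor_distances = knn_search_torch_cluster(
    coords_support_points,
    coords_query_points,
    batch_indices_support_points,
    batch_indices_query_points,
    point_cloud_sizes_support_points,
    k=5,
)
# Both outputs have shape (50, 5).
```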
- pointtree.operations.make_labels_consecutive(
- labels: ndarray,
- start_id: int = 0,
- ignore_id: int | None = None,
- inplace: bool = False,
- return_unique_labels: bool = False,
Transforms the input labels into consecutive integer labels starting from a given start_id.
- Parameters:
labels – An array of original labels.
start_id – The starting ID for the consecutive labels. Defaults to zero.
ignore_id – A label ID that should not be changed when transforming the labels.
inplace – Whether the transformation should be applied in place to the labels array. Defaults to False.
return_unique_labels – Whether the unique labels after applying the transformation (excluding ignore_id) should be returned. Defaults to False.
- Returns:
An array with the transformed consecutive labels. If return_unique_labels is set to True, a tuple of two arrays is returned, where the second array contains the unique labels after the transformation.
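A minimal sketch with made-up instance labels; the exact ID assigned to each original label is an implementation detail and not asserted here:

```python
import numpy as np

from pointtree.operations import make_labels_consecutive

labels = np.array([7, 7, 3, 12, 3, -1, 12], dtype=np.int64)

# Map the instance IDs {3, 7, 12} to consecutive IDs starting at 0, leaving the
# ignore label -1 unchanged.
consecutive_labels, unique_labels = make_labels_consecutive(
    labels,
    start_id=0,
    ignore_id=-1,
    return_unique_labels=True,
)
# consecutive_labels now contains values from {0, 1, 2} plus the untouched -1 entries;
# unique_labels contains the unique transformed labels (excluding -1).
```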
- pointtree.operations.normalize_height(
- coords: ndarray[Any, dtype[float64]],
- dtm: ndarray[Any, dtype[float64]],
- dtm_offset: ndarray[Any, dtype[float64]],
- dtm_resolution: float,
- allow_outside_points: bool = True,
- inplace: bool = False,
Normalizes the height of a point cloud by subtracting the corresponding terrain height from the z-coordinate of each point. The terrain height for a given point is obtained by bilinearly interpolating the terrain heights of the four closest grid points of the digital terrain model.
- Parameters:
coords – Point coordinates of the point cloud to normalize.
dtm – Rasterized digital terrain model.
dtm_offset – X- and y-coordinates of the top left corner of the DTM grid.
dtm_resolution – Resolution of the DTM grid (in meters).
allow_outside_points – If this option is set to True and a point in the point cloud to be normalized is not in the area covered by the DTM, the height of the nearest DTM points is still determined and used for normalization. Otherwise, a ValueError is raised if points are outside the area covered by the DTM. Defaults to True.
inplace – Whether the normalization should be applied in place to the coords array. Defaults to False.
- Returns:
Height-normalized point cloud.
- Raises:
ValueError – If the point cloud to be normalized covers a larger base area than the DTM.
- Shape:
coords: \((N, 3)\)
dtm: \((H, W)\)
dtm_offset: \((2)\)
Output: \((N, 3)\)
where
\(N = \text{number of points}\)
\(H = \text{extent of the DTM grid in y-direction}\)
\(W = \text{extent of the DTM grid in x-direction}\)
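A hedged sketch using a synthetic, perfectly flat DTM so that the expected result is easy to see; the array contents are illustrative assumptions:

```python
import numpy as np

from pointtree.operations import normalize_height

# Flat synthetic DTM at a terrain height of 50 m, covering a 200 m x 200 m area.
dtm = np.full((201, 201), 50.0)
dtm_offset = np.array([0.0, 0.0])  # x- and y-coordinates of the top left corner of the DTM grid

# Point cloud with absolute heights between 50 m and 60 m (illustrative data).
rng = np.random.default_rng(1)
coords = np.column_stack(
    [rng.uniform(0, 20, 1000), rng.uniform(0, 20, 1000), 50.0 + rng.uniform(0, 10, 1000)]
).astype(np.float64)

normalized_coords = normalize_height(
    coords,
    dtm,
    dtm_offset,
    dtm_resolution=1.0,  # must match the grid resolution used to build the DTM
)
# Since the terrain height is 50 m everywhere, the normalized z-coordinates lie between 0 m and 10 m.
```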
- pointtree.operations.pack_batch(
- input_batch: Tensor,
- point_cloud_sizes: Tensor,
- fill_value: float = inf,
Packs a batch containing point clouds of varying size into a regular batch structure by padding all point clouds to the same size.
- Parameters:
input_batch – Batch to be packed.
point_cloud_sizes – Number of points in each point cloud in the batch.
fill_value – Value used to pad point clouds that contain fewer points than the largest point cloud in the batch. Defaults to torch.inf.
- Returns:
Tuple of two tensors. The first tensor is the packed batch. Point clouds containing fewer than \(N_{max}\) points are padded with fill_value. The second tensor is a boolean mask, which is True in all positions where the packed batch contains valid points and False in all positions filled with fill_value.
- Shape:
input_batch: \((N_1 + ... + N_B, D)\)
point_cloud_sizes: \((B)\)
Output: Tuple of two tensors with shape \((B, N_{max}, D)\) and \((B, N_{max})\).
where
\(B = \text{batch size}\)
\(D = \text{number of feature channels}\)
\(N_i = \text{number of points in the i-th point cloud}\)
\(N_{max} = \text{number of points in the largest point cloud in the batch}\)
- pointtree.operations.ravel_index(
- index: Tensor,
- input_tensor: Tensor,
- dim: int,
Converts the index argument of a multi-dimensional torch.gather() or torch.scatter_add() operation into an index that can be used to apply the operation to the flattened input tensor.
- Parameters:
index – Multi-dimensional index.
input_tensor – Input tensor of the torch.gather() or torch.scatter_add() operation.
dim – Dimension in which the torch.gather() or torch.scatter_add() operation is to be applied.
- Returns:
Index for applying torch.gather() or torch.scatter_add() on the flattened input tensor.
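The sketch below illustrates one plausible way to use the returned index: reproducing a multi-dimensional torch.gather() on the flattened input tensor. The equivalence shown in the final comparison is an assumption derived from the description above, not something stated in the source:

```python
import torch

from pointtree.operations import ravel_index

input_tensor = torch.arange(12, dtype=torch.float32).reshape(3, 4)
index = torch.tensor([[0, 2], [1, 3], [3, 0]])  # index for torch.gather along dim=1

# Convert the multi-dimensional index into an index into the flattened input tensor.
flat_index = ravel_index(index, input_tensor, dim=1)

gathered_directly = torch.gather(input_tensor, 1, index)
gathered_via_flat = input_tensor.flatten()[flat_index.flatten()].reshape(index.shape)
print(torch.equal(gathered_directly, gathered_via_flat))  # expected: True under the reading above
```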
- pointtree.operations.ravel_multi_index(
- multi_index: Tensor,
- dims: Size | Tensor,
PyTorch implementation of numpy.ravel_multi_index. This operation is the inverse of pointtree.operations.unravel_flat_index().
- Parameters:
multi_index – Tensor containing the indices for each dimension.
dims – The shape of the tensor to which the indices from multi_index apply.
- Returns:
Indices for the flattened version of the tensor, referring to the same elements as referenced by multi_index for the non-flattened version of the tensor.
- Shape:
multi_index: \((N, d_1, ..., d_D)\)
dims: \((D)\)
Output: \((N \cdot d_1 \cdot ... \cdot d_D)\)
where
\(N = \text{number of items}\)
\(D = \text{number of index dimensions}\)
\(d_i = \text{number of elements along dimension } i\)
- pointtree.operations.unravel_flat_index(
- flat_index: Tensor,
- dims: Size | Tensor,
Converts an index for a 1-dimensional tensor into an index for an equivalent multi-dimensional tensor. This operation is the inverse of pointtree.operations.ravel_multi_index().
- Parameters:
flat_index – Tensor containing the indices for the flat array.
dims – The shape of the tensor into which the returned indices should apply.
- Returns:
Indices for the multi-dimensional version of the tensor, referring to the same elements as referenced by flat_index for the flattened version of the tensor.
- Shape:
flat_index: \((N \cdot d_1 \cdot ... \cdot d_D)\)
dims: \((D)\)
Output: \((N, d_1, ..., d_D)\)
where
\(N = \text{number of items}\)
\(D = \text{number of index dimensions}\)
\(d_i = \text{number of elements along dimension } i\)
- pointtree.operations.voxel_downsampling(
- points: ndarray,
- voxel_size: float,
- point_aggregation: Literal['nearest_neighbor', 'random'] = 'random',
- preserve_order: bool = True,
- start: ndarray | None = None,
Voxel-based downsampling of a point cloud.
- Parameters:
points – The point cloud to downsample.
voxel_size – The size of the voxels used for downsampling. If voxel_size is set to zero or less, no downsampling is applied.
point_aggregation – Method used to aggregate the points within the same voxel. "nearest_neighbor": The point closest to the voxel center is selected. "random": One point is randomly sampled from the voxel. Defaults to "random".
preserve_order – If set to True, the point order is preserved during downsampling. This means that for any two points included in the downsampled point cloud, the point that is first in the original point cloud is also first in the downsampled point cloud. Defaults to True.
start – Coordinates of a point at which the voxel grid is to be aligned, i.e., the grid is placed so that start is at a corner point of a voxel. Defaults to None, which means that the grid is aligned at the coordinate origin.
- Returns:
Tuple of two arrays. The first contains the points remaining after downsampling. The second contains the indices of the points remaining after downsampling within the original point cloud.
- Raises:
ValueError – If start is not None and has an invalid shape.
- Shape:
points: \((N, 3 + D)\)
start: \((3)\)
Output: Tuple of two arrays. The first has shape \((N', 3 + D)\) and the second \((N')\).
where
\(N = \text{number of points before downsampling}\)
\(N' = \text{number of points after downsampling}\)
\(D = \text{number of feature channels excluding coordinate channels}\)
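A minimal sketch; the point cloud and voxel size are illustrative:

```python
import numpy as np

from pointtree.operations import voxel_downsampling

rng = np.random.default_rng(3)
points = rng.uniform(0, 5, size=(10000, 3))  # (N, 3) coordinates without extra feature channels

downsampled_points, selected_indices = voxel_downsampling(
    points,
    voxel_size=0.5,
    point_aggregation="nearest_neighbor",  # keep the point closest to each voxel center
)
# downsampled_points contains at most one point per occupied 0.5 m voxel;
# selected_indices gives the positions of these points in the original array.
```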