autoarray.inversion.pixelization.mesh.DelaunayBrightnessImage

class DelaunayBrightnessImage[source]

Bases: Delaunay

An irregular mesh of Delaunay triangle pixels, which are paired with a 2D grid of (y,x) coordinates via linear barycentric interpolation. The Delaunay corners are derived in the image-plane by applying a KMeans clustering algorithm to the image’s weight map.

For a full description of how a mesh is paired with another grid, see the Pixelization API documentation.

The Delaunay mesh represents pixels as an irregular 2D grid of Delaunay triangles.

  • image_plane_data_grid: The observed data grid in the image-plane (which is paired with the mesh in the source-plane).

  • image_plane_mesh_grid: The (y,x) mesh coordinates in the image-plane (which are the corners of Delaunay triangles in the source-plane).

  • source_plane_data_grid: The observed data grid mapped to the source-plane after gravitational lensing.

  • source_plane_mesh_grid: The corner of each Delaunay triangle in the source-plane (the image_plane_mesh_grid maps to this after gravitational lensing).

Each (y,x) coordinate in the source_plane_data_grid is paired with the three nearest Delaunay triangle corners, using a weighted interpolation scheme. Delaunay triangle corners that are closer to a given source_plane_data_grid coordinate therefore receive a higher weight in this pairing.
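To make this barycentric pairing concrete, here is a minimal sketch using scipy.spatial.Delaunay directly rather than autoarray’s own classes; the mesh_grid and data_grid arrays are illustrative placeholders.

    import numpy as np
    from scipy.spatial import Delaunay

    # Illustrative placeholder grids of (y,x) coordinates.
    mesh_grid = np.random.uniform(-1.0, 1.0, size=(50, 2))   # Delaunay corners
    data_grid = np.random.uniform(-0.5, 0.5, size=(200, 2))  # data coordinates

    triangulation = Delaunay(mesh_grid)

    # Find the triangle each data coordinate falls within (-1 if outside the hull).
    simplex_indices = triangulation.find_simplex(data_grid)
    inside = simplex_indices >= 0

    # Barycentric weights of each coordinate with respect to its triangle's three
    # corners; corners closer to the coordinate receive larger weights, summing to 1.
    transform = triangulation.transform[simplex_indices[inside]]    # (N, 3, 2)
    deltas = data_grid[inside] - transform[:, 2, :]
    partial = np.einsum("nij,nj->ni", transform[:, :2, :], deltas)  # first two weights
    weights = np.c_[partial, 1.0 - partial.sum(axis=1)]             # (N, 3)

    # The three mesh corners each data coordinate is paired with.
    corner_indices = triangulation.simplices[simplex_indices[inside]]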

The corners of the Delaunay pixels are derived in the image-plane, by applying a KMeans clustering algorithm to the masked image data’s weight-map. The weight_floor and weight_power parameters allow the KMeans algorithm to adapt the image-plane coordinates to the image’s brightest or faintest values. The computed values are mapped to the source-plane via gravitational lensing, where they form the Delaunay pixel corners.

Parameters
  • pixels – The total number of pixels in the mesh, which is therefore also the number of (y,x) coordinates computed via the KMeans clustering algorithm in the image-plane.

  • weight_floor – A parameter which reweights the data values the KMeans algorithm is applied to; as the floor increases, more weight is given to lower data values, allowing mesh pixels to be placed in these fainter regions of the data.

  • weight_power – A parameter which reweights the data values the KMeans algorithm is applied to; as the power increases, more weight is given to higher data values, allowing mesh pixels to be placed in these brighter regions of the data.
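As a hedged illustration of these parameters, the sketch below constructs the mesh; it assumes the package’s customary import autoarray as aa alias and that the class is exposed as aa.mesh.DelaunayBrightnessImage, and the parameter values themselves are arbitrary.

    import autoarray as aa  # assumed alias; class path taken from this page's title

    mesh = aa.mesh.DelaunayBrightnessImage(
        pixels=500,        # number of KMeans clusters, and hence of Delaunay corners
        weight_floor=0.0,  # keep faint regions at their natural (low) weight
        weight_power=5.0,  # larger powers concentrate corners on the brightest pixels
    )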

Methods

image_plane_mesh_grid_from

Computes the mesh_grid in the image-plane, by overlaying a uniform grid of coordinates over the masked 2D data (see Grid2DSparse.from_grid_and_unmasked_2d_grid_shape()).

mapper_grids_from

Mapper objects describe the mappings between pixels in the masked 2D data and the pixels in a mesh, in both the data and source frames.

mesh_grid_from

Return the Delaunay source_plane_mesh_grid as a Mesh2DDelaunay object, which provides additional functionality for performing operations that exploit the geometry of a Delaunay mesh.

relocated_grid_from

Relocates all coordinates of the input source_plane_data_grid that are outside of a border (which is defined by a grid of (y,x) coordinates) to the edge of this border.

relocated_mesh_grid_from

Relocates all coordinates of the input source_plane_mesh_grid that are outside of a border (which is defined by a grid of (y,x) coordinates) to the edge of this border.

weight_map_from

Computes a weight_map from an input hyper_data, where this image represents components in the masked 2D data in the image-plane.

Attributes

is_stochastic

Return type: bool

uses_interpolation

weight_map_from(hyper_data)[source]

Computes a weight_map from an input hyper_data, where this image represents components in the masked 2D data in the image-plane. This applies the weight_floor and weight_power attributes of the class, which scale the weights so that different components are upweighted relative to one another.

Parameters

hyper_data (ndarray) – An image which represents one or more components in the masked 2D data in the image-plane.

Returns

The weight map which is used to adapt the Delaunay pixels in the image-plane to components in the data.
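As a rough illustration of how such a reweighting can behave (the exact expression autoarray uses is not reproduced here), the sketch below normalises a hypothetical hyper_data array and applies a floor and power in line with the parameter descriptions above:

    import numpy as np

    def weight_map_sketch(hyper_data, weight_floor=0.0, weight_power=1.0):
        # Normalise the data to [0, 1] so the floor and power act on relative brightness.
        normalised = (hyper_data - hyper_data.min()) / (hyper_data.max() - hyper_data.min())
        # Raising the floor lifts faint pixels relative to bright ones; raising the
        # power amplifies bright pixels relative to faint ones.
        return (normalised + weight_floor) ** weight_power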

image_plane_mesh_grid_from(image_plane_data_grid, hyper_data, settings=<autoarray.inversion.pixelization.settings.SettingsPixelization object>)[source]

Computes the mesh_grid in the image-plane, by overlaying a uniform grid of coordinates over the masked 2D data (see Grid2DSparse.from_grid_and_unmasked_2d_grid_shape()).

The image_plane_mesh_grid is transformed to the source_plane_mesh_grid, and it is these (y,x) values which then act as the centres of the Delaunay pixelization’s pixels.

For a DelaunayBrightnessImage this grid is computed by applying a KMeans clustering algorithm to the masked data’s values, where these values are reweighted by the hyper_data so that the algorithm can adapt to specific parts of the data.

Parameters
  • image_plane_data_grid – The observed data grid of (y,x) coordinates in the image-plane, from which the sparse image_plane_mesh_grid is computed; the mesh grid then has a transformation applied to it to create the source_plane_mesh_grid.

  • hyper_data (ndarray) – An image which is used to determine the image_plane_mesh_grid and therefore adapt the distribution of pixels of the Delaunay grid to the data it discretizes.

  • settings – Settings controlling the pixelization, for example whether a border is used to relocate its exterior coordinates.
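As a hedged illustration of the clustering step this method describes, the sketch below uses scikit-learn’s KMeans with the weight map supplied as sample weights; the arrays, cluster count and settings handling are illustrative and do not reproduce autoarray’s own implementation.

    import numpy as np
    from sklearn.cluster import KMeans

    # Illustrative placeholders for the masked image-plane (y,x) grid and its weight map.
    image_plane_data_grid = np.random.uniform(-1.0, 1.0, size=(1000, 2))
    weight_map = np.random.uniform(0.0, 1.0, size=1000)  # e.g. from weight_map_from

    kmeans = KMeans(n_clusters=100, n_init=1, random_state=1)
    kmeans.fit(X=image_plane_data_grid, sample_weight=weight_map)

    # The cluster centres are the image-plane mesh coordinates, which gravitational
    # lensing then maps to the source-plane Delaunay corners.
    image_plane_mesh_grid = kmeans.cluster_centers_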