API¶
Camera¶
class cameratransform.Camera(projection, orientation=None, lens=None)[source]¶
This class is the core of the CameraTransform package and represents a camera. Each camera has a projection (subclass of CameraProjection), a spatial orientation (SpatialOrientation) and optionally a lens distortion (subclass of LensDistortion).
addHorizonInformation(horizon, uncertainty=1, only_plot=False, plot_color=None)[source]¶
Add a term to the camera probability used for fitting. This term includes the probability to observe the horizon at the given pixel positions.
Parameters:
- horizon (ndarray) – the pixel positions of points on the horizon in the image, dimensions (2) or (Nx2)
- uncertainty (number, ndarray) – the uncertainty in pixels of how clearly the horizon is visible in the image, dimensions () or (N)
- only_plot (bool, optional) – when true, the information is ignored for fitting and only used for plotting.
addLandmarkInformation(lm_points_image, lm_points_space, uncertainties, only_plot=False, plot_color=None)[source]¶
Add a term to the camera probability used for fitting. This term includes the probability to observe the given landmarks at the specified positions in the image.
Parameters:
- lm_points_image (ndarray) – the pixel positions of the landmarks in the image, dimensions (2) or (Nx2)
- lm_points_space (ndarray) – the space positions of the landmarks, dimensions (3) or (Nx3)
- uncertainties (number, ndarray) – the standard deviation uncertainty of the positions in space coordinates. For landmarks obtained by GPS, this could typically be e.g. [3, 3, 5]; dimensions scalar, (3) or (Nx3)
- only_plot (bool, optional) – when true, the information is ignored for fitting and only used for plotting.
addObjectHeightInformation(points_feet, points_head, height, variation, only_plot=False, plot_color=None)[source]¶
Add a term to the camera probability used for fitting. This term includes the probability to observe objects with the given feet and head positions and a known mean height and height variation.
Parameters:
- points_feet (ndarray) – the positions of the objects' feet, dimensions (2) or (Nx2)
- points_head (ndarray) – the positions of the objects' heads, dimensions (2) or (Nx2)
- height (number, ndarray) – the mean height of the objects, dimensions scalar or (N)
- variation (number, ndarray) – the standard deviation of the heights of the objects, dimensions scalar or (N). If the variation is not known, a pymc2 stochastic variable object can be used.
- only_plot (bool, optional) – when true, the information is ignored for fitting and only used for plotting.
addObjectLengthInformation(points_front, points_back, length, variation, Z=0, only_plot=False, plot_color=None)[source]¶
Add a term to the camera probability used for fitting. This term includes the probability to observe objects of a given length lying flat on the surface. The objects are assumed to be like flat rods lying on the Z=0 surface.
Parameters:
- points_front (ndarray) – the positions of the objects' front ends, dimensions (2) or (Nx2)
- points_back (ndarray) – the positions of the objects' back ends, dimensions (2) or (Nx2)
- length (number, ndarray) – the mean length of the objects, dimensions scalar or (N)
- variation (number, ndarray) – the standard deviation of the lengths of the objects, dimensions scalar or (N). If the variation is not known, a pymc2 stochastic variable object can be used.
- only_plot (bool, optional) – when true, the information is ignored for fitting and only used for plotting.
distanceToHorizon()[source]¶
Calculates the distance of the camera's position to the horizon of the earth. The horizon depends on the radius of the earth and the elevation of the camera.
Returns: distance – the distance to the horizon.
Return type: number
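For a spherical earth this distance follows the textbook relation d = √(2·R·h + h²). A minimal sketch using an assumed mean earth radius (illustrative, not the library's internal implementation):

```python
import math

EARTH_RADIUS_M = 6371e3  # assumed mean earth radius

def horizon_distance(elevation_m):
    # line-of-sight distance from an observer at height h to the horizon
    # of a sphere: d = sqrt(2*R*h + h^2), approximately sqrt(2*R*h) for h << R
    return math.sqrt(2 * EARTH_RADIUS_M * elevation_m + elevation_m ** 2)

d = horizon_distance(15.4)  # a camera 15.4 m above sea level: roughly 14 km
```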
generateLUT(undef_value=0, whole_image=False)[source]¶
Generate a lookup table (LUT) for the area covered by one pixel in the image, dependent on the y position in the image.
Parameters:
- undef_value (number, optional) – the value assigned to undefined positions, default 0
- whole_image (bool, optional) – whether to generate the lookup table for the whole image or just for a y slice
Returns: LUT – same length as the image height
Return type: ndarray
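The idea can be sketched for the simplified case of a level pinhole camera over flat ground (all names and parameters below are illustrative assumptions, not the library's internals):

```python
import numpy as np

def pixel_area_lut(image_height, focallength_px, elevation_m, undef_value=0.0):
    # For a level pinhole camera at height h, an image row y below the
    # horizon row cy maps to ground distance d = h * f / (y - cy).
    cy = image_height / 2
    y = np.arange(image_height, dtype=float)
    lut = np.full(image_height, undef_value)
    below = y > cy  # rows at or above the horizon never hit the ground
    d = elevation_m * focallength_px / (y[below] - cy)
    d_next = elevation_m * focallength_px / (y[below] - cy + 1)
    # pixel footprint: depth extent of one row times lateral extent (d / f)
    lut[below] = (d - d_next) * d / focallength_px
    return lut

lut = pixel_area_lut(2592, 3729, 15.4)
```

Rows just below the horizon cover huge ground areas, while rows near the bottom of the image cover only fractions of a square meter.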
getCameraCone(project_to_ground=False, D=1)[source]¶
The cone of the camera's field of view. This includes the border of the image and lines to the origin of the camera.
Returns: cone – the cone of the camera in space coordinates, dimensions (Nx3)
Return type: ndarray
getImageBorder(resolution=1)[source]¶
Get the border of the image in a top view. Useful for drawing the field of view of the camera in a map.
Parameters: resolution (number, optional) – the pixel distance between neighbouring points.
Returns: border – the border of the image in space coordinates, dimensions (Nx3)
Return type: ndarray
getImageHorizon(pointsX=None)[source]¶
This function calculates the position of the horizon in the image, sampled by default at the points x=0, x=im_width/2, x=im_width.
Parameters: pointsX (ndarray, optional) – the x positions at which to determine the horizon, default is [0, image_width/2, image_width], dimensions () or (N)
Returns: horizon – the points of the horizon in image coordinates, dimensions (2) or (Nx2).
Return type: ndarray
getObjectHeight(point_feet, point_heads, Z=0)[source]¶
Calculate the height of objects in the image, assuming the Z position of the objects is known, e.g. they are assumed to stand on the Z=0 plane.
Parameters:
- point_feet (ndarray) – the positions of the feet, dimensions: (2) or (Nx2)
- point_heads (ndarray) – the positions of the heads, dimensions: (2) or (Nx2)
- Z (number, ndarray, optional) – the Z position of the objects, dimensions: scalar or (N), default 0
Returns: heights – the height of the objects in meters, dimensions: () or (N)
Return type: ndarray
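The underlying geometry can be sketched for a level pinhole camera (a simplified model with illustrative parameters, not the library's API):

```python
import numpy as np

def object_height_level_camera(y_feet, y_head, cam_height_m, focallength_px, cy):
    # feet on the Z=0 plane fix the ground distance of the object:
    # d = H * f / (y_feet - cy)  (pixel rows grow downwards, cy = horizon row)
    d = cam_height_m * focallength_px / (np.asarray(y_feet, float) - cy)
    # walking the head ray out to that distance gives the height above ground
    return cam_height_m - d * (np.asarray(y_head, float) - cy) / focallength_px

# a 10 m high camera, f = 1000 px, horizon at row cy = 0:
# feet at row 100 put the object 100 m away; head at row 50 gives 5 m height
h = object_height_level_camera(100, 50, 10, 1000, 0)
```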
getObjectLength(point_front, point_back, Z=0)[source]¶
Calculate the length of objects in the image, assuming the Z position of the objects is known, e.g. they are assumed to lie flat on the Z=0 plane.
Parameters:
- point_front (ndarray) – the positions of the front ends, dimensions: (2) or (Nx2)
- point_back (ndarray) – the positions of the back ends, dimensions: (2) or (Nx2)
- Z (number, ndarray, optional) – the Z position of the objects, dimensions: scalar or (N), default 0
Returns: lengths – the lengths of the objects in meters, dimensions: () or (N)
Return type: ndarray
getRay(points, normed=False)[source]¶
As the transformation from the image coordinate system to the space coordinate system is not unique, image points can only be uniquely mapped to a ray in space coordinates.
Parameters: points (ndarray) – the points in image coordinates for which to get the ray, dimensions (2), (Nx2)
Returns:
- offset (ndarray) – the origin of the camera (= starting point of the rays) in space coordinates, dimensions (3)
- rays (ndarray) – the rays in the space coordinate system, dimensions (3), (Nx3)
Examples
>>> import cameratransform as ct
>>> cam = ct.Camera(ct.RectilinearProjection(focallength_px=3729, image=(4608, 2592)),
>>>                 ct.SpatialOrientation(elevation_m=15.4, tilt_deg=85))

get the ray of a point in the image:

>>> offset, ray = cam.getRay([1968, 2291])
>>> offset
[0.00 0.00 15.40]
>>> ray
[-0.09 0.97 -0.35]

or the rays of multiple points in the image:

>>> offset, rays = cam.getRay([[1968, 2291], [1650, 2189]])
>>> offset
[0.00 0.00 15.40]
>>> rays
[[-0.09 0.97 -0.35]
 [-0.18 0.98 -0.33]]
getTopViewOfImage(image, extent=None, scaling=None, do_plot=False, alpha=None, Z=0.0, skip_size_check=False, hide_backpoints=True)[source]¶
Project an image to a top view projection. This is done using a grid with the dimensions of the extent ([x_min, x_max, y_min, y_max]) in meters and the scaling, giving a resolution. For convenience, the image can be plotted directly. The projected grid is cached, so if the function is called a second time with the same parameters, the second call will be faster.
Parameters:
- image (ndarray) – the image as a numpy array.
- extent (list, optional) – the extent of the resulting top view in meters: [x_min, x_max, y_min, y_max]. If no extent is given, a suitable extent is guessed. If a horizon is visible in the image, the guessed extent will in most cases be too stretched.
- scaling (number, optional) – the scaling factor, i.e. the side length in meters of each pixel in the top view. If no scaling factor is given, a good scaling factor is guessed, trying to get about the same number of pixels in the top view as in the original image.
- do_plot (bool, optional) – whether to directly plot the resulting image in a matplotlib figure.
- alpha (number, optional) – an alpha value used when plotting the image. Useful if multiple images should be overlaid.
- Z (number, optional) – the “height” of the plane on which to project.
- skip_size_check (bool, optional) – if true, the size of the image is not checked against the size of the camera's image.
Returns: image – the top view projected image
Return type: ndarray
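The remapping itself can be sketched with plain NumPy for a level pinhole camera: build a ground grid, project each grid point into the image, and sample the source pixels. This is a simplified sketch under assumed geometry; the library additionally handles arbitrary orientations, lens models and caching:

```python
import numpy as np

def top_view(image, extent, scaling, cam_height_m, focallength_px, cx, cy):
    # ground grid in meters; y_min must be > 0 (in front of the camera)
    x_min, x_max, y_min, y_max = extent
    X, Y = np.meshgrid(np.arange(x_min, x_max, scaling),
                       np.arange(y_min, y_max, scaling))
    # project ground points (X, Y, 0) for a camera at (0, 0, H) looking
    # horizontally along +Y, with the horizon at image row cy
    u = (focallength_px * X / Y + cx).astype(int)
    v = (focallength_px * cam_height_m / Y + cy).astype(int)
    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    result = np.zeros(X.shape, dtype=image.dtype)
    result[valid] = image[v[valid], u[valid]]  # nearest-neighbour sampling
    return result
```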
gpsFromImage(points, X=None, Y=None, Z=0, D=None)[source]¶
Convert points (Nx2) from the image coordinate system to the gps coordinate system. As in spaceFromImage, the transformation is not unique, so an additional constraint (X, Y, Z, or the distance D) has to be provided.
Parameters: points (ndarray) – the points in image coordinates to transform, dimensions (2), (Nx2)
Returns: points – the points in the gps coordinate system, dimensions (3), (Nx3)
Return type: ndarray
gpsFromSpace(points)[source]¶
Convert points (Nx3) from the space coordinate system to the gps coordinate system.
Parameters: points (ndarray) – the points in space coordinates to transform, dimensions (3), (Nx3)
Returns: points – the points in the gps coordinate system, dimensions (3), (Nx3)
Return type: ndarray
imageFromGPS(points)[source]¶
Convert points (Nx3) from the gps coordinate system to the image coordinate system.
Parameters: points (ndarray) – the points in gps coordinates to transform, dimensions (3), (Nx3)
Returns: points – the points in the image coordinate system, dimensions (2), (Nx2)
Return type: ndarray
imageFromSpace(points, hide_backpoints=True)[source]¶
Convert points (Nx3) from the space coordinate system to the image coordinate system.
Parameters: points (ndarray) – the points in space coordinates to transform, dimensions (3), (Nx3)
Returns: points – the points in the image coordinate system, dimensions (2), (Nx2)
Return type: ndarray
Examples
>>> import cameratransform as ct
>>> cam = ct.Camera(ct.RectilinearProjection(focallength_px=3729, image=(4608, 2592)),
>>>                 ct.SpatialOrientation(elevation_m=15.4, tilt_deg=85))

transform a single point from space to the image:

>>> cam.imageFromSpace([-4.17, 45.32, 0.])
[1969.52 2209.73]

or multiple points in one go:

>>> cam.imageFromSpace([[-4.03, 43.96, 0.], [-8.57, 47.91, 0.]])
[[1971.05 2246.95]
 [1652.73 2144.53]]
load(filename)[source]¶
Load the camera parameters from a json file.
Parameters: filename (str) – the filename of the file to load.
rotateSpace(delta_heading)[source]¶
Rotates the whole camera setup: this turns the heading and rotates the camera position (pos_x_m, pos_y_m) around the origin.
Parameters: delta_heading (number) – the number of degrees to rotate the camera clockwise.
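The effect can be sketched as a plain 2D rotation of the camera position plus an offset to the heading. This is only a sketch of the geometry; whether the signs match the library's conventions exactly is an assumption:

```python
import math

def rotate_setup(pos_x_m, pos_y_m, heading_deg, delta_heading):
    # clockwise rotation (viewed from above, x east / y north) corresponds
    # to a negative mathematical rotation angle
    a = math.radians(-delta_heading)
    x = pos_x_m * math.cos(a) - pos_y_m * math.sin(a)
    y = pos_x_m * math.sin(a) + pos_y_m * math.cos(a)
    return x, y, heading_deg + delta_heading
```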
save(filename)[source]¶
Saves the camera parameters to a json file.
Parameters: filename (str) – the filename where to store the parameters.
setGPSpos(lat, lon=None, elevation=None)[source]¶
Provide the earth position of the camera.
Parameters:
- lat (number, string) – the latitude of the camera, or a string representing the gps position.
- lon (number, optional) – the longitude of the camera.
- elevation (number, optional) – the elevation of the camera.
Examples
>>> import cameratransform as ct
>>> cam = ct.Camera()

Supply the gps position of the camera as floats:

>>> cam.setGPSpos(-66.66, 140.00, 19)

or as a string:

>>> cam.setGPSpos("66°39'53.4\"S 140°00'34.8\"E")
spaceFromGPS(points)[source]¶
Convert points (Nx3) from the gps coordinate system to the space coordinate system.
Parameters: points (ndarray) – the points in gps coordinates to transform, dimensions (3), (Nx3)
Returns: points – the points in the space coordinate system, dimensions (3), (Nx3)
Return type: ndarray
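The gps-to-space conversion can be sketched with an equirectangular local-tangent approximation around the camera's gps position. The function name and the earth radius are illustrative assumptions; the sketch also ignores the camera heading:

```python
import math

EARTH_RADIUS_M = 6371e3  # assumed mean earth radius

def space_from_gps(lat, lon, elevation, lat0, lon0, elevation0):
    # equirectangular approximation around the camera position (lat0, lon0):
    # meters east (x), north (y) and up (z), valid for small areas
    x = math.radians(lon - lon0) * math.cos(math.radians(lat0)) * EARTH_RADIUS_M
    y = math.radians(lat - lat0) * EARTH_RADIUS_M
    z = elevation - elevation0
    return x, y, z

x, y, z = space_from_gps(-66.65, 140.01, 0.0, -66.66, 140.00, 19.0)
```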
spaceFromImage(points, X=None, Y=None, Z=0, D=None, mesh=None)[source]¶
Convert points (Nx2) from the image coordinate system to the space coordinate system. This is not a unique transformation, therefore an additional constraint has to be provided: the X, Y, or Z coordinate(s) of the target points, or their distance D from the camera.
Parameters:
- points (ndarray) – the points in image coordinates to transform, dimensions (2), (Nx2)
- X (number, ndarray, optional) – the X coordinate in space coordinates of the target points, dimensions scalar, (N)
- Y (number, ndarray, optional) – the Y coordinate in space coordinates of the target points, dimensions scalar, (N)
- Z (number, ndarray, optional) – the Z coordinate in space coordinates of the target points, dimensions scalar, (N), default 0
- D (number, ndarray, optional) – the distance in space coordinates of the target points from the camera, dimensions scalar, (N)
- mesh (ndarray, optional) – project the image coordinates onto the mesh in space coordinates. The mesh is a list of M triangles, consisting of three 3D points each. Dimensions, (3x3), (Mx3x3)
Returns: points – the points in the space coordinate system, dimensions (3), (Nx3)
Return type: ndarray
Examples
>>> import cameratransform as ct
>>> cam = ct.Camera(ct.RectilinearProjection(focallength_px=3729, image=(4608, 2592)),
>>>                 ct.SpatialOrientation(elevation_m=15.4, tilt_deg=85))

transform a single point (implying the condition Z=0):

>>> cam.spaceFromImage([1968, 2291])
[-3.93 42.45 0.00]

transform multiple points:

>>> cam.spaceFromImage([[1968, 2291], [1650, 2189]])
[[-3.93 42.45 0.00]
 [-8.29 46.11 -0.00]]

points that cannot be projected on the image, because they are behind the camera (for the RectilinearProjection), are returned with nan entries:

>>> cam.imageFromSpace([-4.17, -10.1, 0.])
[nan nan]

specify a Y coordinate for the back projection:

>>> cam.spaceFromImage([[1968, 2291], [1650, 2189]], Y=45)
[[-4.17 45.00 -0.93]
 [-8.09 45.00 0.37]]

or different Y coordinates for each point:

>>> cam.spaceFromImage([[1968, 2291], [1650, 2189]], Y=[43, 45])
[[-3.98 43.00 -0.20]
 [-8.09 45.00 0.37]]
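The Z-constraint back projection amounts to a ray-plane intersection, which can be sketched directly with NumPy (the camera sits at the origin of the space frame; the ray values are taken from the getRay example):

```python
import numpy as np

def intersect_ray_with_z_plane(offset, ray, Z=0.0):
    # solve offset_z + t * ray_z = Z for t, then walk t along the ray
    t = (Z - offset[2]) / ray[2]
    if t < 0:  # the plane lies behind the camera for this ray
        return np.full(3, np.nan)
    return np.asarray(offset) + t * np.asarray(ray)

offset = np.array([0.0, 0.0, 15.4])   # camera origin in space coordinates
ray = np.array([-0.09, 0.97, -0.35])  # a ray pointing down towards the ground
point = intersect_ray_with_z_plane(offset, ray)  # lands on the Z=0 plane
```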
undistortImage(image, extent=None, scaling=None, do_plot=False, alpha=None, skip_size_check=False)[source]¶
Applies the undistortion of the lens model to the image. The purpose of this function is mainly to check the sanity of a lens transformation. As CameraTransform includes the lens transformation in all calculations, it is not necessary to undistort images before using them.
Parameters:
- image (ndarray) – the image to undistort.
- extent (list, optional) – the extent in pixels of the resulting image. This can be used to crop the resulting undistorted image.
- scaling (number, optional) – the number of old pixels that are used to calculate a new pixel. A higher value results in a smaller target image.
- do_plot (bool, optional) – whether to plot the resulting image directly in a matplotlib plot.
- alpha (number, optional) – an alpha value used when plotting, useful when comparing multiple images.
- skip_size_check (bool, optional) – if true, the size of the image is not checked against the size of the camera's image.
Returns: image – the undistorted image
Return type: ndarray
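A common lens model behind such undistortion is a radial polynomial (Brown model). Below is a sketch of the forward (distorting) mapping on point coordinates, with an illustrative k1 value; undistorting an image then amounts to resampling through the inverse of this mapping:

```python
import numpy as np

def distort_points(points, k1, cx, cy, focallength_px):
    # first-order Brown radial model: r_d = r * (1 + k1 * r^2),
    # with r measured in normalized image coordinates
    p = (np.asarray(points, float) - (cx, cy)) / focallength_px
    r2 = (p ** 2).sum(axis=-1, keepdims=True)
    return p * (1 + k1 * r2) * focallength_px + (cx, cy)

# a negative k1 (barrel distortion) pulls off-center points inwards
pts = distort_points([[3304, 1296]], -0.1, 2304, 1296, 3729)
```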