camera_calibration

Camera calibration utilities.

This module defines utilities and data models for camera and hand-eye calibration workflows. It provides ArUco/ChArUco detection result representations, camera intrinsics data structures, calibration computation helpers, pose conversion and visualization tools, and serialization helpers for calibration artifacts. The module supports building robust calibration pipelines from image data, robot poses, and OpenCV-based methods.

Key functionalities include:
  • ArUco and ChArUco marker/corner detection result handling.
  • Camera intrinsics representation and conversion from matrix form.
  • Calibration computations for camera and hand-eye estimation.
  • Board pose estimation and visualization on images.
  • Pose and calibration artifact serialization to JSON.
  • OpenCV calibration method mapping and interoperability helpers.

class robotblockset.cameras.camera_calibration.ArucoMarkerDetectionResult(corners: ndarray, ids: ndarray, image: ndarray)[source]

Bases: object

Result of ArUco marker detection in a single image.

corners: ndarray
ids: ndarray
image: ndarray
class robotblockset.cameras.camera_calibration.CharucoCornerDetectionResult(corners: ndarray, ids: ndarray, image: ndarray)[source]

Bases: ArucoMarkerDetectionResult

Result of ChArUco corner detection derived from ArUco detections.

class robotblockset.cameras.camera_calibration.Resolution(*args: Any, **kwargs: Any)[source]

Bases: BaseModel

Image resolution in pixels.

width: int
height: int
as_tuple() Tuple[int, int][source]

Return resolution as (width, height) tuple.

Returns:

Resolution tuple.

Return type:

CameraResolutionType

class robotblockset.cameras.camera_calibration.FocalLengths(*args: Any, **kwargs: Any)[source]

Bases: BaseModel

Camera focal lengths in pixels.

fx: float
fy: float
class robotblockset.cameras.camera_calibration.PrincipalPoint(*args: Any, **kwargs: Any)[source]

Bases: BaseModel

Camera principal point in pixels.

cx: float
cy: float
class robotblockset.cameras.camera_calibration.CameraIntrinsics(*args: Any, **kwargs: Any)[source]

Bases: BaseModel

A format for storing the camera intrinsics at a specific image resolution.

image_resolution: Resolution
focal_lengths_in_pixels: FocalLengths
principal_point_in_pixels: PrincipalPoint
radial_distortion_coefficients: RadialDistortionCoefficients | None = None
tangential_distortion_coefficients: TangentialDistortionCoefficients | None = None
classmethod from_matrix_and_resolution(intrinsics_matrix: ndarray, resolution: Tuple[int, int]) CameraIntrinsics[source]

Create a camera intrinsics model from matrix form.

Parameters:
  • intrinsics_matrix (CameraIntrinsicsMatrixType) – Camera intrinsics matrix with shape (3, 3).

  • resolution (Tuple[int, int]) – Image resolution as (width, height).

Returns:

Parsed intrinsics model.

Return type:

CameraIntrinsics

as_matrix() ndarray[source]

Return camera intrinsics as a matrix.

Returns:

Camera intrinsics matrix with shape (3, 3).

Return type:

CameraIntrinsicsMatrixType
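As a standalone illustration of the (3, 3) layout that from_matrix_and_resolution() and as_matrix() convert between, a minimal numpy sketch (the function names below are hypothetical, not part of the module):

```python
import numpy as np

def intrinsics_to_matrix(fx, fy, cx, cy):
    # Standard OpenCV pinhole intrinsics matrix layout.
    return np.array([
        [fx, 0.0, cx],
        [0.0, fy, cy],
        [0.0, 0.0, 1.0],
    ])

def matrix_to_intrinsics(K):
    # Inverse of the above: read fx, fy, cx, cy back out of a (3, 3) matrix.
    return float(K[0, 0]), float(K[1, 1]), float(K[0, 2]), float(K[1, 2])

K = intrinsics_to_matrix(fx=605.0, fy=604.2, cx=320.5, cy=240.1)
fx, fy, cx, cy = matrix_to_intrinsics(K)
```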

class robotblockset.cameras.camera_calibration.BoardDetectionResults(charuco_corners: np.ndarray | None, charuco_ids: np.ndarray | None, aruco_corners: np.ndarray | None, aruco_ids: np.ndarray | None)[source]

Bases: NamedTuple

Aggregated marker and corner detections for a calibration board.

Create new instance of BoardDetectionResults(charuco_corners, charuco_ids, aruco_corners, aruco_ids)

charuco_corners: ndarray | None

Alias for field number 0

charuco_ids: ndarray | None

Alias for field number 1

aruco_corners: ndarray | None

Alias for field number 2

aruco_ids: ndarray | None

Alias for field number 3

class robotblockset.cameras.camera_calibration.BoardPose(rvec: np.ndarray | None, tvec: np.ndarray | None)[source]

Bases: NamedTuple

Board pose represented by OpenCV rotation and translation vectors.

Create new instance of BoardPose(rvec, tvec)

rvec: ndarray | None

Alias for field number 0

tvec: ndarray | None

Alias for field number 1

as_homogeneous_matrix() ndarray | None[source]

Convert pose vectors to a homogeneous transform.

Returns:

Homogeneous transform matrix when pose vectors are present, otherwise None.

Return type:

HomogeneousMatrixType or None

as_pose() ndarray | None[source]

Convert pose vectors to pose-plus-quaternion representation.

Returns:

Pose vector when pose vectors are present, otherwise None.

Return type:

Pose3DType or None
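The rvec/tvec pair encodes rotation as a Rodrigues (axis-angle) vector. A minimal numpy sketch of the conversion that as_homogeneous_matrix() performs (the module itself presumably delegates to cv2.Rodrigues; this standalone version is only for illustration):

```python
import numpy as np

def rvec_tvec_to_homogeneous(rvec, tvec):
    """Rodrigues rotation vector + translation -> 4x4 homogeneous transform."""
    rvec = np.asarray(rvec, dtype=float).reshape(3)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / theta  # unit rotation axis
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        # Rodrigues formula: R = I + sin(theta) K + (1 - cos(theta)) K^2
        R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(tvec, dtype=float).reshape(3)
    return T

# 90-degree rotation about z, translated 0.1 m along x:
T = rvec_tvec_to_homogeneous([0.0, 0.0, np.pi / 2], [0.1, 0.0, 0.0])
```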

class robotblockset.cameras.camera_calibration.PointReferences(object_points: np.ndarray, image_points: np.ndarray)[source]

Bases: NamedTuple

Matched object/image point references used in calibration.

Create new instance of PointReferences(object_points, image_points)

object_points: ndarray

Alias for field number 0

image_points: ndarray

Alias for field number 1

robotblockset.cameras.camera_calibration.load_images(images_dir: str) List[ndarray][source]

Load images from a directory.

Parameters:

images_dir (str) – Directory containing image files.

Returns:

Loaded images sorted by filename.

Return type:

list[OpenCVIntImageType]

robotblockset.cameras.camera_calibration.save_images(images_dir: str, images: List[ndarray], prefix: str = 'image') None[source]

Save images as JPEG files to a directory.

Parameters:
  • images_dir (str) – Output directory.

  • images (List[OpenCVIntImageType]) – Images to save.

  • prefix (str, default="image") – File prefix used for generated filenames.

Return type:

None

robotblockset.cameras.camera_calibration.detect_aruco_markers(image: ndarray, dictionary: cv2.aruco.Dictionary) ArucoMarkerDetectionResult | None[source]

Detect ArUco markers in an image.

Parameters:
  • image (OpenCVIntImageType) – Input image.

  • dictionary (ArucoDictType) – ArUco dictionary used for detection.

Returns:

Marker detection result, or None if no markers are found.

Return type:

ArucoMarkerDetectionResult or None

robotblockset.cameras.camera_calibration.detect_charuco_corners(image: ndarray, markers_detection_result: ArucoMarkerDetectionResult, charuco_board: cv2.aruco.CharucoBoard) CharucoCornerDetectionResult | None[source]

Detect ChArUco corners from previously detected ArUco markers.

Parameters:
  • image (OpenCVIntImageType) – Input image.

  • markers_detection_result (ArucoMarkerDetectionResult) – ArUco marker detection output.

  • charuco_board (CharucoBoardType) – Target ChArUco board definition.

Returns:

ChArUco corner detection result, or None if interpolation fails.

Return type:

CharucoCornerDetectionResult or None

robotblockset.cameras.camera_calibration.refine_corner_detection(image: ndarray, corners: ndarray) ndarray[source]

Refine detected corners with sub-pixel accuracy.

Parameters:
  • image (OpenCVIntImageType) – Input image.

  • corners (np.ndarray) – Corner array to refine.

Returns:

Refined corners with the original input shape.

Return type:

np.ndarray

robotblockset.cameras.camera_calibration.get_poses_of_aruco_markers(markers_detection_result: ArucoMarkerDetectionResult, marker_size: float, camera_matrix: ndarray, dist_coeffs: ndarray | None = None) List[ndarray] | None[source]

Estimate poses of detected ArUco markers.

Parameters:
  • markers_detection_result (ArucoMarkerDetectionResult) – Detected marker corners and ids.

  • marker_size (float) – Marker side length in meters.

  • camera_matrix (CameraIntrinsicsMatrixType) – Camera intrinsics matrix.

  • dist_coeffs (np.ndarray, optional) – Distortion coefficients.

Returns:

Marker poses as homogeneous transforms, or None on failure.

Return type:

list[HomogeneousMatrixType] or None

robotblockset.cameras.camera_calibration.get_pose_of_charuco_board(charuco_corners_detection_result: CharucoCornerDetectionResult, charuco_board: cv2.aruco.CharucoBoard, camera_matrix: ndarray, dist_coeffs: ndarray | None = None) ndarray | None[source]

Estimate the pose of a detected ChArUco board.

Parameters:
  • charuco_corners_detection_result (CharucoCornerDetectionResult) – Detected ChArUco corners and ids.

  • charuco_board (CharucoBoardType) – Target ChArUco board definition.

  • camera_matrix (CameraIntrinsicsMatrixType) – Camera intrinsics matrix.

  • dist_coeffs (np.ndarray, optional) – Distortion coefficients.

Returns:

Board pose in camera frame, or None if estimation fails.

Return type:

HomogeneousMatrixType or None

Notes

The board frame origin is in the top-left board corner.

robotblockset.cameras.camera_calibration.detect_charuco_board(image: ndarray, camera_matrix: ndarray, dist_coeffs: ndarray | None = None, aruco_dict: cv2.aruco.Dictionary = cv2.aruco.getPredefinedDictionary, charuco_board: cv2.aruco.CharucoBoard = cv2.aruco.CharucoBoard) ndarray | None[source]

Detect the pose of a ChArUco board from an image.

Parameters:
  • image (OpenCVIntImageType) – Input image.

  • camera_matrix (CameraIntrinsicsMatrixType) – Camera intrinsics matrix.

  • dist_coeffs (np.ndarray, optional) – Distortion coefficients.

  • aruco_dict (ArucoDictType, default=DEFAULT_ARUCO_DICT) – ArUco dictionary used for marker detection.

  • charuco_board (CharucoBoardType, default=DEFAULT_CHARUCO_BOARD) – ChArUco board definition.

Returns:

Board pose in camera frame, or None if not detected.

Return type:

HomogeneousMatrixType or None

robotblockset.cameras.camera_calibration.display_detection_results(image_of_board: ndarray, original_board: ndarray, detection_results: BoardDetectionResults, point_references: PointReferences) None[source]

Plot detected board corners against the reference board image.

Parameters:
  • image_of_board (OpenCVIntImageType) – Captured board image.

  • original_board (OpenCVIntImageType) – Reference board image.

  • detection_results (BoardDetectionResults) – Marker and corner detections.

  • point_references (PointReferences) – Object and image point correspondences.

Return type:

None

robotblockset.cameras.camera_calibration.draw_frame_on_image(image: ndarray, frame_pose_in_camera: ndarray, camera_matrix: ndarray, length: float = 0.2) ndarray[source]

Draw the 2D projection of a 3D frame on an image.

Parameters:
  • image (OpenCVIntImageType) – Image to annotate.

  • frame_pose_in_camera (HomogeneousMatrixType) – Frame pose expressed in camera coordinates.

  • camera_matrix (CameraIntrinsicsMatrixType) – Camera intrinsics matrix.

  • length (float, default=0.2) – Axis length for drawing.

Returns:

Annotated image.

Return type:

OpenCVIntImageType

robotblockset.cameras.camera_calibration.visualize_aruco_detections(image: ndarray, aruco_result: ArucoMarkerDetectionResult) ndarray[source]

Draw detected ArUco markers and ids on an image.

Parameters:
  • image (OpenCVIntImageType) – Input image to annotate.

  • aruco_result (ArucoMarkerDetectionResult) – Marker detection result to draw.

Returns:

Annotated image.

Return type:

OpenCVIntImageType

robotblockset.cameras.camera_calibration.visualize_charuco_detection(image: ndarray, result: CharucoCornerDetectionResult) ndarray[source]

Draw detected ChArUco corners and ids on an image.

Parameters:
  • image (OpenCVIntImageType) – Input image to annotate.

  • result (CharucoCornerDetectionResult) – ChArUco corner detection result to draw.

Returns:

Annotated image.

Return type:

OpenCVIntImageType

robotblockset.cameras.camera_calibration.detect_and_visualize_charuco_pose(image: ndarray, intrinsics: ndarray, aruco_dict: cv2.aruco.Dictionary = cv2.aruco.getPredefinedDictionary, charuco_board: cv2.aruco.CharucoBoard = cv2.aruco.CharucoBoard, draw_aruco_detection: bool = True, draw_charuco_detection: bool = True) ndarray | None[source]

Detect and visualize a ChArUco board pose in an image.

Parameters:
  • image (OpenCVIntImageType) – Input image to annotate.

  • intrinsics (CameraIntrinsicsMatrixType) – Camera intrinsics matrix.

  • aruco_dict (ArucoDictType, default=DEFAULT_ARUCO_DICT) – ArUco dictionary used for marker detection.

  • charuco_board (CharucoBoardType, default=DEFAULT_CHARUCO_BOARD) – ChArUco board definition.

  • draw_aruco_detection (bool, default=True) – Draw ArUco detections on the image.

  • draw_charuco_detection (bool, default=True) – Draw ChArUco detections on the image.

Returns:

Detected board pose, or None when detection fails.

Return type:

HomogeneousMatrixType or None

robotblockset.cameras.camera_calibration.visualize_board_live(board: cv2.aruco.CharucoBoard, dict: cv2.aruco.Dictionary, camera: RGBCamera, draw_aruco_detection: bool = True, draw_charuco_detection: bool = True) None[source]

Show a live window with ArUco/ChArUco detections from a camera stream.

Parameters:
  • board (CharucoBoardType) – ChArUco board model.

  • dict (ArucoDictType) – ArUco marker dictionary used for detection.

  • camera (RGBCamera) – Camera providing RGB images and intrinsics.

  • draw_aruco_detection (bool, default=True) – Draw detected ArUco markers.

  • draw_charuco_detection (bool, default=True) – Draw detected ChArUco corners.

Return type:

None

robotblockset.cameras.camera_calibration.compute_hand_eye_calibration_error(tcp_poses_in_base: List[ndarray], board_poses_in_camera: List[ndarray], camera_pose: ndarray) float[source]

Compute average residual error for the hand-eye calibration equation.

Parameters:
  • tcp_poses_in_base (List[HomogeneousMatrixType]) – TCP poses in base frame.

  • board_poses_in_camera (List[HomogeneousMatrixType]) – Board poses in camera frame.

  • camera_pose (HomogeneousMatrixType) – Camera pose in TCP frame (eye-in-hand).

Returns:

Mean residual of the AX=XB consistency equation.

Return type:

float
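One plausible way to define such a residual for eye-in-hand data, sketched with numpy: for a correct camera pose X, the board pose reconstructed in the base frame, T_base_tcp @ X @ T_cam_board, is identical for every sample, so the spread of these estimates measures calibration error. The helper names and the exact residual definition below are assumptions, not the module's implementation:

```python
import numpy as np

def translate(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def rot_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def hand_eye_residual(tcp_poses_in_base, board_poses_in_camera, camera_pose_in_tcp):
    # Reconstruct the (physically constant) board pose in the base frame for
    # every sample and measure the spread around the mean.
    board_in_base = [T_bt @ camera_pose_in_tcp @ T_cb
                     for T_bt, T_cb in zip(tcp_poses_in_base, board_poses_in_camera)]
    mean = np.mean(board_in_base, axis=0)
    return float(np.mean([np.linalg.norm(T - mean) for T in board_in_base]))

# Synthetic, noise-free data: camera offset from the TCP, board fixed in base.
X = translate(0.05, 0.0, 0.02)
T_base_board = translate(0.5, 0.0, 0.0)
tcp_poses = [translate(0.3, 0.1 * i, 0.4) @ rot_z(0.3 * i) for i in range(4)]
board_in_cam = [np.linalg.inv(T_bt @ X) @ T_base_board for T_bt in tcp_poses]

residual_correct = hand_eye_residual(tcp_poses, board_in_cam, X)          # ~0
residual_wrong = hand_eye_residual(tcp_poses, board_in_cam, np.eye(4))
```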

robotblockset.cameras.camera_calibration.eye_in_hand_pose_estimation(tcp_poses_in_base: List[ndarray], board_poses_in_camera: List[ndarray], method: int = cv2.CALIB_HAND_EYE_ANDREFF) Tuple[ndarray | None, float | None][source]

Estimate camera pose for eye-in-hand calibration.

Parameters:
  • tcp_poses_in_base (List[HomogeneousMatrixType]) – TCP poses in base frame.

  • board_poses_in_camera (List[HomogeneousMatrixType]) – Board poses in camera frame.

  • method (int, default=cv2.CALIB_HAND_EYE_ANDREFF) – OpenCV hand-eye calibration method constant.

Returns:

Estimated camera pose in TCP frame and residual error.

Return type:

tuple[HomogeneousMatrixType or None, float or None]

robotblockset.cameras.camera_calibration.eye_to_hand_pose_estimation(tcp_poses_in_base: List[ndarray], board_poses_in_camera: List[ndarray], method: int = cv2.CALIB_HAND_EYE_ANDREFF) Tuple[ndarray | None, float | None][source]

Estimate camera pose for eye-to-hand calibration.

Parameters:
  • tcp_poses_in_base (List[HomogeneousMatrixType]) – TCP poses in base frame.

  • board_poses_in_camera (List[HomogeneousMatrixType]) – Board poses in camera frame.

  • method (int, default=cv2.CALIB_HAND_EYE_ANDREFF) – OpenCV hand-eye calibration method constant.

Returns:

Estimated camera pose in base frame and residual error.

Return type:

tuple[HomogeneousMatrixType or None, float or None]

robotblockset.cameras.camera_calibration.compute_calibration(board_poses_in_camera: List[ndarray], tcp_poses_in_base: List[ndarray], mode: str = 'eye_in_hand', method: int = cv2.CALIB_HAND_EYE_ANDREFF) Tuple[ndarray | None, float | None][source]

Compute hand-eye calibration for a selected mode and method.

Parameters:
  • board_poses_in_camera (List[HomogeneousMatrixType]) – Board poses in camera frame.

  • tcp_poses_in_base (List[HomogeneousMatrixType]) – TCP poses in base frame.

  • mode (str, default="eye_in_hand") – Calibration mode, either "eye_in_hand" or "eye_to_hand".

  • method (int, default=cv2.CALIB_HAND_EYE_ANDREFF) – OpenCV hand-eye calibration method constant.

Returns:

Estimated camera pose and residual error.

Return type:

tuple[HomogeneousMatrixType or None, float or None]
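Eye-to-hand problems are commonly reduced to eye-in-hand form by feeding the inverted TCP poses to the same solver; whether this module uses that exact reduction is an assumption. The pose inversion itself can be sketched without a general matrix inverse:

```python
import numpy as np

def invert_pose(T):
    # inv([R t; 0 1]) = [R^T, -R^T t; 0 1] -- cheaper and better conditioned
    # than a general 4x4 inverse.
    Ti = np.eye(4)
    R, t = T[:3, :3], T[:3, 3]
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Example TCP pose: rotation about z plus a translation.
c, s = np.cos(0.5), np.sin(0.5)
T_tcp = np.eye(4)
T_tcp[:2, :2] = [[c, -s], [s, c]]
T_tcp[:3, 3] = [0.2, -0.1, 0.5]

# For eye-to-hand, the inverted poses would be handed to the eye-in-hand solver.
err = np.abs(invert_pose(T_tcp) @ T_tcp - np.eye(4)).max()
```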

robotblockset.cameras.camera_calibration.save_board_detections(results_dir: str, board_poses_in_camera: List[ndarray | None], images: List[ndarray], intrinsics: ndarray) None[source]

Save board-detection preview images with projected board frame.

Parameters:
  • results_dir (str) – Output directory.

  • board_poses_in_camera (List[Optional[HomogeneousMatrixType]]) – Board poses in camera frame; entries may be None.

  • images (List[OpenCVIntImageType]) – Source images.

  • intrinsics (CameraIntrinsicsMatrixType) – Camera intrinsics matrix.

Return type:

None

robotblockset.cameras.camera_calibration.draw_base_pose_on_image(image: ndarray, intrinsics: ndarray, camera_pose: ndarray | None, mode: str = 'eye_in_hand', tcp_pose: ndarray | None = None) None[source]

Draw robot base frame on an image using calibration output.

Parameters:
  • image (OpenCVIntImageType) – Image to annotate.

  • intrinsics (CameraIntrinsicsMatrixType) – Camera intrinsics matrix.

  • camera_pose (HomogeneousMatrixType, optional) – Camera pose in base frame (eye-to-hand) or TCP frame (eye-in-hand).

  • mode (str, default="eye_in_hand") – Calibration mode, either "eye_in_hand" or "eye_to_hand".

  • tcp_pose (HomogeneousMatrixType, optional) – TCP pose in base frame corresponding to the image.

Return type:

None

robotblockset.cameras.camera_calibration.extrinsic_calibration_all_methods(results_dir: str, images: List[ndarray], tcp_poses_in_base: List[ndarray], intrinsics: ndarray, mode: str = 'eye_in_hand', aruco_dict: cv2.aruco.Dictionary = cv2.aruco.getPredefinedDictionary, charuco_board: cv2.aruco.CharucoBoard = cv2.aruco.CharucoBoard) Tuple[Dict[str, Any], Dict[str, float]][source]

Run extrinsic calibration with all supported OpenCV hand-eye methods.

Parameters:
  • results_dir (str) – Output directory for result files.

  • images (List[OpenCVIntImageType]) – Calibration-board images.

  • tcp_poses_in_base (List[HomogeneousMatrixType]) – TCP poses in base frame.

  • intrinsics (CameraIntrinsicsMatrixType) – Camera intrinsics matrix.

  • mode (str, default="eye_in_hand") – Calibration mode, either "eye_in_hand" or "eye_to_hand".

  • aruco_dict (ArucoDictType, default=DEFAULT_ARUCO_DICT) – ArUco dictionary used for board detection.

  • charuco_board (CharucoBoardType, default=DEFAULT_CHARUCO_BOARD) – ChArUco board definition.

Returns:

Estimated camera poses and residual errors keyed by method name.

Return type:

tuple[dict[str, Any], dict[str, float]]

robotblockset.cameras.camera_calibration.load_calibration_data(calibration_dir: str) Tuple[List[ndarray], List[ndarray], ndarray, Tuple[int, int]][source]

Load calibration images, TCP poses, and camera intrinsics from disk.

Parameters:

calibration_dir (str) – Calibration root directory containing a data subdirectory.

Returns:

Images, TCP poses, intrinsics matrix, and image resolution.

Return type:

tuple[list[OpenCVIntImageType], list[HomogeneousMatrixType], CameraIntrinsicsMatrixType, CameraResolutionType]

robotblockset.cameras.camera_calibration.load_pose_from_json(path: str | Path) ndarray[source]

Load a pose from a JSON file as a homogeneous transform.

Parameters:

path (Union[str, Path]) – Path to a JSON file with position_in_meters and rotation_euler_xyz_in_radians fields.

Returns:

Homogeneous transformation matrix with shape (4, 4).

Return type:

HomogeneousMatrixType

Raises:
  • FileNotFoundError – If the file does not exist.

  • KeyError – If required keys are missing in the JSON file.

  • ValueError – If the values cannot be converted to floats.
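A stdlib-plus-numpy sketch of the JSON layout described above. It assumes both fields are 3-element lists and an extrinsic x-y-z Euler order (R = Rz @ Ry @ Rx); the module's actual field shapes and rotation convention may differ:

```python
import json
import tempfile
import numpy as np

def load_pose(path):
    # Assumed layout: both fields are 3-element lists (a guess at the exact
    # shapes); extrinsic x-y-z Euler angles composed as R = Rz @ Ry @ Rx.
    with open(path) as f:
        data = json.load(f)
    rx, ry, rz = data["rotation_euler_xyz_in_radians"]
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = data["position_in_meters"]
    return T

# Write a sample file and read it back.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"position_in_meters": [0.1, 0.2, 0.3],
               "rotation_euler_xyz_in_radians": [0.0, 0.0, 0.0]}, f)
    pose_path = f.name

T = load_pose(pose_path)
```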

robotblockset.cameras.camera_calibration.save_pose_to_json(path: str | Path, x: ndarray | List[float] | List[int] | Tuple[float, ...] | Tuple[int, ...]) None[source]

Write a pose to a JSON file.

Parameters:
  • path (Union[str, Path]) – Path to the output JSON file.

  • x (ArrayLike) – Spatial pose representation accepted by spatial2t(), such as a homogeneous transform, rotation matrix, pose vector, position vector, quaternion, or position plus axis-angle vector.

Return type:

None

robotblockset.cameras.camera_calibration.T_to_pretty_string(T: ndarray, precision: int = 6) str[source]

Format a homogeneous transform as a multiline aligned matrix string.

Parameters:
  • T (HomogeneousMatrixType) – Homogeneous transform matrix.

  • precision (int, default=6) – Number of decimal places.

Returns:

Pretty-printed matrix text.

Return type:

str
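A hypothetical formatter showing the kind of output the description above implies (fixed precision, aligned columns); a sketch, not the module's implementation:

```python
import numpy as np

def matrix_to_pretty_string(T, precision=6):
    # Render every entry at fixed precision, then right-align all cells to
    # the widest one so the columns line up.
    cells = [[f"{v:.{precision}f}" for v in row] for row in np.asarray(T)]
    width = max(len(c) for row in cells for c in row)
    return "\n".join(" ".join(c.rjust(width) for c in row) for row in cells)

text = matrix_to_pretty_string(np.eye(4), precision=3)
```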

robotblockset.cameras.camera_calibration.find_method_pairs(folder: Path) List[Tuple[str, Path, Path]][source]

Find matching calibration image/pose files by method name.

Parameters:

folder (Path) – Directory containing result files.

Returns:

Tuples of (method, image_path, json_path) for matching: base_pose_in_camera_<method>.jpg and camera_pose_<method>.json.

Return type:

list[tuple[str, Path, Path]]
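The pairing rule described above can be sketched with pathlib (an illustration under the stated filename convention, not the module's implementation):

```python
import tempfile
from pathlib import Path

def find_pairs(folder):
    # Pair base_pose_in_camera_<method>.jpg with camera_pose_<method>.json
    # via the shared <method> suffix; skip images without a JSON partner.
    folder = Path(folder)
    pairs = []
    for img in sorted(folder.glob("base_pose_in_camera_*.jpg")):
        method = img.stem[len("base_pose_in_camera_"):]
        pose = folder / f"camera_pose_{method}.json"
        if pose.exists():
            pairs.append((method, img, pose))
    return pairs

# Build a small sample directory.
root = Path(tempfile.mkdtemp())
for name in ("base_pose_in_camera_tsai.jpg", "camera_pose_tsai.json",
             "base_pose_in_camera_andreff.jpg"):   # andreff lacks a JSON partner
    (root / name).touch()

pairs = find_pairs(root)
```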

robotblockset.cameras.camera_calibration.draw_matrix_overlay(bgr: ndarray, text: str) ndarray[source]

Overlay multiline text (matrix) on the image.

Parameters:
  • bgr (OpenCVIntImageType) – Input BGR image.

  • text (str) – Multiline text to overlay.

Returns:

Annotated image.

Return type:

OpenCVIntImageType

robotblockset.cameras.camera_calibration.load_calibration_results(results_dir: str, overlay: bool = False, cols: int = 3, save_grid: str | None = None) List[Tuple[str, ndarray, ndarray]][source]

Load saved calibration result images and poses from a results directory.

Parameters:
  • results_dir (str) – Directory containing per-method image and JSON pose outputs.

  • overlay (bool, default=False) – Overlay transform text on loaded images.

  • cols (int, default=3) – Number of columns for optional result grid visualization.

  • save_grid (str, optional) – Output path for saving the visualization grid.

Returns:

Tuples of method name, image, and loaded pose matrix.

Return type:

list[tuple[str, OpenCVIntImageType, HomogeneousMatrixType]]

Raises:
  • FileNotFoundError – If results_dir does not exist.

  • RuntimeError – If no matching image/JSON result pairs are found.

Functions

T_to_pretty_string(T[, precision])

Format a homogeneous transform as a multiline aligned matrix string.

compute_calibration(board_poses_in_camera, ...)

Compute hand-eye calibration for a selected mode and method.

compute_hand_eye_calibration_error(...)

Compute average residual error for the hand-eye calibration equation.

detect_and_visualize_charuco_pose(image, ...)

Detect and visualize a ChArUco board pose in an image.

detect_aruco_markers(image, dictionary)

Detect ArUco markers in an image.

detect_charuco_board(image, camera_matrix[, ...])

Detect the pose of a ChArUco board from an image.

detect_charuco_corners(image, ...)

Detect ChArUco corners from previously detected ArUco markers.

display_detection_results(image_of_board, ...)

Plot detected board corners against the reference board image.

draw_base_pose_on_image(image, intrinsics, ...)

Draw robot base frame on an image using calibration output.

draw_frame_on_image(image, ...[, length])

Draw the 2D projection of a 3D frame on an image.

draw_matrix_overlay(bgr, text)

Overlay multiline text (matrix) on the image.

extrinsic_calibration_all_methods(...[, ...])

Run extrinsic calibration with all supported OpenCV hand-eye methods.

eye_in_hand_pose_estimation(...[, method])

Estimate camera pose for eye-in-hand calibration.

eye_to_hand_pose_estimation(...[, method])

Estimate camera pose for eye-to-hand calibration.

find_method_pairs(folder)

Find matching calibration image/pose files by method name.

get_pose_of_charuco_board(...[, dist_coeffs])

Estimate the pose of a detected ChArUco board.

get_poses_of_aruco_markers(...[, dist_coeffs])

Estimate poses of detected ArUco markers.

load_calibration_data(calibration_dir)

Load calibration images, TCP poses, and camera intrinsics from disk.

load_calibration_results(results_dir[, ...])

Load saved calibration result images and poses from a results directory.

load_images(images_dir)

Load images from a directory.

load_pose_from_json(path)

Load a pose from a JSON file as a homogeneous transform.

refine_corner_detection(image, corners)

Refine detected corners with sub-pixel accuracy.

save_board_detections(results_dir, ...)

Save board-detection preview images with projected board frame.

save_images(images_dir, images[, prefix])

Save images as JPEG files to a directory.

save_pose_to_json(path, x)

Write a pose to a JSON file.

visualize_aruco_detections(image, aruco_result)

Draw detected ArUco markers and ids on an image.

visualize_board_live(board, dict, camera[, ...])

Show a live window with ArUco/ChArUco detections from a camera stream.

visualize_charuco_detection(image, result)

Draw detected ChArUco corners and ids on an image.

Classes

ArucoMarkerDetectionResult(corners, ids, image)

Result of ArUco marker detection in a single image.

BoardDetectionResults(charuco_corners, ...)

Aggregated marker and corner detections for a calibration board.

BoardPose(rvec, tvec)

Board pose represented by OpenCV rotation and translation vectors.

CameraIntrinsics(*args, **kwargs)

A format for storing the camera intrinsics at a specific image resolution.

CharucoCornerDetectionResult(corners, ids, image)

Result of ChArUco corner detection derived from ArUco detections.

FocalLengths(*args, **kwargs)

Camera focal lengths in pixels.

PointReferences(object_points, image_points)

Matched object/image point references used in calibration.

PrincipalPoint(*args, **kwargs)

Camera principal point in pixels.

Resolution(*args, **kwargs)

Image resolution in pixels.