Cable Driven Robots[edit | edit source]

"A Novel Calibration Algorithm for Cable-Driven Parallel Robots with Application to Rehabilitation" [1][edit | edit source]

Abstract: A method was developed that first establishes the mapping between the unknown parameters to be calibrated and the parameters measurable by the robot's internal sensors, and then uses a least-squares algorithm to find the solutions.

"Auto-calibration and real-time external parameter correction for stereo digital image correlation" [2][edit | edit source]

Abstract: Calibrating stereo digital image correlation (stereo-DIC) is crucial for 3D deformation measurements. In contrast to traditional methods, the proposed method is scale-independent and does not require assumptions such as planarity. More importantly, it is capable of correcting disturbed external parameters in real time.

"NIMS-PL: A Cable-Driven Robot With Self-Calibration Capabilities" [3][edit | edit source]

Abstract: It presents a Networked Info Mechanical System for Planar Translation, a novel two-degree-of-freedom (2-DOF) cable-driven robot with self-calibration and online drift-correction capabilities. The system is intended for actuated sensing applications in aquatic environments. The actuation redundancy resulting from in-plane translation driven by four cables yields an infinite set of tension distributions, thus requiring real-time computation of optimal tension distributions. To this end, they have implemented a highly efficient, iterative linear programming solver, which requires a very small number of iterations to converge to the optimal value. In addition, two novel self-calibration methods have been developed that leverage the robot's actuation redundancy. The first uses an incremental displacement, or jitter, method, whereas the second uses variations in cable tensions to determine the end-effector location. They also propose a novel least-squares drift-detection algorithm, which enables the robot to detect long-term drift. Combined with the self-calibration capabilities, this drift-monitoring algorithm enables long-term autonomous operation.

"Kinematic Calibration of a Cable-Driven Parallel Robot for 3D Printing" [4][edit | edit source]

Abstract: Limited workspace and accuracy restrict the development of 3D printing technology. Owing to the extension range and flexibility of cables, cable-driven parallel robots can be applied to challenging tasks that require motion over a large reachable workspace with better flexibility. A kinematic calibration method based on cable-length residuals is proposed. In addition, experiments verify that the proposed calibration method remains effective for measurement positions outside the optimal position set.

Hints:

Several advantages of cable-driven robots are:

  • remote location of motors and controls;
  • potentially large workspaces;
  • high load capacity;
  • reliability.

Camera Calibration [*][edit | edit source]

Eliminates: mounting and motion errors

Facilitates: multi-head stitching and shadow removal.

The goal is to determine an accurate relationship between a 3D point in the real world and its corresponding 2D projection (pixel) in the image captured by the calibrated camera. This means recovering two kinds of parameters:

1.   Internal parameters of the camera/lens system (focal length, optical center, and radial distortion coefficients of the lens).

2.   External parameters that refer to the orientation (rotation and translation) of the camera with respect to some world coordinate system.

Geometry of Image Formation[edit | edit source]

[Figure Image-coordinate.png: geometry of image formation, relating world, camera, and image coordinates]

Given a 3D point P in the room, the aim is to find the pixel coordinates (u, v) of this 3D point in the image taken by the camera.

World Coordinate System[edit | edit source]

The world coordinate system and the camera coordinate system are related by a rotation and a translation. These six parameters (3 for rotation and 3 for translation) are called the extrinsic parameters of a camera.

To define locations of points in the room:

  1. Origin: We can arbitrarily fix a corner of the room as the origin (0,0,0).
  2. X, Y, Z axes: We can also define the X and Y axes of the room along the two dimensions of the floor and the Z axis along the vertical wall.

In this world coordinate system, the coordinates of any point can be measured by calculating its distances from the origin along the X, Y, and Z axes.
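As a minimal sketch of how the six extrinsic parameters described above might be represented in code (the rotation and translation values below are arbitrary placeholders, not measured quantities), OpenCV's cv2.Rodrigues converts a 3×1 axis-angle rotation vector into the corresponding 3×3 rotation matrix:

import numpy as np
import cv2

# Placeholder extrinsic parameters (example values only):
rvec = np.array([0.0, 0.0, np.pi / 2])   # axis-angle rotation: 90 degrees about the Z axis
tvec = np.array([1.5, 0.3, 2.0])         # translation of the camera from the world origin

# Convert the 3x1 rotation vector into a 3x3 rotation matrix.
R, _ = cv2.Rodrigues(rvec)

# Stack R and t into the 3x4 extrinsic matrix [R | t].
extrinsic = np.hstack([R, tvec.reshape(3, 1)])
print(extrinsic)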

Camera Coordinate System[edit | edit source]

It is necessary to establish a relationship between the 3D camera coordinate system and the 3D world (room) coordinate system. The camera can be located anywhere in the room, so the camera coordinate system is translated relative to the world coordinate system. The camera can also look in an arbitrary direction, so it is rotated with respect to the world coordinate system. The image plane is placed at a distance f (the focal length) from the optical center. The projected image point (x, y) of the 3D point (Xc, Yc, Zc) is given by:

x = f*(Xc/Zc)

y = f*(Yc/Zc)
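A minimal numeric sketch of this perspective projection (the focal length and the point coordinates below are arbitrary example values):

# Perspective projection of a camera-frame point onto the image plane.
f = 0.05                     # focal length (example value)
Xc, Yc, Zc = 0.2, 0.1, 2.0   # 3D point in camera coordinates (example values)

x = f * (Xc / Zc)            # x = f*(Xc/Zc)
y = f * (Yc / Zc)            # y = f*(Yc/Zc)
print(x, y)                  # image-plane coordinates, in the same units as f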

[Figure Matrix.png: the same projection written in matrix form using homogeneous coordinates]

The pixels in the image sensor may not be square, so we use two different focal lengths fx and fy. The optical center (cx,cy) of the camera may not coincide with the center of the image coordinate system. In addition, there may be a small skew between the x and y axes of the camera sensor. With that in mind, the camera matrix can be rewritten as:

[Figure Matrix-2.png: the general camera intrinsic matrix with fx, fy, optical center (cx, cy), and skew]
[Figure Equation.png: the projection equation written with this intrinsic matrix]

To find the projection of a 3D point onto the image plane, first we transform the point from world coordinate system to the camera coordinate system using the extrinsic parameters (Rotation R and Translation t).

[Figure Equation-2.png: transformation of the world point (Xw, Yw, Zw) into camera coordinates (Xc, Yc, Zc) using R and t]

fx and fy are the x and y focal lengths (they are usually the same). cx and cy are the x and y coordinates of the optical center in the image plane. The skew between the axes is usually 0.
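Putting the two steps together, here is a minimal sketch of projecting a world point to pixel coordinates; the intrinsic values (fx, fy, cx, cy, skew) and the pose (rvec, tvec) below are arbitrary placeholders, not calibrated values:

import numpy as np
import cv2

# Example intrinsic matrix K (placeholder values, in pixels).
fx, fy = 800.0, 800.0
cx, cy = 320.0, 240.0
skew = 0.0
K = np.array([[fx, skew, cx],
              [0.0,  fy, cy],
              [0.0, 0.0, 1.0]])

# Example extrinsic parameters: axis-angle rotation and translation.
rvec = np.array([0.0, 0.1, 0.0])
tvec = np.array([0.0, 0.0, 5.0])
R, _ = cv2.Rodrigues(rvec)

# Project a world point (Xw, Yw, Zw): world -> camera, then camera -> pixel.
Pw = np.array([0.5, -0.2, 1.0])
Pc = R @ Pw + tvec               # world -> camera: Pc = R*Pw + t
uvw = K @ Pc                     # camera -> image plane (homogeneous coordinates)
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
print(u, v)                      # pixel coordinates

The same result can be obtained with OpenCV's cv2.projectPoints, which can additionally apply lens-distortion coefficients when they are supplied.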

The Goal of Calibration[edit | edit source]

The goal of calibration is to find:

  • the 3×3 intrinsic camera matrix K,
  • the 3×3 rotation matrix R, and
  • the 3×1 translation vector t,

using a set of known 3D points (Xw, Yw, Zw) and their corresponding image coordinates (u, v).

Inputs: A collection of images with points whose 2D image coordinates and 3D world coordinates are known.

Outputs: The 3×3 camera intrinsic matrix, and the rotation and translation of the camera for each image.

Types of Calibration[edit | edit source]

  • Calibration pattern: when we have complete control over the imaging process, the best way to perform calibration is to capture several images of an object or pattern of known dimensions from different viewpoints.
  • Geometric clues: other geometric clues in the scene, such as straight lines and vanishing points, can be used for calibration.
  • Deep learning based: when we have very little control over the imaging setup (e.g., a single image of the scene), it may still be possible to obtain calibration information using a deep-learning-based method.

Calibration Pattern's Python Code Using OpenCV[edit | edit source]

import cv2
import numpy as np
import os
import glob

# Dimensions of the checkerboard (number of inner corners per row and column)

CHECKERBOARD = (6,9)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# 3D points for each image

objectPoints = []

# 2D points for each image

imagePoints = []

# World coordinates of the 3D checkerboard corners (on the Z = 0 plane)

objpoint = np.zeros((1, CHECKERBOARD[0] * CHECKERBOARD[1], 3), np.float32)
objpoint[0,:,:2] = np.mgrid[0:CHECKERBOARD[0], 0:CHECKERBOARD[1]].T.reshape(-1, 2)
prev_img_shape = None

# Finding checkerboard corners

imgs = glob.glob('./images/*.jpg')
for filename in imgs:
    img = cv2.imread(filename)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, corners = cv2.findChessboardCorners(gray, CHECKERBOARD, cv2.CALIB_CB_ADAPTIVE_THRESH +
                                             cv2.CALIB_CB_FAST_CHECK + cv2.CALIB_CB_NORMALIZE_IMAGE)

    if ret == True:
        objectPoints.append(objpoint)
        """
        cv2.cornerSubPix(image, corners, winSize, zeroZone, criteria)
        cornerSubPix takes the original image and the initial corner locations, and looks for the
        best corner location inside a small neighborhood of each original location. The algorithm
        is iterative, so the termination criteria must be specified.
        """
        cornersII = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        imagePoints.append(cornersII)
        img = cv2.drawChessboardCorners(img, CHECKERBOARD, cornersII, ret)
    cv2.imshow('image', img)
    cv2.waitKey(0)
cv2.destroyAllWindows()

# Camera Calibration

"""
rvecs = Rotation for each calibration image, specified as a 3×1 vector. The direction of the vector gives the axis of rotation and its magnitude gives the angle of rotation.
tvecs = 3×1 translation vector for each calibration image.
"""
h,w = img.shape[:2]
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objectPoints, imagePoints, gray.shape[::-1], None, None)

print(f'Camera matrix: {mtx}')
print(f'dist: {dist}')
print(f'rvecs: {rvecs}')
print(f'tvecs: {tvecs}')
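Once calibration has run, the results can be sanity-checked and put to use. Below is a minimal follow-on sketch (assuming the variables mtx, dist, rvecs, tvecs, objectPoints, imagePoints, img, h, and w from the script above); it undistorts the last loaded image and computes a mean reprojection error.

# Undistort the last image using the estimated camera matrix and distortion coefficients.
new_mtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
undistorted = cv2.undistort(img, mtx, dist, None, new_mtx)
cv2.imwrite('undistorted.jpg', undistorted)

# Reprojection error: project the 3D corners back with the estimated parameters
# and compare against the detected corners. Smaller is better.
total_error = 0
for i in range(len(objectPoints)):
    projected, _ = cv2.projectPoints(objectPoints[i], rvecs[i], tvecs[i], mtx, dist)
    detected = imagePoints[i].reshape(-1, 2)
    projected = projected.reshape(-1, 2)
    # Root-mean-square distance between detected and reprojected corners, in pixels.
    total_error += np.sqrt(np.mean(np.sum((detected - projected) ** 2, axis=1)))
print(f'Mean reprojection error: {total_error / len(objectPoints)} px')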

References[edit | edit source]

[1] Yuan, H., You, X., Zhang, Y., Zhang, W., Xu, W., 2019. A Novel Calibration Algorithm for Cable-Driven Parallel Robots with Application to Rehabilitation. Applied Sciences 9, 2182. https://doi.org/10.3390/app9112182

[2] Su, Z., Lu, L., Dong, S., Yang, F., He, X., 2019. Auto-calibration and real-time external parameter correction for stereo digital image correlation. Optics and Lasers in Engineering 121, 46–53. https://doi.org/10.1016/j.optlaseng.2019.03.018

[3] Borgstrom, P.H., Jordan, B.L., Borgstrom, B.J., Stealey, M.J., Sukhatme, G.S., Batalin, M.A., Kaiser, W.J., 2009. NIMS-PL: A Cable-Driven Robot With Self-Calibration Capabilities. IEEE Transactions on Robotics 25, 1005–1015. https://doi.org/10.1109/TRO.2009.2024792

[4] Qian, S., Bao, K., Zi, B., Wang, N., 2018. Kinematic Calibration of a Cable-Driven Parallel Robot for 3D Printing. Sensors 18, 2898. https://doi.org/10.3390/s18092898
