Sometimes it is necessary to create a point cloud from a given depth and color (RGB) frame. This is especially the case when a scene is captured using depth cameras such as Kinect. The process of aligning the depth and the RGB frame is called “registration” and it is very easy to do (and the algorithm’s pseudo-code is surprisingly hard to find with a simple Google search! 😀 )

To perform registration, you would need 4 pieces of information:

- The depth camera intrinsics:
  - Focal lengths `fx_d` and `fy_d` (in pixel units)
  - Optical centers (sometimes called image centers) `cx_d` and `cy_d`
- The RGB camera intrinsics:
  - Focal lengths `fx_rgb` and `fy_rgb` (in pixel units)
  - Optical centers (sometimes called image centers) `cx_rgb` and `cy_rgb`
- The extrinsics relating the depth camera to the RGB camera. This is a 4×4 matrix containing the rotation and translation between the two cameras.
- (Obviously) the depth and RGB frames themselves. Note that they do not have to have the same resolution; applying the intrinsics takes care of the resolution difference. With cameras such as the Kinect, the depth values should usually be in meters. The unit of the depth values is very important: using incorrect units results in a registration in which the colors and the depth values are clearly misaligned.
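To make these four pieces of information concrete, here is a small Python/NumPy sketch that back-projects a single depth pixel using the depth intrinsics and moves it into the RGB camera's frame with the extrinsics. All numeric values (focal lengths, optical centers, baseline) are made-up placeholders, not real Kinect calibration values.

```python
import numpy as np

# Hypothetical depth-camera intrinsics (focal lengths and optical centers, in pixels)
fx_d, fy_d, cx_d, cy_d = 585.0, 585.0, 320.0, 240.0

# Hypothetical 4x4 extrinsics (rotation + translation) mapping points from the
# depth camera's coordinate frame to the RGB camera's coordinate frame
extrinsics = np.eye(4)
extrinsics[0, 3] = 0.025  # e.g. a 2.5 cm horizontal baseline between the cameras

# Back-project one depth pixel (u, v) with depth z (in meters) to a 3D point
u, v, z = 400, 300, 1.5
x = (u - cx_d) * z / fx_d
y = (v - cy_d) * z / fy_d
point_depth_frame = np.array([x, y, z, 1.0])

# The extrinsics move that point into the RGB camera's coordinate frame
point_rgb_frame = extrinsics @ point_depth_frame
print(point_rgb_frame[:3])
```

The same two operations, applied to every pixel, are exactly what the registration pseudo-code below does.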

Also, note that some data sets apply a scale and a bias to the depth values in the depth frame. Make sure to account for this scaling and offsetting before proceeding. In other words, make sure no scale or bias remains applied to the depth values of your depth frame.
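Undoing such a scale and bias is a one-liner. A minimal sketch, assuming a hypothetical data set that stores depth as 16-bit integers of the form `raw = depth_m * scale + bias` (the `scale` and `bias` values below are illustrative, not from any particular data set):

```python
import numpy as np

# Assumed encoding of the raw depth frame: raw = depth_m * scale + bias
scale, bias = 1000.0, 0.0  # e.g. depth stored in millimeters with no offset
raw = np.array([[0, 1500],
                [2000, 750]], dtype=np.uint16)

# Undo the bias and scale so the frame holds plain meters before registration
depth_m = (raw.astype(np.float32) - bias) / scale
print(depth_m)
```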

Let `depthData` contain the depth frame and `rgbData` contain the RGB frame. The pseudo-code for registration in MATLAB is as follows:

```matlab
function [aligned] = depth_rgb_registration(depthData, rgbData,...
    fx_d, fy_d, cx_d, cy_d,...
    fx_rgb, fy_rgb, cx_rgb, cy_rgb,...
    extrinsics, depthScale)
    % depthScale converts the raw depth units to meters
    % (e.g. 1000 if the depth frame stores millimeters, 1 if already in meters)
    depthHeight = size(depthData, 1);
    depthWidth = size(depthData, 2);
    rgbHeight = size(rgbData, 1);
    rgbWidth = size(rgbData, 2);

    % Aligned will contain X, Y, Z, R, G, B values in its planes
    aligned = zeros(depthHeight, depthWidth, 6);

    for v = 1 : depthHeight
        for u = 1 : depthWidth
            % Apply depth intrinsics (back-project the pixel to a 3D point)
            z = single(depthData(v,u)) / depthScale;
            x = single((u - cx_d) * z) / fx_d;
            y = single((v - cy_d) * z) / fy_d;

            % Apply the extrinsics (move the point into the RGB camera's frame)
            transformed = (extrinsics * [x;y;z;1])';
            aligned(v,u,1) = transformed(1);
            aligned(v,u,2) = transformed(2);
            aligned(v,u,3) = transformed(3);
        end
    end

    for v = 1 : depthHeight
        for u = 1 : depthWidth
            % Apply RGB intrinsics (project the 3D point onto the RGB image)
            x = (aligned(v,u,1) * fx_rgb / aligned(v,u,3)) + cx_rgb;
            y = (aligned(v,u,2) * fy_rgb / aligned(v,u,3)) + cy_rgb;

            % "x" and "y" are indices into the RGB frame, but they may be
            % invalid (they correspond to parts of the scene that are not
            % visible to the RGB camera).
            if (x > rgbWidth || y > rgbHeight ||...
                x < 1 || y < 1 ||...
                isnan(x) || isnan(y))
                continue;
            end

            % Need some kind of interpolation; I just did it the lazy way
            % (nearest neighbor via rounding)
            x = round(x);
            y = round(y);

            aligned(v,u,4) = single(rgbData(y, x, 1));
            aligned(v,u,5) = single(rgbData(y, x, 2));
            aligned(v,u,6) = single(rgbData(y, x, 3));
        end
    end
end
```
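The per-pixel loops above translate naturally into a vectorized form. Below is a Python/NumPy sketch of the same two passes; note that it uses 0-based pixel coordinates (MATLAB is 1-based) and nearest-neighbor color sampling, and the `depth_scale` parameter plays the same role as `depthScale` above:

```python
import numpy as np

def depth_rgb_registration(depth, rgb,
                           fx_d, fy_d, cx_d, cy_d,
                           fx_rgb, fy_rgb, cx_rgb, cy_rgb,
                           extrinsics, depth_scale=1.0):
    """Vectorized sketch of the per-pixel registration loops (0-based pixels)."""
    h, w = depth.shape
    rgb_h, rgb_w = rgb.shape[:2]

    # Pass 1: back-project every depth pixel and apply the extrinsics
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # u varies along columns
    z = depth.astype(np.float32) / depth_scale
    x = (u - cx_d) * z / fx_d
    y = (v - cy_d) * z / fy_d
    pts = np.stack([x, y, z, np.ones_like(z)], axis=-1) @ extrinsics.T

    # Pass 2: project into the RGB image and sample colors (nearest neighbor)
    with np.errstate(divide='ignore', invalid='ignore'):
        px = pts[..., 0] * fx_rgb / pts[..., 2] + cx_rgb
        py = pts[..., 1] * fy_rgb / pts[..., 2] + cy_rgb
    valid = (np.isfinite(px) & np.isfinite(py)
             & (px >= 0) & (px <= rgb_w - 1)
             & (py >= 0) & (py <= rgb_h - 1))

    aligned = np.zeros((h, w, 6), dtype=np.float32)
    aligned[..., :3] = pts[..., :3]
    vi, ui = np.nonzero(valid)
    aligned[vi, ui, 3:] = rgb[np.round(py[valid]).astype(int),
                              np.round(px[valid]).astype(int)]
    return aligned
```

With identity extrinsics and identical intrinsics for both cameras, every depth pixel maps back onto itself, which makes for an easy sanity check.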

A few things to note here:

- The indices `x` and `y` in the second group of `for` loops may be invalid, which indicates that the corresponding 3D point is not visible to the RGB camera.
- Some kind of interpolation may be necessary when sampling the RGB frame with `x` and `y`. I just did rounding (nearest neighbor).
- This code can be readily used with the `savepcd` function to save the point cloud into a PCL-compatible format.

The registration formulas were obtained from the paper “On-line Incremental 3D Human Body Reconstruction for HMI or AR Applications” by Almeida et al. (2011). The same formulas can be found here. Hope this helps 🙂

## 4 comments


## ramine

August 26, 2016 at 1:57 PM (UTC -4)

Hello Dear Mehran,

Your code is useful for me. Do you know the camera parameters for the Kinect version 1?

## Mehran Maghoumi

August 27, 2016 at 6:16 PM (UTC -4)

Glad you found it useful! Unfortunately, I don’t have any calibration values for a Kinect 1. You may want to calibrate it yourself…

## Vishnu Teja Yalakuntla

May 21, 2017 at 4:39 AM (UTC -4)

The device comes with calibration parameters. For example, in pylibfreenect2, we can access them through `fn.openDevice(serial, pipeline=pipeline).getColorCameraParams()` or `fn.openDevice(serial, pipeline=pipeline).getIrCameraParams()`. But I don’t know how accurate they are.

Cheers

## Mehran Maghoumi

May 21, 2017 at 8:19 AM (UTC -4)

Not very accurate. See my other post here: https://www.codefull.org/2017/04/practical-kinect-calibration/