Align Depth and Color Frames – Depth and RGB Registration

Sometimes it is necessary to create a point cloud from a given depth and color (RGB) frame. This is especially the case when a scene is captured using depth cameras such as Kinect. The process of aligning the depth and the RGB frame is called “registration”, and it is very easy to do (yet the algorithm’s pseudo-code is surprisingly hard to find with a simple Google search! 😀 )

To perform registration, you need four pieces of information:

  1. The depth camera intrinsics:
    1. Focal lengths fx_d and fy_d (in pixel units)
    2. Optical centers (sometimes called image centers) Cx_d and Cy_d
  2. The RGB camera intrinsics:
    1. Focal lengths fx_rgb and fy_rgb (in pixel units)
    2. Optical centers (sometimes called image centers) Cx_rgb and Cy_rgb
  3. The extrinsics relating the depth camera to the RGB camera. This is a 4×4 matrix containing rotation and translation values.
  4. (Obviously) the depth and the RGB frames. Note that they do not have to have the same resolution; applying the intrinsics takes care of the resolution difference. With cameras such as Kinect, the depth values should usually be in meters. The unit of the depth values is very important: using incorrect units results in a registration in which the colors and the depth values are clearly misaligned.
    Also, note that some data sets apply a scale and a bias to the depth values in the depth frame. Make sure to account for this scaling and offsetting before proceeding. In other words, make sure there are no scales applied to the depth values of your depth frame.
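For example, the TUM RGB-D benchmark stores depth as 16-bit PNGs scaled by a factor of 5000 (the factor is dataset-specific, so check your dataset’s documentation). Undoing such a scale is a one-liner:

```python
import numpy as np

# Raw 16-bit depth values as stored by the dataset (example values)
raw_depth = np.array([[5000, 10000],
                      [2500,  5000]], dtype=np.uint16)

# Convert to meters before registration (5000 is TUM's scale factor;
# substitute your dataset's own scale and bias here)
depth_meters = raw_depth.astype(np.float64) / 5000.0
```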

Let depthData contain the depth frame and rgbData contain the RGB frame. The pseudo-code for registration in MATLAB is as follows:
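(The original MATLAB listing appears to be missing from this copy of the post. The following Python/NumPy sketch implements the same procedure; the intrinsic and extrinsic parameter names are placeholders matching the notation above.)

```python
import numpy as np

def register_depth_to_rgb(depth_data, rgb_data,
                          fx_d, fy_d, cx_d, cy_d,          # depth intrinsics
                          fx_rgb, fy_rgb, cx_rgb, cy_rgb,  # RGB intrinsics
                          extrinsics):                     # 4x4 depth -> RGB
    """Back-project every depth pixel to 3D, transform it into the RGB
    camera's frame, and look up its color in the RGB frame."""
    h, w = depth_data.shape
    points, colors = [], []

    # First group of for loops: back-project each depth pixel into 3D
    # (depth camera coordinates), then move it into the RGB camera's frame.
    cam_points = np.zeros((h, w, 3))
    for v in range(h):
        for u in range(w):
            z = depth_data[v, u]                 # depth in meters
            x3 = (u - cx_d) * z / fx_d
            y3 = (v - cy_d) * z / fy_d
            p = extrinsics @ np.array([x3, y3, z, 1.0])
            cam_points[v, u] = p[:3]

    # Second group of for loops: project each 3D point onto the RGB image
    # plane to find the pixel that colors it (rounding, not interpolating).
    for v in range(h):
        for u in range(w):
            X, Y, Z = cam_points[v, u]
            if Z <= 0:
                continue                         # behind the RGB camera
            x = int(round(X * fx_rgb / Z + cx_rgb))
            y = int(round(Y * fy_rgb / Z + cy_rgb))
            # x and y may be out of bounds: the point is not visible
            # to the RGB camera.
            if 0 <= x < rgb_data.shape[1] and 0 <= y < rgb_data.shape[0]:
                points.append(cam_points[v, u])
                colors.append(rgb_data[y, x])

    return np.array(points), np.array(colors)
```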

A few things to note here:

  1. The indices x and y computed in the second group of for loops may be out of bounds, which indicates that the corresponding 3D point is not visible to the RGB camera.
  2. Some kind of interpolation may be necessary when using x and y; I simply rounded them.
  3. This code can be readily used with the savepcd function to save the point cloud in a PCL-compatible format.
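If savepcd is not at hand, writing a minimal ASCII .pcd file yourself is straightforward. The sketch below follows the PCD v0.7 format, packing each color into a single uint32 per PCL’s convention (the function name and layout here are my own, not from the post):

```python
def save_pcd(filename, points, colors):
    """Write an ASCII PCD v0.7 file from N (x, y, z) points and
    N (r, g, b) uint8 colors."""
    n = len(points)
    header = "\n".join([
        "# .PCD v0.7 - Point Cloud Data file format",
        "VERSION 0.7",
        "FIELDS x y z rgb",
        "SIZE 4 4 4 4",
        "TYPE F F F U",       # rgb stored as an unsigned int
        "COUNT 1 1 1 1",
        f"WIDTH {n}",
        "HEIGHT 1",           # unorganized point cloud
        "VIEWPOINT 0 0 0 1 0 0 0",
        f"POINTS {n}",
        "DATA ascii",
    ])
    with open(filename, "w") as f:
        f.write(header + "\n")
        for (x, y, z), (r, g, b) in zip(points, colors):
            # Pack RGB into one integer: 0x00RRGGBB
            rgb = (int(r) << 16) | (int(g) << 8) | int(b)
            f.write(f"{x} {y} {z} {rgb}\n")
```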

The registration formulas were obtained from the paper “On-line Incremental 3D Human Body Reconstruction for HMI or AR Applications” by Almeida et al. (2011). The same formulas can be found here. Hope this helps 🙂




  1. ramine

    Dear Mehran
    Your code is useful for me.
    do you know parameter camera for kinect version1?

    1. Mehran Maghoumi

      Glad you found it useful! Unfortunately, I don’t have any calibration values for a Kinect 1. You may want to calibrate yourself…

      1. Vishnu Teja Yalakuntla

        The device comes with calibration parameters. For example, in pylibfreenect2, we can access them through

        fn.openDevice(serial, pipeline=pipeline).getColorCameraParams()


        fn.openDevice(serial, pipeline=pipeline).getIrCameraParams()

        But I don’t know how accurate they are.


        1. Mehran Maghoumi

          Not very accurate. See my other post here: https://www.codefull.org/2017/04/practical-kinect-calibration/
