NRSDK Coordinate Systems
This document describes the coordinate systems of the XREAL glasses as used in the NRSDK for Unity. It also describes the corresponding interfaces for getting extrinsics between the glasses' components, camera image data, and camera intrinsics, as well as the conversion to other coordinate system conventions. Note that this document applies to the NRSDK for Unity only, and not to other variants of the NRSDK.
In the NRSDK for Unity, coordinate systems and their corresponding extrinsics are defined using the Unity convention (left-handed).
The XREAL glasses consist of the following key components:
2 x Grayscale Cameras
2 x Display Cameras
Head / IMU
RGB Camera
The placement of the above components and their corresponding coordinate systems, as defined in NRSDK for Unity, are as follows
The global coordinate frame of the tracking system is as follows
The following interface returns the 6DoF head pose with respect to the global frame, as defined above.
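A minimal sketch, assuming the NRFrame.HeadPose property exposed by NRSDK (verify the exact member against your SDK version):

```csharp
using NRKernal;
using UnityEngine;

public class HeadPoseExample : MonoBehaviour
{
    void Update()
    {
        // 6DoF head pose with respect to the global tracking frame.
        Pose headPose = NRFrame.HeadPose;
        Debug.Log($"Head position: {headPose.position}, rotation: {headPose.rotation.eulerAngles}");
    }
}
```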
The following interface returns the 6DoF extrinsics, as a transformation matrix, of a Device's coordinate frame expressed in the Head coordinate frame.
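In NRSDK this is exposed through NRFrame.GetDevicePoseFromHead, to the best of our knowledge (verify the signature against your SDK version):

```csharp
// Declaration from NRSDK's NRFrame class (verify against your SDK version).
// Returns the pose of the given device (e.g. NativeDevice.RGB_CAMERA)
// expressed in the Head coordinate frame.
public static Pose GetDevicePoseFromHead(NativeDevice device);
```

For example, given a vector's coordinates $v_{device}$ in the Device's coordinate frame, and the extrinsic transformation matrix $T_{head\_device}$ obtained as above, we can compute the vector's coordinates in the Head coordinate frame by $v_{head} = T_{head\_device} \, v_{device}$.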
The following example code gets the extrinsic transformation of the RGB Camera in Head, and transforms a point's coordinates from the RGB camera frame to the Head frame.
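A sketch of this example; NativeDevice.RGB_CAMERA is the NRSDK device identifier as we understand it, and the sample point value is hypothetical:

```csharp
using NRKernal;
using UnityEngine;

public class RgbToHeadExample : MonoBehaviour
{
    void Update()
    {
        // Extrinsic pose of the RGB camera expressed in the Head frame.
        Pose rgbPoseInHead = NRFrame.GetDevicePoseFromHead(NativeDevice.RGB_CAMERA);

        // Build the 4x4 transformation matrix T_head_rgb.
        Matrix4x4 headFromRgb = Matrix4x4.TRS(rgbPoseInHead.position, rgbPoseInHead.rotation, Vector3.one);

        // A sample point expressed in the RGB camera frame (hypothetical value).
        Vector3 pointInRgb = new Vector3(0.1f, 0.2f, 1.0f);

        // Transform the point into the Head frame: p_head = T_head_rgb * p_rgb.
        Vector3 pointInHead = headFromRgb.MultiplyPoint3x4(pointInRgb);
        Debug.Log($"Point in Head frame: {pointInHead}");
    }
}
```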
For computer vision algorithm developers, it is often convenient to handle quantities expressed in the OpenCV coordinate system (right-handed). Hereafter, we describe how to convert the aforementioned Unity coordinate systems and their corresponding extrinsics to the OpenCV convention. We also describe the definitions and interfaces for image data and camera intrinsics.
In the OpenCV convention, the components of the XREAL glasses and their corresponding coordinate systems are as follows
The difference between the Unity and OpenCV coordinate system definitions for a camera is as follows: in both conventions the x-axis points right and the z-axis points forward along the optical axis, but the y-axis points up in Unity and down in OpenCV.
Note that only the y-axis needs to be negated between these two conventions. Therefore, given an extrinsic transformation defined under the Unity coordinate system, we can obtain the equivalent transformation under the OpenCV convention by using the following utility function.
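A minimal sketch of such a utility (the class and method names here are ours, not NRSDK's). It uses the flip matrix $S = \mathrm{diag}(1, -1, 1, 1)$, which is its own inverse, so $T_{cv} = S \, T_{unity} \, S$:

```csharp
using UnityEngine;

public static class ConversionUtility
{
    // Flip matrix S = diag(1, -1, 1, 1); negating y converts between
    // Unity (y-up, left-handed) and OpenCV (y-down, right-handed) camera frames.
    static readonly Matrix4x4 Flip = Matrix4x4.Scale(new Vector3(1f, -1f, 1f));

    // Converts an extrinsic transformation from the Unity convention to the
    // OpenCV convention. Since S is its own inverse, T_cv = S * T_unity * S.
    public static Matrix4x4 UnityToOpenCVMatrix(Matrix4x4 unityTransform)
    {
        return Flip * unityTransform * Flip;
    }
}
```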
The following example code first gets the extrinsic transformation of the RGB Camera in Head under the Unity convention, as described earlier, and then converts it to the OpenCV convention using the above utility function.
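A sketch of that example, reusing the utility above (NRFrame.GetDevicePoseFromHead and NativeDevice.RGB_CAMERA are the NRSDK names as we understand them; verify against your SDK version):

```csharp
using NRKernal;
using UnityEngine;

public class RgbExtrinsicOpenCVExample : MonoBehaviour
{
    void Start()
    {
        // Extrinsic of the RGB camera in the Head frame, Unity convention.
        Pose rgbPoseInHead = NRFrame.GetDevicePoseFromHead(NativeDevice.RGB_CAMERA);
        Matrix4x4 headFromRgbUnity = Matrix4x4.TRS(rgbPoseInHead.position, rgbPoseInHead.rotation, Vector3.one);

        // The same extrinsic expressed in the OpenCV convention.
        Matrix4x4 headFromRgbCv = ConversionUtility.UnityToOpenCVMatrix(headFromRgbUnity);
        Debug.Log($"T_head_rgb (OpenCV): {headFromRgbCv}");
    }
}
```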
The following example code shows how to get the extrinsic transformation between the two grayscale cameras and convert it to the OpenCV convention.
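A sketch, assuming the NativeDevice.LEFT_GRAYSCALE_CAMERA and NativeDevice.RIGHT_GRAYSCALE_CAMERA identifiers (verify the enum names in your SDK version). The relative extrinsic is composed from the two per-camera extrinsics:

```csharp
using NRKernal;
using UnityEngine;

public class GrayscaleStereoExtrinsicExample : MonoBehaviour
{
    void Start()
    {
        // Poses of both grayscale cameras in the Head frame (Unity convention).
        Pose leftPose = NRFrame.GetDevicePoseFromHead(NativeDevice.LEFT_GRAYSCALE_CAMERA);
        Pose rightPose = NRFrame.GetDevicePoseFromHead(NativeDevice.RIGHT_GRAYSCALE_CAMERA);
        Matrix4x4 headFromLeft = Matrix4x4.TRS(leftPose.position, leftPose.rotation, Vector3.one);
        Matrix4x4 headFromRight = Matrix4x4.TRS(rightPose.position, rightPose.rotation, Vector3.one);

        // Right grayscale camera expressed in the left grayscale camera frame:
        // T_left_right = T_head_left^-1 * T_head_right.
        Matrix4x4 leftFromRightUnity = headFromLeft.inverse * headFromRight;

        // Converted to the OpenCV convention.
        Matrix4x4 leftFromRightCv = ConversionUtility.UnityToOpenCVMatrix(leftFromRightUnity);
        Debug.Log($"T_leftGray_rightGray (OpenCV): {leftFromRightCv}");
    }
}
```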
The definition of the image pixel coordinates and the camera intrinsics in the NRSDK follows the OpenCV convention.
The image data is stored row-wise in memory: for an image of width $W$, the pixel at row $v$ and column $u$ is stored at index $v \cdot W + u$, scaled by the number of bytes per pixel for multi-channel formats.
Raw image data can be obtained through NRRGBCamTexture or NRGrayCameraTexture for the RGBCamera or GrayCamera, respectively.
In the current version of NRSDK, one can use Texture2D to get the raw image data. The following example code uses GetRawTextureData to get raw data by accessing Texture2D from NRRGBCamTexture. The output raw data array stores the image pixel data row-wise as described above.
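A sketch of this example; the NRRGBCamTexture members (Play, Stop, GetTexture) and the 3-bytes-per-pixel format are our assumptions, so verify them against your SDK version:

```csharp
using NRKernal;
using UnityEngine;

public class RgbRawDataExample : MonoBehaviour
{
    NRRGBCamTexture camTexture;

    void Start()
    {
        camTexture = new NRRGBCamTexture();
        camTexture.Play();
    }

    void Update()
    {
        Texture2D tex = camTexture.GetTexture();
        if (tex == null) return;

        // Raw pixel buffer, stored row-wise as described above.
        byte[] rawData = tex.GetRawTextureData();

        // Index of pixel (u, v), assuming a 3-bytes-per-pixel RGB format.
        int u = 10, v = 20;
        int index = (v * tex.width + u) * 3;
        if (index < rawData.Length)
            Debug.Log($"Pixel ({u},{v}) first channel: {rawData[index]}");
    }

    void OnDestroy()
    {
        camTexture?.Stop();
    }
}
```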
The interfaces for getting camera intrinsics, distortion parameters, and resolution are as follows
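The relevant members, as found in NRSDK's NRFrame class (declarations only; exact names and types may differ across SDK versions):

```csharp
// RGB camera intrinsics and distortion.
public static NativeMat3f GetRGBCameraIntrinsicMatrix();
public static NRDistortionParams GetRGBCameraDistortion();

// Intrinsics, distortion, and resolution for an arbitrary device
// (e.g. the grayscale cameras), selected via NativeDevice.
public static NativeMat3f GetDeviceIntrinsicMatrix(NativeDevice device);
public static NRDistortionParams GetDeviceDistortion(NativeDevice device);
public static NativeResolution GetDeviceResolution(NativeDevice device);
```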
The following example code gets the RGB camera's intrinsic matrix and distortion parameters as described above.
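A sketch of that example, under the same assumptions about the NRFrame members listed above:

```csharp
using NRKernal;
using UnityEngine;

public class RgbIntrinsicsExample : MonoBehaviour
{
    void Start()
    {
        // Intrinsic matrix (fx, fy, cx, cy) of the RGB camera, in pixels.
        NativeMat3f intrinsics = NRFrame.GetRGBCameraIntrinsicMatrix();

        // Distortion parameters of the RGB camera.
        NRDistortionParams distortion = NRFrame.GetRGBCameraDistortion();

        Debug.Log($"RGB intrinsics: {intrinsics}");
        Debug.Log($"RGB distortion: {distortion}");
    }
}
```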
The camera intrinsic matrix is composed of the focal lengths $f_x$ and $f_y$, and the principal point $(c_x, c_y)$, expressed in pixel units:

$$K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$
The distortion parameters contain the radial coefficients $k_1, k_2, k_3$ and the tangential coefficients $p_1, p_2$. The order of NRDistortionParams follows the OpenCV convention: $(k_1, k_2, p_1, p_2, k_3)$.