Daniel Moreno
October 2012
Overview
The simplest structured-light system consists of a camera and a data projector.
[Figure: camera coordinate frame (X, Y, Z) and projector frame (X', Y', Z') related by a rotation and translation R, T]
Geometric calibration
• Camera intrinsics: K_cam
• Projector intrinsics: K_proj
• Projector-camera extrinsics: rotation and translation R, T
Application: 3D scanning
[Figure: scanning pipeline with projector and camera]
1. Data acquisition
2. Decode rows and columns → projector-camera correspondences
3. Triangulation: correspondences + calibration = point cloud
4. Mesh
Point clouds from several viewpoints can be merged into a single one and used to build a 3D model.
Camera calibration: well-known problem
Pinhole model + radial distortion
$$K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$

$$x = K \cdot L(X;\; k_1, k_2, k_3, k_4)$$
X: 3D point
k_1, …, k_4: distortion coefficients
K: camera intrinsics
x: projection of X into the image plane
If we have enough X↔x point
correspondences we can solve for
all the unknowns
How do we find correspondences?
Object of known
dimensions
Images from different viewpoints
[Figure: a checkerboard point X imaged at x_1 = [x, y]^T, x_2, x_3 from three viewpoints]

$$x_1 = K \cdot L(R_1 \cdot X + T_1;\; k_1, k_2, k_3, k_4)$$
$$x_2 = K \cdot L(R_2 \cdot X + T_2;\; k_1, k_2, k_3, k_4)$$
$$x_3 = K \cdot L(R_3 \cdot X + T_3;\; k_1, k_2, k_3, k_4)$$
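The projection model above is easy to sketch in code. The split of k_1…k_4 into two radial and two tangential coefficients follows OpenCV's convention and is an assumption here; the slides only name four coefficients:

```python
# Sketch of x = K . L(X; k1..k4): pinhole projection with lens distortion.
# Assumes k1, k2 are radial and k3, k4 tangential (OpenCV-style ordering).

def project(X, K, k):
    """Project a 3D point X (in camera coordinates) to pixel coordinates."""
    k1, k2, k3, k4 = k
    xn, yn = X[0] / X[2], X[1] / X[2]      # pinhole: normalize by depth
    r2 = xn * xn + yn * yn
    radial = 1 + k1 * r2 + k2 * r2 * r2    # radial part of L(.)
    xd = xn * radial + 2 * k3 * xn * yn + k4 * (r2 + 2 * xn * xn)
    yd = yn * radial + k3 * (r2 + 2 * yn * yn) + 2 * k4 * xn * yn
    fx, s, cx = K[0]                       # K = [[fx, s, cx],
    fy, cy = K[1][1], K[1][2]              #      [ 0, fy, cy],
    return (fx * xd + s * yd + cx, fy * yd + cy)  # [ 0, 0, 1]]

K = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
print(project((0.0, 0.0, 1.0), K, (0.1, 0.01, 0.001, 0.001)))  # -> (320.0, 240.0)
```

A point on the optical axis lands at the principal point regardless of distortion, which makes a handy sanity check.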
Projector calibration: ?
Use the pinhole model to describe the projector:
• Projectors work as an inverse camera
If we model the projector the same way as our camera, then we would like to calibrate the projector just as we do the camera:
• We need correspondences between 3D world points and projector image plane points: X ↔ x
• But the projector cannot capture images
$$K_{proj} = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$

$$x = K_{proj} \cdot L(X;\; k_1, k_2, k_3, k_4)$$
Challenge: How do we find point correspondences?
Related work
Several projector calibration methods have been proposed*; they can be divided into three groups:
1. Rely on camera calibration
   • First the camera is calibrated; then the camera calibration is used to find the 3D world coordinates of the projected pattern
   • Inaccuracies in the camera calibration translate into errors in the projector calibration
2. Find projector correspondences using homographies between planes
   • Cannot model projector lens distortion because of the linearity of the transformation
3. Too difficult to perform
   • Require special equipment or calibration artifacts
   • Require color calibration
⇒ Existing methods were not accurate enough or not practical
(*) See the paper for references
Proposed method: overview
Features:
• Simple to perform:
  - no special equipment required
  - reuses existing components
• Accurate:
  - no constraints on the mathematical model used to describe the projector
  - we use the full pinhole model with radial distortion (as for cameras)
• Robust:
  - can handle small decoding errors

Block diagram: Acquisition → Decoding → Camera intrinsics + Projector intrinsics → System extrinsics
Proposed method: acquisition
Traditional camera calibration
• requires a planar checkerboard (easy to make with a printer)
• capture pictures of the checkerboard from several viewpoints

Structured-light system calibration
• use a planar checkerboard
• capture structured-light sequences of the checkerboard from several viewpoints
Proposed method: decoding
Decoding depends on the projected pattern
• The method does not rely on any specific pattern
Our implementation uses complementary gray code patterns
• Robust to lighting conditions and different object colors (note that we used the standard B&W checkerboard)
• Does not require photometric calibration (as phase shifting does)
• We prioritize calibration accuracy over acquisition speed
• Reasonably fast to project and capture: if the system is synchronized at 30 fps, the 42 images used for each pose are acquired in 1.4 seconds
Our implementation decodes the pattern using "robust pixel classification"(*)
• High-frequency patterns are used to separate the direct and global light components of each pixel
• Once the direct and global components are known, each pixel is classified as ON, OFF, or UNCERTAIN using a simple set of rules
(*) Y. Xu and D. G. Aliaga, "Robust pixel classification for 3D modeling with structured light"
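The 42-image figure is consistent with complementary Gray codes for a 1024×768 projector (a plausible resolution, assumed here, not stated on the slide): one pattern per bit for columns and rows, each also projected inverted, plus a fully-lit/fully-dark pair:

```python
from math import ceil, log2

cols, rows = 1024, 768          # assumed projector resolution

col_bits = ceil(log2(cols))     # 10 patterns encode the column index
row_bits = ceil(log2(rows))     # 10 patterns encode the row index

# Complementary coding doubles the count; one extra fully-lit/fully-dark
# pair supports the direct/global light separation.
images = 2 * (col_bits + row_bits) + 2
seconds_per_pose = images / 30  # synchronized at 30 fps
print(images, seconds_per_pose) # -> 42 1.4
```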
Proposed method: projector calibration
Once the structured-light pattern is decoded we have a mapping between projector and camera pixels:
1) Each camera pixel is associated to a projector row and column, or set to UNCERTAIN:
   for each (x, y): Map(x, y) = (row, col) or UNCERTAIN
2) The map is not bijective: many camera pixels correspond to the same projector pixel
3) Checkerboard corners are not located at integer pixel locations
Proposed method: projector calibration
Solution: local homographies

[Figure: a small neighborhood of a checkerboard corner in the captured image mapped to the projected image by its local homography]

1. The surface is locally planar: in fact the complete checkerboard is a plane
2. Radial distortion is negligible in a small neighborhood
3. Radial distortion is significant over the complete image:
   • a single global homography is not enough

For each checkerboard corner, solve:

$$\hat{H} = \operatorname{argmin}_{H} \sum_{\forall x} \| x' - H \cdot x \|^{2}, \qquad H \in \mathbb{R}^{3 \times 3}$$

over the decoded points x in a neighborhood of the corner, then map the corner:

$$x' = \hat{H} \cdot x, \qquad x = [x, y, 1]^{T}, \quad x' = [col, row, 1]^{T}$$
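A local homography can be fitted from the decoded correspondences around a corner with the standard DLT; this numpy sketch is illustrative, not the authors' implementation:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate H in R^{3x3} with dst ~ H . src from n >= 4 point pairs (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography entries span the (approximate) null space of A.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, p):
    """Map a 2D point through H in homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

In the method above, src would be camera pixels near the corner and dst their decoded projector (col, row) coordinates; the fitted Ĥ then maps the sub-pixel corner itself.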
Proposed method: projector calibration
Summary:
1. Decode the structured-light pattern: camera ↔ projector map
2. Find checkerboard corner locations in camera image coordinates
3. Compute a local homography H for each corner
4. Translate each corner from image coordinates x to projector coordinates x' by applying the corresponding local homography H: x' = H · x
5. Using the correspondences between projector corner coordinates and 3D world corner locations, X ↔ x', find the projector intrinsic parameters
[Figure: object of known dimensions; a corner X projected to x'_1, x'_2, x'_3]

$$x'_1 = K_{proj} \cdot L(R_1 \cdot X + T_1;\; k_1, k_2, k_3, k_4)$$
$$x'_2 = K_{proj} \cdot L(R_2 \cdot X + T_2;\; k_1, k_2, k_3, k_4)$$
$$x'_3 = K_{proj} \cdot L(R_3 \cdot X + T_3;\; k_1, k_2, k_3, k_4)$$
⇒ No difference from camera calibration!
Camera calibration and system extrinsics
Camera intrinsics
Using the corner locations in image coordinates and their 3D world coordinates, we
calibrate the camera as usual
• Note that no extra images are required
System extrinsics
Once the projector and camera intrinsics are known, we calibrate the extrinsic parameters (R and T) as is done for camera-camera systems
Using the previous correspondences, x ↔ x', we fix the coordinate system at the camera and we solve for R and T:

$$x_i = K_{cam} \cdot L(\tilde{X}_i;\; k_1, k_2, k_3, k_4)$$
$$x'_i = K_{proj} \cdot L(R \cdot \tilde{X}_i + T;\; k'_1, k'_2, k'_3, k'_4), \qquad i = 1, 2, 3, \ldots$$

where X̃_i is the corner location in camera coordinates, k_1, …, k_4 are the camera distortion coefficients, and k'_1, …, k'_4 are the projector distortion coefficients.

[Figure: a corner seen at x by the camera and at x' by the projector, related by R, T]
Calibration software
Software
The proposed calibration method can be implemented to run fully automatically:
• The user provides a folder with all the images
• Press "calibrate" and the software automatically extracts the checkerboard corners, decodes the structured-light pattern, and calibrates the system
Algorithm
1. Detect checkerboard corner locations for each plane orientation
2. Estimate global and direct light components
3. Decode structuredlight patterns
4. Compute a local homography for each checkerboard corner
5. Translate corner locations into projector coordinates using local homographies
6. Calibrate camera intrinsics using image corner locations
7. Calibrate projector intrinsics using projector corner locations
8. Fix projector and camera intrinsics and calibrate system extrinsic parameters
9. Optionally, all the parameters, intrinsic and extrinsic, can be optimized together
Results
Comparison with existing software: procamcalib
▪ Projector-Camera Calibration Toolbox
▪ http://code.google.com/p/procamcalib/

Reprojection error comparison:

Method                   Camera    Projector
Proposed                 0.3288    0.1447
With global homography   0.3288    0.2176
Procamcalib              0.3288    0.8671

▪ Only projector calibration is compared
▪ The same camera intrinsics are used for all methods
▪ "Global homography" means that a single homography is used to translate all corners

Paper checkerboard used to find the plane equation; projected checkerboard used for calibration
Results
Example of projector lens distortion

Distortion coefficients:
k1 = 0.0888, k2 = 0.3365, k3 = 0.0126, k4 = 0.0023

Non-trivial distortion!
Results
▪ Laser scanner comparison: Hausdorff distance
▪ Model with small details reconstructed using SSD
[Figure: 3D model and error distribution on a scanned 3D plane model]
Conclusions
▪ It works ☺
▪ No special setup or materials required
▪ Very similar to standard stereo camera calibration
▪ Reuses existing software components
  ▪ Camera calibration software
  ▪ Structured-light projection, capture, and decoding software
▪ Local homographies effectively handle projector lens distortion
▪ Adding a projector distortion model improves calibration accuracy
▪ Well-calibrated structured-light systems have a precision comparable to some laser scanners
Gray vs. binary codes
[Figure: binary and Gray code stripe patterns]

Dec   Binary   Gray
 0     000     000
 1     001     001
 2     010     011
 3     011     010
 4     100     110
 5     101     111
 6     110     101
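Consecutive Gray codes differ in exactly one bit, so a decoding error at a stripe boundary shifts the index by at most one, while plain binary can flip many bits at once (e.g. 3→4 is 011→100). The standard conversions, as a generic sketch (not code from the paper):

```python
def bin_to_gray(n):
    """Reflected binary Gray code of n."""
    return n ^ (n >> 1)

def gray_to_bin(g):
    """Invert bin_to_gray by XOR-ing successive shifts."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Reproduce the table above
for i in range(7):
    print(i, format(i, "03b"), format(bin_to_gray(i), "03b"))
```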
Direct/Global light components
For each pixel, the maximum and minimum over the K pattern images I_i estimate the fully-illuminated and fully-dark observations:

$$\hat{L}^{+} = \max_{0 < i < K} I_i, \qquad \hat{L}^{-} = \min_{0 < i < K} I_i$$

With projector black level b, the direct and global components follow as:

$$L_d = \frac{\hat{L}^{+} - \hat{L}^{-}}{1 - b}, \qquad L_g = \frac{2\,(\hat{L}^{-} - b\,\hat{L}^{+})}{1 - b^{2}}$$

Robust pixel classification

If α is the lit fraction of the projected pattern, a pixel observed under a pattern and its complement yields the intensity pair:

$$p = L_d + \alpha\, L_g + (1 - \alpha)\, b\, L_g, \qquad \bar{p} = b\, L_d + (1 - \alpha)\, L_g + \alpha\, b\, L_g$$

These bounds justify a simple rule set (m is a minimum direct-light threshold):

rule(p, p̄) =
  UNCERTAIN   if L_d < m
  ON          if L_d > L_g and p > p̄
  OFF         if L_d > L_g and p < p̄
  OFF         if p < L_d and p̄ > L_g
  ON          if p > L_g and p̄ < L_d
  UNCERTAIN   otherwise
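The rule set translates directly into code; this sketch evaluates the rules in order (the threshold default m is illustrative, not a value from the slides):

```python
def classify(p, p_bar, L_d, L_g, m=5.0):
    """Classify a pixel from the intensities (p, p_bar) observed under a
    pattern and its complement, given direct/global components L_d, L_g.
    m is a minimum direct-light threshold (illustrative default)."""
    if L_d < m:                      # too dark to decide
        return "UNCERTAIN"
    if L_d > L_g and p > p_bar:      # direct light dominates
        return "ON"
    if L_d > L_g and p < p_bar:
        return "OFF"
    if p < L_d and p_bar > L_g:      # global-dominated cases
        return "OFF"
    if p > L_g and p_bar < L_d:
        return "ON"
    return "UNCERTAIN"
```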
Triangulation
Each decoded correspondence gives two rays, u_1 from the camera and u_2 from the projector. In homogeneous coordinates:

$$\lambda_1 u_1 = R_1 \cdot X + T_1, \qquad \lambda_2 u_2 = R_2 \cdot X + T_2$$

Multiplying by the skew-symmetric cross-product matrix [û_i]_× eliminates the unknown scales:

$$[\hat{u}_1]_\times R_1 X + [\hat{u}_1]_\times T_1 = 0, \qquad [\hat{u}_2]_\times R_2 X + [\hat{u}_2]_\times T_2 = 0$$

Stacking both constraints gives a homogeneous linear system in X:

$$\begin{bmatrix} [\hat{u}_1]_\times R_1 & [\hat{u}_1]_\times T_1 \\ [\hat{u}_2]_\times R_2 & [\hat{u}_2]_\times T_2 \end{bmatrix} \begin{bmatrix} X \\ 1 \end{bmatrix} = 0$$

[Figure: point X triangulated from rays u_1, u_2 with poses (R_1, T_1) and (R_2, T_2)]
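The stacked homogeneous system can be solved as a least-squares null vector; a small numpy sketch with synthetic poses (illustrative, not the paper's implementation):

```python
import numpy as np

def skew(u):
    """Cross-product matrix: skew(u) @ v equals np.cross(u, v)."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def triangulate(u1, R1, T1, u2, R2, T2):
    """Least-squares solution of the stacked system for the 3D point X."""
    A = np.vstack([
        np.hstack([skew(u1) @ R1, (skew(u1) @ T1).reshape(3, 1)]),
        np.hstack([skew(u2) @ R2, (skew(u2) @ T2).reshape(3, 1)]),
    ])
    _, _, Vt = np.linalg.svd(A)
    Xh = Vt[-1]                 # null vector ~ [X; 1] up to scale
    return Xh[:3] / Xh[3]

# Synthetic check: first view at the origin, second translated along x
R1, T1 = np.eye(3), np.zeros(3)
R2, T2 = np.eye(3), np.array([-1.0, 0.0, 0.0])
X_true = np.array([0.2, -0.1, 3.0])
u1, u2 = R1 @ X_true + T1, R2 @ X_true + T2   # observed rays (up to scale)
print(triangulate(u1, R1, T1, u2, R2, T2))    # -> approx [0.2, -0.1, 3.0]
```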