Chan's algorithm

A 2D demo of Chan's algorithm. Note, however, that the algorithm partitions the points arbitrarily, not by x-coordinate.

In computational geometry, Chan's algorithm,[1] named after Timothy M. Chan, is an optimal output-sensitive algorithm to compute the convex hull of a set ${\displaystyle P}$ of ${\displaystyle n}$ points, in 2- or 3-dimensional space. The algorithm takes ${\displaystyle O(n\log h)}$ time, where ${\displaystyle h}$ is the number of vertices of the output (the convex hull). In the planar case, the algorithm combines an ${\displaystyle O(n\log n)}$ algorithm (Graham scan, for example) with Jarvis march (${\displaystyle O(nh)}$), in order to obtain an optimal ${\displaystyle O(n\log h)}$ time. Chan's algorithm is notable because it is much simpler than the Kirkpatrick–Seidel algorithm, and it naturally extends to 3-dimensional space. This paradigm[2] was developed independently by Frank Nielsen in his Ph.D. thesis.[3]

Algorithm

Overview

A single pass of the algorithm requires a parameter ${\displaystyle m\geq h}$  to successfully terminate. Assume such a value is fixed (in practice, ${\displaystyle h}$  is not known beforehand and multiple passes with increasing values of ${\displaystyle m}$  will be used, see below).

The algorithm starts by arbitrarily partitioning the set of points ${\displaystyle P}$  into ${\displaystyle K\leq 1+n/m}$  subsets ${\displaystyle (Q_{k})_{k=1,2,...K}}$  with at most ${\displaystyle m}$  points each; notice that ${\displaystyle K=O(n/m)}$ .
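As a concrete illustration, the arbitrary partition can be as simple as cutting the input sequence into consecutive chunks. The following Python sketch (the `split` helper name is illustrative, not from Chan's paper) shows one valid choice:

```python
def split(points, m):
    """Partition `points` arbitrarily into ceil(n/m) groups of at most m points.
    Consecutive chunks of the input order are one valid choice, since the
    algorithm places no constraint on how the groups are formed."""
    return [points[i:i + m] for i in range(0, len(points), m)]

groups = split(list(range(10)), 3)
# 4 groups of sizes 3, 3, 3, 1; together they cover all 10 points
```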

For each subset ${\displaystyle Q_{k}}$ , it computes the convex hull, ${\displaystyle C_{k}}$ , using an ${\displaystyle O(p\log p)}$  algorithm (for example, Graham scan), where ${\displaystyle p}$  is the number of points in the subset. As there are ${\displaystyle K}$  subsets of ${\displaystyle O(m)}$  points each, this phase takes ${\displaystyle K\cdot O(m\log m)=O(n\log m)}$  time.

During the second phase, Jarvis's march is executed, making use of the precomputed (mini) convex hulls, ${\displaystyle (C_{k})_{k=1,2,...K}}$. At each step in this Jarvis's march, we have a point ${\displaystyle p_{i}}$ in the convex hull (at the beginning, ${\displaystyle p_{i}}$ may be the point in ${\displaystyle P}$ with the lowest y coordinate, which is guaranteed to be in the convex hull of ${\displaystyle P}$), and need to find a point ${\displaystyle p_{i+1}=f(p_{i},P)}$ such that all other points of ${\displaystyle P}$ are to the right of the line ${\displaystyle p_{i}p_{i+1}}$; the notation ${\displaystyle p_{i+1}=f(p_{i},P)}$ simply means that the next point, ${\displaystyle p_{i+1}}$, is determined as a function of ${\displaystyle p_{i}}$ and ${\displaystyle P}$. The convex hull of the set ${\displaystyle Q_{k}}$, ${\displaystyle C_{k}}$, is known and contains at most ${\displaystyle m}$ points (listed in clockwise or counter-clockwise order), which makes it possible to compute ${\displaystyle f(p_{i},Q_{k})}$ in ${\displaystyle O(\log m)}$ time: since the vertices of ${\displaystyle C_{k}}$ are stored in convex position, the tangent from ${\displaystyle p_{i}}$ to ${\displaystyle C_{k}}$ can be located by binary search over them. Hence, ${\displaystyle f(p_{i},Q_{k})}$ can be computed for all ${\displaystyle K}$ subsets in ${\displaystyle O(K\log m)}$ time. Then, ${\displaystyle f(p_{i},P)}$ can be determined using the same technique as in ordinary Jarvis's march, but considering only the points ${\displaystyle (f(p_{i},Q_{k}))_{1\leq k\leq K}}$ (i.e. the candidates from the mini convex hulls) instead of the whole set ${\displaystyle P}$. For those points, one iteration of Jarvis's march takes ${\displaystyle O(K)}$ time, which is negligible compared to the computation for all subsets.
Jarvis's march completes when the process has been repeated ${\displaystyle O(h)}$ times (because of the way Jarvis's march works, after at most ${\displaystyle h}$ iterations of its outermost loop, where ${\displaystyle h}$ is the number of points in the convex hull of ${\displaystyle P}$, the convex hull must have been found). Hence, the second phase takes ${\displaystyle O(Kh\log m)}$ time, equivalent to ${\displaystyle O(n\log h)}$ time if ${\displaystyle m}$ is close to ${\displaystyle h}$ (see below for a strategy to choose ${\displaystyle m}$ so that this is the case).
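The "all other points to the right" test is implemented in practice with the cross-product orientation predicate rather than explicit angles. A minimal Python sketch (function names are illustrative, not from Chan's paper):

```python
def orientation(a, b, c):
    """Cross product of the vectors ab and ac:
    > 0 if c lies to the left of the directed line a -> b,
    < 0 if c lies to the right, 0 if the three points are collinear."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def dist2(a, b):
    """Squared Euclidean distance (avoids the square root)."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def next_point(p, candidates):
    """One step of Jarvis's march: pick the candidate q such that every other
    candidate lies to the right of the line p -> q (collinear ties broken by
    taking the farther point)."""
    best = None
    for q in candidates:
        if q == p:
            continue
        if best is None:
            best = q
            continue
        o = orientation(p, best, q)
        if o > 0 or (o == 0 and dist2(p, q) > dist2(p, best)):
            best = q
    return best
```

In Chan's algorithm, `next_point` is applied only to the ${\displaystyle K}$ candidates returned by the binary searches on the mini-hulls, not to the whole set ${\displaystyle P}$.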

By running the two phases described above, the convex hull of ${\displaystyle n}$  points is computed in ${\displaystyle O(n\log h)}$  time.

Choosing the parameter ${\displaystyle m}$

If an arbitrary value is chosen for ${\displaystyle m}$, it may happen that ${\displaystyle m<h}$. In that case, after ${\displaystyle m}$ steps in the second phase, we interrupt Jarvis's march, as running it to completion would take too much time. At that moment, ${\displaystyle O(n\log m)}$ time will have been spent, and the convex hull will not have been calculated.

The idea is to make multiple passes of the algorithm with increasing values of ${\displaystyle m}$ ; each pass terminates (successfully or unsuccessfully) in ${\displaystyle O(n\log m)}$  time. If ${\displaystyle m}$  increases too slowly between passes, the number of iterations may be large; on the other hand, if it rises too quickly, the first ${\displaystyle m}$  for which the algorithm terminates successfully may be much larger than ${\displaystyle h}$ , and produce a complexity ${\displaystyle O(n\log m)>O(n\log h)}$ .

Squaring strategy

One possible strategy is to square the value of ${\displaystyle m}$ at each iteration, up to a maximum value of ${\displaystyle n}$ (corresponding to a partition into singleton sets).[4] At iteration ${\displaystyle t}$, ${\displaystyle m=\min \left(n,2^{2^{t}}\right)}$ is chosen, starting from ${\displaystyle m=4}$ at ${\displaystyle t=1}$. In that case, ${\displaystyle O(\log \log h)}$ iterations are made, given that the algorithm terminates once we have

${\displaystyle m=2^{2^{t}}\geq h\iff \log \left(2^{2^{t}}\right)\geq \log h\iff 2^{t}\geq \log h\iff \log {2^{t}}\geq \log {\log h}\iff t\geq \log {\log h},}$

with the logarithm taken in base ${\displaystyle 2}$ , and the total running time of the algorithm is

${\displaystyle \sum _{t=1}^{\lceil \log \log h\rceil }O\left(n\log \left(2^{2^{t}}\right)\right)=O(n)\sum _{t=1}^{\lceil \log \log h\rceil }O(2^{t})=O\left(n\cdot 2^{1+\lceil \log \log h\rceil }\right)=O(n\log h).}$
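The schedule of trial values can be checked numerically. The sketch below (the helper name is hypothetical) lists the values of ${\displaystyle m}$ tried for ${\displaystyle n=10^{6}}$ points:

```python
def squaring_schedule(n):
    """Values m = min(n, 2**(2**t)) tried on successive passes, t = 1, 2, ...
    The last pass uses m = n, which always succeeds since then m >= h."""
    schedule, t = [], 1
    while True:
        m = min(n, 2 ** (2 ** t))
        schedule.append(m)
        if m >= n:
            return schedule
        t += 1

print(squaring_schedule(10 ** 6))  # [4, 16, 256, 65536, 1000000]
```

Only five passes are needed for a million points, and the doubly exponential growth keeps the total cost dominated by the last pass.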

In three dimensions

To generalize this construction for the 3-dimensional case, an ${\displaystyle O(n\log n)}$  algorithm to compute the 3-dimensional convex hull should be used instead of Graham scan, and a 3-dimensional version of Jarvis's march needs to be used. The time complexity remains ${\displaystyle O(n\log h)}$ [citation needed].

Pseudocode

In the following pseudocode, text between parentheses and in italics is a comment. To fully understand the following pseudocode, it is recommended that the reader is already familiar with the Graham scan and Jarvis march algorithms for computing the convex hull, ${\displaystyle C}$, of a set of points, ${\displaystyle P}$.

Input: Set ${\displaystyle P}$ with ${\displaystyle n}$ points.
Output: Set ${\displaystyle C}$  with ${\displaystyle h}$  points, the convex hull of ${\displaystyle P}$ .
(Pick a point of ${\displaystyle P}$ which is guaranteed to be in ${\displaystyle C}$: for instance, the point with the lowest y coordinate.)
(This operation takes ${\displaystyle {\mathcal {O}}(n)}$  time: e.g., we can simply iterate through ${\displaystyle P}$ .)
${\displaystyle p_{1}:=PICK\_START(P)}$
(${\displaystyle p_{0}}$ is used in the Jarvis march part of this Chan's algorithm,
in order to compute the second point, ${\displaystyle p_{2}}$, in the convex hull of ${\displaystyle P}$.)
(Note: ${\displaystyle p_{0}}$  is not a point of ${\displaystyle P}$ .)
${\displaystyle p_{0}:=(-\infty ,0)}$
(Note: ${\displaystyle h}$ , the number of points in the final convex hull of ${\displaystyle P}$ , is not known.)
(These are the iterations needed to discover the value of ${\displaystyle m}$ , which is an estimate of ${\displaystyle h}$ .)
(${\displaystyle h\leq m}$  is required for this Chan's algorithm to find the convex hull of ${\displaystyle P}$ .)
(More specifically, we want ${\displaystyle h\leq m\leq h^{2}}$, so as not to perform too many unnecessary iterations
and so that the time complexity of this Chan's algorithm is ${\displaystyle {\mathcal {O}}(n\log h)}$ .)
(As explained above in this article, we use a strategy where at most ${\displaystyle \log \log n}$  iterations are required to find ${\displaystyle m}$ .)
(Note: the final ${\displaystyle m}$ may not be equal to ${\displaystyle h}$, but it is never smaller than ${\displaystyle h}$ and never greater than ${\displaystyle h^{2}}$.)
(Nevertheless, this Chan's algorithm stops once ${\displaystyle h}$  iterations of the outermost loop are performed,
that is, even if ${\displaystyle m\neq h}$ , it doesn't perform ${\displaystyle m}$  iterations of the outermost loop.)
(For more info, see the Jarvis march part of this algorithm below, where ${\displaystyle C}$  is returned if ${\displaystyle p_{i+1}==p_{1}}$ .)
for ${\displaystyle 1\leq t\leq \log \log n}$  do
(Set parameter ${\displaystyle m}$  for the current iteration. We use a "squaring scheme" as described above in this article.
There are other schemes: for example, the "doubling scheme", where ${\displaystyle m=2^{t}}$ , for ${\displaystyle t=1,\dots ,\left\lceil \log h\right\rceil }$ .
If we use the "doubling scheme", though, the resulting time complexity of this Chan's algorithm is ${\displaystyle {\mathcal {O}}(n\log ^{2}h)}$ .)
${\displaystyle m:=2^{2^{t}}}$
(Initialize an empty list (or array) to store the points of the convex hull of ${\displaystyle P}$ , as they are found.)
${\displaystyle C:=()}$
${\displaystyle ADD(C,p_{1})}$
(Split the set of points ${\displaystyle P}$ into ${\displaystyle K=\left\lceil {\frac {n}{m}}\right\rceil }$ subsets of at most ${\displaystyle m}$ elements each; any partition works, for instance consecutive chunks of the input sequence.)
${\displaystyle Q_{1},Q_{2},\dots ,Q_{K}:=SPLIT(P,m)}$
(Compute the convex hull of all ${\displaystyle K}$  subsets of points, ${\displaystyle Q_{1},Q_{2},\dots ,Q_{K}}$ .)
(It takes ${\displaystyle {\mathcal {O}}(Km\log m)={\mathcal {O}}(n\log m)}$  time.)
(If ${\displaystyle m\leq h^{2}}$, then the time complexity is ${\displaystyle {\mathcal {O}}(n\log h^{2})={\mathcal {O}}(n\log h)}$.)
for ${\displaystyle 1\leq k\leq K}$  do
(Compute the convex hull of subset ${\displaystyle k}$ , ${\displaystyle Q_{k}}$ , using Graham scan, which takes ${\displaystyle {\mathcal {O}}(m\log m)}$  time.)
(${\displaystyle C_{k}}$  is the convex hull of the subset of points ${\displaystyle Q_{k}}$ .)
${\displaystyle C_{k}:=GRAHAM\_SCAN(Q_{k})}$
(At this point, the convex hulls ${\displaystyle C_{1},C_{2},\dots ,C_{K}}$  of respectively the subsets of points ${\displaystyle Q_{1},Q_{2},\dots ,Q_{K}}$  have been computed.)
(Now, use a modified version of the Jarvis march algorithm to compute the convex hull of ${\displaystyle P}$ .)
(Jarvis march performs in ${\displaystyle {\mathcal {O}}(nh)}$  time, where ${\displaystyle n}$  is the number of input points and ${\displaystyle h}$  is the number of points in the convex hull.)
(Given that Jarvis march is an output-sensitive algorithm, its running time depends on the size of the convex hull, ${\displaystyle h}$ .)
(In practice, it means that Jarvis march performs ${\displaystyle h}$  iterations of its outermost loop.
At each of these iterations, it performs at most ${\displaystyle n}$  iterations of its innermost loop.)
(We want ${\displaystyle h\leq m\leq h^{2}}$ , so we do not want to perform more than ${\displaystyle m}$  iterations in the following outer loop.)
(If our current ${\displaystyle m}$ is smaller than ${\displaystyle h}$, i.e. ${\displaystyle m<h}$, the convex hull of ${\displaystyle P}$ cannot be found.)
(In this modified version of Jarvis march, we perform an operation inside the innermost loop which takes ${\displaystyle {\mathcal {O}}(\log m)}$  time.
Hence, the total time complexity of this modified version is
${\displaystyle {\mathcal {O}}(mK\log m)={\mathcal {O}}(m\left\lceil {\frac {n}{m}}\right\rceil \log m)={\mathcal {O}}(n\log m)={\mathcal {O}}(n\log 2^{2^{t}})={\mathcal {O}}(n2^{t}).}$
If ${\displaystyle m\leq h^{2}}$ , then the time complexity is ${\displaystyle {\mathcal {O}}(n\log h^{2})={\mathcal {O}}(n\log h)}$ .)
for ${\displaystyle 1\leq i\leq m}$  do
(Note: here, a point in the convex hull of ${\displaystyle P}$  is already known, that is ${\displaystyle p_{1}}$ .)
(In this inner for loop, ${\displaystyle K}$  possible next points to be on the convex hull of ${\displaystyle P}$ , ${\displaystyle q_{i,1},q_{i,2},\dots ,q_{i,K}}$ , are computed.)
(Each of these ${\displaystyle K}$  possible next points is from a different ${\displaystyle C_{k}}$ :
that is, ${\displaystyle q_{i,k}}$  is a possible next point on the convex hull of ${\displaystyle P}$  which is part of the convex hull of ${\displaystyle C_{k}}$ .)
(Note: ${\displaystyle q_{i,1},q_{i,2},\dots ,q_{i,K}}$  depend on ${\displaystyle i}$ : that is, for each iteration ${\displaystyle i}$ , we have ${\displaystyle K}$  possible next points to be on the convex hull of ${\displaystyle P}$ .)
(Note: at each iteration ${\displaystyle i}$ , only one of the points among ${\displaystyle q_{i,1},q_{i,2},\dots ,q_{i,K}}$  is added to the convex hull of ${\displaystyle P}$ .)
for ${\displaystyle 1\leq k\leq K}$  do
(${\displaystyle JARVIS\_BINARY\_SEARCH}$ finds the point ${\displaystyle d\in C_{k}}$ such that the angle ${\displaystyle \measuredangle p_{i-1}p_{i}d}$ is maximized,
where ${\displaystyle \measuredangle p_{i-1}p_{i}d}$ is the angle between the vectors ${\displaystyle {\overrightarrow {p_{i}p_{i-1}}}}$ and ${\displaystyle {\overrightarrow {p_{i}d}}}$. Maximizing this angle keeps every point of ${\displaystyle C_{k}}$ to the right of the line ${\displaystyle p_{i}d}$, as Jarvis march requires. Such ${\displaystyle d}$ is stored in ${\displaystyle q_{i,k}}$.)
(Angles do not need to be calculated directly: the sign of the cross product of the two vectors, i.e. the orientation test, suffices to compare candidates.)
(Since the vertices of ${\displaystyle C_{k}}$ are stored in convex position, ${\displaystyle JARVIS\_BINARY\_SEARCH}$ can be performed in ${\displaystyle {\mathcal {O}}(\log m)}$ time by binary search over them.)
(Note: at the iteration ${\displaystyle i=1}$ , ${\displaystyle p_{i-1}=p_{0}=(-\infty ,0)}$  and ${\displaystyle p_{1}}$  is known and is a point in the convex hull of ${\displaystyle P}$ :
in this case, it is the point of ${\displaystyle P}$  with the lowest y coordinate.)
${\displaystyle q_{i,k}:=JARVIS\_BINARY\_SEARCH(p_{i-1},p_{i},C_{k})}$
(Choose the point ${\displaystyle z\in \{q_{i,1},q_{i,2},\dots ,q_{i,K}\}}$ which maximizes the angle ${\displaystyle \measuredangle p_{i-1}p_{i}z}$ (the same criterion as above) to be the next point on the convex hull of ${\displaystyle P}$.)
${\displaystyle p_{i+1}:=JARVIS\_NEXT\_CH\_POINT(p_{i-1},p_{i},(q_{i,1},q_{i,2},\dots ,q_{i,K}))}$
(Jarvis march terminates when the next selected point on the convex hull, ${\displaystyle p_{i+1}}$, is the initial point, ${\displaystyle p_{1}}$.)
if ${\displaystyle p_{i+1}==p_{1}}$
(Return the convex hull of ${\displaystyle P}$  which contains ${\displaystyle i=h}$  points.)
(Note: of course, no need to return ${\displaystyle p_{i+1}}$  which is equal to ${\displaystyle p_{1}}$ .)
return ${\displaystyle C:=(p_{1},p_{2},\dots ,p_{i})}$
else
${\displaystyle ADD(C,p_{i+1})}$
(If after ${\displaystyle m}$ iterations no point ${\displaystyle p_{i+1}}$ has been found such that ${\displaystyle p_{i+1}==p_{1}}$, then ${\displaystyle m<h}$.)
(We need to start over with a higher value for ${\displaystyle m}$ .)
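The pseudocode above can be turned into a compact runnable sketch in Python. For clarity, this version replaces ${\displaystyle JARVIS\_BINARY\_SEARCH}$ with a linear scan over each mini-hull (so it keeps the structure of Chan's algorithm but not the ${\displaystyle O(\log m)}$ inner step), and uses Andrew's monotone chain, which has the same ${\displaystyle O(p\log p)}$ bound as Graham scan. All function names are illustrative, not from Chan's paper:

```python
def orientation(a, b, c):
    # > 0: c left of the directed line a -> b; < 0: right; 0: collinear
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def mini_hull(points):
    """Convex hull of one group (Andrew's monotone chain, O(p log p))."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and orientation(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def next_point(p, candidates):
    """Jarvis step: candidate q with all other candidates right of p -> q."""
    best = None
    for q in candidates:
        if q == p:
            continue
        if best is None:
            best = q
            continue
        o = orientation(p, best, q)
        if o > 0 or (o == 0 and dist2(p, q) > dist2(p, best)):
            best = q
    return best

def one_pass(P, m):
    """One pass with parameter m; returns the hull, or None if m < h."""
    # Phase 1: arbitrary split into groups of at most m points, hull of each.
    hulls = [mini_hull(P[i:i + m]) for i in range(0, len(P), m)]
    # Phase 2: Jarvis march over one candidate per mini-hull.
    start = min(P, key=lambda p: (p[1], p[0]))  # lowest point is on the hull
    hull, p = [start], start
    for _ in range(m):
        cands = [c for c in (next_point(p, h) for h in hulls) if c is not None]
        p = next_point(p, cands)
        if p == start:
            return hull          # wrapped around: the hull is complete
        hull.append(p)
    return None                  # m iterations were not enough: m < h

def chan(P):
    """Convex hull of P, trying m = 4, 16, 256, ... (squaring strategy)."""
    t = 1
    while True:
        hull = one_pass(P, min(len(P), 2 ** (2 ** t)))
        if hull is not None:
            return hull
        t += 1
```

A short usage example: `chan([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])` returns the four corners of the square and discards the interior point `(1, 1)`.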

Implementation

Chan's paper contains several suggestions that may improve the practical performance of the algorithm, for example:

• When computing the convex hulls of the subsets, eliminate the points that are not in the convex hull from consideration in subsequent executions.
• The convex hulls of larger point sets can be obtained by merging previously calculated convex hulls, instead of recomputing from scratch.
• With the above idea, the dominant cost of the algorithm lies in the pre-processing, i.e., the computation of the convex hulls of the groups. To reduce this cost, we may consider reusing hulls computed in the previous iteration and merging them as the group size increases.

Extensions

Chan's paper contains some other problems whose known algorithms can be made optimal output sensitive using his technique, for example:

• Computing the lower envelope ${\displaystyle L(S)}$ of a set ${\displaystyle S}$ of ${\displaystyle n}$ line segments, defined as the boundary of the region of the plane lying below all of the segments.
• Hershberger[5] gave an ${\displaystyle O(n\log n)}$ algorithm which can be sped up to ${\displaystyle O(n\log h)}$, where ${\displaystyle h}$ is the number of edges in the envelope.
• Constructing output sensitive algorithms for higher dimensional convex hulls. With the use of grouping points and using efficient data structures, ${\displaystyle O(n\log h)}$  complexity can be achieved provided h is of polynomial order in ${\displaystyle n}$ .