2010/06/23

Binary search, differently ordered

Shin shared a variation of the binary search problem with me yesterday, which I later discovered to be a variation of an exercise from van Gasteren and Feijen's note The Binary Search Revisited, cited in Shin's blog post A Survey of Binary Search. The problem: if an array a0, a1, ..., an-1 is obtained by rotating an ordered array ai+1, ai+2, ..., an-1, a0, a1, ..., ai for some i (so a "dip" is present in the array, namely the drop from ai to ai+1), can we still perform binary search on it? Here I try to record the path of thought I took.

As described by Shin, the general binary search algorithm assumes that an invariant Φ(i, j) holds initially for the entire array, i.e., Φ(0, n-1) is true. The loop then pulls i and j closer and closer together, until j = i + 1. In the case of standard binary search, Φ(i, j) := ai ≤ k < aj, where k is the key value being searched for. Note that Φ(0, n-1) must hold initially, but the key may well not be in the array. To fix this, we add -∞ and ∞ to the two ends of the array. After the loop, ai equals k if and only if k is present in the array.
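As a concrete reference point, here is a minimal Haskell sketch of this standard binary search. The function name search is mine, and instead of actually storing the -∞ and ∞ sentinels I represent them by the virtual indices -1 and n, which are never dereferenced; otherwise it follows the invariant above.

```haskell
import Data.Array (Array, (!), bounds, listArray)

-- Standard binary search maintaining the invariant  a!i <= k < a!j,
-- where i = -1 and j = n are virtual indices standing for the -inf
-- and +inf sentinels (so the sentinels are never actually stored).
search :: Ord a => Array Int a -> a -> Bool
search a k = go (-1) n
  where
    n = snd (bounds a) + 1                  -- the array is indexed from 0 to n-1
    go i j
      | i + 1 == j = i >= 0 && a ! i == k   -- window closed: k present iff a!i == k
      | a ! m <= k = go m j                 -- a!m <= k, so i := m keeps the invariant
      | otherwise  = go i m                 -- k < a!m, so j := m keeps the invariant
      where m = (i + j) `div` 2

-- For example, search (listArray (0, 5) [1, 3, 4, 4, 7, 9]) 4 == True.
```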

Since it was hinted that binary search might be applicable, I looked at the original binary search and tried to find similarities, hoping to discover a suitable way of generalising it. In the case a0 < an-1, we know that k, if present, is bounded by a0 and an-1, i.e., k is in the interval [a0, an-1]. In the other case a0 ≥ an-1, k can only be at least a0 or at most an-1, i.e., k is in [a0, ∞) ∪ (-∞, an-1]. Also observe that performing the original binary search naively is not right. For example, when the median value am is less than k, it does not necessarily mean that we should assign m to i: consider the case where k is located to the left of the dip and am to the right. The comparisons, it seems, have to be a lot more complicated.

And then it occurred to me that it would be great if [a0, ∞) ∪ (-∞, an-1] could be viewed in the same way as an ordinary interval, so that we could still intuitively see the interval shrink as the loop progresses, as in the standard binary search. This requires gluing the two ends of the real line to an added point ∞, forming a circle. The interval in question is then contiguous on the circle, containing the added point ∞. I was introduced to this concept, the real projective line, in the undergraduate algebra course I took in my fourth year; it amounts to the one-point compactification of the real line. The odd comparison rules all suddenly make sense under this view.

While numbers on the real projective line do not in general have a natural ordering, in our case we can say informally that the magnitude of a number x is the distance we travel counterclockwise on the circle from a0 to x, and that a number is smaller if its magnitude is smaller. This essentially cuts the real projective line at a0 and forms a closed ray roughly like [a0, a0-), with a0- serving as the new infinity. Walking from a0 towards infinity on [a0, a0-) is equivalent to walking counterclockwise on the real projective line from a0 and never reaching it again. We can say that the value domain is also rotated: rotating the array indices disrupts the orderedness of the array, but we can rotate the value domain correspondingly to make the array ordered again. (I wish I could make this statement more topological! I believe it's something related to the torus.)

Comparison under this ordering is simple: if two numbers are on the same side of the dip (which can be determined by comparing them with a0), then perform the usual comparison; otherwise, whichever is on the right side of the dip is the larger. The binary search algorithm doesn't have to be altered except for the way comparisons are done. We still have to insert a guard a0- as the rightmost element of the array, but there is only one guard instead of two. Notice that this works for binary search on an ordinarily ordered array as well: values smaller than a0 are greater than all elements in the array under the new ordering, so searching for a value too small simply moves i towards n, and -∞ is not needed to guard the left end of the array.
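To make this concrete, here is a minimal Haskell sketch of the comparison and the resulting one-pass search. The names rotCompare and searchRotated are mine, the single guard a0- is again represented by the virtual index n rather than stored, and the array is assumed to contain no element after the dip equal to a0 (the problematic a0 = an-1 case is discussed below).

```haskell
import Data.Array (Array, (!), bounds, listArray)

-- Comparison in the rotated value domain: the pivot a0 is the least
-- element, values on its side of the dip (those >= a0) come first,
-- and values past the dip (those < a0) come after them.
rotCompare :: Ord a => a -> a -> a -> Ordering
rotCompare pivot x y =
  case (x >= pivot, y >= pivot) of
    (True , False) -> LT              -- x before the dip, y after it
    (False, True ) -> GT              -- x after the dip, y before it
    _              -> compare x y     -- same side of the dip: usual comparison

-- The same binary search as before, with comparisons replaced by
-- rotCompare and the single guard a0- represented by the virtual
-- index n.  No -infinity guard is needed: a0 itself is the least
-- element under the rotated ordering, so i can start at 0.
searchRotated :: Ord a => Array Int a -> a -> Bool
searchRotated a k
  | n == 0    = False
  | otherwise = go 0 n
  where
    n     = snd (bounds a) + 1
    pivot = a ! 0
    go i j
      | i + 1 == j                       = a ! i == k
      | rotCompare pivot (a ! m) k /= GT = go m j    -- a!m is at most k: i := m
      | otherwise                        = go i m    -- a!m exceeds k:   j := m
      where m = (i + j) `div` 2

-- For example, searchRotated (listArray (0, 5) [4, 5, 6, 1, 2, 3]) 2 == True,
-- and searchRotated (listArray (0, 5) [4, 5, 6, 1, 2, 3]) 7 == False.
```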

This ordering is only my final version, though, which means there were some other versions. For example, I had used an ordering that depended on both a0 and an-1 and treated all values strictly between an-1 and a0 as -∞. This resulted in a much more complicated case analysis in the definition of the ordering. I had even made the mistake of regarding the sign of a number as essential to the comparison, not noticing that topologically 0 plays no special role on the real projective line. This mistake made me temporarily think that modelling the situation with the real projective line was flawed. But later I discovered a cleaner and correct way to utilise the real projective line, which is, well, the one described above.

However, there is one last serious flaw. If an ordered array is rotated such that a0 = an-1, it should be considered legal input, but it does not count as ascending under the new ordering! I spotted this seemingly unfixable flaw at midnight, and it deprived me of sleep for some two hours. And indeed it is not fixable, but it is not a problem with the ordering! Say the two ends of the rotated array have value v. If the median value is also v, then we have no way to decide whether the key value is in the left segment or the right one: the key can be in either of them.

This observation can even be developed into a full adversary argument, showing that no algorithm can correctly solve the search problem on these arrays in sub-linear time, by arguing that any correct algorithm must examine the middle n-2 values of the array and therefore takes Ω(n) time. Given an algorithm A and a key k, consider the "flat sequences" consisting of n copies of a number not equal to k. If A does not have to look at all n-2 values in the middle, then (for all but finitely many n) A does not look at the value at some middle index αn, which means changing the value at αn does not affect the output of A: A would still say the key is not found, as it does for the flat sequences. Now change the value at αn to k for every flat sequence of length n and feed these inputs to A; each modified sequence is still a rotation of an ordered array and hence legal input, yet A's output on it must be incorrect. Thus binary search, which takes O(log n) time, cannot correctly determine whether a value is present in this kind of array, since any correct method takes Ω(n) time. It is interesting to see that a naive-looking equality can dramatically increase a seemingly simple problem's complexity.
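As a tiny illustration of the ambiguity (the names below are made up), both of the following arrays are rotations of the ordered array [1, 1, 1, 1, 2]; they hold 1 at both ends and at the median, yet the key 2 sits in the right half of one and the left half of the other, so examining only those three positions cannot tell which half to discard.

```haskell
-- Both are rotations of [1, 1, 1, 1, 2]; indices 0, 2, and 4 all hold 1,
-- but the key 2 hides in different halves.
ambiguousRight, ambiguousLeft :: [Int]
ambiguousRight = [1, 1, 1, 2, 1]
ambiguousLeft  = [1, 2, 1, 1, 1]
```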

--
It's been quite a while since I last wrote something this long, especially in English...


A sequel to this post has been posted; it discusses whether implementing this new ordering gives a better algorithm than the usual two-pass algorithm in terms of the number of comparisons.


scm 6/23/2010 10:52 am said:

Oh, I should have mentioned that this problem was originally posed to me by Yu-Han Lyu. He said that it's a problem often given in interviews. A typical response is to first perform a binary search to find the dip, then somehow, perhaps virtually, "rotate" back the array.

We would like to have an algorithm that finds the key in one pass. I'm not sure we can do so using fewer comparisons than the two-pass algorithm, though.

You didn't spell out (in an equation) what the ordering is in this article, did you?

 
Josh Ko 6/23/2010 2:09 pm said:

> You didn't spell out (in an equation) what the ordering is in this article, did you?

No, I did not. Since I didn't actually do formal calculations, I thought it was not necessary for this post to be formal.

On the other hand it seems interesting to go into the details to see what the resulting algorithm looks like. (And make sure it is correct!)

 
