<h1>Gem Island</h1>
<p>Nicholas Pilkington, 2020-09-04</p>
<p>Are you fresh off the back of the long weekend and looking to sink your teeth into some combinatorics and induction? Great, then you’ve come to the right place, so pull your discrete mathematician stocking over your head and let’s get started. </p>
<p>Welcome to Gem Island! Home to <code class="prettyprint">N</code> happy islanders, each of whom has exactly one gem. Each night something peculiar happens: one of the gems on the island is chosen uniformly at random and it splits, creating a new gem for its owner. This process repeats each night for <code class="prettyprint">D</code> nights. Here is an example of all 6 situations that could arise with <code class="prettyprint">N=2</code> islanders after <code class="prettyprint">D=2</code> nights. We’ve coloured the first islander’s gems red and the second’s green, and the vertical bar separates the islanders to make things easier to read. </p>
<p><a href="https://svbtleusercontent.com/kDZiWq8aXQuhxS96SLYYix0xspap.png"><img src="https://svbtleusercontent.com/kDZiWq8aXQuhxS96SLYYix0xspap_small.png" alt="Screen Shot 2020-09-04 at 3.41.42 PM.png"></a></p>
<p>We are interested in the expected value of the number of gems the <code class="prettyprint">R</code> richest islanders have. In the case above the <code class="prettyprint">R=1</code> richest islander has on average <code class="prettyprint">(3 + 3 + 2 + 2 + 3 + 3) / 6</code> gems, whereas the <code class="prettyprint">R=2</code> richest islanders would have <code class="prettyprint">( (3+1) + (3+1) + (2+2) + (2+2) + (1+3) + (1+3) ) / 6 = N + D</code>. In fact if <code class="prettyprint">R >= N</code> the answer is always going to be <code class="prettyprint">N + D</code>, because at the end of the process the total number of gems on the island is always the same. So we have our problem, and it’s completely described by just three numbers: <code class="prettyprint">N</code>, <code class="prettyprint">D</code> and <code class="prettyprint">R</code>.</p>
<p>Before looking for a solution to the actual problem let’s simulate out a few more examples. Here’s what things would look like with <code class="prettyprint">N=3</code> islanders after <code class="prettyprint">D=2</code> nights.</p>
<p><a href="https://svbtleusercontent.com/pppbii7vD4HGvHFzHusBUU0xspap.png"><img src="https://svbtleusercontent.com/pppbii7vD4HGvHFzHusBUU0xspap_small.png" alt="Screen Shot 2020-09-04 at 3.41.25 PM.png"></a></p>
<p>Here’s the situation for <code class="prettyprint">N=3</code> and <code class="prettyprint">D=3</code>. Things are exploding a bit and there are 60 final distributions of gems in the leaves of the tree. </p>
<p><a href="https://svbtleusercontent.com/xufbGqVLjw3tiyYy6h7xCT0xspap.png"><img src="https://svbtleusercontent.com/xufbGqVLjw3tiyYy6h7xCT0xspap_small.png" alt="Screen Shot 2020-09-04 at 3.41.04 PM.png"></a></p>
<p>What about <code class="prettyprint">N=4</code> islanders after <code class="prettyprint">D=3</code> nights? Just showing the final distributions, there are 120:</p>
<p><a href="https://svbtleusercontent.com/iZHDhgBbP1WGn5YXYAciLT0xspap.png"><img src="https://svbtleusercontent.com/iZHDhgBbP1WGn5YXYAciLT0xspap_small.png" alt="Screen Shot 2020-09-08 at 9.18.21 AM.png"></a></p>
<p>You get the idea. So before attempting to solve this problem it’s interesting to consider 3 related questions. </p>
<ul>
<li>How many distributions of gems are possible? </li>
<li>How many of them are unique? </li>
<li>How many of them are unique permutations (not permutations of others)? </li>
</ul>
<p>Using the example of <code class="prettyprint">N=2</code> and <code class="prettyprint">D=2</code>:</p>
<p><a href="https://svbtleusercontent.com/r7cWuqAKPiYx5xLjMmJ2iP0xspap.png"><img src="https://svbtleusercontent.com/r7cWuqAKPiYx5xLjMmJ2iP0xspap_small.png" alt="Screen Shot 2020-09-04 at 3.41.42 PM.png"></a></p>
<p>There are 6 total distributions:<br>
<code class="prettyprint">(3,1)</code>, <code class="prettyprint">(3,1)</code>, <code class="prettyprint">(2,2)</code>, <code class="prettyprint">(2,2)</code>, <code class="prettyprint">(1,3)</code>, <code class="prettyprint">(1,3)</code> of which 3 are unique: <code class="prettyprint">(1,3)</code>, <code class="prettyprint">(3,1)</code>, <code class="prettyprint">(2,2)</code> and 2 are unique permutations: <code class="prettyprint">(3,1)</code> and <code class="prettyprint">(2,2)</code> because <code class="prettyprint">(3,1)</code> is a permutation of <code class="prettyprint">(1,3)</code>. Now we could stop there and it would be fun to come up with answers to those three questions. </p>
<p>Let’s answer the first question, because the diagram above gives a good illustration. How many distributions of gems are there for a given <code class="prettyprint">N</code> and <code class="prettyprint">D</code>? Well, there are <code class="prettyprint">N</code> possible transitions from the first state, then <code class="prettyprint">N+1</code>, then <code class="prettyprint">N+2</code> and so on, <code class="prettyprint">D</code> times. So the number of final distributions of gems is <code class="prettyprint">N * (N+1) * ... * (N+D-1)</code>. Numbers of this form are called <a href="https://en.wikipedia.org/wiki/Falling_and_rising_factorials">rising factorials</a>: <code class="prettyprint">RF(N, D) = N * (N+1) * ... * (N+D-1)</code>. </p>
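<p>As a quick check, the rising factorial can also be written with ordinary factorials, and it reproduces the leaf counts from the diagrams above: 6 for <code class="prettyprint">N=2, D=2</code>, 60 for <code class="prettyprint">N=3, D=3</code> and 120 for <code class="prettyprint">N=4, D=3</code>. A minimal sketch:</p>
<pre><code class="prettyprint lang-python">from math import factorial

def rising_factorial(n, d):
    # n * (n+1) * ... * (n+d-1) == (n+d-1)! / (n-1)!
    return factorial(n + d - 1) // factorial(n - 1)

print(rising_factorial(2, 2))  # 6
print(rising_factorial(3, 3))  # 60
print(rising_factorial(4, 3))  # 120
</code></pre>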
<h2 id="solution-1-on_2">Solution 1 - O(N!) <a class="head_anchor" href="#solution-1-on_2">#</a></h2>
<p>This actually gives us our first solution to the problem. If we could sum the number of gems the <code class="prettyprint">R</code> richest islanders have across each of the <code class="prettyprint">RF(N, D)</code> distributions, we could divide that sum by <code class="prettyprint">RF(N, D)</code> and that would give us the answer. Coding this up requires completely enumerating all the final distributions and then summing up the <code class="prettyprint">R</code> highest values in each one. Slow and steady can win the race:</p>
<pre><code class="prettyprint lang-python">def rf(N, D):
    """
    rising factorial - returns N * (N+1) * ... * (N+D-1)
    """
    ret = 1
    for x in range(D):
        ret *= (x + N)
    return ret


def recurse(N, D, R, gems):
    if D == 0:
        counts = [0] * N
        for g in gems:
            counts[g - 1] += 1
        # return the sum of gems the R richest islanders have
        return sum(sorted(counts)[::-1][:R])
    ret = 0
    for g in list(gems):
        gems.append(g)
        ret += recurse(N, D - 1, R, gems)
        gems.pop(-1)
    return ret


if __name__ == '__main__':
    N, D, R = 2, 2, 1
    # gems stores which islander owns which gem.
    # gems[a] = b means the a'th gem is owned by islander b.
    # Start with each of the N islanders having one gem:
    # gems = [1, 2, ..., N]
    gems = [x for x in range(1, N + 1)]
    sumr = recurse(N, D, R, gems)
    expected = sumr / rf(N, D)
    print(expected)
</code></pre>
<p>While correct, this solution is going to be offensively slow, on the order of <code class="prettyprint">O(N!)</code>, so let’s try to find something faster. Let’s look back at all the distributions possible for <code class="prettyprint">N=3</code> and <code class="prettyprint">D=3</code> again:</p>
<p><a href="https://svbtleusercontent.com/69y2tfzAaqTbLgHw8DChBF0xspap.png"><img src="https://svbtleusercontent.com/69y2tfzAaqTbLgHw8DChBF0xspap_small.png" alt="Screen Shot 2020-09-05 at 1.13.59 PM.png"></a></p>
<p>Notice that there are 60 different distributions but lots of them are duplicates; there are only 10 unique distributions: <code class="prettyprint">(4,1,1)</code>, <code class="prettyprint">(1,4,1)</code>, <code class="prettyprint">(1,1,4)</code>, <code class="prettyprint">(2,2,2)</code>, <code class="prettyprint">(1,2,3)</code>, <code class="prettyprint">(1,3,2)</code>, <code class="prettyprint">(2,1,3)</code>, <code class="prettyprint">(2,3,1)</code>, <code class="prettyprint">(3,1,2)</code> and <code class="prettyprint">(3,2,1)</code>. How do we compute that <code class="prettyprint">10</code> directly from <code class="prettyprint">N</code> and <code class="prettyprint">D</code>? Firstly, the sum of the elements is always <code class="prettyprint">D+N</code>. Also, and surprisingly, each of these unique distributions is equally likely, and in the case of <code class="prettyprint">N=D=3</code> each appears 6 times. This is not super obvious, so we should probably try to prove it, as it would mean we only need to consider far fewer distributions. </p>
<p><a href="https://svbtleusercontent.com/6KAK9wnXXvzY8z675M9i2W0xspap.png"><img src="https://svbtleusercontent.com/6KAK9wnXXvzY8z675M9i2W0xspap_small.png" alt="Screen Shot 2020-09-05 at 1.13.45 PM.png"></a></p>
<h2 id="aside-proof-by-induction_2">Aside: Proof by Induction <a class="head_anchor" href="#aside-proof-by-induction_2">#</a></h2>
<p>Proving that each of the unique distributions is equally likely can be done by induction on <code class="prettyprint">D</code>. For <code class="prettyprint">D=0</code> there is only the initial distribution where each islander has one gem: <code class="prettyprint">(1, 1, ... , 1)</code>. Now consider a distribution <code class="prettyprint">(g1, g2, ... ,gN)</code> after <code class="prettyprint">D</code> nights. There are up to <code class="prettyprint">N</code> ways to reach it, each from a distribution with one less gem. The probability of moving from a predecessor, namely <code class="prettyprint">(g1, g2, ..., gK-1, ..., gN)</code>, to <code class="prettyprint">(g1, g2, ... ,gN)</code> is <code class="prettyprint">(gK - 1) / (N + D - 1)</code>, as one of islander <code class="prettyprint">K</code>’s <code class="prettyprint">gK - 1</code> gems has to be the one that splits, out of <code class="prettyprint">N + D - 1</code> gems in total before the final night. By the induction hypothesis each predecessor has the same probability, some fixed <code class="prettyprint">P</code>, so the total probability is <code class="prettyprint">P</code> times the sum of <code class="prettyprint">(gK - 1) / (N + D - 1)</code> over all <code class="prettyprint">K</code>, which is <code class="prettyprint">(P * D) / (N + D - 1)</code> because the <code class="prettyprint">gK</code> sum to <code class="prettyprint">N + D</code>. This depends only on <code class="prettyprint">N</code> and <code class="prettyprint">D</code>, not on the particular distribution, completing the induction.</p>
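<p>The claim can also be checked by brute force: enumerate every split sequence, tally the final distributions, and confirm that each unique distribution occurs equally often. A quick sketch (this enumeration helper is my own, not part of the solutions below):</p>
<pre><code class="prettyprint lang-python">from collections import Counter

def enumerate_distributions(N, D):
    """Tally how often each final gem distribution occurs."""
    counts = Counter()

    def go(gems, nights_left):
        if nights_left == 0:
            counts[tuple(gems.count(i) for i in range(1, N + 1))] += 1
            return
        for g in list(gems):  # each gem is equally likely to split
            go(gems + [g], nights_left - 1)

    go(list(range(1, N + 1)), D)
    return counts

counts = enumerate_distributions(3, 3)
print(len(counts))           # 10 unique distributions
print(set(counts.values()))  # each occurs equally often: {6}
</code></pre>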
<h2 id="solution-2-onsup4sup_2">Solution 2 - O(N<sup>4</sup>) <a class="head_anchor" href="#solution-2-onsup4sup_2">#</a></h2>
<p>So how many <em>unique</em> distributions are there? Fortunately there is a closed form for this using combinatorics. Think of creating a distribution by having a bag full of labels, with many labels bearing each islander’s name. Then draw <code class="prettyprint">D</code> labels from the bag; each draw corresponds to that islander getting a gem. The key here is that repetition is allowed. The number of ways of choosing <code class="prettyprint">D</code> items from a set of <code class="prettyprint">N</code> without repetition is <code class="prettyprint">choose(N, D)</code>, while allowing repetition is counted by the <a href="https://en.wikipedia.org/wiki/Stars_and_bars_(combinatorics)">multiset coefficient</a> <code class="prettyprint">multichoose(N, D) = choose(N + D - 1, D)</code>. So in the above example with <code class="prettyprint">N=3</code> and <code class="prettyprint">D=3</code>: <code class="prettyprint">multichoose(3, 3) = choose(3+3-1, 3) = 10</code>. Now, since all these unique distributions are equally probable, we can calculate the sum of the gems the <code class="prettyprint">R</code> richest islanders have in each and divide by <code class="prettyprint">choose(N+D-1, D)</code>. The values of <code class="prettyprint">choose(N, K)</code> can be precomputed using the <a href="https://en.wikipedia.org/wiki/Binomial_coefficient">recurrence relation</a> <code class="prettyprint">choose(N, K) = choose(N-1, K) + choose(N-1, K-1)</code>:</p>
<pre><code class="prettyprint lang-python">import numpy as np

# Precompute the values of n choose k
MAX = 50
nCk = np.zeros((MAX, MAX), dtype='int64')
nCk[:, 0] = 1
for n in range(1, MAX):
    for k in range(1, n + 1):
        nCk[n][k] = nCk[n-1][k] + nCk[n-1][k-1]


def choose(n, k):
    return nCk[n][k]


def multichoose(n, k):
    return choose(n + k - 1, k)
</code></pre>
<p>Cool, so we’ve answered the second of our <em>related</em> questions above. Now let’s try another solution. Instead of enumerating all the distributions, and knowing that each of the unique ones is equally likely, let’s count the number of ways of allocating <code class="prettyprint">D</code> gems to the islanders and also the total number of gems the <code class="prettyprint">R</code> richest islanders have. Imagine trying to do this in a structured way where you can easily describe what has already been allocated, without having to store all the previously allocated gems as we did in the previous solution with the <code class="prettyprint">gems</code> list. The reason to do this is that if we can describe the state using fewer variables, and additionally relate the state to other states, we can typically compute the value efficiently using <a href="https://en.wikipedia.org/wiki/Dynamic_programming">dynamic programming</a>. </p>
<p>In our previous solution our function took parameters <code class="prettyprint">N, D, R</code> and also a <code class="prettyprint">gems</code> list, which could have contained <code class="prettyprint">RF(N, D)</code> different values. That list is a problem. Imagine instead that we are counting the number of unique distributions of gems the islanders could end up with. Let’s imagine we are allocating gems in descending order, so if we are allocating 3 gems to some set of islanders we don’t need to worry about whether any islander considered after that will have more than 3 gems. We will call the number of gems we are currently allocating <code class="prettyprint">i</code>. Let’s also remember how many gems we have left to allocate, knowing that we need to allocate <code class="prettyprint">N+D</code> gems in total. Let’s call that <code class="prettyprint">j</code>. And finally let’s remember the number of islanders who still need to have gems allocated to them, knowing that each islander needs at least one gem, at most <code class="prettyprint">D+1</code> gems, and the total should equal <code class="prettyprint">N+D</code>. Let’s call that <code class="prettyprint">k</code>. So now we have some partial state describing the number of ways of allocating <code class="prettyprint">i</code> or fewer gems to <code class="prettyprint">k</code> islanders with <code class="prettyprint">j</code> gems left to allocate as <code class="prettyprint">F[i][j][k]</code>, and we are seeking <code class="prettyprint">F[D+1][D+N][N]</code>. This may not seem helpful at the moment, but the important thing to remember is that we are describing something similar to the <code class="prettyprint">recurse</code> function in the previous solution with a drastically reduced number of states: <code class="prettyprint">(D+1)*(D+N)*N</code> is far smaller than <code class="prettyprint">N*D*RF(N, D)</code>. </p>
<p>But we need a way to compute <code class="prettyprint">F[D+1][D+N][N]</code> without visiting states more than once. The key to doing this is describing <code class="prettyprint">F[i][j][k]</code> in terms of other states. If we are at <code class="prettyprint">F[i][j][k]</code> then there is some subset (of size, say, <code class="prettyprint">s</code>) of the <code class="prettyprint">k</code> remaining islanders that are each going to get <code class="prettyprint">i</code> gems. How many such subsets are there? <code class="prettyprint">choose(k, s)</code>. Then we are done allocating <code class="prettyprint">i</code> gems and transition to state <code class="prettyprint">F[i-1][j - i*s][k - s]</code>. So our recurrence is <code class="prettyprint">F[i][j][k] = choose(k, s) * F[i-1][j - i*s][k-s]</code> summed over all valid <code class="prettyprint">s</code>. We can keep another state <code class="prettyprint">G[i][j][k]</code> to collect the sum (instead of the count) of the gems the <code class="prettyprint">R</code> richest islanders have and compute it in a very similar way. This all leads to a nice compact solution which runs in <code class="prettyprint">O(N^4)</code>, much faster than our previous one but still very slow. And while it’s a bit mind-bending to look at the code directly and understand what is going on, knowing what <code class="prettyprint">F[i][j][k]</code> and <code class="prettyprint">G[i][j][k]</code> store and their recurrence relations is all that is needed. The rest is just bounds checking.</p>
<pre><code class="prettyprint lang-python">def dynamic_programming(N, D, R):
    F = np.zeros((D+1+1, D+N+1, N+1), dtype='double')
    G = np.zeros((D+1+1, D+N+1, N+1), dtype='double')
    F[0][0][0] = 1
    for i in range(1, D + 1 + 1):
        for j in range(0, D+N+1):
            for k in range(0, N+1):
                for s in range(0, min(k + 1, 1 + j // i)):
                    kCs = choose(k, s)
                    F[i][j][k] += kCs * F[i-1][j-s*i][k-s]
                    # compute the contribution to the sum of gems the
                    # R richest islanders have as gems are allocated
                    # in descending order
                    if R - (N-k) > 0:
                        factor = (i-1) * min(R-(N-k), s)
                    else:
                        factor = 0
                    G[i][j][k] += kCs * G[i-1][j-i*s][k-s]
                    G[i][j][k] += kCs * factor * F[i-1][j-s*i][k-s]
    f = F[D+1][D+N][N]
    g = G[D+1][D+N][N]
    return g / f + R


if __name__ == '__main__':
    N, D, R = 10, 10, 2
    expected = dynamic_programming(N, D, R)
    print(expected)
</code></pre>
<p>Note that the final expected value is always at least <code class="prettyprint">R</code> because each islander starts with one gem. So we can add that contribution at the end and assume islanders start with zero gems each, whereas in fact they start with one.</p>
<h2 id="solution-3-onsup3sup_2">Solution 3 - O(N<sup>3</sup>) <a class="head_anchor" href="#solution-3-onsup3sup_2">#</a></h2>
<p>We’re making good progress here. We’ve answered two of our related questions and got a solution that runs in <code class="prettyprint">O(N^4)</code>. So let’s look at the third related question: how many unique distributions are there, excluding permutations? We can compute this using <a href="https://en.wikipedia.org/wiki/Multinomial_theorem">multinomial coefficients</a>. This is like asking how many unique permutations there are of the word <code class="prettyprint">MISSISSIPPI</code>. There are <code class="prettyprint">11!</code> permutations but many are repeated, so we divide by the number of ways of permuting the repeated groups <code class="prettyprint">I</code>, <code class="prettyprint">S</code> and <code class="prettyprint">P</code>, which is <code class="prettyprint">4!</code>, <code class="prettyprint">4!</code> and <code class="prettyprint">2!</code>. So <code class="prettyprint">11! / (4!*4!*2!) = 34650</code> is the number of unique permutations of the letters of <code class="prettyprint">MISSISSIPPI</code>. We can do the same thing with gem distributions. Let’s again look back at the small example with <code class="prettyprint">N=D=2</code>:</p>
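<p>The <code class="prettyprint">MISSISSIPPI</code> count is easy to verify with a small multinomial helper (a sketch using the standard factorial formula):</p>
<pre><code class="prettyprint lang-python">from math import factorial
from collections import Counter

def unique_permutations(word):
    # n! divided by the factorial of each letter's multiplicity
    ret = factorial(len(word))
    for multiplicity in Counter(word).values():
        ret //= factorial(multiplicity)
    return ret

print(unique_permutations("MISSISSIPPI"))  # 34650
</code></pre>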
<p><a href="https://svbtleusercontent.com/95XFCJLCd59fuAWSPB3RAz0xspap.png"><img src="https://svbtleusercontent.com/95XFCJLCd59fuAWSPB3RAz0xspap_small.png" alt="Screen Shot 2020-09-04 at 3.41.42 PM.png"></a></p>
<p>There are <code class="prettyprint">6</code> distributions and only two unique permutations: <code class="prettyprint">(3,1)</code> and <code class="prettyprint">(2,2)</code>. There is <code class="prettyprint">1 = 2! / 2!</code> unique permutation of <code class="prettyprint">(2, 2)</code> and there are <code class="prettyprint">2 = 2! / (1!*1!)</code> unique permutations of <code class="prettyprint">(3, 1)</code>, for a total of <code class="prettyprint">3</code> unique distributions. This is sort of helpful, but at this point what we really need is a way to describe our state with fewer or smaller variables. We previously allocated amounts of gems from largest to smallest. Let’s look at things a different way and start with all islanders having exactly 1 gem. There is some subset of islanders that will receive no more gems during the process; they will end it with 1 gem. The rest will gain 1 or more gems. So let’s describe the number of ways of allocating <code class="prettyprint">j</code> gems among <code class="prettyprint">i</code> islanders as <code class="prettyprint">F[i][j]</code>, and then our recurrence relation becomes <code class="prettyprint">F[i][j] = choose(i, k) * F[k][j-k]</code> summed over all valid <code class="prettyprint">k</code>. This describes the same information with much less state. Coding this up gives a solution that is an order of magnitude faster at <code class="prettyprint">O(N^3)</code>:</p>
<pre><code class="prettyprint lang-python">def dynamic_programming(N, D, R):
    F = np.zeros((N+1, D+1), dtype='float')
    G = np.zeros((N+1, D+1), dtype='float')
    F[0][0] = 1.0
    for i in range(N + 1):
        for j in range(D + 1):
            if i == 0 and j == 0:
                continue  # base case already set
            for k in range(min(i, j) + 1):
                iCk = choose(i, k)
                F[i][j] += F[k][j-k] * iCk
                G[i][j] += (G[k][j-k] + min(R, k) * F[k][j-k]) * iCk
    return G[N][D] / F[N][D] + R
</code></pre>
<p>Now we’ve answered our three related questions and can confidently count the number of distributions, unique distributions and unique permutations. We’ve also got three different and increasingly fast solutions to the problem. With that understanding, let’s make one final attempt. </p>
<h2 id="the-illusive-onsup2sup-solution_2">The Elusive O(N<sup>2</sup>) Solution <a class="head_anchor" href="#the-illusive-onsup2sup-solution_2">#</a></h2>
<p>At this point you’ve probably drawn lots of those tree diagrams and counted things many times, so let’s look at the problem in a different way. Let <code class="prettyprint">S(N, D, R)</code> be the sum over all unique distributions of the number of gems the <code class="prettyprint">R</code> richest islanders have at the end of the process. So for <code class="prettyprint">N=D=2</code> and <code class="prettyprint">R=1</code>, <code class="prettyprint">S(N, D, R)</code> equals <code class="prettyprint">8</code> and we can divide by the number of unique distributions <code class="prettyprint">multichoose(N, D) = 3</code>, which gives the answer <code class="prettyprint">2.666 ...</code>. We know how to compute the <code class="prettyprint">multichoose</code>, so how do we compute <code class="prettyprint">S(N, D, R)</code>?</p>
<p>Here’s an analogy. Imagine I asked you to compute the expected value of the sequence <code class="prettyprint">1, 4, 2</code>. You would add them up and divide by <code class="prettyprint">3</code>. That would work. Now imagine that instead of giving you the actual values <code class="prettyprint">1, 4, 2</code> I gave you a function <code class="prettyprint">V(K)</code> and told you that <code class="prettyprint">V(K)</code> was the number of values in the sequence that are greater than or equal to <code class="prettyprint">K</code>. So <code class="prettyprint">V(0) = 3</code>, <code class="prettyprint">V(1) = 3</code>, <code class="prettyprint">V(2) = 2</code>, <code class="prettyprint">V(3) = 1</code>, <code class="prettyprint">V(4) = 1</code> and <code class="prettyprint">V(5) = 0</code>. How would you compute the expected value? Well, you could just sum up the values of <code class="prettyprint">V(K)</code> for all <code class="prettyprint">K</code> from <code class="prettyprint">1</code> upwards, which would equal <code class="prettyprint">3 + 2 + 1 + 1 = 7</code>, and again divide by <code class="prettyprint">3</code>. What’s nice about this approach is that if I asked you to sum the <code class="prettyprint">R=2</code> largest numbers in that sequence, you could do that by summing <code class="prettyprint">min(R, V(K))</code> instead of <code class="prettyprint">V(K)</code> and get <code class="prettyprint">2 + 2 + 1 + 1 = 6</code>, which is the correct answer. You don’t need to sort anything or know anything about the ordering. </p>
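<p>The analogy is worth a few lines of code. Here <code class="prettyprint">V</code> is computed directly from the sequence purely to illustrate the identity; in the actual problem we will obtain it combinatorially:</p>
<pre><code class="prettyprint lang-python">seq = [1, 4, 2]

def V(K):
    # number of values in the sequence greater than or equal to K
    return sum(1 for v in seq if v >= K)

# the sum of all values equals the sum of V(K) for K from 1 upwards
total = sum(V(K) for K in range(1, max(seq) + 1))
print(total)  # 7 == 1 + 4 + 2

# the sum of the R largest values equals the sum of min(R, V(K))
R = 2
top_r = sum(min(R, V(K)) for K in range(1, max(seq) + 1))
print(top_r)  # 6 == 4 + 2
</code></pre>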
<p>Knowing that, let’s define <code class="prettyprint">S(N, D, R)</code> as the sum of <code class="prettyprint">min(K, R)*V(K, Y)</code> over all <code class="prettyprint">K</code> and <code class="prettyprint">Y</code>, where <code class="prettyprint">V(K, Y)</code> is the number of unique distributions of gems in which <em>exactly</em> <code class="prettyprint">K</code> islanders have at least <code class="prettyprint">Y</code> gems each. Let’s also say that <code class="prettyprint">W(K, Y)</code> is the number of unique distributions of gems in which <em>at least</em> <code class="prettyprint">K</code> islanders have at least <code class="prettyprint">Y</code> gems each. Note the difference between <code class="prettyprint">V</code> and <code class="prettyprint">W</code>: then <code class="prettyprint">V(K, Y) = W(K, Y) - W(K+1, Y)</code>. So we just need a way to compute <code class="prettyprint">W(K, Y)</code>, which can be done almost directly. First we choose the <code class="prettyprint">K</code> islanders that will each have at least <code class="prettyprint">Y</code> gems; since each starts with one gem we give each of them <code class="prettyprint">Y-1</code> more, and then allocate the remaining gems as we please. The <code class="prettyprint">K</code> islanders can be chosen from <code class="prettyprint">N</code> in <code class="prettyprint">choose(N, K)</code> ways, and the remaining <code class="prettyprint">D - (Y-1)*K</code> gems can then be allocated to any of the islanders in <code class="prettyprint">choose(N + D - (Y-1)*K - 1, D - (Y-1)*K)</code> ways. The only problem is that this leads to some double counting, so we need to use the <a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle">inclusion-exclusion principle</a> to compute <code class="prettyprint">W(K, Y)</code>:</p>
<pre><code class="prettyprint lang-python">def W(N, D, K, Y):
    ret = 0
    for L in range(K, N + 1):
        if D - L*(Y-1) >= 0:
            incl_excl = (-1)**(L-K) * choose(L-1, K-1)
            x = incl_excl * choose(N, L) * choose(N+D-L*(Y-1)-1, D-L*(Y-1))
            ret += x
    return ret
</code></pre>
<p>And having <code class="prettyprint">W</code> we can compute <code class="prettyprint">V</code>:</p>
<pre><code class="prettyprint lang-python">def V(N, D, K, Y):
    return W(N, D, K, Y) - W(N, D, K+1, Y)
</code></pre>
<p>and then we can sum up the <code class="prettyprint">V</code> table to compute <code class="prettyprint">S</code> and hence our expected value:</p>
<pre><code class="prettyprint lang-python">def S(N, D, R):
    ret = 0
    for K in range(1, N + 1):
        for Y in range(1, D + 2):
            ret += min(R, K) * V(N, D, K, Y)
    return ret


expected = S(N, D, R) / choose(N + D - 1, D)
</code></pre>
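<p>It’s worth cross-checking the inclusion-exclusion approach against the small cases we already know. Below is a self-contained version of the same formulas using <code class="prettyprint">math.comb</code> instead of the precomputed table; for <code class="prettyprint">N=D=2</code> and <code class="prettyprint">R=1</code> it should reproduce the <code class="prettyprint">2.666...</code> from the trees above, and for <code class="prettyprint">R >= N</code> it should give <code class="prettyprint">N + D</code>:</p>
<pre><code class="prettyprint lang-python">from math import comb

def W(N, D, K, Y):
    # unique distributions where at least K islanders have at least Y gems
    ret = 0
    for L in range(K, N + 1):
        if D - L * (Y - 1) >= 0:
            sign = (-1) ** (L - K) * comb(L - 1, K - 1)
            ret += sign * comb(N, L) * comb(N + D - L * (Y - 1) - 1, D - L * (Y - 1))
    return ret

def expected(N, D, R):
    S = sum(min(R, K) * (W(N, D, K, Y) - W(N, D, K + 1, Y))
            for K in range(1, N + 1) for Y in range(1, D + 2))
    return S / comb(N + D - 1, D)

print(expected(2, 2, 1))  # 2.666...
print(expected(2, 2, 2))  # 4.0 == N + D
</code></pre>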
<p>This solution will run in <code class="prettyprint">O(N^2)</code> and compute the correct answer. I really like problems like this. They are entirely described by some very small inputs, in this case just three numbers <code class="prettyprint">N</code>, <code class="prettyprint">D</code> and <code class="prettyprint">R</code>, and admit a simple yet prohibitively slow solution by directly simulating the process. But digging in a bit more with some combinatorics yields increasingly elegant and faster solutions. Is there a faster solution than <code class="prettyprint">O(N^2)</code>? I don’t know; maybe there is even a closed-form solution to this problem, which I would love to see if it exists!</p>
<p>This post is based on a question from ICPC World Finals 2018.</p>
<h1>Socially Distant Polygons</h1>
<p>2020-07-01</p>
<p>Imagine a polygon, like the blue one below. Where should you stand in that polygon such that you are as far as possible from your nearest vertex? This seems like a useful thing to be able to calculate in 2020 (hope this blog post ages well). Ostensibly this should be easy to compute too, but it’s not obvious that the red dot in the image below is in fact the furthest point from any of the polygon’s vertices. </p>
<p><a href="https://svbtleusercontent.com/eJ25xLUH9siqT4PiRdsYFb0xspap.png"><img src="https://svbtleusercontent.com/eJ25xLUH9siqT4PiRdsYFb0xspap_small.png" alt="Screenshot 2020-07-01 15.14.55.png"></a></p>
<p>A first guess might be to stand half the longest edge length away from one of the vertices. But unfortunately that doesn’t work: if you try a square you’ll see it’s best to stand in the centre, which is slightly further from each corner than half an edge. This shows that we need to consider points in the interior of the polygon and make sure they are covered too. Stated another way, we are trying to calculate the minimum radius <code class="prettyprint">R</code> such that the union of circles of radius <code class="prettyprint">R</code> positioned at the vertices <code class="prettyprint">(xi, yi)</code> covers the same area as the polygon <code class="prettyprint">P</code>. If the radius is too small the polygon won’t be completely covered; the last point to be covered as the radius grows is exactly the point we are searching for. </p>
<p><a href="https://svbtleusercontent.com/x9cU4ZZRdWGKHFACMiNTnZ0xspap.png"><img src="https://svbtleusercontent.com/x9cU4ZZRdWGKHFACMiNTnZ0xspap_small.png" alt="Screenshot 2020-07-01 15.12.32.png"></a></p>
<p>If we write the area of this union of circles as a function of <code class="prettyprint">R</code>, then <code class="prettyprint">UnionArea(R)</code> is monotonic: if <code class="prettyprint">R1 < R2</code>, then <code class="prettyprint">UnionArea(R1) <= UnionArea(R2)</code>. This nicely yields a short solution where we try increasing values of <code class="prettyprint">R</code> until <code class="prettyprint">UnionArea(R) == Area(P)</code>. Keep increasing <code class="prettyprint">R</code> until we’ve filled up the polygon with orange (the union of circles) and, as soon as we have done so, stop.</p>
<p><a href="https://svbtleusercontent.com/6nTimmsvwf6WPmbGW9ARhs0xspap.gif"><img src="https://svbtleusercontent.com/6nTimmsvwf6WPmbGW9ARhs0xspap_small.gif" alt="output.gif"></a></p>
<p>Knowing the function is monotonic we can do a binary search over the range <code class="prettyprint">R = {0 .. MAX}</code>, and we want the smallest value of <code class="prettyprint">R</code> which covers 100% of the polygon.</p>
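<p>Here’s a rough sketch of that search. Rather than using a computational geometry library, it approximates coverage by sampling a grid of points inside the polygon and binary-searches for the smallest <code class="prettyprint">R</code> whose vertex circles cover every sample; the helpers and the unit-square example are mine, purely for illustration:</p>
<pre><code class="prettyprint lang-python">from math import hypot

def point_in_polygon(p, poly):
    # even-odd ray casting
    x, y = p
    inside = False
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def covered(samples, verts, R):
    # is every sample point within R of some vertex?
    return all(min(hypot(px - vx, py - vy) for vx, vy in verts) <= R
               for px, py in samples)

poly = [(0, 0), (1, 0), (1, 1), (0, 1)]  # unit square
step = 0.02
samples = [(i * step, j * step) for i in range(1, 50) for j in range(1, 50)
           if point_in_polygon((i * step, j * step), poly)]

lo, hi = 0.0, 10.0
for _ in range(50):  # binary search on the monotone coverage predicate
    mid = (lo + hi) / 2
    if covered(samples, poly, mid):
        hi = mid
    else:
        lo = mid
print(hi)  # ~0.7071 for the unit square: stand in the centre
</code></pre>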
<p>This is easy enough to code but computing the union of thousands of circles gets very complicated and slows things down. Although it’s short to code using a computational geometry library, this solution quickly falls over for polygons like this horror:</p>
<p><a href="https://svbtleusercontent.com/cG2rZZMwXvw1Cs7NvHryd30xspap.gif"><img src="https://svbtleusercontent.com/cG2rZZMwXvw1Cs7NvHryd30xspap_small.gif" alt="output.gif"></a></p>
<p>Let’s look at this again. We are searching for the point inside the polygon that is furthest from its nearest vertex. If we can construct a region around each vertex such that all points inside that region are nearest to that vertex, then the furthest point is going to lie somewhere on the boundaries of those regions. </p>
<h2 id="voronoi-diagrams_2">Voronoi Diagrams <a class="head_anchor" href="#voronoi-diagrams_2">#</a></h2>
<p>The problem of decomposing space into regions around a set of sites such that every location in a region is nearer to that region’s site than to any other site is called a Voronoi decomposition. Here’s what it looks like for a polygon’s vertices.</p>
<p><a href="https://svbtleusercontent.com/7WthKpvq5m5Ndk3c4adQUc0xspap.png"><img src="https://svbtleusercontent.com/7WthKpvq5m5Ndk3c4adQUc0xspap_small.png" alt="Screenshot 2020-07-01 16.38.36.png"></a></p>
<p>So let’s try to compute the Voronoi diagram for a set of points. A naive approach is to iterate through each point; for each other point we find the perpendicular bisector, which delineates a half-plane, and the complement of the union of these half-planes gives us the region around each point. In the following image a region is being constructed around the red vertex. The green points are the midpoints between the red vertex and all the other vertices of the polygon. These lie on the perpendicular bisectors that delineate the half-planes, which are shown in green. Any point that isn’t in one of these half-planes is nearer to the red vertex than to any other vertex. This is the red region, in this case only shown inside the polygon. </p>
<p><a href="https://svbtleusercontent.com/fkAxpawhW7sdnaWkw6uPfA0xspap.png"><img src="https://svbtleusercontent.com/fkAxpawhW7sdnaWkw6uPfA0xspap_small.png" alt="Screenshot 2020-07-01 15.48.32.png"></a></p>
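That construction can be sketched directly: keep the half-plane of points nearer to the chosen site than to each other site, and clip a large bounding polygon against each one in turn with a textbook Sutherland–Hodgman clip. The function names here are my own, and the bounding box stands in for "all of space":

```python
def clip_halfplane(poly, a, b, c):
    """Sutherland-Hodgman clip: keep the part of poly where a*x + b*y <= c."""
    out = []
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        f1, f2 = a * x1 + b * y1 - c, a * x2 + b * y2 - c
        if f1 <= 0:
            out.append((x1, y1))
        if (f1 <= 0) != (f2 <= 0):
            t = f1 / (f1 - f2)  # where the edge crosses the clipping line
            out.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return out

def voronoi_cell(site, others, bbox):
    """Region of points nearer to `site` than to any of `others`,
    clipped to the polygon `bbox`."""
    cell = list(bbox)
    sx, sy = site
    for tx, ty in others:
        # |p - s|^2 <= |p - t|^2  <=>  2*(t - s).p <= |t|^2 - |s|^2
        cell = clip_halfplane(cell, 2 * (tx - sx), 2 * (ty - sy),
                              tx * tx + ty * ty - sx * sx - sy * sy)
    return cell
```

For a site at one corner of a square of sites, this clips a bounding box down to the expected quadrant around that corner.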
<p>Performing this operation for all vertices of the polygon will give the decomposition of space within the polygon which looks like this. Again any point inside a red region is nearer to the polygon vertex inside that same region than any other polygon vertex.</p>
<p><a href="https://svbtleusercontent.com/sj2Upcqa6Muvoz2q9muEpA0xspap.png"><img src="https://svbtleusercontent.com/sj2Upcqa6Muvoz2q9muEpA0xspap_small.png" alt="Screenshot 2020-07-01 15.53.23.png"></a></p>
<p>This works well but again we are maintaining a growing set of half-planes which, while simpler than the circles, still gets too complicated:</p>
<p><a href="https://svbtleusercontent.com/wA6GUjyKWz32qyE9fyyPnW0xspap.gif"><img src="https://svbtleusercontent.com/wA6GUjyKWz32qyE9fyyPnW0xspap_small.gif" alt="output.gif"></a></p>
<p>Which yields</p>
<p><a href="https://svbtleusercontent.com/p6AzRRd49JNzMvBPGg2fio0xspap.jpg"><img src="https://svbtleusercontent.com/p6AzRRd49JNzMvBPGg2fio0xspap_small.jpg" alt="movie-016.jpg"></a></p>
<p>Possibly the best way to compute the Voronoi diagram is to use Fortune’s algorithm. It is an elegant algorithm to watch, as it directly traces out the regions while sweeping from bottom to top. <a href="https://www.youtube.com/watch?v=rvmREoyL2F0">This is what I mean!</a> Fortune’s algorithm works by sorting the vertices from bottom to top and processing them in order. While doing so it maintains a piece-wise parabolic curve called the beach line, which describes the regions nearest to the points processed so far. The algorithm uses a balanced tree structure to maintain the beach line and a priority queue for updating and accessing the next event. As a result it can compute the Voronoi decomposition in <code class="prettyprint">O(n log n)</code>.</p>
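In practice you rarely need to implement the sweep yourself; SciPy, for instance, wraps Qhull and produces the diagram directly. A small sketch with a hand-picked five-point example (a square of sites plus its center):

```python
import numpy as np
from scipy.spatial import Voronoi

# the four corners of a square plus its center
pts = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0], [1.0, 1.0]])
vor = Voronoi(pts)

# vor.vertices holds the Voronoi vertices, here the four circumcenters
# (1, 0), (2, 1), (1, 2) and (0, 1);
# vor.ridge_points lists, for each region boundary, the pair of sites it separates
print(vor.vertices)
```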
<h2 id="aside_2">Aside <a class="head_anchor" href="#aside_2">#</a></h2>
<p>There is a very interesting correspondence between three core algorithms in computational geometry: Voronoi diagrams, Delaunay triangulations and convex hulls. We’ve already described Voronoi diagrams. The convex hull of a set of points is the minimum area convex polygon that encloses them. This concept generalizes to more than two dimensions. One construction that is fun to think about is to start with the 2D vertices <code class="prettyprint">(xi, yi)</code> and turn them into 3D coordinates by “lifting” them onto a paraboloid, adding <code class="prettyprint">zi = xi^2 + yi^2</code>. So now you have a bunch of 3D coordinates of the form <code class="prettyprint">(xi, yi, xi^2 + yi^2)</code>. Compute the convex hull of these points, which will be a polyhedron, and keep just the faces that face downward. Remove the third coordinate and each face becomes a triangle of the Delaunay triangulation, whose dual, as we’ll see next, is the Voronoi diagram. </p>
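The lifting construction is easy to check numerically. A sketch assuming SciPy’s Qhull wrappers; the lower faces are the ones whose outward normal points down:

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(0)
pts = rng.random((30, 2))

# "lift" each (x, y) onto the paraboloid z = x^2 + y^2
lifted = np.column_stack([pts, (pts ** 2).sum(axis=1)])

hull = ConvexHull(lifted)
# hull.equations holds outward facet normals; the downward-facing
# facets are the ones with a negative z component
lower = [s for s, eq in zip(hull.simplices, hull.equations) if eq[2] < 0]

# dropping the third coordinate, the lower facets are exactly the
# Delaunay triangles of the original 2D points
lower_tris = {frozenset(s) for s in lower}
delaunay_tris = {frozenset(s) for s in Delaunay(pts).simplices}
```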
<p>There is even a duality between a Voronoi diagram and a Delaunay triangulation of the same points. A Delaunay triangulation is one that maximizes the minimum angle of any triangle, yielding “good looking” triangles. The interesting thing here is that if you take a Voronoi diagram and connect each pair of sites whose regions share an edge, you get the corresponding Delaunay triangulation of the same set of points. So the two are dual to each other: Voronoi edges correspond to Delaunay edges, and Voronoi vertices to Delaunay triangles.</p>
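That duality can be verified mechanically: every Voronoi ridge separates the cells of two sites, and those site pairs are precisely the Delaunay edges. A sketch with an arbitrary non-degenerate point set:

```python
import numpy as np
from scipy.spatial import Voronoi, Delaunay

pts = np.array([[0.0, 0.0], [3.0, 0.1], [2.6, 2.4], [0.2, 2.1], [1.3, 1.1]])
vor = Voronoi(pts)
tri = Delaunay(pts)

# every Voronoi ridge separates the cells of two sites; those
# site pairs are exactly the edges of the Delaunay triangulation
vor_edges = {frozenset(pair) for pair in vor.ridge_points}
del_edges = {frozenset((s[a], s[b]))
             for s in tri.simplices for a in range(3) for b in range(a + 1, 3)}
```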
<p>The correspondence between these three algorithms means that if you can compute one you can compute the others without too much further effort.</p>
<h2 id="solution_2">Solution <a class="head_anchor" href="#solution_2">#</a></h2>
<p>We are now in a position to solve our original problem. The key observation is that the furthest point inside the polygon will lie either on a vertex of the Voronoi diagram of the polygon’s vertices, or on the intersection of a Voronoi region edge with a polygon edge. We can compute these candidate points and work out which one lies furthest from its nearest polygon vertex. In the following images the dotted lines represent edges of Voronoi regions that intersect the polygon, the green crosses are candidate points, and the red dot is the furthest point.</p>
<p><a href="https://svbtleusercontent.com/8CsdApbnkHJVK8rapuHhGg0xspap.png"><img src="https://svbtleusercontent.com/8CsdApbnkHJVK8rapuHhGg0xspap_small.png" alt="figure-000.png"></a></p>
<p><a href="https://svbtleusercontent.com/i1LGjUEoeFWBSMfiP6SjX90xspap.png"><img src="https://svbtleusercontent.com/i1LGjUEoeFWBSMfiP6SjX90xspap_small.png" alt="figure-021.png"></a></p>
<p><a href="https://svbtleusercontent.com/kB51b8HhcKCVcsDScgDZtu0xspap.png"><img src="https://svbtleusercontent.com/kB51b8HhcKCVcsDScgDZtu0xspap_small.png" alt="figure-050.png"></a></p>
<p>I’m not sure whether the solution point can be constructed directly in any way other than generating the candidates efficiently and testing each one in turn. I’d be really interested to see if this problem can be solved another way. This blog post was inspired by ACM ICPC World Finals 2018 Question G.</p>
<h1>Traffic Lights</h1>
<p>Have you ever wondered what the probability is of making it through a sequence of traffic lights without stopping? Me neither. But some people do, so let’s try and solve that problem. It’s something that appears in everyday life; for example my walk from home to DroneDeploy takes me through 16 traffic lights. </p>
<p><a href="https://svbtleusercontent.com/cD5kbLAQxGd7NA4Gz4Rr3v0xspap.png"><img src="https://svbtleusercontent.com/cD5kbLAQxGd7NA4Gz4Rr3v0xspap_small.png" alt="Screenshot 2020-01-14 15.51.25.png"></a></p>
<p>Let’s say we begin at position <code class="prettyprint">X=0</code> at time <code class="prettyprint">T</code> in seconds and start walking at <code class="prettyprint">1</code> meter per second to the right. Let’s say <code class="prettyprint">T</code> is uniformly randomly distributed in the range <code class="prettyprint">[0, 1e100]</code> to account for all manner of late starts. Then let’s say there are <code class="prettyprint">N</code> traffic lights at positions <code class="prettyprint">X1, X2, … ,XN</code>; each of these is red for <code class="prettyprint">Ri</code> seconds, then green for <code class="prettyprint">Gi</code> seconds, after which the cycle repeats. At time <code class="prettyprint">T = 0</code> all traffic lights have just turned red. So given the lists <code class="prettyprint">X</code>, <code class="prettyprint">R</code> and <code class="prettyprint">G</code> let’s try and compute the probability that we hit each of the red lights, and thus the probability that we make it all the way through. The image above shows <code class="prettyprint">4</code> traffic lights with periods <code class="prettyprint">8</code>, <code class="prettyprint">22</code>, <code class="prettyprint">6</code> and <code class="prettyprint">10</code>. Let’s also say <code class="prettyprint">R + G <= 100</code>.</p>
<p>Our first attempt could be to sample start times <code class="prettyprint">T</code> and for each sample compute which traffic light we stop at (if any) and report the probabilities. </p>
<pre><code class="prettyprint">from collections import namedtuple
from math import lcm  # Python 3.9+; gcd/lcm are also defined by hand below
import numpy as np

Light = namedtuple('Light', ['x', 'r', 'g'])
lights = [
    Light(3, 1, 3),
    Light(5, 2, 7),
    Light(9, 4, 4),
]

def simulate(t, lights):
    for index, light in enumerate(lights):
        at = (light.x + t) % (light.r + light.g)
        if at < light.r:
            return index
    return len(lights)

def solution(lights):
    N = len(lights)
    SAMPLES = 1000000
    P = 1
    for light in lights:
        P = lcm(P, light.r + light.g)
    ret = [0] * (N + 1)
    for _ in range(SAMPLES):
        T = np.random.uniform(0, 1e100)
        i = simulate(T, lights)
        ret[i] += 1
    count = sum(ret)
    return [i / count for i in ret]
</code></pre>
<p>Here’s what a few samples might look like. The solution is easy to code but it’s really slow and unlikely to converge to something useful in a reasonable amount of time. </p>
<p><a href="https://svbtleusercontent.com/9ag6eBXv4i6K9WgGVR14Cf0xspap.png"><img src="https://svbtleusercontent.com/9ag6eBXv4i6K9WgGVR14Cf0xspap_small.png" alt="Screenshot 2020-01-14 15.44.02.png"></a></p>
<p>Let’s hunt for a more effective solution. Look at the periods of the traffic lights <code class="prettyprint">Pi = Ri+Gi</code>. Given that the lights start having just turned red, the system will eventually repeat. The lowest common multiple of the periods <code class="prettyprint">Pi</code> gives the time after which the system repeats. So a better solution would be, instead of trying random start times, to try each start time in the range <code class="prettyprint">[ 0 … LCM([P1, P2, ... , PN]) ]</code> and then compute the probability. </p>
<pre><code class="prettyprint">def gcd(a, b):
    if b > 0:
        return gcd(b, a % b)
    else:
        return a

def lcm(a, b):
    return a * b // gcd(a, b)

def solution(lights):
    N = len(lights)
    P = 1
    for light in lights:
        P = lcm(P, light.r + light.g)
    ret = [0] * (N + 1)
    for T in range(P):
        i = simulate(T, lights)
        ret[i] += 1
    count = sum(ret)
    return [i / count for i in ret]
</code></pre>
<p>This works and gives us an <code class="prettyprint">O ( LCM([P1, P2, ... , PN]) )</code> solution. Here’s what it looks like for the same system:</p>
<p><a href="https://svbtleusercontent.com/xwyHXN8a6HNFMRDhDB778W0xspap.png"><img src="https://svbtleusercontent.com/xwyHXN8a6HNFMRDhDB778W0xspap_small.png" alt="Screenshot 2020-01-14 15.48.28.png"></a></p>
<p>On the y-axis we have all possible start times modulo <code class="prettyprint">LCM([P1, P2, ... , PN])</code>, as we don’t need to consider any others. The arrows represent which traffic light we would hit starting at that time. The final probabilities are correct, so let’s look at the worst case of having traffic lights of all the periods between <code class="prettyprint">1</code> and <code class="prettyprint">100</code>. What is the LCM of the numbers <code class="prettyprint">1 .. 100</code>? Uh oh, it’s <code class="prettyprint">69,720,375,229,712,477,164,533,808,935,312,303,556,800</code>. That’s not going to work. Can we do better?</p>
<p>Let’s look at the system of period <code class="prettyprint">3</code> and <code class="prettyprint">5</code> traffic lights. The system will repeat with a period of <code class="prettyprint">LCM(3, 5) = 15</code>. But in addition the periods are co-prime, as they share no prime factors: <code class="prettyprint">GCD(3, 5) = 1</code>. That means knowing what state the period <code class="prettyprint">3</code> traffic light is in tells us nothing about what state the period <code class="prettyprint">5</code> traffic light is in. Essentially the probabilities are independent, and the probability of making it through both lights is just the probability of making it through the first light multiplied by the probability of making it through the second, <code class="prettyprint">G1 / (R1+G1) * G2 / (R2+G2)</code>. Maybe <code class="prettyprint">R2G2</code> will feature in the new Star Wars. Anyway, this is true for any set of traffic lights with pairwise co-prime periods, and yields a nice <code class="prettyprint">O(N)</code> solution with the caveat that the periods must be co-prime. </p>
<p><a href="https://svbtleusercontent.com/4rvs4yikrPXu7ecB6YJZc10xspap.png"><img src="https://svbtleusercontent.com/4rvs4yikrPXu7ecB6YJZc10xspap_small.png" alt="a.png"></a></p>
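In the co-prime case the independence argument gives a one-line formula. A sketch using the same Light tuple as the earlier code (pass_probability is my own name, and it is only valid when the periods are pairwise co-prime); the brute force over the full LCM agrees with it:

```python
from collections import namedtuple

Light = namedtuple('Light', ['x', 'r', 'g'])

def pass_probability(lights):
    # valid only when the periods r + g are pairwise co-prime
    p = 1.0
    for light in lights:
        p *= light.g / (light.r + light.g)
    return p

# periods 3 and 5 are co-prime: compare against brute force over LCM(3, 5) = 15
lights = [Light(0, 1, 2), Light(0, 2, 3)]
brute = sum(1 for t in range(15)
            if all((l.x + t) % (l.r + l.g) >= l.r for l in lights)) / 15
```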
<p>Here’s what things look like if we add another light of period <code class="prettyprint">2</code> to the system. </p>
<p><a href="https://svbtleusercontent.com/3LvgUdqdxfuEixnvoGUdus0xspap.png"><img src="https://svbtleusercontent.com/3LvgUdqdxfuEixnvoGUdus0xspap_small.png" alt="b.png"></a></p>
<p>Now what happens if we introduce a period <code class="prettyprint">6</code> light into this set. In this case <code class="prettyprint">6</code> is a multiple of the existing period. </p>
<p><a href="https://svbtleusercontent.com/pVpHvnovPKK4VcYRtpgA8A0xspap.png"><img src="https://svbtleusercontent.com/pVpHvnovPKK4VcYRtpgA8A0xspap_small.png" alt="a.png"></a></p>
<p>This is the same as repeating the existing period <code class="prettyprint">2</code> and <code class="prettyprint">3</code> traffic lights. So it’s the same as this system. </p>
<p><a href="https://svbtleusercontent.com/emznvVtVtvxuUveh2oeHDC0xspap.png"><img src="https://svbtleusercontent.com/emznvVtVtvxuUveh2oeHDC0xspap_small.png" alt="b.png"></a></p>
<p>Thus for a set of periods which are pairwise either co-prime or multiples of each other we can solve the system in <code class="prettyprint">O(N * P)</code>, where <code class="prettyprint">P</code> is the largest period, by considering each time <code class="prettyprint">0 … P-1</code> and keeping track of which times modulo that period would hit a red light. We can do this by checking, for each light, which lights to its left have periods that are multiples of its own, or whose period the current light’s period is a multiple of, and reusing the tracking for that period. </p>
<p>Great! But what about these two lights, <code class="prettyprint">8</code> and <code class="prettyprint">12</code>? They are neither co-prime nor integer multiples of each other, as they share a factor: <code class="prettyprint">GCD(8, 12) = 4</code>. Let’s pick another number <code class="prettyprint">Z</code> and consider only start times spaced <code class="prettyprint">Z</code> apart: <code class="prettyprint">{ T+Z, T+2Z, …. }</code>. Restricted to these times, a traffic light with period <code class="prettyprint">P</code> is still periodic but with a reduced period of <code class="prettyprint">P / GCD(Z, P)</code>. For the example let’s pick <code class="prettyprint">Z = 4</code>, creating a system of reduced periods <code class="prettyprint">2</code> and <code class="prettyprint">3</code> (originally <code class="prettyprint">8</code> and <code class="prettyprint">12</code>). These are co-prime, so we can use our initial algorithm. The problem is now a matter of finding a suitable value <code class="prettyprint">Z</code> that transforms a set of periods into a set of reduced periods where each pair is either co-prime or one a multiple of the other. Here’s some code to compute one of those numbers experimentally:</p>
<pre><code class="prettyprint">def findZ():
    nums = list(range(1, 101))
    again = True
    factors = []
    while again:
        again = False
        for i in range(len(nums)):
            for j in range(i + 1, len(nums)):
                if nums[i] % nums[j] == 0 or nums[j] % nums[i] == 0:
                    continue
                z = gcd(nums[i], nums[j])
                if z != 1:
                    for k in range(len(nums)):
                        if nums[k] % z == 0:
                            nums[k] = nums[k] // z
                    factors.append(z)
                    again = True
    return factors
</code></pre>
</code></pre>
<p>The result is the factors <code class="prettyprint">[2, 2, 2, 3, 3, 5, 7]</code>, the product of which is <code class="prettyprint">2520</code>. That’s the key we’ve been looking for. This leads us to the following solution, which runs in a very manageable <code class="prettyprint">O(N * Z * P)</code>. </p>
<pre><code class="prettyprint lang-python">def solution(lights):
    Z = 2520
    N = len(lights)
    fn = []
    filters = []
    for i, light in enumerate(lights):
        period = (light.r + light.g) // gcd(Z, light.r + light.g)
        for j in range(len(filters)):
            if period % len(filters[j]) == 0:
                filters[j] = [-1] * period
            if len(filters[j]) % period == 0:
                fn.append(j)
                break
        else:
            fn.append(len(filters))
            filters.append([-1] * period)
    ret = [0] * (N + 1)
    for base in range(Z):
        cur = 1.0 / Z
        for i, light in enumerate(lights):
            filter = filters[fn[i]]
            total = 0
            hits_red = 0
            t = Z + base + light.x
            for j in range(len(filter)):
                if filter[j] < base:
                    total += 1
                    if t % (light.r + light.g) < light.r:
                        filter[j] = base
                        hits_red += 1
                t += Z
            entering = cur
            if total:
                cur *= (total - hits_red) / total
            ret[i] += entering - cur
        ret[N] += cur
    return ret
</code></pre>
<p>It’s really cool to see a seemingly intractable problem dissolve in the presence of a seemingly magic number that disconnects all the primes. So what’s the probability of making it through a sequence of traffic lights? Wonder no more. This question is based on ICPC World Finals 2019 Problem K. </p>
<h1>Fat Albert</h1>
<p>I was watching the <a href="https://en.wikipedia.org/wiki/Fleet_Week">Fleet Week</a> display by the <a href="https://en.wikipedia.org/wiki/Blue_Angels">Blue Angels</a> yesterday and we were talking about whether you could determine where an aircraft was based on the sound you were hearing from its engine. </p>
<p><a href="https://svbtleusercontent.com/tpyiovgnkazyow.png"><img src="https://svbtleusercontent.com/tpyiovgnkazyow_small.png" alt="Screenshot 2016-10-09 15.58.30.png"></a></p>
<p>Say we have an aircraft at some unknown position flying at a constant linear velocity, with the engine emitting sound at a constant frequency. As soon as we start hearing the engine we start recording the sound. Given <strong>just</strong> that audio, let’s try to determine how far away the aircraft is and how fast it’s traveling. Here’s a generated <a href="https://www.dropbox.com/s/a0r76ldm0yrbskw/audio.wav?dl=0">sample recording</a> of a source starting <code class="prettyprint">315.914</code> meters away and traveling at <code class="prettyprint">214</code> meters per second in an unknown direction. </p>
<p>First let’s make a simplification. We can rotate our frame of reference such that the aircraft is traveling along the x-axis from some unknown starting point. If we look from above the situation looks like this. </p>
<p><a href="https://svbtleusercontent.com/3feyeptxkgjdva.png"><img src="https://svbtleusercontent.com/3feyeptxkgjdva_small.png" alt="Screenshot 2016-10-09 17.13.50.png"></a></p>
<p>When working with audio the first thing to do would probably be to plot the spectrogram and see if we can glean anything from that. The spectrogram of a WAV file can be plotted using this code:</p>
<pre><code class="prettyprint">import scipy.io.wavfile
import pylab

Fs, audio = scipy.io.wavfile.read('audio.wav')
MAX_FREQUENCY = 2000
pylab.figure(facecolor='white')
pylab.specgram(audio, NFFT=1024, Fs=Fs, cmap=pylab.cm.gist_heat)
pylab.ylim((100, MAX_FREQUENCY))
pylab.xlim((0, 1.1))
pylab.xlabel('Time (s)')
pylab.ylabel('Frequency (Hz)')
pylab.show()
</code></pre>
<p>and the resulting spectrogram, which shows the power spectrum of the received signal as a function of time, looks like this.</p>
<p><a href="https://svbtleusercontent.com/b2vg7xfvhvkowq.png"><img src="https://svbtleusercontent.com/b2vg7xfvhvkowq_small.png" alt="Screenshot 2016-10-09 16.12.57.png"></a></p>
<p>This looks great. Most importantly you can see the <a href="https://en.wikipedia.org/wiki/Doppler_effect">Doppler Effect</a> in action, because the sound waves are compressed in the direction of the observer. This implies that the aircraft is moving towards us. Other than that there isn’t much to be gained here. We can look at the inflection point of the spectrogram and infer that this is where the aircraft passes perpendicular to us, which corresponds to the actual frequency the engine is emitting; in this case that looks like about <code class="prettyprint">500</code> Hertz. However we can’t assume that the aircraft will pass us, so we probably can’t even rely on that.</p>
<p><a href="https://svbtleusercontent.com/dx3uzapbh44j2w.png"><img src="https://svbtleusercontent.com/dx3uzapbh44j2w_small.png" alt="Screenshot 2016-10-09 16.48.39.png"></a></p>
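As a rough sanity check on those numbers, here is the textbook Doppler formula for a source moving directly along the line of sight. This is a simplification I'm adding myself: the real radial velocity varies as the aircraft passes, so this only brackets the sweep the spectrogram can show.

```python
# back-of-envelope Doppler check, assuming motion directly along
# the line of sight (only true far before and far after the pass)
c = 340.29     # speed of sound in m/s, the value used later in this post
f_src = 500.0  # emitted frequency in Hz, as read off the spectrogram
v = 214.0      # source speed in m/s, from the generated sample

f_approach = f_src * c / (c - v)  # heard while closing head-on
f_recede = f_src * c / (c + v)    # heard while flying straight away
```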
<p>Let’s try something different and analyze this in the time domain instead. When the aircraft emits sound at some real time <code class="prettyprint">t</code>, that sound takes a while to arrive at the observer. The delay depends on the distance from the observer and the speed of sound. When the first bit of audio arrives at the observer, at <code class="prettyprint">t=0</code> in “receiver time”, the aircraft has already been flying for a while, so this first piece of audio corresponds to a previous location. We don’t know what this delay is because we don’t know how far away the plane was. Since the frequency of the sound we are receiving is changing because of the Doppler Effect, we can’t really rely on frequency analysis either. Let’s instead zoom in and look at the <a href="https://en.wikipedia.org/wiki/Zero_crossing">zero-crossings</a> of the signal. </p>
<p>The zero-crossings are the points in time at which the signal (regardless of frequency) crosses the x-axis. In “real time” they are evenly spaced by <code class="prettyprint">1/(f*2.0)</code> where <code class="prettyprint">f</code> is the frequency of the sound emitted by the engine. However, in the received signal, while the aircraft is traveling towards us the signal is squashed and the zero-crossings are closer together; they then spread apart as the aircraft flies away. So the signal gets concertinaed in a specific way. Here’s an exaggerated diagram of what is being emitted and what is being received when:</p>
<p><a href="https://svbtleusercontent.com/f9rr4mcb9h4sxw.png"><img src="https://svbtleusercontent.com/f9rr4mcb9h4sxw_small.png" alt="Screenshot 2016-10-09 17.14.35.png"></a></p>
<p>Let’s say the plane is traveling with speed <code class="prettyprint">v</code> parallel to the x-axis. So its x-coordinate at time <code class="prettyprint">t</code> is <code class="prettyprint">x0 + v * t</code> (from some unknown starting point) and its y-coordinate is <code class="prettyprint">R</code> (some unknown distance). Here <code class="prettyprint">t</code> is the real time when the signal is emitted. The time for this signal to reach us is:</p>
<pre><code class="prettyprint">import numpy as np

def reach_time(x0, v, t, R):
    c = 340.29  # speed of sound
    dt = np.sqrt((x0 + v*t)**2 + R**2) / c
    return dt
</code></pre>
<p>The time stamp in received time is just <code class="prettyprint">reach_time(x0, v, t, R) + t - t0</code>, where <code class="prettyprint">t0</code> is the initial, unknown delay for the first signal to reach us. From this we can get the timestamp of the nth zero-crossing, knowing that the source frequency is fixed.</p>
<pre><code class="prettyprint">import numpy as np

def nth_zero_crossing(n, x0, v, R, f, n0):
    c = 340.29  # speed of sound
    f2 = 2.0 * f
    return np.sqrt((x0 + v*n/f2)**2 + R**2) / c + (n - n0) / f2
</code></pre>
<p>So we’ve got a model that maps the time of a zero-crossing at the source to the time of the corresponding zero-crossing in our WAV file. These are the orange lines in this image:</p>
<p><a href="https://svbtleusercontent.com/joau6wakv8a75q.png"><img src="https://svbtleusercontent.com/joau6wakv8a75q_small.png" alt="Screenshot 2016-10-09 17.19.30.png"></a></p>
<p>Now we need to extract the zero-crossings from the WAV file so we can compare. We could use some more advanced interpolation but since there are <code class="prettyprint">44100</code> samples per second in the audio file the impact on the resulting error term should be small. Here’s some code to extract the time of each zero-crossing in an audio file.</p>
<pre><code class="prettyprint">import scipy.io.wavfile
import numpy as np

Fs, audio = scipy.io.wavfile.read('audio.wav')
audio = np.array(audio, dtype='float64')
# normalize
audio = (audio - audio.mean()) / audio.std()
prev = audio[0]
ztimes = [0]
for j in range(1, audio.shape[0]):
    if audio[j] * prev <= 0 and prev != 0:
        cross = float(j) / Fs
        ztimes.append(cross)
    prev = audio[j]
</code></pre>
<p>This gives us a generative model where we can select some parameters of the situation and, using <code class="prettyprint">nth_zero_crossing</code>, compute what the received signal would look like. This puts us in a good position to create an error function between the actual (empirical) data in the audio file and the generated data based on our parameters. Then we can try to find the parameters that minimize this error. Here’s some code that computes the residual of our generated signal:</p>
<pre><code class="prettyprint">import numpy as np

c = 340.29  # speed of sound

def gen_received_signal(args):
    # ztimes is the list of zero-crossing times extracted above
    f2, v, x0, R, n0 = args
    n = np.arange(len(ztimes))
    y = np.sqrt((x0 + v*n/f2)**2 + R**2) / c + (n - n0) / f2
    error = np.array(ztimes) - y
    return error
</code></pre>
</code></pre>
<p><a href="https://svbtleusercontent.com/bz5ptw6tpcj77a.png"><img src="https://svbtleusercontent.com/bz5ptw6tpcj77a_small.png" alt="Screenshot 2016-10-09 17.04.27.png"></a></p>
<p>Using a non-linear least squares solver like <a href="https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm">Levenberg Marquardt</a> we can search for the parameters that best explain our data. </p>
<pre><code class="prettyprint">import numpy as np
from scipy.optimize import least_squares

# initial guesses
f2 = 1600
v = 100
x0 = -100
R = 10
n0 = 100

args = [f2, v, x0, R, n0]
res = least_squares(gen_received_signal, args)
f2, v, x0, R, n0 = res.x
# compute the initial distance
D = np.sqrt(x0**2 + R**2)
print('Solution distance=', D, 'x0=', x0, 'v=', v, 'f=', f2/2.0)
</code></pre>
<p>Out of this pops the solution and more: it has also accurately computed the source frequency from some bad initial guesses. Since we aren’t assuming anything about the change in frequency, this approach also works when the aircraft does not pass us and is only recorded on approach or flying away. In reality the sound would attenuate quadratically with distance, but that doesn’t affect this solution because we don’t use amplitudes.</p>
<h1>Minkowski Asteroids</h1>
<p>We have two convex polygons <code class="prettyprint">P</code> and <code class="prettyprint">Q</code>, each moving at constant velocities <code class="prettyprint">Vp</code> and <code class="prettyprint">Vq</code>. At some point in time they <strong>may</strong> pass through one another. We would like to find the point in time at which the area of their intersection is at a maximum. Here is a simple visualization, where the yellow area represents the intersection and the arrow heads represent the velocities of the polygons.</p>
<p><a href="https://svbtleusercontent.com/ksl17eoepmjkww.gif"><img src="https://svbtleusercontent.com/ksl17eoepmjkww_small.gif" alt="output_z4aALm.gif"></a></p>
<p>Let’s first look at the problem of testing whether two polygons intersect. The simplest way to do it is to check if any edge of one polygon intersects any edge of the other. For this we need a line segment intersection test. Two line segments <code class="prettyprint">A - B</code> and <code class="prettyprint">C - D</code> intersect if the sign of the signed area of the triangle <code class="prettyprint">A, B, C</code> differs from that of the triangle <code class="prettyprint">A, B, D</code>, and similarly for <code class="prettyprint">C, D, A</code> and <code class="prettyprint">C, D, B</code>. It’s simple and runs in <code class="prettyprint">O(N^2)</code>. Here’s the code:</p>
<pre><code class="prettyprint">import numpy as np

def signed_area(A, B, C):
    # positive if A, B, C are counter-clockwise, negative if clockwise
    return ((B[0]-A[0])*(C[1]-A[1]) - (B[1]-A[1])*(C[0]-A[0])) * 0.5

def intersect(A, B, C, D):
    return (np.sign(signed_area(A, B, C)) != np.sign(signed_area(A, B, D)) and
            np.sign(signed_area(C, D, A)) != np.sign(signed_area(C, D, B)))

def polygons_intersect(P, Q):
    n = len(P)
    m = len(Q)
    for i in range(n):
        for j in range(m):
            if intersect(P[i], P[(i+1)%n], Q[j], Q[(j+1)%m]):
                return True
    return False
</code></pre>
<h1 id="aside_1">Aside: <a class="head_anchor" href="#aside_1">#</a></h1>
<p>There is another way to do this using something called the <a href="https://en.wikipedia.org/wiki/Hyperplane_separation_theorem">hyperplane separation theorem</a>. Rather than explaining it I’ll plot an example of how it works in two dimensions, which I think is more helpful. Take each edge of the polygons in question and extend it outwards in the direction of its normal. In the plots below the dotted lines represent normals and the solid lines the extensions of the edges of one of the polygons. Let’s call the extensions of the edges “barriers”. Now consider projecting both shapes onto any of the barriers. This would turn them into line segments on the barriers. In the case of intersection these segments would overlap on all the barriers. Look at this plot and confirm that projecting the shapes in the center onto any barrier would yield a single solid line segment, not two.</p>
<p><a href="https://svbtleusercontent.com/diozord5fvahpa.png"><img src="https://svbtleusercontent.com/diozord5fvahpa_small.png" alt="Screenshot 2016-10-01 11.50.13.png"></a></p>
<p>This is different in the case where there is no intersection. Then there is at least one barrier on which the projections of the shapes do not form a single solid line segment. In this example it’s the purple barrier, and you can see that the normal to this barrier actually shows the separation of the shapes (purple dotted line). How do we check if the projections of the shapes onto the barrier intersect? We can project those segments again down onto something simple like a horizontal line and see if their endpoints overlap.</p>
<p><a href="https://svbtleusercontent.com/pl9jjtwrtkb6lq.png"><img src="https://svbtleusercontent.com/pl9jjtwrtkb6lq_small.png" alt="Screenshot 2016-10-01 12.03.15.png"></a></p>
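For convex polygons that idea becomes the separating axis test: each edge normal is a candidate barrier, and two shapes are disjoint exactly when some normal separates their 1D projections. A minimal sketch (sat_intersect and project are names of my own):

```python
def project(poly, axis):
    # 1D extent of the polygon projected onto the axis
    dots = [px * axis[0] + py * axis[1] for px, py in poly]
    return min(dots), max(dots)

def sat_intersect(P, Q):
    for poly in (P, Q):
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            axis = (y1 - y2, x2 - x1)  # normal of this edge
            pmin, pmax = project(P, axis)
            qmin, qmax = project(Q, axis)
            if pmax < qmin or qmax < pmin:
                return False  # found a separating axis
    return True  # no edge normal separates the shapes

# two axis-aligned squares that overlap, and one far away
A = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
B = [(1.0, 1.0), (3.0, 1.0), (3.0, 3.0), (1.0, 3.0)]
C = [(5.0, 0.0), (6.0, 0.0), (6.0, 1.0), (5.0, 1.0)]
```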
<p>Armed with some fast polygon intersection algorithms we can go back to our original problem and try various points in time and check whether the polygons intersect. This is still not great because there is no guarantee that they <strong>will</strong> intersect and even then, what we are actually looking for is the range of times during which the shapes intersect so we can compute the maximum area of overlap. </p>
<p>Let’s try another approach. First let’s make a simplification and assume that one of the shapes is stationary and the other is moving relative to it. Let’s project each point on our moving polygon out in the direction of its velocity. </p>
<p><a href="https://svbtleusercontent.com/iuy1yyercc7z1g.png"><img src="https://svbtleusercontent.com/iuy1yyercc7z1g_small.png" alt="Screenshot 2016-10-01 12.37.29.png"></a></p>
<p>These rays may or may not intersect the other polygon. In this case we have four intersections. For each intersection we can compute a time based on the velocity of the polygon. We can sort these times and return the minimum and maximum as the time range of overlap.</p>
<pre><code class="prettyprint">import math

def overlap_range(P, Q, V):
    # P and Q are polygons (Q with a shapely-style exterior); V is Q's velocity
    intersection_times = []
    for x, y in zip(*Q.exterior.xy):
        # simulate a ray as a long line segment
        SCALE = 1e10
        segment_x = [x, x + SCALE*V[0]]
        segment_y = [y, y + SCALE*V[1]]
        line = [segment_x, segment_y]
        points = intersections(P, line)  # helper: ray/polygon intersection points
        for point in points:
            ix, iy = point.x, point.y
            # time of intersection
            when = math.hypot(x-ix, y-iy) / math.hypot(V[0], V[1])
            intersection_times.append(when)
    intersection_times.sort()
    if len(intersection_times) == 0:
        print("polygons never overlap")
        return None
    elif len(intersection_times) == 1:
        print("polygons touch but don't overlap")
    return intersection_times[0], intersection_times[-1]
</code></pre>
<p>Here each point on the polygon creates a line segment and that is tested against each edge of the other polygon. So we’ve handled the issues of when the polygons don’t ever intersect but we can do even better with <a href="https://en.wikipedia.org/wiki/Minkowski_space">Minkowski geometry</a>. We’ll use something called the Minkowski difference between two sets of points. In image processing it’s related to <a href="https://en.wikipedia.org/wiki/Erosion_(morphology)">erosion</a>. What this does is take one shape, mirror it about the origin and then compute the Minkowski sum of the mirrored shape and the other one. The Minkowski sum is related to <a href="https://en.wikipedia.org/wiki/Dilation_(morphology)">dilation</a> and for two sets of points <code class="prettyprint">P</code> and <code class="prettyprint">Q</code> is defined as all points <code class="prettyprint">p+q</code> where <code class="prettyprint">p</code> is in <code class="prettyprint">P</code> and <code class="prettyprint">q</code> is in <code class="prettyprint">Q</code>. </p>
<p>Don’t worry too much about the definitions. Here is the <strong>key point</strong> to understand. We are looking to compute whether the two polygons intersect. If they intersect there is a point that lies inside both of them, so mirroring one polygon and computing the Minkowski sum creates a polygon that contains the origin.</p>
<p>Here’s some code to compute the Minkowski difference between two polygons. Since both sets are convex we take the convex hull of the resulting polygon to create a new convex polygon.</p>
<pre><code class="prettyprint">def minkowski_difference(P, Q):
    R = []
    for i in xrange(len(P)):
        for j in xrange(len(Q)):
            # difference of the x and y coordinates of each pair of points
            R.append((P[i][0] - Q[j][0], P[i][1] - Q[j][1]))
    return convex_hull(R)
</code></pre>
<p>The <a href="https://en.wikipedia.org/wiki/Convex_hull">convex hull</a> is just the minimum sized convex polygon that encloses all the points. If you hammer a bunch of nails into a board and stretch an elastic band around all the nails, the nails that touch the elastic band are the convex hull. It can be computed in <code class="prettyprint">O(N*log(N))</code> with a <a href="https://en.wikipedia.org/wiki/Graham_scan">Graham Scan</a>. Here’s an image showing two sets of points (red and blue) and their corresponding convex hulls. It also shows the intersection in yellow, which is the convex hull of the points in both polygons and the intersection points. Having seen this, go back and confirm the <strong>key point</strong> above: if the polygons intersect, their Minkowski difference contains the origin.</p>
<p><a href="https://svbtleusercontent.com/swbpyosqm1fgg.png"><img src="https://svbtleusercontent.com/swbpyosqm1fgg_small.png" alt="Screenshot 2016-10-02 14.06.38.png"></a></p>
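<p>For reference, the <code class="prettyprint">convex_hull</code> helper used in the Minkowski difference code could be implemented with the closely related monotone chain algorithm, which also runs in <code class="prettyprint">O(N*log(N))</code>. This is only a sketch and assumes points are <code class="prettyprint">(x, y)</code> tuples:</p>
<pre><code class="prettyprint lang-python">def convex_hull(points):
    # monotone chain: sort the points, then build lower and upper hulls
    points = sorted(set(points))
    if len(points) <= 2:
        return points

    def cross(o, a, b):
        # z-component of the cross product of vectors o->a and o->b
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

    lower = []
    for p in points:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(points):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # each list repeats the first point of the other, so drop those
    return lower[:-1] + upper[:-1]
</code></pre>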
<p>Now instead of having two polygons we have one green polygon that is the Minkowski difference of the other two. In addition, from the definition of the Minkowski difference, we know that if the origin is inside this polygon the two comprising polygons intersect one another. This is a really important fact which lets us compute collisions really fast and, more importantly, when the collision will happen. We can also compute the first and last points of intersection of these polygons using a single ray from the origin in the direction of the relative velocity of the polygons.</p>
<p><a href="https://svbtleusercontent.com/ofqg5xpm85tfwq.png"><img src="https://svbtleusercontent.com/ofqg5xpm85tfwq_small.png" alt="Screenshot 2016-10-01 13.18.39.png"></a></p>
<p>Here’s an illustration of what is happening in both normal and Minkowski space. You can see the blue and red polygons passing through one another, while the green polygon representing the Minkowski difference between the red and blue polygons moves through the origin at the same time.</p>
<p><a href="https://svbtleusercontent.com/ynfpafrpvx0dpw.gif"><img src="https://svbtleusercontent.com/ynfpafrpvx0dpw_small.gif" alt="output_XH5Mwd.gif"></a></p>
<p>Once we’ve retrieved this range (the first and last intersection times) we can sample some points in that range and compute the overlap. The intersection of two convex polygons is another convex polygon: the convex hull of the intersection points and the points that lie inside both polygons. Using our convex hull function and our intersection function we can compute this polygon and then use the <a href="https://en.wikipedia.org/wiki/Shoelace_formula">Surveyor’s Algorithm</a> to compute the area.</p>
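<p>The Surveyor’s (shoelace) formula itself is only a few lines. A sketch, assuming the vertices are given in order around the polygon:</p>
<pre><code class="prettyprint lang-python">def polygon_area(vertices):
    # shoelace formula: sum the cross products of consecutive
    # vertices and halve the absolute value (either winding works)
    total = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0
</code></pre>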
<p>Finally putting all the pieces together we have an algorithm that takes the Minkowski difference of two polygons then computes the (generally) two points of intersection of the ray from the origin to the Minkowski difference polygon. Using the times of the two intersections we can compute the overlap of the two polygons as a function of time. Plotting the result we get this.</p>
<p><a href="https://svbtleusercontent.com/gloajfbi5lwdna.png"><img src="https://svbtleusercontent.com/gloajfbi5lwdna_small.png" alt="Screenshot 2016-10-01 12.55.45.png"></a></p>
<p>The final task remains to compute the maximum of this function. It seems that the overlap is unimodal, where the maximum is reached if one shape is entirely inside the other. There is a proof in this <a href="https://hal.archives-ouvertes.fr/inria-00073859/document">paper</a>. Since the function is unimodal we can use a <a href="https://en.wikipedia.org/wiki/Ternary_search">ternary search</a> to quickly compute the maximum in <code class="prettyprint">O(log N)</code>.</p>
<pre><code class="prettyprint">def findMax(objectiveFunc, lower, upper):
    if abs(upper - lower) < 1e-6:
        return (lower + upper) / 2.0
    lowerThird = (2*lower + upper) / 3.0
    upperThird = (lower + 2*upper) / 3.0
    if objectiveFunc(lowerThird) < objectiveFunc(upperThird):
        return findMax(objectiveFunc, lowerThird, upper)
    else:
        return findMax(objectiveFunc, lower, upperThird)
</code></pre>
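<p>As a quick sanity check, an iterative version of the same search recovers the peak of a simple unimodal function:</p>
<pre><code class="prettyprint lang-python">def find_max(objective, lower, upper):
    # iterative ternary search: repeatedly discard the third of
    # the interval that cannot contain the maximum
    while upper - lower > 1e-6:
        lower_third = (2 * lower + upper) / 3.0
        upper_third = (lower + 2 * upper) / 3.0
        if objective(lower_third) < objective(upper_third):
            lower = lower_third
        else:
            upper = upper_third
    return (lower + upper) / 2.0

# a parabola with its peak at x = 2
x_best = find_max(lambda x: -(x - 2.0) ** 2, 0.0, 10.0)
</code></pre>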
<p>Minkowski geometry extends to <code class="prettyprint">N</code> dimensions and the principles stay the same - which can make it easier to do things like collision detection and response in 3 dimensions, where the more simplistic methods don’t generalize well. This question was posed at the ACM ICPC World Finals. </p>
tag:nickp.svbtle.com,2014:Post/telephone-tapping2016-09-18T01:23:06-07:002016-09-18T01:23:06-07:00Telephone Wiretapping <p>Imagine you are looking to intercept a communication that can happen between two people over a telephone network. Let’s say that the two people in question are part of a larger group, who all communicate with each other (sometimes via other people in the network if they don’t have their phone number). We can represent this as a graph where the vertices are people and edges connect two people if they have each other in their phone books. </p>
<p>Here’s a network with <code class="prettyprint">6</code> people, some of whom don’t directly communicate with each other but can do so through others. Each person can reach all the others though, so the graph is connected. </p>
<p><a href="https://svbtleusercontent.com/mu1rhu10ijcurg.jpg"><img src="https://svbtleusercontent.com/mu1rhu10ijcurg_small.jpg" alt="graph.jpg"></a></p>
<p>Let’s also assume that this group of people communicate efficiently and use the smallest amount of calls possible and always distribute information to every person. If an unknown member of the network wants to communicate some nefarious plans to all the other members they call some people who in turn spread the message through the network by making more calls while adhering to the rules above. If we can tap a single link between two people what is the probability of intercepting one of these calls? Let’s work through an example of a network of 4 people who can each communicate with two others. The graph looks like this:</p>
<p><a href="https://svbtleusercontent.com/gvrltcrazv8ukq.png"><img src="https://svbtleusercontent.com/gvrltcrazv8ukq_small.png" alt="Screenshot 2016-09-18 00.50.46.png"></a></p>
<p>If we tap the link connecting <code class="prettyprint">0</code> and <code class="prettyprint">1</code> there is only one way to communicate to all members without using this tapped link, whereas there are <code class="prettyprint">3</code> that do use it. This means the probability of intercepting the information is <code class="prettyprint">0.75</code>. The small images represent the ways in which the communication can happen, and those that use the tapped link (and will be intercepted) are highlighted.</p>
<p>There are a couple of important things to note at this point. Firstly the links chosen to communicate over form a <a href="https://en.wikipedia.org/wiki/Spanning_tree">spanning tree</a> of the graph. This is an important property as a spanning tree has one less edge than the number of nodes and doesn’t contain any cycles. Cycles would mean that the communication has not been efficient because we could remove an edge on the cycle and still have the information reach all the people. </p>
<p>Let’s work through another example and compute the probability of intercepting the communication if we tap a specific link. Here is another graph. It represents <code class="prettyprint">4</code> people but this time there are <code class="prettyprint">6</code> links. Everyone can communicate with everyone else. Let’s tap the top link - highlighted in yellow.</p>
<p><a href="https://svbtleusercontent.com/atkdvbglghrxua.png"><img src="https://svbtleusercontent.com/atkdvbglghrxua_small.png" alt="Screenshot 2016-09-17 20.37.48.png"></a></p>
<p>Now let’s enumerate all the spanning trees of this graph manually. Notice that each spanning tree connects all the vertices in the original graph using fewer edges - in particular <code class="prettyprint">3</code> edges, which is one less than the number of vertices. Adding another edge would create a cycle. There are <code class="prettyprint">16</code> different spanning trees and <code class="prettyprint">8</code> of them (highlighted in yellow) use the link we have tapped. This means the probability of intercepting the transmission is <code class="prettyprint">8.0 / 16.0 = 0.5</code>.</p>
<p><a href="https://svbtleusercontent.com/x0lbh6pnfuqhg.png"><img src="https://svbtleusercontent.com/x0lbh6pnfuqhg_small.png" alt="Screenshot 2016-09-17 20.37.52.png"></a></p>
<p>Cool! So to solve this problem we need to count the number of spanning trees of a graph that uses a specified edge - call that value <code class="prettyprint">A</code>. Then compute the number of spanning trees that the graph has - call that value <code class="prettyprint">B</code>. The probability of intercepting the communication on the tapped link is <code class="prettyprint">A/B</code>. </p>
<p>The number of spanning trees that use a specific edge can be computed by collapsing the vertices at each end of that edge into one vertex and computing the number of spanning trees for that new multi-graph. For example, for the cross-box graph above, if we want to find the number of spanning trees that use the top edge we collapse it and generate the graph on the right, which indeed has <code class="prettyprint">8</code> spanning trees. Remember this can create multiple edges between the same pair of vertices.</p>
<p><a href="https://svbtleusercontent.com/bklqjh9ylqwxra.png"><img src="https://svbtleusercontent.com/bklqjh9ylqwxra_small.png" alt="Screenshot 2016-09-18 00.51.45.png"></a></p>
<p>Enumerating all the spanning trees is not a feasible option as this number grows really quickly. In fact <a href="https://en.wikipedia.org/wiki/Cayley%27s_formula">Cayley’s formula</a> gives the number of spanning trees of a complete graph of size <code class="prettyprint">K</code> as <code class="prettyprint">K ** (K-2)</code>. </p>
<p>Instead we can use Kirchhoff’s matrix tree theorem, which tells us that if we have a graph represented by an adjacency matrix <code class="prettyprint">G</code> we can count the number of spanning trees as:</p>
<p><img src="https://wikimedia.org/api/rest_v1/media/math/render/svg/4bd618b8b7f0e5506ca460b410160f107bc2436f" alt="graph.jpg"></p>
<p>Where the lambdas are the non-zero eigenvalues of the Laplacian matrix associated with <code class="prettyprint">G</code>. It’s actually easier and more numerically stable to compute the determinant of a cofactor of the Laplacian, which gives the same result. The Laplacian matrix is used to compute lots of useful properties of graphs. It is equal to the degree matrix minus the adjacency matrix:</p>
<p><img src="https://wikimedia.org/api/rest_v1/media/math/render/svg/712994d22cc3a9e0bd6148764a17c1628f843062" alt="graph.pj"></p>
<p>Computing the Laplacian from an adjacency matrix can be done with this code:</p>
<pre><code class="prettyprint lang-python"># compute the Laplacian of the adjacency matrix
def laplacian(A):
    L = -A
    for a in xrange(L.shape[0]):
        for b in xrange(L.shape[1]):
            if A[a][b]:
                L[a][a] += A[a][b] # increase degree
    return L
</code></pre>
<p>Using this we can compute the cofactor. </p>
<pre><code class="prettyprint lang-python">def cofactor(L, factor=1.0):
    Q = L[1::, 1::] # bottom right minor
    return np.linalg.det(Q / factor)
</code></pre>
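<p>As a sanity check, here is a small self-contained sketch (using NumPy, with the Laplacian built directly as degree matrix minus adjacency matrix) that reproduces the count of <code class="prettyprint">16</code> spanning trees for the fully connected 4-person network:</p>
<pre><code class="prettyprint lang-python">import numpy as np

def count_spanning_trees(A):
    # Kirchhoff: the determinant of any cofactor of the Laplacian
    L = np.diag(A.sum(axis=1)) - A
    return int(round(np.linalg.det(L[1:, 1:])))

# complete graph on 4 vertices: Cayley's formula gives 4**(4-2) = 16
K4 = np.ones((4, 4)) - np.eye(4)
trees = count_spanning_trees(K4)  # 16
</code></pre>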
<p>I also added a scaling parameter to the cofactor computation. The determinants can get really big when the network has thousands of vertices, in which case computing the numerator and denominator of the probability can result in overflow. If we take some factor <code class="prettyprint">factor</code> out of the Laplacian matrix before computing the determinant we reduce the value by <code class="prettyprint">factor ** N</code> where <code class="prettyprint">N</code> is the size of the matrix. This lets us compute the probability for large matrices, because the factors almost totally cancel out - the matrix dimensions differ by only 1.</p>
<pre><code class="prettyprint lang-python">def probability(G1, G2):
    factor = 24.0
    # det(A) = f**n * det(A/f)
    L1 = laplacian(G1) # collapsed graph, one fewer vertex
    L2 = laplacian(G2)
    Q1 = cofactor(L1, factor=factor)
    Q2 = cofactor(L2, factor=factor)
    # f**(n-2) * det(L1/f)
    # --------------------
    # f**(n-1) * det(L2/f)
    return Q1 / Q2 / factor
</code></pre>
<p>Using this we can go through each edge in the graph and compute the probability of intercepting if we tap that edge. This value will change depending on the graph, and any <a href="https://en.wikipedia.org/wiki/Biconnected_component">bridge</a> edges (whose removal would disconnect the graph) will have probability <code class="prettyprint">1.0</code>.</p>
tag:nickp.svbtle.com,2014:Post/geometric-cliques2016-09-17T18:57:18-07:002016-09-17T18:57:18-07:00Geometric Cliques<p>If you have <code class="prettyprint">N</code> points in the plane, what is the largest subset of those points such that each point is within a distance <code class="prettyprint">D</code> of all the others? It seems pretty innocuous, right? Turns out it’s a great big beautiful disaster.</p>
<p>Here’s an example of <code class="prettyprint">N = 10</code> points where the maximum subset is the set of points connected with dotted lines, which are each within a distance <code class="prettyprint">D</code> of each other. There is no bigger set. </p>
<p><a href="https://svbtleusercontent.com/vugyu6dq2zjtfa.png"><img src="https://svbtleusercontent.com/vugyu6dq2zjtfa_small.png" alt="Screenshot 2016-09-17 19.35.21.png"></a></p>
<p>Trying to compute every subset of points and checking if they are all within <code class="prettyprint">D</code> of each other will take exponential time <code class="prettyprint">O(2^N)</code>. So we need a better approach. Let’s try picking a point which we will assume to be part of this clique. Then all the candidate points that are not within <code class="prettyprint">D</code> of that point won’t be in the clique. For example if we have three co-linear points spaced by <code class="prettyprint">D</code> and select the middle one to build our clique, then either the left point can be in the clique or the right one, but not both. This does give us a heuristic: we can take each point, filter out points that aren’t within <code class="prettyprint">D</code>, and then run a brute force search to find the maximal clique. </p>
<p>Let’s try and do better, as this could still be exponential depending on how the points are clustered. Let’s start with a bunch of points:</p>
<p><a href="https://svbtleusercontent.com/p9f5oddzlaidma.png"><img src="https://svbtleusercontent.com/p9f5oddzlaidma_small.png" alt="Screenshot 2016-09-19 20.06.26.png"></a> </p>
<p>Pick any two points and assume that they are going to be the furthest points apart in our clique; let this distance be <code class="prettyprint">F</code>, so <code class="prettyprint">F <= D</code>.</p>
<p><a href="https://svbtleusercontent.com/hmcqnuymt1x5w.png"><img src="https://svbtleusercontent.com/hmcqnuymt1x5w_small.png" alt="Screenshot 2016-09-19 20.07.49.png"></a></p>
<p>If we filter out all points more than <code class="prettyprint">F</code> from these two points we get this situation:</p>
<pre><code class="prettyprint lang-python"># try all candidates for the furthest pair of points
for i in xrange(N):
    for j in xrange(i + 1, N):
        xi, yi = points[i]
        xj, yj = points[j]
        if distance(xi, yi, xj, yj) > D: continue
        # furthest pair in our clique
        F = distance(xi, yi, xj, yj)
        lens = []
        for k in xrange(N):
            if k == i: continue
            if k == j: continue
            xk, yk = points[k]
            if distance(xi, yi, xk, yk) > F: continue
            if distance(xj, yj, xk, yk) > F: continue
            lens.append(points[k])
</code></pre>
<p><a href="https://svbtleusercontent.com/1rn8hmvwkuorq.png"><img src="https://svbtleusercontent.com/1rn8hmvwkuorq_small.png" alt="Screenshot 2016-09-19 20.08.13.png"></a></p>
<p>Points inside the intersection of these two circles - the lens shape - are within <code class="prettyprint">F</code> of both points at the end of the line, and <code class="prettyprint">F <= D</code>. I thought this was the end of the story. But we can’t simply select all of these points as a clique because they may not be within <code class="prettyprint">D</code> of each other. For example the top and bottom points in the lens shape might be further than <code class="prettyprint">D</code> apart, so we need to do some more work.</p>
<p>First note that all the points above the dotted line are within <code class="prettyprint">F</code>, and therefore <code class="prettyprint">D</code>, of each other so they are a potential clique, as are the points below the dotted line. But there may be a bigger clique incorporating points from both sides of the line. If we pick a certain point below the line for our clique we are forbidden from picking any points more than <code class="prettyprint">D</code> away from that point. Incidentally these <u>forbidden</u> points will all lie on the other side of the line. Take a moment, look at the picture above and make sure you are happy with that.</p>
<p>Now let’s separate the points inside the lens shape into two sets: those above the dotted line and those below. This can be done by taking the signed area of the triangle created between the two points at the end of the line and the point in question. If the area is positive the point is above the line and if it’s negative it’s below the line.</p>
<pre><code class="prettyprint lang-python">top, bot = [], []
M = len(lens)
for k in xrange(M):
    xk, yk = lens[k]
    if area((xi, yi), (xj, yj), (xk, yk)) >= 0:
        top.append(lens[k])
    else:
        bot.append(lens[k])
</code></pre>
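<p>The <code class="prettyprint">area</code> helper used above can be sketched with a cross product; it returns a positive value when the third point lies to the left of the directed line through the first two:</p>
<pre><code class="prettyprint lang-python">def area(p, q, r):
    # twice the signed area of triangle p-q-r via the cross product
    (px, py), (qx, qy), (rx, ry) = p, q, r
    return (qx - px) * (ry - py) - (qy - py) * (rx - px)
</code></pre>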
<p>The situation is now like this:</p>
<p><a href="https://svbtleusercontent.com/yzkqab8nih7suw.png"><img src="https://svbtleusercontent.com/yzkqab8nih7suw_small.png" alt="Screenshot 2016-09-19 20.10.00.png"></a></p>
<p>Let’s now treat each point as a vertex and connect vertices in one set with vertices in the other if they are further than <code class="prettyprint">D</code> apart. Using fewer points so it’s not as cluttered, the situation could look like this. Red lines denote points further than <code class="prettyprint">D</code> apart, and some points don’t have edges connected to them. This is fine, it just means all other points are within <code class="prettyprint">D</code> of them.</p>
<p><a href="https://svbtleusercontent.com/1tbc28fm0xulgg.png"><img src="https://svbtleusercontent.com/1tbc28fm0xulgg_small.png" alt="Screenshot 2016-09-17 21.36.39.png"></a></p>
<p>We’ve now constructed a bipartite graph representing some of the geometric constraints we are interested in. So our task is to select the maximum number of points from this set such that we don’t have any two points connected by a red line, because that means they are too far apart. Vertices with no edges connected to them are freebies. They are within <code class="prettyprint">D</code> of all other points so there’s no reason not to pick them. </p>
<p>Our problem of selecting the maximum number of vertices in a graph such that no two of them share an edge is the problem of computing the <a href="https://en.wikipedia.org/wiki/Independent_set_(graph_theory)">maximum independent set</a> of a graph. This is unfortunately NP-Complete in a general graph, but in a bipartite graph <a href="https://en.wikipedia.org/wiki/K%C5%91nig%27s_theorem_(graph_theory)">König’s theorem</a> tells us that the size of a maximum matching equals the size of a minimum vertex cover, and the complement of a minimum vertex cover is a maximum independent set. The maximum matching can in turn be computed as a <a href="https://en.wikipedia.org/wiki/Maximum_flow_problem">maximum-flow</a>. </p>
<p>Without the bipartite structure, computing the maximum independent set is NP-Complete, whereas maximum-flow can be computed in a general graph in polynomial time, so we’re in much better shape. We compute the maximum flow <code class="prettyprint">f</code> of this graph (with unit capacities, so that <code class="prettyprint">f</code> is the size of a maximum matching), and our solution for this pair of points is the total number of lens vertices minus <code class="prettyprint">f</code>, plus <code class="prettyprint">2</code> for the end points of the line.</p>
<pre><code class="prettyprint lang-python">def max_bipartite_independent(top, bot):
    graph = collections.defaultdict(dict)
    src = 'SOURCE-NODE'
    snk = 'SINK-NODE'
    for i in xrange(len(top)):
        for j in xrange(len(bot)):
            xi, yi = top[i]
            xj, yj = bot[j]
            if distance(xi, yi, xj, yj) > F:
                node_i = 'TOP' + str(i)
                node_j = 'BOT' + str(j)
                graph[node_i][node_j] = 1
                # unit capacities so the flow is a maximum matching
                graph[src][node_i] = 1
                graph[node_j][snk] = 1
    f = flow.max_flow(graph, src, snk)
    # Konig: |max independent set| = |V| - |max matching|
    solution = len(top) + len(bot) - f + 2
    return solution
</code></pre>
<p>There are a few different algorithms to compute maximum flow. The following is a simple implementation of the Ford-Fulkerson algorithm, using Dijkstra’s algorithm with unit edge lengths (which amounts to a breadth-first search) to find augmenting paths. </p>
<pre><code class="prettyprint lang-python">import Queue

def dijkstra(graph, source, sink):
    q = Queue.PriorityQueue()
    q.put((0, source, []))
    visited = set([source])
    while not q.empty():
        length, node, path = q.get()
        # found a path, return its bottleneck capacity
        if node == sink:
            cap = None
            for a, b in path:
                if cap is None or graph[a][b] < cap:
                    cap = graph[a][b]
            return cap, path
        # visit the next node
        for child in graph[node].keys():
            if not child in visited and graph[node][child] > 0:
                next_state = (length + 1, child, path + [(node, child)])
                visited.add(child)
                q.put(next_state)
    # no paths remaining
    return None, None
</code></pre>
<p>And the remaining code to compute the maximum-flow of the graph:</p>
<pre><code class="prettyprint lang-python">def max_flow(graph, source, sink):
    flow = 0
    while True:
        capacity, path = dijkstra(graph, source, sink)
        if not capacity: return flow
        # augment along the path and update the residual graph
        for a, b in path:
            graph[a][b] = graph[a].get(b, 0) - capacity
            graph[b][a] = graph[b].get(a, 0) + capacity
        flow += capacity
</code></pre>
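<p>To sanity-check the flow code, here is a compact self-contained variant of the same Ford-Fulkerson loop (using a plain breadth-first search instead of a priority queue) run on a tiny hypothetical bipartite graph whose maximum matching is clearly <code class="prettyprint">2</code>:</p>
<pre><code class="prettyprint lang-python">import collections

def bfs_path(graph, source, sink):
    # shortest augmenting path by edge count (Edmonds-Karp style)
    queue = collections.deque([(source, [])])
    visited = set([source])
    while queue:
        node, path = queue.popleft()
        if node == sink:
            # bottleneck capacity along the path
            return min(graph[a][b] for a, b in path), path
        for child, cap in graph[node].items():
            if child not in visited and cap > 0:
                visited.add(child)
                queue.append((child, path + [(node, child)]))
    return None, None

def max_flow(graph, source, sink):
    total = 0
    while True:
        capacity, path = bfs_path(graph, source, sink)
        if not capacity:
            return total
        for a, b in path:
            graph[a][b] = graph[a].get(b, 0) - capacity
            graph[b][a] = graph[b].get(a, 0) + capacity
        total += capacity

# two vertices on each side, unit capacities throughout
g = collections.defaultdict(dict)
g['S']['A1'] = 1; g['S']['A2'] = 1
g['A1']['B1'] = 1; g['A1']['B2'] = 1
g['A2']['B1'] = 1
g['B1']['T'] = 1; g['B2']['T'] = 1
f = max_flow(g, 'S', 'T')  # 2
</code></pre>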
<p>Cool, so we’ve finally got all the pieces needed to solve this problem. We try every pair of points <code class="prettyprint">O(N^2)</code> as the candidate for the two furthest points in our clique, then from the points that fall inside the lens we build a bipartite graph and compute the maximum independent set, which corresponds to the maximum clique size. </p>
<p>All in, the algorithm takes <code class="prettyprint">O(V^2)</code> attempting each pair of candidate points, but with the maximum-flow inner loop it’s <code class="prettyprint">O(V^5)</code>. This appears to be optimal without using a faster maximum matcher. Full source code is <a href="https://gist.github.com/nickponline/abce6170b96043a4c372feef590388d3">here</a>, <a href="https://gist.github.com/nickponline/8a25d6a393de50580dfe440693c6abc5">maximum flow code</a> and a little <a href="https://gist.github.com/nickponline/dcdd02adbb7b7bcd595e53349135cfd8">visualizer</a>. </p>
tag:nickp.svbtle.com,2014:Post/minimum-image-cover2016-07-25T12:24:23-07:002016-07-25T12:24:23-07:00Minimum Image Cover<p>Some applications of photogrammetry require us to collect a number of overlapping aerial images covering an area. The more overlap the better, as more overlap between pairs of images gives a better result in some applications. However in other applications we are actually looking for <u>as few images</u> as possible from the set that still cover the area of interest without any gaps.</p>
<p><a href="https://svbtleusercontent.com/gab8uaudo7b2q.jpg"><img src="https://svbtleusercontent.com/gab8uaudo7b2q_small.jpg" alt="dji-inspire-1-drone-bh1.jpg"></a></p>
<p>Framed another way - given a collection of sets, what is the minimum number of those sets that need to be selected such that their union is the union of all of the sets. The sets in our case are defined by taking each location on the ground as a <code class="prettyprint">longitude,latitude</code> pair and then finding the set of images that can see that location. We’ll talk about how to enumerate these locations on the ground later. This is called the set cover problem.</p>
<p><a href="https://svbtleusercontent.com/vqdikmneu7pka.png"><img src="https://svbtleusercontent.com/vqdikmneu7pka_small.png" alt="Screenshot 2016-07-25 11.53.04.png"></a></p>
<h2 id="camera-geometry_2">Camera Geometry <a class="head_anchor" href="#camera-geometry_2">#</a></h2>
<p>Before we start solving the problem let’s generate a dataset of aerial imagery. In order to do that we need to represent an aerial camera at some point in space and the direction in which it is pointing. The location and orientation of the camera together are called the pose and are represented by six values - longitude, latitude, altitude, yaw, pitch and roll. The first three represent position and the last three represent orientation. In addition to this, cameras have a number of intrinsic parameters. For the purposes of our data set we are just going to consider focal length (<code class="prettyprint">F</code>) and sensor size. Focal length is the distance over which the camera lens focuses light onto the sensor. The shorter the focal length the wider the field of view; the longer the focal length the smaller the field of view but the finer the ground sampling distance (which is measured in centimeters per pixel). Sensor size is the size of the CCD in the camera; larger sensors (at the same resolution) can represent a larger scene. There are a few different conventions for sensor size and as a result focal length is sometimes given as an <u>equivalent</u> focal length as if the sensor were a certain size. In this case we’ll assume that our camera has a 35mm-equivalent focal length of 20mm, which oddly (by convention) means that the sensor size is 24x36 mm. From these parameters - pose, focal length and sensor size - we can draw this diagram that describes the area on the ground (footprint) that the camera can see, which is related to the field of view by the altitude.</p>
<p><a href="https://svbtleusercontent.com/wneu64xxyl2jza.jpg"><img src="https://svbtleusercontent.com/wneu64xxyl2jza_small.jpg" alt="focal-length-fov-sensor-size.jpg"></a></p>
<p>This image assumes that the camera is pointing straight down, which would generate a clean rectangular footprint (rectangular because the sensor is not square). In reality this is not the case: the aircraft is bumping around and the camera is moving slightly, which generates a quadrilateral footprint and causes some perspective distortion. In order to compute the footprint accurately we can represent the camera with 3 vectors - <code class="prettyprint">position</code>, <code class="prettyprint">lookat</code> and <code class="prettyprint">up</code>. The <code class="prettyprint">position</code> vector is the location of the camera in space. The <code class="prettyprint">lookat</code> vector is a unit vector in the direction the camera is pointing and the <code class="prettyprint">up</code> vector is a unit vector out of the top of the camera to disambiguate upside-down images. We can apply a rotation matrix computed from <code class="prettyprint">roll</code>, <code class="prettyprint">pitch</code> and <code class="prettyprint">yaw</code> to the <code class="prettyprint">up</code> and <code class="prettyprint">lookat</code> vectors about the camera <code class="prettyprint">position</code> to orientate the camera like this.</p>
<pre><code class="prettyprint lang-python">import numpy as np

def rotate(vector, yaw, pitch, roll):
    Rotz = np.array([
        [ np.cos(yaw), np.sin(yaw), 0],
        [-np.sin(yaw), np.cos(yaw), 0],
        [0, 0, 1]
    ])
    Rotx = np.array([
        [1, 0, 0],
        [0,  np.cos(pitch), np.sin(pitch)],
        [0, -np.sin(pitch), np.cos(pitch)]
    ])
    Roty = np.array([
        [np.cos(roll), 0, -np.sin(roll)],
        [0, 1, 0],
        [np.sin(roll), 0, np.cos(roll)]
    ])
    rotation_matrix = np.dot(Rotz, np.dot(Roty, Rotx))
    return np.dot(rotation_matrix, vector)

up = np.array([0, 1, 0])
position = np.array([0, 0, 80000]) # 80 meters up, in millimeters
lookat = np.array([0, 0, -1]) # pointing down
# numpy trig works in radians, so convert the angles
yaw, pitch, roll = np.radians([45, 5, 5])
up = rotate(up, yaw, pitch, roll)
lookat = rotate(lookat, yaw, pitch, roll)
</code></pre>
<p>Now we have a camera pointing in the correct direction we need to compute the four corners of the footprint on the ground. We can do this geometrically by placing the camera sensor plane <code class="prettyprint">F</code>-mm away from the camera position in the direction of the <code class="prettyprint">lookat</code> and then projecting a ray from the camera position through each of the four corners of the sensor and into the ground. The points at which the rays intersect the ground - assuming the ground is flat (the ground is not flat) - are the four corners of the footprint.</p>
<pre><code class="prettyprint">def ground_projection(position, corner):
    # intersect the ray from the camera position through a
    # sensor corner with the ground plane z = 0
    k = -position[2] / (corner[2] - position[2])
    return position + (corner - position) * k
</code></pre>
<p>Using this code we can generate a bunch of overlapping camera positions covering an area. Here’s a dataset of 100 images generated as if a DJI Phantom 4 (<code class="prettyprint">F</code>=3.6) was flying over a 250 square meter area at 400m. Now we can return to our set cover problem of finding the minimum number of images that cover the whole area. The whole area in this case is the union of the footprints of the individual images (right). Let’s lay a lattice over the ground to discretize it (left); this is a good approximation of the ground plane and makes the problem easier to solve. We’re now looking for the minimum number of images that cover all the lattice points.</p>
<p><a href="https://svbtleusercontent.com/imtejvsz7xlu1q.png"><img src="https://svbtleusercontent.com/imtejvsz7xlu1q_small.png" alt="Screenshot 2016-07-25 11.32.19.png"></a></p>
<h2 id="greedy_2">Greedy <a class="head_anchor" href="#greedy_2">#</a></h2>
<p>First let’s try a greedy approach: select the image that covers the most uncovered lattice points and add it to the solution set. Continue until we have covered all the lattice points. This is easy to implement and runs fast, however the result isn’t always optimal and we may take more images than necessary.</p>
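<p>The greedy heuristic is only a few lines. A sketch, assuming each image’s footprint has already been reduced to the set of lattice points it covers (the <code class="prettyprint">footprints</code> input here is hypothetical):</p>
<pre><code class="prettyprint lang-python">def greedy_cover(lattice_points, footprints):
    # footprints[i] is the set of lattice points image i can see
    uncovered = set(lattice_points)
    chosen = []
    while uncovered:
        # the image that covers the most still-uncovered points
        best = max(range(len(footprints)),
                   key=lambda i: len(footprints[i] & uncovered))
        if not footprints[best] & uncovered:
            break  # remaining points are not covered by any image
        chosen.append(best)
        uncovered -= footprints[best]
    return chosen
</code></pre>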
<h2 id="integer-linear-programming_2">Integer Linear Programming <a class="head_anchor" href="#integer-linear-programming_2">#</a></h2>
<p>Let’s think about the problem another way. Let’s take an example with 6 positions on the ground and 5 cameras. Let the 6 positions be variables <code class="prettyprint">x1, x2, x3, x4, x5, x6</code> and the 5 cameras are sets <code class="prettyprint">s1 = {x1, x2}, s2 = {x3, x4}, s3 = {x5, x6}, s4 = {x1, x2, x3}, s5 = {x4, x5, x6}</code>.</p>
<pre><code class="prettyprint">[x1 x2 x3 x4 x5 x6]
[ s1 ][ s2 ][ s3 ]
[ s4 ][ s5 ]
</code></pre>
<p>Let’s assign an <u>inclusion</u> variable <code class="prettyprint">i1 .. i5</code> to each set. We would like to minimize the sum of <code class="prettyprint">i1 ... i5</code> such that for each position on the ground we have included at least one of the cameras containing it. There is one constraint for each element, with the possibility of duplicate constraints (which can be ignored) if two elements are covered by exactly the same sets. In this case both <code class="prettyprint">x1</code> and <code class="prettyprint">x2</code> are covered by <code class="prettyprint">s1</code> and <code class="prettyprint">s4</code>, hence the first two constraints are the same. Similarly for the last two.</p>
<pre><code class="prettyprint">s1 + s4 >= 1
s1 + s4 >= 1
s2 + s4 >= 1
s2 + s5 >= 1
s3 + s5 >= 1
s3 + s5 >= 1
</code></pre>
<p>If in addition we constrain the variables to be in <code class="prettyprint">{0, 1}</code>, the optimal solution is <code class="prettyprint">s1, s2, s3 = 0</code> and <code class="prettyprint">s4, s5 = 1</code>, which is the minimum set cover. Unfortunately 0-1 integer programming is one of <a href="https://en.wikipedia.org/wiki/Karp%27s_21_NP-complete_problems">Karp’s 21 NP-complete problems</a>, closely related to satisfiability, meaning we can’t hope to find this solution efficiently in general. But we can use a trick that gets us close.</p>
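<p>For an instance this small we can verify the optimum by brute force, trying subsets in increasing size order (exponential in general, but fine for five sets):</p>
<pre><code class="prettyprint lang-python">from itertools import combinations

sets = {"s1": {1, 2}, "s2": {3, 4}, "s3": {5, 6},
        "s4": {1, 2, 3}, "s5": {4, 5, 6}}
universe = set(range(1, 7))

def min_cover(sets, universe):
    # try subsets of increasing size; the first full cover is minimal
    for k in range(1, len(sets) + 1):
        for combo in combinations(sets, k):
            if set().union(*(sets[s] for s in combo)) == universe:
                return combo

print(min_cover(sets, universe))  # ('s4', 's5')
</code></pre>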
<h2 id="relaxed-linear-programming_2">Relaxed Linear Programming <a class="head_anchor" href="#relaxed-linear-programming_2">#</a></h2>
<p>Let’s relax the condition that the inclusion variables need to be in <code class="prettyprint">{0, 1}</code> and let them be real numbers between <code class="prettyprint">0</code> and <code class="prettyprint">1</code>. Don’t worry too much about the meaning of fractionally including a set - an example will clear that up.</p>
<pre><code class="prettyprint">[x1 x2 x3 x4 x5 x6]
[ s1 ][ s2 ][ s3 ]
</code></pre>
<p>If we relax the integral constraint here, nothing changes: every constraint involves a single variable (for example <code class="prettyprint">s1 >= 1</code> for <code class="prettyprint">x1</code>), so the optimal fractional solution is already integral, namely <code class="prettyprint">s1, s2, s3 = 1</code> with cost <code class="prettyprint">3</code>. The relaxation gets interesting when the sets overlap. Consider three positions covered pairwise:</p>
<pre><code class="prettyprint">[x1 x2 x3]
s1 = {x1, x2}, s2 = {x2, x3}, s3 = {x1, x3}
</code></pre>
<p>The constraints are <code class="prettyprint">s1 + s3 >= 1</code>, <code class="prettyprint">s1 + s2 >= 1</code> and <code class="prettyprint">s2 + s3 >= 1</code>, and the optimal fractional solution is <code class="prettyprint">s1, s2, s3 = 0.5</code> with cost <code class="prettyprint">1.5</code>. If we round each value up to <code class="prettyprint">1</code> we still get a valid cover, but it costs <code class="prettyprint">3</code>, which is worse than the optimal integral solution of <code class="prettyprint">2</code> (any two of the sets suffice). Our task is now to find a good heuristic way to round up the fractional values. One algorithm to do this is called <a href="https://en.wikipedia.org/wiki/Randomized_rounding#Randomized-rounding_algorithm_for_Set_Cover">randomized rounding</a>.</p>
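<p>A sketch of randomized rounding (the function name and patch step are my own choices): treat each fractional LP value as a probability, sample the sets independently over a few rounds, then patch any element left uncovered.</p>
<pre><code class="prettyprint lang-python">import math
import random

def randomized_round(fractional, sets, universe):
    # fractional: {set_id: LP value in [0, 1]}
    # each round, keep set s with probability fractional[s];
    # O(log n) rounds make full coverage likely
    chosen = set()
    rounds = int(2 * math.log(len(universe)) + 1)
    for _ in range(rounds):
        for sid, x in fractional.items():
            if random.random() < x:
                chosen.add(sid)
    # patch step: cover any element the sampling missed
    covered = set().union(*(sets[s] for s in chosen)) if chosen else set()
    for e in universe:
        if e not in covered:
            sid = next(s for s in sets if e in sets[s])
            chosen.add(sid)
            covered |= sets[sid]
    return chosen

sets = {"s1": {1, 2}, "s2": {2, 3}, "s3": {1, 3}}
frac = {"s1": 0.5, "s2": 0.5, "s3": 0.5}
print(randomized_round(frac, sets, {1, 2, 3}))
</code></pre>
<p>The expected cost is within an <code class="prettyprint">O(log n)</code> factor of the LP optimum, which is the standard approximation guarantee for set cover.</p>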
<p>Now we can apply this to our dataset. For each lattice point we compute all the cameras that can see that location and add one constraint per lattice point over the corresponding camera variables. We then set up a linear programming solver to give us a (possibly fractional) solution. The code looks like this:</p>
<pre><code class="prettyprint lang-python"># LP solver
from pulp import LpProblem, LpMinimize, LpVariable, GLPK

# set up the solver with one 0..1 inclusion variable per camera
problem = LpProblem("SetCover", LpMinimize)
variables = [LpVariable("x" + str(var), 0, 1) for var in range(100)]
problem += sum(variables)

# set inclusion constraints, one per lattice point
problem += variables[0] >= 1
problem += variables[1] + variables[2] + variables[5] >= 1
...
problem += variables[3] + variables[4] + variables[5] >= 1
problem += variables[1] >= 1
GLPK().solve(problem)

# solution: round any fractional values up
for v in problem.variables():
    print('Camera:', v.name, '=', 1 if v.varValue > 0 else 0)
</code></pre>
<p>Rendering this we see that out of the 100 images we need to retain only 42 to cover the whole area.</p>
<p><a href="https://svbtleusercontent.com/j43mmlnpztlhla.png"><img src="https://svbtleusercontent.com/j43mmlnpztlhla_small.png" alt="Screenshot 2016-07-25 11.52.44.png"></a></p>
tag:nickp.svbtle.com,2014:Post/counting-money2016-05-15T17:41:36-07:002016-05-15T17:41:36-07:00Counting Money<p>This post is based on a question from the Challenge 24 competition in 2016, where we were given a photograph of each of the denominations of Hungarian currency, the <a href="https://en.wikipedia.org/wiki/Hungarian_forint">Forint</a>. We were also given a number of photos of piles of coins (some of them counterfeit) and had to compute the total value of the money automatically. </p>
<p><a href="https://svbtleusercontent.com/nxryj57tp3nprg.jpg"><img src="https://svbtleusercontent.com/nxryj57tp3nprg_small.jpg" alt="coins.jpg"></a></p>
<p>First let’s look at the template and see how we can easily extract the locations of the clean coins. A flood fill can compute the <a href="https://en.wikipedia.org/wiki/Connected_component_(graph_theory)#Algorithms">connected components</a> quickly enough, and since the image is quite clean we can simply iterate over each unvisited non-white pixel, start a new component there, and flood out to all connected pixels that aren’t white. Here’s the code:</p>
<pre><code class="prettyprint lang-python">import cv2
from itertools import product

WHITE = 250  # grayscale threshold: pixels at or above this count as background

def inside(r, c, img, seen):
    # in bounds, not yet visited, and not a (near-)white background pixel
    rows, cols = img.shape
    return 0 <= r < rows and 0 <= c < cols \
        and (r, c) not in seen and img[r, c] < WHITE

def flood_fill(img):
    rows, cols = img.shape
    components = {}
    component_id = -1
    seen = set()
    for (r, c) in product(range(rows), range(cols)):
        if inside(r, c, img, seen):
            # start a new component at this unvisited non-white pixel
            component_id += 1
            components[component_id] = []
            q = [(r, c)]
            seen.add((r, c))
            while q:
                cr, cc = q.pop()
                components[component_id].append((cr, cc))
                for (dr, dc) in product([-1, 0, 1], repeat=2):
                    nr, nc = cr + dr, cc + dc
                    if inside(nr, nc, img, seen):
                        seen.add((nr, nc))
                        q.append((nr, nc))
    return components
</code></pre>
<p>The results look good and it cleanly picks out the backs and fronts of each coin in the template:</p>
<p><a href="https://svbtleusercontent.com/2k5kb2hrtnzycg.jpg"><img src="https://svbtleusercontent.com/2k5kb2hrtnzycg_small.jpg" alt="a.jpg"></a></p>
<p>Now that we have all the templates we need to find matches of them in images like this, which we will call background images:</p>
<p><a href="https://svbtleusercontent.com/fzsliralkqymyg.jpg"><img src="https://svbtleusercontent.com/fzsliralkqymyg_small.jpg" alt="coins.jpg"></a></p>
<p>Matching coins in images is often presented as an example exercise for image processing algorithms, in particular <a href="http://scikit-image.org/docs/dev/user_guide/tutorial_segmentation.html">segmentation</a> and the <a href="http://docs.opencv.org/master/d3/db4/tutorial_py_watershed.html#gsc.tab=0">watershed</a> algorithm. A few things make it deceptively difficult in this case though. First, we aren’t just detecting or counting the coins: we actually need to know which denomination each one is so we can total the amount. Second, the coins can occlude one another, so we may only see part of a coin. Finally, each coin can be arbitrarily rotated, and fake coins can appear that are larger or smaller than the originals - these need to be discounted.</p>
<p>There are a few ways of matching templates in an image. One way is to look at the cross-correlation of the two images at different offsets. You can think of this as sliding the template image row by row, column by column over the background image and at each location measuring how strongly the two images correlate; we are looking for the offset with the maximum correlation. The problem with this method is that it is very slow, especially if the images are large or (as in this case) there are multiple templates to match and we are looking for the best match. We can solve this much faster in the frequency domain using something called <a href="https://en.wikipedia.org/wiki/Phase_correlation">phase correlation</a>. This is the same cross-correlation technique, but it isolates the phase information. If <code class="prettyprint">Ga</code> and <code class="prettyprint">Gb</code> are the Fourier transforms of the template and background images respectively and <code class="prettyprint">Ga*</code> and <code class="prettyprint">Gb*</code> are their complex conjugates, we can compute it as:</p>
<p><a href="https://svbtleusercontent.com/hki799qx9hvwma.png"><img src="https://svbtleusercontent.com/hki799qx9hvwma_small.png" alt="88357aa1d55f79979d1f88b5c6a2678f.png"></a></p>
<p>and retrieve a normalized cross-correlation (important because there are multiple templates to match) by taking the real component of the inverse Fourier transform. Here’s some code that computes the location of the peak of the phase correlation, which corresponds to the translation between the template and the background image. This process is called <a href="https://en.wikipedia.org/wiki/Image_registration">image registration</a>.</p>
<pre><code class="prettyprint lang-python">import numpy as np

def find_translation(background, template):
    br, bc = background.shape
    # zero-pad the template's FFT up to the background's size
    Ga = np.fft.fft2(background)
    Gb = np.fft.fft2(template, (br, bc))
    # normalized cross-power spectrum
    R = Ga * np.conj(Gb)
    pc = np.real(np.fft.ifft2(R / np.abs(R)))
    # the location of the peak is the translation
    peak = pc.max()
    translation = np.where(pc == peak)
    return translation
</code></pre>
<p>Running phase correlation for each template iteratively and selecting the highest peak seems to perform well and we are able to correctly register all the coins both back and front in addition to the occluded coins:</p>
<p><a href="https://svbtleusercontent.com/sexg2j8s0v4nig.png"><img src="https://svbtleusercontent.com/sexg2j8s0v4nig_small.png" alt="3.png.progress17.png"></a></p>
<p>This seems to work pretty well and picks out the coins, however it can’t handle coins that are scaled or rotated. Rotation is quite a complex operation if you look at the movement of individual pixels. Let’s try rearranging the pixels in the image so that they change more predictably when the image is scaled and rotated. We can use a conformal mapping: a function that preserves angles in the Euclidean plane. One of the most common is the log-polar transform. Here’s a basic implementation:</p>
<pre><code class="prettyprint lang-python">import cv2
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar(image):
    r, c = image.shape
    # sample log-radius (rows) against angle in degrees (columns)
    coords = np.mgrid[0:max(image.shape) * 2, 0:360]
    log_r = 10 ** (coords[0, :] / (r * 2.) * np.log10(c))
    angle = 2. * np.pi * (coords[1, :] / 360.)
    # map each (log_r, angle) sample back to Cartesian coordinates
    center = (log_r * np.cos(angle) + r / 2., log_r * np.sin(angle) + c / 2.)
    ret = map_coordinates(image, center, order=3, mode='constant')
    # cv2.resize takes (width, height)
    return cv2.resize(ret, (c, r))
</code></pre>
<p>Here are the resulting log-polar transforms of a few images as they rotate. What’s useful to note here is what happens to horizontal, vertical and radial lines. </p>
<p><a href="https://svbtleusercontent.com/hcmss8jqvda.gif"><img src="https://svbtleusercontent.com/hcmss8jqvda_small.gif" alt="output_OHcilz.gif"></a></p>
<p>So rotation in log-polar space manifests as a cyclic shift of the columns of pixels. This makes sense because the horizontal axis of the transformed image is no longer <code class="prettyprint">X</code> but the angle of rotation: pixels that lie along the same ray from the center (a fixed angle) map to a vertical column in log-polar space, while pixels on a circle of fixed radius map to a horizontal row. Another interesting point about this transform is the use of the logarithm, which effectively compresses the outer part of the image into the bottom rows. Look at the number of pixels dedicated to the text “Forint” compared to the number dedicated to the center of the image. This mimics the function of the fovea in the human eye, dedicating more resolution to the area in focus, which is useful in a number of image tracking applications. </p>
<p>Once we have this representation we can use the Fourier shift theorem, which tells us that a translation in the spatial domain manifests as a linear phase shift in the Fourier domain. Since rotation and scale have become shifts in log-polar space, we can register two images and recover the translation, rotation and scale factor between them with the same phase correlation machinery. Using this information we can detect and count all coins in the image (including the rotated ones) and discount coins that aren’t an authentic size. There are limitations to this method though: because of the symmetry of the Fourier transform we can only detect a limited range and resolution of scale and rotation. More complicated methods extend it, but thankfully they weren’t needed in this case. </p>
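<p>As a small sanity check of this idea, here is a one-dimensional sketch (function name my own): a cyclic shift of a signal is recovered exactly as the peak of the phase correlation. This is the same machinery as the 2D registration above, just easier to see.</p>
<pre><code class="prettyprint lang-python">import numpy as np

def cyclic_shift(shifted, original):
    # phase correlation in 1D: the peak index is the cyclic shift
    Ga = np.fft.fft(shifted)
    Gb = np.fft.fft(original)
    R = Ga * np.conj(Gb)
    pc = np.real(np.fft.ifft(R / np.abs(R)))
    return int(np.argmax(pc))

np.random.seed(0)
signal = np.random.rand(360)
print(cyclic_shift(np.roll(signal, 37), signal))  # prints 37
</code></pre>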
<p>Image registration using spectral methods is really fast and is commonly used to align images when the transform between them is known to be affine. More complex methods are needed when there is a perspective transform between the two images, which will be the topic of an upcoming blog post. </p>
<p><a href="https://svbtleusercontent.com/jxowtkoxrbbmg.png"><img src="https://svbtleusercontent.com/jxowtkoxrbbmg_small.png" alt="6.png.progress13.png"></a></p>
tag:nickp.svbtle.com,2014:Post/chain-reaction2016-05-08T20:31:48-07:002016-05-08T20:31:48-07:00Mines Chain Reaction<p>This post is based on a question asked during the <a href="http://ch24.org">Challenge 24</a> programming competition. Given the locations of a number of land mines as <code class="prettyprint">X</code> and <code class="prettyprint">Y</code> coordinates, and their blast radius <code class="prettyprint">R</code>, what is the minimum number of mines that must be detonated manually so that every mine ends up detonated? When a mine is detonated it detonates all mines within its blast radius, and the process repeats.</p>
<p><a href="https://svbtleusercontent.com/vjovi2lxa0wryg.jpg"><img src="https://svbtleusercontent.com/vjovi2lxa0wryg_small.jpg" alt="787881-landmine-1415477841-246-640x480.jpg"></a></p>
<p>Here’s a simple example with <code class="prettyprint">13</code> mines. In this case an optimal solution is to detonate mines <code class="prettyprint">0, 3</code> and <code class="prettyprint">8</code>, which will detonate all the others. It’s not the only solution.</p>
<p><a href="https://svbtleusercontent.com/epthtxszeqfnkg.png"><img src="https://svbtleusercontent.com/epthtxszeqfnkg_small.png" alt="Screenshot 2016-05-08 21.01.38.png"></a></p>
<p>The relationship between mines is not symmetric: just because mine <code class="prettyprint">A</code> can reach mine <code class="prettyprint">B</code> doesn’t mean that mine <code class="prettyprint">B</code> can reach mine <code class="prettyprint">A</code>. Therefore we can represent the mines as a directed graph where vertices are mines and there is an (unweighted) edge from mine <code class="prettyprint">A</code> to mine <code class="prettyprint">B</code> if mine <code class="prettyprint">A</code> can directly detonate mine <code class="prettyprint">B</code>. </p>
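<p>Building this graph from the mine list is straightforward; here is a naive <code class="prettyprint">O(N^2)</code> sketch (the <code class="prettyprint">(x, y, r)</code> tuple format is my own choice):</p>
<pre><code class="prettyprint lang-python">def build_graph(mines):
    # mines: list of (x, y, r); edge i -> j if j lies within i's blast radius
    graph = {i: [] for i in range(len(mines))}
    for i, (xi, yi, ri) in enumerate(mines):
        for j, (xj, yj, _) in enumerate(mines):
            if i != j and (xi - xj) ** 2 + (yi - yj) ** 2 <= ri ** 2:
                graph[i].append(j)
    return graph

mines = [(0, 0, 5), (3, 0, 1), (100, 100, 1)]
print(build_graph(mines))  # {0: [1], 1: [], 2: []}
</code></pre>
<p>Note the asymmetry in the output: mine 0 can detonate mine 1, but not vice versa, because mine 1’s blast radius is too small.</p>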
<p><a href="https://svbtleusercontent.com/bhqbdjirberg.jpg"><img src="https://svbtleusercontent.com/bhqbdjirberg_small.jpg" alt="graph.jpg"></a><br>
In order to solve this problem we first need to compute the <a href="https://en.wikipedia.org/wiki/Strongly_connected_component">strongly connected components</a> in this graph. These are the subsets of mines such that if any one is detonated, all mines in the subset are detonated. In the example image above mines <code class="prettyprint">5, 6</code> and <code class="prettyprint">7</code> comprise an SCC, as do mines <code class="prettyprint">0, 2, 9, 10, 11</code> and <code class="prettyprint">12</code>. For simplicity we’ll say that mines on their own form SCCs of size <code class="prettyprint">1</code>. To compute the SCCs we can use <a href="https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm">Tarjan’s algorithm</a>, which can be implemented recursively or with an explicit stack.</p>
<pre><code class="prettyprint lang-python">def tarjan(graph):
    index_counter = [0]
    stack = []
    lowlinks = {}
    index = {}
    result = []

    def strongconnect(node):
        # depth index for this node
        index[node] = index_counter[0]
        lowlinks[node] = index_counter[0]
        index_counter[0] += 1
        stack.append(node)
        # consider successors of `node`
        for successor in graph.get(node, []):
            if successor not in lowlinks:
                # successor has not yet been visited
                strongconnect(successor)
                lowlinks[node] = min(lowlinks[node], lowlinks[successor])
            elif successor in stack:
                # the successor is on the stack, hence in the current SCC
                lowlinks[node] = min(lowlinks[node], index[successor])
        # if `node` is a root, pop the stack and generate an SCC
        if lowlinks[node] == index[node]:
            connected_component = []
            while True:
                successor = stack.pop()
                connected_component.append(successor)
                if successor == node:
                    break
            # storing the result
            result.append(tuple(connected_component))

    for node in graph:
        if node not in lowlinks:
            strongconnect(node)
    return result
</code></pre>
<p>This computes the SCCs for the initial graph. Now we can collapse all vertices of an SCC into a super vertex; remember, detonating <u>any</u> mine in the super vertex will detonate all the others in it. Then we create another graph on the super vertices, connecting one super vertex to another with a directed edge if any mine in the first can detonate any mine in the second. The result is another directed graph, but this one has no cycles. If we detonate a mine in one component it will detonate all mines in that component and all mines in every component reachable from that super vertex. Here’s an illustration of the process so far:</p>
<p><a href="https://svbtleusercontent.com/5qamy4rdva2xsq.jpg"><img src="https://svbtleusercontent.com/5qamy4rdva2xsq_small.jpg" alt="graph.jpg"></a></p>
<p>We can now work out which mines need to be detonated. To do this we look for all super vertices in this graph with zero in-degree. These aren’t reachable by any sequence of mine detonations and thus need to be detonated themselves. One solution is to detonate mines <code class="prettyprint">0, 3</code> and <code class="prettyprint">8</code>, but there are multiple solutions in general. Consider for example the case where all the mines are within blast radius of each other and thus form one strongly connected component: we could choose any mine to start the chain reaction.</p>
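<p>Given the SCCs from Tarjan’s algorithm, counting the required detonations takes only a few lines: map each mine to its super vertex, then count super vertices that no edge enters from a different super vertex. A sketch (function name my own):</p>
<pre><code class="prettyprint lang-python">def min_detonations(graph, sccs):
    # map each mine to the index of its strongly connected component
    comp = {node: ci for ci, scc in enumerate(sccs) for node in scc}
    # a super vertex needs a manual detonation iff no edge
    # enters it from a different super vertex (zero in-degree)
    has_incoming = set()
    for u, succs in graph.items():
        for v in succs:
            if comp[u] != comp[v]:
                has_incoming.add(comp[v])
    return len(sccs) - len(has_incoming)

graph = {0: [1], 1: [0, 2], 2: []}
print(min_detonations(graph, [(0, 1), (2,)]))  # 1
</code></pre>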
<p>In the competition the test cases got really large: the smallest had <code class="prettyprint">500</code> vertices and the largest had <code class="prettyprint">800,000</code>. Tarjan’s algorithm is really fast, running in <code class="prettyprint">O(N)</code>, and the degree counting can also be done in <code class="prettyprint">O(N)</code>. The slowest part is actually creating the initial graph, which done naively takes <code class="prettyprint">O(N^2)</code>. To process the larger test cases we need a range query structure like a <a href="https://en.wikipedia.org/wiki/K-d_tree">KD-Tree</a> to find all mines within <code class="prettyprint">R</code> of a given mine in logarithmic time, reducing the preprocessing to <code class="prettyprint">O(N log N)</code>. A simpler approach than implementing a KD-Tree is to sort the mines by their <code class="prettyprint">X</code> coordinate and only consider partner mines whose squared X distance satisfies <code class="prettyprint">X*X</code> < <code class="prettyprint">R*R</code>. With randomly spaced data this gets you close to <code class="prettyprint">O(N log N)</code> without too much more coding. The problem set is available <a href="https://www.dropbox.com/s/5u4k1ckiwe1h9hm/B.zip?dl=0">here</a>.</p>
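<p>The sort-by-X pruning can be sketched like this (assuming, as before, mines given as <code class="prettyprint">(x, y, r)</code> tuples, a format of my own choosing): after sorting, a binary search narrows the candidates to those whose X distance is at most <code class="prettyprint">R</code> before doing the full distance check.</p>
<pre><code class="prettyprint lang-python">import bisect

def neighbours_within(mines, xs, i):
    # mines sorted by x; xs is the sorted list of x coordinates
    x, y, r = mines[i]
    lo = bisect.bisect_left(xs, x - r)
    hi = bisect.bisect_right(xs, x + r)
    out = []
    for j in range(lo, hi):
        if j != i:
            xj, yj, _ = mines[j]
            # full Euclidean check on the surviving candidates
            if (x - xj) ** 2 + (y - yj) ** 2 <= r * r:
                out.append(j)
    return out

mines = sorted([(0, 0, 2), (1, 0, 2), (10, 0, 2)])
xs = [m[0] for m in mines]
print(neighbours_within(mines, xs, 0))  # [1]
</code></pre>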
<p>This type of analysis is useful when considering the distribution of information through a network. If the initial graph represented people, with edges for who shares information with whom, then the zero in-degree nodes are the minimum set of people who must be given the information directly for it to spread through the whole network. </p>