If you want to see a proof that the height of a treap is O(log n) with high probability, here is a very nice note by Avrim. It shows that the heights of all nodes are less than 8 log n except with probability 1/n^{0.44}.
Friday, August 31, 2012
Lecture 6. Analysis of Treaps
Today, we talked about insertion and deletion in a Treap. We also showed that the expected depth of any node in the Treap is 2 log n.
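As a quick empirical check of the 2 log n claim (a sketch of my own, not code from lecture): inserting keys into a plain, unbalanced BST in uniformly random order produces the same tree distribution as a Treap with random priorities, so we can measure the average node depth directly. The class name `BST` and its list-based node layout are just illustrative choices.

```python
import math
import random

class BST:
    """Plain unbalanced BST. Inserting keys in uniformly random order
    gives the same tree distribution as a Treap with random priorities."""
    def __init__(self):
        self.root = None  # each node is a list: [key, left, right]

    def insert(self, key):
        # insert key and return the depth at which it lands
        if self.root is None:
            self.root = [key, None, None]
            return 0
        node, depth = self.root, 0
        while True:
            depth += 1
            i = 1 if key < node[0] else 2  # go left or right
            if node[i] is None:
                node[i] = [key, None, None]
                return depth
            node = node[i]

random.seed(1)
n = 10_000
keys = list(range(n))
random.shuffle(keys)  # random insertion order ~ random priorities
tree = BST()
avg = sum(tree.insert(k) for k in keys) / n
# for large n, avg should be close to 2 ln n
```

Running this for n = 10,000 gives an average depth in the ballpark of 2 ln n ≈ 18, consistent with the bound shown in lecture.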
Thursday, August 30, 2012
Homework 1 is out.
You can leave comments here.
Wednesday, August 29, 2012
Lecture 5 introduction to Treap
Today, we talked about Treaps. Basically, a Treap is an implementation of a BST with the property that insertion, deletion, and look-up can all be done in O(log n) time in expectation (the construction is randomized). There are also other implementations of BSTs with these properties, such as AVL trees, B-trees, and red-black trees, which you may have seen before.
We showed that for a given set of keys with assigned priorities, the corresponding Treap exists and is unique.
We also showed how to insert a new (key, priority) pair into a Treap.
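The insertion procedure can be sketched as follows — a minimal Python sketch of my own (names like `Node`, `rotate_left`, and `insert` are not from lecture): do an ordinary BST insert by key, then rotate the new node upward while its priority beats its parent's, restoring the heap order on priorities.

```python
import random

class Node:
    def __init__(self, key):
        self.key = key
        self.priority = random.random()  # random priority on insertion
        self.left = self.right = None

def rotate_right(y):
    # lift y's left child above y, preserving BST order
    x = y.left
    y.left, x.right = x.right, y
    return x

def rotate_left(x):
    # lift x's right child above x, preserving BST order
    y = x.right
    x.right, y.left = y.left, x
    return y

def insert(root, key):
    # ordinary BST insert, then rotate up to restore the
    # (max-)heap property on priorities
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
        if root.left.priority > root.priority:
            root = rotate_right(root)
    else:
        root.right = insert(root.right, key)
        if root.right.priority > root.priority:
            root = rotate_left(root)
    return root
```

Deletion works in reverse: rotate the target node down (always lifting the child with the larger priority) until it is a leaf, then remove it.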
Monday, August 27, 2012
Two clarifications for lecture 4
1. A student pointed out after class that there is a typo in my analysis of the randomized selection algorithm: we should have d = 4c instead of d = 4cn.
Also, the pivot we get from the deterministic selection algorithm should have the property that 0.3n <= i <= 0.7n, where a_i is the pivot; the bound is on the index i, not the value a_i. I may have written 0.3n < a_i < 0.7n instead.
2. One student asked in class what happens if many numbers in the selection problem have the same value. Another student suggested the following quick fix to me immediately after class: create a list E of all the numbers that are equal to the pivot number a_i.
We will then have three lists L, E, G. Since we know the lengths of L, E, and G, we can still figure out whether the k-th smallest number is in L, E, or G. If it is in E, we can simply output a_i; if it is in L, we apply Select(L, k); if it is in G, we apply Select(G, k-|L|-|E|).
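The quick fix above can be sketched in a few lines of Python (my own sketch; the function name `select` and the 1-based convention for k are illustrative choices): partition around a random pivot into L, E, G, then recurse into whichever list contains rank k.

```python
import random

def select(a, k):
    """Return the k-th smallest element of a (k is 1-based),
    handling duplicates with the three-way partition L, E, G."""
    pivot = random.choice(a)
    L = [x for x in a if x < pivot]   # strictly less than the pivot
    E = [x for x in a if x == pivot]  # equal to the pivot
    G = [x for x in a if x > pivot]   # strictly greater than the pivot
    if k <= len(L):
        return select(L, k)
    if k <= len(L) + len(E):
        return pivot  # rank k falls inside the block of equal values
    return select(G, k - len(L) - len(E))
```

Note the recursion always terminates even when all values are equal, since in that case L and G are empty and rank k falls in E.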
Lecture 4. Linear Time Selection (2)
Today, we studied two algorithms. One is a randomized algorithm for selecting the k-th largest number. The other is a deterministic algorithm for selecting the median.
We proved that the first algorithm runs in linear time in expectation and the second algorithm runs in linear time.
Below are some nice notes from Avrim Blum for the materials covered in the course today.
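For reference, the deterministic algorithm (median of medians with groups of 5) can be sketched as follows — my own sketch, not lecture code, and `select_det` and the base-case cutoff of 25 are illustrative choices. The recursively chosen pivot is guaranteed to have rank between 0.3n and 0.7n, which is what makes the recurrence solve to linear time.

```python
def select_det(a, k):
    """Deterministic linear-time selection of the k-th smallest
    element (k is 1-based), via median of medians."""
    if len(a) <= 25:
        return sorted(a)[k - 1]  # small base case: just sort
    # median of each group of 5 elements
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
               for i in range(0, len(a), 5)]
    # recursively take the median of the medians as the pivot;
    # its rank in a is guaranteed to lie in [0.3n, 0.7n]
    pivot = select_det(medians, (len(medians) + 1) // 2)
    L = [x for x in a if x < pivot]
    E = [x for x in a if x == pivot]
    G = [x for x in a if x > pivot]
    if k <= len(L):
        return select_det(L, k)
    if k <= len(L) + len(E):
        return pivot
    return select_det(G, k - len(L) - len(E))
```

The running time satisfies T(n) <= T(n/5) + T(7n/10) + O(n), and since 1/5 + 7/10 < 1 this solves to T(n) = O(n).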
Sunday, August 26, 2012
Office hour change (only for this week)
I will change my office hours on 8/27 to 3pm-5pm. This change is only for this week.
Friday, August 24, 2012
Clarification of Lecture 3
Some of the students asked me again why the probability that a_i is compared with a_j is 2/(j-i+1) for any i<j.
The key fact is the following: the probability that a_i is compared with a_j equals the probability that either a_i or a_j is the first to be chosen as the pivot among a_i, a_(i+1), ..., a_(j-1), a_j.
This probability can also be thought of as a dart game: each round we throw a dart uniformly at random. If we hit i or j, we win. If we hit any number k such that i < k < j, we lose. If we hit a number outside the range [i, j], we keep playing. The winning probability is exactly 2/(j-i+1).
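The dart game is easy to simulate (a sketch of my own; `compare_prob` is an illustrative name). Darts that land outside [i, j] just restart the round, so this is the same as drawing uniformly from {i, ..., j} and checking whether we hit an endpoint.

```python
import random

def compare_prob(i, j, trials=200_000):
    """Estimate the probability of winning the dart game for i < j.
    Darts outside [i, j] are re-thrown, which is equivalent to
    sampling uniformly from {i, ..., j} directly."""
    wins = 0
    for _ in range(trials):
        hit = random.randint(i, j)  # uniform over {i, ..., j}
        if hit == i or hit == j:    # hitting an endpoint wins
            wins += 1
    return wins / trials
```

For i = 3, j = 7 the estimate comes out near 2/(7-3+1) = 0.4, matching the formula from lecture.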
Thursday, August 23, 2012
Lecture 3. Analyzing Quicksort and Linear Time Selection Algorithm
We analyzed the running time of quicksort. We also gave a randomized algorithm for selecting the k-th largest number.
Wednesday, August 22, 2012
There has been a technical problem with my email box receiving email over the past 2-3 days.
If you send me an email and do not get a reply, please send it again at
wuyi at purdue.edu
Lecture 2: Divide and Conquer and analyzing recurrence
Today, we talked about Karatsuba's algorithm for integer multiplication and Strassen's algorithm for matrix multiplication. Questions regarding Lecture 2 can be asked here.
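Karatsuba's trick — replacing four recursive multiplications with three — can be sketched in Python as follows (my own sketch; the base case and digit-splitting details are illustrative choices). The cross terms xh·yl + xl·yh are recovered from a single product (xh+xl)(yh+yl) minus the two products already computed.

```python
def karatsuba(x, y):
    """Multiply nonnegative integers with three recursive
    multiplications instead of four: O(n^{log2 3}) digit operations."""
    if x < 10 or y < 10:
        return x * y  # single-digit base case
    m = max(len(str(x)), len(str(y))) // 2
    p = 10 ** m
    xh, xl = divmod(x, p)  # x = xh * p + xl
    yh, yl = divmod(y, p)  # y = yh * p + yl
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    # one extra product recovers both cross terms at once
    c = karatsuba(xh + xl, yh + yl) - a - b
    return a * p * p + c * p + b
```

The recurrence is T(n) = 3 T(n/2) + O(n), which solves to O(n^{log2 3}) ≈ O(n^{1.585}), beating the O(n^2) grade-school method.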
Monday, August 20, 2012
Lecture 1. Asymptotic Analysis and Solving Recurrences
Today, we talked about the definitions of O(), Omega(), Theta(), omega(), and o(). We also did an example of solving a recurrence based on the merge sort algorithm.
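The merge sort recurrence from class can be read directly off a short implementation (a sketch of my own, not lecture code): each call does O(n) work merging, on top of two recursive calls of half the size.

```python
def merge_sort(a):
    """Sort a list. The recurrence is T(n) = 2 T(n/2) + O(n),
    which solves to T(n) = O(n log n)."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # merge the two sorted halves in O(n) time
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```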
If you are interested in general techniques for solving recurrences, here is a very good summary by Jeff Erickson.
Here is a link to the slides of lecture 1.
Questions regarding lecture 1 can also be asked here.
The grading information has been updated; see the following link.