Yi: Tue 4pm-5:30pm
Leo: Wed 3:30pm-4:30pm
Siddharth: Monday 9:30-11:30
The class is over. Thanks a lot for coming (even at those early mornings) and making it a fun affair for me. I really appreciate it. Don't forget to fill in the teaching evaluation form.
Friday, December 7, 2012
Wednesday, December 5, 2012
Lecture 39 Approximation Algorithms
Today, we talked about approximation algorithms for Vertex Cover and the metric Travelling Salesman Problem.
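To make the first result concrete, here is a minimal sketch (my own code, not the code from lecture) of the classic 2-approximation for Vertex Cover based on a greedy maximal matching:

```python
def approx_vertex_cover(edges):
    """Whenever an edge has both endpoints uncovered, add both of them.
    The edges we pick form a matching, so they are vertex-disjoint, and
    any vertex cover must contain at least one endpoint of each matched
    edge. Hence our cover has size at most twice the optimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover
```

For example, on the path 0-1-2-3 this picks at most 4 vertices while the optimum is 2, matching the factor-2 guarantee.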
Lecture 38 NP complete problems
We talked about why Clique, Maximum Size Independent Set, and Vertex Cover are NP-complete.
Monday, December 3, 2012
Friday, November 30, 2012
Lecture 37 NP completeness
Today, we talked about why 3SAT is NP-complete. We also discussed problems between P and NPC.
Wednesday, November 28, 2012
Lecture 36. More on NP-Completeness
Today, we talked about co-NP and the Cook-Levin theorem. Under the assumption that a polynomial-time algorithm can be efficiently converted into a polynomial-size circuit, we showed that Circuit-SAT is NP-complete.
Lecture 35. NP Completeness
Today, we talked about the definitions of P, NP, and polynomial-time reductions.
Monday, November 26, 2012
Homework 6
Homework 6 is out. This homework is optional.
If you do not submit it, we will take the average of your first 5 homework scores.
If you submit it, we will take the average of all 6 homework scores if it is better than the average of your first 5.
Hint for problem 1: use max flow to make the fair assignment. Basically, we just want to assign each driver to the days they need to drive, subject to the fairness constraint.
Hint for problem 2: The general approach is to somehow create an instance of problem B from an instance of problem A and apply the algorithm for B to it.
Monday, November 19, 2012
Lecture 34. Applications of Max Flow
Today, we talked about applications of max flow to the following problems:
1. min cut
2. edge disjoint path
3. vertex disjoint path
4. bipartite graph matching
Friday, November 16, 2012
Wednesday, November 14, 2012
Lecture 32 Max Flow Min Cut Theorem and Ford-Fulkerson's algorithm
In this lecture, we talked about the Ford-Fulkerson algorithm for finding the max flow. To show the correctness of the algorithm, we also proved the Max-Flow Min-Cut Theorem.
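The algorithm fits in a short sketch (my own illustration, with hypothetical names; finding augmenting paths by BFS is the Edmonds-Karp refinement of Ford-Fulkerson):

```python
from collections import deque

def max_flow(cap, s, t):
    """Repeatedly find an s-t augmenting path in the residual graph by
    BFS, push the bottleneck capacity along it, and update residual
    capacities. cap: dict-of-dicts of capacities, modified in place."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:      # no augmenting path: flow is maximum
            return flow
        bottleneck, v = float('inf'), t
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck
            back = cap.setdefault(v, {})
            back[u] = back.get(u, 0) + bottleneck  # residual back edge
            v = u
        flow += bottleneck
```

By the Max-Flow Min-Cut Theorem, when no augmenting path remains, the vertices reachable from s in the residual graph form a min cut.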
Lecture 31 Max Flow and Min Cut
Today, we finished the algorithm for strongly connected components. We also defined the max flow problem and showed that the maximum flow is always upper bounded by the minimum cut.
Friday, November 9, 2012
Lecture 30. Strongly Connected Component
Today, we explained how to use DFS to find all the strongly connected components of a graph.
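One concrete way to do this (a sketch of Kosaraju's two-pass method, my own code; the lecture may have presented a different variant) is:

```python
def sccs(graph):
    """Kosaraju's algorithm: DFS once to compute a finish order, then
    DFS the reversed graph in reverse finish order; each second-pass
    tree is one SCC. graph: {u: [v, ...]} with every vertex as a key."""
    order, seen = [], set()

    def dfs1(u):
        seen.add(u)
        for v in graph.get(u, []):
            if v not in seen:
                dfs1(v)
        order.append(u)          # record finish time

    for u in graph:
        if u not in seen:
            dfs1(u)
    rev = {}                     # build the reversed graph
    for u in graph:
        for v in graph[u]:
            rev.setdefault(v, []).append(u)
    comps, seen = [], set()

    def dfs2(u, comp):
        seen.add(u)
        comp.append(u)
        for v in rev.get(u, []):
            if v not in seen:
                dfs2(v, comp)

    for u in reversed(order):
        if u not in seen:
            comp = []
            dfs2(u, comp)
            comps.append(comp)
    return comps
```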
Homework 5
Homework 5 is out. Any question can be asked here.
Problem 1
Q: is there a maximum grade?
A: You can assume there is no maximum grade for each course.
Q: Can you give us hint on the running time for problem 1?
A: For full credit, the running times for the three cases should be O(n), O(T log n + n), and O(nT^2).
Hint for case 3: dynamic programming
Problem 4
Q: The goal is to delete as many edges as possible from G, where vertex u is reachable from v if there is a path from v to u. Do we need to remove as many edges as possible from the other vertices along the path from u to v, or just remove the edges that connect u to v and leave the minimum path?
A: This problem requires you to delete the maximum number of edges so that reachability does not change for *ANY* two vertices u, v; i.e., for any two vertices u, v in V, u is reachable from v in G if and only if u is reachable from v in G'.
Wednesday, November 7, 2012
Lecture 29 DFS, longest path, topological sort, and strongly connected component
Today, we talked about DFS and its applications on solving topological sort, longest path in a graph, and deciding whether a directed graph is strongly connected.
Monday, November 5, 2012
Lecture 28. Algorithms for finding the shortest path
Today, we talked about three different algorithms for the shortest path problem: the Bellman-Ford algorithm, the Floyd-Warshall algorithm, and Dijkstra's algorithm.
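As one concrete example, here is a short sketch of Dijkstra's algorithm with a binary heap (my own code; the function name and graph format are illustrative):

```python
import heapq

def dijkstra(graph, s):
    """Dijkstra's algorithm for non-negative edge weights.
    graph: {u: [(v, w), ...]}. Returns shortest distances from s."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry; u was already settled
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

With a binary heap this runs in O((V + E) log V); the Fibonacci heap from earlier lectures improves the bound to O(E + V log V).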
Friday, November 2, 2012
Lecture 27. Dynamic Programming (2)
Today, we talked about the longest increasing subsequence. We gave two algorithms with running time O(n^2) and O(n log n).
Then we talked about 0-1 Knapsack problem and gave a dynamic programming algorithm for it.
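Both algorithms fit in a few lines; the sketches below are my own illustrations of the O(n log n) LIS via binary search and the standard 0-1 knapsack DP:

```python
import bisect

def lis_length(a):
    """O(n log n) longest increasing subsequence: tails[k] holds the
    smallest possible tail value of an increasing subsequence of
    length k+1; each element either extends or improves a tail."""
    tails = []
    for x in a:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def knapsack(items, W):
    """0-1 knapsack DP over capacities; items = [(weight, value)].
    Iterating capacities downward ensures each item is used once."""
    best = [0] * (W + 1)
    for w, v in items:
        for c in range(W, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[W]
```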
Lecture 26 Dynamic programming
Today we talked about dynamic programming. We showed how to use dynamic programming to solve the weighted version of activity selection.
Monday, October 29, 2012
Lecture 25 Greedy algorithm
Today, we talked about the application of greedy algorithms on two examples: Minimum Spanning Tree and activity selection.
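The greedy rule for (unweighted) activity selection can be sketched as follows (my own code, not the lecture's):

```python
def select_activities(intervals):
    """Greedy activity selection: repeatedly take the activity with
    the earliest finish time that does not overlap the activities
    chosen so far. An exchange argument shows this is optimal."""
    chosen, last_finish = [], float('-inf')
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```

Note this greedy rule fails for the weighted version, which is why the next lecture solves it with dynamic programming instead.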
Friday, October 26, 2012
Homework 4
Homework 4 is out today. Please leave comments here.
Problem 1
Q: Initially, how are we going to choose C from F while F has no edges? Does that mean choosing one node in the first iteration? Do we add the minimum edge to F or to a separate set?
A: If there are no edges in the graph, each node itself is a connected component. In the first iteration, you choose an arbitrary node and then choose the smallest edge leaving that node. We then add that minimum weight edge into F. Suppose the edge we add into F is (u,v). Now u and v are in the same connected component, and in the second iteration, you can either choose a single vertex w other than u and v, or you can choose the connected component {u,v}.
The main point of this problem is to show that no matter which connected component is chosen, the algorithm always outputs a correct Minimum Spanning Tree. Actually, both Prim's algorithm and Kruskal's algorithm can be viewed as special cases of this algorithm with a specific way of choosing the connected component.
Problem 2
Q: Can we use hashmap for problem 2a?
A: You can use a hash function. Please state which hash function you use and what happens if there are many collisions for the function you choose.
Additional Hint 1. you can solve the problem WITHOUT using hash function.
Additional Hint 2. there is an algorithm similar to the idea of merge sort.
Q: can you give some more hint on 2b?
Additional Hint: Try to relate the problem to the polynomial \sum x^{d_i}.
Problem 3
Q: can you give us some hint on Problem 3?
3.1: run the algorithm for the binary alphabet for k times
3.2: it is somewhat similar to the algorithm from homework 3's problem 1.
3.3: try to combine the algorithm from the first two problems
Wednesday, October 24, 2012
Lecture 24 Multiplying two polynomials and clarification
This lecture finishes the algorithm for fast multiplication of two polynomials.
Tuesday, October 23, 2012
Homework 3
Problem 1: the length of the pattern, m, is less than n.
There is a typo in problem 2 regarding the definition of the delta function (some variable "m" should be "i" instead). I've corrected the typo and uploaded the correct version.
Problem 3, You might be able to get bound better than O((m+n) log n).
Problem 4.
Q :Why can't we just
Delete(x);
Insert(k)
A: You need to implement delete(x). If you just remove the node, it is no longer a binomial tree.
Friday, October 19, 2012
Lecture 23. Fast polynomial multiplication algorithm
In this class, we explained the idea of multiplying two polynomials of degree n in time O(n log n).
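A sketch of how this works (my own illustration; the lecture's presentation may differ): evaluate both polynomials at the n-th roots of unity with the FFT, multiply the values pointwise, and interpolate back with the inverse FFT.

```python
import cmath

def fft(a, invert=False):
    """Recursive Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def poly_multiply(p, q):
    """Multiply coefficient vectors p and q in O(n log n) via FFT."""
    n = 1
    while n < len(p) + len(q) - 1:
        n *= 2                   # pad up to a power of two
    fp = fft(p + [0] * (n - len(p)))
    fq = fft(q + [0] * (n - len(q)))
    prod = fft([x * y for x, y in zip(fp, fq)], invert=True)
    # the inverse FFT needs a 1/n factor; round back to integers
    return [round((x / n).real) for x in prod[:len(p) + len(q) - 1]]
```

For instance, (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2 comes out as the coefficient list [3, 10, 8].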
Wednesday, October 17, 2012
Lecture 22. String matching and polynomial multiplication
In this class, we explained the following relationship between approximate string matching and polynomial multiplication:
If you can multiply two polynomials of length N in time O(N log N), then you can solve the approximate string matching problem in time O(n log m), where n is the length of the string and m is the length of the pattern.
Monday, October 15, 2012
Course evaluation
Hi, Class
It is time for the midterm course evaluation.
Please leave comments (through the system) on how happy (or angry) you are with me for the past half semester. Any suggestion on how the course should proceed is welcome.
==========================instructions====================================
Please participate in the mid-semester evaluation of CS courses and instructors. I will not see the form you
submit but only a summary or compilation of the responses to each question. You will thus be anonymous --
unless you put your name in a comment box, which you should NOT do.
Go to:
https://portals.cs.purdue.edu/
log in using your Purdue career account ID and password,
click on "Course Evaluations", and then
click on "Mid-Semester Evaluations".
The deadline is Sunday, October 21.
The form consists mostly of multiple-choice questions, but you will have the opportunity to make comments,
and you would do well to be prepared for that.
You should see all your current CS courses on the form. If you enter anything for a course and submit the
form, the course will not appear if you ask for the form again. Unmarked courses will appear again.
<signature>
Lecture 21. KMP algorithm and approximate string match
In this class, we finished the KMP algorithm for string matching. We also showed the connection between approximate string matching and calculating the convolution of two sequences.
Friday, October 12, 2012
Lecture 20 String Matching (2)
Today, we talked about the KMP algorithm for string matching. It improves on the finite automaton approach with a simpler preprocessing step.
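The preprocessing step computes the failure function; a compact sketch of the whole algorithm (my own code) looks like this:

```python
def kmp_search(text, pattern):
    """KMP string matching in O(n + m). fail[i] is the length of the
    longest proper prefix of pattern[:i+1] that is also a suffix of
    it; on a mismatch we fall back along fail instead of restarting."""
    m = len(pattern)
    fail = [0] * m
    k = 0
    for i in range(1, m):        # preprocessing: build fail[]
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    hits, k = [], 0
    for i, c in enumerate(text): # scan the text once
        while k > 0 and c != pattern[k]:
            k = fail[k - 1]
        if c == pattern[k]:
            k += 1
        if k == m:
            hits.append(i - m + 1)
            k = fail[k - 1]      # keep looking for overlapping matches
    return hits
```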
Thursday, October 11, 2012
Lecture 19 String Matching (1)
Today, we talked about how to find whether a string of length n contains a given pattern of length m as a substring. We described an algorithm which runs in O(n), assuming that we can preprocess the pattern.
Monday, October 1, 2012
Lecture 18. Union Find (2)
Today, we finished the lecture on Union Find. At the end of the class, we had a pretty complicated proof of the O(log* n) running time. If you are confused, have a look at the following page:
http://en.wikipedia.org/wiki/Proof_of_O(log*n)_time_complexity_of_Union_Find
The proof is not exactly accurate, but you can start reading from the proof below the picture of the many buckets.
Friday, September 28, 2012
Lecture 17 Union Find (1)
Today, we talked about Union Find, a data structure that supports the find and union operations. We gave an implementation that has cost O(log n) for both operations. We also explained the idea of how to modify the find function to improve the running time to O(log* n).
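The modification to find is path compression; combined with union by rank it gives the O(log* n) amortized bound. A minimal sketch (my own code):

```python
class UnionFind:
    """Union Find with union by rank and path compression. A sequence
    of m operations on n elements costs O(m log* n) amortized (and
    O(m alpha(n)) under a finer analysis)."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        if self.parent[x] != x:
            # path compression: point x directly at its root
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx      # attach the shallower tree under rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
```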
If you are interested in the reference of total number of atoms in the universe, here is a wiki page:
http://en.wikipedia.org/wiki/Observable_universe
Wednesday, September 26, 2012
Change of Siddharth's office hour
The office hour of our TA, Siddharth, is now 10:30-12:30 on Thursdays.
Lecture 16 Fibonacci heap (3)
Today, we finished the Fibonacci heap and proved that it has O(1) amortized cost for insertion, merge, find_min, decrease_key. The only expensive work is delete_min which takes O(log n).
As an application, we show how to use Fibonacci heap in Prim's algorithm for Minimum Spanning Tree.
Monday, September 24, 2012
Hint on homework 2
Some students asked me for a hint on problem 2 of homework 2. The suggestion is to first prove your claim on simple examples. For example, can you prove your claim on trees of size 3?
Lecture 15. Fibonacci Heap (2)
Today we talked about how to implement decrease_key. This implementation will give us trees that are no longer binomial trees. But they will still have the property that if the children of a node are sorted according to their degrees, then the i-th child has degree at least i-2. (An alternative definition of a binomial tree is that its i-th child has degree exactly i-1.) We call any tree with the above property a Fibonacci Tree.
Friday, September 21, 2012
Lecture 14 Binomial Heap and Fibonacci Heap
Today, we talked about Binomial Heaps and why they take an amortized running time of O(log n) for deletion and O(1) for all the other operations. We also (tried to) motivate Fibonacci Heaps, which additionally support decrease_key in O(1) time.
If you are confused by the Dijkstra's algorithm I described in the end, take a look at the wiki description.
Wednesday, September 19, 2012
Lecture 13 Binomial Heaps (2)
Today, we talked about how to implement the merge operation of a Binomial Heap in O(log n) time. This gives us insertion, delete_min, and merge, all in O(log n).
Surprisingly, if we are a little bit lazier in the implementation, we can make all the operations O(1) except for delete_min.
Monday, September 17, 2012
Lecture 12. Binomial Trees and clarifications
Today we talked about the construction of Binomial Heaps. The main motivation is to allow an efficient merge operation; as we know, merging binary heaps takes time \Omega(n).
At the end of the class, there was a mistake when I described the delete_min operation. For a binomial heap H, we first take the tree B_min with the smallest root and delete its root to get H' = (B_1, ..., B_{min-1}). In class, I mentioned that we then merge H' with H. This is wrong. As pointed out by a student, we want to merge H' with H'', where H'' = H \ B_min (the notation means that H'' is the heap we get after removing B_min from H).
Friday, September 14, 2012
Lecture 11 Splay Tree
Today we proved that for deletion and search, as well as the splay operation itself, the amortized cost of a splay tree is O(log n). As homework, you will show that insertion also has an amortized cost of O(log n).
Wednesday, September 12, 2012
Lecture 10 Splay Tree
Today, we talked about the implementation of Splay Trees. We also introduced the potential function for the analysis of Splay Trees.
Monday, September 10, 2012
Lecture 9 Amortized Analysis and Splay Tree
Today we summarized 3 different techniques for analyzing amortized cost. We also introduced the basic idea of the Splay Tree. It is similar to a standard binary search tree; the difference is that after accessing an element via the standard BST operations, it brings that element to the root of the tree.
Sunday, September 9, 2012
Good news(?)
Starting next Wednesday 9/12, the class will meet at LWSN B151.
Friday, September 7, 2012
Lecture 8. Amortized Analysis (2)
Today, we introduced 3 ways of analyzing the amortized cost of an algorithm, using the problem of simulating a queue by two stacks as an example.
These 3 methods are aggregate analysis, the banker's method, and the physicist's method. Aggregate analysis basically analyzes the average cost of n different operations. The banker's method is a generalization (of aggregate analysis) which allows us to assign different costs to different operations, as long as the cost we pay can always cover the time we spend on the operations. The physicist's method introduces a potential function (which can be viewed as all the credits available in the banker's method). After defining the potential function, it naturally induces an amortized cost for each operation.
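The queue-from-two-stacks example itself can be sketched as follows (my own code):

```python
class TwoStackQueue:
    """A FIFO queue built from two LIFO stacks. Each element is pushed
    and popped at most twice in total (once per stack), so although a
    single dequeue can cost Theta(n), the amortized cost per operation
    is O(1) by aggregate analysis."""
    def __init__(self):
        self.inbox, self.outbox = [], []

    def enqueue(self, x):
        self.inbox.append(x)

    def dequeue(self):
        if not self.outbox:
            # transfer reverses the order, turning LIFO into FIFO
            while self.inbox:
                self.outbox.append(self.inbox.pop())
        return self.outbox.pop()
```

In the banker's view, each enqueue deposits two credits that later pay for the element's transfer and final pop.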
Wednesday, September 5, 2012
Lecture 7. Amortized Analysis (1)
Today, we talked about amortized analysis. One example we use is the dynamic array.
We showed that although the worst case insertion can be Omega(n), the amortized cost is actually constant.
Friday, August 31, 2012
Lecture 6. Analysis of Treaps
Today, we talked about the insertion and deletion operations of the Treap. We also showed that the expected depth of any node in the Treap is 2 log n.
If you want to see the proof that the height of a treap is O(log n) with high probability, here is a very nice note by Avrim. It shows that the depths of all nodes are less than 8 log n with probability at least 1 - 1/n^{0.44}.
Thursday, August 30, 2012
Homework 1 is out.
You can leave comments here.
Wednesday, August 29, 2012
Lecture 5 Introduction to Treaps
Today, we talked about the Treap. Basically, it is an implementation of the BST with the property that insertion, deletion, and look-up can all be done in O(log n) in expectation (the construction is randomized). There are also other implementations of BSTs with these properties, such as the AVL tree, B-tree, and red-black tree, which you may have seen before.
We showed that for a given set of keys with assigned priorities, the corresponding Treap exists and is unique.
We also showed how to insert a new (key, priority) pair into a Treap.
Monday, August 27, 2012
Two clarifications for lecture 4
1. Some student pointed out after class that there is a typo in my analysis of the randomized selection algorithm. We should have d = 4c instead of d = 4cn.
Also, the pivot point we get for the deterministic selection algorithm should have the property that 0.3n <= i <= 0.7n, where a_i is the pivot point. I may have written 0.3n < a_i < 0.7n instead.
2. One student asked in class what happens if there are many numbers of the same value in the selection problem. Another student told me right after class the following quick fix: create a list E of all the numbers that are equal to the pivot number a_i.
We will then have three lists L, E, G. Since we know the lengths of L, E, G, we can still figure out whether the k-th smallest number is in L, E, or G. If it is in E, we can simply output a_i; if it is in L, we apply Select(L, k); if it is in G, we apply Select(G, k-|L|-|E|).
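The quick fix can be written out directly (my own sketch of randomized selection with the three-way split):

```python
import random

def select(a, k):
    """Return the k-th smallest element of a (k = 1 is the minimum),
    using a random pivot and a three-way split into L (< pivot),
    E (= pivot), and G (> pivot), so duplicates are handled cleanly.
    Expected running time is linear."""
    pivot = random.choice(a)
    L = [x for x in a if x < pivot]
    E = [x for x in a if x == pivot]
    G = [x for x in a if x > pivot]
    if k <= len(L):
        return select(L, k)
    if k <= len(L) + len(E):
        return pivot
    return select(G, k - len(L) - len(E))
```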
Lecture 4. Linear Time Selection (2)
Today, we studied two algorithms. One is a randomized algorithm for selecting the k-th largest number. The other is a deterministic algorithm for selecting the median.
We proved that the first algorithm runs in linear time in expectation and the second algorithm runs in linear time.
Below are some nice notes from Avrim Blum for the materials covered in the course today.
Sunday, August 26, 2012
Office hour change (only for this week)
I will change my office hour on 8/27 to 3pm-5pm. This change is only for this week.
Friday, August 24, 2012
Clarification of Lecture 3
Some of the students asked me again why the probability that a_i is compared with a_j is 2/(j-i+1) for any i<j.
The key fact is the following: the probability that a_i is compared with a_j is equal to the probability that a_i or a_j is the first to be chosen as the pivot among a_i, a_{i+1}, ..., a_{j-1}, a_j.
Analyzing this probability can also be thought of as a dart game: each time, we throw a dart uniformly at random. If we hit i or j, we win. If we hit any number k such that i<k<j, we lose. If we hit a number outside the range [i,j], we keep playing the game. The winning probability is then exactly 2/(j-i+1).
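If you want to convince yourself numerically, here is a tiny simulation of the dart game (my own code; since darts landing outside [i,j] just repeat the throw, we can equivalently sample inside the range directly):

```python
import random

def dart_game_win_prob(i, j, trials=100_000):
    """Estimate the probability that the first dart landing in
    {i, ..., j} hits i or j; the true value is 2/(j-i+1)."""
    wins = 0
    for _ in range(trials):
        hit = random.randint(i, j)  # first dart to land inside [i, j]
        if hit in (i, j):
            wins += 1
    return wins / trials
```

For i=3, j=7 the estimate should be close to 2/5 = 0.4.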
Thursday, August 23, 2012
Lecture 3. Analyzing Quicksort and Linear Time Selection Algorithm
We analyzed the running time of quicksort. We also gave a randomized algorithm for selecting the k-th largest number.
Wednesday, August 22, 2012
There has been some technical problem with my email box receiving email over the past 2-3 days.
If you sent me an email and did not get a reply, please send it again to
wuyi at purdue.edu
Lecture 2: Divide and Conquer and analyzing recurrence
Today, we talked about Karatsuba's algorithm for multiplication and Strassen's algorithm for matrix multiplication. Questions regarding Lecture 2 can be asked here.
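A short sketch of Karatsuba's trick (my own code): split each number in half and use three recursive products instead of four, since ad + bc = (a+b)(c+d) - ac - bd.

```python
def karatsuba(x, y):
    """Karatsuba multiplication of non-negative integers. Three
    half-size recursive products give T(n) = 3T(n/2) + O(n), i.e.
    O(n^{log2 3}) ~ O(n^{1.585}) instead of the schoolbook O(n^2)."""
    if x < 10 or y < 10:
        return x * y
    n = max(len(str(x)), len(str(y))) // 2
    p = 10 ** n
    a, b = divmod(x, p)          # x = a * 10^n + b
    c, d = divmod(y, p)          # y = c * 10^n + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    mid = karatsuba(a + b, c + d) - ac - bd  # = ad + bc
    return ac * p * p + mid * p + bd
```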
Monday, August 20, 2012
Lecture 1. Asymptotic Analysis and Solving Recurrences
Today, we talked about the definitions of O(), Omega(), Theta(), omega(), and o(). We also did an example of a recurrence based on the merge sort algorithm.
If you are interested in general techniques for solving recurrences, here is a very good summary by Jeff Erickson.
Here is a link to the slides of lecture 1.
Questions regarding lecture 1 can also be asked here.
The grading information is updated.
See the following link.