I have the following algorithm. I run DFS on the tree and build an array of the nodes in post-order traversal, let's call it postOrder; the root is the rightmost element of the array. I also build an array maintaining the sizes of all the subtrees, let's call it subtreeSize, such that subtreeSize[i] is the size of the subtree rooted at node i. Then comes the dynamic programming part, where I work through the postOrder array from left to right, so root 1 is the last element to be handled. I initialize a DP table of dimensions (n + 1) by (k + 1) and set dp[0][j] = 0 for all j. The recurrence I use is: dp[i][j] = max { dp[i-1][j] + weight[i], dp[i-subtreeSize[i]][j-1] }. The rationale behind the formula is that either I keep the i-th element (dp[i-1][j] + weight[i]) or I remove its whole subtree (dp[i-subtreeSize[i]][j-1]). Four of the test cases pass and the rest fail. When I debug my dp table on test case 3, I see that dp[n][k] = 16511421728 while the expected answer is 16234296119, which happens to be the value in my dp[n][k-1]. Any suggestions as to what may be wrong?
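For reference, the setup part of the approach (post-order array plus subtree sizes) can be sketched like this in Python. Names and the 1-indexed adjacency-list representation are my assumptions, not from the original code; the DFS is iterative so it also works on deep trees:

```python
def postorder_and_sizes(n, adj, root=1):
    """adj is a 1-indexed adjacency list of the undirected tree edges.
    Returns (post, size): a valid post-order node list (root last) and
    size[v] = number of nodes in the subtree rooted at v."""
    pre, parent = [], [0] * (n + 1)
    stack, seen = [root], [False] * (n + 1)
    seen[root] = True
    while stack:
        v = stack.pop()
        pre.append(v)
        for u in adj[v]:
            if not seen[u]:
                seen[u] = True
                parent[u] = v
                stack.append(u)
    post = pre[::-1]        # reversed preorder is a valid post-order:
                            # every node comes after all of its descendants
    size = [1] * (n + 1)
    for v in post:          # children are visited before their parents here
        if parent[v]:
            size[parent[v]] += size[v]
    return post, size
```

Because each subtree occupies a contiguous block of the post-order array ending at its root, the "remove subtree" transition dp[i-subtreeSize[i]][j-1] can safely jump back over the whole block.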
EDIT: I just found out what the problem was. I had forgotten one more step of the initialization, namely setting dp[i][0] = sumweight[i-1] + weight[i]. Before I could do this initialization step I had to build the sumweight array, where sumweight[i] stores the total weight of the first i nodes of the postOrder array. When I added these changes to my code, all the test cases passed. I had written everything down on paper but forgot to actually put all of it into the code. I hope this explanation helps others solve the problem.
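Putting the recurrence and the missing base case together, the DP stage might look like the sketch below (Python; function and variable names are assumptions based on the description above, with post 0-indexed and weight 1-indexed by node):

```python
def prune_dp(post, size, weight, k):
    """post: post-order node list (root last); size[v]: subtree size of v;
    weight[v]: weight of node v (1-indexed); k: max subtrees to remove.
    Returns the maximum total weight of the nodes that remain."""
    n = len(post)
    # sumweight[i] = total weight of the first i post-order nodes;
    # this feeds the j = 0 base case that was originally missing.
    sumweight = [0] * (n + 1)
    for i in range(1, n + 1):
        sumweight[i] = sumweight[i - 1] + weight[post[i - 1]]
    dp = [[0] * (k + 1) for _ in range(n + 1)]   # dp[0][j] = 0 for all j
    for i in range(1, n + 1):
        v = post[i - 1]
        dp[i][0] = sumweight[i - 1] + weight[v]  # the fixed j = 0 base case
        for j in range(1, k + 1):
            keep = dp[i - 1][j] + weight[v]      # keep node v
            cut = dp[i - size[v]][j - 1]         # remove v's whole subtree
            dp[i][j] = max(keep, cut)
    return dp[n][k]
```

Here dp[i][j] means "best total over the first i post-order nodes using at most j removals", so dp[n][k] is the answer; with that reading, the single max over "keep" and "cut" is enough, since dp is monotone in j.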
Tree Pruning