Super Maximum Cost Queries
The editorial is bad at best: it explains the solution, but not the path to discovery. Here is my take on the solution.
So, we are given a weighted tree, and we are asked to print the number of paths having cost C, where the cost of a path is the maximum weight of any edge on it.
Suppose I have a tree with N nodes. Then the total number of paths = (N*(N-1))/2, one for each unordered pair of nodes.
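As a quick sanity check, the pair-counting formula can be computed directly (a minimal sketch; the function name is mine, not from the problem's starter code):

```cpp
#include <cassert>

// Number of distinct unordered pairs among n nodes,
// which is exactly the number of paths in a tree with n nodes.
long long pathCount(long long n) {
    return n * (n - 1) / 2;
}
```

For N = 10^5 this is about 5 * 10^9 paths, which is why the counts need `long long`.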
The value of C can be too big, so I cannot memoize the answer for every possible value of C. But there are at most 10^5 distinct values of C, and those can be memoized.

Idea: If I can build a graph containing exactly the edges of weight X or less, then the number of paths in that graph is the answer for cost <= X.
Aha! I can build the graph step by step, considering edges in increasing order of cost, i.e., first the graph has all edges with cost <= 1, then it is updated to include cost <= 2, and so on.
Step 1: Sort the input by cost
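Step 1 can be sketched like this (the `Edge` struct and field names are my assumptions, not from the problem's starter code):

```cpp
#include <algorithm>
#include <vector>

struct Edge {
    int x, y;        // endpoints of the edge
    long long cost;  // edge weight
};

// Sort edges so we can add them to the union-find structure
// in increasing order of cost.
void sortByCost(std::vector<Edge>& edges) {
    std::sort(edges.begin(), edges.end(),
              [](const Edge& a, const Edge& b) { return a.cost < b.cost; });
}
```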
Now, I need to process the input, which is nothing but connecting node x with node y.
Step 2: Creating the graph

```cpp
for (auto q : queries) {
    int c = q.cost;
    join(q.x, q.y, c);
}
```
Here, the join function finds the root of x and the root of y and unions the two different trees.
I can also update the answer for cost c at the same time:

```cpp
// Returns root of node x and performs path compression.
int getroot(int x) {
    if (root[x] == x) return x;
    root[x] = getroot(root[x]);
    return root[x];
}

void join(int a, int b, long long c) {
    // Get the root of both a's and b's trees.
    int roota = getroot(a);
    int rootb = getroot(b);
    if (roota == rootb) return;  // They already belong to the same tree
    root[rootb] = roota;         // Different trees: join them
    long long totalToRemove = 0;
    long long n = rcount[roota];       // Number of nodes in tree A
    totalToRemove += n * (n - 1) / 2;  // Paths inside tree A
    n = rcount[rootb];                 // Number of nodes in tree B
    totalToRemove += n * (n - 1) / 2;  // Paths inside tree B
    rcount[roota] += rcount[rootb];    // Tree A now also contains all nodes of tree B
    n = rcount[roota];
    long long totalToAdd = n * (n - 1) / 2;       // Paths in the merged tree
    costCounts[c] += totalToAdd - totalToRemove;  // New paths whose maximum edge cost is c
}
```
So now my map contains the answer for each given cost. There are Q queries and at most N distinct costs; answering each query by scanning the costs is O(Q x N), which will time out. I need a way to compute the sum over [L, R] fast.
Currently my cost map returns the count for cost exactly X. What if it returned the count for all costs <= X? A prefix sum!
Step 3: Turn the cost counts into a prefix sum
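The prefix-sum step might look like this (a sketch; I assume `costCounts` is a `std::map` from cost to the number of new paths created at that cost, as in the join code above):

```cpp
#include <map>
#include <vector>

// Flatten the cost -> path-count map into parallel arrays where
// prefix[i] = number of paths with cost <= costs[i].
void buildPrefix(const std::map<long long, long long>& costCounts,
                 std::vector<long long>& costs,
                 std::vector<long long>& prefix) {
    long long running = 0;
    for (const auto& kv : costCounts) {  // std::map iterates keys in increasing order
        running += kv.second;
        costs.push_back(kv.first);
        prefix.push_back(running);
    }
}
```

Using `std::map` here is convenient because its in-order iteration gives the costs already sorted.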
Now, I can easily find the answer for a range [L, R] by binary searching: it is (paths with cost <= R) minus (paths with cost <= L - 1). :)
Step 4: Spit out the answers
And you get a nice green Accepted solution. :)