ACM ICPC Team
Late reply, but here are some thoughts of mine: you can sort the people list by the number of true bits (cardinality), which adds a one-time O(n log n) cost and is especially cheap if you count the 1's while you're reading in the input anyhow. Then the pairings can be tested starting from the "most knowledgeable" end, in the hope of getting a large combined cardinality early on. That lets you break out of the inner loop when the people you're testing can't possibly improve on the max, and break out of the outer loop when a person's cardinality is less than half of the max found so far.

The downside is that while this will hopefully speed things up, you're only reducing the work by a constant factor, and O(n^2/x) is still O(n^2). In the worst case, everyone has the same number of true bits, no pair has zero overlapping knowledge, and the shortcuts never help you. :-/
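To make that concrete, here's a minimal sketch of the pruning in Python (the function and variable names are my own, not anything from the challenge's setup): sort by popcount, then use the sum of two popcounts as an upper bound on the popcount of the OR to cut both loops short.

```python
def max_topics(bitstrings):
    """Return (best, count): the most topics any pair knows, and how many
    pairs achieve it, using the sort-by-cardinality pruning described above.

    bitstrings: list of strings like "10101", one per person.
    """
    # Build each person's mask and popcount while reading the input.
    people = sorted(
        ((int(s, 2), s.count("1")) for s in bitstrings),
        key=lambda p: p[1],
        reverse=True,  # "most knowledgeable" people first
    )

    best, count = 0, 0
    for i, (mask_i, bits_i) in enumerate(people):
        # Outer cutoff: if this person's cardinality is less than half the
        # max, no pair from here on can reach it (the list is sorted).
        if 2 * bits_i < best:
            break
        for j in range(i + 1, len(people)):
            mask_j, bits_j = people[j]
            # Inner cutoff: bits_i + bits_j bounds the popcount of the OR,
            # and later j's only get weaker, so stop scanning this row.
            if bits_i + bits_j < best:
                break
            combined = bin(mask_i | mask_j).count("1")
            if combined > best:
                best, count = combined, 1
            elif combined == best:
                count += 1
    return best, count
```

For example, max_topics(["10101", "11100", "11010", "00101"]) returns (5, 2): two pairs cover five topics. Note the cutoffs use strict less-than, so pairs that merely tie the max are still counted.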
Another thought: this is a challenge where the sub-problems are all independent, so it's ripe for parallelization. Ideally that divides the run time in direct proportion to the number of cores. But again, that would still be O(n^2), and it's outside the format of these challenges.
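For what it's worth, the parallel version is mostly bookkeeping. A sketch under my own assumptions (round-robin the outer-loop rows across worker processes; the names and chunking scheme are mine, nothing standard):

```python
from concurrent.futures import ProcessPoolExecutor

def pairs_for_rows(args):
    """Score every pair (i, j), j > i, for the outer-loop rows this worker owns."""
    masks, rows = args
    best, count = 0, 0
    for i in rows:
        for j in range(i + 1, len(masks)):
            combined = bin(masks[i] | masks[j]).count("1")
            if combined > best:
                best, count = combined, 1
            elif combined == best:
                count += 1
    return best, count

def max_topics_parallel(bitstrings, workers=4):
    masks = [int(s, 2) for s in bitstrings]
    # Deal the rows out round-robin: row i costs n - i - 1 inner iterations,
    # so interleaving keeps the per-worker load roughly even.
    chunks = [(masks, list(range(w, len(masks), workers))) for w in range(workers)]
    best, count = 0, 0
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for b, c in pool.map(pairs_for_rows, chunks):
            if b > best:
                best, count = b, c
            elif b == best:
                count += c
    return best, count
```

(On platforms that spawn worker processes you'd call this from under an `if __name__ == "__main__":` guard.) It doesn't change the asymptotics, just the wall-clock time.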
I toyed with the idea of indexing the people by where their knowledge is concentrated. Basically, break the bit list into chunks, count the number of true bits in each chunk for each person, and record that information: maybe a minimum threshold to be included in the set of people who are strong in that chunk, or a map of lists of people keyed by chunk cardinality. Then for each candidate you could identify their weakest chunk and start with the people who are strongest there. The point would be to finish early by finding an unbeatable combination quickly, but there's no promise of improved running time. As with hashing algorithms, however you divide up the knowledge space, there's a worst case where the knowledge is spread out in a way that makes those divisions useless.
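Roughly what I had in mind for the index, as a sketch only (chunk_size and threshold are arbitrary knobs I made up, and this is just the bookkeeping, not a full solver):

```python
from collections import defaultdict

def build_chunk_index(bitstrings, chunk_size=16, threshold=4):
    """Map chunk number -> [(ones in that chunk, person id), ...]."""
    index = defaultdict(list)
    for person, bits in enumerate(bitstrings):
        for chunk, start in enumerate(range(0, len(bits), chunk_size)):
            ones = bits[start:start + chunk_size].count("1")
            if ones >= threshold:  # only people who are "strong" in this chunk
                index[chunk].append((ones, person))
    for chunk in index:
        # Strongest people first, so a search that starts in a candidate's
        # weakest chunk meets the most promising partners early.
        index[chunk].sort(reverse=True)
    return index
```

Pairing each candidate against the index entry for their weakest chunk first is the "finish early" hope; in the worst case the index buys nothing, as said above.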
Apologies for the wall of text! And thank you for letting me ramble.