Divisible Sum Pairs
Despite the additional trick you've used, this solution's worst-case runtime complexity is O(n+k), not O(n). The culprit is this line:
var counts = new int[k];
C# has to initialize all k of those values to 0. While this may be fast on hardware, it still takes O(k) time in a standard model of computation.
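For reference, here is a minimal sketch of the streaming bucket-count approach being discussed — reconstructed from the description rather than copied from the actual solution, and written in Java rather than the original C# — with the O(k) allocation called out:

```java
// Assumed reconstruction of the bucket-counting solution under review
// (Java, not the original C#). Counts pairs (i < j) with (a[i] + a[j]) % k == 0.
public class DivisiblePairsStreaming {
    static int divisibleSumPairs(int[] a, int k) {
        int[] counts = new int[k];  // zero-filled: this allocation alone costs O(k)
        int pairs = 0;
        for (int x : a) {
            int r = x % k;
            pairs += counts[(k - r) % k];  // earlier elements whose residue completes a pair
            counts[r]++;
        }
        return pairs;
    }

    public static void main(String[] args) {
        // Sample input: [1, 3, 2, 6, 1, 2] with k = 3 has 5 such pairs
        System.out.println(divisibleSumPairs(new int[]{1, 3, 2, 6, 1, 2}, 3));  // prints 5
    }
}
```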
You could replace the array with a hashtable/dictionary, but then you give up O(1) lookup (plus possible insertion) per input element. If m is the number of distinct mod buckets actually encountered and each lookup/insert costs O(log(m)) (as in a tree-based dictionary), the worst-case runtime becomes O(n*log(m)). Whether that improves on O(n+k) depends on how m and k compare. You could even run both approaches concurrently for O(min(n+k, n*log(m))) worst-case time.
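To make that trade-off concrete, a tree-based dictionary variant might look like the following (a hypothetical sketch in Java; `TreeMap` provides the O(log m) lookup/insert bound assumed here):

```java
import java.util.TreeMap;

// Hypothetical dictionary-based variant: a TreeMap gives O(log m) lookup and
// insertion, where m is the number of distinct residues actually seen, so the
// k-sized array (and its O(k) zero-fill) is avoided entirely.
public class DivisiblePairsMap {
    static long divisibleSumPairs(int[] a, int k) {
        TreeMap<Integer, Long> counts = new TreeMap<>();  // residue -> occurrences
        long pairs = 0;
        for (int x : a) {
            int r = x % k;
            pairs += counts.getOrDefault((k - r) % k, 0L);  // O(log m) lookup
            counts.merge(r, 1L, Long::sum);                 // O(log m) insert/update
        }
        return pairs;  // total runtime O(n log m), independent of k
    }

    public static void main(String[] args) {
        System.out.println(divisibleSumPairs(new int[]{1, 3, 2, 6, 1, 2}, 3));  // prints 5
    }
}
```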
Also keep in mind that the trick you are using here is effectively a trade-off: instead of computing n'(n'-1)/2 for each bucket at the end (one multiplication plus a bitshift per bucket), you distribute that work across the main loop as lookups and additions, one per input element. The end-of-loop multiplication approach will eventually be faster once the counts exceed the word size of your architecture, because past that point we can no longer assume that addition and multiplication are constant-time operations.
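The "multiplication at the end" formulation could be sketched like this (again a hypothetical Java version): residues r and k-r pair across buckets, while residues 0 and k/2 pair within their own bucket via the n'(n'-1)/2 formula, which is where the multiply-and-bitshift comes in:

```java
// Hypothetical end-of-loop formulation: tally all residues first, then
// combine bucket sizes arithmetically instead of accumulating during the scan.
public class DivisiblePairsMul {
    static long divisibleSumPairs(int[] a, int k) {
        long[] counts = new long[k];
        for (int x : a) counts[x % k]++;
        // Pairs within residue 0: n'(n'-1)/2 as a multiplication plus a bitshift.
        long pairs = (counts[0] * (counts[0] - 1)) >> 1;
        // When k is even, residue k/2 also pairs with itself.
        if (k % 2 == 0) {
            pairs += (counts[k / 2] * (counts[k / 2] - 1)) >> 1;
        }
        // Every other residue r pairs across with residue k - r.
        for (int r = 1; r < k - r; r++) {
            pairs += counts[r] * counts[k - r];
        }
        return pairs;
    }

    public static void main(String[] args) {
        System.out.println(divisibleSumPairs(new int[]{1, 3, 2, 6, 1, 2}, 3));  // prints 5
    }
}
```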