Divisible Sum Pairs
An interesting discussion. I actually agree with you on this point: it is about the algorithm itself. The complexity here will be O(n+k) regardless of the language you use, unless you have some magical infinite memory already initialized to zero bits; then you could do some simple arithmetic and obtain properly initialized memory instantly, independent of the value of k.
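For concreteness, the O(n+k) bucket approach under discussion might look like the sketch below (assuming the Divisible Sum Pairs task: count index pairs i < j with a[i] + a[j] divisible by k; function and variable names are mine, not from the thread):

```python
def divisible_sum_pairs(n, k, a):
    # Bucket array of size k: buckets[r] counts elements with remainder r mod k.
    # Allocating and zero-initializing this array is exactly the O(k) term
    # discussed above.
    buckets = [0] * k
    for x in a:
        buckets[x % k] += 1

    pairs = 0
    # Elements with remainder 0 pair up among themselves.
    pairs += buckets[0] * (buckets[0] - 1) // 2
    # For even k, remainder k/2 also pairs with itself.
    if k % 2 == 0:
        pairs += buckets[k // 2] * (buckets[k // 2] - 1) // 2
    # Otherwise, remainders r and k - r combine to a multiple of k.
    for r in range(1, (k + 1) // 2):
        pairs += buckets[r] * buckets[k - r]
    return pairs
```

The counting loop is O(n) and the pairing loop is O(k), matching the O(n+k) total above.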
Now, the actual constants hidden in O(n+k) ~ alpha*n + beta*k, in particular beta, will obviously depend on the language, compiler, operating system, and so on. And for plain zero-initialization at allocation time, beta should already be pretty small.
Regarding lazy initialization, it will not help here, since the algorithm relies on the values being zero at the first lookup. If one insists on a lazy array, a second boolean array of size k is needed to track which entries of the integer array have already been touched/incremented by the algorithm. But every entry of that boolean array of size k must itself be initialized to false, so the problem becomes circular. There seems to be no way around O(n+k) from the computer-science perspective.
Regarding the general downsides of this algorithm, I think the space requirement is a much more serious issue. Consider the case n=2, k=2e10, a=[1,2]. A simple O(n^2) pair scan solves this very quickly, while the bucket approach might stall trying to allocate an integer array of size k. So it might be a good idea to check the values of n and k first and branch into one of two approaches based on k; this is what library sorting algorithms frequently do.
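A sketch of that branching idea, assuming the same Divisible Sum Pairs task; the cutoff condition is an arbitrary illustration, not a tuned value:

```python
def count_pairs(n, k, a):
    # Hypothetical cutoff: only build the size-k bucket array when it is
    # cheap relative to the O(n^2) pair scan (assumption, not a tuned bound).
    if k <= 4 * n * n:
        # O(n+k) bucket approach.
        buckets = [0] * k
        for x in a:
            buckets[x % k] += 1
        pairs = buckets[0] * (buckets[0] - 1) // 2
        if k % 2 == 0:
            pairs += buckets[k // 2] * (buckets[k // 2] - 1) // 2
        for r in range(1, (k + 1) // 2):
            pairs += buckets[r] * buckets[k - r]
        return pairs
    # Fallback: quadratic scan, fine when n is tiny even if k is huge
    # (e.g. n=2, k=2e10 from the example above).
    return sum((a[i] + a[j]) % k == 0
               for i in range(n) for j in range(i + 1, n))
```

Both branches return the same count; the check just picks whichever cost, alpha*n^2 or alpha*n + beta*k, is smaller for the given input, much like hybrid library sorts switch between strategies.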