Pointers have to point at something in memory. You'd still need the entire array in memory somewhere, just in another part of the program, and it wouldn't be freed until you're done. Even if that other part of the program you haven't written yet were some kind of cache that could load only the parts of the data set it thinks will be needed and discard them when they're no longer needed, what you're describing is an extra algorithm you haven't written yet that might fix the inefficiencies of the one you have written. Anybody can say "this code I wrote will be much better when I add some other code to fix it"; it doesn't score you any points.
File offsets would be no better, and possibly worse: you'd most likely cause the file to be loaded entirely into memory and cached, which saves nothing. If the file isn't loaded into memory and kept there, then the program will be making constant disk accesses, which would be horrendously slow and would make re-examining the same data multiple times extremely expensive.
"It certainly beats O(d)"
Firstly, that description of the complexity of the queue solution is as wrong as your original claim that your solution was O(1). The primary factor in the queue solution, as in yours, is the size of the total data set. The size of d has minimal effect on the speed of the queue solution, and the very small effect it does have (on the number of triplet-count increments) is actually in inverse proportion to the size of d. Python's deque type has O(1) appends and pops at either end, and most other languages have an equivalent type, including Haskell, the language I used for this challenge.
As for memory use, we're told that d will never be more than 20. Twenty values kept in memory for the duration of the program is a tiny amount, given the potential size of the data. We're told that the data set may have 10000 values in it, all of which your solution (and your suggested "improvements") will keep in memory for the entire program.
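For concreteness, here is a minimal sketch of the kind of queue solution being discussed, in Python since its deque type came up above. It's an illustration rather than my actual Haskell code: the function name and exact eviction rule are my own, and it assumes the strictly increasing input the challenge guarantees. The deque only ever holds values within 2*d of the current one, so its size is bounded by d, not by the length of the data.

```python
from collections import deque

def beautiful_triplets(d, values):
    """Count triplets (x - 2d, x - d, x) in a strictly increasing stream."""
    window = deque()  # recent values still within 2*d of the current one
    count = 0
    for x in values:
        # Evict values too old to belong to a triplet ending at x.
        # Each value is appended once and popped at most once: O(n) overall.
        while window and window[0] < x - 2 * d:
            window.popleft()
        # x completes a triplet iff both predecessors are still in the window.
        # The window never holds more than 2*d distinct integers, so this
        # membership scan costs a small constant, not a factor of n.
        if (x - 2 * d) in window and (x - d) in window:
            count += 1
        window.append(x)
    return count

# Sample case from the challenge: d = 3 gives (1,4,7), (2,5,8) and (4,7,10).
assert beautiful_triplets(3, [1, 2, 4, 5, 7, 8, 10]) == 3
```

Each value flows through the deque exactly once, which is where the single-pass O(n) behaviour comes from.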
You realise that the program itself, its code, takes up significantly more space in memory than the queue in the queue solution requires? The queue is a trivial cost even when the data set is small and a minuscule one when the data set is big.
"Evaluating each element up to 3 times is O(1) time each for a total of O(n) time."
Three times as much work for each value, when there are 10000 values, is a lot of extra work. The queue solution examines each value only once.
The best thing that can be said about your solution is that the code is simple. It is only efficient in comparison to the very naive solutions where list elements are examined as many as d times each. None of your claims are accurate. Some of them are not meaningful.