I'm also having trouble with timeouts in Haskell, so I downloaded a test case to try it locally, and it runs almost instantaneously:
$ ghc --make myprog.hs
$ time ./myprog < input02.txt | diff output02.txt -
./myprog < input02.txt 0.03s user 0.00s system 88% cpu 0.030 total
Even if the machines are different, it is difficult to imagine that the test machine is 10k times slower!?
Oh yes, this can happen. If the test machine assigns only a little memory to each test (it probably runs many tests in parallel, and may be a virtual machine with even lower performance), then the (immutable, functional) solution ends up swapping to disk the whole time: instead of gigabytes per second in RAM, your solution is throttled to, say, 50 MB/s or less.
O(N log N) for preparing data,
O(log N) for each search
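In Haskell (the language most of this thread is about), that hint might look like the sketch below: sort descending and build prefix sums once, then answer each query with a binary search over the prefix sums. The function names are my own, not from the problem setter.

```haskell
import Data.Array (Array, listArray, bounds, (!))
import Data.List (sortBy)

-- Prepare once, O(N log N): sort descending, then running sums,
-- stored in an array for O(1) indexing during the searches.
prepare :: [Int] -> Array Int Int
prepare xs =
  let sorted = sortBy (flip compare) xs
  in listArray (0, length xs - 1) (scanl1 (+) sorted)

-- Each query, O(log N): binary-search for the first prefix sum >= s.
-- Just k means the k largest elements already reach the target;
-- Nothing means even the whole set falls short.
query :: Array Int Int -> Int -> Maybe Int
query sums s = go lo hi Nothing
  where
    (lo, hi) = bounds sums
    go l h best
      | l > h         = best
      | sums ! m >= s = go l (m - 1) (Just (m + 1))
      | otherwise     = go (m + 1) h best
      where m = (l + h) `div` 2

main :: IO ()
main = do
  let sums = prepare [4, 2, 9]   -- prefix sums: [9, 13, 15]
  print (query sums 10)          -- Just 2: 9 + 4 = 13 >= 10
  print (query sums 20)          -- Nothing: the total is only 15
```

The array matters here: binary-searching a plain Haskell list would pay O(N) per index and throw the O(log N) bound away.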
This is a great problem with well-designed test cases. The second hint was very useful, first I thought that simple linear search would be enough, but my several failed attempts (in Scala) and the large number of queries suggested that it would be better to do the search phase a bit faster. Kudos to the problem-designer.
Even an algorithm with this complexity got multiple timeouts (coding in Common Lisp), which makes the situation a little weird.
I am having timeouts with Clojure as well.
Here is what I do (Spoiler alert!):
I reverse-sort, add up the values (the O(n log n) part), then do a recursive search with the middle value as the pivot (the O(log n) part).
In Haskell, reading all the contents at once is faster than reading line by line for each test case. That was the cause of my timeouts.
I used ByteString for input and it's still not enough. I wonder whether passing the list recursively as one of the binary-search arguments degrades runtime performance significantly.
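For reference, a minimal fast-input sketch along those lines (my own naming, assuming the input is whitespace-separated integers): read all of stdin in one go with `Data.ByteString.Char8` and parse with `readInt`, instead of line-by-line `String` I/O.

```haskell
import qualified Data.ByteString.Char8 as BS
import Data.Maybe (mapMaybe)

-- Parse every integer in the input in a single pass.
parseInts :: BS.ByteString -> [Int]
parseInts = map fst . mapMaybe BS.readInt . BS.words

main :: IO ()
main = do
  -- Slurping the whole input once is much cheaper than
  -- String-based, line-by-line reading.
  ns <- parseInts <$> BS.getContents
  print (sum ns)  -- placeholder for the real processing
```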
The solution I had passing did not use ByteString; I must admit though the pass/fail was quite random, and I'm not convinced it's down to speed per se: seemingly innocuous changes led to lots of timeouts...
I came up with what I think is an interesting solution for this problem, but the test case sizes make me have to fill my code with optimized i/o just to handle reading 100,000 numbers without timing out. This is very frustrating and in my opinion, ruins the point of the challenge.
How do you optimize i/o?
Does anyone else have problems with this exercise in Clojure? My solution is simple, using sort and binary search, and it is still unable to pass all the test cases due to timeouts. Any ideas?
Same here. I sort and reverse the numbers and then loop-recur until done. Any ideas how to speed it up?
Same here. Whatever I did to increase performance didn't help, even calling dorun once I found the value. I was not keeping all the found values (since the number of test cases can be 10^5), but that little trick didn't help either. Too much time is spent finding the needed value. Since there are a lot of tests, it seems necessary to find another algorithm.
I was thinking about creating a lazy seq of maximum values instead of sorting, as we don't need the entire seq sorted but rather the first n values; only in the pessimistic scenario do we need them all.
This was a pain in Haskell due to timeouts, but I can testify that there is a solution.
The issue is that the test cases are very long. Thus performing each test in O(n) is not going to cut it.
It is worth spending a little time prepping the data, which can be done in O(n log n), and then running each test using binary search in O(log n). Then you can pass all the test cases.
This problem is definitely worth more than 30 points. The intuitive approach of sorting and then adding until you reach the number does not cut it.
I solved it in Haskell, and this problem was cool to optimize. But it is definitely not easy, worth much more than just 20 points. My first solution was awful: O(2^n), generating all subsets. It timed out on almost every case. So I optimized to an O(n log n) descending sort plus an O(n) search. Better, but a lot of test cases still timed out. In the end, I needed to prepare the data so the queries could use binary search, and then all the tests passed with O(log n) per query.
Man, did I have trouble getting Haskell not to time out. This was much harder than I was expecting, and I don't know if I totally missed an easier solution. Basically I had to optimise by calculating all my sums with the corresponding subset sizes, do a bit of ordering here and there, and then solve all the tests in one sweep, with another dollop of ordering to present the answers back in the same order they were read.
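One way to sketch that one-sweep idea (my own variable names, assuming the prefix sums of the descending-sorted input are already built): tag each query with its position, sort the queries, walk queries and prefix sums together with two pointers, then sort the answers back into input order.

```haskell
import Data.List (sortOn)

-- Answer every query in one pass over the non-decreasing prefix sums.
-- O(N + Q log Q) overall, once the prefix sums exist.
answerAll :: [Int] -> [Int] -> [Maybe Int]
answerAll prefixSums queries =
  map snd . sortOn fst . sweep 0 prefixSums . sortOn snd $
    zip [0 :: Int ..] queries
  where
    -- k counts the prefix sums already skipped; the head of the first
    -- list is therefore the sum of the (k + 1) largest elements.
    sweep _ _  []   = []
    sweep _ [] rest = [(i, Nothing) | (i, _) <- rest]  -- total too small
    sweep k ps@(p : ps') qs@((i, s) : qs')
      | p >= s    = (i, Just (k + 1)) : sweep k ps qs'
      | otherwise = sweep (k + 1) ps' qs

main :: IO ()
main =
  -- prefix sums [9, 13, 15] come from the descending-sorted [9, 4, 2]
  print (answerAll [9, 13, 15] [20, 5, 10])  -- [Nothing,Just 1,Just 2]
```

Because both lists are sorted, each pointer only ever moves forward, so the sweep itself is linear; the per-query binary search in other comments achieves the same bound without the reordering bookkeeping.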
Haa... this problem is worth at least 30 points. It took me a lot of time to resolve the timeout issue. My solution's performance improved after I stopped converting arrays to lists (I don't know why I do that so often).
Hey SummentDas, I have the same problem with Haskell: I'm not able to pass the second test case and I don't really know how to improve it. I've been googling and it seems that monads are the answer. Did you use monads to solve this problem?
I seem to be having difficulty writing a solution that doesn't time out. Should I be trying to do it without sorting the set?
Sorting is O(N log N); you should be fine. Consider 10^5 elements in the list and 10^5 queries; what's the time complexity of your solution?
Sorting is fine.
The timeout happens because there are a lot of queries.
Doing the query in O(N) is not good enough. If you prep the data correctly you can perform the queries in O(log N).