I'm also having trouble with timeouts in Haskell, so I downloaded a test case to try it locally, and it runs almost instantaneously:
$ ghc --make myprog.hs
$ time ./myprog < input02.txt | diff output02.txt -
./myprog < input02.txt 0.03s user 0.00s system 88% cpu 0.030 total
Even if the machines are different, it is hard to imagine that the test machine is 10,000 times slower!?
Oh yes, this can happen. If the test machine allocates only a small amount of memory per test (it presumably runs lots of tests in parallel, and may be a virtual machine with even lower performance), then an immutable functional solution ends up swapping to disk the whole time: instead of GB/s in RAM, your solution is throttled down to, say, 50 MB/s or less.
O(N log N) for preparing data,
O(log N) for each search
This is a great problem with well-designed test cases. The second hint was very useful: at first I thought a simple linear search would be enough, but several failed attempts (in Scala) and the large number of queries suggested that the search phase had better be a bit faster. Kudos to the problem designer.
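A minimal Haskell sketch of an approach with exactly those bounds, assuming the task is: for each query S, report how many of the largest values are needed to reach a sum of at least S (the function names and this reading of the task are my assumptions, not the setter's):

import Data.List (sortBy)
import Data.Ord (Down (..), comparing)
import qualified Data.Array as A

-- Prefix sums of the values sorted in descending order: O(N log N), once.
prefixSums :: [Int] -> A.Array Int Int
prefixSums xs = A.listArray (1, length xs) (scanl1 (+) (sortBy (comparing Down) xs))

-- Smallest k such that the k largest values sum to at least s: O(log N).
search :: A.Array Int Int -> Int -> Maybe Int
search ps s
  | ps A.! hi < s = Nothing              -- even the full sum is too small
  | otherwise     = Just (go lo hi)
  where
    (lo, hi) = A.bounds ps
    go l h
      | l >= h        = l
      | ps A.! m >= s = go l m
      | otherwise     = go (m + 1) h
      where m = (l + h) `div` 2

-- e.g. search (prefixSums [4, 2, 9]) 12 == Just 2   (9 + 4 >= 12)

The prefix sums are computed once up front; each query is then a binary search over the array.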
Even an algorithm with this complexity got multiple timeouts for me (coding in Common Lisp), which makes the situation a little weird.
I am having timeouts with Clojure as well.
Here is what I do (Spoiler alert!):
I reverse-sort, add up the values (the N log N part), and then do a recursive search using the middle value as the pivot (the log N part).
I brought the search phase down to O(N) (over all queries) by sorting the test data first.
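A sketch of that offline trick in Haskell (same assumed task as above; dropWhileCount is a hypothetical helper): sort the queries ascending, then answer them with a single forward pointer over the prefix sums, so all searches together cost O(N + Q) after the sorts.

import Data.List (sort, sortBy)
import Data.Ord (Down (..), comparing)

-- Answer all queries in one left-to-right sweep over the prefix sums.
-- Both lists are ascending, so the pointer only ever moves forward.
answerAll :: [Int] -> [Int] -> [(Int, Maybe Int)]
answerAll values queries = go 1 sums (sort queries)
  where
    sums = scanl1 (+) (sortBy (comparing Down) values)
    go _ _ [] = []
    go k ps (q:qs) = case dropWhileCount (< q) k ps of
      Nothing        -> (q, Nothing) : [(q', Nothing) | q' <- qs]
      Just (k', ps') -> (q, Just k') : go k' ps' qs
    -- drop prefix sums below the query, tracking the 1-based index
    dropWhileCount p k (x:xs)
      | p x       = dropWhileCount p (k + 1) xs
      | otherwise = Just (k, x:xs)
    dropWhileCount _ _ [] = Nothing

In practice you would tag each query with its original position and restore the input order at the end.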
In Haskell, reading all of the input at once is faster than reading line by line for each test case. That was the culprit behind my timeouts.
I used ByteString for input and it's still not enough. I am wondering whether recursing on a list as one of the binary search arguments degrades runtime performance significantly.
The solution I had passing did not use ByteString. I must admit, though, that the pass/fail was quite random, and I'm not convinced it comes down to speed per se: seemingly innocuous changes led to lots of timeouts...
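For what it's worth, here is the fast-I/O pattern the comments above describe, as a minimal sketch (the input format, whitespace-separated integers, is an assumption):

import qualified Data.ByteString.Char8 as B
import Data.Maybe (fromMaybe)

-- Read the whole input once and parse with the strict ByteString
-- reader, instead of a getLine/read pair per test case.
main :: IO ()
main = do
  input <- B.getContents
  let nums = map readInt (B.words input)
  print (sum nums)  -- placeholder: hand `nums` to the real solver
  where
    readInt = fst . fromMaybe (error "parse") . B.readInt

The two points that matter are the single B.getContents and B.readInt in place of read :: String -> Int, which is notoriously slow.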
I came up with what I think is an interesting solution to this problem, but the test case sizes force me to fill my code with optimized I/O just to read 100,000 numbers without timing out. This is very frustrating and, in my opinion, it ruins the point of the challenge.
How do you optimize I/O?
Does anyone have problems with this exercise in Clojure? My solution is simple, using sort and binary search, and it still fails some test cases due to timeouts. Any ideas?
Same here. I sort and reverse the numbers and then loop-recur until done. Any ideas how to speed it up?
Same here. Whatever I did to improve performance didn't help, not even a dorun once I had found the value. I was not keeping all the found values (since the number of test cases can be 10^5), but that little trick didn't help either. Too much time is wasted finding the needed value. Since there are a lot of tests, it seems necessary to find another algorithm for the search.
I was thinking about creating a lazy seq of maximum values instead of sorting, since we don't need the entire seq to be sorted, only the first n values; it's only in the pessimistic scenario that we need them all.
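In Haskell terms that idea might look like the toy sketch below: a tiny pairing heap whose descending view is produced lazily, so only as many maxima as the consumer actually demands ever get ordered (the Heap type and all names are mine, purely illustrative):

-- A minimal pairing heap; popAll yields elements largest-first, lazily,
-- so elements that are never demanded are never extracted.
data Heap a = Empty | Node a [Heap a]

insert :: Ord a => a -> Heap a -> Heap a
insert x = merge (Node x [])

merge :: Ord a => Heap a -> Heap a -> Heap a
merge Empty h = h
merge h Empty = h
merge h1@(Node x hs1) h2@(Node y hs2)
  | x >= y    = Node x (h2 : hs1)
  | otherwise = Node y (h1 : hs2)

popAll :: Ord a => Heap a -> [a]
popAll Empty = []
popAll (Node x hs) = x : popAll (foldr merge Empty hs)

descending :: Ord a => [a] -> [a]
descending = popAll . foldr insert Empty
-- e.g. take 2 (descending [4, 2, 9]) == [9, 4]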