I think he was just trying to get a 2 line solution. The code is terribly inefficient when compared with a HashMap solution, much harder to understand, and (if implemented in any real system) nearly impossible to maintain.
As the original poster indicated he was going for "shortest/cleanest" code. He might have succeeded with "shortest", but "cleanest" is a stretch.
Despite Java's verbosity, once you get used to the functional style, what you see there is just "col1.each | e1 -> col2.filter(==e1).count | sout". Isn't that clean, easy to read, and maintainable?
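In real Java, a sketch of that pipeline might look like the following (the sample data is mine, purely illustrative):

```java
import java.util.List;

public class StreamCount {
    // For each query, count how many input strings equal it (this is the O(N) inner pass)
    static long countOccurrences(List<String> strings, String query) {
        return strings.stream().filter(query::equals).count();
    }

    public static void main(String[] args) {
        List<String> strings = List.of("aba", "baba", "aba", "xzxb");
        List<String> queries = List.of("aba", "xzxb", "ab");
        queries.forEach(q -> System.out.println(countOccurrences(strings, q)));
    }
}
```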
Terribly inefficient compared to HashMap? Disagree. If the author had wanted, he could have made a test case where all the strings share the same hashCode, and the HashMap solution would have been even worse than the straight O(N*N) one.
But in general, for randomly generated input (which was not stated in the task), "terribly inefficient" is true, because the HashMap solution gives you O(N) instead of O(N*N).
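For comparison, here is a minimal sketch of the O(N) HashMap approach (class and variable names are mine):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FrequencyCount {
    // One O(N) pass builds the counts; each query is then an expected O(1) lookup
    static Map<String, Integer> buildCounts(List<String> strings) {
        Map<String, Integer> counts = new HashMap<>();
        for (String s : strings) {
            counts.merge(s, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = buildCounts(List.of("aba", "baba", "aba", "xzxb"));
        for (String q : List.of("aba", "xzxb", "ab")) {
            System.out.println(counts.getOrDefault(q, 0));
        }
    }
}
```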
raggzy, no intent to inflame here, I can tell you're a true believer; I work with another. I've heard the "once you learn how to read it, it's much simpler" argument before. I do know how to read it, and sometimes use Java streaming and functional programming.
That said, here are a couple of anecdotal observations.
1) A few months back, I informally polled a dozen colleagues (all experienced developers) with small samples of code using Java streaming vs. more traditional style. About 80% found the streaming code more difficult to read.
2) In some recent performance analysis work, I found issues with streaming code. IMO, it becomes so easy to just stream one structure into another, filter/collect, etc., that developers (even ones very experienced with functional programming) begin to forget about the cost of the operations. Data structures are easily created and discarded, without thought.
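A contrived illustration of that pattern (hypothetical names, not from the original post): collecting at every step materializes an intermediate list that a single fused pipeline would avoid entirely:

```java
import java.util.List;
import java.util.stream.Collectors;

public class IntermediateStructures {
    // Two collect() calls: the full upper-cased list is built just to be thrown away
    static List<String> twoPasses(List<String> words) {
        List<String> upper = words.stream().map(String::toUpperCase).collect(Collectors.toList());
        return upper.stream().filter(s -> s.startsWith("A")).collect(Collectors.toList());
    }

    // One fused pipeline: same result, no intermediate structure
    static List<String> onePass(List<String> words) {
        return words.stream()
                .map(String::toUpperCase)
                .filter(s -> s.startsWith("A"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> words = List.of("apple", "banana", "avocado");
        System.out.println(twoPasses(words)); // [APPLE, AVOCADO]
        System.out.println(onePass(words));   // [APPLE, AVOCADO]
    }
}
```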
Regarding efficiency of the example (where N is the number of strings, and Q is the number of queries): a typical HashMap solution is O(N + Q). You're correct that it is possible (though it takes deliberately crafted input) to create strings with the same hash, which causes HashMap lookups to degenerate into tree-bin (TreeMap-like) lookups - giving a worst-case performance of O((N log UniqueN) + (Q log UniqueN)). That is likely still better than O(N + N * Q) - unless, of course, the number of queries is very small and most values of N are unique.
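Colliding String hashes do exist in practice - "Aa" and "BB" are the classic pair, and because String's hash is positional, concatenating them yields arbitrarily many colliding strings:

```java
public class HashCollision {
    public static void main(String[] args) {
        System.out.println("Aa".hashCode()); // 2112
        System.out.println("BB".hashCode()); // 2112
        // Concatenations collide too: 2^n strings of length 2n, all with equal hashCodes
        System.out.println("AaAa".hashCode() == "BBBB".hashCode()); // true
        System.out.println("AaBB".hashCode() == "BBAa".hashCode()); // true
    }
}
```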
More plausible than strings with the same hash, would be repeated queries of the same string. With the hashmap solution we get: typical O(Q), worst case O(Q log UniqueN). For the streaming solution O(Q * N).
I recommend using streaming/functional programming where the TYPICAL reader will find it easily understandable. I recommend being careful to watch for tendencies to create and trash structures without thinking about the algorithm.
And nice work on doing this with only 2 LOC. I wouldn't have thought to use IntStream.range() as a technique for reading the input like that. Pretty cool.
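I haven't seen the original two lines, but the trick presumably looks something like this sketch (the Scanner is fed from a String literal here just so the example is self-contained):

```java
import java.util.List;
import java.util.Scanner;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class RangeInput {
    // Read n, then use IntStream.range() to pull the next n tokens from the input
    static List<String> readStrings(Scanner in) {
        int n = in.nextInt();
        return IntStream.range(0, n)
                .mapToObj(i -> in.next())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Scanner in = new Scanner("3 aba baba aba");
        System.out.println(readStrings(in)); // [aba, baba, aba]
    }
}
```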
Ah, I thought we'd have a small imperative-vs-functional holy war :)
Someone above in this thread asked me about the same thing, and your reply here is pretty similar to what I answered. Which means we're actually on the same, imperative side :)
I just sometimes use Collectors.joining(), IntStream.range(), .max(), .min(), .anyMatch(), and so on, because they're not bad.
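For instance, trivial uses of those methods:

```java
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.Stream;

public class SmallStreamUses {
    public static void main(String[] args) {
        // joining: concatenate with a delimiter
        String joined = Stream.of("a", "b", "c").collect(Collectors.joining(", "));
        System.out.println(joined); // a, b, c

        // max/min on an int range; anyMatch for a short-circuiting predicate
        System.out.println(IntStream.range(0, 5).max().getAsInt());     // 4
        System.out.println(IntStream.range(0, 5).min().getAsInt());     // 0
        System.out.println(IntStream.range(0, 5).anyMatch(i -> i > 3)); // true
    }
}
```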