Besides a shorter solution, what are the advantages of a functional approach compared to say a HashMap? Does the functional approach take more memory and/or run time?
What you're looking for is the classic "functional vs. imperative" holy war :). There are tons of discussions of that on the internet.
It's "managing calculations" vs. "managing state". In functional style you feel like a mathematician composing a complete function. In imperative style you feel like a programmer thinking "how will the state change if I write this?"
The classic "what it is" vs. "how to do it".
HashMap solution could be written in functional style too.
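For instance, assuming the underlying task is counting how many times each query string occurs in a list (the class and method names below are mine, not from the thread), a HashMap-based solution can itself be written functionally with a collector:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class FunctionalHashMap {
    // Build the frequency map with a collector instead of an explicit loop.
    static Map<String, Long> frequencies(List<String> strings) {
        return strings.stream()
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> freq = frequencies(List.of("aba", "baba", "aba", "xzxb"));
        // Each query becomes a constant-time lookup.
        List.of("aba", "xzxb", "ab")
                .forEach(q -> System.out.println(freq.getOrDefault(q, 0L))); // 2, 1, 0
    }
}
```

Same hash-based data structure, but expressed as a pipeline rather than mutation of a map inside a loop.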
My personal take: functional features are good and give better readability. I'd suggest using them for things like list comprehensions, e.g. when you want to filter, count, or aggregate elements. But personally I prefer mostly imperative code, because it's self-contained and self-explanatory: you know exactly what's going on, though you have to write more code.
Also, functional-style code is less bug-prone in multithreaded and multicore/multinode processing: filters and aggregators/reducers form a naturally parallelizable architecture. That's why it's often used in things like map-reduce and data processing.
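A minimal sketch of why that is: a stateless filter feeding an associative reduction touches no shared mutable state, so the runtime can split the work across threads freely. The example below is illustrative, not from the thread:

```java
import java.util.stream.LongStream;

public class ParallelSketch {
    public static void main(String[] args) {
        // Stateless filter + associative reduction (count): safe to parallelize
        // because no shared mutable state is touched; the range can be split freely.
        long evens = LongStream.rangeClosed(1, 1_000_000)
                .parallel()
                .filter(n -> n % 2 == 0)
                .count();
        System.out.println(evens); // 500000, regardless of how the work was split
    }
}
```

The same loop written imperatively with a shared counter would need synchronization to parallelize safely.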
In terms of memory/runtime, it depends on the environment and the implementation of the functional language. In most cases they are roughly equal. But there are cases where functional code generates a lot of objects under the hood, which puts pressure on the garbage collector. And some problems genuinely require managing state directly; there, immutability would be overkill.
Let's say you can always do at least as well in imperative style. But in most cases you gain more from the shortness and cleanliness of functional code. However, you still need to know what it's doing under the hood and use it carefully.
What exactly do you mean by a functional approach? If you just mean breaking code into different methods so that each method does one job, then we can use a HashMap that way too. But other than that, are there any serious issues with using a HashMap instead of a char array as discussed in the editorial?
"Functional programming" doesn't mean breaking code into smaller methods. Functional programming is a pretty different way to think about software (compared to writing a list of instructions to be followed).
I did a little googling and came up with this: https://wiki.haskell.org/Functional_programming . Maybe that helps?
I don't have much experience with functional programming myself, but I did enjoy reading a cute little book called The Little Schemer: https://mitpress.mit.edu/books/little-schemer . I haven't had a chance to read the rest of the series.
I think he was just trying to get a 2 line solution. The code is terribly inefficient when compared with a HashMap solution, much harder to understand, and (if implemented in any real system) nearly impossible to maintain.
As the original poster indicated he was going for "shortest/cleanest" code. He might have succeeded with "shortest", but "cleanest" is a stretch.
Despite Java's verbosity, if you get used to the functional style, what you see there is just "col1.each | e1 -> col2.filter(==e1).count | sout". Isn't that clean, easy to read, and maintainable?
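In actual Java streams, that pseudocode could look roughly like this (the collection names and values are placeholders of mine):

```java
import java.util.List;

public class NestedCount {
    // For each query, count how many strings in the list are equal to it:
    // the "col2.filter(==e1).count" part of the pseudocode.
    static long countOccurrences(List<String> strings, String q) {
        return strings.stream().filter(q::equals).count();
    }

    public static void main(String[] args) {
        List<String> strings = List.of("aba", "baba", "aba", "xzxb");
        List<String> queries = List.of("aba", "xzxb", "ab");
        // "col1.each | e1 -> ... | sout"
        queries.forEach(q -> System.out.println(countOccurrences(strings, q))); // 2, 1, 0
    }
}
```

Note this is the O(N*Q) nested-scan shape being debated below, not the hash-based one.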
Terribly inefficient compared to HashMap? Disagree. If the problem author had wanted, he could have made a test case where all strings have the same hashCode, and the HashMap solution would have been even worse than the straightforward O(N*N).
But in general, for randomly generated input (which was not stated in the task), "terribly inefficient" is true, because the HashMap solution gives you O(N) instead of O(N*N).
raggzy, no intent to inflame here, I can tell you're a true believer; I work with another. I've heard the "once you learn how to read it, it's much simpler" argument before. I do know how to read it, and sometimes use Java streaming and functional programming.
That said, here are a couple of anecdotal observations.
1) A few months back, I informally polled a dozen colleagues (all experienced developers) with small samples of code using Java streaming vs. more traditional style. About 80% found the streaming code more difficult to read.
2) In some recent performance analysis work, I found issues with streaming code. IMO, it becomes so easy to just stream one structure into another, filter/collect, etc., that developers (even ones very experienced with functional programming) begin to forget about the cost of the operations. Data structures are easily created and discarded, without thought.
Regarding efficiency of the example (where N is the number of strings and Q is the number of queries): a typical hashmap gives O(N + Q). You're correct that it might be possible (though extremely difficult) to craft strings with the same hash, which would cause HashMap buckets to degenerate into balanced trees (as Java 8+ does) - giving a worst case of O((N log UniqueN) + (Q log UniqueN)). That is likely still better than O(N + N * Q) - unless, of course, the number of queries is very small and most of the N values are unique.
More plausible than strings with the same hash, would be repeated queries of the same string. With the hashmap solution we get: typical O(Q), worst case O(Q log UniqueN). For the streaming solution O(Q * N).
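One way to blunt the repeated-query case without abandoning the streaming scan is to memoize results, so each distinct query pays the O(N) scan once and repeats become O(1) lookups. A sketch under that assumption (class and names are mine):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MemoizedQueries {
    private final List<String> strings;
    private final Map<String, Long> cache = new HashMap<>();

    MemoizedQueries(List<String> strings) {
        this.strings = strings;
    }

    // Each distinct query pays the O(N) scan once; repeated queries hit the cache.
    long count(String q) {
        return cache.computeIfAbsent(q,
                k -> strings.stream().filter(k::equals).count());
    }

    public static void main(String[] args) {
        MemoizedQueries m = new MemoizedQueries(List.of("aba", "baba", "aba", "xzxb"));
        for (String q : List.of("aba", "aba", "xzxb")) {
            System.out.println(m.count(q)); // 2, 2, 1
        }
    }
}
```

Of course, once you're caching counts in a map anyway, precomputing the whole frequency map up front is the simpler O(N + Q) design.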
I recommend using streaming/functional programming where the TYPICAL reader will find it easily understandable. And I recommend watching for the tendency to create and trash structures without thinking about the algorithm.
And nice work on doing this with only 2 LOC. I wouldn't have thought to use IntStream.range() as a technique for reading the input like that. Pretty cool.
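The original two-line solution isn't quoted in the thread, but the input-reading trick being praised presumably looks something like this sketch (reading from a fixed string here instead of System.in, purely for demonstration):

```java
import java.util.List;
import java.util.Scanner;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class StreamInput {
    // IntStream.range() drives the n reads instead of an explicit for loop.
    static List<String> readStrings(Scanner in) {
        int n = in.nextInt();
        return IntStream.range(0, n)
                .mapToObj(i -> in.next())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // A Scanner over a fixed string; the real solution would wrap System.in.
        Scanner in = new Scanner("3\naba baba xzxb");
        System.out.println(readStrings(in)); // [aba, baba, xzxb]
    }
}
```

The range index `i` is unused; the stream serves only to repeat `in.next()` n times and collect the results.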
Ah, I thought we'd have a small imperative-functional holy war :)
Someone asked me about the same thing above in this thread, and I find your reply here pretty similar to mine. Which means we're actually on the same (imperative) side :)
I just sometimes use Collectors.joining(), IntStream.range(), .max(), .min(), .anyMatch(), and so on, because they're not bad.