This is concise, but not performant: you iterate through the entire array on every query, which is O(N*Q), when O(N+Q) is possible with the same O(N) memory usage.
Imagine a test where all strings share the same hashCode. With string length <= 100 that is very easy to construct, and the author could have included such a test if they wanted; every deterministic hash function is vulnerable to this kind of adversarial input. On such a test your O(N+Q) hashtable solution degrades to O(N*Q).
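To make this concrete, here is a small sketch showing how easy such collisions are to manufacture for Java's `String.hashCode` (the class name is mine, the collision property is a well-known fact): `"Aa"` and `"BB"` hash identically, and concatenating colliding pairs yields exponentially many colliding keys.

```java
public class HashCollisions {
    public static void main(String[] args) {
        // 'A'*31 + 'a' = 65*31 + 97 = 2112, and 'B'*31 + 'B' = 66*31 + 66 = 2112
        System.out.println("Aa".hashCode()); // 2112
        System.out.println("BB".hashCode()); // 2112

        // String.hashCode satisfies h(s1 + s2) = h(s1) * 31^len(s2) + h(s2),
        // so every concatenation of colliding pairs collides as well:
        // 2^k strings of length 2k, all with the same hashCode.
        System.out.println("AaAa".hashCode() == "BBBB".hashCode()); // true
        System.out.println("AaBB".hashCode() == "BBAa".hashCode()); // true
    }
}
```

With strings of length up to 100 this gives 2^50 pairwise-distinct keys sharing one hashCode, far more than any test would need.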
So when talking about "big O", which is an upper bound, the "hashtable" solution has the same worst-case complexity: O(N*Q).
A better and more stable algorithm would be a TreeMap keyed by the strings (String's natural ordering is already lexicographic, so no custom comparator is needed). That guarantees O(Q*logN) for the queries, regardless of the input.
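A minimal sketch of that TreeMap approach, assuming the problem shape implied by the thread (count how often each query string occurs among the input strings; the sample data is mine):

```java
import java.util.TreeMap;

public class FrequencyQueries {
    public static void main(String[] args) {
        String[] strings = {"aba", "baba", "aba", "xzxb"};
        String[] queries = {"aba", "xzxb", "ab"};

        // Build: O(N*logN) comparisons, each O(L) for strings of length <= L.
        TreeMap<String, Integer> counts = new TreeMap<>();
        for (String s : strings) counts.merge(s, 1, Integer::sum);

        // Query: O(Q*logN), completely independent of hash quality.
        for (String q : queries)
            System.out.println(counts.getOrDefault(q, 0)); // prints 2, 1, 0
    }
}
```

The worst case is still O((N+Q)*logN) even on adversarial input, which is the "stable" part.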
By the way, starting from Java 8 they combined the best of both worlds: when a bucket accumulates many keys with the same hashCode, HashMap replaces its linked list with a balanced binary tree (a red-black tree, provided the keys are Comparable).
So in Java 8+ the HashMap solution also gives you O(Q*logN) in the worst case.
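A quick sketch of that worst case (the treeification itself is internal to HashMap and not directly observable, but the point is that lookups stay fast and correct even when every key shares one hashCode; the 8-entry treeify threshold is an OpenJDK implementation detail):

```java
import java.util.HashMap;
import java.util.Map;

public class TreeifiedBuckets {
    public static void main(String[] args) {
        // Build 2^10 = 1024 distinct strings that all share one hashCode,
        // by concatenating the colliding pair "Aa"/"BB". In Java 8+, a bucket
        // this crowded is converted to a red-black tree (String is Comparable),
        // so lookups among these keys cost O(log n) rather than O(n).
        Map<String, Integer> map = new HashMap<>();
        for (int mask = 0; mask < 1024; mask++) {
            StringBuilder sb = new StringBuilder();
            for (int bit = 0; bit < 10; bit++)
                sb.append((mask >> bit & 1) == 0 ? "Aa" : "BB");
            map.put(sb.toString(), mask);
        }
        System.out.println(map.size()); // 1024 distinct keys, one hashCode

        StringBuilder key0 = new StringBuilder();          // the mask == 0 key
        for (int i = 0; i < 10; i++) key0.append("Aa");
        System.out.println(map.get(key0.toString()));      // prints 0
    }
}
```

In Java 7 and earlier the same input turns each lookup into a linear scan of the whole bucket, which is exactly the O(N*Q) degradation described above.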