I believe what he's trying to say is this:
There are two approaches here -
1. The difference array approach (the one everyone praises as being more efficient)
2. The intuitive approach, i.e., going through each (a, b, k) triple, incrementing the elements located between indices a and b in the array, then finally scanning through for the max
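Approach 2 can be sketched in a few lines of Python (the problem doesn't prescribe a language, and the function name here is just illustrative; a and b are 1-based and inclusive, as in the problem statement):

```python
def array_manipulation_naive(n, queries):
    """Approach 2: apply each (a, b, k) update directly.

    Worst case: O(N * M) time for M queries on an array of size N,
    since each query may touch up to N elements.
    """
    arr = [0] * n
    for a, b, k in queries:
        for i in range(a - 1, b):  # up to N increments per query
            arr[i] += k
    return max(arr)               # final O(N) scan for the answer

# The problem's sample case: n = 5, three queries
print(array_manipulation_naive(5, [(1, 2, 100), (2, 5, 100), (3, 4, 100)]))  # 200
```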

The reason approach 1 is more efficient is that the difference array requires at most two adjustments per query: you increment the value at the start index by k, and decrement the value just past the end index by k.
Contrast this with approach 2, where actually walking the array and incrementing every value within the specified a-b range can take up to N operations per query.
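Approach 1 might look like this (again a minimal Python sketch with an illustrative name; the extra slot at the end of the difference array keeps the b == n decrement in bounds):

```python
def array_manipulation_diff(n, queries):
    """Approach 1: difference array, at most 2 writes per query."""
    diff = [0] * (n + 1)          # extra slot so b == n never overflows
    for a, b, k in queries:
        diff[a - 1] += k          # value rises by k at the start index
        diff[b] -= k              # ...and falls back by k just past the end
    best = running = 0
    for d in diff:                # prefix sums rebuild the actual values
        running += d
        best = max(best, running)
    return best

# Same sample case as above, same answer
print(array_manipulation_diff(5, [(1, 2, 100), (2, 5, 100), (3, 4, 100)]))  # 200
```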

So approach 2 could take a max of O(N * M) time, where M is the number of queries and N is the size of the array.
And approach 1 takes a max of O(2 * M) for the updates, which is considered equivalent to O(M) — plus one final O(N) prefix-sum pass to rebuild the values and find the max, so O(N + M) overall.

Does that make sense? Someone correct me if I'm wrong! Cheers :)

## Array Manipulation
