I came across this question: Implement a queue in which push_rear(), pop_front() and get_min() are all constant time operations.
I initially thought of using a min-heap data structure which has O(1) complexity for a get_min(). But push_rear() and pop_front() would be O(log(n)).
Does anyone know what would be the best way to implement such a queue which has O(1) push(), pop() and min()?
I googled about this, and wanted to point out this Algorithm Geeks thread. But it seems that none of the solutions achieves constant time for all 3 methods: push(), pop() and min().
Thanks for all the suggestions.
You can actually use a LinkedList to maintain the queue.
Each element in the LinkedList stores its value together with a currentmin field, the minimum of the queue as of that node.
You can have two pointers: one points to the start and the other points to the end.
When you insert a node, examine the node at the insertion end: if the node to insert has a value less than that node's currentmin, then the node to insert's own value is its currentmin. Else, copy that node's currentmin into the node to insert.
Repeat the same on every enqueue.
JavaScript implementation
(Credit to adamax's solution for the idea; I loosely based an implementation on it. Jump to the bottom to see the fully commented code or read through the general steps below. Note that this finds the maximum value in O(1) constant time rather than the minimum value; that is easy to change):
Insertion: O(1) worst case
Deletion: O(1) amortized
Finding the maximum value: O(1) worst case
Code
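The original commented JavaScript is not reproduced in this excerpt. As a stand-in, here is a minimal Java sketch of the same two-stack idea it builds on, tracking the maximum instead of the minimum (class and method names are illustrative, not the original author's):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Two-stack max-queue sketch. Each stack entry stores {value, max of that
// stack from the bottom up to this entry}, so each stack's maximum can be
// read off its top pair in O(1).
class MaxQueue {
    private final Deque<long[]> in = new ArrayDeque<>();   // receives pushes
    private final Deque<long[]> out = new ArrayDeque<>();  // serves pops

    public void push(long x) {                 // O(1) worst case
        long max = in.isEmpty() ? x : Math.max(x, in.peek()[1]);
        in.push(new long[]{x, max});
    }

    public long pop() {                        // O(1) amortized
        if (out.isEmpty()) {                   // refill reverses the order,
            while (!in.isEmpty()) {            // recomputing running maxima
                long x = in.pop()[0];
                long max = out.isEmpty() ? x : Math.max(x, out.peek()[1]);
                out.push(new long[]{x, max});
            }
        }
        return out.pop()[0];
    }

    public long getMax() {                     // O(1) worst case; assumes non-empty
        if (out.isEmpty()) return in.peek()[1];
        if (in.isEmpty()) return out.peek()[1];
        return Math.max(in.peek()[1], out.peek()[1]);
    }
}
```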
Java implementation
Use one deque (A) to store the elements and another deque (B) to store the minimums.
When x is enqueued, push_back it to A and keep pop_backing B while the back of B is greater than x (popping only strictly larger values keeps duplicates safe), then push_back x to B.
When dequeuing A, pop_front A as the return value, and if it equals the front of B, pop_front B as well.
When getting the minimum of A, use the front of B as the return value.
dequeue and getmin are obviously O(1). For the enqueue operation, consider a sequence of n enqueues. There are n push_backs to A, n push_backs to B and at most n pop_backs from B, because each element is pushed to B exactly once and popped from B at most once. Overall that is O(3n) = O(n) operations, so the amortized cost of enqueue is O(1) as well.
Lastly, the reason this algorithm works: when you enqueue x to A, any elements in B that are larger than x can never be the minimum again, because x will stay in queue A longer than any element currently in B (a queue is FIFO). Therefore we pop the elements of B (from the back) that are larger than x before we push x into B.
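The steps above can be sketched in Java like this (assuming int elements; class and method names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Two-deque min-queue sketch: A holds the actual elements, B holds a
// non-decreasing subsequence of candidate minimums.
class MinQueue {
    private final Deque<Integer> a = new ArrayDeque<>(); // the queue itself
    private final Deque<Integer> b = new ArrayDeque<>(); // candidate minimums

    public void enqueue(int x) {      // amortized O(1)
        a.addLast(x);
        // Candidates larger than x can never be the minimum again,
        // because x will outlive them in the queue.
        while (!b.isEmpty() && b.peekLast() > x) b.pollLast();
        b.addLast(x);
    }

    public int dequeue() {            // O(1)
        int x = a.pollFirst();
        if (!b.isEmpty() && b.peekFirst() == x) b.pollFirst();
        return x;
    }

    public int getMin() {             // O(1); assumes the queue is non-empty
        return b.peekFirst();
    }
}
```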
You can implement a stack with O(1) pop(), push() and get_min(): just store the current minimum together with each element. So, for example, the stack
[4,2,5,1]
(1 on top) becomes [(4,4), (2,2), (5,2), (1,1)].
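That pair representation is a few lines of code. A minimal Java sketch, assuming int elements (class name illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Min-stack sketch: each entry stores {value, minimum of the stack up to
// and including this entry}, so get_min() just reads the top pair.
class MinStack {
    private final Deque<int[]> stack = new ArrayDeque<>();

    public void push(int x) {                        // O(1)
        int min = stack.isEmpty() ? x : Math.min(x, stack.peek()[1]);
        stack.push(new int[]{x, min});
    }

    public int pop()        { return stack.pop()[0]; }     // O(1)
    public int getMin()     { return stack.peek()[1]; }    // O(1)
    public boolean isEmpty(){ return stack.isEmpty(); }
}
```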
Then you can use two such stacks to implement the queue: push to one stack, pop from the other; if the second stack is empty during a pop, move all elements from the first stack to the second one.
E.g. for a pop request, after moving all the elements from the first stack [(4,4), (2,2), (5,2), (1,1)], the second stack would be [(1,1), (5,1), (2,1), (4,1)]; now return the top element of the second stack.
To find the minimum element of the queue, look at the minimum of each of the two individual min-stacks (stored in the second component of each stack's top pair) and take the smaller of the two values. (Of course, there's some extra logic in case one of the stacks is empty, but that's not too hard to work around.)
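The two-stack construction just described can be sketched in Java as follows (assuming int elements; names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Queue-with-min built from two min-stacks. Each stack entry is
// {value, min of that stack from the bottom up to this entry}.
class MinQueueTwoStacks {
    private final Deque<int[]> inbox = new ArrayDeque<>();  // receives pushes
    private final Deque<int[]> outbox = new ArrayDeque<>(); // serves pops

    public void push(int x) {                  // O(1)
        int min = inbox.isEmpty() ? x : Math.min(x, inbox.peek()[1]);
        inbox.push(new int[]{x, min});
    }

    public int pop() {                         // amortized O(1)
        if (outbox.isEmpty()) {                // move everything across,
            while (!inbox.isEmpty()) {         // recomputing the running mins
                int x = inbox.pop()[0];
                int min = outbox.isEmpty() ? x : Math.min(x, outbox.peek()[1]);
                outbox.push(new int[]{x, min});
            }
        }
        return outbox.pop()[0];
    }

    public int getMin() {                      // O(1); assumes non-empty
        if (inbox.isEmpty())  return outbox.peek()[1];
        if (outbox.isEmpty()) return inbox.peek()[1];
        return Math.min(inbox.peek()[1], outbox.peek()[1]);
    }
}
```

Running the example from the text: pushing 4, 2, 5, 1 leaves the first stack as [(4,4), (2,2), (5,2), (1,1)], and the first pop transfers it to [(1,1), (5,1), (2,1), (4,1)] before returning 4.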
It will have O(1) get_min() and push(), and amortized O(1) pop().
Solutions to this question, including code, can be found here: http://discuss.joelonsoftware.com/default.asp?interview.11.742223.32