I have been trying to count cars as they cross a line, and it works, but the problem is that the same car gets counted many times, even though each car should be counted only once.
Here is the code I am using:
import cv2
import numpy as np

bgsMOG = cv2.BackgroundSubtractorMOG()
cap = cv2.VideoCapture("traffic.avi")
counter = 0

if cap:
    while True:
        ret, frame = cap.read()
        if ret:
            fgmask = bgsMOG.apply(frame, None, 0.01)
            cv2.line(frame, (0, 60), (160, 60), (255, 255, 0), 1)
            # To find the contours of the cars
            contours, hierarchy = cv2.findContours(fgmask,
                                                   cv2.RETR_EXTERNAL,
                                                   cv2.CHAIN_APPROX_SIMPLE)
            try:
                hierarchy = hierarchy[0]
            except:
                hierarchy = []
            for contour, hier in zip(contours, hierarchy):
                (x, y, w, h) = cv2.boundingRect(contour)
                if w > 20 and h > 20:
                    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 1)
                    # To find the centroid of the car
                    x1 = w / 2
                    y1 = h / 2
                    cx = x + x1
                    cy = y + y1
                    centroid = (cx, cy)
                    # Draw the circle of the centroid
                    cv2.circle(frame, (int(cx), int(cy)), 2, (0, 0, 255), -1)
                    # To make sure the car crosses the line
                    if centroid > (27, 38) and centroid < (134, 108):
                        counter += 1
                        cv2.putText(frame, str(counter), (x, y - 5),
                                    cv2.FONT_HERSHEY_SIMPLEX,
                                    0.5, (255, 0, 255), 2)
            cv2.imshow('Output', frame)
            key = cv2.waitKey(60)
            if key == 27:
                break
        else:
            break
    cap.release()
    cv2.destroyAllWindows()
The video (traffic.avi) is on my GitHub page at https://github.com/Tes3awy/MatLab-Tutorials, and it is also one of the built-in videos in the Matlab library.
How can I make sure that each car is counted only once?
EDIT: The individual frames of the video look as follows:
Preparation
In order to understand what is happening, and eventually solve our problem, we first need to improve the script a little.
I've added logging of the important steps of your algorithm, refactored the code a little, added saving of the mask and processed images, added the ability to run the script on the individual frame images, and made some other modifications.
This is what the script looks like at this point:
This script is responsible for processing the stream of images and identifying all the vehicles in each frame -- I refer to them as matches in the code.

The task of counting the detected vehicles is delegated to the class `VehicleCounter`, which lives in the file `vehicle_counter.py`. The reason why I chose to make this a class will become evident as we progress. I did not implement your vehicle-counting algorithm, because it will not work, for reasons that will become evident as we dig into this deeper.

Finally, I wrote a script that will stitch all the generated images together, so it's easier to inspect them:
Analysis
In order to solve this problem, we should have some idea about what results we expect to get. We should also label all the distinct cars in the video, so it's easier to talk about them.
If we run our script and stitch the images together, we get a number of useful files to help us analyze the problem:
Upon inspecting those, a number of issues become evident:
Solution
1. Pre-Seeding the Background Subtractor
Our video is quite short, only 120 frames. With a learning rate of 0.01, it will take a substantial part of the video for the background detector to stabilize.

Fortunately, the last frame of the video (frame number 119) is completely devoid of vehicles, and therefore we can use it as our initial background image. (Other options for obtaining a suitable image are mentioned in the notes and comments.)
To use this initial background image, we simply load it and apply it on the background subtractor with a learning factor of 1.0:
When we look at the new mosaic of masks, we can see that there is less noise and the vehicle detection works better in the early frames.
2. Cleaning Up the Foreground Mask
A simple approach to improve our foreground mask is to apply a few morphological transformations.
Inspecting the masks, processed frames and the log file generated with filtering, we can see that we now detect vehicles more reliably, and have mitigated the issue of different parts of one vehicle being detected as separate objects.
3. Tracking Vehicles Between Frames
At this point, we need to go through our log file, and collect all the centroid coordinates for each vehicle. This will allow us to plot and inspect the path each vehicle traces across the image, and develop an algorithm to do this automatically. To make this process easier, we can create a reduced log by grepping out the relevant entries.
The lists of centroid coordinates:
Individual vehicle traces plotted on the background:
Combined enlarged image of all the vehicle traces:
Vectors
In order to analyze the movement, we need to work with vectors (i.e. the distance and direction moved). The following diagram shows how the angles correspond to movement of vehicles in the image.
We can use the following function to calculate the vector between two points:
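A sketch of such a function follows. The angle convention (y flipped, so that movement up in the image gives a positive angle) is my assumption; adjust it to match the diagram:

```python
import math


def get_vector(a, b):
    """Return (distance, angle in degrees) of the movement from point a to b.
    The y axis is flipped so that movement up in the image has a positive angle."""
    dx = float(b[0] - a[0])
    dy = float(b[1] - a[1])
    distance = math.hypot(dx, dy)
    angle = math.degrees(math.atan2(-dy, dx))  # -dy: image y grows downwards
    return distance, angle
```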
Categorization
One way we can look for patterns that could be used to categorize the movements as valid/invalid is to make a scatter plot (angle vs. distance). The boundaries that stand out in the plot can be approximated by:

distance = -0.008 * angle**2 + 0.4 * angle + 25.0
distance = 10.0
We can use the following function to categorize the movement vectors:
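Based on those two boundary curves, the categorization function might look like this sketch:

```python
def is_valid_vector(vector):
    """Categorize a movement vector (distance, angle in degrees) as
    valid (True) or invalid (False) using the scatter-plot boundaries."""
    distance, angle = vector
    # Movements below the parabola are valid; very short movements
    # (under 10 px) are accepted regardless of their angle
    threshold_distance = max(10.0, -0.008 * angle ** 2 + 0.4 * angle + 25.0)
    return distance <= threshold_distance
```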
NB: There is one outlier, which occurs due to our losing track of vehicle D in frames 43..48.
Algorithm
We will use the class `Vehicle` to store information about each tracked vehicle.

The class `VehicleCounter` will store a list of currently tracked vehicles and keep track of the total count. On each frame, we will use the list of bounding boxes and positions of identified vehicles (the candidate list) to update the state of `VehicleCounter`:

- Update the currently tracked `Vehicle`s, matching each with a plausible candidate
- Create new `Vehicle`s for any remaining matches

4. Solution
We can reuse the main script with the final version of `vehicle_counter.py`, containing the implementation of our counting algorithm.

The program now draws the historical paths of all currently tracked vehicles into the output image, along with the vehicle count. Each vehicle is assigned one of 10 colours.
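The original file is not reproduced here, but based on the algorithm described above, a condensed sketch of `vehicle_counter.py` could look as follows. The divider orientation (a horizontal line crossed downwards), the angle convention, and the `MAX_UNSEEN_FRAMES` value are assumptions; the path drawing is omitted for brevity:

```python
import math


class Vehicle(object):
    """State of one tracked vehicle: its centroid history and counted flag."""

    def __init__(self, vehicle_id, position):
        self.id = vehicle_id
        self.positions = [position]
        self.frames_since_seen = 0
        self.counted = False

    @property
    def last_position(self):
        return self.positions[-1]

    def add_position(self, new_position):
        self.positions.append(new_position)
        self.frames_since_seen = 0


class VehicleCounter(object):
    """Tracks vehicles across frames and counts each one exactly once."""

    MAX_UNSEEN_FRAMES = 7  # assumption: drop a vehicle after this many misses

    def __init__(self, shape, divider):
        self.height, self.width = shape
        self.divider = divider  # y coordinate of the counting line
        self.vehicles = []
        self.next_vehicle_id = 0
        self.vehicle_count = 0

    @staticmethod
    def get_vector(a, b):
        """(distance, angle in degrees) of movement from a to b; the y axis
        is flipped so that movement up in the image has a positive angle."""
        dx = float(b[0] - a[0])
        dy = float(b[1] - a[1])
        return math.hypot(dx, dy), math.degrees(math.atan2(-dy, dx))

    @staticmethod
    def is_valid_vector(vector):
        distance, angle = vector
        # Boundaries taken from the scatter-plot analysis above
        threshold_distance = max(10.0, -0.008 * angle ** 2 + 0.4 * angle + 25.0)
        return distance <= threshold_distance

    def update_vehicle(self, vehicle, matches):
        """Consume the first candidate whose movement from the vehicle's last
        position is plausible; return its index, or None if there is none."""
        for i, (_, centroid) in enumerate(matches):
            vector = self.get_vector(vehicle.last_position, centroid)
            if self.is_valid_vector(vector):
                vehicle.add_position(centroid)
                return i
        vehicle.frames_since_seen += 1
        return None

    def update_count(self, matches, output_image=None):
        # 1. Update currently tracked vehicles, consuming matched candidates
        for vehicle in self.vehicles:
            i = self.update_vehicle(vehicle, matches)
            if i is not None:
                del matches[i]

        # 2. Create new Vehicles for any remaining matches
        for _, centroid in matches:
            self.vehicles.append(Vehicle(self.next_vehicle_id, centroid))
            self.next_vehicle_id += 1

        # 3. Count each vehicle once, when it has passed the divider
        for vehicle in self.vehicles:
            if not vehicle.counted and vehicle.last_position[1] > self.divider:
                self.vehicle_count += 1
                vehicle.counted = True

        # 4. Forget vehicles that have been unseen for too long
        self.vehicles = [v for v in self.vehicles
                         if v.frames_since_seen < self.MAX_UNSEEN_FRAMES]
```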
Notice that vehicle D ends up being tracked twice; however, it is counted only once, since we lose track of it before it crosses the divider. Ideas on how to resolve this are mentioned in the appendix.
Based on the last processed frame generated by the script, the total vehicle count is 10. This is the correct result. More details can be found in the output the script generated.
A. Potential Improvements

- Could the foreground mask be improved further by drawing the detected contours filled (`cv2.drawContours` with `CV_FILLED`)?

B. Notes
There is no direct way to set the initial background image of `BackgroundSubtractorMOG` in Python (at least in OpenCV 2.4.x), but there is a way to do it with a little work (see the pre-seeding step above).