How can I extract the output scores, object classes, and object IDs for the objects detected in images by the TensorFlow Object Detection model?
I want to store all of these details in individual variables so that they can later be written to a database.
I am using the same code as found in this link:
https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb
Please help me out with a solution to this problem.
I've tried:
print(str(output_dict['detection_classes'][0]), ":", str(output_dict['detection_scores'][0]))
This works and gives the object ID and score for the class with the highest probability. But I want to extract the class name too, as well as the scores, IDs, and names for all objects present in the image.
Example of output:
There are two dogs in the image. When I print the result, I get the ID and score for the object with the highest probability (94% in this case). I want to print the object name too, as well as similar details for all other objects in the image.
You may need some background knowledge about TensorFlow object detection; the short and quick solution below might be what you expected:
with detection_graph.as_default():
    with tf.Session(graph=detection_graph) as sess:
        for image_path in TEST_IMAGE_PATHS:
            image = Image.open(image_path)
            # Convert the image to a numpy array and add a batch dimension.
            image_np = load_image_into_numpy_array(image)
            image_np_expanded = np.expand_dims(image_np, axis=0)
            # Input and output tensors of the frozen detection graph.
            image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
            boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
            scores = detection_graph.get_tensor_by_name('detection_scores:0')
            classes = detection_graph.get_tensor_by_name('detection_classes:0')
            num_detections = detection_graph.get_tensor_by_name('num_detections:0')
            # Actual detection.
            (boxes, scores, classes, num_detections) = sess.run(
                [boxes, scores, classes, num_detections],
                feed_dict={image_tensor: image_np_expanded})
            # Visualization of the results of a detection.
            vis_util.visualize_boxes_and_labels_on_image_array(
                image_np,
                np.squeeze(boxes),
                np.squeeze(classes).astype(np.int32),
                np.squeeze(scores),
                category_index,
                use_normalized_coordinates=True,
                line_thickness=8)
            # Collect every detection above the threshold as {class name: score}.
            objects = []
            threshold = 0.5  # lower this value to include more detections; at 0.01 nearly all predicted objects pass
            for index, value in enumerate(classes[0]):
                object_dict = {}
                if scores[0, index] > threshold:
                    object_dict[(category_index.get(value)).get('name').encode('utf8')] = \
                        scores[0, index]
                    objects.append(object_dict)
            print(objects)
            # Fraction of the returned detections that passed the threshold.
            print(len(np.where(scores[0] > threshold)[0]) / num_detections[0])
            plt.figure(figsize=IMAGE_SIZE)
            plt.imshow(image_np)
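If you also need the numeric class ID alongside the name and score (for example, to write one row per detection to a database, as the question asks), here is a minimal sketch that reuses the scores, classes, num_detections, threshold and category_index values from the loop above; the sqlite3 call in the comment is only an illustrative assumption:

# Sketch: collect (class id, class name, score) for every detection above
# the threshold, using the arrays returned by sess.run above.
detections = []
for index in range(int(num_detections[0])):
    score = float(scores[0, index])
    if score < threshold:
        continue
    class_id = int(classes[0, index])
    class_name = category_index[class_id]['name']
    detections.append((class_id, class_name, score))
print(detections)
# Each tuple can later be written to a database, e.g. (illustrative only):
# cursor.execute("INSERT INTO detections VALUES (?, ?, ?)", detections[0])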
Hope this is helpful.
It gives you the class with the highest score because the output tensors are sorted from highest score to lowest, and you are asking for only the highest one by indexing the first element with [0].
Look at object_detection/inference/detection_inference for inspiration.
As for class names, you can use the label map to create a category index dictionary that translates class IDs to names.
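For example, sticking with the output_dict from the tutorial notebook, a minimal sketch that loops over every detection instead of only index 0; the label map path is an assumption, so point it at the label map that matches your model:

from object_detection.utils import label_map_util

# Assumed path; use the label map that ships with your model/dataset.
category_index = label_map_util.create_category_index_from_labelmap(
    'data/mscoco_label_map.pbtxt', use_display_name=True)

# Loop over every detection, not just the first one.
for i in range(int(output_dict['num_detections'])):
    class_id = int(output_dict['detection_classes'][i])
    score = float(output_dict['detection_scores'][i])
    class_name = category_index[class_id]['name']
    print(class_id, class_name, score)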
To get the class name, your label map should be able to help here.
import os
from object_detection.utils import label_map_util

# annotations_dir is the directory that contains your label_map.pbtxt
label_map_path = os.path.join(annotations_dir, 'label_map.pbtxt')
label_map_dict = label_map_util.get_label_map_dict(label_map_path)
# get_label_map_dict returns {name: id}; invert it to look up names by id
label_map_dict_number_to_name = {v: k for k, v in label_map_dict.items()}
class_number = output_dict['detection_classes'][index]
class_name = label_map_dict_number_to_name[class_number]
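With that mapping in place, a short sketch of printing the name and score for every detection in output_dict (using num_detections as the loop bound is an assumption about how your output_dict is structured):

# Translate every detected class number to its name and print it with
# its score, instead of looking at a single index.
for index in range(int(output_dict['num_detections'])):
    class_number = output_dict['detection_classes'][index]
    class_name = label_map_dict_number_to_name[class_number]
    score = output_dict['detection_scores'][index]
    print(class_name, score)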
Please paste your code, so we can figure out why only one box is in your output.