This is fairly standard OpenCV code: a loop detects faces with a Haar cascade classifier, and a deep learning model then classifies the emotion in each face. The model was trained on the 2013 Kaggle FER dataset; if someone wants to try the code, I downloaded fer2013_mini_XCEPTION.119-0.65.hdf5 from the GitHub repo below. Just place it in a models folder in your working directory and rename the file to model.h5:
https://github.com/oarriaga/face_classification/tree/master/trained_models
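If it helps, a quick sanity check that the renamed file is in the right place looks something like this (the paths just reflect my layout, so adjust as needed):

import os
from keras.models import load_model

MODELPATH = './models/model.h5'  # the renamed fer2013_mini_XCEPTION.119-0.65.hdf5
assert os.path.isfile(MODELPATH), "put the .hdf5 in ./models and rename it to model.h5"
model = load_model(MODELPATH)
print(model.input_shape, model.output_shape)  # confirms the model loads and what size crop it expects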
The code works just fine with TensorFlow, but when I run the program with KERAS_BACKEND=theano python haarMOD.py I get an error that seems to be caused by the BLAS library not linking properly. Does anyone have ideas on how to get Theano working? Ultimately I am trying to get a variation of this code running on a Flask server that only works with Theano.
import cv2
import sys, os
import pandas as pd
import numpy as np
from keras.models import load_model
# run with: KERAS_BACKEND=theano python haarMOD.py

BASEPATH = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, BASEPATH)
os.chdir(BASEPATH)
MODELPATH = './models/model.h5'

emotion_dict = {0: "Angry", 1: "Disgust", 2: "Fear", 3: "Happy", 4: "Sad", 5: "Surprise", 6: "Neutral"}
model = load_model(MODELPATH)  # this is the line that throws the BLAS error under Theano
WHITE = [255, 255, 255]

def draw_box(image, x, y, w, h):
    # draw corner brackets around the detected face
    cv2.line(image, (x, y), (x + int(w / 5), y), WHITE, 2)
    cv2.line(image, (x + int((w / 5) * 4), y), (x + w, y), WHITE, 2)
    cv2.line(image, (x, y), (x, y + int(h / 5)), WHITE, 2)
    cv2.line(image, (x + w, y), (x + w, y + int(h / 5)), WHITE, 2)
    cv2.line(image, (x, (y + int(h / 5 * 4))), (x, y + h), WHITE, 2)
    cv2.line(image, (x, (y + h)), (x + int(w / 5), y + h), WHITE, 2)
    cv2.line(image, (x + int((w / 5) * 4), y + h), (x + w, y + h), WHITE, 2)
    cv2.line(image, (x + w, (y + int(h / 5 * 4))), (x + w, y + h), WHITE, 2)

haar_face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
video = cv2.VideoCapture('MovieSample.m4v')

while True:
    check, frame = video.read()
    if not check:  # stop when the clip runs out of frames
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = haar_face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        draw_box(gray, x, y, w, h)
        roi_gray = gray[y:y + h, x:x + w]
        # the model expects a 48x48 single-channel crop with a batch dimension
        cropped_img = np.expand_dims(np.expand_dims(cv2.resize(roi_gray, (48, 48)), -1), 0)
        cv2.normalize(cropped_img, cropped_img, alpha=0, beta=1, norm_type=cv2.NORM_L2, dtype=cv2.CV_32F)
        prediction = model.predict(cropped_img)
        cv2.putText(gray, emotion_dict[int(np.argmax(prediction))], (x, y),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, WHITE, 1, cv2.LINE_AA)
    cv2.imshow("Face Detector", gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # a single waitKey both refreshes the window and reads the key
        break

video.release()
cv2.destroyAllWindows()
Any tips are greatly appreciated. I am running Linux Mint 18.3 (Ubuntu-based) with Anaconda (Python 3.6) on CPU, set up with the Machine Learning Mastery steps linked below to build the deep learning libraries. I am also reading from a video file instead of a webcam because I don't have a webcam on my PC; change video = cv2.VideoCapture('MovieSample.m4v') to video = cv2.VideoCapture(0) to have OpenCV default to a USB camera (a sketch of that swap, with a fallback, is below the link).
https://machinelearningmastery.com/setup-python-environment-machine-learning-deep-learning-anaconda/
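Roughly, the swap with a fallback to the clip when no camera is found (just a sketch, MovieSample.m4v is my local file):

import cv2

video = cv2.VideoCapture(0)                      # try the default USB webcam first
if not video.isOpened():                         # no camera found, fall back to the sample clip
    video = cv2.VideoCapture('MovieSample.m4v')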
The error pops up for me at the model = load_model(MODELPATH) line, and the message ends with: "if on CPU, do you have a BLAS library installed Theano can link against?"
Can someone give me a tip on how to troubleshoot that?
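My guess is that the first step is to see which BLAS libraries NumPy and Theano actually pick up; this is what I have been trying (not sure it is the right approach):

import numpy as np
np.__config__.show()                 # prints which BLAS/LAPACK NumPy was built against

import theano
print(theano.config.blas.ldflags)    # an empty string here seems to mean Theano has no BLAS to link against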
I got the code to work on a Windows machine by editing the keras.json file under C:\Users\user\.keras to reference "theano" instead of "tensorflow", and then adding a bit of additional code that I found in a different Stack Overflow post to my original .py file.
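From memory, the keras.json change and the extra code were along these lines (approximate, since the exact snippet came from that other post):

# contents of C:\Users\user\.keras\keras.json (or ~/.keras/keras.json on Linux):
# {
#     "floatx": "float32",
#     "epsilon": 1e-07,
#     "image_data_format": "channels_last",
#     "backend": "theano"
# }

# and at the top of the .py file, force the backend before Keras is imported:
import os
os.environ['KERAS_BACKEND'] = 'theano'
from keras import backend as K
print(K.backend())  # should print "theano"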