Mirror of https://gitcode.com/gh_mirrors/ope/OpenFace.git, synced 2025-12-30 04:52:29 +00:00
Integrating OpenFace 1.0.0 changes into CE-CLM branch
29 README.md
@@ -1,19 +1,21 @@
# OpenFace: an open source facial behavior analysis toolkit
# OpenFace 1.0.0: an open source facial behavior analysis toolkit

[](https://travis-ci.org/TadasBaltrusaitis/OpenFace)
[](https://ci.appveyor.com/project/TadasBaltrusaitis/openface/branch/master)

Over the past few years, there has been an increased interest in automatic facial behavior analysis and understanding. We present OpenFace – an open source tool intended for computer vision and machine learning researchers, the affective computing community, and people interested in building interactive applications based on facial behavior analysis. OpenFace is the first open source tool capable of facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation. The computer vision algorithms which represent the core of OpenFace demonstrate state-of-the-art results in all of the above-mentioned tasks. Furthermore, our tool is capable of real-time performance and is able to run from a simple webcam without any specialist hardware.
Over the past few years, there has been an increased interest in automatic facial behavior analysis and understanding. We present OpenFace – a tool intended for computer vision and machine learning researchers, the affective computing community, and people interested in building interactive applications based on facial behavior analysis. OpenFace is the first toolkit capable of facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation with available source code. The computer vision algorithms which represent the core of OpenFace demonstrate state-of-the-art results in all of the above-mentioned tasks. Furthermore, our tool is capable of real-time performance and is able to run from a simple webcam without any specialist hardware.

The code was written mainly by Tadas Baltrusaitis during his time at the Language Technologies Institute at Carnegie Mellon University; the Computer Laboratory, University of Cambridge; and the Institute for Creative Technologies, University of Southern California.

Special thanks go to Louis-Philippe Morency and his MultiComp Lab at the Institute for Creative Technologies for help in writing and testing the code, and Erroll Wood for the gaze estimation work.

OpenFace is an implementation of a number of research papers from the MultiComp group, Language Technologies Institute at Carnegie Mellon University, and the Rainbow Group, Computer Laboratory, University of Cambridge. The founder of the project and main developer is Tadas Baltrušaitis.

Special thanks go to Louis-Philippe Morency and his MultiComp Lab at Carnegie Mellon University for help in writing and testing the code, and Erroll Wood for the gaze estimation work.

## WIKI

**For instructions on how to install/compile/use the project please see the [WIKI](https://github.com/TadasBaltrusaitis/OpenFace/wiki)**

More details about the project - http://www.cl.cam.ac.uk/research/rainbow/projects/openface/

## Functionality

The system is capable of performing a number of facial analysis tasks:

@@ -68,17 +70,18 @@ Tadas Baltrušaitis, Marwa Mahmoud, and Peter Robinson
in *Facial Expression Recognition and Analysis Challenge*,
*IEEE International Conference on Automatic Face and Gesture Recognition*, 2015

# Copyright

Copyright can be found in the Copyright.txt

You have to respect boost, TBB, dlib, and OpenCV licenses.

# Commercial license

For inquiries about the commercial licensing of the OpenFace toolkit please contact innovation@cmu.edu

# Final remarks

I did my best to make sure that the code runs out of the box, but there are always issues, and I would be grateful for your understanding that this is research code and not a full-fledged product. If you encounter any problems/bugs/issues, please contact me on GitHub or email me at Tadas.Baltrusaitis@cl.cam.ac.uk with any bug reports/questions/suggestions.
I did my best to make sure that the code runs out of the box, but there are always issues, and I would be grateful for your understanding that this is research code and not a full-fledged product. If you encounter any problems/bugs/issues, please contact me on GitHub or email me at Tadas.Baltrusaitis@cl.cam.ac.uk with any bug reports/questions/suggestions. I prefer questions and bug reports on GitHub, as that provides visibility to others who might be encountering the same issues or have the same questions.

# Copyright

Copyright can be found in the Copyright.txt

You have to respect boost, TBB, dlib, OpenBLAS, and OpenCV licenses.

Furthermore you have to respect the licenses of the datasets used for model training - https://github.com/TadasBaltrusaitis/OpenFace/wiki/Datasets
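Most of the hunks below touch the demo executables (FaceLandmarkImg, FaceLandmarkVid, FaceLandmarkVidMulti, FeatureExtraction). For orientation, here is a condensed sketch of the shared setup pattern, assembled only from calls that appear in this commit's hunks; it is an editor's illustration, not a verbatim excerpt of any one file, and argument parsing and the frame loop are omitted:

// Condensed sketch (assembled from the hunks below, not a verbatim excerpt).
LandmarkDetector::CLNF face_model(det_parameters.model_location);
FaceAnalysis::FaceAnalyser face_analyser(face_analysis_params);

if (!face_model.eye_model)
	cout << "WARNING: no eye model found" << endl;

Utilities::SequenceCapture sequence_reader;
Utilities::Visualizer visualizer(arguments);
Utilities::FpsTracker fps_tracker;

// Per frame: hand the observations to the visualizer, then display them.
visualizer.SetObservationLandmarks(face_model.detected_landmarks, face_model.detection_certainty);
visualizer.SetObservationActionUnits(face_analyser.GetCurrentAUsReg(), face_analyser.GetCurrentAUsClass());
visualizer.ShowObservation();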
@@ -128,19 +128,31 @@ int main(int argc, char **argv)

rgb_image = image_reader.GetNextImage();

if (!face_model.eye_model)
{
	cout << "WARNING: no eye model found" << endl;
}

if (face_analyser.GetAUClassNames().size() == 0 && face_analyser.GetAURegNames().size() == 0)
{
	cout << "WARNING: no Action Unit models found" << endl;
}

cout << "Starting tracking" << endl;
while (!rgb_image.empty())
{
	Utilities::RecorderOpenFaceParameters recording_params(arguments, false, false,
		image_reader.fx, image_reader.fy, image_reader.cx, image_reader.cy);

	if (!face_model.eye_model)
	{
		recording_params.setOutputGaze(false);
	}
	Utilities::RecorderOpenFace open_face_rec(image_reader.name, recording_params, arguments);

	visualizer.SetImage(rgb_image, image_reader.fx, image_reader.fy, image_reader.cx, image_reader.cy);

	if (recording_params.outputGaze() && !face_model.eye_model)
		cout << "WARNING: no eye model defined, but outputting gaze" << endl;

	// Making sure the image is in uchar grayscale (some face detectors use RGB, landmark detector uses grayscale)
	cv::Mat_<uchar> grayscale_image = image_reader.GetGrayFrame();
@@ -209,6 +221,7 @@ int main(int argc, char **argv)
visualizer.SetObservationLandmarks(face_model.detected_landmarks, 1.0, face_model.GetVisibilities()); // Set confidence to high to make sure we always visualize
visualizer.SetObservationPose(pose_estimate, 1.0);
visualizer.SetObservationGaze(gaze_direction0, gaze_direction1, LandmarkDetector::CalculateAllEyeLandmarks(face_model), LandmarkDetector::Calculate3DEyeLandmarks(face_model, image_reader.fx, image_reader.fy, image_reader.cx, image_reader.cy), face_model.detection_certainty);
visualizer.SetObservationActionUnits(face_analyser.GetCurrentAUsReg(), face_analyser.GetCurrentAUsClass());

// Setting up the recorder output
open_face_rec.SetObservationHOG(face_model.detection_success, hog_descriptor, num_hog_rows, num_hog_cols, 31); // The number of channels in HOG is fixed at the moment, as using FHOG
@@ -104,11 +104,16 @@ int main(int argc, char **argv)
// The modules that are being used for tracking
LandmarkDetector::CLNF face_model(det_parameters.model_location);

if (!face_model.eye_model)
{
	cout << "WARNING: no eye model found" << endl;
}

// Open a sequence
Utilities::SequenceCapture sequence_reader;

// A utility for visualizing the results (show just the tracks)
Utilities::Visualizer visualizer(true, false, false);
Utilities::Visualizer visualizer(true, false, false, false);

// Tracking FPS for visualization
Utilities::FpsTracker fps_tracker;
@@ -162,11 +162,21 @@ int main(int argc, char **argv)
face_analysis_params.OptimizeForImages();
FaceAnalysis::FaceAnalyser face_analyser(face_analysis_params);

if (!face_model.eye_model)
{
	cout << "WARNING: no eye model found" << endl;
}

if (face_analyser.GetAUClassNames().size() == 0 && face_analyser.GetAURegNames().size() == 0)
{
	cout << "WARNING: no Action Unit models found" << endl;
}

// Open a sequence
Utilities::SequenceCapture sequence_reader;

// A utility for visualizing the results (show just the tracks)
Utilities::Visualizer visualizer(true, false, false);
Utilities::Visualizer visualizer(arguments);

// Tracking FPS for visualization
Utilities::FpsTracker fps_tracker;
@@ -188,12 +198,13 @@ int main(int argc, char **argv)

Utilities::RecorderOpenFaceParameters recording_params(arguments, true, sequence_reader.IsWebcam(),
	sequence_reader.fx, sequence_reader.fy, sequence_reader.cx, sequence_reader.cy, sequence_reader.fps);
// Do not do AU detection in the multi-face case, as it is not supported
recording_params.setOutputAUs(false);
Utilities::RecorderOpenFace open_face_rec(sequence_reader.name, recording_params, arguments);

if (recording_params.outputGaze() && !face_model.eye_model)
	cout << "WARNING: no eye model defined, but outputting gaze" << endl;
if (!face_model.eye_model)
{
	recording_params.setOutputGaze(false);
}

Utilities::RecorderOpenFace open_face_rec(sequence_reader.name, recording_params, arguments);

if (sequence_reader.IsWebcam())
{
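The reordering in this hunk reflects a pattern repeated across the executables in this commit: finalize RecorderOpenFaceParameters (disable gaze when there is no eye model, disable AUs in the multi-face tool) before constructing RecorderOpenFace, which presumably reads the parameters once at construction time. A condensed sketch of the resulting ordering, using only names from the hunks:

// Sketch of the parameters-then-recorder ordering enforced by this commit.
Utilities::RecorderOpenFaceParameters recording_params(arguments, true, sequence_reader.IsWebcam(),
	sequence_reader.fx, sequence_reader.fy, sequence_reader.cx, sequence_reader.cy, sequence_reader.fps);

recording_params.setOutputAUs(false); // AU detection is not supported for multiple faces
if (!face_model.eye_model)
	recording_params.setOutputGaze(false); // no eye model, so no gaze output

// Only now build the recorder, so it sees the final parameter values.
Utilities::RecorderOpenFace open_face_rec(sequence_reader.name, recording_params, arguments);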
@@ -340,6 +351,7 @@ int main(int argc, char **argv)
visualizer.SetObservationLandmarks(face_models[model].detected_landmarks, face_models[model].detection_certainty);
visualizer.SetObservationPose(LandmarkDetector::GetPose(face_models[model], sequence_reader.fx, sequence_reader.fy, sequence_reader.cx, sequence_reader.cy), face_models[model].detection_certainty);
visualizer.SetObservationGaze(gaze_direction0, gaze_direction1, LandmarkDetector::CalculateAllEyeLandmarks(face_models[model]), LandmarkDetector::Calculate3DEyeLandmarks(face_models[model], sequence_reader.fx, sequence_reader.fy, sequence_reader.cx, sequence_reader.cy), face_models[model].detection_certainty);
visualizer.SetObservationActionUnits(face_analyser.GetCurrentAUsReg(), face_analyser.GetCurrentAUsClass());

// Output features
open_face_rec.SetObservationHOG(face_models[model].detection_success, hog_descriptor, num_hog_rows, num_hog_cols, 31); // The number of channels in HOG is fixed at the moment, as using FHOG
@@ -121,6 +121,16 @@ int main(int argc, char **argv)
FaceAnalysis::FaceAnalyserParameters face_analysis_params(arguments);
FaceAnalysis::FaceAnalyser face_analyser(face_analysis_params);

if (!face_model.eye_model)
{
	cout << "WARNING: no eye model found" << endl;
}

if (face_analyser.GetAUClassNames().size() == 0 && face_analyser.GetAURegNames().size() == 0)
{
	cout << "WARNING: no Action Unit models found" << endl;
}

Utilities::SequenceCapture sequence_reader;

// A utility for visualizing the results
@@ -150,6 +160,10 @@ int main(int argc, char **argv)

Utilities::RecorderOpenFaceParameters recording_params(arguments, true, sequence_reader.IsWebcam(),
	sequence_reader.fx, sequence_reader.fy, sequence_reader.cx, sequence_reader.cy, sequence_reader.fps);
if (!face_model.eye_model)
{
	recording_params.setOutputGaze(false);
}
Utilities::RecorderOpenFace open_face_rec(sequence_reader.name, recording_params, arguments);

if (recording_params.outputGaze() && !face_model.eye_model)
@@ -205,6 +219,7 @@ int main(int argc, char **argv)
visualizer.SetObservationLandmarks(face_model.detected_landmarks, face_model.detection_certainty, face_model.GetVisibilities());
visualizer.SetObservationPose(pose_estimate, face_model.detection_certainty);
visualizer.SetObservationGaze(gazeDirection0, gazeDirection1, LandmarkDetector::CalculateAllEyeLandmarks(face_model), LandmarkDetector::Calculate3DEyeLandmarks(face_model, sequence_reader.fx, sequence_reader.fy, sequence_reader.cx, sequence_reader.cy), face_model.detection_certainty);
visualizer.SetObservationActionUnits(face_analyser.GetCurrentAUsReg(), face_analyser.GetCurrentAUsClass());
visualizer.SetFps(fps_tracker.GetFPS());

// detect key presses
@@ -133,10 +133,11 @@ feat_test = sprintf('FeatureExtraction.exe -f samples/default.wmv -verbose');
dos(feat_test);
img_test = sprintf('FaceLandmarkImg.exe -fdir samples -verbose');
dos(img_test);
vid_test = sprintf('FaceLandmarkVidMulti.exe -f samples/multi_face.avi');
vid_test = sprintf('FaceLandmarkVidMulti.exe -f samples/multi_face.avi -verbose');
dos(vid_test);
rmdir('processed', 's');

%%
cd('..');
cd(out_x86);
vid_test = sprintf('FaceLandmarkVid.exe -f samples/default.wmv');
@@ -145,7 +146,7 @@ feat_test = sprintf('FeatureExtraction.exe -f samples/default.wmv -verbose');
dos(feat_test);
img_test = sprintf('FaceLandmarkImg.exe -fdir samples -verbose');
dos(img_test);
vid_test = sprintf('FaceLandmarkVidMulti.exe -f samples/multi_face.avi');
vid_test = sprintf('FaceLandmarkVidMulti.exe -f samples/multi_face.avi -verbose');
dos(vid_test);
rmdir('processed', 's');
cd('..');
156 exe/releases/package_windows_executables_no_au.m Normal file
@@ -0,0 +1,156 @@
clear;
version = '0.4.1';

out_x86 = sprintf('OpenFace_%s_win_x86_landmarks', version);
out_x64 = sprintf('OpenFace_%s_win_x64_landmarks', version);

mkdir(out_x86);
mkdir(out_x64);

in_x86 = '../../Release/';
in_x64 = '../../x64/Release/';

% Copy models
copyfile([in_x86, 'AU_predictors'], [out_x86, '/AU_predictors'])
rmdir([out_x86, '/AU_predictors/svm_combined'], 's');
rmdir([out_x86, '/AU_predictors/svr_combined'], 's');
copyfile([in_x86, 'classifiers'], [out_x86, '/classifiers'])
copyfile([in_x86, 'model'], [out_x86, '/model'])

copyfile([in_x64, 'AU_predictors'], [out_x64, '/AU_predictors'])
rmdir([out_x64, '/AU_predictors/svm_combined'], 's');
rmdir([out_x64, '/AU_predictors/svr_combined'], 's');
copyfile([in_x64, 'classifiers'], [out_x64, '/classifiers'])
copyfile([in_x64, 'model'], [out_x64, '/model'])

%% Copy libraries
libs_x86 = dir([in_x86, '*.lib'])';
for lib = libs_x86
    copyfile([in_x86, '/', lib.name], [out_x86, '/', lib.name])
end

libs_x64 = dir([in_x64, '*.lib'])';
for lib = libs_x64
    copyfile([in_x64, '/', lib.name], [out_x64, '/', lib.name])
end

%% Copy dlls
dlls_x86 = dir([in_x86, '*.dll'])';
for dll = dlls_x86
    copyfile([in_x86, '/', dll.name], [out_x86, '/', dll.name])
end

dlls_x64 = dir([in_x64, '*.dll'])';
for dll = dlls_x64
    copyfile([in_x64, '/', dll.name], [out_x64, '/', dll.name])
end

% Copy zmq dll's
mkdir([out_x64, '/amd64']);
copyfile([in_x64, '/amd64'], [out_x64, '/amd64']);
mkdir([out_x64, '/i386']);
copyfile([in_x64, '/i386'], [out_x64, '/i386']);

mkdir([out_x86, '/amd64']);
copyfile([in_x86, '/amd64'], [out_x86, '/amd64']);
mkdir([out_x86, '/i386']);
copyfile([in_x86, '/i386'], [out_x86, '/i386']);

%% Copy exe's
exes_x86 = dir([in_x86, '*.exe'])';
for exe = exes_x86
    copyfile([in_x86, '/', exe.name], [out_x86, '/', exe.name])
end

exes_x64 = dir([in_x64, '*.exe'])';
for exe = exes_x64
    copyfile([in_x64, '/', exe.name], [out_x64, '/', exe.name])
end

%% Copy license and copyright
copyfile('../../Copyright.txt', [out_x86, '/Copyright.txt']);
copyfile('../../OpenFace-license.txt', [out_x86, '/OpenFace-license.txt']);

copyfile('../../Copyright.txt', [out_x64, '/Copyright.txt']);
copyfile('../../OpenFace-license.txt', [out_x64, '/OpenFace-license.txt']);

%% Copy icons etc. needed for GUI
img_x86 = dir([in_x86, '*.ico'])';
for img = img_x86
    copyfile([in_x86, '/', img.name], [out_x86, '/', img.name])
end

img_x64 = dir([in_x64, '*.ico'])';
for img = img_x64
    copyfile([in_x64, '/', img.name], [out_x64, '/', img.name])
end

img_x86 = dir([in_x86, '*.png'])';
for img = img_x86
    copyfile([in_x86, '/', img.name], [out_x86, '/', img.name])
end

img_x64 = dir([in_x64, '*.png'])';
for img = img_x64
    copyfile([in_x64, '/', img.name], [out_x64, '/', img.name])
end

%% Copy sample images for testing
copyfile('../../samples', [out_x86, '/samples']);
copyfile('../../samples', [out_x64, '/samples']);

%% Test if everything worked by running examples
cd(out_x64);
vid_test = sprintf('FaceLandmarkVid.exe -f samples/default.wmv');
dos(vid_test);
feat_test = sprintf('FeatureExtraction.exe -f samples/default.wmv -verbose');
dos(feat_test);
img_test = sprintf('FaceLandmarkImg.exe -fdir samples -verbose');
dos(img_test);
vid_test = sprintf('FaceLandmarkVidMulti.exe -f samples/multi_face.avi -verbose');
dos(vid_test);
rmdir('processed', 's');

%%
cd('..');
cd(out_x86);
vid_test = sprintf('FaceLandmarkVid.exe -f samples/default.wmv');
dos(vid_test);
feat_test = sprintf('FeatureExtraction.exe -f samples/default.wmv -verbose');
dos(feat_test);
img_test = sprintf('FaceLandmarkImg.exe -fdir samples -verbose');
dos(img_test);
vid_test = sprintf('FaceLandmarkVidMulti.exe -f samples/multi_face.avi -verbose');
dos(vid_test);
rmdir('processed', 's');
cd('..');
@@ -264,37 +264,47 @@ namespace OpenFaceDemo
{

var au_regs = face_analyser.GetCurrentAUsReg();
if (au_regs.Count > 0)
{
	double smile = (au_regs["AU12"] + au_regs["AU06"] + au_regs["AU25"]) / 13.0;
	double frown = (au_regs["AU15"] + au_regs["AU17"]) / 12.0;

	double brow_up = (au_regs["AU01"] + au_regs["AU02"]) / 10.0;
	double brow_down = au_regs["AU04"] / 5.0;

	double eye_widen = au_regs["AU05"] / 3.0;
	double nose_wrinkle = au_regs["AU09"] / 4.0;

	Dictionary<int, double> smileDict = new Dictionary<int, double>();
	smileDict[0] = 0.7 * smile_cumm + 0.3 * smile;
	smileDict[1] = 0.7 * frown_cumm + 0.3 * frown;
	smilePlot.AddDataPoint(new DataPointGraph() { Time = CurrentTime, values = smileDict, Confidence = confidence });

	Dictionary<int, double> browDict = new Dictionary<int, double>();
	browDict[0] = 0.7 * brow_up_cumm + 0.3 * brow_up;
	browDict[1] = 0.7 * brow_down_cumm + 0.3 * brow_down;
	browPlot.AddDataPoint(new DataPointGraph() { Time = CurrentTime, values = browDict, Confidence = confidence });

	Dictionary<int, double> eyeDict = new Dictionary<int, double>();
	eyeDict[0] = 0.7 * widen_cumm + 0.3 * eye_widen;
	eyeDict[1] = 0.7 * wrinkle_cumm + 0.3 * nose_wrinkle;
	eyePlot.AddDataPoint(new DataPointGraph() { Time = CurrentTime, values = eyeDict, Confidence = confidence });

	smile_cumm = smileDict[0];
	frown_cumm = smileDict[1];
	brow_up_cumm = browDict[0];
	brow_down_cumm = browDict[1];
	widen_cumm = eyeDict[0];
	wrinkle_cumm = eyeDict[1];
}
else
{
	// If no AUs present disable the AU visualization
	MainGrid.ColumnDefinitions[2].Width = new GridLength(0);
	eyePlot.Visibility = Visibility.Collapsed;
	browPlot.Visibility = Visibility.Collapsed;
	smilePlot.Visibility = Visibility.Collapsed;
}

Dictionary<int, double> poseDict = new Dictionary<int, double>();
poseDict[0] = -pose[3];
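The 0.7/0.3 updates above are an exponential moving average that smooths the per-frame AU estimates before plotting. As a standalone sketch (a hypothetical helper for illustration, not code from this commit):

// Hypothetical helper illustrating the smoothing used above:
// state = 0.7 * state + 0.3 * observation (an exponential moving average).
struct Ema {
	double state = 0.0;
	double Update(double observation) {
		state = 0.7 * state + 0.3 * observation;
		return state;
	}
};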
@@ -63,23 +63,23 @@
<MenuItem IsCheckable="True" Header="Show AUs" Click="VisualisationChange" IsChecked="{Binding ShowAUs}"/>
</MenuItem>

<MenuItem Header="Face Detector">
<MenuItem Header="Face Detector" Name="FaceDetectorMenu">

<MenuItem x:Name="FaceDetHaar" Header="OpenCV (Haar)" IsCheckable="true" IsChecked="{Binding DetectorHaar}"></MenuItem>
<MenuItem x:Name="FaceDetHOG" Header="dlib (HOG-SVM)" IsCheckable="true" IsChecked="{Binding DetectorHOG}"></MenuItem>
<MenuItem x:Name="FaceDetCNN" Header="OpenFace (MTCNN)" IsCheckable="true" IsChecked="{Binding DetectorCNN}"></MenuItem>
<i:Interaction.Behaviors>
<local:ExclusiveMenuItemBehavior></local:ExclusiveMenuItemBehavior>
<OpenFaceOffline:ExclusiveMenuItemBehavior></OpenFaceOffline:ExclusiveMenuItemBehavior>
</i:Interaction.Behaviors>

</MenuItem>
<MenuItem Header="Landmark Detector">
<MenuItem Header="Landmark Detector" Name="LandmarkDetectorMenu">

<MenuItem x:Name="LandmarkDetCLM" Header="CLM" IsCheckable="true" IsChecked="{Binding LandmarkDetectorCLM}"></MenuItem>
<MenuItem x:Name="LandmarkDetCLNF" Header="CLNF" IsCheckable="true" IsChecked="{Binding LandmarkDetectorCLNF}"></MenuItem>
<MenuItem x:Name="LandmarkDetCECLM" Header="CE-CLM" IsCheckable="true" IsChecked="{Binding LandmarkDetectorCECLM}"></MenuItem>
<i:Interaction.Behaviors>
<local:ExclusiveMenuItemBehavior></local:ExclusiveMenuItemBehavior>
<OpenFaceOffline:ExclusiveMenuItemBehavior></OpenFaceOffline:ExclusiveMenuItemBehavior>
</i:Interaction.Behaviors>

</MenuItem>
@@ -162,7 +162,6 @@ namespace OpenFaceOffline

gaze_analyser = new GazeAnalyserManaged();

}

// ----------------------------------------------------------
@@ -202,7 +201,7 @@ namespace OpenFaceOffline
face_model_params.optimiseForVideo();

// Setup the visualization
Visualizer visualizer_of = new Visualizer(ShowTrackedVideo || RecordTracked, ShowAppearance, ShowAppearance);
Visualizer visualizer_of = new Visualizer(ShowTrackedVideo || RecordTracked, ShowAppearance, ShowAppearance, false);

// Initialize the face analyser
face_analyser = new FaceAnalyserManaged(AppDomain.CurrentDomain.BaseDirectory, DynamicAUModels, image_output_size, MaskAligned);
@@ -241,6 +240,7 @@ namespace OpenFaceOffline

// The face analysis step (for AUs and eye gaze)
face_analyser.AddNextFrame(frame, landmark_detector.CalculateAllLandmarks(), detection_succeeding, false);
gaze_analyser.AddNextFrame(landmark_detector, detection_succeeding, reader.GetFx(), reader.GetFy(), reader.GetCx(), reader.GetCy());

// Only the final face will contain the details
@@ -295,7 +295,7 @@ namespace OpenFaceOffline

// Setup the visualization
Visualizer visualizer_of = new Visualizer(ShowTrackedVideo || RecordTracked, ShowAppearance, ShowAppearance);
Visualizer visualizer_of = new Visualizer(ShowTrackedVideo || RecordTracked, ShowAppearance, ShowAppearance, false);

// Initialize the face detector if it has not been initialized yet
if (face_detector == null)
@@ -630,6 +630,8 @@ namespace OpenFaceOffline
SettingsMenu.IsEnabled = false;
RecordingMenu.IsEnabled = false;
AUSetting.IsEnabled = false;
FaceDetectorMenu.IsEnabled = false;
LandmarkDetectorMenu.IsEnabled = false;

PauseButton.IsEnabled = true;
StopButton.IsEnabled = true;
@@ -659,6 +661,8 @@ namespace OpenFaceOffline
SettingsMenu.IsEnabled = true;
RecordingMenu.IsEnabled = true;
AUSetting.IsEnabled = true;
FaceDetectorMenu.IsEnabled = true;
LandmarkDetectorMenu.IsEnabled = true;

PauseButton.IsEnabled = false;
StopButton.IsEnabled = false;
@@ -272,6 +272,9 @@ namespace CppInterop {
clnf->Reset(x, y);
}

bool HasEyeModel() {
	return clnf->eye_model;
}

double GetConfidence()
{
@@ -54,9 +54,9 @@ namespace UtilitiesOF {

public:

Visualizer(bool vis_track, bool vis_hog, bool vis_aligned)
Visualizer(bool vis_track, bool vis_hog, bool vis_aligned, bool vis_aus)
{
m_visualizer = new Utilities::Visualizer(vis_track, vis_hog, vis_aligned);
m_visualizer = new Utilities::Visualizer(vis_track, vis_hog, vis_aligned, vis_aus);
}

void SetObservationGaze(System::Tuple<float, float, float>^ gaze_direction0, System::Tuple<float, float, float>^ gaze_direction1,
@@ -1224,25 +1224,28 @@ void FaceAnalyser::ReadRegressor(std::string fname, const vector<string>& au_names)
{
ifstream regressor_stream(fname.c_str(), ios::in | ios::binary);

// First read the input type
int regressor_type;
regressor_stream.read((char*)&regressor_type, 4);
if (regressor_stream.is_open())
{
// First read the input type
int regressor_type;
regressor_stream.read((char*)&regressor_type, 4);

if(regressor_type == SVR_appearance_static_linear)
{
AU_SVR_static_appearance_lin_regressors.Read(regressor_stream, au_names);
}
else if(regressor_type == SVR_appearance_dynamic_linear)
{
AU_SVR_dynamic_appearance_lin_regressors.Read(regressor_stream, au_names);
}
else if(regressor_type == SVM_linear_stat)
{
AU_SVM_static_appearance_lin.Read(regressor_stream, au_names);
}
else if(regressor_type == SVM_linear_dyn)
{
AU_SVM_dynamic_appearance_lin.Read(regressor_stream, au_names);
if (regressor_type == SVR_appearance_static_linear)
{
AU_SVR_static_appearance_lin_regressors.Read(regressor_stream, au_names);
}
else if (regressor_type == SVR_appearance_dynamic_linear)
{
AU_SVR_dynamic_appearance_lin_regressors.Read(regressor_stream, au_names);
}
else if (regressor_type == SVM_linear_stat)
{
AU_SVM_static_appearance_lin.Read(regressor_stream, au_names);
}
else if (regressor_type == SVM_linear_dyn)
{
AU_SVM_dynamic_appearance_lin.Read(regressor_stream, au_names);
}
}
}
@@ -149,7 +149,7 @@ FaceAnalyserParameters::FaceAnalyserParameters(vector<string> &arguments):root()
}
else
{
std::cout << "Could not find the AU detection model to load" << std::endl;
std::cout << "Could not find the face analysis module to load" << std::endl;
}
}

@@ -182,7 +182,7 @@ void FaceAnalyserParameters::init()
}
else
{
std::cout << "Could not find the AU detection model to load" << std::endl;
std::cout << "Could not find the face analysis module to load" << std::endl;
}

orientation_bins = vector<cv::Vec3d>();

@@ -226,7 +226,7 @@ void FaceAnalyserParameters::OptimizeForVideos()
}
else
{
std::cout << "Could not find the AU detection model to load" << std::endl;
std::cout << "Could not find the face analysis module to load" << std::endl;
}

}
@@ -114,6 +114,8 @@ struct FaceModelParameters

private:
void init();
void check_model_path(const std::string& root = "/");

};

}
5 lib/local/LandmarkDetector/model/clnf_multi_pie.txt Normal file
@@ -0,0 +1,5 @@
PDM pdms/Multi-PIE_aligned_PDM_68.txt
Triangulations tris_68.txt
PatchesCCNF patch_experts/ccnf_patches_0.25_multi_pie.txt
PatchesCCNF patch_experts/ccnf_patches_0.35_multi_pie.txt
PatchesCCNF patch_experts/ccnf_patches_0.5_multi_pie.txt
3 lib/local/LandmarkDetector/model/main_clnf_multi_pie.txt Normal file
@@ -0,0 +1,3 @@
LandmarkDetector clnf_multi_pie.txt
FaceDetConversion haarAlign.txt
DetectionValidator detection_validation/validator_cnn.txt
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -57,6 +57,8 @@ FaceModelParameters::FaceModelParameters()
{
	// initialise the default values
	init();
	check_model_path();
}

FaceModelParameters::FaceModelParameters(vector<string> &arguments)

@@ -260,6 +262,33 @@ FaceModelParameters::FaceModelParameters(vector<string> &arguments)
{
	std::cout << "Could not find the MTCNN face detector location" << std::endl;
}
check_model_path(root.string());
}

void FaceModelParameters::check_model_path(const std::string& root)
{
	// Make sure model_location is valid
	// First check the working directory, then the executable's directory, then the config path set by the build process.
	boost::filesystem::path config_path = boost::filesystem::path(CONFIG_DIR);
	boost::filesystem::path model_path = boost::filesystem::path(model_location);
	boost::filesystem::path root_path = boost::filesystem::path(root);

	if (boost::filesystem::exists(model_path))
	{
		model_location = model_path.string();
	}
	else if (boost::filesystem::exists(root_path / model_path))
	{
		model_location = (root_path / model_path).string();
	}
	else if (boost::filesystem::exists(config_path / model_path))
	{
		model_location = (config_path / model_path).string();
	}
	else
	{
		std::cout << "Could not find the landmark detection model to load" << std::endl;
	}
}

void FaceModelParameters::init()
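With check_model_path() now called from both constructors, a relative model_location resolves against the working directory, then the supplied root, then CONFIG_DIR. A hedged usage sketch (the model file name is the one referenced elsewhere in this commit):

// Sketch: the default constructor now runs init() followed by check_model_path(),
// so a relative path such as "model/main_ceclm_general.txt" is resolved against
// the working directory, the root, then CONFIG_DIR before it is used below.
LandmarkDetector::FaceModelParameters det_parameters;
LandmarkDetector::CLNF face_model(det_parameters.model_location);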
@@ -78,6 +78,7 @@ namespace Utilities
float getCy() const { return cy; }

void setOutputAUs(bool output_AUs) { this->output_AUs = output_AUs; }
void setOutputGaze(bool output_gaze) { this->output_gaze = output_gaze; }

private:
@@ -54,8 +54,8 @@ namespace Utilities

// The constructor for the visualizer that specifies what to visualize
Visualizer(std::vector<std::string> arguments);
Visualizer(bool vis_track, bool vis_hog, bool vis_align);
Visualizer(bool vis_track, bool vis_hog, bool vis_align, bool vis_aus);

// Adding observations to the visualizer

// Pose related observations
@@ -67,6 +67,8 @@ namespace Utilities
// Pose related observations
void SetObservationPose(const cv::Vec6f& pose, double confidence);

void SetObservationActionUnits(const std::vector<std::pair<std::string, double> >& au_intensities, const std::vector<std::pair<std::string, double> >& au_occurences);

// Gaze related observations
void SetObservationGaze(const cv::Point3f& gazeDirection0, const cv::Point3f& gazeDirection1, const std::vector<cv::Point2f>& eye_landmarks, const std::vector<cv::Point3f>& eye_landmarks3d, double confidence);

@@ -88,7 +90,8 @@ namespace Utilities
bool vis_track;
bool vis_hog;
bool vis_align;
bool vis_aus;

// Can be adjusted to show less confident frames
double visualisation_boundary = 0.4;

@@ -99,6 +102,7 @@ namespace Utilities
cv::Mat tracked_image;
cv::Mat hog_image;
cv::Mat aligned_face_image;
cv::Mat action_units_image;

// Useful for drawing 3d
float fx, fy, cx, cy;
@@ -36,6 +36,11 @@
#include "RotationHelpers.h"
#include "ImageManipulationHelpers.h"

#include <sstream>
#include <iomanip>
#include <map>
#include <set>

// For drawing on images
#include <opencv2/imgproc.hpp>

@@ -45,12 +50,35 @@ using namespace Utilities;
const int draw_shiftbits = 4;
const int draw_multiplier = 1 << 4;

const std::map<std::string, std::string> AUS_DESCRIPTION = {
	{ "AU01", "Inner Brow Raiser " },
	{ "AU02", "Outer Brow Raiser " },
	{ "AU04", "Brow Lowerer " },
	{ "AU05", "Upper Lid Raiser " },
	{ "AU06", "Cheek Raiser " },
	{ "AU07", "Lid Tightener " },
	{ "AU09", "Nose Wrinkler " },
	{ "AU10", "Upper Lip Raiser " },
	{ "AU12", "Lip Corner Puller " },
	{ "AU14", "Dimpler " },
	{ "AU15", "Lip Corner Depressor" },
	{ "AU17", "Chin Raiser " },
	{ "AU20", "Lip stretcher " },
	{ "AU23", "Lip Tightener " },
	{ "AU25", "Lips part " },
	{ "AU26", "Jaw Drop " },
	{ "AU28", "Lip Suck " },
	{ "AU45", "Blink " },
};

Visualizer::Visualizer(std::vector<std::string> arguments)
{
	// By default not visualizing anything
	this->vis_track = false;
	this->vis_hog = false;
	this->vis_align = false;
	this->vis_aus = false;

	for (size_t i = 0; i < arguments.size(); ++i)
	{
@@ -59,6 +87,7 @@ Visualizer::Visualizer(std::vector<std::string> arguments)
vis_track = true;
vis_align = true;
vis_hog = true;
vis_aus = true;
}
else if (arguments[i].compare("-vis-align") == 0)
{
@@ -72,19 +101,22 @@ Visualizer::Visualizer(std::vector<std::string> arguments)
{
vis_track = true;
}
else if (arguments[i].compare("-vis-aus") == 0)
{
vis_aus = true;
}
}

}

Visualizer::Visualizer(bool vis_track, bool vis_hog, bool vis_align)
Visualizer::Visualizer(bool vis_track, bool vis_hog, bool vis_align, bool vis_aus)
{
	this->vis_track = vis_track;
	this->vis_hog = vis_hog;
	this->vis_align = vis_align;
	this->vis_aus = vis_aus;
}
// Setting the image on which to draw
void Visualizer::SetImage(const cv::Mat& canvas, float fx, float fy, float cx, float cy)
{
@@ -99,6 +131,7 @@ void Visualizer::SetImage(const cv::Mat& canvas, float fx, float fy, float cx, float cy)
	// Clearing other images
	hog_image = cv::Mat();
	aligned_face_image = cv::Mat();
	action_units_image = cv::Mat();
}
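After this commit the visualizer can be constructed two ways, as the header hunk above shows: from the raw argument vector, or from four explicit booleans. A short usage sketch (the flag names are the ones parsed in the constructor above):

// Sketch: two equivalent ways to request the tracking video plus the AU panel.
std::vector<std::string> arguments = { "-vis-track", "-vis-aus" };
Utilities::Visualizer vis_from_args(arguments);
Utilities::Visualizer vis_explicit(true /*track*/, false /*hog*/, false /*align*/, true /*aus*/);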
@@ -195,6 +228,108 @@ void Visualizer::SetObservationPose(const cv::Vec6f& pose, double confidence)
}
}

void Visualizer::SetObservationActionUnits(const std::vector<std::pair<std::string, double> >& au_intensities,
	const std::vector<std::pair<std::string, double> >& au_occurences)
{
	if (au_intensities.size() > 0 || au_occurences.size() > 0)
	{
		std::set<std::string> au_names;
		std::map<std::string, bool> occurences_map;
		std::map<std::string, double> intensities_map;

		for (size_t idx = 0; idx < au_intensities.size(); idx++)
		{
			au_names.insert(au_intensities[idx].first);
			intensities_map[au_intensities[idx].first] = au_intensities[idx].second;
		}

		for (size_t idx = 0; idx < au_occurences.size(); idx++)
		{
			au_names.insert(au_occurences[idx].first);
			occurences_map[au_occurences[idx].first] = au_occurences[idx].second;
		}

		const int AU_TRACKBAR_LENGTH = 400;
		const int AU_TRACKBAR_HEIGHT = 10;

		const int MARGIN_X = 185;
		const int MARGIN_Y = 10;

		const int nb_aus = au_names.size();

		// Do not reinitialize
		if (action_units_image.empty())
		{
			action_units_image = cv::Mat(nb_aus * (AU_TRACKBAR_HEIGHT + 10) + MARGIN_Y * 2, AU_TRACKBAR_LENGTH + MARGIN_X, CV_8UC3, cv::Scalar(255, 255, 255));
		}
		else
		{
			action_units_image.setTo(255);
		}

		std::map<std::string, std::pair<bool, double>> aus;

		// first, prepare a mapping "AU name" -> { present, intensity }
		for (auto au_name : au_names)
		{
			// Insert the intensity and AU presence (as these do not always overlap, check if they exist first)
			bool occurence = false;
			if (occurences_map.find(au_name) != occurences_map.end())
			{
				occurence = occurences_map[au_name] != 0;
			}
			else
			{
				// If we do not have an occurrence label, trust the intensity one
				occurence = intensities_map[au_name] > 1;
			}
			double intensity = 0.0;
			if (intensities_map.find(au_name) != intensities_map.end())
			{
				intensity = intensities_map[au_name];
			}
			else
			{
				// If we do not have an intensity label, trust the occurrence one
				intensity = occurences_map[au_name] == 0 ? 0 : 5;
			}

			aus[au_name] = std::make_pair(occurence, intensity);
		}

		// then, build the graph
		size_t idx = 0;
		for (auto& au : aus)
		{
			std::string name = au.first;
			bool present = au.second.first;
			double intensity = au.second.second;

			auto offset = MARGIN_Y + idx * (AU_TRACKBAR_HEIGHT + 10);
			std::ostringstream au_i;
			au_i << std::setprecision(2) << std::setw(4) << std::fixed << intensity;
			cv::putText(action_units_image, name, cv::Point(10, offset + 10), CV_FONT_HERSHEY_SIMPLEX, 0.5, CV_RGB(present ? 0 : 200, 0, 0), 1, CV_AA);
			cv::putText(action_units_image, AUS_DESCRIPTION.at(name), cv::Point(55, offset + 10), CV_FONT_HERSHEY_SIMPLEX, 0.3, CV_RGB(0, 0, 0), 1, CV_AA);

			if (present)
			{
				cv::putText(action_units_image, au_i.str(), cv::Point(160, offset + 10), CV_FONT_HERSHEY_SIMPLEX, 0.3, CV_RGB(0, 100, 0), 1, CV_AA);
				cv::rectangle(action_units_image, cv::Point(MARGIN_X, offset),
					cv::Point(MARGIN_X + AU_TRACKBAR_LENGTH * intensity / 5, offset + AU_TRACKBAR_HEIGHT),
					cv::Scalar(128, 128, 128),
					CV_FILLED);
			}
			else
			{
				cv::putText(action_units_image, "0.00", cv::Point(160, offset + 10), CV_FONT_HERSHEY_SIMPLEX, 0.3, CV_RGB(0, 0, 0), 1, CV_AA);
			}
			idx++;
		}
	}
}
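A hedged usage sketch for the new AU panel; the AU labels and values below are made-up illustrations, while the signatures are those declared in Visualizer.h above:

// Illustrative values only: intensities on a 0-5 scale, occurrences as 0/1.
std::vector<std::pair<std::string, double>> au_intensities = { { "AU06", 2.4 }, { "AU12", 3.1 } };
std::vector<std::pair<std::string, double>> au_occurences = { { "AU06", 1 }, { "AU12", 1 }, { "AU45", 0 } };

Utilities::Visualizer visualizer(false, false, false, true); // AU panel only
visualizer.SetObservationActionUnits(au_intensities, au_occurences);
visualizer.ShowObservation(); // shows the "action units" window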
// Eye gaze information drawing, first of eye landmarks then of gaze
void Visualizer::SetObservationGaze(const cv::Point3f& gaze_direction0, const cv::Point3f& gaze_direction1, const std::vector<cv::Point2f>& eye_landmarks2d, const std::vector<cv::Point3f>& eye_landmarks3d, double confidence)
{
@@ -283,19 +418,22 @@ void Visualizer::SetFps(double fps)

char Visualizer::ShowObservation()
{
if (vis_track)
{
cv::namedWindow("tracking_result", 1);
cv::imshow("tracking_result", captured_image);
}
if (vis_align)
if (vis_align && !aligned_face_image.empty())
{
	cv::imshow("sim_warp", aligned_face_image);
}
if (vis_hog)
if (vis_hog && !hog_image.empty())
{
	cv::imshow("hog", hog_image);
}
if (vis_aus && !action_units_image.empty())
{
	cv::imshow("action units", action_units_image);
}
if (vis_track)
{
	cv::imshow("tracking result", captured_image);
}
return cv::waitKey(1);
}
@@ -22,7 +22,7 @@ model = 'model/main_ceclm_general.txt'; % Trained on in the wild, menpo and multi-pie data
%model = 'model/main_clm_wild.txt'; % Trained on in-the-wild

% Create a command that will run the tracker on a set of videos and display the output
command = sprintf('%s -mloc "%s" ', executable, model);
command = sprintf('%s -mloc "%s" -verbose ', executable, model);

% add all videos to a single argument list (so as not to load the model anew
% for every video)