Updating readme and instructions.

Tadas Baltrusaitis
2018-05-09 08:27:41 +01:00
parent 53018d09b5
commit 17962a0838
5 changed files with 57 additions and 56 deletions


@@ -12,21 +12,21 @@ Notwithstanding the license granted herein, Licensee acknowledges that certain c
// not limited to academic journal and conference publications, technical
// reports and manuals, must cite at least one of the following works:
//
// OpenFace: an open source facial behavior analysis toolkit
// Tadas Baltrušaitis, Peter Robinson, and Louis-Philippe Morency
// in IEEE Winter Conference on Applications of Computer Vision, 2016
// OpenFace 2.0: Facial Behavior Analysis Toolkit
// Tadas Baltrušaitis, Amir Zadeh, Yao Chong Lim, and Louis-Philippe Morency
// in IEEE International Conference on Automatic Face and Gesture Recognition, 2018
//
// Convolutional experts constrained local model for facial landmark detection.
// A. Zadeh, T. Baltrušaitis, and Louis-Philippe Morency,
// in Computer Vision and Pattern Recognition Workshops, 2017.
//
// Rendering of Eyes for Eye-Shape Registration and Gaze Estimation
// Erroll Wood, Tadas Baltrušaitis, Xucong Zhang, Yusuke Sugano, Peter Robinson, and Andreas Bulling
// in IEEE International Conference on Computer Vision (ICCV), 2015
//
// Cross-dataset learning and person-speci?c normalisation for automatic Action Unit detection
// Cross-dataset learning and person-specific normalisation for automatic Action Unit detection
// Tadas Baltrušaitis, Marwa Mahmoud, and Peter Robinson
// in Facial Expression Recognition and Analysis Challenge,
// IEEE International Conference on Automatic Face and Gesture Recognition, 2015
//
// Constrained Local Neural Fields for robust facial landmark detection in the wild.
// Tadas Baltrušaitis, Peter Robinson, and Louis-Philippe Morency.
// in IEEE Int. Conference on Computer Vision Workshops, 300 Faces in-the-Wild Challenge, 2013.
//
///////////////////////////////////////////////////////////////////////////////


@@ -1,9 +1,9 @@
# OpenFace 1.0.0: an open source facial behavior analysis toolkit
# OpenFace 2.0.0: an open source facial behavior analysis toolkit
[![Build Status](https://travis-ci.org/TadasBaltrusaitis/OpenFace.svg?branch=master)](https://travis-ci.org/TadasBaltrusaitis/OpenFace)
[![Build status](https://ci.appveyor.com/api/projects/status/8msiklxfbhlnsmxp/branch/master?svg=true)](https://ci.appveyor.com/project/TadasBaltrusaitis/openface/branch/master)
Over the past few years, there has been an increased interest in automatic facial behavior analysis and understanding. We present OpenFace, a tool intended for computer vision and machine learning researchers, the affective computing community, and people interested in building interactive applications based on facial behavior analysis. OpenFace is the first toolkit capable of facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation with available source code. The computer vision algorithms which represent the core of OpenFace demonstrate state-of-the-art results in all of the above-mentioned tasks. Furthermore, our tool is capable of real-time performance and is able to run from a simple webcam without any specialist hardware.
Over the past few years, there has been an increased interest in automatic facial behavior analysis and understanding. We present OpenFace, a tool intended for computer vision and machine learning researchers, the affective computing community, and people interested in building interactive applications based on facial behavior analysis. OpenFace is the first toolkit capable of facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation with available source code for both running and training the models. The computer vision algorithms which represent the core of OpenFace demonstrate state-of-the-art results in all of the above-mentioned tasks. Furthermore, our tool is capable of real-time performance and is able to run from a simple webcam without any specialist hardware.
![Multicomp logo](https://github.com/TadasBaltrusaitis/OpenFace/blob/master/imgs/muticomp_logo_black.png)
@@ -47,12 +47,16 @@ If you use any of the resources provided on this page in any of your publication
#### Overall system
**OpenFace: an open source facial behavior analysis toolkit**
Tadas Baltrušaitis, Peter Robinson, and Louis-Philippe Morency,
in *IEEE Winter Conference on Applications of Computer Vision*, 2016
**OpenFace 2.0: Facial Behavior Analysis Toolkit**
Tadas Baltrušaitis, Amir Zadeh, Yao Chong Lim, and Louis-Philippe Morency,
*IEEE International Conference on Automatic Face and Gesture Recognition*, 2018
#### Facial landmark detection and tracking
**Convolutional experts constrained local model for facial landmark detection**
A. Zadeh, T. Baltrušaitis, and Louis-Philippe Morency.
*Computer Vision and Pattern Recognition Workshops*, 2017
**Constrained Local Neural Fields for robust facial landmark detection in the wild**
Tadas Baltrušaitis, Peter Robinson, and Louis-Philippe Morency.
in *IEEE International Conference on Computer Vision Workshops, 300 Faces in-the-Wild Challenge*, 2013
@@ -72,11 +76,11 @@ in *Facial Expression Recognition and Analysis Challenge*,
# Commercial license
For inquiries about the commercial licensing of the OpenFace toolkit please contact innovation@cmu.edu
For inquiries about the commercial licensing of the OpenFace toolkit please visit TODO
# Final remarks
I did my best to make sure that the code runs out of the box, but there are always issues and I would be grateful for your understanding that this is research code and not a full-fledged product. However, if you encounter any problems/bugs/issues please contact me on GitHub or by emailing me at Tadas.Baltrusaitis@cl.cam.ac.uk for any bug reports/questions/suggestions. I prefer questions and bug reports on GitHub as that provides visibility to others who might be encountering the same issues or who have the same questions.
I did my best to make sure that the code runs out of the box, but there are always issues and I would be grateful for your understanding that this is research code and a research project. If you encounter any problems/bugs/issues please contact me on GitHub or by emailing me at tadyla@gmail.com for any bug reports/questions/suggestions. I prefer questions and bug reports on GitHub as that provides visibility to others who might be encountering the same issues or who have the same questions.
# Copyright


@@ -4,38 +4,36 @@ These are provided for recreation of some of the experiments described in the pu
======================== Demos ==================================
run_demo_images.m - running the FaceLandmarkImg landmark detection on the demo images packaged with the code
run_demo_videos.m - running the FaceTrackingVid landmark detection and tracking on prepackaged demo videos
run_demo_video_multi.m - running the FaceTrackingVidMulti landmark detection and tracking on prepackaged demo videos (the difference from above is that it can deal with multiple faces)
Shows examples of running OpenFace executables from the command line interface and loading the output results into Matlab. Each script shows a different use case of OpenFace.
For extracting head pose, facial landmarks, HOG features, aligned faces, eye gaze, and Facial Action Units, look at the following demos:
For extracting head pose, facial landmarks, gaze, HOG features, and Facial Action Units, look at the following demos (a short sketch of the underlying workflow follows the list):
feature_extraction_demo_img_seq.m - runs the FeatureExtraction project, demonstrating how to specify parameters for extracting a number of features from a sequence of images in a folder and how to read those features into Matlab.
feature_extraction_demo_vid.m - runs the FeatureExtraction project, demonstrating how to specify parameters for extracting a number of features from a video and how to read those features into Matlab.
gaze_extraction_demo_vid.m - Example of a clip with varying gaze and extraction of eye gaze information
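As a minimal sketch of that workflow (assuming the FeatureExtraction executable has been built at the path below and that it writes a CSV named after the input clip into the output directory; adjust both to your setup):

    % Run the FeatureExtraction executable from Matlab and load its CSV output.
    exe   = fullfile('..', 'x64', 'Release', 'FeatureExtraction.exe');  % assumed build location
    video = fullfile('..', 'samples', 'default.wmv');                   % assumed sample clip
    out   = 'processed';
    status = system(sprintf('"%s" -f "%s" -out_dir "%s"', exe, video, out));
    assert(status == 0, 'FeatureExtraction did not run cleanly');
    [~, name, ~] = fileparts(video);
    feats = readtable(fullfile(out, [name, '.csv']));  % per-frame landmarks, pose, gaze, AUs
    head(feats)                                        % peek at the first few frames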
The other scripts are for unit testing of the code:
- run_demo_align_size.m
- run_tes_img_seq.m
The demos are configured to use CLNF patch experts trained on the in-the-wild and Multi-PIE datasets; it is possible to uncomment other model file definitions in the scripts to run them instead.
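Concretely, each script keeps the model as a location string handed to the executables, so switching is a matter of which definition is left uncommented; the file names below are assumptions, check the model folder shipped with your version:

    % model_loc = 'model/main_ceclm_general.txt';  % CE-CLM (assumed file name)
    model_loc = 'model/main_clnf_general.txt';     % CLNF, in-the-wild + Multi-PIE (assumed file name)
    % model_loc = 'model/main_clm_general.txt';    % CLM (assumed file name)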
======================== Head Pose Experiments ============================
To run them you will need to have the appropriate datasets and to change the dataset locations.
run_head_pose_tests_OpenFace.m - runs CLNF on the 3 head pose datasets (Boston University, Biwi Kinect, and ICT-3DHP; you need to acquire the datasets yourself)
run_head_pose_tests_OpenFace_CECLM.m - runs CE-CLM on the 3 head pose datasets (Boston University, Biwi Kinect, and ICT-3DHP; you need to acquire the datasets yourself)
run_head_pose_tests_OpenFace_CLNF - runs CLNF on the 3 head pose datasets (Boston University, Biwi Kinect, and ICT-3DHP; you need to acquire the datasets yourself)
======================== Feature Point Experiments ============================
run_OpenFace_feature_point_tests_300W.m runs CLM and CLNF on the in-the-wild face datasets acquired from http://ibug.doc.ic.ac.uk/resources/300-W/
The code uses the already defined bounding boxes of faces (these are produced using the 'ExtractBoundingBoxes.m' script on the in-the-wild datasets). The code relies on there being a .txt file of the same name as the image containing the bounding box in the appropriate directory; see https://github.com/TadasBaltrusaitis/OpenFace/wiki/Command-line-arguments for details.
300W
To run the code you will need to download the 300-W challenge datasets and then replace the database_root with the dataset location.
run_OpenFace_feature_point_tests_300W.m runs CE-CLM, CLNF, and CLM on the in-the-wild face datasets acquired from http://ibug.doc.ic.ac.uk/resources/300-W/
This script also includes code to draw a graph displaying error curves of the CLNF and CLM methods trained on in-the-wild data.
The code uses the already defined bounding boxes of faces (these are produced using the 'ExtractBoundingBoxes.m' script on the in-the-wild datasets). The code relies on there being a .txt file of the same name as the image containing the bounding box. (Note that if the bounding box is not provided, the code will fall back to the MTCNN face detector.)
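A hypothetical sketch of producing such a file from Matlab; the 'min_x min_y max_x max_y' layout is an assumption, the wiki page linked above documents the authoritative format:

    img  = 'indoor_001.png';             % assumed image name
    bbox = [112 80 298 310];             % assumed [min_x min_y max_x max_y] layout
    [p, name, ~] = fileparts(img);
    fid = fopen(fullfile(p, [name '.txt']), 'w');
    fprintf(fid, '%d %d %d %d\n', bbox);
    fclose(fid);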
For convenient comparisons to other state-of-the-art approaches it also includes results of using the following approaches on the 300-W datasets:
To run the code you will need to download the 300-W challenge datasets and run the bounding box extraction script, then replace the database_root with the dataset location.
run_yt_dataset.m runs the CLNF model on the YTCeleb Database (https://sites.google.com/site/akshayasthana/Annotations); you need to get the dataset yourself though.
This script also includes code to draw a graph displaying error curves of the CE-CLM, CLNF, and CLM methods trained on in-the-wild data.
YTCeleb
run_yt_dataset.m runs the CE-CLM and CLNF models on the YTCeleb Database (https://sites.google.com/site/akshayasthana/Annotations); you need to get the dataset yourself though.
300VW
run_300VW_dataset_OpenFace.m runs the CE-CLM model on the 300VW dataset (you will need to acquire it yourself) and compares the results to a number of recent facial landmark detection approaches
======================== Action Unit Experiments ============================
@@ -46,3 +44,5 @@ As the models were partially trained/validated on DISFA, FERA2011, BP4D, UNBC, B
======================== Gaze Experiments ============================
To evaluate our gaze estimation on the MPIIGaze dataset, run the extract_mpii_gaze_test.m script in the Gaze Experiments folder
Note that the dataset evaluated on is NOT publicly available.


@@ -1,33 +1,32 @@
///////////////////////////////////////////////////////////////////////////////
// Copyright (C) 2017, Carnegie Mellon University and University of Cambridge,
// all rights reserved.
//
// ACADEMIC OR NON-PROFIT ORGANIZATION NONCOMMERCIAL RESEARCH USE ONLY
//
// BY USING OR DOWNLOADING THE SOFTWARE, YOU ARE AGREEING TO THE TERMS OF THIS LICENSE AGREEMENT.
// IF YOU DO NOT AGREE WITH THESE TERMS, YOU MAY NOT USE OR DOWNLOAD THE SOFTWARE.
//
Copyright (C) 2017, University of Southern California, University of Cambridge, and Carnegie Mellon University, all rights reserved
ACADEMIC OR NON-PROFIT ORGANIZATION NONCOMMERCIAL RESEARCH USE ONLY
THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Notwithstanding the license granted herein, Licensee acknowledges that certain components of the Software may be covered by so-called “open source” software licenses (“Open Source Components”), which means any software licenses approved as open source licenses by the Open Source Initiative or any substantially similar licenses, including without limitation any license that, as a condition of distribution of the software licensed under such license, requires that the distributor make the software available in source code format. Carnegie Mellon shall provide a list of Open Source Components for a particular version of the Software upon Licensee’s request. Licensee will comply with the applicable terms of such licenses and to the extent required by the licenses covering Open Source Components, the terms of such licenses will apply in lieu of the terms of this Agreement. To the extent the terms of the licenses applicable to Open Source Components prohibit any of the restrictions in this License Agreement with respect to such Open Source Component, such restrictions will not apply to such Open Source Component. To the extent the terms of the licenses applicable to Open Source Components require Carnegie Mellon to make an offer to provide source code or related information in connection with the Software, such offer is hereby made. Any request for source code or related information should be directed to Tadas Baltrusaitis. Licensee acknowledges receipt of notices for the Open Source Components for the initial delivery of the Software.
// License can be found in OpenFace-license.txt
//
// * Any publications arising from the use of this software, including but
// not limited to academic journal and conference publications, technical
// reports and manuals, must cite at least one of the following works:
//
// OpenFace: an open source facial behavior analysis toolkit
// Tadas Baltrušaitis, Peter Robinson, and Louis-Philippe Morency
// in IEEE Winter Conference on Applications of Computer Vision, 2016
// OpenFace 2.0: Facial Behavior Analysis Toolkit
// Tadas Baltrušaitis, Amir Zadeh, Yao Chong Lim, and Louis-Philippe Morency
// in IEEE International Conference on Automatic Face and Gesture Recognition, 2018
//
// Convolutional experts constrained local model for facial landmark detection.
// A. Zadeh, T. Baltrušaitis, and Louis-Philippe Morency,
// in Computer Vision and Pattern Recognition Workshops, 2017.
//
// Rendering of Eyes for Eye-Shape Registration and Gaze Estimation
// Erroll Wood, Tadas Baltrušaitis, Xucong Zhang, Yusuke Sugano, Peter Robinson, and Andreas Bulling
// in IEEE International Conference on Computer Vision (ICCV), 2015
//
// Cross-dataset learning and person-speci?c normalisation for automatic Action Unit detection
// Cross-dataset learning and person-specific normalisation for automatic Action Unit detection
// Tadas Baltrušaitis, Marwa Mahmoud, and Peter Robinson
// in Facial Expression Recognition and Analysis Challenge,
// IEEE International Conference on Automatic Face and Gesture Recognition, 2015
//
// Constrained Local Neural Fields for robust facial landmark detection in the wild.
// Tadas Baltrušaitis, Peter Robinson, and Louis-Philippe Morency.
// in IEEE Int. Conference on Computer Vision Workshops, 300 Faces in-the-Wild Challenge, 2013.
//
///////////////////////////////////////////////////////////////////////////////


@@ -23,13 +23,11 @@ Copyright can be found in the copyright.txt
./experiments_300VW - These are provided for recreation of experiments on 300VW dataset
./experiments_300W - These are provided for recreation of some of the experiments described in the papers on 300W dataset
./experiments_JANUS - These are provided for recreation of some of the experiments described in the papers on IJB-FL dataset
./experiments_JANUS - These are provided for recreation of some of the experiments described in the papers on the menpo dataset (both cross and within data)
./experiments_menpo - These are provided for recreation of some of the experiments described in the papers on the menpo dataset (both cross and within data)
//======================= Utilities ===================//
./face_detection - Provides utilities for face detection, possible choices between four detectors: MTCNN (requires MatConvNet for speed), Matlab inbuilt one, Zhu and Ramanan, and Yu et al.
./face_detection_yu - The face detector from Xiang Yu, more details in ./face_detection_yu/README.txt. Only tested on windows machines
./face_detection_zhu - The face detector from Zhu and Ramanan, might need to compile it using ./face_detection_yu/face-release1.0-basic/compile.m
./mtcnn - The most recent and accurate model, MTCNN face detector based on the paper "Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Neural Networks" by Zhang et al.
./face_detection - Provides utilities for face detection, with a choice between two detectors: MTCNN (requires MatConvNet for speed) and the Matlab built-in one (see the sketch after this list)
./mtcnn - a recent and accurate face detector based on the paper "Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Neural Networks" by Zhang et al.
./face_validation - A module for validating face detections (training and inference), it is used for tracking in videos so as to know when reinitialisation is needed
./PDM_helpers - utility functions that deal with PDM fitting, Jacobians and other shape manipulations
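As an illustration, the Matlab built-in option boils down to the Computer Vision Toolbox cascade detector; a rough sketch (variable names are illustrative, the actual wrappers live in ./face_detection):

    detector = vision.CascadeObjectDetector();   % frontal-face model by default
    img      = imread('sample_face.jpg');        % assumed test image
    bboxes   = step(detector, img);              % one [x y width height] row per detection
    imshow(insertShape(img, 'Rectangle', bboxes, 'LineWidth', 3));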
@@ -41,4 +39,4 @@ Results that you should expect on running the code on the publicly available dat
--------------------------------------- Final remarks -----------------------------------------------------------------------------
I did my best to make sure that the code runs out of the box but there are always issues and I would be grateful for your understanding that this is research code. However, if you encounter any probles please contact me at Tadas.Baltrusaitis@cl.cam.ac.uk for any bug reports/questions/suggestions.
I did my best to make sure that the code runs out of the box but there are always issues and I would be grateful for your understanding that this is research code. However, if you encounter any problems please raise an issue on GitHub.