Update InspireFace to 1.1.4
@@ -9,7 +9,7 @@ set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O3")
 # Current version
 set(INSPIRE_FACE_VERSION_MAJOR 1)
 set(INSPIRE_FACE_VERSION_MINOR 1)
-set(INSPIRE_FACE_VERSION_PATCH 2)
+set(INSPIRE_FACE_VERSION_PATCH 4)

 # Converts the version number to a string
 string(CONCAT INSPIRE_FACE_VERSION_MAJOR_STR ${INSPIRE_FACE_VERSION_MAJOR})
@@ -1,6 +1,7 @@
 # InspireFace
 [](https://github.com/HyperInspire/InspireFace/releases/latest)
 [](https://img.shields.io/github/actions/workflow/status/HyperInspire/InspireFace/release-sdks.yaml?&style=for-the-badge&label=build)
+[](https://img.shields.io/github/actions/workflow/status/HyperInspire/InspireFace/test_ubuntu_x86_Pikachu.yaml?&style=for-the-badge&label=test)

 InspireFace is a cross-platform face recognition SDK developed in C/C++, supporting multiple operating systems and various backend types for inference, such as CPU, GPU, and NPU.
@@ -10,6 +11,10 @@ Please contact [contact@insightface.ai](mailto:contact@insightface.ai?subject=In

 ## Change Logs

+**`2024-07-05`** Fixed some bugs in the python ctypes interface.
+
+**`2024-07-03`** Added the blink detection algorithm to the face interaction module.
+
 **`2024-07-02`** Fixed several bugs in the face detector with multi-level input.

 **`2024-06-27`** Verified iOS usability and fixed some bugs.
@@ -52,7 +57,7 @@ You can download the model package files containing models and configurations ne
 If you intend to use the SDK locally or on a server, ensure that OpenCV is installed on the host device beforehand to enable successful linking during the compilation process. For cross-compilation targets like Android or ARM embedded boards, you can use the pre-compiled OpenCV libraries provided by **3rdparty/inspireface-precompile/opencv/**.

 ### 1.4. Installing MNN
-The '3rdparty' directory already includes the MNN library and specifies a particular version as the stable version. If you need to enable or disable additional configuration options during compilation, you can refer to the CMake Options provided by MNN. If you need to use your own precompiled version, feel free to replace it.
+The '**3rdparty**' directory already includes the MNN library and specifies a particular version as the stable version. If you need to enable or disable additional configuration options during compilation, you can refer to the CMake Options provided by MNN. If you need to use your own precompiled version, feel free to replace it.

 ### 1.5. Requirements
@@ -298,14 +303,14 @@ In the project, there is a subproject called cpp/test. To compile it, you need t
 ```bash
 cmake -DISF_BUILD_WITH_TEST=ON ..
 ```
-If you need to run test cases, you will need to download the required [resource files](https://drive.google.com/file/d/1i4uC-dZTQxdVgn2rP0ZdfJTMkJIXgYY4/view?usp=sharing), which are **test_res** and **Model Package** respectively. Unzip the pack file into the test_res folder. The directory structure of test_res should be prepared as follows before testing:
+If you need to run test cases, you will need to download the required [resource files](https://drive.google.com/file/d/1i4uC-dZTQxdVgn2rP0ZdfJTMkJIXgYY4/view?usp=sharing): **test_res**. Unzip it to obtain the test_res folder. The directory structure of test_res should be prepared as follows before testing:

 ```bash

 test_res
 ├── data
 ├── images
-├── pack <- unzip pack.zip
+├── pack <-- The model package files are here
 ├── save
 ├── valid_lfw_funneled.txt
 ├── video
@@ -352,17 +357,17 @@ The following functionalities and technologies are currently supported.
 | 6 | Silent Liveness Detection |  | MiniVision |
 | 7 | Face Quality Detection |  | |
 | 8 | Face Pose Estimation |  | |
-| 9 | Age Prediction |  | |
-| 10 | Cooperative Liveness Detection |  | |
+| 9 | Face Attribute Prediction |  | Age, Race, Gender |
+| 10 | Cooperative Liveness Detection |  | Blink |

 ## 6. Models Package List

-For different scenarios, we currently provide several Packs, each containing multiple models and configurations.
+For different scenarios, we currently provide several Packs, each containing multiple models and configurations. The package file is placed in the **pack** subdirectory under the **test_res** directory.

 | Name | Supported Devices | Note | Link |
 | --- | --- | --- | --- |
-| Pikachu | CPU | Lightweight edge-side model | [GDrive](https://drive.google.com/drive/folders/1krmv9Pj0XEZXR1GRPHjW_Sl7t4l0dNSS?usp=sharing) |
-| Megatron | CPU, GPU | Local or server-side model | [GDrive](https://drive.google.com/drive/folders/1krmv9Pj0XEZXR1GRPHjW_Sl7t4l0dNSS?usp=sharing) |
+| Pikachu | CPU | Lightweight edge-side models | [GDrive](https://drive.google.com/drive/folders/1krmv9Pj0XEZXR1GRPHjW_Sl7t4l0dNSS?usp=sharing) |
+| Megatron | CPU, GPU | Mobile and server models | [GDrive](https://drive.google.com/drive/folders/1krmv9Pj0XEZXR1GRPHjW_Sl7t4l0dNSS?usp=sharing) |
 | Gundam-RV1109 | RKNPU | Supports RK1109 and RK1126 | [GDrive](https://drive.google.com/drive/folders/1krmv9Pj0XEZXR1GRPHjW_Sl7t4l0dNSS?usp=sharing) |
@@ -54,8 +54,14 @@ cmake -DCMAKE_BUILD_TYPE=Release \
 # Compile the project using 4 parallel jobs
 make -j4

-# Create a symbolic link to the extracted test data directory
-ln -s ${FULL_TEST_DIR} .
+# Check if the symbolic link or directory already exists
+if [ ! -e "$(basename ${FULL_TEST_DIR})" ]; then
+    # Create a symbolic link to the extracted test data directory
+    ln -s ${FULL_TEST_DIR} .
+    echo "Symbolic link to '${TARGET_DIR}' created."
+else
+    echo "Symbolic link or directory '$(basename ${FULL_TEST_DIR})' already exists. Skipping creation."
+fi

 # Check if the test executable file exists
 if [ ! -f "$TEST_EXECUTABLE" ]; then
@@ -0,0 +1,71 @@
+#!/bin/bash
+
+# Exit immediately if any command exits with a non-zero status
+set -e
+
+ROOT_DIR="$(pwd)"
+TARGET_DIR="test_res"
+DOWNLOAD_URL="https://github.com/tunmx/inspireface-store/raw/main/resource/test_res-lite.zip"
+ZIP_FILE="test_res-lite.zip"
+BUILD_DIRNAME="ubuntu18_shared"
+
+# Check if the target directory already exists
+if [ ! -d "$TARGET_DIR" ]; then
+    echo "Directory '$TARGET_DIR' does not exist. Downloading..."
+
+    # Download the dataset zip file
+    wget -q "$DOWNLOAD_URL" -O "$ZIP_FILE"
+
+    echo "Extracting '$ZIP_FILE' to '$TARGET_DIR'..."
+    # Unzip the downloaded file
+    unzip "$ZIP_FILE"
+
+    # Remove the downloaded zip file and unnecessary folders
+    rm "$ZIP_FILE"
+    rm -rf "__MACOSX"
+
+    echo "Download and extraction complete."
+else
+    echo "Directory '$TARGET_DIR' already exists. Skipping download."
+fi
+
+# Get the absolute path of the target directory
+FULL_TEST_DIR="$(realpath ${TARGET_DIR})"
+
+# Create the build directory if it doesn't exist
+mkdir -p build/${BUILD_DIRNAME}/
+
+# Change directory to the build directory
+# Disable the shellcheck warning for potential directory changes
+# shellcheck disable=SC2164
+cd build/${BUILD_DIRNAME}/
+
+# Configure the CMake build system
+cmake -DCMAKE_BUILD_TYPE=Release \
+  -DISF_BUILD_WITH_SAMPLE=OFF \
+  -DISF_BUILD_WITH_TEST=OFF \
+  -DISF_ENABLE_BENCHMARK=OFF \
+  -DISF_ENABLE_USE_LFW_DATA=OFF \
+  -DISF_ENABLE_TEST_EVALUATION=OFF \
+  -DOpenCV_DIR=3rdparty/inspireface-precompile/opencv/4.5.1/opencv-ubuntu18-x86/lib/cmake/opencv4 \
+  -DISF_BUILD_SHARED_LIBS=ON ../../
+
+# Compile the project using 4 parallel jobs
+make -j4
+
+# Come back to project root dir
+cd ${ROOT_DIR}
+
+# Important: You must copy the compiled dynamic library to this path!
+cp build/${BUILD_DIRNAME}/lib/libInspireFace.so python/inspireface/modules/core/
+
+# Install dependency
+pip install opencv-python
+pip install click
+pip install loguru
+
+cd python/
+
+# Run sample
+python sample_face_detection.py ../test_res/pack/Pikachu ../test_res/data/bulk/woman.png
@@ -66,3 +66,5 @@ else
     echo "Test executable found. Running tests..."
     "$TEST_EXECUTABLE"
 fi
+
+# Executing python scripts
@@ -104,9 +104,9 @@ build() {
         -DANDROID_NATIVE_API_LEVEL=${NDK_API_LEVEL} \
         -DANDROID_STL=c++_static \
         -DMNN_BUILD_FOR_ANDROID_COMMAND=true \
-        -DISF_BUILD_WITH_SAMPLE=ON \
-        -DISF_BUILD_WITH_TEST=ON \
-        -DISF_ENABLE_BENCHMARK=ON \
+        -DISF_BUILD_WITH_SAMPLE=OFF \
+        -DISF_BUILD_WITH_TEST=OFF \
+        -DISF_ENABLE_BENCHMARK=OFF \
         -DISF_ENABLE_USE_LFW_DATA=OFF \
         -DISF_ENABLE_TEST_EVALUATION=OFF \
         -DISF_BUILD_SHARED_LIBS=ON \
@@ -58,4 +58,4 @@ cmake -DCMAKE_SYSTEM_NAME=Linux \
 make -j4
 make install

-move_install_files "$(pwd)"
+move_install_files "$(pwd)"
@@ -55,6 +55,7 @@ set(SOURCE_FILES ${SOURCE_FILES} ${CMAKE_CURRENT_SOURCE_DIR}/middleware/model_ar
 link_directories(${MNN_LIBS})

 if(ISF_BUILD_SHARED_LIBS)
+    add_definitions("-DISF_BUILD_SHARED_LIBS")
     add_library(InspireFace SHARED ${SOURCE_FILES})
 else()
     add_library(InspireFace STATIC ${SOURCE_FILES})
@@ -100,13 +100,13 @@ HResult HFReleaseInspireFaceSession(HFSession handle) {
 HResult HFCreateInspireFaceSession(HFSessionCustomParameter parameter, HFDetectMode detectMode, HInt32 maxDetectFaceNum, HInt32 detectPixelLevel, HInt32 trackByDetectModeFPS, HFSession *handle) {
     inspire::ContextCustomParameter param;
     param.enable_mask_detect = parameter.enable_mask_detect;
-    param.enable_age = parameter.enable_age;
+    param.enable_face_attribute = parameter.enable_face_quality;
     param.enable_liveness = parameter.enable_liveness;
     param.enable_face_quality = parameter.enable_face_quality;
-    param.enable_gender = parameter.enable_gender;
     param.enable_interaction_liveness = parameter.enable_interaction_liveness;
     param.enable_ir_liveness = parameter.enable_ir_liveness;
     param.enable_recognition = parameter.enable_recognition;
+    param.enable_face_attribute = parameter.enable_face_attribute;
     inspire::DetectMode detMode = inspire::DETECT_MODE_ALWAYS_DETECT;
     if (detectMode == HF_DETECT_MODE_LIGHT_TRACK) {
         detMode = inspire::DETECT_MODE_LIGHT_TRACK;
@@ -138,11 +138,8 @@ HResult HFCreateInspireFaceSessionOptional(HOption customOption, HFDetectMode de
     if (customOption & HF_ENABLE_IR_LIVENESS) {
         param.enable_ir_liveness = true;
     }
-    if (customOption & HF_ENABLE_AGE_PREDICT) {
-        param.enable_age = true;
-    }
-    if (customOption & HF_ENABLE_GENDER_PREDICT) {
-        param.enable_gender = true;
+    if (customOption & HF_ENABLE_FACE_ATTRIBUTE) {
+        param.enable_face_attribute = true;
     }
     if (customOption & HF_ENABLE_MASK_DETECT) {
         param.enable_mask_detect = true;
@@ -281,6 +278,33 @@ HResult HFGetFaceBasicTokenSize(HPInt32 bufferSize) {
     return HSUCCEED;
 }

+HResult HFGetNumOfFaceDenseLandmark(HPInt32 num) {
+    *num = 106;
+    return HSUCCEED;
+}
+
+HResult HFGetFaceDenseLandmarkFromFaceToken(HFFaceBasicToken singleFace, HPoint2f* landmarks, HInt32 num) {
+    if (num != 106) {
+        return HERR_SESS_LANDMARK_NUM_NOT_MATCH;
+    }
+    inspire::FaceBasicData data;
+    data.dataSize = singleFace.size;
+    data.data = singleFace.data;
+    HyperFaceData face = {0};
+    HInt32 ret;
+    ret = DeserializeHyperFaceData((char* )data.data, data.dataSize, face);
+    if (ret != HSUCCEED) {
+        return ret;
+    }
+    for (size_t i = 0; i < num; i++)
+    {
+        landmarks[i].x = face.densityLandmark[i].x;
+        landmarks[i].y = face.densityLandmark[i].y;
+    }
+
+    return HSUCCEED;
+}
+
 HResult HFFeatureHubFaceSearchThresholdSetting(float threshold) {
     FEATURE_HUB->SetRecognitionThreshold(threshold);
     return HSUCCEED;
@@ -481,13 +505,13 @@ HResult HFMultipleFacePipelineProcess(HFSession session, HFImageStream streamHan
     }
     inspire::ContextCustomParameter param;
     param.enable_mask_detect = parameter.enable_mask_detect;
-    param.enable_age = parameter.enable_age;
+    param.enable_face_attribute = parameter.enable_face_quality;
     param.enable_liveness = parameter.enable_liveness;
     param.enable_face_quality = parameter.enable_face_quality;
-    param.enable_gender = parameter.enable_gender;
     param.enable_interaction_liveness = parameter.enable_interaction_liveness;
     param.enable_ir_liveness = parameter.enable_ir_liveness;
     param.enable_recognition = parameter.enable_recognition;
+    param.enable_face_attribute = parameter.enable_face_attribute;

     HResult ret;
     std::vector<inspire::HyperFaceData> data;
@@ -535,11 +559,8 @@ HResult HFMultipleFacePipelineProcessOptional(HFSession session, HFImageStream s
     if (customOption & HF_ENABLE_IR_LIVENESS) {
         param.enable_ir_liveness = true;
     }
-    if (customOption & HF_ENABLE_AGE_PREDICT) {
-        param.enable_age = true;
-    }
-    if (customOption & HF_ENABLE_GENDER_PREDICT) {
-        param.enable_gender = true;
+    if (customOption & HF_ENABLE_FACE_ATTRIBUTE) {
+        param.enable_face_attribute = true;
     }
     if (customOption & HF_ENABLE_MASK_DETECT) {
         param.enable_mask_detect = true;
@@ -549,7 +570,7 @@ HResult HFMultipleFacePipelineProcessOptional(HFSession session, HFImageStream s
         }
         if (customOption & HF_ENABLE_INTERACTION) {
             param.enable_interaction_liveness = true;
         }
     }

     HResult ret;
@@ -633,6 +654,38 @@ HResult HFFaceQualityDetect(HFSession session, HFFaceBasicToken singleFace, HFlo

 }

+HResult HFGetFaceIntereactionResult(HFSession session, PHFFaceIntereactionResult result) {
+    if (session == nullptr) {
+        return HERR_INVALID_CONTEXT_HANDLE;
+    }
+    HF_FaceAlgorithmSession *ctx = (HF_FaceAlgorithmSession* ) session;
+    if (ctx == nullptr) {
+        return HERR_INVALID_CONTEXT_HANDLE;
+    }
+    result->num = ctx->impl.GetFaceInteractionLeftEyeStatusCache().size();
+    result->leftEyeStatusConfidence = (HFloat* )ctx->impl.GetFaceInteractionLeftEyeStatusCache().data();
+    result->rightEyeStatusConfidence = (HFloat* )ctx->impl.GetFaceInteractionRightEyeStatusCache().data();
+
+    return HSUCCEED;
+}
+
+HResult HFGetFaceAttributeResult(HFSession session, PHFFaceAttributeResult results) {
+    if (session == nullptr) {
+        return HERR_INVALID_CONTEXT_HANDLE;
+    }
+    HF_FaceAlgorithmSession *ctx = (HF_FaceAlgorithmSession* ) session;
+    if (ctx == nullptr) {
+        return HERR_INVALID_CONTEXT_HANDLE;
+    }
+
+    results->num = ctx->impl.GetFaceAgeBracketResultsCache().size();
+    results->race = (HPInt32 )ctx->impl.GetFaceRaceResultsCache().data();
+    results->gender = (HPInt32 )ctx->impl.GetFaceGenderResultsCache().data();
+    results->ageBracket = (HPInt32 )ctx->impl.GetFaceAgeBracketResultsCache().data();
+
+    return HSUCCEED;
+}
+
 HResult HFFeatureHubGetFaceCount(HInt32* count) {
     *count = FEATURE_HUB->GetFaceFeatureCount();
     return HSUCCEED;
@@ -10,7 +10,7 @@
 #include "herror.h"

 #if defined(_WIN32)
-#ifdef HYPER_BUILD_SHARED_LIB
+#ifdef ISF_BUILD_SHARED_LIBS
 #define HYPER_CAPI_EXPORT __declspec(dllexport)
 #else
 #define HYPER_CAPI_EXPORT
@@ -29,8 +29,8 @@ extern "C" {
 #define HF_ENABLE_LIVENESS 0x00000004 ///< Flag to enable RGB liveness detection feature.
 #define HF_ENABLE_IR_LIVENESS 0x00000008 ///< Flag to enable IR (Infrared) liveness detection feature.
 #define HF_ENABLE_MASK_DETECT 0x00000010 ///< Flag to enable mask detection feature.
-#define HF_ENABLE_AGE_PREDICT 0x00000020 ///< Flag to enable age prediction feature.
-#define HF_ENABLE_GENDER_PREDICT 0x00000040 ///< Flag to enable gender prediction feature.
+#define HF_ENABLE_FACE_ATTRIBUTE 0x00000020 ///< Flag to enable face attribute prediction feature.
+#define HF_ENABLE_PLACEHOLDER_ 0x00000040 ///< -
 #define HF_ENABLE_QUALITY 0x00000080 ///< Flag to enable face quality assessment feature.
 #define HF_ENABLE_INTERACTION 0x00000100 ///< Flag to enable interaction feature.
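Note that `HF_ENABLE_FACE_ATTRIBUTE` reuses the bit formerly assigned to `HF_ENABLE_AGE_PREDICT`, while the old gender bit becomes a placeholder, so callers compose option masks the same way as before. A minimal caller-side sketch (hypothetical code, not part of this diff):

```cpp
// Hypothetical sketch: composing an option bitmask after the 1.1.4 flag change.
// The old age/gender bits are replaced by the single face-attribute bit;
// 0x00000040 is now an unused placeholder.
HOption option = HF_ENABLE_FACE_ATTRIBUTE   // age bracket, race, gender
               | HF_ENABLE_QUALITY          // face quality assessment
               | HF_ENABLE_INTERACTION;     // eye-state / blink detection
```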

@@ -125,9 +125,8 @@ typedef struct HFSessionCustomParameter {
     HInt32 enable_liveness; ///< Enable RGB liveness detection feature.
     HInt32 enable_ir_liveness; ///< Enable IR liveness detection feature.
     HInt32 enable_mask_detect; ///< Enable mask detection feature.
-    HInt32 enable_age; ///< Enable age prediction feature.
-    HInt32 enable_gender; ///< Enable gender prediction feature.
     HInt32 enable_face_quality; ///< Enable face quality detection feature.
+    HInt32 enable_face_attribute; ///< Enable face attribute prediction feature.
     HInt32 enable_interaction_liveness; ///< Enable interaction for liveness detection feature.
 } HFSessionCustomParameter, *PHFSessionCustomParameter;
@@ -149,7 +148,7 @@ typedef enum HFDetectMode {
 * @param detectMode Detection mode to be used.
 * @param maxDetectFaceNum Maximum number of faces to detect.
 * @param detectPixelLevel Modify the input resolution level of the detector, the larger the better,
-* the need to input a multiple of 160, such as 160, 320, 640, the default value -1 is 160.
+* it must be a multiple of 160 (such as 160, 320, 640); the default value -1 maps to 320.
 * @param trackByDetectModeFPS If you are using the MODE_TRACK_BY_DETECTION tracking mode,
 * this value is used to set the fps frame rate of your current incoming video stream, which defaults to -1 at 30fps.
 * @param handle Pointer to the context handle that will be returned.
@@ -298,6 +297,23 @@ HYPER_CAPI_EXPORT extern HResult HFCopyFaceBasicToken(HFFaceBasicToken token, HP
 */
 HYPER_CAPI_EXPORT extern HResult HFGetFaceBasicTokenSize(HPInt32 bufferSize);

+/**
+ * @brief Retrieve the number of dense facial landmarks.
+ * @param num Number of dense facial landmarks
+ * @return HResult indicating the success or failure of the operation.
+ */
+HYPER_CAPI_EXPORT extern HResult HFGetNumOfFaceDenseLandmark(HPInt32 num);
+
+/**
+ * @brief When you pass in a valid facial token, you can retrieve a set of dense facial landmarks.
+ *        The memory for the dense landmarks must be allocated by you.
+ * @param singleFace Basic token representing a single face.
+ * @param landmarks Pre-allocated memory address of the array for 2D floating-point coordinates.
+ * @param num Number of landmark points
+ * @return HResult indicating the success or failure of the operation.
+ */
+HYPER_CAPI_EXPORT extern HResult HFGetFaceDenseLandmarkFromFaceToken(HFFaceBasicToken singleFace, HPoint2f* landmarks, HInt32 num);

 /************************************************************************
 * Face Recognition
 ************************************************************************/
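A minimal usage sketch for the two new landmark calls (hypothetical caller code; `token` is assumed to come from a prior detection call, and error handling is elided):

```cpp
// Hypothetical sketch: the caller allocates the landmark buffer, as required
// by the doc comment above.
HInt32 numLandmarks = 0;
HFGetNumOfFaceDenseLandmark(&numLandmarks);        // currently always 106
std::vector<HPoint2f> landmarks(numLandmarks);     // caller-owned memory
HResult ret = HFGetFaceDenseLandmarkFromFaceToken(token, landmarks.data(), numLandmarks);
```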

@@ -618,6 +634,59 @@ HYPER_CAPI_EXPORT extern HResult HFGetFaceQualityConfidence(HFSession session, P
 */
 HYPER_CAPI_EXPORT extern HResult HFFaceQualityDetect(HFSession session, HFFaceBasicToken singleFace, HFloat *confidence);


+/**
+ * @brief Some facial states in the face interaction module.
+ */
+typedef struct HFFaceIntereactionResult {
+    HInt32 num; ///< Number of faces detected.
+    HPFloat leftEyeStatusConfidence; ///< Left eye state: confidence close to 1 means open, close to 0 means closed.
+    HPFloat rightEyeStatusConfidence; ///< Right eye state: confidence close to 1 means open, close to 0 means closed.
+} HFFaceIntereactionResult, *PHFFaceIntereactionResult;
+
+HYPER_CAPI_EXPORT extern HResult HFGetFaceIntereactionResult(HFSession session, PHFFaceIntereactionResult result);
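A sketch of reading the interaction result (hypothetical caller code; the 0.5 threshold is an assumption, the header only states that confidence is near 1 for open and near 0 for closed):

```cpp
// Hypothetical sketch: reading per-face eye-open confidences after running the
// pipeline with HF_ENABLE_INTERACTION enabled. The arrays point into
// session-owned caches, so read them before the next pipeline call.
HFFaceIntereactionResult interaction = {};
if (HFGetFaceIntereactionResult(session, &interaction) == HSUCCEED) {
    for (HInt32 i = 0; i < interaction.num; ++i) {
        bool leftOpen  = interaction.leftEyeStatusConfidence[i]  > 0.5f;  // assumed threshold
        bool rightOpen = interaction.rightEyeStatusConfidence[i] > 0.5f;  // assumed threshold
    }
}
```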

+/**
+ * @brief Struct representing face attribute results.
+ *
+ * This struct holds the race, gender, and age bracket attributes for a detected face.
+ */
+typedef struct HFFaceAttributeResult {
+    HInt32 num;         ///< Number of faces detected.
+    HPInt32 race;       ///< Race of the detected face.
+                        ///< 0: Black;
+                        ///< 1: Asian;
+                        ///< 2: Latino/Hispanic;
+                        ///< 3: Middle Eastern;
+                        ///< 4: White;
+    HPInt32 gender;     ///< Gender of the detected face.
+                        ///< 0: Female;
+                        ///< 1: Male;
+    HPInt32 ageBracket; ///< Age bracket of the detected face.
+                        ///< 0: 0-2 years old;
+                        ///< 1: 3-9 years old;
+                        ///< 2: 10-19 years old;
+                        ///< 3: 20-29 years old;
+                        ///< 4: 30-39 years old;
+                        ///< 5: 40-49 years old;
+                        ///< 6: 50-59 years old;
+                        ///< 7: 60-69 years old;
+                        ///< 8: more than 70 years old;
+} HFFaceAttributeResult, *PHFFaceAttributeResult;
+
+/**
+ * @brief Get the face attribute results.
+ *
+ * This function retrieves the attribute results such as race, gender, and age bracket
+ * for faces detected in the current context.
+ *
+ * @param session Handle to the session.
+ * @param results Pointer to the structure where face attribute results will be stored.
+ * @return HResult indicating the success or failure of the operation.
+ */
+HYPER_CAPI_EXPORT extern HResult HFGetFaceAttributeResult(HFSession session, PHFFaceAttributeResult results);
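And a matching sketch for the attribute results (hypothetical caller code; index meanings follow the struct comments above):

```cpp
// Hypothetical sketch: reading attribute results after a pipeline run with
// HF_ENABLE_FACE_ATTRIBUTE enabled. The arrays are indexed per detected face.
HFFaceAttributeResult attrs = {};
if (HFGetFaceAttributeResult(session, &attrs) == HSUCCEED) {
    for (HInt32 i = 0; i < attrs.num; ++i) {
        HInt32 race       = attrs.race[i];        // 0: Black ... 4: White
        HInt32 gender     = attrs.gender[i];      // 0: Female, 1: Male
        HInt32 ageBracket = attrs.ageBracket[i];  // 0: 0-2 ... 8: 70+
    }
}
```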


 /************************************************************************
 * System Function
 ************************************************************************/
@@ -31,5 +31,9 @@ typedef struct HFaceRect {
     HInt32 height; ///< Height of the rectangle.
 } HFaceRect; ///< Rectangle representing a face region.

+typedef struct HPoint2f{
+    HFloat x; ///< X-coordinate
+    HFloat y; ///< Y-coordinate
+} HPoint2f;
+
 #endif //HYPERFACEREPO_INTYPEDEF_H
@@ -96,6 +96,15 @@ inline HyperFaceData INSPIRE_API FaceObjectToHyperFaceData(const FaceObject& obj
     data.face3DAngle.pitch = obj.high_result.pitch;
     data.face3DAngle.roll = obj.high_result.roll;
     data.face3DAngle.yaw = obj.high_result.yaw;

+    const auto &lmk = obj.landmark_smooth_aux_.back();
+    for (size_t i = 0; i < lmk.size(); i++)
+    {
+        data.densityLandmark[i].x = lmk[i].x;
+        data.densityLandmark[i].y = lmk[i].y;
+    }
+
     return data;
 }
@@ -57,15 +57,16 @@ typedef struct TransMatrix {
 * Struct to represent hyper face data.
 */
 typedef struct HyperFaceData {
-    int trackState; ///< Track state
-    int inGroupIndex; ///< Index within a group
-    int trackId; ///< Track ID
-    int trackCount; ///< Track count
-    FaceRect rect; ///< Face rectangle
-    TransMatrix trans; ///< Transformation matrix
-    Point2F keyPoints[5]; ///< Key points (e.g., landmarks)
-    Face3DAngle face3DAngle; ///< 3D face angles
-    float quality[5]; ///< Quality values for key points
+    int trackState; ///< Track state
+    int inGroupIndex; ///< Index within a group
+    int trackId; ///< Track ID
+    int trackCount; ///< Track count
+    FaceRect rect; ///< Face rectangle
+    TransMatrix trans; ///< Transformation matrix
+    Point2F keyPoints[5]; ///< Key points (e.g., landmarks)
+    Face3DAngle face3DAngle; ///< 3D face angles
+    float quality[5]; ///< Quality values for key points
+    Point2F densityLandmark[106]; ///< Face density landmark
 } HyperFaceData;

 } // namespace inspire
@@ -312,6 +312,10 @@ public:
         face_id_ = id;
     }

+    std::vector<float> left_eye_status_;
+
+    std::vector<float> right_eye_status_;
+
 private:
     TRACK_STATE tracking_state_;
     // std::shared_ptr<FaceAction> face_action_;
@@ -42,8 +42,7 @@ int32_t FaceContext::Configuration(DetectMode detect_mode,
                                    INSPIRE_LAUNCH->getMArchive(),
                                    param.enable_liveness,
                                    param.enable_mask_detect,
-                                   param.enable_age,
-                                   param.enable_gender,
+                                   param.enable_face_attribute,
                                    param.enable_interaction_liveness
     );
@@ -62,6 +61,11 @@ int32_t FaceContext::FaceDetectAndTrack(CameraStream &image) {
     m_yaw_results_cache_.clear();
     m_pitch_results_cache_.clear();
     m_quality_score_results_cache_.clear();
+    m_react_left_eye_results_cache_.clear();
+    m_react_right_eye_results_cache_.clear();
+    m_quality_score_results_cache_.clear();
+    m_attribute_race_results_cache_.clear();
+    m_attribute_gender_results_cache_.clear();
     if (m_face_track_ == nullptr) {
         return HERR_SESS_TRACKER_FAILURE;
     }
@@ -129,6 +133,11 @@ int32_t FaceContext::FacesProcess(CameraStream &image, const std::vector<HyperFa
     std::lock_guard<std::mutex> lock(m_mtx_);
     m_mask_results_cache_.resize(faces.size(), -1.0f);
     m_rgb_liveness_results_cache_.resize(faces.size(), -1.0f);
+    m_react_left_eye_results_cache_.resize(faces.size(), -1.0f);
+    m_react_right_eye_results_cache_.resize(faces.size(), -1.0f);
+    m_attribute_race_results_cache_.resize(faces.size(), -1);
+    m_attribute_gender_results_cache_.resize(faces.size(), -1);
+    m_attribute_age_results_cache_.resize(faces.size(), -1);
     for (int i = 0; i < faces.size(); ++i) {
         const auto &face = faces[i];
         // RGB Liveness Detect
@@ -147,19 +156,48 @@ int32_t FaceContext::FacesProcess(CameraStream &image, const std::vector<HyperFa
             }
             m_mask_results_cache_[i] = m_face_pipeline_->faceMaskCache;
         }
-        // Age prediction
-        if (param.enable_age) {
-            auto ret = m_face_pipeline_->Process(image, face, PROCESS_AGE);
+        // Face attribute prediction
+        if (param.enable_face_attribute) {
+            auto ret = m_face_pipeline_->Process(image, face, PROCESS_ATTRIBUTE);
             if (ret != HSUCCEED) {
                 return ret;
             }
+            m_attribute_race_results_cache_[i] = m_face_pipeline_->faceAttributeCache[0];
+            m_attribute_gender_results_cache_[i] = m_face_pipeline_->faceAttributeCache[1];
+            m_attribute_age_results_cache_[i] = m_face_pipeline_->faceAttributeCache[2];
         }
-        // Gender prediction
-        if (param.enable_age) {
-            auto ret = m_face_pipeline_->Process(image, face, PROCESS_GENDER);

+        // Face interaction
+        if (param.enable_interaction_liveness) {
+            auto ret = m_face_pipeline_->Process(image, face, PROCESS_INTERACTION);
+            if (ret != HSUCCEED) {
+                return ret;
+            }
+            // Get eyes status
+            m_react_left_eye_results_cache_[i] = m_face_pipeline_->eyesStatusCache[0];
+            m_react_right_eye_results_cache_[i] = m_face_pipeline_->eyesStatusCache[1];
+            // Special handling: if it is in a tracking state, it needs to be filtered
+            if (face.trackState > 0)
+            {
+                auto idx = face.inGroupIndex;
+                if (idx < m_face_track_->trackingFace.size()) {
+                    auto& target = m_face_track_->trackingFace[idx];
+                    if (target.GetTrackingId() == face.trackId) {
+                        auto new_eye_left = EmaFilter(m_face_pipeline_->eyesStatusCache[0], target.left_eye_status_, 8, 0.2f);
+                        auto new_eye_right = EmaFilter(m_face_pipeline_->eyesStatusCache[1], target.right_eye_status_, 8, 0.2f);
+                        if (face.trackState > 1) {
+                            // The filtered value can be obtained only in the tracking state
+                            m_react_left_eye_results_cache_[i] = new_eye_left;
+                            m_react_right_eye_results_cache_[i] = new_eye_right;
+                        }
+
+                    } else {
+                        INSPIRE_LOGD("Serialized objects cannot connect to trace objects in memory, and there may be some problems");
+                    }
+                } else {
+                    INSPIRE_LOGW("The index of the trace object does not match the trace list in memory, and there may be some problems");
+                }
+            }
+        }

     }
@@ -212,11 +250,30 @@ const std::vector<float>& FaceContext::GetFaceQualityScoresResultsCache() const
     return m_quality_score_results_cache_;
 }

+const std::vector<float>& FaceContext::GetFaceInteractionLeftEyeStatusCache() const {
+    return m_react_left_eye_results_cache_;
+}
+
+const std::vector<float>& FaceContext::GetFaceInteractionRightEyeStatusCache() const {
+    return m_react_right_eye_results_cache_;
+}
+
 const Embedded& FaceContext::GetFaceFeatureCache() const {
     return m_face_feature_cache_;
 }

+const std::vector<int>& FaceContext::GetFaceRaceResultsCache() const {
+    return m_attribute_race_results_cache_;
+}
+
+const std::vector<int>& FaceContext::GetFaceGenderResultsCache() const {
+    return m_attribute_gender_results_cache_;
+}
+
+const std::vector<int>& FaceContext::GetFaceAgeBracketResultsCache() const {
+    return m_attribute_age_results_cache_;
+}
+
 int32_t FaceContext::FaceFeatureExtract(CameraStream &image, FaceBasicData& data) {
     std::lock_guard<std::mutex> lock(m_mtx_);
     int32_t ret;
@@ -37,8 +37,7 @@ typedef struct CustomPipelineParameter {
     bool enable_liveness = false; ///< Enable RGB liveness detection feature
     bool enable_ir_liveness = false; ///< Enable IR (Infrared) liveness detection feature
     bool enable_mask_detect = false; ///< Enable mask detection feature
-    bool enable_age = false; ///< Enable age prediction feature
-    bool enable_gender = false; ///< Enable gender prediction feature
+    bool enable_face_attribute = false; ///< Enable face attribute prediction feature
     bool enable_face_quality = false; ///< Enable face quality assessment feature
     bool enable_interaction_liveness = false; ///< Enable interactive liveness detection feature
@@ -232,6 +231,36 @@ public:
     */
     const std::vector<float>& GetFaceQualityScoresResultsCache() const;

+    /**
+     * @brief Gets the cache of left eye status predict results.
+     * @return A const reference to a vector containing eye status predict results.
+     */
+    const std::vector<float>& GetFaceInteractionLeftEyeStatusCache() const;
+
+    /**
+     * @brief Gets the cache of right eye status predict results.
+     * @return A const reference to a vector containing eye status predict results.
+     */
+    const std::vector<float>& GetFaceInteractionRightEyeStatusCache() const;
+
+    /**
+     * @brief Gets the cache of face attribute race results.
+     * @return A const reference to a vector containing face attribute race results.
+     */
+    const std::vector<int>& GetFaceRaceResultsCache() const;
+
+    /**
+     * @brief Gets the cache of face attribute gender results.
+     * @return A const reference to a vector containing face attribute gender results.
+     */
+    const std::vector<int>& GetFaceGenderResultsCache() const;
+
+    /**
+     * @brief Gets the cache of face attribute age bracket results.
+     * @return A const reference to a vector containing face attribute age bracket results.
+     */
+    const std::vector<int>& GetFaceAgeBracketResultsCache() const;
+
     /**
     * @brief Gets the cache of the current face features.
     * @return A const reference to the Embedded object containing current face feature data.
@@ -263,6 +292,11 @@ private:
     std::vector<float> m_mask_results_cache_; ///< Cache for mask detection results
     std::vector<float> m_rgb_liveness_results_cache_; ///< Cache for RGB liveness detection results
     std::vector<float> m_quality_score_results_cache_; ///< Cache for RGB face quality score results
+    std::vector<float> m_react_left_eye_results_cache_; ///< Cache for left eye state in face interaction
+    std::vector<float> m_react_right_eye_results_cache_; ///< Cache for right eye state in face interaction
+    std::vector<int> m_attribute_race_results_cache_; ///< Cache for face attribute race results
+    std::vector<int> m_attribute_gender_results_cache_; ///< Cache for face attribute gender results
+    std::vector<int> m_attribute_age_results_cache_; ///< Cache for face attribute age bracket results
     Embedded m_face_feature_cache_; ///< Cache for current face feature data

     std::mutex m_mtx_; ///< Mutex for thread safety.
@@ -27,6 +27,7 @@
 #define HERR_SESS_TRACKER_FAILURE (HERR_SESS_BASE+3) // Tracker module not initialized
 #define HERR_SESS_INVALID_RESOURCE (HERR_SESS_BASE+10) // Invalid static resource
 #define HERR_SESS_NUM_OF_MODELS_NOT_MATCH (HERR_SESS_BASE+11) // Number of models does not match
+#define HERR_SESS_LANDMARK_NUM_NOT_MATCH (HERR_SESS_BASE+20) // The number of input landmark points does not match

 #define HERR_SESS_PIPELINE_FAILURE (HERR_SESS_BASE+8) // Pipeline module not initialized
@@ -7,6 +7,6 @@

 #define INSPIRE_FACE_VERSION_MAJOR_STR "1"
 #define INSPIRE_FACE_VERSION_MINOR_STR "1"
-#define INSPIRE_FACE_VERSION_PATCH_STR "2"
+#define INSPIRE_FACE_VERSION_PATCH_STR "4"

 #endif //HYPERFACEREPO_INFORMATION_H
@@ -249,6 +249,15 @@ int32_t InferenceHelperMnn::PreProcess(const std::vector<InputTensorInfo>& input
         /* Convert color type */
         // LOGD("input_tensor_info.image_info.channel: %d", input_tensor_info.image_info.channel);
         // LOGD("input_tensor_info.GetChannel(): %d", input_tensor_info.GetChannel());

+        // !!!!!! BUG !!!!!!!!!
+        // When initializing, setting the image channel to 3 and the tensor channel to 1,
+        // and configuring the processing to convert the color image to grayscale may cause some bugs.
+        // For example, the image channel might automatically change to 1.
+        // This issue has not been fully investigated,
+        // so it's necessary to manually convert the image to grayscale before input.
+        // !!!!!! BUG !!!!!!!!!
+
         if ((input_tensor_info.image_info.channel == 3) && (input_tensor_info.GetChannel() == 3)) {
             image_processconfig.sourceFormat = (input_tensor_info.image_info.is_bgr) ? MNN::CV::BGR : MNN::CV::RGB;
             if (input_tensor_info.image_info.swap_color) {
@@ -481,11 +481,9 @@ inline cv::Mat ScaleAffineMatrix(const cv::Mat &affine, float scale,
     return m;
 }

-template<typename T>
-inline int ArgMax(const std::vector<T> data, int start, int end) {
-    int diff = std::max_element(data.begin() + start, data.begin() + end) -
-               (data.begin() + start);
-    return diff;
+template<class ForwardIterator>
+inline size_t argmax(ForwardIterator first, ForwardIterator last) {
+    return std::distance(first, std::max_element(first, last));
 }

 inline void RotPoints(std::vector<cv::Point2f> &pts, float angle) {
@@ -650,6 +648,52 @@ inline bool isShortestSideGreaterThan(const cv::Rect_<T>& rect, T value, float s
     return shortestSide > value;
 }

+/**
+ * @brief Computes the affine transformation matrix for face cropping.
+ * @param rect Rectangle representing the face in the image.
+ * @param width Width of the output crop.
+ * @param height Height of the output crop.
+ * @return cv::Mat The computed affine transformation matrix.
+ */
+inline cv::Mat ComputeCropMatrix(const cv::Rect2f &rect, int width, int height) {
+    float x = rect.x;
+    float y = rect.y;
+    float w = rect.width;
+    float h = rect.height;
+    float cx = x + w / 2;
+    float cy = y + h / 2;
+    float length = std::max(w, h) * 1.5 / 2;
+    float x1 = cx - length;
+    float y1 = cy - length;
+    float x2 = cx + length;
+    float y2 = cy + length;
+    cv::Rect2f padding_rect(x1, y1, x2 - x1, y2 - y1);
+    std::vector<cv::Point2f> rect_pts = Rect2Points(padding_rect);
+    rect_pts.erase(rect_pts.end() - 1);
+    std::vector<cv::Point2f> dst_pts = {{0, 0}, {(float )width, 0}, {(float )width, (float )height}};
+    cv::Mat m = cv::getAffineTransform(rect_pts, dst_pts);
+
+    return m;
+}
+
+// Exponential Moving Average (EMA) filter function
+inline float EmaFilter(float currentProb, std::vector<float>& history, int max, float alpha = 0.2f) {
+    // Add current probability to history
+    history.push_back(currentProb);
+
+    // Trim history if it exceeds max size
+    if (history.size() > max) {
+        history.erase(history.begin(), history.begin() + (history.size() - max));
+    }
+
+    // Compute EMA
+    float ema = history[0]; // Initial value
+    for (size_t i = 1; i < history.size(); ++i) {
+        ema = alpha * history[i] + (1 - alpha) * ema;
+    }
+
+    return ema;
+}
+
 } // namespace inspire

 #endif
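A minimal usage sketch for `EmaFilter` (hypothetical values): smoothing a noisy stream of per-frame eye-open confidences, which is how the face pipeline uses it with `FaceObject::left_eye_status_` / `right_eye_status_`:

```cpp
// Minimal sketch with made-up confidences: the history vector is per-track
// state kept across frames; max=8 and alpha=0.2f match the pipeline's call.
std::vector<float> history;
float smoothed = 0.0f;
for (float raw : {0.9f, 0.8f, 0.1f, 0.85f}) {
    smoothed = inspire::EmaFilter(raw, history, /*max=*/8, /*alpha=*/0.2f);
}
```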

@@ -1,5 +0,0 @@
-//
-// Created by Tunm-Air13 on 2023/9/8.
-//
-
-#include "age_predict.h"
@@ -1,14 +0,0 @@
-//
-// Created by Tunm-Air13 on 2023/9/8.
-//
-#pragma once
-#ifndef HYPERFACEREPO_AGEPREDICT_H
-#define HYPERFACEREPO_AGEPREDICT_H
-
-
-class AgePredict {
-
-};
-
-
-#endif //HYPERFACEREPO_AGEPREDICT_H
@@ -6,7 +6,6 @@
 #define HYPERFACEREPO_ATTRIBUTE_ALL_H

 #include "mask_predict.h"
-#include "gender_predict.h"
-#include "age_predict.h"
+#include "face_attribute.h"

 #endif //HYPERFACEREPO_ATTRIBUTE_ALL_H
@@ -0,0 +1,41 @@
+//
+// Created by Tunm-Air13 on 2023/9/8.
+//
+
+#include "face_attribute.h"
+#include "middleware/utils.h"
+
+namespace inspire {
+
+FaceAttributePredict::FaceAttributePredict(): AnyNet("FaceAttributePredict") {}
+
+std::vector<int> FaceAttributePredict::operator()(const Matrix& bgr_affine) {
+    AnyTensorOutputs outputs;
+    Forward(bgr_affine, outputs);
+    // cv::imshow("w", bgr_affine);
+    // cv::waitKey(0);
+
+    std::vector<float> &raceOut = outputs[0].second;
+    std::vector<float> &genderOut = outputs[1].second;
+    std::vector<float> &ageOut = outputs[2].second;
+
+    // for(int i = 0; i < raceOut.size(); i++) {
+    //     std::cout << raceOut[i] << ", ";
+    // }
+    // std::cout << std::endl;
+
+    auto raceIdx = argmax(raceOut.begin(), raceOut.end());
+    auto genderIdx = argmax(genderOut.begin(), genderOut.end());
+    auto ageIdx = argmax(ageOut.begin(), ageOut.end());
+
+    std::string raceLabel = m_original_labels_[raceIdx];
+    std::string simplifiedLabel = m_label_map_.at(raceLabel);
+    int simplifiedRaceIdx = m_simplified_label_index_.at(simplifiedLabel);
+
+    // std::cout << raceLabel << std::endl;
+    // std::cout << simplifiedLabel << std::endl;
+
+    return {simplifiedRaceIdx, 1 - (int )genderIdx, (int )ageIdx};
+}
+
+} // namespace inspire
@@ -0,0 +1,69 @@
+//
+// Created by Tunm-Air13 on 2023/9/8.
+//
+#pragma once
+#ifndef HYPERFACEREPO_GENDERPREDICT_H
+#define HYPERFACEREPO_GENDERPREDICT_H
+#include "data_type.h"
+#include "middleware/any_net.h"
+
+namespace inspire {
+
+/**
+ * @class FaceAttributePredict
+ * @brief Extracts three classifications from a face image: age, gender, and race.
+ *
+ * This class inherits from AnyNet and provides methods for performing face attribute prediction.
+ */
+class INSPIRE_API FaceAttributePredict : public AnyNet {
+public:
+    /**
+     * @brief Constructor for FaceAttributePredict class.
+     */
+    FaceAttributePredict();
+
+    /**
+     * @brief Exec infer.
+     *
+     * @param bgr_affine The BGR affine matrix to perform attribute prediction on.
+     * @return The multi-list attribute prediction result.
+     */
+    std::vector<int> operator()(const Matrix& bgr_affine);
+
+private:
+    // Define primitive tag
+    const std::vector<std::string> m_original_labels_ = {
+        "Black", "East Asian", "Indian", "Latino_Hispanic", "Middle Eastern", "Southeast Asian", "White"
+    };
+
+    // Define simplified labels
+    const std::vector<std::string> m_simplified_labels_ = {
+        "Black", "Asian", "Latino/Hispanic", "Middle Eastern", "White"
+    };
+
+    // Define the mapping from the original tag to the simplified tag
+    const std::unordered_map<std::string, std::string> m_label_map_ = {
+        {"Black", "Black"},
+        {"East Asian", "Asian"},
+        {"Indian", "Asian"},
+        {"Latino_Hispanic", "Latino/Hispanic"},
+        {"Middle Eastern", "Middle Eastern"},
+        {"Southeast Asian", "Asian"},
+        {"White", "White"}
+    };
+
+    // Define index maps for simplified labels
+    const std::unordered_map<std::string, int> m_simplified_label_index_ = {
+        {"Black", 0},
+        {"Asian", 1},
+        {"Latino/Hispanic", 2},
+        {"Middle Eastern", 3},
+        {"White", 4}
+    };
+
+};
+
+} // namespace inspire
+
+#endif //HYPERFACEREPO_GENDERPREDICT_H
@@ -1,6 +0,0 @@
-//
-// Created by Tunm-Air13 on 2023/9/8.
-//
-
-#include "gender_predict.h"
-
@@ -1,14 +0,0 @@
-//
-// Created by Tunm-Air13 on 2023/9/8.
-//
-#pragma once
-#ifndef HYPERFACEREPO_GENDERPREDICT_H
-#define HYPERFACEREPO_GENDERPREDICT_H
-
-
-class GenderPredict {
-
-};
-
-
-#endif //HYPERFACEREPO_GENDERPREDICT_H
@@ -7,35 +7,31 @@
 #include "log.h"
 #include "track_module/landmark/face_landmark.h"
 #include "recognition_module/extract/alignment.h"
 #include "middleware/utils.h"
 #include "herror.h"

 namespace inspire {

-FacePipeline::FacePipeline(InspireArchive &archive, bool enableLiveness, bool enableMaskDetect, bool enableAge,
-                           bool enableGender, bool enableInteractionLiveness)
+FacePipeline::FacePipeline(InspireArchive &archive, bool enableLiveness, bool enableMaskDetect, bool enableAttribute,
+                           bool enableInteractionLiveness)
         : m_enable_liveness_(enableLiveness),
           m_enable_mask_detect_(enableMaskDetect),
-          m_enable_age_(enableAge),
-          m_enable_gender_(enableGender),
+          m_enable_attribute_(enableAttribute),
          m_enable_interaction_liveness_(enableInteractionLiveness) {

-    if (m_enable_age_) {
-        InspireModel ageModel;
-        auto ret = InitAgePredict(ageModel);
+    if (m_enable_attribute_) {
+        InspireModel attrModel;
+        auto ret = archive.LoadModel("face_attribute", attrModel);
+        if (ret != 0) {
+            INSPIRE_LOGE("Load Face attribute model: %d", ret);
+        }
+
+        ret = InitFaceAttributePredict(attrModel);
         if (ret != 0) {
             INSPIRE_LOGE("InitAgePredict error.");
         }
     }

-    // Initialize the gender prediction model (assuming Index is 0)
-    if (m_enable_gender_) {
-        InspireModel genderModel;
-        auto ret = InitGenderPredict(genderModel);
-        if (ret != 0) {
-            INSPIRE_LOGE("InitGenderPredict error.");
-        }
-    }
-
     // Initialize the mask detection model
     if (m_enable_mask_detect_) {
         InspireModel maskModel;
@@ -62,12 +58,17 @@ FacePipeline::FacePipeline(InspireArchive &archive, bool enableLiveness, bool en
         }
     }

-    // Initializing the model for in-vivo detection (assuming Index is 0)
+    // There may be a combination of algorithms for facial interaction
     if (m_enable_interaction_liveness_) {
-        InspireModel actLivenessModel;
-        auto ret = InitLivenessInteraction(actLivenessModel);
+        // Blink model
+        InspireModel blinkModel;
+        auto ret = archive.LoadModel("blink_predict", blinkModel);
         if (ret != 0) {
-            INSPIRE_LOGE("InitLivenessInteraction error.");
+            INSPIRE_LOGE("Load Blink model error.");
+        }
+        ret = InitBlinkFromLivenessInteraction(blinkModel);
+        if (ret != 0) {
+            INSPIRE_LOGE("InitBlinkFromLivenessInteraction error.");
         }
     }
@@ -75,6 +76,8 @@ FacePipeline::FacePipeline(InspireArchive &archive, bool enableLiveness, bool en


 int32_t FacePipeline::Process(CameraStream &image, const HyperFaceData &face, FaceProcessFunction proc) {
+    cv::Mat originImage;
+    cv::Mat crop112;
     switch (proc) {
         case PROCESS_MASK: {
             if (m_mask_predict_ == nullptr) {
@@ -91,12 +94,16 @@ int32_t FacePipeline::Process(CameraStream &image, const HyperFaceData &face, Fa
             // }
             // cv::imshow("wqwe", img);
             // cv::waitKey(0);
-            auto trans = getTransformMatrix112(pointsFive);
-            trans.convertTo(trans, CV_64F);
-            auto crop = image.GetAffineRGBImage(trans, 112, 112);
+            if (crop112.empty())
+            {
+                auto trans = getTransformMatrix112(pointsFive);
+                trans.convertTo(trans, CV_64F);
+                crop112 = image.GetAffineRGBImage(trans, 112, 112);
+            }

             // cv::imshow("wq", crop);
             // cv::waitKey(0);
-            auto mask_score = (*m_mask_predict_)(crop);
+            auto mask_score = (*m_mask_predict_)(crop112);
             faceMaskCache = mask_score;
             break;
         }
@@ -107,11 +114,12 @@ int32_t FacePipeline::Process(CameraStream &image, const HyperFaceData &face, Fa
             // auto trans27 = getTransformMatrixSafas(pointsFive);
             // trans27.convertTo(trans27, CV_64F);
             // auto align112x27 = image.GetAffineRGBImage(trans27, 112, 112);

-            auto img = image.GetScaledImage(1.0, true);
+            if (originImage.empty()) {
+                originImage = image.GetScaledImage(1.0, true);
+            }
             cv::Rect oriRect(face.rect.x, face.rect.y, face.rect.width, face.rect.height);
-            auto rect = GetNewBox(img.cols, img.rows, oriRect, 2.7f);
-            auto crop = img(rect);
+            auto rect = GetNewBox(originImage.cols, originImage.rows, oriRect, 2.7f);
+            auto crop = originImage(rect);
             // cv::imwrite("crop.jpg", crop);
             auto score = (*m_rgb_anti_spoofing_)(crop);
             // auto i = cv::imread("zsb.jpg");
@@ -119,16 +127,45 @@ int32_t FacePipeline::Process(CameraStream &image, const HyperFaceData &face, Fa
             faceLivenessCache = score;
             break;
         }
-        case PROCESS_AGE: {
-            if (m_age_predict_ == nullptr) {
+        case PROCESS_INTERACTION: {
+            if (m_blink_predict_ == nullptr) {
                 return HERR_SESS_PIPELINE_FAILURE; // uninitialized
             }
+            if (originImage.empty()) {
+                originImage = image.GetScaledImage(1.0, true);
+            }
+            std::vector<std::vector<int>> order_list = {HLMK_LEFT_EYE_POINTS_INDEX, HLMK_RIGHT_EYE_POINTS_INDEX};
+            eyesStatusCache = {0, 0};
+            for (size_t i = 0; i < order_list.size(); i++)
+            {
+                const auto &index = order_list[i];
+                std::vector<cv::Point2f> points;
+                for (const auto &idx: index)
+                {
+                    points.emplace_back(face.densityLandmark[idx].x, face.densityLandmark[idx].y);
+                }
+                cv::Rect2f rect = cv::boundingRect(points);
+                auto affine_scale = ComputeCropMatrix(rect, BlinkPredict::BLINK_EYE_INPUT_SIZE, BlinkPredict::BLINK_EYE_INPUT_SIZE);
+                affine_scale.convertTo(affine_scale, CV_64F);
+                auto pre_crop = image.GetAffineRGBImage(affine_scale, BlinkPredict::BLINK_EYE_INPUT_SIZE, BlinkPredict::BLINK_EYE_INPUT_SIZE);
+                auto eyeStatus = (*m_blink_predict_)(pre_crop);
+                eyesStatusCache[i] = eyeStatus;
+            }
+            break;
+        }
-        case PROCESS_GENDER: {
-            if (m_gender_predict_ == nullptr) {
+        case PROCESS_ATTRIBUTE: {
+            if (m_attribute_predict_ == nullptr) {
                 return HERR_SESS_PIPELINE_FAILURE; // uninitialized
             }
+            std::vector<cv::Point2f> pointsFive;
+            for (const auto &p: face.keyPoints) {
+                pointsFive.push_back(HPointToPoint2f(p));
+            }
+            auto trans = getTransformMatrix112(pointsFive);
+            trans.convertTo(trans, CV_64F);
+            auto crop = image.GetAffineRGBImage(trans, 112, 112);
+            auto outputs = (*m_attribute_predict_)(crop);
+            faceAttributeCache = cv::Vec3i(outputs[0], outputs[1], outputs[2]);
+            break;
+        }
     }
@@ -173,17 +210,16 @@ int32_t FacePipeline::Process(CameraStream &image, FaceObject &face) {
     return HSUCCEED;
 }


-int32_t FacePipeline::InitAgePredict(InspireModel &) {
-
-    return 0;
+int32_t FacePipeline::InitFaceAttributePredict(InspireModel &model) {
+    m_attribute_predict_ = std::make_shared<FaceAttributePredict>();
+    auto ret = m_attribute_predict_->loadData(model, model.modelType);
+    if (ret != InferenceHelper::kRetOk) {
+        return HERR_ARCHIVE_LOAD_FAILURE;
+    }
+    return HSUCCEED;
 }


-int32_t FacePipeline::InitGenderPredict(InspireModel &model) {
-    return 0;
-}
-
 int32_t FacePipeline::InitMaskPredict(InspireModel &model) {
     m_mask_predict_ = std::make_shared<MaskPredict>();
     auto ret = m_mask_predict_->loadData(model, model.modelType);
@@ -203,8 +239,13 @@ int32_t FacePipeline::InitRBGAntiSpoofing(InspireModel &model) {
     return HSUCCEED;
 }

-int32_t FacePipeline::InitLivenessInteraction(InspireModel &model) {
-    return 0;
+int32_t FacePipeline::InitBlinkFromLivenessInteraction(InspireModel &model) {
+    m_blink_predict_ = std::make_shared<BlinkPredict>();
+    auto ret = m_blink_predict_->loadData(model, model.modelType);
+    if (ret != InferenceHelper::kRetOk) {
+        return HERR_ARCHIVE_LOAD_FAILURE;
+    }
+    return HSUCCEED;
 }

 const std::shared_ptr<RBGAntiSpoofing> &FacePipeline::getMRgbAntiSpoofing() const {
@@ -21,8 +21,8 @@ namespace inspire {
 typedef enum FaceProcessFunction {
     PROCESS_MASK = 0, ///< Mask detection.
     PROCESS_RGB_LIVENESS, ///< RGB liveness detection.
-    PROCESS_AGE, ///< Age estimation.
-    PROCESS_GENDER, ///< Gender prediction.
+    PROCESS_ATTRIBUTE, ///< Face attribute estimation.
+    PROCESS_INTERACTION, ///< Face interaction.
 } FaceProcessFunction;

 /**
@@ -40,12 +40,11 @@ public:
     * @param archive Model archive instance for model loading.
     * @param enableLiveness Whether RGB liveness detection is enabled.
     * @param enableMaskDetect Whether mask detection is enabled.
-    * @param enableAge Whether age estimation is enabled.
-    * @param enableGender Whether gender prediction is enabled.
+    * @param enableAttribute Whether face attribute estimation is enabled.
     * @param enableInteractionLiveness Whether interaction liveness detection is enabled.
     */
-    explicit FacePipeline(InspireArchive &archive, bool enableLiveness, bool enableMaskDetect, bool enableAge,
-                          bool enableGender, bool enableInteractionLiveness);
+    explicit FacePipeline(InspireArchive &archive, bool enableLiveness, bool enableMaskDetect, bool enableAttribute,
+                          bool enableInteractionLiveness);

    /**
     * @brief Processes a face using the specified FaceProcessFunction.
@@ -70,20 +69,12 @@ public:

 private:
     /**
-     * @brief Initializes the AgePredict model.
+     * @brief Initializes the FaceAttributePredict model.
     *
-     * @param model Pointer to the AgePredict model.
+     * @param model Pointer to the FaceAttributePredict model.
     * @return int32_t Status code indicating success (0) or failure.
     */
-    int32_t InitAgePredict(InspireModel &model);
-
-    /**
-     * @brief Initializes the GenderPredict model.
-     *
-     * @param model Pointer to the GenderPredict model.
-     * @return int32_t Status code indicating success (0) or failure.
-     */
-    int32_t InitGenderPredict(InspireModel &model);
+    int32_t InitFaceAttributePredict(InspireModel &model);

     /**
     * @brief Initializes the MaskPredict model.
@@ -102,29 +93,29 @@ private:
     int32_t InitRBGAntiSpoofing(InspireModel &model);

     /**
-     * @brief Initializes the LivenessInteraction model.
+     * @brief Initializes the Blink predict model.
     *
-     * @param model Pointer to the LivenessInteraction model.
+     * @param model Pointer to the Blink predict model.
     * @return int32_t Status code indicating success (0) or failure.
     */
-    int32_t InitLivenessInteraction(InspireModel &model);
+    int32_t InitBlinkFromLivenessInteraction(InspireModel &model);

 private:
     const bool m_enable_liveness_ = false; ///< Whether RGB liveness detection is enabled.
     const bool m_enable_mask_detect_ = false; ///< Whether mask detection is enabled.
-    const bool m_enable_age_ = false; ///< Whether age estimation is enabled.
-    const bool m_enable_gender_ = false; ///< Whether gender prediction is enabled.
+    const bool m_enable_attribute_ = false; ///< Whether face attribute is enabled.
     const bool m_enable_interaction_liveness_ = false; ///< Whether interaction liveness detection is enabled.

-    std::shared_ptr<AgePredict> m_age_predict_; ///< Pointer to AgePredict instance.
-    std::shared_ptr<GenderPredict> m_gender_predict_; ///< Pointer to GenderPredict instance.
+    std::shared_ptr<FaceAttributePredict> m_attribute_predict_; ///< Pointer to Face attribute prediction instance.
     std::shared_ptr<MaskPredict> m_mask_predict_; ///< Pointer to MaskPredict instance.
     std::shared_ptr<RBGAntiSpoofing> m_rgb_anti_spoofing_; ///< Pointer to RBGAntiSpoofing instance.
-    std::shared_ptr<LivenessInteraction> m_liveness_interaction_spoofing_; ///< Pointer to LivenessInteraction instance.
+    std::shared_ptr<BlinkPredict> m_blink_predict_; ///< Pointer to Blink predict instance.

 public:
     float faceMaskCache; ///< Cache for face mask detection result.
     float faceLivenessCache; ///< Cache for face liveness detection result.
+    cv::Vec2f eyesStatusCache; ///< Cache for blink predict result.
+    cv::Vec3i faceAttributeCache; ///< Cache for face attribute predict result.
 };

 }
@@ -5,7 +5,8 @@
 #ifndef HYPERFACEREPO_LIVENESS_ALL_H
 #define HYPERFACEREPO_LIVENESS_ALL_H

-#include "liveness_interaction.h"
+#include "blink_predict.h"
 #include "rgb_anti_spoofing.h"
+#include "order_of_hyper_landmark.h"

 #endif //HYPERFACEREPO_LIVENESS_ALL_H
@@ -0,0 +1,28 @@
//
// Created by Tunm-Air13 on 2023/9/8.
//

#include "blink_predict.h"
#include "middleware/utils.h"

namespace inspire {

BlinkPredict::BlinkPredict(): AnyNet("BlinkPredict") {}

float BlinkPredict::operator()(const Matrix &bgr_affine) {
    cv::Mat input;
    if (bgr_affine.cols == BLINK_EYE_INPUT_SIZE && bgr_affine.rows == BLINK_EYE_INPUT_SIZE) {
        input = bgr_affine;
    } else {
        cv::resize(bgr_affine, input, cv::Size(BLINK_EYE_INPUT_SIZE, BLINK_EYE_INPUT_SIZE));
    }
    cv::cvtColor(input, input, cv::COLOR_BGR2GRAY);
    AnyTensorOutputs outputs;
    Forward(input, outputs);
    auto &map = outputs[0].second;

    return map[1];
}

} // namespace inspire
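The forward pass above returns the second output channel as the open-eye confidence. A minimal usage sketch, assuming a `BlinkPredict` instance has already been initialized through the usual model-loading path (the helper name and the 0.5 threshold are taken from the tests later in this change; everything else is illustrative):

```cpp
#include "blink_predict.h"  // include path as in the new file above

// Sketch: classify one eye crop as open/closed with the new network.
bool IsEyeOpen(inspire::BlinkPredict &blink, const cv::Mat &eye_bgr) {
    // operator() resizes to BLINK_EYE_INPUT_SIZE (64) and converts to
    // grayscale internally, so any BGR eye crop can be passed directly.
    return blink(eye_bgr) > 0.5f;
}
```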
@@ -0,0 +1,41 @@
//
// Created by Tunm-Air13 on 2023/9/8.
//
#pragma once
#ifndef HYPERFACEREPO_BLINK_PREDICT_H
#define HYPERFACEREPO_BLINK_PREDICT_H
#include "data_type.h"
#include "middleware/any_net.h"

namespace inspire {

/**
 * @class BlinkPredict
 * @brief Predicts whether the eyes are open or closed.
 *
 * This class inherits from AnyNet and provides methods for performing blink prediction.
 */
class INSPIRE_API BlinkPredict : public AnyNet {
public:
    /**
     * @brief Constructor for BlinkPredict class.
     */
    BlinkPredict();

    /**
     * @brief Operator for performing blink prediction on a BGR affine matrix.
     *
     * @param bgr_affine The BGR affine matrix to perform blink prediction on.
     * @return Blink prediction result.
     */
    float operator()(const Matrix& bgr_affine);

public:

    static const int BLINK_EYE_INPUT_SIZE = 64;  ///< Input size expected by the network

};

} // namespace inspire

#endif //HYPERFACEREPO_BLINK_PREDICT_H
@@ -1,5 +0,0 @@
//
// Created by Tunm-Air13 on 2023/9/8.
//

#include "liveness_interaction.h"
@@ -1,14 +0,0 @@
//
// Created by Tunm-Air13 on 2023/9/8.
//
#pragma once
#ifndef HYPERFACEREPO_LIVENESSINTERACTION_H
#define HYPERFACEREPO_LIVENESSINTERACTION_H


class LivenessInteraction {

};


#endif //HYPERFACEREPO_LIVENESSINTERACTION_H
@@ -0,0 +1,20 @@
//
// Created by Tunm-Air13 on 2024/7/3.
//
#pragma once
#ifndef HYPERFACEREPO_ORDER_HYPERLANDMARK_H
#define HYPERFACEREPO_ORDER_HYPERLANDMARK_H
#include <iostream>
#include <vector>

namespace inspire {

// HyperLandmark left eye contour points sequence of dense facial landmarks.
const std::vector<int> HLMK_LEFT_EYE_POINTS_INDEX = {1, 34, 53, 59, 67, 3, 12, 94};

// HyperLandmark right eye contour points sequence of dense facial landmarks.
const std::vector<int> HLMK_RIGHT_EYE_POINTS_INDEX = {27, 104, 41, 85, 20, 47, 43, 51};

} // namespace inspire

#endif //HYPERFACEREPO_ORDER_HYPERLANDMARK_H
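These index lists pick the eye-contour points out of the dense landmark set so an eye patch can be cropped and fed to `BlinkPredict`. A minimal sketch of that cropping step, assuming `landmarks` holds the dense points for one face; the helper name and the padding factor are illustrative, not taken from the SDK:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical helper: gather the contour points named by an index list
// and return a padded square-ish ROI around them, clamped to the image.
cv::Rect EyeRegion(const std::vector<cv::Point2f> &landmarks,
                   const std::vector<int> &index, const cv::Size &image_size) {
    std::vector<cv::Point2f> pts;
    for (int i : index) pts.push_back(landmarks[i]);
    cv::Rect box = cv::boundingRect(pts);
    int pad = box.width / 2;  // illustrative padding, not the SDK's value
    box -= cv::Point(pad, pad);
    box += cv::Size(2 * pad, 2 * pad);
    return box & cv::Rect(cv::Point(0, 0), image_size);
}
```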
@@ -174,7 +174,7 @@ bool FaceTrack::TrackFace(CameraStream &image, FaceObject &face) {
        // pose and quality - BUG
        auto rect = face.bbox_;
        // std::cout << rect << std::endl;
        auto affine_scale = FacePoseQuality::ComputeCropMatrix(rect);
        auto affine_scale = ComputeCropMatrix(rect, FacePoseQuality::INPUT_WIDTH, FacePoseQuality::INPUT_HEIGHT);
        affine_scale.convertTo(affine_scale, CV_64F);
        auto pre_crop = image.GetAffineRGBImage(affine_scale, FacePoseQuality::INPUT_WIDTH,
                                                FacePoseQuality::INPUT_HEIGHT);
@@ -245,7 +245,7 @@ void FaceTrack::UpdateStream(CameraStream &image) {
    image.SetPreviewSize(track_preview_size_);
    cv::Mat image_detect = image.GetPreviewImage(true);

    nms();

    for (auto const &face: trackingFace) {
        cv::Rect m_mask_rect = face.GetRectSquare();
        std::vector<cv::Point2f> pts = Rect2Points(m_mask_rect);
@@ -282,7 +282,7 @@ void FaceTrack::UpdateStream(CameraStream &image) {
        }
    }

    nms();
    // LOGD("Track Cost %f", t_track.GetCostTimeUpdate());
    track_total_use_time_ = ((double) cv::getTickCount() - timeStart) / cv::getTickFrequency() * 1000;

@@ -35,26 +35,6 @@ FacePoseQualityResult FacePoseQuality::operator()(const Matrix &bgr_affine) {
    return res;
}

cv::Mat FacePoseQuality::ComputeCropMatrix(const cv::Rect2f &rect) {
    float x = rect.x;
    float y = rect.y;
    float w = rect.width;
    float h = rect.height;
    float cx = x + w / 2;
    float cy = y + h / 2;
    float length = std::max(w, h) * 1.5 / 2;
    float x1 = cx - length;
    float y1 = cy - length;
    float x2 = cx + length;
    float y2 = cy + length;
    cv::Rect2f padding_rect(x1, y1, x2 - x1, y2 - y1);
    std::vector<cv::Point2f> rect_pts = Rect2Points(padding_rect);
    rect_pts.erase(rect_pts.end() - 1);
    std::vector<cv::Point2f> dst_pts = {{0, 0}, {INPUT_WIDTH, 0}, {INPUT_WIDTH, INPUT_HEIGHT}};
    cv::Mat m = cv::getAffineTransform(rect_pts, dst_pts);

    return m;
}

} // namespace hyper
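The call site in `FaceTrack::TrackFace` now uses a shared `ComputeCropMatrix(rect, width, height)` instead of the static member being deleted here. A sketch of what that generalized helper presumably looks like, derived from the removed body by parameterizing the output size; this is an assumption, not the actual utility shipped in the SDK:

```cpp
// Assumed generalization of the removed FacePoseQuality::ComputeCropMatrix:
// pad the box to a 1.5x square around its center, then map three of its
// corners onto the requested output size.
cv::Mat ComputeCropMatrix(const cv::Rect2f &rect, int width, int height) {
    float cx = rect.x + rect.width / 2;
    float cy = rect.y + rect.height / 2;
    float length = std::max(rect.width, rect.height) * 1.5f / 2;
    cv::Rect2f padded(cx - length, cy - length, 2 * length, 2 * length);
    std::vector<cv::Point2f> src = Rect2Points(padded);  // SDK helper, 4 corners
    src.erase(src.end() - 1);                            // affine needs 3 points
    std::vector<cv::Point2f> dst = {{0, 0}, {(float) width, 0}, {(float) width, (float) height}};
    return cv::getAffineTransform(src, dst);
}
```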
@@ -44,12 +44,6 @@ public:
     */
    FacePoseQualityResult operator()(const Matrix& bgr_affine);

    /**
     * @brief Computes the affine transformation matrix for face cropping.
     * @param rect Rectangle representing the face in the image.
     * @return cv::Mat The computed affine transformation matrix.
     */
    static cv::Mat ComputeCropMatrix(const cv::Rect2f &rect);

public:
    const static int INPUT_WIDTH = 96;  ///< Width of the input image for the network.

@@ -1 +1 @@
InspireFace Version: 1.1.2
InspireFace Version: 1.1.4

@@ -7,16 +7,27 @@

int main(int argc, char* argv[]) {
    // Check whether the number of parameters is correct
    if (argc != 3) {
        std::cerr << "Usage: " << argv[0] << " <pack_path> <source_path>\n";
    if (argc < 3 || argc > 4) {
        std::cerr << "Usage: " << argv[0] << " <pack_path> <source_path> [rotation]\n";
        return 1;
    }

    auto packPath = argv[1];
    auto sourcePath = argv[2];
    int rotation = 0;

    // If rotation is provided, check and set the value
    if (argc == 4) {
        rotation = std::atoi(argv[3]);
        if (rotation != 0 && rotation != 90 && rotation != 180 && rotation != 270) {
            std::cerr << "Invalid rotation value. Allowed values are 0, 90, 180, 270.\n";
            return 1;
        }
    }

    std::cout << "Pack file Path: " << packPath << std::endl;
    std::cout << "Source file Path: " << sourcePath << std::endl;
    std::cout << "Rotation: " << rotation << std::endl;

    HResult ret;
    // The resource file must be loaded before it can be used
@@ -55,9 +66,26 @@ int main(int argc, char* argv[]) {
    HFImageData imageParam = {0};
    imageParam.data = image.data;    // Data buffer
    imageParam.width = image.cols;   // Target view width
    imageParam.height = image.rows;  // Target view height
    imageParam.rotation = HF_CAMERA_ROTATION_0;  // Data source rotate
    imageParam.format = HF_STREAM_BGR;           // Data source format
    imageParam.height = image.rows;  // Target view height

    // Set rotation based on input parameter
    switch (rotation) {
        case 90:
            imageParam.rotation = HF_CAMERA_ROTATION_90;
            break;
        case 180:
            imageParam.rotation = HF_CAMERA_ROTATION_180;
            break;
        case 270:
            imageParam.rotation = HF_CAMERA_ROTATION_270;
            break;
        case 0:
        default:
            imageParam.rotation = HF_CAMERA_ROTATION_0;
            break;
    }

    imageParam.format = HF_STREAM_BGR;  // Data source format

    // Create an image data stream
    HFImageStream imageHandle = {0};
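The same degrees-to-enum mapping could be factored into a small helper so every sample translates the CLI value identically; a sketch under the assumption that the four `HF_CAMERA_ROTATION_*` values are the only rotations the C API defines:

```cpp
// Hypothetical helper: apply a validated rotation (0/90/180/270 degrees)
// to an HFImageData before the stream is created.
static void ApplyRotation(HFImageData &imageParam, int degrees) {
    switch (degrees) {
        case 90:  imageParam.rotation = HF_CAMERA_ROTATION_90;  break;
        case 180: imageParam.rotation = HF_CAMERA_ROTATION_180; break;
        case 270: imageParam.rotation = HF_CAMERA_ROTATION_270; break;
        default:  imageParam.rotation = HF_CAMERA_ROTATION_0;   break;
    }
}
```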
@@ -82,10 +110,11 @@ int main(int argc, char* argv[]) {
    cv::Mat draw = image.clone();
    for (int index = 0; index < faceNum; ++index) {
        std::cout << "========================================" << std::endl;
        std::cout << "Token size: " << multipleFaceData.tokens[index].size << std::endl;
        std::cout << "Process face index: " << index << std::endl;
        // Use OpenCV's Rect to receive face bounding boxes
        auto rect = cv::Rect(multipleFaceData.rects[index].x, multipleFaceData.rects[index].y,
                multipleFaceData.rects[index].width, multipleFaceData.rects[index].height);
                             multipleFaceData.rects[index].width, multipleFaceData.rects[index].height);
        cv::rectangle(draw, rect, cv::Scalar(0, 100, 255), 4);

        // Print FaceID; in IMAGE-MODE it changes every frame, in VIDEO-MODE it is fixed, but it may be lost
@@ -93,9 +122,21 @@ int main(int argc, char* argv[]) {

        // Print head euler angles; they can often be used to judge face quality by the head pose
        std::cout << "Roll: " << multipleFaceData.angles.roll[index]
            << ", Yaw: " << multipleFaceData.angles.yaw[index]
            << ", Pitch: " << multipleFaceData.angles.pitch[index] << std::endl;
                  << ", Yaw: " << multipleFaceData.angles.yaw[index]
                  << ", Pitch: " << multipleFaceData.angles.pitch[index] << std::endl;

        HInt32 numOfLmk;
        HFGetNumOfFaceDenseLandmark(&numOfLmk);
        HPoint2f denseLandmarkPoints[numOfLmk];
        ret = HFGetFaceDenseLandmarkFromFaceToken(multipleFaceData.tokens[index], denseLandmarkPoints, numOfLmk);
        if (ret != HSUCCEED) {
            std::cerr << "HFGetFaceDenseLandmarkFromFaceToken error!!" << std::endl;
            return -1;
        }
        for (size_t i = 0; i < numOfLmk; i++) {
            cv::Point2f p(denseLandmarkPoints[i].x, denseLandmarkPoints[i].y);
            cv::circle(draw, p, 0, cv::Scalar(0, 0, 255), 2);
        }
    }
    cv::imwrite("draw_detected.jpg", draw);

@@ -117,7 +158,6 @@ int main(int argc, char* argv[]) {
        return -1;
    }

    // Get face quality results from the pipeline cache
    HFFaceQualityConfidence qualityConfidence = {0};
    ret = HFGetFaceQualityConfidence(session, &qualityConfidence);
@@ -152,6 +192,5 @@ int main(int argc, char* argv[]) {
        return ret;
    }

    return 0;
}
}

@@ -44,13 +44,13 @@ int main(int argc, char* argv[]) {
    }

    // Enable the functions in the pipeline: mask detection, live detection, and face quality detection
    HOption option = HF_ENABLE_QUALITY | HF_ENABLE_MASK_DETECT | HF_ENABLE_LIVENESS;
    HOption option = HF_ENABLE_QUALITY | HF_ENABLE_MASK_DETECT | HF_ENABLE_INTERACTION;
    // Video or frame sequence mode uses VIDEO-MODE, which is face detection with tracking
    HFDetectMode detMode = HF_DETECT_MODE_TRACK_BY_DETECTION;
    HFDetectMode detMode = HF_DETECT_MODE_LIGHT_TRACK;
    // Maximum number of faces detected
    HInt32 maxDetectNum = 20;
    // Face detection image input level
    HInt32 detectPixelLevel = 320;
    HInt32 detectPixelLevel = 160;
    // fps in tracking-by-detection mode
    HInt32 trackByDetectFps = 20;
    HFSession session = {0};
@@ -122,7 +122,25 @@ int main(int argc, char* argv[]) {

        // Draw detection mode on the frame
        drawMode(draw, detMode);
        if (faceNum > 0) {
            ret = HFMultipleFacePipelineProcessOptional(session, imageHandle, &multipleFaceData, option);
            if (ret != HSUCCEED)
            {
                std::cout << "HFMultipleFacePipelineProcessOptional error: " << ret << std::endl;
                return ret;
            }
            HFFaceIntereactionResult result;
            ret = HFGetFaceIntereactionResult(session, &result);
            if (ret != HSUCCEED)
            {
                std::cout << "HFGetFaceIntereactionResult error: " << ret << std::endl;
                return ret;
            }
            std::cout << "Left eye status: " << result.leftEyeStatusConfidence[0] << std::endl;
            std::cout << "Right eye status: " << result.rightEyeStatusConfidence[0] << std::endl;

        }

        for (int index = 0; index < faceNum; ++index) {
            // std::cout << "========================================" << std::endl;
            // std::cout << "Process face index: " << index << std::endl;
@@ -143,8 +161,21 @@ int main(int argc, char* argv[]) {
            // Add TrackID to the drawing
            cv::putText(draw, "ID: " + std::to_string(trackId), cv::Point(rect.x, rect.y - 10),
                        cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(0, 255, 0), 2);
        }

        HInt32 numOfLmk;
        HFGetNumOfFaceDenseLandmark(&numOfLmk);
        HPoint2f denseLandmarkPoints[numOfLmk];
        ret = HFGetFaceDenseLandmarkFromFaceToken(multipleFaceData.tokens[index], denseLandmarkPoints, numOfLmk);
        if (ret != HSUCCEED) {
            std::cerr << "HFGetFaceDenseLandmarkFromFaceToken error!!" << std::endl;
            return -1;
        }
        for (size_t i = 0; i < numOfLmk; i++) {
            cv::Point2f p(denseLandmarkPoints[i].x, denseLandmarkPoints[i].y);
            cv::circle(draw, p, 0, cv::Scalar(0, 0, 255), 2);
        }
        }

        cv::imshow("w", draw);
        cv::waitKey(1);

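With per-eye confidences now exposed every frame, a blink can be detected as an open-closed-open transition across frames. A minimal sketch of such a state machine; the 0.5 threshold matches the tests later in this change, while the class name and debounce policy are illustrative only:

```cpp
// Illustrative blink counter over a stream of per-eye open confidences.
class BlinkCounter {
public:
    // Feed one frame's eye-open confidence; returns true when a blink completes.
    bool Update(float openConfidence) {
        bool open = openConfidence > 0.5f;   // same threshold the tests use
        bool blinked = closedSeen_ && open;  // reopened after being closed
        if (!open) closedSeen_ = true;
        if (blinked) closedSeen_ = false;
        return blinked;
    }
private:
    bool closedSeen_ = false;
};
```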
@@ -100,7 +100,7 @@ int main(int argc, char* argv[]) {
    }

    // Set log level
    HFSetLogLevel(HF_LOG_ERROR);
    HFSetLogLevel(HF_LOG_INFO);

    return session.run();
}

@@ -7,6 +7,115 @@
#include "inspireface/c_api/inspireface.h"
#include "../test_helper/test_tools.h"


TEST_CASE("test_FacePipelineAttribute", "[face_pipeline_attribute]") {
    DRAW_SPLIT_LINE
    TEST_PRINT_OUTPUT(true);

    enum AGE_BRACKED {
        AGE_0_2 = 0,      ///< Age 0-2 years old
        AGE_3_9,          ///< Age 3-9 years old
        AGE_10_19,        ///< Age 10-19 years old
        AGE_20_29,        ///< Age 20-29 years old
        AGE_30_39,        ///< Age 30-39 years old
        AGE_40_49,        ///< Age 40-49 years old
        AGE_50_59,        ///< Age 50-59 years old
        AGE_60_69,        ///< Age 60-69 years old
        MORE_THAN_70,     ///< Age more than 70 years old
    };
    enum GENDER {
        FEMALE = 0,       ///< Female
        MALE,             ///< Male
    };
    enum RACE {
        BLACK = 0,        ///< Black
        ASIAN,            ///< Asian
        LATINO_HISPANIC,  ///< Latino/Hispanic
        MIDDLE_EASTERN,   ///< Middle Eastern
        WHITE,            ///< White
    };

    HResult ret;
    HFSessionCustomParameter parameter = {0};
    parameter.enable_face_attribute = 1;
    HFDetectMode detMode = HF_DETECT_MODE_ALWAYS_DETECT;
    HFSession session;
    HInt32 faceDetectPixelLevel = 160;
    ret = HFCreateInspireFaceSession(parameter, detMode, 5, faceDetectPixelLevel, -1, &session);
    REQUIRE(ret == HSUCCEED);

    SECTION("a black girl") {
        HFImageStream imgHandle;
        auto img = cv::imread(GET_DATA("data/attribute/1423.jpg"));
        REQUIRE(!img.empty());
        ret = CVImageToImageStream(img, imgHandle);
        REQUIRE(ret == HSUCCEED);

        HFMultipleFaceData multipleFaceData = {0};
        ret = HFExecuteFaceTrack(session, imgHandle, &multipleFaceData);
        REQUIRE(ret == HSUCCEED);
        REQUIRE(multipleFaceData.detectedNum == 1);

        // Run pipeline
        ret = HFMultipleFacePipelineProcessOptional(session, imgHandle, &multipleFaceData, HF_ENABLE_FACE_ATTRIBUTE);
        REQUIRE(ret == HSUCCEED);

        HFFaceAttributeResult result = {0};
        ret = HFGetFaceAttributeResult(session, &result);
        REQUIRE(ret == HSUCCEED);
        REQUIRE(result.num == 1);

        // Check attribute
        CHECK(result.race[0] == BLACK);
        CHECK(result.ageBracket[0] == AGE_10_19);
        CHECK(result.gender[0] == FEMALE);

        ret = HFReleaseImageStream(imgHandle);
        REQUIRE(ret == HSUCCEED);
        imgHandle = nullptr;
    }

    SECTION("two young white women") {
        HFImageStream imgHandle;
        auto img = cv::imread(GET_DATA("data/attribute/7242.jpg"));
        REQUIRE(!img.empty());
        ret = CVImageToImageStream(img, imgHandle);
        REQUIRE(ret == HSUCCEED);

        HFMultipleFaceData multipleFaceData = {0};
        ret = HFExecuteFaceTrack(session, imgHandle, &multipleFaceData);
        REQUIRE(ret == HSUCCEED);
        REQUIRE(multipleFaceData.detectedNum == 2);

        // Run pipeline
        ret = HFMultipleFacePipelineProcessOptional(session, imgHandle, &multipleFaceData, HF_ENABLE_FACE_ATTRIBUTE);
        REQUIRE(ret == HSUCCEED);

        HFFaceAttributeResult result = {0};
        ret = HFGetFaceAttributeResult(session, &result);
        REQUIRE(ret == HSUCCEED);
        REQUIRE(result.num == 2);

        // Check attribute
        for (size_t i = 0; i < result.num; i++)
        {
            CHECK(result.race[i] == WHITE);
            CHECK(result.ageBracket[i] == AGE_20_29);
            CHECK(result.gender[i] == FEMALE);
        }

        ret = HFReleaseImageStream(imgHandle);
        REQUIRE(ret == HSUCCEED);
        imgHandle = nullptr;
    }

    ret = HFReleaseInspireFaceSession(session);
    session = nullptr;
    REQUIRE(ret == HSUCCEED);
}

TEST_CASE("test_FacePipeline", "[face_pipeline]") {
|
||||
DRAW_SPLIT_LINE
|
||||
TEST_PRINT_OUTPUT(true);
|
||||
@@ -184,8 +293,120 @@ TEST_CASE("test_FacePipeline", "[face_pipeline]") {
|
||||
|
||||
ret = HFReleaseInspireFaceSession(session);
|
||||
REQUIRE(ret == HSUCCEED);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
TEST_CASE("test_FaceReaction", "[face_reaction]") {
|
||||
DRAW_SPLIT_LINE
|
||||
TEST_PRINT_OUTPUT(true);
|
||||
|
||||
HResult ret;
|
||||
HFSessionCustomParameter parameter = {0};
|
||||
parameter.enable_interaction_liveness = 1;
|
||||
parameter.enable_liveness = 1;
|
||||
HFDetectMode detMode = HF_DETECT_MODE_ALWAYS_DETECT;
|
||||
HFSession session;
|
||||
ret = HFCreateInspireFaceSession(parameter, detMode, 3, -1, -1, &session);
|
||||
REQUIRE(ret == HSUCCEED);
|
||||
|
||||
SECTION("open eyes") {
|
||||
// Get a face picture
|
||||
HFImageStream imgHandle;
|
||||
auto img = cv::imread(GET_DATA("data/reaction/open_eyes.png"));
|
||||
ret = CVImageToImageStream(img, imgHandle);
|
||||
REQUIRE(ret == HSUCCEED);
|
||||
|
||||
// Extract basic face information from photos
|
||||
HFMultipleFaceData multipleFaceData = {0};
|
||||
ret = HFExecuteFaceTrack(session, imgHandle, &multipleFaceData);
|
||||
REQUIRE(ret == HSUCCEED);
|
||||
REQUIRE(multipleFaceData.detectedNum > 0);
|
||||
|
||||
// Predict eyes status
|
||||
ret = HFMultipleFacePipelineProcess(session, imgHandle, &multipleFaceData, parameter);
|
||||
REQUIRE(ret == HSUCCEED);
|
||||
|
||||
// Get results
|
||||
HFFaceIntereactionResult result;
|
||||
ret = HFGetFaceIntereactionResult(session, &result);
|
||||
REQUIRE(multipleFaceData.detectedNum == result.num);
|
||||
REQUIRE(ret == HSUCCEED);
|
||||
|
||||
// Check
|
||||
CHECK(result.leftEyeStatusConfidence[0] > 0.5f);
|
||||
CHECK(result.rightEyeStatusConfidence[0] > 0.5f);
|
||||
|
||||
ret = HFReleaseImageStream(imgHandle);
|
||||
REQUIRE(ret == HSUCCEED);
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
SECTION("close eyes") {
|
||||
// Get a face picture
|
||||
HFImageStream imgHandle;
|
||||
auto img = cv::imread(GET_DATA("data/reaction/close_eyes.jpeg"));
|
||||
ret = CVImageToImageStream(img, imgHandle);
|
||||
REQUIRE(ret == HSUCCEED);
|
||||
|
||||
// Extract basic face information from photos
|
||||
HFMultipleFaceData multipleFaceData = {0};
|
||||
ret = HFExecuteFaceTrack(session, imgHandle, &multipleFaceData);
|
||||
REQUIRE(ret == HSUCCEED);
|
||||
REQUIRE(multipleFaceData.detectedNum > 0);
|
||||
|
||||
// Predict eyes status
|
||||
ret = HFMultipleFacePipelineProcess(session, imgHandle, &multipleFaceData, parameter);
|
||||
REQUIRE(ret == HSUCCEED);
|
||||
|
||||
// Get results
|
||||
HFFaceIntereactionResult result;
|
||||
ret = HFGetFaceIntereactionResult(session, &result);
|
||||
REQUIRE(multipleFaceData.detectedNum == result.num);
|
||||
REQUIRE(ret == HSUCCEED);
|
||||
|
||||
// Check
|
||||
CHECK(result.leftEyeStatusConfidence[0] < 0.5f);
|
||||
CHECK(result.rightEyeStatusConfidence[0] < 0.5f);
|
||||
|
||||
ret = HFReleaseImageStream(imgHandle);
|
||||
REQUIRE(ret == HSUCCEED);
|
||||
|
||||
}
|
||||
|
||||
SECTION("Close one eye and open the other") {
|
||||
// Get a face picture
|
||||
HFImageStream imgHandle;
|
||||
auto img = cv::imread(GET_DATA("data/reaction/close_open_eyes.jpeg"));
|
||||
ret = CVImageToImageStream(img, imgHandle);
|
||||
REQUIRE(ret == HSUCCEED);
|
||||
|
||||
// Extract basic face information from photos
|
||||
HFMultipleFaceData multipleFaceData = {0};
|
||||
ret = HFExecuteFaceTrack(session, imgHandle, &multipleFaceData);
|
||||
REQUIRE(ret == HSUCCEED);
|
||||
REQUIRE(multipleFaceData.detectedNum > 0);
|
||||
|
||||
// Predict eyes status
|
||||
ret = HFMultipleFacePipelineProcess(session, imgHandle, &multipleFaceData, parameter);
|
||||
REQUIRE(ret == HSUCCEED);
|
||||
|
||||
// Get results
|
||||
HFFaceIntereactionResult result;
|
||||
ret = HFGetFaceIntereactionResult(session, &result);
|
||||
REQUIRE(multipleFaceData.detectedNum == result.num);
|
||||
REQUIRE(ret == HSUCCEED);
|
||||
|
||||
// Check
|
||||
CHECK(result.leftEyeStatusConfidence[0] < 0.5f);
|
||||
CHECK(result.rightEyeStatusConfidence[0] > 0.5f);
|
||||
|
||||
ret = HFReleaseImageStream(imgHandle);
|
||||
REQUIRE(ret == HSUCCEED);
|
||||
|
||||
}
|
||||
|
||||
ret = HFReleaseInspireFaceSession(session);
|
||||
REQUIRE(ret == HSUCCEED);
|
||||
|
||||
}
|
||||
@@ -483,4 +483,60 @@ TEST_CASE("test_MultipleLevelFaceDetect", "[face_detect]") {
    }

}

TEST_CASE("test_FaceShowLandmark", "[face_landmark]") {
    DRAW_SPLIT_LINE
    TEST_PRINT_OUTPUT(true);

    std::vector<std::string> images_path = {
        GET_DATA("data/reaction/close_open_eyes.jpeg"),
        GET_DATA("data/reaction/open_eyes.png"),
        GET_DATA("data/reaction/close_eyes.jpeg"),
    };

    HResult ret;
    HFSessionCustomParameter parameter = {0};
    HFDetectMode detMode = HF_DETECT_MODE_ALWAYS_DETECT;
    HFSession session;
    HInt32 detectPixelLevel = 160;
    ret = HFCreateInspireFaceSession(parameter, detMode, 20, detectPixelLevel, -1, &session);
    REQUIRE(ret == HSUCCEED);
    HFSessionSetTrackPreviewSize(session, detectPixelLevel);
    HFSessionSetFilterMinimumFacePixelSize(session, 0);

    for (size_t i = 0; i < images_path.size(); i++)
    {
        HFImageStream imgHandle;
        auto image = cv::imread(images_path[i]);
        ret = CVImageToImageStream(image, imgHandle);
        REQUIRE(ret == HSUCCEED);

        // Extract basic face information from photos
        HFMultipleFaceData multipleFaceData = {0};
        ret = HFExecuteFaceTrack(session, imgHandle, &multipleFaceData);
        REQUIRE(ret == HSUCCEED);

        REQUIRE(multipleFaceData.detectedNum > 0);

        HInt32 numOfLmk;
        HFGetNumOfFaceDenseLandmark(&numOfLmk);
        HPoint2f denseLandmarkPoints[numOfLmk];
        ret = HFGetFaceDenseLandmarkFromFaceToken(multipleFaceData.tokens[0], denseLandmarkPoints, numOfLmk);
        REQUIRE(ret == HSUCCEED);
        for (size_t j = 0; j < numOfLmk; j++) {
            cv::Point2f p(denseLandmarkPoints[j].x, denseLandmarkPoints[j].y);
            cv::circle(image, p, 0, cv::Scalar(0, 0, 255), 2);
        }

        cv::imwrite("lml_" + std::to_string(i) + ".jpg", image);

        ret = HFReleaseImageStream(imgHandle);
        REQUIRE(ret == HSUCCEED);

    }
    ret = HFReleaseInspireFaceSession(session);
    REQUIRE(ret == HSUCCEED);

}
@@ -22,33 +22,34 @@ During the use of InspireFace, some error feedback codes may be generated. Here
| 16 | HERR_SESS_TRACKER_FAILURE | 1283 | Tracker module not initialized |
| 17 | HERR_SESS_INVALID_RESOURCE | 1290 | Invalid static resource |
| 18 | HERR_SESS_NUM_OF_MODELS_NOT_MATCH | 1291 | Number of models does not match |
| 19 | HERR_SESS_PIPELINE_FAILURE | 1288 | Pipeline module not initialized |
| 20 | HERR_SESS_REC_EXTRACT_FAILURE | 1295 | Face feature extraction not registered |
| 21 | HERR_SESS_REC_DEL_FAILURE | 1296 | Face feature deletion failed due to out of range index |
| 22 | HERR_SESS_REC_UPDATE_FAILURE | 1297 | Face feature update failed due to out of range index |
| 23 | HERR_SESS_REC_ADD_FEAT_EMPTY | 1298 | Feature vector for registration cannot be empty |
| 24 | HERR_SESS_REC_FEAT_SIZE_ERR | 1299 | Incorrect length of feature vector for registration |
| 25 | HERR_SESS_REC_INVALID_INDEX | 1300 | Invalid index number |
| 26 | HERR_SESS_REC_CONTRAST_FEAT_ERR | 1303 | Incorrect length of feature vector for comparison |
| 27 | HERR_SESS_REC_BLOCK_FULL | 1304 | Feature vector block full |
| 28 | HERR_SESS_REC_BLOCK_DEL_FAILURE | 1305 | Deletion failed |
| 29 | HERR_SESS_REC_BLOCK_UPDATE_FAILURE | 1306 | Update failed |
| 30 | HERR_SESS_REC_ID_ALREADY_EXIST | 1307 | ID already exists |
| 31 | HERR_SESS_FACE_DATA_ERROR | 1310 | Face data parsing |
| 32 | HERR_SESS_FACE_REC_OPTION_ERROR | 1320 | An optional parameter is incorrect |
| 33 | HERR_FT_HUB_DISABLE | 1329 | FeatureHub is disabled |
| 34 | HERR_FT_HUB_OPEN_ERROR | 1330 | Database open error |
| 35 | HERR_FT_HUB_NOT_OPENED | 1331 | Database not opened |
| 36 | HERR_FT_HUB_NO_RECORD_FOUND | 1332 | No record found |
| 37 | HERR_FT_HUB_CHECK_TABLE_ERROR | 1333 | Data table check error |
| 38 | HERR_FT_HUB_INSERT_FAILURE | 1334 | Data insertion error |
| 39 | HERR_FT_HUB_PREPARING_FAILURE | 1335 | Data preparation error |
| 40 | HERR_FT_HUB_EXECUTING_FAILURE | 1336 | SQL execution error |
| 41 | HERR_FT_HUB_NOT_VALID_FOLDER_PATH | 1337 | Invalid folder path |
| 42 | HERR_FT_HUB_ENABLE_REPETITION | 1338 | Enable db function repeatedly |
| 43 | HERR_FT_HUB_DISABLE_REPETITION | 1339 | Disable db function repeatedly |
| 44 | HERR_ARCHIVE_LOAD_FAILURE | 1360 | Archive load failure |
| 45 | HERR_ARCHIVE_LOAD_MODEL_FAILURE | 1361 | Model load failure |
| 46 | HERR_ARCHIVE_FILE_FORMAT_ERROR | 1362 | The archive format is incorrect |
| 47 | HERR_ARCHIVE_REPETITION_LOAD | 1363 | Do not reload the model |
| 48 | HERR_ARCHIVE_NOT_LOAD | 1364 | Model not loaded |
| 19 | HERR_SESS_LANDMARK_NUM_NOT_MATCH | 1300 | The number of input landmark points does not match |
| 20 | HERR_SESS_PIPELINE_FAILURE | 1288 | Pipeline module not initialized |
| 21 | HERR_SESS_REC_EXTRACT_FAILURE | 1295 | Face feature extraction not registered |
| 22 | HERR_SESS_REC_DEL_FAILURE | 1296 | Face feature deletion failed due to out of range index |
| 23 | HERR_SESS_REC_UPDATE_FAILURE | 1297 | Face feature update failed due to out of range index |
| 24 | HERR_SESS_REC_ADD_FEAT_EMPTY | 1298 | Feature vector for registration cannot be empty |
| 25 | HERR_SESS_REC_FEAT_SIZE_ERR | 1299 | Incorrect length of feature vector for registration |
| 26 | HERR_SESS_REC_INVALID_INDEX | 1300 | Invalid index number |
| 27 | HERR_SESS_REC_CONTRAST_FEAT_ERR | 1303 | Incorrect length of feature vector for comparison |
| 28 | HERR_SESS_REC_BLOCK_FULL | 1304 | Feature vector block full |
| 29 | HERR_SESS_REC_BLOCK_DEL_FAILURE | 1305 | Deletion failed |
| 30 | HERR_SESS_REC_BLOCK_UPDATE_FAILURE | 1306 | Update failed |
| 31 | HERR_SESS_REC_ID_ALREADY_EXIST | 1307 | ID already exists |
| 32 | HERR_SESS_FACE_DATA_ERROR | 1310 | Face data parsing |
| 33 | HERR_SESS_FACE_REC_OPTION_ERROR | 1320 | An optional parameter is incorrect |
| 34 | HERR_FT_HUB_DISABLE | 1329 | FeatureHub is disabled |
| 35 | HERR_FT_HUB_OPEN_ERROR | 1330 | Database open error |
| 36 | HERR_FT_HUB_NOT_OPENED | 1331 | Database not opened |
| 37 | HERR_FT_HUB_NO_RECORD_FOUND | 1332 | No record found |
| 38 | HERR_FT_HUB_CHECK_TABLE_ERROR | 1333 | Data table check error |
| 39 | HERR_FT_HUB_INSERT_FAILURE | 1334 | Data insertion error |
| 40 | HERR_FT_HUB_PREPARING_FAILURE | 1335 | Data preparation error |
| 41 | HERR_FT_HUB_EXECUTING_FAILURE | 1336 | SQL execution error |
| 42 | HERR_FT_HUB_NOT_VALID_FOLDER_PATH | 1337 | Invalid folder path |
| 43 | HERR_FT_HUB_ENABLE_REPETITION | 1338 | Enable db function repeatedly |
| 44 | HERR_FT_HUB_DISABLE_REPETITION | 1339 | Disable db function repeatedly |
| 45 | HERR_ARCHIVE_LOAD_FAILURE | 1360 | Archive load failure |
| 46 | HERR_ARCHIVE_LOAD_MODEL_FAILURE | 1361 | Model load failure |
| 47 | HERR_ARCHIVE_FILE_FORMAT_ERROR | 1362 | The archive format is incorrect |
| 48 | HERR_ARCHIVE_REPETITION_LOAD | 1363 | Do not reload the model |
| 49 | HERR_ARCHIVE_NOT_LOAD | 1364 | Model not loaded |

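These codes come back through the `HResult` return value of every C API call. A typical check, mirroring the samples earlier in this change (`packPath` is assumed to hold the model pack path):

```cpp
HResult ret = HFLaunchInspireFace(packPath);
if (ret != HSUCCEED) {
    // Look the numeric code up in the table above,
    // e.g. 1360 is HERR_ARCHIVE_LOAD_FAILURE.
    std::cerr << "InspireFace error code: " << ret << std::endl;
    return ret;
}
```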
@@ -552,8 +552,8 @@ class LibraryLoader:
            # noinspection PyBroadException
            try:
                return self.Lookup(path)
            except Exception:  # pylint: disable=broad-except
                pass
            except Exception as err:  # pylint: disable=broad-except
                print(err)

        raise ImportError("Could not load %s." % libname)

@@ -918,6 +918,21 @@ struct_HFaceRect._fields_ = [

HFaceRect = struct_HFaceRect# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/intypedef.h: 32

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/intypedef.h: 37
class struct_HPoint2f(Structure):
    pass

struct_HPoint2f.__slots__ = [
    'x',
    'y',
]
struct_HPoint2f._fields_ = [
    ('x', HFloat),
    ('y', HFloat),
]

HPoint2f = struct_HPoint2f# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/intypedef.h: 37

enum_HFImageFormat = c_int# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 49

HF_STREAM_RGB = 0# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 49
@@ -987,7 +1002,7 @@ if _libs[_LIBRARY_FILENAME].has("HFLaunchInspireFace", "cdecl"):
    HFLaunchInspireFace.argtypes = [HPath]
    HFLaunchInspireFace.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 132
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 131
class struct_HFSessionCustomParameter(Structure):
    pass

@@ -996,9 +1011,8 @@ struct_HFSessionCustomParameter.__slots__ = [
    'enable_liveness',
    'enable_ir_liveness',
    'enable_mask_detect',
    'enable_age',
    'enable_gender',
    'enable_face_quality',
    'enable_face_attribute',
    'enable_interaction_liveness',
]
struct_HFSessionCustomParameter._fields_ = [
@@ -1006,15 +1020,14 @@ struct_HFSessionCustomParameter._fields_ = [
    ('enable_liveness', HInt32),
    ('enable_ir_liveness', HInt32),
    ('enable_mask_detect', HInt32),
    ('enable_age', HInt32),
    ('enable_gender', HInt32),
    ('enable_face_quality', HInt32),
    ('enable_face_attribute', HInt32),
    ('enable_interaction_liveness', HInt32),
]

HFSessionCustomParameter = struct_HFSessionCustomParameter# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 132
HFSessionCustomParameter = struct_HFSessionCustomParameter# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 131

PHFSessionCustomParameter = POINTER(struct_HFSessionCustomParameter)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 132
PHFSessionCustomParameter = POINTER(struct_HFSessionCustomParameter)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 131

enum_HFDetectMode = c_int# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 142

@@ -1137,7 +1150,19 @@ if _libs[_LIBRARY_FILENAME].has("HFGetFaceBasicTokenSize", "cdecl"):
    HFGetFaceBasicTokenSize.argtypes = [HPInt32]
    HFGetFaceBasicTokenSize.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 312
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 305
if _libs[_LIBRARY_FILENAME].has("HFGetNumOfFaceDenseLandmark", "cdecl"):
    HFGetNumOfFaceDenseLandmark = _libs[_LIBRARY_FILENAME].get("HFGetNumOfFaceDenseLandmark", "cdecl")
    HFGetNumOfFaceDenseLandmark.argtypes = [HPInt32]
    HFGetNumOfFaceDenseLandmark.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 315
if _libs[_LIBRARY_FILENAME].has("HFGetFaceDenseLandmarkFromFaceToken", "cdecl"):
    HFGetFaceDenseLandmarkFromFaceToken = _libs[_LIBRARY_FILENAME].get("HFGetFaceDenseLandmarkFromFaceToken", "cdecl")
    HFGetFaceDenseLandmarkFromFaceToken.argtypes = [HFFaceBasicToken, POINTER(HPoint2f), HInt32]
    HFGetFaceDenseLandmarkFromFaceToken.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 329
class struct_HFFaceFeature(Structure):
    pass

@@ -1150,31 +1175,31 @@ struct_HFFaceFeature._fields_ = [
    ('data', HPFloat),
]

HFFaceFeature = struct_HFFaceFeature# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 312
HFFaceFeature = struct_HFFaceFeature# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 329

PHFFaceFeature = POINTER(struct_HFFaceFeature)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 312
PHFFaceFeature = POINTER(struct_HFFaceFeature)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 329

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 324
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 341
if _libs[_LIBRARY_FILENAME].has("HFFaceFeatureExtract", "cdecl"):
    HFFaceFeatureExtract = _libs[_LIBRARY_FILENAME].get("HFFaceFeatureExtract", "cdecl")
    HFFaceFeatureExtract.argtypes = [HFSession, HFImageStream, HFFaceBasicToken, PHFFaceFeature]
    HFFaceFeatureExtract.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 336
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 353
if _libs[_LIBRARY_FILENAME].has("HFFaceFeatureExtractCpy", "cdecl"):
    HFFaceFeatureExtractCpy = _libs[_LIBRARY_FILENAME].get("HFFaceFeatureExtractCpy", "cdecl")
    HFFaceFeatureExtractCpy.argtypes = [HFSession, HFImageStream, HFFaceBasicToken, HPFloat]
    HFFaceFeatureExtractCpy.restype = HResult

enum_HFSearchMode = c_int# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 349
enum_HFSearchMode = c_int# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 366

HF_SEARCH_MODE_EAGER = 0# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 349
HF_SEARCH_MODE_EAGER = 0# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 366

HF_SEARCH_MODE_EXHAUSTIVE = (HF_SEARCH_MODE_EAGER + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 349
HF_SEARCH_MODE_EXHAUSTIVE = (HF_SEARCH_MODE_EAGER + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 366

HFSearchMode = enum_HFSearchMode# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 349
HFSearchMode = enum_HFSearchMode# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 366

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 362
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 379
class struct_HFFeatureHubConfiguration(Structure):
    pass

@@ -1193,21 +1218,21 @@ struct_HFFeatureHubConfiguration._fields_ = [
    ('searchMode', HFSearchMode),
]

HFFeatureHubConfiguration = struct_HFFeatureHubConfiguration# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 362
HFFeatureHubConfiguration = struct_HFFeatureHubConfiguration# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 379

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 374
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 391
if _libs[_LIBRARY_FILENAME].has("HFFeatureHubDataEnable", "cdecl"):
    HFFeatureHubDataEnable = _libs[_LIBRARY_FILENAME].get("HFFeatureHubDataEnable", "cdecl")
    HFFeatureHubDataEnable.argtypes = [HFFeatureHubConfiguration]
    HFFeatureHubDataEnable.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 380
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 397
if _libs[_LIBRARY_FILENAME].has("HFFeatureHubDataDisable", "cdecl"):
    HFFeatureHubDataDisable = _libs[_LIBRARY_FILENAME].get("HFFeatureHubDataDisable", "cdecl")
    HFFeatureHubDataDisable.argtypes = []
    HFFeatureHubDataDisable.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 392
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 409
class struct_HFFaceFeatureIdentity(Structure):
    pass

@@ -1222,11 +1247,11 @@ struct_HFFaceFeatureIdentity._fields_ = [
    ('feature', PHFFaceFeature),
]

HFFaceFeatureIdentity = struct_HFFaceFeatureIdentity# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 392
HFFaceFeatureIdentity = struct_HFFaceFeatureIdentity# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 409

PHFFaceFeatureIdentity = POINTER(struct_HFFaceFeatureIdentity)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 392
PHFFaceFeatureIdentity = POINTER(struct_HFFaceFeatureIdentity)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 409

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 401
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 418
class struct_HFSearchTopKResults(Structure):
    pass

@@ -1241,89 +1266,89 @@ struct_HFSearchTopKResults._fields_ = [
    ('customIds', HPInt32),
]

HFSearchTopKResults = struct_HFSearchTopKResults# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 401
HFSearchTopKResults = struct_HFSearchTopKResults# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 418

PHFSearchTopKResults = POINTER(struct_HFSearchTopKResults)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 401
PHFSearchTopKResults = POINTER(struct_HFSearchTopKResults)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 418

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 412
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 429
if _libs[_LIBRARY_FILENAME].has("HFFeatureHubFaceSearchThresholdSetting", "cdecl"):
    HFFeatureHubFaceSearchThresholdSetting = _libs[_LIBRARY_FILENAME].get("HFFeatureHubFaceSearchThresholdSetting", "cdecl")
    HFFeatureHubFaceSearchThresholdSetting.argtypes = [c_float]
    HFFeatureHubFaceSearchThresholdSetting.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 423
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 440
if _libs[_LIBRARY_FILENAME].has("HFFaceComparison", "cdecl"):
    HFFaceComparison = _libs[_LIBRARY_FILENAME].get("HFFaceComparison", "cdecl")
    HFFaceComparison.argtypes = [HFFaceFeature, HFFaceFeature, HPFloat]
    HFFaceComparison.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 431
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 448
if _libs[_LIBRARY_FILENAME].has("HFGetFeatureLength", "cdecl"):
    HFGetFeatureLength = _libs[_LIBRARY_FILENAME].get("HFGetFeatureLength", "cdecl")
    HFGetFeatureLength.argtypes = [HPInt32]
    HFGetFeatureLength.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 440
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 457
if _libs[_LIBRARY_FILENAME].has("HFFeatureHubInsertFeature", "cdecl"):
    HFFeatureHubInsertFeature = _libs[_LIBRARY_FILENAME].get("HFFeatureHubInsertFeature", "cdecl")
    HFFeatureHubInsertFeature.argtypes = [HFFaceFeatureIdentity]
    HFFeatureHubInsertFeature.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 450
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 467
if _libs[_LIBRARY_FILENAME].has("HFFeatureHubFaceSearch", "cdecl"):
    HFFeatureHubFaceSearch = _libs[_LIBRARY_FILENAME].get("HFFeatureHubFaceSearch", "cdecl")
    HFFeatureHubFaceSearch.argtypes = [HFFaceFeature, HPFloat, PHFFaceFeatureIdentity]
    HFFeatureHubFaceSearch.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 460
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 477
if _libs[_LIBRARY_FILENAME].has("HFFeatureHubFaceSearchTopK", "cdecl"):
    HFFeatureHubFaceSearchTopK = _libs[_LIBRARY_FILENAME].get("HFFeatureHubFaceSearchTopK", "cdecl")
    HFFeatureHubFaceSearchTopK.argtypes = [HFFaceFeature, HInt32, PHFSearchTopKResults]
    HFFeatureHubFaceSearchTopK.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 468
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 485
if _libs[_LIBRARY_FILENAME].has("HFFeatureHubFaceRemove", "cdecl"):
    HFFeatureHubFaceRemove = _libs[_LIBRARY_FILENAME].get("HFFeatureHubFaceRemove", "cdecl")
    HFFeatureHubFaceRemove.argtypes = [HInt32]
    HFFeatureHubFaceRemove.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 476
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 493
if _libs[_LIBRARY_FILENAME].has("HFFeatureHubFaceUpdate", "cdecl"):
    HFFeatureHubFaceUpdate = _libs[_LIBRARY_FILENAME].get("HFFeatureHubFaceUpdate", "cdecl")
    HFFeatureHubFaceUpdate.argtypes = [HFFaceFeatureIdentity]
    HFFeatureHubFaceUpdate.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 485
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 502
if _libs[_LIBRARY_FILENAME].has("HFFeatureHubGetFaceIdentity", "cdecl"):
    HFFeatureHubGetFaceIdentity = _libs[_LIBRARY_FILENAME].get("HFFeatureHubGetFaceIdentity", "cdecl")
    HFFeatureHubGetFaceIdentity.argtypes = [HInt32, PHFFaceFeatureIdentity]
    HFFeatureHubGetFaceIdentity.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 493
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 510
if _libs[_LIBRARY_FILENAME].has("HFFeatureHubGetFaceCount", "cdecl"):
    HFFeatureHubGetFaceCount = _libs[_LIBRARY_FILENAME].get("HFFeatureHubGetFaceCount", "cdecl")
    HFFeatureHubGetFaceCount.argtypes = [POINTER(HInt32)]
    HFFeatureHubGetFaceCount.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 500
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 517
if _libs[_LIBRARY_FILENAME].has("HFFeatureHubViewDBTable", "cdecl"):
    HFFeatureHubViewDBTable = _libs[_LIBRARY_FILENAME].get("HFFeatureHubViewDBTable", "cdecl")
    HFFeatureHubViewDBTable.argtypes = []
    HFFeatureHubViewDBTable.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 519
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 536
if _libs[_LIBRARY_FILENAME].has("HFMultipleFacePipelineProcess", "cdecl"):
    HFMultipleFacePipelineProcess = _libs[_LIBRARY_FILENAME].get("HFMultipleFacePipelineProcess", "cdecl")
    HFMultipleFacePipelineProcess.argtypes = [HFSession, HFImageStream, PHFMultipleFaceData, HFSessionCustomParameter]
    HFMultipleFacePipelineProcess.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 535
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 552
if _libs[_LIBRARY_FILENAME].has("HFMultipleFacePipelineProcessOptional", "cdecl"):
    HFMultipleFacePipelineProcessOptional = _libs[_LIBRARY_FILENAME].get("HFMultipleFacePipelineProcessOptional", "cdecl")
    HFMultipleFacePipelineProcessOptional.argtypes = [HFSession, HFImageStream, PHFMultipleFaceData, HInt32]
    HFMultipleFacePipelineProcessOptional.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 547
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 564
class struct_HFRGBLivenessConfidence(Structure):
    pass

@@ -1336,17 +1361,17 @@ struct_HFRGBLivenessConfidence._fields_ = [
    ('confidence', HPFloat),
]

HFRGBLivenessConfidence = struct_HFRGBLivenessConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 547
HFRGBLivenessConfidence = struct_HFRGBLivenessConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 564

PHFRGBLivenessConfidence = POINTER(struct_HFRGBLivenessConfidence)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 547
PHFRGBLivenessConfidence = POINTER(struct_HFRGBLivenessConfidence)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 564

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 560
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 577
if _libs[_LIBRARY_FILENAME].has("HFGetRGBLivenessConfidence", "cdecl"):
    HFGetRGBLivenessConfidence = _libs[_LIBRARY_FILENAME].get("HFGetRGBLivenessConfidence", "cdecl")
    HFGetRGBLivenessConfidence.argtypes = [HFSession, PHFRGBLivenessConfidence]
    HFGetRGBLivenessConfidence.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 571
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 588
class struct_HFFaceMaskConfidence(Structure):
    pass

@@ -1359,17 +1384,17 @@ struct_HFFaceMaskConfidence._fields_ = [
    ('confidence', HPFloat),
]

HFFaceMaskConfidence = struct_HFFaceMaskConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 571
HFFaceMaskConfidence = struct_HFFaceMaskConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 588

PHFFaceMaskConfidence = POINTER(struct_HFFaceMaskConfidence)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 571
PHFFaceMaskConfidence = POINTER(struct_HFFaceMaskConfidence)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 588

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 583
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 600
if _libs[_LIBRARY_FILENAME].has("HFGetFaceMaskConfidence", "cdecl"):
    HFGetFaceMaskConfidence = _libs[_LIBRARY_FILENAME].get("HFGetFaceMaskConfidence", "cdecl")
    HFGetFaceMaskConfidence.argtypes = [HFSession, PHFFaceMaskConfidence]
    HFGetFaceMaskConfidence.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 594
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 611
class struct_HFFaceQualityConfidence(Structure):
    pass

@@ -1382,23 +1407,75 @@ struct_HFFaceQualityConfidence._fields_ = [
    ('confidence', HPFloat),
]

HFFaceQualityConfidence = struct_HFFaceQualityConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 594
HFFaceQualityConfidence = struct_HFFaceQualityConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 611

PHFFaceQualityConfidence = POINTER(struct_HFFaceQualityConfidence)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 594
PHFFaceQualityConfidence = POINTER(struct_HFFaceQualityConfidence)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 611

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 606
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 623
if _libs[_LIBRARY_FILENAME].has("HFGetFaceQualityConfidence", "cdecl"):
    HFGetFaceQualityConfidence = _libs[_LIBRARY_FILENAME].get("HFGetFaceQualityConfidence", "cdecl")
    HFGetFaceQualityConfidence.argtypes = [HFSession, PHFFaceQualityConfidence]
    HFGetFaceQualityConfidence.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 618
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 635
if _libs[_LIBRARY_FILENAME].has("HFFaceQualityDetect", "cdecl"):
    HFFaceQualityDetect = _libs[_LIBRARY_FILENAME].get("HFFaceQualityDetect", "cdecl")
    HFFaceQualityDetect.argtypes = [HFSession, HFFaceBasicToken, POINTER(HFloat)]
    HFFaceQualityDetect.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 631
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 645
class struct_HFFaceIntereactionResult(Structure):
    pass

struct_HFFaceIntereactionResult.__slots__ = [
    'num',
    'leftEyeStatusConfidence',
    'rightEyeStatusConfidence',
]
struct_HFFaceIntereactionResult._fields_ = [
    ('num', HInt32),
    ('leftEyeStatusConfidence', HPFloat),
    ('rightEyeStatusConfidence', HPFloat),
]

HFFaceIntereactionResult = struct_HFFaceIntereactionResult# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 645

PHFFaceIntereactionResult = POINTER(struct_HFFaceIntereactionResult)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 645

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 647
if _libs[_LIBRARY_FILENAME].has("HFGetFaceIntereactionResult", "cdecl"):
    HFGetFaceIntereactionResult = _libs[_LIBRARY_FILENAME].get("HFGetFaceIntereactionResult", "cdecl")
    HFGetFaceIntereactionResult.argtypes = [HFSession, PHFFaceIntereactionResult]
    HFGetFaceIntereactionResult.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 675
class struct_HFFaceAttributeResult(Structure):
    pass

struct_HFFaceAttributeResult.__slots__ = [
    'num',
    'race',
    'gender',
    'ageBracket',
]
struct_HFFaceAttributeResult._fields_ = [
    ('num', HInt32),
    ('race', HPInt32),
    ('gender', HPInt32),
    ('ageBracket', HPInt32),
]

HFFaceAttributeResult = struct_HFFaceAttributeResult# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 675

PHFFaceAttributeResult = POINTER(struct_HFFaceAttributeResult)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 675

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 687
if _libs[_LIBRARY_FILENAME].has("HFGetFaceAttributeResult", "cdecl"):
    HFGetFaceAttributeResult = _libs[_LIBRARY_FILENAME].get("HFGetFaceAttributeResult", "cdecl")
    HFGetFaceAttributeResult.argtypes = [HFSession, PHFFaceAttributeResult]
    HFGetFaceAttributeResult.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 701
class struct_HFInspireFaceVersion(Structure):
    pass

@@ -1413,50 +1490,56 @@ struct_HFInspireFaceVersion._fields_ = [
    ('patch', c_int),
]

HFInspireFaceVersion = struct_HFInspireFaceVersion# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 631
HFInspireFaceVersion = struct_HFInspireFaceVersion# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 701

PHFInspireFaceVersion = POINTER(struct_HFInspireFaceVersion)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 631
PHFInspireFaceVersion = POINTER(struct_HFInspireFaceVersion)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 701

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 641
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 711
if _libs[_LIBRARY_FILENAME].has("HFQueryInspireFaceVersion", "cdecl"):
    HFQueryInspireFaceVersion = _libs[_LIBRARY_FILENAME].get("HFQueryInspireFaceVersion", "cdecl")
    HFQueryInspireFaceVersion.argtypes = [PHFInspireFaceVersion]
    HFQueryInspireFaceVersion.restype = HResult

enum_HFLogLevel = c_int# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 653
enum_HFLogLevel = c_int# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 723

HF_LOG_NONE = 0# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 653
HF_LOG_NONE = 0# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 723

HF_LOG_DEBUG = (HF_LOG_NONE + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 653
HF_LOG_DEBUG = (HF_LOG_NONE + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 723

HF_LOG_INFO = (HF_LOG_DEBUG + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 653
HF_LOG_INFO = (HF_LOG_DEBUG + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 723

HF_LOG_WARN = (HF_LOG_INFO + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 653
HF_LOG_WARN = (HF_LOG_INFO + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 723

HF_LOG_ERROR = (HF_LOG_WARN + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 653
HF_LOG_ERROR = (HF_LOG_WARN + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 723

HF_LOG_FATAL = (HF_LOG_ERROR + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 653
HF_LOG_FATAL = (HF_LOG_ERROR + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 723

HFLogLevel = enum_HFLogLevel# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 653
HFLogLevel = enum_HFLogLevel# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 723

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 658
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 728
if _libs[_LIBRARY_FILENAME].has("HFSetLogLevel", "cdecl"):
    HFSetLogLevel = _libs[_LIBRARY_FILENAME].get("HFSetLogLevel", "cdecl")
    HFSetLogLevel.argtypes = [HFLogLevel]
    HFSetLogLevel.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 663
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 733
if _libs[_LIBRARY_FILENAME].has("HFLogDisable", "cdecl"):
    HFLogDisable = _libs[_LIBRARY_FILENAME].get("HFLogDisable", "cdecl")
    HFLogDisable.argtypes = []
    HFLogDisable.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 676
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 746
if _libs[_LIBRARY_FILENAME].has("HFDeBugImageStreamImShow", "cdecl"):
    HFDeBugImageStreamImShow = _libs[_LIBRARY_FILENAME].get("HFDeBugImageStreamImShow", "cdecl")
    HFDeBugImageStreamImShow.argtypes = [HFImageStream]
    HFDeBugImageStreamImShow.restype = None

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 757
if _libs[_LIBRARY_FILENAME].has("HFDeBugImageStreamDecodeSave", "cdecl"):
    HFDeBugImageStreamDecodeSave = _libs[_LIBRARY_FILENAME].get("HFDeBugImageStreamDecodeSave", "cdecl")
    HFDeBugImageStreamDecodeSave.argtypes = [HFImageStream, HPath]
    HFDeBugImageStreamDecodeSave.restype = HResult

# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 27
try:
    HF_ENABLE_NONE = 0
@@ -1489,13 +1572,13 @@ except:
|
||||
|
||||
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 32
|
||||
try:
|
||||
HF_ENABLE_AGE_PREDICT = 32
|
||||
HF_ENABLE_FACE_ATTRIBUTE = 32
|
||||
except:
|
||||
pass
|
||||
|
||||
# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 33
|
||||
try:
|
||||
HF_ENABLE_GENDER_PREDICT = 64
|
||||
HF_ENABLE_PLACEHOLDER_ = 64
|
||||
except:
|
||||
pass
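
The two hunks above show that the separate `HF_ENABLE_AGE_PREDICT` and `HF_ENABLE_GENDER_PREDICT` flags are folded into a single `HF_ENABLE_FACE_ATTRIBUTE` flag (value 32), with value 64 left as a placeholder. A hedged sketch of what migrating caller code might look like:

```python
# Before: age and gender prediction were requested with two separate flags.
# opt = HF_ENABLE_FACE_RECOGNITION | HF_ENABLE_AGE_PREDICT | HF_ENABLE_GENDER_PREDICT

# After 1.1.4: one flag covers race, gender, and age bracket together.
from inspireface.modules.core.native import HF_ENABLE_FACE_RECOGNITION, HF_ENABLE_FACE_ATTRIBUTE

opt = HF_ENABLE_FACE_RECOGNITION | HF_ENABLE_FACE_ATTRIBUTE
```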

@@ -1513,7 +1596,7 @@ except:

HFImageData = struct_HFImageData# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 74

HFSessionCustomParameter = struct_HFSessionCustomParameter# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 132
HFSessionCustomParameter = struct_HFSessionCustomParameter# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 131

HFFaceBasicToken = struct_HFFaceBasicToken# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 204

@@ -1521,21 +1604,25 @@ HFFaceEulerAngle = struct_HFFaceEulerAngle# /Users/tunm/work/InspireFace/cpp/ins

HFMultipleFaceData = struct_HFMultipleFaceData# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 229

HFFaceFeature = struct_HFFaceFeature# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 312
HFFaceFeature = struct_HFFaceFeature# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 329

HFFeatureHubConfiguration = struct_HFFeatureHubConfiguration# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 362
HFFeatureHubConfiguration = struct_HFFeatureHubConfiguration# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 379

HFFaceFeatureIdentity = struct_HFFaceFeatureIdentity# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 392
HFFaceFeatureIdentity = struct_HFFaceFeatureIdentity# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 409

HFSearchTopKResults = struct_HFSearchTopKResults# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 401
HFSearchTopKResults = struct_HFSearchTopKResults# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 418

HFRGBLivenessConfidence = struct_HFRGBLivenessConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 547
HFRGBLivenessConfidence = struct_HFRGBLivenessConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 564

HFFaceMaskConfidence = struct_HFFaceMaskConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 571
HFFaceMaskConfidence = struct_HFFaceMaskConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 588

HFFaceQualityConfidence = struct_HFFaceQualityConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 594
HFFaceQualityConfidence = struct_HFFaceQualityConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 611

HFInspireFaceVersion = struct_HFInspireFaceVersion# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 631
HFFaceIntereactionResult = struct_HFFaceIntereactionResult# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 645

HFFaceAttributeResult = struct_HFFaceAttributeResult# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 675

HFInspireFaceVersion = struct_HFInspireFaceVersion# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 701

# No inserted files

@@ -1,3 +1,5 @@
import ctypes

import cv2
import numpy as np
from .core import *
@@ -146,6 +148,11 @@ class FaceExtended:
    rgb_liveness_confidence: float
    mask_confidence: float
    quality_confidence: float
    left_eye_status_confidence: float
    right_eye_status_confidence: float
    race: int
    gender: int
    age_bracket: int
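
The five new fields are filled in by `face_pipeline` when the matching modules are enabled; entries that are not computed keep their placeholder defaults. A short sketch of reading them, assuming `session`, `image`, and `faces` were prepared as in the sample scripts further down:

```python
# Sketch: inspect the new FaceExtended fields after a pipeline run.
extends = session.face_pipeline(image, faces, HF_ENABLE_INTERACTION | HF_ENABLE_FACE_ATTRIBUTE)
for ext in extends:
    print(ext.left_eye_status_confidence, ext.right_eye_status_confidence)
    print(ext.race, ext.gender, ext.age_bracket)  # integer category indices
```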

class FaceInformation:

@@ -208,8 +215,7 @@ class SessionCustomParameter:
    enable_liveness: bool = False
    enable_ir_liveness: bool = False
    enable_mask_detect: bool = False
    enable_age: bool = False
    enable_gender: bool = False
    enable_face_attribute: bool = False
    enable_face_quality: bool = False
    enable_interaction_liveness: bool = False

@@ -225,8 +231,7 @@ class SessionCustomParameter:
            enable_liveness=int(self.enable_liveness),
            enable_ir_liveness=int(self.enable_ir_liveness),
            enable_mask_detect=int(self.enable_mask_detect),
            enable_age=int(self.enable_age),
            enable_gender=int(self.enable_gender),
            enable_face_attribute=int(self.enable_face_attribute),
            enable_face_quality=int(self.enable_face_quality),
            enable_interaction_liveness=int(self.enable_interaction_liveness)
        )

@@ -317,6 +322,21 @@ class InspireFaceSession(object):
        else:
            return []

    def get_face_dense_landmark(self, single_face: FaceInformation):
        num_landmarks = HInt32()
        HFGetNumOfFaceDenseLandmark(byref(num_landmarks))
        landmarks_array = (HPoint2f * num_landmarks.value)()
        ret = HFGetFaceDenseLandmarkFromFaceToken(single_face._token, landmarks_array, num_landmarks)
        if ret != 0:
            logger.error(f"An error occurred obtaining a dense landmark for a single face: {ret}")

        landmark = []
        for point in landmarks_array:
            landmark.append(point.x)
            landmark.append(point.y)

        return np.asarray(landmark).reshape(-1, 2)

    def set_track_preview_size(self, size=192):
        """
        Sets the preview size for the face tracking session.

@@ -367,10 +387,12 @@ class InspireFaceSession(object):
            logger.error(f"Face pipeline error: {ret}")
            return []

        extends = [FaceExtended(-1.0, -1.0, -1.0) for _ in range(len(faces))]
        extends = [FaceExtended(-1.0, -1.0, -1.0, -1.0, -1.0, -1, -1, -1) for _ in range(len(faces))]
        self._update_mask_confidence(exec_param, flag, extends)
        self._update_rgb_liveness_confidence(exec_param, flag, extends)
        self._update_face_quality_confidence(exec_param, flag, extends)
        self._update_face_attribute_confidence(exec_param, flag, extends)
        self._update_face_interact_confidence(exec_param, flag, extends)

        return extends
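
Judging from the "object" / "bitmask" branches in the update helpers below, `face_pipeline` accepts its module selection either as a bitmask of flags or as a `SessionCustomParameter` object. A sketch of both styles; the names come from this file, but the exact call shape is an assumption:

```python
# Style 1: bitmask of HF_ENABLE_* flags.
extends = session.face_pipeline(image, faces, HF_ENABLE_QUALITY | HF_ENABLE_FACE_ATTRIBUTE)

# Style 2: SessionCustomParameter object with the matching booleans set.
param = SessionCustomParameter(enable_face_quality=True, enable_face_attribute=True)
extends = session.face_pipeline(image, faces, param)
```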

@@ -431,6 +453,18 @@ class InspireFaceSession(object):
        else:
            logger.error(f"Get mask result error: {ret}")

    def _update_face_interact_confidence(self, exec_param, flag, extends):
        if (flag == "object" and exec_param.enable_interaction_liveness) or (
                flag == "bitmask" and exec_param & HF_ENABLE_INTERACTION):
            results = HFFaceIntereactionResult()
            ret = HFGetFaceIntereactionResult(self._sess, PHFFaceIntereactionResult(results))
            if ret == 0:
                for i in range(results.num):
                    extends[i].left_eye_status_confidence = results.leftEyeStatusConfidence[i]
                    extends[i].right_eye_status_confidence = results.rightEyeStatusConfidence[i]
            else:
                logger.error(f"Get face interact result error: {ret}")

    def _update_rgb_liveness_confidence(self, exec_param, flag, extends: List[FaceExtended]):
        if (flag == "object" and exec_param.enable_liveness) or (
                flag == "bitmask" and exec_param & HF_ENABLE_LIVENESS):

@@ -442,6 +476,19 @@ class InspireFaceSession(object):
        else:
            logger.error(f"Get rgb liveness result error: {ret}")

    def _update_face_attribute_confidence(self, exec_param, flag, extends: List[FaceExtended]):
        if (flag == "object" and exec_param.enable_face_attribute) or (
                flag == "bitmask" and exec_param & HF_ENABLE_FACE_ATTRIBUTE):
            attribute_results = HFFaceAttributeResult()
            ret = HFGetFaceAttributeResult(self._sess, PHFFaceAttributeResult(attribute_results))
            if ret == 0:
                for i in range(attribute_results.num):
                    extends[i].gender = attribute_results.gender[i]
                    extends[i].age_bracket = attribute_results.ageBracket[i]
                    extends[i].race = attribute_results.race[i]
            else:
                logger.error(f"Get face attribute result error: {ret}")

    def _update_face_quality_confidence(self, exec_param, flag, extends: List[FaceExtended]):
        if (flag == "object" and exec_param.enable_face_quality) or (
                flag == "bitmask" and exec_param & HF_ENABLE_QUALITY):

@@ -2,7 +2,7 @@

# Session option
from inspireface.modules.core.native import HF_ENABLE_NONE, HF_ENABLE_FACE_RECOGNITION, HF_ENABLE_LIVENESS, HF_ENABLE_IR_LIVENESS, \
    HF_ENABLE_MASK_DETECT, HF_ENABLE_AGE_PREDICT, HF_ENABLE_GENDER_PREDICT, HF_ENABLE_QUALITY, HF_ENABLE_INTERACTION
    HF_ENABLE_MASK_DETECT, HF_ENABLE_FACE_ATTRIBUTE, HF_ENABLE_QUALITY, HF_ENABLE_INTERACTION

# Face track mode
from inspireface.modules.core.native import HF_DETECT_MODE_ALWAYS_DETECT, HF_DETECT_MODE_LIGHT_TRACK, HF_DETECT_MODE_TRACK_BY_DETECTION

@@ -3,6 +3,12 @@ import cv2
import inspireface as ifac
from inspireface.param import *
import click
import numpy as np

race_tags = ["Black", "Asian", "Latino/Hispanic", "Middle Eastern", "White"]
gender_tags = ["Female", "Male", ]
age_bracket_tags = ["0-2 years old", "3-9 years old", "10-19 years old", "20-29 years old", "30-39 years old",
                    "40-49 years old", "50-59 years old", "60-69 years old", "more than 70 years old"]

@click.command()
@click.argument("resource_path")
@@ -17,7 +23,7 @@ def case_face_detection_image(resource_path, image_path):
    assert ret, "Launch failure. Please ensure the resource path is correct."

    # Optional features, loaded during session creation based on the modules specified.
    opt = HF_ENABLE_FACE_RECOGNITION | HF_ENABLE_QUALITY | HF_ENABLE_MASK_DETECT | HF_ENABLE_LIVENESS
    opt = HF_ENABLE_FACE_RECOGNITION | HF_ENABLE_QUALITY | HF_ENABLE_MASK_DETECT | HF_ENABLE_LIVENESS | HF_ENABLE_INTERACTION | HF_ENABLE_FACE_ATTRIBUTE
    session = ifac.InspireFaceSession(opt, HF_DETECT_MODE_ALWAYS_DETECT)

    # Load the image using OpenCV.
@@ -35,12 +41,33 @@ def case_face_detection_image(resource_path, image_path):
        print(f"idx: {idx}")
        # Print Euler angles of the face.
        print(f"roll: {face.roll}, yaw: {face.yaw}, pitch: {face.pitch}")
        # Draw bounding box around the detected face.

        # Get face bounding box
        x1, y1, x2, y2 = face.location
        cv2.rectangle(draw, (x1, y1), (x2, y2), (0, 0, 255), 2)

        # Calculate center, size, and angle
        center = ((x1 + x2) / 2, (y1 + y2) / 2)
        size = (x2 - x1, y2 - y1)
        angle = face.roll  # Use the roll angle here

        # Get rotation matrix
        rotation_matrix = cv2.getRotationMatrix2D(center, angle, 1.0)

        # Apply rotation to the bounding box corners
        rect = ((center[0], center[1]), (size[0], size[1]), angle)
        box = cv2.boxPoints(rect)
        box = box.astype(int)

        # Draw the rotated bounding box
        cv2.drawContours(draw, [box], 0, (100, 180, 29), 2)

        # Draw landmarks
        lmk = session.get_face_dense_landmark(face)
        for x, y in lmk.astype(int):
            cv2.circle(draw, (x, y), 0, (220, 100, 0), 2)

    # Features must be enabled during session creation to use them here.
    select_exec_func = HF_ENABLE_QUALITY | HF_ENABLE_MASK_DETECT | HF_ENABLE_LIVENESS
    select_exec_func = HF_ENABLE_QUALITY | HF_ENABLE_MASK_DETECT | HF_ENABLE_LIVENESS | HF_ENABLE_INTERACTION | HF_ENABLE_FACE_ATTRIBUTE
    # Execute the pipeline to obtain richer face information.
    extends = session.face_pipeline(image, faces, select_exec_func)
    for idx, ext in enumerate(extends):
@@ -50,6 +77,11 @@ def case_face_detection_image(resource_path, image_path):
        print(f"quality: {ext.quality_confidence}")
        print(f"rgb liveness: {ext.rgb_liveness_confidence}")
        print(f"face mask: {ext.mask_confidence}")
        print(
            f"face eyes status: left eye: {ext.left_eye_status_confidence} right eye: {ext.right_eye_status_confidence}")
        print(f"gender: {gender_tags[ext.gender]}")
        print(f"race: {race_tags[ext.race]}")
        print(f"age: {age_bracket_tags[ext.age_bracket]}")

    # Save the annotated image to the 'tmp/' directory.
    save_path = os.path.join("tmp/", "det.jpg")

@@ -2,7 +2,7 @@ import click
import cv2
import inspireface as ifac
from inspireface.param import *

import numpy as np

@click.command()
@click.argument("resource_path")
@@ -51,8 +51,34 @@ def case_face_tracker_from_video(resource_path, source, show):
        # Process frame here (e.g., face detection/tracking).
        faces = session.face_detection(frame)
        for idx, face in enumerate(faces):
            print(f"{'==' * 20}")
            print(f"idx: {idx}")
            # Print Euler angles of the face.
            print(f"roll: {face.roll}, yaw: {face.yaw}, pitch: {face.pitch}")

            # Get face bounding box
            x1, y1, x2, y2 = face.location
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)

            # Calculate center, size, and angle
            center = ((x1 + x2) / 2, (y1 + y2) / 2)
            size = (x2 - x1, y2 - y1)
            angle = face.roll  # Use the roll angle here

            # Get rotation matrix
            rotation_matrix = cv2.getRotationMatrix2D(center, angle, 1.0)

            # Apply rotation to the bounding box corners
            rect = ((center[0], center[1]), (size[0], size[1]), angle)
            box = cv2.boxPoints(rect)
            box = box.astype(int)

            # Draw the rotated bounding box
            cv2.drawContours(frame, [box], 0, (100, 180, 29), 2)

            # Draw landmarks
            lmk = session.get_face_dense_landmark(face)
            for x, y in lmk.astype(int):
                cv2.circle(frame, (x, y), 0, (220, 100, 0), 2)

        if show:
            cv2.imshow("Face Tracker", frame)

(26 binary image files removed)

@@ -8,14 +8,14 @@ import inspireface as ifac
ENABLE_BENCHMARK_TEST = True

# Enabling will run all the CRUD tests, which will take time
ENABLE_CRUD_TEST = True
ENABLE_CRUD_TEST = False

# Enabling will run the face search benchmark, which takes time and must be configured with the correct
# 'LFW_FUNNELED_DIR_PATH' parameter
ENABLE_SEARCH_BENCHMARK_TEST = True

# Enabling will run the LFW dataset precision test, which will take time
ENABLE_LFW_PRECISION_TEST = True
ENABLE_LFW_PRECISION_TEST = False

# Testing model name
TEST_MODEL_NAME = "Pikachu"

@@ -84,24 +84,6 @@ class FaceTrackerCase(unittest.TestCase):
        right_face_roll = faces[0].roll
        self.assertEqual(True, right_face_roll > 30)

    def test_face_track_from_video(self):
        # Read a video file
        video_gen = read_video_generator(get_test_data("video/810_1684206192.mp4"))
        results = [self.engine_tk.face_detection(frame) for frame in video_gen]
        num_of_frame = len(results)
        num_of_track_loss = len([faces for faces in results if not faces])
        total_track_ids = [faces[0].track_id for faces in results if faces]
        num_of_id_switch = len([id_ for id_ in total_track_ids if id_ != 1])

        # Calculate the track-loss and id-switch rates
        track_loss = num_of_track_loss / num_of_frame
        id_switch_loss = num_of_id_switch / len(total_track_ids)

        # Not rigorous; only meant for this particular test video
        self.assertEqual(True, track_loss < 0.05)
        self.assertEqual(True, id_switch_loss < 0.1)


@optional(ENABLE_BENCHMARK_TEST, "All benchmark related tests have been closed.")
class FaceTrackerBenchmarkCase(unittest.TestCase):
    benchmark_results = list()

(binary image file updated: 39 KiB -> 242 KiB)

@@ -18,33 +18,34 @@
| 16 | HERR_SESS_TRACKER_FAILURE | 1283 | Tracker module not initialized |
| 17 | HERR_SESS_INVALID_RESOURCE | 1290 | Invalid static resource |
| 18 | HERR_SESS_NUM_OF_MODELS_NOT_MATCH | 1291 | Number of models does not match |
| 19 | HERR_SESS_PIPELINE_FAILURE | 1288 | Pipeline module not initialized |
| 20 | HERR_SESS_REC_EXTRACT_FAILURE | 1295 | Face feature extraction not registered |
| 21 | HERR_SESS_REC_DEL_FAILURE | 1296 | Face feature deletion failed due to out of range index |
| 22 | HERR_SESS_REC_UPDATE_FAILURE | 1297 | Face feature update failed due to out of range index |
| 23 | HERR_SESS_REC_ADD_FEAT_EMPTY | 1298 | Feature vector for registration cannot be empty |
| 24 | HERR_SESS_REC_FEAT_SIZE_ERR | 1299 | Incorrect length of feature vector for registration |
| 25 | HERR_SESS_REC_INVALID_INDEX | 1300 | Invalid index number |
| 26 | HERR_SESS_REC_CONTRAST_FEAT_ERR | 1303 | Incorrect length of feature vector for comparison |
| 27 | HERR_SESS_REC_BLOCK_FULL | 1304 | Feature vector block full |
| 28 | HERR_SESS_REC_BLOCK_DEL_FAILURE | 1305 | Deletion failed |
| 29 | HERR_SESS_REC_BLOCK_UPDATE_FAILURE | 1306 | Update failed |
| 30 | HERR_SESS_REC_ID_ALREADY_EXIST | 1307 | ID already exists |
| 31 | HERR_SESS_FACE_DATA_ERROR | 1310 | Face data parsing |
| 32 | HERR_SESS_FACE_REC_OPTION_ERROR | 1320 | An optional parameter is incorrect |
| 33 | HERR_FT_HUB_DISABLE | 1329 | FeatureHub is disabled |
| 34 | HERR_FT_HUB_OPEN_ERROR | 1330 | Database open error |
| 35 | HERR_FT_HUB_NOT_OPENED | 1331 | Database not opened |
| 36 | HERR_FT_HUB_NO_RECORD_FOUND | 1332 | No record found |
| 37 | HERR_FT_HUB_CHECK_TABLE_ERROR | 1333 | Data table check error |
| 38 | HERR_FT_HUB_INSERT_FAILURE | 1334 | Data insertion error |
| 39 | HERR_FT_HUB_PREPARING_FAILURE | 1335 | Data preparation error |
| 40 | HERR_FT_HUB_EXECUTING_FAILURE | 1336 | SQL execution error |
| 41 | HERR_FT_HUB_NOT_VALID_FOLDER_PATH | 1337 | Invalid folder path |
| 42 | HERR_FT_HUB_ENABLE_REPETITION | 1338 | Enable db function repeatedly |
| 43 | HERR_FT_HUB_DISABLE_REPETITION | 1339 | Disable db function repeatedly |
| 44 | HERR_ARCHIVE_LOAD_FAILURE | 1360 | Archive load failure |
| 45 | HERR_ARCHIVE_LOAD_MODEL_FAILURE | 1361 | Model load failure |
| 46 | HERR_ARCHIVE_FILE_FORMAT_ERROR | 1362 | The archive format is incorrect |
| 47 | HERR_ARCHIVE_REPETITION_LOAD | 1363 | Do not reload the model |
| 48 | HERR_ARCHIVE_NOT_LOAD | 1364 | Model not loaded |
| 19 | HERR_SESS_LANDMARK_NUM_NOT_MATCH | 1300 | The number of input landmark points does not match |
| 20 | HERR_SESS_PIPELINE_FAILURE | 1288 | Pipeline module not initialized |
| 21 | HERR_SESS_REC_EXTRACT_FAILURE | 1295 | Face feature extraction not registered |
| 22 | HERR_SESS_REC_DEL_FAILURE | 1296 | Face feature deletion failed due to out of range index |
| 23 | HERR_SESS_REC_UPDATE_FAILURE | 1297 | Face feature update failed due to out of range index |
| 24 | HERR_SESS_REC_ADD_FEAT_EMPTY | 1298 | Feature vector for registration cannot be empty |
| 25 | HERR_SESS_REC_FEAT_SIZE_ERR | 1299 | Incorrect length of feature vector for registration |
| 26 | HERR_SESS_REC_INVALID_INDEX | 1300 | Invalid index number |
| 27 | HERR_SESS_REC_CONTRAST_FEAT_ERR | 1303 | Incorrect length of feature vector for comparison |
| 28 | HERR_SESS_REC_BLOCK_FULL | 1304 | Feature vector block full |
| 29 | HERR_SESS_REC_BLOCK_DEL_FAILURE | 1305 | Deletion failed |
| 30 | HERR_SESS_REC_BLOCK_UPDATE_FAILURE | 1306 | Update failed |
| 31 | HERR_SESS_REC_ID_ALREADY_EXIST | 1307 | ID already exists |
| 32 | HERR_SESS_FACE_DATA_ERROR | 1310 | Face data parsing |
| 33 | HERR_SESS_FACE_REC_OPTION_ERROR | 1320 | An optional parameter is incorrect |
| 34 | HERR_FT_HUB_DISABLE | 1329 | FeatureHub is disabled |
| 35 | HERR_FT_HUB_OPEN_ERROR | 1330 | Database open error |
| 36 | HERR_FT_HUB_NOT_OPENED | 1331 | Database not opened |
| 37 | HERR_FT_HUB_NO_RECORD_FOUND | 1332 | No record found |
| 38 | HERR_FT_HUB_CHECK_TABLE_ERROR | 1333 | Data table check error |
| 39 | HERR_FT_HUB_INSERT_FAILURE | 1334 | Data insertion error |
| 40 | HERR_FT_HUB_PREPARING_FAILURE | 1335 | Data preparation error |
| 41 | HERR_FT_HUB_EXECUTING_FAILURE | 1336 | SQL execution error |
| 42 | HERR_FT_HUB_NOT_VALID_FOLDER_PATH | 1337 | Invalid folder path |
| 43 | HERR_FT_HUB_ENABLE_REPETITION | 1338 | Enable db function repeatedly |
| 44 | HERR_FT_HUB_DISABLE_REPETITION | 1339 | Disable db function repeatedly |
| 45 | HERR_ARCHIVE_LOAD_FAILURE | 1360 | Archive load failure |
| 46 | HERR_ARCHIVE_LOAD_MODEL_FAILURE | 1361 | Model load failure |
| 47 | HERR_ARCHIVE_FILE_FORMAT_ERROR | 1362 | The archive format is incorrect |
| 48 | HERR_ARCHIVE_REPETITION_LOAD | 1363 | Do not reload the model |
| 49 | HERR_ARCHIVE_NOT_LOAD | 1364 | Model not loaded |
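
A minimal sketch of how these codes might be checked from the Python side, following the zero-means-success convention used throughout the bindings; the lookup table here is a hand-picked illustration, not part of the SDK:

```python
# Hypothetical helper: translate a native return code into a readable error,
# using a few entries from the table above (0 means success).
ERROR_MESSAGES = {
    1283: "Tracker module not initialized",
    1310: "Face data parsing",
    1329: "FeatureHub is disabled",
    1364: "Model not loaded",
}

def check(ret: int) -> None:
    if ret != 0:
        raise RuntimeError(ERROR_MESSAGES.get(ret, f"Unknown error code: {ret}"))
```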