10 Commits

Author SHA1 Message Date
Yakhyokhuja Valikhujaev
7c98a60d26 fix: Python 3.10 does not support tomllib (#43) 2025-12-24 00:51:36 +09:00
Yakhyokhuja Valikhujaev
d97a3b2cb2 Merge pull request #42 from yakhyo/feat/standardize-outputs
feat: Standardize detection output and several other updates
2025-12-24 00:38:32 +09:00
yakhyo
2200ba063c docs: Update related docs and ruff formatting 2025-12-24 00:34:24 +09:00
yakhyo
9bcbfa65c2 feat: Update detection module output to dataclasses 2025-12-24 00:00:00 +09:00
yakhyo
96306a0910 feat: Update github actions 2025-12-23 23:59:15 +09:00
Yakhyokhuja Valikhujaev
3389aa3e4c feat: Add MiniFasNet for Face Anti Spoofing (#41) 2025-12-20 22:34:47 +09:00
Yakhyokhuja Valikhujaev
b282e6ccc1 docs: Update related docs to face anonymization (#40) 2025-12-20 21:27:26 +09:00
Yakhyokhuja Valikhujaev
d085c6a822 feat: Add face blurring for privacy (#39)
* feat: Add face blurring for privacy

* chore: Revert back the version
2025-12-20 20:57:42 +09:00
yakhyo
13b518e96d chore: Upgrade version to v1.5.3 2025-12-15 15:09:54 +09:00
yakhyo
1b877bc9fc fix: Fix the version 2025-12-15 14:53:36 +09:00
35 changed files with 1885 additions and 208 deletions

.github/logos/gaze_crop.png (new binary file, 716 KiB)

.github/logos/gaze_org.png (new binary file, 673 KiB)

View File

@@ -10,14 +10,20 @@ on:
- main
- develop
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
runs-on: ${{ matrix.os }}
timeout-minutes: 15
strategy:
fail-fast: false
matrix:
python-version: ["3.10", "3.11", "3.12", "3.13"]
os: [ubuntu-latest, macos-latest, windows-latest]
python-version: ["3.11", "3.13"]
steps:
- name: Checkout code
@@ -27,7 +33,7 @@ jobs:
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
cache: 'pip'
cache: "pip"
- name: Install dependencies
run: |
@@ -38,21 +44,18 @@ jobs:
run: |
python -c "import onnxruntime as ort; print('Available providers:', ort.get_available_providers())"
- name: Lint with ruff (if available)
run: |
pip install ruff || true
ruff check . --exit-zero || true
continue-on-error: true
- name: Lint with ruff
run: ruff check .
- name: Run tests
run: pytest -v --tb=short
- name: Test package imports
run: |
python -c "from uniface import RetinaFace, ArcFace, Landmark106, AgeGender; print('All imports successful')"
run: python -c "import uniface; print(f'uniface {uniface.__version__} loaded with {len(uniface.__all__)} exports')"
build:
runs-on: ubuntu-latest
timeout-minutes: 10
needs: test
steps:
@@ -63,7 +66,7 @@ jobs:
uses: actions/setup-python@v5
with:
python-version: "3.10"
cache: 'pip'
cache: "pip"
- name: Install build tools
run: |
@@ -84,4 +87,3 @@ jobs:
name: dist-python-${{ github.sha }}
path: dist/
retention-days: 7

View File

@@ -5,9 +5,14 @@ on:
tags:
- "v*.*.*" # Trigger only on version tags like v0.1.9
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
validate:
runs-on: ubuntu-latest
timeout-minutes: 5
outputs:
version: ${{ steps.get_version.outputs.version }}
tag_version: ${{ steps.get_version.outputs.tag_version }}
@@ -16,13 +21,18 @@ jobs:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Get version from tag and pyproject.toml
id: get_version
run: |
TAG_VERSION=${GITHUB_REF#refs/tags/v}
echo "tag_version=$TAG_VERSION" >> $GITHUB_OUTPUT
PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' pyproject.toml)
PYPROJECT_VERSION=$(python -c "import tomllib; print(tomllib.load(open('pyproject.toml','rb'))['project']['version'])")
echo "version=$PYPROJECT_VERSION" >> $GITHUB_OUTPUT
echo "Tag version: v$TAG_VERSION"
@@ -38,12 +48,13 @@ jobs:
test:
runs-on: ubuntu-latest
timeout-minutes: 15
needs: validate
strategy:
fail-fast: false
matrix:
python-version: ["3.10", "3.11", "3.12", "3.13"]
python-version: ["3.11", "3.13"]
steps:
- name: Checkout code
@@ -65,6 +76,7 @@ jobs:
publish:
runs-on: ubuntu-latest
timeout-minutes: 10
needs: [validate, test]
permissions:
contents: write

View File

@@ -21,11 +21,28 @@ Thank you for considering contributing to UniFace! We welcome contributions of a
### Code Style
This project uses [Ruff](https://docs.astral.sh/ruff/) for linting and formatting.
```bash
# Check for linting errors
ruff check .
# Auto-fix linting errors
ruff check . --fix
# Format code
ruff format .
```
**Guidelines:**
- Follow PEP8 guidelines
- Use type hints (Python 3.10+)
- Write docstrings for public APIs
- Line length: 120 characters
- Keep code simple and readable
All PRs must pass `ruff check .` before merging.
## Development Setup
```bash
@@ -51,6 +68,7 @@ Example notebooks demonstrating library usage:
| Face Recognition | [face_analyzer.ipynb](examples/face_analyzer.ipynb) |
| Face Verification | [face_verification.ipynb](examples/face_verification.ipynb) |
| Face Search | [face_search.ipynb](examples/face_search.ipynb) |
| Face Anonymization | [face_anonymization.ipynb](examples/face_anonymization.ipynb) |
## Questions?

View File

@@ -404,6 +404,47 @@ print(f"Detected {len(np.unique(mask))} facial components")
---
## Anti-Spoofing Models
### MiniFASNet Family
Lightweight face anti-spoofing models for liveness detection. Detect if a face is real (live) or fake (photo, video replay, mask).
| Model Name | Size | Scale | Use Case |
| ---------- | ------ | ----- | ----------------------------- |
| `V1SE` | 1.2 MB | 4.0 | Squeeze-and-excitation variant |
| `V2` ⭐ | 1.2 MB | 2.7 | **Recommended default** |
**Dataset**: Trained on face anti-spoofing datasets
**Output**: Returns (label_idx, score) where label_idx: 0=Fake, 1=Real
#### Usage
```python
from uniface import RetinaFace
from uniface.spoofing import MiniFASNet
from uniface.constants import MiniFASNetWeights
# Default (V2, recommended)
detector = RetinaFace()
spoofer = MiniFASNet()
# V1SE variant
spoofer = MiniFASNet(model_name=MiniFASNetWeights.V1SE)
# Detect and check liveness
faces = detector.detect(image)
for face in faces:
label_idx, score = spoofer.predict(image, face.bbox)
# label_idx: 0 = Fake, 1 = Real
label = 'Real' if label_idx == 1 else 'Fake'
print(f"{label}: {score:.1%}")
```
**Note**: Requires face bounding box from a detector. Use with RetinaFace, SCRFD, or YOLOv5Face.
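In a verification pipeline the liveness check is typically used as a gate in front of recognition. A minimal sketch, assuming only the `RetinaFace`, `ArcFace`, and `MiniFASNet` APIs shown in this document (`photo.jpg` is a placeholder path):

```python
# Sketch: reject spoofed faces before extracting embeddings.
import cv2

from uniface import ArcFace, RetinaFace
from uniface.spoofing import MiniFASNet

detector = RetinaFace()
spoofer = MiniFASNet()
recognizer = ArcFace()

image = cv2.imread("photo.jpg")
for face in detector.detect(image):
    label_idx, score = spoofer.predict(image, face.bbox)
    if label_idx != 1:  # 0 = Fake, 1 = Real
        print(f"Rejected spoofed face ({score:.1%})")
        continue
    embedding = recognizer.get_normalized_embedding(image, face.landmarks)
    print(f"Live face ({score:.1%}), embedding shape: {embedding.shape}")
```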
---
## Model Updates
Models are automatically downloaded and cached on first use. Cache location: `~/.uniface/models/`
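To see what is already cached locally, a standard-library check is enough. A minimal sketch that assumes only the cache path stated above and that weights are stored as `.onnx` files directly in that directory (no uniface APIs required):

```python
# List cached ONNX weights under the documented cache directory (~/.uniface/models/).
from pathlib import Path

cache_dir = Path.home() / ".uniface" / "models"
if cache_dir.exists():
    for weight in sorted(cache_dir.glob("*.onnx")):
        print(f"{weight.name}: {weight.stat().st_size / 1e6:.1f} MB")
else:
    print("Cache is empty; models are downloaded on first use.")
```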
@@ -445,6 +486,7 @@ python scripts/download_model.py --model MNET_V2
- **Face Recognition Training**: [yakhyo/face-recognition](https://github.com/yakhyo/face-recognition) - ArcFace, MobileFace, SphereFace training code
- **Gaze Estimation Training**: [yakhyo/gaze-estimation](https://github.com/yakhyo/gaze-estimation) - MobileGaze training code and pretrained weights
- **Face Parsing Training**: [yakhyo/face-parsing](https://github.com/yakhyo/face-parsing) - BiSeNet training code and pretrained weights
- **Face Anti-Spoofing**: [yakhyo/face-anti-spoofing](https://github.com/yakhyo/face-anti-spoofing) - MiniFASNet ONNX inference (weights from [minivision-ai/Silent-Face-Anti-Spoofing](https://github.com/minivision-ai/Silent-Face-Anti-Spoofing))
- **InsightFace**: [deepinsight/insightface](https://github.com/deepinsight/insightface) - Model architectures and pretrained weights
### Papers

View File

@@ -39,9 +39,9 @@ faces = detector.detect(image)
# Print results
for i, face in enumerate(faces):
print(f"Face {i+1}:")
print(f" Confidence: {face['confidence']:.2f}")
print(f" BBox: {face['bbox']}")
print(f" Landmarks: {len(face['landmarks'])} points")
print(f" Confidence: {face.confidence:.2f}")
print(f" BBox: {face.bbox}")
print(f" Landmarks: {len(face.landmarks)} points")
```
**Output:**
@@ -70,9 +70,9 @@ image = cv2.imread("photo.jpg")
faces = detector.detect(image)
# Extract visualization data
bboxes = [f['bbox'] for f in faces]
scores = [f['confidence'] for f in faces]
landmarks = [f['landmarks'] for f in faces]
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
# Draw on image
draw_detections(
@@ -113,8 +113,8 @@ faces2 = detector.detect(image2)
if faces1 and faces2:
# Extract embeddings
emb1 = recognizer.get_normalized_embedding(image1, faces1[0]['landmarks'])
emb2 = recognizer.get_normalized_embedding(image2, faces2[0]['landmarks'])
emb1 = recognizer.get_normalized_embedding(image1, faces1[0].landmarks)
emb2 = recognizer.get_normalized_embedding(image2, faces2[0].landmarks)
# Compute similarity (cosine similarity)
similarity = np.dot(emb1, emb2.T)[0][0]
@@ -159,9 +159,9 @@ while True:
faces = detector.detect(frame)
# Draw results
bboxes = [f['bbox'] for f in faces]
scores = [f['confidence'] for f in faces]
landmarks = [f['landmarks'] for f in faces]
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
draw_detections(
image=frame,
bboxes=bboxes,
@@ -199,7 +199,7 @@ faces = detector.detect(image)
# Predict attributes
for i, face in enumerate(faces):
gender, age = age_gender.predict(image, face['bbox'])
gender, age = age_gender.predict(image, face.bbox)
gender_str = 'Female' if gender == 0 else 'Male'
print(f"Face {i+1}: {gender_str}, {age} years old")
```
@@ -230,7 +230,7 @@ image = cv2.imread("photo.jpg")
faces = detector.detect(image)
if faces:
landmarks = landmarker.get_landmarks(image, faces[0]['bbox'])
landmarks = landmarker.get_landmarks(image, faces[0].bbox)
print(f"Detected {len(landmarks)} landmarks")
# Draw landmarks
@@ -262,8 +262,7 @@ faces = detector.detect(image)
# Estimate gaze for each face
for i, face in enumerate(faces):
bbox = face['bbox']
x1, y1, x2, y2 = map(int, bbox[:4])
x1, y1, x2, y2 = map(int, face.bbox[:4])
face_crop = image[y1:y2, x1:x2]
if face_crop.size > 0:
@@ -271,7 +270,7 @@ for i, face in enumerate(faces):
print(f"Face {i+1}: pitch={np.degrees(pitch):.1f}°, yaw={np.degrees(yaw):.1f}°")
# Draw gaze direction
draw_gaze(image, bbox, pitch, yaw)
draw_gaze(image, face.bbox, pitch, yaw)
cv2.imwrite("gaze_output.jpg", image)
```
@@ -328,7 +327,138 @@ Detected 12 facial components
---
## 9. Batch Processing (3 minutes)
## 9. Face Anonymization (2 minutes)
Automatically blur faces for privacy protection:
```python
from uniface.privacy import anonymize_faces
import cv2
# One-liner: automatic detection and blurring
image = cv2.imread("group_photo.jpg")
anonymized = anonymize_faces(image, method='pixelate')
cv2.imwrite("anonymized.jpg", anonymized)
print("Faces anonymized successfully!")
```
**Manual control with custom parameters:**
```python
from uniface import RetinaFace
from uniface.privacy import BlurFace
# Initialize detector and blurrer
detector = RetinaFace()
blurrer = BlurFace(method='gaussian', blur_strength=5.0)
# Detect and anonymize
faces = detector.detect(image)
anonymized = blurrer.anonymize(image, faces)
cv2.imwrite("output.jpg", anonymized)
```
**Available blur methods:**
```python
# Pixelation (news media standard)
blurrer = BlurFace(method='pixelate', pixel_blocks=8)
# Gaussian blur (smooth, natural)
blurrer = BlurFace(method='gaussian', blur_strength=4.0)
# Black boxes (maximum privacy)
blurrer = BlurFace(method='blackout', color=(0, 0, 0))
# Elliptical blur (natural face shape)
blurrer = BlurFace(method='elliptical', blur_strength=3.0, margin=30)
# Median blur (edge-preserving)
blurrer = BlurFace(method='median', blur_strength=3.0)
```
**Webcam anonymization:**
```python
import cv2
from uniface import RetinaFace
from uniface.privacy import BlurFace
detector = RetinaFace()
blurrer = BlurFace(method='pixelate')
cap = cv2.VideoCapture(0)
while True:
ret, frame = cap.read()
if not ret:
break
faces = detector.detect(frame)
frame = blurrer.anonymize(frame, faces, inplace=True)
cv2.imshow('Anonymized', frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
```
**Command-line tool:**
```bash
# Anonymize image with pixelation
python scripts/run_anonymization.py --image photo.jpg
# Real-time webcam anonymization
python scripts/run_anonymization.py --webcam --method gaussian
# Custom blur strength
python scripts/run_anonymization.py --image photo.jpg --method gaussian --blur-strength 5.0
```
---
## 10. Face Anti-Spoofing (2 minutes)
Detect if a face is real or fake (photo, video replay, mask):
```python
from uniface import RetinaFace
from uniface.spoofing import MiniFASNet
import cv2
detector = RetinaFace()
spoofer = MiniFASNet() # Uses V2 by default
image = cv2.imread("photo.jpg")
faces = detector.detect(image)
for i, face in enumerate(faces):
label_idx, score = spoofer.predict(image, face.bbox)
# label_idx: 0 = Fake, 1 = Real
label = 'Real' if label_idx == 1 else 'Fake'
print(f"Face {i+1}: {label} ({score:.1%})")
```
**Output:**
```
Face 1: Real (98.5%)
```
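The score can also be used to treat borderline predictions as inconclusive rather than trusting the argmax label. A minimal sketch reusing `spoofer`, `image`, and `faces` from above (the 0.80 cutoff is an arbitrary illustration, not a library default):

```python
# Treat low-confidence predictions as inconclusive; 0.80 is an assumed, tunable cutoff.
for face in faces:
    label_idx, score = spoofer.predict(image, face.bbox)
    verdict = 'Uncertain' if score < 0.80 else ('Real' if label_idx == 1 else 'Fake')
    print(f"{verdict} ({score:.1%})")
```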
**Command-line tool:**
```bash
# Image
python scripts/run_spoofing.py --image photo.jpg
# Webcam
python scripts/run_spoofing.py --source 0
```
---
## 11. Batch Processing (3 minutes)
Process multiple images:
@@ -361,7 +491,7 @@ print("Done!")
---
## 10. Model Selection
## 12. Model Selection
Choose the right model for your use case:
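With the high-level helper, switching detectors is a one-argument change. A minimal sketch using the `detect_faces` API shown later in this PR; the `'scrfd'` method string is an assumption based on the detector classes the package exports:

```python
# Same call, different detection backend; pick per your speed/accuracy needs.
import cv2
from uniface import detect_faces

image = cv2.imread("photo.jpg")
faces_default = detect_faces(image, method='retinaface', conf_thresh=0.8)
faces_scrfd = detect_faces(image, method='scrfd', conf_thresh=0.8)  # 'scrfd' assumed valid
print(len(faces_default), len(faces_scrfd))
```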
@@ -503,6 +633,7 @@ Explore interactive examples for common tasks:
| **Face Verification** | Compare two faces to verify identity | [face_verification.ipynb](examples/face_verification.ipynb) |
| **Face Search** | Find a person in a group photo | [face_search.ipynb](examples/face_search.ipynb) |
| **Face Parsing** | Segment face into semantic components | [face_parsing.ipynb](examples/face_parsing.ipynb) |
| **Face Anonymization** | Blur or pixelate faces for privacy protection | [face_anonymization.ipynb](examples/face_anonymization.ipynb) |
| **Gaze Estimation** | Estimate gaze direction | [gaze_estimation.ipynb](examples/gaze_estimation.ipynb) |
### Additional Resources

README.md (121 lines changed)
View File

@@ -1,11 +1,15 @@
# UniFace: All-in-One Face Analysis Library
<div align="center">
[![License](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![Python](https://img.shields.io/badge/Python-3.10%2B-blue)](https://www.python.org/)
[![PyPI](https://img.shields.io/pypi/v/uniface.svg)](https://pypi.org/project/uniface/)
[![CI](https://github.com/yakhyo/uniface/actions/workflows/ci.yml/badge.svg)](https://github.com/yakhyo/uniface/actions)
[![Downloads](https://pepy.tech/badge/uniface)](https://pepy.tech/project/uniface)
[![DeepWiki](https://img.shields.io/badge/DeepWiki-yakhyo%2Funiface-blue.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACwAAAAyCAYAAAAnWDnqAAAAAXNSR0IArs4c6QAAA05JREFUaEPtmUtyEzEQhtWTQyQLHNak2AB7ZnyXZMEjXMGeK/AIi+QuHrMnbChYY7MIh8g01fJoopFb0uhhEqqcbWTp06/uv1saEDv4O3n3dV60RfP947Mm9/SQc0ICFQgzfc4CYZoTPAswgSJCCUJUnAAoRHOAUOcATwbmVLWdGoH//PB8mnKqScAhsD0kYP3j/Yt5LPQe2KvcXmGvRHcDnpxfL2zOYJ1mFwrryWTz0advv1Ut4CJgf5uhDuDj5eUcAUoahrdY/56ebRWeraTjMt/00Sh3UDtjgHtQNHwcRGOC98BJEAEymycmYcWwOprTgcB6VZ5JK5TAJ+fXGLBm3FDAmn6oPPjR4rKCAoJCal2eAiQp2x0vxTPB3ALO2CRkwmDy5WohzBDwSEFKRwPbknEggCPB/imwrycgxX2NzoMCHhPkDwqYMr9tRcP5qNrMZHkVnOjRMWwLCcr8ohBVb1OMjxLwGCvjTikrsBOiA6fNyCrm8V1rP93iVPpwaE+gO0SsWmPiXB+jikdf6SizrT5qKasx5j8ABbHpFTx+vFXp9EnYQmLx02h1QTTrl6eDqxLnGjporxl3NL3agEvXdT0WmEost648sQOYAeJS9Q7bfUVoMGnjo4AZdUMQku50McDcMWcBPvr0SzbTAFDfvJqwLzgxwATnCgnp4wDl6Aa+Ax283gghmj+vj7feE2KBBRMW3FzOpLOADl0Isb5587h/U4gGvkt5v60Z1VLG8BhYjbzRwyQZemwAd6cCR5/XFWLYZRIMpX39AR0tjaGGiGzLVyhse5C9RKC6ai42ppWPKiBagOvaYk8lO7DajerabOZP46Lby5wKjw1HCRx7p9sVMOWGzb/vA1hwiWc6jm3MvQDTogQkiqIhJV0nBQBTU+3okKCFDy9WwferkHjtxib7t3xIUQtHxnIwtx4mpg26/HfwVNVDb4oI9RHmx5WGelRVlrtiw43zboCLaxv46AZeB3IlTkwouebTr1y2NjSpHz68WNFjHvupy3q8TFn3Hos2IAk4Ju5dCo8B3wP7VPr/FGaKiG+T+v+TQqIrOqMTL1VdWV1DdmcbO8KXBz6esmYWYKPwDL5b5FA1a0hwapHiom0r/cKaoqr+27/XcrS5UwSMbQAAAABJRU5ErkJggg==)](https://deepwiki.com/yakhyo/uniface)
[![Downloads](https://static.pepy.tech/badge/uniface)](https://pepy.tech/project/uniface)
[![DeepWiki](https://img.shields.io/badge/DeepWiki-AI_Docs-blue.svg?logo=bookstack)](https://deepwiki.com/yakhyo/uniface)
</div>
<div align="center">
<img src=".github/logos/logo_web.webp" width=75%>
@@ -23,6 +27,8 @@
- **Face Parsing**: BiSeNet-based semantic segmentation with 19 facial component classes
- **Gaze Estimation**: Real-time gaze direction prediction with MobileGaze
- **Attribute Analysis**: Age, gender, and emotion detection
- **Anti-Spoofing**: Face liveness detection with MiniFASNet models
- **Face Anonymization**: Privacy-preserving face blurring with 5 methods (pixelate, gaussian, blackout, elliptical, median)
- **Face Alignment**: Precise alignment for downstream tasks
- **Hardware Acceleration**: ARM64 optimizations (Apple Silicon), CUDA (NVIDIA), CPU fallback
- **Simple API**: Intuitive factory functions and clean interfaces
@@ -99,9 +105,9 @@ faces = detector.detect(image)
# Process results
for face in faces:
bbox = face['bbox'] # [x1, y1, x2, y2]
confidence = face['confidence']
landmarks = face['landmarks'] # 5-point landmarks
bbox = face.bbox # np.ndarray [x1, y1, x2, y2]
confidence = face.confidence
landmarks = face.landmarks # np.ndarray (5, 2) landmarks
print(f"Face detected with confidence: {confidence:.2f}")
```
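For reference, the `Face` container returned by the detectors looks roughly like the following. This is a hedged sketch inferred from the attributes used throughout this PR, not the exact definition in `uniface/face.py`; the optional fields are the ones `FaceAnalyzer` fills in when the corresponding models are configured.

```python
# Approximate sketch of the Face result object (see uniface/face.py for the real definition).
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class Face:
    bbox: np.ndarray                         # shape (4,), [x1, y1, x2, y2]
    confidence: float                        # detection score in [0, 1]
    landmarks: np.ndarray                    # shape (5, 2), 5-point landmarks
    embedding: Optional[np.ndarray] = None   # set by FaceAnalyzer when a recognizer is configured
    age: Optional[int] = None                # set by FaceAnalyzer when AgeGender is configured
    gender: Optional[int] = None             # 0 = Female, 1 = Male, as in the attribute examples
```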
@@ -119,8 +125,8 @@ recognizer = ArcFace()
faces1 = detector.detect(image1)
faces2 = detector.detect(image2)
embedding1 = recognizer.get_normalized_embedding(image1, faces1[0]['landmarks'])
embedding2 = recognizer.get_normalized_embedding(image2, faces2[0]['landmarks'])
embedding1 = recognizer.get_normalized_embedding(image1, faces1[0].landmarks)
embedding2 = recognizer.get_normalized_embedding(image2, faces2[0].landmarks)
# Compare faces
similarity = compute_similarity(embedding1, embedding2)
@@ -136,7 +142,7 @@ detector = RetinaFace()
landmarker = Landmark106()
faces = detector.detect(image)
landmarks = landmarker.get_landmarks(image, faces[0]['bbox'])
landmarks = landmarker.get_landmarks(image, faces[0].bbox)
# Returns 106 (x, y) landmark points
```
@@ -149,7 +155,7 @@ detector = RetinaFace()
age_gender = AgeGender()
faces = detector.detect(image)
gender, age = age_gender.predict(image, faces[0]['bbox'])
gender, age = age_gender.predict(image, faces[0].bbox)
gender_str = 'Female' if gender == 0 else 'Male'
print(f"{gender_str}, {age} years old")
```
@@ -166,15 +172,14 @@ gaze_estimator = MobileGaze()
faces = detector.detect(image)
for face in faces:
bbox = face['bbox']
x1, y1, x2, y2 = map(int, bbox[:4])
x1, y1, x2, y2 = map(int, face.bbox[:4])
face_crop = image[y1:y2, x1:x2]
pitch, yaw = gaze_estimator.estimate(face_crop)
print(f"Gaze: pitch={np.degrees(pitch):.1f}°, yaw={np.degrees(yaw):.1f}°")
# Visualize
draw_gaze(image, bbox, pitch, yaw)
draw_gaze(image, face.bbox, pitch, yaw)
```
### Face Parsing
@@ -198,6 +203,77 @@ vis_result = vis_parsing_maps(face_rgb, mask, save_image=False)
print(f"Unique classes: {len(np.unique(mask))}")
```
### Face Anti-Spoofing
Detect if a face is real or fake (photo, video replay, mask):
```python
from uniface import RetinaFace
from uniface.spoofing import MiniFASNet
detector = RetinaFace()
spoofer = MiniFASNet() # Uses V2 by default
faces = detector.detect(image)
for face in faces:
label_idx, score = spoofer.predict(image, face.bbox)
# label_idx: 0 = Fake, 1 = Real
label = 'Real' if label_idx == 1 else 'Fake'
print(f"{label}: {score:.1%}")
```
### Face Anonymization
Protect privacy by blurring or pixelating faces with 5 different methods:
```python
from uniface import RetinaFace
from uniface.privacy import BlurFace, anonymize_faces
import cv2
# Method 1: One-liner with automatic detection
image = cv2.imread("photo.jpg")
anonymized = anonymize_faces(image, method='pixelate')
cv2.imwrite("anonymized.jpg", anonymized)
# Method 2: Manual control with custom parameters
detector = RetinaFace()
blurrer = BlurFace(method='gaussian', blur_strength=5.0)
faces = detector.detect(image)
anonymized = blurrer.anonymize(image, faces)
# Available blur methods:
methods = {
'pixelate': BlurFace(method='pixelate', pixel_blocks=10), # Blocky effect (news media standard)
'gaussian': BlurFace(method='gaussian', blur_strength=3.0), # Smooth, natural blur
'blackout': BlurFace(method='blackout', color=(0, 0, 0)), # Solid color boxes (maximum privacy)
'elliptical': BlurFace(method='elliptical', margin=20), # Soft oval blur (natural face shape)
'median': BlurFace(method='median', blur_strength=3.0) # Edge-preserving blur
}
# Real-time webcam anonymization
cap = cv2.VideoCapture(0)
detector = RetinaFace()
blurrer = BlurFace(method='pixelate')
while True:
ret, frame = cap.read()
if not ret:
break
faces = detector.detect(frame)
frame = blurrer.anonymize(frame, faces, inplace=True)
cv2.imshow('Anonymized', frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
```
---
## Documentation
@@ -216,6 +292,7 @@ print(f"Unique classes: {len(np.unique(mask))}")
from uniface.detection import RetinaFace, SCRFD
from uniface.recognition import ArcFace
from uniface.landmark import Landmark106
from uniface.privacy import BlurFace, anonymize_faces
from uniface.constants import SCRFDWeights
@@ -310,6 +387,12 @@ faces = detect_faces(image, method='retinaface', conf_thresh=0.8) # methods: re
| ---------- | ---------------------------------------- | ------------------------------------ |
| `BiSeNet` | `model_name=ParsingWeights.RESNET18`, `input_size=(512, 512)` | 19 facial component classes; BiSeNet architecture with ResNet backbone |
**Anti-Spoofing**
| Class | Key params (defaults) | Notes |
| ------------- | ----------------------------------------- | ------------------------------------ |
| `MiniFASNet` | `model_name=MiniFASNetWeights.V2` | Returns (label_idx, score); 0=Fake, 1=Real |
---
## Model Performance
@@ -357,6 +440,7 @@ Interactive examples covering common face analysis tasks:
| **Face Verification** | Compare two faces to verify identity | [face_verification.ipynb](examples/face_verification.ipynb) |
| **Face Search** | Find a person in a group photo | [face_search.ipynb](examples/face_search.ipynb) |
| **Face Parsing** | Segment face into semantic components | [face_parsing.ipynb](examples/face_parsing.ipynb) |
| **Face Anonymization** | Blur or pixelate faces for privacy protection | [face_anonymization.ipynb](examples/face_anonymization.ipynb) |
| **Gaze Estimation** | Estimate gaze direction from face images | [gaze_estimation.ipynb](examples/gaze_estimation.ipynb) |
### Webcam Face Detection
@@ -377,9 +461,9 @@ while True:
faces = detector.detect(frame)
# Extract data for visualization
bboxes = [f['bbox'] for f in faces]
scores = [f['confidence'] for f in faces]
landmarks = [f['landmarks'] for f in faces]
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
draw_detections(
image=frame,
@@ -413,7 +497,7 @@ for person_id, image_path in person_images.items():
faces = detector.detect(image)
if faces:
embedding = recognizer.get_normalized_embedding(
image, faces[0]['landmarks']
image, faces[0].landmarks
)
database[person_id] = embedding
@@ -422,7 +506,7 @@ query_image = cv2.imread("query.jpg")
query_faces = detector.detect(query_image)
if query_faces:
query_embedding = recognizer.get_normalized_embedding(
query_image, query_faces[0]['landmarks']
query_image, query_faces[0].landmarks
)
# Find best match
@@ -551,6 +635,8 @@ uniface/
│ ├── parsing/ # Face parsing
│ ├── gaze/ # Gaze estimation
│ ├── attribute/ # Age, gender, emotion
│ ├── spoofing/ # Face anti-spoofing
│ ├── privacy/ # Face anonymization & blurring
│ ├── onnx_utils.py # ONNX Runtime utilities
│ ├── model_store.py # Model download & caching
│ └── visualization.py # Drawing utilities
@@ -568,6 +654,7 @@ uniface/
- **Face Recognition Training**: [yakhyo/face-recognition](https://github.com/yakhyo/face-recognition) - ArcFace, MobileFace, SphereFace training code
- **Face Parsing Training**: [yakhyo/face-parsing](https://github.com/yakhyo/face-parsing) - BiSeNet face parsing training code and pretrained weights
- **Gaze Estimation Training**: [yakhyo/gaze-estimation](https://github.com/yakhyo/gaze-estimation) - MobileGaze training code and pretrained weights
- **Face Anti-Spoofing**: [yakhyo/face-anti-spoofing](https://github.com/yakhyo/face-anti-spoofing) - MiniFASNet ONNX inference (weights from [minivision-ai/Silent-Face-Anti-Spoofing](https://github.com/minivision-ai/Silent-Face-Anti-Spoofing))
- **InsightFace**: [deepinsight/insightface](https://github.com/deepinsight/insightface) - Model architectures and pretrained weights
## Contributing

View File

@@ -48,7 +48,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"1.3.1\n"
"1.6.0\n"
]
}
],
@@ -140,13 +140,13 @@
"\n",
" # Draw detections\n",
" bbox_image = image.copy()\n",
" bboxes = [f['bbox'] for f in faces]\n",
" scores = [f['confidence'] for f in faces]\n",
" landmarks = [f['landmarks'] for f in faces]\n",
" bboxes = [f.bbox for f in faces]\n",
" scores = [f.confidence for f in faces]\n",
" landmarks = [f.landmarks for f in faces]\n",
" draw_detections(image=bbox_image, bboxes=bboxes, scores=scores, landmarks=landmarks, vis_threshold=0.6, fancy_bbox=True)\n",
"\n",
" # Align first detected face (returns aligned image and inverse transform matrix)\n",
" first_landmarks = faces[0]['landmarks']\n",
" first_landmarks = faces[0].landmarks\n",
" aligned_image, _ = face_alignment(image, first_landmarks, image_size=112)\n",
"\n",
" # Convert BGR to RGB for visualization\n",
@@ -202,7 +202,8 @@
"source": [
"## Notes\n",
"\n",
"- `detect()` returns a list of face dictionaries with `bbox`, `confidence`, `landmarks`\n",
"- `detect()` returns a list of `Face` objects with `bbox`, `confidence`, `landmarks` attributes\n",
"- Access attributes using dot notation: `face.bbox`, `face.landmarks`\n",
"- `face_alignment()` uses 5-point landmarks to align and crop the face\n",
"- Default output size is 112x112 (standard for face recognition models)\n"
]

View File

@@ -44,7 +44,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"1.3.1\n"
"1.6.0\n"
]
}
],

File diff suppressed because one or more lines are too long

View File

@@ -44,7 +44,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"1.3.1\n"
"1.6.0\n"
]
}
],
@@ -153,14 +153,14 @@
"# Load image\n",
"image = cv2.imread(image_path)\n",
"\n",
"# Detect faces - returns list of face dictionaries\n",
"# Detect faces - returns list of Face objects\n",
"faces = detector.detect(image)\n",
"print(f'Detected {len(faces)} face(s)')\n",
"\n",
"# Unpack face data for visualization\n",
"bboxes = [f['bbox'] for f in faces]\n",
"scores = [f['confidence'] for f in faces]\n",
"landmarks = [f['landmarks'] for f in faces]\n",
"bboxes = [f.bbox for f in faces]\n",
"scores = [f.confidence for f in faces]\n",
"landmarks = [f.landmarks for f in faces]\n",
"\n",
"# Draw detections\n",
"draw_detections(image=image, bboxes=bboxes, scores=scores, landmarks=landmarks, vis_threshold=0.6, fancy_bbox=True)\n",
@@ -211,9 +211,9 @@
"faces = detector.detect(image, max_num=2)\n",
"print(f'Detected {len(faces)} face(s)')\n",
"\n",
"bboxes = [f['bbox'] for f in faces]\n",
"scores = [f['confidence'] for f in faces]\n",
"landmarks = [f['landmarks'] for f in faces]\n",
"bboxes = [f.bbox for f in faces]\n",
"scores = [f.confidence for f in faces]\n",
"landmarks = [f.landmarks for f in faces]\n",
"\n",
"draw_detections(image=image, bboxes=bboxes, scores=scores, landmarks=landmarks, vis_threshold=0.6, fancy_bbox=True)\n",
"\n",
@@ -258,9 +258,9 @@
"faces = detector.detect(image, max_num=5)\n",
"print(f'Detected {len(faces)} face(s)')\n",
"\n",
"bboxes = [f['bbox'] for f in faces]\n",
"scores = [f['confidence'] for f in faces]\n",
"landmarks = [f['landmarks'] for f in faces]\n",
"bboxes = [f.bbox for f in faces]\n",
"scores = [f.confidence for f in faces]\n",
"landmarks = [f.landmarks for f in faces]\n",
"\n",
"draw_detections(image=image, bboxes=bboxes, scores=scores, landmarks=landmarks, vis_threshold=0.6, fancy_bbox=True)\n",
"\n",
@@ -274,7 +274,8 @@
"source": [
"## Notes\n",
"\n",
"- `detect()` returns a list of dictionaries with keys: `bbox`, `confidence`, `landmarks`\n",
"- `detect()` returns a list of `Face` objects with attributes: `bbox`, `confidence`, `landmarks`\n",
"- Access attributes using dot notation: `face.bbox`, `face.confidence`, `face.landmarks`\n",
"- Adjust `conf_thresh` and `nms_thresh` for your use case\n",
"- Use `max_num` to limit detected faces"
]

View File

@@ -46,7 +46,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"UniFace version: 1.5.0\n"
"UniFace version: 1.6.0\n"
]
}
],
@@ -365,7 +365,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "base",
"language": "python",
"name": "python3"
},
@@ -379,7 +379,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.0"
"version": "3.13.5"
}
},
"nbformat": 4,

View File

@@ -42,7 +42,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"1.3.1\n"
"1.6.0\n"
]
}
],

View File

@@ -37,7 +37,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"1.3.1\n"
"1.6.0\n"
]
}
],

View File

@@ -44,7 +44,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"UniFace version: 1.4.0\n"
"UniFace version: 1.6.0\n"
]
}
],
@@ -152,8 +152,7 @@
"\n",
" # Estimate gaze for each face\n",
" for i, face in enumerate(faces):\n",
" bbox = face['bbox']\n",
" x1, y1, x2, y2 = map(int, bbox[:4])\n",
" x1, y1, x2, y2 = map(int, face.bbox[:4])\n",
" face_crop = image[y1:y2, x1:x2]\n",
"\n",
" if face_crop.size > 0:\n",
@@ -164,7 +163,7 @@
" print(f' Face {i+1}: pitch={pitch_deg:.1f}°, yaw={yaw_deg:.1f}°')\n",
"\n",
" # Draw gaze without angle text\n",
" draw_gaze(image, bbox, pitch, yaw, draw_angles=False)\n",
" draw_gaze(image, face.bbox, pitch, yaw, draw_angles=False)\n",
"\n",
" # Convert BGR to RGB for display\n",
" original_rgb = cv2.cvtColor(original, cv2.COLOR_BGR2RGB)\n",
@@ -249,7 +248,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "base",
"language": "python",
"name": "python3"
},
@@ -263,7 +262,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.0"
"version": "3.13.5"
}
},
"nbformat": 4,

View File

@@ -1,6 +1,6 @@
[project]
name = "uniface"
version = "1.5.0"
version = "1.6.0"
description = "UniFace: A Comprehensive Library for Face Detection, Recognition, Landmark Analysis, Face Parsing, Gaze Estimation, Age, and Gender Detection"
readme = "README.md"
license = { text = "MIT" }
@@ -9,7 +9,7 @@ maintainers = [
{ name = "Yakhyokhuja Valikhujaev", email = "yakhyo9696@gmail.com" },
]
requires-python = ">=3.10,<3.14"
requires-python = ">=3.11,<3.14"
keywords = [
"face-detection",
"face-recognition",
@@ -34,7 +34,6 @@ classifiers = [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
@@ -73,7 +72,7 @@ uniface = ["py.typed"]
[tool.ruff]
line-length = 120
target-version = "py310"
target-version = "py311"
exclude = [
".git",
".ruff_cache",

View File

@@ -7,6 +7,7 @@ Scripts for testing UniFace features.
| Script | Description |
|--------|-------------|
| `run_detection.py` | Face detection on image or webcam |
| `run_anonymization.py` | Face anonymization/blurring for privacy |
| `run_age_gender.py` | Age and gender prediction |
| `run_emotion.py` | Emotion detection (7 or 8 emotions) |
| `run_gaze_estimation.py` | Gaze direction estimation |
@@ -26,6 +27,11 @@ Scripts for testing UniFace features.
python scripts/run_detection.py --image assets/test.jpg
python scripts/run_detection.py --webcam
# Face anonymization
python scripts/run_anonymization.py --image assets/test.jpg --method pixelate
python scripts/run_anonymization.py --webcam --method gaussian
python scripts/run_anonymization.py --image photo.jpg --method pixelate --pixel-blocks 5
# Age and gender
python scripts/run_age_gender.py --image assets/test.jpg
python scripts/run_age_gender.py --webcam

View File

@@ -0,0 +1,207 @@
# Face anonymization/blurring for privacy
# Usage: python run_anonymization.py --image path/to/image.jpg --method pixelate
# python run_anonymization.py --webcam --method gaussian
import argparse
import os
import cv2
from uniface import RetinaFace
from uniface.privacy import BlurFace
def process_image(
detector,
blurrer: BlurFace,
image_path: str,
save_dir: str = 'outputs',
show_detections: bool = False,
):
"""Process a single image."""
image = cv2.imread(image_path)
if image is None:
print(f"Error: Failed to load image from '{image_path}'")
return
# Detect faces
faces = detector.detect(image)
print(f'Detected {len(faces)} face(s)')
# Optionally draw detection boxes before blurring
if show_detections and faces:
from uniface.visualization import draw_detections
preview = image.copy()
bboxes = [face.bbox for face in faces]
scores = [face.confidence for face in faces]
landmarks = [face.landmarks for face in faces]
draw_detections(preview, bboxes, scores, landmarks)
# Show preview
cv2.imshow('Detections (Press any key to continue)', preview)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Anonymize faces
if faces:
anonymized = blurrer.anonymize(image, faces)
else:
anonymized = image
# Save output
os.makedirs(save_dir, exist_ok=True)
basename = os.path.splitext(os.path.basename(image_path))[0]
output_path = os.path.join(save_dir, f'{basename}_anonymized.jpg')
cv2.imwrite(output_path, anonymized)
print(f'Output saved: {output_path}')
def run_webcam(detector, blurrer: BlurFace):
"""Run real-time anonymization on webcam."""
cap = cv2.VideoCapture(0)
if not cap.isOpened():
print('Cannot open webcam')
return
print("Press 'q' to quit")
while True:
ret, frame = cap.read()
if not ret:
break
frame = cv2.flip(frame, 1)  # mirror for natural interaction
# Detect and anonymize
faces = detector.detect(frame)
if faces:
frame = blurrer.anonymize(frame, faces, inplace=True)
# Display info
cv2.putText(
frame,
f'Faces blurred: {len(faces)} | Method: {blurrer.method}',
(10, 30),
cv2.FONT_HERSHEY_SIMPLEX,
0.7,
(0, 255, 0),
2,
)
cv2.imshow('Face Anonymization (Press q to quit)', frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
def main():
parser = argparse.ArgumentParser(
description='Face anonymization using various blur methods',
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# Anonymize image with pixelation (default)
python run_anonymization.py --image photo.jpg
# Use Gaussian blur with custom strength
python run_anonymization.py --image photo.jpg --method gaussian --blur-strength 5.0
# Real-time webcam anonymization
python run_anonymization.py --webcam --method pixelate
# Black boxes for maximum privacy
python run_anonymization.py --image photo.jpg --method blackout
# Custom pixelation intensity
python run_anonymization.py --image photo.jpg --method pixelate --pixel-blocks 5
""",
)
# Input/output
parser.add_argument('--image', type=str, help='Path to input image')
parser.add_argument('--webcam', action='store_true', help='Use webcam for real-time anonymization')
parser.add_argument('--save-dir', type=str, default='outputs', help='Output directory (default: outputs)')
# Blur method
parser.add_argument(
'--method',
type=str,
default='pixelate',
choices=['gaussian', 'pixelate', 'blackout', 'elliptical', 'median'],
help='Blur method (default: pixelate)',
)
# Method-specific parameters
parser.add_argument(
'--blur-strength',
type=float,
default=3.0,
help='Blur strength for gaussian/elliptical/median (default: 3.0)',
)
parser.add_argument(
'--pixel-blocks',
type=int,
default=20,
help='Number of pixel blocks for pixelate (default: 20, lower = more pixelated)',
)
parser.add_argument(
'--color',
type=str,
default='0,0,0',
help='Fill color for blackout as R,G,B (default: 0,0,0 for black)',
)
parser.add_argument('--margin', type=int, default=20, help='Margin for elliptical blur (default: 20)')
# Detection
parser.add_argument(
'--conf-thresh',
type=float,
default=0.5,
help='Detection confidence threshold (default: 0.5)',
)
# Visualization
parser.add_argument(
'--show-detections',
action='store_true',
help='Show detection boxes before blurring (image mode only)',
)
args = parser.parse_args()
# Validate input
if not args.image and not args.webcam:
parser.error('Either --image or --webcam must be specified')
# Parse color
color_values = [int(x) for x in args.color.split(',')]
if len(color_values) != 3:
parser.error('--color must be in format R,G,B (e.g., 0,0,0)')
color = tuple(color_values)
# Initialize detector
print(f'Initializing face detector (conf_thresh={args.conf_thresh})...')
detector = RetinaFace(conf_thresh=args.conf_thresh)
# Initialize blurrer
print(f'Initializing blur method: {args.method}')
blurrer = BlurFace(
method=args.method,
blur_strength=args.blur_strength,
pixel_blocks=args.pixel_blocks,
color=color,
margin=args.margin,
)
# Run
if args.webcam:
run_webcam(detector, blurrer)
else:
process_image(detector, blurrer, args.image, args.save_dir, args.show_detections)
if __name__ == '__main__':
main()

scripts/run_spoofing.py (new file, 201 lines)
View File

@@ -0,0 +1,201 @@
# Face Anti-Spoofing Detection
# Usage:
# Image: python run_spoofing.py --image path/to/image.jpg
# Video: python run_spoofing.py --video path/to/video.mp4
# Webcam: python run_spoofing.py --source 0
import argparse
import os
from pathlib import Path
import cv2
import numpy as np
from uniface import RetinaFace
from uniface.constants import MiniFASNetWeights
from uniface.spoofing import create_spoofer
def draw_spoofing_result(
image: np.ndarray,
bbox: list,
label_idx: int,
score: float,
thickness: int = 2,
) -> None:
"""Draw bounding box with anti-spoofing result.
Args:
image: Input image to draw on.
bbox: Bounding box in [x1, y1, x2, y2] format.
label_idx: Prediction label index (0 = Fake, 1 = Real).
score: Confidence score (0.0 to 1.0).
thickness: Line thickness for bounding box.
"""
x1, y1, x2, y2 = map(int, bbox[:4])
# Color based on result (green for real, red for fake)
is_real = label_idx == 1
color = (0, 255, 0) if is_real else (0, 0, 255)
# Draw bounding box
cv2.rectangle(image, (x1, y1), (x2, y2), color, thickness)
# Prepare label
label = 'Real' if is_real else 'Fake'
text = f'{label}: {score:.1%}'
# Draw label background
(tw, th), baseline = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2)
cv2.rectangle(image, (x1, y1 - th - 10), (x1 + tw + 10, y1), color, -1)
# Draw label text
cv2.putText(image, text, (x1 + 5, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
def process_image(detector, spoofer, image_path: str, save_dir: str = 'outputs') -> None:
"""Process a single image for face anti-spoofing detection."""
image = cv2.imread(image_path)
if image is None:
print(f"Error: Failed to load image from '{image_path}'")
return
# Detect faces
faces = detector.detect(image)
print(f'Detected {len(faces)} face(s)')
if not faces:
print('No faces detected in the image.')
return
# Run anti-spoofing on each face
for i, face in enumerate(faces, 1):
label_idx, score = spoofer.predict(image, face.bbox)
# label_idx: 0 = Fake, 1 = Real
label = 'Real' if label_idx == 1 else 'Fake'
print(f' Face {i}: {label} ({score:.1%})')
# Draw result on image
draw_spoofing_result(image, face.bbox, label_idx, score)
# Save output
os.makedirs(save_dir, exist_ok=True)
output_path = os.path.join(save_dir, f'{Path(image_path).stem}_spoofing.jpg')
cv2.imwrite(output_path, image)
print(f'Output saved: {output_path}')
def process_video(detector, spoofer, source, save_dir: str = 'outputs') -> None:
"""Process video or webcam stream for face anti-spoofing detection."""
# Handle webcam or video file
if isinstance(source, int) or source.isdigit():
cap = cv2.VideoCapture(int(source))
is_webcam = True
output_name = 'webcam_spoofing.mp4'
else:
cap = cv2.VideoCapture(source)
is_webcam = False
output_name = f'{Path(source).stem}_spoofing.mp4'
if not cap.isOpened():
print(f'Error: Failed to open video source: {source}')
return
# Get video properties
fps = int(cap.get(cv2.CAP_PROP_FPS)) if not is_webcam else 30
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
# Setup video writer
os.makedirs(save_dir, exist_ok=True)
output_path = os.path.join(save_dir, output_name)
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
writer = cv2.VideoWriter(output_path, fourcc, fps, (width, height))
print("Processing video... Press 'q' to quit")
frame_count = 0
try:
while cap.isOpened():
ret, frame = cap.read()
if not ret:
break
frame_count += 1
# Detect faces
faces = detector.detect(frame)
# Run anti-spoofing on each face
for face in faces:
label_idx, score = spoofer.predict(frame, face.bbox)
draw_spoofing_result(frame, face.bbox, label_idx, score)
# Write frame
writer.write(frame)
# Display frame
cv2.imshow('Face Anti-Spoofing', frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
print('Stopped by user.')
break
finally:
cap.release()
writer.release()
cv2.destroyAllWindows()
print(f'Processed {frame_count} frames')
if not is_webcam:
print(f'Output saved: {output_path}')
def main():
parser = argparse.ArgumentParser(description='Face Anti-Spoofing Detection')
parser.add_argument('--image', type=str, help='Path to input image')
parser.add_argument('--video', type=str, help='Path to input video')
parser.add_argument('--source', type=str, help='Video source (0 for webcam)')
parser.add_argument(
'--model',
type=str,
default='v2',
choices=['v1se', 'v2'],
help='Model variant: v1se or v2 (default: v2)',
)
parser.add_argument('--scale', type=float, default=None, help='Custom crop scale (default: auto)')
parser.add_argument('--save_dir', type=str, default='outputs', help='Output directory')
args = parser.parse_args()
# Check that at least one input source is provided
if not any([args.image, args.video, args.source]):
parser.print_help()
print('\nError: Please provide --image, --video, or --source')
return
# Select model variant
model_name = MiniFASNetWeights.V1SE if args.model == 'v1se' else MiniFASNetWeights.V2
# Initialize models
print(f'Initializing models (MiniFASNet {args.model.upper()})...')
detector = RetinaFace()
spoofer = create_spoofer(model_name=model_name, scale=args.scale)
# Process input
if args.image:
if not os.path.exists(args.image):
print(f'Error: Image not found: {args.image}')
return
process_image(detector, spoofer, args.image, args.save_dir)
elif args.video:
if not os.path.exists(args.video):
print(f'Error: Video not found: {args.video}')
return
process_video(detector, spoofer, args.video, args.save_dir)
elif args.source:
process_video(detector, spoofer, args.source, args.save_dir)
if __name__ == '__main__':
main()

View File

@@ -13,7 +13,7 @@
__license__ = 'MIT'
__author__ = 'Yakhyokhuja Valikhujaev'
__version__ = '1.5.0'
__version__ = '1.6.0'
from uniface.face_utils import compute_similarity, face_alignment
@@ -40,7 +40,9 @@ from .detection import (
from .gaze import MobileGaze, create_gaze_estimator
from .landmark import Landmark106, create_landmarker
from .parsing import BiSeNet, create_face_parser
from .privacy import BlurFace, anonymize_faces
from .recognition import ArcFace, MobileFace, SphereFace, create_recognizer
from .spoofing import MiniFASNet, create_spoofer
__all__ = [
'__author__',
@@ -55,6 +57,7 @@ __all__ = [
'create_gaze_estimator',
'create_landmarker',
'create_recognizer',
'create_spoofer',
'detect_faces',
'list_available_detectors',
# Detection models
@@ -74,6 +77,11 @@ __all__ = [
# Attribute models
'AgeGender',
'Emotion',
# Spoofing models
'MiniFASNet',
# Privacy
'BlurFace',
'anonymize_faces',
# Utilities
'compute_similarity',
'draw_detections',

View File

@@ -36,41 +36,24 @@ class FaceAnalyzer:
def analyze(self, image: np.ndarray) -> List[Face]:
"""Analyze faces in an image."""
detections = self.detector.detect(image)
Logger.debug(f'Detected {len(detections)} face(s)')
faces = self.detector.detect(image)
Logger.debug(f'Detected {len(faces)} face(s)')
faces = []
for idx, detection in enumerate(detections):
bbox = detection['bbox']
confidence = detection['confidence']
landmarks = detection['landmarks']
embedding = None
for idx, face in enumerate(faces):
if self.recognizer is not None:
try:
embedding = self.recognizer.get_normalized_embedding(image, landmarks)
Logger.debug(f' Face {idx + 1}: Extracted embedding with shape {embedding.shape}')
face.embedding = self.recognizer.get_normalized_embedding(image, face.landmarks)
Logger.debug(f' Face {idx + 1}: Extracted embedding with shape {face.embedding.shape}')
except Exception as e:
Logger.warning(f' Face {idx + 1}: Failed to extract embedding: {e}')
age, gender = None, None
if self.age_gender is not None:
try:
gender, age = self.age_gender.predict(image, bbox)
Logger.debug(f' Face {idx + 1}: Age={age}, Gender={gender}')
face.gender, face.age = self.age_gender.predict(image, face.bbox)
Logger.debug(f' Face {idx + 1}: Age={face.age}, Gender={face.gender}')
except Exception as e:
Logger.warning(f' Face {idx + 1}: Failed to predict age/gender: {e}')
face = Face(
bbox=bbox,
confidence=confidence,
landmarks=landmarks,
embedding=embedding,
age=age,
gender=gender,
)
faces.append(face)
Logger.info(f'Analysis complete: {len(faces)} face(s) processed')
return faces

View File

@@ -119,6 +119,20 @@ class ParsingWeights(str, Enum):
RESNET34 = "parsing_resnet34"
class MiniFASNetWeights(str, Enum):
"""
MiniFASNet: Lightweight Face Anti-Spoofing models.
Trained on face anti-spoofing datasets.
https://github.com/yakhyo/face-anti-spoofing
Model Variants:
- V1SE: Uses scale=4.0 for face crop (squeeze-and-excitation version)
- V2: Uses scale=2.7 for face crop (improved version)
"""
V1SE = "minifasnet_v1se"
V2 = "minifasnet_v2"
MODEL_URLS: Dict[Enum, str] = {
# RetinaFace
RetinaFaceWeights.MNET_025: 'https://github.com/yakhyo/uniface/releases/download/weights/retinaface_mv1_0.25.onnx',
@@ -161,6 +175,9 @@ MODEL_URLS: Dict[Enum, str] = {
# Parsing
ParsingWeights.RESNET18: 'https://github.com/yakhyo/face-parsing/releases/download/weights/resnet18.onnx',
ParsingWeights.RESNET34: 'https://github.com/yakhyo/face-parsing/releases/download/weights/resnet34.onnx',
# Anti-Spoofing (MiniFASNet)
MiniFASNetWeights.V1SE: 'https://github.com/yakhyo/face-anti-spoofing/releases/download/weights/MiniFASNetV1SE.onnx',
MiniFASNetWeights.V2: 'https://github.com/yakhyo/face-anti-spoofing/releases/download/weights/MiniFASNetV2.onnx',
}
MODEL_SHA256: Dict[Enum, str] = {
@@ -205,6 +222,9 @@ MODEL_SHA256: Dict[Enum, str] = {
# Face Parsing
ParsingWeights.RESNET18: '0d9bd318e46987c3bdbfacae9e2c0f461cae1c6ac6ea6d43bbe541a91727e33f',
ParsingWeights.RESNET34: '5b805bba7b5660ab7070b5a381dcf75e5b3e04199f1e9387232a77a00095102e',
# Anti-Spoofing (MiniFASNet)
MiniFASNetWeights.V1SE: 'ebab7f90c7833fbccd46d3a555410e78d969db5438e169b6524be444862b3676',
MiniFASNetWeights.V2: 'b32929adc2d9c34b9486f8c4c7bc97c1b69bc0ea9befefc380e4faae4e463907',
}
CHUNK_SIZE = 8192

View File

@@ -7,6 +7,8 @@ from typing import Any, Dict, List
import numpy as np
from uniface.face import Face
from .base import BaseDetector
from .retinaface import RetinaFace
from .scrfd import SCRFD
@@ -16,7 +18,7 @@ from .yolov5 import YOLOv5Face
_detector_cache: Dict[str, BaseDetector] = {}
def detect_faces(image: np.ndarray, method: str = 'retinaface', **kwargs) -> List[Dict[str, Any]]:
def detect_faces(image: np.ndarray, method: str = 'retinaface', **kwargs) -> List[Face]:
"""
High-level face detection function.
@@ -26,18 +28,18 @@ def detect_faces(image: np.ndarray, method: str = 'retinaface', **kwargs) -> Lis
**kwargs: Additional arguments passed to the detector.
Returns:
List[Dict[str, Any]]: A list of dictionaries, where each dictionary represents a detected face and contains:
- 'bbox' (List[float]): [x1, y1, x2, y2] bounding box coordinates.
- 'confidence' (float): The confidence score of the detection.
- 'landmarks' (List[List[float]]): 5-point facial landmarks.
List[Face]: A list of Face objects, each containing:
- bbox (np.ndarray): [x1, y1, x2, y2] bounding box coordinates.
- confidence (float): The confidence score of the detection.
- landmarks (np.ndarray): 5-point facial landmarks with shape (5, 2).
Example:
>>> from uniface import detect_faces
>>> image = cv2.imread("your_image.jpg")
>>> faces = detect_faces(image, method='retinaface', conf_thresh=0.8)
>>> for face in faces:
... print(f"Found face with confidence: {face['confidence']}")
... print(f"BBox: {face['bbox']}")
... print(f"Found face with confidence: {face.confidence}")
... print(f"BBox: {face.bbox}")
"""
method_name = method.lower()

View File

@@ -7,6 +7,8 @@ from typing import Any, Dict, List
import numpy as np
from uniface.face import Face
class BaseDetector(ABC):
"""
@@ -21,7 +23,7 @@ class BaseDetector(ABC):
self.config = kwargs
@abstractmethod
def detect(self, image: np.ndarray, **kwargs) -> List[Dict[str, Any]]:
def detect(self, image: np.ndarray, **kwargs) -> List[Face]:
"""
Detect faces in an image.
@@ -30,18 +32,17 @@ class BaseDetector(ABC):
**kwargs: Additional detection parameters
Returns:
List[Dict[str, Any]]: List of detected faces, where each dictionary contains:
- 'bbox' (np.ndarray): Bounding box coordinates with shape (4,) as [x1, y1, x2, y2]
- 'confidence' (float): Detection confidence score (0.0 to 1.0)
- 'landmarks' (np.ndarray): Facial landmarks with shape (5, 2) for 5-point landmarks
or (68, 2) for 68-point landmarks. Empty array if not supported.
List[Face]: List of detected Face objects, each containing:
- bbox (np.ndarray): Bounding box coordinates with shape (4,) as [x1, y1, x2, y2]
- confidence (float): Detection confidence score (0.0 to 1.0)
- landmarks (np.ndarray): Facial landmarks with shape (5, 2) for 5-point landmarks
Example:
>>> faces = detector.detect(image)
>>> for face in faces:
... bbox = face['bbox'] # np.ndarray with shape (4,)
... confidence = face['confidence'] # float
... landmarks = face['landmarks'] # np.ndarray with shape (5, 2)
... bbox = face.bbox # np.ndarray with shape (4,)
... confidence = face.confidence # float
... landmarks = face.landmarks # np.ndarray with shape (5, 2)
"""
pass

View File

@@ -2,7 +2,7 @@
# Author: Yakhyokhuja Valikhujaev
# GitHub: https://github.com/yakhyo
from typing import Any, Dict, List, Literal, Tuple
from typing import Any, List, Literal, Tuple
import numpy as np
@@ -14,6 +14,7 @@ from uniface.common import (
resize_image,
)
from uniface.constants import RetinaFaceWeights
from uniface.face import Face
from uniface.log import Logger
from uniface.model_store import verify_model_weights
from uniface.onnx_utils import create_onnx_session
@@ -154,7 +155,7 @@ class RetinaFace(BaseDetector):
max_num: int = 0,
metric: Literal['default', 'max'] = 'max',
center_weight: float = 2.0,
) -> List[Dict[str, Any]]:
) -> List[Face]:
"""
Perform face detection on an input image and return bounding boxes and facial landmarks.
@@ -168,19 +169,19 @@ class RetinaFace(BaseDetector):
when using the "default" metric. Defaults to 2.0.
Returns:
List[Dict[str, Any]]: List of face detection dictionaries, each containing:
- 'bbox' (np.ndarray): Bounding box coordinates with shape (4,) as [x1, y1, x2, y2]
- 'confidence' (float): Detection confidence score (0.0 to 1.0)
- 'landmarks' (np.ndarray): 5-point facial landmarks with shape (5, 2)
List[Face]: List of Face objects, each containing:
- bbox (np.ndarray): Bounding box coordinates with shape (4,) as [x1, y1, x2, y2]
- confidence (float): Detection confidence score (0.0 to 1.0)
- landmarks (np.ndarray): 5-point facial landmarks with shape (5, 2)
Example:
>>> faces = detector.detect(image)
>>> for face in faces:
... bbox = face['bbox'] # np.ndarray with shape (4,)
... confidence = face['confidence'] # float
... landmarks = face['landmarks'] # np.ndarray with shape (5, 2)
... bbox = face.bbox # np.ndarray with shape (4,)
... confidence = face.confidence # float
... landmarks = face.landmarks # np.ndarray with shape (5, 2)
... # Can pass landmarks directly to recognition
... embedding = recognizer.get_normalized_embedding(image, landmarks)
... embedding = recognizer.get_normalized_embedding(image, face.landmarks)
"""
original_height, original_width = image.shape[:2]
@@ -229,12 +230,12 @@ class RetinaFace(BaseDetector):
faces = []
for i in range(detections.shape[0]):
face_dict = {
'bbox': detections[i, :4],
'confidence': float(detections[i, 4]),
'landmarks': landmarks[i],
}
faces.append(face_dict)
face = Face(
bbox=detections[i, :4],
confidence=float(detections[i, 4]),
landmarks=landmarks[i],
)
faces.append(face)
return faces
@@ -350,19 +351,12 @@ if __name__ == '__main__':
# Process each detected face
for face in faces:
# Extract bbox and landmarks from dictionary
bbox = face['bbox'] # [x1, y1, x2, y2]
landmarks = face['landmarks'] # [[x1, y1], [x2, y2], ...]
confidence = face['confidence']
# Extract bbox and landmarks from Face object
draw_bbox(frame, face.bbox, face.confidence)
# Pass bbox and confidence separately
draw_bbox(frame, bbox, confidence)
# Convert landmarks to numpy array format if needed
if landmarks is not None and len(landmarks) > 0:
# Convert list of [x, y] pairs to numpy array
points = np.array(landmarks, dtype=np.float32) # Shape: (5, 2)
draw_keypoints(frame, points)
# Draw landmarks if available
if face.landmarks is not None and len(face.landmarks) > 0:
draw_keypoints(frame, face.landmarks)
# Display face count
cv2.putText(

View File

@@ -2,13 +2,14 @@
# Author: Yakhyokhuja Valikhujaev
# GitHub: https://github.com/yakhyo
from typing import Any, Dict, List, Literal, Tuple
from typing import Any, List, Literal, Tuple
import cv2
import numpy as np
from uniface.common import distance2bbox, distance2kps, non_max_suppression, resize_image
from uniface.constants import SCRFDWeights
from uniface.face import Face
from uniface.log import Logger
from uniface.model_store import verify_model_weights
from uniface.onnx_utils import create_onnx_session
@@ -193,7 +194,7 @@ class SCRFD(BaseDetector):
max_num: int = 0,
metric: Literal['default', 'max'] = 'max',
center_weight: float = 2.0,
) -> List[Dict[str, Any]]:
) -> List[Face]:
"""
Perform face detection on an input image and return bounding boxes and facial landmarks.
@@ -207,19 +208,19 @@ class SCRFD(BaseDetector):
when using the "default" metric. Defaults to 2.0.
Returns:
List[Dict[str, Any]]: List of face detection dictionaries, each containing:
- 'bbox' (np.ndarray): Bounding box coordinates with shape (4,) as [x1, y1, x2, y2]
- 'confidence' (float): Detection confidence score (0.0 to 1.0)
- 'landmarks' (np.ndarray): 5-point facial landmarks with shape (5, 2)
List[Face]: List of Face objects, each containing:
- bbox (np.ndarray): Bounding box coordinates with shape (4,) as [x1, y1, x2, y2]
- confidence (float): Detection confidence score (0.0 to 1.0)
- landmarks (np.ndarray): 5-point facial landmarks with shape (5, 2)
Example:
>>> faces = detector.detect(image)
>>> for face in faces:
... bbox = face['bbox'] # np.ndarray with shape (4,)
... confidence = face['confidence'] # float
... landmarks = face['landmarks'] # np.ndarray with shape (5, 2)
... bbox = face.bbox # np.ndarray with shape (4,)
... confidence = face.confidence # float
... landmarks = face.landmarks # np.ndarray with shape (5, 2)
... # Can pass landmarks directly to recognition
... embedding = recognizer.get_normalized_embedding(image, landmarks)
... embedding = recognizer.get_normalized_embedding(image, face.landmarks)
"""
original_height, original_width = image.shape[:2]
@@ -280,12 +281,12 @@ class SCRFD(BaseDetector):
faces = []
for i in range(detections.shape[0]):
face_dict = {
'bbox': detections[i, :4],
'confidence': float(detections[i, 4]),
'landmarks': landmarks[i],
}
faces.append(face_dict)
face = Face(
bbox=detections[i, :4],
confidence=float(detections[i, 4]),
landmarks=landmarks[i],
)
faces.append(face)
return faces
@@ -324,19 +325,12 @@ if __name__ == '__main__':
# Process each detected face
for face in faces:
# Extract bbox and landmarks from dictionary
bbox = face['bbox'] # [x1, y1, x2, y2]
landmarks = face['landmarks'] # [[x1, y1], [x2, y2], ...]
confidence = face['confidence']
# Extract bbox and landmarks from Face object
draw_bbox(frame, face.bbox, face.confidence)
# Pass bbox and confidence separately
draw_bbox(frame, bbox, confidence)
# Convert landmarks to numpy array format if needed
if landmarks is not None and len(landmarks) > 0:
# Convert list of [x, y] pairs to numpy array
points = np.array(landmarks, dtype=np.float32) # Shape: (5, 2)
draw_keypoints(frame, points)
# Draw landmarks if available
if face.landmarks is not None and len(face.landmarks) > 0:
draw_keypoints(frame, face.landmarks)
# Display face count
cv2.putText(

View File

@@ -2,13 +2,14 @@
# Author: Yakhyokhuja Valikhujaev
# GitHub: https://github.com/yakhyo
from typing import Any, Dict, List, Literal, Tuple
from typing import Any, List, Literal, Tuple
import cv2
import numpy as np
from uniface.common import non_max_suppression
from uniface.constants import YOLOv5FaceWeights
from uniface.face import Face
from uniface.log import Logger
from uniface.model_store import verify_model_weights
from uniface.onnx_utils import create_onnx_session
@@ -259,7 +260,7 @@ class YOLOv5Face(BaseDetector):
max_num: int = 0,
metric: Literal['default', 'max'] = 'max',
center_weight: float = 2.0,
) -> List[Dict[str, Any]]:
) -> List[Face]:
"""
Perform face detection on an input image and return bounding boxes and facial landmarks.
@@ -273,19 +274,19 @@ class YOLOv5Face(BaseDetector):
when using the "default" metric. Defaults to 2.0.
Returns:
List[Dict[str, Any]]: List of face detection dictionaries, each containing:
- 'bbox' (np.ndarray): Bounding box coordinates with shape (4,) as [x1, y1, x2, y2]
- 'confidence' (float): Detection confidence score (0.0 to 1.0)
- 'landmarks' (np.ndarray): 5-point facial landmarks with shape (5, 2)
List[Face]: List of Face objects, each containing:
- bbox (np.ndarray): Bounding box coordinates with shape (4,) as [x1, y1, x2, y2]
- confidence (float): Detection confidence score (0.0 to 1.0)
- landmarks (np.ndarray): 5-point facial landmarks with shape (5, 2)
Example:
>>> faces = detector.detect(image)
>>> for face in faces:
... bbox = face['bbox'] # np.ndarray with shape (4,)
... confidence = face['confidence'] # float
... landmarks = face['landmarks'] # np.ndarray with shape (5, 2)
... bbox = face.bbox # np.ndarray with shape (4,)
... confidence = face.confidence # float
... landmarks = face.landmarks # np.ndarray with shape (5, 2)
... # Can pass landmarks directly to recognition
... embedding = recognizer.get_normalized_embedding(image, landmarks)
... embedding = recognizer.get_normalized_embedding(image, face.landmarks)
"""
original_height, original_width = image.shape[:2]
@@ -330,11 +331,11 @@ class YOLOv5Face(BaseDetector):
faces = []
for i in range(detections.shape[0]):
face_dict = {
'bbox': detections[i, :4],
'confidence': float(detections[i, 4]),
'landmarks': landmarks[i],
}
faces.append(face_dict)
face = Face(
bbox=detections[i, :4],
confidence=float(detections[i, 4]),
landmarks=landmarks[i],
)
faces.append(face)
return faces

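Since YOLOv5Face now returns the same Face objects as SCRFD, downstream code can treat the two detectors interchangeably. A tiny sketch (detector and image constructed as in the __main__ example; the 0.6 threshold is illustrative, not a library default):

import numpy as np

faces = detector.detect(image)                    # SCRFD or YOLOv5Face; both return List[Face]
if faces:
    boxes = np.stack([f.bbox for f in faces])     # (N, 4) array of [x1, y1, x2, y2]
    scores = np.array([f.confidence for f in faces])
    keep = scores >= 0.6                          # example threshold
    print(f'{int(keep.sum())} of {len(faces)} faces at or above 0.6 confidence')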
View File

@@ -51,8 +51,4 @@ def create_gaze_estimator(method: str = 'mobilegaze', **kwargs) -> BaseGazeEstim
raise ValueError(f"Unsupported gaze estimation method: '{method}'. Available: {available}")
__all__ = [
'create_gaze_estimator',
'MobileGaze',
'BaseGazeEstimator',
]
__all__ = ['create_gaze_estimator', 'MobileGaze', 'BaseGazeEstimator']

View File

@@ -0,0 +1,52 @@
# Copyright 2025 Yakhyokhuja Valikhujaev
# Author: Yakhyokhuja Valikhujaev
# GitHub: https://github.com/yakhyo
from typing import Optional
import numpy as np
from .blur import BlurFace
def anonymize_faces(
image: np.ndarray,
detector: Optional[object] = None,
method: str = 'pixelate',
blur_strength: float = 3.0,
pixel_blocks: int = 10,
conf_thresh: float = 0.5,
**kwargs,
) -> np.ndarray:
"""One-line face anonymization with automatic detection.
Args:
image (np.ndarray): Input image (BGR format).
detector: Face detector instance. Creates RetinaFace if None.
method (str): Blur method name. Defaults to 'pixelate'.
blur_strength (float): Blur intensity. Defaults to 3.0.
pixel_blocks (int): Block count for pixelate. Defaults to 10.
conf_thresh (float): Detection confidence threshold. Defaults to 0.5.
**kwargs: Additional detector arguments.
Returns:
np.ndarray: Anonymized image.
Example:
>>> from uniface.privacy import anonymize_faces
>>> anonymized = anonymize_faces(image, method='pixelate')
"""
if detector is None:
try:
from uniface import RetinaFace
detector = RetinaFace(conf_thresh=conf_thresh, **kwargs)
except ImportError as err:
raise ImportError('Could not import RetinaFace. Please ensure UniFace is properly installed.') from err
faces = detector.detect(image)
blurrer = BlurFace(method=method, blur_strength=blur_strength, pixel_blocks=pixel_blocks)
return blurrer.anonymize(image, faces)
__all__ = ['BlurFace', 'anonymize_faces']

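A short usage sketch for the helper above, reusing one detector across calls; the blur method choice and file names are illustrative.

import cv2
from uniface import RetinaFace
from uniface.privacy import anonymize_faces

image = cv2.imread('group_photo.jpg')              # hypothetical input
detector = RetinaFace(conf_thresh=0.5)             # build once, reuse for many images
anonymized = anonymize_faces(image, detector=detector, method='gaussian', blur_strength=3.0)
cv2.imwrite('group_photo_anon.jpg', anonymized)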
193
uniface/privacy/blur.py Normal file
View File

@@ -0,0 +1,193 @@
# Copyright 2025 Yakhyokhuja Valikhujaev
# Author: Yakhyokhuja Valikhujaev
# GitHub: https://github.com/yakhyo
from typing import Dict, List, Tuple, Union
import cv2
import numpy as np
__all__ = ['BlurFace']
def _gaussian_blur(region: np.ndarray, strength: float = 3.0) -> np.ndarray:
"""Apply Gaussian blur to a region."""
h, w = region.shape[:2]
kernel_size = max(3, int((min(h, w) / 7) * strength)) | 1
return cv2.GaussianBlur(region, (kernel_size, kernel_size), 0)
def _median_blur(region: np.ndarray, strength: float = 3.0) -> np.ndarray:
"""Apply median blur to a region."""
h, w = region.shape[:2]
kernel_size = max(3, int((min(h, w) / 7) * strength)) | 1
return cv2.medianBlur(region, kernel_size)
def _pixelate_blur(region: np.ndarray, blocks: int = 10) -> np.ndarray:
"""Apply pixelation to a region."""
h, w = region.shape[:2]
temp_h, temp_w = max(1, h // blocks), max(1, w // blocks)
temp = cv2.resize(region, (temp_w, temp_h), interpolation=cv2.INTER_LINEAR)
return cv2.resize(temp, (w, h), interpolation=cv2.INTER_NEAREST)
def _blackout_blur(region: np.ndarray, color: Tuple[int, int, int] = (0, 0, 0)) -> np.ndarray:
"""Replace region with solid color."""
return np.full_like(region, color)
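# Note on the kernel-size formula used by _gaussian_blur and _median_blur above:
# the kernel grows with the region size and blur_strength, and '| 1' bumps the
# result to an odd value as OpenCV requires. For example, a 140x140 face region
# at strength 3.0 gives max(3, int((140 / 7) * 3.0)) | 1 == 60 | 1 == 61.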
class EllipticalBlur:
"""Elliptical blur with soft, feathered edges.
This blur applies Gaussian blur within an elliptical mask that follows
the natural oval shape of faces, requiring full image context for proper blending.
Args:
blur_strength (float): Blur intensity multiplier. Defaults to 3.0.
margin (int): Extra pixels to extend ellipse beyond bbox. Defaults to 20.
"""
def __init__(self, blur_strength: float = 3.0, margin: int = 20):
self.blur_strength = blur_strength
self.margin = margin
def __call__(
self,
image: np.ndarray,
bboxes: List[Union[Tuple, List]],
inplace: bool = False,
) -> np.ndarray:
if not inplace:
image = image.copy()
h, w = image.shape[:2]
for bbox in bboxes:
x1, y1, x2, y2 = map(int, bbox)
center_x, center_y = (x1 + x2) // 2, (y1 + y2) // 2
axes_x = (x2 - x1) // 2 + self.margin
axes_y = (y2 - y1) // 2 + self.margin
# Create soft elliptical mask
mask = np.zeros((h, w), dtype=np.float32)
cv2.ellipse(mask, (center_x, center_y), (axes_x, axes_y), 0, 0, 360, 255, -1)
mask = cv2.GaussianBlur(mask, (51, 51), 0) / 255.0
mask = mask[:, :, np.newaxis]
kernel_size = max(3, int((min(axes_y, axes_x) * 2 / 7) * self.blur_strength)) | 1
blurred = cv2.GaussianBlur(image, (kernel_size, kernel_size), 0)
image = (blurred * mask + image * (1 - mask)).astype(np.uint8)
return image
class BlurFace:
"""Face blurring with multiple anonymization methods.
Args:
method (str): Blur method - 'gaussian', 'pixelate', 'blackout', 'elliptical', or 'median'.
Defaults to 'pixelate'.
blur_strength (float): Intensity for gaussian/elliptical/median. Defaults to 3.0.
pixel_blocks (int): Block count for pixelate. Defaults to 15.
color (Tuple[int, int, int]): Fill color (BGR) for blackout. Defaults to (0, 0, 0).
margin (int): Edge margin for elliptical. Defaults to 20.
Example:
>>> blurrer = BlurFace(method='pixelate')
>>> anonymized = blurrer.anonymize(image, faces)
"""
VALID_METHODS = {'gaussian', 'pixelate', 'blackout', 'elliptical', 'median'}
def __init__(
self,
method: str = 'pixelate',
blur_strength: float = 3.0,
pixel_blocks: int = 15,
color: Tuple[int, int, int] = (0, 0, 0),
margin: int = 20,
):
self.method = method.lower()
self._blur_strength = blur_strength
self._pixel_blocks = pixel_blocks
self._color = color
self._margin = margin
if self.method not in self.VALID_METHODS:
raise ValueError(f"Invalid blur method: '{method}'. Choose from: {sorted(self.VALID_METHODS)}")
if self.method == 'elliptical':
self._elliptical = EllipticalBlur(blur_strength, margin)
def _blur_region(self, region: np.ndarray) -> np.ndarray:
if self.method == 'gaussian':
return _gaussian_blur(region, self._blur_strength)
elif self.method == 'median':
return _median_blur(region, self._blur_strength)
elif self.method == 'pixelate':
return _pixelate_blur(region, self._pixel_blocks)
elif self.method == 'blackout':
return _blackout_blur(region, self._color)
def anonymize(
self,
image: np.ndarray,
faces: List[Dict],
inplace: bool = False,
) -> np.ndarray:
"""Anonymize faces in an image.
Args:
image (np.ndarray): Input image (BGR format).
faces (List[Dict]): Face detections with 'bbox' key containing [x1, y1, x2, y2].
inplace (bool): Modify image in-place if True. Defaults to False.
Returns:
np.ndarray: Image with anonymized faces.
"""
if not faces:
return image if inplace else image.copy()
bboxes = [face['bbox'] for face in faces]
return self.blur_regions(image, bboxes, inplace)
def blur_regions(
self,
image: np.ndarray,
bboxes: List[Union[Tuple, List]],
inplace: bool = False,
) -> np.ndarray:
"""Blur specific rectangular regions in an image.
Args:
image (np.ndarray): Input image (BGR format).
bboxes (List): Bounding boxes as [x1, y1, x2, y2].
inplace (bool): Modify image in-place if True. Defaults to False.
Returns:
np.ndarray: Image with blurred regions.
"""
if not bboxes:
return image if inplace else image.copy()
if self.method == 'elliptical':
return self._elliptical(image, bboxes, inplace)
if not inplace:
image = image.copy()
h, w = image.shape[:2]
for bbox in bboxes:
x1, y1, x2, y2 = map(int, bbox)
x1, y1 = max(0, x1), max(0, y1)
x2, y2 = min(w, x2), min(h, y2)
if x2 > x1 and y2 > y1:
image[y1:y2, x1:x2] = self._blur_region(image[y1:y2, x1:x2])
return image
def __repr__(self) -> str:
return f"BlurFace(method='{self.method}')"

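A small sketch of driving BlurFace directly with raw boxes, bypassing a detector; the coordinates and file names are made up for illustration.

import cv2
from uniface.privacy import BlurFace

image = cv2.imread('frame.png')                       # hypothetical frame
boxes = [(120, 80, 260, 240), (400, 90, 520, 230)]    # [x1, y1, x2, y2] regions to hide
blurrer = BlurFace(method='elliptical', blur_strength=3.0, margin=20)
out = blurrer.blur_regions(image, boxes)              # inplace=False, original left untouched
cv2.imwrite('frame_blurred.png', out)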
View File

@@ -55,10 +55,4 @@ def create_recognizer(method: str = 'arcface', **kwargs) -> BaseRecognizer:
raise ValueError(f"Unsupported method: '{method}'. Available: {available}")
__all__ = [
'create_recognizer',
'ArcFace',
'MobileFace',
'SphereFace',
'BaseRecognizer',
]
__all__ = ['create_recognizer', 'BaseRecognizer', 'ArcFace', 'MobileFace', 'SphereFace']

View File

@@ -0,0 +1,64 @@
# Copyright 2025 Yakhyokhuja Valikhujaev
# Author: Yakhyokhuja Valikhujaev
# GitHub: https://github.com/yakhyo
from typing import Optional
from uniface.constants import MiniFASNetWeights
from .base import BaseSpoofer
from .minifasnet import MiniFASNet
__all__ = [
'BaseSpoofer',
'MiniFASNet',
'MiniFASNetWeights',
'create_spoofer',
]
def create_spoofer(
model_name: MiniFASNetWeights = MiniFASNetWeights.V2,
scale: Optional[float] = None,
) -> MiniFASNet:
"""
Factory function to create a face anti-spoofing model.
This is a convenience function that creates a MiniFASNet instance
with the specified model variant and optional custom scale.
Args:
model_name (MiniFASNetWeights): The model variant to use.
Options:
- MiniFASNetWeights.V2: Improved version (default), uses scale=2.7
- MiniFASNetWeights.V1SE: Squeeze-and-excitation version, uses scale=4.0
Defaults to MiniFASNetWeights.V2.
scale (Optional[float]): Custom crop scale factor for face region.
If None, uses the default scale for the selected model variant.
Returns:
MiniFASNet: An initialized face anti-spoofing model.
Example:
>>> from uniface.spoofing import create_spoofer, MiniFASNetWeights
>>> from uniface import RetinaFace
>>>
>>> # Create with default settings (V2 model)
>>> spoofer = create_spoofer()
>>>
>>> # Create with V1SE model
>>> spoofer = create_spoofer(model_name=MiniFASNetWeights.V1SE)
>>>
>>> # Create with custom scale
>>> spoofer = create_spoofer(scale=3.0)
>>>
>>> # Use with face detector
>>> detector = RetinaFace()
>>> faces = detector.detect(image)
>>> for face in faces:
... label_idx, score = spoofer.predict(image, face['bbox'])
... # label_idx: 0 = Fake, 1 = Real
... label = 'Real' if label_idx == 1 else 'Fake'
... print(f'{label}: {score:.2%}')
"""
return MiniFASNet(model_name=model_name, scale=scale)

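Given that detectors in this PR now return Face objects, an end-to-end spoofing check would look roughly like the sketch below; it assumes face.bbox carries the [x1, y1, x2, y2] box documented in the detector changes, and the image path is made up.

import cv2
from uniface import RetinaFace
from uniface.spoofing import create_spoofer

image = cv2.imread('webcam_frame.jpg')       # hypothetical input
detector = RetinaFace()
spoofer = create_spoofer()                   # V2 weights, scale 2.7 by default
for face in detector.detect(image):
    label_idx, score = spoofer.predict(image, face.bbox)   # attribute access instead of face['bbox']
    print('Real' if label_idx == 1 else 'Fake', f'{score:.2%}')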
117
uniface/spoofing/base.py Normal file
View File

@@ -0,0 +1,117 @@
# Copyright 2025 Yakhyokhuja Valikhujaev
# Author: Yakhyokhuja Valikhujaev
# GitHub: https://github.com/yakhyo
from abc import ABC, abstractmethod
from typing import List, Tuple, Union
import numpy as np
class BaseSpoofer(ABC):
"""
Abstract base class for all face anti-spoofing models.
This class defines the common interface that all anti-spoofing models must implement,
ensuring consistency across different spoofing detection methods. Anti-spoofing models
detect whether a face is real (live person) or fake (photo, video, mask, etc.).
The prediction returns a tuple of (label_idx, score):
- label_idx: 0 = Fake (spoof), 1 = Real (live)
- score: Confidence score for the predicted label (0.0 to 1.0)
"""
@abstractmethod
def _initialize_model(self) -> None:
"""
Initialize the underlying model for inference.
This method should handle loading model weights, creating the
inference session (e.g., ONNX Runtime), and any necessary
setup procedures to prepare the model for prediction.
Raises:
RuntimeError: If the model fails to load or initialize.
"""
raise NotImplementedError('Subclasses must implement the _initialize_model method.')
@abstractmethod
def preprocess(self, image: np.ndarray, bbox: Union[List, np.ndarray]) -> np.ndarray:
"""
Preprocess the input image for model inference.
This method should crop the face region using the bounding box,
resize it to the model's expected input size, and normalize
the pixel values as required by the model.
Args:
image (np.ndarray): Input image in BGR format with shape (H, W, C).
bbox (Union[List, np.ndarray]): Face bounding box in [x1, y1, x2, y2] format.
Returns:
np.ndarray: The preprocessed image tensor ready for inference,
typically with shape (1, C, H, W).
"""
raise NotImplementedError('Subclasses must implement the preprocess method.')
@abstractmethod
def postprocess(self, outputs: np.ndarray) -> Tuple[int, float]:
"""
Postprocess raw model outputs into prediction result.
This method takes the raw output from the model's inference and
converts it into a label index and confidence score.
Args:
outputs (np.ndarray): Raw outputs from the model inference (logits).
Returns:
Tuple[int, float]: A tuple of (label_idx, score) where:
- label_idx: 0 = Fake (spoof), 1 = Real (live)
- score: Confidence score for the predicted label (0.0 to 1.0)
"""
raise NotImplementedError('Subclasses must implement the postprocess method.')
@abstractmethod
def predict(self, image: np.ndarray, bbox: Union[List, np.ndarray]) -> Tuple[int, float]:
"""
Perform end-to-end anti-spoofing prediction on a face.
This method orchestrates the full pipeline: preprocessing the input,
running inference, and postprocessing to return the prediction.
Args:
image (np.ndarray): Input image in BGR format containing the face.
bbox (Union[List, np.ndarray]): Face bounding box in [x1, y1, x2, y2] format.
This is typically obtained from a face detector.
Returns:
Tuple[int, float]: A tuple of (label_idx, score) where:
- label_idx: 0 = Fake (spoof), 1 = Real (live)
- score: Confidence score for the predicted label (0.0 to 1.0)
Example:
>>> spoofer = MiniFASNet()
>>> detector = RetinaFace()
>>> faces = detector.detect(image)
>>> for face in faces:
... label_idx, score = spoofer.predict(image, face['bbox'])
... label = 'Real' if label_idx == 1 else 'Fake'
... print(f'{label}: {score:.2%}')
"""
raise NotImplementedError('Subclasses must implement the predict method.')
def __call__(self, image: np.ndarray, bbox: Union[List, np.ndarray]) -> Tuple[int, float]:
"""
Provides a convenient, callable shortcut for the `predict` method.
Args:
image (np.ndarray): Input image in BGR format.
bbox (Union[List, np.ndarray]): Face bounding box in [x1, y1, x2, y2] format.
Returns:
Tuple[int, float]: A tuple of (label_idx, score) where:
- label_idx: 0 = Fake (spoof), 1 = Real (live)
- score: Confidence score for the predicted label (0.0 to 1.0)
"""
return self.predict(image, bbox)

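To illustrate the contract above, a bare-bones subclass outline; the class name and its constant output are invented, and it performs no real anti-spoofing.

from typing import List, Tuple, Union

import numpy as np

from uniface.spoofing import BaseSpoofer

class AlwaysRealSpoofer(BaseSpoofer):
    """Toy spoofer that labels every face as real; interface illustration only."""

    def _initialize_model(self) -> None:
        pass  # no weights to load in this toy example

    def preprocess(self, image: np.ndarray, bbox: Union[List, np.ndarray]) -> np.ndarray:
        x1, y1, x2, y2 = map(int, bbox[:4])
        return image[y1:y2, x1:x2]            # crop only; real models resize and normalize here

    def postprocess(self, outputs: np.ndarray) -> Tuple[int, float]:
        return 1, 1.0                          # 1 = Real, full confidence

    def predict(self, image: np.ndarray, bbox: Union[List, np.ndarray]) -> Tuple[int, float]:
        _ = self.preprocess(image, bbox)
        return self.postprocess(np.zeros((1, 2)))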
View File

@@ -0,0 +1,225 @@
# Copyright 2025 Yakhyokhuja Valikhujaev
# Author: Yakhyokhuja Valikhujaev
# GitHub: https://github.com/yakhyo
from typing import List, Optional, Tuple, Union
import cv2
import numpy as np
from uniface.constants import MiniFASNetWeights
from uniface.log import Logger
from uniface.model_store import verify_model_weights
from uniface.onnx_utils import create_onnx_session
from .base import BaseSpoofer
__all__ = ['MiniFASNet']
# Default crop scales for each model variant
DEFAULT_SCALES = {
MiniFASNetWeights.V1SE: 4.0,
MiniFASNetWeights.V2: 2.7,
}
class MiniFASNet(BaseSpoofer):
"""
MiniFASNet: Lightweight Face Anti-Spoofing with ONNX Runtime.
MiniFASNet is a face anti-spoofing model that detects whether a face is real
(live person) or fake (photo, video replay, mask, etc.). It supports two model
variants: V1SE (with squeeze-and-excitation) and V2 (improved version).
The model takes a face region cropped from the image using a bounding box
and predicts whether it's a real or spoofed face.
Reference:
https://github.com/yakhyo/face-anti-spoofing
Args:
model_name (MiniFASNetWeights): The enum specifying the model variant to load.
Options: V1SE (scale=4.0), V2 (scale=2.7).
Defaults to `MiniFASNetWeights.V2`.
scale (Optional[float]): Custom crop scale factor for face region.
If None, uses the default scale for the selected model variant.
V1SE uses 4.0, V2 uses 2.7.
Attributes:
scale (float): Crop scale factor for face region extraction.
input_size (Tuple[int, int]): Model input dimensions (width, height).
Example:
>>> from uniface.spoofing import MiniFASNet
>>> from uniface import RetinaFace
>>>
>>> detector = RetinaFace()
>>> spoofer = MiniFASNet()
>>>
>>> # Detect faces and check if they are real
>>> faces = detector.detect(image)
>>> for face in faces:
... label_idx, score = spoofer.predict(image, face['bbox'])
... # label_idx: 0 = Fake, 1 = Real
... label = 'Real' if label_idx == 1 else 'Fake'
... print(f'{label}: {score:.2%}')
"""
def __init__(
self,
model_name: MiniFASNetWeights = MiniFASNetWeights.V2,
scale: Optional[float] = None,
) -> None:
Logger.info(f'Initializing MiniFASNet with model={model_name.name}')
# Use default scale for the model variant if not specified
self.scale = scale if scale is not None else DEFAULT_SCALES.get(model_name, 2.7)
self.model_path = verify_model_weights(model_name)
self._initialize_model()
def _initialize_model(self) -> None:
"""
Initialize the ONNX model from the stored model path.
Raises:
RuntimeError: If the model fails to load or initialize.
"""
try:
self.session = create_onnx_session(self.model_path)
# Get input configuration
input_cfg = self.session.get_inputs()[0]
self.input_name = input_cfg.name
# Input shape is (batch, channels, height, width) - we need (width, height)
self.input_size = tuple(input_cfg.shape[2:4][::-1]) # (width, height)
# Get output configuration
output_cfg = self.session.get_outputs()[0]
self.output_name = output_cfg.name
Logger.info(f'MiniFASNet initialized with input size {self.input_size}, scale={self.scale}')
except Exception as e:
Logger.error(f"Failed to load MiniFASNet model from '{self.model_path}'", exc_info=True)
raise RuntimeError(f'Failed to initialize MiniFASNet model: {e}') from e
def _xyxy_to_xywh(self, bbox: Union[List, np.ndarray]) -> List[int]:
"""Convert bounding box from [x1, y1, x2, y2] to [x, y, w, h] format."""
x1, y1, x2, y2 = bbox[:4]
return [int(x1), int(y1), int(x2 - x1), int(y2 - y1)]
def _crop_face(self, image: np.ndarray, bbox_xywh: List[int]) -> np.ndarray:
"""
Crop and resize face region from image using scale factor.
The crop is centered on the face bounding box and scaled to capture
more context around the face, which is important for anti-spoofing.
Args:
image: Input image in BGR format.
bbox_xywh: Face bounding box in [x, y, w, h] format.
Returns:
Cropped and resized face region.
"""
src_h, src_w = image.shape[:2]
x, y, box_w, box_h = bbox_xywh
# Calculate the scale to apply based on image and face size
scale = min((src_h - 1) / box_h, (src_w - 1) / box_w, self.scale)
new_w = box_w * scale
new_h = box_h * scale
# Calculate center of the bounding box
center_x = x + box_w / 2
center_y = y + box_h / 2
# Calculate new bounding box coordinates
x1 = max(0, int(center_x - new_w / 2))
y1 = max(0, int(center_y - new_h / 2))
x2 = min(src_w - 1, int(center_x + new_w / 2))
y2 = min(src_h - 1, int(center_y + new_h / 2))
# Crop and resize
cropped = image[y1 : y2 + 1, x1 : x2 + 1]
resized = cv2.resize(cropped, self.input_size)
return resized
def preprocess(self, image: np.ndarray, bbox: Union[List, np.ndarray]) -> np.ndarray:
"""
Preprocess the input image for model inference.
Crops the face region, converts to float32, and arranges
dimensions for the model (NCHW format).
Args:
image: Input image in BGR format with shape (H, W, C).
bbox: Face bounding box in [x1, y1, x2, y2] format.
Returns:
Preprocessed image tensor with shape (1, C, H, W).
"""
# Convert bbox format
bbox_xywh = self._xyxy_to_xywh(bbox)
# Crop and resize face region
face = self._crop_face(image, bbox_xywh)
# Convert to float32 (no normalization needed for this model)
face = face.astype(np.float32)
# HWC -> CHW -> NCHW
face = np.transpose(face, (2, 0, 1))
face = np.expand_dims(face, axis=0)
return face
def _softmax(self, x: np.ndarray) -> np.ndarray:
"""Apply softmax to logits along axis 1."""
e_x = np.exp(x - np.max(x, axis=1, keepdims=True))
return e_x / e_x.sum(axis=1, keepdims=True)
def postprocess(self, outputs: np.ndarray) -> Tuple[int, float]:
"""
Postprocess raw model outputs into prediction result.
Applies softmax to convert logits to probabilities and
returns the predicted label index and confidence score.
Args:
outputs: Raw outputs from the model inference (logits).
Returns:
Tuple[int, float]: A tuple of (label_idx, score) where:
- label_idx: 0 = Fake (spoof), 1 = Real (live)
- score: Confidence score for the predicted label (0.0 to 1.0)
"""
probs = self._softmax(outputs)
label_idx = int(np.argmax(probs))
score = float(probs[0, label_idx])
return label_idx, score
def predict(self, image: np.ndarray, bbox: Union[List, np.ndarray]) -> Tuple[int, float]:
"""
Perform end-to-end anti-spoofing prediction on a face.
Args:
image: Input image in BGR format containing the face.
bbox: Face bounding box in [x1, y1, x2, y2] format.
Returns:
Tuple[int, float]: A tuple of (label_idx, score) where:
- label_idx: 0 = Fake (spoof), 1 = Real (live)
- score: Confidence score for the predicted label (0.0 to 1.0)
"""
# Preprocess
input_tensor = self.preprocess(image, bbox)
# Run inference
outputs = self.session.run([self.output_name], {self.input_name: input_tensor})[0]
# Postprocess and return
return self.postprocess(outputs)
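
For intuition, a rough numeric walk-through of the _crop_face scaling above, using made-up numbers.

# A 720x1280 (HxW) frame with a 100x100 face at (x=300, y=200) and the V2 default scale:
src_h, src_w = 720, 1280
x, y, box_w, box_h = 300, 200, 100, 100
scale = min((src_h - 1) / box_h, (src_w - 1) / box_w, 2.7)   # -> 2.7, image is large enough
new_w, new_h = box_w * scale, box_h * scale                  # -> 270.0 x 270.0
# So roughly a 270x270 crop centered on the face (clamped to the image borders)
# is resized to the model input size before inference; on small frames the scale
# would clamp below 2.7 instead.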