23 Commits
v3.2.0 ... main

Author SHA1 Message Date
Yakhyokhuja Valikhujaev
7882ec5cb4 chore: Update docstrings and comments (#119) 2026-05-11 01:07:14 +09:00
dependabot[bot]
d51d030545 chore(deps): bump gitpython from 3.1.49 to 3.1.50 (#118)
Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.49 to 3.1.50.
- [Release notes](https://github.com/gitpython-developers/GitPython/releases)
- [Changelog](https://github.com/gitpython-developers/GitPython/blob/main/CHANGES)
- [Commits](https://github.com/gitpython-developers/GitPython/compare/3.1.49...3.1.50)

---
updated-dependencies:
- dependency-name: gitpython
  dependency-version: 3.1.50
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-05-09 19:16:08 +09:00
github-actions[bot]
5a767847da chore: Release v3.6.0 2026-05-08 12:25:34 +00:00
github-actions[bot]
4a22f903f0 chore: Release v3.6.0rc2 2026-05-08 12:20:29 +00:00
Yakhyokhuja Valikhujaev
43a46e11df ci: Refresh uv.lock during release pipeline (#117) 2026-05-08 21:13:44 +09:00
github-actions[bot]
025b93ab8b chore: Release v3.6.0rc1 2026-05-08 03:27:30 +00:00
Yakhyokhuja Valikhujaev
8bf87d958f feat: Add PIPNet for facial landmarks detection (#116)
* docs: Add PipNet model documentation

* feat: Add PipNet for face landmark detection
2026-05-08 12:25:00 +09:00
Yakhyokhuja Valikhujaev
b813dc2ee7 ref: Update package management and optimize the vector store functions (#115)
* ref: Update download and hash chunk sizes to speed up

* build: Adopt uv with uv.lock and drop requirements.txt

* ref: Centralize softmax helper and minor cleanups
2026-05-06 01:47:27 +09:00
Yakhyokhuja Valikhujaev
73fc291930 ci: Resolve deprecation warnings in pipeline (#114)
* ci: Resolve deprecation warnings in pipeline
2026-04-28 00:52:46 +09:00
github-actions[bot]
400bb72217 chore: Release v3.5.3 2026-04-27 15:24:18 +00:00
Yakhyokhuja Valikhujaev
a0a12d5eca fix: Fix pypi publish re-run issue (#113) 2026-04-28 00:22:12 +09:00
github-actions[bot]
a34f376da0 chore: Release v3.5.2 2026-04-27 15:04:20 +00:00
Yakhyokhuja Valikhujaev
2b29706615 ci: Add end-to-end deployment pipeline and fix docs auto-trigger (#112) 2026-04-27 23:59:09 +09:00
github-actions[bot]
f6d3cf33f0 chore: Release v3.5.1 2026-04-27 11:53:21 +00:00
Yakhyokhuja Valikhujaev
0eb042425c chore: Minor changes to workflow names and docs (#111) 2026-04-27 20:51:50 +09:00
github-actions[bot]
35c0b6d539 chore: Release v3.5.1rc1 2026-04-25 15:17:54 +00:00
Yakhyokhuja Valikhujaev
13c4ac83d8 feat: Update the release workflow and package installation command (#110)
* fix: Fix installation conflict between onnxruntime and onnxruntime-gpu

* fix: Fix CI, notebooks, type hints, and packaging issues found in audit

* feat: Add new release config

* ci: Automate release pipeline and document release process
2026-04-25 23:59:00 +09:00
Yakhyokhuja Valikhujaev
6ce397b811 feat: Add MODNet portrait matting (#108)
* feat: Add MODNet portrait matting

* docs: Update docs and example of portrait matting

* fix: Fix linting issue
2026-04-11 23:30:32 +09:00
Yakhyokhuja Valikhujaev
9bf54f5f78 feat: Add EdgeFace recognition model (#105)
* refactor: Split recognition models into separate files

* feat: Add EdgeFace recognition model

* release: Bump version to v3.4.0
2026-04-04 20:11:28 +09:00
Yakhyokhuja Valikhujaev
c87ec1ad0f docs: Add example images and update MkDocs files (#104)
* chore: Add example inference results

* docs: Update MkDocs and README files
2026-04-04 18:28:27 +09:00
Yakhyokhuja Valikhujaev
9e56a86963 chore: Update docs and clean up notebook outputs before commit (#102)
* chore: Add links for repo and docs on example notebooks

* ref: Compress jupyter notebook sizes

* ci: Add nbstripout pre-commit hook for notebook output stripping

* docs: Add coding agent docs and commit message tag
2026-04-03 10:10:51 +09:00
Yakhyokhuja Valikhujaev
426bd71505 release: Release UniFace v3.3.0 - Python 3.10 support, stores refactor, docs and examples refresh (#101)
* docs: Update docs and examples

* chore: Update tools folder testing for development

* feat: Update indexing to stores and drawing logic

* chore: Update the release version to 3.3.0

* feat: Add python 3.10 support

* build: Add python support for workflows and publishing

* chore: Update all example notebooks
2026-03-28 22:30:56 +09:00
LiberiFatali
ede8b27091 chore: Add example notebook for face recognition (#100) 2026-03-28 05:27:27 +09:00
120 changed files with 6190 additions and 2304 deletions


@@ -1,4 +1,4 @@
name: Build
name: CI
on:
push:
@@ -17,8 +17,8 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 5
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- uses: actions/checkout@v5
- uses: actions/setup-python@v6
with:
python-version: "3.11"
- uses: pre-commit/action@v3.0.1
@@ -33,8 +33,12 @@ jobs:
matrix:
include:
# Full Python range on Linux (fastest runner)
- os: ubuntu-latest
python-version: "3.10"
- os: ubuntu-latest
python-version: "3.11"
- os: ubuntu-latest
python-version: "3.12"
- os: ubuntu-latest
python-version: "3.13"
- os: ubuntu-latest
@@ -46,28 +50,25 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
enable-cache: true
python-version: ${{ matrix.python-version }}
cache: "pip"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install .[dev]
run: uv sync --locked --extra cpu --extra dev
- name: Check ONNX Runtime providers
run: |
python -c "import onnxruntime as ort; print('Available providers:', ort.get_available_providers())"
run: uv run python -c "import onnxruntime as ort; print('Available providers:', ort.get_available_providers())"
- name: Run tests
run: pytest -v --tb=short
run: uv run pytest -v --tb=short
- name: Test package imports
run: python -c "import uniface; print(f'uniface {uniface.__version__} loaded with {len(uniface.__all__)} exports')"
run: uv run python -c "import uniface; print(f'uniface {uniface.__version__} loaded with {len(uniface.__all__)} exports')"
build:
runs-on: ubuntu-latest
@@ -76,10 +77,10 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: Set up Python
uses: actions/setup-python@v5
uses: actions/setup-python@v6
with:
python-version: "3.11"
cache: "pip"


@@ -1,9 +1,6 @@
name: Deploy docs
name: Deploy Documentation
on:
push:
tags:
- "v*.*.*"
workflow_dispatch:
permissions:
@@ -13,26 +10,28 @@ jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0 # Fetch full history for git-committers and git-revision-date plugins
fetch-depth: 0
- uses: actions/setup-python@v5
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
enable-cache: true
python-version: "3.11"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install mkdocs-material pymdown-extensions mkdocs-git-committers-plugin-2 mkdocs-git-revision-date-localized-plugin
run: uv sync --locked --extra docs
- name: Build docs
env:
MKDOCS_GIT_COMMITTERS_APIKEY: ${{ secrets.MKDOCS_GIT_COMMITTERS_APIKEY }}
run: mkdocs build --strict
run: uv run mkdocs build --strict
- name: Deploy to GitHub Pages
uses: peaceiris/actions-gh-pages@v4
env:
FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: true
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./site

.github/workflows/pipeline.yml vendored Normal file

@@ -0,0 +1,229 @@
name: Release Pipeline
on:
workflow_dispatch:
inputs:
version:
description: 'Version (e.g. 3.6.0, 3.6.0b1, 3.6.0rc1)'
required: true
concurrency:
group: pipeline
cancel-in-progress: false
jobs:
validate:
runs-on: ubuntu-latest
timeout-minutes: 5
outputs:
is_prerelease: ${{ steps.prerelease.outputs.is_prerelease }}
steps:
- name: Checkout code
uses: actions/checkout@v5
- name: Set up Python
uses: actions/setup-python@v6
with:
python-version: "3.11"
- name: Validate version (PEP 440)
run: |
python - <<'EOF'
import re, sys
v = "${{ inputs.version }}"
if not re.fullmatch(r'\d+\.\d+\.\d+((a|b|rc)\d+|\.dev\d+)?', v):
print(f"Invalid version: {v}")
print("Expected forms: 3.6.0, 3.6.0a1, 3.6.0b1, 3.6.0rc1, 3.6.0.dev1")
sys.exit(1)
EOF
- name: Check tag does not exist
run: |
if git rev-parse "v${{ inputs.version }}" >/dev/null 2>&1; then
echo "Tag v${{ inputs.version }} already exists."
exit 1
fi
- name: Detect pre-release
id: prerelease
run: |
if [[ "${{ inputs.version }}" =~ (a|b|rc|\.dev)[0-9]+ ]]; then
echo "is_prerelease=true" >> $GITHUB_OUTPUT
else
echo "is_prerelease=false" >> $GITHUB_OUTPUT
fi
test:
runs-on: ubuntu-latest
timeout-minutes: 15
needs: validate
strategy:
fail-fast: false
matrix:
python-version: ["3.10", "3.11", "3.12", "3.13", "3.14"]
steps:
- name: Checkout code
uses: actions/checkout@v5
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
enable-cache: true
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: uv sync --locked --extra cpu --extra dev
- name: Run tests
run: uv run pytest -v --tb=short
release:
runs-on: ubuntu-latest
timeout-minutes: 5
needs: test
permissions:
contents: write
steps:
- name: Checkout code
uses: actions/checkout@v5
with:
fetch-depth: 0
token: ${{ secrets.RELEASE_TOKEN }}
- name: Set up Python
uses: actions/setup-python@v6
with:
python-version: "3.11"
- name: Update pyproject.toml
run: |
python - <<'EOF'
import re, pathlib
p = pathlib.Path('pyproject.toml')
text = p.read_text()
new = re.sub(r'^version\s*=\s*".*"', f'version = "${{ inputs.version }}"', text, count=1, flags=re.M)
if new == text:
raise SystemExit("Failed to update version in pyproject.toml")
p.write_text(new)
EOF
- name: Update uniface/__init__.py
run: |
python - <<'EOF'
import re, pathlib
p = pathlib.Path('uniface/__init__.py')
text = p.read_text()
new = re.sub(r"^__version__\s*=\s*'.*'", f"__version__ = '${{ inputs.version }}'", text, count=1, flags=re.M)
if new == text:
raise SystemExit("Failed to update __version__ in uniface/__init__.py")
p.write_text(new)
EOF
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
enable-cache: true
python-version: "3.11"
- name: Refresh uv.lock with new project version
run: uv lock --upgrade-package uniface
- name: Commit, tag, push
run: |
git config user.name "github-actions[bot]"
git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
git add pyproject.toml uniface/__init__.py uv.lock
git commit -m "chore: Release v${{ inputs.version }}"
git tag "v${{ inputs.version }}"
git push origin HEAD:${{ github.ref_name }}
git push origin "v${{ inputs.version }}"
publish:
runs-on: ubuntu-latest
timeout-minutes: 10
needs: [validate, release]
permissions:
contents: write
id-token: write
environment:
name: pypi
url: https://pypi.org/project/uniface/
steps:
- name: Checkout tag
uses: actions/checkout@v5
with:
ref: v${{ inputs.version }}
- name: Set up Python
uses: actions/setup-python@v6
with:
python-version: "3.11"
cache: 'pip'
- name: Install build tools
run: |
python -m pip install --upgrade pip
python -m pip install build twine
- name: Build package
run: python -m build
- name: Check package
run: twine check dist/*
- name: Publish to PyPI
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
run: twine upload dist/*
- name: Create GitHub Release
uses: softprops/action-gh-release@v2
with:
tag_name: v${{ inputs.version }}
files: dist/*
generate_release_notes: true
prerelease: ${{ needs.validate.outputs.is_prerelease }}
docs:
runs-on: ubuntu-latest
timeout-minutes: 10
needs: [validate, publish]
if: needs.validate.outputs.is_prerelease == 'false'
permissions:
contents: write
steps:
- name: Checkout tag
uses: actions/checkout@v5
with:
ref: v${{ inputs.version }}
fetch-depth: 0
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
enable-cache: true
python-version: "3.11"
- name: Install dependencies
run: uv sync --locked --extra docs
- name: Build docs
env:
MKDOCS_GIT_COMMITTERS_APIKEY: ${{ secrets.MKDOCS_GIT_COMMITTERS_APIKEY }}
run: uv run mkdocs build --strict
- name: Deploy to GitHub Pages
uses: peaceiris/actions-gh-pages@v4
env:
FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: true
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./site
destination_dir: docs


@@ -1,119 +0,0 @@
name: Publish to PyPI
on:
push:
tags:
- "v*.*.*" # Trigger only on version tags like v0.1.9
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
validate:
runs-on: ubuntu-latest
timeout-minutes: 5
outputs:
version: ${{ steps.get_version.outputs.version }}
tag_version: ${{ steps.get_version.outputs.tag_version }}
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11" # Needs 3.11+ for tomllib
- name: Get version from tag and pyproject.toml
id: get_version
run: |
TAG_VERSION=${GITHUB_REF#refs/tags/v}
echo "tag_version=$TAG_VERSION" >> $GITHUB_OUTPUT
PYPROJECT_VERSION=$(python -c "import tomllib; print(tomllib.load(open('pyproject.toml','rb'))['project']['version'])")
echo "version=$PYPROJECT_VERSION" >> $GITHUB_OUTPUT
echo "Tag version: v$TAG_VERSION"
echo "pyproject.toml version: $PYPROJECT_VERSION"
- name: Verify version match
run: |
if [ "${{ steps.get_version.outputs.tag_version }}" != "${{ steps.get_version.outputs.version }}" ]; then
echo "Error: Tag version (${{ steps.get_version.outputs.tag_version }}) does not match pyproject.toml version (${{ steps.get_version.outputs.version }})"
exit 1
fi
echo "Version validation passed: ${{ steps.get_version.outputs.version }}"
test:
runs-on: ubuntu-latest
timeout-minutes: 15
needs: validate
strategy:
fail-fast: false
matrix:
python-version: ["3.11", "3.13"]
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
cache: 'pip'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install .[dev]
- name: Run tests
run: pytest -v
publish:
runs-on: ubuntu-latest
timeout-minutes: 10
needs: [validate, test]
permissions:
contents: write
id-token: write
environment:
name: pypi
url: https://pypi.org/project/uniface/
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
cache: 'pip'
- name: Install build tools
run: |
python -m pip install --upgrade pip
python -m pip install build twine
- name: Build package
run: python -m build
- name: Check package
run: twine check dist/*
- name: Publish to PyPI
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
run: twine upload dist/*
- name: Create GitHub Release
uses: softprops/action-gh-release@v1
with:
files: dist/*
generate_release_notes: true


@@ -18,6 +18,13 @@ repos:
- id: debug-statements
- id: check-ast
# Strip Jupyter notebook outputs
- repo: https://github.com/kynan/nbstripout
rev: 0.9.1
hooks:
- id: nbstripout
files: ^examples/
# Ruff - Fast Python linter and formatter
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.14.10

AGENTS.md Normal file

@@ -0,0 +1,6 @@
<!-- Cursor agent instructions — shared with CLAUDE.md -->
<!-- See CLAUDE.md for full project instructions for AI coding agents. -->
# AGENTS.md
Please read and follow all instructions in [CLAUDE.md](./CLAUDE.md).

CLAUDE.md Normal file

@@ -0,0 +1,81 @@
# CLAUDE.md
Project instructions for AI coding agents.
## Project Overview
UniFace is a Python library for face detection, recognition, tracking, landmark analysis, face parsing, gaze estimation, and age/gender detection. It uses ONNX Runtime for inference.
## Code Style
- Python 3.10+ with type hints
- Line length: 120
- Single quotes for strings, double quotes for docstrings
- Google-style docstrings
- Formatter/linter: Ruff (config in `pyproject.toml`)
- Run `ruff format .` and `ruff check . --fix` before committing
## Commit Messages
Follow [Conventional Commits](https://www.conventionalcommits.org/) with a **capitalized** description:
```
<type>: <Capitalized short description>
```
Types: `feat`, `fix`, `docs`, `style`, `refactor`, `perf`, `test`, `build`, `ci`, `chore`
Examples:
- `feat: Add gaze estimation model`
- `fix: Correct bounding box scaling for non-square images`
- `ci: Add nbstripout pre-commit hook`
- `docs: Update installation instructions`
- `refactor: Unify attribute/detector base classes`
## Testing
```bash
pytest -v --tb=short
```
Tests live in `tests/`. Run the full suite before submitting changes.
## Pre-commit
Pre-commit hooks handle formatting, linting, security checks, and notebook output stripping. Always run:
```bash
pre-commit install
pre-commit run --all-files
```
## Project Structure
```
uniface/ # Main package
detection/ # Face detection models (SCRFD, RetinaFace, YOLOv5, YOLOv8)
recognition/ # Face recognition/verification (AdaFace, ArcFace, EdgeFace, MobileFace, SphereFace)
landmark/ # Facial landmark models
tracking/ # Object tracking (ByteTrack)
parsing/ # Face parsing/segmentation (BiSeNet, XSeg)
gaze/ # Gaze estimation
headpose/ # Head pose estimation
attribute/ # Age, gender, emotion detection
spoofing/ # Anti-spoofing (MiniFASNet)
privacy/ # Face anonymization
stores/ # Vector stores (FAISS)
constants.py # Model weight URLs and checksums
model_store.py # Model download/cache management
analyzer.py # High-level FaceAnalyzer API
types.py # Shared type definitions
tests/ # Unit tests
examples/ # Jupyter notebooks (outputs are auto-stripped)
docs/ # MkDocs documentation
```
## Key Conventions
- New models: add class in submodule, register weights in `constants.py`, export in `__init__.py`
- Dependencies: managed in `pyproject.toml`
- All ONNX models are downloaded on demand with SHA256 verification
- Do not commit notebook outputs; `nbstripout` pre-commit hook handles this
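The on-demand download with SHA256 verification mentioned above can be sketched with the standard library; `download_verified` and its signature are illustrative, not the actual `model_store.py` API:

```python
import hashlib
import urllib.request
from pathlib import Path


def download_verified(url: str, dest: Path, expected_sha256: str) -> Path:
    """Fetch a model file once and verify its SHA256 checksum (hypothetical helper)."""
    if not dest.exists():
        urllib.request.urlretrieve(url, dest)
    digest = hashlib.sha256(dest.read_bytes()).hexdigest()
    if digest != expected_sha256:
        dest.unlink()  # discard the corrupted/tampered download
        raise ValueError(f'Checksum mismatch for {dest.name}: {digest}')
    return dest
```

In the real package the URL/checksum pairs live in `constants.py` and the cache logic in `model_store.py`.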


@@ -21,25 +21,31 @@ Thank you for considering contributing to UniFace! We welcome contributions of a
## Development Setup
We use [uv](https://docs.astral.sh/uv/) for reproducible dev installs. The committed `uv.lock` pins every transitive dependency so contributors and CI resolve to identical versions.
```bash
# Install uv (https://docs.astral.sh/uv/getting-started/installation/)
curl -LsSf https://astral.sh/uv/install.sh | sh
git clone https://github.com/yakhyo/uniface.git
cd uniface
pip install -e ".[dev]"
# Sync runtime + cpu + dev extras from uv.lock (use --extra gpu instead of cpu for CUDA)
uv sync --extra cpu --extra dev
```
`uv sync` creates a project-local `.venv/` and installs everything pinned in `uv.lock`. Run commands with `uv run <cmd>` (e.g. `uv run pytest`), or activate the venv with `source .venv/bin/activate`.
### Setting Up Pre-commit Hooks
We use [pre-commit](https://pre-commit.com/) to ensure code quality and consistency. Install and configure it:
We use [pre-commit](https://pre-commit.com/) to ensure code quality and consistency. `pre-commit` is included in the `[dev]` extra, so it's already installed after `uv sync`.
```bash
# Install pre-commit
pip install pre-commit
# Install the git hooks
pre-commit install
uv run pre-commit install
# (Optional) Run against all files
pre-commit run --all-files
uv run pre-commit run --all-files
```
Once installed, pre-commit will automatically run on every commit to check:
@@ -59,12 +65,12 @@ This project uses [Ruff](https://docs.astral.sh/ruff/) for linting and formattin
#### General Rules
- **Line length:** 120 characters maximum
- **Python version:** 3.11+ (use modern syntax)
- **Python version:** 3.10+ (use modern syntax)
- **Quote style:** Single quotes for strings, double quotes for docstrings
#### Type Hints
Use modern Python 3.11+ type hints (PEP 585 and PEP 604):
Use modern Python 3.10+ type hints (PEP 585 and PEP 604):
```python
# Preferred (modern)
@@ -188,6 +194,45 @@ Example notebooks demonstrating library usage:
| Face Vector Store | [10_face_vector_store.ipynb](examples/10_face_vector_store.ipynb) |
| Head Pose Estimation | [11_head_pose_estimation.ipynb](examples/11_head_pose_estimation.ipynb) |
## Release Process
Releases are fully automated via GitHub Actions. Only maintainers with branch-protection bypass privileges on `main` can trigger a release.
### Cutting a release
1. Go to **Actions → Release Pipeline → Run workflow** on GitHub.
2. Enter the version following [PEP 440](https://peps.python.org/pep-0440/):
- Stable: `0.7.0`, `1.0.0`
- Pre-release: `0.7.0rc1`, `0.7.0b1`, `0.7.0a1`, `0.7.0.dev1`
3. Click **Run workflow**.
### What happens automatically
The `Release Pipeline` workflow runs all stages in sequence:
1. **Validate** — checks the version string against PEP 440 and confirms the tag does not already exist.
2. **Test** — runs the test suite on Python 3.10–3.14.
3. **Release** — updates `pyproject.toml` and `uniface/__init__.py`, commits `chore: Release vX.Y.Z` to `main`, creates and pushes tag `vX.Y.Z`.
4. **Publish** — builds the package, uploads to PyPI, and creates a GitHub Release (flagged as pre-release for `a`/`b`/`rc`/`.dev` versions).
5. **Deploy docs** — runs only for **stable** versions. Pre-releases do not update the live documentation site.
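The validate stage's version rules can be mirrored locally before triggering the workflow; this standalone sketch reuses the same regular expressions as the pipeline:

```python
import re

# Same patterns the Release Pipeline's validate job applies.
PEP440_RELEASE = re.compile(r'\d+\.\d+\.\d+((a|b|rc)\d+|\.dev\d+)?')
PRERELEASE = re.compile(r'(a|b|rc|\.dev)\d+')


def is_valid_version(v: str) -> bool:
    """True for versions the pipeline accepts, e.g. 3.6.0 or 3.6.0rc1."""
    return PEP440_RELEASE.fullmatch(v) is not None


def is_prerelease(v: str) -> bool:
    """True for a/b/rc/.dev versions, which skip the docs deploy."""
    return PRERELEASE.search(v) is not None
```

For example, `is_valid_version('v3.6.0')` is `False` because the workflow input takes the bare version; the `v` prefix is added only to the git tag.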
### Verifying a release
- PyPI: <https://pypi.org/project/uniface/>
- GitHub Releases: <https://github.com/yakhyo/uniface/releases>
- Docs (stable only): <https://yakhyo.github.io/uniface/>
### Installing a pre-release
End users can opt in to pre-releases with the `--pre` flag:
```bash
pip install uniface --pre # latest pre-release
pip install uniface==0.7.0rc1 # specific pre-release
```
Without `--pre`, `pip install uniface` always resolves to the latest stable version.
## Questions?
Open an issue or start a discussion on GitHub.

README.md

@@ -1,9 +1,9 @@
<h1 align="center">UniFace: All-in-One Face Analysis Library</h1>
<h1 align="center">UniFace: A Unified Face Analysis Library for Python</h1>
<div align="center">
[![PyPI Version](https://img.shields.io/pypi/v/uniface.svg?label=Version)](https://pypi.org/project/uniface/)
[![Python Version](https://img.shields.io/badge/Python-3.11%2B-blue)](https://www.python.org/)
[![Python Version](https://img.shields.io/badge/Python-3.10%2B-blue)](https://www.python.org/)
[![License](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![Github Build Status](https://github.com/yakhyo/uniface/actions/workflows/ci.yml/badge.svg)](https://github.com/yakhyo/uniface/actions)
[![PyPI Downloads](https://static.pepy.tech/personalized-badge/uniface?period=total&units=INTERNATIONAL_SYSTEM&left_color=GRAY&right_color=BLUE&left_text=Downloads)](https://pepy.tech/projects/uniface)
@@ -14,54 +14,90 @@
</div>
<div align="center">
<img src="https://raw.githubusercontent.com/yakhyo/uniface/main/.github/logos/uniface_rounded_q80.webp" width="90%" alt="UniFace - All-in-One Open-Source Face Analysis Library">
<img src="https://raw.githubusercontent.com/yakhyo/uniface/main/.github/logos/uniface_rounded_q80.webp" width="90%" alt="UniFace - A Unified Face Analysis Library for Python">
</div>
---
**UniFace** is a lightweight, production-ready face analysis library built on ONNX Runtime. It provides high-performance face detection, recognition, landmark detection, face parsing, gaze estimation, and attribute analysis with hardware acceleration support across platforms.
**UniFace** is a lightweight, production-ready Python library for face detection, recognition, tracking, landmark analysis, face parsing, gaze estimation, and face attributes.
---
## Features
- **Face Detection** — RetinaFace, SCRFD, YOLOv5-Face, and YOLOv8-Face with 5-point landmarks
- **Face Recognition** — ArcFace, MobileFace, and SphereFace embeddings
- **Face Recognition** — AdaFace, ArcFace, EdgeFace, MobileFace, and SphereFace embeddings
- **Face Tracking** — Multi-object tracking with [BYTETracker](https://github.com/yakhyo/bytetrack-tracker) for persistent IDs across video frames
- **Facial Landmarks** — 106-point landmark localization module (separate from 5-point detector landmarks)
- **Facial Landmarks** — 106-point (2d106det) and 98 / 68-point (PIPNet) landmark localization (separate from the 5-point detector landmarks)
- **Face Parsing** — BiSeNet semantic segmentation (19 classes), XSeg face masking
- **Portrait Matting** — Trimap-free alpha matte with MODNet (background removal, green screen, compositing)
- **Gaze Estimation** — Real-time gaze direction with MobileGaze
- **Head Pose Estimation** — 3D head orientation (pitch, yaw, roll) with 6D rotation representation
- **Attribute Analysis** — Age, gender, race (FairFace), and emotion
- **Vector Indexing** — FAISS-backed embedding store for fast multi-identity search
- **Vector Store** — FAISS-backed embedding store for fast multi-identity search
- **Anti-Spoofing** — Face liveness detection with MiniFASNet
- **Face Anonymization** — 5 blur methods for privacy protection
- **Hardware Acceleration** — ARM64 (Apple Silicon), CUDA (NVIDIA), CPU
---
## Visual Examples
<table>
<tr>
<td align="center"><b>Face Detection</b><br><img src="https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/detection.jpg" width="100%"></td>
<td align="center"><b>Gaze Estimation</b><br><img src="https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/gaze.jpg" width="100%"></td>
</tr>
<tr>
<td align="center"><b>Head Pose Estimation</b><br><img src="https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/headpose.jpg" width="100%"></td>
<td align="center"><b>Age &amp; Gender</b><br><img src="https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/age_gender.jpg" width="100%"></td>
</tr>
<tr>
<td align="center" colspan="2"><b>Face Verification</b><br><img src="https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/verification.jpg" width="80%"></td>
</tr>
<tr>
<td align="center" colspan="2"><b>106-Point Landmarks</b><br><img src="https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/landmarks.jpg" width="36%"></td>
</tr>
<tr>
<td align="center" colspan="2"><b>Face Parsing</b><br><img src="https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/parsing.jpg" width="80%"></td>
</tr>
<tr>
<td align="center" colspan="2"><b>Face Segmentation</b><br><img src="https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/segmentation.jpg" width="80%"></td>
</tr>
<tr>
<td align="center" colspan="2"><b>Portrait Matting</b><br><img src="https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/matting.jpg" width="100%"></td>
</tr>
<tr>
<td align="center" colspan="2"><b>Face Anonymization</b><br><img src="https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/anonymization.jpg" width="100%"></td>
</tr>
</table>
---
## Installation
**Standard installation**
**CPU / Apple Silicon**
```bash
pip install uniface
pip install uniface[cpu]
```
**GPU support (CUDA)**
**GPU support (NVIDIA CUDA)**
```bash
pip install uniface[gpu]
```
> **Why separate extras?** `onnxruntime` and `onnxruntime-gpu` conflict when both are installed — they own the same Python namespace. Installing only the extra you need prevents that conflict entirely.
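To diagnose an accidental double install, a standard-library check can list which distributions are present; `installed_ort_flavors` is an illustrative helper, not part of UniFace:

```python
from importlib import metadata


def installed_ort_flavors() -> list[str]:
    """Return which onnxruntime distributions are installed (illustrative helper)."""
    flavors = []
    for dist in ('onnxruntime', 'onnxruntime-gpu'):
        try:
            metadata.version(dist)
            flavors.append(dist)
        except metadata.PackageNotFoundError:
            pass
    return flavors


# Exactly one entry is healthy; two entries means the conflicting packages
# are shadowing each other in the shared `onnxruntime` namespace.
print(installed_ort_flavors())
```

If both show up, uninstall both and reinstall only the extra you need.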
**From source (latest version)**
```bash
git clone https://github.com/yakhyo/uniface.git
cd uniface && pip install -e .
cd uniface && pip install -e ".[cpu]" # or .[gpu] for CUDA
```
**FAISS vector indexing**
**FAISS vector store**
```bash
pip install faiss-cpu # or faiss-gpu for CUDA
@@ -127,14 +163,10 @@ for face in faces:
```python
import cv2
from uniface.analyzer import FaceAnalyzer
from uniface.detection import RetinaFace
from uniface.recognition import ArcFace
from uniface import FaceAnalyzer
detector = RetinaFace()
recognizer = ArcFace()
analyzer = FaceAnalyzer(detector, recognizer=recognizer)
# Zero-config: uses SCRFD (500M) + ArcFace (MobileNet) by default
analyzer = FaceAnalyzer()
image = cv2.imread("photo.jpg")
if image is None:
@@ -146,19 +178,63 @@ for face in faces:
print(face.bbox, face.embedding.shape if face.embedding is not None else None)
```
---
## Execution Providers (ONNX Runtime)
With attributes:
```python
from uniface.detection import RetinaFace
from uniface import FaceAnalyzer, AgeGender
# Force CPU-only inference
detector = RetinaFace(providers=["CPUExecutionProvider"])
analyzer = FaceAnalyzer(attributes=[AgeGender()])
faces = analyzer.analyze(image)
for face in faces:
print(f"{face.sex}, {face.age}y, embedding={face.embedding.shape}")
```
See more in the docs:
https://yakhyo.github.io/uniface/concepts/execution-providers/
---
## Example (Portrait Matting)
```python
import cv2
import numpy as np
from uniface.matting import MODNet
matting = MODNet()
image = cv2.imread("portrait.jpg")
matte = matting.predict(image) # (H, W) float32 in [0, 1]
# Transparent PNG
rgba = cv2.cvtColor(image, cv2.COLOR_BGR2BGRA)
rgba[:, :, 3] = (matte * 255).astype(np.uint8)
cv2.imwrite("transparent.png", rgba)
# Green screen
matte_3ch = matte[:, :, np.newaxis]
bg = np.full_like(image, (0, 177, 64), dtype=np.uint8)
result = (image * matte_3ch + bg * (1 - matte_3ch)).astype(np.uint8)
cv2.imwrite("green_screen.jpg", result)
```
---
## Jupyter Notebooks
| Example | Colab | Description |
|---------|:-----:|-------------|
| [01_face_detection.ipynb](examples/01_face_detection.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/01_face_detection.ipynb) | Face detection and landmarks |
| [02_face_alignment.ipynb](examples/02_face_alignment.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/02_face_alignment.ipynb) | Face alignment for recognition |
| [03_face_verification.ipynb](examples/03_face_verification.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/03_face_verification.ipynb) | Compare faces for identity |
| [04_face_search.ipynb](examples/04_face_search.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/04_face_search.ipynb) | Find a person in group photos |
| [05_face_analyzer.ipynb](examples/05_face_analyzer.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/05_face_analyzer.ipynb) | Unified face analysis |
| [06_face_parsing.ipynb](examples/06_face_parsing.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/06_face_parsing.ipynb) | Semantic face segmentation |
| [07_face_anonymization.ipynb](examples/07_face_anonymization.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/07_face_anonymization.ipynb) | Privacy-preserving blur |
| [08_gaze_estimation.ipynb](examples/08_gaze_estimation.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/08_gaze_estimation.ipynb) | Gaze direction estimation |
| [09_face_segmentation.ipynb](examples/09_face_segmentation.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/09_face_segmentation.ipynb) | Face segmentation with XSeg |
| [10_face_vector_store.ipynb](examples/10_face_vector_store.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/10_face_vector_store.ipynb) | FAISS-backed face database |
| [11_head_pose_estimation.ipynb](examples/11_head_pose_estimation.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/11_head_pose_estimation.ipynb) | Head pose estimation (pitch, yaw, roll) |
| [12_face_recognition.ipynb](examples/12_face_recognition.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/12_face_recognition.ipynb) | Standalone face recognition pipeline |
| [13_portrait_matting.ipynb](examples/13_portrait_matting.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/13_portrait_matting.ipynb) | Portrait matting with MODNet |
---
@@ -177,6 +253,20 @@ Full documentation: https://yakhyo.github.io/uniface/
---
## Execution Providers (ONNX Runtime)
```python
from uniface.detection import RetinaFace
# Force CPU-only inference
detector = RetinaFace(providers=["CPUExecutionProvider"])
```
See more in the docs:
https://yakhyo.github.io/uniface/concepts/execution-providers/
---
## Datasets
| Task | Training Dataset | Models |
@@ -185,6 +275,8 @@ Full documentation: https://yakhyo.github.io/uniface/
| Recognition | MS1MV2 | MobileFace, SphereFace |
| Recognition | WebFace600K | ArcFace |
| Recognition | WebFace4M / 12M | AdaFace |
| Recognition | MS1MV2 | EdgeFace |
| Landmarks | WFLW, 300W+CelebA | PIPNet (98 / 68 pts) |
| Gaze | Gaze360 | MobileGaze |
| Head Pose | 300W-LP | HeadPose (ResNet, MobileNet) |
| Parsing | CelebAMask-HQ | BiSeNet |
@@ -194,24 +286,6 @@ Full documentation: https://yakhyo.github.io/uniface/
---
## Licensing and Model Usage
UniFace is MIT-licensed, but several pretrained models carry their own licenses.
@@ -234,10 +308,13 @@ If you plan commercial use, verify model license compatibility.
| Detection | [yolov8-face-onnx-inference](https://github.com/yakhyo/yolov8-face-onnx-inference) | - | YOLOv8-Face ONNX Inference |
| Tracking | [bytetrack-tracker](https://github.com/yakhyo/bytetrack-tracker) | - | BYTETracker Multi-Object Tracking |
| Recognition | [face-recognition](https://github.com/yakhyo/face-recognition) | ✓ | MobileFace, SphereFace Training |
| Recognition | [edgeface-onnx](https://github.com/yakhyo/edgeface-onnx) | - | EdgeFace ONNX Inference |
| Landmarks | [pipnet-onnx](https://github.com/yakhyo/pipnet-onnx) | - | PIPNet 98 / 68-point ONNX Inference |
| Parsing | [face-parsing](https://github.com/yakhyo/face-parsing) | ✓ | BiSeNet Face Parsing |
| Parsing | [face-segmentation](https://github.com/yakhyo/face-segmentation) | - | XSeg Face Segmentation |
| Gaze | [gaze-estimation](https://github.com/yakhyo/gaze-estimation) | ✓ | MobileGaze Training |
| Head Pose | [head-pose-estimation](https://github.com/yakhyo/head-pose-estimation) | ✓ | Head Pose Training (6DRepNet-style) |
| Matting | [modnet](https://github.com/yakhyo/modnet) | - | MODNet Portrait Matting |
| Anti-Spoofing | [face-anti-spoofing](https://github.com/yakhyo/face-anti-spoofing) | - | MiniFASNet Inference |
| Attributes | [fairface-onnx](https://github.com/yakhyo/fairface-onnx) | - | FairFace ONNX Inference |
@@ -261,3 +338,6 @@ Questions or feedback:
## License
This project is licensed under the [MIT License](LICENSE).
> **Disclaimer:** This project is not affiliated with or related to
> [Uniface](https://uniface.com/) by Rocket Software.

*(Binary demo images added under `assets/demos/` — age_gender.jpg, detection.jpg, gaze.jpg, headpose.jpg, landmarks.jpg, matting.jpg, parsing.jpg, src_man1.jpg, src_man2.jpg, src_man3.jpg, and several others; sizes range from 5.8 KiB to 1.5 MiB. Binary files not shown.)*

@@ -110,6 +110,28 @@ landmarks = landmarker.get_landmarks(image, face.bbox)
| 63-86 | Eyes | 24 |
| 87-105 | Mouth | 19 |
### 98 / 68-Point Landmarks (PIPNet)
Returned by `PIPNet`. The variant determines the layout:
```python
from uniface.constants import PIPNetWeights
from uniface.landmark import PIPNet
# 98-point WFLW layout (default)
landmarks = PIPNet().get_landmarks(image, face.bbox)
# Shape: (98, 2)
# 68-point 300W layout
landmarks = PIPNet(model_name=PIPNetWeights.DW300_CELEBA_68).get_landmarks(image, face.bbox)
# Shape: (68, 2)
```
The 98-point output follows the standard [WFLW](https://wywu.github.io/projects/LAB/WFLW.html) layout
(33 face-contour points, eyebrow/eye/nose/mouth groups). The 68-point output follows the standard
[300W / iBUG](https://ibug.doc.ic.ac.uk/resources/300-W/) layout. Coordinates are in original-image
pixel space, identical in convention to `Landmark106`.
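Because the coordinates are plain `(x, y)` pixel positions, they can be inspected without any drawing library. A minimal sketch (the helper name is ours, not part of uniface):

```python
import numpy as np

def landmarks_to_mask(landmarks: np.ndarray, height: int, width: int) -> np.ndarray:
    """Rasterize (N, 2) landmarks in (x, y) pixel order onto a binary mask."""
    mask = np.zeros((height, width), dtype=np.uint8)
    pts = np.rint(landmarks).astype(int)
    xs = pts[:, 0].clip(0, width - 1)   # x -> column index
    ys = pts[:, 1].clip(0, height - 1)  # y -> row index
    mask[ys, xs] = 1
    return mask

# Two fake landmarks; (x=10.4, y=5.6) lands at row 6, column 10.
demo = landmarks_to_mask(np.array([[10.4, 5.6], [0.0, 0.0]]), height=8, width=16)
```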
---
## Face Crop


@@ -43,7 +43,7 @@ All **ONNX-based** model classes accept the `providers` parameter:
- Detection: `RetinaFace`, `SCRFD`, `YOLOv5Face`, `YOLOv8Face`
- Recognition: `ArcFace`, `AdaFace`, `MobileFace`, `SphereFace`
- Landmarks: `Landmark106`, `PIPNet`
- Gaze: `MobileGaze`
- Parsing: `BiSeNet`, `XSeg`
- Attributes: `AgeGender`, `FairFace`
@@ -93,7 +93,7 @@ print("Available providers:", providers)
No additional setup required. ARM64 optimizations are built into `onnxruntime`:
```bash
pip install uniface[cpu]
```
Verify ARM64:
@@ -110,7 +110,7 @@ python -c "import platform; print(platform.machine())"
### NVIDIA GPU (CUDA)
Install with GPU support (this installs `onnxruntime-gpu`, which already includes CPU fallback):
```bash
pip install uniface[gpu]
@@ -140,7 +140,7 @@ else:
CPU execution is always available:
```bash
pip install uniface[cpu]
```
Works on all platforms without additional configuration.
@@ -215,7 +215,7 @@ for image_path in image_paths:
3. Reinstall with GPU support:
```bash
pip uninstall onnxruntime onnxruntime-gpu -y
pip install uniface[gpu]
```


@@ -43,7 +43,7 @@ class Face:
# Required (from detection)
bbox: np.ndarray # [x1, y1, x2, y2]
confidence: float # 0.0 to 1.0
landmarks: np.ndarray # (5, 2) from detectors. Dense landmarkers return (106, 2), (98, 2), or (68, 2).
# Optional (enriched by analyzers)
embedding: np.ndarray | None = None


@@ -194,6 +194,8 @@ If a model fails verification, it's re-downloaded automatically.
| Model | Size | Download |
|-------|------|----------|
| Landmark106 | 14 MB | ✅ |
| PIPNet WFLW-98 | 47 MB | ✅ |
| PIPNet 300W+CelebA-68 | 46 MB | ✅ |
| AgeGender | 8 MB | ✅ |
| FairFace | 44 MB | ✅ |
| Gaze ResNet34 | 82 MB | ✅ |


@@ -26,6 +26,7 @@ graph TB
HPOSE[Head Pose]
PARSE[Parsing]
SPOOF[Anti-Spoofing]
MATT[Matting]
PRIV[Privacy]
end
@@ -33,7 +34,7 @@ graph TB
TRK[BYTETracker]
end
subgraph Stores
IDX[FAISS Vector Store]
end
@@ -42,6 +43,7 @@ graph TB
end
IMG --> DET
IMG --> MATT
DET --> REC
DET --> LMK
DET --> ATTR
@@ -62,16 +64,14 @@ graph TB
## Design Principles
### 1. Cross-Platform Inference
UniFace uses portable model runtimes to provide consistent inference across macOS, Linux, and Windows. Most core components run through ONNX Runtime, while optional components may use PyTorch where appropriate.
- **Cross-platform**: Same models work on macOS, Linux, Windows
- **Hardware acceleration**: Automatic selection of optimal provider
- **Production-ready**: No Python-only dependencies for inference
### 2. Minimal Dependencies
Core dependencies are kept minimal:
@@ -115,16 +115,17 @@ def detect(self, image: np.ndarray) -> list[Face]:
```
uniface/
├── detection/ # Face detection (RetinaFace, SCRFD, YOLOv5Face, YOLOv8Face)
├── recognition/ # Face recognition (AdaFace, ArcFace, EdgeFace, MobileFace, SphereFace)
├── tracking/ # Multi-object tracking (BYTETracker)
├── landmark/ # Dense landmarks (Landmark106 = 106 pts, PIPNet = 98 / 68 pts)
├── attribute/ # Age, gender, emotion, race
├── parsing/ # Face semantic segmentation
├── matting/ # Portrait matting (MODNet)
├── gaze/ # Gaze estimation
├── headpose/ # Head pose estimation
├── spoofing/ # Anti-spoofing
├── privacy/ # Face anonymization
├── stores/ # Vector stores (FAISS)
├── types.py # Dataclasses (Face, GazeResult, HeadPoseResult, etc.)
├── constants.py # Model weights and URLs
├── model_store.py # Model download and caching


@@ -201,17 +201,11 @@ For drawing detections, filter by confidence:
```python
from uniface.draw import draw_detections
# Only draw high-confidence detections (confidence ≥ vis_threshold)
draw_detections(
    image=image,
    faces=faces,
    vis_threshold=0.7,
)
```


@@ -6,16 +6,20 @@ Thank you for contributing to UniFace!
## Quick Start
We use [uv](https://docs.astral.sh/uv/) for reproducible dev installs (lockfile-pinned).
```bash
# Install uv first: https://docs.astral.sh/uv/getting-started/installation/
# Clone
git clone https://github.com/yakhyo/uniface.git
cd uniface
# Install runtime + cpu + dev extras from uv.lock (--extra gpu for CUDA)
uv sync --extra cpu --extra dev
# Run tests
uv run pytest
```
---
@@ -32,17 +36,50 @@ ruff check . --fix
**Guidelines:**
- Line length: 120
- Python 3.10+ type hints
- Google-style docstrings
---
## Pre-commit Hooks
`pre-commit` is included in the `[dev]` extra, so `uv sync` already installs it.
```bash
uv run pre-commit install
uv run pre-commit run --all-files
```
---
## Commit Messages
We follow [Conventional Commits](https://www.conventionalcommits.org/):
```
<type>: <short description>
```
| Type | When to use |
|--------------|--------------------------------------------------|
| **feat** | New feature or capability |
| **fix** | Bug fix |
| **docs** | Documentation changes |
| **style** | Formatting, whitespace (no logic change) |
| **refactor** | Code restructuring without changing behavior |
| **perf** | Performance improvement |
| **test** | Adding or updating tests |
| **build** | Build system or dependencies |
| **ci** | CI/CD and pre-commit configuration |
| **chore** | Routine maintenance and tooling |
**Examples:**
```
feat: Add gaze estimation model
fix: Correct bounding box scaling for non-square images
ci: Add nbstripout pre-commit hook
docs: Update installation instructions
```
---
@@ -67,6 +104,14 @@ pre-commit run --all-files
---
## Releases
Releases are automated via GitHub Actions. Maintainers trigger **Actions → Release Pipeline → Run workflow** with a [PEP 440](https://peps.python.org/pep-0440/) version (e.g. `0.7.0`, `0.7.0rc1`). The pipeline runs tests, bumps `pyproject.toml` + `uniface/__init__.py`, tags the commit, publishes to PyPI, and creates a GitHub Release. Docs redeploy only for stable releases.
See [CONTRIBUTING.md](https://github.com/yakhyo/uniface/blob/main/CONTRIBUTING.md#release-process) for the full process.
---
## Questions?
Open an issue on [GitHub](https://github.com/yakhyo/uniface/issues).


@@ -12,6 +12,7 @@ Overview of all training datasets and evaluation benchmarks used by UniFace mode
| Recognition | [MS1MV2](#ms1mv2) | 5.8M images, 85.7K IDs | MobileFace, SphereFace |
| Recognition | [WebFace600K](#webface600k) | 600K images | ArcFace |
| Recognition | [WebFace4M / WebFace12M](#webface4m--webface12m) | 4M / 12M images | AdaFace |
| Landmarks | [WFLW](#wflw) / [300W+CelebA](#300w--celeba) | 10K / 3.8K labeled + 202.6K unlabeled | PIPNet (98 / 68 pts) |
| Gaze | [Gaze360](#gaze360) | 238 subjects | MobileGaze |
| Parsing | [CelebAMask-HQ](#celebamask-hq) | 30K images | BiSeNet |
| Attributes | [CelebA](#celeba) | 200K images | AgeGender |
@@ -126,6 +127,41 @@ Large-scale dataset with wide variations in pose, age, illumination, ethnicity,
---
### Facial Landmarks
#### WFLW
Wider Facial Landmarks in-the-Wild — a 98-point landmark dataset whose images come from
WIDER FACE. Used to train the supervised PIPNet 98-point variant shipped with UniFace.
| Property | Value |
| ---------- | -------------------------------------- |
| Images | 10,000 (7,500 train / 2,500 test) |
| Annotation | 98 manually labeled landmarks per face |
| Used by | PIPNet WFLW-98 |
!!! info "Reference"
**Project page**: [WFLW dataset](https://wywu.github.io/projects/LAB/WFLW.html)
---
#### 300W + CelebA
The 68-point PIPNet variant is trained in a generalizable semi-supervised setting (GSSL):
labeled images come from 300W and unlabeled images come from CelebA.
| Property | Value |
| --------------- | -------------------------------------------------------------------------------- |
| Labeled images | 3,837 (3,148 train: LFPW train + HELEN train + AFW; 689 test: LFPW test + HELEN test + iBUG) |
| Unlabeled | 202,599 (full CelebA; bounding boxes from RetinaFace per the PIPNet paper) |
| Annotation | 68-point iBUG layout |
| Used by | PIPNet 300W+CelebA-68 |
!!! info "Reference"
**Paper**: [PIPNet (Pixel-in-Pixel Net)](https://arxiv.org/abs/2003.03771) (IJCV 2021)
---
### Gaze Estimation
#### Gaze360


@@ -10,17 +10,17 @@ template: home.html
# UniFace { .hero-title }
<p class="hero-subtitle">A Unified Face Analysis Library for Python</p>
[![PyPI Version](https://img.shields.io/pypi/v/uniface.svg?label=Version)](https://pypi.org/project/uniface/)
[![Python Version](https://img.shields.io/badge/Python-3.10%2B-blue)](https://www.python.org/)
[![License](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![Github Build Status](https://github.com/yakhyo/uniface/actions/workflows/ci.yml/badge.svg)](https://github.com/yakhyo/uniface/actions)
[![PyPI Downloads](https://static.pepy.tech/personalized-badge/uniface?period=total&units=INTERNATIONAL_SYSTEM&left_color=GRAY&right_color=BLUE&left_text=Downloads)](https://pepy.tech/projects/uniface)
[![Kaggle Badge](https://img.shields.io/badge/Notebooks-Kaggle?label=Kaggle&color=blue)](https://www.kaggle.com/yakhyokhuja/code)
[![Discord](https://img.shields.io/badge/Discord-Join%20Server-5865F2?logo=discord&logoColor=white)](https://discord.gg/wdzrjr7R5j)
<!-- <img src="https://raw.githubusercontent.com/yakhyo/uniface/main/.github/logos/uniface_rounded_q80.webp" alt="UniFace - A Unified Face Analysis Library for Python" style="max-width: 70%; margin: 1rem 0;"> -->
[Get Started](quickstart.md){ .md-button .md-button--primary }
[View on GitHub](https://github.com/yakhyo/uniface){ .md-button }
@@ -31,17 +31,17 @@ template: home.html
<div class="feature-card" markdown>
### :material-face-recognition: Face Detection
RetinaFace, SCRFD, and YOLO detectors with 5-point landmarks.
</div>
<div class="feature-card" markdown>
### :material-account-check: Face Recognition
AdaFace, ArcFace, EdgeFace, MobileFace, and SphereFace embeddings for identity verification.
</div>
<div class="feature-card" markdown>
### :material-map-marker: Landmarks
Dense facial landmark localization — 106-point (2d106det) and 98 / 68-point (PIPNet) variants.
</div>
<div class="feature-card" markdown>
@@ -90,14 +90,14 @@ FAISS-backed embedding store for fast multi-identity face search.
## Installation
UniFace uses portable model runtimes for consistent inference across macOS, Linux, and Windows. Most core components run through **ONNX Runtime**, while optional components may use **PyTorch** where appropriate.
**CPU / Apple Silicon**
```bash
pip install uniface[cpu]
```
**GPU (NVIDIA CUDA)**
```bash
pip install uniface[gpu]
```
@@ -106,7 +106,7 @@ pip install uniface[gpu]
```bash
git clone https://github.com/yakhyo/uniface.git
cd uniface
pip install -e ".[cpu]" # or .[gpu] for CUDA
```
---

View File

@@ -6,20 +6,32 @@ This guide covers all installation options for UniFace.
## Requirements
- **Python**: 3.10 or higher
- **Operating Systems**: macOS, Linux, Windows
---
## Why Two Extras?
`onnxruntime` (CPU) and `onnxruntime-gpu` (CUDA) both install into the same `onnxruntime` Python package namespace.
Installing both at the same time causes file conflicts and silent provider mismatches.
UniFace exposes them as separate, mutually exclusive extras so you install exactly one.
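To check which runtime distribution is currently present, a small diagnostic sketch using only the standard library (this helper is ours, not part of uniface):

```python
from importlib import metadata

def installed_onnxruntime_dists() -> list:
    """Return the ONNX Runtime distributions installed (ideally exactly one)."""
    found = []
    for dist in ("onnxruntime", "onnxruntime-gpu"):
        try:
            metadata.version(dist)   # raises if the distribution is absent
            found.append(dist)
        except metadata.PackageNotFoundError:
            pass
    return found

dists = installed_onnxruntime_dists()
if len(dists) > 1:
    print("Conflict:", dists, "- uninstall both, then reinstall exactly one.")
```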
---
## Quick Install
The simplest way to install UniFace:
=== "CPU / Apple Silicon"
```bash
pip install uniface[cpu]
```
This installs the CPU version with all core dependencies.
=== "NVIDIA GPU (CUDA)"
```bash
pip install uniface[gpu]
```
---
@@ -27,14 +39,16 @@ This installs the CPU version with all core dependencies.
### macOS (Apple Silicon - M1/M2/M3/M4)
The `[cpu]` extra pulls in the standard `onnxruntime` package, which has native ARM64 support
built in since version 1.13. No additional setup is needed for CoreML acceleration.
```bash
pip install uniface[cpu]
```
!!! tip "Native Performance"
`onnxruntime` 1.13+ includes ARM64 optimizations out of the box.
UniFace automatically detects and enables `CoreMLExecutionProvider` on Apple Silicon.
Verify ARM64 installation:
@@ -47,18 +61,22 @@ python -c "import platform; print(platform.machine())"
### Linux/Windows with NVIDIA GPU
For CUDA acceleration on NVIDIA GPUs:
```bash
pip install uniface[gpu]
```
This installs `onnxruntime-gpu`, which includes both `CUDAExecutionProvider` and
`CPUExecutionProvider` — no separate CPU package is needed.
**Requirements** (exact versions depend on the ONNX Runtime release and execution provider):
- NVIDIA driver compatible with your CUDA version
- CUDA 11.x or 12.x toolkit
- cuDNN 8.x
!!! info "CUDA Compatibility"
See the [ONNX Runtime GPU compatibility matrix](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html)
for matching CUDA and cuDNN versions.
Verify GPU installation:
@@ -70,23 +88,10 @@ print("Available providers:", ort.get_available_providers())
---
### CPU-Only (All Platforms)
```bash
pip install uniface[cpu]
```
Works on all platforms with automatic CPU fallback.
@@ -100,37 +105,58 @@ For development or the latest features:
```bash
git clone https://github.com/yakhyo/uniface.git
cd uniface
pip install -e ".[cpu]" # CPU / Apple Silicon
pip install -e ".[gpu]" # NVIDIA GPU
```
With development dependencies:
```bash
pip install -e ".[cpu,dev]"
```
---
## FAISS Vector Store
For fast multi-identity face search using a FAISS vector store:
```bash
pip install faiss-cpu # CPU
pip install faiss-gpu # NVIDIA GPU (CUDA)
```
See the [Stores module](modules/stores.md) for usage.
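Conceptually, a flat FAISS index over L2-normalized face embeddings reduces to an inner-product search. The idea can be sketched with NumPy alone (toy data, no `faiss` required — `faiss.IndexFlatIP` performs this same computation at scale):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy gallery: 100 L2-normalized 512-d embeddings (ArcFace-style).
gallery = rng.standard_normal((100, 512)).astype(np.float32)
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# Query: a noisy copy of identity 42. On normalized vectors, cosine
# similarity is just the inner product.
query = gallery[42] + 0.05 * rng.standard_normal(512).astype(np.float32)
query /= np.linalg.norm(query)

scores = gallery @ query            # (100,) cosine similarities
best_match = int(np.argmax(scores)) # index of the closest identity
```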
---
## Dependencies
UniFace has minimal core dependencies:
| Package | Purpose |
|---------|---------|
| `numpy` | Array operations |
| `opencv-python` | Image processing |
| `onnxruntime` | Model inference |
| `scikit-image` | Geometric transforms |
| `scipy` | Signal processing |
| `requests` | Model download |
| `tqdm` | Progress bars |
**Runtime extras (install exactly one):**
| Extra | Package | Use case |
|-------|---------|---------|
| `uniface[cpu]` | `onnxruntime` | CPU inference, Apple Silicon |
| `uniface[gpu]` | `onnxruntime-gpu` | NVIDIA CUDA inference |
**Other optional packages:**
| Package | Install | Purpose |
|---------|---------|---------|
| `faiss-cpu` / `faiss-gpu` | `pip install faiss-cpu` | FAISS vector store |
| `torch` | `pip install torch` | Emotion model (TorchScript) |
| `torchvision` | `pip install torchvision` | Faster NMS for YOLO detectors |
---
@@ -155,17 +181,81 @@ print("Installation successful!")
---
## Upgrading
When upgrading UniFace, stay consistent with your runtime extra:
```bash
pip install --upgrade uniface[cpu] # or uniface[gpu]
```
If you are switching from CPU to GPU (or vice versa):
```bash
pip uninstall onnxruntime onnxruntime-gpu -y
pip install uniface[gpu] # install the one you want
```
---
## Pre-release Versions
UniFace ships release candidates and betas to PyPI ahead of stable releases (versions like `0.7.0rc1`, `0.7.0b1`, `0.7.0a1`). These let you try upcoming features before they're finalized.
`pip install uniface` always installs the latest **stable** release. To opt in to pre-releases:
```bash
# Latest pre-release (if newer than latest stable)
pip install uniface[cpu] --pre
# A specific pre-release
pip install uniface[cpu]==0.7.0rc1
```
Pre-releases are not recommended for production — APIs may still change before the stable release.
---
## Troubleshooting
### onnxruntime Not Found
If you see:
```
ImportError: onnxruntime is not installed. Install it with one of:
pip install uniface[cpu] # CPU / Apple Silicon
pip install uniface[gpu] # NVIDIA GPU (CUDA)
```
You installed uniface without an extra. Run the appropriate command above.
---
### Both onnxruntime and onnxruntime-gpu Installed
If you previously ran `pip install uniface[gpu]` on top of a `pip install uniface[cpu]`
(or vice versa), you may have both packages installed simultaneously, which causes conflicts.
Fix it with:
```bash
pip uninstall onnxruntime onnxruntime-gpu -y
pip install uniface[gpu] # or uniface[cpu]
```
---
### Import Errors
Ensure you're using Python 3.10+:
```bash
python --version
# Should show: Python 3.10.x or higher
```
---
### Model Download Issues
Models are automatically downloaded on first use. If downloads fail:
@@ -179,6 +269,25 @@ model_path = verify_model_weights(RetinaFaceWeights.MNET_V2)
print(f"Model downloaded to: {model_path}")
```
---
### CUDA Not Detected
1. Verify CUDA installation:
```bash
nvidia-smi
```
2. Check CUDA version compatibility with ONNX Runtime.
3. Reinstall the GPU extra cleanly:
```bash
pip uninstall onnxruntime onnxruntime-gpu -y
pip install uniface[gpu]
```
---
### Performance Issues on Mac
Verify you're using the ARM64 build (not x86_64 via Rosetta):


@@ -20,5 +20,7 @@ UniFace is released under the [MIT License](https://opensource.org/licenses/MIT)
| SphereFace | [yakhyo/face-recognition](https://github.com/yakhyo/face-recognition) | MIT |
| BiSeNet | [yakhyo/face-parsing](https://github.com/yakhyo/face-parsing) | MIT |
| MobileGaze | [yakhyo/gaze-estimation](https://github.com/yakhyo/gaze-estimation) | MIT |
| MODNet | [yakhyo/modnet](https://github.com/yakhyo/modnet) | Apache-2.0 |
| MiniFASNet | [yakhyo/face-anti-spoofing](https://github.com/yakhyo/face-anti-spoofing) | Apache-2.0 |
| FairFace | [yakhyo/fairface-onnx](https://github.com/yakhyo/fairface-onnx) | CC BY 4.0 |
| PIPNet | [yakhyo/pipnet-onnx](https://github.com/yakhyo/pipnet-onnx) — meanface tables vendored from [jhb86253817/PIPNet](https://github.com/jhb86253817/PIPNet) | MIT |


@@ -156,6 +156,24 @@ Face recognition using angular softmax loss.
---
### EdgeFace
Efficient face recognition designed for edge devices, using EdgeNeXt backbone with optional LoRA compression.
| Model Name | Backbone | Params | MFLOPs | Size | LFW | CALFW | CPLFW | CFP-FP | AgeDB-30 |
| --------------- | -------- | ------ | ------ | ----- | ------ | ------ | ------ | ------ | -------- |
| `XXS` :material-check-circle: | EdgeNeXt | 1.24M | 94 | ~5 MB | 99.57% | 94.83% | 90.27% | 93.63% | 94.92% |
| `XS_GAMMA_06` | EdgeNeXt | 1.77M | 154 | ~7 MB | 99.73% | 95.28% | 91.58% | 94.71% | 96.08% |
| `S_GAMMA_05` | EdgeNeXt | 3.65M | 306 | ~14 MB | 99.78% | 95.55% | 92.48% | 95.74% | 97.03% |
| `BASE` | EdgeNeXt | 18.2M | 1399 | ~70 MB | 99.83% | 96.07% | 93.75% | 97.01% | 97.60% |
!!! info "Training Data & Reference"
**Paper**: [EdgeFace: Efficient Face Recognition Model for Edge Devices](https://arxiv.org/abs/2307.01838v2) (IEEE T-BIOM 2024)
**Source**: [github.com/otroshi/edgeface](https://github.com/otroshi/edgeface) | [github.com/yakhyo/edgeface-onnx](https://github.com/yakhyo/edgeface-onnx)
---
## Facial Landmark Models
### 106-Point Landmark Detection
@@ -178,6 +196,26 @@ Facial landmark localization model.
---
### PIPNet (98 / 68 points)
PIPNet (Pixel-in-Pixel Net) facial landmark detector. ResNet-18 backbone, 256×256 input.
| Model Name | Points | Backbone | Dataset | Size |
| ---------- | ------ | -------- | ------- | ---- |
| `WFLW_98` :material-check-circle: | 98 | ResNet-18 | WFLW (supervised) | 47 MB |
| `DW300_CELEBA_68` | 68 | ResNet-18 | 300W+CelebA (GSSL) | 46 MB |
!!! info "Reference"
**Paper**: [PIPNet: Towards Efficient Facial Landmark Detection in the Wild](https://arxiv.org/abs/2003.03771) (IJCV 2021)
**Source**: [yakhyo/pipnet-onnx](https://github.com/yakhyo/pipnet-onnx) — ONNX export from [jhb86253817/PIPNet](https://github.com/jhb86253817/PIPNet)
!!! note "Auto-selected meanface"
Both variants share the same architecture; the number of landmarks (and the matching
meanface table) is inferred from the ONNX output channel count.
---
## Attribute Analysis Models
### Age & Gender Detection
@@ -353,6 +391,36 @@ XSeg from DeepFaceLab outputs masks for face regions. Requires 5-point landmarks
---
## Portrait Matting Models
### MODNet
MODNet (Real-Time Trimap-Free Portrait Matting) produces soft alpha mattes from full images without requiring a trimap. Uses MobileNetV2 backbone with low-resolution, high-resolution, and fusion branches.
| Model Name | Variant | Size | Use Case |
| ---------- | ------- | ---- | -------- |
| `PHOTOGRAPHIC` :material-check-circle: | High-quality | 25 MB | Portrait photos |
| `WEBCAM` | Real-time | 25 MB | Webcam feeds |
!!! info "Model Details"
**Paper**: [MODNet: Real-Time Trimap-Free Portrait Matting via Objective Decomposition](https://arxiv.org/abs/2011.11961) (AAAI 2022)
**Source**: [yakhyo/modnet](https://github.com/yakhyo/modnet) — ported weights and clean inference codebase
**Output**: Alpha matte `(H, W)` in `[0, 1]`
**Applications:**
- Background removal / replacement
- Green screen compositing
- Video conferencing virtual backgrounds
- Portrait editing
!!! note "Input Requirements"
Operates on full images (not face crops). No trimap or face detection required.
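With a matte in hand, compositing against any background is a single broadcasted blend, `out = alpha * fg + (1 - alpha) * bg`. A minimal NumPy sketch (the `composite` helper is illustrative, not part of uniface):

```python
import numpy as np

def composite(foreground, background, matte):
    """Alpha-blend foreground over background using a (H, W) matte in [0, 1].
    foreground/background: (H, W, 3) uint8 images of the same size."""
    a = matte[:, :, None]  # broadcast the matte over the channel axis
    return (foreground * a + background * (1.0 - a)).astype(np.uint8)
```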
---
## Anti-Spoofing Models
### MiniFASNet Family
@@ -402,8 +470,10 @@ See [Model Cache & Offline Use](concepts/model-cache-offline.md) for full detail
- **Head Pose Estimation**: [yakhyo/head-pose-estimation](https://github.com/yakhyo/head-pose-estimation) - 6D rotation head pose estimation training and ONNX models
- **Face Parsing Training**: [yakhyo/face-parsing](https://github.com/yakhyo/face-parsing) - BiSeNet training code and pretrained weights
- **Face Segmentation**: [yakhyo/face-segmentation](https://github.com/yakhyo/face-segmentation) - XSeg ONNX Inference
- **Portrait Matting**: [yakhyo/modnet](https://github.com/yakhyo/modnet) - MODNet ported weights and inference (from [ZHKKKe/MODNet](https://github.com/ZHKKKe/MODNet))
- **Face Anti-Spoofing**: [yakhyo/face-anti-spoofing](https://github.com/yakhyo/face-anti-spoofing) - MiniFASNet ONNX inference (weights from [minivision-ai/Silent-Face-Anti-Spoofing](https://github.com/minivision-ai/Silent-Face-Anti-Spoofing))
- **FairFace**: [yakhyo/fairface-onnx](https://github.com/yakhyo/fairface-onnx) - FairFace ONNX inference for race, gender, age prediction
- **PIPNet**: [yakhyo/pipnet-onnx](https://github.com/yakhyo/pipnet-onnx) - PIPNet ONNX export and inference (from [jhb86253817/PIPNet](https://github.com/jhb86253817/PIPNet))
- **InsightFace**: [deepinsight/insightface](https://github.com/deepinsight/insightface) - Model architectures and pretrained weights
### Papers
@@ -414,4 +484,6 @@ See [Model Cache & Offline Use](concepts/model-cache-offline.md) for full detail
- **AdaFace**: [AdaFace: Quality Adaptive Margin for Face Recognition](https://arxiv.org/abs/2204.00964)
- **ArcFace**: [Additive Angular Margin Loss for Deep Face Recognition](https://arxiv.org/abs/1801.07698)
- **SphereFace**: [Deep Hypersphere Embedding for Face Recognition](https://arxiv.org/abs/1704.08063)
- **MODNet**: [Real-Time Trimap-Free Portrait Matting via Objective Decomposition](https://arxiv.org/abs/2011.11961)
- **BiSeNet**: [Bilateral Segmentation Network for Real-time Semantic Segmentation](https://arxiv.org/abs/1808.00897)
- **PIPNet**: [Pixel-in-Pixel Net: Towards Efficient Facial Landmark Detection in the Wild](https://arxiv.org/abs/2003.03771)


@@ -2,6 +2,11 @@
Facial attribute analysis for age, gender, race, and emotion detection.
<figure markdown="span">
![Age & Gender Prediction](https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/age_gender.jpg){ width="100%" }
<figcaption>Age and gender prediction with detection bounding boxes</figcaption>
</figure>
---
## Available Models


@@ -2,6 +2,11 @@
Face detection is the first step in any face analysis pipeline. UniFace provides four detection models.
<figure markdown="span">
![Face Detection](https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/detection.jpg){ width="100%" }
<figcaption>SCRFD detection with corner-style bounding boxes and 5-point landmarks</figcaption>
</figure>
---
## Available Models
@@ -264,10 +269,8 @@ from uniface.draw import draw_detections
draw_detections(
    image=image,
    faces=faces,
    vis_threshold=0.6,
)
cv2.imwrite("result.jpg", image)
@@ -288,6 +291,6 @@ python tools/detect.py --source image.jpg
## See Also
- [Recognition Module](recognition.md) - Extract embeddings from detected faces
- [Landmarks Module](landmarks.md) - Get 106 / 98 / 68-point dense landmarks
- [Image Pipeline Recipe](../recipes/image-pipeline.md) - Complete detection workflow
- [Concepts: Thresholds](../concepts/thresholds-calibration.md) - Tuning detection parameters


@@ -2,6 +2,11 @@
Gaze estimation predicts where a person is looking (pitch and yaw angles).
<figure markdown="span">
![Gaze Estimation](https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/gaze.jpg){ width="100%" }
<figcaption>Gaze direction arrows with pitch/yaw angle labels</figcaption>
</figure>
---
## Available Models


@@ -2,6 +2,11 @@
Head pose estimation predicts the 3D orientation of a person's head (pitch, yaw, and roll angles).
<figure markdown="span">
![Head Pose Estimation](https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/headpose.jpg){ width="100%" }
<figcaption>3D head pose visualization with pitch, yaw, and roll angles</figcaption>
</figure>
---
## Available Models


@@ -2,6 +2,11 @@
Facial landmark detection provides precise localization of facial features.
<figure markdown="span">
![106-Point Landmarks](https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/landmarks.jpg){ width="50%" }
<figcaption>106-point facial landmark localization</figcaption>
</figure>
---
## Available Models
@@ -9,6 +14,8 @@ Facial landmark detection provides precise localization of facial features.
| Model | Points | Size |
|-------|--------|------|
| **Landmark106** | 106 | 14 MB |
| **PIPNet (WFLW-98)** | 98 | 47 MB |
| **PIPNet (300W+CelebA-68)** | 68 | 46 MB |
!!! info "5-Point Landmarks"
Basic 5-point landmarks are included with all detection models (RetinaFace, SCRFD, YOLOv5-Face, YOLOv8-Face).
@@ -74,6 +81,48 @@ mouth = landmarks[87:106]
---
## PIPNet (98 / 68 points)
PIPNet (Pixel-in-Pixel Net) is a high-accuracy facial landmark detector. UniFace ships
two ONNX variants that share a ResNet-18 backbone and 256×256 input — the only difference
is the number of points and the dataset they were trained on.
### Basic Usage
```python
from uniface.detection import RetinaFace
from uniface.landmark import PIPNet
detector = RetinaFace()
landmarker = PIPNet() # Default: 98 points (WFLW)
faces = detector.detect(image)
if faces:
landmarks = landmarker.get_landmarks(image, faces[0].bbox)
print(f"Landmarks shape: {landmarks.shape}") # (98, 2)
```
### 68-Point Variant (300W+CelebA, GSSL)
```python
from uniface.constants import PIPNetWeights
from uniface.landmark import PIPNet
landmarker = PIPNet(model_name=PIPNetWeights.DW300_CELEBA_68)
landmarks = landmarker.get_landmarks(image, face.bbox)
print(landmarks.shape) # (68, 2)
```
### Notes
- The number of landmarks is read from the ONNX output and the matching meanface
table is selected automatically — there is no `num_lms=` argument.
- PIPNet uses an asymmetric crop around the bbox (+10% on the left, right, and bottom; −10% at the top, trimming the forehead region) and ImageNet normalization. This is handled internally.
- Output landmarks are in original-image pixel coordinates as `float32`.
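The crop described above reduces to plain box arithmetic. A sketch with a hypothetical `pipnet_crop_box` helper (uniface performs the equivalent step internally):

```python
def pipnet_crop_box(bbox, scale=0.10):
    """Expand an (x1, y1, x2, y2) box by 10% on the left, right, and bottom,
    and move the top edge down by 10%. Illustrative only: uniface handles
    this preprocessing internally."""
    x1, y1, x2, y2 = bbox
    w, h = x2 - x1, y2 - y1
    return (x1 - w * scale, y1 + h * scale, x2 + w * scale, y2 + h * scale)
```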
---
## 5-Point Landmarks (Detection)
All detection models provide 5-point landmarks:
@@ -237,9 +286,17 @@ def estimate_head_pose(landmarks, image_shape):
## Factory Function
```python
from uniface.constants import PIPNetWeights
from uniface.landmark import create_landmarker
landmarker = create_landmarker() # Returns Landmark106
# Default: 106-point InsightFace model
landmarker = create_landmarker()
# 98-point PIPNet (WFLW)
landmarker = create_landmarker('pipnet')
# 68-point PIPNet (300W+CelebA)
landmarker = create_landmarker('pipnet', model_name=PIPNetWeights.DW300_CELEBA_68)
```
---

docs/modules/matting.md Normal file

@@ -0,0 +1,157 @@
# Portrait Matting
Portrait matting produces a soft alpha matte separating the foreground (person) from the background — no trimap needed.
<figure markdown="span">
![Portrait Matting](https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/matting.jpg){ width="100%" }
<figcaption>MODNet: Input → Matte → Green Screen</figcaption>
</figure>
---
## Available Models
| Model | Variant | Size | Use Case |
|-------|---------|------|----------|
| **MODNet Photographic** :material-check-circle: | PHOTOGRAPHIC | 25 MB | High-quality portrait photos |
| MODNet Webcam | WEBCAM | 25 MB | Real-time webcam feeds |
---
## Basic Usage
```python
import cv2
from uniface.matting import MODNet
matting = MODNet()
image = cv2.imread("photo.jpg")
matte = matting.predict(image)
print(f"Matte shape: {matte.shape}") # (H, W)
print(f"Matte dtype: {matte.dtype}") # float32
print(f"Matte range: [{matte.min():.2f}, {matte.max():.2f}]") # [0, 1]
```
---
## Model Variants
```python
from uniface.matting import MODNet
from uniface.constants import MODNetWeights
# Photographic (default) — best for photos
matting = MODNet()
# Webcam — optimized for real-time
matting = MODNet(model_name=MODNetWeights.WEBCAM)
# Custom input size
matting = MODNet(input_size=256)
```
| Parameter | Default | Description |
|-----------|---------|-------------|
| `model_name` | `PHOTOGRAPHIC` | Model variant to load |
| `input_size` | `512` | Target shorter-side size for preprocessing |
| `providers` | `None` | ONNX Runtime execution providers |
---
## Applications
### Transparent Background (RGBA)
```python
import cv2
import numpy as np
matting = MODNet()
image = cv2.imread("photo.jpg")
matte = matting.predict(image)
rgba = cv2.cvtColor(image, cv2.COLOR_BGR2BGRA)
rgba[:, :, 3] = (matte * 255).astype(np.uint8)
cv2.imwrite("transparent.png", rgba)
```
### Green Screen
```python
import numpy as np
matte_3ch = matte[:, :, np.newaxis]
bg = np.full_like(image, (0, 177, 64), dtype=np.uint8)
green = (image * matte_3ch + bg * (1 - matte_3ch)).astype(np.uint8)
cv2.imwrite("green_screen.jpg", green)
```
### Custom Background
```python
import cv2
import numpy as np
background = cv2.imread("beach.jpg")
background = cv2.resize(background, (image.shape[1], image.shape[0]))
matte_3ch = matte[:, :, np.newaxis]
result = (image * matte_3ch + background * (1 - matte_3ch)).astype(np.uint8)
cv2.imwrite("custom_bg.jpg", result)
```
### Webcam Matting
```python
import cv2
import numpy as np
from uniface.matting import MODNet
matting = MODNet(model_name="modnet_webcam")
cap = cv2.VideoCapture(0)
while True:
ret, frame = cap.read()
if not ret:
break
matte = matting.predict(frame)
matte_3ch = matte[:, :, np.newaxis]
bg = np.full_like(frame, (0, 177, 64), dtype=np.uint8)
result = (frame * matte_3ch + bg * (1 - matte_3ch)).astype(np.uint8)
cv2.imshow("Matting", np.hstack([frame, result]))
if cv2.waitKey(1) & 0xFF == ord("q"):
break
cap.release()
cv2.destroyAllWindows()
```
---
## Factory Function
```python
from uniface.matting import create_matting_model
from uniface.constants import MODNetWeights
# Default (Photographic)
matting = create_matting_model()
# With enum
matting = create_matting_model(MODNetWeights.WEBCAM)
# With string
matting = create_matting_model("modnet_webcam")
```
---
## Next Steps
- [Parsing](parsing.md) - Face semantic segmentation
- [Privacy](privacy.md) - Face anonymization
- [Detection](detection.md) - Face detection


@@ -2,6 +2,16 @@
Face parsing segments faces into semantic components or face regions.
<figure markdown="span">
![Face Parsing](https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/parsing.jpg){ width="80%" }
<figcaption>BiSeNet face parsing with 19 semantic component classes</figcaption>
</figure>
<figure markdown="span">
![Face Segmentation](https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/segmentation.jpg){ width="80%" }
<figcaption>XSeg face region segmentation mask</figcaption>
</figure>
---
## Available Models


@@ -2,6 +2,11 @@
Face anonymization protects privacy by blurring or obscuring faces in images and videos.
<figure markdown="span">
![Face Anonymization](https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/anonymization.jpg){ width="100%" }
<figcaption>Five anonymization methods: pixelate, gaussian, blackout, elliptical, and median</figcaption>
</figure>
---
## Available Methods


@@ -2,6 +2,11 @@
Face recognition extracts embeddings for identity verification and face search.
<figure markdown="span">
![Face Verification](https://raw.githubusercontent.com/yakhyo/uniface/main/assets/demos/verification.jpg){ width="80%" }
<figcaption>Pairwise face verification with cosine similarity scores</figcaption>
</figure>
---
## Available Models
@@ -10,6 +15,7 @@ Face recognition extracts embeddings for identity verification and face search.
|-------|----------|------|---------------|
| **AdaFace** | IR-18/IR-101 | 92-249 MB | 512 |
| **ArcFace** | MobileNet/ResNet | 8-166 MB | 512 |
| **EdgeFace** | EdgeNeXt/LoRA | 5-70 MB | 512 |
| **MobileFace** | MobileNet V2/V3 | 1-10 MB | 512 |
| **SphereFace** | Sphere20/36 | 50-92 MB | 512 |
@@ -113,6 +119,64 @@ recognizer = ArcFace(providers=['CPUExecutionProvider'])
---
## EdgeFace
Efficient face recognition designed for edge devices, using an EdgeNeXt backbone with optional LoRA low-rank compression. Competition-winning entry (compact track) at EFaR 2023, IJCB.
### Basic Usage
```python
from uniface.detection import RetinaFace
from uniface.recognition import EdgeFace
detector = RetinaFace()
recognizer = EdgeFace()
# Detect face
faces = detector.detect(image)
# Extract embedding
if faces:
embedding = recognizer.get_normalized_embedding(image, faces[0].landmarks)
print(f"Embedding shape: {embedding.shape}") # (512,)
```
### Model Variants
```python
from uniface.recognition import EdgeFace
from uniface.constants import EdgeFaceWeights
# Ultra-compact (default)
recognizer = EdgeFace(model_name=EdgeFaceWeights.XXS)
# Compact with LoRA
recognizer = EdgeFace(model_name=EdgeFaceWeights.XS_GAMMA_06)
# Small with LoRA
recognizer = EdgeFace(model_name=EdgeFaceWeights.S_GAMMA_05)
# Full-size
recognizer = EdgeFace(model_name=EdgeFaceWeights.BASE)
# Force CPU execution
recognizer = EdgeFace(providers=['CPUExecutionProvider'])
```
| Variant | Params | MFLOPs | Size | LFW | CALFW | CPLFW | CFP-FP | AgeDB-30 |
|---------|--------|--------|------|-----|-------|-------|--------|----------|
| **XXS** :material-check-circle: | 1.24M | 94 | ~5 MB | 99.57% | 94.83% | 90.27% | 93.63% | 94.92% |
| XS_GAMMA_06 | 1.77M | 154 | ~7 MB | 99.73% | 95.28% | 91.58% | 94.71% | 96.08% |
| S_GAMMA_05 | 3.65M | 306 | ~14 MB | 99.78% | 95.55% | 92.48% | 95.74% | 97.03% |
| BASE | 18.2M | 1399 | ~70 MB | 99.83% | 96.07% | 93.75% | 97.01% | 97.60% |
!!! info "Reference"
**Paper**: [EdgeFace: Efficient Face Recognition Model for Edge Devices](https://arxiv.org/abs/2307.01838v2) (IEEE T-BIOM 2024)
**Source**: [github.com/otroshi/edgeface](https://github.com/otroshi/edgeface)
---
## MobileFace
Lightweight face recognition models with MobileNet backbones.
@@ -287,9 +351,10 @@ else:
```python
from uniface.recognition import create_recognizer
# Available methods: 'arcface', 'adaface', 'edgeface', 'mobileface', 'sphereface'
recognizer = create_recognizer('arcface')
recognizer = create_recognizer('adaface')
recognizer = create_recognizer('edgeface')
```
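Since `get_normalized_embedding` returns L2-normalized vectors, verification with any of these recognizers reduces to a cosine score (for unit vectors, just a dot product). A minimal sketch:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embeddings; for L2-normalized
    vectors this is simply their dot product."""
    a = np.asarray(a, dtype=np.float32).ravel()
    b = np.asarray(b, dtype=np.float32).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Compare the score against a model-specific threshold (the example notebooks use 0.4 as a starting point; tune per model and dataset).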
---


@@ -1,4 +1,4 @@
# Stores
FAISS-backed vector store for fast similarity search over embeddings.
@@ -12,7 +12,7 @@ FAISS-backed vector store for fast similarity search over embeddings.
## FAISS
```python
from uniface.stores import FAISS
```
A thin wrapper around a FAISS `IndexFlatIP` (inner-product) index. Vectors
@@ -134,7 +134,7 @@ loaded = store.load() # True if files exist
import cv2
from uniface.detection import RetinaFace
from uniface.recognition import ArcFace
from uniface.stores import FAISS
detector = RetinaFace()
recognizer = ArcFace()


@@ -12,13 +12,15 @@ Run UniFace examples directly in your browser with Google Colab, or download and
| [Face Alignment](https://github.com/yakhyo/uniface/blob/main/examples/02_face_alignment.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/02_face_alignment.ipynb) | Align faces for recognition |
| [Face Verification](https://github.com/yakhyo/uniface/blob/main/examples/03_face_verification.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/03_face_verification.ipynb) | Compare faces for identity |
| [Face Search](https://github.com/yakhyo/uniface/blob/main/examples/04_face_search.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/04_face_search.ipynb) | Find a person in group photos |
| [Face Analyzer](https://github.com/yakhyo/uniface/blob/main/examples/05_face_analyzer.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/05_face_analyzer.ipynb) | Unified face analysis |
| [Face Parsing](https://github.com/yakhyo/uniface/blob/main/examples/06_face_parsing.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/06_face_parsing.ipynb) | Semantic face segmentation |
| [Face Anonymization](https://github.com/yakhyo/uniface/blob/main/examples/07_face_anonymization.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/07_face_anonymization.ipynb) | Privacy-preserving blur |
| [Gaze Estimation](https://github.com/yakhyo/uniface/blob/main/examples/08_gaze_estimation.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/08_gaze_estimation.ipynb) | Gaze direction estimation |
| [Face Segmentation](https://github.com/yakhyo/uniface/blob/main/examples/09_face_segmentation.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/09_face_segmentation.ipynb) | Face segmentation with XSeg |
| [Face Vector Store](https://github.com/yakhyo/uniface/blob/main/examples/10_face_vector_store.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/10_face_vector_store.ipynb) | FAISS-backed face database |
| [Head Pose Estimation](https://github.com/yakhyo/uniface/blob/main/examples/11_head_pose_estimation.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/11_head_pose_estimation.ipynb) | 3D head orientation estimation |
| [Face Recognition](https://github.com/yakhyo/uniface/blob/main/examples/12_face_recognition.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/12_face_recognition.ipynb) | Standalone face recognition pipeline |
| [Portrait Matting](https://github.com/yakhyo/uniface/blob/main/examples/13_portrait_matting.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/yakhyo/uniface/blob/main/examples/13_portrait_matting.ipynb) | Portrait matting with MODNet |
---
@@ -32,7 +34,7 @@ git clone https://github.com/yakhyo/uniface.git
cd uniface
# Install dependencies
pip install "uniface[cpu]" jupyter # or uniface[gpu] for CUDA
# Launch Jupyter
jupyter notebook examples/


@@ -54,19 +54,8 @@ detector = RetinaFace()
image = cv2.imread("photo.jpg")
faces = detector.detect(image)
# Draw on image
draw_detections(image=image, faces=faces, vis_threshold=0.6)
# Save result
cv2.imwrite("output.jpg", image)
@@ -177,7 +166,9 @@ Face 2: Female, 20-29, White
---
## Facial Landmarks (106 / 98 / 68 Points)
UniFace ships two dense-landmark families. Pick whichever fits your downstream task:
```python
import cv2
@@ -185,14 +176,14 @@ from uniface.detection import RetinaFace
from uniface.landmark import Landmark106
detector = RetinaFace()
landmarker = Landmark106() # 106-point InsightFace 2d106det model
image = cv2.imread("photo.jpg")
faces = detector.detect(image)
if faces:
landmarks = landmarker.get_landmarks(image, faces[0].bbox)
print(f"Detected {len(landmarks)} landmarks") # 106
# Draw landmarks
for x, y in landmarks.astype(int):
@@ -201,6 +192,21 @@ if faces:
cv2.imwrite("landmarks.jpg", image)
```
**PIPNet (98 / 68 points)** — ResNet-18 backbone trained on WFLW (98 pts) or 300W+CelebA (68 pts):
```python
from uniface.constants import PIPNetWeights
from uniface.landmark import PIPNet
# 98-point WFLW model (default)
landmarker_98 = PIPNet()
# 68-point 300W+CelebA model
landmarker_68 = PIPNet(model_name=PIPNetWeights.DW300_CELEBA_68)
landmarks = landmarker_98.get_landmarks(image, faces[0].bbox) # (98, 2)
```
---
## Gaze Estimation
@@ -291,6 +297,34 @@ print(f"Detected {len(np.unique(mask))} facial components")
---
## Portrait Matting
Remove backgrounds without a trimap:
```python
import cv2
import numpy as np
from uniface.matting import MODNet
matting = MODNet()
image = cv2.imread("portrait.jpg")
matte = matting.predict(image) # (H, W) float32 in [0, 1]
# Transparent PNG
rgba = cv2.cvtColor(image, cv2.COLOR_BGR2BGRA)
rgba[:, :, 3] = (matte * 255).astype(np.uint8)
cv2.imwrite("transparent.png", rgba)
# Green screen
matte_3ch = matte[:, :, np.newaxis]
bg = np.full_like(image, (0, 177, 64), dtype=np.uint8)
result = (image * matte_3ch + bg * (1 - matte_3ch)).astype(np.uint8)
cv2.imwrite("green_screen.jpg", result)
```
---
## Face Anonymization
Blur faces for privacy protection:
@@ -372,10 +406,7 @@ while True:
faces = detector.detect(frame)
draw_detections(image=frame, faces=faces)
cv2.imshow("UniFace - Press 'q' to quit", frame)
@@ -451,7 +482,8 @@ For detailed model comparisons and benchmarks, see the [Model Zoo](models.md).
| Task | Available Models |
|------|------------------|
| Detection | `RetinaFace`, `SCRFD`, `YOLOv5Face`, `YOLOv8Face` |
| Recognition | `ArcFace`, `AdaFace`, `EdgeFace`, `MobileFace`, `SphereFace` |
| Landmarks | `Landmark106` (106 pts), `PIPNet` (98 / 68 pts) |
| Tracking | `BYTETracker` |
| Gaze | `MobileGaze` (ResNet18/34/50, MobileNetV2, MobileOneS0) |
| Head Pose | `HeadPose` (ResNet18/34/50, MobileNetV2/V3) |
@@ -499,7 +531,7 @@ python -c "import platform; print(platform.machine())"
from uniface.detection import RetinaFace, SCRFD
from uniface.recognition import ArcFace, AdaFace
from uniface.attribute import AgeGender, FairFace
from uniface.landmark import Landmark106, PIPNet
from uniface.gaze import MobileGaze
from uniface.headpose import HeadPose
from uniface.parsing import BiSeNet, XSeg
@@ -507,7 +539,7 @@ from uniface.privacy import BlurFace
from uniface.spoofing import MiniFASNet
from uniface.tracking import BYTETracker
from uniface.analyzer import FaceAnalyzer
from uniface.stores import FAISS # pip install faiss-cpu
from uniface.draw import draw_detections, draw_tracks
```


@@ -52,7 +52,7 @@ python tools/search.py --reference ref.jpg --source 0 # webcam
## Vector Search (FAISS index)
For identifying faces against a database of many known people, use the
[`FAISS`](../modules/stores.md) vector store.
!!! info "Install extra"
`bash
@@ -80,7 +80,7 @@ import cv2
from pathlib import Path
from uniface.detection import RetinaFace
from uniface.recognition import ArcFace
from uniface.stores import FAISS
detector = RetinaFace()
recognizer = ArcFace()
@@ -112,7 +112,7 @@ python tools/faiss_search.py build --faces-dir dataset/ --db-path ./my_index
import cv2
from uniface.detection import RetinaFace
from uniface.recognition import ArcFace
from uniface.stores import FAISS
detector = RetinaFace()
recognizer = ArcFace()
@@ -143,7 +143,7 @@ python tools/faiss_search.py run --db-path ./my_index --source 0 # webcam
### Manage the index
```python
from uniface.stores import FAISS
store = FAISS(db_path="./my_index")
store.load()
@@ -160,7 +160,7 @@ store.save()
## See Also
- [Stores Module](../modules/stores.md) - Full `FAISS` API reference
- [Recognition Module](../modules/recognition.md) - Face recognition details
- [Video & Webcam](video-webcam.md) - Real-time processing
- [Concepts: Thresholds](../concepts/thresholds-calibration.md) - Tuning similarity thresholds


@@ -48,12 +48,7 @@ def process_image(image_path):
print(f" Face {i+1}: {attrs.sex}, {attrs.age} years old")
# Visualize
draw_detections(image=image, faces=faces)
return image, results


@@ -26,12 +26,7 @@ while True:
faces = detector.detect(frame)
draw_detections(image=frame, faces=faces)
cv2.imshow("Face Detection", frame)

File diff suppressed because one or more lines are too long


@@ -0,0 +1,356 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "0",
"metadata": {},
"source": [
"# Face Recognition: RetinaFace → Align → ArcFace\n",
"\n",
"<div style=\"display:flex; flex-wrap:wrap; align-items:center;\">\n",
" <a style=\"margin-right:10px; margin-bottom:6px;\" href=\"https://pepy.tech/projects/uniface\"><img alt=\"PyPI Downloads\" src=\"https://static.pepy.tech/personalized-badge/uniface?period=total&units=international_system&left_color=grey&right_color=blue&left_text=Downloads\"></a>\n",
" <a style=\"margin-right:10px; margin-bottom:6px;\" href=\"https://pypi.org/project/uniface/\"><img alt=\"PyPI Version\" src=\"https://img.shields.io/pypi/v/uniface.svg\"></a>\n",
" <a style=\"margin-right:10px; margin-bottom:6px;\" href=\"https://opensource.org/licenses/MIT\"><img alt=\"License\" src=\"https://img.shields.io/badge/License-MIT-blue.svg\"></a>\n",
" <a style=\"margin-bottom:6px;\" href=\"https://github.com/yakhyo/uniface\"><img alt=\"GitHub Stars\" src=\"https://img.shields.io/github/stars/yakhyo/uniface.svg?style=social\"></a>\n",
"</div>\n",
"\n",
"**UniFace** is a lightweight, production-ready Python library for face detection, recognition, tracking, landmark analysis, face parsing, gaze estimation, and face attributes.\n",
"\n",
"🔗 **GitHub**: [github.com/yakhyo/uniface](https://github.com/yakhyo/uniface) | 📚 **Docs**: [yakhyo.github.io/uniface](https://yakhyo.github.io/uniface)\n",
"\n",
"---\n",
"\n",
"This notebook demonstrates face recognition **without** the high-level `FaceAnalyzer` wrapper. Each step is handled manually:\n",
"\n",
"1. **RetinaFace**: Detects faces and extracts 5-point landmarks.\n",
"2. **Face Alignment**: Warps each face into a standardized 112x112 crop using the landmarks.\n",
"3. **ArcFace**: Generates a 512-D L2-normalized embedding from the aligned crop.\n",
"\n",
"We compare three test images: `image0.jpg`, `image1.jpg`, and `image5.jpg`."
]
},
{
"cell_type": "markdown",
"id": "1",
"metadata": {},
"source": [
"## 1. Install UniFace"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2",
"metadata": {},
"outputs": [],
"source": [
"%pip install -q \"uniface[cpu]\"\n",
"\n",
"# Clone repo for assets (Colab only)\n",
"import os\n",
"if 'COLAB_GPU' in os.environ or 'COLAB_RELEASE_TAG' in os.environ:\n",
" if not os.path.exists('uniface'):\n",
" !git clone --depth 1 https://github.com/yakhyo/uniface.git\n",
" os.chdir('uniface/examples')"
]
},
{
"cell_type": "markdown",
"id": "3",
"metadata": {},
"source": [
"## 2. Import Libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4",
"metadata": {},
"outputs": [],
"source": [
"import cv2\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import matplotlib.patches as patches\n",
"\n",
"import uniface\n",
"from uniface.detection import RetinaFace\n",
"from uniface.recognition import ArcFace\n",
"from uniface.face_utils import face_alignment\n",
"\n",
"print(f\"UniFace version: {uniface.__version__}\")"
]
},
{
"cell_type": "markdown",
"id": "5",
"metadata": {},
"source": [
"## 3. Configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6",
"metadata": {},
"outputs": [],
"source": [
"IMAGE_PATHS = {\n",
" \"image0\": \"../assets/test_images/image0.jpg\",\n",
" \"image1\": \"../assets/test_images/image1.jpg\",\n",
" \"image5\": \"../assets/test_images/image5.jpg\",\n",
"}\n",
"THRESHOLD = 0.4 # Cosine similarity threshold for \"same person\""
]
},
{
"cell_type": "markdown",
"id": "7",
"metadata": {},
"source": [
"## 4. Initialize Models"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8",
"metadata": {},
"outputs": [],
"source": [
"detector = RetinaFace(confidence_threshold=0.5)\n",
"recognizer = ArcFace()"
]
},
{
"cell_type": "markdown",
"id": "9",
"metadata": {},
"source": [
"## 5. Load Images & Detect Faces\n",
"\n",
"We use the detector to find faces and their landmarks in each image."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "10",
"metadata": {},
"outputs": [],
"source": [
"images = {}\n",
"faces = {}\n",
"\n",
"for name, path in IMAGE_PATHS.items():\n",
" img = cv2.imread(path)\n",
" if img is None:\n",
" raise FileNotFoundError(f\"Cannot read: {path}\")\n",
"\n",
" detected = detector.detect(img)\n",
" if not detected:\n",
" raise RuntimeError(f\"No face detected in: {path}\")\n",
"\n",
" images[name] = img\n",
" faces[name] = detected[0] # Keep highest-confidence face\n",
" print(f\"{name:8s} | {len(detected)} face(s) detected | confidence={faces[name].confidence:.3f}\")"
]
},
{
"cell_type": "markdown",
"id": "11",
"metadata": {},
"source": [
"## 6. Visualize Detections"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "12",
"metadata": {},
"outputs": [],
"source": [
"LM_COLORS = [\"red\", \"blue\", \"green\", \"cyan\", \"magenta\"]\n",
"\n",
"fig, axes = plt.subplots(1, 3, figsize=(15, 5))\n",
"fig.suptitle(\"Detected Faces & 5-Point Landmarks\", fontweight=\"bold\", fontsize=16)\n",
"\n",
"for ax, (name, img) in zip(axes, images.items()):\n",
" face = faces[name]\n",
" ax.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))\n",
" ax.set_title(f\"{name}\\nconf={face.confidence:.3f}\", fontsize=12)\n",
" ax.axis(\"off\")\n",
"\n",
" # Bounding box\n",
" x1, y1, x2, y2 = face.bbox.astype(int)\n",
" ax.add_patch(patches.Rectangle(\n",
" (x1, y1), x2 - x1, y2 - y1,\n",
" linewidth=2, edgecolor=\"lime\", facecolor=\"none\"))\n",
"\n",
" # Landmarks\n",
" for (lx, ly), c in zip(face.landmarks, LM_COLORS):\n",
" ax.plot(lx, ly, \"o\", color=c, markersize=6)\n",
"\n",
"plt.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "13",
"metadata": {},
"source": [
"## 7. Face Alignment\n",
"\n",
"We warp each detected face to a standardized 112x112 crop using its five landmarks, which improves recognition accuracy."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "14",
"metadata": {},
"outputs": [],
"source": [
"aligned = {}\n",
"\n",
"for name, img in images.items():\n",
" lm = faces[name].landmarks\n",
" crop, _ = face_alignment(img, lm, image_size=(112, 112))\n",
" aligned[name] = crop\n",
"\n",
"fig, axes = plt.subplots(1, 3, figsize=(12, 4))\n",
"fig.suptitle(\"Aligned Face Crops (112x112)\", fontweight=\"bold\", fontsize=14)\n",
"\n",
"for ax, (name, crop) in zip(axes, aligned.items()):\n",
" ax.imshow(cv2.cvtColor(crop, cv2.COLOR_BGR2RGB))\n",
" ax.set_title(name, fontsize=12)\n",
" ax.axis(\"off\")\n",
"\n",
"plt.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "15",
"metadata": {},
"source": [
"## 8. Extract Embeddings\n",
"\n",
"We pass the aligned crops to ArcFace to obtain 512-dimensional embedding vectors."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "16",
"metadata": {},
"outputs": [],
"source": [
"embeddings = {}\n",
"\n",
"for name, crop in aligned.items():\n",
" # landmarks=None because image is already aligned\n",
" emb = recognizer.get_normalized_embedding(crop, landmarks=None)\n",
" embeddings[name] = emb\n",
" print(f\"{name:8s} | embedding shape={emb.shape} | L2-norm={np.linalg.norm(emb):.4f}\")"
]
},
{
"cell_type": "markdown",
"id": "17",
"metadata": {},
"source": [
"## 9. Pairwise Cosine Similarity\n",
"\n",
"Because the embeddings are L2-normalized, cosine similarity reduces to a plain dot product."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18",
"metadata": {},
"outputs": [],
"source": [
"names = list(embeddings.keys())\n",
"n = len(names)\n",
"sim_matrix = np.zeros((n, n))\n",
"\n",
"for i, ni in enumerate(names):\n",
" for j, nj in enumerate(names):\n",
" # Use squeeze() to handle (1, 512) shapes if present\n",
" sim_matrix[i, j] = float(np.dot(embeddings[ni].squeeze(), embeddings[nj].squeeze()))\n",
"\n",
"# Print comparison results\n",
"pairs = [(names[i], names[j]) for i in range(n) for j in range(i + 1, n)]\n",
"for a, b in pairs:\n",
" s = float(np.dot(embeddings[a].squeeze(), embeddings[b].squeeze()))\n",
" verdict = \"✓ Same person\" if s >= THRESHOLD else \"✗ Different people\"\n",
" print(f\"{a} vs {b}: similarity={s:.4f} → {verdict}\")"
]
},
{
"cell_type": "markdown",
"id": "19",
"metadata": {},
"source": [
"## 10. Similarity Heatmap"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "20",
"metadata": {},
"outputs": [],
"source": [
"fig, ax = plt.subplots(figsize=(8, 6))\n",
"im = ax.imshow(sim_matrix, vmin=0, vmax=1, cmap=\"viridis\")\n",
"plt.colorbar(im, ax=ax, label=\"Cosine similarity\")\n",
"\n",
"ax.set_xticks(range(n))\n",
"ax.set_yticks(range(n))\n",
"ax.set_xticklabels(names, rotation=30, ha=\"right\")\n",
"ax.set_yticklabels(names)\n",
"ax.set_title(\"Pairwise Face Similarity (ArcFace)\", fontweight=\"bold\")\n",
"\n",
"for i in range(n):\n",
" for j in range(n):\n",
" val = sim_matrix[i, j]\n",
" ax.text(j, i, f\"{val:.2f}\",\n",
" ha=\"center\", va=\"center\",\n",
" color=\"black\" if val >= 0.6 else \"white\",\n",
" fontsize=12, fontweight=\"bold\")\n",
"\n",
"plt.tight_layout()\n",
"plt.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "base",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
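
The recognition notebook above treats cosine similarity as a plain dot product because ArcFace embeddings are L2-normalized. A minimal numpy sketch of that identity, using random stand-in vectors rather than real embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "embeddings": random 512-D vectors, L2-normalized like ArcFace output
a = rng.standard_normal(512)
b = rng.standard_normal(512)
a /= np.linalg.norm(a)
b /= np.linalg.norm(b)

# Full cosine-similarity formula vs. the shortcut used in the notebook
cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
assert np.isclose(cos_sim, np.dot(a, b))  # identical for unit vectors

# Self-similarity of a unit vector is exactly 1
assert np.isclose(np.dot(a, a), 1.0)
```

This is why the notebook's similarity matrix has ones on the diagonal and why a single threshold on the dot product suffices for the same/different verdict.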


@@ -0,0 +1,265 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Portrait Matting with MODNet\n",
"\n",
"<div style=\"display:flex; flex-wrap:wrap; align-items:center;\">\n",
" <a style=\"margin-right:10px; margin-bottom:6px;\" href=\"https://pepy.tech/projects/uniface\"><img alt=\"PyPI Downloads\" src=\"https://static.pepy.tech/personalized-badge/uniface?period=total&units=international_system&left_color=grey&right_color=blue&left_text=Downloads\"></a>\n",
" <a style=\"margin-right:10px; margin-bottom:6px;\" href=\"https://pypi.org/project/uniface/\"><img alt=\"PyPI Version\" src=\"https://img.shields.io/pypi/v/uniface.svg\"></a>\n",
" <a style=\"margin-right:10px; margin-bottom:6px;\" href=\"https://opensource.org/licenses/MIT\"><img alt=\"License\" src=\"https://img.shields.io/badge/License-MIT-blue.svg\"></a>\n",
" <a style=\"margin-bottom:6px;\" href=\"https://github.com/yakhyo/uniface\"><img alt=\"GitHub Stars\" src=\"https://img.shields.io/github/stars/yakhyo/uniface.svg?style=social\"></a>\n",
"</div>\n",
"\n",
"**UniFace** is a lightweight, production-ready Python library for face detection, recognition, tracking, landmark analysis, face parsing, gaze estimation, and face attributes.\n",
"\n",
"🔗 **GitHub**: [github.com/yakhyo/uniface](https://github.com/yakhyo/uniface) | 📚 **Docs**: [yakhyo.github.io/uniface](https://yakhyo.github.io/uniface)\n",
"\n",
"---\n",
"\n",
"This notebook demonstrates portrait matting using **MODNet** — a trimap-free model that produces soft alpha mattes from full images. No face detection or cropping required.\n",
"\n",
"## 1. Install UniFace"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -q \"uniface[cpu]\"\n",
"\n",
"# Clone repo for assets (Colab only)\n",
"import os\n",
"if 'COLAB_GPU' in os.environ or 'COLAB_RELEASE_TAG' in os.environ:\n",
" if not os.path.exists('uniface'):\n",
" !git clone --depth 1 https://github.com/yakhyo/uniface.git\n",
" os.chdir('uniface/examples')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Import Libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import cv2\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"import uniface\n",
"from uniface.matting import MODNet\n",
"\n",
"print(f\"UniFace version: {uniface.__version__}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Initialize Model\n",
"\n",
"MODNet has two variants:\n",
"- **PHOTOGRAPHIC** (default): optimized for high-quality portrait photos\n",
"- **WEBCAM**: optimized for real-time webcam feeds"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"matting = MODNet()\n",
"\n",
"print(f\"Input size: {matting.input_size}\")\n",
"print(f\"Input name: {matting.input_name}\")\n",
"print(f\"Output names: {matting.output_names}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. Helper Functions"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def compose(image, matte, background=None):\n",
" \"\"\"Composite foreground over a background using the alpha matte.\"\"\"\n",
" h, w = image.shape[:2]\n",
" matte_3ch = matte[:, :, np.newaxis]\n",
"\n",
" if background is None:\n",
" bg = np.full_like(image, (0, 177, 64), dtype=np.uint8)\n",
" else:\n",
" bg = cv2.resize(background, (w, h), interpolation=cv2.INTER_AREA)\n",
"\n",
" return (image * matte_3ch + bg * (1 - matte_3ch)).astype(np.uint8)\n",
"\n",
"\n",
"def show_results(image, matte):\n",
" \"\"\"Display original, matte, and green screen as a single merged image.\"\"\"\n",
" matte_vis = cv2.cvtColor((matte * 255).astype(np.uint8), cv2.COLOR_GRAY2BGR)\n",
" green = compose(image, matte)\n",
" merged = np.hstack([image, matte_vis, green])\n",
"\n",
" plt.figure(figsize=(18, 6))\n",
" plt.imshow(cv2.cvtColor(merged, cv2.COLOR_BGR2RGB))\n",
" plt.axis(\"off\")\n",
" plt.tight_layout()\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5. Basic Matting"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"image = cv2.imread(\"../assets/demos/src_portrait1.jpg\")\n",
"print(f\"Image shape: {image.shape}\")\n",
"\n",
"matte = matting.predict(image)\n",
"print(f\"Matte shape: {matte.shape}\")\n",
"print(f\"Matte dtype: {matte.dtype}\")\n",
"print(f\"Matte range: [{matte.min():.3f}, {matte.max():.3f}]\")\n",
"\n",
"show_results(image, matte)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6. Transparent Background (RGBA)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"alpha = (matte * 255).astype(np.uint8)\n",
"rgba = cv2.cvtColor(image, cv2.COLOR_BGR2BGRA)\n",
"rgba[:, :, 3] = alpha\n",
"\n",
"# Checkerboard background to visualize transparency\n",
"h, w = image.shape[:2]\n",
"checker = np.zeros((h, w, 3), dtype=np.uint8)\n",
"block = 20\n",
"for y in range(0, h, block):\n",
" for x in range(0, w, block):\n",
" if (y // block + x // block) % 2 == 0:\n",
" checker[y:y+block, x:x+block] = 200\n",
" else:\n",
" checker[y:y+block, x:x+block] = 255\n",
"\n",
"matte_3ch = matte[:, :, np.newaxis]\n",
"rgba_vis = (image * matte_3ch + checker * (1 - matte_3ch)).astype(np.uint8)\n",
"\n",
"merged = np.hstack([image, rgba_vis])\n",
"\n",
"plt.figure(figsize=(16, 5))\n",
"plt.imshow(cv2.cvtColor(merged, cv2.COLOR_BGR2RGB))\n",
"plt.axis(\"off\")\n",
"plt.tight_layout()\n",
"plt.show()\n",
"\n",
"print(f\"RGBA shape: {rgba.shape}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 7. Custom Background"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create a gradient background\n",
"h, w = image.shape[:2]\n",
"gradient = np.zeros((h, w, 3), dtype=np.uint8)\n",
"for y in range(h):\n",
" ratio = y / h\n",
" gradient[y, :] = [int(180 * (1 - ratio)), int(100 + 80 * ratio), int(220 * ratio)]\n",
"\n",
"custom_bg = compose(image, matte, gradient)\n",
"green_bg = compose(image, matte)\n",
"\n",
"merged = np.hstack([image, green_bg, custom_bg])\n",
"\n",
"plt.figure(figsize=(18, 6))\n",
"plt.imshow(cv2.cvtColor(merged, cv2.COLOR_BGR2RGB))\n",
"plt.axis(\"off\")\n",
"plt.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Summary\n",
"\n",
"MODNet provides trimap-free portrait matting:\n",
"\n",
"- **`predict(image)`** — returns `(H, W)` float32 alpha matte in `[0, 1]`\n",
"- **No face detection needed** — works on full images directly\n",
"- **Two variants** — `PHOTOGRAPHIC` for photos, `WEBCAM` for real-time\n",
"- **Compositing** — use the matte for transparent PNGs, green screen, or custom backgrounds\n",
"\n",
"For more details, see the [Matting docs](https://yakhyo.github.io/uniface/modules/matting/)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "base",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.5"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
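
The `compose` helper in the matting notebook blends foreground and background with the standard alpha-compositing equation `out = fg * alpha + bg * (1 - alpha)`. A self-contained numpy sketch of that math on a tiny synthetic image (no MODNet involved):

```python
import numpy as np

# Tiny 2x2 "foreground" and "background" images (uint8, any channel order)
fg = np.full((2, 2, 3), 200, dtype=np.uint8)
bg = np.zeros((2, 2, 3), dtype=np.uint8)

# Alpha matte in [0, 1], expanded to a channel axis like matte[:, :, np.newaxis]
matte = np.array([[1.0, 0.0],
                  [0.5, 0.25]])[:, :, np.newaxis]

out = (fg * matte + bg * (1 - matte)).astype(np.uint8)

# alpha=1 keeps the foreground pixel, alpha=0 keeps the background pixel
assert (out[0, 0] == 200).all()
assert (out[0, 1] == 0).all()
# alpha=0.5 gives the midpoint
assert (out[1, 0] == 100).all()
```

Soft alpha values at hair and edge pixels are what make MODNet composites look natural compared with a hard binary mask.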


@@ -1,5 +1,5 @@
site_name: UniFace
site_description: All-in-One Face Analysis Library with ONNX Runtime
site_description: A Unified Face Analysis Library for Python
site_author: Yakhyokhuja Valikhujaev
site_url: https://yakhyo.github.io/uniface
@@ -150,11 +150,12 @@ nav:
- Landmarks: modules/landmarks.md
- Attributes: modules/attributes.md
- Parsing: modules/parsing.md
- Matting: modules/matting.md
- Gaze: modules/gaze.md
- Head Pose: modules/headpose.md
- Anti-Spoofing: modules/spoofing.md
- Privacy: modules/privacy.md
- Indexing: modules/indexing.md
- Stores: modules/stores.md
- Guides:
- Overview: concepts/overview.md
- Inputs & Outputs: concepts/inputs-outputs.md


@@ -1,7 +1,7 @@
[project]
name = "uniface"
version = "3.2.0"
description = "UniFace: A Comprehensive Library for Face Detection, Recognition, Tracking, Landmark Analysis, Face Parsing, Gaze Estimation, Age, and Gender Detection"
version = "3.6.0"
description = "UniFace: A Unified Face Analysis Library for Python"
readme = "README.md"
license = "MIT"
authors = [{ name = "Yakhyokhuja Valikhujaev", email = "yakhyo9696@gmail.com" }]
@@ -9,7 +9,7 @@ maintainers = [
{ name = "Yakhyokhuja Valikhujaev", email = "yakhyo9696@gmail.com" },
]
requires-python = ">=3.11,<3.15"
requires-python = ">=3.10,<3.15"
keywords = [
"face-detection",
"face-recognition",
@@ -34,6 +34,7 @@ classifiers = [
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
@@ -43,16 +44,28 @@ classifiers = [
dependencies = [
"numpy>=1.21.0",
"opencv-python>=4.5.0",
"onnxruntime>=1.16.0",
"scikit-image>=0.26.0",
"scikit-image>=0.22.0",
"scipy>=1.7.0",
"requests>=2.28.0",
"tqdm>=4.64.0",
]
[project.optional-dependencies]
dev = ["pytest>=7.0.0", "ruff>=0.4.0"]
gpu = ["onnxruntime-gpu>=1.16.0"]
cpu = [
"onnxruntime>=1.16.0; python_version >= '3.11'",
"onnxruntime>=1.16.0,<1.24; python_version < '3.11'",
]
gpu = [
"onnxruntime-gpu>=1.16.0; python_version >= '3.11'",
"onnxruntime-gpu>=1.16.0,<1.24; python_version < '3.11'",
]
dev = ["pytest>=7.0.0", "ruff>=0.4.0", "pre-commit>=3.0.0"]
docs = [
"mkdocs-material",
"pymdown-extensions",
"mkdocs-git-committers-plugin-2",
"mkdocs-git-revision-date-localized-plugin",
]
[project.urls]
Homepage = "https://github.com/yakhyo/uniface"
@@ -73,7 +86,7 @@ uniface = ["py.typed"]
[tool.ruff]
line-length = 120
target-version = "py311"
target-version = "py310"
exclude = [
".git",
".ruff_cache",


@@ -1,7 +0,0 @@
numpy>=1.21.0
opencv-python>=4.5.0
onnxruntime>=1.16.0
scikit-image>=0.26.0
scipy>=1.7.0
requests>=2.28.0
tqdm>=4.64.0


@@ -1,61 +0,0 @@
# Copyright 2025-2026 Yakhyokhuja Valikhujaev
# Author: Yakhyokhuja Valikhujaev
# GitHub: https://github.com/yakhyo
from __future__ import annotations
import numpy as np
from uniface.draw import draw_gaze
def _compute_gaze_delta(bbox: np.ndarray, pitch: float, yaw: float) -> tuple[int, int]:
"""Replicate draw_gaze dx/dy math for verification."""
x_min, _, x_max, _ = map(int, bbox[:4])
length = x_max - x_min
dx = int(-length * np.sin(yaw) * np.cos(pitch))
dy = int(-length * np.sin(pitch))
return dx, dy
def test_draw_gaze_yaw_only_moves_horizontally():
"""Yaw-only input (pitch=0) should produce horizontal displacement only."""
image = np.zeros((200, 200, 3), dtype=np.uint8)
bbox = np.array([50, 50, 150, 150], dtype=np.float32)
yaw = 0.5
pitch = 0.0
dx, dy = _compute_gaze_delta(bbox, pitch, yaw)
assert dx != 0, 'Yaw-only should produce horizontal displacement'
assert dy == 0, 'Yaw-only should produce zero vertical displacement'
# Should not raise
draw_gaze(image, bbox, pitch, yaw, draw_bbox=False, draw_angles=False)
def test_draw_gaze_pitch_only_moves_vertically():
"""Pitch-only input (yaw=0) should produce vertical displacement only."""
image = np.zeros((200, 200, 3), dtype=np.uint8)
bbox = np.array([50, 50, 150, 150], dtype=np.float32)
yaw = 0.0
pitch = 0.5
dx, dy = _compute_gaze_delta(bbox, pitch, yaw)
assert dx == 0, 'Pitch-only should produce zero horizontal displacement'
assert dy != 0, 'Pitch-only should produce vertical displacement'
# Should not raise
draw_gaze(image, bbox, pitch, yaw, draw_bbox=False, draw_angles=False)
def test_draw_gaze_modifies_image():
"""draw_gaze should modify the image in place."""
image = np.zeros((200, 200, 3), dtype=np.uint8)
bbox = np.array([50, 50, 150, 150], dtype=np.float32)
original = image.copy()
draw_gaze(image, bbox, 0.3, 0.3)
assert not np.array_equal(image, original), 'draw_gaze should modify the image'
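
The `_compute_gaze_delta` helper above mirrors the dx/dy projection inside `draw_gaze`; evaluating the same formulas directly shows why the yaw-only and pitch-only assertions hold:

```python
import numpy as np

def compute_gaze_delta(bbox, pitch, yaw):
    # Same math as the test helper: project gaze angles onto the image plane,
    # scaled by the bbox width.
    x_min, _, x_max, _ = map(int, bbox[:4])
    length = x_max - x_min
    dx = int(-length * np.sin(yaw) * np.cos(pitch))
    dy = int(-length * np.sin(pitch))
    return dx, dy

# Yaw-only (pitch=0): sin(pitch)=0, so dy is exactly 0 while dx is nonzero
dx, dy = compute_gaze_delta([50, 50, 150, 150], pitch=0.0, yaw=0.5)
assert dy == 0 and dx != 0

# Pitch-only (yaw=0): sin(yaw)=0, so dx is exactly 0 while dy is nonzero
dx, dy = compute_gaze_delta([50, 50, 150, 150], pitch=0.5, yaw=0.0)
assert dx == 0 and dy != 0
```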


@@ -91,6 +91,12 @@ def test_create_recognizer_sphereface():
assert recognizer is not None, 'Failed to create SphereFace recognizer'
def test_create_recognizer_edgeface():
"""Test creating an EdgeFace recognizer using factory function."""
recognizer = create_recognizer('edgeface')
assert recognizer is not None, 'Failed to create EdgeFace recognizer'
def test_create_recognizer_invalid_method():
"""
Test that invalid recognizer method raises an error.
@@ -124,6 +130,25 @@ def test_create_landmarker_invalid_method():
create_landmarker('invalid_method')
def test_create_landmarker_pipnet_default():
"""create_landmarker('pipnet') returns a PIPNet (98 points by default)."""
from uniface.landmark import PIPNet
landmarker = create_landmarker('pipnet')
assert isinstance(landmarker, PIPNet), 'Should return PIPNet instance'
assert landmarker.num_lms == 98
def test_create_landmarker_pipnet_68():
"""create_landmarker('pipnet', model_name=...) selects the 68-point variant."""
from uniface.constants import PIPNetWeights
from uniface.landmark import PIPNet
landmarker = create_landmarker('pipnet', model_name=PIPNetWeights.DW300_CELEBA_68)
assert isinstance(landmarker, PIPNet), 'Should return PIPNet instance'
assert landmarker.num_lms == 68
# list_available_detectors tests
def test_list_available_detectors():
"""
@@ -183,6 +208,17 @@ def test_landmarker_inference_from_factory():
assert landmarks.shape == (106, 2), 'Should return 106 landmarks'
def test_pipnet_landmarker_inference_from_factory():
"""PIPNet landmarker created from factory can perform inference."""
landmarker = create_landmarker('pipnet')
mock_image = np.random.randint(0, 255, (640, 640, 3), dtype=np.uint8)
mock_bbox = [100, 100, 300, 300]
landmarks = landmarker.get_landmarks(mock_image, mock_bbox)
assert landmarks is not None, 'Landmarker should return landmarks'
assert landmarks.shape == (98, 2), 'Should return 98 landmarks'
def test_multiple_detector_creation():
"""
Test that multiple detectors can be created independently.

tests/test_matting.py Normal file

@@ -0,0 +1,158 @@
# Copyright 2025-2026 Yakhyokhuja Valikhujaev
# Author: Yakhyokhuja Valikhujaev
# GitHub: https://github.com/yakhyo
from __future__ import annotations
import numpy as np
import pytest
from uniface.constants import MODNetWeights
from uniface.matting import MODNet, create_matting_model
def test_modnet_initialization():
"""Test MODNet initialization with default weights."""
matting = MODNet()
assert matting is not None
assert matting.input_size == 512
def test_modnet_with_webcam_weights():
"""Test MODNet initialization with webcam variant."""
matting = MODNet(model_name=MODNetWeights.WEBCAM)
assert matting is not None
assert matting.input_size == 512
def test_modnet_custom_input_size():
"""Test MODNet with custom input size."""
matting = MODNet(input_size=256)
assert matting.input_size == 256
def test_modnet_preprocess():
"""Test preprocessing produces correct tensor shape and dtype."""
matting = MODNet()
image = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
tensor, orig_h, orig_w = matting.preprocess(image)
assert tensor.dtype == np.float32
assert tensor.ndim == 4
assert tensor.shape[0] == 1
assert tensor.shape[1] == 3
assert tensor.shape[2] % 32 == 0
assert tensor.shape[3] % 32 == 0
assert orig_h == 480
assert orig_w == 640
def test_modnet_preprocess_small_image():
"""Test preprocessing with image smaller than input_size."""
matting = MODNet(input_size=512)
image = np.random.randint(0, 255, (128, 128, 3), dtype=np.uint8)
tensor, orig_h, orig_w = matting.preprocess(image)
assert tensor.shape[2] % 32 == 0
assert tensor.shape[3] % 32 == 0
assert orig_h == 128
assert orig_w == 128
def test_modnet_preprocess_large_image():
"""Test preprocessing with image larger than input_size."""
matting = MODNet(input_size=512)
image = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)
tensor, orig_h, orig_w = matting.preprocess(image)
assert tensor.shape[2] % 32 == 0
assert tensor.shape[3] % 32 == 0
assert orig_h == 1080
assert orig_w == 1920
def test_modnet_postprocess():
"""Test postprocessing resizes matte to original dimensions."""
matting = MODNet()
dummy_output = np.random.rand(1, 1, 512, 672).astype(np.float32)
matte = matting.postprocess(dummy_output, original_size=(640, 480))
assert matte.shape == (480, 640)
assert matte.dtype == np.float32
def test_modnet_predict():
"""Test end-to-end prediction."""
matting = MODNet()
image = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
matte = matting.predict(image)
assert matte.shape == (480, 640)
assert matte.dtype == np.float32
assert matte.min() >= 0.0
assert matte.max() <= 1.0
def test_modnet_callable():
"""Test that MODNet is callable via __call__."""
matting = MODNet()
image = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)
matte = matting(image)
assert matte.shape == (256, 256)
assert matte.dtype == np.float32
def test_modnet_different_input_sizes():
"""Test prediction with various image dimensions."""
matting = MODNet()
sizes = [(256, 256), (480, 640), (720, 1280), (300, 500)]
for h, w in sizes:
image = np.random.randint(0, 255, (h, w, 3), dtype=np.uint8)
matte = matting.predict(image)
assert matte.shape == (h, w), f'Failed for size {h}x{w}'
assert matte.dtype == np.float32
# Factory tests
def test_create_matting_model_default():
"""Test factory with default parameters."""
matting = create_matting_model()
assert matting is not None
assert isinstance(matting, MODNet)
def test_create_matting_model_with_enum():
"""Test factory with enum."""
matting = create_matting_model(MODNetWeights.WEBCAM)
assert isinstance(matting, MODNet)
def test_create_matting_model_with_string():
"""Test factory with string model name."""
matting = create_matting_model('modnet_photographic')
assert isinstance(matting, MODNet)
def test_create_matting_model_webcam_string():
"""Test factory with webcam string model name."""
matting = create_matting_model('modnet_webcam')
assert isinstance(matting, MODNet)
def test_create_matting_model_invalid():
"""Test factory with invalid model name."""
with pytest.raises(ValueError, match='Unknown matting model'):
create_matting_model('invalid_model')
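
The preprocessing tests only pin down the contract that MODNet's input tensor has spatial dimensions divisible by 32. A hypothetical sketch of one rounding rule that satisfies that contract — the actual uniface implementation may use a different rule:

```python
def round_to_multiple(x: float, base: int = 32) -> int:
    """Round x to the nearest positive multiple of base."""
    return max(base, int(round(x / base)) * base)

def target_hw(h: int, w: int, ref_size: int = 512) -> tuple[int, int]:
    # Hypothetical rule: scale the longer side toward ref_size, then snap
    # both dimensions to multiples of 32 as the tests require.
    scale = ref_size / max(h, w)
    return round_to_multiple(h * scale), round_to_multiple(w * scale)

# All shapes used in the tests end up 32-aligned under this rule
for h, w in [(480, 640), (128, 128), (1080, 1920)]:
    th, tw = target_hw(h, w)
    assert th % 32 == 0 and tw % 32 == 0
```

Snapping to multiples of 32 matters because encoder-decoder matting networks downsample by powers of two; misaligned inputs would produce shape mismatches at skip connections.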


@@ -0,0 +1,132 @@
# Copyright 2025-2026 Yakhyokhuja Valikhujaev
# Author: Yakhyokhuja Valikhujaev
# GitHub: https://github.com/yakhyo
from __future__ import annotations
import numpy as np
import pytest
from uniface.constants import PIPNetWeights
from uniface.landmark import PIPNet
@pytest.fixture(scope='module', params=[PIPNetWeights.WFLW_98, PIPNetWeights.DW300_CELEBA_68])
def pipnet_model(request):
return PIPNet(model_name=request.param)
@pytest.fixture
def mock_image():
return np.random.randint(0, 255, (640, 640, 3), dtype=np.uint8)
@pytest.fixture
def mock_bbox():
return [100, 100, 300, 300]
def _expected_n_lms(model: PIPNet) -> int:
return 98 if model.num_lms == 98 else 68
def test_model_initialization(pipnet_model):
assert pipnet_model is not None, 'PIPNet model initialization failed.'
assert pipnet_model.num_lms in (68, 98), f'Unexpected num_lms: {pipnet_model.num_lms}'
assert pipnet_model.input_h == pipnet_model.input_w == 256
def test_landmark_detection(pipnet_model, mock_image, mock_bbox):
landmarks = pipnet_model.get_landmarks(mock_image, mock_bbox)
n = _expected_n_lms(pipnet_model)
assert landmarks.shape == (n, 2), f'Expected shape ({n}, 2), got {landmarks.shape}'
def test_landmark_dtype(pipnet_model, mock_image, mock_bbox):
landmarks = pipnet_model.get_landmarks(mock_image, mock_bbox)
assert landmarks.dtype == np.float32, f'Expected float32, got {landmarks.dtype}'
def test_landmark_coordinates_within_image(pipnet_model, mock_image, mock_bbox):
landmarks = pipnet_model.get_landmarks(mock_image, mock_bbox)
n = _expected_n_lms(pipnet_model)
x_coords = landmarks[:, 0]
y_coords = landmarks[:, 1]
x1, y1, x2, y2 = mock_bbox
margin = 50
x_in_bounds = int(np.sum((x_coords >= x1 - margin) & (x_coords <= x2 + margin)))
y_in_bounds = int(np.sum((y_coords >= y1 - margin) & (y_coords <= y2 + margin)))
threshold = max(int(0.9 * n), n - 5)
assert x_in_bounds >= threshold, f'Only {x_in_bounds}/{n} x-coordinates within bounds'
assert y_in_bounds >= threshold, f'Only {y_in_bounds}/{n} y-coordinates within bounds'
def test_different_bbox_sizes(pipnet_model, mock_image):
n = _expected_n_lms(pipnet_model)
test_bboxes = [
[50, 50, 150, 150],
[100, 100, 300, 300],
[50, 50, 400, 400],
]
for bbox in test_bboxes:
landmarks = pipnet_model.get_landmarks(mock_image, bbox)
assert landmarks.shape == (n, 2), f'Failed for bbox {bbox}'
def test_consistency(pipnet_model, mock_image, mock_bbox):
landmarks1 = pipnet_model.get_landmarks(mock_image, mock_bbox)
landmarks2 = pipnet_model.get_landmarks(mock_image, mock_bbox)
assert np.allclose(landmarks1, landmarks2), 'Same input should produce same landmarks'
def test_different_image_sizes(pipnet_model, mock_bbox):
n = _expected_n_lms(pipnet_model)
test_sizes = [(480, 640, 3), (720, 1280, 3), (1080, 1920, 3)]
for size in test_sizes:
mock_image = np.random.randint(0, 255, size, dtype=np.uint8)
landmarks = pipnet_model.get_landmarks(mock_image, mock_bbox)
assert landmarks.shape == (n, 2), f'Failed for image size {size}'
def test_bbox_list_format(pipnet_model, mock_image):
n = _expected_n_lms(pipnet_model)
landmarks = pipnet_model.get_landmarks(mock_image, [100, 100, 300, 300])
assert landmarks.shape == (n, 2), 'Should work with bbox as list'
def test_bbox_array_format(pipnet_model, mock_image):
n = _expected_n_lms(pipnet_model)
bbox_array = np.array([100, 100, 300, 300])
landmarks = pipnet_model.get_landmarks(mock_image, bbox_array)
assert landmarks.shape == (n, 2), 'Should work with bbox as numpy array'
def test_landmark_distribution(pipnet_model, mock_image, mock_bbox):
landmarks = pipnet_model.get_landmarks(mock_image, mock_bbox)
x_variance = np.var(landmarks[:, 0])
y_variance = np.var(landmarks[:, 1])
assert x_variance > 0, 'Landmarks should have variation in x-coordinates'
assert y_variance > 0, 'Landmarks should have variation in y-coordinates'
def test_default_model_is_wflw_98():
"""PIPNet() with no args should default to the 98-point WFLW model."""
model = PIPNet()
assert model.num_lms == 98
def test_meanface_lookup_invalid_num_lms():
"""get_meanface_info should reject unsupported landmark counts."""
from uniface.landmark._meanface import get_meanface_info
with pytest.raises(ValueError, match='No meanface table'):
get_meanface_info(num_lms=42)


@@ -8,7 +8,7 @@ from __future__ import annotations
import numpy as np
import pytest
from uniface.recognition import ArcFace, MobileFace, SphereFace
from uniface.recognition import ArcFace, EdgeFace, MobileFace, SphereFace
@pytest.fixture
@@ -35,6 +35,12 @@ def sphereface_model():
return SphereFace()
@pytest.fixture
def edgeface_model():
"""Fixture to initialize the EdgeFace model for testing."""
return EdgeFace()
@pytest.fixture
def mock_aligned_face():
"""
@@ -176,6 +182,45 @@ def test_sphereface_normalized_embedding(sphereface_model, mock_landmarks):
assert np.isclose(norm, 1.0, atol=1e-5), f'Normalized embedding should have norm 1.0, got {norm}'
# EdgeFace Tests
def test_edgeface_initialization(edgeface_model):
"""Test that the EdgeFace model initializes correctly."""
assert edgeface_model is not None, 'EdgeFace model initialization failed.'
def test_edgeface_embedding_shape(edgeface_model, mock_aligned_face):
"""Test that EdgeFace produces embeddings with the correct shape."""
embedding = edgeface_model.get_embedding(mock_aligned_face)
assert embedding.shape[1] == 512, f'Expected 512-dim embedding, got {embedding.shape[1]}'
assert embedding.shape[0] == 1, 'Embedding should have batch dimension of 1'
def test_edgeface_normalized_embedding(edgeface_model, mock_landmarks):
"""Test that EdgeFace normalized embeddings have unit length."""
mock_image = np.random.randint(0, 255, (640, 640, 3), dtype=np.uint8)
embedding = edgeface_model.get_normalized_embedding(mock_image, mock_landmarks)
assert embedding.shape == (512,), f'Expected shape (512,), got {embedding.shape}'
norm = np.linalg.norm(embedding)
assert np.isclose(norm, 1.0, atol=1e-5), f'Normalized embedding should have norm 1.0, got {norm}'
def test_edgeface_embedding_dtype(edgeface_model, mock_aligned_face):
"""Test that EdgeFace embeddings have the correct data type."""
embedding = edgeface_model.get_embedding(mock_aligned_face)
assert embedding.dtype == np.float32, f'Expected float32, got {embedding.dtype}'
def test_edgeface_consistency(edgeface_model, mock_aligned_face):
"""Test that the same input produces the same EdgeFace embedding."""
embedding1 = edgeface_model.get_embedding(mock_aligned_face)
embedding2 = edgeface_model.get_embedding(mock_aligned_face)
assert np.allclose(embedding1, embedding2), 'Same input should produce same embedding'
# Cross-model comparison tests
def test_different_models_different_embeddings(arcface_model, mobileface_model, mock_aligned_face):
"""


@@ -27,12 +27,17 @@ from uniface.draw import draw_detections
from uniface.recognition import ArcFace
def draw_face_info(image, face, face_id):
"""Draw face ID and attributes above bounding box."""
def draw_face_info(image, face):
"""Draw face attributes above bounding box."""
x1, y1, _x2, y2 = map(int, face.bbox)
lines = [f'ID: {face_id}', f'Conf: {face.confidence:.2f}']
if face.age and face.sex:
lines = []
if face.age is not None and face.sex is not None:
lines.append(f'{face.sex}, {face.age}y')
if face.emotion is not None:
lines.append(face.emotion)
if not lines:
return
for i, line in enumerate(lines):
y_pos = y1 - 10 - (len(lines) - 1 - i) * 25
@@ -95,13 +100,10 @@ def process_image(analyzer, image_path: str, save_dir: str = 'outputs', show_sim
status = 'Same' if sim > 0.4 else 'Different'
print(f' Face {i + 1} ↔ Face {j + 1}: {sim:.3f} ({status})')
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
draw_detections(image=image, bboxes=bboxes, scores=scores, landmarks=landmarks, corner_bbox=True)
draw_detections(image=image, faces=faces, corner_bbox=True)
for i, face in enumerate(faces, 1):
draw_face_info(image, face, i)
for face in faces:
draw_face_info(image, face)
os.makedirs(save_dir, exist_ok=True)
output_path = os.path.join(save_dir, f'{Path(image_path).stem}_analysis.jpg')
@@ -137,13 +139,10 @@ def process_video(analyzer, video_path: str, save_dir: str = 'outputs'):
frame_count += 1
faces = analyzer.analyze(frame)
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
draw_detections(image=frame, bboxes=bboxes, scores=scores, landmarks=landmarks, corner_bbox=True)
draw_detections(image=frame, faces=faces, corner_bbox=True)
for i, face in enumerate(faces, 1):
draw_face_info(frame, face, i)
for face in faces:
draw_face_info(frame, face)
cv2.putText(frame, f'Faces: {len(faces)}', (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
out.write(frame)
@@ -173,13 +172,10 @@ def run_camera(analyzer, camera_id: int = 0):
faces = analyzer.analyze(frame)
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
draw_detections(image=frame, bboxes=bboxes, scores=scores, landmarks=landmarks, corner_bbox=True)
draw_detections(image=frame, faces=faces, corner_bbox=True)
for i, face in enumerate(faces, 1):
draw_face_info(frame, face, i)
for face in faces:
draw_face_info(frame, face)
cv2.putText(frame, f'Faces: {len(faces)}', (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
cv2.imshow('Face Analyzer', frame)

View File

@@ -43,10 +43,7 @@ def process_image(
from uniface.draw import draw_detections
preview = image.copy()
bboxes = [face.bbox for face in faces]
scores = [face.confidence for face in faces]
landmarks = [face.landmarks for face in faces]
draw_detections(preview, bboxes, scores, landmarks)
draw_detections(image=preview, faces=faces)
cv2.imshow('Detections (Press any key to continue)', preview)
cv2.waitKey(0)

View File

@@ -52,12 +52,7 @@ def process_image(
if not faces:
return
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
draw_detections(
image=image, bboxes=bboxes, scores=scores, landmarks=landmarks, vis_threshold=threshold, corner_bbox=True
)
draw_detections(image=image, faces=faces, vis_threshold=threshold, corner_bbox=True)
for i, face in enumerate(faces):
result = age_gender.predict(image, face)
@@ -104,12 +99,7 @@ def process_video(
frame_count += 1
faces = detector.detect(frame)
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
draw_detections(
image=frame, bboxes=bboxes, scores=scores, landmarks=landmarks, vis_threshold=threshold, corner_bbox=True
)
draw_detections(image=frame, faces=faces, vis_threshold=threshold, corner_bbox=True)
for face in faces:
result = age_gender.predict(frame, face)
@@ -143,12 +133,7 @@ def run_camera(detector, age_gender, camera_id: int = 0, threshold: float = 0.6)
faces = detector.detect(frame)
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
draw_detections(
image=frame, bboxes=bboxes, scores=scores, landmarks=landmarks, vis_threshold=threshold, corner_bbox=True
)
draw_detections(image=frame, faces=faces, vis_threshold=threshold, corner_bbox=True)
for face in faces:
result = age_gender.predict(frame, face)

View File

@@ -34,13 +34,7 @@ def process_image(detector, image_path: Path, output_path: Path, threshold: floa
faces = detector.detect(image)
# unpack face data for visualization
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
draw_detections(
image=image, bboxes=bboxes, scores=scores, landmarks=landmarks, vis_threshold=threshold, corner_bbox=True
)
draw_detections(image=image, faces=faces, vis_threshold=threshold, corner_bbox=True)
cv2.putText(
image,

View File

@@ -35,10 +35,7 @@ def process_image(detector, image_path: str, threshold: float = 0.6, save_dir: s
faces = detector.detect(image)
if faces:
bboxes = [face.bbox for face in faces]
scores = [face.confidence for face in faces]
landmarks = [face.landmarks for face in faces]
draw_detections(image=image, bboxes=bboxes, scores=scores, landmarks=landmarks, vis_threshold=threshold)
draw_detections(image=image, faces=faces, vis_threshold=threshold)
os.makedirs(save_dir, exist_ok=True)
output_path = os.path.join(save_dir, f'{os.path.splitext(os.path.basename(image_path))[0]}_out.jpg')
@@ -89,14 +86,9 @@ def process_video(
faces = detector.detect(frame)
total_faces += len(faces)
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
draw_detections(
image=frame,
bboxes=bboxes,
scores=scores,
landmarks=landmarks,
faces=faces,
vis_threshold=threshold,
draw_score=True,
corner_bbox=True,
@@ -141,14 +133,9 @@ def run_camera(detector, camera_id: int = 0, threshold: float = 0.6):
faces = detector.detect(frame)
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
draw_detections(
image=frame,
bboxes=bboxes,
scores=scores,
landmarks=landmarks,
faces=faces,
vis_threshold=threshold,
draw_score=True,
corner_bbox=True,

View File

@@ -1,9 +1,11 @@
import argparse
from uniface.constants import (
AdaFaceWeights,
AgeGenderWeights,
ArcFaceWeights,
DDAMFNWeights,
EdgeFaceWeights,
HeadPoseWeights,
LandmarkWeights,
MobileFaceWeights,
@@ -15,9 +17,11 @@ from uniface.model_store import verify_model_weights
MODEL_TYPES = {
'retinaface': RetinaFaceWeights,
'sphereface': SphereFaceWeights,
'mobileface': MobileFaceWeights,
'adaface': AdaFaceWeights,
'arcface': ArcFaceWeights,
'edgeface': EdgeFaceWeights,
'mobileface': MobileFaceWeights,
'sphereface': SphereFaceWeights,
'scrfd': SCRFDWeights,
'ddamfn': DDAMFNWeights,
'agegender': AgeGenderWeights,

View File

@@ -52,12 +52,7 @@ def process_image(
if not faces:
return
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
draw_detections(
image=image, bboxes=bboxes, scores=scores, landmarks=landmarks, vis_threshold=threshold, corner_bbox=True
)
draw_detections(image=image, faces=faces, vis_threshold=threshold, corner_bbox=True)
for i, face in enumerate(faces):
result = emotion_predictor.predict(image, face)
@@ -104,12 +99,7 @@ def process_video(
frame_count += 1
faces = detector.detect(frame)
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
draw_detections(
image=frame, bboxes=bboxes, scores=scores, landmarks=landmarks, vis_threshold=threshold, corner_bbox=True
)
draw_detections(image=frame, faces=faces, vis_threshold=threshold, corner_bbox=True)
for face in faces:
result = emotion_predictor.predict(frame, face)
@@ -143,12 +133,7 @@ def run_camera(detector, emotion_predictor, camera_id: int = 0, threshold: float
faces = detector.detect(frame)
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
draw_detections(
image=frame, bboxes=bboxes, scores=scores, landmarks=landmarks, vis_threshold=threshold, corner_bbox=True
)
draw_detections(image=frame, faces=faces, vis_threshold=threshold, corner_bbox=True)
for face in faces:
result = emotion_predictor.predict(frame, face)

View File

@@ -52,12 +52,7 @@ def process_image(
if not faces:
return
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
draw_detections(
image=image, bboxes=bboxes, scores=scores, landmarks=landmarks, vis_threshold=threshold, corner_bbox=True
)
draw_detections(image=image, faces=faces, vis_threshold=threshold, corner_bbox=True)
for i, face in enumerate(faces):
result = fairface.predict(image, face)
@@ -104,12 +99,7 @@ def process_video(
frame_count += 1
faces = detector.detect(frame)
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
draw_detections(
image=frame, bboxes=bboxes, scores=scores, landmarks=landmarks, vis_threshold=threshold, corner_bbox=True
)
draw_detections(image=frame, faces=faces, vis_threshold=threshold, corner_bbox=True)
for face in faces:
result = fairface.predict(frame, face)
@@ -143,12 +133,7 @@ def run_camera(detector, fairface, camera_id: int = 0, threshold: float = 0.6):
faces = detector.detect(frame)
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
draw_detections(
image=frame, bboxes=bboxes, scores=scores, landmarks=landmarks, vis_threshold=threshold, corner_bbox=True
)
draw_detections(image=frame, faces=faces, vis_threshold=threshold, corner_bbox=True)
for face in faces:
result = fairface.predict(frame, face)

View File

@@ -24,7 +24,7 @@ import cv2
from uniface import create_detector, create_recognizer
from uniface.draw import draw_corner_bbox, draw_text_label
from uniface.indexing import FAISS
from uniface.stores import FAISS
def _draw_face(image, bbox, text: str, color: tuple[int, int, int]) -> None:

View File

@@ -198,7 +198,10 @@ def main():
parser_arg.add_argument('--source', type=str, required=True, help='Image/video path or camera ID (0, 1, ...)')
parser_arg.add_argument('--save-dir', type=str, default='outputs', help='Output directory')
parser_arg.add_argument(
'--model', type=str, default=ParsingWeights.RESNET18, choices=[ParsingWeights.RESNET18, ParsingWeights.RESNET34]
'--model',
type=ParsingWeights,
default=ParsingWeights.RESNET18,
choices=list(ParsingWeights),
)
parser_arg.add_argument(
'--expand-ratio',

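The switch to `type=ParsingWeights, choices=list(ParsingWeights)` works because the weight enums subclass both `str` and `Enum`, so argparse can convert the raw CLI string straight into an enum member. A minimal sketch with illustrative enum values (the real members live in `uniface.constants`):

```python
import argparse
from enum import Enum

class ParsingWeights(str, Enum):
    # Illustrative values; the real members come from uniface.constants.
    RESNET18 = 'parsing_resnet18'
    RESNET34 = 'parsing_resnet34'

parser = argparse.ArgumentParser()
parser.add_argument(
    '--model',
    type=ParsingWeights,           # argparse calls ParsingWeights(<string>) per value
    default=ParsingWeights.RESNET18,
    choices=list(ParsingWeights),  # membership test compares enum members
)

args = parser.parse_args(['--model', 'parsing_resnet34'])
```

Downstream code then receives a full enum member rather than a raw string, so identity comparisons like `args.model is ParsingWeights.RESNET34` are safe.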
View File

@@ -16,16 +16,22 @@ import numpy as np
from uniface.detection import SCRFD, RetinaFace
from uniface.face_utils import compute_similarity
from uniface.recognition import ArcFace, MobileFace, SphereFace
from uniface.recognition import AdaFace, ArcFace, EdgeFace, MobileFace, SphereFace
RECOGNIZERS = {
'arcface': ArcFace,
'adaface': AdaFace,
'edgeface': EdgeFace,
'mobileface': MobileFace,
'sphereface': SphereFace,
}
def get_recognizer(name: str):
if name == 'arcface':
return ArcFace()
elif name == 'mobileface':
return MobileFace()
else:
return SphereFace()
cls = RECOGNIZERS.get(name)
if cls is None:
raise ValueError(f"Unknown recognizer: '{name}'. Available: {list(RECOGNIZERS)}")
return cls()
def run_inference(detector, recognizer, image_path: str):
@@ -91,7 +97,7 @@ def main():
'--recognizer',
type=str,
default='arcface',
choices=['arcface', 'mobileface', 'sphereface'],
choices=list(RECOGNIZERS),
)
args = parser.parse_args()

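The table-driven `get_recognizer` above replaces a hard-coded if/elif chain, so adding a backend needs only one dict entry. A runnable sketch with stub classes standing in for the real recognizers:

```python
class ArcFace: pass
class AdaFace: pass
class EdgeFace: pass

RECOGNIZERS = {
    'arcface': ArcFace,
    'adaface': AdaFace,
    'edgeface': EdgeFace,
}

def get_recognizer(name: str):
    cls = RECOGNIZERS.get(name)
    if cls is None:
        # list(RECOGNIZERS) yields the dict keys, i.e. the valid names
        raise ValueError(f"Unknown recognizer: '{name}'. Available: {list(RECOGNIZERS)}")
    return cls()
```

The same dict also feeds `choices=list(RECOGNIZERS)` in the argument parser, so the CLI and the dispatch can never drift apart.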
View File

@@ -15,10 +15,11 @@
This library provides unified APIs for:
- Face detection (RetinaFace, SCRFD, YOLOv5Face, YOLOv8Face)
- Face recognition (AdaFace, ArcFace, MobileFace, SphereFace)
- Face recognition (AdaFace, ArcFace, EdgeFace, MobileFace, SphereFace)
- Face tracking (ByteTrack with Kalman filtering)
- Facial landmarks (106-point detection)
- Facial landmarks (106 / 98 / 68-point detection: 2d106det, PIPNet)
- Face parsing (semantic segmentation)
- Portrait matting (trimap-free alpha matte)
- Gaze estimation
- Head pose estimation
- Age, gender, and emotion prediction
@@ -30,7 +31,7 @@ from __future__ import annotations
__license__ = 'MIT'
__author__ = 'Yakhyokhuja Valikhujaev'
__version__ = '3.2.0'
__version__ = '3.6.0'
import contextlib
@@ -50,17 +51,18 @@ from .detection import (
)
from .gaze import MobileGaze, create_gaze_estimator
from .headpose import HeadPose, create_head_pose_estimator
from .landmark import Landmark106, create_landmarker
from .landmark import Landmark106, PIPNet, create_landmarker
from .matting import MODNet, create_matting_model
from .parsing import BiSeNet, XSeg, create_face_parser
from .privacy import BlurFace
from .recognition import AdaFace, ArcFace, MobileFace, SphereFace, create_recognizer
from .recognition import AdaFace, ArcFace, EdgeFace, MobileFace, SphereFace, create_recognizer
from .spoofing import MiniFASNet, create_spoofer
from .tracking import BYTETracker
from .types import AttributeResult, EmotionResult, Face, GazeResult, HeadPoseResult, SpoofingResult
# Optional: FAISS vector store (requires `pip install faiss-cpu`)
with contextlib.suppress(ImportError):
from .indexing import FAISS
from .stores import FAISS
__all__ = [
# Metadata
@@ -74,6 +76,7 @@ __all__ = [
'create_detector',
'create_face_parser',
'create_gaze_estimator',
'create_matting_model',
'create_head_pose_estimator',
'create_landmarker',
'create_recognizer',
@@ -87,16 +90,20 @@ __all__ = [
# Recognition models
'AdaFace',
'ArcFace',
'EdgeFace',
'MobileFace',
'SphereFace',
# Landmark models
'Landmark106',
'PIPNet',
# Gaze models
'GazeResult',
'MobileGaze',
# Head pose models
'HeadPose',
'HeadPoseResult',
# Matting models
'MODNet',
# Parsing models
'BiSeNet',
'XSeg',
@@ -114,7 +121,7 @@ __all__ = [
'BYTETracker',
# Privacy
'BlurFace',
# Indexing (optional)
# Stores (optional)
'FAISS',
# Utilities
'Logger',

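The `contextlib.suppress(ImportError)` guard is what keeps `FAISS` optional: if `faiss-cpu` is missing, the import silently does nothing and the name is simply absent from the package. A self-contained sketch using a deliberately missing module name (hypothetical, chosen so the fallback path runs):

```python
import contextlib

FAISS = None
with contextlib.suppress(ImportError):
    # Hypothetical module name, guaranteed absent here; uniface imports its
    # real FAISS wrapper from uniface.stores behind the same guard.
    from some_missing_vector_backend import FAISS

# Execution continues past the failed import; the name keeps its fallback value
has_vector_store = FAISS is not None
```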
View File

@@ -4,6 +4,8 @@
from __future__ import annotations
from typing import Any
import numpy as np
from uniface.attribute.base import Attribute
@@ -14,6 +16,8 @@ from uniface.types import Face
__all__ = ['FaceAnalyzer']
_UNSET: Any = object()
class FaceAnalyzer:
"""Unified face analyzer combining detection, recognition, and attributes.
@@ -27,35 +31,52 @@ class FaceAnalyzer:
via the ``attributes`` list. Each predictor's ``predict(image, face)``
is called once per detected face, enriching the :class:`Face` in-place.
Args:
detector: Face detector instance for detecting faces in images.
recognizer: Optional face recognizer for extracting embeddings.
attributes: Optional list of ``Attribute`` predictors to run on
each detected face (e.g. ``[AgeGender(), FairFace(), Emotion()]``).
When called with no arguments, uses SCRFD (500M) for detection and
ArcFace (MobileNet) for recognition — the smallest and fastest variants.
Example:
>>> from uniface import RetinaFace, ArcFace, AgeGender, FaceAnalyzer
>>> detector = RetinaFace()
>>> recognizer = ArcFace()
>>> analyzer = FaceAnalyzer(detector, recognizer=recognizer, attributes=[AgeGender()])
Args:
detector: Face detector instance. Defaults to ``SCRFD(SCRFD_500M_KPS)``.
recognizer: Face recognizer for extracting embeddings.
Defaults to ``ArcFace(MNET)``. Pass ``None`` to disable recognition.
attributes: Optional list of ``Attribute`` predictors to run on
each detected face (e.g. ``[AgeGender()]``).
Examples:
>>> from uniface import FaceAnalyzer
>>> analyzer = FaceAnalyzer()
>>> faces = analyzer.analyze(image)
>>> from uniface import FaceAnalyzer, AgeGender
>>> analyzer = FaceAnalyzer(attributes=[AgeGender()])
>>> faces = analyzer.analyze(image)
"""
def __init__(
self,
detector: BaseDetector,
recognizer: BaseRecognizer | None = None,
detector: BaseDetector | None = None,
recognizer: BaseRecognizer | None = _UNSET,
attributes: list[Attribute] | None = None,
) -> None:
if detector is None:
from uniface.constants import SCRFDWeights
from uniface.detection import SCRFD
detector = SCRFD(model_name=SCRFDWeights.SCRFD_500M_KPS)
if recognizer is _UNSET:
from uniface.recognition import ArcFace
recognizer = ArcFace()
self.detector = detector
self.recognizer = recognizer
self.attributes: list[Attribute] = attributes or []
Logger.info(f'Initialized FaceAnalyzer with detector={detector.__class__.__name__}')
if recognizer:
Logger.info(f' - Recognition enabled: {recognizer.__class__.__name__}')
Logger.info(f'Recognition enabled: {recognizer.__class__.__name__}')
for attr in self.attributes:
Logger.info(f' - Attribute enabled: {attr.__class__.__name__}')
Logger.info(f'Attribute enabled: {attr.__class__.__name__}')
def analyze(self, image: np.ndarray) -> list[Face]:
"""Analyze faces in an image.
@@ -76,25 +97,26 @@ class FaceAnalyzer:
if self.recognizer is not None:
try:
face.embedding = self.recognizer.get_normalized_embedding(image, face.landmarks)
Logger.debug(f' Face {idx + 1}: Extracted embedding with shape {face.embedding.shape}')
Logger.debug(f'Face {idx + 1}: Extracted embedding with shape {face.embedding.shape}')
except Exception as e:
Logger.warning(f' Face {idx + 1}: Failed to extract embedding: {e}')
Logger.warning(f'Face {idx + 1}: Failed to extract embedding: {e}')
for attr in self.attributes:
attr_name = attr.__class__.__name__
try:
attr.predict(image, face)
Logger.debug(f' Face {idx + 1}: {attr_name} prediction succeeded')
Logger.debug(f'Face {idx + 1}: {attr_name} prediction succeeded')
except Exception as e:
Logger.warning(f' Face {idx + 1}: {attr_name} prediction failed: {e}')
Logger.warning(f'Face {idx + 1}: {attr_name} prediction failed: {e}')
Logger.info(f'Analysis complete: {len(faces)} face(s) processed')
return faces
def __repr__(self) -> str:
parts = [f'FaceAnalyzer(detector={self.detector.__class__.__name__}']
parts = [f'detector={self.detector.__class__.__name__}']
if self.recognizer:
parts.append(f'recognizer={self.recognizer.__class__.__name__}')
for attr in self.attributes:
parts.append(f'{attr.__class__.__name__}')
return ', '.join(parts) + ')'
if self.attributes:
attr_names = ', '.join(attr.__class__.__name__ for attr in self.attributes)
parts.append(f'attributes=[{attr_names}]')
return f'FaceAnalyzer({", ".join(parts)})'

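The `_UNSET` sentinel lets `FaceAnalyzer` tell "recognizer not passed" (fall back to the ArcFace default) apart from an explicit `recognizer=None` (disable recognition), which a plain `None` default cannot express. A simplified stand-in, with a string in place of the real model object:

```python
from typing import Any

_UNSET: Any = object()  # unique marker; no caller can pass it by accident

class Analyzer:
    def __init__(self, recognizer: Any = _UNSET) -> None:
        if recognizer is _UNSET:
            recognizer = 'default-arcface'  # placeholder for ArcFace()
        self.recognizer = recognizer  # None means recognition is disabled

no_args = Analyzer()                 # gets the default recognizer
disabled = Analyzer(recognizer=None) # explicitly opts out
```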
View File

@@ -7,6 +7,7 @@ import cv2
import numpy as np
from uniface.attribute.base import Attribute
from uniface.common import softmax
from uniface.constants import FairFaceWeights
from uniface.log import Logger
from uniface.model_store import verify_model_weights
@@ -150,9 +151,9 @@ class FairFace(Attribute):
race_logits, gender_logits, age_logits = prediction
# Apply softmax
race_probs = self._softmax(race_logits[0])
gender_probs = self._softmax(gender_logits[0])
age_probs = self._softmax(age_logits[0])
race_probs = softmax(race_logits[0])
gender_probs = softmax(gender_logits[0])
age_probs = softmax(age_logits[0])
# Get predictions
race_idx = int(np.argmax(race_probs))
@@ -186,9 +187,3 @@ class FairFace(Attribute):
face.age_group = result.age_group
face.race = result.race
return result
@staticmethod
def _softmax(x: np.ndarray) -> np.ndarray:
"""Compute softmax values for numerical stability."""
exp_x = np.exp(x - np.max(x))
return exp_x / np.sum(exp_x)

View File

@@ -19,6 +19,7 @@ __all__ = [
'letterbox_resize',
'non_max_suppression',
'resize_image',
'softmax',
'xyxy_to_cxcywh',
]
@@ -63,6 +64,21 @@ def resize_image(
return image, resize_factor
def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
"""Compute the numerically stable softmax of an array along ``axis``.
Args:
x: Input array.
axis: Axis along which softmax is computed. Defaults to the last axis.
Returns:
Array of the same shape as *x* with values in ``[0, 1]`` summing to 1
along *axis*.
"""
exp_x = np.exp(x - np.max(x, axis=axis, keepdims=True))
return exp_x / np.sum(exp_x, axis=axis, keepdims=True)
def xyxy_to_cxcywh(bboxes: np.ndarray) -> np.ndarray:
"""Convert bounding boxes from ``[x1, y1, x2, y2]`` to ``[cx, cy, w, h]``.

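The centralized `softmax` subtracts the per-axis max before exponentiating, so large logits no longer overflow. A quick numerical check mirroring the helper above:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    # Subtracting the max keeps exp() in a safe range without changing the result
    exp_x = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return exp_x / np.sum(exp_x, axis=axis, keepdims=True)

logits = np.array([[1000.0, 1001.0, 1002.0]])  # naive np.exp(1000.0) would overflow
probs = softmax(logits, axis=-1)
```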
View File

@@ -57,6 +57,18 @@ class AdaFaceWeights(str, Enum):
IR_18 = "adaface_ir_18"
IR_101 = "adaface_ir_101"
class EdgeFaceWeights(str, Enum):
"""
EdgeFace: Efficient Face Recognition Model for Edge Devices.
Based on EdgeNeXt backbone with optional LoRA low-rank compression.
All models output 512-D embeddings from 112x112 aligned face crops.
https://github.com/yakhyo/edgeface-onnx
"""
XXS = "edgeface_xxs"
XS_GAMMA_06 = "edgeface_xs_gamma_06"
S_GAMMA_05 = "edgeface_s_gamma_05"
BASE = "edgeface_base"
class RetinaFaceWeights(str, Enum):
"""
Trained on WIDER FACE dataset.
@@ -143,6 +155,16 @@ class LandmarkWeights(str, Enum):
DEFAULT = "2d_106"
class PIPNetWeights(str, Enum):
"""
PIPNet: Pixel-in-Pixel Net for facial landmark detection.
ResNet-18 backbone, 256x256 input.
https://github.com/yakhyo/pipnet-onnx
"""
WFLW_98 = "pipnet_r18_wflw_98"
DW300_CELEBA_68 = "pipnet_r18_300w_celeba_68"
class GazeWeights(str, Enum):
"""
MobileGaze: Real-Time Gaze Estimation models.
@@ -189,6 +211,15 @@ class XSegWeights(str, Enum):
DEFAULT = "xseg"
class MODNetWeights(str, Enum):
"""
MODNet: Real-Time Trimap-Free Portrait Matting via Objective Decomposition.
https://github.com/yakhyo/modnet
"""
PHOTOGRAPHIC = "modnet_photographic"
WEBCAM = "modnet_webcam"
class MiniFASNetWeights(str, Enum):
"""
MiniFASNet: Lightweight Face Anti-Spoofing models.
@@ -278,6 +309,24 @@ MODEL_REGISTRY: dict[Enum, ModelInfo] = {
sha256='f2eb07d03de0af560a82e1214df799fec5e09375d43521e2868f9dc387e5a43e'
),
# EdgeFace
EdgeFaceWeights.XXS: ModelInfo(
url='https://github.com/yakhyo/edgeface-onnx/releases/download/weights/edgeface_xxs.onnx',
sha256='dc674de4cbc77fa0bf9a82d5149558ab8581d82a2cd3bb60f28fd1a5d3ff8a2f'
),
EdgeFaceWeights.XS_GAMMA_06: ModelInfo(
url='https://github.com/yakhyo/edgeface-onnx/releases/download/weights/edgeface_xs_gamma_06.onnx',
sha256='9206e2eb13a2761d7b5b76e13016d4b9acd3fa3535a9a09939f3adacd139a5ff'
),
EdgeFaceWeights.S_GAMMA_05: ModelInfo(
url='https://github.com/yakhyo/edgeface-onnx/releases/download/weights/edgeface_s_gamma_05.onnx',
sha256='b850767cf791bda585600b5c4c7d7432b2f998ccd862caae34ef1afa967d2e54'
),
EdgeFaceWeights.BASE: ModelInfo(
url='https://github.com/yakhyo/edgeface-onnx/releases/download/weights/edgeface_base.onnx',
sha256='b56942f072c67385f44734b9458b0ccc4a2226888a113f77e0c802ad0c77b4c3'
),
# SCRFD
SCRFDWeights.SCRFD_10G_KPS: ModelInfo(
url='https://github.com/yakhyo/uniface/releases/download/weights/scrfd_10g_kps.onnx',
@@ -340,6 +389,16 @@ MODEL_REGISTRY: dict[Enum, ModelInfo] = {
sha256='f001b856447c413801ef5c42091ed0cd516fcd21f2d6b79635b1e733a7109dbf'
),
# PIPNet (98 / 68 point landmarks)
PIPNetWeights.WFLW_98: ModelInfo(
url='https://github.com/yakhyo/pipnet-onnx/releases/download/weights/pipnet_r18_wflw_98.onnx',
sha256='9862838dc6144bc772b6485f6f6d31295c0b1c1ab7293e6ddeb0a439cb10218d'
),
PIPNetWeights.DW300_CELEBA_68: ModelInfo(
url='https://github.com/yakhyo/pipnet-onnx/releases/download/weights/pipnet_r18_300w_celeba_68.onnx',
sha256='63fa56fd4b8f6ccc4b88f2b36e00fa3d8c21a2c4244ab9381e8b432cef35197b'
),
# Gaze (MobileGaze)
GazeWeights.RESNET18: ModelInfo(
url='https://github.com/yakhyo/gaze-estimation/releases/download/weights/resnet18_gaze.onnx',
@@ -413,6 +472,16 @@ MODEL_REGISTRY: dict[Enum, ModelInfo] = {
url='https://github.com/yakhyo/face-segmentation/releases/download/weights/xseg.onnx',
sha256='0b57328efcb839d85973164b617ceee9dfe6cfcb2c82e8a033bba9f4f09b27e5'
),
# MODNet (Portrait Matting)
MODNetWeights.PHOTOGRAPHIC: ModelInfo(
url='https://github.com/yakhyo/modnet/releases/download/weights/modnet_photographic.onnx',
sha256='5069a5e306b9f5e9f4f2b0360264c9f8ea13b257c7c39943c7cf6a2ec3a102ae'
),
MODNetWeights.WEBCAM: ModelInfo(
url='https://github.com/yakhyo/modnet/releases/download/weights/modnet_webcam.onnx',
sha256='de03cc16f3c91f25b7c2f0b42ea1a8d34f40a752234f3887572655e744e55306'
),
}
@@ -420,4 +489,5 @@ MODEL_REGISTRY: dict[Enum, ModelInfo] = {
MODEL_URLS: dict[Enum, str] = {k: v.url for k, v in MODEL_REGISTRY.items()}
MODEL_SHA256: dict[Enum, str] = {k: v.sha256 for k, v in MODEL_REGISTRY.items()}
CHUNK_SIZE = 8192
DOWNLOAD_CHUNK_SIZE = 256 * 1024 # 256 KiB
HASH_CHUNK_SIZE = 1024 * 1024 # 1 MiB

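The bumped `HASH_CHUNK_SIZE` matters because SHA-256 verification reads the weight file chunk by chunk; fewer, larger reads cut Python-loop overhead. A sketch of chunked hashing (the helper name is illustrative, not uniface's actual function):

```python
import hashlib
import io

HASH_CHUNK_SIZE = 1024 * 1024  # 1 MiB, matching the constant above

def sha256_of(fileobj, chunk_size: int = HASH_CHUNK_SIZE) -> str:
    digest = hashlib.sha256()
    # iter() with a b'' sentinel keeps reading until read() returns empty
    for chunk in iter(lambda: fileobj.read(chunk_size), b''):
        digest.update(chunk)
    return digest.hexdigest()

payload = b'\x01' * (2 * HASH_CHUNK_SIZE + 17)  # spans multiple chunks
checksum = sha256_of(io.BytesIO(payload))
```

Chunked hashing also keeps memory flat regardless of model size, which is why the one-shot `hashlib.sha256(whole_file)` form is avoided for large ONNX weights.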
View File

@@ -202,7 +202,7 @@ class RetinaFace(BaseDetector):
height, width, _ = image.shape
image_tensor = self.preprocess(image)
# ONNXRuntime inference
# Inference
outputs = self.inference(image_tensor)
# Postprocessing

View File

@@ -247,9 +247,10 @@ class SCRFD(BaseDetector):
image_tensor = self.preprocess(image)
# ONNXRuntime inference
# Inference
outputs = self.inference(image_tensor)
# Postprocessing
scores_list, bboxes_list, kpss_list = self.postprocess(outputs, image_size=image.shape[:2])
# Handle case when no faces are detected

View File

@@ -2,18 +2,11 @@
# Author: Yakhyokhuja Valikhujaev
# GitHub: https://github.com/yakhyo
"""
YOLOv8-Face detector implementation.
Uses anchor-free design with DFL (Distribution Focal Loss) for bbox regression.
Reference: https://github.com/yakhyo/yolov8-face-onnx-inference
"""
from typing import Any, Literal
import numpy as np
from uniface.common import letterbox_resize, non_max_suppression
from uniface.common import letterbox_resize, non_max_suppression, softmax
from uniface.constants import YOLOv8FaceWeights
from uniface.log import Logger
from uniface.model_store import verify_model_weights
@@ -171,12 +164,6 @@ class YOLOv8Face(BaseDetector):
"""
return self.session.run(self.output_names, {self.input_names: input_tensor})
@staticmethod
def _softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
"""Compute softmax values for array x along specified axis."""
exp_x = np.exp(x - np.max(x, axis=axis, keepdims=True))
return exp_x / np.sum(exp_x, axis=axis, keepdims=True)
def postprocess(
self,
predictions: list[np.ndarray],
@@ -224,7 +211,7 @@ class YOLOv8Face(BaseDetector):
# Decode bounding boxes from DFL
bbox_pred = bbox_pred.reshape(-1, 4, 16)
bbox_dist = self._softmax(bbox_pred, axis=-1) @ np.arange(16)
bbox_dist = softmax(bbox_pred, axis=-1) @ np.arange(16)
# Convert distances to xyxy format
x1 = (grid_x - bbox_dist[:, 0]) * stride
@@ -279,16 +266,14 @@ class YOLOv8Face(BaseDetector):
if len(keep) == 0:
return np.array([]), np.array([])
# Limit to max_det
# Filter detections and limit to max_det
keep = keep[: self.max_det]
boxes = boxes[keep]
scores = scores[keep]
landmarks = landmarks[keep]
# === SCALE TO ORIGINAL IMAGE COORDINATES ===
# Scale back to original image coordinates
pad_w, pad_h = padding
# Scale boxes back to original image coordinates
boxes[:, [0, 2]] = (boxes[:, [0, 2]] - pad_w) / scale
boxes[:, [1, 3]] = (boxes[:, [1, 3]] - pad_h) / scale
@@ -303,7 +288,7 @@ class YOLOv8Face(BaseDetector):
# Reshape landmarks to (N, 5, 2)
landmarks = landmarks.reshape(-1, 5, 2)
# Combine box and score
# Combine results
detections = np.concatenate([boxes, scores[:, None]], axis=1)
return detections, landmarks

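The DFL decode above treats each box side as a discrete distribution over 16 bins and takes its expected value via `softmax(...) @ np.arange(16)`. That soft-argmax step in isolation:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# One anchor, four box sides, 16 DFL bins; a sharp peak at bin 7 on every side
bbox_pred = np.full((1, 4, 16), -10.0)
bbox_pred[:, :, 7] = 10.0

# Expected bin index per side: (1, 4, 16) @ (16,) -> (1, 4)
bbox_dist = softmax(bbox_pred, axis=-1) @ np.arange(16)
```

Because the output is an expectation rather than an argmax, sub-bin (sub-pixel, after multiplying by the stride) distances are representable.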
View File

@@ -232,9 +232,10 @@ def draw_text_label(
def draw_detections(
*,
image: np.ndarray,
bboxes: list[np.ndarray] | list[list[float]],
scores: np.ndarray | list[float],
landmarks: list[np.ndarray] | list[list[list[float]]],
faces: list[Face] | None = None,
bboxes: list[np.ndarray] | list[list[float]] | None = None,
scores: np.ndarray | list[float] | None = None,
landmarks: list[np.ndarray] | list[list[list[float]]] | None = None,
vis_threshold: float = 0.6,
draw_score: bool = False,
corner_bbox: bool = True,
@@ -243,17 +244,31 @@ def draw_detections(
Modifies the image in-place.
Accepts either a list of :class:`Face` objects (preferred) or separate
lists of bboxes, scores, and landmarks for backward compatibility.
Args:
image: Input image to draw on (modified in-place).
faces: List of Face objects from detection. When provided,
``bboxes``, ``scores``, and ``landmarks`` are ignored.
bboxes: List of bounding boxes in xyxy format ``[x1, y1, x2, y2]``.
scores: List of confidence scores.
landmarks: List of landmark sets with shape ``(5, 2)``.
vis_threshold: Confidence threshold for filtering. Defaults to 0.6.
draw_score: Whether to draw confidence scores. Defaults to False.
corner_bbox: Use corner-style bounding boxes. Defaults to True.
"""
# Adaptive line thickness
Examples:
>>> draw_detections(image=image, faces=faces)
>>> draw_detections(image=image, faces=faces, vis_threshold=0.7, draw_score=True)
"""
if faces is not None:
bboxes = [f.bbox for f in faces]
scores = [f.confidence for f in faces]
landmarks = [f.landmarks for f in faces]
elif bboxes is None or scores is None or landmarks is None:
raise ValueError('Provide either faces or all of bboxes, scores, and landmarks')
line_thickness = max(round(sum(image.shape[:2]) / 2 * 0.003), 2)
for i, score in enumerate(scores):
@@ -262,13 +277,11 @@ def draw_detections(
bbox = np.array(bboxes[i], dtype=np.int32)
# Draw bounding box
if corner_bbox:
draw_corner_bbox(image, bbox, color=(0, 255, 0), thickness=line_thickness, proportion=0.2)
else:
cv2.rectangle(image, tuple(bbox[:2]), tuple(bbox[2:]), (0, 255, 0), line_thickness)
# Draw confidence score label
if draw_score:
font_scale = max(0.4, min(0.7, (bbox[3] - bbox[1]) / 200))
draw_text_label(
@@ -281,7 +294,6 @@ def draw_detections(
font_scale=font_scale,
)
# Draw landmarks
landmark_set = np.array(landmarks[i], dtype=np.int32)
for j, point in enumerate(landmark_set):
cv2.circle(image, tuple(point), line_thickness + 1, _LANDMARK_COLORS[j % len(_LANDMARK_COLORS)], -1)
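The new `faces=` keyword makes `draw_detections` unpack `Face` objects itself, while the old three-list call still works. The dispatch logic can be isolated and exercised with a minimal `Face` stand-in (a sketch, not uniface's actual type):

```python
from dataclasses import dataclass

@dataclass
class Face:  # minimal stand-in for uniface.types.Face
    bbox: list
    confidence: float
    landmarks: list

def unpack_detections(*, faces=None, bboxes=None, scores=None, landmarks=None):
    """Mirror draw_detections' argument handling: faces takes precedence."""
    if faces is not None:
        bboxes = [f.bbox for f in faces]
        scores = [f.confidence for f in faces]
        landmarks = [f.landmarks for f in faces]
    elif bboxes is None or scores is None or landmarks is None:
        raise ValueError('Provide either faces or all of bboxes, scores, and landmarks')
    return bboxes, scores, landmarks

face = Face(bbox=[0, 0, 10, 10], confidence=0.9, landmarks=[[1, 1]] * 5)
b, s, l = unpack_detections(faces=[face])
```

Keyword-only parameters (the `*` in the real signature) keep the two calling styles unambiguous, since nothing can be passed positionally.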
@@ -665,14 +677,10 @@ def vis_parsing_maps(
segmentation_mask = segmentation_mask.copy().astype(np.uint8)
# Create a color mask
segmentation_mask_color = np.zeros((segmentation_mask.shape[0], segmentation_mask.shape[1], 3))
num_classes = np.max(segmentation_mask)
for class_index in range(1, num_classes + 1):
class_pixels = np.where(segmentation_mask == class_index)
segmentation_mask_color[class_pixels[0], class_pixels[1], :] = FACE_PARSING_COLORS[class_index]
segmentation_mask_color = segmentation_mask_color.astype(np.uint8)
max_class = int(segmentation_mask.max())
palette = np.zeros((max(max_class + 1, len(FACE_PARSING_COLORS)), 3), dtype=np.uint8)
palette[: len(FACE_PARSING_COLORS)] = FACE_PARSING_COLORS
segmentation_mask_color = palette[segmentation_mask]
# Convert image to BGR format for blending
bgr_image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

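The rewritten `vis_parsing_maps` colors the whole mask with a single fancy-index lookup instead of a per-class Python loop. The trick in isolation, with a toy four-color palette (the real one has an entry per parsing class):

```python
import numpy as np

FACE_PARSING_COLORS = np.array(
    [[0, 0, 0], [255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=np.uint8
)  # toy palette for the demo

mask = np.array([[0, 1], [2, 3]], dtype=np.uint8)
max_class = int(mask.max())

# Size the palette to cover every class id that appears in the mask
palette = np.zeros((max(max_class + 1, len(FACE_PARSING_COLORS)), 3), dtype=np.uint8)
palette[: len(FACE_PARSING_COLORS)] = FACE_PARSING_COLORS

# (H, W) integer mask -> (H, W, 3) color image in one vectorized lookup
colored = palette[mask]
```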
View File

@@ -71,7 +71,13 @@ def estimate_norm(
alignment[:, 0] += diff_x
# Compute the transformation matrix
transform = SimilarityTransform.from_estimate(landmark, alignment)
try:
# scikit-image >= 0.26
transform = SimilarityTransform.from_estimate(landmark, alignment)
except AttributeError:
# scikit-image < 0.26 (e.g. Python 3.10 with older scikit-image)
transform = SimilarityTransform()
transform.estimate(landmark, alignment)
matrix = transform.params[0:2, :]
inverse_matrix = np.linalg.inv(transform.params)[0:2, :]

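The try/except around `SimilarityTransform.from_estimate` is a version shim: newer scikit-image exposes a classmethod constructor, while older releases only have the mutating `estimate()`. The pattern, demonstrated with a stub class that has only the old API:

```python
class OldStyleTransform:
    """Stub mimicking scikit-image < 0.26: only estimate() exists."""

    def estimate(self, src, dst):
        self.params = ('estimated', src, dst)
        return True

def fit(transform_cls, src, dst):
    try:
        # scikit-image >= 0.26: classmethod constructor
        return transform_cls.from_estimate(src, dst)
    except AttributeError:
        # Older releases: instantiate, then estimate in place
        transform = transform_cls()
        transform.estimate(src, dst)
        return transform

t = fit(OldStyleTransform, 'landmark', 'alignment')
```

Catching `AttributeError` (rather than checking version strings) keeps the shim working for any release on either side of the API change.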
View File

@@ -6,6 +6,7 @@
import cv2
import numpy as np
from uniface.common import softmax
from uniface.constants import GazeWeights
from uniface.log import Logger
from uniface.model_store import verify_model_weights
@@ -142,11 +143,6 @@ class MobileGaze(BaseGazeEstimator):
return image
def _softmax(self, x: np.ndarray) -> np.ndarray:
"""Apply softmax along axis 1."""
e_x = np.exp(x - np.max(x, axis=1, keepdims=True))
return e_x / e_x.sum(axis=1, keepdims=True)
def postprocess(self, outputs: tuple[np.ndarray, np.ndarray]) -> GazeResult:
"""
Postprocess raw model outputs into gaze angles.
@@ -164,8 +160,8 @@ class MobileGaze(BaseGazeEstimator):
yaw_logits, pitch_logits = outputs
# Convert logits to probabilities
yaw_probs = self._softmax(yaw_logits)
pitch_probs = self._softmax(pitch_logits)
yaw_probs = softmax(yaw_logits)
pitch_probs = softmax(pitch_logits)
# Compute expected bin index (soft-argmax)
yaw_deg = np.sum(yaw_probs * self._idx_tensor, axis=1) * self._binwidth - self._angle_offset
@@ -183,6 +179,13 @@ class MobileGaze(BaseGazeEstimator):
This method orchestrates the full pipeline: preprocessing the input,
running inference, and postprocessing to return the gaze direction.
Args:
face_image (np.ndarray): A cropped face image in BGR format with shape (H, W, 3).
Returns:
GazeResult: Estimated gaze direction containing ``pitch`` (vertical) and
``yaw`` (horizontal) angles in radians.
"""
input_tensor = self.preprocess(face_image)
outputs = self.session.run(self.output_names, {self.input_name: input_tensor})

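MobileGaze's bin decoding is a soft-argmax: probabilities over angle bins are reduced to one continuous angle via `sum(probs * idx) * binwidth - offset`. A sketch with assumed bin constants (illustrative only, not necessarily the model's real values):

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

n_bins, binwidth, angle_offset = 90, 4.0, 180.0  # assumed binning for the demo
idx_tensor = np.arange(n_bins, dtype=np.float64)

logits = np.full((1, n_bins), -10.0)
logits[0, 50] = 10.0  # sharp peak at bin 50

probs = softmax(logits, axis=-1)
# Expected bin index -> degrees: 50 * 4.0 - 180.0 = 20.0
yaw_deg = np.sum(probs * idx_tensor, axis=1) * binwidth - angle_offset
```

Like the DFL box decode, taking the expectation instead of the argmax yields a continuous angle rather than one quantized to the bin width.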
View File

@@ -170,6 +170,13 @@ class HeadPose(BaseHeadPoseEstimator):
This method orchestrates the full pipeline: preprocessing the input,
running inference, and postprocessing to return the head orientation.
Args:
face_image (np.ndarray): A cropped face image in BGR format with shape (H, W, 3).
Returns:
HeadPoseResult: Estimated head orientation containing ``pitch`` (vertical),
``yaw`` (horizontal), and ``roll`` (in-plane) angles in degrees.
"""
input_tensor = self.preprocess(face_image)
outputs = self.session.run(self.output_names, {self.input_name: input_tensor})

View File

@@ -1,9 +0,0 @@
# Copyright 2025-2026 Yakhyokhuja Valikhujaev
# Author: Yakhyokhuja Valikhujaev
# GitHub: https://github.com/yakhyo
"""Vector indexing backends for fast similarity search."""
from uniface.indexing.faiss import FAISS
__all__ = ['FAISS']

View File

@@ -4,26 +4,35 @@
 from .base import BaseLandmarker
 from .models import Landmark106
+from .pipnet import PIPNet


 def create_landmarker(method: str = '2d106det', **kwargs) -> BaseLandmarker:
-    """
-    Factory function to create facial landmark predictors.
+    """Factory function to create facial landmark predictors.

     Args:
         method (str): Landmark prediction method.
-            Options: '2d106det' (default), 'landmark106', '106'.
-        **kwargs: Model-specific parameters.
+            Options:
+
+            - ``'2d106det'`` (default): InsightFace 2d106det 106-point model.
+            - ``'pipnet'``: PIPNet 98-point (WFLW) or 68-point (300W+CelebA)
+              model. Pass ``model_name=PIPNetWeights.DW300_CELEBA_68`` for
+              the 68-point variant.
+        **kwargs: Model-specific parameters forwarded to the underlying class.

     Returns:
         Initialized landmarker instance.

+    Raises:
+        ValueError: If ``method`` is not supported.
     """
     method = method.lower()
-    if method in ('2d106det', 'landmark106', '106'):
+    if method == '2d106det':
         return Landmark106(**kwargs)
-    else:
-        available = ['2d106det', 'landmark106', '106']
-        raise ValueError(f"Unsupported method: '{method}'. Available: {available}")
+    if method == 'pipnet':
+        return PIPNet(**kwargs)
+
+    available = ['2d106det', 'pipnet']
+    raise ValueError(f"Unsupported method: '{method}'. Available: {available}")


-__all__ = ['BaseLandmarker', 'Landmark106', 'create_landmarker']
+__all__ = ['BaseLandmarker', 'Landmark106', 'PIPNet', 'create_landmarker']
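The case-insensitive dispatch in `create_landmarker` can be exercised in isolation. A self-contained replica of that logic, where the two classes are dummy stand-ins for the real model wrappers rather than the library's actual implementations:

```python
class Landmark106:
    """Dummy stand-in for the real Landmark106 wrapper."""
    def __init__(self, **kwargs):
        self.kwargs = kwargs

class PIPNet:
    """Dummy stand-in for the real PIPNet wrapper."""
    def __init__(self, **kwargs):
        self.kwargs = kwargs

def create_landmarker(method: str = '2d106det', **kwargs):
    # Mirrors the factory in the diff: lowercase, dispatch, else raise.
    method = method.lower()
    if method == '2d106det':
        return Landmark106(**kwargs)
    if method == 'pipnet':
        return PIPNet(**kwargs)
    available = ['2d106det', 'pipnet']
    raise ValueError(f"Unsupported method: '{method}'. Available: {available}")
```

Note that lowercasing first makes `create_landmarker('PIPNet')` and `create_landmarker('pipnet')` equivalent, while an unknown method fails fast with the list of valid options.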


@@ -0,0 +1,161 @@
# Copyright 2025-2026 Yakhyokhuja Valikhujaev
# Author: Yakhyokhuja Valikhujaev
# GitHub: https://github.com/yakhyo
#
# Mean-face arrays vendored from upstream PIPNet (MIT):
# https://github.com/jhb86253817/PIPNet/tree/master/data
from __future__ import annotations
import numpy as np
# fmt: off
# 300W layout: 68 landmarks, 136 floats.
MEANFACE_300W_68: tuple[float, ...] = (
0.05558998895410058, 0.23848280098218655, 0.05894856684324656, 0.3590187767402909,
0.0736574254414371, 0.4792196439871159, 0.09980016420365162, 0.5959029676167197,
0.14678670154995865, 0.7035615597409001, 0.21847188218752928, 0.7971705893013413,
0.30554692814599393, 0.8750572978073209, 0.4018434142644611, 0.9365018059444535,
0.5100536090382116, 0.9521295666029498, 0.6162039414413925, 0.9309467340899419,
0.7094522484942942, 0.8669275031738761, 0.7940993502957612, 0.7879369615524398,
0.8627063649669019, 0.6933756633633967, 0.9072386130534111, 0.5836975017700834,
0.9298874997796132, 0.4657004930314701, 0.9405202670724796, 0.346063993805527,
0.9425419553088846, 0.22558131891345742, 0.13304298285530403, 0.14853071838028062,
0.18873587368440375, 0.09596491613770254, 0.2673231915839219, 0.08084218279128136,
0.34878638553224905, 0.09253591849498964, 0.4226713753717798, 0.12466063383809506,
0.5618513152452376, 0.11839668911898667, 0.6394952560845826, 0.08480191391770678,
0.7204375851516752, 0.07249669092117161, 0.7988615904537885, 0.08766933146893043,
0.8534884939460948, 0.1380096813348583, 0.49610677423740546, 0.21516740699375395,
0.49709661403980665, 0.2928875699060973, 0.4982292618461611, 0.3699985379939941,
0.49982965173254235, 0.4494119144493957, 0.406772397599095, 0.5032397294041786,
0.45231994786363067, 0.5197953144002292, 0.49969685987914064, 0.5332489262413073,
0.5470074224053442, 0.518413595827126, 0.5892261151542287, 0.5023530079850803,
0.22414578747180394, 0.22835847349949062, 0.27262947128194215, 0.19915251892241678,
0.3306759252861797, 0.20026034220607236, 0.38044435864341913, 0.23839196034290633,
0.32884072789429913, 0.24902443794896897, 0.2707409300714473, 0.24950886025380967,
0.6086826011068529, 0.23465048639345917, 0.660397116846103, 0.1937087938594717,
0.7177815187666494, 0.19317079039835858, 0.7652328176062365, 0.22088822845258235,
0.722727677909097, 0.24195514178450958, 0.6658378927310327, 0.2441554205021945,
0.32894370935769124, 0.6496589505331646, 0.39347179739100613, 0.6216899667490776,
0.4571976492475472, 0.60794251109236, 0.4990484623797022, 0.6190124015360254,
0.5465555522325872, 0.6071477960565326, 0.6116127327356168, 0.6205387097430033,
0.6742318496058836, 0.6437466364395467, 0.6144773141699744, 0.7077526646009754,
0.5526442055374252, 0.7363350735898412, 0.5018120662554302, 0.7424476622366345,
0.4554458875556401, 0.7382303858617719, 0.3923750731597415, 0.7118887028663435,
0.35530766372404593, 0.6524479416354049, 0.457111071610868, 0.6467108367268608,
0.49974082228815025, 0.6508406774477011, 0.5477027224368399, 0.6451242819422733,
0.6478392760505715, 0.647852382880368, 0.5488474760115958, 0.6779061893042735,
0.5001073351044452, 0.6845280260362221, 0.4564831746654594, 0.6799300301441035,
)
# WFLW layout: 98 landmarks, 196 floats.
MEANFACE_WFLW_98: tuple[float, ...] = (
0.07960419395480703, 0.3921576875344978, 0.08315055593117261, 0.43509551571809146,
0.08675705281580391, 0.47810288286566444, 0.09141892980469117, 0.5210356946467262,
0.09839925903528965, 0.5637522280060038, 0.10871037524559955, 0.6060410614977951,
0.12314562992759207, 0.6475338700558225, 0.14242389255404694, 0.6877152027028081,
0.16706295456951875, 0.7259564546408682, 0.19693946055282413, 0.761730578566735,
0.23131827931527224, 0.7948205670466106, 0.2691730934906831, 0.825332081636482,
0.3099415030959131, 0.853325959406618, 0.3535202097901413, 0.8782538906229107,
0.40089023799272033, 0.8984102434399625, 0.4529251732310723, 0.9112191359814178,
0.5078640056794708, 0.9146712690731943, 0.5616519666079889, 0.9094327772020283,
0.6119216923689698, 0.8950540037623425, 0.6574617882337107, 0.8738084866764846,
0.6994820494908942, 0.8482660530943744, 0.7388135339780575, 0.8198750461527688,
0.775158750479601, 0.788989141243473, 0.8078785221990765, 0.7555462713420953,
0.8361052138935441, 0.7195542055115057, 0.8592123871172533, 0.6812759034843933,
0.8771159986952748, 0.6412243940605555, 0.8902481006481506, 0.5999743595282084,
0.8992952868651163, 0.5580032282594118, 0.9050110573289222, 0.5156548913779377,
0.908338439928252, 0.4731336721500472, 0.9104896075281127, 0.4305382486815422,
0.9124796341441906, 0.38798192678294363, 0.18465941635742913, 0.35063191749632183,
0.24110421889338157, 0.31190394310826886, 0.3003235400132397, 0.30828189837331976,
0.3603094923651325, 0.3135606490643205, 0.4171060234289877, 0.32433417646045615,
0.416842139562573, 0.3526729965541497, 0.36011177591813404, 0.3439660526998693,
0.3000863121140166, 0.33890077494044946, 0.24116055928407834, 0.34065620413845005,
0.5709736930161899, 0.321407825750195, 0.6305694459247149, 0.30972642336729495,
0.6895161625920927, 0.3036453838462943, 0.7488591859761683, 0.3069143844433495,
0.8030471337135181, 0.3435156012309415, 0.7485083446528741, 0.3348759588212388,
0.6893025057931884, 0.33403402013776456, 0.6304822892126991, 0.34038458762875695,
0.5710009285609654, 0.34988479902594455, 0.4954171902473609, 0.40202330022004634,
0.49604903449415433, 0.4592869389138444, 0.49644391662771625, 0.5162862508677217,
0.4981161256057368, 0.5703284628419502, 0.40749001573145566, 0.5983629921847019,
0.4537396729649631, 0.6057169923583451, 0.5007345777827058, 0.6116695615531077,
0.5448481727980428, 0.6044131443745976, 0.5882140504891681, 0.5961738788380111,
0.24303324896316683, 0.40721003719912746, 0.27771706732644313, 0.3907171413930685,
0.31847706697401107, 0.38417234007271117, 0.3621792860449715, 0.3900847721320633,
0.3965299162804086, 0.41071434661355205, 0.3586805562211872, 0.4203724421417311,
0.31847860588240934, 0.4237674602252073, 0.2789458001651631, 0.41942757306509065,
0.5938514626567266, 0.4090628827047304, 0.6303565516542536, 0.3864501652756091,
0.6774844732813035, 0.3809319896905685, 0.7150854850525555, 0.3875173254527522,
0.747519807465081, 0.4025187328459307, 0.7155172856447009, 0.4145958479293519,
0.680051949453018, 0.420041513473271, 0.6359056750107122, 0.41803782782566573,
0.33916483987223056, 0.6968581311227738, 0.40008790639758807, 0.6758101185779204,
0.47181947887764153, 0.6678850445191217, 0.5025394453374782, 0.6682917934792593,
0.5337748367911458, 0.6671949030019636, 0.6015915330083903, 0.6742535357237751,
0.6587068892667173, 0.6932163943648724, 0.6192795131720007, 0.7283129162844936,
0.5665923267827963, 0.7550248076404299, 0.5031303335863617, 0.7648348885181623,
0.4371030429958871, 0.7572539606688756, 0.3814909500115824, 0.7320595346122074,
0.35129809553480984, 0.6986839074746692, 0.4247987356100664, 0.69127609583798,
0.5027677238758598, 0.6911145821740593, 0.576997542122097, 0.6896269708051024,
0.6471352843446794, 0.6948977432227927, 0.5799932528781817, 0.7185288017567538,
0.5024914756021335, 0.7285408331555782, 0.4218115644247556, 0.7209126133193829,
0.3219750495122499, 0.40376441481225156, 0.6751136343101699, 0.40023415216110797,
)
# fmt: on
def _build_neighbor_indices(meanface: np.ndarray, num_nb: int) -> tuple[list[int], list[int], int]:
    """Build reverse lookup tables mapping each landmark to the (landmark, slot)
    pairs that predict it as a neighbor."""
    num_lms = meanface.shape[0]
    meanface_indices: list[list[int]] = []
    for i in range(num_lms):
        pt = meanface[i]
        dists = np.sum((pt - meanface) ** 2, axis=1)
        indices = np.argsort(dists)
        meanface_indices.append(indices[1 : 1 + num_nb].tolist())

    reversed_map: dict[int, tuple[list[int], list[int]]] = {i: ([], []) for i in range(num_lms)}
    for i in range(num_lms):
        for j in range(num_nb):
            neighbor = meanface_indices[i][j]
            reversed_map[neighbor][0].append(i)
            reversed_map[neighbor][1].append(j)

    max_len = max(len(reversed_map[i][0]) for i in range(num_lms))
    reverse_index1: list[int] = []
    reverse_index2: list[int] = []
    for i in range(num_lms):
        idx1, idx2 = reversed_map[i]
        # Pad by repeating entries so every landmark has the same neighbor count.
        pad1 = (idx1 * max_len)[: max_len - len(idx1)]
        pad2 = (idx2 * max_len)[: max_len - len(idx2)]
        reverse_index1.extend(idx1 + pad1)
        reverse_index2.extend(idx2 + pad2)
    return reverse_index1, reverse_index2, max_len

def get_meanface_info(num_lms: int, num_nb: int = 10) -> tuple[np.ndarray, np.ndarray, int]:
    """Precompute reverse-index tables for PIPNet decoding.

    Args:
        num_lms: 68 (300W) or 98 (WFLW).
        num_nb: Neighbor count used at training time.

    Returns:
        ``(reverse_index1, reverse_index2, max_len)``.

    Raises:
        ValueError: If ``num_lms`` does not match a shipped meanface table.
    """
    if num_lms == 68:
        flat = MEANFACE_300W_68
    elif num_lms == 98:
        flat = MEANFACE_WFLW_98
    else:
        raise ValueError(f'No meanface table available for num_lms={num_lms}; expected 68 or 98.')

    meanface = np.asarray(flat, dtype=np.float32).reshape(-1, 2)
    assert meanface.shape[0] == num_lms, f'meanface mismatch: expected {num_lms} points, got {meanface.shape[0]}'
    r1, r2, max_len = _build_neighbor_indices(meanface, num_nb)
    return np.asarray(r1, dtype=np.int64), np.asarray(r2, dtype=np.int64), max_len
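The reverse-index construction is easiest to sanity-check on a toy meanface. The following self-contained restatement of the same algorithm runs it on four corner points with `num_nb=2`: every point's two nearest neighbors are found, the mapping is inverted, and each landmark's reverse list is padded cyclically to a uniform `max_len`:

```python
import numpy as np

def build_neighbor_indices(meanface: np.ndarray, num_nb: int):
    # Restatement of _build_neighbor_indices for a standalone check.
    num_lms = meanface.shape[0]
    nbrs = []
    for i in range(num_lms):
        dists = np.sum((meanface[i] - meanface) ** 2, axis=1)
        nbrs.append(np.argsort(dists)[1 : 1 + num_nb].tolist())  # skip self at rank 0

    # Invert: for each landmark, which (source landmark, neighbor slot) predicts it?
    rev = {i: ([], []) for i in range(num_lms)}
    for i in range(num_lms):
        for j, n in enumerate(nbrs[i]):
            rev[n][0].append(i)
            rev[n][1].append(j)

    max_len = max(len(rev[i][0]) for i in range(num_lms))
    r1, r2 = [], []
    for i in range(num_lms):
        idx1, idx2 = rev[i]
        # Cyclic padding so every landmark contributes exactly max_len entries.
        r1.extend(idx1 + (idx1 * max_len)[: max_len - len(idx1)])
        r2.extend(idx2 + (idx2 * max_len)[: max_len - len(idx2)])
    return r1, r2, max_len
```

On the unit square, each corner is a nearest neighbor of exactly the two adjacent corners, so `max_len` is 2 and the flat tables have `num_lms * max_len` entries, which is what the PIPNet decoder indexes into.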

Some files were not shown because too many files have changed in this diff.