{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Face Analysis with UniFace\n",
"\n",
"<div style=\"display:flex; flex-wrap:wrap; align-items:center;\">\n",
" <a style=\"margin-right:10px; margin-bottom:6px;\" href=\"https://pepy.tech/projects/uniface\"><img alt=\"PyPI Downloads\" src=\"https://static.pepy.tech/personalized-badge/uniface?period=total&units=international_system&left_color=grey&right_color=blue&left_text=Downloads\"></a>\n",
" <a style=\"margin-right:10px; margin-bottom:6px;\" href=\"https://pypi.org/project/uniface/\"><img alt=\"PyPI Version\" src=\"https://img.shields.io/pypi/v/uniface.svg\"></a>\n",
" <a style=\"margin-right:10px; margin-bottom:6px;\" href=\"https://opensource.org/licenses/MIT\"><img alt=\"License\" src=\"https://img.shields.io/badge/License-MIT-blue.svg\"></a>\n",
" <a style=\"margin-bottom:6px;\" href=\"https://github.com/yakhyo/uniface\"><img alt=\"GitHub Stars\" src=\"https://img.shields.io/github/stars/yakhyo/uniface.svg?style=social\"></a>\n",
"</div>\n",
"\n",
"**UniFace** is a lightweight, production-ready Python library for face detection, recognition, tracking, landmark analysis, face parsing, gaze estimation, and face attributes.\n",
"\n",
"🔗 **GitHub**: [github.com/yakhyo/uniface](https://github.com/yakhyo/uniface) | 📚 **Docs**: [yakhyo.github.io/uniface](https://yakhyo.github.io/uniface)\n",
"\n",
"---\n",
"\n",
"This notebook demonstrates comprehensive face analysis using the **FaceAnalyzer** class.\n",
"\n",
"## 1. Install UniFace"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -q \"uniface[cpu]\"\n",
"\n",
"# Clone repo for assets (Colab only)\n",
"import os\n",
"if 'COLAB_GPU' in os.environ or 'COLAB_RELEASE_TAG' in os.environ:\n",
" if not os.path.exists('uniface'):\n",
" !git clone --depth 1 https://github.com/yakhyo/uniface.git\n",
" os.chdir('uniface/examples')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Import Libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import cv2\n",
"import matplotlib.pyplot as plt\n",
"\n",
"import uniface\n",
"from uniface.analyzer import FaceAnalyzer\n",
"from uniface.detection import RetinaFace\n",
"from uniface.recognition import ArcFace\n",
"from uniface.attribute import AgeGender\n",
"from uniface.draw import draw_detections\n",
"\n",
"print(uniface.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Initialize FaceAnalyzer\n",
"\n",
"The `FaceAnalyzer` combines detection, recognition, and attribute prediction in one class."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"analyzer = FaceAnalyzer(\n",
" detector=RetinaFace(confidence_threshold=0.5),\n",
" recognizer=ArcFace(),\n",
" attributes=[AgeGender()]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. Analyze Faces in Images"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"image_paths = [\n",
" '../assets/test_images/image0.jpg',\n",
" '../assets/test_images/image1.jpg',\n",
" '../assets/test_images/image2.jpg',\n",
"]\n",
"\n",
"results = []\n",
"\n",
"for image_path in image_paths:\n",
" # Load image\n",
" image = cv2.imread(image_path)\n",
" if image is None:\n",
" print(f'Error: Could not read {image_path}')\n",
" continue\n",
"\n",
" # Analyze faces - returns list of Face objects\n",
" faces = analyzer.analyze(image)\n",
" print(f'\\n{image_path.split(\"/\")[-1]}: Detected {len(faces)} face(s)')\n",
"\n",
" # Print face attributes\n",
" for i, face in enumerate(faces, 1):\n",
" print(f' Face {i}: {face.sex}, {face.age}y')\n",
"\n",
" # Prepare visualization (without text overlay)\n",
" vis_image = image.copy()\n",
" draw_detections(image=vis_image, faces=faces, vis_threshold=0.5, corner_bbox=True)\n",
"\n",
" results.append((image_path, cv2.cvtColor(vis_image, cv2.COLOR_BGR2RGB), faces))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5. Visualize Results\n",
"\n",
"Display images with face information shown below each image."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fig, axes = plt.subplots(2, len(results), figsize=(15, 8),\n",
" gridspec_kw={'height_ratios': [4, 1]})\n",
"\n",
"for idx, (path, vis_image, faces) in enumerate(results):\n",
" # Display image\n",
" axes[0, idx].imshow(vis_image)\n",
" axes[0, idx].axis('off')\n",
"\n",
" # Display face information below image\n",
" axes[1, idx].axis('off')\n",
" info_text = f'{len(faces)} face(s)\\n'\n",
" for i, face in enumerate(faces, 1):\n",
" info_text += f'Face {i}: {face.sex}, {face.age}y\\n'\n",
"\n",
" axes[1, idx].text(0.5, 0.5, info_text,\n",
" ha='center', va='center',\n",
" fontsize=10, family='monospace')\n",
"\n",
"plt.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6. Access Face Attributes\n",
"\n",
"Each `Face` object contains detection, recognition, and attribute data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Get first face from first image\n",
"_, _, faces = results[0]\n",
"if faces:\n",
" face = faces[0]\n",
"\n",
" print('Face Attributes:')\n",
" print(f' - Bounding box: {face.bbox.astype(int).tolist()}')\n",
" print(f' - Confidence: {face.confidence:.3f}')\n",
" print(f' - Landmarks shape: {face.landmarks.shape}')\n",
" print(f' - Age: {face.age} years')\n",
" print(f' - Gender: {face.sex}')\n",
" print(f' - Embedding shape: {face.embedding.shape}')\n",
" print(f' - Embedding dimension: {face.embedding.shape[0]}D')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 7. Compare Face Similarity\n",
"\n",
"Use face embeddings to compute similarity between faces."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Compare first two faces\n",
"if len(results) >= 2:\n",
" face1 = results[0][2][0] # First face from first image\n",
" face2 = results[1][2][0] # First face from second image\n",
"\n",
" similarity = face1.compute_similarity(face2)\n",
" print(f'Similarity between faces: {similarity:.4f}')\n",
" print(f'Same person: {\"Yes\" if similarity > 0.6 else \"No\"} (threshold=0.6)')"
]
},
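{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional cross-check (not part of the UniFace API walkthrough): since the embeddings are L2-normalized, cosine similarity reduces to a plain dot product, so `compute_similarity` can be reproduced with NumPy. The sketch below assumes the first two images each yielded at least one face."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Optional cross-check: for unit-norm embeddings, cosine similarity is just a dot product\n",
"if len(results) >= 2 and results[0][2] and results[1][2]:\n",
"    emb1 = results[0][2][0].embedding\n",
"    emb2 = results[1][2][0].embedding\n",
"    manual_similarity = float(np.dot(emb1, emb2))\n",
"    print(f'Manual dot-product similarity: {manual_similarity:.4f}')"
]
},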
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Notes\n",
"\n",
"- `analyzer.analyze()` performs detection, recognition, and attribute prediction in one call\n",
"- Each `Face` object contains: `bbox`, `confidence`, `landmarks`, `embedding`, `age`, `gender`\n",
"- Gender is available as both ID (0=Female, 1=Male) and string via `face.sex` property\n",
"- Face embeddings are L2-normalized (norm ≈ 1.0) for similarity computation\n",
"- Use `face.compute_similarity(other_face)` to compare faces (returns cosine similarity)\n",
"- Typical similarity threshold: 0.6 (same person if similarity > 0.6)"
]
},
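{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a small, optional sanity check for two of the notes above: the embedding norm should be close to 1.0, and gender should be readable both as an ID and as a string. It assumes the integer ID is exposed as `face.gender` (as the notes suggest) alongside the `face.sex` label."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Optional sanity check for the notes above (uses the first face from the first image)\n",
"_, _, faces = results[0]\n",
"if faces:\n",
"    face = faces[0]\n",
"\n",
"    # Embeddings are documented as L2-normalized, so the norm should be ~1.0\n",
"    print(f'Embedding L2 norm: {np.linalg.norm(face.embedding):.4f}')\n",
"\n",
"    # Gender ID (0=Female, 1=Male) is assumed to live in face.gender; face.sex gives the label\n",
"    print(f'Gender ID: {face.gender}, label: {face.sex}')"
]
},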
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "base",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.5"
}
},
"nbformat": 4,
"nbformat_minor": 4
}