
# Batch Processing

Process multiple images efficiently.

!!! note "Work in Progress"

    This page contains example code patterns. Test thoroughly before using them in production.


## Basic Batch Processing

```python
import cv2
from pathlib import Path

from uniface import RetinaFace

detector = RetinaFace()

def process_directory(input_dir, output_dir):
    """Process all images in a directory."""
    input_path = Path(input_dir)
    output_path = Path(output_dir)
    output_path.mkdir(parents=True, exist_ok=True)

    for image_path in input_path.glob("*.jpg"):
        print(f"Processing {image_path.name}...")

        image = cv2.imread(str(image_path))
        if image is None:
            print(f"  Skipping {image_path.name}: could not read image")
            continue

        faces = detector.detect(image)
        print(f"  Found {len(faces)} face(s)")

        # Process and save results
        # ... your code here ...

# Usage
process_directory("input_images/", "output_images/")
```
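The loop above leaves the "process and save results" step to you. One common pattern is to collect per-image detection counts and write them to a CSV summary at the end of the run. A minimal sketch — the `write_summary` helper and the `results` list of `(image_name, face_count)` pairs are illustrative, not part of the uniface API:

```python
import csv

def write_summary(results, csv_path):
    """Write one row per image: filename and number of faces found.

    `results` is a list of (image_name, face_count) pairs collected
    inside the processing loop.
    """
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "num_faces"])
        writer.writerows(results)
```

Inside the loop you would append `(image_path.name, len(faces))` to `results`, then call `write_summary(results, output_path / "summary.csv")` once the loop finishes.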

## With Progress Bar

```python
from pathlib import Path
from tqdm import tqdm

image_files = list(Path("input_images/").glob("*.jpg"))

for image_path in tqdm(image_files, desc="Processing"):
    # ... process image ...
    pass
```
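If you later want to process images in fixed-size batches — for example, to bound memory use or to feed a model that accepts batched input — a small chunking helper pairs well with `tqdm`. A sketch; `chunked` is a generic helper, not part of uniface:

```python
def chunked(items, batch_size):
    """Yield successive lists of at most `batch_size` items."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]
```

You would then iterate `for batch in tqdm(list(chunked(image_files, 8)), desc="Batches"):` and process each batch together.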

## Extract Embeddings

```python
import cv2
import numpy as np
from pathlib import Path

from uniface import RetinaFace, ArcFace

detector = RetinaFace()
recognizer = ArcFace()

embeddings = {}
for image_path in Path("faces/").glob("*.jpg"):
    image = cv2.imread(str(image_path))
    if image is None:
        continue

    faces = detector.detect(image)
    if faces:
        # Use the first detected face in each image
        embedding = recognizer.get_normalized_embedding(image, faces[0].landmarks)
        embeddings[image_path.stem] = embedding

# Save embeddings keyed by filename stem
np.savez("embeddings.npz", **embeddings)
```
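Once saved, the embeddings can be compared with cosine similarity to find the closest match for a given face. A minimal sketch — `cosine_similarity` and `most_similar` are illustrative helpers, not part of uniface; in practice you would load the dict with `dict(np.load("embeddings.npz"))`:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors.

    For unit-norm embeddings this reduces to a dot product; the norm
    division makes the helper safe for unnormalized vectors too.
    """
    a = np.asarray(a, dtype=np.float32).ravel()
    b = np.asarray(b, dtype=np.float32).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def most_similar(query_name, embeddings):
    """Return (name, score) of the embedding closest to `query_name`."""
    query = embeddings[query_name]
    best_name, best_score = None, -1.0
    for name, emb in embeddings.items():
        if name == query_name:
            continue
        score = cosine_similarity(query, emb)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score
```

Thresholding the returned score (a typical starting point is around 0.3–0.5 for normalized face embeddings, tuned on your own data) turns this into a simple identification step.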

## See Also