{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"source": [
"## Example Usage of the UniFace Library for Face Alignment\n",
"This guide demonstrates how to use the **UniFace** library for face detection and face alignment. Follow the steps below to set up and execute the example.\n",
"\n",
"## 1. Install UniFace\n",
"Install the **UniFace** library using `pip`. The `-q` flag suppresses logs for a clean output."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"!pip install -q uniface"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Import Required Libraries\n",
"Import the necessary libraries for image processing, visualization, and face alignment:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import cv2\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"from uniface import RetinaFace, face_alignment, draw_detections"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- `cv2`: Used for image reading and processing.\n",
"- `numpy`: Used for numerical operations on the detection outputs.\n",
"- `matplotlib`: Used to display the inference results.\n",
"- `RetinaFace`: The model class from the **UniFace** library.\n",
"- `face_alignment`: A utility function for face alignment.\n",
"- `draw_detections`: A utility function to draw bounding boxes and landmarks on the image.\n",
"\n",
"## 3. Initialize the RetinaFace Model\n",
"Initialize the RetinaFace model with a lightweight pre-trained backbone and detection parameters:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-11-21 05:54:31,818 - INFO - Initializing RetinaFace with model=retinaface_mnet_v2, conf_thresh=0.5, nms_thresh=0.4, pre_nms_topk=5000, post_nms_topk=750, dynamic_size=False, input_size=(640, 640)\n",
"2024-11-21 05:54:31,875 - INFO - Verified model weights located at: /home/yakhyo/.uniface/models/retinaface_mnet_v2.onnx\n",
"2024-11-21 05:54:31,969 - INFO - Successfully initialized the model from /home/yakhyo/.uniface/models/retinaface_mnet_v2.onnx\n"
]
}
],
"source": [
"# Initialize the RetinaFace model\n",
"uniface_inference = RetinaFace(\n",
" model=\"retinaface_mnet_v2\", # Model name\n",
" conf_thresh=0.5, # Confidence threshold\n",
" pre_nms_topk=5000, # Pre-NMS Top-K detections\n",
" nms_thresh=0.4, # NMS IoU threshold\n",
"    post_nms_topk=750            # Post-NMS Top-K detections\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. Load Images and Run Inference\n",
"Load a set of input images, run face detection and alignment on each, and store the results for visualization."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"# Paths to the input images\n",
"image_paths = [\n",
" \"../assets/image0.jpg\",\n",
" \"../assets/image1.jpg\",\n",
" \"../assets/image2.jpg\",\n",
" \"../assets/image3.jpg\"\n",
"]\n",
"\n",
"# Lists to store detection results and aligned images\n",
"detection_images = []\n",
"aligned_images = []\n",
"\n",
"# Process each image\n",
"for image_path in image_paths:\n",
" # Load the image\n",
" input_image = cv2.imread(image_path)\n",
" if input_image is None:\n",
" print(f\"Error: Could not read image from {image_path}\")\n",
" continue\n",
" \n",
" # Perform face detection\n",
" boxes, landmarks = uniface_inference.detect(input_image)\n",
"\n",
" if len(landmarks) == 0:\n",
" print(f\"No face detected in {image_path}\")\n",
" continue\n",
" \n",
" # Draw detections on the image for visualization\n",
" bbox_image = input_image.copy()\n",
" draw_detections(bbox_image, (boxes, landmarks), vis_threshold=0.6)\n",
"\n",
" # Align the first detected face\n",
" landmark_array = landmarks[0]\n",
" aligned_image = face_alignment(input_image, landmark_array, image_size=112)\n",
" \n",
" # Convert images to RGB format for proper visualization\n",
" bbox_image = cv2.cvtColor(bbox_image, cv2.COLOR_BGR2RGB) \n",
" aligned_image = cv2.cvtColor(aligned_image, cv2.COLOR_BGR2RGB)\n",
" \n",
" # Store the processed images for visualization\n",
" detection_images.append(bbox_image)\n",
" aligned_images.append(aligned_image)"
]
},
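{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Optional: Save the Aligned Faces\n",
"The aligned 112x112 crops produced above are typically fed into a face-recognition model. The cell below sketches saving them to disk; the `aligned_faces` directory name is arbitrary. Because the crops were converted to RGB for plotting, they are converted back to BGR before `cv2.imwrite`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# Save the aligned face crops for downstream use (directory name is arbitrary)\n",
"output_dir = \"aligned_faces\"\n",
"os.makedirs(output_dir, exist_ok=True)\n",
"\n",
"for idx, aligned in enumerate(aligned_images):\n",
"    # Crops were converted to RGB for plotting; convert back to BGR for cv2.imwrite\n",
"    cv2.imwrite(os.path.join(output_dir, f\"face_{idx}.jpg\"), cv2.cvtColor(aligned, cv2.COLOR_RGB2BGR))"
]
},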
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5. Display Inference Results\n",
"Visualize the results in a two-row grid: detection results on top, aligned faces below."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<Figure size 1500x1000 with 8 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Plot images in a 2-row layout\n",
"fig, axes = plt.subplots(2, len(image_paths), figsize=(15, 10))\n",
"\n",
"# Titles for each row\n",
"row_titles = [\"Detection Results\", \"Aligned Faces\"]\n",
"\n",
"# Populate the grid with images\n",
"for row, images in enumerate([detection_images, aligned_images]):\n",
" for col, img in enumerate(images):\n",
" # Display each image in the grid\n",
" axes[row, col].imshow(img)\n",
" axes[row, col].axis(\"off\") # Remove axes for cleaner visuals\n",
" \n",
" # Set row title on the first column of each row\n",
" if col == 0:\n",
" axes[row, col].set_title(row_titles[row], fontsize=14, loc=\"left\")\n",
"\n",
"# Adjust layout to prevent overlap and display the plot\n",
"plt.tight_layout()\n",
"plt.show()\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "torch",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.14"
}
},
"nbformat": 4,
"nbformat_minor": 2
}