Update README.md
Using onnxruntime-gpu 1.10 (the latest release at the time of writing), the following error occurs:
raise ValueError("This ORT build has {} enabled. ".format(available_providers) +
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
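As the message says, since ORT 1.9 the providers argument must be set explicitly whenever an InferenceSession is created. A minimal standalone reproduction of the fix the message itself suggests (the model path 'model.onnx' is a placeholder, not a real file):

import onnxruntime

# On onnxruntime(-gpu) >= 1.9 the line below raises the ValueError above,
# because no providers are given:
#   session = onnxruntime.InferenceSession('model.onnx')

# Passing the providers explicitly, as the error message suggests, works.
session = onnxruntime.InferenceSession(
    'model.onnx',
    providers=['CUDAExecutionProvider', 'CPUExecutionProvider'],
)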
To solve this, it is enough to pass the providers to the prepare function, as the diff below shows. A better solution might be to give the ONNX session a default providers value (see the sketch after the diff), but for the moment this should do the trick.
@@ -22,7 +22,7 @@ from insightface.app import FaceAnalysis
 from insightface.data import get_image as ins_get_image
 
 app = FaceAnalysis()
-app.prepare(ctx_id=0, det_size=(640, 640))
+app.prepare(ctx_id=0, det_size=(640, 640), providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
 img = ins_get_image('t1')
 faces = app.get(img)
 rimg = app.draw_on(img, faces)
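For reference, the "better solution" mentioned above could be as simple as giving the session factory a default providers value. A minimal sketch under that assumption; the make_session helper is hypothetical and not part of insightface:

import onnxruntime

def make_session(model_file, providers=None):
    # Hypothetical helper: default to every provider this ORT build was
    # compiled with, so callers may omit the argument entirely.
    if providers is None:
        providers = onnxruntime.get_available_providers()
    return onnxruntime.InferenceSession(model_file, providers=providers)

# 'det_10g.onnx' is a placeholder model path for illustration.
session = make_session('det_10g.onnx')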