Update README.md

Using onnxruntime-gpu 1.10 (the latest release at the time of writing), the following error occurs:

raise ValueError("This ORT build has {} enabled. ".format(available_providers) +
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
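The error comes from onnxruntime itself: since 1.9, builds with GPU providers refuse to create an InferenceSession without an explicit providers list. A minimal reproduction (a sketch; 'model.onnx' stands for any model file):

import onnxruntime

# Raises the ValueError above on onnxruntime-gpu 1.10,
# because no providers are passed explicitly.
sess = onnxruntime.InferenceSession('model.onnx')

# Passing providers explicitly avoids the error.
sess = onnxruntime.InferenceSession(
    'model.onnx',
    providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])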

To solve this, it is enough to pass the providers to the prepare function. A better solution might be to give the ONNX session a default providers value, but for the moment this should do the trick.
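For reference, a minimal sketch of that alternative (the create_session wrapper and model_path are hypothetical names, not insightface API): default to whatever providers the installed ORT build reports.

import onnxruntime

def create_session(model_path, providers=None):
    # Hypothetical wrapper: if the caller gives no providers,
    # fall back to everything this ORT build supports.
    if providers is None:
        providers = onnxruntime.get_available_providers()
    return onnxruntime.InferenceSession(model_path, providers=providers)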
Dani Vera authored this commit on 2021-12-16 20:44:16 +01:00, committed by GitHub.
parent 8137469a08 · commit 6d14852c15

@@ -22,7 +22,7 @@ from insightface.app import FaceAnalysis
 from insightface.data import get_image as ins_get_image
 app = FaceAnalysis()
-app.prepare(ctx_id=0, det_size=(640, 640))
+app.prepare(ctx_id=0, det_size=(640, 640), providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
 img = ins_get_image('t1')
 faces = app.get(img)
 rimg = app.draw_on(img, faces)
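The same call should also work on CPU-only machines by dropping the CUDA provider (a sketch, assuming insightface's usual convention that a negative ctx_id selects the CPU):

app = FaceAnalysis()
app.prepare(ctx_id=-1, det_size=(640, 640), providers=['CPUExecutionProvider'])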