Commit Graph

17 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Jia Guo | d05e9a12d3 | pip 0.7.1 | 2022-12-14 10:55:08 +08:00 |
| Andrii Yerko | 6b3bc2df1e | Add build system requirements for correct installation | 2022-09-21 23:39:09 +03:00 |
| Jia Guo | 340f6848f2 | Update README.md | 2022-05-28 15:32:58 +08:00 |
| Jia Guo | 0eaf369dac | Update README.md | 2022-05-28 15:24:22 +08:00 |
| 戦士 | 09767b3ada | change section 'Install' | 2022-04-15 19:11:29 +02:00 |
| Bahar Baradaran Eftekhari | 0fc64615e7 | Fixing a bug in the prepare function arguments. | 2022-02-26 11:12:19 +03:30 |
| Jia Guo | e7816a226d | python package ver 0.6 update | 2022-01-29 19:54:55 +08:00 |
| zeno | 28167da663 | Update README.md | 2022-01-12 16:33:37 +08:00 |
| Dani Vera | 6d14852c15 | Update README.md (extended message below) | 2021-12-16 20:44:16 +01:00 |
| Jia Guo | 16421481e4 | add 0.5 readme | 2021-09-21 10:46:33 +08:00 |
| Jia Guo | 8420187ed9 | Update README.md | 2021-07-07 23:51:18 +08:00 |
| nttstar | ce3600a742 | a big tree refine | 2021-06-19 23:37:10 +08:00 |
| Jia Guo | dfa2b58f97 | 0.3.3, supports model auto download | 2021-06-18 20:12:45 +08:00 |
| Jia Guo | ff8716dba5 | Update README.md | 2021-06-15 22:03:14 +08:00 |
| Jia Guo | 0fdf664d68 | Update README.md | 2021-05-16 17:22:29 +08:00 |
| Jia Guo | f85e523d13 | Update README.md | 2021-05-15 16:48:27 +08:00 |
| nttstar | b5813f86b7 | pip package | 2019-08-29 23:23:35 +08:00 |

Extended message for 6d14852c15 (Update README.md):

Using onnxruntime-gpu 1.10 (the latest release at the time), the following error occurs:

    raise ValueError("This ORT build has {} enabled. ".format(available_providers) +
    ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

To solve this, it is enough to pass the providers to the prepare function. A better solution may be to pass a default value to the ONNX session, but for the moment this should do the trick.
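For reference, a minimal sketch of the explicit-providers call that the error above asks for. This assumes only the onnxruntime package itself; "model.onnx" is a placeholder path, not a file from this repo:

```python
import onnxruntime

# Since ORT 1.9, InferenceSession requires an explicit providers list.
# ORT tries the listed providers in order and falls back to the next
# one if a provider is not available in the current build.
session = onnxruntime.InferenceSession(
    "model.onnx",  # placeholder model path, not part of this repo
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
```

Commit 6d14852c15 applies the same idea one level up: per its message, the provider list is passed through the prepare function rather than hard-coded at session creation.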