Is there an interpreter mode for RZ/V2L?

I see from this link that RZ/V2H supports interpreter mode: https://github.com/renesas-rz/rzv_drp-ai_tvm/blob/56b3dd425ba694dbbdd7766fa1295671f1ffccf0/tutorials/tutorial_RZV2H.md#c-interpreter-mode . This means we can run inference with the compiled model on a PC against a custom dataset to check accuracy. I understand that this guide is for RZ/V2H, and that RZ/V2H supports INT8 models.

My question is: is there an interpreter mode for RZ/V2L? I understand that RZ/V2L supports only FP16 models, and I do not see any guide for interpreter mode with RZ/V2L.

Furthermore, I think that to evaluate the actual accuracy of a model on the board, we would need to load the custom dataset onto the board and run inference there.
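For context, the kind of evaluation I have in mind is a simple loop over the custom dataset that compares predictions against ground-truth labels. The sketch below is generic Python, not RZ/V2L-specific; `run_inference` is a hypothetical placeholder for whatever inference call the interpreter mode (or the board runtime) actually provides:

```python
# Minimal accuracy-evaluation sketch. run_inference() is a hypothetical
# stand-in for the real DRP-AI TVM / interpreter-mode inference call;
# the dummy dataset below is only for illustration.

def top1_accuracy(predictions, labels):
    """Return the fraction of samples whose predicted class matches the label."""
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)

def run_inference(sample):
    # Placeholder: replace with the actual model inference on PC or board.
    # Here it just echoes a fake class id so the script is runnable.
    return sample % 3

if __name__ == "__main__":
    # (input, label) pairs standing in for a custom dataset.
    dataset = [(i, i % 3) for i in range(30)]
    preds = [run_inference(x) for x, _ in dataset]
    labels = [y for _, y in dataset]
    print(f"top-1 accuracy: {top1_accuracy(preds, labels):.3f}")
```

The point is that only the `run_inference` step differs between running on a PC (interpreter mode) and on the board; the accuracy bookkeeping is the same either way.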

Thank you.
