I found this guide for deploying YOLOX-s on RZ/V2H: https://github.com/renesas-rz/rzv_drp-ai_tvm/tree/main/how-to/sample_app_v2h/app_yolox_cam, but that guide uses a quantized model. I know that RZ/V2L does not support INT8; it supports FP16 models.
Can I use this application source (https://github.com/renesas-rz/rzv_drp-ai_tvm/tree/main/how-to/sample_app_v2h/app_yolox_cam) for the RZ/V2L board? If you have a guideline, please point me to it; I did not find one.
Thank you.
The only hint that I can give is that the YOLOX-s translation for RZ/V2L is described here: https://github.com/renesas-rz/rzv_drp-ai_tvm/blob/main/docs/model_list/how_to_convert/How_to_convert_yolox_onnx_models_V2L_V2M_V2MA.md
Hi Stefan
Thank you. I saw this link in the past.
My concern is that https://github.com/renesas-rz/rzv_drp-ai_tvm/tree/main/how-to/sample_app_v2h/app_yolox_cam targets an INT8 model on RZ/V2H. Can it be applied to an FP16 model on RZ/V2L? In other words, is the data type handled automatically in the application source code?
Moreover, this guide confuses me. In this link, https://github.com/renesas-rz/rzv_drp-ai_tvm/blob/main/docs/model_list/how_to_convert/How_to_convert_yolox_onnx_models_V2L_V2M_V2MA.md#3-convert-pytorch-model-pth-files-to-onnx-onnx-files, the export command is:

    python tools/export_onnx.py --output-name ./${onnx_file} -n ${arg_name} -c ./${torch_file} \
        --decode_in_inference test_size ${image_size}

This means that decode_in_inference is applied in the model. But in this guide, https://confluence.renesas.com/display/REN/Translation+procedure+for+YOLOX, you use the original ONNX and cut the Transpose + Concat nodes (i.e., it DOES NOT include decode_in_inference in the model).
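For context, my understanding of the "cutting" approach is roughly the following, using the standard onnx utilities. This is only my own sketch, not taken from either guide, and the tensor names are guesses that would need to be checked against the real graph (e.g. in Netron):

```python
# Sketch only: truncate a YOLOX ONNX graph before the final Transpose/Concat
# (i.e. drop the in-graph decode) using the standard onnx utilities.
# The tensor names below are placeholders; check the real names in Netron first.
import onnx

onnx.utils.extract_model(
    "yolox_s.onnx",           # model exported by tools/export_onnx.py
    "yolox_s_cut.onnx",       # truncated model, ends before the decode part
    input_names=["images"],   # assumed graph input name
    output_names=[            # assumed raw head outputs, one per stride
        "head_out_p3",
        "head_out_p4",
        "head_out_p5",
    ],
)
```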
Hi AnhOi,
yolox-S_VOC.onnx is a float32 model that would need to be translated with DRP-AI TVM for RZ/V2L. The input and output seem to fit. Nevertheless, I cannot foresee how much effort you need to spend to map the RZ/V2H application to RZ/V2L.
Kind Regards.
Hi, PT_Renesas
Thank you for your response.
PT_Renesas said: Nevertheless, I cannot foresee how much effort you need to spend to map the RZ/V2H application to RZ/V2L.
As you mentioned, it is difficult to modify the RZ/V2H application for RZ/V2L. Is that right? I understand that RZ/V2H supports INT8 models, so the two applications will be quite different.
Could Renesas add a sample for YOLOX on RZ/V2L, please? RZ/V2L is much cheaper than RZ/V2H, and it is good for testing before moving to RZ/V2H. Thank you.
I also see the model list for RZ/V2L here: https://github.com/renesas-rz/rzv_drp-ai_tvm/blob/main/docs/model_list/Model_List_V2L.md, and there are many models. Renesas also measured performance (latency) on the board with CPU and CPU+DRP, so I think Renesas has already implemented application source code for these models, including YOLOX.
Hi AnhOi,
A sample for YOLOX on RZ/V2L is not available at the moment. Maybe it will be available in the near future. Nevertheless, the problem is not the different neural network translation for the 2 DRP-AI derivatives. The input and output of the neural network are FP32 in both cases. The application is device specific to a certain degree. That needs to be taken into account.
Kind Regards.
Hi PT_Renesas
PT_Renesas said: A sample for YOLOX on RZ/V2L is not available at the moment. Maybe it will be available in the near future.
Thank you. That is good news.
PT_Renesas said: Nevertheless, the problem is not the different neural network translation for the 2 DRP-AI derivatives. The input and output of the neural network are FP32 in both cases. The application is device specific to a certain degree. That needs to be taken into account.
This means the source code will depend on the specific device. It will be difficult to deploy models with new architectures (models that differ from the ones Renesas already supports).
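To double-check the FP32 input/output point on my side before translation, I plan to inspect the ONNX graph with something like this small sketch (the file name is just a placeholder):

```python
# Quick check that the ONNX model's graph inputs and outputs are float32
# before handing it to DRP-AI TVM. The file name is a placeholder.
import onnx
from onnx import TensorProto

model = onnx.load("yolox-S_VOC.onnx")
for value in list(model.graph.input) + list(model.graph.output):
    elem_type = value.type.tensor_type.elem_type
    # Expect "FLOAT" (i.e. float32) for every entry
    print(value.name, TensorProto.DataType.Name(elem_type))
```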
In my case, could you please give me some advice on the application source for YOLOX + RZ/V2L?
Hi AnhOi
The YOLOX Demo Application for the RZV2H may need some porting; here are some things to consider. This is the recommended starting point. You will need to do some more work.
Hi michael kosinski
michael kosinski said: The YOLOX Demo Application for the RZV2H may need some porting; here are some things to consider. This is the recommended starting point. You will need to do some more work.
- The DRP-AI TVM Application API is standard across all RZV platforms, RZV2L through RZV2H.
- This demo uses a USB camera to capture images. The demo requires the USB camera output format to be YUYV; JPEG compression is not supported. This is true for all RZV products.
- The application needs to be compiled with the RZV2L SDK cross compiler. The RZV2H SDK will not work.
- The YOLOX output should be the same for both RZV2L and RZV2H. This depends on the TVM translation and ONNX model used. Use the link we provided to translate YOLOX for the RZV2L.
Thank you for giving me advice. It is valuable to me.
RZ/V2L uses a MIPI camera, so maybe we need to change the interface to MIPI.
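If I start with a USB camera instead, I understand from the list above that it must output YUYV. A quick check I could run first is something like this minimal OpenCV sketch (my own check, not from the Renesas guide; it assumes the camera enumerates as video device 0):

```python
# Minimal sketch: check whether a USB camera can deliver YUYV (not MJPG) frames.
# Assumes the camera is video device 0; adjust index and resolution as needed.
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"YUYV"))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

fourcc = int(cap.get(cv2.CAP_PROP_FOURCC))
fourcc_str = "".join(chr((fourcc >> (8 * i)) & 0xFF) for i in range(4))
ok, frame = cap.read()
print("negotiated format:", fourcc_str, "| frame captured:", ok)
cap.release()
```

I believe v4l2-ctl --list-formats-ext would show the same information on the board.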
In that case, if I want to run inference on a static image, is the procedure the same? If anything needs to be noted, please tell me. I ask because I want to evaluate models with my own dataset on the board (not from the camera).
I also saw that RZ/V2H supports interpreter mode when compiling, and we can save the quantized ONNX model to evaluate it in a Python environment. RZ/V2L converts to an FP16 model, but I do not see an option to save the FP16 ONNX model for RZ/V2L. In theory, the accuracy of the FP16-converted ONNX model is almost the same as the original FP32 model, but I want to check this to confirm. If RZ/V2L supports saving the FP16-converted ONNX model, please tell me. Thank you so much.
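If saving the FP16 ONNX is not supported, my fallback idea is to approximate the check offline: cast the FP32 ONNX to FP16 with onnxconverter-common and compare the outputs against the original in onnxruntime, roughly as in the sketch below. I understand this is not the same as the DRP-AI TVM FP16 translation, only an approximation, and the file names and input shape are placeholders:

```python
# Rough proxy only: cast the FP32 ONNX to FP16 offline and compare its outputs
# against the original with onnxruntime. This is NOT the DRP-AI TVM FP16
# translation; file names and the input shape are placeholders.
import numpy as np
import onnx
import onnxruntime as ort
from onnxconverter_common import float16

model_fp32 = onnx.load("yolox_s_voc.onnx")
model_fp16 = float16.convert_float_to_float16(model_fp32)
onnx.save(model_fp16, "yolox_s_voc_fp16.onnx")

x = np.random.rand(1, 3, 640, 640).astype(np.float32)  # dummy NCHW input
sess32 = ort.InferenceSession("yolox_s_voc.onnx")
sess16 = ort.InferenceSession("yolox_s_voc_fp16.onnx")
in_name = sess32.get_inputs()[0].name
y32 = sess32.run(None, {in_name: x})[0]
y16 = sess16.run(None, {in_name: x.astype(np.float16)})[0]
print("max abs diff:", np.abs(y32.astype(np.float32) - y16.astype(np.float32)).max())
```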
Please support me with the above concern. Thank you.