Hello all,

I am using the PTP example to synchronise time, and after that I start a timer with a pulse width of 10 us and a period of 20 us. I am using the IPLS and MINT interrupts to detect the rising and falling edges, so that I get an interrupt every 10 us. In both edge-detection ISRs I generate a software ELC event to trigger the ADC on a single channel, so the ADC is triggered every 10 us. I am using 12-bit conversion in normal mode, without sample-and-hold activation, on channel 0 of ADC unit 0 (pin P000). The ADC ISR fires after the conversion finishes, and the scan-end interrupt has the highest priority (0). In the ADC ISR I raise a GPIO pin, read the ADC result into a variable, and lower the GPIO.

The problem is that the conversion time after the trigger varies: most of the time it takes around 5 us, but sometimes it takes more than 10 us, and this causes the next trigger to be missed. Can anyone suggest how to make the conversion time deterministic?

See the following screen grab. The top window shows the 10 us triggers; at every edge I trigger the ADC via the software ELC link. The window immediately below it shows 'persistence', and the trigger itself is quite deterministic, i.e. it does not vary in time. The third window shows the GPIO toggle in the ADC ISR, which is generally seen about 5 us on average after each edge in the upper window. The interesting observation is the bottom window, which shows the variance of the ADC scan-end ISR. The negative pulse width measurement varies from 2.9899 us to 20.039 us, which is a huge, undesired range. Given that a single scan is triggered every 10 us, the gap should be 10 us, as the mean value of 9.7067 us suggests, but the wild variation between the minimum and maximum values is derailing the signal acquisition. Any lead on making the ADC conversion time more deterministic would be highly appreciated.
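For reference, my ADC scan-end handling looks roughly like this (a minimal sketch, assuming the default FSP instance names g_adc0/g_ioport; DEBUG_PIN is a hypothetical stand-in for the pin I toggle on the scope):

```c
#include "hal_data.h"

#define DEBUG_PIN BSP_IO_PORT_01_PIN_00 /* hypothetical debug output pin */

volatile uint16_t g_adc_result;

/* ADC scan-end callback: the high pulse on DEBUG_PIN marks where the
 * scan-end ISR actually ran relative to the 10 us trigger. */
void adc_callback(adc_callback_args_t *p_args)
{
    if (ADC_EVENT_SCAN_COMPLETE == p_args->event)
    {
        R_IOPORT_PinWrite(&g_ioport_ctrl, DEBUG_PIN, BSP_IO_LEVEL_HIGH);
        (void) R_ADC_Read(&g_adc0_ctrl, ADC_CHANNEL_0, (uint16_t *) &g_adc_result);
        R_IOPORT_PinWrite(&g_ioport_ctrl, DEBUG_PIN, BSP_IO_LEVEL_LOW);
    }
}
```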
I do not think that this is caused by DMA latency; that latency is very small.
Thanks for reaching out to the Renesas Engineering Community.
What is the frequency of PCLKC, which is the A/D conversion clock?
Also, what is the impedance of the signal source? The maximum permissible signal source impedance is 1 kOhm.
Please let us know.
Hi AZ,

I think the issue is the interrupt latency of the IPLS and MINT interrupts. I am toggling a GPIO just before generating the software ELC event, and the results are quite startling. I have given both interrupts the highest priority, and still the ISR is entered after a non-deterministic delay. I also have an ELC event linked directly to a GPIO, which toggles the pin quite accurately, but the time until the ISR is entered is very random. See the same picture, but now the bottom trace is the glitch produced when I trigger the ADC: you can see that the ADC itself is triggered at a varying time after the interrupt.
So my question now is: how do I minimise the interrupt latency of the IPLS and MINT interrupts?
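The edge handler does roughly this (a sketch; pulse_edge_isr is a hypothetical name, since the real entry point depends on how the PTP example routes the IPLS/MINT interrupts, and g_elc/g_ioport and DEBUG_PIN are the same assumed names as above):

```c
void pulse_edge_isr(void)
{
    /* Toggle the marker pin as early as possible: the delay between the
     * hardware edge and this write is the interrupt latency in question. */
    R_IOPORT_PinWrite(&g_ioport_ctrl, DEBUG_PIN, BSP_IO_LEVEL_HIGH);

    /* Fire the software ELC event that is linked to the ADC scan start. */
    (void) R_ELC_SoftwareEventGenerate(&g_elc_ctrl, ELC_SOFTWARE_EVENT_0);

    R_IOPORT_PinWrite(&g_ioport_ctrl, DEBUG_PIN, BSP_IO_LEVEL_LOW);
}
```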
This data is coming from the ADC, and the ADC is triggered every 10 us from the MINT interrupt, right?
How do you expect a continuous waveform from an ADC that converts a sample every 10 us?
This is how: https://github.com/renesas/ra-fsp-examples/blob/master/example_projects/ek_ra6m3/adc_gpt_periodic_sampling/adc_gpt_periodic_sampling_ek_ra6m3_ep/adc_gpt_periodic_sampling_notes.md

The graph is a collection of points (basically memory locations); the software connects the dots, so it looks like a continuous graph. In my case I am using the PTP example to generate a MINT interrupt every 20 us, and the rest of the processing is like the ADC example I quoted. So where that example uses a timer, I am using PTP.
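The point collection itself is just a circular buffer, something like this (illustrative names, not taken from the example project):

```c
/* Each 10 us trigger yields one sample that lands in the next slot; the
 * plotting software draws lines between consecutive slots. */
#define SAMPLE_COUNT 512U

volatile uint16_t g_samples[SAMPLE_COUNT];
volatile uint32_t g_sample_idx = 0U;

static inline void store_sample(uint16_t sample)
{
    g_samples[g_sample_idx] = sample;
    g_sample_idx = (g_sample_idx + 1U) % SAMPLE_COUNT; /* wrap around */
}
```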
AZ_Renesas said: This data is coming from the ADC, and the ADC is triggered every 10 us from the MINT interrupt, right?
What is the frequency of your analog signal?
AZ_Renesas said: Try to search for 'RTOS Resources' from Window->Show view->Other.
This is the message!
It seems like your application (thread) is not running all the time; that is why I asked you about investigating the execution times of threads.
In the original example, there is a call to tx_thread_sleep(1) in the ptp_thread_entry.c file inside the while(1) loop. This function causes the thread to suspend for 1 tick, and because there are (by default) 100 ticks per second, the thread suspends for 10 ms.
Please check if this call exists in your code and, if it does, modify it to "tx_thread_sleep(0)" so the service returns immediately.
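In other words, the loop should end up looking like this (only the sleep call is shown; the rest of ptp_thread_entry.c stays as in the example):

```c
void ptp_thread_entry(void)
{
    /* ... PTP initialization as in the original example ... */
    while (1)
    {
        /* Original: tx_thread_sleep(1); with the default 100 ticks/s this
         * suspends the thread for one tick, i.e. 10 ms, on every pass. */
        tx_thread_sleep(0); /* Returns immediately; the thread is not suspended. */

        /* ... rest of the loop body ... */
    }
}
```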
The update of values in memory happens almost automatically unless there is an interrupt from DMA, so when the waveform is broken, some sampling instances are missed because the DMA was busy at the time of sampling. I am toggling a GPIO in every DMA ISR; look at the following image: the variance in the ISR triggers hits the sampling boundary, which is a multiple of 10 us, and you can clearly see an overlap. That is when the DMA is unavailable but data is ready to be picked up, and that is what causes the disruptions in the shape of the waveform.
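The marker is produced like this (a sketch; the callback name follows FSP convention but is an assumption, and DEBUG_PIN is the same hypothetical pin as before):

```c
/* Toggle a pin on every DMAC transfer-end interrupt so the scope shows
 * exactly when the DMA ISR ran relative to the 10 us sampling grid. */
void g_transfer0_callback(dmac_callback_args_t *p_args)
{
    FSP_PARAMETER_NOT_USED(p_args);

    static bsp_io_level_t s_level = BSP_IO_LEVEL_LOW;
    s_level = (BSP_IO_LEVEL_LOW == s_level) ? BSP_IO_LEVEL_HIGH : BSP_IO_LEVEL_LOW;
    R_IOPORT_PinWrite(&g_ioport_ctrl, DEBUG_PIN, s_level);
}
```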
AZ_Renesas said: Please check if this call exists in your code and, if it does, modify it to "tx_thread_sleep(0)" so the service returns immediately.
I changed it, but look at the following: there is no improvement.
This waveform looks "more" continuous than the previous one. Why do you say that there is no improvement?