Issue #1555
QUERY · ISSUE

Pyboard SysTick losing ticks: interrupts sometimes locked out for 170ms

open · by kentindell · opened 2016-09-16 · updated 2018-07-27
port-stm32

I've spent a couple of days chasing a timing bug and I think I've nailed the cause: the SysTick handler appears to sporadically miss out a chunk of time (about 170ms). I checked this by modifying the SysTick handler to monitor itself:

uint32_t tick_timestamp;
uint32_t last_tick_timestamp;
uint32_t tick_overruns = 0;
uint32_t last_overrun;

/**
  * @brief  This function handles SysTick Handler.
  * @param  None
  * @retval None
  */
void SysTick_Handler(void) {
    // Instead of calling HAL_IncTick we do the increment here of the counter.
    // This is purely for efficiency, since SysTick is called 1000 times per
    // second at the highest interrupt priority.
    extern volatile uint32_t uwTick;

    last_tick_timestamp = tick_timestamp;
    tick_timestamp = DWT->CYCCNT;
    if ((tick_timestamp - last_tick_timestamp) > 336000U) { /* more than 2ms since last tick */
        last_overrun = tick_timestamp - last_tick_timestamp;
        tick_overruns++;
    }

    uwTick += 1;
}

Basically it uses the debug CPU clock counter to see how long since it last ran. Obviously this is normally around 168000 clocks (i.e. 1ms), sometimes a bit less, sometimes a bit more (because of ISR jitter). But every now and then it misses a huge chunk of time. Here's the output of one of my test programs:

ERROR: elapsed_ms and ms_delta mismatch, margin=169
overruns=11, overrun by=168001(28862777)

There are a bunch of overruns at boot, then everything is fine for a while. Then (in the above example) overrun 11 hit. The key number above is 28862777: divided by 168000 it is 171.8ms. Basically the SysTick handler itself sees that it hasn't run for a big chunk of time, which can only happen if something is locking out all interrupts for a massive amount of time, or SysTick has been disabled. I'm pretty sure there's nothing disabling it.
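The arithmetic above is easy to check. Here's a minimal plain-Python sketch of the same overrun test the handler performs (the constant names are mine; 168 MHz is the pyboard's STM32F405 core clock):

```python
CPU_HZ = 168_000_000             # STM32F405 core clock on the pyboard
CYCLES_PER_MS = CPU_HZ // 1000   # 168000 CYCCNT ticks per 1ms SysTick period

def overrun_cycles(prev, now):
    """Cycles elapsed between two CYCCNT samples (32-bit wraparound-safe)."""
    return (now - prev) & 0xFFFFFFFF

# The reported gap of 28862777 cycles works out to ~171.8ms, i.e. roughly
# 172 consecutive 1ms SysTick periods that never fired.
gap = 28862777
print(gap / CYCLES_PER_MS)   # ~171.8
```

The `& 0xFFFFFFFF` mirrors what the unsigned 32-bit subtraction in the C handler does for free: the delta is correct even if CYCCNT wrapped between the two samples.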

I wondered what might be locking out interrupts for such a huge chunk of time so I wrote a minimal test program:

from pyb import delay, rng
from time import ticks_cpu


def timestamps():
    start = ticks_cpu()
    print("Starting with {}".format(start))

    while True:
        delay_time = rng() % 200
        print("Delay time = {}ms".format(delay_time))
        before = ticks_cpu() & 0x3fffffff
        delay(delay_time)
        after = ticks_cpu() & 0x3fffffff
        realtime = (after - before) & 0x3fffffff
        realtime_ms = realtime / 168000

        if abs(realtime_ms - delay_time) > 1:
            print("ERROR: delay of {}ms actually took {}ms ({:08x},{:08x} = {})".format(delay_time, realtime_ms, after, before, after - before))
            return

I apply masks above to keep only the bottom 30 bits, so that the modulo arithmetic works out right when calculating elapsed time.

I had to modify the ticks_cpu() function to return a longer range than just a small int (the clock runs so fast that 16 bits isn't enough):

STATIC mp_obj_t time_ticks_cpu(void) {
    static bool enabled = false;
    if (!enabled) {
        // Enable the DWT unit, reset the cycle counter and start it
        // free-running at the core clock.
        CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;
        DWT->CYCCNT = 0;
        DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;
        enabled = true;
    }
    return mp_obj_new_int_from_uint(DWT->CYCCNT);
}
STATIC MP_DEFINE_CONST_FUN_OBJ_0(time_ticks_cpu_obj, time_ticks_cpu);

Here's an example run:

Delay time = 22ms
Delay time = 105ms
Delay time = 199ms
Delay time = 170ms
ERROR: delay of 170ms actually took 340.4643ms (1b10cd27,17a80774 = 57198003)

This affects all timing that is based on SysTick. So delay(), millis(), etc. all go wrong.

I investigated further and I know what at least one of the culprits is: the flash file system mounted over USB. If you run this test program and then copy something to the file system, it trips the bug. But it also trips at random (which may be the host OS - Ubuntu in my case - touching the file system for indexing or something).

CANDIDATE · ISSUE

stmhal: uart chars are being dropped, even when only at 115200 baud

open · by dhylands · opened 2015-10-30 · updated 2015-11-25

If I try to copy a file to the pyboard over a UART using rshell, then characters get dropped.

For this test, boot.py contains:

import pyb
pyb.usb_mode(None)
uart = pyb.UART(6, 115200, timeout_char=200, read_buf_len=600)
pyb.repl_uart(uart)

I brought a GPIO high at the beginning of USART6_IRQHandler and low at the end. Channel 0 is the data arriving on the UART pin, and Channel 1 is the UART IRQ.

Logic Analyzer Capture: https://www.dropbox.com/s/6mj5kr8sn6ad81j/UART_irq.png?dl=0

The UART IRQs are spaced approx 86 usec or so apart (sometimes a bit more); at 115200 baud, full tilt would be 86.8 usec per character.

The first gap corresponds to the place in the buffer where the first dropped character occurs. The first gap is 847 usec long and the second gap is 694 usec long.
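Those gap lengths translate directly into lost characters. A quick Python check of the arithmetic (assuming the usual 8N1 framing, i.e. 10 bits on the wire per character, and noting the F4 USART has only a single-byte receive data register, no FIFO):

```python
BAUD = 115200
BITS_PER_CHAR = 10                            # 8N1: start + 8 data + stop
char_time_us = BITS_PER_CHAR / BAUD * 1e6     # ~86.8 usec per character

# With a single-byte receive register, any IRQ lockout longer than about
# one character time at full line rate means at least one dropped byte.
for gap_us in (847, 694):
    print(gap_us / char_time_us)   # how many character times the gap spans
```

An 847 usec gap is nearly 10 character times, so at a saturated line that gap alone can swallow 8-9 bytes.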

So this tells me that either interrupts are being disabled for these periods or a higher priority interrupt is occurring. I'll add some more instrumentation to see if I can figure out what might be happening.

I also found it interesting that even though there are timeouts set on the uart, the call to sys.stdin.buffer.readinto() never times out.
