Issue #5994
QUERY · ISSUE

Change to `mp_handle_pending` includes possible MICROPY_EVENT_POLL_HOOK and downstream code breakage

open · by Gadgetoid · opened 2026-01-08 · updated 2026-01-12
bug · py-core

Port, board and/or hardware

all

MicroPython version

1.27+

Reproduction

For long-running operations handled by user C modules (e.g. a busy wait on an e-ink update) I've been using the pattern:

extern void mp_handle_pending(bool);
mp_handle_pending(true);

I tried to build against 1.27 today, and ran into this build error:

modules/c/ssd1680/ssd1680.cpp.o: in function `pimoroni::SSD1680::busy_wait()':
modules/c/ssd1680/ssd1680.cpp:54:(.text._ZN8pimoroni7SSD16809busy_waitEv+0x10): undefined reference to `mp_handle_pending'

This looks like it was introduced by https://github.com/micropython/micropython/commit/c57aebf790c40125b663231ec4307d2a3f3cf193

This change seems to assume that mp_handle_pending is only ever called inline from runtime.h, or from code that uses:

#include "runtime.h"
mp_handle_pending(true);

This assumption doesn't hold in all cases. For example, the same "undefined reference" error should occur if MICROPY_EVENT_POLL_HOOK is invoked from Zephyr code:

https://github.com/micropython/micropython/blob/26c16969ab954db4d8d79bed154e3d45c12c087f/ports/zephyr/mpconfigport.h#L171-L172

But I guess runtime.h is implicitly available here, which makes the `extern` line redundant; this pattern should probably be updated wherever it occurs.

Either way, the inline wrapper added to preserve the old mp_handle_pending behaviour does not cover all possible cases, which breaks my code when moving from 1.26 to 1.27.

I do expect some breakage between releases, so I'd propose that the implicit "example" uses in the MicroPython codebase (there are a few, such as MICROPY_EVENT_POLL_HOOK) are updated, and that this issue, with any luck, steers future encounters (I'll probably forget and run into this again) toward the correct usage.

Expected behaviour

Observed behaviour

Additional Information

No, I've provided everything above.

Code of Conduct

Yes, I agree

CANDIDATE · ISSUE

esp32/mpconfigport: MICROPY_EVENT_POLL_HOOK fails to yield

open · by tve · opened 2020-05-01 · updated 2020-05-03
port-esp32

On the STM32, MICROPY_EVENT_POLL_HOOK contains either a pyb_thread_yield() or a __WFI(), causing the flow of execution to pause briefly and thereby give other threads a chance at the CPU or provide an opportunity to save power. On the esp32, MICROPY_EVENT_POLL_HOOK does not contain any form of yield (https://github.com/micropython/micropython/blob/master/ports/esp32/mpconfigport.h#L245-L252). Instead, mp_hal_delay_ms both calls MICROPY_EVENT_POLL_HOOK and yields (https://github.com/micropython/micropython/blob/master/ports/esp32/mphalport.c#L164-L165), and mp_hal_stdin_rx_chr also calls both. However, in extmod/moduselect.c, poll_poll_internal has only the MICROPY_EVENT_POLL_HOOK macro (https://github.com/micropython/micropython/blob/master/extmod/moduselect.c#L256), with the result that on the esp32 it busy-waits, hogging the CPU, while on the stm32 it yields as expected.

It's pretty easy to fix the esp32 situation and that's actually something included in #5473 (https://github.com/micropython/micropython/pull/5473/files#diff-b9499fc8ad5b9793822626f6befdb1b6 and https://github.com/micropython/micropython/pull/5473/files#diff-4c3d68ff23336da03f336fbc26571f7b). But that's not the end of the story...

Due to the esp32's 100 Hz clock tick, moving the yield into MICROPY_EVENT_POLL_HOOK results in a 10 ms granularity whenever a routine uses MICROPY_EVENT_POLL_HOOK. extmod/uasyncio_basic.py works around this by including code that checks "is the remaining sleep duration shorter than a 10 ms clock tick? if yes, then busy-wait". However, poll_poll_internal doesn't (and really shouldn't, since it's not port-specific). The next thing that happens is that in the test suite extmod/uasyncio_basic.py verifies that sleeping in uasyncio for 20 ms and 40 ms has the desired effect, and it turns out it doesn't on the esp32; rather, the sleep times are often 30 ms and 50 ms. This over-sleeping happens because asyncio's IOQueue uses ipoll to sleep, which now gets 10 ms long yields inserted and, as mentioned above, doesn't compensate for that. So, long story short: fixing the not-yielding issue causes over-sleeping when using uasyncio.

IMHO the best fix is to increase the clock tick rate on the esp32 to 1 kHz, which would also help reduce latency when waiting to receive a packet, but I'm happy to learn about other alternatives!
