Issue #11837
QUERY · ISSUE

Change to `mp_handle_pending` includes possible MICROPY_EVENT_POLL_HOOK and downstream code breakage

open · by Gadgetoid · opened 2026-01-08 · updated 2026-01-12
bug · py-core

Port, board and/or hardware

all

MicroPython version

1.27+

Reproduction

For long-running processes handled by user C modules (e.g. a busy wait on an e-ink update) I've been using the pattern:

extern void mp_handle_pending(bool);
mp_handle_pending(true);

I tried to build against 1.27 today, and ran into this build error:

modules/c/ssd1680/ssd1680.cpp.o: in function `pimoroni::SSD1680::busy_wait()':
modules/c/ssd1680/ssd1680.cpp:54:(.text._ZN8pimoroni7SSD16809busy_waitEv+0x10): undefined reference to `mp_handle_pending'

This looks like it was introduced by https://github.com/micropython/micropython/commit/c57aebf790c40125b663231ec4307d2a3f3cf193

This PR seems to make the assumption that mp_handle_pending is only ever called inline in runtime.h, or in code that uses:

#include "runtime.h"
mp_handle_pending(true);

This assumption doesn't seem to hold true, for example the same "undefined reference" error should occur if an attempt is made to call "MICROPY_EVENT_POLL_HOOK" from Zephyr code:

https://github.com/micropython/micropython/blob/26c16969ab954db4d8d79bed154e3d45c12c087f/ports/zephyr/mpconfigport.h#L171-L172

But I guess runtime.h is implicitly available here, making the "extern" line redundant? In any case, this pattern should probably be updated wherever it occurs.

Either way, the inline wrapper that preserves the old `mp_handle_pending` behaviour does not cover all possible cases, and it broke my code going from 1.26 to 1.27.

I expect breakage, though, so I'd propose that the implicit "example" uses in the MicroPython codebase (there are a few things like "MICROPY_EVENT_POLL_HOOK") are updated, and that this issue should, with any luck, steer future encounters (I'll probably forget and run into this again) toward the correct usage.

Expected behaviour

Observed behaviour

Additional Information

No, I've provided everything above.

Code of Conduct

Yes, I agree

CANDIDATE · ISSUE

Zephyr port: async event loop implementation starves CPU

open · by bogdanm · opened 2023-06-21 · updated 2023-06-22
enhancement · port-zephyr

When using asyncio in MicroPython, the event loop implementation (asyncio.run) ends up polling Python objects in a queue. The polling code does this (extmod/modselect.c:poll_poll_internal):

...
    mp_uint_t start_tick = mp_hal_ticks_ms();
    mp_uint_t n_ready;
    for (;;) {
        // poll the objects
        n_ready = poll_map_poll(&self->poll_map, NULL);
        if (n_ready > 0 || (timeout != (mp_uint_t)-1 && mp_hal_ticks_ms() - start_tick >= timeout)) {
            break;
        }
        MICROPY_EVENT_POLL_HOOK
    }
...

The for (;;) loop above is a busy-wait loop. In the Zephyr port, it consumes a lot of CPU time and starves other threads. Even when MICROPY_EVENT_POLL_HOOK is set to k_yield (and thus the loop gives control back to the OS scheduler after each iteration) the code will still take most of the CPU time. Setting MICROPY_EVENT_POLL_HOOK to something like k_msleep(100) (which waits 100 ms before running another iteration of the loop) fixes the starvation issue, but it delays the Python thread for no good reason.

I have a possible solution for this, but please let me know first if there is interest in moving this forward, since lately I've got the impression that the Zephyr port isn't exactly a "first-class citizen" in the world of MicroPython ports.
