MICROPY_PORT_DEINIT_FUNC called after gc_sweep_all
I'm using MICROPY_PORT_DEINIT_FUNC to release hardware resources upon soft reset.
Unfortunately, it is only called after the GC heap has been swept by gc_sweep_all.
My port keeps hardware state in GC-allocated RAM, so by the time MICROPY_PORT_DEINIT_FUNC runs it's too late: that memory has already been deallocated.
https://github.com/micropython/micropython/blob/10709846f38f8f6519dee27694ce583926a00cb9/ports/esp32/main.c#L145-L153
Is there any particular reason gc_sweep_all is called before mp_deinit?
Is it reasonable to move gc_sweep_all after mp_deinit?
Idea to aid debugging of GC issues
This is in reference to this forum issue.
I tried that (enabling verbose mode in mpconfigport, I assume you mean).
That is when I saw that if I moved the stack onto the heap, there were only about 5 sweep (= free) messages and it kept working, versus about 20 sweeps (and a crash after suspend/resume) if the stack was still in static memory during the first call of gc.collect() (calling it every 100 ms).
That triggered my whole "the contents of the static stack seem to be ignored" idea, and when I looked at the code I thought I had found the explanation ("if all you have is a hammer, everything looks like a nail"-wise).
So I will have to look again into why the sweep results are so different between the two scenarios, but it is a good starting point. In the meantime I've learned a lot more about the GC :). I've also optimized the external memory parameters so it is now much faster, which was needed with my initial stack located on the heap in external SRAM :).
When the GC is freeing things it maybe shouldn't be freeing, it would be amazing to be able to correlate what it's freeing with what was originally allocated.
Ideally, there'd be an mpconfigport.h flag to enable storing a string with every allocated block. When such a block is freed, a debug message would print "Variable <XYZ> is freed".
But that would require too big a change on the user side, as they'd have to modify ALL their allocations to pass a custom string. There's a simpler way that is more scalable.