STM32 machine.ADC channel constants overflowing MP_ROM_INT() 31 bit signed type
When trying to add the STM32H5 core voltage input, I found that the constants for internal channels don't work as expected.
TLDR
- MP_ROM_INT() handles only 31-bit signed integers.
- The STM32 channel constants use the full unsigned 32-bit range, making them unsuitable for MP_ROM_INT().
- Channel numbers taken from machine_adc_locals_dict_table work by accident when bits 31 and 30 are both set.
- Channel numbers get corrupted when bit 31 is 0 and bit 30 is 1.
Detailed description
machine_adc.c defines constants for internal channels using the STM32 HAL driver #defines. See lib/stm32lib/STM32H5xx_HAL_Driver/Inc/stm32h5xx_hal_adc.h. Those can be used when creating a machine.ADC() object, e.g. adc = ADC(ADC.CORE_TEMP).
STATIC const mp_rom_map_elem_t machine_adc_locals_dict_table[] = {
{ MP_ROM_QSTR(MP_QSTR_read_u16), MP_ROM_PTR(&machine_adc_read_u16_obj) },
{ MP_ROM_QSTR(MP_QSTR_VREF), MP_ROM_INT(ADC_CHANNEL_VREF) },
{ MP_ROM_QSTR(MP_QSTR_CORE_VREF), MP_ROM_INT(ADC_CHANNEL_VREFINT) },
#if defined(STM32G4)
{ MP_ROM_QSTR(MP_QSTR_CORE_TEMP), MP_ROM_INT(ADC_CHANNEL_TEMPSENSOR_ADC1) },
#else
{ MP_ROM_QSTR(MP_QSTR_CORE_TEMP), MP_ROM_INT(ADC_CHANNEL_TEMPSENSOR) },
#endif
#if defined(ADC_CHANNEL_VBAT)
{ MP_ROM_QSTR(MP_QSTR_CORE_VBAT), MP_ROM_INT(ADC_CHANNEL_VBAT) },
#endif
#if defined(ADC_CHANNEL_VDDCORE)
{ MP_ROM_QSTR(MP_QSTR_CORE_VDD), MP_ROM_INT(ADC_CHANNEL_VDDCORE) },
#endif
};
STATIC MP_DEFINE_CONST_DICT(machine_adc_locals_dict, machine_adc_locals_dict_table);
Unfortunately the channel constants are uint32_t values that don't fit into the 31-bit signed range of MP_ROM_INT(). The following are the literal values from the STM32H5 driver.
ADC_CHANNEL_VREFINT 0xc7520000
ADC_CHANNEL_TEMPSENSOR 0xc3210000
ADC_CHANNEL_VBAT 0x43290000
ADC_CHANNEL_VDDCORE 0x475a0000
When instantiating a machine.ADC object with one of the constants from machine_adc_locals_dict_table, the value gets corrupted by the shifting and sign extension in the SMALL_INT macros.
#define MP_OBJ_SMALL_INT_VALUE(o) (((mp_int_t)(o)) >> 1)
#define MP_OBJ_NEW_SMALL_INT(small_int) ((mp_obj_t)((((mp_uint_t)(small_int)) << 1) | 1))
The incorrect constants are loaded via mp_obj_get_int() in the constructor.
// ADC(id)
STATIC mp_obj_t machine_adc_make_new(const mp_obj_type_t *type, size_t n_args, size_t n_kw, const mp_obj_t *all_args) {
// Check number of arguments
mp_arg_check_num(n_args, n_kw, 1, 1, false);
mp_obj_t source = all_args[0];
uint32_t channel;
uint32_t sample_time = ADC_SAMPLETIME_DEFAULT;
ADC_TypeDef *adc;
if (mp_obj_is_int(source)) {
#if defined(STM32WL)
adc = ADC;
#else
adc = ADC1;
#endif
channel = mp_obj_get_int(source);
...
To make the problem clearer, I modified machine_adc_print() to also display the channel number in hex notation.
STATIC void machine_adc_print(const mp_print_t *print, mp_obj_t self_in, mp_print_kind_t kind) {
machine_adc_obj_t *self = MP_OBJ_TO_PTR(self_in);
unsigned adc_id = 1;
#if defined(ADC2)
if (self->adc == ADC2) {
adc_id = 2;
}
#endif
#if defined(ADC3)
if (self->adc == ADC3) {
adc_id = 3;
}
#endif
mp_printf(print, "<ADC%u channel=%u (%x)>", adc_id, self->channel, self->channel);
}
Test script
# Test Code
from machine import ADC
# Expect 0xc7520000 --> Ok, by accident
a=ADC(ADC.CORE_VREF)
a
<ADC1 channel=17 (c7520000)>
# Expect 0xc3210000 --> Ok, by accident
a=ADC(ADC.CORE_TEMP)
a
<ADC1 channel=16 (c3210000)>
# Expect 0x43290000 --> Fail: the constant gets sign-extended when read back with MP_OBJ_SMALL_INT_VALUE(),
# leaving the MSB incorrectly set.
a=ADC(ADC.CORE_VBAT)
a
<ADC1 channel=16 (**c3290000**)>
# Expect 0x475a0000 --> Fail: the constant gets sign-extended when read back with MP_OBJ_SMALL_INT_VALUE()
a=ADC(ADC.CORE_VDD)
a
<ADC2 channel=17 (**c75a0000**)>
In my opinion the code that uses these uint32_t constants works only by accident. Values starting with 0xC....... lose bit 31 when MP_ROM_INT() -> MP_OBJ_NEW_SMALL_INT() shifts them left. Since bit 30 is also 1, the arithmetic right shift in MP_OBJ_SMALL_INT_VALUE() sign-extends and accidentally restores bit 31.
The two values starting with 0x4....... are converted by MP_OBJ_NEW_SMALL_INT() to 0x8...... | 1, which makes them negative numbers. When read back with MP_OBJ_SMALL_INT_VALUE(), the arithmetic right shift leaves the MSB incorrectly set.
All in all, the use of unsigned 32-bit integers makes a lot of this code work by pure accident: MP_ROM_INT() doesn't work with the STM32 (at least H5) ADC channel literals.
Version Info
Custom Build
MicroPython v1.20.0-283-g7d66ae603-dirty on 2023-07-29; STM32H573I-DK with STM32H573IIK3Q
Type "help()" for more information.
* 7d66ae603 - (2 weeks ago) esp32/machine_timer: Switch from legacy driver to timer HAL. - Damien George (HEAD -> master, origin/master, origin/HEAD)
* 671b35cea - (4 weeks ago) py/builtinimport: Fix built-in imports when external import is disabled. - Jim Mussared
RFC: refine and standardise machine.ADC class
Reading ADC values is pretty common and it would be great if all ports had the same behaviour in this respect. The current situation is:
- For stm32 and esp32:
adc = machine.ADC(pin) # pin is any pin object
adc.read() # returns value between 0 and 4095
- For esp8266:
adc = machine.ADC(id) # id=0 or 1
adc.read() # returns value between 0 and 1024
- For cc3200:
periph = machine.ADC()
channel = periph.channel(pin=id) # id is a pin id, a string
channel.value() # returns value between 0 and 4095
To at least make reading values portable, the return values should be in the same range on every port. I also think it's a good idea to make the units/range explicit in the name of the read method, because then it's completely unambiguous what it returns.
My proposal would be to specify the behaviour of machine.ADC to at least support the following usage:
adc = machine.ADC(id) # id can be an integer, pin, or any other identifier
adc.read_u16() # returns a raw integer reading between 0 and 65535 inclusive
adc.read_mv() # returns a calibrated voltage reading in millivolts (int, could be negative)
adc.read_v() # returns a calibrated voltage reading (int, or float if port supports floats)
The read_u16() method is there to get full access to the raw underlying value. read_mv() is more than just convenience: on systems that are able to calibrate the value it is very useful to be able to read a calibrated value (functionality not directly available with read_u16()), and it's an integer to avoid the need for floats. read_v() is convenience and would be read_mv()/1000.
The advantage of using a suffix for the units is that the existing read() method on the various ports can stay for backwards compatibility (at least for now).
Then there are a few items that would be useful to configure, but I don't have a concrete proposal on how to configure these just yet:
- setting bit-width precision of the ADC peripheral
- setting sample time for the reading