
Add ADC support to TI cc23x0 SoC #84520

Open · wants to merge 5 commits into base: main
Conversation

@jpanisbl jpanisbl commented Jan 24, 2025

This series adds ADC support to TI cc23x0 SoC.

Datasheet: https://www.ti.com/lit/ds/symlink/cc2340r5.pdf

zephyrbot commented Jan 24, 2025

The following west manifest projects have changed revision in this Pull Request:

Name Old Revision New Revision Diff

All manifest checks OK

Note: This message is automatically posted and updated by the Manifest GitHub Action.

@jpanisbl jpanisbl force-pushed the baylibre/upstream-cc23x0-adc branch from 6c2ddc0 to ffb8328 on February 4, 2025 at 14:05
@jpanisbl jpanisbl marked this pull request as ready for review February 4, 2025 16:23
@zephyrbot zephyrbot added the "area: ADC" (Analog-to-Digital Converter), "platform: TI SimpleLink" (Texas Instruments SimpleLink MCU), and "area: Samples" labels on Feb 4, 2025
@kartben kartben added this to the v4.2.0 milestone Feb 17, 2025
@@ -18,4 +18,36 @@
bias-disable;
input-enable;
};

/* ADC0 */
adc0_ch0: adc0_ch0 {


Why have channels 1, 5, 7, 9, 10 and 11 not been added?


@jpanisbl jpanisbl Feb 27, 2025


Because they are not exposed through the board connectors:
https://www.ti.com/lit/ml/swru588a/swru588a.pdf?ts=1740660340632


#define CPU_FREQ DT_PROP(DT_PATH(cpus, cpu_0), clock_frequency)

#define ADC_CC23_CH_UNDEF 0xff


I think I would prefer CC23X0 instead of just CC23. It matches the file name. Same for the other defines below.

Suggested change
#define ADC_CC23_CH_UNDEF 0xff
#define ADC_CC23X0_CH_UNDEF 0xff

clk_cycles = 1;
}

ADCSetSampleDuration(adc_cc23x0_clkdiv_to_field(clk_div), clk_cycles);


How do you make sure that the sample duration is correct for all the configured channels? The clock divider is fixed for the entire ADC peripheral (ADC.CTL0.SCLKDIV), and only two different sample times can be configured (ADC.SCOMP0 and ADC.SCOMP1; ADCSetSampleDuration() always uses ADC.SCOMP0).
So, unless I am missing something, with this implementation the acquisition time will only be correct for the last configured channel.


@jpanisbl jpanisbl Feb 28, 2025


Indeed, there's a mismatch between the Zephyr ADC API and this HW.
Zephyr offers the possibility to configure a sample duration for each channel, whereas the SoC (in sequence mode) uses a single sample duration for all the channels. A choice had to be made, so I simply decided that the latest sample duration applied (in chronological order) was the most "up to date", so to speak.


It is possible to achieve two different sample durations (with changes to driverlib). One can be configured using ADC.SCOMP0 and the other can be configured using ADC.SCOMP1. But the same clock divider must be used.
Maybe we can keep track of the sample durations already configured for the other channels, and return an error if it is not possible to configure the requested sample duration.

A related question. Is there anywhere in the Zephyr documentation that device specific limitations like this can be documented, so users don't expect they can just freely choose sample duration for each channel?

Contributor Author

Feel free to modify driverlib (and eventually submit a related patch to this driver, once it gets applied).

LOG_DBG("Requested sample duration: %u ns", acq_time_ns);

/* Iterate through each divider */
ARRAY_FOR_EACH(clk_dividers, i) {
@n-gylling-ti n-gylling-ti Feb 27, 2025


It seems like an unnecessary amount of computation is done in each iteration of the loop. How about doing something like this (this assumes that clk_dividers is ordered in ascending order):

uint32_t base_clock_cycles = acq_time_ns * CPU_FREQ_MHZ / 1000; /* Assuming CPU_FREQ_MHZ is the cpu frequency in MHz */
uint8_t min_divider = DIV_ROUND_UP(base_clock_cycles, ADC_CC23_MAX_CYCLES); /* Could possibly be combined with above division so one less division operation is performed */
ARRAY_FOR_EACH(clk_dividers, i) {
    divider = clk_dividers[i];
    if (divider >= min_divider)
    {
        *clk_div = divider;
        *clk_cycles = DIV_ROUND_CLOSEST(base_clock_cycles, divider);
        break;
    }
}

Contributor Author

Not sure I understand what should be kept and/or removed from my code.
In your suggestion above, can you please copy/paste from my code what should be kept in ARRAY_FOR_EACH? That will make things clearer to me.


Basically your entire ARRAY_FOR_EACH block would be replaced by mine. So the function would look something like this:

static int adc_cc23x0_calc_clk_cfg(uint32_t acq_time_ns, uint32_t *clk_div, uint16_t *clk_cycles)
{
	uint32_t base_clock_cycles;
	uint32_t divider;
	uint8_t min_divider;

	/* Initialize clk_div to zero. If it is still zero after the loop,
	 * no valid configuration was found.
	 */
	*clk_div = 0;

	LOG_DBG("Requested sample duration: %u ns", acq_time_ns);

	base_clock_cycles = acq_time_ns * CPU_FREQ_MHZ / 1000;
	min_divider = DIV_ROUND_UP(base_clock_cycles, ADC_CC23_MAX_CYCLES);

	ARRAY_FOR_EACH(clk_dividers, i) {
		divider = clk_dividers[i];
		if (divider >= min_divider) {
			*clk_div = divider;
			*clk_cycles = DIV_ROUND_CLOSEST(base_clock_cycles, divider);
			break;
		}
	}

	/* Check if a valid configuration was found */
	if (*clk_div == 0) {
		return -EINVAL;
	}

	return 0;
}

Contributor Author

I'll check, but I'm not sure it will give the same result. Did you test that?


No, I didn't test this. But we have a similar implementation in the ADCBuf SimpleLink driver.


@jpanisbl jpanisbl Feb 28, 2025


OK, I'll check.

[UPDATE] I checked: for the 1.5µs you take as an example here, your algo above selects "Divider: 1, Cycles: 72" (which is not what we want).

cycles = DIV_ROUND_UP(acq_time_ns, clock_period_ns);

/* Check if this configuration is valid and optimal */
if (cycles <= ADC_CC23_MAX_CYCLES && cycles < min_cycles) {


Why is a low number of cycles optimal? Don't we want the divider to be as low as possible and the number of cycles to be as high as possible to achieve the best resolution?

Contributor Author

Initially, I considered this option. But I finally gave it up. There are 2 situations:

[1] If the exact acq_time requested by the user can be reached with a [high clk_div | low clk_cycles] combination, that's much better for power consumption than a [low clk_div | high clk_cycles] combination (AFAIK, power is important for your team).

[2] If the exact acq_time requested by the user cannot be reached, then what's better? Let's say the user configured 250 ms: IMO it's better having a [high clk_div | low clk_cycles] and reaching 200 or 300 ms, rather than having a [low clk_div | high clk_cycles] and reaching 240 ms (I did not make any computation with these figures, they are just an example to explain the idea).
If the resolution is really important for the user, then a solution still exists: he can take a look at the manual and choose the closest acq_time that can be exactly reached (with my 250 ms example, he may even be able to choose between 240 and 260 if this is important for him :-) ). Then the algo, as described in [1], will use optimized parameters for the power consumption without sacrificing the resolution.

As a summary, with my "power" based implementation, it's still possible to reach any exact resolution that may be configured. With a "resolution" based implementation, then the "power consumption" might not be as good as possible.


I am not sure how big a power saving there is with a lower clock, but you're right that there probably is some saving.
We need to figure out what we actually want to optimize for and if it is worth it to optimize for power consumption.

I do have a question regarding your implementation though. I don't see how you will ever end up using a divider different from 48. If for example the user wants 1.5µs, which is possible to achieve exactly with a divider of 1 and 72 cycles, then the current implementation would still select 48 as the divider, right?

divider = 1:
clock_frequency = 48000000
clock_period_ns = 20.833...
cycles = ceil(1500/20.833) = 72 (resulting in 1.5µs)

divider = 2:
clock_frequency = 24000000
clock_period_ns = 41.66...
cycles = ceil(1500/41.66..) = 36 (resulting in 1.5µs)

divider = 4:
clock_frequency = 12000000
clock_period_ns = 83.33...
cycles = ceil(1500/83.33..) = 18 (resulting in 1.5µs)

divider = 8:
clock_frequency = 6000000
clock_period_ns = 166.66...
cycles = ceil(1500/166.66..) = 9 (resulting in 1.5µs)

divider = 16:
clock_frequency = 3000000
clock_period_ns = 333.33...
cycles = ceil(1500/333.33..) = 5 (resulting in 1.6667µs)

divider = 24:
clock_frequency = 2000000
clock_period_ns = 500
cycles = ceil(1500/500) = 3 (resulting in 1.5µs)

divider = 48:
clock_frequency = 1000000
clock_period_ns = 1000
cycles = ceil(1500/1000) = 2 (resulting in 2µs)

Unless I am missing something, the current implementation would select a divider of 48 since the resulting number of cycles is minimized. I don't see that an exact solution with a divider of 24, 8, 4, 2 or 1 is selected.

With that said, computing this by hand did teach me that my suggestion in https://github.com/zephyrproject-rtos/zephyr/pull/84520/files#r1973998621 might not be optimal either, since it would select a divider of 1 while it could have selected a divider of 24 and achieved the same precision. Maybe my suggestion should select the largest divider that still divides evenly into the desired number of base_clock_cycles.

Contributor Author

I agree that taking care of power involves many other things (some additional PM patches will be submitted over the next months).

In the example mentioned above (1.5 us), the algo should select 24.

Divider 72 is not a possible value on this SoC.


I don't follow. The algorithm is checking all the possible dividers and then it selects the divider which results in the lowest number of cycles (please correct me if I am wrong).
I did the same manually in my previous comment. And a divider of 48 would result in 2 cycles (inexact solution). A divider of 24 would result in 3 cycles (exact solution). Since 2 is less than 3 it would select 48 as the divider, right?


@jpanisbl jpanisbl Feb 28, 2025


You might be right. I'll re-check and possibly add a condition to avoid such a scenario.

[UPDATE] I checked: you are right indeed. So I'll fix that.

Contributor Author

That's fixed. Here is the output with debug level enabled for your use case above (1.5 us):

*** Booting Zephyr OS build f8906c930a7b ***
[...]
[00:00:00.017,000] <dbg> adc_cc23x0: adc_cc23x0_calc_clk_cfg: Requested sample duration: 1500 ns
[00:00:00.027,000] <dbg> adc_cc23x0: adc_cc23x0_calc_clk_cfg: Divider: 1, Cycles: 72, Actual sample duration: 1500 ns
[00:00:00.038,000] <dbg> adc_cc23x0: adc_cc23x0_calc_clk_cfg: Divider: 2, Cycles: 36, Actual sample duration: 1500 ns
[00:00:00.049,000] <dbg> adc_cc23x0: adc_cc23x0_calc_clk_cfg: Divider: 4, Cycles: 18, Actual sample duration: 1500 ns
[00:00:00.060,000] <dbg> adc_cc23x0: adc_cc23x0_calc_clk_cfg: Divider: 8, Cycles: 9, Actual sample duration: 1500 ns
[00:00:00.071,000] <dbg> adc_cc23x0: adc_cc23x0_calc_clk_cfg: Divider: 24, Cycles: 3, Actual sample duration: 1500 ns
[...]

I'm going to update the PR.

LOG_DBG("Channel %d", ch);

if (data->curr_mem_index > ADC_CC23_MEM_MAX) {
LOG_ERR("%d channels are already configured", ADC_CC23_MEM_COUNT);


Is there any way to un-setup a channel so the memctl register can be freed up to be used for a new channel? Or can you only set up 4 channels, and then you are stuck with those 4 channels forever?


@jpanisbl jpanisbl Feb 28, 2025


No, there is not a way to un-setup the memctl register. The HW does not fit very well with the Zephyr API, so I could not implement such re-initialization (that said, I may have missed something, so I'm open to suggestions). In most use cases the channels are configured at initialization, and not changed any more.


One way to support more channels is to maintain a list of memctl configs in RAM for each configured channel and then in adc_cc23x0_read_common() (which I haven't fully reviewed yet) we can copy the relevant memctl configs from RAM to the HW. And if more than 4 channels are requested in adc_cc23x0_read_common() then it could be split into multiple sequences.

Contributor Author

What callback would be in charge of setting (I mean, re-initializing) the list of memctl configs in RAM?


When is re-initialization necessary?

I now realize that the driverlib API doesn't configure the memctl register with just one call. If we want to use the current driverlib implementation then adc_cc23x0_channel_setup() would need to store the following information, in data, for the channel to be set up (maybe I am missing some):

  • reference
  • channel number
  • adjustment gain (which you already store)
  • sample cycles
  • clock divider

This could be implemented as an array, in adc_cc23x0_data, of 16 elements of a struct with members for each of the above values.

Then in adc_cc23x0_read_common() you will call the relevant driverlib APIs to configure the memctl register(s) for the channels to be read before starting the conversion. If applicable, you would do this (configure memctl registers and wait for conversion to complete) in a loop until all channels have been read. For example, if more than 4 channels are to be read, you can read 4 at a time until all channels have been read. Also, if some channels cannot be read in one "sequence", for example because they require different clock dividers, then they could be handled in different iterations of the loop as well.

Contributor Author

That might be an interesting feature indeed. But the goal of this PR is mainly to get a clean basis upstream.

* Set adjustment offset, and get adjustment gain. Both adjustment values depend on
* reference source. Internal gain will be used for subsequent measurement compensation.
*/
ADCSetAdjustmentOffset(ref);


This will be applied "globally" to the ADC and it is not specific to one memctl register, so it will not work if different channels are using different references.


@jpanisbl jpanisbl Mar 4, 2025


You're right. I'm going to fix that.

@jpanisbl jpanisbl force-pushed the baylibre/upstream-cc23x0-adc branch 4 times, most recently from 0f9a1dc to e3b0190 on March 3, 2025 at 17:13
jpanisbl added 2 commits March 4, 2025 17:45
Add support for 16-channel ADC to cc23x0 SoC. The driver supports the
following conversion modes:
- Single channel | Single conversion,
- Sequence of channels (up to 4) | Single conversion.

Signed-off-by: Julien Panis <jpanis@baylibre.com>
Add support for ADC to cc23x0 SoC.

Signed-off-by: Julien Panis <jpanis@baylibre.com>
jpanisbl added 3 commits March 4, 2025 17:45
Enable ADC.

Signed-off-by: Julien Panis <jpanis@baylibre.com>
Add LP-EM-CC2340R5 board support to ADC sample.

Signed-off-by: Julien Panis <jpanis@baylibre.com>
Add LP-EM-CC2340R5 board support to ADC sample.

Signed-off-by: Julien Panis <jpanis@baylibre.com>
@jpanisbl jpanisbl force-pushed the baylibre/upstream-cc23x0-adc branch from e3b0190 to 141167b on March 4, 2025 at 16:56