I'm new to using HAL functions. The description of HAL_GetTick() says that it "provides a tick value in millisecond".

I don't understand whether this function returns ticks or milliseconds. To convert from ticks to milliseconds I would need to know how many ticks there are in a millisecond, and that is CPU-specific.

So what exactly does HAL_GetTick() return?
Edit:
My real problem is knowing how to measure time in microseconds. I thought I could get ticks from HAL_GetTick() and convert them to microseconds. This is addressed in the comments and in at least one of the answers, so I'm mentioning it here too, and I have edited the title accordingly.
HAL_GetTick() should return the number of milliseconds elapsed since startup, because a lot of HAL functions depend on it. How you achieve that is up to you. By default, HAL_Init() queries the system clock speed and sets the SysTick frequency to 1/1000th of it:
__weak HAL_StatusTypeDef HAL_InitTick(uint32_t TickPriority)
{
  /* Configure the SysTick to have interrupt in 1ms time basis */
  HAL_SYSTICK_Config(SystemCoreClock / 1000);

  /* Configure the SysTick IRQ priority */
  HAL_NVIC_SetPriority(SysTick_IRQn, TickPriority, 0);

  /* Return function status */
  return HAL_OK;
}
Then the default SysTick interrupt handler calls HAL_IncTick() to increment an internal counter once every millisecond, and HAL_GetTick() returns the value of that counter.
All these functions are defined as weak, so you can override them; as long as your version of HAL_GetTick() returns the elapsed time in milliseconds, it will be fine. You can, for example, replace HAL_InitTick() to let SysTick run at 10 kHz, but then you must ensure that HAL_IncTick() gets called only on every 10th interrupt, as sketched below. On a 216 MHz STM32F7 controller (or the recently released 400 MHz STM32H743) you can actually go down to a 1 MHz SysTick, but then you should be very careful to return from the handler as quickly as possible, and it would still be a horrible waste of precious processor cycles unless you do something in the handler that a hardware counter can't.
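A minimal sketch of that 10 kHz variant (untested; the subdivider counter is purely illustrative, and you provide your own SysTick_Handler in your stm32xxxx_it.c):

HAL_StatusTypeDef HAL_InitTick(uint32_t TickPriority)
{
  /* 10 kHz interrupt rate instead of the default 1 kHz */
  HAL_SYSTICK_Config(SystemCoreClock / 10000U);
  HAL_NVIC_SetPriority(SysTick_IRQn, TickPriority, 0U);
  return HAL_OK;
}

void SysTick_Handler(void)
{
  static uint32_t subdivider = 0U;   /* illustrative name, not a HAL symbol */

  if (++subdivider >= 10U)
  {
    subdivider = 0U;
    HAL_IncTick();                   /* keep the HAL tick at 1 ms */
  }
  /* the other nine interrupts are free for your own 100 us housekeeping */
}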
Or you may do it without configuring SysTick at all (override HAL_InitTick() with an empty function), but set up a 32-bit hardware timer with a sufficient prescaler to count every microsecond, and let HAL_GetTick() return the timer counter.
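A sketch of that timer-based variant, assuming TIM2 is one of your 32-bit timers and that its kernel clock equals PCLK1; HAL_GetTick() divides the microsecond count down so that HAL's millisecond contract still holds:

HAL_StatusTypeDef HAL_InitTick(uint32_t TickPriority)
{
  (void)TickPriority;                                  /* no interrupt involved */
  __HAL_RCC_TIM2_CLK_ENABLE();
  TIM2->PSC = HAL_RCC_GetPCLK1Freq() / 1000000U - 1U;  /* 1 MHz counter: 1 tick = 1 us */
  TIM2->EGR = TIM_EGR_UG;                              /* latch the prescaler immediately */
  TIM2->CR1 = TIM_CR1_CEN;                             /* free-running, no IRQ needed */
  return HAL_OK;
}

uint32_t HAL_GetTick(void)
{
  return TIM2->CNT / 1000U;   /* microseconds -> milliseconds */
}

Be aware that a 1 MHz 32-bit counter wraps after roughly 71.6 minutes, at which point the returned millisecond value jumps back to zero; if that matters, prescale the timer to 1 kHz instead and return CNT directly.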
Getting back to your real problem, measuring time on the order of microseconds: there are better ways.

If you have a 32-bit timer available, you can put the MHz value of the respective APB clock in the prescaler, start it, and there is your microseconds clock, without taking any processing time away from your application at all. This code should enable it (not tested) on an STM32F4:
__HAL_RCC_TIM5_CLK_ENABLE();
TIM5->PSC = HAL_RCC_GetPCLK1Freq() / 1000000 - 1;   /* counter clock = PCLK1 / (PSC + 1) = 1 MHz */
TIM5->CR1 = TIM_CR1_CEN;                            /* enable the counter (the bit is CEN, not EN) */
then get its value at any time by reading TIM5->CNT.
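Reading it for a measurement then looks like this (the function under test is hypothetical); note that the unsigned subtraction stays correct even across a counter wrap:

extern void run_measured_code(void);   /* hypothetical function under test */

uint32_t measure_us(void)
{
  uint32_t t0 = TIM5->CNT;
  run_measured_code();
  return TIM5->CNT - t0;               /* wrap-safe unsigned delta, in microseconds */
}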
Check your reference manual to see which hardware timers have 32-bit counters and where they get their clock from. This varies a lot across the STM32 series, but a 32-bit timer should be there on an F4.
If you can't use a 32-bit timer, there is the core cycle counter. Just enable it once with
CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   /* enable the trace/debug blocks, including DWT */
DWT->CYCCNT = 0;                                  /* reset the cycle counter */
DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;              /* start counting core cycles */
and then read the value from DWT->CYCCNT. Note that since it counts elapsed processor cycles, it overflows quickly: 2^32 cycles last only about 20 seconds at 216 MHz.
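Converting the cycle delta to microseconds is a division by the core clock in MHz; a sketch (again with a hypothetical function under test):

extern void run_measured_code(void);   /* hypothetical function under test */

uint32_t measure_cycles_us(void)
{
  uint32_t c0 = DWT->CYCCNT;
  run_measured_code();
  uint32_t cycles = DWT->CYCCNT - c0;              /* wrap-safe unsigned delta */
  return cycles / (SystemCoreClock / 1000000U);    /* e.g. divide by 216 at 216 MHz */
}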
EDIT:
I've just noticed that you're using an STM32L0. So, forget 32-bit timers and 200+ MHz cores, and unfortunately DWT->CYCCNT as well: the Cortex-M0+ core in the L0 does not implement the cycle counter. Think very carefully about how long the intervals you'd like to measure are and with what accuracy, then take a 16-bit timer. You could post that as a separate question, describing in more detail what your hardware looks like and what it should do. There might be a way to trigger a counter start/stop directly by the events you'd like to time.
It's both. Most of the time, the function which increments the HAL tick counter is hooked to the SysTick interrupt, which is configured to fire every 1 ms. Therefore HAL_GetTick() returns the number of milliseconds since the SysTick interrupt was configured (essentially since program start). This can also be thought of as "the number of times the SysTick interrupt has 'ticked'".
Although the question has already been answered, I think it would be helpful to see how HAL itself uses HAL_GetTick() to count milliseconds. This can be seen in HAL's function HAL_Delay(uint32_t Delay).

Implementation of HAL_Delay(uint32_t Delay) from stm32l0xx_hal.c:
/**
  * @brief  This function provides minimum delay (in milliseconds) based
  *         on variable incremented.
  * @note   In the default implementation, SysTick timer is the source of time base.
  *         It is used to generate interrupts at regular time intervals where uwTick
  *         is incremented.
  * @note   This function is declared as __weak to be overwritten in case of other
  *         implementations in user file.
  * @param  Delay specifies the delay time length, in milliseconds.
  * @retval None
  */
__weak void HAL_Delay(uint32_t Delay)
{
  uint32_t tickstart = HAL_GetTick();
  uint32_t wait = Delay;

  /* Add a period to guaranty minimum wait */
  if (wait < HAL_MAX_DELAY)
  {
    wait++;
  }

  while ((HAL_GetTick() - tickstart) < wait)
  {
  }
}
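The (HAL_GetTick() - tickstart) < wait comparison is worth imitating in your own code: being unsigned, it stays correct even when the tick counter wraps around after about 49.7 days. A sketch of the same pattern used as a timeout (the condition and handler are hypothetical):

extern int  transfer_done(void);    /* hypothetical completion flag */
extern void handle_timeout(void);   /* hypothetical error path */

void wait_with_timeout(void)
{
  uint32_t tickstart = HAL_GetTick();

  while (!transfer_done())
  {
    if ((HAL_GetTick() - tickstart) > 100U)   /* 100 ms budget, wrap-safe */
    {
      handle_timeout();
      break;
    }
  }
}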
Looking at my debugger, I can see the uwTick global variable, which appears to hold the same value as the result of calling HAL_GetTick() that I store in my own global variable.
As per the docs:
void HAL_IncTick(void)

This function is called to increment a global variable "uwTick" used as application time base.

Note: In the default implementation, this variable is incremented each 1ms in Systick ISR. This function is declared as __weak to be overwritten in case of other implementations in user file.
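For reference, in a CubeMX-generated project that hook-up is typically a one-liner in stm32l0xx_it.c (shown under that assumption; generated files vary between HAL versions):

void SysTick_Handler(void)
{
  HAL_IncTick();   /* increments uwTick once per millisecond */
}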
I needed a timestamp with 1 µs precision, and using TIM5 as described above worked, but a few tweaks were necessary. Here's what I came up with.
/* Initialization */
__HAL_RCC_TIM5_CLK_ENABLE();
TIM5->PSC = HAL_RCC_GetPCLK1Freq() / 500000;   /* counter ticks at ~500 kHz: one count = 2 us */
TIM5->CR1 = TIM_CR1_CEN;                       /* start the counter */
TIM5->CNT = -10;                               /* let it wrap almost immediately (see below) */

/* Reading the time */
uint32_t microseconds = TIM5->CNT << 1;        /* one count = 2 us, so double it */
I did not fully explore why I had to do what I did, but I realized two things very quickly. (1) The prescaler scaling was not working, although it looked right; this was one of several things I tried to get it to work (basically a clock ticking every 2 µs, with the count doubled when read). (2) The clock was already running and gave strange results at first. I tried several unsuccessful things to stop, reprogram, and restart it; setting the count to -10 was a crude but effective way to just let it complete its current cycle and then very quickly start working as desired. There are certainly better ways of achieving this, but overall it is a simple way of getting an accurate event timestamp with very low overhead.
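A possible explanation for observation (1), offered as an assumption rather than a verified fix: the counter runs at f_timer / (PSC + 1), and on most STM32s the timer kernel clock is twice the APB clock whenever the APB prescaler is not 1. A sketch that accounts for both and yields a direct 1 µs count:

/* Assumption: the APB1 prescaler is not 1, so TIM5's kernel clock is 2 x PCLK1. */
__HAL_RCC_TIM5_CLK_ENABLE();
uint32_t timclk = 2U * HAL_RCC_GetPCLK1Freq();

TIM5->PSC = timclk / 1000000U - 1U;   /* counter clock = timclk / (PSC + 1) = 1 MHz */
TIM5->EGR = TIM_EGR_UG;               /* force an update so the new PSC takes effect now */
TIM5->CR1 = TIM_CR1_CEN;

/* Reading: CNT now holds microseconds directly, no shift needed. */
uint32_t microseconds = TIM5->CNT;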