timer: Allow delays with a 32-bit microsecond timer
The current get_timer_us() uses 64-bit arithmetic even on 32-bit machines. For microsecond-level timeouts, 32 bits is plenty, so add a new function that takes an unsigned long. On 64-bit machines this is still 64 bits, so it introduces no penalty there; on 32-bit machines it is more efficient.

Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
@@ -152,6 +152,11 @@ uint64_t __weak get_timer_us(uint64_t base)
 	return tick_to_time_us(get_ticks()) - base;
 }
 
+unsigned long __weak get_timer_us_long(unsigned long base)
+{
+	return timer_get_us() - base;
+}
+
 unsigned long __weak notrace timer_get_us(void)
 {
 	return tick_to_time(get_ticks() * 1000);