Discussion:
time: Should real time usage account for discontinuous jumps?
Petr Pisar
2013-09-25 11:10:32 UTC
Hello,

The GNU time as well as the bash built-in compute a process's real time
usage as a simple difference between two real-time points. If there was
a time adjustment in between (by NTP or a manual change), the measured
value would be affected.

I have found no hint anywhere on whether this is intended behavior or
whether one should measure some kind of monotonic time line.

Is there any general agreement? My personal opinion is that monotonic
clock_gettime(CLOCK_MONOTONIC_RAW) is the best option.
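
To make the difference concrete, here is a minimal C sketch of the two
approaches (only an illustration of the idea; it assumes POSIX
clock_gettime() and the Linux-specific CLOCK_MONOTONIC_RAW):

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static double diff_sec(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    }

    int main(void)
    {
        struct timespec rt0, rt1, mono0, mono1;

        clock_gettime(CLOCK_REALTIME, &rt0);        /* plain wall-clock delta */
        clock_gettime(CLOCK_MONOTONIC_RAW, &mono0); /* unaffected by steps */

        sleep(2);                   /* stand-in for the measured command */

        clock_gettime(CLOCK_REALTIME, &rt1);
        clock_gettime(CLOCK_MONOTONIC_RAW, &mono1);

        printf("realtime delta:  %.6f s\n", diff_sec(rt0, rt1));
        printf("monotonic delta: %.6f s\n", diff_sec(mono0, mono1));
        return 0;
    }

If the system clock is stepped during the sleep, only the first number
moves; the second keeps matching the physical duration.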

-- Petr
Paul Eggert
2013-09-25 14:41:13 UTC
Post by Petr Pisar
Is there any general agreement? My personal opinion is that monotonic
clock_gettime(CLOCK_MONOTONIC_RAW) is the best option.
I don't think it's standardized, but I agree that that clock
would be better to use, if it works (which it doesn't always).
Bob Proulx
2013-09-26 05:00:11 UTC
Post by Petr Pisar
The GNU time as well as the bash built-in compute a process's real time
usage as a simple difference between two real-time points. If there was
a time adjustment in between (by NTP or a manual change), the measured
value would be affected.
NTP will never step the clock. NTP will adjust the length of each clock
tick to keep the clock on time and so that every tick is present. If the
clock is being stepped, it would be due to other reasons, such as a
manual change.

But why would there be a time adjustment in between? Stepping the clock
is an abnormal condition. It isn't something that should ever happen
during normal system operation. If your clock is being stepped then
that is a bug and needs to be fixed.
Post by Petr Pisar
I have found no hint anywhere on whether this is intended behavior or
whether one should measure some kind of monotonic time line.
Since stepping the clock is not a normal condition I don't think it
matters. It certainly isn't a problem if the system is running NTP
and the clock is running normally.
Post by Petr Pisar
Is there any general agreement? My personal opinion is that
monotonic clock_gettime(CLOCK_MONOTONIC_RAW) is the best option.
I think that if you need a different clock, the problem is not with
time, gettime, or another time source. The problem is that something is
stepping the clock abnormally. Figure out why your system clock is
being stepped and fix that problem first.

Bob
David C Niemi
2013-09-26 14:34:52 UTC
I presume the point here is about the wall clock time, not actual CPU time
usage, which is measured completely separately and for which
CLOCK_MONOTONIC* are not in the picture.

I can only think of 3 fairly special cases where a difference in
CLOCK_MONOTONIC_RAW differs meaningfully from a difference in the system
time:

1) a leap second occurs

2) the hardware clock is inaccurate and NTP is making frequent small
adjustments

3) the system time was set wrong and NTP is adjusting clock speed to get
it to the correct time

Case 1 is very infrequent, and unlikely to be seen while you are using
time other than on a very long-running task; and if it is a long-running
task the error in the hardware clock could well exceed 1 second. So it's
about a wash either way.

Case 2 you are much better off with the system time than the hardware
time, because the hardware time is far less accurate.

Case 3 you are temporarily better off with hardware time, but this is a
rather special case.

So on the whole I don't see CLOCK_MONOTONIC_RAW being a more accurate
source, especially not in the long run. In addition, CLOCK_MONOTONIC_RAW
is Linux-specific, and portability and simplicity are worthwhile goals in
themselves when not faced with a clearly advantageous alternative.
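
If you do want the raw clock where it exists, a compile-time fallback
keeps the code portable; a rough sketch (assuming only <time.h> and
POSIX clock_gettime()):

    #include <stdio.h>
    #include <time.h>

    /* Prefer the Linux-specific raw clock where the headers provide it,
     * otherwise fall back to the portable POSIX CLOCK_MONOTONIC. */
    #ifdef CLOCK_MONOTONIC_RAW
    # define ELAPSED_CLOCK CLOCK_MONOTONIC_RAW
    #else
    # define ELAPSED_CLOCK CLOCK_MONOTONIC
    #endif

    int main(void)
    {
        struct timespec t;

        if (clock_gettime(ELAPSED_CLOCK, &t) != 0) {
            perror("clock_gettime");
            return 1;
        }
        printf("%ld.%09ld\n", (long)t.tv_sec, (long)t.tv_nsec);
        return 0;
    }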

David C Niemi
Post by Petr Pisar
Hello,
The GNU time as well as the bash built-in compute a process's real time
usage as a simple difference between two real-time points. If there was
a time adjustment in between (by NTP or a manual change), the measured
value would be affected.
I have found no hint anywhere on whether this is intended behavior or
whether one should measure some kind of monotonic time line.
Is there any general agreement? My personal opinion is that monotonic
clock_gettime(CLOCK_MONOTONIC_RAW) is the best option.
-- Petr
+-----------------------------------------------------------+
| David C Niemi (Reston, VA, USA) niemi at tuxers dot net |
+-----------------------------------------------------------+
Petr Pisar
2013-09-26 06:37:06 UTC
Post by Bob Proulx
Post by Petr Pisar
The GNU time as well as the bash built-in compute a process's real time
usage as a simple difference between two real-time points. If there was
a time adjustment in between (by NTP or a manual change), the measured
value would be affected.
NTP will never step the clock. NTP will adjust the length of each clock
tick to keep the clock on time and so that every tick is present. If the
clock is being stepped, it would be due to other reasons, such as a
manual change.
NTP does not step; NTP slows or accelerates real time. But the effect
is the same---the difference between the two time points does not match
the physical duration.
Post by Bob Proulx
But why would there be a time adjustment in between? Stepping the clock
is an abnormal condition. It isn't something that should ever happen
during normal system operation. If your clock is being stepped then
that is a bug and needs to be fixed.
NTP by definition refuses to adjust the clock if the difference is too
big. Thus distributions usually step the clock on first NTP contact,
and then keep adjusting. With mobile hosts losing and regaining network
connectivity on the fly, it's quite possible the system will experience
time steps.
Post by Bob Proulx
Post by Petr Pisar
I have found no hint anywhere on whether this is intended behavior or
whether one should measure some kind of monotonic time line.
Since stepping the clock is not a normal condition I don't think it
matters. It certainly isn't a problem if the system is running NTP
and the clock is running normally.
I agree one can consider an NTP-adjusted clock as `running normally',
because the reason for the adjustment is that the local real-time clock
is not accurate enough. In this light, CLOCK_MONOTONIC seems good
enough.

-- Petr
Charles Swiger
2013-09-26 15:09:16 UTC
Hi--
Post by Petr Pisar
Post by Bob Proulx
Post by Petr Pisar
The GNU time as well as the bash built-in compute a process's real time
usage as a simple difference between two real-time points. If there was
a time adjustment in between (by NTP or a manual change), the measured
value would be affected.
NTP will never step the clock. NTP will adjust the length of each clock
tick to keep the clock on time and so that every tick is present. If the
clock is being stepped, it would be due to other reasons, such as a
manual change.
NTP does not step; NTP slows or accelerates real time. But the effect
is the same---the difference between the two time points does not match
the physical duration.
NTP calls adjtime() or similar to adjust the rate at which the system
clock [1] increments its notion of time to match "real time" obtained
from the NTP timesource, which is either a lower-stratum NTPd server via
the Internet, or a primary time reference like a GPS sensor or atomic
clock.
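
(On Linux, you can peek at the kernel's current clock-discipline state
with a read-only adjtimex() call; a minimal sketch, assuming glibc's
<sys/timex.h>:

    #include <stdio.h>
    #include <sys/timex.h>

    int main(void)
    {
        struct timex tx = { 0 };   /* modes == 0: query, change nothing */
        int state = adjtimex(&tx);

        if (state == -1) {
            perror("adjtimex");
            return 1;
        }
        /* offset: remaining correction; freq: frequency adjustment the
         * kernel is applying on ntpd's behalf. */
        printf("clock state %d, offset %ld, freq %ld\n",
               state, (long)tx.offset, (long)tx.freq);
        return 0;
    }

Non-zero offset/freq values are exactly the slewing being discussed.)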
Post by Petr Pisar
Post by Bob Proulx
But why would there be a time adjustment in between? Stepping the clock
is an abnormal condition. It isn't something that should ever happen
during normal system operation. If your clock is being stepped then
that is a bug and needs to be fixed.
NTP by definition refuses to adjust the clock if the difference is too
big. Thus distributions usually step the clock on first NTP contact,
and then keep adjusting. With mobile hosts losing and regaining network
connectivity on the fly, it's quite possible the system will experience
time steps.
Oh, agreed.

For the normal case, ntpd won't change the time faster than 0.5 ms per
second, but if the sample interval is long enough to contain a network
dropout and re-acquisition, then ntpd might be restarted and do an
initial step of the time rather than slewing.
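(At that 0.5 ms/s cap, slewing can skew a measured interval by at most
0.05%, i.e. roughly 50 ms over a 100-second run.)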

However, on a good day, ntpd will have already figured out the intrinsic
first-order deviation of the local HW clock versus "real time", and as a
result the device will keep better time even through a network outage
than it would otherwise.
Post by Petr Pisar
Post by Bob Proulx
Post by Petr Pisar
I have found no hint anywhere on whether this is intended behavior or
whether one should measure some kind of monotonic time line.
Since stepping the clock is not a normal condition I don't think it
matters. It certainly isn't a problem if the system is running NTP
and the clock is running normally.
I agree one can consider an NTP-adjusted clock as `running normally',
because the reason for the adjustment is that the local real-time clock
is not accurate enough. In this light, CLOCK_MONOTONIC seems good
enough.
Yes, if you want to compute "how long something took" via delta between start
and finish, then using CLOCK_MONOTONIC is likely the best choice.
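
As a sketch of how a time-like tool could do that (assuming POSIX
fork()/execvp()/waitpid(); not how GNU time is actually implemented):

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(int argc, char **argv)
    {
        struct timespec start, end;
        pid_t pid;

        if (argc < 2) {
            fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
            return 2;
        }

        /* Bracket fork/exec/wait with CLOCK_MONOTONIC rather than the
         * realtime clock, so clock steps don't leak into "real". */
        clock_gettime(CLOCK_MONOTONIC, &start);

        pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            execvp(argv[1], &argv[1]);
            perror("execvp");
            _exit(127);
        }
        waitpid(pid, NULL, 0);

        clock_gettime(CLOCK_MONOTONIC, &end);

        printf("real\t%.3fs\n",
               (end.tv_sec - start.tv_sec) +
               (end.tv_nsec - start.tv_nsec) / 1e9);
        return 0;
    }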

Regards,
--
-Chuck

[1]: These days, probably the TSC or maybe ACPI or HPET timers.