Time measurements for nim and nim_con implementations #482
Unanswered
michaelsbradleyjr asked this question in Q&A
Replies: 1 comment 1 reply
-
Sounds like an elusive bug. I think https://forum.nim-lang.org/ would be a better outlet for this discussion as I don't know enough about Nim and cannot add value to the convo.
-
I'm hoping this post will amount to rubber 🦆ing on my part. If not, then I'm looking for suggestions on how to further dissect a problem I've encountered with `nim` and `nim_con`.

Context: changes I made in #471 and 29141f3 re: how time is measured for `func process`. On the way to 29141f3 I was modifying this bit of code.
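Roughly, the bookkeeping in question looks like the following (a from-memory sketch rather than a verbatim copy of the repo code; `process` here is a stand-in for the real routine and the formatting is simplified):

```nim
import std/[monotimes, times, strformat]

# Stand-in for the real workload (`func process` in the repo).
func process() =
  discard

when isMainModule:
  let t0 = getMonoTime()
  process()
  let t1 = getMonoTime()
  # Report elapsed wall time in milliseconds with 2 decimal places.
  let ms = (t1 - t0).inMicroseconds.float / 1000.0
  echo &"time: {ms:.2f}ms"
```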
In #456 I mentioned "I noticed some flakiness..." My changes involved splitting out a `let t1 = getMonotime()` and manipulating the stringified `(t1 - t0).in...` so that it always has 2 decimal places. I observed a strange effect in AWS/Docker, i.e. variations of my code resulted in wildly different reported times. Those variations involved generating the time-string in fewer or more steps, in a helper function or via compound expressions in `main`.
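To give a flavor of what "fewer or more steps" means (paraphrased from memory, not the actual diffs), the variations were along these lines:

```nim
import std/[monotimes, times, strutils]

let t0 = getMonoTime()
# ... the timed work happens here ...

# Variation A: one compound expression, built inline in `main`.
echo "time: ",
  formatFloat((getMonoTime() - t0).inMicroseconds.float / 1000.0, ffDecimal, 2),
  "ms"

# Variation B: split out `let t1 = getMonotime()` and build the string in steps.
let t1 = getMonoTime()
let elapsed = t1 - t0
let ms = elapsed.inMicroseconds.float / 1000.0
echo "time: ", ms.formatFloat(ffDecimal, 2), "ms"
```

Functionally these should be identical; only the bookkeeping around `getMonotime()` differs.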
The effect is real, inasmuch as I repeatedly get consistently different reported times. There is no apparent reason why one variation results in faster or slower reported times than another. Importantly, those timing differences don't involve changes to the work being done by `func process` (or the routines it calls), only bookkeeping for `getMonotime()`.
I brought nim-decimal into the picture in #456 and thought that worked around the undiagnosed problem (maybe something related to floats?), but I had only gotten lucky, and then got lucky again in #471.
When starting on some changes to `nim_con`, my first step was to adapt the use of nim-decimal to the multi-threaded code: jinyus:main...michaelsbradleyjr:nim_rev_19. But that consistently increases the measured time for `nim_con` 60k runs from ~567ms to ~809ms ❗ Note again, it's just bookkeeping for `getMonotime()`.
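The adaptation boils down to doing the milliseconds arithmetic with nim-decimal instead of floats. Something in this spirit (a rough sketch, assuming nim-decimal imports as `decimal` and exposes `newDecimal`, `/`, and `$`; it is not the exact code in the branch, and rounding to exactly 2 decimal places is elided):

```nim
import std/[monotimes, times]
import decimal  # nim-decimal (assumed import name)

proc fmtMillis(t0, t1: MonoTime): string =
  ## Build the milliseconds string with decimal arithmetic rather than floats.
  let us = (t1 - t0).inMicroseconds   # elapsed time as int64 microseconds
  result = $(newDecimal($us) / newDecimal("1000")) & "ms"
```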
I restarted tinkering in both `nim` and `nim_con`, trying to determine a pattern/cause. For example, if in `nim/src/related.nim` the formatting helper is changed to `proc fmtMilliseconds(d: Duration)` and it's called with `t1 - t0`, it causes a similar increase in reported times.
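Concretely, that variation is roughly the following (the helper's name and signature are as described above; the body and call site are approximate):

```nim
import std/[monotimes, times, strutils]

proc fmtMilliseconds(d: Duration): string =
  ## Format a Duration as milliseconds with 2 decimal places.
  result = formatFloat(d.inMicroseconds.float / 1000.0, ffDecimal, 2) & "ms"

when isMainModule:
  let t0 = getMonoTime()
  # ... `func process` and the routines it calls run here ...
  let t1 = getMonoTime()
  echo "time: ", fmtMilliseconds(t1 - t0)
```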
So far, I'm stumped. While I observe the effect consistently in AWS/Docker and Azure/Docker, I am unable to reproduce it locally on my Mac laptop (Intel). There are several variables in play, from compiler flags in `config.nims` (e.g. LTO) to the version of clang, etc.
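For reference, the kind of flags I mean (illustrative only; the exact contents of `config.nims` vary per implementation, so treat these as placeholders):

```nim
# config.nims (NimScript) -- illustrative, not the repo's exact configuration
switch("cc", "clang")       # compile with clang
switch("d", "danger")       # disable runtime checks
switch("passC", "-flto")    # the LTO flag mentioned above
switch("passL", "-flto")
```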
One thing I ruled out is that it's owing to something in the Dockerfile or docker_start.sh. That is, if I start with `docker run -it --rm archlinux:base`, add only what I need for Nim and clang, and do 60k runs manually, I get the same effect.
I hate to say it, but this is a real confidence-buster for continuing my work on the Nim implementations. Something I noticed while investigating the problem: depending on the variation for `getMonotime()` bookkeeping, changes in `func process`, etc. can have different outcomes, i.e. a change that decreases the time with one variation can increase it with another. Upon seeing that, I realized I've lost any basis for determining what's faster/better. I also wonder if this is strictly a Nim problem or if a similar issue may be lurking in other implementations.