Binary search is not a very interesting topic in itself: it is a well-known and fairly simple algorithm. Recently, however, I encountered an interesting variant of it, one that I would like to share.
The Linux kernel defines one jiffy as the period between two consecutive system timer interrupts, and keeps a "jiffies" counter of elapsed ticks. Given jiffies, the next handy parameter is the number of delay-loop iterations the CPU can fit into a single jiffy. With this number at hand (and a known timer frequency), the kernel can support fine-grained busy-wait delays, for instance. How would you implement the computation of this "loops_per_jiffy" parameter?