Monday, April 16, 2007

Sleep considered harmful

Even though I'm not a big fan of biological sleep, in this post I'm talking about the "sleep" function available in Python. While we've all been warned that sleep may take longer to return than requested, most people don't realize just how coarse-grained sleep is on their operating system.

Here's a quiz: What is the minimum sleep interval on your operating system (other than 0)?

a) more than 10ms
b) 10ms
c) 4ms
d) 1ms
e) 0.1ms
f) less than 0.1ms

The answer: it depends. :)

On most flavors of Windows the answer is probably "a" (see below for a script to measure this for yourself). On operating systems with the Linux kernel the answer is one of 10, 4, or 1ms.

Why the variability on Linux? Linux uses a programmable timer to tick at a certain frequency, and then performs periodic tasks on each tick. The more ticks, the more often it can task switch between processes (among other things), but there's also overhead in handling each interrupt, so higher tick rates reduce throughput.

Some distributions think a "server" kernel should have a tick frequency of 100Hz to allow for more to be done between interrupts; others think the clock should tick at 1000Hz to allow for more rapid switching between tasks. Then there's the middle ground of 250Hz that some distributions use for desktop machines (giving 4ms sleep granularity). This isn't a simple issue; much discussion has taken place over the default setting and what its consequences are.

This is all very interesting, but what does it have to do with sleep? Python (indirectly) implements sleep based on the OS tick frequency. So if your OS has a frequency of 100Hz and you ask for a 15ms sleep, on the first tick you still have 5ms left on your sleep so you won't get woken up until the next tick. You asked for 15ms and got 20. What do you think would have happened if you'd asked for 1 or even 0.01ms instead? That's right, you'd have gotten 10ms.
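You can see this rounding directly by timing a short sleep request. A quick sketch (the exact numbers depend entirely on your kernel's tick rate):

```python
import time

requested = 0.001          # ask for a 1ms sleep
start = time.time()
time.sleep(requested)
elapsed = time.time() - start
# On a 100Hz-tick kernel, elapsed will be around 10ms -- roughly
# ten times what was requested.
```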

Now, if you're waiting for a reply from a socket, the average round-trip time is around 1ms, and you sleep for 1ms after sending the request, you just made your network app 10 times slower than it needs to be. This is where everyone says "use select instead", and they'd be right. "select" represents the event-based approach that is generally superior to a poll-based approach involving sleep (libevent, for example).
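As a sketch of the event-based alternative: select blocks until data is actually ready, rather than polling on a timer. A socketpair stands in here for a real network connection, purely to keep the example self-contained:

```python
import select
import socket

# A connected pair of sockets, standing in for a client/server link.
a, b = socket.socketpair()
b.send(b"reply")

# Block until 'a' is readable, for at most 1 second -- no sleep loop,
# so we wake up as soon as the data arrives.
readable, _, _ = select.select([a], [], [], 1.0)
if readable:
    data = a.recv(1024)
```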

What if you really do need to sleep for less time than the sleep will allow? On unix-like operating systems there's a function (normally) available that will do what you want: "nanosleep". Using nanosleep you can request (and get) much smaller sleep times. Windows also has APIs to do finer-grained sleeping. Either should be accessible through ctypes (included in Python 2.5).
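Here's one way the nanosleep route might look through ctypes on a unix-like system. The timespec layout below assumes both fields are C longs, which holds on common Linux platforms but is an assumption, not a guarantee:

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6",
                   use_errno=True)

class timespec(ctypes.Structure):
    # Assumes time_t and the nanoseconds field are both C long,
    # as on typical Linux systems.
    _fields_ = [("tv_sec", ctypes.c_long),
                ("tv_nsec", ctypes.c_long)]

def nanosleep(seconds):
    whole = int(seconds)
    req = timespec(whole, int((seconds - whole) * 1e9))
    rem = timespec()          # filled in if the sleep is interrupted
    libc.nanosleep(ctypes.byref(req), ctypes.byref(rem))

nanosleep(0.0005)   # request a 0.5ms sleep
```

Whether a request that small is honored still depends on the kernel and its timer configuration; the point is that nanosleep lets you ask.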

Curious what your minimum sleep time is? Here's a little Python script that will tell you:

import time

LOOPS = 100

def do_sleep(min_sleep):
    # Average the actual duration of LOOPS sleeps of min_sleep seconds.
    total = 0.0
    for i in range(LOOPS):
        c = time.time()
        time.sleep(min_sleep)
        x = time.time()
        total = total + x - c
    return total / LOOPS

# Keep doubling the requested sleep until the actual time is within
# a factor of two of the request; that's roughly the minimum sleep.
min_sleep = 0.000001
result = None
while True:
    result = do_sleep(min_sleep)
    if result > 2 * min_sleep:
        min_sleep *= 2
    else:
        break

print result