Don't use a high precision timer unless you really need it. They consume compute cycles and battery. There can only be a limited number of high precision timers active at once. A high precision timer is "first in line", and not every timer can be first. When too many try, all the timers lose accuracy.
Example applications that are candidates for high precision timers are games that need to provide precise frame rates, or daemons that are streaming data to hardware with limited buffering, such as audio or video data.
If you're writing code that needs to synchronize with frame buffer or display updates, Apple has already done much of the hard work for you. If you are developing for iOS, see the CADisplayLink class, found in QuartzCore.framework. If you are targeting OS X, see CVDisplayLink, also found in QuartzCore.framework.
iOS: CADisplayLink Class Reference
There are many APIs in iOS and OS X that allow waiting for a specified period of time. They may be Objective-C or C, and they take different kinds of arguments, but they all end up using the same code inside the kernel. Each timer API tells the kernel it needs to wait until a certain time, for example 10 seconds from now. The kernel keeps track of every thread, and when a timer request comes in, that thread is marked as "I'd like to run in 10 seconds".
The kernel tries to be as frugal as possible with CPU cycles, so if there is no other work to do, it will put the CPUs to sleep for 10 seconds, then wake up and run your thread.
Of course, that is an optimal situation, and in the real world things never work that easily. In a real situation there are many threads that want to run and many threads making timer requests, and the kernel has to manage them all. With thousands of threads and only a few CPUs, it's easy to see how timers might become inaccurate.
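As an illustration, here are two of the many ways to ask the kernel for such a wait. These calls are standard POSIX, not specific to the technote; this is just a sketch of ordinary (non-high-precision) timer requests.

#include <time.h>
#include <unistd.h>

int main(void)
{
    // Both of these ultimately use the same in-kernel wait mechanism described above.
    sleep(10);                                        // POSIX, whole seconds

    struct timespec ts = { .tv_sec = 10, .tv_nsec = 0 };
    nanosleep(&ts, NULL);                             // POSIX, nanosecond granularity
    return 0;
}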
The only difference between a regular timer and a high precision timer is the scheduling class of the thread making the timer request. Threads that are in the real time scheduling class get first class treatment. They go to the front of the line whenever they need to run. If there is a conflict with multiple threads wanting to run in 10 seconds, a real time thread always goes first.
#include <mach/mach.h>
#include <mach/mach_time.h>
#include <pthread.h>
#include <stdlib.h>

void move_pthread_to_realtime_scheduling_class(pthread_t pthread)
{
    mach_timebase_info_data_t timebase_info;
    mach_timebase_info(&timebase_info);

    const uint64_t NANOS_PER_MSEC = 1000000ULL;
    double clock2abs = ((double)timebase_info.denom / (double)timebase_info.numer) * NANOS_PER_MSEC;

    thread_time_constraint_policy_data_t policy;
    policy.period      = 0;
    policy.computation = (uint32_t)(5 * clock2abs);  // 5 ms of work
    policy.constraint  = (uint32_t)(10 * clock2abs); // must finish within 10 ms
    policy.preemptible = FALSE;

    kern_return_t kr = thread_policy_set(pthread_mach_thread_np(pthread),
                                         THREAD_TIME_CONSTRAINT_POLICY,
                                         (thread_policy_t)&policy,
                                         THREAD_TIME_CONSTRAINT_POLICY_COUNT);
    if (kr != KERN_SUCCESS) {
        mach_error("thread_policy_set:", kr);
        exit(1);
    }
}
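For context, here is a minimal sketch of how the function above might be called. The thread entry point and its name are illustrative, not part of the technote.

#include <pthread.h>
#include <stddef.h>

// Hypothetical worker: promote the current thread before the timing-sensitive loop.
static void *timing_thread(void *arg)
{
    (void)arg;
    move_pthread_to_realtime_scheduling_class(pthread_self());
    // ... timing-sensitive work goes here ...
    return NULL;
}

int main(void)
{
    pthread_t thread;
    pthread_create(&thread, NULL, timing_thread, NULL);
    pthread_join(thread, NULL);
    return 0;
}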
The period, computation, constraint, and preemptible fields do have an effect, and more can be learned about them at:
Using the Mach Thread API to Influence Scheduling
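To make those fields concrete, here is an illustrative sketch of how they might be filled in for a periodic 60 Hz workload. The function name and the specific millisecond budgets are assumptions, not values from the technote or the linked document.

#include <mach/mach.h>
#include <mach/mach_time.h>

// Illustrative only: plausible time-constraint values for a periodic 60 Hz workload.
static thread_time_constraint_policy_data_t make_60hz_policy(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    double ns_to_abs = (double)tb.denom / (double)tb.numer;

    thread_time_constraint_policy_data_t policy;
    policy.period      = (uint32_t)(16666666.0 * ns_to_abs); // one frame at 60 Hz (~16.7 ms)
    policy.computation = (uint32_t)( 4000000.0 * ns_to_abs); // expect ~4 ms of CPU per frame
    policy.constraint  = (uint32_t)( 8000000.0 * ns_to_abs); // finish within 8 ms of each period start
    policy.preemptible = TRUE;                               // may be preempted between chunks of work
    return policy;
}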
As mentioned above, all the timer methods end up in the same place inside the kernel. However, some of them are more efficient than others. At the time this note was written, mach_wait_until() is the API we would recommend using. It has the lowest overhead of all the timing APIs we measured. However, if you have specific needs that aren't met by mach_wait_until(), for example you need to wait on a condvar, then feel free to use the appropriate timer API (see the condition-variable sketch after the example below).
#include <mach/mach.h>
#include <mach/mach_time.h>

static const uint64_t NANOS_PER_USEC = 1000ULL;
static const uint64_t NANOS_PER_MILLISEC = 1000ULL * NANOS_PER_USEC;
static const uint64_t NANOS_PER_SEC = 1000ULL * NANOS_PER_MILLISEC;

static mach_timebase_info_data_t timebase_info;

// Convert Mach absolute time (timebase ticks) to nanoseconds and back.
static uint64_t abs_to_nanos(uint64_t abs) {
    return abs * timebase_info.numer / timebase_info.denom;
}

static uint64_t nanos_to_abs(uint64_t nanos) {
    return nanos * timebase_info.denom / timebase_info.numer;
}

void example_mach_wait_until(int argc, const char * argv[])
{
    mach_timebase_info(&timebase_info);
    uint64_t time_to_wait = nanos_to_abs(10ULL * NANOS_PER_SEC); // wait for 10 seconds
    uint64_t now = mach_absolute_time();
    mach_wait_until(now + time_to_wait);
}
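As one example of a case mach_wait_until() does not cover, here is a minimal sketch of a timed wait on a condition variable using pthread_cond_timedwait(). The shared flag, the names, and the millisecond timeout are illustrative assumptions.

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <sys/time.h>
#include <time.h>

// Hypothetical shared state guarded by a mutex/condition-variable pair.
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool data_ready = false;

// Wait up to timeout_ms for another thread to set data_ready and signal cond.
static bool wait_for_data(unsigned timeout_ms)
{
    struct timeval now;
    gettimeofday(&now, NULL);

    // pthread_cond_timedwait() takes an absolute deadline on the realtime clock.
    struct timespec deadline;
    deadline.tv_sec  = now.tv_sec + timeout_ms / 1000;
    deadline.tv_nsec = now.tv_usec * 1000 + (timeout_ms % 1000) * 1000000;
    if (deadline.tv_nsec >= 1000000000) {
        deadline.tv_sec  += 1;
        deadline.tv_nsec -= 1000000000;
    }

    pthread_mutex_lock(&lock);
    int rc = 0;
    while (!data_ready && rc != ETIMEDOUT) {
        rc = pthread_cond_timedwait(&cond, &lock, &deadline);
    }
    bool ok = data_ready;
    pthread_mutex_unlock(&lock);
    return ok;
}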
Timer accuracy depends on many factors, including the type of hardware being run on, the load on that hardware, and the power available (battery vs. plugged in). However, if an otherwise unloaded machine or device is consistently missing your scheduled time by more than 500 microseconds, that should be considered an error; please file a bug with Apple. Often we can do much better than that, but it's best to measure for yourself.
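A minimal sketch of such a measurement, assuming the timebase_info global, the abs_to_nanos()/nanos_to_abs() helpers, and the NANOS_PER_* constants from the listing above are in scope:

#include <mach/mach_time.h>
#include <stdio.h>

// Measure how far a 10 ms mach_wait_until() request overshoots the target.
static void measure_timer_miss(void)
{
    mach_timebase_info(&timebase_info);

    uint64_t target = mach_absolute_time() + nanos_to_abs(10ULL * NANOS_PER_MILLISEC);
    mach_wait_until(target);

    uint64_t late_nanos = abs_to_nanos(mach_absolute_time() - target);
    printf("woke %llu us after the requested time\n",
           (unsigned long long)(late_nanos / NANOS_PER_USEC));
}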
Timers are not accurate across a sleep and wake cycle of the hardware.
Do use as little CPU as possible during your timer loop. Remember that your thread now has special privileges, and when you're running, no one else can. Do try to stretch out your timer requests as much as you safely can. This allows the kernel to use less battery by sleeping more often and longer.
Don't spin loop! This burns CPU and battery at very high rates, and when newer, faster hardware is released you'll burn CPU and battery even faster!
Don't create large numbers of real time threads. Real time scheduling isn't magic; it works by making your thread higher priority than other threads on the system. If everyone tries to crowd to the front of the line, all the real time threads will fail.
While looking for an equivalent of the Windows GetTickCount() function, I found one called mach_absolute_time(), but Apple's documentation for it is not very helpful, and it took quite a while to work out what it actually does. It turns out the value it returns is just a tick count of the system CPU/bus clock since boot. Unlike GetTickCount(), which returns the number of milliseconds since the system booted, you have to perform a conversion to get the time since boot.
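A minimal sketch of that conversion, producing milliseconds since boot in the spirit of GetTickCount(); the function name is illustrative, and note that mach_absolute_time() does not advance while the machine is asleep.

#include <mach/mach_time.h>
#include <stdint.h>
#include <stdio.h>

// Illustrative helper: milliseconds of uptime derived from mach_absolute_time().
// The numer/denom ratio from mach_timebase_info() converts ticks to nanoseconds.
static uint64_t milliseconds_since_boot(void)
{
    static mach_timebase_info_data_t timebase;
    if (timebase.denom == 0) {
        mach_timebase_info(&timebase);
    }
    uint64_t nanos = mach_absolute_time() * timebase.numer / timebase.denom;
    return nanos / 1000000ULL;
}

int main(void)
{
    printf("uptime: %llu ms\n", (unsigned long long)milliseconds_since_boot());
    return 0;
}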
https://developer.apple.com/library/ios/technotes/tn2169/_index.html
Original source: http://www.cnblogs.com/wfwenchao/p/4685953.html