Experience report: Linux pwm-ir-tx accuracy/reliability

By David Röthlisberger. Tweet your comments @drothlis.

Published 12 Mar 2023.

At Stb-tester we send infrared remote-control signals to test TVs and set-top boxes, and we analyse the video output from the device-under-test to verify that it is behaving correctly. For this automated testing to be viable at scale, it’s crucial that we can rely on our remote-control emulation to be 100% accurate.

For many years we have used LIRC’s ftdix user-space driver, which offloads the entire signal to an external USB device. Recently we started experimenting with the in-kernel pwm-ir-tx driver. Here the PWM hardware generates the carrier signal by switching the infrared output on & off at the desired frequency (say 38kHz), but the CPU still has to start & stop that carrier at the beginning & end of each “pulse”, and these pulses, or the spaces between them, can be as short as 200µs. If the durations of the pulses and spaces aren’t accurate enough, the receiver might not recognize the signal.

We have a pretty decent test-bench for measuring the accuracy/reliability of sent IR signals: we can send, say, 500,000 IR signals and verify that each & every one of them had the desired effect on the receiver (e.g. a Roku set-top-box). For more details of the test-bench see https://stb-tester.com/blog/2016/05/26/ir-post-mortem (you could set this up cheaply with stb-tester and a v4l2 HDMI capture device).

On its own, pwm-ir-tx works well enough. But when the system is under load, the accuracy drops drastically. Our system needs to run video-capture processes, a WebRTC client, etc., which I guess cause a lot of interrupts and general competition for CPU time. Under normal usage our load average is around 4.5 (on a quad-core CPU), with 16,000-19,000 context switches per second.

We tried many things to make the IR signals reliable with pwm-ir-tx under these conditions. None of the following experiments worked: in each case at least 2% of IR signals failed, often far more. “Failing” means that the device receiving these signals didn’t recognize them:

In the end, the only thing that worked was all of these changes together:

Under that configuration, we successfully sent 500,000 IR signals without any failures (aka “missed keypresses” on the Roku) but it sucks to dedicate an entire CPU to such a trivial task.

I found this stackexchange answer helpful (bold emphasis mine):

The smallest time you can wait for with nanosleep() is driven by several factors:

With the default timer slack value of 50 µs, even calling nanosleep with a 0 ns or 1 ns duration effectively waits for 50 µs or so, if your process runs under normal (OTHER) scheduling.

You can disable the timer slack mechanism via a prctl() call or by running your process under a realtime scheduling class.

However, a nanosleep call then still yields a voluntary context switch, which takes 3 µs or so, depending on your hardware.

We use SCHED_FIFO so timer slack should be disabled; that leaves context-switch time and timer accuracy.

See also the kernel documentation on the various delay / sleep functions.

Possible future work:

In a private email exchange, Sean Young (Linux kernel infrared maintainer) said:

pwm-ir-tx is not great, I agree. gpio-ir-tx should be very accurate. However, that holds a cpu core for the duration of the send, which is not good. It does not play nicely with other parts of the kernel (e.g. the realtime patches).

With pwm-ir-tx the hope was that it would replace gpio-ir-tx without holding a cpu core. In reality it does not work well enough, scheduling is just not predictable enough for this. We could just hold the cpu while sending with pwm-ir-tx, but then it’s just not worth it to use pwm, might as well use gpio-ir-tx, the cpu is busy spinning anyway.