Friday, November 22, 2013
To avoid everyone repeating the same mistakes, I've written up some of the lessons learned while botching the job for the drm/i915 driver. Most of these only cover technicalities, not big-picture issues like what the command submission ioctl should look like exactly. Learning those lessons is probably something every GPU driver has to do on its own.
Tuesday, November 12, 2013
This specifically excludes testing with humans somewhere in the loop. We are extremely limited in our validation resources; every time we put something new onto the "manual testing" plate, something else will fall off.
I've let this float for quite a while, both internally at Intel and on the public mailing lists. Thanks to everyone who provided valuable input. Essentially this just codifies my already existing expectations as the maintainer, but those requirements haven't really been clear, and a lot of emotional discussions ensued. With this we should now have solid guidelines and can go back to coding instead of blowing so much time and energy on waging flamewars.
Saturday, November 2, 2013
Saturday, September 21, 2013
So after the recent overview of our kernel test infrastructure I've had to write a bunch of multithreaded testcases. And since I'm rather lazy I've opted to create a few more helpers that hide all the little details of forking and joining processes. While at it I think it's also a good time to explain a bit of the infrastructure we have for running the kernel testsuite on simulated hardware, and a few other generally useful things in our testsuite helper library.
Monday, September 2, 2013
Wednesday, August 28, 2013
So recently I've again wreaked decent havoc in our intel-gpu-tools kernel testsuite. And then noticed, to my shock, that I've never done a big pompous blog post to announce what we started about one and a half years ago. So besides describing the new infrastructure for writing testcases (apparently every decent hacker must reinvent a test framework at least once ...) I'll also recap a bit of what's been going on in the past.
Wednesday, July 24, 2013
Tuesday, July 23, 2013
Like I've briefly mentioned when explaining the new modeset code merged into kernel 3.7, one of the goals was to facilitate atomic modesetting and fastboot. Now the big new requirement for both atomic modesetting and fastboot is that we precompute the entire output configuration before we start touching the hardware. And this includes things like pll settings, internal link configuration and watermarks. For atomic modesetting we need this to be able to decide up-front whether a configuration requested by userspace works, or whether we don't have enough bandwidth, display plls or some other resource. For fastboot we need to be able to compute and track the full display hardware state in software so that we can compare a requested configuration from userspace with the boot-up state taken over from the BIOS. Otherwise we might end up with suboptimal settings inherited from the firmware or worse, we'd try to reuse some resources still in use by the firmware configuration (like when the BIOS and the Linux driver pick different display plls).
Now the new modeset code merged into 3.7 already added such state tracking and precomputation for the output routing between crtcs and connectors. Over the past months, starting with the just released 3.10 linux kernel we've added new infrastructure and tons of code to track the low-level hardware state like display pll selection, FDI link configuration and tons of other things. Read on below for an overview of how the new magic works.
Friday, July 5, 2013
So sometimes someone wants the latest drm kernel driver bits, but please on a ridiculously outdated stable kernel like last month's release. Or much worse, actually something from last year.
Ok, I'm getting ahead of myself here; usually these things start with a request for a "minimal" backport to enable $NEW_PLATFORM or some other really important feature. But the lesson that drm drivers are complex beasts and that such minimal backports tend to require full validation again, plus a lot of work to fix up the fallout, is usually learned quickly. So backporting the entire beast with all drivers it is.
The next obvious question is why we can't just copy the entire thing, since reconstructing the git history is rather tedious work, and usually ends up with a few commits in between that don't quite work correctly. Which means that the backported git history is usually of little value. The answer to that question tends to boil down to "we've promised this to the customer ...". So if you're in such a situation, read on below on how to most efficiently wrestle git to forklift the history of an entire subsystem onto an older kernel release.
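The core git trick for this kind of forklift is `git rebase --onto`, which replays a range of history onto a different base. Here's a self-contained toy demonstration; the repo, tags and file names are invented for illustration, not an actual kernel backport:

```shell
#!/bin/sh -e
# Toy demo of `git rebase --onto`: replay a branch's commits onto an
# older release. All names here are made up for illustration.
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email "backporter@example.com"
git config user.name "Backporter"

# Fake upstream history: an old release and a newer one.
git commit -q --allow-empty -m "old stable release"
git tag v-old
git commit -q --allow-empty -m "new upstream release"
git tag v-new

# Feature work on top of the new release.
git checkout -q -b feature
echo hack > drm.c
git add drm.c
git commit -q -m "drm: shiny new feature"

# The forklift: take everything in v-new..feature and replay it
# onto the old release instead.
git rebase -q --onto v-old v-new feature

# feature is now just the drm commit sitting on top of v-old.
git log --oneline v-old..feature
```

For a whole subsystem the range contains thousands of commits and plenty of conflicts, but the shape of the operation is the same: pick the right merge base, pick the right new base, and let git replay the history.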