stuff by sima

Vetter's ramblings on code, bugs, graphics, hw and the utter lack of sanity in all this.

  • Upstream, Why & How

    In a different epoch, before the pandemic, I did a presentation about upstream first at the Siemens Linux Community Event 2018, where I tried to explain the fundamentals of open source using microeconomics. Unfortunately that talk didn’t work out too well with an audience that wasn’t well-versed in upstream and open source concepts, largely because it was just too much material crammed into too little time.

    Last year I got the opportunity to try again for an Intel-internal event series, and this time I’ve split the material into two parts. I think that worked a lot better. For obvious reasons I cannot publish the recordings, but I can publish the slides.

    The first part “Upstream, Why?” covers a few concepts from microeconomics 101, and then applies them to upstream open source. The key concept is that open source achieves an efficient software market in the microeconomic sense by driving margins and prices to zero. The only way to make money in such a market is to either have more-or-less stable barriers to entry that prevent the efficient market from forming and destroying all monetary value, or to sell a complementary product.

    The second part “Upstream, How?” then looks at what this all means for the different stakeholders involved:

    • Individual engineers, who have skills and create a product with zero economic value, and might still be stupid enough to try to build a career on that.

    • Upstream communities, often with a formal structure as a foundation, and what exactly their goals should be to build a thriving upstream open source project that can actually pay some bills, generate some revenue somewhere else and get engineers paid. Because without that you’re not going to have much of a project with a long term future.

    • Engineering organizations, what exactly their incentives and goals should be, and the fundamental conflicts of interest this causes. Specifically on this I’ve only seen bad solutions, and ugly solutions, but not yet a really good one. A relevant pre-pandemic talk of mine on this topic is also “Upstream Graphics: Too Little, Too Late”.

    • And finally the overall business and, more importantly, what kind of business strategy is needed to really thrive with an open source upstream first approach: You need to clearly understand which software market’s economic value you want to destroy by driving margins and prices to zero, and which complementary product you’re selling to still earn money.

    At least judging by the feedback I’ve received internally, taking more time and going a bit more in-depth on the various concepts worked much better than the keynote presentation I did at Siemens, hence I decided to publish at least the slides.

  • EOSS Prague: Kernel Locking Engineering

    EOSS in Prague was great: lots of hallway track, good talks, good food, excellent tea at meetea - first time I had proper tea in my life, quite an experience. It was also my first talk since covid, packed room with a standing audience, apparently one of the top ten most attended talks per LF’s conference report.

    The video recording is now uploaded, and I’ve also uploaded the fixed slides, including the missing slide that I accidentally cut in a last-minute edit. It’s the same content as my blog posts from last year, first talking about locking engineering principles and then the hierarchy of locking engineering patterns.

  • Locking Engineering Hierarchy

    The first part of this series covered principles of locking engineering. This part goes through a pile of locking patterns and designs, from the most favourable, which are easiest to adjust and hence result in a long-term maintainable code base, to the least favourable, which are hardest to ensure work correctly and stay that way while the code evolves. For convenience it’s even color coded, with the dangerous levels getting progressively more crispy red, indicating how close to the burning fire you are! Think of it as Dante’s Inferno, but for locking.

    As a reminder from the intro of the first part, by locking engineering I mean the art of ensuring that there’s sufficient consistency in reading and manipulating data structures, and not just sprinkling mutex_lock() and mutex_unlock() calls around until the result looks reasonable and lockdep has gone quiet.
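
    To illustrate with a toy sketch (made up for this index, not from the talk): sprinkled locking protects each individual access, but not the invariant spanning them.

    #include <linux/list.h>
    #include <linux/mutex.h>

    struct item_queue {
        struct mutex lock;
        struct list_head items;
        int count;  /* invariant: matches the length of the items list */
    };

    /* Sprinkled locking: every access is locked, yet a reader between the
     * two critical sections sees the new item without the updated count. */
    static void broken_add(struct item_queue *q, struct list_head *item)
    {
        mutex_lock(&q->lock);
        list_add(item, &q->items);
        mutex_unlock(&q->lock);

        mutex_lock(&q->lock);
        q->count++;
        mutex_unlock(&q->lock);
    }

    /* Locking engineering: one critical section covers the entire invariant. */
    static void sound_add(struct item_queue *q, struct list_head *item)
    {
        mutex_lock(&q->lock);
        list_add(item, &q->items);
        q->count++;
        mutex_unlock(&q->lock);
    }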

  • Locking Engineering Principles

    For various reasons I spent the last two years looking way too much at code with terrible locking design and trying to rectify it, instead of building a lot more cool things. Symptomatically, the last post here on my neglected blog is also a rant on lockdep abuse.

    I tried to distill all the lessons learned into some training slides, and this two-part series is the writeup of the same. There are some GPU-specific rules, but I think the key points should at least apply to kernel drivers in general.

    The first part here lays out some principles, the second part builds a locking engineering design pattern hierarchy, from the easiest to understand and maintain to the most nightmare-inducing approaches.

    Also, by locking engineering I mean the general problem of protecting data structures against concurrent access by multiple threads: trying to ensure that each thread reads a sufficiently consistent view of the data, and that the updates it commits won’t result in confusion. What exactly “sufficiently consistent” means of course highly depends upon the precise requirements, but figuring out these kinds of questions is out of scope for this little series here.
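
    As a tiny made-up example of an insufficiently consistent view: two locked reads are not the same as one locked read of both fields.

    #include <linux/mutex.h>

    struct fb_size {
        struct mutex lock;
        int width, height;  /* always updated together under the lock */
    };

    /* Not sufficiently consistent: each read is locked, but a concurrent
     * resize between the two critical sections yields a width/height pair
     * that never existed. The fix is one critical section around both reads. */
    static long pixels_racy(struct fb_size *s)
    {
        long w, h;

        mutex_lock(&s->lock);
        w = s->width;
        mutex_unlock(&s->lock);

        mutex_lock(&s->lock);
        h = s->height;
        mutex_unlock(&s->lock);

        return w * h;
    }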

  • Lockdep False Positives, some stories about

    Recently we’ve looked a bit at lockdep annotations in the GPU subsystems, and I figured it’s a good opportunity to explain how this all works, and what the tradeoffs are. Creating working locking hierarchies for the kernel isn’t easy, making sure the kernel’s locking validator lockdep is happy and reviewers don’t have their brains explode even more so.

    First things first, and the fundamental issue:

    Lockdep is about trading false positives against better testing.

    The only way to avoid false positives for deadlocks is to only report a deadlock when the kernel actually deadlocked. Which is useless, since the entire point of lockdep is to catch potential deadlock issues before they actually happen. Hence false positives are not avoidable, at least not in theory, if we want to report potential issues before they hang the machine. Read on for what to do in practice.
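
    A minimal made-up illustration of that trade-off: lockdep complains about the inverted lock order below the first time both functions merely run, long before the unlucky interleaving that would actually deadlock.

    #include <linux/mutex.h>

    static DEFINE_MUTEX(lock_a);
    static DEFINE_MUTEX(lock_b);

    static void path_one(void)
    {
        mutex_lock(&lock_a);
        mutex_lock(&lock_b);  /* lockdep records the order a -> b */
        mutex_unlock(&lock_b);
        mutex_unlock(&lock_a);
    }

    static void path_two(void)
    {
        mutex_lock(&lock_b);
        mutex_lock(&lock_a);  /* b -> a: potential ABBA deadlock reported */
        mutex_unlock(&lock_a);
        mutex_unlock(&lock_b);
    }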

  • Upstream Graphics: Too Little, Too Late

    Unlike the tradition of my past few talks at Linux Plumbers or Kernel conferences, this time around in Lisboa I did not start out with a rant proposing to change everything. Instead I celebrated roughly 10 years of upstream graphics progress and finally achieving paradise. But that was all just prelude to a few bait-and-switches to later fulfill expectations on what’s broken this time around in upstream, totally, and what needs to be fixed and changed, maybe.

    The LPC video recording is now released, slides are uploaded. If neither of that is to your taste, read below the break for the written summary.

  • ELCE Lyon: Everything Great About Upstream Graphics

    At ELC Europe in Lyon I held a nice little presentation about the state of upstream graphics drivers, and how absolutely awesome it all is. Of course with a big focus on SoC and embedded drivers. Slides and the video recording are available.

    Key takeaways for the busy:

    • The upstream DRM graphics subsystem really scales down to tiny drivers now, with the smallest driver coming in at just around 250 lines (including comments and whitespace), 10’000x less than the biggest!

    • Batteries all included - there’s modular helpers for everything. As a rule of thumb even minimal legacy fbdev drivers ported to DRM shrink by a factor of 2-4 thanks to these helpers taking care of anything that’s remotely standardized in displays and GPUs.

    • For shipping userspace drivers go with a dual-stack: Open source GL and Vulkan drivers for those who want that, and for getting the kernel driver merged into upstream. Closed source for everyone else, running on the same userspace API and kernel driver.

    • And for customer support, backport the entire subsystem and try to avoid backporting an individual driver.

    In other words, world domination is assured and progressing according to plan.

  • Upstream First

    lwn.net just featured an article on the sustainability of open source, which seems to have been a bit of a topic in various places for a while. I gave a keynote at the Siemens Linux Community Event 2018 last year which lends itself to a different take on all this:

    The slides for those who don’t like videos.

    This talk was mostly aimed at managers of engineering teams and projects with fairly little experience in shipping open source, and much less experience in shipping open source through upstream cross-vendor projects like the kernel. It goes through all the usual failings and missteps and explains why an upstream first strategy is the right one, but with a twist: Instead of technical reasons, it’s all based on economic considerations of why open source is succeeding. Fundamentally it’s not about the better software, or the cheaper price, or that the software freedoms are a good thing worth supporting.

    Instead open source is eating the world because it enables a much more competitive software market. And all the best practices around open development are just there to enable that highly competitive market. Instead of arguing that open source has open development and strongly favours public discussions because that results in better collaboration and better software, we put on the economic lens, and private discussions become insider trading and collusion. And that’s just not considered cool in a competitive market. Similar arguments can be made for everything else going on in open source projects.

    Circling back to the list of articles at the top, I think it’s worth looking at the sustainability of open source as an economic issue of an extremely competitive market, in other words, as a market failure: Occasionally the result is that no one gets paid, and the customers only receive a sub-par product with all costs externalized - costs like keeping up with security issues. And like with other market failures, a solution needs to be externally imposed through regulations, taxation and transfers to internalize all the costs again into the product’s price. Frankly, no idea what that would look like in practice though.

    Anyway, just a thought, but good enough a reason to finally publish the recording and slides of my talk, which covers this just in passing in an offhand remark.

    Update: Fix slides link.

  • X.org Elections: freedesktop.org Merger - Vote Now!

  • Why no 2D Userspace API in DRM?

    The DRM (direct rendering manager, not the content protection stuff) graphics subsystem in the linux kernel does not have a generic 2D acceleration API, despite an awful lot of GPUs having more or less featureful blitter units. And many systems need them for a lot of use-cases, because the 3D engine is a bit too slow or too power hungry for just rendering desktops.

    It’s a FAQ why this doesn’t exist and why it won’t get added, so I figured I’ll answer this once and for all.

  • Linux Kernel Maintainer Statistics

    As part of preparing my last two talks at LCA on the kernel community, “Burning Down the Castle” and “Maintainers Don’t Scale”, I have looked into how the Kernel’s maintainer structure can be measured. One very interesting approach is looking at the pull request flows, for example done in the LWN article “How 4.4’s patches got to the mainline”. Note that in the linux kernel process, pull requests are only used to submit development from entire subsystems, not individual contributions. What I’m trying to work out here isn’t so much the overall patch flow, but rather how maintainers work, and how that’s different in different subsystems.

  • LCA Sydney: Burning Down the Castle

    I’ve done a talk about the kernel community. It’s a hot take, but with the feedback I’ve received thus far I think it was spot on, and it started a lot of uncomfortable, but necessary discussion. I don’t think it’s time yet to give up on this project, even if it will take years.

    Without further ado the recording of my talk “Burning Down the Castle” is on youtube. For those who prefer reading, LWN has you covered with “Too many lords, not enough stewards”. I think Jake Edge and Jon Corbet have done an excellent job in capturing my talk in a balanced fashion. I have also uploaded my slides.

    Further Discussion

    For understanding abuse dynamics I can’t recommend “Why Does He Do That?: Inside the Minds of Angry and Controlling Men” by Lundy Bancroft enough. All the examples are derived from a few decades of working with abusers in personal relationships, but the patterns and archetypes that Lundy Bancroft extracts transfer extremely well to any other kind of relationship, whether that’s work, family or open source communities.

    There’s an endless amount of stellar talks about building better communities. I’d like to highlight just two: “Life is better with Rust’s community automation” by Emily Dunham and “Have It Your Way: Maximizing Drive-Thru Contribution” by VM Brasseur. For learning more there’s lots of great community topic tracks at various conferences, but also dedicated ones - often run as unconferences: Community Leadership Summit, including its various offspring, and maintainerati are two I’ve been at and learned a lot from.

    Finally there’s the fun of trying to change a huge existing organization with lots of inertia. “Leading Change” by John Kotter has some good insights and frameworks to approach this challenge.

    Despite what it might look like I’m not quitting kernel hacking nor the X.org community, and I’m happy to discuss my talk over mail and in upcoming hallway tracks.

  • Why Github can't host the Linux Kernel Community

    A while back at the awesome maintainerati I chatted with a few great fellow maintainers about how to scale really big open source projects, and how github forces projects into a certain way of scaling. The linux kernel has an entirely different model, which maintainers hosting their projects on github don’t understand, and I think it’s worth explaining why and how it works, and how it’s different.

    Another motivation to finally get around to typing this all up is the HN discussion on my “Maintainers Don’t Scale” talk, where the top comment boils down to “… why don’t these dinosaurs use modern dev tooling?”. A few top kernel maintainers vigorously defend mailing lists and patch submissions over something like github pull requests, but at least some folks from the graphics subsystem would love more modern tooling which would be much easier to script. The problem is that github doesn’t support the way the linux kernel scales out to a huge number of contributors, and therefore we can’t simply move, not even just a few subsystems. And this isn’t about just hosting the git data, that part obviously works, but how pull requests, issues and forks work on github.

  • Review, not Rocket Science

    About a week ago there were two articles on LWN, the first covering memory management patch review and the second covering the trouble with making review happen. The takeaway from these two articles seems to be that review is hard, there’s a constant lack of capable and willing reviewers, and this has been the state of review since forever. I’d like to counterpose this with our experiences in the graphics subsystem, where we’ve rolled out a well-working review process for the Intel driver, core subsystem and now the co-maintained small driver efforts with success, and not all that much pain.

    tl;dr: require review, no exceptions, but document your expectations

    Aside: This is written with a kernel focus, from the point of view of a maintainer or group of maintainers trying to establish review within their subsystem. But the principles really work anywhere.

  • X.org Foundation Election - Vote Now!

    It is election season again for the X.org Foundation. Besides electing half of the board seats we again have some paperwork changes - after updating the bylaws last year we realized that the membership agreement hasn’t been changed in over 10 years. It talks about the previous-previous legal org, has old addresses and a bunch of other things that just don’t fit anymore. In the board we’ve updated it to reflect our latest bylaws (thanks a lot to Rob Clark for doing the editing), with no material changes intended.

    Like bylaw changes any change to the membership agreement needs a qualified supermajority of all members, every vote counts and not voting essentially means voting no.

    To vote, please go to https://members.x.org, log in and hit the “Cast” button on the listed ballot.

    Voting closes by 23:59 UTC on 11 April 2017, but please don’t cut it close, it’s a computer that decides when it’s over …

  • LCA Hobart: Maintainers Don't Scale

  • Maintainers Don't Scale

    This is the write-up of my talk at LCA 2017 in Hobart. It’s not exactly the same, because this is a blog and not a talk, but the same contents. The slides for the talk are here, and I will link to the video as soon as it is available. Update: Video is now uploaded.

  • How do you do docs?

    The fancy new Sphinx-based documentation landed in upstream a while ago. Jani Nikula has written a nice overview on LWN (part 2). And it is getting used a lot. But judging by how often I type it in replies on the mailing list, what’s missing is a super-short howto. To build the documentation, run:

    $ make DOCBOOKS="" htmldocs
    

    The output can then be found in Documentation/output/. When typing documentation please always check that your new text does get rendered. The output also contains documentation about kernel-doc and the toolchain itself. Since the build is incremental it is recommended that you first run it before touching anything. That way you’ll only see warnings in areas you’ve touched, not all of them - the build is unfortunately somewhat noisy.
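
    And when typing the documentation itself, a kernel-doc comment looks roughly like this minimal sketch (a made-up function, just to show the shape):

    /**
     * foo_enable() - enable the hypothetical foo unit
     * @dev: device to operate on
     * @timeout_ms: how long to wait for the unit to come up
     *
     * The longer description goes here and ends up in the rendered docs.
     *
     * Return: 0 on success, negative error code on failure.
     */
    int foo_enable(struct foo_device *dev, unsigned int timeout_ms);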

    Update: of course also check out the nice documentation on kernel-doc itself.

  • Midlayers, Once More With Feelings!

    The collective internet troll fest had its fun recently discussing AMD’s DAL. Hacker news discussed the rejection and some of the reactions, reddit had some fun and of course everyone on the phoronix forums was going totally nuts. Luckily reason seems to finally prevail with LWN covering things too. I don’t want to spill more bits over the topic itself (read the LWN coverage and mailing list threads for that), but I think it’s worth looking at the fundamental underlying problem a bit more.

  • Neat drm/i915 Stuff for 4.8

    I procrastinated rather badly on this one, so instead of this landing around the previous kernel release, the v4.8 release is already out of the door. Read on for my slightly more terse catch-up report.

  • Commit Rights in the Linux Kernel?!

    For about a year now we’ve been running the Intel graphics driver with a new process: Besides the two established maintainers we’ve added all regular contributors as committers to the main feature branch feeding into -next. This turned into a tremendous success, but did require some initial adjustments to how we run things in the first few months.

    I’ve presented the new model here at Kernel Recipes in Paris, and I will also talk about it at Kernel Summit in Santa Fe. Since LWN is present at both I won’t bother with a full writeup, but leave that to much better editors. Update: LWN on kernel maintainer scalability.

    Anyway, there’s a video recording and the slides. Our process is also documented - scroll down to the bottom for the more interesting bits around what’s expected of committers.

    On a related note: At XDC, and a bit before, Emma Anholt started a discussion about improving our patch submission process, especially for new contributors. She used the Rust community as a great example, and presented about it at XDC. Rather interesting to hear her perspective as a first-time contributor confirm what I learned at LCA this year in Emily Dunham’s awesome talk on Life is better with Rust’s community automation.

  • New Blog Engine!

    I finally unlazied and moved my blog away from the Google mothership to something simple, fast and statically generated. It’s built on Jekyll, hosted on github. It’s not quite as fancy as the old one, but with some googling I figured out how to add pages for tags and an archive section, and that’s about all that’s really needed.

    Comments are gone too, because I couldn’t be bothered, and because everything seems to add Orwellian amounts of trackers. Ping me on IRC, by mail or on fedi instead. The share buttons are also just plain links now without tracking for Twitter (because I’m there) and G+ (because all the cool kernel hackers are there, but I’m not cool enough).

    And in case you wonder why I blather for so long about this change: I need a new blog entry to double check that the generated feeds are still at the right spots for the various planets to pick them up …

  • Awesome Atomic Advances

    Also, silly titles. Atomic has taken off for real: right now there are 17 drivers supporting atomic modesetting merged into the DRM subsystem, and still a pile of them pending review&merging each release. But it’s not just new drivers, there’s also been a steady stream of small improvements over the past year. I think it’s time for an update.

  • On Getting Patches Merged

    In some projects there’s an awesome process to handle newcomers’ contributions - the autobuilder picks up your pull and runs full CI on it, coding style checkers automatically do basic review, and the functional review load is at least all assigned with tooling, too.

    Then there’s projects where utter chaos and ad-hoc process reign, like the Linux kernel or the X.org community, and it’s much harder for new folks to get their foot in the door. Of course there’s documentation trying to bridge that gap, tools like get_maintainers.pl to figure out whom to ping, but that’s kinda the details. In the end you need someone from the inside to care about what you’re doing and guide you through the maze the first few times.

    I’ve been pinged about this a few times recently on IRC, so I figured I’ll type up my recommended best practices.

  • Neat drm/i915 Stuff for 4.7

    The 4.6 release is almost out of the door, it’s time to look at what’s in store for 4.7.

  • X.Org Foundation Election Results

    Two questions were up for voting: 4 seats on the Board of Directors, and approval of the amended By-Laws to join SPI.

    Congratulations to our reelected and new board members Egbert Eich, Alex Deucher, Keith Packard and Bryce Harrington. Thanks a lot to Lucas Stach for running. And also big thanks to our outgoing board member Matt Dew, who stepped down for personal reasons.

    On the bylaw changes and merging with SPI, 61 out of 65 active members voted, with 54 voting yes, 4 voting no and 3 abstaining. Which means we’re well past the 2/3rd quorum for bylaw changes, and everything’s green now to proceed with the plan to join SPI!

  • Should the X.org Foundation join SPI? Vote Now!

  • X.org Foundation Election - Vote Now!

    It’s election season in X.org land, and it matters: Besides new board seats we’re also voting on bylaw changes and whether to join SPI or not.

    Personally, and as the secretary of the board I’m very much in favour of joining SPI. It will allow us to offload all the boring bits of running a foundation, and those are also all the bits we tend to struggle with. And that would give the board more time to do things that actually matter and help the community. And all that for a really reasonable price - running our own legal entity isn’t free, and not really worth it for our small budget mostly consisting of travel sponsoring and the occasional internship.

    And bylaw changes need a qualified supermajority of all members, every vote counts and not voting essentially means voting no. Hence please vote, and please vote even when you don’t want to join - this is our second attempt and I’d really like to see a clear verdict from our members, one way or the other.

    Thanks.

    Voting closes by Apr 26 23:59 UTC, but please don’t cut it close, it’s a computer that decides when it’s over …

  • Neat drm/i915 stuff for 4.6

    The 4.5 release is close, it’s time to look at what’s in store for the next kernel’s merge window in the Intel graphics driver.

  • ARM kernel cross compiling

    I do a lot of cross driver subsystem refactorings, and DRM has lots of drivers that only run on ARM. Which means I routinely break a leg or arm since at least in the past cross-compiling was somehow always super painful. But I’ve just learned (thanks to Daniel Stone) that cross-compiling this stuff has become real easy, so here’s my handy script for this. This assumes Debian, but the difference is just in installing a different cross-compiler toolchain.

  • LCA Geelong: Embrace the Atomic Display Age

  • VT Switching with Atomic Modeset

    First, the title is a slight lie: this really is about compositor switching and not necessarily about using Linux VTs for that. But I hope that the title draws in the right folks and tempts them to read this. Since with atomic there’s a bit of a problem if you want to switch between different compositors - maybe you have X running and hack on wayland-mutter, or kwin and mutter, or just a DE and a login manager - and expect it to not end up in a modern art project like this.

  • Neat drm/i915 stuff for 4.5

    Kernel version 4.4 is released, it’s time for our regular look at what’s in store for the Intel graphics driver in the next release.

  • Better Markup for the Kernel GPU DocBook

    This summer Intel sponsored some work to improve the kerneldoc toolchain, with the aim of using all that to extend the DRM and i915 driver documentation we have. Most of it landed, but the last bit to integrate some type of text markup processing was stalled until it could be discussed at the kernel summit, see the LWN summary. Unfortunately it died in a bikeshed fest due to an alliance of people who think docs are useless and you should just read the code, and others who didn’t even know how to convert the kerneldoc into something pretty.

    But we still need this, since without lists, highlighting, basic tables and inserting code snippets it’s really hard to write decent documentation. Luckily Dave Airlie is ok with using it for DRM kerneldoc as long as Intel maintains the support. It’s purely opt-in and the only downside of not using asciidoc is that the resulting docs won’t be as pretty. All the changes to the text itself to use this markup are going into upstream as normal. The only bit that’s not in upstream is the tooling, which is available in a topic branch at

    git://anongit.freedesktop.org/drm-intel topic/kerneldoc
    

    If you want to build pretty docs just install asciidoc and base your drm documentation patches on top of drm-intel-nightly from the same repository - that tree also includes all of Dave’s tree. Alternatively pull in the above topic branch into your own personal tree. Note that asciidoc is detected automatically, so you really only need it and the tooling branch to check the rendering of your changes.

    For added convenience Intel also maintains an autobuilder that pushes latest drm-intel-nightly DRM documentation builds to http://dri.freedesktop.org/docs/drm/.

    Aside: If all you want is to just build the GPU DocBook instead of all of them, you can do that with

    $ make DOCBOOKS="gpu.xml" htmldocs
    

    With that have fun reading the new&improved documentation, and if you spot anything please submit a patch to dri-devel@lists.freedesktop.org.

  • Neat drm/i915 stuff for 4.4

    Due to vacations, conferences and other things I’m way later than usual and 4.3 has been released a while ago. More than overdue to take a look at what’s in store in the next kernel release.

  • XDC 2015: Atomic Modesetting for Drivers

    I did a talk at XDC 2015 about atomic modesetting with a focus on driver writers. Most of the talk is an overview of how an atomic modeset looks and how to implement the different parts in a driver backend. Anyway, for all those who missed it, there’s a video and slides.

  • Neat drm/i915 stuff for 4.3

    Kernel 4.2 is already released and the 4.3 merge window is in full swing, time to look at what’s in it for the intel graphics driver.

  • Atomic Modesetting Design Overview

    After a few years of development the atomic display update IOCTL for drm drivers is finally ready for prime time with the 4.2 pull request from Dave Airlie. It’s been a long road, with a lot of drivers already converted over to atomic and even more in progress, the atomic helper libraries and support code in the drm subsystem sufficiently polished. But what’s really missing is a design overview of what the overall atomic infrastructure looks like and why some decisions and details are implemented like they are.

    That’s now done and published on LWN: Part 1 talks about the problem space, issues with the Android atomic display framework and the basic atomic IOCTL interface. Part 2 goes into more detail about a few specific things like locking, helper library design and the exact semantics of atomic modesetting updates. Happy Reading!

  • Neat drm/i915 stuff for 4.2

    The 4.1 kernel release is still a few weeks off and hence a bit early to talk about 4.2. But the drm subsystem feature cut-off already passed and I’m going on vacation for 2 weeks, so here we go.

  • GFX Kernel Upstreaming Requirements

    Upstreaming requirements for the DRM subsystem are a bit special since Dave Airlie requires a full-blown open-source implementation as a demonstration vehicle for any new interfaces added. I’ve figured it’s better to clear this up once instead of dealing with the fallout from surprises, and made a few slides for a training session. Dave reviewed and acked them, hence these should be the up-to-date rules - the old mails from when some ARM SoC vendors tried to push drm drivers for blob userspace to upstream are a bit outdated.

    Anyway, here’s the slides for my gfx kernel upstreaming requirements training.

  • Neat drm/i915 stuff for 4.1

  • Community Code of Conduct for intel-gfx

    [This is a cross-post from the mail I just sent out to intel-gfx.]

    Codes of conduct seem to be in the news a bit recently, and I realized that I’ve never really documented how we run things. It’s different from the kernel’s overall code of conduct and from the Xorg Foundation event policy. Anyway, I think this is worth clarifying and here it goes.

  • Neat drm/i915 Stuff for 3.20

    Linux 3.19 was just released and my usual overview of what the next merge window will bring is more than overdue. The big thing overall is certainly all the work around atomic display updates, but read on for what else has been done.

  • Update for Atomic Display Updates

    Another kernel release is imminent and a lot of things have happened since my last big blog post about atomic modeset. Read on for what new shiny things 3.20 will bring in this area.

  • LCA 2015: Botching Up IOCTLs

    So I’m stuck at some airport, jetlagged, on my return trip from my very first LCA in Auckland - awesome conference, except for being on the wrong side of the globe: Very broad range of speakers, awesome people all around, great organization, and since it’s still a community conference none of the marketing nonsense and sales-pitch keynotes.

    I also did a presentation about botching up ioctls. Compared to my blog post it has a bunch more details on technical issues and some overall comments on what’s really important to avoid a v2 ioctl because v1 ended up being unsalvageable. Slides and video (courtesy of the LCA video team).

  • Neat drm/i915 stuff for 3.19

    So kernel version 3.18 is out the door and it’s time for our regular look at what’s in the next merge window.

  • Atomic Modeset Support for KMS Drivers

    So I’ve just reposted my atomic modeset helper series, and since the main goal of all that work was to ensure a smooth and simple transition for existing drivers to the promised atomic land, it’s time to elaborate a bit. The big problem is that the existing helper libraries and callbacks to driver backends don’t really fit the new semantics, so some shuffling was required to avoid long-term pain. So if you are a driver writer and just interested in the details, then read on for what needs to be done to support atomic modeset updates using these new helper libraries.

  • Neat drm/i915 stuff for 3.18

    Since Dave Airlie moved the feature cut-off of the drm-next tree roughly one month ahead it is already time for our regular look at what’s ahead. Even though the 3.17 features aren’t even released yet.

  • Review Training Slides

    We currently have a large influx of new people contributing to i915 - for the curious, just check the git logs. As part of ramping them up I’ve done a few training sessions about upstream review, and a bunch of people I’ve talked with at KS in Chicago were interested in that, too. So I’ve cleaned up the slides a bit and dropped the very few references to Intel-internal resources. No speaker notes or video recording, but I think this is useful all in itself. And of course if you have comments or see big gaps - feedback is very much welcome:

    Upstream Review Training Slides

  • Neat drm/i915 stuff for 3.17

    So with the 3.16 kernel out of the door it’s time to look at what’s queued up for the Intel graphics driver in 3.17.

  • Documentation for drm/i915

    So over the past few years the drm subsystem gained some very nice documentation. And recently we’ve started to follow suit with the Intel graphics driver. All the kernel documentation is integrated into one big DocBook and I regularly upload latest HTML builds of the Linux DRM Developer’s Guide. This is built from drm-intel-nightly so has slightly fresher documentation (hopefully) than the usual documentation builds from Linus’ main branch which can be found all over the place. If you want to build these yourself simply run

    $ make htmldocs
    

    For testing we now also have neat documentation for the infrastructure and helper libraries found in intel-gpu-tools. The README in the i-g-t repository has detailed build instructions - gtkdoc is a bit more of a fuss to integrate.

    Below the break some more details about documentation requirements relevant for developers.

  • Neat drm/i915 stuff for 3.16

    Linus decided to have a bit of fun with the 3.16 merge window and the 3.15 release, so I’m a bit late with our regular look at the new stuff for the Intel graphics driver.

  • LinuxTag 2014

    Tomorrow I’ll be travelling to LinuxTag in Berlin for the first time. Should be pretty cool, and to top it off I’ll give a presentation about the state of the intel kernel graphics driver. For those that can’t attend I’ve uploaded the slides already, and if there’s a video cut I’ll link to that as soon as it’s available.

  • Neat drm/i915 stuff for 3.15

    So the release of the 3.14 linux kernel already happened and I’m a bit late for our regular look at what cool stuff will land in the 3.15 merge window for the Intel graphics driver.

  • New drm/i915 Git Repository

    So earlier this year I signed up Jani Nikula officially as co-maintainer. Now we’ve also gotten around to moving the drm/i915 git repository to a common location, so that there’s no longer a need to move it around when I’m on vacation. So everyone please update any git references and remotes:

    git://anongit.freedesktop.org/drm-intel
    
    ssh://git.freedesktop.org/git/drm-intel
    
  • FOSDEM: Testing Kernel GPU Drivers

    So as usual FOSDEM was a blast and as usual the hallway track was great too. The beer certainly helped with that … Big props to the entire conference crew for organizing a stellar event and specifically to Luc and the other graphics devroom people.

    I’ve also uploaded the slides for my Kernel GPU Driver Testing talk. My OCD made me remove that additional dot and resurrect the missing ‘e’ in one of the slides even. FOSDEM also had live-streaming and rendering videos should eventually show up, I’ll add the link as soon as it’s there.

    Update: FOSDEM staff published the video of the talk!

  • Neat drm/i915 stuff for 3.14

    Kernel v3.13 is nearing its release, so it’s time for our regular look at what the next version will bring to the Intel gfx driver.

  • Botching up ioctls

    One clear insight kernel graphics hackers gained in the past few years is that trying to come up with a unified interface to manage the execution units and memory on completely different GPUs is a futile effort. So nowadays every driver has its own set of ioctls to allocate memory and submit work to the GPU. Which is nice, since there’s no more insanity in the form of fake-generic, but actually only used once interfaces. But the clear downside is that there’s much more potential to screw things up.

    To avoid repeating all the same mistakes again I’ve written up some of the lessons learned while botching the job for the drm/i915 driver. Most of these only cover technicalities and not the big-picture issues like what the command submission ioctl exactly should look like. Learning these lessons is probably something every GPU driver has to do on its own.
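
    As a hedged sketch of the flavour of technicalities covered (hypothetical names, not an actual i915 interface): fixed-size types, u64 fields instead of raw pointers, and a flags field that rejects unknown bits keep an ioctl extendable.

    #include <linux/types.h>

    struct hypo_submit {
        __u64 commands_ptr;   /* userspace pointer passed as u64, no 32/64-bit compat pain */
        __u32 commands_size;  /* explicit size, struct has no implicit padding */
        __u32 flags;          /* handler rejects unknown bits with -EINVAL */
        __u64 reserved[2];    /* must be zero, leaves room to extend later */
    };

    #define HYPO_SUBMIT_KNOWN_FLAGS 0x1

    /* in the ioctl handler:
     *   if (args->flags & ~HYPO_SUBMIT_KNOWN_FLAGS)
     *           return -EINVAL;
     */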

  • Testing Requirements for drm/i915 Features and Patches

    I want to make automated test coverage an integral part of our feature and bugfix development process. For features this means that, starting with the design phase, testability needs to be considered an integral part of any feature. This needs to go through the entire development process, from planning through development, patch submission and final validation. For bugfixes that means the fix is only complete once the automated testcase for it is also done, if we need a new one.

    This specifically excludes testing with humans somewhere in the loop. We are extremely limited in our validation resources: every time we put something new onto the “manual testing” plate, something else will fall off.

    I’ve let this float for quite a while both internally in Intel and on the public mailing lists. Thanks to everyone who provided valuable input. Essentially this just codifies the already existing expectations from me as the maintainer, but those requirements haven’t really been clear and a lot of emotional discussions ensued. With this we should now have solid guidelines and can go back to coding instead of blowing through so much time and energy on waging flamewars.

  • Neat drm/i915 stuff for 3.13

    It’s that time again when the old kernel v3.12 will be released shortly and we’ll take a look at what’s in store for the Intel GFX driver in 3.13.

  • More drm/i915 Testsuite Infrastructure

    After the recent overview of our kernel test infrastructure I’ve had to write a bunch of multithreaded testcases. And since I’m rather lazy I’ve opted to create a few more helpers that hide all the little details when forking and joining processes. While at it I think it’s also a good time to explain a bit the infrastructure we have to help running the kernel testsuite on simulated hardware, and a few other generally useful things in our testsuite helper library.
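
    The kind of boilerplate those helpers hide looks roughly like this (a plain POSIX sketch, not the actual i-g-t API):

    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Fork n children, run the test body in each, then join them all and
     * propagate any failure through the exit status. */
    static void fork_and_join(int n, void (*body)(int child))
    {
        int status;

        for (int i = 0; i < n; i++) {
            if (fork() == 0) {
                body(i);
                exit(EXIT_SUCCESS);
            }
        }

        while (wait(&status) > 0) {
            if (!WIFEXITED(status) || WEXITSTATUS(status))
                exit(EXIT_FAILURE);
        }
    }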

  • Neat drm/i915 stuff for 3.12

    The linux kernel 3.11 will be released soonish and it’s time for our regular look at what the next merge window will bring in for the intel GPU driver.

  • Recent drm/i915 Testsuite Improvements

    Recently I’ve again wreaked decent havoc in our intel-gpu-tools kernel testsuite. And then shockingly noticed that I’ve never done a big pompous blog post to announce what we started about one-and-a-half years ago. Besides just describing the new infrastructure for writing testcases (apparently every decent hacker must reinvent a test framework at least once …) I’ll also recap a bit what’s been going on in the past.

  • In Which Kernel Release Will $FEATURE Land In?

    So I get asked this question quite often and since I’m a terribly lazy person I’m just going to write this down once and then link to it …

  • Precomputing the CRTC Configuration in drm/i915

    Like I’ve briefly mentioned when explaining the new modeset code merged into kernel 3.7, one of the goals was to facilitate atomic modesetting and fastboot. Now the big new requirement for both atomic modesetting and fastboot is that we precompute the entire output configuration before we start touching the hardware. And this includes things like pll settings, internal link configuration and watermarks. For atomic modesetting we need this to be able to decide up-front whether a configuration requested by userspace works, or whether we lack the bandwidth, display plls or some other resource. For fastboot we need to be able to compute and track the full display hardware state in software so that we can compare a requested configuration from userspace with the boot-up state taken over from the BIOS. Otherwise we might end up with suboptimal settings inherited from the firmware or worse, we’d try to reuse some resources still in use by the firmware configuration (like when the BIOS and the linux driver would pick different display plls).

    Now the new modeset code merged into 3.7 already added such state tracking and precomputation for the output routing between crtcs and connectors. Over the past months, starting with the just released 3.10 linux kernel we’ve added new infrastructure and tons of code to track the low-level hardware state like display pll selection, FDI link configuration and tons of other things. Read on below for an overview of how the new magic works.
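
    Schematically, and with hypothetical names rather than the actual i915 functions, the flow this enables looks like:

    /* All types and functions below are made up for illustration. */
    static int hypo_set_mode(struct hypo_crtc *crtc, struct hypo_mode *mode)
    {
        struct hypo_crtc_state state;
        int ret;

        /* compute the complete target state in software first:
         * plls, FDI link config, watermarks, ... */
        ret = hypo_compute_config(crtc, mode, &state);
        if (ret)
            return ret;  /* rejected up-front, hardware untouched */

        /* fastboot: compare against the state taken over from the
         * BIOS and skip the modeset if nothing actually changed */
        if (!hypo_state_equal(&state, &crtc->current_state))
            hypo_commit_config(crtc, &state);

        return 0;
    }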

  • On Forklifting and Pitchforks

    So sometimes someone wants the latest drm kernel driver bits, but please on a ridiculously outdated stable kernel like last month’s release. Or much worse, actually something from last year.

    Ok I’m getting ahead of myself here, usually these things start with a request for a “minimal” backport to enable $NEW_PLATFORM or some other really important feature. But the lesson that drm drivers are complex beasts and that such minimal backports tend to require full validation again and a lot of work to fix up the fallout is usually learned quickly. So backporting the entire beast with all drivers it is.

    The next obvious question is why we can’t just copy the entire thing, since reconstructing the git history is rather tedious work, and usually ends up with a few commits in between that don’t quite work correctly. Which means that the backported git history is usually of little value. The answer to that question tends to boil down to “we’ve promised this to the customer …”. So if you’re in such a situation, read on below on how to most efficiently wrestle git to forklift the history of an entire subsystem onto an older kernel release.

  • drm/i915 Branches Explained

    Since there’ve been questions I’ll try to explain the various branches in the drm-intel.git repository a bit.

  • Neat drm/i915 stuff for 3.11

    Kernel 3.10 will be released soon, so it’s time for our regular look at what the next kernel will bring. Purely on statistics, the drm-intel-next pull for 3.11 is one of the biggest pull requests thus far, with 314 patches. Read on after the break for the details.

  • i915/GEM Q&A

    So apparently people do indeed read my i915/GEM crashcourse, and a bunch of follow-up questions popped up in private mails. Since I’m a lazy bastard I’ve cleaned up some of the common questions&answers to be able to easily point at them. And hopefully they also help someone else to clarify things a bit.

  • Neat drm/i915 stuff for 3.10

    So kernel 3.9 should be releasing really soon and so it’s time for our regular look at what 3.10 brings to drm/i915:

  • Overclocking your Intel GPU on Linux

    First things first: If you damage your hardware and burn down your house, it’s not my problem. You’ve been warned!

    So after a bit of irc discussion yesterday it turns out that you can overclock intel gpus by quite a margin. Which makes some sense now that the gfx performance of intel chips isn’t something to be completely ashamed of.

  • Neat drm/i915 stuff for 3.9

    Now that 3.8 is winding down it’s time to look at what 3.9 will bring for the drm/i915 driver:

  • New Kernel Modesetting Locking

    Now that my kernel modesetting locking rework has landed in Dave’s drm-next tree and is gearing up for inclusion into 3.9, I’ve figured it’s time to also post my little intro here:

    The aim of this locking rework is that ioctls which a compositor might call for every frame (set_cursor, page_flip, addfb, rmfb and getfb/create_handle) should not be able to block on kms background activities like output detection. Since each EDID read takes about 25ms (in the best case), blocking there always means we’ll drop at least one frame.

    The solution is to add per-crtc locking for these ioctls, and restrict background activities to only use the global lock. Change-the-world type of events (modeset, dpms, …) need to grab all locks.
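
    In pseudo-kernel-C (made-up names, not the actual drm structures), the scheme looks like:

    struct hypo_device {
        struct mutex global_lock;  /* kms background work: probing, EDID reads */
    };

    struct hypo_crtc {
        struct hypo_device *dev;
        struct mutex lock;         /* per-crtc state for the per-frame ioctls */
    };

    static int hypo_page_flip(struct hypo_crtc *crtc)
    {
        mutex_lock(&crtc->lock);   /* never waits for a 25ms EDID read */
        /* ... swap the scanout buffer ... */
        mutex_unlock(&crtc->lock);
        return 0;
    }

    /* output detection: takes only dev->global_lock */
    /* modeset, dpms, ...: takes global_lock plus every crtc->lock */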

  • FOSDEM Slides 2013

    FOSDEM was great as ever, despite that I’ve managed to grab a bit too much attention from my management by spilling the Intel OTC gfx driver team headcount a bit ;-) Anyway, for those who missed my presentation, I’ve uploaded the slides (with that embarrassing typo fixed in one headline). Phoronix should upload a video of the talk soon, I’ll add a link as soon as that happens.

  • i915/GEM Crashcourse: Overview

    Now that the entire series is done I’ve figured a small overview would be in order.

    Part 1 talks about the different address spaces that an i915 GEM buffer object can reside in and where and how the respective page tables are set up. Then it also covers different buffer layouts as far as they’re a concern for the kernel, namely how tiling, swizzling and fencing work.

    Part 2 covers all the different bits and pieces required to submit work to the gpu and keep track of the gpu’s progress: Command submission, relocation handling, command retiring and synchronization are the topics.

    Part 3 looks at some of the details of the memory management implemented in the i915.ko driver. Specifically we look at how we handle running out of GTT space and what happens when we’re generally short on memory.

    Finally part 4 discusses coherency and caches, and how to most efficiently transfer data between the gpu coherency domains and the cpu coherency domain under different circumstances.

    Happy reading!

    Update: There’s now also a new article with a few questions and answers about some details in the i915 gem code.

  • i915/GEM Crashcourse, Part 4

    In the previous installment we’ve taken a closer look at some details of the gpu memory management. One of the last big topics still left is the various caches, both on the gpu (render and display blocks) and the cpu, and what is required to keep the data coherent between them. Now one of the reasons gpus are so fast at processing raw amounts of data is that caches are managed through explicit instructions (cutting down massively on complexity and delays) and there are also a lot of special-purpose caches optimized for different use-cases. Since coherency management isn’t automatic, we will also consider the different ways to move data between different coherency domains and what the respective up- and downsides are. See the i915/GEM crashcourse overview for links to the other parts of this series.

  • i915/GEM Crashcourse, Part 3

  • Neat drm/i915 stuff for 3.8

    The linux kernel 3.7 hasn’t even shipped yet, but we’re already lining up all the ducks for 3.8. And since, feature-wise, I don’t expect anything massive on top any more (the feature merge period will close rsn), I’ve figured I might as well do the overview a bit earlier:

  • i915/GEM Crashcourse, Part 2

  • i915/GEM Crashcourse

    This is the first part in a short tour of the Intel hardware and what the GEM (graphics execution manager) in the i915 does. See the overview for links to the other parts.

  • Neat drm/i915 stuff for 3.7

    Now that the upstream merge window has started and my last drm-intel-next pull request landed in Dave Airlie’s drm tree it’s a good time to look at some of the things landing in 3.7:

  • Slides for XDC 2012 Presentation

    I’ve already published a short writeup, now I’ve also put up the slides. There’s also nice writeups of all other XDC sessions available at the X Wiki. I’ll add a direct link to the video of my talk once it shows up.

  • New Modeset Code

    drm/i915.ko is gearing up to gain new modeset code, to finally move away from the crtc helper code (in drm/drm_crtc_helper.c) used (up to now) by all kms drivers. As a quick reference I’ll detail the motivation and design of the new code a bit here (mostly stitched together from patchbomb announcements and the commits introducing the new concepts).

  • Eugeni Dodonov, 1981-2012

    Personally I’ll remember you as the cheerful hacker who signed up for all sorts of crazy stuff. And then followed through and came out of the dungeons with some great patches - or at least a few good jokes.

    Rest in peace, we’ll miss you.

  • git for Bug Reporters

    Pretty often I point bug reporters at random git branches and sometimes they’ll happily compile kernels from sources, but have no idea about git. Unfortunately there doesn’t seem to be a quick how-to tailored to the needs of bug reporters, so let’s fix that.

  • Slides for FOSDEM 2012 dma_buf presentation

    I’m too lazy to publish a proper writeup of my FOSDEM talk on where dma_buf is currently standing, where it’ll likely head to and what might still come in 1-2 years. But you can grab the slides. Enjoy!

    Edit: Phoronix also has an article with video recordings.

  • New drm-intel-next Git Tree

    [This is a verbatim copy of the announcement that went out to intel-gfx & Co. today.]

    Because Keith is routinely really busy with all kinds of things, notably gathering fixes for drm-intel-fixes, the patch merge process for the next release cycle sometimes falls behind. To support him and improve things I’ve been volunteered to take over handling the -next tree.

  • GEM Overview

    This is the script of a short intro I’ve given at the Linaro@UDS memory management summit in Budapest in spring 2011. Yep, it’s a bit old …

  • Hell Froze Over

    From the infamous “i855GM cache coherency is made of fail” bug:

    legolas558: In yesterday’s git pull I have noticed some interesting patches and I can say that the vanilla kernel (with xorg-server 1.9.4 and xf86-video-intel 2.14.0) is now very stable, no video corruptions and videos playing smoothly

    ‘Nuff said.

  • On Getting Your API Right

    Ben Widawsky is writing hardware context support for the i915 drm module. The resulting API discussion showed that things are not simple and probably in need of an iteration or two. Now linux kernel rules mandate that an ioctl, once merged, must be supported (almost) forever. So one can still see all the evolutionary steps of gpu command submission. Read on for a bit of a history trip.

subscribe via RSS