NVIDIA Releases Drivers With Openness Flavor

This year we have already seen significant leaks of NVIDIA source code and a release of open-source drivers for NVIDIA Tegra. Now it looks like NVIDIA has decided to build on that momentum, and has just released open-source GPU kernel modules for Linux. The GitHub repository, called open-gpu-kernel-modules, has made people happy, and we’re already testing the code, making memes, and speculating about the future. The driver is currently described as experimental and “production ready” only for data center cards – but you can try it out right now!

Of course, there is nuance. This is new code, separate from the known proprietary driver, and it only works on cards from the RTX 2000 and Quadro RTX series onwards (also known as Turing and above). The good news is that performance is comparable to the closed-source driver, even at this point! One quirk of this project – many of the features that AMD and Intel drivers implement in the Linux kernel are instead delivered by a binary blob that runs on the GPU. This blob runs on the GSP, a RISC-V core only present on Turing GPUs and newer – hence the series limitation. Every GPU loads a piece of firmware nowadays, but this one is big!

Apart from that, this driver already offers a more coherent integration into the Linux kernel, with benefits that will only grow in the future. Not everything is open yet – NVIDIA’s userspace libraries and its OpenGL, Vulkan, OpenCL, and CUDA drivers remain closed for now. The same goes for the old NVIDIA proprietary driver, which I think will be left to rot – fitting, since “letting things rot” is what that driver has done before with generations-old but perfectly usable cards.

Upstreaming this driver will certainly be a huge effort, but that is clearly the goal, and the benefits will be significant as well. Even as it stands, this driver has a lot more potential. Like a British police officer, the Linux kernel checks the license of every kernel module it loads, restricting the APIs available to modules without a GPL-compatible license – which the previous NVIDIA driver didn’t have, because its open parts were essentially a thin shim between the kernel and the binary driver, and thus couldn’t be GPL-licensed. Because this new driver is dual MIT/GPL licensed, NVIDIA now has a larger set of kernel interfaces at its disposal and can integrate better into the Linux ecosystem rather than maintaining a set of proprietary workarounds.
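The gating works roughly like this: every module declares a license string, and symbols the kernel exports with `EXPORT_SYMBOL_GPL` can only be bound by modules whose declared license the kernel classifies as GPL-compatible. Here’s a toy Python model of that logic – the real check lives in the kernel’s C code (`license_is_gpl_compatible()`), and this sketch only mimics the idea, not the implementation:

```python
# Toy model of the Linux kernel's license-based symbol gating.
# The real logic lives in the kernel's C sources; this is illustrative only.

# License strings the kernel treats as GPL-compatible (per MODULE_LICENSE docs).
GPL_COMPATIBLE = {
    "GPL", "GPL v2", "GPL and additional rights",
    "Dual BSD/GPL", "Dual MIT/GPL", "Dual MPL/GPL",
}

def can_use_symbol(module_license: str, symbol_gpl_only: bool) -> bool:
    """A module may bind a GPL-only symbol only if its license is GPL-compatible."""
    if not symbol_gpl_only:
        return True  # plain EXPORT_SYMBOL: any module may link against it
    return module_license in GPL_COMPATIBLE  # EXPORT_SYMBOL_GPL: gated

# The old proprietary shim vs. the new dual-licensed module:
print(can_use_symbol("Proprietary", symbol_gpl_only=True))   # False
print(can_use_symbol("Dual MIT/GPL", symbol_gpl_only=True))  # True
```

In other words, simply by carrying the “Dual MIT/GPL” license string, the new modules get access to the GPL-only half of the kernel’s API surface that the old driver was locked out of.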

A GPU on what appears to be some sort of expansion card, with a Raspberry Pi Compute Module box in front of it
Now with 65% more driver, per driver!

The debugging capabilities, security, and overall integration should improve, and a whole range of new possibilities opens up. For starters, it opens the door to porting the driver to other operating systems like FreeBSD and OpenBSD, and could even help libre computing. NVIDIA GPU support on ARM will get easier in the future, and we could see more cool efforts taking advantage of what GPUs can do when paired with an ARM SBC, from exciting video games to powerful machine learning. Red Hat’s post on the release says there’s more to come when it comes to properly integrating NVIDIA products into the Linux ecosystem.

You’ll generally see everyone cheering for this, and for good reasons. Traditionally, we celebrate such radical moves by large companies even when they are imperfect – and rightly so, given the benefits just mentioned and the future potential. As more big players make moves like this, we’ll have a lot to rejoice about, and a lot of problems will be left in the past. However, when it comes to the kind of openness we value, the situation gets a little weird and harder to assess.

Openness lets us add features we need, solve problems we encounter, learn from the work of others, and push the boundaries as we interact with technology that increasingly defines our lives. If we are to believe all the thrilling sci-fi we read as kids, we are supposed to be collaborating with our technology. In many ways, this driver isn’t the kind of openness that helps us do any of that with our hardware, yet it certainly checks a lot of the boxes for what we consider “open”. How did we get here?

It is well known that opening every piece of code is not what big companies do – you have to hide the DRM bits and the patent violations somewhere. Here, much of the code that used to live in the proprietary driver now runs on a different CPU and is just as opaque as before. Few drivers rely on binary blob code as much as this one, and yet, only semi-ironically, it’s not far from the point where it could technically become RYF certified. The objectionable binary blobs are now “firmware” instead of “software”.

Photo of a Thinkpad X200 on a desk, display with a GRUB menu with a cute image of the GNU mascot in the background
Something is wrong if this is considered more open than Novena

The Free Software Foundation’s RYF (Respects Your Freedom) certification, while well-intentioned, has received a lot of attention lately for being counterproductive to its own goals and for making hardware creation unnecessarily complex, and even the Libreboot project’s lead says its principles leave something to be desired. We implicitly took RYF certification as the openness guideline to strive for, but the Novena laptop chose not to adhere to it and is certainly better off for it. We can learn a lot from RYF, and it is clear that we need something better.

From here – what do we consider “open”? And who can help us keep track of what is “open” – especially the kind of openness that takes us to a more utopian, yet realistic, world where our relationship with technology is healthy and loving? Some guidelines and principles help us check whether we’re staying on the right track – but the world has changed so much that old ideas don’t always apply, much like the cloud-hosted-software loophole that has turned out to be so difficult to close.

Still, a lot more code has been opened up, and that is a win on some fronts. At the same time, we won’t get where we want to be even if other companies decide to follow suit, and this alone won’t enable many of the groundbreaking things hackers achieve with truly open-source tools in hand. And if we’re not careful, we could confuse this with the kind of openness we all came here for. So it’s a mixed bag.

As mentioned, this driver is for the RTX 2000 series and above. Older cards are still limited to the proprietary driver or Nouveau – which NVIDIA has crippled in the past. For example, in recent years NVIDIA re-implemented vital functions such as clock control behind a signed, closed-API firmware shim that is a pain to reverse engineer, and has been uncooperative ever since – leaving the Nouveau project with no recourse. Unlike AMD, which helps maintain support for cards released before its open driver arrived, NVIDIA has left this problem to persist.

From here, however, Nouveau will live on. In part it will remain usable for older cards that aren’t going anywhere, and in part it looks like it could help replace the aforementioned userspace libraries that remain closed-source. The official NVIDIA release page even says it’s not impossible that the Nouveau effort and the NVIDIA open driver effort could merge into one – a win for all, even if a tad bittersweet.

Due to the shortages, you might not be able to get a GPU to run this driver on anyway. That said, we’ll recover from the shortages and the mining craze, and prices will drop to the point where this driver will run on plenty of our systems – maybe not your MX150-equipped laptop, but certainly a lot of powerful systems we have yet to build. NVIDIA is not where AMD and Intel stand yet, but they are getting there.

[Tux penguin image © Larry Ewing, coincidentally remixed using GIMP.]
