Liquidat reports, “Pierre Ossman, the MultiMediaCard (MMC) subsystem maintainer, announced the new related patches for the Linux kernel almost two weeks ago. He described the patchset as ‘probably … one of the biggest ones for the MMC layer so far’ and highlighted the SDIO and SPI support as the major improvements.”
Sure, it sounds great, but how long before a renovation like this wreaks havoc on users? I recall the confusion caused when the USB scanner infrastructure was reworked: on my beloved Debian, scanners could suddenly only be accessed as root (via “su”), and doing that invoked warnings. This happened because people on the kernel side of things decided to change the infrastructure, and nobody seemed to know what was happening except the good old folks doing the renovating –and their documentation was useless, at least to the non-developer Gnu-Linux user. Yes, I read the documentation: it was cryptic, decentralised, incomplete, and contradictory.
My issue with Gnu-Linux is its seemingly ever-evolving protocols and implementations. I’ve seen too many HOWTOs detailing how to fix something that eventually become irrelevant because the fixes no longer work after internal software changes (while the functionality remains broken). I’ve written some of them myself and lived through many other undocumented problems. Mind you, not all of this is attributable to kernel development, but it does seem to be the norm for Linux and monolithic software in general.
For instance, if you install NVidia drivers onto a Gnu-Linux distro, you may well end up with a black screen after a system update. Then you have to go through the trouble of reinstalling a newer set of kernel-compatible binary drivers post-update (and you may have to wait until they are developed). Not that this could never happen on BSD, but the BSDs are more conservative than the chaotic development model idiosyncratic to Linux, which degrades the end-user experience. A more conservative approach might be needed: something with a mature infrastructure such as the BSDs or –even better yet– a “revolutionary” micro-kernel infrastructure, where changes could be worked on without extensive detrimental breakage, because some developers are irresponsible and bring about change in a haphazard fashion.
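As a concrete illustration of the black-screen scenario, here is a minimal sketch (a hypothetical helper script, not anything the drivers ship with) that checks whether a module such as “nvidia” was actually built for the kernel you are currently running, before you reboot blind:

```shell
# Sketch: check whether an out-of-tree module (here "nvidia", as an example)
# exists for the currently running kernel. "modinfo" comes from the kmod tools;
# the command -v guard keeps the script from failing where kmod is absent.
kver=$(uname -r)
if command -v modinfo >/dev/null 2>&1 && modinfo -k "$kver" nvidia >/dev/null 2>&1; then
    status="present"
else
    status="missing (driver must be rebuilt for this kernel)"
fi
echo "nvidia module for kernel $kver: $status"
```

On distributions that package the proprietary driver through DKMS, `dkms status` and `dkms autoinstall` automate exactly this rebuild step after a kernel update.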
To point out a major problem with the monolithic Linux kernel model: if your hardware is not fully supported by the Linux kernel, you have to go through the trouble of finding drivers, verifying their compatibility, and installing them –if you’re lucky. If not, you have to compile and build your hardware drivers by hand, but not before digging through source code looking for instructions, and most of the time those documents redirect you elsewhere. You have to read everything anyway, because the information is spread over multiple files. You know the best part? You will probably have to do this each and every time you update your kernel, for as long as Linus doesn’t incorporate support for your hardware into his kernel. And you know you’ll want to update your kernel, because you’ll want the latest, greatest, fastest and shiniest, not to mention to address security concerns. It’s been years and I’m still waiting on Syntax USB-400 (Prism 2 chip-set) kernel support (there is also a Prism 3 based USB-400).

With the micro-kernel model, your video drivers would function independently of the kernel and would not depend on kernel compatibility. In the example above, you could perform a system-wide update (getting all the security-update benefits and new bells and whistles) and not end up with a black screen due to kernel-incompatible video drivers, just as long as all the video bits themselves work.
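For readers who have never done it, the “compile and build your hardware drivers by hand” chore described above usually boils down to an out-of-tree kbuild Makefile along these lines (a sketch; “mydriver” is a placeholder module name), re-run against the new kernel’s build tree after every update:

```makefile
# Out-of-tree kbuild Makefile (sketch; "mydriver.c" is a placeholder source file).
obj-m += mydriver.o

all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) modules

clean:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) clean
```

Because the resulting .ko is tied to the exact kernel version it was built against, the whole procedure repeats on every kernel update –which is precisely the complaint.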
A great characteristic of a micro-kernel (such as Minix 3) for users of proprietary drivers is that their security concerns would diminish, as these drivers would not be running in the kernel, privy to root privileges and cross-program dealings –something I find incredibly stupid about Linus’ design, or rather the design he decided to implement. You could update a micro-kernel without breaking anything or –AFAIK– rebooting. I wonder why we continue to support a faulty monolithic design such as Linux. Sure, we pride ourselves on running a state-of-the-art OS called Gnu-Linux –something light-years ahead of Windows– but how much better off are we? We’re still in the dark ages.
I seem to recall Linus Torvalds saying that faulty drivers rarely, if ever, bring his kernel down. Sorry, but I have to disagree: I’ve lived through this many times before.
This is licensed under the Attribution-NonCommercial-ShareAlike 3.0 Unported Creative Commons License. All brands mentioned are properties of their respective owners. By reading this article, the reader forgoes any accountability of the writer. The reading of this article implies acceptance of the above stipulations. The author requires attribution –by full name and URL– and notification of republications.