Linux at 25: Linus Torvalds on the evolution and future of Linux
Source: Paul Venezia


The last time I had the occasion to interview Linus Torvalds, it was 2004, and version 2.6 of the Linux kernel had been recently released. I was working on a feature titled “Linux v2.6 scales the enterprise.” The opening sentence was “If commercial Unix vendors weren’t already worried about Linux, they should be now.” How prophetic those words turned out to be.

More than 12 years later -- several lifetimes in the computing world -- Linux can be found in every corner of the tech world. What started as a one-man project now involves thousands of developers. On this, its 25th anniversary, I once again reached out to Torvalds to see whether he had time to answer some questions regarding Linux’s origins and evolution, the pulse of Linux’s current development community, and how he sees operating systems and hardware changing in the future. He graciously agreed.

The following interview offers Torvalds’ take on the future of x86, changes to kernel development, Linux containers, and how shifts in computing and competing OS upgrade models might affect Linux down the line.

Linux’s origins were in low-resource environments, and coding practices were necessarily lean. That’s not the case today in most use cases. How do you think that has affected development practices for the kernel or operating systems in general?

I think your premise is incorrect: Linux's origins were definitely not all that low-resource. The 386 was just about the beefiest machine you could buy as a workstation at the time, and while 4MB or 8MB of RAM sounds ridiculously constrained today, and you'd say "necessarily lean," at the time it didn't feel that way at all.

So even back 25 years ago I felt like I had memory and resources to spare, not at all constrained by hardware. And hardware kept getting better, so as Linux grew -- and, perhaps more importantly, as the workloads you could use Linux for grew -- we still didn't feel very constrained by hardware resources.

From a development angle, I don't think things have changed all that much. If anything, I think that these days when people are trying to put Linux in some really tiny embedded environments (IoT), we actually have developers today that feel more constrained than kernel developers felt 25 years ago. It sounds odd, since those IoT devices tend to be more powerful than that original 386 I started on, but we've grown (a lot) and people’s expectations have grown, too.

Hardware constraints haven't been the big issue affecting development practices, because the hardware grew up with our development. But we've certainly had other things that affect how we do things.

The fact that Linux is "serious business" obviously changes how you work -- you have more rules and need to be more thoughtful and careful about releases. The sheer number of people involved also radically changes how you develop things: When there were a few tens of developers and we all could email each other patches, things worked differently from when there are thousands of people involved, and we obviously need source control management and the whole distributed model that Git has.

Our development model has changed a lot over the quarter century, but I don't think it's been because of hardware constraints.

Do you see any fundamental differences in the younger kernel hackers today versus those of 20 years ago?

It's very hard to be introspective and actually get it right. I don't think the kernel developers are necessarily all that different; I think the scale and maturity of the project itself is the much bigger difference.

Twenty years ago, the kernel was much smaller, and there were fewer developers. It was perhaps to some degree easier to get into development due to that: There was less complexity to wrap your mind around, and it was easier to stand out and make a (relatively) big difference with a big new feature.

Today, it's a lot harder to find some big feature that hasn't already been done -- the kernel is a fairly mature project, after all. And there are tons of developers who have been around for a long time, so it is harder to stand out. At the same time, we have a lot more infrastructure for new people to get involved with, and there are lots more drivers and hardware support that you can get involved with, so in other respects things have gotten much easier. After all, today you can buy a Raspberry Pi for not very much money and get involved in doing things that 20 years ago were simply not even possible.

The other thing that has changed is obviously that 20 years ago, you'd get involved with Linux purely for the technical challenge. These days, it can easily be seen as a career: It's a big project with a lot of companies involved, and in that sense the market has certainly changed things radically. But I still think you end up having to be a pretty technically minded person to get into kernel programming, and I don't think the kind of person has changed. It has maybe meant that people who 20 years ago would have gone, "I can't afford to tinker with a toy project, however interesting it might be," now see Linux as a place to find not just a technically interesting challenge, but also a job and a career.

Do you view the blossoming growth of higher-level and interpreted languages and associated coding methods as drawing talented developers away from core internal OS development?

No, not at all. I think it mainly expands the market, but the kind of people who are interested in the low-level details and actual interaction with hardware are still going to gravitate to projects like the kernel.

The higher-level languages mostly reflect the fact that the problem space (and the hardware) has expanded, and while a language like C is still relevant for system programming (though C has evolved a bit too over the years), there are obviously lots of areas where C is definitely not the right answer and never will be. It's not an either-or situation (and it's not a zero-sum game); it's just a reflection of the kinds of problems and resource constraints different projects have.
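
To make the contrast concrete, here is a minimal sketch of the kind of code where C remains hard to displace: talking directly to a memory-mapped device register. The device, register addresses, and bit layout below are hypothetical, invented purely for illustration.

    #include <stdint.h>

    /* Hypothetical memory-mapped UART -- the base address, register
       offsets, and status bit are made up for illustration, not taken
       from any real device. */
    #define UART_BASE   0x10000000UL
    #define UART_STATUS (*(volatile uint32_t *)(UART_BASE + 0x0))
    #define UART_DATA   (*(volatile uint32_t *)(UART_BASE + 0x4))
    #define TX_READY    (1u << 0)

    /* Busy-wait until the transmitter is free, then write one byte.
       'volatile' forces the compiler to perform every read and write
       exactly as written -- the kind of control over hardware access
       that keeps C relevant here, and that most higher-level languages
       deliberately hide. */
    static void uart_putc(char c)
    {
        while (!(UART_STATUS & TX_READY))
            ;               /* spin until the device is ready */
        UART_DATA = (uint32_t)c;
    }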

What do you think the future holds for x86?

I'm not much for a crystal ball, but there is obviously the big pattern of "small machines grow up," and all the historical comparisons with how the PC grew up and displaced almost everything above it. And everybody is looking at embedded and cellphones, and seeing that grow up and the PC market not growing as much.

That's the obvious story line, and it makes a lot of people excited about the whole x86-vs.-ARM thing: "ARM is going to grow up and displace x86."

At the same time, there are a few pretty big differences, too. One big reason PCs grew up and took over was that it was so easy to develop on them: Not only did you have a whole generation of developers growing up with home computers (and PCs in particular), but even when you were developing for one of those big machines that PCs eventually displaced, you were often using a PC to do so. The back-end machines might have been big serious iron, but the front end was often a PC-class workstation.

When the PCs grew up, they easily displaced the bigger machines because you had all these developers that were used to the PC environment and actually much preferred having the same development environment as their final deployment environment.

That pattern isn't holding for the whole x86-vs.-ARM comparison. In fact, it's reversed: Even if you are developing for the smaller ARM ecosystem, you still are almost certain to be using a PC (be it Linux, MacOS, or Windows) to do development, and you just deploy on ARM.

In that very real sense, in the historical comparison with how x86 PCs took over the computing world, ARM actually looks more like the big hardware that got displaced and less like the PC that displaced it.

What does it all mean? I don't know. I don't see ARM growing up until it becomes self-sufficient as a development platform, and that doesn't seem to be happening. I've been waiting for it for a decade now, and who knows when it will actually happen.

We may be in a situation where you end up with separate architectures for different niches: ARM for consumer electronics and embedded, and x86 for the PC/workstation/server market. With IBM supporting its own architectures forever (hey, S/390 is still around, and Power doesn't seem to be going away either), reality may be less exciting than the architecture Thunderdome ("two architectures enter, one architecture leaves").

The computer market isn't quite the wild and crazy thing it used to be. Yes, smartphones certainly shook things up, but that market is maturing now, too.

What do you think of the projects currently underway to develop OS kernels in languages like Rust (touted for having built-in safeties that C does not)?

That's not a new phenomenon at all. We've had the system people who used Modula-2 or Ada, and I have to say Rust looks a lot better than either of those two disasters.

I'm not convinced about Rust for an OS kernel (there's a lot more to system programming than the kernel, though), but at the same time there is no question that C has a lot of limitations.

To anyone who wants to build their own kernel from scratch, I can just wish them luck. It's a huge project, and I don't think you actually solve any of the really hard kernel problems with your choice of programming language. The big problems tend to be about hardware support (all those drivers, all the odd details about different platforms, all the subtleties in memory management and resource accounting), and anybody who thinks that the choice of language simplifies those things a lot is likely to be very disappointed.
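
For a sense of that scaffolding, here is a minimal sketch of a loadable Linux kernel module using the standard module_init/module_exit boilerplate. Everything hard that Torvalds describes -- hardware quirks, memory management, resource accounting -- lies beyond these few lines, whatever language they end up written in.

    #include <linux/init.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal example module");

    /* Called at insmod time; a nonzero return aborts the load. */
    static int __init example_init(void)
    {
        pr_info("example: loaded\n");
        return 0;
    }

    /* Called at rmmod time. */
    static void __exit example_exit(void)
    {
        pr_info("example: unloaded\n");
    }

    module_init(example_init);
    module_exit(example_exit);

Built out of tree against the kernel headers with a one-line kbuild Makefile (obj-m := example.o), it loads with insmod and unloads with rmmod.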

What for you is the biggest priority for driving kernel development: supporting new hardware or CPU features, improving performance, enhancing security, enabling new developer behaviors (such as container technology), or something else?

Me personally? I actually tend to worry most about "development flow" issues, not immediate code issues. Yes, I still get involved in a few areas (mainly the VFS layer, but occasionally VM) where I care about particular performance issues, etc., but realistically that's more of a side hobby than my main job these days.

I admit to still finding new CPU architecture features very interesting -- it's why I started Linux in the first place, after all, and it's still something I follow and love seeing interesting new things happen in. I was very excited about seeing transactional memory features, for example, even if the hype seems to have died down a lot.
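
Intel's TSX/RTM instructions were one concrete form those transactional memory features took. As an illustrative sketch only: the code below uses the RTM intrinsics from immintrin.h and must be compiled with -mrtm on a CPU that still has TSX enabled (later microcode disabled it on many parts, in which case _xbegin() simply takes the abort path).

    #include <immintrin.h>
    #include <stdio.h>

    int main(void)
    {
        static int value;

        /* Start a hardware transaction; everything up to _xend()
           either commits atomically or is rolled back by the CPU. */
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            value = 42;
            _xend();
            puts("transaction committed");
        } else {
            /* Aborted (conflict, capacity, interrupt, or no TSX);
               real code would retry or fall back to taking a lock. */
            printf("transaction aborted, status 0x%x\n", status);
        }
        return 0;
    }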

But realistically, what I actually work on is the development process itself and maintaining the kernel, not a particular area of code any more. I read email, I do pull requests, I shunt things to the right developer, and I try to make sure the releases happen and people can trust me and the kernel to always be there. And yes, answering email from journalists is something I consider my job, too.

My principal model [with respect to] kernel development is to make sure we get all the details right, that we have the right people working on the right things, and that there aren't any unnecessary things standing in the way of development. If the process works right and the people involved care about quality, the end result will take care of itself, in a sense.

Yes, that is very, very different from what I did 25 years ago, obviously. Back then I wrote all the code myself, and writing code was what I did. These days, most of the code I write is actually pseudo-code snippets in emails, when discussing some issue.

What do you think still needs to be done to improve Linux containers?

I'm actually waiting for them to be more widely used -- right now they are mostly a server-side thing that a lot of big companies use to manage their workloads, but there's all this noise about using them in user distributions, and I really think that kind of use is where you end up really finding a lot of new issues and polishing the result.

Server people are used to working around their very particular issues with some quirk that is specific to their very particular load. In contrast, once you end up using containers in more of a desktop/workstation environment, where app distribution, etc., depends on it, and everybody is affected, you end up having to get it right. It's why I'm still a big believer in the desktop as a very important platform: It's this general-purpose thing where you can't work around some quirk of a very specific load.

I'm actually hoping that containers will get their head out of the cloud, so to speak, and be everywhere. I'm not entirely convinced that will actually happen, but there are obviously lots of people working on it.
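
It helps to remember what a Linux container actually is at the kernel level: not a single object but a composition of primitives -- namespaces, cgroups, and friends -- that runtimes assemble. As a minimal sketch of one such primitive, this program uses unshare(2) to give itself a private hostname in a new UTS namespace (it needs CAP_SYS_ADMIN, so typically run as root):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Move this process into a fresh UTS namespace: one of the
           building blocks (alongside PID, mount, and network
           namespaces, plus cgroups) that container runtimes compose. */
        if (unshare(CLONE_NEWUTS) == -1) {
            perror("unshare");
            return 1;
        }

        /* This hostname is visible only inside the new namespace;
           the rest of the system keeps its own. */
        sethostname("container-demo", 14);

        char name[64];
        gethostname(name, sizeof(name));
        printf("hostname in new namespace: %s\n", name);
        return 0;
    }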

We’ve seen Microsoft, Google, and Apple pushing new desktop and mobile OS releases at an unprecedented pace over the past few years. What are your views on the increasingly rapid release cycles for desktop and mobile operating systems?


