
Why Unix v6? Why teach with a 50-year-old design? I feel that to teach the fundamentals of an operating system, i.e. scheduling, IPC, address space management, a microkernel design would be better.


Studying precursor technologies to ones popular today is a great way to learn what led us to this point, what tradeoffs were made and why, and perhaps what we might've lost as well. Students can get a deeper appreciation for new technologies when they're eventually exposed to them. This can only broaden their horizons in their future careers.

As someone who missed this part of history, I would love to have learned about it in college.


You gotta walk before you can run. Xv6 is basic, but it’s a great intro to operating system fundamentals that can fit in a semester for people who’ve never seen these concepts before.


I'm guessing many professors don't choose Xv6 for their operating systems class because it's a great design. Some probably pick it because it's good enough, simple and easier to teach in a single semester where students are also taking other classes. Are you saying the microkernel design is not only better but also easier to teach?


Perhaps build simplicity is also an attractive point?

Basically a rather simple Makefile, gcc, qemu, xorriso...

I see lots of open-source hobbyist OS projects that require you to build your own GCC, which is not fun. Xv6 works with the standard gcc provided by your distro's package manager. On macOS: just install homebrew-i386-elf-toolchain and tweak a few lines in the Makefile. Done. You are ready to build Xv6, which should take about a minute or less.


The Von Neumann architecture is 80 years old. Why is it relevant how old a design is if it's still the most relevant and widely used one? The basic abstractions of Unix v6 still hold to this day.

The main difference between microkernels and monolithic kernels is how the address space gets shared between userland and the kernel. I don't see how a microkernel design would be "better". Why teach a design that isn't widely used?
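That address-space difference can be sketched in a toy example. All class and method names below are invented purely for illustration, not taken from any real kernel: in the monolithic case a "syscall" is a direct call into code sharing the kernel's state; in the microkernel case the kernel only relays a message to a server that owns that state.

```python
# Toy sketch of the contrast (all names invented for illustration).

class MonolithicKernel:
    """Everything lives in one 'address space' (one object)."""
    def __init__(self):
        self.files = {}          # filesystem state sits inside the kernel

    def sys_write(self, path, data):
        self.files[path] = data  # syscall = direct call into kernel code
        return len(data)

class FileServer:
    """A user-space server owning the filesystem state."""
    def __init__(self):
        self.files = {}

    def handle(self, msg):
        op, path, data = msg
        if op == "write":
            self.files[path] = data
            return len(data)

class Microkernel:
    """The kernel itself only routes messages (IPC) between domains."""
    def __init__(self, fs_server):
        self.fs = fs_server

    def sys_write(self, path, data):
        # syscall = build a message and deliver it to another protection domain
        return self.fs.handle(("write", path, data))

mono = MonolithicKernel()
micro = Microkernel(FileServer())
print(mono.sys_write("/tmp/a", "hello"))   # → 5
print(micro.sys_write("/tmp/a", "hello"))  # → 5
```

Both calls look identical to the application; the trade-off being debated is what happens on the other side of the boundary.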


The Von Neumann architecture does not expose the same structural flaws monolithic kernels do: namely, their tendency in modern computing toward massive size, and how badly they fail when something goes wrong.

Old is not always a problem, but monolithic kernels face the same problems they did in the 80s. It's not surprising Apple is moving away from kernel drivers and into userspace functionality.


My comment was in the context of which is better for learning purposes. You spun my comment and turned it into mono vs micro kernel.

System Design is about trade offs. You conveniently say "expose the same structural flaws monolithic kernels do" and not mention anything about microkernels. System arch discussions are not about who "wins", but about trade offs.

> but monolithic kernels face the same problems they did in the 80s

Putting your lack of balance aside, what problems are you talking about specifically? Are you aware of how many new instructions have been added since the 80s that were designed specifically to address the shortcomings of monolithic kernels? Microkernels, even when resources were much scarcer back in the 1980s, still did not become popular.

Both designs have their use cases. Micro- and monolithic kernels have pros and cons, but for learning purposes it makes more sense to teach monolithic kernels, since all popular operating systems follow that design.


apple started with mach, which was a microkernel, and linked basically all of the freebsd kernel into it, morphing it from a microkernel operating system into a monolithic operating system

i agree that monolithic kernels are unpleasant and brittle, but unfortunately they don't actually seem to be obsolete


NeXT you mean.

Additionally, Apple has a long-term roadmap to fix that design decision, hence why they are killing one kernel subsystem at a time, moving each into userspace and giving developers a year to adopt the replacement after it is made available as the new way.

Finally, it is kind of ironic that the systems that doubled down on monolithic kernels are now used to run an endless pile of containers and Kubernetes clusters.


> hence why they are killing one kernel subsystem at a time, moving each into userspace and giving developers a year to adopt the replacement after it is made available as the new way.

Notably they aren't exposing most of the functionality, it wouldn't be possible to e.g. implement NFS or 9p with the userspace filesystem thing they've been pushing. They're basically just shutting users out from their own computer....


well said


> apple started with mach, which was a microkernel, and linked basically all of the freebsd kernel into it, morphing it from a microkernel operating system into a monolithic operating system

The microkernel aspect of it had completely vanished by the time it hit the general public.

> i agree that monolithic kernels are unpleasant and brittle, but unfortunately they don't actually seem to be obsolete

Of course they aren't obsolete! The security in your phone depends directly on sel4, at least if you use an apple device. They just aren't relevant to most of the compute and most of the software you interact with today or in all of history.


you seem to have thought i said 'unfortunately microkernels aren't obsolete', but that is the opposite of what i said


Mach as deployed at NeXT was also never a microkernel; it folded in the BSD 4.2 kernel. I believe this is also effectively what OSF/1 did with the OSF variant of Mach.

Mach has basically always supported that kind of use, even back to early CMU research versions.


It's not surprising Apple is moving away from kernel drivers and into userspace functionality.

That's largely a political decision because of Apple's totalitarianism.


While microkernels play a vital role in our software ecosystem, most of the software anyone interacts with is not coordinated by one, and neither is most of the software that runs the internet or is available on it. I suspect such a course would not prepare one well to work with kernels, or to reason about the kernels you work with as a professional.


You're right, it is run by a pile of containers in Kubernetes clusters, on top of type 1 hypervisors. The irony.


And? What's wrong with k8s and containerized setups?

Linux evolved numerous features over the years in response to server-room challenges. Some of them look monolithic, others are decomposable; in any case Linux became the default dev target platform for everything.

Minix might be nice, but linux has won, and it was NEVER about os architecture.


Because it is a travesty of what is effectively a microkernel architecture.

A free-beer UNIX clone won; that is quite different from winning on any technical advantages.

Even Android does the same with Binder IPC, since Project Treble.


Android, SteamOS, WebOS and all the numerous Linux-based projects, mostly show that the world needs a stable target platform everybody can do a meaningful contribution to (and then make sure nobody steals the work later).

Linux literally ate the world with its POSIX-compatible open-source proposition. I don't have a single device without Linux at home, and this includes a NAS, 5 notebooks, 1 PC, a handheld gaming console, my TV, a bunch of mobile phones, a washing machine.

The world just couldn't care less if it is a microkernel, a hybrid or a monolithic kernel. Like you said, it's not about some boring technical advantages, and it never was.


Just wait until the Linux founding fathers are no longer around.

Try to write Linux POSIX code for Android, WebOS, and ChromeOS apps, and see how many normies will buy your wonderful app.

Free beer ate the world; everyone likes free beer, even if it is warm.


Alan Cox will do it fine, and the rest of the people have similar skills on GNU licensing and such.


I think he counts as one of the founding fathers. He was the second-in-command when I met him in about 1997 or 1998.

And note what he turned to when he left: an 8-bit OS.

https://www.fuzix.org/

Where do you think he got that antipathy for large and complex systems?


Yes, free as in beer and free as in freedom. No complicated licensing, code open for change, any scale, any use-case.

Hard to compete against with all these "license per working space" or "tcp stack not included" or "no code for you" of the usual competitors.

> Just wait until the Linux founding fathers are no longer around.

Yes, things change all the time. People come and go. Just as companies do.


I would say the masses won in this case, instead of academics and CEOs in high towers… now they have to share some power with the rabble.


a lot of people agree with you, which is why minix exists (though their source control system just fell offline this year), but none of windows nt, linux, os/360, and even really macos/ios are microkernel designs. sel4 and the other l4 variants are, and so is qnx, and linuxcnc runs linux under a microkernel, and xen is kind of a microkernel if you look at it funny, so we're definitely seeing significant mainstream use of microkernels, but it's not clear that that 50-year-old design is actually obsolete the way the pdp-11, the cray-1, and the system/370 are


Actually, z/OS (descendant of System/370) is more microkernel than Linux is. But the problem with microkernels is similar to the problem with microservices - you have to make the executive decisions somewhere, and that unfortunately ends up being the bulk of "business logic" that the OS does.

In theory, I like the concept of functional core, imperative shell - the imperative shell provides various functions as a kind of APIs, and the functional core handles all the business logic that involves the connections between the APIs. (It's also sometimes called hexagonal architecture.)

However, it is questionable whether it actually reduces complexity; I daresay it doesn't. Every interaction of different shell APIs (or even every interaction that serves a certain purpose) needs a controller in the core that makes decisions and mediates this interaction.

So when you split it up, you end up with more bureaucracy (something needs to call these APIs in between all the services) which brings additional overhead, but it's not clear whether the system as a whole has actually become easier to understand. There might also be some benefit in terms of testability, but it's also unclear if it is all that helpful because most of the bugs will then move to the functional core making wrong decisions.
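The functional-core/imperative-shell pattern under discussion can be sketched in a few lines. This is a generic illustration under invented names (a toy cache-eviction policy, not from any real OS): the core is a pure decision function, and the shell performs the mutation and mediates between the "APIs".

```python
# Minimal sketch of functional core / imperative shell (all names invented).
# The core is a pure function: state in, decision out, no side effects.
# The shell owns the side effects and the "bureaucracy" of calling the APIs.

def decide_eviction(cache, capacity):
    """Functional core: pure policy decision, trivially testable."""
    if len(cache) <= capacity:
        return None                               # nothing to evict
    return min(cache, key=lambda k: cache[k])     # evict the oldest entry

def store(cache, clock, key, capacity=2):
    """Imperative shell: mutates state and carries out the core's decision."""
    cache[key] = clock
    victim = decide_eviction(cache, capacity)     # ask the core what to do
    if victim is not None:
        del cache[victim]                         # perform the effect
    return cache

cache = {}
for t, key in enumerate(["a", "b", "c"]):
    store(cache, t, key)
print(sorted(cache))   # → ['b', 'c']  ("a" had the oldest timestamp)
```

The core is easy to unit-test in isolation, but note the point above: the bug surface largely moves into `decide_eviction` making the wrong call, and `store` is pure bureaucracy.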


i admit to not being very familiar with the current version of os/360; can you elaborate?

btw, when you say 'z/OS (descendant of System/370)', i think you are confusing hardware and software; system/370 was the hardware (obsolete), os/360 the software (sadly, not obsolete; later renamed os/370, mvs, and z/os in a series of increasingly desperate attempts to escape its reputation)

generally the functional/imperative contrast centers on mutability: imperative style uses mutability, and functional style doesn't. is that what you mean? i'm not sure a functional core in the sense of 'mutation-free core' is a reasonable way to build a computer operating system, because limiting resource consumption and handling failures reliably are two central concerns for operating systems, and typically immutability makes them much more difficult. immutability does have a lot of uses in modern operating systems, but at least on current hardware, it makes more sense to me to build it as a functional shell around a mutable core than the other way around

(the other aspect of the functional/imperative axis has to do with constructing new functions at runtime, passing them as arguments to subroutines, and returning them from subroutines: you do these things in functional programming, but not in imperative programming. i am at a loss how this could relate to what you're talking about at all.)

it's not clear to me what https://web.archive.org/web/20070403130947/http://alistair.c... has to do with functional-core/imperative-shell or for that matter with operating system kernels. can you elaborate?

for the most part operating systems design is an exercise in delegating as much as possible of those 'executive decisions' to userspace. 'mechanism, not policy' is the mantra for kernels and for system software in general, including things like device drivers and window servers. that way, you can use different policies in different parts of the system and change them over time without destabilizing the system. i feel like microkernels are generally better at this than monolithic kernels, and sel4 in particular takes this to the extreme


Ah, sorry for the inaccuracies. I mean MVS as a predecessor of z/OS, of course.

What I mean by functional core/imperative shell is similar to what you mean by (the "kernel" is the "imperative shell" and the "userspace" is the "functional core"):

"for the most part operating systems design is an exercise in delegating as much as possible of those 'executive decisions' to userspace. 'mechanism, not policy' is the mantra for kernels and for system software in general, including things like device drivers and window servers"

And z/OS does that a lot, much more than Linux. On a typical z/OS system, many of the functions that would normally run inside the Linux kernel run in separate address spaces, with limited authority.

But the intractable problem IMHO is that to decide the policy, you still need the authority to do so (you need to be able to issue commands to the kernel), so you can still wreak havoc in the system.


like what?


For example, on z/OS, the whole disk storage subsystem (SMS, but there is more) is separate from the MVS (kernel). Security is also externalized in RACF server (in fact there are alternate products from non-IBM vendors). You can run multiple TCP/IP stacks, which are also running in their own address spaces. Sysplex serialization has its own address space.

All the address spaces involved in the operating system are coordinated through SVC or PC routines, which are like system calls, and scheduling of SRBs, which are kinda like kernel threads. I am not sure (although I am not aware of latest developments) if in Linux one can define a custom system call, like you can on z/OS. Or if you can schedule your own kernel thread from user space.

You seem to know about MVS, yet we probably disagree on whether it is to be called a microkernel or not. I am not an OS expert, and I never did kernel-level programming for Linux or z/OS. But I did read Lister's Fundamentals of Operating Systems a long time ago, and that book is somewhat based on what MVS (the actual kernel of z/OS) does. It was written before the whole microkernel debate, which AFAICT might be just an artifact of the enormous variety and complexity of x86 hardware.

So I would like to hear, in your opinion, what should have been different in MVS (or z/OS) for you to consider it a microkernel?


i don't know enough about it to have an opinion. thanks!


>minix exists (though their source control system just fell offline this year),

https://git.minix3.org/index.cgi seems online?


this started failing, i think, last week:

    : ~; git clone git://git.minix3.org/minix.git 
    Cloning into 'minix'...
    fatal: Could not read from remote repository.

    Please make sure you have the correct access rights
    and the repository exists.


While Windows and macOS aren't pure microkernels, they certainly are much closer to one in architecture than pure UNIX clones will ever be.


That is how we end up with students always cloning UNIX in their projects, instead of going down alternative routes like Redox or SerenityOS.


The irony here is that both SerenityOS and Redox are UNIX-like. Of course in their design, they're not purely like most other UNIXen, but they also don't stray away too far.


They offer a POSIX-like API on top, which isn't the same thing, as the key APIs and overall system architecture are something else.

Also mostly because, as happens in most hobby projects, people keep wanting to replicate GNU due to the existing software, thus keeping the UNIX cycle alive.


The POSIX API comes with a large number of warts and constraints, and requires a great deal of specific machinery to support.


GNU/Hurd is interesting. It replicates Unix, but it gives far more power to the user.


Interesting are systems like Xerox PARC Workstations (Mesa, Cedar, Smalltalk, Interlisp-D), ETHZ Oberon, Inferno, Apollo/Domain, Tru64, QNX.


QNX is another Unix in the end, and the Photon GUI is nothing odd to any KDE/Windows 2000 user.

Smalltalk has issues with exporting your software to run in a standalone way.

On the Interlisp side, there's Mezzano, a Common Lisp OS, but it needs some tweaks and optimizations.

UI-wise, Oberon is much the same as Acme under p9/9front/p9port. As for Inferno, 9front and Go superseded it in some ideas.


Gilad Bracha is working on that with Newspeak:

https://www.bracha.org/Site/Newspeak.html


> Smalltalk has issues with exporting your software to run in a standalone way.

What issues?


Do not forget Genode.


Or maybe something like a lisp machine or a smalltalk os?


I would LOVE to build a modern-day operating system using a high-level programming language, even if it were just a pedagogical toy. I love Unix, but it’s not (and shouldn’t be) the final word on OS design.

In the meantime, Project Oberon from the late Niklaus Wirth (http://www.projectoberon.net/) is a good example of a pedagogical operating system (and programming language) that isn't Unix. Project Oberon was heavily influenced by the Cedar and Mesa projects at Xerox PARC.


Lions' Commentary on UNIX is a classic tome on the subject, but unfortunately it was illegal to distribute for quite some time.

https://en.wikipedia.org/wiki/A_Commentary_on_the_UNIX_Opera...


The way it was handled only proves the point: UNIX would never have taken off outside Bell Labs if AT&T had been allowed to sell UNIX the moment it stopped being a toy project for playing games.


Wat


AT&T forbade further publication of the book the moment AT&T was released from the ban on selling its research, in parallel with the BSD lawsuit.

It kept being shared via piracy across universities until AT&T and the other commercial UNIX vendors eventually allowed the book to be published again.

https://en.wikipedia.org/wiki/A_Commentary_on_the_UNIX_Opera...

Had the book never seen the light of day, in that alternative universe of a commercial UNIX, universities wouldn't have adopted UNIX as research material to feed the next generation of UNIX clone makers.


Why not? xv6 doesn't prevent you from learning about microkernels in any way. It's also a complete operating system with code that's friendly for beginners.


> Why teach with a 50-year-old design?

> a microkernel design would be better.

why re-hash a 30-year-old debate? [0]

0: https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_deb...


I guess because Tanenbaum was right after all: instead of microkernel processes, we got containers and Kubernetes pods.


Because it doesn't really matter. Processor design hasn't changed much in its fundamentals. KISS. Just KISS.



