Red Hat 30th anniversary (redhat.com)
215 points by rwmj on March 28, 2023 | 179 comments


Leader: "What have Red Hat ever done for us?"

Enthusiast #1: "Well, they developed and continue to support Red Hat Enterprise Linux, one of the most popular enterprise Linux distributions."

Leader: "Oh, well obviously we need that. That's true."

Enthusiast #2: "And don't forget about the Fedora Project! It's a community-driven distribution that helps test out new technologies before they make their way into RHEL."

Leader: "Oh, right, the Fedora Project. Yes, yes, we should be grateful for that."

Enthusiast #3: "Plus, they've made significant contributions to the development of the Linux kernel, especially when it comes to virtualization, security, and performance."

Leader: "Ah, yes, the kernel. You're absolutely right, that's a big one."

Enthusiast #4: "And what about all the open-source software projects they've developed and supported, like GNOME, JBoss, and Ansible?"

Leader: "Well, obviously we couldn't get by without those."

Enthusiast #5: "And let's not forget about their collaboration with other companies and organizations to advance open-source initiatives, like OpenStack and Kubernetes."

Leader: "Yes, yes, very impressive. But apart from Red Hat Enterprise Linux, the Fedora Project, contributions to the Linux kernel, open-source software projects, and collaboration with other organizations, what have Red Hat ever done for us?"


Leader: "Great. We've embraced RHEL, and decided to toss out VMware for RHV, just like you've been asking."

Enthusiast #6: "Uhhh ..... about that."

Leader: "Oh, not good huh? Well, certainly Red Hat Storage is a great take-out for our expensive proprietary storage units!"

Enthusiast #7: not present because the whole BU was moved to IBM


I have been experiencing this personally, although it's more accurate to say the RHV/oVirt team was moved on to OpenShift Virtualization (KubeVirt).


Red Hat basically made Linux on the server a reality.

It provided a solid and serious corporate backing to that idea, providing not only software implementation and bug fixes, but also (and maybe more importantly) guidance, training, and consulting.

Basically bridging the tech world with the corporate world.

All this at a time when the computing industry was divided between Microsoft and proprietary Unix vendors.

Anybody not acknowledging these points would be dishonest at best.


IBM's big endorsement in early 2000 made a big difference as well.

But whether or not you accept the premise that an open source "unix" was inevitable, a lot of companies in the late 90s were at least hedging against a world where Windows basically conquered all. (Sun being the notable exception among the systems companies.)


> But whether or not you accept the premise that an open source "unix" was inevitable

Bryan Cantrill had a great talk[1] on this topic, recalling the efforts of (and the mistakes in) open sourcing Solaris (and the immense disappointment in seeing it get back to closed-source by Oracle).

Of course we can only fantasize about how things would have gone if Solaris had been open source earlier.

[1] I think it's this one? https://www.youtube.com/watch?v=-zRN7XLCRhc (LISA11 - Fork Yeah! The Rise and Development of illumos) I recognize it from the disclaimer "The views and opinions expressed in this video..." on the side :)


I did a podcast with Bryan and SJVN debating this topic. https://bitmason.blogspot.com/2020/04/podcast-if-linux-didnt...

Bryan was basically of the opinion that if Linus hadn't written Linux, BSD would have come to the fore somehow in spite of its baggage.

(I also cut that discussion into a longer series: http://bitmason.blogspot.com/2020/05/podcast-was-open-source...)


And the company that shouldn't be named on HN, Oracle.


Oracle came later though. IBM was in 2000. Oracle Linux didn't come until 2006.


I wasn't talking about Oracle Linux.

> 1998: Many major companies such as IBM, Compaq and Oracle announce their support for Linux.

-- https://en.m.wikipedia.org/wiki/History_of_Linux


Lots of companies "supported" Linux. IBM's big announcement in early 2000 was significantly higher profile than what most large companies were doing at that time however.


I can assure you that "runs Oracle" meant quite a lot during the .com wave.


From memory of back then, what made IBM noteworthy in the press was the scale of their investment and the number got repeated a lot. Dropping a billion dollars on some commie hippies made quite a splash that long ago.


One could argue it has offered GNU/Linux distributions which require their guidance, training, and consulting in order to manage.

So, yeah, basically bridging the tech world with the corporate world, but in both directions :-(


I LOL'd a bit at JBoss, only because of how radically history changed, from app servers ruling the world to practically disappearing overnight into the maw of Spring Boot (and its ancestors).


We hated appservers so much, we reinvented their whole "managed deployment" thing at the OS level (with containers/docker).


Just wait until WASM containers become the next big thing (TM).


Openshift is the new Websphere/JBoss.


I felt I was reading toward some sort of punchline, but it didn't come, or I didn't get it. Like how the leader acknowledges all the goodness, but still doesn't want to pay for it, perhaps. Though maybe that's not really a Red Hat problem?


That's a reference to Monty Python's "What have the Romans ever done for us?" sketch:

https://www.youtube.com/watch?v=Qc7HmhrgTuQ


An absolute classic scene, but I also quite like this take on it:

https://twitter.com/bmay/status/1162996543697252357?s=20


The Judean People’s Front. PSSHHH!


You're right; I think I made a mistake by adding the last block. I've edited it to end on the leader, which I think makes it funnier.


I can do just fine without GNOME. In fact, I think the Linux ecosystem would have been much better off if it had never been developed, or maybe left to die after the Qt license issues were sorted out.

As for the popularity of RHEL, how much of that is because the US government is probably their biggest customer, and basically mandates its use for any application needing Linux? That's not a grassroots type of popularity.


The popularity of RH was (and is) driven by its commercial stability. They were the first to support slow-changing releases over timespans that businesses found realistic. In the enterprise, you can't rebuild a server every other year - you must be able to guarantee stability over 5 years at least. This stability gave major developers (like Oracle) a fixed target they could realistically aim to support with closed-source binaries. This way, they became "the" commercial Linux distribution of choice.

They've since missed a few boats, but still they've been careful to play only in the areas where real money can be made - and they've quietly steered platform evolution over the long term, effectively calling all the shots (or at least all the shots that matter) in the Linux world for a long time now.


> if it had never been developed

In all fairness: Gnome was absolutely great at its beginnings… until the clique around Havoc Pennington took over and decided they knew better than the users what was good for them and ruined Gnome… thus earning them the affectionate title "interface nazis", dished out by Linus Torvalds.

https://www.cio.com/article/220271/11-technologies-that-tick...

> after the Qt license issues were sorted out

Much as I'd agree that GTK has proven to not be the way (either), I don't see the "Qt license issues" as being "sorted out" at all.

It did look like it for a short while after Nokia bought Trolltech and made Qt available under LGPLv2.1, but that didn't last long: shortly after, Microsoft managed to place their mole Stephen Elop in the CEO seat at Nokia, and Elop methodically ruined Nokia in preparation for a Microsoft takeover and threw out Qt (which represented cross-platform development, a concept Microsoft always saw as a threat to their business model), replacing it with the ill-fated Windows Phone platform. Under Digia, and even more so once split out as QTCOM, the licensing focus of Qt shifted from "make it more open so as to gain wide traction" (the strategy under Nokia) to "try to force devs into commercial licensing schemes and monetize to the max you can milk out of it", and it has been a sore downhill slide into a shitty situation.

The "Qt license issues" we have today aren't the same as way back in the QPL days of the early KDE-vs-Gnome forum wars, but they are at least as bad imho. A lot of developers who used to develop mainly with Qt (me included) have since jumped ship because of that and now prefer to avoid Qt, though the sad reality is that while there are better alternatives depending on the project, there isn't really a good free cross-platform all-round alternative to Qt either. I think there is a big window of opportunity and a gap to be filled by a new modern cross-platform UI toolkit.


Can't you still use Qt as FOSS, albeit not their LTS binaries?


First of all, it's not just the LTS binaries, it's also the LTS source and license for it. Commercial-only. No FOSS.

But it goes further: even for what is still FOSS (i.e. the non-LTS releases), the licensing gets progressively more restrictive. Whereas Nokia made Qt available under the relatively permissive LGPLv2.1, that has since changed to a mixture of LGPLv3 and GPL, with a clear shift towards the latter: for new modules, you typically only have the choice between commercial and GPL. Unless you are content with that for all of your software projects, that's quite the problematic constraint.


>for new modules, you typically only have the choice between commercial and GPL. Unless you are content with that for all of your software projects, that's quite the problematic constraint.

For a FOSS project like KDE, why would this be problematic? I don't see how this is an issue at all.


Well, first of all, while the GPL is one particular FOSS license (out of many), it's not as if it were compatible with all FOSS licenses, far from it. So while imposing GPL-or-commercial may not be a problem for a FOSS project that happens by chance to have chosen that one specific license anyway… it can be a huge problem for others that want to choose a different FOSS license.

And that's just for projects that happen to be FOSS anyway. The thing is: very few software developers have the luxury of releasing ALL their software projects as FOSS. Something has to pay for rent and food. There are a few select domains where developing FOSS software can be the base for a business model that pays a sustainable living… but those tend to inherently have a very small intersection with the kinds of software typically developed with Qt.

And huge complex frameworks aren't something a developer changes all the time like underwear. It's something devs tend to stick to, because it takes quite a significant time investment to master and to build your workflow, tooling etc. around. In other words: developers largely tend to do their FOSS projects in the same framework that they use for the work projects that pay their rent and food.

If big framework A disqualifies itself for the latter and gets replaced with framework B for that, chances are that their FOSS projects will be centered around framework B as well and no longer around framework A. As long as Qt was a viable option for work, it made perfect sense to center one's FOSS efforts around Qt as well and thus around KDE, which is the primary Qt-based Desktop. But since the licensing situation of Qt deteriorated, that is no longer the case. So that's another factor where it impacts KDE.


> until the clique around..

I think that's really unfair. Fact is, the Gnome/Freedesktop people were the first ones to actually do user experience tests thoroughly and then use that feedback.


I always thought an enhanced GNUstep+WindowMaker with a modernized setup a la OSX would create a much better environment than Gnome.

Today we would be using GNUstep with 1/10 of the resources of any DE, and Cocoa Emacs would shine on GNU much more than the GTK3 version does.


That's interesting, but if you're trying to make a DE that attracts users who are used to Windows, or serve as a Windows replacement for corporate/government employees, it won't be helpful. Neither is GNOME, in my opinion. KDE is easily the closest thing in FOSS-land to the Windows UI, and it's always baffled me why RedHat didn't support it instead, given their corporate focus.

Some people argue that Gnome (3+) is better here because it doesn't lend itself to customization very well, but its entire UI is so radically different from Windows that I don't see how this argument holds water. Sure, KDE is ridiculously configurable, and that can be a big problem for IT support, but for non-admin users, it shouldn't be difficult to set an option to lock it down and only give users a very limited subset of the config options. KDE might not have that now, but RH could have easily developed that with a tiny fraction of the resources they've poured into Gnome development.


Judging by the popularity of GTK as a GUI toolkit on Linux over literally decades, it's pretty convenient, and thus valuable.

I'm not a fan of Gnome the desktop environment, but GTK (which was not even invented for Gnome) is a huge boon.


>GTK (which was not even invented for Gnome)

GTK = the GIMP Toolkit


Amazon Linux and Facebook's internal system were also based on Red Hat.


Before Amazon Linux, Amazon used RHEL 5 and RHEL 3 internally on both bare metal and with a customized Xen hypervisor to run virtual machines.

RHEL has been the default enterprise Linux distro for many years with SUSE as a second choice. Before that, most enterprises that were not running Windows were running Solaris.


Ok, but still this doesn't indicate any real popularity: a small handful of decision-makers picked RedHat as their standard OS for probably tens to hundreds of thousands of users, plus a ton of servers.


It seems like Red Hat was popular with decision makers. Red Hat was by far the most popular Linux before Ubuntu, which might have contributed.


> advocated for the use of open standards and interoperability

systemd, pulseaudio, and freedesktop have brought much pain in the way of open standards and interoperability.


For people trying to run desktop BSD, I think systemd has caused some real pain. For most Linux users and sysadmins, systemd has been helpful in the way it has standardized service management.

The freedesktop standards just seem to be offensive to BSD people as a matter of taste, though.

At its worst, PulseAudio was basically fine but pushed on some users by their distros before it was mature. And even then, Red Hat has also driven the development of its replacement, PipeWire, which has been widely recognized as kickass pretty much from the get-go.


> The freedesktop standards just seem to be offensive to BSD people as a matter of taste, though.

The freedesktop standards should be offensive to everyone as a matter of taste :-D. They are the ACPI of the desktop world.

(Note: if anyone wishes the freedesktop standards gone because they're the ACPI of the desktop world, they have probably not experienced the pre-ACPI world :-D)


My entry into Linux was actually a trip backwards into the pre-ACPI world, although that world was before my time.

As a kid, I was gifted a junker laptop that was otherwise destined for the trash heap after some hardware damage caused it to reboot whenever the AC adapter was plugged in or removed. Due to some wobble in the charging port, this meant the thing would reboot constantly— on Windows, anyway.

But when Linux booted on it, it could tell that something was not normal in what it heard via ACPI. So it disabled ACPI support, reverted to APM, and ran more or less normally. I still remember seeing those messages about APM being printed on every boot!

That was the first computer that was ever my own, and the reason I learned that Linux existed. Something mysterious going very wrong with ACPI was the start of a lifelong love for me. :)


systemd is a double-edged sword, really. Yes, ops-wise we removed thousands of lines of fixes for "simple" (BSD people lie to you when they say sysv is simple) init scripts and also got rid of stuff like monit (auto-restart of failed services) because builtin features of systemd replaced it better.
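The monit-style auto-restart mentioned here boils down to a couple of unit directives; a minimal sketch (service path is illustrative, not a real daemon):

```
[Service]
# Illustrative path, stands in for whatever daemon you run
ExecStart=/usr/local/bin/myapp
# Restart on crash or non-zero exit, waiting 5 seconds between attempts
Restart=on-failure
RestartSec=5
```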

But on the flip side, for example, journald introduced an utterly misdesigned binary log format that needs to traverse every file it has on disk just to tell you "sorry, I don't have logs for this app". If it were just SQLite file(s) I'd be utterly thrilled: the ability to just throw SQL at it to get the logs I want? Sign me up! But currently it's just… bad.
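For illustration only, here is roughly the kind of queryability being wished for. The schema is invented for this sketch; journald's real on-disk format is a custom binary journal, not SQLite, and exposes nothing like this:

```python
import sqlite3

# Hypothetical log store: what "journald as SQLite" might look like.
# Schema and data are made up for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE logs (unit TEXT, priority INTEGER, ts REAL, message TEXT)")
db.executemany(
    "INSERT INTO logs VALUES (?, ?, ?, ?)",
    [
        ("nginx.service", 6, 1000.0, "started"),
        ("myapp.service", 3, 1001.0, "crashed"),
        ("nginx.service", 4, 1002.0, "slow upstream"),
    ],
)

# "Just throw SQL at it": all error-level messages for one unit,
# without scanning every journal file on disk.
rows = db.execute(
    "SELECT message FROM logs WHERE unit = ? AND priority <= 3",
    ("myapp.service",),
).fetchall()
print(rows)  # [('crashed',)]
```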

Overall it is a positive change but I definitely do not blame people for disliking it, there are many parts of it to complain about.

> At its worst, PulseAudio was basically fine but pushed on some users by their distros before it was mature. And even then, Red Hat has also driven the development of its replacement, PipeWire, which has been widely recognized as kickass pretty much from the get-go.

It took 5+ years for it to be "fine", and the design was shit from the beginning, like completely ignoring the needs of real-time audio, meaning anyone who wanted to do any serious audio work needed to go either to JACK or directly to ALSA, and making JACK/ALSA and PulseAudio play together wasn't for the faint of heart.

I still have some dumb udev scripts to apply a bunch of pulseaudio commands because apparently "remembering the config of a device" isn't in scope for PulseAudio.

Even putting stuff in the actual pulseaudio config is not enough, as it will only be applied if the device is present before pulseaudio starts.

So if, say, you have a USB audio card (which can be just the digital output of a synthesizer or other instrument) that you plug in afterwards, you have to manually apply any configuration every time you connect it.

Sure, if you were lucky it might've worked for you, but for years I heard "sec, I gotta restart pulseaudio" any time someone needed to run Skype or some other voice chat… and also had it just start screeching once or twice a year, or "just" randomly start to stutter until I killed it.


>(BSD people lie to you when they say sysv is simple)

Ah, the *BSDs do not use sysv init but BSD init, a totally different startup. And if you are old enough, you may remember the BSD init vs. SysV init wars; I think they made the systemd conflict look like a tea party with one mildly obnoxious person in attendance.

The current BSD init is by far the easiest. IIRC, most BSDs moved to something similar to this:

https://www.usenix.org/legacy/event/usenix01/freenix01/full_...


> BSD people lie to you when they say sysv is simple

Probably better to compare to the modern alternatives. runit and openrc are pretty great.


> systemd has been helpful in the way it has standardized service management

From where I sit, absolutely not. And it's brought over-complexity, and monolith design, and far far far more bugs than other options, as a result.

My point is, the only reason systemd is less 'contentious' these days is because fewer and fewer people know how stable, streamlined, and functional systems were before it. Systemd won by Red Hat (via Gnome hard requirements) forcing it into all distros, and now by the fact that it is the Linux standard.

NOTE: I had to counter the statement above. I get that people may have differing opinions, and that people may view systemd differently, but I cannot sit by when someone just says "this has been helpful!".

It's been more harmful than helpful.


The amount of privesc I used to get with initscripts is insane.

Systemd took away all that.

For that alone, it's worth the price of admission.


> Systemd took away all that.

And it has granted an endless litany of exploits itself, due to its overt complexity, including such absurdities as null-password assumptions.

Systemd has been a far less secure option.


That's a common misconception. Systemd has absorbed many disconnected projects, all of which had their own issues well before systemd.

It's a net security win once you have done the math. I did it 2 years ago but I would think the logic still holds true.


> know how stable, streamlined, and more functional systems were before it

That is so far from my own experience that it's foreign to me.

I've been using various Unices for over three decades, mostly a sysadmin / SRE. I profoundly hated and distrusted most of the old init systems.


Been using since the late 90s.

Debian was always exceptionally stable with sysvinit. Certainly more so than systemd has ever been.

Yes, sysvinit had issues. Yet you could fix those issues quite handily. The core init binary was small, the scripts individual, and issues readily addressed. And once you resolved an issue, you reported it upstream and Debian typically fixed it.

Meanwhile, systemd has an entirely new litany of problems with every single release. It's an insanely fast moving target. And worse, due to the nature of the project, it keeps extending its scope, and when it does so, it does it in what I can only refer to as a half-assed way.

Want nuanced DHCP service? You have to disable systemd's builtin and go with an external one. The same is true for NTP: you have to disable that and go with ntpd. When it comes to mounting NFS shares, I've had systemd:

* prevent a proper shutdown

* prevent a proper startup

* fail to mount on user login

* fail to start other services, because nfs is down

Meanwhile, nfs /home may have been an issue in certain circumstances... but there were very simple, very easy ways around this. And any breakage was simple to deal with.

I guess my point is, you may have had problems with whatever you used prior, but many distros ironed out most of the kinks. And the init system wasn't, essentially, an entirely new init system every 2 years.

Systemd has so many tweaks in its code now, so many edge case fixes, so much scope and size, overlaid with so much constant change, that it is essentially easier to consider it an entirely new codebase, every new stable release of a distro.

That's the complete opposite of stable.

What really should be done, IMO, is fork systemd and:

* remove every single thing that isn't core init

(No network code, no dhcp code, no ntp, no udev, nada, nothing)

* stabilize and simplify that core code

* split out anything of potential value as a separate code base, maybe keeping systemd's udev or some such

I mean, look at binary logfiles, and it having its own internal logging system. It's just insane. I literally cannot comprehend the logic of this. Now almost every single Linux system in use has two parallel logging systems: systemd's silly system, and then a syslog variant.

Just... why?

It's the same with the entire networking blather, claiming the naming conventions are simpler. Wha?! Before systemd, Debian's udev implementation would give you eth0 in all but edge cases. It would remember which NIC had which label and keep that across boots, no matter where you placed that NIC. It worked, it was perfect, it was simple, it had zero issues, for a decade plus.

Yet today, I still get insane interface names such as 23423434234weesa34r325233rwef from systemd in some cases. Hello?! What?! How is that a "predictable" name? You know what's predictable? eth0.

Not some insane "name it after the NIC slot, or failing that the MAC, or failing that $x" scheme, onward forever.
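(For what it's worth, one commonly documented way to get the old eth0-style names back is a kernel command line switch; distro specifics vary, this sketch assumes GRUB:)

```
# /etc/default/grub, then regenerate the grub config
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
```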

It is immensely clear that the systemd authors had very little understanding of Linux on servers when they started. And all of their goals were primarily, at the time, to "speed up" booting of VMs and to make VM instantiation of networking easier (hence naming after the PCI(e) slot, which can be set in hypervisor configs).

Meanwhile, sysvinit had parallel bootups, and all of systemd's bootup speed optimization is gone, because systemd had to take into account all of the issues that sysvinit had to.

What a shocker. I recall systemd devs touting how fast stuff booted, then spending a decade adding edge cases, adding the ability to prevent instant reboots (e.g., proper shutdowns so DBs and other daemons can exit gracefully), and wow, look at that, now bootup and shutdown times are slower than on the sysvinit systems I manage.

Yes, slower. Along with cases of systemd not rebooting, just doing a shutdown and failing to reboot, because systemd. Literally the most absolute worst thing an init system can ever fail at. Pathetic. Beyond pathetic.

It's at least once a month that I have to deal with a new systemd edge case. Literally, once a month. Yet when I managed non-systemd systems? Years would go by before an edge case hit; I'd resolve it, and be done for years.

I don't know why you would distrust old init systems. Their scope was quite simple. Outside of starting/stopping a service, that was it. And outside of something more complex (shutting down a DB, and maybe a 4 minute wait for that), this was a very simple task.

What issues did you have? NFS boot issues? (An easy thing to resolve, but yes, that was and still is an issue, even with systemd.)


> For people trying to run desktop BSD, I think systemd has caused some real pain. For most Linux users and sysadmins, systemd has been helpful in the way it has standardized service management.

It only standardized anything if you deny anything else exists.


The alternatives were frankly shit in comparison, and we used at least 2 different ones on sysv systems (monit and daemontools).


The alternatives still work just fine, or at least OpenRC does.


To make desktop Linux a reality we need more standardization in the Linux world, not less. If some GNU/Stallman types get offended, so be it. It's a worthy tradeoff.


Desktop Linux is a reality. "Standardizing" won't make it any more mainstream and will destroy what makes it special: That you get the OS that YOU want.


~2% market share is far from a reality. Everywhere mainstream I go, I only see either Mac or Windows.


> Everywhere mainstream I go

Do you visit people's homes much, or schools and universities, or anywhere in China?

Because various vendors have shipped something like ¼-⅓ billion ChromeBooks over the last 3-4 years. That's a lot. That's more than Apple has shipped Macs.

You don't see them in offices much, it's true, but that's because they're in homes and educational institutions.

China has been following an aggressive program to dispense with foreign computers, called 3-5-2:

https://www.ft.com/content/b55fc6ee-1787-11ea-8d73-6303645ac...

The plan is that by 2025, everything will be Made In China:

https://cybershack.com.au/opinion/made-in-china-2025-a-polic...

I reckon there is a lot of desktop Linux out there, but we don't see it.

Like it or not, ChromeOS is Linux. (Somewhere way down there, it's Gentoo.)

Unity OS and openKylin are, basically, Ubuntu. Deepin is its own thing but at least until last year it's Debian underneath.

You don't see much of Kylin or UOS or Deepin, but for nearly 20% of the human race, those are the dominant Linux distros.


Most people who consciously prefer 'desktop Linux' do so not because it's based on the Linux kernel. They do so because it feels better to use than alternatives. And the reason a Linux desktop feels better to use goes understated because it seems 'political' and the people who have explicitly advocated for it most visibly have been smeared as extremists, ideologues, etc. But it's clear: using a system comprised as far as practicable of free software is refreshing, because it provides respite from the artifice, malice, and opacity of big tech products. It's a lost treasure from a forgotten world.

Chromebooks are so thoroughly productized that they have very little to do with what most 'Linux' advocates really like about Linux. We should wake up and recognize ourselves as advocates of software freedom, user choice, community-driven development, and a particular free software ecosystem, instead of getting into nonsense arguments about 'what is really Linux'. The same goes double for similar discussions that crop up around Android, which is as productized as ChromeOS but additionally much more locked down, further alienating it from the actual virtues that make good Linux experiences good.


I find this a very strange response, because while I can't really argue against any of it, it reads to me not as an answer but as a passing comment, tangential to what I was writing about that intersects with it almost incidentally.

There are big issues here but I am not sure that our views overlap by more than a wafer-thin ellipse.

One question: does it really matter why people use it?

Another: which is more important, the people who use it for political or ideological reasons, or the ones who use it from pragmatism?

Another: does it maybe matter more how many use what form than why they do it?

For 3 decades or so, there have been 2 main camps in the FOSS world, which is why the term "FOSS" was invented. One camp is the GNU folk, the Free Software types. The other is the Open Source folks.

Free Software: purity is more important than functionality. Don't use anything non-Free.

Open Source: what's important is that it works and does the job. Try to keep it all Free but if the choice is not working or not being free, it's more important that it works.

Fair so far?

This split exists in Linux.

So, for example, some distros (Ubuntu, Mint, and others) include proprietary blobs and drivers because without them it won't work on most computers.

Other distros (Fedora, Debian) don't. If you have problems getting it working, tough. Try a wired Ethernet connection. Try a USB wifi dongle. It's more important to them that it's all Free, but they may include non-free firmware in some versions or as extras.

Other distros refuse that, and purge the whole codebase of all BLOBs: Trisquel, Purism Pure OS. They accept and embrace that this choice means restricted hardware choice, and you may have to build computers specifically to run them.

But the thing is that now there's a 3rd kind of Linux. Linuxes that don't even claim to be a Linux, corporate products that happen to be based on Linux, but don't mention it, and are sold to the public as appliances, devices that work and let you use the Internet, and who cares how it works underneath.

Android, ChromeBooks, ChromeCast, etc.

And the thing is, these are way more successful. Billions of people use them and don't care what they run.

As a result, some of the changes have filtered back up the software chain and now desktop Linux is way more viable than it was, and a lot of stuff mostly just works.

But the two FOSS camps unite over this: these things Do Not Count and are not considered Real Linux.

I think market success matters more than ideology. We can say that it's arrived and it's practical and usable because hundreds of millions of people buy this kit every year and just get on with using it and enjoying it, and they don't care -- and don't need to care -- if it's FOSS software underneath.


chromebooks are the standard linux desktop


No thanks. I'm not sinking myself deeper into the Google data collection and advertisement empire. The whole point of the Linux desktop is freedom from such anti-consumer behaviors.


> The whole point of the Linux desktop is freedom from such anti-consumer behaviors

[[Citation needed]]

That may be your use-case, but it's not everyone's. I use it because it works well on cheap generic kit, it's free, it's faster than Windows, and it's largely immune to viruses.

My machines are all connected to paid Google accounts.


> systemd, pulseaudio, freedesktop have brought much pain in the way of open standards and interoperability.

You were down voted, so I will be glad to join you in the swamps :)

I fully agree with that statement, with freedesktop being by far the worst of the bunch. Seems these so-called "Open Groups" are doing all they can to eliminate the BSDs and probably some rather unique Linux Distros along the way.

And yes, I stand by this and I doubt my opinion will ever change. So let the down-votes begin :)


So many people quietly making .service files for their software who don’t even know what systemd is. Thanks Red Hat.

https://stackoverflow.com/questions/4018154/how-do-i-run-a-n...


The same people would have written even worse SysV scripts. We had to fix dozens of those for subtle bugs...


Or they put it into crontab via @reboot. Or they run it in screen/tmux. Or they run it from crontab via @reboot, which starts a screen/tmux session with sudo for a specific user... that's what I've seen myself so far on Linux servers. I believe having simple, structured unit files saved us from at least some of those setups. Another big part is [hidden] under Docker now.

Seen nothing of that chaos on FreeBSD servers, but not because its init system is superior, just because it has literally zero presence in the server market.
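To be fair, the thing people now write instead of that @reboot/screen/sudo dance is a handful of declarative lines. A minimal sketch (the `myapp` name, path, and user are made up for illustration):

```ini
# /etc/systemd/system/myapp.service  (hypothetical service name and paths)
[Unit]
Description=Example long-running app
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now myapp.service` starts it now and at every boot, and journald captures its stdout/stderr, which is a big part of why these setups got less chaotic.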


> even worse

Oh I didn't mean sarcastically.

.service files generally just work.

Agreed that SysV files generally don't work and have a bunch of subtle bugs.


systemd?


given the strongly diverging views and controversial nature of forum discussions whenever the subject "systemd" comes up, I feel like I have to ask, for the sake of clarity: was the mention of systemd meant positively (in favor of Red Hat) or negatively/sarcastically?


On the positive to neutral spectrum


yes


pipewire


Still to this day, THE BEST exam I ever sat was the Red Hat Certified Engineer. I guess like many folks here I've sat a ton of exams over the years, but none compares to that one. I can't really think of another case where I loved an exam.

It might have all changed by now, I did my RHCE exam on RHEL5.5 so we’re talking many moons ago before switching careers to development.


I was one of the first people to write the RHCE, back when I think they were still figuring out how they should do the exam.

There was a written portion followed by a practical portion in a lab where you had to configure services/features, followed by a break where they would mess small things up that you had to troubleshoot and fix.

I had a feeling they were going to break things, so I made sure I installed tripwire on each machine so I knew exactly what files they touched. I was done with that portion in under 15 minutes. When I came out, the proctor said "Did you give up?" to which I replied "Nope - got 'em all I believe."

I think they removed tripwire from the available binaries after that :-)


> I guess like many folks here i’ve sat a ton of exams over the years

At the risk of sounding like a caveman, I have to admit that I haven't sat a single exam in my professional life. Am I losing out on something? What benefits have these exams brought you that could not have happened without them? Seriously considering if I should start signing up for them. And if I want to sign up for them, where do I start?


To me there are a few benefits of doing exams (at least the Red Hat ones that are not cheap, but well respected):

1) it looks good on the resume, which can help you get past the initial sift by people who don't understand what your experience actually means.

2) They give you the chance to fill in the gaps in what you think you know. My experience of doing my RHCE after 10 years of professional sysadmining was of the 14 chapters in the book I knew maybe 10 already and had never touched the other 4 because they never came up in my job, and the prospect of a looming exam gave me a deadline and the motivation to actually sit down and learn them, which then paid off later in other jobs that did use them.

3) to test whether you are as good as you think you are :)

If those don't speak to you then they're probably not super important to do, luckily we mostly work in an industry where experience trumps exams.


Which book are you referring to out of interest?


The coursebook I got as part of the training course, this was 5+ years ago so we got a physical book. I've no idea what they do these days but there's a list of exam objectives on the red hat website and that basically covers what you need to know


Is there still an entire section on vsftpd?


no. I don't think that was even a thing in the RHEL7 version I last did and the whole exam/course has changed a lot since then.

I'm sure there are lots of places that still use vsftpd though (I have a vague memory that it supported kerberos at least), so it might still be useful for some people


For your personal development, it offers a structured way to learn a new skill; doing it on your own works, but has the risk of skipping large swathes of it or getting bored with it. Working towards certification is a goal you can work towards, as an alternative to undirected or situational learning.

And for an employer it's a nice tick when assessing candidates, it might just tip the scales in your favors vs someone else. However, I recognize that we're very much in a "seller's market", in that it often feels like education or certifications don't matter when it comes to finding a job, to the point where some years ago, people that just had a crash course / boot camp (as opposed to a formal education, bachelor's or master's degree) could land a six-figure job at a FAANG or startup company.


There was a time (early 2000s) when certifications were all the rage and by far the easiest way for otherwise 'mediocre' employees to stand out from the crowd to get hired and/or a raise. A couple of places I worked had formalized systems tying raises to passing certifications. Several people I know got hired for their job in tech purely on the back of passing their Microsoft developer certification.


I’ve worked at a certain company maintaining Linux servers and POS systems for retail stores across the country. The AD admin was MS certified, and apparently the certification materials used to recommend companyname.local as the AD TLD. Meanwhile one of our software suppliers pulled in samba, which pulled in avahi.

It became a big headache at first, and then we had to adjust our whole image build process to avoid the collision between the AD domain and the multicast DNS resolver.
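For anyone who hasn't hit this: avahi's nss-mdns module claims `.local` for multicast DNS, so with the stock resolver ordering, lookups in a `companyname.local` AD domain can be swallowed before they ever reach unicast DNS. The usual culprit is a line like this (a sketch of the common default, not our exact config):

```
# /etc/nsswitch.conf
# mdns4_minimal answers .local queries and [NOTFOUND=return] stops the
# lookup there, so AD records under companyname.local never reach 'dns':
hosts: files mdns4_minimal [NOTFOUND=return] dns
```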


Microsoft certifications back then, yes: practise hard enough and I recognised the shape of the question text and knew what the answer was! The past exams had the same questions as the current ones at that time.


Red Hat deprecated that exam a couple years ago, and the content it tested for (provisioning servers of various kinds and security) simply got scrapped completely.

I attempted that exam right before it was to disappear permanently, on RHEL 7.4, but did not pass unfortunately.


Red Hat was 26 years old in 2019, and then they were acquired by IBM.

But that’s only if you count the starting year as 1993; Red Hat the actual company seems to have been started in 1995, from what I read. In that case, they would have been 28 years old today.


> Red Hat was 26 years old in 2019, and then they were acquired by IBM.

As a former Red Hat employee this was painful to read. IBM is doing a Sun Microsystems to it, Red Hat used to be such a great company.


I clicked through to this thread because I want to know more about this.

I'm a user (of both RHEL and Fedora, although I use RHEL via the free developer license so I am not actually a customer).

I used to be a Sun user in the same way, but when Oracle bought Sun it was really, really obvious that using Sun's stuff was something we should immediately prepare to stop doing.

It's a little less obvious with IBM buying Red Hat, though -- especially to a random user like me.

For one, IBM isn't nestled between Halliburton, Enron, and Juul on the despicability spectrum.

For two, it looks a lot like Apple buying NeXT, at least from the outside. As in old company with a lot of not-looking-so-hot-anymore tech buys the younger upstart with better tech, and the better tech seems to win.

So what, specifically, is IBM doing that is so bad? E.g. firing the team responsible for developing X, and just milking the dead husk for short-term enterprise deals, or... ??


> what, specifically, is IBM doing that is so bad

This is just the customer facing stuff. The internal politics are more painful.

- Got rid of RHV, gave customers no realistic migration path

- Moved an entire BU of employees from Red Hat to IBM with no choice except to GTFO if you didn't wanna become an IBM employee (storage BU)

- Each area of the business is under increased oversight due to IBM beancounters thinking they know better

- IBM interference in product development (IBM has a deal, so they want feature X, even though bugfix Y and Z are more important)

At first it was all "yeah, you red hatters are gonna save IBM", but Red Hat is at the point where OpenShift is being milked by IBM 100%.


Yeah it was genuinely really sad that the deal couldn’t have been “here have OpenShift for a mountain of cash, leave us alone.”


Still is. I’ve been at RH a long time.

IBM hasn’t touched us and they are not doing what Oracle did to Sun. Not yet anyway, but Jim made a bunch of promises to us when we were acquired and they’ve kept them all.


This has not been my experience AT ALL. Also, no one has mentioned Jim in years. Where is he anyway? Chewed up and spat out by the IBM machine.


Why would IBM acquire and not touch it? They must have had a reason to acquire?


Red Hat is now worth 10% of IBM's revenue, and growing.

They had 3.4 billion reasons to acquire when they acquired it in 2018, and they had roughly 6 billion reasons not to touch it last year.


IBM "touches" Red Hat on a constant basis, needling and throwing its weight around to make way for blue ribbon 7 figure deals that never materialize.


Big list of companies to upsell to.


Oracle rescued what they cared about from Sun; everyone else, including IBM, couldn't care less.


Oracle basically murdered a ton of Sun open-source projects overnight.

Is IBM really doing that to Red Hat, or just suffocating it slowly the way that happens in many acquisitions?


> Is IBM really doing that to Red Hat, or just suffocating it slowly the way that happens in many acquisitions?

OK, my previous comment was an overstatement. This is what's happening.


That makes more sense. It's still sad to hear. As an outsider, Red Hat always stood out to me as a truly special company. Too few 'open-source' enterprises have figured out how to profitably develop F/OSS in a principled way, without the 'open core' bait-and-switch.

It's just the tech company circle of life, I guess. I wonder what younger tech company might play a similar role as Red Hat has another 30 years from now, and if they even exist yet.


They will just make it a PITA to use if you're not on Red Hat.


They murdered RHV, that's a good start.


Red Hat murdered RHV without IBM's help (sad to say).


Was that pre IBM? If so, I stand corrected!


It was after, but as with CentOS Stream the writing had been on the wall long enough; both rwmj and I are in the virtualization team.

There was hardly any new feature between RHV 4.3 (early 2019) and 4.4 (2020, last non-maintenance release) and there literally wasn't time for IBM to say anything about RHV 4.4, considering that feature work for 4.4 would have started before the acquisition (July 2019).


I am sore because I had many customers who were left without a path that made sense for them. Kubevirt is the future but it would be like giving your Grandpop a hoverboard. He’ll break a hip.


It is not nearly as bad. Oracle assassinated OpenSolaris on first sight while IBM literally pays people to develop Fedora.


That's disappointing to hear. What are some noticeable changes since the acquisition?


I'm not there anymore, but there has been significant change in the company culture (which used to be great) and IBM is making conditions gradually worse.

One clear change is that IBM is pushing for a cheaper workforce: more employees, but cheaper and therefore less qualified. There are many short-term goals of getting quick sales rather than building stuff that is sustainable over time, not just technically but also in terms of burning people out.

Deprecation of CentOS is something that really pissed me off, especially since we had acquired CentOS a few years back.

Some projects have been transferred to IBM. Others, without being transferred directly, were influenced by IBM in a negative way (I was a software engineer on one that was).

Significant changes in teams that deal with end customers: support has been affected, consulting has been affected. I know sales had some kind of merge as well (I don't have any friends in sales but I've heard some stuff), etc...

Also there are loads of ridiculous changes in policy to save costs and they don't even save any money.

Something that really pissed me off were the problems in HR. For instance, I've seen problems with payslips (once I was paid THREE times with different amounts, and the following month I got a deduction on my payslip because I had been paid too much; I know a guy who got paid 4 times that month and the month after he still got an extra amount because after 4 different payments money was still missing). I've heard of problems and delays doing the paperwork for paternity leave.


Hi! I don't recognize your nick but thanks for the kind words in your other message, first of all. I'm curious who you are, it would be great if you dropped me a private email or social media DM to tell me! Sorry about not responding to every point, I tried to focus on those where I can reply without being too vague. I hope you understand.

First of all let's be clear, the deprecation of CentOS was already in the air when CentOS Stream was started internally. That was IIRC in 2017, anyway before the acquisition. Even before that there had been a serious effort on increasing CI and CD of RHEL. It was a prerequisite to a feasible "rolling release" distro and it naturally led to CentOS Stream. I totally agree that communication sucked there, it was not the first time Red Hat screwed up communication with the community and probably won't be the last. That's sad. On the other hand I like the way Rocky Linux and Alma developed out of CentOS Stream. In exchange for a shorter lifecycle, their developers now have a path towards contributing to RHEL, which they didn't have before.

And just like IBM is incorrectly accused of killing CentOS, the same is probably true of most internal policy changes. Internal communication was always very clear in the rare cases when policy changes were driven by IBM (no details sorry), and there have even been cases in which policy changes have been reverted (unlike CentOS ;)). Both of those things also point towards IBM _not_ being the mastermind here. I was hired when there were 3000 employees and now there are over 20000; it's normal that some policies change. At the same time we had no forced "return to office" after COVID, memo-list still exists and management recognizes their mistakes when they're pointed out.

Likewise, I am not sure what you refer to as "wanting a cheaper workforce", is it https://news.ycombinator.com/item?id=29114697? Again, not sure why you think IBM was behind it but I'd very much rather have that, than thousands of layoffs Google or Meta style. One comment in there says it all---people are not born as senior engineers and you must start somewhere. Growing junior developers to be open source project maintainers has always been Red Hat's superpower.

And while some projects have been transferred to IBM, notably storage, some projects have been transferred _from_ IBM (the main being OpenShift ACM, and part of the Java team). I have met some of the people who moved from IBM, they are amazing and they quickly embraced being part of Red Hat. And in fact, while I'm not aware of (or did not understand) what you are referring to with respect to customers, support or consulting, I know IBM did a serious effort to educate _their_ customer-facing teams about Red Hat. I mean, it's not like Red Hat pulled a NeXT-style reverse acquisition, but the executive behind the acquisition is now IBM CEO and for some time a former Red Hat CEO was IBM #2. That must mean something.

Over four years have passed since the acquisition was announced, and there has been no massive exodus out of Red Hat. The promise was that "Red Hat would still be Red Hat" and I think they kept it.


From the sidelines, thanks for this detailed reply. I went to bed last night before I saw it, but I was hoping I would get to see a more detailed account of this perspective.


IBM has a long history of outsourcing to cheaper countries. Just be ready to train your replacement.


I am already training my replacements, knowingly and enthusiastically.


one thing I don't hear mentioned is a huge change in company culture. we started hiring management from cisco, and they are NOT a good personal fit. like super sexist nasty backstabbing management types. that was NOT the type of people Red Hat hired pre IBM.


For one, the deprecation of CentOS


To be honest, if the worst thing you can say about a company is that they changed the distribution model of the thing they were giving you for free from point releases to rolling updates, then they could be a lot worse.


That effectively makes it useless. The whole reason to use CentOS is because it's binary-compatible with RHEL; it's merely RHEL without the expensive licensing fees and support. So it's really useful for developing software where your customer is the US government or someone else standardized on RHEL.

Luckily, according to Wikipedia there are two new distros that have popped up to fill this need: RockyLinux and AlmaLinux.


If that is how Red Hat saw CentOS, why the hell did they buy it?


Because CentOS had been languishing, with releases and security fixes taking longer and longer, and the thought at the time was that since CentOS was seen as the entry point to Red Hat Enterprise Linux, it might be leaving a bad impression with potential future customers.

The thinking behind CentOS Stream is different. The idea was not to kill off a free competitor (those were always going to exist, and projects like Rocky and Alma forming was inevitable, and this was obvious). The idea was to create a real community where previously there was not much of one. CentOS was the Android-style "throw it over the wall" model of open source. About the most you could do as an outsider to contribute was file tickets on Bugzilla and package for EPEL. Whereas CentOS Stream provides a place for people to contribute to future versions of RHEL, and therefore, RHEL clones like Rocky and Alma.

So Rocky Linux devs and users, Alma Linux devs and users, CentOS devs and users, Facebook employees (they use CentOS Stream internally), Oracle Linux devs, and whoever else can make and review contributions, which is a more symbiotic relationship than existed before.


That one was a mistake, but also nothing at all to do with IBM. We (Red Hat) did that to ourselves.


Curious what the rationale behind the decision was?


https://news.ycombinator.com/item?id=34988259

It was purely a communication issue and a tempest in a teapot. There are great replacements for CentOS Linux, and Red Hat employees are now all working on CentOS Stream (development is done in Stream) instead of having just a handful of people doing the rebuilds.

I for one don't miss having to ask permission months in advance to backport a bugfix to a weird RHEL package! A lot more decisions can be taken in autonomy by developers.


Absolutely nothing.


Do you work at Red Hat?


He does, he's been there many years and he's one of the best engineers I've worked with :)


Wow. You two had very different experiences of the acquisition.

I guess a lot of the anxiety surrounding an acquisition is anticipatory, and on some level that can cause culture changes in its own right, if it inspires a fair number of people to look for work elsewhere.


Yup, it already started. They bought Ceph, and one of the first things they did was discontinue packaging for non-RHEL distros.


How is that related to IBM? RedHat bought Ceph back in 2014.


> But that’s only if you count the starting year as 1993; Red Hat the actual company seems to have been started in 1995, from what I read. In that case, they would have been 28 years old today.

The distro was not created before 1994 either, so it looks like they're counting the moment the founders got together as the creation date (before the distro, before the company)?!


According to the article,

> Red Hat was incorporated on March 26, 1993, by its founders Bob Young and Marc Ewing


Red Hat themselves say '95 in other places.[0] Young and Ewing were working together from '93, but Red Hat Software, Inc. is from '95.

[0] https://www.redhat.com/en/about/brand/standards/history


I guess that depends on your definitions. ACC Corporation was created in 1993, but it was a simple CD-selling company and had nothing to do with Red Hat except distributing their CDs, like many other CDs they sold.


I still have my first linux cd somewhere.

They came together with the `SAMS RedHat Linux 7.0 Unleashed` book I bought, as I did not want to go through the same pain I went through with my 33600 PSTN modem, downloading Slackware 7's CD #1 in 14 days.

Cheers RedHat :)


My first was from a Redhat 5.2 book, some time in the first half of 1999. Alas, at the time my SiS 6326 wasn't supported in X, at least by default, so there was little I could do. In those days internet access wasn't ubiquitous like today: you could get online, but you'd have to use minicom to dial your modem, then background it, then run pppd to get IP up and working, then use lynx to access the internet.

Or dual-boot back to windows, use a browser, find something else to try, then back to linux to try that.

Wasn't really until 6.0 that I started using linux properly, and wasn't until Debian Potato in August 2000 that it became my primary OS.


Ugh now I feel old, because my first Linux was a Red Hat on floppies that came tied to a RH 4.X book or something. It was so cool to install on an old Gateway 2000 I had come across, and have my own computer. Hard to believe that for as lost as I was in that shell on day one, I now spend so much time in one, sometimes from a computer that fits in my pocket.


I remember coming down an escalator around 23 years ago in an Amazon or Amazon-like bookstore seeing someone pick up a Redhat box in its distinctive colouring from behind the cashier desk and thinking "They're onto something."


I have very fond memories of the two years I spent at Red Hat. I even organised an interview with Bob Young in 2019: https://youtu.be/Z75GvP-nYtA


They still host images for the versions of Linux I cut my teeth on in the 90s. For that as much as anything I'm really grateful!

Redhat 4.2 Biltmore represent!


I think I bought my first Intel-based PC (a 486DX66 with 4MB RAM) in March 1993 - I was just shy of 30 myself. A core goal, which I achieved, was to dual-boot MS-DOS 6 and Linux. While I started with SLS and Yggdrasil, I did end up giving Redhat a run. It's great to see it make it commercially.



Unrelated but does anyone know if Fedora/Red Hat is or if they will start funding KDE as they do GNOME?


I only have a modicum of respect for this company after they butchered CentOS.


Which honestly is going to bite them. The on-ramp from CentOS when you're small to RHEL once you're big enough to need support is gone. It's Ubuntu/Debian all the way down now.


This… it was a natural flow.

I’ve switched to Ubuntu too, but my muscle memory is Redhat so it's still sometimes annoying. Never got into Debian though.


IBM's acquisition caused this, don't blame only RedHat


Ah yes, IBM: the software company that still thinks Rational ClearCase is a great version control system.


Given the amount of money they have in the bank versus many wannabe startups, they are in some sense right.

And yes, ClearCase does stuff that many alternatives to this day still have issues dealing with.

With ClearCase I already had object file caching across the build cluster without extra tooling, nice graphical management tools, and I never messed up my view specification files the way I routinely need to fix my Git repos.


ClearCase doesn't even have the concept of related changes in multiple files being part of the same commit. How you can even compare that to a modern VCS I have no idea.


It clearly supports changesets; maybe you were holding it wrong.


Keyword: UCM activities.


They bought CentOS in 2014 which was long before IBM.


sweet they are a year younger than SUSE

https://www.suse.com/de-de/30years/


What would you say are the main contributions of RedHat to the FOSS world, from its inception until this time (or 2019 if we want to split hairs)?

As a Debian devotee (now Devuan), I've never been fond of RedHat/RHEL/CentOS as distributions; they always felt clunky to me somehow. But I am vaguely aware that RedHat has been involved in kernel development and the development of various apps etc. What would you say are the most prominent and "redeeming" of those?


> What would you say are the main contributions of RedHat to the FOSS world

For me, I would say their biggest contribution is their early work with getting Linux into the hands of 'normal' people. RedHat was the first to really get their CDs into bookstores, into computer books and into the hands of students and hobbyists. They sponsored and supported the writing of many of the books that first taught people Linux. At the same time they managed to appear as a 'serious' company and calm and convince managers that Linux on both company servers and work stations was a safe bet. I would also say they really pioneered the whole "FOSS as a business model" concept which many companies and projects later adopted.


curl -sL 'https://www.kernel.org/doc/linux/MAINTAINERS' | grep -ci redhat

The fact that their employees can work on a large part of the Linux ecosystem during work hours makes a huge difference too. I often come across people from Red Hat experimenting, like implementing screen sharing in Wayland.

I've heard that they've been known to buy up stuff like Ansible for example, and then release source code. In ansible's case it was AWX, but I wish I had some better examples to mention. Maybe someone else can fill me in on some company red hat has purchased only to release their product source code to the public.


Not all of Red Hat's contributions are from the @redhat.com domain; it's not uncommon for people to commit from their personal addresses.

I can think of a few companies that Red Hat has bought ( https://en.wikipedia.org/wiki/Red_Hat ), but the ones that I think got open sourced are:

- Sistina GFS

- Netscape Directory Server

- Permabit (dedupe, I think)

There are likely more, but these are the ones that I've messed with and that they open sourced.


curl -sL 'https://www.kernel.org/doc/linux/MAINTAINERS' | grep -i "@redhat" | grep -v "L:" | sort -u | wc -l


150 out of 3873. Certainly noteworthy. Some other commercial companies:

     curl -sL 'https://www.kernel.org/doc/linux/MAINTAINERS' | egrep -o '<.*@.*>' | sed 's/<.*@//; s/>$//;' | sort | uniq -c | sort -n

     23 ti.com
     24 foss.st.com
     27 vmware.com
     28 huawei.com
     36 bootlin.com
     36 mediatek.com
     41 chromium.org <- Google, right? --einpoklum
     43 analog.com
     47 suse.com
     48 amd.com
     48 samsung.com
     49 arm.com
     49 marvell.com
     49 microchip.com
     49 nvidia.com
     54 google.com
     61 nxp.com
     76 broadcom.com
     84 linux.ibm.com
     87 linux.intel.com
    140 redhat.com
    157 linaro.org
    158 intel.com


They have set a lot of standards. If Red Hat did something it often has become embraced. Basically, they have acted as a leader and I think they have done a good job as that.


Can you name some of the prominent examples?


Sure. Systemd. Flatpak. Freedesktop was created by a Red Hat employee. Red Hat has been a driving force in implementing SELinux policies. Wayland was created by a Red Hat employee. I think if you compare Red Hat and Ubuntu, you will see that the projects Red Hat pours resources into find wider adoption, while Ubuntu often goes it alone.


Those are more like "stuff RedHat is pushing" rather than standards. Unfortunately, systemd has caught on...


Well that shows your bias more than anything


Just one person among many, but employing Dave Airlie (airlied) has been a great boon for the Linux graphics stack. Among many other things, airlied started the radv driver (together with Bas Nieuwenhuizen), which has become a significant part of Linux gaming on AMD hardware.


They are the 2nd largest contributor to OpenJDK (Oracle is 1st, obviously). They maintain the older releases, 8 and 11. It's mainly thanks to them that freely available maintenance releases of JDK 8 and 11 still exist.


That's truly admirable, but on a little tangent here: how sad is it if you still have to be on Java 8. Sure, there are probably a few good reasons for an exceedingly small number of users, but all those enterprises that won't move with the times just because their compliance team won't certify an upgrade... I feel for those folks.


It's not the formal certification that's the problem - it's the actual breakage in very old code that everyone expects and nobody wants to deal with.


This may well be... And I don't mean to argue over it, but it's just so hard to imagine, having been through this myself, and it's never been very hard, given how stable the JVM has generally been. It's not like I'm asking to upgrade from Python 2 to 3.


There was a long stretch of time for which the only full-time developer working on Python (cPython) upstream was a Red Hat employee. The other core devs were either working in their free time or with their employers sponsoring only a few hours a week.

https://discuss.python.org/t/official-list-of-core-developer...

I believe Microsoft has been sponsoring a lot of work for the past year or so, so I doubt that is still the case.

Red Hat has been probably the biggest contributor to GCC for a very long time and contributes to LLVM/clang as well.

Beyond the kernel - glibc has a lot of RH maintenance work, GRUB 2, Wayland, Xorg, GNOME, GTK, PipeWire, systemd, etc.


they're one of the largest contributors to the kernel, Gnome is largely funded and developed by Red Hat I think, they sponsor Fedora. They probably employ a significant chunk of people who work on the linux ecosystem.


So, the kernel is very much to their credit, GNOME - somewhat of a wash, in the sense that some people like it and some hate it... as for Fedora - I count that as one of "their" distributions, which like I said, I'm not a fan of.

But point taken about many people working on the ecosystem.


For better or worse: $$$$. Lots of $$$$.


See the sibling comment from Thev00d00 for a decent answer.


Cue start of a systemd debate.

Less sarcastically, systemd is a major part of most distros. It has its haters, but it's one of the most impressive additions to FOSS from Red Hat.


Impressive in its insidiousness, I would say. I still can't believe Debian bought into that (of course, while claiming it's only an option).


Contributions to kernel by employees?


I'm not a Redhat apologist. I have no skin in the game, but I've paid attention occasionally over the years.

It depends on how you count, of course. I've done these statistics using a variety of git tools over the years. Being lazy, I asked various AI interfaces the question since AI seems better at faking SQL than I am at QUEL or the DSL of the week in schemaless databases.

The results vary depending on decade, particularly since Intel have increased their kernel contributions from 2013 onward. However, when ranked over the 30 years from 1993-2023, as of today, assuming no latency in the data, Redhat wins:

  - Red Hat: 16.9% of total contributions
  - Intel: 12.8% of total contributions
  - IBM: 8.4% of total contributions
  - SUSE: 6.3% of total contributions
  - Google: 4.5% of total contributions
  - Linaro: 3.9% of total contributions
  - Samsung: 3.2% of total contributions
  - Oracle: 2.7% of total contributions
  - AMD: 2.1% of total contributions
  - Huawei: 1.9% of total contributions
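If you want to sanity-check numbers like these yourself rather than trust an AI summary, the rough shape of the query is a tally of author email domains. A sketch, shown here with a few sample addresses standing in for a real `git log --format='%ae'` stream from a kernel checkout:

```shell
# Tally commits per author email domain. In a real kernel tree, replace
# the printf with: git log --format='%ae'
printf '%s\n' a@redhat.com b@redhat.com c@intel.com \
  | sed 's/.*@//' \
  | sort | uniq -c | sort -rn
# → "2 redhat.com" then "1 intel.com" (modulo uniq -c's leading spaces)
```

Caveat: raw domain counts undercount companies whose developers commit from personal addresses, which is why the serious per-company stats (like LWN's) map addresses to employers first.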



