
> For a long time the X Window System had a reputation for being difficult to configure. In retrospect, I’m not 100% sure why it earned this reputation, because the configuration file format, which is plain text, has remained essentially the same since I started using Linux in the mid-1990s.

It's because X's config files asked you questions whose answers you had no good way of knowing other than by trial and error. (After all, if there had been some OS API already available at the time to fetch an objectively correct answer, the X server would just have used that API and not asked you the question!)

An example of what I personally remember:

I had a PS/2 mouse with three mouse buttons and a two-axis scroll wheel ("scroll nub"). How do I make this mouse work under X? Well, X has to be told what each signal the mouse can send corresponds to. And there's no way to "just check what happens", because any mouse-calibration program relies on the X server to talk directly to the mouse driver (there wasn't yet any raw input-events API separate from X), so in the default X configuration that assumes a two-button mouse, none of the other buttons get mapped to an X input event, and the calibration program won't report anything when you try the other parts of the mouse.

So instead, you have to make a random guess; start X; see if the mouse works; figure out by the particular way it's wrong what you should be telling X instead; quit X; edit the config file; restart X; ...etc.
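
To give a flavor of what the guessing looked like, here's a rough sketch of the kind of XF86Config/xorg.conf pointer section involved. The Protocol and ZAxisMapping values are exactly the things you had to guess; the ones below are just one plausible combination, not the known-correct answer for that particular mouse:

    Section "InputDevice"
        Identifier "Mouse0"
        Driver     "mouse"
        Option     "Device"       "/dev/psaux"
        # Guess #1: maybe it speaks the IntelliMouse PS/2 protocol?
        Option     "Protocol"     "IMPS/2"
        # Guess #2: map the scroll axes to buttons 4/5 (vertical) and 6/7 (horizontal)
        Option     "ZAxisMapping" "4 5 6 7"
        Option     "Buttons"      "7"
    EndSection

If the protocol guess was wrong, the pointer would jump around or the extra buttons would simply do nothing, and you'd go back and try "PS/2" or "ExplorerPS/2" instead.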

(And now imagine this same workflow, but instead of something "forgiving" like your mouse not working, it's your display; and if you set a resolution + bit-depth + refresh rate that add up to more VRAM than you have, X just locks up the computer so hard that you can't switch back to a text console and have to reboot the whole machine.)



Yup, things are so much better now that they just work. Except when they don't, because now it's harder to do anything about it.

I've lost count of the number of Linux machines I've seen that won't offer the correct resolution for a particular monitor (typically locked to 1024x768 on a widescreen monitor).

I don't know whether the problem's with Linux, Xorg, crappy BIOSes or crappy monitors - but even now I occasionally resort to an xorg.conf file to solve such issues.


Do you work with a lot of KVMs? Directly plugged monitors usually just work thanks to EDID info, but cheap KVMs frequently block that signal and cause problems. It's rare for a monitor plugged directly into the computer to have problems these days, even on Linux.


No KVMs involved - but three of the machines I have in mind (not identical, but all running the same version of Linux Mint) have two monitors attached, one of which is OK and the other isn't. (Not mine - so I haven't put any time into trying to solve it yet.)

Another machine - which is mine - used to have a 19" VGA monitor attached which worked happily at 1280x1024 for months, then one day something got updated and it wouldn't do anything beyond 1024x768 after that until I resorted to an xorg.conf file.
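
(The kind of xorg.conf override involved looks roughly like the sketch below; the sync ranges are illustrative and the Modeline is CVT-generated rather than that monitor's documented values, so treat it as a template, not a known-good config.)

    Section "Device"
        Identifier "Card0"
        # Driver is normally auto-detected; uncomment to force one
        # Driver "modesetting"
    EndSection

    Section "Monitor"
        Identifier  "Monitor0"
        # Illustrative ranges; the real ones should come from the monitor's specs
        HorizSync   30.0-82.0
        VertRefresh 56.0-75.0
        # Generated with: cvt 1280 1024 60
        Modeline "1280x1024_60.00"  109.00  1280 1368 1496 1712  1024 1027 1034 1063 -hsync +vsync
    EndSection

    Section "Screen"
        Identifier   "Screen0"
        Device       "Card0"
        Monitor      "Monitor0"
        DefaultDepth 24
        SubSection "Display"
            Depth 24
            Modes "1280x1024_60.00"
        EndSubSection
    EndSection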


Also, on modern machines you almost never want to be editing the xorg.conf. xrandr took over the responsibility of doing resolution stuff.

To fix the resolution on a modern distro the sequence is something like this (use your actual monitor dimensions and refresh rate of course):

    % cvt 1920 1080 60
Copy everything past "Modeline" into a cut buffer.

    % xrandr --newmode <paste the line from above>
Keep a note of the first field on that line; it will look something like "1920x1080_60.00". This is the "mode name".

Next, find out what your monitor is named:

    % xrandr | grep ' connected '
It will be HDMI-1 or VGA-1 or something like that; this is your "interface name".

Now add the mode to your monitor specification:

    % xrandr --addmode <interface name> <mode name>
Finally, switch to the new mode:

    % xrandr --output <interface name> --mode <mode name>
This is the modern way of doing it. Manually setting up modelines in the xorg config file is oldschool.
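
Since these xrandr changes don't persist across X restarts, the usual trick is to wrap the same steps in a small script run at session startup. A rough sketch, assuming the output is HDMI-1 and the target is 1920x1080 at 60 Hz:

    #!/bin/sh
    # Sketch: re-create and select a CVT mode at login (adjust output name and mode)
    OUTPUT=HDMI-1
    # Take cvt's modeline, dropping the leading "Modeline" keyword and the quotes
    MODELINE=$(cvt 1920 1080 60 | sed -n 's/^Modeline //p' | tr -d '"')
    MODENAME=$(echo "$MODELINE" | awk '{print $1}')
    # $MODELINE is intentionally unquoted so it splits into separate arguments;
    # errors are ignored in case the mode already exists from a previous run
    xrandr --newmode $MODELINE 2>/dev/null
    xrandr --addmode "$OUTPUT" "$MODENAME" 2>/dev/null
    xrandr --output "$OUTPUT" --mode "$MODENAME"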


That's very good info.

> For a long time the X Window System had a reputation for being difficult to configure.

But apparently some things never actually change. :)


Author of article:

Honestly, it is pretty easy to configure the X Server these days; very little manual intervention has been required since the mid-2000s if you want to accept the defaults, which largely are correct and good. I am mindful about what families of hardware I buy, though that's not too restrictive. The only piece of hardware of mine that needs manual configuration is the Logitech TrackMan Marble, but that's only because I operate the mouse with a right-handed layout with my left hand. Interestingly, the TrackMan Marble does not work with its full feature set in Wayland (prime example: the buttons to enable horizontal/vertical panning of a wide or tall document), and this is not exotic hardware. How configuration is being handled in the X to Wayland conversion is a mystery to me. Some of it is happening in libinput (I think), but other parts aren't. This is one of the reasons I am deferring the Wayland migration for as long as I can.

Configuring the software stack that runs on top of X (think: what the ~/.xsession file manages) is where I've invested most of my effort, and that's purely about aesthetics and behavior: DPI, font rendering settings, window manager, etc. And this is pretty easy to do these days, because most of these things can be prototyped and altered in an existing X session (keeping a tight edit-run loop).
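
A minimal sketch of what such a file might contain (the specific programs and options here are stand-ins, not a prescription):

    #!/bin/sh
    # Sketch of an ~/.xsession: purely aesthetics and behavior, easy to keep
    # under version control alongside the rest of one's dotfiles.
    xrdb -merge ~/.Xresources            # DPI, font rendering, terminal colors
    setxkbmap -option ctrl:nocaps        # keyboard tweaks
    xsetroot -solid grey                 # root window background
    exec i3                              # hand off to a window manager (stand-in choice)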

And both of these situations can be alleviated by storing critical configuration files (e.g., ~/.xsession or the X Server configuration) under version control. There's no reason to reconfigure the same hardware from scratch these days when version control and storage are cheap.


Oh, for sure. I've been using X in various capacities for ~3 decades.

I remember how it was. I'm impressed with how it is.

I recently switched back to X as a primary desktop after a rather long hiatus of doing [mostly!] other things. There was some initial driver discourse (the standard, necessary nVidia vs. OSS nonsense), but it wasn't really so bad once I sorted out what I needed, and most "regular" Linux users can skip a lot of this by default.

So far, I've done zero manual configuration of X itself outside of using XFCE4's GUI tools to arrange the three monitors in front of me in the right order -- and I don't presently see any reason to change anything else.

It's been very pleasant, all said, even though I got here on Medium-Hard Mode with a rather barebones base install of Void on an existing ZFS pool for root.

X really was one of the easier parts of the whole operation.

(I have no interest in Wayland. It offers no clear advantage to me as a user that I can identify; even the games I like to play run splendidly in X. I've also always adored the concept of remotely displaying GUI applications. It's convenient -- I ran remote X apps for years immediately prior to this recent switch, and it worked well. Remote X apps have saved my bacon a few times by allowing me to quickly get a thing done in a familiar way instead of learning how to do it using something else entirely and maybe stuffing it up in some unforeseen fashion.)


You want arcane, try doing the same thing in Windows when it can't detect your monitor properly and doesn't like your video card.


Thanks. Now I'll have nightmares of the time I spent trying to help a friend get the 32" TV they won in a contest (back when an LCD of that size was still both unusual and expensive) to work at proper native resolution in Windows.

Windows really wanted it to be 1080p, and the TV supported this input, but it was a blurry mess.

It was advertised as 720p, and the TV supported this input as well, but that was also a blurry mess.

It actually had a physical vertical resolution of something like 760 lines, which was not one of the modes it offered up over DDC to whatever was driving it.

Fun times.

(I did eventually get 1:1 pixel mapping, but IIRC I had to give him a different video card for this to happen.)


What Linux solves through configuration, Windows solves by having everything you buy come with its own model-specific drivers that burn the needed configuration into the .INI + .DLL.

Windows "not liking your video card" is presumably because you either aren't using the right driver, or because Windows doesn't like your driver — i.e. the monitor is old enough that there's no version of that driver for current Windows.


>This is the modern way of doing it.

So by modern you mean the 1980s?

My biggest takeaway reading through all these comments is that Plug-and-Play might as well be heresy...


It works. LCD monitors will come up at their native resolution.

Of course, if you use Debian 3 in 2024 the fault might be your own…


PnP will always work for a simple+direct+modern (GPU → DisplayPort or HDMI → display) display path; but there are a lot of people who for whatever reason still need to use VGA.

Despite EDID being invented during the VGA era, it wasn't invented at the beginning of it — so older VGA displays don't support EDID, and therefore don't support reporting their valid modes. (And this is relevant not just for CRTs, but LCDs too — and especially projectors for some reason. Some projectors released as recently as 2010 were VGA-only + non-EDID-reporting!)

Remember Windows saying "you proposed a new monitor resolution; we're gonna try it; if you don't see anything, wait 15 seconds and we'll undo the change"? That's because of expensive mid-VGA-era EDID-less VGA monitors. These were advertised as supporting all sorts of non-base-VGA-spec modes that people wanted to use, but came with nary a driver to tell Windows what those modes were — so Windows was in that era just offering people essentially the same experience as editing xorg.conf to add Modelines, just through a UI. And obviously, if even Windows had no proprietary back-channel to figure out what the valid modes were, then Linux didn't have a penguin's chance in hell of deducing them without your manual intervention.

---

But also, people are often trying to "upcycle" old computers into Linux systems (embedded systems like digital-signage appliances being especially popular for this) — and these systems often only come with VGA outputs, and video controllers that don't capture EDID info for the OS even when they do receive it.

Hook an early-2000s "little guy" (https://www.youtube.com/watch?v=AHukN0JsMpo) to any display you like over VGA, no matter how modern — and it still won't know what it's talking to, and will need those Modelines to be able to send anything other than one of the baseline-VGA-spec modes (usually 800x600@60Hz).

And this is, of course, still true if you try to use one of these devices with a modern HDMI display using an adapter.

(You might think to get away from this by using a "USB video adapter" and creating an entirely-new PnP-compatible video path through that... but these devices are usually old enough that they only support USB 1.1. But hey, maybe you'll luck out and they have a Firewire port, and you could in theory convince Linux to use a Firewire-to-DVI adapter as an output for a display rather than an input from a camcorder!)

---

Besides the persisting relevance of VGA, there's also:

• https://en.wikipedia.org/wiki/FPD-Link, which you might encounter if you're trying to get Linux running on a laptop from the 1990s, or maybe even a "palmtop" (i.e. the sort of thing that would originally have been running Windows CE);

• and the MIPI https://en.wikipedia.org/wiki/Display_Serial_Interface, which you might see if you're trying to do bring-up for a new Linux on a modern ARM SBC (or hacking on a system that embeds one — a certain popular portable game console, say.)

In both of these cases, no EDID-like info is sent over the wire, because these protocols are for devices where the system integrator ships the display as part of the system; said integrator is expected to know exactly what the specs of the display they're flex-cabling to the board are, and to write them into a config file for the (proprietary firmware blob) driver themselves.

If you're rolling your own Linux for these systems, though, then you don't get a proprietary-firmware-blob driver to play with; the driver is generic, and that info has to go somewhere else. xorg.conf to the rescue!


> on modern machines you almost never want to be editing the xorg.conf.

No one ever wanted to be editing xorg.conf! (xkcd 963 anyone?)

I did try the "modern" way when I hit this problem (which would have been in early 2022) - but even if it had worked (which it didn't) I don't think it would have persisted beyond a reboot?


I've never had this technique fail on me. I've done it a lot since I work with a variety of crappy KVMs and run into this problem often enough. You do need to make it a startup script, but that's pretty easy to do.

If it didn't work it's possible you have deeper problems, like X falling back to some crappy software-only VESA VGA mode because the proper drivers for your card got corrupted. I've not seen this in many, many years, but it's possible. The last time it happened it was really obvious because the whole thing was crazy slow; the mouse cursor was laggy and typing text into the terminal had over a second of delay. It wasn't subtle at all.


I seem to remember at the time I had trouble finding "current" instructions - I think the syntax changed somewhere along the line? - so there may well have been some crucial step missing.

I'm sure it hadn't fallen back to a VESA mode because I was using compositor features like zooming in on particular windows while screencasting.


> Directly plugged monitors usually just work thanks to EDID info

If you are dealing with consumer-grade stuff that is sold a million times, sure. I stopped keeping track of how often some special-purpose/overpriced piece of display hardware had bad EDID information that made it glitch out.


> If you are dealing with consumer-grade stuff that is sold a million times, sure.

It's not a sure thing. Out of a bunch of mass-produced monitors sharing the same model number and specs, some may still malfunction and not report the correct EDID.


KVMs do tend to cause issues, especially when it comes to power management and waking from sleep. However, just two weeks ago I had issues with Debian when connecting directly to a monitor. Booting from the live image with a Nvidia GPU resulted in 1024x768 garbage. Surely the installer will take care of that and the open drivers will be sufficient. Surely.

Nope. I had to reinstall and the option to add the proprietary repository was not as obvious nor as emphasized as it should have been. It almost seemed like an intentional snub at Nvidia. I bailed for other desktop-related issues and ran back home to another distro.

But maybe Debian doesn't want to focus on desktop users and that's fine - they can continue to rule their kingdom of hypervisor cities filled with docker containers. The world needs that too.


> It almost seemed like an intentional snub at Nvidia.

I don't think anybody can come up with better intentional snubs at Nvidia than Nvidia itself.

When it comes to their older graphics hardware, their drivers just refuse to work with newer kernels. A GPU that was capable of drawing windows and playing videos for a decade suddenly, after a kernel update, doesn't even show 1024x768 "garbage". Just a black screen.

So effectively, buying Nvidia to use with Linux means buying hardware with an expiration date.


I'm surprised the reverse-engineering folks that like jailbreaking game consoles and decompiling game ROMs aren't all over the idea of decompiling old Nvidia drivers to modify + recompile them to speak to modern kernel APIs.


Such folks usually have modern GPUs, so they don't experience such problems.


Once a card is old enough you might have to switch to the Nouveau driver instead, which is probably fine since using a card that old on a modern machine suggests you aren't that interested in games or VR.


There is no other choice but Nouveau. But it's not that fine because it means losing hardware video decoding.

> using a card that old on a modern machine suggests

It's an old laptop. Totally adequate for scrolling the web, watching movies and arguing about very important stuff on Hacker News. There is no way to change the GPU there or switch to an integrated Intel one.


> which is probably fine since using a card that old on a modern machine suggests you aren't that interested in games or VR

I think a more correct assumption is that you're likely interested in running games of at most the era the computer was purchased in. It'd be a shame if your 7-year-old GPU going out of support with a distro upgrade meant that you suddenly became unable to run the 7-year-old games you'd been happily playing up until that point.


Is it really only 7 years? nVidia still lists driver support on their website for the GeForce GTX 600 on Linux, a card that is 12 years old.

https://www.nvidia.com/download/driverResults.aspx/226760/en...


> I've seen that won't offer the correct resolution for a particular monitor (typically locked to 1024x768 on a widescreen monitor).

I've been using Linux for over 20 years, Xorg for most of that time, and I've never had any issues with screen resolution.


I'm genuinely pleased to hear that it works for you.

Unfortunately that doesn't make the problem I'm having go away! (On two of the machines I have in mind the issue is with a second monitor - that may well have something to do with it.)


I've been using 2 monitors on several machines, on several occasions. And gave plenty of presentations using projectors.


It was always fun to have the threat of overdriving and frying your CRT monitor hanging over your head when trying to get X going.


> For a long time the X Window System had a reputation for being difficult to configure.

Honestly I thought it was hard to configure because until I used Linux, my X terminals didn’t need to be configured at all!

I may be misremembering but I think my NCD terminal used bootp and probably a little tftp, then off it went. The hardest part was probably finding an Ethernet drop to plug it into.

Now - get off my lawn!


You surfaced memories of childhood me installing RedHat 5.2, carefully selecting packages and X config options, getting it wrong, not knowing how to get back to that magical installation UI, and reinstalling the OS just to have another crack at it.

Eventually I figured out how to launch that xconfig utility and found some sane defaults, and was thrilled when I finally saw the stippling pattern or even a window manager.


>It's because X's config files asked you questions whose answers you had no good way of knowing other than by trial and error.

You didn't have to guess, you just had to read the specs in the manual that came with your equipment.


The manual that came with your laptop of 25 years ago isn't going to tell you whether your touchpad is Alps or Synaptics, or which PS/2 protocol it imitates.


True. Though laptops were in some ways easier than desktops, since laptops tended to have the same set of hardware in each unit, so hopefully you only had to find an `XF86Config` or `xorg.conf` that someone had shared for that model.

Examples:

http://www.neilvandyke.org/linux-thinkpad-560e/XF86Config-tp...

https://www.neilvandyke.org/linux-thinkpad-x20/xorg.conf


Those specs weren't readily available to non-experts, never mind what to do with them.

For a trip down memory lane, read through the XFree86 Video Timings HOWTO (https://tldp.org/HOWTO/XFree86-Video-Timings-HOWTO/index.htm...). Getting stuff to work in the Good Old Days was _not_ easy.


30-70h, 50-160v =)

I still remember that.

And xf86cfg, and how much Debian was improved when Sarge arrived.


To the people down-voting you: X is from a time when devices actually came with manuals. When the people using it were engineers and scientists and reading a datasheet or a manual was a normal thing to them.

I think this started around the '90s, when devices turned into magic black box consumables that are expected to "just work" while being undiagnosable when they don't.


> To the people down-voting you: X is from a time when devices actually came with manuals.

To a degree. At least from my experience, something like a monitor and video card manual would provide you with enough information to filter through a list of example modelines to figure out which ones may work. Yet they did not provide enough information to create your own modelines.

> devices turned into magic black box consumables that are expected to "just work" while being undiagnosable when they don't.

"Just work" and being diagnosable are not mutually exclusive concepts. For the most part, the Linux ecosystem reflected that and still reflects that. I suspect the shift in behavior actually came from end users. They were less willing to look through the diagnostic messages and far less willing to jump through hurdles for things that they thought should just work.


> I think this started around the '90s, when devices turned into magic black box consumables that are expected to "just work" while being undiagnosable when they don't.

I would say that it's more that the architectures where a manual created by the integrator could tell you anything useful became irrelevant, obviated by architectures where it couldn't.

Including a manual with a printed wiring block diagram of the hardware, made sense in the 1970s, when you (or the repair guy you called) needed something to guide your multimeter-probe-points for repair of a board consisting of a bunch of analogue parts.

And such a manual still made sense in the 1980s, now for guiding your oscilloscope signal-probing of jellybean digital-logic parts ("three NOT gates in a DIP package" kind of things) to figure out which ones have blown their magic smoke.

But once you get to the 90s, you get complex ICs that merge (integrate!) 90% of the stuff that was previously sitting out as separate components on the board; and what's remaining on the board at that point, besides those few ICs, just becomes about supporting those complex ICs.

At that point, all of the breakage modes that matter start to happen inside the ICs. And if it's the ICs that are broken, then none of the information from a wiring block diagram is going to be helpful; no problem you encounter is likely to be solved by probing across the board. Rather, you'll only ever be probing the pins of an individual IC.

Which means that what really helps, in the 90s and still today, are pin-out diagrams for each individual IC.

Providing that information isn't really the responsibility of the board manufacturer, though; they didn't make the ICs they're using. Rather, it's the responsibility of the IC company, with whom you don't have any direct relationship, and who therefore has no cause to be sending you data-sheets.

Thankfully, these IC companies do sell these parts; and so they mostly have their IC data-sheets online. (No idea how you would have figured any of this out in the 90s, though. Maybe the 90s equivalent of Digikey kept phonebook-thick binders containing all the datasheets they receive along with the parts they order, and maybe repair people could order [photo]copies of that binder from them?)


In the late 90s modems would come with a manual about the special AT commands for that model.

In the 2000s you had to email the manufacturer to get that as a PDF.


You were supposed to enter a bunch of numbers in your BIOS to add a new disk… and let's not forget to set the jumpers properly.


Modelines required timing information that was rarely available. You made a best guess and tweaked the numbers until it worked.
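
For anyone who never had to do this, here's the shape of the thing being guessed at. The sketch below uses the standard VESA 1024x768@60 timing, which you'd then sanity-check against whatever horizontal/vertical sync ranges the monitor's manual (if any) admitted to:

    #  name       pclk   hdisp hsyncstart hsyncend htotal   vdisp vsyncstart vsyncend vtotal  flags
    Modeline "1024x768"  65.00  1024  1048      1184     1344     768   771        777      806    -hsync -vsync
    # pclk: pixel clock in MHz; the h*/v* columns mark where the visible area ends,
    # where the sync pulse starts/ends, and the total scan length.
    # Derived rates (what the monitor's specs constrained, if you were lucky):
    #   horizontal sync  = 65e6 / 1344          ~ 48.4 kHz
    #   vertical refresh = 65e6 / (1344 * 806)  ~ 60 Hz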


Or you hunted around in the pre-WWW world for a modeline database, and hoped your monitor was included.

Here's a more modern incarnation and more background (the non-stippled kind):

https://www.mythtv.org/wiki/Modeline_Database

https://tldp.org/HOWTO/XFree86-Video-Timings-HOWTO/

https://nyanpasu64.gitlab.io/blog/crt-modeline-cvt-interlaci...

https://xtiming.sourceforge.net/cgi-bin/xtiming.pl


Lucky you, buying computer hardware new-in-box :)

I think, for at least the first 30 years of my life, every Linux system I've ever built was with "hand-me-down" hardware. First hardware from my parents, then from various friends, then finally from my own expired projects.

When I was young (eleven!), this meant that we'd get a new computer, and now the very old computer it replaced could be repurposed as a "playground" for me to try various things — like installing Linux on — rather than throwing it out. (My first Linux install was Slackware 3.4 on a Pentium 166 machine. Not the best hardware for 1998!) Nary a manual in sight; of course my parents didn't keep those, especially for something like a monitor.

When I was a teenager, this meant getting hand-me-down hardware from friends who had taken the parts out of their own machines as they upgraded them. Never thought to ask for manuals, of course. (Also, sometimes I just found things like monitors laying on the side of the road — and my existing stuff was so old that this "junk" was an upgrade!)

And during my early adulthood, my "main rig" was almost always a Windows or (Hackintoshed) macOS machine. So it was still the "residue" of parts that left that rig as it got upgraded that came together to form a weird little secondary Linux system. (So I could have kept the manuals at this point; but by then, the manuals weren't needed any more, as everything did become more PnP.)

It's only very recently that I bought a machine just to throw Linux on it. (Mostly because I wanted to replace my loud, power-sucking Frankenstein box with one of those silent little NUC-like boxes you can find on Amazon that have an AMD APU in them, so I could just throw it into my entertainment center.) And funny enough... this thing didn't come with a manual, or even a (good) data-sheet! (Which is okay for HDMI these days, but meant it was pretty hard to determine, e.g., how many PCIe lanes are collectively allocated to the two M.2 slots on the board.)


> You didn't have to guess, you just had to read the specs in the manual that didn't come with your equipment.

Hey you missed a word so I added it in for you. Most consumer PC equipment definitely did not come with any documentation covering the sort of stuff X's config file was asking about.

When that documentation was available, it was something you could only get by contacting the manufacturer. But you couldn't mention the word "Linux", because the CS rep would give a blanket "we don't support Linux" and you'd get nothing.


Sure it did. There was a page in the pamphlet that came with my ViewSonic 15" that listed the supported timings. You just threw it away, but that's not X's fault.


No, I had plenty of equipment that came with a little piece of printed paper that had not quite enough information to be useful.


That's already more work than other operating systems of the late '90s made you do to get a functioning mouse, keyboard, and display.


Given that the example mentioned above was about making the scroll wheel work? When Microsoft released the IntelliMouse it came with a driver disk; just plugging in the mouse without reading the manual left you with a non-functional scroll wheel. Support for Microsoft-style mice in later versions of Windows also did not stop companies from requiring their own drivers to enable non-standard functionality.


> Just RTFM

Ahh Linux people. Some things will never change.


lol

                               ()
    /  Oh wait you're  \       JL
    | serious. Let me  |       ||
    \laugh even harder./       LJ
            .            _,--"""""""---.
             .         ,'               `.
              .       /                   \
               .     J                     L
                .    F                     L
                    J                      J
                    |                      J
                 ___L______________        J
                /,---------------. "".     J
               JJ   /     \/      |  J     J
               LL  J      J       |   L    J
               JJ  J #    J #     |   L    |
                \\__`.___,_`.____,'   F    |
                 ""-.---------....___/     |
                    |_T--+---+--.,._       |
                      |--|----\---\-`.     |
                      |__|____J___J_ F     F
                     _|__|____|___|_/      L
                    |                      L
                    |____________________M-K
LMFAO



