
Presumably, though, which of these states of affairs applies to a specific window is knowable at some level. The question of which process created the window isn't necessarily bounded by ‘processes running in user space on this machine right now’, but something created it.

And 99% of the time the answer will be a locally running process, which makes it a highly pertinent question even without the caveats.

It’s not clever to point out that a question may not have a simple answer and therefore to reject the question as flawed.



From a 2020's perspective, X11 is deeply weird.

It was written back when big servers were the only systems with enough oomph to do anything useful, and those servers had 128 MB of RAM (think of something like a Sun 3/480 with a 33 MHz 68030).

It was common to have a room of X terminals, each with 1-8 MB of RAM for a 15-19 inch display (with 1-8 bit color). In that context everything ran on another system; often you'd have processes running on several different systems, and you had to be careful about managing authorization, otherwise people would figure out your DISPLAY and run their programs against your X terminal (innocuous, like 300 xeyes instances, or malicious, like a keyboard snooper, depending on where and when).
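
As a rough illustration of how little that took (the hostname here is made up, and this only works against a display that has access control turned off, e.g. via "xhost +"):

    # Point your clients at someone else's X terminal instead of your own.
    export DISPLAY=victims-xterm:0

    # Innocuous prank: fill their screen with eyeballs.
    for i in $(seq 1 300); do xeyes & done

    # Anything nastier (a keyboard snooper, say) is aimed the same way,
    # which is why the authorization mattered.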

In 2020 this whole scenario sounds absurd, but it was the way of things from when X11 was designed and through the first 15 years of X being a thing.

In some ways, the question is like asking "what is the process ID that generated this http response?"

[edited to add the analogy]


> In 2020 this whole scenario sounds absurd

It's not really absurd at all. In fact, the increase in processor speed, core count and installed RAM means that it is even more likely that, in a given context, a single desktop computer system has far more capacity than a single user needs.

So, you increase the number of users who can utilize that capacity by sitting them in front of an X Terminal, which from their perspective is just like sitting in front of the "real" monitor of the machine.

This doesn't make sense for gamers, compile-happy developers or data crunching, but then this model never made sense for those use cases. But a dentist's office with a central desktop and 8 users? You just don't need 8 computers - 1 computer plus 7 or 8 X Terminals is a far better use of resources.
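
One concrete way that model was (and still can be) wired up is XDMCP: the X terminal queries the central machine's display manager and gets a full login session, with every application running centrally. A minimal sketch, assuming a hypothetical host "officebox" with XDMCP enabled on its display manager, and using Xephyr as a stand-in for the X terminal:

    # Xephyr is a nested X server; with -query it behaves like a classic
    # X terminal, asking officebox (made-up name) for a graphical login.
    Xephyr :1 -screen 1280x1024 -query officebox &

    # Everything launched in that session runs on officebox;
    # only the drawing happens on the machine running Xephyr.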


A random refurbished piece-of-junk Dell small-form-factor computer from Amazon with a Windows 10 Pro license is $200. It's more powerful than every single computer I administered from 1990 to 2005, put together.

What you say is true -- they don't really need the power, but golly they've got it anyhow.


> From a 2020's perspective, X11 is deeply weird.

X11 is a distributed-systems, non-meshed networking protocol that outputs to screens as a side effect.

Knowing Erlang dist and then figuring out how to do X11 a few years ago, everything makes a lot of sense. Except for Xlib. Xlib is a bit better now, but it takes a pretty decent (though certainly not flawless) async protocol and implements blocking synchronous calls on top of it... and then most toolkits were built on that. Sometimes you really do need to wait for a reply, but so many things don't, and all the waiting makes actually-networked X much less usable. :(


Why is it absurd? I use the remoting feature all the time - got probably 10 remote windows being displayed from various machines across the network right now.


Multi-user graphical computing seems to forever sit just outside the set of use cases most people think about.

Being able to ask another machine to run something and display it where you are is a seriously great feature.

These days I have largely given up. Too many just don't see it. Sad day.

One of my favorite X11 setups was engineering CAD software for about 30 users. I had a beefy SGI running the program and data management system.

No user could ever touch that data except through the program. Better, the data went where it was supposed to go, and users could concentrate on just doing the work with few worries.

One happy side effect was users were largely free to run what they wanted as their desktop computer. As long as a respectable X server was available for their machine, they were good to go.


You’re aware that web browsers have filled that role pretty effectively, right?


Sort of.

Having run high-end X CAD modeling applications on SGIs, for example, browser stuff seems clunky, and anything really performant seems to load executables locally.

That said, yes! Browsers act a lot like X servers do.

What I am getting at here is not entirely a technical problem.

Most computers offered a single-user (eventually multitasking) graphical environment.

Computers running a well-implemented X Window System offered multi-user graphical computing. It could be, and often was, run as a single-user graphical environment too.

X had everything compartmentalized, and that allowed a sysadmin to set the computing up any way that made sense.

Once, just for fun, I had one machine serving up fonts, another doing window management, yet another serving the graphics to the user, and another sharing files to yet another machine running the application!

Five machines all contributing to any number of graphical users - in my toy, fun case, that was two users. One user was me on an SGI, and the other was on Windows 2000. It was kind of fun to see his face as the familiar application looked the same on his Windows box as it did on IRIX.

Two clicks on an icon, or a command given, and "the application" happened on whatever machine was serving up the graphics.
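
A rough sketch of that kind of split, with made-up hostnames (fontbox, wmbox, appbox) and placeholder programs (twm, xterm), run from the machine whose X server is actually drawing the screen:

    # Fonts come from an xfs font server on another host.
    xset +fp tcp/fontbox:7100
    xset fp rehash

    # The window manager runs on a second host but manages this display.
    ssh -X wmbox twm &

    # The application runs on a third host and also displays here.
    ssh -X appbox xterm &

    # In the era described you'd typically skip ssh and just set
    # DISPLAY=yourdesk:0 on each remote host (after authorizing it),
    # but the shape of the setup is the same.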


Indeed -- web browsers are very comparable to X -- especially with things like web sockets or streaming media.

Which is why I offered that the question "what process is generating my window" is comparable to "what is the PID of the process that generated this HTTPS response?"


Using a browser with a cloud shell alongside some cloud instances feels like time travel back to how I did development at the university computing center and during my dotcom-wave years.

The more things change, the more they stay the same.


do you use ssh -XC these days? It's way faster than the classic DISPLAY=somewherelse:0 application&


Yeah, ssh's handling of the X authorization tokens and the DISPLAY setup was one of the killer features of ssh over things like Kerberized telnet or rsh. Another really nice feature was that it didn't have any of the idiotic user-count enforcement nonsense that was the default on a "workstation"-licensed Ultrix system, where you couldn't have more than 2 people logged into the system without the more expensive license.

Compression's nice - especially for X, and X over ISDN.
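
For anyone who hasn't compared the two styles, they look roughly like this ("remotehost" and "yourdesk" are made-up names):

    # Classic style (run on the remote host; yourdesk is the machine you're
    # sitting at, and it must allow the connection, e.g. via xhost or xauth):
    DISPLAY=yourdesk:0 xterm &

    # ssh style (run from yourdesk): -X tunnels X11 over the encrypted
    # connection and sets up the xauth cookie for you; -C adds compression,
    # which helps a lot on slow links like X over ISDN.
    ssh -XC remotehost xterm &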


> otherwise people would figure out your DISPLAY and run their programs against your X terminal

This was always good for fun gags. In the 1990s I worked at a company where all the developers used X terminals on an HPUX host. If you were bored you could run little programs that made all the windows on someone else's terminal melt, or flip around, etc.


Things like this also worked in the small NeXT lab we had at UW-Seattle. Including, um, sound.


Which is why my login script in the university lab disabled xhost access for those funny guys.
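
Something along these lines in the session startup file does the trick (a sketch; exact placement depends on how the lab sessions were launched):

    # Re-enable X access control so only authorized clients can connect,
    # undoing any blanket "xhost +" someone left behind.
    xhost -
    # Specific leftover grants can be revoked too, e.g. (hypothetical host):
    # xhost -pranksterhost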


> In 2020 this whole scenario sounds absurd

Why? I use Citrix at work in 2023 and nobody thinks it absurd to run the program on another computer and display it on yours.

Except the wayland developers (last i heard). /s


For the most part things these days are all "stuff on your screen is stuff running on your computer" or "stuff in this window is stuff running on that computer"

There are definitely ways of getting to "this program/window is on this computer / that program/window is on that computer", but getting to that point requires some effort, rather than it just being "the way everything works".

With Windows, or macOS, or even X on Linux, the vast majority of things displayed on your screen are on the computer that's within 3 feet of you, unless you're running a "remote desktop protocol" sort of session.


The majority of things displayed on my screen are in a datacenter, which is sending rendering instructions to a window server on my desktop. But now the rendering instructions are in Javascript and the window server also browses hypertext.


I find Windows Remote Virtual Desktop to be pretty useful for running remote things. I'm sure it's just RDP under the hood, but the UI integration is nice.


In enterprise consulting, using remote VMs is quite common due to security concerns.


Right -- but the notion that each window on your system is managed by some random process running on a random remote system, and it's all seamlessly integrated, is just not a normal idiom today.

You can run a remote window in an RDP session, etc., but it's different from X, where the server may be the only process running on the system you're sitting in front of and everything else is a melange of remote systems. Even on a Chromebook, the UI is driven by stuff on the "local" computer.

With X, I can run a window manager on one host and a mixture of random windows spread across as many remote hosts as I have open windows. Plan 9 is similar, but every other modern GUI/OS setup has a "local first" mindset. Sure, you can do things in a totally different way, but those are largely the exceptions.


"And 99% of the time the answer will be a locally running process"

Depends on the environment - for years I used either an HP X terminal or a PC running an X server and the one thing I could be confident about in both cases was that the process wasn't running locally.

Mind you - that wasn't recently!



