Oh, there's another stgit user! ^5
Coming from darcs, I couldn't use git until stgit came along, and today, it's one of those few tools I can't imagine working without. Nothing else matches my way of code hacking.
So often, I watch people making a big mess with git, and I always recommend stgit to them, so they can post proper and reviewable branches for merging. But in all these years, I could never convince anybody.
> Otherwise, the very first commit should be removing all the personally identifiable information and other telemetry sent back to Matt's personal website
This is one of the things I removed in our WordPress fork. I found it horrifying to learn that open source software does such a user-hostile thing, and wondered why nobody but me objected.
Me too! I've been using it for 20 or so years, and it's one of those pieces of software where I didn't even know whether it was still being developed or what features had been added, because the feature set from 20 years ago is still enough for me. IOW: maybe it doesn't need a maintainer at all.
Back in the day, you had a UI toolkit, and everybody would use those native controls; they looked and felt the same in all applications, and you had a central place where you could customize the look. Now every application/website has customized controls for everything; everything looks and works differently.
(And don't even get me started with websites implementing their own scrollbars with JavaScript. Uh!)
Custom list control: do the Home/End keys work? How do I select multiple items? Does Shift-cursor keys work? Does Ctrl-Click work? Of course not.
Custom text control: does Ctrl-Left/Right for word jumping work? Does Ctrl-Up/Down for paragraph jumping work? Can I select everything with Ctrl-A, or does it select the whole website? Can I select everything from the cursor to the end with Ctrl-Shift-End? Does Copy/Paste work at all?
(I have never figured out why Copy/Paste in Teams simply doesn't work. Apparently I'm the only one with this problem.)
Custom dropdown control: does Alt-Down work? Can I scroll the list with the usual keys?
If (web) developers just used standard controls, everything would work the same, and they wouldn't have to reimplement all the basic things from scratch (or not at all). Web devs could write forms that work without megabytes of JavaScript.
Hamburger menus. Those horrible things didn't need to exist even in the old days of small 640x480 (or smaller) monitors - but now they are everywhere on my 32" 4K monitor for no reason.
Me too; I agree with you and the GP. And I can't stop wondering why this opinion is so unpopular. Today's UIs are bloated with unnecessary animations (which add latency). But worse than the animations is that UIs are horribly inconsistent: it took me a while to figure out that those toggles should be clicked, not dragged. Or: what is even clickable? How do I scroll? I could go on forever, and probably so can you.
Why do only old nerds complain about this, when today's UIs are so "easy" that every toddler can use a smartphone? Are we just living in the past, getting old? Are we the problem? Why is our opinion unpopular?
I think other people do feel a vague sense of anxiety using modern software from not quite knowing what all the interaction patterns are. When you click that hamburger menu on the website, what will it do, exactly? But most people from outside the software world just blame themselves for “not being very good with computers”.
The problem is that it’s not fashionable any more amongst designers to use built-in controls. Everyone wants to think of themselves as Apple and build their own beautiful design language, even if it’s just for their own website or app. And it sort of makes sense, given that modern apps end up needing to be built for the web, iPhone, iPad, Android, and the desktop. It makes sense to tie all of those pieces of software together with a cohesive visual language and style.
I'm old now, and I won't dismiss that the new stuff is aesthetically nice. It's also not that hard to use. But I just don't like the visual polish more than I like the clarity and responsiveness of the old UI elements.
I just don't care if my computing experience is beautiful. I care if it's snappy, productive, and reliable.
I mostly find the way many people in this thread present this opinion exhausting.
You're doing it there too: you're throwing every bad point of every bad UI you ever encountered into a bucket and hurling all of it at this article, concluding that "animations in UI are terrible and just bloat everywhere". That's very close to a strawman.
I have worked on and A/B-tested UIs for games and the like, dealing with exactly that, and I would much rather say: bad UIs are bad, yes. And animations don't make a bad UI any less bad. But if you have a good, understandable UI, adding animations smartly on top of it - without impeding the user, and in a subtle fashion - can increase the overall aesthetics of the UI a lot and make it much more pleasing to use.
I agree with that; animations can be OK, but when there is a configuration setting for them, I usually disable them because input latency drives me crazy.
My post replied to "checkboxes vs UI toggles", and replying to that aspect was my main point. That's slightly off-topic, of course. It has to do with animations only because checkboxes wouldn't really benefit from animation, whereas toggles are an obscure visual representation for the same control, and adding animation is a feeble attempt to make it somewhat less obscure, even though it doesn't even try to address the main problem: what does toggle "left" and "right" really mean?
I believe the fact that a checkbox doesn't benefit from animation is a good thing: it's so clear and obvious that you don't need to animate it.
> what does toggle "left" and "right" really mean?
Nothing, because that's not the point of the article.
It's weird to me that this is such a big point here.
In an actual UI, you will have labels or indicators telling you what the toggle means and what the options are - "Safety door unlatched" vs "Control motors engaged". That's a toggle between two choices, and presenting it as such a toggle would be safer than a checkbox.
Otherwise your checkbox without labels is equally bad UX, because what do "on" and "off" mean for an unlabeled checkbox? I could give plenty of examples from work of how vaguely labeled checkboxes like "remote authentication" are terrible UX.
For toggling between mutually exclusive choices, please use radio buttons. Checkboxes, and their less obvious variants, are for enabling/disabling clearly labeled options that are not mutually exclusive.
That used to be Interaction Design 101 back in the olden days, i.e. the 1990s.
The author contacted me via email yesterday (because they saw I'm writing a lot of C++ code on GH) and asked me for a review. This was my somewhat-grumpy somewhat-trollish reply:
Your library demonstrates that C++ is the superior language because it can do template specialization, resulting in better machine code. For example, your std::span implementation needs to store the element size, resulting in a larger structure (24 instead of 16 bytes), more memory accesses, and costly integer multiplications everywhere.
C++ can omit all this, and can do simple bit shifts instead of multiplications.
There are many more places where you demonstrate C++'s superiority (e.g. it can safely do deep copies with no special code, while your library can't do copies at all, and even if it could, doing so safely/deeply would be extremely cumbersome with C, both for your library code AND for the code calling your library; all a piece of cake in C++).
Oh, and "Implements a dynamic array similar to std::array" .... that's factually wrong. std::array is not a dynamic array. std::vector is. Interestingly, your std::array implementation uses your std::vector, while adding some more runtime overhead. Hey, it's C, it's slower than C++ is what I learn again here!
It's a rather pointless library, unless your point is to demonstrate that C is a bad programming language.
Maybe I could say something good about some aspects of their coding style, but that would only distract from my main point that I find it pointless to imitate a C++ API in C, when that API is modeled carefully to take advantage of C++ features, and you lose all of that in C. (Not only that - their C API is designed in a way that adds overhead even where none would be necessary in C, by allocating all structs on the heap.)
There are lots of plain C container libraries which are probably better suited for C, if you really must use C, or prefer C for whatever reason that escapes my imagination.
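For anyone curious what the std::span remark above is about, here is a minimal, hypothetical sketch (the struct and function names are mine, not from the library in question, and the sizes assume a typical 64-bit ABI): a type-erased C span has to carry the element size and multiply by it on every access, whereas the monomorphic version, which a C++ compiler roughly generates per element type from the std::span<T> template, needs only a pointer and a length.

    #include <stddef.h>
    #include <stdio.h>

    /* Type-erased span, as a generic C library has to do it:
     * the element size is carried at runtime.                */
    struct any_span {
        void  *data;
        size_t len;
        size_t elem_size;   /* extra field: 24 bytes on a typical 64-bit ABI */
    };

    static void *any_span_at(struct any_span s, size_t i)
    {
        return (char *)s.data + i * s.elem_size;   /* runtime multiplication */
    }

    /* Monomorphic span for one element type: roughly what C++
     * generates per T from std::span<T>, with no extra field. */
    struct int_span {
        int   *data;
        size_t len;         /* 16 bytes on a typical 64-bit ABI */
    };

    static int int_span_at(struct int_span s, size_t i)
    {
        return s.data[i];   /* scaling by sizeof(int) folds into addressing */
    }

    int main(void)
    {
        int v[3] = { 10, 20, 30 };
        struct any_span a = { v, 3, sizeof(int) };
        struct int_span b = { v, 3 };

        printf("sizeof any_span=%zu, sizeof int_span=%zu\n", sizeof a, sizeof b);
        printf("%d %d\n", *(int *)any_span_at(a, 2), int_span_at(b, 2));
        return 0;
    }

A C library could of course stamp out a monomorphic span per element type with macros, but then it is no longer one generic API, which is exactly the trade-off the template sidesteps.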
The 1990s were looking so great for C++ adoption on the desktop: we had Mac OS (pity about Object Pascal, but at least there was PowerPlant), OS/2 (with CSet++ and OWL), MS-DOS (with Turbo Vision), Windows (with OWL, VCL, MFC, ATL), BeOS (with its Kits), EPOC/Symbian, Windows CE (with MFC), ...
And then the rise of FOSS happened, with the original GNU contribution guidelines asserting only C and Lisp as the almighty languages for the GNU ecosystem to build upon.
When someone queried Harley-Davidson bikers as to why they don't socialize with youngsters who are avid about the latest racing motorcycles, their response was, "Why should we bother? Every year, they have something new."
--
All jokes aside, check out these cool quotes about C++ from Linus Torvalds [0] and Ken Thompson [1] ;)
The UNIX model won over object approaches. And I think there is a reason: the simplicity of a flat memory model, C, a unified file interface, etc. removes a lot of complexity and allows composition of diverse components into a working system.
Or in other words: If C++ were actually better for engineering large systems, GNU wouldn't have had a chance.
That's a funny interpretation, because what is this mysterious "UNIX model", and what does it have to do with the implementation language?
Also, C++ used to be "C with classes", but it outgrew this single-paradigm phase quite quickly. I do a lot of C++, but I rarely use inheritance and virtual methods. These are not the features that make C++ worthwhile for me.
> The simplicity of flat memory model, [C], a unified file interface
This "unified file interface" is a nice theoretical idea, and it leaks many nice things to the real world, but has nothing to do with the implementation language - quite contrary, it allows many different languages to communicate. Similar with "flat memory model" - you can have either language in segmented memory and flat memory. There used to be "far pointers" in both C and C++, and now they're gone, so what.
> If C++ were actually better for engineering large systems, GNU wouldn't have had a chance.
Oh, if only it were that way, if only inferior engineering systems would just lose and disappear. The sad truth is that survival of a language proves little about quality.
If you believe C is better than C++, fine by me, just opinions. You can say "C++ is bad because it's more complex" or "has too many features", I can understand that, or "C++ is confusing because you can overload operators". My features are your bugs, okay. But I can't comprehend your actual arguments because they are orthogonal to the choice of language.
The comment I was responding to talked about the rise of GNU, which allegedly killed better systems that used C++. I argue that GNU won because it was a better system, and that it used C and not C++ is one part of what made it better. Unified file interfaces are another. You are exactly right that it lets different systems communicate, including C++. But this is what makes such interfaces better than, say, object-oriented interfaces accessed via remote procedure calls (and there were/are plenty of such systems), which do not interoperate well. Systems built around C++, or with a similar mindset, typically tend to have overly complicated interfaces. So yes, one can use simple interfaces with C++. But if you think C++ is a good language, you will likely not define simple interfaces.
From where I am standing, it failed in everything except headless computing, with input/output devices similar to a PDP-11's.
If it isn't a server, or some piece of software running on a smart appliance, it hardly matters how much POSIX it exposes.
By the way, C++ is also UNIX, born and raised by AT&T in their UNIX labs; it is the main reason why all C compiler vendors adopted it in the first place, including Stallman's GCC, as you should clearly be aware.
You are right that you can build a lot of overly complicated crap on top of simpler systems. Android or the modern web are good examples. But in the long run it is usually not the overly complicated crap that prevails.
History shows otherwise, as proven by the fact that any usable modern C compiler is now written in C++, like GNU's compiler, so C++ is definitely better for engineering large systems; GNU had no choice but to accept it and move along into modern times.
> If C++ were actually better for engineering large systems, GNU wouldn't have had a chance.
Unfortunately I won't be around to see this happen, but I bet that when the UNIX/Linux/BSD founders' generation is gone, other OSes pushed by big corps will take their place, maybe even created by younger devs who took the free-beer source code and built their new cool startup on an OS partially derived from it, NeXTSTEP/Solaris style.
Something that is already taking shape in the IoT space, with all those RTOSes using MIT/Apache licenses, and very little POSIX/UNIX in them.
There exist many C compilers written in C, while only relatively few C++ compilers even exist. And while gcc is technically written in C++, large parts still look like C.
OSes pushed by big corps definitely have some impact. But this is because big corps can afford to maintain otherwise unmaintainable complex frameworks. As soon as those big corps lose interest, those complex frameworks die and are replaced by simpler and more reasonable tech.
I agree with the criticism that one should not mimic C++ in C.
A good C library would use intrusive data structures and user-controlled memory allocation, would not try to replace built-in types (arrays) with inferior alternatives, and would not try to be a kitchen-sink library.
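To make "intrusive data structures" concrete, here is a minimal sketch of the usual C pattern, in the spirit of the Linux kernel's list_head (the names list_node, task, and the container_of macro are illustrative, not taken from any particular library): the link field lives inside the caller's struct, so the list code never allocates and the caller controls all memory.

    #include <stddef.h>
    #include <stdio.h>

    /* The link node is embedded ("intrusive") in the caller's struct, so the
     * list code never allocates; the caller decides where every object lives
     * (stack, arena, static storage, ...).                                   */
    struct list_node {
        struct list_node *next;
    };

    struct task {
        int              id;
        struct list_node link;   /* embedded node, no separate allocation */
    };

    /* Recover the containing struct from a pointer to its embedded node. */
    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    static void list_push(struct list_node **head, struct list_node *node)
    {
        node->next = *head;
        *head = node;
    }

    int main(void)
    {
        struct task a = { 1, { NULL } };   /* caller-owned storage */
        struct task b = { 2, { NULL } };
        struct list_node *head = NULL;

        list_push(&head, &a.link);
        list_push(&head, &b.link);

        for (struct list_node *n = head; n != NULL; n = n->next)
            printf("task %d\n", container_of(n, struct task, link)->id);
        return 0;
    }

The same object can carry several embedded nodes and sit on several lists at once, with no per-node heap allocation anywhere.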
And then it would be obvious such C++ sucks in comparison... ;-)
The claim is more that the C++ standard library is designed to take advantage of C++ language features. Mimicking the same API in C doesn't make a good library; the shape of a C API would need to be different.
This is certainly a matter of taste. I saw many prominent people everywhere praising TAOCP, but I was disappointed when I finally read the first three books some 10+ years ago. So disappointed that I regret taking all the time to read them - not because they're bad books, but because I think that time would have been better spent learning things a different way.
What bothered me most is MIX. I think using a very low-level language to teach algorithms is a pretty bad choice. Algorithms are high-level concepts, and using an assembly-like language just distracts from those concepts; it requires you to solve so many micro-problems orthogonal to the real problem, and forces you to repeat yourself over and over. I prefer to implement abstract ideas in abstract code. Using MIX just causes pointless suffering.
I learned a lot about math from TAOCP (but I've never needed it for anything).
When the books were written, the target audience would all be writing programs in (very different) machine languages, so the “mix” he created of existing computer architectures was a good one (https://retrocomputing.stackexchange.com/a/18176). The specific MIX language is no longer similar to present-day machines (self-modifying code is out of style, for instance), which is why he came up with "MMIX" and there's a book (“The MMIX supplement” by Martin Ruckert, but carefully proofread by Knuth) with the MMIX equivalents of all the MIX programs.
But in any case the algorithms are taught in English not MIX; the MIX programs are only a very tiny part of the books (https://news.ycombinator.com/item?id=14520230 — the 3000+ pages of Vols 1–4A have only 90 assembly programs), used only when the low-level details are actually relevant. So they can be skipped if one is not interested in them.
For my argument, MMIX is not an improvement because it's still a low-level assembly language. True, you can simply skip over the MIX/MMIX source code, but that doesn't help me to like TAOCP. For me, (abstract) source code is the primary source of understanding a concept, and English text is just an accompanying explanation, not the other way round.
Your comments put TAOCP in a historical context. Yes, for its time, TAOCP was a huge achievement, and it is a magnificent work. History is interesting, but the question here is about today, and I don't think it's the best way to learn the material today. (Again, just my opinion; there's no right and no wrong here.)
Yeah as you say it's just opinion/preference, but re:
> source code is the primary source of understanding a concept, and English text is just an accompanying explanation, not the other way round
note that most algorithms in TAOCP are only described in English (pseudocode), no (MIX) source code. (So if you want source code as primary you simply won't find it… but then again, a typical algorithm textbook like CLRS also would have only pseudocode).
The TAOCP pseudocode is also in a unique style that's not similar to modern programming languages — e.g. here's a note that Knuth wrote just earlier this year: https://cs.stanford.edu/~knuth/papers/cvm-note.pdf explaining (and generalizing/correcting) in his own style, the algorithm from a recent paper (https://arxiv.org/pdf/2301.10191v1.pdf): compare how the algorithm is described in Knuth (first page) vs the paper (second page): Knuth is simultaneously more high-level and more low-level (and also uses “goto” rather than “structured programming” for-loops, about which heretical preference of his I can write a long essay some other time :D).
It is possible to have different opinions about which one is the best way to learn the stuff today, but clearly Knuth rewrote it in his style because he thought it was better. ;)