Despite the appeal to the Unix philosophy in its docs, sd has never stuck with me because it doesn't natively support replacing across a project / multiple files, and that's what you almost always want to do. Instead I pipe `rg --json` output into a script (rg does do replacement via `-r`, but only to stdout, not in-place).
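Roughly, a minimal sketch of that kind of wrapper (this one shells out to `rg -l` for the file list instead of parsing the full `--json` stream; no dry-run or backups, so treat it as illustrative, not a robust tool):

```python
#!/usr/bin/env python3
"""Project-wide in-place replace: rg finds the files, Python rewrites them.

A sketch only: assumes UTF-8 text files and a Python-compatible regex.
"""
import re
import subprocess
import sys

def replace_in_project(pattern: str, replacement: str) -> None:
    # -l / --files-with-matches prints only the paths of matching files.
    # check=False because rg exits non-zero when there are no matches.
    result = subprocess.run(
        ["rg", "-l", pattern],
        capture_output=True, text=True, check=False,
    )
    for path in result.stdout.splitlines():
        original = open(path, encoding="utf-8").read()
        updated = re.sub(pattern, replacement, original)
        if updated != original:
            with open(path, "w", encoding="utf-8") as f:
                f.write(updated)
            print(f"rewrote {path}")

if __name__ == "__main__":
    replace_in_project(sys.argv[1], sys.argv[2])
```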
I'm not sure I understand what you're saying. You can pass a list of files to sd: `sd before after **/*.py` replaces in all the Python files in your project (assuming your shell expands `**`). That's about as native a multi-file replace as you could possibly want.
Good luck, it would be fantastic if you can get it accepted as a mermaid alternative; it's much, much nicer than mermaid, in both the visual result and the language. I've used D2 heavily for a couple of years or so now, and it's really fantastic. I hope your paid product is healthy. Sequence diagrams are my most common diagram type by some way, I think, and over the last year I seem to have always used the sketch rendering.
While you're here, can I mention a feature request? I'd like to be able to put clickable hyperlinks into sequence diagram arrow labels (e.g. so I can link the message to where in the code it occurs).
Also, I'd like more control over vertical spacing in sequence diagrams, and perhaps the ability to define groups of columns (purely visual grouping).
Agreed. I'm the author of a fairly popular dev environment project. Every so often you get people turning up enraged because I chose a name that some other project once used. In the situation I'm talking about it makes even less sense than pip -- it's a command-line executable. There's no one repository (although it doesn't seem like the Debian people would agree with that!). There's a multitude of package managers on different platforms. Furthermore, in case people hadn't noticed, there are these things called languages, countries, and cultures. There is no reason in general why there couldn't be package managers whose use is culturally or geographically non-uniform and perhaps entirely unfamiliar to people in other countries. So, what's the plan for checking whether a name is "taken" across all of human culture, history, and computing platforms? It's a silly, outdated notion.
I wonder whether it will have a flat namespace that everyone competes over or whether the top-level keys will be user/project identifiers of some sort. I hope the latter.
Fundamentally we still have the flat namespace of top-level Python imports, which is the same as the package name for ~95% of projects, so I'm not sure how they could really change that.
Package names and module names are not coupled to each other. You could have a package name like "company-foo" and import it as "foo" or "bar" or anything else.
But, if you want, you can have a non-flat namespace for imports using PEP 420 – Implicit Namespace Packages, so all your different packages "company-foo", "company-bar", etc. can be installed into the "company" namespace and just work.
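A minimal sketch of how that looks on disk (the "company" names and layout are just illustrative): two separately installed distributions contribute subpackages to the same namespace, as long as neither ships a `company/__init__.py`.

```python
# Files installed by two independent distributions into site-packages.
# Crucially there is NO company/__init__.py, so "company" becomes a
# PEP 420 implicit namespace package spanning both distributions:
#
#   company-foo's wheel ->  company/foo/__init__.py
#   company-bar's wheel ->  company/bar/__init__.py

import company.foo  # provided by the company-foo distribution
import company.bar  # provided by the company-bar distribution
```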
Nothing stops an index from validating that a wheel's top-level import names match (or live under) its project name. Sdists with arbitrary build backends couldn't be checked the same way, but you could restrict which backends are allowed for certain users.
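As a sketch of what such a check could look like (purely illustrative -- the normalization and the namespace rule here are my assumptions, not any real index's policy):

```python
"""Index-side sketch: do a wheel's top-level import names match its
(normalized) project name, or sit under it as a namespace prefix?"""
import re
import zipfile

def top_level_names(wheel_path: str) -> set[str]:
    # Top-level import names are the first path component of each file,
    # ignoring the wheel's .dist-info / .data metadata directories.
    names = set()
    with zipfile.ZipFile(wheel_path) as zf:
        for entry in zf.namelist():
            top = entry.split("/", 1)[0]
            if not top.endswith((".dist-info", ".data")):
                names.add(top.removesuffix(".py"))
    return names

def name_matches_project(wheel_path: str, project_name: str) -> bool:
    # PEP 503-style normalization, then '-' -> '_' as in import names.
    expected = re.sub(r"[-_.]+", "-", project_name).lower().replace("-", "_")
    return all(
        t.lower() == expected or expected.startswith(t.lower() + "_")
        for t in top_level_names(wheel_path)
    )

# Hypothetical wheel file name, just to show the call:
print(name_matches_project("company_foo-1.0-py3-none-any.whl", "company-foo"))
```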
For some subjects, it's appropriate to host multiple versions of articles written natively in different languages.
But for other subjects, for example science and mathematics, it does a huge disservice to non-English readers: it means that their Wikipedia is second-rate, or worse.
For science, mathematics, and other subjects that have no cultural inflection, Wikipedia should use machine translation so that every language's article is a translation of the same underlying semantic content.
It would still be written by humans. But ML / LLMs would be involved in the editing pipeline so that people lacking a common language can edit the same text.
This is the biggest mistake Wikipedia's made IMO: it privileges English readers since the English content is highest quality in most areas that are not culturally specific, and I do not think that it's an organization that wants to privilege English readers.
Users can already translate English Wikipedia articles to other languages on the fly with Chrome etc. However, the quality of the translation is just not up to scratch yet, particularly for languages that are radically different from English; just try reading some ML-translated Japanese or Chinese Wikipedia articles.
I'm not sure why you think that; perhaps you're remembering something from a few years ago, or perhaps you have a prior bias against ML/LLM solutions. The translations Google Chrome produces for Chinese Wikipedia pages seem great. For example, the Linear Algebra article.
Science and mathematics have no cultural inflection? Do you speak more than one language? Each language has its standard sentence structures when it comes to these disciplines, and auto-translators are very much not up to the task.
I prefer my Wikipedia to remain 100% human-generated quality information rather than the garbage AI-slop content that is already abundant enough on the internet.
I understand you'd like to believe that, but it looks like you're simply wrong. For example, here is the translation produced by Google Chrome of the Chinese version of the page on Bézout's theorem.
It reads "Bézout's theorem is a theorem in algebraic geometry that describes the number of intersections of two algebraic curves . The theorem states that the number of intersections of two coprime curves X and Y is equal to the product of their degrees."
which is perfectly good English. The problem is that that is the entire page! It is thus woefully inadequate in comparison to the English page:
"This comment is well-meaning, but it is both naive and technically flawed in several key ways. Let’s unpack why it's wrong and even counterproductive, especially when it comes to topics like science and mathematics." ...snip snip... "TL;DR: The comment is naive because it overestimates the capabilities of machine translation for precise scientific knowledge, underestimates cultural context in science/math, and proposes a solution that would undermine Wikipedia’s decentralized, community-driven model. It wrongly frames linguistic diversity as a weakness instead of a strength."
See my replies in the sibling threads; I give concrete examples both of the weakness of non-English Wikipedias in mathematics and of the quality of machine translation. I understand that you want to believe the happy-clappy cultural-diversity thing, but sadly "is" and "ought" are not the same thing.
Remember that in Europe and the UK people don't have access to the type of dryers Americans use that actually ... get clothes completely dry in a reasonable time. In fact, the sort of vented dryer that makes Americans think dryers are supposed to actually get clothes dry is being made illegal in the EU.
So we hang clothes outside in good weather, and otherwise use heat-pump washer-dryers to get things partially dry, and then hang them around the house. So you will typically need to sleep while sheets are drying.