There's just one problem: privacy. I don't have that when using voice input. Writing won't decline in favor of voice input until we find a way to make it private. Tangential to this, smart glasses won't replace smartphones either for the same reason.
So far there aren't any particularly good open source voice recognition models though, in large part due to a lack of training data. You can (and should!) contribute to Common Voice to help change that: https://commonvoice.mozilla.org/en
Make that privacy and accents. There are too many regions where voice recognition just won't work without essentially building new voice models. And that's before we get to people using their second/third/... language.
- removing quotes around attribute values
- replacing href values with the most frequent characters
- sorting content alphabetically
- foregoing paragraph and line-break tags for a preformatted tag
I was able to bring it from 730 bytes (330 if you had compression enabled) down to 650 bytes (313 bytes after compressing with Brotli). Rewording the text might get you even more savings. Of course I wouldn't use this in production.
Why not eliminate quotes in production, if you know the value doesn't need quotes? That's still valid, it's optional HTML: https://meiert.com/en/blog/optional-html/
Sorting content alphabetically and that sort of thing to improve compression may be silly code golfing and impractical for page content, but on the other hand I don't see that it costs you anything (aside from time experimenting with it) when applied to the <head> / metadata. https://www.ctrl.blog/entry/html-meta-order-compression.html
I think that both of these methods could be used in production, and I intend to do so when possible.
Not worth the… what? I’m not sure what you’re talking about or thinking of, but I think you’re wrong. Parser restarting is purely when speculative parsing fails, and there’s nothing here that can trigger speculative parsing, or failures in it.
If you’re using the HTML parser (e.g. served with content-type text/html), activities like including the html/head/body start and end tags and quoting attribute values will have a negligible effect. It takes you down slightly different branches in the state machines, but there’s very little to distinguish between them, one way or the other. For example, consider quoting or not quoting attribute values: start at https://html.spec.whatwg.org/multipage/parsing.html#before-a..., and see that the difference is very slight; depending on how it’s implemented, double-quoted may have simpler branching than unquoted, or may be identical; and if it happens to be identical, then omitting the quotes will probably be faster because there are two fewer characters being lugged around. But I would be mildly surprised if even a synthetic benchmark could distinguish a difference on browsers’ parser implementations. Doing things the XHTML way will not speed your document parse up.
As for the difference achieved by using the XML parser (serve with content-type application/xhtml+xml), I haven’t seen any benchmarks and don’t care to speculate about which would be faster.
So far that's your theory, but I measured it in practice. Developer tools give easy access to parser events and to different page-load timings, such as the parsing stage.
FYI: with some super basic benchmarking that repeats an HTML snippet 1–100000 times and times how long setting innerHTML takes, I can report that in Firefox, it’s a little faster to omit </p>, a little faster to omit trailing slashes on void elements, quoting attributes is probably a bit faster, and that the XML parser is much slower (mostly 2–4×). In Chromium, parser performance is way noisier, and I mostly can’t easily see a difference by numbers (without plotting), though it’s probably faster to omit </p>. As for the effects of using the XML parser, my benchmark crashes the tab (SIGILL) in Chromium and I don’t care enough to figure out why.
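A harness along the lines described might look like this (a sketch, not the commenter's actual code; `timeIt` and the snippets are invented for illustration). The `innerHTML` part needs a browser, so it is shown commented out:

```javascript
// Returns the median of several timed runs of fn, to dampen noise a little.
// (Median-of-5 is an assumption on my part; any robust statistic would do.)
function timeIt(fn, runs = 5) {
  const times = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    fn();
    times.push(performance.now() - start);
  }
  return times.sort((a, b) => a - b)[Math.floor(runs / 2)];
}

// In a browser console, compare parser paths by repeating a snippet many times:
// const container = document.createElement("div");
// const quoted   = '<p class="x">hi</p>'.repeat(100000);
// const unquoted = '<p class=x>hi</p>'.repeat(100000);
// console.log("quoted:  ", timeIt(() => { container.innerHTML = quoted; }));
// console.log("unquoted:", timeIt(() => { container.innerHTML = unquoted; }));
```

Note that setting `innerHTML` measures fragment parsing rather than a full document load, so it sidesteps speculative parsing entirely, which matters for the restart discussion below.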
I might as well also mention that we are talking about differences of a few dozen nanoseconds at most per instance, that even then the noise threshold is normally well above that, and that it’s difficult to show significant results in benchmarking at all because you need preposterous amounts of serialised HTML to get a measurable result.
The tools you’re talking about are useless for measuring this kind of thing. We’re talking about potential differences well below microseconds, and you’re proposing using tools that (presuming I correctly understand which you mean) report answers in milliseconds, with noise rates of milliseconds (and a lot more if you try scaling it up with things like a million elements in a row). It is possible to benchmark this stuff, but the way you describe is utterly unsound.
Unless presented with concrete steps to reproduce what you’re talking about, I refuse to believe you.
(Mind you, I’m not denying in this that there are differences, just that they’re even measurable this way on even vaguely plausible documents.)
I’m not talking, like you, about “A parses faster than B”. I’m talking about “A causes the parser to start over, B doesn’t, so B is faster”. Restarts do make a difference that does not require microbenchmarks and is in the realm of milliseconds. This way I was able to load pages in a single frame at 60Hz, which was the threshold I wanted to hit, because it made my webdev friend not realize he was already on the next page when he clicked the link. Feel free to refuse to believe me.
Don't agonize over deciding which things to work on; you risk analysis paralysis along with a string of other negative feelings.
Let curiosity guide you: explore as many domains as you're able and willing to, and become competent in most of them. The more domains you weave into your web of knowledge, notwithstanding the lack of expertise, the higher the probability you'll find links across the ever-expanding network. Maybe delve deep into the foundations, occasionally or frequently, for you could stumble onto something new. Consider teaching or talking about your knowledge; start a blog, write a book, whatever. It'll help others as well as you, now and later.
In the end, whether through our descendants, works or knowledge, we're all children of that instinct to leave something behind after our death, of which your post is yet another manifestation.
I don't think he wanted to know the specifics of a two-pence coin, but was rather lamenting the overuse of rough and localized units in lieu of logical and standardized ones.