I've entered books from The Expanse and Lockwood & Co series, and the output was not exactly overwhelming:
- other books from the same series (duh, I don't need a recommender for that)
- The Hobbit, Harry Potter, Asimov, etc. (duh, I like sci-fi, and I've surely already read all the classics).
This was actually a potential problem, at least on Commodore machines.
On those machines you were able to abbreviate keywords.
At the same time, they supported full-screen editing. That meant you could just cursor up over some code, make changes, hit Enter, and the changes would take effect.
However, when using the abbreviations, it was possible to create lines that were too long. I don't recall the specifics, but there was a length limit for BASIC input. Let's say it was 80 characters, for the sake of discussion.
Using abbreviations (like ? for PRINT), you could end up with a line that would LIST to more than 80 characters, but if you tried to change it with the screen editor, the line would be too long and would be truncated silently.
So you had to be cautious with your use of the abbreviations.
Similarly, and maybe more related to the article's topic: Commodore BASIC also stored the commands as tokens, so you could enter abbreviated commands like
10 ? "Hello"
20 gO 10
and a LIST command would yield
10 print "Hello"
20 goto 10
So saving commands as tokens in memory and formatting them on output was somewhat common back then.
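A minimal Python sketch of that crunch-on-entry / expand-on-LIST scheme (the token values for PRINT and GOTO are the real Commodore BASIC V2 ones; the rest is purely illustrative, not the actual ROM routine):

```python
# Sketch of Commodore-style BASIC keyword tokenization (illustrative,
# not the actual ROM code). On entry, keywords and their abbreviations
# are crunched to one-byte tokens; LIST expands them back to full text.

# Real CBM BASIC V2 token values for these two keywords.
KEYWORDS = {
    "print": 0x99,
    "goto": 0x89,
}
# Abbreviations accepted on input ("?" for PRINT, "gO" for GOTO).
ABBREVIATIONS = {
    "?": 0x99,
    "go": 0x89,
}

def crunch(line: str) -> list:
    """Tokenize one entered BASIC statement (very simplified)."""
    out = []
    for word in line.split():
        key = word.lower()
        if key in ABBREVIATIONS:
            out.append(ABBREVIATIONS[key])
        elif key in KEYWORDS:
            out.append(KEYWORDS[key])
        else:
            out.append(word)  # literals and numbers kept as-is
    return out

def expand(tokens: list) -> str:
    """What LIST does: turn token bytes back into full keywords."""
    names = {v: k for k, v in KEYWORDS.items()}
    return " ".join(names[t] if isinstance(t, int) else t for t in tokens)

# Both spellings crunch to the same byte, so LIST prints the long form:
print(expand(crunch('? "Hello"')))   # print "Hello"
print(expand(crunch("gO 10")))       # goto 10
```

Because only the token byte is stored in memory, the interpreter genuinely cannot remember which spelling you typed, which is why LIST always shows the long form.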
The Speccy was more advanced in this respect (as mentioned in the parent comment), and it had the better BASIC for sure.
Or think about the M (aka MUMPS) language, which allows you to type just the first letter(s) of a keyword and considers it valid syntax.
Imagine Java if you could…
na com.mycompany.myapp;
pu cl MyClass {
pro sta i = 42;
pri fi ch[] MAGIC = ['a', 'b'];
pu sta v main(String[] args) {
OtherClass otherClass = n OtherClass();
f (i i = 0; i < MyClass.i; i++) {
otherClass.hex(i, this.MAGIC);
}
}
}
Ignoring the prediction that everything will be decentralized in the near future (it probably won't be), how about this scenario: you want to see whether voting activity (recorded in your events) correlates with the number of times you went to the doctor in the year prior to the election (as recorded in your health records). If we want to be able to run such a distributed query, we need to be sure that each node stores the data in a predefined format ("tables") and runs a service that will receive the distributed query request, decide whether or not it wants to participate, and then execute the business logic of the conditions (healthevent.year > getdate() - 365), probably defined in some programming language ("SQL").
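A toy sketch of what the node side of that scenario could look like. The "doctor visits in the year before the election" condition and the opt-out step come from the scenario above; every name here (health_records, events, handle_query) is invented for illustration:

```python
from datetime import date, timedelta

# Toy node-side logic for a distributed query: the node holds records
# in an agreed "table" shape, may decline the request, and otherwise
# evaluates the query's conditions locally. All data is made up.
health_records = [
    {"patient": "alice", "kind": "doctor_visit", "date": date(2023, 9, 1)},
    {"patient": "alice", "kind": "doctor_visit", "date": date(2021, 5, 2)},
]
events = [
    {"patient": "alice", "kind": "voted", "date": date(2023, 11, 7)},
]

def handle_query(election_day: date, participate: bool = True):
    """Answer the 'visits in the year before the election' query locally."""
    if not participate:  # a node is free to decline the request
        return None
    window_start = election_day - timedelta(days=365)
    visits = [r for r in health_records
              if r["kind"] == "doctor_visit"
              and window_start <= r["date"] < election_day]
    voted = any(e["kind"] == "voted" and e["date"] == election_day
                for e in events)
    return {"visits_in_prior_year": len(visits), "voted": voted}

print(handle_query(date(2023, 11, 7)))
# {'visits_in_prior_year': 1, 'voted': True}
```

The aggregator would then only ever see the per-node summaries, never the raw health records, which is the whole appeal of pushing the query to the data.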
> decentralized in the near future (it probably won't)
I may be optimistic on this one. But it feels like the dead internet is encroaching everywhere. There aren't many centralized services I can think of that people actually like. Perhaps Steam, but most of them are hated by their users. Currently, I'm imagining that more and more of our digital interactions are going to go through AI helpers: agents that will filter out ads and likely also hold onto our information for us. At a certain point, centralized DBs just become unnecessary middlemen with privacy concerns. If I want photos of my friends, why not just have my assistant ask their assistant?
> how about this scenario: ....
It seems like all of this data could easily fit into 10 MB of text. This is the kind of thing an assistant would likely churn through without issue. It could also search for other interesting correlations while it's at it.
1) They do not publish a rationale for why the world needs yet another protocol / language / framework on the homepage. It is hidden at https://typeschema.org/history
2) On the history page, they confuse strongly typed with statically typed languages. I have a prejudice about people who do this.
3) The biggest challenge with data models is not auto-generated code (which many people would avoid on principle anyway), but compressed, optimized wire serialization. So you START by selecting this for your application (e.g. Avro, Cap'n Proto, MessagePack) and then use the schema definition language that comes with the serialization tool you've chosen.
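To make the wire-serialization point concrete, here's a rough size comparison in Python between a self-describing text encoding (JSON) and a schema-driven binary one. The stdlib struct module stands in for a real format like Avro or MessagePack, which do this far better; the record and schema are made up:

```python
import json
import struct

# The same record, once as self-describing JSON and once packed against
# a fixed schema (struct is just a stand-in for a real binary format
# such as Avro or MessagePack).
record = {"user_id": 42, "score": 3.5, "active": True}

json_bytes = json.dumps(record).encode("utf-8")

# Schema: little-endian u32, f64, bool. Field names and types live in
# the schema, not on the wire -- that's where the size win comes from.
SCHEMA = struct.Struct("<Id?")
binary_bytes = SCHEMA.pack(record["user_id"], record["score"],
                           record["active"])

print(len(json_bytes), len(binary_bytes))  # the binary form is far smaller
assert len(binary_bytes) < len(json_bytes)
```

The trade-off is exactly the one schema languages exist to manage: the binary record is unreadable without the schema, so both sides must agree on it ahead of time.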
> 3) The biggest challenge about data models is not auto-generated code
I would say auto-generated code is most definitely the harder problem to solve, and I’d also go out on a limb and say it is THE problem to solve.
Whether it’s JSON, XML, JavaScript, SQL, or what have you, integrating both data and behavior between languages is paramount. But nothing has changed in the last 40+ years of solving this problem; we still generate code the same clumsy way: a Chinese wall between systems, separate build steps, and all the problems that go with them.
Something like Project Manifold[1] in the JVM world is, in my view, the way forward. Shrug.
LINQ, Splunk, and KQL are all proprietary. For the purposes of setting new standards, they might as well not exist.
PRQL is the only real entrant in your list when it comes to adding a pipelining syntax to a language for relational queries in a way that others can freely build on.
No, no, oh God please nooooo.
People will use this tool for their listings on real-estate classifieds portals like Zillow.
The real estate listed there is never a bespoke design for you and your family. In some locations there are plenty of affordable homes, so you can actually choose one with the layout closest to your goals.
In most cases, though, affordable homes are rare, so people don't really care about the current layout: they will buy any home and remodel it according to their tastes.
To estimate remodeling costs, it is better to work with bare, empty layout plans that aren't cluttered with furniture and 3D effects and that have all measurements specified.
They're already doing it (not with this tool, of course).
At least around here (Spain), when selling shitty (but expensive) apartments, they'll publish "artist renditions of possible remodels" instead of the actual current state of the home.
I understand that you like some Rust features like Result and Option types, enums, and pattern matching.
These features provide more safety, but at the same time they reduce productivity by forcing the developer to statically type everything.
The question, then, is why we need to transpile to Go, a garbage-collected language that is slower than Rust.
If we already agree on super-safe static typing, why not just use Rust? Are there Go libraries that are unavailable, or of worse quality, in Rust?