> Don't implement XML serialization. The simplest and most widely supported serialization is n-quads (https://www.w3.org/TR/n-quads/). 10 pages, again with examples, toc, and lots of non-normative content.
You omit the transitive hull that the n-quads standard drags along, as if implementing a deserializer involved nothing more than a parser for the top-level EBNF.
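To make "transitive hull" concrete, here's a deliberately naive N-Quads line parser (a TypeScript sketch, not anyone's real implementation). The top-level EBNF really is a handful of productions; the comments mark what the spec pulls in from elsewhere (RFC 3987 IRIs, UCHAR/ECHAR escapes, BCP 47 language tags, datatype rules from RDF 1.1 Concepts) and what this sketch therefore gets wrong:

```typescript
// Deliberately naive N-Quads line parser: handles only the top-level
// shape `<s> <p> <o> [<g>] .` and punts on everything the spec drags
// in transitively (RFC 3987 IRI validation, \uXXXX / \UXXXXXXXX
// escapes, BCP 47 language tags, datatype semantics from RDF 1.1
// Concepts, blank-node label rules, ...).
type Term =
  | { kind: "iri"; value: string }
  | { kind: "blank"; label: string }
  | { kind: "literal"; value: string; datatype?: string; lang?: string };

type Quad = { subject: Term; predicate: Term; object: Term; graph?: Term };

function parseTerm(token: string): Term {
  if (token.startsWith("<") && token.endsWith(">")) {
    // A real implementation must validate this as an absolute RFC 3987
    // IRI and decode UCHAR escapes. Skipped here.
    return { kind: "iri", value: token.slice(1, -1) };
  }
  if (token.startsWith("_:")) {
    return { kind: "blank", label: token.slice(2) };
  }
  const m = token.match(/^"((?:[^"\\]|\\.)*)"(?:\^\^<([^>]*)>|@([A-Za-z0-9-]+))?$/);
  if (m) {
    // ECHAR/UCHAR unescaping and language-tag validation omitted.
    return { kind: "literal", value: m[1], datatype: m[2], lang: m[3] };
  }
  throw new Error(`unrecognized term: ${token}`);
}

function parseLine(line: string): Quad | null {
  const trimmed = line.trim();
  if (trimmed === "" || trimmed.startsWith("#")) return null;
  // Naive tokenization: splits on whitespace, so it already breaks on
  // literals containing spaces. The first of many corner cases.
  const tokens = trimmed.replace(/\s*\.\s*$/, "").split(/\s+/);
  const [s, p, o, g] = tokens.map(parseTerm);
  return { subject: s, predicate: p, object: o, graph: g };
}

console.log(parseLine('<http://example.org/a> <http://example.org/p> "hi"@en .'));
```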
Also, you're still tip-toeing around the wider ecosystem of OWL, SHACL, SPIN, SAIL and friends.
The fact that RDF alone even allows for that much discussion is indicative of its complexity.
It's like a discussion about SVG and HTML that never goes beyond SGML.
And you can't have your cake and eat it too. You HAVE to implement the XML syntax, or you won't be able to load half of the world's datasets, nor will you even be able to start working with OWL, because they do EVERYTHING with XML.
You're still coming from a user perspective. RDF will go nowhere unless it finds a balance between usability and implementability. Currently, I'd argue, it focuses on neither.
JS is a bigger ecosystem than just the browser; if you want to import any real-world dataset (or persistence), you need disk backing. So anything that just goes poof on a power failure doesn't cut it.
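For what it's worth, "disk backing" here means actual durability, not just writing to a file: a write only counts once fsync has returned. A minimal Node sketch of an append-only quad log (file name and line format invented for illustration):

```typescript
import { openSync, writeSync, fsyncSync, closeSync } from "node:fs";

// Minimal durable append-only log (illustrative; path and line format
// are made up). The point: a write is only acknowledged after fsync,
// otherwise the data may sit in the OS page cache and "go poof" on
// power loss.
class QuadLog {
  private fd: number;

  constructor(path: string) {
    // "a" = append, create the file if missing.
    this.fd = openSync(path, "a");
  }

  append(subject: string, predicate: string, object: string, graph = ""): void {
    const line = `${subject} ${predicate} ${object} ${graph}.\n`;
    writeSync(this.fd, line);
    // Force the data out of the page cache onto the disk.
    fsyncSync(this.fd);
  }

  close(): void {
    closeSync(this.fd);
  }
}

const log = new QuadLog("quads.nq");
log.append("<http://example.org/a>", "<http://example.org/knows>", "<http://example.org/b>");
log.close();
```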
Sorry, but "works pretty well" and 6 examples combined with an unannotated, automatically extracted API do not reach my bar for "production quality".
It's that "works pretty well" state of the entire RDF ecosystem that I bemoan. It's enough to write a paper about; it's not enough to bet the future of your company on.
Or, you know, your life. Because the ONLY real-world example of an OWL ontology ACTUALLY doing anything is ALWAYS SNOMED.
SNOMED. SNOMED. SNOMED.
[A joke we always told about theoreticians finding a new lower bound and inference engines winning competitions:
"Can snowmed be used to diagnose a patient?"
"Well it depends. It might not be able to tell you what you have, but it can tell you that your 'toe bone is connected to the foot bone' 5 million times a second!"]
Imagine making the same argument for SQL; it'd be trivial to just point to a different library/db.
And so far we've only talked about complexity inherent in the technology, and not about the complex and hostile tooling (a.k.a. Protégé) or even the absolutely unmaintainable rat's nests that big ontologies devolve into.
Having a couple different competing standards would actually improve things quite a bit, because it would force them to remain simple enough that they can still somehow interoperate.
It's a bit like YAGNI. If you have two simple standards, it's trivial to make them compatible by writing a tool that translates one to the other, or even speaks both. If you have one humongous one, it's nigh impossible to have two compatible implementations, because they will diverge in some minute thing. See Rich Hickey's talk "Simplicity Matters" for an in-depth explanation of the difference between simple (few parts, with potentially high overall complexity through intertwinement and parts taking multiple roles) and decomplected (consisting of independent parts, with low overall system complexity).
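To make that concrete: a bridge between two simple standards is a screenful of code. Both formats below are invented purely for illustration; the point is that each side is small enough to hold in your head, so the translation is obviously correct:

```typescript
// Two invented toy formats:
//   Format A: pipe-separated lines, "s|p|o"
//   Format B: one JSON object per line, {"s": ..., "p": ..., "o": ...}
// Translating between two *simple* standards is a few lines each way;
// try writing the equivalent bridge between two divergent
// implementations of one huge spec.

function aToB(lineA: string): string {
  const [s, p, o] = lineA.split("|");
  return JSON.stringify({ s, p, o });
}

function bToA(lineB: string): string {
  const { s, p, o } = JSON.parse(lineB);
  return `${s}|${p}|${o}`;
}

const a = "alice|knows|bob";
const b = aToB(a);            // {"s":"alice","p":"knows","o":"bob"}
console.log(bToA(b) === a);   // true -> round-trips losslessly
```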
And regarding JSON Schema:
I never advocated for JSON Schema, and the fact that you have to compare RDF's maturity to something that hasn't been released yet...
You would expect a standard on which work began 25 YEARS ago to be a bit more mature in its implementations.
If it hasn't reached that after all this time, we have to ask the question: why is that?
And my guess is that implementors see the standards _and_ their transitive hull and go TL;DR, and even if they try, they get overwhelmed by the sheer amount of stuff.