Wow, what a great summary with lots of realism and nuance. I agree with the author's conclusions that what is missing is consolidation and interoperability between standards (e.g. make Protégé easier to use and ensure libraries for RDF parsing and serialization exist for all languages). No technology will be adopted if it requires PhD-level ability to handle jargon and complexity... but if there were tutorials and HOWTOs, we could see big progress.
Personally, I'm not a big fan of the "fancy" layers of the Semantic Web Stack like OWL (see https://en.wikipedia.org/wiki/Semantic_Web_Stack ), but the basic layers of RDF + SPARQL as a means for structured exchange of data seem like a solid foundation to build upon.
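To make the "RDF + SPARQL as a foundation" point concrete, here's a toy sketch in Python using only the standard library (no real RDF toolkit): facts as (subject, predicate, object) tuples and a tiny pattern matcher that mimics what a SPARQL basic graph pattern does. All URIs here are made up for illustration.

```python
# Toy RDF-style triple store: each fact is a (subject, predicate, object) tuple.
# The URIs below are illustrative only.
triples = [
    ("http://example.org/alice", "http://xmlns.com/foaf/0.1/name", "Alice"),
    ("http://example.org/alice", "http://xmlns.com/foaf/0.1/knows", "http://example.org/bob"),
    ("http://example.org/bob",   "http://xmlns.com/foaf/0.1/name", "Bob"),
]

def match(triples, s=None, p=None, o=None):
    """Return triples matching the pattern; None acts like a SPARQL variable."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Roughly analogous to: SELECT ?name WHERE { <.../alice> foaf:name ?name }
names = [o for _, _, o in match(triples,
                                s="http://example.org/alice",
                                p="http://xmlns.com/foaf/0.1/name")]
```

Real SPARQL adds joins, filters, federation and much more, but the core mental model really is just pattern-matching over a bag of triples like this.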
It's really simple in the end: we've got databases and identifiers. INTERNALLY to any company or organization, you can set up a DB of your choosing and ensure data follows a given schema, with data linked through internal identifiers. When you want to publish data EXTERNALLY, you need to have "external identifiers" for each resource, and URIs are a logical choice for this (this is also a core idea of REST APIs of hyperlinked resources). Similarly, communicating data using a generic schema capable of expressing arbitrary entities and relations, like RDF or JSON-LD, is a logical next step, rather than each API using its own bespoke data schema...
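As a sketch of what "external identifiers plus a generic schema" looks like in practice, here's a minimal JSON-LD document built with Python's standard library (the resource URIs are invented for illustration; the property URIs are real schema.org terms):

```python
import json

# A resource published with a globally resolvable identifier (@id is a URI)
# and properties drawn from a shared vocabulary: @context maps short names
# to full URIs so consumers agree on what "name" and "knows" mean.
doc = {
    "@context": {
        "name": "http://schema.org/name",
        "knows": {"@id": "http://schema.org/knows", "@type": "@id"},
    },
    "@id": "http://example.org/people/alice",
    "name": "Alice",
    "knows": "http://example.org/people/bob",
}

payload = json.dumps(doc, indent=2)
```

Any consumer can dereference the @id URI to fetch more data about the resource, which is exactly the hyperlinked-resources idea from REST, just with a shared data model instead of a bespoke one per API.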
As for making web data machine-readable, the key there is KISS: efforts like schema.org, with opt-in, progressive-enhancement annotations, are very promising.
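For the schema.org case, the opt-in annotation is typically a JSON-LD script block dropped into an existing page: the page works fine without it, and crawlers that understand it get structured data for free. A sketch (the article details are made up, but Article, headline, author and datePublished are real schema.org terms):

```python
import json

# Illustrative schema.org annotation for a blog post; all values are made up.
annotation = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Why RDF Still Matters",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2024-01-15",
}

# Embedded in the page head as a progressive enhancement; browsers ignore it.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(annotation, indent=2)
    + "\n</script>"
)
```

The nice property is the incremental adoption path: you can annotate one page, one entity type at a time, with no change to how the page renders.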
For anyone wanting to know more about this domain, there is an online course here:
https://www.youtube.com/playlist?list=PLoOmvuyo5UAeihlKcWpzV...
The whole course is pretty deep (would take a month to go through it all), but you can skip ahead to lectures of specific interest.