
True, but SQL is (for better or worse) the de-facto standard query language for relational databases. The alternatives are all very niche.

I would like a better alternative, but it would need some very significant benefits compared to SQL to gain any traction. SQL is just so entrenched at this point that even NoSQL database engines are adding support for pseudo-SQL query languages (which is really the worst of both worlds - the clunkiness of SQL syntax without the power of the relational model).



I'm curious as to why you find SQL to be clunky; I find it extremely on point in most cases. I mean, how would you write a SELECT that was better than:

SELECT whatever FROM thisplace?

I know you can make it clunky with parameters and crazy stored procedures, and I've myself been guilty of a few recursive SQL queries that most people who aren't intimate with SQL struggle to understand quickly, but I consider those things to be bad practice that should only be done when unavoidable.

The fact that SQL is still the preferred standard sort of speaks volumes to me about how good it is. We're frankly approaching something similar with C-style languages. I recently did a gig as an external examiner, and it took me a while to realise that some code I was reading in a student's PDF report was Kotlin and not TypeScript, because they look so alike.


The problem with SQL is what happens when you fall off the SELECT FROM JOIN WHERE GROUP BY HAVING ORDER BY LIMIT cliff. The simple stuff in SQL reads like English, but for that case an ORM would generate a pretty efficient query anyway. The complex stuff in SQL looks terrible in my experience, and ORMs bail out quickly. Once you can't get the result with a simple SELECT, SQL stops being declarative. Instead of writing what you want to get, you write something like a postmodern poem while having a stroke, just to convince Postgres' benevolent spirits to give you something almost right. Complex UPDATEs and DELETEs with joins are even worse.

Also, the lack of syntactic sugar doesn't help. The SELECT list could support something like "t1.* EXCEPT col1, col2". Maybe JOIN ON foreign key would be nice. IS DISTINCT FROM for sane null comparisons looks terrible. Aliases for reusing complicated expressions are really limited. Upsert syntax is painful. Window functions are so powerful that I can't really complain about them, though.

We use a lot of sql for business logic, but some code I have to reread from zero every time I need it. Maybe we modeled our data wrong or there is some inherent complexity you can’t avoid, but I mostly blame sql the language. Unfortunately I have no idea how it could be improved.

Anyway, I think the SQL cliff is real. Once you take a step outside the happy path, prepare for a headache. For me SQL is definitely in some local maximum; after all, I use it every day at work.


The biggest thing is... SQL is not reusable, period.

Why don't we have SQL libraries?

I know that data models are kind of special snowflakes, but some models pop up over and over and over again and code reuse is always 0 with SQL.

To give you an example of a common problem, SLAs or the like for teams with regular business hours.

A team has to respond to a request within N hours. To calculate that I need to take into account 8 business hours per day, excluding weekends, excluding holidays (ideally localized holidays), etc.

It's a nightmare with SQL. It's precisely the kind of thing you want in a library.
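As a sketch of what such a library function might look like (Python, with hypothetical names; the 09:00-17:00 workday and the holiday handling are illustrative assumptions, not a reference implementation):

```python
from datetime import datetime, date, timedelta

# Hypothetical sketch of business-hours SLA math: 8 business hours per day
# (09:00-17:00), Mon-Fri, skipping a caller-supplied set of holiday dates.
WORKDAY_START, WORKDAY_END = 9, 17

def is_workday(t, holidays):
    return t.weekday() < 5 and t.date() not in holidays

def add_business_hours(start, hours, holidays=frozenset()):
    """Return the deadline `hours` business hours after `start`."""
    t, remaining = start, timedelta(hours=hours)
    while remaining > timedelta(0):
        if not is_workday(t, holidays) or t.hour >= WORKDAY_END:
            # skip to the next morning and re-check (weekend/holiday/after hours)
            t = (t + timedelta(days=1)).replace(
                hour=WORKDAY_START, minute=0, second=0, microsecond=0)
            continue
        if t.hour < WORKDAY_START:
            t = t.replace(hour=WORKDAY_START, minute=0, second=0, microsecond=0)
        day_end = t.replace(hour=WORKDAY_END, minute=0, second=0, microsecond=0)
        step = min(remaining, day_end - t)
        t, remaining = t + step, remaining - step
    return t

# 2 business hours after Friday 16:00 lands Monday 10:00
deadline = add_business_hours(datetime(2024, 1, 5, 16, 0), 2)
```

Twenty-odd lines of ordinary code, trivially unit-testable; expressing the same loop-with-calendar-lookups logic in a single SQL query is where the pain starts.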

Plus, obviously, standard SQL doesn't have a way to share and distribute any libraries, even if they were made. It's pre-C in terms of stuff like that.


And the core issue is, SQL is just strings. It only fails at runtime, not at compile time. There is no compile-time strong/static typing; you only find out it's broken when you run it. So that makes it really difficult to reuse. The other problem is, you have to specify table and column names in queries (duh, how else), but that means the entire persistence model is hard-coded and needs manual tweaking and so on. In an ideal universe, each field would be generic, each object generic, each collection of objects generic, and all functions would be generic. You'd end up with one huge flat structure of fields that are linked to each other to form more complex objects. But that's not efficient on current computers, nor do we have enough memory to store all information like that. And you might end up back at square one where everything is just strings.


> The other problem is, you have to specify table and column names in queries (duh, how else), but that means the entire persistence model is hard coded and need manual tweaking and so on. In an ideal universe, each field would be generic, each object generic and each collection of objects generic and all functions being generic.

Yeah, define the database in a database. And then define the database for the definition of the database in a database.

If you'd like to deliver a product that does something, you've got to stop adding abstraction layers at some point.


The thing is, for sure we could define some use cases where it's possible to take that general case and make it specific. That's why I gave a concrete example.

I refuse to believe that absolutely every data storage and access in this world is unique.

Everybody believes that theirs is unique, but that's a different story.

OS vendors also thought their hardware was special and magical a long time ago and yet POSIX was invented and suddenly they were all more or less commoditized.

I feel that we're in the teen years of data storage/data access technologies. And SQL is sort of like dental braces.


A long long time ago I sent in a patch to Hibernate to check the validity of all declared HQL queries.

There's no reason why it can't be checked, you just need to have your schema declaration.

Name clashes are everywhere. That's why you have namespaces/packages. In SQL they are called schemas. People don't really use them these days.

The tweaking is one of the great things. If you're hardcoding queries (not sql), you're actually defining the order of operations etc. A query analyzer will use statistics, and you can hint how queries have to be executed, depending on the shape of your data.

Your code is also just strings, until you compile it. Actually, these days until you run it. Your arguments would make more sense in the 90s where people would actually compile code.

Your ideal universe is actually "The Inner Platform Effect". Better let pgsql be the data platform ;-)


> Your code is also just strings, until you compile it. Actually, these days until you run it. Your arguments would make more sense in the 90s where people would actually compile code.

His arguments made sense in the 90s, and amusingly, post 2015.

Your argument made sense in the 00s and before 2015.

Swift is compiled (Apple platforms; Objective-C has always been compiled).

Rust is compiled (multiple platforms).

Typescript is compiled (so web/Javascript).

Kotlin is compiled (Android; Java has always been compiled).

C/C++ were always compiled (POSIX; Windows).

C# was always compiled (Windows; POSIX).

Almost every modern language is compiled and if it's not, it's getting a very solid static analysis step that for sure you want to have and run (PHP got types a while back, Python is getting them, Ruby is getting them).


> The other problem is, you have to specify table and column names in queries (duh, how else), but that means the entire persistence model is hard coded and need manual tweaking and so on.

This resonates with me. If I understand you correctly: with RDBMS/SQL, the structural decisions you make in the database to represent your data "poison" your application, making it difficult to change over time.


The data model is the code reuse.

You can model business hours and SLAs with relationships. Join on time and team.

  SELECT support_request.request_time + team_sla.response_sla AS respond_by_time
  FROM support_request
  JOIN team_sla
    ON support_request.assigned_team = team_sla.team_id
   AND DATE_PART('dow', support_request.request_time) = team_sla.day_of_week
   AND DATE_PART('hour', support_request.request_time) BETWEEN team_sla.start_hour AND team_sla.end_hour;


It's quite impressive how you've missed the core of my message.

Now bundle up your proposal, put it up on Github, license it as MIT, and publish it available on sqlpm.org (SQL Package Manager) so that I can re-use it.

What's that you say? I can't? There's no sqlpm.org? Not even a postgresqlpm.org?

Where's the SQL ecosystem?

Oh, wait, there isn't any because SQL is not really reusable. It's <<all>> one-off scripts, like back in the Dark Ages of software development.


I always install https://github.com/awslabs/amazon-redshift-utils/tree/master... on my Redshift clusters.

What you are describing as code reuse exists for databases, but they are called applications and generally utilize a general purpose programming language. It doesn’t make sense to have a SLA data model library because every use case is different. It’s a database, not procedural code.


You must not have seen enterprise apps.

There's a reason monstrosities like SAP exist; they're practically what you describe.

If stuff like SAP is the future of Line Of Business (LOB) apps, instead of having a rich Open Source ecosystem of data storage and data access libraries, then we've lost.

We're locked in the trunk.


SQL is not modular (the root cause of why it’s hard to reuse) because it’s declarative. If the underlying data models stay relatively stable, you can reuse code with certain assumptions. There’s a trade off here for sure.


No that's not why it's not modular. If that was the case, then other relational algebra-based query languages would suffer the same problem. They do not.


SQL can be somewhat reusable through views. Also, in your code you can make reusable functions that contain SQL queries. You won't get the best performance reusing queries this way, but you can put them into a transaction and build up more complex flows from reusable pieces.
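As a toy sketch of the view approach (using Python's sqlite3 module and made-up table names), the "active order" rule lives in one place and every caller reuses it:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders(id INTEGER, total INTEGER, cancelled INTEGER);
    INSERT INTO orders VALUES (1, 100, 0), (2, 50, 1), (3, 70, 0);

    -- the 'active order' rule is defined once and reused by name
    CREATE VIEW active_orders AS
        SELECT id, total FROM orders WHERE cancelled = 0;
""")

# any query can now build on the view instead of repeating the filter
revenue = con.execute("SELECT SUM(total) FROM active_orders").fetchone()[0]
```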


Postgis and many other libraries exist. They are widely used.


Is Postgis implemented in SQL?


That's how I remember it from back in the early days. There was some PL/pgSQL you had to execute, and you'd have some extensions. I don't think it's like that anymore.


No man, you are right. There is nothing arcane here: SQL is just an ugly and unintuitive language. I think SQL works the opposite way normal brains work, so unless you are doing a simple query, you have to be able to parse and simulate what SQL will do in your head. After doing functions/MVs with CTEs that are 100+ lines per query daily, you kinda get used to it, but also not. I've concluded that although SQL is awesome, it is also full of warts.

What helps me is to code all of it in lower case and use something like DataGrip with a good theme. That way you get something that is readable, colour coded and has autocomplete (very good with joins). It's the only way I've managed to keep my sanity as my experience with it grew. Bad data models don't really impact it that much; SQL is still SQL even with a clean model.

I've built mini database engines in the past because of my frustrations with SQL, but I still use and prefer an actual RDBMS as opposed to trying to reinvent the wheel. There are so many features we take for granted it's not even funny. Try building your own production-ready storage system and you'll quickly appreciate how deep the rabbit hole really goes.


I have used those features that you say "Would be nice to have." I didn't realize they weren't ubiquitous. I agree they are excellent.


One very very simple fix is to mention the table first:

FROM this SELECT whatever

This already allows autocomplete for the attributes to work, and has an easier mental model - you think about the tables, then you think about their attributes. It also matches relational algebra better, where you'd do the projection (picking the attributes you want) at the end.

But anyway, simple cases being simple doesn't mean the language isn't horrible for more complex ones.

One thing I always complain about is join clauses making it easy to do the wrong thing (NATURAL JOIN) and annoying to do the correct thing (joining on the defined foreign keys).


People have long joked about "yoda conditionals" ("if 5 == x", for example, instead of "if x == 5"), and flipping it in that order is the same thing: SELECT first is the same as "Get the fork from the drawer", where the flipped order actually sounds like something Yoda would say.


It is more like starting the recipe with "take sauce from point 6 and pour it over meat from point 10".

Very few people have problems with trivial queries like "SELECT x FROM y", but when a query contains multiple joins or inner queries, then having SELECT at the beginning is visibly problematic.


What I did to get better at 'big' queries was start by writing out things like inner queries as CTEs, Table Parameters, or (if in oracle, lol) refCursors.

That's (sometimes!) less performant than the one big query, but you can then refactor into a single query if you so choose.

Yeah, it's slow going at first, but you get pretty good at SQL in the process.
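A toy illustration of that CTE-first style (Python's sqlite3, invented table names): each named step reads on its own and can later be folded back into one query if you want:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

rows = con.execute("""
    WITH doubled AS (            -- step 1: a named intermediate result
        SELECT x * 2 AS d FROM t
    ),
    big AS (                     -- step 2: builds on step 1 by name
        SELECT d FROM doubled WHERE d > 2
    )
    SELECT d FROM big ORDER BY d
""").fetchall()
```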


a[5] means "From array a, select element 5", and nobody has a problem with that. If anything, the English habit of describing things in little-endian fashion ("Take a fork from the drawer to the left of the sink in the large kitchen") adds a lot of cognitive overhead because you have to wait for the end to figure out your first step, and then reverse the order. A much more practical way of writing that would be "In the large kitchen, to the left of the sink, there's a drawer; take the fork from the drawer."


It's kind of nice to see which columns you're selecting, as they're always at the beginning.


Right, that's a sort of counterpoint. It is nice to know that you're going to be getting a fork so you have context for the kitchen -> sink -> drawer instructions. Similarly, I expect in many cases, when you have a complex query, the fact that you know the result is going to be "userid, sum(score)", it makes the subsequent query easier to understand (as you know where it's got to end up). "Can you get me a fork? In the kitchen, next to the sink, in the drawer" might be even friendlier.


The question is whether the syntax corresponds to the logical order of operations.

In the query "SELECT foo + 2 FROM bar WHERE baz ORDER BY foo", the logical order is actually "FROM bar WHERE baz SELECT foo + 2 ORDER BY foo", because each clause depends on the previous one.

The SQL syntax is neither the logical order nor the direct reverse - it is just a random jumble.


SQL syntax is designed for reading, which is the majority of coding time. What gets selected is generally the thing most cared about so it goes first, where stuff comes from is the next most important so you get FROM and JOIN. Filtering and aggregation are next, because they are often hinted at by the select list anyway, then sorting is the least important so it comes at the end.


> What gets selected is generally the thing most cared about so it goes first

I disagree. When you're reading a query you often already know what you were trying to select, you just have a problem with a clause somewhere. This suggests that the select should go last, and this makes perfect sense as Haskell's comprehensions and C#'s LINQ are both way easier to work with than SQL.


You know what you want to select at the time of writing, but 3 months later when someone else needs to add column foo, the first two thoughts are: is it already in the select, and is it available to the select without more joins? I've rarely ever touched filtering or aggregating lines in an existing query unless requirements completely changed, which is less common than wanting more values added to an existing query.


I disagree. I've used C#'s LINQ extensively, where the select is at the end and adding columns is trivial. It's a complete non-issue.

On the other hand having the select at the beginning has all kinds of problems for autocomplete, and syntactically obscures where you're selecting from and the clauses. I recommend you try LINQPad if you want experience with how much better this works:

https://www.linqpad.net/

> I've rarely ever touched filtering or aggregating lines in an exiting query unless requirements completely changed

Requirements change or bugs are discovered in the query. This is far more common than you imply.


As long as you have each operation in the expected order, SQL syntax might make sense. The problem comes when you need something outside this template, e.g. a projection after an aggregation. The straightforward syntax becomes unnecessarily complex and hard to read if you venture just a bit outside of the default template.

SQL syntax is like if all arithmetic expressions had to be addition followed by subtraction followed by multiplication. And if you didn't need to add anything you would just have to add 0.


It would perhaps be a good thing if one could write SQL clauses in their logical ordering; as [1] explains:

* The FROM clause: First, all data sources are defined and joined

* The WHERE clause: Then, data is filtered as early as possible

* The CONNECT BY clause: Then, data is traversed iteratively or recursively, to produce new tuples

* The GROUP BY clause: Then, data is reduced to groups, possibly producing new tuples if grouping functions like ROLLUP(), CUBE(), GROUPING SETS() are used

* The HAVING clause: Then, data is filtered again

* The SELECT clause: Only now, the projection is evaluated. In case of a SELECT DISTINCT statement, data is further reduced to remove duplicates

* The UNION clause: Optionally, the above is repeated for several UNION-connected subqueries. Unless this is a UNION ALL clause, data is further reduced to remove duplicates

* The ORDER BY clause: Now, all remaining tuples are ordered

* The LIMIT clause: Then, a paginating view is created for the ordered tuples

* The FOR clause: Transformation to XML or JSON

* The FOR UPDATE clause: Finally, pessimistic locking is applied

[1] https://www.jooq.org/doc/latest/manual/sql-building/sql-stat...
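That ordering has observable consequences. A small demonstration (Python's sqlite3, toy data) of WHERE filtering rows before grouping while HAVING filters the groups after aggregation:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales(region TEXT, amount INTEGER);
    INSERT INTO sales VALUES ('north', 10), ('north', 5), ('south', 20), ('south', 1);
""")

rows = con.execute("""
    SELECT region, SUM(amount) AS total
    FROM sales
    WHERE amount > 1          -- evaluated before grouping: drops ('south', 1)
    GROUP BY region
    HAVING SUM(amount) >= 12  -- evaluated after aggregation, on whole groups
    ORDER BY total DESC       -- evaluated last, so the SELECT alias is visible
""").fetchall()
```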


I recently had to train my new junior to level up his SQL skills, and I found this resource pretty helpful for making sense of this mess: https://learnsql.com/blog/sql-order-of-operations/

However, I do also see the point of "SELECT first": just like a header, you can infer the output data structure of a sub-expression without necessarily diving into the meat of it. It requires some brain training, but once you get there it's oftentimes easier to navigate 100+ line scripts by jumping from header to header (usually organized as CTEs to make the code cleaner).


> and annoying to do the correct thing (joining on the defined foreign keys).

Maybe you've seen this thread already (a proposal with some alternatives for improving the situation for joining on foreign key columns), but in case not, here is the link:

https://news.ycombinator.com/item?id=29739147


That proposal unfortunately requires you to name the foreign key constraint, which is quite unergonomic.

E.g. instead of the wrong `FROM a NATURAL JOIN b` you would use the correct `FROM a JOIN FOREIGN a.foo_fkey`, which not only needs that second name but now also loses the immediate naming of b. So e.g. autocomplete would have to look up the foreign key constraint to find out the second table. And it's still longer and harder to use than the natural join!

Most databases have one foreign key from a given table to another given table, and that simple case should be made easy to use.


There are multiple alternative syntaxes suggested in the proposal, one of them doesn’t use foreign key names, but instead the foreign key column names, similar to USING (), but without the ambiguity:

https://gist.github.com/joelonsql/15b50b65ec343dce94db6249cf...


This is my biggest, high-level thing too; in the syntax we use SELECT to mean a projection and FROM the selection.


> This already allows autocomplete for the attributes to work.

So does the other way around in several SQL engines.

If you write something along the lines of select x.ID, y.NAME from bla.bla as x join hum.hum as y on x.ID = y.FK in MSSQL, you'll get autocomplete on x. and y..

You’re right that it’s more intuitive to write the from first of course.


You can't possibly get autocompletion on x. and y. for that first select if you didn't write that from clause yet (or at least the autocompletion you'd get would not be tailored to those tables).


If you add the table name you could, and that's what "x." here is.

So yes, you can autocomplete

SELECT employee.Na<TAB>

to "employee.Name", but it requires you to type the table name "employee." first.

But with the from-first style you can autocomplete even bare column names - you know you have "name" (possibly even "employee.name" and "supervisor.name") and "employeeID".


Except that the table name isn't necessarily going to be that x. If you are matching employees with their managers then you have two employees tables in that expression so you have to work with aliases. At which point autocompletion breaks down.


You're familiar with CTEs, right?


Yes, you can work around some inadequacies of SQL by bolting more features on top.

That doesn't mean that the basic design of SQL isn't awkward.


SQL syntax assumes queries have operations in a certain order - join, filter, group, filter again, project. What if you want to join after a grouping? What if you want to filter after a project? What if you want to group over a projection? You will have to use the clunky subquery syntax or WITH-clauses.

Compare to LINQ-syntax in C#, where you can just chain the operations however you want.

Another issue is that you can't reuse expressions. If you have an expression in a projection, you have to repeat the same expression in filters and grouping. This leads to error-prone copy-pasting of expressions or more convoluted syntax using subqueries.
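For contrast, the same group-then-filter pipeline in a general-purpose language (a Python sketch with invented data): the operations chain in whatever order you want, and the intermediate name `total` is bound once rather than repeating the expression:

```python
from collections import defaultdict

rows = [("a", 1), ("a", 2), ("b", 5)]

# group, then aggregate
totals = defaultdict(int)
for key, val in rows:
    totals[key] += val

# filter *after* the aggregation, reusing `total` by name
result = [(key, total) for key, total in totals.items() if total > 3]
```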


And for expression reuse, they're so close with aliases.

lots-of-bla-bla-bla-bla as short-name

But later on you can only refer to short-name from very specific places, as you mention. So 80% of the time you're forced to go lots-of-bla-bla-bla-bla over and over and over again.

Snatching defeat from the jaws of victory.


I really, really miss LINQ, having moved from C# to Ruby, then Node and Go. LINQ is as close to absolute perfection as I’ve seen in a concept.


That's the first / starter use case though, SQL can get a bit crazy once you get into enterprise spaces - stored procedures, funky datatypes, auditing & history features like temporal tables, hundreds, thousands of tables and a similar amount of columns, naming & organizing things, etc.

Thankfully, most people will never have to deal with any of that, myself included. The biggest databases I've had to deal with were very relatable - one about books & authors, another about football and historic results. The other biggest database is one I'm working with and building right now, it's a DB for an installation of an application managing tons of configurations, a lot of domain specific terms. The existing database is not normalized of course, and uses a column with semicolon-separated-values as an alternative to foreign keys. Sigh. Current challenge is to implement history, so that a user can revert to previous versions. I'll probably end up implementing temporal tables in sqlite.


I built and operated an employee database in accordance with the Danish OIO model for years; I even sat on a committee to define some of the models within the OIO model set for the public service.

These days I work with millions of entries from solar production.

I’ve never had to use complex SQL more than one time.

You use tools like SSIS or APIs on top of it to get and store the data.

I know you “can” create a lot of stored procedures and views, but as I’ve already said, you really, really shouldn’t do that exactly because it’s so terrible to work with for so many people.

Honestly though, SQL with an OData API on top of it is one of my favorite ways of storing and retrieving data. If you have to actually transform the data, you do it with SSIS or similar tools, which are much more efficient top-level layers that are also testable and reusable.

But to each their own I guess. The join logic never bothered me much, and that seems to be an issue for a lot of people here.


True, keep the database as dumb as possible IMO. I have converted 200 line SQL queries into 30 lines of SQL plus 20 lines of code for OLTP. OLAP is a different beast though and SQL can get nasty.


1) it is committee driven therefore changes come slowly

2) adding new functionality requires addition of new keywords

3) you cannot define new keywords from SQL

4) despite standardization, each implementation differs

This article summarizes it pretty well; while I do not agree with everything in it, it points out the flaws well.

https://www.scattered-thoughts.net/writing/against-sql/


> 2) adding new functionality requires addition of new keywords

There are reserved keywords and unreserved keywords. The latter can be used as table/column/function/etc names, and don’t cause any trouble.

New syntax can be invented by reusing existing reserved keywords, and introducing new unreserved keywords in places where they can’t be misinterpreted.

Not saying the problem you describe isn’t a problem, just that it’s slightly more complicated and not as bad as one might think when reading your comment.


A problem is that SQL does not cleanly map to what the DBMS does to execute it. For simple queries, this is exactly the point. For more complex queries, the SELECT-FROM-WHERE straitjacket feels quite restrictive, though.

The abstraction really hurts when you have to optimize slow queries and convince the optimizer to do it the right way. Entering the query plan directly (essentially, the annotated AST of a relational algebra expression) would often be helpful.

Also, SQL is ultimately text. This makes it very cumbersome to build tools that dynamically assemble queries, like ORMs or customized search dialogs, and to insert parameters. Parser performance impacts overall DBMS performance quite a bit, and it would be useful to reduce overhead there too.
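A small illustration of the "queries are strings" point (Python's sqlite3): the only safe way to insert parameters is placeholders, because the query ultimately reaches the engine as one string to be parsed:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users(name TEXT)")
con.execute("INSERT INTO users VALUES ('alice')")

# The query is assembled as text; the `?` placeholder keeps the data out of
# the string, so the parser sees the same query shape for every value.
name = "alice"
row = con.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchone()
```

Concatenating `name` into the string instead would change the parsed query per value (and invite injection), which is exactly the tooling burden the text representation imposes.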


Yep, every abstraction introduced has added cost, in some cases. The fastest way of executing a program is to build special hardware with the perfect state changes of a set of transistors — I am not joking.


"SELECT whatever FROM thisplace" is trivial to improve and could be for example thisplace[whatever].

With joins it gets more complex but still SQL could allow using foreign keys and having SELECT at the end. You could get at least something like:

"FROM Invoice i JOIN i.customer c SELECT c.name, i.number"

instead of

"SELECT c.name, i.number

FROM Invoice i

JOIN i.customer c ON i.CustomerId = c.CustomerId"


Isn't a "WHERE" clause more intuitive than that?

SELECT c.name, i.number

FROM Invoice i, Customer c

WHERE i.CustomerId = c.CustomerId


I prefer JOIN clauses because it makes it easier to reason about the underlying implementation (hash/sort-merge joins) than thinking about cartesian products. It's also much harder to screw up the ON predicate and actually cause a cross join.


> I mean, how would you write a SELECT that was better than: SELECT whatever FROM thisplace?

Haskell's comprehensions. C#'s LINQ. F#'s query providers.

SQL is really not that good actually. It had (and still has) all sorts of limitations that eventually led to new syntax, and has many quirks that require all sorts of workarounds.


There’s some good discussion of the deficiencies of SQL here: https://opensource.googleblog.com/2021/04/logica-organizing-...

> Good programming is about creating small, understandable, reusable pieces of logic that can be tested, given names, and organized into packages which can later be used to construct more useful pieces of logic. SQL resists this workflow. Although you can encapsulate certain repeated computations into views and functions, the syntax and support for these can vary among implementations, the notions of packages and imports are generally nonexistent, and higher-level constructions (e.g. passing a function to a function) are impossible.


Try a 3-way many-to-many join, or recursive CTEs for hierarchical data. It always looks clean with simple examples. Data these days is much more nested and inter-related.


I love SQL, but I also love being a contrarian.

thisplace.whatever

There you go.


This looks a lot worse if thisplace is a long subquery itself.


I'd flip the FROM and SELECT, just like in UPDATE and DELETE commands.


> for better or worse

For better. The only other success story just like it is JavaScript, which remains the one and only native browser programming language.

They work, they are good enough, everybody knows how to use them, and gaining skills in those languages is valuable and timeless. No inane new-language-du-jour things like golang that appear every few years and (fail at) reinventing the wheel.

SQL remains beautifully boring & useful and is as close to program language perfection as we will ever get.


> The only other success story just like it is JavaScript, which remains the one and only native browser programming language.

Javascript is a great example of exactly why this sort of thing is awful - everyone has to use libraries like React or Vue just to make it usable, it's filled with weird backwards-compatibility junk, nobody can replace it because it is so entrenched, and attempts to make it better (TypeScript) end up having to transpile back to JavaScript (rather than being able to stand on their own).

The sooner we move to a world of web-assembly the better (but even web assembly frameworks at the moment end up having a substantial mix of javascript). We shouldn't have languages that are standard just because they are standard.


> We shouldn't have languages that are standard just because they are standard.

This is extraordinarily naive. Standards are a far more important invention than Javascript, or even the transistor. A standard existing just to have a standard is far better than every browser implementing its own scripting language. We had that once; in fact my personal website still has the text "This website is not compatible with MS Internet Explorer. Please upgrade to Chrome, Firefox, or Opera for optimal experience." even though that hasn't been true for over a decade.


> This is extraordinarily naive.

I obviously disagree, and think that's a pretty dismissive comment.

> A standard existing just to have a standard is far better than every browser implementing its own scripting language.

We shouldn't be stuck with a language just because one person invented a language in 10 days over 26 years ago for the internet of the time, and now we are stuck with that language forever on a VERY different internet. I would like to think that in 10 years' time we can move to a web where people have a choice of language.

And that doesn't mean what you imply, which is that every browser has its own scripting language, because it's possible to architect an environment that allows for multiple programming languages in the browser (see bytecode, the JVM, the CLI, WebAssembly).

Is it really that naive to think that's a better way forwards?


I agree, better a mediocre standard than none. Out of interest: for JavaScript there are many languages that transpile to it but support quite different approaches, like functional programming, strong typing, etc.

Is there something similar for SQL, where you allow an alternative syntax and maybe a different programming approach, and then use SQL as the connection to the DB?


Reading through these posts, I had exactly the same thought. Why not have a language that takes the problems with SQL and abstracts them into something that is more easily organized into functions, modules, and packages, corrects some of the semantic problems, and integrates well with source control, testing, deployment pipelines, etc.? DBT is solving this problem well for data warehouses, but I do not know if a similar tool exists for application databases.



I'll take the clunkiness of SQL over the clunkiness of Mongo's json-based query language any day.




